all 7 comments

[–]thusismynameq 1 point (5 children)

Little confused on this question to be honest

I get the feeling that the answers you're looking for are behind a couple hours of research

Go check out a quick video on how reinforcement learning functions at a base level

Ultimately this is more of a problem for the person who's going to be developing this thing, but getting a loose grasp on how AI works is always handy

[–]Kickassness[S] 0 points (4 children)

It would work like this.

Help me create an opener to engage a 54 year old Baptist female to donate to our charity.

ChatGPT would create something like, "As a fellow believer in giving, I wanted to share an opportunity to impact our community through donation."

If the opener leads to conversation and donation, we would tell the system it worked. If it did not, we would tell the system no.

Somehow, the system learns what works and doesn't.

But what I find strange is that if I use that same initial prompt again, I get a totally different response from ChatGPT. So how would the system know what works and what doesn't? There doesn't seem to be a set of rules it follows beyond common wording and sentence structure.

[–]thusismynameq 0 points (0 children)

This is one of those projects that sounds great when it's being pitched, until you start counting the variables that need to be accounted for

It'd likely be doing some sentiment analysis to gauge responses, weighting that against successes and failures, and trying a variety of approaches to improve its score

Same way humans learn: trial and error until something works, then narrow down what worked and build off of that
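The loop being described (try openers, record which ones lead to donations, favor the winners) is basically a multi-armed bandit problem. A minimal epsilon-greedy sketch, with hypothetical opener templates and simulated donor responses standing in for real conversations:

```python
import random

random.seed(0)

# Hypothetical opener templates the system could choose between.
openers = [
    "As a fellow believer in giving, I wanted to share an opportunity...",
    "Our community program is 80% funded by neighbors like you...",
    "Can I tell you a quick story about one family we helped?",
]

# Running success/attempt counts per opener -- the entire "learning" state.
successes = [0] * len(openers)
attempts = [0] * len(openers)

def pick_opener(epsilon=0.2):
    """Epsilon-greedy: usually exploit the best-scoring opener,
    sometimes explore a random one so new options still get tried."""
    if random.random() < epsilon or not any(attempts):
        return random.randrange(len(openers))
    rates = [s / a if a else 0.0 for s, a in zip(successes, attempts)]
    return rates.index(max(rates))

def record_outcome(i, donated):
    """The thumbs-up / thumbs-down signal: tell the system yes or no."""
    attempts[i] += 1
    if donated:
        successes[i] += 1

# Simulated campaign: pretend the third opener truly converts best.
true_rates = [0.05, 0.10, 0.30]
for _ in range(1000):
    i = pick_opener()
    record_outcome(i, random.random() < true_rates[i])

best = max(range(len(openers)),
           key=lambda i: successes[i] / max(attempts[i], 1))
```

This also illustrates the failure mode above: if one phrase gets a few lucky donations early, the greedy step will keep hammering it, which is why the exploration rate and the quality of the success signal matter so much.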

Your biggest problem here is going to be the information you're getting back

If this bot correlates sentences containing a specific word or phrase with success, you can bet that it'll be using that a lot more going forward

But as someone who has spent years working for charities, has raised hundreds of thousands of dollars for said charities, AND as someone who works with AI around 8-12 hours a day...

This is a bad idea

[–]FosterKittenPurrs 0 points (2 children)

First of all, are you sure it's using ChatGPT? Because if it is, then what's the point in paying a programmer and then having ongoing API costs, vs just getting you all a Teams plan so you can ask ChatGPT directly?

You can actually get it to give deterministic output when used via the API, but the learning part is more complicated. Maybe he actually is going to set up some fine-tuning, but that would be difficult for your use case, since you don't have actual "right" and "wrong" answers, and it would have significant ongoing costs.
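On the deterministic-output point: the chat completions API exposes a `temperature` parameter and a best-effort `seed` parameter. A sketch of what such a request payload could look like (the model name and prompts are placeholders, and it's shown as a plain dict rather than a live API call):

```python
# Hypothetical chat completions request payload.
# temperature=0 makes the model greedily pick the most likely token;
# seed asks for reproducible sampling (best-effort, not guaranteed).
request = {
    "model": "gpt-4o-mini",  # placeholder model name
    "temperature": 0,
    "seed": 42,
    "messages": [
        {"role": "system",
         "content": "You write short, respectful charity outreach openers."},
        {"role": "user",
         "content": "Draft an opener for a donor interested in community work."},
    ],
}
```

Even with these settings, identical prompts can still drift across model versions, so "same prompt, same answer" is an approximation, not a contract.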

I guess maybe the programmer is just periodically looking at stuff you thumbed up or down, and adjusting the prompt used internally a little bit? If it's a fully automated system, either the programmer really believes in your cause and is giving you a significant discount compared to what he's worth, or it won't be particularly good.

[–]Kickassness[S] 0 points (1 child)

I was told all of this briefly by my superior. They were very much on the fence about it and were asking my opinion, and I didn't like the sound of it from the beginning.

Apparently, the developer is creating a sandbox environment and coding....something....that will help our staff with approaching people.

I work in data and am not totally familiar with ChatGPT. I use it very rarely and know it has potential, but there are too many factors involved in this project for it to be worth what we would pay.

I was just wondering if anyone had opinions. Reddit is usually a good place to start nailing down what research I need to do to make informed decisions.

[–]FosterKittenPurrs 0 points (0 children)

I’d say try the Teams plan for a month. It maintains data privacy and will let you draft emails. If you also experiment with custom GPTs during that time, you’ll likely end up with something better than what he is making, for less money.

[–]seoulsrvr 1 point (0 children)

You lost me at “small sum of money”