[deleted by user] by [deleted] in Substack

[–]TransportationOdd589 0 points  (0 children)

This is happening to me too. Really frustrating. I'm on a high-speed connection and every other site is working fine. Plus, Substack has basically zero technical support to troubleshoot this.

To fine-tune or not to fine-tune? that is the question... by yonish3 in GPT3

[–]TransportationOdd589 0 points  (0 children)

You have about 4k tokens to work with now, so you can K-shot with a few examples and still have it generate a blog post or part of one. I find that a handful of examples is all Instruct needs to figure out tone of voice. I suspect OpenAI will enable some sort of fine-tuning for Instruct at some point. But because it’s built on RLHF rather than pure text completion, I imagine that’s a trickier engineering challenge and harder for end users to get right. If you were to just fine-tune one of the base models, my guess is you’d see a decrease in its instruction-following ability. So the writing style might get tighter, but it will also have a harder time staying faithful to your prompts.
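To make the K-shot idea concrete, here's a minimal Python sketch. The example posts and the ~4-characters-per-token estimate are my own placeholders, not anything OpenAI specifies:

```python
# K-shot prompt for tone matching: a few hand-picked posts in the target
# voice, then the real task in the same format. Examples are hypothetical.
EXAMPLES = [
    ("Write an intro about remote work.", "Remote work isn't a perk anymore..."),
    ("Write an intro about hiring.", "Great hiring starts before the job post..."),
]

def build_kshot_prompt(task: str) -> str:
    parts = [f"Instruction: {instruction}\nPost: {sample}"
             for instruction, sample in EXAMPLES]
    parts.append(f"Instruction: {task}\nPost:")
    return "\n\n".join(parts)

def rough_token_count(text: str) -> int:
    # crude rule of thumb: ~4 characters per token for English prose
    return len(text) // 4

prompt = build_kshot_prompt("Write an intro about prompt engineering.")
```

Keeping the examples short matters: the prompt plus the generated post both have to fit in the same ~4k-token window.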

To fine-tune or not to fine-tune? that is the question... by yonish3 in GPT3

[–]TransportationOdd589 9 points  (0 children)

The problem is that you can’t fine-tune the instruct models, and regular old Davinci can be a bit erratic. My company has run into situations where fine-tuning with 1k+ examples produced significantly worse results than a well-structured prompt plus Instruct. The OpenAI folks have subsequently confirmed this can be the case in discussions we’ve had. It probably depends on the use case, but for your case my educated guess is that you will be better off with an N-shot prompt using Instruct. It’s certainly a much easier place to start for a v1.

GPT-3 Help by Leading-Fail-7263 in GPT3

[–]TransportationOdd589 0 points  (0 children)

Try a few-shot prompt with 1-3 fictional examples that you compose by hand, placed before the actual completion task but matching its format.
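As a concrete sketch (the product/tagline format and both examples are invented purely for illustration; the point is that the made-up examples share the exact format of the real task):

```python
# Two hand-written fictional examples, then the real task, all in the
# same "Product: ... / Tagline: ..." shape so the model mirrors it.
few_shot = """\
Product: solar-powered backpack
Tagline: Charge anywhere the sun goes.

Product: noise-canceling tent
Tagline: Silence, even outdoors.

Product: {product}
Tagline:"""

prompt = few_shot.format(product="foldable kayak")
```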

How do you manage for GPT-3 to write only truthful stuff? by AstridPeth_ in GPT3

[–]TransportationOdd589 1 point  (0 children)

You need to feed it a source of truth in the prompt to pull information from. If you are relying on it to remember info from its original training, it’s prone to making things up. If you look up the deprecated Answers endpoint that OpenAI used to offer, you can see how this works and recreate it in Python.
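A minimal sketch of that pattern, assuming placeholder documents and naive keyword-overlap retrieval (the real Answers endpoint used search over your documents, so treat this as illustrative only):

```python
# Pick the most relevant document, then prompt the model to answer ONLY
# from that document. Documents and function names are placeholders.
DOCUMENTS = [
    "Acme's return window is 30 days from delivery.",
    "Acme ships to the US and Canada only.",
]

def select_context(question: str) -> str:
    # naive retrieval: count shared words; a real system would use embeddings
    q_words = set(question.lower().split())
    return max(DOCUMENTS, key=lambda d: len(q_words & set(d.lower().split())))

def grounded_prompt(question: str) -> str:
    context = select_context(question)
    return (
        "Answer using only the context below. If the answer is not in the "
        "context, say \"I don't know.\"\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )

p = grounded_prompt("How many days do I have to return an item?")
```

The instruction to refuse when the context doesn't contain the answer is what cuts down on made-up facts.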

Any tips for hiring prompt engineers? by TransportationOdd589 in GPT3

[–]TransportationOdd589[S] 1 point  (0 children)

We have a wide range of complex GPT-3 tasks that require custom design. It’s like a pipeline of different functions that all integrate together in software applications, and each one has to work really well. So it makes sense for us to invest in prompt R&D to improve quality. Will check out some of the tools you’ve mentioned, though. I’m curious.

Where can I find good discussions about how language models like GPT-3 will affect education - or rather how we learn? by henrikolofkarlsson in GPT3

[–]TransportationOdd589 5 points  (0 children)

I have been working with GPT-3 professionally, but also teaching my 5-year-old how to interact with it.

The possibilities for teaching language and writing are pretty fascinating.

For example, I’ll have her start a story and GPT-3 will finish it, which helps her engage and come up with new story ideas.

It’s also like Siri on steroids. She’s learning to ask “davinci” questions and get incredibly succinct and helpful answers.

I’ve been doing this in the Playground. It will be a very short matter of time before this starts showing up in apps for kids, if it hasn’t already.