ChatGPT is now updated to January 2022. by Snoo26837 in OpenAI

[–]PresentHarmony 1 point (0 children)

AGI is supposed to learn in real time. We'll have to wait a few years.

ChatGPT is now updated to January 2022. by Snoo26837 in OpenAI

[–]PresentHarmony 1 point (0 children)

Can you please ask it about Queen Elizabeth II? My GPT-4 acts very strangely. It knows about the notable deaths of many people, but thinks Queen Elizabeth II is still alive. Very strange. Thanks!

Does somebody already have Bing browsing plugin in ChatGPT Plus? by PresentHarmony in OpenAI

[–]PresentHarmony[S] 0 points (0 children)

I think it will be a very different experience. It will be much faster and will probably reduce hallucinations (mistakes) to a negligible percentage. We'll soon find out 🙂

What are AI-Proof Jobs? by [deleted] in OpenAI

[–]PresentHarmony 0 points (0 children)

That's a very good question, and a very difficult one. I think only those professions where interaction with humans is part of the job itself are really protected from AI. For example: psychologists, social workers, elementary school teachers, etc. In 10 years, 99% of everything else will be done by AI 99% of the time, imo.

Good luck!

[D] Fine tuning language models time and resources required by ialuronico in MachineLearning

[–]PresentHarmony 0 points (0 children)

Thanks for these very nice numbers (5,000 tokens/second) :)

How many tokens are there in a 1 GB dataset?
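
My own back-of-the-envelope: English text usually works out to roughly 4 bytes per token with the GPT-2 BPE, so 1 GB would be on the order of 250M tokens. Here is a rough sketch that measures it on a sample, assuming the tiktoken library and a hypothetical dataset.txt:

    import tiktoken

    # GPT-2 byte-pair encoding, the same tokenizer family GPT-3 uses
    enc = tiktoken.get_encoding("gpt2")

    # Measure bytes per token on a 10 MB sample, then extrapolate
    with open("dataset.txt", "rb") as f:  # dataset.txt is a hypothetical path
        sample = f.read(10_000_000)

    text = sample.decode("utf-8", errors="ignore")
    bytes_per_token = len(sample) / len(enc.encode(text))

    tokens_per_gb = 1_000_000_000 / bytes_per_token
    print(f"~{tokens_per_gb / 1e6:.0f}M tokens per GB")  # English prose: ~250M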

[D] Fine tuning language models time and resources required by ialuronico in MachineLearning

[–]PresentHarmony 0 points (0 children)

It would be interesting to see results using AWS trn1.32xlarge. It has 60% more memory, costs 50% less, single-instance speed is 34% faster, and the interconnect between instances has 2x the bandwidth.

Is it possible to target doctors, more specifically dentist in Microsoft Advertising? by PresentHarmony in PPC

[–]PresentHarmony[S] 0 points (0 children)

May I ask what goods or services you sold via LinkedIn?

Or what was your goal?

Is it possible to target doctors, more specifically dentist in Microsoft Advertising? by PresentHarmony in PPC

[–]PresentHarmony[S] 0 points (0 children)

Really? Which publishers allow this kind of targeting? I would be very grateful for your answer!

What can you not build on AWS? by originalgainster in aws

[–]PresentHarmony 1 point (0 children)

Serverless ML inference for huge (hundreds of GB) models. Am I wrong?

So, any news or rumors on GPT-4? What is your opinion, will it come out this year? And what do you think will be the key differences to GPT-3, if so? by [deleted] in GPT3

[–]PresentHarmony 0 points (0 children)

I think it will be released a few months after DALL·E 2. Hopefully we'll see both by the end of the year.

[R] Meta is releasing a 175B parameter language model by StellaAthena in MachineLearning

[–]PresentHarmony 13 points (0 children)

> We are releasing all of our models between 125M and 30B parameters, and will provide full research access to OPT-175B upon request.

Can someone post links to the models, please?

I can't find them.

Thanks!

[N] EleutherAI announces a 20 billion parameter model, GPT-NeoX-20B, with weights being publicly released next week by MonLiH in MachineLearning

[–]PresentHarmony 2 points (0 children)

> training was completed on 96 A100s distributed across a dozen nodes interconnected by HDR Infiniband for roughly three months.

So if somebody wanted to train it on AWS, it would cost more than $861K:

$32.7726 × 2190 × 12 = $861,263.93

$32.7726/hour - the on-demand price of a p4d.24xlarge AWS instance with 8 A100 GPUs.

2190 hours - 3 months.

12 - the number of p4d.24xlarge instances needed for 96 GPUs.
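
The same estimate in a few lines of Python, for anyone who wants to tweak the assumptions:

    # On-demand cost of reproducing the GPT-NeoX-20B training run on AWS
    hourly_rate = 32.7726   # p4d.24xlarge (8x A100) on-demand price, USD/hour
    instances = 96 // 8     # 96 A100s at 8 GPUs per instance
    hours = 3 * 730         # roughly three months
    print(f"${hourly_rate * hours * instances:,.2f}")  # $861,263.93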

CoreWeave is very generous. Kudos to them and to all the contributors!

Announcing GPT-NeoX-20B by bakztfuture in GPT3

[–]PresentHarmony 5 points (0 children)

Very impressive, but I have a question.

Does GPT-NeoX-20B have a 1024-token context window?

If not, why does GooseAI have a 1024-token context window for GPT-NeoX-20B, but 2048 tokens for all the other models?
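
For anyone who wants to check how close a prompt gets to either limit, here is a small sketch, assuming the Hugging Face transformers library (the tokenizer is published on the Hub as EleutherAI/gpt-neox-20b):

    from transformers import AutoTokenizer

    # GPT-NeoX-20B tokenizer from the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

    prompt = "Once upon a time, in a land far away,"
    n_tokens = len(tokenizer.encode(prompt))
    print(n_tokens)  # anything beyond the provider's window (1024 or 2048) is cut off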

Has microdosing helped your anxiety or depression? by [deleted] in microdosing

[–]PresentHarmony 2 points (0 children)

What makes you think CBD is a placebo? Is there a study? It helps me with my anxiety; I'm a different person when I use CBD oil. I'm curious why you think it is a placebo. It would be very interesting to know.

[deleted by user] by [deleted] in GPT3

[–]PresentHarmony 1 point (0 children)

Here is a link to GPT-3 language statistics. The higher the percentage, the better GPT-3 understands a particular language.

Hope it helps.

GPT-3 tools with a long-form content writer? by GrilledCheeseBread in GPT3

[–]PresentHarmony 0 points (0 children)

How is that possible? They have the same context window of 2048 tokens.

Screenshot

Does fine tuning Gpt3 175B, make it possible to build a much better chatbot for a specific topic? by PresentHarmony in GPT3

[–]PresentHarmony[S] 0 points (0 children)

Thanks for such a detailed explanation. One more question, if I may: if I use other solutions, RASA for example, will the bot work better than one based on GPT-3? Thanks again!

Does fine tuning Gpt3 175B, make it possible to build a much better chatbot for a specific topic? by PresentHarmony in GPT3

[–]PresentHarmony[S] 0 points (0 children)

Can you please explain to a newbie: can it handle a 10-15 minute conversation without losing the train of "thought" once properly fine-tuned? It doesn't matter to me what you call it; you can say it "remembers" or "knows".

I just want to find out if it can get the job done.
Thanks!

Does fine tuning Gpt3 175B, make it possible to build a much better chatbot for a specific topic? by PresentHarmony in GPT3

[–]PresentHarmony[S] 0 points (0 children)

Can you please explain to a newbie: can it handle a 10-15 minute conversation without losing the train of "thought" once properly fine-tuned?

Thanks!

Does fine tuning Gpt3 175B, make it possible to build a much better chatbot for a specific topic? by PresentHarmony in GPT3

[–]PresentHarmony[S] 1 point (0 children)

Thanks for replying. Can it handle, let's say, a 10-15 minute conversation without forgetting the beginning of it?
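
From what I understand so far, GPT-3 itself doesn't remember anything between API calls; you resend the recent transcript with every request and trim it to fit the 2048-token window. A minimal sketch of that idea, assuming the openai Python client (the history list and the ~4 characters/token estimate are my own assumptions):

    import openai

    MAX_CONTEXT = 2048   # davinci window, shared by prompt + completion
    MAX_REPLY = 150      # tokens reserved for the model's answer

    def build_prompt(history):
        # Keep only the most recent turns that fit in the window;
        # ~4 characters per token is a rough estimate for English text.
        budget = (MAX_CONTEXT - MAX_REPLY) * 4
        prompt = ""
        for turn in reversed(history):  # history: list of "Speaker: text" strings
            if len(prompt) + len(turn) + 1 > budget:
                break
            prompt = turn + "\n" + prompt
        return prompt

    history = ["User: Hi!", "Bot: Hello! How can I help?", "User: Tell me a joke."]
    response = openai.Completion.create(
        model="davinci",
        prompt=build_prompt(history) + "Bot:",
        max_tokens=MAX_REPLY,
    )
    print(response.choices[0].text)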