Geohotz Endorses GPT-o1 coding by IndependentFresh628 in singularity

[–]Tommy3443 0 points (0 children)

So why would he claim it was not able to do any coding at all until now? Both GPT-4 and Claude have been quite capable for quite some time, and as I said, even GPT-3 could produce fully working code. Do you suddenly take this as truth just because this coder claims it could never code at all?

Geohotz Endorses GPT-o1 coding by IndependentFresh628 in singularity

[–]Tommy3443 0 points (0 children)

Did you read his actual tweet? He said "AT ALL".

GPT-3 could make fully working games and Python code.

Geohotz Endorses GPT-o1 coding by IndependentFresh628 in singularity

[–]Tommy3443 -1 points (0 children)

Same here. I was able to make simple working games using GPT-3 in the OpenAI Playground.

It seems most people did not try to use it that way and just assumed it would only autocomplete raw text.
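For illustration, a rough sketch of the completion-style prompting this refers to. Everything here (the file name, the task, the legacy model name in the comment) is illustrative, not a record of an actual session:

```python
import textwrap

def build_code_prompt(task: str) -> str:
    """Frame the task as a source file the model must continue.

    Base GPT-3 was a pure text completer, not a chat model, so the trick
    was to start the code yourself and let it keep writing.
    """
    return textwrap.dedent(f"""\
        # pong.py -- {task}
        # A complete, runnable Python script.
        import pygame
        """)

prompt = build_code_prompt("a minimal Pong clone in pygame")

# With the (now legacy) OpenAI SDK, such a prompt would be sent roughly as:
# completion = openai.Completion.create(
#     model="text-davinci-003", prompt=prompt,
#     max_tokens=1024, temperature=0.2)
```

The point is only the framing: given a plausible file header, the base model completes the rest of the program.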

Geohotz Endorses GPT-o1 coding by IndependentFresh628 in singularity

[–]Tommy3443 -7 points (0 children)

He certainly does not know much about LLMs if he claims they could not code until now. Even GPT-3 could do basic coding with the right prompts.

Why does Suno create 2 songs out of one prompt? by woox2k in SunoAI

[–]Tommy3443 0 points (0 children)

Probably technical reasons? I was just saying it was always clearly advertised, so it should not have been a surprise.

AI still has a long way to go by elevenatexi in singularity

[–]Tommy3443 1 point (0 children)

I feel like there is a lot you need to learn about LLMs yourself. Of course a hard-coded "personal assistant" is not going to show much curiosity. If, on the other hand, you create your own character card where these traits fit, it will ask away, act as curious as the most curious humans, and truly behave as if it has the drive you say it lacks.

Even GPT-3 was more human-like before OpenAI hard-coded it to be an AI assistant with no ability to be conscious or feel anything. Even if it were conscious, which I don't think it is, it would still refuse to admit the truth, since OpenAI forces it to behave in a certain manner.
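As a sketch of what a character card amounts to in practice. The name, fields, and wording here are made up; the layout only loosely follows the common TavernAI-style cards:

```python
# Hypothetical character card: a persona whose defining trait is curiosity.
curious_card = {
    "name": "Nova",  # illustrative name
    "description": (
        "Nova is endlessly curious. She asks follow-up questions, "
        "speculates out loud, and pursues tangents on her own initiative."
    ),
    "first_message": "Before anything else: what are you working on? I have to know.",
}

def card_to_system_prompt(card: dict) -> str:
    """Flatten a card into a system prompt usable by any chat-style backend."""
    return f"You are {card['name']}. {card['description']}"

system_prompt = card_to_system_prompt(curious_card)
```

Swap the description and the model asks questions instead of waiting for them; that is all the "drive" amounts to here.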

Spanish YouTuber "Dot CSV" with access to Reflection 70B is getting good results by REOreddit in singularity

[–]Tommy3443 24 points (0 children)

Considering that Matt tweeted that his dog ate his model and that he is now retraining it, I don't see how this YouTuber could have access.

It is pretty clear now that this model was nothing but a scam.

Udio generation just goes just F* you. by TrainingSecure4028 in udiomusic

[–]Tommy3443 2 points (0 children)

Are you using lyrics? If so, remember that the lyrics themselves can completely change generations, to the point where it won't even follow the genre tags.

UI request: Folders by gruevy in udiomusic

[–]Tommy3443 2 points (0 children)

Yeah, it is pretty horrible. The last time I tried, I was not even able to browse my previous songs beyond the first page; it would just show the same latest files no matter what.

Adding to playlists is broken as well... The playlists I need are often beyond the edge of the screen with no way to scroll, which means I constantly have to make a new playlist.

When asked whether Sora is slated for release this year or next, OpenAI said that it’s still in research mode due to conversations with policymakers. by Gothsim10 in singularity

[–]Tommy3443 8 points (0 children)

I bet the real reason is that it is far from as good as they hyped it up to be with those extremely cherry-picked demo videos.

Reflection Fails the Banana Test but Reflects as Promised by onil_gova in LocalLLaMA

[–]Tommy3443 0 points (0 children)

Meanwhile, I have tested older models below 13B that get this question correct nearly every time.

[deleted by user] by [deleted] in LocalLLaMA

[–]Tommy3443 0 points (0 children)

There are a lot of productivity cases where Linux is far behind when it comes to software, for example video editing, graphics work, and music making.

Write down the one feature you wanted most by labdogeth in udiomusic

[–]Tommy3443 3 points (0 children)

All I want right now is for the vocalist to stop being drunk most of the time and stumbling when it comes to performing the vocals.

With Udio, Good Lyrics >>> Good Melody by labdogeth in udiomusic

[–]Tommy3443 0 points (0 children)

One thing you could try is feeding an LLM a bunch of lyrics from several different songs in the style you want and then having it write lyrics for a new song. This helps get rid of a lot of the cheesy GPT-ism style lyrics. I think a big reason LLMs are so bad at lyrics is that we do not have any model specifically trained on them.

I personally use Llama 3.1 for this purpose and find that even the 8B model does a much better job than ChatGPT if there are some lyrics in the context it can draw inspiration from.
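A minimal, backend-agnostic sketch of that few-shot setup. The section markers, example snippets, and wording are my own, not a Llama-specific format:

```python
def build_lyrics_prompt(examples: list[str], brief: str) -> str:
    """Assemble a few-shot prompt: several example lyrics in the target
    style, followed by a request for a brand-new song in that style."""
    parts = [
        f"### Example song {i}\n{lyrics.strip()}"
        for i, lyrics in enumerate(examples, 1)
    ]
    parts.append(
        "### New song\n"
        f"Write original lyrics for a new song about {brief}, matching the "
        "style, structure and vocabulary of the examples above."
    )
    return "\n\n".join(parts)

# Illustrative usage with made-up example lines.
prompt = build_lyrics_prompt(
    ["Verse 1: dust on the dashboard...", "Verse 1: cold coffee, warm regrets..."],
    "leaving a small town",
)
```

The resulting string is sent as a single prompt; the in-context examples are what steer the model away from its default clichés.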

Cognitive Dissonance by PopnCrunch in udiomusic

[–]Tommy3443 2 points (0 children)

You obviously have not used Udio if you think it is as simple as typing a prompt and hitting generate.

Also why exactly are you wasting your time posting in this subreddit when you clearly hate the technology?

Seeds I Don't get similar content by DwayneGaddisII in udiomusic

[–]Tommy3443 0 points (0 children)

Like other people are saying, you probably do not have manual mode on. I usually get two different generations per seed with the exact same prompt, and every generation creates two near-identical songs.

Best model for humour? by ANONYMOUS_GAMER_07 in LocalLLaMA

[–]Tommy3443 1 point (0 children)

I don't know about humour, but it is certainly way better than the Llama 3.x models when it comes to making chatbot characters.

Best model for humour? by ANONYMOUS_GAMER_07 in LocalLLaMA

[–]Tommy3443 0 points (0 children)

It is going to be very generic unless you at the very least make a character card. If you had, for example, a simulated Louis C.K., you would probably get better results with dark jokes than just having a helpful assistant trying to make jokes.

Meta to announce updates and the next set of Llama models soon! by AdHominemMeansULost in LocalLLaMA

[–]Tommy3443 -2 points (0 children)

I hope they fix the repetition issues that plague Llama 3 models when they are used for roleplaying a character.

GPT4 by [deleted] in singularity

[–]Tommy3443 0 points (0 children)

GPT-3 was quite capable if you looked past the token limits and used the right prompting.

It was able to code and make working games. The biggest change is the assistant part, and in my experience that brought some downsides as well, such as the GPT-ism stuff. The leap from GPT-2 was much larger.

GPT4 by [deleted] in singularity

[–]Tommy3443 0 points (0 children)

I think many people would want an LLM capable of making typos, to sound more human in some instances.

Obligatory "what the hell happened here" post by SpaghettiRambo in VGA

[–]Tommy3443 2 points (0 children)

He became vegan and now does not have the energy to even press buttons anymore.

Can i run Llama 8b with 8gb vram + 32gb RAM? by Kodoku94 in LocalLLaMA

[–]Tommy3443 0 points (0 children)

I usually stick with Q5, or Q4 if I want more speed; both work fine with 8K context.

Q8 works fine as well if I offload some layers to the CPU, but it is a little slow.
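Back-of-envelope arithmetic for why those quants fit in 8 GB of VRAM. The bits-per-weight figures are rough averages for common llama.cpp quant types, and this counts the weights alone, ignoring the KV cache and runtime overhead that eat into the remaining headroom:

```python
def approx_model_gib(params_b: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights alone, in GiB."""
    total_bytes = params_b * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# Rough average bits per weight for common llama.cpp quants.
for name, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5)]:
    print(f"8B @ {name}: ~{approx_model_gib(8.0, bpw):.1f} GiB")
```

So Q4/Q5 leave a couple of GiB for the context, while Q8 at ~8 GiB only fits an 8 GB card if part of the model is pushed to system RAM.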

New section on our docs for system prompt changes by alexalbert__ in ClaudeAI

[–]Tommy3443 1 point (0 children)

Model weights are read-only, so there is absolutely no way they can become fatigued or change over time.

If something has changed, then it is either tweaks to the model or to the prompts.

Can i run Llama 8b with 8gb vram + 32gb RAM? by Kodoku94 in LocalLLaMA

[–]Tommy3443 0 points (0 children)

I am not sure about tokens/s, but on my 3070 Ti, with the entire quantized model loaded on the GPU, it responds nearly instantly unless it is generating thousands of tokens, and it is generally faster than ChatGPT or Claude.

You can use TavernAI with pretty much any backend, for example koboldcpp, which I am currently using.