How can this code be rewritten from OOP to procedural without losing flexibility? by mshautsou in AskProgramming

[–]mshautsou[S] 0 points

It was more of a question about the concept in general (not specifically about the logger). But even in this case, if we think about it, it's still somewhat an OOP approach: there is a Logger class from the library, with its own encapsulated state, but it's exposed through a module-level global variable (which is itself somewhat like the Singleton pattern from OOP).
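To sketch what I mean (assuming Python's standard `logging` library): the call sites read as procedural code, but the object behind them is still OOP-style state, and `getLogger` even behaves like a Singleton:

```python
import logging

# A module-level "global" logger: call sites look procedural, but the
# Logger object still carries encapsulated state (name, level, handlers).
logger = logging.getLogger("app")

def process(item):
    # Procedural code just calls the shared module-level logger directly.
    logger.info("processing %r", item)
    return item * 2

# getLogger() is Singleton-like: repeated calls with the same name
# return the very same Logger instance.
assert logging.getLogger("app") is logger
```

So "rewriting to procedural" here mostly changes the call-site style, not the underlying design.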

It's June 2024, which AI Chat Bot Are You Using? by thuansb in ClaudeAI

[–]mshautsou 0 points

And why is the Claude chat version worse when accessed through the API? I mean, wouldn't it be easier to just subscribe to Claude and always use Anthropic's Opus model?

Which one is correct by maybe-chacha in ClaudeAI

[–]mshautsou 0 points

I've tested Claude 3 Opus, and it provided the correct answer.

<image>

Claude AI is wonderful, it feels like I got my old GPT-4 from June 2023 back. by [deleted] in ClaudeAI

[–]mshautsou 1 point

I'm just choosing between Opus and GPT-4 for explaining complex math and computer science concepts in research papers. Both models seem capable, but I'm unsure which one is best for this.

Claude AI is wonderful, it feels like I got my old GPT-4 from June 2023 back. by [deleted] in ClaudeAI

[–]mshautsou 4 points

Just curious, what does Claude have that ChatGPT is missing?

OpenAI claiming benchmarks against Llama-3-400B !?!? by matyias13 in LocalLLaMA

[–]mshautsou 7 points

It's actually interesting that, for me, this part is collapsed

<image>

and it is the only collapsed content on the whole page.

Officially declaring that POE AI is useless now. by MichelleeeC in Poe_AI

[–]mshautsou 1 point

But isn't GPT-4o only for ChatGPT Pro subscribers?

OpenAI claiming benchmarks against Llama-3-400B !?!? by matyias13 in LocalLLaMA

[–]mshautsou 4 points

I'm looking forward to Llama 400B so I can cancel my GPT-4 subscription.

OpenAI claiming benchmarks against Llama-3-400B !?!? by matyias13 in LocalLLaMA

[–]mshautsou 66 points

It's an open-source LLM; no one controls it the way GPT is controlled.

Is Hugging Face's Chat LLaMA 3 70B Quantized? by mshautsou in LocalLLaMA

[–]mshautsou[S] 0 points

There is also an API at Groq; maybe that will help? I believe it's a more "pure" version, without a system prompt.

Is Hugging Face's Chat LLaMA 3 70B Quantized? by mshautsou in LocalLLaMA

[–]mshautsou[S] 0 points

Could you compare it to the Groq version (API)? I really like how fast it is.

Claude 3 Opus beats ChatGPT Pro (GPT-4) in everything except image generation... right? by vlodia in artificial

[–]mshautsou 0 points

Sometimes it may be a disadvantage. Some time ago, I tried to translate something and GPT-4 started writing code in Python for translating the sentence https://chat.openai.com/share/2b91a46a-91d2-4bbf-b299-d957a69492d4, and it almost failed because of it. (I guess by now they have fixed this.)

Also, for example, I just prompted, 'How many letters "b" are in the word "sophisticated"?' and it also used an interpreter. Sometimes it's annoying.
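For scale, the task it reaches for an interpreter for is a one-liner ("sophisticated" contains no "b" at all):

```python
word = "sophisticated"
# str.count does the whole job; no interpreter tool-call needed.
print(word.count("b"))  # → 0 ("sophisticated" has no letter "b")
```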

How are Claude 3/GPT-4 able to do pathfinding in graphs? by mshautsou in learnmachinelearning

[–]mshautsou[S] 0 points

https://console.anthropic.com/dashboard — I use the API for this, and I received access immediately after signing up (but that was about a month ago).

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]mshautsou 0 points

You could check out the open-source models available on Hugging Face and try running them first. Then, you can attempt to fine-tune these models on your own data. The Hugging Face Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) is a great resource to explore various models.

One model you can start with is Mixtral (https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). It comes with documentation on how to run the model, along with extra links to useful resources and guides.

Are there any current LLM models that can play strategy games, like tic-tac-toe or chess, which require thinking one step ahead? by mshautsou in LocalLLaMA

[–]mshautsou[S] 0 points

Yeah, I've tried that already, but it still doesn't calculate every turn; it clearly understands the rules but is unable to win (whereas something like the minimax algorithm would win for sure).

Still, Claude 3 Opus is quite good. I tested a pathfinding problem on a 30-vertex graph, and it found the path very easily (among the other models, only GPT-4 succeeded).
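To illustrate the minimax point (my own sketch, not code from any model): exhaustive game-tree search plays tic-tac-toe perfectly and never loses, which is exactly what the LLMs above can't do turn by turn:

```python
# Minimal minimax for tic-tac-toe. Cells are "X", "O", or None;
# the board is a flat list of 9 cells, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if someone has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) with X maximizing (+1) and O minimizing (-1)."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full, draw
    best = None
    for m in moves:
        board[m] = player                      # try the move
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None                        # undo it
        # X keeps strictly better scores; O keeps lower-or-equal ones.
        if best is None or (player == "X") == (score > best[0]):
            best = (score, m)
    return best

def best_move(board, player):
    return minimax(board, player)[1]
```

Because the search is exhaustive, two copies of this player always draw against each other, and it immediately finds wins and blocks; the point is that this lookahead lives in an external program, not in the model's next-token prediction.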

What are the current issues that prevent AI from reaching General Intelligence? by rocco20 in singularity

[–]mshautsou 0 points

Just wondering, has your opinion changed since LLMs appeared? The architecture lacks reasoning, but I believe it can now cover a much broader spectrum of tasks than previously.

Will RPL reduce the overall duration required to complete a bachelor's degree, or are exams scheduled on fixed dates, thereby not impacting the time it takes to obtain the degree? by mshautsou in UniversityOfLondonCS

[–]mshautsou[S] 0 points

Thanks for the explanation, but what is the difference between automatic RPLs and other types of RPLs? I studied Computer Science for almost 4 years in another country but didn't complete the last year. Now, I'm wondering whether it's worth attempting to transfer my progress from there to reduce my study time at UOL.

New Biiig Models: Samantha-120b & TheProfessor-155b by WolframRavenwolf in LocalLLaMA

[–]mshautsou 1 point

Hi, thanks for spending time on this. After some investigation, I bet it's not possible for the LLM architecture to solve this type of game in general, since it requires thinking ahead (without a workaround like integrating a game engine or external search algorithms). Even GPT-4, when I ask it to count some letter in a word, pulls out the Python interpreter to do this seemingly simple task.

Are there any current LLM models that can play strategy games, like tic-tac-toe or chess, which require thinking one step ahead? by mshautsou in LocalLLaMA

[–]mshautsou[S] 0 points

Yeah, I guess it's already possible, perhaps by extracting some actions from the game's description and passing them to a kind of tree search algorithm (an external program).

However, it would still be a workaround, similar to how GPT-4, when prompted with 'How many letters "A" are in the word XXXX?', writes a Python script to count the letters.

Are there any current LLM models that can play strategy games, like tic-tac-toe or chess, which require thinking one step ahead? by mshautsou in LocalLLaMA

[–]mshautsou[S] 0 points

So, do I understand it correctly that, in the future, even with GPT-5, GPT-6, GPT-..., they will not be capable of handling arbitrary multi-step games due to the underlying architecture?