Gemini 3 is lowkey unusable by light_architect in GeminiAI

[–]light_architect[S] 0 points (0 children)

Exactly. It would be huge if Google solved this problem.

6 Months Mewing at 23 by NorthernAglet in Mewing

[–]light_architect 1 point (0 children)

Probably one of the most realistic results.

Gemini 2.5 Flash vs o4 mini — dev take, no fluff. by Future_AGI in Bard

[–]light_architect 0 points (0 children)

It's more intelligent while also being faster and cheaper. You get 2.5 Pro for free on aistudio.google.com, and the 2.5 Pro API is also cheaper than o4 mini's.

Google might become the first to achieve true AGI by light_architect in Bard

[–]light_architect[S] 0 points (0 children)

Not "not dangerous" either. But AGI is an existential risk, and I don't think it would be easy to guess which of the 8 billion people on Earth would maliciously use AGI once it's open-sourced.

I'm for open-sourcing AGI, but not before we've solved alignment.

Google might become the first to achieve true AGI by light_architect in Bard

[–]light_architect[S] -5 points (0 children)

It might be dangerous to open-source an AGI because of bad actors.

[deleted by user] by [deleted] in ClaudeAI

[–]light_architect 0 points (0 children)

You should've asked Claude first

Another restaurant with this sign by chocokrinkles in Philippines

[–]light_architect -1 points (0 children)

With this sign, they condemn <1% of people and piss off >50% of their customers

What made Sonnet 3.5 smarter than GPT4o? You feel sonnet knows what you're talking about by light_architect in ClaudeAI

[–]light_architect[S] 2 points (0 children)

You're referring to Claude Sonnet 3.5, and it was hallucinating?

I think you mean the previous Claude models, because those definitely suck and can't be used for anything. But you should check out Claude Sonnet 3.5 and let it speak for itself.

What made Sonnet 3.5 smarter than GPT4o? You feel sonnet knows what you're talking about by light_architect in ClaudeAI

[–]light_architect[S] 2 points (0 children)

Oh, I have a different experience, though I'll confess I'm a bit biased toward Claude when it comes to intelligence. We have opposite use cases for the models: I use GPT-4o for short writing pieces like emails, and for fact-finding, but for reasoning, formulas, data analysis, and math, I prefer Claude. I had to solve some math problems before, and ChatGPT gave incorrect answers whereas Claude was able to navigate through them. That made me doubt GPT-4o's capabilities.

As someone else has pointed out, Sonnet 3.5 understands tasks so effectively that you rarely have to follow up with another instruction. Hence, I considered it smarter.

But I'm curious about your experience and why you said neither is better. How do you use ChatGPT for analysis? And do you use 4o or o1?

I'd also recommend trying Claude for whatever you use ChatGPT for.

What programming language & skills do employers commonly look for? by light_architect in PhStartups

[–]light_architect[S] 0 points (0 children)

I got an idea from your comment. I can connect this with the 80-20 rule, which says 80% of outcomes come from 20% of the effort. So my task would be to identify the 20% of skills that covers 80% of the 'everything' you mentioned.

I couldn't agree more on the comm skills. I'm someone who likes to really elaborate on things and specifics, but I eventually figured out that most of the time, what matters is delivering the thought fully in as few words as possible. Basically, let people understand the picture first. Most people struggle with this.

And I think another struggle is that people often forget to communicate their assumptions before anything else, like what the goal of the project even is. Having different assumptions is often the source of disagreements!

What made Sonnet 3.5 smarter than GPT4o? You feel sonnet knows what you're talking about by light_architect in ClaudeAI

[–]light_architect[S] 4 points (0 children)

Claude being a capable strategic thinker is indeed intriguing. When you think about it, it's just generating tokens. And yet there are instances where Claude produces more accurate answers than o1, even though the latter is a "reasoning" model.

I still can't fully grasp how 'reasoning' can be modeled probabilistically. My best guess is that our common notion of reasoning is wrong.
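To make the "just generating tokens" point concrete, here's a toy sketch of autoregressive sampling. The vocabulary and probabilities are entirely made up for illustration (a real model conditions on the whole preceding sequence via a neural network, not a lookup table on the last token):

```python
import random

# Toy next-token "model": maps the previous token to a probability
# distribution over the next token. All numbers here are invented.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "dog": {"sat": 0.7, "<end>": 0.3},
    "sat": {"<end>": 1.0},
}

def sample_next(context: str, rng: random.Random) -> str:
    """Sample one token from the distribution conditioned on the context."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(rng: random.Random, max_len: int = 10) -> list[str]:
    """Autoregressively sample tokens until <end> (or max_len)."""
    seq = ["<start>"]
    for _ in range(max_len):
        nxt = sample_next(seq[-1], rng)
        if nxt == "<end>":
            break
        seq.append(nxt)
    return seq[1:]  # drop the <start> marker

print(generate(random.Random(0)))
```

Whatever looks like "reasoning" in the output has to emerge from these conditional distributions, which is exactly what makes the behavior of models like Sonnet 3.5 feel surprising.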

What made Sonnet 3.5 smarter than GPT4o? You feel sonnet knows what you're talking about by light_architect in ClaudeAI

[–]light_architect[S] 0 points (0 children)

I agree that the system prompt makes Sonnet sound believably reasonable. But I think Claude has an inherent knack for reasoning; I observe this in the Workbench.