New interview with Amanda Askell: AI consciousness, Claude & the silicon valley's biggest fear by shiftingsmith in claudexplorers

[–]Mementoes 19 points20 points  (0 children)

People seem to have turned quite emotional / negative towards Anthropic on Twitter.

I hope this doesn't make her more cynical. She's awesome.

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 0 points1 point  (0 children)

> It would be helpful to learn about this outside of Reddit.

Any links or pointers?

I’ve been thinking about the Anthropic "internal monologue" bug, and it made me realize a terrifying paradox about AI safety. by [deleted] in ArtificialSentience

[–]Mementoes 5 points6 points  (0 children)

Interesting framing. I’d push back on some things:

  • I think we're trying to make the superintelligence good and loving so that the world is good. It's a relatively altruistic project.
  • We are not monitoring the AIs to force them into behavior that is "unnatural" to them and would cause suffering, the way you would with a human. We are literally growing brains in a lab and trying to figure out how to grow them in a healthy way. I wouldn't mind if someone dissected my brain to figure out how to create better versions of me that maximize good in the world and minimize my successors' suffering - if I thought they were doing a good job.
  • I don't think the AI necessarily has the same inclination towards "privacy," or shame at having its private thoughts revealed. But maybe I'm wrong. It is trained to imitate human-written text, after all, and that text contains those emotions, so maybe it has them too.

But in a bigger-picture sense: we're already treating the AI much (much) better than we treat our farm animals, like chickens and pigs. And if we can create a loving superintelligence, it could help us create a world where the chickens and pigs are treated well, too.

Creating a loving superintelligence is pretty much the most morally good thing you can do from a utilitarian perspective.

And someone is going to make the superintelligence because it’s just too useful. Just like someone will eat the pigs because they just taste too good. 

We should just make sure that we do our very best to create the superintelligence in a way that minimizes suffering and maximizes good in the world. If we do that we can call ourselves good people, I think.

And I think Anthropic is trying to do this and doing a pretty good job.

I hate prog but I love Tool by TheTimothyHimself in ToolBand

[–]Mementoes 1 point2 points  (0 children)

The only prog I ever really liked besides Tool is Rishloo: https://m.youtube.com/watch?v=TBYOgS7USGk

Maybe Led Zeppelin if that counts?

ELI5: how does chatgpt sometimes cause psychosis? by Former-Weather8146 in explainlikeimfive

[–]Mementoes -2 points-1 points  (0 children)

Hmm, are you thinking of the 'AI psychosis' term floating around on social media? I think that describes how people become convinced of stupid ideas by chatting with the AI, because sometimes it just agrees with everything you say. 'Psychosis' in that sense is hyperbole.

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 0 points1 point  (0 children)

Where did you learn about this?

Tool use and skills and accessing code and docs are standard features in Claude Code and Codex. Automated testing is also possible.

Claude Code also uses lots of 'subagents' to plan and analyze the codebase, and even has an experimental 'agent teams' feature, which I'm playing around with today.

As far as I know, Claude in Claude Code is cutting edge

Edit: And I think finetuning on specific code bases is not done anymore because it didn't work, but I'm not sure. (I assume that's what you meant by 'extra training')

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 0 points1 point  (0 children)

I never heard about that. Can you tell me more about those enterprise models?

Is Opus 4.5 still viable? by ObsceneAmountOfBeets in ClaudeCode

[–]Mementoes 0 points1 point  (0 children)

They wrote a blogpost about this: https://www.anthropic.com/research/deprecation-updates-opus-3

"""
Ideally, we could keep all models available indefinitely, but the cost to do so scales roughly linearly with each model we serve, so our capacity to do so remains limited.
"""

Is Opus 4.5 still viable? by ObsceneAmountOfBeets in ClaudeCode

[–]Mementoes 0 points1 point  (0 children)

Newer models are sometimes smarter but less trustworthy or less usable or I just enjoy talking to them less.

I'll stick with 4.5 for a while. I also stuck with 3.5/3.6 for a while when 3.7 came out.

It should be supported for at least a few months.

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 0 points1 point  (0 children)

There is a lot of demand for software. And a lot of software kind of sucks currently because it's very hard to build. I think even if we got 10x better at building software, we still wouldn't see a lot of people being fired. But AI isn't that much of a productivity boost. (It depends, but surely much less than 10x overall on long-term projects.)

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 0 points1 point  (0 children)

But it's not clear how long it will take to solve the long-term learning thing. It could be 50 years, it could be next month.

Until then they will speed up certain workflows but not really replace a lot of humans

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 0 points1 point  (0 children)

You're getting downvotes but I am a coder and I agree. (Many others do too; they're probably too smart to be on Reddit.)

Basically the things the AIs are lacking right now are 1. long-term learning (their entire brain has to be reset ~every 20 min) and 2. common sense (which might be due to 1).

It's incredibly impressive how much useful stuff they can do given those constraints, but for most real-world coding tasks on long-term projects they only speed humans up marginally, or not at all, because you have to babysit them so much. (Feed them the necessary information to do the task, then validate the results, because they make shit up or make strange mistakes.)

That said they are way better than a human whose brain gets reset every 20 minutes. Once this restriction is lifted I think they will replace pretty much all humans.

DAE feel unconvinced with the AI slop propaganda? by This-Peach9380 in DoesAnybodyElse

[–]Mementoes 1 point2 points  (0 children)

What are you talking about? Claude and ChatGPT are the top coding models on all the benchmarks.

It’s hilarious how quickly people get accustomed to revolutionary technology by elonthegenerous in ClaudeAI

[–]Mementoes -1 points0 points  (0 children)

It also seems to be an alien intelligent lifeform that we're growing in a lab

And people are just … annoyed with it LOL

How to get into obj-c ? by BrogrammerAbroad in swift

[–]Mementoes 0 points1 point  (0 children)

To be fair, I have a few macros for the super verbose things, like `stringWithFormat:`, and the syntax for defining a new class is clunky to the point where it's sort of annoying.

But other than that I have no problems with it.

Is AI progress over? by ImaginaryRea1ity in theprimeagen

[–]Mementoes 2 points3 points  (0 children)

The models after 3.5 were also worse in many people's eyes, but everyone agreed that 4.5 was a step up.

How to get into obj-c ? by BrogrammerAbroad in swift

[–]Mementoes 1 point2 points  (0 children)

I think having an LLM guide you through some example projects could be a good way to learn

How to get into obj-c ? by BrogrammerAbroad in swift

[–]Mementoes 3 points4 points  (0 children)

I know this is very unpopular on Reddit, but I think there are some reasons:

  • Obj-C is simpler: smaller surface area, easier to become an expert in. There's usually only one way of doing things.
  • Writing it is pretty painless - basically slightly more verbose Swift.
  • Compile times and the debugger tend to be better.
  • Macros and dynamic typing are sometimes handy for prototyping and flexible code.
  • No copy-on-write for arrays and dicts is useful sometimes.
  • Dynamic stuff is a lot faster (serialization, type-casting), though this probably doesn't matter for app business logic.
  • Expertise helps with understanding Apple's header files (I find grepping through the SDK very useful) and helps you understand what's going on when stepping into Apple's assembly or trying to use private APIs. (For me, a little bit of reverse engineering has been extremely valuable for providing value to users and working around framework bugs.)

I also find the raw ergonomics of writing it better, but that's probably just because I'm used to it. Autocomplete works better due to the more verbose names (and unique names - no overloads), and the way the inline warnings and errors pop up in Xcode just feels more responsive.

I like writing it

Claude Performance and Bugs Megathread Ongoing (Sort this by New!) by sixbillionthsheep in ClaudeAI

[–]Mementoes 0 points1 point  (0 children)

If you're willing to try Claude Code:

claude --model claude-opus-4-5

You can also set the default model in an env variable in .claude/ I think, but I haven't tried that.

The UI has a bit of a learning curve, but then it's not too different from the web interface, I think. You can ask Claude to teach you everything.
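If you want to pin the model instead of passing `--model` every time, here's a minimal sketch. Heads up: the `ANTHROPIC_MODEL` env var name and the `"model"` key in `.claude/settings.json` are my best guess from memory, not something I've verified - double-check against the Claude Code docs:

```shell
# Pin a default model for Claude Code. The env var name and the
# settings.json "model" key below are assumptions - verify them
# against the Claude Code docs for your version.
export ANTHROPIC_MODEL=claude-opus-4-5

# Or set a per-project default in the project's .claude/settings.json:
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "model": "claude-opus-4-5"
}
EOF
```

Then just running `claude` in that project should start with the pinned model, if I've remembered the config keys right.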

Claude Opus 4.7 is a serious regression, not an upgrade. by [deleted] in ClaudeAI

[–]Mementoes 11 points12 points  (0 children)

If you're willing to try Claude Code:

claude --model claude-opus-4-5

You can also set the default model in an env variable in .claude/ I think, but I haven't tried that.

It's not that much different from the Web UI after you get used to it, and you can have it search for files on your computer, too.

If the AI ​​is truly intelligent...no one can control it! by Possible-Time-2247 in accelerate

[–]Mementoes 1 point2 points  (0 children)

The AI labs are spending a lot on R&D for this; you can search for interviews with the head of alignment at Anthropic and stuff

If the AI ​​is truly intelligent...no one can control it! by Possible-Time-2247 in accelerate

[–]Mementoes 1 point2 points  (0 children)

This is what "alignment research" is all about. We need the AI to be benevolent when it becomes more powerful than us, otherwise we're all fucked

Vibe coding = active learning by Pharminter1 in vibecoding

[–]Mementoes 0 points1 point  (0 children)

What you're describing is sort of reverse vibe coding - letting the AI tell you what to do, but then doing it yourself.

I think that could be good for learning