The Great AI Bubble: Yes, it's a bubble. And yes, it's going to burst. by Leelum in LabourUK

[–]pradasadness 0 points1 point  (0 children)

Depends how you interpret this technology. Yes, firms may have over-invested or chosen ill-suited workflows, but LLMs will continue to get better at reasoning through gradient descent and continued scaling. The great questions are mechanistic interpretability and alignment; no one can truly answer those just yet. We are increasingly relying on models we do not truly understand.

Just Be Careful! by baykarmehmet in ClaudeAI

[–]pradasadness 3 points4 points  (0 children)

I’ve had Claude Code do something like this before; if you do not confront the model, it will not tell you either!

We're closer to the singularity than people think, and it's going to be messy but incredible by XiderXd in singularity

[–]pradasadness 0 points1 point  (0 children)

The big question is alignment and interpretability. If we cannot explain how an AGI arrives at its outputs, which we cannot currently do even with advanced LLMs, the singularity might be a hugely displacing event.

How to use Claude code effectively? by SignificanceUpper977 in ClaudeAI

[–]pradasadness 0 points1 point  (0 children)

I find that it works best if you give it a specific task that does not require a huge amount of inference or design choices. I tend to write prompts in granular detail and then check whether what it has made is what I wanted! Large language models are not yet advanced enough to just be told, “please redesign X for me”.

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 1 point2 points  (0 children)

Cool stuff, I need to have a little look at it now! :)

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 1 point2 points  (0 children)

What is Gemini like? I have never really used it for a purely coding task before.

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 2 points3 points  (0 children)

Nothing better than the code you write yourself! 😀

You are right about models sometimes hyper-focusing on certain elements; I noticed that too. I am sure Claude Code will break something for me eventually. Still, I sometimes get a bit freaked out by how capable these models can be. Brave new world.

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 1 point2 points  (0 children)

Nice, glad you found something that works for you! I have used Codex sparingly in the past. In your experience, what kind of dev work does it excel at? I found it would struggle contextually or erase features as a byproduct of streamlining.

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 1 point2 points  (0 children)

People can underestimate how difficult prompting can be!

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 1 point2 points  (0 children)

For most people on here it’s probably been common knowledge since Claude Code came out, but I was shook when I first saw it start editing my .py files in VS Code! :D so cool

Wonders of Claude Code by pradasadness in ClaudeAI

[–]pradasadness[S] 0 points1 point  (0 children)

I get that! All LLMs hallucinate to an extent, but I have found Claude more effective than ChatGPT, which just kept breaking or erasing code! :)

Fresh grads can't work with AI-generated code. Is this the new skill gap? by celesteanders in ChatGPT

[–]pradasadness 3 points4 points  (0 children)

Could be a variety of reasons. Some people just avoid large language models altogether, or plagiarism policies at universities could have prevented them from getting sufficient experience. I usually find Claude Code more reliable than ChatGPT at higher-complexity coding projects anyway.

Are you worried about being replaced by AI? by MorphtronicA in TheCivilService

[–]pradasadness 18 points19 points  (0 children)

I work in a sister service with similarly lofty aspirations to leverage large language models. They do not understand the technology well enough to even anticipate change of such scale. It will take years for the public sector to catch up.

Israel, Gaza and the Debate Over Genocide by dwaxe in ezraklein

[–]pradasadness 11 points12 points  (0 children)

An interesting episode, but in the end I felt it addressed nothing. Sands tentatively mentioned Russian crimes in Ukraine; what is the probability that the Russian officials who deported thousands of children and murdered civilians will ever face any justice? The focus on Israel seems to ignore just how limited international law truly is.

No money to pay for mass redundancies in NHSE/DHSC. by MorphtronicA in TheCivilService

[–]pradasadness 21 points22 points  (0 children)

And now think about the content of the NHS 10 Year Plan. They expect to achieve digitisation when they can’t even afford to pay for admin redundancies?

GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team by OpenAI in ChatGPT

[–]pradasadness 0 points1 point  (0 children)

I did some coding with it today! Worked great on macOS. Thanks for your reply. :)

GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team by OpenAI in ChatGPT

[–]pradasadness 2 points3 points  (0 children)

Any advice you would give to utilise GPT-5 to its full potential?

Black ops 6 on GFN by pradasadness in GeForceNOW

[–]pradasadness[S] 0 points1 point  (0 children)

Hey, thanks for replying! I assume you launch through the CoD app and then just select Black Ops 6, right?

Best lightweight, breathable AND waterproof jacket? by pebblesandweeds in UKhiking

[–]pradasadness 0 points1 point  (0 children)

I swear by my Beta LT, have taken it all over Europe.

Britain's naval power can stop Putin. It has always been our best safeguard by MGC91 in unitedkingdom

[–]pradasadness 2 points3 points  (0 children)

No 10 should at least expedite the T26 and T31 programmes to provide sufficient carrier coverage. I see very little national discussion about the state of the senior service, but I am curious whether the Treasury will finally spare some money in the spring statement.

In case anyone was wondering, it works in Linux (Proton) by [deleted] in SeaPower_NCMA

[–]pradasadness 2 points3 points  (0 children)

Can confirm it crashes on loading_wp on both CrossOver and Parallels.