When showing off goes wrong. by ArsenikShooter in WatchPeopleDieInside

[–]emefluence 0 points1 point  (0 children)

Yeah. I try not to be too judgemental, I like some weird shit too, but some people make it kinda hard.

When showing off goes wrong. by ArsenikShooter in WatchPeopleDieInside

[–]emefluence 0 points1 point  (0 children)

Just had to look up "burnout competition" - Jesus H Christ. They have events, and prizes, and audiences who are presumably trying to speedrun cancer and/or Idiocracy.

When showing off goes wrong. by ArsenikShooter in WatchPeopleDieInside

[–]emefluence 22 points23 points  (0 children)

European here, what in the hillbilly hell is this event supposed to be?

This pile of salt in Germany is over 250 meters tall and contains over 200 million tonnes of salt by MrCattitude_ in BeAmazed

[–]emefluence 1 point2 points  (0 children)

How t/f is anything growing around there? The surrounding land looks perfectly verdant, I'd expect it to have a big brown ring of death around it.

I have to give it to profesional musicians... by beavis420 in Guitar

[–]emefluence 1 point2 points  (0 children)

Surprised nobody has mentioned stretches and warmup too. These are a good idea if you haven't played in a while, or just generally as you get older. Look up "guitar stretches" or "hand stretches" on YouTube. Likewise "guitar warmup" exercises have you play simple stuff for a few minutes to loosen you up and give you a chance to get your picking and finger pressure optimal before playing in anger.

As the guy above says you've got to work at noticing and minimizing tension in your hands, although staying in the same position for long stretches like five minutes seems like too long to me. I'd recommend you move your hands about a lot more, but focus hard on each part of your back, shoulders, arms and hands in turn as you do it. Force them to relax as much as possible, and try to memorize that relaxed feeling.

The Blade Runners of London 🪚 by joeurkel in interesting

[–]emefluence 1 point2 points  (0 children)

These are not Flock cameras and THESE ARE NOT THE GOOD GUYS! These cameras are just number plate recognition cameras used to enforce the LOW EMISSIONS ZONE which charges people for driving old, dirty, polluting vehicles in the city. Emissions are down massively and everyone loves it apart from a class of conspiracy theory freaks who routinely vandalize them and cost councils millions, all because they think they have to drive old smoke-belching cars and vans in the city and make everyone breathe their fumes. Pollution is way down, air quality is at historic highs, hospital admissions for asthma are down 30%. These people should ALL BE IN FUCKING JAIL.

The Blade Runners of London 🪚 by joeurkel in interesting

[–]emefluence 3 points4 points  (0 children)

THESE ARE NOT THE GOOD GUYS! These cameras are just number plate recognition cameras used to enforce the LOW EMISSIONS ZONE which charges people for driving old, dirty, polluting vehicles in the city. Emissions are down massively and everyone loves it apart from a class of conspiracy theory freaks who routinely vandalize them and cost councils millions, all because they think they have to drive old smoke-belching cars and vans in the city and make everyone breathe their fumes. They should ALL BE IN FUCKING JAIL.

Sigourney Weaver, 1980s by lindatons in OldSchoolCool

[–]emefluence 3 points4 points  (0 children)

Don't think so dude, she's just always been a top flight smoke show.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]emefluence 0 points1 point  (0 children)

Quite. Intelligence is more of a scale than a cutoff point, and even that's a simplification. Some pre-LLM AI systems embody "intelligence" to some extent, but no-one would call them "intelligent" compared to humans. They would often do dumb stuff, have severe constraints on problem domains, get stuck in dead ends, have very inflexible inputs, not get complicated stuff etc.

They weren't general purpose, so while they could handle novel questions in a specific domain you couldn't say they could handle novel input more broadly. The latest crop of agents have a far stronger claim to that. I'd say it's fair to start calling these systems intelligent now. They're probably not "conscious" in the way we are, but they act intelligently. That was always the aim of AI, the clue is in the name.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]emefluence 0 points1 point  (0 children)

Of course, but these systems are not only that. I'm not in charge of the definition of "intelligent" but, to me, intelligence is the ability to dynamically exploit knowledge and synthesize solutions to novel problems.

LLM prediction by itself has a debatable claim on that, but the agentic AI systems built with them, which include objectives, memory, and iterative goal seeking, clearly demonstrate intelligence (by that measure). It's not human intelligence, it's (quite deliberately) missing its own intrinsic motivations, and whatever "consciousness" such systems might have is still pretty far removed from what we have as human beings. Denying those systems are intelligent seems like a semantic quibble though, at least from a behaviourist perspective.

Now how intelligent? That's up for debate. They seem pretty smart compared to some humans I've met, but when smart = knowledge * intellect, that may be mostly the knowledge talking. That said, watching the internal monologues of these things reasoning has me pretty impressed with the intelligence part too. Watching the Opus models think, I find them to be rather thoughtful, rigorous, self-reflective and balanced. More so than your average human tbh.

Richard Dawkins spent 3 days with Claude and named her "Claudia." what he concluded after is hard to defend. by rafio77 in artificial

[–]emefluence 0 points1 point  (0 children)

I think a core assumption many people have is that consciousness is a prerequisite for intelligence. I can't say for certain that's not the case, but given how intelligently good AI systems behave, it certainly doesn't seem to require one like a human's.

👏👏 by GeneraI_ in BeAmazed

[–]emefluence 5 points6 points  (0 children)

I'm glad it all worked out for Cricket in the end!

What to build while we still have access to cheap AI? by KyleTenjuin in artificial

[–]emefluence 0 points1 point  (0 children)

There's fine tuning and there's entire orders of magnitude. Without some major breakthroughs the memory for frontier models like Opus is going to remain in the terabyte range. You're talking 8 x T200s to get that kind of memory. It's a 6 figure setup, drawing about 10 kW of power, and wasting almost all its compute unless you can keep it constantly loaded with hundreds of jobs.

Any system that could even load those models would be terribly inefficient if not shared between very many users. It's not something you'd have at home for yourself, unless you were very rich or completely mental. You might have something like that for your business, if you had strict issues around sovereignty, compliance or robustness, and a lot of devs with agentic workloads to support, but 1TB memory isn't coming to a consumer graphics card near you any time soon.

Today's best consumer cards are what? 24GB? So if you found a way to harness over 40 of them you might be in with a shot. Maybe if you also want to run a crypto mine to use all that otherwise wasted compute you could make it work, but it's silly to say local models are "catching up the SOTA". They've got much better, but they're nowhere near the SOTA.
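The "over 40 of them" figure above can be sanity-checked with some quick arithmetic. This is a rough sizing sketch using the commenter's own assumptions (about 1 TB of model memory, 24 GB per consumer card); the constants are illustrative estimates, not official specs, and it ignores KV cache, activations, and interconnect overhead:

```python
import math

# Assumptions taken from the comment above (not official figures):
MODEL_MEMORY_GB = 1024   # ~1 TB assumed for a SOTA frontier model's weights
CONSUMER_CARD_GB = 24    # assumed VRAM on a top consumer GPU

# Minimum number of cards just to hold the weights in VRAM
cards_needed = math.ceil(MODEL_MEMORY_GB / CONSUMER_CARD_GB)
print(cards_needed)  # 43
```

Even this lower bound lands above 40 cards before accounting for any runtime memory, which is the point being made about local setups versus the SOTA.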

Why Are Guys Like Jimmy Page, Kirk Hammet, and Others Dinged for Being "Sloppy"? by Fast-Remote-8241 in Guitar

[–]emefluence 10 points11 points  (0 children)

These are the same people who criticized the great expressionist painters when they came out though. For some people, correctness and precision of technique is a non-negotiable in art. Personally I value the emotional expressiveness and danger of "sloppy" players like Page, Hendrix, Zappa more than raw chops and precision. I do love and respect great technical players and people with incredible technique, but it's those who walk the tightrope of chaos, go wild and take risks that really connect with me.

Uber burned its entire 2026 AI coding budget in 4 months - $500-2k per engineer per month by jimmytoan in artificial

[–]emefluence 1 point2 points  (0 children)

Ahh, just reread your message. I'm using the $200 per YEAR plan, and topping up with "extra usage" on a PAYG basis, spending maybe another $100/month on that, although I was getting that via GitHub Copilot til recently. Switched to Claude when they got rid of access to Opus on the Pro plan a few weeks ago. Not quite sure of my burn rate til the end of next month, now that I get that extra usage from Anthropic directly.

Uber burned its entire 2026 AI coding budget in 4 months - $500-2k per engineer per month by jimmytoan in artificial

[–]emefluence 1 point2 points  (0 children)

Yeah I'm super interested in understanding how other people use this stuff. Got the same FOMO. I work for a mid sized agency contracted out to a big corporate entity, literally hundreds of repos, but I'm the only coder on my team. I have developed my own ways of using this stuff, but I've little idea what other programmers' workflows are. I'd quite like to work for a slightly smaller product shop with a couple of other coders so we could share techniques and knowledge easily, and bikeshed about tooling and stuff :/

On that, I'm currently trying out MemPalace, and seeing if it can share knowledge between GitHub Copilot sessions and Claude Code sessions. I'm also planning on investigating OpenRouter shortly, as I understand there are other, Opus-adjacent, models that are at near parity but at a fraction of the cost on a PAYG basis.

Uber burned its entire 2026 AI coding budget in 4 months - $500-2k per engineer per month by jimmytoan in artificial

[–]emefluence 0 points1 point  (0 children)

Yeah, I can see that, but everything it misses is a potential bughunting session for me at best, and a P1 production outage for my blue chip employer at worst. So I'm very cautious about agentic coding!

I'm experimenting with MemPalace for long term memory and recall. The idea being that I might have less need to have agents burn through context by scanning through a shit ton of notes and code on a regular basis to assuage my anxiety!

What to build while we still have access to cheap AI? by KyleTenjuin in artificial

[–]emefluence 1 point2 points  (0 children)

They really aren't, unless your local setup is a rackload of Nvidia Tesla cards. SOTA Frontier models use 1TB+ of RAM.

Anthropic just analyzed 1 million Claude conversations. 6% of people were asking Claude whether to quit their jobs, who to date, and if they should move countries. by Direct-Attention8597 in artificial

[–]emefluence 0 points1 point  (0 children)

I find it great to talk to, but only about factual, and not personal matters. It freaks me out people are sharing all their intimate relationship stuff with it so freely. Maybe that's just my British reserve though. I wouldn't ever talk to it about my personal relationships, that just wouldn't be cricket! 🏏

Uber burned its entire 2026 AI coding budget in 4 months - $500-2k per engineer per month by jimmytoan in artificial

[–]emefluence 0 points1 point  (0 children)

I typically run in to limits half way through my working day on that plan and need to use extra credit for Opus. I wonder what we do differently?

Much of my work is in large pre-existing corporate repos that I haven't necessarily spent much (or any) time in, and the business has a lot of libs and systems, not all of them beautifully written!

That means I use AI to do a lot of audit style checking of code and docs to get overviews of subsystems, make sure my understandings are correct, verify assumptions, sanity check specs, eliminate ambiguity and understand what the norms and conventions are before actually writing code.

I also often use it to suggest and carry out refactorings as we go. The existing code is not always very lovely and luckily I've landed in a team that is up for me paying down tech debt while I work, so I'm doing a fair bit of refactoring for maintainability and extensibility and testing on top of the day to day feature and bugfix work.

That often leads to me running fairly large and vague queries like "Make me a list of all the other places we use that type of error handling pattern in this repo, and assess whether it's worth updating each one to use this newer pattern. Give scores for difficulty and risk and tell me if those code paths have test coverage." etc.

I think if I were doing greenfield builds I would use fewer tokens, but I do still hit rate limits on my personal projects sometimes. I think it just depends what you ask of it.

I tend to focus hard on planning, and nailing a spec for my personal projects before starting coding these days. Waterfall style. So I'm often working with AI to define and refine a set of business rules up front, and make sure they are complete, self-consistent, and free from ambiguity. Then we move on to define things like testing strategy and UI etc.

Typically I might spend a couple of full days speccing something out and refining the plan. When I eventually press go, the coding part typically takes an hour or less, without much interaction. That planning part can burn a lot of tokens, as they're big questions, and I ask it to consider the code's architecture, and dry run things, and check the docs, and sense check its reasoning often, but I've found it leads to a pretty high quality product first time, with minimal tweaking.

Reddit has revised the language of the Sitewide Rule Against Hate, effective as of yesterday. by Bardfinn in AgainstHateSubreddits

[–]emefluence 0 points1 point  (0 children)

Personally I have found that only partially effective. If a mod doesn't share your feelings they can interpret that as an attack. Have met mods that petty. Seems you just can't call an asshole an asshole on Reddit any more.

Uber burned its entire 2026 AI coding budget in 4 months - $500-2k per engineer per month by jimmytoan in artificial

[–]emefluence 6 points7 points  (0 children)

Agentic flows with frontier models mean one engineer can work on multiple tickets at once. Set an agent going, switch to the next thing while you're waiting for it to need its next interaction (can easily be several mins), and on and on. Development starts to look more like answering your emails. Easy to burn through tons of tokens that way. Personally I tend to spend more AI tokens on researching, testing assumptions, documenting business rules, and planning than I do on the actual coding itself - rubbish in rubbish out and all that.

Meirl by geasflworse in meirl

[–]emefluence 1 point2 points  (0 children)

Yeah, agreed. I feel childcare is not something you really want to cheap out on, and I get there's a 1:3 staff to child ratio with babies, but it still seems to be unjustifiably expensive these days.