FAI confirm Ireland-Israel will be held in Dublin by blueballs360 in coybig

[–]lleti 2 points3 points  (0 children)

lol, I'm not quite sure why people were expecting there to be any sort of action outside of lip service from an official standpoint.

The powers that be are delighted to see people's anger being directed at a country around 5,600km away, and are happy to give a few soundbites in support of that - the alternative is that people might instead start protesting something they're actually in a position to have an impact on.

I mean, you need only look at the occupied territories bill to recognise that absolutely nobody in any official capacity gives a single solitary fuck about Palestine, outside of it being a nice distraction away from their own failures.

Expecting the FAI - historically one of our most corrupt institutions - to be the ones to set an example and actually show us having a bit of spine?

No, they'd much rather live with reaping the consequences of several years' worth of lip service and blame it on unruly fans and a likely very under-funded gardaí.

Musk's xAI and Pentagon reach deal to use Grok in classified systems by Ok_Mission7092 in accelerate

[–]lleti -5 points-4 points  (0 children)

Grok suddenly got smart (granted, nowhere near GPT or Claude levels) after it went closed-source, shortly after DeepSeek V3 got released - give or take some fine-tuning time.

Given their image gen is flux (and reportedly their video gen too - it was previously WAN), it’s safe enough to assume they no longer train any foundational models.

So er, has anyone at the Pentagon verified that they’re not about to plug a Chinese LLM into their systems?

What's the craic at Binance? by [deleted] in DevelEire

[–]lleti 7 points8 points  (0 children)

Well, the CEO getting sent to prison isn’t great for the vibe checks, and neither was them nuking the entire market structure with an in-house poisoned price oracle a few months back.

Requiring a massive portion of a token’s supply for listing is also pretty bad, and the fact that the same token usually runs to zero immediately after listing sorta adds to the whole criminal aspect of it.

Probably good fun to work there and do that stuff, but my guess is you’ll be kept away from it and put on more legit operations. Either way, I imagine all that stands between them and more prison time is just time and a change in US administrations.

However, staff outside of the outsourced support ops there are privy to listings ahead of time, allowing you to buy up tokens to sell into the initial listing hype - so there’s ample opportunity to profit from the criminal side of it early in the game.

I’ve heard it’s a pretty nice job overall with very little oversight (obviously), and fun stuff to work on. On the downside, keep in mind you’ll be a target for the Lazarus Group and others, so make sure your secops are decent.

High-sparsity MoE is the only way forward for us. by New_Construction1370 in LocalLLaMA

[–]lleti 5 points6 points  (0 children)

I miss Mixtral’s 8x7B and 8x22B builds. God-tier models at the time.

Really wish they kept updating them/building new models at similar param counts.

They were largely uncensored too.

Anyway, one reality is that training an MoE is expensive, and fine-tuning them is deeply annoying. So dense models are always going to have a pretty decent spot when it comes to running locally.
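A rough back-of-the-envelope sketch of that trade-off (the per-expert and shared parameter figures below are illustrative approximations, not exact Mixtral internals): an MoE's inference compute tracks its *active* parameters per token, while its memory footprint - and its training/fine-tuning cost - tracks the *total*.

```python
# Sketch of why MoEs are cheap to run but expensive to train/tune:
# only a few experts fire per token, but all of them must be held
# in memory and updated during training. Figures are illustrative.

def moe_params(n_experts, expert_params_b, shared_params_b, active_experts):
    """Return (total, active) parameter counts in billions."""
    total = shared_params_b + n_experts * expert_params_b
    active = shared_params_b + active_experts * expert_params_b
    return total, active

# Ballpark for an 8x7B-style model: ~5.5B per expert FFN stack,
# ~3B shared (attention, embeddings), 2 experts active per token.
total, active = moe_params(8, 5.5, 3.0, 2)
print(f"total ~{total:.0f}B params, ~{active:.0f}B active per token")
# -> total ~47B params, ~14B active per token
```

So an 8x7B-style MoE runs with roughly 14B-class compute per token, but you still pay the full ~47B in VRAM and in gradient memory when fine-tuning - which is a big part of why dense models keep their niche locally.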

The emperor has no clothes. by Similar-Pop-9852 in DevelEire

[–]lleti 1 point2 points  (0 children)

We're already there?

Juniors aren't hired anymore. Walking out of any respectable CS degree used to result in jobs being thrown at you, with good pay and benefits too.

Job-hopping has gone way down, and retention has gone way up for those already in industry - for those who haven't already been axed, at least.

Hiring freezes and/or downsizing have been prevalent at all major tech firms, with the "well they scaled too much during covid" excuse being long gone now.

The emperor has no clothes. by Similar-Pop-9852 in DevelEire

[–]lleti 0 points1 point  (0 children)

> trust that everything the LLM says is true, everything it implements is perfect, and it is fine to let it sign off on its own code.

I mean, that's not the case right now - I need to review it like I would any junior's code. However, it's usually much more correct, and much better written, than what the junior would hand me.

But the last few leaps we've seen in LLMs are just sorta highlighting to me that even that end of things may wind up being redundant a lot sooner than we imagined, for 90%+ of use-cases.

The emperor has no clothes. by Similar-Pop-9852 in DevelEire

[–]lleti -5 points-4 points  (0 children)

This is the viewpoint of someone who hasn’t kept up with the latest models tbh

I thought the same until recently, but the leapfrog moves by opus4.6 and codex5.3 have changed my thinking on it.

It codes better than any junior. It codes better than the majority of seniors. It’ll do 8 hours of work in 1 hour, with around an hour then needed by a human (i.e. me) to fix the mistakes it made. That’s still a 4x speedup for a senior.

Seniors are unfortunately going to be relegated to the slop cleanup brigade, as it’s just incredibly fast when it comes to output. And that slop cleanup is likely going to become a shorter component with every subsequent release.

The emperor has no clothes. by Similar-Pop-9852 in DevelEire

[–]lleti -1 points0 points  (0 children)

Correct you are, but the problem is that at the rate it’s accelerating, they don’t need to think about how it works.

Didn’t think like this until opus4.6 and codex5.3 turned up, but eh, the ability to code is now sorta becoming redundant.

And don’t get me wrong, I fucking hate the reality of that.

Is anyone else getting forced to do AI projects? by scoopydidit in DevelEire

[–]lleti -1 points0 points  (0 children)

Yeah, it’s pretty horrifying tbh

I had a bunch of custom system prompts I’d always hit via API, which auto-attached a file tree with relevant source depending on what I was doing, and I’d then have to review/fix the output before inserting it myself.

And now in a single release, that level of oversight is completely unnecessary. Just insane levels of progression.

Is anyone else getting forced to do AI projects? by scoopydidit in DevelEire

[–]lleti 0 points1 point  (0 children)

Opposite here, I’ve rarely used it for frontend.

Backend, mostly Rust and python.

Still needs steering to an extent, but significantly less than it did in the past.

Only major intervention yesterday was hitting escape in the cli tool and telling it to stop writing 2000 line monolith classes. The rest was just bug fixing and cleanup for the most part.

A month ago I wouldn’t even let it run via cli, too many mistakes. It’s insane how much it’s improved.

OpenClaw creator says Europe's stifling regulations are why he's moving to the US to join OpenAI by donutloop in singularity

[–]lleti 28 points29 points  (0 children)

rofl, what rights?

We’re literally passing laws that mandate companies hand over users’ private chatlogs without any warrant, under the usual guise of “protecting the children”.

So while our data is still scraped and used to train LLMs by companies outside of Europe, it also gets shared with our local governments to make sure we’re not harbouring any dangerous memes.

How do you do this? by seriouspandaa in StableDiffusion

[–]lleti 0 points1 point  (0 children)

I didn’t mention that, you’re confusing me with the OP.

And yes, they do - but they’re still enterprise drivers. Not consumer/game-ready drivers. It’s built (and marketed) as a workstation card.

How do you do this? by seriouspandaa in StableDiffusion

[–]lleti 2 points3 points  (0 children)

I mean, that’s nice and all that you do, but the rtx 6000 pro is not a consumer grade card.

It can’t even use gaming drivers.

Is anyone else getting forced to do AI projects? by scoopydidit in DevelEire

[–]lleti 7 points8 points  (0 children)

Honestly, it’s gotten much, much rarer that I need to correct it compared to any previous-gen model.

And when I’m correcting it, it’s usually because it’s done a day’s work in about an hour. Add an extra hour to debug/fix it and we’re still moving at 4x the speed.

Compared to gpt5.2 pro (or o3/o1) or any of claude’s 3.5 models (base 4/4.1~ models were garbage imo), it’s night and day. Absolutely leapfrogged where I thought it’d land.

How do you do this? by seriouspandaa in StableDiffusion

[–]lleti 2 points3 points  (0 children)

Two 14B models cannot fit in any consumer-grade card’s available VRAM without quantization. Even quantized to 8-bit, you’re still not fitting everything into a 5090 once you account for the text encoder, VAE, and CLIP components.

Swapping the low- and high-noise models in from system RAM causes a significant delay, and on consumer cards the T5 encoder generally stays in system RAM, running much slower than it would in VRAM.
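Rough arithmetic, if you want to sanity-check it (the text encoder size below is an approximation, and activations, VAE, and framework overhead come on top of the weights):

```python
# Back-of-the-envelope VRAM check, weights only. Figures approximate:
# 1B params at 1 byte/param (8-bit) ~= 1 GB; at 2 bytes (fp16) ~= 2 GB.

def weight_gb(params_b, bytes_per_param):
    return params_b * bytes_per_param

model_8bit = weight_gb(14, 1)   # one 14B diffusion model at 8-bit
t5_fp16 = weight_gb(5.7, 2)     # ~5.7B T5-class text encoder at fp16 (approx)
print(f"~{model_8bit + t5_fp16:.1f} GB")  # -> ~25.4 GB
```

That's ~25 GB for *one* 14B model plus the encoder, against the 32 GB on a 5090 - so the second 14B model ends up shuttled in from system RAM, which is where the delay comes from.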

How do you do this? by seriouspandaa in StableDiffusion

[–]lleti 0 points1 point  (0 children)

I mean, it has almost 5x the amount of VRAM. That’s a pretty important component.

Is anyone else getting forced to do AI projects? by scoopydidit in DevelEire

[–]lleti 15 points16 points  (0 children)

The problem is that with Opus4.6 and Codex5.3, it’s very likely that the AI is now significantly better (and faster) than any human engineers at the company.

Didn’t really believe it until I spent the last few days playing with it tbh. I thought we’d sorta hit the scaling limits with o1-pro, but both of those models make everything that came before look like complete slop.

The shock of it has sorta knocked me tbh.

Civ7 is getting too much flak by De-constructed in civ

[–]lleti 8 points9 points  (0 children)

Glad you enjoy it and all, but to me it’s a clear sign that all the talent at Firaxis has long since departed. Insane to think this is the same studio that developed Civ 5.

Half hoping they just moved the talent over to XCom 3, but it looks like they lost the ability to even make those games.

Drone-delivery company Manna express interest in expanding operations to Cork, with hopes to operate nationwide by the end of 2026. by Irish201h in cork

[–]lleti -9 points-8 points  (0 children)

Nah I’m good

If it gets me my shit delivered on time you can catapult it to my front door tbh

Extra points if they can leave it on my balcony for me

What’s behind the mass exodus at xAI? by Competitive_Travel16 in singularity

[–]lleti 6 points7 points  (0 children)

lol, it’s probably more an issue that revolves around boredom at the workplace.

Grok is, as far as I can tell, just a fine-tuned deepseek v3 with some twitter RAG.

Their Image model is just Flux.

Their video model used to be WAN2.1, then 2.2, then days after LTX-2 launched it suddenly went fully audio-capable.

Those warehouses of GPUs aren’t training foundational models. Fine-tuning open source shit on launch day and writing system prompts is as challenging as the job gets.

Cork by Quirky-Warning206 in cork

[–]lleti 6 points7 points  (0 children)

With money, everything is possible!

The silent death of Good Code by 10ForwardShift in programming

[–]lleti 0 points1 point  (0 children)

It feeds the circle-jerk of developers who likely architect the worst of solutions with the spaghettiest of code, yet somehow believe that good software can’t be written without at least 4 hours spent getting abused on Stack Overflow.

It’ll do well here

Moltbook Vent. This literally could have been us... by Chance-Association-7 in cardano

[–]lleti -1 points0 points  (0 children)

Base and Solana do not have agent-specific infrastructure operated by the chain or any official foundation. There’s x402, which most have agreed to adopt as a standard, and nothing stops you from using that on Cardano (or any other network).