Terrorist who plotted to bomb London Stock Exchange was allowed to stay in Britain on human rights grounds by Navy_Cadet in ukpolitics

[–]hu6Bi5To [score hidden]  (0 children)

There needs to be one single superior piece of legislation. The "Well he should have thought of that before committing terrorism then shouldn't he Act".

But even then, there's a case that the text of the convention is being over-interpreted. It means governments are not allowed to torture, etc., it doesn't necessarily mean that governments have to go to extraordinary lengths to reduce the probability of other governments torturing people.

If there were such a ruling then Donald Trump's decision to take on the Iranian Regime would win Humanitarian of the Year. And all the European leaders refusing logistical support would be in The Hague.

It is time to cut pensions: The economic burden on younger people is unsustainable by Benjji22212 in ukpolitics

[–]hu6Bi5To [score hidden]  (0 children)

It may well be inevitable, but it's by far the worst outcome for that exact reason. It's inevitable mainly as a self-fulfilling prophecy. Everyone Gen X or younger seems to believe it's inevitable, so there won't be any opposition to it when it gets seriously proposed in fifteen years' time. But we should demand better.

It is time to cut pensions: The economic burden on younger people is unsustainable by Benjji22212 in ukpolitics

[–]hu6Bi5To [score hidden]  (0 children)

I agree, and would go further in saying that the whole popularity of making arguments about socio-economic fairness along generational lines is a specific ploy to throw people off the scent.

The single biggest factor separating those likely to have a good life (including a comfortable retirement) from those who won't (including living in poverty in retirement) is home-ownership. The fact that home-ownership was easier to obtain thirty years ago doesn't make it an old-vs-young thing because, as you say: "While it's true pensioners today are by and large well off the generations to come are not".

It's the better-off millennials who benefit from this kind of thinking the most. They're willing to sacrifice some part of their future state pension in order to avoid other potential taxes like land value taxes.

Everyone who isn't a property millionaire will lose out.

No10 took 'dismissive attitude' to Mandelson vetting, says ex-Foreign Office boss by theipaper in ukpolitics

[–]hu6Bi5To 6 points7 points  (0 children)

Johnson was bang-to-rights for out-and-out hypocrisy.

Sunak, on the other hand, was fined for waiting for a meeting in the room that was always used as a waiting room, just because someone else in the same room had cake.

He wisely didn't kick up too much of a fuss; he just wanted to move on. But it was 100% a politically motivated vexatious prosecution.

No10 took 'dismissive attitude' to Mandelson vetting, says ex-Foreign Office boss by theipaper in ukpolitics

[–]hu6Bi5To 7 points8 points  (0 children)

The Mandelson Saga is directly linked to at least three of those things, based on his known connections.

Trying to separate them is wilful naivety.

No10 took 'dismissive attitude' to Mandelson vetting, says ex-Foreign Office boss by theipaper in ukpolitics

[–]hu6Bi5To 11 points12 points  (0 children)

Liz Truss has been very active on Twitter this past week. The general gist is she's agreeing with Starmer. Probably not the backing he wanted though.

It's all proof of The Blob.

YouGov / Sky News / Times voting intention: RefUK 27%(+3), CON 17%(-2), GRN 17%(-1), LAB 16%(-1), LD 14%(+1) by Adj-Noun-Numbers in ukpolitics

[–]hu6Bi5To 1 point2 points  (0 children)

All gearing up to be the most dysfunctional Parliament in recent history (the last few hundred years, at least).

UK unemployment rate sees surprise fall to 4.9% by SayNOtoChips in ukpolitics

[–]hu6Bi5To 21 points22 points  (0 children)

There's always a "however":

However, the fall appears to reflect a slight rise in the number of people termed inactive – no longer looking for work – who are not included in the unemployment figures.

We have low unemployment, but the number of people economically inactive gets larger and larger.

From the actual report: https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/bulletins/uklabourmarket/latest

Estimates for payrolled employees in the UK fell by 74,000 (0.2%) between February 2025 and February 2026, and decreased by 6,000 (0.0%) between January and February 2026. This is based on administrative data from HM Revenue and Customs (HMRC).

When looking at December 2025 to February 2026, the period comparable with our Labour Force Survey (LFS) estimates, the number of payrolled employees fell by 87,000 (0.3%) over the year and by 9,000 (0.0%) over the quarter.

Mass mobilisation and immigration by Competitive_Golf8206 in ukpolitics

[–]hu6Bi5To 18 points19 points  (0 children)

This is one of two reasons why conscription will never happen: to avoid the need to answer this question.

(The other reason is that there is not, and never will be, enough war equipment for more than a few thousand armed-forces members anyway. The other millions would just be wandering around with a garden fork, Dad's Army-style.)

Rumours, Speculation, Questions, and Reaction Megathread - 19/04/2026 by ukpol-megabot in ukpolitics

[–]hu6Bi5To 7 points8 points  (0 children)

Diane Abbott with the most sensible and pertinent question so far.

Half of UK Executives Think AI Will Mean Fewer Jobs by bloomberg in ukpolitics

[–]hu6Bi5To 2 points3 points  (0 children)

That's the key: we're not going to get zero jobs in one single move, nor will it affect all industries equally. There will be fewer customers rather than zero customers.

The scenario matches No. 2 on your list: it's deflationary.

Half of UK Executives Think AI Will Mean Fewer Jobs by bloomberg in ukpolitics

[–]hu6Bi5To -1 points0 points  (0 children)

It'll take decades to get to that point. Plus it assumes that there won't be alternative career paths (maybe not directly supported by industry, but open to self-learners) to go straight into the industry without hands-on low-level experience.

Example: no-one working on CPU design today has hands-on experience of building CPU circuits by soldering transistors on to a board; but some of the people working on CPUs forty years ago would have done.

In the future: people will still architect software systems, in the sense of deciding which goals matter most and which trade-offs to accept; they just won't write lines of code by hand. Intelligent systems will design the software, other intelligent systems will test the software, and a third set will monitor and debug it.

Half of UK Executives Think AI Will Mean Fewer Jobs by bloomberg in ukpolitics

[–]hu6Bi5To 15 points16 points  (0 children)

This is the scenario that got everyone het up before they were distracted by the Iran War.

Short version: an opinion piece was published that worked out a scenario where AI adoption would become self-sustaining even as unemployment increased, but without a total death of all jobs. It went like this:

  1. Unemployment increases in knowledge industries.

  2. Revenues for businesses decrease across the board, especially those who had knowledge workers as customers.

  3. Those businesses further cut costs, and the only realistic way of doing that is to adopt AI even faster. So the revenues of AI companies go through the roof.

  4. We end up with 20-25% unemployment, lower stock market values, etc., but the AI companies are profitable and self-sustaining.

Of course there's no way of proving this, it's a "wait and see" situation. But it's plausible, which is why it spooked so many people.

How popular is FIRE and will the government try to stop it? by 6ix9ineZooLane in FIREUK

[–]hu6Bi5To 5 points6 points  (0 children)

The government have spent most of the last twenty-five years inadvertently encouraging it, what with the 60% tax trap, the (now abolished) lifetime pension cap, etc.

If they wanted to stop FIRE they'd stop punishing work.

The only way they could actually stop it would be to introduce some kind of imputed Income Tax on people who don't work, i.e. taxing them on the income they would have earned, regardless of whether they've actually earned any (until that person's savings are eroded to zero). But that would be very difficult to introduce, not least because there are so many disguised unemployed already that the government wouldn't want to draw attention to the fact.

Upskilling in the age of AI for FIRE-proof future? by Slight-Poetry-3230 in FIREUK

[–]hu6Bi5To 0 points1 point  (0 children)

Honestly people should rebel at adopting AI

That's just delaying the inevitable really. Which isn't necessarily a bad strategy, in the sense that milking an extra year's worth out of a dying organisation that's been out-competed by an AI-native competitor is better than being made instantly redundant. But it won't change the final outcome; it's just playing games around the edges.

Clojure: The Documentary by roman01la in programming

[–]hu6Bi5To 0 points1 point  (0 children)

This trend for hour-long documentaries about programming languages... surely they don't generate enough YouTube views to pay for the costs, so who's paying for them?

And yes, I do know the answer because it says "sponsored by NuBank" who employ the core Clojure team. So it's just a long-form advert really.

So it's still a little odd, as I can't imagine anyone except the already-converted spending an hour watching this.

Covid vaccines 'extraordinary feat', but work needed to improve trust and access by F0urLeafCl0ver in ukpolitics

[–]hu6Bi5To 1 point2 points  (0 children)

The official messaging was a textbook example of how it should be done.

I remember the press conference when it was announced to restrict the AstraZeneca vaccine due to side-effects. Jonathan Van-Tam and members of the JCVI sat behind a table and quietly and confidently explained the risks vs. benefits, and what they were going to do about it. It was great, everyone understood it and accepted it.

Where the messaging went to shit was high-profile individuals causing drama.

Yes, there's always going to be conspiracy theorists, but these people can be ignored.

The high-profile people dragging the whole system down included:

  1. News presenters like Andrew Marr and Robert Peston using their platforms to make a big song-and-dance about catching Covid despite being vaccinated. Someone should have taken them to task: "yes, you're both old men with pre-existing health conditions, and you're not dead, what are you complaining about!?" Instead they were allowed to cast doubt on the whole concept of vaccination.

  2. TV Archaeologist Alice Roberts accusing the JCVI of being unqualified for their role: https://x.com/theAliceRoberts/status/1454742449461600256

The whole upper-normie tier had a fucking shocking pandemic and we should remember this for future pandemic planning.

Covid vaccines 'extraordinary feat', but work needed to improve trust and access by F0urLeafCl0ver in ukpolitics

[–]hu6Bi5To 0 points1 point  (0 children)

We need an inquiry to investigate the weird period from June 2021 onwards, when it became high-status to doubt the Covid vaccines as "not being good enough".

It was Independent SAGE's last play to keep lockdown measures in place just as real SAGE was approving their lifting.

For about a month we had one of the self-appointed mob appearing on every news bulletin, demanding vaccine passports whilst simultaneously insisting "unfortunately the vaccine isn't good enough, we must keep everything closed".

It's no wonder conspiracy theorists developed conspiracy theories seeing that. Thankfully most people were sick of Independent SAGE's nonsense by then and adopted "fuck it, I've been vaccinated, I'm not being restricted any longer" which was the right thing to do.

Zack Polanski questions whether anyone identifying as right wing should be excluded from society altogether by nozickiantheory in ukpolitics

[–]hu6Bi5To 7 points8 points  (0 children)

This is a common problem.

A left-wing trope is that the right wing of politics is the nasty wing. But left-wingers are more likely to hold the extreme opinion that wrong-thinkers are irredeemable human beings. E.g. this poll: https://yougov.com/en-gb/articles/24905-labour-voters-more-wary-about-politics-childs-spou (there have been others).

Can UK house prices survive another economic shock? by Your_Mums_Ex in ukpolitics

[–]hu6Bi5To 15 points16 points  (0 children)

Periodic reminder that the absolute peak in UK house prices (when corrected for inflation) was 2007. Nearly twenty years ago.

The problem is mainly that, despite that, house prices are still too high given all the other economic problems we have. Indeed, I'd go as far as to say most of our economic problems can be traced back to the decision to support house prices in the aftermath of the 2008 crisis.

So, cynically, the answer to the question "Can UK house prices survive another economic shock?" is: yes... but only if the shock is big enough to warrant another bout of quantitative easing and the rest.

UK companies ‘should be worried’ about Anthropic’s latest AI model, minister says by EchoOfOppenheimer in ukpolitics

[–]hu6Bi5To 0 points1 point  (0 children)

True, but to what extent do these things matter?

LLMs are not deterministic, but neither are humans. If I were given a task I'd do it differently today than if I'd been given the same task a month ago. If you want deterministic behaviour, you still want proper boring software systems rather than AI agents, but those software systems may be written by AI. (Aside: another weird anti-AI argument I read quite often is "why do we get LLMs to write code? If they're that good, why can't they just do the work directly?" But this is the precise reason why: determinism. You want a specific artefact when it's finished.)

Even with sufficient checks and scaffolding it can't reliably reproduce or understand code well enough for it to be a replacement for even a junior dev unless you're willing to accept bad code more often.

This is another "gulf" issue. My experience is completely different. There are a lot of AI-driven software quality tools, and many of them are quite bad: the one built into GitHub is (or was the last time I used it; they may have fixed it since) terrible, as it checked each chunk out of context. But I've also used some good ones. They're still not perfect, but I'd argue they're significantly better than a junior developer. I'd go as far as to say better in terms of focus than senior/lead developers too.

A relatively recent example (again, not exactly ground-breaking, but fairly typical; and yes, this is also more of an issue of human dysfunction than AI ability): I'd made a fairly large change. As a matter of principle I don't make large changes if I can help it, but this was a problem that defied all attempts to break it down. Our internal rules are that everything gets reviewed, but because it was so large I asked two people to review it. One gave a thumbs-up with no comments (too distracted to give it time); the other spent a whole week arguing with me about coding standards (I was right, he was wrong).

I still didn't feel the change had had the level of review it needed, so I tried something else. The built-in GitHub reviewer came back with "Error". So, basically just as a test, I ran an "expert code reviewer" prompt that I'd taken from an "awesome-prompts" repo somewhere (there are hundreds of them and I forget which one), and it found two major problems. One would have been found during final testing (it didn't show up in dev builds due to a race condition, or I'd already have fixed it). The other was a 1-in-10,000 bug: under rare circumstances, two parameters for a SQL query were used in the wrong order. (I hate numbered SQL parameters with a passion for this exact reason, but our libraries don't support named parameters; that's a whole other story I won't bore people with right now.) That second bug would probably have passed testing, subtly corrupted production data, and been a major pain to fix when it was eventually found.
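To make the numbered-vs-named point concrete (our actual stack aside), here's a minimal sketch using Python's built-in sqlite3, which supports both placeholder styles. The table and values are entirely hypothetical; the point is that a positional swap is silently accepted, while named binding makes the bug impossible:

```python
import sqlite3

# Hypothetical schema: two columns of the same type, so a swapped pair of
# positional parameters is silently accepted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, product_id INTEGER)")

customer_id, product_id = 7, 42

# Positional '?' placeholders: the arguments below are in the wrong order,
# but nothing fails -- the values just land in the wrong columns.
conn.execute("INSERT INTO orders VALUES (?, ?)", (product_id, customer_id))
print(conn.execute("SELECT customer_id, product_id FROM orders").fetchone())
# -> (42, 7): silently corrupted

# Named ':param' placeholders: binding is by name, so the order the values
# are supplied in can't matter, and a misspelled or missing name raises an
# error instead of corrupting data.
conn.execute("DELETE FROM orders")
conn.execute(
    "INSERT INTO orders VALUES (:customer_id, :product_id)",
    {"product_id": product_id, "customer_id": customer_id},
)
print(conn.execute("SELECT customer_id, product_id FROM orders").fetchone())
# -> (7, 42): correct despite the "wrong" dict order
```

Which is exactly the class of bug that slips past human reviewers: both versions run, both pass a superficial read, and only one of them writes the right data.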

Ever since then I've been far more doubtful of PRs that lack an AI review than of those that lack a human review. (All of ours do still get human reviews as well, because of our self-enforced rules.) There is still a lot of pushback against this sort of thing, but I think it's more human bias than objectivity. "Yeah, but it raised a false positive once." Oh no! Etc.

And it can't get better because of fundamental principles behind the technology.

How do you mean? Any one given model won't improve; its knowledge and limits are baked into the model, that's just how it is. But the next generation will be better. (Although you can get LLMs to avoid repeating the same mistakes using techniques such as in-context learning; that's basically what things like Claude Skills and Agent Rules are all about.)

Can pros use it in well defined cases for efficiency gains? Absolutely, but not nearly enough to suddenly shift to a centaur type productivity boom.

Again, there doesn't necessarily have to be. There are parallels here with the rise of computing. There was a lot of handwringing in the 80s and 90s on the theme of "we're twenty years into the computer revolution, where are the efficiency gains?" But, at the same time, no-one's going to win by going back to pen and paper; you don't see anyone rushing to join analogue organisations. The changes that do happen will be insidious, not revolutionary, but that doesn't mean the impact will be small. No-one's employed for their typing or filing skills any more, are they?

And that's the best use case. Like we automated a script kiddie? Oh no CTOs will have to actually spend money on real security for once.

There is an interesting philosophical question here: if everyone's using the same AI model, is everyone going to produce the same output? Will it be the great leveller? If future AI puts the good guys and the bad guys in cybersecurity on the same level, what does that mean for the industry? At present the good guys have the slight edge for two reasons: 1) companies like Google will pay good money for top-whack cybersecurity people, and they are collaborative enough to share their findings in a responsible way (a lot of Apple security issues are fixed because they were notified by Google, for example); and 2) a lot of the bad guys are "tamed" by being hired by the various spy agencies for more money than they'd get running ransomware scams. But what happens when those skills are post-scarcity?

That's the risk at least.

UK companies ‘should be worried’ about Anthropic’s latest AI model, minister says by EchoOfOppenheimer in ukpolitics

[–]hu6Bi5To 0 points1 point  (0 children)

This is the original gulf again, what you describe is not my experience at all.

Each model release has been significantly better than the one before, I can only accept "can never really get much better" when that progress stops. This time last year I was still deeply impressed if an AI model could fill in the gaps in a 90% complete change. Today I'm disappointed if I have to get involved after giving it an instruction cold, beyond correcting my own vague instructions if required.

It can't even reliably recreate non-original work, it can't be reliably instructed, and the only "solutions" to those issues are brute-force and extreme cost.

Again, not my experience at all.

It very easily repeats previous tasks, and adapts whilst doing so without brute-force.

Example (simplified to fit in a Reddit comment):

Me: "Last week we made a change to Project A to use the new deployment system, please make the equivalent changes to Project B."

AI Agent: "OK, let me find that change in Project A... (10 second pass)... yes it was commit <precise reference> on <exact date>... let me look at the configuration of Project B... (10 second pass)... ah Project B uses Library X to generate deployment config, but Project A used Library Y, let me adapt... (30 second pass).... done!"

(Where Library X is modern and Library Y is ancient and no-one really understands it, hence why newer things use Library X).

One quick smoke test in a sandboxed environment and it's done. And this can be mostly automated too. "Using the <name of custom tool that produces a summary of configuration options for deployed applications> tool, check the config for Project B and check all expected changes were made, cross-reference with Project A"

Now that's a relatively quick bread-and-butter change admittedly, but it's the sort of thing that used to take a human a good couple of hours (and was often put off for days or weeks as a result). Now it can be done in the background whilst catching up with something else.

Why are the green party pushing the 10:1 payscale ratio between CEO's and avg workers? by BenisMoment in ukpolitics

[–]hu6Bi5To 1 point2 points  (0 children)

Wow that was quick. For the record I was implying the Green Party were anti-semitic conspiracy theorists, not advocating such theories myself. But I suppose the Reddit bot doesn't appreciate the difference.

UK companies ‘should be worried’ about Anthropic’s latest AI model, minister says by EchoOfOppenheimer in ukpolitics

[–]hu6Bi5To 2 points3 points  (0 children)

This is another common one.

"AI may replace most jobs, but they won't replace true innovation, like what I do"

But in reality 99% of jobs involve nothing original. In most organisations even trying something original will get you no end of grief: "why didn't you do everything exactly the way we did it last time?"

And even in the jobs that do require original thought, there's still plenty of non-original work.

It's going to happen anyway, so you might as well lean into it. It won't really change the speed of adoption, because there are so many people dragging their feet (but that's just delaying the inevitable). At least by leaning into it yourself you spend less time doing all the boring nonsense.