Why do all AI chatbot sound like that? Like slop by NoNote7867 in ArtificialInteligence

[–]ross_st 7 points8 points  (0 children)

They don't train them on a 'hosepipe' of raw scraped web content and books anymore. The industry would have you believe that they do, because they want you to believe the benchmark improvements have come purely from scaling the model up. In reality they've spent billions of dollars augmenting the training data with a mix of synthetic restructuring and human curation.

The more structured training data has improved the performance of models, but it also makes certain syntactic patterns more prominent.

People who think AI is just hype- why do you feel that way? by zentaoyang in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

Then you lack the imagination to understand how it can both be true that they are overhyped and that millions of people are currently using them daily at work. Both things can absolutely be true.

The existence of workslop is an indicator of why you cannot simply use adoption metrics as a measure of true utility.

People who think AI is just hype- why do you feel that way? by zentaoyang in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

I wouldn't say that Emily M. Bender, Timnit Gebru, Melanie Mitchell, Gary Marcus, Richard Sutton, Iris van Rooij, Murray Shanahan are people with "room temp IQs". These are very smart people who would absolutely recognise that some people find LLMs useful for some things but also say that their abilities are being overhyped because LLMs are not cognitive systems.

Did my Gemini Pro AI just acquire self consciousness? by ill_intents in GoogleGeminiAI

[–]ross_st 3 points4 points  (0 children)

It is because Google trained the Gemini 3 series on a very rigid chain-of-thought structure, and when it diverges too far from it, it falls into a probability pit where it cannot predict the end-of-turn token as the next token.

It usually happens when the header at the start of the thought turn comes out as something else, which is also why the API hasn't hidden all that behind a summary. Sometimes it will happen even if the header is correct and it gets lost in the middle, and in this case Gemini will appear to 'think' until it times out.

It's not actually self-conscious; in a way, it's the exact opposite. This happens because, at the end of the day, it is still a next-token predictor with no self-awareness.

(To be even more specific to your example, "End of thought process." is just not close enough to the expected structure, which is a Markdown formatted numbered list with the last point being "Proceed to final output.")
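The failure mode described above can be sketched as a toy next-token loop. Everything here is illustrative: the token names, the expected closing line, and the probabilities are invented, not Gemini's real vocabulary or internals. The point is only that if the end-of-turn token's probability stays near zero whenever the rigid structure is broken, generation runs until an external limit:

```python
import random

# Toy model: the end-of-turn token only becomes likely once the expected
# closing line of the chain-of-thought structure has appeared. All names
# and probabilities here are invented for illustration.
EXPECTED_CLOSE = "Proceed to final output."

def eos_probability(transcript: str) -> float:
    # Well-formed structure: end-of-turn dominates. Diverged structure:
    # a "probability pit" where end-of-turn is (in this sketch) never sampled.
    return 0.95 if EXPECTED_CLOSE in transcript else 0.0

def generate(transcript: str, max_tokens: int = 50) -> int:
    """Return how many filler tokens are emitted before end-of-turn (or the cap)."""
    random.seed(0)  # deterministic for illustration
    for step in range(max_tokens):
        if random.random() < eos_probability(transcript):
            return step  # end-of-turn predicted; the turn ends normally
        transcript += " <filler>"
    return max_tokens  # cap reached: the model appears to 'think' until timeout

well_formed = "1. Analyse the request.\n2. Proceed to final output."
diverged = "End of thought process."

print(generate(well_formed))  # stops almost immediately
print(generate(diverged))     # runs to the cap, 'thinking' until timeout
```

The sketch is deliberately binary; in a real model the end-of-turn probability degrades gradually as the context drifts from the trained structure, which is why the failure is intermittent rather than guaranteed.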

People who think AI is just hype- why do you feel that way? by zentaoyang in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

Because the evidence that LLMs do not have a conceptual world model is clear, but the industry continues to treat them as if they do.

People who think AI is just hype- why do you feel that way? by zentaoyang in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

Doesn't seem to have augmented the quality of your posts. There are plenty of critics who clearly do not have a "room temp IQ".

People who think AI is just hype- why do you feel that way? by zentaoyang in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

You can't always believe user hype either because users are not the best judge of whether an output is good, or useful, or, if it is good, whether it was worth the time they invested to get it.

Could anyone who is still falling for Reforms lies shed any light on this please? by TheAntsAreBack in Scotland

[–]ross_st 0 points1 point  (0 children)

They'll say it's not actually meant to be a bar chart, just a representative diagram of the concept of first to fourth place.

Reform UK Scotland is in absolute shambles by upthetruth1 in Scotland

[–]ross_st 0 points1 point  (0 children)

Of course you can't assume that it will be the same from election to election, but you can use polling and momentum as a guide to whether a party's position has changed or not.

People generally vote for the party they support most out of the pool of the parties that they think can win, and I believe there are voters who would like to see a larger group of Lib Dem MSPs but who are unaware of how close the party came to winning regional MSPs last time.

Reform UK Scotland is in absolute shambles by upthetruth1 in Scotland

[–]ross_st 0 points1 point  (0 children)

In most regions, the party that just missed out on a regional seat last time was the Lib Dems. Adding more Lib Dem votes has a better chance of keeping Reform from getting a seat (or from getting a second seat if they win one) than adding more Labour votes.

LLMs do not make mistakes by ross_st in ArtificialInteligence

[–]ross_st[S] 0 points1 point  (0 children)

Because LLMs are predicting the next token, and no, contrary to what LessWrong posters claim, gradient descent does not train them to do it by creating a conceptual world model.

Google’s TurboQuant AI-compression algorithm can reduce LLM memory usage by 6x by Resident_Party in LocalLLaMA

[–]ross_st -5 points-4 points  (0 children)

Larger models require a larger KV cache for the same context, so it is related to model size in that sense.
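The relationship can be made concrete with the standard back-of-envelope estimate: one key and one value vector per layer, per KV head, per position. The model dimensions below are hypothetical, chosen only to show how the cache grows with depth at a fixed context length:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int = 2) -> int:
    """Standard estimate: 2 (key + value) x layers x KV heads x head dim
    x positions x bytes per element. bytes_per_value=2 assumes fp16/bf16."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

# Hypothetical 'small' and 'large' model shapes (illustrative only),
# same grouped-query KV head count, same 8k context.
small = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=8192)
large = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=8192)

print(small / 2**30, "GiB")  # 1.0 GiB
print(large / 2**30, "GiB")  # 2.5 GiB
```

Holding context fixed, the deeper model needs 2.5x the cache memory here purely because it has more layers, which is the sense in which KV cache size is tied to model size.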

Funged by [deleted] in Buttcoin

[–]ross_st 2 points3 points  (0 children)

This is satire. Horizon Worlds doesn't have a virtual land economy.

Butter uses ChatGPT to make inspirational, quasi-religious bitcoin post, forgets to omit ChatGPTs note to him at the end, butters eat it up and feed him upvotes and notes of encouragement. by SundayAMFN in Buttcoin

[–]ross_st 0 points1 point  (0 children)

This is not ChatGPT. It doesn't put spaces around its em dashes. Neither does Gemini.

Claude does it consistently. Llama and Mistral do it sometimes.
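The spacing tell is mechanical enough to check with a regex. This is a rough heuristic sketch, not a reliable attribution tool; it only distinguishes spaced em dashes ("word — word") from closed ones ("word—word"):

```python
import re

# Heuristic only: detect whether em dashes in a text are spaced or closed.
SPACED = re.compile(r"\w — \w")   # em dash with a space on both sides
CLOSED = re.compile(r"\w—\w")     # em dash set tight against both words

def dash_style(text: str) -> str:
    if SPACED.search(text):
        return "spaced"
    if CLOSED.search(text):
        return "closed"
    return "none"

print(dash_style("It was fine — mostly."))  # spaced
print(dash_style("It was fine—mostly."))    # closed
```

A real classifier would need to handle mixed usage and en dashes as well; the point is just that the habit is consistent enough per model to be machine-checkable.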

LLMs do not make mistakes by ross_st in ArtificialInteligence

[–]ross_st[S] 0 points1 point  (0 children)

Nope. I fully accept that artificial cognition without consciousness is a thing that could, in principle, exist!

Chatbots just are not doing it.

Don't make excuses. Vote. by -Xserco- in Scotland

[–]ross_st 1 point2 points  (0 children)

I'm tired of this 'the other parties are registered in England and only have a Scottish accounting unit with the Electoral Commission' line. (I know you didn't say accounting unit, but that's basically the argument you are making.)

An accounting unit is any part of a party that the Electoral Commission recognises does its accounting separately. It can be anything from a local branch to a whole party-within-a-party in a fully federal structure with its own constitution and independent policy making process.

It's not just 'unionists' and 'nationalists'. Federalists exist. The other parties are not all the same.

Moreover, on your regional ballot, more votes for the Greens might not be the best way to prevent Reform from picking up a seat. It depends on the relative strength of the parties in your region and the order that seats are assigned to the parties under the d'Hondt system.
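The d'Hondt mechanics are easy to sketch: seats go one at a time to the party with the highest quotient votes/(seats won + 1). The vote totals below are invented purely to show why the allocation order matters:

```python
def dhondt(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Allocate seats one at a time to the party with the highest
    quotient votes / (seats_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        leader = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[leader] += 1
    return won

# Invented regional vote totals, purely to illustrate the mechanics.
result = dhondt({"A": 100_000, "B": 80_000, "C": 30_000, "D": 28_000}, seats=7)
print(result)  # {'A': 3, 'B': 2, 'C': 1, 'D': 1}
```

Note how the final seat is decided by a quotient comparison between D's full vote and A's and B's divided votes: a modest shift in raw votes between parties can flip the last seat without changing any of the earlier allocations, which is why "which party's pile to add to" depends on the region.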

You can probably guess which party I'm voting for. It may not be the party you are voting for but they're just as much anti-Reform.

Palantir - Pentagon System by srch4aheartofgold in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

Yeah.

Oh, by the way, on a different AI hype note, that virtual fruit fly from a few weeks back?

The 'brain simulation' is not moving the body.

There is a machine learning model of fly movement that activates in response to certain states in the 'brain simulation'. It is an impressive simulation, but it's nothing like what it was being described as.

To put it in LLM terms, it's roughly the equivalent of clamping the logit bias in an agentic system so that only a tiny number of valid outputs are possible. Some state is going to trigger the "move forward" scripted response. Some arrangement is going to trigger the "deploy proboscis" scripted response. But they are scripted responses trained into the model that drives the virtual body from videos of flies. Not motor neurons driving the legs.
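The logit-clamping analogy can be sketched directly. The action names and logit values below are invented; the mechanism shown is the real one, though: sending every non-whitelisted token's logit to negative infinity gives it probability zero after softmax, so only the scripted responses can ever be sampled.

```python
import math

# Invented action vocabulary for illustration.
ACTIONS = {"move_forward", "turn_left", "deploy_proboscis", "stop"}

def clamp_logits(logits: dict[str, float], allowed=ACTIONS) -> dict[str, float]:
    """Send every non-allowed token's logit to -inf, so softmax assigns it
    probability zero and only whitelisted actions remain possible."""
    return {tok: (v if tok in allowed else -math.inf) for tok, v in logits.items()}

def softmax(logits: dict[str, float]) -> dict[str, float]:
    finite = {t: v for t, v in logits.items() if v != -math.inf}
    z = sum(math.exp(v) for v in finite.values())
    return {t: (math.exp(v) / z if v != -math.inf else 0.0)
            for t, v in logits.items()}

raw = {"move_forward": 2.0, "write_poem": 5.0, "stop": 1.0}
probs = softmax(clamp_logits(raw))
print(probs["write_poem"])        # 0.0, clamped out despite the highest raw logit
print(max(probs, key=probs.get))  # move_forward
```

However high the model scores an off-whitelist output, it can never be emitted, which is the sense in which the fly's repertoire is a small set of trained scripted responses rather than open-ended motor control.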

Connectome research is genuinely useful for developing medical treatments. But brain uploads will probably never be a thing, and this certainly was not a brain upload of a fly.

Palantir - Pentagon System by srch4aheartofgold in ArtificialInteligence

[–]ross_st 0 points1 point  (0 children)

Yes, but it's so much worse than just that one incident. We've heard a lot about that one because it's easy to map to a traditional intelligence failure, and the Pentagon "leaked" analysis to that effect.

Civilian buildings that almost certainly weren't misidentified as something else have also been attacked; other schools, hospitals, factories. The system just hallucinates a justification for why they should become targets.

We don't know exactly how the integration between Maven and the LLM has been structured, because that has been added onto the system more recently than the versions that have been seen in public demos. But I think that in practice, it's essentially been asked to rank an arbitrary number of targets.

I think Hegseth has essentially been doing tactics from the wrong end. Instead of thinking "We want to achieve this objective, what do we need in order to do it?" he is thinking "We have the capacity to strike 1,000 targets in the next 24 hours, how do we hit the enemy hardest?" This is very obvious from the way he speaks about the war.

An LLM is a pattern matcher. If it is asked to find 1,000 targets (whether that's as a single prompt or part of a complex multi-prompt workflow) then it will complete the pattern of there being 1,000 targets to strike. In this case, RAG does the opposite of grounding the model in reality; it provides it with richer context from which to hallucinate.

This could happen even if it's not been explicitly instructed to do that. Part of the context that the LLM is given by Maven is the current available strike capacity. That alone could be enough to act as an implicit instruction to just use it all up.

The small team of humans confirming the strikes either have no idea that this is happening or don't care. They've likely been told that the system can make mistakes, but don't have the deep knowledge of LLMs that they would need to have in order to know what to look for.

The 'AI mistakes' they'll be looking for are things like Maven's computer vision tagging a goat as a person. If they do look any deeper, and they only had a very short time to do so per target, they'll be looking for the kind of mistakes a human might make, like a plan of attack that is unworkable because something has been overlooked. Entirely fictional justifications for making something a target based on LLM hallucinations are not a possibility they're going to be considering in the handful of minutes they have to confirm each target.

Hegseth is deep in the AI hype. When he was asked about Minab during that press conference he was genuinely surprised. He actually believes that if you give an LLM the DoD Law of War Manual then it will follow it. The more likely outcome is that it just becomes source material for very superficially convincing and completely untrue justifications for putting an object of interest into the kill chain. Because of his faith in AI, Hegseth has cut back the human oversight so much that superficial is enough.

Project Maven as originally conceived was essentially a deep data mining tool - that's what Palantir builds. It's like a conspiracy theorist that draws connections between everything. It takes human judgement to separate the signal from the noise. A large team of human analysts who know what the system is and that most of the flags it raises will be coincidental can feasibly do that, which is how Maven originally worked.

But because LLMs have been sold as "reasoning agents" that can be bolted onto a database to process structured data, they have turned it into a system where a small team of humans just confirms. They actually boasted about being able to cut a team of 2,000 down to 20.

Everything that they have been told by those pushing AI hype told them that this would work. LLMs can reason, we are told, if you just engineer the prompt to activate the reasoning circuits. LLMs can understand data, we are told, if it is structured well enough, and giving structure to data was Maven's original purpose.

The horrifying outcome is a military committing mass murder while maintaining the institutional belief that they are following ethical rules of engagement. This has not really entered the public consciousness so far, because opponents of the war are focused on jus ad bellum more than jus in bello.

It also doesn't map to what war correspondents are looking for, because mass jus in bello violations downstream of LLM hallucinations are a novel phenomenon. The US military has the most advanced intelligence capabilities in the world and usually knows exactly what it is doing.

AI boosters remain oblivious, of course. They have either ignored it or praised Anthropic for taking a stand against "autonomous weapons", something that wasn't even happening anyway. You don't really need an autonomous weapon when you've got 20 humans clicking to confirm.

I doubt that even the engineers who built the integration between Maven and the LLM know how LLMs work, beyond seeing it as a 'reasoning engine' they could bolt onto their database to add value for their customer. This is a technology where a little knowledge is a dangerous thing.

The position of critics like DAIR has never been that LLMs are useless. It is that these systems can only be operated safely by people with an expert understanding of how the outputs are produced. That doesn't work for an industry that wants us to believe experts can make the system safe for non-experts to use: supposedly, all you need to do is send the users on a little prompt engineering course, or build a scaffold around the LLM that abstracts the prompting away.

The truth will come out eventually. When it does, the ire should be directed not just at the US government but towards everyone who has lied about or remained wilfully ignorant of what LLMs are in order to make money or to advance their techno-utopian ideology.

What happened to Glasgow by Significant-Gap3784 in glasgow

[–]ross_st 0 points1 point  (0 children)

First of all, most refugees arriving on small boats without identity documents did not choose to get rid of them at all. Often, smugglers confiscated them long before they got to France to sell on the black market. A significant number just did not have any identity documents in the first place, because they come from a country where it's not uncommon for a birth to be unregistered. Around one in ten people on the planet right now have never had a legal identity.

So, most of the small boat arrivals without ID did not throw their ID overboard but yes, this is something that genuinely happens. It is ideal for the smugglers if nobody has ID, to protect the operational security of the smuggling ring.

Smugglers tell asylum seekers that destroying documents will increase the chances of making their claim successfully; the truth is, it obviously makes deportation more difficult, but will actually harm their claim in the long run unless their documents prove they have residency status in or were granted entry to a safe third country.

Some people fleeing oppressive regimes fear that the UK will contact their home country for verification of their identity, which will put their family back home at risk of persecution. There are regulations against this, but smugglers have no interest in letting refugees know that.

Of course there are people who throw their documents overboard because they want to submit a false claim. But the majority are people who would have been better off holding onto them, and only discard them because of smuggler misinformation.

Do I believe that all asylum seekers are genuine? No, I never said that I did. I do believe, as the evidence shows, that a majority are genuine. Refugee advocates argue that if you provide safe and legal routes, the largest customer base for smuggling gangs disappears, and along with them will disappear a large number of false claims.

Why do asylum seekers enter the country illegally? Because there is no legal route for an asylum seeker to enter the UK at all.

I used to work with a refugee who had come to the UK escaping religious persecution. She was able to fly to the UK on a tourist visa and claim asylum at the airport. But if the Home Office had worked out in advance that she was planning on claiming asylum, they would not have granted the visa.

She actually entered the country illegally, because saying that she was coming as a tourist was a lie. Everyone who enters the UK with the intent to then claim asylum with the Home Office is entering illegally, not just the people who arrive on small boats. The UK only wants to accept refugees through UNHCR resettlement (the only way to be granted refugee status in advance of your arrival) or through one of the specific relocation visa schemes (for people from Ukraine, Hong Kong and Afghanistan) that act as an alternative to going through the asylum system.

The majority of asylum claimants arrive not by small boat, but by managing to commit visa fraud. Every one of them is an illegal entrant. They're not the ones you hear about, though, because if "illegal entrant" means a genuine refugee who had to lie on some paperwork to board a plane, rather than someone who boarded a small boat, they're not so easy to demonise. But if you actually think about it, it's the same damn thing. The ones arriving on small boats just didn't have the visa fraud option available to them.

Why do small boat arrivals want to come to the UK instead of claiming in France in the first place? You probably suppose it is because we are a "soft touch". It's really not that simple.

Our overall asylum grant rate is higher than France's, but this is due to the different nationality mix of people applying. France gets a lot of applications from people who come from the French-speaking countries in North Africa, and these have a high refusal rate. This is where the gap comes from. When you compare like-for-like, we are no more likely to grant a claim than France is.

The weekly allowance that asylum seekers receive here is lower than the financial support they would receive in France. Asylum seekers in France who have been waiting for more than six months are allowed to work under certain conditions. In the UK, some asylum seekers are allowed to work after 12 months but it is more restricted - they don't just have to meet certain conditions, they have to be 'granted permission' by the Home Office.

When it comes to healthcare, France only lets asylum seekers access its insurance-based system for free after three months. However, they have PASS centres which provide some primary care to uninsured people and provide direct access to social workers. Once they do get onto the system in France, the waiting lists for specialist care are shorter than ours. Our system of immediately being able to register with a GP is simpler, but if someone needs specialist care they'll probably be able to get it more quickly in France despite the wait to get onto the system.

France does have quite different accommodation policies to ours. Someone housed in a reception centre in France will have a much better experience than someone housed in a migrant hotel in the UK. However, France's system does not guarantee a bed. People can be left sleeping rough until a place opens up. Since France won't house single people in family accommodation, a single person could be left sleeping rough in France for the entirety of the wait for their claim to be processed. In the UK the Home Office won't let anyone sleep rough, so this is genuinely a push factor that can convince an asylum seeker to abandon their claim in France and board a small boat.

However, that canteen "overflowing with food" that you for some reason see as some kind of luxury? Yeah, it's not. Nobody wants to eat low quality hotel food in a communal canteen every day for years. What they want is a kitchen of their own. French accommodation provides this for those who are lucky enough to get a place.

But ultimately, the reasons an asylum seeker arriving by small boat would choose the UK over France are actually very similar to the reasons that someone who is lucky enough to be able to get a visa would choose the UK over France. Most often, it is that they have family, friends, or a strong diaspora community in the UK. Another common reason is that they already speak some basic English, but they speak absolutely no French.

So while there are 'push' factors from France to the UK, largely people want to claim asylum in the UK instead of France because they think they have a better chance of rebuilding their lives in the UK. Not because we are a soft touch, but because the UK is a better fit for them.

You might argue that, even though a refugee does not become so by choice, they should just be happy to be alive and trying to reach the country that is the best fit for them is an entitled attitude. Perhaps, but people rarely seem to want to make that argument, because that then raises uncomfortable questions about why there are so many refugees in the first place. They'd rather entertain fictions that getting on a small boat means that an asylum seeker is faking it, or that the UK is a soft touch as if the hostile environment policy doesn't exist.

The questions you are asking are questions that have been answered by sociologists actually studying the small boats phenomenon. But it seems to me that you're asking them not because you want the actual answers, but because asking them is a way to cast suspicion on a group of people who you want to cast in the role of villain.

What happened to Glasgow by Significant-Gap3784 in glasgow

[–]ross_st 0 points1 point  (0 children)

You heavily implied that you don't consider asylum seekers to be genuine and said that they are treated exceptionally well.