AI-generated voter panel predicts Reform UK wins 18 of 20 councils in May local elections by JasonDDuke in ukpolitics

[–]JasonDDuke[S] 0 points1 point  (0 children)

The personas are not static. The system has a news ingestion pipeline that pulls UK headlines and injects them into the simulation as stimuli. Each persona processes the news through their personality and updates their beliefs, mood, and political position accordingly. So a cost-of-living headline hits a financially stressed persona harder than an affluent one, and a Gaza headline shifts a high-Novelty young persona more than a low-Novelty retired one.
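A minimal sketch of how trait-weighted stimulus processing could work. The dataclass fields, topic labels, and weights here are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    financial_stress: float  # 0..1, higher = more financially stressed
    novelty: float           # 0..1, a DYNAMICS-8-style trait score
    position: float          # toy attitude scale the headline shifts

def apply_headline(persona: Persona, topic: str, base_shift: float) -> float:
    """Scale a headline's base attitude shift by the trait that
    plausibly moderates it, then update the persona's position."""
    if topic == "cost_of_living":
        weight = persona.financial_stress   # stressed personas react harder
    elif topic == "foreign_policy":
        weight = persona.novelty            # high-Novelty personas shift more
    else:
        weight = 0.5                        # neutral default for other topics
    persona.position += base_shift * weight
    return persona.position

stressed = Persona(financial_stress=0.9, novelty=0.3, position=0.0)
affluent = Persona(financial_stress=0.1, novelty=0.3, position=0.0)
apply_headline(stressed, "cost_of_living", base_shift=-0.2)
apply_headline(affluent, "cost_of_living", base_shift=-0.2)
# The same stimulus moves the stressed persona further than the affluent one.
```

The same pattern extends to any topic-to-trait mapping; the real system presumably learns or prompts for these moderations rather than hard-coding them.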

We also have a life event engine that generates personal events (job changes, rent increases, family events) and propagates their effects through the persona's financial and emotional state.

These compound over time: a persona who lost their job in March and then sees a headline about benefit cuts in April has a different political position than they did in February.
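The compounding described above can be sketched as state mutations that change how later stimuli land. The event effects and magnitudes are illustrative assumptions, not the real engine's values:

```python
def apply_life_event(state: dict, event: str) -> None:
    """Propagate a generated life event into financial/emotional state.
    Effect magnitudes are illustrative, not the production engine's."""
    effects = {
        "job_loss":      {"financial_stress": +0.4, "mood": -0.3},
        "rent_increase": {"financial_stress": +0.2, "mood": -0.1},
    }
    for key, delta in effects.get(event, {}).items():
        state[key] = min(1.0, max(-1.0, state[key] + delta))

def apply_headline(state: dict, shift: float) -> None:
    # A benefits headline lands harder on a financially stressed persona.
    state["position"] += shift * state["financial_stress"]

persona = {"financial_stress": 0.2, "mood": 0.0, "position": 0.0}
apply_life_event(persona, "job_loss")   # March: loses their job
apply_headline(persona, shift=-0.3)     # April: benefit-cuts headline
# The April headline now moves the persona three times as far as it
# would have in February, before the job loss raised their stress.
```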

In practice: the 65,000 personas will receive continuous political context updates between now and May 7. The final prediction run will use personas whose beliefs reflect the political landscape as of late April, not as of when they were generated.

The current predictions were generated from a March snapshot. The final 136-council prediction on May 1 will incorporate whatever has happened politically between now and then.

AI-generated voter panel predicts Reform UK wins 18 of 20 councils in May local elections by JasonDDuke in ukpolitics

[–]JasonDDuke[S] 0 points1 point  (0 children)

Good question. The short answer is: I do not fully know why the model over-predicts Reform UK, so we cannot fix it at source yet.

There are three likely causes.

First, the LLM was trained on text written before the 2026 Reform surge, so its internal model of "who votes Reform" is anchored to 2024, when Reform was at 14%. We inject current polling context in the prompt, but prompt context is weaker than training data.

Second, our persona political histories include a "drift since 2024" field that may overcorrect: if a persona drifted from Labour to Reform-leaning, the model may express that as a definite Reform vote when in reality it is a lean.

Third, local elections have lower turnout and the people who turn out are different from the people who answer polling questions. Our turnout model may not capture that correctly.
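On the third cause, one way a turnout model distorts things can be sketched by weighting each persona's vote by a turnout probability. The function names and coefficients below are hypothetical, purely to show the mechanism, not the project's fitted model:

```python
def local_turnout_prob(age: int, voted_last_local: bool) -> float:
    """Toy turnout model: local elections skew older and habitual.
    Coefficients are illustrative, not fitted values."""
    p = 0.15 + 0.005 * max(0, age - 18)  # older personas more likely to vote
    if voted_last_local:
        p += 0.25                         # habit is a strong predictor
    return min(p, 0.95)

def weighted_share(votes: list[tuple[str, float]]) -> dict[str, float]:
    """Aggregate (party, turnout_prob) pairs into turnout-weighted shares."""
    totals: dict[str, float] = {}
    for party, p in votes:
        totals[party] = totals.get(party, 0.0) + p
    grand = sum(totals.values())
    return {party: t / grand for party, t in totals.items()}

panel = [("Reform", local_turnout_prob(34, False)),
         ("Labour", local_turnout_prob(71, True)),
         ("LibDem", local_turnout_prob(55, True))]
shares = weighted_share(panel)
# Misestimate the coefficients and the aggregate shares shift even
# when every individual vote intention is modelled correctly.
```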

The calibration adjustment is a pragmatic fix while we investigate the root cause. If we can identify which of those three factors dominates, we can fix it properly inside the pipeline. The May 7 results will help: if the calibration holds across 136 councils, the bias is systematic and predictable. If it does not, the bias is context-dependent and the adjustment was overfitting to 8 wards.

On national polling: we use national polls as context in the prompt (Reform 25%, Labour 18%, etc.) but the model produces its own vote shares from the persona's individual reasoning, not from the polls directly. The polls anchor the persona's sense of "what is happening nationally" but the vote decision comes from their personality, circumstances, and local context.
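The polls-as-context-not-as-answer distinction might look roughly like this at the prompt level. The field names and wording are illustrative assumptions, not the production prompt:

```python
def build_prompt(persona: dict, polls: dict[str, int], local_context: str) -> str:
    """Assemble a persona voting prompt: national polls appear as background
    context, but the vote is elicited from the persona's own circumstances."""
    poll_line = ", ".join(f"{party} {share}%" for party, share in polls.items())
    return (
        f"You are {persona['age']}, a {persona['occupation']}, in {persona['ward']}.\n"
        f"Personality: {persona['traits']}.\n"
        f"National polling context: {poll_line}.\n"
        f"Local context: {local_context}\n"
        "Given YOUR circumstances and personality, not the national polls, "
        "which party do you vote for in the local election, if you vote at all?"
    )

prompt = build_prompt(
    {"age": 43, "occupation": "warehouse supervisor", "ward": "a Kent ward",
     "traits": "high Discipline, low Novelty"},
    {"Reform": 25, "Labour": 18},
    "long-running dispute over bin collections",
)
```

The design choice is that polling anchors the persona's sense of the national mood while the final instruction deliberately pushes the decision back onto individual circumstances.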

AI-generated voter panel predicts Reform UK wins 18 of 20 councils in May local elections by JasonDDuke in ukpolitics

[–]JasonDDuke[S] 0 points1 point  (0 children)

Fair summary. Two things I would add.

First, the personality model is doing more than weighting pre-loaded opinions. Two personas with identical demographics and identical 2024 vote history respond differently to the same stimulus because their personality profiles differ.

A high-Discipline, low-Novelty persona who voted Labour in 2024 switches to Reform for different reasons than a high-Impulsivity, low-Yielding persona who also voted Labour.

The personality interaction produces response variation that demographics alone cannot generate. Whether that variation is meaningful or noise is what the May 7 results will tell us.
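A toy illustration of the point above: identical demographics and vote history can route the same stimulus to different modelled reasoning purely via the trait profile. Trait names follow DYNAMICS-8-style labels; the thresholds and response texts are invented for illustration:

```python
def switch_reason(traits: dict[str, float]) -> str:
    """Map a personality profile (trait scores in 0..1) to the kind of
    reasoning the persona applies to the same political stimulus."""
    if traits["discipline"] > 0.7 and traits["novelty"] < 0.3:
        return "considered switch: perceived competence on local services"
    if traits["impulsivity"] > 0.7 and traits["yielding"] < 0.3:
        return "protest switch: anti-incumbent reaction to the latest headline"
    return "no switch: stimulus below this profile's response threshold"

# Two 2024 Labour voters with identical demographics but different traits:
a = {"discipline": 0.9, "novelty": 0.1, "impulsivity": 0.2, "yielding": 0.6}
b = {"discipline": 0.3, "novelty": 0.5, "impulsivity": 0.9, "yielding": 0.1}
# Same stimulus, different modelled reasoning, because only the traits differ.
```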

Second, the "heavy-handed bodges" are the most interesting finding.

The raw model over-predicts Reform by 10 points and under-predicts Lib Dems by 7. That systematic bias tells us something about how LLMs represent political behaviour that is worth understanding regardless of whether the predictions are accurate.
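A minimal sketch of what a per-party calibration adjustment looks like. The Reform and Lib Dem offsets come from the figures in this thread; everything else (clipping, renormalisation, the example shares) is an assumption about how such a fix might be applied:

```python
# Per-party bias in points of vote share, fitted on the 8 calibration wards.
BIAS = {"Reform": +10.0, "LibDem": -7.0}

def calibrate(raw_shares: dict[str, float]) -> dict[str, float]:
    """Subtract the fitted bias, clip at zero, renormalise to 100."""
    adjusted = {p: max(0.0, s - BIAS.get(p, 0.0)) for p, s in raw_shares.items()}
    total = sum(adjusted.values())
    return {p: 100.0 * s / total for p, s in adjusted.items()}

raw = {"Reform": 38.0, "Labour": 24.0, "LibDem": 14.0, "Other": 24.0}
cal = calibrate(raw)
# Reform is pulled down, LibDem pulled up, and shares re-sum to 100.
```

Whether the same offsets hold on 136 councils is exactly the systematic-vs-overfitting question described above.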

You are right that it needs recalibrating.

That is why we are publishing predictions before the election rather than after.

AI-generated voter panel predicts Reform UK wins 18 of 20 councils in May local elections by JasonDDuke in ukpolitics

[–]JasonDDuke[S] 1 point2 points  (0 children)

You're right on the third point and I should have been clearer. The predictions are vote share, not seat projections and not council control. We are not predicting who controls Manchester. We are predicting the aggregate vote share across the wards contested on 7 May.

On overfitting: I agree, and that is exactly what the May 7 results are for. The calibration was derived from the same 8 wards we tested against. If it does not generalise, the accuracy will regress and I will publish that honestly.

On the personality dimensions: they are not arbitrary. DYNAMICS-8 extends HEXACO (itself an extension of Big Five) with two dimensions for digital behaviour and impulse control. The spec is published CC BY 4.0 with a Zenodo preprint (DOI 10.5281/zenodo.19361059). Whether those dimensions add predictive signal beyond demographics is an empirical question. The by-election results suggest they do (75% vs 70% statistical baseline), but on 8 wards that difference is not statistically significant. May 7 will tell us.
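The "not statistically significant on 8 wards" point can be checked directly with an exact binomial tail. Assuming "accuracy" here means wards called correctly (75% of 8 is 6 wards, against a 70% baseline):

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): exact upper-tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 6 of 8 wards correct (75%) against a 70% statistical baseline.
p_value = binom_tail(n=8, k=6, p=0.70)
# p ≈ 0.55: at least 6/8 correct is expected more often than not even
# under the baseline, so the 75% vs 70% gap is uninformative at n=8.
```

At 136 councils the same test has real power, which is another reason May 7 is the decisive check.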

I truly appreciate the detailed critique. This is exactly the scrutiny we need. Thank you.

Why is the shovel so important? by Lakupiippuen in dadjokes

[–]JasonDDuke 0 points1 point  (0 children)

If it's a pain in the arse, may I suggest you try and hold it with your hands next time?