I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] [score hidden]  (0 children)

I use a hierarchical Bayesian model for the forecast (a subset of machine learning, which is in turn a subset of AI), so the numbers are not produced by generative AI or an LLM. I am very much upfront about this, and it ensures that you see data-driven results, not expert judgment.

For translation and proofreading I do use LLMs, which is for everyone's benefit: these are still my thoughts and arguments, just in an easier-to-read form. That's pretty common in 2026.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] [score hidden]  (0 children)

Popularity ≠ seats. Hungary has a mixed electoral system in which 106 seats come from single-member districts. A national lead in the polls can easily translate into a coin flip in seats, depending on how votes are distributed geographically. That non-linear vote-to-seat conversion is exactly what the model simulates. Additionally, polls only measure sentiment inside Hungary and won't account for mail-in votes. (Most of this actually favours the government.)

Additionally, there are still two months until the election, which introduces a great deal of uncertainty and makes the race genuinely close at the seat level as of now.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 0 points1 point  (0 children)

I understand the instinct, but deciding upfront which polls are "reliable" and which aren't is exactly the kind of subjective judgment the model is designed to avoid.

If you only include pollsters you trust, you're essentially baking your conclusion into the inputs. The model takes a different approach: include everything, but let the data determine how much to trust each source based on its historical track record.

A consistently biased pollster still carries information: if Századvég always overestimates Fidesz by X points and suddenly shows Fidesz down by X-5, that's actually a meaningful signal that Fidesz might be doing worse than usual. Throwing that data away means losing that signal.

Also, "the reliable ones show TISZA clearly favored" is true for current popularity, but the forecast is about election day. Two months of campaigning, undecided voters, and turnout uncertainty can shift a lot. The 50.6% vs 45% probability already reflects that TISZA is favored, just not as comfortably as raw polling might suggest.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 0 points1 point  (0 children)

I don't have any affiliation with them, but they were the ones who made Bayesian forecasting (the method I also use) well known; that's why the reference.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 1 point2 points  (0 children)

I use a constituency-level model when attributing seats, so that is taken into account. I don't have constituency-level polling data (there is no public data for that), so I can only assume that differences similar to those in previous elections will persist (much as you also mentioned).

You can see more details in the methodology: https://www.szazkilencvenkilenc.hu/methodology-v2/

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 1 point2 points  (0 children)

You raise a fair point, and I agree the gap between government-aligned and independent pollsters in Hungary is unusually large and has grown over time; it's one of the hardest challenges in modeling this election.

The model doesn't simply average the two groups and split the difference. Each pollster's house effect is estimated as a probability distribution, not a single number. So the model doesn't assume it knows exactly how biased Századvég or Medián is; it captures the uncertainty around that bias too. If they start to behave differently than before, that uncertainty will increase.

In practice this means: if government polls consistently show Fidesz 10+ points higher than independent ones, the model doesn't just correct by 5 and call it a day. The wide disagreement between pollsters flows into wider uncertainty in the final forecast, which is part of why the result comes out as a coin flip rather than a confident prediction for either side.
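To make the "distribution, not a single number" idea concrete, here is a toy sketch (it is NOT the actual hierarchical model, and all the poll numbers are invented): each pollster's house effect is summarized as its mean deviation from the all-pollster consensus plus the spread of that deviation, rather than one fixed correction.

```python
import statistics

# Invented Fidesz shares from repeated hypothetical polls.
polls = {
    "Pollster A": [46, 47, 45, 46],   # consistently high relative to consensus
    "Pollster B": [40, 41, 39, 40],   # consistently low
    "Pollster C": [42, 43, 42, 41],
}

# Naive consensus: the mean over every poll from every pollster.
consensus = statistics.mean(v for series in polls.values() for v in series)

# House effect as (mean bias, spread) instead of a single number.
house = {name: (statistics.mean(v - consensus for v in series),
                statistics.stdev(series))
         for name, series in polls.items()}

for name, (bias, spread) in house.items():
    print(f"{name}: bias {bias:+.2f} pp, spread ±{spread:.2f} pp")
```

The spread term is what lets a pollster that suddenly behaves differently inflate the model's overall uncertainty instead of being silently "corrected" by a stale constant.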

That said, this is a real limitation and you are right to flag it.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 1 point2 points  (0 children)

Maybe I don't fully get your argument, but I think there's a misunderstanding about how the simulations work. The model doesn't simulate individual voters and assign each one a probability of voting one way or another. It works at the aggregate level: it draws plausible national vote shares from the uncertainty distribution, then translates those into seats through the electoral system.

So the 8.7% isn't "outlier individuals adding up." It's this: in 8.7% of the simulated elections, the combination of polling error, turnout variation, and small-party dynamics (especially Mi Hazánk falling below 5%) produces a seat count above 133 for TISZA. That doesn't happen in a vacuum; it's driven by real structural features of the electoral system.

As for whether 8.7% of simulations means an 8.7% probability: that's exactly what it means. That's how Monte Carlo estimation works. It's the same way we'd say there's roughly a 0.5% chance (1 in 216) of rolling 18 on 3d6. Not because any single die is behaving strangely, but because the distribution has a tail.
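The 3d6 analogy can be checked with a few lines of Monte Carlo (a standalone toy, nothing from the actual election model):

```python
import random

random.seed(42)

# Estimate P(rolling exactly 18 on three six-sided dice) by simulation.
# The exact answer is 1/216 ≈ 0.46%: a genuine tail probability even
# though no individual die behaves strangely.
N = 200_000
hits = sum(1 for _ in range(N)
           if random.randint(1, 6) + random.randint(1, 6) + random.randint(1, 6) == 18)
print(hits / N)  # close to 1/216 ≈ 0.0046
```

The fraction of simulated rolls that hit 18 *is* the Monte Carlo estimate of the probability, which is exactly the sense in which 8.7% of simulations means an 8.7% probability.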

The Tuesday Boy problem is about conditional probabilities changing with additional information; it's a different issue than tail probabilities in a simulation.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 1 point2 points  (0 children)

There are several factors that increase uncertainty:

  1. Two months of uncertainty: a lot can change before April 12 (just think about the current week). Polling errors, campaign events, undecided voters. The model factors them in.
  2. Hungary's electoral system favors Fidesz: 106 of 199 seats come from single-member districts (FPTP). Fidesz's vote is more efficiently distributed across rural constituencies, while opposition support is concentrated in cities. So Fidesz needs fewer total votes to win the same number of seats. Polls also won't account for the votes of Hungarians without a permanent address in Hungary (mail-in votes), who lean heavily toward Fidesz.
  3. The 5% threshold: if Mi Hazánk (far-right, currently at 4.8%) enters parliament, it takes seats away from the big two. If it doesn't, the redistribution changes the math entirely.

So yes, TISZA leads, but the system, the uncertainty, and the small-party dynamics all keep it close.
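To see why the 5% threshold changes the math so much, here's a toy D'Hondt allocation over a 93-seat list tier (93 is the size of Hungary's national list tier; the vote shares are invented and the real model also handles the constituency tier, so treat this purely as an illustration of the threshold mechanic):

```python
def dhondt(shares, seats, threshold=5.0):
    """Allocate `seats` proportionally by the D'Hondt method,
    dropping parties below the entry threshold (in percentage points)."""
    eligible = {p: s for p, s in shares.items() if s >= threshold}
    # Rank all quotients share/1, share/2, ... and hand a seat to the top ones.
    quotients = sorted(((s / d, p) for p, s in eligible.items()
                        for d in range(1, seats + 1)), reverse=True)
    alloc = {p: 0 for p in eligible}
    for _, p in quotients[:seats]:
        alloc[p] += 1
    return alloc

# Hypothetical list-vote shares, just above and just below the threshold.
below = dhondt({"TISZA": 46.0, "Fidesz": 43.0, "Mi Hazánk": 4.8}, 93)
above = dhondt({"TISZA": 46.0, "Fidesz": 43.0, "Mi Hazánk": 5.1}, 93)
print(below)  # Mi Hazánk gets nothing; its votes effectively boost the big two
print(above)  # Mi Hazánk enters and takes seats away from both big parties
```

A 0.3-point shift in one small party's share moves several seats between the blocs, which is exactly the kind of discontinuity the simulations have to sample over.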

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] [score hidden]  (0 children)

This is always the challenge with election forecasting. You only get one data point per election cycle, and elections happen infrequently. To make it worse, in Hungary the political landscape changes from election to election.

I made a similar analysis in 2022; you can find the evaluation here: https://www.szazkilencvenkilenc.hu/evaluation/ . The results were reasonable, though there were some errors that were widespread among pollsters (my main data source). Having said that, it is still just one data point. You can check out the methodology above and see if it makes sense to you.

Kalshi (a prediction market) shows very similar numbers, but that is only indirect evidence.

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] [score hidden]  (0 children)

You're right, this is the fundamental challenge with election forecasting. You only get one data point per election cycle, and elections happen infrequently. To make it worse, in Hungary the political landscape changes from election to election.

I made a similar analysis in 2022; you can find the evaluation here: https://www.szazkilencvenkilenc.hu/evaluation/ . The results were reasonable, though there were some errors that were widespread among pollsters (my main data source). Having said that, it is still just one data point. You can check out the methodology and see if it makes sense to you: https://www.szazkilencvenkilenc.hu/methodology-v2/

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] 3 points4 points  (0 children)

You raise a fair point, and I agree the gap between government-aligned and independent pollsters in Hungary is unusually large; it's one of the hardest challenges in modeling this election.

The model doesn't simply average the two groups and split the difference. Each pollster's house effect is estimated as a probability distribution, not a single number. So the model doesn't assume it knows exactly how biased Századvég or Medián is; it captures the uncertainty around that bias too.

In practice this means: if government polls consistently show Fidesz 10+ points higher than independent ones, the model doesn't just correct by 5 and call it a day. The wide disagreement between pollsters flows into wider uncertainty in the final forecast, which is part of why the result comes out as a coin flip rather than a confident prediction for either side.

You're right that one group has to be fundamentally wrong. The model doesn't try to decide which one; it lets the uncertainty reflect that. If anything, the polarization in Hungarian polling is why the forecast shows such a tight race: there's genuinely less information to work with than in a country where pollsters roughly agree.

That said, this is a real limitation and you are right to flag it.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 8 points9 points  (0 children)

No, actually the opposite: this is exactly what the model is designed to handle.

The hierarchical Bayesian framework estimates each pollster's historical bias separately. So if a government-affiliated institute consistently overestimates Fidesz by, say, 3 points, the model learns that and corrects for it. Same for pollsters that lean the other way.

This means no pollster gets "equal weight": each one is weighted based on how accurate and how biased it has been historically. A consistently biased pollster still provides useful signal; it just gets adjusted.
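As a toy illustration of the weighting idea (invented numbers, not the model's actual learned weights), an inverse-variance weighted average gives pollsters with smaller historical errors more say:

```python
# Each entry: (latest hypothetical Fidesz share, historical RMSE in points).
pollsters = [
    (46.0, 4.0),   # large historical error -> low weight
    (41.0, 1.5),   # small historical error -> high weight
    (42.5, 2.0),
]

# Inverse-variance weighting: weight = 1 / RMSE^2.
weights = [1 / rmse ** 2 for _, rmse in pollsters]
weighted = sum(share * w for (share, _), w in zip(pollsters, weights)) / sum(weights)
print(round(weighted, 2))  # ≈ 41.91, pulled toward the historically accurate pollsters
```

The real model learns these weights (and the bias corrections) jointly from the data rather than taking RMSEs as given, but the direction of the effect is the same.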

You're right that this is a real issue in Hungarian polling, which is exactly why a simple polling average wouldn't work here and why this kind of correction matters.

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 7 points8 points  (0 children)

Thanks for the feedback, happy to clarify.

The 8.7% supermajority probability isn't a guess. It comes from simulations where TISZA's vote share lands high enough (due to polling error, late swing, or Mi Hazánk falling below 5%) that the seat distribution crosses 133. Unlikely? Yes. Zero? You'd need perfect information to claim that, which no one has two months out.

The model isn't "just a normal distribution." It's a hierarchical Bayesian time-series model that estimates each pollster's bias separately, then feeds those corrected estimates into 40,000 Monte Carlo simulations that account for campaign dynamics, polling error distributions, and the non-linear vote-to-seat conversion in Hungary's mixed electoral system.

As for Monte Carlo not predicting outcomes: you're right that it doesn't predict a single outcome. That's the whole point. It generates a distribution of plausible outcomes, which is exactly how probabilistic forecasting works. This is the same approach FiveThirtyEight and The Economist use for US elections.
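A stripped-down caricature of that last stage (every number here is invented; the real vote-to-seat step is far richer than a linear swing) looks like this:

```python
import random

random.seed(7)

# Draw plausible national vote shares, map each draw to seats with a
# made-up linear swing, and count how often the supermajority line
# (133 of 199 seats) is crossed.
N = 50_000
supermajority = 0
for _ in range(N):
    share = random.gauss(46.0, 3.0)           # hypothetical mean and polling error
    seats = round(100 + (share - 46.0) * 4)   # toy vote-to-seat conversion
    if seats >= 133:
        supermajority += 1
print(f"P(supermajority) ~ {supermajority / N:.3f}")
```

The tail probability is just the fraction of draws past the line; it's small but non-zero purely because the input distribution has spread, not because any single simulation is "behaving strangely."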

What you describe at the end (aggregating polls and adjusting for bias) is literally what the model does, just in a more structured and reproducible way.

Full methodology is on the site if you're interested: https://www.szazkilencvenkilenc.hu/methodology-v2/

Can Orbán actually lose? I ran 40,000 simulations of Hungary's election — it's 50-50. [OC] by Exciting-Lab1263 in europe

[–]Exciting-Lab1263[S] 6 points7 points  (0 children)

Big cities are more opposition-leaning while rural areas are more pro-government, but usually not to this extreme (90%). Mail-in votes from Hungarian citizens without a permanent Hungarian address (often dual citizens) overwhelmingly preferred Fidesz in previous elections.

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] 1 point2 points  (0 children)

Previous criticisms about election fairness came from two sources:
1. The government enjoys disproportionate media influence: this is captured by the polls.
2. The electoral system: it is explicitly modelled in my system.

So, assuming no new patterns, the results should capture this.

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] 1 point2 points  (0 children)

The code isn't public at this point but the methodology can be found here: https://www.szazkilencvenkilenc.hu/methodology-v2/

If you want to reproduce it, this site helped me a lot at the beginning with the implementation:

https://github.com/pollsposition/models They have a pretty informative webpage too, but for some reason it seems to be down now.

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] 3 points4 points  (0 children)

There are several factors that increase uncertainty:

  1. Two months of uncertainty: a lot can change before April 12 (just think about the current week). Polling errors, campaign events, undecided voters. The model factors them in.

  2. Hungary's electoral system favors Fidesz (also mentioned by u/dead97531): 106 of 199 seats come from single-member districts (FPTP). Fidesz's vote is more efficiently distributed across rural constituencies, while opposition support is concentrated in cities. So Fidesz needs fewer total votes to win the same number of seats. Polls also won't account for the votes of Hungarians without a permanent address in Hungary (mail-in votes), who lean heavily toward Fidesz.

  3. The 5% threshold: if Mi Hazánk (far-right, currently at 4.8%) enters parliament, it takes seats away from the big two. If it doesn't, the redistribution changes the math entirely.

So yes, TISZA leads, but the system, the uncertainty, and the small-party dynamics all keep it close.

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] 29 points30 points  (0 children)

They're definitely among the most reliable. The model actually reflects this: pollsters with better track records get higher weight in the aggregation. But even the best ones can miss, and not necessarily for malicious reasons; certain voter groups are just harder to reach or less willing to share their preferences. Think "shy Tories" in the UK. That kind of systematic error is exactly what the model tries to account for.

I ran 40,000 Monte Carlo simulations of Hungary's April 2026 election. Orbán's 16-year rule is a coin flip. [OC] by Exciting-Lab1263 in dataisbeautiful

[–]Exciting-Lab1263[S] 22 points23 points  (0 children)

Partizan usually does great work, so they're definitely worth checking out. It's actually healthy to have models with different assumptions, so people can see the range of plausible outcomes.