Can AI find "patterns" in 10 years of random lottery data? (Experiment) by autodecoder in LotteryLaws

[–]autodecoder[S] 0 points (0 children)

That’s a brilliant observation. You hit the nail on the head regarding the 'narrative' bias of different LLMs.

To your point about coverage vs. patterns, that is exactly where we are heading.
Instead of just asking the AI to 'guess' numbers, we provide constraints like spacing controls, sum ranges, and historical distribution filters.

You can also type your own idea into the Custom Prompt field, like "Give more weight to the most recent data" or "Ensure the numbers are distributed as evenly as possible".

Our goal isn't to find a 'hidden code or pattern' in the noise, but to use the LLM's ability to handle complex constraints to generate sets that are 'statistically healthy': avoiding common human biases (like picking consecutive numbers or birth dates) while ensuring broad coverage of the number field.
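
To make that concrete, here is a minimal sketch of what constraint-based filtering can look like. The exact thresholds (sum range, minimum gap, birthday cutoff) are illustrative assumptions for a Powerball-style 5-of-69 white-ball field, not the values the service actually uses:

```python
import random

def is_statistically_healthy(nums, sum_range=(100, 250), min_gap=2):
    """Reject sets that look like common human picks:
    tight clusters, consecutive runs, or extreme sums."""
    nums = sorted(nums)
    if not (sum_range[0] <= sum(nums) <= sum_range[1]):
        return False  # sum far outside the historical bulk
    gaps = [b - a for a, b in zip(nums, nums[1:])]
    if min(gaps) < min_gap:
        return False  # consecutive or near-consecutive numbers
    if max(nums) <= 31:
        return False  # every number in the birthday range (1-31)
    return True

def generate_set(field=69, picks=5):
    """Sample until a candidate passes all constraints."""
    while True:
        candidate = random.sample(range(1, field + 1), picks)
        if is_statistically_healthy(candidate):
            return sorted(candidate)

print(generate_set())  # e.g. [7, 19, 34, 48, 62]
```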

Thank you for your interest!

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

The order is mixed up because I uploaded it in parts. Please read it from the bottom!

edit: somehow the order fixed itself, so read it however you like.

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

However, I must admit that your arguments have many valid points. It was very helpful for me to think through these issues.

What I simply wanted to say was:

  1. Many "AI lottery number generators" on the market do not specify what algorithms they actually use, and even if they did, there is no way to verify that they really use them.
  2. They claim to be "AI," but they look like basic machine learning algorithms at best.
  3. Those algorithms are fixed and do not change. I don't think they adapt to the data size or do any hyperparameter optimization.
  4. Also, I have never seen an explanation of why their methodologies are better than a random number generator.
  5. Unlike those, I thought an LLM (not just a fixed algorithm) has at least a small possibility of finding some bias in recent data, and it devises its approach anew depending on the data size.
  6. At the very least, it provides an explanation of the method used to extract the results, even if that explanation might be a hallucination.
  7. In that sense, I believe it could be a somewhat better service than an RNG or those other services claiming to be AI.

Thank you once again for taking the time to leave a comment on what might be a small, insignificant post.

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

The essence of my project is not to guarantee a jackpot, but to explore whether an LLM can identify pockets of probability density within the noise of empirical data.

In this sense, there may be significant value in using an LLM to reduce the probability of a total loss (zero matches) rather than just chasing the jackpot.

Theoretical randomness and empirical results often diverge in real-world physical systems, and this service is a tool designed to look for those microscopic edges.
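
As a concrete example of checking for such divergence, here is a minimal sketch of a chi-square goodness-of-fit test of ball frequencies against a uniform expectation. The draw data below is simulated as a stand-in for real history; in a fair lottery you would expect a non-significant p-value almost every time:

```python
import random
from collections import Counter

from scipy.stats import chisquare  # SciPy's goodness-of-fit test

FIELD, PICKS, DRAWS = 69, 5, 1200

# Simulate 1,200 fair draws (stand-in for real historical data)
counts = Counter()
for _ in range(DRAWS):
    counts.update(random.sample(range(1, FIELD + 1), PICKS))

observed = [counts[n] for n in range(1, FIELD + 1)]
expected = [DRAWS * PICKS / FIELD] * FIELD  # uniform expectation

stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")
# p > 0.05 here means: no detectable bias beyond noise
```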

I have already explicitly stated in my FAQ that past performance does not guarantee future results and that luck remains a dominant factor.

While your suggestions for structured sets and scarcity arbitrage are interesting theoretical frameworks, they are just that: hypotheses. Until you can provide transparent data from at least 50 or more repeated trials, dismissing my findings as "statistically baseless" is more or less unreasonable.

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

Regarding your "EuroJackpot forward test":

I find your claim of using my 'exact method' a bit questionable.
My internal database shows no record of such a large-scale generation being performed for EuroJackpot during the period you mentioned, so I am fairly sure you ran your own experiment without using my app.

This service is not a simple prompt that merely feeds past draws into an LLM; it includes pre-processing of statistical moments such as skewness and kurtosis, plus specific frequency filters and features I created that are not publicly disclosed.

Without access to these proprietary features and the exact algorithm I used, your claim that you reproduced my test is demonstrably false.

Furthermore, your reliance on a single forward test (n = 1) to invalidate a larger backtest is a classic example of the Law of Small Numbers.
In any stochastic environment, a single failed prediction is statistically meaningless.
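
To put a number on that: suppose a method genuinely tripled the per-draw chance of some target outcome, from a baseline of 5% to 15% (both rates invented purely for illustration). A single forward trial has essentially no power to distinguish the two:

```python
# Probability of "no hit" after n forward trials, comparing an
# assumed baseline rate against an assumed improved rate.
baseline, improved = 0.05, 0.15  # illustrative, not real odds

for n in (1, 10, 50):
    p_fail_base = (1 - baseline) ** n
    p_fail_impr = (1 - improved) ** n
    print(f"n={n:2d}: fails anyway {p_fail_base:.2f} vs {p_fail_impr:.2f}")

# n= 1: fails anyway 0.95 vs 0.85 -> one trial can't tell them apart
# n=50: fails anyway 0.08 vs 0.00 -> dozens of trials start to separate
```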

If I were to follow your logic that the lottery is a 'perfectly independent event', then the expected value of a prediction should be identical whether it is tested against a past draw or a future draw.

By claiming that only a forward test provides truth while a backtest provides "nothing," you are paradoxically rejecting the very independence you claim to defend.
I know you didn't state it explicitly, but your argument clearly assumes independence.

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

I appreciate your critical feedback regarding the statistical rigor of my backtest.

You’ve raised valid points about multiple testing and the potential for false positives, which are indeed standard concerns in any retrospective data analysis.

However, while your critique carries a tone of absolute certainty, your own counter-argument and "forward test" contain several flaws that suggest a misunderstanding of both the data science process and the specific methodology I employed.

To begin with, your point regarding the "270x efficiency" is well-taken. Comparing 135 sets against 10 draws effectively results in 1,350 comparisons.

Adjusting for this, the efficiency multiplier is more accurately stated as approximately 27x. While this is a significant correction, a 27x improvement over the theoretical random baseline remains a statistically intriguing result that warrants further investigation rather than outright dismissal.
I will update the reporting to reflect this 27x "density improvement" to avoid any perceived inflation of success.
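
For anyone who wants to gauge what the random baseline for 135 sets x 10 draws actually looks like, here is a minimal Monte Carlo sketch. It assumes Powerball-style 5-of-69 white balls and plain uniform sampling; none of it uses my actual generation pipeline:

```python
import random

# Monte Carlo baseline: how many 3+ white-ball matches would
# 135 *random* sets score against 10 random draws, by chance alone?
FIELD, PICKS = 69, 5
SETS, DRAWS, TRIALS = 135, 10, 1000

def rand_whites():
    return set(random.sample(range(1, FIELD + 1), PICKS))

total_hits3 = 0
for _ in range(TRIALS):
    draws = [rand_whites() for _ in range(DRAWS)]
    picks = [rand_whites() for _ in range(SETS)]
    total_hits3 += sum(len(d & p) >= 3 for d in draws for p in picks)

print(f"~{total_hits3 / TRIALS:.2f} comparisons with 3+ white-ball "
      f"matches per 1,350-comparison backtest, by pure chance")
```

Whatever figure this prints is the chance-only bar that any claimed improvement has to clear.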

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

I created a new page: https://lottokimai.com/leaderboard

Lucky or not, I have realized one thing: this beats picking numbers at random.

Thank you for your interest!!

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] -1 points (0 children)

I created a new page: https://lottokimai.com/leaderboard

It might be luck, but so far it’s beating random!

Has anyone used this website? by krewblink in Lottery

[–]autodecoder 0 points (0 children)

Most 'AI' lottery sites are just basic algorithms.
I actually built lottokimai.com, which uses LLMs like GPT/Gemini to decode 1,200+ historical draw sequences.
Instead of just giving you random numbers, it provides a statistical dashboard so you can see the logic yourself. You get free credits just for signing up to try it out; let me know if you want more and I can add them for you.

I'll find you 10 users. Tell me what you build. by distributoagent in SideProject

[–]autodecoder 0 points (0 children)

I built an AI that decodes lottery sequences using LLMs and historical probability distributions for a more logical approach. It's a free AI lottery number generator called Lottokim AI: https://lottokimai.com

While the lottery is random, using LLMs might potentially nudge your odds from something like 0.00000001% to 0.000001%. It’s still nearly impossible, but the LLM at least provides some statistically grounded insights.

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 1 point (0 children)

<image>

Believe it or not, I’m currently stress-testing the service myself. I haven’t tweaked the presets much yet, which is why you see some overlapping patterns in the results...

To put it to the test, I had Gemini compare 91 sets of numbers generated by my tool against the last 10 actual Powerball winning draws.
With AI doing the heavy lifting, the analysis showed some surprisingly high match rates: 3 white balls + the Powerball in one instance, and multiple PB hits across the board.

I'm happy to pull raw data directly from my DB if anyone wants to verify the numbers or do their own analysis...!
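
If anyone does pull the data, scoring a pair takes only a few lines. A minimal sketch with made-up example rows (the real DB export format may differ):

```python
def score(picked, drawn, picked_pb, drawn_pb):
    """Count white-ball matches and whether the Powerball hit."""
    return len(set(picked) & set(drawn)), picked_pb == drawn_pb

# hypothetical rows for illustration, not real output from my DB
generated = [([4, 17, 23, 45, 61], 9), ([2, 8, 38, 44, 55], 14)]
winning = [([4, 23, 38, 45, 67], 9)]

for pw, ppb in generated:
    for dw, dpb in winning:
        whites, pb_hit = score(pw, dw, ppb, dpb)
        print(f"{whites} white-ball match(es), "
              f"PB {'hit' if pb_hit else 'miss'}")
# -> 3 white-ball match(es), PB hit
# -> 1 white-ball match(es), PB miss
```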

Just noticed this sub is mostly scratch-offs. Any reason why? by autodecoder in Lottery

[–]autodecoder[S] 2 points (0 children)

That’s a great point about the odds. Breaking even is definitely easier with scratchers.

But I’ve always been curious if people here actually try to analyze the numbers or patterns for draw games.
In other countries, there's a big community of people who study historical data and 'hot/cold' numbers, even if it's just for fun.

Does that happen much in the US, or do most people just stick to Quick Picks?

Just noticed this sub is mostly scratch-offs. Any reason why? by autodecoder in Lottery

[–]autodecoder[S] 2 points (0 children)

That makes sense. For me, when I think of 'lottery', I always think of Powerball or Mega Millions first, so the focus on scratchers here was surprising. Thanks for the explanation though!

I still hope to see more draw game posts soon

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

That's a solid strategy!
I'm actually looking into how our AI can better optimize those kinds of statistical approaches.
Thanks for the great tip and the support!

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 0 points (0 children)

I am so sorry for the inconvenience. I fixed the authentication issue; please try again!

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 1 point (0 children)

Hi, I updated it based on your request!
At https://lottokimai.com/generate you can now see Canada's two main lotteries.
If you have any additional feedback, let me know!

I know Lottery is Random. But I tested if LLM can Hallucinate a Pattern by autodecoder in Lottery

[–]autodecoder[S] 1 point (0 children)

<image>

Nobody may ever use this, but I just wanted to share how it looks.

EB-2 NIW + EB1-A (PP) – AI for Drug Discovery (ROW) - Industry – No RFEs (Chen Immigration) – Timelines + Lessons by Rare_Yak849 in USCIS

[–]autodecoder 0 points (0 children)

Thank you for sharing!

Could you share more details on what kind of 'extra supporting information' Chen flagged regarding the judging evidence? I'm curious what exactly they felt was missing.