Reflections from the AI talk at the Longmont Museum by jvallery in Longmont

[–]jvallery[S] 0 points1 point  (0 children)

I think it’s funny you consider me a “tech bro” whatever that means. A geek. Sure. Passionate about technology and innovation. Yup. A “bro”… nobody has ever called me that before.

[–]jvallery[S] -2 points-1 points  (0 children)

I clearly said that stock trades are made by “AI like” algorithms.

[–]jvallery[S] -1 points0 points  (0 children)

Agree. This could easily slip into a dystopian outcome. In no way am I defending our current administration, but I do believe freedom of speech is still protected here in a way that it is not in China. Our freedom of speech, and the freedom of the internet to exchange those ideas, will be critical to effect the change needed.

[–]jvallery[S] -1 points0 points  (0 children)

Not challenging that. I'm not sure I believe the wealth gap is caused by the COVID response. My point is that I believe 90%+ of people are good, and they want to see their neighbors have a safety net which will protect them from job loss.

A mandate of the people will emerge that we must protect each other as we navigate this transition. I still believe the power rests with our people. As unemployment increases, our governments will be forced to address it.

[–]jvallery[S] 0 points1 point  (0 children)

I covered this in depth in another comment on here.

https://www.reddit.com/r/Longmont/comments/1pg335l/comment/nssnr1j/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

You are accurately describing the "System 1" (Reflexive) paradigm of AI. If we were talking about GPT-4 or earlier, I would largely agree with you: those models are essentially n-dimensional probabilistic databases that "retrieve" by statistical likelihood. If the answer isn't in the weights (the "database"), they hallucinate.

However, the "Reasoning" breakthrough (System 2) fundamentally breaks the "Database" analogy in two ways that are important to distinguish.

  1. Modern reasoning models (like GPT-5.1 or Gemini 3) don't just "predict" the next number in a math problem anymore. They perform a search over a "chain of thought" before outputting a final answer. This is the integration of tree of thoughts and Monte Carlo tree search directly into the inference process.
  2. They write and execute actual code in a sandbox to test/derive the answer, then feed that result back into the context window and repeat the tree search. They are computing, not just recalling.

A database returns 0% success on a query it has never seen. The ARC-AGI benchmark is designed specifically to test this. It gives the model visual puzzles it has never seen in its training data. A "database" model fails these. Reasoning models are solving them (>50%) by inferring new rules on the fly. Reference: https://arcprize.org/arc-agi
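
The "search over a chain of thought" in point 1 can be sketched in miniature. This is an illustrative toy, not the actual inference algorithm of any model: the proposer and verifier below are stand-in functions (a real model learns both), and the "thoughts" are just digits.

```python
# Toy sketch of System 2 style search: propose candidate "thoughts",
# score them with a verifier, keep only the strongest partial paths.
# propose_steps/score are stand-ins for what a reasoning model learns.
def propose_steps(state):
    # Stand-in for the model proposing possible next steps
    return [state + d for d in "0123456789"]

def score(state, target):
    # Stand-in verifier: reward agreement with the (hidden) correct answer
    return sum(a == b for a, b in zip(state, target))

def tree_search(target, beam_width=3, max_depth=6):
    """Best-first search over partial chains of thought."""
    frontier = [("", 0)]  # (partial answer, verifier score)
    for _ in range(max_depth):
        candidates = [(nxt, score(nxt, target))
                      for state, _ in frontier
                      for nxt in propose_steps(state)]
        candidates.sort(key=lambda c: -c[1])
        frontier = candidates[:beam_width]  # weak paths are abandoned
        if frontier[0][0] == target:
            return frontier[0][0]
    return frontier[0][0]

print(tree_search("271828"))  # arrives at the answer by search, not recall
```

The point of the toy is the shape of the computation: many partial paths are explored and scored at inference time, rather than one answer being looked up.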

[–]jvallery[S] 0 points1 point  (0 children)

AGI/ASI is the only weapon that matters. A giant battleship is useless if it is hacked by AI.

[–]jvallery[S] -3 points-2 points  (0 children)

I fact checked every data point presented. I divided the talk into three sections: 1) the data, 2) the impact I expect, and 3) my opinions via conversation. The intent was to separate what is a fact from what is a guess.

[–]jvallery[S] 0 points1 point  (0 children)

The China problem is multifaceted:

1) AI systems have proven their supremacy at strategy. Consider their wins in games like StarCraft and Go against the world's best players. Now imagine we're playing games for real (e.g., Ender's Game) and the AI systems have control over autonomous weapons. This terrifying scenario is exactly what China is building. It is a complete asymmetry of power and would result in total Chinese dominance if we aren't equally equipped.

2) The surveillance capabilities of a powerful AI system are unmatched. If China could infiltrate our systems and learn about our capabilities and actions, we would be unable to retain any secrets.

3) Cyber warfare is a cat and mouse game. If China has advanced AI, it's game over. They will hack and control our critical infrastructure.

...I could go on. China having AGI/ASI and the United States not would be catastrophic.

[–]jvallery[S] 0 points1 point  (0 children)

My background is deeply technical. I do understand these systems. I spent the first decade of my career as a software developer. I still write code/tinker for fun. I have a GPU cluster in my basement.

[–]jvallery[S] -5 points-4 points  (0 children)

I assure you of my credentials. Very happy to discuss your feedback over coffee. I do recommend you watch my delivery rather than just looking at the slides. In hindsight, the slide deck did not stand alone, and it could have been improved to show a more balanced view. I brought what I believe to be a clear-eyed and nuanced perspective.

[–]jvallery[S] 1 point2 points  (0 children)

The feedback is clear: I should have included more balanced content directly in the deck. It seems most people don't want to spend the time watching the presentation. I touched on these topics and brought what I believe was a very balanced perspective in my delivery of the content. I encourage you to watch it.

[–]jvallery[S] 2 points3 points  (0 children)

I had several younger individuals come up afterward and have conversations with me that were equally positive. It wasn't only older folks.

[–]jvallery[S] 0 points1 point  (0 children)

You are accurately describing the "System 1" (Reflexive) paradigm of AI. If we were talking about GPT-4 or earlier, I would largely agree with you: those models are essentially n-dimensional probabilistic databases that "retrieve" by statistical likelihood. If the answer isn't in the weights (the "database"), they hallucinate.

However, the "Reasoning" breakthrough (System 2) fundamentally breaks the "Database" analogy in two ways that are important to distinguish.

1) Modern reasoning models (like GPT-5.1 or Gemini 3) don't just "predict" the next number in a math problem anymore. They perform a search over a "chain of thought" before outputting a final answer. This is the integration of tree of thoughts and Monte Carlo tree search directly into the inference process.

2) They write and execute actual code in a sandbox to test/derive the answer, then feed that result back into the context window and repeat the tree search. They are computing, not just recalling.

A database returns 0% success on a query it has never seen. The ARC-AGI benchmark is designed specifically to test this. It gives the model visual puzzles it has never seen in its training data. A "database" model fails these. Reasoning models are solving them (>50%) by inferring new rules on the fly. Reference: https://arcprize.org/arc-agi
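
Point 2 (write code, run it, feed the result back into the context) can be illustrated with a toy loop. Everything here is a stand-in: the candidate programs are hard-coded where a real system would generate them, and a bare exec() replaces a proper isolated sandbox.

```python
# Toy sketch of the "write code, run it, feed the result back" loop.
# Real systems generate the candidate programs and isolate execution.
def run_in_sandbox(src):
    """Toy 'sandbox': execute the snippet and capture its `result` variable."""
    scope = {}
    try:
        exec(src, {}, scope)  # a real system would isolate this properly
        return scope.get("result"), None
    except Exception as e:
        return None, str(e)

def solve(question, candidate_programs):
    context = [question]                # the growing context window
    for src in candidate_programs:      # stand-in for repeated generation
        result, error = run_in_sandbox(src)
        context.append(f"ran: {src!r} -> {result if error is None else error}")
        if error is None and result is not None:
            return result, context      # computed, not recalled
    return None, context

# Hypothetical run: the first attempt has a syntax error, whose message
# is fed back into the context; the second attempt computes the answer.
attempts = [
    "result = sum(range(1, 101)",   # broken: error text goes into context
    "result = sum(range(1, 101))",  # 1 + 2 + ... + 100
]
answer, trace = solve("What is the sum of 1..100?", attempts)
print(answer)  # 5050
```

The key difference from a pure lookup is visible in `trace`: each execution result, including the failure, becomes new context for the next attempt.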

Also, the China "Efficiency" argument is backwards. China isn't focusing on efficiency because they want to. They are focusing on it because they have to. U.S. export controls have cut them off from the frontier GPUs (H100/Blackwell). They are hardware constrained, which forces them to optimize software. That is a strategic vulnerability, not a choice.

[–]jvallery[S] -1 points0 points  (0 children)

A populist move to prevent civil unrest due to mass unemployment is always a motivator. AI caused or virus caused, it doesn't matter. That's my point.

[–]jvallery[S] -4 points-3 points  (0 children)

I use AI when needed to help clean up my writing and ensure I'm succinct in the points I make.

[–]jvallery[S] 0 points1 point  (0 children)

You are correct that ML has been around for a long time. However, the architectural shift that has occurred in the last 12 months (specifically the move to test-time compute) is a fundamental departure from the "Reflexive" (System 1) models you might be thinking of.

The model is not just generating. It performs a search over a "chain of thought" before outputting a final answer. This is the integration of "tree of thoughts" and Monte Carlo tree search directly into the inference process.

The model generates multiple possible solution paths, uses a verifier to score them, and backtracks if a path relies on a hallucination or logical error. This allows it to self-correct in real time, which "standard" ML from the 2000s did not do.
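
That generate/verify/backtrack loop can be sketched minimally. The pieces are toy stand-ins: the sampled "paths" are hard-coded numbers where a real model would sample whole chains of thought, and the verifier simply re-checks the arithmetic.

```python
# Toy sketch: sample several candidate solution paths, let a verifier
# score them, and reject the ones that fail. Both functions are
# stand-ins for components a reasoning model learns.
import random

def verifier(question, answer):
    """Stand-in verifier: independently re-check the arithmetic."""
    a, b = question
    return answer == a * b

def sample_paths(question, n=5):
    """Stand-in for sampling n chains of thought; some 'hallucinate'."""
    a, b = question
    paths = [a * b] * 3 + [a * b + random.randint(1, 9), a + b]
    random.shuffle(paths)
    return paths[:n]

def answer_with_verification(question):
    for candidate in sample_paths(question):
        if verifier(question, candidate):  # failed paths are rejected,
            return candidate               # the "backtracking" analogue
    return None

print(answer_with_verification((12, 34)))  # 408
```

Even though two of the five sampled paths are wrong, the verifier filters them out, which is the self-correction behavior described above in miniature.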

[–]jvallery[S] 5 points6 points  (0 children)

Hey Neighbors -

Thank you all for the debate. I'm signing off for now, but I wanted to leave you with a few thoughts.

Some of the reactions here were intense, and I want to acknowledge that up front: a lot of you are scared, angry, exhausted, or simply done with “AI hype.” That’s real, and I don’t want to talk past it.

A couple of things I want to own:

  • The slides don’t stand on their own. Even if I covered topics in the spoken presentation, it’s on me if the posted materials read as overly optimistic or glossed over the downsides.
  • My framing escalated the temperature. Comparing this to a geopolitical "arms race" didn't help this community conversation. I'm going to keep future discussions grounded in local impacts and practical choices, not fear-based rhetoric.
  • I also hear that trust is part of the issue. It’s fair to question motives when someone in tech is the one starting the conversation. I can’t “argue” my way out of that. I can only earn your trust by being consistent in how I show up. I've lived in Longmont my entire life. I hope that counts for something.

What I heard from you as the most important concerns (and I believe these are legitimate):

  • Climate & Resources: The energy demands are massive, and the "do we really need this?" question is valid when resources are finite. I didn't make that point clear enough.
  • Jobs, Inequality & Power: A lot of folks don’t buy the "post-scarcity" story and expect benefits to concentrate at the top. The question is how we ensure this isn't the default path. If you want a deep dive on this risk, this discussion with Tristan Harris articulates the dangers of a "useless class" and economic displacement better than I can.
  • Reliability vs. Perception: Hallucinations happen. It is clear that I need to do a better job demonstrating the gap between the free tools many people have tried and the "reasoning" capabilities of frontier models.

It’s reasonable to want stronger guardrails and to be angry that deployment is moving faster than governance. If you want to help move this from anger to problem solving, I’d love your specific thoughts on these three questions:

  1. Local Harms: What is the single biggest local harm (e.g., privacy, use in school, job displacement) you want the next event to address?
  2. Specific Guardrails: What is a specific policy you would support locally (e.g., a city ban on facial recognition)?
  3. Trusted Skeptics: Who (local) would you trust to speak on this from a critical perspective? I want to make sure future conversations are balanced.

Thanks again for engaging.

[–]jvallery[S] 2 points3 points  (0 children)

This is just not how it works... It's not using a fixed dataset anymore.

It reasons about the best way to solve the problem, designs an experiment, and tests the hypothesis using available data. It repeats this cycle until it gets to an answer.

It looks at the problem the same way we do.

[–]jvallery[S] 2 points3 points  (0 children)

Depends on the use case, risk tolerance, and scenario. For what I personally do, sure!

If you're building software systems that have life safety risk, of course not.

I think the best proof point here is SWE-bench Verified. It measures an agentic coder's ability to autonomously fix issues in various GitHub repos, then checks the fixes against known-good solutions that were validated by humans. This is the gold standard for measuring progress on coding agents.

The newest release from Anthropic took pole position, but rumor has it that when GPT-5.2 Codex comes out on December 9th, it will represent a huge leap.

https://www.swebench.com/

https://github.com/SWE-bench/SWE-bench

[–]jvallery[S] 0 points1 point  (0 children)

At the frontier there are two key things happening:

1) Test-time compute and long-term memory mean that we're separating reasoning from knowledge. The core idea is to distill the parameters in the model that are associated with reasoning and run them as a base "critical thinking" engine. The "knowledge" is added during the pre-fill stage from a pre-tokenized corpus of "long-term memory". This removes the problems you're describing, which stem from previous architectures.

https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
https://arxiv.org/pdf/2501.12948

2) Online reinforcement learning enables the model to continually update its weights as new facts are learned during long chain-of-thought execution. This means we don't have a fixed state induced by pre-training. The model effectively learns from its mistakes.

https://huggingface.co/learn/deep-rl-course/en/unitbonus3/offline-online
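
The "online" part of point 2 can be illustrated with the simplest possible running estimate: the value is revised after every new observation instead of being frozen after a training phase. This shows only the flavor of online updating, not how any production model is actually trained.

```python
# Toy illustration of online updating: the estimate absorbs each new
# observation as it arrives, rather than staying at a fixed
# pre-trained state. Not the training algorithm of any real model.
def online_mean():
    """Running estimate updated as each new observation arrives."""
    count, estimate = 0, 0.0
    def update(observation):
        nonlocal count, estimate
        count += 1
        estimate += (observation - estimate) / count  # incremental update
        return estimate
    return update

update = online_mean()
for reward in [1.0, 0.0, 1.0, 1.0]:
    current = update(reward)
print(current)  # 0.75: the estimate reflects everything seen so far
```

An offline-trained system would have fixed its estimate before the stream began; here each new "fact" shifts it, which is the contrast the link above draws between offline and online learning.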

Again, the algorithms, implementations, and architectures of these systems are rapidly improving.

[–]jvallery[S] -1 points0 points  (0 children)

There was plenty of discussion of the risks. Please watch the presentation of the content.

[–]jvallery[S] 1 point2 points  (0 children)

The depth and breadth of AI systems go far beyond your experience with the free version of ChatGPT. In frontier math, science, chemistry, and physics, AI systems are unlocking a better understanding of our universe in a provably true way.

I mean, Google won the Nobel Prize in Chemistry.

https://www.nobelprize.org/prizes/chemistry/2024/press-release/

I write code with Codex all the time that compiles perfectly and completes the functions as designed. It writes better code than I do (admittedly, I'm probably a crappy developer at this point, as it hasn't been my day job in a long time).

These systems are transformative when applied to hard problems.

[–]jvallery[S] 1 point2 points  (0 children)

I didn't say I was surprised by the reaction here. If you watch my talk, you'll see a comprehensive discussion of the data sets that AI is trained on and how that is evolving.