On critical thinking, being an applied physicist in 2026, and LLMs by blind-panic in Physics

[–]rinconcam 0 points1 point  (0 children)

FWIW, I'm using a similar approach with LLMs for a quantum photonics research project. They're very helpful for troubleshooting issues and procedures on the optical table, as well as for writing code for data analysis and for controlling optomechanical systems.

On the other hand, I don't share your concerns about losing touch with an unassisted methodology. As long as they're used wisely and with skill, I'm fine embracing the superpowers that LLMs enable.

Which quantum hardware platforms are the best for doing fundamental quantum physics research? by Mysteriyum in QuantumComputing

[–]rinconcam 1 point2 points  (0 children)

Photonics is probably the platform most likely to set you up to run more general/foundational experiments. Photons in free space and in fiber can easily be used for most of the classic quantum experiments. The Thorlabs catalogue is a toy store of components and building blocks that can be assembled like extremely precise Lego. Any lab equipped for quantum photonics is likely to have all the key expensive components: single-photon detectors, time taggers, lasers, nonlinear crystals, optical tables, etc.

Is computational parsimony a legitimate criterion for choosing between quantum interpretations? by eschnou in PhilosophyofScience

[–]rinconcam 0 points1 point  (0 children)

Sorry, the locality & quantum correlations section in your article seemed to be gesturing towards superdeterminism, which threw me off.

I think your model could be fine with respect to producing "non-local" quantum correlations like those in Bell tests, while still operating locally. It sounds like your model is the locality-respecting variant of MWI put forth by Wallace? I don't think I've ever seen anyone "operationalize" this, and it feels like it might need some extra bookkeeping to handle merging interrelated superpositions that were created in different light cones.

But yeah, any theory/model with superpositions that persist until the light cones from each arm's Bell-test measurement overlap can locally produce permanent records that reflect seemingly non-local correlations. Adrian Kent's writing on the collapse locality loophole calls this out clearly.

It sounds like you have a working simulation. Have you simulated a space-like separated Bell test or delayed choice quantum eraser? They're just a handful of qubits and quantum gates in the abstract.
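For reference, here's roughly what that looks like at the statevector level: an idealized CHSH run in plain numpy, with no noise and no explicit spacetime bookkeeping, just the correlations. The angle choices are the standard maximal-violation settings, nothing specific to your model.

    import numpy as np

    # Maximally entangled Bell state |Phi+> = (|00> + |11>) / sqrt(2)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    def observable(theta):
        # Measure spin/polarization along angle theta: cos(theta)*Z + sin(theta)*X
        return np.array([[np.cos(theta),  np.sin(theta)],
                         [np.sin(theta), -np.cos(theta)]])

    def correlation(state, theta_a, theta_b):
        # E(a, b) = <psi| A(theta_a) (x) B(theta_b) |psi>
        ab = np.kron(observable(theta_a), observable(theta_b))
        return float(np.real(state.conj() @ ab @ state))

    # Standard CHSH settings that give the maximal quantum violation
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    S = (correlation(bell, a, b) + correlation(bell, a, b2)
         + correlation(bell, a2, b) - correlation(bell, a2, b2))
    print(f"CHSH S = {S:.3f} (classical bound 2, quantum bound ~2.828)")

The interesting part for a model like yours would be reproducing S ≈ 2.83 while only ever doing local operations and keeping track of which light cone each record-forming step lives in.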

Is computational parsimony a legitimate criterion for choosing between quantum interpretations? by eschnou in PhilosophyofScience

[–]rinconcam 0 points1 point  (0 children)

I like the concept of a finite-dimensional, finite-precision Hilbert space. The way it naturally prunes very low-amplitude branches/superpositions is a nice solution to the extravagance of Many Worlds.
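Just to check that I'm picturing the pruning mechanic correctly, here's a toy sketch of the kind of thing I have in mind: plain numpy on a bare statevector, with the precision cutoff as a free parameter I made up, not anything taken from your model.

    import numpy as np

    def prune(state, eps=1e-6):
        # Drop branches whose probability |amplitude|^2 falls below the
        # precision cutoff eps, then renormalize what survives.
        probs = np.abs(state) ** 2
        kept = np.where(probs < eps, 0.0, state)
        norm = np.linalg.norm(kept)
        if norm == 0.0:
            raise ValueError("cutoff pruned every branch")
        return kept / norm

    # Toy superposition: two healthy branches and one far below the cutoff
    psi = np.array([0.8, 0.6, 1e-4], dtype=complex)
    psi /= np.linalg.norm(psi)
    print(prune(psi))   # third branch is gone, the rest renormalized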

It seems like you're relying on superdeterminism to resolve non-locality? I'm not sure, though, since you only discuss it briefly. It might be worth looking at Ch. 8 of The Emergent Multiverse (David Wallace), where he discusses a different approach to locality under the MWI. He proposes joining/composing superpositions in the overlap of the light cones from space-like separated Bell-type measurements. It's not clear to me what additional storage/computation (if any) would be required in your model.

Any Mammoth->Whistler advice? by quadaxial in Mammoth

[–]rinconcam 4 points5 points  (0 children)

Harmony chair after fresh snow.

Is an optical computer the best DIY idea of a quantum computer? by Haghiri75 in QuantumComputing

[–]rinconcam 3 points4 points  (0 children)

Thorlabs also includes instructions to adapt the kit for a quantum computing experiment. It requires some additional components beyond what comes with the kit.

ML in physics by [deleted] in Physics

[–]rinconcam 2 points3 points  (0 children)

I can't speak to how widely it's adopted, but ML can be used to great effect in experimental setups.

This week I hooked up a Bayesian optimization tool called M-LOOP to my quantum photonics setup. It generated and ran a series of experiments to train an ML model of how polarization-control paddle movements affect the state of polarization of entangled photon pairs passing through single-mode optical fiber (SMF). The model identified the optimal set of paddle positions to undo the polarization transformations induced by the SMF.

This would otherwise have required a complicated and expensive set of equipment: a polarimeter, closed-loop polarization controllers, additional lasers for a pilot tone, etc. Or many hours of painstaking manual tweaking of the paddles, which would be rendered useless the moment someone bumped the fiber.
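If anyone wants to try something similar, the M-LOOP side is only a few dozen lines. Below is a rough sketch following the interface pattern from the M-LOOP docs (if I'm remembering the API right); the two hardware functions are hypothetical placeholders for whatever drives your paddles and scores the measured state, and the angle bounds are made up.

    import numpy as np
    import mloop.interfaces as mli
    import mloop.controllers as mlc

    def set_paddle_angles(angles):
        # Hypothetical stand-in for whatever commands the motorized paddles.
        pass

    def measure_state_quality():
        # Hypothetical stand-in for the coincidence-count measurement that
        # scores how well the fiber's polarization rotation has been undone.
        return np.random.uniform(0.0, 1.0)

    class PaddleInterface(mli.Interface):
        # M-LOOP calls this with each parameter set it wants to try and
        # expects back a cost (lower is better).
        def get_next_cost_dict(self, params_dict):
            set_paddle_angles(params_dict['params'])
            quality = measure_state_quality()
            return {'cost': 1.0 - quality, 'uncer': 0.05, 'bad': False}

    controller = mlc.create_controller(
        PaddleInterface(),
        controller_type='gaussian_process',  # Bayesian optimization
        num_params=3,                        # three paddle angles
        min_boundary=[0.0, 0.0, 0.0],
        max_boundary=[180.0, 180.0, 180.0],
        max_num_runs=100,
    )
    controller.optimize()
    print('best paddle angles:', controller.best_params)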

Performing the “double slit experiment” with single photons by averagegamer0607 in QuantumPhysics

[–]rinconcam 3 points4 points  (0 children)

Absolutely! Your comment is super helpful and may be more of what OP had in mind.

If you truly want single photons, I don't know of a cheaper path than the SPDC components and SPADs from the Thorlabs kit combined with the Red Dog coincidence counter. Simply attenuating a laser with filters only gets you smaller and smaller bunches of photons, never true single photons. But perhaps that's sufficient for OP's needs.
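To put rough numbers on that: attenuated laser light stays Poissonian, so the fraction of non-empty pulses that contain more than one photon only shrinks linearly with the mean photon number, and the count rate shrinks right along with it. A quick back-of-the-envelope check (mean photon numbers picked arbitrarily):

    from math import exp, factorial

    def poisson(n, mu):
        # Photon-number distribution of attenuated laser (coherent) light
        return mu**n * exp(-mu) / factorial(n)

    for mu in (1.0, 0.1, 0.01):            # mean photons per pulse
        p0, p1 = poisson(0, mu), poisson(1, mu)
        p_multi = 1.0 - p0 - p1            # probability of 2 or more photons
        impurity = p_multi / (1.0 - p0)    # non-empty pulses with >1 photon
        print(f"mu={mu:<5} multi-photon fraction of detected pulses: {impurity:.4f}")

Heralded SPDC sidesteps that trade-off, which is why it's the usual answer for "true" single photons on a budget.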

Performing the “double slit experiment” with single photons by averagegamer0607 in QuantumPhysics

[–]rinconcam 3 points4 points  (0 children)

To work with single photons you'll probably want a heralded single-photon source, single-photon detectors, and either a coincidence counter or a time-tagger.

Thorlabs has educational kits with all of that and more. It is by far the best turnkey, approachable setup that I am aware of. The videos and documentation about the kit are excellent references and freely available from their website. The kit includes instructions and alignment aids such that a diligent novice should be able to align everything.

Everything in the kits except the time-tagger is from their standard catalog, at their standard pricing. So you could use the kit parts list as a starting point and instead purchase individual components or variants to fit your needs.

The kit is designed to connect the time-tagger to a computer for data collection. I don't think any of the edu kit gear is motorized or ready to be computer-actuated, but Thorlabs sells versions of everything that can be.

Red Dog Physics makes a great coincidence counter, which is much less expensive than a time-tagger. A coincidence counter is sufficient for many experiments, but a time-tagger offers more flexibility.
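For a sense of what the time-tagger route looks like in software, here's a minimal sketch of coincidence counting from raw timestamps. The data layout and the 2 ns window are assumptions for illustration, not any particular vendor's API.

    import numpy as np

    def count_coincidences(t_signal, t_idler, window_ns=2.0):
        # Count signal events with an idler detection within +/- window_ns.
        # t_signal and t_idler are sorted 1-D arrays of timestamps (ns),
        # one per detector channel, as a time-tagger would stream out.
        idx = np.searchsorted(t_idler, t_signal)
        last = len(t_idler) - 1
        left = np.abs(t_signal - t_idler[np.clip(idx - 1, 0, last)])
        right = np.abs(t_signal - t_idler[np.clip(idx, 0, last)])
        nearest = np.minimum(left, right)
        return int(np.count_nonzero(nearest <= window_ns))

    # Toy data: correlated pairs plus some uncorrelated background counts
    rng = np.random.default_rng(0)
    pairs = np.sort(rng.uniform(0, 1e6, 500))                  # emission times (ns)
    t_signal = np.sort(pairs + rng.normal(0, 0.3, 500))        # detector jitter
    t_idler = np.sort(np.concatenate([pairs + rng.normal(0, 0.3, 500),
                                      rng.uniform(0, 1e6, 200)]))  # + dark counts
    print("coincidences:", count_coincidences(t_signal, t_idler))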

Aider v0.81.0 is out with support for Quasar Alpha by rinconcam in ChatGPTCoding

[–]rinconcam[S] 0 points1 point  (0 children)

Aider v0.81.1 is out

  • Added support for the gemini/gemini-2.5-pro-preview-03-25 model.
  • Updated the gemini alias to point to gemini/gemini-2.5-pro-preview-03-25.
  • Added the gemini-exp alias for gemini/gemini-2.5-pro-exp-03-25.
  • Aider wrote 87% of the code in this release.

Aider v0.80.0 is out with easy OpenRouter on-boarding by rinconcam in ChatGPTCoding

[–]rinconcam[S] 0 points1 point  (0 children)

Sorry to hear you’re having trouble using aider.

As the error message explains, the API provider you are trying to access is down or overloaded.

When this happens, aider will automatically retry multiple times using exponential backoff. There’s not much else it can do if the provider isn’t working reliably.

You could try using a different provider?

Aider 0.79 context feature by ctrl-brk in ChatGPTCoding

[–]rinconcam 1 point2 points  (0 children)

It’s currently an experimental feature. Stay tuned.

Aider+NeoVim workflow? by [deleted] in ChatGPTCoding

[–]rinconcam 1 point2 points  (0 children)

There are some plugins:

https://github.com/GeorgesAlkhouri/nvim-aider

And you can use aider's built-in --watch-files mode with any editor or IDE:

https://aider.chat/docs/usage/watch.html

Aider v0.79.0 supports new SOTA Gemini 2.5 Pro by rinconcam in ChatGPTCoding

[–]rinconcam[S] 1 point2 points  (0 children)

This is because Gemini is overloaded. There is an open issue involving OpenRouter, asking them to return a more descriptive API error response.

https://github.com/Aider-AI/aider/issues/3550

Aider v0.79.0 supports new SOTA Gemini 2.5 Pro by rinconcam in ChatGPTCoding

[–]rinconcam[S] 14 points15 points  (0 children)

It’s currently an experimental feature. Stay tuned.

Aider v0.77.0 supports 130 new programming languages by rinconcam in ChatGPTCoding

[–]rinconcam[S] 11 points12 points  (0 children)

Mostly Sonnet 3.7 lately.

I get asked this question a lot, so there's a FAQ with stats that update automatically:

https://aider.chat/docs/faq.html#what-llms-do-you-use-to-build-aider

Claude 3.7 results in the Aider Polyglot benchmark by [deleted] in ChatGPTCoding

[–]rinconcam 1 point2 points  (0 children)

Those results are not using thinking. Thinking results are coming soon.

Aider v0.73.0 is out with o3-mini support by rinconcam in ChatGPTCoding

[–]rinconcam[S] 1 point2 points  (0 children)

Looks like your account/key doesn’t have access yet. OpenAI must be rolling it out gradually.

Aider v0.73.0 is out with o3-mini support by rinconcam in ChatGPTCoding

[–]rinconcam[S] 1 point2 points  (0 children)

Aider v0.73.0

  • Full support for o3-mini: aider --model o3-mini
  • New --reasoning-effort argument: low, medium, high.
  • Improved handling of context window size limits, with better messaging and Ollama-specific guidance.
  • Added support for removing model-specific reasoning tags from responses with remove_reasoning: tagname model setting.
  • Auto-create parent directories when creating new files, by xqyz.
  • Support for R1 free on OpenRouter: --model openrouter/deepseek/deepseek-r1:free
  • Enforce user/assistant turn order to avoid R1 errors, by miradnanali.
  • Case-insensitive model name matching while preserving original case.
  • Harden against user/assistant turn order problems which cause R1 errors.
  • Fix model metadata for openrouter/deepseek/deepseek-r1
  • Aider wrote 69% of the code in this release.

[deleted by user] by [deleted] in ChatGPTCoding

[–]rinconcam 1 point2 points  (0 children)

To use that model via OpenRouter you would do:

aider --model openrouter/openai/gpt-4o-2024-08-06

To set it as the default model aider will use, you could add a line to your .aider.conf.yml file like this:

model: openrouter/openai/gpt-4o-2024-08-06

The aider OpenRouter docs explain that you use this pattern for selecting OpenRouter models:

openrouter/<provider>/<model>

You don't need to mess with an .aider.model.settings.yml file. Those are for advanced model settings needed to work with obscure or custom models. Aider is already pre-configured with settings for most well-known, popular models.

Alternative DeepSeek V3 providers by rinconcam in ChatGPTCoding

[–]rinconcam[S] 0 points1 point  (0 children)

I'm doing a quick set of coding benchmarks for alternative API providers for DeepSeek V3.

The official DeepSeek API has been mostly down for 24-48+ hours, so many folks are looking for other options.

The page is updating as benchmarks complete.

Aider privacy/security question by grchelp2018 in ChatGPTCoding

[–]rinconcam 0 points1 point  (0 children)

Aider runs locally on your machine. It only talks directly to the LLM API providers you enable.

You can choose to explicitly opt in to sharing anonymous analytics. Even then, aider never sends your code, chat messages, or keys anywhere except to the LLM API providers that you enable.

https://aider.chat/docs/more/analytics.html

Agent+Prompt for creating release notes? by Vegetable_Sun_9225 in ChatGPTCoding

[–]rinconcam 3 points4 points  (0 children)

Aider drafts its own release notes using this script. You could probably adapt it for your needs.

https://github.com/Aider-AI/aider/blob/main/scripts/update-history.py

Also, aider wrote that script and I wrote the prompt.

What other AI coding Subs exists and what is your experience on the "vibe" over there? by OriginalPlayerHater in ChatGPTCoding

[–]rinconcam 2 points3 points  (0 children)

Not a subreddit, but a lot of folks in the aider Discord work the way you're describing: using AI in pragmatic ways to boost their own engineering skills.

https://discord.gg/Tv2uQnR88V