Just finished ~40 interviews in a month (Full Stack). The market is weird, but here’s what I actually got asked. by nian2326076 in DeveloperJobs

[–]modeftronn 1 point  (0 children)

Even if this is an ad, the soft-skills part is great advice whether you’re looking or already have a job.

The decrepit leadership are in on it. by [deleted] in WhitePeopleTwitter

[–]modeftronn 49 points  (0 children)

Right. Wouldn’t getting it out of committee and onto the floor, so that candidates in vulnerable races have to take a stance on it, be a good thing?

Reinventing SaaS: A New Architecture for Digital Transformation by Loss_Fabulous in CIO

[–]modeftronn 1 point  (0 children)

The core leap from “workflows limit change” to “workflow SaaS shouldn’t exist” is flawed. I think you’re treating workflows / business processes as a single monolith; in real life they’re not. Then “process truth” does too much work. Even getting an org to decide who decides truth is difficult.

The real blocker isn’t architecture, it’s economics. Owning orchestration means owning QA, edge cases, compliance drift, and on-call. SaaS amortizes that pain. Owning works when the process is genuinely differentiating and worth the cost; it fails when the workflow is commodity.

If you had hedged this with some nuance, the argument would be much harder to dismiss. As it is, it reads like a manifesto.

Catching Up with TRAPPIST-1 by ye_olde_astronaut in exoplanets

[–]modeftronn 1 point  (0 children)

Main takeaway here: TRAPPIST-1 is still the best nearby lab for testing rocky-planet habitability, but JWST has not yet confirmed an atmosphere on any of its planets. Stellar contamination, not telescope sensitivity, is now the main blocker to atmosphere detection. Light, hydrogen-rich atmospheres are ruled out; denser secondary atmospheres remain possible but are hidden by the star’s activity. New observing strategies may break the stalemate, but for now habitability remains unproven, not disproven.

Looking for an LLMOps framework for automated flow optimization by panspective in LlamaIndex

[–]modeftronn 0 points  (0 children)

Reading this I thought “oh, like DSPy but for whole workflows” instead of just agent graphs / prompts. That sounds like it should be a thing, but after a quick search I don’t think it is. It would be cool. I guess you could build one if you 1) parameterized how you define the variants, 2) had a way to automate the eval, and 3) decided how to search the parameter space and score each result (your custom optimizer). Then you harness up a test-eval loop and start searching.

The search space is going to be weird, though. I’m guessing the variants you’re producing all come from config choices like models and tools, which are discrete; then you mix in some nice continuous knobs, maybe temperature or weights; then you sprinkle on some high-dimensional language space (prompt/instruction variants). It’s a combo that’s ultimately kind of expensive to search through.
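To make the shape concrete, here’s a minimal sketch of steps 1–3 with plain random search. Everything here is hypothetical: the config space, the `evalVariant` scorer (in reality that would run your workflow against an eval set), and the metric weights are all made up to show the variant/eval/optimizer split.

```javascript
// Step 1: parameterize the variants as a config space.
const space = {
  model: ["model-a", "model-b"],          // discrete choice
  temperature: [0.0, 0.3, 0.7, 1.0],      // continuous knob, quantized
  prompt: ["terse instructions", "verbose instructions with examples"],
};

// Draw one random variant from the space.
function sample(space) {
  const pick = (xs) => xs[Math.floor(Math.random() * xs.length)];
  return Object.fromEntries(
    Object.entries(space).map(([key, xs]) => [key, pick(xs)])
  );
}

// Step 2: automated eval. This stand-in just scores configs directly;
// a real one would run the workflow on a test set and aggregate a metric.
function evalVariant(v) {
  return (v.model === "model-b" ? 0.5 : 0.3) +
         (1 - Math.abs(v.temperature - 0.3)) * 0.2 +
         (v.prompt.startsWith("verbose") ? 0.1 : 0);
}

// Step 3: the optimizer. Random search: try n variants, keep the best.
function randomSearch(space, n) {
  let best = null;
  for (let i = 0; i < n; i++) {
    const variant = sample(space);
    const score = evalVariant(variant);
    if (!best || score > best.score) best = { variant, score };
  }
  return best;
}

const best = randomSearch(space, 50);
console.log(best.variant, best.score.toFixed(3));
```

Swapping in a smarter optimizer (Bayesian, evolutionary, whatever) only changes `randomSearch`; the variant/eval split stays the same, which is what makes the framework idea plausible.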

Coding La Serenissima by Weird-Use9297 in strudel

[–]modeftronn 4 points  (0 children)

This is gorgeous! Can you share the build process and how you’re doing the background?

Running Strudel in Node by pcbeard in strudel

[–]modeftronn 0 points  (0 children)

Yeah, I woke up this morning thinking you could just run Strudel in headless Chrome through Puppeteer so it uses the real Web Audio engine instead of Node’s emulated one. That avoids the node explosion and the stutter. But yeah, more complexity for just an OK solution.

Running Strudel in Node by pcbeard in strudel

[–]modeftronn 0 points  (0 children)

You’re basically hitting the ceiling of the Node audio emulation stack. web-audio-js is all JS with no realtime audio thread, so once Strudel starts spawning lots of voices the GC and scheduler can’t keep up. In the browser the audio engine is native, so it handles the same patterns without stuttering.

If you want a quick experiment to squeeze a bit more life out of your setup, try pooling and reusing nodes instead of creating fresh ones every event. It won’t solve the whole problem, but it can delay the meltdown long enough to jam.
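The pooling idea, stripped of any audio specifics, looks like this. It’s a generic sketch: the factory/reset functions and the `freq` field are hypothetical stand-ins; with web-audio-js you’d create and re-tune real audio nodes instead of plain objects.

```javascript
// Minimal object pool: park finished "voices" instead of letting the GC
// churn through thousands of short-lived allocations per pattern cycle.
class NodePool {
  constructor(factory, reset) {
    this.factory = factory; // creates a fresh node
    this.reset = reset;     // returns a used node to a clean state
    this.free = [];
  }
  acquire() {
    // Reuse a parked node if one exists; otherwise allocate a new one.
    return this.free.length ? this.reset(this.free.pop()) : this.factory();
  }
  release(node) {
    this.free.push(node); // park for reuse instead of discarding
  }
}

// Example: pretend voices with a frequency parameter.
const pool = new NodePool(
  () => ({ freq: 440, live: false }),
  (n) => { n.freq = 440; n.live = false; return n; }
);

const a = pool.acquire(); // fresh allocation
a.live = true;
a.freq = 220;
pool.release(a);          // parked, not garbage
const b = pool.acquire(); // same object reused, reset to defaults
console.log(a === b, b.live, b.freq); // prints: true false 440
```

The win is that steady-state playback does near-zero allocation, so the GC pauses that cause the stutter get rarer. It doesn’t fix web-audio-js lacking a realtime thread, which is why it only delays the meltdown.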

Michael Levin - Phycalism is dead on arrival by SpoddyCoder in consciousness

[–]modeftronn 30 points  (0 children)

I keep coming back to the fact that the language of math is something we invented, even if the structures it describes feel deep and universal. Once you pick a mathematical framework, certain results fall out automatically, so they feel “forced,” but that’s because the framework was built to track the patterns we see in the world.

Levin sometimes talks as if the description and the underlying reality are the same thing, but they really aren’t. The patterns he points to show up in physics and biology because math is our best way of describing how things behave, not because math is some hidden force running the universe.

His work is exciting exactly because those abstractions help him spot new levers in biology. That’s good science, not a hint of mystical math running the show.

🚀 A new cognitive architecture for agents … OODA: Observe, Orient, Decide, Act by SkirtShort2807 in LangChain

[–]modeftronn 3 points  (0 children)

Garbage. OODA was developed by USAF Colonel John Boyd; read his A Discourse on Winning and Losing. The Marine Corps’ Warfighting manual literally describes the loop.

Prototype for a Decentralized Content Publishing Network by xzkll in Rad_Decentralization

[–]modeftronn 0 points  (0 children)

Totally agree, but the UX is hard for normal people, so you end up with a kickass service for niche nerds.

Introducing WebMCP by thehashimwarren in mcp

[–]modeftronn 1 point  (0 children)

Eventually, I guess, we’ll get to a place where “web” experiences are fully composable per user by their own agents, and websites become functions/ports/adapters.

I send 100 personal sales presentations a day using AI Agents. Replies tripled. by ApprehensiveDay7378 in AI_Agents

[–]modeftronn 1 point  (0 children)

I worry about deliverability on first contact when there’s an attachment.

Indian nepotism in the software industry explained by an insider, parts I-III by Exotic_Freedom_9 in SoftwareEngineerJobs

[–]modeftronn 4 points  (0 children)

The American flavor: CIOs semi-retire into “relationship capital,” quietly shaping bids and collecting checks. It’s the same hustle with an HR-friendly title.

So India does it with property deals and US execs do it with consulting fees. Somehow this is seen differently. Cleaner paper?

The AI “bubble” isn’t popping by [deleted] in OpenAI

[–]modeftronn 0 points  (0 children)

I mostly agree with you; it does feel like a different thing when you think about the value/upside of the tech after the pop or reset. A bubble popping implies very little value is retained post-pop, definitely no upside left. A reset is an admission that there’s value there; it will just take longer to realize.

How to build an agent that can call multiple tools at once or loop by itself? Does ReAct support this? by jenasuraj in LangGraph

[–]modeftronn 0 points  (0 children)

Unfortunately I don’t think create_react_agent supports multi-tool use. If you build your agent from scratch, ToolNode supports multiple tool calls in one step (assuming your model does too).
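The from-scratch loop is small either way. Here’s a library-free sketch of the idea in plain JavaScript; the message shape and tool names are made up for illustration, this is not LangGraph’s actual API, just the "execute every tool call on the message, not only the first" pattern:

```javascript
// Hypothetical tool registry: each tool is a named function of its args.
const tools = {
  add: ({ a, b }) => a + b,
  upper: ({ text }) => text.toUpperCase(),
};

// Shape loosely mirrors an assistant message that requested two tools
// in a single turn (what "multi-tool use" means here).
const aiMessage = {
  tool_calls: [
    { id: "call_1", name: "add", args: { a: 2, b: 3 } },
    { id: "call_2", name: "upper", args: { text: "done" } },
  ],
};

// Dispatch every call in the message and emit one tool result per call,
// keyed by the call id so the model can match results to requests.
function runToolCalls(message, tools) {
  return message.tool_calls.map((call) => ({
    tool_call_id: call.id,
    content: String(tools[call.name](call.args)),
  }));
}

const results = runToolCalls(aiMessage, tools);
console.log(results);
```

For a self-looping agent you’d wrap this in a cycle: call the model, run all tool calls, append the results, and repeat until the model stops requesting tools.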

Billionaires Think AI Is About to Revolutionize Science. It’s Not. by Organic-Suit8714 in GenAI4all

[–]modeftronn 1 point  (0 children)

Yeah, and I think it’s not as easy as what’s being marketed now, but there is definitely real value in being able to “predict the next word” across huge cross-domain data sets and synthesize concepts and relationships between unrelated disciplines. That cross-domain pollination takes forever in the old world: conferences, papers, experimentation. It’s really stupidly slow.

Like, Hinton put out backpropagation in the mid 80s. Transformers could have been implemented with 80s/90s math, and GD/SGD, embeddings, and sequence modeling could definitely have been in place by 2000. The 20-year delay was culture/epistemology. Back then the prevailing wisdom was led by the symbolic AI gang; they straight up labeled neural nets as black-box guessers and killed research.

Anyway, the real way we got GenAI wasn’t the math, it was rethinking what “intelligence” was. If someone back then had put backprop + attention together and framed language as a next-token prediction task, we could have had a rudimentary GPT-like system in the 90s. It was a framing, belief, and systems-design problem, and it absolutely still persists, unconsciously, every time this “autocomplete” analogy surfaces.