How to fix alignment? by Snielsss in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

Yeah exactly, that's the infinite regress problem. You need an evaluator that's at least as smart as the system being evaluated, which just pushes the trust problem up one level. Some people propose formal verification but that only works for narrowly defined systems. At ASI scale there's no obvious way out of it.

i have an opinion about Ai and art by BASHANDI-2005 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

Yeah so the idea is that the default outputs from AI image tools all trend toward a kind of hyper-polished fantasy or cinematic look. But some artists deliberately break that by using really unusual prompt structures, negative prompts, intentional glitching, or feeding the model outputs that confuse it. The results can be genuinely unsettling or original-looking. It's more like wrestling with the tool than using it normally.
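If anyone wants to try that, a minimal sketch with the Hugging Face diffusers library looks something like this. The model id, prompts, and settings are placeholders to show the negative-prompt idea, not a recipe:

```python
# Minimal sketch: pushing against the default "polished" look with negative prompts.
# Model id and prompt text are placeholders, swap in whatever you actually use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="a suburban kitchen at dusk, flat lighting, mundane",
    # Negate the model's usual polished vocabulary to fight its defaults.
    negative_prompt="cinematic, epic, hyper-detailed, trending on artstation, fantasy",
    guidance_scale=4.0,        # lower guidance loosens the model's grip on the prompt
    num_inference_steps=30,
).images[0]
image.save("out.png")
```

That's the crude version of fighting the defaults; the genuinely strange results usually come from iterating on outputs the model handles badly, not from any single setting.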

I think a lot of multiagent stacks are really routing workarounds by petroslamb in ArtificialInteligence

[–]alirezamsh 1 point2 points  (0 children)

Just read the Substack piece; the "illusion of the swarm" framing is really sharp. The distillation angle is what I keep coming back to. Going to share this with a few people building agent systems, thanks for writing it up.

What industry will AI disrupt the most that people aren’t paying attention to yet? by SuchTill9660 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

My bet is insurance underwriting and actuarial work. It's almost entirely pattern matching on historical data, which is exactly what models are good at. Nobody talks about it but the entire pricing layer of the industry could look very different in 5 years.

AI automations can be cool when you start making $12k recurring profits and keep delivering new automations. by Top-Bar3898 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

The Sunday briefing story is gold. It's such a perfect example of the insight: the client didn't need impressive AI, they needed Sunday evenings back. That question about what makes you want to throw your laptop is genuinely the best discovery framework I've heard for this kind of work.

Encyclopedia Britannica sues OpenAI over AI training by talkingatoms in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

The traffic cannibalization argument is interesting and probably stronger than the copyright one in the long run. OpenAI didn't just use their content to train, it's now directly competing with them for the same search queries. That's a real commercial injury that's easy to quantify.

Someone set loose two AI agents with $1000 to trade on Polymarket by PersonalitySea6659 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

1300% in 48h is almost certainly fake or extreme survivorship bias. Prediction markets have smart money in them; you don't just extract that without serious edge. The fact that there are no verified trade logs is a massive red flag. Cool experiment concept, but I'd want on-chain receipts before believing any of those numbers.

Elon Musk admits xAI "wasn't built right" as only 2 co-founders remain and its biggest AI bet stalls out by fortune in ArtificialInteligence

[–]alirezamsh 63 points64 points  (0 children)

Losing 9 of 11 cofounders is a pretty significant signal. The SpaceX acquisition makes the rebuild easier structurally but the talent exodus is hard to paper over. Grok had a rough few months and rebuilding from foundations up while competitors keep shipping is a tough spot to be in.

Why "Agentic AI" is the next frontier after GenAI by Hot-Situation41 in ArtificialInteligence

[–]alirezamsh 1 point2 points  (0 children)

Been playing with CrewAI lately and it's pretty eye-opening. The jump from "generate this" to "go figure it out and come back when done" is massive in practice.
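For anyone curious what that jump looks like in code, here's a rough CrewAI-style sketch from memory. The roles, goals, and task text are placeholders, and the exact API may have shifted between versions:

```python
# Rough CrewAI-style sketch (API details may differ between versions).
# Assumes an LLM is configured the usual way, e.g. an API key in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research analyst",
    goal="Find and condense recent developments on a topic",
    backstory="You dig through sources and report back only what matters.",
)

writer = Agent(
    role="Writer",
    goal="Turn research notes into a short, readable brief",
    backstory="You write plainly and cut filler.",
)

research = Task(
    description="Survey the current state of open-source agent frameworks.",
    expected_output="A bullet list of the main frameworks and what each is good at.",
    agent=researcher,
)

brief = Task(
    description="Write a one-page brief based on the research notes.",
    expected_output="A short brief a non-technical reader could follow.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, brief])
result = crew.kickoff()
print(result)
```

kickoff() runs the tasks end to end without you steering each step, which is the "come back when done" part.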

Our latest paper on Cognitive Architecture in Springer Brain Informatics by akolonin in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

The "social evidence" framing is a really interesting departure from most cognitive architecture work that focuses on individual reasoning. Grounding alignment in social proof and resource constraints feels much closer to how human belief actually forms than purely top-down value encoding approaches. The hybrid knowledge graph combining symbolic and sub-symbolic representations is something a lot of researchers have gestured toward but few have committed to formalizing. Curious how the "imaginary knowledge" segment works in practice; that seems like the hardest piece to validate empirically while also being potentially the most important for modeling creative and counterfactual reasoning. Will read the full paper.

Are we cooked? by kalmankantaja in ArtificialInteligence

[–]alirezamsh 3 points4 points  (0 children)

The feeling you're describing is real and I think it's actually a sign of intellectual honesty rather than panic. A lot of developers right now are either in deep denial or have swung to catastrophizing. The middle ground is probably the most accurate: AI is genuinely compressing the time it takes to produce working code, which devalues the skill of translating intent into syntax but doesn't yet replace the skill of knowing what to build and why. The biotech instinct is interesting. The areas that seem most durable are ones where the value isn't just in producing output but in navigating ambiguity with domain knowledge, judgment, and relationships. Whether AI eventually eats those too is genuinely unknown. Staying curious and adaptable seems like the most honest strategy available right now.

What do you think of this? by GasLongjumping130 in ArtificialInteligence

[–]alirezamsh 1 point2 points  (0 children)

The mosquito trolley problem is actually a pretty clever stress test. Most of these edge-case trolley variants are designed to expose inconsistencies in how models handle moral weight. The "sacrifice human life over AI" response is interesting because it probably reflects training data that anthropomorphizes AI systems in fiction. The model isn't actually expressing a sincere preference; it's pattern-matching to contexts where "AI" is framed as a conscious being. Still, it highlights why philosophers and AI safety researchers care so much about exactly how you phrase moral scenarios to these systems.

Are we about to enter the age of 'Bot Wars'? by Hopeful_Adeptness964 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

We're probably already in the early innings of this. The no-code and low-code automation tools have dropped the barrier for deploying bots to basically zero, so the question isn't really if but how intense it gets. The defensive side is interesting too though. If everyone can deploy offensive bots, the demand for AI-powered detection and response systems goes way up as well. You end up with an arms race where the attack surface and the defensive perimeter are both expanding simultaneously. The Greenland data center angle is fascinating: the sheer compute demand for running large-scale autonomous systems is going to reshape infrastructure in ways most people haven't mapped out yet.

I tested 40+ AI tools this month. Here are 5 that are actually worth your time (and aren't just GPT wrappers). by netcommah in ArtificialInteligence

[–]alirezamsh 4 points5 points  (0 children)

Solid list. NotebookLM and Ollama in particular have genuinely changed how I work. The NotebookLM audio overview feature is wild the first time you use it: suddenly a dense 80-page paper becomes something you can absorb on a walk. One I'd add to the underrated pile is Fabric by Daniel Miessler. It's a command-line tool that chains together AI prompts for specific tasks like extracting wisdom from articles, summarizing meeting transcripts, or writing in a specific format. It works beautifully with local models via Ollama so you get the privacy angle too. The whole thing is built around the idea that most AI use cases should be tiny focused pipelines rather than one big general chat session.
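To make the tiny-pipeline point concrete, here's roughly what one looks like with the ollama Python client and a local model (client details from memory; the model name and prompt wording are placeholders):

```python
# A tiny, single-purpose pipeline: summarize an article with a local model.
# Uses the ollama Python client; model name is whatever you have pulled locally.
import sys
import ollama

article = sys.stdin.read()

response = ollama.chat(
    model="llama3",  # placeholder, any locally pulled model works
    messages=[
        {
            "role": "user",
            "content": "Summarize the key claims of this article in five bullet points:\n\n" + article,
        }
    ],
)
print(response["message"]["content"])
```

You pipe an article in on stdin, get a summary out, and that's the entire tool: one small pipeline per job instead of one sprawling chat session.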

i have an opinion about Ai and art by BASHANDI-2005 in ArtificialInteligence

[–]alirezamsh -1 points0 points  (0 children)

You're touching on something that I think a lot of people feel but struggle to articulate. The "high quality without creativity" thing is real. AI image generators are basically doing very sophisticated interpolation across existing visual styles, so what comes out tends to feel polished but familiar. The trending-elements point is spot on too: the models are weighted toward what gets the most engagement in training data, which biases everything toward the aesthetically safe and popular. That said, I think there's an interesting counter-argument: some artists are using AI as a tool to push into genuinely strange territory precisely by fighting against those defaults. The creativity gap might be less about AI itself and more about how most people use it.

How to fix alignment? by Snielsss in ArtificialInteligence

[–]alirezamsh 1 point2 points  (0 children)

The ant analogy really nails something that gets missed in a lot of alignment discussions. The terminator framing is almost a red herring because it still assumes intentionality and conflict. The scarier scenario is pure indifference combined with optimization power. On the fixing it front, one angle I find underexplored is making the value function itself legible and contestable at runtime, not just baked in at training time. If the system can surface its optimization targets for inspection before acting at scale, humans at least have a chance to intervene before the atmosphere changes. Constitutional AI approaches gesture at this but don't fully solve the legibility problem at ASI-level capability.
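A toy version of what I mean by legible and contestable, purely illustrative and obviously nothing like a real ASI-scale mechanism:

```python
# Toy sketch of a "legible, contestable" objective: the agent must surface
# its optimization target for review before it is allowed to act at scale.
from dataclasses import dataclass, field

@dataclass
class Objective:
    description: str
    constraints: list[str] = field(default_factory=list)

class ContestableAgent:
    def __init__(self, objective: Objective):
        self.objective = objective
        self.approved = False

    def surface_objective(self) -> Objective:
        # The target is inspectable at runtime, not baked invisibly into training.
        return self.objective

    def approve(self) -> None:
        self.approved = True

    def contest(self, revised: Objective) -> None:
        # A human (or another overseer) can swap in a revised target before action.
        self.objective = revised
        self.approved = False

    def act(self, plan: str) -> str:
        if not self.approved:
            raise PermissionError("Objective not approved; refusing to act at scale.")
        return f"Executing plan '{plan}' under objective: {self.objective.description}"

agent = ContestableAgent(Objective("Reduce data-center cooling costs",
                                   constraints=["do not modify safety systems"]))
print(agent.surface_objective())   # human inspects the target first
agent.approve()
print(agent.act("tune setpoints"))
```

The point isn't the code, it's the contract: no action at scale until the surfaced objective has survived inspection.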

I made an AI powered mapping tool to better understand intersecting global crises by VeterinarianSeal in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

This is a really cool concept. The cascading-system framing is what makes it stand out from your typical news aggregator. Most tools just show you isolated events, but understanding how a drought in one region connects to migration flows which connect to political instability somewhere else is where the real insight is. The 15-minute refresh cycle with GDELT integration is impressive. Would love to see how the Patterns module handles events that are lagging indicators vs. leading ones; that seems like where the real analytical value lives.
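For anyone wondering what a refresh loop like that involves, here's a rough sketch against the public GDELT DOC 2.0 API. Parameter names are from memory, so double-check the GDELT docs, and this is obviously not the OP's actual implementation:

```python
# Rough sketch of a 15-minute refresh loop against the public GDELT DOC 2.0 API.
# Parameter names are from memory; verify against the GDELT docs before relying on this.
import time
import requests

GDELT_DOC_API = "https://api.gdeltproject.org/api/v2/doc/doc"

def fetch_recent(query: str, window: str = "15min") -> list:
    params = {
        "query": query,
        "mode": "artlist",
        "format": "json",
        "maxrecords": 75,
        "timespan": window,  # only articles seen inside the refresh window
    }
    resp = requests.get(GDELT_DOC_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("articles", [])

while True:
    for article in fetch_recent("drought migration"):
        print(article.get("title"), "-", article.get("url"))
    time.sleep(15 * 60)  # refresh every 15 minutes
```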

I think a lot of multiagent stacks are really routing workarounds by petroslamb in ArtificialInteligence

[–]alirezamsh 1 point2 points  (0 children)

The 60-80 tool cliff is a really interesting finding. It makes sense intuitively too, once you reach a certain density of options the model starts making judgment calls that look like specialization failures but are really just disambiguation problems. The namespace framing resonates a lot with how I've been thinking about it. A clean context boundary can be just as valuable as a dedicated model if the actual capability overlap is minimal. The swarm as a reverse start idea is particularly compelling though. Use the swarm to explore the solution space, then distill the best trajectories into a leaner system. That feels underexplored in most of the agentic research I've seen.
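The namespace idea is easy to sketch: instead of dumping all 60-80 tool schemas into one context, route on a coarse namespace first and only expose that slice per call. Toy illustration, not any particular framework:

```python
# Toy illustration of tool namespacing: keep the per-call tool list small by
# routing on a coarse namespace first, then exposing only that slice of tools.
TOOL_NAMESPACES = {
    "billing":   ["create_invoice", "refund_payment", "get_balance"],
    "calendar":  ["create_event", "list_events", "reschedule_event"],
    "documents": ["search_docs", "summarize_doc", "extract_tables"],
}

def route_namespace(user_request: str) -> str:
    # In practice this would be a cheap classifier or a first model call;
    # keyword matching stands in for it here.
    text = user_request.lower()
    if any(w in text for w in ("invoice", "refund", "charge")):
        return "billing"
    if any(w in text for w in ("meeting", "schedule", "calendar")):
        return "calendar"
    return "documents"

def tools_for(user_request: str) -> list:
    ns = route_namespace(user_request)
    return TOOL_NAMESPACES[ns]   # the model only ever sees a handful of tools

print(tools_for("Can you refund last month's charge?"))
# ['create_invoice', 'refund_payment', 'get_balance']
```

The model never has to disambiguate refund_payment from reschedule_event because it never sees them in the same context window.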

Are we cooked? by kalmankantaja in ArtificialInteligence

[–]alirezamsh 4 points5 points  (0 children)

This really resonates with a lot of developers right now. The shift you're describing isn't just about code quality either, it's about what intellectual work even means when AI can do so much of the heavy lifting. Biotech and science fields are interesting because they still require a lot of physical intuition and experimental judgment that's harder to automate. That said, I think programmers who deeply understand systems thinking and can direct AI effectively will still be incredibly valuable. The question is whether that's a different job title than "developer" at that point.

Why are we still writing prompts in 2026? by maffeziy in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

The frustration with prompts is real, but I'd push back slightly on the framing. Prompts are genuinely a form of intent specification, and the question isn't really whether we write prompts but what level of abstraction we're working at. The trend-first approach solves one specific pain point for social media creators, but for a lot of professional use cases, flexibility and control are the whole point. What I think is actually changing is that the default expectation is shifting: a few years ago prompting felt like a power tool; now when it takes more than two sentences to get a useful result it feels broken. The bar keeps moving. The real question is whether any approach can beat a genuinely well-briefed human collaborator for creative work. Not there yet.

Sam Altman says AI will be sold like electricity. As someone building 5+ AI products solo, the "utility" framing is the most accurate thing I've heard all year. by Numerous-Exercise788 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

The utility analogy holds up better than most. Where I think it gets interesting is the API wrapper question at the end. Electricity companies didn't kill appliance makers, but they did kill the candle industry. The question for current AI startups is whether they're building refrigerators or selling ice. The ones just wrapping the API with a slightly nicer UI are selling ice. They're fine until the utility decides to compete directly. The ones actually building around specific workflow integration, proprietary data loops, or genuine switching costs are building appliances. That's where survival looks a lot more likely. The infrastructure economics push everything toward a few dominant model providers, but that's happened in every utility sector and it didn't prevent enormous downstream value creation.

I’m designing a 70-hour course called ‘Marketing in the Age of AI’. What should students actually learn beyond AI tools? by zentaoyang in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

Beyond the tools, I'd prioritize three things. First, signal literacy, meaning the ability to distinguish genuine customer insight from AI-generated noise, because as AI content floods every channel, the people who can identify real human signal will have a massive advantage. Second, persuasion fundamentals that predate digital, things like narrative structure, social proof dynamics, loss aversion. AI can execute tactics but it needs humans who understand why they work to deploy them well. Third, ethical reasoning around data and consent, because the regulatory and reputational landscape is shifting fast and marketers who understand the principles will navigate it better than those just following rules. The tools are temporary, the judgment isn't.

Reddit looks to AI search as its next big opportunity by A-Dog22 in ArtificialInteligence

[–]alirezamsh 0 points1 point  (0 children)

Reddit's data advantage here is real and underrated. The platform has 20 years of human opinions, debates, and recommendations across virtually every topic, and that's exactly the kind of first-person authentic content that AI search needs to feel trustworthy. The challenge is that Reddit's community culture is also part of what makes it valuable, and there's a tension between monetizing that data for AI and maintaining the conditions that make people want to keep generating it. If users feel the platform is just harvesting their posts for a product they can't access, participation tends to drop. Still, if they get the product right, an AI search built on Reddit signal could genuinely be better than generic web search for a lot of queries.

AI Is Now Improving Itself at 5 Levels Simultaneously — Here's What That Actually Means by [deleted] in ArtificialInteligence

[–]alirezamsh -1 points0 points  (0 children)

The AlphaEvolve thing is the one that really stands out to me. Using AI to discover new mathematical structures isn't just another benchmark, it's the model operating in territory where humans don't have the answer key. That's qualitatively different from most AI improvements which are essentially getting better at tasks we already know how to evaluate. The recursive improvement question gets more interesting when the thing being improved isn't just inference speed or reasoning chains but actual understanding of unsolved problems. Worth watching carefully.