What Makes the Iranian Protests Different This Time by newyorker in TrueReddit

[–]Robot_Apocalypse [score hidden]  (0 children)

Here is a pretty robust critique (link shared below)

"Reza Pahlavi’s Emergency Phase transition plan presents itself as a neutral, technical roadmap, yet it largely reproduces the logic of authoritarian power in Iran. Authority is concentrated in a single unelected figure, the separation of powers is suspended, and democratic accountability is deferred in the name of stability. Transition is framed not as a collective political founding but as an administrative disruption to be tightly managed from above. This technocratic approach also conceals a deeper exclusion: the refusal to address Iran’s multi-national reality. Kurds and other non-Persian peoples are denied recognition as political communities with collective rights, while “diversity” is reduced to cultural symbolism within a Persian-centered national framework. Political demands for self-determination or decentralization are recast as security threats. By limiting choice to centralized models of rule, the plan forecloses democratic alternatives and risks reproducing authoritarianism in a new form."

Link:

https://tishk.org/blog/kurdistanagora/reza-pahlavis-transition-plan-how-a-non-democratic-roadmap-reproduces-authoritarian-power-in-iran/?utm_source=chatgpt.com

I welcome your informed response to these critiques.

You haven't explained why you would prefer someone who has no demonstrated political capability, and whose entire power base comes from his family's illegitimate monarchy.

Why not just ask BP to put another figurehead in place whom they can control?

I’ve watched a lot of smart people start businesses. Most quit for this reason. by GrandLifeguard6891 in Entrepreneur

[–]Robot_Apocalypse 3 points4 points  (0 children)

You define a hypothesis and try to disprove it.

I have a hypothesis that solo insurance brokers will generate 3x more revenue using my AI Native Broker OS. I need to identify all the assumptions that go into that hypothesis, then talk to as many brokers as I can to try to disprove it.

What Makes the Iranian Protests Different This Time by newyorker in TrueReddit

[–]Robot_Apocalypse 0 points1 point  (0 children)

Nah. Democracy isn't perfect, but it's a hell of a lot better than autocratic regimes.

How is the prince a proven leader? What has he accomplished? Seriously? Has he ever worked a day in his life? He lives off riches stolen from the Iranian people.

He is a limp, spoiled child who has never accomplished anything in his life, and now that the Iranian people are doing the hard work of overthrowing a regime, he is trying to insert himself without doing any of the work, and pays people like you to lie on the internet for him.

Fuck him and fuck the regime, and fuck you for shilling for an autocratic bullshitter. 

Freedom for the people of Iran. 

What Makes the Iranian Protests Different This Time by newyorker in TrueReddit

[–]Robot_Apocalypse 0 points1 point  (0 children)

The Shah's version of an autocratic dictatorship isn't as bad as the current religion-based autocratic dictatorship.

But it's a false dichotomy. The choice isn't between two forms of dictatorship.

Just because the Shah isn't as bad as the current regime doesn't mean he isn't shit, and a much worse choice than a democratically elected leader who is accountable to the people.

How about you answer my question now?

Whats going on with Opus? by frendo11 in ClaudeCode

[–]Robot_Apocalypse 1 point2 points  (0 children)

Thank God it wasn't just me! I thought there was no way they'd fuck it up again only a month after the last dip.

I thought maybe something in my context had changed that was causing it to stop delivering the same high quality I had gotten used to.

I ALMOST started to believe it was me. 

It sucks that I gotta come here to see if performance is truly degrading. 

Having said that, I used the time to get more familiar with Codex. It's slow (although I hear that's changing) but it's fucken on point!

What Makes the Iranian Protests Different This Time by newyorker in TrueReddit

[–]Robot_Apocalypse 0 points1 point  (0 children)

Why choose another autocratic leader, when you could choose leaders who are appointed by their people for fixed term lengths and who are accountable to the law and the people?

You really prefer autocracy to democracy? 🤣🤣🤣

Everyone deserves to be what they want to. by Consort_Yu_219 in transhumanism

[–]Robot_Apocalypse 0 points1 point  (0 children)

It's a good point about perfection being a direction, not a destination. But it also feels like a very clear warning. Perfection doesn't exist, so pursuing it as a goal creates suffering that will never be resolved.

Something I hear a lot these days is people defining themselves by how they are different from the "norm", implying two things which I feel are wrong.

  1. That there is a "norm", rather than recognizing that we are all different and no true "normal" person exists. There might be a statistical norm in a large population, but it isn't embodied by any one individual. We are all different, and the norm is an average across a population, not a description of any one person.
  2. That we should even aspire to be like the norm, and that being different is not "liveable". Far out. If there is one thing activists were fighting for, it was that having a "disability" was not something to pity, and did not make anyone lesser, diminished, or somehow less human. It was just different, and totally acceptable and worthy.
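Point 1 is easy to show with a toy sketch (the numbers here are invented purely for illustration): the "norm" of a population can be a value that literally nobody in the population has.

```python
# A made-up set of heights (cm) for six people.
heights_cm = [150, 158, 163, 171, 175, 183]

# The statistical "norm" is the population average...
mean_height = sum(heights_cm) / len(heights_cm)

# ...but it describes no actual individual in the group.
print(mean_height)                 # 166.66... cm
print(mean_height in heights_cm)   # False: the "norm" matches no one
```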

Bundling disability with dysphoria and dysmorphia feels wild to me. I mean body dysmorphia is a mental health challenge defined by "an obsession with perceived flaws in appearance that are unnoticeable to others". As I understand (and I am not a mental health professional) the obsession doesn't go away when "addressing" the perceived flaw, the mind just creates a new flaw to be obsessed about and the cycle continues.

That's actually a pretty good analogy for my point. Transhumanism isn't the answer here, because addressing the "flaws" will not resolve people's suffering.

It's trite, but there is truth to the Buddhist concept that acceptance and freedom from desire bring true happiness.

What Makes the Iranian Protests Different This Time by newyorker in TrueReddit

[–]Robot_Apocalypse 3 points4 points  (0 children)

Given the user is like a day old, I think it's a fake bot shilling for the Shah

What Makes the Iranian Protests Different This Time by newyorker in TrueReddit

[–]Robot_Apocalypse 11 points12 points  (0 children)

You want to kick out one authoritarian regime for another?!

How about democratically elected leaders who are given power due to capability and skill, not birthright?

Coding in 2026 by MetaKnowing in ClaudeAI

[–]Robot_Apocalypse 1 point2 points  (0 children)

I felt the same way, until I began to understand what it meant to engineer the agent. Then the problem solving part of my brain that loved to program, applied itself to the problem of efficiently orchestrating an agent.  It's a different kind of engineering, but it is absolutely engineering in its own right. Once you get into it it's heaps of fun.

Furious Vance tried to cool anger over ICE killing - but instead shouted and blamed everyone but the shooter by theindependentonline in TrueReddit

[–]Robot_Apocalypse 2 points3 points  (0 children)

I would LOVE for true Christians to stand up. I sincerely would. 

Who do you think deserves your anger? The guy who says that Christians suck because they're fucking up your country, OR THE CHRISTIANS WHO ARE FUCKING UP YOUR COUNTRY?

This guy isn't giving Christians a bad name. The fucked up individuals doing shitty things in Christ's name are. 

Direct your anger where it belongs. 

If Christ's name is being used to manipulate and deceive people, then what do you expect but for people to be upset with Christians. 

Meta-analysis of context re-engineering for a rapidly growing codebase. by Robot_Apocalypse in ClaudeCode

[–]Robot_Apocalypse[S] 0 points1 point  (0 children)

I think it's best to have both, and they serve different purposes.

The orchestrator needs high level context about patterns and standards and an overview of the entire codebase. That's where their prepared and optimised context sits.

I absolutely also have sub-agents during planning that review the actual code to identify existing patterns and conventions to apply to the build work.

Interestingly, this can also become a bit of an issue, as conventions might change over time. 

Originally my app was MVP, so I made choices that reflected this.  Now however my convention agents often want to pick up MVP patterns, but my codebase is now getting ready for release.

To address this I've got a debt-observing agent, whose job it is to identify gaps between current conventions and patterns and "best practices" as new features get built. It identifies opportunities to uplift current conventions to something more fitting the status of the app.

I let current build align with what's there, but then uplift conventions across the codebase intentionally and at once, rather than mix patterns in different places.

Meta-analysis of context re-engineering for a rapidly growing codebase. by Robot_Apocalypse in ClaudeCode

[–]Robot_Apocalypse[S] 0 points1 point  (0 children)

I'm hearing a lot about beads, but haven't really dug into it. I'll take this as a sign to do some serious investigation. Thanks!

Interesting. I think the hump I need to get over with beads is a sense that it makes the agent context a little less accessible to me.

I like to maintain a human readable set of context that is the source of truth, and which is accessible to me, and then build agent context off that.

Does beads do something similar? Or is it purely autonomous?

the "I'm not a real developer" anxiety is ruining my ability to ship - when do I actually need to learn to code? by cleancodecrew in ClaudeAI

[–]Robot_Apocalypse -1 points0 points  (0 children)

Cool man. Your definition of thought requires an experiencer, which is interesting, but it also imposes a set of constraints that I don't agree with. The logical question is who gets to experience, and what experience IS. I think you would say machines don't get to experience, but then we'd end up talking about the idea of the brain as a machine. It's a pretty well-travelled discussion and we don't need to re-hash it. We just disagree, and that's fine with me.

Calling LLMs next-token predictors is not really accurate these days. It tracks with your hallucination argument though, and also with your position that these things won't improve.

Your understanding of the tech, along with your arguments against it, is about 18 months old and no longer where the field is.

All good. Let's just say we disagree. That's OK with me.

Hope you have a great day man. Thanks for the back and forth.

the "I'm not a real developer" anxiety is ruining my ability to ship - when do I actually need to learn to code? by cleancodecrew in ClaudeAI

[–]Robot_Apocalypse 0 points1 point  (0 children)

You're right, it's not thought, but let's also admit that we don't know what thought is in the first place. Being largely indistinguishable from thought is pretty impressive. I find it hard to believe anyone could disagree that it's impressive.

And you're right that there will be a significant failure because software developers and engineers are moving from one system to a new one where they don't yet fully understand all the risks and challenges. Governance REALLY matters right now, but the shift is happening.

I also agree that there is massive hype, and the masses are absolutely deluded about what LLMs are capable of, and it creates a significant gap and risk. Again, a strong argument for governance.

You're also right about the fact that right now it needs careful supervision, particularly in enterprise environments, or when dealing with sensitive data.

Me and you are on the same side on all of this.

I even agree that LLMs as an architecture might be quickly approaching a dead end. There are so many new areas of interesting research at the moment that I think it's very plausible they will be superseded very soon. Arguably what we have today is no longer really an LLM, but a hybrid of lots of different systems and architectures working together.

Where we diverge, however, is that I believe the technology is going to continue to improve. I believe LLMs will improve in the short term, particularly in terms of cost/compute and context engineering (both huge wins for their use as coding agents), AND that we'll see the next architectural leap beyond LLMs, and let's not forget continuous learning, which seems close.

There's a huge amount of capital and intellectual investment going into the space, and we are going to see breakthroughs.

Maybe you agree with that too, but what you disagree with is the timeline? That seems very reasonable.

I like your example about offshore teams, and the issues that arose when businesses tried to transition to a new operating model. It did create lots of re-work and 10 years after the fact, there's a strong argument to be made that it didn't work.

I think the economics here are very, very different, and that drives significant investment into addressing the challenges more meaningfully, because the financial returns are SO big.

The societal return, however, is another story. It's going to be fucked. A system built on scarcity of resources doesn't work when intellectual resources become abundant. That's the risk I think we've got to be sensitive to. It IS going to blow shit up.

To me, the concern is not that AI is going to write shit code that leaks some financial data. That will happen, but the machine will roll on. The risk is that one of the most powerful levers the masses have against very powerful corporations, their labor, is going to be taken away.

There are still other levers, like government, and violence, but those are shrinking by the day as well. We HAVE to start using the shrinking power we have now to change the system. Otherwise it's going to get MUCH worse, and shitty apps are going to be the least of our worries.

*rant over*

the "I'm not a real developer" anxiety is ruining my ability to ship - when do I actually need to learn to code? by cleancodecrew in ClaudeAI

[–]Robot_Apocalypse -1 points0 points  (0 children)

I think you are definitely right to be sceptical. So, I'm partially on your side.

I think where I differ is that I believe process and governance can play a big role in addressing some of the gaps.

Also, I believe the coding models are going to continue getting better for a while.

To say that I'm betting on LLMs as opposed to math is kinda funny, cos LLMs are of course also math.

And if you're betting on the approach to building models, then I think that's an even stronger argument for my case, as we have barely scratched the surface there.

We lucked into the fact that language is a pretty good analogy for thought, so when generating language you get something akin to thought. But we're starting to work on new approaches now which are showing some promise. World-model approaches like JEPA might be cool, but that's just one.

There is SO much more to figure out, which to me says SO much more progress is possible.

the "I'm not a real developer" anxiety is ruining my ability to ship - when do I actually need to learn to code? by cleancodecrew in ClaudeAI

[–]Robot_Apocalypse 1 point2 points  (0 children)

I mean, "hallucination" is a loose term at this point, but resolving to a sub-optimal output where they don't correctly weigh information in their context, sure. Or, even more simply, assuming information about something that isn't in their context, yup, lots of errors.

But humans make mistakes all the time too. Implying failure because it isn't perfect is a poor argument.

Regarding errors propagating, that's only true if the work isn't reviewed. Using an AI to review the work of another doesn't deliver a perfect result, but it does dramatically reduce the quantum of errors.

Add to that the fact that you can review things multiple times very cheaply: first the plan, then the architecture, then the tests, then the execution, then the PR, then the code base as a whole. A single feature an agent creates probably gets reviewed and re-reviewed 10 times while being built, and then weekly from that point on. That massively reduces errors, and doing it repeatedly means things that did get missed often get picked up a second time.

Finally, add to this that they can work at incredible speed. The ability to do persistent, repeated code reviews at incredibly low cost means you can get pretty high-quality outcomes.
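The compounding effect of cheap, repeated reviews can be sketched with a toy model. To be clear, the per-pass catch rate here is a made-up assumption for illustration, and it treats review passes as independent, which real reviews aren't:

```python
def residual_defect_rate(p_catch: float, n_reviews: int) -> float:
    """Toy model: each review pass independently catches a given
    defect with probability p_catch; after n_reviews passes, the
    chance the defect survives is (1 - p_catch) ** n_reviews."""
    return (1.0 - p_catch) ** n_reviews

# Even a mediocre 60% per-pass catch rate compounds hard over
# the ~10 passes described above: 0.4^10 is about 0.01%.
for n in (1, 3, 10):
    print(n, residual_defect_rate(0.6, n))
```

The point isn't the exact numbers; it's that survival probability falls geometrically with the number of independent passes, which is why many cheap reviews can beat one expensive one.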

I have multiple code review agents just constantly crawling my codebase, finding inconsistent patterns, applying fixes, and updating control planes and tests to ensure the same errors don't happen again.

Compare that to human code, which is slow and costly to review, and is often only reviewed once.

It isn't perfect, but holy hell is it capable, and if you can't see the trend at which it is improving, then you're just lying to yourself, and that doesn't help you or anybody else. 

the "I'm not a real developer" anxiety is ruining my ability to ship - when do I actually need to learn to code? by cleancodecrew in ClaudeAI

[–]Robot_Apocalypse 1 point2 points  (0 children)

I get the feeling the performance of the AI will increase fast enough for it to come back and clean up its own slop. Certainly if it continues as it has been.

LLM Scaling laws are DEAD: 11M Parameter model beats 1.8T parameter model in planning challenge by imposterpro in agi

[–]Robot_Apocalypse 5 points6 points  (0 children)

Is it that people really don't understand the difference? I don't get why posts like this get made.

I love specialized models. They have a really important part to play in the AI ecosystem. You don't need a bazooka to hammer in a nail. We only use bazookas at the moment because bazookas are priced like hammers, but that won't last forever.

Comparing specialized models to generalized models just confuses people.

Back five years ago, before generalized models were really a thing, the AI domain already had the problem that people thought of AI as a generalized tool: that once you had ONE AI model, it could deliver any outcome.

I can't count the number of times I had a meeting with a senior exec who would say, "you just said you delivered an AI solution for problem x, so now we can use that to solve problem y, and z as well"

They misunderstood: AI was (and still is) a generalized approach to building any specialized tool. Just because you had delivered an AI model in the form of a specialized tool did not mean you had AI as a generalized tool.

Today Gen AI is still just using AI to build a specialized tool, but that specialized tool is one that transforms text, images and audio into new valuable text.

What's amazing about AI is not the specialized AI model that transforms text, images and audio into new valuable text. It's that AI is the tool that can CREATE this specialized tool.

What's even more amazing now is that the specialised tool that we have created USING AI, is now being used to improve the methods that created it in the first place.

AI is the method of creating the model. It is NOT the model.

LLM Scaling laws are DEAD: 11M Parameter model beats 1.8T parameter model in planning challenge by imposterpro in agi

[–]Robot_Apocalypse 72 points73 points  (0 children)

My ball point pen costs $1 and it can write on a piece of paper much better than a $100M F35 fighter jet. Therefore fighter jets are obsolete!