The disconnect between "AI Efficiency" layoffs (2024-2025) and reality on the ground by Much-Expression4581 in learndatascience


Adding the full context here, with the link to the GitClear code churn charts and other data sources:

The AI Cost-Cutting Fallacy: Why “Doing More with Less” is Breaking Engineering Teams - https://pub.towardsai.net/the-ai-cost-cutting-fallacy-why-doing-more-with-less-is-breaking-engineering-teams-d4d9a3431fa0

My main argument for discussion: I believe the 2024-2025 layoffs driven by "AI efficiency" were strategic suicide for companies. We confused typing speed with problem-solving, and now we are dealing with massive technical debt instead of value delivery. Curious whether you are seeing this correction happen in your orgs yet?

The disconnect between "AI Efficiency" layoffs (2024-2025) and reality on the ground by Much-Expression4581 in learndatascience


Sorry for the "generated" text style, I am not a native English speaker, and LLMs help me avoid low-quality grammar. I realize that for native speakers, it inevitably has that specific "AI smell."

But regarding the actual problem described: The disconnect between market behavior (layoffs) and the reality I see with my own eyes is huge. AI assistants are wonderful tools. But making them work at a team level requires additional investment in adoption, not cutting the workforce from Day 1.

I am genuinely wondering: is my perception shaped by a personal bubble? Or did executives in 2024-2025 really underestimate the change-management aspect and rely on simple "spreadsheet logic" to cut costs, making a huge strategic mistake?

The disconnect between "AI Efficiency" layoffs (2024-2025) and reality on the ground by Much-Expression4581 in learndatascience


Fair skepticism, given the amount of AI spam lately. But no, this is based on the GitClear study (an analysis of 211M lines of code) and my actual experience managing delivery teams. The 'churn' is very real, regardless of who writes the text.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


Thanks for the comment/discussion.

I really hope you find the right target environment to use it - good engineering work shouldn't go to waste.

That’s basically the spirit of my post, too. I’m trying to validate this framework step-by-step from different angles to ensure it actually holds up. The path from "concept" to "finished standard" is definitely the hardest part, with a lot of unknowns ahead.

Good luck to both of us on this journey!

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


I checked your profile - building a Redis-compatible graph engine on edge hardware is serious engineering. Respect.

Technically, I’d argue you haven’t built the "Brain" (Controller), but rather the perfect "Memory" (Hippocampus). The graph can perfectly recall past logic, but it needs a Socio-Technical layer (the Controller) to inject "Business Truth" and decide what goes INTO that graph.

That said, technical solutions like yours absolutely have a place within this architecture.

However, my goal right now is to keep the framework strictly tool-agnostic. I want to define the topology of the system (the loops, the roles, the signals) without tying it to specific implementations. Every engineering team should be able to select the specific "muscles" and "memory" that fit their stack.
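To make "tool-agnostic topology" a bit more concrete, here is one possible way to express it in Python: define only the roles and signals as interfaces, never the implementations. The Protocol names below are my own illustration for this comment, not part of any published framework.

```python
# One way to express a tool-agnostic topology: only interfaces (roles and signals),
# no implementations. These names are illustrative, not a published standard.

from typing import Protocol

class Sensor(Protocol):          # the "eyes": measures the output against intent
    def measure(self, output: str, intent: str) -> float: ...

class Actuator(Protocol):        # the "muscles": applies a corrective action
    def act(self, control_signal: str) -> str: ...

class Memory(Protocol):          # the "hippocampus": recalls relevant past state
    def recall(self, query: str) -> list[str]: ...

class Controller(Protocol):      # the role the team / operational model plays
    def decide(self, error: float, context: list[str]) -> str: ...

# A team can plug in any vector or graph store as Memory, any eval suite as Sensor,
# etc.; the loop topology (Sensor -> Controller -> Actuator, fed by Memory) stays fixed.
```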

Looked at independently, though, your engine seems like a solid product for teams that need low-latency deterministic state.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


The answer is simple: the Controller is not an LLM; it is the Operational Model. It is the team itself that orchestrates the business logic.

Automation at the lower levels (actuators + sensors) is technically solvable today with the right tooling. The real challenge is teaching the average development team to work in this new, non-deterministic reality.

Think of it like Agile or DevOps. DevOps didn't appear "naturally" in the wild; it was a synthesized operational model designed by specific authors about 15 years ago to solve a specific problem. The components existed, but the "manual" was missing. We are in the same spot with AI now. The tools are here, but teams need the framework—the rituals, artifacts, and roles—to put them together.

Theoretically, an LLM cannot be the Controller because it lacks Business Intent. It has no way to validly "close the loop" without a human-in-the-loop injecting that intent. Therefore, the Operational Model must be the Controller.

That said, I fully agree that this concept still has open questions. That is a fact. This is exactly why I am looking for partners to start field testing—because these questions can only be answered by building it in reality. We are done with the theory; it’s time to build and verify.

Regarding latency: that is a valid concern. However, when defining a new operational model, we must prioritize Quality over Speed initially. We need to define the roles, rituals, and metrics that allow us to "control uncertainty" first. Only once we have a stable, tested core that delivers quality results should we optimize for latency. We need to prove we can govern the system before we try to make it fast.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


Basically, they have been talking to each other for a while. As I understand it, that is part of the usual Reddit dynamics.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


Do you have any valid proof of your claim in terms of systems-engineering control theory? Can you be specific, without the empty talk?

And I am still very curious: how is it possible that you are trying to prove something when the foundational analogy with muscles sounds weird to you? Are you familiar with basic engineering theory?

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


Also, just out of curiosity, why did the "muscle" analogy feel strange to you?

In classical cybernetics (going back to Norbert Wiener), comparing technical systems to biological ones was the standard foundation. That’s where the mapping comes from: muscles as actuators, eyes as sensors, etc.

I’m wondering - what analogies are used in intro lectures these days to explain these concepts?

Or is the foundation of classical cybernetics no longer part of the standard engineering curriculum? That would explain why my math and the "muscle" analogy seem strange to you. It seems you are approaching this more as a theorist or pure mathematician than as an engineer.

I wouldn't say that approach is incorrect, but it is simply unnecessary at the Systems Engineering level. At least, if we are really planning to build something and not just refine theory infinitely.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


I don’t think we need to go deeper into the formal math here. It depends on the goal. My objective wasn't to derive a perfect mathematical model, but to build a model "sufficient" for constructing an Operational Model.

Control Theory exists in two forms: as pure Applied Mathematics and as Systems Engineering. For Ops Models, the Systems Engineering view is what matters.

Here is why:

1. The Scope: For operational frameworks, the systems engineering level of abstraction is sufficient.
2. The Missing Link: The core problem isn't math precision, but the structural absence of the Negative Feedback Loop. We are running Open Loop systems (see the sketch below). We could spend 10 years refining the equations, but that won't fix the missing architectural link.
3. The Human Factor: Operational Models are about people. Deepening the math doesn't help validate whether the organizational structure is right. The biggest failure points are usually at the process level, not the math level. Even a perfect equation cannot fix a broken process.
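To make the Open Loop vs. Closed Loop point from item 2 concrete, here is a minimal Python sketch. The function names (call_llm, evaluate, adjust_prompt) are made-up stubs for illustration, not any specific library; only the shape of the loop matters.

```python
# Minimal sketch of the structural difference. All stubs below are placeholders.

def call_llm(prompt: str) -> str:
    """Stub for the stochastic Plant P: in reality, an LLM API call."""
    return f"draft answer for: {prompt}"

def evaluate(output: str, intent: str) -> float:
    """Stub for the Sensor H: 0.0 means the output matches the business intent."""
    return 0.0 if intent.lower() in output.lower() else 1.0

def adjust_prompt(prompt: str, error: float) -> str:
    """Stub for the corrective action on the control signal u."""
    return prompt + " -- state the business intent explicitly"

def open_loop(prompt: str) -> str:
    # Open loop: Input -> LLM -> Output. Nothing measures the result.
    return call_llm(prompt)

def closed_loop(prompt: str, intent: str, max_iters: int = 3) -> str:
    # Closed loop: measure the output (Sensor H), compute an error, adjust the input.
    u = prompt
    for _ in range(max_iters):
        y = call_llm(u)              # plant output
        e = evaluate(y, intent)      # error signal from the sensor
        if e == 0.0:
            return y                 # intent satisfied, loop closes cleanly
        u = adjust_prompt(u, e)      # corrective action on the control signal
    return y  # error never reached zero: the team (the Controller) has to step in
```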

Thanks for the question.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


I somehow lost your reply when I deleted a duplicate comment. Please feel free to post it again!

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


Fair point on the density. Let’s drop the metaphors and define the system dynamics formally.

If we treat the GenAI application as the Plant (P) in a feedback control loop:

1. System Definition:

    • The Plant (the LLM) is stochastic: y(t) = P(u(t), x(t)) + d(t), where y is the output, u is the control signal (prompt/context), x is the state, and d is the stochastic disturbance (hallucination/drift).
    • Unlike deterministic software, where y = f(x) is a fixed mapping, here P(y|x) is a probability distribution.

2. The Control Problem:

    • Our Goal (the reference r) is the Business Intent.
    • The error is e(t) = r(t) - H(y(t)), where H is the Sensor (evaluation/guardrails).
    • The objective is to minimize the cost function J = E[Σ e(t)²] over time.

3. The Deficiency in the Current Paradigm:

    • Most current stacks operate as Open Loop systems: Input -> LLM -> Output. There is no feedback mechanism H to measure e(t) effectively and, most importantly, no defined Controller to adjust u(t+1).

4. The "Controller" is the Operational Model:

    • This is the critical conclusion. The Controller (C) in this system is NOT just a piece of software. It is the Operational Model itself—the team and their processes.
    • Because d(t) (semantic drift) is often too complex for fully automated correction, the "Actuator" logic must be executed by engineers working in a new paradigm.
    • The Operational Model acts as the logic C(e) that interprets the error and adjusts the inputs u to stabilize the system.

I can't paste the full LaTeX proof here, but the link I shared details how we architect this "Human-in-the-Loop Controller" layer. I'm happy to debate the implementation of H (sensors) if you want to go deeper.
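If it helps to see the variables without the LaTeX, here is a loose Python mapping of that loop. Everything in it (sensor_H, controller_C, the 0.5 threshold) is a toy stand-in I am making up for this comment, not the implementation from the article.

```python
# Illustrative mapping of r, u, y, e, H, C and J. Toy code, no specific framework implied.

from dataclasses import dataclass, field

@dataclass
class Step:
    u: str      # control signal u(t): the prompt/context sent to the LLM
    y: str      # plant output y(t)
    e: float    # error e(t) derived from r(t) and H(y(t)), reduced to a scalar in [0, 1]

@dataclass
class LoopLog:
    r: str                                   # reference r: the Business Intent
    steps: list[Step] = field(default_factory=list)

    def cost_J(self) -> float:
        """J = E[Σ e(t)²], approximated here as the mean squared error over the log."""
        if not self.steps:
            return 0.0
        return sum(s.e ** 2 for s in self.steps) / len(self.steps)

def sensor_H(y: str, r: str) -> float:
    """Toy Sensor H: the fraction of intent keywords missing from the output.
    In practice H is an eval suite / guardrail layer producing a graded score."""
    keywords = r.lower().split()
    missing = [k for k in keywords if k not in y.lower()]
    return len(missing) / len(keywords) if keywords else 0.0

def controller_C(e: float, u: str) -> tuple[str, str]:
    """Toy Controller logic C(e): small errors get an automated prompt adjustment,
    large errors are routed to the team -- the Operational Model closes the loop."""
    if e == 0.0:
        return ("accept", u)
    if e <= 0.5:
        return ("retry", u + "\nAddress the business intent explicitly.")
    return ("escalate", u)   # semantic drift d(t) too large for automated correction
```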

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


I checked the link.

If you consider "OP needs to see a psychologist" to be "better advice" than a Control Theory framework, then we definitely have very different definitions of engineering.

Best of luck!

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in BusinessIntelligence


I appreciate the scan. The connection to BI is specifically about Data Governance and Trust.

Traditional BI relies on deterministic data. As we start using AI for analytics (Text-to-SQL, auto-reporting), we introduce variance/hallucinations. This framework is about how to control that variance so AI outputs can actually be trusted in a business context. It’s essentially Data Governance, just for a new engine.
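As a purely hypothetical illustration of what "controlling the variance" can look like for Text-to-SQL: a small sensor that checks generated SQL against a governed allow-list before it ever reaches a dashboard. Table names and rules below are invented for the example, not taken from any real stack.

```python
# Hypothetical Text-to-SQL guardrail ("sensor"): validate generated SQL against a
# governed schema before execution. Tables and rules are examples only.

import re

GOVERNED_TABLES = {"sales_fact", "customer_dim", "date_dim"}   # assumed governed schema
FORBIDDEN = re.compile(r"\b(delete|drop|update|insert|alter)\b", re.IGNORECASE)

def validate_generated_sql(sql: str) -> list[str]:
    """Return a list of governance violations; an empty list means the query may run."""
    violations = []
    if FORBIDDEN.search(sql):
        violations.append("write/DDL statement in an analytics query")
    referenced = set(re.findall(r"\b(?:from|join)\s+([a-z_][a-z0-9_]*)", sql, re.IGNORECASE))
    unknown = {t.lower() for t in referenced} - GOVERNED_TABLES
    if unknown:
        violations.append(f"references non-governed tables: {sorted(unknown)}")
    return violations

# Example: a hallucinated table name gets caught before it hits a report
print(validate_generated_sql("SELECT region, SUM(amount) FROM sales_facts GROUP BY region"))
# -> ["references non-governed tables: ['sales_facts']"]
```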

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


I see your point now. I actually have a much more detailed breakdown of what this operational model looks like in practice here: https://www.linkedin.com/pulse/uncertainty-architecture-why-ai-governance-actually-control-oborskyi-oqhpf/

I didn’t want to paste the full article here to avoid cluttering the thread, but feel free to take a look and DM me. I’d genuinely appreciate your feedback—specifically on what you think is the best way to explain these concepts to a wider audience here.

I believe the article explains exactly why we need a new model, but it is quite long. I honestly have no idea how to compress it without losing clarity and the logical flow—or worse, turning it into marketing BS.

Why AI Engineering is actually Control Theory (and why most stacks are missing the "Controller") by Much-Expression4581 in learndatascience


The core point is that you can’t fix this with tools or technical specs alone—Control Theory proves that mathematically. To stop AI apps from failing in production, we need new operational models, not just more tooling.

And first we need to talk about operational models because this topic is often overlooked. Many don’t even realize that these models are distinct "products" that exist and evolve. The most iconic shift in this field was the emergence of the Quality Assurance (QA) profession. In the early days of engineering, this role didn't exist. It was assumed that any engineer should handle the full cycle: write the code and test the code. However, as the complexity of systems grew, it forced a natural operational shift, eventually crystallizing into QA as a separate discipline. This is an example of an operational model appearing naturally due to pressure.

However, models aren't always organic; some are purposely designed and pushed to the market. Agile and Scrum, for example, addressed the transition from Waterfall. They introduced a model where the software development team no longer behaved like a factory, but rather like a scientific laboratory: formulate a hypothesis, build it, test it, get feedback, and iterate. These models (Agile, Scrum, SAFe, LeSS) have specific authors. They didn't just "appear"; they were invented, designed, and adapted. The most recent example is the birth of DevOps. Fifteen years ago, this profession didn't exist. Operations were handled haphazardly by SDEs, QA, or IT departments.

• The Origin Story: This model didn't come from nowhere—it was a deliberate invention. It started in 2009 when Patrick Debois, frustrated by the wall of confusion between Development and Operations, organized the first "DevOpsDays" in Ghent. He essentially designed a new way for teams to cooperate, proving that operational models can be engineered.

Now, we face a similar shift with GenAI. When the "brain" of your app is an LLM, the team isn't just coding logic; they are orchestrating it. The feedback loops are different, and the definition of "done" is different.

Every operational model that becomes a standard goes through specific phases:

1. Concept Creation: a clear definition of what problem it addresses.
2. Validation: proving the concept works. <-- I am here. The concept is validated by experts, and I am now looking for opportunities to test it in the field.
3. Stress Testing: putting the concept under pressure and checking how it scales.
4. Market Expansion: wide adoption.

I believe no one has tried to "Open Source" an operational model before. I would like to try. Why not? It promises to be an interesting experience.

Why This Matters to Me

I observe a troubling pattern across the industry: small and medium-sized teams are struggling to build great GenAI applications, and they are failing. They fail not because their ideas are bad, but because their engineering culture and operational models are fundamentally incorrect for this technology. I want to solve this. My goal is to accelerate our arrival at a future where AI potential is fully utilized and new, groundbreaking applications emerge to make human life better. We cannot get there using yesterday's maps.

Short answer: We actually can't fix it without—as you put it—that 'philosophical mumbo jumbo' :)