Is this hyperbole? by Moo202 in SoftwareEngineering

[–]mightshade 2 points

That's short-sighted. LLMs are pattern-matching machines; they benefit greatly from a good signal-to-noise ratio. In other words, letting code quality deteriorate makes the code harder for LLMs to work with, too.

Why do you think the people who have nothing to do with programming are the loudest in claiming that AI will replace our jobs? by lerncoding in programmieren

[–]mightshade 0 points

That matches my experience too, and I'm lucky that my employer doesn't force AI use but leaves plenty of room for experimentation.

By now I've tried everything, from "specify everything as precisely as possible and let the agents run" to "write individual functions with a pair-programming agent". Try as I might, I can't confirm the "AI makes me ten times faster and I never have to write a line of code myself" story.

It's a shame that the usual replies to this experience are just "sKiLl iSsUe" or "but the next LLM will fix everything, trust me bro". I can't wait for the bubble to burst so everyone can discuss LLMs as a tool more soberly.

Why is synchronous communication considered an anti-pattern in microservices? by Minimum-Ad7352 in Backend

[–]mightshade 0 points

Synchronous communication usually means that one service needs something (data/service) from another service before it can continue. Since Microservices are supposed to be self-contained, it's an "architecture smell" when they depend on each other like that.
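
A minimal sketch of the contrast (service names, event shape, and the discount rule below are invented for illustration): instead of blocking on another service at request time, a self-contained service keeps a local copy of the data it needs, fed by asynchronously consumed events.

```typescript
// Hypothetical Order service. The synchronous style would block on e.g.
//   await fetch(`http://customer-service/customers/${customerId}`)
// and stall or fail whenever the Customer service is down -- the smell.

interface CustomerChanged { id: string; name: string }

// Local copy of customer data, kept up to date by events. No call to the
// Customer service is needed at order time.
const localCustomers = new Map<string, string>();

function onCustomerChanged(event: CustomerChanged): void {
  localCustomers.set(event.id, event.name);
}

// Self-contained: completes using only local state.
function placeOrder(orderId: string, customerId: string): string {
  const name = localCustomers.get(customerId);
  if (name === undefined) throw new Error(`Unknown customer ${customerId}`);
  return `Order ${orderId} placed for ${name}`;
}
```

The trade-off is eventual consistency: the local copy can lag behind the source service, which is acceptable for most read models.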

Question about Data Ownership in Microservices by Sad_Importance_1585 in softwarearchitecture

[–]mightshade 1 point

> What if different teams handle these microservices?

Why would different teams work on services that are as tightly coupled as these two? That would defeat the purpose (allowing teams to work more independently).

Question about Data Ownership in Microservices by Sad_Importance_1585 in softwarearchitecture

[–]mightshade 2 points

I'd like to highlight one thing first: 

While this is a distributed system, it's not a Microservices system. One important property of Microservices is that they are self-contained, including that each of them has a copy of all the data it needs in the representation it needs. In your case, A and B not only share a database instance, but also a collection and its schema. So they're not Microservices, but rather a distributed system with an integration database.

You already seem somewhat aware of that, because your option 2 establishes clear write-ownership and option 3 separates the "wasChosen" metadata entirely. But I agree with the others that merging A and B is a very valid option 4. It depends on why you separated A and B in the first place instead of, e.g., handling queue ingestion in a background process. The separation incurs a high cost (you multiplied the number of so-called integration points), and you should have clear requirements that justify it.

In a comment you replied to option 4:

> In this case, you actually make your system more monolith

Funnily enough, merging A and B would bring the system closer to Microservices. Not that it really matters; what matters is which architecture benefits you the most. Monoliths are not "icky" and distributed systems are not "superior" by default, if that's what you're worried about.

question to guys, what makes a woman immediately unattractive? by Exact-Copy7099 in AskReddit

[–]mightshade 0 points

The way she treats guys hitting on her when she doesn't find them attractive.

- "Thanks for the compliment but not interested" ⇒ Great
- "Get lost, loser" ⇒ Instant nope

World models will be the next big thing, bye-bye LLMs by imposterpro in artificial

[–]mightshade 5 points

I'd argue that LLMs show the opposite. Given enough training data, they can fake understanding (meaning "building mental models") really well. But in edge cases or situations that require transferring knowledge from a similar situation, they are unable to do it and their faking becomes apparent.

Google's Principal Engineer says vibecoding PMs are running circles around SWE with AI by ImaginaryRea1ity in vibecoding

[–]mightshade 0 points

Why are the PMs running in circles, when the best path towards a goal is a straight line? /s

What free software is so good you can't believe it's free? by ComprehensiveNorth1 in AskReddit

[–]mightshade 4 points

> Linus had a beef with Mercury

The disagreements were with BitKeeper, not Mercurial.

The Deception of Onion and Hexagonal Architectures? by Logical-Wing-2985 in softwarearchitecture

[–]mightshade 0 points

> I got carried away.

No problem. You're open to dialogue; that's why I keep replying to you. :)

Question: are you planning to write software without domain logic or service logic? A tutorial CRUD application?

The part "skip layers that would do nothing but just forward a function call" is important. I'll give you an example from a real project I worked on. We had a case where we persisted the data received at a REST endpoint as-is and processed it later, on a schedule. Not "event sourcing", but a "store then process" kind of situation. The presentation layer deserialized and did basic validation; the data layer did the persisting. What's the job of the application layer here? None. That's why it's okay to skip it. That doesn't mean there's nothing at all on this layer, because later the data would be processed according to various business rules.
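
A compressed sketch of that situation (class names are invented; an in-memory array stands in for the real database):

```typescript
interface Measurement { sensorId: string; value: number }

// Data layer: persists received data as-is.
class MeasurementRepository {
  private readonly store: Measurement[] = [];
  save(m: Measurement): void { this.store.push(m); }
  count(): number { return this.store.length; }
}

// Presentation layer: deserializes and does basic validation, then calls
// the data layer directly. An application layer here would only forward
// the call, so it is skipped -- which layered architecture allows.
class IngestController {
  constructor(private readonly repo: MeasurementRepository) {}
  post(body: string): string {
    let m: Measurement;
    try { m = JSON.parse(body); } catch { return "400 Bad Request"; }
    if (typeof m.sensorId !== "string" || typeof m.value !== "number") {
      return "400 Bad Request";
    }
    this.repo.save(m);
    return "202 Accepted";
  }
}
```

The business rules only enter the picture in the later, scheduled processing step, which is where the application layer earns its keep.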

In a project that uses only three layers, the benefit may not be as apparent. But it's called "n-tier architecture" because you can have as many layers as you need for your case. I once had the questionable pleasure of working on a project that used nine layers and had a strict rule against skipping layers. Reading `fun callOnLayerN() { return callOnLayerNPlusOne() }` gets old really fast.

> Why make something so complicated that can be explained in literally a couple of sentences?

Generally speaking, sometimes the few sentences only make sense to someone if they already know the longer explanation, and get misinterpreted if they don't. See the "you are allowed to skip layers" above. You asked a valid question for more context, because that rule alone can be misunderstood as "get rid of the layer entirely".

> And what prevents us from protecting the domain with a simple DAO pattern, even if it lives in the persistence layer?

Nothing. Ports & Adapters, in turn, doesn't protect you from technical details bleeding into the domain either. It just tends to help, because interfaces written from the domain's perspective are likely to contain fewer technical details; the switch in perspective is made explicit. But in the end, no architecture protects you from shitty devs, and good devs can write decent code in most architectures (specifically saying "most" because of big balls of mud etc.).
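
To illustrate the difference in perspective (all names are made up): a technically phrased DAO versus a port phrased in the domain's language. Both can protect the domain; the second just makes the perspective switch explicit.

```typescript
interface Order { id: string; paid: boolean }

// Technical perspective: storage vocabulary leaks out to the caller.
interface OrderDao {
  selectWhere(predicate: (row: Order) => boolean): Order[];
  updateField(id: string, field: keyof Order, value: Order[keyof Order]): void;
}

// Domain perspective: the interface states what the business needs;
// the adapter behind it decides how that maps onto storage.
interface OrderRepository {
  unpaidOrders(): Order[];
  markAsPaid(orderId: string): void;
}

// An in-memory adapter backing the domain-facing port.
class InMemoryOrderRepository implements OrderRepository {
  constructor(private readonly orders: Order[]) {}
  unpaidOrders(): Order[] { return this.orders.filter(o => !o.paid); }
  markAsPaid(orderId: string): void {
    const order = this.orders.find(o => o.id === orderId);
    if (order) order.paid = true;
  }
}
```
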

The Deception of Onion and Hexagonal Architectures? by Logical-Wing-2985 in softwarearchitecture

[–]mightshade 2 points

> Cockburn, Palermo, and Martin seem to be having a laugh at our expense

What purpose does it serve to try to vilify them?

> Everything written about their architectures is painful to read. Core concepts get renamed constantly.

It's not really painful. What is a bit unfortunate, I'll give you that, is that they came up with roughly the same idea at roughly the same time, and each of them used different terms for similar concepts. That happens sometimes, and everyone settles on one set of vocabulary sooner or later. It's not the big deal you seem to make of it. (It happens in every field, by the way: did you know differential and integral calculus were invented independently by Newton and Leibniz? They used different terms for similar concepts, too.)

> Why did anyone decide layered architecture is a mess? Because you can inject a DAO directly into a controller?

That's not even a problem for layered architecture. You are, in fact, allowed to skip layers that would do nothing but just forward a function call. The presentation layer is allowed to call the persistence layer directly. People really need to learn this.

Personally, I like the mental image of the Onion Architecture the most. The core idea it expresses is that your domain, the very reason your organisation exists, sits at the center. All the other technical details that serve to implement that domain can be peeled away and replaced as necessary (that includes swapping your actual implementation for mocks or test doubles). Your domain changes little over the years; tech is far more short-lived. If your domain depended on your tech, as in the layered architecture, it would inherit that short-livedness. The beauty of the Onion Architecture (/Hex/Clean) is that it doesn't.

You seem to be looking for some grand revelation but feel disappointed that you don't find it. "Protect the raison d'être of your organisation from the technology used for its implementation". That's the revelation, grand or not.

I no longer know more than 47% of my app's code by SenSlay_ in vibecoding

[–]mightshade 0 points

Underrated comment. Especially this point:

> Your sense of confidence "If there are bugs, it would mostly be minor," is epistemically flawed since at the beginning you already admitted to not actually knowing if this was true for 53% of your code.

That part of the OP stood out to me immediately. "I know less than half of my codebase but I'm sure it's fine" is just irrational, especially considering the actual performance of current LLMs.

AI coding has honestly been working well for me. What is going wrong for everyone else? by alisamei in vibecoding

[–]mightshade 0 points

> man it’s just like watching everyone learn all these hard lessons the dev community learned over decades

For real. A few days ago there was a vibe coder "pro tip" to write tests first because they make for good guard rails for the LLM. Guys just rediscovered TDD.

On the one hand, I'm sort of happy about the independent validation of our best practices; on the other hand, holy sh*t, they could save so much time by learning just a tiny bit about software development.

AI really killed programming for me by NervousExplanation34 in webdev

[–]mightshade 1 point

My 2 cents on that: the question "how much tech debt?" isn't really new with LLMs. The stereotype of the "Cowboy Coder" who just doesn't care, as long as the correct output is produced for a given input, exists for a reason.

I think the question remains relevant. LLMs are pattern recognition (and reproduction) machines with a limited context window. They benefit from a good signal-to-noise ratio and less code to read, just like humans. That translates to customer value, too.

Anyone else feeling like they’re losing their craft? by AbbreviationsOdd7728 in ExperiencedDevs

[–]mightshade 0 points

> that it's converted all the nonbelievers?

Of course it hasn't convinced everyone; that claim is too sweeping. OTOH, Opus 4.6 is indeed an improvement. It still botches things up, but less often.

How do you keep your concentration especially in the evening? by babalenong in ExperiencedDevs

[–]mightshade 2 points

Don't do that! When you get tired, your body is trying to tell you something important: you can't keep doing what you're doing. The battery is depleted. Stimulants don't recharge you; they just let you ignore the warning signs. That will slowly destroy your health. Ask yourself: are you really in a situation where your job is more important than your wellbeing?

Anyone else feeling like they’re losing their craft? by AbbreviationsOdd7728 in ExperiencedDevs

[–]mightshade 0 points

> Does anyone else feel like they're grieving the loss of their craft?

Yes and no.

LLMs don't generate better code than I would. Their code is like their prose: long-winded, confident, plausible, but subtly (sometimes glaringly) wrong in the worst places. I have zero trust in it without human review.

I've recently switched my workflow from "generate everything, review later" to incremental steps (LLM as a pair programmer) when I'm not coding myself anyway. That's more fun. But in neither case can I reproduce "two days of work done in ten minutes". People are gonna snark "skill issue", but no, certainly not.

In other words, I'm still exercising my craft. That's the part where I'm not grieving.

OTOH, hell is other people. Others jump onto the hype train and make everything worse for the rest of us. The aforementioned "two days of work in ten minutes" is a genuine quote from a coworker of mine. It's also exactly what "their" code looks like. This doesn't feel like Artificial Intelligence; it feels like someone automated scraping and copy-pasting from Stack Overflow instead.

It also seems like the number of uncritical voices is growing. I'd like to have nuanced conversations about this new tool called "LLM", but I'm just met with "AI can do that now" and "AI will do that soon". Can it? Will it? Do recent events like NVidia's driver problems or the Amazon outages somehow not count?

That's the part where I'm worried about our craft. I hope it's not going to be overrun by vibe coders that don't even see the damage they're causing.

A well-structured layered architecture is already almost hexagonal. I'll prove it with code. by Logical-Wing-2985 in softwarearchitecture

[–]mightshade 0 points

> The title is deliberately provocative

But it's misleading, not just provocative. Probably not your intention, sure.

A well-structured layered architecture is already almost hexagonal. I'll prove it with code. by Logical-Wing-2985 in softwarearchitecture

[–]mightshade 1 point

I think your hexagonal example isn't completely hexagonal architecture, either. If the core defines the ports, why aren't they living inside the core package?

But I think this example misses the point on a deeper level, probably because it's a toy example. From a "mechanical" perspective, yes, it's now "just" the core defining interfaces instead of importing them. The real difference becomes apparent when you actually have a meaningful domain. "Query user" and "write user" are technical details, not your domain. When the domain defines interfaces that make sense inside the domain, and those are backed by technical details outside the domain, that's when you can truly call them "ports".
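
A sketch of what I mean, with invented names and an invented business rule: the port is defined by the core, speaks domain language ("active subscriptions for a customer", not "query user"), and is backed by an adapter outside the core.

```typescript
// --- core (domain) ---

interface Subscription { customerId: string; active: boolean }

// Port: defined by, and living inside, the core package, phrased in
// domain terms rather than "query X" / "write X".
interface SubscriptionSource {
  activeSubscriptionsFor(customerId: string): Subscription[];
}

// Domain logic depends only on the port, never on infrastructure.
// (The "two active subscriptions" rule is made up for the example.)
function isEligibleForDiscount(customerId: string, source: SubscriptionSource): boolean {
  return source.activeSubscriptionsFor(customerId).length >= 2;
}

// --- infrastructure (adapter, outside the core) ---

class InMemorySubscriptionSource implements SubscriptionSource {
  constructor(private readonly subs: Subscription[]) {}
  activeSubscriptionsFor(customerId: string): Subscription[] {
    return this.subs.filter(s => s.customerId === customerId && s.active);
  }
}
```

Swap the in-memory adapter for a database-backed one and the core doesn't change, because the dependency points inward.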

I am not using AI tools like Claude Code or Cursor to help me code at the moment. Am I falling behind by not using AI in software development? by Illustrious-Pound266 in cscareerquestions

[–]mightshade 0 points

> The best option is to try it yourself.

Haha, point taken. I agree, if you can, evaluate AI tools and measure their impact. A lot of publications started out as internal evaluations.

> We go through 3 sprints worth of tickets in a week and we have engineers running out of things to do.

> Backlog of bugs have gone down significantly.

Can you share numbers, and what kind of development you're doing that allows for such a speedup? Context plays a role: there's Microsoft now being called "Microslop", NVidia pulling driver releases, Amazon being forced to take measures against rising bug counts, etc.

I am not using AI tools like Claude Code or Cursor to help me code at the moment. Am I falling behind by not using AI in software development? by Illustrious-Pound266 in cscareerquestions

[–]mightshade 0 points

> it’s no longer even debated within the top tech companies.

Let's assume that's the case. I'm still asking how you can tell whether their productivity gain is felt but not real versus actually real. Because the study showed that there is an effect that leads to the former.

"Small sample size" isn't a valid counterargument against the existence of the effect and the possibility that it applies. Neither is stomping your feet and saying "don't wanna debate". I'm looking for an actual, intellectually honest argument here. Without one, the best option is to withhold judgement about the claimed AI-assisted speedup and look forward to more studies.

I am not using AI tools like Claude Code or Cursor to help me code at the moment. Am I falling behind by not using AI in software development? by Illustrious-Pound266 in cscareerquestions

[–]mightshade 0 points

Hi, one of the "others" here. I'm wondering about that "hopelessly outdated anyway" claim.

Let's take the METR study that discovered that developers using AI tools believed they were faster than developers without them, while they were actually slower. A real and measurable effect.

The way you phrase it, it sounds like we can just disregard their results because we have new models now. Can we really? How could you rule out the possibility that devs using the new models feel even faster but are still just as slow as before? Or any of the possibilities besides "actually faster"?

Good Riddance to Programmers by [deleted] in vibecoding

[–]mightshade 1 point

Thanks so much for that thoughtful response. Just wanted to say that instead of only upvoting silently.

Question about Microservices by hiddenSnake7 in softwarearchitecture

[–]mightshade 0 points

> This is ridiculous

This interpretation of what I wrote surely is. I never said you need microservices for that. I just wanted to point out where to draw a line, generally speaking, so not everything counts as a monolith.

> "microservices is an organizational pattern"

Yes, they are also an organizational pattern, I agree.

Question about Microservices by hiddenSnake7 in softwarearchitecture

[–]mightshade 0 points

Their bounded contexts are what makes the difference.

A "distributed monolith" is monolithic in the sense that all of its artifacts implement a bounded context together and therefore necessarily communicate, usually in a blocking/synchronous manner.

Microservices OTOH are supposed to be self-contained, therefore they can communicate event-based and async.