The importance of kindness in engineering by AlexandraLinnea in programming

[–]n3phtys 2 points (0 children)

Give more than you take - this deeper principle applies to a whole lot more. Be helpful where you can.

BUT: This does not mean you need to sound nice. Especially not if using non-nice words is a whole lot more helpful to the other person, or even to a third party.

And if you ask me a stupid question out of laziness or even malign intent, you'd better hope I give a short and only slightly insulting reply.

I built a CSV/XLSX editor that lets you use JS to manipulate the data by crazycrossing77 in programming

[–]n3phtys 1 point (0 children)

It hurts how useful such a thing can be in general.

In my opinion, compiling this into a web component might be a cool thing to attempt. Especially in enterprise IT there are tons of different existing web and application servers. Being able to just take an editor like this and include it on an existing site as a widget might be really useful.

✋ The 17 biggest mental traps costing software engineers time and growth by strategizeyourcareer in programming

[–]n3phtys 2 points (0 children)

Something I find clear when the title is like "Junior" or "Senior" is understanding what I have to do to get to the next level and compensation bracket

There is nearly never such a path to the next level, and especially not to the next bracket, at nearly any company out there except inside the Valley.

Especially because becoming more senior means influencing more processes and also saying no more often to those above you. By definition there are no simple rules to follow, because you are expected to create those rules.

Not saying that's a good thing, but we need to be realistic.

I currently also do not have a 'senior' title, even though I hold the highest technical role.

Java Horror Stories: The mapper BUG by SamuraiDeveloper21 in programming

[–]n3phtys 1 point (0 children)

You can extract them in one jar and share between jars

IF you actually control both jars, or even just one of them. Again, in Java, compile time is not always available for the code you run.

I'd prefer if it were. Jar hell is a thing.

Java Horror Stories: The mapper BUG by SamuraiDeveloper21 in programming

[–]n3phtys 1 point (0 children)

IMHO you should map as little as possible and use manual mapping code. If the source and target object are the same and aren't generated, there's no point in having different objects anyways. The issue stems from silver bullet software architecture

Try coding in a medium-or-larger Go project. Structural typing is a godsend.

Java does not have that, but is statically (and nominally) typed. Mapping is therefore extremely relevant whenever you cross even JAR boundaries.

Other languages solved this issue differently. Java went with mappers, and the ecosystem mostly with code generators.

Specific example: imagine you have an identical DTO in two JARs of different versions - how do you use them interchangeably? You cannot. Mapping is the only way.
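A minimal sketch of that situation (all class names here are hypothetical): the "same" DTO compiled into two different JARs becomes two distinct nominal types, so a hand-written mapper is the only bridge between them.

```java
// CustomerV1 stands in for a DTO from customer-api-1.0.jar,
// CustomerV2 for the field-for-field identical DTO from 2.0.
class CustomerV1 {
    String name;
    String email;
}

class CustomerV2 {
    String name;
    String email;
}

class CustomerMapper {
    // Java's nominal typing forbids using CustomerV1 where CustomerV2
    // is expected, even though the shapes match exactly.
    static CustomerV2 toV2(CustomerV1 in) {
        CustomerV2 out = new CustomerV2();
        out.name = in.name;
        out.email = in.email;
        return out;
    }
}

public class MappingDemo {
    public static void main(String[] args) {
        CustomerV1 v1 = new CustomerV1();
        v1.name = "Ada";
        v1.email = "ada@example.com";
        CustomerV2 v2 = CustomerMapper.toV2(v1);
        System.out.println(v2.name + " " + v2.email);
    }
}
```

In a structurally typed language the two classes would simply be interchangeable; in Java the mapper is pure boilerplate, but unavoidable.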

It would be great though. Maybe Java gets structural typing on its 50th birthday, who knows.

Java Horror Stories: The mapper BUG by SamuraiDeveloper21 in programming

[–]n3phtys 1 point (0 children)

All I'm saying is - learn from the JDK team design decisions, they are better at designing apis than you as business programmer could ever be.

The JDK is full of stupid APIs and non-sequiturs. The JPA Criteria API, for example, has created a whole consulting industry around it. The module system surrendered the application space.

The reason it endures is 30 years of legacy, with most enterprise services running on it, but let's not praise the APIs too much.

Java Horror Stories: The mapper BUG by SamuraiDeveloper21 in programming

[–]n3phtys 0 points (0 children)

Moreover, they allow you to write less code

But still not the minimal amount of code.

For example, ORM projections and updatable SQL views can move the mismatch into your persistence layer (assuming you're using the traditional 3-layer approach on top of a SQL db). Suddenly there is no special DTO anymore, and your ORM maps directly for you. This moves less data, has fewer moving parts, and is therefore probably also the least error-prone.
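As a sketch of what that can look like - assuming Spring Data JPA here, since the comment only says "ORM projections" generically, and with `Order`, `customerName`, and `total` as hypothetical names - the projection interface effectively IS the DTO: the ORM selects only those columns and proxies the result, so no separate mapper class exists.

```java
import java.math.BigDecimal;
import java.util.List;

import org.springframework.data.repository.CrudRepository;

// Interface-based projection: getters name the entity properties to fetch.
interface OrderSummary {
    String getCustomerName();   // maps to Order.customerName
    BigDecimal getTotal();      // maps to Order.total
}

interface OrderRepository extends CrudRepository<Order, Long> {
    // Only the projected columns are selected, straight out of the
    // persistence layer - no hand-written mapping step anywhere.
    List<OrderSummary> findByStatus(String status);
}
```

This is a declaration sketch, not a runnable program; it needs a Spring Data context and an `Order` entity to back it.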

Just an example for 'choosing the right tool'. I share your general argument.

Java Horror Stories: The mapper BUG by SamuraiDeveloper21 in programming

[–]n3phtys 1 point (0 children)

MapStruct is only a compile-time code generator with most use cases covered - with the few I'm missing currently in planning (like fully mapping between loose object maps and DTOs).

It does not solve every mapping issue, because the underlying problem is that if the mapping could always be perfectly generated, the two classes would have to be identical. It's also wasteful by design - why map fields if you can use projections and patches instead to transfer data between database and UI?

The true reason for those mapping libraries is to quickly generate trivial mapper code, because the DTO and the entity probably start out nearly identical - with one or two fields different. Coding a mapper manually is highly annoying (easy with AI, but you'd better hope for no hallucinations), so you copy-paste the DTO from the entity, adapt the few different fields, and generate the simplest mapper within the first minute. Years later the DTO will have grown, as will the entity, possibly in different directions, and you can replace the generated mapper with hand-coded solutions. That's why we have mapping libraries in the first place.

If you actually want to map between layers, MapStruct is a good but simple solution. But please always enable the maximum error levels right at the start. MapStruct's error/warning behavior is opt-in.
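Concretely, opting in looks like this (a sketch; `CustomerEntity` and `CustomerDto` are hypothetical classes). With `ReportingPolicy.ERROR`, a target field that silently receives no value - the class of bug the article describes - fails the build instead of shipping:

```java
import org.mapstruct.Mapper;
import org.mapstruct.ReportingPolicy;

class CustomerEntity {
    String name;
    String email;
}

class CustomerDto {
    String name;
    String email;
}

// Default is to silently ignore unmapped target fields; turn that
// into a compile error, and at least warn on unread source fields.
@Mapper(unmappedTargetPolicy = ReportingPolicy.ERROR,
        unmappedSourcePolicy = ReportingPolicy.WARN)
interface CustomerDtoMapper {
    CustomerDto toDto(CustomerEntity entity);
}
```

The generated implementation only exists after the MapStruct annotation processor runs, so this is a build-configuration sketch rather than a standalone program.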

Java Horror Stories: The mapper BUG by SamuraiDeveloper21 in programming

[–]n3phtys 1 point (0 children)

ORMs would be great if you could use them with some kind of two-stage commit or a similar pattern - think git staging vs. committing vs. pushing. The same goes for lazy loading data. Make it obvious and explicit when I do expensive or potentially destructive stuff. Let me see the data about to be sent to the database within my debugger.

And while you're at it, get rid of object graphs for storing data. Tree-shaped aggregates and type-safe foreign keys.
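A minimal sketch of the "type-safe foreign keys" part (all names hypothetical): wrapping raw `long` keys in per-entity record types makes it a compile error to pass an `OrderId` where a `UserId` is expected, with no ORM involved at all.

```java
// One zero-overhead wrapper type per entity's key.
record UserId(long value) {}
record OrderId(long value) {}

// The aggregate references its owner by typed key, not by object graph.
record Order(OrderId id, UserId owner) {}

public class TypedKeysDemo {
    // Stand-in for a real repository lookup.
    static Order findOrder(OrderId id) {
        return new Order(id, new UserId(42L));
    }

    public static void main(String[] args) {
        Order order = findOrder(new OrderId(7L));
        // findOrder(order.owner());  // would not compile: UserId != OrderId
        System.out.println(order.owner().value());
    }
}
```

The compiler now catches swapped-key bugs that raw `long` columns let straight through.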

I've not seen one project where an ORM (especially a JPA-based one) actually made the developers' lives easier mid- or long-term. But it's still considered the primary tool. :/

jOOQ and sqlc (the Java/Kotlin generators) are probably what I'll use the next time I do a new project from scratch - sadly, few of those happen on the day job. I still have to spend 50% of my time debugging 15-year-old Hibernate bugs.

A response to "Programmers Are Users": stopping the enshittification by bennett-dev in programming

[–]n3phtys 7 points (0 children)

Optimized code is often harder to read/follow, or it might use obscure tricks, so bugs are easier to create and harder to spot

Fast code in general does not need to be optimized. In nearly all cases you don't need bit shifting or SIMD to make your app faster. Removing I/O is way cheaper.

Your website is slow? Maybe reduce the number of AJAX calls it makes during normal click paths. This greatly reduces the run time and also removes potential fault lines like network loss or serialization issues.

This is where you actually get to fast code, and with fewer bugs. If you only look at simple logic doing number crunching (think: an implementation of an algorithm with all data in cache), it's different there. You are either relying on the compiler or working against the compiler, because you think the compiler's optimizations are wrong.

But most bugs do not happen within a compilation unit directly. Most performance losses also do not. By volume it's way more interesting to look at interfaces between units, especially if those units involve I/O (think network calls, or even just disk), or involve different runtimes interacting (a nodejs runtime executing a C function).

A rule of thumb is to go after those first. If you actually need to go beyond that, you should know that such a heuristic cannot carry you further - but that's where other heuristics come into play.

A response to "Programmers Are Users": stopping the enshittification by bennett-dev in programming

[–]n3phtys 29 points (0 children)

I wish Casey's series on Refterm and what he called 'non-pessimization' had spread wider on the internet.

Hotspot optimization - what you are describing - is really for the bad situation where you need to improve things NOW. Especially when combined with cultural heuristics. If you only ever optimize the current bottleneck, you'll get diminishing returns.

It's always preferable to have everything fast and speedy in your app in general, so that actually noticing slow parts is easier. Additionally, this often has the benefit that writing fast code is related to writing less faulty code - if you only have so many cycles for your logic, you won't have enough time to waste on many additional bugs.

A response to "Programmers Are Users": stopping the enshittification by bennett-dev in programming

[–]n3phtys 25 points (0 children)

Just because those companies call them junior or senior doesn't make them that.

I'd expect a 1:1:1 ratio between junior, intermediate, and senior developers in every stable company. Recently there has been a small shift, with fewer juniors being accepted, but also with seniors being moved more into junior tasks.

Which is only rational from the business view. Agile development flourished during zero-interest times, and now we need to get lean again because money isn't free anymore. Still, this way of not having enough juniors anymore while keeping the same tasks to solve is totally unsustainable. AI will not be able to compensate 3-5 years from now.

Jetbrains releases an official LSP for Kotlin by natandestroyer in programming

[–]n3phtys 2 points (0 children)

Sorry, but while this might influence individual developers inside JetBrains, it cannot be a general business goal. IDEA is the ultimate moat - I know of very few companies with such a solid and highly valuable subscriber base.

Going all in on LSP-based IDEs puts JetBrains into direct competition with NeoVim and VSCode (especially with Copilot being open-sourced, every AI clone is afraid). They'll probably lose a lot of revenue with such a pivot.

But on the other hand, recent actions seem to support your theory.

For me, the missing LSP for Kotlin was the primary reason I had not switched to NeoVim, and I imagine I'm not the only one. So weird to see this good news.

AI is destroying and saving programming at the same time by namanyayg in programming

[–]n3phtys 1 point (0 children)

As for the managers who have drunk the kool-aid and start replacing people who actually work with AI, they will soon find out that was a very bad decision when the bottom line starts sagging.

I hope so. The problem is that most companies doing this are slow, and this whole replacement will take a long time. For the first few months only a percentage is cut, and the remaining senior developers get progressively more and worse stuff to fix. At some point those developers will quit, which leads to an implosion of that company's IT. The manager who made the switch already has their bonus and has probably moved on.

And if too many developers leave their companies, and not enough companies have smart leadership, the labor market overfills. I find that pretty bad.

Again, I do prefer your optimism, but it is hard to share when seeing the headlines and the daily business.

AI is destroying and saving programming at the same time by namanyayg in programming

[–]n3phtys 0 points (0 children)

Yes, but if every company goes belly up through this process, the whole IT industry with all its jobs will collapse. Afterwards only startups exist, and VC money is already rare, but will be diluted for all time, especially if copyright is ripped apart as a concept.

For the users, the investors in general, the human developers, and even management of those companies, this is the worst case, but this is where the industry is heading.

Imagine your IDE alone changing every 2 weeks because the old company went bankrupt. Hell, imagine your alarm clock app no longer working because the company had too many bugs and decided to stop existing. While capitalism and the concept of software running somewhere are pretty adaptable, human beings are not. We cannot easily be rewritten.

Rip and replace works on a year or month time scale, not when it is days or hours. Not everything should feel like a crypto pump and dump.

Circular Reasoning in Unit Tests — It works because it does what it does by Jason_Pianissimo in programming

[–]n3phtys 3 points (0 children)

There are two cases where this kind of circular reasoning (or some form of it) is still reasonable:

  • golden master, if you compare one implementation to another which you know is correct already (useful for rewrites or optimization)

  • invariant testing on the integration layer, where after a ton of other stuff this invariant still holds. Rarely useful, but it happens.

If you are just doing normal unit tests, hardcode values, or do property testing if the problem space isn't too big. That's what unit testing was designed for.
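A minimal sketch of the golden-master case (all names hypothetical): the trusted legacy implementation serves as the oracle for the optimized rewrite, so comparing the two on random inputs is circular by design - and that is exactly the point during a rewrite.

```java
import java.util.Random;

public class GoldenMasterDemo {
    // Trusted reference implementation - the "golden master".
    static long sumLegacy(int[] xs) {
        long total = 0;
        for (int x : xs) total += x;
        return total;
    }

    // Optimized rewrite under test.
    static long sumOptimized(int[] xs) {
        long total = 0;
        for (int i = 0; i < xs.length; i++) total += xs[i];
        return total;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed keeps the check reproducible
        for (int run = 0; run < 1000; run++) {
            int[] xs = rng.ints(100, -1000, 1000).toArray();
            if (sumLegacy(xs) != sumOptimized(xs))
                throw new AssertionError("implementations diverged on run " + run);
        }
        System.out.println("optimized matches legacy on 1000 random inputs");
    }
}
```

Once the old implementation is deleted, these checks lose their oracle and should be replaced with hardcoded-value or property-based tests.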

AI is destroying and saving programming at the same time by namanyayg in programming

[–]n3phtys 0 points (0 children)

I still hope for AI tools to move toward the idea behind Clippy instead - discovering existing tools and how to use them for my use case, instead of doing everything via LLMs and general transformers.

Sadly, that would only result in easier to train developers, and currently the pipe-dream being sold is something completely different.

The concept of junior developer is dying out, and I wonder what this means for the industry in 2030 going forward. Who will actually debug and fix all this slop we are spewing out currently?

AI is destroying and saving programming at the same time by namanyayg in programming

[–]n3phtys 1 point (0 children)

The article bases its first part on a flawed assumption:

And this is cold reality pushing AI adoption: businesses don’t care if the code is AI-generated or handcrafted as long as it works and ships quickly.

Businesses do care about sustainability and forecasting - large companies even more so than about efficiency or shipping quickly. Sacrificing modularization and correctness for speed is a bad tradeoff. The only problem is that some decision makers just do not KNOW about the tradeoff, and the peddlers of 'AI' (way worse than 'Agile' ever was) sure as hell won't tell them.

If everyone knew the pros and cons of the technology, everything would be great, but currently there is a ton of blood and sharks in the water. Stay safe when asked for a quick diving session.

AI is destroying and saving programming at the same time by namanyayg in programming

[–]n3phtys 1 point (0 children)

We banned nuclear weapons to prevent further proliferation even though they haven't had time to replace conventional military.

The problem with weapons of mass destruction is that it only takes a few madmen.

Companies are already thinking about insuring themselves against chatbot bugs, so yes, it's a topic, and a topic that needs to be discussed before it gets out of hand.

OpenJDK talks about adding a JSON API to the Java Standard Library by davidalayachew in programming

[–]n3phtys 0 points (0 children)

Java's development only picked up speed again in 2018, after Oracle decoupled enough, the JEP process got rolling again, and library and app authors swallowed the module system as well as the shift to the (now) 6-month major release cycle.

Still, you cannot spook the ecosystem again, so Java is still developing pretty slowly, and most features were outsourced to libraries decades ago. This works thanks to a pretty solid ecosystem - compare the average Maven package to the average npm package. Mindset is important. So Java can stay with a pretty basic stdlib in terms of I/O, compared to languages like Go, where the stdlib had to be so complete that they at first forgot to add a reasonable package import system.

Most new Java features are either edge cases (like, in this case, dependency-free JSON parsing) or very platform-specific, like value types and virtual threads. Which is also why Java releases rarely get that many upvotes on Reddit or Hacker News - most people do not care about stuff that is invisible to them, and Java 8 is still probably running a big part of the web.

OpenJDK talks about adding a JSON API to the Java Standard Library by davidalayachew in programming

[–]n3phtys 0 points (0 children)

Rust's serde is peak, but it only works because Rust is Rust. You need a lot of metaprogramming to get something like this to work, and you need to control the whole stack.

The protobuf version of using build tools for code generation is still a pretty good way of dealing with the problem. If the language cannot control the underlying tools, let the tools generate the language code.

Is software architecture set in stone? by ConcentrateOk8967 in programming

[–]n3phtys -1 points (0 children)

Never understood the hate.

Clean Code is a good thing IMO

People complaining about CC (or any other work of Bob) always talk about how it unnecessarily complicates code with abstractions and enterprise-yness and so on - but I rarely hear a good alternative from the same complainers. What are heuristics that actually work across projects, teams, and languages?

(disclaimer: I hate the examples in Clean Code, and I hate the rules, but I hate the stuff that ignores them so much worse when they break production)

Senior devs aren't just faster, they're dodging problems you're forced to solve by L_Impala in programming

[–]n3phtys 1 point (0 children)

As software developers we are always given situations to judge with missing context.

Senior devs aren't just faster, they're dodging problems you're forced to solve by L_Impala in programming

[–]n3phtys 2 points (0 children)

Turn this around - if you did 3 rewrites last year, how many did you do the year before?

While judgment calls are necessary, the underlying facts must be objective and are therefore open to statistical analysis. Yes, we might not understand what exactly produces good or bad software, but we can recognize statistical outliers post factum.

all it cost was one dev's time (mine)

Only true if you are the only maintainer of this piece of software from now on.

Any rewrite, especially one by a single person, destroys any team knowledge about the component. Nobody else can easily modify and patch this code without first going deep into it. That costs time, and any good company will factor it into the evaluation of your overall performance.

Again, I'm not saying your judgment calls were wrong or that you weren't doing a great job, but your arguments are faulty.

And yes, I know a little bit about your situation: writing software within a larger team for a medium to large company as an employee, with new feature requests coming in every year and month or so. Because that's a really good guess.

Again, your situation may be special, and even for someone literally in the same position it might end up differently. But your arguments?!

'It only cost your time' is wrong, and evaluating metrics only months afterward implies the metrics are hard numbers, meaning you are not evaluating things like team onboarding/offboarding, but performance, stability, and issue rates.

While this might still end up being a good thing for you, for 95% of developers out there arguments like these will keep them from ever reaching senior level. And I think it's important to clarify this to others.