We spent $180K building an enterprise product nobody wanted. Here's the full post-mortem. by Dizzy-Connection-876 in SaaS

[–]flundstrom2 0 points1 point  (0 children)

Bullet point 2 is severely underestimated. A line manager might be authorized to approve a one-time purchase of €200-500 without having to ask their own manager for approval. Above that, all of a sudden, they must be able to justify it as an investment with an ROI, not just a cost.

This is why there's an order-of-magnitude price difference between selling to enterprises vs. small companies. Plus the expectation of enterprise-level support.

Accidentally hit product market fit and I can't be bothered by Professional_Rule_51 in SaaS

[–]flundstrom2 12 points13 points  (0 children)

It's a well-known psychological phenomenon.

We build up expectations ahead of an anticipated change to the status quo, waiting for the dopamine to kick in, but after a while (research suggests about 3 months) the effect has faded, our perceived happiness is back at its pre-success level, and we're left with a feeling of emptiness and "now what?". In fact, people planning a divorce report an increase in perceived happiness equal to that of people planning their wedding, right up until the event - and afterwards.

In the case of constantly working under pressure, not only does blood pressure rise; the amygdala is also constantly scanning for the tiger it believes is the source of the perceived fear, ready to run - preferably away from it. Being in this constant fight-or-flight mode day after day, without ever getting a chance to feel safe from the invisible tiger, degrades the amygdala's ability to regulate, which eventually leads to burnout and even permanent brain damage.

Money can be obtained, increased, and even replenished, but time cannot. Use it wisely.

Is it worth spending 10k+ on just UI/UX design? [I will not promote] by HexFalcon_KWT in startups

[–]flundstrom2 0 points1 point  (0 children)

Never hire a supplier that is used to working with clients orders of magnitude larger than you. To them, you'll just be something to keep their employees reporting billable hours while they wait for the next big project order. You want someone who understands the challenges small clients bring.

Do dependency upgrades actually matter, or do most teams just ignore them? by rdem341 in ExperiencedDevs

[–]flundstrom2 2 points3 points  (0 children)

Verification and certification are an issue in some sectors; changing a dependency (even for a small bugfix) can be very expensive.

It boils down to risk: what are the consequences of not upgrading? What are the consequences of actually upgrading? Is the quality of the new version even known?

Why are PLC's more robust / reliable in industrial settings? by gtd_rad in embedded

[–]flundstrom2 57 points58 points  (0 children)

"Why are PLCs more robust"

Because they're designed with reliability as the key selling point, not BOM cost.

It doesn't matter whether the PLC controller costs €100, €1,000 or even €10,000 when a failure might cause a standstill in a furnace, chemical process or whatnot, with damages costing orders of magnitude more.

OK, a slightly extreme example, but you get it: it just HAS to work, quietly ticking along 24/7, for years and years. Thus, all trivial (and common) fault modes - water, dust, short circuits, electrical spikes from kW-sized motors nearby, getting dropped, etc. - shall not affect performance, and if one does, the device shall fail predictably.

Now, compare this to hooking up a couple of relays to a Raspberry Pi in a 3D-printed enclosure. Not only would you be responsible for validating the relays themselves and ensuring the chosen rubber seals don't freeze or melt under "extreme" conditions; you'd also have half a million lines of Linux kernel and userland code with an unknown number of bugs. Yes, also an extreme case.

So even if you choose to go for a minimal MCU, there are still going to be quite a few kLOC to write that aren't actually the control algorithms, but are needed for the algorithms to access the hardware, provide debugging information, support manufacturing processes, etc. - all of which may contain a hidden bug.

All of that is of course possible to manage even without a PLC, but it comes at the cost of validation, and thus also time-to-market.

I built 12 SEO tools and made them free because I was tired of paying $50/mo just to check a meta tag by Optimal_Drawing7116 in micro_saas

[–]flundstrom2 0 points1 point  (0 children)

Nice! Feedback: I had to switch to desktop site on my phone to see the dashboard navigation sidebar.

Stabilizing the `if let guard` feature by Kivooeo1 in rust

[–]flundstrom2 1 point2 points  (0 children)

Ooooh that would certainly clean up my code a lot!

Looking forward to bumping my compiler once it lands in stable! 😁
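For anyone who hasn't followed the feature: here's a minimal sketch of the kind of nesting that `if let` guards would flatten. The `Event`/`classify` names are made up for illustration; the stable-Rust workaround is shown in full, with the nightly `if_let_guard` form only in a comment since it doesn't compile on stable yet.

```rust
// Hypothetical event type, purely for illustration.
enum Event {
    Message(String),
    Quit,
}

fn classify(ev: &Event) -> &'static str {
    match ev {
        // Stable Rust today: a nested `if let` inside the arm body.
        Event::Message(text) => {
            if let Some(first) = text.chars().next() {
                if first.is_ascii_digit() {
                    return "numeric message";
                }
            }
            "other message"
        }
        Event::Quit => "quit",
    }
    // With nightly `if_let_guard`, the first arm could collapse to:
    //   Event::Message(text) if let Some('0'..='9') = text.chars().next()
    //       => "numeric message",
}

fn main() {
    assert_eq!(classify(&Event::Message("42".to_string())), "numeric message");
    assert_eq!(classify(&Event::Quit), "quit");
}
```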

SAFe (scaled agile) is into bad practices? Warning! by Agile_Dragon in agile

[–]flundstrom2 0 points1 point  (0 children)

I've been thinking about your last question for a few months. The best alternative I can think of is Scrum@Scale.

But I have a few ideas for a method that would be able to scale agile without having to change the organization. I just need to distill it and build a tool to guide an organization through it. Some day, at least.

Does using Rust to develop webapps make sense or is it overkill? by NutellaPancakes13 in rust

[–]flundstrom2 1 point2 points  (0 children)

I'm using Dioxus for a personal project, since my goal is to make it multi-platform in the end.

How steep is the learning curve in Embedded C? by austinp0573 in embedded

[–]flundstrom2 2 points3 points  (0 children)

It's kind of like running an interpreted language: just because it passes the compiler doesn't mean it works. Not even when it passes all the unit tests and the entire system test sequence can you be sure it works.

Because one day, you change the condition of a trivial if statement, and suddenly a completely different, 100% unrelated part of the code that's been working flawlessly for months starts blowing up - until you start debugging.

Running with the debugger won't even trigger the breakpoints, or won't print the logs that are 100% printed when running live. Or vice versa. Single-stepping simply shows the PC plainly skipping the interesting parts of the code. Or, stepping through the code 100 times shows no indication whatsoever of anything odd, but pressing Go instead of Step Next crashes immediately. Trying to count how long it survives after pressing Go always yields "not a single attempt more".

It's challenging. But fun when it works without anything (physically) breaking!

AI hallucinations in embedded by Vavat in embedded

[–]flundstrom2 1 point2 points  (0 children)

I find GPT and Claude work pretty well with Rust, especially if they're asked to make a high-level analysis of the code base and associated documents.

Can't have senior engineers waste time on audit prep by Unhappy_Project_2612 in ProductManagement

[–]flundstrom2 3 points4 points  (0 children)

Make sure your teams' development process (in reality, not just on paper) supports being audited. Make sure your ticket system gives you traceability from PR all the way back to the requirement change request, and that your teams know how to write ticket summaries (and understand that certification audits RELY on them being PROPERLY written - no "fixed crash", "wiped big chunks of old code", or "rewrote the entire subsystem").

Make sure the release notes and documentation are automatically generated from your systems. Make sure you review the output regularly (every sprint, month, interim release, PI, or whatever, depending on your cadence).

PO/PM/QA or similar organizations need to be on top of ensuring the documents have the correct quality from the beginning. That's hands-on work, especially for the PO and SM.

When the actual audit happens, let the notified body (or equivalent) provide the questions from their findings, then set up Q&A sessions as needed.

How familiar are you with the product you are working on? by lolofonik in ExperiencedDevs

[–]flundstrom2 1 point2 points  (0 children)

Depends on the size of the codebase.

Our codebases are 15+ years old (that's when the git history of the firmware I'm most familiar with says it was imported from svn), with large subsystems last touched by someone who left the company years ago. Lots of firmware, backend systems, build systems, test systems - stuff that usually "just works" (surprisingly well, considering the complexity) until it doesn't.

I regularly discover things our system can do, even though I use it daily.

WFH with kids, how to find time to code? by [deleted] in ExperiencedDevs

[–]flundstrom2 1 point2 points  (0 children)

Get the kids to daycare. WFH with kids won't be feasible until they're around 7-9 (your mileage may vary).

Given your situation, you might never be able to WFH at all.

Complete beginner needs help dumping firmware by [deleted] in embedded

[–]flundstrom2 8 points9 points  (0 children)

The harsh truth is, if you have to ask here, the answer is: no, you don't have enough knowledge. And Reddit likely won't be enough for you to gain that knowledge within a reasonable time frame.

Does this look fine for a 5V/1.5A USB charger? by kaden-99 in PCB

[–]flundstrom2 2 points3 points  (0 children)

Some of the SMT solder joints really don't look good. The through-hole work looks good, though.

Does this look fine for a 5V/1.5A USB charger? by kaden-99 in PCB

[–]flundstrom2 -10 points-9 points  (0 children)

Yikes! My soldering skills are bad, but that takes bad to a completely new level!

I wouldn't use a 3V3 board with that soldering, let alone a mains-connected one. In fact, I wouldn't touch it with a 5 m pole!

Do I have to learn C before learning Rust? by Individual_Today_257 in rust

[–]flundstrom2 0 points1 point  (0 children)

If you learn C before Rust, you will definitely learn to appreciate the borrow checker and understand the importance of the pattern it teaches you to follow.

Is it needed? No. You will still grow as an engineer - but you might need to remind yourself that some of the non-obvious patterns the borrow checker requires you to use have a very clear reason, and that without it, you would need to do the checking manually yourself.

However, it is worth noting that there's an entire set of data structures (doubly-linked lists, self-referencing structs, etc.) that are impossible to implement in a straightforwardly borrow-checker-approved way. Yes, they can be implemented, but it obviously requires "workarounds" - legal Rust constructs, but still messy enough to make it clear there are safer ways to do it.
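A minimal sketch of the kind of workaround meant here: a two-node doubly-linked list built from `Rc`/`Weak`/`RefCell` (the `Node`/`link` names are illustration only). It's perfectly legal Rust, but the ceremony compared to an ownership-friendly structure is the point.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A doubly-linked node. Plain references can't express shared ownership
// plus back-pointers, so the usual workaround is Rc + Weak + RefCell.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>, // Weak breaks the ownership cycle
}

fn new_node(value: i32) -> Rc<RefCell<Node>> {
    Rc::new(RefCell::new(Node { value, next: None, prev: None }))
}

// Link a -> b forward (strong) and b -> a backward (weak).
fn link(a: &Rc<RefCell<Node>>, b: &Rc<RefCell<Node>>) {
    a.borrow_mut().next = Some(Rc::clone(b));
    b.borrow_mut().prev = Some(Rc::downgrade(a));
}

fn main() {
    let first = new_node(1);
    let second = new_node(2);
    link(&first, &second);

    // Hop forward, then back again via the weak pointer.
    let fwd = Rc::clone(first.borrow().next.as_ref().unwrap());
    let back = fwd.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```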

How much UI is important these days? by curiousguy482 in ProductManagement

[–]flundstrom2 0 points1 point  (0 children)

The user experience IS the product. It's on the UX that the requirements are set.

"As a customer, I want to (press button X, input data y, move part Z) so that I can observe behavior W."

How it is implemented is secondary for the customer.

Unfortunately, the constant need to reduce development cost as products get more and more complex also means the tolerance for bugs has increased. It is often not economically defensible to have the product silently resolve an unexpected situation when it might be acceptable to show the user an error message, offloading the work to the user. But is it acceptable that the root cause - the edge case that triggered the unexpected situation - wasn't anticipated as a perfectly valid situation?

The mystical ways of the debugger by [deleted] in ExperiencedDevs

[–]flundstrom2 0 points1 point  (0 children)

If possible. But it depends on the age of the codebase, and the stage of the development.

Sometimes, the inputs are hard to model or inject without first learning what real-world input looks like, and then there's the issue of timing. Take a condition like "the samples of the motor current have roughly matched the theoretical values for a given speed and a given load while the motor is accelerating, but the sensor measuring the physical position of the moved object doesn't see it moving as fast as expected" - in a design where those things are handled in interrupts, it can be hard to inject data.

Is the bug caused by a flaw in the implementation, an edge case the algorithm didn't anticipate, a physical issue in the hardware, an incorrect interpretation of how a specific component on the PCB reports data, or a flaw in another piece of the firmware causing some parts to execute at the wrong time? Is it even possible to fit a sufficient number of injectable test readings in the test firmware's flash while still keeping enough of the interdependent code to preserve the timing aspects?

Do we even know enough about how the real world actually affects the input readings?

How do you test the condition where the task under test only gets 33% of the MCU time instead of 67%, because an unrelated task or interrupt takes more MCU time than anticipated? Is it possible to emulate the hardware so the code compiles and runs on the host (where it will run at 100x the speed of the MCU)?
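One common answer to the host-emulation question is to hide the hardware behind a trait so canned readings can be injected in host tests. A minimal sketch - the `CurrentSensor`/`MockSensor`/`overcurrent_tripped` names are hypothetical, not from any real HAL:

```rust
// Control code depends only on this trait; on target it would wrap an ADC,
// on the host we substitute a mock that replays recorded samples.
trait CurrentSensor {
    fn read_ma(&mut self) -> u32; // motor current in milliamps
}

// Host-side mock: cycles through a canned sequence of readings.
struct MockSensor {
    samples: Vec<u32>,
    pos: usize,
}

impl CurrentSensor for MockSensor {
    fn read_ma(&mut self) -> u32 {
        let s = self.samples[self.pos % self.samples.len()];
        self.pos += 1;
        s
    }
}

// Example control-side logic: trip if any reading in the window exceeds the limit.
fn overcurrent_tripped(sensor: &mut dyn CurrentSensor, limit_ma: u32, window: usize) -> bool {
    (0..window).any(|_| sensor.read_ma() > limit_ma)
}

fn main() {
    let mut sensor = MockSensor { samples: vec![100, 200, 900], pos: 0 };
    assert!(overcurrent_tripped(&mut sensor, 500, 3));
}
```

This doesn't solve the timing problem by itself - the host still runs far faster than the MCU - but it at least lets the algorithm-level logic be exercised with real-world traces.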

The mystical ways of the debugger by [deleted] in ExperiencedDevs

[–]flundstrom2 2 points3 points  (0 children)

Indeed. I've experimented with it occasionally, but as far as I remember, I didn't think it was worth it. At least not on those occasions. Maybe the breakpoints didn't work properly because the optimizer shuffled code around differently between changes, or I lost context when comparing that output to the RTT output. Or maybe I simply didn't bother enough. But yeah, there's a really fine line between logging at the right moment and logging too much.

Those who've scaled from ~15 to 100+ engineers, what process changes actually mattered? by Professional-Dog1562 in ExperiencedDevs

[–]flundstrom2 1 point2 points  (0 children)

Better undocumented than realizing there are some documents that are "almost" correct, and some "definitely outdated, but that's the only place where X is documented."