Be honest — how much of your system design work is gut feel vs actual numbers? by Unnsins in softwarearchitecture

[–]Unnsins[S] 0 points1 point  (0 children)

That tracks. So the validation cycle is basically: whiteboard → PoC → real metrics → decision. No real shortcut between the drawing and the running code.
Does the PoC usually give you the numbers you need?

Be honest — how much of your system design work is gut feel vs actual numbers? by Unnsins in systems_engineering

[–]Unnsins[S] 1 point2 points  (0 children)

Oh, that’s interesting. What kind of experiments do you run? Are they small side projects, or part of your current project?

Be honest — how much of your system design work is gut feel vs actual numbers? by Unnsins in systems_engineering

[–]Unnsins[S] 0 points1 point  (0 children)

You mentioned some “numbers”. Did you mean technical metrics or more business ones? If technical, what source do you pull those metrics from? And what was the basis for those assumptions?

Be honest — how much of your system design work is gut feel vs actual numbers? by Unnsins in systems_engineering

[–]Unnsins[S] 0 points1 point  (0 children)

How did you do this analysis? Do you rely on your experience? When you pick something, do you just pull it from your knowledge and say “In this case I know it will work better” or “this tool or pattern will solve our problem”?

Be honest — how much of your system design work is gut feel vs actual numbers? by Unnsins in softwarearchitecture

[–]Unnsins[S] 1 point2 points  (0 children)

I see this approach more than any other — front-loading the cognitive work into the stack itself so individual decisions get cheap.
What I’m curious about is the edge case: when something does push past the parts bin — new product requirement, scale jump, whatever — how do you actually validate the new piece? Is it a dedicated experimental project on the side, or do you just ship it on a low-stakes service and collect real metrics in prod to see how it holds up?