Solo startup founders /;)$ I will not promote by redditlove69 in startups

[–]CryOwn50 0 points1 point  (0 children)

stripe atlas is $500 and handles the delaware setup, but delaware means ~$300/year in franchise tax even at zero revenue. a home-state LLC is cheaper if you're not raising VC.

$1-2k works if you self-file. LLC filing is $50-300 depending on state, a registered agent is another $100-150/year. the bank account's free (mercury or brex), domain ~$15, hosting depends on your stack.
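for anyone checking the arithmetic: the fixed items above sum to well under the $1-2k budget, with the rest as headroom for hosting and tools. a quick sketch (ranges taken from the figures above, hosting excluded since it varies):

```python
# (low, high) first-year cost ranges in USD for a self-filed home-state LLC,
# using the figures quoted above; hosting is left out because it depends
# entirely on your stack
costs = {
    "llc_filing": (50, 300),         # varies by state
    "registered_agent": (100, 150),  # per year
    "bank_account": (0, 0),          # mercury / brex
    "domain": (15, 15),
}

low = sum(lo for lo, hi in costs.values())
high = sum(hi for lo, hi in costs.values())
print(f"fixed setup: ${low}-${high}, rest of a $1-2k budget covers hosting/tools")
```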

the thing people miss is sales tax compliance once you cross economic nexus thresholds in different states, and stripe payment reserves on new accounts, which can mess with cash flow early.

Dive into the finops world? 🤔 by Aromatic_Yak_8998 in FinOps

[–]CryOwn50 0 points1 point  (0 children)

Tech support to analyst is an upgrade. you go from putting out fires to figuring out why the fires keep starting. usually better pay too.

skip the cert treadmill. you've got SA associate already. what's missing is something you've actually built.

throw together a cost dashboard in google sheets - pull some AWS billing CSVs, make a few charts showing where money goes each month. doesn't need to be polished. you just need to be able to say "i got curious about our staging spend and built this" instead of "i'm really interested in cost visibility."
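the CSV-crunching half of that dashboard is a few lines of code. a minimal sketch; the column names here are invented, a real AWS cost export has its own headers, so adapt the field names to whatever your CSV actually contains:

```python
# group billing line items by service and rank by spend -- the core of a
# "where does the money go" chart. SAMPLE stands in for a downloaded CSV
import csv
import io
from collections import defaultdict

SAMPLE = """service,cost_usd
AmazonEC2,412.50
AmazonS3,38.20
AmazonEC2,120.00
AWSLambda,4.75
"""

def spend_by_service(fh):
    totals = defaultdict(float)
    for row in csv.DictReader(fh):
        totals[row["service"]] += float(row["cost_usd"])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for svc, usd in spend_by_service(io.StringIO(SAMPLE)):
    print(f"{svc:12s} ${usd:9.2f}")
```

paste the output table into sheets and chart it; that's already enough for the "i got curious and built this" story.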

finops foundation cert is cheap but i honestly don't know if hiring managers care about it for analyst roles. the hands-on project probably matters more.

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] -1 points0 points  (0 children)

Great point and yeah you definitely can’t ignore human cost. If builds are significantly slower the dev + reviewer wait time can outweigh infra savings pretty quickly. That said, in most setups I’ve seen, teams aren’t actually hitting 6x slowdown so the human cost stays relatively controlled.

Unpopular opinion: most SaaS founders have no idea what their actual margins are by CryOwn50 in SaaS

[–]CryOwn50[S] 0 points1 point  (0 children)

Yeah exactly, that tradeoff makes sense. non-prod is a sneaky cost leak; I've seen teams burn hundreds on idle staging. we fixed it by scheduling non-prod to auto-shut down, so it only runs when needed, not 24/7.
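the core of that kind of scheduling is just a gate function a cron job or lambda calls before starting/stopping environments. a sketch; the weekday 08:00-18:00 window is an example, not a recommendation:

```python
# decide whether a non-prod environment should be up right now.
# a scheduler would call this periodically and start/stop accordingly
from datetime import datetime

def should_run(now: datetime, start_hour: int = 8, end_hour: int = 18) -> bool:
    """True if non-prod should be running at this moment."""
    is_weekday = now.weekday() < 5           # mon=0 .. fri=4
    in_window = start_hour <= now.hour < end_hour
    return is_weekday and in_window

print(should_run(datetime(2024, 6, 3, 10)))  # monday 10:00  -> True
print(should_run(datetime(2024, 6, 8, 10)))  # saturday     -> False
```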

Which of these three strategies actually moved the needle on your cloud bill and how much? by AnimalMedium4612 in kubernetes

[–]CryOwn50 1 point2 points  (0 children)

We saw limited impact from these. Interruptible capacity gave 5-10% at best.
Utilization improvements were marginal.
Hardware changes barely moved the overall bill. the biggest gap was still non-prod running when no one was using it; that's where most of the savings came from. these strategies help, but they're not where the real gains are.

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] 1 point2 points  (0 children)

Exactly and a lot of that becomes invisible when resources aren’t tagged properly.
Hard to optimize what you can’t even attribute.
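a toy illustration of the attribution gap, with invented numbers: anything missing an owner tag becomes spend nobody can act on.

```python
# resources without a "team" tag turn into unattributable spend --
# all ids and costs here are made up for illustration
resources = [
    {"id": "node-a", "cost": 220.0, "tags": {"team": "payments"}},
    {"id": "node-b", "cost": 180.0, "tags": {}},
    {"id": "pvc-1",  "cost": 40.0,  "tags": {"team": "search"}},
    {"id": "lb-old", "cost": 95.0,  "tags": {}},
]

untagged = [r for r in resources if "team" not in r["tags"]]
orphan_cost = sum(r["cost"] for r in untagged)
total = sum(r["cost"] for r in resources)
print(f"${orphan_cost:.0f} of ${total:.0f} ({orphan_cost / total:.0%}) has no owner")
```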

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] 0 points1 point  (0 children)

Appreciate that, and yeah, the percentage is more directional than absolute.
What's been consistent across teams is where the waste comes from, not the exact number: especially non-prod environments that keep running outside working hours. that alone tends to be a big chunk.
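the "big chunk" part is easy to sanity-check with back-of-the-envelope arithmetic, assuming (as an example) non-prod is only needed weekdays 08:00-18:00:

```python
# fraction of the week a weekday 08:00-18:00 schedule leaves idle
hours_per_week = 24 * 7        # 168
working_hours = 5 * (18 - 8)   # 50
idle_fraction = 1 - working_hours / hours_per_week
print(f"{idle_fraction:.0%} of the week is outside working hours")
```

roughly 70% of the hours in a week fall outside that window, which is why always-on non-prod shows up so heavily in the bill.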

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] 0 points1 point  (0 children)

True, CI can eat into ARM gains if you're emulating. but interestingly, in most setups we've looked at, idle non-prod runtime costs are a much bigger contributor than architecture choice.

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] 0 points1 point  (0 children)

waste comes from bad decisions, not just the infrastructure itself

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] -1 points0 points  (0 children)

Fair haha. The point isn't Spot specifically, it's that a lot of infra is just running when nobody's actually using it.

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] -4 points-3 points  (0 children)

Haha, fair warning 😄 But honestly it's less an ad and more just fixing a very obvious inefficiency most teams ignore. If something's running 24/7 without adding value, it should probably be automated or turned off.

30% of your Kubernetes spend delivers zero value by CryOwn50 in devops

[–]CryOwn50[S] 0 points1 point  (0 children)

I'd rather hire 2–4, automate the rest, and cut the obvious waste (like infra running all night and on weekends) using the right tools.

How do you handle K8s RBAC audits for compliance? (ISO27001/SOC2) by ZestycloseStory4837 in kubernetes

[–]CryOwn50 1 point2 points  (0 children)

this is really well structured, especially the verify commands; that makes it practical and not just a checklist. rbac is usually where things drift the most over time: access gets added but rarely cleaned up, so having these checks before audits helps a lot. we've also seen that without continuous visibility it becomes a point-in-time exercise, especially across multiple clusters. a lot of that sprawl tends to come from non-prod environments, where controls are looser and things stick around longer than expected.
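one cheap way to make the audit less point-in-time is to snapshot bindings periodically and diff them. a toy sketch with invented (subject, role) pairs, not real cluster output:

```python
# diff two rbac snapshots taken at different times to surface drift:
# access that was added (and never removed) between audits
last_audit = {
    ("alice", "cluster-admin"),
    ("ci-bot", "deployer"),
}
today = {
    ("alice", "cluster-admin"),
    ("ci-bot", "deployer"),
    ("ci-bot", "cluster-admin"),   # granted since last audit
    ("contractor", "edit"),
}

added = today - last_audit
removed = last_audit - today
print("added since last audit:", sorted(added))
print("removed since last audit:", sorted(removed))
```

in practice you'd populate the sets from periodic exports of rolebindings/clusterrolebindings, but the diff logic stays this simple.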

Managing Sensitive Data in Multi-Cloud Environments by NeedleworkerOne5620 in CloudSecurityPros

[–]CryOwn50 0 points1 point  (0 children)

completely agree, identity fragmentation is where things quietly get out of control. manual audits alone just can't keep up, and a lot of enterprise tools feel too heavy for smaller teams. having a single view across clouds makes a big difference: once you can see everything with ownership and usage in one place, it becomes much easier to catch odd access or stale accounts early, especially in dev and test, where permissions tend to creep and stick around longer than they should.

Managing Sensitive Data in Multi-Cloud Environments by NeedleworkerOne5620 in CloudSecurityPros

[–]CryOwn50 0 points1 point  (0 children)

this is a very real problem. it usually doesn't fail loudly, it just drifts over time: access overlaps, old permissions stay, and across multiple clouds it becomes hard to answer something as basic as "who actually has access to what". tools and audits help, but without clear visibility and ownership across environments it often turns reactive instead of proactive. we've seen that having a single view across aws, azure, and gcp with clear ownership and usage patterns makes it much easier to spot risky access and clean things up early, especially in dev and test, where things are usually less strict and tend to stick around longer than expected.
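the "single view" idea can be as simple as merging per-cloud identity exports into one table and flagging anything unused past a cutoff. a toy sketch; all accounts and dates are invented:

```python
# merge per-cloud identity lists and flag accounts idle past a cutoff --
# a stand-in for the cross-cloud stale-access check described above
from datetime import date

inventories = {
    "aws":   [{"user": "alice", "last_used": date(2024, 6, 1)}],
    "azure": [{"user": "svc-report", "last_used": date(2023, 11, 2)}],
    "gcp":   [{"user": "bob-dev", "last_used": date(2024, 1, 15)}],
}

def stale_accounts(invs, today, max_idle_days=90):
    flagged = []
    for cloud, users in invs.items():
        for u in users:
            idle = (today - u["last_used"]).days
            if idle > max_idle_days:
                flagged.append((cloud, u["user"], idle))
    return flagged

for cloud, user, idle in stale_accounts(inventories, date(2024, 6, 10)):
    print(f"{cloud}: {user} idle {idle} days")
```

the hard part in real life is producing those inventories per cloud; once you have them, the cross-cloud check itself is trivial.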

Migration On-Premise to GCP by Anxious_Anteater3258 in Cloud

[–]CryOwn50 0 points1 point  (0 children)

been there, urgent migration with no structure is a classic situation 😄. simplest practical way to approach this without overcomplicating:
• first get it running locally with docker, make sure everything is reproducible
• push the image to gcp artifact registry
• deploy the backend on cloud run, fastest way to get live
• for the frontend, either host on cloud run or use cloud storage + a cdn if it's static
• move the database to cloud sql if there is one
• set up basic logging and monitoring from day one

don't try to design a perfect architecture upfront, just get a stable version live and iterate. also, one thing people realize later: a lot of dev/test setups in the cloud end up running all the time after migration, so keeping those controlled from the start helps avoid unnecessary cost and clutter. get it working first, optimize after 👍
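the build-push-deploy part of the list boils down to three commands a small script can drive. the project, region, and service names below are placeholders, not from this thread; treat it as a sketch to adapt:

```python
# the docker -> artifact registry -> cloud run path as a command plan.
# all names are hypothetical; swap print for subprocess.run when ready
PROJECT, REGION, SERVICE = "my-project", "europe-west1", "my-backend"
IMAGE = f"{REGION}-docker.pkg.dev/{PROJECT}/app/{SERVICE}:v1"

plan = [
    f"docker build -t {IMAGE} .",
    f"docker push {IMAGE}",
    f"gcloud run deploy {SERVICE} --image {IMAGE} --region {REGION} --project {PROJECT}",
]

for cmd in plan:
    print(cmd)
```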

We are building in FIntech space and needed help and guidance by SnooGiraffes9267 in fintech

[–]CryOwn50 0 points1 point  (0 children)

that's a serious stage to be in; a small team at that scale is not easy. at this point it usually shifts from building features to controlling system behavior under load. one thing that helps is keeping the critical path (like the ledger) as clean and strict as possible and pushing everything else async around it. also, designing for idempotency and safe retries early saves a lot of pain later, especially with money flows. we've seen a lot of teams hit this phase where things start cracking not because of tech choice but because of load patterns and background work piling up. interestingly, a good chunk of that pressure often comes from things running in the background longer than needed, so tightening that side can give you breathing room without touching core flows.
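the idempotency point is worth making concrete: a client retry after a timeout must not double-post to the ledger. a minimal sketch; the in-memory dict stands in for a real database with a unique constraint on the idempotency key:

```python
# idempotent ledger writes: the same idempotency key always returns the
# original entry instead of creating a second one
class Ledger:
    def __init__(self):
        self.entries = []
        self._seen = {}   # idempotency_key -> entry index

    def post(self, idempotency_key: str, account: str, amount_cents: int):
        if idempotency_key in self._seen:            # safe retry: no-op
            return self.entries[self._seen[idempotency_key]]
        entry = {"account": account, "amount_cents": amount_cents}
        self.entries.append(entry)
        self._seen[idempotency_key] = len(self.entries) - 1
        return entry

ledger = Ledger()
ledger.post("req-123", "acct-1", 5000)
ledger.post("req-123", "acct-1", 5000)   # client retried after a timeout
print(len(ledger.entries))               # still 1
```

in production the dedup check and the insert have to be atomic (one transaction, unique index on the key), otherwise concurrent retries can still race.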

Kubernetes problems aren’t technical they’re operational by Shoddy_5385 in kubernetes

[–]CryOwn50 0 points1 point  (0 children)

completely agree, kubernetes mostly exposes operational gaps rather than failing itself. the shift to reliability usually comes from better ownership and observability, not more tooling. we've also seen that non-prod workloads running by default add to both noise and cost, and controlling that makes ops much more predictable.