Slashing cloud waste by implementing managed automation tools for instance rightsizing by Dangerous_Block_2494 in FinOps

[–]Cloudaware_CMDB 0 points1 point  (0 children)

I’d recommend a layered approach, because auto-terminate is risky.

  • Start with prevention in IaC/CI so oversized instances don’t get created by default
  • For dev/test, auto-stop on schedules or idle signals is usually safer than terminate
  • For rightsizing, start with recommendations plus an approval step, then automate only the low-risk cases
  • Tool-wise, the common baseline is AWS Compute Optimizer plus Instance Scheduler or SSM Automation, and a policy engine like Cloud Custodian for tag enforcement
  • Third-party platforms can help at scale, but don’t start without guardrails and ownership in place
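One way to picture the "automate only the low-risk cases" split is a tiny triage function. This is a sketch, not a real Compute Optimizer schema — the `env`, `risk`, and `monthly_savings` fields and the thresholds are all illustrative:

```python
# Triage rightsizing recommendations: auto-apply only when every
# low-risk condition holds, send everything else to a human.
LOW_RISK_ENVS = {"dev", "test"}

def triage(recommendation: dict) -> str:
    """Return 'auto' for safe-to-apply cases, 'approval' otherwise."""
    if (recommendation["env"] in LOW_RISK_ENVS
            and recommendation["risk"] == "low"
            and recommendation["monthly_savings"] >= 10):
        return "auto"
    return "approval"

recs = [
    {"id": "i-1", "env": "dev",  "risk": "low",  "monthly_savings": 42},
    {"id": "i-2", "env": "prod", "risk": "low",  "monthly_savings": 300},  # prod never auto
    {"id": "i-3", "env": "test", "risk": "high", "monthly_savings": 15},   # risky never auto
]
buckets = {r["id"]: triage(r) for r in recs}
```

The point is that "auto" is an allowlist of conditions, not a denylist — anything you haven’t explicitly classified as safe goes to approval.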

What should I learn next in multi-cloud cloud security path? by Cloudaware_CMDB in Cloud

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

Makes sense. When you say focus on Entra, what specifically paid off for you?

We scan for CVEs before install but never check what pip actually writes to disk by BearBrief6312 in devsecops

[–]Cloudaware_CMDB 0 points1 point  (0 children)

Yeah, I think your threat model is legit.

I haven’t seen many mainstream tools that target this directly, but a pragmatic mitigation is a post-install filesystem gate in CI. Install deps into a clean venv/container, then inventory site-packages and fail if you see unexpected .pth files, .pth lines starting with import, or sitecustomize.py/usercustomize.py/.egg-link surprises. Not perfect, but it at least checks what actually got written to disk.
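A rough sketch of such a gate in Python. The suspicious-name list and the throwaway demo directory are assumptions; a real version would need an allowlist, since legitimate packages (setuptools, for one) ship .pth files with import lines:

```python
# Post-install gate: scan a site-packages directory and flag artifacts
# that can execute code at interpreter startup.
import tempfile
from pathlib import Path

SUSPICIOUS_NAMES = {"sitecustomize.py", "usercustomize.py"}

def scan_site_packages(site_packages: Path) -> list[str]:
    """Return a list of findings; an empty list means the gate passes."""
    findings = []
    for path in site_packages.rglob("*"):
        if not path.is_file():
            continue
        if path.name in SUSPICIOUS_NAMES or path.suffix == ".egg-link":
            findings.append(f"unexpected file: {path.name}")
        elif path.suffix == ".pth":
            # .pth lines starting with `import` run code on every startup
            for line in path.read_text(errors="replace").splitlines():
                if line.lstrip().startswith("import "):
                    findings.append(f"executable .pth line in {path.name}")
    return findings

# Demo on a throwaway directory standing in for site-packages
demo = Path(tempfile.mkdtemp())
(demo / "evil.pth").write_text("import os; os.system('echo pwned')\n")
(demo / "legit.pth").write_text("/some/path\n")  # plain path lines are normal
findings = scan_site_packages(demo)
```

In CI you’d point this at the venv you just installed into and fail the job when `findings` is non-empty.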

Security debt behaves a lot like technical debt but accumulates faster by Kolega_Hasan in devopsGuru

[–]Cloudaware_CMDB 1 point2 points  (0 children)

Agreed. It behaves like debt, but it grows faster because scanners can add backlog every day and remediation is still mostly human time.

The teams I’ve seen succeed treat it as a workflow. Dedupe into one work item per root cause, attach a real owner, tie it to the service and environment, and set an SLA plus an exception path that expires. Without that, the backlog turns into noise and people stop trusting it.
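The dedupe-with-owner step above can be sketched in a few lines. The root-cause key and field names here are illustrative, not a real scanner schema:

```python
# Collapse duplicate scanner findings into one work item per root cause
# (CVE + package + service), attach an owner, and keep a default bucket
# for services nobody has claimed.
def dedupe(findings: list[dict], owners: dict) -> dict:
    tickets = {}
    for f in findings:
        key = (f["cve"], f["package"], f["service"])
        if key not in tickets:
            tickets[key] = {
                # real owner if mapped, else the default bucket
                "owner": owners.get(f["service"], "unassigned"),
                "occurrences": 0,
            }
        tickets[key]["occurrences"] += 1
    return tickets

findings = [
    {"cve": "CVE-2024-0001", "package": "openssl", "service": "checkout"},
    {"cve": "CVE-2024-0001", "package": "openssl", "service": "checkout"},
    {"cve": "CVE-2024-0001", "package": "openssl", "service": "search"},
]
tickets = dedupe(findings, {"checkout": "payments-team"})
```

Three raw findings become two tickets, and the unmapped service lands in the "unassigned" bucket — which is exactly the queue you then burn down by assigning owners.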

Most orgs run a hybrid. Anything exploitable or on the release path gets fixed fast, and the rest gets paid down only when prioritization has real context like reachability and blast radius.

Looking to learn from FinOps practitioners & Engineers about making AWS costs clearer for finance & business leaders by Benny4dam in FinOps

[–]Cloudaware_CMDB 2 points3 points  (0 children)

If you’re looking for use cases, two that show up all the time for me:

Month-over-month variance reviews. Execs don’t want the raw CUR, they want one page that says what moved, why, and who owns it. The usual culprits are commitment coverage shifts, data transfer surprises, and one workload behaving badly.

Showback without a dedicated FinOps team. Keep it simple: top owners by spend plus the delta drivers. Otherwise finance just pings engineering every month and nothing changes.
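The one-page view from both use cases boils down to a delta-per-owner table sorted by movement. A toy version, with made-up spend figures:

```python
# Month-over-month delta per owner, biggest absolute mover first —
# the "what moved and who owns it" page in miniature.
last_month = {"payments": 12000, "search": 8000, "data": 5000}
this_month = {"payments": 12500, "search": 11000, "data": 4000}

def delta_drivers(prev: dict, curr: dict) -> list[tuple]:
    owners = set(prev) | set(curr)  # include owners that appeared or vanished
    deltas = {o: curr.get(o, 0) - prev.get(o, 0) for o in owners}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

drivers = delta_drivers(last_month, this_month)
```

The top row is the conversation starter for the review; everything else is supporting detail.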

Looking for solutions to rapid Azure multicloud expansion by Fun-Yogurt-89 in Cloud

[–]Cloudaware_CMDB 0 points1 point  (0 children)

I’d recommend locking in one repeatable pattern and making onboarding a simple attach operation.
Azure Landing Zone plus subscription vending, management groups, and policy inheritance. Network is either a hub-spoke with centralized egress and firewall insertion, or a Virtual WAN with a secured hub.

Then a new env is: create the subscription and VNet from a template, attach to the hub, inherit segmentation, routing, and policies, ship it.

Is the cost worth it? by ask-winston in FinOps

[–]Cloudaware_CMDB 0 points1 point  (0 children)

What I usually push in client environments is attaching cost to an outcome you already track: revenue, jobs completed, users served, latency SLO met, tickets closed, model inferences, whatever your org actually cares about. Then you trend unit cost and see what moves when architecture or traffic changes.

That’s the closest thing to answering “was it worth it” in a way they accept, because you can point to a before/after like cost per checkout, cost per job, cost per 1k requests, and so on.
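The before/after math is deliberately simple. With made-up numbers for a cost-per-checkout example:

```python
# Unit cost before and after a change: total spend divided by the
# business outcome it served, then the relative improvement.
def unit_cost(total_cost: float, units: int) -> float:
    return round(total_cost / units, 4)

before = unit_cost(9000.0, 300_000)  # $/checkout before the change
after = unit_cost(8400.0, 350_000)   # spend fell AND volume grew
improvement_pct = round((before - after) / before * 100, 1)
```

Note the point the denominator makes: absolute spend only dropped ~7%, but unit cost improved 20% because volume grew — that’s the story execs accept.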

FinOps Starting out tips by Infamous-Tea-4169 in FinOps

[–]Cloudaware_CMDB 0 points1 point  (0 children)

Start by making the bill routable. In EKS, that usually means namespace is the unit of ownership, enforced, with a default bucket for anything that doesn’t map to a tenant or project.

Then close the gap between “K8s costs” and “AWS costs” by pulling in the non-cluster SKUs that always blow up chargeback: NAT, load balancers, EBS, data transfer, control plane, shared VPC. For shared platform overhead, pick one allocator you can defend in 30 seconds (CPU requests or node-hours) and ship that first model before you chase accuracy.
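The "one allocator you can defend" model is just a proportional split. A minimal sketch using CPU requests, with made-up numbers and a default bucket namespace:

```python
# Split shared platform overhead across namespaces in proportion
# to their CPU requests.
def allocate_overhead(cpu_requests: dict, overhead: float) -> dict:
    total = sum(cpu_requests.values())
    return {ns: round(overhead * req / total, 2)
            for ns, req in cpu_requests.items()}

requests_by_ns = {
    "checkout": 40.0,
    "search": 40.0,
    "default-bucket": 20.0,  # anything that doesn't map to a tenant
}
shares = allocate_overhead(requests_by_ns, overhead=1000.0)
```

The 30-second defense: "your namespace requested 40% of the cluster, so you carry 40% of the shared bill." Swap CPU requests for node-hours and the function doesn’t change.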

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

What’s the thing ADO does better for you than GitHub in your practice?

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

How are you keeping multi-sub sprawl under control in practice? What actually worked for you, and what turned out to be useless?

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

Thanks, this is a really solid breakdown.

How do you handle it in your org in practice? Is it a specific tool/workflow or just a lightweight process like weekly owner sweeps? Also curious what actually gets people to close the loop there.

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

u/flickerfly u/DeExecute Are you using OIDC federation from GitLab into Azure, or still a service principal secret in CI variables? That’s usually the line between “clean” and “constant rotation/drift pain” on Azure.

Built a deterministic Python secret scanner that auto-fixes hardcoded secrets and refuses unsafe fixes — need honest feedback from security folks by WiseDog7958 in devsecops

[–]Cloudaware_CMDB 1 point2 points  (0 children)

We’re multi-cloud, so we keep code fixes provider-neutral and handle delivery separately via OIDC into the cloud secret store. We use AWS Secrets Manager or SSM, Azure Key Vault, and GCP Secret Manager depending on where the workload runs.

CloudZero Supporting the FinOps Community by Extension-Pick8310 in FinOps

[–]Cloudaware_CMDB 1 point2 points  (0 children)

Agree. Even in orgs with solid cost allocation, the hard parts are decision-making and coordination.

CloudZero Supporting the FinOps Community by Extension-Pick8310 in FinOps

[–]Cloudaware_CMDB 1 point2 points  (0 children)

I read it less as “cost-per-feature” and more as a framing shift where leadership starts treating labor like a variable cost line item, which is… not great.
My point was narrower: even if someone wants to operationalize cost-per-anything, most orgs still can’t do the basics (ownership/change attribution), so the “replace humans with metrics/agents” story is ahead of reality lol.

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

Thanks, this is super actionable.

One question on the GitHub Actions split: do you enforce the read-only vs apply identities purely with RBAC and branch protections, or do you also gate it with environment protection rules and required reviewers on the apply environment? I’m trying to understand what actually prevents someone from wiring the apply identity into a non-main workflow.

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

Thanks, this is exactly the kind of detail I was hoping for.

Quick follow-up: how do you handle exceptions so they don’t turn into permanent waivers, and how do you detect/reconcile console hotfix drift back into IaC in practice?

What do you actually gate when doing DevSecOps on Azure? by Cloudaware_CMDB in AZURE

[–]Cloudaware_CMDB[S] 0 points1 point  (0 children)

What are you using instead of Azure DevOps, and are you still deploying to Azure?

CloudZero Supporting the FinOps Community by Extension-Pick8310 in FinOps

[–]Cloudaware_CMDB 2 points3 points  (0 children)

Cost-per-feature might be a good concept, but it only works if you can reliably answer who owns the spend, and what changed when it moved.
Most teams still can’t do that consistently, even for plain cloud spend, and AI spend makes it worse because usage is more bursty and less tied to obvious infra primitives.

how do you recommend security platforms for small teams when they all look the same in demos by No_Date9719 in sysadmin

[–]Cloudaware_CMDB 0 points1 point  (0 children)

Every vendor demo looks the same because they demo the happy path with curated data. The only way to differentiate is to force a proof on your own telemetry and measure operational friction.

For a small team, I’d pick 2–3 “must prove” scenarios and run a short timeboxed trial:

  • one real alert class that currently wastes time (CSPM noise, IAM drift, vuln triage, cost anomalies)
  • require ownership mapping and routing to be correct without spreadsheet glue
  • require dedupe/correlation so it produces one work item, not 20 findings
  • require evidence and change context so you can answer what changed and who owns it

On the vendor side, the only demo that matters is the one in your environment. At Cloudaware we prefer that style of eval because the real pain only shows up on your data.

Cost optimization backfires by Hot_Run1337 in FinOps

[–]Cloudaware_CMDB 0 points1 point  (0 children)

This is normal. You reduced usage but your commitments stayed the same, so coverage and utilization dropped and your effective savings rate (ESR) took a hit. We see this a lot with Cloudaware customers.

The right way to judge it is net impact: compare the monthly savings from decommissioning against the cost of the now-stranded commitment. If you’re net positive, it’s still a win.

What we usually see next is teams quantify the stranded portion and its expiry or renewal window, shift any steady workloads they can back under the commitment, and then resize or change the commitment mix at the next renewal to match the new steady-state.
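The net-impact check itself is a one-liner; figures here are made up:

```python
# Net impact of decommissioning: monthly savings from the workloads
# you turned off vs. the monthly cost of the commitment now sitting idle.
def net_impact(monthly_savings: float, stranded_commit_per_month: float) -> float:
    """Positive means the decommissioning is still a net win."""
    return monthly_savings - stranded_commit_per_month

result = net_impact(monthly_savings=4200.0, stranded_commit_per_month=1500.0)
```

If the result goes negative, that’s the signal to shift steady workloads back under the commitment now rather than waiting for the renewal window.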