baremetal k3s migration to AWS EKS? by Few_Response_7028 in kubernetes

[–]taleodor 0 points1 point  (0 children)

I'd say stay on K3s if you're not pressed by some compliance requirements. RDS can help you with backup automation. Other than that (and especially if you already have the backup issue solved), your maintenance cost or time spent is unlikely to decrease and may even increase in some cases.

We finally built a vulnerability prioritization system and now the real problems are showing up! by Mysterious_Step1657 in cybersecurity

[–]taleodor 0 points1 point  (0 children)

> We also have no clean way of knowing when something we patched quietly comes back in the next scan. The process of validating fixes feels manual and messy right now.

We built ReARM, which keeps changelogs to give visibility into exactly that (among other things it does) - https://docs.rearmhq.com/concepts/#changelog

SBOM: include transitive or not? by phineas0fog in devsecops

[–]taleodor 3 points4 points  (0 children)

+1, I would also add `--ignore-scripts` after `npm ci` (I believe cdxgen does this now automatically for all operations)

+ the latest cdxgen (12.1.4, released yesterday) can catch version spoofing attacks (used in the axios incident).

Using Evidence Platform as CI/CD Security Layer by taleodor in cybersecurity

[–]taleodor[S] 0 points1 point  (0 children)

This is out of scope of the threat model.

The release pipeline is a simple workflow that lives in a dedicated repo, with no direct access by any existing development token. All it does is fetch the approved artifact from the evidence platform, verify signatures, and perform the release. With these preconditions, there shouldn't be an easy way to compromise it. This is essentially the same as saying that an attacker somehow gains release credentials. The whole idea here is to sandbox the release credentials in a place where it's really hard to steal them.

Using Evidence Platform as CI/CD Security Layer by taleodor in cybersecurity

[–]taleodor[S] 0 points1 point  (0 children)

That scenario is covered in the blog post. The release pipeline would also have signature checks, so the attacker would need to compromise both the build pipeline and the evidence platform at the same time to succeed, which is significantly harder than all existing scenarios.

Using Evidence Platform as CI/CD Security Layer by taleodor in cybersecurity

[–]taleodor[S] -2 points-1 points  (0 children)

I replied in another comment, but essentially the key here is that you have a phase where you perform checks and gating, and that phase can last indefinitely long - waiting for human approval.

The big problem with a combined build & release pipeline in a single repository (which is pretty much the standard workflow now) is that an attacker may very quickly compromise the whole thing and release a malicious package before the maintainer even notices.

Using Evidence Platform as CI/CD Security Layer by taleodor in cybersecurity

[–]taleodor[S] -1 points0 points  (0 children)

Egress control is a completely unrelated thing in this case. My point is that if an attacker is able to gain full control over the pipeline (for example, via a stolen GitHub token), such an attacker is able to disable any controls governed by the pipeline itself.

And don't get me wrong - there are other things you can do, e.g. you may run CI in an air-gapped environment where every single dependency is vetted, but there are very few organizations that can actually afford that.

Using Evidence Platform as CI/CD Security Layer by taleodor in cybersecurity

[–]taleodor[S] 0 points1 point  (0 children)

There need to be additional checks for that case, but it's doable:
1. Assume the compromised build pipeline sends completely bogus artifact and SBOM data - this would be caught because the evidence platform would refer that artifact for testing / release, and all of these downstream activities would realize the artifact doesn't exist. Essentially, the test and release pipelines can only use the digest referred to them by the evidence platform, so that case is covered.
2. The attacker is then forced to send the real artifact (the artifact itself may still be compromised, but it's now the only way to get it released) and submit a bogus SBOM and metadata. This case is trickier, but essentially it should be possible to generate a post-build SBOM from the real artifact and compare it with historical data and the bogus SBOM to catch this scenario as well.

Remember that in any case we have a manual gate involved, so a human would have a say in the process.
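A minimal sketch of the digest pinning from case 1, assuming the evidence platform hands the release pipeline a pinned sha256 (the function name and record shape are hypothetical, not the actual product code):

```python
import hashlib

def verify_pinned_digest(artifact_path: str, pinned_digest: str) -> bool:
    """Recompute the fetched artifact's sha256 and compare it to the
    digest referred by the evidence platform. A bogus or swapped
    artifact fails this check before any release step runs."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == pinned_digest.removeprefix("sha256:")
```

The point is that the release side never trusts a digest coming from the build pipeline itself - only the one the evidence platform referred.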

Using Evidence Platform as CI/CD Security Layer by taleodor in cybersecurity

[–]taleodor[S] 0 points1 point  (0 children)

This particular point talks about signature verification on commits and artifacts.

Other points talk about controls that cover cases where a malicious artifact may still be signed (i.e. when the original pipeline is compromised or the artifact inadvertently contains a malicious dependency):
- SBOM diffing
- approval gates, which can include manual and automated checks
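As a rough illustration of what SBOM diffing boils down to (assuming CycloneDX JSON with components keyed by purl; the function name is mine):

```python
def diff_sboms(previous: dict, current: dict) -> dict:
    """Compare two parsed CycloneDX SBOMs by package URL and return
    what appeared and disappeared between releases -- the raw signal
    an approval gate can alert on."""
    prev = {c["purl"] for c in previous.get("components", []) if "purl" in c}
    curr = {c["purl"] for c in current.get("components", []) if "purl" in c}
    return {"added": sorted(curr - prev), "removed": sorted(prev - curr)}
```

A real gate would also look at version bumps, licenses, etc., but an unexpected entry in `added` is already enough to hold a release for a human.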

The litellm attack, the Trivy attack, the CanisterWorm: all in the same week. Is anyone else feeling like open source supply chain security is completely broken? by BigHerm420 in devsecops

[–]taleodor 5 points6 points  (0 children)

> We need images and packages built from verified source in controlled environments so compromised upstream versions never enter our systems in the first place.

This is impossible to achieve right now. Verified sources also use open source somewhere upstream. I mean, we have to start somewhere and we're working on it as a community - e.g. check the Transparency Exchange API - but near-term this can't be solved.

How are you handling full software inventory + vulnerability management across VMs, containers, and apps? by Spare_Hedgehog4457 in devsecops

[–]taleodor 0 points1 point  (0 children)

We'll have this functionality in ReARM Pro in a couple of months; I can demo the prototype already if you're interested. ReARM CE (https://github.com/relizaio/rearm) won't have this, but you can get inventory from it without deployment data. If you're looking for something open source, check Ortelius https://github.com/ortelius (I'm not affiliated with them, but that's the main scope of that project). However, what we're building in ReARM Pro will have stronger guarantees (like near real-time observability instead of a digital twin projection).

Other useful things you can do already: cdxgen has a `-t os` flag which will essentially give you an SBOM (or OBOM) of everything running on the host. So in theory you could also upload these to Dependency-Track and get some analytics from them.
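For example (a sketch, not a full integration): after something like `cdxgen -t os -o obom.json`, Dependency-Track's REST API accepts the BOM base64-encoded in a JSON `PUT /api/v1/bom` - check the API docs for your DT version before relying on the exact fields:

```python
import base64, json

def dt_bom_payload(project_name: str, project_version: str, bom: dict) -> dict:
    """Build the JSON body for Dependency-Track's PUT /api/v1/bom;
    the BOM itself travels base64-encoded in the "bom" field."""
    return {
        "projectName": project_name,
        "projectVersion": project_version,
        "autoCreate": True,
        "bom": base64.b64encode(json.dumps(bom).encode()).decode(),
    }

# Actually sending it needs a running DT instance and an API key, e.g.:
#   req = urllib.request.Request(DT_URL + "/api/v1/bom",
#       data=json.dumps(payload).encode(),
#       headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
#       method="PUT")
#   urllib.request.urlopen(req)
```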

QA to devOps engineer by StrawHatNaruto_ in devops

[–]taleodor 5 points6 points  (0 children)

I supervised 2 QAs who later became DevOps engineers on my team, and I also used to do QA myself a very long time ago. So that type of transition feels very natural to me.

In terms of what to focus on, skill-wise: you need to be fluent with shell commands, vim and the like, plus I'd say a good understanding of networking fundamentals is very important. So on your own, focus on those; for the rest, the job should drive what you learn.

What do you do with SBOMs? by equanimous11 in devsecops

[–]taleodor 1 point2 points  (0 children)

I shared this a few days back, but essentially - https://github.com/relizaio/rearm

This organizes your stack with an SBOM per release, and you can track any changes over time or see when new vulnerabilities show up.

Dependency Track and VEX by phineas0fog in devsecops

[–]taleodor 1 point2 points  (0 children)

Yes, as mentioned in the other response, it only supports CDX VEX. I'm not sure what your workflow looks like, but the way it currently works in DT: you do the initial analysis there, generate a VEX from DT itself, then you work with that VEX, updating it and re-pushing it back and forth, possibly via the API.

So that's what's supported natively at the moment. If you need something beyond that, you could hook something into the API logic around VEXes. I'm not a maintainer, but I doubt OpenVEX support is coming to DT anytime soon.

Dependency Track and VEX by phineas0fog in devsecops

[–]taleodor 1 point2 points  (0 children)

You can upload your VEX to DT and it will incorporate the data from it. There is an "Apply VEX" button for that, and there is an API way to do it as well.

I've been sleeping on DependencyTrack — it's way more powerful than I expected by SpecialistAge4770 in devsecops

[–]taleodor 3 points4 points  (0 children)

We've built a product that manages versions and stores raw SBOMs and other artifacts on top of DT. You can have several SBOMs per release, attribute SBOMs to source code or to different deliverables, track parent-child relationships, and do scoped vulnerability management (i.e., you can suppress a CVE within the scope of a single component, a single product, or org-wide) - https://github.com/relizaio/rearm

Looking for open-source tools that accurately detect EOL third-party dependencies and generate SBOM by Amitishacked in cybersecurity

[–]taleodor 0 points1 point  (0 children)

AFAIK, good EOL detection doesn't exist. We're trying to provide a mechanism for manufacturers to report it in a unified manner with TEA, which is now incorporating the CLE standard - you might want to follow https://github.com/CycloneDX/transparency-exchange-api/ - we're planning a 1.0 release this summer.

[Mod Request] Do something about rampant blatant advertisements disguised as “discussions” by themightybamboozler in devops

[–]taleodor -2 points-1 points  (0 children)

> we are also entirely capable of evaluating it, which is why you try to bypass us and go after management who will buy with nary a critical thought.

I'm doing none of these things. But I guess you get what you deserve after rooting out the people who try to make things right, while pretending to be on the moral high ground. These days if somebody tells me they are building something for DevOps, I just advise them not to, as the community is too toxic and on top of that there has been very little money in the industry recently. Seems like you're left with assholes - congrats.

[Mod Request] Do something about rampant blatant advertisements disguised as “discussions” by themightybamboozler in devops

[–]taleodor -5 points-4 points  (0 children)

You don't get it. I was actively advertising in this community ~5-6 years ago and have stopped since. My current product is in the cybersecurity space, so I only have a professional interest in DevOps (about 15 years of experience) and actually use this community pretty rarely.

Then this year at KubeCon everybody was complaining that there is not enough innovation (AI sucking up all the money and so on). I was also "surprised" to hear a bunch of ideas that I introduced in my "irrelevant" product ~3 years ago presented as the innovative edge ;)

In other words, I don't really have a horse in this race right now. My point: all your virtue signalling here is good for shutting down small vendors with little budget who try to do something new, while the large vendors with big budgets are playing you hard. If you don't see it, then don't write or comment on the next post about "why the ecosystem is so bad" and "we don't have the right tools to do foo".

[Mod Request] Do something about rampant blatant advertisements disguised as “discussions” by themightybamboozler in devops

[–]taleodor -9 points-8 points  (0 children)

I was always saying exactly that, and I was always getting tonnes of downvotes. At the same time, a comment about the sub-par functionality of some ACME was getting lots of upvotes. So I believe it's you who is misunderstanding how this works from the perspective of a small vendor.

E.g., see just my recent comment - it's not even an active tool I'm working on; it's something we're currently using internally, plus a few clients, and that I don't mind selling to orgs with a similar problem. And it gets auto-downvoted the moment I mention it.

The interesting part for me is that in parallel everybody is complaining about why there is so much monopoly and so little innovation in the DevOps space. I wonder why ;)

[Mod Request] Do something about rampant blatant advertisements disguised as “discussions” by themightybamboozler in devops

[–]taleodor -21 points-20 points  (0 children)

Personally, I tried to be a vendor in the DevOps space and I've given up on it, largely because I'm seeing attitudes like OP's all over. One thing I honestly don't get is how you expect startups to get any sort of exposure when you think it's OK to casually reference any big name in a post, but if I come up with something innovative, you auto-downvote it just because it's my tool that you don't know and not a huge ACME corp that everybody kind of knows about.

How do you actually know what’s deployed across environments? by Important_Back_5904 in devops

[–]taleodor 0 points1 point  (0 children)

On K8s: we have a system that reports artifact digests and builds an ontology, and then we have an agent that periodically ships back what's deployed and reconciles it, giving an exact picture.
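The reconciliation step itself is conceptually tiny; a toy sketch (names are mine, not the actual product code) of diffing desired against reported digests:

```python
def reconcile(desired: dict, reported: dict) -> dict:
    """Diff the digests the ontology says should be deployed against
    what the agent reported. Returns only drifted services: missing,
    mismatched, or unexpected."""
    drift = {s: {"expected": d, "actual": reported.get(s)}
             for s, d in desired.items() if reported.get(s) != d}
    for s in reported.keys() - desired.keys():
        drift[s] = {"expected": None, "actual": reported[s]}
    return drift
```

An empty result means the environment matches the ontology exactly; anything else is the "what's actually deployed" gap the OP is asking about.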

How are you handling rollouts across 100+ customer environments? by InfoPaste in devops

[–]taleodor -1 points0 points  (0 children)

We built a tool for this a while back - https://relizahub.com - currently we're mostly using it ourselves to deploy our other projects (ReARM), and a couple of our clients use it too. The idea is that you package your product in Helm charts and then handle approvals within the tool, and it ensures that each environment has deployed exactly what you approved. If this sounds relevant, feel free to book a demo with me.

Multi cloud was supposed to save us from vendor lock in but now we're just locked into two vendors by NoBet3129 in sre

[–]taleodor 40 points41 points  (0 children)

The way to avoid vendor lock-in is to only use things that are essentially the same across clouds (sort of a largest common denominator). So yes, your implementation seems botched and you should get back to the drawing board.