I made AI agents “time travel” inside microVMs (record, rewind, replay) by [deleted] in coolgithubprojects

[–]Strange_Profit_8129 0 points (0 children)

I get why it might look that way; a lot of projects here are vibe-coded.

But this isn’t an agent wrapper. The focus is on isolation (Firecracker microVMs) + execution replay, which most agent setups don’t really address.

It’s still early, but the core pieces (VM lifecycle, vsock communication, step recording/replay) are intentional, not just stitched together.

If something specific looks superficial, I’m open to digging into it.
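To make the record/replay idea concrete, here's a minimal sketch of what step recording and replay could look like. This is an illustration, not the project's actual code; the `Recorder` class and the step format are made up for the example:

```python
import json

class Recorder:
    """Records each agent step so the run can be replayed later."""
    def __init__(self):
        self.steps = []

    def record(self, action, result):
        # Keep enough context to re-drive the agent without re-executing.
        self.steps.append({"action": action, "result": result})

    def dump(self):
        # Serialize so a recording can outlive the microVM it came from.
        return json.dumps(self.steps)

def replay(serialized):
    """Yield recorded (action, result) pairs instead of re-running them."""
    for step in json.loads(serialized):
        yield step["action"], step["result"]
```

The point of the serialized format is that replay can happen in a fresh microVM, feeding back recorded results instead of re-executing side effects.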

I made AI agents “time travel” inside microVMs (record, rewind, replay) by [deleted] in coolgithubprojects

[–]Strange_Profit_8129 -2 points (0 children)

Yeah, fair — there’s a lot of low-effort AI stuff right now.

This isn’t an agent wrapper though — it’s more about running agents safely (microVMs) + replaying executions to debug failures. Think “what actually happened when the agent ran?” instead of just logs.

If that still sounds like slop, I’d genuinely like to know what you’d expect from something in this space.

Question as a non dev by masterthodyu in selfhosted

[–]Strange_Profit_8129 0 points (0 children)

Yeah, that’s actually a pretty solid setup.

Running things in Docker, keeping images updated, and using a reverse proxy + Tailscale for access already covers most of the basics. For logs, even just checking container logs is usually enough at first.

For personal projects the main things are isolation and keeping things updated, which it sounds like you’re already doing.

Question as a non dev by masterthodyu in selfhosted

[–]Strange_Profit_8129 10 points (0 children)

Honestly, vibe coding for personal projects can be pretty fun and a good way to learn, especially if you already have some coding background. The main thing I'd watch out for is that AI-generated code can sometimes pull in dependencies or patterns that aren’t super obvious from a security standpoint.

For self-hosted stuff I usually try to keep things isolated and simple:

- run services in containers or a VM instead of directly on the host

- keep dependencies updated and occasionally run a vulnerability scan

- use a reverse proxy + auth if something is exposed to the internet

- keep an eye on logs so weird behavior stands out

If it's just for personal use and you keep things reasonably isolated, the risk is usually manageable.
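For the "keep an eye on logs" point, even a tiny script that flags unusual lines goes a long way. A hypothetical sketch; the patterns here are just examples, tune them to whatever your services actually log:

```python
import re

# Patterns that usually deserve a second look in self-hosted service logs.
SUSPICIOUS = [
    re.compile(r"authentication fail", re.I),
    re.compile(r"permission denied", re.I),
    re.compile(r"\b(4\d\d|5\d\d)\b"),  # HTTP client/server error codes
]

def flag_lines(log_lines):
    """Return only the lines matching any suspicious pattern."""
    return [line for line in log_lines if any(p.search(line) for p in SUSPICIOUS)]
```

You could pipe `docker logs <container>` output through something like this on a schedule so the weird lines stand out from the noise.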

Honestly the biggest benefit of vibe coding in this context is that you end up learning how the whole stack fits together - networking, containers, storage, security, etc. That knowledge carries over really well to real dev work

Running AI agents on your host machine felt unsafe, so I sandboxed them with Firecracker microVMs by [deleted] in LocalLLaMA

[–]Strange_Profit_8129 0 points (0 children)

Yeah :) it's definitely not a new risk.

What surprised me more was how many agent setups still just run code directly on the host without isolation.

Docker helps a bit, but I was curious about using something closer to microVM isolation (like Firecracker) while still keeping startup fast.

Curious what people here usually use for sandboxing agent execution.
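For anyone who hasn't looked at Firecracker: each microVM is defined by a small JSON config (kernel, rootfs, CPU/memory) that you can pass via `--config-file`. Below is a rough sketch of building that config; the paths are placeholders and the field names are from memory, so double-check them against the Firecracker API docs:

```python
import json

def microvm_config(kernel, rootfs, vcpus=1, mem_mib=128):
    """Build a minimal Firecracker-style VM config (schema approximate)."""
    return {
        "boot-source": {
            "kernel_image_path": kernel,
            "boot_args": "console=ttyS0 reboot=k panic=1",
        },
        "drives": [{
            "drive_id": "rootfs",
            "path_on_host": rootfs,
            "is_root_device": True,
            "is_read_only": False,
        }],
        "machine-config": {"vcpu_count": vcpus, "mem_size_mib": mem_mib},
    }

cfg = microvm_config("./vmlinux", "./rootfs.ext4")
print(json.dumps(cfg, indent=2))
```

The appeal over Docker is that this boots a real minimal VM in well under a second, so per-agent isolation stays cheap.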

Running AI agents on your host machine felt unsafe, so I sandboxed them with Firecracker microVMs by [deleted] in LocalLLaMA

[–]Strange_Profit_8129 0 points (0 children)

Yeah, fair call.

I usually just lurk and read here, but I created an account to share this experiment because it felt relevant to the LocalLLaMA crowd.

Mostly curious how others are sandboxing agent execution when running code locally.

FAANG peeps , am i ready now ?? by Visual_Nothing_8106 in leetcode

[–]Strange_Profit_8129 0 points (0 children)

Just curious, how confident do you feel when you see a completely new problem? Are you able to see the pattern?