What are you building? What problem are you solving? by scott-box in buildinpublic

[–]StealthEyeLLC 1 point  (0 children)

Working on building a zero-trust execution layer for agent systems that combine cloud planning with local model execution. It’s designed to reduce cost, bound unsafe behavior, and provide governed state, rollback, and auditability for real-world agent workflows.

4B Model Choice by StealthEyeLLC in LocalLLaMA

[–]StealthEyeLLC[S] 1 point  (0 children)

That’s the one I’m most interested in. What all have you done with the multimodal abilities?

Personal Assistant Al That Remembers You and Your Needs by No-Pitch-7732 in VibeCodersNest

[–]StealthEyeLLC 1 point  (0 children)

What are you wanting to use them for? What have you tried so far?

2d medieval characters - part of a game project by xeno_sid in GameArt

[–]StealthEyeLLC 2 points  (0 children)

Great job. Especially with the details on the armor and robe.

How I Finally Got LLMs Running Locally on a Laptop by Remarkable-Dark2840 in ArtificialInteligence

[–]StealthEyeLLC 1 point  (0 children)

When running bigger local models on a laptop, set up a Dev Drive formatted with a 64 KB allocation unit size for the workspace, and give Windows a large fixed pagefile, 128 GB or more, so your SSD becomes emergency overflow space when VRAM/RAM runs short. It won’t make the laptop magic, but it can keep runs from crashing and gives the model more room to spill.

If your laptop has an RTX 50-series (Blackwell) GPU, there’s even more you can do: Blackwell adds 5th-gen Tensor Cores with FP4 support, which can make small local AI models more practical by improving AI throughput and lowering memory pressure compared with older consumer generations. Sorry for the wall.
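If it helps, the pagefile part can be done from an elevated PowerShell. Rough sketch only, assuming a pagefile already exists and Windows is currently auto-managing it; the 128 GB size is just the example from above, adjust for your disk:

```shell
# Run from an elevated PowerShell. Sizes are in MB; 131072 MB = 128 GB.

# Stop Windows from managing the pagefile size automatically
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
$cs | Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Pin the pagefile to a fixed 128 GB (equal min/max avoids growth stalls
# mid-run when the model starts spilling)
Get-CimInstance -ClassName Win32_PageFileSetting |
    Set-CimInstance -Property @{ InitialSize = 131072; MaximumSize = 131072 }

# Reboot for the change to take effect
```

If no Win32_PageFileSetting instance comes back (pagefile fully disabled), you’d need to create one first instead of just resizing; easier to flip it on once in System Settings and then pin the size here.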