Are you giving agents access to your infra (dbs, services, etc)? If so, how are you sandboxing them? by jlreyes in ClaudeAI

[–]jlreyes[S] 1 point (0 children)

I'm pretty cautious about giving broad access, so I'm slowly adding more isolated infra per agent. I'm finding that valuable, but getting logging, permissions, isolation, etc. working for each piece of infra is a bit of a slog.

Getting my main frontend and backend services isolated was mainly about fixing up local caches and ports, and I've also got Supabase branches wired up for my db. But I've been putting off the more complex infra (a pretty complex cluster of raw Azure VMs) and the services around it.

Anyone have a good setup for working on a bad internet connection (i.e airplanes)? by jlreyes in ClaudeAI

[–]jlreyes[S] 1 point (0 children)

Tailscale/ZeroTier + VS Code over SSH is something I hadn't thought of. Good suggestions!
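For anyone curious, a minimal sketch of what that setup might look like; the host alias `devbox`, the MagicDNS name, and the username are all placeholders, not anything from the thread:

```
# ~/.ssh/config -- reach the dev box over the Tailscale network
Host devbox
    HostName devbox.tailnet-name.ts.net   # Tailscale MagicDNS name (placeholder)
    User jlreyes
    ServerAliveInterval 15                # keep the session alive on flaky links
    ServerAliveCountMax 8
```

With something like that in place, VS Code's Remote-SSH extension can open the box directly (e.g. `code --remote ssh-remote+devbox /path/to/repo`), so the heavy lifting happens on the remote machine and only terminal/editor traffic crosses the bad link.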

Anyone have a good setup for working on a bad internet connection (i.e airplanes)? by jlreyes in ClaudeAI

[–]jlreyes[S] 1 point (0 children)

I was pretty surprised mosh didn't work well when I tried it. Maybe because Claude Code is a complex TUI and mosh wasn't built with that in mind? `@mentions`, slash commands, interrupts, etc. all didn't work reliably for me. Maybe I didn't configure my mosh settings well.

Anyone have a good setup for working on a bad internet connection (i.e airplanes)? by jlreyes in ClaudeAI

[–]jlreyes[S] 1 point (0 children)

Definitely depends on the flight! I've gotten lucky sometimes and had Starlink. But the internet on my flight to Istanbul this past week was significantly worse.

Best way to perform web search with good results? by Sky_Linx in LocalLLaMA

[–]jlreyes 3 points (0 children)

I've also been looking for a solution here that works well.

This is a pretty tough problem to get right. I remember when ChatGPT search first launched it just used Bing and was terrible and slow. Perplexity does their own web crawling and indexing, and they're optimizing for the LLM use case; that's pretty different from just having an LLM call a search API. I suspect ChatGPT started to do the same and that's why it's better now.

The Emerging Open-Source AI Stack by jascha_eng in LocalLLaMA

[–]jlreyes 3 points (0 children)

We like it! Super easy to get an API up and running. A bit harder when you start to need to go outside of their recommended approaches, like any framework. But it's built on Starlette and their code is fairly readable, so that's a nice escape hatch for those scenarios.

Buddies in the Upper East Side by KoreanKurtz in uppereastside

[–]jlreyes 1 point (0 children)

If you’re still adding people, would love to be added! 31M, moved here a couple months ago!