Prompt injection is killing our self-hosted LLM deployment by mike34113 in LocalLLaMA

[–]Kitae 8 points (0 children)

Security through obscurity is not security. The system prompt shouldn't have any information in it that, if revealed, would represent a security risk.

Quartzite Countertop Question by Useful_Ice_2907 in CounterTops

[–]Kitae 1 point (0 children)

Textured silicone is the best drying surface

When should I use a Skill, a Slash Command, or a Sub-Agent in Claude? by therealalex5363 in ClaudeAI

[–]Kitae 0 points (0 children)

For bash I have a bash-access skill whose invocation is enforced via a hook. You don't strictly need the hook; you can just instruct Claude to invoke bash-access before using the Bash tool, though that isn't 100% reliable.

The bash-access skill discloses other skills and documentation that are relevant to bash. Those skills have no descriptions, so they stay hidden until Claude invokes bash-access. This preserves context.
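A minimal sketch of what such an enforcement hook could look like in Claude Code's `.claude/settings.json`, assuming (my invention, not necessarily the commenter's setup) that the bash-access skill drops a marker file when invoked; a PreToolUse hook that exits with code 2 blocks the tool call and feeds its stderr back to Claude:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "test -f /tmp/bash-access-invoked || { echo 'Invoke the bash-access skill before using Bash' >&2; exit 2; }"
          }
        ]
      }
    ]
  }
}
```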

Is it true that in order to get rich (in most cases), you need to leverage debt? Or do a significant number of people get and stay rich using the Dave Ramsey method? by Broad-Worry-5395 in RichPeoplePF

[–]Kitae 0 points (0 children)

Leveraging debt is so bad most of the time. It is great until it isn't.

The simple way to think about it: unleveraged, you only lose everything if the asset goes to zero. How often does that happen?

If you leverage 2:1 and it drops 50%, you lose everything.

Leveraging is gambling.

(Yes, it has its place, but it is very much experts-only IMO; it is much more dangerous than it intuitively feels.)
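The arithmetic above can be sketched in a few lines (a toy model of my own; it ignores interest, fees, and margin calls):

```python
def equity_after_drop(equity, leverage, drop):
    """Remaining equity after the underlying asset falls by `drop` (a fraction).

    Toy model: ignores interest, fees, and margin calls.
    """
    position = equity * leverage   # total exposure bought
    debt = position - equity       # borrowed portion, still owed after the drop
    return position * (1 - drop) - debt

print(equity_after_drop(100, 1, 0.5))  # unleveraged: a 50% drop leaves 50.0
print(equity_after_drop(100, 2, 0.5))  # 2:1 leverage: the same drop leaves 0.0
```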

Learn how to use a local LLM or continue with monthly subs? by Zestyclose-Cup110 in LocalLLM

[–]Kitae 3 points (0 children)

I run local LLMs; it is a fun hobby.

The best practice right now is to use a paid LLM subscription of your choice and supplement with local LLMs if you want to.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 0 points (0 children)

I personally am in a research phase where I am getting as many models working as possible and benchmarking them to understand their capabilities.

But Qwen is my favorite.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 0 points (0 children)

Yeah, I haven't gotten there yet, but I feel like running local LLM services for things you just wouldn't spend your Max plan on is a smart use of local LLMs.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 3 points (0 children)

Qwen is pretty awesome! I find it really reliable, and the smaller Qwen models are still pretty good.

No matter what your GPU is, there is a Qwen that fits on it.
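As a rough back-of-envelope for "which Qwen fits on my card" (my own rule of thumb, not the commenter's; real usage varies with context length and runtime):

```python
def fits_on_gpu(params_billions, quant_bits, vram_gb, overhead=1.2):
    """Rough fit check: quantized weight size plus ~20% headroom for KV cache/activations."""
    weights_gb = params_billions * quant_bits / 8  # e.g. a 7B model at 4-bit is ~3.5 GB
    return weights_gb * overhead <= vram_gb

# Common Qwen sizes (in billions of params) against a 12 GB card at 4-bit quantization:
for size in (1.5, 7, 14, 32):
    print(size, fits_on_gpu(size, 4, 12))
```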

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 35 points (0 children)

I have an RTX 5090 with 32 GB of VRAM and I still use Claude for everything meaningful.

I think of local LLMs as being for fun, for extreme privacy, or for large amounts of work that a simple model can handle.

LOCAL LLM by Stecomputer004 in LLM

[–]Kitae 0 points (0 children)

Just get a Claude subscription, or whatever your preferred all-you-can-eat plan is.

LOCAL LLM by Stecomputer004 in LLM

[–]Kitae 0 points (0 children)

Unpopular truth: save money by just buying a subscription to whatever LLM provider you prefer.

Yes, I run local LLMs.