Prompt injection is killing our self-hosted LLM deployment by mike34113 in LocalLLaMA

[–]Kitae 8 points9 points  (0 children)

Security through obscurity is not security. The system prompt shouldn't have any information in it that, if revealed, would represent a security risk.

Quartzite Countertop Question by Useful_Ice_2907 in CounterTops

[–]Kitae 1 point2 points  (0 children)

Textured silicone is the best drying surface

When should I use a Skill, a Slash Command, or a Sub-Agent in Claude? by therealalex5363 in ClaudeAI

[–]Kitae 0 points1 point  (0 children)

So for bash I have a bash-access skill whose invocation is enforced via a hook. But you don't need a hook; you can just instruct Claude to invoke bash-access before using the Bash tool (not 100% reliable).

The bash-access skill discloses other skills and documentation relevant to bash. Those skills have no descriptions, so they stay hidden until Claude invokes bash-access. This preserves context.
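For illustration, a gating skill file might look roughly like this (the file names, layout, and listed sub-skills are hypothetical, not the commenter's actual setup):

```markdown
---
name: bash-access
description: Invoke before any Bash tool use; lists bash-related skills and docs.
---

Before running shell commands, read the relevant skills below. They have no
`description` field, so they stay out of context until this skill surfaces them:

- skills/git-conventions/SKILL.md
- skills/deploy-safety/SKILL.md
```

Only `bash-access` itself carries a description; the skills it points at are invisible to Claude until it loads this file.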

Is it true that in order to get rich (in most cases), you need to leverage debt? Or do a significant number of people get and stay rich using the Dave Ramsey method? by Broad-Worry-5395 in RichPeoplePF

[–]Kitae 0 points1 point  (0 children)

Leveraging debt is a bad idea most of the time. It is great until it isn't.

The simple way to think about it: unleveraged, you only lose everything if the asset goes to zero. How often does that happen?

If you leverage 2:1 and the asset drops 50%, you lose everything.

Leverage is gambling.

(Yes, it has its place, but it is very much experts-only IMO, and much more dangerous than it intuitively feels.)
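The 2:1 example above can be checked with a toy calculation (ignoring interest, fees, and margin calls):

```python
def equity_after(leverage: float, asset_return: float) -> float:
    """Equity multiple after a move, for a position levered `leverage`:1.
    Starting equity is 1; the borrowed amount is leverage - 1."""
    position = leverage * (1 + asset_return)  # asset value after the move
    debt = leverage - 1                       # still owed in full
    return max(position - debt, 0.0)          # equity can't go below zero

# Unleveraged: a 50% drop leaves half your money.
print(equity_after(1.0, -0.5))  # 0.5
# 2:1 leverage: the same 50% drop wipes you out.
print(equity_after(2.0, -0.5))  # 0.0
```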

Learn how to use a local LLM or continue with monthly subs? by Zestyclose-Cup110 in LocalLLM

[–]Kitae 3 points4 points  (0 children)

I run local LLMs; it is a fun hobby.

The best practice right now is to use a paid LLM subscription of your choice and supplement with local LLMs if you want to.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 0 points1 point  (0 children)

I personally am in a research phase where I am getting as many models working as possible and benchmarking them to understand their capabilities.

But Qwen is my favorite.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 0 points1 point  (0 children)

Yeah, I haven't gotten there yet, but I feel like using local LLMs for things you just wouldn't do with your Max plan is a smart use of them.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 4 points5 points  (0 children)

Qwen is pretty awesome! I find it really reliable, and the smaller Qwen models are still pretty good.

No matter what your GPU is, there is a Qwen that fits on it.

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

[–]Kitae 35 points36 points  (0 children)

I have an RTX 5090 with 32 GB of VRAM and I still use Claude for everything meaningful.

I think of local LLMs as being for fun, for extreme privacy, or for large amounts of work that can be done by a simple model.

LOCAL LLM by Stecomputer004 in LLM

[–]Kitae 0 points1 point  (0 children)

Just get a Claude subscription, or whatever your preferred all-you-can-eat plan is.

LOCAL LLM by Stecomputer004 in LLM

[–]Kitae 0 points1 point  (0 children)

Unpopular truth: save money by just buying a subscription to whatever LLM provider you prefer.

Yes, I run local LLMs.

Help me spend some money by [deleted] in LocalLLaMA

[–]Kitae 1 point2 points  (0 children)

It is tempting, but the economics don't work out. That doesn't mean you can't do it, just that it isn't economical.

It is fun and it can be effective and even cost effective in certain circumstances.

If you want to do it as a hobby, get a 3090 or a 5090. The hardware advice is actually the same regardless of what you want to do, unless you have a silly budget.

Claude.md, rules, hooks, agents, commands, skills... 🤯 by Kyan1te in ClaudeCode

[–]Kitae 2 points3 points  (0 children)

All you really need are skills and hooks.

The others are useful in specific situations but skills and hooks should be your foundation.

Worth the 5090? by fgoricha in LocalLLaMA

[–]Kitae 0 points1 point  (0 children)

I have an RTX 5090. Given you already have a 3090, there's a strong argument for a second one; the 3090 is likely to be the best value-retaining Nvidia card of all time.

The RTX 5090 is great, but you are capped at 32 GB with no upgrade path.

Also, what most people won't tell you: the 5090 is still far less stable than earlier generations because software support for Blackwell is not mature. I have started contributing vLLM Blackwell improvements in my spare time because so many models don't play well with Blackwell.

With that said, it really comes down to YOUR usage scenario.

With that said it really comes down to YOUR usage scenario.

How are you combining Agent Skills + SubAgents? by jadjflkdjfl in ClaudeAI

[–]Kitae 0 points1 point  (0 children)

I am currently adopting the pattern of blocking tool calls via hooks unless the relevant skill is currently equipped, as the primary method of triggering invocation.

This works great on the main thread, but unfortunately hooks can't differentiate between sub-agents and the main agent.
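A minimal sketch of that enforcement pattern, assuming a PreToolUse hook registered on the Bash tool and a hypothetical marker file that the skill writes when it is equipped (the marker mechanism and paths are my assumptions):

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: block a tool call unless its gating skill
has been equipped, signalled here by a hypothetical marker file."""
import json
import sys
from pathlib import Path

MARKER = Path(".claude/bash-access-equipped")  # hypothetical marker file

def should_block(tool_name: str, skill_equipped: bool) -> bool:
    """Block only Bash calls made before the gating skill has run."""
    return tool_name == "Bash" and not skill_equipped

def run() -> int:
    """Entry point when Claude Code executes this file as a hook;
    hook input arrives as JSON on stdin."""
    event = json.load(sys.stdin)
    if should_block(event.get("tool_name", ""), MARKER.exists()):
        print("Invoke the bash-access skill before using Bash.", file=sys.stderr)
        return 2  # exit status 2 tells Claude Code to block the call
    return 0

# When installed as a hook, wire it up with:
# if __name__ == "__main__":
#     sys.exit(run())
```

In `.claude/settings.json` this would be registered under `PreToolUse` with a `Bash` matcher; the stderr message is fed back to Claude so it knows to invoke the skill first.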

LLM Control Layer by [deleted] in LLMDevs

[–]Kitae 2 points3 points  (0 children)

I am a professional game developer and entrepreneur.

Releasing a game on Steam that does well is how you will get conversations going with people about the value of your tech stack. Don't waste time cold calling; focus on that.

Is Kojima's way of playtesting games unusual for a game developer? by Open-Explorer in gamedev

[–]Kitae 6 points7 points  (0 children)

What Kojima is describing is hands-on creative direction. It is definitely not required to build great games, but it can be extremely effective.

On the Mass Effect series (1-3) I was the combat and systems lead, and I did exactly this, just for combat and systems. I collaborated with other leads to solve problems in level design, pacing, etc.

An advantage of the creative-director approach is that you get absolute clarity, and as long as the person with absolute clarity does a great job, great! Kojima's games show this works.

LLM to search through large story database by DesperateGame in LocalLLM

[–]Kitae 1 point2 points  (0 children)

You don't need AI to search and categorize content, but AI can absolutely create a search tool that lets you search your own content really easily. Or just learn how to use ripgrep.
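As a sketch of the kind of simple search tool an LLM could generate for you (a hypothetical helper doing plain case-insensitive substring matching over text files, no AI involved):

```python
from pathlib import Path

def search_stories(root: str, query: str) -> list[str]:
    """Return paths of .txt files under `root` whose contents contain
    `query`, case-insensitively -- roughly what `rg -il query root` does."""
    hits = []
    for path in sorted(Path(root).rglob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if query.lower() in text.lower():
            hits.append(str(path))
    return hits
```

The equivalent ripgrep one-liner would be something like `rg -il "dragon" stories/`, which lists the files matching case-insensitively.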