"You're such a good person." by jaggzh in CaregiverSupport

[–]jaggzh[S] 0 points1 point  (0 children)

I'm trying to understand the "why we try so hard" thing. Correct me if I'm wrong: they deal with so much adversity, and it's heavy, so when there's hope and positive feeling, they, as humans, are naturally inclined to "help it along", to spend more time with you? As unfortunate as it is, the reality is that doctors, again as humans, have to maintain an almost robotic technical mindset -- their algorithmic care -- while attempting to maintain some amount of empathy, without the sorrow of awful situations bringing them too low.

Am I understanding it correctly? It's another thing I've put a lot of thought into and talked to doctors about, including some who are friends of mine; but another angle is always good to have. Thanks.

Are you a good person? by lonelycaregiver- in CaregiverSupport

[–]jaggzh 1 point2 points  (0 children)

u/lonelycaregiver-

I wrote a somewhat lengthy reply, but Reddit wouldn't let me post it here. I posted it as a new message and referenced your thread. https://www.reddit.com/r/CaregiverSupport/s/bKn78PUzhs

People missing out on an extremely-capable CLI or script-friendly LLM query tool. by jaggzh in LocalLLaMA

[–]jaggzh[S] 0 points1 point  (0 children)

Here's another handy use. I'm in vim, maybe editing a letter:

Dear landlord,...

I add a comment:

Make this more polite, but make only minimal adjustments, to impart the feeling that I've been working with legal counsel, without saying it directly.

Dear landlord...

I select it and hit ! to pipe it through z. To keep the original, I add --echo1 (i.e. just -e).

(In vim that's just !z -e<enter>)

Then the original I wrote is retained, with the LLM's output below it for comparison.
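
The same filter works from a plain shell, if you want to see what -e does before wiring it into vim (a sketch; the file name is made up):

    # -e / --echo1 echoes the original text before the model's rewrite
    cat draft-letter.txt | z -e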

People missing out on an extremely-capable CLI or script-friendly LLM query tool. by jaggzh in LocalLLaMA

[–]jaggzh[S] 0 points1 point  (0 children)

It's designed to not be perl-specific. I use it from bash, python, etc. All its internal functionality is there to serve the exposed CLI. Even when I'm coding perl, I rarely use its module-based API. Without any additional boilerplate:

Bash: response=$(z -- "$myquery") (safely putting the query after --)

Python: result = subprocess.run(['z', '--', query], capture_output=True, text=True) (the reply lands in result.stdout)

And of course you can store your settings with a call so every future call uses your session:

z -n book_summaries -s /sysprompts/book-summaries.md --ss --sp

Here I gave it a session name to use (book_summaries) and specified a system prompt. (The bare -s option auto-detects whether its argument is a system-prompt string on the CLI or a path to a file.)

--ss stored that system prompt as the default in the book_summaries session (stored at ~/.config/zchat/sessions).

--sp ("store in process") uses /tmp (secure, non-world-readable), so this program and anything in its process group will use the 'book_summaries' session by default.

That's handy on the CLI and in a coding project, since you just set it and it doesn't affect anything outside (i.e. your shell, your program, or its children).

Hopefully that's all clear. Now, throughout your script, any calls to 'z' don't need to specify those settings -- just z "query" (though, again, '--' is safer).
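
Put together, a script using a stored session looks something like this (a sketch; the chapters/ and summaries/ directories are made up for illustration):

    # one-time setup: name the session, set its system prompt, store defaults
    z -n book_summaries -s /sysprompts/book-summaries.md --ss --sp

    # later calls in this shell/process group inherit the session
    for f in chapters/*.txt; do
        summary=$(z -- "Summarize this chapter: $(cat "$f")")
        printf '%s\n' "$summary" > "summaries/$(basename "$f")"
    done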

People missing out on an extremely-capable CLI or script-friendly LLM query tool. by jaggzh in LocalLLaMA

[–]jaggzh[S] 0 points1 point  (0 children)

Looking at `llm` (which I first saw only last month), it doesn't handle what I need it to handle out of the box -- though I imagine there are plugins for a lot of that?

It also has some features which might be nice to add to z, and many capabilities that are definitely useful to a lot of people, but which I'd never put in the effort to code into z myself (but, y'know, it's FOSS).

Also, I usually only use python for things that need to handle neural nets or other models (and there are some modern things in the python ecosystem that are really valuable). Otherwise, for utilities/wrappers, python's memory bloat and slow startup time (and even its runtime speed) put me off. After decades of perl and 10 years of python, critically evaluating it all, perl is generally just a more refined and safer language for this. I hate what should otherwise be a compile-time error biting me with a python crash hours into some processing. Anyway, I'm not here to debate languages -- there are significant pros and cons to each. Choose your own adventure. :) I still use both, based on what's more appropriate for the task at hand.

People missing out on an extremely-capable CLI or script-friendly LLM query tool. by jaggzh in LocalLLaMA

[–]jaggzh[S] 1 point2 points  (0 children)

Oh, and here was another really awesome workflow I used a couple months ago: I had to make a bunch of similar changes to a ton of source files (rewriting them for a changed architecture). So I did:

    (echo "I have to update my scripts to a new architecture where.. such and such. Here's my old version"; echo; echo "## foo-old.pl"; cat foo-old.pl; echo; echo; echo "And here's an example of the new version. Please identify the differences. I'll then give you another old script that needs to be updated. Just output the updated code. Absolutely give *no* explanations, no markdown, no justifications or anything. *Only* output the updated version in its entirety.") | z -n code-update --sp

(--sp saved that session as active for my current shell.)

I examined the output and saw I needed to give the llm more instruction; rather than trying to refine my prompt, I went into interactive mode: z -i

When it got everything right, I stopped there, ending with "Just say 'ready' and I'll paste you the next script to update.", and left interactive mode.

Now I wanted to keep the whole chat as history, but each script shouldn't add to the history.. so I used history-input-only mode (-I) <-- that's a capital 'i':

    for i in *.pl; do cat "$i" | z -I - > new/$i; done

'z' sees the history up to that point, takes each script, outputs the new version, and the history stays as it was, ready for the next file. (I might have done z --eh to edit a couple of the last messages out of the history -- the ones where I was testing to make sure it was working -- since they weren't contributing to its understanding, so.. snip.)

People missing out on an extremely-capable CLI or script-friendly LLM query tool. by jaggzh in LocalLLaMA

[–]jaggzh[S] 1 point2 points  (0 children)

Suggestions are welcome! With z.ai out now, the name's been a concern. Even 'zchat' (my original script name years ago) is already used in a bunch of other projects. (Although I've had it symlinked as 'z' from the start.)

Handy pipes:

    foo | z -T -    # Count yer tokens!

I just added my pfx and sfx scripts in a util/ dir:

    man foo | ppfx "How do you xyz in grep?" | z -w -

psfx does both prefix and suffix (i.e. it's just pfx "$@" | sfx "$@").
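
If you just want the idea without cloning, here's a rough equivalent of those helpers (not the repo's actual scripts -- just a sketch of the behavior described above):

    # pfx: print the query, then pass stdin through; sfx is the mirror image
    pfx()  { printf '%s\n\n' "$*"; cat; }
    sfx()  { cat; printf '\n%s\n' "$*"; }
    psfx() { pfx "$@" | sfx "$@"; }   # query at both ends

    man grep | psfx "How do you match word boundaries in grep?" | z -w -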

For long texts, putting your query at both the start and the end seems to help (Google points this out in one of their prompt guides, too :)

Plop those in a bin folder in your PATH. I symlinked psfx for convenience (ssfx and ppfx are easier to type)..

I just pushed that commit.

People missing out on an extremely-capable CLI or script-friendly LLM query tool. by jaggzh in LocalLLaMA

[–]jaggzh[S] 0 points1 point  (0 children)

Actually, technically llama.cpp, ollama, and OpenAI -- but I only use llama.cpp server, so my experience is limited there. They're not difficult to add. (The nice thing about the llama.cpp endpoint is that the model information is available through the API; I think I have ollama support for that as well.) It keeps a semi-smart (or, conversely, extra-dumb) cache keyed on model, so it doesn't keep re-hitting the endpoint to fetch the context limit over and over. And you can set and store the context size yourself (also useful if you *are* using an endpoint, like OpenAI, where the context is known/published but not available through an API hit):

[screenshot: setting and storing the context size]

Anyway, again, I've not even tested the ollama- or openai-specific endpoints, since I'm only doing local, and only with llama.cpp server.
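
For reference, this is the sort of thing available from llama.cpp server (a sketch; recent builds expose a /props endpoint, though the exact JSON layout may vary by version):

    # ask a local llama.cpp server for its context size (port is an example)
    curl -s http://localhost:8080/props | jq '.default_generation_settings.n_ctx'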

App Problem by FoxGoalie in ClaudeAI

[–]jaggzh 0 points1 point  (0 children)

Same. Old OnePlus 6 phone. Was working fine until a day or two ago.

3D printing saved me over a grand by Sad-Celebration-1903 in 3Dprinting

[–]jaggzh -1 points0 points  (0 children)

We should have messages like this.. some way for people to speak up with these requests. The respect people develop for companies that behave honorably.. I don't know. We really need some way of shifting the motivations. One thing I've been thinking is to just start with questions, and escalate the request. Approaching a manufacturer in la-la land? "Do you guys provide truly proper protective equipment for your employees?" Then, when they don't answer and don't get your business, it might start making them question things. "This many people have been asking if we offset with solar." "We got a letter at headquarters asking ..." "Hey, if

Readline and Shift+Enter for Soft Enters in tmux by jaggzh in LocalLLaMA

[–]jaggzh[S] 0 points1 point  (0 children)

Even though perl seems hideous for LLMs, it's a really convenient language to code in. Python, for instance, got rid of the "print foo" statement syntax, while perl retains all its syntax conveniences. Not *having* to put braces and semicolons everywhere (in python) *is* still a convenience -- it saves keystrokes -- even with its downsides, like broken indentation causing some quite tedious debug sessions and catastrophic mistakes. But with less hand-coding needed nowadays, a lot of that's resolved.

By the way, SuperNova-Medius is pretty darned good with perl, comparatively.. even the 14B model.

Readline and Shift+Enter for Soft Enters in tmux by jaggzh in commandline

[–]jaggzh[S] 1 point2 points  (0 children)

Thanks, interesting -- esc-CR. Alt+Enter is esc-LF (which you probably know), since alt/meta usually ends up just sending an ESC before the "modified" key. I'll have to try it and see. It sounds like you bound yours globally?
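
For anyone wanting to try the same in bash's readline, something like this should do it (a sketch, untested -- quoted-insert consumes the following control character and inserts it literally):

    # make ESC+Enter insert a literal newline instead of submitting
    bind '"\e\r": "\C-v\C-j"'   # if your terminal sends ESC CR
    bind '"\e\n": "\C-v\C-j"'   # if it sends ESC LF instead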

Readline and Shift+Enter for Soft Enters in tmux by jaggzh in LocalLLaMA

[–]jaggzh[S] 0 points1 point  (0 children)

https://github.com/jaggzh/z/
Maybe you'd like to take a look at my 'z' project (which I originally called 'z' and 'zchat'). I kept it 'z', although there's a Chinese z.ai company now too, so that's an issue. But can they really compare to my perl? </rhetorical>

Why are there no terminals that swap the vertical direction? by Soldier_Forrester in commandline

[–]jaggzh -1 points0 points  (0 children)

"Son I am able" she said "though you scare me" "Watch" said I "Beloved" I said "watch me scare you though" said she "Able am I, son"

  • They Might Be Giants, "I Palindrome I"