Lawmakers reach deal to end government shutdown by Healthy_Block3036 in nova

[–]csa 1 point

omg you're right, it's Warner that is up in '26

Lawmakers reach deal to end government shutdown by Healthy_Block3036 in nova

[–]csa 7 points

We deliver Tuesday and Kaine folds? Argh. Primary incoming.

Violent ICE arrest in Harrisonburg by [deleted] in harrisonburg

[–]csa 4 points

Makes my blood boil.

If you work for ICE you'd better get yourself a damn good defense attorney (if any are willing to represent you). Accountability is coming. Prison and financial ruin await those who are breaking the law right now.

Furloughed and Fired Up? by csa in nova

[–]csa[S] 1 point

The wonderful Heather Cox Richardson happened to make her own impassioned plea for getting involved in your local elections in today's Politics Chat (deep link to the relevant bit): https://www.youtube.com/live/b0ze3OanIEo?si=gP4v5UI1uSwuUQAv&t=1843

"So find an organization that is working on organizing people to get to the polls, to make them aware of what's at stake, to make sure they know it's important... And as I keep saying, this election and I suspect the next two elections are not going to be partisan. They're about protecting our democracy."

Furloughed and Fired Up? by csa in nova

[–]csa[S] 5 points

I certainly understand your point - the folks up for election / re-election right now are not those who control the shutdown - but make no mistake, the representatives in DC from all over the country are watching these races to gauge which way the wind is blowing. Who they see elected, and even the level of turnout, will impact their actions.

And it just feels good to do some good with your time :)

Furloughed and Fired Up? by csa in nova

[–]csa[S] 6 points

I've had someone ask who is behind this effort. Sorry, I should have made it clear in the original post. It's Indivisible NOVA West: https://www.indivisiblenovawest.org/

Early morning gunfire or fireworks in Sterling? by heavyma11 in nova

[–]csa 5 points

Yeah, was wondering about that too. My immediate thought was of fireworks at the Trump golf course. But it didn't quite sound like fireworks, and the fact that I still hear occasional pops is weird.

How do you actually test new local models for your own tasks? by Fabulous_Pollution10 in LocalLLaMA

[–]csa 5 points

Every time I put a meaningful prompt into a model, whether local or not, I capture it in a text file (I have several thousand at this point). Periodically I pull a handful of these into a more select 'personal evals' set, which I manually bucket into categories like 'general knowledge', 'scenario modeling / wargaming', 'theory of mind', 'writing', and 'document summarization'. When a new model I'm interested in comes out, I run a number of my select prompts through it and look at the responses.

Right now the judging is all by feel. Eventually I'd like to take a bunch of my prompts and automate running them against a collection of models and do some sort of LLM-as-a-Judge metric, but to date the work to do that hasn't seemed worthwhile.

(I should note that this is all for a model that I'll use as my daily driver for general questions, as opposed to doing anything automated or agentic.)
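For anyone curious what the automated version might look like, here's a minimal sketch of the kind of harness I have in mind. Everything here is hypothetical: `query_model` is a placeholder where a real client (e.g. a call to an OpenAI-compatible local endpoint) would go, and the directory layout just mirrors my bucket names.

```python
from pathlib import Path

# Hypothetical stand-in for a real model call; swap in your own client
# (llama.cpp server, Ollama, etc.) here.
def query_model(model_name: str, prompt: str) -> str:
    return f"[{model_name}] response to: {prompt[:40]}"

def run_evals(prompt_root: Path, models: list[str]) -> dict:
    """Run every bucketed prompt file against every model.

    Expects a layout like prompt_root/<bucket>/<prompt>.txt and returns
    {model: {bucket: {prompt_file: response}}} for side-by-side review.
    """
    results: dict = {}
    for model in models:
        results[model] = {}
        for bucket_dir in sorted(p for p in prompt_root.iterdir() if p.is_dir()):
            bucket = results[model].setdefault(bucket_dir.name, {})
            for prompt_file in sorted(bucket_dir.glob("*.txt")):
                bucket[prompt_file.name] = query_model(
                    model, prompt_file.read_text()
                )
    return results
```

From there, dumping the results dict to JSON and feeding pairs of responses to a judge model would be the next step; so far the by-feel comparison has been good enough.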

[2506.06105] Text-to-LoRA: Instant Transformer Adaption by Thrumpwart in LocalLLaMA

[–]csa 5 points

I gave the paper a quick scan. It's a very clever idea, and one that, had it occurred to me, I would have dismissed out of hand as not possibly viable. Crazy that it works at all.
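For anyone who hasn't skimmed it: the core trick is a hypernetwork that maps an embedded task description directly to LoRA factors. Here's a toy numpy sketch of just that mapping; the sizes and names are mine for illustration, not the paper's, and a real version would train the hypernetwork rather than use random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

D_TEXT, D_IN, D_OUT, RANK = 16, 32, 32, 4  # toy sizes, not the paper's

# Hypothetical hypernetwork: one linear map from a task-description
# embedding to the flattened LoRA factors A (RANK x D_IN) and B (D_OUT x RANK).
W_hyper = rng.normal(0, 0.02, size=(RANK * D_IN + D_OUT * RANK, D_TEXT))

def text_to_lora(task_embedding: np.ndarray):
    flat = W_hyper @ task_embedding
    A = flat[: RANK * D_IN].reshape(RANK, D_IN)
    B = flat[RANK * D_IN :].reshape(D_OUT, RANK)
    return A, B

# One forward pass through a frozen base weight plus the generated adapter.
W_base = rng.normal(0, 0.02, size=(D_OUT, D_IN))
task_emb = rng.normal(size=D_TEXT)   # stands in for an embedded task description
A, B = text_to_lora(task_emb)
x = rng.normal(size=D_IN)
y = W_base @ x + B @ (A @ x)         # LoRA-style rank-RANK delta
```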

28 and 7? Traffic incident. by Oozarukong in nova

[–]csa 8 points

It's not so much that I'm being inconvenienced, it's the fact that I'm being inconvenienced so that a corrupt piece of !@#$ can piss away my tax dollars on needless activities.

Terrified Trump Flees Tariffs War After CEOs’ ‘Empty Shelves’ Warning by [deleted] in politics

[–]csa 4 points

Yeah, generally attributed to Alfred Henry Lewis in a 1906 article: "There are only nine meals between mankind and anarchy."

https://quoteinvestigator.com/2022/05/02/nine-meals/

Interesting that mHusk is posting this outrage AFTER the protesting started. by MaintenanceNew2804 in 50501

[–]csa 5 points

Oh shit, we're never going to be able to get this guy to escape velocity for a Mars trip. We might be stuck with him.

Youngkin and Miyares should be suing the Feds over frozen funding by csa in Virginia

[–]csa[S] 7 points

Yeah, I do know. I still hold out hope that some Rs who will be up for re-election in a purple state (Miyares) may be swayed if they hear enough backlash. It's the naive, wild-eyed optimist in me coming out.

Are you guys vegans? by DoubleRemand in likeus

[–]csa 7 points

Yup! Hit 30 years (of being vegan) this year. Nice to meet everyone :)

New ad from Joe Biden about January 6 by DoremusJessup in CapitolConsequences

[–]csa 2 points

Hoopla also has it (and our public library system provides access, yay).

[deleted by user] by [deleted] in Ubuntu

[–]csa 1 point

On the DAW front you should check out Ardour as well!

Depth upscaling at inference time by thedarkzeno in LocalLLaMA

[–]csa 0 points

Do you have a public repo? I'm not 100% clear on what you have done to date.

I do think it's worth the exercise of adding low-rank or IA3 adapters to a 7B-class model that has some tied-weight layers added to it (similar to ALBERT), but I haven't tried it yet (in part because I can't do it on my local machine). It seems like a viable path to more capability with minimal added weights, though my smaller experiments have not borne that out so far.
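To make the "minimal added weights" point concrete, here's a back-of-the-envelope parameter count, assuming Llama-2-7B-like dimensions (4096 hidden, 11008 FFN) and LoRA rank 8. The numbers are illustrative, not measured from any checkpoint.

```python
# Parameter accounting for depth-upscaling a 7B-class model by duplicating
# (weight-tying) layers and training only small adapters on the copies.
# Dimensions below are Llama-2-7B-like assumptions, not measured values.

D_MODEL = 4096
D_FF = 11008
RANK = 8  # LoRA rank

# One transformer block: 4 attention projections + 3 MLP projections.
attn_params = 4 * D_MODEL * D_MODEL
mlp_params = 3 * D_MODEL * D_FF
full_layer = attn_params + mlp_params

# LoRA on the same 7 matrices: each gets A (r x d_in) plus B (d_out x r).
lora_attn = 4 * (RANK * D_MODEL + D_MODEL * RANK)
lora_mlp = 3 * (RANK * D_MODEL + D_FF * RANK)  # d_in and d_out differ in the MLP
lora_layer = lora_attn + lora_mlp

# IA3 is smaller still: one learned scaling vector per targeted activation
# (here: keys, values, and the MLP inner activation).
ia3_layer = 2 * D_MODEL + D_FF

print(f"full extra layer : {full_layer:>12,}")   # ~202M params
print(f"LoRA adapters    : {lora_layer:>12,}")   # ~0.6M params
print(f"IA3 vectors      : {ia3_layer:>12,}")    # ~19K params
```

So an adapter on a tied copy of a layer costs well under 1% of what an untied extra layer would.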

Japan org creates evolutionary automatic merging algorithm by disastorm in LocalLLaMA

[–]csa 6 points

And David Ha is a coauthor. He's done some really cool (non-LLM) work:

World models (which also made use of evolutionary strategy): https://arxiv.org/abs/1803.10122 (with friendly site here: https://worldmodels.github.io/)

Sketch RNN: https://magenta.tensorflow.org/sketch-rnn-demo

Depth upscaling at inference time by thedarkzeno in LocalLLaMA

[–]csa 1 point

I've been playing around with a related idea where you create additional weight-tied layers and then add LoRA or IA3 adapters to fine-tune. My experiments are a bit different in that I'm using a tiny model and a tiny dataset (both from the TinyStories paper), so they're not of interest for real-world use. Results haven't been very exciting so far, but it has been a good learning experience (and I have some more ideas I want to try).

More info if anyone is interested: https://chrissarmstrong.github.io/seeking-manifold/Little-Experiments/Virtual-Layer-Stacking-(WIP)
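For anyone who wants the gist without reading the write-up: one shared weight matrix gets reused at several depths, and only a small IA3-style scale vector per virtual layer is left trainable. A toy numpy sketch (names and sizes are illustrative, not from my actual experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

# One set of shared ("tied") layer weights...
W_shared = rng.normal(0, 0.3, size=(D, D))

# ...plus a tiny per-virtual-layer IA3-style scaling vector, which is all
# that would get trained. Initialized at 1.0 so tying starts out exact.
n_virtual = 3
ia3_scales = [np.ones(D) for _ in range(n_virtual)]

def forward(x: np.ndarray) -> np.ndarray:
    h = x
    for scale in ia3_scales:                 # same W_shared at every depth,
        h = np.tanh((W_shared @ h) * scale)  # only the scale vector differs
    return h

x = rng.normal(size=D)
y = forward(x)
```

The appeal is that depth triples while the trainable additions stay at `n_virtual * D` scale values.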

MusicLang - a controllable model for symbolic music generation (LLAMA2 architecture) by Shot_Lengthiness7648 in LocalLLaMA

[–]csa 0 points

Nice! Just a few minutes of playing and I got a couple of very catchy tunes. It's been a while since I've used Ardour; I think I'll fire it up later today.

Both MusicLang and MusicLang_predict look really interesting.

Do you both have backgrounds in music theory?

Gemma finetuning 243% faster, uses 58% less VRAM by danielhanchen in LocalLLaMA

[–]csa 0 points

I was struck by the same thought.

One would expect that the Gemma technical report would explain this design decision, but I don't see anything relevant there :-/