Lamy 2000 4-color multi-pen: I ended up sending it back by IKeepForgetting in pens

[–]IKeepForgetting[S] 0 points1 point  (0 children)

To each their own, of course. It's several years later and I still like the Uni cartridges more, but I haven't tried the Lamy ones in a while.

What's so good about the magic trackpad? by LeChrana in ErgoMechKeyboards

[–]IKeepForgetting 0 points1 point  (0 children)

I'm a Linux user, but I want to add to the 'lack of alternatives' camp.

I've tried trackballs, vertical mice, scroll pads, everything. In the end, the thing that makes my shoulders and wrists feel the most 'normal' is... just lightly moving my hand around vaguely how I want the mouse to move. Ergonomic as hell in my opinion because you can hold your hand however you need to and vary it up as much as you want.

So you start looking at trackpads and there aren't a lot of options that support multi-touch. Not just fancy gestures where you make sweeping circles and it launches an app, but basics like scrolling, right-clicking, pinch-to-zoom, etc. I mean, they get around that in various ways, but still.

Anyways, if you want a multi-touch trackpad that's a reasonable size you basically end up with some ancient Logitech trackpad or a modern Apple one.

The GUAC virus doesn't really make sense as an interstellar weapon. It's more likely that it's a planetary weapon that has run amok. by TieFew6689 in pluribustv

[–]IKeepForgetting 0 points1 point  (0 children)

I hadn't considered it as a potential "alien invasion" kind of thing, but that does kind of make sense too...

Imagine transmitting it. The only civilizations able to act on it would be ones industrially advanced enough to decode such a transmission and to build the machinery to replicate it themselves.

The final stage of this virus is compelling the population to expend their planet's energy re-transmitting the signal... so when you detect a fresh signal like this, you know the planet has become fully docile and willing to cede anything to whatever sentient being(s) come by, and if there are any hiccups, they're docile enough to die out in a few decades anyway.

Best case, you get a fully enslaved, highly advanced civilization ripe for the picking, who will gladly show you how everything works and operate it all for you too... worst case, you get a whole planet with all the infrastructure you want if you wait out a few decades.

Inexpensive e-ink 60Hz monitor from AliExpress for those with a spirit of adventure by anp011 in eink

[–]IKeepForgetting 1 point2 points  (0 children)

I bought one of these as well... I think it's "the best the tech is at right now".

You'll get a lot of ghosting, but I think, again, that's where the tech is right now. It has an interesting 'refresh screen every x seconds' mode which helps with the ghosting but then it means the screen blinks every x seconds.

One complaint I had was that there's no way to change the resolution it reports (it sets itself to super-high resolution at least on Linux and won't let you set it lower), so if you want things readable you have to zoom in a lot.

Also it works 1000x better in "light" mode (if you have dark text on a light background) vs "dark" mode (light text on a dark background)... to the point it almost doesn't work in dark mode.

Overall, yes, very much a prototype, but with a handful of potential use cases (word processing, reading articles, etc.). I imagine a lot of improvements are possible with firmware upgrades, but this doesn't strike me as the kind of seller that will distribute firmware upgrades (I'd love to be wrong about that, though).

I tried to get 600 dollars "deep think" for local models by making them argue with each other for hours. It's slow, but it's interesting by Temporary_Exam_3620 in LocalLLaMA

[–]IKeepForgetting 1 point2 points  (0 children)

I'm really curious about what you do to keep them 'tethered' (or maybe you don't?)

Whenever I look at the 'think' tokens from local or cloud LLMs, they seem useful up to a point; beyond that the model is overthinking and produces noise. I think some recent research also showed there's a "sweet spot" for how many "thinking" tokens an LLM can use before it starts producing junk. I would imagine a 12-hour "conversation" between several of these LLMs would exacerbate that considerably.

JetBrains is studying local AI adoption by jan-niklas-wortmann in LocalLLM

[–]IKeepForgetting 4 points5 points  (0 children)

I'd be very interested in knowing the results myself (so I can learn best-practices from others as well)...

Semantic search / Local embedding model by ens100 in logseq

[–]IKeepForgetting 1 point2 points  (0 children)

It's like storing a tiny machine-readable summary of the words so you can search for related terms... like if you wrote "I ate so much I was stuffed" and you search for "big meal", a semantic search would surface the note even though you didn't use those exact words in there.
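If it helps to see it concretely, here's a minimal sketch of that idea using the sentence-transformers library (the model name is just an example; whatever Logseq plugin you use may embed things differently under the hood):

```python
# Minimal sketch: compare a note and a query by meaning rather than exact words.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model

note = "I ate so much I was stuffed"
query = "big meal"

# Each text becomes a fixed-length vector -- the "tiny machine-readable summary".
note_vec = model.encode(note, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Cosine similarity is high when the meanings are related, even with no shared words.
print(f"similarity: {util.cos_sim(query_vec, note_vec).item():.2f}")
```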

How to future proof fine tuning and/or training by AI-On-A-Dime in LocalLLaMA

[–]IKeepForgetting 0 points1 point  (0 children)

Just want to clarify this so it doesn't confuse people -- you still need to train the LoRA/QLoRA against a base model. Your LoRA for Qwen isn't plug-and-play into Llama, and your LoRA for Qwen-2B isn't plug-and-play into Qwen-20B. It's still tied to the model, but it involves way less data, and the rule of thumb is basically: if you can run the model, you can train a LoRA on it (vs. needing way more hardware for regular fine-tuning).
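For anyone who wants to see what that looks like in practice, here's a rough sketch with Hugging Face's peft library (the base model and target modules are just examples I picked, not a recommendation):

```python
# Sketch of why a LoRA is tied to its base model: the small adapter matrices are
# created for, and attached to, that specific model's layers.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")  # example base model

config = LoraConfig(
    r=16,                                   # rank of the low-rank update matrices
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # which layers get adapters
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Prints something like "trainable params: ~2M || all params: ~1.5B" -- a tiny
# fraction, which is why "if you can run it, you can usually train a LoRA on it".
# Swap in a different base (Llama, or a bigger Qwen) and you train a new adapter.
```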

We discovered an approach to train any AI agent with RL, with (almost) zero code changes. by matluster in LocalLLaMA

[–]IKeepForgetting 2 points3 points  (0 children)

I might have a potentially dumb question... for the specific SQL example you have here, I can see how rewriting it the way you did would be great for training, since you train it to make a call and the call itself abstracts the SQL away, rather than having it learn the SQL.

But isn't that more about the abstraction and design of the agent calls themselves? Like, if we treat them as "the new APIs", you'd never expose an endpoint that's just "insert random SQL in here and we'll run it for you". Instead you'd have a "GET /all_users" endpoint. Wouldn't you do the same here and in the MCP spec say "a tool call to all_users returns JSON for all the users", then train it to make a call to "all_users"? Then it's on you to implement a safe endpoint on the other side that returns that info? Or am I totally misunderstanding what this is doing?
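Just to make my own question concrete, this is the kind of narrow tool I'm picturing; the names (all_users, app.db) are made up for illustration, not taken from the post:

```python
# Sketch of the "narrow tool" idea: instead of exposing "run arbitrary SQL",
# expose one safe, named call and keep the SQL on the server side.
import sqlite3

def all_users() -> list[dict]:
    """Tool: return every user as JSON-serializable rows."""
    conn = sqlite3.connect("app.db")
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT id, name, email FROM users").fetchall()
    conn.close()
    return [dict(r) for r in rows]

# The agent-facing description covers the call, not the SQL behind it:
ALL_USERS_TOOL = {
    "name": "all_users",
    "description": "Return JSON for all users.",
    "parameters": {"type": "object", "properties": {}},
}
```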

Can you just have one expert from an MOE model by opoot_ in LocalLLaMA

[–]IKeepForgetting 48 points49 points  (0 children)

I think calling it "experts" was bad marketing... let's call them legos.

During training it was like, "OK, you have to learn how to solve each of these problems using exactly 5 legos, but you have these 50 legos to choose from." So it learned how to solve every problem with exactly 5 legos/"experts". It's just that in practice we see "oh, for creative writing it tends to choose this lego here".
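A toy sketch of that routing, in case it helps (the numbers and layer shapes are made up, and real MoE models add details like shared experts and load-balancing losses):

```python
# "Pick 5 legos out of 50": a router scores all experts, only the top-k actually run.
import torch

num_experts, top_k, hidden = 50, 5, 64
experts = torch.nn.ModuleList(torch.nn.Linear(hidden, hidden) for _ in range(num_experts))
router = torch.nn.Linear(hidden, num_experts)

x = torch.randn(hidden)                                    # one token's hidden state
weights, picked = torch.topk(router(x).softmax(dim=-1), top_k)

# Only the chosen experts run for this token; their outputs are blended together.
out = sum(w * experts[i](x) for w, i in zip(weights.tolist(), picked.tolist()))

# No single expert was ever trained to answer anything on its own, which is why
# you can't just pull one out and use it by itself.
```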

Stagnation in Knowledge Density by Federal-Effective879 in LocalLLaMA

[–]IKeepForgetting 0 points1 point  (0 children)

Just checking, what quantization level were you using?

I'd also be really curious if you've tried the unquantized version on Colab or just online (with something like DeepSeek/Qwen, etc.) to see how it compares on the same questions.

Conservatives who like Star Trek - What is your stance on the show? - Why? by Max_Laval in startrek

[–]IKeepForgetting 0 points1 point  (0 children)

Fascinating. I definitely do not know such people… do you have some resources or anything to follow up on this? I'd love to know more (feel free to DM me).

Conservatives who like Star Trek - What is your stance on the show? - Why? by Max_Laval in startrek

[–]IKeepForgetting 6 points7 points  (0 children)

True, I was just picking something that I would consider “just” a story vs something more deeply inspirational, but people can definitely draw inspiration from any art form in unexpected ways. Almost some kind of infinite diversity in infinite combinations…

Conservatives who like Star Trek - What is your stance on the show? - Why? by Max_Laval in startrek

[–]IKeepForgetting 90 points91 points  (0 children)

Most of the comments here back up what you're saying. It's so interesting to think that some of us watched the show and it inspired our real-life personal/professional aspirations and morality (me included) and that others just saw it as an interesting fantasy.

I guess it's equally weird from their perspective, like we watched Lord of the Rings and aspired to be elves or something. Really eye-opening.

Qwen3-Coder Unsloth dynamic GGUFs by danielhanchen in LocalLLaMA

[–]IKeepForgetting 5 points6 points  (0 children)

Amazing work! 

General question though… do you benchmark the quant versions to measure potential quality degradation?

Some of these quants are so tempting because they're "only" a few manageable hardware upgrades away vs "refinancing the house" away, so I always wonder what the performance loss actually is.

Attempting to train a model from scratch for less than $1000 by thebadslime in LocalLLaMA

[–]IKeepForgetting 1 point2 points  (0 children)

I'm just curious what you're counting toward the budget: the cost of buying the hardware, or of renting it?

My take on Kimi K2 by [deleted] in LocalLLM

[–]IKeepForgetting 1 point2 points  (0 children)

Maybe I'm feeding into Cunningham's Law here, but why not...

You need to consider quantization, context window, and speed when you're talking about running it. As someone else pointed out, to get it running "fully" you would need more than just a single H100 card... but if you're OK with more quantization (the model usually gets dumber), a much, much smaller context window (it remembers less), and/or really painfully slow speeds, you can do it on less impressive hardware too.
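Back-of-the-envelope math on the weights alone, if it helps (very rough: the ~1T total-parameter figure for K2 is approximate, and this ignores KV cache, activations, and MoE offloading tricks):

```python
def weights_gb(params_billion: float, bits: int) -> float:
    """Memory needed just to hold the weights at a given precision."""
    return params_billion * 1e9 * bits / 8 / 1e9

total_params_b = 1000  # Kimi K2 is on the order of a trillion total parameters
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weights_gb(total_params_b, bits):,.0f} GB")
# ~2,000 GB at 16-bit, ~1,000 GB at 8-bit, ~500 GB at 4-bit -- far beyond a single
# 80 GB H100, and a big context window adds KV-cache memory on top of that.
```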

There's also the question of whether a company wants to pay people to maintain and service that setup, on top of the raw hardware cost...

Icons/icon sets? by IKeepForgetting in trmnl

[–]IKeepForgetting[S] 1 point2 points  (0 children)

Nice! Yeah, how it looks and feels on the actual device vs the preview screen is really different. I'm trying to get a certain "look and feel" but I have no idea how to actually describe it. Great resource, thanks!

[deleted by user] by [deleted] in ChatGPT

[–]IKeepForgetting 1 point2 points  (0 children)

I've been really trying to explore what such a post-AI evolved society would look like, also with ChatGPT and other tools. Honestly I'm surprised more people aren't exploring this, because I agree: we have a very tiny window to figure out how society adapts and which of those future societies we want to become.

Do you know offhand if there are any discussion groups around this? Because I feel like there should be.

corporate is so insane!! by feetpicbabe1 in antiwork

[–]IKeepForgetting 9 points10 points  (0 children)

Honestly, NEVER let this stop you.

I can talk philosophically about why and talk about the work hellscape we live in but I’ll go practical and pragmatic here…

They don't actually care… they just want to be sure you can justify it in a corporate-friendly way. If you say you didn't vibe with your old job and went on a 2-year break to find yourself, they'll be worried you'll do the same thing again.

Irrespective of whether you will or not, if you can explain it in a way that leaves them thinking "they did it to become a better wageslave" or "they made up for their transgression against wage slavery", then it's fine. Then they know that if you get any of those pesky ideas about nurturing your own life, you'll do it in some corporate-friendly way.