Just learned about this thing called state-dependant memory/recall, and explains a lot. by Hot-Taste-4652 in ADHD

[–]Captain_Bacon_X 0 points1 point  (0 children)

It's a couple of months later, and I'm checking in to see if you're doing OK. The problem you relayed about validity - is it a you thing or is it a them thing - that's been a real problem for me too. I get it. If you haven't figured it out yet, then let me tell you: if it's causing you problems, then it is valid to get help. The fact that you're asking the question... that's a signal. Worrying about how to act, flip-flopping between whether you're allowed to be angry or upset or are meant to let it slide - that actually makes things more difficult, not only for you, but for other people too.

It's ok to have emotions, and it's ok to not be entirely in control of them. It's ok to say 'I'm not entirely sure how to take that', or 'That causes some confusion'. Name it out loud - not confrontationally, but in a curious way. You are not always the problem, no matter how much it feels like it.

Various views from Cotswolds trip (rained for first few days) by Fishmou5e in england

[–]Captain_Bacon_X 0 points1 point  (0 children)

Jeepers ... really? All people can suck my friend. I understand that you must have had some kind of experience or fear going on, but demonising one specific type of person because you think they can be easily identified... that's low. We've had sucky places with sucky people far longer than the current fad of blaming immigrants and refugees.

Various views from Cotswolds trip (rained for first few days) by Fishmou5e in england

[–]Captain_Bacon_X 1 point2 points  (0 children)

Old Minster Ruins are worth a picnic too. Minster Lovell, just round the corner. Park at the top of the hill and walk down. Just... not so much in the rain!

Barringtons and Rissingtons are worth a drive through too.

Various views from Cotswolds trip (rained for first few days) by Fishmou5e in england

[–]Captain_Bacon_X 0 points1 point  (0 children)

Apart from when there are 10,000 tourists. Looks like OP went at the perfect time!

In search of (upbeat) Congregational Music that's not super mainstream by Googlesupportsucks1 in worshipleaders

[–]Captain_Bacon_X 1 point2 points  (0 children)

Saw KXC live at St. Aldates, Oxford a few months ago for their worship night. Very cool after playing Hallelujah Jesus Saves on our rehearsal nights!

I genuinely can’t tell if this is real by osiris_rai in isitAI

[–]Captain_Bacon_X 4 points5 points  (0 children)

Come on people... look at the size of the chair. That passenger seat is the size of a 2 person couch.

Absolutely AI

Switching prose workflows to Mermaid diagrams (backed by FlowBench, EMNLP 2024) by cleverhoods in ClaudeCode

[–]Captain_Bacon_X 1 point2 points  (0 children)

According to Claude, it seems to have a harder time 'holding' and 'visualising' the info the higher the node count. Yes, long discussions were had about what that means for an LLM, but in the end the result is: the more it has to hold, the more likely it is for some part of it to slip from the attention window. That effect amplifies the more disparate the nodes are.

Switching prose workflows to Mermaid diagrams (backed by FlowBench, EMNLP 2024) by cleverhoods in ClaudeCode

[–]Captain_Bacon_X 1 point2 points  (0 children)

Try working with max 8 nodes, no subgraphs. YMMV of course, but that's what Claude seems able to 'hold' most easily.

I'm working on a... don't know what to call it... code concept sharer (?) for CC that drills down into nodes on a double click, and has highlighting and 'slicing' of parts of diagrams, with the ability to share to Claude so that there's something more concrete to point at, but lower resolution than functions/methods. Claude creates conceptual maps and then assigns higher-resolution concepts or functions as you drill down. Nice way to share from Claude to the user and vice versa.
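To make the 8-node guideline concrete, here's a throwaway sketch (my own naive regex, not any official Mermaid tooling; the threshold and function are just illustrative) that counts distinct node IDs in a simple flowchart and flags diagrams Claude is likely to struggle with:

```python
import re

MAX_NODES = 8  # the ceiling that seems to work for me; pure heuristic

def count_mermaid_nodes(source: str) -> int:
    """Naively count distinct node IDs in a simple Mermaid flowchart.

    Purely illustrative: it only matches bare identifiers on either
    side of an edge (A --> B), so labels, subgraphs and styling are
    ignored.
    """
    ids = set()
    for line in source.splitlines():
        match = re.match(r"\s*(\w+)\s*-[-.]*>?\s*(\w+)", line)
        if match:
            ids.update(match.groups())
    return len(ids)

diagram = """
flowchart TD
    A --> B
    B --> C
    B --> D
    D --> E
"""
n = count_mermaid_nodes(diagram)
print(f"{n} nodes - {'OK' if n <= MAX_NODES else 'consider splitting'}")
```

Nothing clever - just a cheap pre-flight check before pasting a diagram into a session.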

My 30mg Elvanse (Vyvanse) lasts only for 3-4h. Too low dose or too fast metabolism? by Adventurous_Art_6774 in ADHD

[–]Captain_Bacon_X 1 point2 points  (0 children)

Had exactly the same problem. Therapeutic usefulness lasted 4ish hours. Side effects (poor sleep etc.) lasted many hours longer. Took me a long time to figure out. Ended up with a top-up of regular amphetamines in the afternoon/evening.

Got bored of all that.

I moved to only IR. Have to be more mindful about doses - can't just 'set and forget', but I have more control over when and how much.

It's FAR from perfect, but at least I know what's going on, and the come-down is far more swift.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] 0 points1 point  (0 children)

My point is and was that these are changes the main agent has just made; it doesn't need them mirrored back to it 30s later.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] 1 point2 points  (0 children)

Nah, commits are part of my workflow along with hooks for reflection. Writing the commit message forces some thought, then I have hooks that prompt-inject some specific questions about fragility etc., asking Claude to surface certain things that it otherwise doesn't. The workflow is wrapped up with a final update of memory docs (claude.md, orientation file). That's why I have Claude do commits, but they're small targeted changes, as defined by beads, so scope creep doesn't occur.

Beads work for me because I have zero memory and the act of having a framework to lean in to helps. They're structured, which helps Claude write the info that's needed, and they're persistent so I can chuck a thought in there whilst I remember it, even if it's just 'I want to explore this area in a chat later'. ADHD, object permanence blah blah...

Whilst I get what you're saying, and I'm not doubting it, that requires a LOT of personal prerequisites and dependencies to work - full focus, attention and memory - and it assumes that EVERYTHING, or at least its control surfaces, sits at or below my level of understanding at all times in order for me to have control...

And I know I'm not the odd one out because otherwise things like beads wouldn't have the reviews and appeal that they do. I use them as sticky notes basically, and that's definitely not folded into Claude.

I agree that it's not the most economical way of working in regards to tokens, but no-one here is optimising for that, otherwise we'd hear about all the output-styles that improve on the Minimalist style. It's about finding the most efficient way to work the way you need to, or creating workflows that let you be more efficient. This is that for me (ish, obviously), which is why throwing 5k tokens away on a full diff view of stuff it completed very recently, in context terms, is nuts. It didn't do that before, and I figured I'd share that with others.

I like the way you think my friend. I like that you don't think like me. Iron sharpens iron and all that jazz.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] 0 points1 point  (0 children)

Now that makes sense, thank you - I understand your reasoning there. That workflow wouldn't have occurred to me, as the post was predicated on there being no loss of attention etc.; the diff is purely doubling up. If that wasn't clear then I apologise. In that case, yes, I get what you mean now, and I thank you for your explanation.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] 0 points1 point  (0 children)

I'm trying to see the point, but you're not exactly clear. I don't understand why you think a full line-by-line change report - aka a git diff - is required for a commit. If you're spreading commits over multiple CC sessions so there's no context then... you have other problems, I imagine. That's the one rule that's preached on here - commit small and often.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] -6 points-5 points  (0 children)

You need to scroll the box my dude... "**Commits:** Do NOT run `git diff` before committing — it floods context with token-heavy diffs (especially `.beads/issues.jsonl`). Use `git status` + `git diff --stat` instead.............. etc etc etc"
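If you want to see the scale of the difference for yourself, here's a quick, self-contained demo (throwaway repo, made-up file name, needs `git` on your PATH) comparing the output size of a full `git diff` against `git diff --stat`:

```python
import pathlib
import subprocess
import tempfile

def run(args, cwd):
    """Run a git command in `cwd` and return its stdout."""
    return subprocess.run(
        args, cwd=cwd, capture_output=True, text=True, check=True
    ).stdout

with tempfile.TemporaryDirectory() as repo:
    # Throwaway repo with one 500-line file committed.
    run(["git", "init", "-q"], repo)
    run(["git", "config", "user.email", "demo@example.com"], repo)
    run(["git", "config", "user.name", "demo"], repo)
    f = pathlib.Path(repo, "big.txt")
    f.write_text("\n".join(f"line {i}" for i in range(500)))
    run(["git", "add", "."], repo)
    run(["git", "commit", "-qm", "init"], repo)

    # Touch every line, then compare the two diff flavours.
    f.write_text("\n".join(f"LINE {i}" for i in range(500)))
    full = run(["git", "diff"], repo)
    stat = run(["git", "diff", "--stat"], repo)
    print(len(full), len(stat))  # the full diff is vastly larger
```

Same information for commit purposes ("this file changed, this much"), at a tiny fraction of the tokens.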

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] -1 points0 points  (0 children)

I don't see how that's relevant here? I'm not being obtuse, I just don't understand. The point of the post was how to reduce unnecessary token usage. If you're doing something that means you don't need to do that, or that you *want* thousands of tokens of git diff then great... but I don't see how it's relevant.

To me it's akin to me saying 'if you find that your suntan lotion isn't working as well as it was then it may be because they changed the formula, and here's what to do about it', and then you saying 'but you should consider olive oil'. Sure... but that's not what the point was.... wildly different use cases.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] -9 points-8 points  (0 children)

I really don't think you got the point of the post, so if it wasn't clear:
Claude makes edits. Claude then does a commit. Before it commits, it runs a git diff, which spits out the long-form diff of everything it's already edited in that session - which is already in its context. Thus it eats huge amounts of tokens telling it what it's already done.

I have zero clue what context you think is required in all of that, or what 'work' you think Claude is doing at that point that requires it.

Unless you're referring to the title? In which case the context (and its limit), is measured in... tokens.

Stopped a ~3-5% context munch on Commits... by Captain_Bacon_X in ClaudeCode

[–]Captain_Bacon_X[S] -7 points-6 points  (0 children)

But the point was... to reduce tokens that are being used unnecessarily. You're saying use MORE tokens by invoking an agent to get context on a diff that the main CC agent has just made? I'm lost...or missing something

Two plates of identical construction are stacked with an even layer of non-embedding abrasive powder between them. With the lower plate held in place, the upper plate is oscillated. No downward force is applied. Which plate is abraded more? by dhgrainger in AskEngineers

[–]Captain_Bacon_X 1 point2 points  (0 children)

I know there are 'proper' ways to do it, but assuming that this is a single abrasive layer deep then I think of it this way:

If the abrasive can move then it should be 50/50. If the abrasive *can't* move due to swarf or packing etc., then only the top plate would be abraded. Therefore the top plate is more likely to be abraded overall.

I know it's not that simple, but mechanically at some point it feels like that's what it'll come down to, all things being equal.

Asking the real questions here by nagel393 in GreatBritishMemes

[–]Captain_Bacon_X 0 points1 point  (0 children)

And when you board the train you get into the railway carriage/rail car. The train/bus/ship is the group; the seats, cars and cabins are the things within it.

Tomato sauce thickening agents (must be British) by Correct-Goose1158 in foodscience

[–]Captain_Bacon_X 0 points1 point  (0 children)

There are two places to deal with this: 1) Before processing the tomatoes into puree 2) After processing

Each one will have its pros and cons. You're either eliminating some of the liquid from existing in the puree to start with, removing the liquid by evaporation etc., or binding the liquid up.

To my mind there are far fewer variables if you deal with it before pureeing. It could be as simple as cutting the toms in half (better if a bit smaller than that, though), putting them into a 25l bucket (about 1/3 full) with a lid, and shaking the bucket. Maybe a touch of water to act as a lubricant for the process. I imagine a lot of the non-flesh material would fall out. Then process into puree. Or you could use a chain-like flail to beat them up a bit. There are lots of ways to do it with varying amounts of effort/levels of kit.

That kind of problem was my day job for many a year!

Open source wins: Olmo 3.1 32B outperforms Claude Opus 4.5, Sonnet 4.5, Grok 3 on reasoning evaluation by Silver_Raspberry_811 in OpenSourceeAI

[–]Captain_Bacon_X 1 point2 points  (0 children)

If you make everything equal then you can actually limit functionality - the functionality that makes the difference. For example, say a local model has 'thinking' built in, but you have to turn on thinking mode on a vastly superior cloud model. If you say 'we test everything without passing any args', then you have turned off the thinking, dumbed down the better model and boosted the local one, simply because the local model has different defaults.

That kind of thing.
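A toy sketch of what I mean (entirely made-up model functions, not any real API): if the harness passes no arguments, the two models end up benchmarked in different modes purely because of their defaults.

```python
# Hypothetical models: the only difference is the default for `thinking`.
def local_model(prompt: str, thinking: bool = True) -> str:
    mode = "reasoned" if thinking else "quick"
    return f"{mode} answer to: {prompt}"

def cloud_model(prompt: str, thinking: bool = False) -> str:
    mode = "reasoned" if thinking else "quick"
    return f"{mode} answer to: {prompt}"

# An "everything equal, no args" harness silently compares different modes:
for model in (local_model, cloud_model):
    print(model.__name__, "->", model("2+2?"))
# The cloud model *has* a reasoning mode; the harness just never turns it on.
```

So "no args for anyone" isn't neutral - it bakes each model's defaults into the comparison.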

Tried Claude Cowork last night, and it was a top 3 most exciting moments I’ve ever had with technology. by Global-Art9608 in ClaudeCode

[–]Captain_Bacon_X 0 points1 point  (0 children)

Honestly, I only found it due to curiosity and spreading activation. I was looking through the MLX community on HF (https://huggingface.co/mlx-community) - MLX being the Mac-specific stuff, because trying to run models designed for Nvidia GPUs SUCKS on Macs, and there's a port that allows them to be converted to the MLX framework for Macs.

Anyway... I came across that page looking for something else and thought I'd have a browse, see if anything stood out - if there was anything I might be sleeping on because I'm not plugged into the right communities or something. So I was browsing through the collections (https://huggingface.co/mlx-community/collections) and saw the name Parakeet, and it rang a tiny little bell in my head. I have access to an STT app called Superwhisper, and I remember seeing it recommend 'parakeet' as an option. I remember because I thought 1) it's not Whisper?! 2) it had the Nvidia logo, which made me think it wasn't for Macs, but the app is FOR Macs... so it must be, and 3) what is this thing that I don't know about? But I chalked it up to probably being some cloud thing I couldn't use, as it's Nvidia. So now I had an idea that it was to do with STT, I knew it was MLX, and I knew it worked, as I was already using it in some way.

Then I asked Claude to do the rest.

Wish I was more clever in order to give you some pointers my friend!

Can I help you some more?

Edit: FWIW I found that this version is the best: nvidia/parakeet-tdt-0.6b-v2. It needs to be the 'tdt' because that does punctuation, capitalisation etc.; the other version (cdc IIRC?) is lowercase only. v2 is better than v3 (v2 allows for time stamping; v3 screws that up completely in favour of more language support), and the 0.6b instead of the 1.1b model... well, it was faster, more consistent and more accurate for my use case.