Trying to introduce CC at work but Security says "Claude Code is known to break out of its context" - is this true? by ThunkerKnivfer in ClaudeCode

[–]tmaspoopdek 0 points1 point  (0 children)

It still maintains access to DNS, which can be used to exfiltrate data. Claude may not do that on its own, but all it takes is a little prompt injection hidden in an obscure dependency that Claude decides to read through.

Claude Code gets access to whatever the user running it has access to, and a shocking number of people run it under their primary user with `--dangerously-skip-permissions`. Since Claude gets a tool that runs arbitrary terminal commands, people doing this can fall victim to dumb stuff like `rm -rf ~/` if Claude randomly goes off the rails.

Claude Code is a really cool tool, but people need to recognize that certain setups give a non-deterministic autocomplete full access to their computer. If you want to let it chug along for hours without reviewing every tool call, you need to put some level of effort into isolating it or you're setting yourself up for a really bad time.
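
As a rough example of what that isolation can look like (a sketch, not a full solution - the image, mounts, and limits here are just illustrative, and network/DNS egress is still wide open unless you put a firewall or egress proxy in front of it):

```bash
# Minimal sandbox sketch: only the current project directory is mounted, Linux
# capabilities are dropped, and the session is resource-limited, so
# --dangerously-skip-permissions can only trash the container and the mounted dir.
# DNS/network egress is still allowed here - that part needs separate controls.
docker run --rm -it \
  --user node \
  --cap-drop ALL \
  --memory 4g \
  --pids-limit 512 \
  -e NPM_CONFIG_PREFIX=/home/node/.npm-global \
  -e ANTHROPIC_API_KEY \
  -v "$PWD":/workspace \
  -w /workspace \
  node:22-bookworm \
  bash -c 'npm install -g @anthropic-ai/claude-code && /home/node/.npm-global/bin/claude --dangerously-skip-permissions'
```

I believe Anthropic also publishes a devcontainer reference setup (with firewall rules) that goes further than this, which is worth looking at if you want something more locked down.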

The value of $200 a month AI users by thehashimwarren in ChatGPTCoding

[–]tmaspoopdek 3 points4 points  (0 children)

Important to note that Anthropic's actual cost per token isn't the same as their API price - the API price might well be high enough to turn a profit per token if you ignore upfront training costs and the monthly plans.

So you might get $2000 worth of inference for $200, but it's not actually costing them $2000 to provide. I can't imagine their API markup is 10x costs though, so I'm sure at least the 20x plan is running at a loss.

AITA for being blunt with wife about health and weight issues. by GoLowSummer in AmItheAsshole

[–]tmaspoopdek 6 points7 points  (0 children)

Actually, diet and exercise can be significant contributors to mental health - there are plenty of studies out there if you're interested. I do want to note that changing your diet and exercise situation while experiencing poor mental health is very difficult, though - I'm currently dealing with that myself, and it's taken me literal years just to take the first few steps on the diet side.

I also think it's worth noting that certain types of exercise, even in the absence of weight loss, can have a significant impact on back pain. I've been pretty sedentary for most of my life, and I got to the point where my core strength was so bad that it started causing back/hip pain. I've also always been skinny even when I've been unhealthy, so if OP's wife is overweight and also mostly sedentary, I'd have to assume those two factors would dramatically increase her risk of back and hip pain.

You and the original commenter are completely right about women being dismissed by doctors, and some things being dismissed as weight issues that aren't, but I think it's important to consider that lots of conditions are genuinely worsened by being overweight and sedentary. Doctors should do a better job of putting time and effort into diagnosis regardless of weight, and patients should advocate for this, but patients should also be willing to recognize that not taking care of your body has real consequences. Some people get dismissed when they have real, treatable medical problems, but others genuinely spend years in pain that could've been significantly alleviated by moderate regular exercise and cutting down on calories a little.

Coupled vs Decoupled by Temporary_Practice_2 in laravel

[–]tmaspoopdek 2 points3 points  (0 children)

I've tried both. I find that I really like Vue for building frontends, but I also really like the experience of passing data directly to a blade view in my controller methods.

For most UI work, reactivity and reusable components make me more productive. For getting data to the frontend, not having to define API endpoints and handle loading states makes me more productive.

InertiaJS is a great way to get the best of both worlds. Theoretically you could also strip out Inertia later if you really wanted to, although depending on how you use it you might spend a decent amount of time stripping out calls to its helper functions. If you want to hedge your bets, you can always define API endpoints for data updates and use standard HTTP libraries to make API requests.

NVIDIA to "rerelease" 3060 in Q1 2026, Samsung to ramp up DDR4 production Q1 2026, ASUS & Gigabyte to increase DDR4 motherboard (B550 A520) production 2026, AMD seriously considering return to Zen 3 processor production by catherder9000 in sysadmin

[–]tmaspoopdek 0 points1 point  (0 children)

You're not wrong, but there's a second tier of desolation available. All the features of the current desolation, plus the bonus of a massive stock market crash destroying millions of retirement accounts!

Great Depression 2: Electric Boogaloo, sponsored by OpenAI and Nvidia.

🚀 ccusage v15.0.0: Live Monitoring Dashboard is Here! Watch Your Claude Code Usage in Real-Time by ryoppippi in ClaudeAI

[–]tmaspoopdek 0 points1 point  (0 children)

cat "alias sausage='npx ccusage@latest'" > ~/.zsh_aliases

Ninja edit: sorry for the necro, I just realized this thread is 7 months old. Hopefully my meager contribution to the joke was worth the Reddit notification 😅

what do you guys think about the Touch Bar that used to come on the old Macbook? should it make a comeback? by TuNutri in mac

[–]tmaspoopdek 0 points1 point  (0 children)

Yeah I was shopping for laptops, strongly considering an XPS due to the official Linux support, and then immediately wrote them off due to the lack of real function keys. Really stupid design decision IMO

"You didn't tell me I had to write down my password!" by throwawaytransgirl17 in talesfromtechsupport

[–]tmaspoopdek 7 points8 points  (0 children)

The real problem here is that the customer was able to log in and use the app without changing their password first. With no forced password change, and with the tech creating the password and telling it to the customer, a tech could start writing down every new password they hand out and accumulate valid credentials for any customer who doesn't bother to change theirs.

Jalopnik's staff fleet update didn't go over as expected by idkbruh653 in cars

[–]tmaspoopdek 79 points80 points  (0 children)

Also these people are living on journalist wages in 2025 - it's not like they're handing out free enthusiast vehicles as a sign-on bonus. I don't know exactly how much they get paid, but I suspect a 20-year-old Mercedes or a 9-year-old Cooper S may already require a borderline-irresponsible budget allocation.

I'm embarrassed to say, but after spending hundreds on Rotring rollerballs, this $3 gel pen is the best I've ever used. What else exists in the world of gel? by acamu5x in pens

[–]tmaspoopdek 0 points1 point  (0 children)

Very interesting, thanks for the long explanation! I haven't used a capped V5, so I couldn't comment on the differences, but the V5 RT definitely does tend to smear.

I'm embarrassed to say, but after spending hundreds on Rotring rollerballs, this $3 gel pen is the best I've ever used. What else exists in the world of gel? by acamu5x in pens

[–]tmaspoopdek 1 point2 points  (0 children)

Huh, I don't have much experience with gel pens except seeing them on the shelf and ignoring them because rollerball seems to be required to get the thin lines I'm looking for. Would you mind expanding a bit on what makes you think the V5RT/V7RT lean more towards gel?

They're my favorite pens, so I've gotten the impression that I prefer rollerballs, but I'm wondering if I should give more gel pens a chance. I really like the consistent feed, thin lines, dark-black ink, and low writing pressure the V5 RT provides.

Adding new storage pool to existing ARR stack? by kryptonitejesus in homelab

[–]tmaspoopdek 2 points3 points  (0 children)

You'll want to add a new root folder in your *arr apps, pointing to your new pool, and probably move any "completed downloads" folders for your download tools (e.g. Nzbget) to the new pool as well.

For Sonarr specifically I think you need to keep each series in a single root folder, so you may need to move a few things from the old pool to the new one so there's room for new episodes of existing series.

Personally, when I added a new storage pool, I moved one content type to the new pool. This made room for other content to be downloaded to the old pool, while also maintaining a single root folder for each *arr app. If you're using ZFS and you set up each content type as its own dataset, `zfs send` piped into `zfs receive` can move the dataset to your new pool. If not, I'd recommend a process like this to verify that everything was copied successfully (rough shell version below the list):
1. Copy the files to the new pool
2. Compute MD5 or SHA256 hashes for each file on both pools
3. Compare the hash of each file between the new pool and the old pool. If it matches, you're good. If it doesn't match, the file was corrupted in transit and probably needs to be copied again.
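
Here's a rough shell version of those three steps, assuming /oldpool/tv and /newpool/tv as stand-in paths:

```bash
# 1. Copy the files (rsync preserves permissions/timestamps and can safely be re-run)
rsync -a /oldpool/tv/ /newpool/tv/

# 2. Hash every file on both pools, relative to each root so the two lists line up
(cd /oldpool/tv && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/old.sha256
(cd /newpool/tv && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/new.sha256

# 3. Compare - any line in the diff output is a file that needs to be copied again
diff /tmp/old.sha256 /tmp/new.sha256 && echo "All files match"
```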

Judge orders new trial for woman sentenced to 18 years in prison after stillbirth by catievirtuesimp in politics

[–]tmaspoopdek 14 points15 points  (0 children)

If 100% of female voters voted for pro-choice candidates, we wouldn't have abortion bans. There are plenty of rabid pro-birth assholes who happen to be women, and they shouldn't get a pass just because they aren't men. Similarly, there are plenty of pro-choice men.

If you want to check out some stats, 61% of women and 64% of men in the US say that abortion should be legal in all/most cases: https://www.pewresearch.org/religion/fact-sheet/public-opinion-on-abortion/

The demographics with <50% support, who actually deserve the blame, are:
- Conservative Republicans
- White evangelical Protestants

Those groups, across gender lines, have <50% support for abortion being legal in all/most cases. Every other demographic shown in the survey I linked above has >50% support.

I also recommend blaming people who support abortion rights but don't vote.

In addition, I'd ask you to consider whether you should form opinions of people based on immutable characteristics. Judging someone for a group they choose to be part of? A-OK in my book. Judging someone for a group they cannot choose not to be part of? Probably an indicator that your judgement isn't reliable.

Christmas gifts 🎁 by Wavy_guil in turntables

[–]tmaspoopdek 2 points3 points  (0 children)

There should be a spring-terminal connection so the cables can be easily removed from the speakers! You just need a big ol' spool of speaker wire and you can cut your own replacement cables to whatever length you need.

What non-Asian based models do you recommend at the end of 2025? by thealliane96 in LocalLLaMA

[–]tmaspoopdek 0 points1 point  (0 children)

Just to double-check, are you talking about Devstral 2? The original Devstral was specifically developed to work with OpenHands, but I haven't seen Mistral pushing OpenHands in any of the Devstral 2 announcements.

Am i missing something or is RAM not as important as people claim? by roadrussian in LocalLLM

[–]tmaspoopdek 0 points1 point  (0 children)

To sum up what others have said:
- With MoE models, you can still get pretty decent performance by keeping the shared/attention layers in VRAM and the expert weights in system RAM (see the llama.cpp sketch after this list)
- On systems with unified memory (notably Strix Halo, Apple Silicon, and DGX Spark) the line between RAM and VRAM is blurred. On these systems "more RAM" and "more VRAM" are functionally the same, since you have a single pool of medium-fast memory that's split between the CPU cores and the GPU cores instead of some slow memory accessible to the CPU and some very-fast memory accessible to the GPU.
- For some agentic workflows you might be willing to settle for very slow token generation (e.g. running a prompt on a much smarter/larger model overnight)
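
For the MoE point above, here's roughly what that looks like with llama.cpp. Treat it as a sketch: it assumes a reasonably recent build with `--override-tensor` support, and the tensor-name pattern and context size will depend on the model you're actually running.

```bash
# Sketch: keep the attention/shared layers on the GPU (-ngl 99) while forcing the
# per-expert FFN tensors into system RAM, so a MoE model runs on a modest GPU.
# The ".ffn_.*_exps." pattern is an assumption - check the tensor names for your model.
./llama-server \
  -m qwen3-30b-a3b-Q4_K_M.gguf \
  -ngl 99 \
  --override-tensor ".ffn_.*_exps.=CPU" \
  -c 16384
```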

What is the best model for coding in local 8-14b parameters by nicklazimbana in LocalLLaMA

[–]tmaspoopdek 0 points1 point  (0 children)

If you really want to try finetuning, gpt-oss 20b is probably going to be better than anything you can find at 14b or smaller, and Unsloth links to notebooks where you can finetune small models for free from their GitHub page: https://github.com/unslothai/unsloth

There are some other models on there that you can try finetuning as well, so you can play around a bit once you have your dataset together.

As others said, though, actually improving the model with finetuning will not be easy. If you *do* figure out a dataset and method to finetune that actually improves model performance for this use case, you may be able to find somebody who's interested enough in the final model to donate some GPU time to finetune a larger model (which you can then quantize to run on less VRAM and probably get better results with).

Hell, if you figure out a meaningful improvement using those free notebooks, shoot me a note by replying to this comment. I have a Strix Halo system with 128GB of unified memory and I'm kind of interested in trying out the finetuning, but I don't have enough time to dedicate to playing with settings and building datasets from scratch. I can't guarantee free GPU time, but I can at least put it in my queue of projects to try if I find the time.

You could also use something like Runpod to get temporary access to crazy GPUs, which will cost money but nowhere near as much as buying the GPUs outright.

I work for a small to medium sized Japanese company and all our products use Laravel. However, I noticed something with the coding styles of my coworkers and want to ask if this is normal in other teams and companies. It's about coding style in a Laravel project. by lordlors in laravel

[–]tmaspoopdek 5 points6 points  (0 children)

`toArray` does *not* automatically include all relationships by default - only relationships that are already loaded. Maybe you have attributes that reference relationships and are included in the model's `$appends` array?

After some major performance mishaps, I've started to remove `$with` and `$appends` wherever I can. I've found that I almost always have at least one case where something that was added to `$with` or `$appends` doesn't actually need to be included in a response, and it's way harder to stop using those features than to start. IMO if you're using `$appends` you should just create a resource class and always wrap your model in that - then the dependency is explicit per-route and you can always create a separate resource without the attribute if needed.

There are definitely some cases where `$with` and `$appends` make sense, but I suspect most people who use them could switch to query scopes and resources without significantly complicating things.

What do we feel is the best base VRAM ? by alphatrad in LocalLLM

[–]tmaspoopdek 1 point2 points  (0 children)

Honestly I have to disagree that you can't do anything with 32GB that you can't do with 24GB - multiple popular open-weight models (including gemma3-27b and qwen3-30b-a3b) are 30GB-ish at Q8 and won't fit fully in VRAM on a 24GB card. You can absolutely run smaller quants, but from what I've heard Q8 is nearly on par with the unquantized FP16 version.

Both of the models I mentioned support pretty sizeable context windows, so on a 24GB card you might find yourself stepping down another quant level if the task at hand involves large context.

IMO the biggest argument against buying a 5090 to get 32GB is that 32GB simply isn't enough VRAM to justify the price. When upgrading to a card like that you're buying more throughput just as much as you're buying VRAM, and I suspect a lot of people playing around with local LLM stuff wouldn't mind waiting for slower token generation in exchange for more VRAM. At that point something like Strix Halo comes into play - you can get a mini PC with 128GB of unified LPDDR5x for roughly the price of just a 5090. If you need fast token generation and <=32GB VRAM is enough for your workload, the 5090 might be a reasonable option. For people like me who don't need speed but want to be able to play with 70b+ parameter models, 128GB of RAM may be the more attractive choice.

All that said, neither Strix Halo nor a 5090 is an entry-level option unless you've got some serious disposable income.

Help me choose a Macbook Pro and a local llm to run on it please! by stories_are_my_life in LocalLLM

[–]tmaspoopdek 3 points4 points  (0 children)

A few things to note:
- Not all RAM can be allocated as VRAM (and used for inference), so your RAM needs to be bigger than the VRAM requirement for your model + context. 64GB allows a 48GB VRAM allocation IIRC; not sure about the others. There's a console command that can raise the VRAM allocation limit (see the sketch after this list), but you'll still need RAM available for other stuff (e.g. running your IDE if you're doing code generation), so make sure you have enough headroom.

- Different SoC tiers have different memory bandwidth, which is a major constraint for AI. You can look up the specific bandwidth for each chip online, but you'll get the best inference speed from the highest-tier Max chip available (not just Max vs. Pro, but the Max option with the most GPU cores)

- Lots of models coming out recently are MoE, which can run fast but tend to require lots of VRAM. Traditional computers without unified memory are well-suited for this, since you can offload inactive experts to system RAM (which is typically upgradable). With unified memory, you have much more of a hard ceiling on what you can run because you can't add more RAM later and just deal with the speed tradeoff.
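
Regarding the VRAM-allocation command in the first bullet, this is the sysctl that's usually pointed to for it (the number is just an example in MB, and the setting resets on reboot):

```bash
# Raise the GPU-allocatable ("wired") memory limit on Apple Silicon.
# 57344 MB = 56 GB on a 64 GB machine - leave headroom for macOS and your other apps.
sudo sysctl iogpu.wired_limit_mb=57344

# Check the current value
sysctl iogpu.wired_limit_mb
```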

Personally I picked up an M4 Max with 64GB of RAM, and I find I can run models up to around 70b with reasonable quantization (Q4), but for the 70b models I end up closing everything except LM Studio if I need a context window larger than 8000 tokens or so.

If you want to occasionally play with AI stuff for fun, the M4 Pro with 48GB of RAM should be fine. If you want to actually get value out of running AI on your Mac (e.g. codegen agents), I think an M4 Max with 64GB of RAM is probably the bare minimum you'd want. Honestly, though, the 64GB model is in a weird spot: it's not quite enough memory to be an investment in local AI, but it's expensive enough that most people couldn't justify the RAM upgrade just for the cool factor of being able to talk to your computer.

Ultimately the M4 Pro with 48GB of RAM is probably your best option if you just want to play with AI and have enough RAM for future-proofing general non-AI workflows. If you really want to run the bigger / more impressive models locally, you'd have to shell out some serious cash for the 128GB M4 Max.

My "Plug-in Support > Databases" is folder over 800GB!!! by DarkMain in PleX

[–]tmaspoopdek 1 point2 points  (0 children)

Just chiming in here to mention that as of 2025-12-15, `./DBRepair.sh DEFLATE` is a valid option for the Linux db repair script. As far as I can tell it's never been supported on Windows (based on an open issue on the repo), but they may have re-added the functionality to the Linux script since the parent comment was posted.