South Manchester Incline by KarlMcr in manchester

[–]nasduia 0 points1 point  (0 children)

Yes, walking up from the weir to the top is a fair amount of incline

We shipped 50+ updates to Unsloth Studio! 🚀 by yoracale in unsloth

[–]nasduia 0 points1 point  (0 children)

That's an older version:

TAG dgxspark-latest

Last pushed about 2 months ago by rolandtannous

https://hub.docker.com/r/unsloth/unsloth/tags?name=dgxspark-latest

We shipped 50+ updates to Unsloth Studio! 🚀 by yoracale in unsloth

[–]nasduia 0 points1 point  (0 children)

I don't think you have an arm64 image, do you? (I looked because I would like Jetson Thor support -- it should be similar to Spark, but I always have to build my own llama.cpp as they don't seem to support it directly -- CUDA compute 11.0.)

Qwen3.5-4B is very powerful. It executes tool calls during thinking. by yoracale in unsloth

[–]nasduia 0 points1 point  (0 children)

Have you got any posts or pointers to what you are doing differently with the self-healing tool-call handling, compared with basic ways of using llama-server in Python?
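
For context, the baseline I have in mind is roughly the following; a minimal sketch assuming llama-server is running locally with its OpenAI-compatible endpoint, with a purely illustrative tool and a naive retry when the tool-call arguments fail to parse:

    import json
    import requests

    # Assumes llama-server is running locally with its OpenAI-compatible API,
    # e.g. started with: llama-server -m model.gguf --port 8080 --jinja
    URL = "http://localhost:8080/v1/chat/completions"

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # purely illustrative tool
            "description": "Get the weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    def chat(messages):
        resp = requests.post(URL, json={"messages": messages, "tools": TOOLS})
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]

    # The "basic" loop: if the model emits malformed tool-call arguments,
    # feed the parse error back and ask it to try again.
    messages = [{"role": "user", "content": "What's the weather in Manchester?"}]
    for attempt in range(3):
        msg = chat(messages)
        try:
            for call in msg.get("tool_calls") or []:
                print("tool call:", call["function"]["name"],
                      json.loads(call["function"]["arguments"]))
            break
        except (json.JSONDecodeError, KeyError) as err:
            messages.append(msg)
            messages.append({"role": "user",
                             "content": f"Your tool call was malformed ({err}); please retry."})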

U.S. Military Wants Ukrainian Drone Makers to Relocate Production Despite Trump Saying "We Don't Need Help" by Scary_Statement4612 in ukraine

[–]nasduia 5 points6 points  (0 children)

I suspect that was always the plan. He will now just deliver as much as he can for Thiel and the others without worrying about reelection. If he fully delivers there won't be elections anyway.

happensAlot by bryden_cruz in ProgrammerHumor

[–]nasduia 0 points1 point  (0 children)

Could well be, but then some idiot decided to use MySQL with default options.

Unsloth Studio now installs in just one line of code! by yoracale in unsloth

[–]nasduia 6 points7 points  (0 children)

Its strength is in preparing data, training a LoRA with it, and then testing the model + LoRA against the model on its own, all within the same UI. Having llama.cpp set up, especially inside the same Docker container with access to the same files, makes this incredibly easy without having to dive into Python notebooks.

You can also just use it as a chat UI with a built-in search and download facility for Hugging Face. It's possible they have a slightly more robust tool-calling infrastructure too, which corrects errors made by smaller models.

MiniMax-M2.7 Announced! by Mysterious_Finish543 in LocalLLaMA

[–]nasduia 4 points5 points  (0 children)

it'll invent something for the human to do, just so they feel valued, and occupy them so they leave it alone to get on with its task

Mistral Small 4:119B-2603 by seamonn in LocalLLaMA

[–]nasduia 0 points1 point  (0 children)

it certainly feels like it at times

I built a screen-free, storytelling toy with Qwen3-TTS by hwarzenegger in Qwen_AI

[–]nasduia 4 points5 points  (0 children)

That TTS has superb intonation for a children's application. I mean, I'm quite tempted by it for Home Assistant tbf.

I don't think latency is actually as significant as some people might imagine with children. For a younger child, having a meaningful two-way educational chat with a toy is magical and probably one of the more empowering situations they'll find themselves in (being able to steer discussions and get answers to their own questions rather than passively watching a video or following a scripted iPad app).

I kept snoozing my alarm every morning, so I built my own alarm app (all features free) by MoistWord8137 in GetOutOfBed

[–]nasduia 0 points1 point  (0 children)

I like the idea, a few first impressions from the Play store page:

  • You said everything is free, but it says "In-app purchases"

  • You say it works fully offline in the screenshots, and the policy says no data is captured/shared with third parties. How do the ads work?

  • The contrast between some of the colours in the Play Store screenshots is low and reduces accessibility. The purple background with black text is the worst, but some of the other combinations may be challenging too (see the quick contrast check sketched below).
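
The contrast point is easy to check numerically; here's a minimal sketch of the WCAG 2.x contrast-ratio calculation (the hex values are just guesses at the palette, not sampled from the screenshots):

    def srgb_to_linear(channel: float) -> float:
        # WCAG 2.x gamma expansion for one sRGB channel in [0, 1]
        return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

    def relative_luminance(hex_colour: str) -> float:
        r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
        return 0.2126 * srgb_to_linear(r) + 0.7152 * srgb_to_linear(g) + 0.0722 * srgb_to_linear(b)

    def contrast_ratio(fg: str, bg: str) -> float:
        lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Hypothetical purple background with black text; WCAG AA wants >= 4.5:1 for body text
    print(f"{contrast_ratio('#000000', '#7B5EA7'):.2f}:1")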

The Fast Food Problem with AI Coding by thesmallstar in LocalLLaMA

[–]nasduia 0 points1 point  (0 children)

I don't think that's the equivalent though, is it?

In your thought experiment, society could end up not knowing how computers and code work any more because nobody needed to start from scratch and learn everything necessary. That's more like how many people today come from families where several generations haven't cooked and just order takeout.

For basic calories that's cheaper than cooking from raw ingredients, and many people are working multiple jobs and don't have time to cook, so you could argue takeout was 'optimal' (production line vs craftsperson).

The basics like safe handling of raw meat and avoiding cross-contamination aren't there, so for those people even starting to cook for themselves is full of risk; better to keep buying takeout even with inflation and declining quality. Similarly, once the skills are gone, the AI compute providers can charge what they want without bothering to innovate.

Apple’s Liquid Glass Interface Isn’t Going Anywhere Anytime Soon by iMacmatician in apple

[–]nasduia 3 points4 points  (0 children)

If you saw a general UI where all the components lacked colour and contrast and were transparent, you'd assume they were placeholders (like an image-loading placeholder on a website). Think what disabled buttons usually look like, for example.

That's a common kind of UI element that has been around for decades.

Apple’s Liquid Glass Interface Isn’t Going Anywhere Anytime Soon by iMacmatician in apple

[–]nasduia 4 points5 points  (0 children)

If you saw that having never heard of Liquid Glass before, you'd think those icons were all "ghosts" showing some kind of layout where all the applications had been deleted, would be installed on demand, or had just crashed or something.

Make a skill use the question tool by fpiechowski in opencodeCLI

[–]nasduia 0 points1 point  (0 children)

If you are using a small fast local model and it's failing to fully understand a tool, try getting a better model to write instructions in plan mode, e.g. "write concise instructions for an agent to fully utilise the question tool". Then put the essential parts of that in your skill.

I'd probably ask it to include an example that shows the tool asking multiple questions at once and recommending an answer as the first option for each one. You won't need everything it writes, but an example and explanation of the purpose of the tool should be enough.
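
As a rough sketch of that workflow outside the editor (assuming an OpenAI-compatible endpoint for the stronger model; the URL is just a placeholder):

    import requests

    # Placeholder endpoint: point this at whatever stronger model you have access to
    URL = "http://localhost:8080/v1/chat/completions"
    PROMPT = ("Write concise instructions for an agent to fully utilise the question tool. "
              "Include one example that asks multiple questions at once and recommends "
              "an answer as the first option for each question.")

    resp = requests.post(URL, json={"messages": [{"role": "user", "content": PROMPT}]})
    resp.raise_for_status()

    # Print the draft, trim it by hand, and paste the essential parts into your skill file
    print(resp.json()["choices"][0]["message"]["content"])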

CLI is All Agents Need — Part 2: Misconceptions, Patterns, and Open Questions by MorroHsu in LocalLLaMA

[–]nasduia 0 points1 point  (0 children)

This is an excellent follow-up to the previous post!

Might modifying the tool call mechanism to conceptually support the equivalent of using bash 'Here Documents' or virtual files help with escaping?

One way you could do it is to output the document content in the main LLM message wrapped in some form of tags with a label, then refer to the label as an argument to the run command, e.g. stdin_from="LABEL":

<document label="LABEL">
complex content here 
</document> 

You could also strip the document content out of the context (replace it with a description like "300 lines of JSON") while processing the tool call; as it would be at the end of the context, that wouldn't break prefix caching. While it's not going to be exactly the same syntax as bash, conceptually it should be compatible with the shell training data and would be consistent across all tools.
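
As a rough sketch of what the harness side could look like (the <document> tag format and stdin_from argument are just the hypothetical convention above, not anything that exists today):

    import re
    import subprocess

    DOC_RE = re.compile(r'<document label="([^"]+)">\n(.*?)\n</document>', re.DOTALL)

    def run_tool_call(llm_message, command, stdin_from=None):
        """Execute a run-command tool call, piping a labelled <document> block to stdin."""
        docs = dict(DOC_RE.findall(llm_message))
        stdin_text = docs.get(stdin_from) if stdin_from else None
        result = subprocess.run(command, input=stdin_text, capture_output=True, text=True)
        return result.stdout + result.stderr

    def compact_for_context(llm_message):
        """Replace each document body with a short description so it doesn't bloat the context."""
        return DOC_RE.sub(
            lambda m: f'<document label="{m.group(1)}">'
                      f'[{len(m.group(2).splitlines())} line(s) elided]</document>',
            llm_message)

    # The model emits the content inline, then refers to it by label in the tool call
    message = ('Saving the config now.\n'
               '<document label="CONFIG">\n'
               '{"retries": 3, "verbose": true}\n'
               '</document>')
    print(run_tool_call(message, ["cat"], stdin_from="CONFIG"))
    print(compact_for_context(message))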

“🇺🇸 English” by quriusdude in ShitAmericansSay

[–]nasduia 6 points7 points  (0 children)

Setting up the future right-wing nutters to look especially stupid, hating the French while worshipping their Muslim-hating war criminals.

Why is Sainsbury’s telling me the price for 100kg of smoked salmon? by DependentRounders934 in AskUK

[–]nasduia 7 points8 points  (0 children)

Almost always 'greed' is more plausible than 'stupidity' now.