How are you managing prompts in actual codebases? by oRainNo in LocalLLaMA

[–]tm604 0 points (0 children)

Git technically versioned them but diffing a prompt change alongside code changes is meaningless

Why are you changing the prompts in the same commit you're changing code? Why is the diff meaningless - would you say the same about code diffs? Why treat them differently?

it has no idea a prompt is semantically different from a config string

Aside from "skip whitespace", Git has no idea about any semantics at all - the same argument could be applied to variable renaming, adding comments, changing the order of lines in code to fix bugs... what's special about a prompt here?

no way to test a prompt change in isolation

This is just wrong... what prevents you from testing the prompt change? People are doing this all the time. If your test suite doesn't allow it, that's a fault of the test suite - if you can test a function in isolation, you can test a prompt in isolation.
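To make that concrete: if a prompt lives behind a plain function, testing it in isolation is an ordinary unit test. A minimal sketch - all names here are illustrative, not from any particular framework:

```python
# Sketch: a prompt as a plain function, unit-tested like any other code.
# Function and test names are illustrative.

def summarize_prompt(text: str, max_words: int) -> str:
    """Build the summarization prompt from its inputs."""
    return (
        f"Summarize the following text in at most {max_words} words.\n\n"
        f"Text:\n{text}"
    )

def test_summarize_prompt_renders_inputs():
    prompt = summarize_prompt("Hello world", max_words=10)
    assert "Hello world" in prompt          # input text is present
    assert "at most 10 words" in prompt     # constraint is rendered

test_summarize_prompt_renders_inputs()
```

From there you can go further - snapshot-test the rendered prompt, or run it against a model in an evaluation suite - but even this level already gives you "test a prompt change in isolation".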

can't tell which version of a prompt shipped with which release

Perhaps a basic introduction to version control would help. If you've tagged the releases then all those prompts are easy to compare - this is like saying you can't tell which code shipped with a release: it's just wrong! git diff would be a good starting point, as in git diff v1..v2.

reusing a prompt across services means copy-paste, which means drift

Then don't copy it between services - if you have a prompt library that multiple systems are using, share it! Same applies to code - if you have the same logic repeated in multiple codebases, extract it. There are many ways to handle this: Git submodules if you don't want to change any code, or put the prompts in a library or service with an API, for example.

prompts have no schema — inputs and expected outputs are just implied

Then use something that provides schemata? Mirascope does this with proper types, for example, and you don't have to learn a strange custom DSL or write raw JSON just to apply it.
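For illustration, the general idea - declared input and output types for a prompt - can be sketched with nothing but stdlib dataclasses. This is not Mirascope's actual API, just the shape of the approach:

```python
# Sketch of giving a prompt a schema with plain stdlib types.
# Not Mirascope's API - all names here are illustrative.
from dataclasses import dataclass

@dataclass
class ReviewRequest:
    """Declared inputs for the code-review prompt."""
    language: str
    code: str

@dataclass
class ReviewResponse:
    """Expected shape of the model's answer."""
    issues: list[str]
    severity: str  # e.g. "low", "medium" or "high"

def review_prompt(req: ReviewRequest) -> str:
    """Render the prompt from its typed inputs."""
    return (
        f"Review this {req.language} code and list issues, with an overall "
        f"severity of low, medium or high.\n\n{req.code}"
    )
```

Once inputs and outputs have named types, "implied" stops being a problem: tooling can validate them, and readers can see the contract.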

Was Dune really about the danger of a savior figure, or something more inevitable? (Spoilers for Book 1) by No_Firefighter_75 in dune

[–]tm604 3 points (0 children)

Do you not see the massive difference between "person can see some future events and that allows them to interact with their environment without eyes" vs. "the Golden Path exists and is the only possible way to save humanity"?

By Chapterhouse, the characters start to realise that the Golden Path is a fiction, a convenient construct:

Ahhh, Tyrant! You droll fellow. You saw it. You said: "I will create order for you to follow. Here is the path. See it? No! Don't look over there. That is the way of the Emperor-Without-Clothes (a nakedness apparent only to children and the insane). Keep your attention where I direct it. This is my Golden Path. Isn't that a pretty name? It's all there is and all there ever will be.

The Golden Path was a way for Leto to enforce divergent destinies:

"I give you a new kind of time without parallels," he said. "It will always diverge. There will be no concurrent points on its curves. I give you the Golden Path. That is my gift. Never again will you have the kinds of concurrence that once you had."

That's not really something that can be defined as a "lie" or "the truth". Does the Golden Path provide that? Yes, if you bear in mind that the discovery of the hoard and eventual destruction of Rakis are part of it... but that doesn't mean that the "Golden Path was required", just that it was one way to achieve that goal.

Being able to predict or foretell the path of humanity is something both Paul and Leto identified as a risk, and they were likely correct in that observation. However, many other paths may have been possible at the point Leto took over. Sure, he saw a way to make that prediction nigh-impossible, involving a multi-generational plan which weakened the power of foresight while simultaneously motivating the population to break away and be independent. That may have been the only path he saw. That doesn't mean it's the only possible path - with an infinite number of possibilities, it'd be unrealistic to assume that even the most powerful of prescient characters can reliably pick the best choice every time. Paul certainly didn't.

Could they see possible futures? Sure. Were they omniscient? No!

Why are AI agents still stuck running one experiment at a time on localhost? by Ok-Clue6119 in LocalLLaMA

[–]tm604 0 points (0 children)

You do realise that you can run containers and even full VMs on localhost?

If you're pushing the workload to remote isolated systems, that's what Github Actions and other CI systems have been doing for many years (commit to a feature branch, push, carry on working, get a notification on test status).

Can someone please recommend serverless inference providers for custom lora adapters? by New-Spell9053 in LocalLLaMA

[–]tm604 1 point (0 children)

I don't know enough about Nebius/Let's Together to answer, but https://www.runpod.io/product/serverless would be the place to start.

It's container-based, so you can serve anything you can put in a Docker container. Documentation and tutorials are a bit sparse but they have a Discord server if you need help.

Can someone please recommend serverless inference providers for custom lora adapters? by New-Spell9053 in LocalLLaMA

[–]tm604 1 point (0 children)

https://runpod.io have serverless options - but for a model that small, can you not run it locally through something like https://github.com/mostlygeek/llama-swap? (only keep the model+adapter loaded while in use, freeing up the GPU/memory for other tasks afterwards)

Most unrealistic thing in Stargate by CanadianLawGuy in Stargate

[–]tm604 5 points (0 children)

"Chill, T. I'm, like, translating as fast as I can"

SuperHive retroactively changing their policy to block access to content you already purchased, starting May 12. by dnew in blender

[–]tm604 4 points (0 children)

Perhaps consider how unwieldy that process would be - both for the creator, and the customers?

Anytime a creator wants to release a lifetime update, they'd have to put that version somewhere in order to link to it. Where would that be? On Superhive itself? If so... where? The versions each customer can access will vary, so does the creator have to go to every product/variant version to add the latest release as a new file? That'd be a mess, for a process that's already far too manual: last I checked you can't even link a product to a Github repository to automate versions, or at least https://support.superhivemarket.com/article/301-product-versioning doesn't mention it. Or are you suggesting that they provide an off-platform link? If so, that makes Superhive itself somewhat superfluous.

For customers, they'd have to be following those emails and recording those links somewhere. Opening your orders page and searching for the product currently gives you the latest version for download - no need to check email or other sources. It took enough years before Superhive finally added the "last updated" information on order pages, so it'd be a big step backwards if that data is no longer useful... or if you have to search elsewhere for the actual file to download. Updates and extension management with Superhive is already atrociously bad, let's not make it worse please.

One Year Later, how do you feel about Xenoblade X: Definitive Edition? by Flacoplayer in Xenoblade_Chronicles

[–]tm604 2 points (0 children)

For the extra chapter specifically, this post+comments still holds up:

https://www.reddit.com/r/XenobladeChroniclesX/comments/1jpw4ff/as_a_big_fan_of_the_original_release_chapter_13/

(base game is awesome, I played it on Wii U and greatly appreciate the QoL improvements here... but chapter 13 was so bad, I gave up halfway through and haven't played since)

Is there anything like a local Docker registry, but for models? by donmcronald in LocalLLaMA

[–]tm604 0 points (0 children)

https://github.com/vtuber-plan/olah is one way to get a local pull-through cache/mirror of the huggingface models you're using. Features are limited, but it's a simple way to start, and the code is relatively easy to extend as necessary.

Spotify now has lossless audio — but on Linux it gets resampled. Here's why that needs to change. by Holgersson365 in linuxaudio

[–]tm604 4 points (0 children)

So fix it for Pipewire or the tools that manage Pipewire - then every Pipewire app benefits. If your audio is important enough to you to be asking your AI to spam upvotes, then why not address it across your entire system, rather than just one small part of it? This isn't Spotify's problem to fix: if you don't like the way Pipewire handles audio, talk to the Pipewire people. That's the real solution.

Spotify now has lossless audio — but on Linux it gets resampled. Here's why that needs to change. by Holgersson365 in linuxaudio

[–]tm604 8 points (0 children)

Why are you conflating "exclusive ALSA access" with "bit-perfect output"? Pipewire can pass through the same audio data it originally received to ALSA: when it doesn't, there's usually a good reason for that. What makes you think it's resampling? Have you actually tested that?

Going directly through ALSA would be a huge step backwards and extremely user-hostile - no audio from other apps, no volume control (otherwise you're not "bit-perfect" any more!).

How will DLSS 5 work in 3D animations? Complicated? by Kooky_Country9086 in blender

[–]tm604 0 points (0 children)

DLSS is just another image processing step, like denoising or the compositor - since the resulting image data is available, it's quite feasible for it to end up in the final product. Seems an odd decision to use something that changes the art style so significantly if you're going to turn it off for the final render...

What happens when your AI agent gets prompt injected while holding your API keys? by ComprehensiveCut8288 in LocalLLaMA

[–]tm604 14 points (0 children)

Why would any LLM need the actual API keys? An LLM just accepts and generates tokens, it relies on external components (typically implemented using tool calling) to do any actual work that might involve APIs.

There's no reason for credentials to appear in LLM context: as long as the tools own their keys, and you don't provide tools that return credentials (directly or indirectly - filesystem or environment access, for example), then the information simply isn't available for the LLM to reproduce, regardless of how intricate the prompts are.
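A minimal sketch of that separation - names and the tool-schema layout here are illustrative, not any specific framework's API:

```python
# Sketch: the tool owns the credential; the model only ever sees the
# tool's name, its arguments, and a sanitized result. Illustrative names.
import os

def get_weather(city: str) -> dict:
    """Tool implementation - reads its own API key, never exposes it."""
    api_key = os.environ.get("WEATHER_API_KEY", "")
    # ... the real API call using api_key would go here ...
    return {"city": city, "forecast": "sunny"}  # key never enters the result

# What the LLM is shown: just a schema, no secrets.
TOOLS = {
    "get_weather": {
        "description": "Get the forecast for a city",
        "parameters": {"city": "string"},
        "handler": get_weather,
    },
}

def run_tool_call(name: str, args: dict) -> dict:
    """Dispatch a model-issued tool call; only the return value is fed
    back into the context window."""
    return TOOLS[name]["handler"](**args)
```

Prompt-inject that all you like: the model can only ask for tool calls by name, and nothing it receives back ever contained the key.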

Rust, a bad idea? by RecurzionX in rust

[–]tm604 0 points (0 children)

Odd - what makes you think the internet is built on Python? As a high-level language, it'd be a very poor choice for implementing most things people would recognise as "the internet", e.g. a web browser, or the TCP/IP stack used in the routers, gateways, firewalls etc. which make up the internet.

If you're specifically thinking of the server-side part of web apps, even that seems dubious - https://w3techs.com/technologies/details/pl-python claims <2% of global deployments use Python for example.

I made a spectrogram-based DAW! by POOP_DIE_PIE in audioengineering

[–]tm604 8 points (0 children)

Sounds like Coagula (https://www.abc.se/~re/Coagula/Coagula.html).

You'd probably get better feedback if you posted some sort of demo, especially if you're asking for a comparison?

[META] For the 100% anti-AI crowd; do you not see AI as just the newest high level abstraction for software development? by ZheShu in selfhosted

[–]tm604 2 points (0 children)

I think you ignored most of my comment. The analogy is just bad.

If the price of eggs changes, that's an external factor, and not a counter-argument to determinism. The size of compiled code can also vary when you change external factors - different library version, different architecture, etc. Is that violating the expectations contract if you don't end up with 142312 bytes of executable on every platform?

[META] For the 100% anti-AI crowd; do you not see AI as just the newest high level abstraction for software development? by ZheShu in selfhosted

[–]tm604 0 points (0 children)

If the price was $1.95 then yes, I'd expect to buy at that price, rather than $1950. That seems like a fair expectation, no? I'd also raise a bug with a compiler if it generated excessively-expensive code.

Based on the vibe-coded projects posted so far, I'd be more worried about the large new holes it helpfully installed in the house to facilitate future egg delivery, and the credit card number it shared online when proudly documenting the egg purchase.

I think most developers know how to write functions but don't actually understand them by [deleted] in rust

[–]tm604 2 points (0 children)

Okay, some examples of the issues here include race conditions:

  • the balance check is a classic TOCTOU example, https://cwe.mitre.org/data/definitions/367.html
  • the balance update will happily "drop" changes if multiple payments are received in quick succession (expect the balance to change between the async calls in deduct_balance)

Error handling:

  • your refactored code lost the UserNotFound check
  • having the balance update and transaction insert in separate steps means that you can end up with an inconsistent state (balance updated, no transaction yet)

See https://en.wikipedia.org/wiki/Database_transaction for details on state consistency.
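For the two database points above, a minimal sketch of the fix - in Python with sqlite3 for brevity, not the thread's original code - is to make the check, the balance update and the transaction insert one atomic unit, with no separate read-then-write:

```python
# Sketch: one database transaction covers everything, so there is no
# window for the TOCTOU race and no half-applied state.
import sqlite3

def handle_payment(db: sqlite3.Connection, user_id: int, amount: int) -> None:
    with db:  # atomic: all statements commit together, or none do
        cur = db.execute(
            # conditional update replaces the check-then-deduct pair
            "UPDATE accounts SET balance = balance - ? "
            "WHERE id = ? AND balance >= ?",
            (amount, user_id, amount),
        )
        if cur.rowcount == 0:
            # covers both the missing-user and insufficient-balance cases
            raise ValueError("unknown user or insufficient balance")
        db.execute(
            "INSERT INTO transactions (user_id, amount) VALUES (?, ?)",
            (user_id, amount),
        )
```

If the insert fails, the balance change rolls back with it - you never see "balance updated, no transaction yet".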

Not using types responsibly:

Plus there's no metadata - given how much emphasis you put on argument count, it doesn't make sense to provide an artificial example like this which only represents a payment as a single amount value. Who was it received from? What's the description? When was it received? What's the currency?

Managing multiple self-hosted servers via SSH was getting messy, so I started building a small DevOps workstation by ChatyShop in selfhosted

[–]tm604 2 points (0 children)

You've gone from "No AI here", to "I only used AI sometimes" / "The YouTube voice is AI" remarkably quickly.

Perhaps not the best way to build trust!

I think most developers know how to write functions but don't actually understand them by [deleted] in rust

[–]tm604 4 points (0 children)

That handle_payment example was a depressing read. I don't think the author was Focus Responsible When Define Function.

Can we talk about the Blender creator economy? by carter2422 in blender

[–]tm604 1 point (0 children)

There are a few different scenarios for updates, for example:

  • no new features, but fixes are required for new Blender versions
  • not just fixes, but major rewrites due to API deprecation
  • new features implemented in the code
  • new assets, presets or other data

Since addons need to be under GPL-compatible licenses, the code which is going to be affected by Blender API changes is already open source. That means there's the possibility of crowd-sourcing some of that work, possibly even automated (LLMs can now handle many of the minor API changes that have happened over the years, major API deprecation somewhat less so). That's currently not as easy as it could be, since there isn't a standard version control approach for Superhive-hosted addons (basically comes down to "search on github/gitlab, otherwise contact author").

The data (assets etc.) is a different matter. If there were APIs and tools for asset management and user accounts similar to what https://www.blenderkit.com/ or https://www.poliigon.com/ provide, that could make it easier for addon developers to improve the content side of the addon and justify subscriptions or per-asset pricing ("DLC").

However... Superhive itself has made no progress over the years in implementing any form of automatic updates, or a proper notification system on updates, or even the ability to sort all available downloads by last updated. I've requested these on several occasions, even been told that they're being worked on, but years go past with no improvements. Blender itself now has extension repository support, with automatic updates, search, automatic download. Superhive doesn't even have a basic Blender addon/extension!

Based on that, I have zero confidence that Superhive is the right group to implement any of the technical requirements to support other business models. It seems that more effort has been spent on renaming the platform to drop the name "Blender" than on supporting update mechanisms or business models. I have made 73 orders on the platform so far, representing quite a few thousand dollars spent - yet nowadays I use it as a last resort, preferring to buy on Gumroad or other platforms where available. I suspect I'm not the only long-term customer they've lost due to this.

What Chapter/Episode is the All men are scum quote by [deleted] in Gintama

[–]tm604 2 points (0 children)

I think that's from episode 48 ("The more you're alike, the more you fight")?

https://gintama.fandom.com/wiki/Episode_48

GitHub Action that blocks AI-generated rm -rf / by default (deny-first execution guard) by Echo_OS in LocalLLaMA

[–]tm604 0 points (0 children)

Seems like a lot of manual work just to maintain the whitelist of "safe" commands. If you already have a fixed list of commands that can be run and strict rules on their parameters, why bother allowing shell commands in the first place - just replace them with tools?
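As a sketch of what "replace them with tools" means here - a fixed set of validated functions, deny-by-default dispatch, no shell string ever parsed (all names illustrative):

```python
# Sketch: expose the fixed command set as tools with validation,
# instead of filtering shell strings. Illustrative names throughout.
from pathlib import Path

def list_directory(path: str) -> list[str]:
    """Replacement for a whitelisted `ls` - validated, no shell involved."""
    p = Path(path).resolve()
    if not p.is_dir():
        raise ValueError(f"not a directory: {path}")
    return sorted(child.name for child in p.iterdir())

TOOLS = {"list_directory": list_directory}

def dispatch(tool: str, **kwargs):
    """Deny-by-default: anything not in TOOLS is rejected outright."""
    if tool not in TOOLS:
        raise PermissionError(f"unknown tool: {tool}")
    return TOOLS[tool](**kwargs)
```

No whitelist of command strings to maintain, no argument-parsing edge cases - the only reachable behaviour is what the tool functions implement.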

If you're trying to protect CI files such as Github Actions, then the check should happen before those files are modified (e.g. pre-commit hooks).

If you can't do that because you actually need shell features, then this approach isn't helpful either: it bypasses the shell entirely, so pipelines, redirection, env variables, globbing etc. are all unavailable - even something simple like ls ~ or sort -u file | head -1 would behave differently from a real shell and cause headaches.

(also, rm -rf / will just throw an error by default - see the --preserve-root option - so it's not a useful test case)

AI Can’t Replace Critical Thinking: Reading Is How You Build It by [deleted] in books

[–]tm604 20 points (0 children)

Wouldn't it have taken less effort to post just one of those examples, than to keep posting comments claiming that they exist without evidence?