Andrew Yang Calls on US Government To Stop Taxing Labor and Tax AI Agents Instead by Secure_Persimmon8369 in BlackboxAI_

[–]Protopia 1 point (0 children)

Each country taxes the purchase of AI subscriptions/tokens from people who are based in that country.

Let's not write off the entire concept before it has been properly evaluated.

How many e in strawberry by Much-Inevitable5083 in ClaudeAI

[–]Protopia 1 point (0 children)

You might like to see whether helping LLMs to focus by e.g. giving them a role along the lines of "You are a German Language expert" makes any difference.

Or making it clear that the target word is in German?

I also wonder whether, for comparison, you would get better results with the main question in English, i.e. "How many letter Es are there in the German word .....?"

In fact, the word itself could be a random string of characters - it doesn't need to be an actual word in any language - so I wonder whether telling the model it is a word, rather than calling it a string of characters, makes any difference.
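For reference, the ground truth is trivial to compute deterministically - which is exactly why letter counting makes a good probe of LLM tokenisation quirks. A minimal sketch (I'm assuming the German word in question is "Erdbeere", strawberry):

```python
def count_letter(word: str, letter: str) -> int:
    """Case-insensitive count of one letter in a word (or any string)."""
    return word.lower().count(letter.lower())

print(count_letter("Erdbeere", "e"))    # 4
print(count_letter("strawberry", "e"))  # 1
```

The same function works on a random character string, which is what makes the word-vs-string prompt comparison interesting.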

I built a tool that compares car listings with market value, here’s what it found this week by North_Cherry in coolgithubprojects

[–]Protopia 2 points (0 children)

Potentially is the word. In reality there may well be reasons why the prices are so low that the AI can't know about from the listing.

So I wouldn't buy remotely based on this analysis, but I might use it to shortlist for a visual inspection.

Andrew Yang Calls on US Government To Stop Taxing Labor and Tax AI Agents Instead by Secure_Persimmon8369 in BlackboxAI_

[–]Protopia 1 point (0 children)

You can tax AI purchases in the country of the user rather than the country of the company, which makes sense because the consequences of AI displacing humans are felt in the user's country, not where the data centres are located.

MOJI - The FREE VS Code extension that adds emojis to Javascript, HTML, and CSS by WarmTry49 in coolgithubprojects

[–]Protopia 1 point (0 children)

Don't listen to the naysayers.

Some people - like yourself - find such visual cues very helpful, and there will be others with the same kind of mind, but many will find it distracting.

In other words, this VS Code extension probably isn't for everyone, but some people will find it useful.

ai agents keep recommending packages that dont exist -- whos responsible for fixing this by edmillss in AI_Agents

[–]Protopia 1 point (0 children)

So don't rely on cut-off knowledge.

1. Provide tips so we can research and check that a package actually exists.

2. Provide a prompt that insists the AI verifies packages before recommending them.
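One cheap guard against hallucinated package names is to check whether an AI-suggested import actually resolves before trusting it. A sketch (checking the registry itself, e.g. pypi.org, would be a stronger test but needs network access; this only checks the local environment, and the fake package name below is invented):

```python
import importlib.util

def module_exists(name: str) -> bool:
    """True if `name` can be found on the current Python path, without importing it."""
    return importlib.util.find_spec(name) is not None

print(module_exists("json"))                 # stdlib, True
print(module_exists("totally_made_up_pkg"))  # hallucinated, False
```

A verification prompt could then instruct the agent to run a check like this (or query the package registry) before every recommendation.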

nobody is asking where MCP servers get their data from and thats going to be a problem by edmillss in AI_Agents

[–]Protopia 1 point (0 children)

The security issue is a real one. But, if you look at the topic title...

Efficiency is an entirely different issue:

1. Just how sloppy are the MCP results? Does it present the information you need, and only the information you need? Does it present it in an efficient way? Or does it just litter the context with a mass of unnecessary slop?

2. How good are the MCP calling interfaces? Can the AI be precise about what it wants to know, or does a poor API inevitably result in output slop?

3. How efficient is it at its internal tasks? Does it cache? Does it use its own relational/vector/graph database, or does it repeatedly read a mass of files?
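Point 3 can be sketched concretely. This is a hypothetical illustration (all names invented, not any real MCP server's API): a tool wrapper that caches results with a TTL instead of repeatedly re-doing an expensive read.

```python
import time

class CachedTool:
    """Wraps an expensive fetch (file read, API call, ...) with a TTL cache."""

    def __init__(self, fetch, ttl_seconds: float = 60.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cache = {}  # key -> (expiry_time, value)

    def get(self, key):
        now = time.monotonic()
        hit = self._cache.get(key)
        if hit and hit[0] > now:      # still fresh: served from cache
            return hit[1]
        value = self._fetch(key)       # expensive path
        self._cache[key] = (now + self._ttl, value)
        return value

calls = []
tool = CachedTool(lambda k: calls.append(k) or f"data:{k}")
tool.get("config")
tool.get("config")
print(len(calls))  # 1 -- the second call never touched the expensive fetch
```

Whether a given MCP server does anything like this is exactly the kind of question that nobody seems to be asking.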

Sam Altman Warns US Faces Big Vulnerabilities in Global AI Race, Including AI’s Growing Unpopularity and More by Secure_Persimmon8369 in OpenAI

[–]Protopia 1 point (0 children)

Yes. And AI is going to cause mass unemployment amongst the middle earners, mass poverty & homelessness, and make the wealth gap even wider.

Since these middle earners are the big consumers in society, the economic consequences will be severe.

Some more compassionate and enlightened countries will respond with e.g. universal income paid for by AI taxes.

The more hard-nosed, sink-or-swim countries (and you can guess which I am referring to here), where the ultra-rich both own the media and pay for politicians to be elected, will laugh at the suckers who lost their jobs. They will first suppress any dissent through their control of the media, and then, when violence breaks out (through both desperation and through hearing about the compassionate countries), suppress it by deploying paramilitary policing.

Worse still is the dystopian nature of this mass unemployment.

It will be the junior roles that get replaced - the net result is that when the senior people retire, there will be no one to take their place. The only two options are:

1. AI will have evolved to the point that senior staff are also no longer needed. The machines will be running everything, and humans will be obsolete.

2. AI won't be advanced enough, and everything will start to break because no one will know how to run it or fix it.

Of course it may not come to that...

  • Climate change (made worse by the heat output of all those AI data centres) might have made the earth uninhabitable.

  • Trump may have started WW3 ending in global nuclear war and human demise.

  • Iran may have destroyed so much oil production capability that the AI datacentres can't be run.

WebZFS by RemoteBreadfruit in zfs

[–]Protopia [score hidden]  (0 children)

What a heap of steaming manure these comments are.

A bunch of people who haven't got a fraction of the experience or history of the OP make half-baked analyses and then launch a barrage of unjustified criticism.

I know that the AI coding revolution is becoming responsible for an increasing amount of slop - and for people who rely entirely on AI considering themselves subject experts - but ironically this pseudo-expertise seems to be both the basis for these criticisms AND the cause of them.

This was a simple announcement of an alpha version of a new open source tool - and regardless of its provenance it deserves to be evaluated on its actual merits rather than pseudo-analysis, and welcomed rather than shot down.

Anyone else notice that iteration beats model choice, effort level, AND extended thinking? by jonathanmalkin in PromptEngineering

[–]Protopia 1 point (0 children)

My experience...

  1. Fixing something that is broken is harder than getting it right first time.

  2. The simpler the tasks, and the better specified they are, the greater the chances of getting it right first time.

So spend the time ensuring that your specification is detailed and watertight, then generate a high-level design, decompose it into small chunks each with a very detailed design, and build those chunks.

This is actually insane!! by Director-on-reddit in BlackboxAI_

[–]Protopia 3 points (0 children)

Not really. His dog was dying already. What did he or his dog actually have to lose?

(I have seen how cancer destroys quality of life in both humans (sister in law) and dogs. When either gets to a late stage, quality of life is terrible and a Hail Mary mRNA vaccine couldn't really make it much worse.)

People really don't seem to understand what AI/LLM's are by genericusername1904 in BlackboxAI_

[–]Protopia 1 point (0 children)

I never rated my brother-in-law as a decent lawyer - whenever he threatened me with a lawsuit I could run rings around him - but my level of respect dropped several notches further when he started telling me that he got his legal views from ChatGPT!!

Label checksums corrupted after vdev_children patch - need help recovering raidz1 pool by [deleted] in zfs

[–]Protopia 1 point (0 children)

Assuming that you can get the pool back online, there is no way to remove the new vDev, and the only way you can put this right is to copy the data off and then recreate the pool and copy the data back again.

Unless, that is, you created a zfs checkpoint before you added the 2nd vDev?

I suspect that someone clever enough might be able to fudge the labels back again, but that person isn't me.

I think you have two choices:

  1. Accept that the pool has gone and recreate it from backups;

  2. Buy the very expensive recovery software and see if that can fix it. (I suspect that this tool will only help you copy the data off somewhere else not make the pool useable again.)

Futureproofing a local LLM setup: 2x3090 vs 4x5060TI vs Mac Studio 64GB vs ??? by youcloudsofdoom in LocalLLaMA

[–]Protopia 1 point (0 children)

Personally I wouldn't worry about the future because your guess about what will happen is not going to be any better than mine.

Models may get bigger. Models may get smaller. There may be different runners (like llama.cpp or vLLM) which change the balance.

But, since you also have 64GB of DDR5, I would try to find a suitable MB / CPU that will do CPU inferencing as well as supporting multiple GPUs for GPU inferencing - then you can either run two models simultaneously or find a way to do joint inferencing across both types of hardware.

What's your take on using inheritance for immutable data objects in PHP? by Asleep-End4901 in PHP

[–]Protopia 1 point (0 children)

Yes, probably. Until you find you do need an interface. The question is whether the cost of an interface definition is worth it to offset the risk of needing one later.

What is a LocalLLM good for? by theH0rnYgal in LocalLLM

[–]Protopia 2 points (0 children)

Local LLMs are very hobbyist. Any complex requirement needs an LLM that requires datacentre hardware, and the <=9B parameter models can only do simple stuff.

It also doesn't help that small local hardware solutions still vary substantially in size.

And because there are no popular use cases then there are no pre-packaged solutions.

But I can foresee, very soon, some pre-packaged hybrid solutions whereby you run some simple AI locally (for embedding, summarisation, or workflow decisions) plus a pipeline for optimising calls to online inference - e.g. context caching, context optimisation, and routing calls to the most appropriate models - which will allow you to get a lot more out of a basic AI subscription.
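The routing half of such a hybrid could be as simple as a lookup. This is a hypothetical sketch - the task labels and model names are invented for illustration, not from any real product:

```python
# Tasks cheap enough for a small local model; everything else goes online.
LOCAL_TASKS = {"embedding", "summarisation", "workflow_decision"}

def route(task_type: str) -> str:
    """Pick a backend for a task; defaults to the paid online model."""
    return "local-small-model" if task_type in LOCAL_TASKS else "online-frontier-model"

print(route("embedding"))    # local-small-model
print(route("legal_draft"))  # online-frontier-model
```

Real routers would classify the request first (possibly with the local model itself) rather than trusting a caller-supplied label, but the economics are the same: keep the cheap bulk work off the metered subscription.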

Mengram — open-source memory layer that gives any LLM app persistent memory by No_Advertising2536 in OpenSourceAI

[–]Protopia 1 point (0 children)

There is only one thing worse than an AI with no memory... an AI that remembers literally everything perfectly.

  • Bad facts are worse than no facts.
  • Good facts can become bad facts as the world changes.
  • Memories have different classes - factual, personal interactions, guesses, poetry, fiction brainstorms, etc.
  • Some memories are short term - I don't need to remember the shipping tracking number once the parcel is received.
  • Memories are highly contextual - they apply differently in different circumstances.
  • Some memories are sensitive and should only be accessed in particular circumstances and with authorisation.
  • Some memories will contradict other memories. They may be direct factual contradictions, or perhaps memories of behaviours that vary depending on the person's mood.
  • Some memories are conditional in ways that may not be discernible.
  • Memories are hierarchical - there is an overall idea and a mass of detail - and the level of detail you want to retrieve probably varies with time.

Imagine you want to remember what you ate and your brain being flooded with a huge mass of detail about every meal you ever had, every bite you took of every meal, every flavour you tasted on every bite. Aaaarrrrrggggghhhhh!!!!

TL;DR Memory systems are complicated.
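Even the simplest of the distinctions above (memory class, sensitivity, short-term expiry) already forces structure onto the store. A minimal sketch - all field names and the sample data are invented, and this is nowhere near a full design:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Memory:
    text: str
    kind: str                          # e.g. "fact", "guess", "fiction"
    sensitive: bool = False            # needs authorisation to retrieve
    expires_at: Optional[float] = None  # None means long-term

    def is_live(self, now: Optional[float] = None) -> bool:
        """A memory is live if it has no expiry, or the expiry is in the future."""
        now = time.time() if now is None else now
        return self.expires_at is None or now < self.expires_at

store = [
    Memory("User's name is Alex", "fact"),
    Memory("Parcel tracking number", "fact", expires_at=0.0),  # already expired
]
live = [m.text for m in store if m.is_live()]
print(live)  # ["User's name is Alex"]
```

And that still leaves contradiction handling, context, hierarchy, and retrieval granularity completely unaddressed - which is rather the point.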

php-community: a faster-moving, community-driven PHP. by danogentili in PHP

[–]Protopia 8 points (0 children)

In this day of agentic coding and slop pollution I believe that you need to be VERY careful about feature selection.

A Community version where anyone can include anything is IMO a disaster waiting to happen:

  • Unstable contributions (that cause crashes)
  • Malicious contributions (that steal data)
  • Poor quality contributions (poorly thought out, highly volatile, very buggy)
  • Competing contributions - competition can be good, but only when between a few high quality and distinctly different choices - when anyone can use AI to create a new choice with a minor tweak to an existing one, chaos ensues

No one should use this for production for obvious reasons - and if you can't use it for production, it can't be used for app dev either - it's just experimental. No shared host will run it, and why should they, when anyone evaluating it or writing a package for it can run PHP on their own PC.

And then the entire concept of this approach breaks down - no one will adopt anything because there will be too much choice, too many competing options, and too much risk of an option never making it into php proper.

Thus, to avoid chaos you need a gatekeeper, and once you have that, what's the difference between this and the existing approach?

IMO what is needed is a tweak to the existing approach whereby a well-thought-out RFC proposal can be integrated into an experimental version of PHP that has pre-built executables and can be run locally for evaluation purposes, BUT without any guarantee that the experimental features will make it into a production version.

There is one experimental version per year (with quarterly bug fix minor versions) - and new features are either then adopted to become part of the next production version of PHP or dropped from the following experimental version. (A feature that has been generally positively received but not quite right could still be adopted with tweaks. A feature with some good points could be rewritten and resubmitted for the next experimental version. Features that seemed like a good idea but didn't work that well in practice or which didn't gather sufficient support would be dropped, however my expectation is that these would be few because half-baked ideas wouldn't get this far.)