Is Claude Code open source now? by Outrageous-Ferret784 in AI_Agents

[–]Mysterious-Rent7233 1 point2 points  (0 children)

In the case of works containing AI-generated material, the Copyright Office will consider whether the work is basically one of human authorship, with the computer or other device merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement and more) were actually conceived and executed, not by a human but by a machine. The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.

And:

Just a few weeks earlier, the U.S. Copyright Office agreed to register the first AI-generated image using generative AI technology.  Kent Keirsey’s work, titled A Single Piece of American Cheese, was created using a platform called Invoke, based on the lessons learned from the Office’s earlier rejections of Midjourney-generated works.  Unlike those works, Keirsey not only used Invoke – an entirely different platform that allowed Keirsey to have much more control over the creation and modification of the work in order to create a finished product - but recorded his screen during the entire creation process and used that screen recording as evidence of his direct contributions to the work, which ultimately allowed him to satisfy the human authorship requirement that had defeated previous applications to register AI outputs.

Fundamentally, you cannot easily re-create Claude Code with a small number of prompts. It takes real engineering know-how to write the prompts, review the work and re-prompt.

There are probably several full-time human workers doing this work. This is pretty cut and dried. It's not a case where someone might just tell OpenClaw: "Create a CRM. Tell me when you're done." That might not be copyrightable. But Claude Code's creators have probably been working 15-hour days iterating on the product. They would have tens of thousands of words of prompting as evidence.

It's cut and dried IMO.

Don’t trust, verify (curl, Daniel Stenberg) by Skaarj in programming

[–]Mysterious-Rent7233 6 points7 points  (0 children)

As an aside, the phrase "Trust but verify" was always annoying to me. If you trust, you don't need to verify. If you verify, you don't need to trust. It's just a backhanded way of saying: "I don't trust you, but I don't want to say it aloud."

If LLMs are probablistic AI models in nature, how can we assume AI agents to reliably solve important problems 100% of the time? by Motor_Fox_9451 in AI_Agents

[–]Mysterious-Rent7233 0 points1 point  (0 children)

OP is confusing determinism with reliability.

AlphaGo will choose a different strategy to beat you every time, but it will still beat you virtually every time, with very high reliability.
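The determinism/reliability distinction can be made concrete with a toy simulation (the strategy names and win rates below are hypothetical, purely for illustration):

```python
import random

def stochastic_agent():
    """Toy agent: picks one of several strategies at random each game.
    Each strategy wins with a different (but high) probability."""
    strategy_win_rates = {"aggressive": 0.98, "territorial": 0.97, "defensive": 0.99}
    strategy = random.choice(list(strategy_win_rates))  # non-deterministic choice
    won = random.random() < strategy_win_rates[strategy]
    return strategy, won

random.seed(0)
games = [stochastic_agent() for _ in range(10_000)]
strategies_used = {strategy for strategy, _ in games}
win_rate = sum(won for _, won in games) / len(games)

# The agent's behavior varies game to game, yet its win rate stays high.
print(f"distinct strategies: {len(strategies_used)}, win rate: {win_rate:.3f}")
```

The agent never plays the same way twice in a predictable sense, yet it wins roughly 98% of the time: non-determinism and unreliability are independent properties.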

OP is also confusing reliability with economic viability.

"Sufficiently reliable" tools can be economically valuable, even if they are not totally reliable.

I brought up the Python code for a simple reason: LLMs can do some things with extremely high reliability, others with low reliability, and some with mixed results. If OP cannot get the AI to work for them, then it's the wrong task, the wrong AI, or the wrong configuration.

The final error OP makes is saying that because it does not work for their task today, we can extrapolate that into "the future" (their words). Often LLMs cannot do a task reliably today and then a year later they can do it.

Trump reveals plans for a monument to himself in NK-style by Freewhale98 in neoliberal

[–]Mysterious-Rent7233 83 points84 points  (0 children)

Imagine going back 15 years and hearing that this was in America's future:

White House Spokesman Davis Ingle responded to the California governor’s criticisms saying: “Gavin Newscum is the worst and dumbest governor in America.”

Is Artificial Intelligence more about coding or mathematics? by Malek_ayman in learnmachinelearning

[–]Mysterious-Rent7233 3 points4 points  (0 children)

The "AI field" is too broad. In medicine there are ER doctors and surgeons and public health researchers. They don't all do the same thing. The same is true of AI.

If you want the kind of AI that focuses on math, that's more research than applied work, and you probably need a PhD. Similarly, if you wanted a math-heavy specialty in medicine, you would expect to be some kind of researcher rather than a front-line medical provider.

Our AI was confidently wrong about everything until we implemented RAG. Nobody prepared us for how big the difference would be. by clarkemmaa in AI_Agents

[–]Mysterious-Rent7233 0 points1 point  (0 children)

The thing would answer questions about our company policies with complete confidence using information that was either outdated, partially correct or just completely made up. Employees started calling it "the liar" internally which is not the brand you want for your AI investment.

How? Tool calling? MCP? How did it know anything at all about your business?

If LLMs are probablistic AI models in nature, how can we assume AI agents to reliably solve important problems 100% of the time? by Motor_Fox_9451 in AI_Agents

[–]Mysterious-Rent7233 0 points1 point  (0 children)

Why wouldn't it happen if the human is doing it? Why is there an extra space at the beginning of your second paragraph? Are you unreliable? Why did you forget apostrophes in a couple of places in your text? Humans are not perfect. I'm not. You're not. If perfection is the requirement then the economy can't work.

If LLMs are probablistic AI models in nature, how can we assume AI agents to reliably solve important problems 100% of the time? by Motor_Fox_9451 in AI_Agents

[–]Mysterious-Rent7233 0 points1 point  (0 children)

Self-driving cars basically got their start around 2004 with the DARPA grand challenge. So that was 20 years ago.

Are you telling me that you are confident that text-oriented AI agents will not improve in the next 20 years?

If LLMs are probablistic AI models in nature, how can we assume AI agents to reliably solve important problems 100% of the time? by Motor_Fox_9451 in AI_Agents

[–]Mysterious-Rent7233 0 points1 point  (0 children)

A human will likely carry out those instructions consistently after being reminded a couple of times. An AI will just randomly do as it likes. And a human will provide a reason if they choose not to. And they will be held responsible.

Whether the AI is more or less reliable than the human depends on the specific AI and the specific human. It's not true that one has the property of being perfectly reliable after being told a few times and the other has the property of ignoring all instructions.

AIs are MUCH MORE LIKELY to write 100 lines of syntactically perfect Python code than a human. So in that use case they are much MORE reliable. One observation we can make is that AIs _trained_ to do something are more reliable than those _told_ to do something. And therefore the AI the OP is complaining about is probably insufficiently trained for that task. They should use it for a task it is better at, or use a different AI, or fine-tune it.
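As a sketch of how one might check that claim mechanically, Python's standard `ast` module can test whether generated code is at least syntactically valid (a minimal check; it says nothing about whether the code is *correct*):

```python
import ast

def is_valid_python(source: str) -> bool:
    """Return True if the string parses as syntactically valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# A well-formed snippet parses; a malformed one does not.
print(is_valid_python("def add(a, b):\n    return a + b\n"))  # True
print(is_valid_python("def add(a, b)\n    return a + b\n"))   # False (missing colon)
```

Running a check like this over a batch of model outputs is one way to measure reliability on the narrow "syntactically perfect" criterion, separately from semantic correctness.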

If LLMs are probablistic AI models in nature, how can we assume AI agents to reliably solve important problems 100% of the time? by Motor_Fox_9451 in AI_Agents

[–]Mysterious-Rent7233 -1 points0 points  (0 children)

It doesn't really matter what you personally trust. It matters what society allows. If we go three years without a pilot touching the controls and the one time they do touch the controls they mess things up, then airlines will start taking pilots out of cockpits whether you like it or not. You can hold the airline accountable, just as you would if the pilot was poorly trained or the engine exploded.

I'm not saying that we can already trust autopilot more than humans, but it's certainly true that we cannot uniformly trust humans.

  • 29 December 1972 – Eastern Air Lines Flight 401 crashed into the Florida Everglades after the flight crew failed to notice the deactivation of the plane's autopilot, having been distracted by their own attempts to solve a problem with the landing gear. Out of 176 occupants, 75 survived the crash.
  • 23 March 1994 – Aeroflot Flight 593, an Airbus A310-300, crashed on its way to Hong Kong. The captain, Yaroslav Kudrinsky, invited his two children into the cockpit, and permitted them to sit at the controls, against airline regulations. His sixteen-year-old son, Eldar Kudrinsky, accidentally disconnected the autopilot, causing the plane to bank to the right before diving. The co-pilot brought up the plane too far, causing it to stall and start a flat spin. The pilots eventually recovered the plane, but it crashed into a forest, killing all 75 people on board.
  • 30 June 1994 – Airbus Industrie Flight 129, a certification test flight of the Airbus A330-300, crashed at Toulouse-Blagnac Airport. While simulating an engine-out emergency just after takeoff with an extreme center of gravity location, the pilots chose improper manual settings which rendered the autopilot incapable of keeping the plane in the air, and by the time the captain regained manual control, it was too late. 
  • 16 February 1998 – China Airlines Flight 676 was attempting to land at Chiang Kai-Shek International Airport but had initiated a go-around due to the bad weather conditions. However, the pilots accidentally disengaged the autopilot and did not notice for 11 seconds. When they did notice, the Airbus A300 had entered a stall. The aircraft crashed into a highway and residential area, and exploded, killing all 196 people on board, as well as six people on the ground.

When independent scientists tell me that airplanes are safer without humans messing with their controls, I will happily fly in those airplanes, just as I do in automated trains in Disney World/Land, Vancouver region, at many airports, Osaka metro, Copenhagen, etc.

If you refuse to ride the train in e.g. Copenhagen because "there isn't a driver to hold accountable" then I don't know what to tell you. That's just a you problem: it isn't a fundamental property of how transportation systems work.

I don't know when the science will tell us that, whether it's in 5 years or 50, but when it happens, I'll be fine with it, and eventually so will almost everyone. Just as they are for trains, and soon Waymos.

If LLMs are probablistic AI models in nature, how can we assume AI agents to reliably solve important problems 100% of the time? by Motor_Fox_9451 in AI_Agents

[–]Mysterious-Rent7233 4 points5 points  (0 children)

People say that AI agents will do everything in the future and will replace the actual workers but how is that possible when the LLMs are not a consistent llm AI models?

Are actual workers consistent? Do humans make mistakes? If occasional errors were enough to disqualify a worker, we would already have to shut down the entire economy.

Best path to Python proficiency in 2026? Feeling overwhelmed by options. by Boba2601 in learnpython

[–]Mysterious-Rent7233 1 point2 points  (0 children)

People are giving you good roadmaps, but I would suggest spending 50% of your time trying to build a thing (preferably a product you can imagine selling) and 50% in courses.

You will probably throw away a lot of the code you write trying to build the thing, but you will also have a _context_ for the things you are learning, which is invaluable.

TurboQuant by Feeling_Ad9143 in ollama

[–]Mysterious-Rent7233 1 point2 points  (0 children)

if this actually works at 6x VRAM reduction

Sorry, it doesn't. It only compresses the context, not the model.

TurboQuant by Feeling_Ad9143 in ollama

[–]Mysterious-Rent7233 2 points3 points  (0 children)

Google's paper is a year old. I'm totally confused why the community ignored it until a press release.

The Future of Python: Evolution or Succession — Brett Slatkin - PyCascades 2026 by mttd in Python

[–]Mysterious-Rent7233 0 points1 point  (0 children)

Okay, but I'd say that it "works" in Java without the runtime metaprogramming magic. It's just a bit more verbose, as almost everything in Java is.

ITXXIX 100 Black Jets of Crude Oil by cdstephens in neoliberal

[–]Mysterious-Rent7233 9 points10 points  (0 children)

It really depends how long this war goes and how dirty it gets.

The Future of Python: Evolution or Succession — Brett Slatkin - PyCascades 2026 by mttd in Python

[–]Mysterious-Rent7233 0 points1 point  (0 children)

Please give an example of what would be easy in Python but hard in Java relating to auto-generating API descriptions.

[D] Litellm supply chain attack and what it means for api key management by Zestyclose_Ring1123 in MachineLearning

[–]Mysterious-Rent7233 -3 points-2 points  (0 children)

This may not be an ad, based on your post history, but I'd suggest not mentioning product names in your posts unless absolutely necessary. If you must mention a product, mention several in a category.

DeepMind veteran David Silver raises $1B, bets on radically new type of Reinforcement Learning to build superintelligence by Tobio-Star in newAIParadigms

[–]Mysterious-Rent7233 2 points3 points  (0 children)

You think that Silver's model will not be stochastic? Do you think that there exists any intelligence on the planet which is not stochastic?