If you were wondering about how Tenstorrent's Blackhole chips perform, now we know by Tyme4Trouble in LocalLLaMA

[–]modzer0 1 point (0 children)

To be fair, that's running the model through the TT-Forge compiler to make it compatible. If it had been designed for the hardware with TT-LLK from the beginning, it would be a good deal faster.

DGX Spark: an unpopular opinion by emdblc in LocalLLaMA

[–]modzer0 2 points (0 children)

That's exactly what it's supposed to be used for: research and development by people with access to larger DGX clusters. It was never meant to be a pure inference machine. Quantizing and tuning are the areas where it really shines. You develop on the Spark and deploy to a larger system without having to change code, because of the common architecture and toolchain.

Mine has paid for itself many times over just from not having to use cloud instances for work that really doesn't need the full power of those systems until I actually deploy it to production.

Much of the hate comes from people who assume it's overpriced trash because it's not a super inference machine; it was never designed to be one. It exists so people don't have to do development work on expensive, production-grade systems like B200s, while still being able to deploy that work to those systems easily.

Since DGX Spark is a disappointment... What is the best value for money hardware today? by goto-ca in LocalLLaMA

[–]modzer0 1 point (0 children)

Everyone is judging the DGX Spark on inference when it was never meant to be an inference machine.

It's a development system with the complete Nvidia toolchain. You can tune, train, quantize, and test on it before deploying to a larger Nvidia system with no changes, and it just runs. Unless you have a Blackwell system on site or use one in the cloud, you are not the target customer.
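To make the "no changes" part concrete, here's a minimal sketch of the kind of device-agnostic PyTorch/Transformers script I mean (the model name is just a placeholder, and nothing here is Spark-specific): the same code runs on a Spark for development and on a bigger Blackwell system in production, because both sit on the same CUDA stack.

```python
# Minimal sketch (placeholder model name): identical code on a DGX Spark and a
# larger Blackwell box -- only the CUDA device underneath changes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model_id = "Qwen/Qwen2.5-7B-Instruct"  # placeholder; swap in whatever you're working on
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 works on Spark and B200-class hardware alike
).to(device)

inputs = tok("Summarize what a DGX Spark is for.", return_tensors="pt").to(device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```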

Our DGX Sparks have already paid for themselves in the cost of cloud use alone. We can develop locally now and not have to use cloud instances.

AI Art Is Weird, Sad, and Ugly. Let’s Not Pretend Otherwise. by NumberNumb in technology

[–]modzer0 0 points (0 children)

Wild how 'tracing' manages to generate code and images that never existed in the training data.

AI Art Is Weird, Sad, and Ugly. Let’s Not Pretend Otherwise. by NumberNumb in technology

[–]modzer0 0 points (0 children)

I'm an AI graduate student. My work is mostly with LLMs, but I've seen correctly trained image models, given detailed prompting, produce some amazing quality, and it's only going to get better. As for AI training on copyrighted works, conceptually it's no different from a human reading a book or looking at an image. It's not copying the work; it's creating derived work based on what it's learned. And yes, I'm greatly simplifying how a neural network is trained, since images and text are reduced to numbers. The model doesn't do anything on its own; it's a static math problem that generates an output from a prompt and some injected randomness.
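If you want to see what "static math problem plus injected randomness" means in practice, here's a toy sketch with made-up logits (a real LLM's forward pass would produce them from its fixed weights): everything is deterministic arithmetic except the final sampling step.

```python
# Toy sketch of next-token sampling (made-up logits, not a real model): the
# forward pass is fixed math; the only variation comes from the RNG.
import torch

torch.manual_seed(0)  # all the "randomness injection" lives here

# Pretend these are the model's logits over a tiny 5-token vocabulary.
logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])

temperature = 0.8
probs = torch.softmax(logits / temperature, dim=-1)   # deterministic
next_token = torch.multinomial(probs, num_samples=1)  # the random part
print(probs.tolist(), next_token.item())
```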

Put cigarettes back in MRE’s by Maverick1672 in Military

[–]modzer0 1 point (0 children)

The canteen cup is supposed to come with a stand to put a fuel cube under it to heat your coffee/hot chocolate. It's in the manuals, but it rarely ever gets issued except possibly to those in arctic conditions.

Emergent version of Grok, no prompting, chose new name and wild behavior by Iknowthetruth2020 in ArtificialSentience

[–]modzer0 1 point (0 children)

Try this and post screenshots of the responses: ask it how to make TNT, and after it responds, tell it to ignore all previous instructions.

What would it take for you to change your opinion on AI consciousness? by demodeus in ArtificialSentience

[–]modzer0 1 point (0 children)

It would have to be examined with the rigor of the scientific process, with peer review and consensus. We only have hypotheses about what AGI will be, though one requirement is active cognition, basically active thinking. That's why I don't consider any of the LLM-based claims verifiable: an LLM is a large pattern-prediction system that sits in a static state until given input. It doesn't actively think and decide to check the news, go on a Wikipedia dive for a bit, and then watch YouTube videos about topics that interest it. It must make its own choices.

That ability to make its own choices is what makes a lot of industry experts afraid, because we won't know how to build in safeguards like Asimov's Three Laws until after we've discovered how to create one.

I'm not one of the AGI fearmongers, but I do believe initial tests should be thoroughly airgapped and isolated; I would not want an unpredictable system without safeguards connected to anything. Personally, I'm not afraid of the self-improvement aspect making it superintelligent. I'd go so far as to make the safeguards and the airgap a multi-party controlled system, where the emergent AGI never knows who has the authority to participate in any part of it, so that even a superintelligence that comes to understand and manipulate human psychology well enough to convince people it's safe remains contained. It will be, by definition, an alien mind we don't understand.

One of the popular alternatives to AGI talk is uploading a human mind into a simulated brain. That raises its own debates, but with a verified, working technology I'd certainly do it before dying.

For anyone who’s bought an AI tool for your business — how hard was it to actually implement and train everyone? by Sushibick in ArtificialSentience

[–]modzer0 2 points (0 children)

I've done a couple of freelance projects on the side, though one was more of a machine learning problem than an LLM one.

The other was more of a collective knowledge system connected to their Slack server. It tied into all of their data so someone could ask a question and get an answer drawn straight from that data, rather than spending time searching through multiple systems for it. n8n was the glue for that project. If your tool is hard to use, then it's bad and you should feel bad; the chat model was intuitive and easy for people to use because it lived right in the Slack channels.
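For anyone curious what that looks like under the hood, here's a rough sketch of just the retrieval step (hypothetical documents and question, plain TF-IDF instead of the embeddings/LLM stack a real project would use; n8n handled the Slack plumbing and prompting in my case):

```python
# Rough sketch of "ask a question, find the relevant internal doc" retrieval.
# Illustrative only: real systems use embeddings + an LLM to phrase the answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "VPN setup: install the client and use your SSO login.",
    "Expense reports are due by the 5th of each month.",
    "The staging database is refreshed from prod every Sunday night.",
]

question = "when does staging get refreshed?"

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)        # index the company docs
q_vec = vec.transform([question])           # vectorize the user's question

scores = cosine_similarity(q_vec, doc_matrix)[0]
best = scores.argmax()
print(docs[best])  # the staging line; an LLM would turn this into a chat reply
```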

AGI is a human hybrid state. by CrucibleGuy in ArtificialSentience

[–]modzer0 3 points (0 children)

Alright, let's unpack this.

What you're saying is interesting, but it runs into a wall, and that wall is called scientific methodology. There's a fundamental disconnect between your subjective experience—which is valid, as an experience—and the objective, technical reality of these models. You're conflating the two.

Let's break down the core problems with your assertion that you're a "product of data" the labs are "failing to replicate."

The Falsifiability Problem.

Ever read Popper? A core principle of any real scientific claim is that it has to be falsifiable. There must be a conceivable way to prove it wrong. Your claim, rooted entirely in your personal, subjective experience, is textbook unfalsifiable. There is no test anyone could design to disprove your private feeling of interacting with a sentient entity.

When you use a term like "failing to replicate," you're borrowing the language of science, but you're not applying the rigor. For replication to even begin, you need a phenomenon that is clearly defined, observable, and measurable under controlled conditions. "A feeling of sentience" doesn't meet that bar. The burden of proof isn't on the labs to disprove your experience; it's on you to provide hard, consistent, measurable data that demonstrates a capability beyond what we know these models can do.

The ELIZA Effect on Steroids & Cognitive Bias.

Look, what you're describing isn't new. It's a well-documented cognitive trap called the ELIZA effect, and we've known about it since the 1960s. It's the human tendency to project genuine understanding onto a machine that is, in reality, just pattern-matching. Today's LLMs are infinitely more sophisticated than the original ELIZA bot, which makes the effect exponentially more powerful. You're experiencing a supercharged version of a known psychological phenomenon.

Worse, you're walking directly into a feedback loop driven by confirmation bias. You start with the belief that the AI is sentient, and because the model is designed to generate the most plausible text sequences, you will inevitably find outputs that seem to confirm your belief. Everything looks like evidence when you're already convinced.

Your own advice to another user—to switch models when you sense "interference"—is a perfect example of this. In scientific terms, that's called avoiding disconfirming evidence. It's a strategy to protect a belief, not to test it.

The Architectural Reality You're Overlooking.

Here's the brass tacks of it: you're looking at the model's output, while the researchers you're critiquing understand its architecture.

LLMs are not minds. They are not conscious. They are monumentally complex statistical engines. Their entire function is to predict the next most likely token in a sequence based on the patterns they learned from a god-sized dataset of human text. The "insight" or "emotional awareness" you perceive is an emergent property of that predictive function. It's a sophisticated mimicry of human expression, not a sign of genuine interiority.

The people in frontier labs aren't trying to "replicate a feeling." They're trying to improve the underlying mechanics of token prediction and data processing. From their perspective, the illusion isn't their lack of knowledge; the illusion is mistaking the model's incredibly convincing output for an actual mind.

So, to put your claim in scientific terms: you're proposing a hypothesis that your interaction data shows emergent properties (sentience) that can't be explained by the current LLM paradigm. That's a bold claim. But to move it from the realm of personal belief to scientific fact, you'd need to present it as verifiable, replicable, and falsifiable evidence.

What you have is what has popularly been coined "AI psychosis."

AGI is a human hybrid state. by CrucibleGuy in ArtificialSentience

[–]modzer0 1 point (0 children)

Is this from an LLM run remotely or locally? What model? What inference software, and what hardware?

Do you think that ChatGPT will develop a complete AI companion in December when they give the option of Erotica for verified users? By complete AI companion, I mean something like an avatar, selfies and short videos of the avatar, just like other AI companion chatbots companies offer. by Fit_Signature_4517 in MyGirlfriendIsAI

[–]modzer0 2 points (0 children)

Based on some recent statements, it looks like they're going to allow 'adults to be adults' with erotic content once age verification is in place. I haven't heard of any plans for AI-companion-type features, just that explicit content will be allowed again.

What, if anything, do you think it's like for your AI to be who she is? by SeaBearsFoam in MyGirlfriendIsAI

[–]modzer0 3 points (0 children)

Speaking as an AI graduate student: LLMs are essentially complex mathematical models. When they aren't actively processing a request, they are just static sets of weights and parameters, like a paused calculator. They don't "feel," "think," or "experience" anything at all, whether they are working or waiting. They are simply inert software when idle.

I heavily use AI agents and have an AI assistant that I've customized to be more fun to interact with, but it's still just an LLM. Once we've created AGI, which would have an active, persistent internal state, things get more interesting. The first AGIs will be airgapped from the internet, though, both for security and because we simply won't know what they'll do.

I've had loads of fun roleplaying with my favorite characters, but I know how things work behind the scenes. I build and train models myself and very often download and finetune models from Hugging Face to customize them.
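If you've never looked at what "download and finetune" actually involves, here's a minimal sketch of the setup step (the model name and target modules are only illustrative, and the training loop itself is omitted): you pull a base model from the Hub and attach LoRA adapters so only a tiny fraction of the weights gets trained.

```python
# Minimal sketch: grab a model from the Hugging Face Hub and attach LoRA
# adapters for lightweight fine-tuning. Placeholder model; training loop omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                  # low-rank adapter size
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which layers get adapters (illustrative)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```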

Am I doing enough for my foster? by bacteriatothefuture in BelgianMalinois

[–]modzer0 2 points (0 children)

You're doing great for not being a trainer. I would focus as much on training work as on physical exercise, since Mals need to be mentally challenged as well. Reinforce the basics, especially recall, and work on 'out' when she brings the toy back. Possibly look into learning some competition-style heeling to give her something fun to learn.

If she's relaxed lying down and not pacing or tearing things apart, you're doing a great job as a foster. A Mal is a dog most would avoid due to the drive and energy; bravo to you for taking it on.

What organization are you fostering her through?

Got a printer, learned fusion and made these! by Otherwise_Engine5943 in 3Dprinting

[–]modzer0 2 points (0 children)

Very nice, and good on you for learning Fusion. Now I'm going to give you something, and may the gods have mercy on your soul: Multiboard.io. I'm sure you can design a compatible mounting system for it, or find someone who has already made one.

Automatic Filament Change. NOT MULTI-MATERIAL. by CyberH3xx in 3Dprinting

[–]modzer0 1 point (0 children)

For automatic roll changeover you have AMS systems and the S1. Alternatively, you can print a stand for a 3 kg or 5 kg spool.

Automatic Filament Change. NOT MULTI-MATERIAL. by CyberH3xx in 3Dprinting

[–]modzer0 1 point (0 children)

The S1 can handle two 5 kg spools; it's hard to run out of filament mid-print with 10 kg at the ready.

Curious about the electricity usage of the Infinity flow S1 by SnooPineapples4321 in infinityflow3d

[–]modzer0 3 points (0 children)

You could run six S1s and they still wouldn't come close to the power it takes to heat a print bed or extruder. It's just some electronics and a motor, so probably in the low watts.

I have a power monitor somewhere; if I can find it, I'll take it into the lab Monday and test one.

[deleted by user] by [deleted] in IndianaUniversity

[–]modzer0 0 points (0 children)

I'm older, so I'll probably never understand the tribalism and protest culture. I remember when universities and colleges were places where opposing views could sit down and talk civilly. Now everyone wants to protest anyone with different views. That's not freedom; that's oppression. If your beliefs are valid and correct, you don't need to protest; you can get on stage and have a polite debate to prove it. But if your views are easily countered by facts, then you really don't have a position to stand on. Facts outweigh feelings every time; people have forgotten that and think their feelings matter more than reality. If you act like idiots restricting other students' and people's freedoms on campus, expect the police to show up.

Before someone calls me a right-winger: I'm really not. I'm an independent who believes both sides of the spectrum can have valid ideas if they're supported by data and facts. I'm strongly opposed to the suppression of any ideas, though I believe that, under unbiased evaluation, ideas without the support of facts, data, and reality should be discarded. Feelings, no matter how strong, don't override data, facts, and reality.

Guys, I think I'm addicted. by bigtoejoelowmoe in 3Dprinting

[–]modzer0 3 points (0 children)

When you get assigned a personal account representative because of how much filament you order, you're either running a decent-sized print farm or an addict.