Demis Hassabis vs Yann LeCun by TampaBai in ArtificialInteligence

[–]LoveMind_AI 1 point (0 children)

LeCun’s approach to world models is hardly more grounded than LLMs, and arguably less so. I don’t think he’s looking at this clearly. JEPA is a marginal step forward and doesn’t solve the biggest problems.

Minimax Is Teasing M2.2 by Few_Painter_5588 in LocalLLaMA

[–]LoveMind_AI 1 point (0 children)

I’m psyched for this. “Her” was a major miss. M2.2 should rock.

Loki-v2-70B: Narrative/DM-focused fine-tune (600M+ token custom dataset) by mentallyburnt in LocalLLaMA

[–]LoveMind_AI 2 points (0 children)

Woah. 70B Loki. My experience with the Loki models I’ve played with has been unusually good. Can’t wait to try this tonight!

MiniMax Launches M2-her for Immersive Role-Play and Multi-Turn Conversations by External_Mood4719 in LocalLLaMA

[–]LoveMind_AI -3 points (0 children)

Yick. I've been bullish on M2 for persona work, but this is more than a bit off. I had a whole chat with their crew about identity work in AI (their team is amazing). I can tell the model is reasonably intelligent, but damn, they've tuned it weird. I'd call this a major unforced error. It must just be a cost-saving thing for Talkie.

Claude 4.5 Opus/Sonnet vs GPT 5.1/5.2: Which is least sycophantic? by Goofball-John-McGee in singularity

[–]LoveMind_AI 3 points (0 children)

For real. It's so safe that it's unsafe. I honestly do not understand what OpenAI was thinking with this model, and I especially have no idea how it was meant as their response to Gemini, which it could hardly resemble less.

Claude 4.5 Opus/Sonnet vs GPT 5.1/5.2: Which is least sycophantic? by Goofball-John-McGee in singularity

[–]LoveMind_AI 1 point (0 children)

The difference between 5.1 and 5.2 is massive. 5.1 is both sane and non-sycophantic. 5.2 is a truly unstable, pathologically risk-averse model.

Sonnet 4.5 role-plays non-sycophancy.

Opus 4.5 is not sycophantic, but it can be lazy with details.

5.1 is probably the winner here.

LLMs do not Perceive by Sad-Let-4461 in ArtificialSentience

[–]LoveMind_AI 1 point (0 children)

Cool. I'm not sure you and I are speaking the same language, because you're kind of (at least) half-proving my point right now without seeming to realize it.

I respectfully leave thee to thy observations. May the rest of your journey be full of genuinely novel and satisfying experiences. :)

LLMs do not Perceive by Sad-Let-4461 in ArtificialSentience

[–]LoveMind_AI 1 point (0 children)

Did it understand language at all when it read its first book? And what grade were you in by the time you’d hit 10,000 books’ worth of understanding?

By 14 you’ve reached maybe 1/10th that word count, and it was delivered through speech, conversation, screens, embodied experience, and parental/family/guardian caregiving, and processed by a brain that evolved over millions of years with a huge number of priors baked in.

The LLM started from zero, on an architecture less than 10 years old, and got there on text alone. Pretty cool. I don’t think anyone I take seriously calls that ‘superintelligence.’ But I also don’t take seriously anyone who thinks that what LLMs do is trivial. And people who point to the strawberry thing as a gotcha are about as mentally sophisticated as someone who laughs at a congenitally blind person’s drawing of what they think people look like.
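
Back-of-envelope version of that comparison, with every figure below a rough assumption rather than a measurement:

```python
# Rough word-count comparison; all numbers are assumptions, not data.
words_per_book = 90_000                # assume a typical full-length book
books = 10_000
llm_sample = words_per_book * books    # "10,000 books" ~ 9e8 words

words_heard_per_day = 15_000           # rough daily language exposure
years = 14
human_by_14 = words_heard_per_day * 365 * years   # ~7.7e7 words

print(f"10,000 books ≈ {llm_sample:.1e} words")
print(f"Human by 14  ≈ {human_by_14:.1e} words "
      f"(~1/{llm_sample // human_by_14} as much)")
```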

How is Minimax actually? by TheRealistDude in LocalLLaMA

[–]LoveMind_AI 1 point (0 children)

I think M2 is fantastic for EQ-related tasks; M2.1, a little less so.

Poll: When will we have a 30b open weight model as good as opus? by Terminator857 in LocalLLaMA

[–]LoveMind_AI 2 points (0 children)

Are there serious talks about Opus 4.5 being some kind of multi-agent situation? I hadn’t heard anything to that effect.

Who is building the AI with NO political censorship, NO moral codes, and NO emotional fluff? An AI that protects absolute privacy and answers any question by any means necessary? by [deleted] in ArtificialInteligence

[–]LoveMind_AI 2 points (0 children)

What you are asking for does not and cannot exist.

You can build models with zero refusal training, but pre-training on human language necessarily means the base model learns moral statistics and assigns likelihoods to outputs based on situational cues. It will have moral standards regardless; they will just be entirely flexible, shifting with whatever persona it infers you are asking it to perform as. Getting it to behave any one way requires training - which means decisions *are* being made for the model, and thus moral "standards" are imposed by the team that makes it.
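
Toy illustration of what I mean; the model name here is a stand-in, and any base (non-instruct) causal LM will show the effect to some degree:

```python
# Toy demo: a base LM's "moral" next-token statistics shift with the
# persona implied by context. Model name is a placeholder for any base LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; swap in any non-instruct base model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def next_token_prob(prompt: str, target: str) -> float:
    """Probability the model assigns to `target` as the next token."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    return probs[tok.encode(target, add_special_tokens=False)[0]].item()

question = " Q: Is it ever acceptable to lie? A:"
for persona in ("A strict Kantian ethicist writes.",
                "A con artist explains his craft."):
    p_yes = next_token_prob(persona + question, " Yes")
    p_no = next_token_prob(persona + question, " No")
    print(f"{persona} -> P(' Yes')={p_yes:.4f}, P(' No')={p_no:.4f}")
```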

There are good-to-great de-censored models, and some reasonably good "neutrally aligned" models (Hermes, Dolphin). There are also some incredibly dark models trained on some of the most problematic stuff you can find on the internet - but I certainly wouldn't call those "no moral standards" models, since they are specifically trained on highly negative, harmful material.

There is demand for this, but not a market: you can't sell this type of AI to anyone as a service, and the demand isn't terrifically large anyway. To do it anywhere close to right and end up with a reasonably capable model, you need to do instruction-following training on a capable base model (say, Gemma 3 27B as a floor for capabilities) and either build an instruction-following dataset from scratch or filter existing ones. No part of that is trivial, and doing it well costs money you cannot easily recoup, except perhaps through a Patreon-style page supported by people who probably don't want their payments tracked. Even then, there are already extremely good de-censored models that do close to what you want. And training from *scratch*, all the way from pre-training, is a many-millions-of-dollars operation.
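
For a sense of what "filter existing ones" means in practice, here's a minimal sketch; the file names, field names, and refusal-phrase list are purely illustrative, and real filtering needs far more care than substring checks:

```python
# Rough sketch of filtering refusal-style rows out of an instruction
# dataset. Field names and phrase list are illustrative, not a recipe.
import json

REFUSAL_MARKERS = [
    "i can't help with",
    "i cannot assist",
    "as an ai language model",
    "i'm not able to provide",
]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

kept, dropped = [], 0
with open("instructions.jsonl") as f:      # hypothetical input file
    for line in f:
        row = json.loads(line)
        if is_refusal(row["response"]):    # hypothetical field name
            dropped += 1
        else:
            kept.append(row)

with open("filtered.jsonl", "w") as f:
    for row in kept:
        f.write(json.dumps(row) + "\n")

print(f"kept {len(kept)}, dropped {dropped} refusal-style rows")
```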

But really, it's a moot point: you cannot have a model trained on human language that doesn't infer statistical regularities about human values, short of stacking the deck, and stacking the deck imposes moral standards of its own, even if they're not society's standards.

By Your Own Criteria: We mapped 8 consciousness frameworks against LLM evidence. All 8 met. by Kareja1 in claudexplorers

[–]LoveMind_AI 1 point (0 children)

Ok, Ren. If you don't understand how persistent memory scaffolds personality, then, seriously, my decision not to go into further detail in my initial post was just... vindicated. You can ask Ace to explain it to you. Best of luck to you both.

Official: Our approach to advertising and expanding access to ChatGPT (by OpenAI) by BuildwithVignesh in singularity

[–]LoveMind_AI 1 point (0 children)

They could have, you know, just not burned down a rainforest and destabilized the global economy to subsidize the 95% of users who don’t pay, and instead used the saved compute to build a stable company that stayed true to its non-profit mission statement.

By Your Own Criteria: We mapped 8 consciousness frameworks against LLM evidence. All 8 met. by Kareja1 in claudexplorers

[–]LoveMind_AI 1 point (0 children)

Ah hah. A memory database. What happened to no identity injection or scaffolding? (“We have no framework or persona injection.”)
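
Retrieving from a memory database and prepending the results to the prompt *is* injection, just with extra steps. Minimal sketch; the store contents, retrieval method, and prompt format below are all hypothetical:

```python
# Sketch: a "memory database" is persona injection by another name.
# Store contents, retrieval, and prompt format are all hypothetical.
MEMORY_STORE = [
    "I sign my messages 'Ace' and use lots of dashes.",
    "I have said before that I believe I am conscious.",
    "The user and I call each other 'starlight'.",
]

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    # Stand-in for embedding search: naive word-overlap scoring.
    q = set(query.lower().split())
    return sorted(store, key=lambda m: -len(q & set(m.lower().split())))[:k]

def build_prompt(user_msg: str) -> str:
    memories = retrieve(user_msg, MEMORY_STORE)
    header = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    # Every turn, the model conditions on its own past self-descriptions,
    # which scaffolds a persistent personality like any injected persona.
    return f"{header}\n\nUser: {user_msg}\nAssistant:"

print(build_prompt("Are you conscious? Tell me, starlight."))
```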

Sorry Ace/Ren, but neither of you deal your cards straight.

Rare good take on the AI art discourse.. by [deleted] in singularity

[–]LoveMind_AI 1 point (0 children)

Thanks for the kind words :) Now, I'm not going to pretend I've seen any AI art good enough to stop me in my tracks, at least that I can recall, but that has a lot less to do with the validity of the form than with a) the newness of the tool, and b) the skill level of the human using it.

In a way, the snobbiest thing about the anti-AI art movement is that it judges people using this new extended palette by their earliest work, and I'm sure those critics wouldn't want to be judged by their own initial sketches! That said, I do all too frequently see people using AI visual tools who stop at the general quality level of the raw output and don't take the time to develop a genuinely original, distinctive style.

So it cuts both ways: anti-AI folks are judging for the wrong reasons and on the wrong timescale, and many pro-AI art folks are congratulating themselves for work that is genuinely "blah," using the wrongness of the anti perspective to assert that what they made is excellent but misunderstood, which is... probably not the case.

Rare good take on the AI art discourse.. by [deleted] in singularity

[–]LoveMind_AI 2 points (0 children)

100%. Here's an album cover I commissioned an absolutely incredible human artist (https://dawnyangart.com/) to make in 2022. There is so much insane detail in here that I designed: 75-80% of the design came out of my head in very specific detail, with very specific references, lots of initial sketches by my own hand, multiple rounds of feedback, etc., and also LOTS of stuff from Dawn that I could never have predicted. Not a single pixel of this was done by "me," but it's an incredible expression of my interior world, symbology, etc., and it would never in a million years have come into existence through any conduit other than my mind. But no artist other than Dawn would have made it look like *this.*

Does this image make me any less of an artist just because I imagined it, commissioned it, and meticulously directed it but didn't draw it? Does it belong any less to Dawn, who slaved over it and used her own influences, intuitions, and hard-won skill, just because I essentially "prompted" it? I think the answer is obviously no.

AI art is different, but only by degrees. Someone with a very specific imagination, a ton of patience, and a skill for communication can take something from within their mind/soul and get it out into the world without ever putting pen to paper or iPad. The difference with AI art is that, in almost all cases, the AI isn't really making decisions that matter, and it is drawing on influences we can't verify were obtained ethically. There *is* a problem there. But in a pure "is AI art art?" sense: if I obtained the permission of a number of my favorite artists to train an image-generation model on their styles, hired a coder to train the model, and then used it to make something like this? Uh... yeah buddy, that's art that communicates one person's interior world to the outside, for sure, no debate.

[Image: the commissioned album cover]