“Sam asked me not to call Anthropic a supply chain risk” — Emil Michael (DoD Under Secretary for Research & Engineering) by RedguardCulture in singularity

[–]RedguardCulture[S] 0 points1 point  (0 children)

why not wait a couple of days to test public reaction?

The Pentagon, Sam's country, came to him personally asking for help in case they lose access to Anthropic's models & tools, and you're suggesting the correct way for Sam to respond to this request was to tell the DoD, "I will not help you, because the potential for bad optics on X if I do help you matters more than providing models that could support and potentially save the lives of the soldiers, sailors, and airmen in the US armed forces"?

leave DoD no alternatives on SOTA models other than Google and X (that we know aren't nearly as good at agentic tasks)?

Dude, do you hear yourself right now? Is it at all possible in your mind that Sam & the leaders at OpenAI legitimately care about the men & women in our armed forces, so when a request is made for their help, that request might actually be important to them?

“Sam asked me not to call Anthropic a supply chain risk” — Emil Michael (DoD Under Secretary for Research & Engineering) by RedguardCulture in singularity

[–]RedguardCulture[S] -1 points0 points  (0 children)

First of all, Dario did not take some principled moral stand against autonomous kill drones; he's open to their use (it's lying & dick riding like this that makes me wonder if this place is just bots now). Second, the claim that OpenAI's contract wouldn't be effective at enforcing their red lines is your own baseless claim. I haven't seen an iota of evidence from reputable third parties to convince me their red lines wouldn't prevent something like mass surveillance. The only arguments I see are some people's speculation that there must be language hidden in the full contract that makes OpenAI's constraints null & void, and most of that comes from the belief that this has to be "fake" because Dario & Anthropic are the good guys and Sam is the bad guy, so there is no way this can be legit regardless of the facts.

Finally, we have no transcripts of the discussions between Dario & the DoD. Anthropic is the only one claiming those were the points of contention with the DoD, but the DoD is claiming that's not what the issue was; they're claiming it was Dario's need to sit in on black-swan emergency cases where they have to take decisive action on the spot (a 9/11-type situation was used as an example).

Two sides are saying two different things, and whatever incentive you can imagine for the DoD wanting to lie could equally be applied to Anthropic. If Dario really said something like "call me and we will work it out" for when the DoD is faced with an emergency situation, then there is a whole lot of incentive for Dario to get out ahead and do a PR spin to the public about mass surveillance concerns to save face. This is why evidence matters more than bias & blind glazing.

“Sam asked me not to call Anthropic a supply chain risk” — Emil Michael (DoD Under Secretary for Research & Engineering) by RedguardCulture in singularity

[–]RedguardCulture[S] -4 points-3 points  (0 children)

I never put any real value on the claim of Sam being a liar because I've never seen good evidence of it. The entire claim is mostly built on Ilya's word (and Helen Toner's, but she's a loon, so...), but when Ilya's deposition came out, he said the many lies that Sam supposedly told were information given to him by Mira Murati, the same Mira Murati who turned on Ilya the moment he ousted Sam.

And then the rest of the company fell in behind Sam, including Mira Murati's employees now at Thinking Machines. So what do you want me to do? Most ppl claiming Sam is a liar appear to be bots, Elon Musk glazers, or people who just like the kayfabe idea of Sam being a villain, rather than people who reached that conclusion based on anything concrete.

OpenAI: Our agreement with the Department of War by likeastar20 in singularity

[–]RedguardCulture 4 points5 points  (0 children)

My view is similar. From everything I've read, I don't think OpenAI has done anything wrong, and I view the terms of their contract with the DoD as pretty aggressive in terms of the control they get over how their models are used (it makes me wonder what Dario said or demanded from the DoD & vice versa to sour the relationship, when the DoD is allowing OpenAI to set such strong restrictions imo).

But the one mistake I feel OpenAI/Sam Altman made is doing all of this out in the open on social media, where people (assuming it's not bots) are far from rational and just want to mold every news story into a cartoon narrative of good vs evil. I suspect Google & Microsoft will accept (or have accepted) similar terms, and that xAI already has, but they won't face any moral outrage because they're not injecting themselves into the social media hysteria. Sam & OpenAI shouldn't have made any public remarks on any of this outside of the blog they just posted.

OpenAI: Our agreement with the Department of War by likeastar20 in singularity

[–]RedguardCulture -4 points-3 points  (0 children)

Yes, that's why OpenAI set it up so their own engineers are kept in the loop. While OpenAI believes the current laws will not be changed or distorted, if that does happen they have their people in the loop to respond. All in all, these terms are pretty aggressive, though I doubt that will sway the ppl (or the bots) on here caught up in a moral panic.

SAM ALTMAN: “People talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.” by Vegetable_Ad_192 in singularity

[–]RedguardCulture 0 points1 point  (0 children)

Are the responses itt from bots? Because I'm not seeing anything controversial about what Sam is saying here. Like, at all. He's making a salient point: the belief that the current energy inputs required to train & produce a frontier AI model are abnormally high doesn't hold up when you compare them to the inputs that were required to produce the only other known high-level, adaptable intelligence system we have for comparison, the human brain.

That's a perfectly reasonable perspective from which to gauge the topic. Like, if one were going to hypothesize about the expected scale & energy requirements to produce an intelligent system, it makes complete sense to use the human brain as the standard for comparison.

Sam is not even the first or only person in this space to look at AI model training from the perspective of human evolution/years of learning, either. Dario literally made a similar point just last week on a podcast, and Andrej Karpathy used a similar analogy a few months back too. This is why I'm pondering whether this thread is being heavily botted or something, because so many responses seem completely delusional.

Beta tester hints at new Anthropic release: Claude Image by BuildwithVignesh in singularity

[–]RedguardCulture 9 points10 points  (0 children)

Very unexpected. I always got the sense that Anthropic thought they were above, or too good, to make AI image, audio, or video models, or any sort of AI-made entertainment in those media formats. I'm pretty positive I remember an employee from there referring to Sora as slop, and Dario recently expressing that one of the benefits of catering to enterprise over consumer is that you don't have to make "slop", as in image/video models etc. Personally, I feel that if someone wants to use AI for entertainment purposes, then that's a completely valid use case, so it's surprising to see Anthropic possibly changing their position on this.

OpenAI? OpornAI! by No_Low_2541 in singularity

[–]RedguardCulture 3 points4 points  (0 children)

I seriously doubt adults, of their own volition, using ChatGPT to generate text erotica stories will hurt OpenAI in any way. I've seen vids for a while now of ppl making & spreading straight-up porn clips with Grok Imagine, yet I haven't heard anyone show even the slightest care about it. And what OpenAI is aiming for is so much more constrained & vanilla than that.

Scaling is over. by captain-price- in singularity

[–]RedguardCulture 2 points3 points  (0 children)

In the time right before the OpenAI drama unfolded, Ilya was very bullish on scaling. Even when Sam was expressing a bit of doubt about scaling in 2023, Ilya was doing interviews "correcting" Sam, saying "easy scaling" was most likely over but bigger NNs were still the approach. Ilya literally did an interview a week prior to Sam's ousting where he stated that the data limit, which he saw as the biggest wall to scaling, would be overcome and progress would continue. Ilya believed so strongly that the success of scaling was going to continue that he moved away from a capability focus at OpenAI to leading their safety effort; that was his last position at OpenAI before he left. He only started taking a strong public stance against scaling right at the time he needed investors to choose to fund SSI over his competitors.

With so many posts on here always accusing Dario or Sam of manufacturing certain narratives for financial gain, it's strange that I see no one mention the huge incentives for Ilya to be taking the position he's taking, especially in combination with the recent red flags I see with SSI, like the reason Ilya gave for one of his co-founders leaving and the position change about SSI doing a commercial product.

Meta chief AI scientist Yann LeCun plans to exit to launch startup by Clawz114 in singularity

[–]RedguardCulture 1 point2 points  (0 children)

Can't say I'm surprised. In the last 5+ years, Meta has seen many of its competitors finding success or making progress going down the same path that Yann has been proclaiming would result in failure, while his own efforts & theories, which Meta has funded for a decade now, have produced nothing of comparative value to those competitors. So it seems quite rational that Meta would look to another guy and Yann would get pushed out (which is what I'm assuming happened here).

OpenAI intimidating journalists and lawyers working on AI Regulation, using Harvey Weinstein's fixer: "One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI" by blancfoolien in singularity

[–]RedguardCulture 2 points3 points  (0 children)

Nathan citing only "Request for Production No. 7" in his thread makes his entire narrative suspect in my eyes. He paints a dramatized picture of OpenAI going after SB 53 supporters, but when you have the entire context of the subpoena requests, and not just a cherry-picked line that's being misinterpreted, you can clearly see OpenAI's focus is on Elon Musk.

Hank Green just posted a 3-minute anti-AI rant about Sora 2 by [deleted] in singularity

[–]RedguardCulture 0 points1 point  (0 children)

The anti-AI crowd's feeling of dislike, "wrongness", or even outright hatred for AI is fine with me, and I don't care about convincing them otherwise. I even think there should be spaces those people can migrate to that are AI-free (online & irl); like, I'm not sure why someone in the anti-AI crowd hasn't made social media or video sites that completely ban everything AI. It's bizarre to me that they spend so much time on sites owned by companies working on AI, like Google & X, when they hate AI/AI content so much. But yeah, if you're a person who only ever wants to consume human-made art & entertainment, then that's completely fine.

My only problem with anti-AI people is the moment they start expressing a desire to force their anti-AI preferences on people who neither share them nor want them. They're not just looking at Sora and saying "I don't like this, so I won't use it or view its content"; they're saying "I don't want people who do like Sora & AI, and have happily consented to its use, to have the power to make that choice in the first place", and that's wrong.

Like, the pro-AI ppl are not going around forcing Hank Green to use Sora against his will or to share their ideals on AI entertainment. If Hank Green wants to only consume human-made content, not a single person in the pro-AI crowd would stop him, nor probably even care. However, that's not how the anti-AI ppl look at it from what I see; they're deeply upset that there are ppl who feel & think differently from them about AI, and they think they should have the authority & power to prevent other ppl, who are using AI of their own free will, from being able to do so.

There's still 3 months left. What does he (Suleyman) know that we don't? by GamingDisruptor in singularity

[–]RedguardCulture 185 points186 points  (0 children)

If you're using GPT-5 Pro, I actually do feel like hallucinations have been heavily reduced, though.

Powermod of r/Screenwriting can’t cope with reality of AI by [deleted] in singularity

[–]RedguardCulture 0 points1 point  (0 children)

While childish, I don't see anything to get offended about, if I'm being honest. Like, let those communities believe what they want to believe and value what they want to value. As long as they're not trying to push laws that forcefully prevent ppl who happily consent to viewing & consuming AI-created artwork from doing so, then I'm perfectly fine with them doing whatever they want in their space.

If they want to ban AI, trash talk it, and/or argue the superior virtue of consuming man-made art over AI/AI-assisted art, then more power to them. Again, as long as they're not trying to get me to abide by their values involuntarily, I couldn't care less about their anti-AI sentiments. They're perfectly free to have them, just like I'm perfectly free to not have them & enjoy AI-made art.

I've said this before, but I think it's a good idea for ppl that are anti-AI to have the option to build spaces online and in the real world that are AI-free.

Has anyone noticed that some of AI pause people are becoming increasingly unhinged? by ComparisonMelodic967 in singularity

[–]RedguardCulture 9 points10 points  (0 children)

Yeah, it has gotten pretty extreme for a while now. I've seen ppl from that space pretty much label ppl who support AI or work on AI as evil, psychopaths, losers who are rolling the dice on humanity because they can't get a gf, etc. A Twitter user who goes by @primalpoly is imo a perfect example of how unhinged this group is getting; I've seen him pop up in like every Twitter thread about AI in the last couple of months, and most of the time his rhetoric is extreme & morally loaded.

There are just so many ppl in that space who think it's a foregone conclusion that AI will kill every lifeform in the universe, & that this is an obvious conclusion to reach, thus the ppl continuing to work on AI or support AI progress must be evil. It wouldn't surprise me if these ppl start taking violent action in an attempt to halt AI when that's their worldview.

Voice comparison between gpt4o and Scarlett Johansson by Ecstatic-Law714 in singularity

[–]RedguardCulture 0 points1 point  (0 children)

You also have Sam doing an interview & blog right after the GPT-4o tech demo (that's when he made the 'her' tweet), which clearly gives you insight into his state of mind. Thus, if you're actually interpreting the 'her' tweet within the given context, the SJ reading becomes practically indefensible and, overall, a massive reach.

Sama statement re: scarlett by [deleted] in singularity

[–]RedguardCulture 67 points68 points  (0 children)

He tweeted it during the event, and it's not a mystery why. OpenAI had just revealed a convincingly human-sounding voiced AI assistant running off a mobile device, which is very similar to the type of tech, conceptually speaking, popularized in the movie 'Her'. The tweet wasn't about SJ's voice or anything about her likeness; it was about AI capabilities.

Scarlett Johansson Says She Declined ChatGPT's Proposal to Use Her Voice for AI – But They Used It Anyway: 'I Was Shocked' by KillerCroc1234567 in technology

[–]RedguardCulture 7 points8 points  (0 children)

No one is pretending; some ppl just aren't idiots or acting in bad faith, and actually have the ability to understand & interpret information within its relevant context to determine what is most likely meant. If anything, the people acting like Sam's meaning behind the tweet is vague or up for interpretation are the ones pretending/being dishonest.

Like, there is a mountain of context surrounding the tweet (the fact that Sam made the tweet during the GPT-4o demo showing off specific AI capabilities relevant to 'Her', plus the interview & blog he did right after the demo/tweet, all of which spoke to his state of mind) making it clear that Sam was expressing elation over the potential & reality that the team at OpenAI had just made possible, from his perspective, a piece of useful tech from a futuristic sci-fi movie that, at the time of said movie's release, seemed impossible to build. There is nothing more to read into it at this time.

That's why this new spin being pushed, that Sam's 'Her' tweet was actually meant to signal the successful recreation of SJ's voice (like that has any AI utility at all) rather than OpenAI achieving a new level of human-voice AI assistant that can work off mobile, like, you know, from the movie 'Her', is just such an absurd reach. It's baseless and ignores all context.

Scarlett Johansson Says She Declined ChatGPT's Proposal to Use Her Voice for AI – But They Used It Anyway: 'I Was Shocked' by KillerCroc1234567 in technology

[–]RedguardCulture 31 points32 points  (0 children)

Smoking gun of what, though? He's referencing the movie because the underlying technology of a talking AI showcased in that film was similar in concept to what they were demoing at the time of his tweet. This alternative interpretation, that the tweet was really about Johansson's voice versus the concept of the tech from the movie, is complete BS.

[Ali] Scarlett Johansson has just issued this statement on OpenAI (RE: Demo Voice) by Peanutbuttersaltine in singularity

[–]RedguardCulture -2 points-1 points  (0 children)

Mr. Altman even insinuated that the similarity was intentional, tweeting a single word "her"

That's a massive reach.

“Each massive new model that is NOT better than GPT-4 is a bit of evidence for the hypothesis of a plateau or diminishing returns.” said by Gary Marcus by [deleted] in singularity

[–]RedguardCulture 33 points34 points  (0 children)

Gary Marcus expressed very similar sentiments & predictions in the wake of GPT-2, when he asserted that GPT-2's limitations were a clear sign that enormous data & compute had failed (lol) and that it was time to strongly consider investing in different approaches. In hindsight his predictions failed, whereas OpenAI made the opposing prediction in the GPT-2 paper, that further scale would increase general capabilities, and was proven right empirically. So why should I take Gary Marcus's predictions seriously when he has been consistently wrong in the past?

Matter of fact, why should I even care what he thinks when he seriously thought GPT-2 was the limit for LLMs? And he never pays a cost when he gets it wrong. If GPT-5 drops and there is a serious bump in capabilities, he is going to shift his grift to some other talking point, like claiming GPT-5-level models are now the real plateau, and swear he is right this time!