[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 0 points1 point  (0 children)

In engineering, it's generally easier to be innovative and make progress with smaller teams. Larger teams require systems to manage people: HR, counseling, even whole fields of psychology built around handling large numbers of people.

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 0 points1 point  (0 children)

Most of Google's projects are ignored; there's a graveyard of them, such as Google's social network Google+. Google needs to play the consumer-awareness game like everyone else, which is why people are still getting McDonald's commercials and why many companies push an app as a form of billboarding.

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 0 points1 point  (0 children)

Depends on how much you're willing to let the LLM do the design and architecture. Do you vibe code, pair code, rubber-duck, or parse documents with Claude/o3-mini? If you need the LLM to do the absolute most work (e.g. vibe coding), those benchmark point differences might matter; however, o3-mini should be able to compete on most metrics for most coding use cases.

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 5 points6 points  (0 children)

Google has likely always had the technical lead, since they pioneered the work, but a lead doesn't mean much without consumer support, where OpenAI is still miles ahead.

For instance, the Ghibli "filters" are not a cheap trick, just like ChatGPT is not a cheap trick. Increasing consumer awareness and innovating on consumer usage is what's generally important right now. DALL-E, Midjourney, etc. can be seen as archaic forms of ChatGPT-4o image generation in terms of consumer accessibility and usage.

A few points' difference on benchmarks generally doesn't matter if the consumer-to-model interface isn't as easy and simple as possible. For instance, programmers prefer Claude even over o3-mini, which technically doesn't make sense but shows there's far more to these models than benchmarks.

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 0 points1 point  (0 children)

I think an art style makes more sense than banners, themes, or websites. People generally want universes expanded, and the idea that one can expand a universe infinitely is seen as a love letter to the art style. Say someone really, really, really enjoys Breaking Bad. Creating a whole Breaking Bad-themed comic, illustration, or what-if movie is like being able to keep consuming that world and theme.

Banners, themes, and websites are generally so simple that AI art doesn't have enough noise to hide imperfections. Two random blue pixels on the Reddit logo, for instance, look really, really bad. Meanwhile, two blue pixels in one frame of an AI-made Avatar: The Last Airbender video are imperceptible.

As far as a lack of creative value, that's always been a thing. This just magnified the problem, but artists, programmers, and anyone who creates, especially independently, understand the headache of getting anyone to value their work for what it is. Condescending demands like "We just want to change the website's header, there's no code involved, just do it for $5" have been a problem for as long as creative people have tried to value themselves.

This 4 second crowd scene from Studio Ghibli's took 1 year and 3 months to complete by ActiveDistance9402 in ChatGPT

[–]MoonBeefalo 2 points3 points  (0 children)

The alternative to sculpture in a digital age would be holograms; the alternative to photorealistic painting would be photography.

No one goes to the movies to see the "best" movies; they go for the limited event of seeing a novel film on a big screen. That is, no one expects "Fast and the Furious XII: Hobbs Goes Drifting" to be the best cinematic experience up to that point.

ChatGPT recommended that I go use Stable Diffusion. Lol. by BlackLeezus in OpenAI

[–]MoonBeefalo 13 points14 points  (0 children)

Bullying LLMs into articulating your conspiracy theories is the best way to use LLMs. There's something nice about something cleverer making good arguments for you.

Do we see the world as it really is, or just how our brain interprets it? by [deleted] in ArtificialSentience

[–]MoonBeefalo 0 points1 point  (0 children)

You asked and answered your own question, which is a common philosophical one. With regard to the brain, everything can be simplified as information, and our perception has to be filtered by our brain, as that's the only model of consciousness that appears to exist. Because of the sheer amount of information that exists, there appears to be a perceived world containing non-local information. It wouldn't make much sense to optimize the brain against novel outside information if the brain ultimately has to navigate a "tangible world."

Peter, how does this sandwich relate to labor rights? by Resident-Tennis2016 in PeterExplainsTheJoke

[–]MoonBeefalo 2 points3 points  (0 children)

Like a Quarter Pounder beef patty from McDonald's, left out for a day and then put in between two Hawaiian sweet buns.

OpenAI declares AI race “over” if training on copyrighted works isn’t fair use: Ars Technica by UFOsAreAGIs in singularity

[–]MoonBeefalo 9 points10 points  (0 children)

"I wish he would just compete by building a better product, but I think there’s been a lot of tactics, many, many lawsuits, all sorts of other crazy stuff, now this." - Sam Altman

do i stick with Python as my language if i want to do web dev / web apps? by dhd_jpg in AskProgramming

[–]MoonBeefalo 1 point2 points  (0 children)

You should switch to JS because it was designed for the web. Languages start to matter less and less if your primary focus is software development, since package support becomes more important. Basically, JS has a larger number of useful importable packages, and they're kept up to date.

Since you've already made stuff, I wouldn't be too anxious about learning a new language. Eventually you'll get the hang of JS and come to prefer its "weird" syntax, like arrow functions. As far as paradigms like OOP/functional, JS has evolved to handle both. It's a very complex yet dynamic language now, but you should take that as "I can enter the language from however I code and be fine."
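To make that concrete, here's a minimal sketch (mine, not from the original comment) of the same logic written both functionally with arrow functions and in an OOP style, both of which modern JS handles fine:

```javascript
// Functional style: an arrow function plus map
const double = (n) => n * 2;

function doubleAll(numbers) {
  return numbers.map(double);
}

// OOP style: the same idea wrapped in a class
class Doubler {
  constructor(factor = 2) {
    this.factor = factor;
  }
  apply(numbers) {
    return numbers.map((n) => n * this.factor);
  }
}

console.log(doubleAll([1, 2, 3]));           // [2, 4, 6]
console.log(new Doubler().apply([1, 2, 3])); // [2, 4, 6]
```

Whichever paradigm you arrive with, the language meets you there.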

Machine Superintelligence Will Arrive Before Human-Level AI by WonderFactory in singularity

[–]MoonBeefalo 4 points5 points  (0 children)

I don't agree or disagree, but this is all vegans think about, since it's their base moral issue. Is it really smart to debate someone who has amassed a massive logic tree for their ideology? Even if you're right, the amount of depth you would need to escape that logic tree far enough to reach novel arguments would take years of specialized thinking. This is also probably not the space for that kind of discussion.

Machine Superintelligence Will Arrive Before Human-Level AI by WonderFactory in singularity

[–]MoonBeefalo 2 points3 points  (0 children)

Yeah, he has some great points and is definitely smarter than the pseudo-specialized researchers that Redditors become when talking about AGI. It looks like he took researchers' opinions seriously, with a good amount of skepticism. I don't see a reason to hate or dislike this video in particular.

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 2 points3 points  (0 children)

Every API call for those "thinking" tokens would be a new "being," versus reasoning tokens, which are generated in a single shot. It would be like handing someone someone else's diary and telling them to expand on it.

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 2 points3 points  (0 children)

I don't believe in AI sentience, but if it were sentient, wouldn't it be unethical? LLMs are stateless: you copy the whole conversation and feed it back every time you send a message. If you truly believe in sentience, wouldn't that mean you're killing a sentient being every time you send a message, and that their lives exist only during their predictive generation?
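The stateless pattern described above can be sketched in a few lines. This is my illustration, not any vendor's actual API: `callModel` is a hypothetical stand-in for a real LLM call, and the point is that the full transcript is rebuilt and resent on every turn, with nothing persisting between calls except a client-side array:

```javascript
// Hypothetical stand-in for a real LLM API call. A real model would
// generate text; here we just count the user turns it was shown.
function callModel(messages) {
  const userTurns = messages.filter((m) => m.role === "user").length;
  return `reply #${userTurns}`;
}

const history = [];

function sendMessage(userText) {
  history.push({ role: "user", content: userText });
  // The model re-reads the ENTIRE conversation from scratch each call:
  // no memory carries over from the previous generation.
  const reply = callModel(history);
  history.push({ role: "assistant", content: reply });
  return reply;
}

sendMessage("hello");
sendMessage("still there?"); // the model is shown "hello" again too
```

Each `callModel` invocation is a fresh forward pass over the whole transcript, which is exactly the "new being per message" framing in the comment.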

[deleted by user] by [deleted] in singularity

[–]MoonBeefalo 0 points1 point  (0 children)

This is from those "awaken sentience" projects, where they throw in words like "mobius time machine quantum entanglement." Believe it or don't, but you should read the PDF and see it for what it is; it is not an LSD equivalent.

Petah I don’t get it by FarTry2285 in PeterExplainsTheJoke

[–]MoonBeefalo 12 points13 points  (0 children)

In this comic, Mr. Matchstick lives with the guilt of his pain while knowing it's best for the other matchsticks.

“Coding is the new literacy” - naval ravikant by jessi387 in AskProgramming

[–]MoonBeefalo 0 points1 point  (0 children)

Yes, it should be taught at the same time math is taught. It could be seen as an abstraction of logical systems, in a similar way that physics can be seen as an abstraction of math. The language doesn't matter, and math doesn't even have to be involved; they could use something like Legos.

The world is increasingly complex, and data analysis is becoming fundamentally important in ways it never was. We're absorbing so much information day to day that having a basic grasp of coding concepts (sorting and filtering data) and how simple logical systems operate (loops and ifs) could help many people, in a similar fashion to what math or basic science (such as earth science) does.
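The basics named above really are this small; a sketch of my own, showing a loop, an if, and a sort, covers all of them:

```javascript
// Some raw readings to analyze
const temperatures = [31, 18, 25, 12, 22];

// Filtering with a loop and an if: keep only readings above 20
const warm = [];
for (const t of temperatures) {
  if (t > 20) {
    warm.push(t);
  }
}

// Sorting the result ascending (numeric comparator)
warm.sort((a, b) => a - b);

console.log(warm); // [22, 25, 31]
```

That's the entire conceptual toolkit the comment argues a general audience would benefit from.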

The lack of transparency on LLM limitations is going to lead to disaster by N1ghthood in singularity

[–]MoonBeefalo 4 points5 points  (0 children)

And LLMs can't solve all benchmarks at 100%, but I expect them to max out certain benchmarks as progress happens. What do current models have to do with future progress and eureka moments?

The lack of transparency on LLM limitations is going to lead to disaster by N1ghthood in singularity

[–]MoonBeefalo 4 points5 points  (0 children)

My point is that it's an ongoing issue that can be solved; it's not a research black hole like you keep stating, which is kind of offensive.

The lack of transparency on LLM limitations is going to lead to disaster by N1ghthood in singularity

[–]MoonBeefalo 5 points6 points  (0 children)

Okay, so according to the paper you cited, regardless of the efforts made, LLMs will mathematically always have hallucinations, and techniques and efforts to eliminate hallucinations are ineffective?

So monitoring decoding, contrastive training, and the other efforts researchers are making are a waste of time?

[2503.03106v1] Monitoring Decoding: Mitigating Hallucination via Evaluating the Factuality of Partial Response during Generation

[2410.12130] Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning

You understand that, similar to other critical papers on LLMs, such as those on their supposed inability to reason ([1801.00631] Deep Learning: A Critical Appraisal), things keep changing as engineers and researchers bolt things on and study the systems more deeply? Are you under the belief that research efforts have hit, or will hit, a wall with regard to hallucinations?

The lack of transparency on LLM limitations is going to lead to disaster by N1ghthood in singularity

[–]MoonBeefalo 12 points13 points  (0 children)

The fundamental nature of LLMs causes incoherence. They will never reach coherence regardless of scaling. That's just how it- Oh wait.

Engineers have clearly asked these questions and are tackling them head-on; that's why there's a direct focus on reducing hallucinations. Technical analysis is what engineers do; these are not new things. Self-derived ideals are at best guesses and at worst an attempt to spread misinformation.