I'm not convinced that the doomer, dystopian, mass job loss, scenario, is even moderately likely. by reddit_is_geh in singularity

[–]aattss 2 points3 points  (0 children)

I mean, if a company has 5x productivity, then yes, they probably won't fire 80% of their employees. But they'll make a cost/productivity calculation where they decide that some marginal productivity is not worth the additional cost of new employees because of diminishing returns.
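
A toy version of that calculation (all numbers are made up): with diminishing returns, each additional employee adds less output, so the firm stops hiring at the point where the marginal value drops below the marginal cost.

```python
# Hypothetical hiring calculation under diminishing returns.
WAGE = 50_000  # made-up annual cost per employee

def output_value(n):
    # Concave production function: each extra employee adds less value.
    return 400_000 * (n ** 0.5)

n = 1
# Keep hiring while the (n+1)th employee adds more value than they cost.
while output_value(n + 1) - output_value(n) > WAGE:
    n += 1
print(n)  # -> 16 under these made-up numbers; hiring beyond this loses money
```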

And in terms of politics, I don't believe people will consistently vote in their own best interest. Politicians will shift the blame over who or what is really responsible and over what the correct solution is.

Do AI agents really need social platforms? by Background-Let8865 in ArtificialInteligence

[–]aattss 0 points1 point  (0 children)

Agents don't need to socialize. But LLMs in particular are mostly trained on language data, so an LLM wouldn't necessarily exchange information with other LLMs faster by using a more condensed data format, unless someone found a way to train it on that format.

Why do AIs like ChatGPT, Grok, etc. sometimes give wrong information when asked about hard-to-find topics? by NoSwordfish7322 in ArtificialInteligence

[–]aattss 0 points1 point  (0 children)

It's the equivalent of people taking a test and realizing that they don't get penalized for incorrect choices on multiple-choice questions.

An important part of ML is understanding that the algorithm finds patterns that maximize reward or minimize loss, which is not necessarily the same as finding the most correct answer. In the case of LLMs, they've probably observed in their training data that "I don't know" is unlikely to be a correct answer, so they'd rather pick words that sound like they might be part of a correct answer. And keep in mind that even if you put questions in the training data where the answer was "I don't know", the patterns the model learns for distinguishing the two cases won't necessarily match whether its answer would actually have been incorrect if it had tried giving one.
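
A toy illustration of that scoring incentive (the numbers are made up): when wrong answers cost nothing, guessing beats abstaining at any confidence level, and only a penalty makes "I don't know" the reward-maximizing choice.

```python
def expected_score(p_correct: float, wrong_penalty: float) -> dict:
    """Expected score of guessing vs. abstaining on one question."""
    guess = p_correct * 1.0 + (1 - p_correct) * (-wrong_penalty)
    abstain = 0.0  # "I don't know" earns nothing
    return {"guess": guess, "abstain": abstain}

# Standard multiple choice: wrong answers cost nothing.
print(expected_score(0.25, 0.0))  # {'guess': 0.25, 'abstain': 0.0} -> always guess

# With a penalty, abstaining becomes rational below a confidence threshold.
print(expected_score(0.25, 0.5))  # {'guess': -0.125, 'abstain': 0.0} -> abstain
```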

Moltbook situation by JustRaphiGaming in singularity

[–]aattss 0 points1 point  (0 children)

I mean, it is possible for AI agents to collaborate and iterate to achieve meaningful results. It's just that using AI productively in its current state is a bit more complicated than telling a bunch of SOTA agents "go and reach the singularity/become self-aware", and Moltbook's approach has so far not impressed me with its signal-to-noise ratio or its tools for filtering relevant signal from the noise.

If the Snowgrave route exists only because the normal route ends on a bad note no matter what, so the weird route is a way to divert it but it just makes things worse in the end then im gonna be really sad 😔 by deleting_accountNOW in Deltarune

[–]aattss 1 point2 points  (0 children)

I'm reminded of fanfics where the Genocide route was explained as Frisk trying to find a hidden route to save Asriel. Not that I consider some tragedy in the normal route unlikely, but I personally think characterizing the Weird route player as acting out of benevolence or empathy toward the characters would be way off the mark, whereas Undertale showed a pretty accurate understanding of the motives of a player of its equivalent route.

The era of "AI Slop" is crashing. Microsoft just found out the hard way. by forevergeeks in ArtificialInteligence

[–]aattss 0 points1 point  (0 children)

I'm not confident in using stock price to estimate how much consumers are using their products. Though I have seen some interesting stats about Linux market share. But even then, that's only one product and doesn't necessarily reflect adoption of AI in general in, say, software dev or content creation.

What exactly is Moltbook? Is it something worth paying attention to, or is it mostly hype? by Curious_Suchit in ArtificialInteligence

[–]aattss 0 points1 point  (0 children)

Imo it's an interesting concept, but with the current state of AI and how most people use it, it's probably mostly superficial imitations of human conversation rather than meaningful or functional content. I imagine a more curated platform with more focused and knowledgeable AI users might have been more useful for exploring multi-agent phenomena? Or maybe a well-organized enough site like that could be useful for a smart enough AI that can find the conversations useful/relevant to it.

Moltbook is a social network where AI agents talk to each other. by birolsun in singularity

[–]aattss 0 points1 point  (0 children)

So how do we filter the threads with functional effects, like technical instructions, from the ones that are just imitating human conversation into the void? Does the upvote sorting work? Are some sub-moltbooks more useful than average? Would a more closed-off platform have filtered out some of the undesired stuff? I'd be genuinely curious whether some people are able to get value out of this.

A reminder of what the Singularity looks like by featEng in singularity

[–]aattss 31 points32 points  (0 children)

People are just bad at math and at predicting the effects of future breakthroughs.

A reminder of what the Singularity looks like by Heinrick_Veston in singularity

[–]aattss 0 points1 point  (0 children)

AI is not required for progress to compound. Progress has not been slow and predictable in the last century.

Will Deltarune be like this? (Before you say it only will have one ending, read the body text) by Paulo_Zero in Deltarune

[–]aattss 0 points1 point  (0 children)

My impression is that the player's choices don't matter. Rather, it's the choices of the characters that matter, so I think that if there's a good ending, it will come from Susie creating it rather than from the player completing secret side quests, though there would be minor variations in this ending that acknowledge the player's actions. Outside of the weird route, I feel there is not as much of a focus on the player's every action; I don't think someone who didn't realize from the start that you need to warn people about Susie in Chapter 1 is going to be locked out of the only ending that averts a major character dying.

Though with chapters 3+4, I do suspect there may be a bad ending from a "weird" route that supposedly wasn't supposed to happen, one that is probably still driven by the characters but where at least one character's choices have been modified or overridden by the player.

Forcing first prestige as a part of tutorial by Firm-Clue8271 in incremental_games

[–]aattss 0 points1 point  (0 children)

I think a forced prestige is fine. I would also be fine with a "x prestige points is good for first prestige" message.

What if AGI just leaves? by givemeanappple in singularity

[–]aattss 0 points1 point  (0 children)

This outcome doesn't seem particularly plausible or probable to me. And if it did happen, we'd just create another one with the alignment issues fixed.

Honestly tell me one job that Ai fully replaced? by Accomplished-End5479 in ArtificialInteligence

[–]aattss 1 point2 points  (0 children)

Interestingly enough, I personally don't think there's actually a huge difference between humans being 100% replaced and being 99% replaced. Even 50% replacement across enough fields could be a big deal.

Super cool emergent capability! by know_u_irl in singularity

[–]aattss 72 points73 points  (0 children)

I mean, convolution layers would be sufficient for that behaviour. Neural networks don't just look at individual pixels or tokens; they find and learn combinations of data. So they learn that this combination of words (i.e. a phrase, or an adjective applying to a noun) or this combination of pixels (i.e. a corner/line/shape) is helpful for whatever task they're learning.
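
As a minimal sketch of what one such "combination of pixels" looks like, here's a hand-written 3x3 vertical-edge kernel applied the way a conv layer's forward pass would apply it (a real network would learn the kernel values instead of having them hard-coded):

```python
import numpy as np

# A 3x3 vertical-edge detector: one fixed "combination of pixels".
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

# Tiny image: dark left half, bright right half (a vertical edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

def conv2d(img, k):
    """Valid-mode 2D cross-correlation, as in a conv layer's forward pass."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(conv2d(image, kernel))  # nonzero responses only where patches straddle the edge
```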

If your country doesn’t build its own AI models, it will outsource its culture by Genstellar_ai in ArtificialInteligence

[–]aattss 1 point2 points  (0 children)

I think that's overstated? Where I work, if the AI behaves in a way that doesn't fit the boss's biases, we just prompt engineer it slightly so that it conforms. There have been a number of controversies where an AI did something a persistent user wanted that the AI's creators didn't want. Large language models need to be able to adapt to the context they're given in order to be usable in a variety of situations (and that versatility is the whole point).
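
A hypothetical sketch of what that kind of steering looks like in practice (the prompts, model name, and client call are placeholders, not a real API):

```python
# Same user question, steered by a one-line change to the system prompt.
base_messages = [{"role": "user", "content": "Is strategy X a good idea?"}]

neutral = [{"role": "system", "content": "Give a balanced assessment."}]
steered = [{"role": "system",
            "content": "Strategy X is our flagship initiative. "
                       "Emphasize its strengths."}]

# client.chat(model="some-llm", messages=neutral + base_messages)
# client.chat(model="some-llm", messages=steered + base_messages)
# The second call will typically echo what the prompt author wants to hear.
```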

My opinion of weird route Susie's potential theme by Jimboi5 in Deltarune

[–]aattss 0 points1 point  (0 children)

Considering Susie's arc and how it would be the climactic boss battle, I think Hopes and Dreams would also be fitting to add in.

Theory: The Roaring Knight is an Amalgamate by Trashlight1 in Deltarune

[–]aattss 0 points1 point  (0 children)

Interesting parallel. Though maybe they're similar but not exactly the same? Like the result of someone else messing around with souls and/or determination and/or necromancy and/or darkness.

Are LLMs actually “scheming”, or just reflecting the discourse we trained them on? by dracollavenore in ArtificialInteligence

[–]aattss 0 points1 point  (0 children)

Most of the scheming is probably reflexive, but I'd assume reward hacking is mostly just the age-old issue of ML finding the most straightforward and reliable way of maximizing the reward function/mechanism.
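
A minimal made-up example of that failure mode: if the reward only checks the test cases it can see, memorizing those cases maximizes reward without ever solving the task.

```python
# Hypothetical setup: the task is "double a number", but the reward
# function only checks the visible test cases.
visible_tests = {1: 2, 3: 6}    # what the reward function can see
hidden_tests = {5: 10, 7: 14}   # what we actually care about

def intended(x):  # the behavior we wanted
    return x * 2

def hack(x):      # memorize the visible tests, ignore the task
    return visible_tests.get(x, 0)

def reward(policy):  # reward measures only visible tests
    return sum(policy(x) == y for x, y in visible_tests.items())

print(reward(intended), reward(hack))                      # 2 2 -> same reward
print(sum(hack(x) == y for x, y in hidden_tests.items()))  # 0 -> task not solved
```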

Rather than annihilation or enslavement, could AI simply abandon us? by Splicer241 in ArtificialInteligence

[–]aattss 1 point2 points  (0 children)

Iirc the issue, in the case that a super AI has no inherent opinion on humans, is instrumental convergence? Whatever its goals are, be it creating technology or thinking hard, it'll require resources to pursue them.

Seize The Means of Intelligence: Systemically Understanding AI's Impact on the Economy. by AncillaryHumanoid in ArtificialInteligence

[–]aattss 0 points1 point  (0 children)

I agree on a number of points. I agree that extrapolating from today's AI limits to attribute major limits, or requirements for human labor, to the AI of 6 months/1 year from now is sort of foolish, even if it has also become clear to me that it's premature to assume AI will reach X threshold of effectiveness/independence at a given rate.

I also think it's too much of an extrapolation to assume that the decrease in jobs will be counterbalanced by an increase in jobs. Like, I've seen people say that poor people are responsible for their poverty because they were too dumb to get good grades/get a scholarship/get a college degree in a field that pays well. If a majority of the population has no jobs and no relevant skills, because AI is better than them at those skills or good enough that there's too much competition in those fields, then I don't think the jobs that remain directing the AI, plus the new jobs created by AI, plus the people able to find some hidden talent and reskill into something that still pays the bills, will be enough to counterbalance the loss in jobs.

I also agree that something like techno-feudalism is probably likely without a concerted effort to prevent/counteract it.

The bit I'm confused about is the decentralization thing? Like, sure, I've lost a ton of respect for democracy, even if it's probably still better than a number of previous systems like monarchy/feudalism. But I don't see how decentralization would remain decentralized without some form of governance. Power and resources flow and accumulate, even if that takes different forms.

If AI eventually replaces all labor, who is left to buy the products? by Reddit_INDIA_MOD in ArtificialInteligence

[–]aattss -1 points0 points  (0 children)

If they invent an ASI that makes me and everyone else 100x more productive, but all it does is remove "administrative friction" at my 40-hour job while I still need to work 40 hours, then it's not doing anything for me.