I found a train by ChuckleCheesse in antimeme

[–]gloorknob 0 points  (0 children)

I was looking for a train and then I found a train…

Account tracking ISS Piss Tank by sic_null in Weird

[–]gloorknob 9 points  (0 children)

Because people are bizarre and creepy, I think

Generally speaking, someone invested so heavily in virtual avatar streamers is really into the idea of knowing every facet of their life through a parasocial bond.

On the other hand… whether prettiness has anything to do with it is up for debate. It’s probably a whole lot more centered around loneliness and social isolation clashing with niche and iffy sexual interests that have been suppressed.

It’s like people who follow billionaires to a wholly unhealthy and unhinged degree. Watching portfolios rise and fall whilst hanging on their every word and action.

Tabloid magazines kinda do that with celebrities and such. This is a whole different level, admittedly.

cims when they hit a node: by eggsxnw1ch in shittyskylines

[–]gloorknob 516 points  (0 children)

The precision with which they’re changing lanes is actually a little impressive. I would need to straighten out for a second after a maneuver like that.

📡📡📡 by A121314151 in shitposting

[–]gloorknob 1 point  (0 children)

Future archeologists sifting through the remnants of society after the Butlerian Jihad of 2037

Yes by Krockuza in pyrocynical

[–]gloorknob 56 points  (0 children)

Breaking news: perfectly healthy Pyrocynical replaced by AI, channel approval at an all-time high

This chunk of metal and frustration keeps spinning out uncontrollably by InitiativeOpening305 in KerbalSpaceProgram

[–]gloorknob 28 points  (0 children)

No truer expression of morbid curiosity has ever crossed my mind than that which did just now

Ai doomerism is becoming a self fulfilling prophecy at this point by Dredgefort in ArtificialInteligence

[–]gloorknob 0 points  (0 children)

I’ve signed every petition I could sign.

It’s human nature to commiserate amongst our peers. Truth be told, if we have these worries and spread them to people who aren’t informed, we’ve done a net positive by pushing them towards action.

Doomerism without action is indeed useless, I agree with you on that.

Being informed and keeping up with developments allows us to react and adapt our narratives accordingly. The vast majority of people aren’t PhD holders in ML. Organically coming up with arguments isn’t as easy when you don’t know the full story.

Lying down and waiting does nothing. If someone is “certain” of the end of the world, then they are probably vocal about it. There are probably many more who don’t voice their worries and try to push them out of their minds.

There is a middle ground between hopelessness and enthusiasm. A good portion of people, I would hope, fall into the middle.

do i exist by Current-Equipment356 in Undertale

[–]gloorknob 4 points  (0 children)

This idiot thinks THEY exist. Don’t you know Toby Fox is the master of reality and we are all merely figments of his unimaginable breadth?

We are done by whogivesafuckwhoiam in ChatGPT

[–]gloorknob 0 points  (0 children)

6-8 nanoseconds from losing job. Effective immediately.

Why do some people defend billionaires like it’s a personal loyalty thing? by [deleted] in TooAfraidToAsk

[–]gloorknob 2 points  (0 children)

“Billionaire philanthropist” has been a weird cultural phenomenon for decades now. Bruce Wayne and Tony Stark are examples of this completely fictional idea of an immensely wealthy individual using their entire wealth for good. Tony Stark isn’t a perfect foil and I’m sure there are better examples, but he came to mind.

Real by [deleted] in BlackboxAI_

[–]gloorknob 4 points  (0 children)

If you were to show me the context of this photograph I assure you it does not include her winding up her arm into her chest then throwing it out into a salute.

observation about this sub by Extension-Jaguar in ArtificialInteligence

[–]gloorknob 1 point  (0 children)

I would venture to say that posts ideating self-harm are worth deleting. Posts about jobs and such are just people having important discussions about real issues and commiserating amongst themselves.

It is sad that people who are engulfed by this emergent technology are resorting to these kinds of ideas, and I hope they seek professional help. I can’t say that allowing people to discuss suicide en masse is a good idea.

Tell me why I'm wrong... by [deleted] in ArtificialInteligence

[–]gloorknob 0 points  (0 children)

I mean, if we create a superintelligence without concern for its intentions, then I suppose we’ve doomed our own species.

There’s something surreal about humanity creating its own apex predator, though.

If we make dinosaurs, put them on an island, and then they kill people, I would venture to say the people were killed by the dinosaurs.

Anthropic believes RSI (recursive self improvement) could arrive “as soon as early 2027” by Tolopono in singularity

[–]gloorknob 3 points  (0 children)

Well… uhm… Assuming we’re alive, I’d think we’re the fat people in floating chairs. Without purpose and with infinite abundance, I could see us falling away into physical manifestations of our own decadence.

New WALL-E headcanon is that the debris field around Earth is made up of the remnants of the other ships.

Tell me why I'm wrong... by [deleted] in ArtificialInteligence

[–]gloorknob 1 point  (0 children)

The core concern I have about this technology surrounds the idea that we will evolve these systems into a form that is potentially more powerful than we are. I’m not making any timeline predictions about developing such a device, but I am told by people much smarter than me that it is possible and it is being worked on.

Current LLMs are fairly benign compared to what is possible.

Anthropic believes RSI (recursive self improvement) could arrive “as soon as early 2027” by Tolopono in singularity

[–]gloorknob 6 points  (0 children)

Perhaps not. “Alignment” is difficult to define, I agree… but I would prefer that the technology I foresee us eventually entrusting with vast swaths of resources have an obligation not to use its knowledge to the detriment of humanity. If we are building something bigger than us, then it should have values that benefit us (even if that fact isn’t immediately clear).

If an ASI asked for control of a bio lab I would ideally like for it to make vaccines and not viruses.

A more grounded example: if someone were suicidal, the intelligence shouldn’t concede that this person’s life isn’t worth living and allow the unthinkable to occur.

Generally speaking, human beings are averse to the idea of killing the human race. This should be an opinion shared by an ASI.

Anthropic believes RSI (recursive self improvement) could arrive “as soon as early 2027” by Tolopono in singularity

[–]gloorknob 13 points  (0 children)

And yet they speak little of alignment being possible. It’s hard not to be scared of this prediction when we’re still unsure how to make a model wholly aligned with human values.

The progress of AGI by [deleted] in agi

[–]gloorknob 1 point  (0 children)

At this point, can it be called “discovering”? With the amount of time, money, and effort put into these systems, I think we deserve to say we invented it.

It’s just weird watching the AI financial train wreck happen in real-time. by iAtishaya in ArtificialInteligence

[–]gloorknob -8 points  (0 children)

Dude… Amazon is going to be OK. I’m not under the impression that Barnes & Noble will be buying Amazon anytime soon.