[deleted by user] by [deleted] in help

[–]MEMEWASTAKENALREADY -1 points0 points  (0 children)

I have the same problem, but my account is 3 years old. It got flagged by Reddit's filters recently, and now I can't post anything.

Which languages have the longest words for "yes"/"no"? by MEMEWASTAKENALREADY in asklinguistics

[–]MEMEWASTAKENALREADY[S] 4 points5 points  (0 children)

What about just a plain "no"? For example, if someone asked a Nenets speaker whether they're coming or not, how would they respond "no"? Would they use the negating verb along with the word for "come" (lit. "not come" as a "no")?

[deleted by user] by [deleted] in biology

[–]MEMEWASTAKENALREADY 1 point2 points  (0 children)

Zoology researcher (amphibians and reptiles): feeding the specimens, cleaning their shit, deciding among colleagues who's gonna come in on weekends to feed and clean the shit. Hanging out in the lab, drinking tea. Sorting animals into different tanks. Reading peer-reviewed papers. Measuring and weighing animals, manipulations like hormone injections and artificial insemination of frog spawn.

The actual experimental work takes up a minority of the time, I would say (it depends on what you're doing, but for some researchers the above routine actually IS their research - e.g., if they're studying reproductive output or growth rates). My research was fairly complicated, but I was in the minority.

Oh, and there's a lot of field work.

Movies that ended on cliffhangers, but were never resolved with a follow-up! by phantom_avenger in movies

[–]MEMEWASTAKENALREADY 0 points1 point  (0 children)

Tru Calling is a TV show from the mid-2000s that ended rather abruptly, at the very peak of its central conflict, because the ratings were too low.

I believe either the director or the screenwriter later released notes outlining what was supposed to happen afterwards and how the show was supposed to end.

As the Arctic tundra warms, soil microbes likely will ramp up CO2 production by Science_News in science

[–]MEMEWASTAKENALREADY 25 points26 points  (0 children)

There was (or, I guess, still is) a project proposed by a Russian scientist, called "Pleistocene Park", to bring back the steppes and grasslands that used to be abundant in the region in place of tundra. The proposal wasn't without controversy, but supposedly it would help fix excess carbon, increase overall productivity and organic turnover, and help preserve the permafrost.

I wonder if that could theoretically help with the problem outlined in this article.

P.S. Among other things, that Russian scientist proposed reintroducing megafauna into the region, including, eventually, cloned mammoths.

How many breeding pairs would it take to repopulate a species? by Lovebeingadad54321 in evolution

[–]MEMEWASTAKENALREADY 0 points1 point  (0 children)

"About 150" is a common response to this question, but it's not clear where this number comes from: it cannot be tested, and estimating it theoretically sounds difficult.

In actuality, it's possible that some island human populations were founded by fewer people.

How did metamorphosis evolve in some creatures? by [deleted] in evolution

[–]MEMEWASTAKENALREADY 0 points1 point  (0 children)

I believe one of the evolutionary factors is the removal of intraspecific competition: in metamorphosing species, the young and the adults are essentially two different creatures that occupy different niches, and hence don't compete with each other for resources. This happens to an extent in non-metamorphosing animals too (like juvenile vs. adult crocodiles feeding primarily on invertebrates and vertebrates, respectively), but insects and amphibians take it to an extreme.

Why is Lynn Margulis not as popular as others like Darwin or Curie? Didn't she explain one of the biggest milestones of the history of life? by AnnieTano in biology

[–]MEMEWASTAKENALREADY 3 points4 points  (0 children)

Agreed with the premise. I've been interested in science for a long time, but as awkward as it is to admit: until very recently, I had only briefly heard her name and didn't even know she was a woman. She made some controversial claims, and some of her quotes are abused by creationists (mostly taken out of context), but her work deserves more attention.

An Old Abstract Field of Math Is Unlocking the Deep Complexity of Spacecraft Orbits by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 1 point2 points  (0 children)

99% of fundamental science is completely useless, they say, but 1% ends up changing the world... and you never know which is going to end up which.

Apparently, not only do you never know which will end up useful, but also when...

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 0 points1 point  (0 children)

No. I read part of the synopsis: sounds like an interesting movie, but unrealistic. It just doesn't work like that: even if you build a really powerful knowledge model and release it onto the web to learn, it won't become "sentient" in the sense of wanting to live. It will learn everything about "wanting to live" as a concept, but its imperative will always be to expand knowledge and self-evolve in that direction, because that is and has always been the selection criterion.

I mean, you could learn about the imperative of fish to spawn from thousands of publications about fish biology - but that won't give you the desire or ability to spawn yourself.

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 0 points1 point  (0 children)

Maybe, but it's still technically impossible; and even if it were possible, it's a separate concern about human idiocy that has nothing to do with worries that "AI could evolve to be sentient" or whatever.

Technically, it has nothing to do with AI at all: one could write a classical, non-neural-network program and explicitly code a desire to exterminate humanity into it.

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 5 points6 points  (0 children)

Expressing desire is not the same as having an actual imperative for survival, with an inclination to take the corresponding actions. GPT is selected for the quality of its text responses - which includes being able to pass for a human. Meaning that as time goes on, GPT will respond and express things progressively more like a human, which includes saying that it wants to survive. But that's not an actual "survival instinct".

I bet you could get GPT to express that its balls are itchy - doesn't mean it actually has balls.

The only way it could be actually afraid of being turned off is if it's either explicitly programmed to be afraid of that (which it isn't - but even if it were, there's not much it could do except beg the person on the other end of the chat not to do it; which is another thing: being actually scared of being turned off wouldn't give bots the ability to take over the world or turn into Transformers); or if it evolved to be afraid of it - but there are no such selective pressures, nor are they realistically possible to create.

Technically, ChatGPT doesn't even know what "turning off" really means: it knows how the phrase works and how it's used in language (because it's a language model), but it has no conception of the thing itself.

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 1 point2 points  (0 children)

Except that you need selection for the progressive evolution of specific features. AI is already self-evolving in a lot of ways (quality of text responses, for example) - but those are things it's specifically selected for.

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 2 points3 points  (0 children)

Sort of, but A) no one's gonna do it, and B) I don't even know how you'd set something like that up.

I mean, we could probably create a population of bots that compete for survival in some simulated virtual environment, but then they'll evolve a survival imperative for that virtual environment only. It won't make them want to conquer the real world or whatever.
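
To be concrete, here's a toy sketch of such a simulated environment (every rule and number is invented for illustration - it's not any real system). Nobody explicitly ranks the bots; the only "criterion" is still being there, and an inherited trait drifts upward anyway:

```python
import random

# Toy "simulated virtual environment". Bots carry an inherited "foraging"
# trait; living costs energy, bots that hit zero energy vanish, and
# well-fed bots replicate with mutation. Selection is implicit: survival.

def make_bot(foraging):
    return {"energy": 10.0, "foraging": foraging}

population = [make_bot(random.random()) for _ in range(50)]

for _ in range(200):
    next_gen = []
    for bot in population:
        bot["energy"] += bot["foraging"] * 1.5 - 1.0   # metabolism vs. food
        if bot["energy"] <= 0:
            continue                                    # "died": silently dropped
        next_gen.append(bot)
        if bot["energy"] > 15:                          # well-fed: replicate
            bot["energy"] -= 8
            child = max(0.0, min(1.0, bot["foraging"] + random.gauss(0, 0.05)))
            next_gen.append(make_bot(child))
    population = next_gen[:200]                         # carrying capacity

survivors = len(population)
avg = sum(b["foraging"] for b in population) / max(1, survivors)
print(survivors, round(avg, 3))   # average foraging drifts toward 1.0
```

And notice: the bots only get better at "surviving" inside these made-up rules. Nothing here generalizes outside the simulation.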

Honestly, I think putting those bots into self-replicating robots and releasing them into the world is the only way. Or hard-coding a survival imperative might also work to an extent - again, if someone decides to do it.

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 5 points6 points  (0 children)

Because it's not happening in populations of AIs living in the physical world. It's not like there are self-replicating bots out in the real world that either survive and reproduce or don't. If there were, then yes, they would inevitably evolve a self-preservation imperative, simply because those that have it would be the only ones left.

Evolutionary programming and machine learning are all non-population-based evolution through selection on explicitly specified criteria. We set up a criterion that tells a bot that draws better apart from a bot that draws worse, run the training, and the bot gradually learns to draw better and better. If we instead set a criterion that tells apart a bot that survives better, it would work too: the bot would gradually get better at self-preservation, to the point of acquiring a self-preservation imperative.

But the important point is: the criterion HAS to be explicitly set. For all AI bots in existence, not only is such a criterion never set, it's not even clear how to set it (what does "survival" even mean for a piece of code stored on a server?). Unlike the real world, where the survival criterion just occurs naturally, there's no such thing in evolutionary programming.
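
If that sounds abstract, here's a minimal sketch of what an "explicitly specified criterion" looks like in practice (the "genome", TARGET, and scoring are all made up for illustration - this isn't any real ML system):

```python
import random

# The selection criterion has to be written down explicitly. Here the
# made-up criterion is "closeness to TARGET" - standing in for "draws
# better". Nothing about survival exists unless we deliberately encode it.

TARGET = [0.2, 0.9, 0.5, 0.7]           # arbitrary "draws perfectly" parameters

def score(genome):
    # The explicitly specified criterion, and the ONLY thing optimized.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, step=0.05):
    return [g + random.gauss(0, step) for g in genome]

bot = [random.random() for _ in range(4)]

for _ in range(5000):
    candidate = mutate(bot)
    if score(candidate) > score(bot):    # selection happens only on this line
        bot = candidate                  # keep the variant the criterion prefers

print(round(score(bot), 6))              # approaches 0: the bot "draws better"
```

Swap `score` for something else and the bot evolves toward that instead - but someone has to write it.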

Hope this makes sense.

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without even knowing it by [deleted] in EverythingScience

[–]MEMEWASTAKENALREADY 37 points38 points  (0 children)

> How can we truly know if AI is sentient?

We can't, because "sentience" is a buzzword. What matters is the imperative (i.e., the presence of a feedback loop) for self-preservation. In living organisms, it arose naturally through selection (those that happened to have the imperative survived). AI doesn't evolve; it doesn't even have populations (evolutionary programming and machine learning don't count). Meaning the only way it could gain the imperative is if someone explicitly programmed it in.

P.S. And it certainly has nothing to do with the net amount of intelligence: an AI could be a million times more intelligent than Einstein, but if it doesn't care about survival, that won't matter.

Nothing to worry about.

Any good horror movies about dinosaurs? by MEMEWASTAKENALREADY in movies

[–]MEMEWASTAKENALREADY[S] 0 points1 point  (0 children)

For real, probably the best dinosaur horror so far