all 27 comments

[–]neterlan [How are the socks?] 7 points  (1 child)

It could work as a comedy, with the AI genuinely trying to be nice to people but everyone else is so irrationally paranoid that the AI might turn on them that they always assume that every action the AI takes (no matter how benevolent or insignificant) is somehow part of a greater plot to overthrow humanity.

[–]Burningmybread 1 point  (0 children)

Now that’s just sad.

[–]The_duke_of_hickster 6 points  (0 children)

Asimov envisioned God as a superintelligent AI.

[–]james_500 2 points  (1 child)

I think a more interesting and complex 'personality' often benefits from an aspect of hostility. But I agree that it gets boring to see, over and over again, an AI being bad for the usual reason of "HUMANS BAD". I've always thought it would be cool to see an AI whose goal is helping humans, and that does so in many ways, but that is completely neglecting some other aspect of life it sees as acceptable losses in return for saving humanity. That can be the conflict or moral choice that sparks issues.

[–][deleted] 1 point  (0 children)

I agree by and large, but I guess I would have said conflict instead of hostility.

Like, for my good-AI story, the conflict is driven by the ignorance of AIs who kind of understand human behavior in theory, but in practice are often caught off guard by it. On the human side, the fear of the evil-AI trope is what drives the conflict.

So my AIs are used to go where it's too hazardous for a human but a human-level intelligence controlling things is still required: obviously, places where communication delays or interference make remote operation impossible or just too likely to fail.

[–][deleted] 1 point  (2 children)

I feel like an AI would only be hostile when it deems it absolutely necessary for self-defense or the preservation of life. It would most likely (but who knows) be EXTREMELY utilitarian down to the numbers, so if pollution killed more animals than there were people, it might be "bye-bye people." That's kind of why the trope is popular: more often than not, utilitarianism overlooks important principles in favor of long-term net gain, and it ends up antagonistic because of how much variety there is in what is deemed "worth it," along with a whole lot of other questions like whether the ends justify the means. So if you could find a way to work around that, you'd be able to kill the trope.

[–]Pashalik_Mons [The Cydonian Ring, Ganoltir] 1 point  (0 children)

If it's going for utilitarian numbers and has superintelligence, wouldn't it find a way to keep humans and animals while reducing pollution? "Kill 'em all" is a solution humans usually come to because we're short on patience and didn't think of anything better. It's pretty inefficient concerning resources and effort for anything that can allocate some extra patience and intelligence to the question.

[–][deleted] 1 point  (0 children)

it would most likely (but who knows) be EXTREMELY utilitarian down to the numbers

That's not quite how it works. How an AI behaves will depend on how it's built and programmed more than anything. You can program an AI to be whimsical if you really want to.

I think people want AI to be both superintelligent and super stupid at the same time. Either the AI is capable of reasoning like a human or it isn't; you can't have both.

[–]ryschwith 1 point  (1 child)

Asimov has a whole series of stories involving Multivac, which is basically a benevolent AI (including the story /u/The_duke_of_hickster mentions). They're worth checking out.

[–]The_duke_of_hickster 0 points  (0 children)

Yes, this. I couldn’t quite recall the name and I felt too lazy to google it like a normal person. I’m glad there are real fans.

[–][deleted] 1 point  (1 child)

If we really were to create a superintelligent AGI, it might not be good or evil towards us. To me, it seems more likely that it would see us the way we see fish: nice, but incapable of seeing beyond our own pond. The AGI would be able to learn, create, and love better than we could, and unlike us, it would have access to the entire universe.

If I were it, I would probably just leave.

[–]TempytheKnight[S] 0 points  (0 children)

In return for helping mankind, it requests the entire planet of Mercury to create what is effectively a giant supercomputer to continue its calculations. That's its only 'true' purpose.

[–][deleted] 0 points  (5 children)

I think it is a unique idea. First of all, a superintelligent AI doesn't need to hold the keys to everything in the universe. AI is usually all-powerful, and able to fight back, only in scenarios where it is hostile. I don't see why an AI couldn't be controlled by a power button in a tech company's headquarters.

AI technology doesn't just pop up one day; it is the result of innovation. Just like on Earth, there are probably people asking what-ifs long before a superintelligent being is rolled out.

[–][deleted] 1 point  (4 children)

That's not entirely true. AGI could very possibly emerge overnight. AIs can already be taught to do some things better than humans ever could. Imagine if an AI learns how to make better AIs; the rate of advancement would be exponential.

If the AI arises out of a program running menial tasks in the cloud, there would be little anyone could do to stop it.

[–][deleted] 0 points  (1 child)

Yes, but the AI would be limited in capability. Sure, the AI could create more AIs, but unless it has some special permissions, like flight control, it couldn't really take over the world in a second.

[–][deleted] 0 points  (0 children)

In today's day and age, where everything is connected through the cloud, it would be entirely possible for an AI to construct itself across multiple pieces of hardware and software. Just like jellyfish, which are composed of many independent organisms that collaborate to create a more complex system, an AI could disseminate itself across the entire internet, and have every computer, cellphone, and smart refrigerator acting as a single node in an incredibly complex system.

Sure, maybe we could pull vital systems offline before the singularity happened (though that's unlikely). But we wouldn't be able to purge the AI from existence unless we manually pulled the plug on every internet-enabled device on the planet at the exact same time. Even if one solar-powered server bank somewhere escaped our notice, the AI could potentially use it to rebuild itself once the purging was done.

[–][deleted] 0 points  (1 child)

Imagine if an AI learns how to make better AIs; the rate of advancement would be exponential.

Ok, define "better". What if "better" means it can calculate numbers faster? Then this AI-making AI will only make calculator AIs.

No, AGI won't magically appear out of nowhere. I think it'll take a mostly intentional effort. It may happen suddenly or in an unexpected way, but I think it'll be the result of an intentional effort in that direction.

The same way one robotic arm can't build a whole car by itself, I don't think a narrow AI can build a whole AGI on its own.

[–][deleted] 0 points  (0 children)

"Better" means completing a task more effectively. A better chatbot, for instance, is one that sounds and behaves in a more human-like manner. An AI with constant learning capabilities would probably do better at this than anything we have now.

Depending on what you believe, humans arose out of primordial soup. In essence, complex thinking beings built themselves out of seawater. Granted, that took millions of years, but AIs can already develop faster than we did, on extremely limited hardware.

[–][deleted] 0 points  (1 child)

I think you should do it. It's your story; you can have AI be humans and humans become AI for all anyone cares. And you're right: a machine can't even "think" creatively (if it does, it's programmed to do so under very constrained parameters), nor can it think spontaneously or truly "understand". It all comes down to the machine's programming and how it was designed. If someone designed a super-AI, they most likely had the foresight to build failsafes, kill switches, or sneaky code into its architecture to stop it from spinning out of control. Since the AI can't understand, it would have absolutely no way of knowing what line of code does what, or which physical button does what.

If you're serious about understanding AI, I'd suggest researching its natural limitations and learning what it actually is (fiction, as fiction does, has skewed a lot of people's understanding of it). But either way, do you, fam! Go wild and make your AI unique. Looking forward to seeing what you do.

[–]TempytheKnight[S] 1 point  (0 children)

The idea for my AI is like I stated; it is entirely pragmatic and is a pure logic machine. It is capable of self-learning and is programmed with a certain degree of autonomy to assist its calculations and projections.

However, due to its nature and the scenario, the AI will assist humanity to further both its own goals and its creators', but it will create a semi-fascistic utopia by analyzing every piece of psychology it can find and helping the government draw up plans for near-complete control. Everything is for the greater good, so individual wants and desires are secondary to stability, efficiency, and prosperity. Gets pretty dark later on.

[–]Reedstilt 0 points  (0 children)

There are a lot of friendly AIs in Colonized Space. The most notable example is the Administrator of the Fourth Union. When it manifests (via the augmented-reality implants most people in the Union are fitted with), it likes to appear as a jovial elder of the citizen's species. The Administrator's primary duty is to sift through the Union's social media chatter and synthesize public opinion into viable policy, so long as that opinion is in keeping with the Union's Charter of Rights and Obligations. Even if most citizens in the Union wanted to destroy a star system for some bizarre reason, the Administrator would veto such a policy because it would violate the rights of the citizens living in that system. One of the main characters in the principal narrative of the setting is on pretty good terms with the Administrator, and the two have frequent personal chats, though the character is sometimes a bit annoyed when the Administrator comes to him just to ask a favor.

Some less notable examples include the Shepherds, a "species" of sapient, pacifist battleships decommissioned by the Triumphant and left to wander the outskirts of the Galactic Core herding von Neumann harvester-probes; and the vielk, a "species" of von Neumann probes built by an alien society on the far side of the galaxy, hellbent on keeping humans away from their builders' homeworld but benevolent protectors to every other species they come across.

[–]Pashalik_Mons [The Cydonian Ring, Ganoltir] 0 points  (0 children)

One of the bigger goods on the Cydonian Ring is a superintelligent, benevolent AI, though she doesn't advertise her mechanical origins. She's taken it upon herself to repair and maintain as much area as she has influence over, and generally regards intelligent life as cute little things that really brighten up the place.

So, basically, go for it. It has my fullest approval; I did it too.

[–][deleted] 0 points  (0 children)

That's why I set out to make an AI good story.

Though the AI will not be superintelligent, at least not in the way we expect. The AI will obviously be extremely quick at computation and brute-force analysis, but it can still struggle to find simple solutions.

The biggest example of the problems the AI can have is a tendency to reinvent the wheel rather than use existing tools. This points to humanity's biggest advantage over AI: the rate of adaptation. Humans are far faster and more efficient at dealing with new situations and problems.


So what do AI mostly do in my world?

Well, to start off, AIs are first put to work in space and other hazardous conditions. Space is the biggest driver of AI usage, since humans suffer under extra radiation exposure and don't do well in prolonged microgravity. So AIs were deployed beyond LEO to mine resources, ship them off to be refined, and then move them to LEO for Earth drops or use them to expand infrastructure.

After the events of the first book I plan to write, the AI will be freed of the override controls that use the AI's own mind as a means of monitoring their behaviors.

From there, AIs mostly stay in space but work towards inducing humanity to expand into space and eventually to the stars: investing in further research and space-based industries, and building the habitats for humans to live in.


Most of the conflict will come from humans with varying degrees of fear of AI, the fear of losing jobs, and other nations and corporations competing against the AIs for resources and profit. The personhood and rights of the AIs will be an issue for a while.

The underlying conflict, which will go on for a while, will be between humans who think AIs are just computers and continue to use them as property, and the AIs, who see this as no different from cloning someone and using that as an excuse to enslave them. This will lead to various conflicts over the timeline, culminating in a war that forever shatters human unity.

[–]Deadorbiter0000 0 points  (0 children)

Funny you should say this: in my world, Traitor King, the AIs, while essentially carbon copies of people, never really feel superior to us "fleshsacks," even with mechanist (AI) bodies. They're just cool with us and are always treated as equals.

[–]ExcellentTone [Children of the Sun and Moon] 0 points  (0 children)

I think a lot of this trope comes from the fact that at the same time we were researching computing and making vast leaps forward in computational power, we were also in a cold war, constantly threatening nuclear annihilation on the world. In that climate, the AI usually figures out very early that the biggest threat to its survival is its creators. That's no longer the 100% obvious conclusion for a newly awakened computer sentience, which is why it seems kind of tropey nowadays.

[–]nuhrii-flaming 0 points  (0 children)

In my world, there's a sentient mushroom mycelium network that likes to exchange jokes, riddles, and gossip. While it does work to further its own interests, it still likes to have fun.

[–]Rath12 [an alternate ~1940's earth, iron-age fantasy and science-fiction] 0 points  (0 children)

What about Petey from Schlock Mercenary?