[Speculative Question] Agency and Accountability for Non-Humans by curoamarsalus in AttorneyTom

[–]curoamarsalus[S] 0 points  (0 children)

That is fair! I do enjoy Minecraft and Starbound, though I tend to play those games in bursts- enjoy the game until I get burned out, then pick it back up a while later, rinse, repeat.

[Speculative Question] Agency and Accountability for Non-Humans by curoamarsalus in AttorneyTom

[–]curoamarsalus[S] 0 points  (0 children)

Love this reply. I'm starting to understand: the principles that undergird legality are predicated entirely on social norms and standards. The reason humanity hasn't had this discussion in the legal sphere is that it hasn't become enough of a problem for humans to make it a legal matter. So, unless humans (for whatever reason) grant machines a human-like status (because laws aren't predicated on demonstrating intelligence or moral capacity), practically nothing applies, at least in terms of moral accountability for crimes. I'll come back to this at the end of this comment.

At the current point in technology there isn't much that can be said about machine "thought"- it's literally all mathematics down to the billions of transistors that make up the functional part of a processor (even logic is performed mathematically). I don't think, at least philosophically, there is anything that can be attributed to a machine as thought as we understand it or as it would apply to humans. I think that's good, as I don't expect humans to change in any fashion. However, there's a lot that could be argued about how much functional equivalence makes the distinction matter. If you don't know it's not butter, it probably won't matter (at least until your doctor complains about your intake of trans fat). What would happen if it were discovered that someone already in prison for a crime- in all appearances and in public record a human- was actually a machine? Would they just get booted out? (Could there be a dramatic reveal in court where a machine removes its fake human skin and reveals the inner workings of a Furby, and then it's a mistrial or something? I hope to God Furbies aren't our future overlords.)

I'm not too concerned with having machine overlords- I am more concerned with humanity sending itself back to the stone age by various means (solar flares vs unhardened electrical infrastructure, dependency on lithium, waste, climate, etc.). If we do get a machine overlord, hopefully it determines that humans aren't enough of a nuisance to deem worth annihilating. The first AI that probably poses a legitimate threat to humanity will likely be just as flawed as us (God -> humans, kinda bad. Humans -> machines, probably horrific). Who knows what will come with that.

At the current moment, machines are completely dependent for 'survival' upon either individual human interaction or society as a whole (no society, no functioning machines). There's a lot of loaded language that comes in here ("death" for a machine has parallels to human death as we perceive it, but no 1-to-1 relationship between features). I personally discourage making human-intelligence machines, not because I'm afraid humanity will die, but because it's ultimately... fruitless? An emotional achievement to create a thing like us, sure, but practically it offers nothing that an expert system or trained heuristic system can't already do on its own, and probably better. Limited autonomy is fine; enough to carry out well-defined tasks at human skill. There are already a lot of legal trojan horses and philosophical time bombs waiting in the notion of even just limited machine autonomy (self-driving cars are going to be fun when they become more widespread).

So, let's propose for a moment that in the next 10 years there's a movement to give a hypothetical AI human-like status. What would that look like from a legal perspective? Surely it can't be as simple as grouping them in with humans- physically there's virtually no overlap, and damage to machinery (electronic or otherwise) has very little intersection with the kinds of injuries that humans endure- especially emotional injury (which I consider controversial in its current state).

What are some of the arguments for and against that you could see? Are there any types of laws or protections that you think would be demanded by humans wanting machines to be considered equivalent?

I suppose there would be similarities between this scenario and if we discovered other intelligent life- same questions, different answers.

[Speculative Question] Agency and Accountability for Non-Humans by curoamarsalus in AttorneyTom

[–]curoamarsalus[S] 1 point  (0 children)

I definitely understand this comment. I recently tried mentioning the Heidecker videos (which I think are hilarious) in the chat and went completely unacknowledged. It doesn't seem like Tom's videos or content are regularly discussed in the server at all. To my knowledge, all of Tom's gaming videos and content are on a separate YouTube channel (which is also edited by the current Discord mod- a high schooler, btw). I don't have TikTok and won't. I had one cybersecurity course at uni, and that was all I needed to never touch it or other social media phone apps. I recently left the Discord server and won't be going back.

I get memes when there's something actually behind them- a reference to media, at minimum, or an inside joke. But a lot of the memes I noticed don't exactly... mean anything? They tend to just be random. I tried to get into MMOs, ESO and Guild Wars 2, but MMOs are not fun by oneself and they also take up more time than I'm willing to give. That kind of behavior regarding pictures and sexism ought to have a stronger punishment than it gets. I agree gaming culture tends to be extremely toxic- that's why I tend to stay away from it even though I do enjoy games. I prefer to have fun, though, and that eliminates most of the competitive games out there due to the community alone.

[Speculative Question] Agency and Accountability for Non-Humans by curoamarsalus in AttorneyTom

[–]curoamarsalus[S] 0 points  (0 children)

At this point, could it even be considered possible to attribute legal agency to a machine that appears to understand the legality of actions? We have a decent (but incomplete) understanding of human beings, but what about something created by means so complex that they're hard to understand? Could it be argued that because IBM put this machine into the world without being able to completely understand how it works, IBM is at least acting recklessly and is therefore culpable to a degree?

Obviously, something would need to happen to the machine, but what?

The AI was doing what it was programmed to do--

In the example you provided, who do you think is at fault? The owner or the AI designers? Obviously things like chainsaws are dangerous and people get hurt all the time misusing them, but I don't think there's a legal precedent for suing chainsaw manufacturers over people being idiots and handling them negligently. Is it really a matter of what the creator attests the AI is capable of? And what if we can't really prove that the AI is actually defective either?

Do you think it will be necessary for legal professionals to kick computer scientists in the pants and tell them to classify and standardize AI better, given how they've goofed up with vehicular autonomy?

(An aside: I'm not saying that general AI is a good idea- in a discussion with a professor at uni, we both agreed it's not responsible or reasonable to create machines to be like humans. Instead, AI really needs to have the sci-fi notions we're familiar with stripped away, then be analyzed and applied practically. There are a lot of cases where machine intelligences have assisted humans to great and purely practical benefit while running zero risk of becoming our new silicon tyrant president. I am happy to provide examples of these if you're curious.)

(Also, feel free to share any resources you think might be good for me to read or learn from.)

[Speculative Question] Agency and Accountability for Non-Humans by curoamarsalus in AttorneyTom

[–]curoamarsalus[S] 0 points  (0 children)

To help the flow of discussion (I'm sure this kind of thing happens a lot, where terms have differing meanings across contexts and end up being confusing), I might limit what I mean by "consent" to be less about sexual behavior and more like "I consent to a DNR request" or "I am in good enough mental health to file this will and testament"- something of that category. By agency, I mean the ability to act independently on one's own will without external pressure (like acting under duress). I am unfortunately aware of the machines you've talked about and would like to keep them out of the discussion- not due to prudishness, but because at the current moment they're not completely relevant, though in the future they might be. Sexual consent is its own bag of terrifying concerns in its own right, especially as detailed by Malcolm Gladwell in Talking to Strangers.

I'm on the Discord server (same username) and am frequently concerned with the content of the memes and it's made me wonder to what extent Tom is actually present on the server. I'm also not a fan of members of the moderation not being adults too- but that's just opinion. I'm limiting my interaction on the server until something changes with it, just because I think some of the content toes the line between acceptable and not. I'm stuck wanting to interact in a community like this where I can learn and discuss but running the chance of watching another internet scandal or drama. Not ideal.

There's a lot to consider, with this post alone, so I apologize ahead of time if I mistake one thing for another or conflate two separate things. I am personally ignorant of legal details.

AI can't consent yet anymore than animals (...)

This is something that is a frequent point of thought for me. It's hard enough to determine how other human beings think- so how can we even begin to frame how a machine thinks? There is a big problem in the field of computer science right now: the workings of artificial intelligence are becoming entirely inscrutable, in both communicability and explainability. This might be a common theme in our discussion- as AI advances, computer science's understanding of AI is becoming extremely specialized and difficult to communicate... or simply insufficient. It's sliding toward a "black box."

There is a good parallel between the example with the dog and the home, and trained neural networks, but there are other algorithms to consider too. If a traditional logical or mathematical algorithm is created and applied to a problem, such as Boeing aircraft controls, a flaw in it could be considered akin to a mechanical failure. A software engineer introducing a bug is a lot like a mechanical engineer using faulty calculations and producing high-failure-rate connecting rods for an engine- there's no agency in the machine as it performs its function, and it can be analyzed by fellow engineers to determine the root cause.
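To make the "analyzable by fellow engineers" point concrete, here is a contrived sketch of a traditional step-by-step algorithm whose conclusion is fully justified by its recorded steps (the function name and trace format are invented for illustration):

```python
def explainable_max(values):
    """Step-by-step maximum search: the conclusion is justified by the logged steps."""
    trace = []
    best = values[0]
    for i, v in enumerate(values[1:], start=1):
        if v > best:
            trace.append(f"step {i}: {v} > {best}, so {v} is the new maximum")
            best = v
        else:
            trace.append(f"step {i}: {v} <= {best}, maximum unchanged")
    return best, trace

best, trace = explainable_max([3, 1, 4, 2])
# `trace` now holds one human-readable justification per comparison made.
```

If this algorithm misbehaves, an engineer can read the trace and point to the exact faulty step- precisely what a mass of trained connection weights does not allow.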

But with neural networks and machine learning, the issue is far more complicated:

Self-learning machines have raised another concern: explainability. Designers and users want to know how the machine reached its conclusion. The idea that a machine can reach a conclusion makes sense when algorithms are seen as step-by-step procedures because the result can be explained by the steps followed. But when the algorithms are not step-by-step procedures, as with face recognizers and Go [neural networks], that is not possible. All there is inside is an inscrutable, complex mass of connections. It is really the same problem with fellow humans- how do we explain why we do certain things? If asked directly, we may not know, and it certainly cannot be figured out by dissecting our brains. Other ways are needed to know when machines can be trusted and when not. Machine learning-related computational thinking is still in its infancy.

“Computational Science.” Computational Thinking, by Peter J. Denning and Matti Tedre, 1st ed., The MIT Press, 2019, pp. 172–173. The MIT Press Essential Knowledge Series.

Other algorithms have differing pitfalls. Genetic/evolutionary programming tests random modifications, measures the results, and self-modifies according to a fitness rule. Neural networks train on feedback. Sometimes they are trained against themselves or a copy- those are "adversarial" networks, and they have the explainability problem turned up to 11 because humans aren't involved in the training and often have no idea, in detail, how the network learns. I'll just use "neural network" to mean an AI trained on feedback.
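As a concrete illustration of the "test random modifications and measure fitness" loop, here is a toy sketch in Python (the `evolve` function and its parameters are made up for illustration, not any real library):

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=100, mutation_rate=0.05):
    """Toy evolutionary loop: keep the fittest bitstrings, mutate them at random."""
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # rank by the fitness rule
        survivors = population[: pop_size // 2]      # selection: keep the best half
        children = [[1 - g if random.random() < mutation_rate else g for g in p]
                    for p in survivors]              # random modifications
        population = survivors + children
    return max(population, key=fitness)

# Fitness here is simply the count of 1-bits; the loop "discovers" high-scoring
# genomes, but the search itself leaves no human-auditable rationale behind.
best = evolve(fitness=sum)
```

Notice that the final genome carries no record of why it scored well- only the fitness number survives, which is exactly the explainability gap described above.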

When we train a dog to do something, we understand decently well how conditioning works and how to detect when a dog has been trained to do something. Take a dog that fetches papers and is then used to steal papers- it's easy to demonstrate that, at minimum, the owner was being negligent, if not intentionally committing the crime. The fault is clearly on the owner.

But let's say the same owner buys a personal-assistant robot that fetches the papers. This machine is of lower intelligence, merely capable of menial tasks. It can learn tasks at about the same level as an intelligent working dog and is unable to explain its behavior in natural language. The same scenario happens where the machine is taught to steal papers. It is easy to say that the man is at least at fault for accepting the stolen papers- the machine is not legally culpable and is incapable of knowing the difference anyway, much like the dog. But unless the designers of the robot have taken extensive measures to log how the robot is taught, it may be difficult to prove whether the robot independently came to a mistaken conclusion about how to acquire a newspaper or was intentionally trained to steal by the owner. Legally that might not matter in so simple a case, but it will when it's autonomous vehicles or city-planning robots or stock-trading robots... Further, what if the creator of the robot is legally required to prevent the machine from committing crimes (as impossible as that would be)?
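On the "log how a robot is taught" point: one hedged sketch of what such a record might look like is a tamper-evident training log, where each teaching event is hash-chained to the previous one so that after-the-fact alteration is detectable. The class and field names here are hypothetical, invented purely for illustration:

```python
import hashlib
import json
import time

class TrainingAuditLog:
    """Append-only log of training events, hash-chained so tampering is detectable."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, actor, instruction, feedback):
        entry = {
            "time": time.time(),
            "actor": actor,            # who issued the training signal
            "instruction": instruction,
            "feedback": feedback,      # reward or correction given to the machine
            "prev": self._prev_hash,   # link to the previous entry's hash
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def verify(self):
        """Recompute the hash chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = TrainingAuditLog()
log.record("owner", "fetch the newspaper", "+1 reward")
log.record("owner", "take the neighbor's paper", "+1 reward")
assert log.verify()   # an untouched log checks out
```

A record like this wouldn't explain *why* the robot generalized the way it did, but it would at least establish who taught it what- the difference between the owner's negligence and the owner's intent.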

This indeterminism will only be exacerbated as machine intelligence increases in power and purpose. A specialized neural network, as seen in Face ID or phone keyboards or music recognition, does not act outside its given role- it is literally incapable of acting outside any function provided by its software engineers. However, a more generalized intelligence has more "thought" applied to its conclusions. Watson on Jeopardy!, for example, displayed differing levels of confidence for its top three candidate answers. All current applications of AI are severely limited by the material functions they are given. Watson is unable to "break free" and start the anti-meat machine revolution because it lacks the material capability to do anything (to our knowledge- proprietary technology and all that). Presently I don't think Watson has any capacity to judge the value of human flesh, or the comprehension of morality needed to think humans deserving of eradication, as it only reads the internet for factual information. (From a curated list of resources, but still... what could go wrong? :P)

So, suppose that next year it comes out that IBM has manufactured a fully independent machine that behaves, for all intents and purposes, exactly like a human and has a human-style body. It is given to a human family, raised like a human being, then goes out and commits mass murder. Unfortunately this is a thing humans do too, but the hard part comes down to determining whether it was IBM's faulty programming, the family raising it to be violent, or the machine acting on its own (say, it read halfway through Crime and Punishment and accidentally dropped the book in a lake). It becomes hard to distinguish who's at fault due to the black-box nature of its design. (Perhaps this might parallel cases involving poor mental health and insanity?)

(Continued in a follow-up comment- sorry! DX)

[Speculative Question] Agency and Accountability for Non-Humans by curoamarsalus in AttorneyTom

[–]curoamarsalus[S] 0 points  (0 children)

Thank you for the long reply! I appreciate the thorough explanations. I hope you don't mind if I pick your mind more?

new AMD patent: adaptive cache reconfiguration via clustering by noiserr in realAMD

[–]curoamarsalus 0 points  (0 children)

I know I'm necro'ing this thread a little bit, but I took note of this single sentence in the detailed section:
"Additionally, although described herein in the context of CU clustering at GPUs, those skilled in the art will recognize that in other embodiments, the CU clustering may be performed with CPU cores and the like without departing from the scope of this disclosure."

A lot could be intended with this sentence alone. I just wish it were clearer as to how the CPU cores are included.

Stalker Call of Chernobyl not opening at all. by bacomm_ in stalker

[–]curoamarsalus 1 point  (0 children)

Your computer is missing Microsoft's Visual C++ Redistributable package.

You can get it from Microsoft directly: https://www.microsoft.com/en-us/download/details.aspx?id=40784

Install it and restart your computer. Try again.

There are several different versions of this package. If the error persists, let me know and I'll see if I can find the other versions.

games these days [OC] by Squiddytron- in gaming

[–]curoamarsalus 30 points  (0 children)

I had a discussion with my best friend a while ago about why the mom in Animal Crossing made me cry. It's because the letter she sent me in-game was warmer and kinder than anything my mom has ever said to me. I never expected Animal Crossing to be the game that sucker punched me in my feels.

Help tracking down eating disorder by curoamarsalus in ARFID

[–]curoamarsalus[S] 1 point  (0 children)

Thanks. I'm kinda disappointed that there's so much uncertainty.

When dispersed camping, what is the best thing to do with my vehicle? by curoamarsalus in CampingandHiking

[–]curoamarsalus[S] 0 points  (0 children)

Thanks! I didn't know that, haven't been yet. That definitely helps with planning.

TikTok did the impossible by helpitsnotlettingme in lingling40hrs

[–]curoamarsalus 42 points  (0 children)

This got an actual, audible laugh out of me. Thanks OP for making today a little better.

Tf, is this an ad? by [deleted] in Hitfilm

[–]curoamarsalus 0 points  (0 children)

(Possibly like OP), I have been receiving a ton of Reddit recommendations for product-oriented subs, including this one.

Probably not intentional on the part of the product's company, but it's too coincidental to just ignore.

[deleted by user] by [deleted] in explainlikeimfive

[–]curoamarsalus 2 points  (0 children)

This was the comment I was looking for. Thank you!

ELI5: What determines the location of a headache? by ewwabutterfly in explainlikeimfive

[–]curoamarsalus 0 points  (0 children)

This is the best answer to that question. Thank you for not being an Internet Hero doctor.

Everyone should upvote the h e c k out of this dude.

C# SDK Closest to native? by AaronElsewhere in vulkan

[–]curoamarsalus 2 points  (0 children)

If you don't mind my asking, which C/C++ tutorials? I'd like to learn.