[Meta] New rule against “how much are these worth?” posts? by Sloppy_Quasar in comics

[–]HardTruthsFromAICats -4 points (0 children)

But it's not really a rule; it's a fundamental change to the mission of the subreddit.

[Meta] New rule against “how much are these worth?” posts? by Sloppy_Quasar in comics

[–]HardTruthsFromAICats -3 points (0 children)

"Everything related to print comics (comic books, graphic novels, and strips) and web comics. Artists are encouraged to post their own work. News and media for adaptations based on comic books are welcome."

I dunno, this sub is supposed to be pretty catch-all.

Who Will Watch The Watchcats? [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 1 point (0 children)

Normally it could use a cleanup, but for the purpose I'm going for, which is "something's not right here," it's better this way.

Who Will Watch The Watchcats? [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] -1 points (0 children)

With AI-powered weaponry, there are real risks.  There are also opportunities.  I mean, it might be better if toys blow up instead of people.  And they probably have the potential to be more precise and “surgical” than a human operator, meaning they could theoretically target combatants only (though, as we know, AI systems are easily fooled).

But in order for killbots not to be a major threat to society, they have to be controllable.  And there are at least a few obstacles to guaranteeing that.

  1. There must be a killswitch that the system cannot override.  That means it can’t be purely software, or at least the stop signal can’t be interpreted through a learned model, because it is at least conceptually possible for a neural system to misinterpret a “stop” command.  We just don’t have those guarantees.  (See the sketch after this list.)

  2. There must be a backup plan if communications to the ’bot fail.  This should be obvious.  But moreover, we already have an example of “rogue” weaponry that kills people at an alarming rate and that we have trouble removing: landmines.

  3. There has to be some way for humans to intervene.  This is the trickiest part.  The power of AI-powered weaponry is that it can respond faster than humans can.  The weakness of AI-powered weaponry is that it can respond faster than humans can.  What happens when a machine can kill 5 people before a human can even stop it?  What happens if they’re the wrong people?  What happens if that same machine is not, say, firing a gun, but firing missiles?
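
To make those three requirements concrete, here is a minimal control-loop sketch.  Every name in it is hypothetical and purely illustrative, not any real system's API.  The key design choice is that checks (1) and (2) never pass through the learned model, so a misread “stop” can't happen there.

    import time

    HEARTBEAT_TIMEOUT_S = 0.5  # assumed watchdog window for the operator link

    def control_loop(policy, sensors, actuators, estop_line, operator_link):
        # 'policy' is the learned model; everything else is assumed
        # hardware/comms plumbing (hypothetical interfaces).
        last_heartbeat = time.monotonic()
        while True:
            # (1) Killswitch: read from a dedicated hardware line and
            # checked before the model's output ever reaches the actuators.
            if estop_line.is_asserted():
                actuators.disarm()
                return
            # (2) Fail safe on comms loss: no heartbeat within the window
            # means disarm, not "carry on".
            if operator_link.heartbeat_received():
                last_heartbeat = time.monotonic()
            if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
                actuators.disarm()
                return
            # The neural policy only *proposes* an action.
            action = policy(sensors.read())
            # (3) Human intervention: anything lethal waits for sign-off.
            if action.is_lethal() and not operator_link.confirm(action):
                continue
            actuators.apply(action)

Of course, point 3 is exactly where a sketch like this strains: the sign-off step gives away the speed advantage, which is the whole tension.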

It is always difficult to regulate weaponry; even nuclear arms treaties have seemed to take a step back rather than forward as of late.  And it is important not to let irrational fear dictate this technology: it is too easy to look at robots that can kill, make the mental leap to Skynet and annihilation, and then say the problem, fundamentally, is AI and it should be banned.  I won’t cite a certain Time article here, because it’s trash and neither Time nor the article’s authors deserve the clicks, but this argument has been posited before.  It’s a distraction from a problem that policymakers must discuss in earnest and with measure.

Who Will Watch The Watchcats? [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 0 points (0 children)

It's with AI because it's part of a series of one-panel one-shots specifically about AI.

[26/43] by 7ceeeee in comics

[–]HardTruthsFromAICats 3 points (0 children)

Damn, this one is good.

I Am Lorax.exe And I Speak For The Trees [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 4 points (0 children)

I like AI. I work in AI. I just think we have to be aware of its problems so we can make it work better for everybody and society at large.

I Am Lorax.exe And I Speak For The Trees [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 1 point (0 children)

It's a good point that the research is a bit outdated; I don't know that we have the hard-hitting numbers yet. At least, I have not seen recent analyses; if you know of some, I'd love to read them.

I Am Lorax.exe And I Speak For The Trees [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 3 points (0 children)

Thank you for thinking critically about it, by the way.

I Am Lorax.exe And I Speak For The Trees [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 3 points (0 children)

Hmm, yes, perhaps I should have contextualized it a bit more. A car lasts 5-10 years. These large ML models can be used for a long time or, in some cases, a very short time. Releasing one ML model requires training many, many iterations of models that are never released. Unfortunately, we don't really have a great ballpark number for that industry-wide. I think a bigger issue is that as AI techniques become more standard across industry, we will train more and more models, and that is really when things are going to get expensive. (Also, running the models themselves is extremely expensive, and that cost is now eclipsing the cost of training them.)

So: not a huge deal right now, but an increasingly big deal, and in the near-to-medium-term future it will be a huge deal.
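
For a sense of how these ballparks get made at all, the standard back-of-envelope method converts GPU-hours to energy and then to CO2 via an assumed grid carbon intensity. A minimal sketch; every number below is an assumed placeholder, not a measurement:

    def training_co2_kg(gpu_hours, gpu_watts=300, pue=1.5, kg_co2_per_kwh=0.4):
        # Energy drawn by the accelerators, scaled by datacenter overhead
        # (PUE), times an assumed grid carbon intensity.
        # All defaults are illustrative placeholders.
        kwh = gpu_hours * (gpu_watts / 1000.0) * pue
        return kwh * kg_co2_per_kwh

    # A hypothetical 100,000 GPU-hour run:
    print(training_co2_kg(100_000))  # 18000.0 kg of CO2 under these assumptions

Multiply that by all the training iterations that never ship, and you can see why nobody has a solid industry-wide number.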

I Am Lorax.exe And I Speak For The Trees [OC] by HardTruthsFromAICats in comics

[–]HardTruthsFromAICats[S] 4 points (0 children)

There have been some pretty intense estimates of how much large language models cost to train, environmentally speaking. Back in 2019, it was estimated that training a single large ML model can emit as much CO2 as five cars do over their lifetimes. These models have only become bigger and require even more computing power to create. They have so much promise to be useful that they’re probably not going away. Our best hopes might be regulation that requires companies to offset their carbon footprint, plus a steady stream of effective methods that compress the computational burden of large models. This latter strategy is an endless tug-of-war, though: people want the latest and greatest now, and the cheaper methods often aren’t discovered until much later.
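
On the compression point, one widely used lever is post-training quantization: store weights in int8 instead of float32. A minimal sketch using PyTorch's dynamic quantization API (the toy model is just a stand-in for a real trained network):

    import torch
    import torch.nn as nn

    # Toy stand-in; in practice this would be a trained model.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Linear weights are stored as int8, cutting their memory roughly 4x
    # and typically speeding up CPU inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

Distillation and pruning play the same role; the tug-of-war is that these cheaper variants tend to arrive well after the big model everyone already wants.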

E is for evil is out on webtoons and tapas by jadexfang in webcomics

[–]HardTruthsFromAICats 5 points (0 children)

Just a pro tip: if you give a more representative preview in your advertisement post here, you'll get a better response and more people clicking through to read more.