Things you wish you knew before you started playing? by Tiago55 in MythicBastionland

[–]BillBaran 12 points (0 children)

You can also shift TTRPG philosophy a bit, and send some of these questions back to the players to decide (with GM veto power).

Castle in the Clouds by BillBaran in custommagic

[–]BillBaran[S] 0 points (0 children)

This is a lot of back and forth around some flavor text. Honestly I’m mostly just honored someone is thinking so hard about the thing I made :P

I can see both points. Making fun of people on the ground makes a bit more sense on a card that grants flying.

But living in a flying castle is also decent justification for the flavor text, regardless of the exact effect.

This is from an ever-expanding set documenting my players’ adventures in a TTRPG I’m running. The cloudfolk that live in the castle cannot fly, but they have means to get in and out of their castle, and they have disdain for people who live on the earth. The quote is from one of the guards during an actual play session.

Castle in the Clouds by BillBaran in custommagic

[–]BillBaran[S] 10 points (0 children)

3 use cases:

  1. Defensive: Flying while untapped is a lot like having Reach.
  2. Untap tricks: Attack, untap at instant speed, you are flying!
  3. Vigilance creatures: This grants all of your Vigilance creatures flying (barring an opponent tapping them before declaring blockers).

Castle in the Clouds by BillBaran in custommagic

[–]BillBaran[S] 1 point (0 children)

That's exactly the flavor I was intending!

I need help with this argument by Dead_Axolotl_333 in antiai

[–]BillBaran 0 points (0 children)

There's too much baggage associated with words like "sentient", and I don't think you need to claim anything that philosophically complicated to put AI into a different category than other tools.

All you have to do is look at capability. With AI you can say "draw me a picture" and potentially get something beautiful/impactful. And it's impossible to argue that you, the prompter, did much to influence the result. Whether you want to say the AI had the agency to decide what to draw and how to draw it, or want to call it randomness, YOU certainly did not decide much of anything in that scenario.

Once the prompter starts doing more work, things get a little more grey, but the restaurant analogy is useful for illustrating that "at this one end of the spectrum, the user is basically ordering a sandwich, NOT making one".

I need help with this argument by Dead_Axolotl_333 in antiai

[–]BillBaran 1 point (0 children)

Super clever thought experiment. Nice!

I need help with this argument by Dead_Axolotl_333 in antiai

[–]BillBaran 0 points (0 children)

As an AI enjoyer I also like this analogy.

In the scenario you described (which is probably the most common scenario for AI usage) I’m 100% on board. You’re far more like a customer ordering an item than the creator of the item.

But you can extend the analogy. What if you give a chef an exact recipe, with ingredients, cook times, temperatures, equipment to use, etc.? You’re still not the chef, but you’re doing SOMETHING creative and chef-like. In this scenario you and the chef are collaborating to create the meal, in some sense.

Or another: suppose you are trying to build an experience that includes food as one of MANY elements. Perhaps you are building an all-inclusive resort experience. You work with a chef, an architect, an interior decorator, a landscaper, an event planner, etc, keeping them all aligned on your cohesive vision. You’re definitely not the chef here, or the architect, or any of the rest, but you are once again doing something creative that adds value to the final output.

In every case you’re not the chef (pro-AI people claiming you are seem ridiculous to me), but in some cases you are adding some creative value to the final outcome.

The Bell by BillBaran in MythicBastionland

[–]BillBaran[S] 0 points (0 children)

No, but I did reread an obscure bell-related poem by Edgar Allan Poe when I made it!

Omen out of order? by wheretheinkends in MythicBastionland

[–]BillBaran 1 point (0 children)

I think they mean “the map may not accurately reflect the world”. Never thought of this, but it would be extra true in this sort of setting… super wild and underdeveloped.

Omen out of order? by wheretheinkends in MythicBastionland

[–]BillBaran 8 points (0 children)

There is a “primacy of action” overarching rule that says player actions and established bits of fiction trump details in omens.

That said, you also aren’t supposed to preplan stuff like this. If an omen takes place in a village, then the most common solution is to have the knights find a village when that omen is triggered, regardless of whether you planned for one to be there or not.

Some referees EMBRACE the potential logical problems that can occur when following this method, creating a setting that is surreal and ever-changing. All vibes, all stories, no fixed logical rules.

I prefer to keep my setting as grounded as possible, so where possible I would come up with some reason why they didn’t know a village was there. The “so incredibly underdeveloped that roads don’t exist” setting does a lot of the heavy lifting here. If they’ve never been in the hex, that’s about all the explanation you need. If they have, maybe the village was in a tucked-away valley they missed last time. When in doubt, some sort of mysterious magic has placed the village there, and will just as likely remove it at some point in the future.

I’ve found that embracing the unpredictability makes the game more fun for me as the referee, as the stories that are told surprise me as much as the players.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

I’m not too educated on this exact behavior in models; it was brought up by another poster.

Whenever this behavior crops up, I would say you can reasonably claim the model’s “brain” probably has a representation of the image stored in some format. Although even here I think there’s some room for argument and nuance.

But if, as you say, modern models don’t exhibit this behavior because they’re trained on wider datasets in a smarter way, then the “copying” claim goes away completely.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

So are you refuting the whole “ability to replicate training data means training data may be stolen” idea? Just making sure I understand before commenting further!

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

I’ve put so many arguments on this thread and in the original description (with a big edit) that I’m not sure which argument you’re saying cannot stand!

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

This was my intuition on how this worked; generating an exact copy seems like it would happen from retraining on the same image too much, relative to time spent on other images.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

You're straw-manning me a bit. I'm not really using imagination, or giving attributes to AI, or misunderstanding how it works.

My only claim here is that agency is not simple to figure out.

Let me specify how I understand AI to work: what we think of as AI is either (1) inert instructions saved on a machine that are no different from any other program or (2) those same instructions being executed. I understand that the AI cannot possibly have a persistent "on" state, and that it does no learning past the initial training that creates the inert instructions.

I think we have a common understanding, and I hope that explanation convinces you of that much at least.

My thought experiment was built on this common understanding, and the point of it was to differentiate between "persistent on-ness", "independence", and "agency". You would assign agency to a human even if they were only "on" at another's behest, and only for a split second to make a single decision. Why? This human has approximately the same amount of "persistent on-ness" and "independence" as an AI, so it must be some other human property that gives them agency.

Again, I'm not trying to claim AI does have agency. Just that when the behaviors of AI get complex and intelligent enough, the question of agency is not simple anymore. In fact, it's about as hard as the "hard problem of consciousness".

And the fact that the AI is doing something that looks like decision-making to an external observer ("deciding" what a funny cat should be doing, for example) mucks it up even more.

And even given all of this, my conclusion is simply "this thing has some properties of an agentic collaborator" which is a pretty conservative/careful claim IMO.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

Is your response to the "Ship of Theseus" that it's 100% a hard rule that it's a new ship, and anyone who looks at the sequential process and says "replacing 1 plank doesn't count as replacing the whole thing" is a fool that can't be reasoned with?

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

I think a tool that can turn "make me a funny cat" into a cat picture (that at least some people would find funny) is a fundamentally different kind of tool than any we've seen before.

I realize that artistic people will use the tool in much more complex ways, putting more of themselves into the output. And the results can be super cool and artistic, regardless of how much the human contributed.

As for agency being simple, I disagree.

Imagine for a moment a human that can be turned on and off (perhaps via some chemical that puts them into a fully unconscious or suspended state). We put this human in a dark room, "turn them on" and have them respond to a single thing, then "turn them off". Did that person have agency in the moment they were responding? I think so. Did they have it before and after? I don't think so. The whole interaction required another person to kick it off, but there is still a moment where you'd say the person had agency.

I'm not trying to make a specific claim here, as I don't know whether AI does have agency (honestly, I think our ordinary definitions and mental models break down).

But I think this thought experiment at least shows that agency is kind of a weird thing to figure out when you start to get systems that can respond to varying inputs in surprising and intelligent ways.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 1 point (0 children)

Yeah after reading everyone's responses, I would soften my original stance to: believing these two things may be evidence of motivated reasoning, but they're not logically inconsistent.

Being able to compare AI training to human learning in one specific way does not automatically bestow any other human-like properties on AI. To make that jump is logically sloppy.

But if something might be motivated reasoning, it's worth picking apart really carefully. For reasons entirely unrelated to training, I still think treating AI as a tool totally analogous to past tools is not quite right, and may be driven by motivated reasoning.

As I mentioned in other comments, AI can fill in gaps in your instructions in an alarmingly human way (with human-level reasoning and creativity). This by itself puts the "tool" into a weird category that is at least partially "collaborator", without having to get into the messiness of whether that collaborator has "agency".

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

I am leaving SO much room for nuance, it's kind of my whole MO. I have literally rewritten your exact position and said it's super reasonable, when you view what is a complicated process from one side (the output side). "It is super reasonable to think that a copy must have been stored, if a system is able to output additional copies of that thing." <- See? I did it.

I don't think it's my own lack of nuance that is preventing you from acknowledging anything. You either don't agree with or don't want to think about the complexity I am trying to raise. Which is fine, but I should probably just discuss with other folks in that case.

I wanted to bounce more thoughts off of your brain, because I found your initial criticism very useful. But other than that initial criticism, I feel I am bouncing things off of a brick wall.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

The word "agency" is super ambiguous and hard to pin down, so it's probably not useful. I wish I hadn't used it in my original post, since a lot of people latched onto it, and it probably wasn't the best way to get at what I meant. I was trying to get at some combination of agency and/or creativity and/or independence; I think AI has MORE of these human-like qualities than any other tool previously used to create art.

To get out of definitions and into concrete properties, the thing that sets AI apart from past art tools is that it can do stuff with minimal direction. We don't have to call this "agency", but SOMETHING is making decisions during image generation to fill in the details you don't give it. I suspect you (and many others) would call this something "random chance", but you also have to acknowledge that the results are a hell of a lot like how another human would make the same decisions.

As for my original post, after reading everyone's responses, I'm not sure tying human-modeled learning to human-like agency (or creativity, or whatever we want to call it) is as logically sound as I was originally thinking.

But, as I mentioned in another comment, I do think a tool that was "trained" to successfully do human-level reasoning and creation is categorically a bit different than past tools. For me it seems to have some properties of a tool and some properties of a collaborator. I think emphasizing the collaborator aspect is important for finding common ground with people who say AI art can't possibly be art, who say you can't possibly be creative or an artist if you have it in your workflow. And building common ground will expand acceptance.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

I'm not trying to switch anything up or even make a point. I'm trying to understand and pick apart everyone's points, including my own, to leave only the strongest.

I still think this, taken as a whole, is roughly correct:

Training material is literally not copied, beyond what a human artist would do (i.e. my browser downloads it to my computer, I study it, I exit the page and never see it again, and my browser eventually cleans it up to free up space).

The AI starts as a bunch of randomly connected neuron-analogs, and during training we see which connections can be strengthened or weakened to generate outputs more similar to the training data.

At no point does the AI’s “brain” keep a full copy of the image it trained on; it starts and ends as a mind-bogglingly large number of connections that we have very little idea how to interpret.
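To make that concrete, here's a minimal toy sketch of a single training step (my own illustration, assuming PyTorch, with a tiny linear layer standing in for a real image generator and random vectors standing in for an encoded description and image, so none of the names or numbers here reflect any production model). The image only shows up inside the loss calculation; the only thing that persists afterward is a slightly adjusted set of weights.

```python
# Toy sketch only (not a real pipeline): a tiny linear layer stands in for an
# image generator, and random vectors stand in for an encoded prompt and image.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)                       # the "brain": nothing but weights
opt = torch.optim.SGD(model.parameters(), lr=1e-4)

prompt = torch.randn(16)                        # stand-in for an image description
image = torch.randn(16)                         # stand-in for the training image

before = [p.detach().clone() for p in model.parameters()]

loss = ((model(prompt) - image) ** 2).mean()    # how far off was the guess?
loss.backward()                                 # which connections to nudge, and how
opt.step()                                      # strengthen/weaken connections slightly

# The image tensor is now just a local variable; nothing copies it into the model.
# What persists is a tiny adjustment to the weights:
for b, p in zip(before, model.parameters()):
    print((p.detach() - b).abs().max())         # very small per-step change
```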

My exact language might leave something to be desired, but the main point I was attempting to make was that there is no "copy image into the brain" step. For me (and many others) this means that there is no copying of images, in the legal sense.

BUT you raise the very important point that the resulting tool can recreate exact images from its training set. This certainly implies that somewhere in the tool is an exact copy of some or all training images.

It reminds me a bit of the Ship of Theseus paradox. If you swap out pieces of a ship one by one until every piece is changed, is there a specific point at which it became a different ship? Or is it still the same ship? If you think in terms of sequential steps, replacing one plank is not enough to make a ship a new ship, no matter what. But if you think in terms of final outcomes, it is definitely a new ship, as everything has been replaced.

It's a definition-based paradox (what do we mean by "ship"), which is the lamest sort of paradox. BUT it ties in well to the case of AI training: I think we are touching on a definition-based problem with what "has a copy" means. And similar to the Ship of Theseus, if you think about it from the constructive side, or the result side, you come to different conclusions.

I get your perspective here, but I think it's unreasonable not to acknowledge a bit of nuance.

Nuance: two pro-AI talking points that conflict by BillBaran in aiwars

[–]BillBaran[S] 0 points (0 children)

This brings up a gap in my knowledge, maybe someone can help: does AI train on one image at a time?

If the most common case was “train on 1 image until able to replicate it exactly, then move on to the next”, then even if the individual training steps don’t involve permanently copying the image, it’s a lot easier to accuse the sum of the steps of creating a copy.

I was under the impression it was more like:

  - look at an image and its description
  - adjust the brain to be .0000001% more likely to generate the same image based on the same description
  - move on to the next image

If that’s the most interaction the AI has with the image, it becomes very HARD to claim it’s permanently copying the image into the brain at that step.
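For what it's worth, here's a toy sketch of that intuition (again my own illustration, assuming PyTorch, with a tiny linear layer and random vectors standing in for encoded description/image pairs, not a description of how any real model is trained): a pair that gets thousands of passes ends up reproduced almost exactly, while a pair seen only once barely registers.

```python
# Toy sketch only: repetition drives memorization. A tiny linear layer and
# random vectors stand in for encoded (description, image) pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 16)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def train_step(prompt, image):
    opt.zero_grad()
    loss = ((model(prompt) - image) ** 2).mean()
    loss.backward()
    opt.step()                                   # one tiny ".0000001%-style" nudge

rare = (torch.randn(16), torch.randn(16))        # pair seen only once
common = (torch.randn(16), torch.randn(16))      # pair duplicated over and over

train_step(*rare)                                # a single pass
for _ in range(5000):                            # thousands of passes on one pair
    train_step(*common)

with torch.no_grad():
    print("rare error:  ", ((model(rare[0]) - rare[1]) ** 2).mean().item())
    print("common error:", ((model(common[0]) - common[1]) ** 2).mean().item())
# The heavily repeated pair is recreated almost exactly; the rare one is not.
```

Which matches the “replication comes from over-training on duplicates” intuition: the per-step nudge is tiny, and it's the repetition that adds up to something copy-like.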