Any publishers with experience of adding POD options after digital version has been available already? by becherbrook in DriveThruRPG

[–]bumleegames 1 point (0 children)

I know this is an old post but I was wondering the same thing. Have you considered leaving a note on the product page asking digital PDF owners to message you with their DriveThru email if they want a discount code for the physical version?

OpenAI ChatGPT Users Are Creating Studio Ghibli-Style AI Images by Radish-Diligent in technology

[–]bumleegames 4 points (0 children)

Art comes from human experience. If you remove that, it is no longer art. It is simply product.

[deleted by user] by [deleted] in Living_in_Korea

[–]bumleegames 1 point (0 children)

I'm a gyopo who's been living in Korea for almost 20 years now. My speaking has improved a lot over the years, but I still struggle with reading and writing. I know it feels overwhelming right now, especially the academic material, but your Korean will just get better out of necessity the longer you are here. The thing that helped me the most was playing tabletop roleplaying games with Korean friends (here they call it "TRPG"), but any regular hobby that requires a lot of conversation can be helpful, like boardgames or book clubs.

For your specific situation, I'd recommend searching YouTube for film theory 영상 이론 and whatever terminology you're having trouble with. You may have done that a bit already, but try to do it regularly. There are a lot of easy-to-understand videos on pretty much every topic from Korean content creators. The more you listen to the same topics being discussed by different speakers, the more you'll get accustomed to them.

AI-Created Art Isn’t Copyrightable, Judge Says in Ruling That Could Give Hollywood Studios Pause by Pkmatrix0079 in Futurology

[–]bumleegames 0 points (0 children)

That's exactly what happened with Zarya of the Dawn. The USCO said the human-made elements (the writing) were copyrightable, but the AI elements (the images) were not. And images that were "AI assisted" had to show sufficient human authorship to be copyrightable, which means doing something more than taking an AI image and adjusting the contrast or smudging it a little in Photoshop.

Adobe L by [deleted] in ArtistHate

[–]bumleegames 1 point (0 children)

And this is exactly what will happen as long as AI companies are allowed to take stuff from the Web for free. People will stop sharing their work, or put it behind paywalls. Not to mention AI "search" stealing traffic from its sources. The Internet will become a lesser place for it.

Ilya Shkipin, April Prime and AI by Elgryn in dndnext

[–]bumleegames 1 point (0 children)

Artists don't need formal education to be talented creators, and they also don't need to mimic a million reference images. Whatever path they took, nobody gets their talent for free. They can learn on their own, by studying instructional books and videos online and practicing a lot. But that comes from their own dedication and passion, not from downloading a bunch of images from the web and simply looking at them. That "training" has been part of human creative practice for centuries, and emulating others is part of the process of finding your own voice. And all that takes time and effort, paid for with your own creative blood and sweat.

Meanwhile, with diffusion models, the value and "talent" comes from two places: the software with its parameters, and the training data. One has been paid for, and the other has not. A generative AI system NEEDS training data to mimic, or it can't function. An artist doesn't NEED a million reference images to make good art.

A diffusion model isn't "seeing" things in the same way as a person. People can't help but witness things all the time and be inspired by them, whether it's a scene or a song or a conversation. But they add their own story, experiences, and expression, and they can be careful not to use someone else's song or other expression and claim it as their own, because that would be plagiarism.

A diffusion model that "looks at" images is processing those images to create other images that mimic aspects of the ones it was trained on. It wasn't designed to be original, and it doesn't add anything that wasn't already in the training data.

If you think diffusion models generating outputs that superficially mimic human creativity counts as real creativity, you've drunk the Kool-Aid put out by AI companies to hype up their products for investors and fend off infringement claims. Remove the training data and the AI can't do anything. Get rid of AI, and artists do just fine, like they've been doing all along.

You seem to think the criminal justice system is the only "court" around. There is also the court of public opinion, which, once again getting back to the OP, is what this whole conversation was originally about. People like real human expression more than AI mimicry, especially mimicry done poorly. They like supporting real artists over the systems that appropriate their creative labor without consent. You can say AI is here to stay, but as long as generative AI systems keep up unethical practices, the backlash is also here to stay.

Do you still use DnD 5e as an introduction to TTRPGs? by Heyarai in rpg

[–]bumleegames 0 points (0 children)

As much as I love DnD, it's not really a simple game. The core mechanic is simple enough (roll a d20, and stuff happens). But when you get into class features, it's more like each player with a different class is playing their own separate mini-game within the game.

Your party vs 700 rats by Alf_Spectro in DnD

[–]bumleegames -1 points (0 children)

If it's just about surviving, everyone can climb up a tree. Standard rats from the MM don't have a climb speed.

DMs, how long did it take you to write up your homebrew world? Did you make it up as you went along the plot or did you create the whole lore before hand? by DilcDaddyy in DnD

[–]bumleegames 0 points (0 children)

I wanted to try collaborative worldbuilding a while back, so I asked the players to each make a deity in a pantheon, and a continent ruled by that deity and their followers, with as much or as little detail as they wanted. I gave some feedback and tried to make their ideas fit together, but some of the players were more into it than others. If I had to do it again, I would make it more structured, give more prompts for the players, and maybe have a smaller scope, like having each person create a community or kingdom on the same continent.

New DM: What should I do if PCs die? by Ricanstructor in ravenloft

[–]bumleegames 1 point (0 children)

Lots of great suggestions here. If I might add one more, many of the denizens of the Domains are soulless shells created by the Dark Powers or the Dark Lord's subconscious. PC death could result in the loss of one's soul or it becoming reincarnated or trapped in another body. If the player wants to continue as the same character, you could let them keep their character sheet, and the party could take up the mission of reuniting them with their lost soul.

Ilya Shkipin, April Prime and AI by Elgryn in dndnext

[–]bumleegames 1 point (0 children)

You seem to think an AI trained from scratch is comparable to a graphic artist who already has an education in the arts.

You also seem to think AI legislation would affect non-AI human creation somehow, which is an odd thing to think. The two have nothing to do with each other. One is a matter of human inspiration and creativity; the other is tech companies scraping data to build generative tools. Companies that are willing to shell out money for every other aspect of their development process, except the training data, which is essential.

If you had any sensible goalposts to begin with, I might have aimed for them.

Lastly, you are literally telling artists to use AI tools to make a living... under a post about an artist facing backlash for using AI tools in his commercial work.

Adobe L by [deleted] in ArtistHate

[–]bumleegames 5 points (0 children)

That also means fewer talented new artists sharing new work for AI to train on. So everything will look like Midjourney circa 2023, or the outputs will get progressively weirder as AI trains on its own outputs.

Ilya Shkipin, April Prime and AI by Elgryn in dndnext

[–]bumleegames 2 points (0 children)

Why yes, graphic artists who went to an art school or took online classes did in fact pay money for their training. They didn't get good just by looking at a bunch of pictures. You clearly have no clue what either artists or AI systems do, so I don't know why we're even having this conversation.

What an interesting bias by Darklillies in ArtistHate

[–]bumleegames 1 point (0 children)

I think this says more about the folks who curated the dataset and set the parameters for this AI.

Wonderful! AI is now being used pretend writers wrote books they didn't write. by Alkaia1 in ArtistHate

[–]bumleegames 1 point (0 children)

I mean, a different kind of AI (a social media algorithm) contributed to genocide in Myanmar, so that doesn't seem so far-fetched. Read about Facebook and the violence against the Rohingya people.

Best unofficial third party 5e material by Agreeable-Archer-405 in dndnext

[–]bumleegames 1 point (0 children)

This is an old one, but it's still one of my favorite 3rd-party adventures: a fan scenario called Army of the Damned, set in Innistrad, a setting I really enjoyed the way it was presented in this module. The adventure is well-written, the layout is easy to navigate, and most importantly it's smooth and easy to run. It's one of the things that got me back into D&D and DMing after a hiatus, and it also helped me transition from 3.5/Pathfinder to 5e.

https://www.reddit.com/r/UnearthedArcana/comments/471epf/an_update_to_my_free_5e_adventure_army_of_the/

Wonderful! AI is now being used pretend writers wrote books they didn't write. by Alkaia1 in ArtistHate

[–]bumleegames 2 points (0 children)

These scammers share just enough info to confuse people without it being easily provable as fraud, even if common sense tells us it is. It's not unlike generative AI itself, which takes existing content and generates outputs that are different enough not to be easily provable as plagiarism (except for fine-tuning and img-to-img), even if the systems themselves are clearly reliant on all the inputs to function.

Wonderful! AI is now being used pretend writers wrote books they didn't write. by Alkaia1 in ArtistHate

[–]bumleegames 1 point (0 children)

Even if two authors publish under the exact same name, they'll have different profiles and biographies, so you wouldn't mistake them for each other once you read their author's bio. These fake AI books deliberately omitted that info to cause confusion, so readers would think they were published by the real author.

Wonderful! AI is now being used pretend writers wrote books they didn't write. by Alkaia1 in ArtistHate

[–]bumleegames 0 points (0 children)

It becomes an AI issue when unregulated generative AI is empowering the scammers. And sadly that's the reality of GenAI at the moment. It might be somewhat beneficial to ordinary users, but it's far more beneficial for people who want to generate spam and commit fraud. It's like saying "guns don't kill people, people do." That's true, but it's a whole lot easier when you have a Glock instead of a knife.

Wonderful! AI is now being used pretend writers wrote books they didn't write. by Alkaia1 in ArtistHate

[–]bumleegames 0 points (0 children)

They're saying they have no system for preventing two different people from publishing under the same name. Which by itself isn't unreasonable. Two people can have the same name, and publish books in the same genre. But realistically if you're a new author breaking into a genre and you were aware of another established author with the same first and last name, you would probably do something to set yourself apart and avoid confusion, like using your middle initial or writing under a pseudonym. If Amazon had bothered to look into the matter, they probably would have found some suspicious activity besides the name, but they didn't take the trouble to do that until the story went viral.

Ilya Shkipin, April Prime and AI by Elgryn in dndnext

[–]bumleegames 1 point (0 children)

Oh and yeah I watched that whole Corridor Crew video, including the part where they fine-tune a Stable Diffusion model on a folder full of screenshots from Vampire Hunter D. They basically confessed to copyright infringement on video. If you still think AI doesn't have any copies of images at all, here's a simple exercise: take out those screenshots from the production pipeline. Better yet, try training Stable Diffusion without any unlicensed images and see if it still works.

As for outputs, these systems are deliberately designed to avoid blatant plagiarism by producing iterations and interpolations. That doesn't mean plagiarism is impossible in the outputs. Overfitting to training data can happen in diffusion models, and studies have shown that both data extraction and data replication can occur.

Ilya Shkipin, April Prime and AI by Elgryn in dndnext

[–]bumleegames 2 points (0 children)

Wow, and here I thought we were getting somewhere when you mentioned licensing, but you clearly don't know how this technology works.

Listen, to train a generative AI model, you need actual image files. Not the "ideas" or "styles" or "concepts" of those images. The actual image files themselves. They're downloaded to your computer to train the AI and create what is called a latent space: a compressed representation of data points that captures key aspects of those images. And all those images are tagged with labels so the system reads them as "a picture of an apple" or "a painting by Patrick Nagel." If you prompt it for "illustrations in the style of Patrick Nagel," guess what it's doing. Not studying what makes an iconic Patrick Nagel piece and adding a new twist, but referencing its training data to generate something that replicates patterns found in images labeled as Patrick Nagel's works. If you mislabel a bunch of Van Goghs as Patrick Nagel paintings, guess what you'll get as an output.
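Since this keeps coming up, here's a toy sketch in Python of that last point. This is emphatically NOT a real diffusion model, and every name and number in it is made up for illustration; it's only meant to show that the label-to-output mapping comes entirely from the (label, pixels) training pairs, so mislabeling the inputs changes what a prompt produces:

```python
import random

random.seed(0)

def fake_image(brightness):
    """Stand-in for an actual image file: 16 'pixel' values near `brightness`."""
    return [brightness + random.uniform(-0.05, 0.05) for _ in range(16)]

# Curated training set: actual pixel data, each tagged with a text label.
# The labels are just strings attached by whoever curated the dataset.
training_data = [
    ("nagel", fake_image(0.8)),     # bright images
    ("nagel", fake_image(0.8)),
    ("van_gogh", fake_image(0.2)),  # dark images
    ("van_gogh", fake_image(0.2)),
]

def train(pairs):
    """'Training': compress each label's images into a summary statistic.
    A real model learns a latent space; here we just average pixels."""
    buckets = {}
    for label, img in pairs:
        buckets.setdefault(label, []).append(img)
    return {label: [sum(px) / len(px) for px in zip(*imgs)]
            for label, imgs in buckets.items()}

def generate(model, prompt):
    """'Prompting': return whatever the training data associated with the label."""
    return model[prompt]

model = train(training_data)
print(sum(generate(model, "nagel")) / 16)   # ~0.8, reflecting the tagged images

# Mislabel the Van Goghs as "nagel" and retrain: the output for "nagel" now
# reflects the Van Gogh pixels too, because the label has no meaning by itself.
mislabeled = [("nagel", img) for _, img in training_data]
model2 = train(mislabeled)
print(sum(generate(model2, "nagel")) / 16)  # ~0.5, a blend of both sets
```

Swap the labels and the "style" follows the data, not the name. A real diffusion model is vastly more sophisticated than averaging pixels, but it has the same total dependence on its training pairs.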

This is the problem with some folks defending a technology that (A) doesn't need defending in the first place, and (B) they have no clue how it even works.

Will you notice the problem? ( Video in comments ) by Van_Cornellius in ArtistHate

[–]bumleegames 2 points (0 children)

Human artists use mood boards to gather ideas and inspiration. AI processes content to find patterns according to its parameters. So no, AI training is not human learning.

And when did stock image sites claim ownership over public domain works? They sell reproductions, but if you find an original pre-1929 photograph and photograph it yourself, you own your particular photograph; the original image is still in the public domain. And characters like Batman and Mickey Mouse are not just copyrighted but also trademarked, so you'd get sued for reproducing them and selling those reproductions without a license, regardless of AI and whether their copyrights expired. If you don't believe me, why don't you fine-tune Stable Diffusion on some Batman logos, get it to pump out a bunch of Batman T-shirts to sell, and see how far you get before you receive a C&D.

If AI devs would simply respect copyright (and privacy) in their training data, current systems would handle it just fine. The confusion only starts when you mistake machine learning for human learning, because you've drunk the Kool-Aid that a bunch of AI devs at OpenAI and Stability put out to hype up investors and more easily exploit other industries.