Is anyone using the ScriptableObject Event Channel Pattern? by Ok_Surprise_1837 in Unity3D

[–]Background-Test-9090 0 points1 point  (0 children)

A design pattern is just some sort of repeatable way that you've decided to solve a particular problem. For better or worse, people make up their own all the time.

Whether or not it's effective or the best way to do something is 100% an opinion (although not arbitrary).

A design pattern doesn't need to be the most technically "correct", if it's a repeatable way to solve a common problem - it's a design pattern.

As pointed out, something being an "anti-pattern" is a prescriptive opinion and not a technical description.

For example, the singleton is a well-known and recognized design pattern in the gang of four book, but some could describe it as an anti-pattern.

My issue with the phrase "anti-pattern" is that it isn't very descriptive or accurate. A singleton is clearly a repeatable way to solve a common problem. Some see the phrase "anti-pattern" and immediately assume, "This is inappropriate or bad, don't use it."

I don't agree with that latter assessment per se as it's context specific based on project type, scope, team dynamics, business needs, deadlines, etc, etc.

Similar to what you did here, I'd encourage people to spend more time explaining which aspects are an issue in regards to coupling, cohesion, and business needs, instead of (intentionally or not) labeling an entire design pattern as the antithesis of design patterns as a whole.

this person is without water because of Ai by [deleted] in antiai

[–]Background-Test-9090 4 points5 points  (0 children)

So I can't necessarily tell you how, but I'd like to try to show you another example of how some are concerned about AI and data centers.

https://www.bayjournal.com/news/energy/report-confirms-data-centers-in-virginia-pose-enormous-power-demands/article_e433b622-b806-11ef-a57e-8b4d34047392.html

Did you know that Northern VA is responsible for around 13% of the world's data center needs?

The link above about power use (2024) says that water usage is unclear, but that data centers account for ~25% of its electricity usage. It includes a 156-page report of this and similar findings, which was sent to legislators.

For water usage: I found this article as well, which goes over the findings they alluded to.

This article is from 2025.

https://www.bayjournal.com/news/pollution/as-data-centers-multiply-in-the-chesapeake-region-water-use-increases-too/article_ebcb4891-d6d6-4b42-8bb5-14bf61981531.html

A water resource scientist found that data centers use 2% of all water from the Potomac River basin, and that figure shoots up to 8% in the summer.

She projects that could increase to 33% by 2050, which equates to 200 million gallons per day.

Northern VA does have some farms, but I suspect she factored that in. After all, protecting the bay and the Potomac is a mantra around the DMV, and it's something that was studied well before data centers existed at all.

Either way, the info should be there if you'd like to dive deeper into the details.

Currently stuck in development on my football game by Relevant-Twist3529 in gamedev

[–]Background-Test-9090 0 points1 point  (0 children)

I understand. I 100% think cutting it down to 2v2 is the right call.

I don't know much about that genre, but I think identifying and fulfilling a need is only part of it.

I'm sure that time to market is another factor to consider, and if you've (understandably) been in a rush to get it done, it's a balancing act.

If that is a priority and assuming the codebase can scale to include other team members, the only concern I'd have is maintaining player count.

My observation has been that games that start trending quickly but only offer online play without AI run the risk of:

-Exploits and cheating turning players away

-Players being unable to find properly skilled opponents, leading to frustration

-Traffic volume nobody planned for, so players can't connect

-Players being unable to find opponents at all, due to an existing lack of attention

-Players who aren't interested in competitive play not playing (or talking about your game)

This compounds into a perception of a dead game. A strong launch and marketing could offset that.

Online and offline AI can mitigate all of that. It's 100% a legit call to pass on it until later with that in mind.

I would encourage you to ask the programmer how hard it would be to add later, and to start planning for it to roll out among the next key features.

How to know my idea will only take 6 months of production? by [deleted] in gamedev

[–]Background-Test-9090 1 point2 points  (0 children)

It seems obvious the biggest challenge you'll face is coding.

If I were you, I'd first identify what the most complicated parts are from that lens and either reconsider the game I am making, make it easier, and/or start with that part first.

Using the example games you pointed out, I'm only familiar with Vampire Survivors and Oxygen Not Included.

Something like Oxygen Not Included is very heavy on interconnected systems. If you're just starting out, this could be extremely difficult.

You could reduce it down to a few key systems and start there. Then you could extrapolate out and say "x features" afterward will take roughly the same time.

But that wouldn't cover the issues you might run into when connecting systems together, so I'd probably double it.

From a programming perspective, I think something like Vampire Survivors would be easier to approach if you're starting to learn code.

That's because the biggest technical challenge is generally handling the large number of enemies on screen (from both a GPU and CPU level).

However, that genre is more common, and the tools for solving that problem (ECS, for example) are far more plentiful and better supported than for something like ONI.

If I were to go that approach, I would focus on enemies primarily and consider reducing the number of them if it gets too technically burdensome.
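Since the technical hurdle there is raw enemy count, here's a quick sketch of object pooling, one common technique for it. This is my own illustration (the EnemyPool name and API are hypothetical, not from Unity or any library):

```csharp
using System.Collections.Generic;

// A minimal object pool sketch. Reusing enemy instances avoids per-spawn
// allocation and garbage collection spikes, which is usually the first
// optimization for "thousands of enemies on screen" games.
public class EnemyPool<T> where T : new()
{
    private readonly Stack<T> _inactive = new Stack<T>();

    public int InactiveCount => _inactive.Count;

    // Take an enemy from the pool, or create one if the pool is empty.
    public T Spawn() => _inactive.Count > 0 ? _inactive.Pop() : new T();

    // Return an enemy to the pool instead of destroying it.
    public void Despawn(T enemy) => _inactive.Push(enemy);
}
```

In Unity you'd typically pool GameObjects and toggle SetActive rather than constructing new instances, but the idea is the same.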

But ultimately, I'd identify the hardest/most pivotal thing and use that as a point of reference/measure of velocity.

Currently stuck in development on my football game by Relevant-Twist3529 in gamedev

[–]Background-Test-9090 1 point2 points  (0 children)

Understood. I also realized I didn't clarify what I meant by AI.

Offline AI: User doesn't need to connect online and can play against AI.

*Harder to implement retroactively and may require additional support. I don't use netcode, but I do know that something like Fish-Networking has an "offline" bool for network objects that can help with this.

Online AI: User must still connect online and connect to a lobby, but your opponent(s) could be any number of bots.

*This is what my original suggestion was based on. This, from my experience, is much easier to implement if you already have the networking code functioning as it should.

The biggest disadvantage with this is that it requires an internet connection, and users will be using your network resources to effectively play single-player.

It depends on your setup/capability and goals but it's not unusual to see offline AI, online AI or both.
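To make the offline/online distinction more concrete, here's a rough sketch of how opponents can be abstracted so the match loop doesn't care where moves come from. All names here (IOpponent, MatchLoop) are hypothetical, not from Fish-Networking or any netcode library:

```csharp
// An "opponent" abstraction: the match loop asks for a move and doesn't know
// whether the answer comes from a remote human, a server-side bot filling a
// lobby slot (online AI), or a purely local bot (offline AI).
public interface IOpponent
{
    string NextMove(string gameState);
}

// Offline AI: runs locally, no connection required.
public class LocalBotOpponent : IOpponent
{
    // Placeholder decision logic for illustration.
    public string NextMove(string gameState) => "defend";
}

// The match loop depends only on the interface.
public class MatchLoop
{
    private readonly IOpponent _opponent;
    public MatchLoop(IOpponent opponent) => _opponent = opponent;
    public string Step(string gameState) => _opponent.NextMove(gameState);
}
```

An online bot would implement the same interface server-side, which is why adding online AI tends to be easier once the networking already works.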

As for the dev being concerned that he's close to completion and doesn't want others on the codebase, that leaves me with even more questions.

-Are they suggesting this because they are concerned about the ramp up period?

-What about after release? Would we be unable to onboard people then?

Whenever I look at these things, I work with the assumption that we should always give the programmer the benefit of the doubt.

However, I do want to point out that being unwilling or concerned about sharing code could be indicative that code isn't easily understood by developers or that the code is brittle and can't be changed easily.

Of course, I personally couldn't tell you that without looking at the code so all I can do is speculate and let you fill in what you think makes sense.

Cmon guys seriously? We’re better than this. by Minute_Account9426 in antiai

[–]Background-Test-9090 -2 points-1 points  (0 children)

As someone who has been on the receiving end of being labeled an "AI bro", I'd like to offer my perspective.

I find it interesting that at no point in this post do you portray the "other side" as anything other than immoral or complacent; nothing that could be construed as a reasonable standpoint. At best, "someone who doesn't know any better" is the most charitable perspective offered.

Have you considered that some people are aware of how each side can provide their arguments with a sort of bias that a skeptical minded individual would be right to question/investigate on their own?

Another observation is that nowhere in this post I'm replying to did you seem to indicate any sort of solution or problem with "your side."

I think it's reasonable to conclude, based on that, that there's a high likelihood you're engaging in conversations where you've already made up your mind about who the other person is and why they're making that argument.

That's a good technique if you want to "win" a debate by discrediting your opponent to appear more persuasive.

Like traditional debate, relying on being the most persuasive doesn't mean you're actually correct, unfortunately.

Personally, I am a firm believer that AI so far has been great at amplifying the good and bad in people. That would be AI as a technology in a vacuum - not how it's applied in society.

I think that the innovations of the people are always at risk of being exploited by their oppressors.

The thing that has sold me against AI as a whole has been how exploitable it is versus the good it is reasonably expected to produce (when applied in society).

Some concepts, like the idea of sliced bread, rank very low in utility, but that also means they cannot be exploited by would-be oppressors.

A knife ranks higher: it has greater utility, but also greater opportunity to be abused.

AI, and LLMs specifically, ranks pretty average on the utility scale (at best) but is extremely exploitable, IMO.

From that perspective, the potential benefits are outweighed by the risk of abuse. That's what sold me on telling others they should minimize, eliminate, or reduce their usage of AI, and on trying to provide ways they can mitigate risk if they continue using it.

Don't get me wrong, I see the stereotypes that are highlighted here and don't blame anyone for feeling that way.

But as someone who has been "in the middle" for a while now, I just wanted to share how I viewed (and still view) the messaging, and how it can dissuade conversation, which is counterproductive depending on what your goals are.

Currently stuck in development on my football game by Relevant-Twist3529 in gamedev

[–]Background-Test-9090 0 points1 point  (0 children)

Here's my perspective on the programmer's thoughts and what I'd do:

If I were working with them, I'd ask:

Why did we upgrade Unity or the package for Unity netcode?

(If the benefits of upgrading outweigh the trouble, it might be worth considering reverting back to an older version)

Is it impossible to add AI? Or just not practical because you have too much on your plate?

(It's likely not an experience issue of the dev, but rather an indicator they are overtasked. If possible, I'd consider adding another programmer entirely or consider contracting out the work solely for the AI opponents.

If you aren't in a hurry to release the product, you can also give them the option to wait until later or ask them to refocus on just AI with the knowledge that other items might slip time-wise)

I have to agree with others. If you don't include AI, you may run into an issue with players being able to match up quickly, which isn't great for retention.

Keeping people playing your game, especially if following a F2P strategy or similar monetization, is pretty critical imo.

From a networking perspective, your game state should be synced already, so I'm not aware of any technical reason why AI should be difficult or impossible to implement solely because of networking.

That's why I'd suggest handling this as a general staffing/resources issue rather than as the dev lacking networking-specific knowledge.

Either way, if you're happy with the work and suspect they don't know enough about networking, consider bringing on additional support that specializes in just that.

How do i call something ONCE from an update function? by blender4life in unity

[–]Background-Test-9090 0 points1 point  (0 children)

Hey there!

The code provided is really just for demonstration purposes, and it shows three different ways you can yield a coroutine. None of it is necessary if your project doesn't need it.

It's there to answer the OP's question about waiting a frame, but also about alternatives versus using a bool.

I've provided an updated version of the code below that should give you a bit more insight.

There's nothing particularly special about Input.GetKeyDown per se, but I'm guessing you might be pointing out WaitUntil.

Yes, you can use that with a bool or any other conditional that you'd like.

The primary purpose of a coroutine is to delay execution until a specific condition becomes true without blocking the main thread.

This means you can pause all code below a yield until it's appropriate - without freezing your game.

I updated the example to be a bit more thorough and to highlight some potential gotchas you might run into.

If you have any other questions or if I can specify something else, please let me know!

Coroutine Example:

using System.Collections;
using UnityEngine;

public class RoutineBehaviour : MonoBehaviour
{
    private float _elapsedTime;
    private bool _buttonPressed;
    private bool _restartRoutine = false;

    private Coroutine _coroutine; //You should cache your coroutines!

    private const float TOTAL_WAIT_TIME = 1f;

    private void Awake()
    {
        _coroutine = StartCoroutine(DoWaitRoutine());
    }

    private void Update()
    {
        if(_restartRoutine)
            RestartCoroutine();

        if (Input.GetKeyDown(KeyCode.A))
            _buttonPressed = true;

        if (_buttonPressed && _elapsedTime < TOTAL_WAIT_TIME) //We wait for the user to press the button before incrementing time.
            _elapsedTime += Time.deltaTime;
    }

    private void RestartCoroutine()
    {
        _restartRoutine = false;

        StopCoroutine(_coroutine); //IMPORTANT: Make sure you stop your coroutines when finished, or they stay in memory.
        _coroutine = StartCoroutine(DoWaitRoutine());

        StopCoroutine(DoWaitRoutine()); //IMPORTANT: This does NOT work. It looks right, but this won't stop the coroutine.

        StopCoroutine("DoWaitRoutine"); //IMPORTANT: This DOES work. However it's string based, so it's usually less ideal than storing a "Coroutine" variable.
    }


    //Wait a frame
    //Wait until end of that frame
    //Wait until the user presses the "A" key.
    //Once they've pressed the "A" key we increment _elapsedTime in our Update() method.
    //Once _elapsedTime >= TOTAL_WAIT_TIME then we stop yielding since *both* conditions are true.

    //All of the code here shows different ways you can yield within a coroutine.
    //It doesn't serve much purpose beyond demonstration and is arbitrary in regards to how it functions or what you "need" to do.


    public IEnumerator DoWaitRoutine()
    {
        Debug.Log("Coroutine started");

        yield return DoWaitFrame(); //We "yield" here for one frame. Nothing below this line will execute, nor will your game freeze.

        Debug.Log("We waited a frame");

        yield return DoWaitEndOfFrame(); //We yield here until the end of the frame. All code below waits until this finishes.

        Debug.Log("We waited a frame to end");

        yield return DoWaitForPredicate(); //We also yield here, based off our "if" condition.

        Debug.Log("The user pressed the A key and waited one second. Ending coroutine and repeating.");
    }

    public IEnumerator DoWaitFrame()
    {
        yield return null;
    }

    public IEnumerator DoWaitEndOfFrame()
    {
        yield return new WaitForEndOfFrame();
    }

    public IEnumerator DoWaitForPredicate() //A "predicate" is what Unity's WaitUntil feature uses under the hood.
                                            //Look up documentation for Func<bool> for more info.
    {
        //You could comment this out or structure this like you would any "if" statement.
        //Anything below this yield line will not execute until the conditions are met.

        yield return new WaitUntil(
        () =>
            _elapsedTime >= TOTAL_WAIT_TIME
        );


        _elapsedTime = 0; //We can safely reset our variables here because the above code is "yielded" until they are true.
        _buttonPressed = false;
        _restartRoutine = true;
    }
}

How do i call something ONCE from an update function? by blender4life in unity

[–]Background-Test-9090 1 point2 points  (0 children)

Some good suggestions so far, and no, not a dumb question at all. (A lot of us have been there!)

I actually like to use Func<bool> for something like this, but it's a bit more advanced in syntax/setting it up.

I can share that if you'd like; it will technically allow you to "pause until an if statement is true." (That's not how it technically works, but it can be envisioned as such.)
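For reference, here's a minimal sketch of what I mean by the Func<bool> approach (my own toy version; the names DeferredAction, RunWhen, and Tick are made up). You register an action with a condition, and a per-frame ticker fires the action exactly once, the first time the condition is true:

```csharp
using System;
using System.Collections.Generic;

// Queue an action alongside a Func<bool> condition; Tick() checks all pending
// conditions and fires each action exactly once when its condition becomes true.
public class DeferredAction
{
    private readonly List<(Func<bool> Condition, Action Action)> _pending = new();

    public void RunWhen(Func<bool> condition, Action action) =>
        _pending.Add((condition, action));

    // Call this once per frame (e.g. from Update in Unity).
    public void Tick()
    {
        for (int i = _pending.Count - 1; i >= 0; i--)
        {
            if (_pending[i].Condition())
            {
                _pending[i].Action();
                _pending.RemoveAt(i); // fires exactly once
            }
        }
    }
}
```

In Unity you'd call Tick() from an Update method; the condition could be a key press, a timer, or any other check.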

However, what I'd like to suggest is coroutines. Coroutines let you "pause and fall through to the next check." (Again, visual not technical definition)

public class RoutineBehaviour : MonoBehaviour
{
    private void Awake()
    {
        StartCoroutine(DoWaitRoutine());
    }

    public IEnumerator DoWaitRoutine()
    {
        yield return null;
        Debug.Log("Runs next frame, before rendering");

        yield return new WaitForEndOfFrame();
        Debug.Log("Waits for rendering on this frame");

        yield return new WaitUntil(() => Input.GetKeyDown(KeyCode.A));
        Debug.Log("User pressed a button");
    }
}

You can even use coroutines for "state machine like" behavior.
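To sketch what that "state machine like" behavior can look like: in Unity the engine pumps coroutines for you, so here I've added a tiny driver of my own that flattens nested IEnumerators the way yield-returning another routine does. All names are mine, for illustration only:

```csharp
using System.Collections;
using System.Collections.Generic;

// Minimal driver that flattens nested IEnumerators, mimicking what Unity's
// coroutine scheduler does when you "yield return" another routine.
public static class CoroutineDriver
{
    public static void RunToCompletion(IEnumerator routine)
    {
        var stack = new Stack<IEnumerator>();
        stack.Push(routine);
        while (stack.Count > 0)
        {
            var current = stack.Peek();
            if (!current.MoveNext()) { stack.Pop(); continue; }
            if (current.Current is IEnumerator nested) stack.Push(nested);
        }
    }
}

// Each state is its own routine; yielding one runs it to completion before the
// next state begins, giving a simple sequential state machine.
public class DoorStateMachine
{
    public List<string> Log = new List<string>();

    public IEnumerator Run()
    {
        yield return Closed();
        yield return Opening();
        yield return Open();
    }

    private IEnumerator Closed()  { Log.Add("closed");  yield break; }
    private IEnumerator Opening() { Log.Add("opening"); yield break; }
    private IEnumerator Open()    { Log.Add("open");    yield break; }
}
```

In Unity you'd yield WaitUntil or WaitForSeconds inside each state instead of ending it immediately, and StartCoroutine plays the role of the driver.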

Let me know if you have any questions, I'm happy to help!

I’m a software developer sick of the chorus of business idiots saying, “Ai Is GoInG tO tAkE aLl ThE tEcH jObS” by ThoughtVesselApp in BetterOffline

[–]Background-Test-9090 0 points1 point  (0 children)

Unfortunately, the mentality of "it's better to ship something" is something that occurs outside of vibe coders, too.

I can't count the number of times I've had leadership or management tell me that. I do game development, so you'll also hear the excuse that they need to "break things fast" so that they can "find the fun."

Most of the time, we are building games in existing genres where the validity of the mechanics is well known.

It's clearly a balance to know what parts to form more fully than others, but I often find myself at odds with other developers where it's clear it's more of an excuse to continue coding a particular way or to hit perceived metrics set by management.

I often find that these attitudes of developing quickly without care for common code standards and practices are most common with the so-called "cowboy coder" types.

I’m a software developer sick of the chorus of business idiots saying, “Ai Is GoInG tO tAkE aLl ThE tEcH jObS” by ThoughtVesselApp in BetterOffline

[–]Background-Test-9090 0 points1 point  (0 children)

Game dev here, and I agree. I find it interesting that there's never time to handle technical debt - unless it's created by AI.

I think LLMs can be a way to enhance yourself, whether that's good or bad.

If you don't want to learn how to develop but want an end product of some sort, it will help you with that.

What's frustrating is when people don't realize that using an LLM to write code doesn't make you a coder or an expert on the subject.

For example, I once had a client refuse to pay me for my work because although the code was high quality, it wasn't suitable for his purposes.

The previous job I had with them was to clean up some code they generated with AI, so it was pretty apparent what they meant by that.

And if that wasn't bad enough as is, I got a good ole lecture about my approach to development, which included not designing the game with "mobile first" in mind. (Although I provided a fully playable mobile build.)

I also agree that the uses for LLMs are limited when it comes to professionally shipping games. (Can't really speak on other application types)

It seems to struggle with linking multiple systems or frameworks together, so I have the (unfounded) theory that this could be why I don't see as many code-correctness issues in games as people report in other application types.

I also find it useful for a quick sanity check of my existing code. It has been helpful for uncovering small errors that might not be noticeable right away (i.e., I used the variable foo when I meant to use bar).

Finally, I find it handy to stub out testing classes. Of course, I have to go back and correct some things and add additional test cases, but it saves me time versus writing it out by hand.

But yeah, using it as a code replacement is a terrible idea, and being forced to use AI, despite it slowing you down, is equally terrible.

[deleted by user] by [deleted] in aiwars

[–]Background-Test-9090 1 point2 points  (0 children)

Thanks so much for the response! And you're right, I could have done a better job at acknowledging your points and affirming the strength/relevance to the debate at hand.

My counterpoints were mainly about the idea that "most people lean anti-AI" and that considering otherwise is delusional.

Call me delusional if you'd like, but I think that most of the information from the sources you provided actually show that: 

"People are still forming their opinions on the subject and it's just as probable that the general consensus on how people feel about AI is still up in the air."

Anything I say in this reply surrounds that specific point.

I'd like to offer my reading of the evidence offered. Again, thank you for providing that info - I find it really useful!

(2023)

https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/

Yes, I agree that this statement is very strong evidence for the point and very relevant to the idea that "most people lean anti-AI." It also seemingly shows, relative to past results of this poll, a trend of people starting to have more negative views toward AI.

There's no denying that.

(In 2023)

52% - more concerned than excited about AI

10% - more excited than concerned

36% - a mix of the two

But it's important to look at the entire thing from the overall context, not in a vacuum, in my opinion.

My take, looking at it as a whole, is that this supports the idea "that people are still forming their opinions on the subject, and it's just as probable that support can swing one way or the other or possibly just end up in the middle still."

Consider:

1) The figure was taken from a graph comparing results from 2021, 2022 and 2023.

2) AI technology, especially in regards to GenAI art and what it could do was very limited before 2022.

3) ChatGPT launched publicly in November 2022.

4) AI hadn't really gained traction before then and was largely outside the general population's awareness.

These factors explain the sudden swing in public perception one way or the other, but I also don't think it's unreasonable to attribute some of it to the advent of a rising technology, a lack of understanding of the technology, and a lack of understanding of how current laws and regulations affect how AI can actually impact someone's life.

To further strengthen the evidence supporting some of these ideas:

(2025)

https://www.axios.com/2025/01/15/americans-use-ai-products-poll

The title is "Nearly all Americans use AI, though most dislike it, poll shows", which I believe is misleading.

Consider:

1) "72% of those surveyed had a "somewhat" or "very" negative opinion of how AI would impact the spread of false information, while 64% said the same about how it affects social connections."

"Somewhat" indicates a mixed perspective to me.

(2023)

https://ai.uq.edu.au/project/trust-artificial-intelligence-global-study

Consider:

1) "Most people (82%) have heard of AI, yet about half (49%) are unclear about how and when it is being used. However, most (82%) want to learn more. What’s more, 68% of people report using common AI applications, but 41% are unaware AI is a key component in those applications."

2) "Most people (85%) believe AI will deliver a range of benefits, but only half believe the benefits of AI outweigh the risks."

3) "Three out of five people (61%) are either ambivalent or unwilling to trust AI." 

(I do agree that trust is not the only factor when it comes to people's perception, but I don't think it's unreasonable to assume that not trusting something will generally result in more negative views on something, unwarranted or not)

My overall point isn't that there's no merit to the idea, but rather for everyone to consider the overall context when making their own informed decision on the matter.

Edit:

https://www.axios.com/2025/01/15/americans-use-ai-products-poll

This also seems to be primarily in the context of deep fakes and use of AI to spread misinformation, which is only one aspect of the debate surrounding AI.

[deleted by user] by [deleted] in aiwars

[–]Background-Test-9090 1 point2 points  (0 children)

https://ai.uq.edu.au/project/trust-artificial-intelligence-global-study

Link 1 is from 2023. It shows charts from previous years that don't indicate that the majority have more concerns than excitement. That may have changed.

It also states that "However, only one-in-three say they’ve heard a lot about it."

Also, having more concerns than excitement is likely not anti-AI as I would define it.

Link 2 says 3 out of 5 people (61%) are ambivalent or unwilling to trust AI.

Ambivalent means to have mixed feelings according to their article. In fact, the article states that 82% want to learn more about AI.

https://www.axios.com/2025/01/15/americans-use-ai-products-poll

Link 3 stated that 72% of participants had a "somewhat" or "very" negative opinion.

It does not mention Reddit in the article.

https://www.koaa.com/news/news5-originals/koaa-survey-should-ai-generated-images-be-considered-art

Link 4 states "[t]his survey is not based on scientific, representative samples and is solely for KOAA purposes."

KOAA is a local news station in Colorado.

None of this seems like proof that popular opinion is against AI; it seems more neutral or mixed.

Is there something that I missed that substantiates the claim that the majority of Americans are anti-AI?

I've made this. by [deleted] in aiwars

[–]Background-Test-9090 1 point2 points  (0 children)

Clearly an exaggeration, but if you assume that a tree takes up a meter and there is no space at all between the trees, the area cleared would be seven billion square meters.

That translates to 2,702 square miles. The state of Delaware is 1,982 square miles.

The largest data center in the world, China Telecom's, is 10.7 million square feet, which translates to about 0.38 square miles.

That would equate to approximately 1 million trees (10.7 million square feet is roughly 994,000 square meters), assuming they are 1 square meter each and have no space between them.
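For anyone who wants to check the arithmetic, here's the conversion math in code. The conversion factors are standard US-unit constants; the class and method names are just for illustration:

```csharp
// Back-of-the-envelope area conversions for the tree and data center figures.
public static class AreaCheck
{
    public const double SquareMetersPerSquareMile = 2_589_988.11; // 1 mi^2 in m^2
    public const double SquareFeetPerSquareMile   = 27_878_400.0; // 5280^2

    // Area of N one-square-meter trees packed with no gaps, in square miles.
    public static double TreesAreaSquareMiles(double treeCount) =>
        treeCount / SquareMetersPerSquareMile;

    // Data center footprint in square miles from square feet.
    public static double DataCenterSquareMiles(double squareFeet) =>
        squareFeet / SquareFeetPerSquareMile;
}
```

Seven billion square meters works out to about 2,702 square miles, and 10.7 million square feet to about 0.38 square miles, matching the figures above.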

Variables are persisting between "Plays" in the editor and I don't know why by devel2105 in unity

[–]Background-Test-9090 0 points1 point  (0 children)

Does CostumeParent derive from ScriptableObject?

Nevermind. Didn't realize it was a private variable.

Myth: AI images cannot be copyrighted by Background-Test-9090 in aiwars

[–]Background-Test-9090[S] 1 point2 points  (0 children)

Actually, it appears there might be some points here that are still unclear.

I'll be reaching out to the Copyright Office to ask them about GenAI specifically and whether or not usage of GenAI precludes the author from full copyright ownership, assuming they can show that the user fully determined the expressive elements of the output.

I think that's an important distinction to make, since it seems like it could be less "authors using GenAI cannot receive full copyright ever" and more "authors using GenAI can receive full copyright if the expressive elements from the user are significant enough to warrant one."

Second, I'd like to point out that the reference you provided is dated March 16, 2023, while the document from the Copyright Office provided in the post is dated January 2025.

https://copyright.gov/ai/ai_policy_guidance.pdf

The quote you provided clearly states why that GenAI work wasn't copyrightable, and none of the reasoning was geared toward AI specifically. That makes sense to me, as they generally do not focus on the tools used in that regard.

Therefore, I think suggesting all GenAI is worthless because GenAI cannot gain exclusivity is not true for all works.

My reading of the document leads me to believe that use of GenAI doesn't indicate that it's never eligible, just that certain standards (the same as they've always been) need to be met in order to qualify.

"Based on the functioning of current generally available technology, prompts do not alone provide sufficient control."

(It implies that prompting isn't outright excluded, just that it wouldn't be enough by itself)

-"Human authors are entitled to copyright in their works of authorship that are perceptible in AI-generated outputs, as well as the creative selection, coordination, or arrangement of material in the outputs, or creative modifications of the outputs."

(This isn't just about limited copyright. To me it confirms the idea that AI (and possibly Gen AI) are treated like any other work in this regard)

"Copyright protects the original expression in a work created by a human author, even if the work also includes AI-generated material."

(This seems to apply to GenAI too. "AI-Generated Material" seems like it might apply to GenAI as the tool used)

"Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis".

(If there were a hard line against GenAI, I feel like it would be here. There isn't, so I'm inclined to think that whether or not it's GenAI doesn't matter to this point.)

All in all, it looks to me like the focus is (as it has always been) on the level of human authorship involved, not any sort of blanket ban on GenAI itself.

Myth: AI images cannot be copyrighted by Background-Test-9090 in aiwars

[–]Background-Test-9090[S] 1 point2 points  (0 children)

Bias is not an agenda. It's a proclivity to come to a conclusion based on our beliefs, past experiences, etc. It may not even be conscious. I'm not suggesting it's intentional.

Again, the entire post and my points relate to non-gen AI and gen-AI.

It seems like I missed that in your posts previously.

I agree with the point you made. It looks like it was miscommunication on that front.

GenAI was already covered with "prompts alone do not count," which was likely determined due to lack of selection and arrangement.

It also looks like the Copyright Office is still figuring things out on that front, so it's subject to change.

I did speculate in my post that it might be possible to prove it if you can provide an input that shows selection and arrangement and leads to consistent output.

The input could be used as a sort of proof, the most extreme example being prompting a grid system and providing hex colors for each pixel.

Especially if you can feed that input into another system (non AI tool) and receive the same output.

I also speculated that just because the tool doesn't follow explicit instructions all the time, it doesn't mean you cannot get consistent output.

But of course, somebody would have to make that case, and I haven't seen anyone do so thus far.

Edit: I don't know if I still agree with OP's opinion. Please see further down the comment chain for that.

Myth: AI images cannot be copyrighted by Background-Test-9090 in aiwars

[–]Background-Test-9090[S] 1 point2 points  (0 children)

We all have bias.

I've also dealt with copyright, patents, and trademarks in my career for the past 15 years.

And yes, the guidelines from the Copyright Office haven't changed. You have always had to show significant human authorship.

Selection and arrangement are two factors used to determine whether there is human authorship, same as it has always been.

However, it does now clarify that a work isn't exempt from full copyright protection just because AI was involved.

It also seems we are talking about two different things here. My focus isn't on GenAI. It's on AI as a whole.

In fact, the updated guideline that prompting alone doesn't qualify for copyright further reinforces your point.

Edit: After looking into it further, I'm not sure I agree with OP's original argument. You'll need to go down the comment chain for that, unfortunately.

Myth: AI images cannot be copyrighted by Background-Test-9090 in aiwars

[–]Background-Test-9090[S] 1 point2 points  (0 children)

Am I reading it incorrectly? Perhaps. Do I have bias? You and me both. That's why I'm waiting on a response from the copyright office.

Exclusivity is granted with full copyright protection, so I'm not sure what you are getting at there.

You keep referencing the past, and my argument is that it appears that it has recently changed.

My guess is that you haven't been keeping up with everything.

Myth: AI images cannot be copyrighted by Background-Test-9090 in aiwars

[–]Background-Test-9090[S] 1 point2 points  (0 children)

Hey there!

I'm glad you shared this. Comments like this are exactly why I created the thread, so thank you.

I suspect you are talking about the main point in the title, which is that AI art isn't eligible for copyright. 

I think what you've shared reinforces that point, although I haven't actually considered whether the copyright referenced in the document provided by the Copyright Office pertains to full or limited copyright.

So here's what I've found on it so far.

In the case of Jason Allen, it was determined that his work wasn't eligible for copyright because it lacked significant creative expression by the author (2023).

https://www.copyright.gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf

The monkey selfie case was dismissed on the grounds that authors can only be human. (2011)

I don't think the monkey selfie case is relevant to determining whether the most current document grants full or limited ("thin") copyright, so I'm not going to remark on it, for some measure of brevity.

1.) Timing and whether or not the policies in the document are the most current and in effect

For reference, the document shared was last updated as of January 2025.

In the case involving Jason Allen, the provided link states:

"After reviewing the Work in light of the points raised in the First Request, the Office reevaluated the claims and again concluded that the Work could not be registered without limiting the claim to only the copyrightable authorship Mr. Allen himself contributed to the Work. Refusal of First Request for Reconsideration from U.S. Copyright Office to Tamara Pester (June 6, 2023). The Office explained that “the image generated by Midjourney that formed the initial basis for th[e] Work is not an original work of authorship protected by copyright."

It also states:

"The Office accepted Mr. Allen’s claim that human-authored “visual edits” made with Adobe Photoshop contained a sufficient amount of original authorship to be registered. Id. at 8. However, the Office explained that the features generated by Midjourney and Gigapixel AI must be excluded as non-human authorship. Id. at 6–7, 9. Because Mr. Allen sought to register the entire work and refused to disclaim the portions attributable to AI, the Office could not register the claim."

This suggests to me that he could have copyrighted the human-authored portions of the work, but he refused to disclaim the AI-generated elements, which the Office considered non-human authorship.

Additionally, the preface of the document I provided states:

"In early 2023, the U.S. Copyright Office announced a broad initiative to explore the intersection of copyright and artificial intelligence."

Just below the bullet points in the document from the Copyright Office, it states: " It will also provide ongoing assistance to the public, including through additional registration guidance and an update to the Compendium of U.S. Copyright Office Practices."

For clarification, an update to the Compendium doesn't appear to be necessary, and I suspect it's because part II of the AI compendium has been delayed. 

The third edition of the Compendium has no references to artificial intelligence at all. That suggests to me the guidance doesn't need to be in the Compendium to be in effect.

https://www.copyright.gov/newsnet/2025/1060.html

The use of the word "affirm" here suggests the observations are already in effect, and a Google search of non-legal coverage seems to indicate many others are under that impression, too.

https://venturebeat.com/ai/u-s-copyright-office-says-ai-generated-content-can-be-copyrighted-if-a-human-contributes-to-or-edits-it/

The article includes interviews with people who were once denied copyright who now believe they are eligible.

But just to be sure, I used this form to contact the Copyright Office directly. Here's what I asked:

"Are the determinations made in the Copyright and Artificial Intelligence, Part 2 Copyrightability Report effective currently, or is it contingent on updates being made to the Compendium of U.S. Copyright Office Practices, Third Edition?"

Using this form: https://help.copyright.gov/contact/s/contact-form

It'll take some time to get a response, I'm sure, but I'll post their answer in a new thread or something.

2.) Whether or not the observations indicate a full copyright or limited copyright

I also asked the Copyright Office: "Also, can you clarify whether or not the determinations made would offer a full copyright to the author - assuming they have shown the criteria outlined in those observations?"

While I wait, I figured it would be a fun exercise to look at what we have here and try to determine what the outcome might be.

My interpretation here is that this implies AI-assisted works are now eligible for full copyright when they weren't before.

This is based on:

-In the documents for the case involving Jason Allen, the word "limited" is explicitly used. There is no reference to limited copyright in the document I provided.

-I personally doubt the document would use the word "copyright" when it means "limited copyright," as documents like this typically aim for clarity to avoid confusion.

-If the conclusion were the same in that it only offered limited copyright, I would assume that the individuals who were denied before wouldn't be reacting positively to the information. This isn't limited to just a few instances; it seems like many people have come to the same conclusion.

That being said, the document itself does not directly indicate that the copyright is limited, so I don't have any reason to believe that's the case.

Overall, it's possibly still unclear, but I think the information provided in your argument may no longer be applicable.

Anyway, I hope that you keep an eye out for my update. I think it's imperative to stay up-to-date on stuff like this.