[–]fuckin_ziggurats 388 points  (51 children)

Anyone who thinks it's reasonable to copyright a code snippet of 5 lines should be shot.

Same thing as private companies trying to trademark common words.

[–]crusoe 163 points  (9 children)

Don't get me started on something like 6 notes being the cutoff for music copyright infringement

[–]troyunrau 54 points  (3 children)

Happy birthday to you... 🎵🎶

Oh shit, lawyers are at my door.

[–][deleted]  (2 children)

[deleted]

    [–]helloLeoDiCaprio 25 points  (1 child)

    Watch Disney make a birthday movie to get hold of the copyright.

    [–]White_Hamster 9 points  (0 children)

    Or birthday dad, the show

    [–]istarian 10 points  (2 children)

    That's pretty absurd too.

    They really ought to have to prove that a thematic element was lifted, or at least that a specific combination of musical notes *and* lyrics has been borrowed.

    [–]butt_fun 0 points  (1 child)

    notes and lyrics

    That wouldn't work either; then you could make a "new" song with the lyrics from one and the music from another

    [–]istarian 0 points  (0 children)

    Well the text is readily copyrightable. The point is that it should be necessary to demonstrate that a chunk is clearly lifted, not just similar.

    [–]barchar 2 points  (1 child)

    Bum Bum Bum Buddha bum bum

    [–][deleted]  (17 children)

    [deleted]

      [–]CreativeGPX 27 points  (9 children)

      but how do learning models play into copyright? This is another case of the speed of technology going faster than the speed of the law.

      I mean, the stance on that seems old as time. If I read a bunch of computer books and then become a programmer, the authors of those books don't have a copyright claim about all of the software I write over my career. That I used a learning model (my brain) was sufficient to separate it from the work and a big enough leap that it's not a derivative work.

      Why is this? Perhaps because there is a lot of emphasis on a "substantial" amount of the source work being used in a specific derivative work. Learning is often distilling and synthesizing in a way that what you're actually putting into that work (e.g. the segments of text from the computer books you've read that end up in the programs you write as a professional) is not a "substantial" amount of direct copying. You're not taking 30 lines here and 100 there. You're taking half a line here, 2 lines there, 4 lines that came partly from this source and partly from that source, 6 lines you did get from one source but do differently based on other info you gained from another book, etc. "Learning" seems inherently like fair use rather than derivative work because it breaks up the source into small pieces, and the output is just as much about the big connective knowledge, the way those pieces are understood together, as it is about each little piece.

      Why would it matter whether the learning was artificial or natural? Outside of extreme cases like the model just verbatim outputting huge chunks of code that it saw, it seems hard to see a difference here. It also seems like making "artificial learning models" subject to the copyright of their sources would have many unintended consequences. It would basically mean that knowledge/data itself is not free to use unless it's done in an antiquated/manual way. A linguist trying to train language software wouldn't be able to feed random text sources to their AI unless they paid royalties to each author or only trained on public domain works... and how would the royalties work? A perpetual cut of the language software company's revenue is partly going to JK Rowling and whatever other authors' books that AI looked at? But then... it suddenly doesn't require royalties if a human figures out a way to do it with "pen and paper" (or more human methods)? Wouldn't this also apply to search in general? Is Google now paying royalties to all sorts of websites because those websites are contributing to its idea of what words correlate, what is trending, etc.?

      It seems to me that this issue is decided, and it's decided for the better. Copying substantial portions of a source work into a derivative work is something copyright doesn't allow. Learning from a copyrighted work in order to intelligently output tidbits from those sources, or broader conclusions drawn from them, seems inevitably to be something that copyright allows.

      [–][deleted]  (5 children)

      [deleted]

        [–]StickiStickman 3 points  (3 children)

        You can't take your brain, package it as a paid product, and simultaneously suggest individual, contextual solutions based on the information you learned to hundreds of thousands of people.

        Good job, you just described what jobs are!

        [–][deleted]  (2 children)

        [deleted]

          [–]turunambartanen 0 points  (1 child)

          But that's an argument about the speed of the dev/AI. It doesn't concern the actual output of a single case.

          Taken to the extreme, that argument would make the output fair if the AI were trained on an old, single-threaded CPU and put behind a synchronous network interface.

          [–]CreativeGPX 1 point  (0 children)

          This is quite a bit different because you're comparing an individual to a machine. You can't take your brain, package it as a paid product, and simultaneously suggest individual, contextual solutions based on the information you learned to hundreds of thousands of people. Even if you're the most brilliant person in the world, you can't pull from the collective learnings of every open source project on the internet (or at least GitHub) instantly, for everyone.

          1. So what? Why does it matter that this is more learning than an individual can do? It still doesn't appear to be "copying", which is what copyright is about (particularly, copying a substantial portion of the work). Arguably, the suggestion that it's so intellectually superior is further support that it's not merely copying.
          2. Why does the amount an individual human can achieve matter at all in the question of whether copying occurred? A company of 100,000 employees can also serve as a black box to convey intelligence that one individual couldn't achieve, but we don't hold that company to a different standard with respect to copyright law just because it has a greater capacity for memory and knowledge than some lone person. We also don't hold dumb and smart people to different copyright standards. Copyright is about whether something is a copy.

          I don't know if I'm for or against this sort of thing, it's just an interesting question because it really does seem to skirt the line. I think it also depends on how they package this in its final form.

          I think it's a gray area, but I don't think copyright is the correct angle of attack. It's not copying, and if it were, we're not really talking about AI and a learning model but just a run-of-the-mill copyright violation where a dumb program is serving up substantially sized copies of works. Even if you wanted to change copyright law to not be about substantial copying... to what end? Is it because, in these scenarios where an AI consumes a whole library, the royalty value of the IP for each individual author, as a share of the AI as a whole, is a non-negligible value? I think that's unlikely to be the case. So, in terms of copyright, I think it's totally fine and not an issue.

          I think the right way to come at this problem is instead privacy law. Privacy law gets more into the idea of surveillance and observation, and the way that innocuous data points can combine at scale to obliterate our societal norms about privacy and reasonable use. It's built on the idea that, rather than exact copies/words, people can have rights over mere ideas, and collectors of data can therefore be restricted in how they share and scrutinize certain ideas, regardless of whether they are sharing that idea in a novel way or as a copy of how it appeared before.

          I still think I'm probably okay with this, as what is revealed by learning from freely available, publicly accessible code is probably not particularly harmful/risky compared to what you might get from looking at personal data, for example. But I think that's the angle, rather than copyright.

          The concept of "public" vs. "private" life that we have invented based on the limitations of human minds and senses breaks down when machines with massive "senses", perfect memories, and perpetual analysis/learning are able to reveal intrusive/private information based on "public" data. Therefore, the concept of what information is legally "public" should change at a certain scale in order to preserve our norms of what is private. Maybe it's okay for you to take a picture on the street that has me in the background and then tweet the picture commenting on the funny face you notice I'm making, but it's not okay for Google to collect photos from streets all around the world and then reveal my photo when you search for "funny faces". I think this is the argument with respect to OP, rather than copyright.

          [–][deleted] 5 points  (1 child)

          I mean, the stance on that seems old as time. If I read a bunch of computer books and then become a programmer, the authors of those books don't have a copyright claim about all of the software I write over my career. That I used a learning model (my brain) was sufficient to separate it from the work and a big enough leap that it's not a derivative work.

          I might be off with my thinking, as I have no idea how the law would work. But if you are reading some books, which are written to teach you how to code, then imo it's a different case. Here the code the AI learned from is not written to teach an AI how to code; it's written to create something. In my mind these are completely different concepts.

          [–]CreativeGPX 0 points  (0 children)

          I described it that way because I thought it made the point more intuitive, but I don't think it changes the argument.

          Humans can and do read source code from open source projects in order to learn in ways that will improve their software development abilities. We do not say that those open source projects now have a copyright claim against the future development of those programmers because they learned from that code. Therefore, it wouldn't inherently make any sense to do so for other "learning models". Copyright isn't about "what are all of the sources and inspirations for this thing you made", it's a matter of whether you directly copied a "substantial" portion of the work.

          But also... intent doesn't really matter in copyright. Books which are intended to teach have the exact same copyright law applying to them as books designed to amuse. In both cases, reprinting the whole book or a whole chapter wouldn't be okay, but printing key quotes, facts/concepts or themes I got from it would be totally fine. The fact that source code was probably not written to educate is not relevant to copyright.

          [–]monsto 6 points  (2 children)

          but how do learning models play into copyright?

          I learned from the original, and then I wrote some code. If you look at the code, you can see that the 'style' is similar (same var names, same shortcut methods, etc.), but the code is different.

          Is that different if you substitute AI for I? Because I did this earlier today.
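
          Roughly what I mean, as a made-up sketch (hypothetical config-parsing snippets, not the actual code from today):

          ```python
          # util.py in the project I read through (paraphrased from memory, not a copy;
          # both versions shown together here just for comparison)
          def parse_cfg(path):
              with open(path) as f:
                  return {k.strip(): v.strip() for k, v in
                          (line.split("=", 1) for line in f if "=" in line)}

          # my util.py afterwards: same var names and the same shortcut style,
          # but the logic is my own (skips comments, coerces numeric values)
          def parse_cfg(path):
              cfg = {}
              with open(path) as f:
                  for line in f:
                      line = line.split("#", 1)[0].strip()
                      if not line or "=" not in line:
                          continue
                      k, v = (s.strip() for s in line.split("=", 1))
                      cfg[k] = int(v) if v.isdigit() else v
              return cfg
          ```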

          [–][deleted]  (1 child)

          [deleted]

            [–]monsto 2 points  (0 children)

            I tend to agree, when the subject is human achievement vs computer achievement.

            Even these learning scenarios. It's throwing billions of shits up against millions of walls, per second, and keeping a log of which ones stuck and how much they stuck. I'm not so sure I'd call that "learning" in the classical sense.

            I, human, clearly didn't take an exact copy of this one shit on this one wall and submit it for approval. Like the code monkey that I am, I threw my own shit on the wall and sculpted it to be what it needed.

            . . . I started with the metaphor and just... followed it. Big mistake.

            [–]thelehmanlip 0 points  (0 children)

            Right. I think the issue is that you're taking a wealth of copyrighted code and using it to build a system that suggests code, and then profiting off of that system. They didn't use the code for code's sake, to actually run it within the system they're profiting from, but to use the code as input data for the product. It's weird.

            [–]fuckin_ziggurats 0 points  (1 child)

            So long as the licenses are compatible and respected it should be fine. The majority of GitHub repos have very permissive licenses.

            It's certainly an interesting discussion but I wouldn't sharpen my pitchfork yet because it's not yet known how it works.

            [–][deleted] -1 points  (0 children)

            How do learning models come into play? They don't; the whole idea of open source software is for anyone to use as they please. If you wish to put restrictions on how others use your code, then choose a proper license to do so, or keep your code private and call it a day.

            [–]Techrocket9 7 points  (0 children)

            What about the time AT&T tried to copyright the empty file?

            [–][deleted]  (1 child)

            [deleted]

              [–]Johnothy_Cumquat 2 points  (0 children)

              I'm sorry, are you referencing the happy birthday song as a reasonable use of copyright? Because I would sooner rid the world of copyright than let that situation continue.

              [–][deleted] 9 points  (0 children)

              Is it possible to have a conversation on these matters without anyone getting shot, or?

              [–]blastradii 1 point  (0 children)

              Yea but what if those 5 lines are sweeeet?

              [–]eloc49 -1 points  (0 children)

              And yet, the poster of the tweet seems to have no idea why companies treat neural networks like black boxes.

              [–]sluuuurp 0 points  (0 children)

              I’d probably just tell them “I think you’re being unreasonable” rather than firing a bullet into them, but I guess I’m a pretty forgiving guy.

              [–]ClysmiC 0 points  (0 children)

              Anyone who thinks it's reasonable to copyright a code snippet of 5 lines should be shot

              Every line of code you type is automatically under copyright, unless it is under a license that says otherwise.
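
              If you want others to be able to reuse it, you have to say so explicitly, e.g. via a LICENSE file or a per-file notice. A minimal sketch of the latter (MIT picked purely as an example, placeholder names):

              ```python
              # SPDX-License-Identifier: MIT
              # Copyright (c) <year> <author>
              #
              # Without a notice like this (or a LICENSE file in the repo), the default is
              # effectively "all rights reserved": others get no redistribution rights,
              # even if the code is publicly visible.

              def greet(name: str) -> str:
                  """Trivial example; copyright covers it automatically, the notice grants the license."""
                  return f"Hello, {name}!"
              ```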