from the the-monkey-gets-it dept
For many years, we wrote about the infamous monkey selfie copyright situation (and lawsuit) not just because it was hellishly entertaining, but also because the legal questions underlying the issue were likely to become a lot more important. Specifically, while I don’t think anyone is expecting a rush of monkey-authored works to enter the market any time soon, we certainly do expect that works created by computers will be all over the damn place in the very, very near future (and, uh, even the immediate past). Just recently, IBM displayed its “Project Debater” offering, doing an AI-powered realtime debate against a human on the “Intelligence Squared” debates program. A few days after that, the Guardian used OpenAI’s GPT2 text generator to write an article about itself, which the Guardian then published (it’s embedded about halfway down the fuller article, which is written by a real life human, Alex Hern).
In both cases, the output is mostly coherent, with a few quirks. Here’s a snippet that shows… both:
This new, artificial intelligence approach could revolutionize machine learning by making it a far more effective tool to teach machines about the workings of the language. Deep-learning systems currently only have the ability to learn something specific; a particular sentence, set of words or even a word or phrase; or what certain types of input (for example, how words are written on a paper) cause certain behaviors on computer screens.
GPT2 learns by absorbing words and sentences like food does at a restaurant, said DeepFakes’ lead researcher Chris Nicholson, and then the system has to take the text and analyze it to find more meaning and meaning by the next layer of training. Instead of learning about words by themselves, the system learns by understanding word combinations, a technique researchers can then apply to the system’s work to teach its own language.
Almost… but not quite.
legally speaking: ¯\_(ツ)_/¯
there are a few proposed frameworks and a few theories of what happens if none of the proposals get taken up, but it will likely be settled in court
— Parker Higgins (@xor) February 15, 2019
This is why I think the monkey selfie case was so important. In determining, quite clearly, that creative works need a human author, it suggests that works created by a computer are squarely in the public domain. This seems to lead some (mainly lawyers) to freak out, thanks to an unfortunate assumption that many people (especially lawyers) seem to make: that every creative work must be “owned” under copyright. There is no legal or rational basis for such an argument. We lived for many years in a world where it was fine that many works came into existence and went straight into the public domain, and we shouldn’t fear going back to such a world.
This certainly isn’t a new question. Pam Samuelson wrote a seminal paper on allocating ownership rights in computer-generated works all the way back in 1985 (go Pam!), but it’s an issue that is going to be at the forefront of a number of copyright discussions over the next few years. If you think that various companies, publishers and the like are going to just let those works go into the public domain without a fight, you haven’t been paying attention to the copyright wars of the past few decades.
I fully expect that there will be a number of other legal fights, not unlike the monkey selfie case but around AI-generated works, coming in the very near future. Having the successful monkey case in the books is good to start with, as it establishes the (correct) baseline of requiring a human. However, I imagine that we’ll see ever more creative attempts to get around that in the courts, and if that fails, a strong push to get Congress to amend the law to magically create copyrights for AI-generated works.