Hi. KateBot is me, Kate O’Neill. Not a bot. But everyone seems to be obsessed with chatbot interactions right now, so this newsletter is imagined as if you were prompting me for answers.
“Can you summarize the state of the public and media reaction to the rise of generative AI chat, graphics, and writing tools?”
KateBot: By now you’ve all been hearing about, reading about, and perhaps even tinkering with generative AI. It's impossible to avoid. And based on the range of hot takes, it’s also impossible to size up. Is it the end of art? Is it the beginning of sentient machines?
“Come on, KateBot. Should we all panic and freak out?”
KateBot: As someone known for snarky commentary on facial recognition and privacy issues, I have, predictably, a few thoughts. Yes, people sharing their own enhanced images ought to be cautious, as always, about what kind of permissions they're explicitly or implicitly granting through the upload or use of their photos and other content. So before you, say, upload your picture to a tool for the lulz, at least skim the terms and conditions you're agreeing to.
“What about artists, writers, and other creators? Is their work being stolen?”
KateBot: There have been some egregious examples. I Am Not A Lawyer, but the legal experts I follow argue that we need IP law enhancements to protect creative works from being subsumed for AI regurgitation. These tools exist because their underlying models are trained on substantial collections of existing artwork and content. You can think of it as a giant vacuum sucking all kinds of images into its bag. The leading models claim that their input is all open-source imagery, but there's reason to think some laundering is going on: removal of important metadata, and so on.
And then the concepts are available to be invoked in commands. You can’t necessarily invoke artists by name — many of the tools rightly restrict that — but you can dance around it.
“How do these tools change how we think about art and creative protections?”
KateBot: These tools challenge our existing definitions of creative work, authorship, and even what the heck art really is. But that doesn't mean we can't stretch ourselves to consider the future implications of adopting technologies like these.
One of the books I've read recently is The Seven Signs of Ethical Collapse, whose author repeatedly points out that ethical challenges emerge where laws and practices aren't clearly defined yet. Skirting compliance through, say, "creative accounting" or, in this case, "creative" data mining and laundering isn't necessarily the same as breaking laws.
Technology will always be one step ahead of regulations. But business traditionally hasn’t needed anything like cutting-edge technology to skirt the law or work outside of what’s best for society as a whole.
Which is why the answer is twofold: tighten the laws, and if you're running a business, tighten your governance with ethical practices. Adobe is attempting this with its recently announced embrace of creative AI, with certain conditions on the data it collects and generates. And it makes sense, since Adobe is the go-to platform for human creative professionals. (Full disclosure: I am part of the Adobe Insiders influencer program and occasionally receive perks as part of my participation. That hasn't shaped my opinions here, but since we're talking about ethics, it seems worth mentioning.)
“Should we not use them?”
KateBot: I don't adopt the posture that generative AI tools shouldn't be used, just that unlawful and unethical data mining and use should be aggressively challenged. Fighting the use of the tools as a whole is a losing battle anyway: they're undoubtedly engaging and fun, way more so than the 10-Year Challenge or other photo-sharing memes. The truth is, the enhancement to human productivity and output we can expect from generative AI in the next one to three years is substantial.
Which is exactly why we need to work on better data and IP protections now.
“What does all this mean about us as humans?”
KateBot: If machines can create what appears to be relevant art or prose from a prompt, by cobbling together countless influences from across the collective knowledge and work shared online, then it does force a reckoning. But as I’ve always pointed out, it isn’t creativity that makes us human. After all, non-human animals can be creative (I own a painting made by a meerkat, if you don’t believe me) and can use tools to solve problems. We are not the only adaptive species, and we won’t be the last.
What makes us special, if we agree that we are special, is our capacity for meaning: meaning as conscious comprehension of context and significance, meaning as embodied sense-making, meaning as cosmic awareness (if not understanding) of the universe larger than ourselves. Meaning in all its manifestations.
“Will machines ever generate something that we regard as meaningful?”
KateBot: Certainly, and soon.
“Will machines seem to comprehend meaning in ways that feel uncannily real?”
KateBot: Almost definitely, and sooner than you think.
“Will machines ever really construct and connect with meaning like we do?”
KateBot: Now that is a different question.
Artificial intelligences are far more likely to construct their own sort of meaning. Ours is very specifically embodied. Until we have a melding of the most sophisticated robot chassis with the most sophisticated machine intelligence, there will be no opportunity for machines to make embodied meaning. But once they do, it could be a very different sort of take on what is meaningful. They will likely demonstrate what we would think of as curiosity, in the sense that their models will seek to collect as much information as possible to solve the meaning puzzles they create and uncover.
Or they may develop a model for meaning-making that isn’t embodied the way ours is, and their notion of “sense-making” may rely on a different set of “senses” from ours. Whereas we rely on taste, touch, and smell, meaning-making machines may have, say, a more mathematical “sense” that reveals deeper patterns than we tend to notice.
“So we have something to look forward to? Maybe?”
KateBot: Quite honestly, that could be a very exciting time — if we build the right social and cultural scaffolding to support it.
Generative AI is, on one hand, just a tool like any other you can use to be creative and productive. On the other hand, it is trained on a harvest of the work of human creators, and if the output rips them off, that shouldn’t be treated any differently from plagiarism using any other tool. So while we welcome the tools themselves as ushering in a new generation of human productivity, we should also put the next steps in place to ensure we don’t destroy the delicate economy of human creators. We need human creativity in all its forms. We should nurture it, protect it, and ease it forward into the brilliance that will come from collaborating with more sophisticated machines.
Cheers to that.
Of course you have a painting by a meerkat. OF COURSE YOU DO.
Here's my question (or questions): did the meerkat artist in question realize it was pursuing a creative endeavor, or was it merely responding to a reward opportunity? Did the meerkat select its own color palette or format? Could the meerkat have decided, "no, I don't want to sell that one, let's try another approach"? Do meerkats edit?
The ongoing shaping and reshaping of what defines "creativity" continues unabated...
(My $.02 re: generative AI/machine learning infospawning? As a nominal journalist/former educator/burgeoning parent, it scares the crap out of me. Because it helps erase and redraw the line of what constitutes "fact" even more quickly than the purposeful application of mis- and disinformation does.)