From Moral Man to Grammatical Man: A Tech Humanist's Journey Through AI Ethics
Exploring the intersection of ethics, information theory, and AI’s impact on society through two decades-old books
In my widely varied reading, I often wander between the realms of technology and the humanities like a flâneur in a cityscape of ideas, and it is in these intersections that I always stumble across the most unexpected connections. Recently, I've been poring over two books that hit different but resonate the same: “Moral Man and Immoral Society: A Study in Ethics and Politics” by Reinhold Niebuhr (whom Barack Obama once identified as a major influence on his thinking) and “Grammatical Man: Information, Entropy, Language and Life” by Jeremy Campbell. Despite being products of different times, both books provide valuable insights into our current dilemma: the ethical impact of rapidly advancing technology.
Niebuhr's work, written back in 1932, posits the troubling notion that “civilization has become a device for delegating the vices of individuals to larger and larger communities.” In other words, we are at our most ethical with the people we’re closest to, while we outsource our selfishness to larger, more abstract collectives. Does that resonate? It does for me. It seems to play out all around us in business, consumer culture, international relations, and, most especially, technology. We're more connected than ever, yet the social order writ large drifts further from our individual values. How do we navigate a landscape where collective self-interest seems to grow even as our individual moral agency recedes?
Campbell’s “Grammatical Man” intrigued me with the concept of entropy as “missing information.” This led me to contemplate the intersection of information theory, probability, and artificial intelligence. As AI systems grow more complex and less predictable, our comprehension of them diminishes. While AI has often been termed a “prediction machine,” it's the unpredictability of its societal impacts that is most concerning. This underscores the pressing need for ethical guidelines that emphasize transparency and accountability in AI systems.
Nearly every set of AI guidelines would seem to agree. The OECD AI Principles include a section on “transparency and responsible disclosure.” The UNESCO human rights approach to AI talks about “transparency and explainability.” The language of the EU AI Act includes transparency, too.
Here's the million-dollar question: How do we implement these guidelines? How do we persuade trillion-dollar companies to adhere to ethical principles in AI development? Do we dangle tax breaks or financial goodies to get them to toe the ethical line in AI? These aren't easy questions. But they're ones we need to grapple with as we navigate the evolving landscape of AI ethics.
We're at a crucial juncture in AI development. It’s a moment where we need to choose between letting AI develop unchecked or taking active steps to ensure it evolves in a way that benefits all of humanity. AI’s unpredictability is a double-edged sword; it's a challenge and an opportunity. It's our job to understand these systems better, to harness their value while mitigating their risks.
Niebuhr and Campbell remind us that the confluence of technology and ethics is not a new frontier, but one that has been explored throughout history. The true challenge lies not in unearthing new principles, but in weaving these timeless truths into the fabric of our current context—a task that requires both intellectual rigor and an unwavering commitment to ethical considerations.
Ultimately, my journey through these two books has strengthened my conviction that the Tech Humanist and Strategic Optimist approach — not the techno-optimist one — offers our best way forward. By considering the human experience and our role in technology, we can begin to untie the ethical knots AI presents. In doing so, we can shape a future where technology benefits not just a select few, but all of humanity.
Kate O’Neill is widely known as the Tech Humanist. She is a speaker, author, researcher, and advocate whose work focuses on making sure that as we progress technologically, we don't lose sight of human values. She is founder and CEO of KO Insights, a strategic advisory and solutions firm that helps businesses navigate the future by exploring the intersection of technology and humanity; in essence, helping businesses use technology to create a more human-friendly future.