Tech Humanism, not tech optimism
Why today's human lives matter, and why we can't leave meaning up to machines.
“If you suddenly find yourself thinking about technological optimism today, I highly recommend my friend @kateo's latest book & other work on the topic (plus she consults on strategic techno-optimism which every large org should immediately sign up for)”
Indeed, you may have noticed a dull roar about techno-optimism in the past few days.
That’s because Marc Andreessen published a “techno-optimist’s manifesto.” It cites Milton Friedman, and it equates ESG, sustainability, and tech ethics with a mass demoralization campaign. It names patron saints and enemies.
It prompted responses from tech critics and thought leaders, such as Gary Marcus, who called out a few things that came up short for him. But what they each called out still doesn’t feel complete to me.
Why my optimism is different
It’s true: I’m associated with optimism. In fact, if you search for variations on “tech optimism,” you will find references to me sprinkled throughout the results. But I’ve always been skittish about leaning too heavily into outright tech optimism. My own model for optimism is couched in a strategic “provided we make good decisions” framing. It’s why I advise leaders on a disciplined approach to insight and foresight.
So for me, all this recent chatter about tech optimism is missing a core component.
I don’t see the point in an optimism that wilfully ignores existing harms being done to people, because any meaningful way forward will need to work to reduce and remove those harms — which means it needs to first acknowledge them.
I don’t see the point in an optimism that doesn’t serve as a blueprint toward building what it is you hope for.
And at the same time I don’t see the value in an optimism that ignores how hard it can actually be to build what you hope for.
We can’t leave meaning up to machines
One of the things I often say in my keynotes is that we cannot leave meaning up to machines to determine. What I also mean by that is that we cannot let our understanding of what is meaningful be shaped by a fascination with machines.
We need to solve for the problems of humans living today while solving for quality of life tomorrow. I reject the premise that distantly future humans are disproportionately more important than those who live and breathe today; lives have value, today’s lives have value, and tomorrow’s lives will be better for our having solved human problems in practice and at scale, rather than in some future abstract theoretical state.
We need to understand and prioritize the fullness of life as we know it, the richness of experience as we know it, to ensure that the lives and experiences of the future are just as full and rich, and hopefully much more so.
We need to balance our optimism with intellectual rigor and relentless strategy, to ensure that we don’t let ourselves off the hook, that we hold ourselves to the highest standards of quality and care, and that we achieve what these technologies are capable of on our behalf and on behalf of life on this planet.