While I was putting the final touches on last week’s post about why AI requires a return to the humanities, OpenAI published a blog post announcing a major investment aimed at achieving Superalignment within four years.
While it’s noble that OpenAI seeks to solve the “core technical challenges of superintelligence alignment,” it is far from enough. Technology cannot solve what is fundamentally a human problem, nor fill a moral void.
OpenAI, in this post, defines the fundamental research question as, “How do we ensure AI systems much smarter than humans follow human intent?” If history has taught us anything, however, it’s that “human intent” is often the problem with society. And whose idea of “human intent” will we choose?
Moreover, over the past decade we have seen repeatedly how centralized, self-reinforcing systems of belief drive a society to extremes. Wouldn’t a superintelligence trained on a single idea of “human intent” simply double down on the mistakes we made in the era of the social internet?
As I articulated in last week’s post, if we want to thrive in an AI world, it’s time for us to rediscover the humanities. I recommend that we focus on three specific tasks.
1. We must spend as much time and research on developing our understanding of the human person, society, culture and ethics as we do on AI’s capabilities. History, philosophy and theology need to be prioritized and brought back into the mainstream, made as prevalent in our media, education and discussions as the sciences and technologies. It is essential that we develop both the sciences AND the humanities.
2. We need to refine our understanding of what it means to be human. For quite some time, “intelligence” has been central to our understanding of what makes us unique. In reality, we have learned that logical deduction, memory and other aspects of intelligence are not uniquely human, while consciousness and connectedness are. It is essential that we have an explicit and well-developed understanding of the distinct human person and experience if we’re going to successfully develop safe AI.
3. We need to rediscover, and realign around, our moral compass. The extreme relativism that pervades our society today — the idea that knowledge, truth and morality exist only in relation to culture, society and historical context and are not absolute — has undermined our ability to reach any sort of “Superalignment.” The cultural mores of the United States differ greatly from those of Brazil, which differ greatly from those of Afghanistan, which differ greatly from those of Japan. If we tell our computers that there is no truth, or that truth is relative to circumstance, then they will follow our instructions and create a world in which there is no truth. It is essential that we rediscover a sense of right and wrong, of fact and fiction.
I applaud OpenAI for their work on the technological advancements necessary for AI alignment. The technology they’ve created, like ChatGPT, is nothing short of mind-blowing, and it gets better seemingly every day. At the same time, I challenge OpenAI and others to put just as much effort and investment into advancing the humanities as they put into the technology that is advancing so rapidly. I, for one, regardless of my mistrust of generic “human intent,” trust individual humans to do the right thing far more than I do potentially rogue machines.