What’s on the other side of this? It’s a question I ask on a semi-regular basis. What will life look like on the other side of ChatGPT and this AI-frenzied third decade of the 21st century?
I don’t know. But I do think there will be an “other side”. By that I mean that the dust will settle, the potential and the limitations of AI (and AGI) will be fairly well understood, humanity will have a collective understanding of where we compromised with AI’s capacities and where we firmly staked our claim on human-achieved productivity. It’s anyone’s guess, but mine is that we will conclude we ceded quite a bit of ground to our algorithmic colleagues.
How do we reach the other side, and will we know when we’ve arrived? My answers are: soon and probably. I believe in a few years’ time we will look back wryly upon the winter and spring of 2023 and chuckle at our having been worried at the right scale but about the wrong things. Some of machine intelligence’s more comical inadequacies will remain, and we will wonder why we ever thought they wouldn’t. Some major breakthroughs will be achieved as a direct result of OpenAI’s contributions, probably something in the field of medicine, and various morning shows (yes, those will still be with us) will serve up the cry-fest footage to celebrate (uncritically) the boon that is artificial intelligence.
Like with every other seismic development, be it rail travel, electricity, aviation, nuclear power, or Wi-Fi, eventually the mental toll of concerning ourselves with its every risk will become unsustainable and a comfortable acceptance will blanket itself over the whole of humanity. We can only be traumatized, amazed, curious, and willfully naïve for so long before the source of such mental states drops out of the fast lane and settles into a slower autopilot mode suitable for daily commuting.
Why am I thinking about this today? I woke up yesterday to news of the tech community’s proposed self-policing measure, one whose many signatories are urging an agreed-upon, industry-wide halt of the AI research that most directly runs the risk of undermining our species at an existential level. In forwarding it to an interested party, I mentioned that it was good to see the “should” half of the question being properly prioritized, and I stand by that; it is good to see, I just don’t know how it can work when AI research is as scattered about the globe as it is fueled by a competitiveness approaching the psychotic. They really can’t stop themselves, and the consequences are always meant to be tested for real on, well, real populations of humanity. We didn’t vote on ChatGPT’s release, but maybe we should have. I often say it’s the undemocratic nature of technological advents that should most trouble us all. To write new laws, we send legislators to D.C. Who sent the coders to Silicon Valley? And who asked them to rewrite the very nature of human experience? May I see a ballot?
Is there pain between here and the other side? I think so, terrible though it is to say… and think. Pain of the necessary sort? A growing pain after which humanity will have reached a new height, a new awareness, a new set of capabilities? That actually does seem plausible. Sam Vaknin has spoken (not necessarily in positive terms) of the way smartphone technology and other modern tools have made magic wielders of us all (he might have said “sorcerers”, as I think about it) — we command a great deal of reach and power with very minimal effort, as though our phones and applications are magnifying our presence in the world. Issue an app command, food arrives at your door. Strike a few keys, your message can be seen by a recipient thousands of miles from where you’re standing. It’s the stuff of fantasy, but we can’t be troubled to regard it as fantastical anymore — it’s just how things are. Such will inevitably be the case with AI/AGI.
But to further build on Vaknin’s “magic” point, it seems conceivable that human intellect will now stand with relative stability on a pair of AI-constructed stilts. Consider that the “C” and “A” in “CAD” (Computer-Aided Drafting) indicate that that highly technical work has relied on hard-drive computation for decades.
Now, however, we’re looking at a Computer-Aided Everything, from medicine and law, to (never gets easier to acknowledge) the arts themselves. At some point, seeing your friendly PCP might entail full knowledge that they are simply interpreting what their Medic-Bot 6000 spits out. So what? Is that so different from having them interpret what their medical textbooks have to say on a given malady? In some ways, yes; in others, well, we all wade in a great well of knowledge and practical wisdom filled up by those who came before us — AI will simply be more efficient in extracting the gems from that well.
My own concerns have almost nothing to do with highly specialized trades getting a capability boost from fact-finding, answer-spewing, unsalaried algorithms. I am instead curious as to how human beings will value themselves and strengthen their own thought-shaping apparatuses if AI’s pervasiveness shifts from assisting to overwhelming. We have demonstrated that our inclination is to absorb as much technology as inventive minds can unleash — nobody wants to seem out of touch or unsophisticated. It’s why most arguments opposing TikTok are pointless — statesmen attacking something the populace loves is a well-paved path towards transforming an everyday feature of life into forbidden fruit, which augments the appeal immeasurably. (I’ll be exploring the TikTok discussion in an upcoming piece.)
I find myself uplifted by the knowledge that the tech world is churning out its own techno skeptics. The “halt” letter is a promising development, one whose ultimate effect may end up being rather minimal, but whose presence signals to us all that humanity still has a fighting chance, and we just might make it to the other side of this intact.
But we still have to get there.
-MJM
‘90s Media Recommendation:
An episode of The Outer Limits that aired in 1997 has been on my mind in recent months. It is dated in many ways, as you would expect, but the plot points were about as prescient then as anything current-year science fiction has to offer. If you have 45 minutes to spare, I recommend tracking this down on YouTube or on a streaming service and taking in the bad special effects, the insightful forecasting, and the slightly too-neat vision of what the “other side” might look like.
https://www.imdb.com/title/tt0667957/
Great piece, and we agree with your sentiment. Our worry is what happens to us humans when skills such as "critical thinking," "problem solving," and "creativity" have all been outsourced to AI. What will remain of us? Typically, when we outsource a certain skill to technology, we, as a collective, lose that particular skill within a generation, sometimes faster.
"AI will simply be more efficient in extracting the gems from that well" is not the solution, it's the problem. When we grow accepting of, and then dependent on, AI, even among our best and brightest, we'll have reached our peak, whereas AI will have just begun. The struggle for knowledge has to be, first and foremost, a struggle, not just sitting at the top of the mountain and admiring the view; it's the struggle that gives it meaning and value. And when AI spits out an "answer," who knows what hidden dead ends, promising directions, and interesting pursuits it took along the way? Simply getting the answer is a lifelong crutch. It's like doing your child's homework assignments for them! How do they learn without the struggle, the mistakes? They won't.