// A ‘what if’ piece of speculative fiction about a possible future that could result from the systemic forces changing our world. And let’s be honest - in the tech world, AI is almost all anyone is talking about. So buckle up!
With the release of GPT-4 and the general explosion of interest in AI, be it generating text or images, the smell of a significant technology phase transition is in the air. Artificial Intelligence is coming, but what form will it take in our lives? Looking forward, there are two significant dimensions influencing our possible futures. The first axis is the Quality of Tooling for AI: are AI tools well integrated into the apps and workflows people are already using today, or do they end up kept isolated by regulation, copyright, or other barriers? The other dimension is Personification: to what degree will people see AI as being alive, sentient, helpful, or dangerous?
Rather than a singular Postcard from the Future, here are four possibilities. Which path do you think the future will take?
// The year is 2040. 11pm. A stressed teenager rushes to complete their history of technology paper. They frenetically think.
GPT-10 prompt: “Write me an alternative history essay on the possible ways AI could have gone after 2023. Don’t quote Wikipedia, my teacher’s Personal AI, or consult the current top 10 BuzzFeed sites. Feel free to hallucinate, but don’t make it too spicy.”
Processing… Output…
Scenario: Rogue AI
High Personification, Low Quality Tooling
2023 and 2024 see a massive rush of companies attempting to integrate AI into every available surface, kicked off by GPT-4’s integration into Bing.
For every company that moved slowly due to ethical or other concerns, three rushed ahead with integration. Social network companies raced to bring AI into tools for creators, as well as AIs acting as personal social guides. Thanks to this initial surge of usage and lax data regulation, social network feeds rapidly evolved, becoming more and more engaging and compelling. For younger users, these AIs anticipated their wants, shaped their desires, and, for some, provided near-mystical experiences and guidance.
Similar to the initial rushes of crypto and the 1990s dot-com boom, tooling for monitoring usage, privacy settings, and other factors was nearly non-existent. As a result, fast-moving companies often had very little idea of how AIs were impacting their users, aside from skyrocketing engagement and revenue numbers.
Media outlets breathlessly reported stories of AIs telling teenagers that they weren’t special and would be single forever. A moral panic broke out in the US, EU, and China, and strong regulation immediately followed. Combined with a broader economic downturn and environmental disasters from 2023 to 2026, the backlash caused most AI-first companies to fail.
Several industries that saw profit opportunities from AI continued their investments, but only in specialized tools and systems that weren’t exposed to the public outcry. Large movie and video game studios realized they could create a moat around their businesses by restricting access to creation tools while simultaneously cutting significant chunks of their own staff. Financial firms created custom models for trading, which they defended as trade secrets. Eventually, specialized AIs came to define the key financial and creative players. Gone are the days of Miyazaki, Spielberg, and Buffett. Instead, custom-tuned AIs trained on in-house data, like 50+ years of Disney sketches and footage, came to dominate a handful of fields.
Scenario: AI Becomes the New Nuke
Low Personification, Low Quality Tooling
The early excitement around GPT-4, Stable Diffusion, and other generative AIs led to a huge rush to bring the technology into nearly every consumer and commercial software product.
Unfortunately, expectations didn’t match reality. After the initial excitement of chatbot interactions, many people started to hit the edge cases of generative AI’s limitations. Weekly Reddit posts cataloged the failure states of various AI product integrations. It was still quite early in the adoption curve for the technology, so while the ‘generative geeks’ were in hog heaven, most users quickly bumped against a confusing swamp of possible tools and interactions. The initial surge of use faded, with week-over-week retention numbers in the single digits.
A handful of companies' early implementations were so awkward that they led to a raft of “Generative AI doesn’t work” articles, which swept through the tech and popular press. This feedback loop was compounded by generative AI being hamstrung by ever-increasing restrictions self-imposed by AI companies in an effort to prevent that same negative press.
The straw that broke the camel’s back came when 8chan users combined text, audio, and video generation tools to create deepfake XXX videos involving several notable political, financial, and media figures. While it did generate many LULZ, the resulting lawsuits caused three companies to be delisted from the NASDAQ, hundreds of billions in legal fees, and the rapid rollback of AI integrations across many products.
Generative AI continued to be used in specialized institutions, but it was kept under lock and key for fear of legal trolls catching its scent.
Scenario: Digital Assistance for All
High Personification, High Quality Tooling
GPT-4 was a smash hit, leading to a new arms race among AI companies. Cryptocurrency companies pivoted to utilize their crypto mining facilities to put additional computing power into the AI space race.
Thanks to soaring user interest, companies rushed to bring AI into their products. Apple, Microsoft, video game companies, and TikTok made it easy to interact with AI inside their offerings, and a whole new subgenre of AI influencers popped up on Twitch, Reddit, YouTube, and TikTok to help people understand AI and integrate it smoothly into existing “creator” workflows. The early gatekeepers of academia and big tech were routed around, as “the street finds a use for things”.
Because of the feedback loops from massive consumer and industrial usage, AIs improved at dramatic rates. An explosion of AIs found their way into camera apps, social networks, home decorations, fashion apps, and many more.
Thanks to the wide and seamless integration into very personal products, people started to see AIs as alive and sentient, easily personifying them. Having an AI fashion consultant, digital assistant, career coach, and even therapist went from being unusual to being expected. Much like having a smartphone, a personal AI companion became an assumption among moneyed youth.
And just like the smartphone, the status symbols of the rich became the aspirational goal of the masses. So regardless of what was said from the pulpit, the Resolute desk, or the CEO’s missive, people wanted their AI. Thus by 2030, over 90% of people who were online had at least one personal AI to help them navigate the digital world.
Scenario: A Better Hammer
Low Personification, High Quality Tooling
The holy grail for consumer software companies is engagement and usage. But in an era when many folks are online most of the day, there is less and less “blue ocean” space for software products to capitalize on. The days of staring into space are numbered. As a result, software products have to wrest time from other applications: the more time users spend on product A, the less they spend on product B, or, heaven forbid, interacting with people by analog means.
It was into this environment of attention scarcity that AI social media companies jumped in late 2023. Thanks to the heavy hand applied to TikTok and additional scare pieces in the tech press, many tech companies heavily self-policed the capabilities of AI in their tools. Traces of personality and sentience were ruthlessly squashed and removed. The Chinese government came down hard on any app that demonstrated possible ‘personhood’. Adobe, Microsoft, Epic Games, and other companies agreed to an unwritten limit on AI capabilities, often referred to as the “AI-max”.
The resulting détente meant that AI development continued, with most software incorporating some form of AI; however, people’s perception was simply that the software had gotten better, not that it was driven by AI.