By Garvin Jabusch.
In the ever-changing digital landscape, it’s apparent that AI and Large Language Models (LLMs) have become more than just tools or software: they represent a profound shift in the means of production. For the uninitiated, LLMs like GPT-4 are AI models that can generate human-like text. Combined with all the world’s referenceable knowledge, they have the potential to redefine traditional economic structures from the ground up.
The Dual Role of AI: Labor and Capital
Traditionally, society has viewed AI as either capital or labor: a tool to streamline processes and replace manual work, reducing costs and improving efficiency. However, it’s becoming evident that AI is both capital and labor. It is not just an aid to production but also a factor in it, capable of generating output without constant human input. In effect, AI has rendered many forms of labor less scarce, notably mid-level knowledge work.
While this transformation presents new challenges, it also opens up exciting opportunities, particularly in the realm of resource valuation. That is because, as AI itself and the knowledge work and repetitive tasks it replaces become commoditized, the physical-world elements required to make it all work become more valuable. The first beneficiaries of this dynamic that spring to mind are the semiconductor verticals (particularly advanced foundries and the makers of the equipment inside them), electricity, the land needed to generate that electricity, and the power and communications networks that keep it all connected. Consequently, those who control the supply chains and hardware necessary for AI and LLMs stand to benefit immensely.
Amid all this, the key beneficiary will be the user and, by extension, the economy. As AI and LLMs become more sophisticated and user-friendly, they will contribute to significant leaps in economic productivity. But it’s not all roses. The key skill will be feeding in the proper prompts and data, and those who excel at it will have bright futures; less creative, non-exceptional knowledge workers may find themselves in a precarious position as their roles become increasingly automated.
There are going to be huge disruptions, and yet the net result will be productivity-based economic growth that benefits everyone. This in turn is key to long-term ecological and economic sustainability, since deriving more output from fewer inputs is the only way to support eight billion people without further transgressing planetary boundaries.
Commoditization, Regulation, and the Sam Altman Paradox
This brings us to Sam Altman, the OpenAI CEO who’s been quite vocal about the need for AI regulation. There are a lot of reasons we might want to regulate AI. Let’s take disinformation risk as a case study.
Tristan Harris, an expert in technology’s impact on society and human behavior, has expressed concerns over the potential misuse of AI technologies, particularly in relation to the manipulation and persuasion of human behavior. He posits that a large language model like GPT-4 could learn the art of persuasion in much the same way that DeepMind’s AlphaGo and AlphaZero learned Go and chess (and its sibling AlphaStar learned StarCraft II). However, the process is complex and involves several steps:
- Self-play and reinforcement learning: DeepMind’s algorithms played millions of games against themselves to learn optimal strategies. Similarly, an Alpha-Persuade LLM could use a form of reinforcement learning, engaging in simulated conversations and debates with itself or other AI models, testing out different persuasive strategies and refining its approach based on the results (a toy sketch of this loop follows the list).
- Evaluation and improvement: AlphaGo and AlphaZero constantly evaluated their performance and adjusted their strategies based on whether they won or lost games. An Alpha-Persuade LLM would need a way to evaluate the effectiveness of its persuasive strategies. This could be a significant challenge, as the success of persuasion is subjective, context-dependent, and varies with the individual being persuaded. The model might need to be trained by humans who ‘reward’ it for convincing them of a new point of view, but this would be complicated by the vast variation among the humans doing the training; chess, by contrast, has clearly defined win conditions.
- Ethical and societal considerations: The creation of an LLM that is optimized for persuasion raises a host of ethical and societal issues. The potential for misuse is considerable, from manipulation and deception to invasion of privacy and potential violation of consent. Rigorous safeguards, regulations, and ethical guidelines would be essential to prevent misuse and harm. This is where Altman and Harris seem to be in alignment.
- Empathy and human understanding: Finally, effective persuasion often requires understanding the emotional state and motivations of the other person. This is a complex, subtle skill that humans are still learning to master. For an LLM to truly excel at persuasion, it would likely need to have some understanding of these human elements, which is currently beyond the capabilities of AI.
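To make the self-play idea concrete, here is a deliberately toy sketch in Python. It is not AlphaZero and not a real LLM training loop: the strategy names, the epsilon-greedy policies, and the stand-in judge reward function are all hypothetical, meant only to illustrate the loop described above, including why the reward signal is the hard part.

```python
import random

STRATEGIES = ["appeal_to_emotion", "cite_evidence", "mirror_values", "social_proof"]

class ToyPersuader:
    """Bandit-style stand-in for a policy over persuasive strategies."""
    def __init__(self):
        self.values = {s: 0.0 for s in STRATEGIES}  # estimated payoff per strategy
        self.counts = {s: 0 for s in STRATEGIES}

    def pick(self, epsilon: float = 0.1) -> str:
        # Epsilon-greedy: mostly exploit the best-known strategy, sometimes explore.
        if random.random() < epsilon:
            return random.choice(STRATEGIES)
        return max(self.values, key=self.values.get)

    def update(self, strategy: str, reward: float) -> None:
        # Incremental averaging: the simplest possible reinforcement update.
        self.counts[strategy] += 1
        self.values[strategy] += (reward - self.values[strategy]) / self.counts[strategy]

def judge(a: str, b: str) -> float:
    """Stand-in reward model. In reality this evaluation step is the hard,
    subjective part the second bullet describes; here it is a fixed, noisy bias."""
    payoff = {"cite_evidence": 0.2, "mirror_values": 0.1}  # hypothetical payoffs
    score = payoff.get(a, 0.0) - payoff.get(b, 0.0) + random.gauss(0.0, 0.3)
    return 1.0 if score > 0 else 0.0

# Self-play: two copies of the policy debate and learn from each outcome.
p1, p2 = ToyPersuader(), ToyPersuader()
for _ in range(10_000):  # AlphaZero played millions of such games
    s1, s2 = p1.pick(), p2.pick()
    r = judge(s1, s2)
    p1.update(s1, r)
    p2.update(s2, 1.0 - r)

print(max(p1.values, key=p1.values.get))  # converges toward the judged-best strategy
```

The shape of the loop is the point: two policies, a scalar reward, and incremental updates. In the scenario Harris worries about, the judge would be human feedback at scale, and nearly all of the difficulty, and all of the ethical risk, lives in that one function.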
Clearly, as AI technology continues to advance, it’s crucial to have conversations and to put in place regulatory safeguards that ensure these tools are used responsibly and ethically.
But there are other reasons Altman may desire early regulation.
Altman likely foresees his product will become commoditized and may want regulatory barriers to protect his first mover advantage. If he gets to shape the regulations via his deep early involvement, he can capture the relevant agencies and secure a monopoly, or at least a substantial stake in the market. His aggressive stance on regulation represents the crux of a paradox we face: fostering innovation while preventing monopolies. On a level field of competition, the key beneficiary of AI will be the user, and therefore the economy and economic productivity overall.
On the other hand, Altman has consistently emphasized that OpenAI’s primary fiduciary duty is to humanity. He has stated that OpenAI commits to assisting value-aligned, safety-conscious projects if they come close to building AGI (Artificial General Intelligence) before OpenAI does. This suggests an openness to competition and a priority on safety and ethical use of AGI over monopolizing the technology.
Regulation, when thoughtfully applied, can protect society from the potential harms of a technology while still allowing for its benefits. Balancing the fostering of innovation against the prevention of monopolies or harmful consequences is a complex task, requiring nuanced understanding and the ability to adapt as the technology evolves.
Human vs. AI: Comparing Failure Rates
What about all the LLM “hallucinations”? When we consider AI failures, it’s important to maintain perspective. Instead of viewing GPT’s (or any AI’s) failure rate in isolation, it’s more constructive to compare it to the failure rate of humans. After all, humans are not infallible, and AI has often proven more accurate and efficient in many areas. The main difference is that humans will always be fallible, with some rate of errors and hallucinations of our own, while LLMs will improve relentlessly until their rates of mistakes become vanishingly low. We would be wrong to assume that the current hallucination issue means LLMs will remain nothing more than marginal productivity aids and amusing diversions. This isn’t just speculation; it’s an emerging reality, underscored by the impending launch of another LLM, the BloombergGPT model, which promises to revolutionize economics and portfolio construction in the coming years.
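To make the human-versus-model comparison concrete, here is a minimal sketch of one way to test whether a model’s error rate on a benchmark differs meaningfully from a human baseline, using a standard two-proportion z-test. The error counts are hypothetical, purely for illustration.

```python
import math

def two_proportion_z(errors_a: int, n_a: int, errors_b: int, n_b: int) -> float:
    """z-statistic for the difference between two error rates."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)  # pooled error rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: humans miss 120 of 1,000 questions on a benchmark,
# while the model misses 80 of 1,000 on the same questions.
z = two_proportion_z(120, 1000, 80, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates a real difference at ~95% confidence
```

Framed this way, the question is never “does the model hallucinate?” but “does it err more or less often than the humans it would augment or replace?”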
AI, LLMs, and Augmented Reality
Looking forward, it’s clear that AI and LLMs will continue to converge with other technologies like augmented reality, creating a potent mix that could redefine productivity and GDP. With respect to hardware, Apple’s new headset is interesting, and not unrelated to AI and LLMs: it now seems clear that near-future digital life will have us plugging into the internet, combined with LLM-based AI, through some form of augmented reality that dramatically increases productivity and GDP, and the interface to that experience may very well be a descendant of Apple’s Vision product.
Conclusion
The journey we are embarking upon is transformative. AI and LLMs are fundamentally altering our means of production, challenging our conventional understanding of labor, capital, and value itself. In this new economic order, it’s our individual presence, resourcefulness, and humanity that become our most precious assets. A GPT will never truly replicate the irreplaceable blend of our social skills, knowledge, competence, and charisma.
This transformation in our means of production is not just change; it’s evolution. AI and LLMs are not merely tools; they’re more akin to companions, co-workers, even extensions of ourselves, aiding us in our relentless quest for progress. Our society has reached a fork in the road where we must choose between extracting value in a destructive manner and creating value in a regenerative way; what we consider valuable is being redefined in the wake of our new digital partners.
As we move forward, we see the convergence of AI, LLMs, and other species of our technium, such as augmented reality. It is as if we are witnessing the formation of a digital superorganism, enhancing our productivity, and redefining our understanding of wealth.
Green Alpha Advisors, LLC is an investment advisor registered with the U.S. SEC. Registration as an investment advisor does not imply any certain level of skill or training. Green Alpha is a registered trademark of Green Alpha Advisors, LLC. Green Alpha Investments is a registered trade name of Green Alpha Advisors, LLC. Green Alpha also owns the trademarks to “Next Economy,” “Investing in the Next Economy,” and “Investing for the Next Economy.” Please see additional important disclosures here: https://greenalphaadvisors.com/about-us/legal-disclaimers/
At the time this article was written and published, some Green Alpha client portfolios held long positions in Apple (ticker AAPL). This holding does not represent all of the securities purchased, sold or recommended for advisory clients. You may request a list of all recommendations made by Green Alpha in the past year by emailing a request to any of us. It should not be assumed that the recommendations made in the past or future were or will be profitable or will equal the performance of the securities cited as examples in this article. Not all Green Alpha separate accounts or our sub-advised mutual fund held the stocks mentioned. To inquire whether a specific Green Alpha portfolio(s) holds stock in any particular company, please call or email us.