Hi all,
Brief note from me (too lengthy for LinkedIn). I am digging deeper into Artificial Intelligence, not only because the World Economic Forum highlights 'misinformation and disinformation' in its Global Risks Report, but mainly because compelling economic analyses of the challenges around AI, and generative AI in particular, are appearing more and more often. Concentration risk stands out as a major concern: if we do not address it promptly, a few firms may gain excessive power and we may lose control over AI.
We have a market problem with AI
I completely agree with the fear expressed in an article called the AI-Octopus:
💬 "If artificial intelligence comes to be the central and defining technology in every sector of the economy, we can expect a degree of economic concentration and corporate political power such as none seen before."
The future of AI holds the potential for economic concentration and corporate political power as Big Tech firms dominate the AI landscape. This is different from the start of the tech revolution, when new parties were gaining ground. The AI industry is witnessing a growing oligopoly, with companies like Nvidia, Amazon, Google, and Microsoft holding significant market power along the supply chain.
A recent paper by Narechania & Sitaraman explains why generative AI needs more regulation and how that regulation is possible. I quite liked their opening, where they explain the generative AI tech stack (see below).
A long quote:
"There are four basic layers: microprocessing hardware, cloud computing, algorithmic models, and applications. The microprocessing hardware layer includes the production of microchips and processors, the horsepower behind AI's computations. This layer is extremely concentrated, with a few firms dominating important aspects of production. The cloud computing layer consists of the computational infrastructure (the computers, servers, and network connectivity) that is required to host the data, models, and applications that comprise AI's algorithmic outputs. This layer, too, is highly concentrated, with three firms (Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure) dominating the marketplace.
The model layer is more complicated than the first two, as it includes three sublayers (and even more within those): data, models, and model access. One primary input for an AI model is data, and so the model layer's first sublayer is data. Here, companies collect and clean data and store it in so-called "data lakes" (relatively unstructured data sources) or "data warehouses" (featuring relatively more structure). Foundation models (which are distinct from models in general) comprise the second sublayer. Models are what many think of as "AI." These models are the output of an algorithmic approach to analyzing and "learning" from the inputs that begin in the data sublayer. This "training" process is expensive, and so models can be intensely costly to develop. Lastly, the third sublayer consists of modes of accessing these models: model hubs and APIs (short for "application programming interfaces").
Fourth, the application layer. Applications are the part of the sector that consumers interact with most directly: When we ask ChatGPT to tell us a joke about AI, we use an application (ChatGPT). The application draws on all prior layers in the stack: it interacts with a model (GPT-4); that model is stored in a cloud computing platform (Microsoft's Azure); and that platform requires microprocessing hardware (designed by Nvidia and fabricated by TSMC)."
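To make the layering concrete, here is a small Python sketch of the stack as I read it. It is purely illustrative: the layer names follow the paper, but the class design and example firms are my own, not the authors'.

```python
# Illustrative model of the four-layer generative AI stack described by
# Narechania & Sitaraman. Layer names follow the paper; everything else
# (class design, example firms) is my own sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str
    dominant_firms: list[str]
    depends_on: Optional["Layer"] = None  # the layer directly beneath

hardware = Layer("microprocessing hardware", ["Nvidia", "TSMC"])
cloud = Layer("cloud computing",
              ["AWS", "Google Cloud Platform", "Microsoft Azure"],
              depends_on=hardware)
models = Layer("algorithmic models", ["OpenAI (GPT-4)"], depends_on=cloud)
apps = Layer("applications", ["ChatGPT"], depends_on=models)

# An application draws on every layer beneath it, so concentration in any
# lower layer propagates upward through the whole stack:
layer: Optional[Layer] = apps
while layer is not None:
    print(f"{layer.name}: {', '.join(layer.dominant_firms)}")
    layer = layer.depends_on
```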
Why is this all important? Because it clarifies that much of the generative AI stack is not a matter of innovation, nor is it unavoidable; it is simply industrial organisation. So what do we think about market concentration? For most economists the answer is straightforward: too much market concentration is always bad, and in the case of generative AI even more so.
Why Bad?
Collusion and coordination among tech executives, reminiscent of the Gilded Age "money trust," raise concerns. Close connections through institutions, research projects, and social relationships create opportunities for collusion or coordination, potentially leading to illegal practices. Tech giants, like the banks of the Gilded Age, wield immense influence across the economy, controlling data and exerting more influence than traditional banks ever did. The dominance of Big Tech in the AI sector may result in economic concentration and corporate political power unparalleled in history.
And the risks are higher than ever before.
This gloomy quote from the Global Risks Report:
"Over the longer-term, technological advances, including in generative AI, will enable a range of non-state and state actors to access a superhuman breadth of knowledge to conceptualize and develop new tools of disruption and conflict, from malware to biological weapons. In this environment, the lines between the state, organized crime, private militia and terrorist groups would blur further. A broad set of non-state actors will capitalize on weakened systems, cementing the cycle between conflict, fragility, corruption and crime."
Solutions
Can this be prevented? The idea, at least, is that current regulation, which sets requirements on outcomes (ethical and otherwise), is not enough. Regulating market structure and increasing competition might work better (what they call ex-ante regulation). Some ideas from the paper:
Structural Separations: Separate the services provided by one layer from the activities that rely on those services. Most notably, structurally separating the cloud layer from higher layers in the stack could address a wide range of the market dominance problems identified above. Fittingly, this idea dates back to the Gilded Age, when it was used to curb the monopoly power of the railroads.
Nondiscrimination, Open Access, and Rate Regulation: Nondiscrimination rules allow a firm to operate two or more vertically linked business lines, but require the firm to treat downstream businesses neutrally.
Interoperability Rules: Interoperability rules lower barriers to entry and thus stimulate competition by "allowing new competitors to share in existing investments" and "imposing sharing requirements on market participants". One practical type of interoperability rule would be to mandate data sharing through federated learning (a sketch of that idea follows this list). Likewise, policymakers might consider rules that improve interoperability among cloud platforms, easing transitions from one provider's system to another.
Entry Restrictions and Licensing Requirements: First, entry restrictions might be deployed to ensure that certain foundation models and their associated applications are effective and do not pose substantial risks to health and safety, or of bias. Similarly, licensing rules could oblige cloud providers to "know their customers," as in banking law. Likewise, entry restrictions might help to address concerns about "costly and wasteful investment" and the tendencies towards consolidation.
Public Options or Cooperative Governance: Public options are publicly provided goods or services that coexist with private market options, offered at a price that is often set by regulation. Cooperative governance is a model in which the users of a company are its owners, the users control the company, and the purpose of the company is to benefit those users.
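As promised above, here is a minimal sketch of the federated learning idea behind the interoperability proposal: each party trains on its own private data and shares only model parameters, never the data itself. This is a toy illustration with made-up data and a simple linear model, not the paper's proposal in code.

```python
# Toy federated averaging (FedAvg-style) sketch: clients share model
# weights, not raw data. All data and parameters here are made up.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Three hypothetical parties, each holding data they never hand over.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]

w_global = np.zeros(3)
for _ in range(20):
    # Each client updates the shared model locally on its own data...
    local_weights = [local_step(w_global, X, y) for X, y in clients]
    # ...and a coordinator averages the parameters (the data stays local).
    w_global = np.mean(np.stack(local_weights), axis=0)

print("jointly trained weights:", w_global)
```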
It is clear that the AI Octopus will not go away by itself. But relatively straightforward economic analysis can help limit the Octopus's power.
The funny thing is that one of the authors works at Vanderbilt University, named after Cornelius Vanderbilt, one of the men who made their fortune in the Gilded Age on… market power.