The Full Takeoff Model (FTM) is an endogenous economic growth model developed by Tom Davidson. It is meant to illustrate the future trajectory of Artificial Intelligence, the economy and associated factors. In particular, it helps us answer how long it will take to go from partial automation of the economy to total automation.
This section gives a succinct mathematical description of the FTM. It will be helpful to mathematically oriented readers who want to see all the dynamics of the model written down in a single place.
Readers who want to understand the model's conclusions would be better served by reading the short summary of Tom's report. Readers who want to understand why the model is built the way it is would be better served by reading the report itself. Justifications for the best-guess parameter values are described below. If you want to play with the model, you can do so in the playground.
Overview of the model
The core of the FTM is three CES production functions that govern the production of goods and services (G&S), hardware research, and software research.
Each of these functions takes as input an amount of capital (K), labour (L), effective compute (C), a level of automation (A), and total factor productivity (TFP). Their output is used to estimate the amount of capital, compute and automation available in the next timestep, while labour and TFP vary exogenously.
The rest of the model determines how the output of the production functions translates into improvements to the efficiency of hardware and software, how the different input factors are split across the production functions, and the current level of automation.
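To make this structure concrete, here is a minimal sketch in Python of how one such production function could be evaluated. The function and parameter names (ces, production, rho_top, psi_tasks, automation_index, and so on) are illustrative rather than taken from the actual implementation, which also handles details such as converting compute into labour-equivalents for each task.

```python
import numpy as np

def ces(inputs, weights, rho):
    """Generic CES aggregator: (sum_i w_i * x_i^rho)^(1/rho)."""
    inputs = np.asarray(inputs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * inputs**rho) ** (1.0 / rho))

def production(capital, labour_per_task, compute_per_task, automation_index,
               task_weights, capital_weight, cognitive_weight,
               rho_top, psi_tasks, tfp):
    """Sketch of one FTM production function as a nested CES.

    Tasks below `automation_index` can be performed by compute as well as
    labour; the remaining tasks are labour-only.
    """
    task_inputs = [
        lab + comp if i < automation_index else lab
        for i, (lab, comp) in enumerate(zip(labour_per_task, compute_per_task))
    ]
    # Inner CES aggregates the task inputs into cognitive output ...
    cognitive_output = ces(task_inputs, task_weights, psi_tasks)
    # ... and the outer CES combines capital with cognitive output.
    return tfp * ces([capital, cognitive_output],
                     [capital_weight, cognitive_weight], rho_top)
```

The same kind of aggregator would be reused for the hardware and software R&D production functions, each with its own weights and inputs.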
We aim to use this framework to estimate 1) when we'll develop AI that could fully automate cognitive labour and 2) how much earlier we'll have AI that could automate 20% of cognitive labour (with tasks weighted by their share of output in 2020).
Through a Monte Carlo analysis, we show that this model and our choices of parameters lead to a median date of AGI of 2045, and a median takeoff duration of 3.6 years (conditional on AGI happening before 2100). You can read more about the model's results in the short summary, and browse the results of the different analyses in the reports section.
Appendices
In the G&S production function, we need to set the task weights.
These are chosen to make the initial shares of capital, labour and compute roughly match our empirical estimates.
Remember that the G&S production function is a nested CES: capital is combined with cognitive output, and cognitive output is itself a CES aggregate over tasks (an explicit sketch of this form is given below).
The share of capital and the share of cognitive output are each factor's marginal product multiplied by its quantity, divided by total output. Taking the ratio of the capital share to the cognitive share, the common CES term cancels, leaving an expression in the task weights, the capital stock, the cognitive output and the substitution parameter. Since we know the two shares, capital and cognitive output at the beginning of the simulation, we can solve for the relative weight on capital.
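As a concrete sketch (our notation, not necessarily the report's), write the G&S production function and cognitive output as

$$ Y = \mathrm{TFP}\,\bigl(\beta_K K^{\rho} + \beta_C\,\mathrm{Cog}^{\rho}\bigr)^{1/\rho}, \qquad \mathrm{Cog} = \Bigl(\sum_i \beta_i T_i^{\psi}\Bigr)^{1/\psi}, $$

where $T_i$ is the effective input to task $i$. Then the factor shares are

$$ S_K = \frac{\partial Y}{\partial K}\,\frac{K}{Y} = \frac{\beta_K K^{\rho}}{\beta_K K^{\rho} + \beta_C\,\mathrm{Cog}^{\rho}}, \qquad S_{\mathrm{Cog}} = \frac{\beta_C\,\mathrm{Cog}^{\rho}}{\beta_K K^{\rho} + \beta_C\,\mathrm{Cog}^{\rho}}, $$

and their ratio is

$$ \frac{S_K}{S_{\mathrm{Cog}}} = \frac{\beta_K K^{\rho}}{\beta_C\,\mathrm{Cog}^{\rho}} \;\Longrightarrow\; \frac{\beta_K}{\beta_C} = \frac{S_K}{S_{\mathrm{Cog}}}\,\Bigl(\frac{\mathrm{Cog}}{K}\Bigr)^{\rho}, $$

which can be evaluated with the initial values of the shares, $K$ and $\mathrm{Cog}$.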
Now we will estimate the individual task weights using the ratio of the compute share and the labour share of cognitive output.
We will make two simplifying assumptions:
There is no automation at the beginning of the simulation, i.e. every labour task is initially performed by labour alone.
The task weights for the labour tasks are all equal.
Given these assumptions, the ratio of the compute share to the labour share of cognitive output reduces to a simple expression in the task weights, compute and labour (sketched below).
Knowing this ratio at the start of the simulation, we can solve for each task weight.
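Continuing with the same illustrative notation, and assuming for the sketch that at the start of the simulation compute enters cognitive output only through the tasks it already performs while every labour task $i$ receives input $L_i$, the within-cognitive-output shares give

$$ \frac{S_C}{S_L} \;=\; \frac{\sum_{i\,\in\,\text{compute tasks}} \beta_i\,T_i^{\psi}}{\sum_{i\,\in\,\text{labour tasks}} \beta_i\,L_i^{\psi}} \;=\; \frac{\sum_{i\,\in\,\text{compute tasks}} \beta_i\,T_i^{\psi}}{\beta_L \sum_{i\,\in\,\text{labour tasks}} L_i^{\psi}}, $$

where the second equality uses the equal-weights assumption $\beta_i = \beta_L$ for labour tasks. Combined with a normalisation of the weights (e.g. that they sum to one, an assumption of this sketch), this determines each task weight.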
The weights for the software and hardware R&D production functions are computed identically.
To estimate the returns to hardware, we assume that both hardware efficiency and the inputs to hardware R&D have grown exponentially in the recent past. We ignore the ceiling mechanism entirely since, in the past, its effects have not been noticeable. With the ceiling ignored, hardware efficiency is determined by the cumulative adjusted inputs to hardware R&D and the returns parameter. Substituting the exponential forms for efficiency and inputs, the exponential growth rates on both sides of the equation must match, which pins down the returns to hardware in terms of the two growth rates.
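A sketch of the argument, with illustrative notation: write $H(t)$ for hardware efficiency, $Q(t)$ for the cumulative adjusted inputs to hardware R&D, and $r$ for the returns to hardware, and assume (as the text above implies once the ceiling is ignored) that efficiency is a power law in cumulative adjusted inputs:

$$ H(t) \propto Q(t)^{\,r}. $$

Substituting the exponential forms $H(t) = H_0\,e^{g_H t}$ and $Q(t) = Q_0\,e^{g_Q t}$ gives

$$ H_0\,e^{g_H t} \propto Q_0^{\,r}\,e^{\,r\,g_Q t}, $$

and matching the exponential growth rates on both sides yields $g_H = r\,g_Q$, i.e. $r = g_H / g_Q$.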
To estimate the substitutability of cognitive output and capital in the production function we use an intuitive estimate of how much we believe production could be increased with ~infinite cognitive output.
Remember that the production function we are dealing with is a CES aggregate of capital and cognitive output. We consider the limit of output as cognitive output goes to infinity, and then the ratio between this limit and the current output of the CES. This ratio can be expressed in terms of the substitution parameter and the current ratio between the cognitive share of the economy and the capital share of the economy, which we can estimate from historical data. The ratio itself is how much we believe production could increase with unlimited cognitive labour. We estimate this quantity intuitively, and derive the substitution rate from it (a sketch of the calculation is given below).
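Sketching the calculation with the same illustrative notation as above, and assuming $\rho < 0$ so that capital and cognitive output are complements and the limit is finite:

$$ \lim_{\mathrm{Cog}\to\infty} Y \;=\; \mathrm{TFP}\,\beta_K^{1/\rho}\,K, $$

so the ratio between this limit and current output is

$$ f \;=\; \frac{\lim_{\mathrm{Cog}\to\infty} Y}{Y} \;=\; \Bigl(\frac{\beta_K K^{\rho}}{\beta_K K^{\rho} + \beta_C\,\mathrm{Cog}^{\rho}}\Bigr)^{1/\rho} \;=\; (1+s)^{-1/\rho}, $$

where $s = \beta_C\,\mathrm{Cog}^{\rho} / (\beta_K K^{\rho})$ is the current ratio of the cognitive share to the capital share. Given an intuitive estimate of $f$, we can invert this to get

$$ \rho \;=\; -\,\frac{\ln(1+s)}{\ln f}. $$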