Human vs. Machine: Intelligence per Watt
Contemplating the possibility that machines won't win everywhere all at once
There’s no law of physics that forbids in silico intelligence from performing all of the functions that biological intelligence currently performs—and for that reason alone, there’s no reason to believe machines won’t eventually do so.
But one must distinguish what is technically possible from what is economically viable.
This requires some understanding of how the microeconomics of watts per intelligent operation intersect with the fundamental physical limits on computation. The Landauer limit gives us insight into the latter, and it turns out that human brains operate very close to this limit: in other words, it would be very hard to squeeze more energy efficiency out of the system. Evolution does an excellent job at these energy-optimization problems over long time spans.
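As a rough illustration, the Landauer bound is simple enough to compute directly. This is a back-of-the-envelope sketch, not a model of the brain: it only shows the thermodynamic ceiling on irreversible bit operations that a ~12-watt power budget permits at body temperature.

```python
import math

# Landauer limit: minimum energy to erase one bit of information,
# E = k_B * T * ln(2). Standard physical constants below.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T_BODY = 310.0       # approximate human body temperature, K

landauer_j_per_bit = K_B * T_BODY * math.log(2)
print(f"Landauer limit at body temperature: {landauer_j_per_bit:.3e} J/bit")
# ~2.97e-21 J per bit erased

# A 12 W power budget therefore caps irreversible bit operations at:
brain_watts = 12.0
max_bit_ops_per_sec = brain_watts / landauer_j_per_bit
print(f"Thermodynamic ceiling: {max_bit_ops_per_sec:.2e} bit-erasures/s")
```

The point of the exercise is the scale: any computer (biological or silicon) dissipating 12 W at 310 K is bounded by the same ceiling, which is why "operating near the Landauer limit" is such a strong efficiency claim.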
Our very efficient 12-watt brain does amazing things, but it is very slow compared to silicon because speed requires a lot of energy. We are very effective problem solvers, navigating massive amounts of complexity and performing complicated mental simulations—but we don’t do it extremely fast, and we don’t really multitask. Having those features would require our brains to consume far more energy, which would mean eating a lot more food and generating a lot more heat.
Artificial neural networks (ANNs) may be able to do many of the computations that our brains do, but getting them to outperform humans on cost may take much longer. Qualcomm noted this a few years back.
I’m thinking a lot about what this means in terms of identifying the most interesting and disruptive opportunities over the near and long term. A few of the variables in this multidimensional system include:
Decreasing Cost of Energy
In a future where all labor is effectively a function of energy, a decreased cost of energy will unlock additional economically viable use cases. We aren’t even close to being a Type I civilization on the Kardashev scale, but if the day comes when energy is cheap and abundant, it will change everything. Cheap solar? Fusion?
Improving Hardware Efficiency
Forecasting hardware performance matters for seeing what can be accomplished in the near term. Efficiency has been improving a lot and will continue to improve—unlocking additional viable use cases—but there are limits to what hot, fast processors can do. They probably won’t match the slow, cool, efficient processing of biological brains; this is a consequence of the second law of thermodynamics as it relates to information theory, which is the core of Landauer’s principle.
Alternatives to ANNs
Classical computers already perform an admirable list of tasks far faster than a human—and far more efficiently than ANNs. For example, arithmetic is far cheaper on an old-fashioned calculator than on a neural network.
There are tons of improvements possible through traditional machine learning, statistical learning, XGBoost, etc. These may address more narrowly defined tasks (for example, optimizing a metric in an application), and they may be hyper-efficient compared to either ANNs or humans. Perhaps generative AI will also become good at creating highly optimized ML algorithms, and we will soon see ANNs that consult specific tools when they identify specific kinds of problems.
Disruptive Opportunities
Your Margin is AI’s Opportunity
It seems like the most disruptive opportunities in the near term will be those where there’s a wide gap between current costs and an AI alternative. For example, imagine what happens if an LLM could consult on a legal contract for 1% of a lawyer’s cost for the same task. For margins like that, the watts won’t be hard to justify.
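The margin math is worth making explicit. The numbers below are purely illustrative (hypothetical billing rate and electricity price, not real pricing), but they show why the watts are easy to justify at a 1% cost ratio.

```python
# Toy margin math with illustrative numbers: a contract review that bills
# at $400/hr for 3 hours, vs. an LLM alternative at 1% of that cost.
lawyer_cost = 400.0 * 3          # $1200 for the task (hypothetical rate)
llm_cost = lawyer_cost * 0.01    # $12, per the 1% scenario above
margin_gap = lawyer_cost - llm_cost
print(f"Human: ${lawyer_cost:.0f}, LLM: ${llm_cost:.0f}, gap: ${margin_gap:.0f}")

# Energy barely dents that gap: at an assumed $0.15/kWh, the $12 LLM
# price could cover a large amount of raw electricity.
kwh_affordable = llm_cost / 0.15
print(f"${llm_cost:.0f} buys ~{kwh_affordable:.0f} kWh of electricity")
```

Even if the real numbers differ by an order of magnitude, the structure of the argument holds: when the price gap is 100:1, the energy bill is a rounding error.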
Cheaper & Better
Another area of disruption is what happens when generative AI gives you something that’s actually cheaper AND better than the human alternative. Look at what’s happening with key art for blog posts and online media. Previously, one had two options: either create something unique (hire an artist, have a photo shoot, etc.) or license a stock photo. Now you can use Midjourney to create art for a blog post that—while not at the level of the best human artists—is still a lot better than stock photos on several axes: uniqueness, convenience, and speed. One can imagine many other use cases like this in lots of other media: games, video, live-streaming, etc. If these advantages are worth the watts for the consumer, usage will scale.
Completely New Experiences
There are also likely to be entirely new experiences that couldn’t exist without generative AI. In these cases, there’s no established value; the only comparisons would be substitutes (e.g., an AI that delivers a new form of entertainment might be compared against the cost of entertainment in general). This will be an area of experimentation.
Shifting Energy Cost to the Far Edge
Another possibility is that there are ways to hide the costs. Right now, a lot of AI runs in the cloud. But Apple shipped over a zettaflop (10^21 flops) of neural compute to end users in 2022—roughly 100X the combined capacity of the top 500 supercomputing clusters that same year (around 10 exaflops; an exaflop is 10^18 flops).
What happens when the cost of running an intelligent action is hidden in the cost of recharging your phone? What happens when you charge that phone with cheap solar? Not every form of AI needs to run in the cloud; a ton of additional use cases will unlock as AI gets pushed to the far edge and into the user’s pocket.
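The scale comparison in this section is easy to sanity-check. The figures below simply restate the post’s own numbers (one zettaflop of shipped edge compute vs. roughly ten exaflops of top-500 supercomputer capacity) to confirm the 100X ratio.

```python
# Sanity-checking the edge-vs-cloud scale claim, using the figures cited.
edge_flops = 1e21     # ~1 zettaflop (10^21) of shipped neural compute
top500_flops = 1e19   # ~10 exaflops (10 * 10^18) across the top 500 clusters

ratio = edge_flops / top500_flops
print(f"Edge / supercomputer ratio: {ratio:.0f}X")  # 100X
```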
The Future of Human Work
That said: maybe there is a class of work that humans will hang on to for quite some time, simply because we can do it for the fewest watts.
There might also be domains where people actually prefer slow/expensive human-crafted versions (e.g., personal services; artisanal products for your home). We might be able to afford more of these options as our disposable incomes free up due to abundant, inexpensive AI-derived products for everything else we buy (the abundance and technology-is-deflationary thesis).
In other cases, we’ll see a reversion to the mean: where AI is capable of producing human-level outputs, the question will remain whether it can do so economically. And that aspect may take much longer in some categories than others.
This is my first subscriber-only post. I’m using my community to share ideas that aren’t quite fully-formed yet—open-sourcing my ideas with those of you already curious about how I’m viewing these problems. I’d love to hear from you. Why not share a comment?

I think it's relevant here—especially regarding the production of human-level output—that you've mentioned on Twitter your team did the hackathon with Beamable, Scenario, and ChatGPT and saw mechanics you hadn't seen before. I've also been experimenting with various tools, digging deeper to figure out at which layer human output remains necessary or more economical for game development, and at which point it can be replaced. It would be super interesting to see a hackathon comparison between teams utilizing these tools vs. not, and how it turns out!
Great post, it really gave me something to think about.
Interesting article, enjoyed reading it ... Do you think that going for unlimited growth of energy consumption is the way forward? Or should we take into account that we live on a finite planet that can only bear a finite human footprint?