## [Plentiful, high-paying jobs in the age of AI - Noahpinion](https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the)
With GPT-5 [around](https://www.forbes.com/sites/roberthart/2024/05/28/openai-says-it-has-started-training-gpt-4-successor---heres-what-we-know/?sh=5bc9e0eb60fc) the [corner](https://www.metaculus.com/questions/22047/when-will-gpt-5-be-publicly-available/), will Matt Yglesias be out of a job?
![[Pasted image 20240602091634.png]]
This was a joke. A joke that was probably composed, posted, and forgotten in no more than 60 seconds. But I, being autistic, am going to take it extremely seriously. I think it illustrates a common misconception about what comparative advantage says about human labor in a world of ubiquitous AGI. In fact, I tried to warn Matt that I thought he was making a mistake, but he did not immediately respond to my request for comment.
![[Pasted image 20240602092323.png]]
So what does *comparative advantage* mean? Luckily, an actual economist has already covered comparative advantage and AI. Noah writes:
> When most people hear the term “comparative advantage” for the first time, they immediately think of the wrong thing. They think the term means something along the lines of “who can do a thing better”. After all, if an AI is better than you at storytelling, or reading an MRI, it’s better _compared_ to you, right? Except that’s not actually what comparative advantage means. The term for “who can do a thing better” is “_competitive_ advantage”, or “absolute advantage”.
So I think what Matt actually means here is *absolute* advantage. Matt worries that GPT-5 will be capable of writing wonky political substacks better and more cheaply than he can. Why pay Matt $10/month for his takes when I could generate 100 Matt-like takes per hour with GPT-5? Matt will be replaced and have to live a subsistence life working the bar mitzvah circuit juggling for tips, something in which he will still have an absolute advantage (GPT-5 presumably won't have limbs to juggle with). So if this is absolute advantage, what is comparative advantage?
> **Comparative advantage** actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little _less_ bad at drawing portraits than I am at anything else. I don’t have any _competitive_ advantages at all, but drawing portraits is my _comparative_ advantage.
So when GPT-5 is released, I think it's pretty likely that Matt's *comparative* advantage will still be churning out wonky hot takes. Why? Because even after GPT-5 is released, writing wonky hot takes will still be what Matt does best relative to *his other skills*. Further, if GPT-5 actually does achieve this level of intelligence, its comparative advantage will be doing something far more productive. No one will spend precious GPU cycles on writing substacks when those cycles could instead go toward curing cancer or building a Dyson sphere around the Sun. Noah explains:
> Here’s another little toy example. Suppose using 1 gigaflop of compute for AI could produce $1000 worth of value by having AI be a doctor for a one-hour appointment. Compare that to a human, who can produce only $200 of value by doing a one-hour appointment. Obviously if you only compared these two numbers, you’d hire the AI instead of the human. But now suppose that same gigaflop of compute could produce $2000 of value by having the AI be an electrical engineer instead. That $2000 is the _opportunity cost_ of having the AI act as a doctor. So the net value of using the AI as a doctor for that one-hour appointment is actually _negative_. Meanwhile, the human doctor’s opportunity cost is much lower: anything else she did with her hour of time would be much less valuable.
>
> In this example, it makes sense to have the human doctor do the appointment, even though the AI is five times better at it. The reason is that the AI (or, more accurately, the gigaflop of compute used to power the AI) has _something better to do instead_. The AI has a competitive advantage over humans in both electrical engineering and doctoring. But it only has a _comparative_ advantage in electrical engineering, while the human has a _comparative_ advantage in doctoring.
So in this example, the AI seeing patients is like GPT-5 writing substacks. Sure, it could do the job better and more cheaply than Matt, but its compute would be better spent on other things that produce far more value.
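Noah's toy numbers can be checked with a quick sketch. The AI and doctor figures are from the quoted passage; the human-engineer value is my own placeholder, since the article only says the human's alternatives are "much less valuable":

```python
from itertools import permutations

# Productivity table (dollars per hour). The human-engineer figure is a
# hypothetical placeholder; the rest are from Noah's toy example.
value = {
    ("AI", "doctor"): 1000,
    ("AI", "engineer"): 2000,
    ("human", "doctor"): 200,
    ("human", "engineer"): 50,
}

workers = ["AI", "human"]
tasks = ["doctor", "engineer"]

# Try both ways of assigning one worker per task and keep the best total.
best = max(
    (dict(zip(workers, perm)) for perm in permutations(tasks)),
    key=lambda assignment: sum(value[w, t] for w, t in assignment.items()),
)
total = sum(value[w, t] for w, t in best.items())
# AI -> engineer, human -> doctor: $2000 + $200 = $2200,
# beating AI -> doctor, human -> engineer: $1000 + $50 = $1050.
```

Even though the AI holds the absolute advantage in both jobs, total output is highest when it sticks to its comparative advantage and the human keeps doctoring.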
Noah doesn't say this, but I think this logic also raises an interesting point about ChatGPT. So long as it only costs $20/month to send it my dumb questions, that probably means it hasn't yet found an extremely valuable use. Once it does find an extremely valuable use, OpenAI will spend all of its compute on *that*, not on me trying to get it to say funny things about the Golden Gate Bridge.
There's a lot more at the link. While Noah thinks that comparative advantage means AI could turn out quite lucrative for humans, there are other economic consequences he worries about.
## [The “Calories In, Calories Out” Confusion: A Comprehensive Guide to Understanding Energy Balance](https://sigmanutrition.com/cico/)
Sigma Nutrition wrote a position statement on CICO. I generally trust these guys to sort fact from fiction in what's an incredibly hostile epistemic environment.
Some interesting excerpts below.
First they think it's our best tool for maximizing fat loss:
> **Altering energy balance through modifying calories in and/or calories out is still the primary tool at our disposal when it comes to maximizing the rate of fat loss.**
>
> As a heuristic, in most practical circumstances it holds true that a calorie deficit predicts weight/fat loss. Insofar as, **a sustained caloric deficit over time** **_will_** **lead to a decrease in fat mass.**
Energy intake and expenditure are not independent:
> An [overfeeding study by Levine](https://pubmed.ncbi.nlm.nih.gov/9880251/) illustrates nicely how “calories in” influences “calories out”. Healthy participants were overfed by 1,000 kcal/d for 8 weeks. However the resulting gain in fat mass wasn’t what would be “predicted” by a 1,000 kcal surplus. Instead some of that surplus was offset by an increase in energy expenditure, with two-thirds of this coming from increased NEAT. Whilst there was wide variation in how much their NEAT increased (and hence how much fat each person gained), in one of the participants NEAT alone was seen to be able to increase by almost 700 kcal per day (average was 328 kcal/d). So for this person the total increase in energy expenditure almost completely offset the increase in calorie intake, which was reflected in the very small amount of fat this person gained (0.36 kg), despite the 1,000 extra calories per day over the eight week period. (Strangely, at the opposite end, another individual actually saw their NEAT decrease by ~ 100 kcal/d, and had a fat mass gain of over 4 kg across the study).
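A rough back-of-the-envelope check on the high-NEAT participant, using the common ~7,700 kcal-per-kg-of-fat rule of thumb (my assumption; the article doesn't cite this figure):

```python
# Back-of-the-envelope energy balance for the high-NEAT participant.
KCAL_PER_KG_FAT = 7700   # common rule-of-thumb, not a figure from the article
days = 8 * 7             # 8-week study
overfeed = 1000          # kcal/day above maintenance
fat_gain_kg = 0.36       # observed gain for this participant

stored = fat_gain_kg * KCAL_PER_KG_FAT      # ~2,772 kcal actually banked as fat
net_surplus = stored / days                 # ~50 kcal/day net surplus
expenditure_rise = overfeed - net_surplus   # ~950 kcal/day burned off
# NEAT alone rose ~700 kcal/day for this person; the remaining ~250 kcal/day
# presumably came from other expenditure components (e.g. thermic effect of food).
```

In other words, to end up with only 0.36 kg of fat from a 1,000 kcal/day surplus, nearly all of the surplus had to be spent, which is consistent with the study's finding that "calories in" pushed up "calories out".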
Being sedentary makes it harder for your body to match calories in and calories out:
> Work done by the team at the University of Leeds (which includes [Mark Hopkins](https://sigmanutrition.com/episode299/) and John Blundell) has shown how [physical activity and appetite control are not independent of one another](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5097075/), but rather are interconnected. At very low levels of physical activity there seems to be an [inability to regulate appetite and energy intake](https://pubmed.ncbi.nlm.nih.gov/27503946/) appropriately. Whereas at higher levels of physical activity there seems to be an ability for us to appropriately match our calorie intake to our energy demands. They have referred to these as “unregulated” and “regulated” zones, respectively.
Lots more at the link. In general they think CICO is widely misunderstood, both by its critics and adherents.