
Should You Learn to Code Even If You Don't Plan to be a Software Engineer?

And, since this is 2024: should you learn to code if you do plan to become a software engineer?

Know someone who might like Capital Gains? Use the referral program to gain access to my database of book reviews (1), an invite to the Capital Gains Discord (2), stickers (10), and a mug (25). Scroll to the bottom of the email version of this edition or subscribe to get your referral link!

Economically speaking, the last few decades have been very kind to people who are good at telling computers what to do. Increasing availability of a product's complement tends to make that product worth more, especially if its own supply is inelastic. And both data worth processing and the computing power to process it have become shockingly more abundant. Meanwhile, there's short-term inelasticity—it takes hours of learning Python before you start to save minutes in Excel—and the long-term elasticity is hard to measure because it's just not clear how many people can learn to program.1

Skill specialization is a specific instance of a general class of problems: whenever the supply of something arrives on a lag, and demand can change faster, there will be booms and busts. It's very easy to understate this risk, in the same way that it's easy to look at a couple quarters of Nvidia financials and conclude that it's a secular growth business. But no: both skill specialization and Nvidia are cyclical—they can just have very long cycles. For software, the growth has been so extreme over so many years that this cyclicality is invisible, and the worst experiences in recent memory are things like a hiccup in 2016-17 or so and a tough year for getting a job in 2022.2 But go earlier and you can find actual programmer recessions, like "get a job washing dishes or working for a moving company" recessions, in the early 90s or in the early 2000s.

All of this is to say that, looking at the long-term history of the software engineering profession and of other industries with comparable dynamics, the base case is that programmer salaries will decline at some point in the future, even if only temporarily. Even if there were a completely fixed supply of software engineers, and continuous growth in the value they can create, there would be times when there's more capital flowing in and thus more bidders for any given worker, and times when that condition doesn't hold and the high bid is lower. (For one thing, in that scenario a growing share of the economic benefits of software would accrue to programmers, rather than to customers or investors, and that acts as a handbrake on either overall growth or available capital.)

But what if this time is different? Specifically, what if we all switch to programming in natural languages, through LLMs, instead of in programming languages? Lots of people know Python and JavaScript, lots more know how to use Excel to execute logical operations, but many, many more people can describe what they want to do in text form.

But that's been happening for a long time. You could have made a similar argument when people started using C instead of writing raw assembly; this sample assembly code from around that time can be translated (by ChatGPT, of course) into a C program that's three lines shorter and is easier to read. In Python, it's five lines, and can be basically read aloud.
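(A concrete stand-in, since the linked sample isn't reproduced here: the snippet below is a generic illustration of the gap, not the article's actual example. A task that in assembly means managing registers and a loop counter by hand, and in C means declaring types and writing the loop yourself, compresses in Python to something close to the English description of the task.)

```python
# Generic illustration (not the article's linked sample): total up a list of prices.
# In assembly you'd juggle registers and a loop counter; in C you'd declare types
# and write the loop yourself; in Python the code reads like the task itself.
prices = [19.99, 4.50, 102.25]
total = sum(prices)
print(f"Total: ${total:.2f}")
```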

This theoretically made programming accessible to more people, but didn't mean that the field was swamped by new entrants who depressed wages. What it meant at first was that people with different sets of tools in their mental toolkit could write performance-sensitive, hardware-specific implementations in assembly, write more portable code in C, and, for simple one-off scripts or performance-insensitive projects, use Python. In other words, the tools that simplified the job had the biggest positive impact on people who had mastered the more complicated tools first.

As a general rule, if you operate on one level of the stack, it's good practice to know what's happening one level up and one level down. Python makes it so you don't need to know precisely what's going on in your computer's memory to write a program, but knowing how different kinds of Python objects are stored and operated on helps a lot in understanding why something is slow, or why you're running into mysterious bugs. It's also useful to know the abstractions one level up, e.g. to know what O(n^2) means or to realize that the problem you're trying to solve, when correctly formalized, has either no solutions or infinitely many.
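To make that concrete with an example of my own (not from the piece): the one-level-down fact that a Python list is a flat array while a set is a hash table is exactly what explains a mysteriously slow loop, and the one-level-up vocabulary is what lets you name the difference as O(n^2) versus O(n).

```python
# Illustration (mine, not the article's): why knowing how objects are stored matters.
# A list is an array, so `x in big_list` scans it element by element; a set is a
# hash table, so `x in big_set` is an average O(1) lookup. Checking every needle
# against a list is O(n^2) overall; against a set it's roughly O(n).
import time

n = 10_000
haystack_list = list(range(n))
haystack_set = set(haystack_list)

start = time.perf_counter()
hits = sum(1 for x in range(n) if x in haystack_list)  # linear scan per check
print(f"list: {time.perf_counter() - start:.3f}s ({hits} hits)")

start = time.perf_counter()
hits = sum(1 for x in range(n) if x in haystack_set)   # hash lookup per check
print(f"set:  {time.perf_counter() - start:.3f}s ({hits} hits)")
```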

And that abstraction-level-stacking approach turns out to be a good justification for non-programmers to learn at least a little programming. There will be times when you're doing something that is simply much, much easier in Python than Excel—like "take this daily time series in multiple currencies and convert it to USD as of the date on which the transaction happened," especially if after all that you get back "Whoops! Actually we hold the money in local accounts and convert it periodically on these dates." And sometimes it's replacing human effort with computer effort: "load a website once a day and write down the number that appears in a particular spot" is, of course, the kind of thing nobody was born to do but that computers were made to do. And knowing a little bit of programming shrinks the size of what Charlie Munger calls "the too-hard pile." It's a very big benefit to be able to do fifteen minutes of preliminary data-gathering rather than four hours of it before deciding whether the rest of a project is worth embarking on.3 It also provides a bit of immunity to sandbagging and overestimation in cases where a non-programmer is asking a programmer how long something will take. (And, in my experience, a great way to learn more about software engineering is to ask a software engineer why something will take so long or why it's going to be so trivial.)
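Here's a minimal sketch of the currency-conversion task above, with invented data shapes (neither the table layout nor the rates come from the piece), just to show how small the Python version is once you know where to look:

```python
# Illustrative only: convert a multi-currency transaction list to USD using the
# rate on each transaction's date. The data shapes and rates here are invented.
from datetime import date

transactions = [                 # (date, amount, currency)
    (date(2024, 1, 2), 1_500.00, "EUR"),
    (date(2024, 1, 3), 90_000.00, "JPY"),
]
usd_per_unit = {                 # daily FX rates, keyed by (date, currency)
    (date(2024, 1, 2), "EUR"): 1.09,
    (date(2024, 1, 3), "JPY"): 0.0069,
}

def to_usd(txn_date, amount, currency):
    """Convert one transaction at that day's rate (USD passes through unchanged)."""
    if currency == "USD":
        return amount
    return amount * usd_per_unit[(txn_date, currency)]

print([round(to_usd(d, amt, ccy), 2) for d, amt, ccy in transactions])

# The "whoops, we actually convert on these dates" revision is a small change:
# look up the most recent conversion date at or before txn_date rather than
# requiring an exact match.
```

In a spreadsheet, that same revision usually means rebuilding lookups by hand across every tab; in code, it's an edit to one function.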

So programming continues to be a useful skill, even if the exact way it's applied is opaque. As we move up the ladder of abstraction, the lower levels get more reliable; you probably don't need to know any electrical engineering to have a prosperous career as a developer today, even if in the 1970s that kind of knowledge came in handy more often. And generally, young people have a bias towards newer technologies, which means they start pretty high on that ladder, keep moving up, and develop some vertigo when the problem they're facing turns out to involve memory management or something. If you expect to build something new, you probably will end up using tools that are fairly new, too. But if you're building, it's always on a foundation, and you'll want to understand that, too.

Read More in The Diff

In The Diff, we’ve looked at the utility of programming from many angles. For example:

1. "Can learn" is deliberately ambiguous here, because it means several things: can stay motivated, can handle the abstractions, can use tools designed around the needs of people who spend all of their time programming—these are all valid interpretations of "can learn," but they operate at different scopes. Programming has gotten more accessible over time, both because computers are cheaper and because modern languages are more forgiving than assembly. In another way, though, it's less accessible: if you happened to own a computer in the 1980s, the programs you used seemed like something you could, in principle, copy, or at least copy elements of. In the 90s, hitting "view source" often gave you enough information to reproduce the page you were looking at. Now, it's less approachable. The apps people use are the work of large teams of professionals, not something one person whipped up in a few weeks, so writing a simple program feels like doing something qualitatively different than, say, building Instagram or Minecraft.

2. More recently, the market seems bifurcated: plenty of hiring is happening, but employers are more risk-averse, and one way they express this is by aiming for experienced hires over new ones that they'll have to vet on the job while training them. You can think of this as turning dollars into speed and certainty, but if you happen to have finished your CS degree last month, you might think of it in more expressive terms.

3. This is my theory for why quick mental math seems to correlate with doing well in higher math, even though they're very different skills. The people who are good at mental math can quickly guess-and-check a few cases rather than diving right into a proof, and that's often faster. It's common to joke that professional mathematicians are bad at arithmetic, but I suspect that there are three causes there: first, they are worse than you'd expect if you formed your concept of "good at math" in early grade school; second, the kind of correctness they're optimizing for is in proofs and not computations; and third, it's just really funny when a math professor makes a mistake a third-grader wouldn't.

