One of the surprising (at least to me) consequences of the fall of Twitter is the rise of LinkedIn as a social media site. I saw some interesting posts I wanted to call attention to: First, Simon Wardley on building things without understanding how they work:
"I don't even read the code I deliver."
The comparison between practical engineering (the Roman Empire) and the loss of inquiry (Science in the Hellenistic age) is unnervingly familiar.
When an entire culture decides that producing outputs matters more than understanding mechanisms, it works fine right up until the environment shifts and nobody remembers how to reason from first principles.
I suspect anthropologists are going to have a field day with our time, once we get around to rediscovering anthropology.
Here’s Adam Jacob in response:
Due respect to Simon, but he’s wrong. With this degree of velocity (and it can produce not only working software but quality software), we will solve every obstacle in front of adopting it.
It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft works, and it’s already happened.
If you’ve been reading what I write, it’s not like I’ve been a believer the whole time. But I am today. Because I’m doing it. It’s amazing. We will never go back, as an industry. We will simply use this capability and catapult forward.
It’s going to be an absolute mess while we sort it out, though, make no mistake.
And here’s Bruce Perens, whose post is very much in conversation with them, even though he’s not explicitly responding to either of them.
Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.
- How much the compiler actually does at compile time, and the fact that they never look at assembler output: learning enough assembly language to understand it scares them. So they tend to write silly optimizations that the compiler ignores.
- How the cache works: my favorite was the guy who allocated image buffers using valloc(), which returns page-aligned addresses, on one of the early MIPS processors. That CPU used the low bits of addresses to hash cache buckets, and only had two cache lines per bucket, resulting in a cache spill and load as every pixel was processed.
  Another is that programmers think the CPU touches memory directly. Modern ones rarely do, except for synchronization primitives. The CPU reads and writes from cache; only the cache touches memory, and only in chunks 64 bytes wide (for modern Intel processors). Modern CPUs have three layers of cache, and the innermost one has separate caches for instructions and data.
  And people believe that their programs are limited by processing speed, when it may well be memory-access speed. These folks can unroll loops in a way that would have great effect, if it were not for the speed of memory access. They show you profiling that makes its effect clear, but don't understand that profiling lies about this, because it only counts instructions and not memory-access time. You must test against wall-clock time.
- How synchronization works: the fact that writing a variable from one CPU does not make it immediately visible to another CPU unless you do something special. What a race is.
- How paging and the translation look-aside buffer work: that locality of reference is an important factor in the speed of your program, and how to organize data for it.
- How numbers are stored and processed: that floats are not counting numbers. That they can't represent 1/3 accurately. Dealing with infinities, NaN, and signaling NaN in floating-point numbers. How to deal with remainders (hint: the % operator). How integers wrap and what to do about overflow. How to detect a carry. What saturating arithmetic is.
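That last point is easy to verify for yourself. Here’s a minimal Python illustration (my example, not Perens’s — he wasn’t writing about any particular language):

```python
import math
from fractions import Fraction

# Floats are binary fractions, not counting numbers: many exact
# decimal values have no exact binary representation.
print(0.1 + 0.2)             # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)      # False

# 1/3 as a 64-bit float is really this nearby binary fraction:
print(Fraction(1 / 3))       # 6004799503160661/18014398509481984

# Infinities and NaN propagate through arithmetic, and NaN != NaN.
inf = float("inf")
print(inf - inf)             # nan
print(math.isnan(inf - inf)) # True

# Fixed-width integers wrap. Python's ints are arbitrary precision,
# so we simulate a 32-bit unsigned add and detect the carry by hand.
def add_u32(a, b):
    total = a + b
    return total & 0xFFFFFFFF, total >> 32  # (wrapped result, carry bit)

print(add_u32(0xFFFFFFFF, 1))  # (0, 1): wrapped to zero, carry set
```

If any of these outputs surprise you, that’s Perens’s point.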
Finally, here’s the MIT engineering professor Louis Bucciarelli from his book Designing Engineers, written back in 1994. Here I’m just copying and pasting the quotes from my previous post on active knowledge.
A few years ago, I attended a national conference on technological literacy… One of the main speakers, a sociologist, presented data he had gathered in the form of responses to a questionnaire. After a detailed statistical analysis, he had concluded that we are a nation of technological illiterates. As an example, he noted how few of us (less than 20 percent) know how our telephone works.
This statement brought me up short. I found my mind drifting and filling with anxiety. Did I know how my telephone works?
I squirmed in my seat, doodled some, then asked myself, What does it mean to know how a telephone works? Does it mean knowing how to dial a local or long-distance number? Certainly I knew that much, but this does not seem to be the issue here.
No, I suspected the question to be understood at another level, as probing the respondent’s knowledge of what we might call the “physics of the device.” I called to mind an image of a diaphragm, excited by the pressure variations of speaking, vibrating and driving a coil back and forth within a magnetic field… If this was what the speaker meant, then he was right: Most of us don’t know how our telephone works.
Indeed, I wondered, does [the speaker] know how his telephone works? Does he know about the heuristics used to achieve optimum routing for long distance calls? Does he know about the intricacies of the algorithms used for echo and noise suppression? Does he know how a signal is transmitted to and retrieved from a satellite in orbit? Does he know how AT&T, MCI, and the local phone companies are able to use the same network simultaneously? Does he know how many operators are needed to keep this system working, or what those repair people actually do when they climb a telephone pole? Does he know about corporate financing, capital investment strategies, or the role of regulation in the functioning of this expansive and sophisticated communication system?
Does anyone know how their telephone works?
There’s a technical interview question that goes along the lines of: “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? Do you know about the interrupts that fire inside your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using? Are you familiar with the relaxed memory model of the ARM processor? With how garbage collection works inside the JVM? Do you understand how the field effect transistors inside the chip implement digital logic?
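Even the very top of that stack decomposes into layers. Here’s a toy Python sketch of just the first couple of them — URL parsing and name resolution — with a made-up URL; everything below (TCP, IP, the NIC driver, interrupts, the hardware) is left to the reader:

```python
import socket
from urllib.parse import urlsplit

# A hypothetical URL; the host and port are made up for illustration.
url = "http://localhost:8080/search?q=telephones"

# Level 1: the browser parses the URL into its components.
parts = urlsplit(url)
print(parts.scheme, parts.hostname, parts.port, parts.path, parts.query)
# → http localhost 8080 /search q=telephones

# Level 2: the hostname is resolved to an address (via DNS, or
# /etc/hosts for localhost).
infos = socket.getaddrinfo(parts.hostname, parts.port, proto=socket.IPPROTO_TCP)
print(infos[0][4])  # an (address, port) tuple, e.g. ('127.0.0.1', 8080)

# Level 3: the browser would open a TCP connection to that address and
# write an HTTP request onto it -- and below *that* sit IP, the NIC
# driver, interrupts, and the field effect transistors.
request = f"GET {parts.path}?{parts.query} HTTP/1.1\r\nHost: {parts.hostname}\r\n\r\n"
print(repr(request))
```

Each of those three steps hides an entire discipline beneath it, which is exactly what makes the interview question bottomless.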
I remember talking to Brendan Gregg about how he conducted technical interviews, back when we both worked at Netflix. He told me that he was interested in identifying the limits of a candidate’s knowledge, and how they reacted when they reached that limit. So, he’d keep asking deeper questions about their area of knowledge until they reached a point where they didn’t know anymore. And then he’d see whether they would actually admit “I don’t know the answer to that”, or whether they would bluff. He knew that nobody understood the system all of the way down.
In their own ways, Wardley, Jacob, Perens, and Bucciarelli are all correct.
Wardley’s right that it’s dangerous to build things where we don’t understand the underlying mechanism of how they actually work. This is precisely why magic is used as an epithet in our industry. Magic refers to frameworks that deliberately obscure the underlying mechanisms in service of making it easier to build within that framework. Ruby on Rails is the canonical example of a framework that uses magic.
Jacob is right that AI is changing the way that normal software development work gets done. It’s a new capability that has proven itself so useful that it clearly isn’t going away. Yes, it represents a significant shift in how we build software, and it moves us further away from how the underlying stuff actually works, but the benefits exceed the risks.
Perens is right that the scenario that Wardley fears has, in some sense, already come to pass. Modern CPU architectures and operating systems contain significant complexity, and many software developers are blissfully unaware of how these things really work. Yes, they have mental models of how the system below them works, but those mental models are incorrect in fundamental ways.
Finally, Bucciarelli is right that systems like telephony are so inherently complex, have been built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.
The Grug Brained Developer
A layman's guide to thinking like the self-aware smol brained
Cybernetics may be making a comeback. Stafford Beer’s Viable System Model can help us diagnose organisational failures—and improve the conditions for success.
Supply chains are TV for matter
A major thesis of this text is that the complexity of computer hardware and software systems has exceeded our current understanding of how these systems work and fail, and furthermore, these systems are approaching the complexity of biological systems based on their cardinality and their networked hierarchy due to the widespread connectivity of the Internet and World Wide Web.
...
Although measuring network complexity remains an active area of research, efforts to quantify the concepts of node degree and dependence are confirming the fundamental hypothesis of network and complex systems researchers across multiple disciplines that relationship transitivity matters more than often credited in the traditional Newtonian-Cartesian ethic rooted in linear cause-and-effect, decomposability, reductionism, foreseeability of harm, time reversibility, and an obsession with finding broken parts and blaming people that still dominates mainstream intellectual theory and practice in accident investigations, the law, and systems engineering.