3 private links
One of the surprising (at least to me) consequences of the fall of Twitter is the rise of LinkedIn as a social media site. I saw some interesting posts I wanted to call attention to: First, Simon Wardley on building things without understanding how they work:
"I don't even read the code I deliver."
The comparison between practical engineering (the Roman Empire) and the loss of inquiry (Science in the Hellenistic age) is unnervingly familiar.
When an entire culture decides that producing outputs matters more than understanding mechanisms, it works fine right up until the environment shifts and nobody remembers how to reason from first principles.
I suspect anthropologists are going to have a field day with our time, once we get around to rediscovering anthropology.
Here’s Adam Jacob in response:
Due respect to Simon, but he’s wrong. With this degree of velocity (and it can produce not only working software but quality software), we will solve every obstacle in front of adopting it.
It’s not slop. It’s not forgetting first principles. It’s a shift in how the craft works, and it’s already happened.
If you’ve been reading what I write, it’s not like I’ve been a believer the whole time. But I am today. Because I’m doing it. It’s amazing. We will never go back, as an industry. We will simply use this capability and catapult forward.
It’s going to be an absolute mess while we sort it out, though, make no mistake.
And here’s Bruce Perens, whose post is very much in conversation with them, even though he’s not explicitly responding to either of them.
Do not underestimate the degree to which mostly-competent programmers are unaware of what goes on inside the compiler and the hardware.
- How much the compiler actually does at compile-time, and the fact that they never look at assembler output: learning enough assembly language to understand that scares them. So, they tend to write silly optimizations that the compiler ignores.
- How the cache works: My favorite was the guy who allocated image buffers using valloc(), which returns page-aligned addresses, on one of the early MIPS processors. That CPU used the low bits of addresses to hash cache buckets, and only had two cache lines per bucket, resulting in a cache spill and load as every pixel was processed.
Another is that programmers think the CPU touches memory directly. Modern ones rarely do, except for synchronizing primitives. The CPU reads and writes from cache; only the cache touches memory, and only in 64-byte chunks (on modern Intel processors). Modern CPUs have three layers of cache, and the innermost one is split into separate instruction and data caches.
And people believe that their programs are limited by processing speed, when it may well be memory-access speed. These folks can unroll loops in a way that would have great effect, if it were not for the speed of memory access. They show you profiling that makes the effect look clear, but don't understand that the profiler lies about this, because it counts instructions and not memory-access time. You must test against wall-clock time.
- How synchronization works: The fact that writing a variable from one CPU does not make it immediately visible to another CPU unless you do something special. What a race is.
- How paging and the translation look-aside buffer work: That locality of reference is an important factor in the speed of your program, and how to organize data for it.
- How numbers are stored and processed: That floats are not counting numbers. That they can't represent 1/3 exactly. Dealing with infinities, NaN, and signaling NaN in floating-point numbers. How to deal with remainders (hint: the % operator). How integers wrap and what to do about overflow. How to detect a carry. What saturating arithmetic is.
Finally, here’s the MIT engineering professor Louis Bucciarelli from his book Designing Engineers, written back in 1994. Here I’m just copying and pasting the quotes from my previous post on active knowledge.
A few years ago, I attended a national conference on technological literacy… One of the main speakers, a sociologist, presented data he had gathered in the form of responses to a questionnaire. After a detailed statistical analysis, he had concluded that we are a nation of technological illiterates. As an example, he noted how few of us (less than 20 percent) know how our telephone works.
This statement brought me up short. I found my mind drifting and filling with anxiety. Did I know how my telephone works?
I squirmed in my seat, doodled some, then asked myself, What does it mean to know how a telephone works? Does it mean knowing how to dial a local or long-distance number? Certainly I knew that much, but this does not seem to be the issue here.
No, I suspected the question to be understood at another level, as probing the respondent’s knowledge of what we might call the “physics of the device.” I called to mind an image of a diaphragm, excited by the pressure variations of speaking, vibrating and driving a coil back and forth within a magnetic field… If this was what the speaker meant, then he was right: Most of us don’t know how our telephone works.
Indeed, I wondered, does [the speaker] know how his telephone works? Does he know about the heuristics used to achieve optimum routing for long distance calls? Does he know about the intricacies of the algorithms used for echo and noise suppression? Does he know how a signal is transmitted to and retrieved from a satellite in orbit? Does he know how AT&T, MCI, and the local phone companies are able to use the same network simultaneously? Does he know how many operators are needed to keep this system working, or what those repair people actually do when they climb a telephone pole? Does he know about corporate financing, capital investment strategies, or the role of regulation in the functioning of this expansive and sophisticated communication system?
Does anyone know how their telephone works?
There’s a technical interview question that goes along the lines of: “What happens when you type a URL into your browser’s address bar and hit enter?” You can talk about what happens at all sorts of different levels (e.g., HTTP, DNS, TCP, IP, …). But does anybody really understand all of the levels? Do you know about the interrupts that fire inside your operating system when you actually strike the enter key? Do you know which modulation scheme is being used by the 802.11ax Wi-Fi protocol in your laptop right now? Could you explain the difference between quadrature amplitude modulation (QAM) and quadrature phase shift keying (QPSK), and could you determine which one your laptop is currently using? Are you familiar with the relaxed memory model of the ARM processor? How garbage collection works inside the JVM? Do you understand how the field-effect transistors inside the chip implement digital logic?
I remember talking to Brendan Gregg about how he conducted technical interviews, back when we both worked at Netflix. He told me that he was interested in identifying the limits of a candidate’s knowledge, and how they reacted when they reached that limit. So, he’d keep asking deeper questions about their area of knowledge until they reached a point where they didn’t know anymore. And then he’d see whether they would actually admit “I don’t know the answer to that”, or whether they would bluff. He knew that nobody understood the system all of the way down.
In their own ways, Wardley, Jacob, Perens, and Bucciarelli are all correct.
Wardley’s right that it’s dangerous to build things where we don’t understand the underlying mechanism of how they actually work. This is precisely why magic is used as an epithet in our industry. Magic refers to frameworks that deliberately obscure the underlying mechanisms in service of making it easier to build within that framework. Ruby on Rails is the canonical example of a framework that uses magic.
Jacob is right that AI is changing the way that normal software development work gets done. It’s a new capability that has proven itself to be so useful that it clearly isn’t going away. Yes, it represents a significant shift in how we build software, and it moves us further away from how the underlying stuff actually works, but the benefits exceed the risks.
Perens is right that the scenario that Wardley fears has, in some sense, already come to pass. Modern CPU architectures and operating systems contain significant complexity, and many software developers are blissfully unaware of how these things really work. Yes, they have mental models of how the system below them works, but those mental models are incorrect in fundamental ways.
Finally, Bucciarelli is right that systems like telephony are so inherently complex, built on top of so many different layers in so many different places, that no one person can ever actually understand how the whole thing works. This is the fundamental nature of complex technologies: our knowledge of these systems will always be partial, at best. Yes, AI will make this situation worse. But it’s a situation that we’ve been in for a long time.
...
The Goodness Paradox: The Strange Relationship Between Virtue and Violence in Human Evolution by Richard Wrangham—A Review
We believe that humanity is about fine music, happy families, and good food. People look at things like the golden record and take pride in what humanity has accomplished, and look forward to a future where Klingons and humans can cooperate.
What we should have done is include some photos of warfare, mushroom clouds, … all that stuff we do all the time that we pretend we don’t. If we’re going to give aliens or future humans an honest idea of what we’re about, shouldn’t we offer both our best and our worst?
Except we’re embarrassed by ourselves, so we tell ourselves our achievements are great, and we look out into the stars in search of people like us — the same way we once saw faces in nature and in the sky and called them “gods”. Our narcissism is so strong that we don’t even need reflections; we just see ourselves in everything, wherever we go. All I have to say is we’ll be damn lucky if aliens are even half like the ones you see on Star Trek.
There’s this saying that a deal predicated on a lie can only go downhill. If you were an extraterrestrial and you discovered humanity was lying to you, how would that make you feel? I, for one, would slowly back away and never return.
We’ll be fortunate if an alien life form can “feel” at all, beyond feeling hungry. For all we know, the predominant life forms in the universe are just big blobs that consume solar systems like plankton in a cosmic ocean. But our hubris made us send out a beacon that not only signals nearby intelligent life but also includes a map showing how to find us. Fortunately, it has since turned out to be woefully inaccurate.
This recognition should cause us to rethink what ‘nature’ and ‘wilderness’ really are. If by ‘nature’ we mean something divorced from or untouched by humans, there’s almost nowhere on Earth where such conditions exist, or have existed for thousands of years. The same can be said of Earth’s climate. If early agricultural land use began warming our climate thousands of years ago, as the early anthropogenic hypothesis suggests, it implies that no ‘natural’ climate has existed for millennia.
The musician and comedian Martin Mull has observed that “writing about music is like dancing about architecture”. In a similar way, there's an inherent inadequacy in writing about tools for thought. To the extent that such a tool succeeds, it expands your thinking beyond what can be achieved using existing tools, including writing. The more transformative the tool, the larger the gap that is opened. Conversely, the larger the gap, the more difficult the new tool is to evoke in writing. But what writing can do, and the reason we wrote this essay, is act as a bootstrap. It's a way of identifying points of leverage that may help develop new tools for thought. So let's get on with it.
Abstract: Africa is the source of all modern humans, but characterization of genetic variation and of relationships among populations across the continent has been enigmatic. We studied 121 African populations, four African American populations, and 60 non-African populations for patterns of variation at 1327 nuclear microsatellite and insertion/deletion markers. We identified 14 ancestral population clusters in Africa that correlate with self-described ethnicity and shared cultural and/or linguistic properties. We observed high levels of mixed ancestry in most populations, reflecting historical migration events across the continent. Our data also provide evidence for shared ancestry among geographically diverse hunter-gatherer populations (Khoesan speakers and Pygmies). The ancestry of African Americans is predominantly from Niger-Kordofanian (~71%), European (~13%), and other African (~8%) populations, although admixture levels varied considerably among individuals. This study helps tease apart the complex evolutionary history of Africans and African Americans, aiding both anthropological and genetic epidemiologic studies.