For recruitment, it is useful to have a set of standard questions for quickly assessing candidates' ability to grasp a variety of subjects and approaches. In this article we present the selection process for candidates to a technical training program in information-systems security (SSI): questions that are sometimes far from the heart of the subject, but whose relevance we justify.
Somewhere between 15 and 20 years ago, I worked for a company. It was a very prestigious company, and it was a glorious and frustrating time. The company did amazing things. Literally unbelievable achievements - from my point of view anyway. But this was coupled with levels of chaos that
Supply chains are TV for matter
https://news.ycombinator.com/item?id=19445483
I'd be interested in somebody doing this, but instead of scaling it to 1000ms (1 second), I'd like to see it scaled to 16ms (single frame @60fps).
I feel like this would give a better representation of "time to feedback". Saying a ping of 65ms is like 5 years doesn't make a ton of sense, as that's a relatively decent ping time. Saying it's 4 frames has a direct correlation to its perceived delay, and matters to users.
As requested:
System event                     Actual latency   Scaled latency
-------------------------------  ---------------  -----------------
One CPU cycle                    0.4 ns           1 frame (1/60 s)
Level 1 cache access             0.9 ns           2 frames
Level 2 cache access             2.8 ns           7 frames
Level 3 cache access             28 ns            1 s
Main memory access (DDR)         ~100 ns          4 s
Intel Optane memory access       <10 μs           7 min
NVMe SSD I/O                     ~25 μs           17 min
SSD I/O                          50–150 μs        0.5–2 hrs
Rotational disk I/O              1–10 ms          0.5–5 days
Internet call: SF to NYC         65 ms            1 month
Internet call: SF to Hong Kong   141 ms           2 months
Conversions courtesy of Wolfram|Alpha, e.g.: https://www.wolframalpha.com/input/?i=65+ms+%2F+0.4+ns+%2F+6...
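The scaling in the table can be reproduced with a few lines of arithmetic; a minimal sketch (the 0.4 ns cycle time and the 60 fps frame length come from the table above):

```python
# Scale real-world latencies so that one 0.4 ns CPU cycle
# corresponds to one frame at 60 fps, as in the table above.
CYCLE = 0.4e-9      # seconds per CPU cycle
FRAME = 1.0 / 60    # seconds per frame at 60 fps

def scaled_seconds(latency_s):
    """Human-scale duration for a given latency, at 1 cycle = 1 frame."""
    return latency_s / CYCLE * FRAME

# 65 ms ping (SF to NYC), expressed in 30-day months:
months = scaled_seconds(65e-3) / (30 * 24 * 3600)  # roughly 1 month
```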
MeshCommander is a fully web-based remote management tool for your Intel® AMT computers.
I think DevOps, as we understand it today, is coming to an end. At least the Ops part of it.
This recognition should cause us to rethink what ‘nature’ and ‘wilderness’ really are. If by ‘nature’ we mean something divorced from or untouched by humans, there’s almost nowhere on Earth where such conditions exist, or have existed for thousands of years. The same can be said of Earth’s climate. If early agricultural land use began warming our climate thousands of years ago, as the early anthropogenic hypothesis suggests, it implies that no ‘natural’ climate has existed for millennia.
What to learn from all this? On the one hand, Unix wins: it's supposed to be quick and easy to assemble small tools to do whatever it is you're trying to do. When time wouldn't do the arithmetic I needed it to, I sent its output to a generic arithmetic-doing utility. When I needed to count to twenty, I had a utility for doing that; if I hadn't there are any number of easy workarounds. The shell provided the I/O redirection and control flow I needed.
On the other hand, gosh, what a weird mishmash of stuff I had to remember or look up. The -l flag for bc. The fact that I needed bc at all because time won't report total CPU time. The $TIME variable that controls its report format. The bizarro 2>&1 syntax for redirecting standard error into a pipe. The sh -c trick to get time to execute a pipeline. The missing documentation of the core functionality of time.
Was it a win overall? What if Unix had less compositionality but I could use it with less memorized trivia? Would that be an improvement?
I don't know. I rather suspect that there's no way to actually reach that hypothetical universe. The bizarre mishmash of weirdness exists because so many different people invented so many tools over such a long period. And they wouldn't have done any of that inventing if the compositionality hadn't been there. I think we don't actually get to make a choice between an incoherent mess of composable paraphernalia and a coherent, well-designed but noncompositional system. Rather, we get a choice between an incoherent but useful mess and an incomplete, limited noncompositional system.
(Notes to self: (1) In connection with Parse::RecDescent, you once wrote about open versus closed systems. This is another point in that discussion. (2) Open systems tend to evolve into messes. But closed systems tend not to evolve at all, and die. (3) Closed systems are centralized and hierarchical; open systems, when they succeed, are decentralized and organic. (4) If you are looking for another example of a successful incoherent mess of composable paraphernalia, consider Git.)
A major thesis of this text is that the complexity of computer hardware and software systems has exceeded our current understanding of how these systems work and fail, and furthermore, these systems are approaching the complexity of biological systems based on their cardinality and their networked hierarchy due to the widespread connectivity of the Internet and World Wide Web.
...
Although measuring network complexity remains an active area of research, efforts to quantify node degree and dependence are confirming a fundamental hypothesis shared by network and complex-systems researchers across disciplines: relationship transitivity matters more than it is usually credited. That credit is withheld by the traditional Newtonian-Cartesian ethic, rooted in linear cause and effect, decomposability, reductionism, foreseeability of harm, time reversibility, and an obsession with finding broken parts and blaming people, which still dominates mainstream theory and practice in accident investigations, the law, and systems engineering.