Coding Horror

programming and human factors

The Great Filter Comes For Us All

With a 13-billion-year head start on evolution, why haven't any other forms of life in the universe contacted us by now?

teaching the aliens how to exit Vim

(Arrival is a fantastic movie. Watch it, but don't stop there - read the Story of Your Life novella it was based on for so much additional nuance.)

This is called the Fermi paradox:

The Fermi Paradox is a contradiction between high estimates of the probability of the existence of extraterrestrial civilizations, such as in the Drake equation, and the lack of any evidence for such civilizations.

- There are billions of stars in the galaxy that are similar to the Sun, many of them billions of years older than Earth.
- With high probability, some of these stars will have Earth-like planets, and if the Earth is typical, some might develop intelligent life.
- Some of these civilizations might develop interstellar travel, a step the Earth is investigating now.
- Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in about a million years.

According to this line of thinking, the Earth should have already been visited by extraterrestrial aliens. In an informal conversation, Fermi noted no convincing evidence of this, nor any signs of alien intelligence anywhere in the observable universe, leading him to ask, “Where is everybody?”

To me, this is a compelling argument, in the same way that the lack of evidence of any time travelers is:

Many have argued that the absence of time travelers from the future demonstrates that such technology will never be developed, suggesting that it is impossible. This is analogous to the Fermi paradox related to the absence of evidence of extraterrestrial life. As the absence of extraterrestrial visitors does not categorically prove they do not exist, so the absence of time travelers fails to prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.

It seems, to me at least, clear evidence that time travel is not possible, given the enormous amount of time behind us. Something, somewhere, would certainly have invented it by now... right?

So if not, what happened? The Great Filter, maybe?

The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.

I liked Wait But Why’s take on this a lot, which covers three main filter possibilities:

  1. Life is extraordinarily rare, almost impossible
  2. We are not a rare form of life, but among the first to evolve
  3. Almost no life makes it to this point

Those are three Great Filter possibilities, but the question remains: why are we so alone in the observable universe? I grant you that what we can observe is appallingly tiny given the unimaginable scale of the universe, so “what we can observe” may not be enough by many orders of magnitude.

I encourage you to read the entire article; it's full of great ideas explained well, including many other Great Filter possibilities. Mostly I wanted to share my personal theory of why we haven't encountered alien life by now. Like computers themselves, things don't get larger. They get smaller. And faster. And so does intelligent life.

Why build planet-size anything when the real action is in the small things? Small spaces, small units of time, everything gets smaller.

Large is inefficient and unnecessary. Look at the history of computers: from giant to tiny and tinier. From slow to fast and faster. Personally, I have a feeling really advanced life eventually does away with all the physical stuff that slows it down as soon as it can, and enters the infinite spaces between:

This is, of course, a variant on the Fermi paradox: We don’t see clues to widespread, large-scale engineering, and consequently we must conclude that we’re alone. But the possibly flawed assumption here is when we say that highly visible construction projects are an inevitable outcome of intelligence. It could be that it’s the engineering of the small, rather than the large, that is inevitable. This follows from the laws of inertia (smaller machines are faster, and require less energy to function) as well as the speed of light (small computers have faster internal communication). It may be – and this is, of course, speculation – that advanced societies are building small technology and have little incentive or need to rearrange the stars in their neighborhoods, for instance. They may prefer to build nanobots instead.

Seth Shostak
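Shostak's speed-of-light point is easy to sanity-check with some back-of-the-envelope numbers (the structure sizes below are my own illustrative picks, not his):

```python
# One-way light travel time across structures of different sizes, to
# illustrate why smaller machines have faster internal communication.
C = 299_792_458  # speed of light in a vacuum, m/s

structures_m = {
    "CPU die (~2 cm)": 0.02,
    "server rack (~2 m)": 2.0,
    "Earth-diameter machine (~12,742 km)": 12_742_000.0,
}

for name, size in structures_m.items():
    seconds = size / C
    print(f"{name}: {seconds * 1e9:,.2f} ns one-way")
```

A signal crosses a chip in a fraction of a nanosecond, but a planet-sized machine in tens of milliseconds – hundreds of millions of times slower, on every single round trip.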

Seth delivers an excellent TED talk on this topic as well:

If we can barely see far into the universe as it is, there's no way we could possibly see into the infinite space and time between.

That is of course just my opinion, but we'll see... eventually.

Discussion

I Fight For The Users

If you haven't been able to keep up with my blistering pace of one blog post per year, I don't blame you. There's a lot going on right now. It's a busy time. But let's pause and take a moment to celebrate that Elon Musk destroyed Twitter. I can't possibly say it better than Paul Ford so I'll just refer you there:

Every five or six minutes, someone in the social sciences publishes a PDF with a title like “Humans 95 Percent Happier in Small Towns, Waving at Neighbors and Eating Sandwiches.” When we gather in groups of more than, say, eight, it’s a disaster. Yet there is something fundamental in our nature that desperately wants to get everyone together in one big room, to “solve it.” Our smarter, richer betters (in Babel times, the king’s name was Nimrod) often preach the idea of a town square, a marketplace of ideas, a centralized hub of discourse and entertainment—and we listen. But when I go back and read Genesis, I hear God saying: “My children, I designed your brains to scale to 150 stable relationships. Anything beyond that is overclocking. You should all try Mastodon.”

It's been clear for quite some time that the early social media strategery of "jam a million people in a colosseum and let them fight it out with free speech" isn't panning out, but never has it been more clear than now, under the Elon Musk regime, that being beholden to the whims of a billionaire going through a midlife crisis isn't exactly healthy for society. Or you. Or me. Or anyone, really.

I tried to be fair; I gave the post-Elon Twitter era a week, thinking "how bad could it possibly be?" and good lord, it was so much worse than I could have possibly ever imagined. It's like Elon read the Dilbert pointy-haired-manager book on management and bonked his head on every rung of the ladder going down, generating an ever-growing laundry list of terrible things no manager should ever do. And he kept going!

It's undeniably sad. I really liked Twitter, warts and all, from 2007 onward. In fact, it was the only "social network" I liked at all. Even when it became clear in the Trump era that Twitter was unhealthy for human minds, I soldiered on, gleaning what I could. I'm not alone in that; Clay Shirky's moribund signoff at the end of 2022 reflected how I felt:

Indeed, Twitter was murdered at the whims of a billionaire high on Ketamine while it was (mostly) healthy, because of the "trans woke virus".

I urge you, all of you, to disavow Twitter and never look at it again. No one who cares about their mental health should be on Twitter at this point, or linking to Twitter and feeding it the attention it thrives on. We should entomb Twitter deep in concrete with this public warning on its capstone:

This place is not a place of honor... no highly esteemed deed is commemorated here... nothing valued is here.

In the end, I begrudgingly realized, as did Paul Ford, that Elon unwittingly did us a favor by killing Twitter. He demonstrated the very real dangers of any platform run by a king, a dictator, a tyrant, a despot, an autocrat. You can have all your content rug-pulled out from under you at any time, or watch in horror as your favorite bar... slowly transforms into a nazi bar.

I've been saying for a long time that decentralization is the way to go. We can and should have sane centralized services, of course, but it's imperative that we also build decentralized services which empower users and give them control, rather than treating them like digital sharecroppers. That's what our Discourse project is all about. I propose collective ownership of the content and the communities we build online. Yeah, it's more work, it's not "free" (sorry not sorry), but I have some uncomfortable news for you: those so-called "free" services aren't really free.

(Geek and Poke comic: two pigs marveling at their "free" barn – if you're not paying for it, you're the product.)

Which, again, is not to say that "free" services don't have a place in the world, they do, but please don't harbor any illusions about what you are sacrificing in the name of "free". Grow up.

I take a rather Tron-like view of the world when it comes to this stuff; in the software industry, our goal should be to empower users (with strong moderation tools), not exploit them.

So I encourage you to explore alternatives to Twitter, ideally open source, federated alternatives. Is it messy? Hell yes it's messy. But so is democracy; it's worth the work, because it's the only survivable long term path forward. Anything worth doing is never easy.

I'm currently on Mastodon, an open source, federated Twitter alternative at https://infosec.exchange/@codinghorror – I urge you to join me on the Mastodon server of your choice, or quite literally any other platform besides Twitter. Really, whatever works for you. Pick what you like. Help make it better for everyone.

To inspire that leap of faith, I am currently auctioning off these 10 museum-quality brass plaques of what I consider to be the best tweet of all time, hands down – with all proceeds benefiting the Trevor Project, which offers assistance to LGBTQ youth:

(Blissfully, @horse_ebooks is also on Mastodon. As they should be. As should you. Because everything happens so much.)

If you'd like to bid on the 10 brass plaques, follow these links to eBay, and please remember, it's for a great cause, and will piss Elon off, which makes it even sweeter:

(apologies, I had to cancel the old auctions because I forgot to allow international shipping – I've also made shipping free, worldwide.)

  1. https://www.ebay.com/itm/225903779136
  2. https://www.ebay.com/itm/225903780761
  3. https://www.ebay.com/itm/225903784597
  4. https://www.ebay.com/itm/225903785269
  5. https://www.ebay.com/itm/225903785648
  6. https://www.ebay.com/itm/225903786591
  7. https://www.ebay.com/itm/225903787053
  8. https://www.ebay.com/itm/225903788754
  9. https://www.ebay.com/itm/225903789412
  10. https://www.ebay.com/itm/225903789881

I will sign the back of every plaque, because each one comes with my personal guarantee that it will easily outlive what's left of Twitter.


The 2030 Self-Driving Car Bet

It's my honor to announce that John Carmack and I have initiated a friendly bet of $10,000* to the 501(c)(3) charity of the winner’s choice:

By January 1st, 2030, completely autonomous self-driving cars meeting SAE J3016 level 5 will be commercially available for passenger use in major cities.

I am betting against, and John is betting for.

By “completely autonomous”, per the SAE level 5 definition, we mean the vehicle performs all driving tasks under all conditions – except in the case of natural disasters or emergencies. A human passenger enters the vehicle and selects a destination. Zero human attention or interaction is required during the journey.

By "major cities" we mean any of the top 10 most populous cities in the United States of America.

To be clear, I am betting against because I think everyone is underestimating how difficult fully autonomous driving really is. I am by no means against self driving vehicles in any way! I'd much rather spend my time in a vehicle reading, watching videos, or talking to my family and friends … anything, really, instead of driving. I also think fully autonomous vehicles are a fascinating, incredibly challenging computer science problem, and I want everyone reading this to take it as just that, a challenge. Prove me wrong! Make it happen by 2030, and I'll be popping champagne along with you and everyone else!

(My take on VR is far more pessimistic. VR just… isn't going to happen, in any "changing the world" form, in our lifetimes. This is a subject for a different blog post, but I think AR and projection will do much more for us, far sooner.)

I'd like to thank John for suggesting this friendly wager as a fun way to generate STEM publicity. He is, and always will be, one of my biggest heroes. Go read Masters of Doom if you haven't already!

And while I have you, we're still looking for code contributions in our project to update the most famous programming book of the BASIC era. Proceeds from that project will also go to charity. 😎

* We may adjust the amount up or down to adjust for inflation as mutually agreed upon in 2030, so the money has the desired impact.


Updating The Single Most Influential Book of the BASIC Era

In a way, these two books are responsible for my entire professional career.

With early computers, you didn't boot up to a fancy schmancy desktop, or a screen full of apps you could easily poke and prod with your finger. No, those computers booted up to the command line.

From here, if you were lucky, you might have a cassette tape drive. If you knew the right commands, you could type them in to load programs from cassette tape. But that was an expensive add-on option with early personal computers. For many of us, if we wanted the computer to do anything, we had to type in entire programs from books like 101 BASIC Computer Games, by hand... like so.

Yep, believe it or not, circa 1983, this was our idea of a good time. No, we didn't get out much. The book itself was a sort of greatest hits compilation of games collected from Ahl's seminal Creative Computing magazine in the 1970s:

As soon as Ahl made up his mind to leave DEC, he started laying the groundwork for Creative Computing. He announced intentions to publish the magazine at NCC in June 1974 and over the next few months contacted prospective authors, got mailing lists, arranged for typesetting and printing, and started organizing hundreds of other details.

In addition, he also moved his family to Morristown, NJ, and settled into his new job at AT&T. He had little spare capital, so he substituted for it with "sweat equity." He edited submitted articles and wrote others. He specified type, took photos, got books of "clip art," drew illustrations, and laid out boards. He wrote and laid out circulation flyers, pasted on labels, sorted and bundled mailings.

By October 1974, when it was time to specify the first print run, he had just 600 subscribers. But Ahl had no intention of running off just 600 issues. He took all the money he had received, divided it in half, and printed 8000 copies with it. These rolled off the presses October 31, 1974. Ahl recounts the feeling of euphoria on the drive to the printer replaced by dismay when he saw two skids of magazines and wondered how he would ever get them off the premises. Three trips later, his basement and garage were filled with 320 bundles of 25 magazines each. He delivered the 600 subscriber copies to the post office the next day, but it took nearly three weeks to paste labels by hand onto the other 7400 copies and send them, unsolicited, to libraries and school systems throughout the country.

I also loved Creative Computing, but it was a little before my time:

  • 1971 – Ahl ports the programs from FOCAL to BASIC.
  • 1973 – 101 BASIC Computer Games is first published by DEC.
  • 1974 – Ahl founds Creative Computing magazine and acquires the rights to the book from DEC.
  • 1977 – the “trinity” of Apple II 🖥️, PET 🖥️, and TRS-80 🖥️ microcomputers is released to the public, all with BASIC built in, at prices regular people could mostly afford 🙌
  • 1978 – a second edition of BASIC Computer Games is released, this time published by Ahl himself.

As you can see, there’s no way average people in 1973-1976 were doing a whole lot with BASIC programs, as there were no microcomputers capable of running BASIC for them to buy! It took a while for inexpensive personal computers to trickle down to the mainstream, which brings us to roughly 1984, when the sequels started appearing.

There was a half-hearted attempt to modernize these early BASIC programs in 2010 with SmallBasic, but I didn't feel these ports did much to bring the code up to date, and overall had little relevance to modern code practices. You can compare the original 1973 BASIC Civil War with the 2010 SmallBasic port to see what I mean:

Certainly we can do a bit better than merely removing the line numbers? What about our old buddy the subroutine, merely the greatest invention in computer science? It's nowhere to be seen. 🤔
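To make the contrast concrete, here's a hypothetical sketch of the kind of modernization I mean (the BASIC lines are illustrative, not quoted from the book) – GOTO-driven input validation factored into a reusable subroutine:

```python
# 1970s BASIC typically validated input with GOTO-style loops, something like:
#
#   220 PRINT "YOUR CHOICE";
#   230 INPUT C
#   240 IF C < 1 THEN 220
#   250 IF C > 4 THEN 220
#
# A modern port can put that logic in one subroutine and reuse it everywhere:

def ask_choice(prompt: str, low: int, high: int) -> int:
    """Prompt until the player enters an integer between low and high."""
    while True:
        try:
            choice = int(input(f"{prompt} ({low}-{high})? "))
        except ValueError:
            continue  # not a number; ask again
        if low <= choice <= high:
            return choice
```

Every prompt in the game then becomes a single call, e.g. `ask_choice("Your strategy", 1, 4)`, instead of another copy-pasted tangle of line numbers.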

So it was with considerable enthusiasm that I contacted David H. Ahl, the author, and asked for permission to create a website that attempted to truly update all these ancient BASIC programs.

Thankfully, permission was granted. It's hard to overstate how important this book was to an entire generation of programmers. At one point, there were more copies of this book in print than there were personal computers, period!

... in 1973, DEC published an anthology, 101 BASIC Computer Games. The book quickly went into a second printing, for a total of 10,000 copies sold. “That was far more books than there were computers around, so people were buying three, four, five of them for each computer.”

It went on to be the first computer book to sell a million copies. Quite a legacy.

I think we owe it to the world to bring this book up to date using modern, memory safe languages that embody the original spirit of BASIC, and modern programming practices including subroutines.

So let's do this. Please join us on GitHub, where we're updating those original 101 BASIC games in 10 memory safe, general purpose scripting languages:

  • Java / Kotlin
  • Python
  • C#
  • VB.NET
  • JavaScript
  • Ruby
  • Perl
  • Lua
  • Rust

(Edit: as of March 2022, we've a) offered Kotlin as an alternative to Java, b) removed Pascal since we can't guarantee memory safety there, and replaced it with Rust, which very much can, and c) added Lua which just cracked the top 20 in TIOBE and strongly meets the scripting and memory safe criteria.)

Now, bear in mind these are very primitive games from the 1970s. They aren't going to win any awards for gameplay, or programming sophistication. But they are precious artifacts of early computing that deserve to be preserved for future generations, including the wonderful original art by George Beker.

We need your help to do this right, collaboratively, as with all modern programming projects. Imagine we're all typing these programs in together online, simultaneously, all over the world, instead of being isolated alone in our rooms in 1984, cursing at the inevitable typo we made somewhere while typing the code in by hand out of the book 🤬.

Thanks Mr. Ahl. And a big thanks to everyone who contributed to this project when it was in beta, announced only on Twitter:

To encourage new contributions, by the end of 2022, for every functioning program submitted in each of the 10 indicated languages, I'll donate $5 to Girls Who Code. Before beginning, please read the guidelines in the readme, and if you have questions, scan through this discussion topic. And most of all, remember, this stuff is supposed to be fun.

(I don't want to be "that one guy", so I'm also looking for project co-owners who can help own and organize this effort. If this is a project that really appeals to you, show me what you can do and let's work together as a team.)

Perhaps as your new year's resolution you can see fit to carve off some time to take part in our project to update a classic programming book – one of the most influential books in computing history – for 2022 and beyond! 🎉


Building a PC, Part IX: Downsizing

Hard to believe that I've had the same PC case since 2011, and my last serious upgrade was in 2015. I guess that's yet another sign that the PC is over, because PC upgrades have gotten really boring. It took 5 years for me to muster up the initiative to get my system fully upgraded! 🥱

I've been slogging away at this for quite some time now. My PC build blog entry series spans 13 glorious years:

The future of PCs may lie not in more speed (though there is some of that, if you read on), but in smaller builds. For this iteration, my go-to cases are the DAN A4-SFX ...

And the Streacom DA2 ...

The attraction here is maximum power in minimum size. Note that each of these cases is just large enough to fit ...

  • a standard mini-ITX system
  • SFX power supply
  • full sized GPU
  • reasonable CPU cooler

... though the DA2 offers substantially more room for cooling the CPU and adding fans.

http://i.imgur.com/odoYjle.jpg

I'm not sure you can physically build a smaller standard mini-ITX system than the DAN A4-SFX, at least not without custom parts!

DAN A4-SFX
200mm × 115mm × 317mm = 7.3 liters

Silverstone RVZ02 / ML08
380mm × 87mm × 370mm = 12.2 liters

nCase M1
240mm × 160mm × 328 mm = 12.6 liters

Streacom DA2
180mm × 286mm × 340mm = 17.5 liters

(For comparison with The Golden Age of x86 Gaming Consoles, a PS4 Pro occupies 5.3 liters and an Xbox One S 4.3 liters. About 50% more volume for considerably more than 2× the power isn't a bad deal!)
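The liter figures above follow directly from the millimeter dimensions; here's a quick script to verify them (case names and dimensions copied from the list above):

```python
# Convert case dimensions in mm to volume in liters (1 L = 1,000,000 mm^3).
def liters(width_mm: float, height_mm: float, depth_mm: float) -> float:
    return width_mm * height_mm * depth_mm / 1_000_000

cases = {
    "DAN A4-SFX": (200, 115, 317),
    "Silverstone RVZ02 / ML08": (380, 87, 370),
    "nCase M1": (240, 160, 328),
    "Streacom DA2": (180, 286, 340),
}

for name, dims in cases.items():
    print(f"{name}: {liters(*dims):.1f} L")
```

The DAN A4-SFX works out to about 7.3 liters, versus 5.3 liters for a PS4 Pro.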

I chose the Streacom DA2 as my personal build, because after experimenting heavily with the DAN A4 SFX, I realized you need more room to deal with extremely powerful CPUs and GPUs in this form factor, and I wanted a truly powerful system:

  • Intel i9-9900KS (8 core, 16 thread, 5.0 GHz) CPU
  • Samsung 970 PRO 1TB / Samsung 970 EVO 2TB / Samsung 860 QVO 4TB SATA
  • 64GB DDR4-3000
  • Cryorig H7 cooler (exact fit)
  • NVIDIA GeForce RTX 2080 Ti GPU

Compared to my old 2015-2017 system, a slightly overclocked i7-7700K, that at least gives me 2× the cores (and faster cores, both in clock rate and IPC), 2× the memory, and 2× the M.2 slots (two versus one).

The DA2 is a clever case, though not quite as refined as the A4-SFX. What's neat about it is the hybrid open-air design (on the top and bottom), plus the versatile horizontal and vertical bracket system in the interior. Per the manual (pdf):

Check out all the bracket mounting options. Incredibly versatile, and easy to manipulate with the captured nut and bolt design:

Note that you can (and really should) pop out the top and bottom acrylic pieces with the mesh dust net.

I had dramatically better temperatures after I did this, and it also made the build easier since the case can fully "breathe" through the top and bottom. You'll note that the front of the DA2 is totally solid, no air holes, so you do need that extra airflow.

I only have a few criticisms of this Streacom DA2 case:

  • The side panels are tool free, which is excellent, but the pressure fit makes them fairly difficult to remove. Feels like this could be tweaked?
  • (Don't even think about using a full sized ATX power supply. In theory it is supported, but the build becomes so much more difficult. Use a SFX power supply, which you'd expect to do for a mini-ITX build anyway.)
  • My primary complaint is that the power extension cable gets in the way. I had to remove it and re-attach it during my build. They should custom route the power cable upwards so it blocks less stuff.
  • Less of a criticism and more of an observation: if your build uses a powerful GPU and CPU, you'll need two case fans. There are mounting points for a 92mm fan in the rear, and the bracket system makes it easy to mount a 140mm fan blowing inward. You will definitely need both fans!

Here's the configuration I recommend, open on both the top and bottom for maximum airflow, with three fans total:

If you are a water cooling kind of person – I am definitely not, I experienced one too many traumatic cooling fluid leaks in the early 2000s – then you will use that 140mm space for the radiator.

I have definitely burn-in tested this machine, as I do all systems I build, and it passed with flying colors. But to be honest, if you expect to be under full CPU and GPU loads for extended periods of time you might need to switch to water cooling due to the space constraints. (Or pick slightly less powerful components.)

If you haven't built a PC system recently, it's easier than it has ever been. Heck, by the time you install the M.2 drives, memory, CPU, and cooler on the motherboard, you're almost done these days!

There are a lot of interesting compact mini-ITX builds out there. Perhaps that's the primary innovation in PC building for 2020 and beyond – packing all that power into less than 20 liters of space!

Read a Spanish translation of this article here.
