ECC RAM should be a human right

I am now a staunch advocate for ECC RAM, after the events of last week. You see, over the last several weeks my main desktop workstation has been misbehaving, with occasional freezes and crashes. After some diagnostics I began to suspect a faulty RAM module, and sure enough, a quick run of memtest86 lit up the screen with a multitude of bit flip errors at numerous memory locations, indicating that something was seriously wrong with the RAM.

Within a day or two I scrambled to replace the RAM modules with new ones, and once this was done the problems resolved themselves and everything was stable again. However, there was another, more sinister side effect that I discovered shortly afterwards: some of my data was corrupted! That’s right, it was the worst-case scenario for RAM failure: bit flip errors that get written back to the disk. I discovered that several video files I had been editing had corrupted bits and were no longer usable. Fortunately I still have the original source materials, which I can use to recreate the final videos. It’s an unfortunate waste of time, but it could have been a lot worse if I’d let the RAM failure go on even longer. There doesn’t seem to be any corruption in the rest of my personal data, and just to be on the safe side I performed a clean install of Windows, to ensure that no system files or program files are corrupted.

The point of the story is that the data corruption was completely preventable, if only my RAM had ECC built into it. But because it doesn’t, these kinds of bit flip events go completely undetected, and proceed to wreak havoc on the integrity of our data, right under our noses.

Memory manufacturers assure us that desktop RAM is so reliable that it doesn’t need ECC, and that the probability of bit flip events is so low that it isn’t worth the extra “cost” of ECC. Chip manufacturers (e.g. Intel) ship consumer CPUs that don’t even support ECC memory. Users are expected to upgrade to server-grade components just to get access to ECC memory.

Let’s quickly review the reasons why server-class machines are deemed to be “deserving” of ECC memory, while desktop machines are not:

On a personal desktop computer, your data is stored permanently on a disk, whether it’s a spinning hard drive, an SSD, a memory card, and so on. When you want to do something with your data (e.g. write a document, edit a photo, etc.), the data is loaded into RAM, and when you’re finished modifying it, it’s written back to the disk.

On a server machine, however, the situation is different: since disk access is much slower than RAM access, the server must keep as much data as possible in RAM, so that the data is instantly available to clients who request it. This means that the data ends up sitting in RAM for extended periods of time. If the RAM were to experience bit-flip errors that went undetected, the server would serve incorrect data, or worse, would end up writing incorrect data back to the disk. Therefore, the server’s RAM has ECC, so that it will correct itself in case of an occasional bit flip.

This is oversimplifying a bit, but the difference between a server and a desktop, for this exercise, is simply the amount of time that data is made to sit in RAM. So then, are we supposed to accept that if our data doesn’t remain in RAM for very long, it doesn’t need ECC at all?!

By the way, you’d better believe that your disk(s) have all kinds of error correction schemes built into them, which work automatically and transparently. It’s completely normal for data written to a physical medium to be imperfect, and those imperfections will be corrected by the firmware of the disk.

Well guess what? RAM is a physical medium, and yet we’re simply asked to take the manufacturers’ word that our RAM is reliable enough to never need ECC for the use cases of a desktop workstation. Well, I’m here to say that these practices are reckless, and represent a ticking time bomb for anyone who uses non-ECC memory for anything nontrivial. And it seems I’m not the only one.
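
For the record, there’s nothing exotic about how ECC does its job. Below is a toy Python sketch of a Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can correct any single flipped bit. This is only an illustration; real ECC modules typically use a beefier SECDED code over 64-bit words, but the principle of recomputing parity, locating the flipped bit, and flipping it back is the same.

    # Toy Hamming(7,4) code, for illustration only. Real ECC DIMMs use a
    # SECDED code over 64-bit words, but the principle is identical.

    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]     # 7-bit codeword, positions 1..7

    def correct(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]          # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]          # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]          # parity over positions 4,5,6,7
        pos = s1 + 2 * s2 + 4 * s3              # 0 = clean, otherwise 1-based error position
        if pos:
            c[pos - 1] ^= 1                     # flip the offending bit back
        return (c[2], c[4], c[5], c[6])         # the recovered data bits

    word = encode(1, 0, 1, 1)
    word[5] ^= 1                                # simulate a stray bit flip
    assert correct(word) == (1, 0, 1, 1)        # the data survives anyway

Without the parity bits, that same flip would simply change the data and nobody would ever know, which is exactly what happened to my video files.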

(discussion on Hacker News)

Brain dump, October 2022

I went down a deep rabbit hole on the history of Amiga computers, and specifically the types of image encodings used in old Amiga software. As an enthusiast of digital preservation, I want to make sure that any images saved using that software can still be decoded and converted using modern-day tools.

What I found is a rich history of software developers squeezing every last bit of performance out of underpowered hardware and limited storage, using techniques that blur the line between laughably hacky and downright ingenious.

A good example is the so-called Extended HAM (HAM-E) graphics mode, which is basically designed to enable a higher color depth, i.e. more color bit planes, at the expense of horizontal resolution. However, the native format of Amiga bitmap images didn’t support this kind of color encoding in the metadata of the bitmap file, so how do we encode our color palette? The solution: put the palette information literally into the actual image!

That’s right, the color palette actually appears as several lines of seemingly “garbage” pixels at the top of the image, and a proper decoder needs to be aware of this hack. Below is an example image that is decoded incorrectly, i.e. using a decoder that reads the image literally, without taking the extra color information into account:

Incorrect

Notice in the above image that we can vaguely discern a stretched shape of a person’s face, and we can also discern a few lines of pixels at the top that don’t seem to go along with the rest of the image. Now here’s the same image decoded correctly, with the color palette lines consumed and blanked out:

Correct

But doesn’t that sound wrong? Even if a proper decoder recognizes and blanks out the garbage lines, won’t the bitmap always have unused lines at the top? Not to worry — those lines will fall into the overscan region of the monitor! (You know, because a CRT monitor has overscan margins.) Therefore the palette lines will be automatically hidden from view, unless the user explicitly adjusts the overscan on their monitor. Ah, the good old days.
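
To make the hack concrete, here’s a rough Python sketch of what such a decoder has to do. The layout below is invented for illustration (palette entries packed as R,G,B byte triples across the top rows, followed by plain indexed pixels); the real HAM-E register encoding differs in its details, but the spirit is the same: the pixel data carries its own palette, and the decoder must consume and blank those rows.

    # Illustration only: assumes the palette is stored as consecutive R,G,B
    # byte triples in the top rows of the bitmap, and that the remaining
    # pixels are plain palette indices. The actual HAM-E layout differs.

    def decode(rows, width, palette_size=256):
        needed = 3 * palette_size                   # bytes of palette data to read
        palette_rows = -(-needed // width)          # how many top rows it occupies
        flat = [b for row in rows[:palette_rows] for b in row][:needed]
        palette = [tuple(flat[i:i + 3]) for i in range(0, needed, 3)]

        output = [[(0, 0, 0)] * width for _ in range(palette_rows)]  # blank the palette rows
        for row in rows[palette_rows:]:
            output.append([palette[p] for p in row])                 # map indices through the palette
        return output                                                # rows of (R, G, B) tuples

A decoder that skips the first step renders those rows as the band of “garbage” pixels visible in the incorrectly decoded image above.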

There are a few modern software packages that handle these types of images and a plethora of others, including abydos, recoil, and grafx2, so the preservation of old Amiga image formats is in good shape. And in my own ImageFormats library for .NET, I implemented support for some even more obscure Amiga formats, notably ACBM (Amiga Contiguous BitMap), which seems to be unsupported by other decoders.

Brain dump, September 2022

As the happy owners of several chickens, we’ve been finding that non-chicken-owners often have a comically flawed understanding of chicken anatomy and chicken reproduction. Specifically, people seem to assume that if a hen lays an egg, it implies that the egg contains an embryo, and definitely means that a chick will eventually hatch from the egg, unless the egg is taken from the hen and eaten before the embryo grows to a visible size. This assumption is maintained even when people are told that there’s no rooster in our flock, to which the response is “What does a rooster have to do with it?”

I remind my non-chicken-owning friends that a female human routinely produces eggs that fall out of her uterus. These eggs will never develop into a human embryo, unless they are fertilized by the human equivalent of a rooster.


I worked part-time while attending college (a decent mid-tier state school) in the early 2000s, and paid for the entirety of my tuition with my salary. I never took out any loans to pay for my education, and finished college without any debt. This is a choice that I made, and it’s not a choice for everyone. Even so, I am overjoyed that many people will now have an easier time paying for college than I did, and I hold no resentment towards those people for whom the $10,000 loan forgiveness package will make a life-altering difference.

Thinking about it in the most abstract terms, the whole point of civilization is for the current generation to improve the quality of life for the next generation. Parents make sacrifices all the time to improve their children’s future, and good parents don’t resent their children just because their children’s lives are easier than theirs were.

The only minor annoyance I feel is that there’s still no attempt to fix the root cause of the issue, which is twofold: 1) Colleges are experiencing unprecedented levels of administrative bloat, which is driving up the cost of education to the point where college is a luxury product that a seventeen-year-old is expected to purchase without thinking twice, and 2) We’ve created a culture which insists that going to college is the only way to succeed in life, when in fact there are a hundred different trades that are desperately looking for more people, and can provide a very comfortable living and plenty of life satisfaction.

Brain dump, July 2022

(Reminder: I perform a service of recovering data from old media such as magnetic tapes, floppies, weird flash memory formats, etc. Get in touch with anything you’d like to recover.)

Performed a very interesting data recovery case:
I received a number of QIC-150 tapes, and one QIC-525 tape, from a client in Tasmania (!) who had made backups onto these tapes from an IBM AS/400 system in 1991. This presented a few challenges:

  • The tapes were written with variable-length blocks, so the tape driver had to be configured accordingly by running mt -f /dev/nst0 setblk 0.
  • Then, when using dd to dump the tape contents, a sufficiently large block size had to be specified to accommodate the largest of the variable block sizes, which turned out to be 32760 bytes, so the command was dd … bs=32k.
  • After the data was dumped successfully, I determined that the data was saved using the SAVLIB command on the AS/400, which serializes the data onto the tape in a proprietary format. Unfortunately I couldn’t find a lot of public documentation on this format, so I had to do a bit of reverse-engineering. My client specifically wanted to locate some COBOL source files in these archives, which turned out to be easy to pinpoint and extract. Most of the COBOL code was uncompressed, but some of the files were compressed using SCB (string control byte) encoding, which is basically a form of simple run-length compression.
  • Finally, since the data came from an AS/400 system, the text contents of the files were EBCDIC-encoded, which required one last conversion from EBCDIC to ASCII. (A rough sketch of the whole workflow follows below.)
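
For anyone attempting something similar, the dump step boiled down to roughly the following shell commands. The device and file names are placeholders for this example, and I’m assuming the text uses the US EBCDIC code page (CP037); adjust to your own setup:

    # tell the tape driver to use variable-length blocks
    mt -f /dev/nst0 setblk 0

    # dump the whole tape; bs must be at least as large as the biggest
    # block on the tape (32760 bytes here, so 32k = 32768 is enough)
    dd if=/dev/nst0 of=tape001.img bs=32k

    # convert extracted text from EBCDIC (code page 037) to ASCII
    iconv -f IBM037 -t ASCII//TRANSLIT extracted_file.ebcdic > extracted_file.txt

As for the SCB-compressed files, I won’t reproduce the exact control-byte layout here, but the decoding loop is ordinary run-length decompression, something along these lines (the byte layout below is generic and invented for the example, not the real SCB one):

    # Generic run-length decoder, to illustrate the shape of SCB decompression.
    # The real SCB format distinguishes literal runs, repeated characters and
    # repeated blanks, but the loop structure is the same.

    def rle_decode(data):
        out = bytearray()
        i = 0
        while i < len(data):
            ctrl = data[i]
            if ctrl < 0x80:                         # literal run: copy the next ctrl bytes
                out += data[i + 1 : i + 1 + ctrl]
                i += 1 + ctrl
            else:                                   # repeat run: next byte repeated (ctrl & 0x7F) times
                out += bytes([data[i + 1]]) * (ctrl & 0x7F)
                i += 2
        return bytes(out)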

One of my first professional programming contracts was done in 1997, while I was still in high school. I built an MS-DOS application that communicated over a serial port with an industrial iron powder machine (i.e. a large apparatus that melts scrap iron and turns it into powder, to be recycled for other uses). This program would display real-time telemetry from the machine, and then output it to a daily log file. It was a very simple and rudimentary program, but it worked well enough that the company continued to use it for years. It was also enough to land me a full-time job at the same company, but that’s a subject for a longer post.

Although I still have the original executable file that I provided, I’ve unfortunately lost the source code for this program. The only things I remember about it are that it was written in C and compiled with Borland C++ v3. Therefore I’ve had a mini-quest in the back of my mind to either find the lost source code, or decompile the executable file and reconstruct the source code.

Recently I came the closest I’ve ever gotten to reconstructing the code, all thanks to the excellent Reko decompiler. I opened the executable in Reko; it detected the Borland C++ v3 runtime effortlessly; it showed a list of function calls (which I recognized! finally!); and I was soon up and running, navigating the disassembled functions:

The Reko decompiler working with my MS-DOS app.

The hurdle now will be to rewrite the source code based on the disassembly, so that it compiles as closely as possible to the original executable. There’s still a good amount of work left to do, but it’s now much more manageable because of Reko.

In case anyone’s interested, here is the program actually running in DOSBox, albeit stuck in an “error” state because it’s not receiving any data from the aforementioned iron powder machine:

My DOS app from 1997 running in DOSBox.

Brain dump, June 2022

As much as I’m relishing the decline and fall of crypto, I’m not as keen on the decline and fall of the U.S. economy. There seems to be a particularly head-in-the-sand disconnect between the state of the economy and how it’s being covered in the news. My savings are down about 25% this year, as are those of most of my friends and family. Prices are up by at least 20% overall, for pretty much every class of product. I have no idea how the reported “8% inflation” numbers are arrived at.

And yet, when I open the news, the headlines read, “Could a recession be on the way?!” “Stocks plunge, stoking worries of recession!”, etc. I understand that the technical definition of a recession is two consecutive quarters of negative growth, but what is it called when there’s a recession’s worth of decline in a single quarter?


Digitized a few old VHS tapes for a friend. This was done using my trusty Sabrent USB-AVCPT adapter, which accepts either S-Video or RCA input from a VCR borrowed from another friend. The adapter is quite old, and therefore no longer supported on Windows 11, but thankfully it still works perfectly in Linux without any custom software. To perform the conversion:

  • Launch VLC, select “Media -> Convert/Save…”, then go to “Capture Device”.
  • Select “TV – analog”, then for the Video input select /dev/video2 (or whatever the last video device is), and for the Audio input paste the name of the ALSA/PulseAudio device that corresponds to the adapter.
  • The audio device can be found by going to the Playlist in VLC and looking at the list of “Devices” on the left. From there you should be able to find the audio device and look at its properties, which will give you the device name.
  • Back in the Convert tab, once both Video and Audio devices are typed in, click Convert; this will take you to the final step before conversion. Make sure to enable “Deinterlace”, since the output from VCRs is usually interlaced.
  • Select a suitable “Profile”, such as “H.264 + MP3”.
  • Select a “Destination file” to which the video will be written.
  • And finally click the Start button (and simultaneously the Play button on the VCR)!
  • When the tape is finished playing, click the Stop button in VLC, and the video will be complete.
  • Be kind and rewind the VHS tape before ejecting it. Immediately upon ejecting the tape, throw it in the trash.

Notes:
When selecting the “H.264 + MP3” profile, I tweaked the video bitrate to 2 Mbps, since the default of 800 kbps seemed rather low for 720×480 resolution at 30 fps (NTSC). I would have liked to see a playback of the video while it was converting, so I enabled “Display the output”, but this didn’t seem to work and just got stuck on the first frame. That’s fine, since I just let each tape play until the end and then checked that the conversion was successful. Each tape was converted to around 2 GB of video.
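
For the command-line inclined, roughly the same capture can be done with ffmpeg instead of the VLC GUI. The device names below are placeholders (match them to your own /dev/video* node and ALSA card), and the bitrates mirror the settings I used in VLC:

    ffmpeg -f v4l2 -standard NTSC -i /dev/video2 \
           -f alsa -i hw:CARD=usbtv \
           -vf yadif \
           -c:v libx264 -b:v 2M \
           -c:a libmp3lame -b:a 192k \
           capture.mp4

The yadif filter handles the same deinterlacing that the “Deinterlace” checkbox takes care of in VLC.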