Defeating a 40-year-old copy protection dongle

image

That’s right — this little device is what stood between me and the ability to run an even older piece of software that I recently unearthed during an expedition of software archaeology.

For a bit more background, I was recently involved in helping a friend’s accounting firm move away from an extremely legacy software package that they had locked themselves into for the last four decades.

This software was built using a programming language called RPG (“Report Program Generator”), which is older than COBOL (!), and was used with IBM’s midrange computers such as the System/3, System/32, and all the way up to the AS/400. Apparently, RPG was subsequently ported to MS-DOS, so that the same software tools built with RPG could run on personal computers, which is how we ended up here.

This accounting firm was actually using a Windows 98 computer (yep, in 2026), and running the RPG software inside a DOS console window. And it turned out that running this software requires a special hardware copy-protection dongle attached to the computer’s parallel port! This was a relatively common practice in those days, particularly with “enterprise” software vendors who wanted to protect their very important™ software from unauthorized use.

image

Sadly, most of the text and markings on the dongle’s label have been worn or scratched off, but we can make out several clues:

  • The words “Stamford, CT”, and what’s very likely the logo of a company called “Software Security Inc”. About the only evidence I could find for the existence of this company is this record of them exhibiting their wares at SIGGRAPH conferences in the early 1990s, plus several patents issued to them relating to software protection.
  • A word that seems to say “RUNTIME”, which will become clear in a bit.

My first course of action was to take a disk image of the Windows 98 PC that was running this software, and get it running in an emulator, so that we could see what the software actually does, and perhaps export the data from this software into a more modern format, to be used with modern accounting tools. But of course all of this requires the hardware dongle; none of the accounting tools seem to work without it plugged in.

Before doing anything, I looked through the disk image for any additional interesting clues, and found plenty of fascinating (and archaeologically significant?) stuff:

image

  • We’ve got a compiler for the RPG II language (excellent!), made by a company called Software West Inc.
  • Even better, there are two versions of the RPG II compiler, released on various dates in the 1990s by Software West.
  • We’ve got the complete source code of the accounting software, written in RPG. It looks like the full accounting package consists of numerous RPG modules, with a gnarly combination of DOS batch files for orchestrating them, all set up as a “menu” system for the user to navigate using number combinations. Clearly the author of this accounting system was originally an IBM mainframe programmer, and insisted on bringing those skills over to DOS, with mixed results.

I began by playing around with the RPG compiler in isolation, and I learned very quickly that it’s the RPG compiler itself that requires the hardware dongle, and then the compiler automatically injects the same copy-protection logic into any executables it generates. This explains the text that seems to say “RUNTIME” on the dongle.

The compiler consists of a few executable files, notably RPGC.EXE, which is the compiler, and SEU.EXE, which is a source editor (“Source Entry Utility”). Here’s what we get when we launch SEU without the dongle, after a couple of seconds:

image

A bit rude, but this gives us an important clue: this program must be trying to communicate over the parallel port for a few seconds (which could give us an opportunity to pause it for debugging, and see what it’s doing during that time), and then it exits with a message (which we can now find in a disassembly of the program, and trace how it gets there).

A great tool for disassembling executables of this vintage is Reko. It understands 16-bit real mode executables, and even attempts to decompile them into readable C code that corresponds to the disassembly.

image

And so, looking at the decompiled/disassembled code in Reko, I expected to find in and out instructions, which would be the telltale sign of the program trying to communicate with the parallel port through the PC’s I/O ports. However… I didn’t see an in or out instruction anywhere! But then I noticed something: Reko disassembled the executable into two “segments”: 0800 and 0809, and I was only looking at segment 0809.

image

If we look at segment 0800, we see the smoking gun: in and out instructions, meaning that the copy-protection routine is definitely here. Best of all, the entire code segment is a mere 0x90 bytes, which suggests that the routine should be pretty easy to unravel and understand. For some reason, Reko was not able to decompile this code into a C representation, but it still produced a disassembly, which will work just fine for our purposes. Maybe this was a primitive form of obfuscation from those early days, which is now confusing Reko and preventing it from associating this chunk of code with the rest of the program… who knows.

Here is a GitHub Gist with the disassembly of this code, along with my annotations and notes. My x86 assembly knowledge is a little rusty, but here is the gist of what this code does:

  • It’s definitely a single self-contained routine, intended to be called using a “far” CALL instruction, since it returns with a RETF instruction.
  • It begins by detecting the address of the parallel port, by reading the BIOS data area. If the computer has more than one parallel port, the dongle must be connected to the first parallel port (LPT1).
  • It performs a loop where it writes values to the data register of the parallel port, then reads back the status register, accumulating the responses in the BH and BL registers (roughly sketched in C after this list).
  • At the end of the routine, the “result” of the whole procedure is stored in the BX register (BH and BL together), which will presumably be “verified” by the caller of the routine.
  • Very importantly, there doesn’t seem to be any “input” into this routine. It doesn’t pop anything from the stack, nor does it care about any register values passed into it. Which can only mean that the result of this routine is completely constant! No matter what complicated back-and-forth it does with the dongle, the result of this routine should always be the same.
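
In fact, the routine is small enough that its overall shape fits in a few lines of C. The following is only a rough model for illustration, not a faithful translation: the probe values, the loop count, and the bit-packing are all invented, and only the general structure (find LPT1 via the BIOS data area, poke the data register, read back the status register, accumulate a result) reflects the disassembly.

/* Rough model of the dongle-check routine (Borland/Turbo C, 16-bit DOS).
 * NOT a faithful translation: probe values, loop count, and bit-packing
 * are placeholders; only the overall structure matches the disassembly. */
#include <dos.h>

static const unsigned char probe[8] = { 0x01, 0x02, 0x04, 0x08,
                                        0x10, 0x20, 0x40, 0x80 };  /* made up */

unsigned int dongle_check(void)
{
    /* The BIOS data area keeps the LPT1 base address at 0040:0008 */
    unsigned int base = peek(0x0040, 0x0008);
    unsigned char bh = 0, bl = 0;
    int i;

    for (i = 0; i < 8; i++) {
        outportb(base, probe[i]);                 /* write to the data register (base+0)   */
        bl = (unsigned char)((bl << 1) |
             (inportb(base + 1) >> 7));           /* read status (base+1), accumulate a bit */
        bh ^= probe[i];                           /* stand-in for whatever mixing the real code does */
    }
    return ((unsigned int)bh << 8) | bl;          /* result handed back to the caller in BX */
}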

With the knowledge that this routine must exit with some magic value stored in BX, we can now patch the first few bytes of the routine to do just that! Not yet knowing which value to put in BX, let’s start with 1234:

BB 34 12       MOV BX, 1234h
CB             RETF

Only the first four bytes need patching — set BX to our desired value, and get out of there (RETF). Running the patched executable with these new bytes still fails (as expected) with the same message of “No dongle, no edit”, but it fails immediately, instead of after several seconds of talking to the parallel port. Progress!

Stepping through the disassembly more closely, we get another major clue: The only value that BH can be at the end of the routine is 76h (this is hard-coded into the routine). So, our total value for the magic number in BX must be of the form 76xx. In other words, only the BL value remains unknown:

BB __ 76       MOV BX, 76__h
CB             RETF

Since BL is an 8-bit register, it can only have 256 possible values. And what do we do when we have 256 combinations to try? Brute force it! I whipped up a script (sketched below) that plugs each value from 0 to 255 into that particular byte, programmatically launches the executable in DOSBox, and observes the output. Lo and behold, it worked! The brute forcing didn’t take long at all, because the correct number turned out to be… 6. Meaning that the total magic number in BX should be 7606h:

BB 06 76       MOV BX, 7606h
CB             RETF
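
For the curious, the brute-forcing harness was nothing fancy. In C, its shape would be something like the following, where the file offset of the patched byte, the DOSBox invocation, and the way success is detected are all placeholders specific to my setup:

/* Brute-force sketch: try every possible BL value in the patched
 * "MOV BX, 76__h" instruction. The file offset, the DOSBox command line,
 * and the success check are all placeholders. */
#include <stdio.h>
#include <stdlib.h>

#define PATCH_OFFSET 0x0000L   /* hypothetical file offset of the BL byte within SEU.EXE */

int main(void)
{
    int value;
    for (value = 0; value < 256; value++) {
        FILE *f = fopen("SEU.EXE", "r+b");
        if (!f) return 1;
        fseek(f, PATCH_OFFSET, SEEK_SET);
        fputc(value, f);                   /* plug the candidate value into the executable */
        fclose(f);

        printf("Trying BL = %02Xh\n", value);
        /* Run the patched executable in DOSBox ("-exit" quits DOSBox when the
         * program ends); inspecting the result (screen output, exit behavior,
         * a marker file) is left to the harness. */
        system("dosbox SEU.EXE -exit");
    }
    return 0;
}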

image

Bingo!

I then proceeded to examine the other executable files in the compiler suite, and the parallel port routine turned out to be exactly the same in each of them. All of the executables have the exact same copy protection logic, as if it was rubber-stamped onto them. In fact, when the compiler (RPGC.EXE) compiles some RPG source code, it seems to copy the parallel port routine from itself into the compiled program. That’s right: the patched version of the compiler will produce executables with the same patched copy protection routine! Very convenient.

I must say, this copy protection mechanism seems a bit… simplistic? A hardware dongle that just passes back a constant number? Defeatable with a four-byte patch? Is this really worthy of a patent? But who am I to pass judgment. It’s possible that I haven’t fully understood the logic, and the copy protection will somehow re-surface in another way. It’s also possible that the creators of the RPG compiler (Software West, Inc) didn’t take proper advantage of the hardware dongle, and used it in a way that is so easily bypassed.

In any case, Software West’s RPG II compiler is now free from the constraint of the parallel port dongle! And at some point soon, I’ll work on purging any PII from the compiler directories, and make this compiler available as an artifact of computing history. It doesn’t seem to be available anywhere else on the web. If anyone reading this was associated with Software West Inc, feel free to get in touch — I have many questions!

Brain dump, October 2025

What do I do with old Android phones that I’d like to keep around for testing purposes, while no longer worrying about the long-term fate of the lithium battery inside them? Here’s what:

I take the battery out, make sure it’s fully discharged, and surgically remove the controller logic board from the battery (don’t try this yourself unless you know exactly what you’re doing).

image

I then solder this PCB directly onto the battery terminals in the phone, and finally solder my actual power inputs onto the positive and negative pads on the underside of the PCB.

image

The above example is a Samsung Galaxy Note 4, which is still plenty powerful for all kinds of development and testing purposes, and is now usable without a battery! I opted to use a cheap standard barrel connector for my power cable, simply because that’s what I had on hand, and also to emphasize that it must be connected to a proper power supply that outputs ~4 volts and is capable of at least 2 amps.

Here is the Note 4, running off a 12V power supply, through a cheap adjustable buck converter that is tuned to output exactly 4.0V, which makes the phone believe the battery is 85% charged.

image


Speaking of “Android” devices, I had a bit of fun playing with a couple of these intriguing things:

image

This is an LTE/WiFi dongle, which lets you insert a SIM card with a data plan, for when you want to connect to the internet from your laptop on the go. But under the surface, there are several things about this dongle that are very interesting indeed:

First, this stick is basically a fully-fledged Android device on the inside. That’s right — it contains all the internals of a low-end Android phone, cleverly packaged into a USB stick form-factor.

image

Another interesting thing is the price: at the time of writing, I was able to obtain several of these devices for about $8 each from AliExpress, which is an incredible value for the functionality you can unlock:

The final interesting thing about these devices is that they’re very hackable. In fact, there’s an entire community (because of course there is!) that has figured out how to load a completely standard Debian distro onto this stick, with all the bells and whistles that provides, including USB gadget support. This lets you turn the stick into any type of USB device of your choosing, such as a mass storage device, an HID keyboard (à la Rubber Ducky, at 1/10th the price), or an RNDIS network interface (which can capture and record packets), or any number of other gadget types, all while still allowing you to “log in” to it via its WiFi hotspot or LTE modem.

I went a little overboard and soldered some wires onto the UART pins, then routed the wires around the metal shielding so that they’d stick out of the plastic case.

image

I then put some hot glue over the wires and smooshed the plastic case over it, so that the wires are nicely secured. And now I have a nicely debuggable platform for doing further tinkering with this amazing little stick!

image
image

Using Claude Code to modernize a 25-year-old kernel driver

As a bit of background, one of my hobbies is helping people recover data from old tape cartridges, such as QIC-80 tapes, which were a rather popular backup medium in the 1990s among individuals, small businesses, BBS operators, and the like. I have a soft spot for tape media; there’s something about the tactile sensation of holding these tapes in my hands that makes the whole process very joyful, even though QIC tapes are notorious for their many design flaws. With some careful inspection and reconditioning, the data on these tapes is still totally recoverable, even after all these years.

Whenever I receive a QIC-80 tape for recovery, I power up one of my older PC workstations which has the appropriate tape drive attached to it, and boot into a very old version of Linux (namely CentOS 3.5), because this is the only way to use the ftape driver, which is the kernel driver necessary for communicating with this tape drive, allowing the user to dump the binary contents of the tape.

You see, the drive that reads these tapes connects to the floppy controller on the motherboard. This clever hack was done as a cost-saving measure: instead of having to purchase a separate SCSI adapter (the standard interface for higher-tier tape media), you can just connect this tape drive to your floppy controller, which was already available on most PCs. It can even work alongside your existing floppy drive, on the same ribbon cable! The tradeoff, of course, is that the data rate is limited by the speed of the floppy controller, which was something like 500 Kbps (that’s kilobits, not bytes).

The other downside is that the protocol for communicating with these tape drives through the floppy controller was very messy, nonstandard, and not very well-supported. It was a “hack” in every sense: your motherboard’s BIOS had no knowledge of the tape drive being connected, and it was entirely up to the end-user software to know exactly how to manipulate the hardware I/O ports, timings, interrupts, etc. to trick the floppy controller into sending the appropriate commands to the tape drive.

image

There were a small number of proprietary tools for MS-DOS and Windows 3.x/9x for dealing with these drives, and only one open-source implementation for Linux, namely ftape. It’s certainly possible to use those original DOS/Windows tools to read the tapes, but only ftape allows us to read the “raw” binary contents of the tape, regardless of which proprietary software originally wrote it. This is why I prefer to dump the contents with ftape first, and worry afterwards about decoding the proprietary logical formatting and extracting the files from it.

The trouble is, the ftape driver hasn’t been supported since roughly the year 2000, and was soon removed from the Linux kernel for this reason. This is why I’ve needed to run a painfully old version of Linux anytime I have to work with one of these drives. It would be great if ftape worked on a modern distro, with all the benefits and affordances that would provide.

***

So a couple of weeks ago, it occurred to me to make a simple request to Claude Code:

> This repository is a Linux kernel driver that communicates with legacy tape drives connected to the floppy
  controller (FDC) on the motherboard. Unfortunately, this driver hasn't been maintained for a long time, and
  can only compile under kernel version 2.4. I'd like to modernize this driver, allowing it to be built with
  the latest versions of the kernel.

● I'll help you modernize this Linux kernel driver for legacy tape drives. This is a significant task that
  will require updating the code to work with modern kernel APIs and conventions.

And after several iterations of “combobulating” and whatever else Claude claims to do, I suddenly had a kernel driver that was compiling without errors. This is because Claude is able to take the compiler output and feed it back into itself, until the compilation works correctly. There was a laundry list of kernel functions and structures that were understandably deprecated or replaced, in the eternity that elapsed between kernel version 2.4 and version 6.8. Amazingly, Claude found all the outdated bits and replaced them with the correct modern equivalents, requiring just a few manual touch-ups of the code (a bit more on this later).
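
To give a flavor of the kind of change involved (a generic illustration, not a line from the actual ftape diff): one classic casualty of the 2.4-to-modern transition is the old ioctl member of file_operations, which was replaced by unlocked_ioctl with a different signature, while the MOD_INC_USE_COUNT / MOD_DEC_USE_COUNT macros gave way to the .owner field:

/* Hypothetical driver snippet showing one typical 2.4-to-modern change.
 *
 * Kernel 2.4 style:
 *     static int my_ioctl(struct inode *inode, struct file *filp,
 *                         unsigned int cmd, unsigned long arg);
 *     static struct file_operations my_fops = { ioctl: my_ioctl };
 */
#include <linux/fs.h>
#include <linux/module.h>

static long my_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
        /* dispatch on cmd exactly as before */
        return 0;
}

static const struct file_operations my_fops = {
        .owner          = THIS_MODULE,  /* replaces MOD_INC_USE_COUNT / MOD_DEC_USE_COUNT */
        .unlocked_ioctl = my_ioctl,     /* replaces the old .ioctl member */
};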

At this point, however, the kernel driver was still expected to be compiled as part of a full kernel tree, whereas I just wanted it to become a standalone loadable kernel module. OK, no problem:

> Is there a way to compile just this module in-place, instead of copying it into a kernel source tree?

● Yes! You can compile kernel modules out-of-tree without copying them into the kernel source. Let me create
  a proper standalone build system for the ftape driver.

…And it did. Without any further prompting. At the end of this stage, I had a loadable kernel module (.ko) that I could start trying to use with actual live hardware. Unsurprisingly, the module did not yet “work” — it was loading, but something was preventing it from communicating with the tape drive correctly. But I pressed on.

From this point forward, since loading/unloading kernel modules requires sudo, I could no longer let Claude “iterate” on such sensitive operations by itself. Luckily, the ftape driver was already generously sprinkled with logging messages (printk() calls that are printed to dmesg), so I loaded the module myself, and iteratively pasted the output of dmesg into Claude manually, and let Claude compare it to a “known-good” dmesg log that I’d previously saved from a successful tape read. After a few iterations of this, Claude identified the remaining issues preventing the module from communicating with the hardware.

● Perfect! The module has parameters that can be set. The issue is that without configuration, all the base
  addresses default to -1, which gets converted to 0xffff in the config function, causing the detection to fail.
● The "No such device or address" (ENXIO) error occurs because the ftape-internal module is not configured
  with the correct I/O port base address for your floppy controller.
  The issue is in fdc-internal.c:735-737: if fdc->sra == 0xffff, it returns -ENXIO. Since no configuration was
  provided, all values default to -1, which becomes 0xffff....
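
For what it’s worth, that -1 turning into 0xffff is just ordinary integer truncation: the unconfigured default is a signed int, and squeezing it into a 16-bit port address yields 0xffff, which the driver then treats as “no controller here”. A tiny illustration:

#include <stdio.h>

int main(void)
{
    int base = -1;                              /* "not configured" default     */
    unsigned short sra = (unsigned short)base;  /* truncated to 16 bits: 0xffff */
    printf("0x%04x\n", sra);
    return 0;
}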

And at last, I had a kernel module that was able to a) load, b) detect the tape drive, and c) dump the contents of a test tape!

image

***

Getting the ftape driver to work on a modern kernel, a feat that I thought was hopelessly out of reach, was suddenly accomplished over the course of two evenings.

As a giant caveat, I should note that I have a small bit of prior experience working with kernel modules, and a good amount of experience with C in general, so I don’t want to overstate Claude’s success in this scenario. As in, it wasn’t literally three prompts to get Claude to poop out a working kernel module, but rather several back-and-forth conversations and, yes, several manual fixups of the code. It would absolutely not be possible to perform this modernization without a baseline knowledge of the internals of a kernel module.

This led me to crystallize some thoughts on working with such coding agents in our current moment:

Open yourself up to a genuine collaboration with these tools.

Interacting with Claude Code felt like an actual collaboration with a fellow engineer. People like to compare it to working with a “junior” engineer, and I think that’s broadly accurate: it will do whatever you tell it to do, it’s eager to please, it’s overconfident, it’s quick to apologize and praise you for being “absolutely right” when you point out a mistake it made, and so on. Because of this, you (the human) are still the one who must provide the guardrails, make product decisions, enforce architectural guidelines, and spot potential problems as early as possible.

Be as specific as possible, making sure to use the domain-specific keywords for the task.

I’m not claiming to suddenly be an expert in prompt engineering, but the prompts that I’ve found to be most successful are ones that clearly lay out the verbal scaffolding for a feature, and then describe the gaps in the scaffolding that the LLM should fill in. (For some reason the image that comes to mind is one of those biological stem-cell scaffolds where an artificial human ear will grow.)

Develop an intuition for the kinds of tasks that are “well-suited” for an agent to complete.

These agents are not magical, and can’t do literally everything you ask. If you ask it to do something for which it’s not well-suited, you will become frustrated and prematurely reject these tools before you allow them to shine. On this point, it’s useful to learn how LLMs actually work, so that you develop a sense of their strengths and weaknesses.

Use these tools as a massive force multiplier of your own skills.

I’m sure that if I really wanted to, I could have done this modernization effort on my own. But that would have required me to learn kernel development as it was done 25 years ago. This would have probably taken me several weeks of nonstop poring over documentation that would be completely useless knowledge today. Instead of all that, I spent a couple of days chatting with an agent and having it explain to me all the things it did.

Naturally, I verified and tested the changes it made, and in the process I did end up learning a huge amount of things that will be actually useful to me in the future, such as modern kernel conventions, some interesting details of x86 architecture, as well as several command line incantations that I’ll be keeping in my arsenal.

Use these tools for rapid onboarding onto new frameworks.

I am not a kernel developer by any stretch, but this particular experience ignited a spark that might lead to more kernel-level work, and it turns out that kernel development isn’t nearly as difficult as it might sound. In another unrelated “vibe-coding” session, I built a Flutter app without having used Flutter before. If you’re like me, and your learning style is to learn by doing, these tools can radically accelerate your pace of learning new frameworks, freeing you up to do more high-level architectural thinking.

***

In any case, circling all the way back, I am now happy to say that ftape lives on! Twenty-five years after its last official release, it is once again buildable and usable on modern Linux. I’m still in the process of making some further tweaks and new feature additions, but I have already verified that it works with the floppy-based tape drives in my collection, as well as parallel-port-based drives which it also supports.

image

The physical setup looks very similar, but the OS is now Xubuntu 24.04, instead of CentOS 3.5! 🎉
Until next time!

DiskDigger + Avalonia UI: a success story

Up until this point, DiskDigger has been built for the .NET Framework, with a user interface that uses Windows Forms. I’ve written before about my feelings on WinForms — that it’s a perfectly good, tried-and-true technology which, as ancient as it is, seems to have outlasted numerous other UI toolkits, for the simple reason that it just works.

image

However, it has bothered me for a long time that DiskDigger is not as cross-platform as I would like it to be. Sure, there was the excellent Mono project, with its independent implementation of Windows Forms that bridged the gap somewhat, and allowed DiskDigger to run on Linux, and perhaps even on macOS (albeit only on older 32-bit versions).

image

But recently, I decided to roll up my sleeves and take a serious look for a solution that would make DiskDigger truly cross-platform. I had the following rough requirements in mind when looking for a potential framework:

  • The framework should be built on .NET, since I’d like to reuse all of the business logic of DiskDigger, which is written in C#.
  • The toolkit of UI components should allow me to match the existing UI of DiskDigger without too much hassle.
  • The final output should ideally be a single, self-contained executable with no dependencies. There should not be any need for the user to “install” or “uninstall” anything. Installing should simply involve downloading and running the executable, and uninstalling should consist of deleting the executable when no longer needed.

The fact is, I recall going through a similar process of investigating cross-platform solutions years ago, but none of the frameworks I could find at the time seemed mature enough for me to commit to, so I kept putting it off. An unpardonably long time later, here we are; better late than never.

Avalonia immediately jumped out as a strong contender. It is truly cross-platform, in the sense that you “ship the platform” along with your executable. This necessarily means that it will increase the size of the final executable, but I can deal with a moderate amount of bloat, as long as the end result is not like the monstrosities built with something like Electron, which need to ship the entirety of Chromium as their runtime, and make the final product into a 200MB behemoth. On this dimension, Avalonia performs relatively well: the final self-contained executable is about 60MB, and actually compresses nicely to a 30MB zip file for distribution.

.NET itself has also made its own strides in being able to bundle your app into a single executable, with a single command:

dotnet publish -c Release -r linux-x64 --sc -p:PublishSingleFile=true

…where the relevant parameters are --sc for “self-contained”, and the self-explanatory PublishSingleFile=true.

In terms of building your user interface, Avalonia seems to be a spiritual successor to WPF, and I’m embarrassed to say that I’ve never actually used WPF, either professionally or personally; it was an entire era of Windows UI development that I skipped over. Because of this, I was a bit worried about the learning curve I’d have to endure to jump from Windows Forms directly to Avalonia. But my fears were unfounded: it didn’t take long at all for everything to “click”, because Avalonia encourages and expects you to use good architectural patterns like view models and data bindings, which I basically already had in place.

But here was the most pleasant surprise of all:
In my day-to-day work, I switch between my main workstation that runs Windows, and my MacBook Pro, and I’m able to work on the same projects for Android and the web on both machines. So I wondered how easy it would be to keep developing DiskDigger with Avalonia on my MacBook, instead of always having to develop it on my Windows PC.

I downloaded Rider (the JetBrains IDE for working with .NET) on my MacBook, installed the Avalonia plugin, and opened my Visual Studio solution. And to my amazement, it just worked! The UI designer, the code completion, everything worked flawlessly, dare I say even better than Visual Studio itself. I was able to keep developing the Visual Studio solution, unmodified, on my Mac. At this point I was convinced that this was the right direction to go in.

image

After plenty of help from the Avalonia documentation, and a little further help from Claude, I proceeded to rebuild one screen after another, until finally I had an MVP of the whole thing, with the entire process taking around four weeks. And at long last, may I present an experimental (but fully functional and complete!) version of DiskDigger, built with Avalonia UI, which can run not only on Windows, but also on Linux and macOS!

image

Once again, this is just a Beta version so far, but it is perfectly usable on any platform, and I encourage you to try it and let me know your feedback. For now, I will continue development of this new Avalonia-based edition of DiskDigger in parallel with the existing WinForms-based version, which will still be the “recommended” version for Windows. But in the longer term, I can absolutely envision focusing solely on the Avalonia version, and letting the WinForms version ride into the sunset.

Brain dump, June 2025

In the course of my hobby of restoring old computers, I sometimes come across a laptop that has a broken floppy drive. This is almost always the case with much older floppy drives that use a belt mechanism for spinning the disk, instead of a direct-drive spindle motor. These floppy drives are just the worst: the belt is nearly always broken, or too weak to work properly, rendering the drive useless. Replacement belts are impossible to obtain in the exact thickness and diameter, and wouldn’t really be a permanent fix anyway. So, rather than attempting to repair the drive with a new belt, I’d like to connect a more “proper” floppy drive, say, a known-good drive from another laptop, or even a drive from a desktop PC.

The problem is that older laptop floppy drives connect to a ribbon cable that is 1.25mm pitch, whereas newer laptop floppy drives use a 1.0mm pitch connector. If only there was an adapter that joins these two configurations… Fortunately there are open-source PCB designs for a couple of options: the first is a ribbon cable that is 1.25mm pitch on one end, and 1.0mm pitch on the other, and the second is a PCB that has both 1.25mm and 1.0mm connectors that are joined together. Either of these variations will work for my purposes, and I obtained both of them!

image

image

Both of the above options must be used in conjunction with a reverse-direction ribbon cable that ultimately connects to the floppy drive.

This was actually my first time ordering custom-printed PCBs, and the experience couldn’t have gone smoother. It was also my first time using solder paste to solder a surface-mounted connector. That experience could have definitely gone smoother — my technique with solder paste needs a lot of refinement, but one step at a time. At least the adapter works!

Here is an example Frankenstein experiment I tried: an AST Advantage NB-SX25 laptop, which I restored recently, communicating with a desktop PC floppy drive, with the help of the aforementioned 1.25mm-to-1.0mm ribbon cable (plus an adapter that goes from a 1mm ribbon cable to a 34-pin FDC connector, which is widely available on the web):

image


On a semi-related topic, what if you have an old motherboard that has an AT-style power connector (P8 and P9), but you only have an ATX power supply? Not to worry — there are readily available adapters that convert from ATX to AT connectors. However, here’s something to keep in mind: certain ATX power supplies require a load on the 5V rail in order to properly regulate the 12V rail. The symptom is: you turn on the power supply, the motherboard starts to boot, but then it suddenly shuts off after a few seconds. If this is the cause, the fix is to connect not only the P8/P9 connectors to the motherboard, but also a load to the 5V rail; a couple of fans should do the trick.

image