A deep dive into optical disk file systems

It’s not often that I come across a data recovery story in my own personal life, but recently I came across just such a story, and a rather unusual one at that.

You see, my mother-in-law has several video recordings of my wife from her middle school and high school years, which I naturally couldn’t wait to watch, much to my wife’s embarrassment. These recordings are saved on a number of DVD-R discs. I’m guessing that my mother-in-law recorded the videos on a camcorder (onto compact tapes), then hooked up the camcorder to a DVD recorder and burned DVDs from the contents of the tapes. (In the early-ish days of DVDs, there were standalone DVD recording devices into which you could plug a video source, and they would continuously write the video to the DVD.)

But, to my disappointment, when I inserted these discs into the DVD drive in my computer, they appeared completely blank. One after another, the same thing: the disc contains no files, and the system reports it as having a capacity of 0 MB (with no errors or warnings), even though it was visually apparent from the burn marks that the discs had data on them. I tried reading them on a different computer, with the same result.

Since the problem seemed to be affecting all of the discs, we can conjecture that the DVD recorder was at fault: it might have somehow recorded the data incorrectly, failed to close the recording session, etc. But is there a way to access the data that was written to the discs?

The standard way to get the total size of a disk (using the Windows API) is to call the DeviceIoControl function and retrieve a DISK_GEOMETRY_EX structure that contains the dimensions of the disk. Calling this function on these discs returned a size of just 2048 bytes, i.e. a single sector, since optical discs usually have a sector size of 2048 bytes.
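For illustration, here is a rough Python/ctypes sketch of the structure involved. The field layout follows the Windows SDK declaration; the DeviceIoControl call that fills it in is shown only as a comment, since it is Windows-only:

```python
import ctypes

class DISK_GEOMETRY(ctypes.Structure):
    # Field names and types follow the Windows SDK declaration.
    _fields_ = [
        ("Cylinders", ctypes.c_longlong),
        ("MediaType", ctypes.c_uint32),
        ("TracksPerCylinder", ctypes.c_uint32),
        ("SectorsPerTrack", ctypes.c_uint32),
        ("BytesPerSector", ctypes.c_uint32),
    ]

class DISK_GEOMETRY_EX(ctypes.Structure):
    _fields_ = [
        ("Geometry", DISK_GEOMETRY),
        ("DiskSize", ctypes.c_longlong),  # total size of the disk, in bytes
        ("Data", ctypes.c_byte * 1),      # variable-length partition info follows
    ]

# On Windows, the call itself would look roughly like this:
#   geo = DISK_GEOMETRY_EX()
#   returned = ctypes.c_uint32(0)
#   ctypes.windll.kernel32.DeviceIoControl(
#       handle, 0x000700A0,  # IOCTL_DISK_GET_DRIVE_GEOMETRY_EX
#       None, 0, ctypes.byref(geo), ctypes.sizeof(geo),
#       ctypes.byref(returned), None)
```

For these discs, the DiskSize field came back as 2048, i.e. one sector.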

But just because the OS tells us that the disk is a certain size doesn’t mean we can’t attempt to explicitly read beyond that limit. We can use the ReadFile function to brute-force the system into reading the disk at any location we specify. It may simply be that the driver is reporting an incorrect total size, while other areas of the disk are still accessible.

So, I attempted to read the disc beyond the first sector. Reading the second, third, fourth, etc. sectors returned errors, as might be expected, but I kept going, and at around the 16000th sector it started returning data! In fact, the readable contents began exactly at sector 16384. From that point onward, the data could be read successfully all the way to the end of the disc.
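Here’s a sketch of that brute-force probe in Python; `read_sector` is a hypothetical stand-in for a ReadFile call at byte offset `sector * 2048`:

```python
SECTOR_SIZE = 2048

def find_first_readable_sector(read_sector, start, limit):
    """Probe sectors in order until one reads successfully.
    `read_sector(n)` should return the sector's bytes, or raise
    OSError where the drive reports a read error."""
    for sector in range(start, limit):
        try:
            read_sector(sector)
            return sector
        except OSError:
            continue
    return None
```

On the discs in question, this kind of probe starts succeeding at sector 16384.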

Now, in order to actually recover the files present on the disc, we could potentially use DiskDigger to scan and carve any recoverable files from the raw data. But I wanted to go a step further: up until this point, DiskDigger did not support any optical disc file systems, and since I hadn’t dealt with many CD/DVD recovery cases, I admittedly wasn’t totally familiar with the file systems used on optical discs. This presented a perfect opportunity to learn.

The most basic and original file system used in these discs is ISO 9660, otherwise known as ECMA-119. This is a very simple file system without any special affordances like journaling, access control, etc., which is perfectly adequate for read-only media where the data is written once, and will not need to be modified again.
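To give a flavor of how simple it is: the first 16 sectors of the disc are a reserved “system area,” after which comes a sequence of 2048-byte volume descriptors, each carrying the signature “CD001”. Here’s a rough sketch of locating the Primary Volume Descriptor in a disc image (function name is mine):

```python
ISO_SECTOR = 2048
SYSTEM_AREA_SECTORS = 16  # the first 32 KB is reserved for the system area

def parse_primary_volume_descriptor(image: bytes):
    """Walk the volume descriptor sequence starting at sector 16 and
    return the volume identifier from the Primary Volume Descriptor."""
    sector = SYSTEM_AREA_SECTORS
    while True:
        d = image[sector * ISO_SECTOR : (sector + 1) * ISO_SECTOR]
        if len(d) < ISO_SECTOR or d[1:6] != b"CD001":
            return None
        if d[0] == 255:          # set terminator: end of the sequence
            return None
        if d[0] == 1:            # type 1 = Primary Volume Descriptor
            return d[40:72].decode("ascii").rstrip()  # volume identifier
        sector += 1
```

The descriptor also points at the root directory record, from which the whole directory tree can be walked.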

Later, Microsoft developed the Joliet extensions to the ISO 9660 file system, which basically added support for Unicode file names, while remaining backwards-compatible by introducing a supplementary volume descriptor. This way, systems that support only the original ISO 9660 continue to use the original volume descriptor, while systems that support Joliet know to look for the new volume descriptor. So, basically, Joliet-formatted discs have two directory trees (one Unicode, the other non-Unicode), with the same file entries in each tree pointing to the same content on the disc.
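Under the hood, a Joliet descriptor is just a Supplementary Volume Descriptor (type 2) whose escape-sequences field (at byte 88 of the descriptor) announces one of the UCS-2 levels, and the names in its directory tree are big-endian UCS-2. A sketch, with helper names of my own:

```python
JOLIET_ESCAPES = (b"%/@", b"%/C", b"%/E")  # UCS-2 levels 1, 2, 3

def is_joliet_descriptor(d: bytes) -> bool:
    """A Supplementary Volume Descriptor (type 2) whose escape-sequences
    field announces UCS-2 is a Joliet descriptor."""
    return (d[0] == 2 and d[1:6] == b"CD001"
            and d[88:91] in JOLIET_ESCAPES)

def decode_joliet_name(raw: bytes) -> str:
    # Joliet file identifiers are big-endian UCS-2; UTF-16-BE covers it.
    return raw.decode("utf-16-be")
```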

And finally, by the time DVDs came around, the UDF file system (Universal Disk Format), also known as ECMA-167, was standardized. It’s not backwards-compatible with ISO 9660, but discs formatted with UDF usually also contain a stub ISO 9660 volume that tells the reader to look for a UDF volume on the same disc. UDF is quite a bit more sophisticated, since it’s intended to be suitable for rewritable media, as well as multiple sessions on the same disc, but it’s still nowhere near as complex as NTFS or ext4.
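That handoff happens through the Volume Recognition Sequence, which also begins at sector 16: a reader walks it looking for an “NSR” descriptor, which marks the volume as ECMA-167/UDF. A rough sketch of the detection (assuming 2048-byte sectors):

```python
SECTOR = 2048

def detect_udf(image: bytes) -> bool:
    """Scan the Volume Recognition Sequence (from sector 16) for an
    NSR descriptor; its presence marks the disc as ECMA-167/UDF.
    Any stub ISO 9660 descriptors (CD001) are skipped over."""
    sector = 16
    while (sector + 1) * SECTOR <= len(image):
        ident = image[sector * SECTOR + 1 : sector * SECTOR + 6]
        if ident in (b"NSR02", b"NSR03"):
            return True
        if ident not in (b"CD001", b"BEA01"):  # end of the sequence
            return False
        sector += 1
    return False
```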

By the way, UDF can also be used on regular disks, not just optical disks. Here’s a little-known trick: it’s actually possible to format any disk as UDF by executing this command in an elevated command prompt: format <drive>: /fs:UDF

So, after poring over the ECMA specifications (real page-turners, I assure you), I implemented support for these file systems in DiskDigger, as well as in my FileSystemAnalyzer tool.

When you use DiskDigger to scan a CD or DVD (or an .ISO image), it will simply dump the contents of the disc and make all the files available for you to save. The ISO 9660 and UDF file systems don’t really have a concept of “deleted” files, so DiskDigger will present all the files for recovery, even ones that are still accessible by normal means. Better still, DiskDigger can now scan the disc beyond the size reported by the OS (the very issue I detailed above), and find these file systems in regions of the disc that are not accessible by normal means. To do this, launch DiskDigger, go to the Advanced tab, and select the “Detect disk size manually” checkbox.

And in FileSystemAnalyzer, you can now examine these file systems in great detail. When you open an optical disk (or disk image), if it contains an ISO 9660 file system, it lets you examine and navigate it. If the disk contains a Joliet file system, it lets you examine it as either Joliet or ISO 9660, by letting you select which volume descriptor to use.  And if the disk contains a UDF file system, it lets you examine it or the stub ISO 9660 volume that usually comes along with it.

This rounds out support for optical disk file systems in DiskDigger and FileSystemAnalyzer! It’s admittedly a bit late, and also a bit overkill, since it’s true that data recovery cases involving optical disks are few and far between, but it’s good to know that even these sorts of cases can now be handled easily and smoothly.

Misrepresentations of evolution in popular media

This post is mostly tongue-in-cheek, but it’s something that I’ve been wanting to get off my chest for some time. The theory of evolution (or the fact of evolution, as Richard Dawkins refers to it) is still not as accepted as it should be, and remains “controversial” in some circles. According to a Pew Research Center study, as many as 34% of Americans do not believe that humans and other species evolved from earlier shared ancestors, and instead believe that humans have always existed in their current form, since the moment of creation.

Evolution is not a particularly difficult theory to grasp, but unfortunately its teaching is often encumbered by certain obstacles. The greatest such obstacle is religious dogma, which I don’t need to expand upon here. But there’s another obstacle that is nearly as harmful to a proper understanding of evolution as religion: popular media. A great number of movies, TV shows, and video games have “evolution” as a central theme, but do a very poor job of representing the reality of evolution, and in fact perpetuate falsehoods that might take even more effort to unlearn.

Here’s a selected list of popular media where evolution plays a role, but is presented in a frustratingly incorrect way. These are, of course, my personal gripes. Yours may be different, and if they are, I’d like to know about them. Enjoy!

X-Men

One of the grossest misrepresentations of evolution occurs in the X-Men universe. Evolution is integral to the entire premise of the series: it is claimed that the X-Men have special abilities because of mutations in their genome.

The X-Men themselves believe that their mutations make them superior to ordinary humans. The ordinary humans, in turn, treat the mutants as outcasts and freaks because of, essentially, fear and jealousy. The point to be drawn here is that no single mutation per se makes an individual superior to any other; the only metric of “superiority” in evolution is reproductive success. If the X-Men series were about a mutant who could impregnate all the women in the world with a snap of his fingers, now that would be an advantageous mutation.

The sheer variety and randomness of “powers” possessed by the mutants borders on the ridiculous. We have one mutant who can shoot energy out of his eyes, another who can control the weather, another whose skin turns into bulletproof metal, and another who can change the channel on the TV by blinking his eyes. Pray tell, what kind of mutation would enable a person to emit infrared light at the precise frequency and the right modulation to change the channel on a TV?

The powers of the X-Men are completely at odds with fundamental physical laws, including conservation of energy. But even if we dismiss the fact that there’s virtually no physical basis for any of the X-Men’s powers, there’s even less scientific basis for the idea that a genetic mutation is what causes them.

Pokémon

Alright, this is a bit of a stretch. But come on, Pokémon appeals to a substantial number of young children, and they, more than anyone, should be spared from misconceptions about evolution.

In the Pokémon world, a certain species of Pokémon can “evolve” into a different species by the mere act of gaining a sufficient amount of experience from battling other Pokémon. This perpetuates the falsehood that evolution is an instant “quantum leap” from one species to another, or that one species can instantly transform into a different, more advanced species.

Of course Pokémon are not real animals. Perhaps they are supposed to be “magical,” which might give them the ability to transform into different species. But whatever you call this transformation, don’t call it evolution.

Star Trek

As much as I like Star Trek, I’m afraid it’s one of the bigger offenders when it comes to properly depicting evolution, genetics, and DNA.

Firstly, in the Star Trek universe, it’s commonplace to see unions between two different species that produce offspring, for example a half-human half-Vulcan, or a half-Bajoran half-Cardassian, etc. The thing is, if two individuals are able to produce fertile offspring together, then they are by definition members of the same species. Even within the confines of the Earth, two individuals from different species cannot produce offspring, even if the species are very close in evolutionary time, e.g. humans and chimpanzees. So then, how could two species from different planets possibly be compatible?

Star Trek treats aliens from different planets as if they were different races of the same pseudo-species of “humanoids,” but this is very different from the idea of them being distinct species, with their own evolutionary lineage.

To its credit, Star Trek: The Next Generation attempts to explain the profusion of humanoid species on different planets in the episode The Chase, in which it’s discovered that life on habitable planets was seeded billions of years ago by an “original” humanoid species that wanted to spread its likeness throughout the galaxy. This is a perfectly valid premise, but even if life was seeded this way, the probability that the lineages on different planets would evolve into beings that look virtually identical billions of years later, and are able to interbreed, is essentially zero.

In sum

If you’re a Hollywood screenwriter, a video game designer, a comic book author, or any other content creator hoping to incorporate evolution into the premise of your production, consider presenting it as free as possible from misconceptions that might be absorbed by your audience. (Or hire me as a technical consultant!)

FAT12 is alive and well!

One would have thought that the FAT12 file system was safely a relic of the 1980s and 1990s, when it was used as the default file system for floppy disks and very early hard disks. FAT12 would be entirely impractical today, since it can only cover a maximum of about 32 MB of disk space. However, I was surprised to find it very much alive today, in the most unlikely of places.

When I go running, I use my trusty Garmin Forerunner 10 watch, which uses GPS to record my position and pace during the run. When I connect the watch to the USB port on my PC, it appears as a mass storage device, and allows me to retrieve the workout files (stored in the FIT format). It hadn’t occurred to me until now to check out the finer details of this mass storage device, but there were a few things that surprised me:

  • The entire size of the watch’s flash memory is actually just one megabyte! I’m guessing this is because they want to discourage users from using the watch for general storage (i.e. dumping of photos, documents, and so on), thereby unnecessarily wearing out the flash memory. It also encourages the user to offload the workout files relatively often, in case the memory ever gets corrupted or the watch is lost. It’s also possible that this watch uses a more expensive type of flash memory (one that is more resilient to wear and tear), which would make it prohibitively expensive to provide multi-gigabyte sizes that we breezily expect in today’s USB flash drives.
  • You guessed it: it uses FAT12 to organize the files in the flash memory. Because why not! With a total disk space of 1 MB, this is really the simplest and most compatible solution they could have chosen.

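For the curious, the “12” in FAT12 refers to its 12-bit file allocation table entries, two of which are packed into every three bytes of the table (and which limit the volume to 4084 clusters). Here’s a minimal sketch of unpacking them (function name is mine):

```python
def fat12_entries(fat: bytes):
    """Unpack 12-bit FAT entries: every 3 bytes hold two entries."""
    entries = []
    for i in range(0, len(fat) - 2, 3):
        b0, b1, b2 = fat[i], fat[i + 1], fat[i + 2]
        entries.append(b0 | ((b1 & 0x0F) << 8))  # even entry: low 12 bits
        entries.append((b1 >> 4) | (b2 << 4))    # odd entry: high 12 bits
    return entries
```
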
Therefore, hats off to Garmin for not overcomplicating things, and making use of a tried and tested solution that is sure to remain compatible and future-proof.

Rewatching Star Trek: Voyager

So I’ve been rewatching Star Trek: Voyager recently, and thought I’d share some notes and observations. As a disclaimer, I had the show running mostly in the background, so I didn’t pay particularly close attention. Nevertheless, this re-viewing has pretty much solidified my conclusion that this series is my least favorite in the franchise. Let’s begin.

Captain Janeway

Oh, Captain Janeway…. I don’t think I agreed with a single command decision she made, starting with the very first episode, in which she strands her crew in the Delta quadrant, and ending with the very last episode, in which she travels back in time to help Voyager reach the Alpha quadrant sooner (because of how much she regrets her decision to stay in the Delta quadrant).

The complete, repeated disregard for the Prime Directive (including the Temporal Prime Directive) has me questioning her fitness for command.

At least she ends up where she belongs — in a women’s prison mashing potatoes for other inmates. #lockherup

Neelix

The Jar-Jar Binks of the Star Trek universe! It’s hard to believe he survived that long before coming aboard Voyager without being blown to bits by… pretty much anyone who encountered him. And it’s also hard to believe he wasn’t killed off in any number of ways during Voyager’s seven-year journey. Being strangled by Tuvok would have been a satisfying outcome, and this actually happens, but only as part of a holodeck simulation. It’s as if the writers are fully aware of how annoying this character is, but just want to stick it to the audience.

Stupid and/or tedious enemies

The Kazon. We’ve seen all of this before. Generic humanoids with a prosthetic forehead and a bone to pick. No wonder Seska (a secret Cardassian) fits so well with the Kazon when she defects to them from Voyager — they are nearly identical species, except that the Kazon are much stupider and less cunning. How did they ever invent warp travel? In fact, the Kazon, the Ocampa, and the Caretaker can be very closely compared with the Cardassian / Bajoran / Prophets trifecta of Deep Space Nine, except the latter were explored much more deeply and meaningfully.

The Hirogen. We’ve seen this before, too: one-dimensional enemies that are given a single human characteristic that’s exaggerated to absurdity. Basically a blend of Klingons and Jem’Hadar, they are obsessed with “hunting” and “prey,” and made for some woefully tedious and predictable episodes.

On the other hand, there were some great enemies too, such as Species 8472, which deserved many more episodes than they got. The interplay between Voyager and Species 8472 became really nuanced, and I was looking forward to more stories with them involved, but it didn’t happen.

Notable cameos

One of the truly enjoyable things about re-watching this series was the cameos, some of which I hadn’t even realized until now. Imagine watching a random episode and thinking, “Hold on, isn’t that… Scott Thompson?!” Why yes, it is — he makes a guest appearance in the episode Someone to Watch Over Me, and is delightful as always.

Other welcome appearances include Jason Alexander, who plays the leader of the “brain trust” in the episode Think Tank. I kept chuckling whenever he spoke. And of course there’s Sarah Silverman, an up-and-coming comedian at the time, who plays the nerdy-but-sexy SETI researcher in Future’s End.

Time travel as filler

A noticeable fraction of the episodes are time travel stories where most of the episode happens in an alternate timeline which, by the end of the episode, resets to the beginning without the crew knowing that anything happened. This makes the episode completely pointless, unless it’s redeemed by some meaningful character development, which it generally isn’t. Here are just a few such episodes that I can recall:

  • Time and Again, in which the Voyager crew detect a planet that has undergone a catastrophe, but it turns out that the crew themselves are the ones who cause the catastrophe in the future. Once they manage to prevent it from happening, the timeline resets to the beginning, with all the events of the episode never having taken place.
  • Timeless, in which, fifteen years in the future, Chakotay and Harry Kim correct a mistake that happened fifteen years prior, which had caused the destruction of Voyager and the rest of the crew. Once the mistake is corrected, the timeline is reset, with the events of the episode never having taken place.
  • Year of Hell (a two-parter), in which the crew is faced with a species that is bent on tampering with the timeline in order to become the most prosperous and dominant species in the sector. They do this using a ship that can fire a “temporal incursion” beam that erases any object from ever having existed. When Voyager finally manages to destroy the temporal incursion ship (without any help from the 29th Century time police, who should have been there, right?), the timeline is reset, with the events of the episode never having taken place. Note: this episode had a huge redeeming factor in the form of Kurtwood Smith, whose performance made the episode more watchable than most.
  • Relativity, in which a renegade captain from the 29th century is obsessed with destroying Voyager because he’s fed up with all the trouble they’ve caused with their time traveling (I know how he feels!). Once this captain is apprehended, the timeline is restored, with the events of the episode never having taken place.

I’m fairly certain there are a few more that I didn’t mention, but you get the idea. It feels like the writers would too often fall back on the “alternate timeline” / “it was all a dream” trope, rather than explore more meaningful ways to advance the story or develop the characters.

“Science”

What good would a dissection of a Star Trek series be without a few nitpicks of the science and technobabble that they use? The science explored in Voyager is laughably terrible, as it often is in Star Trek, but fortunately it’s laughable in a good way. As in, it’s so absurd that it overflows into being humorous.

Remember when Voyager gets stuck inside the event horizon of a singularity (in the episode Parallax) and pokes a hole through it using warp particles? And when Neelix mansplains the event horizon to Kes with utter nonsense? Oh man, that’s the good stuff.

Perhaps the most absurd episode, to the point where I felt stupider after watching it, was Threshold, which is the one where Tom Paris builds a shuttlecraft that can achieve Warp 10, which is “infinite speed.” Mind you, the scientists and theorists at the Daystrom Institute have spent centuries refining warp technology to attain this goal without success, but Tom Paris (a random-ass pilot without any scientific background) does it in a few days.

The crew acknowledges that this is an impossibility (since traveling at infinite speed would mean occupying all points in the universe simultaneously), but Tom does it anyway! Of course, after attaining Warp 10, Tom starts to experience some changes: he abducts captain Janeway to a distant planet, where they de-evolve into giant salamanders, and have salamander babies. But don’t worry, the crew finds them soon enough, and the Doctor restores them to human form, with all their memories and skills intact. What happened to the salamander babies, which are now the most unique species in the entire universe? Ehh, who cares. (Whoops, spoilers!)

Verdict

A few memorable episodes and some pretty good performances (especially by Robert Picardo as the Doctor), but the premise, along with the overall dullness and repetitiveness of the episodes, makes the series better suited for running in the background than in the foreground.

Ray tracing black holes

Lately I’ve been studying up on ray tracing, and one of my goals has been to build a nonlinear ray tracer — that is, a ray tracer that works in curved space, for example space that is curved by a nearby black hole. (See the finished  source code!)

In order to do this, the path of each ray must be calculated in a stepwise fashion, since we can no longer rely on the geometry of straight lines in our world. With each step taken by the ray, the velocity vector of the ray is updated based on an equation of motion determined by a “force field” present in our space.

This idea has certainly been explored in the past, notably by Riccardo Antonelli, who derived a very clever and simple equation for the force field that guides the motion of the ray in the vicinity of a black hole, namely

$$\vec F(r) = -\frac{3}{2} h^2 \frac{\hat r}{r^5}$$

I decided to use the above equation in my own ray tracer because it’s very efficient computationally (and because I’m not nearly familiar enough with the mathematics of GR to have derived it myself). The equation models a simple Schwarzschild black hole (non-rotating, uncharged) at the origin of our coordinate system. The simplicity of the equation comes with a tradeoff: the resulting images are mostly unphysical, meaning they’re not exactly what a real observer would “see” in the vicinity of the black hole. Instead, the images must be interpreted as instantaneous snapshots of how the light bends around the black hole, with no regard for redshift or for distortions relative to the observer’s motion.
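Here is a minimal sketch of the stepping loop in plain Python (the names and the simple Euler integrator are my own; a real tracer would want a higher-order integrator). At each step, the acceleration is the force above, with h = |r × v| computed once from the ray’s initial conditions, since it’s conserved along the ray:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def trace_ray(pos, vel, dt=0.01, steps=1000, r_horizon=1.0):
    """Step a light ray through the effective force field
    F = -(3/2) h^2 rhat / r^5, where h = |r x v| is conserved.
    Returns the final position, or None if the ray is captured."""
    x, y, z = pos
    vx, vy, vz = vel
    c = cross(pos, vel)
    h2 = c[0]**2 + c[1]**2 + c[2]**2        # h^2, computed once
    for _ in range(steps):
        r = (x*x + y*y + z*z) ** 0.5
        if r < r_horizon:
            return None                      # ray fell into the hole
        a = -1.5 * h2 / r**6                 # rhat / r^5 == (vector r) / r^6
        vx += a * x * dt; vy += a * y * dt; vz += a * z * dt
        x += vx * dt; y += vy * dt; z += vz * dt
    return (x, y, z)
```

A purely radial ray (h = 0) feels no force and travels in a straight line, which is a handy sanity check on the implementation.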

Nevertheless, this kind of ray tracing provides some powerful visualizations that help us understand the behavior of light around black holes, and help demystify at least some of the properties of these exotic objects.

My goal is to build on this existing work, and create a ray tracer that is more fully featured, with support for other types of objects in addition to the black hole. I also want it to be more extensible, with the ability to plug in different equations of motion, as well as to build more complex scenes, or even to build scenes algorithmically. So, now that my work on this ray tracer has reached a semi-publishable state, let’s dive into all the things it lets us do.

Accretion disk

The ray tracer supports an accretion disk that is either textured or plain-colored. It also supports multiple disks, at arbitrary radii from the event horizon, albeit restricted to the horizontal plane around the black hole. The collision point of the ray with the disk is calculated by performing a binary search for the exact intersection. If we didn’t search for the precise point of intersection, we would see artifacts due to the finite “resolution” of the steps taken by each ray (notice the jagged edges at the bottom of the disk):
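The search is plain bisection: as soon as one integration step lands on the opposite side of the disk plane from the previous step, we bisect that last segment until we converge on the plane. A sketch, assuming the disk lies in the z = 0 plane:

```python
def refine_plane_crossing(p0, p1, iters=32):
    """Bisect between two successive ray positions that straddle the
    disk plane (z = 0) to find the crossing point precisely."""
    lo, hi = p0, p1
    for _ in range(iters):
        mid = tuple((a + b) / 2 for a, b in zip(lo, hi))
        # keep whichever half-interval still straddles the plane
        if (lo[2] > 0) != (mid[2] > 0):
            hi = mid
        else:
            lo = mid
    return mid
```

The refined point’s distance from the center is then checked against the disk’s inner and outer radii to decide whether the ray actually hit it.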

Once the intersection search is implemented, the lines and borders become nice and crisp:

We can also apply different colors to the top and bottom of the disk. Observe that the black hole distorts the disk in a way that makes the bottom (colored in green) appear around the lower semicircle of the photon sphere, even though we’re looking at the disk from above:

Note that the dark black circle is not the event horizon, but is actually the photon sphere. This is because photons that cross into the photon sphere from the outside cannot escape. (Only photons that are emitted outward from inside the photon sphere can be seen by an outside observer.)

If we zoom in on the right edge of the photon sphere, we can see higher-order images of the disk appear around the sphere (second- and even third-order images are visible). These are rays of light that have circled around the photon sphere one or more times, and eventually escaped back to the observer.

And here is the same image with a more realistic-looking accretion disk:

Great! Now that we have the basics out of the way, it’s time to get a little more crazy with ray tracing arbitrary materials around the black hole.

Additional spheres

The ray tracer allows adding an unlimited number of spheres, positioned anywhere (outside the event horizon, that is!) and either textured or plain-colored. Here is a scene with one hundred “stars” randomly positioned in an “orbit” around the black hole (click to view larger versions of the images):

Notice once again how we can see second- and third-order images of the spheres as we get closer to the photon sphere. By the way, here is a similar image of stars around the black hole, but with the curvature effects turned off (as if the black hole did not curve the surrounding space):

And here is a video, generated using the ray tracer, that shows the observer circling around the black hole with stars in its vicinity. Once again, this is not a completely realistic physical picture, since the stars are not really “orbiting” around the black hole, but rather it’s a series of snapshots taken at different angles:

Notice how the spherical stars are distorted around the Einstein ring, as well as how the background sky is affected by the curvature.

Reflective spheres

And finally, the ray tracer supports adding spheres that are perfectly reflective:

All that’s necessary for doing this is to calculate the exact point of impact of the ray on the sphere (again using a binary intersection search) and compute the reflected velocity vector from the sphere’s normal vector at that point. Here is a similar image, but with a textured accretion disk:
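The reflection step itself is a single line of vector algebra, v' = v - 2(v·n)n, where n is the unit surface normal at the point of impact. A sketch:

```python
def reflect(v, n):
    """Reflect velocity v about unit surface normal n: v' = v - 2(v.n)n."""
    d = v[0]*n[0] + v[1]*n[1] + v[2]*n[2]
    return (v[0] - 2*d*n[0],
            v[1] - 2*d*n[1],
            v[2] - 2*d*n[2])
```
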

Future work

Eventually I’d like to incorporate more algorithms for different equations of motion for the rays. For example, someone else has encoded a similar algorithm for a Kerr black hole (i.e. a black hole with angular momentum), and there is even a port of it to C# already, which I was able to integrate into my ray tracer easily:

A couple more ideas:

  • There’s no reason the ray tracer couldn’t support different types of shapes besides spheres, or even arbitrary mesh models (e.g. STL files).
  • I’d also like to use this ray tracer to create some more animations or videos, but that will have to be the subject of a future post.
  • Make it run on CUDA?