Digital hoarding

I have a confession to make: I’m a hoarder. Not a hoarder of material possessions — oh no, my house is almost entirely free of unnecessary stuff. I take pride in actively reducing the amount of physical crap that I own, and donate items I no longer need. My family and I have even made a gift-giving agreement where any gifts must be either consumables (specialty foods, restaurant gift cards, etc.) or experiences (tickets to a show, a subscription to a service, etc.).

My hoarding, on the other hand, is of the digital variety. My “collection” only spans a few external hard drives of 1 TB each, occasionally backed up or synced to a duplicate set of external hard drives. The physical space occupied by these drives is less than one cubic foot, but the vastness of the digital stuff that they contain is… considerable. Just to give you a rough idea of what I’m dealing with:

  • Archives of old emails and correspondence from previous jobs and contracts.
  • Archives of Instant Messenger conversations with old friends and ex-girlfriends, dating back to 1997.
  • A collection of viruses and trojans for MS-DOS from the 80s and 90s, originally for research purposes.
  • A huge library of shareware games and programs from the MS-DOS era.
  • An archive of articles, papers, and textbooks (in PDF form) relating to computer science, mathematics, and physics.
  • An archive of high-resolution NASA imagery of planets, nebulas, and galaxies.
  • An extensive library of file formats (i.e. sample files saved by all kinds of different software) and documentation for every file format specification.
  • Emulators and system images of virtually every computer system and game console ever built.
  • My complete genome, which I had sequenced a few years ago.
  • And of course, my personal photo and video library, from my birth to the present day, and also photos from my parents’ and grandparents’ old albums that I have digitized and saved.

So yeah… recently I’ve been asking myself whether digital hoarding is a problem of the same magnitude as physical hoarding. On the surface, one might ask “What’s the problem?” These things aren’t taking up any physical space, and you don’t have to give them another thought after you save them to the disk. And yet, perhaps there is an emotional toll that comes with the mere knowledge that all of this old data still exists, and remains your responsibility. If anything, this is surely at odds with my attitude towards physical possessions, which is quite minimalist.

A basic litmus test for hoarding behavior consists of a simple question: How would you feel about throwing away any random item that you see around you? Will you use this twisty-tie for anything? How about this pen cap? Do you really need three different cheese graters? How about this pile of old magazines? A hoarder will answer these questions with something like, “You never know when it will come in handy.” And if I’m being honest, that’s exactly how I feel about all the digital items I listed above. I feel the same hesitation about deleting any of them as a “physical” hoarder might feel about donating old clothes, or throwing away expired spices from the pantry.

When I look at my enormous pile of digital junk, I see in myself all the symptoms of real hoarding, albeit confined to the digital realm, which is likely why it’s been able to go on for so long. I’m also reminded of how freeing and cathartic it feels to let go of unnecessary possessions, and I theorize that a similar feeling of freedom will result from permanently letting go of digital baggage.

Therefore, I have resolved to stop being hypocritical in this regard, and start practicing digitally what I practice physically.  Many of the items I mentioned in my list are actually replaceable (easily found on the web, or generated with minimal effort).  Some of the items are technically “irreplaceable,” such as my old emails and IM archives, but represent unnecessary cognitive and emotional baggage, and have no real nostalgic value.  The only things that seem to have actual meaning, and are objectively worth keeping, are my personal photos and videos, and even those can probably be trimmed down a bit.

It’s time to let the past go, and embrace the future without anything weighing me down. Let the cleaning begin.

FileSystemAnalyzer: a tool that does what it says

Today I’m happy to release a tool for low-level analysis of file systems, which includes digging through file system structures that aren’t normally visible when exploring your disks, looking at metadata of files and folders that isn’t normally accessible, and even browsing file systems that aren’t supported by Windows or your PC.

Download FileSystemAnalyzer

This is actually the software that I use “internally” to test and experiment with new features for DiskDigger, but I thought that it might be useful enough to release on its own. It accesses the storage devices on your PC and reads them at the lowest level, bypassing the file system drivers of the OS.

On the surface, this software is very simple: it allows you to browse the files and folders on any storage device connected to your PC, and supports a number of file systems, including some that aren’t supported by Windows itself. However, the power of this tool comes from what else it shows you in addition to the files and folders:

FAT

The program supports FAT12, FAT16, and FAT32 partitions. When looking at FAT partitions, you can see the file and directory structure, and detailed metadata and previews of files that you select. When selecting a file or folder, you can also see its position in the FAT table. And indeed you can explore the entire FAT table independently of the directory structure, to see how the table is structured and how it relates to the files and directories:

[Screenshot: browsing a FAT partition in FileSystemAnalyzer, with the FAT table view alongside the directory structure]
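
To give a concrete sense of what those FAT table entries represent, here is a minimal sketch (not FileSystemAnalyzer’s actual code) of following a FAT16 cluster chain directly from a raw partition; the field offsets come from the published FAT specification, and the image path and starting cluster are just placeholders.

```python
# Minimal sketch: follow a FAT16 cluster chain by reading a raw partition image.
# Illustration only; not FileSystemAnalyzer's actual code.
import struct

def follow_fat16_chain(image_path, first_cluster):
    with open(image_path, "rb") as f:
        boot = f.read(512)
        bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]
        reserved_sectors = struct.unpack_from("<H", boot, 14)[0]

        # The first copy of the FAT immediately follows the reserved region.
        fat_offset = reserved_sectors * bytes_per_sector

        chain = []
        cluster = first_cluster
        while 2 <= cluster < 0xFFF8:          # values 0xFFF8-0xFFFF mark end of chain
            chain.append(cluster)
            f.seek(fat_offset + cluster * 2)  # each FAT16 entry is 2 bytes
            cluster = struct.unpack("<H", f.read(2))[0]
        return chain

# e.g. follow_fat16_chain("fat16_partition.img", 3) -> [3, 4, 7, ...]
```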

exFAT

As with FAT, the program lets you explore the exFAT file system, while also letting you look at the actual FAT table and see how each file corresponds to its FAT entries.

NTFS

In addition to exploring the NTFS file and folder structure, you can also see the MFT table, and which MFT entry corresponds to which file or folder:

[Screenshot: browsing an NTFS partition, with the MFT entry corresponding to the selected file]
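
And to give a rough idea of what an individual MFT entry looks like at the byte level, here is a small sketch (again, not the tool’s own code) that decodes the fixed header of a single MFT record; the offsets follow the commonly documented NTFS on-disk layout, and the 1024-byte record size is the usual default.

```python
# Minimal sketch: decode the fixed header of one NTFS MFT record (typically 1024 bytes).
# Illustration only; offsets follow the commonly documented NTFS layout.
import struct

def parse_mft_record_header(record: bytes):
    if record[0:4] != b"FILE":
        raise ValueError("Not an MFT record (missing 'FILE' signature)")
    flags = struct.unpack_from("<H", record, 22)[0]
    return {
        "sequence_number":        struct.unpack_from("<H", record, 16)[0],
        "hard_link_count":        struct.unpack_from("<H", record, 18)[0],
        "first_attribute_offset": struct.unpack_from("<H", record, 20)[0],
        "in_use":       bool(flags & 0x0001),
        "is_directory": bool(flags & 0x0002),
    }

# e.g. with open("mft_records.bin", "rb") as f:
#          print(parse_mft_record_header(f.read(1024)))
```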

HFS+

HFS+ is the default file system used in macOS (although it is slowly being superseded by APFS), and is fully supported in FileSystemAnalyzer. You can explore the folders and files in an HFS+ partition, and you can also see the actual B-Tree nodes and node contents that correspond to each file:

[Screenshot: browsing an HFS+ partition, with the B-tree nodes corresponding to the selected file]
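
For a taste of what’s underneath, here is a tiny sketch that just reads the HFS+ volume header, which lives 1024 bytes into the partition and stores its fields big-endian; the offsets reflect Apple’s published HFS Plus volume format documentation as I understand it, not anything taken from FileSystemAnalyzer.

```python
# Minimal sketch: read a few fields of the HFS+ volume header (1024 bytes into the partition).
# Illustration only; fields are big-endian, offsets per Apple's published HFS Plus documentation.
import struct

def read_hfsplus_volume_header(image_path):
    with open(image_path, "rb") as f:
        f.seek(1024)
        header = f.read(512)
    return {
        "signature":  header[0:2],                             # b"H+" for HFS+, b"HX" for HFSX
        "version":    struct.unpack_from(">H", header, 2)[0],  # 4 for HFS+
        "block_size": struct.unpack_from(">I", header, 40)[0],
    }

# e.g. read_hfsplus_volume_header("hfsplus_partition.img")
```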

ext4

Ext4 partitions (used in Linux and other *nix operating systems) are also supported in FileSystemAnalyzer. In addition to exploring the folders and files, you can also see the actual inode table, and observe how the inodes correspond to the directory structure:

[Screenshot: browsing an ext4 partition, with the inode table alongside the directory structure]
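
To show where that inode information comes from, here is a minimal sketch that reads a few fields of the ext4 superblock, which sits 1024 bytes into the partition; the offsets follow the documented ext4 on-disk layout, and the sketch is purely illustrative rather than code from the tool.

```python
# Minimal sketch: parse a few fields of the ext4 superblock (1024 bytes into the partition).
# Illustration only; little-endian fields, offsets per the documented ext4 layout.
import struct

def read_ext4_superblock(image_path):
    with open(image_path, "rb") as f:
        f.seek(1024)
        sb = f.read(1024)
    if struct.unpack_from("<H", sb, 0x38)[0] != 0xEF53:
        raise ValueError("Not an ext2/3/4 superblock (bad magic)")
    return {
        "inodes_count":     struct.unpack_from("<I", sb, 0x00)[0],
        "block_size":       1024 << struct.unpack_from("<I", sb, 0x18)[0],
        "inodes_per_group": struct.unpack_from("<I", sb, 0x28)[0],
        "inode_size":       struct.unpack_from("<H", sb, 0x58)[0],
    }

# e.g. read_ext4_superblock("ext4_partition.img")
```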

APFS

Support for APFS is still very rudimentary, since there isn’t yet any official documentation of its internal structure, which means that supporting it requires some reverse engineering. Nevertheless, fuller APFS support is planned for a near-future update.

Disk images

The program supports E01 (EWF) disk images, as well as VHD (Microsoft Virtual Hard Disk), VDI (from VirtualBox), VMDK (from VMware), and of course plain dd images.

Creating reports

Given the exhaustiveness of the information that this tool presents about the file system that it’s reading, there’s no end to the types of reports that it could generate. Before committing to specific type(s) of reports for the program to create, I’d like to get some feedback from other forensics specialists on what kind of information would be the most useful. If you have any suggestions on what to include in reports, please contact me.

Limitations

For now, FileSystemAnalyzer is strictly a read-only tool: it lets you read files and folders from a multitude of partitions, but does not let you write new data to them. In some ways this can actually be beneficial (especially for forensics purposes), but is clearly a limitation for users who might want to use the tool to add or modify files in partitions that are not supported natively by the operating system.

Feedback

I’d love to hear what you think of FileSystemAnalyzer so far, and any ideas that you might have for new features or improvements. If you have any suggestions, feel free to contact me!

Download FileSystemAnalyzer

Hard hack: reading Soviet magnetic reel tapes

During my last visit to Russia a few years ago, I rummaged through my late grandmother’s old apartment and kept a few items, including several magnetic reel tapes that I presume my grandparents used for bootlegging and copying their favorite music in the sixties and seventies.

[Photo: the magnetic reel tapes]

I’ve been wanting to listen to the contents of the tapes for a while now, but only recently have I found a bit of free time to actually do it. It’s still very much possible to buy a reel-to-reel player on eBay for less than $100, but I wanted to see if I could make use of existing components that I already have. And besides, I’m only looking for a rough rendering of the recordings, and don’t really need the precise original fidelity that an actual reel-to-reel player would provide.

I still have a relatively new cassette player that I’ve used previously to digitize some of my own cassettes from years ago, and I had a hunch that the “format” of the analog audio on the magnetic reels might be similar to, if not the same as, that of the cassettes, meaning that I could theoretically use the cassette player to read the reel tapes!

The first step was to tear down the cassette player. As a side note, although this cassette player is quite cheap, it’s actually very useful because it has a USB port that powers it and simultaneously turns it into a generic USB audio input device, which makes it perfect for digitizing cassettes. Therefore, I wanted to tear it down in a way that would let it continue to read cassettes, if that use case ever comes up again.

Anyway, I removed the front casing of the player, and tore away the plastic guides that kept the cassette tape in alignment, since these guides would interfere with the thicker reel tape. I then affixed the player onto a wooden board, and added two thick screws to hold the reels. I also attached a thick metal post on either side of the player; these posts act as tape guides and keep the tape horizontal across the player.

[Photo: the cassette player mounted on a wooden board, with screws for the reels and metal posts on either side]

Also notice that I put some wire ties onto the metal posts, to serve as vertically-adjustable tape guides for experimenting with the precise alignment of the tape with the read head.

[Photo: wire ties on the metal posts, serving as adjustable tape guides]

Another minor problem was that I didn’t have an empty reel onto which to wind the tape as it was being read. For this purpose, I cut a circle out of some thick cardboard, and glued old CDs on either side of it. This would serve as my empty take-up reel:

[Photo: the improvised take-up reel made from cardboard and old CDs]

And finally the whole contraption is ready to go! There’s something poetic about using CDs to construct a reel onto which ancient magnetic tape will be wound…

[Photo: the finished contraption, ready to play a reel]

I proceeded to connect the cassette player to my PC, and fire up Audacity, the trusty audio recording and processing software. I pressed “Record” in Audacity, pressed the “Play” button on the cassette player, and… to my amazement, the audio started to come through! At first I was only getting one of the two stereo channels, which meant that the tape wasn’t well-aligned with the head, but after a bit of adjusting of my wire-tie tape guides, I got a good stereo signal:

[Screenshot: the stereo signal coming through in Audacity]

It turns out that the reel tapes are recorded at double the speed of cassette tapes, which means that the audio extracted by the cassette player sounds slowed-down by a factor of two. So, the final step was to use Audacity to boost the speed of the recording by 2x, and the final audio came out! The only slight issue is that the audio seemed to be lacking the higher-ish frequencies, so everything sounds a bit muffled. It’s difficult to tell whether this is because the cassette player head isn’t fully compatible with the reel tape, or because the tape itself has worn out or degraded over time. But again, I’m not looking for a perfect transfer of the audio, just a first-order approximation, so this is no big deal.
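
Incidentally, the same 2x speed-up can be done outside of Audacity with a few lines of code. Here’s a rough sketch using Python’s standard wave module, assuming the capture was saved as an uncompressed WAV file (the file names are just placeholders): doubling the declared sample rate makes the audio play back twice as fast, just like running the tape at the correct speed.

```python
# Rough sketch: make the capture play back at 2x speed by doubling the declared sample rate.
# Assumes an uncompressed WAV capture; file names are placeholders.
import wave

with wave.open("reel_capture.wav", "rb") as src:
    params = src.getparams()
    frames = src.readframes(params.nframes)

with wave.open("reel_capture_2x.wav", "wb") as dst:
    dst.setparams(params._replace(framerate=params.framerate * 2))
    dst.writeframes(frames)
```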

What’s on the tapes?!

The actual contents of the tapes are not particularly surprising, but still gave me a wonderful tiny new glimpse into the lives of my grandparents through their musical tastes. One of the tapes contains music from The Irony of Fate (Ирония судьбы), one of the most beloved films in the Soviet Union, still watched today by a huge number of Russian people on New Year’s Eve. I can attest that the film’s soundtrack, performed by Sergey Nikitin (Никитин) and Alla Pugacheva (Пугачёва), is worth saving on tape and listening to on any occasion.

The second tape seems to contain random songs from radio broadcasts, including a few songs from the West. These include Seasons in the Sun by Terry Jacks and Mexico by the Les Humphries Singers. Presumably these songs were deemed innocuous enough by the Communist censors, who otherwise banned music that was seen as subversive, sexualized, or violent, such as Pink Floyd, Black Sabbath, and The Village People (that’s right).

And the third tape contains some songs by Vladimir Vysotsky (Высоцкий), another iconic figure in Soviet music, known for his biting use of slang and street jargon (the genre called blatnaya pesnya, or the newly-coined “Russian chanson”) to deliver poignant, striking, thought-provoking, and often hilarious political messages. The same tape also contains songs by Konstantin Belyaev (Беляев), unknown to me until today, but apparently another minor figure in the same genre of blatnaya pesnya as Vysotsky. To be honest, I found Belyaev’s lyrics rather juvenile (more so than other блатняк), and probably better suited for drinking songs than for thoughtful enjoyment. But then, perhaps that’s exactly what my grandparents used them for.

Well now, with a fresh insight into another facet of my grandparents’ lives, and a renewed appreciation for Soviet musical traditions, I think it’s time to give these tapes one more listen!

* If you’re very curious, here is a sample of the audio from one of the tapes.

Revisiting the Windows 10 thumbnail cache

When you look at a folder full of pictures, and enable the display of thumbnails in your folders, Windows will show a thumbnail that represents each of the pictures. Creating these thumbnails is an expensive operation — the system needs to open each file, render the image in memory, and resize it to the desired size of the thumbnail. Therefore, Windows maintains a cache of thumbnails, saved in the following location: [User folder]\AppData\Local\Microsoft\Windows\Explorer.
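
If you want to see these files for yourself, the cache consists of a handful of thumbcache_*.db files in that folder, roughly one per thumbnail resolution. Here’s a quick, illustrative way to list them (Windows only):

```python
# Quick illustration: list the current user's thumbnail cache files (Windows only).
import glob
import os

cache_dir = os.path.expandvars(r"%LOCALAPPDATA%\Microsoft\Windows\Explorer")
for path in sorted(glob.glob(os.path.join(cache_dir, "thumbcache_*.db"))):
    print(f"{os.path.getsize(path):>12,} bytes  {os.path.basename(path)}")
```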

A few years ago I published a utility for examining the thumbnail cache in Windows Vista and Windows 7. However, in Windows 8 and Windows 10, Microsoft seems to have made some slight modifications to the file format of the thumbnail cache, which prevented my utility from working properly. So, I revisited the thumbnail cache on the most recent version of Windows, and made sure the utility works correctly with it, as well as with all previous versions of the cache.

My updated ThumbCacheViewer supports thumbnail cache files from all versions of Windows after XP.  It automatically detects cache files associated with the current user’s account, and it also allows you to explicitly open thumbnail cache files from any other location. Once the file is opened, the utility will show a simple list of all the images contained in the selected cache. If you select an individual image, it will also show some useful metadata about the image:

[Screenshot: ThumbCacheViewer showing the images in a cache file, with metadata for the selected image]

You can see that the cached images include thumbnails of individual photos, as well as thumbnail representations of folders that contain photos. Both of these can be forensically interesting, since the folder thumbnails still contain plenty of detail in the images. You can also see that there are separate caches for different resolutions of thumbnails, some of which are strikingly high-resolution (up to 2560 pixels wide, which almost defeats the purpose of a cache).

I’ll also point out that you can perform forensic analysis on thumbnail caches using DiskDigger, by opening the cache file as a disk image. You can do this by following these steps:

  • Launch DiskDigger, and go to the “Advanced” tab in the drive selection screen.
  • In the “Bytes per sector” field, enter “1”.
  • Click the “Scan disk image” button, and find the thumbnail cache file that you want to scan.
  • Select “Dig deeper” mode, and proceed with the scan.
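
What this mode of DiskDigger is essentially doing is carving: scanning the raw bytes of the cache file for known signatures and extracting what follows them. Here is a rough sketch of that idea (not DiskDigger’s actual code); the “CMMM” marker that begins each cache entry reflects my understanding of the thumbcache format, and the script only counts occurrences rather than extracting the images.

```python
# Rough sketch of the carving idea: scan a thumbcache_*.db file for known signatures.
# Illustration only; the "CMMM" entry marker reflects my understanding of the thumbcache format.
import sys

SIGNATURES = {
    b"CMMM": "thumbcache entry header",
    b"\xff\xd8\xff": "JPEG data",
    b"\x89PNG\r\n\x1a\n": "PNG data",
    b"BM": "possible BMP data (two-byte signature, expect false positives)",
}

data = open(sys.argv[1], "rb").read()
for signature, label in SIGNATURES.items():
    count, pos = 0, data.find(signature)
    while pos != -1:
        count += 1
        pos = data.find(signature, pos + 1)
    print(f"{label}: {count} occurrence(s)")
```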

Here is the same cache file as in the screenshot above, but viewed using DiskDigger (note the numerous .BMP images detected by the scan):

[Screenshot: the same cache file scanned with DiskDigger, showing numerous recovered .BMP images]

Either way, this is now a relatively complete solution for analyzing thumbnail cache files, whether you’re a professional forensics specialist, or a home user who’s interested in how Windows preserves thumbnails in more ways than you might think!

The dead dream of chatbots

A long time ago I wrote about the Loebner Prize, and how this competition seemed to be not so much a test of machine intelligence as a test of how well the programmers can fool the judges into thinking they’re talking to a human. Not that anyone has actually done this successfully — none of the judges have ever been convinced that any of the chatbots was really a human, and the annual “winner” of the Prize is decided by sympathy points awarded by some very charitable judges.

In that previous post, I remember being struck by how little the quality of the chatbots entered into this competition had changed: it hadn’t improved since the inception of the prize. So, today I thought I’d randomly “check in” on how the chatbots are doing, and read the chat transcripts from more recent years of the competition. (I encourage you to read the transcripts for yourself, to get a taste of the “state of the art” of today’s chatbots.)

And can you guess how they’re doing? Of course you can: they still haven’t improved by any noticeable margin, except perhaps for a larger repertoire of cleverly crafted catch-all responses. Even the winner and close runner-up of the 2017 competition are comically robotic, and can be effortlessly pegged as artificial by a human judge.

My goal in this post, however, is not to bash the authors of these chatbots — I’m sure they’re competent developers doing the best they can. My goal is to discuss why chatbot technology hasn’t moved forward since the days of ELIZA. (Note: When I refer to chatbots, I’m not referring to today’s virtual assistants like Siri or Alexa, which have made big strides in a different way, but rather to human-approximating, Turing-test-passing machines, which these chatbots attempt to be.)

I think much of it has to do with a lack of corporate support. Chatbots have never really found a good business use case, so we haven’t seen any major companies devote significant resources to chatbot development. If someone like Google or Amazon had put their weight behind this technology, we might have seen an advancement or two by now.  Instead, the competitors in the Loebner Prize still consist of individual hobbyists and hackers with no corporate backing.

Interest in chatbots seems to have peaked around 2012, when the cool thing was to add a customized bot to your website and let it talk to your users, but thankfully this died down very shortly thereafter, because apparently people prefer to communicate with other people, not lame attempts at imitation. We can theorize that someday we may hit upon an “uncanny valley” effect with chatbots, where the conversation is eerily close to seeming human (but not quite), which will cause a different kind of revulsion, but we’re still very far from that point.

Another thing to note is the actual technology behind today’s (and yesterday’s) chatbots. Most of these bots, and indeed all the bots that have won the Loebner Prize in recent years, are constructed using a language called AIML, which is a markup language based on XML.  Now, there have been plenty of ways that people have abused and misused XML in our collective memory, but this has to be one of the worst!  AIML attempts to augment XML with variables, stacks, macros, conditional statements, and even loops. And the result is an unwieldy mess that is completely unreadable and unmaintainable. If chatbot technology is to move forward, this has to be the first thing to throw out and replace with something more modern.

And finally, building a chatbot is one of those endeavors that seems tantalizingly simple on the surface:  if you look at the past chat logs of the prize-winning bots, it’s easy to think to yourself, “I can build a better bot than that!”  But, once you actually start to think seriously about building a bot that approximates human conversation, you quickly come up against research-level problems like natural language processing, context awareness, and of course human psychology, behavior, and consciousness in general.  These are most definitely not problems that can be solved with XML markup.  They likely can’t even be solved with today’s neural networks and “deep learning” algorithms.  It will probably require a quantum leap in AI technology.  That is, it will require building a machine that is truly intelligent in a more general way, such that its conversations with humans are a by-product of its intelligence, instead of its primary goal.

For now, however, the dream of chatbots was laid to rest in the mid-2010s, and will probably not return until the technology allows it, and until chatbots are actually wanted or needed.