Revisiting the Windows 10 thumbnail cache

When you look at a folder full of pictures with thumbnail display enabled, Windows shows a thumbnail that represents each picture. Creating these thumbnails is an expensive operation – the system needs to open each file, render the image in memory, and resize it to the desired thumbnail size. Therefore, Windows maintains a cache of thumbnails, saved in the following location: [User folder]\AppData\Local\Microsoft\Windows\Explorer.
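
If you just want to see which cache files are present for your own account, a few lines of Python will list them. (This is only an illustrative sketch, not part of any tool discussed below; the thumbcache_*.db naming is how these files typically appear in that folder.)

    import glob
    import os

    # The cache folder named above, expanded for the current user (Windows only).
    cache_dir = os.path.expandvars(r"%LOCALAPPDATA%\Microsoft\Windows\Explorer")

    # List each thumbnail cache file along with its size.
    for path in sorted(glob.glob(os.path.join(cache_dir, "thumbcache_*.db"))):
        size_kb = os.path.getsize(path) // 1024
        print(f"{os.path.basename(path):24} {size_kb:>8} KB")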

A few years ago I published a utility for examining the thumbnail cache in Windows Vista and Windows 7. However, in Windows 8 and Windows 10, Microsoft made some slight modifications to the file format of the thumbnail cache, which prevented my utility from working properly. So, I revisited the thumbnail cache on the most recent version of Windows, and made sure the utility works correctly with it, as well as with all previous versions of the cache.

My updated ThumbCacheViewer supports thumbnail cache files from all versions of Windows after XP. It automatically detects cache files associated with the current user’s account, and it also allows you to explicitly open thumbnail cache files from any other location. Once the file is opened, the utility will show a simple list of all the images contained in the selected cache. If you select an individual image, it will also show some useful metadata about the image:

You can see that the cached images include thumbnails of individual photos, as well as thumbnail representations of folders that contain photos. Both of these can be forensically interesting, since even the folder thumbnails preserve plenty of detail from the underlying images. You can also see that there are separate caches for different resolutions of thumbnails, some of which are strikingly high-resolution (up to 2560 pixels wide, which almost defeats the purpose of a cache).
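
If you’re curious what these cache files look like internally, here is a minimal sketch in Python of how one can be probed (it is not ThumbCacheViewer’s actual code). It assumes that both the file header and every cache entry begin with the ASCII signature “CMMM”, and that the 32-bit little-endian value immediately after the file signature is the format version – the field that changes between Windows releases.

    import struct
    import sys

    def inspect_thumbcache(path):
        with open(path, "rb") as f:
            data = f.read()

        # The file header starts with the signature "CMMM".
        if data[:4] != b"CMMM":
            raise ValueError(f"{path} does not look like a thumbnail cache file")

        # Format version, stored right after the signature.
        version = struct.unpack_from("<I", data, 4)[0]

        # Each cache entry also starts with "CMMM"; counting the markers gives a
        # rough entry count (it can overshoot if the same bytes happen to occur
        # inside the cached image data itself).
        entry_count = data.count(b"CMMM") - 1

        print(f"{path}: format version {version}, roughly {entry_count} entries")

    if __name__ == "__main__":
        inspect_thumbcache(sys.argv[1])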

I’ll also point out that you can perform forensic analysis on thumbnail caches using DiskDigger, by opening the cache file as a disk image. You can do this by following these steps:

  • Launch DiskDigger, and go to the “Advanced” tab in the drive selection screen.
  • In the “Bytes per sector” field, enter “1”.
  • Click the “Scan disk image” button, and find the thumbnail cache file that you want to scan.
  • Select “Dig deeper” mode, and proceed with the scan.
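
If you’d rather script this kind of signature scan yourself, here is a rough sketch of the same general idea in Python (an illustration only – it is not how DiskDigger is implemented): it simply reports the offsets of common image signatures inside a cache file. A real carver would validate each hit further (for example, by sanity-checking the BMP header fields) before extracting anything, since a two-byte “BM” marker on its own will produce false positives.

    import sys

    # Magic bytes for the image types commonly found in thumbnail caches.
    SIGNATURES = [
        (b"BM", "BMP"),                   # BMP header (prone to false positives on its own)
        (b"\xff\xd8\xff", "JPEG"),        # JPEG start-of-image marker
        (b"\x89PNG\r\n\x1a\n", "PNG"),    # PNG file signature
    ]

    def scan_for_images(path):
        with open(path, "rb") as f:
            data = f.read()
        # Report every occurrence of every signature, with its byte offset.
        for magic, name in SIGNATURES:
            pos = data.find(magic)
            while pos != -1:
                print(f"{name} signature at offset {pos:#x}")
                pos = data.find(magic, pos + 1)

    if __name__ == "__main__":
        scan_for_images(sys.argv[1])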

Here is the same cache file as in the screenshot above, but viewed using DiskDigger (note the numerous .BMP images detected by the scan):

Either way, this is now a relatively complete solution for analyzing thumbnail cache files, whether you’re a professional forensics specialist, or a home user who’s interested in how Windows preserves thumbnails in more ways than you might think!

The dead dream of chatbots

A long time ago I wrote about the Loebner Prize, and how it seemed like this competition isn’t so much a test of machine intelligence, but rather a test of how well the programmers can fool the judges into thinking they’re talking to a human. Not that anyone has actually done this successfully – none of the judges have ever been convinced that any of the chatbots were really a human, and the annual “winner” of the Prize is decided by sympathy points awarded by some very charitable judges.

In that previous post, I remember being struck by how little the quality of the chatbots entered into this competition had changed since the inception of the prize. So, today I thought I’d randomly “check in” on how the chatbots are doing, and read the chat transcripts from the more recent years of the competition. (I encourage you to read the transcripts for yourself, to get a taste of the “state of the art” of today’s chatbots.)

And can you guess how they’re doing? Of course you can: they still haven’t improved by any meaningful margin, except perhaps for acquiring a larger repertoire of cleverly crafted catch-all responses. Even the winner and close runner-up of the 2017 competition are comically robotic, and can be effortlessly pegged as artificial by a human judge.

My goal in this post, however, is not to bash the authors of these chatbots – I’m sure they’re competent developers doing the best they can. My goal is to discuss why chatbot technology hasn’t moved forward since the days of ELIZA. (Note: When I refer to chatbots, I’m not referring to today’s virtual assistants like Siri or Alexa, which have made big strides in a different way, but rather to human-approximating, Turing-test-passing machines, which these chatbots attempt to be.)

I think much of it has to do with a lack of corporate support. Chatbots have never really found a good business use case, so we haven’t seen any major companies devote significant resources to chatbot development. If someone like Google or Amazon had put their weight behind this technology, we might have seen an advancement or two by now. Instead, the competitors in the Loebner Prize still consist of individual hobbyists and hackers with no corporate backing.

Interest in chatbots seems to have peaked around 2012, when the cool thing was to add a customized bot to your website and let it talk to your users. Thankfully this died down very shortly thereafter, because apparently people prefer to communicate with other people, not with lame attempts at imitation. We can theorize that someday we may hit upon an “uncanny valley” effect with chatbots, where the conversation is eerily close to seeming human (but not quite), causing a different kind of revulsion – but we’re still very far from that point.

Another thing to note is the actual technology behind today’s (and yesterday’s) chatbots. Most of these bots, and indeed all the bots that have won the Loebner Prize in recent years, are constructed using a language called AIML, which is a markup language based on XML. Now, there have been plenty of ways that people have abused and misused XML in our collective memory, but this has to be one of the worst! AIML attempts to augment XML with variables, stacks, macros, conditional statements, and even loops. And the result is an unwieldy mess that is completely unreadable and unmaintainable. If chatbot technology is to move forward, this has to be the first thing to throw out and replace with something more modern.

And finally, building a chatbot is one of those endeavors that seems tantalizingly simple on the surface: if you look at the past chat logs of the prize-winning bots, it’s easy to think to yourself, “I can build a better bot than that!” But, once you actually start to think seriously about building a bot that approximates human conversation, you quickly come up against research-level problems like natural language processing, context awareness, and of course human psychology, behavior, and consciousness in general. These are most definitely not problems that can be solved with XML markup. They likely can’t even be solved with today’s neural networks and “deep learning” algorithms. It will probably require a quantum leap in AI technology. That is, it will require building a machine that is truly intelligent in a more general way, such that its conversations with humans are a by-product of its intelligence, instead of its primary goal.

For now, however, the dream of chatbots has been laid to rest in the mid-2010s, and will probably not come back until the technology allows it, and until they’re actually wanted or needed.

A quick utility for SQLite forensics

When performing forensics on SQLite database files, it’s simple enough to browse through the database directly using a tool like sqlitebrowser, which provides a nice visual interface for exploring the data. However, I’d like to create a tool that goes one step further: a tool that shows the contents of unallocated or freed blocks within the database, making it possible to see data from rows that once existed but were later deleted. (This can be used, for example, to recover deleted text messages from an Android device, which usually stores SMS messages in a .sqlite file.)

This utility, which I’ll tentatively call SqliteCarve, is a minimal solution to this task: it loads a SQLite file and parses its pages and B-tree structure. While doing this, it detects the portions of the structure that contain unallocated bytes, then reads those bytes and extracts any strings from them.
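
To make that concrete, here is a minimal sketch of the approach in Python (it is not SqliteCarve’s actual source, just an outline of the same parsing steps). It relies on documented parts of the SQLite file format: the 100-byte header, the page size stored at offset 16, the b-tree page header, the unallocated gap between a page’s cell pointer array and its cell content area, and the chain of freeblocks left behind by deleted cells.

    import re
    import struct
    import sys

    # Runs of four or more printable ASCII bytes.
    PRINTABLE = re.compile(rb"[\x20-\x7e]{4,}")

    def free_regions(page, header_off):
        """Yield (start, end) ranges of unused bytes within one b-tree page."""
        page_type = page[header_off]
        if page_type not in (0x02, 0x05, 0x0A, 0x0D):
            return  # not a b-tree page (e.g. freelist or overflow page); skip it
        first_freeblock = struct.unpack_from(">H", page, header_off + 1)[0]
        num_cells = struct.unpack_from(">H", page, header_off + 3)[0]
        content_start = struct.unpack_from(">H", page, header_off + 5)[0] or len(page)
        header_size = 12 if page_type in (0x02, 0x05) else 8
        # Unallocated gap between the cell pointer array and the cell content area.
        gap_start = header_off + header_size + 2 * num_cells
        if gap_start < content_start:
            yield gap_start, content_start
        # Freeblocks: a linked list of regions left behind by deleted cells.
        offset = first_freeblock
        while offset:
            next_offset, size = struct.unpack_from(">HH", page, offset)
            yield offset + 4, offset + size
            offset = next_offset  # no guard against corrupted chains in this sketch

    def carve_strings(path):
        data = open(path, "rb").read()
        if data[:16] != b"SQLite format 3\x00":
            raise ValueError("not a SQLite database")
        page_size = struct.unpack_from(">H", data, 16)[0]
        if page_size == 1:      # the value 1 means a 65536-byte page
            page_size = 65536
        for page_no in range(len(data) // page_size):
            page = data[page_no * page_size:(page_no + 1) * page_size]
            header_off = 100 if page_no == 0 else 0  # page 1 begins with the 100-byte file header
            for start, end in free_regions(page, header_off):
                for match in PRINTABLE.finditer(page[start:end]):
                    print(f"page {page_no + 1}: {match.group().decode('ascii')}")

    if __name__ == "__main__":
        carve_strings(sys.argv[1])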

The tool presents all the strings found in the unallocated space visually, with a quick way to search for keywords within the strings:

A couple of TODOs for this utility:

  • Support strings encoded with UTF-16 and UTF-16BE, in addition to the default UTF-8 (a possible starting point is sketched after this list).
  • Make better inferences about the type of content present in the unallocated areas, to be able to extract strings more precisely.
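
For the first of these, one possible starting point (sketched below as an idea, not finished code) is the database’s own declared text encoding, which SQLite stores as a 4-byte big-endian value at offset 56 of the file header: 1 means UTF-8, 2 means UTF-16LE, and 3 means UTF-16BE. The string-extraction pass could pick its default decoder from that field, keeping in mind that unallocated areas may still contain remnants in other encodings.

    import struct

    def database_text_encoding(header: bytes) -> str:
        """Map the text-encoding field of a SQLite header to a Python codec name."""
        code = struct.unpack_from(">I", header, 56)[0]
        return {1: "utf-8", 2: "utf-16-le", 3: "utf-16-be"}.get(code, "utf-8")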