Premature optimization of Android View hierarchies

In my day job, one of my responsibilities is to oversee the technical evolution of our product and to optimize its performance. I’ve even written a few guidelines that detail numerous recommendations for maximizing performance of Android apps in general.

One of the things I have always recommended is to reduce the complexity of View hierarchies: don’t overcrowd your layouts, and don’t nest your Views too deeply, since this can supposedly have a negative impact on performance. However, I made these statements based on common sentiment on the web and on the Android documentation, rather than on actual hard evidence. So I looked into it from an interesting perspective: I dug into View hierarchies as they are used by other major apps, and compared them with our own usage. This isn’t a totally “scientific” analysis, and it only looks at a single facet of proper View usage. Nevertheless, the findings are rather surprising, and are actually challenging my insistence on View minimalism.

I looked at the Twitter, Facebook, and Slack apps, and compared each of their “feed” screens to the feed screen of our own Wikipedia app. (The reason I chose these apps is that the “performance” of their feeds is nearly perfectly smooth, especially considering that some of their content includes auto-playing videos and animations.)  I used the superbly useful and little-known UI Automator Viewer tool, which is bundled with the Android SDK, to explore these view hierarchies.
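
Incidentally, if you’d rather measure nesting depth programmatically than eyeball it in UI Automator Viewer, the XML dump produced by the uiautomator tool is easy to walk. Here is a minimal Python sketch; the sample XML is a made-up stand-in for a real dump, which you would pull with “adb shell uiautomator dump /sdcard/window_dump.xml” followed by “adb pull”:

```python
import xml.etree.ElementTree as ET

def max_depth(node):
    """Return the deepest nesting level under this XML node
    (the node itself counts as one level)."""
    child_depths = [max_depth(child) for child in node]
    return 1 + (max(child_depths) if child_depths else 0)

# Hypothetical stand-in for a real uiautomator dump file.
sample = """<hierarchy>
  <node class="android.widget.FrameLayout">
    <node class="android.widget.LinearLayout">
      <node class="android.widget.TextView"/>
    </node>
  </node>
</hierarchy>"""

root = ET.fromstring(sample)
print(max_depth(root))  # prints 4: hierarchy plus three nested nodes
```

The same recursive walk works on a dump of any size, so comparing apps is just a matter of dumping each one’s feed screen and running it through this function.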

For reference, the deepest nesting that I found in the feed of the Wikipedia app is seven (7) levels deep:

[Image: view hierarchy dump of the Wikipedia app (layout_dump3)]

But get ready:  The deepest nesting in the Slack app is… eighteen (18) levels deep. And yet it performs perfectly smoothly:

[Image: view hierarchy dump of the Slack app (layout_dump2)]

The deepest nesting in the Facebook app is twenty (20) levels deep. And yet it works just fine:

[Image: view hierarchy dump of the Facebook app (layout_dump4)]

The deepest nesting in the Twitter app is twenty-three (23) levels deep, which includes eight (8) nesting levels for each item in their ListView (it’s not even a RecyclerView!). And yet I’m able to scroll the Twitter feed infinitely without a single hiccup.

[Image: view hierarchy dump of the Twitter app (layout_dump1)]

Therefore I’m compelled to reevaluate how much importance we should place on optimizing View hierarchies, at least from the perspective of “nesting.” Indeed, this seems to be yet another case for balancing reasonable performance guidelines with more immediate product goals, or, put more simply, for avoiding premature optimization.

Higher-level illusions

Most of us have seen optical illusions, and witnessed firsthand how a simple but specially crafted illustration can completely trick our brain, whether it’s an illusion involving depth perception, motion perception, color perception, etc.  One of my favorites is this illusion involving checkered squares with alternating shades of gray (Is square A darker than square B?):

Credit: Wikimedia Commons.

When I first saw the above illusion, I found it unfathomable that squares A and B are actually the same color, and yet it’s true. The illusion is so powerful, I had to open the image in Photoshop and literally look at the pixel color values of the squares to convince myself that they are the same.

But actually, we don’t even need to resort to any specially contrived images to fool our visual circuits, since our eyes themselves have a built-in defect — a consequence of the eye’s evolutionary history — a blind spot that gets patched over in real time by the software of our consciousness.  This allows us to go on with our lives being completely oblivious of this defect (unless we consciously look for it), but it basically means that we experience this genuine illusion during every waking moment.

These kinds of illusions powerfully illustrate how a simple misfire of our sensory perceptions can send our understanding of the world completely astray, and how our consciousness has adapted to compensate for the laughable fallibility of our senses.

My question is the following:  If it’s this easy to fool our visual processing circuits, what kinds of illusions might be at work at higher levels of our consciousness?  What other blind spots are auto-filled by the software of our brain, making us oblivious to their true nature?

The key to uncovering and understanding illusions, I think, is cognitive effort.  It takes cognitive effort to realize that the illustration at the top of this article is, in fact, an illusion.  It takes cognitive effort to expose and become aware of the blind spot in your own eyes. What other illusions might we uncover if we keep building up the muscles of cognitive effort?

Perhaps we might discover that the Earth, instead of being a flat plane with a dome covering it, is actually a spheroidal mass that orbits the Sun, contrary to all of our intuition.

Perhaps we’ll discover that the Sun is actually one of billions of other suns, and is by no means unique among them, and that our galaxy is one of billions of other galaxies, with similarly little uniqueness about it.

We might also discover that the folk tales and mythologies of our ancestors are not literally true, but are merely expressions of the fears, aspirations, ideals, and desires that we all share, especially the desire to find meaning and purpose in a world that doesn’t grant us purpose on its own.

And perhaps we’ll discover that free will itself, far from being a gift bestowed on us by a creator or even a self-evident property that emerges from our consciousness, is actually the grandest illusion of all:  that all of our thoughts and actions are consequences of deterministic physical laws.

But all of these realizations need not lead us towards fatalism or nihilism, for these too are illusions.  If the universe doesn’t grant us a purpose ex nihilo, it shouldn’t stop us from being able to create our own purpose.  And if it really is true that the laws of physics underlie all of our choices, it doesn’t make our choices any less meaningful or consequential, and it doesn’t mean that we should stop striving to make better choices that improve the lives of current and future generations.

And of course, none of this takes into account the illusions of higher and higher order that we’re bound to uncover in the future, and all the consequences of those discoveries that we can’t even fathom in the present.  The one thing we must not stop doing is exerting our cognitive effort to keep discovering and untangling illusions, wherever we might find them. The immortal words of Stephen Jay Gould come to mind:

We are the offspring of history, and must establish our own paths in this most diverse and interesting of conceivable universes — one indifferent to our suffering, and therefore offering us maximal freedom to thrive, or to fail, in our own chosen way.

Hard hack: reading Soviet magnetic reel tapes

During my last visit to Russia a few years ago, I rummaged through my late grandmother’s old apartment and kept a few items, including several magnetic reel tapes that I presume my grandparents used for bootlegging and copying their favorite music in the sixties and seventies.

[Photo: the magnetic reel tapes (20180428103500)]

I’ve been wanting to listen to the contents of the tapes for a while now, but only recently have I found a bit of free time to actually do it. It’s still very much possible to buy a reel-to-reel player on eBay for less than $100, but I wanted to see if I could make use of existing components that I already have. And besides, I’m only looking for a rough rendering of the recordings, and don’t really need the precise original fidelity that an actual reel-to-reel player would provide.

I still have a relatively new cassette player that I’ve used previously to digitize some of my own cassettes from years ago, and I had a hunch that the “format” of the analog audio on the magnetic reels might be similar, if not identical, to that of the cassettes, meaning that I could theoretically use the cassette player to read the reel tapes!

The first step is to tear down the cassette player. As a side note, although this cassette player is quite cheap, it’s actually very useful because it has a USB port that powers it and simultaneously turns it into a generic USB audio input device, which makes it perfect for digitizing cassettes. Therefore, I wanted to tear it down in a way that would let it continue to read cassettes, if that use case ever comes up again.

Anyway, I removed the front casing of the player, and tore away the plastic guides that kept the cassette tape in alignment, since these guides would interfere with the thicker reel tape. I then affixed the player onto a wooden board, and added two thick screws to hold the reels. I also attached a thick metal post on either side of the player; these act as tape guides and keep the tape horizontal across the player.

[Photo: the torn-down cassette player mounted on a wooden board (20180428102659)]

Also notice that I put some wire ties onto the metal posts, to serve as vertically-adjustable tape guides for experimenting with the precise alignment of the tape with the read head.

[Photo: wire ties on the metal posts, serving as adjustable tape guides (20180428102715)]

Another minor problem was that I didn’t have an empty take-up reel onto which to wind the tape as it’s being read. For this purpose, I cut a circle out of some thick cardboard, and glued old CDs on either side of it. This would serve as my empty reel:

[Photo: the improvised take-up reel made of cardboard and old CDs (20180428102806)]

And finally the whole contraption is ready to go! There’s something poetic about using CDs to construct a reel onto which ancient magnetic tape will be wound…

[Photo: the finished contraption, ready to play (20180428103005)]

I proceeded to connect the cassette player to my PC, and fire up Audacity, the trusty audio recording and processing software. I pressed “Record” in Audacity, pressed the “Play” button on the cassette player, and… to my amazement, the audio started to come through! At first I was only getting one of the two stereo channels, which meant that the tape wasn’t well-aligned with the head, but after a bit of adjusting of my wire-tie tape guides, I got a good stereo signal:

[Screenshot: the recovered stereo signal in Audacity (audacity_magnetic_reel1)]

It turns out that the reel tapes are recorded at double the speed of cassette tapes, which means that the audio extracted by the cassette player sounds slowed down by a factor of two. So the final step was to use Audacity to speed up the recording by 2x, and the final audio came out! The only slight issue is that the audio seems to lack the higher frequencies, so everything sounds a bit muffled. It’s difficult to tell whether this is because the cassette player’s head isn’t fully compatible with the reel tape, or because the tape itself has worn out or degraded over time. But again, I’m not looking for a perfect transfer of the audio, just a first-order approximation, so this is no big deal.
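
For the curious, the 2x speed-up doesn’t strictly require Audacity: a WAV file’s playback speed is governed by the sample rate stored in its header, so rewriting the file with a doubled rate plays the same samples in half the time. A minimal sketch using Python’s standard wave module (the function name and file paths are my own):

```python
import wave

def double_speed(src_path, dst_path):
    """Rewrite a WAV file with its sample rate doubled, which makes it
    play back at 2x speed (same samples, half the duration)."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    with wave.open(dst_path, "wb") as dst:
        # getparams() returns a namedtuple, so we can swap in the new rate.
        dst.setparams(params._replace(framerate=params.framerate * 2))
        dst.writeframes(frames)
```

For example, double_speed("reel_raw.wav", "reel_fixed.wav") would turn a 22.05 kHz capture into a 44.1 kHz file that plays at the reel tape’s original speed. (Audacity’s “Change Speed” effect does the same thing, plus resampling.)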

What’s on the tapes?!

The actual contents of the tapes are not particularly surprising, but still gave me a wonderful tiny new glimpse into the lives of my grandparents through their musical tastes. One of the tapes contains music from The Irony of Fate (Ирония судьбы), one of the most beloved films in the Soviet Union, still watched today by a huge number of Russian people on New Year’s Eve. I can attest that the film’s soundtrack, performed by Sergey Nikitin (Никитин) and Alla Pugacheva (Пугачёва), is worth saving on tape and listening to on any occasion.

The second tape seems to contain random songs from radio broadcasts, including a few songs from the West. These include Seasons in the Sun by Terry Jacks and Mexico by the Les Humphries Singers. Presumably these songs were deemed innocuous enough by the Communist censors, who otherwise banned music that was seen as subversive, sexualized, or violent, such as Pink Floyd, Black Sabbath, and The Village People (that’s right).

And the third tape contains some songs by Vladimir Vysotsky (Высоцкий), another iconic figure in Soviet music, known for his biting use of slang and street jargon (a genre known as blatnaya pesnya, or the newly coined “Russian chanson”) to deliver poignant, striking, thought-provoking, and often hilarious political messages. The same tape also contains songs by Konstantin Belyaev (Беляев), unknown to me until today, but apparently another minor figure in the same blatnaya pesnya genre as Vysotsky. To be honest, I found Belyaev’s lyrics rather juvenile (more so than other блатняк), and better suited for drinking songs than for thoughtful enjoyment. But then, perhaps that’s exactly what my grandparents used them for.

Well now, with a fresh insight into another facet of my grandparents’ lives, and a renewed appreciation for Soviet musical traditions, I think it’s time to give these tapes one more listen!

* If you’re very curious, here is a sample of the audio from one of the tapes.

Revisiting the Windows 10 thumbnail cache

When you look at a folder full of pictures, and enable the display of thumbnails in your folders, Windows will show a thumbnail that represents each of the pictures. Creating these thumbnails is an expensive operation — the system needs to open each file, render the image in memory, and resize it to the desired size of the thumbnail. Therefore, Windows maintains a cache of thumbnails, saved in the following location: [User folder]\AppData\Local\Microsoft\Windows\Explorer.

A few years ago I published a utility for examining the thumbnail cache in Windows Vista and Windows 7. However, in Windows 8 and Windows 10, Microsoft made some slight modifications to the file format of the thumbnail cache, which prevented my utility from working properly. So, I revisited the thumbnail cache on the most recent version of Windows, and made sure the utility works correctly with it, as well as with all previous versions of the cache.

My updated ThumbCacheViewer supports thumbnail cache files from all versions of Windows after XP.  It automatically detects cache files associated with the current user’s account, and it also allows you to explicitly open thumbnail cache files from any other location. Once the file is opened, the utility will show a simple list of all the images contained in the selected cache. If you select an individual image, it will also show some useful metadata about the image:

[Screenshot: ThumbCacheViewer displaying the contents of a thumbnail cache (thumbcache3)]

You can see that the cached images include thumbnails of individual photos, as well as thumbnail representations of folders that contain photos. Both of these can be forensically interesting, since the folder thumbnails still contain plenty of detail in the images. You can also see that there are separate caches for different resolutions of thumbnails, some of which are strikingly high-resolution (up to 2560 pixels wide, which almost defeats the purpose of a cache).

I’ll also point out that you can perform forensic analysis on thumbnail caches using DiskDigger, by opening the cache file as a disk image. You can do this by following these steps:

  • Launch DiskDigger, and go to the “Advanced” tab in the drive selection screen.
  • In the “Bytes per sector” field, enter “1”.
  • Click the “Scan disk image” button, and find the thumbnail cache file that you want to scan.
  • Select “Dig deeper” mode, and proceed with the scan.
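
Under the hood, a scan like this is essentially file carving: walking the raw bytes and looking for known file signatures. This is not DiskDigger’s actual implementation, just a minimal Python sketch of the idea; note that the two-byte BMP signature will yield plenty of false positives, which a real carver filters out by validating the rest of the header:

```python
def carve_signatures(data):
    """Scan a byte buffer for common image file signatures and return
    a sorted list of (offset, file_type) hits."""
    signatures = {
        b"\xff\xd8\xff": "JPEG",          # JPEG start-of-image marker
        b"\x89PNG\r\n\x1a\n": "PNG",      # PNG eight-byte signature
        b"BM": "BMP",                     # BMP magic (very false-positive-prone)
    }
    hits = []
    for magic, kind in signatures.items():
        start = 0
        while (pos := data.find(magic, start)) != -1:
            hits.append((pos, kind))
            start = pos + 1
    return sorted(hits)
```

Running this over the raw bytes of a thumbcache_*.db file turns up the same embedded bitmaps and JPEGs that the full scan finds, just without the validation and extraction steps.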

Here is the same cache file as in the screenshot above, but viewed using DiskDigger (note the numerous .BMP images detected by the scan):

[Screenshot: the same cache file scanned with DiskDigger (dd_88)]

Either way, this is now a relatively complete solution for analyzing thumbnail cache files, whether you’re a professional forensics specialist, or a home user who’s interested in how Windows preserves thumbnails in more ways than you might think!

The dead dream of chatbots

A long time ago I wrote about the Loebner Prize, and how it seemed that this competition isn’t so much a test of machine intelligence as a test of how well the programmers can fool the judges into thinking they’re talking to a human. Not that anyone has actually done this successfully — none of the judges have ever been convinced that any of the chatbots were really a human, and the annual “winner” of the Prize is decided by sympathy points awarded by some very charitable judges.

In that previous post, I remember being struck by how little the quality of the competing chatbots had changed: it hadn’t improved since the inception of the prize. So, today I thought I’d randomly “check in” on how the chatbots are doing, and read the chat transcripts from more recent years of the competition. (I encourage you to read the transcripts for yourself, to get a taste of the “state of the art” of today’s chatbots.)

And can you guess how they’re doing? Of course you can: they still haven’t improved by any meaningful margin, except perhaps in having a larger repertoire of cleverly crafted catch-all responses. Even the winner and close runner-up of the 2017 competition are comically robotic, and can be effortlessly pegged as artificial by a human judge.

My goal in this post, however, is not to bash the authors of these chatbots — I’m sure they’re competent developers doing the best they can. My goal is to discuss why chatbot technology hasn’t moved forward since the days of ELIZA. (Note: When I refer to chatbots, I’m not referring to today’s virtual assistants like Siri or Alexa, which have made big strides in a different way, but rather to human-approximating, Turing-test-passing machines, which these chatbots attempt to be.)

I think much of it has to do with a lack of corporate support. Chatbots have never really found a good business use case, so we haven’t seen any major companies devote significant resources to chatbot development. If someone like Google or Amazon had put their weight behind this technology, we might have seen an advancement or two by now.  Instead, the competitors in the Loebner Prize still consist of individual hobbyists and hackers with no corporate backing.

Interest in chatbots seems to have peaked around 2012, when the cool thing was to add a customized bot to your website and let it talk to your users, but thankfully this died down very shortly thereafter, because apparently people prefer to communicate with other people, not lame attempts at imitation. We can theorize that someday we may hit upon an “uncanny valley” effect with chatbots, where the conversation is eerily close to seeming human (but not quite), which will cause a different kind of revulsion, but we’re still very far from that point.

Another thing to note is the actual technology behind today’s (and yesterday’s) chatbots. Most of these bots, and indeed all the bots that have won the Loebner Prize in recent years, are constructed using a language called AIML, which is a markup language based on XML.  Now, there have been plenty of ways that people have abused and misused XML in our collective memory, but this has to be one of the worst!  AIML attempts to augment XML with variables, stacks, macros, conditional statements, and even loops. And the result is an unwieldy mess that is completely unreadable and unmaintainable. If chatbot technology is to move forward, this has to be the first thing to throw out and replace with something more modern.
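
To make the criticism concrete: stripped of its XML trappings, the essence of an AIML-style bot is just an ordered list of pattern/template rules with a catch-all at the bottom. Here is a toy Python sketch of that approach (the rules are my own hypothetical examples, not taken from any real bot):

```python
import re

# A handful of hypothetical AIML-style rules: a pattern with a capture
# group, and a response template that can echo back what was captured.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bi feel (.+)", re.I),      "Why do you feel {0}?"),
    (re.compile(r".*"),                       "Tell me more."),  # catch-all
]

def respond(message):
    """Return the template of the first rule whose pattern matches,
    with captured groups substituted in -- the whole trick behind
    ELIZA-style chatbots."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "..."
```

Everything AIML adds — variables, stacks, macros, loops — is an attempt to bolt state onto this basic lookup, which is exactly why the conversations never get beyond pattern-matched echoes of your own words.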

And finally, building a chatbot is one of those endeavors that seems tantalizingly simple on the surface:  if you look at the past chat logs of the prize-winning bots, it’s easy to think to yourself, “I can build a better bot than that!”  But, once you actually start to think seriously about building a bot that approximates human conversation, you quickly come up against research-level problems like natural language processing, context awareness, and of course human psychology, behavior, and consciousness in general.  These are most definitely not problems that can be solved with XML markup.  They likely can’t even be solved with today’s neural networks and “deep learning” algorithms.  It will probably require a quantum leap in AI technology.  That is, it will require building a machine that is truly intelligent in a more general way, such that its conversations with humans are a by-product of its intelligence, instead of its primary goal.

For now, however, the dream of chatbots has been laid to rest in the mid-2010s, and will probably not come back until the technology allows it, and until they’re actually wanted or needed.