A new low in SEO scams?

There are honest business models, and dishonest business models. Honest business models include businesses that manufacture useful products and sell them to consumers, or that perform a useful service for a reasonable price. Dishonest business models include pyramid schemes (such as Mary Kay and Amway), Ponzi schemes, and alternative medicine.

But there’s also a third category of business models: the bottom of the barrel. These businesses exist almost exclusively on the Internet and, boy, are there a lot of them. These types of businesses include:

• Selling spamming services
• Selling botnets to spam more efficiently
• Pretending to be a charitable organization during a disaster
• Gaming (exploiting) affiliate programs from web hosts or porn sites
• Creating affiliate programs on top of other affiliate programs from web hosts or porn sites
• Selling SEO (search engine optimization) services
• Selling e-books about how to sell SEO services
• Selling e-books about how to game affiliate programs
• (and the list goes on…)

But now, it looks like a new contender has stepped forward:

• Translating random web pages in exchange for link placement (resulting in improved search engine rankings)

Before I describe the scheme fully, let me start at the beginning. Last week I received the following email:

Dear Sir,

I am writing to inquire regarding your web page about running in Linux where I have found a lot of useful information. My name is Anja and I’m currently studying at the Faculty of Computer Science in Belgrade. Here is the URL of your article: http://diskdigger.org/linux

I would like to share it with the people from Former Yugoslav Republics: Serbia, Montenegro, Croatia, Slovenia, Macedonia, Bosnia and Herzegovina.

I would be grateful if you could allow me to translate your writing into Serbo-Croatian language, that is used in all Former Yugoslav Republics and to post it on my website. Hopefully, it will help our people to gather some additional knowledge about computing.

I hope to hear from you soon.
Regards,

Anja Skrba
anjas@webhostinggeeks.com, http://science.webhostinggeeks.com
Tel: +381 62 300604

Wow, someone wants to translate one of my pages into another language! What an honor. However, after the initial flattery wore off, I noticed a few things that didn’t seem to add up:

• The page that this person chose to translate is fairly obscure. It almost seems like it was chosen at random. A native speaker of “Serbo-Croatian” wouldn’t gain anything from it without a lot of background knowledge.
• A language like Serbo-Croatian is itself fairly obscure. I would guess that a speaker of Serbo-Croatian who is remotely interested in “computing” will most likely speak English to begin with, so this kind of translation would be useless.
• The verbiage in the email sounds a little too boilerplate, with phrases like “a lot of useful information” and “additional knowledge about computing.”

So, what could be this person’s real motive?

If we follow the link in her signature, we see that she is affiliated with “WebHostingGeeks”, which appears to be a web hosting review site that makes money from web host affiliate programs and paid reviews. This certainly makes it a traffic-driven business model, and it therefore has a lot to gain from any kind of SEO “scheme.”

After doing a Google search for the name “Anja Skrba” plus “WebHostingGeeks”, we see a plethora of results where the scheme repeats itself over and over: dozens of seemingly random web pages translated into Serbo-Croatian, with a link back to the WebHostingGeeks site.

And the pieces fall into place. Here’s how the scheme works:

• A webmaster receives an email from a foreign-sounding person (names like “Anja Skrba” and “Jovana Milutinovich” have been seen, but they’re probably not real people), asking for permission to translate one of their web pages.
• The webmaster feels honored, and replies “absolutely!”
• The translator translates the page (using Google Translate, maybe with a few touch-ups), and asks the webmaster to add a link to the translated page from the original page.
• The webmaster, blinded by pride, puts a link to the translated page onto the original page (and often blogs about what an honor it is to be noticed in such a remote corner of the world!).
• Over time, if enough credible web pages add these kinds of links, then the malicious target of the links (i.e. WebHostingGeeks) will climb straight to the top of Google search results, precisely because it’s linked to by oh-so-many respectable sites.

Indeed, some extensive Googling reveals just how many very respectable websites have been fooled by this scam. Just do a search for “Jovana Milutinovich translate” or “Anja Skrba translate”, and you’ll see for yourself.

“Traditional” SEO schemes have their place among the bottom of the barrel, but this is surely a new low. I actually commend them for almost getting one past me.

My BASIC beginnings

Edsger Dijkstra was absolutely right when he said, “Programming in BASIC causes brain damage.”  (Lacking a source for that quote, I found an even better quote that has a source: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”)

When I reflect on my (not-too-distant) programming infancy, I often think about what I might have done differently, like what technologies I could have learned, which ones I should have avoided, or what algorithms I could have used in old software I had written, and so on.

But there’s one thing that really stands out more than anything else:  starting out with BASIC was the worst thing I could have done.

I’m not sure how useful it is to talk about this now, since BASIC has pretty much gone extinct, and rightly so, but it feels good to get it off my chest anyway.

My parents and I immigrated to the U.S. in 1991, when I was 10 years old, and I had never laid eyes on a personal computer before that time. During my family’s first few months in the States, we acquired an 80286 IBM PC, which was probably donated to us by an acquaintance (since the 80386 architecture was already prevalent at that time, and the 80486 was the cutting edge).

I also happened to come across a book called BASIC and the Personal Computer by Dwyer and Critchfield.  I was instantly fascinated by the prospect of programming the computer, and the boundless possibilities that computer software could provide.

However, I made a critical error that would hinder my programming development for at least a year:  I reached the erroneous conclusion that BASIC was the only language there was!

I had no idea that BASIC was an interpreted language, or indeed what the difference is between an interpreted and a compiled language.  I thought that all software (including the games I played, Windows 3.0, WordPerfect, etc.) was written in BASIC!  This unfortunately led me down an ill-fated path of self-study, which took an even stronger effort to undo.

I learned all there was to know about BASIC programming in a few months (starting with GW-BASIC, then moving to QuickBASIC), and then I started to notice certain things about the software I was trying to write.

No matter how I tried, I couldn’t make my programs as fast as the other software I used, and I couldn’t understand why.  Also, the graphics routines in BASIC were virtually nonexistent, so I was baffled as to how anyone could write games with elaborate graphics, scrolling, and responsive controls.  I was eager to start developing games that would rival my favorite games at the time, like Prince of Persia, Crystal Caves, and Commander Keen.  But the graphics and responsiveness of those games were orders of magnitude beyond what I could achieve with my BASIC programs.

With all this frustration on my mind, I was determined to find the reason why my programs were so limited.  I soon found a solution, but once again it was the wrong one!  I stumbled upon some example BASIC code that used assembly language subroutines (encoded as DATA lines in the BASIC program), as well as INTERRUPT routines that took advantage of the underlying DOS and BIOS services.

This led me down the path of learning Intel 286 assembly language (another few months of studying), and encoding it into my BASIC programs!  This solved the issue of responsiveness, but there was still the issue of graphics, or lack thereof.  Fortunately, I found a book at the local public library about VGA graphics programming. Even more fortunately, the book contained sample source code, using a language they called “C”….

And my eyes were opened!

It hit me like a freight train. I was lucky that I didn’t have a seizure right there at the library.  I realized that I had been learning the wrong things all along!  (Of course learning assembly language was sort of right, but my application of it was still misguided.)

Learning C and C++ from that point forward wasn’t particularly difficult, but I still feel like it would have been a lot easier if my mind hadn’t been polluted by the programming style and structure that I learned from BASIC.  It makes me wonder how things might have been different, had I accidentally picked up a book on C++ instead of a book on BASIC during my earliest exploits with computers.

In all fairness, I’m sure I learned some rudimentary programming principles from BASIC, but I’m not sure that this redeems BASIC as a learning tool. There were just too many moments where, while learning C++, I thought, “So that’s the way it really works!”  And I’m sure it’s also my fault for trying to learn everything on my own, instead of seeking guidance from someone else who might have told me, “You’re doing it wrong.”

All of this makes me wonder what programming language would be appropriate for teaching today’s generation of young programmers.  Based on my comically tragic experience with BASIC, my gut instinct is to advise aspiring developers to stay away from interpreted languages (such as Python), or at the very least understand that the interpreted language they’re learning is useless for developing actual software. I don’t think there’s any harm in diving right into a compiled language (such as C++), and learning how it hugs the underlying hardware in a way that no interpreted language ever could.

That being said, I don’t wish any of this to reflect negatively on Dwyer and Critchfield’s BASIC and the Personal Computer.  It’s a solid book, and I still own the original copy.  There’s no denying that it was one of the first books that got me interested in programming, and for that I’m thankful.  However, sometimes I regret that I didn’t find Stroustrup’s The C++ Programming Language at the same garage sale where I found BASIC and the Personal Computer.  Or, alternatively, perhaps Dwyer and Critchfield could have included the following disclaimer in large bold letters: This is not the way actual software is written!  But perhaps it’s time to let it go. I didn’t turn out so bad, right?

DiskDigger now available for Android!

I’m happy to announce that DiskDigger is now available for Android devices (phones and tablets running rooted Android 2.2 and above)! You can get the app by searching for it on the Google Play Store from your Android device.  Please note that the app only works on rooted devices.

At the moment, the app is in an early Beta stage, meaning that it’s not meant to be as powerful or complete as the original DiskDigger for Windows, and is still in active development.  Nevertheless, it uses the same powerful carving techniques to recover .JPG and .PNG images (the only file types supported so far; more will follow) from your device’s memory card or internal memory.

So, if you’ve taken a photo with your phone or tablet and then deleted it, or even reformatted your memory card, DiskDigger can recover it!

I’ve written a quick guide that has more information and a brief introduction to using the app!  If you have questions, comments, or suggestions about the app, don’t hesitate to share them!

Update: thanks to Lifehacker for writing a nice article!

My strategy for organizing my digital photo collection is very minimal. I have a single folder on my computer called “Pictures,” with subfolders that correspond to every year (2011, 2010, …) since the year I was born. Some of the years contain subfolders that correspond to noteworthy trips that I’ve taken.

This method makes it extremely easy to back up my entire photo collection by dragging the “Pictures” folder to a different drive. It also makes it easy to reference and review the photos in rough chronological order. This is why I’ve never understood the purpose of third-party “photo management” software, since most such software inevitably reorganizes the underlying directories in its own crazy way, or builds a proprietary index of photos that takes the user away from the actual directory structure. If you’re aware of the organization of your photos on your disk, then any additional management software becomes superfluous.

At any rate, there is one slight issue with this style of organizing photos: all of the various sources of photos (different cameras, scanners, cell phones, etc.) give different file names to the photos! So, when all the photos are combined into a single directory, they often conflict with each other, or at the very least become a disjointed mess. For example, the file names can be in the form DSC_xxxx, IMG_xxxx, or something similar, which isn’t very meaningful. Photos taken with cell phones are a little better; their names usually incorporate the date and time the photo was taken, but the naming format is still not uniform across all cell phone manufacturers.

Thus, the optimal naming scheme for photos would be based on the date/time, but in a way that is common to all sources of photos. This would organize the photos in natural chronological order. The vast majority of cameras and cell phones encode the date and time into the EXIF block of each photo. If only there were a utility that would read each photo and rename it based on the date/time stored within it. Well, now there is:

This is a very minimal utility that takes a folder full of photos and renames each one based on its date/time EXIF tag. As long as you set the time on your camera(s) correctly, this will ensure that all your photos will be named in a natural and uniform way.

The tool lets you select the “pattern” of the date and time that you’d like to apply as the file name. The default pattern will give you file names similar to “20111028201345.jpg” (for a photo taken on Oct 28 2011, 20:13:45), which means that you’ll be able to sort the photos chronologically just by sorting them by name!
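For anyone curious how such a renaming pass can work, here is a minimal sketch in C#. It is not the actual source of the utility; it assumes the photos are JPEGs carrying the standard EXIF DateTimeOriginal tag (0x9003), and it hard-codes the default “yyyyMMddHHmmss” pattern from the example above.

    using System;
    using System.Drawing;
    using System.Globalization;
    using System.IO;
    using System.Text;

    class PhotoRenamer
    {
        static void Main(string[] args)
        {
            string folder = args.Length > 0 ? args[0] : ".";
            foreach (string path in Directory.GetFiles(folder, "*.jpg"))
            {
                string taken;
                using (Image img = Image.FromFile(path))
                {
                    // EXIF tag 0x9003 = DateTimeOriginal, stored as "yyyy:MM:dd HH:mm:ss"
                    var prop = img.GetPropertyItem(0x9003);
                    taken = Encoding.ASCII.GetString(prop.Value).TrimEnd('\0');
                }
                DateTime dt = DateTime.ParseExact(taken, "yyyy:MM:dd HH:mm:ss",
                    CultureInfo.InvariantCulture);
                string newPath = Path.Combine(folder, dt.ToString("yyyyMMddHHmmss") + ".jpg");
                if (!File.Exists(newPath))
                    File.Move(path, newPath);   // skip (rather than overwrite) on name collisions
            }
        }
    }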

Pi is wrong! Long live Tau!

At one point or another, we’ve all had a feeling that something is not quite right in the world. It’s a huge relief, therefore, to discover someone else who shares your suspicion. (I’m also surprised that it’s taken me this long to stumble on this!)

It has always baffled me why we define $$\pi$$ to be the ratio of the circumference of a circle to its diameter, when it should clearly be the ratio of the circumference to its radius. This would make $$\pi$$ become the constant 6.2831853…, or 2 times the current definition of $$\pi$$.

Why should we do this? And what effect would this have?

Well, for starters, this would remove an unnecessary factor of 2 from a vast number of equations in modern physics and engineering.

Most importantly, however, this would greatly improve the intuitive significance of $$\pi$$ for students of math and physics. $$\pi$$ is supposed to be the “circle constant,” a constant that embodies a very deep relationship between angles, radii, arc lengths, and periodic functions.

The definition of a circle is the set of points in a plane that are a certain distance (the radius) from the center. The circumference of the circle is the arc length that these points trace out. The circle constant, therefore, should be the ratio of the circumference to the radius.

To avoid confusion, we’ll use the symbol tau ($$\tau$$) to be our new circle constant (as advocated by Michael Hartl, from the Greek τόρνος, meaning “turn”), and make it equal to 6.283…, or $$2\pi$$.

In high school trigonometry class, students are required to make the painful transition from degrees to radians. And what’s the definition of a radian? It’s the ratio of the length of an arc (a partial circumference) to its radius! Our intuition should tell us that the ratio of a full circumference to the radius should be the circle constant.
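To spell out the connection: an angle measured in radians is arc length divided by radius, so a full turn corresponds to the full circumference divided by the radius, which is exactly the proposed circle constant: $$\theta = \frac{s}{r}, \qquad \theta_{\mathrm{full\ turn}} = \frac{C}{r} = \tau \approx 6.28\ \mathrm{rad}.$$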

Instead, students are taught that a full rotation is $$2\pi$$ radians, and that the sine and cosine functions have a period of $$2\pi$$. This is intuitively clunky and fails to illustrate the true beauty of the circle constant that $$\pi$$ is supposed to be. This is surely part of the reason that so many students fail to grasp these relationships and end up hating mathematics. A full rotation should be $$\tau$$ radians! The period of the sine and cosine functions should be $$\tau$$!

But… wouldn’t we have to rewrite all of our textbooks and scientific papers that make use of $$\pi$$?

Yes, we would. And, in doing so, we would make them much easier to understand! You can read the Tau Manifesto website to see examples of the beautiful simplifications that $$\tau$$ would bring to mathematics, so I won’t repeat them here. You can also read the original opinion piece by Bob Palais that explores this subject.

It’s not particularly surprising that the ancient Greeks used the diameter of a circle (instead of the radius) in their definition of $$\pi$$, since the diameter is easier to measure, and also because they couldn’t have foreseen the ubiquity of this constant in virtually all sciences.

However, it’s a little unfortunate that someone like Euler, Leibniz, or Bernoulli didn’t pave the way for redefining the circle constant to be 6.283…; the opportunity to simplify mathematics for generations to come was missed.

Aside from all the aesthetic improvements this would bring, considering how vitally important it is for more of our high school students (and beyond) to understand and appreciate mathematics, we need all the “optimizations” we can get to make mathematics more palatable for them. This surely has to be an optimization to consider seriously!

From now on, I’m a firm believer in tauism! Are you?

Good and bad science, and faster-than-light neutrinos

The results from the OPERA experiment at CERN have caused a huge stir in the media over the last two weeks, and with good reason, since the team claims to have measured a neutrino beam arriving 60 nanoseconds sooner than light would have.

Before we go on, let’s calm down a bit. Even if these results are somehow confirmed, it wouldn’t “prove Einstein wrong,” or cause scientists to stop using General and Special Relativity on a day-to-day basis. If anything, it would show that Einstein’s theory is incomplete, but no one is disputing this in the first place.

Relativity (general and special) has been put through dozens of independent, precise, elaborate tests, and passed every single one with astonishing accuracy, which means that there’s definitely something fundamentally correct about Einstein’s theory. It shouldn’t be thought of as some kind of “sitting duck” theory, just waiting to be overthrown.

Understandably, the current consensus among the world’s physicists seems to be that there was a measurement error in the OPERA experiment, or that the experimenters neglected to account for some subtle effect that explains the missing 60 ns. (For a wonderfully accessible introduction to the OPERA experiment, as well as particle physics in general, read Matt Strassler’s blog. For a more thorough discussion of possible mistakes, read Lubos Motl’s post on the subject. It’s also worthwhile to read the comments on those blogs.)

Perhaps the most convincing evidence against this experiment is that we have observed neutrino emissions from supernovae (specifically SN 1987A), and these neutrinos more-or-less coincided with our observation of visible light from the same supernova. If neutrinos are really faster than light, we should have observed the neutrinos many months before we observed the light. The only loophole in this argument would be if the OPERA effect is energy dependent, since the OPERA neutrinos had much more energy than the ones from the supernova, but that would present even more problems.
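As a rough back-of-the-envelope: the OPERA baseline is about 730 km, so a 60 ns early arrival corresponds to a fractional speed excess of roughly $$\frac{v - c}{c} \approx \frac{6 \times 10^{-8}\ \mathrm{s}}{730\ \mathrm{km}/c} \approx 2.5 \times 10^{-5}.$$ Applied to SN 1987A, which is roughly 168,000 light-years away, that same excess would have put the neutrinos here about $$2.5 \times 10^{-5} \times 168{,}000 \approx 4$$ years ahead of the light, rather than the near-coincidence that was actually observed.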

Not being a particle physicist myself, I can’t meaningfully contribute to the discussions on theoretical implications of this experiment, if it’s actually true. I would, however, like to comment on how this story is unfolding from the point of view of the scientific method, and specifically how this story highlights the differences between real science and pseudoscience. I use “pseudoscience” to refer to homeopathy, energy healing products, reiki, dowsing, magnets, pendulums, astrology, and anything else that requires more “faith” than evidence.

In the wake of attending a New Age expo (out of morbid curiosity) and being overloaded with crackpots, quacks, and hucksters, these differences become all the more plain:

• The fact that the experimenters published any data at all is a sign of great scientific integrity. The fact that they held a press conference before the paper was peer-reviewed is a bit unfortunate, as noted by Lawrence Krauss, but I think the fact that this story made it to mainstream media outlets will help the general public understand the scientific process, as people follow the story. Pseudoscientists, on the other hand, seem to be allergic to data in general, and never publish anything.
• Essentially, the scientists of the OPERA experiment are saying, “We’ve gathered these data, we used the best possible experimental parameters, we’ve performed all the checks we could think of, and we still see this anomaly. So please, tell us what we did wrong.” This is surely science at its best! This is the kind of behavior that should be an inspiration for a whole generation of new scientists. We will never hear pseudoscientists utter that phrase.
• Real scientists don’t adhere dogmatically to any theory, no matter how foundational it may be. Even though most physicists agree that there was an error in the OPERA experiment, they still reserve a little room for the possibility that the results are correct, and that Relativity might be violated. Einstein is not to physicists what Chopra is to pseudoscientists.
• Real scientists expect extraordinary evidence for extraordinary claims. Most scientists agree that the evidence collected by the OPERA experiment is not extraordinary. Pseudoscientists make extraordinary claims every time they open their mouths, but present no evidence at all, except anecdotal testimonials from their friends and paid endorsers.
• If we read the blogs of popular physicists on the subject of the OPERA experiment, we find lively debates on theoretical explanations for the anomalous effect, and discussions on ways the experimenters miscalculated the speed of the neutrinos. The key point is: scientists get excited about the possibility of being proven wrong. Scientists can’t wait to be proven wrong, because it would mean that there’s more science to be done!
• Perhaps most importantly, real scientists are motivated by a desire to better understand our world. The only motivation of pseudoscientists is money, thinly veiled by a scientific-sounding sales pitch, and a nonsensical product du jour.

In any case, I encourage everyone to follow this story, because it’s a high-profile example of real science at work; a triumph of human achievement. No matter how the results turn out, by observing the process of scientific scrutiny, everyone will be better equipped to spot pseudoscience when it’s in plain sight.

I will update this post as soon as I see a quack energy-healing product that uses faster-than-light neutrinos to balance the flow of energy through your chakras. Post a comment if you find one yourself!

The FujiFilm .MPO 3D photo format

A few weeks ago my dad, in his love for electronic gadgetry, purchased a FujiFilm FinePix REAL 3D camera. The concept is pretty simple: it’s basically two cameras in one, with the two sensors spaced as far apart as an average pair of human eyes. The coolest thing about the camera is its LCD display, which achieves autostereoscopy by using a lenticular lens (kind of like those novelty postcards that change from one picture to another when you look at them from different angles), so if it’s held at the right angle and distance from the eyes, the picture on the LCD display actually appears 3-dimensional without special glasses!

Anyway, I immediately started wondering about the file format that the camera uses to record its images (as well as movies, which it also records in 3D). In the case of videos, the camera actually uses the well-known AVI container format, with two synchronized streams of video (one for each eye). In the case of still photos, however, the camera saves files with a .MPO extension, which stands for Multiple Picture Object.

I was expecting a complex new image specification to reverse-engineer, but it turned out to be much simpler than that. A .MPO file is basically two JPG files, one after another, separated only by a few padding zeros (presumably to align the next image on a boundary of 256 bytes?). Technically, if you “open” one of these files in an image editing application, you would actually see the “first” image, because the MPO file looks identical to a regular JPG file at the beginning.
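To make the structure concrete, here is a minimal, purely illustrative sketch in C# that splits an .MPO into its two JPEG frames by scanning for the second start-of-image marker (FF D8 FF). A robust parser would walk the JPEG segment structure instead, since those bytes could in principle appear inside compressed data, but this shows just how simple the container is:

    using System;
    using System.IO;

    class MpoSplitter
    {
        static void Main(string[] args)
        {
            byte[] data = File.ReadAllBytes(args[0]);

            // Find the second JPEG start-of-image marker (FF D8 FF) after the first one.
            int second = -1;
            for (int i = 3; i < data.Length - 2; i++)
            {
                if (data[i] == 0xFF && data[i + 1] == 0xD8 && data[i + 2] == 0xFF)
                {
                    second = i;
                    break;
                }
            }

            if (second < 0)
            {
                Console.WriteLine("Only one image found.");
                return;
            }

            // The padding zeros end up at the tail of the first frame; JPEG decoders ignore them.
            File.WriteAllBytes("frame1.jpg", Slice(data, 0, second));
            File.WriteAllBytes("frame2.jpg", Slice(data, second, data.Length - second));
        }

        static byte[] Slice(byte[] src, int offset, int count)
        {
            byte[] result = new byte[count];
            Array.Copy(src, offset, result, 0, count);
            return result;
        }
    }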

I proceeded to whip up a quick application in C# to view these files (that is, view both of the images in each file). This quick program also has the following features:

• It has a “stereo” mode where it displays both images side by side. Using this feature you can achieve a 3D effect by looking at both images as either a cross-eyed stereogram (cross your eyes until the two images converge, and combine into one) or a relaxed-eye stereogram. You might have to strain your eyes a bit to focus on the combined image, but the effect truly appears 3-dimensional.
• In “single” mode, the program allows you to automatically “cycle” between the two images (a wiggle-gram, if you will), which creates a cheap jittery pseudo-3D effect (see screen shots below).
• Also in “single” mode, the program lets you save each of the frames as an individual JPEG file by right-clicking on the picture.
So, if you want a quick and not-so-dirty way of viewing your MPO files, download the program and let me know what you think! (Or browse the source code on GitHub)
Here’s a screenshot of the program in “stereo” mode:

And a screenshot of the program in “cycle” mode:

If you like, you can download the original .MPO file shown in the screenshots above.

Now for a bit of a more technical discussion…. Clearly it would be a great benefit to add support for the .MPO format to DiskDigger, the best file carving application in town.

However, from the perspective of a file carver, how would one differentiate between a .MPO file and a standard .JPG file, since they both have the same header? As it is now, DiskDigger will be able to recover the first frame of the .MPO file, since it believes that it found a .JPG file.

After the standard JPG header, the MPO file continues with a collection of TIFF/EXIF tags that contain meta-information about the image, but none of these tags seem to give a clue that this is one of two images in a stereoscopic picture (at least not the tags within the first sector’s worth of data in the file, which is what we’re really interested in).

One of the EXIF tags gives the model name of the camera, which identifies it as “FinePix REAL 3D W3.” Perhaps we could use the model name (specifically, the fact that it contains “3D”) to assume that this must be a .MPO file. I’d rather not rely on the model name, for obvious reasons, although the FinePix is currently the only model that actually uses this format (to my knowledge).

The other option would be to change the algorithm for JPG carving, so that every time we find a JPG file, we would seek to the end of the JPG image, and check if there’s another JPG image immediately following this one. But then, what if the second JPG image is actually a separate JPG file, and not part of a MPO collection?

For the time being, DiskDigger will in fact use the model name of the camera to decide whether it’s a .MPO file or just a regular .JPG file (a quick sketch of this check appears after the list below). The caveats of this approach are:

• It won’t identify .MPO files created by different manufacturers.
• It might give false positive results for .JPG images shot with the camera in 2D mode.
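Here is roughly what that check looks like, sketched in C# using the EXIF camera-model tag (0x0110). The real carver works on raw sectors rather than going through an image library, so treat this only as an illustration of the heuristic:

    using System;
    using System.Drawing;
    using System.Text;

    class MpoHeuristic
    {
        // Returns true if the JPEG's EXIF camera model suggests it's one frame of a .MPO pair.
        static bool LooksLikeMpo(string path)
        {
            using (Image img = Image.FromFile(path))
            {
                try
                {
                    // EXIF tag 0x0110 = camera model, a null-terminated ASCII string.
                    var model = img.GetPropertyItem(0x0110);
                    string name = Encoding.ASCII.GetString(model.Value).TrimEnd('\0');
                    return name.Contains("3D");   // e.g. "FinePix REAL 3D W3"
                }
                catch (ArgumentException)
                {
                    return false;                 // no model tag present
                }
            }
        }

        static void Main(string[] args)
        {
            Console.WriteLine(LooksLikeMpo(args[0]) ? "Probably .MPO" : "Plain .JPG");
        }
    }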

As always, you can download DiskDigger for all your data recovery needs. And if anyone has any better ideas of how to identify .MPO files solely based on TIFF/EXIF tags, I’d love to hear them!

Update: DiskDigger now fully supports recovering .MPO files, based on deep processing of MP tags encoded in the file!

Thumbnail cache in Windows 7 / Vista – a rumination

Today I was thinking about the security implications of the thumbnail caching systems on most PCs out there today. What I mean is this: whenever you use Windows Explorer to browse a directory that contains photos or other images, with the “thumbnail view” feature enabled, you see a thumbnail of each of the images. By default, Windows caches these thumbnails, so that it doesn’t have to regenerate them the next time you browse the same folder.

This has several implications in terms of privacy and security, since it means that a copy of each image (albeit at a lower resolution) is made elsewhere on the computer, basically without the user’s knowledge. This is good news from a forensic examiner’s point of view, since the thumbnail cache can contain thumbnails of images that have long been deleted. However, from the user’s point of view, it can present a privacy/security issue, especially if the images in question are confidential or sensitive.

Windows XP caches thumbnails in the same folder as the original images. It creates a hidden file called “Thumbs.db” and stores all the thumbnails for the current folder in that file. So, even if the original images were deleted from the folder, the Thumbs.db file will still contain thumbnails that can be viewed at a later time.

However, in Windows 7 and Windows Vista, this is no longer the case. The thumbnails are now stored in a single centralized cache under the user’s profile directory: C:\Users\[username]\AppData\Local\Microsoft\Windows\Explorer\thumbcache*.db

The above directory contains multiple thumbnail cache files, each of which corresponds to a certain resolution of thumbnails: thumbcache_32.db, thumbcache_96.db, thumbcache_256.db, and thumbcache_1024.db.
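If you just want to see which caches exist on your own machine, a few lines of C# are enough. This is only a minimal sketch that lists the cache files; it assumes the Vista/7 location given above:

    using System;
    using System.IO;

    class ThumbCacheFinder
    {
        static void Main()
        {
            // The per-user Explorer folder that holds the centralized caches on Vista / 7.
            string explorerDir = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                @"Microsoft\Windows\Explorer");

            foreach (string cache in Directory.GetFiles(explorerDir, "thumbcache_*.db"))
            {
                var info = new FileInfo(cache);
                Console.WriteLine("{0}  ({1:N0} bytes)", info.Name, info.Length);
            }
        }
    }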

So then, wouldn’t you like to find out what thumbnails your computer has cached in these files? Well, now you can! I’ve whipped up a small utility for the sole purpose of viewing the contents of these thumbnail caches:

This is probably not the first utility that does this, but it’s definitely the simplest. It automatically detects the thumbnail caches present on your computer, and lets you view all the thumbnail images in each cache.

If you want to disable the thumbnail cache in Windows 7 or Vista, you can find instructions here.

A few nitpicks of Star Trek (2009)

Let me state for the record that I loved the new Star Trek movie. Given the string of TNG-era films released over the previous decade, the franchise was clearly in desperate need of a reboot, and J. J. Abrams did an outstanding job of that. I thought the idea of branching off into an entirely new timeline was genius, and it gives a new meaning to the very word “reboot.”

However, the new film certainly had no shortage of plot holes and scientific inaccuracies. It’s taken a while for me to crystallize my thoughts on it, but after watching it again last week on Blu-ray, I couldn’t help but jot down a few nitpicks that really stuck out in my mind. Forgive my inner nerd for really showing in this post, and please feel free to contribute your own nitpicks in the comments, or criticize mine as you see fit! And, off we go.

A supernova that threatens the galaxy?

During his mind-meld with James Kirk, the elder Spock recounts the story that led to their current predicament.

According to Spock, a supernova explosion occurs in his time that threatens the survival of the galaxy. That’s curious… what kind of supernova is this? Granted, supernova explosions are very luminous, but a single supernova would certainly not threaten an entire galaxy, and it certainly wouldn’t carry the kind of planet-destroying force as shown in the film, at least not outside of a single star system.

Using our primitive Hubble Telescope, we have observed plenty of supernova remnants within our own galaxy that pose no threat to us whatsoever. The supernova remnants can grow to several light years in size, but that kind of distance is still minuscule on a galactic scale.

As it is depicted in the movie, Romulus is literally torn apart by the force of the supernova explosion. This means that it must have been the actual Romulan Sun that exploded. No stellar explosion can maintain that kind of force if it originated from a different star system.

It seems unlikely that Romulan scientists didn’t anticipate their own sun going supernova many years in advance of the explosion. Stellar evolution, although not yet completely understood, is nevertheless fairly predictable. It should be relatively easy, especially for a warp-capable species, to tell if a planet’s parent star is on the verge of exploding. Romulus could have been safely evacuated well before its star reached the end of its life.

Appalling Vulcan irresponsibility

I’m not sure I understand why the Vulcans felt it was their duty to contain the supernova. The Romulan star system is nowhere near Vulcan, so why was it up to the Vulcans to stop the explosion? OK, let’s assume for a moment that Vulcans are the only species that has “red matter” technology, so they’re the only ones who can stop the supernova by creating a black hole.

But wait… it’s well-known that the Romulans use singularities (black holes) on a routine basis as a power source for their Warbirds, so the Romulans must be perfectly capable of creating black holes themselves! Couldn’t they simply fling an abandoned Warbird into the supernova, and let the supernova be consumed by the black hole that powers the ship’s warp core?

OK, let’s assume that it was absolutely necessary for the Vulcans to handle this threat. In that case, it seems like the Vulcans handled it extremely irresponsibly, and completely contrary to logic.

Why was it the job of a geriatric diplomat (Spock) to deliver the red matter to the site of the supernova explosion? Was he going to negotiate a peace treaty with it? Couldn’t they have sent someone more appropriate, such as a team of special-forces commandos, or at least someone in better health, or even an unmanned missile that simply plunges into the supernova along with the payload of red matter, much like Dr. Soran did with trilithium in Star Trek Generations?

Why is there so much red matter aboard Spock’s ship? Seriously, if it only takes one droplet of red matter to create a black hole, why was there a comically gigantic ball of it aboard Spock’s ship? That’s enough to create a million black holes! Where else were they planning to use this much red matter?

The Vulcans should have anticipated that red matter could be used as a weapon of genocide. They should have recognized the staggering security risk of allowing red matter to come anywhere close to hostile territory. So why did they place their entire supply of red matter, capable of destroying a million planets, onto a virtually unarmed scout ship, and proceed to send the scout ship into Romulan space?! What did they think would happen?

All of this seems very irresponsible on the part of the Vulcans. Because of their short-sightedness, they’ve indirectly caused the destruction of their own homeworld, and altered the timeline for everybody else.

Black holes and time travel

In the movie, both Nero and Spock travel backwards in time by entering a black hole (facepalm!). This is basically on the same theoretical footing as traveling back in time by performing a slingshot around a star, which made complete sense in Star Trek IV: The Voyage Home.

This isn’t the first time that black holes were portrayed as portals through time in popular media. Black holes are certainly very interesting objects to theorize about, but they’re not quite as exotic as they’re made out to be.

Objects that fall past the event horizon of a black hole do not travel backwards in time. They simply fall closer and closer to the center of the black hole, until finally they’re compressed to a single point of infinite density at the very center, adding to the mass that was already at the central point.

Of course, we don’t yet have the physics to describe the nature of the infinite-density point at the center of the black hole, which is why it’s called a singularity. But we do know that any mass that enters the black hole will remain in the black hole. It doesn’t go backwards in time, nor does it go to another dimension, or another universe. All of the mass will remain in the central singularity for the remaining lifetime of the black hole.

Necessity of drilling

Why was it necessary for Nero to drill to the planet’s core in order to drop the red matter? If the red matter really creates a black hole, it would suffice to drop the red matter anywhere on the planet’s surface, and let the black hole consume the planet from the surface inward. Speaking of red matter…

Red matter?

Theoretically, any amount of matter can be turned into a black hole if it’s compressed into a small enough volume (its Schwarzschild radius). For example, the Earth’s Schwarzschild radius is about 9 millimeters. That is, for the Earth to become a black hole, it would need to be compressed into a volume with a radius of 9 millimeters (about the size of a grape).

Presumably, “red matter” is an exotic form of matter that automatically collapses beyond its own Schwarzschild radius when it’s taken out of its containment field. Fair enough, but there are several major problems with this.

The most serious problem has to do with the size of the black hole that can be created with that amount of red matter. We can see from the movie that red matter is not particularly massive — we see Spock and a Romulan handling containers with samples of red matter without exerting themselves at all. Since it took only a droplet of red matter to create a black hole, let’s assume that the droplet’s mass is 1 gram. The Schwarzschild radius for any massive object is given by the following formula: $$r_\mathrm{s} = \frac{2Gm}{c^2}$$
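Plugging in the numbers for a one-gram droplet (taking $$G \approx 6.67 \times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}$$ and $$c \approx 3.0 \times 10^{8}\ \mathrm{m/s}$$): $$r_\mathrm{s} = \frac{2 \times (6.67 \times 10^{-11}) \times (10^{-3}\ \mathrm{kg})}{(3.0 \times 10^{8}\ \mathrm{m/s})^{2}} \approx 1.5 \times 10^{-30}\ \mathrm{m}.$$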
So, for a mass of 1 gram, the Schwarzschild radius would be about $$1.5 \times 10^{-30}$$ meters, which is some fifteen orders of magnitude smaller than an atomic nucleus. A black hole of this size would pose no threat whatsoever, and this is for two reasons.

According to modern physics, black holes emit radiation with a temperature that is inversely proportional to their mass. This is known as Hawking radiation, named after Stephen Hawking, who postulated its existence. If the black hole emits radiation, that must mean that it’s losing energy, which means that it’s losing mass, which means that it’s getting smaller! And the smaller the black hole gets, the more intense (higher-temperature) its Hawking radiation becomes. This continues until the black hole completely evaporates in a blaze of glory consisting of ultra-energetic gamma rays.
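As a rough back-of-the-envelope, the standard evaporation-time estimate for a non-rotating, uncharged black hole is $$t_\mathrm{ev} \approx \frac{5120\,\pi\,G^{2} M^{3}}{\hbar c^{4}},$$ which for $$M = 1$$ gram works out to roughly $$10^{-25}$$ seconds: the black hole is gone long before it can swallow anything.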

The point is, if Nero used a tiny amount of red matter to create a black hole of the same mass, the black hole would evaporate with a flash of radiation almost instantaneously. The black hole would not go on to swell up and consume the planet.

Incidentally, the theory of Hawking radiation is one response to people’s concerns regarding the possibility of creating a black hole at the Large Hadron Collider. Even if we create a tiny black hole at the LHC, it would instantly evaporate in a flash of radiation, and pose no further threat.

Also, even if black holes do not evaporate due to Hawking radiation, a black hole that’s smaller than an atomic nucleus would have a hard time finding other matter to swallow up. It would take a long time indeed for such a black hole to have a noticeable effect on an entire planet.

Where’s the Time Police?

This next nitpick doesn’t really have to do with the movie itself, but with a different Star Trek story that inadvertently shot the entire franchise in the foot.

In the Star Trek: Voyager episode “Future’s End,” it’s revealed that, in the 29th century, the Federation develops timeships that routinely patrol the timeline and attempt to eliminate any anomalies.

With this story, the writers basically negated any further possibility of having time-travel stories in Star Trek. If a starship travels back in time without “authorization,” we should expect a visit from a temporal patrol ship from the 29th century. The patrol ship would then do whatever is necessary to correct the timeline, and all would be back in order.

In the Voyager episode, the timeship Aeon travels back in time to prevent the destruction of the Solar system. One would think that the destruction of Vulcan is an equally worthy cause for a timeship to investigate, and attempt to prevent. But, of course, we see no hint of this in the movie.

Sound in space

Having sound in space seems to be a sci-fi cliché that the writers and producers just can’t unlearn, so it’s not even a nitpick anymore. And, in all honesty, a little sound adds to the excitement of the space battle scenes, so it’s not that big a deal.

However, in this movie, they actually made an attempt to get it right! I’m referring to the space-jump scene with Kirk, Sulu, and the unimportant guy who dies. When they jump off the shuttle and fall towards the planet, no sound can be heard. As they begin to enter Vulcan’s atmosphere, more and more noise is heard around them. This is absolutely correct!

So why couldn’t the movie take this excellent example and run with it, and get rid of all sound in the scenes that take place in outer space? All of the battle scenes and space explosions still have the usual sounds associated with them, without any regard for the fact that there’s no medium for the sound to travel through. But I digress.

Epilogue

Well, that’s it for now, and thanks for indulging me. As I mentioned, this movie is a worthy successor to all the previous Star Trek films, as well as simply an excellent movie in its own right. I’m looking forward to the sequel(s).

In the meantime, all of the current sci-fi franchises, including Star Trek, would do well to hire some better scientific consultants. Maybe they can hire me?

Discovering the 3D Mandelbulb

There is some exciting news this week in the world of fractals. Daniel White, on his website, describes what is apparently a completely new type of fractal, and the closest analog so far to a true 3-dimensional Mandelbrot set!

Although White mentions that this is probably not the “true” 3D Mandelbrot, the new fractal is undoubtedly a sight to behold, especially considering the renderings he showcases on his webpage.

Unable to contain my enthusiasm, I quickly wrote up a small program that uses OpenGL to display this shape in 3D, so I could get a feel for what this beast looks like from all angles. Don’t get too excited: the program does not render the shape in real time; it displays the points rendered so far in real time, while the actual rendering process can take a minute or so.

The program basically renders the 3D shape by constructing a “point cloud” that approximates the edge of the fractal.
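For reference, here is a minimal sketch (in C#) of the per-point escape-time test that such a point cloud is built from, using the standard power-n Mandelbulb iteration described by White: convert the point to spherical coordinates, raise r to the n-th power, multiply both angles by n, convert back, and add the original point. The bailout radius of 2 and the function names are illustrative choices, not the program’s actual internals:

    using System;

    class MandelbulbTest
    {
        // Returns true if the point (cx, cy, cz) appears to belong to the power-n Mandelbulb,
        // i.e. the iterated point has not escaped after maxIter iterations.
        static bool IsInside(double cx, double cy, double cz, double n, int maxIter)
        {
            double x = 0, y = 0, z = 0;
            for (int i = 0; i < maxIter; i++)
            {
                double r = Math.Sqrt(x * x + y * y + z * z);
                if (r > 2.0)
                    return false;                       // escaped: outside the set

                double theta = Math.Atan2(Math.Sqrt(x * x + y * y), z);
                double phi = Math.Atan2(y, x);
                double rn = Math.Pow(r, n);

                x = rn * Math.Sin(n * theta) * Math.Cos(n * phi) + cx;
                y = rn * Math.Sin(n * theta) * Math.Sin(n * phi) + cy;
                z = rn * Math.Cos(n * theta) + cz;
            }
            return true;                                // never escaped: treat as inside
        }

        static void Main()
        {
            // The point cloud comes from sweeping a grid of candidate points and keeping
            // those near the boundary (e.g. inside points that have an outside neighbor).
            Console.WriteLine(IsInside(0.2, 0.3, 0.1, 8, 20));
        }
    }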

Everything in the program should be relatively self-explanatory, but here’s a brief overview of the features so far:

• The program lets you click and drag the rendered shape to rotate it in trackball fashion (left mouse button), as well as zoom in and out (right mouse button).
• The program lets you select the “power” of the Mandelbulb formula, as well as the number of iterations to perform.
• The program lets you select the resolution of the point cloud.
• It gives you a “selection cube” with which you can select a subset of the shape to zoom in on (with the “zoom to cube” button).
• It has a number of other minor features like fog and anti-aliasing.
• It uses multiple threads to render the shape, so it will take advantage of multiple cores/processors.

Here are some additional screen shots:

Manipulating the selection cube:

After zooming in on the cube:

Zooming in further:

Looking inside:

Colorized points:

The program was written in C# .NET, using the Open Toolkit Library (OpenTK) which provides an excellent OpenGL wrapper.

Of course, this program is very much in its early stages, so don’t expect it to be perfect. As always, comments and suggestions are welcome!