# A new low in SEO scams?

There are honest business models, and dishonest business models. Honest business models include businesses that manufacture useful products and sell them to consumers, or perform a useful service for a reasonable price. Dishonest business models include pyramid schemes (such as Mary Kay and Amway), Ponzi schemes, and alternative medicine.

But there’s also a third category of business models: the bottom of the barrel. These businesses exist almost exclusively on the Internet and, boy, are there a lot of them. These types of businesses include:

• Selling spamming services
• Selling botnets to spam more efficiently
• Pretending to be a charitable organization during a disaster
• Gaming (exploiting) affiliate programs from web hosts or porn sites
• Selling SEO (search engine optimization) services
• Selling e-books about how to sell SEO services
• Selling e-books about how to game affiliate programs
• (and the list goes on…)

But now, it looks like a new contender has stepped forward:

• Translating random web pages in exchange for link placement (resulting in improved search engine rankings)

Before I describe the scheme fully, let me start at the beginning. Last week I received the following email:

Dear Sir,

I am writing to inquire regarding your web page about running in Linux where I have found a lot of useful information. My name is Anja and I’m currently studying at the Faculty of Computer Science in Belgrade. Here is the URL of your article: http://diskdigger.org/linux

I would like to share it with the people from Former Yugoslav Republics: Serbia, Montenegro, Croatia, Slovenia, Macedonia, Bosnia and Herzegovina.

I would be grateful if you could allow me to translate your writing into Serbo-Croatian language, that is used in all Former Yugoslav Republics and to post it on my website. Hopefully, it will help our people to gather some additional knowledge about computing.

I hope to hear from you soon.
Regards,

Anja Skrba
anjas@webhostinggeeks.com, http://science.webhostinggeeks.com
Tel: +381 62 300604

Wow, someone wants to translate one of my pages into another language! What an honor. However, after the initial flattery wore off, I noticed a few things that didn’t seem to add up:

• The page that this person chose to translate is fairly obscure. It almost seems like it was chosen at random. A native speaker of “Serbo-Croatian” wouldn’t gain anything from it without a lot of background knowledge.
• Serbo-Croatian is itself a fairly obscure language. I would guess that anyone in the former Yugoslav republics who is remotely interested in “computing” will most likely speak English to begin with, so this kind of translation would be useless.
• The verbiage in the email sounds a little too boilerplate, with phrases like “a lot of useful information” and “additional knowledge about computing.”

So, what could be this person’s real motive?

If we follow the link in her signature, we see that she is affiliated with “WebHostingGeeks”, which appears to be a web hosting review site that makes money from web host affiliate programs and paid reviews. This certainly makes it a traffic-driven business model, and it therefore has a lot to gain from any kind of SEO “scheme.”

After doing a Google search for the name “Anja Skrba” plus “WebHostingGeeks”, we see a plethora of results where the scheme repeats itself over and over: dozens of seemingly random web pages translated into Serbo-Croatian, with a link back to the WebHostingGeeks site.

And the pieces fall into place. Here’s how the scheme works:

• A webmaster receives an email from a foreign-sounding person (names like “Anja Skrba” and “Jovana Milutinovich” have been seen, but they’re probably not real people), asking for permission to translate one of their web pages.
• The webmaster feels honored, and replies “absolutely!”
• The translator translates the page (using Google Translate, maybe with a few touch-ups), and asks the webmaster to add a link to the translated page from the original page.
• The webmaster, blinded by pride, puts a link to the translated page onto the original page (and often blogs about what an honor it is to be noticed in such a remote corner of the world!).
• Over time, if enough credible web pages add these kinds of links, then the malicious target of the links (i.e. WebHostingGeeks) will climb straight to the top of Google search results, precisely because it’s linked to by oh-so-many respectable sites.

Indeed, after some extensive Googling, it’s surprising just how many very respectable websites have been fooled by this scam. Just do a search for “Jovana Milutinovich translate” or “Anja Skrba translate”, and you’ll see for yourself.

“Traditional” SEO schemes have their place among the bottom of the barrel, but this is surely a new low. I actually commend them for almost getting one past me.

# My BASIC beginnings

Edsger Dijkstra was absolutely right when he said, “Programming in BASIC causes brain damage.”  (Lacking a source for that quote, I found an even better quote that has a source: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”)

When I reflect on my (not-too-distant) programming infancy, I often think about what I might have done differently, like what technologies I could have learned, which ones I should have avoided, or what algorithms I could have used in old software I had written, and so on.

But there’s one thing that really stands out more than anything else:  starting out with BASIC was the worst thing I could have done.

I’m not sure how useful it is to talk about this now, since BASIC has pretty much gone extinct, and rightly so, but it feels good to get it off my chest anyway.

My parents and I immigrated to the U.S. in 1991, when I was 10 years old, and I had never laid eyes on a personal computer before that time. During my family’s first few months in the States, we acquired an 80286 IBM PC, which was probably donated to us by an acquaintance (since the 80386 architecture was already prevalent at that time, and the 80486 was the cutting edge).

I also happened to come across a book called BASIC and the Personal Computer by Dwyer and Critchfield.  I was instantly fascinated by the prospect of programming the computer, and the boundless possibilities that computer software could provide.

However, I made a critical error that would hinder my programming development for at least a year:  I reached the erroneous conclusion that BASIC was the only language there was!

I had no idea that BASIC was an interpreted language, or indeed what the difference was between an interpreted and a compiled language.  I thought that all software (including the games I played, Windows 3.0, WordPerfect, etc.) was written in BASIC!  This unfortunately led me down an ill-fated path of self-study, which took an even greater effort to undo.

I learned all there was to know about BASIC programming in a few months (starting with GW-BASIC, then moving to QuickBASIC), and then I started to notice certain things about the software I was trying to write.

No matter how I tried, I couldn’t make my programs as fast as other software I used, and I couldn’t understand why.  Also, the graphics routines in BASIC were virtually nonexistent, so I was baffled as to how anyone could write games with elaborate graphics, scrolling, and responsive controls.  I was eager to start developing games that would rival my favorite games at the time, like Prince of Persia, Crystal Caves, and Commander Keen.  But the graphics and responsiveness of those games were orders of magnitude beyond what I could achieve with my BASIC programs.

With all this frustration on my mind, I was determined to find the reason why my programs were so limited.  I soon found a solution, but once again it was the wrong one!  I stumbled upon some example BASIC code that used assembly language subroutines (encoded as DATA lines in the BASIC program), as well as INTERRUPT routines that took advantage of the underlying DOS and BIOS services.

This led me down the path of learning Intel 286 assembly language (another few months of studying), and encoding it into my BASIC programs!  This solved the issue of responsiveness, but there was still the issue of graphics, or lack thereof.  Fortunately, I found a book at the local public library about VGA graphics programming. Even more fortunately, the book contained sample source code, using a language they called “C”…

And my eyes were open!

It hit me like a freight train. I was lucky that I didn’t have a seizure right there at the library.  I realized that I had been learning the wrong things all along!  (Of course learning assembly language was sort of right, but my application of it was still misguided.)

Learning C and C++ from that point forward wasn’t particularly difficult, but I still feel like it would have been a lot easier if my mind hadn’t been polluted by the programming style and structure that I learned from BASIC.  It makes me wonder how things might have been different, had I accidentally picked up a book on C++ instead of a book on BASIC during my earliest exploits with computers.

In all fairness, I’m sure I learned some rudimentary programming principles from BASIC, but I’m not sure that this redeems BASIC as a learning tool. There were just too many moments where, while learning C++, I thought, “So that’s the way it really works!”  And I’m sure it’s also my fault for trying to learn everything on my own, instead of seeking guidance from someone else who might have told me, “You’re doing it wrong.”

All of this makes me wonder what programming language would be appropriate for teaching today’s generation of young programmers.  Based on my comically tragic experience with BASIC, my gut instinct is to advise aspiring developers to stay away from interpreted languages (such as Python), or at the very least understand that the interpreted language they’re learning is useless for developing actual software. I don’t think there’s any harm in diving right into a compiled language (such as C++), and learning how it hugs the underlying hardware in a way that no interpreted language ever could.

That being said, I don’t wish any of this to reflect negatively on Dwyer and Critchfield’s BASIC and the Personal Computer.  It’s a solid book, and I still own the original copy.  There’s no denying that it was one of the first books that got me interested in programming, and for that I’m thankful.  However, sometimes I regret that I didn’t find Stroustrup’s The C++ Programming Language at the same garage sale as where I found BASIC and the Personal Computer.  Or, alternatively, perhaps Dwyer and Critchfield could have included the following disclaimer in large bold letters: This is not the way actual software is written!  But perhaps it’s time to let it go. I didn’t turn out so bad, right?

# DiskDigger now available for Android!

I’m happy to announce that DiskDigger is now available for Android devices (phones and tablets running rooted Android 2.2 and above)! You can get the app by searching for it on the Google Play Store from your Android device.  Please note that the app only works on rooted devices.

At the moment, the app is in an early Beta stage, meaning that it’s not meant to be as powerful or complete as the original DiskDigger for Windows, and is still in active development.  Nevertheless, it uses the same powerful carving techniques to recover .JPG and .PNG images (the only file types supported so far; more will follow) from your device’s memory card or internal memory.

So, if you’ve taken a photo with your phone or tablet and then deleted it, or even reformatted your memory card, DiskDigger can recover it!

I’ve written a quick guide that has more information and a brief introduction to using the app!  If you have questions, comments, or suggestions about the app, don’t hesitate to share them!

Update: thanks to Lifehacker for writing a nice article!

# Refuting the Fine-Tuning Argument

During a recent friendly debate with some religious acquaintances, I was asked if I could name any arguments for the existence of a god that actually seem “plausible” to me on any level.  Suffice it to say that none of the standard religious arguments are in any way convincing, given a moment’s thought. However, there is a relatively recent argument that’s been gaining popularity over the last few years, and it requires more than a trivial amount of effort to dismiss. This is the argument from fine-tuning.

In case you’re not aware of the argument, it takes the following form:

Take any physical constant that we know of (e.g. the coupling constant of the strong nuclear force, the cosmological constant governing the expansion of the universe, etc).  If that constant had been a fraction of a percent different, then life wouldn’t exist (or star formation wouldn’t be possible, or the universe would collapse back in on itself, etc).  Therefore, there must have been some intelligent agent who created the universe with the precise physical constants needed for stars and planets to form, and for life to eventually arise.

There’s no denying that it sounds like an interesting, even powerful argument. In fact, some people with whom I’ve recently spoken claim this as the most compelling argument for their continued belief in a god.

Well, let’s carefully analyze this argument, and see why it, too, ends up being less than convincing.

To begin, the universe isn’t exactly overflowing with life.  The universe is more than 99.99% empty space. Most of our solar system is completely uninhabitable, except for a small rocky planet that is on a constant knife-edge of environmental stability, and is just one asteroid away from mass extinction.  It certainly doesn’t appear like the universe was created with us “in mind.” If anything, our presence in the universe is an infinitesimal smear polluting a stupefyingly vast nothingness.  Some “design,” wouldn’t you say?

I might be willing to believe in the fine-tuning argument if we had discovered that there was no universe beyond the Earth, and that the sky was just a canopy above the Earth with the stars being points of light on the canopy.  This should sound familiar:  it’s what we believed two thousand years ago, before we learned better.

So it seems like the desire to believe in the fine-tuning argument is a throwback to the pre-scientific need to feel special, and to cling on to the infantile philosophy that the universe is made specially for us. But we know that every lesson that we’ve had from science over the last 500 years has been a lesson in humility. With each discovery in physics or astronomy, we find that we’re less and less special.

It’s a bit of a straw man argument, as well, and it also smells of the “god of the gaps” fallacy. It’s saying that just because physicists don’t yet understand where Constant X comes from, it must have been designed by a supreme designer.

Just because an “unexplained” constant exists in physics doesn’t mean that it’s free to be adjusted. No one brings up an argument like, “if pi (π) were 3.15 instead of 3.14…, then mathematics wouldn’t be possible.”  It’s meaningless to change the value of pi, because pi simply represents a geometric relationship between circles and diameters. In other words, the value of pi is not a degree of freedom for the universe.  The same could very well be true for many of the physical constants we haven’t explained yet.

At the same time, it’s possible that there are many other universes apart from this one, where physical constants are in fact different, and we’ve simply won a lottery of universes by being born in this one, just like we’ve won a lottery of planets by being born on this planet, and not another similar planet in a distant star system.

The more basic point I’m approaching here is that physicists don’t yet have an explanation for a great many things. We’ve only had quantum mechanics for less than 100 years. We don’t have an explanation for the expansion of the universe. We haven’t unified all the forces yet. We unified electromagnetism and the weak nuclear force only 40 years ago.  So it’s extremely premature to say that we know anything about these constants in any deeper sense than “they exist.”  And it’s absolutely presumptuous and unwarranted to say that not only do you have a deeper understanding of the physical constants than all physicists in the world, but that you have specific knowledge that a designer-god tweaked some knobs to make the constants the way they are.

We’re in no position to make any judgment about this, given the state of our current knowledge of actual physics.  And anyone who claims to have special knowledge about where the physical constants come from deserves suspicion by default.

Lastly, even if we suppose that the fine-tuning argument suggests some kind of god, the only type of god it can possibly be is a sort of Deistic god;  a god who might have “created” the universe and left it alone.  In no way does it suggest a god who intervenes in people’s lives or answers prayers, and it’s certainly not an argument for the god of the Bible.  It takes just as much work to go from a Deistic god to a prayer-answering god as it does to go from no god at all.

So, to summarize, we don’t know where some of our physical constants come from, or why their values are what they are.  Or rather, we don’t know yet.  But the point is that it’s okay not to know!  Not knowing is the driving force behind every facet of human inquiry.   Perhaps one day we might discover that the universe really was built by an intelligent designer. But that discovery will be made with the same scientific rigor as all discoveries before it, instead of being built upon holes in our current knowledge.

The fine-tuning argument is therefore precisely that:  an argument that depends on lack of knowledge.  I submit that this realization by itself should disqualify the argument from honest use in debates. It should also disqualify the argument from being a plausible reason for belief in a god.

I take a very minimal approach to organizing my digital photo collection. I have a single folder on my computer called “Pictures,” with subfolders corresponding to every year (2011, 2010, …) since the year I was born. Some of the years contain subfolders that correspond to noteworthy trips that I’ve taken.

This method makes it extremely easy to back up my entire photo collection by dragging the “Pictures” folder to a different drive. It also makes it easy to reference and review the photos in rough chronological order. This is why I’ve never understood the purpose of third-party “photo management” software, since most such software inevitably reorganizes the underlying directories in its own crazy way, or builds a proprietary index of photos that takes the user away from the actual directory structure. If you’re aware of the organization of your photos on your disk, then any additional management software becomes superfluous.

At any rate, there is one slight issue with this style of organizing photos: all of the various sources of photos (different cameras, scanners, cell phones, etc.) give different file names to the photos! So, when all the photos are combined into a single directory, they often conflict with each other, or at the very least become a disjointed mess. For example, the file names can be in the form DSC_xxxx, IMG_xxxx, or something similar, which isn’t very meaningful. Photos taken with cell phones are a little better; they’re usually named with the date and time the photo was taken, but the naming format is still not uniform across all cell phone manufacturers.

Thus, the optimal naming scheme for photos would be based on the date/time, but in a way that is common between all sources of photos. This would organize the photos in natural chronological order. The vast majority of cameras and cell phones encode the date and time into the EXIF block of each photo. If only there was a utility that would read each photo, and rename it based on the date/time stored within it. Well, now there is:

This is a very minimal utility that takes a folder full of photos and renames each one based on its date/time EXIF tag. As long as you set the time on your camera(s) correctly, this will ensure that all your photos will be named in a natural and uniform way.

The tool lets you select the “pattern” of the date and time that you’d like to apply as the file name. The default pattern will give you file names similar to “20111028201345.jpg” (for a photo taken on Oct 28 2011, 20:13:45), which means that you’ll be able to sort the photos chronologically just by sorting them by name!
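The core of the renaming logic is easy to sketch in Python (a sketch of my own, not the utility’s actual source; extracting the EXIF block itself is assumed to be handled by an imaging library such as Pillow, whose DateTimeOriginal tag holds a string like “2011:10:28 20:13:45”):

```python
from datetime import datetime

# EXIF stores the capture time as a string in this fixed format.
EXIF_FORMAT = "%Y:%m:%d %H:%M:%S"

def exif_name(exif_datetime: str,
              pattern: str = "%Y%m%d%H%M%S",
              ext: str = ".jpg") -> str:
    """Turn an EXIF date/time string into a chronological file name."""
    taken = datetime.strptime(exif_datetime, EXIF_FORMAT)
    return taken.strftime(pattern) + ext

# A photo taken on Oct 28 2011 at 20:13:45 gets the name from the example above:
print(exif_name("2011:10:28 20:13:45"))  # 20111028201345.jpg
```

From there, renaming is just a loop over `os.rename` calls, with a numeric suffix appended whenever two photos happen to share the same timestamp.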

# Pi is wrong! Long live Tau!

At one point or another, we’ve all had a feeling that something is not quite right in the world. It’s a huge relief, therefore, to discover someone else who shares your suspicion. (I’m also surprised that it’s taken me this long to stumble on this!)

It has always baffled me why we define $$\pi$$ to be the ratio of the circumference of a circle to its diameter, when it should clearly be the ratio of the circumference to its radius. This would make $$\pi$$ become the constant 6.2831853…, or 2 times the current definition of $$\pi$$.

Why should we do this? And what effect would this have?

Well, for starters, this would remove an unnecessary factor of 2 from a vast number of equations in modern physics and engineering.

Most importantly, however, this would greatly improve the intuitive significance of $$\pi$$ for students of math and physics. $$\pi$$ is supposed to be the “circle constant,” a constant that embodies a very deep relationship between angles, radii, arc lengths, and periodic functions.

The definition of a circle is the set of points in a plane that are a certain distance (the radius) from the center. The circumference of the circle is the arc length that these points trace out. The circle constant, therefore, should be the ratio of the circumference to the radius.

To avoid confusion, we’ll use the symbol tau ($$\tau$$) as our new circle constant (as advocated by Michael Hartl, from the Greek τόρνος, meaning “turn”), and make it equal to 6.283…, or $$2\pi$$.

In high school trigonometry class, students are required to make the painful transition from degrees to radians. And what’s the definition of a radian? It’s the ratio of the length of an arc (a partial circumference) to its radius! Our intuition should tell us that the ratio of a full circumference to the radius should be the circle constant.

Instead, students are taught that a full rotation is $$2\pi$$ radians, and that the sine and cosine functions have a period of $$2\pi$$. This is intuitively clunky and fails to illustrate the true beauty of the circle constant that $$\pi$$ is supposed to be. This is surely part of the reason that so many students fail to grasp these relationships and end up hating mathematics. A full rotation should be $$\tau$$ radians! The period of the sine and cosine functions should be $$\tau$$!
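To make the point concrete, here is the radian arithmetic spelled out. An angle in radians is arc length over radius, so a full turn subtends

$$\theta = \frac{s}{r}, \qquad \theta_{\text{full turn}} = \frac{C}{r} = \tau \approx 6.2831853\ldots,$$

and the periodicity of the trigonometric functions reads simply $$\sin(\theta + \tau) = \sin\theta$$ and $$\cos(\theta + \tau) = \cos\theta$$.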

But… wouldn’t we have to rewrite all of our textbooks and scientific papers that make use of $$\pi$$?

Yes, we would. And, in doing so, we would make them much easier to understand! You can read the Tau Manifesto website to see examples of the beautiful simplifications that $$\tau$$ would bring to mathematics, so I won’t repeat them here. You can also read the original opinion piece by Bob Palais that explores this subject.

It’s not particularly surprising that the ancient Greeks used the diameter of a circle (instead of the radius) in their definition of $$\pi$$, since the diameter is easier to measure, and also because they couldn’t have foreseen the ubiquity of this constant in virtually all sciences.

However, it’s a little unfortunate that someone like Euler, Leibniz, or Bernoulli didn’t pave the way for redefining $$\pi$$ to be 6.283…, thus missing the opportunity to simplify mathematics for generations to come.

Aside from all the aesthetic improvements this would bring, considering how vitally important it is for more of our high school students (and beyond) to understand and appreciate mathematics, we need all the “optimizations” we can get to make mathematics more palatable for them. This surely has to be an optimization to consider seriously!

From now on, I’m a firm believer in tauism! Are you?

# Good and bad science, and faster-than-light neutrinos

The results from the OPERA experiment at CERN have caused a huge stir in the media over the last two weeks, and with good reason: the experimenters claim to have measured a neutrino beam arriving 60 nanoseconds sooner than light would have.

Before we go on, let’s calm down a bit. Even if these results are somehow confirmed, it wouldn’t “prove Einstein wrong,” or cause scientists to stop using General and Special Relativity on a day-to-day basis. If anything, it would show that Einstein’s theory is incomplete, but no one is disputing this in the first place.

Relativity (general and special) has been put through dozens of independent, precise, elaborate tests, and passed every single one with astonishing accuracy, which means that there’s definitely something fundamentally correct about Einstein’s theory. It shouldn’t be thought of as some kind of “sitting duck” theory, just waiting to be overthrown.

Understandably, the current consensus among the world’s physicists seems to be that there was a measurement error in the OPERA experiment, or that the experimenters neglected to integrate some subtle factor that accounts for the missing 60 ns. (For a wonderfully accessible introduction to the OPERA experiment, as well as particle physics in general, read Matt Strassler’s blog. For a more thorough discussion of possible mistakes, read Lubos Motl’s post on the subject. It’s also worthwhile to read the comments on those blogs.)

Perhaps the most convincing evidence against this result is that we have observed neutrino emissions from supernovae (specifically SN 1987A), and those neutrinos more-or-less coincided with our observation of visible light from the same supernova. If neutrinos really traveled faster than light by the amount OPERA measured, we should have observed the neutrinos several years before we observed the light. The only loophole in this argument would be if the OPERA effect is energy-dependent, since the OPERA neutrinos had much more energy than the ones from the supernova, but that would present even more problems.
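This argument is easy to quantify with a back-of-the-envelope calculation (my numbers: the ~730 km CERN-to-Gran-Sasso baseline and 60 ns lead are the figures reported by OPERA, and ~168,000 light-years is the approximate distance to SN 1987A):

```python
# Fractional speed excess implied by OPERA: the neutrinos covered the
# ~730 km baseline about 60 ns sooner than light would have.
c = 299_792_458            # speed of light, m/s
baseline_m = 730_000.0     # approximate CERN-to-Gran-Sasso baseline, m
light_time = baseline_m / c            # light travel time, ~2.44 ms
excess = 60e-9 / light_time            # (v - c)/c, roughly 2.5e-5

# Apply the same excess to SN 1987A, ~168,000 light-years away: the
# neutrinos should have led the light by (excess * light travel time).
distance_ly = 168_000
lead_years = excess * distance_ly      # ~4 years
print(f"fractional excess: {excess:.1e}, supernova lead: {lead_years:.1f} years")
```

In reality, the SN 1987A neutrinos arrived only a few hours before the visible light, which is why that single observation constrains any neutrino speed excess far more tightly than OPERA’s short baseline ever could.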

Not being a particle physicist myself, I can’t meaningfully contribute to the discussions on theoretical implications of this experiment, if it’s actually true. I would, however, like to comment on how this story is unfolding from the point of view of the scientific method, and specifically how this story highlights the differences between real science and pseudoscience. I use “pseudoscience” to refer to homeopathy, energy healing products, reiki, dowsing, magnets, pendulums, astrology, and anything else that requires more “faith” than evidence.

In the wake of attending a New Age expo (out of morbid curiosity) and being overloaded with crackpots, quacks, and hucksters, these differences become all the more plain:

• The fact that the experimenters published any data at all is a sign of great scientific integrity. The fact that they held a press conference before the paper was peer-reviewed is a bit unfortunate, as noted by Lawrence Krauss, but I think the fact that this story made it to mainstream media outlets will help the general public understand the scientific process, as people follow the story. Pseudoscientists, on the other hand, seem to be allergic to data in general, and never publish anything.
• Essentially, the scientists of the OPERA experiment are saying, “We’ve gathered these data, we used the best possible experimental parameters, we’ve performed all the checks we could think of, and we still see this anomaly. So please, tell us what we did wrong.” This is surely science at its best! This is the kind of behavior that should be an inspiration for a whole generation of new scientists. We will never hear pseudoscientists utter that phrase.
• Real scientists don’t adhere dogmatically to any theory, no matter how foundational it may be. Even though most physicists agree that there was an error in the OPERA experiment, they still reserve a little room for the possibility that the results are correct, and that Relativity might be violated. Einstein to physicists is not the same as Chopra is to pseudoscientists.
• Real scientists expect extraordinary evidence for extraordinary claims. Most scientists agree that the evidence collected by the OPERA experiment is not extraordinary. Pseudoscientists make extraordinary claims every time they open their mouth, but present no evidence at all, except anecdotal testimonials from their friends and paid endorsers.
• If we read the blogs of popular physicists on the subject of the OPERA experiment, we find lively debates on theoretical explanations for the anomalous effect, and discussions on ways the experimenters miscalculated the speed of the neutrinos. The key point is: scientists get excited about the possibility of being proven wrong. Scientists can’t wait to be proven wrong, because it would mean that there’s more science to be done!
• Perhaps most importantly, real scientists are motivated by a desire to better understand our world. The only motivation of pseudoscientists is money, thinly veiled by a scientific-sounding sales pitch, and a nonsensical product du jour.

In any case, I encourage everyone to follow this story, because it’s a high-profile example of real science at work; a triumph of human achievement. No matter how the results turn out, by observing the process of scientific scrutiny, everyone will be better equipped to spot pseudoscience when it’s in plain sight.

I will update this post as soon as I see a quack energy-healing product that uses faster-than-light neutrinos to balance the flow of energy through your chakras. Post a comment if you find one yourself!

# Accommodation vs. confrontation

Last week I had the pleasure of speaking at a roundtable debate hosted by the Cleveland Freethinkers. The theme of the debate revolved around how atheists should present themselves in public discourse: should atheists be “accommodating” of their religious colleagues and acquaintances, or should they actively confront such acquaintances and directly challenge their beliefs at any reasonable opportunity? I was on the “confrontationalist” side, and the following is an approximate dump of some of my statements during the debate.

### A case against accommodation

The biggest problem with religion seems to be that, no matter how moderately religious a society is, it inevitably creates a slippery slope towards extremism for those few adherents who take it a bit too literally; and there will always be those few. The reason for this is that religious moderates are basically the same as religious extremists, except that the moderates have (thankfully) allowed themselves to be tempered by the secular social norms of our time. By default, religious moderates are tolerant of extremists, because after all, the extremists actually believe what they say they believe, unlike the moderates who water down their religion to make it more palatable in the modern world.

And it seems to me that, from the atheist perspective, being an “accommodationist” would only help perpetuate that same kind of slippery slope that’s already made abundant by the religious moderate majority.

My rhetorical question to the accommodationists would be, “To what end?” Surely there must be some extreme forms of religion that you’re not willing to accommodate? If you’re willing to accommodate some forms of moderate Christianity, or moderate Islam, but not the more extreme forms of the two, then that would be just as hypocritical as the moderate Christians who cherry-pick which verses of the Bible to take literally, and which ones to take metaphorically. Religion should be an all-or-nothing deal. When it’s not all-or-nothing, there’s always some hypocrisy to be found.

Speaking of hypocrisy, it feels like we have a certain amount of intellectual integrity at stake here. We atheists are, to a reasonable extent, certain about the truth of our convictions. I don’t mean to speak for everybody, but that’s generally the case; we arrive at certain conclusions with some amount of certainty, and we consider these conclusions “true,” or at least tentatively true, insofar as the scientific method allows us to define truth. We don’t “believe” in things in the same sense that religious people believe in things, because our conclusions are backed up by evidence and observations, which makes the truth of our beliefs that much more meaningful and convincing.

So, taking all of that into consideration, why on earth should we be accommodating toward beliefs that are clearly false, or beliefs that are clearly lies, or beliefs that are demonstrably harmful to the well-being of their adherents? What does it say about our intellectual integrity when we allow falsehoods to be perpetuated, no matter how much false hope or false happiness they might bring to the people who believe them? I would think that we should be doing our best to expose such beliefs for what they are, and uproot them from the consciousness of our society using tools like education, debate, and scrutiny.

There’s a theory about why religious people get so offended when their faith is questioned: religious people are actually embarrassed by the things they believe, but they just don’t consciously realize it, which is why they get so defensive when their beliefs are put under the microscope. It’s embarrassing to believe the Earth is 6,000 years old; it’s embarrassing to believe that a woman can give birth to a child without a man’s contribution to the zygote.

If I put myself in the mindset of a religious person, I can see how it would be embarrassing when science explains yet another thing that used to be attributed to God, and having my God demoted again and again, to the point where the very definition of “God” becomes so nebulous that it loses all meaning. And all I’m left with is profundities like “god is the universe,” or “god is beyond human logic,” or “god exists outside of space and time” — that’s my favorite.

The thing is, for truly religious people, that kind of embarrassment is buried deep down in their unconscious mind. Instinctively they’re perfectly aware that it’s all nonsense. But those instincts have been repressed by their conscious religious training, or indoctrination, or whatever. So when those beliefs are questioned, the conscious mind has no answer, so it turns to the unconscious mind, which says that it’s all nonsense, which directly butts heads with the conscious indoctrination, and that’s where the defensiveness and the anger comes from.

That’s only a theory, anyway. But my whole point here is that our goal as responsible atheists should be to bring that unconscious embarrassment to the foreground of consciousness. Not just the consciousness of religious individuals, but the foreground of our social consciousness. It should become outwardly embarrassing to keep believing in an all-powerful creator god. It should become embarrassing to keep believing in prayer, or believing in hell or heaven.

Believing in a god is on the same theoretical footing as believing any other figment of imagination for which you would otherwise be called crazy. It just so happened that this particular god was the one that got ingrained into the fabric of our society. But aside from that, there’s absolutely no reason that we shouldn’t attach the same kind of negative stigma to people who believe in the Abrahamic god, or the literal truth of ancient folk tales.

I’m not saying that people shouldn’t be allowed to believe whatever they like; of course they should. What I am saying is, we should work towards creating a society where the moment someone considers taking religion literally, it should be readily apparent to that person how embarrassing, counterproductive, and unwise that would be. So, in that kind of society, no one would have a reason anymore to turn to religion for any purpose, so therefore no one would have a reason to go down the slippery slope toward extremism.

That’s the kind of state we should be striving for; a state where it’s just as embarrassing to believe in the god of Abraham as it is to believe in Zeus or Poseidon or Xenu; because they are all on the same footing of pre-scientific wishful thinking. And I don’t see how accommodation will help us get there. Theism in general belongs in the Bronze Age, because it’s based on Bronze-age thinking, and because the Bronze Age is already the resting place for all other gods ever invented by men. There’s just one more to go!

When people who promote the merits of religion run out of arguments, they usually retreat to the last-resort argument, which is something like, “Well, at least religion gives people comfort, or hope, or a sense of purpose…” Well, that might be true; but the problem is that all those good things are for the wrong reasons, and all those things only happen when religion is at its very best. That’s more like an idealization of religion; that’s the infomercial promise of religion. The reality is quite different. In reality, when religion is not at its best, the same religion that brings the hope and the comfort will also bring fear, shame, intolerance, and guilt. And we know all too well what happens when religion is at its worst… it makes otherwise decent people commit unspeakably evil acts, for those very same reasons!

The other problem with that argument is that it’s rather condescending towards religious people. It assumes that religious people are too weak-minded to cope with the real world without religion, and I don’t think that’s true at all. I’m fully confident that even the most devoutly religious people will be able to find their moral bearings without a god telling them what’s right and wrong. I think people might be afraid to let go of religion, because religion has been pretty much the only option for thousands of years. But the solution to all of that, as with anything else, is education; not just education, but actively combating ignorance.

A proper education should start at the very beginning. Religion’s biggest offense is the indoctrination of young children. Nobody should have any kind of opinion or dogma forced onto them from birth, and yet this happens every day, in many millions of households, in the form of religious upbringing. I wish more of us would recoil when we hear parents label their young children as “Protestant” or “Jewish” or anything else, before the children are capable of objectively evaluating the implications of such a label.

That’s why I’m not advocating forcing atheism onto anyone. What I’m talking about is subtracting the forcing of religion (which is pretty much the definition of atheism anyway)! Atheism isn’t a viewpoint that can be forced onto someone. Atheism is a natural, “clean slate” state of mind, and children should be raised with a “clean slate” until they’re ready (and educated enough) to decide which ancient Babylonian deity to worship.

To put it plainly, we simply cannot afford to accommodate irrational beliefs anymore. It would be great to accommodate them, in theory, if only people’s irrational beliefs didn’t influence their actions, and if only people with irrational beliefs didn’t get elected to public offices, and didn’t allow their irrational beliefs to influence their policies. If that were the kind of world we lived in, then, by all means, accommodation would be very fitting and reasonable.

But we live in a country where half of the population refuses to accept basic facts about biology, and half of the population can’t tell you how long it takes for the Earth to make an orbit around the Sun. And we live in a world where we have an explosive growth of a religion that has a doctrine of military conquest literally built into it, and a growing minority of that religion that’s plotting our destruction as we speak.

We cannot afford to accommodate religions that are inherently anti-human, which all three of the world’s “great monotheisms” absolutely are. The moment when a religion places more value on things that are supposed to happen after we die, rather than focusing on doing good deeds in this life for its own sake, is a warning sign that such a religion needs to be eradicated, and fast.

Our battle is nearly vertically uphill, and the last thing we should be doing is pretending that there’s any good to be found in letting people cling on to their irrational beliefs just a bit longer. Religion’s function has been to divide people, divide communities, and stifle scientific and intellectual achievement. We should be doing our best to phase it out, instead of accommodating it. To put it as charitably as I can, religion has been the training wheels of our morality. And at some point, training wheels become more of a hindrance than a benefit. Our civilization is long overdue to take the training wheels off, and throw them away.

# A few nitpicks of Star Trek (2009)

Let me state for the record that I loved the new Star Trek movie. Given the string of TNG films over the preceding decade, the franchise was clearly in desperate need of a reboot, and J. J. Abrams did an outstanding job of it. I thought the idea of branching off into an entirely new timeline was genius, and it gives a new meaning to the very word “reboot.”

However, the new film certainly had no shortage of plot holes and scientific inaccuracies. It’s taken a while for me to crystallize my thoughts on it, but after watching it again last week on Blu-ray, I couldn’t help but jot down a few nitpicks that really stuck out in my mind. Forgive my inner nerd for really showing in this post, and please feel free to contribute your own nitpicks in the comments, or criticize mine as you see fit! And, off we go.

### A supernova that threatens the galaxy?

During his mind-meld with James Kirk, the elder Spock recounts the story that led to their current predicament.

According to Spock, a supernova explosion occurs in his time that threatens the survival of the galaxy. That’s curious… what kind of supernova is this? Granted, supernova explosions are very luminous, but a single supernova would certainly not threaten an entire galaxy, and it certainly wouldn’t carry the kind of planet-destroying force as shown in the film, at least not outside of a single star system.

Using our primitive Hubble Telescope, we have observed plenty of supernova remnants within our own galaxy that pose no threat to us whatsoever. Supernova remnants can grow to several light-years in size, but that kind of distance is still minuscule on a galactic scale.

As it is depicted in the movie, Romulus is literally torn apart by the force of the supernova explosion. This means that it must have been the actual Romulan Sun that exploded. No stellar explosion can maintain that kind of force if it originated from a different star system.

It seems unlikely that Romulan scientists didn’t anticipate their own sun going supernova many years in advance of the explosion. Stellar evolution, although not yet completely understood, is nevertheless fairly predictable. It should be relatively easy, especially for a warp-capable species, to tell if a planet’s parent star is on the verge of exploding. Romulus could have been safely evacuated well before its star reached the end of its life.

### Appalling Vulcan irresponsibility

I’m not sure I understand why the Vulcans felt it was their duty to contain the supernova. The Romulan star system is nowhere near Vulcan, so why was it up to the Vulcans to stop the explosion? OK, let’s assume for a moment that Vulcans are the only species that has “red matter” technology, so they’re the only ones who can stop the supernova by creating a black hole.

But wait… it’s well-known that the Romulans use singularities (black holes) on a routine basis as a power source for their Warbirds, so the Romulans must be perfectly capable of creating black holes themselves! Couldn’t they simply fling an abandoned Warbird into the supernova, and let the supernova be consumed by the black hole that powers the ship’s warp core?

OK, let’s assume that it was absolutely necessary for the Vulcans to handle this threat. In that case, it seems like the Vulcans handled it extremely irresponsibly, and completely contrary to logic.

Why was it the job of a geriatric diplomat (Spock) to deliver the red matter to the site of the supernova explosion? Was he going to negotiate a peace treaty with it? Couldn’t they have sent someone more appropriate, such as a team of special-forces commandos, or at least someone in better health, or even an unmanned missile that simply plunges into the supernova along with the payload of red matter, much like Dr. Soran did with trilithium in Star Trek Generations?

Why is there so much red matter aboard Spock’s ship? Seriously, if it only takes one droplet of red matter to create a black hole, why was there a comically gigantic ball of it aboard Spock’s ship? That’s enough to create a million black holes! Where else were they planning to use this much red matter?

The Vulcans should have anticipated that red matter could be used as a weapon of genocide. They should have recognized the staggering security risk of allowing red matter to come anywhere close to hostile territory. So why did they place their entire supply of red matter, capable of destroying a million planets, onto a virtually unarmed scout ship, and proceed to send the scout ship into Romulan space?! What did they think would happen?

All of this seems very irresponsible on the part of the Vulcans. Because of their short-sightedness, they’ve indirectly caused the destruction of their own homeworld, and altered the timeline for everybody else.

### Black holes and time travel

In the movie, both Nero and Spock travel backwards in time by entering a black hole (facepalm!). This is basically on the same theoretical footing as traveling back in time by performing a slingshot around a star, which made complete sense in Star Trek IV: The Voyage Home.

This isn’t the first time that black holes were portrayed as portals through time in popular media. Black holes are certainly very interesting objects to theorize about, but they’re not quite as exotic as they’re made out to be.

Objects that fall past the event horizon of a black hole do not travel backwards in time. They simply fall closer and closer to the center of the black hole, until finally they’re compressed to a single point of infinite density at the very center, adding to the mass that was already at the central point.

Of course, we don’t yet have the physics to describe the nature of the infinite-density point at the center of the black hole, which is why it’s called a singularity. But we do know that any mass that enters the black hole will remain in the black hole. It doesn’t go backwards in time, nor does it go to another dimension, or another universe. All of the mass will remain in the central singularity for the remaining lifetime of the black hole.

### Necessity of drilling

Why was it necessary for Nero to drill to the planet’s core in order to drop the red matter? If the red matter really creates a black hole, it would suffice to drop the red matter anywhere on the planet’s surface, and let the black hole consume the planet from the surface inward. Speaking of red matter…

### Red matter?

Theoretically, any amount of matter can be turned into a black hole if it’s compressed into a small enough volume (its Schwarzschild radius). For example, the Earth’s Schwarzschild radius is about 9 millimeters. That is, for the Earth to become a black hole, it would need to be compressed into a volume with a radius of 9 millimeters (about the size of a grape).

Presumably, “red matter” is an exotic form of matter that automatically collapses beyond its own Schwarzschild radius when it’s taken out of its containment field. Fair enough, but there are several major problems with this.

The most serious problem has to do with the size of the black hole that can be created with that amount of red matter. We can see from the movie that red matter is not particularly massive — we see Spock and a Romulan handling containers with samples of red matter without exerting themselves at all. Since it took only a droplet of red matter to create a black hole, let’s assume that the droplet’s mass is 1 gram. The Schwarzschild radius for any massive object is given by the following formula: $$r_\mathrm{s} = \frac{2Gm}{c^2}$$
So, for a mass of 1 gram, the Schwarzschild radius would be about $$1.5 \times 10^{-30}$$ meters, which is many orders of magnitude smaller than an atomic nucleus. A black hole of this size would pose no threat whatsoever, for two reasons.
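These numbers are easy to double-check with a couple of lines of Python (constants in SI units; the function name is my own, just for illustration):

```python
# Schwarzschild radius: r_s = 2*G*m / c^2  (SI units throughout)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius to which a mass must be compressed to form a black hole."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(5.97e24))  # Earth: ~8.9e-3 m, i.e. about 9 mm
print(schwarzschild_radius(1e-3))     # 1 gram: ~1.5e-30 m
```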

According to modern physics, black holes emit radiation with a temperature that is inversely proportional to their mass. This is known as Hawking radiation, named after Stephen Hawking, who postulated its existence. If a black hole emits radiation, it must be losing energy, which means it’s losing mass, which means it’s getting smaller! And the smaller the black hole gets, the hotter and more intense its Hawking radiation becomes. This continues until the black hole completely evaporates in a blaze of glory consisting of ultra-energetic gamma rays.

The point is, if Nero used a tiny amount of red matter to create a black hole of the same mass, the black hole would evaporate with a flash of radiation almost instantaneously. The black hole would not go on to swell up and consume the planet.
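To put a number on “almost instantaneously”: the standard estimate for the evaporation time of an uncharged, non-rotating black hole is t = 5120πG²M³/(ħc⁴). A quick Python sketch (SI constants; the function name is my own) shows that a 1-gram black hole would last on the order of 10⁻²⁵ seconds:

```python
import math

# Hawking evaporation time: t = 5120*pi*G^2 * M^3 / (hbar * c^4)
# (standard order-of-magnitude estimate; SI units throughout)
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J*s

def evaporation_time(mass_kg):
    """Approximate lifetime of a black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(evaporation_time(1e-3))  # 1 gram: ~8.4e-26 seconds
```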

Incidentally, the theory of Hawking radiation is one response to people’s concerns regarding the possibility of creating a black hole at the Large Hadron Collider. Even if we create a tiny black hole at the LHC, it would instantly evaporate in a flash of radiation, and pose no further threat.

Also, even if black holes do not evaporate due to Hawking radiation, a black hole that’s smaller than an atomic nucleus would have a hard time finding other matter to swallow up. It would take a long time indeed for such a black hole to have a noticeable effect on an entire planet.

### Where’s the Time Police?

This next nitpick doesn’t really have to do with the movie itself, but with a different Star Trek story that inadvertently shot the entire franchise in the foot.

In the Star Trek: Voyager episode “Future’s End,” it’s revealed that, in the 29th century, the Federation develops timeships that routinely patrol the timeline and attempt to eliminate any anomalies.

With this story, the writers basically negated any further possibility of having time-travel stories in Star Trek. If a starship travels back in time without “authorization,” we should expect a visit from a temporal patrol ship from the 29th century. The patrol ship would then do whatever is necessary to correct the timeline, and all would be back in order.

In the Voyager episode, the timeship Aeon travels back in time to prevent the destruction of the Solar system. One would think that the destruction of Vulcan is an equally worthy cause for a timeship to investigate, and attempt to prevent. But, of course, we see no hint of this in the movie.

### Sound in space

Having sound in space seems to be a sci-fi cliché that the writers and producers just can’t unlearn, so it’s not even a nitpick anymore. And, in all honesty, a little sound adds to the excitement of the space battle scenes, so it’s not that big a deal.

However, in this movie, they actually made an attempt to get it right! I’m referring to the space-jump scene with Kirk, Sulu, and the unimportant guy who dies. When they jump off the shuttle and fall towards the planet, no sound can be heard. As they begin to enter Vulcan’s atmosphere, more and more noise is heard around them. This is absolutely correct!

So why couldn’t the movie take this excellent example and run with it, namely by getting rid of all sound in the scenes that take place in outer space? All of the battle scenes and space explosions still have the usual sounds associated with them, without any regard for the fact that there’s no medium for the sound to travel through. But I digress.

### Epilogue

Well, that’s it for now, and thanks for indulging me. As I mentioned, this movie is a worthy successor to all the previous Star Trek films, as well as simply an excellent movie in its own right. I’m looking forward to the sequel(s).

In the meantime, all of the current sci-fi franchises, including Star Trek, would do well to hire some better scientific consultants. Maybe they can hire me?

# Discovering the 3D Mandelbulb

There is some exciting news this week in the world of fractals. Daniel White, on his website, describes what is apparently a completely new type of fractal, and the closest analog so far to a true 3-dimensional Mandelbrot set!

Although White mentions that this is probably not the “true” 3D Mandelbrot, the new fractal is undoubtedly a sight to behold, especially considering the renderings he showcases on his webpage.

Unable to contain my enthusiasm, I quickly wrote up a small program that uses OpenGL to display this shape in 3D, to get a feel for what this beast looks like from all angles. Don’t get too excited: the program doesn’t render the shape in real time; it displays the points rendered so far in real time, while the actual rendering process can take a minute or so.

The program basically renders the 3D shape by constructing a “point cloud” that approximates the edge of the fractal.
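For anyone curious what that looks like in code, here is a rough Python sketch (my own simplification, not the actual C# source) of the two pieces involved: the standard White/Nylander power-n iteration for testing whether a point belongs to the Mandelbulb, and a naive grid scan that keeps only the “surface” points for the cloud:

```python
import math

def in_mandelbulb(cx, cy, cz, power=8, max_iter=12, bailout=2.0):
    """Test whether c = (cx, cy, cz) appears to lie inside the power-n
    Mandelbulb, using the White/Nylander formula z -> z^n + c."""
    x = y = z = 0.0
    for _ in range(max_iter):
        r = math.sqrt(x*x + y*y + z*z)
        if r > bailout:
            return False  # the orbit escaped: the point is outside the set
        # Convert to spherical coordinates (clamp to guard against rounding).
        theta = math.acos(max(-1.0, min(1.0, z / r))) if r > 0 else 0.0
        phi = math.atan2(y, x)
        rn = r ** power
        # "z^n" in spherical form: raise the radius to the nth power
        # and multiply both angles by n, then add c.
        x = rn * math.sin(power * theta) * math.cos(power * phi) + cx
        y = rn * math.sin(power * theta) * math.sin(power * phi) + cy
        z = rn * math.cos(power * theta) + cz
    return True  # never escaped within max_iter: assume inside

def point_cloud(resolution=24, lo=-1.2, hi=1.2):
    """Scan a cubic grid and keep the points that are inside the set but
    have at least one outside neighbor: an approximation of the surface."""
    step = (hi - lo) / resolution
    inside = {}
    for i in range(resolution + 1):
        for j in range(resolution + 1):
            for k in range(resolution + 1):
                inside[(i, j, k)] = in_mandelbulb(lo + i*step, lo + j*step, lo + k*step)
    neighbors = ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))
    return [(lo + i*step, lo + j*step, lo + k*step)
            for (i, j, k), ok in inside.items()
            if ok and not all(inside.get((i+di, j+dj, k+dk), False)
                              for di, dj, dk in neighbors)]
```

The real program is of course far more elaborate (multithreading, adjustable resolution, coloring), but the structure is the same: an inside/outside test driven by the spherical-coordinate power formula, repeated over a grid of candidate points.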

Everything in the program should be relatively self-explanatory, but here’s a brief overview of the features so far:

• The program lets you click-and-drag the rendered shape to rotate it in trackball fashion (left mouse button), as well as zoom in and out (right mouse button).
• The program lets you select the “power” of the Mandelbulb formula, as well as the number of iterations to perform.
• The program lets you select the resolution of the point cloud.
• It gives you a “selection cube” with which you can select a subset of the shape to zoom in on (with the “zoom to cube” button).
• It has a number of other minor features like fog and anti-aliasing.
• It uses multiple threads to render the shape, so it will take advantage of multiple cores/processors.

Here are some additional screen shots:

Manipulating the selection cube:

After zooming in on the cube:

Zooming in further:

Looking inside:

Colorized points:

The program was written in C# .NET, using the Open Toolkit Library (OpenTK) which provides an excellent OpenGL wrapper.

Of course, this program is very much in its early stages, so don’t expect it to be perfect. As always, comments and suggestions are welcome!