
My BASIC beginnings

Edsger Dijkstra was absolutely right when he said, “Programming in BASIC causes brain damage.” (Lacking a source for that quote, I found an even better quote that has a source: “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”)

When I reflect on my (not-too-distant) programming infancy, I often think about what I might have done differently: what technologies I could have learned, which ones I should have avoided, what algorithms I could have used in the software I wrote back then, and so on.

But there’s one thing that really stands out more than anything else: starting out with BASIC was the worst thing I could have done.

I’m not sure how useful it is to talk about this now, since BASIC has pretty much gone extinct, and rightly so, but it feels good to get it off my chest anyway.

My parents and I immigrated to the U.S. in 1991, when I was 10 years old, and I had never laid eyes on a personal computer before that time. During my family’s first few months in the States, we acquired an 80286 IBM PC, which was probably donated to us by an acquaintance (since the 80386 architecture was already prevalent at that time, and the 80486 was the cutting edge).

I also happened to come across a book called BASIC and the Personal Computer by Dwyer and Critchfield. I was instantly fascinated by the prospect of programming the computer, and the boundless possibilities that computer software could provide.

However, I made a critical error that would hinder my programming development for at least a year: I reached the erroneous conclusion that BASIC was the only language there was!

I had no idea that BASIC was an interpreted language, or indeed what the difference was between an interpreted and a compiled language. I thought that all software (including the games I played, Windows 3.0, WordPerfect, etc.) was written in BASIC! This unfortunately led me down an ill-fated path of self-study, which took an even greater effort to undo.

I learned all there was to know about BASIC programming in a few months (starting with GW-BASIC, then moving to QuickBASIC), and then I started to notice certain things about the software I was trying to write.

No matter how I tried, I couldn’t make my programs as fast as the other software I used. I couldn’t understand why this was the case. Also, the graphics routines in BASIC were virtually nonexistent, so I was baffled by how anyone could write games with elaborate graphics, scrolling, and responsive controls. I was eager to start developing games that would rival my favorite games at the time, like Prince of Persia, Crystal Caves, and Commander Keen. But the graphics and responsiveness of those games were orders of magnitude beyond what I could achieve with my BASIC programs.

With all this frustration on my mind, I was determined to find the reason why my programs were so limited. I soon found a solution, but once again it was the wrong one! I stumbled upon some example BASIC code that used assembly language subroutines (encoded as DATA lines in the BASIC program), as well as INTERRUPT routines that took advantage of the underlying DOS and BIOS services.

This led me down the path of learning Intel 286 assembly language (another few months of studying), and encoding it into my BASIC programs! This solved the issue of responsiveness, but there was still the issue of graphics, or lack thereof. Fortunately, I found a book at the local public library about VGA graphics programming. Even more fortunately, the book contained sample source code, using a language they called C….

And my eyes were opened!

It hit me like a freight train. I almost burst out laughing right there at the library. I realized that I had been learning the wrong things all along! (Of course learning assembly language was sort of right, but my application of it was still misguided.)

Learning C and C++ from that point forward wasn’t particularly difficult, but I still feel like it would have been a lot easier if my mind hadn’t been polluted by the programming style and structure that I learned from BASIC. It makes me wonder how things might have been different, had I accidentally picked up a book on C++ instead of a book on BASIC during my earliest exploits with computers.

In all fairness, I’m sure I learned some rudimentary programming principles from BASIC, but I’m not sure that this redeems BASIC as a learning tool. There were just too many moments where, while learning C++, I thought, “So that’s the way it really works!” And I’m sure it’s also my fault for trying to learn everything on my own, instead of seeking guidance from someone else who might have told me, “You’re doing it wrong.”

All of this makes me wonder what programming language would be appropriate for teaching today’s generation of young programmers. Based on my comically tragic experience with BASIC, my gut instinct is to advise aspiring developers to stay away from interpreted languages (such as Python), or at the very least understand that the interpreted language they’re learning is useless for developing actual software. I don’t think there’s any harm in diving right into a compiled language (such as C++), and learning how it hugs the underlying hardware in a way that no interpreted language ever could.

That being said, I don’t wish any of this to reflect negatively on Dwyer and Critchfield’s BASIC and the Personal Computer. It’s a solid book, and I still own the original copy. There’s no denying that it was one of the first books that got me interested in programming, and for that I’m thankful. However, I sometimes regret that I didn’t find Stroustrup’s The C++ Programming Language at the same garage sale where I found BASIC and the Personal Computer. Alternatively, perhaps Dwyer and Critchfield could have included the following disclaimer in large bold letters: This is not the way actual software is written! But perhaps it’s time to let it go. I didn’t turn out so bad, right?

DiskDigger now available for Android!

I’m happy to announce that DiskDigger is now available for Android devices (phones and tablets running rooted Android 2.2 and above)! You can get the app by searching for it on the Google Play Store from your Android device.  Please note that the app only works on rooted devices.

At the moment, the app is in an early beta stage, meaning that it’s not yet as powerful or complete as the original DiskDigger for Windows, and is still in active development. Nevertheless, it uses the same powerful carving techniques to recover .JPG and .PNG images (the only file types supported so far; more will follow) from your device’s memory card or internal memory.
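If you’re curious what “carving” means here, the sketch below is a greatly simplified illustration in Python (not DiskDigger’s actual code): it scans a raw dump of a memory card for the JPEG start-of-image signature (FF D8 FF) and saves everything up to the following end-of-image marker (FF D9). A real carver also has to deal with fragmentation, embedded thumbnails, false positives, and so on.

    def carve_jpegs(dump_path, out_prefix="recovered"):
        """Naively extract JPEG files from a raw disk/memory-card image."""
        with open(dump_path, "rb") as f:
            data = f.read()
        count, pos = 0, 0
        while True:
            start = data.find(b"\xff\xd8\xff", pos)   # JPEG start-of-image marker
            if start < 0:
                break
            end = data.find(b"\xff\xd9", start + 3)   # JPEG end-of-image marker
            if end < 0:
                break
            with open(f"{out_prefix}_{count:04d}.jpg", "wb") as out:
                out.write(data[start:end + 2])        # include the trailing FF D9
            count += 1
            pos = end + 2
        return count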

So, if you’ve taken a photo with your phone or tablet and then deleted it, or even reformatted your memory card, DiskDigger can recover it!

I’ve written a quick guide that has more information and a brief introduction to using the app! If you have questions, comments, or suggestions about the app, don’t hesitate to share them!

Update: thanks to Lifehacker for writing a nice article!

Refuting the Fine-Tuning Argument

During a recent friendly debate with some religious acquaintances, I was asked if I could name any arguments for the existence of a god that actually seem “plausible” to me on any level. Suffice it to say that none of the standard religious arguments are in any way convincing, given a moment’s thought. However, there is a relatively recent argument that’s been gaining popularity over the last few years, and it requires more than a trivial amount of effort to dismiss. This is the argument from fine-tuning.

In case you’re not aware of the argument, it takes the following form:

Take any physical constant that we know of (e.g. the coupling constant of the strong nuclear force, the cosmological constant governing the expansion of the universe, etc). If that constant had been a fraction of a percent different, then life wouldn’t exist (or star formation wouldn’t be possible, or the universe would collapse back in on itself, etc). Therefore, there must have been some intelligent agent who created the universe with the precise physical constants needed for stars and planets to form, and for life to eventually arise.

There’s no denying that it sounds like an interesting, even powerful argument. In fact, some people with whom I’ve recently spoken claim this as the most compelling argument for their continued belief in a god.

Well, let’s analyze this argument carefully, and see why it, too, ends up being less than convincing.

To begin, the universe isn’t exactly overflowing with life. The universe is more than 99.99% empty space. Most of our solar system is completely uninhabitable, except for a small rocky planet that is on a constant knife-edge of environmental stability, and is just one asteroid away from mass extinction. It certainly doesn’t appear like the universe was created with us “in mind.” If anything, our presence in the universe is an infinitesimal smear polluting a stupefyingly vast nothingness. Some “design,” wouldn’t you say?

I might be willing to believe in the fine-tuning argument if we had discovered that there was no universe beyond the Earth, and that the sky was just a canopy above the Earth, with the stars being points of light on the canopy. This should sound familiar: it’s what we believed two thousand years ago, before we learned better.

So it seems that the desire to believe the fine-tuning argument is a throwback to the pre-scientific need to feel special, and to cling to the infantile philosophy that the universe was made specially for us. But every lesson we’ve learned from science over the last 500 years has been a lesson in humility. With each discovery in physics or astronomy, we find that we’re less and less special.

It’s also a form of argument from ignorance, and it smells of the “god of the gaps” fallacy. It says that just because physicists don’t yet understand where Constant X comes from, it must have been designed by a supreme designer.

Just because an “unexplained” constant exists in physics doesn’t mean that it’s free to be adjusted. No one argues that “if π were equal to 3 instead of 3.14…, then mathematics wouldn’t be possible.” It’s meaningless to change the value of π, because π simply represents a geometric relationship between circles and diameters. In other words, the value of π is not a degree of freedom for the universe. The same could very well be true for many of the physical constants we haven’t explained yet.

At the same time, it’s possible that there are many other universes apart from this one, where physical constants are in fact different, and we’ve simply won a lottery of universes by being born in this one, just like we’ve won a lottery of planets by being born on this planet, and not another similar planet in a distant star system.

The more basic point I’m approaching here is that physicists don’t yet have an explanation for a great many things. We’ve had quantum mechanics for less than 100 years. We don’t have an explanation for the expansion of the universe. We haven’t unified all the forces yet; we unified electromagnetism and the weak nuclear force only about 40 years ago. So it’s extremely premature to say that we know anything about these constants in any deeper sense than “they exist.” And it’s absolutely presumptuous and unwarranted to claim not only that you have a deeper understanding of the physical constants than all the physicists in the world, but that you have specific knowledge that a designer-god tweaked some knobs to make the constants the way they are.

We’re in no position to make any judgment about this, given the state of our current knowledge of actual physics.  And anyone who claims to have special knowledge about where the physical constants come from deserves suspicion by default.

Lastly, even if we suppose that the fine-tuning argument suggests some kind of god, the only type of god it can possibly suggest is a sort of Deistic god: a god who might have “created” the universe and then left it alone. In no way does it suggest a god who intervenes in people’s lives or answers prayers, and it’s certainly not an argument for the god of the Bible. It takes just as much work to get from a Deistic god to a prayer-answering god as it does to get there from no god at all.

So, to summarize, we don’t know where some of our physical constants come from, or why their values are what they are. Or rather, we don’t know yet. But the point is that it’s okay not to know! Not knowing is the driving force behind every facet of human inquiry. Perhaps one day we might discover that the universe really was built by an intelligent designer. But that discovery will be made with the same scientific rigor as all discoveries before it, instead of being built upon holes in our current knowledge.

The fine-tuning argument is therefore precisely that: an argument that depends on lack of knowledge. I submit that this realization by itself should disqualify the argument from honest use in debates. It should also disqualify the argument from being a plausible reason for belief in a god.

Correctly naming your photos

I take a fairly minimal approach to organizing my digital photo collection. I have a single folder on my computer called “Pictures,” with subfolders that correspond to every year (2011, 2010, …) since the year I was born. Some of the years contain subfolders that correspond to noteworthy trips that I’ve taken.

This method makes it extremely easy to back up my entire photo collection by dragging the “Pictures” folder to a different drive. It also makes it easy to reference and review the photos in rough chronological order. This is why I’ve never understood the purpose of third-party “photo management” software, since most such software inevitably reorganizes the underlying directories in its own crazy way, or builds a proprietary index of photos that takes the user away from the actual directory structure. If you’re aware of the organization of your photos on your disk, then any additional management software becomes superfluous.

At any rate, there is one slight issue with this style of organizing photos: all of the various sources of photos (different cameras, scanners, cell phones, etc.) give different file names to the photos! So, when all the photos are combined into a single directory, they often conflict with each other, or at the very least become a disjointed mess. For example, the file names can be in the form DSC_xxxx, IMG_xxxx, or something similar, which isn’t very meaningful. Photos taken with cell phones are a little better; they’re usually composed of the date and time the photo was taken, but the naming format still isn’t uniform across all cell phone manufacturers.

Thus, the optimal naming scheme for photos would be based on the date/time, but in a way that is common to all sources of photos. This would organize the photos in natural chronological order. The vast majority of cameras and cell phones encode the date and time into the EXIF block of each photo. If only there were a utility that would read each photo and rename it based on the date/time stored within it. Well, now there is:

Download it now! (Or browse the source code on GitHub)

This is a very minimal utility that takes a folder full of photos and renames each one based on its date/time EXIF tag. As long as you set the time on your camera(s) correctly, this will ensure that all your photos will be named in a natural and uniform way.

The tool lets you select the “pattern” of the date and time that you’d like to apply as the file name. The default pattern will give you file names similar to “20111028201345.jpg” (for a photo taken on Oct 28 2011, 20:13:45), which means that you’ll be able to sort the photos chronologically just by sorting them by name!
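For the curious, here is a minimal sketch of the same idea in Python (not the actual utility’s source; see GitHub for that). It assumes the Pillow library is installed and that each photo carries a DateTime EXIF tag:

    import os
    from datetime import datetime
    from PIL import Image  # Pillow

    def rename_photos(folder, pattern="%Y%m%d%H%M%S"):
        """Rename every JPEG in 'folder' to its EXIF date/time, e.g. 20111028201345.jpg."""
        for name in sorted(os.listdir(folder)):
            if not name.lower().endswith((".jpg", ".jpeg")):
                continue
            path = os.path.join(folder, name)
            with Image.open(path) as img:
                exif = img.getexif()
            # Tag 306 is the EXIF "DateTime" field, stored as "YYYY:MM:DD HH:MM:SS".
            raw = exif.get(306)
            if not raw:
                continue  # no timestamp; leave the file alone
            taken = datetime.strptime(str(raw).strip(), "%Y:%m:%d %H:%M:%S")
            ext = os.path.splitext(name)[1].lower()
            new_path = os.path.join(folder, taken.strftime(pattern) + ext)
            if new_path != path and not os.path.exists(new_path):
                os.rename(path, new_path)

The core idea is exactly what the utility does: read the timestamp, format it according to the chosen pattern, and rename the file.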

Pi is wrong! Long live Tau!

At one point or another, we’ve all had a feeling that something is not quite right in the world. It’s a huge relief, therefore, to discover someone else who shares your suspicion. (I’m also surprised that it’s taken me this long to stumble on this!)

It has always baffled me why we define π to be the ratio of the circumference of a circle to its diameter, when it should clearly be the ratio of the circumference to its radius. This would make π become the constant 6.2831853…, or 2 times the current definition of π.

Why should we do this? And what effect would this have?

Well, for starters, this would remove an unnecessary factor of 2 from a vast number of equations in modern physics and engineering.
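To pick one well-known example, the angular frequency of a periodic signal is usually written with a factor of 2π that would simply disappear:

    \omega = 2\pi f \quad\longrightarrow\quad \omega = \tau f

that is, one full turn per cycle, with no stray factor of 2.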

Most importantly, however, this would greatly improve the intuitive significance of π for students of math and physics. π is supposed to be the “circle constant,” a constant that embodies a very deep relationship between angles, radii, arc lengths, and periodic functions.

The definition of a circle is the set of points in a plane that are a certain distance (the radius) from the center. The circumference of the circle is the arc length that these points trace out. The circle constant, therefore, should be the ratio of the circumference to the radius.

To avoid confusion we’ll use the symbol tau (\tau) to be our new circle constant (as advocated by Michael Hartl, from the Greek τόρνος, meaning “turn”), and make it equal to 6.283…, or 2\pi.
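In symbols, the proposed definition is simply:

    \tau \equiv \frac{C}{r} = \frac{2\pi r}{r} = 2\pi \approx 6.2831853\ldots

where C is the circumference and r is the radius of any circle.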

In high school trigonometry class, students are required to make the painful transition from degrees to radians. And what’s the definition of a radian? It’s the ratio of the length of an arc (a partial circumference) to its radius! Our intuition should tell us that the ratio of a full circumference to the radius should be the circle constant.

Instead, students are taught that a full rotation is 2\pi radians, and that the sine and cosine functions have a period of 2\pi. This is intuitively clunky and fails to illustrate the true beauty of the circle constant that \pi is supposed to be. This is surely part of the reason that so many students fail to grasp these relationships and end up hating mathematics. A full rotation should be \tau radians! The period of the sine and cosine functions should be \tau!
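To make this concrete: the radian measure of an angle is the subtended arc length s divided by the radius, so a full rotation (s = C) comes out to exactly one \tau, and the trigonometric functions repeat every \tau:

    \theta = \frac{s}{r}, \qquad \theta_{\text{full}} = \frac{C}{r} = \tau, \qquad \sin(\theta + \tau) = \sin\theta, \quad \cos(\theta + \tau) = \cos\theta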

But… wouldn’t we have to rewrite all of our textbooks and scientific papers that make use of \pi?

Yes, we would. And in doing so we would make them much easier to understand! You can read the Tau Manifesto website to see examples of the beautiful simplifications that \tau would bring to mathematics, so I won’t repeat them here. You can also read the original opinion piece by Bob Palais that explores this subject.

It’s not particularly surprising that the ancient Greeks used the diameter of a circle (instead of the radius) in their definition of \pi, since the diameter is easier to measure, and also because they couldn’t have foreseen the ubiquity of this constant in virtually all sciences.

However, it’s a little unfortunate that someone like Euler, Leibniz, or Bernoulli didn’t pave the way for redefining \pi to be 6.283…; it was a missed opportunity to simplify mathematics for generations to come.

Aside from all the aesthetic improvements this would bring, and considering how vitally important it is for more of our high school students (and beyond) to understand and appreciate mathematics, we need all the “optimizations” we can get to make mathematics more palatable for them. This surely has to be an optimization worth considering seriously!

From now on, I’m a firm believer in tauism! Are you?