On the Bible Being Divinely Inspired

[read the disclaimer before proceeding]

The following are several points I’ve thought about recently that seem to contradict the idea of a divinely inspired Bible.

If God chose to reveal his “word” to mankind, why did he do it at a point in history when human spirituality was in its infancy, when people were still struggling with their own primitive mythology and completely unprepared for such a revelation? For that matter, why did God make his “word” so similar to other competing mythologies, almost as if it had been derived from earlier forms of the same beliefs?

Why did God reveal his “word” to such a local group of people, instead of revealing it to every person in the world simultaneously, thus preventing the possibility of competing religious beliefs? Why did he leave it up to the people to “spread” the message to others, who may or may not believe, thereby causing bitter worldwide conflicts that threaten the very survival of our civilization?

Wouldn’t it be better if God made his revelation right now, in our time? Think about all the problems this would solve:

  • All events would be well-documented by eyewitnesses and the media.
  • We wouldn’t have to rely on a translation of 2000-year-old fragments of text written by second-hand sources. This would be a brand-new and complete message directly from God.
  • Since the revelation would now be in plain modern English, all debates over translation inaccuracies would end. As for contradictions and inconsistencies, I’m certain we could persuade God to clarify certain points if needed, now that he is taking a more active role in his creation.
  • Once and for all, we would be certain which religion and which denomination is the correct one!

Why is the “scientific” content of the Bible so suspiciously similar to the sum of human scientific knowledge at the time? (the firmament, flat earth, etc.)

Why has all direct communication with God ceased since pretty much the beginning of the modern age? Why are there no modern-day prophets or saints who perform actual miracles? Why did all the “magical” events in the Bible occur only in a time when people were gullible enough to believe they could happen?

Perhaps the strongest point against the divinity of the Bible is the fact that it can be interpreted in a million different ways, most of which are completely incompatible. This caused the fragmentation of the original Church into hundreds of denominations, with many displaying fierce hostility toward others.

Why a divine being could not write a better book is beyond me.

Loebner Prize misguided?

In one of Alan Turing’s most noted papers, “Computing Machinery and Intelligence” (1950), Turing describes a test for machine intelligence, where a human “judge” would attempt to hold a conversation with two consoles, one operated by a human, and the other by a machine (with the judge unaware which is which). If, throughout the conversation, the judge cannot distinguish between the human and the machine, then the machine can be considered intelligent. Although it seems simplistic and rudimentary, this test can be quite useful since it circumvents any requirement to define or quantify “intelligence” or any aspects of it. It simply assumes that humans are intelligent, and if a machine can simulate human responses, then it must also be equally intelligent.

The first formal implementation of the Turing Test was organized in 1991 by Dr. Hugh Loebner, a somewhat eccentric figure who is also an activist for the decriminalization of prostitution. Dr. Loebner has held the competition every year since. The home page of the Loebner Prize contains transcripts of the conversations held between the judges and the various finalist programs.

Maybe it’s because I’m looking at the transcripts through the eyes of a software engineer, but I found the programs’ responses laughably crude and robotic. I fail to see how any human judge could attribute any “human” qualities to the programs’ output. It is trivial to observe how the programs randomly regurgitate a block of words spoken by the human, or, when asked a question they weren’t programmed to answer, spout off a random cliche to divert the judge’s attention from the program’s incompleteness.
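The regurgitate-or-deflect behavior is trivial to reproduce. Here is my own toy sketch of an ELIZA-style responder (not code from any actual contestant, and the keywords and cliches are made up for illustration):

```python
import random

# Toy ELIZA-style responder: match a keyword and return a canned reply,
# echo the tail of the input back as a question, or deflect with a cliche.
RULES = {
    "mother": "Tell me more about your mother.",
    "computer": "Do machines worry you?",
}
CLICHES = ["I see.", "Please go on.", "How does that make you feel?"]

def respond(line: str) -> str:
    lowered = line.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    # No keyword matched: regurgitate part of the user's input...
    words = line.strip(".!?").split()
    if len(words) > 3:
        return "Why do you say " + " ".join(words[-3:]) + "?"
    # ...or spout off a random cliche to divert attention.
    return random.choice(CLICHES)

print(respond("I think my computer hates me"))  # -> Do machines worry you?
```

A few dozen such rules are enough to sustain the illusion of a conversation for a surprisingly long time, which is precisely the problem.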

Upon examining the transcripts from the earlier years of the competition (around 1994), and comparing them to the latest results (2004), something even more disturbing becomes clear: the sophistication of these programs has not changed one bit! Of course, some will say the programs have grown more sophisticated internally, perhaps with a larger vocabulary. Conversationally, however, they are virtually no different from the very first ELIZA implementation.

It seems to me that this kind of competition has more to do with behavioral psychology than computer science. It is, as some have called it, a beauty contest. In essence, the Loebner Prize would be awarded to the program that can do the best job of fooling a person into believing that it’s human, which, apparently, isn’t too difficult. This leads me to conclude that the Loebner competition, perhaps even the Turing test, is misguided at best. Since when does machine intelligence have to be expressed in the form of human conversation? If we are to expect a machine to sound remotely human, we would need to supply it with all of the life experiences of a human being, complete with sensory data (images, sounds, smells), memories from childhood, and fundamental instincts like self-preservation, the desire to learn, and the need to socialize.

In short, for a machine to become intelligent in the human sense, it would need to lead a human life from its conception. An example of such a machine may be an android that is perfectly disguised as a human being and made to interact with humans. It would be even better if the android itself is made to believe that it is human.
But to expect a computer console application (no matter how complex), without any real sensory input except keyboard clicks, to ever respond like a human being is misguided indeed.

A Disclaimer

This is a general disclaimer — a grain of salt, if you will — that should be taken before reading my posts on matters of philosophy and religion.

My degree is in computer science, and I am employed full-time. No matter how much spare time I devote to reading philosophical works, I remain, at best, a dilettante of philosophy in an academic sense.

For those readers who have advanced degrees in philosophy, I can only apologize in advance for any choice of words you may find sophomoric, or for any blatant errors you may discover in my posts. In either case, you’re always welcome to leave a comment or send me an e-mail.

That being said, I do strongly believe that I am well-qualified to comment on philosophical matters, if only in a recreational capacity. My qualifications for indulging in these philosophical musings are very simple: I am a living human being, capable of independent logical reasoning. I believe that it’s every person’s right, if not responsibility, to evaluate his or her beliefs as objectively as possible.

Being a computer scientist gives me an additional affinity towards logical thought, which is why most of my posts are written from a perspective of pure logic.

I realize that a lot of my arguments may not be new, and are probably thoroughly covered elsewhere. It is simply my desire to be heard, to encourage conversation, and to ensure that my words will contribute to an ever-growing body of rational thought on the web.

Getting the Damn STB TV Tuner Working in RedHat 9

My computer came with a PCI TV tuner card from STB. The manufacturer provided drivers for Windows 98, but of course, shortly thereafter, the company disappeared from the face of the earth. Obtaining a suitable driver for Windows XP was nearly impossible until the recent development of open-source WDM drivers for all BT848-based tuner cards. Getting the card to work under Linux, on the other hand, was a bit tricky but not at all impossible. This is a brief log of the steps I took to get the damn thing to work under RedHat 9.

At first I tried to use xawtv just to see if I could get a picture. And in fact, it actually showed Channel 3, which got me excited. However, there was no sound, and I couldn’t change the channel. I knew that the configuration of bttv was somehow wrong. After poring over the BTTV HOWTO document, I came up with the following lines to add to /etc/modules.conf:
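(The values below are a reconstruction from the BTTV HOWTO’s card list; card=3 and tuner type=2 are assumptions for an STB bt848 card, so verify them against the CARDLIST file that ships with your kernel’s bttv driver.)

```
# card= and tuner type= values are guesses for an STB bt848 card;
# check the bttv CARDLIST file for the numbers matching your hardware
alias char-major-81 videodev
alias char-major-81-0 bttv
options bttv card=3 radio=1
options tuner type=2
```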

The parameters specified above correctly identify the STB TV PCI card, and even enable FM radio functionality, which the card supports.

After a reboot, xawtv worked wonderfully. However, there was soon a new problem: I installed an updated accelerated video driver from NVIDIA, which made xawtv crash with a segmentation fault. After searching the web for answers, I found the following solution: simply start xawtv with the command line xawtv -device /dev/video0. That’s it!

One more minor issue was getting the program called tvtime to work. This program is vastly superior to xawtv, but it had a slight problem where it automatically turned up the tuner volume all the way, and let the user control the volume through the mixer. This wasn’t good because the STB card clips the audio if it’s above 50% volume, so it sounded really distorted and rectified. All I needed to do to fix this was find the line of code where tvtime sets the tuner volume, and change the default number. The number that it was setting the volume to was 60000 (presumably the maximum is 65535). So I changed it to 32000 and recompiled. It now runs marvelously.
(Update 10-24-04: I submitted a bug report to tvtime, and the author added a preference to control the audio gain on the tuner card itself. Thanks!)
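As a quick sanity check on those numbers (assuming the tuner volume is a 16-bit value where 65535 is full scale, as the figures above imply):

```python
# Tuner volume as a fraction of full scale, assuming a 16-bit
# register with 65535 as the maximum.
FULL_SCALE = 65535

for volume in (60000, 32000):
    percent = 100 * volume / FULL_SCALE
    print(f"{volume} -> {percent:.1f}% of full scale")
```

The default of 60000 comes out to about 91.6% of full scale, well into the card’s clipping range, while 32000 lands at about 48.8%, just under the 50% threshold.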

Getting the Damn Aureal Sound Card Working in RedHat 9

This is a brief how-to on getting my Aureal-based sound card (Turtle Beach Montego) to work under RedHat Linux 9. I’m sure I’m not the only one who owns such a card, so this might be useful for someone in the future.

RedHat did not recognize my sound card upon installation, so naturally I thought it wasn’t supported. This was until I stumbled upon a driver at SourceForge that purported to provide support for Aureal-based cards.

I downloaded the project’s distribution files, and tried to compile it… but the compiler couldn’t get past 10 lines of code before it choked. The source files were obviously written for an earlier version of the kernel.

However, all hope was not lost. I downloaded a CVS snapshot of the project (instead of the distribution) and tried to compile that. Miraculously, that produced only two errors. All I had to do was comment out the two offending lines of code, and it compiled successfully. The output was a kernel module called au8830.o.

Installing it was a different issue altogether.

This driver uses a combination of open- and closed-source code. The closed-source portion that came with the driver was compiled with a much earlier version of gcc. Because of this, insmod refused to load the module, complaining that it needed to be compiled with a gcc version greater than 3. Fortunately, insmod can be forced to load the module anyway with the -f option.

With the -f option, insmod tried to load it, but reported a few unresolved externals. I realized that I needed to load the soundcore module first. After loading soundcore, I tried loading my module again, and what do you know? It loaded successfully!

/dev/dsp was working like a charm, but sound in KDE (aRts?) wasn’t loading properly. I went into the KDE Control Center, into the Sound Server tab, and changed the server startup setting from “autodetect” to “Threaded OSS.” That seemed to do the trick, and it’s been working fine ever since.

To automate the process of loading the module at boot time, I edited my /etc/modules.conf file and added the following lines:
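(The exact lines are a reconstruction; the directives below follow the modules.conf syntax described in the modutils documentation, and the insmod invocation assumes au8830.o can be found by name on the module path — adjust it to the actual location if not.)

```
# Load soundcore before au8830, and force-load au8830 with -f.
# The module is located by name here; use a full path to au8830.o
# if insmod cannot find it.
alias sound-slot-0 au8830
pre-install au8830 /sbin/modprobe soundcore
install au8830 /sbin/insmod -f au8830
```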

This will ensure that the au8830 module will be loaded after the soundcore module, and that the au8830 module will be force-loaded with the -f option.