Homeowner’s log, March 2023

I finally installed an outdoor spigot on the far side of the house for watering the garden, something I’ve been meaning to do for months and needed to finish before the start of this year’s gardening season. I’ve never done proper “plumbing” before and was fully prepared for the hassle of soldering copper pipes, but then I stumbled on a much simpler solution: SharkBite connectors!

I located a cold-water supply line in the basement that was perfect for splicing into:

  • It’s close to the desired location of the outdoor spigot.
  • The pipe is fully exposed, which makes it ideal for experimenting with SharkBite fittings, in case something goes wrong or a fitting starts leaking.
  • There are shutoff valves nearby on either side of the spot where I want to splice into, meaning that I won’t need to drain the water from the whole house.

SharkBite fittings don’t require any soldering: they simply push onto the pipe, where a stainless-steel grab ring grips the copper and a rubber O-ring inside the fitting forms the watertight seal. This seems almost too good to be true, which is why some professional plumbers are distrustful of SharkBite fittings. But from what I can tell, as long as they’re installed properly, they’re every bit as reliable as traditional soldered copper joints. Since the pipe on which I used these fittings is exposed, I’ll be able to monitor it for any problems, and I’ll report back if there is any leakage.

image

For the length of new piping that leads to the outdoor spigot, I used PEX tubing, a lighter, cheaper, and more durable alternative to copper. The only new tools I needed to purchase were a pipe cutter for the copper, a dedicated cutter for the PEX, and a crimping tool for the connectors that join the segments of PEX tubing. The total cost of all tools and materials was about $100, and the installation took no more than an hour! That’s an enormous savings over hiring a professional plumber, and while I encourage everyone to hire local professionals for jobs beyond your comfort level, if you’re considering simple plumbing work that doesn’t touch “critical” portions of your house, then SharkBite fittings and PEX tubing are great options.

Artificial stupidity

As we observe the meteoric rise of LLMs (large language models) and GPTs (generative pre-trained transformers), I’m feeling two distinct emotions: annoyance and depression.

I’m annoyed because even the best of these models (GPT-4 being the current version at the time of writing) have serious fundamental flaws, and yet every company is absolutely scrambling to stuff this technology into every possible product — Microsoft has integrated it into Bing with predictable hilarity, and is now proceeding to build it into their Office suite; Snapchat is building an AI bot that becomes your “friend” when you create an account, with horrifying consequences already being observed, and so on. All of these decisions are frightfully reckless, and are driven by nothing but the latest Silicon Valley hype cycle.

Let’s get one thing out of the way: anyone who claims that these language models are actually “intelligent”, or even “sentient”, probably has a financial incentive to say that. The hype around this technology is so strong that it’s hijacking the imagination of serious academics and researchers. The hype is even stronger than it was for crypto!

These models have reignited and intensified conversations about AGI (artificial general intelligence) and how close we really are to building an intelligence that overtakes human cognition by every measure. These debates are certainly worth having, but I’m skeptical that LLMs bring us any closer to understanding anything at all about intelligence or consciousness.

The one valid lesson that these LLMs have demonstrated is very simple: human language is not very complex, and it’s possible to take literally every word ever written by a human being, feed it into a language model, and have that model synthesize plausible text from a prompt. It really is that simple.

Yes, it is impressive that you can have a believable “conversation” with these language models, but that’s because most conversations have already been had, and 99% of our day-to-day communication can be reduced to boilerplate prompts to a language model. Neat, huh?

I can foresee a counterargument being raised here: virtually no one does long division anymore, or really any kind of arithmetic with more than two digits, because we invented pocket calculators to do the arithmetic for us, which gives us freedom to do higher-order reasoning. What’s wrong with creating more powerful technologies to offload even more menial reasoning tasks, so that we are free to think on grander scales?

The problem here is that pocket calculators are exact by their nature, and always produce a consistent and correct result. If a calculator malfunctions, the malfunction becomes clear very quickly, and the calculator is easy to repair or replace. LLMs, on the other hand, are inexact by their nature, and produce content that cannot be relied upon. It will not be clear when and how LLMs will malfunction, or even what it means for an LLM to malfunction, and what effect a malfunction will have on its output.

You might go on to say that the kind of aversion to new technology that I’m expressing dates back to Plato and his Phaedrus dialogue, in which Socrates recalls a tale about the Egyptian king Thamus being distrustful of the invention of writing:

“For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

A fair point, and future developments in LLMs might prove me to be as short-sighted as King Thamus was. I’m not denying that LLMs could see plenty of excellent and positive uses; I’m simply pointing out how recklessly we seem to be deploying this technology, without understanding its potential impact.

Human language in its written and spoken forms, as unsophisticated as it might be, is integral to our mechanisms of sensemaking. And it seems to me that sensemaking, of all things, is not something to be offloaded from our own minds. We already have a problem of “misinformation” on the web, but LLMs carry the potential to amplify this problem by orders of magnitude, because the very same misinformation is part of the data on which they were trained.

The very act of “writing”, i.e. distilling and transforming abstract thoughts into words, is a skill that we mustn’t let fall by the wayside. If we delegate this skill to a language model, and allow the skill to atrophy, what exactly will replace it? What higher-order communication technique, even more powerful than the written word, awaits us?

The best-case outcome from the current LLM craze is that it’s a hype cycle that will end with a few good judicious uses of this technology in specific circumstances. And the worst case is a general dumbing-down of humanity: a world in which humans no longer bother to say anything original, because it’s all been said, and a world in which a language model consumes the totality of our culture and regurgitates it back to us. Enjoy!

Brain dump, February 2023

As a software archaeologist, I often find myself trying out old software that I never used in my own career. I think this can be very instructive: old software often contains good ideas, ideas that may have been forgotten, but that we can still draw on when building today’s software.

Recently I played around with Microsoft QuickC for Windows 3.1, an integrated C development environment (IDE) targeted at individual developers, with a rather modest set of features compared to the enterprise-caliber IDEs of the era. Nevertheless, my existing knowledge of Windows programming, which comes from Windows 9x development onward, transferred fairly easily to QuickC, and I was able to develop a sample app fairly quickly:

image

It’s a Mandelbrot viewer/explorer app, which is one of my favorite “sample” apps to build in a new environment. It runs on any version of Windows 3.x, has no dependencies, and weighs in at 20KB. Here is the source code, if you like!

What struck me about using QuickC was its simplicity and efficiency. Even though it has the familiar trappings of native Windows programming, namely screenfuls of boilerplate code and manual handling of message loops and drawing subroutines, once that was out of the way, the sailing was smooth.

Today I make Android apps for a living, and I can’t help but compare the experience of building an Android app (using Android Studio) to the experience of building old-school Windows apps, specifically in terms of efficiency. The compilation time of my QuickC app was no more than a few seconds (in an emulator that was emulating a 50 MHz PC). Compare this with building a similar Android app, where kicking off a clean Gradle build is a cue to take a coffee break, even on the most modern hardware. Of course, over the years the Gradle build process has gotten faster, and the Android folks at Google are quick to award themselves a medal for shaving a few seconds off build times. Still, it’s only very recently that Gradle has gotten fast enough to build a Hello World app in under a minute. I won’t even get into the sizes, now measured in gigabytes, that modern IDEs require to make themselves at home on our workstations, whereas the entirety of QuickC fit on three floppy disks.

Is this level of efficiency and streamlining squarely in the distant past of software tools, or can we take steps today to recapture that spirit?