Beta This: Cleaning up the software industry’s bug-infested nest
Hal Plotkin, Special to SF Gate
Tuesday, August 17, 1999
URL: http://www.sfgate.com/cgi-bin/article.cgi?file=/technology/archive/1999/08/17/beta.dtl
How is it that the multibillion-dollar software industry gets away with knowingly and repeatedly selling defective products without penalty?
Cars are subject to costly and embarrassing recalls at company expense. Canned goods and toys get similar treatment. Electrical appliance manufacturers even have an independent quality assurance agency — Underwriters Laboratories — that puts its seal of approval on new devices released to the market.
So why shouldn’t the software industry — for which buggy products are not the exception, but the rule — have to live up to the same standards?
Recently, for example, I purchased a boxed version of anti-virus software from Network Associates, based in Santa Clara. After installing it, I immediately got a message telling me to download a new, improved version from the company’s website. Seems there were a few bugs included with the original release.
Turns out the “improved” software was at least the third iteration of the product since the supposedly new one I had just purchased. It created so many system instabilities and installation headaches that I finally returned it for a full refund. The ordeal consumed nearly an entire day.
To be fair, Network Associates is not the only software company overlooking quality control. Almost all the software execs I meet these days ignore product reliability even though it should be their single most important concern. Instead, they usually spend their time focused on things they have less direct control over, such as federal encryption regulations or allegedly unscrupulous competition from market leaders like Microsoft.
What they should be worrying about is the fact that they keep shooting themselves in the collective foot by releasing products that just don’t work. What the industry really needs is a better quality-control certification process.
It should work along the lines of Underwriters Laboratories. UL was created in 1894, when the dawn of the electrical age brought an avalanche of new devices to market, many of them quite lethal. You got them wet, or left them on too long, and your house burned down. UL testing helped manufacturers build customer confidence. Savvy consumers knew, and still know, to look for the little UL seal before buying a new appliance.
The software industry hasn’t been around nearly as long as electricity, but its problems can be traced to some old habits. Twenty years ago, when the business was still in its infancy, software companies began releasing free “beta” — or test — versions of new products. The idea was that computer enthusiasts eager to get their hands on the latest wares would experiment with the new releases, discover any bugs, and report them back to the manufacturer, which would then make fixes before asking consumers to pony up real money.
Now that the industry has grown up, this process no longer makes sense. These days, thanks to pressure from Wall Street, as well as from their own marketing departments, software firms usually establish fixed deadlines for the final release of new software products. Many of them meet the deadlines whether their goods are ready or not.
Dozens of bug fixes to both Windows 95 and 98, for example, were made available on Microsoft’s website weeks after the company started selling the software. To date, I’ve downloaded about 10 bug fixes for Windows 98 alone. Many other supposedly finished software products are likewise just dressed-up beta versions. Only the consumers aren’t told they’re unwitting beta testers.
That would not be a problem if the formal beta-testing process worked as originally intended. Unfortunately, there are no guarantees that bugs found during beta testing will be fixed before shrink-wrapped products are rushed out the door. You know those pesky little “read-me” files you get with most new software? Usually, they contain a list of “known problems” that haven’t yet been fixed. “Known problem” is, of course, a common euphemism for “bug.”
The Bugnet.com website even has a list of euphemisms major software vendors use to avoid the b-word. My favorite example comes from Adobe’s site. The company put the word “bug” as a keyword at the bottom of one of its support pages and made the text the same color as the background, so the word is invisible. That way, users who search for help are led to the support document they need without the company ever acknowledging the bug’s existence.
All this has led to a number of after-market, ad hoc attempts to identify and fix bugs. Several magazines, including the late and lamented Windows magazine (which has ceased publication but is still available online at http://www.winmag.com/), do a limited number of tests on new software, rating it for things like bugginess. There are also some good websites, like Bugnet and the Mac-oriented MacFixIt, where users can turn for help when their systems crash.
But these are hit-and-miss solutions. In any case, chances are you’re not the only one feeling the effects of whatever bug hit you. Software is not very forgiving. It’s a simple set of instructions, executed line by line, that tells your computer what to do, and it does exactly the same thing every time it runs. If those instructions tell your computer to do something that kills it, it’s very likely other computers with similar configurations will also go down. Bad software isn’t homicidal, it’s genocidal; it wipes out multitudes of computers with a single keystroke.
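To make the point concrete, here is a minimal, hypothetical C sketch (not taken from any shipping product) of the kind of faulty instruction that behaves identically everywhere it runs:

    #include <stdio.h>

    int main(void) {
        int *missing = NULL;       /* a pointer the program never aims at real memory */
        printf("%d\n", *missing);  /* reading through it crashes the program... */
        return 0;                  /* ...the same way on every machine that runs it */
    }

Run that on a thousand identically configured machines and you get a thousand identical crashes, which is why a single overlooked bug can take down an entire installed base at once.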
In the industry’s early days, many computer users actually liked tinkering with their systems, tracking down bugs, and figuring out solutions to obscure problems.
Today, however, software is aimed at a mass-consumer market. Frustrating new customers is the surest way to slow down the fast-growing industry.
Buggy software helps explain why so many people, particularly folks over 50, feel intimidated by computers. They turn the machines on, try to do something simple, and the machines don’t work. So they give up and join the chorus of people who laugh, a bit self-consciously, about how their kids, or their grandchildren, are much better with computers than they are. Sadly, these new users often don’t realize that much of the software being sold today simply doesn’t work as advertised. They think the problem is theirs alone.
While it’s impossible to test new software with every conceivable system configuration, testing it against the most common ones would be a snap. Let 30 or 40 paid testers, each working with one of the most popular combinations of hardware and software, play with a new program for a month or so and see whether any of their systems go down.
If the software is clean, the new product gets a quality seal of approval. If the software comes up short, the manufacturer would have the option of fixing it first, or trying to sell it without the seal. I know I’d gladly pay $5 extra for software that was more adequately tested. An added benefit of this process is that paid testers, unlike beta volunteers, would have real power. Companies that refused to listen to them wouldn’t get the seal.
To be sure, some software that passes inspection would probably still crash a limited number of unusually configured systems. But even so, consumers would enjoy a much higher overall quality level than now exists. In the end, that would benefit software makers even more than consumers. The industry is growing pretty fast right now. Just imagine how fast it might grow if more software products actually worked right out of the box.