(from an interview with Bjarne Stroustrup in techreview.com)
Technology Review: Why is most software so bad?
Bjarne Stroustrup: Some software is actually pretty good by any standards. Think of the Mars Rovers, Google, and the Human Genome Project. That's quality software! Fifteen years ago, most people, and especially most experts, would have said each of those examples was impossible. Our technological civilization depends on software, so if software had been as bad as its worst reputation, most of us would have been dead by now.
On the other hand, looking at "average" pieces of code can make me cry. The structure is appalling, and the programmers clearly didn't think deeply about correctness, algorithms, data structures, or maintainability. Most people don't actually read code; they just see Internet Explorer "freeze."
I think the real problem is that "we" (that is, we software developers) are in a permanent state of emergency, grasping at straws to get our work done. We perform many minor miracles through trial and error, excessive use of brute force, and lots and lots of testing, but--so often--it's not enough.
Software developers have become adept at the difficult art of building reasonably reliable systems out of unreliable parts. The snag is that often we do not know exactly how we did it: a system just "sort of evolved" into something minimally acceptable. Personally, I prefer to know when a system will work, and why it will.
TR: How can we fix the mess we are in?
BS: In theory, the answer is simple: educate our software developers better, use more-appropriate design methods, and design for flexibility and for the long haul. Reward correct, solid, and safe systems. Punish sloppiness.
In reality, that's impossible. People reward developers who deliver software that is cheap, buggy, and first. That's because people want fancy new gadgets now. They don't want inconvenience, don't want to learn new ways of interacting with their computers, don't want delays in delivery, and don't want to pay extra for quality (unless it's obvious up front--and often not even then). And without real changes in user behavior, software suppliers are unlikely to change.
We can't just stop the world for a decade while we reprogram everything from our coffee machines to our financial systems. On the other hand, just muddling along is expensive, dangerous, and depressing. Significant improvements are needed, and they can only come gradually. They must come on a broad front; no single change is sufficient.
One problem is that "academic smokestacks" get in the way: too many people push some area as a panacea. Better design methods can help, better specification techniques can help, better programming languages can help, better testing technologies can help, better operating systems can help, better middleware infrastructures can help, better understanding of application domains can help, better understanding of data structures and algorithms can help--and so on. For example, type theory, model-based development, and formal methods can undoubtedly provide significant help in some areas, but pushed as the solution to the exclusion of other approaches, each guarantees failure in large-scale projects. People push what they know and what they have seen work; how could they do otherwise? But few have the technical maturity to balance the demands and the resources.