Software Liability

  • By Brad Conte, July 28, 2005
  • Post Categories: Security

From multi-billion dollar government agencies to home burglar alarms, different forms of security and protection abound everywhere. One of the most interesting forms of day-to-day security in the lives of average people is Internet security.

Every year, millions of dollars are spent on Internet-related security. From international businesses hiring world-class consultants to home computer users purchasing antivirus programs, people realize that Internet security is critical, and many do their best to take appropriate security measures. This is not without good reason: there are currently over 50,000 viruses on the Internet, and they have caused a reported 55 billion dollars in damages.

The basic idea of "security," as defined by Bruce Schneier in his latest book Beyond Fear, "is about preventing adverse consequences from the intentional and unwarranted actions of others." In other words, security is about protecting innocent people from people with malicious intentions, a definition that should be easy to agree upon. Security exists to protect the innocent from harmful people or situations that are not their fault.

Security with specific regard to the Internet takes two forms. The more familiar one is software used for the express purpose of protecting a computer from the Internet, such as antivirus software. The other form lies in software that exists to perform some other task but, in order to perform that task, must also take steps to ensure a level of security that does not allow malicious attackers to exploit the program to their advantage; this includes a much wider range of software, such as Internet browsers. Regardless of which type of security a software product offers, it must do so flawlessly, because once a flaw is found, a security system that is 99% secure is just as worthless as a system with no security at all.

Unfortunately, despite the time and money users spend on highly reputed software in an attempt to secure themselves and their data from attackers on the Internet, the attackers continue to be wildly successful in their hacking endeavors. Until recently, this level of attacker success was simply attributed to user incompetence. It was commonly assumed that the level of security a computer had was determined entirely by the user; if the computer's security was compromised, then it was the user's fault for not securing the computer well enough.

However, security experts such as Schneier and Blake Ross have recently stated that users can no longer bear so much of the blame for having insecure computers. Instead, they state that, while many security issues are indeed caused by the end user's stupidity, the fundamental problem that allows Internet attacks to be so successful lies with the security software that users rely on for security in the first place. These experts argue that users cannot be fully blamed for having insecure computers that fall prey to countless Internet attacks because they, the users, have no way of truly securing their systems to begin with: the very programs they rely on to provide the necessary security are causing problems themselves.

In a recent lecture here in Sacramento on this very subject of software companies' role in Internet security, Schneier repeatedly bashed software designers in general for their stupid and thoughtless design practices. He stated that, as the world's most recognized security guru, he was ashamed that he could not offer his own mother a way to surf the Internet safely. Just this month, Ross also wrote on the subject in his blog. He stated that, as someone who is regularly asked to comment on what he believes the future of computers holds, he foresees a bleak future for the computing industry in general if software designers don't start getting serious about their design practices. "I'm disgusted by what the average person has to deal with on a day-to-day basis," he states.

Users deserve a higher general level of security in the software they use than they currently have. Software manufacturers have become lazy and are rapidly producing products that are bursting at the seams with simple security holes, holes that are forever being exploited by bored seventeen-year-olds, often at great cost to the victim. Users have no way of protecting themselves from such security holes, because they are relying on those very programs for their security in the first place. Thus, software manufacturers must be forced to stop making such simple security mistakes, because it is completely unfair for the end user to purchase (relatively) expensive software that falls victim to some of the oldest attacks in the book. Software manufacturers will not change their production practices easily, however, and if they are to be forced into producing better software, legal action will be required.

First, it is critical to understand why companies are reluctant to bother designing secure software. They are not producing poor security out of sheer spite for their users (despite the fact that it may sometimes feel that way), but rather for four main reasons.

One of these reasons is that designing solid security is just plain difficult. Designing software that must provide a certain level of security is perhaps the most difficult software task that can be tackled. Not only does the product have all the problems and difficulties of a normal software program, but it must also absorb and deal with extensive intentional abuse, identifying security threats and weeding them out without affecting the normal flow of legitimate user activity. There are thousands of aspects that have to be analyzed and properly dealt with, yet not one mistake can be made. To top this off, security designers are faced with the dark, brutal reality of their situation, which is, as Schneier so eloquently put it: "As computer scientists, we have no clue how to write secure code. [...] We don't even know how to make a program end." There is no bible of software security design that designers can consult for absolute answers. Everyone is forced to simply dream up ideas, test them, and hope they work; they have no way of knowing exactly how attackers are going to try to manipulate the program. Writing secure programs is very, very difficult, and if a program is to be secure, a lot of time and hard work must be invested in it. Secure software cannot be designed overnight.
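
To make the difficulty concrete, here is a minimal C sketch (hypothetical code, not drawn from any real product) of a bounds check that looks correct at a glance but is silently defeated by integer overflow, exactly the kind of subtle trap that secure code must anticipate:

```c
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 256

/* Hypothetical parsing routine: copy a field into a fixed buffer.
 * The check below looks sane, but if offset + len wraps around the
 * maximum value of size_t, the sum comes out small, the check passes,
 * and memcpy writes far outside the buffer. */
int store_field(unsigned char *buf, size_t offset, size_t len,
                const unsigned char *src)
{
    if (offset + len > BUF_SIZE)   /* broken: the addition can overflow */
        return -1;
    memcpy(buf + offset, src, len);
    return 0;
}

/* Correct version: rearrange the arithmetic so it cannot overflow. */
int store_field_safe(unsigned char *buf, size_t offset, size_t len,
                     const unsigned char *src)
{
    if (offset > BUF_SIZE || len > BUF_SIZE - offset)
        return -1;
    memcpy(buf + offset, src, len);
    return 0;
}
```

Both versions compile cleanly and pass casual testing; only a reviewer (or an attacker) who deliberately feeds in enormous values will ever see the difference.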

Another reason companies produce poor security in their software is that the security is often designed by the wrong people. The people who actually enjoy designing proper security are few, far between, and expensive. Thus, most security is designed not by experts in the field but by "forced" experts: people who do not excel at security design in particular, but who are reasonably bright and are assigned the task of analyzing security. They do it because they are good thinkers, not because they are ingenious security analysts. Because of this, they will not attack security problems with the same level of passion and expertise that a security expert would. Thus, when their designs are analyzed by attackers who do care passionately about security design and do have a high level of expertise, flaws are found.

Designing good security also consumes time and resources, things corporate managers are reluctant to spend. It takes a lot of time to analyze and test a security system, and most companies work on deadlines that don't allow for extended analysis and testing. Also, since true security experts are rare, companies usually have to hire a consultant if they want someone with the necessary expertise to review their designs. Unfortunately, many managers couldn't tell the difference between a world-renowned security consultant and a dead duck, so they are often reluctant to hire the necessary experts because they don't appreciate all that such experts offer.

However, the most critical reason companies design poor security is that the results of good security are relatively invisible and do little to nothing to boost product sales. Most minor security measures and fixes will never be used, resulting in nothing but lost resources as far as managers are concerned, and the security measures that actually are used will often go unnoticed. One way or another, the user will likely never personally notice the minor security features, and if users don't notice them, they won't factor them into their purchasing decisions.

Thus, from the manager's point of view, the company is wasting time, effort, and money on a feature that won't boost sales. Needless to say, this is rarely an attractive option. The result is that most security is sloppy, rushed, and never gone over with a fine-toothed comb. Then, when attackers spend more time analyzing the program than the designers themselves did, they find flaws and write viruses to exploit them. When these exploits are released publicly, the company throws together a quick patch that fixes the problem and offers it to its users as an "upgrade".

Insecure software that gets exploited and broken on a consistent basis is so common that the public actually accepts it as a normal part of life. People expect to have their computers infected with viruses every so often, and they expect to have to update their software.

There is no reason the public should accept this, however, because companies have the ability to write more secure programs than they do. In situations where designers do not have enough time to analyze their programs before release, deadlines can be extended, albeit at a small loss of profit to the company. In situations where there are not enough knowledgeable experts available to analyze and/or implement the security designs, consultants can be hired. There are more than enough capable consultants for hire; they are just not so plentiful that they can be hired and disposed of as easily as the average programmer.

Since software companies owe it to consumers to provide the best security they can, and since they have proven unwilling to do so of their own good will, they must be forced to do so under some sort of penalty. Preferably, the consumer market would rise up as a whole and demand reform, refusing to purchase products from companies that did not conform to a certain level of security standards. This option is unrealistic, however, as the large majority of the consumer market is unaware that they're being ripped off by the software companies in the first place, much less knows what sort of reform to demand. The only other way to impose standards on software companies is to do so legally.

Legal action against software companies would take one of two (if not both) forms. First, software companies would be forced to face inspection and auditing of their software, and would be subject to penalties if their software failed to meet a certain set of basic standards. If the imposed fines and penalties were stiff enough, it would be more cost-effective for companies to produce more security rather than less of it.

The second, and probably more important, form of legal action would be to grant users the legal grounds to sue software companies when a company's software was exploited due to poor security, damaging the user in some way. This would not only have the financial trade-off advantage of the first method, but would also carry the benefit of public shaming. What software company in its right mind would want its product taken to court and sued over poor security practices? If such a thing were to happen, the company would stand to lose not only money from the lawsuit but the respect of its consumers as well.

The most common argument raised against this legal proposal is that it is impractical to demand that software companies write near-flawless programs. It is pointed out that even leading experts, such as Schneier himself, acknowledge that it is impossible to confirm that any given program is completely secure. There are simply too many variables to account for, and too many unknown attack strategies to take into full, flawless consideration.

While this is certainly a valid argument, it addresses a scenario that is a long way off from where the world of software security is today. Demanding that software companies produce near-flawless security is a long, long way from where the situation stands right now. Currently, the security they produce is riddled with trivial bugs and juvenile mistakes; consider that many viruses, such as Sasser, are written by kids in their late teens. Software companies have a long, long way to go before their security products could even be considered somewhat close to perfect. It would be more than reasonable to at least hold software companies responsible for their basic, ongoing security flaws. The world will never see a time in which everyone is perfectly secure from everything, but hopefully it will see a time in which world-class software programs are not repeatedly ripped to shreds by mere teenagers.
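
As an illustration of how juvenile these mistakes can be, consider a deliberately contrived C sketch (hypothetical code, but of the same general class of unchecked-copy flaw as the buffer overflow Sasser exploited in Windows' LSASS service):

```c
#include <stdio.h>
#include <string.h>

/* Textbook mistake: a fixed-size stack buffer filled from
 * attacker-controlled input with no length check. Input longer than
 * 63 characters overwrites adjacent stack memory, potentially
 * including the function's return address. */
void handle_request(const char *input)
{
    char name[64];
    strcpy(name, input);               /* unchecked copy */
    printf("Hello, %s\n", name);
}

/* The fix is a single length check, which is why flaws like this
 * are fairly called trivial. */
void handle_request_fixed(const char *input)
{
    char name[64];
    if (strlen(input) >= sizeof(name)) /* reject oversized input */
        return;
    strcpy(name, input);
    printf("Hello, %s\n", name);
}
```

Flaws of this shape have been publicly understood since at least the Morris worm of 1988, which is what makes their continued appearance in commercial software so hard to excuse.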

If legal action were taken against software companies that produced sloppy security, there would be a sharp decline in the number of careless mistakes that allow these programs to be exploited. Corporations would save millions of dollars, and home users could use their computers with greater confidence. If Schneier, the world's leading expert on security analysis, cannot protect his own mother from the dangers of the Internet, something is obviously very, very wrong with the state of modern computer security.


Note: This article was originally written as a research paper, titled Computer Insecurity, for a college English class. A few minor grammatical corrections have been made since it was originally completed. Sources were inlined as hyperlinks, and the sources for trivial ideas were dropped (the original audience was non-technical).