Deploying Secure Applications
By Mark Kemmerle, Enterprise Information Security Director, OIT
In the winter of 2005-06, the states of Rhode Island and New Hampshire suffered data breaches that resulted in the loss of credit card numbers and personally identifiable information. Only the Rhode Island incident was the result of insecure code, but its Internet portal provider was the same one used in Maine, so we were especially concerned. In the wake of these incidents, the Strategic Planning and Policy group, under the direction of Dave Blocher, began work on an application deployment certification policy, which was signed by CIO Dick Thompson and published on October 30, 2006. The policy requires that new major applications be tested not only for functionality, network performance, etc., but also for security. (The policy is available on the OIT Intranet web page, http://www.maine.gov/oit/oitpolicies/index.htm, under the title “Deployment Certification Policy for Major Application Projects.”)
As part of an initial response to learning about the problems in other New England states, Paul Sandlin of the OIT eGovernment group surveyed our web applications and found that we have at least 650 web-facing applications. With the limited resources available to us, it will be some time before we are able to review all the legacy applications that face the web. We have looked at the Gartner Group's assessment of application vulnerability assessment tools, but Gartner has little to say about them: it considers the products unproven and the technology still in the "hype cycle." We viewed several web demonstrations and brought a couple of the most promising products in for trials. The best seemed to be Hailstorm from Cenzic, Inc.
For now, the application certification policy is in effect primarily for new projects that cost over $250,000. The policy is an excellent start, but we should be moving toward creating additional policies (and funding their implementation) that address the legacy applications and provide for periodic reassessment and recertification. So far, we've been through about a half-dozen major full-scale penetration tests and vulnerability assessments for new systems. The Department of Health and Human Services (DHHS) has done all but one of them. (The other was for the AdvantageME system.) DHHS is sensitive to the risks of exposing critical health care and patient information and they have had sufficient funding that they haven't made an issue of paying for the reviews – all of which have been outsourced. In fact, DHHS has voluntarily paid for the reviews of at least two systems that fall under the $250,000 threshold, since they've seen the value of the reviews of their larger systems.
Having been through a number of these reviews, we see certain patterns developing. Almost all the systems, whether developed in-house or outsourced, suffer from coding problems: vulnerabilities to SQL injection, cross-site scripting, buffer overflows, and the like. The application development groups are interested in writing more secure code, but we've outsourced most of our major new system development recently, so the problem is larger than our in-house staff alone. DHHS has expressed an interest in defining requirements for a training program for developers here, but that does not address vendor-supplied systems. OIT is working on contract language for system development stating that applications must pass a security review, to give us assurance that the final product is solid.
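To make the most common of these coding problems concrete, here is a minimal, hypothetical sketch (in Python, against a throwaway in-memory SQLite database; none of this code comes from our systems) of how a SQL injection works and how a parameterized query prevents it:

```python
import sqlite3

# A throwaway in-memory database with an invented users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'staff')")

def find_user_unsafe(name):
    # VULNERABLE: user input is concatenated directly into the SQL text,
    # so crafted input can change the meaning of the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # SAFE: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload: the quote ends the string literal early,
# and the OR clause then matches every row in the table.
payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back; the injection worked
print(find_user_safe(payload))    # no rows; the payload is just an odd name
```

The fix is not exotic: nearly every database library offers parameterized queries, and requiring them is exactly the kind of standard a developer training program or contract language can enforce.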
We have not had particular difficulty getting vendors to agree on what constitutes insecure code. The contractor we've hired for the risk assessments is good at demonstrating its ability to compromise a system via SQL injection or other tactics. It has also been very open with the vendor for AdvantageME, for example, about its testing methodology, and the vendor seems very committed to understanding and addressing the findings. The vendor has also spoken with us about other audits the application has undergone with other customers.
Someday we may have to face the question of what we'll do when a customer is counting on an application being in production by a certain date and the security testing shows problems with the code or with the operating environment. As we become more familiar with planning system development projects to include time (and money) for independent security review and for remediation of defects, we will be able to reduce last-minute surprises. Essentially, though, the application vulnerability assessment is really a risk assessment process. Your consultant can't make the decision for you, but they should be able to explain the risks clearly. It's then up to the agency and OIT to decide whether we'll have to tolerate an increased level of risk.
Before a system goes into production, there needs to be a final signoff meeting of the agency business partners, the IT staff, the vendor, and the third-party auditor to discuss the exposures frankly. Generally speaking, we will have been able to reduce or eliminate many of the risks found in the evaluation by patching systems, removing unnecessary services, reviewing permissions and access control lists, and so on. Some risks are clearly unacceptable and will delay implementation, no matter how committed an agency is to a production date. For other risks there are mitigations, usually logging and log analysis to detect problems and stop them early, and sometimes an agency may decide to accept a certain risk level while in production and plan to address the causes of the risk in the first major application update.
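As an illustration of the logging-and-analysis style of mitigation, here is a hypothetical sketch that scans web access log lines for signatures of common attack probes. The patterns and sample lines are invented for illustration; a real deployment would rely on a maintained ruleset and proper log infrastructure rather than a hand-rolled list:

```python
import re
from urllib.parse import unquote

# Invented signatures of common probes, for illustration only.
SUSPICIOUS = [
    re.compile(r"'\s*(or|and)\s", re.IGNORECASE),  # SQL injection probe
    re.compile(r"<script", re.IGNORECASE),         # cross-site scripting probe
    re.compile(r"\.\./"),                          # directory traversal probe
]

def flag_suspicious(log_lines):
    """Return the log lines that match any suspicious pattern
    (after URL-decoding, so encoded payloads are not missed)."""
    return [line for line in log_lines
            if any(p.search(unquote(line)) for p in SUSPICIOUS)]

# Invented sample log lines: one normal request and two probes.
sample = [
    "10.0.0.5 GET /app/report?id=42",
    "10.0.0.9 GET /app/report?id=42%27%20OR%201=1",
    "10.0.0.7 GET /app/help?q=<script>alert(1)</script>",
]
for hit in flag_suspicious(sample):
    print(hit)  # prints the two probe lines, not the normal request
```

Detection of this kind does not fix the underlying defect, which is why it belongs in the "accept and monitor" category: it buys time to stop an attack early while the causes are scheduled for remediation in a later release.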
The state may have to live with a back-leveled Oracle or Apache because a vendor-supplied application doesn't run with the most recent version. We can bring some pressure to bear on the vendors, but they are probably already working to meet the demands of the marketplace. We shouldn’t be afraid to incur extra cost or experience delay to fix a serious security problem. Answering to our agency business partners or to the Legislature is not always easy, but assuring that our critical systems are secure and our citizens’ data is protected should always be our highest priorities.
The agency business partners can be our best allies. The Maine Center for Disease Control, the Banking and Financial Institutions regulators, Maine Revenue Services (to name only a few) are very attuned to the need to keep their data secure. Now that we have a policy in place that specifies security assessments for new systems, we can build security into contracts, budgets, and development schedules.