Strong cryptographic policies will help save military software systems

Famous venture capitalist Marc Andreessen coined the phrase “software is eating the world” in a Wall Street Journal editorial in 2011. More than a decade later, that observation couldn’t be truer: more devices run more software, creating a larger and more complex attack surface to manage and defend. Yet we still rely on cryptographic methods created almost half a century ago.

When the complexity of a system increases, the complexity of its problems increases with it. The defense industry is perfectly capable of building reliable machines out of unreliable parts. Why, then, do we struggle so much when software is involved?

Across industries, better systems have been built on past successes. Just as advances in materials science led to higher-speed turbines and, subsequently, lighter, faster, and more maneuverable aircraft, modern software systems have been built on advances in technology. In the software world, these advances come in the form of packages or libraries. By dividing functionality among these libraries, we are able to differentiate and specialize.

A simple example reinforces the point. The very first applications were monolithic and ran on a specific hardware platform. Differentiation gave rise to specialization: the database was separated from the presentation layer and the user interface, and the business logic was separated again. This allowed databases to progress in speed and scale independent of user interfaces, networking, and even the storage layer below.

This specialization has lowered costs and increased performance. It has also given rise to a supply chain, in which various independent vendors create and contribute different parts of the overall product. At the macro level, complex systems such as aircraft use many components, each with its own software. Each layer builds on the fruits of previous successes. As system complexity increases, the ability to understand the “ripple” effects of an individual component failure decreases. But that same layering also presents opportunities for remediation.

In an aircraft, there are several different ways to measure key telemetry, e.g., airspeed and altitude. Each uses different instruments, different vendors, and different software stacks. This diversifies the risk of any single component being unreliable.

Software systems, unfortunately, are not designed with redundancy built in. Instead, modern software systems rely on patches and updates when flaws are found. This approach has consequences. System downtime often occurs during the creation and application of a patch, and a patch is only created if the failure is noticed at all. This hits crypto particularly hard, because crypto failures allow an adversary to spy, and an adversary who spies successfully tends to keep that fact a secret.

In the crypto world, there is a widespread belief that cryptography is somehow unbreakable because it is based on mathematics. That couldn’t be further from the truth. Whatever the merits of the underlying math, the implementations have bugs: on average, 10 to 20 bugs per 1,000 lines of code. Keys and certificates sometimes leak. Human error is usually present, whether in the form of insufficient programming skills, a lack of ongoing training, or simple mistakes in implementation.

The fact is that single points of failure in crypto exist and are commonplace. Yet the industry suffers from crypto amnesia and a scattered view of crypto that has allowed breaches and attacks to occur.

This raises the question: how can we build resilient software systems out of unreliable parts? The previous observation has direct applicability: redundancy in algorithms, implementations, and software components are all ways to diversify risk, just as we do in physical engineering where lives are at stake. Engineering disciplines outside of software are acutely aware of the need to build in redundancy and to create a management layer for this added complexity. In software, no such management layer exists.

The answers lie in policy control and interoperability.

Policy encodes the governance rules, and interoperability enables the redundancy and agility required to continue operating under degraded conditions. For systems integrators, most of whom write software, a deliberate focus must be placed on resiliency.
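As a minimal sketch of what such policy control might look like in code, the snippet below selects a hash algorithm through a governance table rather than hard-coding it. The policy table, preference order, and function name are all hypothetical illustrations, not any vendor's API:

```python
import hashlib

# Hypothetical policy table: which algorithms governance currently approves.
# Revoking an entry here retires the algorithm without touching calling code.
POLICY = {
    "sha3_256": True,   # preferred
    "sha256": True,     # approved fallback
    "md5": False,       # revoked: known-broken
}

# Preference order; revoked entries are skipped at run time.
PREFERENCE = ["sha3_256", "sha256", "md5"]

def digest_with_policy(data: bytes) -> tuple[str, bytes]:
    """Hash `data` with the first policy-approved algorithm."""
    for name in PREFERENCE:
        if POLICY.get(name, False):
            return name, hashlib.new(name, data).digest()
    raise RuntimeError("no policy-approved algorithm available")

algo, digest = digest_with_policy(b"telemetry frame")
print(algo)  # sha3_256 under the policy above
```

Because callers depend only on the policy layer, withdrawing a compromised algorithm becomes a configuration change rather than an emergency patch.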

The software engineering culture, beginning at the product management level, must demand that a software system be able to continue to serve even when individual components are defeated. Software engineers must also change their way of thinking, asking themselves: “What if the component I am using no longer behaves as I expect it to?”
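One way to act on that question, sketched here with Python's standard library, is to run two independently built implementations of the same primitive and refuse to proceed when they disagree. The hand-written HMAC-SHA256 below stands in for a second vendor's implementation and is purely illustrative:

```python
import hashlib
import hmac

def tag_primary(key: bytes, msg: bytes) -> bytes:
    # Primary implementation: the stdlib's HMAC-SHA256.
    return hmac.new(key, msg, hashlib.sha256).digest()

def tag_secondary(key: bytes, msg: bytes) -> bytes:
    # Stand-in "second vendor": HMAC-SHA256 per the RFC 2104 construction,
    # written out by hand so it shares no code path with tag_primary.
    block = 64
    if len(key) > block:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

def tag_checked(key: bytes, msg: bytes) -> bytes:
    """Return a tag only if both independent implementations agree."""
    t1, t2 = tag_primary(key, msg), tag_secondary(key, msg)
    if not hmac.compare_digest(t1, t2):
        raise RuntimeError("implementations disagree: possible component failure")
    return t1

tag = tag_checked(b"secret key", b"flight telemetry")
```

A disagreement between the two paths surfaces a failed component immediately, instead of silently, which is exactly the failure mode the paragraph above warns about.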

And finally, when it comes to cryptography, keep in mind that you may never know that a component has failed. Only by changing the way we think about software engineering can we move away from the culture of patches and panic updates.

Dr. Vincent Berk is Chief Strategy Officer at Quantum Xchange, a provider of crypto-diverse security products and services.

Have an opinion?

This article is an Op-Ed and the opinions expressed are those of the author. If you would like to react, or if you would like to submit an editorial on your behalf, please email Cary O’Reilly, senior editor of C4ISRNET.
