Although apples-to-apples comparisons are difficult to find, the invention of public-key encryption is one major technological advance that allows a direct comparison between military and civilian innovation and stimulus. Independently invented at approximately the same time in separate military and civilian contexts, it was put to very different uses in each. Public-key encryption is the technology that allows secure communication and commerce on the Internet (or at least it did until various governments started hacking away at it). Public-key cryptography (an IEEE Milestone) was first invented by James Ellis, Clifford Cocks, and Malcolm Williamson at Britain's Government Communications Headquarters, but the British government kept the technology secret until 1997. Independently, Whitfield Diffie, Martin Hellman, and Ralph Merkle invented it in 1976; unlike the British inventors, they intended their invention for the civilian sphere. Public-key cryptography made possible trillions of dollars' worth of Internet commerce during the first decade of the twenty-first century. Although measuring the value of e-commerce worldwide is difficult, World Trade Organization publications cite estimates of between US$1.2 trillion and US$1.5 trillion annually by 2010. The civilian commerce generated by public-key cryptography in turn stimulated many other innovations.
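The core idea behind public-key cryptography, as in the Diffie-Hellman key exchange named above, is that two parties can agree on a shared secret over a public channel without ever transmitting the secret itself. A minimal sketch in Python, using deliberately tiny toy parameters (real deployments use primes of 2048 bits or more):

```python
def dh_public(g: int, secret: int, p: int) -> int:
    """Compute the public value g^secret mod p; the secret never leaves its owner."""
    return pow(g, secret, p)

# Toy parameters for illustration only; real systems use much larger primes.
p, g = 23, 5
alice_secret, bob_secret = 6, 15

A = dh_public(g, alice_secret, p)   # Alice sends A to Bob over the open channel
B = dh_public(g, bob_secret, p)     # Bob sends B to Alice over the open channel

# Each side combines the other's public value with its own private secret.
alice_key = pow(B, alice_secret, p)
bob_key = pow(A, bob_secret, p)
assert alice_key == bob_key  # both sides derive the same shared secret
```

An eavesdropper who sees `p`, `g`, `A`, and `B` would have to solve the discrete-logarithm problem to recover either private secret, which is computationally infeasible at realistic key sizes.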
The invention of the transistor, frequently called the most important invention of the twentieth century, was delayed, rather than advanced, by World War II. Bell Labs' search for a solid-state amplifier began in 1936, but the people working on it were shifted to other projects beginning in 1939. Work resumed in 1945, and the first point-contact transistor was demonstrated to Bell Labs management in December 1947. On 22 June 1948, the invention was demonstrated to the U.S. military, which, to the relief of the scientists, decided not to classify it. Because spending by military customers helped develop the transistor into a manufacturable product and lowered its price enough to make it attractive to the civilian market, the transistor is perhaps the closest we can come to answering whether war delayed or accelerated adoption. Had the people working on solid-state amplification in 1939 been able to continue uninterrupted, it is tempting to speculate that a peacetime invention of the transistor six years earlier, in 1941, might have yielded a commercial product (solid-state hearing aids) in 1946 instead of 1952, and transistor radios and solar cells in 1949 instead of 1954.
An additional impediment to military innovation is that adapting it for civilian use often requires expensive conversion. When military-driven R&D fails to make a timely transition to commercial use, it can lead to "technological lock-in." A notable example is numerical control technology in the United States, which began as a military venture but became distorted by dependence on military aerospace contracts. One consequence was that Japan was able to overtake U.S. dominance in machine tools.
In 2013, the Pentagon's budget for research and development (excluding acquisition) was US$69.7 billion, more than the combined research and development budgets of nine innovation giants: Johnson & Johnson, Apple, Corning, Siemens, Samsung, Intel, Microsoft, Pfizer, and IBM. Innovation is hard to quantify, but would anyone seriously try to make the case that the equivalent military spending is delivering more innovation than all those civilian corporations put together?
To those who claim that war stimulates technological innovation, we should answer cautiously, "Not always."

Adapted from IEEE Spectrum.
© The Student Engineer UoN. All Rights Reserved.