By Jonathan Moore, CTO at SpiderOak
H.L. Mencken wrote in 1920 that “…there is always a well-known solution to every human problem—neat, plausible, and wrong.” So it is today with our failings in cyber security, which pervade every level of business and government. Far from fielding effective technologies and policies that bend the risk curve in the right direction for our businesses and society, we have yet even to think about the problem in the right terms. What’s at stake is not just a limited risk to profits or an occasional inconvenience, but the promise of continued growth.
The cyber risk we face is far worse than what most people perceive. Unlike most problems touching upon technology, the situation is likely getting worse, not better. What in the past was a manageable nuisance for most organizations keeps getting more severe. Companies and government agencies used to worry somewhat about hacked emails and other files, or the risk of getting locked out of systems until they paid a ransom. Managers with a moderate grasp of technology saw this as a cat-and-mouse game where threats and defenses emerged at nearly the same rate, and the key takeaway was simply to spend appropriate amounts on information security and make sure systems and software were relatively up to date. Compromises were likely to be limited and manageable—a crisis but not an existential threat to an organization.
Unfortunately this view is too sanguine, especially when one considers the trajectory of software: not only are we unable to design software differently, but nearly every level of our cyber defense universe, from technical to legal to political, is falling short. The reality is that cyber is a threat more systemically different, at every level, than we have been able to grasp. We have been fixing leaks in a plumbing system that is fundamentally unsound, taking a tactical approach to a strategic problem, and hoping that tactics from the physical world will work in cyberspace. We think in terms of borders, state power, and attribution, none of which exist in cyberspace the way they do in the physical world. Every satellite overhead, VPN, and trusted host outside of national borders is in fact a tunnel that adversaries can use to reach deep into our regional networks. We live in a world where private individuals develop attack tools that rival those of nation states, and where proliferation is inevitable. When attacks happen, they rarely originate from the true adversary’s infrastructure, and implants borrow code and plant false flags to confuse attribution.
Furthermore, many people seem to expect a technical solution to the problem, even though no easy one exists. Security is a property of an operational plan and buying the next solution to bolt on to a broken operational practice will make little difference.
What is at risk is not just a further increase in the costs of preventing or recovering from cyber-attacks, but public confidence in our economies and governments. It is not too hyperbolic to say that the situation could eventually present a crisis to modern government itself. Populism has already rocked the foundations of governments around the world in recent years. How will this situation be compounded if people see governments and institutions providing ineffective protection from cyber threats at the precise time our reliance on the digital world deepens? It is a problem when people face slow computers and hacked email. It is a societal crisis when digital currency, communications, supply chains, and other systems that are now or soon to be fundamentally important to our daily lives are rendered unreliable.
For starters, software bugs are not going away. The reason for this is that software users at every level of the economy are demanding more and more features. Unfortunately, each new feature leads to more risk and more vulnerabilities. It would be easy to blame this reality on programmers, but those programmers are doing exactly what we are telling them to do: give us more complex software at lower cost.
The economics of software development are such that one cannot expect developers to fix all of the bugs. An implicit rule in programming is that the perfect is the enemy of the good (and of profit). There is always pressure to operationalize new features as soon as possible. The shelf life of a new version of software is limited, which further reduces any incentive to perfect it. In the physical world, who would waste time scouring a textbook for a typo or two if the publisher was already planning to release a new edition in just weeks or months?
Our inability to grasp the scale of change that is necessary was on display in the recent major SolarWinds hack and the exploitation of Microsoft email servers, directed respectively by agents of Russia and China. There is a temptation to view these as more severe than the routine compromises that occur frequently, but not fundamentally different from significant hacking successes like the 2013-2014 exfiltration of U.S. government background check information held by the Office of Personnel Management or the 2017 purloining of several hundred million personal credit records held by Equifax. Both were similarly orchestrated by a state actor, China in each instance. What has changed is the scale of the attacks and the apparent disregard for repercussions.
There are four basic types of cyber-attacks. First, there are efforts to spy on people or organizations. These compromises can involve a one-time exfiltration of information or methods to surveil a target over time. An example of the latter was the persistent spying on Nortel, a Canadian networking equipment manufacturer. Spyware was placed so deeply in executives’ computers that it was extremely hard to detect. Crucial technical and corporate information was transmitted to China, likely for more than a decade beginning in 2000. Eventually Nortel, a competitor to the Chinese networking company Huawei, was liquidated after more than 100 years of existence. This spying goes beyond political and military intelligence and includes attempts to acquire technology from private companies. Last year, China, Russia, and North Korea all reportedly attempted to steal information about COVID-19 vaccine research.
Second, there are influence operations that use digital media, especially social media like Twitter and Facebook, to manipulate political processes. China has significant experience with this activity, and last year used thousands of fake and hijacked Twitter accounts to shape perceptions about the COVID-19 outbreak. Russia uses similar tactics to influence elections in the West. Some may classify these activities differently than cyber-attacks. However, the use of digital skills by hostile foreign powers to change perceptions of reality, even if it does not involve breaking into systems, ought to be seen in the same light as other cyber-attacks. Like those other compromises, these attacks run the risk not only of elevating costs as companies try to separate fact from fiction; they also threaten to undermine confidence in the basic pillars of our society.
Third, there are cyber-attacks that facilitate fraud and theft. A famous example is that of Albert Gonzalez, who, from 2005 to 2008, led a group that stole 90 million credit and debit card numbers. He once called his enterprise “Operation Get Rich or Die Tryin’.” Other examples of this type of attack are wire fraud and ransomware. A recent wave of ransomware began last October, when Ryuk ransomware struck several hospitals and began to shut down important records and management systems. One Los Angeles hospital administrator had to send his staff to an ATM to withdraw $17,000 in cash, which was converted to cryptocurrency to pay off the attackers. There were more than 80 ransomware attacks against hospitals in 2020.
Fourth, there are cyber-attacks meant to damage physical systems or destroy digital assets. The Stuxnet virus, which damaged Iranian centrifuges capable of enriching uranium potentially up to weapons grade, shows how far this can be taken. No one has claimed formal responsibility for the attack, but most analysts believe it was a cooperative effort between Israel and the United States. The attacks on the Ukrainian power grid and the NotPetya attack are further examples of this kind. NotPetya in particular is quite interesting in the way it interacts with insurance and governance. It was a software supply chain attack aimed at Ukraine, but it erased data from corporate computers across the globe. The United States federal government attributed the attack to Russia, and insurance companies are now claiming it was an act of war that relieves them from policy payouts. The attack is estimated to have caused at least $10 billion in damages.
Even when attacks are attributed, there is rarely certainty about the identity of the attackers. Russian hackers write their code to appear Chinese and vice versa. China could steal a Russian software implant and make an ensuing attack appear Russian. As a result of this uncertainty, one of the most fundamental forms of defense, deterrence, doesn’t really work. Attacks in the physical world seldom offer deniability; those in cyberspace usually do. Since this makes retribution hard, especially for governments that require a high degree of certainty and evidence, attackers cannot be deterred through the promise of retaliation. Furthermore, even if the identity of an attacker were reliably ascertainable and deterrence through retaliation were an option, it is not something that individuals or companies can realistically practice lest they be labeled as hackers themselves. The national government has so far shown limited desire to respond to attacks against individual companies, and regional law enforcement has very few tools to respond to foreign actors.
When attacks are widespread, this is not always the case: Jake Sullivan, national security advisor to President Joe Biden, said in February that the U.S. would respond to Russia for the SolarWinds attack. He added that the response would include “a mix of tools seen and unseen” and that “it will not simply be sanctions.”
But the benefits of this approach have limits. Big attacks led by state actors may not even be the biggest threat we face. The reality is that individual bad actors can build serious cyber weapons themselves. On September 11, 2001, we learned that terrorists could inflict damage most had previously thought limited to the capabilities of foreign militaries. A similar diffusion of nefarious power exists with cyber threats. Furthermore, as with the conventional threat posed by terrorists, stopping the digital threat by controlling the supply chain, as governments do to curb nuclear proliferation, is not a workable option. The hardware and software-writing tools hackers need just aren’t that sophisticated.