Gavin E L Hall Blog - Insights into my Mind

Is Russia an Underwater Threat to the Internet

In early December, media picked up a report by Policy Exchange examining the potential vulnerability of the UK's communications infrastructure, which concludes that the UK is “uniquely vulnerable”. While it is good to see that non-technical aspects of cyber-security are receiving policy coverage, it is important to evaluate the nature of the threat and avoid hyperbole.
We first need to understand how the internet works, and I don't just mean the www of the Web but all the different networks that make it up. In essence, this is a communications relay. Information is requested by a user, Point A, from a provider, Point B. The information is transmitted between A and B via a series of intermediate nodes, such as the London Internet Exchange (LINX). Information is routed between these nodes via a series of cables.
Due to the preponderance of water on the planet, many of these connectors between country nodes are undersea. On TeleGeography’s Submarine Cable Map, it is immediately apparent that a number of chokepoints exist that broadly align with chokepoints for global shipping. The threat, highlighted in the Policy Exchange Report, is that these cables are vulnerable to hostile action.
So it’s time to start stockpiling money, water, and canned food, right? Well, maybe hold fire on the panic. First, the threat to undersea cables is not new. As the Policy Exchange Report highlights, we have used undersea cables to transmit information — telegrams, financial data (“the cable” trading dollars and pounds), and telephony for years. The cables were a possible site of confrontation during the Cold War, and The Diplomat and other publications ran stories in 2015 similar to the current reports of concern.
Indeed, the harvesting of data, such as in Operation Ivy Bells in the early 1970s, is probably a more substantive threat than the severing of cables and the cutting off of an entire country. The CIA has been active in undersea intelligence gathering and counter-espionage since the Eisenhower Administration of the 1950s, primarily through the US Navy’s Sound Surveillance System (SOSUS).
So intelligence and military assets have been deployed in the undersea theatre of war for decades. The threat to the undersea cables around the UK is no different from the threat that has existed for more than 70 years, and Britain has assets deployed to mitigate it.
Let’s assume for a moment that the threat is a clear and present danger. A State, or rogue actor, has decided to cut off the UK’s access to the internet. Britain has a number of cables entering at various different points, though there is a concentration in the Bristol Channel. How many cables need to be cut and how coordinated does this action have to be to have a noticeable effect?
Remember, information is being routed via nodes. If we want to go from A to B on our Christmas holidays and the direct route is closed, what do we do? We go from A to B via C. We take a different route. Yes, congestion is likely to be higher and the journey will take longer due to increased traffic, but we will arrive at our destination. The internet is no different: cutting an individual cable, or even a series of cables, will be an annoyance, but it does not shut down access. As a viable threat, such an attack is negligible, especially as slowing down the internet can be achieved without the hassle of developing deep-sea combat capabilities.
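The detour argument can be sketched as a toy routing exercise. The country labels and topology below are illustrative, not the real cable map: sever the direct link and a breadth-first search still finds a route via an intermediate node.

```python
from collections import deque

def find_path(links, start, goal):
    """Breadth-first search: return a shortest route from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical topology: the UK connects to the US directly and via the continent.
links = {
    "UK": {"US", "NL", "FR"},
    "US": {"UK", "NL", "FR"},
    "NL": {"UK", "US"},
    "FR": {"UK", "US"},
}

print(find_path(links, "UK", "US"))   # the direct route: ['UK', 'US']

# Sever the direct UK-US cable on both ends.
links["UK"].discard("US")
links["US"].discard("UK")

print(find_path(links, "UK", "US"))   # traffic reroutes, e.g. ['UK', 'NL', 'US']
```

The point of the sketch is that connectivity only fails when *every* path is cut, which for a well-meshed country means a large, coordinated operation rather than one severed cable.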
Cables, whether on land or undersea, are only one part of the digital infrastructure. Taking out a node has much greater potential for damage, but I will await think-tanks catching up in their writing of reports before moving to discuss that issue. You the reader, however, can probably develop scenarios on your own.

This article was originally published on EA Worldview, 27th January 2017.

WannaCry: The Role of Government in Cyber-Intrusions

The WannaCry cyber-incident of May 12, which involved the British National Health Service (NHS), has received a good deal of coverage. Comments focused on whether the attack was preventable and if it presents increased vulnerability for public sector organizations, with substantive focus on the use of the outdated Windows XP. Such analysis does, however, gloss over an essential question: What do we want the role of the government to be and, indeed, what could it or what should it be?
The role of the government in cybersecurity involves two essential debates. First, where does the dividing line lie between corporate (including public-sector) and government responsibility? Second, if some role for the government is accepted, which branch of government should have primacy, or be involved at all? The 2016 National Cyber Security Strategy attempts to delineate the responsibilities of individuals, corporations, and government.

The strategy established that the NHS and other public bodies had “the responsibility to safeguard the assets which they hold, maintain the services they provide, and incorporate the appropriate level of security into the products they sell.” In light of the WannaCry malware infestation, from the operational perspective, the failure lies within the NHS itself as opposed to the government. However, with the strategic level in mind, it remains within the purview of the government “to protect citizens and the economy from harm” and that the government “is ultimately responsible for assuring national resilience … and the maintenance of essential services and functions across the whole of government.”

In order to begin to think about roles and responsibilities in UK cybersecurity, consider who holds the information regarding cyber-intrusions and malicious activity in the cyber-environment. The National Cyber Security Centre (NCSC), as part of UK Government Communications Headquarters (GCHQ), is the central data coordination point for government oversight of cyber-activity.

In March 2015, however, the government emphasized the role of insurance companies in managing and mitigating risk in the cyber-environment. Indeed, the cyber-insurance market has been growing in recent years and is expected to grow significantly following the WannaCry incident. Insurance companies, therefore, have an increasing amount of information on the preparedness and vulnerabilities of UK networks. How much of this information should be shared with the NCSC?

The immediate response is: all of it. But considering that the exploit used by WannaCry was “identified long ago” by the US National Security Agency (NSA) and nonetheless ended up in hostile hands, a government agency as the central collation point for all cyber-environment data is not necessarily in the interest of enhancing security within the UK or, indeed, across the global commons.

So what could the government do?

The simple answer is not a lot. The removal of geographic boundaries, the increase in actors, the deniability of actors, the variations in potential target groups and the overall impact on social cohesion mean that the job is beyond the scope of the government as primary provider of cybersecurity for the nation, hence the blurred delineation seen in the 2016 strategy. Attempting to continue on the present course is reliant on the hope that no significant intrusions occur.

But cyber-intrusions will occur, and the individual, corporate and public-sector bodies that utilize the cyber-environment need a clear understanding that their data is their responsibility. The educational starting point is that a cyber-intrusion will happen and you will lose data. The question then becomes how to minimize the loss and recover the lost data, a capability known as resilience. This should be the government’s concern in the cyber-environment: helping to minimize the harm suffered from an intrusion across all levels of the UK cyber-environment.

Furthermore, accountability for the individual, corporate or public-sector aspects of cybersecurity should be transferred to the insurance industry. This means that body X, with good investment in cybersecurity, will pay a lower premium than body Y, which has negligible investment or is reliant on out-of-date technology. The effect of such a shift would be that all entities are forced to take cybersecurity seriously or face higher premiums and a hit to the bottom line. For the public sector, not only will IT procurement have to be considered, but also a cost-benefit analysis of rising premiums versus new infrastructure. It would be interesting to see a future intrusion in the public sector if the government has to admit that it chose the cheap option with its citizens’ data.
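That cost-benefit calculation can be made concrete with a toy premium model. All figures and probabilities below are invented for illustration; real cyber-insurance pricing is far more involved.

```python
def annual_premium(expected_breach_cost, breach_probability, loading=1.3):
    """Toy risk-based premium: expected annual loss plus an insurer's loading factor."""
    return expected_breach_cost * breach_probability * loading

# Hypothetical figures, in pounds.
cost_of_breach = 2_000_000

# Body Y: out-of-date systems, assumed 10% chance of a costly breach per year.
premium_y = annual_premium(cost_of_breach, 0.10)   # roughly £260,000 a year

# Body X: invested in security, assumed to cut the probability to 2%.
premium_x = annual_premium(cost_of_breach, 0.02)   # roughly £52,000 a year

# The premium saving is one side of the cost-benefit analysis: if the security
# investment costs less per year than the saving, it pays for itself in
# reduced premiums alone, before counting avoided breach damage.
print(round(premium_y - premium_x))
```

Under these invented numbers the saving is around £208,000 a year, which is exactly the kind of figure a public body would have to weigh against the cost of new infrastructure.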

This article was originally published on Fair Observer, 18th May 2017.

The Cyber Threat to the United Kingdom: Reality Check

The opening of the UK National Cyber Security Centre (NCSC) in February 2017, part of the electronic intelligence agency GCHQ, was accompanied by the revelation that the UK had been subjected to 188 ‘high-level cyber attacks’, defined by Ciaran Martin, Director of the NCSC, as operations which threatened national security, over the previous three months. This raises an interesting question: are the UK, its Government, its businesses and its citizens becoming increasingly vulnerable to cyber-intrusions?

Public perception, and the focus of the media and policy-makers, is dominated by the potential threat to nation states. Indeed, in 2012 then US Defence Secretary Leon Panetta warned of a possible ‘cyber Pearl Harbor’, with specific focus on the vulnerability of critical national infrastructure. The deployment of resources, including the £1.9 billion announced by the UK Government in November 2016, is primarily geared towards combatting this threat. Spending and planning have a significant focus on attribution: establishing who has carried out an attack. The fear is that, absent a known assailant, the capacity to respond is limited. Attribution, however, is less of a problem than commonly presented. A cyber-intrusion at the State level is an expression of political intentions and therefore does not operate in a vacuum.

Nation states operate in a rational manner to pursue their strategic objectives. Russia has allegedly pursued cyber-attacks for years, against NATO over Kosovo (1999), Estonia (2007), Georgia (2008), NATO over Crimea (2014) and Ukraine (2017). Definitive proof that would satisfy a technocrat as to the origins of the specific attack, likely rerouted through a series of intermediary countries, remains a challenge, but in all the cited examples the Russian involvement is “well evidenced” and “widely accepted”, according to Martin.

Undoubtedly these intrusions create a nuisance and erode confidence in the countries or organisations involved. The question is whether they pose a threat to national security equivalent to a kinetic attack, especially as NATO has declared it will invoke the collective defence of Article V in response to a cyber-attack of sufficient scale.

The corporate level is where the greatest threat exists. Potential perpetrators include nation-states seeking to create instability or economic distress, criminal gangs seeking profit, hacktivists pursuing a specific vendetta, and the Joe Bloggs citizen hacking because he can. Add so-called “white hat” or ethical hackers, who seek to expose companies’ vulnerabilities so that they can be addressed, and the situation becomes incredibly complex. Attribution is and will remain a substantial problem, and attempting to resolve it would not be a worthwhile investment. Instead, companies should accept vulnerability and focus on resilience: the institutional processes for responding to an intrusion in a way that minimises the detrimental impact on the company. The role of insurance firms is crucial, as they have the necessary information to inform big-picture government policy.

Since the UK adopted the National Cyber Security Strategy in 2011, companies have become more open to reporting attacks than in previous years. There is now a greater understanding and acceptance that a cyber-intrusion is not necessarily detrimental to the firm.

This response of business is as important as the attack itself. The recent exposure of hacks on Yahoo and TalkTalk showed the damage of a slow admission, with Yahoo taking three years to acknowledge the attack and TalkTalk fined £400,000 by the Information Commissioner’s Office.

Like business, the individual citizen has to accept a much greater degree of personal responsibility for actions, or inactions, rather than relying on the government or someone else to resolve their problems. This is not limited to the cyber realm.

The UK has made significant improvements in cyber-defence since the Cyber Security Strategy of 2011. The one area that has not been fully embraced is the educational objective of the strategy: the media and general public attach a far greater fear to cyber-intrusion than the actual threat warrants.

Cyber-attacks, as part of international politics, will continue. The approach cannot be to wish them away, but to encourage an integrated perspective where a sensible approach to defence is matched by an appreciation of what those attacks are seeking to achieve.

Originally published on the Birmingham Perspective, 16th March 2017.

Facing Russia - NATO's Warsaw Summit

NATO meets in Warsaw, Poland this weekend in what is arguably its most important summit since the Cold War.

The primary if unstated focus of the meeting is Russia. The alliance will seek to build on “assurance measures”, introduced at the Wales Summit in 2014 in the wake of the Crimea and Ukraine crises, towards an effective and credible deterrence for areas such as the Baltic States.

The challenges and arguments about facing Moscow are not new, but there is increasing concern that they are shifting from the political to the military domain. Cohesion now has to be matched by credibility, through the bedrock of the Article V commitment that an attack on one is an attack on all.

NATO Secretary-General Jens Stoltenberg has indicated that the re-invigoration of collective defence will be the core objective of the Warsaw Summit. So how will success or failure be measured?

At the Wales Summit, NATO established the Very High Readiness Joint Task Force (VJTF) as part of the Readiness Action Plan. In Warsaw, the Alliance needs to go further and ensure that the ability to deploy military force is credible, both in equipment and in political will. The trend of shifting force posture away from high-end military capability will have to be reversed.

This is not just a question of resources. NATO members will have to address the challenges of geography and balance in its operations.

The Baltic states are joined by land to the rest of the Alliance by a narrow corridor connecting Poland and Lithuania. Any planning has to recognize that the Russian enclave of Kaliningrad could be used for “area denial” operations. Credibility will be undermined if the VJTF cannot be deployed from the Baltic into the Balkans across the full spectrum of land, maritime, and air operations. Given questions about how much infrastructure has been developed in Eastern Europe for deployment of the VJTF, the alliance will have to continue pre-positioning of equipment and forward deployment, reinforced by regular exercises designed to enhance national force inter-operability.

NATO has been too reliant on the United States in military operations, which subjects operations to the key determinant of American political will. For example, European allies have not established a heavy strategic airlift capability, compounding the problem of area denial by Russia or another power. Amid a general decline in defence spending and the shift towards expeditionary peacekeeping forces, NATO’s capability to mount a high-end military deterrent is questionable. Alliance members will not only have to meet the spending pledge of 2% of GDP on defence, but also ensure that the expenditure bolsters NATO’s deterrent cornerstone with an agreed standard of readiness.

A shift from extended deterrence and the nuclear guarantee towards a more realistic, effective, and credible deterrent based on conventional forces is needed, coupled with institutional change. An executive committee for certain decisions may eventually replace the principle of consensus; while that radical change is unlikely to be realized soon, greater autonomy and pre-authorisation of action for local commanders may be part of the revised force posture.

Expect the Warsaw Summit Declaration to reinforce the core principle of collective defence, with a shift in orientation away from out-of-area operations such as Afghanistan. Re-establishing a clear escalatory ladder for conventional forces, akin to the Cold War doctrine of “flexible response”, will remove ambiguity and enhance deterrence. Meanwhile, with Russia maintaining a sub-strategic advantage in nuclear weapons, NATO’s forces will benefit from being bolstered across the nuclear triad to ensure that the full range of escalatory response options is available for deterrence, and, thus, stability in Europe.

This article was originally published on 8th July 2016 by EA Worldview.

The Importance of Scooby-Doo to the Provision of Security in the Cyber Environment

In the first week of July I was fortunate enough to attend the annual Cyber Conference at Chatham House. Two significant trends became apparent, which allow me to explore a long-mooted metaphor of mine for cyber-security: at the end of every episode of Scooby-Doo, the ‘monster’ is unmasked and a human is revealed to be behind the incident.

The trends in question are the importance of human factors in cyber security and the institutional processes that enable companies to respond to cyber incidents and mitigate the effects that they may have.

The essential argument that underpins these two trends is that technical cyber-security is near its zenith. Sometimes, engineers will stop an intrusion from becoming a breach and sometimes they won’t. Equally, they will be better at preventing some forms of intrusions than others and a realisation exists that zero risk equates to infinite expenditure. Therefore, an institution, either civic or military, that wants to enhance its cyber-security has to look beyond merely technical means.

The notion of human factors, or what some call the human firewall, is not a ground-breaking concept in cyber-security. The significant change that has occurred in the corporate world is the understanding that the implementation of best practice comes from the C-Suite (CEO/CFO/CCO etc.) and filters throughout the organisation. In the past, IT departments broadly had minimal C-Suite or board-level oversight and purchased specifically tailored software to provide a solution for a given threat: a technical response. However, as information and, especially, data, the intellectual property of the modern world, have become increasingly important to business, issues of security have moved to board level and away from the IT department. In many ways, this is akin to the ‘whole force’ approach utilising ‘full spectrum targeting’ that the British military espouses, and it runs parallel with changes in working habits, such as the blurring of the work/social and office/home distinctions, along with increasing demand for BYOD (Bring Your Own Device). Thus, the security of the business process is fundamentally altered, and procedural solutions are necessary to protect core business assets.

The enhancement of institutional processes relates to an increasing acceptance of the mantra that two types of companies exist: those that have been hacked and those that don’t know that they have been hacked. An intrusion is a contained event whereby an unauthorised presence exists on a network. A breach is the tangible consequence of an intrusion, whereby intellectual property or data is, or can be, destroyed, altered, or stolen; the majority of intrusions do not become breaches. The procedures in place should achieve one of three objectives: to prevent an intrusion from becoming a breach, to reduce the time taken to identify a breach (cases of well over a year exist!), and to mitigate the damage caused by a breach once discovered.

As mentioned earlier, the military spend a lot of time talking about a ‘whole force’ approach and ‘full spectrum targeting’. Although this is the British terminology, the US has a similar approach, and it appears to confirm recourse to Clausewitzian total war principles, which have largely been shunned since the end of the Second World War. In the cyber environment, it is perhaps unsurprising that the level of civilian involvement is high given that around 95% of cyber infrastructure is owned by private companies. If you consider that to be a stretch then bear in mind that full spectrum targeting has been defined as delivering all levels of national power to achieve an outcome.

The traditional separation of the battlespace from societal-space is increasingly blurred. During the majority of human history civilians (non-combatants) have withdrawn from the battlespace and left the combatants to engage each other. When a victory is decided the non-combatants are free to return to the battlespace with a decreased, or negligible, risk of becoming collateral damage. Total war doctrine distorts this distinction. Utilising all levels of national power to a military outcome means that the civic-industrial process supports the military instrument. Therefore, all the apparatus of civil-industry, including the morale of the workforce, becomes a legitimate target of war. Operation Millennium, and the other 1,000 bomber raids during the Second World War strategic bombing campaign, exemplifies the point.

The cyber environment is an area where national advantage can be gained, both in relative and absolute terms. Consequently, the understanding of how best to gain effect from cyber operations has evolved from a narrow initial focus on perimeter-style network defence (think of a castle with a moat around it) to more multi-layered and active defence concepts, whereby defence is not merely sitting behind a wall and hoping it holds, but also the ability to charge out through the gates and drive the enemy away; better still, to destroy the enemy before he gets close to your castle; or, better yet, to remove the feasibility of the enemy entering your land and moving up to the castle walls at all. In military-strategic jargon this equates to deterrence by denial, and it increasingly makes the distinction between offensive and defensive action problematic.

Deterrence maintains a crucial role in the cyber environment and is facilitated by the ability to attribute action to a particular party; think name and shame. Attribution is not some form of Holy Grail waiting to be discovered. It would be fair to say that prior to 2011 the process of attributing action in the cyber environment was not as well developed as it is today. By separating the strategic from the tactical, or operational, level, it can be seen that attribution holds well at the strategic level. Hostile action in the cyber environment does not take place within a vacuum. Without picking on one country’s actions, consider the incidents in Estonia (2007), Georgia/South Ossetia (2008), Crimea (2014), and the ongoing events in Eastern Ukraine. What level of proof is required to attribute such hostile action? Demanding an electronic fingerprint and a breadcrumb trail back to the perpetrator seems at odds with the transition of cyber-security principles away from the purely technical. Attribution is a firmly political decision.

It is increasingly understood that interaction, hostile or otherwise, in the cyber environment represents a reciprocal expression of human choice. As such, conceptualisation of issues within the cyber environment and the development of solutions and strategies require an appreciation of Scooby Doo. The technological fascination with, and, more generally, the poor public understanding of, the cyber environment, ensures that Scooby Doo would rarely earn his snack and that the true perpetrators of harm remain hidden behind a veil of ignorance. Consequently, and with one eye to the pending SDSR in the UK, unless human factors are central to a cyber-security strategy, it will surely be insufficient and incomplete.

This article was originally published on the Institute for Conflict, Cooperation and Security Blog.