Gavin E L Hall Blog - Insights into my Mind

D-Day at 75: Is It Time to Reconsider Britain’s “Special Relationship” with the US?

The overall picture of Britain’s so-called special relationship with the US since the D-Day landings 75 years ago is not one of mutual respect and cooperation between equals, but rather one of dominance. 

As the 75th anniversary of the incredible feat of cooperation that began the liberation of Europe, D-Day or Operation Overlord, approaches on June 6, it is worth reflecting on the nature of the relationship between the United Kingdom and the United States, and indeed whether such a relationship can be considered "special."
The D-Day landings were the product of a partnership of equals. The plans were conceived by a British general, Frederick Morgan, and supported by a British-dominated staff in control of the Allied forces: General Bernard Montgomery in charge of land troops, Air Chief Marshal Trafford Leigh-Mallory responsible for aerial support, and Admiral Bertram Ramsay for the naval forces. Furthermore, the deputy supreme Allied commander, Arthur Tedder, was British, leaving Dwight Eisenhower, as supreme Allied commander, the only non-British person in a strategic command position. Given that the plans for the operation had been developed by the British, and that senior command positions were held by British officers, Eisenhower's appointment appears to have been rooted more in politics than ability, which is not to say that he wasn't a capable general.

The picture of cooperation is further reinforced by the participation of forces. Britain supplied 892 of the 1,213 warships taking part, and 3,261 of the 4,126 landing craft, with the Royal and Merchant Navies providing more than double the personnel of the United States. The Royal Air Force supplied around half of the 11,590 Allied aircraft involved, whilst on land British and Canadian forces had responsibility for three landing beaches (Gold, Sword and Juno), with 75,215 troops and 7,900 paratroopers, while the US covered the remaining two beaches (Omaha and Utah) with 57,500 troops and 15,500 paratroopers.

Furthermore, the intelligence operation that broke the German Enigma codes was led by Alan Turing at Bletchley Park, and the substantive disinformation campaign, including the fictitious First United States Army Group designed to trick the Germans into believing the invasion would take place at Calais, was led by Colonel David Strangeways, a Brit. The bulk of the forces involved in D-Day, therefore, were provided by the British Empire. This is not to say that the American participation should be overlooked. But the common perception put forward by Hollywood films, that the United States led the salvation of Europe, deserves to be reversed: the operation was planned by the British, based on British intelligence, led largely by British commanders and involved a majority of British troops.

The so-called special relationship of the Second World War was, therefore, one of military parity in terms of command, personnel and capability, with intelligence arguably dominated by the British. Political relationships were largely dependent on the respective personalities of the individual leaders at any given time and, as such, can't be considered to be part of a special relationship. However, the lend-lease programs and the postwar Marshall Plan aid demonstrated the economic dominance of the United States, which was further reinforced by the Bretton Woods system, based on linking the dollar to gold reserves. Furthermore, the International Monetary Fund and the World Bank are both effectively controlled by the United States, as the voting percentages in each organization are dictated by the levels of contribution. Since the Bretton Woods system collapsed in 1971, the dollar has operated as a global reserve currency.

Video: https://www.youtube.com/watch?v=bnlLgUU4VAY

Until the Gulf War broke out in 1991, the United Kingdom and United States had been unwilling to directly support each other militarily. The UK did not engage in combat operations alongside the US in Vietnam, nor did the United States get involved in the legacies of empire, such as the Suez Crisis of 1956 and the Malayan Emergency of 1948-60, or indeed when the territorial integrity of the United Kingdom was undermined by the Argentinian invasion of the Falkland Islands in 1982.

Beyond the mutual support offered as members of NATO and the P5 of the UN Security Council, where the interests of both countries were arguably aligned, there is little evidence of a special relationship between Britain and the US during the Cold War period. Indeed, it could be argued that the only reason the notion of a special relationship has such prominence in the mindset is down to the positive relationship between President Ronald Reagan and Prime Minister Margaret Thatcher in the 1980s.

The Gulf War saw combined operations to expel Saddam Hussein's forces from Kuwait, with the British providing a number of specialist capabilities, notably special forces and low-level runway-denial missions, which the Royal Air Force was the only air force in the world capable of performing. (The Tornado GR1 was the only supersonic low-level bomber and utilized the JP-233 Low-Altitude Airfield Attack System.) However, the bulk of the coalition forces, around 700,000 out of a total of approximately 950,000 troops, were provided by the United States. Furthermore, every command position was held by an American officer. It can be argued that, as this was ostensibly an American operation, this is unsurprising. However, it is also worth noting that the United States didn't feel obligated to make a political appointment, as in the case of Eisenhower on D-Day, to reinforce the special relationship.

The nature of the relationship is stark when one considers operations to defeat the Taliban insurgency in Afghanistan from 2006 onward, especially following the 2009 surge under President Barack Obama. The US provided over 75% of the troops deployed to Afghanistan, even with a coalition of 43 partner countries participating in operations during the course of the 11-year International Security Assistance Force (ISAF) mission.

American troops consistently took part in riskier combat operations, despite the common perception of the increased risks to British troops operating in Helmand province. Once again, all the senior command positions were held by American officers after 2007, following the establishment of the ISAF countrywide command under a British general, David Richards, and NATO's Allied Rapid Reaction Corps. The initial period of the ISAF mission, which saw the mandate expanded outward from Kabul between 2003 and 2006, was run on a rotational basis by contributing countries. A number of deputy commanders were British, and the regional commands rotated between Allies, though none was specifically British.

Despite the United Kingdom supplying the most troops after the United States and operating in one of the most dangerous areas of Afghanistan, no special dispensation was given in terms of providing the command structure for the mission. Therefore, the argument that Britain is just another US ally and not in any way special gains credence.

The change over the course of recent history in the widely hailed special relationship between Britain and the United States is difficult to miss. Politically, the relationship is dependent on the respective leadership personalities at a given time. Economically, there is no special relationship, and, as seen in the Gulf War and Afghanistan, militarily the United Kingdom lacks the ability to contribute on a scale comparable to the United States.

The only area that could be considered special is in the field of intelligence, where the UK, as part of the Five Eyes alliance, does enjoy a privileged status based on capability. The overall picture, thus, is not one of mutual respect and cooperation between equals, but rather one of dominance. Therefore, when we remember the bravery of those involved in the Normandy landings of June 6, 1944, and the associated costs, both in terms of human lives and resources, it could be worthwhile to also contemplate what Britain's role in the world, and its relative power within the international system, should be going forward, given that it is no longer an equal partner in a special relationship.

This article was originally published by Fair Observer, 5th June 2019

Is Russia an Underwater Threat to the Internet?

In early December, media picked up a report by Policy Exchange examining the potential vulnerability of communications infrastructure, which concludes that the UK is "uniquely vulnerable". While it is good to see that non-technical aspects of cybersecurity are receiving policy coverage, it is important to evaluate the nature of the threat and avoid hyperbole.
We first need to understand how the internet works: not just the Web, but the underlying network of networks. In essence, this is a communications relay. Information is requested by a user, Point A, from a provider, Point B. The information is transmitted between A and B via a series of intermediate nodes, such as the London Internet Exchange (LINX). Information is routed between these nodes via a series of cables.
Due to the preponderance of water on the planet, many of these connectors between country nodes are undersea. On TeleGeography’s Submarine Cable Map, it is immediately apparent that a number of chokepoints exist that broadly align with chokepoints for global shipping. The threat, highlighted in the Policy Exchange Report, is that these cables are vulnerable to hostile action.
So it's time to start stockpiling money, water, and canned food, right? Well, maybe hold fire on the panic. First, the threat to undersea cables is not new. As the Policy Exchange report highlights, we have used undersea cables for years to transmit information: telegrams, financial data ("the cable" in dollar-sterling trading), and telephony. The cables were a possible site of confrontation during the Cold War, and The Diplomat and other publications ran stories in 2015 similar to the current reports of concern.
Indeed, the harvesting of data, such as in Operation Ivy Bells in the early 1970s, is probably a more substantive threat than the severing of cables and cutting off an entire country. The CIA has been active in undersea intelligence gathering and counter-espionage since the Eisenhower Administration of the 1950s, primarily through the US Navy's Sound Surveillance System (SOSUS).
So intelligence and military assets have been deployed in the undersea theater of war for decades. The threat to the undersea cables around the UK is no different to the threat that has existed for more than 70 years, and Britain has assets deployed to mitigate that threat.
Let's assume for a moment that the threat is a clear and present danger. A state, or rogue actor, has decided to cut off the UK's access to the internet. Britain has a number of cables entering at various points, though there is a concentration in the Bristol Channel. How many cables need to be cut, and how coordinated does this action have to be, to have a noticeable effect?
Remember, information is being routed via nodes. If we want to go from A to B on our Christmas holidays and the direct route is closed, what do we do? We go from A to B via C. We take a different route. Yes, congestion is likely going to be higher and the journey along the route will be longer due to increased traffic, but we will arrive at our destination. The internet is no different: cutting an individual cable, or even a series of cables, will be an annoyance, but it does not shut down access, as the sketch below illustrates. The viability of such an attack as a threat is therefore negligible, especially as slowing down the internet can be achieved without the hassle of developing deep-sea combat capabilities.
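To make the rerouting point concrete, here is a minimal sketch in Python. It models cables as edges in a tiny graph and finds a route with a breadth-first search; the nodes and links are entirely hypothetical and do not reflect real cable topology.

# Illustrative sketch only: a toy model of routing between exchange points.
# The topology below is hypothetical, not a map of real undersea cables.
from collections import deque

links = {
    ("UK", "US"), ("UK", "France"), ("UK", "Ireland"),
    ("Ireland", "US"), ("France", "Spain"), ("Spain", "US"),
}

def neighbours(node, cut=frozenset()):
    """All nodes reachable in one hop, ignoring any severed cables."""
    out = set()
    for a, b in links:
        if frozenset((a, b)) in cut:
            continue
        if a == node:
            out.add(b)
        if b == node:
            out.add(a)
    return out

def find_path(src, dst, cut=frozenset()):
    """Breadth-first search: return a route from src to dst, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbours(path[-1], cut):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("UK", "US"))                       # direct route available
severed = frozenset({frozenset(("UK", "US"))})     # cut the direct cable
print(find_path("UK", "US", severed))              # longer route via a third country

With the direct UK-US link removed, the search still finds a route via a third country, which is the whole point: degradation, not disconnection.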
Cables, whether on land or undersea, are only one part of the digital infrastructure. Taking out a node has much greater potential for damage, but I will wait for the think tanks to catch up in their report writing before discussing that issue. You, the reader, however, can probably develop scenarios on your own.

This article was originally published on EA Worldview, 27th January 2017.

WannaCry: The Role of Government in Cyber-Intrusions

The WannaCry cyber-incident of May 12, which involved the British National Health Service (NHS), has received a good deal of coverage. Comments focused on whether the attack was preventable and whether it reveals increased vulnerability in public-sector organizations, with substantive focus on the use of the outdated Windows XP. Such analysis does, however, gloss over an essential question: What do we want the role of the government to be and, indeed, what could it or what should it be?
The role of the government in cybersecurity involves two essential debates. First, the dividing line between corporate — including the public sector — and government responsibility. Second, if some role for the government is accepted, then which branch of government should have primacy or be involved at all? The 2016 National Cyber Security Strategy attempts to delineate the responsibilities of individuals, corporations and government.

The strategy established that the NHS and other public bodies had “the responsibility to safeguard the assets which they hold, maintain the services they provide, and incorporate the appropriate level of security into the products they sell.” In light of the WannaCry malware infestation, from the operational perspective, the failure lies within the NHS itself as opposed to the government. However, with the strategic level in mind, it remains within the purview of the government “to protect citizens and the economy from harm” and that the government “is ultimately responsible for assuring national resilience … and the maintenance of essential services and functions across the whole of government.”

In order to begin to think about the roles and responsibilities in UK cybersecurity, consider who holds the information regarding cyber-intrusions and malicious activity in the cyber-environment. The National Cyber Security Centre (NCSC), as part of the UK's Government Communications Headquarters (GCHQ), is the central data coordination point for government oversight of cyber-activity.

In March 2015, however, the government emphasized the role of insurance companies in managing and mitigating risk in the cyber-environment. Indeed, the cyber-insurance market has been growing in recent years and is expected to grow significantly following the WannaCry incident. Insurance companies, therefore, have an increasing amount of information on the preparedness and vulnerabilities of UK networks. How much of this information should be shared with the NCSC?

The immediate response is: all of it. Yet considering that the exploit used by WannaCry had been "identified long ago" by the US National Security Agency (NSA), a government agency acting as the central collation point for all cyber-environment data is not necessarily in the interest of enhancing security within the UK or, indeed, in the global commons.

So what could the government do?

The simple answer is not a lot. The removal of geographic boundaries, the increase in actors, the deniability of actors, the variations in potential target groups and the overall impact on social cohesion mean that the job is beyond the scope of the government as primary provider of cybersecurity for the nation, hence the blurred delineation seen in the 2016 strategy. Attempting to continue on the present course is reliant on the hope that no significant intrusions occur.

But cyber-intrusions will occur, and the individual, corporate and public-sector bodies that utilize the cyber-environment need to have a clear understanding that their data is their responsibility. The first step is educational: accept that a cyber-intrusion will happen and that you will lose data. The question then becomes how to minimize the loss and recover lost data, known as resilience. This is where the government's concern in the cyber-environment should lie: helping to minimize the harm suffered from an intrusion across all levels of the UK cyber-environment.

Furthermore, the accountability for individual, corporate or public-sector aspects of cybersecurity should be transferred to the insurance industry. This means that body X, with good investment in cybersecurity, will pay a lower premium than body Y, which has negligible investment or is reliant on out-of-date technology. The effect of such a shift would be that all entities would be forced to take cybersecurity seriously or face higher premiums and a hit to the bottom line. For the public sector, not only will IT procurement have to be considered, but also a cost-benefit analysis of increasing premiums versus new infrastructure, as the sketch below illustrates. It would be interesting to see the response to a future intrusion in the public sector if the government had to admit that it chose the cheap option with its citizens' data.
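As a rough illustration of that cost-benefit calculation, here is a minimal Python sketch with entirely hypothetical figures; none of the premiums, probabilities or loss values below come from this article or from any insurer.

# Illustrative sketch only: hypothetical numbers for a premium-versus-investment comparison.
def expected_annual_cost(premium, invest_per_year, breach_probability, uninsured_loss):
    """Insurance premium plus security spend plus expected uninsured losses."""
    return premium + invest_per_year + breach_probability * uninsured_loss

# Body X: invests in cybersecurity, so pays a lower premium and faces lower breach risk.
body_x = expected_annual_cost(premium=50_000, invest_per_year=200_000,
                              breach_probability=0.05, uninsured_loss=1_000_000)

# Body Y: negligible investment and out-of-date technology, so a higher premium and risk.
body_y = expected_annual_cost(premium=250_000, invest_per_year=0,
                              breach_probability=0.40, uninsured_loss=1_000_000)

print(f"Body X expected annual cost: £{body_x:,.0f}")
print(f"Body Y expected annual cost: £{body_y:,.0f}")

Under these invented numbers, the body that invests comes out ahead once expected losses are counted, which is precisely the incentive an insurance-led model is meant to create.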

This article was originally published on Fair Observer, 18th May 2017.

The Cyber Threat to the United Kingdom: Reality Check

The opening of the UK National Cyber Security Centre (NCSC) in February 2017, part of the electronic intelligence agency GCHQ, was accompanied by the revelation that the UK had been subjected to 188 "high-level cyber attacks", defined by Ciaran Martin, Director of the NCSC, as operations which threatened national security, over the previous three months. This raises an interesting question: are the UK, its government, its businesses and its citizens becoming increasingly vulnerable to cyber-intrusions?

Public perception, and the focus of the media and policymakers, is dominated by the potential threat to nation-states. Indeed, in 2012 then US Defense Secretary Leon Panetta warned of a possible "cyber Pearl Harbor", with specific focus on the vulnerability of critical national infrastructure. The focus of resource deployment, including the £1.9 billion announced by the UK Government in November 2016, is primarily geared towards combatting this threat. Spending and planning have a significant focus on attribution: identifying who has carried out an attack. The fear is that, absent a known assailant, the capacity to respond is limited. Attribution, however, is less of an issue than commonly presented. A cyber-intrusion at the state level is an expression of political intentions and therefore does not operate in a vacuum.

Nation states operate in a rational manner to pursue their strategic objectives. Russia has allegedly pursued cyber-attacks for years, against NATO over Kosovo (1999), Estonia (2007), Georgia (2008), NATO over Crimea (2014) and Ukraine (2017). Definitive proof that would satisfy a technocrat as to the origins of the specific attack, likely rerouted through a series of intermediary countries, remains a challenge, but in all the cited examples the Russian involvement is “well evidenced” and “widely accepted”, according to Martin.

Undoubtedly these intrusions create a nuisance and erode confidence in the countries or organisations involved. The question is whether they pose a threat to national security equivalent to a kinetic attack, especially as NATO has declared it will invoke the collective defence of Article V in response to a cyber-attack of sufficient scale.

The corporate level is where the greatest threat exists. Potential perpetrators include nation-states seeking to create instability or economic distress, criminal gangs seeking profit, hacktivists pursuing a specific vendetta, and Joe Bloggs the citizen, because he can. Add so-called "white-hat" or ethical hackers, who seek to expose companies' vulnerabilities so that they can be addressed, and the situation becomes incredibly complex. Attribution is and will remain a substantial problem, and attempting to resolve it would not be a worthwhile investment. Instead, accepting vulnerability and focusing on resilience, and on the institutional process for responding to an intrusion, makes a detrimental impact on the company less likely. The role of insurance firms is crucial, as they have the necessary information to inform big-picture government policy.

Since the UK adopted the National Cyber Security Strategy in 2011, companies have become more open to reporting attacks. There is now a greater understanding and acceptance that a cyber-intrusion is not necessarily detrimental to the firm.

This response of business is as important as the attack. The recent exposure of hacks on Yahoo and TalkTalk showed the damage of a slow admission, with Yahoo taking three years to acknowledge the attack and TalkTalk fined £400,000 by the Information Commissioner's Office.
Like business, the individual citizen has to accept a much greater degree of personal responsibility for their actions, or inactions. This is not limited to the cyber-realm; it extends to relying on the government, or someone else, to resolve their problems.

The UK has made significant improvements in cyber-defence since the Cyber Security Strategy of 2011. The one area that has not been fully embraced is the educational objective of the strategy. The media and general public attach far greater fear to cyber-intrusion than the reality of the threat warrants.

Cyber-attacks, as part of international politics, will continue. The approach cannot be to wish them away, but to encourage an integrated perspective where a sensible approach to defence is matched by an appreciation of what those attacks are seeking to achieve.

Originally published on the Birmingham Perspective, 16th March 2017.

Facing Russia - NATO's Warsaw Summit

NATO meets in Warsaw, Poland this weekend in what is arguably its most important summit since the Cold War.

The primary if unstated focus of the meeting is Russia. The alliance will seek to build on “assurance measures”, introduced at the Wales Summit in 2014 in the wake of the Crimea and Ukraine crises, towards an effective and credible deterrence for areas such as the Baltic States.

The challenges and arguments about facing Moscow are not new, but there is increasing concern that they are shifting from the political to the military domain. Cohesion now has to be matched by credibility, built on the bedrock of the Article V commitment that an attack on one is an attack on all.

NATO Secretary-General Jens Stoltenberg has indicated that the re-invigoration of collective defence will be the core objective of the Warsaw Summit. So how will success or failure be measured?

At the Wales Summit, NATO established the Very High Readiness Joint Task Force (VJTF) as part of the Readiness Action Plan. In Warsaw, the Alliance needs to go further and ensure that the ability to deploy military force is credible, both in equipment and political will. The trend to shift force posture away from high-end military capability will have to be reversed.

This is not just a question of resources. NATO members will have to address the challenges of geography and balance in its operations.

The Baltic states are joined by land to the rest of the Alliance by a narrow corridor connecting Poland and Lithuania. Any planning has to recognize that the Russian enclave of Kaliningrad could be used for "area denial" operations. Credibility will be undermined if the VJTF cannot be deployed from the Baltic to the Balkans across the full spectrum of land, maritime, and air operations. Given questions about how much infrastructure has been developed in Eastern Europe for deployment of the VJTF, the alliance will have to continue pre-positioning equipment and forward-deploying forces, reinforced by regular exercises designed to enhance interoperability between national forces.

NATO has been too reliant on the United States in military operations, which makes operations subject to the key determinant of American political will. For example, European allies have not established a heavy strategic airlift capability, compounding the problem of area denial by Russia or another power. Amid a universal decline in defence spending and the shift towards expeditionary peacekeeping forces, NATO's capacity to field a high-end military deterrent is questionable. Alliance members will not only have to meet the spending pledge of 2% of GDP on defence, but also ensure that the expenditure bolsters NATO's deterrent cornerstone with an agreed standard of readiness.

A shift from extended deterrence and the nuclear guarantee towards a more realistic, effective, and credible deterrent based on conventional forces is needed, coupled with an institutional change. An executive committee for certain decisions may eventually replace the principle of consensus — while that radical change is unlikely to be realized soon, greater autonomy and pre-authorisation of action for local commanders may be part of the revised force posture.

Expect the Warsaw Summit Declaration to reinforce the core principle of collective defence, with a shift in orientation away from out-of-area operations such as Afghanistan. Re-establishing a clear escalatory ladder for conventional forces, akin to the doctrine of "flexible response" from the Cold War, will remove ambiguity and enhance deterrence. Meanwhile, with Russia maintaining a sub-strategic advantage in nuclear weapons, NATO's forces will benefit from being bolstered across the nuclear triad to ensure that the full range of escalatory response options is available for deterrence and, thus, stability in Europe.

This article was originally published on 8th July 2016 by EA Worldview.