Bluntly, there are enterprises and state actors who want to steal your research and associated intellectual property (IP). They are prepared to use deception to do this, and this may extend to the use of technical means to acquire your work where more straightforward methods are unavailable to them. They will seek out and exploit weak seams, at both the organisational and the individual level. They are not above ethical compromise of the research endeavour.
It is uncomfortable to acknowledge this, and to confront the reality that this is systematic, ongoing and, for some states, a matter of policy. In undertaking this theft, these enterprises and state-actors damage collaboration, and hence the basis of scientific progress, and impact the trustworthiness and integrity of research more broadly.
Science and technology are a domain of geopolitical contest. Research security policies and practices are required to serve the following goals.
To protect the economic interests of the UK, and to allow the funders of the research (often the UK government and public bodies) and the owners and developers of the associated IP to gain appropriate benefit from it.
To protect the rights of the individuals undertaking the research to secure appropriate recognition for their work, and to safeguard their autonomy to determine the uses of that research.
To protect research subjects, their data, and the basis upon which their consent was granted.
To limit the capacity of hostile or adversary states to impact the UK’s democracy and rule of law. This includes preventing research institutions and structures from becoming the subject of, or being directed towards, political interference.
To protect sovereign defence and security assets and associated research, and know-how arising from research, and to preserve advantage in these areas. Clearly, this is an area where the motivation of adversaries and hostile states, and the associated risks, are highest.
Each of these goals is challenging to achieve. Balancing the needs of research security against the risk of chilling collaboration, or of creating costly bureaucracy and unwarranted oversight, is no easy task. This is particularly the case when research is very much a bottom-up, entrepreneurial endeavour, undertaken by a global workforce in institutions not always well aligned to their specifically national responsibilities. We will just have to do the best we can.
A significant problem, however, in setting appropriate research security policy arises from what I believe to be an outdated frame: that of ‘dual-use’ technology.
Research universities have always worked on defence-related technologies (with, in the UK, direct support from His Majesty’s Government through, for example, the Ministry of Defence and the Defence Science and Technology Laboratory, and the large defence primes). Because of political sensitivities, or at any rate, perceived sensitivities, this is not much discussed in open fora. In my view this is a mistake.
Researchers in universities have played a key role in weapons research, specifically nuclear science, modelling and materials, and in energetics and shock physics. Other areas well represented in UK universities include, but are not restricted to, RF technologies, directed energy systems, propulsion including hypersonics, sensors, satellite and remote sensing systems, and human performance in hostile environments. These long-standing research areas are, for the most part, classically ‘dual-use’. They have clear and direct military applications. Though the science may be broad, it is often relevant to extreme regimes. The technologies associated with this work are narrower and the associated civil applications constrained. Much of this is covered by export control arrangements. It is unusual for people working in these areas to be unaware of the uses to which their work can be put, at least in outline, if not the specifics. These areas require strong protections.
Increasingly, however, large areas of technology, principally but not exclusively in the digital arena, have become highly relevant to military and/or national security applications. Examples include Artificial Intelligence (AI), the Internet of Things (IoT), autonomous systems, edge computing, image processing, quantum sensing, bioengineering, novel materials design, and behavioural sciences. These are principally platform technologies whose applications are overwhelmingly commercial. The research and innovation community is broad. We cannot readily limit access or collaboration, nor would it necessarily be advantageous if we could. The amount of research across the ecosystem is so substantial that a determined adversary can in any event readily secure access. Whilst these areas are, in some senses, ‘dual-use’, it makes very little sense to treat them from a research security standpoint in a manner similar to the areas set out above. I have considered the labels ‘primary-use’ and ‘secondary-use’ as a better way to distinguish the two.
For these platform technologies, I propose a different approach to research security. We need far deeper and more transparent engagement from the defence and security communities with the research ecosystem. Simply increasing regulation and oversight will be ineffective. Researchers must be equipped to make informed, nuanced decisions about what requires protection, from whom, and how.
We should speak more openly about defence and security applications. We should be more confident, and more collectively engaged, in meeting our responsibilities to defend democracy and protect society.
When we had a much larger defence research establishment, keeping the really sensitive research activities secure was much easier. We allowed market forces to dominate sovereign security, and we are now paying the price. This is not impossible to repair, but repair will need political will, sustained investment, and scientific and engineering leadership.