Last week Reuters published an article "Rogue communication devices found in Chinese solar power inverters" that has received wide and sensationalised coverage. Whilst the original Reuters article provides some limited technical detail and attempts balance, The Times, for example, goes with "Chinese ‘kill switches’ found hidden in US solar farms" and a sub-head "Hidden cellular radios could be activated remotely to cripple power grids in the event of a confrontation between China and the West".
The essence of the story is that undocumented communication modules, in principle permitting remote and unauthorised access, have been found embedded in solar power inverters (solar panels provide DC whilst power grids require AC, so an inverter is essential) purchased from Chinese suppliers. The context is, of course, an increasing dependence on Chinese technology in the area of renewables, strategic competition in technology between the US and its allies, and a fractious trade relationship.
Now, far be it from me to discount the possibility of, and the risks posed by, 'supply chain attacks': attacks enabled by an adversary interposing themselves in the supply chain for critical equipment and altering it so as to obtain access, or to interfere with the proper operation of that equipment. I have, after all, only recently published a Commentary for RUSI (the defence and security think tank), "Technical Security: Back to the Future", pointing to precisely this risk. I am, however, keen to ensure these risks are not misunderstood or misrepresented.
Supply chain attacks are extremely difficult and costly to mount. They are always liable to discovery, particularly if they are required to lie dormant for a significant period. The loss of 'equity' in that event, the likelihood of attribution, and the exposure of intent make them a tool that must be used with great care and subject to precise targeting.
So, what might be happening here? The blindingly obvious explanation is bad engineering: a difference between the specification and the system as implemented; undocumented control and management interfaces; residual testing infrastructure; redundant implementations; multiple variants; the list goes on. Such errors are particularly likely to manifest when the priorities are low cost, extended functionality, and speed to market. Engineers know this, and customers ought to know that 'you get what you pay for'.
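To make the 'residual testing infrastructure' point concrete, here is a minimal, entirely hypothetical sketch in Go of how an unauthenticated factory-test interface might survive into shipped firmware. The endpoint paths and the DISABLE_FACTORY_TEST variable are inventions for illustration; nothing here is drawn from the inverter case itself.

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Documented production interface.
	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})

	// Residual factory-test interface: intended for the bench, but the
	// gating environment variable defaults to "enabled" when unset, so
	// the endpoint ships live unless someone remembers to turn it off.
	if os.Getenv("DISABLE_FACTORY_TEST") == "" {
		http.HandleFunc("/factory/reboot", func(w http.ResponseWriter, r *http.Request) {
			// Unauthenticated device control: exactly the kind of
			// undocumented access a later audit would flag.
			w.Write([]byte("rebooting\n"))
		})
	}

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Nothing in the sketch is malicious; the hazard is simply a bench facility that nobody was tasked to remove before the product shipped.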
Now, please do not get me wrong. I am not content with this situation, because bad engineering is as much of a threat as a malicious implant. Indeed, perhaps more of one. You start with a risk of failure arising simply from errors and unintended interactions, and the vectors of attack are multiplied and difficult to discern. The situation is exacerbated if your adversary is aware of the bad engineering and is better positioned to exploit it than you are to mitigate it. This was, in reality, a large part of the 'Huawei problem': you do not need to posit a deliberate effort to provide network equipment with strategic vulnerabilities when multiple poorly developed software variants across a complex technical product portfolio will yield sufficient vulnerabilities to be exploited.
Clearly, better engineering would be a good thing, as would, for a broad range of reasons, lessening our dependence on Chinese technology. Nevertheless, I am sceptical that either is particularly realistic. And there are, as this case shows, trade-offs: is national security better served by the rapid deployment of renewables to slow climate change, or by a more robust energy infrastructure? The balance to be achieved is not straightforward.
My preferred approach is to build resilient systems founded upon 'zero-trust' architectures, which assume that threats arise from both outside and inside the system, and that no component is trusted merely because it already sits within the system boundary; a minimal sketch of the idea follows below. In the meantime, inadequate analysis and sensationalist coverage yield poor policy.
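For readers who want the zero-trust idea made concrete, the sketch below shows a control endpoint that authenticates every caller with mutual TLS and authorises each request against a verified identity, rather than trusting network position. It is a minimal illustration, not a reference design: the certificate file names and the 'inverter-controller' identity are hypothetical.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load the CA that signs client certificates. Every caller must
	// present a certificate chained to this CA, even 'internal' ones.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs:  pool,
			ClientAuth: tls.RequireAndVerifyClientCert, // no anonymous callers
			MinVersion: tls.VersionTLS12,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Authorise per request from the verified client identity,
			// not from the caller's position on the network.
			cn := r.TLS.PeerCertificates[0].Subject.CommonName
			if cn != "inverter-controller" { // hypothetical authorised identity
				http.Error(w, "forbidden", http.StatusForbidden)
				return
			}
			w.Write([]byte("command accepted\n"))
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```

The point of the design is that a component's presence inside the network perimeter buys it nothing: an undocumented radio on such a network can connect, but without a certificate chained to the operator's CA it can command nothing.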