A hardware leash to keep AI under control

Amidst the rapid evolution and diffusion of artificial intelligence, concern is growing about the risks of its uncontrolled development and deployment. Many are therefore considering mitigation strategies that integrate constraints into vital AI hardware components, such as GPUs, to limit the power of AI algorithms. This approach complements measures already established by some governments (such as the US), like export controls, which block access to dual-use AI-based technologies but also significantly harm the business of companies in the AI domain (Nvidia has lost billions due to export restrictions).

 

Indeed, regardless of their sophistication and complexity, AI algorithms are ultimately bound by the functionalities and capabilities of the hardware they run on. Researchers are therefore exploring ways to leverage this symbiosis between hardware and software to contain the threats associated with AI-based systems, including the idea of embedding the regulations that govern the training and deployment of advanced algorithms directly into the chips on which they run.

 


Figure 1 - Google’s Titan server chip (left) and first-generation Titan M security chip (right)

 This new strategy is currently a hot topic in theoretical debates about dangerously powerful AI and could offer a robust solution to prevent AI misuse by rogue nations, irresponsible companies, criminal organisations, hackers, individuals, and even autonomous systems.

 

Unlike conventional laws or treaties (see e.g. the AI Act), this strategy might prove harder to evade, since it relies on limitations introduced by design at the hardware level. In support of this view, a recent report by the Center for New American Security1 highlights how carefully restricted silicon could strengthen various AI controls.

 

Certain chips already feature trusted components to protect sensitive data or prevent misuse. For instance, iPhones store biometric data in a trusted area called the Secure Enclave, a dedicated hardware component separate from the main processor, with its own isolated memory and processing resources.

 

The biometric data is encrypted and stored in the Secure Enclave, which prevents unauthorised access or tampering and restricts access to authorised processes only. Similarly, Google relies on the Titan M family of chips2, built around a RISC-V CPU with its own memory and cryptographic accelerator, which is used in Pixel smartphones as a secure element that stores sensitive information, performs cryptographic operations, and verifies the integrity of the device's software during OS boot.
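As a purely illustrative sketch, the Python snippet below mimics the kind of integrity check such a secure element performs at boot: each boot-stage image is hashed and compared against reference digests held by the chip. The stage names and image bytes are hypothetical; in a real device the reference values are provisioned by the manufacturer and the comparison runs inside the isolated secure hardware, not in application code.

import hashlib

# Hypothetical boot-stage images; on a real device these would be firmware
# blobs read from flash, and the reference digests would be provisioned
# into the secure element by the manufacturer.
BOOT_STAGES = {
    "bootloader": b"bootloader image bytes",
    "kernel": b"kernel image bytes",
}
APPROVED_DIGESTS = {name: hashlib.sha256(img).hexdigest()
                    for name, img in BOOT_STAGES.items()}

def verify_boot_chain(stages: dict[str, bytes]) -> bool:
    # Halt the boot sequence if any stage deviates from its reference digest.
    for name, image in stages.items():
        if hashlib.sha256(image).hexdigest() != APPROVED_DIGESTS.get(name):
            print(f"Integrity check failed for {name}: halting boot")
            return False
        print(f"{name}: OK")
    return True

verify_boot_chain(BOOT_STAGES)                            # all stages pass
verify_boot_chain({"kernel": b"tampered kernel image"})   # check fails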

 

Moreover, to protect cloud infrastructure and services, Google adopts custom security-oriented chips in its data centres, including cryptographic accelerators, secure storage modules, and hardware-based encryption engines.

 

This approach could be extended to leverage similar features in GPUs, or to incorporate new functionalities into future chips, so that AI workloads cannot access computing power without proper authorisation. That authorisation could be bound to licenses issued by government or international regulators and periodically renewed, making it possible to restrict access to AI training simply by withholding new licenses. This is not just theoretical: the Nvidia chips used to train AI, which are crucial for the creation of the most powerful AI models, already contain a secure cryptographic module.
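What follows is a minimal sketch of what such a licensing mechanism could look like, assuming a hypothetical scheme in which a regulator issues a signed, time-limited license that the chip's secure module verifies before enabling large training jobs. HMAC with a shared secret stands in for the asymmetric signature a real scheme would use, and the chip ID, compute budget, and key are invented for illustration.

import hashlib, hmac, json, time

# Stand-in for the verification key a secure on-chip module would hold.
# A real scheme would use an asymmetric signature (the regulator keeps the
# private key); a shared-secret HMAC keeps this sketch self-contained.
REGULATOR_KEY = b"hypothetical-regulator-key"

def issue_license(chip_id: str, max_flops: float, valid_days: int) -> dict:
    # Regulator side: sign a time-limited training license for one chip.
    body = {"chip_id": chip_id, "max_flops": max_flops,
            "expires": time.time() + valid_days * 86400}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return body

def check_license(lic: dict, chip_id: str, requested_flops: float) -> bool:
    # Chip side: allow a training job only with a valid, unexpired license.
    body = {k: v for k, v in lic.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, lic["sig"]):
        return False                      # tampered or forged license
    if lic["chip_id"] != chip_id:
        return False                      # issued for a different chip
    if time.time() > lic["expires"]:
        return False                      # renewal withheld: access lapses
    return requested_flops <= lic["max_flops"]

lic = issue_license("GPU-0001", max_flops=1e25, valid_days=90)
print(check_license(lic, "GPU-0001", requested_flops=5e24))   # True
print(check_license(lic, "GPU-0001", requested_flops=5e25))   # False: over budget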

 

In November 2023, a research team at the Future of Life Institute,4 a nonprofit dedicated to protecting humanity from existential threats, used the security module of an Intel CPU, in cooperation with security startup Mithril Security,5 to develop a demo in which a cryptographic scheme can prevent unauthorised use of an AI model, set a computing threshold to limit the processing load, and remotely disable the model. Complementing this approach, CNAS proposes that government or international regulators define protocols under which models are deployed only after they meet specific safety evaluation criteria.
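Rather than reproduce that demo, the sketch below illustrates, in plain Python, the three controls it combines: access restricted to authorised callers, a compute budget, and a remote kill switch. The GatedModel class, the call budget used as a stand-in for a processing threshold, and the toy model are all hypothetical.

import secrets

class GatedModel:
    # Conceptual wrapper enforcing authorised use, a compute budget,
    # and remote disabling around an AI model.

    def __init__(self, model, max_calls: int):
        self._model = model
        self._token = secrets.token_hex(16)   # handed only to authorised users
        self._calls_left = max_calls          # crude stand-in for a FLOP budget
        self._disabled = False

    @property
    def token(self) -> str:
        return self._token

    def disable(self) -> None:
        # Remote kill switch: further inference requests are refused.
        self._disabled = True

    def infer(self, token: str, x):
        if self._disabled:
            raise PermissionError("model remotely disabled")
        if not secrets.compare_digest(token, self._token):
            raise PermissionError("unauthorised caller")
        if self._calls_left <= 0:
            raise PermissionError("compute threshold exceeded")
        self._calls_left -= 1
        return self._model(x)

# Usage with a toy "model":
gated = GatedModel(model=lambda x: x * 2, max_calls=3)
print(gated.infer(gated.token, 21))   # 42
gated.disable()
# gated.infer(gated.token, 21)        # would now raise PermissionError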

 

While some view hard-coding restrictions into computer hardware as extreme, there is precedent for establishing infrastructure to monitor or control significant technology and enforce international treaties. A good example is modern seismometers, which play a crucial role in monitoring underground nuclear tests and thus support treaties on underground weapon testing.

 

Although the idea of including hardware limitations in AI chips is not purely theoretical, its implementation faces political obstacles and technical challenges; at the same time, it represents a very promising and profitable research area for the stakeholders of the ECS community. Developing hardware controls and limitations for AI would require not only new hardware features in future AI chips but also a new generation of cryptographic software schemes. Additionally, in such a critical domain, this approach requires a strengthened ECS value chain to ensure EU strategic autonomy and avoid being bypassed by other regions of the world with advanced chipmaking capabilities. This area of research and engineering is still uncertain, but it will certainly require significant investment: given the current geopolitical situation, investigating microelectronic controls embedded in AI chips represents a potential EU security priority.
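One example of the kind of cryptographic scheme involved, sketched here under simplifying assumptions, is challenge-response attestation: a model distributor releases weights only to a chip that can prove possession of a key protected by its secure module. The symmetric HMAC construction and the function names are hypothetical simplifications; a deployed scheme would rely on asymmetric keys and certified hardware identities.

import hashlib, hmac, secrets

# Stand-in for a per-chip secret provisioned into the secure module at
# manufacture; in this simplified symmetric version the verifier (e.g. a
# model distributor) knows the same key.
DEVICE_KEY = secrets.token_bytes(32)

def attest(challenge: bytes) -> bytes:
    # Chip side: answer a fresh challenge using the protected device key.
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def release_weights(respond) -> bool:
    # Verifier side: send a nonce and release the model only if the
    # response proves the counterpart holds a genuine, approved chip key.
    nonce = secrets.token_bytes(16)
    expected = hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(respond(nonce), expected)

print(release_weights(attest))                      # True: genuine chip
print(release_weights(lambda n: b"\x00" * 32))      # False: unapproved device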


Download ISSUE 6 of INSIDE Magazine via this link: https://www.inside-association.eu/publications



