Securing the AIoT: Safeguarding AI at the Edge
Edge AI computing has been a trending topic in recent years. The market was already valued at $9 billion in 2020 and is projected to surpass $60 billion by 2030 (Allied Market Research). Initial growth was driven by computer vision systems (particularly in autonomous vehicles – no pun intended); the proliferation of other edge applications, such as advanced noise cancellation and spatial audio, has accelerated it further.
Whether computation takes place in the cloud or down at the edge, data protection remains vital to the safe operation of AI systems. Integrity checking and secure boot/update, for example, are undoubtedly mission-critical. Furthermore, for applications that have a direct bearing on people’s lives (such as smart cars, healthcare, smart locks, and industrial IoT), a successful attack would not only compromise that application’s data but could also endanger lives. Imagine a smart surveillance camera system in which intrusions and hazards go undetected because a hacker has altered the AI model or the input image stream.
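To make the secure-boot idea concrete, here is a minimal Python sketch of the integrity check at its core: comparing a SHA-256 digest of a firmware/model image against a trusted reference. The image contents and variable names are illustrative; in a real secure-boot flow the reference digest would itself be covered by a signature verified against a key anchored in hardware (e.g. an OTP fuse or a secure element).

```python
import hashlib
import hmac

def verify_image(image: bytes, expected_digest: str) -> bool:
    """Compare the SHA-256 digest of an image against a trusted reference.

    This sketch shows only the integrity comparison; a production secure-boot
    chain would verify a signature over the digest with a hardware-anchored key.
    """
    actual = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(actual, expected_digest)

firmware = b"firmware-plus-model-blob"  # placeholder image contents
reference = hashlib.sha256(firmware).hexdigest()
print(verify_image(firmware, reference))         # → True (image intact)
print(verify_image(firmware + b"X", reference))  # → False (tampering detected)
```

If the check fails, the bootloader would refuse to launch the image or fall back to a known-good copy, which is what makes the boot chain "secure" rather than merely checksummed.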
Unlike a standard CPU, which works out of the box, a neural network must first be trained to make correct inferences before deployment. AI system creators must therefore include the training stage when planning for system security. This means that, besides the hardware itself (including the neural network accelerator), several related attack surfaces need to be considered. These include:
- tampering with or theft of training data
- tampering with or theft of the trained AI model
- tampering with or theft of input data
- theft of inference results
- tampering with results used for retraining
Careful implementation of security protocols built on the cryptographic properties of privacy (anti-theft) and integrity (tamper detection) can mitigate such attacks. A diagram of the stages of an Edge AI system, from training through deployment, is shown below, along with the tampering and theft threats listed above:
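As one concrete example of such tamper detection, a deployed AI model can be bound to a device key with a keyed hash, so any modification of the weights is caught before inference. The sketch below uses HMAC-SHA256 from the Python standard library; `DEVICE_KEY` and the function names are illustrative, not from any specific product. HMAC covers integrity only; the privacy (anti-theft) half would additionally require encrypting the model, e.g. with an authenticated cipher such as AES-GCM, which is omitted here.

```python
import hashlib
import hmac

# Hypothetical 256-bit device key. In practice it would be provisioned into
# a secure element or derived from a hardware root of trust, never hard-coded.
DEVICE_KEY = bytes(range(32))

def tag_model(model_bytes: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the model so tampering can be detected."""
    return hmac.new(DEVICE_KEY, model_bytes, hashlib.sha256).digest()

def load_model(model_bytes: bytes, tag: bytes) -> bytes:
    """Refuse to use a model whose tag no longer matches."""
    if not hmac.compare_digest(tag_model(model_bytes), tag):
        raise ValueError("model integrity check failed: possible tampering")
    return model_bytes

model = b"\x00serialized-network-weights\x01"  # placeholder model blob
tag = tag_model(model)
load_model(model, tag)  # passes: model is authentic and unmodified
```

Because the tag depends on a per-device key, an attacker who alters the model (or swaps in a stolen one) cannot forge a matching tag without also extracting the key, which is exactly what hardware-backed key storage is designed to prevent.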