
Marek Zylinski et al.

Artificial intelligence (AI) on an edge device has enormous potential, including advanced signal filtering, event detection, optimization of communications and data compression, improved device performance, advanced on-chip process control, and enhanced energy efficiency. In this tutorial, we provide a brief overview of AI deployment on edge devices and describe the process of building and deploying a neural network model on a digital edge device. The primary challenge when deploying an AI model in circuits is fitting the model within limited resources: the restricted memory capacity and finite computational power of IoT circuits constrain the use of deep neural networks on such devices. We address this issue by describing methods for optimizing neural network models. Part of the tutorial also covers the deployment of deep neural networks into logic circuits, since significantly higher computational speed can be attained by shifting the AI paradigm from neural networks to learning automata algorithms, that is, from arithmetic-based calculations to logic-based approaches. This shift facilitates the deployment of AI onto Field-Programmable Gate Arrays (FPGAs). The last part of the tutorial covers the emerging topic of in-memory computation of the multiply-accumulate operation. Transferring computation to analog memories has the potential to improve speed and energy efficiency over digital architectures by several orders of magnitude. It is our hope that this tutorial will assist researchers and engineers in integrating AI models on edge devices, facilitating rapid and reliable implementation.
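To make the model-optimization step concrete, the sketch below shows one widely used technique, post-training quantization, using the TensorFlow Lite converter as one possible toolchain. This is an illustrative example rather than code from the tutorial itself, and the toy network stands in for whatever model is actually deployed; the converter shrinks a 32-bit floating-point network into a compact flat-buffer model suitable for a microcontroller-class device.

import tensorflow as tf

# Hypothetical toy model standing in for the trained network to deploy.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite with default optimizations,
# which include post-training weight quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the compact model for deployment on the edge device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

Quantization of this kind trades a small loss in numerical precision for a large reduction in memory footprint and arithmetic cost, which is usually the deciding factor on resource-constrained IoT hardware.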