This differs from the traditional IoT-AI application framework, where data generated by connected devices is transmitted to a backend cloud system, processed by AI algorithms, and the resulting control actions are sent back across the network to the connected devices.
Instead of running at the backend, AI models are deployed onto processors or FPGA chips inside the connected devices operating at the network edge.
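A back-of-envelope calculation illustrates why this architectural shift matters. The latency figures below are illustrative assumptions, not measured benchmarks:

```python
# Latency comparison: cloud round trip vs. on-device inference.
# All timing figures are illustrative assumptions, not benchmarks.

CLOUD_RTT_MS = 60.0    # assumed network round trip to a backend cloud
CLOUD_INFER_MS = 2.0   # assumed inference time on a powerful cloud server
EDGE_INFER_MS = 8.0    # assumed inference time on a low-power edge chip

cloud_latency = CLOUD_RTT_MS + CLOUD_INFER_MS  # data out, decision back
edge_latency = EDGE_INFER_MS                   # no network hop at all

print(f"cloud path: {cloud_latency:.0f} ms per decision")
print(f"edge path:  {edge_latency:.0f} ms per decision")
print(f"speedup:    {cloud_latency / edge_latency:.1f}x")
```

Even with a faster chip at the backend, the network round trip dominates the decision latency, which is exactly the overhead edge deployment removes.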
Features of true edge AI
Examples of Edge AI include autonomous vehicles, smart traffic lights and the wider Internet of Vehicles (IoV) network where vehicles, traffic lights and emergency services can mediate between each other to coordinate emergency routes and diversions when necessary.
This coordination demands highly efficient processing and accelerated, real-time data-driven decision making. It has become possible in recent years for three key reasons:
- Advanced machine learning models have proven able to learn and generalize well on narrowly scoped tasks. Most edge AI applications are designed for specific computing tasks, so AI techniques can model system behavior with high accuracy.
- Advancements in parallel processing chips, following Moore’s Law, have enabled energy-efficient deployment of relatively complex machine learning models on low-power devices connected at the network edge.
- IoT devices have been adopted at large scale. Thanks to their high-speed connectivity and low energy consumption, these devices can run AI algorithms directly at the edge instead of transmitting real-time data streams across latency-bound cloud networks.
Intelligent edge vs edge intelligence
With these drivers, the Intelligent Edge (the infrastructure) meets Edge Intelligence (the actual applications of intelligence on edge devices). The term edge AI is used synonymously with edge intelligence. The following key elements drive the use of Intelligent Edge capabilities:
AI applications on edge
AI applications draw data from multiple devices on the network edge and embed intelligence into the automation functionality served by the connected devices.
AI inference on edge
Pre-trained AI models infer a decision or control action from the real-time data streams generated by these devices, with inference taking place in real time on the device itself.
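A minimal sketch of this idea: a tiny pre-trained classifier embedded on the device maps live sensor readings to a control action without any network call. The weights, features and threshold here are hypothetical stand-ins for a model trained offline:

```python
import math

# Sketch of on-device inference: a tiny pre-trained logistic model maps
# live sensor readings to a yes/no control action. Weights, features and
# threshold are hypothetical, standing in for an offline-trained model.

WEIGHTS = [0.8, -1.2, 0.5]   # pretend these were learned offline
BIAS = -0.3

def infer(features):
    """Return True if the model decides to trigger the control action."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    prob = 1.0 / (1.0 + math.exp(-z))  # logistic activation
    return prob > 0.5

# e.g. features = [temperature_norm, vibration_norm, load_norm]
reading = [0.9, 0.1, 0.4]
action = infer(reading)
```

Because the model is fixed and small, inference is a handful of multiply-adds, which is why it fits comfortably on a low-power processor or FPGA.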
AI edge computing architecture
The networking systems and architecture are adapted to support Edge AI applications. The endpoints are not merely sensors but low-power computing systems that can run AI models, connected to an automation system that performs the required control actions intelligently.
AI training on the edge
The models can be programmed to train on and adapt to new data streams generated by sensor endpoints. This approach requires embedding the models onto dedicated FPGA devices or smart devices with onboard computing systems.
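One simple way to realize this adaptation is an online stochastic gradient update applied as each new labelled sample streams in. The model, data and learning rate below are hypothetical; this is a sketch of the pattern, not a production training loop:

```python
# Sketch of on-device adaptation: an online SGD update applied to a simple
# linear model as new labelled samples stream in from a sensor endpoint.
# The stream contents and learning rate are hypothetical.

def sgd_step(w, x, y, lr=0.1):
    """One stochastic gradient step for squared error on a single sample."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [wi - lr * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]  # model starts untrained on the device
stream = [([1.0, 1.0], 1.0), ([1.0, -1.0], 3.0)] * 200
for x, y in stream:
    w = sgd_step(w, x, y)
# w drifts toward the weights that fit the stream, roughly [2.0, -1.0]
```

Each update touches only one sample, so the memory cost is constant no matter how long the device runs, which is what makes continual learning feasible on constrained hardware.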
AI edge optimization
The network edge and smart devices involved in Edge AI applications are optimized for:
- Computing performance
- Energy efficiency
- End-user privacy
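One widely used optimization for the first two goals is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, cutting the model's memory footprint by roughly 4x. The weight values below are illustrative, and this is a simplified single-scale scheme:

```python
# Sketch of one common edge optimization: post-training quantization of
# float weights to 8-bit integers. A single linear scale is used here for
# simplicity; the weight values are illustrative.

def quantize(weights):
    """Map floats to int8 range [-127, 127] with one linear scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.82, -1.27, 0.05, 0.31]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# each restored value is close to the original, but is now stored in
# 1 byte instead of 4, at the cost of a small rounding error
```

The tradeoff is a bounded rounding error per weight (at most half a quantization step), which small models usually tolerate with little accuracy loss.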
Challenges with edge AI
All of this sounds interesting, except that most networking and IoT devices at the network edge can only be used as a data source.
Machine learning models deployed on smart devices work well when the model is small and the AI task is limited to a simple classification problem. As the model grows in complexity, Edge AI devices face a steep tradeoff between accuracy and resource demands. The number of parameters that must be learned and configured on a computing chip can grow exponentially for AI applications designed to solve complex problems, which means the device must be equipped with a capable processing chip that consumes more energy.
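A quick calculation shows how parameter count translates into memory footprint, and why complex models outgrow low-power chips. It assumes uncompressed float32 weights (4 bytes each); the model sizes are illustrative:

```python
# Back-of-envelope look at how parameter count drives model memory
# footprint. Assumes uncompressed float32 weights (4 bytes each);
# the chosen parameter counts are illustrative.

BYTES_PER_PARAM = 4  # float32

def model_size_mb(params):
    return params * BYTES_PER_PARAM / 1_000_000

for params in (100_000, 10_000_000, 1_000_000_000):
    print(f"{params:>13,} parameters -> {model_size_mb(params):>8,.1f} MB")
```

A hundred-thousand-parameter model fits in well under a megabyte, while a billion-parameter model needs gigabytes of memory before accounting for activations, putting it far beyond a typical microcontroller-class device.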
Ideally, an intelligent machine must be able to adapt and improve in its learning capacity. Edge AI applications can take advantage of distributed edge computing devices, each generating a variety of useful information to train the AI models. However, employing a federated learning approach presents its own set of challenges:
- First, the underlying hardware devices must be equipped with processing chips capable of embedding and reconfiguring relatively large AI models.
- Second, the devices must be able to communicate and interoperate using standardized protocols and specifications. This can be challenging when IoT devices are designed by competing proprietary vendors; a heterogeneous network architecture means significant manual configuration may be required for edge AI applications to work at scale.
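The federated approach itself can be sketched in a few lines. In federated averaging, each device trains on its own private data and only the model weights travel to a coordinator for averaging; raw data never leaves the device. The toy one-parameter model and data below are hypothetical:

```python
# Minimal sketch of federated averaging (FedAvg): each edge device trains
# locally on its own data, and only model weights are averaged by a
# coordinator. Raw data never leaves a device. Toy hypothetical setup.

def local_update(w, samples, lr=0.1):
    """A few SGD steps for a 1-parameter model y = w * x on local data."""
    for x, y in samples:
        grad = (w * x - y) * x
        w -= lr * grad
    return w

def fed_avg(global_w, device_data, rounds=20):
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in device_data]
        global_w = sum(local_ws) / len(local_ws)  # aggregate weights only
    return global_w

# Two devices whose private data both follow y = 2 * x
device_data = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (0.5, 1.0)],
]
w = fed_avg(0.0, device_data)  # converges toward 2.0
```

Even this toy version surfaces the two challenges above: every device must be able to run `local_update` on its own chip, and all devices must agree on the wire format for exchanging weights with the coordinator.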
(Explore the ethics of artificial intelligence.)
Adopting edge AI doesn’t always go smoothly
While Edge AI is destined to play a crucial role in the future of AI adoption, it will also face the same adoption challenges as any conventional AI and digital technology.
Edge AI applications are developed for computing devices with limited capacity and hardware designed for specific tasks, with the entire application running between devices without interacting with a backend cloud network.
This means that data governance, security and user privacy capabilities must also be embedded into the AI system itself; in practice, the intelligent edge cannot be truly separated from a centralized computing environment.