Optimizing Computational Resources for Edge Intelligence Through Model Cascade Strategies

Abstract

As the number of interconnected devices grows and more artificial intelligence (AI) applications built upon the Internet of Things (IoT) flourish, so does the environmental cost of the computational resources needed to transmit and process all the generated data. Optimizing AI applications is therefore a key factor for the sustainable development of IoT solutions. Paradigms such as Edge Computing are increasingly proposed in the IoT field as an alternative to delegating all computation to the Cloud. However, bringing computation to the local stage is constrained by the limited resources of the devices hosted at the Edge of the network. For this reason, this work presents an approach that reduces the complexity of supervised learning algorithms at the Edge. Specifically, it decomposes complex models into multiple simpler classifiers that form a cascade of discriminative models. The suitability of this proposal in a human activity recognition (HAR) context is assessed by comparing the performance of three variations of the strategy. Furthermore, its computational cost is analyzed on several resource-constrained Edge devices in terms of processing time. The experimental results show that this approach can outperform other ensemble methods, namely the Stacking technique. Moreover, it reduces the computational cost of the classification tasks by more than 60% without a significant accuracy loss (around 3.5%). These results highlight the potential of this strategy to reduce resource and energy requirements in IoT architectures and to promote more efficient and sustainable classification solutions.
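To make the cascade idea concrete before the detailed exposition, the sketch below shows one common realization of a cascade of discriminative models: stages are ordered from cheapest to most complex, and each stage only forwards samples it cannot classify confidently to the next one, so most inputs exit early on an inexpensive model. This is a minimal illustration of the general pattern, not the paper's exact construction; the scikit-learn estimators, the `predict_proba` confidence measure, and the 0.9 threshold are all assumptions chosen for the example.

```python
# Hypothetical early-exit cascade: a sketch of the general idea under the
# assumptions stated above, not this paper's specific method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier


class CascadeClassifier:
    """Chain of simple-to-complex classifiers with confidence-based early exit."""

    def __init__(self, stages, threshold=0.9):
        self.stages = stages        # ordered from cheapest to most complex
        self.threshold = threshold  # minimum confidence required to stop early

    def fit(self, X, y):
        # Simplest scheme: train every stage on the full training set.
        for stage in self.stages:
            stage.fit(X, y)
        return self

    def predict(self, X):
        X = np.asarray(X)
        preds = np.empty(len(X), dtype=self.stages[0].classes_.dtype)
        pending = np.arange(len(X))  # samples not yet confidently classified
        for i, stage in enumerate(self.stages):
            proba = stage.predict_proba(X[pending])
            confident = proba.max(axis=1) >= self.threshold
            if i == len(self.stages) - 1:
                confident[:] = True  # the last stage must decide everything left
            labels = stage.classes_[proba.argmax(axis=1)]
            preds[pending[confident]] = labels[confident]
            pending = pending[~confident]
            if pending.size == 0:
                break
        return preds


if __name__ == "__main__":
    # Synthetic stand-in for HAR data: easy samples exit at the cheap stages,
    # only hard ones reach the random forest.
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=500, n_features=20, n_classes=3,
                               n_informative=10, random_state=0)
    cascade = CascadeClassifier([
        LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(max_depth=5),
        RandomForestClassifier(n_estimators=100),
    ])
    cascade.fit(X, y)
    print(cascade.predict(X[:10]))
```

Under this scheme, the expected inference cost per sample drops because the expensive final model is invoked only for the fraction of inputs on which the cheaper stages are uncertain, which is the mechanism behind the cost reductions reported in the abstract.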