A Hybrid Evaluation Methodology for Human Activity Recognition Systems

Abstract

Evaluating human activity recognition systems usually entails expensive and time-consuming methodologies in which experiments are run with human subjects, with the attendant ethical and legal issues. We propose a hybrid evaluation methodology to overcome these problems. Central to the hybrid methodology are user surveys and a synthetic dataset generator tool. Surveys capture how different users perform activities of daily living, while the synthetic dataset generator creates properly labelled activity datasets modelled on the information extracted from the surveys. Sensor noise, varying time lapses and erratic user behaviour can also be simulated with the tool. The hybrid methodology is shown to offer significant advantages that allow researchers to carry out their work more efficiently.