Adversarial autoencoder with attention-driven loss
Abstract
Most current video anomaly detection approaches are based on frame reconstruction or prediction. In this setting, the inter-class data imbalance problem is dominated by the imbalance between foreground objects and the static background. To alleviate this foreground-background imbalance, we augment an Adversarial AutoEncoder network with an Attention-Driven Loss. Specifically, we incorporate attention maps into the reconstruction phase of the network so that the region of interest (the foreground) is assigned greater weight than the background region, yielding different loss weights for foreground and background accordingly. In our experiments, we focus only on the reconstruction phase of the network. We conduct experiments on two popular benchmarks (the UCSD and Avenue datasets) and achieve results that are competitive with other state-of-the-art work: our model achieves 88.0% AUC on Avenue, 80.2% AUC on UCSD Ped1, and 92.8% AUC on UCSD Ped2.
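The core idea of the attention-driven weighting can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch formulation, not the paper's exact loss: it assumes the attention map is normalized to [0, 1] and that the per-pixel weight takes the simple form 1 + λ·A, so background pixels keep weight 1 while attended foreground pixels are up-weighted. The names `attention_driven_recon_loss` and `lambda_fg` are illustrative.

```python
import torch

def attention_driven_recon_loss(recon, target, attention_map, lambda_fg=1.0):
    """Attention-weighted reconstruction loss (illustrative sketch).

    Assumed formulation: pixels with high attention (foreground) contribute
    more to the loss than low-attention (background) pixels.

    Args:
        recon:         reconstructed frames, shape (B, C, H, W)
        target:        ground-truth frames,  shape (B, C, H, W)
        attention_map: per-pixel attention in [0, 1], shape (B, 1, H, W)
        lambda_fg:     extra weight given to fully attended pixels
    """
    # Per-pixel squared reconstruction error
    per_pixel_err = (recon - target) ** 2

    # Weight map: background keeps weight 1; foreground pixels receive
    # up to (1 + lambda_fg) depending on their attention value.
    weights = 1.0 + lambda_fg * attention_map

    return (weights * per_pixel_err).mean()
```

Under this reading, the weighted term would simply replace the plain L2 reconstruction loss of the autoencoder's generator, while the adversarial objective is left unchanged.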