Adversarial autoencoder with attention driven loss - Université de technologie de Troyes
Conference paper, Year: 2022

Adversarial autoencoder with attention driven loss

Abstract

Current video anomaly detection approaches are based on frame reconstruction or prediction. In this setting, the inter-class data imbalance problem is dominated by the imbalance between foreground objects and the static background. To alleviate this foreground-background imbalance, we augment the Adversarial AutoEncoder network with an Attention Driven Loss. More specifically, we incorporate attention maps into the reconstruction phase of the network so that the region of interest (the foreground) is assigned greater weight than the background region; the augmentation thus weights foreground and background differently. Our experiments focus only on the reconstruction phase of the network and demonstrate the effectiveness of the method. We evaluate on two popular benchmarks (the UCSD and Avenue datasets) and achieve competitive results compared with other state-of-the-art work: 88.0% AUC on Avenue, 80.2% AUC on Ped1, and 92.8% AUC on Ped2.
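The following is a minimal sketch, not the authors' implementation, of how an attention map can weight a reconstruction loss so that foreground pixels contribute more than background pixels. The function name, the linear blending of two scalar weights, and the use of MSE are illustrative assumptions; the paper's exact loss formulation may differ.

```python
# Illustrative sketch (assumed, not the paper's code): attention-weighted
# reconstruction loss in PyTorch. The attention map is assumed to lie in
# [0, 1], with high values on foreground regions.
import torch

def attention_weighted_recon_loss(recon, target, attention_map,
                                  fg_weight=2.0, bg_weight=1.0):
    """Pixel-wise MSE where high-attention (foreground) pixels are
    weighted more heavily than low-attention (background) pixels."""
    # Blend the background and foreground weights per pixel according
    # to the attention value, then average the weighted squared error.
    weights = bg_weight + (fg_weight - bg_weight) * attention_map
    return (weights * (recon - target) ** 2).mean()

# Example usage with random tensors shaped like a batch of video frames.
if __name__ == "__main__":
    frames = torch.rand(4, 3, 128, 128)   # ground-truth frames
    recon = torch.rand(4, 3, 128, 128)    # autoencoder reconstructions
    attn = torch.rand(4, 1, 128, 128)     # attention maps in [0, 1]
    print(attention_weighted_recon_loss(recon, frames, attn).item())
```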
No file deposited

Dates and versions

hal-04439885, version 1 (05-02-2024)

Identifiers

Cite

Marie Njaime, Charbel Chahla, Hichem Snoussi, Fahed Abdallah. Adversarial autoencoder with attention driven loss. APPLICATIONS OF MATHEMATICS IN ENGINEERING AND ECONOMICS (AMEE’22): Proceedings of the 48th International Conference “Applications of Mathematics in Engineering and Economics”, Jun 2022, Sofia, Bulgaria. pp.130002, ⟨10.1063/5.0179220⟩. ⟨hal-04439885⟩