Computer Science > Machine Learning

Title: Targeted collapse regularized autoencoder for anomaly detection: black hole at the center

Abstract: Autoencoders have been extensively used in the development of recent anomaly detection techniques. The premise of their application is that, after training the autoencoder on normal data, anomalous inputs will exhibit a significant reconstruction error, enabling a clear differentiation between normal and anomalous samples. In practice, however, autoencoders can generalize beyond the normal class and achieve a small reconstruction error on some anomalous samples. To improve performance, various techniques propose additional components and more sophisticated training procedures. In this work, we propose a remarkably straightforward alternative: instead of adding neural network components, involved computations, and cumbersome training, we complement the reconstruction loss with a computationally light term that regulates the norm of representations in the latent space. The simplicity of our approach minimizes the need for hyperparameter tuning and customization for new applications, which, paired with its permissive data-modality constraints, enhances the potential for successful adoption across a broad range of applications. We test the method on various visual and tabular benchmarks and show that it matches, and frequently outperforms, more complex alternatives. We further demonstrate that incorporating this idea into state-of-the-art methods can improve their performance even more. We also provide a theoretical analysis and numerical simulations that illuminate the process that unfolds during training and how it aids anomaly detection. This mitigates the black-box nature of autoencoder-based anomaly detection algorithms and offers an avenue for further investigation of advantages, failure cases, and potential new directions.
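
A minimal sketch of the core idea described in the abstract (illustrative PyTorch code, not the authors' implementation; the encoder/decoder architecture, the L2 form of the latent-norm penalty, and the weight lam are assumptions made for the example):

import torch
import torch.nn as nn

class NormRegularizedAE(nn.Module):
    # Autoencoder whose training loss adds a light penalty on latent-representation norms.
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def loss_fn(x, x_hat, z, lam=0.1):
    # Reconstruction term, as in a standard autoencoder.
    recon = ((x - x_hat) ** 2).mean()
    # Light regularizer pulling latent representations toward the origin
    # ("targeted collapse"); the L2 norm and lam = 0.1 are assumed values.
    return recon + lam * z.norm(dim=1).mean()

At test time, the reconstruction error (possibly combined with the latent norm) would serve as the anomaly score, with higher scores flagging anomalous inputs.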
Comments: 18 pages, 4 figures, 8 tables
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Neurons and Cognition (q-bio.NC); Machine Learning (stat.ML)
Cite as: arXiv:2306.12627 [cs.LG]
  (or arXiv:2306.12627v2 [cs.LG] for this version)

Submission history

From: Amin Ghafourian [view email]
[v1] Thu, 22 Jun 2023 01:33:47 GMT (1068kb,D)
[v2] Wed, 27 Mar 2024 23:54:26 GMT (1792kb,D)
