News
New article accepted at the International Journal On Advances in Intelligent Systems
The article Continuous Feature Networks: A Novel Method to Process Irregularly and Inconsistently Sampled Data With Position-Dependent Features by Birk Martin Magnussen, Claudius Stern, and Bernhard Sick was accepted for publication in the International Journal On Advances in Intelligent Systems. The abstract:
Continuous kernels are a recent development in convolutional neural networks. Such kernels are used to process data sampled at different resolutions as well as irregularly and inconsistently sampled data. Convolutional neural networks have the property of translational invariance (i.e., features are detected regardless of their position in the measurement domain), which is unsuitable if the position of detected features is relevant for the prediction task. However, the ability of continuous kernels to process irregularly sampled data remains desirable. This article introduces the continuous feature network, a novel method that uses continuous kernels to detect global features at absolute positions in the data domain. Using a case study on multiple spatially resolved reflection spectroscopy data, which is sampled irregularly and inconsistently, we show that the proposed method can process such data directly, without the additional preprocessing or augmentation that comparable methods require. In addition, we show that the proposed method achieves higher prediction accuracy than a comparable network on a dataset with position-dependent features. Furthermore, the method is more robust to missing data than a benchmark network that relies on data interpolation, allowing it to adapt to sensors in which individual light emitters or detectors fail, without the need for retraining. The article shows how these capabilities stem from the continuous kernels used and how the number of trainable kernels affects the model. Finally, the article proposes a way to use the introduced method as the basis for an interpretable model suitable for explainable AI.
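To give a sense of the idea the abstract builds on, the following is a minimal sketch, not the authors' implementation: a small MLP maps absolute sample positions to kernel weights, so the same module can process measurements sampled at arbitrary, inconsistent positions without interpolation. All names and hyperparameters (ContinuousKernel, the hidden size, the example positions) are illustrative assumptions.

```python
# Minimal sketch of a position-dependent continuous kernel (illustrative only).
import torch
import torch.nn as nn

class ContinuousKernel(nn.Module):
    """Maps continuous sample positions to kernel weights via a small MLP."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # positions: (n_samples,) absolute positions in the data domain
        return self.mlp(positions.unsqueeze(-1)).squeeze(-1)  # (n_samples,)

class GlobalContinuousFeature(nn.Module):
    """One global feature: a weighted sum of all samples, with weights
    evaluated at each sample's absolute position in the domain."""
    def __init__(self):
        super().__init__()
        self.kernel = ContinuousKernel()

    def forward(self, positions: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        w = self.kernel(positions)   # one weight per sample position
        return (w * values).sum()    # scalar global feature

# Usage: the same module handles two measurements taken on different,
# irregular sampling grids -- no resampling or interpolation needed.
feature = GlobalContinuousFeature()
pos_a = torch.tensor([0.05, 0.30, 0.72, 0.98])  # irregular grid A
val_a = torch.randn(4)
pos_b = torch.tensor([0.10, 0.55, 0.90])         # different grid B
val_b = torch.randn(3)
print(feature(pos_a, val_a), feature(pos_b, val_b))
```

Because the weights are evaluated at absolute positions rather than relative offsets, the resulting feature is position-dependent rather than translation-invariant, which is the distinction the abstract draws between continuous feature networks and ordinary convolution.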