Mathematical models and optimization methods have long played a critical role in the design of signal processing and analysis systems. In the past decade, data-driven approaches, especially deep learning, have been widely adopted and have achieved state-of-the-art results in various signal and image processing applications; this, however, comes at the cost of the interpretability and explainability of the model and its decisions. Despite their unprecedented performance and wide adoption, black-box deep learning models have major shortcomings: training them requires large amounts of data and (ground-truth) annotations and consumes significant computational and power resources. Moreover, the performance of deep learning models is sensitive to deviations between the training and the test set, for example due to the presence of (adversarial) noise. This special issue welcomes contributions related to innovative designs of model-aware deep learning models, novel approaches to training such models (including unsupervised, self-supervised and semi-supervised learning), and advanced topics concerning such models (including robustness, explainability, meta-learning and out-of-distribution detection).
Guest editors:
Dr. Emilie Chouzenoux, Inria Saclay, emilie.chouzenoux@inria.fr
Dr. Nikos Deligiannis, Vrije Univ. Brussels, ndeligia@etrovub.be
Prof. Aleksandra Pizurica, Univ. Ghent, Aleksandra.Pizurica@ugent.be
Manuscript submission information:
The special issue topic is at the forefront of developments in interpretable AI and at the intersection of signal processing and machine learning. In recent years, various important contributions have shown that signal processing concepts such as low-complexity models (e.g., sparsity and low-rankness) and their associated algorithms can fundamentally reshape the design of deep learning models. Such models can incorporate knowledge about the data or the task at hand, offering advantages in performance and interpretability. Despite the progress in the field and the attention it has received, various important problems remain elusive, including the robustness of these models when the data is contaminated by (adversarial) noise, their post-hoc explainability, and their efficient (unsupervised) training. This special issue aims to bridge this gap by welcoming contributions covering, among others, the following topics:
Innovative design of model-aware deep learning models
New model-aware deep learning architectures, including graph deep learning models and transformers
Generative model-aware deep learning
Out-of-distribution detection for model-aware deep learning
Model-aware deep learning models by algorithmic unrolling
Robust model-aware deep learning
Model-aware deep learning for meta-learning, zero-shot and few-shot learning
Unsupervised, self-supervised and semi-supervised model-aware deep learning
Explainability and interpretability for model-aware deep learning
Distributed and federated model-aware deep learning
Applications of model-aware deep learning in image/video processing, signal processing, computer vision, big data, and natural language processing
Important dates:
Submission deadline: 15 March 2024
Deadline for acceptance: August 2024