Call for Papers: Special Issue on Weakly Supervised Learning on Big Vision Data
Abstract deadline:
Full paper deadline: 2019-10-15
Impact factor: 7.196
Journal difficulty:
CCF rank: B
CAS JCR partition:
• Major category: Computer Science - Q1
• Subcategory: Computer Science, Artificial Intelligence - Q1
• Subcategory: Engineering, Electrical & Electronic - Q1
Overview
Recently, pattern recognition and computer vision have been greatly boosted by deep learning. This progress spans a wide range of applications, including object detection, tracking and recognition, action recognition, face recognition, and person re-identification. Such rapid development is driven by advances in hardware as well as the availability of big data (i.e., large numbers of training samples).
Many computer vision tasks, such as object detection and untrimmed action recognition, place heavy demands on annotation. Labelling the location of each object or action in every image or video of a large-scale training set is costly, so it is highly desirable to find ways to minimize this annotation requirement.
More importantly, while huge numbers of data samples make deep neural networks learnable, most existing algorithms depend heavily on labelled data; acquiring labels is extremely expensive for ever-growing big data, and labelling all samples is ultimately infeasible. How to exploit partially labelled and even unlabelled data to learn feature representations with deep neural networks is therefore critically important. Several attempts have recently been made, including extensions of dictionary learning and clustering-based feature extraction within deep neural networks to achieve end-to-end learning. Multi-instance learning is employed because it suits weakly labelled data, for instance when only the presence of an object in an image is labelled, without its location. Curriculum learning, which progresses from easy samples (e.g., salient candidates in object detection) to hard ones, has also been developed to address the weakly supervised learning challenge. Related approaches such as distillation methods and teacher-student models have proved useful for transferring an existing strong network to one of the same structure trained on only limited data. More recently, deep adversarial networks and deep capsule networks have become important approaches for unsupervised learning, offering a promising way to exploit unlabelled data in deep neural networks effectively.
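The teacher-student distillation idea mentioned above can be illustrated with a temperature-softened matching loss. This is a minimal NumPy sketch, not the implementation of any particular paper; the temperature value and toy logits are assumptions for illustration:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student predictions.

    Scaled by T**2 so gradient magnitudes stay comparable across
    temperatures, following the common distillation formulation.
    """
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return float(T**2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Toy example: a student that mimics the teacher incurs a lower loss
# than one whose predictions are reversed.
teacher  = np.array([[10.0, 5.0, 1.0]])
aligned  = np.array([[ 9.0, 4.5, 1.0]])
reversed_ = np.array([[ 1.0, 5.0, 10.0]])
assert distillation_loss(aligned, teacher) < distillation_loss(reversed_, teacher)
```

In a full training loop this term would typically be combined with a standard cross-entropy loss on whatever ground-truth labels are available, letting the soft teacher targets supervise the unlabelled or weakly labelled portion of the data.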
In addition, while most big-data applications of deep neural networks address classification problems (e.g., object detection and categorization, action recognition), an even more important application of big data is forecasting what will happen, or what the trend will be, in future periods based on previously observed data and prior knowledge. Unsupervised learning is important for such predictive learning, since prediction relies on vast amounts of historical data that cannot feasibly be labelled in full and are therefore mostly unlabelled.
We refer to the setting of learning under limited labels or limited annotation as weakly supervised learning. Research on this problem is still in its early stages. This special issue aims to promote research on weakly supervised learning on big data, and we solicit high-quality papers on both theory and applications. Topics of particular interest include, but are not limited to:
Unsupervised Learning
Semi-supervised Learning
Deep Clustering
Adversarial Learning
Predictive Learning
Distillation Modelling
Online Learning
Active Learning
Transfer Learning
Related applications in computer vision and pattern recognition