L. Khelifi, M. Mignotte.
International Journal of Image and Data Fusion (IJIDF), Taylor & Francis, vol. 2, no. 2, pp. 99-121, March 2021

Motion segmentation in dynamic scenes is currently dominated by parametric methods based on deep neural networks. The present study explores an unsupervised segmentation approach that can be used in the absence of training data to segment new videos. In particular, it tackles the task of dynamic texture segmentation: by automatically assigning a single class label to each region or group, this task consists of clustering complex phenomena and characteristics that are both spatially and temporally repetitive. We present an effective fusion framework for motion segmentation in dynamic scenes (FFMS). This model is designed to merge several segmentation maps, each containing multiple regions of weak quality, in order to achieve a more accurate final segmentation result. The diverse labelling fields required for the combination process are obtained by a simplified grouping scheme applied to the input video on the basis of three orthogonal planes: xy, yt and xt. Experiments conducted on two challenging datasets (SynthDB and YUP++) show that, contrary to current motion segmentation approaches that require either parameter estimation or a training step, FFMS is significantly faster, easier to code, and relies on few parameters.
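The abstract does not spell out the fusion criterion, so the following is only a minimal illustrative sketch of the general idea of merging several weak labelling fields into one consensus map. It assumes the plane-wise segmentations (e.g. from the xy, yt and xt planes) are already aligned to a common label space and fuses them by a per-pixel majority vote; the function name and the vote-based criterion are hypothetical simplifications, not the FFMS fusion model itself.

```python
import numpy as np

def fuse_segmentations(label_maps):
    """Fuse several per-pixel label maps by majority vote.

    label_maps: list of 2-D integer arrays of identical shape, e.g.
    weak segmentations derived from the xy, yt and xt planes.
    NOTE: labels are assumed consistent across maps (a hypothetical
    simplification; a real fusion criterion must also handle label
    permutations between the input segmentations).
    """
    stack = np.stack(label_maps, axis=0)            # (n_maps, H, W)
    n_labels = int(stack.max()) + 1
    # Count, for every pixel, how many input maps vote for each label.
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=int)
    for k in range(n_labels):
        votes[k] = (stack == k).sum(axis=0)
    # The consensus map keeps the most frequently voted label per pixel.
    return votes.argmax(axis=0)

# Example: three noisy 2x2 binary segmentations fused into one map.
maps = [np.array([[0, 0], [1, 1]]),
        np.array([[0, 1], [1, 1]]),
        np.array([[0, 0], [0, 1]])]
fused = fuse_segmentations(maps)   # majority label at each pixel
```

In this toy example the fused map recovers [[0, 0], [1, 1]], since each pixel keeps the label chosen by at least two of the three input maps.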