Dynamic texture (DT) segmentation is the problem of partitioning a video into regions whose visual patterns repeat in both space and time, assigning a unique label to each region. Although this problem is highly complex, it has recently attracted considerable interest. This paper presents a simple and effective fusion framework for dynamic texture segmentation, whose objective is to combine multiple weak region-based segmentation maps into a single, more accurate result. The label fields to be fused are produced by a simple clustering technique applied to the input video on three orthogonal planes (xy, xt, and yt), using as features the values of the requantized local binary pattern (LBP) histogram computed around each pixel to be classified. Promising preliminary experimental results have been achieved by our method on the challenging SynthDB dataset. Compared to existing dynamic texture segmentation approaches that require parameter estimation or classifier training, our method is simple, easy to implement, and has few parameters.
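The two ingredients described above — LBP codes computed on the three orthogonal planes of the video volume, and a per-pixel fusion of several label maps — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the basic 8-neighbour LBP operator, the requantization/histogram step, and the clustering stage are omitted or simplified, and the majority-vote fusion rule shown here is an assumption (the abstract does not specify the fusion operator and the sketch assumes the input label maps share an aligned label space).

```python
import numpy as np

def lbp_codes(plane):
    """Minimal 8-neighbour LBP on a 2D array; border pixels are dropped."""
    c = plane[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbours):
        nb = plane[1 + dy:plane.shape[0] - 1 + dy,
                   1 + dx:plane.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit  # one bit per neighbour
    return codes

def lbp_top(video):
    """LBP codes on the three orthogonal planes of a (T, H, W) volume:
    xy (each frame), xt (each fixed row y), yt (each fixed column x)."""
    t, h, w = video.shape
    xy = np.stack([lbp_codes(video[i]) for i in range(t)])
    xt = np.stack([lbp_codes(video[:, j, :]) for j in range(h)])
    yt = np.stack([lbp_codes(video[:, :, k]) for k in range(w)])
    return xy, xt, yt

def fuse_majority(label_maps):
    """Per-pixel majority vote over aligned integer label maps
    (an assumed fusion rule, for illustration only)."""
    stacked = np.stack(label_maps)              # (n_maps, H, W)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)                 # winning label per pixel
```

In practice each plane's LBP codes would be turned into local requantized histograms, clustered into a label field, and the resulting label fields fused; the functions above only show the feature-extraction geometry and one plausible fusion rule.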