The objective of this 11-month plan, running from 1 February 2016 to 1 January 2017, is to capture and retexture a 3D model of a garment. First, the 3D garment model must be acquired from real scenes using state-of-the-art approaches and devices; we will use the Kinect v2™ sensor for this purpose. To expose the garment surface, the garment will be placed on either a human or a mannequin. Once the data have been captured, we will segment the garment surface and the body surface, respectively. Note that these tasks are not trivial and require different approaches, since a garment surface may consist of multiple parts. Once the garment model has been captured and segmented, distinctive points will be extracted from the garment surfaces. This step is particularly complex because garments vary widely in texture, color, and shape. Moreover, our approach will handle occlusions caused by the high variability of garment types (e.g., key points at the ankles can be matched to the equivalent points on trousers, but no such match exists for shorts). Once points have been extracted from both models, we will establish correspondences between them, taking deformations of the human body into account.
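To illustrate the final correspondence step, the sketch below shows one simple, commonly used strategy: mutual nearest-neighbour matching between two 3D key-point sets, with a distance threshold so that points lacking a counterpart (e.g., ankle key points when the garment is a pair of shorts) are simply left unmatched. This is a minimal illustration of the idea, not the project's actual algorithm; the function name, the toy coordinates, and the threshold value are all assumptions introduced here for demonstration.

```python
import numpy as np

def mutual_nn_correspondences(pts_a, pts_b, max_dist):
    """Mutual nearest-neighbour matching between two sets of 3D key points.

    A pair (i, j) is kept only if i's nearest point in pts_b is j, j's
    nearest point in pts_a is i, and their distance is at most max_dist.
    Points without such a partner stay unmatched, which is how occluded
    or absent garment parts are handled in this sketch.
    """
    # Pairwise Euclidean distances, shape (len(pts_a), len(pts_b))
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)  # best partner in B for each point of A
    nn_ba = d.argmin(axis=0)  # best partner in A for each point of B
    matches = []
    for i, j in enumerate(nn_ab):
        # Keep only mutual matches that are also close enough.
        if nn_ba[j] == i and d[i, j] <= max_dist:
            matches.append((i, j))
    return matches

# Toy example: three trouser key points vs. two body key points.
garment = np.array([[0.0, 0.0, 0.0],    # hip
                    [0.0, 0.5, 0.0],    # knee
                    [0.0, 1.0, 0.0]])   # ankle: no close partner below
body = np.array([[0.01, 0.0, 0.0],
                 [0.0, 0.52, 0.0]])

print(mutual_nn_correspondences(garment, body, max_dist=0.1))
# → [(0, 0), (1, 1)]  (the ankle point is correctly left unmatched)
```

In the real pipeline the matched points would additionally be filtered against a deformation model of the human body, since rigid distance thresholds alone cannot account for pose changes.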