In this project, possible solutions to the above problem will be investigated as follows. As the first step, various methods for extracting feature and corner points, including Speeded Up Robust Features (SURF), Features from Accelerated Segment Test (FAST), the Harris-Stephens algorithm, the minimum eigenvalue algorithm, Binary Robust Invariant Scalable Keypoints (BRISK), Maximally Stable Extremal Regions (MSER), Fast Retina Keypoint (FREAK) and Histograms of Oriented Gradients (HOG), will be investigated and compared, in order to arrive at a solution that maximizes the number of features extracted from a given image. Next, the feature extraction and feature matching algorithms will be further refined, in order to increase the likelihood of detecting a true match between every pair of images. Because a given feature may appear at different scales or orientations in different images, finding correspondences demands a more careful analysis of the similarities: the number of octaves searched for features, as well as the number of scale levels considered within each octave, must be increased and tuned, and possible changes in orientation must be handled, so that the algorithm becomes robust to rigid motion of objects with respect to the camera. Finally, the misalignment between the triangulated points needs to be dealt with, which requires limiting feature extraction to a certain region of interest (ROI), in order to avoid the noise caused by the lens distortion present towards the edges and corners of every image.
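To illustrate two of the corner detectors named above, the following sketch computes both the Harris-Stephens score and the minimum-eigenvalue (Shi-Tomasi style) score from the same structure tensor, using plain NumPy. The parameter names (`k`, `win`) and the simple box-filter windowing are assumptions for illustration, not choices taken from this project; a production comparison would use a library implementation.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def corner_responses(img, k=0.04, win=3):
    """Harris-Stephens and minimum-eigenvalue corner responses for a
    grayscale image. Illustrative sketch: k and win are assumed defaults."""
    # Image gradients via central differences (np.gradient: axis 0 = rows).
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # Sum gradient products over a win x win neighbourhood.
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        return sliding_window_view(ap, (win, win)).sum(axis=(2, 3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy ** 2          # determinant of the structure tensor
    tr = Sxx + Syy                      # trace of the structure tensor
    harris = det - k * tr ** 2          # Harris-Stephens score
    # Smaller eigenvalue of the 2x2 tensor; clamp for numerical safety.
    min_eig = tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    return harris, min_eig
```

Both scores are large only where the structure tensor has two large eigenvalues, i.e. at corners; the difference is only in how the two eigenvalues are combined into a single response.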
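The octave and scale-level tuning described above can be pictured as building a Gaussian scale-space pyramid: each octave halves the image resolution, and each level within an octave increases the blur by a constant factor. The sketch below, in plain NumPy, uses SIFT-style default parameters (`n_octaves`, `scales_per_octave`, `sigma0`) as assumptions; it decimates the raw image between octaves for brevity, whereas real implementations downsample a blurred level to avoid aliasing.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated kernel (sketch)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    # Convolve rows, then columns, with edge padding.
    pad = np.pad(img, ((0, 0), (r, r)), mode="edge")
    rows = np.stack([np.convolve(row, g, mode="valid") for row in pad])
    pad = np.pad(rows, ((r, r), (0, 0)), mode="edge")
    cols = np.stack([np.convolve(col, g, mode="valid") for col in pad.T]).T
    return cols

def build_pyramid(img, n_octaves=4, scales_per_octave=3, sigma0=1.6):
    """Scale-space pyramid: octaves halve resolution, and within an
    octave each level multiplies sigma by 2**(1/scales_per_octave)."""
    pyramid = []
    base = img.astype(float)
    for _ in range(n_octaves):
        octave = [gaussian_blur(base, sigma0 * 2 ** (s / scales_per_octave))
                  for s in range(scales_per_octave)]
        pyramid.append(octave)
        base = base[::2, ::2]  # halve resolution for the next octave
    return pyramid
```

Increasing `n_octaves` lets the detector find the same feature at coarser resolutions, while increasing `scales_per_octave` samples the blur axis more finely, both of which raise the chance that corresponding features in two images are detected at matching scales.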
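The ROI restriction in the last step could be realized, for instance, as a simple boolean mask that discards a border margin where lens distortion is strongest, applied as a filter on detected keypoints. This is a minimal sketch; the margin fraction is an assumed tunable, not a value taken from this project.

```python
import numpy as np

def central_roi_mask(shape, margin_frac=0.1):
    """Boolean mask keeping only the central region of an image,
    discarding a border margin where lens distortion dominates.
    margin_frac is an assumed tunable parameter."""
    h, w = shape
    mh, mw = int(h * margin_frac), int(w * margin_frac)
    mask = np.zeros((h, w), dtype=bool)
    mask[mh:h - mh, mw:w - mw] = True
    return mask

def filter_keypoints(points, mask):
    """Keep only (row, col) keypoints that fall inside the ROI mask."""
    return [(r, c) for r, c in points if mask[r, c]]
```

Rejecting near-border keypoints before triangulation removes the correspondences most affected by radial distortion, at the cost of a slightly smaller usable field of view.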