Hello /MachineLearning,
I am interested in perspective/pose estimation of an object using a Neural Network.
First of all, this object is a 2D marker (a 5x5 cm flat sticker) pasted onto a 1x1 m flat surface. This marker is a fiducial marker, like the markers in this image: link!
Detection of this object is no problem, but I have no idea how to do a perspective estimation of it using an ANN.
I'm aware of other ways to estimate the perspective of my object, for instance using OpenCV. However, my current OpenCV implementation lacks precision, which I am hoping to gain using a trained ANN (I could be entirely wrong, though).
This lack of precision in my OpenCV implementation is partly due to a (slightly) blurry image of my object (blurry because of the hyperfocal distance and the camera distance).
My thoughts on how to do this:
1. Generate the marker, apply a (small) perspective transform, and save the marker image and perspective transform to a training set. Repeat this step X number of times.
2. Develop an FF ANN (feed-forward artificial neural network) and train it using the above training set.
3. Grab the object from the camera image and identify its perspective transform using the FF ANN.
So /MachineLearning, what do you guys think? Am I moving in the right direction, or am I way off? :o
Thanks!