As the title says, is it possible to achieve digit recognition from training data of different dimensionalities (i.e. pixel resolutions), independent of line thickness, position in the image, and resolution? For example, by focusing on the relationships between the filled pixels rather than on the absolute filled/unfilled pixel values.
That is, a human recognises a digit by its known topology rather than by thinking in terms of pixels. Is there any research into this sort of digit recognition?
I guess neural networks do something similar, but they are restricted to a fixed input dimensionality. Maybe one could combine resizing algorithms, PCA, etc. with a neural network to achieve what I mean, but perhaps there are other ways.
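For what it's worth, here is a minimal sketch (my own illustration, not from any particular paper) of the preprocessing idea mentioned above: crop the digit to its bounding box to remove position dependence, then resample it to a fixed grid with nearest-neighbour sampling to remove resolution dependence, before handing it to a fixed-input classifier. The function name and the 8x8 target size are arbitrary choices; it does nothing about line thickness.

```python
import numpy as np

def normalize_digit(img, size=8):
    """Crop a binary digit image to the bounding box of its filled pixels
    and resample it to a fixed size x size grid with nearest-neighbour
    sampling. The result is independent of where the digit sits in the
    image and of the input resolution (but not of stroke thickness)."""
    ys, xs = np.nonzero(img)
    cropped = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = cropped.shape
    # Map each cell of the fixed target grid to the nearest source pixel.
    rows = (np.arange(size) * h // size).astype(int)
    cols = (np.arange(size) * w // size).astype(int)
    return cropped[np.ix_(rows, cols)]

# An "L"-like stroke drawn small, then the same stroke scaled 3x and
# shifted: both normalize to the same fixed 8x8 representation.
small = np.zeros((10, 10), dtype=int)
small[1:7, 2] = 1
small[6, 2:6] = 1
big = np.kron(small, np.ones((3, 3), dtype=int))   # 3x upscaled copy
shifted = np.roll(big, (5, 5), axis=(0, 1))        # same digit, moved
```

With this in front of a standard classifier, training images of any resolution and position feed the same fixed-size input layer; a learned approach (e.g. convolution plus pooling) would be the more principled version of the same idea.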