Channel: Machine Learning

Image recognition and depth perception


I've been thinking about Numenta and sensory perception, and it has made me wonder about the following:

  • Does 3D vision/depth perception make it easier for us humans to identify objects?
  • When we look at objects in the world, we don't see them in still life. Even when nothing and no one is moving, we see objects, shapes and colors over an interval of time.
  • In low light, objects that don't move are difficult to see. In fact, military servicemen are trained to rapidly dart their eyes around an object in low light in order to better discern its features.

So it seems to me that when seeing the world, we are actually perceiving changes in shape and depth with respect to time. I wonder: is there a way to incorporate this idea into a neural network? Might it be possible to impose a depth field upon static images?
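As a toy sketch of the "impose a depth field" idea: one could stack an estimated depth map onto a static RGB image as a fourth channel, giving the network an RGB-D input instead of plain RGB. The function name and shapes below are hypothetical illustrations, not part of any existing system; the depth map is assumed to come from some monocular depth estimator.

```python
import numpy as np

def add_depth_channel(rgb, depth):
    """Stack an estimated depth map onto an RGB image as a 4th channel.

    rgb:   (H, W, 3) float array in [0, 1]
    depth: (H, W) float array (e.g. from a monocular depth estimator;
           source of the depth map is assumed, not specified here)
    Returns an (H, W, 4) "RGB-D" array with depth normalized to [0, 1].
    """
    d = depth.astype(np.float64)
    d = (d - d.min()) / (np.ptp(d) + 1e-8)  # normalize depth to [0, 1]
    return np.concatenate([rgb, d[..., None]], axis=-1)
```

A network's first convolutional layer would then simply take 4 input channels rather than 3.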

If we take a static image, add depth information, then "expose" it to a deep learning system for some time interval (perhaps with some dither/small random motion added), would it perform better than today's best?
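The "dither" part of this proposal could be sketched as follows: turn a single static image into a short clip of slightly shifted copies, loosely mimicking fixational eye movements, and feed the resulting frame sequence to the model. This is a minimal illustration under assumed shapes; the function name and parameters are made up for this example.

```python
import numpy as np

def dither_sequence(image, steps=8, max_shift=2, seed=0):
    """Turn a static image into a short 'clip' of jittered copies,
    loosely mimicking small fixational eye movements.

    image: (H, W, C) array
    Returns a (steps, H, W, C) array of randomly translated frames.
    """
    rng = np.random.default_rng(seed)
    frames = []
    for _ in range(steps):
        # Small random translation in both axes (wrap-around via roll,
        # which is a simplification; real padding would differ).
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        frames.append(np.roll(image, shift=(dy, dx), axis=(0, 1)))
    return np.stack(frames)
```

A recurrent or temporal model could then be trained on these sequences instead of single frames, which is one way to test the question posed above.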

When we see the world, we see more than edges and shapes. Why shouldn't our machines?

submitted by daneirkusauralex

