From what I can deduce, sign language recognition (i.e. what the deaf use) would likely be a hybrid of gesture and speech recognition.
Specifically, I'm thinking about an app in which a deaf person (which I am) can strap a smartphone to their wrist. That approach wouldn't capture every sign, but possibly just enough to be useful. At this point, I'm simply looking to experiment.
While I've worked with ML before, this is new territory for me. I've been going through readouts from the iPhone's sensors. As you might expect, they're basically time series of motion data. How do I turn them into single rows of Y -> classification, X -> features?
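To show what I mean by "single rows", here's a rough sketch of the kind of thing I'm imagining (Python; the column names, the stats, and the idea that one recording = one sign are all just placeholders, nothing I've settled on):

```python
import numpy as np
import pandas as pd

def recording_to_features(recording: pd.DataFrame) -> np.ndarray:
    """Collapse one recording of motion samples into a fixed-length feature vector."""
    cols = ["ax", "ay", "az", "gx", "gy", "gz"]  # made-up names for accel + gyro axes
    feats = []
    for c in cols:
        s = recording[c].to_numpy()
        feats += [s.mean(), s.std(), s.min(), s.max()]  # simple summary stats per axis
    return np.array(feats)

# Each labelled recording of a sign would then become one row:
#   X = recording_to_features(recording_df), y = "hello" (or whatever the sign is)
```

That at least gives every recording the same number of features, but it throws away the ordering, which brings me to the next problem.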
Since every signer, like every speaker, delivers at a varying speed, I'm trying to figure out how to make the input less time-oriented and more "shape-oriented", if that makes any sense. To simplify this as much as possible, think of somebody circling their hand. Instead of a time series, it'd be lovely to turn the input into, well, circular motions. I'm thinking perhaps polynomial regression? But how do I turn time series of varying lengths and intervals into single rows of Y (classification) and Xs (polynomial coefficients)?
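This is roughly the direction I have in mind, assuming each recording is just an (N, 3) array of accelerometer samples where N varies from sign to sign (the resample length and polynomial degree are guesses on my part):

```python
import numpy as np

def shape_features(samples: np.ndarray, n_points: int = 50, degree: int = 3) -> np.ndarray:
    """samples: (N, 3) array of ax, ay, az for one sign; N can differ per recording."""
    n = len(samples)
    t = np.linspace(0.0, 1.0, n)             # original "progress" through the sign
    t_new = np.linspace(0.0, 1.0, n_points)  # common axis, independent of signing speed
    coeffs = []
    for axis in range(samples.shape[1]):
        resampled = np.interp(t_new, t, samples[:, axis])        # stretch/squash in time
        coeffs.append(np.polyfit(t_new, resampled, deg=degree))  # keep the shape, drop the speed
    return np.concatenate(coeffs)  # 3 * (degree + 1) features, same length for every sign

# The hope: a fast circle and a slow circle end up with similar coefficients,
# and each recording still collapses into one X row for a classifier.
```

No idea yet whether low-order polynomials are expressive enough for real signs, but it shows the kind of "time series in, fixed row out" transformation I'm after.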
Hope I'm making some sense.