So I have two time sequences (each with a varying sample rate) describing the motion of an object on a 2-D horizontal plane.
One sequence records the x and y positions on this plane with a 0.01% margin of error.
The other is a sequence of accelerometer readings with a much larger margin of error, roughly 15%.
I want to use data from the first series in conjunction with data from the second series to either:
- produce a third sequence of smoother, more accurate accelerometer readings.
- train a machine learning model to map future noisy accelerometer readings to their respective x,y coordinates (rough sketch of what I mean below).
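For concreteness, here's a minimal sketch of how I picture the second option: feed a sliding window of accelerometer samples into an off-the-shelf regressor. Everything here is a placeholder (fake data, the window size, the choice of RandomForestRegressor), and it assumes both streams have already been resampled onto a common timeline, which is part of what I'm unsure about:

    # Minimal sketch, not my real pipeline: window the noisy accelerometer
    # stream and regress the accurate x,y positions against it.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Fake, resampled data: n samples of noisy (ax, ay) readings and the
    # corresponding accurate (x, y) positions from the first sequence.
    n = 1000
    accel = rng.normal(size=(n, 2))       # stand-in for ~15%-error readings
    positions = rng.normal(size=(n, 2))   # stand-in for 0.01%-error positions

    # Use a sliding window of past accelerometer samples as the features
    # for each position sample.
    window = 10
    X = np.stack([accel[i - window:i].ravel() for i in range(window, n)])
    y = positions[window:]

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)

    # Predict x, y for a new window of noisy readings.
    new_window = rng.normal(size=(window, 2)).ravel().reshape(1, -1)
    print(model.predict(new_window))

I'm not committed to this framing at all; it's just to show the kind of mapping I'm after.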
So the questions:
What machine learning approaches exist that would be best suited to solving this problem?
How would you solve this problem differently if, instead of being given a full sequence of data, you received accelerometer readings on the spot (i.e., as a live stream)?
How well would your solution scale if the margin of error varied from accelerometer to accelerometer?