I have previously asked this forum about metrics other than classification performance. These metrics are usually unstated or reported incompletely in papers, and some here feel they don't matter much, an assessment I don't particularly agree with. I was wondering if anybody would be willing to share some numbers from their own experience. MNIST is an example, but any other popular benchmark would also work. For example:
training time, hardware specs (including power consumption if possible), general algorithm or algorithm class, test error, test time, etc.
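To make the request concrete, here is a minimal sketch of how such numbers could be recorded. It uses scikit-learn's small `digits` dataset as a quick stand-in for MNIST (an assumption for speed; swap in MNIST for real numbers), and logistic regression as an arbitrary example classifier; the timing pattern is the same for any algorithm.

```python
import time

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small stand-in dataset so the sketch runs in seconds (MNIST would need a download).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000)

# Wall-clock training time.
t0 = time.perf_counter()
clf.fit(X_train, y_train)
train_time = time.perf_counter() - t0

# Wall-clock test time and test error (1 - accuracy).
t0 = time.perf_counter()
test_error = 1.0 - clf.score(X_test, y_test)
test_time = time.perf_counter() - t0

print(f"training time: {train_time:.3f} s")
print(f"test time:     {test_time:.3f} s")
print(f"test error:    {test_error:.3%}")
```

Hardware specs and power consumption would have to come from outside the script (e.g. the machine's spec sheet, or tools like `nvidia-smi` for GPU power), which is partly why they so rarely appear in papers.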
Any feedback would be very much appreciated, even if it's just a guesstimate. Thanks so much for any help.