We often recognize a distinction between tuning parameters and "normal" model parameters. An example where this is clear is ridge regression, where the ridge penalty is a tuning parameter and the regression coefficients are normal parameters. A less obvious example is k-nearest-neighbors, where k is a tuning parameter and the identities of the k nearest neighbors for a given test case are normal parameters.
The key practical difference between tuning parameters and normal parameters is how their values are chosen. One usually uses a fitting algorithm (e.g., least squares) to choose the values of the normal parameters that minimize training error. For the tuning parameters, by contrast, the training-error-minimizing value is trivial (0, in the case of the ridge penalty), so one must instead use a higher-order procedure (like cross-validation) if one wants to use training data to choose the parameter's value.
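To make the distinction concrete, here is a minimal sketch in Python using synthetic data (the data, candidate penalties, and helper names like fit_ridge and cv_error are all illustrative, not from any particular library): the coefficients (normal parameters) have a closed-form training-error-minimizing fit for any fixed penalty, while the penalty (tuning parameter) is picked by a simple k-fold cross-validation loop.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.5, size=100)

def fit_ridge(X, y, lam):
    """Normal parameters: coefficients minimizing the penalized training error."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_error(X, y, lam, k=5):
    """Tuning parameter: evaluated by held-out error, not training error."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        beta = fit_ridge(X[mask], y[mask], lam)
        errs.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
    return np.mean(errs)

# Minimizing training error over lam would just drive it to 0, so we
# choose it by cross-validated error instead, then refit the coefficients.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(lams, key=lambda lam: cv_error(X, y, lam))
beta_hat = fit_ridge(X, y, best_lam)
print(best_lam, beta_hat)
```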
The question is, what do you call normal parameters, if anything?