I am new to Bayesian Decision Theory and don't understand the following concept:
So, from what I understand, the Bayes error is used to report the performance of a Bayes classifier in terms of the probability of making an error. From the conditional error probabilities (I uploaded the equations as images in the hope that they help explain what I am referring to)

we can obtain the total probability of making an error (the probability of misclassifying a pattern).
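In case the images don't load, these are the standard two-class expressions I have in mind:

$$P(\mathrm{error} \mid x) = \min\big[\,P(c_1 \mid x),\; P(c_2 \mid x)\,\big]$$

$$P(\mathrm{error}) = \int P(\mathrm{error} \mid x)\, p(x)\, dx$$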

Now, if my Bayes classifier is designed to minimize the overall risk, I have a loss function that assigns penalties to certain decisions:
![conditional risk](http://sebastianraschka.com/_my_resources/images/equations/cond_risk.png)
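Written out, in case the image doesn't load, this is the conditional risk of taking action $\alpha_i$:

$$R(\alpha_i \mid x) = \sum_{j=1}^{c} \lambda(\alpha_i \mid c_j)\, P(c_j \mid x)$$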
So, if my classifier includes such a loss function when I optimize it for minimum overall risk, shouldn't the Bayes error also include the loss-function term?
Hope you can help me, because I think I am missing something ...
EDIT:
I'll try to illustrate my problem with a 2D classification example:
Let's assume I have two pdfs (e.g., p(x|c1) and p(x|c2)) with slight overlap, and misclassifying a pattern as c1 when it truly belongs to c2 is more costly than vice versa.
In this case, I would assign a higher loss to "classify pattern x as c1 when it is truly c2" than to "classify pattern x as c2 when it is truly c1" in order to calculate and minimize the overall risk.
Due to the minimum-risk optimization, I would therefore increase the probability of classifying a pattern x as c2 over c1. Isn't this something I also have to include in p(error)?
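To make this concrete, here is a quick numerical sketch of what I mean (1D features, two classes; all densities, priors, and losses are made-up illustration values):

```python
# Toy sketch of my question (hypothetical numbers): two classes with
# Gaussian class-conditional densities and asymmetric losses.
import numpy as np
from scipy.stats import norm

p1, p2 = 0.5, 0.5                    # priors P(c1), P(c2)
pdf1 = norm(loc=0.0, scale=1.0)      # p(x|c1)
pdf2 = norm(loc=2.0, scale=1.0)      # p(x|c2)

# lam[i][j] = loss for deciding c_{i+1} when the truth is c_{j+1}.
# Deciding c1 when it is truly c2 is penalized more heavily.
lam = np.array([[0.0, 5.0],
                [1.0, 0.0]])

x = np.linspace(-6.0, 8.0, 20001)
px1 = pdf1.pdf(x) * p1               # joint density p(x, c1)
px2 = pdf2.pdf(x) * p2               # joint density p(x, c2)
post1 = px1 / (px1 + px2)            # posterior P(c1|x)
post2 = 1.0 - post1                  # posterior P(c2|x)

# Minimum-error rule: pick the class with the larger posterior.
decide_c1_err = post1 >= post2

# Minimum-risk rule: pick the action with the smaller conditional risk
# R(a_i|x) = sum_j lam[i, j] * P(c_j|x).
risk_a1 = lam[0, 0] * post1 + lam[0, 1] * post2
risk_a2 = lam[1, 0] * post1 + lam[1, 1] * post2
decide_c1_risk = risk_a1 <= risk_a2

dx = x[1] - x[0]
def p_error(decide_c1):
    # P(error) = integral of the joint density of the class we did NOT pick.
    return np.sum(np.where(decide_c1, px2, px1)) * dx

print("p(error), min-error rule:", p_error(decide_c1_err))
print("p(error), min-risk rule: ", p_error(decide_c1_risk))
# The min-risk rule shifts the boundary so that c2 is chosen more often,
# and its p(error) comes out larger than that of the min-error rule.
```

The way I see it, the minimum-risk rule deliberately trades a higher p(error) for a lower overall risk, so I am unsure which quantity the "Bayes error" is supposed to report.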