Making Risk Minimization Tolerant to Label Noise
Abstract
In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms such as SVM perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0–1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions satisfy it. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can be made tolerant to nonuniform label noise as well, if we can assume the classes to be separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0–1 loss, the sigmoid loss and the ramp loss has much better robustness to label noise when compared to the SVM algorithm.
keywords:
Classification, Label Noise, Loss Function, Risk Minimization, Noise Tolerance
1 Introduction
In a classifier learning problem we are given training data, and when the class labels in the training data may be incorrect (or noise-corrupted), we refer to it as label noise. Learning classifiers in the presence of label noise is a classical problem in machine learning (Frénay and Verleysen, 2014). This challenging problem has become more relevant in recent times due to the current applications of machine learning. In many web-based applications, the labeled data is essentially obtained through user feedback or user labeling. This leads to data with label noise because of considerable variability among different users while labeling, and also due to inevitable human errors. In traditional pattern recognition problems also, we need to tackle label noise. For example, overlapping class-conditional densities give rise to training data with label noise. This is because we can always view data generated from such densities as data that is originally classified according to, say, the Bayes optimal classifier and then subjected to (nonuniform) label noise before being given to the learning algorithm. Feature measurement errors can also lead to label noise in the training data.
In this paper, we discuss methods for learning classifiers that are robust to label noise. Specifically we consider the risk minimization strategy which is a generic method for learning classifiers. We focus on the issue of making risk minimization robust to label noise.
Risk minimization is one of the popular strategies for learning classifiers from training data (Haussler, 1992; Devroye et al., 1996). (The risk minimization strategy is briefly discussed in Section 3.1.) Many of the standard approaches for learning classifiers (such as the Bayes classifier, neural network or SVM based classifiers, etc.) can be viewed as (empirical) risk minimization under a suitable loss function. The Bayes classifier minimizes risk under the 0–1 loss function. One would like to minimize risk under the 0–1 loss as it minimizes the probability of misclassification. However, in general, minimizing risk under the 0–1 loss is computationally hard because it gives rise to a nonconvex and nonsmooth optimization problem. Hence many convex loss functions have been proposed to make risk minimization efficient. Square loss (used in feedforward neural networks), hinge loss (used in SVM), log loss (used in logistic regression) and exponential loss (used in boosting) are some common examples of such convex loss functions. Many such convex loss functions are shown to be classification calibrated; that is, low risk under these losses implies low risk under the 0–1 loss (Bartlett et al., 2006). However, these results do not say anything about the robustness of such risk minimization algorithms to label noise. In this paper we present some interesting theoretical results on when risk minimization can be robust to label noise.
A learning algorithm can be said to be robust to label noise if the classifiers learnt using noisy data and noise-free data both have the same classification accuracy on noise-free test data (Manwani and Sastry, 2013). In Manwani and Sastry (2013), it is shown that risk minimization under the 0–1 loss is tolerant to uniform noise (with noise rate less than 50%). It is also tolerant to nonuniform noise under some additional conditions. It is also shown in (Manwani and Sastry, 2013), through counterexamples, that risk minimization under many of the standard convex loss functions, such as hinge loss, log loss or exponential loss, is not noise-tolerant even under uniform noise.
In this paper, we extend the above theoretical analysis. We provide some sufficient conditions on a loss function so that risk minimization with that loss function becomes noise tolerant under uniform and nonuniform label noise. While the 0–1 loss satisfies these conditions, none of the standard convex loss functions do. We also show that some of the nonconvex loss functions, such as the sigmoid loss, ramp loss and probit loss, satisfy the sufficiency conditions. Our results show that risk minimization under these loss functions is tolerant to uniform noise, and that it is also tolerant to nonuniform noise if the Bayes risk (under the noise-free data) is zero and if one parameter in the loss function is properly chosen. Hence we propose that risk minimization using the sigmoid or ramp loss (which can be viewed as continuous but nonconvex approximations to the 0–1 loss) would result in learning methods that are robust to label noise. Through extensive empirical studies, we show that such risk minimization has good robustness to label noise.
The rest of the paper is organized as follows. In Section 2, we provide a brief review of methods for tackling label noise and then summarize the contributions of this paper. In Section 3 we define the notion of noise tolerance of a learning algorithm and formally state our problem. In this section we also provide a brief overview of the general risk minimization strategy. Section 4 contains all our theoretical results. We present simulation results on both synthetically generated data as well as on some benchmark data sets in Section 5. Some concluding remarks are presented in Section 6.
2 Prior Work
Learning in the presence of noise is a long-standing problem in machine learning. It has been approached from many different directions. A detailed survey of these approaches is given in Frénay and Verleysen (2014).
In a recent study, Nettleton et al. present an extensive empirical investigation of the robustness of many standard classifier learning methods to noise in training data (Nettleton et al., 2010). They showed that the Naive Bayes classifier has the best noise tolerance properties. We comment more on this after presenting our theoretical results.
In general, when there is label noise, there are two broad approaches to the problem of learning a classifier. In the first set of approaches, data is preprocessed to clean the noisy points and then a classifier is learnt using standard algorithms. In the second set of approaches, the learning algorithm itself is designed in such a way that the label noise does not affect the algorithm. We call these approaches inherently noise tolerant. We briefly discuss these two broad approaches below.
2.1 Data Cleaning Based Approaches
These approaches rely on guessing points which are corrupted by label noise. Once these points are identified, they can be either filtered out or their labels suitably altered. Several heuristics have been used to guess such noisy points.
For example, it is reasonable to assume that the class label of a point situated deep inside the region of a class should match the class labels of its nearest neighbors. Thus, a mismatch between the class label of a point and those of most of its nearest neighbors can be used as a heuristic to decide whether a point is noisy (Fine et al., 1999). This method of guessing noisy points may not work near the classification boundary. The performance of this heuristic also depends on the number of nearest neighbors used.
Another heuristic is that, in general, noisy points are hard to classify correctly. Thus, when we learn multiple classifiers using the noisy data, many of the classifiers may disagree on the class labels of the noisy points. This heuristic has also been used to identify noisy points (Angelova et al., 2005; Brodley and Friedl, 1999; Zhu et al., 2003). Decision tree pruning (John, 1995), the distance of a point from the centroid of its own class (Daza and Acuna, 2007), points achieving weights higher than a threshold in a boosting algorithm (Karmaker and Kwek, 2006), and the margin of the learnt classifier (Har-Peled et al., 2007) are some other heuristics which have been used to identify noisy examples.
As is easy to see, the performance of such heuristics depends on the nature of the label noise. There is no single approach for identifying noisy points that works for all problems. While each of the above heuristics has certain advantages, none of them is universally applicable. Under any of these heuristics, a non-noisy point can be detected as noisy, and vice versa. This could eventually increase the overall noise level in the training data. Moreover, removal of the noisy points from the training data may lead to losing important information about the classification boundary (Bouveyron and Girard, 2009).
2.2 Inherently Noise Tolerant Approaches
These approaches do not do any preprocessing of the data; instead, the algorithm is designed in such a way that its output is not affected much by the label noise in the training data.
The Perceptron algorithm, which is the simplest algorithm for learning linear classifiers, has been modified in several ways to make it robust to label noise (Khardon and Wachman, 2007). Noisy points can frequently participate in updating the hyperplane parameters in the Perceptron algorithm, as noisy points are hard to classify correctly. Thus, allowing a negative margin around the classification boundary can avoid frequent hyperplane updates caused by misclassifications with small margin. Putting an upper bound on the number of mistakes allowed for any example also controls the effect of label noise (Khardon and Wachman, 2007). Similar techniques have been employed to make the AdaBoost algorithm robust against noisy points. The overfitting problem in AdaBoost, caused by label noise, can be controlled by introducing a prior on weights which penalizes large weights (Rätsch et al., 1999). In boosting algorithms, making the coefficients of each of the base classifiers input-dependent also controls the exponential growth of weights due to noise (Jin et al., 2003). SVM can be made robust to label noise by modifying the kernel matrix (Biggio et al., 2011). All these approaches are based on heuristics and work well in some cases. However, for most of these approaches, there are no provable guarantees of noise tolerance.
Noise tolerant learning has also been approached from the point of view of efficient probably approximately correct (PAC) learnability. By efficiency, we mean polynomial-time learnability. Kearns (1998) proposed a PAC learning algorithm for learning under label noise using statistical queries. However, the specific statistics that are calculated from the training data are problem-specific. PAC learning of linear threshold functions is, in general, NP-hard (Höffgen and Simon, 1992). However, linear threshold functions are efficiently PAC learnable under uniform noise if the noise-free data is linearly separable with an appropriately large margin (Bylander, 1994). For the same problem, Blum and Frieze (1996) present a method to PAC-learn in the presence of uniform label noise without requiring the large-margin condition; however, the final classifier is a decision list of linear threshold functions. Cohen (1997) proposed an ellipsoid algorithm which efficiently PAC-learns linear classifiers under uniform label noise. This result was generalized further to class-conditional label noise (Stempfel et al., 2007). (Under the class-conditional noise model, the probability of a label being corrupted is the same for all examples of one class, though different classes can have different noise rates.) All these results are for linear classifiers and for uniform label noise. There are no efficient PAC learnability results under nonuniform label noise.
Recently, Scott et al. (2013) proposed a method of estimating the Type 1 and Type 2 error rates of any specific classifier under the noise-free distribution, given only the noisy training data. This is for the case of a 2-class problem where the training data is corrupted with class-conditional label noise. They used the concept of mutually irreducible distributions and showed that such an estimation is possible if the noise-free class conditional distributions are mutually irreducible. This estimation strategy can be used to get a robust method of learning classifiers under class-conditional noise. In another recent method, Natarajan et al. (????) propose risk minimization under a specially constructed surrogate loss function as a method of learning classifiers that is robust to class-conditional label noise. Given any loss function, they propose a method to construct a new loss function. They show that the risk under this new loss for noisy data is the same as the risk under the original loss for noise-free data. The construction of the new loss function needs knowledge of the noise rates, which is to be estimated from data. Similar results are also presented in (Stempfel and Ralaivola, 2009).
Manwani and Sastry (2013) have analyzed the noise tolerance properties of risk minimization under many of the standard loss functions. It is shown that risk minimization with the 0–1 loss function is tolerant to uniform noise, and also to nonuniform noise if the risk of the optimal classifier under noise-free data is zero (Manwani and Sastry, 2013). No other loss function is shown to be noise tolerant in that paper (except for the square loss under uniform noise). It is also shown, through counterexamples, that risk minimization with many of the standard convex loss functions (e.g., hinge loss, logistic loss and exponential loss) does not have the noise tolerance property even under uniform noise (Manwani and Sastry, 2013). That paper does not consider the case of class-conditional noise. A provably correct algorithm to learn linear classifiers based on risk minimization under the 0–1 loss is presented in (Sastry et al., 2010). This algorithm uses continuous action-set learning automata (CALA) (Thathachar and Sastry, 2003).
In this paper we build on and generalize the results presented in Manwani and Sastry (2013). The main contributions of the paper are the following. We provide a sufficient condition on any loss function such that risk minimization with that loss function becomes noise tolerant under uniform label noise. This is a generalization of the main theoretical result in Manwani and Sastry (2013). We observe that the 0–1 loss satisfies this sufficiency condition. We show that the ramp loss (Brooks, 2011) (which is empirically found to be robust in learning from noisy data (Wu and Liu, 2007)), the sigmoid loss (which can be viewed as a continuous but nonconvex approximation of the 0–1 loss) and the probit loss (Zheng and Liu, 2012) also satisfy this sufficiency condition. We also show that our condition on the loss function, along with the assumption that the Bayes risk (under the noise-free distribution) is zero, is sufficient to make risk minimization tolerant to nonuniform noise under a suitable choice of a parameter in the loss function. We also provide a sufficient condition for robustness to class-conditional noise. This result generalizes the result presented in Natarajan et al. (????).
In general it is hard to minimize risk under the 0–1 loss. Here we investigate approximating the 0–1 loss function by a differentiable function without losing the noise-tolerance property. We show that we can use the sigmoid and ramp losses (with some extra conditions if we need to tackle nonuniform label noise) for the approximation. We investigate standard gradient descent algorithms for minimizing risk under the sigmoid and ramp losses. The ramp loss can be written as a difference of two convex functions (Wu and Liu, 2007). We make use of this to obtain an efficient algorithm to learn nonlinear classifiers (through the kernel trick) by minimizing risk under the ramp loss. We present extensive empirical investigations to illustrate the noise tolerance properties of our risk minimization strategies and compare them against the performance of SVM. Among the classifier learning methodologies that can be viewed as risk minimization, Bayes (or Naive Bayes) and SVM are the most popular ones. The Bayes classifier minimizes risk under the 0–1 loss. Hence we compare the performance of risk minimization under the 0–1 loss, and under the other loss functions that satisfy our condition, with that of SVM.
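As an illustration of this strategy, the following is a minimal sketch (our illustration, not the implementation used in the experiments) of batch gradient descent on the empirical risk under the sigmoid loss for a linear classifier; the learning rate, the value of the slope parameter and the toy data are arbitrary choices:

```python
import numpy as np

def sigmoid_loss(margin, beta=1.0):
    """Sigmoid loss on the margin u = y * f(x); satisfies L(u) + L(-u) = 1."""
    return 1.0 / (1.0 + np.exp(beta * margin))

def fit_linear_sigmoid(X, y, beta=1.0, lr=0.5, epochs=500):
    """Batch gradient descent on the empirical sigmoid-loss risk of f(x) = w.x + b."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb the bias into w
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        s = sigmoid_loss(y * (Xb @ w), beta)
        # gradient of the mean sigmoid loss: -beta * s * (1 - s) * y * x, averaged
        grad = -(beta * s * (1.0 - s) * y) @ Xb / len(y)
        w -= lr * grad
    return w

# toy linearly separable data (labels in {-1, +1})
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w = fit_linear_sigmoid(X, y, beta=2.0)
Xb = np.hstack([X, np.ones((len(X), 1))])
train_accuracy = float(np.mean(np.sign(Xb @ w) == y))
```

Since the sigmoid loss is nonconvex, gradient descent only finds a local minimum in general; on separable toy data, starting from the zero vector, it recovers a separating hyperplane.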
3 Problem Statement
In this paper, our focus is on binary (2-class) classification. In this section we introduce our notation and formally define our notion of noise tolerance of a learning algorithm.
3.1 Risk Minimization
We first provide a brief overview of risk minimization for the sake of completeness. More details on this can be found in (Haussler, 1992; Devroye et al., 1996).
Let $\mathcal{X} \subseteq \mathbb{R}^d$ be the feature space from which the examples are drawn and let $\mathcal{Y} = \{+1, -1\}$ be the set of class labels. We use $+1$ and $-1$ to denote the two classes. In a typical classifier learning problem, we are given training data, $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_N, y_N)\}$, drawn according to an unknown distribution, $\mathcal{D}$, over $\mathcal{X} \times \mathcal{Y}$. The task is to learn a classifier which can predict the class label of a new feature vector. We will represent a classifier as $h(\mathbf{x}) = \mathrm{sign}(f(\mathbf{x}))$, where $f$ is a real-valued function defined over the feature space. The function $f$ is called a discriminant function, though often $f$ is also referred to as the classifier. We will use the convention of calling $f$ itself the classifier, though the final prediction of the label for a new feature vector $\mathbf{x}$ is given by $\mathrm{sign}(f(\mathbf{x}))$.
We want to learn a ‘good’ function or classifier from a chosen family of functions, $\mathcal{F}$. For example, if we are learning linear classifiers, then $\mathcal{F} = \{f : f(\mathbf{x}) = \mathbf{w}^T\mathbf{x} + b, \; \mathbf{w} \in \mathbb{R}^d, \; b \in \mathbb{R}\}$. Thus, the family of classifiers of interest here is parameterized by $(\mathbf{w}, b)$.
One way of specifying the goodness of a classifier is through a so-called loss function. We denote a loss function as $L : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}^+$. The idea is that, given an example $(\mathbf{x}, y)$, the value $L(f(\mathbf{x}), y)$ tells us how well the classifier $f$ predicts the label on this example. We want to learn a classifier that has, on average, low loss. Given any loss function, $L$, and a classifier, $f$, we define the L-risk of $f$ by
$$R_L(f) = E_{\mathcal{D}}\left[L(f(\mathbf{x}), y)\right] \qquad (1)$$
where $E_{\mathcal{D}}$ denotes expectation with respect to the distribution, $\mathcal{D}$, from which the training examples are drawn.
Now the objective is to learn a classifier, $f \in \mathcal{F}$, that has minimum L-risk. Such a strategy for learning classifiers is called risk minimization.
As an example, consider the 0–1 loss function defined by
$$L_{0\text{-}1}(f(\mathbf{x}), y) = \begin{cases} 1 & \text{if } yf(\mathbf{x}) \le 0 \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$
It is easy to see that the L-risk under the 0–1 loss of any $f$ is the probability that the classifier misclassifies an example. The Bayes classifier is the minimizer of risk under the 0–1 loss.
Normally, when one refers to the risk of a classifier, it is considered to be under the 0–1 loss function. Hence, here we call the risk under any general loss function $L$ the L-risk. This terminology is consistent with the so-called $\phi$-risk used in Bartlett et al. (2006). Whenever the specific loss function under consideration is clear from context, we simply say risk instead of L-risk.
Many standard methods of learning classifiers can be viewed as risk minimization with a suitable loss function. As noted above, learning the Bayes classifier is the same as minimizing risk under the 0–1 loss. Learning a feedforward neural network based classifier can be viewed as risk minimization under the squared error loss. (This loss function is defined by $L(f(\mathbf{x}), y) = (f(\mathbf{x}) - y)^2$.) We mention a few more loss functions later in this paper.
In general, minimizing the L-risk is not feasible because we normally do not have knowledge of the distribution $\mathcal{D}$. So, one often approximates the expectation by the sample average over the iid training data and hence minimizes the so-called empirical risk given by
$$\hat{R}_L(f) = \frac{1}{N}\sum_{i=1}^{N} L(f(\mathbf{x}_i), y_i).$$
If we have a sufficient number of training examples (depending on the complexity of the family of classifiers, $\mathcal{F}$), then the minimizer of the empirical risk would be a good approximation to the minimizer of the true risk (Devroye et al., 1996). In this paper, all our theoretical results are proved for (true) risk minimization, though we briefly comment on their relevance to empirical risk minimization.
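Empirical risk is simply a sample average of losses. A small illustrative sketch (the discriminant and the four data points below are made up for the example):

```python
import numpy as np

def zero_one_loss(fx, y):
    """0-1 loss: 1 if the label and the discriminant value disagree in sign."""
    return 1.0 if y * fx <= 0 else 0.0

def empirical_risk(loss, f, X, y):
    """Empirical L-risk: the sample average of L(f(x_i), y_i)."""
    return float(np.mean([loss(f(x), yi) for x, yi in zip(X, y)]))

# a fixed linear discriminant f(x) = w.x on four hand-made points
w = np.array([1.0, -1.0])
f = lambda x: float(w @ x)
X = np.array([[2.0, 1.0], [0.5, 1.0], [-1.0, 0.0], [1.0, 3.0]])
y = np.array([1, 1, -1, -1])

risk_01 = empirical_risk(zero_one_loss, f, X, y)  # one of the four points is misclassified
```

Here the empirical 0–1 risk is just the fraction of training points the classifier gets wrong.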
3.2 Noise Tolerance
In this section we formalize our notion of noise tolerance of risk minimization under any loss function.
Let $S = \{(\mathbf{x}_1, y_{\mathbf{x}_1}), \ldots, (\mathbf{x}_N, y_{\mathbf{x}_N})\}$ be the (unobservable) noise-free data, drawn iid according to a fixed but unknown distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$. The noisy training data given to the learner is $S_\eta = \{(\mathbf{x}_1, \hat{y}_{\mathbf{x}_1}), \ldots, (\mathbf{x}_N, \hat{y}_{\mathbf{x}_N})\}$, where $\hat{y}_{\mathbf{x}_i} = -y_{\mathbf{x}_i}$ with probability $\eta_{\mathbf{x}_i}$ and $\hat{y}_{\mathbf{x}_i} = y_{\mathbf{x}_i}$ with probability $1 - \eta_{\mathbf{x}_i}$. Note that our notation shows that the probability that the label of an example is incorrect may be a function of the feature vector of that example. In general, for a feature vector $\mathbf{x}$, its correct label (that is, its label under distribution $\mathcal{D}$) is denoted as $y_{\mathbf{x}}$, while the noise-corrupted label is denoted by $\hat{y}_{\mathbf{x}}$. We use $\mathcal{D}_\eta$ to denote the joint probability distribution of $\mathbf{x}$ and $\hat{y}_{\mathbf{x}}$.
We say that the noise is uniform if $\eta_{\mathbf{x}} = \eta$ for all $\mathbf{x}$. The noise is said to be class conditional if $\eta_{\mathbf{x}} = \eta_1$ for all $\mathbf{x}$ of class $+1$ and $\eta_{\mathbf{x}} = \eta_2$ for all $\mathbf{x}$ of class $-1$. In general, when the noise rate $\eta_{\mathbf{x}}$ is a function of $\mathbf{x}$, the noise is termed nonuniform.
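All three noise models can be simulated by independently flipping each label with the appropriate probability; a sketch (the particular rates below are arbitrary choices for illustration):

```python
import numpy as np

def flip_labels(y, rates, rng):
    """Independently flip label i with probability rates[i]; labels are in {-1, +1}."""
    flip = rng.random(len(y)) < rates
    return np.where(flip, -np.asarray(y), np.asarray(y))

rng = np.random.default_rng(0)
y = np.array([1, -1] * 500)

# uniform noise: eta_x = 0.2 for every x
y_uniform = flip_labels(y, np.full(len(y), 0.2), rng)

# class-conditional noise: the rate depends only on the true class
y_cc = flip_labels(y, np.where(y == 1, 0.3, 0.1), rng)

# nonuniform noise: eta_x is an arbitrary (bounded) function of the example
y_nu = flip_labels(y, np.linspace(0.0, 0.4, len(y)), rng)

frac_flipped = float(np.mean(y_uniform != y))
```

Uniform and class-conditional noise are thus special cases of nonuniform noise, with the rate array constant overall or constant within each class.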
Recall that a loss function is a mapping $L : \mathbb{R} \times \mathcal{Y} \to \mathbb{R}^+$ and, in a general risk minimization method, we learn a real-valued function $f$ by minimizing the expectation of the loss over some chosen function class $\mathcal{F}$. For any classifier $f$, the L-risk in the noise-free case is
$$R_L(f) = E_{\mathcal{D}}\left[L(f(\mathbf{x}), y_{\mathbf{x}})\right].$$
The subscript $\mathcal{D}$ denotes that the expectation is with respect to the distribution $\mathcal{D}$. Let $f^*$ be the global minimizer of $R_L(f)$ over $\mathcal{F}$.
When there is label noise in the data, the data is essentially drawn according to the distribution $\mathcal{D}_\eta$. The L-risk of any classifier $f$ under the noisy data is
$$R_L^\eta(f) = E_{\mathcal{D}_\eta}\left[L(f(\mathbf{x}), \hat{y}_{\mathbf{x}})\right].$$
Here the expectation is with respect to the joint distribution $\mathcal{D}_\eta$, which includes averaging over the noisy labels also. Let $f_\eta^*$ be the global minimizer of the risk in the noisy case. (Note that both $f^*$ and $f_\eta^*$ depend on $\mathcal{F}$, though our notation does not explicitly show it.)
Risk minimization under a given loss function is said to be noise tolerant if $f_\eta^*$ has the same probability of misclassification as that of $f^*$ on the noise-free data. This can be stated more formally as follows (Manwani and Sastry, 2013).
Definition 1
Risk minimization under loss function $L$ is said to be noise-tolerant if
$$P_{\mathcal{D}}\left[\mathrm{sign}(f^*(\mathbf{x})) = y_{\mathbf{x}}\right] = P_{\mathcal{D}}\left[\mathrm{sign}(f_\eta^*(\mathbf{x})) = y_{\mathbf{x}}\right].$$
When the above is satisfied, we also say that the loss function $L$ is noise-tolerant. Note that a loss function can be noise tolerant even if the two functions $f^*$ and $f_\eta^*$ are different, as long as both of them have the same classification accuracy under the distribution $\mathcal{D}$. Given a loss function, our goal is to identify $f^*$, which is a global minimizer of the L-risk in the noise-free case. If the loss function is noise tolerant, then minimizing the L-risk with the noisy data would also result in learning $f^*$.
4 Sufficient Conditions for Noise Tolerance
In this section we formally state and prove our theoretical results on noise tolerant risk minimization. We start with Theorem 2, where we provide a sufficient condition for a loss function to be noise tolerant under uniform and nonuniform noise.
Theorem 2.
Let $\eta_{\mathbf{x}} < 0.5$ for all $\mathbf{x}$. Also, let the loss function $L$ satisfy $L(f(\mathbf{x}), 1) + L(f(\mathbf{x}), -1) = K$, for all $\mathbf{x}$ and all $f$, for some positive constant $K$. Then risk minimization under the loss function $L$ becomes noise tolerant under uniform noise. If, in addition, $R_L(f^*) = 0$, then $L$ is noise tolerant under nonuniform noise also.
Proof.

Uniform Noise: For any $f$, we have $\hat{y}_{\mathbf{x}} = y_{\mathbf{x}}$ with probability $(1 - \eta_{\mathbf{x}})$ and $\hat{y}_{\mathbf{x}} = -y_{\mathbf{x}}$ with probability $\eta_{\mathbf{x}}$. Under uniform noise, we have $\eta_{\mathbf{x}} = \eta$ for all $\mathbf{x}$. Hence, the L-risk in the noisy case for any $f$ is
$$R_L^\eta(f) = E_{\mathcal{D}}\left[(1-\eta)L(f(\mathbf{x}), y_{\mathbf{x}}) + \eta L(f(\mathbf{x}), -y_{\mathbf{x}})\right] = (1-\eta)R_L(f) + \eta E_{\mathcal{D}}\left[K - L(f(\mathbf{x}), y_{\mathbf{x}})\right] = \eta K + (1-2\eta)R_L(f).$$
Hence, $R_L^\eta(f) - R_L^\eta(f^*) = (1-2\eta)\left(R_L(f) - R_L(f^*)\right)$. Since $f^*$ is the global minimizer of $R_L$, and since we assumed $\eta < 0.5$, we get $R_L^\eta(f) - R_L^\eta(f^*) \ge 0$ for all $f$. Thus $f^*$ is also the global minimizer of $R_L^\eta$. This completes the proof of noise tolerance under uniform noise.

Nonuniform Noise: Recall that under nonuniform noise, the probability with which a feature vector $\mathbf{x}$ has a wrong label is given by $\eta_{\mathbf{x}}$. Hence, the L-risk in the noisy case for any $f$ is
$$R_L^\eta(f) = E_{\mathcal{D}}\left[(1-\eta_{\mathbf{x}})L(f(\mathbf{x}), y_{\mathbf{x}}) + \eta_{\mathbf{x}}\left(K - L(f(\mathbf{x}), y_{\mathbf{x}})\right)\right].$$
Hence,
$$R_L^\eta(f) - R_L^\eta(f^*) = E_{\mathcal{D}}\left[(1-2\eta_{\mathbf{x}})\left(L(f(\mathbf{x}), y_{\mathbf{x}}) - L(f^*(\mathbf{x}), y_{\mathbf{x}})\right)\right]. \qquad (3)$$
Under our assumption, $R_L(f^*) = 0$. Since the loss function is nonnegative, this implies $L(f^*(\mathbf{x}), y_{\mathbf{x}}) = 0$ almost everywhere. Since we assumed $\eta_{\mathbf{x}} < 0.5$, we have $(1 - 2\eta_{\mathbf{x}}) > 0$ for all $\mathbf{x}$. Thus we get $R_L^\eta(f) - R_L^\eta(f^*) \ge 0$ for all $f$. Thus $f^*$ is also a global minimizer of the risk under nonuniform noise. This proves noise tolerance under nonuniform noise.
∎
The condition on the loss function that we assumed in the theorem above is a kind of symmetry condition:
$$L(f(\mathbf{x}), 1) + L(f(\mathbf{x}), -1) = K, \;\; \forall \mathbf{x}, \; \forall f.$$
Note that the above condition also implies that the loss function is bounded. Theorem 2 shows that risk minimization under a loss function is noise tolerant under uniform noise if the loss function satisfies the above condition. For noise tolerance under nonuniform noise, in addition to the above symmetry condition on the loss function, we need $R_L(f^*) = 0$. In Manwani and Sastry (2013), this result is proved only for the 0–1 loss, and thus the above theorem is a generalization of the main result of that paper.
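The uniform-noise part of the proof of Theorem 2 rests on the identity $R_L^\eta(f) = \eta K + (1-2\eta)R_L(f)$, which can be checked by Monte Carlo simulation; below is an illustrative sketch using the sigmoid loss (for which $K = 1$, as discussed in Section 4.1), where the classifier, the sample size and the noise rate are arbitrary choices:

```python
import numpy as np

def sigmoid_loss(margin):
    """Sigmoid loss with beta = 1; symmetric, with K = 1."""
    return 1.0 / (1.0 + np.exp(margin))

rng = np.random.default_rng(1)

# a fixed classifier f(x) = w.x and a large iid sample standing in for D
w = np.array([1.0, -0.5])
X = rng.normal(size=(200_000, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

eta = 0.3
flip = rng.random(len(y)) < eta                 # uniform label noise
y_noisy = np.where(flip, -y, y)

clean_risk = float(np.mean(sigmoid_loss(y * (X @ w))))
noisy_risk = float(np.mean(sigmoid_loss(y_noisy * (X @ w))))
predicted = eta * 1.0 + (1.0 - 2.0 * eta) * clean_risk  # eta*K + (1 - 2*eta)*R_L(f)
```

Since the map $R_L(f) \mapsto \eta K + (1-2\eta)R_L(f)$ is affine and increasing for $\eta < 0.5$, the noisy risk preserves the ordering of classifiers by clean risk, which is exactly why the minimizer is unchanged.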
Recall that the 0–1 loss function is given by $L_{0\text{-}1}(f(\mathbf{x}), y) = 1$ if $yf(\mathbf{x}) \le 0$, and $0$ otherwise. As is easy to see, the 0–1 loss function satisfies the above symmetry condition with $K = 1$. Hence the 0–1 loss is noise-tolerant under uniform noise. None of the standard convex loss functions (such as the hinge loss used in SVM or the exponential loss used in AdaBoost) satisfy the symmetry condition. It is shown in Manwani and Sastry (2013), through counterexamples, that none of them are robust to uniform noise.
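The symmetry condition is easy to check numerically. In the sketch below (our illustration), the losses are written as functions of the margin $u = yf(\mathbf{x})$; the 0–1 loss is given the value 1/2 at $u = 0$ so that the check also holds at the origin, and the ramp loss is taken in the form $\min(2, \max(0, 1-u))$, for which $K = 2$:

```python
import numpy as np

# losses as functions of the margin u = y * f(x)
def zero_one(u):
    return np.where(u < 0, 1.0, np.where(u > 0, 0.0, 0.5))  # value 1/2 at a tie

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(u))        # sigmoid loss with beta = 1

def ramp(u):
    return np.clip(1.0 - u, 0.0, 2.0)     # min(2, max(0, 1 - u)); K = 2

def hinge(u):
    return np.maximum(0.0, 1.0 - u)       # convex; not symmetric

u = np.linspace(-5.0, 5.0, 1001)
# L(u) + L(-u) should equal a constant K for a symmetric loss
sums = {name: fn(u) + fn(-u)
        for name, fn in [("0-1", zero_one), ("sigmoid", sigmoid),
                         ("ramp", ramp), ("hinge", hinge)]}
is_symmetric = {name: bool(np.allclose(s, s[0])) for name, s in sums.items()}
```

For the hinge loss, $L(u) + L(-u)$ equals $2$ only on $|u| \le 1$ and grows as $1 + |u|$ outside, so the check fails, in line with the counterexamples cited above.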
For loss to be noisetolerant under nonuniform noise, we need the global minimum of risk under loss to be zero, in the noisefree case. This means that, under the noisefree distribution , the classes are separable (by a classifier in the family of classifiers over which we are minimizing the risk). We note that this condition may not be as restrictive as it may appear at first sight. This separability is under the noisefree distribution which is, so to say, unobservable. For example, consider training data generated by sampling from two class conditional densities whose supports overlap. We can think of the noisefree data as the one obtained by classifying the data using a Bayes optimal classifier. Then the data would be separable under noisefree distribution. The labels in the actual training data could be thought of as obtained from this ideal separable data by independent noisecorruption of the original labels. Then the probability of a label being wrong would be a function of the feature vector and thus result in nonuniform label noise.
If the global minimum of the L-risk, $R_L(f^*)$, is small but nonzero, then we can show that risk minimization under a loss function satisfying our symmetry condition is approximately noise tolerant. Essentially, we can show that $R_L(f_\eta^*)$ is bounded by $c\,\epsilon$, where $\epsilon = R_L(f^*)$ and $c$ is a constant which increases with increasing noise rate and goes to infinity as the maximum noise rate approaches 0.5. We derive this bound below.

Suppose $R_L(f^*) = \epsilon$. That is, the global minimum of the L-risk under the noise-free distribution is $\epsilon$. Since $f_\eta^*$ is the global minimizer of $R_L^\eta$, we have $R_L^\eta(f_\eta^*) - R_L^\eta(f^*) \le 0$. From equation (3), we have
$$E_{\mathcal{D}}\left[(1-2\eta_{\mathbf{x}})\left(L(f_\eta^*(\mathbf{x}), y_{\mathbf{x}}) - L(f^*(\mathbf{x}), y_{\mathbf{x}})\right)\right] \le 0.$$
This implies
$$E_{\mathcal{D}}\left[(1-2\eta_{\mathbf{x}})L(f_\eta^*(\mathbf{x}), y_{\mathbf{x}})\right] \le E_{\mathcal{D}}\left[(1-2\eta_{\mathbf{x}})L(f^*(\mathbf{x}), y_{\mathbf{x}})\right] \le \epsilon,$$
where we used $(1-2\eta_{\mathbf{x}}) \le 1$ and $R_L(f^*) = \epsilon$. Let $\eta_{\max} = \max_{\mathbf{x}} \eta_{\mathbf{x}} < 0.5$. Then we have $(1-2\eta_{\max})\,R_L(f_\eta^*) \le \epsilon$, which implies
$$R_L(f_\eta^*) \le \frac{\epsilon}{1 - 2\eta_{\max}}.$$
This shows that if $\epsilon$ is small then $R_L(f_\eta^*)$ is also small. (Note that $f_\eta^*$ is what we learn by minimizing risk under the noisy distribution.) For example, if the maximum nonuniform noise rate is 40%, then $R_L(f_\eta^*) \le 5\epsilon$.
Our Theorem 2 shows that risk minimization under the 0–1 loss function is tolerant to uniform noise, and also to nonuniform noise if the global minimum of the risk is zero. As mentioned earlier, the Bayes classifier minimizes risk under the 0–1 loss. Hence our result shows that the Bayes classifier has good noise tolerance properties. We can obtain (a good approximation of) the Bayes classifier by minimizing risk under the 0–1 loss over an appropriate class of functions $\mathcal{F}$. We can also obtain (a good approximation of) the Bayes classifier by estimating the class conditional densities from data. For multidimensional feature vectors, a simplification often employed while estimating class conditional densities is to assume independence of features, and the resulting classifier is termed the Naive Bayes classifier. In many situations this would be a good approximation to the Bayes classifier. In a recent study, Nettleton et al. presented extensive empirical investigations of the noise robustness of different classifier learning algorithms (Nettleton et al., 2010). In their study, they considered the top ten machine learning algorithms (Yu et al., 2007). They found that the Naive Bayes classifier has the best robustness with respect to noise. Theorem 2 proved above provides some theoretical justification for the noise-robustness of the Naive Bayes classifier. Later, in Section 5, we also present simulation results to show that risk minimization under the 0–1 loss has very good robustness to label noise.
As mentioned in Section 3.1, in practice one minimizes the empirical risk because one does not usually have knowledge of the underlying distribution. Our theorem, as proved, applies only to (true) risk minimization. If we have a sufficiently large number of examples and if the complexity of the class of functions $\mathcal{F}$ is not large, then, by the standard results on consistency of empirical risk minimization (Devroye et al., 1996), the minimizer of the empirical risk under the noise-free distribution would be close to the minimizer of the true risk under the noise-free distribution, and similarly for the noisy distribution. Hence, it is reasonable to expect that the minimizer of the empirical risk with noisy samples would be close to the minimizer of the empirical risk with noise-free samples. Also, if we take the expectation integral in the proof of Theorem 2 to be with respect to the empirical distribution given by the set of examples, then the L-risk under the noise-free distribution is the same as the empirical risk. Then Theorem 2 can be interpreted as saying that the minimizer of the empirical risk with noise-free samples would be the same as the minimizer of the empirical risk with noisy samples, averaged over the label-noise distribution. All this provides a plausibility argument that the noise-robustness property proved by Theorem 2 would (approximately) hold even for the case of empirical risk minimization. Our empirical results presented in Section 5 also provide evidence for this. More work is needed to formally prove such a result, to extend the noise-robustness results to empirical risk minimization, and to derive bounds on the number of examples needed.
Risk minimization under the 0–1 loss is hard because it involves optimizing a nonconvex and nonsmooth objective function. One can easily design a smooth loss function (which can be viewed as a continuous approximation of the 0–1 loss function) that satisfies the symmetry condition of Theorem 2. Hence, one can try optimizing the risk under such a loss function. As we show here, we can use the ramp loss, the sigmoid loss, etc. for this. However, under such a loss function, it may not be possible to achieve $R_L(f^*) = 0$. For example, the sigmoid function value is always strictly positive, and hence the risk (under such a loss function) of any classifier is strictly greater than zero. Thus, for other loss functions satisfying our symmetry condition, the sufficient condition for noise tolerance under nonuniform noise, namely that the global minimum of the L-risk (under that loss function) is zero, may be very restrictive. We address this issue next.
We call the global minimum of the risk under the 0–1 loss the Bayes risk. If we assume that the Bayes risk in the noise-free case is zero, then we can show that some of the loss functions satisfying our symmetry condition can achieve noise tolerance under nonuniform noise also, by proper choice of a parameter in the loss function (even if the global minimum of the L-risk is nonzero). We present these results for the sigmoid loss, the ramp loss and the probit loss in the next three subsections.
4.1 Sigmoid Loss
The sigmoid loss with parameter $\beta > 0$ is defined as
$$L_{sig}(f(\mathbf{x}), y) = \frac{1}{1 + \exp\left(\beta y f(\mathbf{x})\right)}. \qquad (4)$$
If we view the loss as a function of the single variable $u = yf(\mathbf{x})$, then the parameter $\beta$ is proportional to the magnitude of the slope of the function at the origin. It is easy to verify that
$$L_{sig}(f(\mathbf{x}), y) + L_{sig}(f(\mathbf{x}), -y) = 1, \;\; \forall \mathbf{x}, \; \forall f,$$
so the sigmoid loss satisfies the symmetry condition of Theorem 2 with $K = 1$.
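Both observations, the symmetry with $K = 1$ for every $\beta$ and the slope $-\beta/4$ at the origin, can be verified numerically; a small sketch (the particular values of $\beta$ are arbitrary):

```python
import numpy as np

def sigmoid_loss(u, beta):
    """Sigmoid loss as a function of the margin u = y * f(x)."""
    return 1.0 / (1.0 + np.exp(beta * u))

u = np.linspace(-4.0, 4.0, 801)

# symmetry with K = 1 holds for every beta
sym_ok = all(np.allclose(sigmoid_loss(u, b) + sigmoid_loss(-u, b), 1.0)
             for b in (0.5, 1.0, 5.0, 20.0))

# the slope at the origin is -beta/4, so the loss steepens as beta grows
h = 1e-6
slope = {b: float((sigmoid_loss(h, b) - sigmoid_loss(-h, b)) / (2 * h))
         for b in (1.0, 4.0, 16.0)}
```

The slope check uses a central difference; differentiating $1/(1+e^{\beta u})$ at $u = 0$ gives $-\beta/4$ exactly.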
The following theorem shows that sigmoid loss function is noise tolerant.
Theorem 3.
Assume $\eta(x) < \frac{1}{2}, \ \forall x$. Then the sigmoid loss is noise tolerant under uniform noise. In addition, if the Bayes risk under the noise-free case is zero, then there exists a constant $\beta^*$ such that, for all $\beta \ge \beta^*$, risk minimization under the sigmoid loss is tolerant to nonuniform noise.
Proof.
First part of the theorem follows directly from Theorem 2 because the sigmoid loss satisfies the symmetry condition. We prove the second part below. Since the Bayes risk under the noise-free case is zero, the label of each feature vector $x$ is deterministic; denote it by $y_x$, and note that the minimizer $f^*$ of L-risk under the noise-free distribution satisfies $y_x f^*(x) > 0$ almost everywhere. For any $f$, the L-risk under the noisy case is given by
$R^\eta_L(f) = E_x\left[(1-\eta(x))\, L(f(x), y_x) + \eta(x)\, L(f(x), -y_x)\right] = E_x[\eta(x)] + E_x\left[(1-2\eta(x))\, L(f(x), y_x)\right],$
where we used the symmetry condition $L(f(x), -y_x) = 1 - L(f(x), y_x)$. Hence,
(5) $R^\eta_L(f^*) - R^\eta_L(f) = E_x\left[(1-2\eta(x))\left(L(f^*(x), y_x) - L(f(x), y_x)\right)\right].$
For establishing noise tolerance under nonuniform noise, we need to show that $R^\eta_L(f^*) - R^\eta_L(f) < 0, \ \forall f \ne f^*$. We define three sets $S_1$, $S_2$, $S_3$, where $S_1 = \{x : y_x f(x) \le 0\}$, $S_2 = \{x : y_x f(x) > y_x f^*(x) > 0\}$ and $S_3 = \{x : 0 < y_x f(x) \le y_x f^*(x)\}$.
Since we assumed that the Bayes risk (under the noise-free case) is zero, $y_x f^*(x) > 0$ almost everywhere. Note that the three sets above form a partition of the feature space. Now we can rewrite equation (5) as
(6) $R^\eta_L(f^*) - R^\eta_L(f) = \left(\int_{S_1} + \int_{S_2} + \int_{S_3}\right) (1-2\eta(x))\left(L(f^*(x), y_x) - L(f(x), y_x)\right) p(x)\, dx.$
We observe the following.

The third term is less than or equal to zero always because, on $S_3$, we have $y_x f(x) \le y_x f^*(x)$ and hence $L(f^*(x), y_x) \le L(f(x), y_x)$, while $1 - 2\eta(x) > 0$.

The first integral is over $S_1$ where we have $y_x f(x) \le 0$ and hence $L(f(x), y_x) \ge \frac{1}{2} > L(f^*(x), y_x)$. Since $1-2\eta(x) > 0$, the integral has negative value for all $\beta$. The value of this integral decreases with increasing $\beta$. As $\beta \to \infty$, the integral becomes $-\rho$, where $\rho \ge \frac{1}{2}\int_{S_1} (1-2\eta(x))\, p(x)\, dx$. We have $\rho$ strictly greater than zero, because if $f$ is not the optimal classifier then $P(S_1) > 0$.

The second integral is over $S_2$, where $y_x f(x) > y_x f^*(x) > 0$ and hence $L(f^*(x), y_x) > L(f(x), y_x)$. This integral is always positive and, as $\beta \to \infty$, both loss values go to zero on $S_2$, so the limit of the integral is zero.

Thus as $\beta \to \infty$, the limit of the sum of the first two terms on the RHS of equation (6) is $-\rho < 0$. Hence there exists a $\beta^*$ such that for all $\beta \ge \beta^*$, the sum of the first two integrals is negative. The third term on the RHS of equation (6) is always nonpositive. This shows that for all $\beta \ge \beta^*$, $R^\eta_L(f^*) < R^\eta_L(f), \ \forall f \ne f^*$, and this completes the proof. ∎
Theorem 3 shows that if we take a sufficiently large value of the parameter $\beta$, then the sigmoid loss is noise tolerant under nonuniform noise also. This is so even though the global minimum of risk under the sigmoid loss, in the noise-free case, is greater than zero. (But we assumed that the Bayes risk under the noise-free case is zero.) What this means is that we need the loss function (as a function of the variable $y f(x)$) to be sufficiently steep at the origin, so that it well approximates the 0-1 loss, for us to get noise tolerance. We also note here that the value of $\beta$, which may be problem dependent, can be fixed through cross validation in practice.
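The role of $\beta$ as a steepness parameter can also be seen numerically: as $\beta$ grows, the sigmoid loss at a fixed point approaches the corresponding 0-1 loss value (the loss form used below is our assumption):

```python
import math

def sigmoid_loss(fx, y, beta):
    return 1.0 / (1.0 + math.exp(beta * y * fx))

margin = 0.25   # a point classified correctly, but with a small margin
for beta in [1.0, 10.0, 100.0]:
    correct = sigmoid_loss(margin, 1, beta)     # 0-1 loss would give 0 here
    wrong = sigmoid_loss(-margin, 1, beta)      # 0-1 loss would give 1 here
    assert 0.0 < correct < 1.0 and 0.0 < wrong < 1.0

# For large beta the sigmoid loss is nearly the 0-1 loss:
assert sigmoid_loss(margin, 1, 100.0) < 1e-5
assert sigmoid_loss(-margin, 1, 100.0) > 1.0 - 1e-5
```

This illustrates why a large $\beta$ is needed: only a steep loss makes the favorable first integral in the proof dominate.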
4.2 Ramp Loss
Ramp loss with a parameter $\beta > 0$ is defined by
(7) $L_{\text{ramp}}(f(x), y) = (1 - \beta y f(x))_+ - (-1 - \beta y f(x))_+,$
where $(a)_+$ denotes the positive part of $a$, which is given by $(a)_+ = \max(0, a)$. The following lemma shows that the ramp loss function satisfies the symmetry property needed in Theorem 2.
Lemma 4.
Ramp loss described in Eq. (7) satisfies $L_{\text{ramp}}(f(x), 1) + L_{\text{ramp}}(f(x), -1) = 2, \ \forall x, \ \forall f$.
Proof.
We have, writing $z = \beta f(x)$,
$L_{\text{ramp}}(f(x), 1) + L_{\text{ramp}}(f(x), -1) = (1-z)_+ - (-1-z)_+ + (1+z)_+ - (-1+z)_+.$
For $|z| \le 1$, this equals $(1-z) + (1+z) = 2$; for $z > 1$, it equals $(1+z) - (z-1) = 2$; and the case $z < -1$ follows by symmetry. Hence the sum equals 2 for every $z$, which completes the proof. ∎
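The case analysis in the lemma can be checked numerically, assuming the difference-of-hinges form of the ramp loss used here:

```python
def pos(a):
    """Positive part (a)_+ = max(0, a)."""
    return max(0.0, a)

def ramp_loss(fx, y, beta=1.0):
    """Ramp loss written as a difference of two hinge-type terms."""
    z = beta * y * fx
    return pos(1.0 - z) - pos(-1.0 - z)

# Lemma 4: L(f(x), 1) + L(f(x), -1) is the same constant (2 here) for all f(x),
# covering all three cases |z| <= 1, z > 1 and z < -1.
for fx in [-5.0, -1.0, -0.3, 0.0, 0.8, 4.2]:
    assert abs(ramp_loss(fx, 1) + ramp_loss(fx, -1) - 2.0) < 1e-12
```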
The above lemma shows that the ramp loss satisfies our symmetry condition and hence, by Theorem 2, is noise-tolerant to uniform noise. It has been empirically observed that ramp loss is more robust to noise than SVM (Wu and Liu, 2007; Xu et al., 2006; Brooks, 2011). Our results provide a theoretical justification for this.
The following theorem shows that ramp loss can be noise-tolerant to nonuniform noise also if its parameter is sufficiently high.
Theorem 5.
Assume $\eta(x) < \frac{1}{2}, \ \forall x$. Then the ramp loss is noise tolerant under uniform noise. Also, if the Bayes risk under the noise-free case is zero, there exists a constant $\beta^*$ such that, for all $\beta \ge \beta^*$, risk minimization under the ramp loss is tolerant to nonuniform noise.
Proof.
Lemma 4 shows that the ramp loss satisfies the symmetry property. Thus, Theorem 2 directly implies that ramp loss is noise tolerant under uniform noise. Proof of noise tolerance under nonuniform noise is similar to proof of Theorem 3 and it follows from the same decomposition of feature space. We omit the details. ∎
4.3 Probit Loss
Probit loss (Zheng and Liu, 2012; McAllester and Keshet, 2011) with a parameter $\beta > 0$ is defined by
(8) $L_{\text{probit}}(f(x), y) = 1 - \Phi(\beta y f(x)),$
where $\Phi(\cdot)$ is the cumulative distribution function (CDF) of the standard Normal distribution.
Lemma 6.
Probit loss described in Eq. (8) satisfies $L_{\text{probit}}(f(x), 1) + L_{\text{probit}}(f(x), -1) = 1, \ \forall x, \ \forall f$.
Proof.
We have $L_{\text{probit}}(f(x), 1) + L_{\text{probit}}(f(x), -1) = (1 - \Phi(\beta f(x))) + (1 - \Phi(-\beta f(x))) = 1$, because $\Phi(-a) = 1 - \Phi(a)$. Hence the probit loss satisfies the symmetry property. ∎
Theorem 7.
Assume $\eta(x) < \frac{1}{2}, \ \forall x$. Then the probit loss is noise tolerant under uniform noise. Also, if the Bayes risk under the noise-free case is zero, there exists a constant $\beta^*$ such that, for all $\beta \ge \beta^*$, risk minimization under the probit loss is tolerant to nonuniform noise.
Proof.
Lemma 6 shows that the probit loss satisfies the symmetry property. Thus, Theorem 2 directly implies that probit loss is noise tolerant under uniform noise. Proof of noise tolerance under nonuniform noise is similar to proof of Theorem 3 and it follows from the same decomposition of feature space. We omit the details. ∎
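The symmetry identity underlying Lemma 6 can be verified numerically, computing $\Phi$ from the error function (the loss form $1 - \Phi(\beta y f(x))$ is our assumed reading of Eq. (8)):

```python
import math

def phi(a):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

def probit_loss(fx, y, beta=1.0):
    return 1.0 - phi(beta * y * fx)

# Lemma 6 rests on Phi(-a) = 1 - Phi(a):
for fx in [-2.0, -0.4, 0.0, 1.1, 3.0]:
    assert abs(probit_loss(fx, 1) + probit_loss(fx, -1) - 1.0) < 1e-12
```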
4.4 Classconditional Noise
So far, we have considered only the cases of uniform and nonuniform noise. A special case of nonuniform noise is class conditional noise, where the noise rate is the same for all feature vectors from one class. This is an interesting special case of label noise (Stempfel et al., 2007; Scott et al., 2013; Natarajan et al., 2013). In the results proved so far, we need the Bayes risk under the noise-free case to be zero for a loss function to be tolerant to nonuniform noise. Since class conditional noise is a very special case of nonuniform noise, it is interesting to ask whether this condition can be relaxed.
Under class conditional noise we have $\eta(x) = \eta_{+1}$ for all $x$ of class $+1$ and $\eta(x) = \eta_{-1}$ for all $x$ of class $-1$. Suppose we know $\eta_{+1}$ and $\eta_{-1}$. Note that this does not make the problem trivial because we still do not know which are the examples with wrong labels. It may be possible to estimate the noise rates from the noisy training data using, e.g., the method in Scott et al. (2013). In such a situation, we can ask how to make risk minimization noise tolerant. Suppose we have a loss function that satisfies our symmetry condition. The following theorem shows how we can learn the global minimizer of L-risk under the noise-free case given access only to data corrupted with class conditional label noise.
Theorem 8.
Assume $\eta_{+1} + \eta_{-1} < 1$. Assume the loss function $L$ satisfies, for some positive constant $K$, $L(f(x), 1) + L(f(x), -1) = K, \ \forall x, \ \forall f$. We define the loss function $\tilde{L}$ as
$\tilde{L}(f(x), 1) = L(f(x), 1) \quad \& \quad \tilde{L}(f(x), -1) = \gamma\, L(f(x), -1),$
where $\gamma = \frac{1 - \eta_{+1} + \eta_{-1}}{1 + \eta_{+1} - \eta_{-1}}$. Then the minimizer of risk with loss function $\tilde{L}$ under class conditional noise is the same as the minimizer of risk with loss $L$ under noise-free data.
Proof.
For any $f$, under no noise, we have
$R_L(f) = E_{x,y}\left[L(f(x), y)\right].$
Under class conditional noise, we use the loss function $\tilde{L}$, and hence the risk under the noisy case is
$R^\eta_{\tilde{L}}(f) = E_{x,y}\left[(1 - \eta_y)\, \tilde{L}(f(x), y) + \eta_y\, \tilde{L}(f(x), -y)\right].$
Using the symmetry condition in the form $L(f(x), -y) = K - L(f(x), y)$, the integrand equals $\left(1 - \eta_{+1} - \eta_{+1}\gamma\right) L(f(x), 1) + \eta_{+1}\gamma K$ when $y = 1$, and $\left((1 - \eta_{-1})\gamma - \eta_{-1}\right) L(f(x), -1) + \eta_{-1} K$ when $y = -1$. It is easy to see that, with the value of $\gamma$ given in the theorem statement, we have $1 - \eta_{+1} - \eta_{+1}\gamma = (1 - \eta_{-1})\gamma - \eta_{-1} = \frac{1 - \eta_{+1} - \eta_{-1}}{1 + \eta_{+1} - \eta_{-1}} =: c$. Using this in the above, we get
$R^\eta_{\tilde{L}}(f) = c\, R_L(f) + K\left(\eta_{+1}\gamma\, P(y = 1) + \eta_{-1}\, P(y = -1)\right).$
Hence,
$R^\eta_{\tilde{L}}(f) - R^\eta_{\tilde{L}}(f') = c\left(R_L(f) - R_L(f')\right), \ \forall f, f'.$
As $\eta_{+1} + \eta_{-1} < 1$, we have $c > 0$. Thus $f^*$, which is the global minimizer of risk with loss function $L$ under noise-free data, is also the global minimizer of risk under class conditional noise with loss function $\tilde{L}$. ∎
The above theorem allows us to construct a new loss function, by suitably reweighting the given loss function using the noise rates, so that minimizing risk under the noisy case with the new loss would result in learning the minimizer of risk with the original loss under noise-free data.
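As an illustration of such a reweighting, the sketch below takes the sigmoid loss (symmetry constant $K = 1$) and scales the negative-class loss by a factor $\gamma$. The particular constants below are our own choice of one reweighting that provably works (the exact constants of the theorem are not fully recoverable from our copy); the check verifies that the expected noisy loss at any point is an affine function of the clean loss with positive slope, so both risks share minimizers:

```python
import math

def sigmoid_loss(fx, y, beta=1.0):
    return 1.0 / (1.0 + math.exp(beta * y * fx))   # symmetric with constant K = 1

eta_p, eta_m = 0.3, 0.1                              # assumed known class flip rates
gamma = (1 - eta_p + eta_m) / (1 + eta_p - eta_m)    # weight on negative-class loss
c = (1 - eta_p - eta_m) / (1 + eta_p - eta_m)        # resulting positive slope

def tilde_loss(fx, y):
    return sigmoid_loss(fx, y) if y == 1 else gamma * sigmoid_loss(fx, y)

def noisy_expectation(fx, y):
    """Expected reweighted loss at a point whose true label is y."""
    eta = eta_p if y == 1 else eta_m
    return (1 - eta) * tilde_loss(fx, y) + eta * tilde_loss(fx, -y)

# E[tilde_L under noise] = c * L(f(x), y) + an f-independent constant,
# so minimizing the noisy risk recovers the clean-risk minimizer.
for y, const in [(1, eta_p * gamma), (-1, eta_m)]:   # constants use K = 1
    for fx in [-2.0, -0.5, 0.0, 0.9, 3.0]:
        assert abs(noisy_expectation(fx, y) - (c * sigmoid_loss(fx, y) + const)) < 1e-12
```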
5 Experiments
In this section, we present empirical results on both synthetic and real data sets to illustrate the noise tolerance properties of different loss functions. Our theoretical results have shown that the 0-1 loss, sigmoid loss and ramp loss are all noise tolerant. We compare the performance of risk minimization with these noise tolerant losses against SVM, which is a hinge loss based risk minimization approach. The square loss has also been shown to be noise tolerant under uniform label noise (Manwani and Sastry, 2013). Hence we also compare with the square loss. The experimental results are shown on synthetic datasets and real world datasets from the UCI ML repository (Bache and Lichman, 2013).
5.1 Dataset Description
We used 5 synthetic 2-class classification problems. Among these, 4 problems are linear and 1 is nonlinear. All synthetic problems have separable classes under the noise-free case. We consider both two dimensional data (so that we can geometrically see the performance) as well as higher dimensional data. Below, we describe each of the synthetic problems by describing how the labeled training data is generated under the noise-free case. We add label noise as needed to generate noisy training sets. In the description below we denote by $U(S)$ the uniform density function with support set $S$.

Synthetic Dataset 1 : Uniform Distribution In , we sample iid points from . We label these samples using the following separating hyperplane.
where the normal vector is a 10-dimensional vector of 1's.

Synthetic Dataset 2 : Asymmetry and Nonuniformity Let and be two mixture density functions in defined as follows
We sample 2000 iid points each from and . We label these points using the following hyperplane

Synthetic Dataset 3 : Asymmetry and Imbalance Let and be two density functions in defined as follows
We sample points independently from and points independently from distribution . We label these points using the following hyperplane

Synthetic Dataset 4 : Asymmetry and Imbalance in High Dimension Let and be two uniform densities defined in as follows
We sample and points independently from and respectively. We label these points using the following hyperplane.
where the normal is the standard basis vector whose first element is 1 and the rest are all 0.

Synthetic Dataset 5 : 2×2 Checker Board We use a uniform density defined as follows
We sample 4000 points independently from this density. We classify these points using a checker board rule based on the first and the second dimensions of the feature vector.
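Since the exact labeling rule is elided in our copy, here is one common way to generate such a 2×2 checker board data set (the cell-parity rule below is our assumption, not necessarily the paper's exact construction):

```python
import random

def make_checkerboard(n=4000, cells=2, seed=0):
    """Uniform samples on [0, cells]^2, labeled by the parity of the cell indices."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x1, x2 = rng.uniform(0, cells), rng.uniform(0, cells)
        y = 1 if (int(x1) + int(x2)) % 2 == 0 else -1   # alternate labels per cell
        data.append(((x1, x2), y))
    return data

data = make_checkerboard()
assert len(data) == 4000
assert all(y in (1, -1) for _, y in data)
```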
Apart from the above synthetic data sets we also consider 5 data sets from the UCI ML repository described in Table 1.
Dataset  # Points  Dimension  Class Dist. 

Ionosphere  351  34  225,126 
Balance  576  4  288,288 
Vote  435  15  267,168 
Heart  270  13  120,150 
WBC  683  10  239,444 
5.2 Experimental Setup
We implemented all risk minimization algorithms in MATLAB. There is no general purpose algorithm for minimizing empirical risk under the 0-1 loss. We use the method based on a team of continuous action-set learning automata (CALA) (Sastry et al., 2010). It is known that if the step-size parameter is sufficiently small, the CALA-team based algorithm converges to the global minimum of risk in the linear classifier case (Sastry et al., 2010). In our simulations, we keep the step-size parameter correspondingly small. Since this algorithm takes rather long to converge, we show results for risk minimization with the 0-1 loss only on Synthetic Dataset 1 and on the Breast Cancer dataset.
For risk minimization with ramp loss and sigmoid loss for learning linear classifiers, we used simple gradient descent with a decreasing step size and a momentum term. We use an incremental version; that is, we keep updating the linear classifier after processing each example and we choose the next example randomly from the training data. The gradient descent is run with multiple random restarts and we keep the best final value. The value of the parameter in the loss function is chosen separately for uniform noise and for nonuniform (or class conditional) noise. In all cases we report the results with the best value of the parameter.
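A minimal sketch of this incremental procedure for the sigmoid loss follows (the experiments used MATLAB; this Python version and all hyperparameter values here are illustrative, not the ones used in the paper):

```python
import math, random

def safe_sigmoid(z):
    """Numerically stable 1 / (1 + exp(z))."""
    if z >= 0:
        ez = math.exp(-z)
        return ez / (1.0 + ez)
    return 1.0 / (1.0 + math.exp(z))

def train_sigmoid_linear(data, beta=3.0, epochs=50, step0=0.1, momentum=0.5, seed=0):
    """Incremental gradient descent on the empirical sigmoid-loss risk of a
    linear classifier f(x) = w.x + b, with decreasing step size and momentum."""
    rng = random.Random(seed)
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0
    vw, vb = [0.0] * d, 0.0
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):   # random order over the data
            t += 1
            step = step0 / (1.0 + 0.001 * t)       # decreasing step size
            m = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            s = safe_sigmoid(beta * m)             # loss value at this example
            g = -beta * s * (1.0 - s) * y          # d(loss)/d f(x), by chain rule
            for i in range(d):
                vw[i] = momentum * vw[i] - step * g * x[i]
                w[i] += vw[i]
            vb = momentum * vb - step * g
            b += vb
    return w, b

# A tiny separable demo: positives have x1 > 0, negatives x1 < 0.
demo = [((1.0, 0.5), 1), ((2.0, -0.3), 1), ((1.5, 1.0), 1),
        ((-1.0, 0.2), -1), ((-2.0, -0.5), -1), ((-1.5, 0.8), -1)]
w, b = train_sigmoid_linear(demo)
assert all(y * (w[0] * x[0] + w[1] * x[1] + b) > 0 for x, y in demo)
```

In a real run one would wrap this in multiple random restarts and keep the solution with the lowest empirical risk, as described above.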
We illustrate learning of nonlinear classifiers only with minimizing risk under the ramp loss. The regularized (empirical) risk under ramp loss can be written as a difference of two convex functions. This decomposition leads to an efficient minimization algorithm using a DC (difference of convex) program (An and Tao, 1997; Wu and Liu, 2007). The DC algorithm for learning a nonlinear classifier by minimizing regularized risk under ramp loss is explained in Appendix A. This is the method (described in Algorithm 2) we used to learn nonlinear classifiers. We compared the ramp loss based classifier with SVM (based on hinge loss) for nonlinear problems.
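The outer structure of such a DC (CCCP-style) procedure can be sketched as follows for a linear classifier: each outer iteration linearizes the concave part of the ramp loss at the current iterate and then approximately solves the resulting convex weighted-hinge problem by subgradient descent. This is a simplified sketch, not the paper's Algorithm 2, and all hyperparameters are illustrative:

```python
def cccp_ramp_linear(data, beta=1.0, reg=0.01, outer=10, inner=200, step=0.05):
    """DC / CCCP sketch for regularized ramp-loss risk of a linear classifier.
    Ramp(z) = (1-z)_+ - (-1-z)_+ with z = beta*y*f(x); the concave part
    -(-1-z)_+ is replaced by its linearization at the current solution."""
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0

    def margin(x, y):
        return beta * y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

    n = float(len(data))
    for _ in range(outer):
        # Subgradient of the concave part w.r.t. z is +1 exactly when z < -1.
        s = [1.0 if margin(x, y) < -1.0 else 0.0 for x, y in data]
        for _ in range(inner):
            gw, gb = [reg * wi for wi in w], 0.0
            for (x, y), si in zip(data, s):
                z = margin(x, y)
                gz = (-1.0 if z < 1.0 else 0.0) + si   # d/dz [(1-z)_+ + si*z]
                for i in range(d):
                    gw[i] += gz * beta * y * x[i] / n
                gb += gz * beta * y / n
            w = [wi - step * gi for wi, gi in zip(w, gw)]
            b -= step * gb
    return w, b

demo = [((1.0, 0.2), 1), ((2.0, -0.5), 1), ((1.5, 0.8), 1),
        ((-1.0, 0.3), -1), ((-2.0, -0.4), -1), ((-1.2, -0.9), -1)]
w, b = cccp_ramp_linear(demo)
assert all(y * (w[0] * x[0] + w[1] * x[1] + b) > 0 for x, y in demo)
```

The kernelized version used in the paper has the same outer loop, with the inner convex problem solved in the kernel expansion coefficients.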
To learn the SVM classifier, we used the LibSVM code (Chang and Lin, 2011). We ran experiments with different values of the SVM regularization parameter and the results reported are those with the best value.
In the previous subsection, we explained how the noise-free data is generated for the synthetic problems. For the benchmark data sets we take the given data as noise free. We then randomly add uniform, nonuniform or class conditional (CC) noise. For the uniform noise case we vary the noise rate over a range of values. For class conditional noise we use a different rate for each of the two classes. We incorporate nonuniform noise as follows. For every example, the probability of flipping the label is based on which quadrant (with respect to its first two features) the example falls in; we use a different rate for each of the four quadrants, the same rates being used for all problems.
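The noise injection described above can be sketched as follows (the actual noise rates used in the experiments are not recoverable from our copy, so they appear here as parameters):

```python
import random

def add_uniform_noise(labels, eta, rng):
    """Flip each label independently with probability eta."""
    return [-y if rng.random() < eta else y for y in labels]

def add_cc_noise(labels, eta_pos, eta_neg, rng):
    """Class conditional noise: the flip rate depends only on the true class."""
    return [-y if rng.random() < (eta_pos if y == 1 else eta_neg) else y
            for y in labels]

def add_quadrant_noise(points, labels, rates, rng):
    """Nonuniform noise: flip rate depends on the quadrant of the first two features."""
    out = []
    for x, y in zip(points, labels):
        q = (0 if x[0] >= 0 else 1) + (0 if x[1] >= 0 else 2)  # quadrant index 0..3
        out.append(-y if rng.random() < rates[q] else y)
    return out

rng = random.Random(0)
assert add_uniform_noise([1, -1, 1], 0.0, rng) == [1, -1, 1]     # eta = 0: unchanged
assert add_uniform_noise([1, -1, 1], 1.0, rng) == [-1, 1, -1]    # eta = 1: all flipped
```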
For each problem, we randomly split the data into training and test sets (within the training data, a part is held out for validation). The training data is then corrupted with label noise as needed. We determine the accuracy of the learnt classifier on the test set, which is noise-free. In each case, this process of randomly choosing training and test sets is repeated a number of times. We report the average (and standard deviation) of the accuracy of the different methods for different noise rates.
5.3 Simulation Results on Synthetic Problems
In Synthetic Dataset 1, classes are symmetric with uniform class conditional densities and the examples from the two classes are balanced. As can be seen from Figure 1, the accuracies of the 0-1 loss, sigmoid loss and ramp loss drop only marginally with increasing uniform noise, but the accuracy of SVM drops severely. Under nonuniform noise, sigmoid loss, ramp loss and the 0-1 loss perform much better than SVM. Under class conditional noise, SVM's accuracy drops substantially, whereas all the noise-tolerant losses retain high accuracy.
In Synthetic Dataset 2, we have balanced but asymmetric classes. In addition, we have nonuniform class conditional densities. Figure 2 presents classifiers learnt using sigmoid loss, ramp loss, hinge loss and squared error loss on Synthetic Dataset 2 with 10% uniform label noise. We see that sigmoid loss and ramp loss based risk minimization approaches accurately capture the true classifier. On the other hand, SVM (hinge loss) and the squared error based approach fail to learn the true classifier in presence of label noise. As can be seen from Figure 3, even under moderate uniform noise the accuracy of SVM drops sharply, while sigmoid loss and ramp loss retain high accuracy even at high noise rates. Also under nonuniform noise and class conditional noise, the accuracies of sigmoid loss and ramp loss are much higher than that of SVM. This clearly brings out the noise tolerance of risk minimization with sigmoid loss or ramp loss when compared to the performance of SVM.
In Synthetic Dataset 3, we have an imbalanced set of training examples and asymmetric class regions. But here, we have uniform class conditional densities. Figure 4 shows classifiers learnt using sigmoid loss, ramp loss, hinge loss and squared error loss on Synthetic Dataset 3 with class conditional label noise. Here again, we see that sigmoid loss and ramp loss based approaches correctly find the true classifier, whereas hinge loss and squared error loss based approaches fail to learn it. As can be seen from Figure 5, the accuracy of SVM keeps decreasing as the rate of uniform noise increases, while the accuracies of sigmoid loss and ramp loss stay high even at high noise rates. Under nonuniform noise and class conditional noise also, both sigmoid loss and ramp loss outperform SVM.
In Synthetic Dataset 4, we have imbalanced, asymmetric classes in higher dimensions. As can be seen from Figure 6, the performance of the noise-tolerant loss functions stays good even in these higher dimensions. The figure also shows that the SVM method is not robust to label noise and its accuracy keeps dropping when there is label noise.
Noise Rate  Kernel  SVM  Ramp Loss

0%  quadratic  99.61±0.18  99.6±0.2
Uni. 15%  quadratic  90.26±3.9  99.28±0.32
Uni. 30%  quadratic  80.97±4.7  98.5±0.8
0%  Gaussian  98.93±0.6  98.9±0.6
Uni. 15%  Gaussian  96.3±0.6  99.06±0.9
Uni. 30%  Gaussian  93.6±1.7  96.3±1.1
Figure 7 shows the classifiers learnt using SVM and ramp loss on Synthetic Dataset 5 (2×2 checker board) with 30% label noise. A quadratic kernel is used in both approaches to capture the nonlinear classification boundary. We see that the ramp loss based classifier accurately captures the true classifier, while SVM completely misses it. We can see in Table 2 that, on the checker board data, accuracy of SVM with the quadratic kernel drops from 99% on noise-free data to 90% under 15% noise and 80% under 30% noise. Ramp loss shows impressive noise tolerance with the quadratic kernel, retaining 98.5% accuracy even under 30% noise. SVM with the Gaussian kernel achieves better noise tolerance than SVM with the quadratic kernel on the 2×2 checker board data: its accuracy drops to 93.6% (against 80.97% with the quadratic kernel) under 30% noise. Ramp loss still performs better, retaining 96.3% accuracy under 30% noise.
5.3.1 Results on UCI Datasets
We now discuss the performance on the 5 benchmark data sets from the UCI ML repository. On the Ionosphere data the accuracy achieved by a linear classifier (even in the noise-free case) is high. We compare risk minimization with sigmoid and ramp loss on this data against the performance of SVM under uniform noise. On the Ionosphere dataset, as can be seen from Table 3, the accuracy of SVM falls steadily with increasing noise, whereas ramp loss degrades much more gracefully. Sigmoid loss performs similar to ramp loss.
Noise Rate  Ramp  Sigmoid  SVM  Sq.Err.

0%  
Uni 10%  
Uni 20%  
Uni 30%  
Uni 40% 
Dataset  Noise Rate  SVM  Ramp Loss

Balance  0%  99.30±1.16  99.30±1.2
Uni 15%  96.06±2.4  97.7±1.17
Uni 30%  82.11±1.2  92.1±7.4
Heart  0%  82.58±7.82  83.33±4.56
Uni 15%  80.6±8.85  84.07±7.10
Uni 30%  77.36±9.31  79.10±9.94
Vote  0%  94.49±1.64  94.49±1.64
Uni 15%  90.67±4.4  90.36±4.2
Uni 30%  81.2±5.8  85.32±6.7
On the Balance, Heart and Vote datasets, we compare SVM and ramp loss using the Gaussian kernel under uniform noise. The results on these three datasets are given in Table 4. We can see that, on the Balance dataset, the accuracy of SVM drops to 82% under 30% noise from 99% on noise-free data, while ramp loss retains 92% accuracy. On the Heart dataset, ramp loss performs better than SVM. On the Vote dataset, the performance of ramp loss is marginally better.
Noise Rate  Ramp  Sigmoid  SVM  Sq.Err.

0%  
Uniform 10%  
Uniform 20%  
Uniform 30%  
Uniform 40%  
Non Uniform  
CC (40%, 20%)
The Breast Cancer data set has almost separable classes and a linear classifier performs well. On the Breast Cancer data set we compare the 0-1 loss, sigmoid loss and ramp loss with SVM (hinge loss). On this problem, as can be seen in Table 5, the accuracies of the CALA algorithm (0-1 loss), sigmoid loss and ramp loss degrade only gradually with increasing uniform noise, whereas the accuracy of SVM drops much more sharply. Under nonuniform noise and class conditional noise, risk minimization under the 0-1 loss, sigmoid loss and ramp loss performs better than SVM.
All the results presented here amply demonstrate the noise tolerance of risk minimization under sigmoid loss and ramp loss, which satisfy our theoretical conditions for noise tolerance. In contrast, the SVM method does not exhibit much robustness to label noise. Using synthetic data sets we have demonstrated that SVM is particularly vulnerable to label noise under certain kinds of geometry of pattern classes. With a balanced training set and symmetric classes with uniform densities, SVM performs moderately well under noise. But if we have intra-class nonuniform density or an imbalanced training set along with asymmetric class regions, then the accuracy of SVM drops severely when the training data are corrupted with label noise. This is demonstrated in two dimensions through problems 2 and 3, and in higher dimensions through problem 4. On the other hand, risk minimization with the 0-1 loss, ramp loss and sigmoid loss exhibits impressive noise tolerance, as can be seen from our results on synthetic as well as real data sets.
6 Conclusions and Future Work
In this paper, we analyzed the noise tolerance of risk minimization, which is a generic method for learning classifiers. We derived sufficient conditions on a loss function for risk minimization under that loss function to be noise tolerant under uniform and nonuniform label noise. It is known that the 0-1 loss is noise tolerant under uniform and nonuniform noise (Manwani and Sastry, 2013). The result we presented here is a generalization of that result. Our result shows that sigmoid loss, ramp loss and probit loss are all noise tolerant under uniform label noise. We also presented results to show that risk minimization under these loss functions can be noise tolerant to nonuniform label noise also if a parameter in the loss function is sufficiently high. Our theoretical results provide justification for the known superiority of the ramp loss over SVM in empirical studies. We also generalized a result on noise tolerance of the 0-1 loss under class conditional label noise, proved earlier, to the case of any loss function that satisfies our symmetry condition. This shows that sigmoid loss, ramp loss etc. can be used for noise robust learning of classifiers under class conditional noise.
Through extensive empirical studies we demonstrated the noise tolerance of sigmoid loss, ramp loss and the 0-1 loss, and also showed that the popular SVM method is not robust to label noise. We also showed specific types of class geometries in 2-class problems that make SVM sensitive to label noise.
All these noise tolerant losses are nonconvex, which makes the risk minimization harder. Risk minimization under the 0-1 loss is known to be hard. But the sigmoid loss, ramp loss etc. are continuous, and hence here we have used simple gradient descent for risk minimization under these loss functions. In general, though, such an approach would not be efficient for learning nonlinear classifiers under these losses. To do that, we have derived a DC program based risk minimization algorithm for ramp loss. For ramp loss, this approach naturally allows the use of kernel functions, making it easy to learn robust nonlinear classifiers.
We can extend the concept of noise tolerance by introducing a degree of noise tolerance. The degree of noise tolerance could be defined in terms of the difference between the misclassification probability of the classifier learnt from noisy data and that of the classifier learnt from noise-free data. The 0-1 loss, ramp loss and sigmoid loss have the highest degree of noise tolerance as the above difference is zero. Hence an interesting direction of work is to analyze different convex loss functions from the point of view of degree of noise tolerance.
Appendix A Regularized Empirical Risk Minimization under Ramp Loss using DC Program
Ramp loss can be written as a difference of two convex functions, since both $(1 - \beta y f(x))_+$ and $(-1 - \beta y f(x))_+$ are convex in $f(x)$.
For a nonlinear classifier parameterized as a kernel expansion, the regularized empirical risk under ramp loss is