diff writeup/aistats2011_submission.tex @ 604:51213beaed8b

draft of NIPS 2010 workshop camera-ready version
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Mon, 22 Nov 2010 14:52:33 -0500
parents eb6244c6d861
children
line wrap: on
line diff
--- a/writeup/aistats2011_submission.tex	Sun Oct 31 22:40:33 2010 -0400
+++ b/writeup/aistats2011_submission.tex	Mon Nov 22 14:52:33 2010 -0500
@@ -21,7 +21,7 @@
 \begin{document}
 
 \twocolumn[
-\aistatstitle{Deeper Learners Benefit More from Multi-Task and Perturbed Examples}
+\aistatstitle{Deep Learners Benefit More from Out-of-Distribution Examples}
 \runningtitle{Deep Learners for Out-of-Distribution Examples}
 \runningauthor{Bengio et al.}
 \aistatsauthor{Anonymous Authors}]
@@ -57,9 +57,7 @@
 
 %\vspace*{-2mm}
 \begin{abstract}
-  Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because
-they can be shared across tasks and examples from different but related 
-distributions, can yield even more benefits where there are more such levels of representation. The experiments are performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits). We show that a deep learner could not only {\em beat previously published results but also reach human-level performance}.
+  Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because they can be shared across tasks and examples from different but related distributions, can yield even more benefits. Comparative experiments were performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits), using both a multi-task setting and perturbed examples in order to obtain out-of-distribution examples. The results agree with the hypothesis and show that a deep learner did {\em beat previously published results and reach human-level performance}.
 \end{abstract}
 %\vspace*{-3mm}
 
@@ -74,7 +72,7 @@
 %\vspace*{-1mm}
 
 {\bf Deep Learning} has emerged as a promising new area of research in
-statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
+statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008-very-small,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
 Learning algorithms for deep architectures are centered on the learning
 of useful representations of data, which are better suited to the task at hand,
 and are organized in a hierarchy with multiple levels.
@@ -86,7 +84,7 @@
 of visual cortex) \citep{HonglakL2008}, and that they become more and
 more invariant to factors of variation (such as camera movement) in
 higher layers~\citep{Goodfellow2009}.
-Learning a hierarchy of features increases the
+It has been hypothesized that learning a hierarchy of features increases the
 ease and practicality of developing representations that are at once
 tailored to specific tasks, yet are able to borrow statistical strength
 from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
@@ -116,17 +114,17 @@
 in terms of unsupervised extraction
 of a hierarchy of features useful for classification. Each layer is trained
 to denoise its input, creating a layer of features that can be used as
-input for the next layer. Note that training a Denoising Auto-Encoder
+input for the next layer, forming a Stacked Denoising Auto-encoder (SDA).
+Note that training a Denoising Auto-encoder
 can actually be seen as training a particular RBM by an inductive
-principle different from maximum likelihood~\citep{ift6266-tr-anonymous}, % Vincent-SM-2010}, 
+principle different from maximum likelihood~\citep{Vincent-SM-2010}, 
 namely by Score Matching~\citep{Hyvarinen-2005,HyvarinenA2008}. 
 
 Previous comparative experimental results with stacking of RBMs and DAs
 to build deep supervised predictors had shown that they could outperform
-shallow architectures in a variety of settings (see~\citet{Bengio-2009}
-for a review), especially
+shallow architectures in a variety of settings, especially
 when the data involves complex interactions between many factors of 
-variation~\citep{LarochelleH2007}. Other experiments have suggested
+variation~\citep{LarochelleH2007,Bengio-2009}. Other experiments have suggested
 that the unsupervised layer-wise pre-training acted as a useful
 prior~\citep{Erhan+al-2010} that allows one to initialize a deep
 neural network in a much smaller region of parameter space, 
@@ -141,7 +139,7 @@
 (the multi-task setting), or examples coming from an overlapping
 but different distribution (images with different kinds of perturbations
 and noises, here). This is consistent with the hypotheses discussed
-at length in~\citet{Bengio-2009} regarding the potential advantage
+in~\citet{Bengio-2009} regarding the potential advantage
 of deep learning and the idea that more levels of representation can
 give rise to more abstract, more general features of the raw input.
 
@@ -196,25 +194,14 @@
 %\end{enumerate}
 
 Our experimental results provide positive evidence towards all of these questions,
-as well as {\em classifiers that reach human-level performance on 62-class isolated character
+as well as {\bf classifiers that reach human-level performance on 62-class isolated character
 recognition and beat previously published results on the NIST dataset (special database 19)}.
 To achieve these results, we introduce in the next section a sophisticated system
 for stochastically transforming character images and then explain the methodology,
 which is based on training with or without these transformed images and testing on 
-clean ones. We measure the relative advantage of out-of-distribution examples
-(perturbed or out-of-class)
-for a deep learner vs a supervised shallow one.
+clean ones. 
 Code for generating these transformations as well as for the deep learning 
 algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
-We also estimate the relative advantage for deep learners of training with
-other classes than those of interest, by comparing learners trained with
-62 classes with learners trained with only a subset (on which they
-are then tested).
-The conclusion discusses
-the more general question of why deep learners may benefit so much from 
-out-of-distribution examples. Since out-of-distribution data
-(perturbed or from other related classes) is very common, this conclusion
-is of practical importance.
 
 %\vspace*{-3mm}
 %\newpage
@@ -231,12 +218,12 @@
 improve character recognizers, this effort is on a large scale both
 in number of classes and in the complexity of the transformations, hence
 in the complexity of the learning task.
-The code for these transformations (mostly python) is available at 
-{\tt http://anonymous.url.net}. All the modules in the pipeline share
+The code for these transformations (mostly Python) is available at 
+{\tt http://anonymous.url.net}. All the modules in the pipeline (Figure~\ref{fig:transform}) share
 a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
 amount of deformation or noise introduced. 
 There are two main parts in the pipeline. The first one,
-from slant to pinch below, performs transformations. The second
+from thickness to pinch, performs transformations. The second
 part, from blur to contrast, adds different kinds of noise.
 More details can be found in~\citep{ift6266-tr-anonymous}.
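
To make the shared control parameter concrete, here is a minimal illustrative sketch (Python, like the released code, but it is not that code: the module names, the particular deformations and the scaling constants are hypothetical). Each module receives the image and the global complexity value and scales its deformation or noise accordingly; the real pipeline runs from thickness to pinch (deformations) and then from blur to contrast (noise), with every module exposing this same interface.

import numpy as np

def slant(img, complexity, rng):
    """Shear the rows horizontally; the shear amplitude grows with complexity."""
    h, _ = img.shape
    shear = complexity * (2.0 * rng.rand() - 1.0)        # uniform in [-complexity, complexity]
    out = np.zeros_like(img)
    for y in range(h):
        out[y] = np.roll(img[y], int(round(shear * (y - h / 2.0))))
    return out

def gaussian_noise(img, complexity, rng):
    """Additive pixel noise whose amplitude grows with complexity."""
    return np.clip(img + 0.3 * complexity * rng.randn(*img.shape), 0.0, 1.0)

# First the deformation modules, then the noise modules, as in the text above.
PIPELINE = [slant, gaussian_noise]

def perturb(img, complexity, rng=np.random):
    for module in PIPELINE:
        img = module(img, complexity, rng)
    return img
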
 
@@ -272,14 +259,15 @@
 
 Much previous work on deep learning had been performed on
 the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
-with 60~000 examples, and variants involving 10~000
-examples~\citep{Larochelle-jmlr-2009,VincentPLarochelleH2008}.
+with 60,000 examples, and variants involving 10,000
+examples~\citep{Larochelle-jmlr-2009,VincentPLarochelleH2008-very-small}.
 The focus here is on much larger training sets, from 10 times to 
 1000 times larger, and 62 classes.
 
 The first step in constructing the larger datasets (called NISTP and P07) is to sample from
 a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
-and {\bf OCR data} (scanned machine printed characters). Once a character
+and {\bf OCR data} (scanned machine printed characters). See
+Section~\ref{sec:sources} below for more details. Once a character
 is sampled from one of these sources (chosen randomly), the second step is to
 apply a pipeline of transformations and/or noise processes outlined in Section~\ref{s:perturbations}.
 
@@ -297,8 +285,9 @@
 %processing \citep{SnowEtAl2008} and vision
 %\citep{SorokinAndForsyth2008,whitehill09}. 
 AMT users were presented
-with 10 character images (from a test set) and asked to choose 10 corresponding ASCII
-characters. They were forced to choose a single character class (either among the
+with 10 character images (from a test set) on a screen
+and asked to label them.
+They were forced to choose a single character class (either among the
 62 or 10 character classes) for each image.
 80 subjects classified 2,500 images per (dataset, task) pair.
 Different human labelers sometimes provided a different label for the same
@@ -309,6 +298,7 @@
 
 %\vspace*{-3mm}
 \subsection{Data Sources}
+\label{sec:sources}
 %\vspace*{-2mm}
 
 %\begin{itemize}
@@ -320,10 +310,10 @@
 The dataset is composed of 814,255 digits and characters (upper and lower case), with hand-checked classifications,
 extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes 
 corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity. 
-The fourth partition (called $hsf_4$, 82587 examples), 
+The fourth partition (called $hsf_4$, 82,587 examples), 
 experimentally recognized to be the most difficult one, is the one recommended 
 by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
-for that purpose. We randomly split the remainder (731668 examples) into a training set and a validation set for
+for that purpose. We randomly split the remainder (731,668 examples) into a training set and a validation set for
 model selection. 
 The results reported by previous work on that dataset mostly concern only the digits.
 Here we use all the classes both in the training and testing phase. This is especially
@@ -339,14 +329,14 @@
 In order to have a good variety of sources we downloaded a large number of free fonts from:
 {\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
 % TODO: pointless to anonymize, it's not pointing to our work
-Including the operating system's (Windows 7) fonts, there is a total of $9817$ different fonts that we can choose uniformly from.
+Including an operating system's (Windows 7) fonts, there are a total of 9,817 different fonts that we can choose uniformly from.
 The chosen {\tt ttf} file is either used as input of the Captcha generator (see next item) or, by producing a corresponding image, 
 directly as input to our models.
 %\vspace*{-1mm}
 
 %\item 
 {\bf Captchas.}
-The Captcha data source is an adaptation of the \emph{pycaptcha} library (a python based captcha generator library) for 
+The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator) for 
 generating characters of the same format as the NIST dataset. This software is based on
 a random character class generator and various kinds of transformations similar to those described in the previous sections. 
 In order to increase the variability of the data generated, many different fonts are used for generating the characters. 
@@ -376,7 +366,7 @@
 
 %\item 
 {\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
-\{651668 / 80000 / 82587\} \{training / validation / test\} examples.
+\{651,668 / 80,000 / 82,587\} \{training / validation / test\} examples.
 %\vspace*{-1mm}
 
 %\item 
@@ -385,16 +375,19 @@
 For each new example to be generated, a data source is selected with probability $10\%$ from the fonts,
 $25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
 order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
-It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
+It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
+obtained from the corresponding NIST sets plus other sources.
 %\vspace*{-1mm}
 
 %\item 
 {\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
   except that we only apply
-  transformations from slant to pinch. Therefore, the character is
+  transformations from slant to pinch (see Fig.~\ref{fig:transform}(b-f)).
+  Therefore, the character is
   transformed but no additional noise is added to the image, giving images
   closer to the NIST dataset. 
-It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
+It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
+obtained from the corresponding NIST sets plus other sources.
 %\end{itemize}
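
The generation loop for P07 just described reduces to the following sketch (Python; samplers is assumed to be a dictionary mapping each source name to a function returning one raw (image, label) pair, and pipeline is the transformation pipeline described earlier; both interfaces are hypothetical).

import numpy as np

SOURCES = [("fonts", 0.10), ("captchas", 0.25), ("ocr", 0.25), ("nist", 0.40)]

def generate_example(samplers, pipeline, rng=np.random, max_complexity=0.7):
    names, probs = zip(*SOURCES)
    source = names[rng.choice(len(names), p=probs)]    # pick a data source
    img, label = samplers[source]()                    # draw one raw character from it
    for module in pipeline:                            # apply every module in order,
        complexity = rng.uniform(0.0, max_complexity)  # each with its own complexity in [0, 0.7]
        img = module(img, complexity, rng)
    return img, label

For NISTP the same loop applies, except that pipeline would contain only the deformation modules (slant to pinch), so the characters are transformed but no noise is added.
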
 
 \begin{figure*}[ht]
@@ -414,11 +407,11 @@
 \end{figure*}
 
 %\vspace*{-3mm}
-\subsection{Models and their Hyperparameters}
+\subsection{Models and their Hyper-parameters}
 %\vspace*{-2mm}
 
 The experiments are performed using MLPs (with a single
-hidden layer) and SDAs.
+hidden layer) and deep SDAs.
 \emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}
 
 {\bf Multi-Layer Perceptrons (MLP).}
@@ -427,7 +420,7 @@
 (making the use of SVMs computationally challenging because of their quadratic
 scaling behavior). Preliminary experiments on training SVMs (libSVM) with subsets of the training
 set small enough for the program to fit in memory yielded substantially worse results
-than those obtained with MLPs. For training on nearly a billion examples
+than those obtained with MLPs. For training on nearly a hundred million examples
 (with the perturbed data), the MLPs and SDAs are much more convenient than
 classifiers based on kernel methods.
 The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
@@ -441,7 +434,7 @@
 %\vspace*{-1mm}
 
 
-{\bf Stacked Denoising Auto-Encoders (SDA).}
+{\bf Stacked Denoising Auto-encoders (SDA).}
 Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
 can be used to initialize the weights of each layer of a deep MLP (with many hidden 
 layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006}, 
@@ -461,13 +454,15 @@
 compositions of simpler ones through a deep hierarchy).
 
 Here we chose to use the Denoising
-Auto-encoder~\citep{VincentPLarochelleH2008} as the building block for
+Auto-encoder~\citep{VincentPLarochelleH2008-very-small} as the building block for
 these deep hierarchies of features, as it is simple to train and
 explain (see Figure~\ref{fig:da}, as well as 
 the tutorial and code at {\tt http://deeplearning.net/tutorial}), 
 provides efficient inference, and yielded results
 comparable to or better than RBMs in a series of experiments
-\citep{VincentPLarochelleH2008}. During training, a Denoising
+\citep{VincentPLarochelleH2008-very-small}. It in fact corresponds to a Gaussian
+RBM trained by a Score Matching criterion~\citep{Vincent-SM-2010}.
+During training, a Denoising
 Auto-encoder is presented with a stochastically corrupted version
 of the input and trained to reconstruct the uncorrupted input,
 forcing the hidden units to represent the leading regularities in
@@ -478,7 +473,7 @@
 be used as inputs for training a second one, etc.
 After this unsupervised pre-training stage, the parameters
 are used to initialize a deep MLP, which is fine-tuned by
-the same standard procedure used to train them (see previous section).
+the same standard procedure used to train them (see above).
 The SDA hyper-parameters are the same as for the MLP, with the addition of the
 amount of corruption noise (we used the masking noise process, whereby a
 fixed proportion of the input values, randomly selected, are zeroed), and a
@@ -486,7 +481,7 @@
 from the same set as above). The fraction of corrupted inputs was selected
 among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
 of hidden layers but it was fixed to 3 based on previous work with
-SDAs on MNIST~\citep{VincentPLarochelleH2008}. The size of the hidden
+SDAs on MNIST~\citep{VincentPLarochelleH2008-very-small}. The size of the hidden
 layers was kept constant across hidden layers, and the best results
 were obtained with the largest value that we could experiment
 with given our patience, namely 1000 hidden units.
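
As a rough illustration of the procedure just described (greedy layer-wise pre-training of denoising auto-encoders with masking noise, whose parameters then initialize a deep MLP), here is a conceptual numpy sketch. It is not the authors' implementation (which follows the code at {\tt http://deeplearning.net/tutorial}); the tied weights, the initialization range, the full-batch update and the cross-entropy reconstruction loss are simplifying assumptions.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class DenoisingAutoencoder:
    def __init__(self, n_visible, n_hidden, corruption=0.2, lr=0.01, seed=0):
        self.rng = np.random.RandomState(seed)
        self.W = self.rng.uniform(-0.1, 0.1, (n_visible, n_hidden))  # tied encoder/decoder weights
        self.b = np.zeros(n_hidden)    # hidden-layer biases
        self.c = np.zeros(n_visible)   # reconstruction biases
        self.corruption, self.lr = corruption, lr

    def encode(self, X):
        return sigmoid(X @ self.W + self.b)

    def train_step(self, X):
        # Masking noise: a fixed fraction of randomly chosen inputs is zeroed.
        Xc = X * (self.rng.rand(*X.shape) >= self.corruption)
        H = self.encode(Xc)
        Z = sigmoid(H @ self.W.T + self.c)   # reconstruction, compared to the *clean* input X
        dZ = Z - X                           # cross-entropy gradient w.r.t. the pre-sigmoid output
        dH = (dZ @ self.W) * H * (1.0 - H)   # back-propagated through the encoder sigmoid
        self.W -= self.lr * (Xc.T @ dH + dZ.T @ H) / len(X)
        self.b -= self.lr * dH.mean(axis=0)
        self.c -= self.lr * dZ.mean(axis=0)

def pretrain_sda(X, layer_sizes=(1000, 1000, 1000), epochs=10):
    """Greedy layer-wise pre-training; each layer's features feed the next layer."""
    layers, inp = [], X
    for n_hidden in layer_sizes:
        da = DenoisingAutoencoder(inp.shape[1], n_hidden)
        for _ in range(epochs):
            da.train_step(inp)               # in practice, loop over mini-batches
        layers.append(da)
        inp = da.encode(inp)
    # The (W, b) of each layer then initialize a deep MLP with a softmax output
    # layer, which is fine-tuned by ordinary supervised gradient descent.
    return layers

Usage would look like pretrain_sda(train_images.reshape(len(train_images), -1)), with the images flattened to vectors.
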
@@ -532,10 +527,11 @@
 %%\vspace*{-1mm}
 The models are either trained on NIST (MLP0 and SDA0), 
 NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
-on either NIST, NISTP or P07, either on the 62-class task
-or on the 10-digits task. Training (including about half
+on either NIST, NISTP or P07 (regardless of the data set used for training),
+either on the 62-class task
+or on the 10-digits task. Training time (including about half
 for unsupervised pre-training, for SDAs) on the larger
-datasets takes around one day on a GPU-285.
+datasets is around one day on a GPU (GTX 285).
 Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
 comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
 SDA2), along with the previous results on the digits NIST special database
@@ -558,12 +554,13 @@
 In addition, as shown in the left of
 Figure~\ref{fig:improvements-charts}, the relative improvement in error
 rate brought by out-of-distribution examples is greater for the deep
-stacked SDA, and these
+SDA, and these
 differences with the shallow MLP are statistically and qualitatively
 significant. 
 The left side of the figure shows the improvement to the clean
 NIST test set error brought by the use of out-of-distribution examples
-(i.e. the perturbed examples examples from NISTP or P07). 
+(i.e., the perturbed examples from NISTP or P07),
+over the models trained exclusively on NIST (respectively SDA0 and MLP0).
 Relative percent change is measured by taking
 $100 \% \times$ (original model's error / perturbed-data model's error - 1).
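
For instance, with hypothetical numbers chosen only to illustrate the convention: an error rate of 1.5\% for the NIST-only model against 1.0\% for the perturbed-data model would be reported as a relative improvement of $100\% \times (1.5/1.0 - 1) = 50\%$.
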
 The right side of
@@ -576,7 +573,7 @@
 for the SDA.  Note that to simplify these multi-task experiments, only the original
 NIST dataset is used. For example, the MLP-digits bar shows the relative
 percent improvement in MLP error rate on the NIST digits test set 
-is $100\% \times$ (single-task
+as $100\% \times$ (single-task
 model's error / multi-task model's error - 1).  The single-task model is
 trained with only 10 outputs (one per digit), seeing only digit examples,
 whereas the multi-task model is trained with 62 outputs, with all 62
@@ -585,7 +582,8 @@
 comparing the correct digit class with the output class associated with the
 maximum conditional probability among only the digit class outputs.  The
 setting is similar for the other two target classes (lower case characters
-and upper case characters).
+and upper case characters). Note however that some types of perturbations
+(NISTP) help more than others (P07) when testing on the clean images.
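
The digit-restricted evaluation described above amounts to a masked argmax over the 62 outputs, sketched below (it assumes the 10 digit classes occupy the first 10 output indices, a layout assumption made here for illustration, not something stated in the paper).

import numpy as np

def digit_predictions(probs_62, n_digits=10):
    """probs_62: array of shape (n_examples, 62) of class probabilities from the multi-task model."""
    return np.argmax(probs_62[:, :n_digits], axis=1)   # ignore the 52 letter outputs
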
 %%\vspace*{-1mm}
 %\subsection{Perturbed Training Data More Helpful for SDA}
 %%\vspace*{-1mm}
@@ -651,12 +649,15 @@
 images (65\% relative improvement on NISTP) 
 but only marginally helped (5\% relative improvement on all classes) 
 or even hurt (10\% relative loss on digits)
-with respect to clean examples . On the other hand, the deep SDAs
+with respect to clean examples. On the other hand, the deep SDAs
 were significantly boosted by these out-of-distribution examples.
 Similarly, whereas the improvement due to the multi-task setting was marginal or
 negative for the MLP (from +5.6\% to -3.6\% relative change), 
 it was quite significant for the SDA (from +13\% to +27\% relative change),
 which may be explained by the arguments below.
+Since out-of-distribution data
+(perturbed or from other related classes) is very common, this conclusion
+is of practical importance.
 %\end{itemize}
 
 In the original self-taught learning framework~\citep{RainaR2007}, the
@@ -668,8 +669,12 @@
 We note instead that, for deep
 architectures, our experiments show that such a positive effect is accomplished
 even in a scenario with a \emph{large number of labeled examples},
-i.e., here, the relative gain of self-taught learning is probably preserved
-in the asymptotic regime.
+i.e., here, the relative gain of self-taught learning and
+out-of-distribution examples is probably preserved
+in the asymptotic regime. However, note that in our perturbation experiments
+(but not in our multi-task experiments), 
+even the out-of-distribution examples are labeled, unlike in the
+earlier self-taught learning experiments~\citep{RainaR2007}.
 
 {\bf Why would deep learners benefit more from the self-taught learning 
 framework and out-of-distribution examples}?