comparison writeup/aistats2011_cameraready.tex @ 627:249a180795e3

camera ready version
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Thu, 17 Mar 2011 08:15:34 -0400
1 %\documentclass[twoside,11pt]{article} % For LaTeX2e
2 \documentclass{article} % For LaTeX2e
3 \usepackage[accepted]{aistats2e_2011}
4 %\usepackage{times}
5 \usepackage{wrapfig}
6 \usepackage{amsthm}
7 \usepackage{amsmath}
8 \usepackage{bbm}
9 \usepackage[utf8]{inputenc}
10 \usepackage[psamsfonts]{amssymb}
11 %\usepackage{algorithm,algorithmic} % not used after all
12 \usepackage{graphicx,subfigure}
13 \usepackage[numbers]{natbib}
14
15 \addtolength{\textwidth}{10mm}
16 \addtolength{\evensidemargin}{-5mm}
17 \addtolength{\oddsidemargin}{-5mm}
18
19 %\setlength\parindent{0mm}
20
21 \begin{document}
22
23 \twocolumn[
24 \aistatstitle{Deep Learners Benefit More from Out-of-Distribution Examples}
25 \runningtitle{Deep Learners for Out-of-Distribution Examples}
26 \runningauthor{Bengio et al.}
27 \aistatsauthor{
28 Yoshua Bengio \and
29 Frédéric Bastien \and
30 Arnaud Bergeron \and
31 Nicolas Boulanger-Lewandowski \and
32 Thomas Breuel \and
33 Youssouf Chherawala \and
34 Moustapha Cisse \and
35 Myriam Côté \and
36 Dumitru Erhan \and
37 Jeremy Eustache \and
38 Xavier Glorot \and
39 Xavier Muller \and
40 Sylvain Pannetier Lebeuf \and
41 Razvan Pascanu \and
42 Salah Rifai \and
43 Francois Savard \and
44 Guillaume Sicard
45 \vspace*{5mm}}]
46 \aistatsaddress{Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
47 %\date{{\tt bengioy@iro.umontreal.ca}, Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
48 %\jmlrheading{}{2010}{}{10/2010}{XX/2011}{Yoshua Bengio et al}
49 %\editor{}
50
51 %\makeanontitle
52 %\maketitle
53
54 %{\bf Running title: Deep Self-Taught Learning}
55
56 \vspace*{5mm}
57 \begin{abstract}
58 Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because they can be shared across tasks and examples from different but related distributions, can yield even more benefits. Comparative experiments were performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits), using both a multi-task setting and perturbed examples in order to obtain out-of-distribution examples. The results agree with the hypothesis, and show that a deep learner {\em beat previously published results and reached human-level performance}.
59 \end{abstract}
60 %\vspace*{-3mm}
61
62 %\begin{keywords}
63 %Deep learning, self-taught learning, out-of-distribution examples, handwritten character recognition, multi-task learning
64 %\end{keywords}
65 %\keywords{self-taught learning \and multi-task learning \and out-of-distribution examples \and deep learning \and handwriting recognition}
66
67
68
69 \section{Introduction}
70 %\vspace*{-1mm}
71
72 {\bf Deep Learning} has emerged as a promising new area of research in
73 statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008-very-small,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
74 Learning algorithms for deep architectures are centered on the learning
75 of useful representations of data, which are better suited to the task at hand,
76 and are organized in a hierarchy with multiple levels.
77 This is in part inspired by observations of the mammalian visual cortex,
78 which consists of a chain of processing elements, each of which is associated with a
79 different representation of the raw visual input. In fact,
80 it was found recently that the features learnt in deep architectures resemble
81 those observed in the first two of these stages (in areas V1 and V2
82 of visual cortex) \citep{HonglakL2008}, and that they become more and
83 more invariant to factors of variation (such as camera movement) in
84 higher layers~\citep{Goodfellow2009}.
85 It has been hypothesized that learning a hierarchy of features increases the
86 ease and practicality of developing representations that are at once
87 tailored to specific tasks, yet are able to borrow statistical strength
88 from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
89 feature representation can lead to higher-level (more abstract, more
90 general) features that are more robust to unanticipated sources of
91 variance extant in real data.
92
93 Whereas a deep architecture can in principle be more powerful than a
94 shallow one in terms of representation, depth appears to render the
95 training problem more difficult in terms of optimization and local minima.
96 It is also only recently that successful algorithms were proposed to
97 overcome some of these difficulties. All are based on unsupervised
98 learning, often in a greedy layer-wise ``unsupervised pre-training''
99 stage~\citep{Bengio-2009}.
100 The principle is that each layer starting from
101 the bottom is trained to represent its input (the output of the previous
102 layer). After this
103 unsupervised initialization, the stack of layers can be
104 converted into a deep supervised feedforward neural network and fine-tuned by
105 stochastic gradient descent.
106 One of these layer initialization techniques,
107 applied here, is the Denoising
108 Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see
109 Figure~\ref{fig:da}), which was found~\citep{VincentPLarochelleH2008-very-small}
110 to perform similarly to or better than previously
111 proposed Restricted Boltzmann Machines (RBM)~\citep{Hinton06}
112 in terms of unsupervised extraction
113 of a hierarchy of features useful for classification. Each layer is trained
114 to denoise its input, creating a layer of features that can be used as
115 input for the next layer, forming a Stacked Denoising Auto-encoder (SDA).
116 Note that training a Denoising Auto-encoder
117 can actually be seen as training a particular RBM by an inductive
118 principle different from maximum likelihood~\citep{Vincent-SM-2010},
119 namely by Score Matching~\citep{Hyvarinen-2005,HyvarinenA2008}.
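For concreteness, the per-layer training criterion can be written as follows
(a sketch consistent with Figure~\ref{fig:da}, assuming sigmoid units and the
cross-entropy reconstruction loss of \citet{VincentPLarochelleH2008-very-small}
for inputs in $[0,1]$; other parametrizations are possible):
\begin{align*}
\tilde{x} &\sim q(\tilde{x}|x) \quad \mbox{(e.g., a random subset of the inputs set to 0),}\\
y &= f_\theta(\tilde{x}) = \mathrm{sigmoid}(W\tilde{x}+b),\\
z &= g_{\theta'}(y) = \mathrm{sigmoid}(W'y+b'),\\
L_H(x,z) &= -\textstyle\sum_i \left[ x_i \log z_i + (1-x_i)\log(1-z_i) \right],
\end{align*}
with $\theta=(W,b)$ and $\theta'=(W',b')$ tuned by stochastic gradient descent
to approximately minimize the expected loss over training examples.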
120
121 Previous comparative experimental results with stacking of RBMs and DAs
122 to build deep supervised predictors had shown that they could outperform
123 shallow architectures in a variety of settings, especially
124 when the data involves complex interactions between many factors of
125 variation~\citep{LarochelleH2007,Bengio-2009}. Other experiments have suggested
126 that the unsupervised layer-wise pre-training acted as a useful
127 prior~\citep{Erhan+al-2010} that allows one to initialize a deep
128 neural network in a much smaller region of parameter space,
129 corresponding to better generalization.
130
131 To further the understanding of the reasons for the good performance
132 observed with deep learners, we focus here on the following {\em hypothesis}:
133 intermediate levels of representation, especially when there are
134 more such levels, can be exploited to {\bf share
135 statistical strength across different but related types of examples},
136 such as examples coming from other tasks than the task of interest
137 (the multi-task setting), or examples coming from an overlapping
138 but different distribution (images with different kinds of perturbations
139 and noises, here). This is consistent with the hypotheses discussed
140 in~\citet{Bengio-2009} regarding the potential advantage
141 of deep learning and the idea that more levels of representation can
142 give rise to more abstract, more general features of the raw input.
143
144 This hypothesis is related to a learning setting called
145 {\bf self-taught learning}~\citep{RainaR2007}, which combines principles
146 of semi-supervised and multi-task learning: the learner can exploit examples
147 that are unlabeled and possibly come from a distribution different from the target
148 distribution, e.g., from other classes than those of interest.
149 It has already been shown that deep learners can clearly take advantage of
150 unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
151 but more needed to be done to explore the impact
152 of {\em out-of-distribution} examples and of the {\em multi-task} setting
153 (one exception is~\citep{CollobertR2008}, which shares and uses unsupervised
154 pre-training only with the first layer). In particular the {\em relative
155 advantage of deep learning} for these settings has not been evaluated.
156
157
158 %
159 The {\bf main claim} of this paper is that deep learners (with several levels of representation) can
160 {\bf benefit more from out-of-distribution examples than shallow learners} (with a single
161 level), both in the context of the multi-task setting and from
162 perturbed examples. Because we are able to improve on state-of-the-art
163 performance and reach human-level performance
164 on a large-scale task, we consider that this paper is also a contribution
165 to advance the application of machine learning to handwritten character recognition.
166 More precisely, we ask and answer the following questions:
167
168 %\begin{enumerate}
169 $\bullet$ %\item
170 Do the good results previously obtained with deep architectures on the
171 MNIST digit images generalize to the setting of a similar but much larger and richer
172 dataset, the NIST special database 19, with 62 classes and around 800k examples?
173
174 $\bullet$ %\item
175 To what extent does the perturbation of input images (e.g. adding
176 noise, affine transformations, background images) make the resulting
177 classifiers better not only on similarly perturbed images but also on
178 the {\em original clean examples}? We study this question in the
179 context of the 62-class and 10-class tasks of the NIST special database 19.
180
181 $\bullet$ %\item
182 Do deep architectures {\em benefit {\bf more} from such out-of-distribution}
183 examples, in particular do they benefit more from
184 examples that are perturbed versions of the examples from the task of interest?
185
186 $\bullet$ %\item
187 Similarly, does the feature learning step in deep learning algorithms benefit {\bf more}
188 from training with moderately {\em different classes} (i.e. a multi-task learning scenario) than
189 a corresponding shallow and purely supervised architecture?
190 We train on 62 classes and test on 10 (digits) or 26 (upper case or lower case)
191 to answer this question.
192 %\end{enumerate}
193
194 Our experimental results provide positive evidence towards all of these questions,
195 as well as {\bf classifiers that reach human-level performance on 62-class isolated character
196 recognition and beat previously published results on the NIST dataset (special database 19)}.
197 To achieve these results, we introduce in the next section a sophisticated system
198 for stochastically transforming character images and then explain the methodology,
199 which is based on training with or without these transformed images and testing on
200 clean ones.
201 Code for generating these transformations as well as for the deep learning
202 algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
203
204 %\vspace*{-3mm}
205 %\newpage
206 \section{Perturbed and Transformed Character Images}
207 \label{s:perturbations}
208 %\vspace*{-2mm}
209
210 Figure~\ref{fig:transform} shows the different transformations we used to stochastically
211 transform $32 \times 32$ source images (such as the one in Fig.\ref{fig:torig})
212 in order to obtain data from a larger distribution which
213 covers a domain substantially larger than the clean characters distribution from
214 which we start.
215 Although character transformations have been used before to
216 improve character recognizers, this effort is on a large scale both
217 in number of classes and in the complexity of the transformations, hence
218 in the complexity of the learning task.
219 The code for these transformations (mostly Python) is available at
220 {\tt http://anonymous.url.net}. All the modules in the pipeline (Figure~\ref{fig:transform}) share
221 a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
222 amount of deformation or noise introduced.
223 There are two main parts in the pipeline. The first one,
224 from thickness to pinch, performs transformations. The second
225 part, from blur to contrast, adds different kinds of noise.
226 More details can be found in~\citep{ift6266-tr-anonymous}.
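As a rough illustration of how the global $complexity$ parameter drives the
pipeline, the following Python sketch applies simplified stand-ins for two of
the modules (a toy slant transformation and salt-and-pepper noise); the module
implementations, strengths and application probabilities shown here are
illustrative only, not the actual code, which is available at the URL above.
\begin{verbatim}
import numpy as np

def slant(img, amount):
    # Toy slant: shear each row horizontally in proportion to its height.
    out = np.zeros_like(img)
    h = img.shape[0]
    for r in range(h):
        out[r] = np.roll(img[r], int(round(amount * (r - h / 2) * 0.2)))
    return out

def salt_and_pepper(img, amount, rng):
    # Toy noise: set a random fraction of pixels to 0 or 1.
    out = img.copy()
    mask = rng.random_sample(img.shape) < amount * 0.2
    out[mask] = rng.randint(0, 2, size=mask.sum())
    return out

def perturb(img, complexity, rng=np.random):
    # Each module is applied with a probability and a strength
    # bounded by the global complexity parameter in [0, 1].
    modules = [slant, lambda im, a: salt_and_pepper(im, a, rng)]
    for module in modules:
        if rng.uniform() < complexity:
            img = module(img, rng.uniform(0, complexity))
    return np.clip(img, 0.0, 1.0)

example = perturb(np.random.rand(32, 32), complexity=0.7)
\end{verbatim}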
227
228 \begin{figure*}[ht]
229 \centering
230 \subfigure[Original]{\includegraphics[scale=0.6]{images/Original.png}\label{fig:torig}}
231 \subfigure[Thickness]{\includegraphics[scale=0.6]{images/Thick_only.png}}
232 \subfigure[Slant]{\includegraphics[scale=0.6]{images/Slant_only.png}}
233 \subfigure[Affine Transformation]{\includegraphics[scale=0.6]{images/Affine_only.png}}
234 \subfigure[Local Elastic Deformation]{\includegraphics[scale=0.6]{images/Localelasticdistorsions_only.png}}
235 \subfigure[Pinch]{\includegraphics[scale=0.6]{images/Pinch_only.png}}
236 %Noise
237 \subfigure[Motion Blur]{\includegraphics[scale=0.6]{images/Motionblur_only.png}}
238 \subfigure[Occlusion]{\includegraphics[scale=0.6]{images/occlusion_only.png}}
239 \subfigure[Gaussian Smoothing]{\includegraphics[scale=0.6]{images/Bruitgauss_only.png}}
240 \subfigure[Pixels Permutation]{\includegraphics[scale=0.6]{images/Permutpixel_only.png}}
241 \subfigure[Gaussian Noise]{\includegraphics[scale=0.6]{images/Distorsiongauss_only.png}}
242 \subfigure[Background Image Addition]{\includegraphics[scale=0.6]{images/background_other_only.png}}
243 \subfigure[Salt \& Pepper]{\includegraphics[scale=0.6]{images/Poivresel_only.png}}
244 \subfigure[Scratches]{\includegraphics[scale=0.6]{images/Rature_only.png}}
245 \subfigure[Grey Level \& Contrast]{\includegraphics[scale=0.6]{images/Contrast_only.png}}
246 \caption{Top left (a): example original image. Others (b-o): examples of the effect
247 of each transformation module taken separately. Actual perturbed examples are obtained by
248 a pipeline of these, with random choices about which module to apply and how much perturbation
249 to apply.}
250 \label{fig:transform}
251 %\vspace*{-2mm}
252 \end{figure*}
253
254 %\vspace*{-3mm}
255 \section{Experimental Setup}
256 %\vspace*{-1mm}
257
258 Much previous work on deep learning had been performed on
259 the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
260 with 60,000 examples, and variants involving 10,000
261 examples~\citep{Larochelle-jmlr-2009,VincentPLarochelleH2008-very-small}.
262 The focus here is on much larger training sets, from 10 times
263 to 1000 times larger, and 62 classes.
264
265 The first step in constructing the larger datasets (called NISTP and P07) is to sample from
266 a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
267 and {\bf OCR data} (scanned machine printed characters). See more in
268 Section~\ref{sec:sources} below. Once a character
269 is sampled from one of these sources (chosen randomly), the second step is to
270 apply a pipeline of transformations and/or noise processes outlined in section \ref{s:perturbations}.
271
272 To provide a baseline of error rate comparison we also estimate human performance
273 on both the 62-class task and the 10-class digits task.
274 We compare the best Multi-Layer Perceptrons (MLP) against
275 the best Stacked Denoising Auto-encoders (SDA), when
276 both models' hyper-parameters are selected to minimize the validation set error.
277 We also provide a comparison against a precise estimate
278 of human performance obtained via Amazon's Mechanical Turk (AMT)
279 service ({\tt http://mturk.com}).
280 AMT users are paid small amounts
281 of money to perform tasks for which human intelligence is required.
282 Mechanical Turk has been used extensively in natural language processing and vision.
283 %processing \citep{SnowEtAl2008} and vision
284 %\citep{SorokinAndForsyth2008,whitehill09}.
285 AMT users were presented
286 with 10 character images (from a test set) on a screen
287 and asked to label them.
288 They were forced to choose a single character class (either among the
289 62 or 10 character classes) for each image.
290 80 subjects classified 2500 images per (dataset,task) pair.
291 Different human labelers sometimes provided a different label for the same
292 example, and we were able to estimate the error variance due to this effect
293 because each image was classified by 3 different persons.
294 The average error of humans on the 62-class task NIST test set
295 is 18.2\%, with a standard error of 0.1\%.
296 We controlled noise in the labelling process by (1)
297 requiring AMT workers with a higher than normal average of accepted
298 responses ($>$95\%) on other tasks, (2) discarding responses that were not
299 complete (10 predictions), (3) discarding responses for which the
300 time to predict was smaller than 3 seconds for NIST (the mean response time
301 was 20 seconds) and 6 seconds for NISTP (average response time of
302 45 seconds), and (4) discarding responses which were obviously wrong (10
303 identical ones, or ``12345...''). Overall, after such filtering, we kept
304 approximately 95\% of the AMT workers' responses.
305
306 %\vspace*{-3mm}
307 \subsection{Data Sources}
308 \label{sec:sources}
309 %\vspace*{-2mm}
310
311 %\begin{itemize}
312 %\item
313 {\bf NIST.}
314 Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
315 widely used for training and testing character
316 recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
317 The dataset is composed of 814255 digits and characters (upper and lower cases), with hand checked classifications,
318 extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
319 corresponding to ``0''-``9'',``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
320 The fourth partition (called $hsf_4$, 82,587 examples),
321 experimentally recognized to be the most difficult one, is the one recommended
322 by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
323 for that purpose. We randomly split the remainder (731,668 examples) into a training set and a validation set for
324 model selection.
325 The performances reported by previous work on that dataset mostly use only the digits.
326 Here we use all the classes both in the training and testing phase. This is especially
327 useful to estimate the effect of a multi-task setting.
328 The distribution of the classes in the NIST training and test sets differs
329 substantially, with relatively many more digits in the test set, and a more uniform distribution
330 of letters in the test set (whereas in the training set they are distributed
331 more like in natural text).
332 %\vspace*{-1mm}
333
334 %\item
335 {\bf Fonts.}
336 In order to have a good variety of sources we downloaded a large number of free fonts from:
337 {\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
338 % TODO: pointless to anonymize, it's not pointing to our work
339 Including an operating system's (Windows 7) fonts, there are a total of $9817$ different fonts from which we choose uniformly.
340 The chosen {\tt ttf} file is either used as input of the Captcha generator (see next item) or, by producing a corresponding image,
341 directly as input to our models.
342 %\vspace*{-1mm}
343
344 %\item
345 {\bf Captchas.}
346 The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator library) for
347 generating characters of the same format as the NIST dataset. This software is based on
348 a random character class generator and various kinds of transformations similar to those described in the previous sections.
349 In order to increase the variability of the data generated, many different fonts are used for generating the characters.
350 Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
351 depending on the value of the complexity parameter provided by the user of the data source.
352 %Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
353 %\vspace*{-1mm}
354
355 %\item
356 {\bf OCR data.}
357 A large set (2 million) of scanned, OCRed and manually verified machine-printed
358 characters were included as an
359 additional source. This set is part of a larger corpus being collected by the Image Understanding
360 Pattern Recognition Research group led by Thomas Breuel at University of Kaiserslautern
361 ({\tt http://www.iupr.com}), and which will be publicly released.
362 %TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
363 %\end{itemize}
364
365 %\vspace*{-3mm}
366 \subsection{Data Sets}
367 %\vspace*{-2mm}
368
369 All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
370 from one of the 62 character classes.
371 %\begin{itemize}
372 %\vspace*{-1mm}
373
374 %\item
375 {\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
376 \{651,668 / 80,000 / 82,587\} \{training / validation / test\} examples.
377 %\vspace*{-1mm}
378
379 %\item
380 {\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
381 and sending them through the transformation pipeline described in section \ref{s:perturbations}.
382 For each new example to be generated, a data source is selected with probability $10\%$ from the fonts,
383 $25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
384 order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$ (see the sketch at the end of this subsection).
385 It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
386 obtained from the corresponding NIST sets plus other sources.
387 %\vspace*{-1mm}
388
389 %\item
390 {\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
391 except that we only apply
392 transformations from slant to pinch (see Fig.\ref{fig:transform}(b-f)).
393 Therefore, the character is
394 transformed but no additional noise is added to the image, giving images
395 closer to the NIST dataset.
396 It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
397 obtained from the corresponding NIST sets plus other sources.
398 %\end{itemize}
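
To make the example-generation procedure of P07 and NISTP explicit, the
following sketch draws a data source according to the proportions above and
then applies the perturbation pipeline with a per-example complexity sampled
uniformly in $[0,0.7]$; the two helper callables are hypothetical placeholders
(for NISTP, {\tt perturb} would apply only the transformation modules, from
slant to pinch).
\begin{verbatim}
import numpy as np

SOURCES = ["fonts", "captcha", "ocr", "nist"]
PROBS   = [0.10, 0.25, 0.25, 0.40]   # source proportions for P07 and NISTP

def generate_example(draw_from_source, perturb, max_complexity=0.7,
                     rng=np.random):
    # draw_from_source(name) -> (32x32 image in [0,1], label in 0..61)
    # perturb(image, complexity) -> perturbed image (see earlier sketch)
    source = SOURCES[rng.choice(len(SOURCES), p=PROBS)]
    image, label = draw_from_source(source)
    complexity = rng.uniform(0.0, max_complexity)   # sampled per example
    return perturb(image, complexity), label
\end{verbatim}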
399
400 \begin{figure*}[ht]
401 %\vspace*{-2mm}
402 \centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
403 %\vspace*{-2mm}
404 \caption{Illustration of the computations and training criterion for the denoising
405 auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
406 the layer (i.e. raw input or output of previous layer)
407 is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
408 The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
409 is compared to the uncorrupted input $x$ through the loss function
410 $L_H(x,z)$, whose expected value is approximately minimized during training
411 by tuning $\theta$ and $\theta'$.}
412 \label{fig:da}
413 %\vspace*{-2mm}
414 \end{figure*}
415
416 %\vspace*{-3mm}
417 \subsection{Models and their Hyper-parameters}
418 %\vspace*{-2mm}
419
420 The experiments are performed using MLPs (with a single
421 hidden layer) and deep SDAs.
422 \emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}
423
424 {\bf Multi-Layer Perceptrons (MLP).} Whereas previous work had compared
425 deep architectures to both shallow MLPs and SVMs, we only compared to MLPs
426 here because of the very large datasets used (making the use of SVMs
427 computationally challenging because of their quadratic scaling
428 behavior). Preliminary experiments on training SVMs (libSVM) with subsets
429 of the training set allowing the program to fit in memory yielded
430 substantially worse results than those obtained with MLPs.\footnote{RBF SVMs
431 trained with a subset of NISTP or NIST, 100k examples, to fit in memory,
432 yielded 64\% test error or worse; online linear SVMs trained on the whole
433 of NIST or 800k from NISTP yielded no better than 42\% error; slightly
434 better results were obtained by sparsifying the pixel intensities and
435 projecting to a second-order polynomial (a very sparse vector), still
436 41\% error. We expect that better results could be obtained with a
437 better implementation allowing for training with more examples and
438 a higher-order non-linear projection.} For training on nearly a hundred million examples (with the
439 perturbed data), the MLPs and SDA are much more convenient than classifiers
440 based on kernel methods. The MLP has a single hidden layer with $\tanh$
441 activation functions, and softmax (normalized exponentials) on the output
442 layer for estimating $P(class | image)$. The number of hidden units is
443 taken in $\{300,500,800,1000,1500\}$. Training examples are presented in
444 minibatches of size 20. A constant learning rate was chosen among $\{0.001,
445 0.01, 0.025, 0.075, 0.1, 0.5\}$.
446 %through preliminary experiments (measuring performance on a validation set),
447 %and $0.1$ (which was found to work best) was then selected for optimizing on
448 %the whole training sets.
449 %\vspace*{-1mm}
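
A minimal sketch of this shallow baseline is given below (plain NumPy, with
illustrative initialization and a default learning rate taken from the grid
above; the actual experiments used a GPU implementation).
\begin{verbatim}
import numpy as np

class MLP:
    # Single tanh hidden layer, softmax output, trained by minibatch SGD.
    def __init__(self, n_in=32 * 32, n_hidden=1000, n_out=62, rng=np.random):
        self.W1 = rng.uniform(-0.05, 0.05, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.uniform(-0.05, 0.05, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        H = np.tanh(X @ self.W1 + self.b1)
        A = H @ self.W2 + self.b2
        A -= A.max(axis=1, keepdims=True)      # numerical stability
        P = np.exp(A)
        P /= P.sum(axis=1, keepdims=True)      # softmax: P(class | image)
        return H, P

    def sgd_step(self, X, y, lr=0.1):
        # One minibatch update of the average negative log-likelihood.
        H, P = self.forward(X)
        dA = P.copy()
        dA[np.arange(len(y)), y] -= 1.0
        dA /= len(y)
        dH = (dA @ self.W2.T) * (1.0 - H ** 2)  # tanh'(a) = 1 - tanh(a)^2
        self.W2 -= lr * (H.T @ dA)
        self.b2 -= lr * dA.sum(axis=0)
        self.W1 -= lr * (X.T @ dH)
        self.b1 -= lr * dH.sum(axis=0)
\end{verbatim}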
450
451
452 {\bf Stacked Denoising Auto-encoders (SDA).}
453 Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
454 can be used to initialize the weights of each layer of a deep MLP (with many hidden
455 layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
456 apparently setting parameters in the
457 basin of attraction of supervised gradient descent yielding better
458 generalization~\citep{Erhan+al-2010}. This initial {\em unsupervised
459 pre-training phase} uses all of the training images but not the training labels.
460 Each layer is trained in turn to produce a new representation of its input
461 (starting from the raw pixels).
462 It is hypothesized that the
463 advantage brought by this procedure stems from a better prior,
464 on the one hand taking advantage of the link between the input
465 distribution $P(x)$ and the conditional distribution of interest
466 $P(y|x)$ (like in semi-supervised learning), and on the other hand
467 taking advantage of the expressive power and bias implicit in the
468 deep architecture (whereby complex concepts are expressed as
469 compositions of simpler ones through a deep hierarchy).
470
471 Here we chose to use the Denoising
472 Auto-encoder~\citep{VincentPLarochelleH2008-very-small} as the building block for
473 these deep hierarchies of features, as it is simple to train and
474 explain (see Figure~\ref{fig:da}, as well as
475 tutorial and code there: {\tt http://deeplearning.net/tutorial}),
476 provides efficient inference, and yielded results
477 comparable to or better than RBMs in a series of experiments
478 \citep{VincentPLarochelleH2008-very-small}. It really corresponds to a Gaussian
479 RBM trained by a Score Matching criterion~\cite{Vincent-SM-2010}.
480 During training, a Denoising
481 Auto-encoder is presented with a stochastically corrupted version
482 of the input and trained to reconstruct the uncorrupted input,
483 forcing the hidden units to represent the leading regularities in
484 the data. Here we use the random binary masking corruption
485 (which sets to 0 a random subset of the inputs).
486 Once it is trained, in a purely unsupervised way,
487 its hidden units' activations can
488 be used as inputs for training a second one, etc.
489 After this unsupervised pre-training stage, the parameters
490 are used to initialize a deep MLP, which is fine-tuned by
491 the same standard supervised training procedure used for the MLPs (see above).
492 The SDA hyper-parameters are the same as for the MLP, with the addition of the
493 amount of corruption noise (we used the masking noise process, whereby a
494 fixed proportion of the input values, randomly selected, are zeroed), and a
495 separate learning rate for the unsupervised pre-training stage (selected
496 from the same above set). The fraction of inputs corrupted was selected
497 among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
498 of hidden layers but it was fixed to 3 for most experiments,
499 based on previous work with
500 SDAs on MNIST~\citep{VincentPLarochelleH2008-very-small}.
501 We also compared against 1 and against 2 hidden layers, in order
502 to disentangle the effect of depth from the effect of unsupervised
503 pre-training.
504 The size of the hidden
505 layers was kept constant across hidden layers, and the best results
506 were obtained with the largest values that we could experiment
507 with given our patience, namely 1000 hidden units.
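
The greedy layer-wise procedure can be summarized as follows (a sketch with
untied weights and placeholder hyper-parameter values taken from the grids
above; the actual experiments used a GPU implementation).
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def pretrain_layer(X, n_hidden, corruption, lr, n_updates, rng=np.random):
    # Train one denoising auto-encoder on X (unsupervised); return its
    # encoder parameters and the representation it produces for X.
    n_in = X.shape[1]
    W = rng.uniform(-0.05, 0.05, (n_in, n_hidden))
    b = np.zeros(n_hidden)
    Wp = W.T.copy()
    bp = np.zeros(n_in)
    for _ in range(n_updates):
        x = X[rng.randint(len(X), size=20)]                      # minibatch of 20
        x_tilde = x * (rng.random_sample(x.shape) > corruption)  # masking noise
        y = sigmoid(x_tilde @ W + b)                              # encode
        z = sigmoid(y @ Wp + bp)                                  # reconstruct
        dz = (z - x) / len(x)     # cross-entropy gradient w.r.t. pre-activation
        dy = (dz @ Wp.T) * y * (1.0 - y)
        Wp -= lr * (y.T @ dz)
        bp -= lr * dz.sum(axis=0)
        W -= lr * (x_tilde.T @ dy)
        b -= lr * dy.sum(axis=0)
    return W, b, sigmoid(X @ W + b)

def pretrain_sda(X, layer_sizes=(1000, 1000, 1000), corruption=0.2,
                 lr=0.01, n_updates=100000):
    # Stack denoising auto-encoders: each layer is trained on the output of
    # the previous one.  The returned parameters initialize a deep MLP that
    # is then fine-tuned by supervised SGD as described above.
    params = []
    for n_hidden in layer_sizes:
        W, b, X = pretrain_layer(X, n_hidden, corruption, lr, n_updates)
        params.append((W, b))
    return params
\end{verbatim}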
508
509 %\vspace*{-1mm}
510
511 \begin{figure*}[ht]
512 %\vspace*{-2mm}
513 \centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
514 %\vspace*{-3mm}
515 \caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
516 on NIST, 1 on NISTP, and 2 on P07. Left: overall results
517 of all models, on NIST and NISTP test sets.
518 Right: error rates on NIST test digits only, along with the previous results from
519 literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
520 respectively based on ART, nearest neighbors, MLPs, and SVMs.}
521 \label{fig:error-rates-charts}
522 %\vspace*{-2mm}
523 \end{figure*}
524
525
526 \begin{figure*}[ht]
527 \vspace*{-3mm}
528 \centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
529 \vspace*{-3mm}
530 \caption{Relative improvement in error rate due to out-of-distribution examples.
531 Left: Improvement (or loss, when negative)
532 induced by out-of-distribution examples (perturbed data).
533 Right: Improvement (or loss, when negative) induced by multi-task
534 learning (training on all classes and testing only on either digits,
535 upper case, or lower-case). The deep learner (SDA) benefits more from
536 out-of-distribution examples, compared to the shallow MLP.}
537 \label{fig:improvements-charts}
538 \vspace*{-2mm}
539 \end{figure*}
540
541 \vspace*{-2mm}
542 \section{Experimental Results}
543 \vspace*{-2mm}
544
545 %%\vspace*{-1mm}
546 %\subsection{SDA vs MLP vs Humans}
547 %%\vspace*{-1mm}
548 The models are either trained on NIST (MLP0 and SDA0),
549 NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
550 on either NIST, NISTP or P07 (regardless of the data set used for training),
551 either on the 62-class task
552 or on the 10-digits task. Training time (including about half
553 for unsupervised pre-training, for DAs) on the larger
554 datasets is around one day on a GPU (GTX 285).
555 Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
556 comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
557 SDA2), along with the previous results on the digits NIST special database
558 19 test set from the literature, respectively based on ARTMAP neural
559 networks~\citep{Granger+al-2007}, fast nearest-neighbor
560 search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
561 SVMs~\citep{Milgram+al-2005}.% More detailed and complete numerical results
562 %(figures and tables, including standard errors on the error rates) can be
563 %found in Appendix.
564 The deep learner not only outperformed the shallow ones and
565 the previously published results (in a statistically and qualitatively
566 significant way) but when trained with perturbed data
567 reached human performance on both the 62-class task
568 and the 10-class (digits) task.
569 17\% error (SDA1) or 18\% error (humans) may seem large but a large
570 majority of the errors from humans and from SDA1 are from out-of-context
571 confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
572 ``c'' and a ``C'' are often indistinguishable).
573 Regarding shallower networks pre-trained with unsupervised denoising
574 auto-encoders, we find that the NIST test error is 21\% with one hidden
575 layer and 20\% with two hidden layers (vs 17\% in the same conditions
576 with 3 hidden layers). Compare this with the 23\% error achieved
577 by the MLP, i.e. a single hidden layer and no unsupervised pre-training.
578 As found in previous work~\cite{Erhan+al-2010,Larochelle-jmlr-2009},
579 these results show that both depth and
580 unsupervised pre-training need to be combined in order to achieve
581 the best results.
582
583
584 In addition, as shown in the left of
585 Figure~\ref{fig:improvements-charts}, the relative improvement in error
586 rate brought by out-of-distribution examples is greater for the deep
587 SDA, and these
588 differences with the shallow MLP are statistically and qualitatively
589 significant.
590 The left side of the figure shows the improvement to the clean
591 NIST test set error brought by the use of out-of-distribution examples
592 (i.e. the perturbed examples from NISTP or P07),
593 over the models trained exclusively on NIST (respectively SDA0 and MLP0).
594 Relative percent change is measured by taking
595 $100 \% \times$ (original model's error / perturbed-data model's error - 1).
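For example, with the NIST test errors obtained here for the deep models
(23.7\% for SDA0 vs 17.1\% for SDA1), this measure gives
$100 \% \times (23.7/17.1 - 1) \approx 38.6\%$.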
596 The right side of
597 Figure~\ref{fig:improvements-charts} shows the relative improvement
598 brought by the use of a multi-task setting, in which the same model is
599 trained for more classes than the target classes of interest (i.e. training
600 with all 62 classes when the target classes are respectively the digits,
601 lower-case, or upper-case characters). Again, whereas the gain from the
602 multi-task setting is marginal or negative for the MLP, it is substantial
603 for the SDA. Note that to simplify these multi-task experiments, only the original
604 NIST dataset is used. For example, the MLP-digits bar shows the relative
605 percent improvement in MLP error rate on the NIST digits test set
606 as $100\% \times$ (single-task
607 model's error / multi-task model's error - 1). The single-task model is
608 trained with only 10 outputs (one per digit), seeing only digit examples,
609 whereas the multi-task model is trained with 62 outputs, with all 62
610 character classes as examples. Hence the hidden units are shared across
611 all tasks. For the multi-task model, the digit error rate is measured by
612 comparing the correct digit class with the output class associated with the
613 maximum conditional probability among only the digit classes outputs. The
614 setting is similar for the other two target classes (lower case characters
615 and upper case characters). Note however that some types of perturbations
616 (NISTP) help more than others (P07) when testing on the clean images.
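
For clarity, the restricted-argmax evaluation used for the multi-task models
can be sketched as follows (illustrative NumPy; the ordering of the 62 outputs
as digits, then lower case, then upper case is an assumption of the sketch,
not a statement about the actual implementation).
\begin{verbatim}
import numpy as np

# Assumed ordering of the 62 outputs: 10 digits, 26 lower case, 26 upper case.
TASK_SLICES = {"digits": slice(0, 10),
               "lower":  slice(10, 36),
               "upper":  slice(36, 62)}

def task_error_rate(probs, labels, task):
    # probs:  (n, 62) conditional class probabilities from the 62-way model
    # labels: (n,) correct class indices, all belonging to the given task
    # The prediction is the argmax restricted to the task's own classes.
    s = TASK_SLICES[task]
    predictions = s.start + probs[:, s].argmax(axis=1)
    return float(np.mean(predictions != labels))
\end{verbatim}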
617 %%\vspace*{-1mm}
618 %\subsection{Perturbed Training Data More Helpful for SDA}
619 %%\vspace*{-1mm}
620
621 %%\vspace*{-1mm}
622 %\subsection{Multi-Task Learning Effects}
623 %%\vspace*{-1mm}
624
625 \iffalse
626 As previously seen, the SDA is better able to benefit from the
627 transformations applied to the data than the MLP. In this experiment we
628 define three tasks: recognizing digits (knowing that the input is a digit),
629 recognizing upper case characters (knowing that the input is one), and
630 recognizing lower case characters (knowing that the input is one). We
631 consider the digit classification task as the target task and we want to
632 evaluate whether training with the other tasks can help or hurt, and
633 whether the effect is different for MLPs versus SDAs. The goal is to find
634 out if deep learning can benefit more (or less) from multiple related tasks
635 (i.e. the multi-task setting) compared to a corresponding purely supervised
636 shallow learner.
637
638 We use a single hidden layer MLP with 1000 hidden units, and a SDA
639 with 3 hidden layers (1000 hidden units per layer), pre-trained and
640 fine-tuned on NIST.
641
642 Our results show that the MLP benefits marginally from the multi-task setting
643 in the case of digits (5\% relative improvement) but is actually hurt in the case
644 of characters (respectively 3\% and 4\% worse for lower and upper case characters).
645 On the other hand the SDA benefited from the multi-task setting, with relative
646 error rate improvements of 27\%, 15\% and 13\% respectively for digits,
647 lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
648 \fi
649
650
651 \vspace*{-2mm}
652 \section{Conclusions and Discussion}
653 \vspace*{-2mm}
654
655 We have found that out-of-distribution examples (multi-task learning
656 and perturbed examples) are more beneficial
657 to a deep learner than to a traditional shallow and purely
658 supervised learner. More precisely,
659 the answers are positive for all the questions asked in the introduction.
660 %\begin{itemize}
661
662 $\bullet$ %\item
663 {\bf Do the good results previously obtained with deep architectures on the
664 MNIST digits generalize to a much larger and richer (but similar)
665 dataset, the NIST special database 19, with 62 classes and around 800k examples}?
666 Yes, the SDA {\em systematically outperformed the MLP and all the previously
667 published results on this dataset} (the ones that we are aware of), {\em in fact reaching human-level
668 performance} at around 17\% error on the 62-class task and 1.4\% on the digits,
669 and beating previously published results on the same data.
670
671 $\bullet$ %\item
672 {\bf To what extent do out-of-distribution examples help deep learners,
673 and do they help them more than shallow supervised ones}?
674 We found that distorted training examples not only made the resulting
675 classifier better on similarly perturbed images but also on
676 the {\em original clean examples}, and, more importantly (our more novel finding),
677 that deep architectures benefit more from such {\em out-of-distribution}
678 examples. Shallow MLPs were helped by perturbed training examples when tested on perturbed input
679 images (65\% relative improvement on NISTP)
680 but only marginally helped (5\% relative improvement on all classes)
681 or even hurt (10\% relative loss on digits)
682 with respect to clean examples. On the other hand, the deep SDAs
683 were significantly boosted by these out-of-distribution examples.
684 Similarly, whereas the improvement due to the multi-task setting was marginal or
685 negative for the MLP (from +5.6\% to -3.6\% relative change),
686 it was quite significant for the SDA (from +13\% to +27\% relative change),
687 which may be explained by the arguments below.
688 Since out-of-distribution data
689 (perturbed or from other related classes) is very common, this conclusion
690 is of practical importance.
691 %\end{itemize}
692
693 In the original self-taught learning framework~\citep{RainaR2007}, the
694 out-of-sample examples were used as a source of unsupervised data, and
695 experiments showed its positive effects in a \emph{limited labeled data}
696 scenario. However, many of the results by \citet{RainaR2007} (who used a
697 shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
698 learning vs ordinary supervised learning} diminishes as the number of labeled examples increases.
699 We note instead that, for deep
700 architectures, our experiments show that such a positive effect is accomplished
701 even in a scenario with a \emph{large number of labeled examples},
702 suggesting that the relative gain of self-taught learning and
703 out-of-distribution examples is probably preserved
704 in the asymptotic regime. However, note that in our perturbation experiments
705 (but not in our multi-task experiments),
706 even the out-of-distribution examples are labeled, unlike in the
707 earlier self-taught learning experiments~\citep{RainaR2007}.
708
709 {\bf Why would deep learners benefit more from the self-taught learning
710 framework and out-of-distribution examples}?
711 The key idea is that the lower layers of the predictor compute a hierarchy
712 of features that can be shared across tasks or across variants of the
713 input distribution. A theoretical analysis of generalization improvements
714 due to sharing of intermediate features across tasks already points
715 towards that explanation~\cite{baxter95a}.
716 Intermediate features that can be used in different
717 contexts can be estimated in a way that allows statistical strength
718 to be shared. Features extracted through many levels are more likely to
719 be more abstract and more invariant to some of the factors of variation
720 in the underlying distribution (as the experiments in~\citet{Goodfellow2009} suggest),
721 increasing the likelihood that they would be useful for a larger array
722 of tasks and input conditions.
723 Therefore, we hypothesize that both depth and unsupervised
724 pre-training play a part in explaining the advantages observed here, and future
725 experiments could attempt to tease these factors apart.
726 And why would deep learners benefit from the self-taught learning
727 scenarios even when the number of labeled examples is very large?
728 We hypothesize that this is related to the hypotheses studied
729 in~\citet{Erhan+al-2010}. In~\citet{Erhan+al-2010}
730 it was found that online learning on a huge dataset did not make the
731 advantage of the deep learning bias vanish, and a similar phenomenon
732 may be happening here. We hypothesize that unsupervised pre-training
733 of a deep hierarchy with out-of-distribution examples initializes the
734 model in the basin of attraction of supervised gradient descent
735 that corresponds to better generalization. Furthermore, such good
736 basins of attraction are not discovered by pure supervised learning
737 (with or without out-of-distribution examples) from random initialization, and more labeled examples
738 do not allow the shallow or purely supervised models to discover
739 the kind of better basins associated
740 with deep learning and out-of-distribution examples.
741
742 A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
743 can be executed on-line at the anonymous site {\tt http://deep.host22.com}.
744
745 \iffalse
746 \section*{Appendix I: Detailed Numerical Results}
747
748 These tables correspond to Figures 2 and 3 and contain the raw error rates for each model and dataset considered.
749 They also contain additional data such as test errors on P07 and standard errors.
750
751 \begin{table}[ht]
752 \caption{Overall comparison of error rates ($\pm$ std.err.) on 62 character classes (10 digits +
753 26 lower + 26 upper), except for last columns -- digits only, between deep architecture with pre-training
754 (SDA=Stacked Denoising Autoencoder) and ordinary shallow architecture
755 (MLP=Multi-Layer Perceptron). The models shown are all trained using perturbed data (NISTP or P07)
756 and using a validation set to select hyper-parameters and other training choices.
757 \{SDA,MLP\}0 are trained on NIST,
758 \{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
759 The human error rate on digits is a lower bound because it does not count digits that were
760 recognized as letters. For comparison, the results found in the literature
761 on NIST digits classification using the same test set are included.}
762 \label{tab:sda-vs-mlp-vs-humans}
763 \begin{center}
764 \begin{tabular}{|l|r|r|r|r|} \hline
765 & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
766 Humans& 18.2\% $\pm$.1\% & 39.4\%$\pm$.1\% & 46.9\%$\pm$.1\% & $1.4\%$ \\ \hline
767 SDA0 & 23.7\% $\pm$.14\% & 65.2\%$\pm$.34\% & 97.45\%$\pm$.06\% & 2.7\% $\pm$.14\%\\ \hline
768 SDA1 & 17.1\% $\pm$.13\% & 29.7\%$\pm$.3\% & 29.7\%$\pm$.3\% & 1.4\% $\pm$.1\%\\ \hline
769 SDA2 & 18.7\% $\pm$.13\% & 33.6\%$\pm$.3\% & 39.9\%$\pm$.17\% & 1.7\% $\pm$.1\%\\ \hline
770 MLP0 & 24.2\% $\pm$.15\% & 68.8\%$\pm$.33\% & 78.70\%$\pm$.14\% & 3.45\% $\pm$.15\% \\ \hline
771 MLP1 & 23.0\% $\pm$.15\% & 41.8\%$\pm$.35\% & 90.4\%$\pm$.1\% & 3.85\% $\pm$.16\% \\ \hline
772 MLP2 & 24.3\% $\pm$.15\% & 46.0\%$\pm$.35\% & 54.7\%$\pm$.17\% & 4.85\% $\pm$.18\% \\ \hline
773 \citep{Granger+al-2007} & & & & 4.95\% $\pm$.18\% \\ \hline
774 \citep{Cortes+al-2000} & & & & 3.71\% $\pm$.16\% \\ \hline
775 \citep{Oliveira+al-2002} & & & & 2.4\% $\pm$.13\% \\ \hline
776 \citep{Milgram+al-2005} & & & & 2.1\% $\pm$.12\% \\ \hline
777 \end{tabular}
778 \end{center}
779 \end{table}
780
781 \begin{table}[ht]
782 \caption{Relative change in error rates due to the use of perturbed training data,
783 either using NISTP, for the MLP1/SDA1 models, or using P07, for the MLP2/SDA2 models.
784 A positive value indicates that training on the perturbed data helped for the
785 given test set (the first 3 columns on the 62-class tasks and the last one is
786 on the clean 10-class digits). Clearly, the deep learning models did benefit more
787 from perturbed training data, even when testing on clean data, whereas the MLP
788 trained on perturbed data performed worse on the clean digits and about the same
789 on the clean characters. }
790 \label{tab:perturbation-effect}
791 \begin{center}
792 \begin{tabular}{|l|r|r|r|r|} \hline
793 & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
794 SDA0/SDA1-1 & 38\% & 84\% & 228\% & 93\% \\ \hline
795 SDA0/SDA2-1 & 27\% & 94\% & 144\% & 59\% \\ \hline
796 MLP0/MLP1-1 & 5.2\% & 65\% & -13\% & -10\% \\ \hline
797 MLP0/MLP2-1 & -0.4\% & 49\% & 44\% & -29\% \\ \hline
798 \end{tabular}
799 \end{center}
800 \end{table}
801
802 \begin{table}[ht]
803 \caption{Test error rates and relative change in error rates due to the use of
804 a multi-task setting, i.e., training on each task in isolation vs training
805 for all three tasks together, for MLPs vs SDAs. The SDA benefits much
806 more from the multi-task setting. All experiments are done only on the
807 unperturbed NIST data, using validation error for model selection.
808 Relative improvement is 1 - single-task error / multi-task error.}
809 \label{tab:multi-task}
810 \begin{center}
811 \begin{tabular}{|l|r|r|r|} \hline
812 & single-task & multi-task & relative \\
813 & setting & setting & improvement \\ \hline
814 MLP-digits & 3.77\% & 3.99\% & 5.6\% \\ \hline
815 MLP-lower & 17.4\% & 16.8\% & -4.1\% \\ \hline
816 MLP-upper & 7.84\% & 7.54\% & -3.6\% \\ \hline
817 SDA-digits & 2.6\% & 3.56\% & 27\% \\ \hline
818 SDA-lower & 12.3\% & 14.4\% & 15\% \\ \hline
819 SDA-upper & 5.93\% & 6.78\% & 13\% \\ \hline
820 \end{tabular}
821 \end{center}
822 \end{table}
823
824 \fi
825
826 %\afterpage{\clearpage}
827 %\clearpage
828 {
829 %\bibliographystyle{spbasic} % basic style, author-year citations
830 \bibliographystyle{plainnat}
831 \bibliography{strings,strings-short,strings-shorter,ift6266_ml,specials,aigaion-shorter}
832 %\bibliographystyle{unsrtnat}
833 %\bibliographystyle{apalike}
834 }
835
836
837 \end{document}