1 %\documentclass[twoside,11pt]{article} % For LaTeX2e
2 \documentclass{article} % For LaTeX2e
3 \usepackage[accepted]{aistats2e_2011}
4 %\usepackage{times}
5 \usepackage{wrapfig}
6 \usepackage{amsthm}
7 \usepackage{amsmath}
8 \usepackage{bbm}
9 \usepackage[utf8]{inputenc}
10 \usepackage[psamsfonts]{amssymb}
11 %\usepackage{algorithm,algorithmic} % not used after all
12 \usepackage{graphicx,subfigure}
13 \usepackage{natbib}
14
15 \addtolength{\textwidth}{10mm}
16 \addtolength{\evensidemargin}{-5mm}
17 \addtolength{\oddsidemargin}{-5mm}
18
19 %\setlength\parindent{0mm}
20
21 \begin{document}
22
23 \twocolumn[
24 \aistatstitle{Deep Learners Benefit More from Out-of-Distribution Examples}
25 \runningtitle{Deep Learners for Out-of-Distribution Examples}
\runningauthor{Bengio et al.}
27 \aistatsauthor{
28 Yoshua Bengio \and
29 Frédéric Bastien \and
30 \bf Arnaud Bergeron \and
31 Nicolas Boulanger-Lewandowski \and \\
32 \bf Thomas Breuel \and
33 Youssouf Chherawala \and
34 \bf Moustapha Cisse \and
35 Myriam Côté \and \\
36 \bf Dumitru Erhan \and
37 Jeremy Eustache \and
38 \bf Xavier Glorot \and
39 Xavier Muller \and \\
40 \bf Sylvain Pannetier Lebeuf \and
41 Razvan Pascanu \and
42 \bf Salah Rifai \and
43 Francois Savard \and \\
44 \bf Guillaume Sicard \\
45 \vspace*{1mm}}
46
%I can't use aistatsaddress in a single full-width paragraph.
%The document is 2 columns, but this section spans the 2 columns, so there is only 1 left
49 \center{Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
50 \vspace*{5mm}
51 ]
52 %\aistatsaddress{Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
53
54
55 %\vspace*{5mm}}
56 %\date{{\tt bengioy@iro.umontreal.ca}, Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
57 %\jmlrheading{}{2010}{}{10/2010}{XX/2011}{Yoshua Bengio et al}
58 %\editor{}
59
60 %\makeanontitle
61 %\maketitle
62
63 %{\bf Running title: Deep Self-Taught Learning}
64
65 \vspace*{5mm}
66 \begin{abstract}
Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because they can be shared across tasks and examples from different but related distributions, can yield even more benefits. Comparative experiments were performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits), using both a multi-task setting and perturbed examples in order to obtain out-of-distribution examples. The results agree with the hypothesis, and show that a deep learner did {\em beat previously published results and reach human-level performance}.
68 \end{abstract}
69 %\vspace*{-3mm}
70
71 %\begin{keywords}
72 %Deep learning, self-taught learning, out-of-distribution examples, handwritten character recognition, multi-task learning
73 %\end{keywords}
74 %\keywords{self-taught learning \and multi-task learning \and out-of-distribution examples \and deep learning \and handwriting recognition}
75
76
77
78 \section{Introduction}
79 %\vspace*{-1mm}
80
81 {\bf Deep Learning} has emerged as a promising new area of research in
82 statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008-very-small,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
83 Learning algorithms for deep architectures are centered on the learning
84 of useful representations of data, which are better suited to the task at hand,
85 and are organized in a hierarchy with multiple levels.
86 This is in part inspired by observations of the mammalian visual cortex,
87 which consists of a chain of processing elements, each of which is associated with a
88 different representation of the raw visual input. In fact,
89 it was found recently that the features learnt in deep architectures resemble
90 those observed in the first two of these stages (in areas V1 and V2
91 of visual cortex) \citep{HonglakL2008}, and that they become more and
92 more invariant to factors of variation (such as camera movement) in
93 higher layers~\citep{Goodfellow2009}.
94 It has been hypothesized that learning a hierarchy of features increases the
95 ease and practicality of developing representations that are at once
96 tailored to specific tasks, yet are able to borrow statistical strength
97 from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
98 feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variation present in real data.
101
102 Whereas a deep architecture can in principle be more powerful than a
103 shallow one in terms of representation, depth appears to render the
104 training problem more difficult in terms of optimization and local minima.
105 It is also only recently that successful algorithms were proposed to
106 overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
108 stage~\citep{Bengio-2009}.
109 The principle is that each layer starting from
110 the bottom is trained to represent its input (the output of the previous
111 layer). After this
112 unsupervised initialization, the stack of layers can be
113 converted into a deep supervised feedforward neural network and fine-tuned by
114 stochastic gradient descent.
115 One of these layer initialization techniques,
116 applied here, is the Denoising
117 Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see
118 Figure~\ref{fig:da}), which performed similarly or
119 better~\citep{VincentPLarochelleH2008-very-small} than previously
120 proposed Restricted Boltzmann Machines (RBM)~\citep{Hinton06}
121 in terms of unsupervised extraction
122 of a hierarchy of features useful for classification. Each layer is trained
123 to denoise its input, creating a layer of features that can be used as
124 input for the next layer, forming a Stacked Denoising Auto-encoder (SDA).
Note that training a Denoising Auto-encoder
can actually be seen as training a particular RBM by an inductive
127 principle different from maximum likelihood~\citep{Vincent-SM-2010},
128 namely by Score Matching~\citep{Hyvarinen-2005,HyvarinenA2008}.
129
130 Previous comparative experimental results with stacking of RBMs and DAs
131 to build deep supervised predictors had shown that they could outperform
132 shallow architectures in a variety of settings, especially
133 when the data involves complex interactions between many factors of
134 variation~\citep{LarochelleH2007,Bengio-2009}. Other experiments have suggested
135 that the unsupervised layer-wise pre-training acted as a useful
136 prior~\citep{Erhan+al-2010} that allows one to initialize a deep
neural network in a much smaller region of parameter space,
138 corresponding to better generalization.
139
140 To further the understanding of the reasons for the good performance
141 observed with deep learners, we focus here on the following {\em hypothesis}:
142 intermediate levels of representation, especially when there are
143 more such levels, can be exploited to {\bf share
144 statistical strength across different but related types of examples},
145 such as examples coming from other tasks than the task of interest
146 (the multi-task setting), or examples coming from an overlapping
147 but different distribution (images with different kinds of perturbations
148 and noises, here). This is consistent with the hypotheses discussed
149 in~\citet{Bengio-2009} regarding the potential advantage
150 of deep learning and the idea that more levels of representation can
151 give rise to more abstract, more general features of the raw input.
152
153 This hypothesis is related to a learning setting called
154 {\bf self-taught learning}~\citep{RainaR2007}, which combines principles
155 of semi-supervised and multi-task learning: the learner can exploit examples
156 that are unlabeled and possibly come from a distribution different from the target
157 distribution, e.g., from other classes than those of interest.
158 It has already been shown that deep learners can clearly take advantage of
159 unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
160 but more needed to be done to explore the impact
161 of {\em out-of-distribution} examples and of the {\em multi-task} setting
(one exception is~\citet{CollobertR2008}, where unsupervised
pre-training is used and shared only at the first layer). In particular the {\em relative
164 advantage of deep learning} for these settings has not been evaluated.
165
166
167 %
168 The {\bf main claim} of this paper is that deep learners (with several levels of representation) can
169 {\bf benefit more from out-of-distribution examples than shallow learners} (with a single
170 level), both in the context of the multi-task setting and from
171 perturbed examples. Because we are able to improve on state-of-the-art
172 performance and reach human-level performance
173 on a large-scale task, we consider that this paper is also a contribution
174 to advance the application of machine learning to handwritten character recognition.
175 More precisely, we ask and answer the following questions:
176
177 %\begin{enumerate}
178 $\bullet$ %\item
179 Do the good results previously obtained with deep architectures on the
180 MNIST digit images generalize to the setting of a similar but much larger and richer
181 dataset, the NIST special database 19, with 62 classes and around 800k examples?
182
183 $\bullet$ %\item
184 To what extent does the perturbation of input images (e.g. adding
185 noise, affine transformations, background images) make the resulting
186 classifiers better not only on similarly perturbed images but also on
187 the {\em original clean examples}? We study this question in the
188 context of the 62-class and 10-class tasks of the NIST special database 19.
189
190 $\bullet$ %\item
191 Do deep architectures {\em benefit {\bf more} from such out-of-distribution}
192 examples, in particular do they benefit more from
193 examples that are perturbed versions of the examples from the task of interest?
194
195 $\bullet$ %\item
196 Similarly, does the feature learning step in deep learning algorithms benefit {\bf more}
197 from training with moderately {\em different classes} (i.e. a multi-task learning scenario) than
198 a corresponding shallow and purely supervised architecture?
199 We train on 62 classes and test on 10 (digits) or 26 (upper case or lower case)
200 to answer this question.
201 %\end{enumerate}
202
203 Our experimental results provide positive evidence towards all of these questions,
204 as well as {\bf classifiers that reach human-level performance on 62-class isolated character
205 recognition and beat previously published results on the NIST dataset (special database 19)}.
206 To achieve these results, we introduce in the next section a sophisticated system
207 for stochastically transforming character images and then explain the methodology,
208 which is based on training with or without these transformed images and testing on
209 clean ones.
Code for generating these transformations as well as for the deep learning
algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
212
213 %\vspace*{-3mm}
214 %\newpage
215 \section{Perturbed and Transformed Character Images}
216 \label{s:perturbations}
217 %\vspace*{-2mm}
218
219 Figure~\ref{fig:transform} shows the different transformations we used to stochastically
transform $32 \times 32$ source images (such as the one in Fig.~\ref{fig:torig})
221 in order to obtain data from a larger distribution which
222 covers a domain substantially larger than the clean characters distribution from
223 which we start.
224 Although character transformations have been used before to
225 improve character recognizers, this effort is on a large scale both
226 in number of classes and in the complexity of the transformations, hence
227 in the complexity of the learning task.
228 The code for these transformations (mostly Python) is available at
229 {\tt http://anonymous.url.net}. All the modules in the pipeline (Figure~\ref{fig:transform}) share
230 a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
231 amount of deformation or noise introduced.
232 There are two main parts in the pipeline. The first one,
233 from thickness to pinch, performs transformations. The second
234 part, from blur to contrast, adds different kinds of noise.
235 More details can be found in~\citep{ift6266-tr-anonymous}.
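
To make the role of the shared control parameter concrete, the following purely illustrative sketch (in Python, like the released code, but with toy placeholder modules rather than the actual transformation modules) shows how a pipeline of stochastic modules can all be driven by a single $complexity$ value:
\begin{verbatim}
# Illustrative sketch only (NOT the released pipeline code):
# toy modules sharing one global "complexity" knob in [0, 1].
import numpy as np

rng = np.random.RandomState(0)

def thickness(img, complexity):
    # toy stand-in: thicken/brighten strokes with complexity
    return np.clip(img * (1.0 + complexity), 0.0, 1.0)

def gaussian_noise(img, complexity):
    # toy stand-in: additive noise scaled by complexity
    noise = rng.normal(0.0, 0.3 * complexity, img.shape)
    return np.clip(img + noise, 0.0, 1.0)

MODULES = [thickness, gaussian_noise]  # real pipeline has many more

def perturb(img, complexity=0.7):
    # each module is applied stochastically, modulated by the
    # same global complexity parameter
    out = img.copy()
    for module in MODULES:
        if rng.uniform() < 0.5:
            out = module(out, complexity * rng.uniform())
    return out

clean = rng.uniform(size=(32, 32))     # stands for a character
noisy = perturb(clean, complexity=0.7)
\end{verbatim}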
236
237 \begin{figure*}[ht]
238 \centering
239 \subfigure[Original]{\includegraphics[scale=0.6]{images/Original.png}\label{fig:torig}}
240 \subfigure[Thickness]{\includegraphics[scale=0.6]{images/Thick_only.png}}
241 \subfigure[Slant]{\includegraphics[scale=0.6]{images/Slant_only.png}}
242 \subfigure[Affine Transformation]{\includegraphics[scale=0.6]{images/Affine_only.png}}
243 \subfigure[Local Elastic Deformation]{\includegraphics[scale=0.6]{images/Localelasticdistorsions_only.png}}
244 \subfigure[Pinch]{\includegraphics[scale=0.6]{images/Pinch_only.png}}
245 %Noise
246 \subfigure[Motion Blur]{\includegraphics[scale=0.6]{images/Motionblur_only.png}}
247 \subfigure[Occlusion]{\includegraphics[scale=0.6]{images/occlusion_only.png}}
248 \subfigure[Gaussian Smoothing]{\includegraphics[scale=0.6]{images/Bruitgauss_only.png}}
249 \subfigure[Pixels Permutation]{\includegraphics[scale=0.6]{images/Permutpixel_only.png}}
250 \subfigure[Gaussian Noise]{\includegraphics[scale=0.6]{images/Distorsiongauss_only.png}}
251 \subfigure[Background Image Addition]{\includegraphics[scale=0.6]{images/background_other_only.png}}
252 \subfigure[Salt \& Pepper]{\includegraphics[scale=0.6]{images/Poivresel_only.png}}
253 \subfigure[Scratches]{\includegraphics[scale=0.6]{images/Rature_only.png}}
254 \subfigure[Grey Level \& Contrast]{\includegraphics[scale=0.6]{images/Contrast_only.png}}
255 \caption{Top left (a): example original image. Others (b-o): examples of the effect
256 of each transformation module taken separately. Actual perturbed examples are obtained by
257 a pipeline of these, with random choices about which module to apply and how much perturbation
258 to apply.}
259 \label{fig:transform}
260 %\vspace*{-2mm}
261 \end{figure*}
262
263 %\vspace*{-3mm}
264 \section{Experimental Setup}
265 %\vspace*{-1mm}
266
267 Much previous work on deep learning had been performed on
268 the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
269 with 60,000 examples, and variants involving 10,000
270 examples~\citep{Larochelle-jmlr-2009,VincentPLarochelleH2008-very-small}.
The focus here is on much larger training sets, from 10 times to
1000 times larger, and 62 classes.
273
274 The first step in constructing the larger datasets (called NISTP and P07) is to sample from
275 a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
276 and {\bf OCR data} (scanned machine printed characters). See more in
277 Section~\ref{sec:sources} below. Once a character
278 is sampled from one of these sources (chosen randomly), the second step is to
279 apply a pipeline of transformations and/or noise processes outlined in section \ref{s:perturbations}.
280
281 To provide a baseline of error rate comparison we also estimate human performance
282 on both the 62-class task and the 10-class digits task.
283 We compare the best Multi-Layer Perceptrons (MLP) against
284 the best Stacked Denoising Auto-encoders (SDA), when
285 both models' hyper-parameters are selected to minimize the validation set error.
286 We also provide a comparison against a precise estimate
287 of human performance obtained via Amazon's Mechanical Turk (AMT)
288 service ({\tt http://mturk.com}).
289 AMT users are paid small amounts
290 of money to perform tasks for which human intelligence is required.
291 Mechanical Turk has been used extensively in natural language processing and vision.
292 %processing \citep{SnowEtAl2008} and vision
293 %\citep{SorokinAndForsyth2008,whitehill09}.
294 AMT users were presented
295 with 10 character images (from a test set) on a screen
296 and asked to label them.
297 They were forced to choose a single character class (either among the
298 62 or 10 character classes) for each image.
299 80 subjects classified 2500 images per (dataset,task) pair.
Different human labelers sometimes provided a different label for the same
301 example, and we were able to estimate the error variance due to this effect
302 because each image was classified by 3 different persons.
303 The average error of humans on the 62-class task NIST test set
304 is 18.2\%, with a standard error of 0.1\%.
We controlled noise in the labelling process by (1)
requiring AMT workers to have a higher than normal rate of accepted
responses ($>$95\%) on other tasks, (2) discarding responses that were not
complete (10 predictions), (3) discarding responses for which the
time to predict was smaller than 3 seconds for NIST (the mean response time
was 20 seconds) and 6 seconds for NISTP (average response time of
45 seconds), and (4) discarding responses which were obviously wrong (10
identical ones, or ``12345...''). Overall, after such filtering, we kept
313 approximately 95\% of the AMT workers' responses.
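
These filtering rules can be summarized by a simple predicate such as the following illustrative sketch (the field names are hypothetical and do not correspond to the actual AMT response format):
\begin{verbatim}
# Sketch of the response-filtering rules described above.
# Dictionary keys are hypothetical, not the real AMT schema.
def keep_response(resp, dataset):
    # (1) worker must have a high acceptance rate elsewhere
    if resp["worker_acceptance_rate"] <= 0.95:
        return False
    # (2) all 10 characters on the screen must be labeled
    if len(resp["labels"]) != 10:
        return False
    # (3) implausibly fast answers are discarded
    min_time = 3.0 if dataset == "NIST" else 6.0  # seconds
    if resp["response_time"] < min_time:
        return False
    # (4) obviously wrong answers are discarded
    if len(set(resp["labels"])) == 1:             # 10 identical
        return False
    if "".join(resp["labels"]).startswith("12345"):
        return False
    return True
\end{verbatim}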
314
315 %\vspace*{-3mm}
316 \subsection{Data Sources}
317 \label{sec:sources}
318 %\vspace*{-2mm}
319
320 %\begin{itemize}
321 %\item
322 {\bf NIST.}
323 Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
324 widely used for training and testing character
325 recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
The dataset is composed of 814,255 digits and characters (upper and lower case), with hand-checked classifications,
327 extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
328 corresponding to ``0''-``9'',``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
329 The fourth partition (called $hsf_4$, 82,587 examples),
330 experimentally recognized to be the most difficult one, is the one recommended
331 by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
332 for that purpose. We randomly split the remainder (731,668 examples) into a training set and a validation set for
333 model selection.
The results reported by previous work on that dataset mostly concern only the digits.
335 Here we use all the classes both in the training and testing phase. This is especially
336 useful to estimate the effect of a multi-task setting.
337 The distribution of the classes in the NIST training and test sets differs
338 substantially, with relatively many more digits in the test set, and a more uniform distribution
339 of letters in the test set (whereas in the training set they are distributed
340 more like in natural text).
341 %\vspace*{-1mm}
342
343 %\item
344 {\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from:
346 {\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
347 % TODO: pointless to anonymize, it's not pointing to our work
Including the fonts of an operating system (Windows 7), there is a total of $9817$ different fonts that we can choose from uniformly.
349 The chosen {\tt ttf} file is either used as input of the Captcha generator (see next item) or, by producing a corresponding image,
350 directly as input to our models.
351 %\vspace*{-1mm}
352
353 %\item
354 {\bf Captchas.}
355 The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator library) for
356 generating characters of the same format as the NIST dataset. This software is based on
357 a random character class generator and various kinds of transformations similar to those described in the previous sections.
358 In order to increase the variability of the data generated, many different fonts are used for generating the characters.
359 Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
360 depending on the value of the complexity parameter provided by the user of the data source.
361 %Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
362 %\vspace*{-1mm}
363
364 %\item
365 {\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters was included as an
368 additional source. This set is part of a larger corpus being collected by the Image Understanding
369 Pattern Recognition Research group led by Thomas Breuel at University of Kaiserslautern
370 ({\tt http://www.iupr.com}), and which will be publicly released.
371 %TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
372 %\end{itemize}
373
374 %\vspace*{-3mm}
375 \subsection{Data Sets}
376 %\vspace*{-2mm}
377
378 All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
379 from one of the 62 character classes.
380 %\begin{itemize}
381 %\vspace*{-1mm}
382
383 %\item
384 {\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
385 \{651,668 / 80,000 / 82,587\} \{training / validation / test\} examples.
386 %\vspace*{-1mm}
387
388 %\item
389 {\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
390 and sending them through the transformation pipeline described in section \ref{s:perturbations}.
391 For each new example to generate, a data source is selected with probability $10\%$ from the fonts,
392 $25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
393 order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
394 It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
395 obtained from the corresponding NIST sets plus other sources.
396 %\vspace*{-1mm}
397
398 %\item
399 {\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
400 except that we only apply
401 transformations from slant to pinch (see Fig.\ref{fig:transform}(b-f)).
402 Therefore, the character is
403 transformed but no additional noise is added to the image, giving images
404 closer to the NIST dataset.
405 It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
406 obtained from the corresponding NIST sets plus other sources.
407 %\end{itemize}
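
The generation procedure for P07 (and, up to the choice of modules, NISTP) can be summarized by the following illustrative sketch, where the loader and pipeline functions are placeholders rather than the actual code:
\begin{verbatim}
# Sketch of how a P07 example is generated: pick a data source
# with fixed proportions, then run the perturbation pipeline
# with a complexity drawn uniformly in [0, 0.7].
import numpy as np

rng = np.random.RandomState(0)
SOURCES = ["fonts", "captcha", "ocr", "nist"]
PROPORTIONS = [0.10, 0.25, 0.25, 0.40]

def load_random_character(source):
    # placeholder for the actual data loaders
    return rng.uniform(size=(32, 32))

def perturb(img, complexity, with_noise=True):
    # placeholder for the transformation (and noise) pipeline
    if with_noise:
        img = img + rng.normal(0.0, complexity, img.shape)
    return np.clip(img, 0.0, 1.0)

def generate_p07_example():
    source = rng.choice(SOURCES, p=PROPORTIONS)
    img = load_random_character(source)
    complexity = rng.uniform(0.0, 0.7)
    # NISTP is built the same way, except that only the
    # deformation modules (slant ... pinch) are applied,
    # without the noise modules.
    return perturb(img, complexity)

example = generate_p07_example()
\end{verbatim}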
408
409 \begin{figure*}[ht]
410 %\vspace*{-2mm}
411 \centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
412 %\vspace*{-2mm}
413 \caption{Illustration of the computations and training criterion for the denoising
414 auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
415 the layer (i.e. raw input or output of previous layer)
is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
417 The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
418 is compared to the uncorrupted input $x$ through the loss function
419 $L_H(x,z)$, whose expected value is approximately minimized during training
420 by tuning $\theta$ and $\theta'$.}
421 \label{fig:da}
422 %\vspace*{-2mm}
423 \end{figure*}
424
425 %\vspace*{-3mm}
426 \subsection{Models and their Hyper-parameters}
427 %\vspace*{-2mm}
428
429 The experiments are performed using MLPs (with a single
430 hidden layer) and deep SDAs.
431 \emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}
432
{\bf Multi-Layer Perceptrons (MLP).} The MLP estimates
\[
P({\rm class}|{\rm input}=x)
\]
with
\[
f(x)={\rm softmax}(b_2+W_2\tanh(b_1+W_1 x)),
\]
i.e., two layers, where
\[
p={\rm softmax}(a)
\]
means that
\[
p_i=\exp(a_i)/\sum_j \exp(a_j),
\]
with $p_i$ the estimated probability
of class $i$. Here $\tanh$ is the element-wise
hyperbolic tangent, the $b_i$ are parameter vectors, and the $W_i$ are
parameter matrices (one per layer). The
453 number of rows of $W_1$ is called the number of hidden units (of the
454 single hidden layer, here), and
455 is one way to control capacity (the main other ways to control capacity are
456 the number of training iterations and optionally a regularization penalty
457 on the parameters, not used here because it did not help).
458 Whereas previous work had compared
459 deep architectures to both shallow MLPs and SVMs, we only compared to MLPs
460 here because of the very large datasets used (making the use of SVMs
461 computationally challenging because of their quadratic scaling
462 behavior). Preliminary experiments on training SVMs (libSVM) with subsets
463 of the training set allowing the program to fit in memory yielded
substantially worse results than those obtained with MLPs.\footnote{RBF SVMs
465 trained with a subset of NISTP or NIST, 100k examples, to fit in memory,
466 yielded 64\% test error or worse; online linear SVMs trained on the whole
467 of NIST or 800k from NISTP yielded no better than 42\% error; slightly
468 better results were obtained by sparsifying the pixel intensities and
469 projecting to a second-order polynomial (a very sparse vector), still
470 41\% error. We expect that better results could be obtained with a
471 better implementation allowing for training with more examples and
472 a higher-order non-linear projection.} For training on nearly a hundred million examples (with the
473 perturbed data), the MLPs and SDA are much more convenient than classifiers
474 based on kernel methods. The MLP has a single hidden layer with $\tanh$
475 activation functions, and softmax (normalized exponentials) on the output
476 layer for estimating $P({\rm class} | {\rm input})$. The number of hidden units is
477 taken in $\{300,500,800,1000,1500\}$. Training examples are presented in
478 minibatches of size 20, i.e., the parameters are iteratively updated in the direction
479 of the mean gradient of the next 20 examples. A constant learning rate was chosen among $\{0.001,
480 0.01, 0.025, 0.075, 0.1, 0.5\}$.
481 %through preliminary experiments (measuring performance on a validation set),
482 %and $0.1$ (which was found to work best) was then selected for optimizing on
483 %the whole training sets.
484 %\vspace*{-1mm}
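
For reference, the forward computation described above corresponds to the following minimal sketch (plain NumPy; the actual experiments used a GPU implementation, and the initialization scale shown here is an arbitrary choice for illustration):
\begin{verbatim}
# Minimal sketch of the two-layer MLP forward pass described
# above (illustration only, not the experimental code).
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())           # for numerical stability
    return e / e.sum()

def mlp_forward(x, W1, b1, W2, b2):
    h = np.tanh(b1 + W1.dot(x))       # single hidden layer
    return softmax(b2 + W2.dot(h))    # P(class | input = x)

rng = np.random.RandomState(0)
n_in, n_hidden, n_classes = 32 * 32, 1000, 62
W1 = rng.normal(0.0, 0.01, (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_classes, n_hidden))
b2 = np.zeros(n_classes)

x = rng.uniform(size=n_in)            # a flattened 32x32 image
p = mlp_forward(x, W1, b1, W2, b2)    # 62 class probabilities
\end{verbatim}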
485
486
487 {\bf Stacked Denoising Auto-encoders (SDA).}
488 Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
489 can be used to initialize the weights of each layer of a deep MLP (with many hidden
490 layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
apparently setting the parameters in a
basin of attraction of supervised gradient descent that yields better
generalization~\citep{Erhan+al-2010}. This initial {\em unsupervised
494 pre-training phase} uses all of the training images but not the training labels.
495 Each layer is trained in turn to produce a new representation of its input
496 (starting from the raw pixels).
497 It is hypothesized that the
498 advantage brought by this procedure stems from a better prior,
499 on the one hand taking advantage of the link between the input
500 distribution $P(x)$ and the conditional distribution of interest
501 $P(y|x)$ (like in semi-supervised learning), and on the other hand
502 taking advantage of the expressive power and bias implicit in the
503 deep architecture (whereby complex concepts are expressed as
504 compositions of simpler ones through a deep hierarchy).
505
506 Here we chose to use the Denoising
507 Auto-encoder~\citep{VincentPLarochelleH2008-very-small} as the building block for
508 these deep hierarchies of features, as it is simple to train and
explain (see Figure~\ref{fig:da}, as well as
the tutorial and code at {\tt http://deeplearning.net/tutorial}),
511 provides efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008-very-small}. In fact, it corresponds to a Gaussian
RBM trained by a Score Matching criterion~\citep{Vincent-SM-2010}.
515 During its unsupervised training, a Denoising
516 Auto-encoder is presented with a stochastically corrupted version $\tilde{x}$
of the input $x$ and trained to produce a reconstruction $z$
of the uncorrupted input $x$. Because the network has to denoise, it is
forced to make the hidden units $y$ represent the leading regularities in
the data. Following~\citet{VincentPLarochelleH2008-very-small},
the hidden units' output $y$ is obtained through
522 \[
523 y={\rm sigm}(c+V x)
524 \]
525 where ${\rm sigm}(a)=1/(1+\exp(-a))$
526 and the reconstruction is
527 \[
528 z={\rm sigm}(d+V' y).
529 \]
530 We minimize the training
531 set average of the cross-entropy
532 reconstruction error
533 \[
L_H(x,z)=-\sum_i \left[ x_i \log z_i + (1-x_i) \log(1-z_i) \right].
535 \]
536 Here we use the random binary masking corruption
537 (which in $\tilde{x}$ sets to 0 a random subset of the elements of $x$, and
538 copies the rest).
539 Once the first denoising auto-encoder is trained, its parameters can be used
540 to set the first layer of the deep MLP. The original data are then processed
541 through that first layer, and the output of the hidden units form a new
542 representation that can be used as input data for training a second denoising
543 auto-encoder, still in a purely unsupervised way.
544 This is repeated for the desired number of hidden layers.
545 After this unsupervised pre-training stage, the parameters
546 are used to initialize a deep MLP (similar to the above, but
547 with more layers), which is fine-tuned by
548 the same standard procedure (stochastic gradient descent)
549 used to train MLPs in general (see above).
550 The top layer parameters of the deep MLP (the one which outputs the
551 class probabilities and takes the top hidden layer as input) can
552 be initialized at 0.
553 The SDA hyper-parameters are the same as for the MLP, with the addition of the
554 amount of corruption noise (we used the masking noise process, whereby a
555 fixed proportion of the input values, randomly selected, are zeroed), and a
556 separate learning rate for the unsupervised pre-training stage (selected
557 from the same above set). The fraction of inputs corrupted was selected
558 among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
559 of hidden layers but it was fixed to 3 for our experiments,
560 based on previous work with
561 SDAs on MNIST~\citep{VincentPLarochelleH2008-very-small}.
562 We also compared against 1 and against 2 hidden layers, in order
to disentangle the effect of depth from the effect of unsupervised
564 pre-training.
The size of the hidden layers was kept constant across layers, and the
best results were obtained with the largest value we had the patience
to experiment with, 1000 hidden units.
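
The per-layer computations and the greedy stacking procedure can be sketched as follows (forward computations only; the stochastic gradient updates of $V$, $V'$, $c$ and $d$ are omitted, and the initialization is an arbitrary choice for illustration):
\begin{verbatim}
# Sketch of one denoising auto-encoder layer and greedy
# stacking (illustration only, not the experimental code).
import numpy as np

rng = np.random.RandomState(0)

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

def corrupt(x, fraction):
    # masking noise: set a random subset of the inputs to 0
    keep = rng.uniform(size=x.shape) >= fraction
    return x * keep

def da_layer(x, V, Vp, c, d, fraction=0.2):
    x_tilde = corrupt(x, fraction)
    y = sigm(c + V.dot(x_tilde))      # code (hidden units)
    z = sigm(d + Vp.dot(y))           # reconstruction
    # cross-entropy reconstruction error L_H(x, z)
    loss = -np.sum(x * np.log(z) + (1 - x) * np.log(1 - z))
    return y, z, loss

# greedy stacking: the code of layer k becomes the input used
# to train layer k+1 (each layer would be trained by SGD on
# the reconstruction loss before moving up)
sizes = [32 * 32, 1000, 1000, 1000]
rep = rng.uniform(0.05, 0.95, size=sizes[0])  # stands for an image
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    V = rng.normal(0.0, 0.01, (n_out, n_in))
    Vp = rng.normal(0.0, 0.01, (n_in, n_out))
    c, d = np.zeros(n_out), np.zeros(n_in)
    rep, _, _ = da_layer(rep, V, Vp, c, d)
\end{verbatim}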
569
570 %\vspace*{-1mm}
571
572 \begin{figure*}[ht]
573 %\vspace*{-2mm}
574 \centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
575 %\vspace*{-3mm}
576 \caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
577 on NIST, 1 on NISTP, and 2 on P07. Left: overall results
578 of all models, on NIST and NISTP test sets.
579 Right: error rates on NIST test digits only, along with the previous results from
580 literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
581 respectively based on ART, nearest neighbors, MLPs, and SVMs.}
582 \label{fig:error-rates-charts}
583 %\vspace*{-2mm}
584 \end{figure*}
585
586
587 \begin{figure*}[ht]
588 \vspace*{-3mm}
589 \centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
590 \vspace*{-3mm}
591 \caption{Relative improvement in error rate due to out-of-distribution examples.
592 Left: Improvement (or loss, when negative)
593 induced by out-of-distribution examples (perturbed data).
594 Right: Improvement (or loss, when negative) induced by multi-task
595 learning (training on all classes and testing only on either digits,
596 upper case, or lower-case). The deep learner (SDA) benefits more from
597 out-of-distribution examples, compared to the shallow MLP.}
598 \label{fig:improvements-charts}
599 \vspace*{-2mm}
600 \end{figure*}
601
602 \vspace*{-2mm}
603 \section{Experimental Results}
604 \vspace*{-2mm}
605
606 %%\vspace*{-1mm}
607 %\subsection{SDA vs MLP vs Humans}
608 %%\vspace*{-1mm}
609 The models are either trained on NIST (MLP0 and SDA0),
610 NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
611 on either NIST, NISTP or P07 (regardless of the data set used for training),
612 either on the 62-class task
613 or on the 10-digits task. Training time (including about half
614 for unsupervised pre-training, for DAs) on the larger
615 datasets is around one day on a GPU (GTX 285).
616 Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
617 comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
618 SDA2), along with the previous results on the digits NIST special database
19 test set from the literature, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor
search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
SVMs~\citep{Milgram+al-2005}.% More detailed and complete numerical results
623 %(figures and tables, including standard errors on the error rates) can be
624 %found in Appendix.
625 The deep learner not only outperformed the shallow ones and
626 previously published performance (in a statistically and qualitatively
significant way) but, when trained with perturbed data,
reached human performance on both the 62-class task
629 and the 10-class (digits) task.
630 17\% error (SDA1) or 18\% error (humans) may seem large but a large
631 majority of the errors from humans and from SDA1 are from out-of-context
632 confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
``c'' and a ``C'' are often indistinguishable).
634 Regarding shallower networks pre-trained with unsupervised denoising
auto-encoders, we find that the NIST test error is 21\% with one hidden
636 layer and 20\% with two hidden layers (vs 17\% in the same conditions
637 with 3 hidden layers). Compare this with the 23\% error achieved
638 by the MLP, i.e. a single hidden layer and no unsupervised pre-training.
As found in previous work~\citep{Erhan+al-2010,Larochelle-jmlr-2009},
640 these results show that both depth and
641 unsupervised pre-training need to be combined in order to achieve
642 the best results.
643
644
645 In addition, as shown in the left of
646 Figure~\ref{fig:improvements-charts}, the relative improvement in error
647 rate brought by out-of-distribution examples is greater for the deep
648 SDA, and these
649 differences with the shallow MLP are statistically and qualitatively
650 significant.
651 The left side of the figure shows the improvement to the clean
652 NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07),
654 over the models trained exclusively on NIST (respectively SDA0 and MLP0).
655 Relative percent change is measured by taking
656 $100 \% \times$ (original model's error / perturbed-data model's error - 1).
657 The right side of
658 Figure~\ref{fig:improvements-charts} shows the relative improvement
659 brought by the use of a multi-task setting, in which the same model is
660 trained for more classes than the target classes of interest (i.e. training
661 with all 62 classes when the target classes are respectively the digits,
662 lower-case, or upper-case characters). Again, whereas the gain from the
663 multi-task setting is marginal or negative for the MLP, it is substantial
664 for the SDA. Note that to simplify these multi-task experiments, only the original
665 NIST dataset is used. For example, the MLP-digits bar shows the relative
666 percent improvement in MLP error rate on the NIST digits test set
667 as $100\% \times$ (single-task
668 model's error / multi-task model's error - 1). The single-task model is
669 trained with only 10 outputs (one per digit), seeing only digit examples,
670 whereas the multi-task model is trained with 62 outputs, with all 62
671 character classes as examples. Hence the hidden units are shared across
672 all tasks. For the multi-task model, the digit error rate is measured by
673 comparing the correct digit class with the output class associated with the
674 maximum conditional probability among only the digit classes outputs. The
675 setting is similar for the other two target classes (lower case characters
676 and upper case characters). Note however that some types of perturbations
677 (NISTP) help more than others (P07) when testing on the clean images.
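
As a worked illustration of these two measurements (the error rates and array sizes used below are placeholders, not additional experimental results):
\begin{verbatim}
# Worked illustration of the two measurements described above.
# Numbers below are hypothetical placeholders, not results.
import numpy as np

def relative_change(reference_error, new_error):
    # 100% x (original model's error / perturbed-data (or
    # multi-task) model's error - 1); positive = improvement
    return 100.0 * (reference_error / new_error - 1.0)

print(relative_change(0.24, 0.17))    # about +41%

def digit_error_rate(probs, targets, digit_classes=range(10)):
    # multi-task evaluation: predicted class is the argmax of
    # the conditional probabilities restricted to digit outputs
    digit_classes = np.asarray(list(digit_classes))
    restricted = probs[:, digit_classes]
    preds = digit_classes[restricted.argmax(axis=1)]
    return float(np.mean(preds != targets))

# toy usage: 5 examples, 62 outputs, digit targets
rng = np.random.RandomState(0)
probs = rng.dirichlet(np.ones(62), size=5)
targets = rng.randint(0, 10, size=5)
print(digit_error_rate(probs, targets))
\end{verbatim}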
678 %%\vspace*{-1mm}
679 %\subsection{Perturbed Training Data More Helpful for SDA}
680 %%\vspace*{-1mm}
681
682 %%\vspace*{-1mm}
683 %\subsection{Multi-Task Learning Effects}
684 %%\vspace*{-1mm}
685
686 \iffalse
687 As previously seen, the SDA is better able to benefit from the
688 transformations applied to the data than the MLP. In this experiment we
689 define three tasks: recognizing digits (knowing that the input is a digit),
690 recognizing upper case characters (knowing that the input is one), and
691 recognizing lower case characters (knowing that the input is one). We
692 consider the digit classification task as the target task and we want to
693 evaluate whether training with the other tasks can help or hurt, and
694 whether the effect is different for MLPs versus SDAs. The goal is to find
695 out if deep learning can benefit more (or less) from multiple related tasks
696 (i.e. the multi-task setting) compared to a corresponding purely supervised
697 shallow learner.
698
699 We use a single hidden layer MLP with 1000 hidden units, and a SDA
700 with 3 hidden layers (1000 hidden units per layer), pre-trained and
701 fine-tuned on NIST.
702
703 Our results show that the MLP benefits marginally from the multi-task setting
704 in the case of digits (5\% relative improvement) but is actually hurt in the case
705 of characters (respectively 3\% and 4\% worse for lower and upper class characters).
706 On the other hand the SDA benefited from the multi-task setting, with relative
707 error rate improvements of 27\%, 15\% and 13\% respectively for digits,
708 lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
709 \fi
710
711
712 \vspace*{-2mm}
713 \section{Conclusions and Discussion}
714 \vspace*{-2mm}
715
716 We have found that out-of-distribution examples (multi-task learning
717 and perturbed examples) are more beneficial
718 to a deep learner than to a traditional shallow and purely
719 supervised learner. More precisely,
720 the answers are positive for all the questions asked in the introduction.
721 %\begin{itemize}
722
723 $\bullet$ %\item
724 {\bf Do the good results previously obtained with deep architectures on the
725 MNIST digits generalize to a much larger and richer (but similar)
726 dataset, the NIST special database 19, with 62 classes and around 800k examples}?
727 Yes, the SDA {\em systematically outperformed the MLP and all the previously
728 published results on this dataset} (the ones that we are aware of), {\em in fact reaching human-level
729 performance} at around 17\% error on the 62-class task and 1.4\% on the digits,
730 and beating previously published results on the same data.
731
732 $\bullet$ %\item
733 {\bf To what extent do out-of-distribution examples help deep learners,
734 and do they help them more than shallow supervised ones}?
735 We found that distorted training examples not only made the resulting
736 classifier better on similarly perturbed images but also on
the {\em original clean examples}, and, more importantly (and this is the more novel finding),
738 that deep architectures benefit more from such {\em out-of-distribution}
739 examples. Shallow MLPs were helped by perturbed training examples when tested on perturbed input
740 images (65\% relative improvement on NISTP)
741 but only marginally helped (5\% relative improvement on all classes)
742 or even hurt (10\% relative loss on digits)
743 with respect to clean examples. On the other hand, the deep SDAs
744 were significantly boosted by these out-of-distribution examples.
745 Similarly, whereas the improvement due to the multi-task setting was marginal or
746 negative for the MLP (from +5.6\% to -3.6\% relative change),
747 it was quite significant for the SDA (from +13\% to +27\% relative change),
748 which may be explained by the arguments below.
749 Since out-of-distribution data
750 (perturbed or from other related classes) is very common, this conclusion
751 is of practical importance.
752 %\end{itemize}
753
754 In the original self-taught learning framework~\citep{RainaR2007}, the
755 out-of-sample examples were used as a source of unsupervised data, and
756 experiments showed its positive effects in a \emph{limited labeled data}
757 scenario. However, many of the results by \citet{RainaR2007} (who used a
758 shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
759 learning vs ordinary supervised learning} diminishes as the number of labeled examples increases.
760 We note instead that, for deep
761 architectures, our experiments show that such a positive effect is accomplished
762 even in a scenario with a \emph{large number of labeled examples},
763 i.e., here, the relative gain of self-taught learning and
764 out-of-distribution examples is probably preserved
765 in the asymptotic regime. However, note that in our perturbation experiments
766 (but not in our multi-task experiments),
767 even the out-of-distribution examples are labeled, unlike in the
768 earlier self-taught learning experiments~\citep{RainaR2007}.
769
770 {\bf Why would deep learners benefit more from the self-taught learning
771 framework and out-of-distribution examples}?
772 The key idea is that the lower layers of the predictor compute a hierarchy
773 of features that can be shared across tasks or across variants of the
774 input distribution. A theoretical analysis of generalization improvements
775 due to sharing of intermediate features across tasks already points
towards that explanation~\citep{baxter95a}.
777 Intermediate features that can be used in different
contexts can be estimated in a way that allows one to share statistical
779 strength. Features extracted through many levels are more likely to
780 be more abstract and more invariant to some of the factors of variation
781 in the underlying distribution (as the experiments in~\citet{Goodfellow2009} suggest),
782 increasing the likelihood that they would be useful for a larger array
783 of tasks and input conditions.
784 Therefore, we hypothesize that both depth and unsupervised
785 pre-training play a part in explaining the advantages observed here, and future
experiments could attempt to tease apart these factors.
787 And why would deep learners benefit from the self-taught learning
788 scenarios even when the number of labeled examples is very large?
789 We hypothesize that this is related to the hypotheses studied
790 in~\citet{Erhan+al-2010}. In~\citet{Erhan+al-2010}
791 it was found that online learning on a huge dataset did not make the
792 advantage of the deep learning bias vanish, and a similar phenomenon
793 may be happening here. We hypothesize that unsupervised pre-training
794 of a deep hierarchy with out-of-distribution examples initializes the
795 model in the basin of attraction of supervised gradient descent
796 that corresponds to better generalization. Furthermore, such good
797 basins of attraction are not discovered by pure supervised learning
798 (with or without out-of-distribution examples) from random initialization, and more labeled examples
do not allow the shallow or purely supervised models to discover
800 the kind of better basins associated
801 with deep learning and out-of-distribution examples.
802
803 A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
804 can be executed on-line at the anonymous site {\tt http://deep.host22.com}.
805
806 \iffalse
807 \section*{Appendix I: Detailed Numerical Results}
808
809 These tables correspond to Figures 2 and 3 and contain the raw error rates for each model and dataset considered.
810 They also contain additional data such as test errors on P07 and standard errors.
811
812 \begin{table}[ht]
813 \caption{Overall comparison of error rates ($\pm$ std.err.) on 62 character classes (10 digits +
814 26 lower + 26 upper), except for last columns -- digits only, between deep architecture with pre-training
815 (SDA=Stacked Denoising Autoencoder) and ordinary shallow architecture
816 (MLP=Multi-Layer Perceptron). The models shown are all trained using perturbed data (NISTP or P07)
817 and using a validation set to select hyper-parameters and other training choices.
818 \{SDA,MLP\}0 are trained on NIST,
819 \{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
820 The human error rate on digits is a lower bound because it does not count digits that were
821 recognized as letters. For comparison, the results found in the literature
822 on NIST digits classification using the same test set are included.}
823 \label{tab:sda-vs-mlp-vs-humans}
824 \begin{center}
825 \begin{tabular}{|l|r|r|r|r|} \hline
826 & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
827 Humans& 18.2\% $\pm$.1\% & 39.4\%$\pm$.1\% & 46.9\%$\pm$.1\% & $1.4\%$ \\ \hline
828 SDA0 & 23.7\% $\pm$.14\% & 65.2\%$\pm$.34\% & 97.45\%$\pm$.06\% & 2.7\% $\pm$.14\%\\ \hline
829 SDA1 & 17.1\% $\pm$.13\% & 29.7\%$\pm$.3\% & 29.7\%$\pm$.3\% & 1.4\% $\pm$.1\%\\ \hline
830 SDA2 & 18.7\% $\pm$.13\% & 33.6\%$\pm$.3\% & 39.9\%$\pm$.17\% & 1.7\% $\pm$.1\%\\ \hline
831 MLP0 & 24.2\% $\pm$.15\% & 68.8\%$\pm$.33\% & 78.70\%$\pm$.14\% & 3.45\% $\pm$.15\% \\ \hline
832 MLP1 & 23.0\% $\pm$.15\% & 41.8\%$\pm$.35\% & 90.4\%$\pm$.1\% & 3.85\% $\pm$.16\% \\ \hline
833 MLP2 & 24.3\% $\pm$.15\% & 46.0\%$\pm$.35\% & 54.7\%$\pm$.17\% & 4.85\% $\pm$.18\% \\ \hline
834 \citep{Granger+al-2007} & & & & 4.95\% $\pm$.18\% \\ \hline
835 \citep{Cortes+al-2000} & & & & 3.71\% $\pm$.16\% \\ \hline
836 \citep{Oliveira+al-2002} & & & & 2.4\% $\pm$.13\% \\ \hline
837 \citep{Milgram+al-2005} & & & & 2.1\% $\pm$.12\% \\ \hline
838 \end{tabular}
839 \end{center}
840 \end{table}
841
842 \begin{table}[ht]
843 \caption{Relative change in error rates due to the use of perturbed training data,
844 either using NISTP, for the MLP1/SDA1 models, or using P07, for the MLP2/SDA2 models.
845 A positive value indicates that training on the perturbed data helped for the
846 given test set (the first 3 columns on the 62-class tasks and the last one is
847 on the clean 10-class digits). Clearly, the deep learning models did benefit more
848 from perturbed training data, even when testing on clean data, whereas the MLP
849 trained on perturbed data performed worse on the clean digits and about the same
850 on the clean characters. }
851 \label{tab:perturbation-effect}
852 \begin{center}
853 \begin{tabular}{|l|r|r|r|r|} \hline
854 & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
855 SDA0/SDA1-1 & 38\% & 84\% & 228\% & 93\% \\ \hline
856 SDA0/SDA2-1 & 27\% & 94\% & 144\% & 59\% \\ \hline
857 MLP0/MLP1-1 & 5.2\% & 65\% & -13\% & -10\% \\ \hline
858 MLP0/MLP2-1 & -0.4\% & 49\% & 44\% & -29\% \\ \hline
859 \end{tabular}
860 \end{center}
861 \end{table}
862
863 \begin{table}[ht]
864 \caption{Test error rates and relative change in error rates due to the use of
865 a multi-task setting, i.e., training on each task in isolation vs training
866 for all three tasks together, for MLPs vs SDAs. The SDA benefits much
more from the multi-task setting. All experiments are only on the
868 unperturbed NIST data, using validation error for model selection.
869 Relative improvement is 1 - single-task error / multi-task error.}
870 \label{tab:multi-task}
871 \begin{center}
872 \begin{tabular}{|l|r|r|r|} \hline
873 & single-task & multi-task & relative \\
874 & setting & setting & improvement \\ \hline
875 MLP-digits & 3.77\% & 3.99\% & 5.6\% \\ \hline
876 MLP-lower & 17.4\% & 16.8\% & -4.1\% \\ \hline
877 MLP-upper & 7.84\% & 7.54\% & -3.6\% \\ \hline
878 SDA-digits & 2.6\% & 3.56\% & 27\% \\ \hline
879 SDA-lower & 12.3\% & 14.4\% & 15\% \\ \hline
880 SDA-upper & 5.93\% & 6.78\% & 13\% \\ \hline
881 \end{tabular}
882 \end{center}
883 \end{table}
884
885 \fi
886
887 %\afterpage{\clearpage}
888 %\clearpage
889 {
890 %\bibliographystyle{spbasic} % basic style, author-year citations
891 \bibliographystyle{plainnat}
892 \bibliography{strings,strings-short,strings-shorter,ift6266_ml,specials,aigaion-shorter}
893 %\bibliographystyle{unsrtnat}
894 %\bibliographystyle{apalike}
895 }
896
897
898 \end{document}