%\documentclass[twoside,11pt]{article} % For LaTeX2e
\documentclass{article} % For LaTeX2e
\usepackage{aistats2e_2011}
%\usepackage{times}
\usepackage{wrapfig}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{bbm}
\usepackage[utf8]{inputenc}
\usepackage[psamsfonts]{amssymb}
%\usepackage{algorithm,algorithmic} % not used after all
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\addtolength{\textwidth}{10mm}
\addtolength{\evensidemargin}{-5mm}
\addtolength{\oddsidemargin}{-5mm}

%\setlength\parindent{0mm}

\begin{document}

\twocolumn[
\aistatstitle{Deep Learners Benefit More from Out-of-Distribution Examples}
\runningtitle{Deep Learners for Out-of-Distribution Examples}
\runningauthor{Bengio et al.}
\aistatsauthor{Anonymous Authors\\
\vspace*{5mm}}]
\iffalse
Yoshua Bengio \and
Frédéric Bastien \and
Arnaud Bergeron \and
Nicolas Boulanger-Lewandowski \and
Thomas Breuel \and
Youssouf Chherawala \and
Moustapha Cisse \and
Myriam Côté \and
Dumitru Erhan \and
Jeremy Eustache \and
Xavier Glorot \and
Xavier Muller \and
Sylvain Pannetier Lebeuf \and
Razvan Pascanu \and
Salah Rifai \and
Francois Savard \and
Guillaume Sicard
%}
\fi
%\aistatsaddress{Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
%\date{{\tt bengioy@iro.umontreal.ca}, Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
%\jmlrheading{}{2010}{}{10/2010}{XX/2011}{Yoshua Bengio et al}
%\editor{}

%\makeanontitle
%\maketitle

%{\bf Running title: Deep Self-Taught Learning}

\vspace*{5mm}
\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because they can be shared across tasks and examples from different but related distributions, can yield even more benefits. Comparative experiments were performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits), using both a multi-task setting and perturbed examples in order to obtain out-of-distribution examples. The results agree with the hypothesis, and show that a deep learner {\em beat previously published results and reached human-level performance}.
\end{abstract}
%\vspace*{-3mm}

%\begin{keywords}
%Deep learning, self-taught learning, out-of-distribution examples, handwritten character recognition, multi-task learning
%\end{keywords}
%\keywords{self-taught learning \and multi-task learning \and out-of-distribution examples \and deep learning \and handwriting recognition}


\section{Introduction}
%\vspace*{-1mm}

{\bf Deep Learning} has emerged as a promising new area of research in
statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008-very-small,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand,
and are organized in a hierarchy with multiple levels.
This is in part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation of the raw visual input. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex) \citep{HonglakL2008}, and that they become more and
more invariant to factors of variation (such as camera movement) in
higher layers~\citep{Goodfellow2009}.
It has been hypothesized that learning a hierarchy of features increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}.
The principle is that each layer, starting from
the bottom, is trained to represent its input (the output of the previous
layer). After this
unsupervised initialization, the stack of layers can be
converted into a deep supervised feedforward neural network and fine-tuned by
stochastic gradient descent.
One of these layer initialization techniques,
applied here, is the Denoising
Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see
Figure~\ref{fig:da}), which performed similarly or
better~\citep{VincentPLarochelleH2008-very-small} than the previously
proposed Restricted Boltzmann Machines (RBM)~\citep{Hinton06}
in terms of unsupervised extraction
of a hierarchy of features useful for classification. Each layer is trained
to denoise its input, creating a layer of features that can be used as
input for the next layer, forming a Stacked Denoising Auto-encoder (SDA).
Note that training a Denoising Auto-encoder
can actually be seen as training a particular RBM by an inductive
principle different from maximum likelihood~\citep{Vincent-SM-2010},
namely by Score Matching~\citep{Hyvarinen-2005,HyvarinenA2008}.
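
For concreteness, a typical parametrization of the DA (with the notation
of Figure~\ref{fig:da}; the sigmoid non-linearities and cross-entropy
reconstruction loss below are one common choice, used here for
illustration) corrupts the input $x$ into $\tilde{x}$, encodes it,
decodes it, and penalizes the reconstruction error:
\begin{align*}
\tilde{x} &\sim q(\tilde{x} \mid x) \quad \text{(e.g., a random subset of inputs set to 0)}\\
y &= f_\theta(\tilde{x}) = \mathrm{sigmoid}(W \tilde{x} + b)\\
z &= g_{\theta'}(y) = \mathrm{sigmoid}(W' y + b')\\
L_H(x,z) &= - \sum_i \left[ x_i \log z_i + (1 - x_i) \log (1 - z_i) \right].
\end{align*}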

Previous comparative experimental results with stacking of RBMs and DAs
to build deep supervised predictors had shown that they could outperform
shallow architectures in a variety of settings, especially
when the data involves complex interactions between many factors of
variation~\citep{LarochelleH2007,Bengio-2009}. Other experiments have suggested
that the unsupervised layer-wise pre-training acted as a useful
prior~\citep{Erhan+al-2010} that allows one to initialize a deep
neural network in a much smaller region of parameter space,
corresponding to better generalization.

To further the understanding of the reasons for the good performance
observed with deep learners, we focus here on the following {\em hypothesis}:
intermediate levels of representation, especially when there are
more such levels, can be exploited to {\bf share
statistical strength across different but related types of examples},
such as examples coming from other tasks than the task of interest
(the multi-task setting), or examples coming from an overlapping
but different distribution (here, images with different kinds of perturbations
and noise). This is consistent with the hypotheses discussed
in~\citet{Bengio-2009} regarding the potential advantage
of deep learning and the idea that more levels of representation can
give rise to more abstract, more general features of the raw input.

This hypothesis is related to a learning setting called
{\bf self-taught learning}~\citep{RainaR2007}, which combines principles
of semi-supervised and multi-task learning: the learner can exploit examples
that are unlabeled and possibly come from a distribution different from the target
distribution, e.g., from other classes than those of interest.
It has already been shown that deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
but more needed to be done to explore the impact
of {\em out-of-distribution} examples and of the {\em multi-task} setting
(one exception is~\citet{CollobertR2008}, which shares and uses unsupervised
pre-training only with the first layer). In particular, the {\em relative
advantage of deep learning} for these settings has not been evaluated.


%
The {\bf main claim} of this paper is that deep learners (with several levels of representation) can
{\bf benefit more from out-of-distribution examples than shallow learners} (with a single
level), both in the context of the multi-task setting and from
perturbed examples. Because we are able to improve on state-of-the-art
performance and reach human-level performance
on a large-scale task, we consider that this paper is also a contribution
to advance the application of machine learning to handwritten character recognition.
More precisely, we ask and answer the following questions:

%\begin{enumerate}
$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digit images generalize to the setting of a similar but much larger and richer
dataset, the NIST special database 19, with 62 classes and around 800k examples?

$\bullet$ %\item
To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifiers better not only on similarly perturbed images but also on
the {\em original clean examples}? We study this question in the
context of the 62-class and 10-class tasks of the NIST special database 19.

$\bullet$ %\item
Do deep architectures {\em benefit {\bf more} from such out-of-distribution}
examples, in particular do they benefit more from
examples that are perturbed versions of the examples from the task of interest?

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit {\bf more}
from training with moderately {\em different classes} (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
We train on 62 classes and test on 10 (digits) or 26 (upper case or lower case)
to answer this question.
%\end{enumerate}

Our experimental results provide positive evidence towards all of these questions,
as well as {\bf classifiers that reach human-level performance on 62-class isolated character
recognition and beat previously published results on the NIST dataset (special database 19)}.
To achieve these results, we introduce in the next section a sophisticated system
for stochastically transforming character images and then explain the methodology,
which is based on training with or without these transformed images and testing on
clean ones.
Code for generating these transformations as well as for the deep learning
algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.

%\vspace*{-3mm}
%\newpage
\section{Perturbed and Transformed Character Images}
\label{s:perturbations}
%\vspace*{-2mm}

Figure~\ref{fig:transform} shows the different transformations we used to stochastically
transform $32 \times 32$ source images (such as the one in Fig.~\ref{fig:torig})
in order to obtain data from a larger distribution which
covers a domain substantially larger than the clean characters distribution from
which we start.
Although character transformations have been used before to
improve character recognizers, this effort is on a large scale both
in the number of classes and in the complexity of the transformations, hence
in the complexity of the learning task.
The code for these transformations (mostly Python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline (Figure~\ref{fig:transform}) share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.
There are two main parts in the pipeline. The first one,
from thickness to pinch, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
More details can be found in~\citep{ift6266-tr-anonymous}.
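
To give a flavor of how a module consumes the global knob, the sketch
below implements three of the noise modules in plain {\tt numpy}
(the parameter scalings and application probabilities are illustrative
assumptions, not the tuned values of the actual pipeline):
\begin{verbatim}
import numpy as np

def salt_and_pepper(img, complexity, rng):
    # Flip a random fraction of pixels to 0 or 1; the fraction
    # grows with the global complexity parameter.
    out = img.copy()
    mask = rng.random(img.shape) < 0.2 * complexity
    out[mask] = rng.integers(0, 2, mask.sum()).astype(img.dtype)
    return out

def gaussian_noise(img, complexity, rng):
    # Additive Gaussian noise, clipped back to [0, 1].
    noise = rng.normal(0.0, 0.3 * complexity, img.shape)
    return np.clip(img + noise, 0.0, 1.0)

def contrast(img, complexity, rng):
    # Randomly compress the grey-level range.
    lo = rng.uniform(0.0, 0.3 * complexity)
    hi = rng.uniform(1.0 - 0.3 * complexity, 1.0)
    return lo + img * (hi - lo)

def perturb(img, complexity, rng):
    # Random choices about which module to apply, as in the text.
    for module in (salt_and_pepper, gaussian_noise, contrast):
        if rng.random() < 0.5:
            img = module(img, complexity, rng)
    return img

rng = np.random.default_rng(0)
x = rng.random((32, 32))          # stand-in for a clean character
x_tilde = perturb(x, complexity=0.7, rng=rng)
\end{verbatim}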

\begin{figure*}[ht]
\centering
\subfigure[Original]{\includegraphics[scale=0.6]{images/Original.png}\label{fig:torig}}
\subfigure[Thickness]{\includegraphics[scale=0.6]{images/Thick_only.png}}
\subfigure[Slant]{\includegraphics[scale=0.6]{images/Slant_only.png}}
\subfigure[Affine Transformation]{\includegraphics[scale=0.6]{images/Affine_only.png}}
\subfigure[Local Elastic Deformation]{\includegraphics[scale=0.6]{images/Localelasticdistorsions_only.png}}
\subfigure[Pinch]{\includegraphics[scale=0.6]{images/Pinch_only.png}}
%Noise
\subfigure[Motion Blur]{\includegraphics[scale=0.6]{images/Motionblur_only.png}}
\subfigure[Occlusion]{\includegraphics[scale=0.6]{images/occlusion_only.png}}
\subfigure[Gaussian Smoothing]{\includegraphics[scale=0.6]{images/Bruitgauss_only.png}}
\subfigure[Pixels Permutation]{\includegraphics[scale=0.6]{images/Permutpixel_only.png}}
\subfigure[Gaussian Noise]{\includegraphics[scale=0.6]{images/Distorsiongauss_only.png}}
\subfigure[Background Image Addition]{\includegraphics[scale=0.6]{images/background_other_only.png}}
\subfigure[Salt \& Pepper]{\includegraphics[scale=0.6]{images/Poivresel_only.png}}
\subfigure[Scratches]{\includegraphics[scale=0.6]{images/Rature_only.png}}
\subfigure[Grey Level \& Contrast]{\includegraphics[scale=0.6]{images/Contrast_only.png}}
\caption{Top left (a): example original image. Others (b-o): examples of the effect
of each transformation module taken separately. Actual perturbed examples are obtained by
a pipeline of these, with random choices about which module to apply and how much perturbation
to apply.}
\label{fig:transform}
%\vspace*{-2mm}
\end{figure*}

%\vspace*{-3mm}
\section{Experimental Setup}
%\vspace*{-1mm}

Much previous work on deep learning had been performed on
the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60,000 examples, and variants involving 10,000
examples~\citep{Larochelle-jmlr-2009,VincentPLarochelleH2008-very-small}.
The focus here is on much larger training sets, from 10 times to
1000 times larger, and 62 classes.

The first step in constructing the larger datasets (called NISTP and P07) is to sample from
a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
and {\bf OCR data} (scanned machine-printed characters). See more in
Section~\ref{sec:sources} below. Once a character
is sampled from one of these sources (chosen randomly), the second step is to
apply the pipeline of transformations and/or noise processes outlined in Section~\ref{s:perturbations}.

To provide a baseline of error rate comparison we also estimate human performance
on both the 62-class task and the 10-class digits task.
We compare the best Multi-Layer Perceptrons (MLP) against
the best Stacked Denoising Auto-encoders (SDA), when
both models' hyper-parameters are selected to minimize the validation set error.
We also provide a comparison against a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service ({\tt http://mturk.com}).
AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language processing and vision.
%processing \citep{SnowEtAl2008} and vision
%\citep{SorokinAndForsyth2008,whitehill09}.
AMT users were presented
with 10 character images (from a test set) on a screen
and asked to label them.
They were forced to choose a single character class (either among the
62 or 10 character classes) for each image.
80 subjects classified 2500 images per (dataset, task) pair.
Different human labelers sometimes provided a different label for the same
example, and we were able to estimate the error variance due to this effect
because each image was classified by 3 different persons.
The average error of humans on the 62-class task NIST test set
is 18.2\%, with a standard error of 0.1\%.
We controlled noise in the labelling process by (1)
requiring AMT workers with a higher-than-normal average of accepted
responses ($>$95\%) on other tasks, (2) discarding responses that were not
complete (10 predictions), (3) discarding responses for which the
time to predict was smaller than 3 seconds for NIST (the mean response time
was 20 seconds) and 6 seconds for NISTP (average response time of
45 seconds), and (4) discarding responses which were obviously wrong (10
identical ones, or ``12345...''). Overall, after such filtering, we kept
approximately 95\% of the AMT workers' responses.
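
In code form, rules (2)--(4) amount to a simple predicate applied to each
batch of 10 responses (a sketch; the argument names and the exact pattern
test for ``obviously wrong'' answers are our own assumptions):
\begin{verbatim}
def keep_response(labels, seconds, dataset):
    # labels: the 10 predicted classes; seconds: time to answer.
    if len(labels) != 10:                      # (2) incomplete batch
        return False
    if seconds < {"NIST": 3.0, "NISTP": 6.0}[dataset]:
        return False                           # (3) implausibly fast
    if len(set(labels)) == 1:                  # (4) 10 identical labels
        return False
    if "".join(labels).startswith("12345"):    # (4) "12345..." answers
        return False
    return True
\end{verbatim}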

%\vspace*{-3mm}
\subsection{Data Sources}
\label{sec:sources}
%\vspace*{-2mm}

%\begin{itemize}
%\item
{\bf NIST.}
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
The dataset is composed of 814,255 digits and characters (upper and lower case), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
The fourth partition (called $hsf_4$, 82,587 examples),
experimentally recognized to be the most difficult one, is the one recommended
by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
for that purpose. We randomly split the remainder (731,668 examples) into a training set and a validation set for
model selection.
The performance reported by previous work on that dataset mostly concerns only the digits.
Here we use all the classes both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
The distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a more uniform distribution
of letters in the test set (whereas in the training set they are distributed
more like in natural text).
%\vspace*{-1mm}

%\item
{\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from:
{\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
% TODO: pointless to anonymize, it's not pointing to our work
Including an operating system's (Windows 7) fonts, there is a total of 9,817 different fonts that we can choose uniformly from.
The chosen {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.
%\vspace*{-1mm}

%\item
{\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator library) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, many different fonts are used for generating the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source.
%Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
%\vspace*{-1mm}

%\item
{\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters was included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
%TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
%\end{itemize}

%\vspace*{-3mm}
\subsection{Data Sets}
%\vspace*{-2mm}

All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
%\begin{itemize}
%\vspace*{-1mm}

%\item
{\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
\{651,668 / 80,000 / 82,587\} \{training / validation / test\} examples.
%\vspace*{-1mm}

%\item
{\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the transformation pipeline described in Section~\ref{s:perturbations}.
For each new example to generate, a data source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
obtained from the corresponding NIST sets plus other sources.
%\vspace*{-1mm}

%\item
{\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
except that we only apply
transformations from slant to pinch (see Fig.~\ref{fig:transform}(b-f)).
Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
obtained from the corresponding NIST sets plus other sources.
%\end{itemize}
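
The two-step generative procedure for P07 and NISTP can be summarized
as follows (a sketch; \texttt{draw} and the pipeline \texttt{modules}
stand in for the actual source-sampling and transformation code):
\begin{verbatim}
import numpy as np

SOURCES = ["font", "captcha", "ocr", "nist"]
PROBS   = [0.10, 0.25, 0.25, 0.40]  # source probabilities, as above

def generate_example(draw, modules, rng, nistp=False):
    # Step 1: pick a data source at random.
    source = SOURCES[rng.choice(len(SOURCES), p=PROBS)]
    image, label = draw(source)
    # Step 2: apply the pipeline modules in order; each samples
    # its own complexity uniformly in [0, 0.7].
    for module in modules:
        if nistp and module.is_noise:
            continue  # NISTP: transformations only, no added noise
        image = module(image, rng.uniform(0.0, 0.7))
    return image, label
\end{verbatim}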

\begin{figure*}[ht]
%\vspace*{-2mm}
\centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
%\vspace*{-2mm}
\caption{Illustration of the computations and training criterion for the denoising
auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
the layer (i.e. raw input or output of previous layer)
is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
is compared to the uncorrupted input $x$ through the loss function
$L_H(x,z)$, whose expected value is approximately minimized during training
by tuning $\theta$ and $\theta'$.}
\label{fig:da}
%\vspace*{-2mm}
\end{figure*}

%\vspace*{-3mm}
\subsection{Models and their Hyper-parameters}
%\vspace*{-2mm}

The experiments are performed using MLPs (with a single
hidden layer) and deep SDAs.
\emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}

{\bf Multi-Layer Perceptrons (MLP).} Whereas previous work had compared
deep architectures to both shallow MLPs and SVMs, we only compared to MLPs
here because of the very large datasets used (making the use of SVMs
computationally challenging because of their quadratic scaling
behavior). Preliminary experiments on training SVMs (libSVM) with subsets
of the training set allowing the program to fit in memory yielded
substantially worse results than those obtained with MLPs\footnote{RBF SVMs
trained with a subset of NISTP or NIST, 100k examples, to fit in memory,
yielded 64\% test error or worse; online linear SVMs trained on the whole
of NIST or 800k from NISTP yielded no better than 42\% error; slightly
better results were obtained by sparsifying the pixel intensities and
projecting to a second-order polynomial (a very sparse vector), still
41\% error. We expect that better results could be obtained with a
better implementation allowing for training with more examples and
a higher-order non-linear projection.}. For training on nearly a hundred million examples (with the
perturbed data), the MLPs and SDA are much more convenient than classifiers
based on kernel methods. The MLP has a single hidden layer with $\tanh$
activation functions, and softmax (normalized exponentials) on the output
layer for estimating $P(class | image)$. The number of hidden units is
taken in $\{300,500,800,1000,1500\}$. Training examples are presented in
minibatches of size 20. A constant learning rate was chosen among $\{0.001,
0.01, 0.025, 0.075, 0.1, 0.5\}$.
%through preliminary experiments (measuring performance on a validation set),
%and $0.1$ (which was found to work best) was then selected for optimizing on
%the whole training sets.
%\vspace*{-1mm}
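
One SGD update of this baseline MLP can be written directly in
{\tt numpy} (a minimal sketch under the hyper-parameters above, with an
illustrative choice of one grid point; the actual implementation runs
on GPU):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 32 * 32, 1000, 62  # one point of the grid
lr, batch = 0.1, 20                        # constant learning rate

W1 = rng.normal(0, 0.01, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.01, (n_hidden, n_out)); b2 = np.zeros(n_out)

def sgd_step(X, y):
    # X: (batch, n_in) pixels in [0,1]; y: (batch,) integer labels.
    H = np.tanh(X @ W1 + b1)                # tanh hidden layer
    A = H @ W2 + b2
    A -= A.max(1, keepdims=True)            # stable softmax
    P = np.exp(A); P /= P.sum(1, keepdims=True)
    D = P.copy(); D[np.arange(len(y)), y] -= 1
    D /= len(y)                             # grad of mean neg. log-lik.
    dW2, db2 = H.T @ D, D.sum(0)
    dH = (D @ W2.T) * (1 - H ** 2)          # back-prop through tanh
    dW1, db1 = X.T @ dH, dH.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                         # in-place SGD update
\end{verbatim}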


{\bf Stacked Denoising Auto-encoders (SDA).}
Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
apparently setting parameters in the
basin of attraction of supervised gradient descent yielding better
generalization~\citep{Erhan+al-2010}. This initial {\em unsupervised
pre-training phase} uses all of the training images but not the training labels.
Each layer is trained in turn to produce a new representation of its input
(starting from the raw pixels).
It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (like in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

Here we chose to use the Denoising
Auto-encoder~\citep{VincentPLarochelleH2008-very-small} as the building block for
these deep hierarchies of features, as it is simple to train and
explain (see Figure~\ref{fig:da}, as well as the
tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008-very-small}. It actually corresponds to a Gaussian
RBM trained by a Score Matching criterion~\citep{Vincent-SM-2010}.
During training, a Denoising
Auto-encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Here we use the random binary masking corruption
(which sets to 0 a random subset of the inputs).
Once it is trained, in a purely unsupervised way,
its hidden units' activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (see above).
The SDA hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers, but it was fixed to 3 for most experiments,
based on previous work with
SDAs on MNIST~\citep{VincentPLarochelleH2008-very-small}.
We also compared against 1 and against 2 hidden layers, in order
to disentangle the effect of depth from the effect of unsupervised
pre-training.
The size of the hidden
layers was kept constant across hidden layers, and the best results
were obtained with the largest values that we could experiment
with given our patience, namely 1000 hidden units.
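
The greedy layer-wise procedure reduces to a short loop. The sketch
below (plain {\tt numpy}, with tied encoder/decoder weights as one
common choice; the learning rate and update count are placeholders)
trains each DA with masking noise on the representation produced by
the previous one:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def pretrain_layer(X, n_hidden, corruption, lr=0.01, n_updates=10000):
    n_in = X.shape[1]
    W = rng.normal(0, 0.01, (n_in, n_hidden))
    b, b_prime = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(n_updates):
        x = X[rng.integers(0, len(X), 20)]         # minibatch of 20
        x_tilde = x * (rng.random(x.shape) > corruption)  # masking
        y = sigmoid(x_tilde @ W + b)               # encoder f_theta
        z = sigmoid(y @ W.T + b_prime)             # decoder g_theta'
        dz = (z - x) / len(x)                      # grad of L_H
        dy = (dz @ W) * y * (1 - y)
        W -= lr * (x_tilde.T @ dy + dz.T @ y)      # tied weights
        b -= lr * dy.sum(0); b_prime -= lr * dz.sum(0)
    return W, b, sigmoid(X @ W + b)

def pretrain_stack(X, sizes=(1000, 1000, 1000), corruption=0.2):
    # Each layer is trained on the (uncorrupted) activations of the
    # layer below; the weights then initialize a deep MLP that is
    # fine-tuned by supervised SGD as described above.
    params = []
    for n_hidden in sizes:
        W, b, X = pretrain_layer(X, n_hidden, corruption)
        params.append((W, b))
    return params
\end{verbatim}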

%\vspace*{-1mm}

\begin{figure*}[ht]
%\vspace*{-2mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
%\vspace*{-3mm}
\caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
on NIST, 1 on NISTP, and 2 on P07. Left: overall results
of all models, on NIST and NISTP test sets.
Right: error rates on NIST test digits only, along with the previous results from the
literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
respectively based on ART, nearest neighbors, MLPs, and SVMs.}
\label{fig:error-rates-charts}
%\vspace*{-2mm}
\end{figure*}


\begin{figure*}[ht]
\vspace*{-3mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
\vspace*{-3mm}
\caption{Relative improvement in error rate due to out-of-distribution examples.
Left: Improvement (or loss, when negative)
induced by out-of-distribution examples (perturbed data).
Right: Improvement (or loss, when negative) induced by multi-task
learning (training on all classes and testing only on either digits,
upper case, or lower-case). The deep learner (SDA) benefits more from
out-of-distribution examples, compared to the shallow MLP.}
\label{fig:improvements-charts}
\vspace*{-2mm}
\end{figure*}

\vspace*{-2mm}
\section{Experimental Results}
\vspace*{-2mm}

%%\vspace*{-1mm}
%\subsection{SDA vs MLP vs Humans}
%%\vspace*{-1mm}
The models are either trained on NIST (MLP0 and SDA0),
NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
on either NIST, NISTP or P07 (regardless of the data set used for training),
either on the 62-class task
or on the 10-digits task. Training time (including about half
for unsupervised pre-training, for DAs) on the larger
datasets is around one day on a GPU (GTX 285).
Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
SDA2), along with the previous results on the digits NIST special database
19 test set from the literature, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor
search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
SVMs~\citep{Milgram+al-2005}.% More detailed and complete numerical results
%(figures and tables, including standard errors on the error rates) can be
%found in Appendix.
The deep learner not only outperformed the shallow ones and
previously published performance (in a statistically and qualitatively
significant way) but when trained with perturbed data
reaches human performance on both the 62-class task
and the 10-class (digits) task.
17\% error (SDA1) or 18\% error (humans) may seem large but a large
majority of the errors from humans and from SDA1 are from out-of-context
confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
``c'' and a ``C'' are often indistinguishable).
Regarding shallower networks pre-trained with unsupervised denoising
auto-encoders, we find that the NIST test error is 21\% with one hidden
layer and 20\% with two hidden layers (vs 17\% in the same conditions
with 3 hidden layers). Compare this with the 23\% error achieved
by the MLP, i.e. a single hidden layer and no unsupervised pre-training.
As found in previous work~\citep{Erhan+al-2010,Larochelle-jmlr-2009},
these results show that both depth and
unsupervised pre-training need to be combined in order to achieve
the best results.


In addition, as shown in the left of
Figure~\ref{fig:improvements-charts}, the relative improvement in error
rate brought by out-of-distribution examples is greater for the deep
SDA, and these
differences with the shallow MLP are statistically and qualitatively
significant.
The left side of the figure shows the improvement to the clean
NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07),
over the models trained exclusively on NIST (respectively SDA0 and MLP0).
Relative percent change is measured by taking
$100 \% \times$ (original model's error / perturbed-data model's error - 1).
The right side of
Figure~\ref{fig:improvements-charts} shows the relative improvement
brought by the use of a multi-task setting, in which the same model is
trained for more classes than the target classes of interest (i.e. training
with all 62 classes when the target classes are respectively the digits,
lower-case, or upper-case characters). Again, whereas the gain from the
multi-task setting is marginal or negative for the MLP, it is substantial
for the SDA. Note that to simplify these multi-task experiments, only the original
NIST dataset is used. For example, the MLP-digits bar shows the relative
percent improvement in MLP error rate on the NIST digits test set
as $100\% \times$ (single-task
model's error / multi-task model's error - 1). The single-task model is
trained with only 10 outputs (one per digit), seeing only digit examples,
whereas the multi-task model is trained with 62 outputs, with all 62
character classes as examples. Hence the hidden units are shared across
all tasks. For the multi-task model, the digit error rate is measured by
comparing the correct digit class with the output class associated with the
maximum conditional probability among only the digit class outputs. The
setting is similar for the other two target classes (lower case characters
and upper case characters). Note however that some types of perturbations
(NISTP) help more than others (P07) when testing on the clean images.
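
Both measurements just described are straightforward to compute; the
sketch below shows the relative-change formula and the restriction of
the 62-way multi-task output to the digit classes (the assumption that
the digit outputs occupy the first 10 positions is ours):
\begin{verbatim}
import numpy as np

def relative_improvement(err_original, err_perturbed):
    # 100% x (original error / perturbed-data model's error - 1);
    # positive when the out-of-distribution data helped.
    # e.g. relative_improvement(24.2, 23.0) ~= 5.2 (percent)
    return 100.0 * (err_original / err_perturbed - 1.0)

def digit_error_rate(probs62, true_digits):
    # probs62: (n, 62) conditional class probabilities; predict by
    # argmax over the digit outputs only, as described above.
    pred = probs62[:, :10].argmax(axis=1)
    return float((pred != true_digits).mean())
\end{verbatim}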
%%\vspace*{-1mm}
%\subsection{Perturbed Training Data More Helpful for SDA}
%%\vspace*{-1mm}

%%\vspace*{-1mm}
%\subsection{Multi-Task Learning Effects}
%%\vspace*{-1mm}

\iffalse
As previously seen, the SDA is better able to benefit from the
transformations applied to the data than the MLP. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper case characters (knowing that the input is one), and
recognizing lower case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out if deep learning can benefit more (or less) from multiple related tasks
(i.e. the multi-task setting) compared to a corresponding purely supervised
shallow learner.

We use a single hidden layer MLP with 1000 hidden units, and a SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
\fi


\vspace*{-2mm}
\section{Conclusions and Discussion}
\vspace*{-2mm}

We have found that out-of-distribution examples (multi-task learning
and perturbed examples) are more beneficial
to a deep learner than to a traditional shallow and purely
supervised learner. More precisely,
the answers are positive for all the questions asked in the introduction.
%\begin{itemize}

$\bullet$ %\item
{\bf Do the good results previously obtained with deep architectures on the
MNIST digits generalize to a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples}?
Yes, the SDA {\em systematically outperformed the MLP and all the previously
published results on this dataset} (the ones that we are aware of), {\em in fact reaching human-level
performance} at around 17\% error on the 62-class task and 1.4\% on the digits.

$\bullet$ %\item
{\bf To what extent do out-of-distribution examples help deep learners,
and do they help them more than shallow supervised ones}?
We found that distorted training examples not only made the resulting
classifier better on similarly perturbed images but also on
the {\em original clean examples}, and, more important and more novel,
that deep architectures benefit more from such {\em out-of-distribution}
examples. Shallow MLPs were helped by perturbed training examples when tested on perturbed input
images (65\% relative improvement on NISTP)
but were only marginally helped (5\% relative improvement on all classes)
or even hurt (10\% relative loss on digits)
with respect to clean examples. On the other hand, the deep SDAs
were significantly boosted by these out-of-distribution examples.
Similarly, whereas the improvement due to the multi-task setting was marginal or
negative for the MLP (from +5.6\% to -3.6\% relative change),
it was quite significant for the SDA (from +13\% to +27\% relative change),
which may be explained by the arguments below.
Since out-of-distribution data
(perturbed or from other related classes) is very common, this conclusion
is of practical importance.
%\end{itemize}

In the original self-taught learning framework~\citep{RainaR2007}, the
out-of-sample examples were used as a source of unsupervised data, and
experiments showed its positive effects in a \emph{limited labeled data}
scenario. However, many of the results by \citet{RainaR2007} (who used a
shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
learning vs ordinary supervised learning} diminishes as the number of labeled examples increases.
We note instead that, for deep
architectures, our experiments show that such a positive effect is accomplished
even in a scenario with a \emph{large number of labeled examples},
i.e., here, the relative gain of self-taught learning and
out-of-distribution examples is probably preserved
in the asymptotic regime. However, note that in our perturbation experiments
(but not in our multi-task experiments),
even the out-of-distribution examples are labeled, unlike in the
earlier self-taught learning experiments~\citep{RainaR2007}.

{\bf Why would deep learners benefit more from the self-taught learning
framework and out-of-distribution examples}?
The key idea is that the lower layers of the predictor compute a hierarchy
of features that can be shared across tasks or across variants of the
input distribution. A theoretical analysis of generalization improvements
due to sharing of intermediate features across tasks already points
towards that explanation~\citep{baxter95a}.
Intermediate features that can be used in different
contexts can be estimated in a way that allows sharing statistical
strength. Features extracted through many levels are more likely to
be more abstract and more invariant to some of the factors of variation
in the underlying distribution (as the experiments in~\citet{Goodfellow2009} suggest),
increasing the likelihood that they would be useful for a larger array
of tasks and input conditions.
Therefore, we hypothesize that both depth and unsupervised
pre-training play a part in explaining the advantages observed here, and future
experiments could attempt to tease apart these factors.
And why would deep learners benefit from the self-taught learning
scenarios even when the number of labeled examples is very large?
We hypothesize that this is related to the hypotheses studied
in~\citet{Erhan+al-2010}, where
it was found that online learning on a huge dataset did not make the
advantage of the deep learning bias vanish, and a similar phenomenon
may be happening here. We hypothesize that unsupervised pre-training
of a deep hierarchy with out-of-distribution examples initializes the
model in the basin of attraction of supervised gradient descent
that corresponds to better generalization. Furthermore, such good
basins of attraction are not discovered by pure supervised learning
(with or without out-of-distribution examples) from random initialization, and more labeled examples
do not allow the shallow or purely supervised models to discover
the kind of better basins associated
with deep learning and out-of-distribution examples.

A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
can be executed on-line at the anonymous site {\tt http://deep.host22.com}.

\iffalse
\section*{Appendix I: Detailed Numerical Results}

These tables correspond to Figures 2 and 3 and contain the raw error rates for each model and dataset considered.
They also contain additional data such as test errors on P07 and standard errors.

\begin{table}[ht]
\caption{Overall comparison of error rates ($\pm$ std.err.) on 62 character classes (10 digits +
26 lower + 26 upper), except for the last column -- digits only, between the deep architecture with pre-training
(SDA=Stacked Denoising Autoencoder) and the ordinary shallow architecture
(MLP=Multi-Layer Perceptron). The models shown are all trained using perturbed data (NISTP or P07)
and using a validation set to select hyper-parameters and other training choices.
\{SDA,MLP\}0 are trained on NIST,
\{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
The human error rate on digits is a lower bound because it does not count digits that were
recognized as letters. For comparison, the results found in the literature
on NIST digits classification using the same test set are included.}
\label{tab:sda-vs-mlp-vs-humans}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
& NIST test & NISTP test & P07 test & NIST test digits \\ \hline
Humans& 18.2\% $\pm$.1\% & 39.4\%$\pm$.1\% & 46.9\%$\pm$.1\% & $1.4\%$ \\ \hline
SDA0 & 23.7\% $\pm$.14\% & 65.2\%$\pm$.34\% & 97.45\%$\pm$.06\% & 2.7\% $\pm$.14\%\\ \hline
SDA1 & 17.1\% $\pm$.13\% & 29.7\%$\pm$.3\% & 29.7\%$\pm$.3\% & 1.4\% $\pm$.1\%\\ \hline
SDA2 & 18.7\% $\pm$.13\% & 33.6\%$\pm$.3\% & 39.9\%$\pm$.17\% & 1.7\% $\pm$.1\%\\ \hline
MLP0 & 24.2\% $\pm$.15\% & 68.8\%$\pm$.33\% & 78.70\%$\pm$.14\% & 3.45\% $\pm$.15\% \\ \hline
MLP1 & 23.0\% $\pm$.15\% & 41.8\%$\pm$.35\% & 90.4\%$\pm$.1\% & 3.85\% $\pm$.16\% \\ \hline
MLP2 & 24.3\% $\pm$.15\% & 46.0\%$\pm$.35\% & 54.7\%$\pm$.17\% & 4.85\% $\pm$.18\% \\ \hline
\citep{Granger+al-2007} & & & & 4.95\% $\pm$.18\% \\ \hline
\citep{Cortes+al-2000} & & & & 3.71\% $\pm$.16\% \\ \hline
\citep{Oliveira+al-2002-short} & & & & 2.4\% $\pm$.13\% \\ \hline
\citep{Milgram+al-2005} & & & & 2.1\% $\pm$.12\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[ht]
\caption{Relative change in error rates due to the use of perturbed training data,
either using NISTP, for the MLP1/SDA1 models, or using P07, for the MLP2/SDA2 models.
A positive value indicates that training on the perturbed data helped for the
given test set (the first 3 columns are on the 62-class tasks and the last one is
on the clean 10-class digits). Clearly, the deep learning models did benefit more
from perturbed training data, even when testing on clean data, whereas the MLP
trained on perturbed data performed worse on the clean digits and about the same
on the clean characters.}
\label{tab:perturbation-effect}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
& NIST test & NISTP test & P07 test & NIST test digits \\ \hline
SDA0/SDA1-1 & 38\% & 84\% & 228\% & 93\% \\ \hline
SDA0/SDA2-1 & 27\% & 94\% & 144\% & 59\% \\ \hline
MLP0/MLP1-1 & 5.2\% & 65\% & -13\% & -10\% \\ \hline
MLP0/MLP2-1 & -0.4\% & 49\% & 44\% & -29\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[ht]
\caption{Test error rates and relative change in error rates due to the use of
a multi-task setting, i.e., training on each task in isolation vs training
for all three tasks together, for MLPs vs SDAs. The SDA benefits much
more from the multi-task setting. All experiments are on the
unperturbed NIST data, using the validation error for model selection.
Relative improvement is 1 - single-task error / multi-task error.}
\label{tab:multi-task}
\begin{center}
\begin{tabular}{|l|r|r|r|} \hline
& single-task & multi-task & relative \\
& setting & setting & improvement \\ \hline
MLP-digits & 3.77\% & 3.99\% & 5.6\% \\ \hline
MLP-lower & 17.4\% & 16.8\% & -4.1\% \\ \hline
MLP-upper & 7.84\% & 7.54\% & -3.6\% \\ \hline
SDA-digits & 2.6\% & 3.56\% & 27\% \\ \hline
SDA-lower & 12.3\% & 14.4\% & 15\% \\ \hline
SDA-upper & 5.93\% & 6.78\% & 13\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\fi

%\afterpage{\clearpage}
%\clearpage
{
%\bibliographystyle{spbasic} % basic style, author-year citations
\bibliographystyle{plainnat}
\bibliography{strings,strings-short,strings-shorter,ift6266_ml,specials,aigaion-shorter}
%\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
}


\end{document}