%\documentclass[twoside,11pt]{article} % For LaTeX2e
\documentclass{article} % For LaTeX2e
\usepackage{aistats2e_2011}
\usepackage{times}
\usepackage{wrapfig}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{bbm}
\usepackage[utf8]{inputenc}
\usepackage[psamsfonts]{amssymb}
%\usepackage{algorithm,algorithmic} % not used after all
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\addtolength{\textwidth}{10mm}
\addtolength{\evensidemargin}{-5mm}
\addtolength{\oddsidemargin}{-5mm}

%\setlength\parindent{0mm}

\begin{document}

\title{Deeper Learners Benefit More from Multi-Task and Perturbed Examples}
\author{
Yoshua Bengio \and
Frédéric Bastien \and
Arnaud Bergeron \and
Nicolas Boulanger-Lewandowski \and
Thomas Breuel \and
Youssouf Chherawala \and
Moustapha Cisse \and
Myriam Côté \and
Dumitru Erhan \and
Jeremy Eustache \and
Xavier Glorot \and
Xavier Muller \and
Sylvain Pannetier Lebeuf \and
Razvan Pascanu \and
Salah Rifai \and
Francois Savard \and
Guillaume Sicard
}
\date{{\tt bengioy@iro.umontreal.ca}, Dept. IRO, U. Montreal, P.O. Box 6128, Centre-Ville branch, H3C 3J7, Montreal (Qc), Canada}
%\jmlrheading{}{2010}{}{10/2010}{XX/2011}{Yoshua Bengio et al}
%\editor{}

%\makeanontitle
\maketitle

%{\bf Running title: Deep Self-Taught Learning}

\vspace*{-2mm}
\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has demonstrated the potential of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple levels of representation. The hypothesis evaluated here is that intermediate levels of representation, because
they can be shared across tasks and examples from different but related
distributions, can yield even more benefits when there are more such levels of representation. The experiments are performed on a large-scale handwritten character recognition setting with 62 classes (upper case, lower case, digits). We show that a deep learner can not only {\em beat previously published results but also reach human-level performance}.
\end{abstract}
\vspace*{-3mm}

%\begin{keywords}
%Deep learning, self-taught learning, out-of-distribution examples, handwritten character recognition, multi-task learning
%\end{keywords}
%\keywords{self-taught learning \and multi-task learning \and out-of-distribution examples \and deep learning \and handwriting recognition}



\section{Introduction}
\vspace*{-1mm}

{\bf Deep Learning} has emerged as a promising new area of research in
statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand,
and are organized in a hierarchy with multiple levels.
This is in part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation of the raw visual input. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex) \citep{HonglakL2008}, and that they become more and
more invariant to factors of variation (such as camera movement) in
higher layers~\citep{Goodfellow2009}.
Learning a hierarchy of features increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}.
The principle is that each layer starting from
the bottom is trained to represent its input (the output of the previous
layer). After this
unsupervised initialization, the stack of layers can be
converted into a deep supervised feedforward neural network and fine-tuned by
stochastic gradient descent.
One of these layer initialization techniques,
applied here, is the Denoising
Auto-encoder~(DAE)~\citep{VincentPLarochelleH2008-very-small} (see
Figure~\ref{fig:da}), which performed similarly or
better~\citep{VincentPLarochelleH2008-very-small} than previously
proposed Restricted Boltzmann Machines (RBM)~\citep{Hinton06}
in terms of unsupervised extraction
of a hierarchy of features useful for classification. Each layer is trained
to denoise its input, creating a layer of features that can be used as
input for the next layer. Note that training a Denoising Auto-encoder
can actually be seen as training a particular RBM by an inductive
principle different from maximum likelihood~\citep{Vincent-SM-2010}, namely by
Score Matching~\citep{Hyvarinen-2005,HyvarinenA2008}.

Previous comparative experimental results with stacking of RBMs and DAEs
to build deep supervised predictors had shown that they could outperform
shallow architectures in a variety of settings (see~\citet{Bengio-2009}
for a review), especially
when the data involves complex interactions between many factors of
variation~\citep{LarochelleH2007}. Other experiments have suggested
that the unsupervised layer-wise pre-training acted as a useful
prior~\citep{Erhan+al-2010} that allows one to initialize a deep
neural network in a much smaller region of parameter space,
corresponding to better generalization.

To further the understanding of the reasons for the good performance
observed with deep learners, we focus here on the following {\em hypothesis}:
intermediate levels of representation, especially when there are
more such levels, can be exploited to {\bf share
statistical strength across different but related types of examples},
such as examples coming from other tasks than the task of interest
(the multi-task setting), or examples coming from an overlapping
but different distribution (images with different kinds of perturbations
and noises, here). This is consistent with the hypotheses discussed
at length in~\citet{Bengio-2009} regarding the potential advantage
of deep learning and the idea that more levels of representation can
give rise to more abstract, more general features of the raw input.

This hypothesis is related to a learning setting called
{\bf self-taught learning}~\citep{RainaR2007}, which combines principles
of semi-supervised and multi-task learning: the learner can exploit examples
that are unlabeled and possibly come from a distribution different from the target
distribution, e.g., from other classes than those of interest.
It has already been shown that deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
but more needed to be done to explore the impact
of {\em out-of-distribution} examples and of the {\em multi-task} setting
(one exception is \citet{CollobertR2008}, which uses unsupervised
pre-training only for the first layer). In particular the {\em relative
advantage of deep learning} for these settings has not been evaluated.


%
The {\bf main claim} of this paper is that deep learners (with several levels of representation) can
{\bf benefit more from self-taught learning than shallow learners} (with a single
level), both in the context of the multi-task setting and from {\em
out-of-distribution examples} in general. Because we are able to improve on state-of-the-art
performance and reach human-level performance
on a large-scale task, we consider that this paper is also a contribution
to advance the application of machine learning to handwritten character recognition.
More precisely, we ask and answer the following questions:

%\begin{enumerate}
$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digit images generalize to the setting of a similar but much larger and richer
dataset, the NIST special database 19, with 62 classes and around 800k examples?

$\bullet$ %\item
To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifiers better not only on similarly perturbed images but also on
the {\em original clean examples}? We study this question in the
context of the 62-class and 10-class tasks of the NIST special database 19.

$\bullet$ %\item
Do deep architectures {\em benefit {\bf more} from such out-of-distribution}
examples, in particular do they benefit more from
examples that are perturbed versions of the examples from the task of interest?

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit {\bf more}
from training with moderately {\em different classes} (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
We train on 62 classes and test on 10 (digits) or 26 (upper case or lower case)
to answer this question.
%\end{enumerate}

Our experimental results provide positive evidence towards all of these questions,
as well as {\em classifiers that reach human-level performance on 62-class isolated character
recognition and beat previously published results on the NIST dataset (special database 19)}.
To achieve these results, we introduce in the next section a sophisticated system
for stochastically transforming character images and then explain the methodology,
which is based on training with or without these transformed images and testing on
clean ones. We measure the relative advantage of out-of-distribution examples
(perturbed or out-of-class)
for a deep learner vs a supervised shallow one.
Code for generating these transformations as well as for the deep learning
algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
We also estimate the relative advantage for deep learners of training with
other classes than those of interest, by comparing learners trained with
62 classes with learners trained with only a subset (on which they
are then tested).
The conclusion discusses
the more general question of why deep learners may benefit so much from
the self-taught learning framework. Since out-of-distribution data
(perturbed or from other related classes) is very common, this conclusion
is of practical importance.

\vspace*{-3mm}
%\newpage
\section{Perturbed and Transformed Character Images}
\label{s:perturbations}
\vspace*{-2mm}

Figure~\ref{fig:transform} shows the different transformations we used to stochastically
transform $32 \times 32$ source images (such as the one in Fig.~\ref{fig:torig})
in order to obtain data from a larger distribution which
covers a domain substantially larger than the clean characters distribution from
which we start.
Although character transformations have been used before to
improve character recognizers, this effort is on a large scale both
in number of classes and in the complexity of the transformations, hence
in the complexity of the learning task.
The code for these transformations (mostly Python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.
There are two main parts in the pipeline. The first one,
from slant to pinch below, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
More details can be found in~\citep{ift6266-tr-anonymous}.
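To make the pipeline concrete, here is a minimal Python sketch of its
structure: a chain of modules whose individual strengths are modulated by the
global $complexity$ parameter. The module bodies below are simplified
placeholders for illustration, not the released implementation.
{\small
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)

def slant(img, complexity):
    """Shear rows horizontally; the shear amount grows with complexity."""
    h, w = img.shape
    shift = rng.uniform(-1, 1) * complexity * 0.5 * h
    return np.array([np.roll(row, int(round(shift * i / h)))
                     for i, row in enumerate(img)])

def salt_and_pepper(img, complexity):
    """Set a random pixel subset to 0 or 1; the fraction grows with complexity."""
    mask = rng.uniform(size=img.shape) < 0.2 * complexity
    noise = (rng.uniform(size=img.shape) > 0.5).astype(img.dtype)
    return np.where(mask, noise, img)

# First part: deformations (slant ... pinch); second part: noise (blur ... contrast).
TRANSFORMATIONS = [slant]
NOISES = [salt_and_pepper]

def perturb(img, complexity):
    """Apply each module with a random per-module strength in [0, complexity]."""
    for module in TRANSFORMATIONS + NOISES:
        img = module(img, rng.uniform(0, complexity))
    return img
\end{verbatim}
}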

\begin{figure}[ht]
\centering
\subfigure[Original]{\includegraphics[scale=0.6]{images/Original.png}\label{fig:torig}}
\subfigure[Thickness]{\includegraphics[scale=0.6]{images/Thick_only.png}}
\subfigure[Slant]{\includegraphics[scale=0.6]{images/Slant_only.png}}
\subfigure[Affine Transformation]{\includegraphics[scale=0.6]{images/Affine_only.png}}
\subfigure[Local Elastic Deformation]{\includegraphics[scale=0.6]{images/Localelasticdistorsions_only.png}}
\subfigure[Pinch]{\includegraphics[scale=0.6]{images/Pinch_only.png}}
%Noise
\subfigure[Motion Blur]{\includegraphics[scale=0.6]{images/Motionblur_only.png}}
\subfigure[Occlusion]{\includegraphics[scale=0.6]{images/occlusion_only.png}}
\subfigure[Gaussian Smoothing]{\includegraphics[scale=0.6]{images/Bruitgauss_only.png}}
\subfigure[Pixels Permutation]{\includegraphics[scale=0.6]{images/Permutpixel_only.png}}
\subfigure[Gaussian Noise]{\includegraphics[scale=0.6]{images/Distorsiongauss_only.png}}
\subfigure[Background Image Addition]{\includegraphics[scale=0.6]{images/background_other_only.png}}
\subfigure[Salt \& Pepper]{\includegraphics[scale=0.6]{images/Poivresel_only.png}}
\subfigure[Scratches]{\includegraphics[scale=0.6]{images/Rature_only.png}}
\subfigure[Grey Level \& Contrast]{\includegraphics[scale=0.6]{images/Contrast_only.png}}
\caption{Top left (a): example original image. Others (b-o): examples of the effect
of each transformation module taken separately. Actual perturbed examples are obtained by
a pipeline of these, with random choices about which module to apply and how much perturbation
to apply.}
\label{fig:transform}
\vspace*{-2mm}
\end{figure}

\vspace*{-3mm}
\section{Experimental Setup}
\vspace*{-1mm}

Much previous work on deep learning had been performed on
the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\citep{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}.
The focus here is on much larger training sets, from 10
to 1000 times larger, and 62 classes.

The first step in constructing the larger datasets (called NISTP and P07) is to sample from
a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
and {\bf OCR data} (scanned machine printed characters). Once a character
is sampled from one of these sources (chosen randomly), the second step is to
apply a pipeline of transformations and/or noise processes outlined in section \ref{s:perturbations}.

To provide a baseline of error rate comparison we also estimate human performance
on both the 62-class task and the 10-class digits task.
We compare the best Multi-Layer Perceptrons (MLP) against
the best Stacked Denoising Auto-encoders (SDA), when
both models' hyper-parameters are selected to minimize the validation set error.
We also provide a comparison against a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service ({\tt http://mturk.com}).
AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language processing and vision.
%processing \citep{SnowEtAl2008} and vision
%\citep{SorokinAndForsyth2008,whitehill09}.
AMT users were presented
with 10 character images (from a test set) and asked to choose 10 corresponding ASCII
characters. They were forced to choose a single character class (either among the
62 or 10 character classes) for each image.
80 subjects classified 2500 images per (dataset,task) pair.
Different human labelers sometimes provided a different label for the same
example, and we were able to estimate the error variance due to this effect
because each image was classified by 3 different persons.
The average error of humans on the 62-class task NIST test set
is 18.2\%, with a standard error of 0.1\%.
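As a sanity check on the reported uncertainty, the standard error can be
related to the number of collected labels under a simple binomial model; this
model and the label count used below are illustrative assumptions, not the
exact estimator used for the experiments.
{\small
\begin{verbatim}
import math

def stderr_of_error_rate(p, n):
    """Standard error of an error-rate estimate from n independent labels."""
    return math.sqrt(p * (1.0 - p) / n)

# An 18.2% error rate with a ~0.1% standard error corresponds to roughly
# 150,000 independent labels under this model:
print(stderr_of_error_rate(0.182, 150000))  # ~0.001
\end{verbatim}
}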

\vspace*{-3mm}
\subsection{Data Sources}
\vspace*{-2mm}

%\begin{itemize}
%\item
{\bf NIST.}
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
The dataset is composed of 814255 digits and characters (upper and lower cases), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'',``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
The fourth partition (called $hsf_4$, 82587 examples),
experimentally recognized to be the most difficult one, is the one recommended
by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
for that purpose. We randomly split the remainder (731668 examples) into a training set and a validation set for
model selection.
Previous work with this dataset mostly reports performance on the digits only.
Here we use all the classes both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
The distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a more uniform distribution
of letters in the test set (whereas in the training set they are distributed
more like in natural text).
%\vspace*{-1mm}

%\item
{\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from
{\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
% TODO: pointless to anonymize, it's not pointing to our work
Including the operating system's (Windows 7) fonts, there is a total of $9817$ different fonts from which we can choose uniformly.
The chosen {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.
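As an illustration, a character image can be rendered from a {\tt ttf} file
with a few lines of Python using the Pillow library; this is a simplified
sketch (the actual generator differs in its centering and sizing details, and
the font path below is a placeholder).
{\small
\begin{verbatim}
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_char(ttf_path, char, size=32):
    """Render one character into a 32x32 grey-level array, values in [0,1]."""
    img = Image.new('L', (size, size), color=0)          # black background
    font = ImageFont.truetype(ttf_path, int(size * 0.8))
    ImageDraw.Draw(img).text((4, 2), char, fill=255, font=font)
    return np.asarray(img, dtype=float) / 255.0

x = render_char('/path/to/some_font.ttf', 'A')
\end{verbatim}
}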
\vspace*{-1mm}

%\item
{\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, many different fonts are used for generating the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source.
%Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
\vspace*{-1mm}

%\item
{\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters was included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
%TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
%\end{itemize}

\vspace*{-3mm}
\subsection{Data Sets}
\vspace*{-2mm}

All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
%\begin{itemize}
\vspace*{-1mm}

%\item
{\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
\{651668 / 80000 / 82587\} \{training / validation / test\} examples.
\vspace*{-1mm}

%\item
{\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the transformation pipeline described in section \ref{s:perturbations}.
For each new example to be generated, a data source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
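The per-example generation procedure can be summarized by the following
sketch, where {\tt sample\_from} and {\tt perturb} are placeholders standing
in for the source samplers and the transformation pipeline of
Section~\ref{s:perturbations}.
{\small
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)

SOURCES = ['fonts', 'captcha', 'ocr', 'nist']
PROBS   = [0.10,    0.25,      0.25,  0.40]

def generate_p07_example(sample_from, perturb):
    """Draw a source, sample a clean character, then perturb it."""
    source = rng.choice(SOURCES, p=PROBS)
    img, label = sample_from(source)
    # each transformation module samples its complexity uniformly in [0, 0.7]
    return perturb(img, complexity=0.7), label
\end{verbatim}
}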
\vspace*{-1mm}

%\item
{\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
%\end{itemize}

\vspace*{-3mm}
\subsection{Models and their Hyperparameters}
\vspace*{-2mm}

The experiments are performed using MLPs (with a single
hidden layer) and SDAs.
\emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}

{\bf Multi-Layer Perceptrons (MLP).}
Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used
(making the use of SVMs computationally challenging because of their quadratic
scaling behavior). Preliminary experiments on training SVMs (libSVM) with subsets of the training
set small enough to fit in memory yielded substantially worse results
than those obtained with MLPs. For training on nearly a billion examples
(with the perturbed data), the MLPs and SDA are much more convenient than
classifiers based on kernel methods.
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P(\mathrm{class} \mid \mathrm{image})$.
The number of hidden units is taken in $\{300,500,800,1000,1500\}$.
Training examples are presented in minibatches of size 20. A constant learning
rate was chosen among $\{0.001, 0.01, 0.025, 0.075, 0.1, 0.5\}$.
%through preliminary experiments (measuring performance on a validation set),
%and $0.1$ (which was found to work best) was then selected for optimizing on
%the whole training sets.
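For concreteness, the following minimal sketch shows the MLP just described
(one $\tanh$ hidden layer, softmax outputs, minibatch stochastic gradient
descent on the negative log-likelihood); it is an illustration, not the
experimental implementation.
{\small
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)

class MLP:
    """One tanh hidden layer and a softmax output layer, trained by SGD."""
    def __init__(self, n_in=32*32, n_hidden=1000, n_out=62, lr=0.1):
        self.W1 = rng.uniform(-0.1, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.uniform(-0.1, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x.dot(self.W1) + self.b1)
        a = self.h.dot(self.W2) + self.b2
        e = np.exp(a - a.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)     # P(class | image)

    def sgd_step(self, x, y_onehot):                # x: (20, 1024) minibatch
        p = self.forward(x)
        d2 = (p - y_onehot) / x.shape[0]            # grad of NLL wrt logits
        d1 = d2.dot(self.W2.T) * (1 - self.h ** 2)  # backprop through tanh
        self.W2 -= self.lr * self.h.T.dot(d2); self.b2 -= self.lr * d2.sum(0)
        self.W1 -= self.lr * x.T.dot(d1);      self.b1 -= self.lr * d1.sum(0)
\end{verbatim}
}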
\vspace*{-1mm}


{\bf Stacked Denoising Auto-Encoders (SDA).}
Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
apparently setting the parameters in a
basin of attraction of supervised gradient descent that yields better
generalization~\citep{Erhan+al-2010}. This initial {\em unsupervised
pre-training phase} uses all of the training images but not the training labels.
Each layer is trained in turn to produce a new representation of its input
(starting from the raw pixels).
It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (as in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

\begin{figure}[ht]
\vspace*{-2mm}
\centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
\vspace*{-2mm}
\caption{Illustration of the computations and training criterion for the denoising
auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
the layer (i.e. raw input or output of previous layer)
is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
is compared to the uncorrupted input $x$ through the loss function
$L_H(x,z)$, whose expected value is approximately minimized during training
by tuning $\theta$ and $\theta'$.}
\label{fig:da}
\vspace*{-2mm}
\end{figure}


Here we chose to use the Denoising
Auto-encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is simple to train and
explain (see Figure~\ref{fig:da}, as well as the
tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008}. During training, a Denoising
Auto-encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Here we use the random binary masking corruption
(which sets to 0 a random subset of the inputs).
Once it is trained, in a purely unsupervised way,
its hidden units' activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (see previous section).
The SDA hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers, but it was fixed to 3 based on previous work with
SDAs on MNIST~\citep{VincentPLarochelleH2008}. The size of the hidden
layers was kept constant across hidden layers, and the best results
were obtained with the largest value that we could experiment
with given our patience: 1000 hidden units.
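A minimal sketch of one such denoising auto-encoder layer, with masking
corruption and (for simplicity) tied weights, followed by the greedy
layer-wise stacking just described; this illustrates the training criterion
of Figure~\ref{fig:da} and is not the experimental code.
{\small
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class DenoisingAutoencoder:
    def __init__(self, n_in, n_hidden, corruption=0.2, lr=0.01):
        self.W = rng.uniform(-0.1, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.corruption, self.lr = corruption, lr

    def train_step(self, x):
        # masking corruption: zero a random subset of the inputs
        x_tilde = x * (rng.uniform(size=x.shape) > self.corruption)
        y = sigmoid(x_tilde.dot(self.W) + self.b)  # code y = f(x_tilde)
        z = sigmoid(y.dot(self.W.T) + self.c)      # reconstruction z = g(y)
        dz = (z - x) / x.shape[0]                  # grad of cross-entropy L_H(x,z)
        dy = dz.dot(self.W) * y * (1 - y)
        self.W -= self.lr * (x_tilde.T.dot(dy) + dz.T.dot(y))
        self.b -= self.lr * dy.sum(0)
        self.c -= self.lr * dz.sum(0)

    def encode(self, x):
        return sigmoid(x.dot(self.W) + self.b)

# Greedy layer-wise stacking: each layer is trained on the codes of the
# previous one; the resulting weights then initialize a deep MLP that is
# fine-tuned with the labels.
layers = [DenoisingAutoencoder(32 * 32, 1000),
          DenoisingAutoencoder(1000, 1000),
          DenoisingAutoencoder(1000, 1000)]
\end{verbatim}
}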

\vspace*{-1mm}

\begin{figure}[ht]
%\vspace*{-2mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
%\vspace*{-3mm}
\caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
on NIST, 1 on NISTP, and 2 on P07. Left: overall results
of all models, on NIST and NISTP test sets.
Right: error rates on NIST test digits only, along with the previous results from the
literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
respectively based on ART, nearest neighbors, MLPs, and SVMs.}
\label{fig:error-rates-charts}
\vspace*{-2mm}
\end{figure}


\begin{figure}[ht]
\vspace*{-3mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
\vspace*{-3mm}
\caption{Relative improvement in error rate due to self-taught learning.
Left: Improvement (or loss, when negative)
induced by out-of-distribution examples (perturbed data).
Right: Improvement (or loss, when negative) induced by multi-task
learning (training on all classes and testing only on either digits,
upper case, or lower-case). The deep learner (SDA) benefits more from
both self-taught learning scenarios, compared to the shallow MLP.}
\label{fig:improvements-charts}
\vspace*{-2mm}
\end{figure}

\section{Experimental Results}
\vspace*{-2mm}

%%\vspace*{-1mm}
%\subsection{SDA vs MLP vs Humans}
%%\vspace*{-1mm}
The models are either trained on NIST (MLP0 and SDA0),
NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
on either NIST, NISTP or P07, either on the 62-class task
or on the 10-digits task. Training (including about half
for unsupervised pre-training, for the SDAs) on the larger
datasets takes around one day on a GPU-285.
Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
SDA2), along with the previous results on the digits NIST special database
19 test set from the literature, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor
search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
SVMs~\citep{Milgram+al-2005}.% More detailed and complete numerical results
%(figures and tables, including standard errors on the error rates) can be
%found in Appendix.
The deep learner not only outperformed the shallow ones and
previously published performance (in a statistically and qualitatively
significant way) but, when trained with perturbed data,
reaches human performance on both the 62-class task
and the 10-class (digits) task.
17\% error (SDA1) or 18\% error (humans) may seem large but a large
majority of the errors from humans and from SDA1 are from out-of-context
confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
``c'' and a ``C'' are often indistinguishable).

In addition, as shown in the left of
Figure~\ref{fig:improvements-charts}, the relative improvement in error
rate brought by self-taught learning is greater for the SDA, and these
differences with the MLP are statistically and qualitatively
significant.
The left side of the figure shows the improvement to the clean
NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07).
Relative percent change is measured by taking
$100\% \times$ (original model's error / perturbed-data model's error - 1).
The right side of
Figure~\ref{fig:improvements-charts} shows the relative improvement
brought by the use of a multi-task setting, in which the same model is
trained for more classes than the target classes of interest (i.e. training
with all 62 classes when the target classes are respectively the digits,
lower-case, or upper-case characters). Again, whereas the gain from the
multi-task setting is marginal or negative for the MLP, it is substantial
for the SDA. Note that to simplify these multi-task experiments, only the original
NIST dataset is used. For example, the MLP-digits bar shows that the relative
percent improvement in MLP error rate on the NIST digits test set
is $100\% \times$ (single-task
model's error / multi-task model's error - 1). The single-task model is
trained with only 10 outputs (one per digit), seeing only digit examples,
whereas the multi-task model is trained with 62 outputs, with all 62
character classes as examples. Hence the hidden units are shared across
all tasks. For the multi-task model, the digit error rate is measured by
comparing the correct digit class with the output class associated with the
maximum conditional probability among only the digit class outputs. The
setting is similar for the other two target classes (lower case characters
and upper case characters).
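This restricted-argmax evaluation can be written compactly; the sketch below
assumes the 62 outputs are ordered as digits, then upper case, then lower
case (an assumed ordering, for illustration only).
{\small
\begin{verbatim}
import numpy as np

DIGIT_IDX = np.arange(10)   # outputs for classes ``0''-``9''

def digit_error_rate(probs, labels):
    """Digit error rate of a 62-way model, using only the digit outputs.

    probs:  (n, 62) conditional class probabilities for digit test images
    labels: (n,)    correct digit classes, as indices in [0, 10)
    """
    pred = DIGIT_IDX[probs[:, DIGIT_IDX].argmax(axis=1)]
    return float(np.mean(pred != labels))

# Relative improvement, as plotted in Figure 3:
#   100 * (error_single_task / error_multi_task - 1)
\end{verbatim}
}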
%%\vspace*{-1mm}
%\subsection{Perturbed Training Data More Helpful for SDA}
%%\vspace*{-1mm}

%%\vspace*{-1mm}
%\subsection{Multi-Task Learning Effects}
%%\vspace*{-1mm}

\iffalse
As previously seen, the SDA is better able to benefit from the
transformations applied to the data than the MLP. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper case characters (knowing that the input is one), and
recognizing lower case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out if deep learning can benefit more (or less) from multiple related tasks
(i.e. the multi-task setting) compared to a corresponding purely supervised
shallow learner.

We use a single hidden layer MLP with 1000 hidden units, and a SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
\fi


\vspace*{-2mm}
\section{Conclusions and Discussion}
\vspace*{-2mm}

We have found that the self-taught learning framework is more beneficial
to a deep learner than to a traditional shallow and purely
supervised learner. More precisely,
the answers are positive for all the questions asked in the introduction.
%\begin{itemize}

$\bullet$ %\item
{\bf Do the good results previously obtained with deep architectures on the
MNIST digits generalize to a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples}?
Yes, the SDA {\em systematically outperformed the MLP and all the previously
published results on this dataset} (the ones that we are aware of), {\em in fact reaching human-level
performance} at around 17\% error on the 62-class task and 1.4\% on the digits,
beating previously published results on the same data.

$\bullet$ %\item
{\bf To what extent do self-taught learning scenarios help deep learners,
and do they help them more than shallow supervised ones}?
We found that distorted training examples not only made the resulting
classifier better on similarly perturbed images but also on
the {\em original clean examples}, and, more importantly and more novel,
that deep architectures benefit more from such {\em out-of-distribution}
examples. MLPs were helped by perturbed training examples when tested on perturbed input
images (65\% relative improvement on NISTP)
but were only marginally helped (5\% relative improvement on all classes)
or even hurt (10\% relative loss on digits)
with respect to clean examples. On the other hand, the deep SDAs
were significantly boosted by these out-of-distribution examples.
Similarly, whereas the improvement due to the multi-task setting was marginal or
negative for the MLP (from +5.6\% to -3.6\% relative change),
it was quite significant for the SDA (from +13\% to +27\% relative change),
which may be explained by the arguments below.
%\end{itemize}

In the original self-taught learning framework~\citep{RainaR2007}, the
out-of-sample examples were used as a source of unsupervised data, and
experiments showed its positive effects in a \emph{limited labeled data}
scenario. However, many of the results by \citet{RainaR2007} (who used a
shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
learning vs ordinary supervised learning} diminishes as the number of labeled examples increases.
We note instead that, for deep
architectures, our experiments show that such a positive effect is accomplished
even in a scenario with a \emph{large number of labeled examples},
i.e., here, the relative gain of self-taught learning is probably preserved
in the asymptotic regime.

{\bf Why would deep learners benefit more from the self-taught learning framework}?
The key idea is that the lower layers of the predictor compute a hierarchy
of features that can be shared across tasks or across variants of the
input distribution. A theoretical analysis of generalization improvements
due to sharing of intermediate features across tasks already points
towards that explanation~\citep{baxter95a}.
Intermediate features that can be used in different
contexts can be estimated in a way that allows statistical
strength to be shared. Features extracted through many levels are more likely to
be more abstract and more invariant to some of the factors of variation
in the underlying distribution (as the experiments in~\citet{Goodfellow2009} suggest),
increasing the likelihood that they would be useful for a larger array
of tasks and input conditions.
Therefore, we hypothesize that both depth and unsupervised
pre-training play a part in explaining the advantages observed here, and future
experiments could attempt to tease these factors apart.
And why would deep learners benefit from the self-taught learning
scenarios even when the number of labeled examples is very large?
We hypothesize that this is related to the hypotheses studied
in~\citet{Erhan+al-2010}, where
it was found that online learning on a huge dataset did not make the
advantage of the deep learning bias vanish, and a similar phenomenon
may be happening here. We hypothesize that unsupervised pre-training
of a deep hierarchy with self-taught learning initializes the
model in the basin of attraction of supervised gradient descent
that corresponds to better generalization. Furthermore, such good
basins of attraction are not discovered by pure supervised learning
(with or without self-taught settings) from random initialization, and more labeled examples
do not allow the shallow or purely supervised models to discover
the kind of better basins associated
with deep learning and self-taught learning.

A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
can be executed on-line at {\tt http://deep.host22.com}.

\iffalse
\section*{Appendix I: Detailed Numerical Results}

These tables correspond to Figures 2 and 3 and contain the raw error rates for each model and dataset considered.
They also contain additional data such as test errors on P07 and standard errors.

\begin{table}[ht]
\caption{Overall comparison of error rates ($\pm$ std.err.) on 62 character classes (10 digits +
26 lower + 26 upper), except for the last column -- digits only, between deep architecture with pre-training
(SDA=Stacked Denoising Autoencoder) and ordinary shallow architecture
(MLP=Multi-Layer Perceptron). The models shown are all trained using perturbed data (NISTP or P07)
and using a validation set to select hyper-parameters and other training choices.
\{SDA,MLP\}0 are trained on NIST,
\{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
The human error rate on digits is a lower bound because it does not count digits that were
recognized as letters. For comparison, the results found in the literature
on NIST digits classification using the same test set are included.}
\label{tab:sda-vs-mlp-vs-humans}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
& NIST test & NISTP test & P07 test & NIST test digits \\ \hline
Humans& 18.2\% $\pm$.1\% & 39.4\%$\pm$.1\% & 46.9\%$\pm$.1\% & $1.4\%$ \\ \hline
SDA0 & 23.7\% $\pm$.14\% & 65.2\%$\pm$.34\% & 97.45\%$\pm$.06\% & 2.7\% $\pm$.14\%\\ \hline
SDA1 & 17.1\% $\pm$.13\% & 29.7\%$\pm$.3\% & 29.7\%$\pm$.3\% & 1.4\% $\pm$.1\%\\ \hline
SDA2 & 18.7\% $\pm$.13\% & 33.6\%$\pm$.3\% & 39.9\%$\pm$.17\% & 1.7\% $\pm$.1\%\\ \hline
MLP0 & 24.2\% $\pm$.15\% & 68.8\%$\pm$.33\% & 78.70\%$\pm$.14\% & 3.45\% $\pm$.15\% \\ \hline
MLP1 & 23.0\% $\pm$.15\% & 41.8\%$\pm$.35\% & 90.4\%$\pm$.1\% & 3.85\% $\pm$.16\% \\ \hline
MLP2 & 24.3\% $\pm$.15\% & 46.0\%$\pm$.35\% & 54.7\%$\pm$.17\% & 4.85\% $\pm$.18\% \\ \hline
\citep{Granger+al-2007} & & & & 4.95\% $\pm$.18\% \\ \hline
\citep{Cortes+al-2000} & & & & 3.71\% $\pm$.16\% \\ \hline
\citep{Oliveira+al-2002} & & & & 2.4\% $\pm$.13\% \\ \hline
\citep{Milgram+al-2005} & & & & 2.1\% $\pm$.12\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[ht]
\caption{Relative change in error rates due to the use of perturbed training data,
either using NISTP, for the MLP1/SDA1 models, or using P07, for the MLP2/SDA2 models.
A positive value indicates that training on the perturbed data helped for the
given test set (the first 3 columns are on the 62-class tasks and the last one is
on the clean 10-class digits). Clearly, the deep learning models did benefit more
from perturbed training data, even when testing on clean data, whereas the MLP
trained on perturbed data performed worse on the clean digits and about the same
on the clean characters.}
\label{tab:perturbation-effect}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
& NIST test & NISTP test & P07 test & NIST test digits \\ \hline
SDA0/SDA1-1 & 38\% & 84\% & 228\% & 93\% \\ \hline
SDA0/SDA2-1 & 27\% & 94\% & 144\% & 59\% \\ \hline
MLP0/MLP1-1 & 5.2\% & 65\% & -13\% & -10\% \\ \hline
MLP0/MLP2-1 & -0.4\% & 49\% & 44\% & -29\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\begin{table}[ht]
\caption{Test error rates and relative change in error rates due to the use of
a multi-task setting, i.e., training on each task in isolation vs training
for all three tasks together, for MLPs vs SDAs. The SDA benefits much
more from the multi-task setting. All experiments are on the
unperturbed NIST data only, using validation error for model selection.
Relative improvement is 1 - single-task error / multi-task error.}
\label{tab:multi-task}
\begin{center}
\begin{tabular}{|l|r|r|r|} \hline
& single-task & multi-task & relative \\
& setting & setting & improvement \\ \hline
MLP-digits & 3.77\% & 3.99\% & 5.6\% \\ \hline
MLP-lower & 17.4\% & 16.8\% & -4.1\% \\ \hline
MLP-upper & 7.84\% & 7.54\% & -3.6\% \\ \hline
SDA-digits & 2.6\% & 3.56\% & 27\% \\ \hline
SDA-lower & 12.3\% & 14.4\% & 15\% \\ \hline
SDA-upper & 5.93\% & 6.78\% & 13\% \\ \hline
\end{tabular}
\end{center}
\end{table}

\fi

%\afterpage{\clearpage}
%\clearpage
{
%\bibliographystyle{spbasic} % basic style, author-year citations
\bibliographystyle{plainnat}
\bibliography{strings,strings-short,strings-shorter,ift6266_ml,specials,aigaion-shorter}
%\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
}


\end{document}