\documentclass{article} % For LaTeX2e
\usepackage{nips10submit_e,times}
\usepackage{wrapfig}
\usepackage{amsthm,amsmath,bbm}
\usepackage[psamsfonts]{amssymb}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\addtolength{\textwidth}{20mm}
\addtolength{\textheight}{20mm}
\addtolength{\topmargin}{-10mm}
\addtolength{\evensidemargin}{-10mm}
\addtolength{\oddsidemargin}{-10mm}

%\setlength\parindent{0mm}

\title{Deep Self-Taught Learning for Handwritten Character Recognition}
\author{
Frédéric Bastien,
Yoshua Bengio,
Arnaud Bergeron,
Nicolas Boulanger-Lewandowski,
Thomas Breuel,\\
{\bf Youssouf Chherawala,
Moustapha Cisse,
Myriam Côté,
Dumitru Erhan,
Jeremy Eustache,}\\
{\bf Xavier Glorot,
Xavier Muller,
Sylvain Pannetier Lebeuf,
Razvan Pascanu,} \\
{\bf Salah Rifai,
Francois Savard,
Guillaume Sicard}\\
Dept. IRO, U. Montreal
}

\begin{document}

%\makeanontitle
\maketitle

\vspace*{-2mm}
\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep
architectures, i.e., function classes obtained by composing multiple
non-linear transformations. Self-taught learning (exploiting unlabeled
examples or examples from other distributions) has already been applied
to deep learners, but mostly to show the advantage of unlabeled
examples. Here we explore the advantage brought by {\em out-of-distribution examples}.
For this purpose we
developed a powerful generator of stochastic variations and noise
processes for character images, including not only affine transformations
but also slant, local elastic deformations, changes in thickness,
background images, grey level changes, contrast, occlusion, and various
types of noise. The out-of-distribution examples are obtained from these
highly distorted images or by including examples of object classes
different from those in the target test set.
We show that {\em deep learners benefit
more from them than a corresponding shallow learner}, at least in the area of
handwritten character recognition. In fact, we show that they reach
human-level performance on both handwritten digit classification and
62-class handwritten character recognition.
\end{abstract}
\vspace*{-3mm}

\section{Introduction}
\vspace*{-1mm}

{\bf Deep Learning} has emerged as a promising new area of research in
statistical machine learning~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,VincentPLarochelleH2008-very-small,ranzato-08,TaylorHintonICML2009,Larochelle-jmlr-2009,Salakhutdinov+Hinton-2009,HonglakL2009,HonglakLNIPS2009,Jarrett-ICCV2009,Taylor-cvpr-2010}. See \citet{Bengio-2009} for a review.
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand,
and are organized in a hierarchy with multiple levels.
This is in part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation of the raw visual input. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex)~\citep{HonglakL2008}, and that they become more and
more invariant to factors of variation (such as camera movement) in
higher layers~\citep{Goodfellow2009}.
It has been hypothesized that learning a hierarchy of features increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

{\bf Self-taught learning}~\citep{RainaR2007} is a paradigm that combines principles
of semi-supervised and multi-task learning: the learner can exploit examples
that are unlabeled and possibly come from a distribution different from the target
distribution, e.g., from other classes than those of interest.
It has already been shown that deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
but more needs to be done to explore the impact
of {\em out-of-distribution} examples and of the multi-task setting
(one exception is~\citep{CollobertR2008}, which uses a different kind
of learning algorithm). In particular the {\em relative
advantage} of deep learning for these settings has not been evaluated.
The hypothesis discussed in the conclusion is that a deep hierarchy of features
may be better able to provide sharing of statistical strength
between different regions in input space or different tasks.

\iffalse
Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}.
The principle is that each layer starting from
the bottom is trained to represent its input (the output of the previous
layer). After this
unsupervised initialization, the stack of layers can be
converted into a deep supervised feedforward neural network and fine-tuned by
stochastic gradient descent.
One of these layer initialization techniques,
applied here, is the Denoising
Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see
Figure~\ref{fig:da}), which performed similarly or
better~\citep{VincentPLarochelleH2008-very-small} than previously
proposed Restricted Boltzmann Machines (RBM)~\citep{Hinton06}
in terms of unsupervised extraction
of a hierarchy of features useful for classification. Each layer is trained
to denoise its input, creating a layer of features that can be used as
input for the next layer, forming a Stacked Denoising Auto-encoder (SDA).
Note that training a Denoising Auto-encoder
can actually be seen as training a particular RBM by an inductive
principle different from maximum likelihood~\citep{Vincent-SM-2010},
namely by Score Matching~\citep{Hyvarinen-2005,HyvarinenA2008}.
\fi

Previous comparative experimental results with stacking of RBMs and DAs
to build deep supervised predictors had shown that they could outperform
shallow architectures in a variety of settings, especially
when the data involves complex interactions between many factors of
variation~\citep{LarochelleH2007,Bengio-2009}. Other experiments have suggested
that the unsupervised layer-wise pre-training acted as a useful
prior~\citep{Erhan+al-2010} that allows one to initialize a deep
neural network in a much smaller region of parameter space,
corresponding to better generalization.

To further the understanding of the reasons for the good performance
observed with deep learners, we focus here on the following {\em hypothesis}:
intermediate levels of representation, especially when there are
more such levels, can be exploited to {\bf share
statistical strength across different but related types of examples},
such as examples coming from other tasks than the task of interest
(the multi-task setting), or examples coming from an overlapping
but different distribution (images with different kinds of perturbations
and noises, here). This is consistent with the hypotheses discussed
in~\citet{Bengio-2009} regarding the potential advantage
of deep learning and the idea that more levels of representation can
give rise to more abstract, more general features of the raw input.

This hypothesis is related to the {\bf self-taught learning} setting~\citep{RainaR2007}
introduced above. As noted there, deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
but the impact of {\em out-of-distribution} examples and of the {\em multi-task} setting,
and in particular the {\em relative advantage of deep learning} in these settings,
remained largely unexplored (one exception is~\citep{CollobertR2008}, which shares and
uses unsupervised pre-training only with the first layer).


%
The {\bf main claim} of this paper is that deep learners (with several levels of representation) can
{\bf benefit more from out-of-distribution examples than shallow learners} (with a single
level), both in the context of the multi-task setting and from
perturbed examples. Because we are able to improve on state-of-the-art
performance and reach human-level performance
on a large-scale task, we consider that this paper is also a contribution
to advance the application of machine learning to handwritten character recognition.
More precisely, we ask and answer the following questions:

%\begin{enumerate}
$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digit images generalize to the setting of a similar but much larger and richer
dataset, the NIST special database 19, with 62 classes and around 800k examples?

$\bullet$ %\item
To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifiers better not only on similarly perturbed images but also on
the {\em original clean examples}? We study this question in the
context of the 62-class and 10-class tasks of the NIST special database 19.

$\bullet$ %\item
Do deep architectures {\em benefit {\bf more} from such out-of-distribution}
examples, in particular do they benefit more from
examples that are perturbed versions of the examples from the task of interest?

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit {\bf more}
from training with moderately {\em different classes} (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
We train on 62 classes and test on 10 (digits) or 26 (upper case or lower case)
to answer this question.
%\end{enumerate}

Our experimental results provide positive evidence towards all of these questions,
as well as {\bf classifiers that reach human-level performance on 62-class isolated character
recognition and beat previously published results on the NIST dataset (special database 19)}.
To achieve these results, we introduce in the next section a sophisticated system
for stochastically transforming character images and then explain the methodology,
which is based on training with or without these transformed images and testing on
clean ones.
Code for generating these transformations as well as for the deep learning
algorithms is made available at {\tt http://hg.assembla.com/ift6266}.

\vspace*{-3mm}
%\newpage
\section{Perturbed and Transformed Character Images}
\label{s:perturbations}
\vspace*{-2mm}

\begin{minipage}[h]{\linewidth}
\begin{wrapfigure}[8]{l}{0.15\textwidth}
%\begin{minipage}[b]{0.14\linewidth}
\vspace*{-5mm}
\begin{center}
\includegraphics[scale=.4]{images/Original.png}\\
{\bf Original}
\end{center}
\end{wrapfigure}
%\vspace{0.7cm}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
This section describes the different transformations we used to stochastically
transform $32 \times 32$ source images (such as the one on the left)
in order to obtain data from a larger distribution which
covers a domain substantially larger than the clean characters distribution from
which we start.
Although character transformations have been used before to
improve character recognizers, this effort is on a large scale both
in the number of classes and in the complexity of the transformations, hence
in the complexity of the learning task.
More details can
be found in this technical report~\citep{ARXIV-2010}.
The code for these transformations (mostly Python) is available at
{\tt http://hg.assembla.com/ift6266}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.
There are two main parts in the pipeline. The first one,
from thickness to pinch, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
\end{minipage}

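As a minimal illustration of how the single control value drives the whole pipeline,
the following Python sketch (module and function names are illustrative and do not
reproduce the exact interface of the code released at the URL above) applies a list of
modules in order, each possibly skipped with a module-specific probability:

{\small
\begin{verbatim}
import numpy as np

def apply_pipeline(image, modules, complexity, rng):
    """Apply each module in order; 'complexity' in [0, 1] modulates the
    amount of deformation or noise each module introduces."""
    out = image
    for module in modules:
        # noise modules are skipped with a module-specific probability
        if rng.uniform() < module.get("skip_prob", 0.0):
            continue
        out = module["fn"](out, complexity, rng)
    return out

# Illustrative usage with placeholder module functions:
#   modules = [{"fn": thicken}, {"fn": slant}, {"fn": affine},
#              {"fn": gaussian_noise, "skip_prob": 0.70}]
#   rng = np.random.RandomState(0)
#   out = apply_pipeline(x, modules, complexity=0.7, rng=rng)
\end{verbatim}
}
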
\vspace*{1mm}
%\subsection{Transformations}
{\large\bf 2.1 Transformations}
\vspace*{1mm}


\begin{minipage}[h]{\linewidth}
\begin{wrapfigure}[7]{l}{0.15\textwidth}
%\begin{minipage}[b]{0.14\linewidth}
%\centering
\begin{center}
\vspace*{-5mm}
\includegraphics[scale=.4]{images/Thick_only.png}\\
{\bf Thickness}
\end{center}
%\vspace{.6cm}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
\end{wrapfigure}
To change character {\bf thickness}, morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
are applied. The neighborhood of each pixel is multiplied
element-wise with a {\em structuring element} matrix.
The pixel value is replaced by the maximum or the minimum of the resulting
matrix, respectively for dilation or erosion. Ten different structuring elements with
increasing dimensions (largest is $5\times5$) were used. For each image, we
randomly sample the operator type (dilation or erosion) with equal probability and one structuring
element from a subset of the $n=round(m \times complexity)$ smallest structuring elements,
where $m=10$ for dilation and $m=6$ for erosion (to avoid completely erasing thin characters).
A neutral element (no transformation)
is always present in the set.
%\vspace{.4cm}
\end{minipage}
\vspace*{3mm}
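As an illustration, a minimal sketch of this module in Python (assuming NumPy and
SciPy's {\tt ndimage}; the set of structuring elements is simplified relative to the
released code):

{\small
\begin{verbatim}
import numpy as np
from scipy import ndimage

def thicken(image, complexity, rng):
    """Randomly dilate or erode a grey-level image; larger 'complexity'
    allows larger structuring elements (at most 5x5 here)."""
    sizes = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]   # illustrative increasing sizes
    dilate = rng.uniform() < 0.5             # operator chosen with equal probability
    m = 10 if dilate else 6                  # erosion uses only the 6 smallest
    n = max(1, int(round(m * complexity)))   # subset of the n smallest elements
    k = sizes[rng.randint(0, n)]
    if k == 1:                               # neutral element: no transformation
        return image
    footprint = np.ones((k, k))
    if dilate:
        return ndimage.grey_dilation(image, footprint=footprint)
    return ndimage.grey_erosion(image, footprint=footprint)
\end{verbatim}
}
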

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.4]{images/Slant_only.png}\\
{\bf Slant}
\end{minipage}%
\hspace{0.3cm}
\begin{minipage}[b]{0.83\linewidth}
%\centering
To produce {\bf slant}, each row of the image is shifted
proportionally to its height: $shift = round(slant \times height)$,
with $slant \sim U[-complexity,complexity]$; the sign of $slant$
determines whether the shift is to the left or to the right.
\vspace{8mm}
\end{minipage}
\vspace*{3mm}
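A minimal Python sketch of this module (boundary handling is simplified here; the
released code pads rather than wraps):

{\small
\begin{verbatim}
import numpy as np

def slant(image, complexity, rng):
    """Shift each row horizontally in proportion to its vertical position."""
    s = rng.uniform(-complexity, complexity)
    out = np.zeros_like(image)
    for row in range(image.shape[0]):
        shift = int(round(s * row))
        out[row] = np.roll(image[row], shift)  # wrap-around for simplicity
    return out
\end{verbatim}
}
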

\begin{minipage}[h]{\linewidth}
\begin{minipage}[b]{0.14\linewidth}
%\centering
\begin{wrapfigure}[8]{l}{0.15\textwidth}
\vspace*{-6mm}
\begin{center}
\includegraphics[scale=.4]{images/Affine_only.png}\\
{\small {\bf Affine \mbox{Transformation}}}
\end{center}
\end{wrapfigure}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
A $2 \times 3$ {\bf affine transform} matrix (with
parameters $(a,b,c,d,e,f)$) is sampled according to the $complexity$.
Output pixel $(x,y)$ takes the value of input pixel
nearest to $(ax+by+c,dx+ey+f)$,
producing scaling, translation, rotation and shearing.
Marginal distributions of $(a,b,c,d,e,f)$ have been tuned to
forbid large rotations (to avoid confusing classes) but to give good
variability of the transformation: $a$ and $d$ $\sim U[1-3\,complexity,1+3\,complexity]$,
$b$ and $e$ $\sim U[-3\,complexity,3\,complexity]$, and $c$ and $f \sim U[-4\,complexity, 4\,complexity]$.\\
\end{minipage}
\end{minipage}

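The sampling and nearest-neighbour mapping can be sketched as follows (Python with
NumPy; written for clarity rather than efficiency):

{\small
\begin{verbatim}
import numpy as np

def affine(image, complexity, rng):
    """Sample (a,b,c,d,e,f) as above; output pixel (x, y) takes the value
    of the input pixel nearest to (a*x + b*y + c, d*x + e*y + f)."""
    a, d = rng.uniform(1 - 3 * complexity, 1 + 3 * complexity, size=2)
    b, e = rng.uniform(-3 * complexity, 3 * complexity, size=2)
    c, f = rng.uniform(-4 * complexity, 4 * complexity, size=2)
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx = int(round(a * x + b * y + c))
            sy = int(round(d * x + e * y + f))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out
\end{verbatim}
}
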
\iffalse
\vspace*{-4.5mm}

\begin{minipage}[h]{\linewidth}
\begin{wrapfigure}[7]{l}{0.15\textwidth}
%\hspace*{-8mm}\begin{minipage}[b]{0.25\linewidth}
%\centering
\begin{center}
\vspace*{-4mm}
\includegraphics[scale=.4]{images/Localelasticdistorsions_only.png}\\
{\bf Local Elastic Deformation}
\end{center}
\end{wrapfigure}
%\end{minipage}%
%\hspace{-3mm}\begin{minipage}[b]{0.85\linewidth}
%\vspace*{-20mm}
The {\bf local elastic deformation}
module induces a ``wiggly'' effect in the image, following~\citet{SimardSP03-short},
which provides more details.
The intensity of the displacement fields is given by
$\alpha = \sqrt[3]{complexity} \times 10.0$; the fields are
convolved with a 2D Gaussian kernel (resulting in a blur) of
standard deviation $\sigma = 10 - 7 \times\sqrt[3]{complexity}$.
%\vspace{.9cm}
\end{minipage}

\vspace*{7mm}

%\begin{minipage}[b]{0.14\linewidth}
%\centering
\begin{minipage}[h]{\linewidth}
\begin{wrapfigure}[7]{l}{0.15\textwidth}
\vspace*{-5mm}
\begin{center}
\includegraphics[scale=.4]{images/Pinch_only.png}\\
{\bf Pinch}
\end{center}
\end{wrapfigure}
%\vspace{.6cm}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
The {\bf pinch} module applies the ``Whirl and pinch'' GIMP filter with whirl set to 0.
A pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface'' (GIMP documentation manual).
For a square input image, draw a radius-$r$ disk
around its center $C$. Any pixel $P$ belonging to
that disk has its value replaced by
the value of a ``source'' pixel in the original image,
on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1=distance(P,C)$
and $d_2 = \sin(\frac{\pi{}d_1}{2r})^{-pinch} \times
d_1$, where $pinch$ is a parameter of the filter.
The actual value is given by bilinear interpolation considering the pixels
around the (non-integer) source position thus found.
Here $pinch \sim U[-complexity, 0.7 \times complexity]$.
%\vspace{1.5cm}
\end{minipage}

\vspace{1mm}

{\large\bf 2.2 Injecting Noise}
%\subsection{Injecting Noise}
\vspace{2mm}

\begin{minipage}[h]{\linewidth}
%\vspace*{-.2cm}
\begin{minipage}[t]{0.14\linewidth}
\centering
\vspace*{-2mm}
\includegraphics[scale=.4]{images/Motionblur_only.png}\\
{\bf Motion Blur}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
%\vspace*{.5mm}
The {\bf motion blur} module is GIMP's ``linear motion blur'', which
has parameters $length$ and $angle$. The value of
a pixel in the final image is approximately the mean of the first $length$ pixels
found by moving in the $angle$ direction,
$angle \sim U[0,360]$ degrees, and $length \sim {\rm Normal}(0,(3 \times complexity)^2)$.
\vspace{5mm}
\end{minipage}
\end{minipage}

\vspace*{1mm}

\begin{minipage}[h]{\linewidth}
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[scale=.4]{images/occlusion_only.png}\\
{\bf Occlusion}
%\vspace{.5cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
\vspace*{-18mm}
The {\bf occlusion} module selects a random rectangle from an {\em occluder} character
image and places it over the original {\em occluded}
image. Pixels are combined by taking $\max(occluder, occluded)$,
i.e. keeping the lighter ones.
The rectangle corners
are sampled so that larger complexity gives larger rectangles.
The destination position in the occluded image is also sampled
according to a normal distribution (more details in~\citet{ift6266-tr-anonymous}).
This module is skipped with probability 60\%.
%\vspace{7mm}
\end{minipage}
\end{minipage}

\vspace*{1mm}

\begin{wrapfigure}[8]{l}{0.15\textwidth}
\vspace*{-6mm}
\begin{center}
%\begin{minipage}[t]{0.14\linewidth}
%\centering
\includegraphics[scale=.4]{images/Bruitgauss_only.png}\\
{\bf Gaussian Smoothing}
\end{center}
\end{wrapfigure}
%\vspace{.5cm}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[t]{0.86\linewidth}
With the {\bf Gaussian smoothing} module,
different regions of the image are spatially smoothed.
This is achieved by first convolving
the image with an isotropic Gaussian kernel of
size and variance chosen uniformly in the ranges $[12,12 + 20 \times
complexity]$ and $[2,2 + 6 \times complexity]$. This filtered image is normalized
between $0$ and $1$. We also create an isotropic weighted averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will be
averaging centers between the original image and the filtered one. We
initialize to zero a mask matrix of the image size. For each selected pixel
we add to the mask the averaging window centered on it. The final image is
computed from the following element-wise operation: $\frac{image + filtered\_image
\times mask}{mask+1}$.
This module is skipped with probability 75\%.
%\end{minipage}

\newpage

\vspace*{-9mm}

%\hspace*{-3mm}\begin{minipage}[t]{0.18\linewidth}
%\centering
\begin{minipage}[t]{\linewidth}
\begin{wrapfigure}[7]{l}{0.15\textwidth}
\vspace*{-5mm}
\begin{center}
\includegraphics[scale=.4]{images/Permutpixel_only.png}\\
{\small\bf Permute Pixels}
\end{center}
\end{wrapfigure}
%\end{minipage}%
%\hspace{-0cm}\begin{minipage}[t]{0.86\linewidth}
%\vspace*{-20mm}
This module {\bf permutes neighbouring pixels}. It first selects a
fraction $\frac{complexity}{3}$ of pixels randomly in the image. Each
of these pixels is then sequentially exchanged with a random pixel
among its four nearest neighbors (on its left, right, top or bottom).
This module is skipped with probability 80\%.\\
\vspace*{1mm}
\end{minipage}

\vspace{-3mm}

\begin{minipage}[t]{\linewidth}
\begin{wrapfigure}[7]{l}{0.15\textwidth}
%\vspace*{-3mm}
\begin{center}
%\hspace*{-3mm}\begin{minipage}[t]{0.18\linewidth}
%\centering
\vspace*{-5mm}
\includegraphics[scale=.4]{images/Distorsiongauss_only.png}\\
{\small \bf Gauss. Noise}
\end{center}
\end{wrapfigure}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[t]{0.86\linewidth}
\vspace*{12mm}
The {\bf Gaussian noise} module simply adds, to each pixel of the image independently, a
noise $\sim Normal(0,(\frac{complexity}{10})^2)$.
This module is skipped with probability 70\%.
%\vspace{1.1cm}
\end{minipage}

\vspace*{1.2cm}

\begin{minipage}[t]{\linewidth}
\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[scale=.4]{images/background_other_only.png}\\
{\small \bf Bg Image}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
\vspace*{-18mm}
Following~\citet{Larochelle-jmlr-2009}, the {\bf background image} module adds a random
background image behind the letter, from a randomly chosen natural image,
with contrast adjustments depending on $complexity$, to preserve
more or less of the original character image.
%\vspace{.8cm}
\end{minipage}
\end{minipage}
%\vspace{-.7cm}

\begin{minipage}[t]{0.14\linewidth}
\centering
\includegraphics[scale=.4]{images/Poivresel_only.png}\\
{\small \bf Salt \& Pepper}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
\vspace*{-18mm}
The {\bf salt and pepper noise} module adds noise $\sim U[0,1]$ to random subsets of pixels.
The number of selected pixels is $0.2 \times complexity$.
This module is skipped with probability 75\%.
%\vspace{.9cm}
\end{minipage}
%\vspace{-.7cm}

\vspace{1mm}

\begin{minipage}[t]{\linewidth}
\begin{wrapfigure}[7]{l}{0.14\textwidth}
%\begin{minipage}[t]{0.14\linewidth}
%\centering
\begin{center}
\vspace*{-4mm}
\hspace*{-1mm}\includegraphics[scale=.4]{images/Rature_only.png}\\
{\bf Scratches}
%\end{minipage}%
\end{center}
\end{wrapfigure}
%\hspace{0.3cm}\begin{minipage}[t]{0.86\linewidth}
%\vspace{.4cm}
The {\bf scratches} module places line-like white patches on the image. The
lines are heavily transformed images of the digit ``1'' (one), chosen
at random among 500 such images of ``1'',
randomly cropped and rotated by an angle $\sim Normal(0,(100 \times
complexity)^2)$ (in degrees), using bi-cubic interpolation.
Two passes of a grey-scale morphological erosion filter
are applied, reducing the width of the line
by an amount controlled by $complexity$.
This module is skipped with probability 85\%. The probabilities
of applying 1, 2, or 3 patches are (50\%,30\%,20\%).
\end{minipage}

\vspace*{1mm}

\begin{minipage}[t]{0.25\linewidth}
\centering
\hspace*{-16mm}\includegraphics[scale=.4]{images/Contrast_only.png}\\
{\bf Grey Level \& Contrast}
\end{minipage}%
\hspace{-12mm}\begin{minipage}[t]{0.82\linewidth}
\vspace*{-18mm}
The {\bf grey level and contrast} module changes the contrast by changing grey levels, and may invert the image polarity (white
to black and black to white). The contrast is $C \sim U[1-0.85 \times complexity,1]$
so the image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with probability 50\%.
%\vspace{.7cm}
\end{minipage}
\vspace{2mm}

\fi

\iffalse
\begin{figure}[ht]
\centerline{\resizebox{.9\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case \emph{t}
(the upper left image). Each image in the pipeline (going from
left to right, first top line, then bottom line) shows the result
of applying one of the modules in the pipeline. The last image
(bottom right) is used as training example.}
\label{fig:pipeline}
\end{figure}
\fi

\vspace*{-3mm}
\section{Experimental Setup}
\vspace*{-1mm}

Much previous work on deep learning had been performed on
the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\citep{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}.
The focus here is on much larger training sets, from 10 times to
1000 times larger, and 62 classes.

The first step in constructing the larger datasets (called NISTP and P07) is to sample from
a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
and {\bf OCR data} (scanned machine printed characters). Once a character
is sampled from one of these {\em data sources} (chosen randomly), the second step is to
apply a pipeline of transformations and/or noise processes described in section \ref{s:perturbations}.

To provide a baseline of error rate comparison we also estimate human performance
on both the 62-class task and the 10-class digits task.
We compare the best Multi-Layer Perceptrons (MLP) against
the best Stacked Denoising Auto-encoders (SDA), when
both models' hyper-parameters are selected to minimize the validation set error.
We also provide a comparison against a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service ({\tt http://mturk.com}).
AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
An incentive for them to do the job right is that payment can be denied
if the job is not properly done.
Mechanical Turk has been used extensively in natural language processing and vision.
%processing \citep{SnowEtAl2008} and vision
%\citep{SorokinAndForsyth2008,whitehill09}.
AMT users were presented
with 10 character images at a time (from a test set) and asked to choose 10 corresponding ASCII
characters. They were forced to choose a single character class (either among the
62 or 10 character classes) for each image.
80 subjects classified 2500 images per (dataset,task) pair.
Different human labelers sometimes provided a different label for the same
example, and we were able to estimate the error variance due to this effect
because each image was classified by 3 different persons.
The average error of humans on the 62-class task NIST test set
is 18.2\%, with a standard error of 0.1\%.

\vspace*{-3mm}
\subsection{Data Sources}
\vspace*{-2mm}

%\begin{itemize}
%\item
{\bf NIST.}
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
The dataset is composed of 814,255 digits and characters (upper and lower cases), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'',``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
The fourth partition (called $hsf_4$, 82,587 examples),
experimentally recognized to be the most difficult one, is the one recommended
by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
for that purpose. We randomly split the remainder (731,668 examples) into a training set and a validation set for
model selection.
The performance reported by previous work on that dataset mostly concerns only the digits.
Here we use all the classes both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
The distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a more uniform distribution
of letters in the test set (whereas in the training set they are distributed
more like in natural text).
\vspace*{-1mm}

%\item
{\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from:
{\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
% TODO: pointless to anonymize, it's not pointing to our work
Including the fonts of an operating system (Windows 7), there is a total of $9817$ different fonts that we can choose uniformly from.
The chosen {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.
\vspace*{-1mm}

%\item
{\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator library) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, many different fonts are used for generating the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source.
%Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
\vspace*{-1mm}

%\item
{\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters was included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
%TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
%\end{itemize}

\vspace*{-3mm}
\subsection{Data Sets}
\vspace*{-2mm}

All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes. They are obtained from the optional application of the
perturbation pipeline to i.i.d. samples from the data sources, and they are randomly split into
a training set, a validation set, and a test set.
%\begin{itemize}
\vspace*{-1mm}

%\item
{\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
\{651,668 / 80,000 / 82,587\} \{training / validation / test\} examples, containing
upper case, lower case, and digits.
\vspace*{-1mm}

%\item
{\bf P07.} This dataset of upper case, lower case and digit images
is obtained by taking raw characters from all four of the above sources
and sending them through the transformation pipeline described in section \ref{s:perturbations}.
For each new example to be generated, a data source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples.
\vspace*{-1mm}
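The generation of one P07 example can be summarized by the following Python sketch
(the data-source objects and the module list are placeholders standing in for the
corresponding components of the released code):

{\small
\begin{verbatim}
import numpy as np

def sample_p07_example(sources, modules, rng):
    """Draw one perturbed training example for the P07 dataset."""
    # select a data source: 10% fonts, 25% captchas, 25% OCR, 40% NIST
    names = ["fonts", "captcha", "ocr", "nist"]
    probs = [0.10, 0.25, 0.25, 0.40]
    source = names[rng.choice(len(names), p=probs)]
    image, label = sources[source].draw(rng)   # placeholder interface
    # apply all modules in order; each draws its own complexity in [0, 0.7]
    for module in modules:
        image = module(image, rng.uniform(0.0, 0.7), rng)
    return image, label
\end{verbatim}
}
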

%\item
{\bf NISTP.} This dataset is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
It has \{81,920,000 / 80,000 / 20,000\} \{training / validation / test\} examples
obtained from the corresponding NIST sets plus other sources.
%\end{itemize}

\vspace*{-3mm}
\subsection{Models and their Hyperparameters}
\vspace*{-2mm}

The experiments are performed using MLPs (with a single
hidden layer) and deep SDAs.
\emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}

{\bf Multi-Layer Perceptrons (MLP).}
Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used
(making the use of SVMs computationally challenging because of their quadratic
scaling behavior). Preliminary experiments on training SVMs (libSVM) with subsets of the training
set small enough to fit in memory yielded substantially worse results
than those obtained with MLPs. For training on nearly a hundred million examples
(with the perturbed data), the MLPs and SDA are much more convenient than
classifiers based on kernel methods.
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P(class | image)$.
The number of hidden units is taken in $\{300,500,800,1000,1500\}$.
Training examples are presented in minibatches of size 20. A constant learning
rate was chosen among $\{0.001, 0.01, 0.025, 0.075, 0.1, 0.5\}$.
%through preliminary experiments (measuring performance on a validation set),
%and $0.1$ (which was found to work best) was then selected for optimizing on
%the whole training sets.
\vspace*{-1mm}
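For concreteness, a minimal NumPy sketch of the MLP's output computation (the actual
experiments use a GPU implementation and stochastic gradient descent on minibatches of
20; the initialization scale below is illustrative):

{\small
\begin{verbatim}
import numpy as np

def init_mlp(n_in, n_hidden, n_classes, rng):
    scale = 1.0 / np.sqrt(n_in)
    return {"W1": rng.uniform(-scale, scale, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.uniform(-scale, scale, (n_hidden, n_classes)),
            "b2": np.zeros(n_classes)}

def mlp_probs(p, x):
    """Estimate P(class | image) for a minibatch x of flattened 32x32 images."""
    h = np.tanh(x.dot(p["W1"]) + p["b1"])     # tanh hidden layer
    a = h.dot(p["W2"]) + p["b2"]              # pre-softmax activations
    a -= a.max(axis=1, keepdims=True)         # numerical stability
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)   # softmax (normalized exponentials)
\end{verbatim}
}
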


{\bf Stacked Denoising Auto-encoders (SDA).}
Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
apparently setting parameters in the
basin of attraction of a supervised gradient descent solution yielding better
generalization~\citep{Erhan+al-2010}. This initial {\em unsupervised
pre-training phase} uses all of the training images but not the training labels.
Each layer is trained in turn to produce a new representation of its input
(starting from the raw pixels).
It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (like in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

\begin{figure}[ht]
\vspace*{-2mm}
\centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
\vspace*{-2mm}
\caption{Illustration of the computations and training criterion for the denoising
auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
the layer (i.e. raw input or output of previous layer)
is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
is compared to the uncorrupted input $x$ through the loss function
$L_H(x,z)$, whose expected value is approximately minimized during training
by tuning $\theta$ and $\theta'$.}
\label{fig:da}
\vspace*{-2mm}
\end{figure}

Here we chose to use the Denoising
Auto-encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is simple to train and
explain (see Figure~\ref{fig:da}, as well as
tutorial and code there: {\tt http://deeplearning.net/tutorial}),
provides efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008-very-small}. In fact, it corresponds to a Gaussian
RBM trained by a Score Matching criterion~\citep{Vincent-SM-2010}.
During training, a Denoising
Auto-encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Here we use the random binary masking corruption
(which sets to 0 a random subset of the inputs).
Once it is trained, in a purely unsupervised way,
its hidden units' activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (see previous section).
The SDA hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same above set). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers but it was fixed to 3 based on previous work with
SDAs on MNIST~\citep{VincentPLarochelleH2008-very-small}. The size of the hidden
layers was kept constant across hidden layers, and the best results
were obtained with the largest values that we could experiment
with given our patience, with 1000 hidden units.
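A minimal sketch of one pre-training step for a single denoising auto-encoder layer
(NumPy; sigmoid units, masking corruption, the cross-entropy reconstruction loss $L_H$
and tied weights are assumed; the gradient update, performed by automatic
differentiation in the actual implementation, is omitted):

{\small
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def da_step(x, W, b, b_prime, corruption, rng):
    """Corrupt, encode, decode, and return the code y and the loss L_H(x, z)."""
    mask = rng.binomial(1, 1.0 - corruption, size=x.shape)  # masking noise
    x_tilde = x * mask                                      # corrupted input
    y = sigmoid(x_tilde.dot(W) + b)                         # code (hidden units)
    z = sigmoid(y.dot(W.T) + b_prime)                       # reconstruction
    z = np.clip(z, 1e-7, 1 - 1e-7)                          # avoid log(0)
    loss = -np.mean(np.sum(x * np.log(z) + (1 - x) * np.log(1 - z), axis=1))
    return y, loss

# Greedy layer-wise pre-training: train the first layer on raw pixels, then
# feed its codes y as input to the second layer, and so on; the resulting
# weights initialize a deep MLP that is fine-tuned with supervised gradient
# descent as described in the previous section.
\end{verbatim}
}
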

\vspace*{-1mm}

\begin{figure}[ht]
\vspace*{-2mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
\vspace*{-3mm}
\caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
on NIST, 1 on NISTP, and 2 on P07. Left: overall results
of all models, on NIST and NISTP test sets.
Right: error rates on NIST test digits only, along with the previous results from
literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
respectively based on ART, nearest neighbors, MLPs, and SVMs.}
\label{fig:error-rates-charts}
\vspace*{-2mm}
\end{figure}


\begin{figure}[ht]
\vspace*{-3mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
\vspace*{-3mm}
\caption{Relative improvement in error rate due to self-taught learning.
Left: Improvement (or loss, when negative)
induced by out-of-distribution examples (perturbed data).
Right: Improvement (or loss, when negative) induced by multi-task
learning (training on all classes and testing only on either digits,
upper case, or lower-case). The deep learner (SDA) benefits more from
both self-taught learning scenarios, compared to the shallow MLP.}
\label{fig:improvements-charts}
\vspace*{-2mm}
\end{figure}

\section{Experimental Results}
\vspace*{-2mm}

%\vspace*{-1mm}
%\subsection{SDA vs MLP vs Humans}
%\vspace*{-1mm}
The models are either trained on NIST (MLP0 and SDA0),
NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
on either NIST, NISTP or P07 (regardless of the data set used for training),
either on the 62-class task
or on the 10-digits task. Training time (including about half
for unsupervised pre-training, for DAs) on the larger
datasets is around one day on a GPU (GTX 285).
Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
SDA2), along with the previous results on the digits NIST special database
19 test set from the literature, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor
search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
SVMs~\citep{Milgram+al-2005}.% More detailed and complete numerical results
%(figures and tables, including standard errors on the error rates) can be
%found in Appendix.
The deep learner not only outperformed the shallow ones and
previously published performance (in a statistically and qualitatively
significant way) but when trained with perturbed data
reaches human performance on both the 62-class task
and the 10-class (digits) task.
17\% error (SDA1) or 18\% error (humans) may seem large but a large
majority of the errors from humans and from SDA1 are from out-of-context
confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
``c'' and a ``C'' are often indistinguishable).

In addition, as shown in the left of
Figure~\ref{fig:improvements-charts}, the relative improvement in error
rate brought by self-taught learning is greater for the SDA, and these
differences with the MLP are statistically and qualitatively
significant.
The left side of the figure shows the improvement to the clean
NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07),
over the models trained exclusively on NIST (respectively SDA0 and MLP0).
Relative percent change is measured by taking
$100\% \times$ (original model's error / perturbed-data model's error - 1).
The right side of
Figure~\ref{fig:improvements-charts} shows the relative improvement
brought by the use of a multi-task setting, in which the same model is
trained for more classes than the target classes of interest (i.e. training
with all 62 classes when the target classes are respectively the digits,
lower-case, or upper-case characters). Again, whereas the gain from the
multi-task setting is marginal or negative for the MLP, it is substantial
for the SDA. Note that to simplify these multi-task experiments, only the original
NIST dataset is used. For example, the MLP-digits bar shows that the relative
percent improvement in MLP error rate on the NIST digits test set
is $100\% \times$ (single-task
model's error / multi-task model's error - 1). The single-task model is
trained with only 10 outputs (one per digit), seeing only digit examples,
whereas the multi-task model is trained with 62 outputs, with all 62
character classes as examples. Hence the hidden units are shared across
all tasks. For the multi-task model, the digit error rate is measured by
comparing the correct digit class with the output class associated with the
maximum conditional probability among only the digit classes outputs. The
setting is similar for the other two target classes (lower case characters
and upper case characters). Note however that some types of perturbations
(NISTP) help more than others (P07) when testing on the clean images.
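The restricted evaluation of the multi-task model and the relative improvement measure
can be sketched as follows (Python; the assumption that the first 10 of the 62 outputs
correspond to the digit classes is illustrative, not the exact class ordering used):

{\small
\begin{verbatim}
import numpy as np

def digit_error_rate(probs, labels, digit_idx=np.arange(10)):
    """probs: (n, 62) conditional class probabilities of the multi-task
    model; labels: true digit classes in {0, ..., 9}. The prediction is
    the digit class with maximum probability among the digit outputs."""
    pred = digit_idx[np.argmax(probs[:, digit_idx], axis=1)]
    return np.mean(pred != labels)

def relative_improvement(single_task_error, multi_task_error):
    """Relative percent improvement: 100 * (err_single / err_multi - 1)."""
    return 100.0 * (single_task_error / multi_task_error - 1.0)
\end{verbatim}
}
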
%%\vspace*{-1mm}
%\subsection{Perturbed Training Data More Helpful for SDA}
%\vspace*{-1mm}

%\vspace*{-1mm}
%\subsection{Multi-Task Learning Effects}
%\vspace*{-1mm}

\iffalse
As previously seen, the SDA is better able to benefit from the
transformations applied to the data than the MLP. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper case characters (knowing that the input is one), and
recognizing lower case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out if deep learning can benefit more (or less) from multiple related tasks
(i.e. the multi-task setting) compared to a corresponding purely supervised
shallow learner.

We use a single hidden layer MLP with 1000 hidden units, and an SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
\fi


\vspace*{-2mm}
\section{Conclusions and Discussion}
\vspace*{-2mm}

We have found that the self-taught learning framework is more beneficial
to a deep learner than to a traditional shallow and purely
supervised learner. More precisely,
the answers are positive for all the questions asked in the introduction.
%\begin{itemize}

$\bullet$ %\item
{\bf Do the good results previously obtained with deep architectures on the
MNIST digits generalize to a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples}?
Yes, the SDA {\em systematically outperformed the MLP and all the previously
published results on this dataset} (the ones that we are aware of), {\em in fact reaching human-level
performance} at around 17\% error on the 62-class task and 1.4\% on the digits,
and beating previously published results on the same data.

$\bullet$ %\item
{\bf To what extent do self-taught learning scenarios help deep learners,
and do they help them more than shallow supervised ones}?
We found that distorted training examples not only made the resulting
classifier better on similarly perturbed images but also on
the {\em original clean examples}, and more importantly (and this is the
more novel finding), that deep architectures benefit more from such
{\em out-of-distribution} examples. MLPs were helped by perturbed training
examples when tested on perturbed input
images (65\% relative improvement on NISTP)
but were only marginally helped (5\% relative improvement on all classes)
or even hurt (10\% relative loss on digits)
with respect to clean examples. On the other hand, the deep SDAs
were significantly boosted by these out-of-distribution examples.
Similarly, whereas the improvement due to the multi-task setting was marginal or
negative for the MLP (from +5.6\% to -3.6\% relative change),
it was quite significant for the SDA (from +13\% to +27\% relative change),
which may be explained by the arguments below.
%\end{itemize}

In the original self-taught learning framework~\citep{RainaR2007}, the
out-of-sample examples were used as a source of unsupervised data, and
experiments showed its positive effects in a \emph{limited labeled data}
scenario. However, many of the results by \citet{RainaR2007} (who used a
shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
learning vs ordinary supervised learning} diminishes as the number of labeled examples increases.
We note instead that, for deep
architectures, our experiments show that such a positive effect is accomplished
even in a scenario with a \emph{large number of labeled examples},
suggesting that the relative gain of self-taught learning and
out-of-distribution examples is probably preserved
in the asymptotic regime. However, note that in our perturbation experiments
(but not in our multi-task experiments),
even the out-of-distribution examples are labeled, unlike in the
earlier self-taught learning experiments~\citep{RainaR2007}.

{\bf Why would deep learners benefit more from the self-taught learning framework}?
The key idea is that the lower layers of the predictor compute a hierarchy
of features that can be shared across tasks or across variants of the
input distribution. A theoretical analysis of generalization improvements
due to sharing of intermediate features across tasks already points
towards that explanation~\citep{baxter95a}.
Intermediate features that can be used in different
contexts can be estimated in a way that allows sharing of statistical
strength. Features extracted through many levels are more likely to
be more abstract and more invariant to some of the factors of variation
in the underlying distribution (as the experiments in~\citet{Goodfellow2009} suggest),
increasing the likelihood that they would be useful for a larger array
of tasks and input conditions.
Therefore, we hypothesize that both depth and unsupervised
pre-training play a part in explaining the advantages observed here, and future
experiments could attempt to tease apart these factors.
And why would deep learners benefit from the self-taught learning
scenarios even when the number of labeled examples is very large?
We hypothesize that this is related to the hypotheses studied
in~\citet{Erhan+al-2010}. In~\citet{Erhan+al-2010}
it was found that online learning on a huge dataset did not make the
advantage of the deep learning bias vanish, and a similar phenomenon
may be happening here. We hypothesize that unsupervised pre-training
of a deep hierarchy with self-taught learning initializes the
model in the basin of attraction of supervised gradient descent
that corresponds to better generalization. Furthermore, such good
basins of attraction are not discovered by pure supervised learning
(with or without self-taught settings), and more labeled examples
do not allow the model to go from the poorer basins of attraction discovered
by the purely supervised shallow models to the kind of better basins associated
with deep learning and self-taught learning.

A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
can be executed on-line at {\tt http://deep.host22.com}.

\newpage
{
\bibliography{strings,strings-short,strings-shorter,ift6266_ml,aigaion-shorter,specials}
%\bibliographystyle{plainnat}
\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
}


\end{document}