Mercurial > ift6266
annotate writeup/nips2010_submission.tex @ 482:ce69aa9204d8
"changement au titre et reecriture abstract" (changed the title and rewrote the abstract)
author: Yoshua Bengio <bengioy@iro.umontreal.ca>
date: Mon, 31 May 2010 13:59:11 -0400
parents: 150203d2b5c3
children: b9cdb464de5f
\documentclass{article} % For LaTeX2e
\usepackage{nips10submit_e,times}

\usepackage{amsthm,amsmath,amssymb,bbold,bbm}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\title{Deep Self-Taught Learning for Handwritten Character Recognition}
\author{The IFT6266 Gang}

\begin{document}

%\makeanontitle
\maketitle

\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep
architectures, i.e., function classes obtained by composing multiple
non-linear transformations. Self-taught learning (exploiting unlabeled
examples or examples from other distributions) has already been applied
to deep learners, but mostly to show the advantage of unlabeled
examples. Here we explore the advantage brought by {\em out-of-distribution
examples} and show that {\em deep learners benefit more from them than a
corresponding shallow learner}, in the area
of handwritten character recognition. In fact, we show that they reach
human-level performance on both handwritten digit classification and
62-class handwritten character recognition. For this purpose we
developed a powerful generator of stochastic variations and noise
processes for character images, including not only affine transformations but
also slant, local elastic deformations, changes in thickness, background
images, color, contrast, occlusion, and various types of pixel and
spatially correlated noise. The out-of-distribution examples are
obtained by training with these highly distorted images or
by including object classes different from those in the target test set.
\end{abstract}

\section{Introduction}

Deep Learning has emerged as a promising new area of research in
statistical machine learning (see~\citet{Bengio-2009} for a review).
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand.
This is in great part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex)~\citep{HonglakL2008}.
Processing images typically involves transforming the raw pixel data into
new {\bf representations} that can be used for analysis or classification.
For example, a principal component analysis representation linearly projects
the input image into a lower-dimensional feature space.
Why learn a representation? Current practice in the computer vision
literature converts the raw pixels into a hand-crafted representation,
e.g.\ SIFT features~\citep{Lowe04}, but deep learning algorithms
tend to discover similar features in their first few
levels~\citep{HonglakL2008,ranzato-08,Koray-08,VincentPLarochelleH2008-very-small}.
Learning increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}. One of these layer initialization techniques,
applied here, is the Denoising
Auto-Encoder~(DAE)~\citep{VincentPLarochelleH2008-very-small}, which
performed similarly to or better than previously proposed Restricted Boltzmann
Machines in terms of unsupervised extraction of a hierarchy of features
useful for classification. The principle is that each layer, starting from
the bottom, is trained to encode its input (the output of the previous
layer) and to reconstruct it from a corrupted version of it. After this
unsupervised initialization, the stack of denoising auto-encoders can be
converted into a deep supervised feedforward neural network and trained by
stochastic gradient descent.
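To make the layer-wise principle concrete, here is a minimal numpy sketch of one denoising auto-encoder layer with tied weights and masking corruption. This is an illustration only, not the authors' implementation; the layer sizes, corruption level, and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One DAE layer: corrupt the input, encode it, decode with tied
    weights, and train to reconstruct the *clean* input."""
    def __init__(self, n_visible, n_hidden, corruption=0.3, lr=0.1):
        self.W = rng.uniform(-0.1, 0.1, (n_visible, n_hidden))
        self.b_hid = np.zeros(n_hidden)
        self.b_vis = np.zeros(n_visible)
        self.corruption = corruption
        self.lr = lr

    def reconstruct(self, x):
        h = sigmoid(x @ self.W + self.b_hid)
        return sigmoid(h @ self.W.T + self.b_vis)

    def train_step(self, x):
        # Corrupt: randomly zero a fraction of the input pixels.
        x_tilde = x * rng.binomial(1, 1.0 - self.corruption, x.shape)
        h = sigmoid(x_tilde @ self.W + self.b_hid)
        z = sigmoid(h @ self.W.T + self.b_vis)
        # Backprop of the cross-entropy reconstruction loss
        # (sigmoid + cross-entropy gives the simple error z - x).
        dz = z - x
        dh = (dz @ self.W) * h * (1 - h)
        gW = x_tilde.T @ dh + dz.T @ h  # W appears in encoder and decoder
        n = x.shape[0]
        self.W -= self.lr * gW / n
        self.b_hid -= self.lr * dh.mean(axis=0)
        self.b_vis -= self.lr * dz.mean(axis=0)

# Train on a small fixed batch of binary "images".
x = rng.binomial(1, 0.5, (20, 16)).astype(float)
dae = DenoisingAutoencoder(n_visible=16, n_hidden=8)
before = np.mean((dae.reconstruct(x) - x) ** 2)
for _ in range(300):
    dae.train_step(x)
after = np.mean((dae.reconstruct(x) - x) ** 2)
```

After pre-training, the encoder weights of such layers would be stacked and fine-tuned by supervised gradient descent, as described above.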

In this paper we ask the following questions:
\begin{enumerate}
\item Do the good results previously obtained with deep architectures on the
MNIST digits generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?
\item To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifier better not only on similarly perturbed images but also on
the {\em original clean examples}?
\item Do deep architectures benefit more from such {\em out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
\item Similarly, does the feature learning step in deep learning algorithms benefit more
from training with similar but different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
\end{enumerate}
The experimental results presented here provide positive evidence towards all of these questions.

\section{Perturbation and Transformation of Character Images}

This section describes the different transformations we used to stochastically
transform source images in order to obtain data. More details can
be found in the technical report~\citep{ift6266-tr-anonymous}.
The code for these transformations (mostly Python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.

There are two main parts in the pipeline. The first one,
from slant to pinch below, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.

{\large\bf Transformations}\\
{\bf Slant.}
We mimic slant by shifting each row of the image
proportionally to its height: $shift = round(slant \times height)$.
The $slant$ coefficient can be negative or positive with equal probability
and its value is randomly sampled according to the complexity level:
$slant \sim U[0,complexity]$, so the
maximum displacement for the lowest or highest pixel line is
$round(complexity \times 32)$.\\
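A rough numpy sketch of the slant transformation follows (an illustration under one plausible reading of the description, where each row is shifted in proportion to its row index; the function name and boundary handling are our assumptions):

```python
import numpy as np

def slant_image(image, slant):
    """Shift each row horizontally, proportionally to its row index,
    so the top of the character stays put and the bottom shifts most."""
    out = np.zeros_like(image)
    height, width = image.shape
    for y in range(height):
        shift = int(round(slant * y))  # displacement grows with the row
        for x in range(width):
            src = x - shift
            if 0 <= src < width:       # pixels shifted in from outside stay 0
                out[y, x] = image[y, src]
    return out

img = np.zeros((32, 32))
img[:, 16] = 1.0                       # a vertical stroke
slanted = slant_image(img, 0.25)       # stroke now leans to the right
```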
{\bf Thickness.}
Morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
are applied. The neighborhood of each pixel is multiplied
element-wise with a {\em structuring element} matrix.
The pixel value is replaced by the maximum or the minimum of the resulting
matrix, respectively for dilation or erosion. Ten different structuring elements with
increasing dimensions (largest is $5\times5$) were used. For each image, we
randomly sample the operator type (dilation or erosion) with equal probability and one structuring
element from a subset of the $n$ smallest structuring elements, where $n$ is
$round(10 \times complexity)$ for dilation and $round(6 \times complexity)$
for erosion. A neutral element is always present in the set, and if it is
chosen no transformation is applied. Erosion allows only the six
smallest structuring elements because when the character is too thin it may
be completely erased.\\
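Greyscale dilation and erosion with a square structuring element can be sketched as follows (a self-contained illustration, not the pipeline code; the padding convention is our assumption):

```python
import numpy as np

def morph(image, struct, dilate=True):
    """Replace each pixel by the max (dilation) or min (erosion) of its
    neighborhood, masked by a structuring element of ones."""
    k = struct.shape[0] // 2
    # Pad so border maxima/minima are not artificially inflated.
    pad_val = 0.0 if dilate else 1.0
    padded = np.pad(image, k, constant_values=pad_val)
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            window = padded[y:y + 2 * k + 1, x:x + 2 * k + 1]
            vals = window[struct > 0]
            out[y, x] = vals.max() if dilate else vals.min()
    return out

img = np.zeros((10, 10))
img[4:6, 4:6] = 1.0                                # a 2x2 blob
thick = morph(img, np.ones((3, 3)), dilate=True)   # stroke gets thicker
thin = morph(img, np.ones((3, 3)), dilate=False)   # stroke is erased
```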
{\bf Affine Transformations.}
A $2 \times 3$ affine transform matrix (with
6 parameters $(a,b,c,d,e,f)$) is sampled according to the $complexity$ level.
Each pixel $(x,y)$ of the output image takes the value of the pixel
nearest to $(ax+by+c,dx+ey+f)$ in the input image. This
produces scaling, translation, rotation and shearing.
The marginal distributions of $(a,b,c,d,e,f)$ have been tuned by hand to
forbid large rotations (to avoid confusing classes) but to give good
variability of the transformation: $a$ and $d$ $\sim U[1-3 \times
complexity,1+3 \times complexity]$, $b$ and $e$ $\sim U[-3 \times complexity,3
\times complexity]$ and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times
complexity]$.\\
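The inverse-mapping convention described above (each output pixel pulls from the nearest source pixel) can be sketched directly (an illustration; out-of-bounds sources defaulting to 0 is our assumption):

```python
import numpy as np

def affine_transform(image, a, b, c, d, e, f):
    """Each output pixel (x, y) takes the value of the input pixel
    nearest to (a*x + b*y + c, d*x + e*y + f)."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx = int(round(a * x + b * y + c))
            sy = int(round(d * x + e * y + f))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

img = np.zeros((8, 8))
img[2, 3] = 1.0
# a=1, b=0, c=1, d=0, e=1, f=0: a pure translation (source at x+1).
shifted = affine_transform(img, 1, 0, 1, 0, 1, 0)
```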
{\bf Local Elastic Deformations.}
This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03},
which provides more details.
Two displacement fields are generated and applied, for horizontal
and vertical displacements of pixels.
To generate a pixel in either field, first a value between -1 and 1 is
chosen from a uniform distribution. Then all the pixels, in both fields, are
multiplied by a constant $\alpha$ which controls the intensity of the
displacements (larger $\alpha$ translates into larger wiggles).
Each field is convolved with a 2D Gaussian kernel of
standard deviation $\sigma$. Visually, this results in a blur.
$\alpha = \sqrt[3]{complexity} \times 10.0$ and $\sigma = 10 - 7 \times
\sqrt[3]{complexity}$.\\
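The elastic deformation can be sketched as below (a simplified illustration with nearest-pixel resampling; the kernel truncation radius and border handling are our assumptions):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def smooth2d(field, sigma):
    # Separable Gaussian blur of a displacement field.
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, field)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def elastic_deform(image, alpha, sigma, rng):
    h, w = image.shape
    # Random fields in [-1, 1], scaled by alpha, then Gaussian-smoothed.
    dx = smooth2d(rng.uniform(-1, 1, (h, w)) * alpha, sigma)
    dy = smooth2d(rng.uniform(-1, 1, (h, w)) * alpha, sigma)
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx = int(round(x + dx[y, x]))
            sy = int(round(y + dy[y, x]))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

rng = np.random.RandomState(0)
complexity = 0.5
alpha = complexity ** (1 / 3) * 10.0          # as in the text
sigma = 10 - 7 * complexity ** (1 / 3)        # as in the text
img = rng.uniform(0, 1, (32, 32))
wiggly = elastic_deform(img, alpha, sigma, rng)
```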
{\bf Pinch.}
This GIMP filter is named ``Whirl and
pinch'', but whirl was set to 0. A pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface''~\citep{GIMP-manual}.
For a square input image, think of drawing a circle of
radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
that disk (the region inside the circle) will have its value recalculated by taking
the value of another ``source'' pixel in the original image. The position of
that source pixel is found on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1$ to be the distance between $P$
and $C$. $d_2$ is given by $d_2 = \sin(\frac{\pi{}d_1}{2r})^{-pinch} \times
d_1$, where $pinch$ is a parameter of the filter.
The actual value is given by bilinear interpolation considering the pixels
around the (non-integer) source position thus found.
Here $pinch \sim U[-complexity, 0.7 \times complexity]$.\\
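The pinch remapping, including the bilinear interpolation step, can be sketched like this (an illustration based on the formula above, not the GIMP source; centering and boundary conventions are our assumptions):

```python
import numpy as np

def pinch(image, pinch_amount):
    """Remap pixels inside the inscribed disk: the source for P lies on
    the ray C->P at distance d2 = sin(pi*d1/(2r))**(-pinch) * d1."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = min(cx, cy)
    out = image.copy()
    for y in range(h):
        for x in range(w):
            d1 = np.hypot(x - cx, y - cy)
            if d1 == 0 or d1 >= r:          # center and outside the disk unchanged
                continue
            d2 = np.sin(np.pi * d1 / (2 * r)) ** (-pinch_amount) * d1
            sx = cx + (x - cx) * d2 / d1
            sy = cy + (y - cy) * d2 / d1
            # Bilinear interpolation at the (non-integer) source position.
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if not (0 <= x0 < w - 1 and 0 <= y0 < h - 1):
                continue
            fx, fy = sx - x0, sy - y0
            out[y, x] = (image[y0, x0] * (1 - fx) * (1 - fy)
                         + image[y0, x0 + 1] * fx * (1 - fy)
                         + image[y0 + 1, x0] * (1 - fx) * fy
                         + image[y0 + 1, x0 + 1] * fx * fy)
    return out

img = np.random.RandomState(1).uniform(0, 1, (32, 32))
pinched = pinch(img, 0.5)   # pinch = 0 leaves the image unchanged
```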

{\large\bf Injecting Noise}\\
{\bf Motion Blur.}
This GIMP filter is a ``linear motion blur'' in GIMP
terminology, with two parameters, $length$ and $angle$. The value of
a pixel in the final image is approximately the mean value of the first $length$ pixels
found by moving in the $angle$ direction.
Here $angle \sim U[0,360]$ degrees, and $length \sim {\rm Normal}(0,(3 \times complexity)^2)$.\\
{\bf Occlusion.}
This filter selects a random rectangle from an {\em occluder} character
image and places it over the original {\em occluded} character
image. Pixels are combined by taking $\max(occluder, occluded)$,
i.e., the value closer to black. The rectangle corners
are sampled so that larger complexity gives larger rectangles.
The destination position in the occluded image is also sampled
according to a normal distribution (see more details in~\citet{ift6266-tr-anonymous}).
This filter has a probability of 60\% of not being applied at all.\\
{\bf Pixel Permutation.}
This filter permutes neighbouring pixels. It first selects a fraction
$\frac{complexity}{3}$ of the pixels randomly in the image. Each of them is then
sequentially exchanged with another pixel in its $V4$ neighbourhood. The numbers
of exchanges to the left, right, top, and bottom are equal, or differ
by at most 1 if the number of selected pixels is not a multiple of 4.
This filter has a probability of 80\% of not being applied at all.\\
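A simplified sketch of the permutation follows. Note one deliberate simplification: the text balances exchange counts across the four directions, while this sketch samples the direction uniformly per pixel; the selection scheme and boundary handling are also our assumptions.

```python
import numpy as np

def permute_pixels(image, complexity, rng):
    """Exchange each selected pixel with a random V4 (left/right/up/down)
    neighbour; swaps preserve the multiset of pixel values."""
    out = image.copy()
    h, w = image.shape
    n = int(complexity / 3.0 * image.size)   # fraction of pixels to move
    ys = rng.randint(0, h, n)
    xs = rng.randint(0, w, n)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for y, x in zip(ys, xs):
        dy, dx = offsets[rng.randint(4)]     # simplification: uniform direction
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            out[y, x], out[ny, nx] = out[ny, nx], out[y, x]
    return out

rng = np.random.RandomState(0)
img = np.arange(64, dtype=float).reshape(8, 8)
shuffled = permute_pixels(img, complexity=0.6, rng=rng)
```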
{\bf Gaussian Noise.}
This filter simply adds, to each pixel of the image independently, a
noise sample $\sim Normal(0,(\frac{complexity}{10})^2)$.
This filter has a probability of 70\% of not being applied at all.\\
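This filter is simple enough to sketch directly (the clipping to $[0,1]$ and the `skip_prob` parameterization are our assumptions):

```python
import numpy as np

def gaussian_noise(image, complexity, rng, skip_prob=0.7):
    """Add i.i.d. Normal(0, (complexity/10)^2) noise to every pixel;
    with probability skip_prob the filter is not applied at all."""
    if rng.uniform() < skip_prob:
        return image.copy()
    noisy = image + rng.normal(0.0, complexity / 10.0, image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep values in [0, 1] (assumption)

rng = np.random.RandomState(42)
img = np.full((32, 32), 0.5)
out = gaussian_noise(img, complexity=1.0, rng=rng, skip_prob=0.0)  # force it on
```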
{\bf Background Images.}
Following~\citet{Larochelle-jmlr-2009}, this transformation adds a random
background behind the letter. The background is chosen by first selecting,
at random, an image from a set of images. Then a 32$\times$32 subregion
of that image is chosen as the background image (by sampling the position
uniformly while making sure not to cross image borders).
To combine the original letter image and the background image, contrast
adjustments are made. We first get the maximal values (i.e. maximal
intensity) of both the original image and the background image, $maximage$
and $maxbg$. We also have a parameter $contrast \sim U[complexity, 1]$.
Each background pixel value is multiplied by $\frac{\max(maximage -
contrast, 0)}{maxbg}$ (higher contrast yields a darker
background). The output image pixels are $\max(background, original)$.\\
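The combination rule can be sketched as follows (an illustration of the formula above; the toy letter and background are fabricated inputs):

```python
import numpy as np

def add_background(letter, bg_patch, contrast):
    """Scale the background by max(maximage - contrast, 0)/maxbg,
    then take the pixel-wise max of background and letter."""
    maximage = letter.max()
    maxbg = bg_patch.max()
    scaled_bg = bg_patch * max(maximage - contrast, 0.0) / maxbg
    return np.maximum(scaled_bg, letter)

rng = np.random.RandomState(0)
letter = np.zeros((32, 32))
letter[8:24, 14:18] = 1.0                 # a fake vertical stroke
bg = rng.uniform(0, 1, (32, 32))
out_high = add_background(letter, bg, contrast=1.0)  # background vanishes
out_low = add_background(letter, bg, contrast=0.2)   # background shows through
```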
{\bf Salt and Pepper Noise.}
This filter adds noise $\sim U[0,1]$ to random subsets of pixels.
The fraction of selected pixels is $0.2 \times complexity$.
This filter has a probability of 75\% of not being applied at all.\\
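A sketch of this filter (sampling without replacement is our assumption):

```python
import numpy as np

def salt_and_pepper(image, complexity, rng):
    """Replace a fraction 0.2*complexity of the pixels with U[0,1] noise."""
    out = image.copy()
    n = int(0.2 * complexity * image.size)
    idx = rng.choice(image.size, size=n, replace=False)
    flat = out.ravel()             # view into out: writes land in the copy
    flat[idx] = rng.uniform(0, 1, n)
    return out

rng = np.random.RandomState(0)
img = np.full((32, 32), 0.5)
noisy = salt_and_pepper(img, complexity=0.5, rng=rng)  # 102 pixels replaced
```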
474
bcf024e6ab23
fits now, but still now graphics
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
472
diff
changeset
|
{\bf Spatially Gaussian Noise.}
Different regions of the image are spatially smoothed.
The image is convolved with a symmetric Gaussian kernel of
size and variance chosen uniformly in the ranges $[12,12 + 20 \times
complexity]$ and $[2,2 + 6 \times complexity]$. The result is normalized
between $0$ and $1$. We also create a symmetric averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will serve as
averaging centers between the original image and the filtered one. We
initialize a mask matrix of the image size to zero. For each selected pixel
we add to the mask the averaging window centered on it. The final image is
computed with the following element-wise operation: $\frac{image + filtered\,image \times mask}{mask+1}$.
This filter is not applied at all with probability 75\%.\\
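The final blending step above can be sketched as follows; a minimal sketch assuming an odd-sized averaging window, with hypothetical names, and taking the Gaussian-filtered image as given.

```python
import numpy as np

def blend_with_mask(image, filtered, centers, window):
    """Sketch of the blending step: accumulate an averaging window at each
    selected center into a mask, then mix the original and the smoothed
    image element-wise as (image + filtered * mask) / (mask + 1)."""
    mask = np.zeros_like(image)
    h, w = window.shape          # assumed odd-sized
    H, W = image.shape
    for (r, c) in centers:
        # Add the window centered on (r, c), clipped at the image borders.
        r0, c0 = r - h // 2, c - w // 2
        rs, cs = max(r0, 0), max(c0, 0)
        re, ce = min(r0 + h, H), min(c0 + w, W)
        mask[rs:re, cs:ce] += window[rs - r0:re - r0, cs - c0:ce - c0]
    return (image + filtered * mask) / (mask + 1.0)
```

Where the mask is zero the original pixel passes through unchanged; near a center, the output moves toward the smoothed image.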
{\bf Scratches.}
The scratches module places line-like white patches on the image. The
lines are heavily transformed images of the digit ``1'' (one), chosen
at random among five thousand such images. The 1 image is
randomly cropped and rotated by an angle $\sim Normal(0,(100 \times
complexity)^2)$, using bicubic interpolation.
Two passes of a greyscale morphological erosion filter
are applied, reducing the width of the line
by an amount controlled by $complexity$.
This filter is applied only 15\% of the time. When it is applied, 50\%
of the time only one patch image is generated and applied. In 30\% of
cases, two patches are generated, and otherwise three patches are
generated. The patch is applied by taking the maximal value of either the
patch or the original image, at each of the $32 \times 32$ pixel locations.\\
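The patch-application rule above amounts to a pixel-wise maximum; a one-line sketch (function name hypothetical):

```python
import numpy as np

def apply_scratch(image, patch):
    """Sketch of patch application: keep, at each pixel location, the
    maximal value of the scratch patch or the original image."""
    return np.maximum(image, patch)
```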
{\bf Color and Contrast Changes.}
This filter changes the contrast and may invert the image polarity (white
on black to black on white). The contrast $C$ is defined here as the
difference between the maximum and the minimum pixel value of the image.
The contrast is sampled as $C \sim U[1-0.85 \times complexity,1]$ (so $C \geq 0.15$).
The image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with probability $0.5$.
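The contrast and polarity steps above can be sketched as follows; a minimal illustration with hypothetical names, not the authors' code.

```python
import numpy as np

def change_contrast(image, complexity, rng=np.random):
    """Sketch of the contrast/polarity module: sample a target contrast C,
    map the image into [(1-C)/2, 1-(1-C)/2], and flip polarity half the
    time."""
    C = rng.uniform(1 - 0.85 * complexity, 1.0)      # so C >= 0.15
    lo, hi = (1 - C) / 2.0, 1 - (1 - C) / 2.0
    span = image.max() - image.min()
    normalized = (image - image.min()) / (span if span > 0 else 1.0)
    out = lo + normalized * (hi - lo)
    if rng.uniform() < 0.5:                          # invert polarity
        out = 1.0 - out
    return out
```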
\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case ``t''
(the upper left image). Each image in the pipeline (going from
left to right, first top line, then bottom line) shows the result
of applying one of the modules in the pipeline. The last image
(bottom right) is used as a training example.}
\label{fig:pipeline}
\end{figure}
\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/transfo.png}}\\
\caption{Illustration of each transformation applied alone to the same image
of an upper-case ``h'' (top left). First row (from left to right): original image, slant,
thickness, affine transformation, local elastic deformation; second row (from left to right):
pinch, motion blur, occlusion, pixel permutation, Gaussian noise; third row (from left to right):
background image, salt and pepper noise, spatially Gaussian noise, scratches,
color and contrast changes.}
\label{fig:transfo}
\end{figure}
\section{Experimental Setup}

Whereas much previous work on deep learning algorithms has been performed on
the MNIST digits classification task~\citep{Hinton06,ranzato-07,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\citep{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}, we
focus here on the case of much larger training sets, from 10 to
1000 times larger. The larger datasets are obtained by first sampling from
a {\em data source} (NIST characters, scanned machine-printed characters, characters
from fonts, or characters from captchas) and then optionally applying some of the
above transformations and/or noise processes.

\subsection{Data Sources}

\begin{itemize}
\item {\bf NIST.}
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002,Milgram+al-2005}.
The dataset is composed of 8????? digits and characters (upper and lower case), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''--``9'', ``A''--``Z'' and ``a''--``z''. The dataset contains 8 series of different complexity.
The fourth series, $hsf_4$, experimentally recognized to be the most difficult one, is recommended
by NIST as a testing set and is used in our work and some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002,Milgram+al-2005}
for that purpose. We randomly split the remainder into a training set and a validation set for
model selection. The sizes of these data sets are: 651668 for training, 80000 for validation,
and 82587 for testing.
The performance reported by previous work on that dataset mostly concerns only the digits.
Here we use all the classes both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
Note that the distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a uniform distribution
of letters in the test set but not in the training set (which is more like the natural distribution
of letters in text).
\item {\bf Fonts.}
In order to have a good variety of sources, we downloaded a large number of free fonts from {\tt http://anonymous.url.net}.
%real address {\tt http://cg.scs.carleton.ca/~luc/freefonts.html}
In addition to Windows 7's fonts, this adds up to a total of $9817$ different fonts that we can choose from uniformly.
The ttf file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.

\item {\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, many different fonts are used to generate the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source. Two levels of complexity are
allowed and can be controlled via an easy-to-use facade class.
\item {\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters (from various documents and books) was included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at the University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
\end{itemize}
\subsection{Data Sets}
All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
\begin{itemize}
\item {\bf NIST.} This is the raw NIST Special Database 19.
\item {\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the above transformation pipeline.
For each new example to generate, a source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a complexity in the range $[0,0.7]$.
\item {\bf NISTP.} NISTP is equivalent to P07 (complexity parameter of $0.7$ with the same source proportions)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
\end{itemize}
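The per-example sampling procedure for P07 described above can be sketched as follows; the source names and helper functions are hypothetical, only the probabilities and the complexity range come from the text.

```python
import random

# Source mixture used for P07: (name, selection probability).
SOURCES = [("fonts", 0.10), ("captcha", 0.25), ("ocr", 0.25), ("nist", 0.40)]

def pick_source(rng=random):
    """Pick a data source according to the stated mixture probabilities."""
    u, acc = rng.random(), 0.0
    for name, p in SOURCES:
        acc += p
        if u < acc:
            return name
    return SOURCES[-1][0]

def sample_complexity(rng=random):
    """Per-module complexity, drawn uniformly from [0, 0.7]."""
    return rng.uniform(0.0, 0.7)
```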
\subsection{Models and their Hyperparameters}

All hyper-parameters are selected based on performance on the NISTP validation set.

\subsubsection{Multi-Layer Perceptrons (MLP)}

Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used.
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P({\rm class}\,|\,{\rm image})$.
The hyper-parameters are the following: the number of hidden units, taken in
$\{300,500,800,1000,1500\}$. The optimization procedure is as follows. Training
examples are presented in minibatches of size 20. A constant learning
rate is chosen in $\{10^{-3}, 0.01, 0.025, 0.075, 0.1, 0.5\}$
through preliminary experiments, and 0.1 was selected.
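The architecture just described (one $\tanh$ hidden layer, softmax outputs, minibatch SGD with a constant learning rate) can be sketched in NumPy as follows; a minimal illustration, not the actual training code, with hypothetical function names.

```python
import numpy as np

def softmax(z):
    """Row-wise normalized exponentials."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mlp_predict(X, W1, b1, W2, b2):
    """Single hidden layer with tanh units, softmax output:
    estimates P(class | image) for each row of X."""
    H = np.tanh(X @ W1 + b1)
    return softmax(H @ W2 + b2)

def sgd_step(X, y, W1, b1, W2, b2, lr=0.1):
    """One minibatch step of plain SGD on the negative log-likelihood."""
    H = np.tanh(X @ W1 + b1)
    P = softmax(H @ W2 + b2)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0     # gradient of NLL w.r.t. logits
    G /= len(y)
    dH = (G @ W2.T) * (1 - H ** 2)     # backprop through tanh
    W2 -= lr * H.T @ G
    b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)
```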
\subsubsection{Stacked Denoising Auto-Encoders (SDAE)}
\label{SdA}

Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07,Bengio-nips-2006},
enabling better generalization, apparently by setting parameters in the
basin of attraction of supervised gradient descent solutions yielding better
generalization~\citep{Erhan+al-2010}. It is hypothesized that the
advantage brought by this procedure stems from a better prior:
on the one hand it takes advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (as in semi-supervised learning), and on the other hand it
takes advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

Here we chose to use the Denoising
Auto-Encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is very simple to train
(see the tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides immediate and efficient inference, and has yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008}. During training, a Denoising
Auto-Encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Once it is trained, its hidden unit activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (see the previous section).

The hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed) and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers, but it was fixed to 3 based on previous work with
stacked denoising auto-encoders on MNIST~\citep{VincentPLarochelleH2008}.
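The masking-noise corruption and the reconstruction pass of a denoising auto-encoder can be sketched as follows; a minimal sketch assuming sigmoid units and tied weights (a common but here assumed choice), with hypothetical names.

```python
import numpy as np

def mask_corrupt(x, fraction, rng):
    """Masking noise: zero a randomly selected fixed fraction of the
    input values (fraction chosen among {0.1, 0.2, 0.5})."""
    return x * (rng.uniform(size=x.shape) >= fraction)

def dae_reconstruct(x_corrupted, W, b_hidden, b_visible):
    """One denoising auto-encoder pass with tied weights: the hidden code
    is computed from the corrupted input; during training, the
    reconstruction is compared against the *uncorrupted* input."""
    h = 1.0 / (1.0 + np.exp(-(x_corrupted @ W + b_hidden)))   # sigmoid code
    return 1.0 / (1.0 + np.exp(-(h @ W.T + b_visible)))       # reconstruction
```

After training one such layer, `h` computed on clean inputs becomes the input for training the next layer, and so on for the 3 layers used here.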
\section{Experimental Results}

\subsection{SDA vs MLP vs Humans}

We compare here the best MLP (according to validation set error) that we found against
the best SDA (again according to validation set error), along with a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service\footnote{http://mturk.com}. AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language
processing \citep{SnowEtAl2008} and vision
\citep{SorokinAndForsyth2008,whitehill09}. AMT users were presented
with 10 character images and asked to type the 10 corresponding ASCII
characters. Hence they were forced to make a hard choice among the
62 character classes. Three users classified each image, allowing us
to estimate inter-human variability (shown as $\pm$ in parentheses below).
\begin{table}
\caption{Overall comparison of error rates ($\pm$ std.err.) on 62 character classes (10 digits +
26 lower + 26 upper), except for the last column -- digits only, between deep architectures with pre-training
(SDA=Stacked Denoising Autoencoder) and ordinary shallow architectures
(MLP=Multi-Layer Perceptron). The models shown were trained on different datasets (NIST, NISTP or P07),
using a validation set to select hyper-parameters and other training choices.
\{SDA,MLP\}0 are trained on NIST,
\{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
The human error rate on digits is a lower bound because it does not count digits that were
recognized as letters. For comparison, the results found in the literature
on NIST digits classification using the same test set are included.}
\label{tab:sda-vs-mlp-vs-humans}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
 & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
Humans & 18.2\% $\pm$.1\% & 39.4\%$\pm$.1\% & 46.9\%$\pm$.1\% & $1.4\%$ \\ \hline
SDA0 & 23.7\% $\pm$.14\% & 65.2\%$\pm$.34\% & 97.45\%$\pm$.06\% & 2.7\% $\pm$.14\%\\ \hline
SDA1 & 17.1\% $\pm$.13\% & 29.7\%$\pm$.3\% & 29.7\%$\pm$.3\% & 1.4\% $\pm$.1\%\\ \hline
SDA2 & 18.7\% $\pm$.13\% & 33.6\%$\pm$.3\% & 39.9\%$\pm$.17\% & 1.7\% $\pm$.1\%\\ \hline
MLP0 & 24.2\% $\pm$.15\% & 68.8\%$\pm$.33\% & 78.70\%$\pm$.14\% & 3.45\% $\pm$.15\% \\ \hline
MLP1 & 23.0\% $\pm$.15\% & 41.8\%$\pm$.35\% & 90.4\%$\pm$.1\% & 3.85\% $\pm$.16\% \\ \hline
MLP2 & 24.3\% $\pm$.15\% & 46.0\%$\pm$.35\% & 54.7\%$\pm$.17\% & 4.85\% $\pm$.18\% \\ \hline
\citep{Granger+al-2007} & & & & 4.95\% $\pm$.18\% \\ \hline
\citep{Cortes+al-2000} & & & & 3.71\% $\pm$.16\% \\ \hline
\citep{Oliveira+al-2002} & & & & 2.4\% $\pm$.13\% \\ \hline
\citep{Milgram+al-2005} & & & & 2.1\% $\pm$.12\% \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}\\
\caption{Charts corresponding to Table \ref{tab:sda-vs-mlp-vs-humans}. Left: overall results; error bars indicate a 95\% confidence interval. Right: error rates on NIST test digits only, with results from the literature.}
\label{fig:error-rates-charts}
\end{figure}

477 \subsection{Perturbed Training Data More Helpful for SDAE} |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
478 |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
\begin{table}
\caption{Relative change in error rates due to the use of perturbed training data,
using NISTP for the MLP1/SDA1 models and P07 for the MLP2/SDA2 models.
A positive value indicates that training on the perturbed data helped on the
given test set (the first three columns correspond to the 62-class task and the
last to the clean 10-class digits). Clearly, the deep learning models benefited more
from perturbed training data, even when tested on clean data, whereas the MLP
trained on perturbed data performed worse on the clean digits and about the same
on the clean characters.}
\label{tab:perturbation-effect}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
            & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
SDA0/SDA1-1 & 38\%      & 84\%       & 228\%    & 93\%  \\ \hline
SDA0/SDA2-1 & 27\%      & 94\%       & 144\%    & 59\%  \\ \hline
MLP0/MLP1-1 & 5.2\%     & 65\%       & -13\%    & -10\% \\ \hline
MLP0/MLP2-1 & -0.4\%    & 49\%       & 44\%     & -29\% \\ \hline
\end{tabular}
\end{center}
\end{table}
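Reading the row labels of Table~\ref{tab:perturbation-effect} literally as formulas (our interpretation of the label notation), each entry reports, with $e_M$ denoting the test error of model $M$ on the given test set,
\[
  \frac{e_{\mathrm{SDA0}}}{e_{\mathrm{SDA1}}} - 1 ,
\]
for the row SDA0/SDA1-1, and analogously for the other rows; this quantity is positive exactly when the model trained on perturbed data (here SDA1) achieves the lower error, consistent with the caption.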
\subsection{Multi-Task Learning Effects}

As previously seen, the SDA is better able than the MLP to benefit from the
transformations applied to the data. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper-case characters (knowing that the input is one), and
recognizing lower-case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out whether deep learning can benefit more (or less) from multiple related tasks
(i.e., the multi-task setting) than a corresponding purely supervised
shallow learner.

We use a single-hidden-layer MLP with 1000 hidden units, and an SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower- and upper-case characters).
On the other hand, the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower- and upper-case characters, as shown in Table~\ref{tab:multi-task}.

\begin{table}
\caption{Test error rates and relative change in error rates due to the use of
a multi-task setting, i.e., training on each task in isolation vs. training
on all three tasks together, for MLPs vs. SDAs. The SDA benefits much
more from the multi-task setting. All experiments are on the
unperturbed NIST data, using validation error for model selection.
Relative improvement is 1 - single-task error / multi-task error.}
\label{tab:multi-task}
\begin{center}
\begin{tabular}{|l|r|r|r|} \hline
           & single-task & multi-task & relative \\
           & setting     & setting    & improvement \\ \hline
MLP-digits & 3.77\%      & 3.99\%     & 5.6\%  \\ \hline
MLP-lower  & 17.4\%      & 16.8\%     & -4.1\% \\ \hline
MLP-upper  & 7.84\%      & 7.54\%     & -3.6\% \\ \hline
SDA-digits & 2.6\%       & 3.56\%     & 27\%   \\ \hline
SDA-lower  & 12.3\%      & 14.4\%     & 15\%   \\ \hline
SDA-upper  & 5.93\%      & 6.78\%     & 13\%   \\ \hline
\end{tabular}
\end{center}
\end{table}
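As a sanity check of the formula in the caption of Table~\ref{tab:multi-task}
against the table's own numbers, the SDA-digits row gives
\[
  1 - \frac{2.6\%}{3.56\%} \approx 1 - 0.73 = 0.27 ,
\]
i.e., the reported 27\% relative improvement.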

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}\\
\caption{Charts corresponding to Tables \ref{tab:perturbation-effect} (left) and \ref{tab:multi-task} (right).}
\label{fig:improvements-charts}
\end{figure}

\section{Conclusions}

The conclusions are positive for all the questions asked in the introduction.
\begin{itemize}
\item Do the good results previously obtained with deep architectures on the
MNIST digits generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?
Yes, the SDA systematically outperformed the MLP, in fact reaching human-level
performance.
\item To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifier better not only on similarly perturbed images but also on
the {\em original clean examples}? Do deep architectures benefit more from such {\em out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
MLPs were helped by perturbed training examples when tested on perturbed input images,
but were only marginally helped with respect to clean examples. On the other hand, the deep SDAs
were very significantly boosted by these out-of-distribution examples.
\item Similarly, does the feature learning step in deep learning algorithms benefit more
from training with similar but different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
Whereas the improvement due to the multi-task setting was marginal or
negative for the MLP, it was very significant for the SDA.
\end{itemize}

\bibliography{strings,ml,aigaion,specials}
%\bibliographystyle{plainnat}
\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}

\end{document}