\documentclass{article} % For LaTeX2e
\usepackage{nips10submit_e,times}

\usepackage{amsthm,amsmath,amssymb,bbold,bbm}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\title{Generating and Exploiting Perturbed and Multi-Task Handwritten Training Data for Deep Architectures}
\author{The IFT6266 Gang}

\begin{document}

%\makeanontitle
\maketitle

\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep
architectures, i.e., function classes obtained by composing multiple
non-linear transformations. In the area of handwriting recognition,
deep learning algorithms have so far been evaluated on rather small datasets
with a few tens of thousands of examples. Here we propose a powerful generator
of variations of examples for character images, based on a pipeline of stochastic
transformations that include not only the usual affine transformations
but also the addition of slant, local elastic deformations, changes
in thickness, background images, color, contrast, occlusion, and
various types of pixel and spatially correlated noise.
We evaluate a deep learning algorithm (Stacked Denoising Autoencoders)
on the task of learning to classify digits and letters transformed
with this pipeline, using hundreds of millions of generated examples
and testing on the full 62-class NIST test set.
We find that the SDA outperforms its
shallow counterpart, an ordinary Multi-Layer Perceptron,
and that it is better able to take advantage of the additional
generated data, as well as of the multi-task setting, i.e.,
training on more classes than those of interest in the end.
In fact, we find that the SDA reaches human performance as
estimated by the Amazon Mechanical Turk on the 62-class NIST test characters.
\end{abstract}

\section{Introduction}

Deep Learning has emerged as a promising new area of research in
statistical machine learning (see~\citet{Bengio-2009} for a review).
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand.
This is in great part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex)~\citep{HonglakL2008}.
Processing images typically involves transforming the raw pixel data into
new {\bf representations} that can be used for analysis or classification.
For example, a principal component analysis representation linearly projects
the input image into a lower-dimensional feature space.
Why learn a representation? Current practice in the computer vision
literature converts the raw pixels into a hand-crafted representation,
e.g.\ SIFT features~\citep{Lowe04}, but deep learning algorithms
tend to discover similar features in their first few
levels~\citep{HonglakL2008,ranzato-08,Koray-08,VincentPLarochelleH2008-very-small}.
Learning increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}. One of these layer initialization techniques,
applied here, is the Denoising
Auto-Encoder~(DAE)~\citep{VincentPLarochelleH2008-very-small}, which
performed similarly to or better than previously proposed Restricted Boltzmann
Machines in terms of unsupervised extraction of a hierarchy of features
useful for classification. The principle is that each layer, starting from
the bottom, is trained to encode its input (the output of the previous
layer) and to reconstruct it from a corrupted version of it. After this
unsupervised initialization, the stack of denoising auto-encoders can be
converted into a deep supervised feedforward neural network and trained by
stochastic gradient descent.

In this paper we ask the following questions:
\begin{enumerate}
\item Do the good results previously obtained with deep architectures on the
MNIST digits generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?
\item To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifier better not only on similarly perturbed images but also on
the {\em original clean examples}?
\item Do deep architectures benefit more from such {\em out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
\item Similarly, does the feature learning step in deep learning algorithms benefit more
from training with similar but different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
\end{enumerate}
The experimental results presented here provide positive evidence towards all of these questions.

\section{Perturbation and Transformation of Character Images}

This section describes the different transformations we used to stochastically
transform source images in order to obtain data. More details can
be found in this technical report~\citep{ift6266-tr-anonymous}.
The code for these transformations (mostly Python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.

There are two main parts in the pipeline. The first one,
from slant to pinch below, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
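
For concreteness, the overall control flow can be sketched as follows. This is a minimal Python sketch with hypothetical names (the module list and probabilities passed in are illustrative; the repository above is authoritative). Each module maps an image and the global $complexity$ parameter to a new image, and several modules are skipped with a fixed probability, as detailed below:

{\small
\begin{verbatim}
import numpy as np

def transform(image, modules, complexity, rng=np.random):
    # `modules` is a list of (function, probability) pairs in
    # pipeline order; each function maps (image, complexity) to a
    # new 32x32 image with values in [0,1].  A module whose
    # probability is below 1 is sometimes skipped (e.g. occlusion
    # is skipped 60% of the time, pixel permutation 80%).
    for module, prob in modules:
        if rng.uniform() < prob:
            image = module(image, complexity)
    return image

# usage with a trivial stand-in module:
identity = lambda im, c: im
out = transform(np.zeros((32, 32)), [(identity, 0.25)], 0.5)
\end{verbatim}
}
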
{\large\bf Transformations}\\
{\bf Slant.}
We mimic slant by shifting each row of the image
proportionally to its height: $shift = round(slant \times height)$.
The $slant$ coefficient can be negative or positive with equal probability
and its value is randomly sampled according to the complexity level:
$slant \sim U[0,complexity]$, so the
maximum displacement for the lowest or highest pixel line is
$round(complexity \times 32)$.\\
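
As an illustration, a minimal Python sketch of this module (the boundary handling in the actual code may differ):

{\small
\begin{verbatim}
import numpy as np

def slant(image, complexity, rng=np.random):
    # slant ~ U[0, complexity], sign flipped with probability 1/2
    s = rng.uniform(0, complexity) * rng.choice([-1, 1])
    out = np.zeros_like(image)
    for y in range(image.shape[0]):
        shift = int(round(s * y))     # shift = round(slant * height)
        out[y] = np.roll(image[y], shift)
        if shift > 0:                 # clear pixels wrapped by roll
            out[y, :shift] = 0
        elif shift < 0:
            out[y, shift:] = 0
    return out
\end{verbatim}
}
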
{\bf Thickness.}
Morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
are applied. The neighborhood of each pixel is multiplied
element-wise with a {\em structuring element} matrix.
The pixel value is replaced by the maximum or the minimum of the resulting
matrix, respectively for dilation or erosion. Ten different structuring elements with
increasing dimensions (largest is $5\times5$) were used. For each image, we
randomly sample the operator type (dilation or erosion) with equal probability and one structuring
element from a subset of the $n$ smallest structuring elements, where $n$ is
$round(10 \times complexity)$ for dilation and $round(6 \times complexity)$
for erosion. A neutral element is always present in the set, and if it is
chosen no transformation is applied. Erosion allows only the six
smallest structuring elements because when the character is too thin it may
be completely erased.\\
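
A sketch of this module using {\tt scipy.ndimage} grey-scale morphology (the square structuring elements below are illustrative stand-ins for the actual set of ten):

{\small
\begin{verbatim}
import numpy as np
from scipy import ndimage

# stand-in structuring elements: index 0 is the neutral element
SIZES = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]   # largest is 5x5

def thickness(image, complexity, rng=np.random):
    dilate = rng.uniform() < 0.5          # operator type
    n = int(round((10 if dilate else 6) * complexity))
    if n == 0:
        return image
    size = SIZES[rng.randint(0, n)]       # one of the n smallest
    if size == 1:                         # neutral element chosen
        return image
    op = ndimage.grey_dilation if dilate else ndimage.grey_erosion
    return op(image, size=(size, size))
\end{verbatim}
}
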
{\bf Affine Transformations.}
A $2 \times 3$ affine transform matrix (with
6 parameters $(a,b,c,d,e,f)$) is sampled according to the $complexity$ level.
Each pixel $(x,y)$ of the output image takes the value of the pixel
nearest to $(ax+by+c,dx+ey+f)$ in the input image. This
produces scaling, translation, rotation and shearing.
The marginal distributions of $(a,b,c,d,e,f)$ have been tuned by hand to
forbid large rotations (which would confuse classes) while still giving good
variability of the transformation: $a$ and $d$ $\sim U[1-3 \times
complexity,1+3 \times complexity]$, $b$ and $e$ $\sim U[-3 \times complexity,3
\times complexity]$ and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times
complexity]$.\\
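
A direct (unoptimized) Python sketch of the sampling and the nearest-neighbour lookup:

{\small
\begin{verbatim}
import numpy as np

def affine(image, complexity, rng=np.random):
    c3, c4 = 3 * complexity, 4 * complexity
    a, d = rng.uniform(1 - c3, 1 + c3, 2)   # scaling terms
    b, e = rng.uniform(-c3, c3, 2)          # shearing terms
    c, f = rng.uniform(-c4, c4, 2)          # translation terms
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx = int(round(a * x + b * y + c))   # source column
            sy = int(round(d * x + e * y + f))   # source row
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out
\end{verbatim}
}
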
{\bf Local Elastic Deformations.}
This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03},
which provides more details.
Two displacement fields are generated and applied, for horizontal
and vertical displacements of pixels.
To generate a pixel in either field, first a value between -1 and 1 is
chosen from a uniform distribution. Then all the pixels, in both fields, are
multiplied by a constant $\alpha$ which controls the intensity of the
displacements (larger $\alpha$ translates into larger wiggles).
Each field is convolved with a 2D Gaussian kernel of
standard deviation $\sigma$. Visually, this results in a blur.
We use $\alpha = \sqrt[3]{complexity} \times 10.0$ and $\sigma = 10 - 7 \times
\sqrt[3]{complexity}$.\\
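
A sketch with {\tt scipy.ndimage} (nearest-neighbour resampling here; the actual module may interpolate differently):

{\small
\begin{verbatim}
import numpy as np
from scipy import ndimage

def local_elastic(image, complexity, rng=np.random):
    alpha = complexity ** (1 / 3.0) * 10.0
    sigma = 10 - 7 * complexity ** (1 / 3.0)
    h, w = image.shape
    # smoothed random displacement fields, scaled by alpha
    dx = alpha * ndimage.gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma)
    dy = alpha * ndimage.gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma)
    ys, xs = np.indices((h, w))
    sy = np.clip(np.round(ys + dy), 0, h - 1).astype(int)
    sx = np.clip(np.round(xs + dx), 0, w - 1).astype(int)
    return image[sy, sx]
\end{verbatim}
}
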
{\bf Pinch.}
This is the GIMP filter named ``Whirl and
pinch'', with whirl set to 0. A pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface''~\citep{GIMP-manual}.
For a square input image, think of drawing a circle of
radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
that disk (region inside circle) will have its value recalculated by taking
the value of another ``source'' pixel in the original image. The position of
that source pixel is found on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1$ to be the distance between $P$
and $C$. Then $d_2$ is given by $d_2 = \sin(\frac{\pi{}d_1}{2r})^{-pinch} \times
d_1$, where $pinch$ is a parameter to the filter.
The actual value is given by bilinear interpolation considering the pixels
around the (non-integer) source position thus found.
Here $pinch \sim U[-complexity, 0.7 \times complexity]$.\\
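
A sketch of the radial remapping (nearest-neighbour rather than bilinear interpolation, for brevity):

{\small
\begin{verbatim}
import numpy as np

def pinch(image, complexity, rng=np.random):
    p = rng.uniform(-complexity, 0.7 * complexity)
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0   # center C
    r = min(cy, cx)                         # disk radius
    out = image.copy()
    for y in range(h):
        for x in range(w):
            d1 = np.hypot(y - cy, x - cx)   # distance between P and C
            if 0 < d1 < r:
                d2 = np.sin(np.pi * d1 / (2 * r)) ** (-p) * d1
                sy = int(round(cy + (y - cy) * d2 / d1))
                sx = int(round(cx + (x - cx) * d2 / d1))
                if 0 <= sy < h and 0 <= sx < w:
                    out[y, x] = image[sy, sx]
    return out
\end{verbatim}
}
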

{\large\bf Injecting Noise}\\
{\bf Motion Blur.}
This GIMP filter is a ``linear motion blur'' in GIMP
terminology, with two parameters, $length$ and $angle$. The value of
a pixel in the final image is approximately the mean value of the first $length$ pixels
found by moving in the $angle$ direction.
Here $angle \sim U[0,360]$ degrees, and $length \sim {\rm Normal}(0,(3 \times complexity)^2)$.\\
{\bf Occlusion.}
This filter selects a random rectangle from an {\em occluder} character
image and places it over the original {\em occluded} character
image. Pixels are combined by taking $\max(occluder, occluded)$,
i.e., the value closer to black. The rectangle corners
are sampled so that larger complexity gives larger rectangles.
The destination position in the occluded image is also sampled
according to a normal distribution (see more details in~\citet{ift6266-tr-anonymous}).
This filter has a 60\% probability of not being applied at all.\\
{\bf Pixel Permutation.}
This filter permutes neighbouring pixels. It first selects a fraction
$\frac{complexity}{3}$ of the pixels randomly in the image. Each of them is then
sequentially exchanged with one other pixel in its $V4$ neighbourhood. The numbers
of exchanges to the left, right, top, and bottom are equal, or differ
by at most 1 when the number of selected pixels is not a multiple of 4.
This filter has an 80\% probability of not being applied at all.\\
{\bf Gaussian Noise.}
This filter simply adds, to each pixel of the image independently, a
noise $\sim Normal(0,(\frac{complexity}{10})^2)$.
This filter has a 70\% probability of not being applied at all.\\
{\bf Background Images.}
Following~\citet{Larochelle-jmlr-2009}, this transformation adds a random
background behind the letter. The background is chosen by first selecting,
at random, an image from a set of images. Then a 32$\times$32 subregion
of that image is chosen as the background image (by sampling position
uniformly while making sure not to cross image borders).
To combine the original letter image and the background image, contrast
adjustments are made. We first get the maximal values (i.e. maximal
intensity) for both the original image and the background image, $maximage$
and $maxbg$. We also have a parameter $contrast \sim U[complexity, 1]$.
Each background pixel value is multiplied by $\frac{\max(maximage -
contrast, 0)}{maxbg}$ (higher contrast yields a darker
background). The output image takes the element-wise maximum of the
scaled background and the original image.\\
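
A sketch of the combination rule (the crop selection is omitted; {\tt bg} stands for the chosen 32$\times$32 subregion):

{\small
\begin{verbatim}
import numpy as np

def add_background(image, bg, complexity, rng=np.random):
    contrast = rng.uniform(complexity, 1.0)
    maximage, maxbg = image.max(), bg.max()
    scale = max(maximage - contrast, 0) / maxbg if maxbg > 0 else 0.0
    return np.maximum(bg * scale, image)   # element-wise max
\end{verbatim}
}
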
{\bf Salt and Pepper Noise.}
This filter adds noise $\sim U[0,1]$ to random subsets of pixels.
The fraction of selected pixels is $0.2 \times complexity$.
This filter has a 75\% probability of not being applied at all.\\
{\bf Spatially Gaussian Noise.}
Different regions of the image are spatially smoothed.
The image is convolved with a symmetric Gaussian kernel of
size and variance chosen uniformly in the ranges $[12,12 + 20 \times
complexity]$ and $[2,2 + 6 \times complexity]$. The result is normalized
between $0$ and $1$. We also create a symmetric averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will be
averaging centers between the original image and the filtered one. We
initialize to zero a mask matrix of the image size. For each selected pixel
we add to the mask the averaging window centered on it. The final image is
computed from the following element-wise operation: $\frac{image + filtered
image \times mask}{mask+1}$.
This filter has a 75\% probability of not being applied at all.\\
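
A sketch of this blending scheme (window placement at the image borders is handled crudely here, and the triangular window is an illustrative stand-in):

{\small
\begin{verbatim}
import numpy as np
from scipy import ndimage

def spatial_gaussian(image, complexity, rng=np.random):
    size = rng.randint(12, 13 + int(20 * complexity))
    var = rng.uniform(2, 2 + 6 * complexity)
    blurred = ndimage.gaussian_filter(image, sigma=np.sqrt(var))
    blurred = (blurred - blurred.min()) / (np.ptp(blurred) + 1e-8)
    tri = 1.0 - np.abs(np.linspace(-1, 1, size))
    window = np.outer(tri, tri)              # maximum at the center
    h, w = image.shape
    mask = np.zeros_like(image)
    for _ in range(rng.randint(3, 4 + int(10 * complexity))):
        cy, cx = rng.randint(0, h), rng.randint(0, w)
        y0, x0 = max(cy - size // 2, 0), max(cx - size // 2, 0)
        y1, x1 = min(y0 + size, h), min(x0 + size, w)
        mask[y0:y1, x0:x1] += window[:y1 - y0, :x1 - x0]
    return (image + blurred * mask) / (mask + 1)
\end{verbatim}
}
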
{\bf Scratches.}
The scratches module places line-like white patches on the image. The
lines are heavily transformed images of the digit ``1'' (one), chosen
at random among five thousand such images. The 1 image is
randomly cropped and rotated by an angle $\sim Normal(0,(100 \times
complexity)^2)$, using bicubic interpolation.
Two passes of a greyscale morphological erosion filter
are applied, reducing the width of the line
by an amount controlled by $complexity$.
This filter is applied only 15\% of the time. When it is applied, 50\%
of the time, only one patch image is generated and applied. In 30\% of
cases, two patches are generated, and otherwise three patches are
generated. The patch is applied by taking the maximal value of either the
patch or the original image, at each of the 32$\times$32 pixel locations.\\
{\bf Color and Contrast Changes.}
This filter changes the contrast and may invert the image polarity (white
on black to black on white). The contrast $C$ is defined here as the
difference between the maximum and the minimum pixel value of the image.
Contrast $\sim U[1-0.85 \times complexity,1]$ (so contrast $\geq 0.15$).
The image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with $0.5$ probability.
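
A sketch, assuming pixel values already lie in $[0,1]$:

{\small
\begin{verbatim}
import numpy as np

def contrast_change(image, complexity, rng=np.random):
    C = rng.uniform(1 - 0.85 * complexity, 1.0)  # contrast >= 0.15
    out = image * C + (1 - C) / 2.0  # map [0,1] to [(1-C)/2, 1-(1-C)/2]
    if rng.uniform() < 0.5:
        out = 1.0 - out              # invert polarity
    return out
\end{verbatim}
}
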

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case t
(the upper left image). Each image in the pipeline (going from
left to right, first top line, then bottom line) shows the result
of applying one of the modules in the pipeline. The last image
(bottom right) is used as a training example.}
\label{fig:pipeline}
\end{figure}

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/transfo.png}}\\
\caption{Illustration of each transformation applied to the same image
of an upper-case h (upper-left image). First row (from left to right): original image, slant,
thickness, affine transformation, local elastic deformation; second row (from left to right):
pinch, motion blur, occlusion, pixel permutation, Gaussian noise; third row (from left to right):
background image, salt and pepper noise, spatially Gaussian noise, scratches,
color and contrast changes.}
\label{fig:transfo}
\end{figure}

\section{Experimental Setup}

Whereas much previous work on deep learning algorithms had been performed on
the MNIST digits classification task~\citep{Hinton06,ranzato-07,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\cite{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}, we want
to focus here on the case of much larger training sets, from 10
to 1000 times larger. The larger datasets are obtained by first sampling from
a {\em data source} (NIST characters, scanned machine printed characters, characters
from fonts, or characters from captchas) and then optionally applying some of the
above transformations and/or noise processes.

\subsection{Data Sources}

\begin{itemize}
\item {\bf NIST}
Our main source of characters is the NIST Special Database 19~\cite{Grother-1995},
widely used for training and testing character
recognition systems~\cite{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002,Milgram+al-2005}.
The dataset is composed of 8????? digits and characters (upper and lower cases), with hand checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 series of different complexity.
The fourth series, $hsf_4$, experimentally recognized to be the most difficult one, is recommended
by NIST as a testing set and is used in our work and some previous work~\cite{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002,Milgram+al-2005}
for that purpose. We randomly split the remainder into a training set and a validation set for
model selection. The sizes of these data sets are XXX for training, XXX for validation,
and XXX for testing.
The performances reported by previous work on that dataset mostly use only the digits.
Here we use all the classes, both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
Note that the distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a uniform distribution
of letters in the test set but not in the training set (which is closer to the natural distribution
of letters in text).

\item {\bf Fonts}
In order to have a good variety of sources we downloaded a large number of free fonts from {\tt http://anonymous.url.net}
%real address: {\tt http://cg.scs.carleton.ca/~luc/freefonts.html}
in addition to Windows 7's fonts; this adds up to a total of $9817$ different fonts that we can choose from uniformly.
Each {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.

\item {\bf Captchas}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator library) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, many different fonts are used for generating the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source. Two levels of complexity are
allowed and can be controlled via an easy-to-use facade class.
\item {\bf OCR data}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters (from various documents and books) were included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
and Pattern Recognition Research group led by Thomas Breuel at the University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
\end{itemize}

\subsection{Data Sets}
All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
\begin{itemize}
\item {\bf NIST}. This is the raw NIST special database 19.
\item {\bf P07}. This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the above transformation pipeline.
For each new example to generate, a source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a complexity in the range $[0,0.7]$.
\item {\bf NISTP}. NISTP is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of sources)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
\end{itemize}

\subsection{Models and their Hyperparameters}

All hyper-parameters are selected based on performance on the NISTP validation set.

\subsubsection{Multi-Layer Perceptrons (MLP)}

Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used.
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P(\mathrm{class} \mid \mathrm{image})$.
The hyper-parameters are the following: the number of hidden units, taken in
$\{300,500,800,1000,1500\}$. The optimization procedure is as follows. Training
examples are presented in minibatches of size 20. A constant learning
rate is chosen in $\{10^{-3}, 0.01, 0.025, 0.075, 0.1, 0.5\}$
through preliminary experiments, and 0.1 was selected.
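
For reference, one stochastic gradient step of this model can be sketched as follows (a minimal NumPy sketch, not the actual training code; {\tt x} is a minibatch of 20 flattened images, {\tt y} the corresponding class indices, and parameters are updated in place):

{\small
\begin{verbatim}
import numpy as np

def sgd_step(W1, b1, W2, b2, x, y, lr=0.1):
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)         # P(class | image)
    p[np.arange(len(y)), y] -= 1              # gradient of NLL wrt logits
    p /= len(y)
    dW2, db2 = h.T @ p, p.sum(axis=0)
    dh = (p @ W2.T) * (1 - h ** 2)            # backprop through tanh
    dW1, db1 = x.T @ dh, dh.sum(axis=0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= lr * G                           # constant learning rate
\end{verbatim}
}
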

\subsubsection{Stacked Denoising Auto-Encoders (SDAE)}
\label{SdA}

Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07,Bengio-nips-2006},
enabling better generalization, apparently by setting parameters in the
basin of attraction of a supervised gradient descent solution with better
generalization~\citep{Erhan+al-2010}. It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (as in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

Here we chose to use the Denoising
Auto-Encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is very simple to train and
explain (see the tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides immediate and efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008}. During training, a Denoising
Auto-Encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Once it is trained, its hidden unit activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard supervised procedure described in the previous section.
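
One pre-training update of a single layer can be sketched as follows (a minimal NumPy sketch assuming tied weights, sigmoid units, masking noise and a cross-entropy reconstruction loss; the variable names are ours, not those of the actual code):

{\small
\begin{verbatim}
import numpy as np

def dae_step(W, bh, bv, x, corrupt=0.2, lr=0.01, rng=np.random):
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    xt = x * (rng.uniform(size=x.shape) > corrupt)  # masking noise
    h = sig(xt @ W + bh)                            # encode
    z = sig(h @ W.T + bv)                           # decode (tied W)
    dz = (z - x) / len(x)      # cross-entropy gradient wrt pre-activation
    dh = (dz @ W) * h * (1 - h)
    W -= lr * (xt.T @ dh + dz.T @ h)                # both uses of W
    bv -= lr * dz.sum(axis=0)
    bh -= lr * dh.sum(axis=0)
\end{verbatim}
}
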

The hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. The number of hidden layers is another
hyper-parameter, but it was fixed to 3 based on previous work with
stacked denoising auto-encoders on MNIST~\citep{VincentPLarochelleH2008}.
428 |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
429 \section{Experimental Results} |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
430 |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
431 \subsection{SDA vs MLP vs Humans} |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
432 |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
We compare here the best MLP (according to validation set error) that we found against
the best SDA (again according to validation set error), along with a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service\footnote{http://mturk.com}. AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language
processing \citep{SnowEtAl2008} and vision
\citep{SorokinAndForsyth2008,whitehill09}. AMT users were presented
with 10 character images and asked to type the 10 corresponding ASCII
characters. Hence they were forced to make a hard choice among the
62 character classes. Three users classified each image, allowing
us to estimate inter-human variability (shown as $\pm$ below).
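For reference, under a binomial model an error rate $\hat{p}$ estimated from
$n$ independent test examples has standard error $\sqrt{\hat{p}(1-\hat{p})/n}$,
which is the natural reading of the $\pm$ values reported in the tables below.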
\begin{table}
\caption{Overall comparison of error rates ($\pm$ std.err.) on 62 character classes (10 digits +
26 lower + 26 upper), except for the last column -- digits only, comparing a deep architecture with pre-training
(SDA=Stacked Denoising Autoencoder) against an ordinary shallow architecture
(MLP=Multi-Layer Perceptron). Each model was trained on one of the datasets,
using a validation set to select hyper-parameters and other training choices:
\{SDA,MLP\}0 are trained on NIST,
\{SDA,MLP\}1 are trained on NISTP, and \{SDA,MLP\}2 are trained on P07.
The human error rate on digits is a lower bound because it does not count digits that were
recognized as letters. For comparison, results found in the literature
on NIST digit classification using the same test set are included.}
\label{tab:sda-vs-mlp-vs-humans}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
       & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
Humans & 18.2\% $\pm$.1\% & 39.4\%$\pm$.1\% & 46.9\%$\pm$.1\% & $1.4\%$ \\ \hline
SDA0 & 23.7\% $\pm$.14\% & 65.2\%$\pm$.34\% & 97.45\%$\pm$.06\% & 2.7\% $\pm$.14\%\\ \hline
SDA1 & 17.1\% $\pm$.13\% & 29.7\%$\pm$.3\% & 29.7\%$\pm$.3\% & 1.4\% $\pm$.1\%\\ \hline
SDA2 & 18.7\% $\pm$.13\% & 33.6\%$\pm$.3\% & 39.9\%$\pm$.17\% & 1.7\% $\pm$.1\%\\ \hline
MLP0 & 24.2\% $\pm$.15\% & 68.8\%$\pm$.33\% & 78.70\%$\pm$.14\% & 3.45\% $\pm$.15\% \\ \hline
MLP1 & 23.0\% $\pm$.15\% & 41.8\%$\pm$.35\% & 90.4\%$\pm$.1\% & 3.85\% $\pm$.16\% \\ \hline
MLP2 & 24.3\% $\pm$.15\% & 46.0\%$\pm$.35\% & 54.7\%$\pm$.17\% & 4.85\% $\pm$.18\% \\ \hline
\citep{Granger+al-2007} & & & & 4.95\% $\pm$.18\% \\ \hline
\citep{Cortes+al-2000} & & & & 3.71\% $\pm$.16\% \\ \hline
\citep{Oliveira+al-2002} & & & & 2.4\% $\pm$.13\% \\ \hline
\citep{Milgram+al-2005} & & & & 2.1\% $\pm$.12\% \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}\\
\caption{Charts corresponding to Table \ref{tab:sda-vs-mlp-vs-humans}. Left: overall results; error bars indicate a 95\% confidence interval. Right: error rates on NIST test digits only, with results from the literature.}
\label{fig:error-rates-charts}
\end{figure}
\subsection{Perturbed Training Data More Helpful for SDAE}
\begin{table}
\caption{Relative change in error rates due to the use of perturbed training data,
either using NISTP for the MLP1/SDA1 models or using P07 for the MLP2/SDA2 models.
A positive value indicates that training on the perturbed data helped for the
given test set (the first three columns are on the 62-class task and the last one is
on the clean 10-class digit task). Clearly, the deep learning models benefited more
from perturbed training data, even when testing on clean data, whereas the MLP
trained on perturbed data performed worse on the clean digits and about the same
on the clean characters.}
\label{tab:perturbation-effect}
\begin{center}
\begin{tabular}{|l|r|r|r|r|} \hline
            & NIST test & NISTP test & P07 test & NIST test digits \\ \hline
SDA0/SDA1-1 & 38\% & 84\% & 228\% & 93\% \\ \hline
SDA0/SDA2-1 & 27\% & 94\% & 144\% & 59\% \\ \hline
MLP0/MLP1-1 & 5.2\% & 65\% & -13\% & -10\% \\ \hline
MLP0/MLP2-1 & -0.4\% & 49\% & 44\% & -29\% \\ \hline
\end{tabular}
\end{center}
\end{table}
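Each entry of Table~\ref{tab:perturbation-effect} follows directly from
Table~\ref{tab:sda-vs-mlp-vs-humans}; for example, the first entry is
$23.7\%/17.1\% - 1 \approx 38\%$, the relative reduction in NIST test error
obtained by training the SDA on NISTP rather than NIST.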
\subsection{Multi-Task Learning Effects}
As previously seen, the SDA is better able than the MLP to benefit from the
transformations applied to the data. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper case characters (knowing that the input is one), and
recognizing lower case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out if deep learning can benefit more (or less) from multiple related tasks
(i.e. the multi-task setting) compared to a corresponding purely supervised
shallow learner.
We use a single hidden layer MLP with 1000 hidden units, and an SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.
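
As an illustration of how a model trained in the multi-task setting can be
scored on an individual task, the sketch below assumes (hypothetically) a
single 62-way classifier and a fixed class layout, and restricts predictions
to the task's classes, since the task identity is known at test time:
{\small
\begin{verbatim}
import numpy as np

# Hypothetical class layout: 0-9 digits, 10-35 lower, 36-61 upper.
TASKS = {"digits": np.arange(0, 10),
         "lower":  np.arange(10, 36),
         "upper":  np.arange(36, 62)}

def task_error(probs, labels, task):
    # Restrict the argmax of a 62-way classifier to one task's
    # classes, since the task identity is known at test time.
    classes = TASKS[task]
    pred = classes[np.argmax(probs[:, classes], axis=1)]
    return float(np.mean(pred != labels))

# Tiny usage example with random scores standing in for a model:
rng = np.random.default_rng(0)
probs = rng.uniform(size=(100, 62))
labels = rng.integers(0, 10, size=100)      # digit labels
print(task_error(probs, labels, "digits"))  # ~0.9 for random scores
\end{verbatim}
}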
Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
\begin{table}
\caption{Test error rates and relative change in error rates due to the use of
a multi-task setting, i.e., training on each task in isolation vs training
on all three tasks together, for MLPs vs SDAs. The SDA benefits much
more from the multi-task setting. All experiments are on the
unperturbed NIST data only, using validation error for model selection.
Relative improvement is $1 - (\mbox{single-task error})/(\mbox{multi-task error})$.}
\label{tab:multi-task}
\begin{center}
\begin{tabular}{|l|r|r|r|} \hline
           & single-task & multi-task & relative \\
           & setting     & setting    & improvement \\ \hline
MLP-digits & 3.77\% & 3.99\% & 5.6\% \\ \hline
MLP-lower  & 17.4\% & 16.8\% & -4.1\% \\ \hline
MLP-upper  & 7.84\% & 7.54\% & -3.6\% \\ \hline
SDA-digits & 2.6\% & 3.56\% & 27\% \\ \hline
SDA-lower  & 12.3\% & 14.4\% & 15\% \\ \hline
SDA-upper  & 5.93\% & 6.78\% & 13\% \\ \hline
\end{tabular}
\end{center}
\end{table}
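As a worked instance of the formula in the caption of
Table~\ref{tab:multi-task}: for SDA-digits,
$1 - 2.6\%/3.56\% \approx 27\%$.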

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}\\
\caption{Charts corresponding to Tables \ref{tab:perturbation-effect} (left) and \ref{tab:multi-task} (right).}
\label{fig:improvements-charts}
\end{figure}
\section{Conclusions}
The conclusions are positive for all the questions asked in the introduction.
\begin{itemize}
\item Do the good results previously obtained with deep architectures on the
MNIST digits generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?
Yes, the SDA systematically outperformed the MLP, in fact reaching human-level
performance.
\item To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifier better not only on similarly perturbed images but also on
the {\em original clean examples}? Do deep architectures benefit more from such {\em out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
MLPs were helped by perturbed training examples when tested on perturbed input images,
but were only marginally helped with respect to clean examples. On the other hand, the deep SDAs
were very significantly boosted by these out-of-distribution examples.
\item Similarly, does the feature learning step in deep learning algorithms benefit more
from training with similar but different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
Whereas the improvement due to the multi-task setting was marginal or
negative for the MLP, it was very significant for the SDA.
\end{itemize}
\bibliography{strings,ml,aigaion,specials}
%\bibliographystyle{plainnat}
\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
\end{document}