\documentclass{article} % For LaTeX2e
\usepackage{times}
\usepackage{wrapfig}
\usepackage{amsthm,amsmath,bbm}
\usepackage[psamsfonts]{amssymb}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\addtolength{\textwidth}{10mm}
\addtolength{\evensidemargin}{-5mm}
\addtolength{\oddsidemargin}{-5mm}

%\setlength\parindent{0mm}

\title{Deep Self-Taught Learning for Handwritten Character Recognition}
\author{
Frédéric Bastien \and
Yoshua Bengio \and
Arnaud Bergeron \and
Nicolas Boulanger-Lewandowski \and
Thomas Breuel \and
Youssouf Chherawala \and
Moustapha Cisse \and
Myriam Côté \and
Dumitru Erhan \and
Jeremy Eustache \and
Xavier Glorot \and
Xavier Muller \and
Sylvain Pannetier Lebeuf \and
Razvan Pascanu \and
Salah Rifai \and
Francois Savard \and
Guillaume Sicard
}
\date{June 8th, 2010, Technical Report 1353, Dept. IRO, U. Montreal}
\begin{document}

%\makeanontitle
\maketitle

%\vspace*{-2mm}
\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep
architectures, i.e., function classes obtained by composing multiple
non-linear transformations. Self-taught learning (exploiting unlabeled
examples or examples from other distributions) has already been applied
to deep learners, but mostly to show the advantage of unlabeled
examples. Here we explore the advantage brought by {\em out-of-distribution examples}.
For this purpose we
developed a powerful generator of stochastic variations and noise
processes for character images, including not only affine transformations
but also slant, local elastic deformations, changes in thickness,
background images, grey level changes, contrast, occlusion, and various
types of noise. The out-of-distribution examples are obtained from these
highly distorted images or by including examples of object classes
different from those in the target test set.
We show that {\em deep learners benefit
more from them than a corresponding shallow learner}, at least in the area of
handwritten character recognition. In fact, we show that they reach
human-level performance on both handwritten digit classification and
62-class handwritten character recognition.
\end{abstract}
%\vspace*{-3mm}
\section{Introduction}
%\vspace*{-1mm}

{\bf Deep Learning} has emerged as a promising new area of research in
statistical machine learning (see~\citet{Bengio-2009} for a review).
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand.
This is in part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation of the raw visual input. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex)~\citep{HonglakL2008}, and that they become more and
more invariant to factors of variation (such as camera movement) in
higher layers~\citep{Goodfellow2009}.
Learning a hierarchy of features increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

{\bf Self-taught learning}~\citep{RainaR2007} is a paradigm that combines principles
of semi-supervised and multi-task learning: the learner can exploit examples
that are unlabeled and possibly come from a distribution different from the target
distribution, e.g., from classes other than those of interest.
It has already been shown that deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
but more needs to be done to explore the impact
of {\em out-of-distribution} examples and of the multi-task setting
(one exception is~\citep{CollobertR2008}, which uses a different kind
of learning algorithm). In particular the {\em relative
advantage} of deep learning for these settings has not been evaluated.
The hypothesis discussed in the conclusion is that a deep hierarchy of features
may be better able to provide sharing of statistical strength
between different regions in input space or different tasks.

\iffalse
Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}. One of these layer initialization techniques,
applied here, is the Denoising
Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see Figure~\ref{fig:da}),
which
performed similarly or better than previously proposed Restricted Boltzmann
Machines in terms of unsupervised extraction of a hierarchy of features
useful for classification. Each layer is trained to denoise its
input, creating a layer of features that can be used as input for the next layer.
\fi
%The principle is that each layer starting from
%the bottom is trained to encode its input (the output of the previous
%layer) and to reconstruct it from a corrupted version. After this
%unsupervised initialization, the stack of DAs can be
%converted into a deep supervised feedforward neural network and fine-tuned by
%stochastic gradient descent.

%
In this paper we ask the following questions:

%\begin{enumerate}
$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digit images generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?

$\bullet$ %\item
To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifiers better not only on similarly perturbed images but also on
the {\em original clean examples}? We study this question in the
context of the 62-class and 10-class tasks of the NIST special database 19.

$\bullet$ %\item
Do deep architectures {\em benefit more from such out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
We use highly perturbed examples to generate out-of-distribution examples.

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit more
from training with moderately different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
We train on 62 classes and test on 10 (digits) or 26 (upper case or lower case)
to answer this question.
%\end{enumerate}

Our experimental results provide positive evidence towards all of these questions.
To achieve these results, we introduce in the next section a sophisticated system
for stochastically transforming character images and then explain the methodology,
which is based on training with or without these transformed images and testing on
clean ones. We measure the relative advantage of out-of-distribution examples
for a deep learner vs a supervised shallow one.
Code for generating these transformations as well as for the deep learning
algorithms is made available.
We also estimate the relative advantage for deep learners of training with
classes other than those of interest, by comparing learners trained with
62 classes with learners trained with only a subset (on which they
are then tested).
The conclusion discusses
the more general question of why deep learners may benefit so much from
the self-taught learning framework.
%\vspace*{-3mm}
\newpage
\section{Perturbation and Transformation of Character Images}
\label{s:perturbations}
%\vspace*{-2mm}
\begin{wrapfigure}[8]{l}{0.15\textwidth}
%\begin{minipage}[b]{0.14\linewidth}
%\vspace*{-5mm}
\begin{center}
\includegraphics[scale=.4]{images/Original.png}\\
{\bf Original}
\end{center}
\end{wrapfigure}
%%\vspace{0.7cm}
%\end{minipage}%
%\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
This section describes the different transformations we used to stochastically
transform $32 \times 32$ source images (such as the one on the left)
in order to obtain data from a larger distribution which
covers a domain substantially larger than the clean characters distribution from
which we start.
Although character transformations have been used before to
improve character recognizers, this effort is on a large scale both
in the number of classes and in the complexity of the transformations, hence
in the complexity of the learning task.
More details can
be found in this technical report~\citep{ift6266-tr-anonymous}.
The code for these transformations (mostly Python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.
There are two main parts in the pipeline. The first one,
from slant to pinch below, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
%\end{minipage}
%\vspace*{1mm}
\subsection{Transformations}
%{\large\bf 2.1 Transformations}
%\vspace*{1mm}

\subsubsection*{Thickness}

%\begin{wrapfigure}[7]{l}{0.15\textwidth}
\begin{minipage}[b]{0.14\linewidth}
%\centering
\begin{center}
\vspace*{-5mm}
\includegraphics[scale=.4]{images/Thick_only.png}\\
%{\bf Thickness}
\end{center}
\vspace{.6cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
%\end{wrapfigure}
To change character {\bf thickness}, morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
are applied. The neighborhood of each pixel is multiplied
element-wise with a {\em structuring element} matrix.
The pixel value is replaced by the maximum or the minimum of the resulting
matrix, respectively for dilation or erosion. Ten different structuring elements with
increasing dimensions (largest is $5\times5$) were used. For each image, we
randomly sample the operator type (dilation or erosion) with equal probability and one structuring
element from a subset of the $n=round(m \times complexity)$ smallest structuring elements,
where $m=10$ for dilation and $m=6$ for erosion (to avoid completely erasing thin characters).
A neutral element (no transformation)
is always present in the set.
%%\vspace{.4cm}
\end{minipage}
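As a concrete illustration, here is a minimal Python sketch of this module (ours, not the pipeline's actual code); the particular pool of element sizes below is a hypothetical choice, and {\tt scipy.ndimage} supplies the morphological operators.
\begin{verbatim}
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def change_thickness(img, complexity, rng=np.random):
    # img: 2D float array in [0,1], ink = high values
    dilate = rng.rand() < 0.5            # operator type, equal probability
    m = 10 if dilate else 6              # smaller pool for erosion
    n = int(round(m * complexity))       # keep the n smallest elements
    # hypothetical pool of square element widths (largest 5x5);
    # None is the ever-present neutral element
    pool = [None, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5][:n + 1]
    k = pool[rng.randint(len(pool))]
    if k is None or k <= 1:
        return img                       # neutral: no transformation
    op = grey_dilation if dilate else grey_erosion
    return op(img, size=(k, k))          # max (dilation) or min (erosion)
\end{verbatim}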

\vspace{2mm}

\subsubsection*{Slant}
\vspace*{2mm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.4]{images/Slant_only.png}\\
%{\bf Slant}
\end{minipage}%
\hspace{0.3cm}
\begin{minipage}[b]{0.83\linewidth}
%\centering
To produce {\bf slant}, each row of the image is shifted horizontally,
proportionally to its height (vertical position): $shift = round(slant \times height)$,
with $slant \sim U[-complexity,complexity]$.
The shift is randomly chosen to be either to the left or to the right.
\vspace{5mm}
\end{minipage}
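A minimal sketch of this rule follows; {\tt np.roll} is our stand-in for the actual shifting code (which need not wrap around).
\begin{verbatim}
import numpy as np

def apply_slant(img, complexity, rng=np.random):
    slant = rng.uniform(-complexity, complexity)
    out = np.empty_like(img)
    for y in range(img.shape[0]):        # y plays the role of height
        out[y] = np.roll(img[y], int(round(slant * y)))
    return out
\end{verbatim}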
%\vspace*{-4mm}

%\newpage

\subsubsection*{Affine Transformations}

\begin{minipage}[b]{0.14\linewidth}
%\centering
%\begin{wrapfigure}[8]{l}{0.15\textwidth}
\begin{center}
\includegraphics[scale=.4]{images/Affine_only.png}
\vspace*{6mm}
%{\small {\bf Affine \mbox{Transformation}}}
\end{center}
%\end{wrapfigure}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
\noindent A $2 \times 3$ {\bf affine transform} matrix (with
parameters $(a,b,c,d,e,f)$) is sampled according to the $complexity$.
Output pixel $(x,y)$ takes the value of the input pixel
nearest to $(ax+by+c,dx+ey+f)$,
producing scaling, translation, rotation and shearing.
Marginal distributions of $(a,b,c,d,e,f)$ have been tuned to
forbid large rotations (to avoid confusing classes) but to give good
variability of the transformation: $a$ and $d$ $\sim U[1-3\,complexity,1+3\,complexity]$,
$b$ and $e$ $\sim U[-3\,complexity,3\,complexity]$, and
$c$ and $f \sim U[-4\,complexity, 4\,complexity]$.\\
\end{minipage}
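The sampling and the nearest-pixel lookup can be sketched as follows (a loop-based illustration, not the actual implementation).
\begin{verbatim}
import numpy as np

def apply_affine(img, cx, rng=np.random):
    # cx = complexity; parameter ranges as in the text
    a, d = rng.uniform(1 - 3*cx, 1 + 3*cx, size=2)
    b, e = rng.uniform(-3*cx, 3*cx, size=2)
    c, f = rng.uniform(-4*cx, 4*cx, size=2)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            sx = int(round(a*x + b*y + c))   # nearest input pixel
            sy = int(round(d*x + e*y + f))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out
\end{verbatim}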
%\vspace*{-4.5mm}
\subsubsection*{Local Elastic Deformations}

%\begin{minipage}[t]{\linewidth}
%\begin{wrapfigure}[7]{l}{0.15\textwidth}
%\hspace*{-8mm}
\begin{minipage}[b]{0.14\linewidth}
%\centering
\begin{center}
\vspace*{5mm}
\includegraphics[scale=.4]{images/Localelasticdistorsions_only.png}
%{\bf Local Elastic Deformation}
\end{center}
%\end{wrapfigure}
\end{minipage}%
\hspace{3mm}
\begin{minipage}[b]{0.85\linewidth}
%%\vspace*{-20mm}
The {\bf local elastic deformation}
module induces a ``wiggly'' effect in the image, following~\citet{SimardSP03-short},
which provides more details.
The displacement fields are scaled by the intensity
$\alpha = \sqrt[3]{complexity} \times 10.0$ and
convolved with a 2D Gaussian kernel (resulting in a blur) of
standard deviation $\sigma = 10 - 7 \times\sqrt[3]{complexity}$.
\vspace{2mm}
\end{minipage}
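A sketch in the style of \citet{SimardSP03-short}, with $\alpha$ and $\sigma$ set as above; the {\tt scipy.ndimage} calls are real, the rest is our illustrative reconstruction.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(img, complexity, rng=np.random):
    alpha = complexity ** (1/3.) * 10.0        # field intensity
    sigma = 10 - 7 * complexity ** (1/3.)      # field smoothness
    dx = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(img.shape[0]),
                         np.arange(img.shape[1]), indexing='ij')
    return map_coordinates(img, [ys + dy, xs + dx], order=1)
\end{verbatim}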
\vspace*{4mm}

\subsubsection*{Pinch}

\begin{minipage}[b]{0.14\linewidth}
%\centering
%\begin{wrapfigure}[7]{l}{0.15\textwidth}
%\vspace*{-5mm}
\begin{center}
\includegraphics[scale=.4]{images/Pinch_only.png}\\
\vspace*{15mm}
%{\bf Pinch}
\end{center}
%\end{wrapfigure}
%%\vspace{.6cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
The {\bf pinch} module applies the ``Whirl and pinch'' GIMP filter with whirl set to 0.
A pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface'' (GIMP documentation manual).
For a square input image, draw a radius-$r$ disk
around its center $C$. Any pixel $P$ belonging to
that disk has its value replaced by
the value of a ``source'' pixel in the original image,
on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1=distance(P,C)$
and $d_2 = \sin(\frac{\pi{}d_1}{2r})^{-pinch} \times d_1$,
where $pinch$ is a parameter of the filter.
The actual value is given by bilinear interpolation considering the pixels
around the (non-integer) source position thus found.
Here $pinch \sim U[-complexity, 0.7 \times complexity]$.
%%\vspace{1.5cm}
\end{minipage}
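Sketched without GIMP, reproducing only the source-distance formula above and using {\tt map\_coordinates} for the bilinear lookup:
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def apply_pinch(img, complexity, rng=np.random):
    pinch = rng.uniform(-complexity, 0.7 * complexity)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = min(cy, cx)                       # radius of the affected disk
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    d1 = np.hypot(ys - cy, xs - cx)
    scale = np.ones_like(d1)              # d2/d1 ratio, 1 outside the disk
    m = (d1 > 0) & (d1 < r)
    scale[m] = np.sin(np.pi * d1[m] / (2*r)) ** (-pinch)
    src = [cy + (ys - cy) * scale, cx + (xs - cx) * scale]
    return map_coordinates(img, src, order=1)   # bilinear interpolation
\end{verbatim}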

%\vspace{1mm}

%{\large\bf 2.2 Injecting Noise}
\subsection{Injecting Noise}
%\vspace{2mm}

\subsubsection*{Motion Blur}

%%\vspace*{-.2cm}
\begin{minipage}[t]{0.14\linewidth}
\centering
\vspace*{0mm}
\includegraphics[scale=.4]{images/Motionblur_only.png}
%{\bf Motion Blur}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
%%\vspace*{.5mm}
\vspace*{2mm}
The {\bf motion blur} module is GIMP's ``linear motion blur'', which
has parameters $length$ and $angle$. The value of
a pixel in the final image is approximately the mean of the first $length$ pixels
found by moving in the $angle$ direction, with
$angle \sim U[0,360]$ degrees and $length \sim {\rm Normal}(0,(3 \times complexity)^2)$.
%\vspace{5mm}
\end{minipage}
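A rough re-implementation of the described behavior (GIMP's actual filter differs in its details; we assume the absolute value of the sampled $length$ is used):
\begin{verbatim}
import numpy as np

def motion_blur(img, complexity, rng=np.random):
    angle = np.deg2rad(rng.uniform(0, 360))
    length = int(abs(rng.normal(0, 3 * complexity)))
    if length < 1:
        return img
    h, w = img.shape
    out = np.zeros_like(img)
    dy, dx = np.sin(angle), np.cos(angle)
    for step in range(length):        # mean of the first `length' pixels
        ys = np.clip(np.round(np.arange(h)[:, None] + dy*step), 0, h-1)
        xs = np.clip(np.round(np.arange(w)[None, :] + dx*step), 0, w-1)
        out += img[ys.astype(int), xs.astype(int)]
    return out / length
\end{verbatim}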
%\vspace*{1mm}

\subsubsection*{Occlusion}
\begin{minipage}[t]{0.14\linewidth}
\centering
\vspace*{3mm}
\includegraphics[scale=.4]{images/occlusion_only.png}\\
%{\bf Occlusion}
%%\vspace{.5cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
%\vspace*{-18mm}
The {\bf occlusion} module selects a random rectangle from an {\em occluder} character
image and places it over the original {\em occluded}
image. Pixels are combined by taking $\max(occluder, occluded)$,
i.e. keeping the lighter ones.
The rectangle corners
are sampled so that larger complexity gives larger rectangles.
The destination position in the occluded image is also sampled
according to a normal distribution (more details in~\citet{ift6266-tr-anonymous}).
This module is skipped with probability 60\%.
%%\vspace{7mm}
\end{minipage}
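An illustrative sketch; the exact corner and destination distributions are in the pipeline code, so those below are simplified stand-ins:
\begin{verbatim}
import numpy as np

def occlude(img, occluder, complexity, rng=np.random):
    if rng.rand() < 0.6:                  # skipped 60% of the time
        return img
    h, w = img.shape
    rh = max(1, int(rng.uniform(0, complexity) * h))
    rw = max(1, int(rng.uniform(0, complexity) * w))
    sy = rng.randint(occluder.shape[0] - rh + 1)
    sx = rng.randint(occluder.shape[1] - rw + 1)
    patch = occluder[sy:sy+rh, sx:sx+rw]
    dy = int(np.clip(rng.normal(h/2, h/4) - rh/2, 0, h - rh))
    dx = int(np.clip(rng.normal(w/2, w/4) - rw/2, 0, w - rw))
    out = img.copy()
    out[dy:dy+rh, dx:dx+rw] = np.maximum(out[dy:dy+rh, dx:dx+rw],
                                         patch)   # keep the lighter pixels
    return out
\end{verbatim}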
%\vspace*{1mm}
\subsubsection*{Gaussian Smoothing}
%\begin{wrapfigure}[8]{l}{0.15\textwidth}
%\vspace*{-6mm}
\begin{minipage}[t]{0.14\linewidth}
\begin{center}
%\centering
\vspace*{6mm}
\includegraphics[scale=.4]{images/Bruitgauss_only.png}
%{\bf Gaussian Smoothing}
\end{center}
%\end{wrapfigure}
%%\vspace{.5cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.86\linewidth}
With the {\bf Gaussian smoothing} module,
different regions of the image are spatially smoothed.
This is achieved by first convolving
the image with an isotropic Gaussian kernel of
size and variance chosen uniformly in the ranges $[12,12 + 20 \times complexity]$
and $[2,2 + 6 \times complexity]$. This filtered image is normalized
between $0$ and $1$. We also create an isotropic weighted averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will be
averaging centers between the original image and the filtered one. We
initialize to zero a mask matrix of the image size. For each selected pixel
we add to the mask the averaging window centered on it. The final image is
computed from the following element-wise operation: $\frac{image + filtered\_image
\times mask}{mask+1}$.
This module is skipped with probability 75\%.
\end{minipage}
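A sketch of this blending scheme; the exact shape of the averaging window is not specified above, so the centered Gaussian bump below is a hypothetical choice:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def regional_smoothing(img, complexity, rng=np.random):
    if rng.rand() < 0.75:                 # skipped 75% of the time
        return img
    size = int(rng.uniform(12, 12 + 20*complexity)) | 1   # odd width
    var = rng.uniform(2, 2 + 6*complexity)
    filt = gaussian_filter(img, np.sqrt(var))
    filt = (filt - filt.min()) / (filt.max() - filt.min() + 1e-8)
    ax = np.arange(size) - size // 2      # window with max at the center
    win = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2.*(size/4.)**2))
    mask = np.zeros_like(img)
    h, w = img.shape
    for _ in range(rng.randint(3, int(3 + 10*complexity) + 1)):
        y, x = rng.randint(h), rng.randint(w)   # an averaging center
        y0, x0 = max(0, y - size//2), max(0, x - size//2)
        y1, x1 = min(h, y + size//2 + 1), min(w, x + size//2 + 1)
        mask[y0:y1, x0:x1] += win[size//2-(y-y0):size//2+(y1-y),
                                  size//2-(x-x0):size//2+(x1-x)]
    return (img + filt * mask) / (mask + 1)
\end{verbatim}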

%\newpage

%\vspace*{-9mm}
\subsubsection*{Permute Pixels}

%\hspace*{-3mm}\begin{minipage}[t]{0.18\linewidth}
%\centering
\begin{minipage}[t]{0.14\textwidth}
%\begin{wrapfigure}[7]{l}{
%\vspace*{-5mm}
\begin{center}
\vspace*{1mm}
\includegraphics[scale=.4]{images/Permutpixel_only.png}
%{\small\bf Permute Pixels}
\end{center}
%\end{wrapfigure}
\end{minipage}%
\hspace{3mm}\begin{minipage}[t]{0.86\linewidth}
\vspace*{1mm}
%%\vspace*{-20mm}
This module {\bf permutes neighbouring pixels}. It first selects a
fraction $\frac{complexity}{3}$ of pixels randomly in the image. Each
of these pixels is then sequentially exchanged with a random pixel
among its four nearest neighbors (on its left, right, top or bottom).
This module is skipped with probability 80\%.\\
%\vspace*{1mm}
\end{minipage}
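A direct sketch of the procedure (wrapping at the image borders for simplicity):
\begin{verbatim}
import numpy as np

def permute_pixels(img, complexity, rng=np.random):
    if rng.rand() < 0.8:                  # skipped 80% of the time
        return img
    out = img.copy()
    h, w = img.shape
    n = int(h * w * complexity / 3)       # fraction of pixels to exchange
    steps = [(0, -1), (0, 1), (-1, 0), (1, 0)]
    for _ in range(n):
        y, x = rng.randint(h), rng.randint(w)
        dy, dx = steps[rng.randint(4)]
        y2, x2 = (y + dy) % h, (x + dx) % w
        out[y, x], out[y2, x2] = out[y2, x2], out[y, x]
    return out
\end{verbatim}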

%\vspace{-3mm}

\subsubsection*{Gaussian Noise}

\begin{minipage}[t]{0.14\textwidth}
%\begin{wrapfigure}[7]{l}{
%%\vspace*{-3mm}
\begin{center}
%\hspace*{-3mm}\begin{minipage}[t]{0.18\linewidth}
%\centering
\vspace*{0mm}
\includegraphics[scale=.4]{images/Distorsiongauss_only.png}
%{\small \bf Gauss. Noise}
\end{center}
%\end{wrapfigure}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.86\linewidth}
\vspace*{1mm}
%\vspace*{12mm}
The {\bf Gaussian noise} module simply adds, to each pixel of the image independently, a
noise $\sim {\rm Normal}(0,(\frac{complexity}{10})^2)$.
This module is skipped with probability 70\%.
%%\vspace{1.1cm}
\end{minipage}
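In sketch form:
\begin{verbatim}
import numpy as np

def add_gaussian_noise(img, complexity, rng=np.random):
    if rng.rand() < 0.7:                  # skipped 70% of the time
        return img
    return img + rng.normal(0, complexity / 10.0, img.shape)
\end{verbatim}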

%\vspace*{1.2cm}

\subsubsection*{Background Image Addition}

\begin{minipage}[t]{\linewidth}
\begin{minipage}[t]{0.14\linewidth}
\centering
\vspace*{0mm}
\includegraphics[scale=.4]{images/background_other_only.png}
%{\small \bf Bg Image}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
\vspace*{1mm}
Following~\citet{Larochelle-jmlr-2009}, the {\bf background image} module adds a random
background behind the letter, taken from a randomly chosen natural image,
with contrast adjustments depending on $complexity$, to preserve
more or less of the original character image.
%%\vspace{.8cm}
\end{minipage}
\end{minipage}
%%\vspace{-.7cm}

\subsubsection*{Salt and Pepper Noise}
\begin{minipage}[t]{0.14\linewidth}
\centering
\vspace*{0mm}
\includegraphics[scale=.4]{images/Poivresel_only.png}
%{\small \bf Salt \& Pepper}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[t]{0.83\linewidth}
\vspace*{1mm}
The {\bf salt and pepper noise} module adds noise $\sim U[0,1]$ to random subsets of pixels.
The fraction of selected pixels is $0.2 \times complexity$.
This module is skipped with probability 75\%.
%%\vspace{.9cm}
\end{minipage}
%%\vspace{-.7cm}
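A sketch, reading $0.2 \times complexity$ as the fraction of pixels to replace:
\begin{verbatim}
import numpy as np

def salt_and_pepper(img, complexity, rng=np.random):
    if rng.rand() < 0.75:                 # skipped 75% of the time
        return img
    out = img.copy()
    mask = rng.rand(*img.shape) < 0.2 * complexity
    out[mask] = rng.rand(int(mask.sum()))   # replacement values ~ U[0,1]
    return out
\end{verbatim}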
%\vspace{1mm}
\subsubsection*{Scratches}
\begin{minipage}[t]{0.14\textwidth}
%\begin{wrapfigure}[7]{l}{
%\begin{minipage}[t]{0.14\linewidth}
%\centering
\begin{center}
\vspace*{4mm}
%\hspace*{-1mm}
\includegraphics[scale=.4]{images/Rature_only.png}\\
%{\bf Scratches}
\end{center}
\end{minipage}%
%\end{wrapfigure}
\hspace{0.3cm}\begin{minipage}[t]{0.86\linewidth}
%%\vspace{.4cm}
The {\bf scratches} module places line-like white patches on the image. The
lines are heavily transformed images of the digit ``1'' (one), chosen
at random among 500 such 1 images,
randomly cropped and rotated by an angle $\sim {\rm Normal}(0,(100 \times
complexity)^2)$ (in degrees), using bi-cubic interpolation.
Two passes of a grey-scale morphological erosion filter
are applied, reducing the width of the line
by an amount controlled by $complexity$.
This module is skipped with probability 85\%. The probabilities
of applying 1, 2, or 3 patches are (50\%,30\%,20\%).
\end{minipage}

%\vspace*{1mm}

\subsubsection*{Grey Level and Contrast Changes}

\begin{minipage}[t]{0.15\linewidth}
\centering
\vspace*{0mm}
\includegraphics[scale=.4]{images/Contrast_only.png}
%{\bf Grey Level \& Contrast}
\end{minipage}%
\hspace{3mm}\begin{minipage}[t]{0.85\linewidth}
\vspace*{1mm}
The {\bf grey level and contrast} module changes the contrast by changing grey levels, and may invert the image polarity (white
to black and black to white). The contrast is $C \sim U[1-0.85 \times complexity,1]$,
so the image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with probability 50\%.
%%\vspace{.7cm}
\end{minipage}
%\vspace{2mm}
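In sketch form, the rescaling maps $[0,1]$ onto the stated range:
\begin{verbatim}
import numpy as np

def change_contrast(img, complexity, rng=np.random):
    C = rng.uniform(1 - 0.85 * complexity, 1)
    out = img * C + (1 - C) / 2.0         # into [(1-C)/2, 1-(1-C)/2]
    if rng.rand() < 0.5:
        out = 1 - out                     # invert polarity
    return out
\end{verbatim}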
\iffalse
\begin{figure}[ht]
\centerline{\resizebox{.9\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case \emph{t}
(the upper left image). Each image in the pipeline (going from
left to right, first top line, then bottom line) shows the result
of applying one of the modules in the pipeline. The last image
(bottom right) is used as a training example.}
\label{fig:pipeline}
\end{figure}
\fi

%\vspace*{-3mm}
\section{Experimental Setup}
%\vspace*{-1mm}

Much previous work on deep learning had been performed on
the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\citep{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}.
The focus here is on much larger training sets, from 10
to 1000 times larger, and on 62 classes.

The first step in constructing the larger datasets (called NISTP and P07) is to sample from
a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
and {\bf OCR data} (scanned machine-printed characters). Once a character
is sampled from one of these sources (chosen randomly), the second step is to
apply a pipeline of transformations and/or noise processes as described in section \ref{s:perturbations}.

To provide a baseline of error rate comparison we also estimate human performance
on both the 62-class task and the 10-class digits task.
We compare the best Multi-Layer Perceptrons (MLP) against
the best Stacked Denoising Auto-encoders (SDA), when
both models' hyper-parameters are selected to minimize the validation set error.
We also provide a comparison against a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service ({\tt http://mturk.com}).
AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language processing and vision.
%processing \citep{SnowEtAl2008} and vision
%\citep{SorokinAndForsyth2008,whitehill09}.
AMT users were presented
with 10 character images (from a test set) and asked to choose 10 corresponding ASCII
characters. They were forced to choose a single character class (either among the
62 or 10 character classes) for each image.
80 subjects classified 2500 images per (dataset, task) pair,
with the guarantee that 3 different subjects classified each image, allowing
us to estimate inter-human variability (e.g. a standard error of 0.1\%
on the average 18.2\% error made by humans on the 62-class task NIST test set).

%\vspace*{-3mm}
\subsection{Data Sources}
%\vspace*{-2mm}

%\begin{itemize}
%\item
{\bf NIST.}
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
The dataset is composed of 814255 digits and characters (upper and lower case), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
The fourth partition (called $hsf_4$, 82587 examples),
experimentally recognized to be the most difficult one, is the one recommended
by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
for that purpose. We randomly split the remainder (731668 examples) into a training set and a validation set for
model selection.
The performances reported by previous work on that dataset mostly concern only the digits.
Here we use all the classes, in both the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
The distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a more uniform distribution
of letters in the test set (whereas in the training set they are distributed
more like in natural text).
%\vspace*{-1mm}

%\item
{\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from
{\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
% TODO: pointless to anonymize, it's not pointing to our work
Including the operating system's (Windows 7) fonts, there is a total of $9817$ different fonts that we can choose uniformly from.
The chosen {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.
%\vspace*{-1mm}

%\item
{\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the generated data, many different fonts are used to generate the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source.
%Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
%\vspace*{-1mm}

%\item
{\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters was included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at the University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
%TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
%\end{itemize}

%\vspace*{-3mm}
\subsection{Data Sets}
%\vspace*{-2mm}

All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
%\begin{itemize}
%\vspace*{-1mm}

%\item
{\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
\{651668 / 80000 / 82587\} \{training / validation / test\} examples.
%\vspace*{-1mm}

%\item
{\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the transformation pipeline described in section \ref{s:perturbations}.
For each new example to generate, a data source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
%\vspace*{-1mm}

%\item
{\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
%\end{itemize}

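To make the generation process concrete, here is a hedged sketch of how a P07-style example could be assembled from the module sketches of section~\ref{s:perturbations}; {\tt sample\_font}, {\tt sample\_captcha}, {\tt sample\_ocr} and {\tt sample\_nist} are hypothetical loaders, and the occlusion, background and scratches modules are omitted because they need auxiliary images:
\begin{verbatim}
import numpy as np

MODULES = [change_thickness, apply_slant, apply_affine, elastic_deform,
           apply_pinch, motion_blur, regional_smoothing, permute_pixels,
           add_gaussian_noise, salt_and_pepper, change_contrast]

def generate_example(rng=np.random):
    r = rng.rand()                        # source proportions from the text
    if r < 0.10:   img = sample_font()
    elif r < 0.35: img = sample_captcha()
    elif r < 0.60: img = sample_ocr()
    else:          img = sample_nist()
    for module in MODULES:                # modules in the order of section 2
        img = module(img, rng.uniform(0, 0.7))
    return np.clip(img, 0, 1)
\end{verbatim}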
%\vspace*{-3mm}
\subsection{Models and their Hyperparameters}
%\vspace*{-2mm}

The experiments are performed using MLPs (with a single
hidden layer) and SDAs.
\emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}

{\bf Multi-Layer Perceptrons (MLP).}
Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used
(making the use of SVMs computationally challenging because of their quadratic
scaling behavior).
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P(class | image)$.
The number of hidden units is taken in $\{300,500,800,1000,1500\}$.
Training examples are presented in minibatches of size 20. A constant learning
rate was chosen among $\{0.001, 0.01, 0.025, 0.075, 0.1, 0.5\}$.
%through preliminary experiments (measuring performance on a validation set),
%and $0.1$ (which was found to work best) was then selected for optimizing on
%the whole training sets.
%\vspace*{-1mm}
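For concreteness, a compact {\tt numpy} sketch of this model (the actual experiments used GPU code, not this): a $\tanh$ hidden layer, a softmax output, and minibatch SGD with a constant learning rate.
\begin{verbatim}
import numpy as np

class MLP:
    def __init__(self, n_in, n_hid, n_out, rng=np.random):
        self.W1 = rng.uniform(-0.1, 0.1, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.uniform(-0.1, 0.1, (n_hid, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):                     # P(class | image)
        self.H = np.tanh(X @ self.W1 + self.b1)
        A = self.H @ self.W2 + self.b2
        E = np.exp(A - A.max(axis=1, keepdims=True))
        return E / E.sum(axis=1, keepdims=True)

    def sgd_step(self, X, Y, lr=0.1):         # Y: one-hot, minibatch of 20
        P = self.forward(X)
        dA = (P - Y) / len(X)                 # softmax + NLL gradient
        dH = (dA @ self.W2.T) * (1 - self.H**2)
        self.W2 -= lr * self.H.T @ dA; self.b2 -= lr * dA.sum(0)
        self.W1 -= lr * X.T @ dH;      self.b1 -= lr * dH.sum(0)
\end{verbatim}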
{\bf Stacked Denoising Auto-Encoders (SDA).}
Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
apparently setting parameters in the
basin of attraction of supervised gradient descent yielding better
generalization~\citep{Erhan+al-2010}. It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (like in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

\begin{figure}[ht]
%\vspace*{-2mm}
\centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
%\vspace*{-2mm}
\caption{Illustration of the computations and training criterion for the denoising
auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
the layer (i.e. raw input or output of previous layer)
is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
is compared to the uncorrupted input $x$ through the loss function
$L_H(x,z)$, whose expected value is approximately minimized during training
by tuning $\theta$ and $\theta'$.}
\label{fig:da}
%\vspace*{-2mm}
\end{figure}

Here we chose to use the Denoising
Auto-encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is simple to train and
explain (see Figure~\ref{fig:da}, as well as
the tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides efficient inference, and yielded results
comparable or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008}. During training, a Denoising
Auto-encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Here we use the random binary masking corruption
(which sets to 0 a random subset of the inputs).
Once it is trained, in a purely unsupervised way,
its hidden units' activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (see previous section).
The SDA hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers, but it was fixed to 3 based on previous work with
SDAs on MNIST~\citep{VincentPLarochelleH2008}.
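For illustration, a minimal {\tt numpy} sketch of one such layer, with tied weights and masking corruption (the actual experiments used a different implementation; see the tutorial URL above):
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class DenoisingAutoencoder:
    def __init__(self, n_in, n_hid, rng=np.random):
        self.W = rng.uniform(-0.1, 0.1, (n_in, n_hid))  # tied weights
        self.b = np.zeros(n_hid)          # encoder bias
        self.c = np.zeros(n_in)           # decoder bias
        self.rng = rng

    def encode(self, x):                  # code y = f_theta(x)
        return sigmoid(x @ self.W + self.b)

    def step(self, x, corruption=0.2, lr=0.1):
        x_t = x * (self.rng.rand(*x.shape) > corruption)  # masking noise
        y = self.encode(x_t)
        z = sigmoid(y @ self.W.T + self.c)  # reconstruction g_theta'(y)
        dz = z - x                        # grad of cross-entropy L_H(x,z)
        dy = (dz @ self.W) * y * (1 - y)
        self.W -= lr * (np.outer(x_t, dy) + np.outer(dz, y))
        self.b -= lr * dy
        self.c -= lr * dz
\end{verbatim}
Stacking then amounts to training a second such layer on the {\tt encode} outputs of the first, and so on, before supervised fine-tuning of the whole network.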

%\vspace*{-1mm}

\begin{figure}[ht]
%\vspace*{-2mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
%\vspace*{-3mm}
\caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
on NIST, 1 on NISTP, and 2 on P07. Left: overall results
of all models, on NIST and NISTP test sets.
Right: error rates on NIST test digits only, along with the previous results from
literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
respectively based on ART, nearest neighbors, MLPs, and SVMs.}
\label{fig:error-rates-charts}
%\vspace*{-2mm}
\end{figure}

\begin{figure}[ht]
%\vspace*{-3mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
%\vspace*{-3mm}
\caption{Relative improvement in error rate due to self-taught learning.
Left: Improvement (or loss, when negative)
induced by out-of-distribution examples (perturbed data).
Right: Improvement (or loss, when negative) induced by multi-task
learning (training on all classes and testing only on either digits,
upper case, or lower case). The deep learner (SDA) benefits more from
both self-taught learning scenarios, compared to the shallow MLP.}
\label{fig:improvements-charts}
%\vspace*{-2mm}
\end{figure}
\section{Experimental Results}
%\vspace*{-2mm}

%%\vspace*{-1mm}
%\subsection{SDA vs MLP vs Humans}
%%\vspace*{-1mm}
The models are either trained on NIST (MLP0 and SDA0),
NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
on either NIST, NISTP or P07, either on the 62-class task
or on the 10-digits task. Training (including about half
for unsupervised pre-training, for DAs) on the larger
datasets takes around one day on a GPU-285.
Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
SDA2), along with the previous results on the digits NIST special database
19 test set from the literature, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor search~\citep{Cortes+al-2000},
MLPs~\citep{Oliveira+al-2002-short}, and SVMs~\citep{Milgram+al-2005}.
More detailed and complete numerical results
(figures and tables, including standard errors on the error rates) can be
found in Appendix I of the supplementary material.
The deep learner not only outperformed the shallow ones and
previously published performance (in a statistically and qualitatively
significant way) but, when trained with perturbed data,
reached human performance on both the 62-class task
and the 10-class (digits) task.
17\% error (SDA1) or 18\% error (humans) may seem large, but a large
majority of the errors from humans and from SDA1 are from out-of-context
confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
``c'' and a ``C'' are often indistinguishable).

In addition, as shown in the left of
Figure~\ref{fig:improvements-charts}, the relative improvement in error
rate brought by self-taught learning is greater for the SDA, and these
differences with the MLP are statistically and qualitatively
significant.
The left side of the figure shows the improvement to the clean
NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07).
Relative percent change is measured by taking
$100\% \times$ (original model's error / perturbed-data model's error - 1).
The right side of
Figure~\ref{fig:improvements-charts} shows the relative improvement
brought by the use of a multi-task setting, in which the same model is
trained for more classes than the target classes of interest (i.e. training
with all 62 classes when the target classes are respectively the digits,
lower-case, or upper-case characters). Again, whereas the gain from the
multi-task setting is marginal or negative for the MLP, it is substantial
for the SDA. Note that to simplify these multi-task experiments, only the original
NIST dataset is used. For example, the MLP-digits bar shows that the relative
percent improvement in MLP error rate on the NIST digits test set
is $100\% \times$ (single-task
model's error / multi-task model's error - 1). The single-task model is
trained with only 10 outputs (one per digit), seeing only digit examples,
whereas the multi-task model is trained with 62 outputs, with all 62
character classes as examples. Hence the hidden units are shared across
all tasks. For the multi-task model, the digit error rate is measured by
comparing the correct digit class with the output class associated with the
maximum conditional probability among the digit-class outputs only. The
setting is similar for the other two target classes (lower case characters
and upper case characters).
%%\vspace*{-1mm}
%\subsection{Perturbed Training Data More Helpful for SDA}
%%\vspace*{-1mm}
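For reference, this relative-change formula in executable form:
\begin{verbatim}
def relative_improvement(err_baseline, err_other):
    # 100% x (baseline error / other error - 1)
    return 100.0 * (err_baseline / err_other - 1.0)

# e.g. a model at 2.0% error vs a baseline at 2.4% gives +20%:
assert abs(relative_improvement(2.4, 2.0) - 20.0) < 1e-9
\end{verbatim}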
%%\vspace*{-1mm}
%\subsection{Multi-Task Learning Effects}
%%\vspace*{-1mm}

582 | 908 \iffalse |
As previously seen, the SDA is better able to benefit from the
transformations applied to the data than the MLP. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper-case characters (knowing that the input is one), and
recognizing lower-case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out whether deep learning can benefit more (or less) from multiple related tasks
(i.e., the multi-task setting) compared to a corresponding purely supervised
shallow learner.

We use a single-hidden-layer MLP with 1000 hidden units, and an SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower- and upper-case characters).
On the other hand, the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower-case and upper-case characters, as shown in Table~\ref{tab:multi-task}.
\fi

%\vspace*{-2mm}
\section{Conclusions and Discussion}
%\vspace*{-2mm}

We have found that the self-taught learning framework is more beneficial
to a deep learner than to a traditional shallow and purely
supervised learner. More precisely,
the answers are positive for all the questions asked in the introduction.
%\begin{itemize}

$\bullet$ %\item
{\bf Do the good results previously obtained with deep architectures on the
MNIST digits generalize to a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples}?
Yes, the SDA {\em systematically outperformed the MLP and all the previously
published results on this dataset} (the ones that we are aware of), {\em in fact reaching human-level
performance} at around 17\% error on the 62-class task and 1.4\% on the digits.

$\bullet$ %\item
{\bf To what extent do self-taught learning scenarios help deep learners,
and do they help them more than shallow supervised ones}?
We found that distorted training examples not only made the resulting
classifier better on similarly perturbed images but also on
the {\em original clean examples}, and, more importantly and as a more
novel finding, that deep architectures benefit more from such
{\em out-of-distribution} examples. MLPs were helped by perturbed training
examples when tested on perturbed input images (65\% relative improvement
on NISTP) but were only marginally helped (5\% relative improvement on all
classes) or even hurt (10\% relative loss on digits)
with respect to clean examples. On the other hand, the deep SDAs
were significantly boosted by these out-of-distribution examples.
Similarly, whereas the improvement due to the multi-task setting was marginal or
negative for the MLP (from +5.6\% to -3.6\% relative change),
it was quite significant for the SDA (from +13\% to +27\% relative change),
which may be explained by the arguments below.
%\end{itemize}

In the original self-taught learning framework~\citep{RainaR2007}, the
out-of-sample examples were used as a source of unsupervised data, and
experiments showed positive effects in a \emph{limited labeled data}
scenario. However, many of the results by \citet{RainaR2007} (who used a
shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
learning vs.\ ordinary supervised learning} diminishes as the number of labeled examples increases.
We note instead that, for deep
architectures, our experiments show that such a positive effect is achieved
even in a scenario with a \emph{large number of labeled examples},
i.e., here, the relative gain of self-taught learning is probably preserved
in the asymptotic regime.

{\bf Why would deep learners benefit more from the self-taught learning framework}?
The key idea is that the lower layers of the predictor compute a hierarchy
of features that can be shared across tasks or across variants of the
input distribution. Intermediate features that can be used in different
contexts can be estimated in a way that allows statistical strength to be
shared across tasks. Features extracted through many levels are more likely to
be abstract (as the experiments in~\citet{Goodfellow2009} suggest),
increasing the likelihood that they would be useful for a larger array
of tasks and input conditions.
Therefore, we hypothesize that both depth and unsupervised
pre-training play a part in explaining the advantages observed here, and future
experiments could attempt to tease apart these factors.
And why would deep learners benefit from the self-taught learning
scenarios even when the number of labeled examples is very large?
We hypothesize that this is related to the hypotheses studied
in~\citet{Erhan+al-2010}, where
it was found that online learning on a huge dataset did not make the
advantage of the deep learning bias vanish; a similar phenomenon
may be happening here. We hypothesize that unsupervised pre-training
of a deep hierarchy with self-taught learning initializes the
model in the basin of attraction of supervised gradient descent
that corresponds to better generalization. Furthermore, such good
basins of attraction are not discovered by pure supervised learning
(with or without self-taught settings), and more labeled examples
do not allow the model to go from the poorer basins of attraction discovered
by purely supervised shallow models to the kind of better basins associated
with deep learning and self-taught learning.

A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
can be executed on-line at {\tt http://deep.host22.com}.

%\newpage
{
\bibliography{strings,strings-short,strings-shorter,ift6266_ml,specials,aigaion-shorter}
%\bibliographystyle{plainnat}
\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
}

\end{document}