\documentclass{article} % For LaTeX2e
\usepackage{nips10submit_e,times}

\usepackage{amsthm,amsmath,amssymb,bbold,bbm}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

\title{Deep Self-Taught Learning for Handwritten Character Recognition}
\author{The IFT6266 Gang}

\begin{document}

%\makeanontitle
\maketitle

\vspace*{-2mm}
\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has
demonstrated the importance of learning algorithms for deep
architectures, i.e., function classes obtained by composing multiple
non-linear transformations. Self-taught learning (exploiting unlabeled
examples or examples from other distributions) has already been applied
to deep learners, but mostly to show the advantage of unlabeled
examples. Here we explore the advantage brought by {\em out-of-distribution
examples} and show that {\em deep learners benefit more from them than a
corresponding shallow learner}, in the area
of handwritten character recognition. In fact, we show that they reach
human-level performance on both handwritten digit classification and
62-class handwritten character recognition. For this purpose we
developed a powerful generator of stochastic variations and noise
processes for character images, including not only affine transformations but
also slant, local elastic deformations, changes in thickness, background
images, grey level changes, contrast, occlusion, and various types of pixel and
spatially correlated noise. The out-of-distribution examples are
obtained by training with these highly distorted images or
by including object classes different from those in the target test set.
\end{abstract}
\vspace*{-2mm}

\section{Introduction}
\vspace*{-1mm}

Deep Learning has emerged as a promising new area of research in
statistical machine learning (see~\citet{Bengio-2009} for a review).
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand.
This is in great part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation of the raw visual input. In fact,
it was recently found that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex)~\citep{HonglakL2008}, and that they become more and
more invariant to factors of variation (such as camera movement) in
higher layers~\citep{Goodfellow2009}.
Learning a hierarchy of features increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variation present in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}. One of these layer initialization techniques,
applied here, is the Denoising
Auto-Encoder~(DAE)~\citep{VincentPLarochelleH2008-very-small}, which
performed similarly or better than previously proposed Restricted Boltzmann
Machines in terms of unsupervised extraction of a hierarchy of features
useful for classification. The principle is that each layer, starting from
the bottom, is trained to encode its input (the output of the previous
layer) and to reconstruct it from a corrupted version of it. After this
unsupervised initialization, the stack of denoising auto-encoders can be
converted into a deep supervised feedforward neural network and fine-tuned by
stochastic gradient descent.

Self-taught learning~\citep{RainaR2007} is a paradigm that combines principles
of semi-supervised and multi-task learning: the learner can exploit examples
that are unlabeled and/or come from a distribution different from the target
distribution, e.g., from classes other than those of interest. Whereas
it has already been shown that deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small}
and multi-task learning, not much has been done yet to explore the impact
of {\em out-of-distribution} examples and of the multi-task setting
(but see~\citep{CollobertR2008}). In particular the {\em relative
advantage} of deep learning for this setting has not been evaluated.

In this paper we ask the following questions:

%\begin{enumerate}
$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digit images generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?

$\bullet$ %\item
To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifiers better not only on similarly perturbed images but also on
the {\em original clean examples}?

$\bullet$ %\item
Do deep architectures {\em benefit more from such out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit more
from training with similar but different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
%\end{enumerate}

The experimental results presented here provide positive evidence towards all of these questions.

\vspace*{-1mm}
\section{Perturbation and Transformation of Character Images}
\vspace*{-1mm}

This section describes the different transformations we used to stochastically
transform source images in order to obtain data. More details can
be found in this technical report~\citep{ift6266-tr-anonymous}.
The code for these transformations (mostly python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.

There are two main parts in the pipeline. The first one,
from slant to pinch below, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
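For concreteness, the shared module interface can be sketched in Python as
follows (a minimal sketch for exposition; the names and signatures are ours,
not the actual pipeline code):
\begin{verbatim}
import numpy


def apply_pipeline(image, modules, complexity, rng):
    # image: 32x32 array with values in [0, 1]; modules: ordered list of
    # callables, each taking (image, complexity, rng) and returning an image.
    out = image
    for module in modules:
        out = module(out, complexity, rng)
    return numpy.clip(out, 0.0, 1.0)
\end{verbatim}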

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/transfo.png}}\\
\caption{Illustration of each transformation applied alone to the same image
of an upper-case h (top left). First row (from left to right): original image, slant,
thickness, affine transformation (translation, rotation, shear),
local elastic deformation; second row (from left to right):
pinch, motion blur, occlusion, pixel permutation, Gaussian noise; third row (from left to right):
background image, salt and pepper noise, spatially Gaussian noise, scratches,
grey level and contrast changes.}
\label{fig:transfo}
\end{figure}

{\large\bf Transformations}

\vspace*{2mm}

{\bf Slant.}
We mimic slant by shifting each row of the image
proportionally to its height: $shift = round(slant \times height)$.
The $slant$ coefficient can be negative or positive with equal probability
and its value is randomly sampled according to the complexity level:
$slant \sim U[0,complexity]$, so the
maximum displacement for the lowest or highest pixel line is
$round(complexity \times 32)$.\\
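A minimal NumPy sketch of this module (assuming a $32\times32$ array and a
{\tt numpy.random.RandomState} generator {\tt rng}; simplified relative to
the actual code):
\begin{verbatim}
import numpy


def slant(image, complexity, rng):
    h, w = image.shape
    s = rng.uniform(0, complexity) * rng.choice([-1, 1])
    out = numpy.zeros_like(image)
    for y in range(h):
        shift = int(round(s * y))        # shift grows with the row index
        for x in range(w):
            if 0 <= x - shift < w:
                out[y, x] = image[y, x - shift]
    return out
\end{verbatim}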
{\bf Thickness.}
Morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
are applied. The neighborhood of each pixel is multiplied
element-wise with a {\em structuring element} matrix.
The pixel value is replaced by the maximum or the minimum of the resulting
matrix, respectively for dilation or erosion. Ten different structuring elements with
increasing dimensions (largest is $5\times5$) were used. For each image, we
randomly sample the operator type (dilation or erosion) with equal probability and one structuring
element from a subset of the $n$ smallest structuring elements where $n$ is
$round(10 \times complexity)$ for dilation and $round(6 \times complexity)$
for erosion. A neutral element is always present in the set, and if it is
chosen no transformation is applied. Erosion allows only the six
smallest structuring elements because when the character is too thin it may
be completely erased.\\
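A sketch of this module using SciPy's grey-scale morphology (the
structuring-element shapes listed below are illustrative, not the exact ten
we used):
\begin{verbatim}
import numpy
from scipy import ndimage


def thickness(image, complexity, rng):
    # Ten structuring elements of increasing size (largest 5x5).
    sizes = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 3),
             (3, 2), (3, 3), (3, 4), (4, 4), (5, 5)]
    dilate = rng.uniform(0, 1) < 0.5
    n = int(round((10 if dilate else 6) * complexity))
    if n == 0:
        return image                     # neutral element: no change
    elem = numpy.ones(sizes[rng.randint(0, n)])
    if dilate:
        return ndimage.grey_dilation(image, footprint=elem)
    return ndimage.grey_erosion(image, footprint=elem)
\end{verbatim}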
{\bf Affine Transformations.}
A $2 \times 3$ affine transform matrix (with
6 parameters $(a,b,c,d,e,f)$) is sampled according to the $complexity$ level.
Each pixel $(x,y)$ of the output image takes the value of the pixel
nearest to $(ax+by+c,dx+ey+f)$ in the input image. This
produces scaling, translation, rotation and shearing.
The marginal distributions of $(a,b,c,d,e,f)$ have been tuned by hand to
forbid important rotations (not to confuse classes) but to give good
variability of the transformation: $a$ and $d$ $\sim U[1-3 \times
complexity,1+3 \times complexity]$, $b$ and $e$ $\sim U[-3 \times complexity,3
\times complexity]$ and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times
complexity]$.\\
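A sketch of this module with nearest-pixel sampling (for exposition; the
actual code differs in details):
\begin{verbatim}
import numpy


def affine(image, complexity, rng):
    h, w = image.shape
    c3, c4 = 3 * complexity, 4 * complexity
    a = rng.uniform(1 - c3, 1 + c3)
    d = rng.uniform(1 - c3, 1 + c3)
    b = rng.uniform(-c3, c3)
    e = rng.uniform(-c3, c3)
    c = rng.uniform(-c4, c4)
    f = rng.uniform(-c4, c4)
    out = numpy.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # nearest input pixel at (ax + by + c, dx + ey + f)
            xs = int(round(a * x + b * y + c))
            ys = int(round(d * x + e * y + f))
            if 0 <= xs < w and 0 <= ys < h:
                out[y, x] = image[ys, xs]
    return out
\end{verbatim}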
{\bf Local Elastic Deformations.}
This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03-short},
which provides more details.
Two ``displacement'' fields are generated and applied, for horizontal
and vertical displacements of pixels.
To generate a pixel in either field, first a value between -1 and 1 is
chosen from a uniform distribution. Then all the pixels, in both fields, are
multiplied by a constant $\alpha$ which controls the intensity of the
displacements (larger $\alpha$ translates into larger wiggles).
Each field is convolved with a Gaussian 2D kernel of
standard deviation $\sigma$. Visually, this results in a blur.
$\alpha = \sqrt[3]{complexity} \times 10.0$ and $\sigma = 10 - 7 \times
\sqrt[3]{complexity}$.\\
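A sketch of this module (nearest-neighbour resampling stands in for the
actual implementation details):
\begin{verbatim}
import numpy
from scipy.ndimage import gaussian_filter


def elastic(image, complexity, rng):
    h, w = image.shape
    alpha = 10.0 * complexity ** (1.0 / 3.0)
    sigma = 10.0 - 7.0 * complexity ** (1.0 / 3.0)
    # smoothed random displacement fields for x and y
    dx = gaussian_filter(alpha * rng.uniform(-1, 1, (h, w)), sigma)
    dy = gaussian_filter(alpha * rng.uniform(-1, 1, (h, w)), sigma)
    out = numpy.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xs = int(round(x + dx[y, x]))
            ys = int(round(y + dy[y, x]))
            if 0 <= xs < w and 0 <= ys < h:
                out[y, x] = image[ys, xs]
    return out
\end{verbatim}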
{\bf Pinch.}
This GIMP filter is named ``Whirl and
pinch'', but whirl was set to 0. A pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface''~\citep{GIMP-manual}.
For a square input image, think of drawing a circle of
radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
that disk (region inside circle) will have its value recalculated by taking
the value of another ``source'' pixel in the original image. The position of
that source pixel is found on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1$ to be the distance between $P$
and $C$. $d_2$ is given by $d_2 = \sin(\frac{\pi{}d_1}{2r})^{-pinch} \times
d_1$, where $pinch$ is a parameter to the filter.
The actual value is given by bilinear interpolation considering the pixels
around the (non-integer) source position thus found.
Here $pinch \sim U[-complexity, 0.7 \times complexity]$.
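The mapping from $d_1$ to $d_2$ can be written directly as:
\begin{verbatim}
import math


def pinch_source_distance(d1, r, pinch):
    # d2 = sin(pi * d1 / (2r))^(-pinch) * d1, for points inside the disk
    if d1 <= 0 or d1 >= r:
        return d1                        # outside the disk: unchanged
    return math.sin(math.pi * d1 / (2.0 * r)) ** (-pinch) * d1
\end{verbatim}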

\vspace*{1mm}

{\large\bf Injecting Noise}

\vspace*{1mm}

{\bf Motion Blur.}
This GIMP filter is a ``linear motion blur'' in GIMP
terminology, with two parameters, $length$ and $angle$. The value of
a pixel in the final image is approximately the mean value of the first $length$ pixels
found by moving in the $angle$ direction.
Here $angle \sim U[0,360]$ degrees, and $length \sim {\rm Normal}(0,(3 \times complexity)^2)$.\\
{\bf Occlusion.}
This filter selects a random rectangle from an {\em occluder} character
image and places it over the original {\em occluded} character
image. Pixels are combined by taking $\max(occluder, occluded)$.
The rectangle corners
are sampled so that larger complexity gives larger rectangles.
The destination position in the occluded image is also sampled
according to a normal distribution (see more details in~\citet{ift6266-tr-anonymous}).
It has a probability of not being applied at all of 60\%.\\
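A simplified sketch (a fixed $8\times8$ patch and uniform positions stand in
for the complexity-scaled rectangle and the normally distributed destination
of the actual module):
\begin{verbatim}
import numpy


def occlude(occluded, occluder, rng):
    if rng.uniform(0, 1) < 0.6:
        return occluded                  # not applied 60% of the time
    h, w = occluded.shape
    ph, pw = 8, 8                        # fixed patch size, for illustration
    sy = rng.randint(0, h - ph + 1)
    sx = rng.randint(0, w - pw + 1)
    dy = rng.randint(0, h - ph + 1)
    dx = rng.randint(0, w - pw + 1)
    out = occluded.copy()
    out[dy:dy + ph, dx:dx + pw] = numpy.maximum(
        out[dy:dy + ph, dx:dx + pw], occluder[sy:sy + ph, sx:sx + pw])
    return out
\end{verbatim}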
{\bf Pixel Permutation.}
This filter permutes neighbouring pixels. It first selects a fraction
$\frac{complexity}{3}$ of the pixels randomly in the image. Each of them is then
sequentially exchanged with another pixel in its $V4$ neighbourhood. The numbers
of exchanges to the left, right, top and bottom are equal, or differ
by at most 1 if the number of selected pixels is not a multiple of 4.
It has a probability of not being applied at all of 80\%.\\
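A sketch of this module (the reading of $\frac{complexity}{3}$ as a fraction
of the image pixels follows the description above):
\begin{verbatim}
def permute_pixels(image, complexity, rng):
    if rng.uniform(0, 1) < 0.8:
        return image                     # not applied 80% of the time
    out = image.copy()
    h, w = out.shape
    n = int(complexity / 3.0 * h * w)    # number of pixels to disturb
    offsets = [(0, -1), (0, 1), (-1, 0), (1, 0)]   # V4 neighbourhood
    for i in range(n):
        y, x = rng.randint(0, h), rng.randint(0, w)
        oy, ox = offsets[i % 4]          # balanced over the four directions
        y2, x2 = y + oy, x + ox
        if 0 <= y2 < h and 0 <= x2 < w:
            out[y, x], out[y2, x2] = out[y2, x2], out[y, x]
    return out
\end{verbatim}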
{\bf Gaussian Noise.}
This filter simply adds, to each pixel of the image independently, a
noise $\sim Normal(0,(\frac{complexity}{10})^2)$.
It has a probability of not being applied at all of 70\%.\\
{\bf Background Images.}
Following~\citet{Larochelle-jmlr-2009}, this transformation adds a random
background behind the letter. The background is chosen by first selecting,
at random, an image from a set of images. Then a 32$\times$32 sub-region
of that image is chosen as the background image (by sampling position
uniformly while making sure not to cross image borders).
To combine the original letter image and the background image, contrast
adjustments are made. We first get the maximal values (i.e. maximal
intensity) for both the original image and the background image, $maximage$
and $maxbg$. We also have a parameter $contrast \sim U[complexity, 1]$.
Each background pixel value is multiplied by $\frac{\max(maximage -
contrast, 0)}{maxbg}$ (higher contrast yields a darker
background). The output image pixels are $\max(background, original)$.\\
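A sketch of the combination step (assuming {\tt bg} is the already-cropped
$32\times32$ background):
\begin{verbatim}
import numpy


def add_background(image, bg, complexity, rng):
    contrast = rng.uniform(complexity, 1.0)
    maximage, maxbg = image.max(), bg.max()
    scale = max(maximage - contrast, 0.0) / maxbg if maxbg > 0 else 0.0
    return numpy.maximum(bg * scale, image)
\end{verbatim}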
{\bf Salt and Pepper Noise.}
This filter adds noise $\sim U[0,1]$ to random subsets of pixels.
The fraction of selected pixels is $0.2 \times complexity$.
This filter has a probability of not being applied at all of 75\%.\\
{\bf Spatially Gaussian Noise.}
Different regions of the image are spatially smoothed.
The image is convolved with a symmetric Gaussian kernel of
size and variance chosen uniformly in the ranges $[12,12 + 20 \times
complexity]$ and $[2,2 + 6 \times complexity]$. The result is normalized
between $0$ and $1$. We also create a symmetric averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will be
averaging centers between the original image and the filtered one. We
initialize to zero a mask matrix of the image size. For each selected pixel
we add to the mask the averaging window centered at it. The final image is
computed from the following element-wise operation: $\frac{image + filtered
\times mask}{mask+1}$, where $filtered$ denotes the smoothed image.
This filter has a probability of not being applied at all of 75\%.\\
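A sketch of this module (the $5\times5$ averaging window below is
illustrative; the actual window has the kernel size):
\begin{verbatim}
import numpy
from scipy.ndimage import gaussian_filter


def spatially_gaussian(image, complexity, rng):
    if rng.uniform(0, 1) < 0.75:
        return image                     # not applied 75% of the time
    h, w = image.shape
    sigma = rng.uniform(2.0, 2.0 + 6.0 * complexity)
    filtered = gaussian_filter(image, sigma)
    span = max(filtered.max() - filtered.min(), 1e-8)
    filtered = (filtered - filtered.min()) / span    # normalize to [0, 1]
    yy, xx = numpy.mgrid[-2:3, -2:3]     # 5x5 window peaked at its center
    window = numpy.exp(-(xx ** 2 + yy ** 2) / 4.0)
    mask = numpy.zeros_like(image)
    n_centers = rng.randint(3, 4 + int(10 * complexity))
    for _ in range(n_centers):
        cy = rng.randint(2, h - 2)
        cx = rng.randint(2, w - 2)
        mask[cy - 2:cy + 3, cx - 2:cx + 3] += window
    return (image + filtered * mask) / (mask + 1.0)
\end{verbatim}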
{\bf Scratches.}
The scratches module places line-like white patches on the image. The
lines are heavily transformed images of the digit ``1'' (one), chosen
at random among five thousand such 1 images. The 1 image is
randomly cropped and rotated by an angle $\sim Normal(0,(100 \times
complexity)^2)$, using bi-cubic interpolation.
Two passes of a grey-scale morphological erosion filter
are applied, reducing the width of the line
by an amount controlled by $complexity$.
This filter is applied only 15\% of the time. When it is applied, 50\%
of the time, only one patch image is generated and applied. In 30\% of
cases, two patches are generated, and otherwise three patches are
generated. The patch is applied by taking the maximal value on any given
patch or the original image, for each of the $32\times32$ pixel locations.\\
{\bf Grey Level and Contrast Changes.}
This filter changes the contrast and may invert the image polarity (white
on black to black on white). The contrast $C$ is defined here as the
difference between the maximum and the minimum pixel value of the image.
Contrast $\sim U[1-0.85 \times complexity,1]$ (so contrast $\geq 0.15$).
The image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with $0.5$ probability.
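A sketch of this module:
\begin{verbatim}
def contrast_change(image, complexity, rng):
    c = rng.uniform(1.0 - 0.85 * complexity, 1.0)   # contrast C >= 0.15
    lo, hi = image.min(), image.max()
    out = (image - lo) / max(hi - lo, 1e-8)         # stretch to [0, 1]
    out = out * c + (1.0 - c) / 2.0          # into [(1-C)/2, 1-(1-C)/2]
    if rng.uniform(0, 1) < 0.5:
        out = 1.0 - out                      # invert polarity
    return out
\end{verbatim}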

\iffalse
\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case t
(the upper left image). Each image in the pipeline (going from
left to right, first top line, then bottom line) shows the result
of applying one of the modules in the pipeline. The last image
(bottom right) is used as training example.}
\label{fig:pipeline}
\end{figure}
\fi

\vspace*{-1mm}
\section{Experimental Setup}
\vspace*{-1mm}

Whereas much previous work on deep learning algorithms had been performed on
the MNIST digits classification task~\citep{Hinton06,ranzato-07,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\citep{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}, we want
to focus here on the case of much larger training sets, from 10
to 1000 times larger. The larger datasets are obtained by first sampling from
a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
and {\bf OCR data} (scanned machine printed characters). Once a character
is sampled from one of these sources (chosen randomly), a pipeline of
the above transformations and/or noise processes is applied to the
image.

\vspace*{-1mm}
\subsection{Data Sources}
\vspace*{-1mm}

%\begin{itemize}
%\item
{\bf NIST.}
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002,Milgram+al-2005}.
The dataset is composed of 814255 digits and characters (upper and lower case), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 series of different complexity.
The fourth series, $hsf_4$, experimentally recognized to be the most difficult one, is recommended
by NIST as a testing set and is used in our work and some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002,Milgram+al-2005}
for that purpose. We randomly split the remainder into a training set and a validation set for
model selection. The sizes of these data sets are: 651668 for training, 80000 for validation,
and 82587 for testing.
The results reported by previous work on that dataset mostly use only the digits.
Here we use all the classes both in the training and testing phases. This is especially
useful to estimate the effect of a multi-task setting.
Note that the distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a uniform distribution
of letters in the test set, but not in the training set (which is more like the natural distribution
of letters in text).

%\item
{\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from {\tt http://anonymous.url.net}.
%real address {\tt http://cg.scs.carleton.ca/~luc/freefonts.html}
In addition to Windows 7's fonts, this adds up to a total of $9817$ different fonts that we can choose uniformly from.
The {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.

%\item
{\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator library) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the data generated, many different fonts are used for generating the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character with a complexity
depending on the value of the complexity parameter provided by the user of the data source. Two levels of complexity are
allowed and can be controlled via an easy-to-use facade class.

%\item
{\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters (from various documents and books) were included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at the University of Kaiserslautern
({\tt http://www.iupr.com}), and which will be publicly released.
%\end{itemize}

\vspace*{-1mm}
\subsection{Data Sets}
\vspace*{-1mm}

All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
%\begin{itemize}

%\item
{\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}.

%\item
{\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the above transformation pipeline.
For each new example to generate, a source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a complexity in the range $[0,0.7]$.
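The generation procedure can be sketched as follows (the {\tt draw}
interface on the source objects is hypothetical, for exposition):
\begin{verbatim}
def sample_p07_example(rng, fonts, captchas, ocr, nist, pipeline):
    # source proportions: 10% fonts, 25% captchas, 25% OCR, 40% NIST
    u = rng.uniform(0, 1)
    if u < 0.10:
        image, label = fonts.draw(rng)
    elif u < 0.35:
        image, label = captchas.draw(rng)
    elif u < 0.60:
        image, label = ocr.draw(rng)
    else:
        image, label = nist.draw(rng)
    for module in pipeline:              # all transformations, in order,
        image = module(image, rng.uniform(0.0, 0.7), rng)  # each with its
    return image, label                  # own complexity in [0, 0.7]
\end{verbatim}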

%\item
{\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same source proportions)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
%\end{itemize}

\vspace*{-1mm}
\subsection{Models and their Hyperparameters}
\vspace*{-1mm}

All hyper-parameters are selected based on performance on the NISTP validation set.

{\bf Multi-Layer Perceptrons (MLP).}
Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used.
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P(class \mid image)$.
The hyper-parameters are the following: number of hidden units, taken in
$\{300,500,800,1000,1500\}$. The optimization procedure is as follows. Training
examples are presented in minibatches of size 20. A constant learning
rate is chosen in $\{10^{-3}, 0.01, 0.025, 0.075, 0.1, 0.5\}$
through preliminary experiments, and 0.1 was selected.
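For reference, the model's forward computation on one example amounts to
the following (a NumPy sketch, not our training code):
\begin{verbatim}
import numpy


def mlp_forward(x, W1, b1, W2, b2):
    # tanh hidden layer, then softmax output estimating P(class | image)
    h = numpy.tanh(numpy.dot(x, W1) + b1)
    a = numpy.dot(h, W2) + b2
    e = numpy.exp(a - a.max())           # numerically stable softmax
    return e / e.sum()
\end{verbatim}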
464
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
425 |
484
9a757d565e46
reduction de taille
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
483
diff
changeset
|

{\bf Stacked Denoising Auto-Encoders (SDAE).}
Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07,Bengio-nips-2006},
enabling better generalization, apparently by setting parameters in a
basin of attraction of supervised gradient descent that yields better
generalization~\citep{Erhan+al-2010}. It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (as in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).
Here we chose to use the Denoising
Auto-Encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is very simple to train and
to explain (see the tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides immediate and efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008}. During training, a Denoising
Auto-Encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Once it is trained, the activations of its hidden units can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (described above).
The SDA hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers, but it was fixed to 3 based on previous work with
stacked denoising auto-encoders on MNIST~\citep{VincentPLarochelleH2008}.
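
As a rough sketch of this procedure (again Python/NumPy and illustrative
only, not our experimental code; the tied encoder/decoder weights and the
cross-entropy reconstruction loss on inputs scaled to $[0,1]$ are common
choices assumed here), one pre-training step of a denoising auto-encoder
layer with masking noise could look like:
{\small
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_pretrain_step(X, W, b_hid, b_vis, corrupt=0.2, lr=0.1):
    # Masking noise: zero a randomly selected fixed proportion of the inputs.
    Xc = X * rng.binomial(1, 1.0 - corrupt, X.shape)
    H = sigmoid(Xc.dot(W) + b_hid)          # code (hidden representation)
    R = sigmoid(H.dot(W.T) + b_vis)         # reconstruction of the clean input
    dR = (R - X) / len(X)                   # grad of cross-entropy loss wrt the
                                            # pre-sigmoid reconstruction
    dH = dR.dot(W) * H * (1.0 - H)          # backprop into the encoder
    W -= lr * (Xc.T.dot(dH) + dR.T.dot(H))  # tied weights: encoder + decoder grads
    b_vis -= lr * dR.sum(axis=0)
    b_hid -= lr * dH.sum(axis=0)
    return H  # once trained, H serves as the input for training the next layer
\end{verbatim}
}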

\vspace*{-1mm}
\section{Experimental Results}

%\vspace*{-1mm}
%\subsection{SDA vs MLP vs Humans}
%\vspace*{-1mm}

We compare the best MLP (according to validation set error) that we found against
the best SDA (again according to validation set error), along with a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service\footnote{http://mturk.com}.
%AMT users are paid small amounts
%of money to perform tasks for which human intelligence is required.
%Mechanical Turk has been used extensively in natural language
%processing \citep{SnowEtAl2008} and vision
%\citep{SorokinAndForsyth2008,whitehill09}.
AMT users were presented
with 10 character images and asked to type the 10 corresponding ASCII
characters. They were forced to make a hard choice among the
62 or 10 character classes (all classes or digits only).
Three users classified each image, allowing us
to estimate inter-human variability (shown as $\pm$ in parentheses below).
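
A minimal sketch of how such an estimate can be computed (Python/NumPy;
treating each user's test error rate as one measurement and reporting the
mean and spread across the three users is an assumption of this
illustration, not a documented detail of the protocol):
{\small
\begin{verbatim}
import numpy as np

def human_error_estimate(labels_by_user, truth):
    # labels_by_user: (3, n) hard choices typed by the three AMT users;
    # truth: (n,) reference labels for the same n images.
    errs = np.mean(labels_by_user != truth[None, :], axis=1)  # per-user error
    return errs.mean(), errs.std()          # reported as mean +/- spread
\end{verbatim}
}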

Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
comparing Humans, three MLPs (MLP0, MLP1, MLP2) and three SDAs (SDA0, SDA1,
SDA2), along with previous results from the literature on the NIST special
database 19 digits test set, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor
search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002}, and
SVMs~\citep{Milgram+al-2005}. More detailed and complete numerical results
(figures and tables, including standard errors on the error rates) can be
found in the supplementary material. The three kinds of models differ in the
training sets used: NIST only (MLP0, SDA0), NISTP (MLP1, SDA1), or P07
(MLP2, SDA2). The deep learner not only outperformed the shallow ones and
previously published performance (in a statistically and qualitatively
significant way) but also reached human performance on both the 62-class task
and the 10-class (digits) task. In addition, as shown on the left of
Figure~\ref{fig:improvements-charts}, the relative improvement in error
rate brought by self-taught learning is greater for the SDA, and these
differences with the MLP are statistically and qualitatively
significant. The left side of the figure shows the improvement to the clean
NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07). The right side of
Figure~\ref{fig:improvements-charts} shows the relative improvement
brought by the use of a multi-task setting, in which the same model is
trained for more classes than the target classes of interest (i.e. training
with all 62 classes when the target classes are respectively the digits,
lower-case, or upper-case characters). Again, whereas the gain from the
multi-task setting is marginal or negative for the MLP, it is substantial
for the SDA. Note that for these multi-task experiments, only the original
NIST dataset is used. For example, the MLP-digits bar shows the relative
improvement in MLP error rate on the NIST digits test set (1 $-$ multi-task
model's error / single-task model's error). The single-task model is
trained with only 10 outputs (one per digit), seeing only digit examples,
whereas the multi-task model is trained with 62 outputs, with all 62
character classes as examples. Hence the hidden units are shared across
all tasks. For the multi-task model, the digit error rate is measured by
comparing the correct digit class with the output class associated with the
maximum conditional probability among the digit class outputs only. The
setting is similar for the other two target classes (lower case characters
and upper case characters).
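
In sketch form (Python/NumPy; illustrative only, and the assumption that
the 10 digit outputs occupy the first 10 of the 62 output positions is
ours), the restricted prediction and the relative improvement shown by
each bar can be computed as:
{\small
\begin{verbatim}
import numpy as np

DIGITS = np.arange(10)   # indices of the digit outputs among the 62 classes

def digit_error_rate(P, y):
    # P: (n, 62) conditional class probabilities from the multi-task model;
    # y: true digit labels in 0..9.  The prediction is the class with the
    # maximum conditional probability among the digit class outputs only.
    pred = DIGITS[np.argmax(P[:, DIGITS], axis=1)]
    return np.mean(pred != y)

def relative_improvement(single_task_err, multi_task_err):
    # the quantity shown by each bar:
    # 1 - multi-task error / single-task error
    return 1.0 - multi_task_err / single_task_err
\end{verbatim}
}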

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}\\
\caption{Left: overall results; error bars indicate a 95\% confidence interval.
Right: error rates on NIST test digits only, with results from the literature.}
\label{fig:error-rates-charts}
\end{figure}

%\vspace*{-1mm}
%\subsection{Perturbed Training Data More Helpful for SDAE}
%\vspace*{-1mm}

%\vspace*{-1mm}
%\subsection{Multi-Task Learning Effects}
%\vspace*{-1mm}

\iffalse
As previously seen, the SDA is better able than the MLP to benefit from the
transformations applied to the data. In this experiment we
define three tasks: recognizing digits (knowing that the input is a digit),
recognizing upper case characters (knowing that the input is one), and
recognizing lower case characters (knowing that the input is one). We
consider the digit classification task as the target task and we want to
evaluate whether training with the other tasks can help or hurt, and
whether the effect is different for MLPs versus SDAs. The goal is to find
out if deep learning can benefit more (or less) from multiple related tasks
(i.e. the multi-task setting) compared to a corresponding purely supervised
shallow learner.

We use a single hidden layer MLP with 1000 hidden units, and an SDA
with 3 hidden layers (1000 hidden units per layer), pre-trained and
fine-tuned on NIST.

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
\fi

\begin{figure}[h]
\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}\\
\caption{Relative improvement in error rate due to self-taught learning.
Left: Improvement (or loss, when negative)
induced by out-of-distribution examples (perturbed data).
Right: Improvement (or loss, when negative) induced by multi-task
learning (training on all classes and testing only on either digits,
upper case, or lower case). The deep learner (SDA) benefits more from
both self-taught learning scenarios, compared to the shallow MLP.}
\label{fig:improvements-charts}
\end{figure}

\vspace*{-1mm}
\section{Conclusions}
\vspace*{-1mm}

The conclusions are positive for all the questions asked in the introduction.
%\begin{itemize}

$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digits generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?
Yes, the SDA systematically outperformed the MLP and all the previously
published results on this dataset (as far as we know), in fact reaching human-level
performance.

$\bullet$ %\item
To what extent does the perturbation of input images (e.g. adding
noise, affine transformations, background images) make the resulting
classifier better not only on similarly perturbed images but also on
the {\em original clean examples}? Do deep architectures benefit more from such {\em out-of-distribution}
examples, i.e. do they benefit more from the self-taught learning~\citep{RainaR2007} framework?
MLPs were helped by perturbed training examples when tested on perturbed input images,
but only marginally so with respect to clean examples. On the other hand, the deep SDAs
were very significantly boosted by these out-of-distribution examples.

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit more
from training with similar but different classes (i.e. a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
Whereas the improvement due to the multi-task setting was marginal or
negative for the MLP, it was very significant for the SDA.
%\end{itemize}

A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
can be run on-line at {\tt http://deep.host22.com}.

\newpage
{
\bibliography{strings,strings-short,strings-shorter,ift6266_ml,aigaion-shorter,specials}
%\bibliographystyle{plainnat}
\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
}

\end{document}