% Source: Mercurial repository ift6266, writeup/nips2010_submission.tex @ changeset 554:e95395f51d72 ("minor")
% author: Yoshua Bengio <bengioy@iro.umontreal.ca>
% date: Wed, 02 Jun 2010 18:17:52 -0400
\documentclass{article} % For LaTeX2e
\usepackage{nips10submit_e,times}

\usepackage{amsthm,amsmath,bbm}
\usepackage[psamsfonts]{amssymb}
\usepackage{algorithm,algorithmic}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure}
\usepackage[numbers]{natbib}

%\setlength\parindent{0mm}

\title{Deep Self-Taught Learning for Handwritten Character Recognition}
\author{The IFT6266 Gang}

\begin{document}

%\makeanontitle
\maketitle

\vspace*{-2mm}
\begin{abstract}
Recent theoretical and empirical work in statistical machine learning has demonstrated the importance of learning algorithms for deep architectures, i.e., function classes obtained by composing multiple non-linear transformations. Self-taught learning (exploiting unlabeled examples or examples from other distributions) has already been applied to deep learners, but mostly to show the advantage of unlabeled examples. Here we explore the advantage brought by {\em out-of-distribution examples} and show that {\em deep learners benefit more from them than a corresponding shallow learner}, in the area of handwritten character recognition. In fact, we show that they reach human-level performance on both handwritten digit classification and 62-class handwritten character recognition. For this purpose we developed a powerful generator of stochastic variations and noise processes for character images, including not only affine transformations but also slant, local elastic deformations, changes in thickness, background images, grey level changes, contrast, occlusion, and various types of noise. The out-of-distribution examples are obtained from these highly distorted images or by including examples of object classes different from those in the target test set.
\end{abstract}
\vspace*{-2mm}

\section{Introduction}
\vspace*{-1mm}

Deep Learning has emerged as a promising new area of research in
statistical machine learning (see~\citet{Bengio-2009} for a review).
Learning algorithms for deep architectures are centered on the learning
of useful representations of data, which are better suited to the task at hand.
This is in great part inspired by observations of the mammalian visual cortex,
which consists of a chain of processing elements, each of which is associated with a
different representation of the raw visual input. In fact,
it was found recently that the features learnt in deep architectures resemble
those observed in the first two of these stages (in areas V1 and V2
of visual cortex)~\citep{HonglakL2008}, and that they become more and
more invariant to factors of variation (such as camera movement) in
higher layers~\citep{Goodfellow2009}.
Learning a hierarchy of features increases the
ease and practicality of developing representations that are at once
tailored to specific tasks, yet are able to borrow statistical strength
from other related tasks (e.g., modeling different kinds of objects). Finally, learning the
feature representation can lead to higher-level (more abstract, more
general) features that are more robust to unanticipated sources of
variance extant in real data.

Whereas a deep architecture can in principle be more powerful than a
shallow one in terms of representation, depth appears to render the
training problem more difficult in terms of optimization and local minima.
It is also only recently that successful algorithms were proposed to
overcome some of these difficulties. All are based on unsupervised
learning, often in a greedy layer-wise ``unsupervised pre-training''
stage~\citep{Bengio-2009}. One of these layer initialization techniques,
applied here, is the Denoising
Auto-encoder~(DA)~\citep{VincentPLarochelleH2008-very-small} (see Figure~\ref{fig:da}),
which
performed similarly or better than previously proposed Restricted Boltzmann
Machines in terms of unsupervised extraction of a hierarchy of features
useful for classification. The principle is that each layer, starting from
the bottom, is trained to encode its input (the output of the previous
layer) and to reconstruct it from a corrupted version. After this
unsupervised initialization, the stack of DAs can be
converted into a deep supervised feedforward neural network and fine-tuned by
stochastic gradient descent.

Self-taught learning~\citep{RainaR2007} is a paradigm that combines principles
of semi-supervised and multi-task learning: the learner can exploit examples
that are unlabeled and/or come from a distribution different from the target
distribution, e.g., from other classes than those of interest.
It has already been shown that deep learners can clearly take advantage of
unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small},
but more needs to be done to explore the impact
of {\em out-of-distribution} examples and of the multi-task setting
(one exception is~\citep{CollobertR2008}, which uses very different kinds
of learning algorithms). In particular, the {\em relative
advantage} of deep learning for these settings has not been evaluated.
The hypothesis explored here is that a deep hierarchy of features
may be better able to provide sharing of statistical strength
between different regions in input space or different tasks,
as discussed in the conclusion.

In this paper we ask the following questions:

%\begin{enumerate}
$\bullet$ %\item
Do the good results previously obtained with deep architectures on the
MNIST digit images generalize to the setting of a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples?

$\bullet$ %\item
To what extent does the perturbation of input images (e.g., adding
noise, affine transformations, background images) make the resulting
classifiers better not only on similarly perturbed images but also on
the {\em original clean examples}?

$\bullet$ %\item
Do deep architectures {\em benefit more from such out-of-distribution}
examples, i.e., do they benefit more from the self-taught learning~\citep{RainaR2007} framework?

$\bullet$ %\item
Similarly, does the feature learning step in deep learning algorithms benefit more
from training with moderately different classes (i.e., a multi-task learning scenario) than
a corresponding shallow and purely supervised architecture?
%\end{enumerate}

Our experimental results provide positive evidence towards all of these questions.
To achieve these results, we introduce in the next section a sophisticated system
for stochastically transforming character images. The conclusion discusses
the more general question of why deep learners may benefit so much from
the self-taught learning framework.

\vspace*{-1mm}
\section{Perturbation and Transformation of Character Images}
\label{s:perturbations}
\vspace*{-1mm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Original.PNG}
\label{fig:Original}
\vspace{1.2cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Original.}
This section describes the different transformations we used to stochastically
transform source images such as the one on the left
in order to obtain data from a larger distribution which
covers a domain substantially larger than the clean characters distribution from
which we start. Although character transformations have been used before to
improve character recognizers, this effort is on a large scale both
in number of classes and in the complexity of the transformations, hence
in the complexity of the learning task.
More details can
be found in this technical report~\citep{ift6266-tr-anonymous}.
The code for these transformations (mostly Python) is available at
{\tt http://anonymous.url.net}. All the modules in the pipeline share
a global control parameter ($0 \le complexity \le 1$) that allows one to modulate the
amount of deformation or noise introduced.
There are two main parts in the pipeline. The first one,
from slant to pinch below, performs transformations. The second
part, from blur to contrast, adds different kinds of noise.
\end{minipage}

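The shared $complexity$ control can be pictured as a simple contract between modules. The sketch below is a hypothetical illustration (module names and signatures are our own, not the paper's actual code): each module maps an image and a complexity value in $[0,1]$ to a transformed image, and $complexity=0$ leaves the image unchanged.

```python
import numpy as np

# Hypothetical sketch of the pipeline contract: every module is a callable
# taking (image, complexity) with 0 <= complexity <= 1. Names and
# signatures are illustrative, not the paper's actual code.

def add_uniform_noise(image, complexity, rng=np.random):
    """Example noise module: perturbation amplitude scales with complexity."""
    noise = rng.uniform(-complexity, complexity, image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def apply_pipeline(image, modules, complexity):
    """Run the image through each module with the shared complexity knob."""
    assert 0.0 <= complexity <= 1.0
    for module in modules:
        image = module(image, complexity)
    return image
```

Chaining modules this way makes it easy to sample a fresh perturbed variant of each training example on the fly.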
{\large\bf Transformations}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Slant_only.PNG}
\label{fig:Slant}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
%\centering
{\bf Slant.}
Each row of the image is shifted
proportionally to its height: $shift = round(slant \times height)$, with
$slant \sim U[-complexity,complexity]$.
\vspace{1.2cm}
\end{minipage}

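In code, the slant transformation amounts to a per-row horizontal shift. A minimal sketch (our own illustration; the real module's interface and boundary handling may differ; here pixels shifted past the border are filled with zeros rather than wrapped):

```python
import numpy as np

def slant(image, complexity, rng=np.random):
    """Shift row y by round(slant * y), slant ~ U[-complexity, complexity].
    Illustrative sketch; vacated border pixels are filled with 0 (an
    assumption, not necessarily the paper's choice)."""
    s = rng.uniform(-complexity, complexity)
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        # clamp the shift so the slice arithmetic below stays valid
        shift = max(-(w - 1), min(w - 1, int(round(s * y))))
        if shift >= 0:
            out[y, shift:] = image[y, :w - shift]
        else:
            out[y, :w + shift] = image[y, -shift:]
    return out
```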
\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Thick_only.PNG}
\label{fig:Thick}
\vspace{.9cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Thickness.}
Morphological operators of dilation and erosion~\citep{Haralick87,Serra82}
are applied. The neighborhood of each pixel is multiplied
element-wise with a {\em structuring element} matrix.
The pixel value is replaced by the maximum or the minimum of the resulting
matrix, for dilation or erosion respectively. Ten different structuring elements with
increasing dimensions (the largest is $5\times5$) were used. For each image, the
operator type (dilation or erosion) is sampled with equal probability, and one structuring
element is sampled from a subset of the $n=round(m \times complexity)$ smallest structuring elements,
where $m=10$ for dilation and $m=6$ for erosion (to avoid completely erasing thin characters).
A neutral element (no transformation)
is always present in the set.
\vspace{.4cm}
\end{minipage}
\vspace{-.7cm}


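A grey-scale dilation or erosion over square neighborhoods gives the flavor of the thickness module. This is a simplified sketch under our own assumptions: plain square max/min filters stand in for the paper's ten structuring elements, so sizes and shapes differ from the actual code.

```python
import numpy as np

def thickness(image, complexity, rng=np.random):
    """Randomly dilate or erode with a randomly sized square neighborhood.
    Sketch only: square elements approximate the paper's structuring
    elements, whose exact shapes are not given here."""
    dilate = rng.rand() < 0.5                 # operator type, equal probability
    m = 10 if dilate else 6
    n = int(round(m * complexity))
    if n == 0:
        return image.copy()                   # neutral element: no change
    size = rng.randint(1, n + 1)              # one of the n smallest elements
    pad = size // 2
    if pad == 0:
        return image.copy()                   # 1x1 element is also neutral
    op = np.max if dilate else np.min
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = op(padded[y:y + 2 * pad + 1, x:x + 2 * pad + 1])
    return out
```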
\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Affine_only.PNG}
\label{fig:Affine}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Affine Transformations.}
A $2 \times 3$ affine transform matrix (with
6 parameters $(a,b,c,d,e,f)$) is sampled according to the $complexity$ level.
Output pixel $(x,y)$ takes the value of the input pixel
nearest to $(ax+by+c,dx+ey+f)$,
producing scaling, translation, rotation and shearing.
The marginal distributions of $(a,b,c,d,e,f)$ have been tuned by hand to
forbid large rotations (not to confuse classes) but to give good
variability of the transformation: $a$ and $d$ $\sim U[1-3 \times
complexity,1+3 \times complexity]$, $b$ and $e$ $\sim U[-3 \times complexity,3
\times complexity]$, and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times
complexity]$.
\end{minipage}

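A nearest-neighbour warp with parameters sampled from the stated ranges can be sketched as follows. This is our own illustration: we assume $a$ and $d$ act as the diagonal (scaling) terms of the matrix, so that $complexity=0$ yields the identity map, and that out-of-bounds source positions map to background 0.

```python
import numpy as np

def affine(image, complexity, rng=np.random):
    """Sample (a,b,c,d,e,f) from the stated ranges, then map each output
    pixel to the nearest input pixel. Sketch: a, d taken as diagonal terms
    so complexity = 0 is the identity; out-of-bounds sources become 0."""
    a = rng.uniform(1 - 3 * complexity, 1 + 3 * complexity)
    d = rng.uniform(1 - 3 * complexity, 1 + 3 * complexity)
    b = rng.uniform(-3 * complexity, 3 * complexity)
    e = rng.uniform(-3 * complexity, 3 * complexity)
    c = rng.uniform(-4 * complexity, 4 * complexity)
    f = rng.uniform(-4 * complexity, 4 * complexity)
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx = int(round(a * x + b * y + c))   # source column
            sy = int(round(e * x + d * y + f))   # source row
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out
```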
\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Localelasticdistorsions_only.PNG}
\label{fig:Elastic}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Local Elastic Deformations.}
This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03-short},
which provides more details.
The intensity of the displacement fields is given by
$\alpha = \sqrt[3]{complexity} \times 10.0$; the fields are
convolved with a Gaussian 2D kernel (resulting in a blur) of
standard deviation $\sigma = 10 - 7 \times\sqrt[3]{complexity}$.
\vspace{.4cm}
\end{minipage}
\vspace{-.7cm}

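The elastic deformation can be sketched as random displacement fields smoothed by a Gaussian and scaled by $\alpha$, in the spirit of the Simard et al.\ recipe. The kernel truncation and the nearest-neighbour pixel lookup below are our simplifications, not the paper's actual implementation.

```python
import numpy as np

def gaussian_blur(field, sigma):
    """Separable Gaussian smoothing with a truncated explicit kernel."""
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    # convolve rows then columns, padding by edge replication
    pad = np.pad(field, ((0, 0), (radius, radius)), mode='edge')
    field = np.array([np.convolve(row, k, mode='valid') for row in pad])
    pad = np.pad(field, ((radius, radius), (0, 0)), mode='edge')
    field = np.array([np.convolve(col, k, mode='valid') for col in pad.T]).T
    return field

def elastic(image, complexity, rng=np.random):
    """Local elastic deformation sketch: smoothed random displacement
    fields scaled by alpha; source pixels looked up with nearest-neighbour
    rounding (a simplification)."""
    cbrt = complexity ** (1.0 / 3)
    alpha = 10.0 * cbrt
    sigma = 10.0 - 7.0 * cbrt
    h, w = image.shape
    dx = gaussian_blur(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_blur(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            sy = min(max(int(round(y + dy[y, x])), 0), h - 1)
            sx = min(max(int(round(x + dx[y, x])), 0), w - 1)
            out[y, x] = image[sy, sx]
    return out
```

Note how a larger $complexity$ both strengthens the displacements (larger $\alpha$) and makes them less smooth (smaller $\sigma$).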
551
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
226 \begin{minipage}[b]{0.14\linewidth} |
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
227 \centering |
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
228 \includegraphics[scale=.45]{images/Pinch_only.PNG} |
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
229 \label{fig:Pinch} |
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
230 \vspace{.6cm} |
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
231 \end{minipage}% |
8f365abf171d
separete the transmo image
Frederic Bastien <nouiz@nouiz.org>
parents:
550
diff
changeset
|
232 \hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth} |
553
8f6c09d1140f
ca fitte de nouveau
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
552
diff
changeset
|
{\bf Pinch.}
This is GIMP's ``Whirl and pinch'' filter, with the whirl amount set to 0.
A pinch is ``similar to projecting the image onto an elastic
surface and pressing or pulling on the center of the surface'' (GIMP documentation manual).
For a square input image, this is akin to drawing a circle of
radius $r$ around a center point $C$. Any point (pixel) $P$ belonging to
that disk (the region inside the circle) has its value recalculated by taking
the value of another ``source'' pixel in the original image. The position of
that source pixel is found on the line that goes through $C$ and $P$, but
at some other distance $d_2$. Define $d_1$ to be the distance between $P$
and $C$. Then $d_2$ is given by $d_2 = \sin\left(\frac{\pi d_1}{2r}\right)^{-pinch} \times
d_1$, where $pinch$ is a parameter of the filter.
The actual value is obtained by bilinear interpolation over the pixels
around the (non-integer) source position thus found.
Here $pinch \sim U[-complexity, 0.7 \times complexity]$.
%\vspace{1.5cm}
\end{minipage}
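For concreteness, the inverse mapping above can be sketched in a few lines of Python. This is only an illustration of the formula, not the GIMP-based implementation actually used; the function name is ours.

```python
import math

def pinch_source_distance(d1, r, pinch):
    # d2 = sin(pi*d1 / (2r))^(-pinch) * d1: distance from the center C,
    # along the line C-P, of the source pixel whose value is copied to
    # the point P lying at distance d1 from C.
    if d1 == 0:
        return 0.0  # the center maps to itself
    return math.sin(math.pi * d1 / (2.0 * r)) ** (-pinch) * d1
```

Note that for $d_1 = r$ the sine term is 1, so the boundary of the disk is left fixed, and for $pinch = 0$ the mapping is the identity.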

\vspace{.1cm}

{\large\bf Injecting Noise}

\vspace*{-.2cm}
\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Motionblur_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Motion Blur.}
This is GIMP's ``linear motion blur''
with parameters $length$ and $angle$. The value of
a pixel in the final image is approximately the mean value of the first $length$ pixels
found by moving in the $angle$ direction.
Here $angle \sim U[0,360]$ degrees, and $length \sim {\rm Normal}(0,(3 \times complexity)^2)$.
\vspace{.7cm}
\end{minipage}
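The per-pixel rule can be sketched roughly as follows (nearest-pixel stepping; the real GIMP filter handles sub-pixel sampling more carefully, and the function name is ours):

```python
import math

def motion_blur_pixel(img, x, y, length, angle_deg):
    # Approximate mean of the first `length` pixels starting at (x, y)
    # and stepping in the angle_deg direction (nearest-pixel rounding).
    # Assumes (x, y) lies inside the image.
    h, w = len(img), len(img[0])
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    vals = []
    for i in range(max(1, length)):
        xi, yi = int(round(x + i * dx)), int(round(y + i * dy))
        if 0 <= yi < h and 0 <= xi < w:
            vals.append(img[yi][xi])
    return sum(vals) / len(vals)
```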

\vspace*{-5mm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/occlusion_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Occlusion.}
Selects a random rectangle from an {\em occluder} character
image and places it over the original {\em occluded}
image. Pixels are combined by taking $\max(occluder, occluded)$,
i.e.\ the value closer to black. The rectangle corners
are sampled so that larger complexity gives larger rectangles.
The destination position in the occluded image is also sampled
according to a normal distribution (more details in~\citet{ift6266-tr-anonymous}).
This filter is skipped with probability 60\%.
\vspace{.4cm}
\end{minipage}
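The max-combination rule can be sketched as follows, assuming the convention that pixel values lie in $[0,1]$ with 1 = black ink (the function name is ours):

```python
def occlude(occluded, occluder, top, left):
    # Paste `occluder` over `occluded` at (top, left); pixels are
    # combined with max(), i.e. the darker value wins under the
    # convention that 1 = black ink and 0 = white background.
    out = [row[:] for row in occluded]
    for i, row in enumerate(occluder):
        for j, v in enumerate(row):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = max(out[y][x], v)
    return out
```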

\vspace*{-5mm}
\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Permutpixel_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Pixel Permutation.}
This filter permutes neighbouring pixels. It first selects a
fraction $\frac{complexity}{3}$ of the pixels randomly in the image. Each of these is then
sequentially exchanged with one of its neighbours in a $V4$ (4-connected) neighbourhood.
This filter is skipped with probability 80\%.
\vspace{.8cm}
\end{minipage}
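A minimal sketch of this permutation (seeded for reproducibility; the function name and the exact neighbour-sampling order are our illustration, not the paper's implementation):

```python
import random

def permute_pixels(img, complexity, rng=None):
    # Swap a fraction complexity/3 of the pixels with a randomly
    # chosen 4-connected (V4) neighbour, sequentially.
    rng = rng or random.Random(0)
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(int(h * w * complexity / 3.0)):
        y, x = rng.randrange(h), rng.randrange(w)
        nbrs = [(y + dy, x + dx)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < h and 0 <= x + dx < w]
        if nbrs:
            ny, nx = rng.choice(nbrs)
            out[y][x], out[ny][nx] = out[ny][nx], out[y][x]
    return out
```

Since pixels are only exchanged, the multiset of pixel values is preserved; only their positions change.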


\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Distorsiongauss_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Gaussian Noise.}
This filter simply adds, to each pixel of the image independently, a
noise $\sim {\rm Normal}(0,(\frac{complexity}{10})^2)$.
This filter is skipped with probability 70\%.
\vspace{1.1cm}
\end{minipage}
\vspace{-.7cm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/background_other_only.png}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Background Images.}
Following~\citet{Larochelle-jmlr-2009}, this transformation adds a random
background behind the letter, from a randomly chosen natural image,
with contrast adjustments depending on $complexity$, to preserve
more or less of the original character image.
\vspace{.8cm}
\end{minipage}
\vspace{-.7cm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Poivresel_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Salt and Pepper Noise.}
This filter adds noise $\sim U[0,1]$ to random subsets of pixels.
The fraction of selected pixels is $0.2 \times complexity$.
This filter is skipped with probability 75\%.
\vspace{.9cm}
\end{minipage}
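A minimal sketch of this noise process (seeded for reproducibility; the function name is ours, not from the paper's code):

```python
import random

def salt_and_pepper(img, complexity, rng=None):
    # Replace a fraction 0.2*complexity of the pixels by U[0,1) noise.
    rng = rng or random.Random(0)
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    n = int(h * w * 0.2 * complexity)
    for y, x in rng.sample([(i, j) for i in range(h) for j in range(w)], n):
        out[y][x] = rng.random()
    return out
```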
\vspace{-.7cm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Bruitgauss_only.PNG}
\vspace{.5cm}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Spatially Varying Gaussian Smoothing.}
Different regions of the image are spatially smoothed by convolving
the image with a symmetric Gaussian kernel of
size and variance chosen uniformly in the ranges $[12,12 + 20 \times
complexity]$ and $[2,2 + 6 \times complexity]$. The result is normalized
between $0$ and $1$. We also create a symmetric weighted averaging window, of the
kernel size, with maximum value at the center. For each image we sample
uniformly from $3$ to $3 + 10 \times complexity$ pixels that will serve as
averaging centers between the original image and the filtered one. We
initialize to zero a mask matrix of the image size. For each selected pixel
we add to the mask the averaging window centered on it. The final image is
computed with the following element-wise operation: $\frac{image + filtered \times
mask}{mask+1}$, where $filtered$ denotes the smoothed image.
This filter is skipped with probability 75\%.
\end{minipage}
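The final element-wise blend, (image + filtered*mask) / (mask + 1), can be written directly (a sketch; the function name is ours):

```python
def blend_with_mask(image, filtered, mask):
    # Element-wise (image + filtered*mask) / (mask + 1):
    # where mask == 0 the original pixel is kept unchanged;
    # larger mask values weight the smoothed image more.
    h, w = len(image), len(image[0])
    return [[(image[i][j] + filtered[i][j] * mask[i][j]) / (mask[i][j] + 1.0)
             for j in range(w)] for i in range(h)]
```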
\vspace{-.7cm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Rature_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
\vspace{.4cm}
{\bf Scratches.}
The scratches module places line-like white patches on the image. The
lines are heavily transformed images of the digit ``1'' (one), chosen
at random among 500 such ``1'' images,
randomly cropped and rotated by an angle $\sim Normal(0,(100 \times
complexity)^2)$ (in degrees), using bi-cubic interpolation.
Two passes of a grey-scale morphological erosion filter
are applied, reducing the width of the line
by an amount controlled by $complexity$.
This filter is skipped with probability 85\%. The probabilities
of applying 1, 2, or 3 patches are (50\%, 30\%, 20\%).
\end{minipage}
\vspace{-.7cm}

\begin{minipage}[b]{0.14\linewidth}
\centering
\includegraphics[scale=.45]{images/Contrast_only.PNG}
\end{minipage}%
\hspace{0.3cm}\begin{minipage}[b]{0.86\linewidth}
{\bf Grey Level and Contrast Changes.}
This filter changes the contrast and may invert the image polarity (white
to black and black to white). The contrast is $C \sim U[1-0.85 \times complexity,1]$,
so the image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
polarity is inverted with probability 50\%.
\vspace{.7cm}
\end{minipage}
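This remapping can be sketched as follows (seeded for reproducibility; the function name is ours and pixel values are assumed to lie in $[0,1]$):

```python
import random

def change_contrast(img, complexity, rng=None):
    # Contrast C ~ U[1 - 0.85*complexity, 1]; pixel values in [0,1]
    # are remapped into [(1-C)/2, 1-(1-C)/2]; polarity is then
    # inverted with probability 1/2.
    rng = rng or random.Random(0)
    c = rng.uniform(1.0 - 0.85 * complexity, 1.0)
    lo = (1.0 - c) / 2.0
    out = [[lo + v * c for v in row] for row in img]
    if rng.random() < 0.5:
        out = [[1.0 - v for v in row] for row in out]
    return out
```

With $complexity = 0$ the contrast is exactly 1, so up to a possible polarity inversion the image is unchanged.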
\vspace{-.7cm}


\iffalse
\begin{figure}[ht]
\centerline{\resizebox{.9\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
\caption{Illustration of the pipeline of stochastic
transformations applied to the image of a lower-case \emph{t}
(the upper left image). Each image in the pipeline (going from
left to right, first top line, then bottom line) shows the result
of applying one of the modules in the pipeline. The last image
(bottom right) is used as a training example.}
\label{fig:pipeline}
\end{figure}
\fi


\vspace*{-2mm}
\section{Experimental Setup}
\vspace*{-1mm}

Much previous work on deep learning has been performed on
the MNIST digits task~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006,Salakhutdinov+Hinton-2009},
with 60~000 examples, and variants involving 10~000
examples~\citep{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}.
The focus here is on much larger training sets, from 10 times to
1000 times larger, and 62 classes.

The first step in constructing the larger datasets (called NISTP and P07) is to sample from
a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
and {\bf OCR data} (scanned machine printed characters). Once a character
is sampled from one of these sources (chosen randomly), the second step is to
apply a pipeline of transformations and/or noise processes described in section \ref{s:perturbations}.

To provide a baseline for error-rate comparison we also estimate human performance
on both the 62-class task and the 10-class digits task.
We compare the best MLPs against
the best SDAs (both models' hyper-parameters are selected to minimize the validation set error),
along with a comparison against a precise estimate
of human performance obtained via Amazon's Mechanical Turk (AMT)
service (http://mturk.com).
AMT users are paid small amounts
of money to perform tasks for which human intelligence is required.
Mechanical Turk has been used extensively in natural language processing and vision.
%processing \citep{SnowEtAl2008} and vision
%\citep{SorokinAndForsyth2008,whitehill09}.
AMT users were presented
with 10 character images (from a test set) and asked to choose 10 corresponding ASCII
characters. They were forced to make a hard choice among the
62 or 10 character classes (all classes or digits only).
80 subjects classified 2500 images per (dataset, task) pair,
with the guarantee that 3 different subjects classified each image, allowing
us to estimate inter-human variability (e.g.\ a standard error of 0.1\%
on the average 18.2\% error made by humans on the NIST test set for the 62-class task).

\vspace*{-3mm}
\subsection{Data Sources}
\vspace*{-2mm}

484
9a757d565e46
reduction de taille
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
483
diff
changeset
|
467 %\begin{itemize} |
9a757d565e46
reduction de taille
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
483
diff
changeset
|
468 %\item |
9a757d565e46
reduction de taille
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
483
diff
changeset
|
469 {\bf NIST.} |
Our main source of characters is the NIST Special Database 19~\citep{Grother-1995},
widely used for training and testing character
recognition systems~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}.
The dataset is composed of 814255 digits and characters (upper and lower case), with hand-checked classifications,
extracted from handwritten sample forms of 3600 writers. The characters are labelled by one of the 62 classes
corresponding to ``0''-``9'', ``A''-``Z'' and ``a''-``z''. The dataset contains 8 parts (partitions) of varying complexity.
The fourth partition (called $hsf_4$, 82587 examples),
experimentally recognized to be the most difficult one, is the one recommended
by NIST as a testing set and is used in our work as well as some previous work~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005}
for that purpose. We randomly split the remainder (731668 examples) into a training set and a validation set for
model selection.
The performances reported by previous work on that dataset mostly concern only the digits.
Here we use all the classes, both in the training and testing phases. This is especially
useful for estimating the effect of a multi-task setting.
The distribution of the classes in the NIST training and test sets differs
substantially, with relatively many more digits in the test set, and a more uniform distribution
of letters in the test set (where the letters are distributed
more like in natural text).
\vspace*{-1mm}

%\item
{\bf Fonts.}
In order to have a good variety of sources we downloaded a large number of free fonts from
{\tt http://cg.scs.carleton.ca/\textasciitilde luc/freefonts.html}.
% TODO: pointless to anonymize, it's not pointing to our work
Including the operating system's (Windows 7) fonts, there is a total of $9817$ different fonts that we can choose from uniformly.
The chosen {\tt ttf} file is either used as input to the Captcha generator (see next item) or, by producing a corresponding image,
directly as input to our models.
\vspace*{-1mm}

%\item
{\bf Captchas.}
The Captcha data source is an adaptation of the \emph{pycaptcha} library (a Python-based captcha generator) for
generating characters of the same format as the NIST dataset. This software is based on
a random character class generator and various kinds of transformations similar to those described in the previous sections.
In order to increase the variability of the generated data, many different fonts are used for generating the characters.
Transformations (slant, distortions, rotation, translation) are applied to each randomly generated character, with a complexity
depending on the value of the complexity parameter provided by the user of the data source.
%Two levels of complexity are allowed and can be controlled via an easy to use facade class. %TODO: what's a facade class?
\vspace*{-1mm}

%\item
{\bf OCR data.}
A large set (2 million) of scanned, OCRed and manually verified machine-printed
characters was included as an
additional source. This set is part of a larger corpus being collected by the Image Understanding
Pattern Recognition Research group led by Thomas Breuel at the University of Kaiserslautern
({\tt http://www.iupr.com}), and will be publicly released.
%TODO: let's hope that Thomas is not a reviewer! :) Seriously though, maybe we should anonymize this
%\end{itemize}

\vspace*{-3mm}
\subsection{Data Sets}
\vspace*{-2mm}

All data sets contain 32$\times$32 grey-level images (values in $[0,1]$) associated with a label
from one of the 62 character classes.
%\begin{itemize}
\vspace*{-1mm}

%\item
{\bf NIST.} This is the raw NIST Special Database 19~\citep{Grother-1995}. It has
\{651668 / 80000 / 82587\} \{training / validation / test\} examples.
\vspace*{-1mm}

%\item
{\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
and sending them through the transformation pipeline described in section \ref{s:perturbations}.
For each new example to generate, a data source is selected with probability $10\%$ from the fonts,
$25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
\vspace*{-1mm}
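The per-example generative procedure just described can be summarized as follows (notation ours; the symbols $s$ and $c_t$ do not appear elsewhere in the paper):
\begin{equation*}
s \sim {\rm Multinomial}(0.10,\, 0.25,\, 0.25,\, 0.40) \mbox{ over (fonts, captchas, OCR, NIST)}, \qquad
c_t \sim U[0, 0.7] \mbox{ for each transformation } t,
\end{equation*}
with the transformations then applied in the fixed order of section \ref{s:perturbations}, each with its independently sampled complexity $c_t$.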

%\item
{\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
except that we only apply
transformations from slant to pinch. Therefore, the character is
transformed but no additional noise is added to the image, giving images
closer to the NIST dataset.
It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
%\end{itemize}

\vspace*{-3mm}
\subsection{Models and their Hyperparameters}
\vspace*{-2mm}

The experiments are performed with Multi-Layer Perceptrons (MLP) with a single
hidden layer and with Stacked Denoising Auto-Encoders (SDA).
\emph{Hyper-parameters are selected based on the {\bf NISTP} validation set error.}

{\bf Multi-Layer Perceptrons (MLP).}
Whereas previous work had compared deep architectures to both shallow MLPs and
SVMs, we only compared to MLPs here because of the very large datasets used
(making the use of SVMs computationally challenging because of their quadratic
scaling behavior).
The MLP has a single hidden layer with $\tanh$ activation functions, and softmax (normalized
exponentials) on the output layer for estimating $P({\rm class} \mid {\rm image})$.
The number of hidden units is taken in $\{300,500,800,1000,1500\}$.
Training examples are presented in minibatches of size 20. A constant learning
rate was chosen among $\{0.001, 0.01, 0.025, 0.075, 0.1, 0.5\}$
through preliminary experiments (measuring performance on a validation set),
and $0.1$ was then selected for optimizing on the whole training sets.
\vspace*{-1mm}
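For completeness, this discriminant function can be written out explicitly (a standard single-hidden-layer parameterization; the weight symbols $W,b,V,c$ are ours and do not appear elsewhere in the paper):
\begin{equation*}
P({\rm class}=i \mid x) \;=\;
\frac{\exp\!\left( \left( V \tanh(W x + b) + c \right)_i \right)}
     {\sum_j \exp\!\left( \left( V \tanh(W x + b) + c \right)_j \right)},
\end{equation*}
with training by stochastic gradient descent on the negative log-likelihood $-\log P(y \mid x)$ of the correct class $y$.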


{\bf Stacked Denoising Auto-Encoders (SDA).}
Various auto-encoder variants and Restricted Boltzmann Machines (RBMs)
can be used to initialize the weights of each layer of a deep MLP (with many hidden
layers)~\citep{Hinton06,ranzato-07-small,Bengio-nips-2006},
apparently setting parameters in the
basin of attraction of supervised gradient descent, yielding better
generalization~\citep{Erhan+al-2010}. It is hypothesized that the
advantage brought by this procedure stems from a better prior,
on the one hand taking advantage of the link between the input
distribution $P(x)$ and the conditional distribution of interest
$P(y|x)$ (as in semi-supervised learning), and on the other hand
taking advantage of the expressive power and bias implicit in the
deep architecture (whereby complex concepts are expressed as
compositions of simpler ones through a deep hierarchy).

\begin{figure}[ht]
\vspace*{-2mm}
\centerline{\resizebox{0.8\textwidth}{!}{\includegraphics{images/denoising_autoencoder_small.pdf}}}
\vspace*{-2mm}
\caption{Illustration of the computations and training criterion for the denoising
auto-encoder used to pre-train each layer of the deep architecture. Input $x$ of
the layer (i.e. raw input or output of the previous layer)
is corrupted into $\tilde{x}$ and encoded into code $y$ by the encoder $f_\theta(\cdot)$.
The decoder $g_{\theta'}(\cdot)$ maps $y$ to reconstruction $z$, which
is compared to the uncorrupted input $x$ through the loss function
$L_H(x,z)$, whose expected value is approximately minimized during training
by tuning $\theta$ and $\theta'$.}
\label{fig:da}
\vspace*{-2mm}
\end{figure}

Here we chose to use the Denoising
Auto-Encoder~\citep{VincentPLarochelleH2008} as the building block for
these deep hierarchies of features, as it is very simple to train and
explain (see Figure~\ref{fig:da}, as well as
the tutorial and code at {\tt http://deeplearning.net/tutorial}),
provides immediate and efficient inference, and yielded results
comparable to or better than RBMs in a series of experiments
\citep{VincentPLarochelleH2008}. During training, a Denoising
Auto-Encoder is presented with a stochastically corrupted version
of the input and trained to reconstruct the uncorrupted input,
forcing the hidden units to represent the leading regularities in
the data. Once it is trained, in a purely unsupervised way,
its hidden units' activations can
be used as inputs for training a second one, etc.
After this unsupervised pre-training stage, the parameters
are used to initialize a deep MLP, which is fine-tuned by
the same standard procedure used to train MLPs (see previous section).
The SDA hyper-parameters are the same as for the MLP, with the addition of the
amount of corruption noise (we used the masking noise process, whereby a
fixed proportion of the input values, randomly selected, are zeroed), and a
separate learning rate for the unsupervised pre-training stage (selected
from the same set as above). The fraction of inputs corrupted was selected
among $\{10\%, 20\%, 50\%\}$. Another hyper-parameter is the number
of hidden layers, but it was fixed to 3 based on previous work with
SDAs on MNIST~\citep{VincentPLarochelleH2008}.
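For concreteness, using the notation of Figure~\ref{fig:da}, one standard instantiation of these computations (sigmoid encoder and decoder with cross-entropy reconstruction loss, following \citep{VincentPLarochelleH2008}; the weight symbols $W,b,W',b'$ are ours) is
\begin{align*}
\tilde{x} &\sim q(\tilde{x} \mid x) \quad \mbox{(masking noise: a fixed fraction of inputs zeroed),}\\
y &= f_\theta(\tilde{x}) = {\rm sigmoid}(W \tilde{x} + b), \qquad
z = g_{\theta'}(y) = {\rm sigmoid}(W' y + b'),\\
L_H(x,z) &= -\sum_i \left[ x_i \log z_i + (1 - x_i) \log (1 - z_i) \right].
\end{align*}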

\vspace*{-1mm}

\begin{figure}[ht]
\vspace*{-2mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/error_rates_charts.pdf}}}
\vspace*{-3mm}
\caption{SDAx are the {\bf deep} models. Error bars indicate a 95\% confidence interval. 0 indicates that the model was trained
on NIST, 1 on NISTP, and 2 on P07. Left: overall results
of all models, on NIST and NISTP test sets.
Right: error rates on NIST test digits only, along with the previous results from
the literature~\citep{Granger+al-2007,Cortes+al-2000,Oliveira+al-2002-short,Milgram+al-2005},
respectively based on ART, nearest neighbors, MLPs, and SVMs.}
\label{fig:error-rates-charts}
\vspace*{-2mm}
\end{figure}


\section{Experimental Results}
\vspace*{-2mm}

%\vspace*{-1mm}
%\subsection{SDA vs MLP vs Humans}
%\vspace*{-1mm}
The models are either trained on NIST (MLP0 and SDA0),
NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
on either NIST, NISTP or P07, either on the 62-class task
or on the 10-digit task.
Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
comparing humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
SDA2), along with the previous results on the digits of the NIST Special Database
19 test set from the literature, respectively based on ARTMAP neural
networks~\citep{Granger+al-2007}, fast nearest-neighbor
search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
SVMs~\citep{Milgram+al-2005}. More detailed and complete numerical results
(figures and tables, including standard errors on the error rates) can be
found in Appendix I of the supplementary material.
The deep learner not only outperformed the shallow ones and
previously published performance (in a statistically and qualitatively
significant way) but, when trained with perturbed data,
reaches human performance on both the 62-class task
and the 10-class (digits) task.
17\% error (SDA1) or 18\% error (humans) may seem large, but a large
majority of the errors from humans and from SDA1 are out-of-context
confusions (e.g. a vertical bar can be a ``1'', an ``l'' or an ``L'', and a
``c'' and a ``C'' are often indistinguishable).

\begin{figure}[ht]
\vspace*{-3mm}
\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/improvements_charts.pdf}}}
\vspace*{-3mm}
\caption{Relative improvement in error rate due to self-taught learning.
Left: Improvement (or loss, when negative)
induced by out-of-distribution examples (perturbed data).
Right: Improvement (or loss, when negative) induced by multi-task
learning (training on all classes and testing only on either digits,
upper case, or lower case). The deep learner (SDA) benefits more from
both self-taught learning scenarios, compared to the shallow MLP.}
\label{fig:improvements-charts}
\vspace*{-2mm}
\end{figure}

In addition, as shown in the left of
Figure~\ref{fig:improvements-charts}, the relative improvement in error
rate brought by self-taught learning is greater for the SDA, and these
differences with the MLP are statistically and qualitatively
significant.
The left side of the figure shows the improvement to the clean
NIST test set error brought by the use of out-of-distribution examples
(i.e. the perturbed examples from NISTP or P07).
Relative percent change is measured by taking
$100\% \times$ (original model's error / perturbed-data model's error - 1).
The right side of
Figure~\ref{fig:improvements-charts} shows the relative improvement
brought by the use of a multi-task setting, in which the same model is
trained for more classes than the target classes of interest (i.e. training
with all 62 classes when the target classes are respectively the digits,
lower-case, or upper-case characters). Again, whereas the gain from the
multi-task setting is marginal or negative for the MLP, it is substantial
for the SDA. Note that to simplify these multi-task experiments, only the original
NIST dataset is used. For example, the MLP-digits bar shows that the relative
percent improvement in MLP error rate on the NIST digits test set
is $100\% \times$ (1 - single-task
model's error / multi-task model's error). The single-task model is
trained with only 10 outputs (one per digit), seeing only digit examples,
whereas the multi-task model is trained with 62 outputs, with all 62
character classes as examples. Hence the hidden units are shared across
all tasks. For the multi-task model, the digit error rate is measured by
comparing the correct digit class with the output class associated with the
maximum conditional probability among only the digit class outputs. The
setting is similar for the other two target classes (lower-case characters
and upper-case characters).
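In symbols (notation ours), for a target subset $S$ of the 62 classes (the digits, the lower-case letters, or the upper-case letters), the prediction of the multi-task model restricted to the target task is
\begin{equation*}
\hat{y}(x) = \mathop{\rm argmax}_{i \in S} \; P(i \mid x),
\end{equation*}
i.e. probability mass assigned to classes outside $S$ is simply ignored when measuring the error rate on that target task.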
%\vspace*{-1mm}
%\subsection{Perturbed Training Data More Helpful for SDA}
%\vspace*{-1mm}

%\vspace*{-1mm}
%\subsection{Multi-Task Learning Effects}
%\vspace*{-1mm}

\iffalse
464
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
733 As previously seen, the SDA is better able to benefit from the |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
734 transformations applied to the data than the MLP. In this experiment we |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
735 define three tasks: recognizing digits (knowing that the input is a digit), |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
736 recognizing upper case characters (knowing that the input is one), and |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
737 recognizing lower case characters (knowing that the input is one). We |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
738 consider the digit classification task as the target task and we want to |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
739 evaluate whether training with the other tasks can help or hurt, and |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
740 whether the effect is different for MLPs versus SDAs. The goal is to find |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
741 out if deep learning can benefit more (or less) from multiple related tasks |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
742 (i.e. the multi-task setting) compared to a corresponding purely supervised |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
743 shallow learner. |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
744 |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
745 We use a single hidden layer MLP with 1000 hidden units, and a SDA |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
746 with 3 hidden layers (1000 hidden units per layer), pre-trained and |
24f4a8b53fcc
nips2010_submission.tex
Yoshua Bengio <bengioy@iro.umontreal.ca>
parents:
diff
changeset
|
747 fine-tuned on NIST. |

Our results show that the MLP benefits marginally from the multi-task setting
in the case of digits (5\% relative improvement) but is actually hurt in the case
of characters (respectively 3\% and 4\% worse for lower and upper case characters).
On the other hand, the SDA benefited from the multi-task setting, with relative
error rate improvements of 27\%, 15\% and 13\% respectively for digits,
lower and upper case characters, as shown in Table~\ref{tab:multi-task}.
\fi

\vspace*{-2mm}
\section{Conclusions and Discussion}
\vspace*{-2mm}

We have found that the self-taught learning framework is more beneficial
to a deep learner than to a traditional shallow and purely
supervised learner. More precisely,
the answers are positive for all the questions asked in the introduction.
%\begin{itemize}

$\bullet$ %\item
{\bf Do the good results previously obtained with deep architectures on the
MNIST digits generalize to a much larger and richer (but similar)
dataset, the NIST special database 19, with 62 classes and around 800k examples}?
Yes, the SDA {\bf systematically outperformed the MLP and all the previously
published results on this dataset} (the ones that we are aware of), {\bf in fact reaching human-level
performance} at around 17\% error on the 62-class task and 1.4\% on the digits.

$\bullet$ %\item
{\bf To what extent do self-taught learning scenarios help deep learners,
and do they help them more than shallow supervised ones}?
We found that distorted training examples not only made the resulting
classifier better on similarly perturbed images but also on
the {\em original clean examples}, and, more importantly (and this is the
more novel finding), that deep architectures benefit more from such
{\em out-of-distribution} examples.
MLPs were helped by perturbed training examples when tested on perturbed input
images (65\% relative improvement on NISTP)
but were only marginally helped (5\% relative improvement on all classes)
or even hurt (10\% relative loss on digits)
with respect to clean examples. On the other hand, the deep SDAs
were very significantly boosted by these out-of-distribution examples.
Similarly, whereas the improvement due to the multi-task setting was marginal or
negative for the MLP (from +5.6\% to -3.6\% relative change),
it was very significant for the SDA (from +13\% to +27\% relative change),
which may be explained by the arguments below.
%\end{itemize}

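For concreteness, the relative changes quoted above are assumed to follow the usual convention (the text does not define it explicitly): for a baseline test error rate $e_0$ and a new test error rate $e_1$,

```latex
% Assumed convention for the relative changes quoted in the text:
% positive values are improvements (error reductions), negative ones losses.
\Delta_{\mathrm{rel}} = \frac{e_{0} - e_{1}}{e_{0}}
```

For example (illustrative numbers, not taken from the tables), going from a 1.0\% to a 0.73\% error rate is a 27\% relative improvement.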
In the original self-taught learning framework~\citep{RainaR2007}, the
out-of-sample examples were used as a source of unsupervised data, and
experiments showed its positive effects in a \emph{limited labeled data}
scenario. However, many of the results by \citet{RainaR2007} (who used a
shallow, sparse coding approach) suggest that the {\em relative gain of self-taught
learning vs ordinary supervised learning} diminishes as the number of labeled examples increases.
We note instead that, for deep
architectures, our experiments show that such a positive effect is accomplished
even in a scenario with a \emph{very large number of labeled examples},
i.e., here, the relative gain of self-taught learning is probably preserved
in the asymptotic regime.

{\bf Why would deep learners benefit more from the self-taught learning framework}?
The key idea is that the lower layers of the predictor compute a hierarchy
of features that can be shared across tasks or across variants of the
input distribution. Intermediate features that can be used in different
contexts can be estimated in a way that allows sharing statistical
strength. Features extracted through many levels are more likely to
be more abstract (as the experiments in~\citet{Goodfellow2009} suggest),
increasing the likelihood that they would be useful for a larger array
of tasks and input conditions.
Therefore, we hypothesize that both depth and unsupervised
pre-training play a part in explaining the advantages observed here, and future
experiments could attempt to tease apart these factors.
And why would deep learners benefit from the self-taught learning
scenarios even when the number of labeled examples is very large?
We hypothesize that this is related to the hypotheses studied
in~\citet{Erhan+al-2010}, where it was found
that online learning on a huge dataset did not make the
advantage of the deep learning bias vanish; a similar phenomenon
may be happening here. We hypothesize that unsupervised pre-training
of a deep hierarchy with self-taught learning initializes the
model in the basin of attraction of supervised gradient descent
that corresponds to better generalization. Furthermore, such good
basins of attraction are not discovered by pure supervised learning
(with or without self-taught settings), and more labeled examples
do not allow the model to go from the poorer basins of attraction discovered
by the purely supervised shallow models to the kind of better basins associated
with deep learning and self-taught learning.
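The unsupervised initialization discussed above can be sketched as greedy layer-wise stacking of denoising autoencoders: each layer is trained to reconstruct its clean input from a corrupted version, and its hidden representation then becomes the input of the next layer. The following is a minimal numpy illustration only, not the experimental code used here; the layer sizes, masking-noise level, learning rate, and number of epochs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_dae_layer(X, n_hidden, corruption=0.3, lr=0.1, epochs=50):
    """Pre-train one denoising autoencoder layer with tied weights:
    corrupt the input with masking noise, encode, reconstruct, and
    descend the squared reconstruction error by gradient descent."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias
    for _ in range(epochs):
        # masking noise: zero out a random fraction of the inputs
        mask = rng.random(X.shape) > corruption
        Xc = X * mask
        H = sigmoid(Xc @ W + b)       # encoder
        R = sigmoid(H @ W.T + c)      # tied-weight decoder
        err = R - X                   # reconstruct the *clean* input
        dR = err * R * (1.0 - R)      # backprop through decoder sigmoid
        dH = (dR @ W) * H * (1.0 - H) # backprop through encoder sigmoid
        gW = Xc.T @ dH + dR.T @ H     # tied weights: both paths contribute
        W -= lr * gW / len(X)
        b -= lr * dH.sum(0) / len(X)
        c -= lr * dR.sum(0) / len(X)
    return W, b

# Stack layers greedily: each is trained on the previous representation,
# after which the whole stack would be fine-tuned with supervised labels.
X = rng.random((256, 64))
reps, layers = X, []
for n_hidden in (32, 16):
    W, b = pretrain_dae_layer(reps, n_hidden)
    layers.append((W, b))
    reps = sigmoid(reps @ W + b)
print(reps.shape)  # prints (256, 16)
```

The deepest representation `reps` is what the supervised fine-tuning stage would start from, which is the sense in which pre-training chooses the basin of attraction of the subsequent gradient descent.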

A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
can be executed on-line at {\tt http://deep.host22.com}.

\newpage
{
\bibliography{strings,strings-short,strings-shorter,ift6266_ml,aigaion-shorter,specials}
%\bibliographystyle{plainnat}
\bibliographystyle{unsrtnat}
%\bibliographystyle{apalike}
}

\end{document}