comparison writeup/nips2010_submission.tex @ 500:8479bf822d0e

merge
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Tue, 01 Jun 2010 12:13:10 -0400
parents 2b58eda9fc08 7ff00c27c976
children 5927432d8b8d
499:2b58eda9fc08 500:8479bf822d0e
83 Self-taught learning~\citep{RainaR2007} is a paradigm that combines principles 83 Self-taught learning~\citep{RainaR2007} is a paradigm that combines principles
84 of semi-supervised and multi-task learning: the learner can exploit examples 84 of semi-supervised and multi-task learning: the learner can exploit examples
85 that are unlabeled and/or come from a distribution different from the target 85 that are unlabeled and/or come from a distribution different from the target
86 distribution, e.g., from other classes than those of interest. Whereas 86 distribution, e.g., from other classes than those of interest. Whereas
87 it has already been shown that deep learners can clearly take advantage of 87 it has already been shown that deep learners can clearly take advantage of
88 unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008} 88 unsupervised learning and unlabeled examples~\citep{Bengio-2009,WestonJ2008-small}
89 and multi-task learning, not much has been done yet to explore the impact 89 and multi-task learning, not much has been done yet to explore the impact
90 of {\em out-of-distribution} examples and of the multi-task setting 90 of {\em out-of-distribution} examples and of the multi-task setting
91 (but see~\citep{CollobertR2008-short}). In particular the {\em relative 91 (but see~\citep{CollobertR2008}). In particular the {\em relative
92 advantage} of deep learning for this setting has not been evaluated. 92 advantage} of deep learning for this setting has not been evaluated.
93 93
94 In this paper we ask the following questions: 94 In this paper we ask the following questions:
95 95
96 %\begin{enumerate} 96 %\begin{enumerate}
170 variability of the transformation: $a$ and $d$ $\sim U[1-3 \times 170 variability of the transformation: $a$ and $d$ $\sim U[1-3 \times
171 complexity,1+3 \times complexity]$, $b$ and $e$ $\sim U[-3 \times complexity,3 171 complexity,1+3 \times complexity]$, $b$ and $e$ $\sim U[-3 \times complexity,3
172 \times complexity]$ and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times 172 \times complexity]$ and $c$ and $f$ $\sim U[-4 \times complexity, 4 \times
173 complexity]$.\\ 173 complexity]$.\\
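For concreteness, the sampling of the affine parameters just described could be sketched in Python as follows; the function name sample_affine_params and the use of the standard random module are illustrative assumptions, and only the uniform ranges come from the text above.

import random

def sample_affine_params(complexity):
    # Hypothetical sketch of the sampling described in the text:
    # a, d ~ U[1 - 3*complexity, 1 + 3*complexity]  (diagonal terms, near 1)
    # b, e ~ U[-3*complexity, 3*complexity]          (off-diagonal terms, near 0)
    # c, f ~ U[-4*complexity, 4*complexity]          (translation terms, near 0)
    a = random.uniform(1 - 3 * complexity, 1 + 3 * complexity)
    d = random.uniform(1 - 3 * complexity, 1 + 3 * complexity)
    b = random.uniform(-3 * complexity, 3 * complexity)
    e = random.uniform(-3 * complexity, 3 * complexity)
    c = random.uniform(-4 * complexity, 4 * complexity)
    f = random.uniform(-4 * complexity, 4 * complexity)
    return a, b, c, d, e, f

At complexity = 0 the draw collapses to the identity transformation (a = d = 1, all other parameters 0), which matches the intent of complexity as the knob controlling the amount of distortion.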
174 {\bf Local Elastic Deformations.} 174 {\bf Local Elastic Deformations.}
175 This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03}, 175 This filter induces a ``wiggly'' effect in the image, following~\citet{SimardSP03-short},
176 which provides more details. 176 which provides more details.
177 Two "displacements" fields are generated and applied, for horizontal 177 Two "displacements" fields are generated and applied, for horizontal
178 and vertical displacements of pixels. 178 and vertical displacements of pixels.
179 To generate each pixel in either field, a value between $-1$ and $1$ is first 179 To generate each pixel in either field, a value between $-1$ and $1$ is first
180 drawn from a uniform distribution. Then all the pixels, in both fields, are 180 drawn from a uniform distribution. Then all the pixels, in both fields, are
610 %\end{itemize} 610 %\end{itemize}
611 611
612 A Flash demo of the recognizer (where both the MLP and the SDA can be compared) 612 A Flash demo of the recognizer (where both the MLP and the SDA can be compared)
613 can be run on-line at {\tt http://deep.host22.com}. 613 can be run on-line at {\tt http://deep.host22.com}.
614 614
615 615 \newpage
616 {\small 616 {
617 \bibliography{strings,ml,aigaion,specials} 617 \bibliography{strings,strings-short,strings-shorter,ift6266_ml,aigaion-shorter,specials}
618 %\bibliographystyle{plainnat} 618 %\bibliographystyle{plainnat}
619 \bibliographystyle{unsrtnat} 619 \bibliographystyle{unsrtnat}
620 %\bibliographystyle{apalike} 620 %\bibliographystyle{apalike}
621 } 621 }
622 622