changeset 466:6205481bf33f

asking the questions
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Fri, 28 May 2010 17:39:22 -0600
parents a48601e8d431
children e0e57270b2af
files writeup/nips2010_submission.tex
diffstat 1 files changed, 16 insertions(+), 0 deletions(-)
line diff
--- a/writeup/nips2010_submission.tex	Fri May 28 17:33:15 2010 -0600
+++ b/writeup/nips2010_submission.tex	Fri May 28 17:39:22 2010 -0600
@@ -89,6 +89,22 @@
 converted into a deep supervised feedforward neural network and trained by
 stochastic gradient descent.
 
+In this paper we ask the following questions:
+\begin{enumerate}
+\item Do the good results previously obtained with deep architectures on the
+MNIST digits generalize to the setting of a much larger and richer (but similar)
+dataset, NIST Special Database 19, with 62 classes and around 800k examples?
+\item To what extent does the perturbation of input images (e.g. adding noise,
+applying affine transformations, or superimposing background images) make the
+resulting classifier better not only on similarly perturbed images but also on
+the {\em original clean examples}?
+\item Do deep architectures benefit more than shallow ones from such {\em out-of-distribution}
+examples, i.e. do they benefit more from the self-taught learning~\cite{RainaR2007} framework?
+\item Similarly, does the feature learning step in deep learning algorithms benefit more
+from training with similar but different classes (i.e. a multi-task learning scenario) than
+a corresponding shallow and purely supervised architecture does?
+\end{enumerate}
+The experimental results presented here support an affirmative answer to all of these questions.
 
 \section{Perturbation and Transformation of Character Images}