changeset 539:84f42fe05594

merge
author Dumitru Erhan <dumitru.erhan@gmail.com>
date Tue, 01 Jun 2010 19:34:22 -0700
parents f0ee2212ea7c (current diff) caf7769ca19c (diff)
children 269c39f55134
files writeup/nips2010_submission.tex
diffstat 1 files changed, 15 insertions(+), 10 deletions(-) [+]
--- a/writeup/nips2010_submission.tex	Tue Jun 01 19:34:00 2010 -0700
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 19:34:22 2010 -0700
@@ -357,15 +357,15 @@
 to focus here on the case of much larger training sets, from 10 times
 to 1000 times larger.
 
-The first step in constructing the larger datasets is to sample from
+The first step in constructing the larger datasets (called NISTP and P07) is to sample from
 a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
 and {\bf OCR data} (scanned machine printed characters). Once a character
 is sampled from one of these sources (chosen randomly), the pipeline of
 the transformations and/or noise processes described in section \ref{s:perturbations}
 is applied to the image.
 
-We compare the best MLP against
-the best SDA (both models' hyper-parameters are selected to minimize the validation set error), 
+We compare the best MLPs against
+the best SDAs (both models' hyper-parameters are selected to minimize the validation set error), 
 along with a comparison against a precise estimate
 of human performance obtained via Amazon's Mechanical Turk (AMT)
 service (http://mturk.com). 
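The dataset-construction step above (sample a character from a randomly chosen source, then apply the perturbation pipeline) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the source readers and transformations here are hypothetical placeholders.

```python
import random

# Hypothetical sketch of the data-generation pipeline described above.
SOURCES = ["NIST", "Fonts", "Captchas", "OCR"]

def sample_character(source):
    # Placeholder: a real reader would draw an (image, label) pair
    # from the named data source.
    return [[0.0] * 32 for _ in range(32)], "a"

def identity(image):
    # Trivial stand-in for one transformation/noise process.
    return image

def perturb(image, transformations):
    # Apply the pipeline of transformations/noise processes in order.
    for transform in transformations:
        image = transform(image)
    return image

def build_example(rng, transformations):
    source = rng.choice(SOURCES)       # source chosen randomly
    image, label = sample_character(source)
    return perturb(image, transformations), label

rng = random.Random(0)
image, label = build_example(rng, [identity])
```

Repeating `build_example` over the desired number of draws yields a dataset 10 to 1000 times larger than the original source, as the text describes.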
@@ -525,7 +525,8 @@
 Auto-Encoder is presented with a stochastically corrupted version
 of the input and trained to reconstruct the uncorrupted input,
 forcing the hidden units to represent the leading regularities in
-the data. Once it is trained, its hidden units activations can
+the data. Once it is trained, in a purely unsupervised way, 
+its hidden unit activations can
 be used as inputs for training a second one, etc.
 After this unsupervised pre-training stage, the parameters
 are used to initialize a deep MLP, which is fine-tuned by
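The greedy layer-wise procedure described in this hunk (corrupt the input, reconstruct the clean version, then feed the trained hidden activations to the next layer) can be sketched as below. This is a minimal illustration with tied weights and a cross-entropy reconstruction gradient, not the paper's actual SDA code; all sizes and hyper-parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """Minimal one-layer denoising autoencoder (illustrative only)."""
    def __init__(self, n_in, n_hidden, corruption=0.3, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # tied weights
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_in)
        self.corruption = corruption
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_h)

    def train_step(self, x):
        # Corrupt the input by zeroing a random fraction of entries.
        mask = (rng.random(x.shape) > self.corruption).astype(float)
        h = self.encode(x * mask)
        # Reconstruct the *uncorrupted* input from the hidden code.
        r = sigmoid(h @ self.W.T + self.b_v)
        # Gradients of the cross-entropy reconstruction loss
        # w.r.t. the pre-activations (standard backprop).
        d_r = r - x
        d_h = (d_r @ self.W) * h * (1.0 - h)
        self.W -= self.lr * (np.outer(x * mask, d_h) + np.outer(d_r, h))
        self.b_v -= self.lr * d_r
        self.b_h -= self.lr * d_h
        return float(np.mean((r - x) ** 2))

# Greedy layer-wise pre-training: train the first dA, then use its
# hidden activations as inputs for training a second one, etc.
x = rng.random(16)
da1 = DenoisingAutoencoder(16, 8)
for _ in range(200):
    loss = da1.train_step(x)
h1 = da1.encode(x)
da2 = DenoisingAutoencoder(8, 4)
for _ in range(200):
    da2.train_step(h1)
```

After this unsupervised stage, the learned weights would initialize the layers of a deep MLP that is then fine-tuned with supervised gradient descent, as the paragraph states.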
@@ -560,20 +561,24 @@
 %\vspace*{-1mm}
 %\subsection{SDA vs MLP vs Humans}
 %\vspace*{-1mm}
-
+The models are either trained on NIST (MLP0 and SDA0), 
+NISTP (MLP1 and SDA1), or P07 (MLP2 and SDA2), and tested
+on NIST, NISTP, or P07, either on all 62 classes
+or only on the digits (considering only the outputs
+associated with digit classes).
 Figure~\ref{fig:error-rates-charts} summarizes the results obtained,
-comparing Humans, three MLPs (MLP0, MLP1, MLP2) and three SDAs (SDA0, SDA1,
+comparing Humans, the three MLPs (MLP0, MLP1, MLP2) and the three SDAs (SDA0, SDA1,
 SDA2), along with previous results from the literature on the NIST
 special database 19 digits test set, respectively based on ARTMAP neural
 networks~\citep{Granger+al-2007}, fast nearest-neighbor
 search~\citep{Cortes+al-2000}, MLPs~\citep{Oliveira+al-2002-short}, and
 SVMs~\citep{Milgram+al-2005}.  More detailed and complete numerical results
 (figures and tables, including standard errors on the error rates) can be
-found in Appendix I of the supplementary material.  The 3 kinds of model differ in the
-training sets used: NIST only (MLP0,SDA0), NISTP (MLP1, SDA1), or P07
-(MLP2, SDA2). The deep learner not only outperformed the shallow ones and
+found in Appendix I of the supplementary material.  
+The deep learner not only outperformed the shallow ones and
 previously published performance (in a statistically and qualitatively
-significant way) but reaches human performance on both the 62-class task
+significant way) but, when trained with perturbed data,
+also reached human performance on both the 62-class task
 and the 10-class (digits) task. 
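The digit-only evaluation mentioned above (considering only the outputs associated with digit classes) amounts to restricting a 62-way classifier's output vector to its 10 digit entries before taking the argmax. A hedged sketch, assuming the digit classes occupy indices 0-9 (an assumption about class ordering, not stated in the text):

```python
import numpy as np

DIGIT_CLASSES = list(range(10))  # assumption: classes 0-9 are the digits

def predict_digit(scores):
    """Restrict a 62-dim score vector to the digit outputs and pick the best."""
    digit_scores = scores[DIGIT_CLASSES]
    return int(np.argmax(digit_scores))

scores = np.zeros(62)
scores[7] = 0.4    # digit class '7'
scores[30] = 0.9   # a letter class scores higher overall...
print(predict_digit(scores))  # -> 7: letter outputs are ignored
```

This lets the same 62-class model be scored on the 10-class digits task without retraining, matching how the text compares against digit-only results from the literature.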
 
 \begin{figure}[ht]