diff writeup/nipswp_submission.tex @ 598:a0fdc1f134da

minor changes to nips workshop submission
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Thu, 14 Oct 2010 22:09:55 -0400
parents 5ab605c9a7d9
--- a/writeup/nipswp_submission.tex	Thu Oct 14 18:04:11 2010 -0400
+++ b/writeup/nipswp_submission.tex	Thu Oct 14 22:09:55 2010 -0400
@@ -174,7 +174,7 @@
 (perturbed or out-of-class)
 for a deep learner vs a supervised shallow one.
 Code for generating these transformations as well as for the deep learning 
-algorithms are made available at {\tt http://hg.assembla.com/ift6266}.
+algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
 We also estimate the relative advantage for deep learners of training with
 other classes than those of interest, by comparing learners trained with
 62 classes with learners trained with only a subset (on which they
@@ -227,7 +227,10 @@
 \subfigure[Salt \& Pepper]{\includegraphics[scale=0.6]{images/Poivresel_only.png}}
 \subfigure[Scratches]{\includegraphics[scale=0.6]{images/Rature_only.png}}
 \subfigure[Grey Level \& Contrast]{\includegraphics[scale=0.6]{images/Contrast_only.png}}
-\caption{Transformation modules}
+\caption{Top left (a): an example original image. Others (b-o): examples of the effect
+of each transformation module applied in isolation. Actual perturbed examples are obtained by
+a pipeline of these modules, with random choices of which modules to apply and how much
+perturbation to apply.}
 \label{fig:transform}
 \vspace*{-2mm}
 \end{figure}
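For concreteness, the randomized pipeline described in the caption above could look roughly like the following Python sketch. This is a minimal illustration under stated assumptions, not the project's actual code: the module stubs, the per-module application probability p_apply, and the complexity parameter are all hypothetical, and the real transformation modules live in the repository referenced earlier.

import random

# Hypothetical stand-ins for two of the transformation modules shown in
# the figure (Salt & Pepper, Grey Level & Contrast). Images are modeled
# as flat lists of floats in [0, 1].
def salt_and_pepper(image, amount):
    out = []
    for px in image:
        r = random.random()
        if r < amount / 2:
            out.append(0.0)   # pepper: force pixel to black
        elif r < amount:
            out.append(1.0)   # salt: force pixel to white
        else:
            out.append(px)
    return out

def contrast(image, amount):
    # reduce contrast: scale deviations from mid-grey by (1 - amount)
    return [0.5 + (px - 0.5) * (1.0 - amount) for px in image]

MODULES = [salt_and_pepper, contrast]

def perturb(image, complexity, p_apply=0.7):
    # Random choice of which modules to apply and how much perturbation:
    # each module fires with probability p_apply, with a magnitude drawn
    # uniformly between 0 and the global complexity level.
    for module in MODULES:
        if random.random() < p_apply:
            image = module(image, random.uniform(0.0, complexity))
    return image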
@@ -247,7 +250,7 @@
 a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
 and {\bf OCR data} (scanned machine printed characters). Once a character
 is sampled from one of these sources (chosen randomly), the second step is to
-apply a pipeline of transformations and/or noise processes described in section \ref{s:perturbations}.
+apply a pipeline of transformations and/or noise processes outlined in section \ref{s:perturbations}.
 
 To provide a baseline for error-rate comparison, we also estimate human performance
 on both the 62-class task and the 10-class digits task.
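The two-step generation process described in this hunk (sample a character from a randomly chosen source, then run it through the perturbation pipeline) could be sketched as follows. The source stubs are placeholders for the actual NIST, Fonts, Captchas, and OCR readers, and perturb is the hypothetical pipeline sketched after the previous hunk.

import random

# Each source returns an (image, label) pair; these stubs stand in for
# the real NIST / Fonts / Captchas / OCR data readers.
def nist():     return [0.5] * (32 * 32), 'A'
def fonts():    return [0.5] * (32 * 32), 'b'
def captchas(): return [0.5] * (32 * 32), '7'
def ocr():      return [0.5] * (32 * 32), '3'

SOURCES = [nist, fonts, captchas, ocr]

def sample_example(complexity):
    source = random.choice(SOURCES)      # step 1: pick a data source
    image, label = source()
    image = perturb(image, complexity)   # step 2: transformation pipeline
    return image, label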
@@ -256,7 +259,7 @@
 both models' hyper-parameters are selected to minimize the validation set error.
 We also provide a comparison against a precise estimate
 of human performance obtained via Amazon's Mechanical Turk (AMT)
-service (http://mturk.com). 
+service ({\tt http://mturk.com}). 
 AMT users are paid small amounts
 of money to perform tasks for which human intelligence is required.
 Mechanical Turk has been used extensively in natural language processing and computer vision research.
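The hyper-parameter selection mentioned in this hunk (choose the configuration that minimizes validation-set error) admits a simple sketch; the paper does not specify the search procedure, so the exhaustive grid search and the validation_error callback below are assumptions.

from itertools import product

def select_hyperparams(grid, validation_error):
    # grid maps hyper-parameter names to lists of candidate values;
    # validation_error trains a model for one configuration and returns
    # its error on the validation set.
    best_config, best_err = None, float('inf')
    for values in product(*grid.values()):
        config = dict(zip(grid, values))
        err = validation_error(config)
        if err < best_err:
            best_config, best_err = config, err
    return best_config, best_err

# Usage sketch (run_trial is a hypothetical training/evaluation routine):
# best, err = select_hyperparams({'learning_rate': [0.01, 0.1],
#                                 'hidden_units': [500, 1000]}, run_trial)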