# HG changeset patch
# User Yoshua Bengio
# Date 1287108595 14400
# Node ID a0fdc1f134dab9571d3baa7bbf0ed82e859056e3
# Parent 5ab605c9a7d9a88c115e1e4d2424989b1756f99a
minor changes to nips workshop submission

diff -r 5ab605c9a7d9 -r a0fdc1f134da writeup/nips2010_submission.pdf
Binary file writeup/nips2010_submission.pdf has changed
diff -r 5ab605c9a7d9 -r a0fdc1f134da writeup/nipswp_submission.tex
--- a/writeup/nipswp_submission.tex	Thu Oct 14 18:04:11 2010 -0400
+++ b/writeup/nipswp_submission.tex	Thu Oct 14 22:09:55 2010 -0400
@@ -174,7 +174,7 @@
 (perturbed or out-of-class)
 for a deep learner vs a supervised shallow one.
 Code for generating these transformations as well as for the deep learning
-algorithms are made available at {\tt http://hg.assembla.com/ift6266}.
+algorithms is made available at {\tt http://anonymous.url.net}.%{\tt http://hg.assembla.com/ift6266}.
 We also estimate the relative advantage for deep learners of training with
 other classes than those of interest, by comparing learners trained with
 62 classes with learners trained with only a subset (on which they
@@ -227,7 +227,10 @@
 \subfigure[Salt \& Pepper]{\includegraphics[scale=0.6]{images/Poivresel_only.png}}
 \subfigure[Scratches]{\includegraphics[scale=0.6]{images/Rature_only.png}}
 \subfigure[Grey Level \& Contrast]{\includegraphics[scale=0.6]{images/Contrast_only.png}}
-\caption{Transformation modules}
+\caption{Top left (a): example original image. Others (b-o): examples of the effect
+of each transformation module taken separately. Actual perturbed examples are obtained by
+a pipeline of these, with random choices about which module to apply and how much perturbation
+to apply.}
 \label{fig:transform}
 \vspace*{-2mm}
 \end{figure}
@@ -247,7 +250,7 @@
 a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
 and {\bf OCR data} (scanned machine printed characters).
 Once a character is sampled from one of these sources (chosen randomly), the second step is to
-apply a pipeline of transformations and/or noise processes described in section \ref{s:perturbations}.
+apply a pipeline of transformations and/or noise processes outlined in section \ref{s:perturbations}.
 
 To provide a baseline of error rate comparison we also estimate human performance
 on both the 62-class task and the 10-class digits task.
@@ -256,7 +259,7 @@
 both models' hyper-parameters are selected to minimize the validation set error.
 We also provide a comparison against a precise estimate
 of human performance obtained via Amazon's Mechanical Turk (AMT)
-service (http://mturk.com). 
+service ({\tt http://mturk.com}). 
 AMT users are paid small amounts of money to perform tasks for which human
 intelligence is required.
 Mechanical Turk has been used extensively in natural language processing and vision.
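The revised caption describes a pipeline that applies a random subset of transformation modules to each sampled character, with a random perturbation strength for each. As a rough illustration of that scheme only (this is not the paper's actual code: the toy module implementations, their names, and the `complexity` parameter below are all invented for the sketch), the random-choice pipeline might look like:

```python
import random

# Toy stand-ins for the paper's transformation modules. Each perturbs an
# "image", represented here as a flat list of pixel intensities in [0, 1].
def salt_and_pepper(img, amount):
    """Randomly force a fraction `amount` of pixels to pure black or white."""
    out = list(img)
    for i in range(len(out)):
        if random.random() < amount:
            out[i] = random.choice([0.0, 1.0])
    return out

def contrast(img, amount):
    """Shrink pixel values toward mid-grey (0.5) by a factor `amount`."""
    return [0.5 + (p - 0.5) * (1.0 - amount) for p in img]

MODULES = [salt_and_pepper, contrast]

def perturb(img, complexity=0.3):
    """Pipeline over all modules: for each one, randomly decide whether to
    apply it, and if so draw a random perturbation strength up to `complexity`."""
    out = img
    for module in MODULES:
        if random.random() < 0.5:                    # random choice: apply this module?
            amount = random.uniform(0.0, complexity)  # random amount of perturbation
            out = module(out, amount)
    return out

random.seed(0)
image = [0.2, 0.8, 0.5, 0.9]
print(perturb(image))
```

Since every module maps values in [0, 1] back into [0, 1], modules can be chained in any order, which is what lets the pipeline sample arbitrary subsets of them.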