diff writeup/nips2010_submission.tex @ 499:2b58eda9fc08

Myriam's changes
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Tue, 01 Jun 2010 12:12:52 -0400
parents 5764a2ae1fb5
children 8479bf822d0e
--- a/writeup/nips2010_submission.tex	Tue Jun 01 11:02:10 2010 -0400
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 12:12:52 2010 -0400
@@ -32,7 +32,7 @@
   developed a powerful generator of stochastic variations and noise
  processes for character images, including not only affine transformations but
   also slant, local elastic deformations, changes in thickness, background
-  images, color, contrast, occlusion, and various types of pixel and
+  images, grey level changes, contrast, occlusion, and various types of pixel and
   spatially correlated noise. The out-of-distribution examples are 
   obtained by training with these highly distorted images or
   by including object classes different from those in the target test set.
@@ -277,7 +277,7 @@
 cases, two patches are generated, and otherwise three patches are
 generated. Each patch is applied by taking, at each of the 32x32 pixel
 locations, the maximal value of the patch and of the original image.\\
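As a rough sketch of this max-compositing step (assuming numpy arrays with pixel values in [0, 1], white strokes encoded as high values, and illustrative function and argument names rather than the authors' code):

    import numpy as np

    def apply_occlusion_patch(image, patch, top, left):
        # Overlay `patch` on the 32x32 `image` by keeping, at every covered
        # pixel, the maximum of the patch value and the original pixel value.
        out = image.copy()
        h, w = patch.shape
        region = out[top:top + h, left:left + w]
        out[top:top + h, left:left + w] = np.maximum(region, patch)
        return out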
-{\bf Color and Contrast Changes.}
+{\bf Grey Level and Contrast Changes.}
 This filter changes the contrast and may invert the image polarity (white
 on black to black on white). The contrast $C$ is defined here as the
 difference between the maximum and the minimum pixel value of the image. 
@@ -285,7 +285,7 @@
 The image is normalized into $[\frac{1-C}{2},1-\frac{1-C}{2}]$. The
 polarity is inverted with $0.5$ probability.
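A minimal sketch of this filter, assuming pixel values stored as numpy arrays in [0, 1] and a target contrast C sampled elsewhere (the names are illustrative, not the authors' code):

    import numpy as np

    def change_contrast(image, C, rng=np.random):
        # Rescale pixel values so they span [(1-C)/2, 1-(1-C)/2], an interval
        # of width C centred at 0.5, then invert polarity half the time.
        lo, hi = image.min(), image.max()
        scaled = (image - lo) / max(hi - lo, 1e-8)   # spans [0, 1]
        out = (1.0 - C) / 2.0 + C * scaled           # spans [(1-C)/2, 1-(1-C)/2]
        if rng.uniform() < 0.5:
            out = 1.0 - out                          # white-on-black <-> black-on-white
        return out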
 
-
+\iffalse
 \begin{figure}[h]
 \resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}\\
 \caption{Illustration of the pipeline of stochastic 
@@ -296,16 +296,17 @@
 (bottom right) is used as training example.}
 \label{fig:pipeline}
 \end{figure}
-
+\fi
 
 \begin{figure}[h]
 \resizebox{.99\textwidth}{!}{\includegraphics{images/transfo.png}}\\
 \caption{Illustration of each transformation applied alone to the same image
 of an upper-case h (top left). First row (from left to right) : original image, slant,
-thickness, affine transformation, local elastic deformation; second row (from left to right) :
+thickness, affine transformation (translation, rotation, shear), 
+local elastic deformation; second row (from left to right) :
 pinch, motion blur, occlusion, pixel permutation, Gaussian noise; third row (from left to right) :
 background image, salt and pepper noise, spatially Gaussian noise, scratches,
-color and contrast changes.}
+grey level and contrast changes.}
 \label{fig:transfo}
 \end{figure}
 
@@ -320,9 +321,11 @@
 examples~\cite{Larochelle-jmlr-toappear-2008,VincentPLarochelleH2008}, we want
 to focus here on the case of much larger training sets, from 10 times to 
 1000 times larger.  The larger datasets are obtained by first sampling from
-a {\em data source} (NIST characters, scanned machine printed characters, characters
-from fonts, or characters from captchas) and then optionally applying some of the
-above transformations and/or noise processes.
+a {\em data source}: {\bf NIST} (NIST database 19), {\bf Fonts}, {\bf Captchas},
+and {\bf OCR data} (scanned machine printed characters). Once a character
+is sampled from one of these sources (chosen randomly), a pipeline of
+the above transformations and/or noise processes is applied to the
+image.
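The generation loop can be sketched as follows; the sampler and transformation callables are hypothetical placeholders standing in for the modules described above, not the actual implementation:

    import random

    def generate_example(source_samplers, transformations, rng=random):
        # source_samplers: zero-argument callables returning (image, label),
        # one per data source (NIST, Fonts, Captchas, OCR data).
        # transformations: callables image -> image, applied in sequence;
        # each may act as a no-op depending on its sampled parameters.
        sample_source = rng.choice(source_samplers)   # pick a source at random
        image, label = sample_source()
        for transform in transformations:
            image = transform(image)
        return image, label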
 
 \vspace*{-1mm}
 \subsection{Data Sources}