changeset 538:f0ee2212ea7c

typos and stuff
author Dumitru Erhan <dumitru.erhan@gmail.com>
date Tue, 01 Jun 2010 19:34:00 -0700
parents 47894d0ecbde (diff) 4d6493d171f6 (current diff)
children 84f42fe05594
files writeup/nips2010_submission.tex
diffstat 1 files changed, 5 insertions(+), 5 deletions(-)
--- a/writeup/nips2010_submission.tex	Tue Jun 01 22:12:13 2010 -0400
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 19:34:00 2010 -0700
@@ -334,7 +334,7 @@
 
 \iffalse
 \begin{figure}[ht]
-\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
+\centerline{\resizebox{.9\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
 \caption{Illustration of the pipeline of stochastic 
 transformations applied to the image of a lower-case \emph{t}
 (the upper left image). Each image in the pipeline (going from
@@ -446,7 +446,7 @@
 
 %\item 
 {\bf NIST.} This is the raw NIST special database 19~\citep{Grother-1995}. It has
-\{651668 / 80000 / 82587\} \{training / validation / test} examples.
+\{651668 / 80000 / 82587\} \{training / validation / test\} examples.
 
 %\item 
 {\bf P07.} This dataset is obtained by taking raw characters from all four of the above sources
@@ -454,7 +454,7 @@
 For each new example to generate, a data source is selected with probability $10\%$ from the fonts,
 $25\%$ from the captchas, $25\%$ from the OCR data and $40\%$ from NIST. We apply all the transformations in the
 order given above, and for each of them we sample uniformly a \emph{complexity} in the range $[0,0.7]$.
-It has \{81920000 / 80000 / 20000\} \{training / validation / test} examples.
+It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
 
 %\item 
 {\bf NISTP.} This one is equivalent to P07 (complexity parameter of $0.7$ with the same proportions of data sources)
@@ -462,7 +462,7 @@
   transformations from slant to pinch. Therefore, the character is
   transformed but no additional noise is added to the image, giving images
   closer to the NIST dataset. 
-It has \{81920000 / 80000 / 20000\} \{training / validation / test} examples.
+It has \{81920000 / 80000 / 20000\} \{training / validation / test\} examples.
 %\end{itemize}
 
 \vspace*{-1mm}
@@ -695,7 +695,7 @@
 experiments showed its positive effects in a \emph{limited labeled data}
 scenario. However, many of the results by \citet{RainaR2007} (who used a
 shallow, sparse coding approach) suggest that the relative gain of self-taught
-learning diminishes as the number of labeled examples increases, (essentially,
+learning diminishes as the number of labeled examples increases (essentially,
 a ``diminishing returns'' scenario occurs).  We note instead that, for deep
 architectures, our experiments show that such a positive effect is accomplished
 even in a scenario with a \emph{very large number of labeled examples}.
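For reference, the P07 hunk above describes a simple generative recipe: pick a raw-character source with probability 10% fonts, 25% captchas, 25% OCR, 40% NIST, then apply the transformation pipeline in its fixed order, drawing a complexity uniformly from [0, 0.7] for each transformation. The sketch below is a minimal Python illustration of that sampling scheme only, not the authors' actual generator; the `sources` callables and the `TRANSFORMATIONS` list are hypothetical placeholders.

import random

# Source mixture described in the P07 hunk above.
SOURCE_PROBS = {"fonts": 0.10, "captcha": 0.25, "ocr": 0.25, "nist": 0.40}

# Hypothetical ordered pipeline: each entry is a callable
# f(image, complexity) -> image, applied in the order given in the paper.
TRANSFORMATIONS = []

def generate_p07_example(sources, rng=random):
    # sources: dict mapping source name -> callable returning a raw character image
    name = rng.choices(list(SOURCE_PROBS), weights=list(SOURCE_PROBS.values()))[0]
    image = sources[name]()
    for transform in TRANSFORMATIONS:
        complexity = rng.uniform(0.0, 0.7)  # per-transformation complexity in [0, 0.7]
        image = transform(image, complexity)
    return image

Under these assumptions, NISTP would correspond to the same recipe with the pipeline truncated to the transformations from slant to pinch, so that no additional noise is added.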