# HG changeset patch
# User Dumitru Erhan
# Date 1275446062 25200
# Node ID 84f42fe05594a0016d14484d8b09ce972980bd42
# Parent  f0ee2212ea7cbf49c94da985792c1da14481165f
# Parent  caf7769ca19c0e71bb5b27e1f112e692213e3fb0
merge

diff -r caf7769ca19c -r 84f42fe05594 writeup/nips2010_submission.tex
--- a/writeup/nips2010_submission.tex	Tue Jun 01 22:25:35 2010 -0400
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 19:34:22 2010 -0700
@@ -334,7 +334,7 @@
 \iffalse
 \begin{figure}[ht]
-\centerline{\resizebox{.99\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
+\centerline{\resizebox{.9\textwidth}{!}{\includegraphics{images/example_t.png}}}\\
 \caption{Illustration of the pipeline of stochastic
 transformations applied to the image of a lower-case \emph{t}
 (the upper left image). Each image in the pipeline (going from
@@ -700,7 +700,7 @@
 experiments showed its positive effects in a \emph{limited labeled data}
 scenario. However, many of the results by \citet{RainaR2007} (who used a
 shallow, sparse coding approach) suggest that the relative gain of self-taught
-learning diminishes as the number of labeled examples increases, (essentially,
+learning diminishes as the number of labeled examples increases (essentially,
 a ``diminishing returns'' scenario occurs).
 We note instead that, for deep architectures, our experiments show that
 such a positive effect is accomplished even in a scenario with a \emph{very large number of labeled examples}.