# HG changeset patch
# User Dumitru Erhan
# Date 1275442089 25200
# Node ID 5157a583012508797ded72fa22bfc1b6a626ff11
# Parent  85f2337d47d284b71a642de979e3c68fb38bd099
One comma

diff -r 85f2337d47d2 -r 5157a5830125 writeup/nips2010_submission.tex
--- a/writeup/nips2010_submission.tex	Tue Jun 01 18:19:40 2010 -0700
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 18:28:09 2010 -0700
@@ -691,7 +691,7 @@
 experiments showed its positive effects in a \emph{limited labeled data}
 scenario. However, many of the results by \citet{RainaR2007} (who used a
 shallow, sparse coding approach) suggest that the relative gain of self-taught
-learning diminishes as the number of labeled examples increases, (essentially,
+learning diminishes as the number of labeled examples increases (essentially,
 a ``diminishing returns'' scenario occurs). We note instead that, for deep
 architectures, our experiments show that such a positive effect is accomplished
 even in a scenario with a \emph{very large number of labeled examples}.