# HG changeset patch
# User Dumitru Erhan
# Date 1275442123 25200
# Node ID 47894d0ecbde3a487d5d6cc03b5cb919611e7390
# Parent 5157a583012508797ded72fa22bfc1b6a626ff11
# Parent 22d5cd82d5f08cc0123105f955c7c16014e6be0e
merge

diff -r 22d5cd82d5f0 -r 47894d0ecbde writeup/nips2010_submission.tex
--- a/writeup/nips2010_submission.tex	Tue Jun 01 21:24:39 2010 -0400
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 18:28:43 2010 -0700
@@ -692,7 +692,7 @@
 experiments showed its positive effects in a \emph{limited labeled data}
 scenario. However, many of the results by \citet{RainaR2007} (who used a
 shallow, sparse coding approach) suggest that the relative gain of self-taught
-learning diminishes as the number of labeled examples increases, (essentially,
+learning diminishes as the number of labeled examples increases (essentially,
 a ``diminishing returns'' scenario occurs). We note instead that, for deep
 architectures, our experiments show that such a positive effect is accomplished
 even in a scenario with a \emph{very large number of labeled examples}.