diff writeup/nips2010_submission.tex @ 536:5157a5830125
One comma
author   | Dumitru Erhan <dumitru.erhan@gmail.com>
date     | Tue, 01 Jun 2010 18:28:09 -0700
parents  | 85f2337d47d2
children | 47894d0ecbde
--- a/writeup/nips2010_submission.tex	Tue Jun 01 18:19:40 2010 -0700
+++ b/writeup/nips2010_submission.tex	Tue Jun 01 18:28:09 2010 -0700
@@ -691,7 +691,7 @@
 experiments showed its positive effects in a \emph{limited labeled data}
 scenario. However, many of the results by \citet{RainaR2007} (who used a
 shallow, sparse coding approach) suggest that the relative gain of self-taught
-learning diminishes as the number of labeled examples increases, (essentially,
+learning diminishes as the number of labeled examples increases (essentially,
 a ``diminishing returns'' scenario occurs). We note instead that, for deep
 architectures, our experiments show that such a positive effect is accomplished
 even in a scenario with a \emph{very large number of labeled examples}.