comparison writeup/nips2010_submission.tex @ 536:5157a5830125

One comma

author    Dumitru Erhan <dumitru.erhan@gmail.com>
date      Tue, 01 Jun 2010 18:28:09 -0700
parents   85f2337d47d2
children  47894d0ecbde
 In the original self-taught learning framework~\citep{RainaR2007}, the
 out-of-sample examples were used as a source of unsupervised data, and
 experiments showed its positive effects in a \emph{limited labeled data}
 scenario. However, many of the results by \citet{RainaR2007} (who used a
 shallow, sparse coding approach) suggest that the relative gain of self-taught
-learning diminishes as the number of labeled examples increases, (essentially,
+learning diminishes as the number of labeled examples increases (essentially,
 a ``diminishing returns'' scenario occurs). We note instead that, for deep
 architectures, our experiments show that such a positive effect is accomplished
 even in a scenario with a \emph{very large number of labeled examples}.

 Why would deep learners benefit more from the self-taught learning framework?