changeset 643:24d9819a810f

final AISTATS reviews
author Yoshua Bengio <bengioy@iro.umontreal.ca>
date Thu, 24 Mar 2011 17:04:38 -0400
parents 507cb92d8e15
children e63d23c7c9fb
files writeup/ReviewsAISTATSfinal.html writeup/ReviewsAISTATSfinal_files/conferencelogo.gif
diffstat 2 files changed, 379 insertions(+), 0 deletions(-) [+]
line wrap: on
line diff
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/writeup/ReviewsAISTATSfinal.html	Thu Mar 24 17:04:38 2011 -0400
@@ -0,0 +1,379 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<!-- saved from url=(0096)https://cmt.research.microsoft.com/AIS2011/Protected/Author/ViewReviewsForPaper.aspx?paperId=126 -->
+<html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>
+	Reviews For Paper
+</title>
+<style>
+#header
+{
+    width: 100%;
+    font-size: small;
+    background-color:#F7F7F7;
+}
+.printThemeText
+{
+    font-size:small;
+}
+.printThemeTable td
+{
+    vertical-align:top;
+}
+.printThemeGrid th
+{
+    color:white;
+    background:#5D7B9D;
+    font-weight:bold;
+}
+.printThemeGrid
+{
+    border-collapse:collapse;
+}
+.printThemeGrid td, .printThemeGrid th
+{
+    border:solid 1px #D6D3CE;
+    padding:4px 4px 4px 4px;
+}
+.printThemeGrid .row
+{ 
+    background-color:#F7F6F3;
+    color:#333333;
+    vertical-align:top;
+}
+.printThemeGrid .altrow
+{ 
+    background-color:White;
+    color:#284775;
+    vertical-align:top;
+}
+.cellprompt
+{
+	font-weight:bold;
+	white-space:nowrap;
+    width:100px;	
+}
+.paperHeader
+{
+    background-color:#dee3e7;
+    margin:5px 5px 15px 0px;
+    width:99%;
+    font-family:Verdana;
+    font-size:medium;
+    font-weight:bold;
+}
+.sectionHeader
+{
+    background-color:#dee3e7;
+    padding:5px 5px 5px 0px;
+    width:99%;
+    text-decoration:underline;
+    font-family:Verdana;
+    font-size:small;
+    font-weight:bold;
+}
+.underlineheader
+{
+    text-decoration:underline;
+    font-weight:bold;
+    padding:5px 0px;
+}
+.response
+{
+    padding:5px 0px;
+}
+.reviewerlabel
+{
+    padding-right:20px;
+}
+.pageTitle
+{
+    background-color:#dee3e7;
+    padding:5px 5px 5px 5px;
+    margin-top:10px;
+    width:99%;
+    font-family:Verdana;
+    font-size:medium;
+    font-weight:bold;
+}
+.submissionDetailsView
+{
+}
+.submissionDetailsView tr
+{
+    vertical-align:top;
+}
+.submissionDetailsView td.prompt
+{
+    font-weight:bold;
+}
+.submissionDetailsView tr.sectionSeparator
+{
+
+}
+.submissionDetailsView tr.sectionSeparator td
+{
+    background-color:#dee3e7;
+    padding:5px 5px 5px 5px;
+    font-family:Verdana;
+    font-size:small;
+    font-weight:bold;
+    color:Navy;
+}
+</style>
+</head>
+<body>
+<form name="aspnetForm" method="post" action="./ReviewsAISTATSfinal_files/ReviewsAISTATSfinal.html" id="aspnetForm">
+<div>
+</div>
+
+<table id="header">
+<tbody><tr>
+<td><img src="./ReviewsAISTATSfinal_files/conferencelogo.gif"></td>
+<td width="100%"><a href="http://www.aistats.org/">AI &amp; Statistics 2011 </a><br><b>Fourteenth International Conference on Artificial Intelligence and Statistics </b><br>April 11-13, 2011<br>Ft. Lauderdale, FL<br>USA</td>
+</tr>
+</tbody></table>
+<table id="content"><tbody><tr><td class="contentBorder">&nbsp;</td><td class="contentContainer">
+<span id="ctl00_cph_Label4" style="font-size:Small;font-weight:bold;">Reviews For Paper</span>
+<span id="ctl00_cph_lblErrorMessage" class="error" style="font-size:Small;"></span>
+<div id="ctl00_cph_pnlReviews">
+	
+    <span style="font-size:Small;">
+<table class="nicetable2" style="text-align:left; width: 100%;">
+    
+    <tbody><tr>
+        <td width="100px"><b>Paper ID</b></td>
+        <td><span id="ctl00_cph_infoSubmission_lblPaperId" style="font-size:Small;">126</span></td>
+    </tr>
+    <tr>
+        <td><b>Title</b></td>
+        <td><span id="ctl00_cph_infoSubmission_lblPaperTitle" style="font-size:Small;">Deep Learners Benefit More from Out-of-Distribution Examples</span></td>
+    </tr>
+    
+    
+    
+    
+    
+</tbody></table></span>
+    
+    
+            <hr>
+            <table>
+                <tbody><tr>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl00_Label2" style="font-size:Small;font-weight:bold;">Masked Reviewer ID:</span>
+                    </td>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl00_Label1" style="font-size:Small;">Assigned_Reviewer_2</span>
+                    </td>
+                </tr>
+                <tr>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl00_Label3" style="font-size:Small;font-weight:bold;">Review:</span>
+                    </td>
+                    <td>
+                    </td>
+                </tr>
+            </tbody></table>
+            <div>
+		<table cellspacing="0" cellpadding="4" rules="all" border="1" style="color:#333333;border-width:1px;border-style:None;font-family:Verdana;font-size:Small;border-collapse:collapse;">
+			<tbody><tr style="color:White;background-color:#5D7B9D;font-weight:bold;">
+				<th scope="col">Question</th><th scope="col">&nbsp;</th>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Overall rating: please synthesize your answers to other questions into an overall recommendation.  Please take into account tradeoffs (an increase in one measure may compensate for a decrease in another), and describe the tradeoffs in the detailed comments.</td><td style="width:80%;">
+                            Very good: suggest accept
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Technical quality: is all included material presented clearly and correctly?</td><td style="width:80%;">
+                            Good
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Originality: how much new work is represented in this paper, beyond previous conference/journal papers?</td><td style="width:80%;">
+                            Substantial new material
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Interest and significance: would the paper's goal, if completely solved, represent a substantial advance for the AISTATS community?</td><td style="width:80%;">
+                            Significant
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Thoroughness: to what degree does the paper support its conclusions through experimental comparisons, theorems, etc.?</td><td style="width:80%;">
+                            Thorough
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Creativity: to what degree does the paper represent a novel way of setting up a problem or an unusual approach to solving it?</td><td style="width:80%;">
+                            Most content represents application of known ideas
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Detailed Comments</td><td style="width:80%;">
+                            This paper shows that deep networks benefit more from out-of-distribution examples than shallower architectures on a large-scale character recognition experiment. A thorough empirical validation shows that deep nets produce better discrimination (than shallower nets) when trained with distorted characters and when trained on multiple tasks.
+<br>Although the methods used are already well established in the community, these results are significant and provide new insights into the representational power of this class of methods.
+<br>
+<br>COMMENTS AFTER READING SECOND VERSION
+<br>The paper is ready for publication.
+<br>
+<br>- Minor comment: it would be helpful to add the functional forms of the encoder and decoder of the DAE. The reference to [30] (DAE is equivalent to a Gaussian RBM trained with Score Matching) might not be so relevant if the authors use different kinds of encoder and decoder functions (in particular, is the reconstruction squashed through a logistic, as usually done to model binary discrete variables? Are the weight matrices symmetric?)
+                        </td>
+			</tr>
+		</tbody></table>
+	</div>
+            
+        
+            <hr>
+            <table>
+                <tbody><tr>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl01_Label2" style="font-size:Small;font-weight:bold;">Masked Reviewer ID:</span>
+                    </td>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl01_Label1" style="font-size:Small;">Assigned_Reviewer_3</span>
+                    </td>
+                </tr>
+                <tr>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl01_Label3" style="font-size:Small;font-weight:bold;">Review:</span>
+                    </td>
+                    <td>
+                    </td>
+                </tr>
+            </tbody></table>
+            <div>
+		<table cellspacing="0" cellpadding="4" rules="all" border="1" style="color:#333333;border-width:1px;border-style:None;font-family:Verdana;font-size:Small;border-collapse:collapse;">
+			<tbody><tr style="color:White;background-color:#5D7B9D;font-weight:bold;">
+				<th scope="col">Question</th><th scope="col">&nbsp;</th>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Overall rating: please synthesize your answers to other questions into an overall recommendation.  Please take into account tradeoffs (an increase in one measure may compensate for a decrease in another), and describe the tradeoffs in the detailed comments.</td><td style="width:80%;">
+                            Very good: suggest accept
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Technical quality: is all included material presented clearly and correctly?</td><td style="width:80%;">
+                            Very good
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Originality: how much new work is represented in this paper, beyond previous conference/journal papers?</td><td style="width:80%;">
+                            Substantial new material
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Interest and significance: would the paper's goal, if completely solved, represent a substantial advance for the AISTATS community?</td><td style="width:80%;">
+                            Significant
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Thoroughness: to what degree does the paper support its conclusions through experimental comparisons, theorems, etc.?</td><td style="width:80%;">
+                            Thorough
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Creativity: to what degree does the paper represent a novel way of setting up a problem or an unusual approach to solving it?</td><td style="width:80%;">
+                            Most content represents novel approaches
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Detailed Comments</td><td style="width:80%;">
+                            This paper claims that using out-of-distribution examples can be more helpful in training deep architectures than shallow architectures. To test this hypothesis, the paper develops extensive transformations for image patches (i.e., images of handwritten characters) to generate a large-scale dataset of perturbed images. MLPs and stacked denoising auto-encoders (SDAs) are then trained on these out-of-distribution examples. In the experiments, the paper shows that SDAs outperform MLPs, achieving human-level performance on the NIST dataset. The paper also provides two interesting experiments showing that: (1) SDAs can benefit from training on perturbed data, even when testing on clean data; (2) SDAs can significantly benefit from multi-task learning.
+<br>
+<br>
+<br>Questions, comments, and suggestions:
+<br>1. Regarding the human labeling, I have some concerns about labeling noise/biases due to AMT. How were anomalies in labeling, or outliers, controlled? Was there any procedure to minimize labeling noise/biases or to ensure that human labelers tried their best (e.g., filtering out random guesses or encouraging the labelers to consider all possibilities carefully before providing premature guesses)? For example, multi-stage questionnaires (e.g., asking "character/digit", then "uppercase/lowercase", then choosing one out of 10 digits or 26 characters) might significantly reduce labeling noise/biases compared with showing 62 candidate answers simultaneously.
+<br>
+                            2. It seems that the paper fixed the number of hidden layers at three. Despite the good performance of the proposed architecture, it is somewhat unclear whether the benefit comes mainly from the deep architecture or from the use of denoising auto-encoders.
+<br>
+<br>Therefore, it would be interesting to see the effect of the number of layers and of other pre-training methods (e.g., RBMs or auto-encoders). This experiment would clarify where the benefit comes from (i.e., deep architecture vs. pre-training modules) and provide more insight into the results.
+<br>
+<br>3. The paper briefly mentions the use of libSVM, but it would be useful to compare against results obtained with an online SVM (e.g., PEGASOS).
+<br>
+<br>4. The paper also discusses the effect of large labeled data in the self-taught learning setting. To strengthen the claim, it would be helpful to show the test accuracy as a function of the number of labeled examples.
+<br>
+<br>Overall, the paper is clearly written, and it provides interesting experiments on large-scale datasets, addressing a number of interesting questions related to deep learning and multi-task learning. Furthermore, this work can provide a new large-scale benchmark dataset (beyond MNIST) for deep learning and machine learning research.
+<br>
+                        </td>
+			</tr>
+		</tbody></table>
+	</div>
+            
+        
+            <hr>
+            <table>
+                <tbody><tr>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl02_Label2" style="font-size:Small;font-weight:bold;">Masked Reviewer ID:</span>
+                    </td>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl02_Label1" style="font-size:Small;">Assigned_Reviewer_4</span>
+                    </td>
+                </tr>
+                <tr>
+                    <td>
+                        <span id="ctl00_cph_gvReviews_ctl02_Label3" style="font-size:Small;font-weight:bold;">Review:</span>
+                    </td>
+                    <td>
+                    </td>
+                </tr>
+            </tbody></table>
+            <div>
+		<table cellspacing="0" cellpadding="4" rules="all" border="1" style="color:#333333;border-width:1px;border-style:None;font-family:Verdana;font-size:Small;border-collapse:collapse;">
+			<tbody><tr style="color:White;background-color:#5D7B9D;font-weight:bold;">
+				<th scope="col">Question</th><th scope="col">&nbsp;</th>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Overall rating: please synthesize your answers to other questions into an overall recommendation.  Please take into account tradeoffs (an increase in one measure may compensate for a decrease in another), and describe the tradeoffs in the detailed comments.</td><td style="width:80%;">
+                            Good: suggest accept
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Technical quality: is all included material presented clearly and correctly?</td><td style="width:80%;">
+                            Very good
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Originality: how much new work is represented in this paper, beyond previous conference/journal papers?</td><td style="width:80%;">
+                            Substantial new material
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Interest and significance: would the paper's goal, if completely solved, represent a substantial advance for the AISTATS community?</td><td style="width:80%;">
+                            Significant
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Thoroughness: to what degree does the paper support its conclusions through experimental comparisons, theorems, etc.?</td><td style="width:80%;">
+                            Thorough
+                        </td>
+			</tr><tr style="color:#284775;background-color:White;">
+				<td style="width:20%;">Creativity: to what degree does the paper represent a novel way of setting up a problem or an unusual approach to solving it?</td><td style="width:80%;">
+                            Most content represents application of known ideas
+                        </td>
+			</tr><tr style="color:#333333;background-color:#F7F6F3;">
+				<td style="width:20%;">Detailed Comments</td><td style="width:80%;">
+                            The paper demonstrates that deeper networks benefit from "related data" more than shallow ones, and is fairly well written. Along the way, it constructs a very powerful character recognition system.
+<br>
+<br>I have several minor suggestions that would improve the paper. 
+<br>
+<br>First, the "human-level performance", as obtained via Mechanical
+<br>Turk, likely ignores the intrinsic noise and sloppiness of the human
+<br>annotators. That is, the annotators may occasionally select an incorrect
+<br>label having recognized the image correctly. This effect can be measured by
+<br>creating a very clean dataset (so the true error rate is zero) and having
+<br>it labelled by MT. The resulting error rate is likely to be greater than
+<br>zero, and should be taken into account in the human-performance estimate.
+<br>
+<br>Second, the experiments convincingly showed that deep SDA nets do
+<br>quite well and benefit from more related data.  However, recent work
+<br>[http://arxiv.org/abs/1003.0358] has shown that very deep neural
+<br>networks are very effective in the regime explored precisely in this
+<br>paper, even without pretraining of any kind. Therefore, it would help 
+<br>to redo these experiments with an SDA with 6 or
+<br>even 8 layers (or more), and to train the deep networks with no
+<br>pretraining but with a careful random initialization that
+<br>uses the correct scale at each layer.
+<br>
+<br>Lastly, there are no MNIST results. It is important to compare on
+<br>MNIST because there are many careful results on MNIST.
+<br>
+<br>
+<br>
+                        </td>
+			</tr>
+		</tbody></table>
+	</div>
+            
+        
+    <br>
+    <br>
+
+</div>
+</td><td class="contentBorder">&nbsp;</td></tr></tbody></table>
+</form>
+
+
+</body></html>
\ No newline at end of file
Binary file writeup/ReviewsAISTATSfinal_files/conferencelogo.gif has changed