changeset:   1018:790376d986a3
description: initial document for sampling
author:      gdesjardins
date:        Fri, 03 Sep 2010 15:01:02 -0400
parents:     3977ecd49431
children:    91916536a304
files:       doc/v2_planning/sampler.txt
diffstat:    1 files changed, 39 insertions(+), 0 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc/v2_planning/sampler.txt	Fri Sep 03 15:01:02 2010 -0400
@@ -0,0 +1,39 @@

OVERVIEW
========

Before we start defining what a sampler is and how it should be defined in
pylearn, we should first know what we're up against.

The workflow I have in mind is the following:

1. identify the most popular sampling algorithms in the literature
2. get up to speed with the methods we're not familiar with
3. identify common usage patterns, properties of the algorithms, etc.
4. decide on an API / the best way to implement them
5. prioritize the algorithms
6. code away

1. BACKGROUND
=============

This section should provide a brief overview of what exists in the
literature. We should make sure to have a decent understanding of all of
these methods (not everyone has to be an expert, though), so that we can
*intelligently* design our sampler interface based on common usage patterns,
properties, etc.

Sampling from basic distributions (a short sketch of these appears in the
SKETCHES section below):

* already supported: uniform, normal, binomial, multinomial
* wish list: beta, Poisson, others?

List of sampling algorithms (minimal sketches of several of these follow
below):

* inversion sampling
* rejection sampling
* importance sampling
* Markov chain Monte Carlo (MCMC)
* Gibbs sampling
* Metropolis-Hastings
* slice sampling
* annealing
* parallel tempering, tempered transitions, simulated tempering
* nested sampling (?)
* Hamiltonian Monte Carlo
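
SKETCHES
========

The sketches below are illustrations only: they use plain NumPy, not the
eventual pylearn API (which would presumably go through Theano's random
support), and each is a minimal sketch under stated assumptions rather than
a reference implementation. All helper names (sample_exponential,
rejection_sample, etc.) are made up for illustration.

The "already supported" basic distributions map directly onto methods of
NumPy's random generator::

    import numpy

    rng = numpy.random.RandomState(1234)              # seeded generator
    u = rng.uniform(low=0.0, high=1.0, size=(5,))     # uniform on [0, 1)
    g = rng.normal(loc=0.0, scale=1.0, size=(5,))     # standard normal
    b = rng.binomial(n=1, p=0.5, size=(5,))           # Bernoulli coin flips
    m = rng.multinomial(n=20, pvals=[0.2, 0.3, 0.5])  # counts over 3 outcomes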
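Inversion sampling draws u ~ Uniform(0, 1) and pushes it through the inverse
CDF of the target: x = F^{-1}(u). A sketch for the exponential distribution,
where the inverse CDF is available in closed form::

    import numpy

    def sample_exponential(rng, lam, size):
        """Inversion sampling for the exponential distribution.

        F(x) = 1 - exp(-lam * x), hence F^{-1}(u) = -log(1 - u) / lam.
        """
        u = rng.uniform(size=size)
        return -numpy.log(1.0 - u) / lam

    rng = numpy.random.RandomState(0)
    x = sample_exponential(rng, lam=2.0, size=1000)   # sample mean ~ 0.5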
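Rejection sampling draws x from a proposal q and accepts it with probability
f(x) / (m * q(x)), where f is the (possibly unnormalized) target density and
m is an envelope constant with f(x) <= m * q(x) everywhere. A sketch with a
uniform proposal on [0, 1] and an unnormalized Beta(2, 2) target::

    import numpy

    def rejection_sample(rng, f, m, n_samples):
        """Rejection sampling on [0, 1] with a uniform proposal q(x) = 1;
        m must satisfy f(x) <= m for all x in [0, 1]."""
        samples = []
        while len(samples) < n_samples:
            x = rng.uniform()        # draw from the proposal q
            u = rng.uniform()        # uniform for the acceptance test
            if u * m <= f(x):        # accept with probability f(x) / m
                samples.append(x)
        return numpy.array(samples)

    # unnormalized Beta(2, 2) density f(x) = x * (1 - x), bounded by 0.25
    rng = numpy.random.RandomState(0)
    draws = rejection_sample(rng, lambda x: x * (1.0 - x), 0.25, 1000)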
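Importance sampling never produces samples from the target p at all; it
reweights draws from a proposal q by w(x) = p(x) / q(x) and uses the weights
to estimate expectations under p. A self-normalized sketch, estimating
E[x^2] under a standard normal from draws of a wider normal::

    import numpy

    def normal_pdf(x, mu, sigma):
        z = (x - mu) / sigma
        return numpy.exp(-0.5 * z ** 2) / (sigma * numpy.sqrt(2 * numpy.pi))

    rng = numpy.random.RandomState(0)
    x = rng.normal(loc=0.0, scale=2.0, size=100000)   # q: normal, sigma = 2
    w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0)  # weights p / q

    # self-normalized estimate of E_p[x^2]; exact answer is 1.0
    estimate = numpy.sum(w * x ** 2) / numpy.sum(w)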
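Gibbs sampling is the MCMC variant that cycles through the variables,
resampling each one from its conditional distribution given the current
values of the others. A sketch for a zero-mean bivariate normal with unit
variances and correlation rho, whose conditionals are available in closed
form::

    import numpy

    def gibbs_bivariate_normal(rng, rho, n_steps):
        """Gibbs sampling: x1 | x2 ~ N(rho * x2, 1 - rho^2), and
        symmetrically for x2 | x1."""
        x1, x2 = 0.0, 0.0
        chain = numpy.empty((n_steps, 2))
        s = numpy.sqrt(1.0 - rho ** 2)                # conditional std. dev.
        for t in range(n_steps):
            x1 = rng.normal(loc=rho * x2, scale=s)    # resample x1 | x2
            x2 = rng.normal(loc=rho * x1, scale=s)    # resample x2 | x1
            chain[t] = (x1, x2)
        return chain

    rng = numpy.random.RandomState(0)
    chain = gibbs_bivariate_normal(rng, rho=0.8, n_steps=10000)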
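Metropolis-Hastings proposes a move x -> x' from a proposal q and accepts it
with probability min(1, [p(x') q(x | x')] / [p(x) q(x' | x)]). The sketch
below is the random-walk special case with a symmetric Gaussian proposal, so
the q terms cancel and only the target density up to a constant is needed::

    import numpy

    def metropolis(rng, log_p, x0, step, n_steps):
        """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
        with probability min(1, p(x') / p(x)), computed in the log domain."""
        x = x0
        chain = numpy.empty(n_steps)
        for t in range(n_steps):
            x_new = x + rng.normal(scale=step)
            if numpy.log(rng.uniform()) < log_p(x_new) - log_p(x):
                x = x_new             # accept; otherwise keep the old state
            chain[t] = x
        return chain

    # target: standard normal, log p(x) = -x^2 / 2 up to a constant
    rng = numpy.random.RandomState(0)
    chain = metropolis(rng, lambda x: -0.5 * x * x, x0=0.0, step=1.0,
                       n_steps=10000)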