OVERVIEW
========

Before we start defining what a sampler is and how it should be defined in
pylearn, we should first know what we're up against.

The workflow I have in mind is the following:
1. identify the most popular sampling algorithms in the literature
2. get up to speed with methods we're not familiar with
3. identify common usage patterns, properties of the algorithms, etc.
4. decide on an API / best way to implement them
5. prioritize the algorithms
6. code away

1. BACKGROUND
=============

This section should provide a brief overview of what exists in the literature.
We should make sure to have a decent understanding of all of these (not everyone
has to be an expert, though), so that we can *intelligently* design our sampler
interface based on common usage patterns, properties, etc.

Sampling from basic distributions (a quick NumPy reference sketch follows below):
* already supported: uniform, normal, binomial, multinomial
* wish list: beta, poisson, others?

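As a point of reference only (plain NumPy, not a proposed pylearn API; the seed
and parameter values are arbitrary), the distributions above map onto
numpy.random as follows::

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # distributions already supported (per the list above)
    u = rng.uniform(low=0.0, high=1.0, size=(5,))
    n = rng.normal(loc=0.0, scale=1.0, size=(5,))
    b = rng.binomial(n=10, p=0.3, size=(5,))
    m = rng.multinomial(n=20, pvals=[0.2, 0.3, 0.5])

    # wish-list distributions
    be = rng.beta(a=2.0, b=5.0, size=(5,))
    po = rng.poisson(lam=4.0, size=(5,))
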
List of sampling algorithms (illustrative sketches of two of them follow the list):

* inversion sampling
* rejection sampling
* importance sampling
* Markov Chain Monte Carlo
* Gibbs sampling
* Metropolis-Hastings
* Slice Sampling
* Annealing
* Parallel Tempering, Tempered Transitions, Simulated Tempering
* Nested Sampling (?)
* Hamiltonian Monte Carlo
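
For a concrete feel of the simplest methods on this list, here is a minimal
rejection-sampling sketch in plain NumPy. The target density, interval and
envelope constant are made up for illustration, and none of the names below
are existing pylearn functions::

    import numpy as np

    def p_unnorm(x):
        """Unnormalized target density on [-3, 3] (arbitrary bimodal example)."""
        return np.exp(-0.5 * (x - 1.5) ** 2) + 0.5 * np.exp(-0.5 * (x + 1.5) ** 2)

    def rejection_sample(n_samples, rng, lo=-3.0, hi=3.0, envelope=1.1):
        """Sample from p_unnorm by proposing uniformly on [lo, hi] and keeping
        points that fall under the curve; `envelope` must upper-bound p_unnorm
        on [lo, hi] (its maximum here is about 1.01)."""
        samples = []
        while len(samples) < n_samples:
            x = rng.uniform(lo, hi)
            u = rng.uniform(0.0, envelope)
            if u < p_unnorm(x):  # accept with probability p_unnorm(x) / envelope
                samples.append(x)
        return np.array(samples)

    rng = np.random.default_rng(0)
    draws = rejection_sample(1000, rng)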
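
Along the same lines, a minimal random-walk Metropolis-Hastings sketch (again
NumPy only, with an arbitrary Gaussian target and step size). The point is the
shape of the algorithm rather than the specifics: it carries a current state,
a transition step and an accept/reject decision, which is exactly the kind of
usage pattern the sampler interface will have to accommodate::

    import numpy as np

    def log_p_unnorm(x):
        """Unnormalized log-density of a standard 2-D Gaussian (example target)."""
        return -0.5 * np.dot(x, x)

    def metropolis_hastings(n_steps, x0, step_size, rng):
        """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
        x = np.asarray(x0, dtype=float)
        log_p_x = log_p_unnorm(x)
        chain = np.empty((n_steps, x.size))
        n_accepted = 0
        for t in range(n_steps):
            proposal = x + step_size * rng.normal(size=x.shape)
            log_p_prop = log_p_unnorm(proposal)
            # symmetric proposal, so the acceptance ratio is p(proposal) / p(x)
            if np.log(rng.uniform()) < log_p_prop - log_p_x:
                x, log_p_x = proposal, log_p_prop
                n_accepted += 1
            chain[t] = x
        return chain, n_accepted / n_steps

    rng = np.random.default_rng(0)
    chain, rate = metropolis_hastings(5000, x0=[0.0, 0.0], step_size=1.0, rng=rng)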