1. “To review, you can approximate any function over an interval of length L with sines and cosines with period equal to L:”

> Depends on what you mean by “approximate” and what you mean by “any function”. If you only require the Fourier series to converge at “most” points, then Carleson’s Theorem [1] (only proven in 1966 after a lot of searching over the years!) is the definitive result. It says the Fourier series will converge pointwise to the value of the function for almost all values of x if the function raised to some power strictly greater than 1 has a finite integral over the interval of length L.

> Let me also add that the question of which functions could be represented by a Fourier series was the subject of enormous debate in 19th century mathematics involving Poincaré and others. [2]

2. “… and 0 of m≠n. This can be used to isolate c_m:” and a few lines later “The output of the Fourier transform is the set of c_m’s”

> This is right before the start of the Discrete Fourier Series section. The first “of” should be “if”, the word “set” should be “sequence”, and I think you managed to get a backwards apostrophe on the last c_m.

3. “This is effectively computing the integral with a rectangular approximation” … “runtime of this circuit” … “butterfly diagram”

> This is called a Riemann sum. [3] Also, it is unclear which circuit you are referring to; wouldn’t it be simpler to say that N^2 terms must be added to compute f(0), …, f(N-1) in this manner? Lastly, it seems cryptic to refer to the butterfly diagram without including or linking to a picture of what you have in mind.
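To make the quadratic count concrete, here is a minimal direct DFT in Python (my own sketch to illustrate the point, not code from the post; the names are arbitrary):

```python
import cmath

def naive_dft(x):
    """Direct O(N^2) DFT: each of the N outputs is a sum of N terms."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Example: the DFT of a constant signal concentrates at k = 0.
X = naive_dft([1.0, 1.0, 1.0, 1.0])
```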

4. Citations: “ThÃ©orie analytique de la chaleur (in French). Paris: Firmin Didot PÃ¨re et Fils.” and “An algorithm for the machine calculation of complex Fourier series”. Math. Comput. 19: 297â€“

> There are some symbols that aren’t rendering correctly for me in your citations list (they appear to mostly be accents), and again a backwards quotation mark has snuck in.

Overall nice writeup. The whole point of the FFT can be expressed quite concisely: it is a classic example of a “divide and conquer” algorithm, and such algorithms typically yield speedups that replace a factor of N by log N. I would have stated something to this effect earlier on to get the main point across, rather than leading with the details.

[1] https://en.wikipedia.org/wiki/Carleson's_theorem

[2] http://henripoincarepapers.univ-lorraine.fr/chp/text/michelson.html

[3] https://en.wikipedia.org/wiki/Riemann_sum
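The divide-and-conquer point above can be made concrete with a minimal radix-2 Cooley–Tukey sketch (my own illustration, not code from the post; it assumes the length is a power of two):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splitting into even/odd halves turns N^2 work into N log N."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Combine the two half-size transforms with twiddle factors.
    twiddle = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddle[k] for k in range(N // 2)] +
            [even[k] - twiddle[k] for k in range(N // 2)])

X = fft([1.0, 1.0, 1.0, 1.0])
```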

“The first machine computation with this algorithm known to the author was done by Vern Herbert, who used it extensively in the interpretation of reflection seismic data. He programmed it on an IBM 1401 computer at Chevron Standard Ltd., Calgary, Canada in 1962. Herbert never published the method. It was rediscovered and widely publicized by Cooley and Tukey in 1965.” [1]

[1] Claerbout, J., 1985, Fundamentals of Geophysical Data Processing, p. 12.

Also, nice pictures! 😉

The sum Z = \sum_{s’ \in S} e^{-E(s)/kT} initially seems daunting.

Shouldn’t this be:

The sum Z = \sum_{s’ \in S} e^{-E(s’)/kT} initially seems daunting.
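On a toy system the corrected sum is easy to evaluate directly (the states, energies, and kT = 1 below are all made up for illustration):

```python
import math

# Toy system: three states with invented energies, kT set to 1.
energies = {"a": 0.0, "b": 1.0, "c": 2.0}
kT = 1.0

# Partition function: Z = sum over states s' of exp(-E(s')/kT).
Z = sum(math.exp(-E / kT) for E in energies.values())

# Boltzmann probability of each state; these must sum to 1.
probs = {s: math.exp(-E / kT) / Z for s, E in energies.items()}
```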

Still processing the rest :p

If you change transition probabilities mid-algorithm, you must be very careful that you have not made your Markov chain irreversible.

An intuitive way to understand detailed balance is that it means your state transitions are reversible: the likelihood of a path is the same in either direction.
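That reversibility can be checked directly on a small chain. The three-state system and uniform Metropolis proposal below are my own toy example, not anything from the post:

```python
import math

# Toy 3-state system; target distribution is Boltzmann at kT = 1.
E = [0.0, 1.0, 2.0]
weights = [math.exp(-e) for e in E]
pi = [w / sum(weights) for w in weights]

def metropolis_p(i, j):
    """Transition probability i -> j: propose one of the 2 other states
    uniformly, accept with probability min(1, pi_j / pi_i)."""
    if i == j:
        return 1.0 - sum(metropolis_p(i, k) for k in range(3) if k != i)
    return 0.5 * min(1.0, pi[j] / pi[i])

# Detailed balance: pi_i * P(i -> j) == pi_j * P(j -> i) for every pair,
# i.e. probability flow is the same in both directions along each edge.
balanced = all(abs(pi[i] * metropolis_p(i, j) - pi[j] * metropolis_p(j, i)) < 1e-12
               for i in range(3) for j in range(3))
```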

Simulated annealing makes use of the idea of Metropolis-Hastings by constructing a FAMILY of reversible Markov chains (one per temperature). http://www.mit.edu/~dbertsim/papers/Optimization/Simulated%20annealing.pdf is a survey paper.
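Here is a minimal sketch of that family-of-chains view (the toy objective, move set, and cooling schedule are all invented for illustration):

```python
import math
import random

def anneal(energy, neighbor, s0, temps, steps_per_temp=200, seed=0):
    """Run a Metropolis chain at each temperature in `temps` (one reversible
    chain per T), carrying the state from one chain into the next."""
    rng = random.Random(seed)
    s, e = s0, energy(s0)
    for T in temps:
        for _ in range(steps_per_temp):
            s2 = neighbor(s, rng)
            e2 = energy(s2)
            # Metropolis acceptance keeps each fixed-T chain reversible.
            if e2 <= e or rng.random() < math.exp(-(e2 - e) / T):
                s, e = s2, e2
    return s, e

# Toy problem: minimize (x - 3)^2 over the integers via +/-1 moves,
# with a geometric cooling schedule.
temps = [2.0 * 0.9 ** k for k in range(40)]
x_final, e_final = anneal(lambda x: (x - 3) ** 2,
                          lambda x, r: x + r.choice((-1, 1)),
                          s0=20, temps=temps)
```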
