In non-examinable Sections 13.4 and 13.5 I showed you some moderately sophisticated applications of the simple ideas we are learning about p.g.f.s and conditional expectation. I have added an Appendix B to the notes to say a bit more about the fast Fourier transform.
The brief story is this. To find the distribution of Y, the sum of two i.i.d. r.v.s X1 and X2 taking values in {0, 1, …, N−1}, we might find its p.g.f. p_Y(z) = p(z)^2 = (p_0 + p_1 z + ⋯ + p_{N−1} z^{N−1})^2. Expanding this directly would take O(N^2) multiplications of the form p_i p_j.
However, it is quicker to evaluate p(z) at 2N complex values z = ω^0, …, ω^{2N−1}, square those values, and then recover the real coefficients of the polynomial p(z)^2. The calculation simplifies because, by taking ω to be a 2Nth root of unity, ω = exp(−iπ/N), the evaluation of p(ω^k), k = 0, …, 2N−1, can be done by a clever method, the fast Fourier transform (FFT), which needs only O(N log N) multiplications.
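The idea above can be sketched in a few lines of Python. This is only an illustration, not part of the notes: the p.m.f. p below is an arbitrary example with N = 4, and a textbook recursive Cooley–Tukey FFT stands in for the optimised library routines one would use in practice. We evaluate the padded coefficient vector at the 2N-th roots of unity, square pointwise, invert, and check the result against the direct O(N^2) convolution.

```python
import cmath

def fft(a):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])   # coefficients of even powers
    odd = fft(a[1::2])    # coefficients of odd powers
    out = [0j] * n
    for k in range(n // 2):
        # twiddle factor omega^k with omega = exp(-2*pi*i/n)
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(a):
    """Inverse FFT via the conjugation trick."""
    n = len(a)
    y = fft([x.conjugate() for x in a])
    return [x.conjugate() / n for x in y]

# Example p.m.f. of X on {0, 1, 2, 3}, so N = 4
p = [0.1, 0.2, 0.3, 0.4]
N = len(p)

# Pad to length 2N, evaluate p at the 2N-th roots of unity,
# square those values, then invert to get the coefficients of p(z)^2.
padded = [complex(c) for c in p] + [0j] * N
squared_values = [v * v for v in fft(padded)]
q = [x.real for x in ifft(squared_values)]  # q[k] = P(Y = k)

# Sanity check: the direct O(N^2) convolution gives the same answer.
direct = [sum(p[i] * p[k - i]
              for i in range(max(0, k - N + 1), min(k, N - 1) + 1))
          for k in range(2 * N - 1)]
```

Here the recursion makes the O(N log N) count visible: each level does O(N) multiplications by twiddle factors, and there are log N levels.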
Further details are in Appendix B of the notes, and the Part II course, Numerical Analysis.