Friday, 30 January 2015

Lecture 7

You might like to revisit the blog for Lecture 4. The example I do there about tossing coins for which $\sum_k p_k=\infty$ makes use of both continuity of $P$ (proved today) and independence of events (Lecture 5).

We were careful today in defining the notion of a random variable. It is a function mapping elements of the sample space $\Omega$ to some other set $\Omega_X$ (which is typically some set of real numbers). So $X$ is a function and $X(\omega)$ is a value in its range. Most of the time we leave out the $\omega$ and speak of an event like $X\leq 1.45$. This really means the event $\{\omega : X(\omega)\leq 1.45\}$.
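To make this concrete, here is a small sketch (my own illustration, not from the lecture) using a finite sample space: $\Omega$ is the set of ordered outcomes of two fair dice, and $X$ is literally a Python function on $\Omega$ giving the total score. The event "$X\leq 4$" is then just the subset of $\Omega$ on which the function takes a value at most 4.

```python
from fractions import Fraction

# A finite sample space: ordered outcomes of two fair dice.
Omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# A random variable is just a function on Omega; here X is the total score.
def X(omega):
    i, j = omega
    return i + j

# The "event" {X <= 4} really means the subset {omega : X(omega) <= 4}.
event = {omega for omega in Omega if X(omega) <= 4}

# Uniform probability measure on Omega.
p = Fraction(len(event), len(Omega))
print(sorted(event))  # the 6 outcomes (1,1), (1,2), (1,3), (2,1), (2,2), (3,1)
print(p)              # 1/6
```

The point is that the randomness lives entirely in $\Omega$ and $P$; the function $X$ itself is completely deterministic.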

I remarked that the terminology ‘random variable’ is somewhat inaccurate, since a random variable is neither random nor a variable. However, a random variable has an associated probability distribution and one can think informally about a random variable as "a variable which takes its value according to a probability distribution".

The reason we like to define a random variable X as a function on Ω is that it ties X back to an underlying probability space, for which we have postulated axioms and know their consequences. If we try to define a random variable X only in terms of its probability distribution, we would not have such a strong axiomatic basis from which to derive further properties of X.

Fine print. In Lecture 4 I gave you the definition of a probability space $(\Omega,\mathcal{F},P)$. For $T\subseteq \Omega_X$, we can calculate $P(X\in T)=P(\{\omega : X(\omega)\in T\})$ only if $\{\omega : X(\omega)\in T\}=X^{-1}(T)\in\mathcal{F}$. This may put restrictions on allowable $T$ and $X$. But this will not bother us in Probability IA, since we will only look at situations in which $\mathcal{F}$ consists of all subsets of $\Omega$, or $T$ is something nice like a subinterval of the reals.
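The preimage point above can also be sketched in code (again my own illustration, with the same two-dice sample space assumed). On a finite $\Omega$ where $\mathcal{F}$ is all subsets, every preimage $X^{-1}(T)$ is automatically in $\mathcal{F}$, so $P(X\in T)$ is always defined:

```python
from fractions import Fraction

# Finite sample space: ordered outcomes of two fair dice.
Omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def X(omega):
    # Random variable: total score of the two dice.
    return omega[0] + omega[1]

def preimage(T):
    # X^{-1}(T) = {omega : X(omega) in T}, a subset of Omega.
    return {omega for omega in Omega if X(omega) in T}

def prob_X_in(T):
    # P(X in T) = P(X^{-1}(T)); here F is all subsets of Omega,
    # so the preimage is always a legitimate event.
    return Fraction(len(preimage(T)), len(Omega))

print(prob_X_in({7}))         # 6/36 = 1/6
print(prob_X_in({2, 3, 4}))   # (1 + 2 + 3)/36 = 1/6
```

In the uncountable setting the same formula holds, but one must first check that $X^{-1}(T)$ lands in $\mathcal{F}$, which is exactly the restriction mentioned above.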