Andrew Ng's "Machine Learning" on Coursera is also very well presented. Maybe a bit light on the hardcore math side (which he acknowledges several times), but he gives a very good overview of what's available and of how and when to use different ML techniques. He never loses track of the big picture, which really is one of the most important aspects of tackling any problem space, because in the end you're not going to re-implement a neural network yourself, you'll just use an existing package.
I did the PGM course (successfully), and Daphne Koller warns us in the introduction that it is a hard course (even by Stanford CS standards) and that Stanford students spend a significant amount of time on it weekly (I think it was 15-20 hrs on average). I did indeed often get lost in some of the ramblings, thinking "why is this necessary and what are we trying to do here?". It was not always clear to me how some of the techniques connected to reality, or why one was better than another. But still a very useful course that goes deep into Bayesian networks, Markov random fields, etc.
So, your mileage may definitely vary, and some courses really do require you to be on top of your game and have serious prior background knowledge. But I love MOOCs, and how can one not be thankful for access to courses given by the most prominent researchers and professors in their fields?