As the posts there point out, that only works if you're still in range of the old network. It's a pain to have to remember to forget a network every time I check out of a hotel, and I don't want to have to reset all my settings and reteach the phone about the networks I do want it to use.
I've wanted the ability to tell my iPhone to forget old networks so it doesn't waste time and power sending probe frames trying to provoke hidden access points/SSIDs into advertising themselves. The security concern raised by this article is yet another reason.
The Hollywood Entertainment Museum had a ST:TNG set (and the Cheers bar), so I'm left wondering which set that was, unless it was reconstructed from the destroyed remnants.
Don't require someone to be a compiler, makefile, or package expert to "install" an app; make X perform decently on a heavily loaded system; make power management/sleep/hibernate work reliably; and don't make basic configuration changes, like changing screen resolution, a pain. Also, stop thinking that having so many distros doing things in different ways is a good idea. I prefer MSFT's one set of rules to the chaos and disorganization of Linux.
This is only noteworthy or nonobvious if you have only a basic understanding of computers. RTP allows extension headers, and IPv4 does as well, so you could embed extra data in almost any type of traffic on the Internet.
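To illustrate the RTP side (a rough sketch, not production code): setting the X bit in the first header byte marks a profile-specific extension block that sits right after the fixed 12-byte header. The payload type, sequence number, SSRC, and profile ID below are arbitrary values I picked for illustration.

```python
import struct

def rtp_with_extension(payload: bytes, ext_data: bytes,
                       profile_id: int = 0x1000) -> bytes:
    """Build a minimal RTP packet carrying a header extension (RFC 3550 style)."""
    # Extension length is counted in 32-bit words, so pad to a multiple of 4.
    ext_data = ext_data + b"\x00" * ((-len(ext_data)) % 4)
    first = (2 << 6) | 0x10            # version 2, X bit set, no CSRCs
    # first byte, payload type, sequence number, timestamp, SSRC
    header = struct.pack("!BBHII", first, 96, 1234, 0, 0xDEADBEEF)
    ext = struct.pack("!HH", profile_id, len(ext_data) // 4) + ext_data
    return header + ext + payload

pkt = rtp_with_extension(b"media bytes", b"\x01\x02\x03")
```

A receiver that doesn't understand the profile ID just skips the extension and plays the payload, which is exactly why it makes a handy covert side channel.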
WoL doesn't have to be a specific packet. On Windows you have a choice between a magic packet (which is a special format) or simply allowing the system to wake on any ARP or IP packet sent to its address. What was added in Windows 7 was a way for NICs to answer ARP, ping, and NDP on their own while the system is in a low-power state, so the system doesn't wake for these at all. Seems like the MSFT researchers should have factored this into their, um, "research".
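For anyone curious what makes the magic packet "special": it's just 6 bytes of 0xFF followed by the target MAC repeated 16 times, usually sent as a UDP broadcast. A quick sketch (the MAC below is made up):

```python
def magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: 6 x 0xFF, then the target MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

pkt = magic_packet("01:23:45:67:89:ab")
assert len(pkt) == 102   # 6 + 16 * 6
```

The NIC only scans incoming frames for that byte pattern; that's why it works even though the sleeping host has no IP stack running. To actually send it you'd broadcast it over UDP (port 9 is a common convention).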
The other thing they added: waking on ARP/IP has historically been implemented by matching incoming frames against a sequence of bits and a mask to decide which ones should wake the system. This was changed so that more generic concepts like "TCP SYN" can be used to match packets. The difference is that the bitmask approach needs multiple filters to handle TCP frames whose preceding headers use different option lengths, while the generic approach needs only one.
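Here's a toy sketch of why the fixed offset+mask style falls short (the frame layouts are hypothetical minimal frames, not real captures): a one-byte pattern filter tuned for an IPv4 header with no options misses a SYN in a frame whose IP header carries options, while a filter that actually follows the IHL field matches both with a single rule.

```python
def make_frame(ihl: int, tcp_flags: int) -> bytes:
    """Minimal fake frame: 14-byte Ethernet header, IPv4 header of
    ihl*4 bytes, then a 20-byte TCP header (flags byte at offset 13)."""
    eth = b"\x00" * 12 + b"\x08\x00"                 # MACs + IPv4 ethertype
    ip = bytes([0x40 | ihl]) + b"\x00" * (ihl * 4 - 1)
    tcp = b"\x00" * 13 + bytes([tcp_flags]) + b"\x00" * 6
    return eth + ip + tcp

def pattern_filter(frame: bytes, offset: int, value: int, mask: int) -> bool:
    """Bitmask-style wake filter: test one byte at a fixed offset."""
    return len(frame) > offset and (frame[offset] & mask) == value

def is_tcp_syn(frame: bytes) -> bool:
    """Generic 'TCP SYN' filter: follow IHL to find the flags byte."""
    if frame[12:14] != b"\x08\x00":
        return False
    ihl = frame[14] & 0x0F
    return bool(frame[14 + ihl * 4 + 13] & 0x02)     # SYN = 0x02

plain = make_frame(5, 0x02)        # IHL=5: no IP options
with_opts = make_frame(6, 0x02)    # IHL=6: one 4-byte IP option

assert pattern_filter(plain, 47, 0x02, 0x02)         # tuned for IHL=5
assert not pattern_filter(with_opts, 47, 0x02, 0x02) # misses shifted flags
assert is_tcp_syn(plain) and is_tcp_syn(with_opts)   # one rule covers both
```

In hardware you'd burn one pattern slot per possible header length; the generic filter expresses the intent once.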
Others have mentioned file size, but another good approach is to look at the quantization tables in the image as an overall quality indicator. E.g., JPEG over RTP (RFC 2435) uses a single quantization factor to stand in for the actual tables, and that 'Q' value maps roughly to the quality of the image. Wikipedia's article on JPEG has a less technical discussion of the topic, although the Q it uses probably differs from the one in the RFC.
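If you're curious how Q turns into actual tables, RFC 2435's Appendix A scales a base table by a factor derived from Q. A rough sketch (the base row here is the first row of the standard JPEG luminance table; Q=50 reproduces the base values exactly):

```python
def scale_factor(q: int) -> int:
    """RFC 2435 Appendix A: map Q in 1..99 to a percentage scale factor."""
    q = max(1, min(q, 99))
    return 5000 // q if q < 50 else 200 - q * 2

def scale_table(base, q):
    """Scale a base quantization table by Q, clamping entries to 1..255."""
    f = scale_factor(q)
    return [min(max((v * f + 50) // 100, 1), 255) for v in base]

luma_row0 = [16, 11, 10, 16, 24, 40, 51, 61]   # JPEG standard luminance, row 0
print(scale_table(luma_row0, 50))   # factor 100 -> base table unchanged
print(scale_table(luma_row0, 25))   # factor 200 -> coarser, lower quality
```

Going the other direction, which is what you'd want here, means comparing the tables you pull out of a file against these scaled references to estimate the Q the encoder used.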