unless someone enables debugging and authorizes a computer with its individual key to connect.
Authorizing an individual computer wasn't introduced until around Android 4.2 (the second Jelly Bean release) or thereabouts. There are still Android devices in use running older versions whose manufacturers decline to update the operating system.
where is he gonna get a phone that will actually play the resulting 64kbps Vorbis? [...] he'd be better off encoding to 128kbps MP3 or simply investing in a bigger microSD
No iPhone has a microSD slot. So unless and until Windows Phone or BlackBerry becomes popular again, pretty much any phone with a microSD slot is going to ship with Android. And as chowdahhead pointed out, every Android device I've owned going back to 2.2 has come with a Vorbis decoder.
Vorbis is strictly CPU decode which will end up costing more in battery life.
MoonShell and Guitar Hero On Tour run on the 67 MHz ARM9 (ARMv5) CPU of a Nintendo DS, and they decode Vorbis in real time. With the higher clock speed and signal-processing instructions of a modern phone's processor, the CPU cost of a Vorbis decoder would probably be a drop in the bucket compared to things like the cell and Wi-Fi radios, the screen, and even the headphone op-amp. The Settings > Battery report on my Nexus 7 tablet consistently shows two-thirds of energy spent on the screen.
but it's not like there's a black border, why would you not want to view the edges?
I guess it must throw the composition out of balance, especially for things like news tickers at the bottom and sports scores at the top. And older film and video might still have things like a boom mic just out of the action safe area (but protruding slightly into the overscan).
The difference is that it's 'plugins' that anything can use, rather than your specific choice of media player.
True, Video for Windows codecs and DirectShow codecs work in a wider variety of media players and editors. But I know VFW applications such as VirtualDub can't use DirectShow codecs. And I'm told VFW itself has limits that make it less than ideal for certain codecs and containers, which is why you don't see many, say, MOD players built on the VFW architecture. I guess Nullsoft might have developed its own input plug-in architecture to work around VFW's limits, and I have since learned about other players that can also use Winamp input plug-ins for just this reason.
Why would you ever need to overscan HDMI?
Because television video is authored with early-adopter CRT HDTVs (and thus with overscan) in mind.
There are so many possibilities in web applications for really nice font management.
Which are all wasted if the end user's browser lacks WebGL support entirely, as is the case with all web browsers for iPhone or iPad, or if the end user's browser detects insufficiency in the underlying OpenGL implementation, as my browser does (Firefox 26.0 on Xubuntu 12.04 LTS on Atom N450). All I get is "Hmm. While your browser seems to support WebGL, it is disabled or unavailable. If possible, please ensure that you are running the latest drivers for your video card", even after doing sudo sh -c "apt-get update; apt-get upgrade" this morning. In about:support I get "Driver Version: 1.4 Mesa 8.0.4" and "WebGL Renderer: Blocked for your graphics card because of unresolved driver issues."
So what fallback would you choose to use when WebGL is unavailable?
Using smaller formats means I can download it faster [...] and that I can store more stuff on my phone for listening to it later. It's kind of annoying when podcast creators end up generating a 120 MB file for 60 minutes of audio.
Consider using a quad-core CPU to transcode 320 kbps MP3 to 64 kbps Vorbis. Divide the recording into four parts and run one part on each core. Then you can store the four parts on your phone without using too much internal storage. It won't solve your download time or monthly transfer cap problem, but it will help you work around phone makers' tendency to cut out microSD slots and mark up internal storage at highway robbery prices.
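For a sense of the savings, here's a back-of-envelope Python sketch using the 120 MB per 60 minutes figure from the quote above (constant bitrate assumed; the function name is mine):

```python
def size_mb(bitrate_kbps: float, minutes: float) -> float:
    """Approximate file size in MB for a constant-bitrate audio stream."""
    return bitrate_kbps * 1000 / 8 * minutes * 60 / 1_000_000

podcast = size_mb(273, 60)  # the quoted ~120 MB hour works out to ~273 kbps
vorbis = size_mb(64, 60)    # the same hour at 64 kbps Vorbis
print(round(podcast), round(vorbis))  # prints: 123 29
```

So an hour-long episode drops from roughly 123 MB to under 30 MB, about a 4x saving in both storage and transfer.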
Subpixel text rendering is just antialiasing with the red channel offset by a third of a pixel in one direction and the blue channel offset by a third of a pixel in the other. I'd compare it to anaglyph rendering, which offsets the camera position in the red channel by one interpupillary distance from the green and blue channels so that 3D glasses can reconstruct depth. If the rest of your system performs correct antialiasing of edges (FSAA, MSAA, etc.), the video card will do the subpixel AA for you.
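To make the channel-offset idea concrete, here's a minimal Python sketch. It assumes an idealized vertical glyph edge whose coverage falls off linearly over one pixel; the `coverage` model and the function names are illustrative, not taken from any real rasterizer:

```python
def coverage(x: float, edge: float) -> float:
    """Ink coverage at horizontal position x for a glyph edge at `edge`:
    fully inked left of the edge, fading linearly to zero over one pixel."""
    return min(1.0, max(0.0, edge - x))

def subpixel_pixel(x: float, edge: float) -> tuple:
    """Sample the same coverage function three times, with the red subpixel
    offset a third of a pixel one way and blue a third the other way."""
    r = coverage(x - 1/3, edge)  # red stripe sits left of the pixel center
    g = coverage(x, edge)        # green at the center
    b = coverage(x + 1/3, edge)  # blue stripe sits right of it
    return (r, g, b)
```

For a pixel centered at x = 0 with the edge at 0.5, this gives roughly (0.83, 0.5, 0.17): the three stripes each get the coverage appropriate to their own physical position, which is exactly what grayscale AA throws away.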
The PDF mentions another technique I've read about in Team Fortress 2, called "SDF" or "signed distance field" fonts. This makes a slight change to the rasterization and blitting steps to store more edge information in each texel. First the alpha channel is blurred along the edges of glyphs so that it becomes a ramp instead of a sharp transition, and the glyphs are uploaded as a texture. The alpha forms a height map where 128 is the contour, values less than 128 are outside the glyph by that distance, and values more than 128 are inside the glyph by that distance. This makes alpha locally a plane at any point on the contour. The video card's linear interpolation unit interpolates along the blurred alpha, which is ideal because interpolation of a plane is exact. Finally, a pixel shader uses the smoothstep function to saturate the alpha such that the transition becomes one pixel wide. This allows high-quality scaling of bitmap fonts even with textures stored at 32 px or smaller. It also allows programmatically making bold or light faces by setting the transition band closer to 96 or 160 or whatever. But it comes at the expense of slightly rounding the corners of stems, so it's probably best for sans-serif fonts.
The PDF also mentions approximating the outline as piecewise arcs of circles, parabolas, etc., and drawing each arc with an arc texture. This would be especially handy for TrueType glyph outlines, which are made of "quadratic Bézier splines", a fancy term for parabolic arcs.
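To see why "quadratic Bézier" and "parabolic arc" are the same thing, here's a quick sketch. With control points chosen so that x(t) is linear, y comes out as a quadratic in x, i.e. a parabola (the specific points below are just an example):

```python
def quad_bezier(p0, p1, p2, t: float) -> tuple:
    """Evaluate B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2 componentwise."""
    u = 1 - t
    return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])

# Control points (0,0), (1,2), (2,0): x(t) = 2t, y(t) = 4t(1-t),
# so every point satisfies y = 2x - x^2, a parabola.
x, y = quad_bezier((0, 0), (1, 2), (2, 0), 0.3)
```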
if someone is using a certain video player for their videos and it also plays their sound files - why not use that?
Because audio and video use cases have different playlist expectations.
Video is more often a foreground application, requiring the viewer's primary attention, compared to audio that's more often used as background noise. And audio and video typically have different durations. In my experience, audio is more often stored with one file per track, while video is more often stored with one file per "album", with cue marks between scenes. People are more likely to put a collection of songs from several albums in a playlist and shuffle it than single scenes from motion pictures. And video is far more likely to be unavailable from the publisher in a DRM-free format than audio.
Theres something to be said for only learning to use one program to do different several things - assuming it does it well.
I guess the thinking is that audio libraries and video libraries are so different in metadata structure that it's difficult to make one application that plays both well.
Why are you so obsessive about this?
Certain game genres are historically shut off to indie developers because they don't work well on PCs with their small median monitor size, and console makers haven't made an effort to reach out to indies until very recently. I'm trying to find the best route to market for a startup video game developer with a local multiplayer game in development.
Storage is cheap
Only if your listening device has a USB mass storage port or microSD card slot. Many don't. Instead, several manufacturers of mobile listening devices, such as dedicated digital audio players and smartphones, sell one model with tiny storage at cost and put an excessive mark-up on the models with more storage.
No hardware designer should be allowed to produce any piece of hardware until three software guys have signed off for it. -- Andy Tanenbaum