Whut up, yo? Mostly moved to Twitter... You have an account... why don't I see you there much?
Easy. The economics of journals and textbooks are completely different.
Journals are cheap-to-produce magazines where the publisher's goal is subscribers, so individual copies don't matter much economically. And they're cheap to convert to ePub because the formatting (typically) doesn't matter much.
Textbooks are huge, expensively produced, and very precisely formatted, and they can't simply be re-flowed into ePub, because the result (when some publishers tried this with Amazon a few years ago) was completely unusable by students. For example, when a professor tells you to look at the diagram in the right column on page 47, in an ePub it would be on a random page (wherever the reader's screen size, text size, etc., flowed it). So textbook publishers that produce digital textbooks have had to invest a great deal of effort making a digital textbook that's essentially a content-oriented software application sold to students. And they get paid by students buying the textbook, not by subscribers, so every copy matters.
As a result, journals are much more open to digital distribution, allowing previews, etc., while textbooks are much more locked down.
I worked in the music industry (in IT). I have no idea where the idea came from that music publishers didn't have to renegotiate contracts to get digital rights to the music. In reality, when digital rights became important, the music companies spent a huge amount of time and money, with teams spending at least a decade tracking down rights-holders and negotiating digital rights so they could sell their back catalog, and of course they made sure that their new contracts covered selling through the digital service providers. Book publishers have essentially the same legal challenge (though admittedly the details are different).
What is really different is the production logistics.
Music has been digitally produced for a very long time, using open standard formats, and for pre-digital material it's relatively easy to digitize audio (and video) from master tapes, so you only need to do real "work" for some very old, obscure media, which is only done selectively. And the music publishers have built systems that are very, very good at managing and format-converting huge libraries of audio and video. So, 99% of the time, digitally selling back-catalog music and video is logistically fairly easy: QA, package, price, and send the files to the digital service providers.
Books, however, have been authored in all sorts of random formats, and for older books there's only the physical book or manuscript and nothing digital. That means you often need to physically scan every page of the book/manuscript, OCR it, clean it up, QA the result, etc. And even for digitally authored books, you need to track down whatever specific physical media and formats each publisher or author used (MacAuthor on 3.5" floppy, LaTeX, MS Word 3 on 5.25" floppy, etc.). So, overall, every single back-catalog book is physically and logistically complex to deal with.
Look at what Project Gutenberg has produced - an amazing collection, but it required a massive investment of (volunteer) effort to process the books into digital formats.
The most time in that article is spent claiming that 192 kHz is overkill because everything above 20 kHz is inaudible. He shows how a square-looking waveform has all the right spectral components in the 20 kHz range, and therefore it is not missing anything. This is a Fourier/Nyquist-type argument that assumes linearity.
As you put it, F(a+b) = F(a) + F(b). When this is true, then it's as he said. But if F(a+b) != F(a) + F(b), then you need more than 20 kHz to describe the spectrum.
I'm not saying 192 kHz is the right thing. I'm just saying the entire argument in the article assumes linearity to draw the conclusion that the 0-20 kHz spectrum contains all the information you can hear.
In fact, we already know that ears are not linear. This is actually how some compression algorithms work: they know that as things get loud you can't hear quieter frequencies as well, so those are removed. That example actually works in the opposite direction (less information is needed), but it supports the notion that describing everything by spectral analysis is wrong when things are non-linear.
You said, well, it's just a change of basis. Sort of. How tightly you want to sample has to be determined first. This is what actually sets the bases that the analysis is changing between. A given point spacing in time over a given length of time fixes the interval over which the Fourier transform exists. Conversely, if you insist that the highest frequency is 20 kHz (or a 40 kHz sample rate, for Nyquist), then you have fixed the time interval of the sampling. You are then blind to anything between the sample points, which is where the non-linear effects could, conceivably, hide.
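To make the linearity point concrete, here's a minimal numpy sketch (my own illustration, not from the article or the parent comment): a nonlinearity as simple as squaring turns two audible tones into components well above 20 kHz, which a purely linear spectral argument would never predict.

```python
import numpy as np

fs = 192_000                          # sample fast enough to see ultrasonic content
t = np.arange(fs) / fs                # one second of samples

a = np.sin(2 * np.pi * 15_000 * t)    # 15 kHz tone (audible)
b = np.sin(2 * np.pi * 18_000 * t)    # 18 kHz tone (audible)

# Linear system: F(a + b) == F(a) + F(b), so the spectrum stays below 20 kHz.
# Nonlinear system (squaring is the simplest example): sum/difference
# frequencies appear at 33 kHz and 3 kHz, plus harmonics at 30 and 36 kHz.
y = (a + b) ** 2

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1 / fs)

for f in (3_000, 30_000, 33_000, 36_000):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f / 1000:g} kHz component: {spectrum[idx]:.3f}")
```

Whether the ear's nonlinearity actually generates anything like this is a separate question; the sketch only shows that once linearity goes, energy can land outside the 0-20 kHz band.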
What's this "early" mean?
The whole analysis at the linked site assumed Fourier spectral analysis, Nyquist limits, etc. That's assuming linearity in the way they used them.
Apparently this link hasn't been posted enough times yet. It addresses both your first question (partially) and your second question (in huge detail).
The video you're comparing it to is being treated no better than the audio. It's simply that human eyes are much better than human ears, so to give a comparable experience, much higher bitrates are needed for video than for audio.
What all these linear analyses assume is that hearing is a linear process. If it's non-linear, then these analyses are incorrect.
I'm sure Samsung is sending in the blade runners for these replicant hackers.
Pono Music is an ecosystem for selling music in the FLAC audio file format: 1) production of FLAC files from existing recordings, 2) a dedicated player, and 3) a web store to sell the FLAC files.
The problem with FLAC is: how does one get FLAC? You could use your own encoder to rip a CD to FLAC, but then you just have CD quality. Why not reach back to the studio quality if you're going the FLAC route? Because you don't have access to that. But now you do: the Pono ecosystem does that. And if you wanted to play that FLAC file, well, your MP3 player might not play it, and if it does, it probably has a lot less memory than you would like, so Pono players are chubbier in memory. And finally, what if you're not one of those people who likes to roll their own, and you prefer to just buy it pre-recorded? Well, again, the Pono ecosystem is there for you.
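For the roll-your-own path, the encoding step itself is simple; a sketch using the Python `soundfile` library (filenames here are hypothetical) shows it, and also why it tops out at CD quality: FLAC is lossless, so the output can never be better than the 16-bit/44.1 kHz rip it came from.

```python
# Sketch: encoding a ripped CD track to FLAC with the `soundfile` library.
# FLAC is lossless, so this preserves CD quality exactly; it can't add
# back the resolution of the original studio master.
import soundfile as sf

data, samplerate = sf.read("track01.wav")   # hypothetical CD rip: 44.1 kHz, 16-bit
sf.write("track01.flac", data, samplerate, subtype="PCM_16")
print(f"Encoded {len(data) / samplerate:.1f}s at {samplerate} Hz")
```

Pono's pitch is that its store starts from higher-resolution masters instead, so the FLAC you buy carries more than the CD ever did.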
Is there nothing he couldn't do?
Withhold certain information while talking to cops.
Actually, it's running break-even as it's supposed to. While the Baby Boomers were working it accumulated a surplus, and now the Baby Boomers are retiring so they're consuming the surplus. But they worked out the math decades ago and it's proceeding as planned - the system is fine. It's possible that there might need to be a few percent adjustment in a decade or two (e.g. raise the cap on taxable income, or cut benefits slightly). But that's nothing to get worked up about, unless your goal is to lie to people to panic them so that you can destroy social security because your goal is social insecurity.
That's not my goal.
Luckily, the people running the social security system are responsible adults who know math: they saved up the surplus paid in during the years the Baby Boomers were working, so there's money there to pay out their retirement benefits.
Yes, anyone who thought that they could give away the money (e.g. Bush, Jr.) instead of saving it was an idiot.
It turns out that in practice, the smaller units of government are far more corrupt than the larger ones, because there's less oversight. Sure, corruption at the federal level gets news coverage, and it should. But if you knew what was going on at the state and local levels, you'd be horrified. But they usually get away with it, because the press has been nearly wiped out at the local and regional level, and the police aren't going to challenge the politically powerful most of the time. Look at, for example, NJ, Nevada, Texas, Arizona, Pennsylvania: a series of amazingly corrupt schemes that made individual politicians, police, etc., rich but destroyed the victims' lives. And there's almost no enforcement of anti-corruption laws except at the federal level; according to http://web.missouri.edu/~milyo... 95% of corruption arrests are made by federal prosecutors (NB: often of state or local officials), meaning that local corruption can fly "under the radar" much more easily than federal corruption.
The problem with your analysis is that the money isn't going to the poor majority; it's largely going to the already well-off middle class and rich minority.
Care to try again?
No. People who survive to retire collect more than they paid in, but that's paid for by people who die and thus don't collect anything. The total expenditures balance against payments, and don't require the population to be growing at all. The only ratio that has to hold is the ratio between people's working lifespan and their retired lifespan, which hasn't really changed ever. The increase in average lifespan is almost entirely driven by improved infant mortality rates; people who live to retirement live about as long now, on average, as they did 30 years ago, so the money paid in vs. paid out hasn't changed over the long term.
What did happen is that the Baby Boomers were a large bump in population. While they were all working, social security accumulated a huge surplus (because there was a "surplus" of people working), which is being paid out as those people retire (and there is a "surplus" of people retired). But neither of those changed the long-term finances of social security, which is (surprise) in fine shape, because the people who run the social security system are responsible people who plan decades ahead for these things, because they have to.
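To see why only the working-years:retired-years ratio matters, here's a toy steady-state pay-as-you-go model (all numbers are made up for illustration; they are not actual Social Security figures):

```python
# Toy steady-state pay-as-you-go model with made-up numbers.
working_years = 45        # e.g. working from age 20 to 65
retired_years = 15        # e.g. retired from age 65 to 80

# In a stable population, the worker:retiree ratio equals the ratio of
# years spent working to years spent retired.
workers_per_retiree = working_years / retired_years   # 3.0

annual_benefit = 30_000   # per retiree (illustrative)
required_contribution = annual_benefit / workers_per_retiree
print(f"Each worker pays in ${required_contribution:,.0f}/year")   # $10,000
```

As long as those lifespans hold, each worker's contribution covers their share of current benefits, with no need for population growth.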