s/EXE format/PE format/
(for pedantic correctness)
How did this get to +4... does the modern-day Slashdot reader really not know the difference here?
WINE is a re-implementation of the Windows system-call library. Tepples is absolutely correct above: it's a reimplementation of the API, and no more an emulator than Linux is an emulator of UNIX.
An "emulator" is very specifically a program that reproduces the behavior of an entire system, hardware included. An emulator reproduces the system in software, and then you can run device drivers, etc. on top of it. The machine code you run on an emulator never gets executed as instructions on the host hardware -- it's executed as instructions within the emulator; the host runs the code of the emulator alone. DOSBox is an example of an emulator; it runs in software all of the hardware of an early x86 system including the CPU itself, so that you're able to run 386 games on *anything* that you can compile DOSBox for, even PowerPC, MIPS and ARM systems. I myself used DOSBox on PowerPC many moons ago to play old DOS games.
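To make that concrete, the heart of any CPU emulator is a fetch-decode-execute loop. Here's a toy sketch in Python for a made-up three-instruction machine (nothing like a real ISA, purely an illustration):

```python
# Toy emulator: the guest's "machine code" is only ever interpreted by this
# loop -- it is never executed as instructions on the host CPU.
# Made-up opcodes: 0 = HALT, 1 = LOAD immediate, 2 = ADD immediate
def run(program):
    acc = 0      # emulated accumulator register
    pc = 0       # emulated program counter
    while True:
        op = program[pc]
        if op == 0:                    # HALT: return the accumulator
            return acc
        elif op == 1:                  # LOAD immediate
            acc = program[pc + 1]
            pc += 2
        elif op == 2:                  # ADD immediate
            acc += program[pc + 1]
            pc += 2

print(run([1, 40, 2, 2, 0]))           # LOAD 40; ADD 2; HALT -> 42
```

DOSBox's CPU core is the same idea writ large: the guest's opcodes only drive a loop like this, which is why the host can be PowerPC, MIPS, ARM, whatever.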
The next level up from that is a Virtual Machine. A virtual machine can only expose hardware that actually exists on your system, and your CPU actually switches between the different contexts -- on modern chips, the CPU is aware that it is running different guest systems. The abstraction here is mostly at the driver level: your guest OS typically uses drivers provided by the VM software maker that interact with the VM software to expose the hardware's functionality. Whereas an emulator can emulate any hardware you do not have, from the CPU on up, a virtual machine simply exposes your existing hardware and lets your hardware do as much of the work as possible.
With an API reimplementation like WINE, you are still running Linux (or Mac OS, or whatever), and the driver layer is Linux's drivers (or Mac OS's, or whatever's). All you've done is add a library to the mix which:
1. Adds a new kind of executable loader: in addition to a.out-format and ELF-format (and Mach-O on Mac, etc.), you now have the ability to load EXE-format files; and
2. Translates Windows library calls into the corresponding Linux (or Mac, or whatever) library calls.
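In code terms, item 2 boils down to thin wrappers. Here's a hypothetical sketch in Python (WINE itself is C, and its internals look nothing like this); `CreateFileA` and `GENERIC_READ` are real Win32 names, but the body is entirely my own illustration:

```python
import os
import tempfile

# Hypothetical sketch (NOT actual WINE code): a Win32-style entry point
# implemented on top of the host's native POSIX calls.
GENERIC_READ = 0x80000000    # access-mode constant from the Win32 headers

def CreateFileA(path, access):
    """Translate a 'Windows' file-open into a plain open(2) on the host."""
    flags = os.O_RDONLY if access == GENERIC_READ else os.O_RDWR
    return os.open(path, flags)      # the host kernel does the real work

# Demo: "Windows" code opening a file actually hits the host kernel.
_, demo_path = tempfile.mkstemp()
handle = CreateFileA(demo_path, GENERIC_READ)
os.close(handle)
```

No hardware is modeled anywhere in that picture; it's just one library function calling another.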
So, in brief:
1. An EMULATOR (like DOSBox) emulates the hardware, and the programs are completely divorced from your system's actual hardware;
2. A VIRTUAL MACHINE (like VMWare) creates a virtual driver layer for your existing hardware that allows you to run different OSes simultaneously;
3. An API REIMPLEMENTATION (like WINE) is just a new set of functions that adds capabilities to your existing system.
There will be a quiz on Friday.
The analysis in your first paragraph is correct. In fact, I'm somewhat displeased that the criticisms China and others have made of this decision are essentially left in the article at face value, and that we need people like you to supply this explanation.
The market was not at all free, nor was it a level playing field, prior to this decision. Oligopolies and monopolies are not free markets; playing fields where one side has a government flooding money into subsidies are not level. So, your criticism of the "free market" in the second paragraph is based on a misunderstanding of what a "free market" is; "free market" is not a synonym for laissez-faire. Yes, I'm aware that a lot of proponents of laissez-faire equate it with free market economics, but two wrongs still don't add up to one right.
There's a similar problem with Dougherty's comments in the article warning of a "trade war"; it seems to me that we were already in a trade war. A very one-sided trade war, where we just let China hit us with subsidized goods and did nothing to respond. Of course the bully will whine like a little bitch as soon as they get hit back...
I think the real issue here is that I didn't make myself clear, because you seem to be addressing points I wasn't trying to make.
When I say "stair-step," I mean something entirely different from what the author means. What I'm saying is that when you sample, you also quantize; your amplitude is not continuous, but discrete. Ergo, information is lost.
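To put a number on the loss: quantizing rounds each sample to the nearest of a finite set of levels, so every sample can be off by up to half a step. A quick sketch (the 0.3 amplitude is an arbitrary illustrative value, not a claim about any recording):

```python
def quantize(x, bits):
    """Round a sample in [-1.0, 1.0) to the nearest of 2**bits signed levels."""
    step = 1.0 / 2 ** (bits - 1)      # e.g. 1/32768 for 16-bit audio
    return round(x / step) * step

x = 0.3                               # an arbitrary instantaneous amplitude
for bits in (8, 16, 24):
    err = abs(quantize(x, bits) - x)
    print(f"{bits}-bit: quantization error = {err:.2e}")
```

Every reconstructed sample carries an error bounded by half the step size; that's the "off-center stair step" I'm talking about.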
I know the ear doesn't perform an FFT, but what it does is a frequency analysis. PCM data records instantaneous amplitude (air pressure) over time; an ear measures frequency and volume over time.
The point is, when you take two pitches, play them together, and map it as a waveform, you end up with a waveform that is far more complex than what an ear actually perceives. The ear perceives two notes, while the waveform describes a zig-zag that wiggles -more- often than the original pitches.
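You can see this by summing two tones and looking at the result (the frequencies here are just illustrative -- an A and an E played together):

```python
import math

def sample(freqs, t):
    """Instantaneous amplitude of summed unit-amplitude sine tones at time t."""
    return sum(math.sin(2 * math.pi * f * t) for f in freqs)

rate = 44100                          # CD sample rate, samples per second
n_samples = rate // 10                # 0.1 s worth of samples
# Two pitches sounding together: A4 (440 Hz) and E5 (659.25 Hz).
duet = [sample([440.0, 659.25], n / rate) for n in range(n_samples)]
solo = [sample([440.0], n / rate) for n in range(n_samples)]
print(max(solo), max(duet))  # the summed wave swings wider than either tone
```

The ear hears two steady notes; the stored waveform is the busier zig-zag that `duet` traces out.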
When error gets into those wiggles, it can manifest as harmonics far beyond the range of human hearing in both amplitude and frequency (aliasing), but when the waveform is reconstructed it also manifests as distortion in the amplitude and frequency of the original fundamental pitches. The more audible high-frequency content there is in the mix, the more likely you are to get audible distortions that AA filters will struggle with.
The fact that you need to do AA at all is a symptom of using too low a sample rate anyway. You can simply throw more bits at the problem and achieve the same effect, but without needing stupidly expensive DACs; and if you have the stupidly expensive DACs, then you can do even more. And storage space is cheap, so... why is this even a discussion?
Granted, I don't think there's a need beyond 24-bit 96kHz. But I do think, for certain types of audio, 44.1/16 is pretty clearly not enough for everyone.
I'll grant to you that for most people AND for most kinds of recordings, what you say is absolutely true.
"For the 192 KHz fans out there, there is direct and proven mathematical reasoning for why 44 KHz audio is plenty."
Both you and the article say this, but my understanding of the sampling theorem differs from the conclusions you both draw. The main issue is that Nyquist sampling theory assumes you are moving from continuous to discrete in one dimension only. The proof that you can reconstruct frequencies perfectly assumes the y-axis remains continuous; in digital audio, it doesn't -- samples are quantized. So both of the graphs he has in the section "Sampling fallacies and misconceptions" are actually incorrect; a proper graph would show the "stair steps" sitting slightly off-center from where the line goes (and off-center by different amounts). In fact, that he equates bit depth with dynamic range shows he really doesn't understand the mathematics of PCM audio very well at all.
What's more, despite the article author's excellent description of how we hear, he never really connects the frequency analysis the ear performs to how it limits the effectiveness of anti-aliasing, and he assigns anti-aliasing algorithms magical properties they don't have; heavy metal and string-orchestra music in particular represent worst-case scenarios for anti-aliasing algorithms.
So the math IS clear, but it doesn't show what either you or the article author think it shows. It may not justify 192kHz, but it definitely justifies a sample rate greater than 44.1kHz for certain kinds of music.
Pretty much goes with what I said above, as in: I'm not sure which of those is really a revelation. #1 and #4 are just Stratfor's best guesses, like reading an "insider" report on a closed basketball practice on the 'net. #2 is just like hiring a PI to snoop on your wife. #3 and #5 are pretty much common knowledge. (Also: I think Burton's kinda nutty -- also easily-attainable knowledge for anyone who read his autobiography.)
Wikileaks and Anonymous keep bragging about this as if they'd exposed some private arm of government spying, when really, the government paying Stratfor is like a software development company paying for a Dr. Dobb's Journal subscription; when it's your profession, it's good to have external perspectives, even if they don't necessarily align with your mission and aren't informed on your specific details.
And on that note, Friedman's free weekly write-ups are some good bull; he'll usually spend the bulk of any given article providing facts for context before he delves into the op-ed parts, and so you end up learning a few things even if the rest of what he has to say is bunk. He's kind of the Chip Brown of Stratfor (re: the above comparison of Stratfor to Rivals.com).
Uhm... maybe that's because Stratfor is not an "intelligence" agency in the same way that the FBI or CIA are. They're just a private company trying to make a buck by selling their opinions.
They're basically Rivals.com, but focused on politics rather than sports. And about as much a part of the US intelligence structure as Rivals.com is.
That's why folks like AC above and myself are shaking our collective heads, wondering when Allen Funt is going to jump out from behind Julian Assange and shout, "Surprise!"
Depends on what you mean by the terms. If you're talking about destructive sham cults vs. non-destructive, non-sham cults ("legitimate religions"), a few of the notable differences are:
The above looks almost like a point-by-point rebuttal of Scientology, but that's just an odd coincidence; Scientology is far from the first or only destructive cult to fit that definition. You can find mainline Christian churches that fit into both categories, although I think you'll find that most of them don't.
By "pseudo-religion" you could also mean something that has all the trappings of religion but claims to be anti-religion, e.g. Maoism in China.
"Faux-G?" I get legit 3.5Mbps downloads at work on this "faux" G network. I'm pretty happy with that.
I have a soft spot in my heart for FT, because when I was growing up, FT was vastly superior to anything Lego made. I had a lot of fun using FT robotics on my Apple
It's not so much that FT has faded as that Lego has caught up in the areas where it was weak and stayed strong in the areas where it had FT beat. Modern Lego models are a lot better than they used to be at showing you how pieces fit together, not just how to build the thing. The piece selection is more diverse, and the piece -quality- has improved greatly; pieces hold together better and come apart more easily. The new robotics kits, Power Functions kits and other stuff in the Technic line give Lego a lot of what came with FT (although not all).
The old FT that I kept for my son, and the new FT I bought for him, largely sits in a bin collecting dust while he discovers and builds entire new worlds with Lego. Even when I was a kid, Lego somehow was an every-day kind of toy, while FT was more a once-in-a-while kind of thing.
So... while I have admiration for FT and an emotional attachment to it, Lego is what dominates my 7-year-old's playtime. Maybe when he's older, FT will be a new challenge...
Didn't Pioneer 10 do this a while back? Is Voyager 1 really the first to leave?
This is one of those times where the comments section in
Right now, on over half of the plays, this is what you see: Quarterback drops back to pass. He's looking downfield, which is off-screen. He sees something, and makes the decision to throw/hold on to the ball/dump the ball off/run for it.
We, the viewers, have no idea what just happened. Now that passing dominates the game, without All-22, you can't even tell what's happening in the game any more.
The objections of current NFL folks, and your objections, ignore this simple fact, and it overrules them all. Without seeing what's happening downfield, we might as well not be watching the game at all.
Fortunately, these objections don't apply to the college game (which passes just as much as the NFL does; the Big XII throws the ball even more). You can get All-22 footage in college, and I'd bet good money that college football's growing popularity has a lot to do with that.
In summary: I haven't posted on Slashdot in a while, and I just came here to say you shut your damn whore mouth. It's about time someone shone a spotlight on this problem, because as a fan of the game, this issue is ruining it for me. I hardly even bother watching the NFL any more because of it. Is that what the league wants? Confusion leading to disinterest?
Probably because these people think that Xemu is real....
TimeOut by Dejal is an app that I use that simply grays out the screen for 15 seconds every ten minutes, and for ten minutes every hour. It reminds you to look away from the screen, stretch, get up and walk, etc. I've found that while the timing of the screen-blanks is annoying (it does give you a way to "snooze" the breaks), the overall effect is that I'm happier and healthier at work.
As for a keyboard, I use one of the new black USB Model M designs now made by Unicomp. The extra muscular effort and tactile response have somehow been the best thing for keeping repetitive-motion injuries at bay.
I'd also recommend making sure your fonts are large enough to read easily. Small fonts are bad.
Leveraging always beats prototyping.