Comment Re:Huh? (Score 2) 465
When you make the rules, you are right when you're wrong.
It's how the language works: "could be 1000 times faster" includes every speedup short of that too. Heck, "could be 1000x faster" even includes 2000x faster.
I always preferred the phrasing "up to 1000 times faster, or more!" Totally devoid of meaning.
Except browsers can actually send a header (Accept-Language) that lists your preferred languages, in order. Chrome actually does this, although the setting is buried away under "Advanced Settings". Google just doesn't pay any attention to it on their servers (apparently).
If a lot of browsers are getting it wrong in what they send, the incentive to support it is not strong. Guess what? A quick test with Chrome, Safari and Firefox indicates that they all get it wrong by default. Safari doesn't provide an option to change it that I can find; the other two pick the wrong default for me instead of using the system language settings (which are correct and available for software to read), even if those are imperfect for the task. (I'm on the wrong platform for testing IE and I don't have Opera.)
Why would you make your website use a feature that no browser gets close to right by default?
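For the curious, handling that header on the server side is not even hard. Here's a minimal sketch of parsing Accept-Language quality values; it ignores wildcards and other RFC corner cases, and the function name is my own:

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (tag, q) pairs, best first.

    Minimal sketch: skips wildcard handling and malformed-input edge cases.
    """
    prefs = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            tag, q = part.split(";q=", 1)
            try:
                q = float(q)
            except ValueError:
                q = 0.0  # treat unparseable q-values as "not acceptable"
        else:
            tag, q = part, 1.0  # no q-value means full preference
        prefs.append((tag.strip(), q))
    # Highest quality first; sorted() is stable, so ties keep header order.
    return sorted(prefs, key=lambda p: p[1], reverse=True)

# A header a well-configured browser might send:
print(parse_accept_language("da, en-GB;q=0.8, en;q=0.7"))
# [('da', 1.0), ('en-GB', 0.8), ('en', 0.7)]
```

Twenty-odd lines, and your site could at least respect what the browser claims, for whatever that's worth given the defaults above.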
Turing also didn't say anything about crippling the test by making it a child who doesn't speak fluent English.
He also didn't specify that the software couldn't do that. The test isn't about knowledge, it's about intelligence. Computers are starting to get good at the more knowledge-y side of things, but intelligence has got to be more about establishment of shared context, dealing with weird semi-out-of-the-blue digressions, and so on. Language fluency is someone else's research project.
He is a "useful idiot" with a lot of information in his pocket. When they are finished with him, he is either going to be returned to the U.S., or he is just going to "disappear" into the abyss.
Snowden's principal value to the Russians is for propaganda purposes, and this was the case all along. Making one's opponents look very bad is quite thoroughly valuable from a diplomacy perspective, since it persuades third parties (e.g., most of Latin America and Africa) to be more receptive to your message.
This setup allows for plug-in charging, as well as high density fuel usage.
At a cost of quite a lot of complexity and weight. That might be justifiable, but it sure isn't free.
ARM is SO not going to be competing in servers any time soon. Our "cheap" x86-64 servers are already at 24 cores and 64-96GB RAM. Once ARM gets anywhere near that those server specs will be 4x that, or more...
With large server deployments, performance per watt is a very relevant metric. This is because the limiting factor is actually keeping the server cluster from cooking itself, even with very good cooling in place. PPW is an area where ARM is generally quite a lot better than the x86 variants. After all, who cares if a system is slower when you can compensate by just cramming more cores in? (I don't know if ARM's floating point handling is good enough yet to compete in this area though; the big server guys really care about their number crunching.)
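The arithmetic behind "cram more cores in" is simple. A sketch with purely hypothetical numbers (the figures and names below are illustrative, not real chip specs): a core that's half as fast but draws a quarter of the power doubles the work you get out of a fixed power budget.

```python
# Hypothetical per-core figures, arbitrary units; NOT real chip specs.
X86_PERF, X86_WATTS = 100.0, 10.0
ARM_PERF, ARM_WATTS = 50.0, 2.5   # half the speed, a quarter of the power

POWER_BUDGET = 1000.0  # watts before the rack cooks itself

def rack_throughput(perf_per_core, watts_per_core, budget):
    cores = int(budget // watts_per_core)  # pack cores up to the power cap
    return cores * perf_per_core

print(rack_throughput(X86_PERF, X86_WATTS, POWER_BUDGET))  # 10000.0
print(rack_throughput(ARM_PERF, ARM_WATTS, POWER_BUDGET))  # 20000.0
```

This only holds when the workload parallelises across that many cores, which is exactly the big-cluster case.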
That said, ARM's really large advantage, over x86 at least, is that they license their designs out to others to manufacture. This does mean that they can't typically use all the very latest fabrication technologies, but it also means that a third-party manufacturer with some secret sauce of their own for a specific market can get an ARM core for a reasonable amount and integrate it themselves (rather than having to send their design elsewhere for fab, where they won't know who is watching). This is an advantage that is hard to overstate; there will be lots of ARMs about for a long time to come.
The other thing to consider in mobile applications is what the noise ceiling of the CPU is; lower noise means less shielding and so less weight and cost. You've got to think in terms of optimising the whole system, not just the CPU core.
Taxis take your credit card after the ride is over. A serial killer has plenty of time to do bad stuff to you before your card is used. Uber knows who you are from the moment you hail the cab.
Not if the serial killer has stolen someone else's phone (and killed them too, natch).
Agreed, LHR makes no sense to me. I never know where I am or how to get where I'm going. The only place that comes close is Japan's NRT, but that's only a maze in the 4th floor shopping area.
LHR (except T5, and possibly the new T2, but that only opened this week) is like one of those intelligence-testing devices for mice, but for people. You just have to follow the signs and hope. I find that FRA and ZRH are pretty bad that way too, at least for transit passengers, and earlier this year BRU was managing to hide where the gates even were. (Past the bar and behind some large advertising stands promoting an anonymous sports car, which were also covering up the signs saying where the gates were; yeah, hiding the absolute #1 thing that people want in an airport is idiotic.) And please don't route me via LIN. I'll be good, I promise.
In the US, the places to really avoid are JFK and LAX. Both are horrible places to change planes (especially between terminals). ATL, DTW, MSP, ORD, IAD and SFO are all much better. (MEM is OK, but a bit twisty, and I hate BOS for other reasons that aren't the airport's fault. I've yet to change planes elsewhere in the US.)
Don't know NRT well enough to comment.
You are being tracked.
In an airport, a place with substantive overt security, likely many cameras, and where the government sees passenger manifests before takeoff? Oh noes!
That would be the farm budget; this is robotics we're talking about here.
That's a lambda term (i.e., anonymous function) and not a closure. You tend to use them together, but they're formally different. A closure would be having a variable defined in the outer function being accessed and/or updated in the inner one.
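To make the distinction concrete, a small Python sketch (illustrative names only): the lambda is just an anonymous function, while the closure captures and mutates a variable from its enclosing scope.

```python
# A bare lambda: an anonymous function, nothing captured from outside.
square = lambda x: x * x

# A closure: the inner function reads *and updates* a variable defined
# in the outer function, which stays alive after make_counter returns.
def make_counter():
    count = 0
    def increment():
        nonlocal count  # without this, 'count' would be a fresh local
        count += 1
        return count
    return increment

counter = make_counter()
print(square(3))   # 9
print(counter())   # 1
print(counter())   # 2 -- state persists between calls
```

You often pass a lambda that also happens to close over something, which is why the two get conflated, but neither implies the other.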
But I'm sure they'll all be bullet-proof secure, don'tcha think?
What kind of glass are you using?
Oh, that kind of "bullet-proof". Not the Chicago Musicians' Union kind...
The shared libs approach is like the legacy of a chemical waste dump... it's there, it seemed like a good idea at the time, and there is not a whole lot anyone is doing to deal with the problems it causes simply by existing. Memory and disk space are no longer expensive, but catch-22 shared library hell is forever.
The original Linux shared library system was the toxic waste dump, being basically impossible to use if you weren't a distribution maker (every shared library had to have its own unique address in memory because code was just mmap()ed in without relocation). What we've got now is better, with just the problems of ensuring that versioning across effectively-independent software products works (and that's just plain hard for everyone).
Memory and disk have only recently become effectively too cheap for anyone (excluding embedded) to worry about the size of code; that's a new phenomenon.
So there we are in 93 or 94, the 386 just taking off, OS/2 and Windows are still pretty much children's toys compared to UNIX and mainframe OSes, the only commercial Intel UNIX is $1200 for the base OS and the fuckers want another $1200 for a C compiler, you can take your chances with a bunch of BSD tapes and I'd just heard about this nifty new Linux thing coming on the scene.
At that time there was 386BSD, but they were tearing themselves apart for some reason I never bothered to get to the bottom of (I think the corpse of that became FreeBSD, but I could be wrong). Linux was nowhere near as polished, but it did a few things well. In particular, it had shared libraries, greatly reducing memory requirements at a time when memory was expensive, and it had built-in floating-point coprocessor emulation. (This was back when programming on DOS/Windows still meant using a segmented memory model or futzing around with a DOS extender. Linux's flat model — heck, all Unixes' flat model — was much nicer, with far fewer contortions in the code to deal with squeezing things into 64kB blocks.)
That very first Slackware distribution that I downloaded onto 26 floppies was better than anything they'd ever done.
Good memories...
Hacking's just another word for nothing left to kludge.