I'm actually working on a fairly JS-intensive algorithm right now. FF's JS engine is, on this test, *slightly* faster and a bit less memory-intensive. Subjectively, Chrome has a faster layout engine (I'm not testing that right now and, honestly, I try not to anger that particular demon, since triggering reflow is the slowest thing one can do from JS!).
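For anyone curious why reflow is the demon here, this is a toy model (not real DOM code) of the classic layout-thrashing pattern: a style write dirties the layout, and the next layout read forces the browser to reflow before it can answer. Batching all reads, then all writes, avoids the forced reflows entirely. The `dirty`/`reflows` bookkeeping is a stand-in for what the browser does internally.

```javascript
// Simulated browser layout state: writes dirty it, reads of layout
// properties (like offsetHeight) force a reflow if it is dirty.
let dirty = false;
let reflows = 0;

function readHeight(el) {
  if (dirty) { reflows += 1; dirty = false; } // forced synchronous reflow
  return el.h;
}
function writeHeight(el, px) {
  el.h = px;
  dirty = true; // layout is now stale
}

const els = [{ h: 10 }, { h: 20 }, { h: 30 }];

// Interleaved read/write: every read after the first hits dirty layout.
els.forEach((el) => writeHeight(el, readHeight(el) * 2));
const interleavedReflows = reflows;

// Batched: read everything first, then write everything.
dirty = false; reflows = 0;
const heights = els.map((el) => readHeight(el));
els.forEach((el, i) => writeHeight(el, heights[i] * 2));
const batchedReflows = reflows;
```

With three elements the interleaved pass forces two reflows mid-loop, while the batched pass forces none (the browser can do a single reflow at the end of the frame).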
Right now it is a bit of a wash. *This is a good thing.* Everyone is chasing each other, trying to out-perform and out-do one another. Remember before the browser speed wars? How slow everything was by default? Sub-second loading-processing-rendering times weren't always the norm!
Pick whichever browser you want: IE10+, FF, Chrome, etc. They are all reasonably compliant, fast, and will serve you well. Choose on features!
Isn't the future awesome?
I really like Wikipedia's opening line on science:
“Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.”
I also like the opening monologue from “Halloween on Military Street”:
“We measure things by what we are.
To the maggots in the cheese, the cheese is the universe.
To the worms in the corpse, the corpse is the cosmos.
How, then, can we be so cocksure about our world?
Just because of our telescopes and microscopes and the splitting of the atom?!
Science is but an organized system of ignorance!
There are more things in Heaven and on Earth than are dreamt of in any philosophy.
What do we know about the beyond?
Do we know what's behind the beyond?
I'm afraid some of us hardly know what's beyond the behind.”
Both, in their own way, do a good job of highlighting that our knowledge is fallible, and that at best we can hope to merely organize our understanding as it stands and find ways forward.
It would seem that claiming that science has a “monopoly on truth” is at odds with the admission that the grand sum of scientific knowledge stands upon the single caveat that it is based on observation, experimentation, and repeatability.
I agree that it would be the height of hubris to believe that we will or can understand everything in the universe, but it is not hubris to attempt to understand everything we have the power to.
Whoever gave you the impression that humans are in any way infallible, whether following the scientific method or not, was a zealot.
Science is a way to organize knowledge. The scientific method is a way to observe, experiment, and theorize such that we can obtain theories that fit those observations and can be reproduced. Any other claims are either ancillary or false.
It isn't that insane. Instead of remapping one page at a time, they queue up a number of changes and commit them all at once. On stock Linux, every page-table remap generates a lot of cache-coherency traffic to make sure all the processors know about the new mapping. By committing the remaps in one batch, they only need one round of that traffic.
Section 5.1 is where it starts going into some details on that.
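As a back-of-the-envelope cost model (a sketch, not kernel code, and the constants are made up for illustration): assume each committed remap must notify every other CPU so stale cached translations get dropped. Flushing per page costs one round of traffic per remap; queueing and committing once costs a single round.

```javascript
// Hypothetical machine: 8 CPUs, a 512-page region being remapped.
const cpus = 8;
const pages = 512;

// Remap-and-flush one page at a time: each remap notifies the other
// (cpus - 1) processors, so coherency messages scale with page count.
const perPageMessages = pages * (cpus - 1);

// Queue all the remaps and commit them together: one notification
// round covers the whole batch.
const batchedMessages = cpus - 1;
```

Under these toy numbers that is 3584 messages versus 7, which is the intuition behind the batching the paper describes.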
Quite a few already do! http://en.wikipedia.org/wiki/List_of_smartphones_using_GLONASS_Navigation That is just phones, but that is how a huge chunk of the population gets their sat-nav these days.
On your own network, one that is under your complete and absolute control, there isn't an overwhelming need for a globally unique MAC address. Sure, it is nice to be able to plug any device into your network without having to assign the MAC address yourself to guarantee uniqueness, but that is just convenience.
If you don't control every single aspect of your network, or don't wish to (or are incapable of) managing everything down to the level of the MAC address, then having a central authority issuing unique blocks to all takers is a good strategy. (The MAC address itself needs to exist under the current protocols; it need not exist in an absolute sense, but with what we use today it is required and works rather well.)
So I guess the advantage isn't really in “owning” a block itself. The advantage is that everyone knows their MAC addresses are globally unique and, therefore, can plug and play on any network.
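The standard actually anticipates the self-managed case: bit 0x02 of the first octet (the U/L bit) marks an address as locally administered, i.e. chosen by the network operator rather than drawn from an IEEE-issued vendor block. A minimal check:

```javascript
// Returns true when a MAC address is "locally administered" (the U/L
// bit, 0x02 in the first octet, is set), meaning it was self-assigned
// rather than taken from a vendor's globally unique block.
function isLocallyAdministered(mac) {
  const firstOctet = parseInt(mac.split(/[:-]/)[0], 16);
  return (firstOctet & 0x02) !== 0;
}

const vendor = isLocallyAdministered('00:1A:2B:3C:4D:5E'); // universal (vendor block)
const local = isLocallyAdministered('02:00:00:00:00:01');  // locally administered
```

So on a network under your complete control you can self-assign addresses with that bit set and never collide with anyone's vendor-issued hardware.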
As to Xen throwing a fit, I'd say it was a misguided security feature of some sort, but that is just speculation.
Just fine, at least I do. Just different sets of optimizations to keep in mind, as well as different expectations. I don't think any reasonable person would approach the two problems the same way, but it all boils down to basic computer science.
Light up pin 1 when the ADC says the voltage is dropping, which indicates that pressure is too low on the other side of the PPC. Compare that to indexing a few gigs of text into a search engine. Completely different goals, completely different expectations. I'm no master of the embedded domain, but I don't think it is a dark art.
Perhaps I'm looking at it the wrong way, or perhaps my experience is unique or at least rare, but in my eyes it is all the same thing at different scales. Tell me my app is using too much memory and I'll first look at how I can reduce memory pressure, then tell you what is and isn't possible and give you a list of sacrifices that would be needed to reduce it: time to refactor, fewer concurrent operations, latency from hitting disk, and so on. Not just capabilities but the whole deal. Find the balance and go for it. On the embedded side the same sorts of compromises are made, but the scale is just so much smaller: a finite number of IO pins, time to optimize your code to accommodate a new feature, meeting real-time deadlines, writing something in ASM to get around a GOD FREAKING AWFUL EMBEDDED COMPILER, and so on.
I dunno, do I have my head on straight here? It all seems fairly straightforward in the end. Specialists can do their bits faster than someone less familiar but with equal skill and understanding. That's what earns the bucks: getting things done in a timely fashion.
((Or at heart I'm an embedded guy. Possible!))
Fairly small. 60,663.20 petaFLOPS (about 60 exaFLOPS) at my time of clicking, if those numbers can be trusted (likely, since the network hashrate can be derived from the average speed of blocks being found). Not that bitcoin mining uses floating-point units, since it is brute-forcing a hash... but I digress.
That is really lazy work on the programmer's part. It is trivial to use AJAX to submit the form and selectively wipe the captcha field whilst refreshing the captcha. That's what I do when we require a captcha for one reason or another.
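A sketch of that pattern, with the network stubbed out so it runs anywhere. `submitForm` and `fetchNewCaptcha` are hypothetical stand-ins for real AJAX calls (which would be async in practice); the point is that on a captcha failure only the captcha answer is wiped and the challenge refreshed, while everything else the user typed survives.

```javascript
// Submit the form; on a captcha failure, clear ONLY the captcha answer
// and fetch a fresh challenge. All other fields are left untouched so
// the user doesn't have to retype them.
function submitWithCaptchaRetry(form, submitForm, fetchNewCaptcha) {
  const result = submitForm(form);
  if (!result.ok && result.error === 'bad_captcha') {
    form.captchaAnswer = '';                   // wipe just the captcha field
    form.captchaChallenge = fetchNewCaptcha(); // refresh the challenge
  }
  return result;
}

// Stubbed usage: the "server" rejects the captcha and issues challenge c2.
const form = { email: 'user@example.com', captchaAnswer: 'wrong', captchaChallenge: 'c1' };
const res = submitWithCaptchaRetry(
  form,
  () => ({ ok: false, error: 'bad_captcha' }), // stub: server rejects the captcha
  () => 'c2'                                   // stub: server issues a new challenge
);
```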
I really agree with you.
My take on it, from both my personal experience and my interaction with my little cousins and the like, is that children don't have breadth of knowledge, but they can have great depth of knowledge. Kids can be shockingly insightful in areas they've explored.
I taught myself how to program starting when I was eight (I wanted to, though, quite badly), and my parents gave me every tool they could to further that, even though they are both highly non-technical. Sure, they didn't like me spending all day on the computer, and they didn't really understand how the thing worked, but they encouraged my particular talent and interest, and I became very good at what I do because of it.
My wife is the same way, but she is a designer. Same situation (piles of her old art still haunt my in-laws' house) and the same result.
Recognize what your kids are good at, help encourage it however you can. I'm certain that to any decent parent this is obvious, but I feel like it doesn't get said enough. I know what a difference that can make to a growing mind and it is how I plan to parent my child when I have one. I might try to sneak a little logic and critical thinking into them when they think they are having fun, though. Hope they don't mind.
A fair point. "Wholly custom" is hyperbole, on further reflection.
Especially considering that most of the changes I'm aware of were outside the ALU, in the trickier main-memory logic like cache coherency, and I'm certain IBM was more than helpful. Way cheaper than developing a Cell or some other new tech from scratch.
+1, very well considered point.
Nope! I was a bit shocked to learn that it is a true-blue, down-to-the-metal tri-core myself, but decapped processors don't lie: http://www.dvhardware.net/article6606.html
The processor in the Xbox 360 was a wholly custom part. It has extra components to encrypt and hash traffic to and from main memory (only the hypervisor's memory is hashed; the rest is just encrypted) as well as eFuses for locking out downgrades. It is also a 3-core part, definitely uncommon.
Much more information in the Google tech talk “The Xbox 360 Security System and its Weaknesses”.
Really good tech talk, worth watching if you are interested in that sort of thing, as is the original “Deconstructing the Xbox Security System” for the original Xbox.
Some stream ciphers are as you say, but the keystream is not the same as the underlying key. One can't predict the next part of the keystream without deriving the key. Most modern stream ciphers use internal feedback, much as block ciphers use external feedback modes like CBC, to prevent these attacks.
In any system without feedback like this it is always considered insecure to re-use a key at all.
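Here is why key (really, keystream) reuse is fatal even though the keystream itself never reveals the key: XOR-ing two ciphertexts produced with the same keystream cancels the keystream out, leaving the XOR of the two plaintexts, which an attacker can pick apart directly. A minimal demonstration with a made-up keystream:

```javascript
// XOR two equal-length byte arrays.
const xor = (a, b) => a.map((x, i) => x ^ b[i]);
// Convert a string to an array of byte values.
const bytes = (s) => [...s].map((c) => c.charCodeAt(0));

const keystream = [0x3f, 0xa2, 0x91, 0x07, 0x5c]; // stand-in cipher output
const p1 = bytes('HELLO');
const p2 = bytes('WORLD');

// Both messages "encrypted" under the SAME keystream.
const c1 = xor(p1, keystream);
const c2 = xor(p2, keystream);

// The keystream drops out entirely: c1 ^ c2 === p1 ^ p2.
const leaked = xor(c1, c2);
```

No key recovery happened anywhere; the reuse alone leaks plaintext structure, which is exactly why one-time-pad-style systems forbid reuse absolutely.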
I'll undo my moderation in this thread just to tell you that you are wrong. One cannot determine the key from the ciphertext; if one can, that is known as a "break" of the cipher.
A "break" in a cipher does not mean that it is practical to find the key, merely that it is more feasible than mere brute force. For example, a "break" could reduce the effective strength of a cipher from 256 bits to 212 bits under a known plaintext attack. This is a BAD break in the cipher given current standards, but it is the cipher is still completely uncrackable in human (or even geologic) timescales.
The "weeks or months" number, by the way, has nothing to do with cracking cryptographic keys. I would surmise that is a number more geared towards cracking passwords, which is an entirely different topic. Also, for some realistic numbers on cracking encryption keys, check out Thermodynamic limits on cryptanalysis