What the hell is wrong with the word "shutting"?
Nukes also have an easier time leveling buildings than they do utterly decimating populations. The fireball is generally very small, the overpressure that will kill you is a bit bigger, but there's a wide zone of "buildings become unsound" where people suffer much lesser effects.
It's already been said, but it bears repeating. If you want to kill people instead of things you go for radiological warfare, i.e. you rely on fallout, not blast overpressure. (Incidentally, since the military is almost always concerned with other types of targets, they're typically concerned exclusively with blast overpressure, to the exclusion of all other types of effects.)
Compare the exercise retold by Stuart Slade, where it only took a small portion of the US arsenal to kill as near to all of the Chinese as makes no difference.
Now, that this wouldn't happen, even in a large-scale exchange, is another matter. Nuclear weapons are (usually) far too valuable to use for such a purpose, and there are lots of other strategies that would be tried before a counter-population strike.
That's research from 130 years ago, and solely about rote memorisation. I'm talking about things such as this, and more general research into neuronal development.
Nope, it's actually heavily influenced by Perl, of all things. One of the guys who created it has also written a book on PS which has various sidebars about the design of the language.
The part that I think will be the big show-stopper is the likelihood of people 'catching goodies from the sky'. Given the technical restrictions of these drones it seems fair to assume they'll be used mostly for 'small but expensive' goods. What's to stop people from building a microwave gun to fry the electronics and run off with the cargo? Heck, a decent slingshot could probably bring them down. I realize one could rob any courier service, but with drones it's going to be dead simple unless they start building in all kinds of security measures, thus limiting the capacity/range/... of the machine.
Yes, I was thinking "shotgun", but your ideas are better. Let's run with it a bit. How about a good old fashioned barrage balloon? Or use it to loft a fishing net, or why not a kite? "Honest officer, here I was just flying my kite, minding my own business and all these drones started to fall all around me, not my fault really..."
Or, for the ultimate thrill: your own drone/RPV. There have been "dog fighting" competitions between RC planes for ages, trying to cut someone else's streamer with your propeller. Now you could actually make that game worth your time.
Then there's good old fashioned GPS-spoofing that can be done for cheap. Make the drone land/drop thinking it has reached its destination? It's manna from heaven all over again, only this time courtesy of Amazon...
The patriarchy has crumbled and will die off with those that are 45+.
And you had me all the way up until you had to discriminate against me based on my age alone...
Not all of us 46-year-olds are as bad as you think. Just so you know. Oh, and now that you've had your say, get off my lawn, etc.
Much of the later complexity didn't exist in the late 70s.
Yes, I should have said that I put RISC as beginning with Hennessy's and Patterson's work, which became MIPS and SPARC respectively. So we're a bit later than that. And of course when I said "compiler" I meant "optimizing compiler". Basic compilation, as you say, was not a problem on CISC, but everybody observed that the instruction set wasn't really used. I remember reading VAX code from the C compiler (on BSD 4.2) when I was an undergrad and noting that the enter/leave instructions weren't used. My betters answered: "Of course they aren't, they put so much useless stuff in there that they're much too slow..." (Only they didn't use the word "stuff"...)
But yes, the x86 is perhaps more "braindead" than "CISC" from that perspective. I was actually thinking of the VAX and its ilk, as they were what "RISC" came to replace, since the x86 wasn't a serious contender for workstations/minicomputers when they entered the arena. It was strictly for "PCs", which were a decidedly lesser class of computer, for lesser things. If anything, RISC replaced the MC68000 and similar in the workstation space. And even though that was CISC, it was of course a much nicer architecture than Intel's ever were, or became.
The downside of having few registers in the ISA is that the compiler may have to choose instruction ordering based on register availability or, worse still, "spill" registers to memory to fit the code to the available registers.
Yes, but the scoreboarding takes care of those spills as well; the processor won't actually perform them. But whether they're visible or not, the compiler still has to optimise as if they're there in order to have a chance of wringing out the maximum performance. So visibility turns out not to mean that much in practice. Rather, keeping them invisible isn't much of a gain, as the compiler has to assume they're backed by invisible ones anyway, and you'd take a substantial performance hit if they were ever to go away. (Which they won't, as they take up next to no real estate today anyway.)
Yes, simplicity of design was important, but the simplicity was to free up chip resources to use elsewhere, not to make it easier for humans to design it.
Well, yes. I think we're forgetting one of the main drivers for RISC, and that was making the hardware more compatible with what the then-current compilers could actually fruitfully use. Compilers couldn't (and typically didn't) actually use all the hideously complex instructions that did "a lot" and were instead hampered by lack of registers, lack of orthogonality, etc. So there was a concerted effort to develop hardware that fit the compilers, instead of the other way around, which had been the dominating paradigm up to that point.
Take for example the MIPS without interlocked pipeline stages. That was difficult for a human assembly coder to keep track of, but easy for a compiler, and it made the hardware design simpler and faster, so that's the way they went. (In fact, the assembler put in no-ops for you to inject pipeline stalls in order for your code to make sense when you programmed it in assembly. That made the object dump show stuff you didn't put there, which was a bit disconcerting...)
CISC ISAs may have individual "complex" instructions, such as procedure call instructions, string manipulation instructions, decimal arithmetic instructions, and various instructions and instruction set features to "close the semantic gap" between high-level languages and machine code, add extra forms of data protection, etc. - although the original procedure-call instructions in S/360 were pretty simple, BAL/BALR just putting the PC of the next instruction into a register and jumping to the target instruction, just as most RISC procedure-call instructions do. A lot of the really CISCy instruction sets may have been reactions to systems like S/360, viewing its instruction set as being far from CISCy enough, but that trend has largely died out.
I know you say "current", but one of the original ideas behind RISC was also to make each instruction "short", i.e. make each instruction take one cycle, and reduce cycle times as much as possible so that you could have really deep pipelines (MIPS), or increase clock speed. Now, while most "RISCs" today sort of follow this idea, by virtue of the ISA having been made with that in mind in the old days, i.e. load-store etc., they're typically not as strict about it (if they in fact ever were). I guess the CISC situation is even more complicated, as they're "internally" RISC, and you can kind-a-sort-a treat them that way by staying away from the "heavy" instructions. That is, if you can reason about what kind of time you're going to see from your micro-opped, out-of-order core anyway. The internals, and specifically the timing models, have gotten even more complex than they already were. I don't know what your take on that would be?
And, given that most processors running GUI systems these days, and even most processors running GUI systems before x86/ARM ended up running most of the UI code people see, didn't have register windows, no, they're not needed. Yeah, SPARC workstations may have been popular, but I don't think register windows magically made GUIs work better on them. (And remember that register windows eventually spill, so once the stack depth gets beyond a certain point, I'm not sure they help; it's shallow call stacks, in which you can go up and down the call stack without spilling register windows, where they might help.)
I remember reading research back in the day, that showed that register windows were orthogonal to any RISC/CISC considerations, i.e. they were about as easy/costly to implement in either architecture and they gave the same boost/or not, in either case. As you point out, in practice it turned out to not be really worth the trouble, and they died out rather quickly.
Generally the ones who have problems are the "vocal minority": that is, if you have problems, you are more likely to speak up, so if you're only seeing 20 out of 13 million, it could well indicate that the problem is quite limited.
Sure, I'm one of those. I raised hell on a Swedish electrical/electronics forum... Didn't even bother to call HP. I assumed it was a one-off, and what are they going to do anyway? Tell me to send the cable to them? (That's too much of a hassle.) And give me a new one? (I could just grab a new one from one of the conveniently situated piles at work.)
In fact, the usual rule, borne out by research, when it comes to customer satisfaction here in Sweden (originally concerning large enterprises like TV/radio) is that for every complaint you have 4,000 people who are dissatisfied but don't bother to make contact.
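Just to put numbers on that rule of thumb (a back-of-the-envelope sketch; the 1:4000 ratio is the figure above, and the 20 complaints against 13 million units are the numbers quoted upthread):

```python
# Back-of-the-envelope: the "1 complaint ~ 4000 silently dissatisfied" rule
# applied to the figures quoted in this thread.
complaints = 20               # reported cases
silent_per_complaint = 4000   # dissatisfied customers per actual complaint

estimated_dissatisfied = complaints * silent_per_complaint
print(estimated_dissatisfied)           # 80000

installed_base = 13_000_000             # units, figure quoted upthread
share = estimated_dissatisfied / installed_base
print(f"{share:.2%}")                   # 0.62%
```

So even "only 20 complaints" would suggest tens of thousands of bad cables out there, if the rule holds for this kind of product.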
Now, of course, a recall could still be warranted even if there were only 20 out of 6 million, since there shouldn't be any at all. Compare the Challenger disaster: that the O-rings had only been eroded a third of the way through didn't really mean that they had a safety factor of three, since they weren't supposed to erode at all! Likewise, these cables weren't supposed to melt either, and by a substantial safety margin at that. If as many as 20 do, that means there is a systematic error somewhere.
With the limited info I have I would guess either a cheapskate manufacturer that tried to pass the wrong gauge of cable off as the correct one, or a crappy connection between a plug and the cable.
I have one of these cables and after having analysed it, we (the guys on the forum and I
In my own case, the cable finally conducted enough to trip the RCD on my house, and that's when I started investigating. Having confirmed the problem, I then remembered the time on the airplane across the Atlantic when my socket wouldn't stay on, or the time on the train when the whole side of my car kept switching off... I had inadvertently travelled the world, leaving darkness and despair in my wake...
Pictures of my measurements (and discussion, in Swedish I'm afraid). At 20 mA, the cable developed 4.6 W, and my measured 80°C (176°F) is reasonable. I didn't really notice it before, since the power supply gets quite warm as it is.
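For what it's worth, the 4.6 W figure checks out as a simple V × I product across mains (the 230 V is my assumption from the Swedish context; the 20 mA is the measured leakage above):

```python
# Sanity check: power dissipated by a 20 mA leakage path across 230 V mains.
voltage = 230.0   # volts (Swedish single-phase mains, assumed)
current = 0.020   # amperes (20 mA, as measured)

power = voltage * current
print(f"{power:.1f} W")   # 4.6 W
```

Nearly five watts dumped into a spot inside a cable assembly is plenty to explain an 80°C hotspot.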
No, I think it's more that these people have always been here, but the last couple of years have seen a) the flourishing of the disgusting "Men's Rights" movement, and b) more talk about sexism in the tech industry, which has led to c) more articles about the subject here, and thus more arguments about it. In almost 15 years here I don't think the demographic has changed very much at all - there's a wider age range and it's more international, but fundamentally it's the same sort of audience. The site just has more articles on topics where this sort of thing comes up.
The rampant sexism seen in a lot of these articles is pretty depressing though.