And down they go again. Looks like China has decided that it is time to play a pretty damned nasty card.
As an addendum to my previous comment, starting at approximately 11 pm Beijing time, three different VPNs I am using- one free, one paid, and one corporate- have all been blocked within China. Hopefully this is temporary. IT offices will have heads on the chopping block if this is still happening in 8 more hours (or less).
As of 11:45 pm, two of my three VPNs are working again. The one that isn't is, of course, the one that people will get fired for. C'est la vie, time for bed.
I am in Beijing. I know not of this restored service of which TFA speaks. Anything Google-related website-wise (including other services, like Google Earth) has been blocked on the PC since July or so (as opposed to just being slowed down to 1 Kbyte/minute previously), but Google's client on cell phones worked without a problem. What was blocked on Friday was the phone client and any other services not covered by the previous block. As of my posting this message, everything G is still only accessible for me via VPN- Freegate or paid, either works fine. Funny thing, though: if I use Freegate, I can't use any services at Slashdot (I had to shut off FG so I could post this message and see how a comment I made in another thread had been moderated).
The only thing I use frequently that is hit-and-miss without a VPN is Google Translate; I am guessing that is because some big Chinese web sites claiming to do translation are actually just front ends to Google Translate and thus stop working.
Actually, it is funny, because Orbitz was started by the airlines themselves. They didn't need to scrape cheap airfares to lower prices so much as to cut out the travel agents as middlemen:
Five airlines--United Airlines Inc., Delta Air Lines Inc., Continental Airlines Inc., Northwest Airlines Corp., and, later, AMR Corp. (American Airlines)--teamed to create a new online travel service. (American became an equity partner in March 2000; total start-up funding was around $100 million.) Together, the five founding partners controlled 90 percent of seats on domestic commercial flights. Existing computer reservations systems such as SABRE did not present competing fares in an unbiased way, said company officials.
What makes it even funnier to me is that American Airlines was one of the founding companies of Orbitz, which was trying to lower prices versus SABRE- which American Airlines itself started in 1960!!!
I teach both C and Data Structures. Bubble sort is for my C class, where I am trying to make sure the students fully comprehend arrays (most of my students come from a non-existent programming background, and the school isn't sold on my teaching them Python as a first language just yet, as apparently I am the only instructor who can use it meaningfully) and how indexes work (getting them to reverse an array or a string doesn't quite seem to do it for about half of them); my better students will have implemented bubble sort on linked lists by the end of the semester as well. Understanding that bubble sort works isn't a problem for them, but they are only starting to think beyond what a single loop can do algorithmically. I have tried jumping straight to insertion sort, and on the whole the knowledge doesn't integrate well enough to carry forward. Good/better/best comparisons just can't happen for most of these students at this point, so they have to wait for Data Structures, where the first sort they learn is insertion sort and I don't care which language they use. (I guess if someone started using Brainfuck or Ada I might start to care.) They add every sort to one big program that calculates how much work each sort does and how much system time it takes on randomly generated lists.
As for concerning themselves with cache misses, as one person suggested: until students have a better understanding of systems, that can't readily be done with a class of 40+ frequently unmotivated students (it can with ~10 motivated students, the break point occurring somewhere in the middle, of course). I do start introducing the penalties of register/cache/RAM/disk in this class, though, so that by the time I am teaching embedded systems, the students who chose that path usually have a good grasp of it, with the best ones looking even at divide instructions with disdain. With the semester system being what it is, teaching all of that as a simultaneous, cohesive chunk of information just isn't going to happen.
It can, but the chances of it staying perfectly readable are very small. And realize that removing RAM from a machine puts it under a very different condition than intentionally accessing the RAM in a pattern that causes faster-than-normal leakage, so the results aren't mutually exclusive.
In this case, you can believe both, as the statements aren't contradictory- only your understanding of what you read is. The law the judge is referring to is the Illinois Uniform Vehicle Code, not federal law (though it generally does follow Federal Highway Administration guidelines). Every state has one, btw. But you are more than welcome to keep believing your misconceptions and misunderstandings.
The summary is shit (not shiat), though, because THERE IS NO FEDERALLY MANDATED MINIMUM TIME FOR AN AMBER SIGNAL LIGHT! Why do people think there is??? There are lots and lots of recommendations, and most states follow them, but local traffic laws aren't covered by federal law, and shouldn't be unless a traffic light gets used on an interstate.
As a general rule, what was taught to the Civvies I knew (I was comp sci, the civvies were on the floor below us) when I was in college was that amber lights should have a minimum duration of 3 seconds and go up by
BAE is still making the RAD750. I worked with the predecessor that is in the RAD6000 computer board.
To correct myself, based on something I read downstream (thanks, trparky), the P4 was 31 stages, not 19. That is really a number I shouldn't have misremembered.
That is more or less accurate. The goals of the original RISC were stated to be making a Reduced Instruction Set Computer, but what was in fact produced was a Reduced Instruction Set Complexity CPU. By restricting the touching of memory to only loads and stores, all other instructions that could execute in one clock COULD execute in one clock, always. Whereas some CISC instructions involving arrays could kick off 10+ memory touches as a side effect, RISC instructions could never do that (exceptions aside). So when all 10 of those memory touches weren't required, the RISC architecture could optimize away the unnecessary ones (which was a bitch in 1990, but commonplace by 2000 and exceedingly trivial by 2010, to put it roughly).
I taught CISC architectures (68K mostly) and was a minor architect for PowerPC (I helped work on the early EABI- embedded application binary interface- architecture).
But this leads to a problem: cache. That CISC operation that made 10 memory touches took roughly 10-18 bytes of instruction storage (68K example), plus 10 data cache accesses that would either hit or miss. But a 16-bit RISC would take 22 bytes (and didn't double the number of useful registers available), and a 32-bit RISC would take 44 bytes (but generally doubled the number of useful registers, reducing the need for so many loads and stores). Thank goodness it took fewer transistors to implement the instruction pipeline, because you need them all back to make the Icache bigger! The hope was that those 10 memory touches were rarely needed if you had more registers, so you could cut back on other loads somewhere (but we didn't get really good at doing that automatically until the late '90s, by which time we could show that the RISC penalty was effectively negated; specific numbers remain the property of my name-changed employer, but they were down to single-digit percentage differences). Dcache would have the same hits and misses, unless you were also able to allocate saved transistors to some extra Dcache, which might affect hit rates by some low single percentage points.
But with complicated instructions come pipeline clocking challenges. Implementing the entire x86 pipeline in 5 stages would result in a sub-200 MHz pipeline today- the P4 push to 4 GHz required up to 19 stages worst case (and who knows how many designers), IIRC! Meanwhile, most RISC architectures zoom along happily with 5-7 stages, and only manufacturing nodes or target design decisions keep them from clocking up to x86 frequencies.
Hands down, it was never any 'benefits' of CISC (or, specifically, the x86 architecture) that allowed Intel to take the field; it was market forces and manufacturing might. A win is a win.
BTW, to the AC GP: just because an instruction appears complex (most SIMD operations, MADDs, FPSQRTRES, etc...), it still counts as RISC if it can be either executed in one clock or at least pipelined with nominally one result per clock, so long as it doesn't impact the pipeline for all the other commonly executed instructions. After all, we can make a divide instruction execute in 1 clock, too, as long as you don't mind your add instructions taking 16x longer (though still one clock), but that is cheating.
At some point in your life you're going to have to go all Zen about it and not care so much.
Only then can you throw those old SCSI cables out.
Hah, I scrapped 4 cubic yards of collected computer detritus today, including at least a dozen different SCSI cables (with some UltraSCSIs). Been needing to do that for years. I did shed a bit of a tear over the Amiga stuff, though.
Yes, I donated to anyone and everyone all that I could before I scrapped. But 4 working PCs couldn't even be given away to an orphanage!
The robots can build 30,000 devices PER YEAR.
Which would be a perfectly reasonable reading and what I expected to find, as well, though without knowing what units are being produced you have no idea if 30,000 is an impressive number at all.
And yet neither of the two articles I posted previously, nor any of these, has any information suggesting that any one robot can make 30,000 units in any specific time. In fact, one of them explicitly says that the robots are incapable of building a single iPhone from start to finish, as they don't have the necessary functionality: "However, the new machine can perform only a few basic tasks, such as lifting and placing components. In other words, they do not have the precision needed for the assembly of the iPhone..." That suggests they are capable of making 0 units per year, not 30,000.
However, each and every one of them says that Foxconn plans to have 30,000 robots installed by the end of the year. Wanna play Occam's Razor?
"Foxconn said its new "Foxbots" will cost roughly $20,000 to $25,000 to make, but individually be able to build an average of 30,000 devices."
So approximately $0.67-$0.83 of the cost of an iPhone will be eaten up by a robot that can only make 30,000 devices before having to be replaced? For some reason, I think Foxconn is probably even better at the financial math than that, and the quote seems so wrong, in both a factual and a grammatical sense, that I actually had to RTFA (I hate you, redletterdave), and sure enough the quote is direct from the Businessweek article (I hate you even more, Dave Smith of Businessweek). However, reading 5 other variations of the same announcement, not one of them uses the same phraseology, which makes me wonder where the quote actually came from. Dailytech, for example, says that Foxconn will have 30,000 Foxbots installed by the end of the year and makes no mention of the speed at which they can build anything (which makes sense, since the robots are so simple- basically pick and place- that no one robot could build an entire device). Another website, Regator, gives the same clue, saying they already have 10K Foxbots and plan to install another 20K by the end of the year.