RC4 comes from Ron Rivest [of RSA fame] and stands for either "Rivest Cipher 4" or "Ron's Code 4": http://en.wikipedia.org/wiki/RC4
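For reference, the whole cipher is small enough to sketch in a few lines. This is the textbook KSA + PRGA construction from the Wikipedia page [RC4 is long broken, so treat this as illustration only, never as production crypto]:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt with RC4 (symmetric: the same call decrypts)."""
    # Key-scheduling algorithm (KSA): permute S under control of the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Wikipedia's test vector: key "Key", plaintext "Plaintext".
print(rc4(b"Key", b"Plaintext").hex().upper())  # BBF316E8D940AF0AD3
```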
Given the following:
what would happen if the car decided to recharge itself? Would the car be arrested?
I still have my original copy of the IEEE journal paper that I clipped in the 1970s. It stood out as a landmark paper then. About 15 years ago, I was at a technical talk and was able to get Martin Hellman to autograph it.
Let the person dream [just like the postdoc they want to do]
You're right, graphene is probably the future. I think so too, if the [zero] bandgap problem can be solved. There was an article on
A graphene processor running at 100GHz would [probably] outperform a 32 core x86. Initially, a graphene chip would probably have far fewer gates. Thus, a true RISC ISA [like ARM] would be the logical choice. It could run x86 in software emulation (e.g. QEMU) and still beat the pants off the best Intel offering. Yum.
As to process/die shrinkage, I, too, have seen the announcements about the end of Moore's Law. They always seem to recur on the same interval as ML itself [every 2 years]. I've also seen articles about techniques to extend ML for the foreseeable future. No worries.
The other part of the puzzle is memory bandwidth. Getting a cache miss [no matter how large a cache (e.g. 12 MB)] slows things down to the point where advances in chip computation speed stall on memory transfer.
For that, we'd need a DRAM replacement like magneto-resistive RAM [or ferroelectric RAM]. MRAM has the same cell size as DRAM, retains data without power, and is at least 10x faster than DRAM. L1 cache is pure static RAM [which has active power draw and large footprint]. MRAM is as fast as L2 cache.
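The cache-miss cost can be quantified with the standard average-memory-access-time formula. The latencies and miss rate below are illustrative assumptions, not measurements; the "10x faster" case stands in for an MRAM-class main memory:

```python
def amat(hit_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: hit time plus expected miss cost."""
    return hit_ns + miss_rate * miss_penalty_ns

# Assumed numbers: 1 ns L1 hit, ~60 ns penalty out to DRAM,
# ~6 ns if main memory were 10x faster (the MRAM claim above).
dram = amat(1.0, 0.02, 60.0)   # even a 2% miss rate triples access time
fast = amat(1.0, 0.02, 6.0)    # same miss rate, 10x faster memory
print(round(dram, 3), round(fast, 3))  # 2.2 1.12
```

The point: a small miss rate times a large penalty dominates, which is why faster main memory buys more than a bigger cache past a certain size.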
Hewlett Packard [which has been taking the lead in MRAM development] has a roadmap for MRAM deployment:
- Replace flash memory [and unlike flash, MRAM doesn't "wear out"]
- Replace DRAM
- Replace/eliminate L2 [and L3] cache
- Put MRAM at the heart of an SOC solution
HP is willing to license the tech to anybody that wants it. Unfortunately, the last announcement of any real progress was a while back.
ARM is starting to encroach on x86 in the server space:
- lower data center power requirements
- they're coming out with a 64-bit version
- ARM has a much smaller die footprint
Intel must do ARM to stay in that game.
ARM would not be Intel's first foray into architectures other than the x86 line [8080, 8085, 8086, 80186, 80286, 80386, 486, 586, 686]. Remember Itanium [;-)], but also the 432. The Itanium and the 432 didn't pan out because [the market for] x86 was so strong, but this indicates that while Intel is wedded to x86, it isn't slavish about it either. They care about making chips at a profit more than they do about any given processor architecture. x86 has been a great tool to let them do that. But x86 is just a means to an end for them.
When x86 ceases to be the asset it currently is, Intel will adopt whatever the market demands. The trend for this is ARM (vs. sparc, mips, etc.). At this point, [even] Intel can't kill ARM. ARM has too much demand now [it's a better solution for mobile and embedded/hybrid systems and will surpass x86 in the server space in the near future]. Intel is adapting/reacting now, while it has time to do so on its own terms, instead of waiting 10 years and being forced to do it in a panic.
Contrast this with MS and Windoze. MS lost the mobile space race because of its insistence on Windoze. Intel won't make the same mistake, if for no other reason, they saw what it did to MS.
As to MS, most likely, in 10 years, we'll see MS/Office running on OSX/iOS, Linux, and Android with Windows just a fond memory.
Long term, Intel must become a foundry, because it will lose its process-generation edge (e.g. 22nm -> 14nm, but after 6(?)nm there isn't much room left, and others will catch up).
Intel will make money on this. In the mid 80's, Intel was selling its first generation 386 chip for $750. An Intel engineer told me that the same chip was designed to be profitable even if it sold [had to be sold] for $35.
This forces Global Foundries to be more competitive with Intel, which benefits AMD.
GF, TSMC, etc. have been riding the [profitable] curve of being a generation back. That is, Intel is always a generation [or two] ahead, but also incurs significant R&D costs to do so. The competitors could wait and get the same results for far less investment in R&D. They could do this because Intel wasn't competing with them [by producing ASICs, FPGAs, etc.].
This forces the non-Intel foundries to produce cutting-edge stuff sooner. AMD was a bit chagrined after spinning off GF and seeing it fall back into the TSMC model [making AMD less competitive against Intel].
The benefit for Intel is threefold:
- More ROI for their expensive fabs. Previously, costs were always recovered because the PC market was always expanding. With this now shrinking, a nextgen Intel fab may need to do piece work to stay profitable.
- Forcing the competition to compete head on [with the increased costs of being first generation], weakening them in the process [pun intended].
- A toe-in-the-water with ARM and mobile space [Atom notwithstanding] as a hedge against x86 arch going the way of the dinosaurs [without the stigma to x86 of a full fledged announcement of direct ARM support].
I think "low yield" was referring to the nature of the over-pressure attack (vs. the rotor speed attack). Or, that things could have been orchestrated to damage/disable all centrifuges at one time [which would have been detected] instead of just increasing the failure rate [which, as Langner pointed out, would confuse/confound the Iranian engineers].
Langner talks a lot about avoiding detection circa 2007 but that being less of a concern in 2009 [e.g. "now that the program has achieved its objectives, let's shock the world with our cyber attack prowess"].
But, perhaps "uproar" was/became a desired result of Stuxnet. I recently got an email from my local congressman regarding defense against cyber warfare.
So, Stuxnet set back the Iranian program a bit. But, it also got Congress thinking about [read: funding] cyber warfare defense [offense is implied].
"Cyber warfare" [although, perhaps, a legitimate concern in the wake of Stuxnet] also becomes the "bogeyman under the bed" that could provide public justification for more NSA-like intrusion/trickery.
In-N-Out-Mummy, Mummy King, Mummy-In-The-Box, McMummy's
Researchers at the University of Maryland have been using the tobacco mosaic virus for similar purposes: http://phys.org/news/2010-12-virally-nano-electrodes-boost-energy-capacity.html
Sounds like you're logged into gmail when you go to youtube.
Log out [of gmail] first [possibly clearing some cookies] and you'll have no problem. I have a gmail account [but I only access it through POP3/IMAP from thunderbird, so it's never logged in] and I don't have the same problem. I did have the problem once when I was logged into gmail.
If you'd rather not log out/log in on gmail repeatedly, you can create a separate browser profile [in Firefox, at least] for youtube, etc.
See Jane Mayer's New Yorker piece http://www.newyorker.com/reporting/2010/08/30/100830fa_fact_mayer to get a truer sense of the depth and breadth of the machinations.
The article mentions that one can change the bandgap of a material with the laser. Isn't this what has been holding back graphene semiconductors--that they have a zero bandgap? Could this technique be used to produce practical graphene semiconductors?
Yes. Just artificially dropping some packets (either deliberately or just to implement some notion of quality-of-service) can be problematic. While an established TCP socket can deal with this, a DNS lookup [which is datagram based] can be severely affected. My ISP implements QoS, and most of the delay I experience is a failed DNS query that must time out and be retried (e.g. I'll wait a minute for a page load, but 55 seconds of that is waiting for the DNS request to succeed).
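The timeout-and-retry behavior can be sketched generically. This is not any real resolver's code; the helper name and the simulated answer are made up for illustration. The point is that with datagrams, a single dropped packet costs a full timeout before the retry even fires:

```python
import socket

def query_with_retry(send_query, timeout_s=5.0, retries=3):
    """Datagram-style request loop: no delivery guarantee, so every lost
    packet burns a full timeout before the next attempt (unlike TCP,
    which retransmits quickly on an established socket)."""
    for attempt in range(retries):
        try:
            return send_query(timeout_s)
        except socket.timeout:
            continue  # query or reply was dropped; try again
    raise socket.timeout("all %d attempts timed out" % retries)

# Simulate a resolver whose first two datagrams get dropped.
calls = []
def flaky(timeout_s):
    calls.append(timeout_s)
    if len(calls) < 3:
        raise socket.timeout  # simulated dropped datagram
    return "93.184.216.34"    # hypothetical answer

print(query_with_retry(flaky))  # succeeds on the 3rd attempt
```

With a 5-second timeout, those two dropped datagrams alone account for 10 seconds of wall-clock delay, which is exactly the stall described above.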
It's a way to censor things without doing outright censorship (e.g. blocking Google 100%). It's my belief that when Google was negotiating with the Chinese government about access, they argued strenuously, but in the end, they took the best deal they could get [were offered]. I mean, if Google had taken a stronger stance (e.g. "we won't limit access"), what would the government's response have been (e.g. 100% blockage) and what would the Chinese people's response have been?
Rest assured that your government's artificial push for Baidu was known here in the US, not that we've been able to do much to help. Our government people do talk with Chinese officials about such things [and many more], but your government is usually quite stoic in its responses.
Economic freedom and political freedom are two sides of the same coin. You can't really have one without the other. Note that I'm not talking about capitalism per se. Many European countries have a modified form of socialism, but still have a government that fosters political freedom.
Hopefully, Google's initiative will provide some improvement/relief. Time will tell.
Seems to me the limiting factor will be ISP datacaps.
The ISPs that tend to have them are the ones that also want to deliver their own content (U-Verse and Comcast, to name a few). Datacaps limit peer-to-peer networks.
A more sinister interpretation is that datacaps limit the amount of traffic that the NSA has to sift through. The ISPs that seem to have the greatest track record of caving to NSLs, etc. are also the ones with datacaps. Coincidence?
Thus, datacaps also apply when one's "friend" routes traffic through one's connection to support a distributed VPN scheme.