
Comment Re:I want (Score 1) 238

Such systems exist.

Even Dell sells them.

ARM is quite capable of competing with Intel, but it is no magic bullet.

CPU Core power usage is only part of the overall system power usage.

What you want is to maximize the number of web requests served per second. You can have one fast quad-core x86_64 chip do the job, or you can have many more (considerably slower) ARM cores do it.

In the end, the number of requests/second is similar, as is the power consumption.

You can't get around the laws of physics: there is a minimum amount of energy required to perform an operation. Intel chips draw more power largely because they perform more operations per second.
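As a toy illustration of the tradeoff, throughput per watt can be written out directly. The core counts, per-core throughput, and system wattages below are made-up round numbers for illustration, not measurements:

```python
# Hypothetical spec-sheet numbers -- not measurements -- to illustrate
# why "many slow cores" and "few fast cores" often land in the same place.
def requests_per_watt(cores: int, req_per_sec_per_core: float, system_watts: float) -> float:
    """Whole-system throughput per watt for a web-serving workload."""
    return cores * req_per_sec_per_core / system_watts

x86 = requests_per_watt(cores=4, req_per_sec_per_core=2500.0, system_watts=120.0)
arm = requests_per_watt(cores=48, req_per_sec_per_core=220.0, system_watts=130.0)

print(f"x86_64: {x86:.1f} req/s/W")  # ~83 req/s/W
print(f"ARM:    {arm:.1f} req/s/W")  # ~81 req/s/W
```

With plausible numbers, the two architectures end up within a few percent of each other, which is the point: whole-system efficiency, not core efficiency, is what matters.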

Comment Re:That's what is so funny to me (Score 1) 238

75% of the performance for 25% of the power compared to x86

The problem is they don't provide anything approaching that sort of efficiency.

I've had the, um, privilege of benchmarking a few of the new up-and-coming ARM server systems and chips. It's pretty neat to be able to have four quad-core servers, each with 4GB of memory, pulling a total of 40W or so. That's a great system for a web server farm.

The problem comes when you compare throughput per watt for high-performance computing. For a few workloads, the new ARM systems compare favorably, giving a small edge in work done per watt; the advantage in these workloads is usually less than 5%.

The best-case for the ARM systems, under workloads that are most ideal for the ARM systems in question: 105% of the performance for the same power draw.

If your workload doesn't scale easily to a large number of cores, or has a large amount of inter-process communication, the current ARM systems are hopeless. Even with a supposedly high-performance backplane, a chassis hosting around 40 ARM nodes was soundly trounced by one x86_64 node.

Note this is for a cluster system, so many x86_64 and ARM nodes are being used for the benchmark.

One x86_64 node can handle the workload of 40+ ARM nodes. Granted, you can fit 40+ ARM nodes in a 3U chassis, but that's still lower overall compute density than 1U x86_64 nodes. Simply throwing more cores at the problem doesn't necessarily give you the gains you'd expect.

While ARM is theoretically capable of better performance per watt, nothing resembling the theoretical numbers is achievable in a supercomputing application. Any advantage ARM has in performance per watt is eaten up by the overhead of coordinating so many (slower) cores. Very few workloads scale linearly as you add cores; various overheads (MPI, network, etc.) decrease the overall efficiency dramatically.
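The scaling penalty described above can be sketched with Amdahl's law. The 1:40 per-node speed ratio and the 95% parallel fraction below are assumptions for illustration, not benchmark data:

```python
def amdahl_speedup(n: int, parallel_fraction: float) -> float:
    """Speedup on n nodes when only `parallel_fraction` of the work scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# Hypothetical: each ARM node runs at 1/40th the speed of one x86_64 node,
# and 95% of the workload parallelizes (already optimistic for MPI codes).
per_node_ratio = 1.0 / 40.0
effective = per_node_ratio * amdahl_speedup(40, 0.95)
print(f"40 ARM nodes vs 1 x86_64 node: {effective:.2f}x")  # ~0.34x
```

Even under these generous assumptions, the 40-node ARM cluster delivers only about a third of the single fast node's throughput; the serial fraction and communication overhead eat the rest.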

Currently, you can't use ARM for memory-hungry applications, as you'll hit the 4GB address limit. 64-bit ARM is promising, but it's not yet for sale.

The best performance per watt for supercomputing workloads is still found in accelerators, such as GPUs or Intel's Xeon Phi.

ARM is very promising for many datacenter type workloads, where there are a large number of unrelated, independent processes, such as a farm of web servers. (Any database backend is, however, a different matter).

While slightly OT (as it's a non-supercomputing application): what if your application uses Java server-side? Forget it. The current ARM JVMs (OpenJDK and Oracle's alike) appear to lack a JIT compiler; the only way to get similar performance per watt between ARM and x86_64 is to disable the JIT on x86_64. This is largely a software issue, but until it's fixed, forget about Java on an ARM server.

Comment Re:I actually believe Rossi (Score 4, Interesting) 426

Read the actual report and see if you really think a few chemicals could really do what you suggest -- keep the temperature steady and glowing hot for 100 hours. If so, that would be amazing.. especially since the weight of the reactor did not change!

I don't just think a few chemicals could do what I'm suggesting; I know they can.

The "glowing hot" reaction was, by their own admission, a very short-term reaction meant to "prove" that the device can generate a lot of heat - in fact, enough to melt its steel and ceramic. This was an earlier test, and had no part in either of the tests that lasted around 100 hours. In fact, the paper gives no indication of the duration of the "glowing hot" test at all.

The 100-hour "long term" test in their report used an entirely different device, "purposely" run cooler. There's no evidence it's even the same design.

Chemical reactions can easily melt the steel and ceramics used in the E-Cat. Thermite will melt through steel, concrete, and a few feet of dirt underneath. Thermite is self-oxidizing; the reaction is Fe2O3 + 2 Al → 2 Fe + Al2O3. Note that the resulting compounds aren't gaseous and therefore won't escape into the atmosphere, so the weight before and after the reaction is the same. That's only one of many reactions that can produce the same effect.
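As a quick sanity check on the mass-conservation claim, the molar masses balance (the values below are standard textbook figures, rounded to two decimals):

```python
# Mass conservation in the thermite reaction Fe2O3 + 2 Al -> 2 Fe + Al2O3.
# Standard molar masses in g/mol.
Fe2O3, Al, Fe, Al2O3 = 159.69, 26.98, 55.85, 101.96

reactants = Fe2O3 + 2 * Al   # 213.65 g per mole of reaction
products = 2 * Fe + Al2O3    # 213.66 g per mole of reaction

# No gaseous products, so nothing leaves the apparatus:
assert abs(reactants - products) < 0.1
print(f"reactants: {reactants:.2f} g, products: {products:.2f} g")
```

The tiny residual difference is rounding in the molar masses; nothing escapes as gas, so a scale reads the same before and after.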

When the test is prepared beforehand, and/or you're not allowed to monitor the whole process, it's quite easy for a charlatan to adulterate the test.

Generating a lower level of heat for 100 hours isn't particularly difficult either, though not necessarily by chemical means this time.

Notably, Rossi would not let anyone disconnect the power cables, instead demanding they use an ammeter to "prove" the cables weren't drawing any power.

I've seen such a demonstration by an EE professor: a working meter read zero current on a live circuit. The point was that you really should understand how a meter works before inferring anything from its readings; otherwise you end up with a meter reading zero on a circuit carrying enough electricity to kill. Rossi could easily be using one of several methods to deliver power through the wire such that an ammeter still shows zero.

Putting too much trust in a measuring instrument you don't understand has been the source of a lot of scientific embarrassment over the centuries.

Similarly, Rossi will not allow any chemical analysis after the fact that would prove his claims (such as analysis for the type of copper isotopes that would result from the proposed reaction). Trying to claim that it would give away his "trade secret" catalyst is a joke: If he ever plans on selling this thing commercially, he will be required by law to provide an MSDS which details exactly what is in the catalyst.

Even Coca-Cola has to list its ingredients on the can. The difference is the laws are a bit more lenient towards "natural and artificial flavorings" in foods than they are towards industrial reagents.

Comment Re:I actually believe Rossi (Score 4, Insightful) 426

You obviously have no experience in the realm of charlatans.

In every single case of too-good-to-be-true power sources, there is a gimmick to make the results seem legitimate to the untrained.

It's little more than a magician's trick: misdirection plus the ignorance of the audience. There's no real magic in a magician's act; misdirection and suspension of disbelief are the breadwinners.

There's a reason charlatans won't allow people who know what they're doing to examine their apparatus. It has nothing to do with "trade secrets", and everything to do with the fact that experienced chemists, engineers, and physicists generally find the gimmick and expose their fraud in minutes.

There are a lot of ways to heat up the reactor chemically. If you honestly believe that "it can't be done conventionally," then you've utterly failed chemistry.

Simply heating a lithium-ion battery pack above 185 °C starts an unstoppable chain reaction that quickly reaches a couple thousand degrees. Surround a LiPo battery with a chunk of steel, short-circuit it, and it will exceed 185 °C. Then you just wait for the steel to slowly heat up. Build an array of a few of these, add copper to equalize the heat across the bank, and voila: a heat source of well over 500 watts that can last for hours - days even, if you set it up right.
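A back-of-envelope check, using assumed round numbers for pack mass and energy density, shows the claimed wattage and duration are plausible from stored charge alone (thermal runaway releases additional chemical energy on top of this):

```python
# Back-of-envelope check (assumed round figures, not measurements):
# can a modest bank of LiPo cells sustain ~500 W for hours?
energy_density_wh_per_kg = 200.0  # typical ballpark for LiPo chemistry
pack_mass_kg = 10.0               # a bank of hobby-sized packs
target_watts = 500.0

stored_wh = energy_density_wh_per_kg * pack_mass_kg  # 2000 Wh of stored energy
runtime_h = stored_wh / target_watts                 # hours at the target draw
print(f"{runtime_h:.0f} hours at {target_watts:.0f} W")  # 4 hours at 500 W
```

Four hours from stored charge alone, before counting the exothermic decomposition of the cells themselves, is well within what a demonstration would need.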

As much as the /. crowd hates to admit it, there are reasons for most intellectual property law, NDAs, patents, and so forth. One of them is to protect an inventor, so he can have outside experts verify his apparatus, and can publish how it works for expert scrutiny, yet still retain the rights to profit from his work.

I'd like to believe that Rossi has somehow found a way to generate abundant, clean, and cheap energy. His actions, however, are those of a garden-variety charlatan: grandstanding before customers and the press, staged "demonstrations", and refusal to let experienced external experts examine his device.

The smell of charlatan coming off the guy is overpowering; his behavior is identical to that of a classic con man. As the saying goes, you don't have to eat the whole turd to know it isn't chocolate.

Comment After reading the patent, Google is in the clear.. (Score 2) 122

Having actually read the patent, it looks like Google Authenticator, for example, is in the clear.

The patent states that the following must occur:

1.) User inputs a password
2.) The authenticating device receives the password from #1, generates a second password, and sends it out-of-band to an external device (pager, phone, etc.)
3.) The user reads the second password from that device
4.) The user inputs the second password into their computer
5.) The computer sends the second password to the authenticating device.
6.) The authenticating device finally grants access.

Google Authenticator works differently:
1.) User inputs a password
2.) User inputs the code read from the device
3.) BOTH are sent over the network to the authenticating computer at the same time.
4.) The authenticating computer grants access.

Note that Google Authenticator does not generate the 'multi-factor' password after receiving the first password from the user.

The multi-factor password is generated on the device (pager, phone, etc.) every X seconds, independently of the first password.

It's an entirely different mechanism.

Which means my already low opinion of this guy is now lower, as he's descended into obvious patent-troll territory.

Comment Re:Meh (Score 1) 286

The situation has actually only gotten worse as process sizes shrink. My sources are in the Engineering & QA departments of ultra-high-performance SSD manufacturers (the kind that make any of the disks you mentioned seem slower than carving the data into stone).

You can take that however you want; I have no interest in endangering my friends' jobs, and I don't really care whether you believe it.

Anybody who takes a manufacturer's warranty as a life expectancy has a few screws loose. A warranty is as much a marketing tactic as flashy ads. Sales & marketing droids don't give a rat's ass if their campaign costs the company a load of cash in 2-3 years; they'll just shift the blame to engineering & QA and then go out for a drink. They're almost pathologically averse to accepting responsibility for any destruction or debt they cause.

Sales & marketing drones care about making the sale now. Few I've met care whether the company succeeds or goes under; the droid just moves on to the next company, as long as the commission is high enough. The sad truth is that a good salesdroid doesn't need a good product, and can easily move between companies and between industries.

Comment Re:Meh (Score 1, Insightful) 286

SSDs suffer from one fairly critical problem: longevity.

As densities rise (and process sizes shrink), SSD reliability will only decline from its already poor state.

I'm sure you're thinking "but improvements in technology will bring greater reliability!" Unfortunately, we're at the point where quantum tunneling starts screwing with us, and there's little we can do about it. Quantum mechanics is a brick wall, and we've run smack into it. Tunneling is relatively rare at current process sizes, but it won't be long before it's an insurmountable problem.

I've seen few SSDs last more than two years, even under relatively light workloads.

Many of the faster drives are lucky to last six months.

SSDs also lose data to bit rot at a rate much higher than advertised.

I think it was Jeff Atwood (of Stack Overflow and Discourse) who placed SSDs on the "crazy/hot" scale of data storage: their crazy-bad reliability is tolerated only because their performance is so hot.

That's not to say SSDs don't have their place, but it's not likely I'll ever trust them with data I actually want to keep long-term.

Comment Re:One hole at a time (Score 1) 129

It's difficult to read all of the meaning here, but one less hole can be worse than nothing, given our congressional circus.

Give the scientists some benefit of the doubt: they are aware not only of the scientific issues involved, but of the political minefield any change must charge through. Focusing all efforts on one issue (banning an insecticide) is relatively simple.

Even if you got an insecticide ban through, these scientists apparently don't feel it's enough. Banning the insecticide first and figuring out what else to do later would be a total disaster with legislators: the ban alone won't make a difference "large enough", and we'd then watch a familiar group of congressmen and senators block any further action (of any kind) because the insecticide ban didn't fix the problem by itself.

Comment Re:Once upon a qwest (Score 4, Insightful) 105

It's Qwest's famous Spirit of Service

For CenturyLink, it was probably a good deal: they get to be a Tier-1 peer, instead of having to pay extortionate transit fees like Tier-2 and Tier-3 networks.

It was a very good deal for Qwest's customers. They went from being limited to 1.5 Mbit to being able to buy "up to" 40 Mbit...

I dumped CenturyLink/Qwest long before then, but my brother supposedly got close to 30 Mbit measured.

Like most telecom idiots, CenturyLink has a 12-month "introductory rate" and won't negotiate afterward. Since all of their competitors do the same, the standard practice has become switching providers every year, once the introductory rate expires. The same applies to cable or satellite TV; customers just switch every year for a lower rate.

I really don't see how being so boneheaded helps either company, but that's telecom in the USA.

Comment Re:www.sixxs.net appears to be under attack (Score 1) 338

Have you, by any chance, imported the CAcert.org root certificate(s)?

I don't recall if CAcert.org is included by default... I'm thinking not.

If you had trouble figuring out that the certificate was signed by CAcert.org, and you don't know who CAcert.org is or how the X.509 CA racket works in general, I'd suggest you just wait for your ISP to do everything for you.

Comcast is currently deploying IPv6. A few news items down, they state: "IPv6 has been launched on all Arris DOCSIS 3.0 C4 CMTSes, covering over 50% of our network. We are targeting completion of the rest of the network by mid-2013."
