Comment: I doubt it (Score 4, Interesting) 101

by enriquevagu (#46711665) Attached to: Intel and SGI Test Full-Immersion Cooling For Servers

(Sorry for the duplicate posting; the previous one was cut off because of problems with the HTML markup.)

In order to obtain a 90% reduction in the energy bill, cooling must account for 90% of the power of the data center. This implies a PUE >= 10. As a reference, 5 years ago virtually any DC had a PUE lower than 3; nowadays a PUE lower than 1.15 can be obtained easily. Facebook, for example, publishes the instantaneous PUE of one of its DCs in Prineville, which at the moment is 1.05. This implies that any savings in cooling would reduce the bill by, at most, a factor of 1.05 (1/1.05 = 0.9523, i.e., roughly a 5% saving).
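
To make the arithmetic explicit, here is a small Python sketch (PUE is total facility power divided by IT power; only the 90% and 1.05 figures come from the text above, the rest is plain arithmetic):

# Back-of-the-envelope check of the PUE arithmetic above.
# PUE = total facility power / IT equipment power.

def pue(total_facility_power: float, it_power: float) -> float:
    return total_facility_power / it_power

# For cooling (and other overhead) to be 90% of the bill, IT can only be 10%:
print(pue(total_facility_power=1.0, it_power=0.1))  # 10.0 -> a PUE of at least 10

# With a modern PUE of 1.05 (the Prineville figure), removing all overhead
# still leaves 1/1.05 of the original bill:
print(1 / 1.05)  # ~0.952 -> at most about a 5% reduction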

On the other hand, I believe this is not the first commercial offering of a liquid-cooled server: Intel was already considering it two years ago, and the idea has been discussed in other forums for several years. I can't remember right now which company was actually selling these solutions, but I believe they were already on the market.

Comment: Homework is part of the learning process (Score 1) 278

by enriquevagu (#46554047) Attached to: Don't Help Your Kids With Their Homework

Homework is part of the learning process; helping with homework prevents the kid from doing it and, in the process, learning. Solving math problems for homework, for example, is not done because the teacher wants to know the final answer; it's because the teacher wants the student to confront a new type of problem and, in the process of working out how to obtain the solution, learn. I give detailed solutions to the class problems to my undergraduate students (they would obtain them anyway, and in many cases with errors), but I always insist that looking at the solution should be the last resort when they don't know how to approach a problem (the solutions are intended for checking their own work).

The problem with external help (typically from the parents) is that, in many cases, the parents get too involved and actually do the homework for their children, so the teacher cannot find any errors in the child's results. I know of a case of a mother who was doing exactly this because her kid didn't get very good grades (I was even asked to help with some work the child had to do for school on the computer!). The grades kept getting worse and, three years later, the kid was in special education. I know this is not the only reason, but I'm confident it was a significant factor.

Comment: Classification, first step (Score 1) 983

by enriquevagu (#46463851) Attached to: How Do You Backup 20TB of Data?

The first step is to classify the data into two groups: the data you would not want to lose at any cost, and the redundant data (movies, music, etc.) that you could survive without. This is the most important step.

The second step is to back up the important data using an external 1 TB drive, tape or similar (see the sketch below).

Optionally, the third step is to delete the remaining 19 TB.
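
As an illustration of the second step only, here is a minimal Python sketch that copies a whitelist of must-keep directories to a mounted external drive; the source directories and mount point are hypothetical placeholders, and a real backup tool with incremental copies and verification would do this better:

# Sketch of step 2: copy only the "can't lose" data to an external drive.
# The source directories and the mount point are hypothetical placeholders.
import shutil
from pathlib import Path

IMPORTANT = [Path.home() / "documents", Path.home() / "photos"]  # the ~1 TB that matters
BACKUP_ROOT = Path("/mnt/external_drive/backup")                 # mounted external disk

for src in IMPORTANT:
    dst = BACKUP_ROOT / src.name
    # dirs_exist_ok=True lets the same script refresh an existing backup in place
    shutil.copytree(src, dst, dirs_exist_ok=True)
    print(f"copied {src} -> {dst}")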

Security

Hackers Gain "Full Control" of Critical SCADA Systems 195

Posted by samzenpus
from the protect-ya-neck dept.
mask.of.sanity writes "Researchers have found holes in industrial control systems that they say grant full control of systems running energy, chemical and transportation systems. They also identified more than 150 zero day vulnerabilities of varying degrees of severity affecting the control systems and some 60,000 industrial control system devices exposed to the public internet."

Comment: Our approach in our research group (Score 1) 189

by enriquevagu (#45750709) Attached to: Scientific Data Disappears At Alarming Rate, 80% Lost In Two Decades

This problem occurs even for people in the same group, who often have trouble repeating the simulations from our own papers, even ones as recent as one year old. The problems typically come from people leaving (PhDs finishing, grants expiring, people moving to a different job), changes in the simulation tools, etc.

In our Computer Architecture research group we employ Mercurial for versioning the simulator code, so we know when each change was applied. For each simulation, we store both the configuration file used to generate that simulation (which also includes the Mercurial revision of the code being used) and the simulation results, or at least the interesting ones. Most simulators allow for different verbosity levels, and in most cases the bulk of the output is useless, so we typically store only the interesting data (such as latency and throughput), because otherwise we would run out of disk space.
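
As a rough sketch of that bookkeeping (the file names, metrics and paths are hypothetical, not our actual tooling), one can record the Mercurial revision of the simulator together with the configuration and the interesting results of each run:

# Store the Mercurial revision of the simulator together with each run's
# configuration and the metrics worth keeping.
import json
import subprocess
from pathlib import Path

def current_hg_revision(repo_dir: str) -> str:
    """Return the current Mercurial changeset id of the simulator repo."""
    out = subprocess.run(["hg", "id", "-i"], cwd=repo_dir,
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def record_run(repo_dir: str, config_file: str, results: dict, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Keep only the interesting metrics (e.g. latency, throughput), plus enough
    # metadata to re-run the simulation later.
    metadata = {
        "hg_revision": current_hg_revision(repo_dir),
        "config_file": Path(config_file).read_text(),
        "results": results,
    }
    (out / "run_metadata.json").write_text(json.dumps(metadata, indent=2))

# Example (hypothetical names):
# record_run("simulator/", "configs/mesh_8x8.cfg",
#            {"latency_ns": 412.0, "throughput_flits_per_cycle": 0.63},
#            "results/mesh_8x8/")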

Even with this setup, we often have trouble replicating the exact results of our own previous papers, for example because of poor documentation (typical in research, since homebrew simulation tools are not maintained as one would expect from commercial code), changes that introduce subtle effects, code that gets lost when someone leaves, or simply large files that get deleted to save disk space (for example, simulation checkpoints or network traces, which are typically very large).

However, you typically do not need to go back and replicate old results, so keeping all the data would be wasted effort. I completely understand that research data gets lost, but I think it is largely unavoidable.

Intel

Intel's Wine-Powered Microprocessor 126

Posted by samzenpus
from the it-does-go-well-with-the-fish dept.
angry tapir writes "In a new twist on strange brew, an Intel engineer showed off a project using wine to power a microprocessor. The engineer poured red wine into a glass containing circuitry on two metal boards during a keynote by Genevieve Bell, Intel fellow, at the Intel Developer Forum in San Francisco. Once the red wine hit the metal, the microprocessor on a circuit board powered up. The low-power microprocessor then ran a graphics program on a computer with an e-ink display."

Comment: Server description (Score 1) 301

by enriquevagu (#44559805) Attached to: EFF Slams Google Fiber For Banning Servers On Its Network

"all ISPs are deliberately vague about what qualifies as a 'server'" ... because TCP clearly specifies it.

The fact that some programs may or may not behave correctly when implementing a server (e.g., Skype), or that, in some cases, ISPs allow certain services or ports, does not mean that a 'server' is something arcane. It's you who doesn't know what it is.
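
As a minimal sketch of the distinction being invoked (the port number and addresses are arbitrary choices for illustration): a 'server' performs a passive open and waits for incoming connections, while a client performs an active open.

# A server does a passive open (bind/listen/accept); a client does an
# active open (connect). Port 50007 and localhost are arbitrary.
import socket

def run_server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 50007))
        srv.listen()                 # passive open: wait for incoming connections
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"hello from the listening side\n")

def run_client() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 50007))   # active open: initiate the connection
        print(cli.recv(1024).decode())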

Comment: Comparison to PCM (Score 5, Informative) 69

by enriquevagu (#43987891) Attached to: Computer Memory Can Be Read With a Flash of Light

The link to the actual Nature Communications paper is here: Non-volatile memory based on the ferroelectric photovoltaic effect.

This somewhat resembles Phase-Change Memory (PCM). PCM devices are built from a material which, under a high current, undergoes thermal melting and switches between two states, amorphous and crystalline. This changes two properties: light reflectivity (exploited in CDs and DVDs) and electrical resistance (exploited in emerging non-volatile PCM memories). The paper cites PCM and other types of emerging non-volatile memories.

In this case it is the polarization that changes, without requiring thermal melting, thereby increasing the endurance of the device, one of the main shortcomings of PCM. The other main shortcoming of PCM is write speed, due to the slow thermal process; the paper claims something like 10 ns. If this can be manufactured at a large scale of integration and low cost, it will probably be a revolution in computer architecture.

Comment: Conclusion wrong (Score 4, Informative) 248

by enriquevagu (#43729401) Attached to: Major Advance Towards a Proof of the Twin Prime Conjecture

"the existence of any finite bound, no matter how large, means that that the gaps between consecutive numbers don't keep growing forever"

Actually, I disagree with the unfortunate wording of that sentence. The gaps between consecutive prime numbers are variable, and on average they DO keep growing forever. This is a widely known result: the density of prime numbers decreases as the numbers grow (by the prime number theorem, the average gap near n is about ln n). However, since the gap between consecutive primes is variable and does not follow a regular function (otherwise it would be very easy to calculate prime numbers), even with a very low density of primes we can still find pairs of consecutive primes with a gap of only 2.

The problem under study is not whether the gap between consecutive primes keeps growing forever (which is true only on average, over a long sequence of integers), but whether there are infinitely many pairs of primes with gap 2. The new result says that there exist infinitely many pairs of primes with a gap of 70 million or less. However, this does not imply that no consecutive pairs of primes with a gap greater than 70 million exist (in fact, they do).
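
A small self-contained Python sketch illustrating both points (the numeric limits are arbitrary): the average gap between consecutive primes grows with the numbers, yet gaps of exactly 2 keep appearing.

# The *average* gap between consecutive primes grows (roughly like ln n),
# while twin primes (gap exactly 2) keep showing up.
def primes_up_to(n: int) -> list[int]:
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:n + 1:p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, is_p in enumerate(sieve) if is_p]

for limit in (10**3, 10**4, 10**5, 10**6):
    ps = primes_up_to(limit)
    gaps = [b - a for a, b in zip(ps, ps[1:])]
    avg_gap = sum(gaps) / len(gaps)
    twins = sum(1 for g in gaps if g == 2)
    print(f"up to {limit}: average gap {avg_gap:.1f}, twin-prime pairs {twins}")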
