Comment Re:Thanks to (Score 1) 369

Ars Technica allows 30 minutes, I believe, and it doesn't seem to be abused. People who reply quote the bit they're replying to, so it's clear what they refer to anyway.

So how about a 30-minute editing window, plus a quick, one-button way to quote the parent post? Just to encourage people to include the original bits in their replies.

For added protection you could colour the edited text in, say, dark purple, just to make it clear to people what has been edited.

Comment Re:Great (Score 1) 89

Well, yes and no. You're limited to 100Mbit/s, which is of course a lot slower than gigabit Ethernet. But normally a scientific cluster (which is what I'm interested in) isn't really limited by bandwidth as much as by latency. Going through the USB subsystem for all packets is going to give you worse latency than dedicated hardware. But then, I also use a cheap switch that's probably not a speed demon at retransmitting packets either.

And the thing is, the Pi is a fairly slow computer. I suspect that, as a ratio of computing speed to transmission delays, the Pi cluster communicates about as effectively as a "real" cluster of server systems connected with high-end hardware. The CPU is even slower than the network, if you will.

Comment Re:Great (Score 1) 89

Any particular reason not to just do it in software, e.g. XenServer or VirtualBox? Virtual networking is kind of messy, but it leaves fewer cables around :)

VMs would work well, I agree. But this way I also get real(ish) network latency and delays in the same way a full-size system does. And an actual tiny cluster on my desk is a lot more fun :)

Comment Re:Great (Score 1) 89

It's really easy to set up. Take a few Pis, add a small switch (get one that takes 5V). Connect them up, and use a single larger power brick that can power all the Pis and the switch. Either make some kind of enclosure or, as I did, rack them up with spacers, drill holes in the switch lid and mount the rack of Pis to it.

One wrinkle is that you probably want to keep the switch for the internal network only. I use a USB-Ethernet dongle on the login node for external communication. It's just as fast as the on-board Ethernet in practice (which is internally treated as a USB device anyhow), and you can set up the login node to act as router and gateway for the other nodes.

Then you can install and play with whatever cluster-related software you like: Slurm, OpenMPI, Ansible, GNU Modules, XcalableMP, ZeroMQ and so on.

Comment Re:Great (Score 1) 89

It's fairly common in complex robotics to have a set of tiny MCUs like the AVR (that Arduino is based on) to control one or two joints, then a larger single-board computer to send commands to those units, and receive status updates about angles and speeds.

The Arduino and Raspberry Pi are well suited to those two roles.
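As a sketch of what that SBC-to-MCU link could look like, here's an encoder/parser for a hypothetical line-based command protocol. The "J&lt;joint&gt;:&lt;angle&gt;" format is invented for illustration, not any particular robot's wire protocol, though real systems often use similar ASCII framing over UART:

```python
def encode_command(joint, angle):
    """Build a command line for the MCU, e.g. 'J2:90.0' (hypothetical format)."""
    return f"J{joint}:{angle:.1f}\n"

def parse_status(line):
    """Parse a status report like 'J2:87.5:12.0' -> (joint, angle, speed)."""
    body = line.strip()
    if not body.startswith("J"):
        raise ValueError(f"malformed status line: {line!r}")
    joint, angle, speed = body[1:].split(":")
    return int(joint), float(angle), float(speed)

# The larger board sends commands and reads status updates back.
assert encode_command(2, 90) == "J2:90.0\n"
assert parse_status("J2:87.5:12.0\n") == (2, 87.5, 12.0)
```

On the Pi side you'd write these lines to the serial port (e.g. via pyserial) and read status lines back in a loop; the AVR side just does the matching split on ':'.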

Comment Great (Score 3, Interesting) 89

I just finished a small Raspberry Pi cluster, with two RPi 3 compute nodes and an RPi 2 front-end node. Not because it has such great computational capabilities - it doesn't - but because it's a low-cost way to get a "training system" that I can abuse without messing up anything on the real cluster I also use.

These new Pis would be even better; you could have a single backplane that the nodes slot into. Ideally you'd route both power and Ethernet through the backplane as well, but I don't know how feasible that'd be.

Comment Re:I Know Where The 22,000 Went! (Score 1) 474

Why is it society's responsibility to teach you job skills?

Because long-term unemployment is a societal burden, not just an individual one? And it's a missed economic opportunity for society as well as the individual?

It is a shared responsibility because mismatches between worker skills and opportunities are a shared economic burden.

Comment Re:Parallelization... (Score 1) 55

When you subdivide a problem, each core works on a smaller subset. If those subsets fit into a cache that the bigger problem didn't, you can easily get a superlinear speedup as a result. In many cases you could rewrite the bigger problem to be more cache-friendly and get a similar speedup, so you generally shouldn't make too much of such "extra" performance increases.
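A schematic Python sketch of the subdivision step itself (the cache effect is a hardware property and can't be demonstrated directly here; this only shows how a problem gets carved into per-core subsets):

```python
def split_problem(data, n_cores):
    """Divide a problem into near-equal per-core subsets.

    If each subset fits in a core's cache while the whole problem
    did not, the per-element work gets cheaper -- the source of the
    superlinear speedup discussed above.
    """
    chunk = max(1, (len(data) + n_cores - 1) // n_cores)  # ceil division
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

parts = split_problem(list(range(10)), 4)
assert parts == [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```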

Comment Re:Why do people think self driving cars will catc (Score 1) 622

never having to worry about being in a car accident and thusly never having to pay for collision insurance will save you a lot of money.

If you don't have, or can't get, the money to actually buy the thing in the first place, it doesn't matter if it saves you money in the long run. This is part of the reason why it tends to cost more to be poor than to be rich.

Comment Megacorps (Score 4, Insightful) 91

On the one hand, revitalizing city centers is not necessarily a bad thing. On the other, this starts to smell a little of Shadowrun-style megacorporations (or of industrial-era company towns).

Live and work your entire life within the protective confines of your employer. Go to the company school, work at the company office, live in company housing paid for with a company-bank supplied mortgage, dine at your choice of company restaurants, vacation at the company resort, get a company funeral...

Comment Re:Microsoft, do this: (Score 1) 288

And the end result is another Android phone, except with small compatibility issues and without the actual app store. You'd be left with the worst of both worlds: users would rather get a proper Android device with all the apps, and developers would rather develop for the billions of people using the Android ecosystem than bother rewriting and submitting their stuff to MS's own app-store variant.

Comment Re:They were so eager to see if they could... (Score 1) 86

Well, duh. But F77 is obsolete, and MPI is now only used for distributed systems (which are harder to write for than shared-memory systems). If you want to see something equally painful, write the same MPI code using a pre-ANSI C compiler, because that's the equivalent of what you're complaining about.

The modern way to do this is OpenMP and Fortran 95 (which is supported by all major compiler vendors) or Fortran 2003 (which is 90% supported by all major compiler vendors). Just tell the compiler what you want, and you have parallel code. No fuss, no muss.

I certainly love OpenMP, and I much prefer it over MPI. But big machines are all distributed memory, not shared memory. If your code is going to run on anything on the top100 list, for instance, or on any university cluster, then OpenMP is not enough. All the big systems I've worked with need a hybrid design, where you use OpenMP for within-node parallelism and MPI across the nodes.
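As a toy analogy of that two-level split, here is a Python sketch where an outer pool stands in for MPI ranks (one per node) and an inner pool stands in for OpenMP threads within a node. Real HPC code would of course use actual MPI (e.g. via mpi4py or C) plus OpenMP pragmas; this only illustrates the structure:

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(seq, n):
    """Split seq into at most n near-equal chunks."""
    step = max(1, -(-len(seq) // n))  # ceil division
    return [seq[i:i + step] for i in range(0, len(seq), step)]

def within_node(chunk, n_threads=2):
    """'OpenMP level': shared-memory threads work on one node's chunk."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return sum(pool.map(sum, chunked(chunk, n_threads)))

def hybrid_sum(data, n_nodes=2):
    """'MPI level': hand one chunk to each simulated node.

    Threads stand in for MPI ranks here so the sketch runs anywhere;
    a real cluster would run one MPI process per node instead.
    """
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return sum(pool.map(within_node, chunked(data, n_nodes)))

assert hybrid_sum(list(range(100))) == sum(range(100))
```

The point is the shape: the outer level partitions and communicates, the inner level shares memory, and the final reduction crosses both levels.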

With that said, there are some encouraging signs. Things like XcalableMP attempt to implement OpenMP-like semantics on top of MPI, and mostly succeed, although you still need to do explicit synchronization on occasion. The offloading infrastructure in the latest GCC versions could eventually abstract this away too, but again, we're not there yet.

Comment Re:FP (Score 5, Interesting) 152

For large-scale simulations you need them to be pseudo-random, as in repeatable. If you are running a parallel simulation across hundreds or thousands of nodes, you can't send random data to all the nodes; you'd spend all your time sending it, not getting anywhere with your simulation.

Instead, each node runs its own random generator, seeded with the same state. That way they all have the same random source, with no need to flood the interconnect with (literally) random data.
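In Python terms, the idea might look like this. The seed value is hypothetical, and the three "nodes" are simulated in one process; on a real cluster each node would construct its generator once at startup:

```python
import random

SHARED_SEED = 12345  # hypothetical seed agreed on by all nodes beforehand

def make_node_rng(seed):
    """Each node builds its own generator locally from the shared seed."""
    return random.Random(seed)

# Simulate three nodes, each with an independently constructed generator.
rngs = [make_node_rng(SHARED_SEED) for _ in range(3)]

# Every node draws the identical sequence -- no random data ever
# crosses the interconnect.
draws = [[rng.random() for _ in range(4)] for rng in rngs]
assert draws[0] == draws[1] == draws[2]
```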

Another reason is repeatability, of course. You often want to run the exact same simulation many times while developing and debugging your code.
