Comment Re:You're already making more progress... (Score 1) 444

Hm, my bank never took a cut from my debit card. The credit cards I have don't either, unless I don't pay the bills the same month I made the purchases. Where do banks do this?

Here in Sweden, there have actually been plans (I don't know their current status) to add a fee to cash withdrawals. The aim is to reduce cash circulation and thereby deter robberies (no cash in stores means not much to rob).

Comment Re:It's worse than that. Very flaky players (Score 1) 642

Whether or not Bitcoins are a good idea, the market ecosystem behind them is far too flaky.

Yes, this is correct. However, Bitcoin is still in its infancy; nobody could reasonably expect it to have a well-established "market ecosystem" this early on. This is an argument that Bitcoin is not fit for major use at this time, not that it never will be.

Comment Re:Gambling... (Score 1) 168

As the number of hands played goes to infinity, the player who makes the best decisions with regard to his odds of winning, the size of the pot, and so on is going to win. For cash games, it is commonly said that one needs to play on the order of 100k hands (which isn't that much in online poker) to know whether one's strategy and skill are good enough to win.

Sure, if you play a few tournaments in a lifetime, luck will be a significant factor. If you play every tournament of the WSOP, WPT, and EPT every year, skill is likely to determine your results.
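The variance argument above can be illustrated with a quick simulation. This is a hypothetical sketch: the per-hand edge (+0.05 big blinds) and per-hand standard deviation (10 big blinds) are invented figures, not from the comment, but they show how a small edge only emerges from the noise over many hands.

```python
import random

def simulate(hands, edge=0.05, stddev=10.0, seed=1):
    """Sum of per-hand results for a player with a small positive edge.

    Each hand is modeled (crudely) as a Gaussian outcome in big blinds.
    """
    rng = random.Random(seed)
    return sum(rng.gauss(edge, stddev) for _ in range(hands))

for n in (1_000, 10_000, 100_000):
    total = simulate(n)
    # Over few hands the sign of the total is dominated by noise;
    # over ~100k hands the per-hand average approaches the true edge.
    print(f"{n:>7} hands: total {total:+10.1f} bb, {total / n:+.3f} bb/hand")
```

The standard error of the per-hand average shrinks as 1/sqrt(hands), which is why the "on the order of 100k hands" rule of thumb exists.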

Comment Re:Submitter here (Score 1) 264

Hi,

As I already said, no, we will not be running X on all the nodes. One of them will run it, with a few cores reserved for the purpose (this is how we do it today, and we have no significant issues with the arrangement). But as I said, we may very well decide to use one of the boxes from the old cluster for this task; that seems like a good idea. Still, I think we'd want the same distro everywhere to make administration a bit easier.

One of our most-used pieces of software is practically closed source (to get access to the source you have to pass a series of security screenings and justify why you really REALLY need it; it's almost impossible for non-US citizens). This software does not implement GPGPU functionality at this point, and whether or when it will be supported is unknown to us. Therefore, getting GPUs now would probably not be the best idea. It has been discussed, and we might look at getting boxes that can be upgraded with GPUs in the future.

As for being an administrator, I volunteered, so I'm not "unsuspecting" :) I have my reasons for wanting to do it, which I won't go into here. But it will not count against the time I have for research; that is already reserved specifically for research. At least three of us will be sharing the load, so I don't really see a problem with it.

Yes, we have looked at the other clusters around here. The problem with them is that we have to pay per unit of processing time, per specific project. Those will be used for larger jobs that our local cluster cannot handle; the local one will mainly be used for developing software to run on the larger ones, as well as for running smaller simulations and verifying algorithms. This is very useful to us, since it would be a pain to plan small jobs like that ahead of time. And we do need more processing power than we have today; even the smaller jobs of the type we run require some power.
Our current setup works fine for this and has been of great use, but it is getting old and is beginning to be a limitation. So now that we're considering upgrading, we wanted to do some research first, hence the question.

Thanks very much for your advice; I will keep your comments in mind. Maybe we can find someone else who wants to run a cluster for a similar purpose, though we really do like having one reserved just for us. This is the general opinion in the department, and it would probably be hard to convince everyone to do it any other way. Any additional comments are of course welcome!

Comment Submitter here (Score 1) 264

Hello,

Thank you all for the informative replies, this will help us in deciding what to use.

It seems that Red Hat or a variant thereof is what most of you agree is good, so we will probably go with one of those, especially since that is what we have used in the past.

The reason for having X is that we work in X; some of the software we use needs it for purposes such as plotting. It will only be used on one node. Since this will be a small cluster (probably 4 boxes with 32 cores each), we do not intend to build a separate box for running X. We might use one of the old boxes for X, but I think we would still want the same distro on all of them for simplicity. (Oh, and to those who asked: these will be rack-mounted, not used as desktops.)

Answer to another question that came up: This is for use at a university, we will be using it mainly for (nuclear physics) simulations/calculations based on Monte Carlo methods.
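For readers unfamiliar with Monte Carlo methods, here is a toy illustration (estimating pi by random sampling). It is a stand-in chosen for brevity, not one of the actual physics codes involved, but it shows the pattern: many independent random samples whose average converges to the quantity of interest.

```python
import random

def estimate_pi(samples, seed=42):
    """Estimate pi by sampling points in the unit square and counting
    how many fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # converges toward pi as samples grow
```

Because each sample is independent, workloads like this are embarrassingly parallel, which is exactly why they map well onto a cluster of many cores.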

Again, many thanks!

Submission + - Best Linux dist for computational cluster? (engadget.com)

DrKnark writes: I am not an IT professional, but even so I am one of the more knowledgeable people in such matters at my department. We are now planning to build a new cluster (smallish, ~128 cores).

The old cluster (built before my time) used Red Hat's Fedora, which is also used in the larger centralized clusters around here. As such, most people here have some experience with it.

My question is: are there better choices? Why are they better? And what would be recommended if we need it to be fairly user-friendly?

It has to support X, since we use it remotely from our Windows (yeah, yeah, I know) workstations.

Comment Peak or average power? (Score 1) 475

It is unclear to me from the article and the summary whether this is peak or average power. Does anyone have a quote on this?

Either way, I have always been interested in what this type of technology can actually achieve. It is definitely an interesting project, and I will be watching for the final verdict on it.

Comment Re:On getting rid of old hardware... (Score 1) 585

This is true. I know companies that still rely on old DOS boxes and ISA cards for certain critical tasks. Sometimes they simply consider it too expensive to redevelop the software from scratch (and the source code may have disappeared, making it hard to figure out exactly what the old program does). And of course, those applications don't need a lot of CPU cycles.

Comment Re:Another isolated incident? (Score 2) 436

I will try to give a somewhat educated answer to your question.

First on the note of "no way to stop it":

The Wikipedia article does not mention which of the control rod mechanisms could have failed; there are two. The SCRAM (emergency shutdown) system uses stored pressurized gas to effectively "blow" the control rods fully in within seconds. The system used during normal operation is of a different kind (whether electrical or hydraulic, I'm not intimate with those details) and takes on the order of 5 minutes for a complete insertion.

Secondly, this was a light water reactor, meaning that as soon as the water level sank below the fuel rods (or even as water density decreased due to depressurization), the reaction would have halted. This is because of the water's moderating effect, which slows neutrons to the energies needed to sustain the chain reaction in an LWR. In other words, a passive shutdown.

What could have happened:

The reactor had a system for reinjecting lost coolant into the core. If this system had operated correctly, a meltdown could probably have been avoided (again, I am not intimate with the details or effectiveness of such systems). The Wikipedia article mentions that issues were found with this system following the incident. So, assuming the system failed:

A meltdown would have occurred; its scale would have depended on the functionality of the other emergency cooling systems. What the consequences would have been is hard to say. The Wikipedia article mentions that issues were also found with the emergency diesel generators. As we all know by now, failure of that system is a big reason why Fukushima turned out the way it did.

So yes, this could have been a Fukushima-type event. It would have required several safety systems to fail, but given the flaws found, the risk cannot be discounted. With only the limited information in the article, though, I cannot say more, or even whether my assessment is correct.

I am not an expert in reactor safety, I work mainly on the theoretical side of things. I hope this sheds some light on your question.

Comment Re:But.... (Score 1) 405

Well, that wouldn't be enough. They probably couldn't filter out someone with just a couple of plants anyway. But if you have, say, 3 kW of grow lamps switching on and off at the same second every day, then maybe, when added to other suspicious activity, it could warrant some form of action. I've been told this is one way police look for marijuana farms here in Sweden; whether there's any truth to that I do not know.
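As a purely hypothetical sketch of what such filtering might look like (the function name, thresholds, and data layout are all invented for illustration), a utility could flag a meter where a fixed ~3 kW load steps on in the same time slot every day:

```python
def daily_step_suspicious(readings_kw, samples_per_day, step_kw=3.0, tol=0.5):
    """Flag a meter whose power jumps by roughly step_kw at the same
    time slot every day. readings_kw holds one sample per interval,
    covering at least two whole days. Invented heuristic, for illustration.
    """
    days = len(readings_kw) // samples_per_day
    if days < 2:
        return False
    step_slots = []
    for d in range(days):
        day = readings_kw[d * samples_per_day:(d + 1) * samples_per_day]
        # find slots where consumption jumps by approximately step_kw
        jumps = [i for i in range(1, len(day))
                 if abs((day[i] - day[i - 1]) - step_kw) < tol]
        if not jumps:
            return False  # a day without the step: not a daily pattern
        step_slots.append(jumps[0])
    # suspicious only if the jump lands in (nearly) the same slot each day
    return max(step_slots) - min(step_slots) <= 1

# Example: 48 half-hour samples/day, 0.5 kW baseline, lamps on at slot 40.
day = [0.5] * 40 + [3.5] * 8
print(daily_step_suspicious(day * 3, samples_per_day=48))
```

A household appliance also draws kilowatts, of course, which is why the comment suggests such a signal would only matter alongside other evidence.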
