
Comment Re:Time flies (Score 1) 97

Unfortunately, this is not a simple fix, because some of the logic assumed that the tile size was a multiple of the movement speed. Breaking that assumption lets the enemies, for example, walk over a gap they were supposed to fall into. The player's movement speed is 4 pixels per frame, and changing the enemies' movement speed to 3 causes problems. I could make the enemy movement speed 2, and that would probably work. In case I don't get around to a proper fix: do you think the game would be better with the enemies moving at half the player's speed, or left at the same speed as the player?
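
For what it's worth, here's a minimal sketch of why a speed that doesn't divide the tile size breaks per-tile checks. The 16-pixel tile size and the boundary-aligned fall check are my assumptions, not the actual game code:

```python
TILE = 16  # assumed tile size in pixels (not from the actual game)

def boundaries_landed_on(start_x, speed, frames):
    """Tile boundaries the walker lands on exactly -- the positions
    where a boundary-aligned check (like falling into a gap) can run."""
    hits = []
    x = start_x
    for _ in range(frames):
        x += speed
        if x % TILE == 0:
            hits.append(x // TILE)
    return hits

print(boundaries_landed_on(0, 4, 16))  # [1, 2, 3, 4] -- every boundary checked
print(boundaries_landed_on(0, 3, 16))  # [3] -- most boundaries skipped
print(boundaries_landed_on(0, 2, 32))  # [1, 2, 3, 4] -- 2 divides 16, so it's safe
```

With speed 3 the walker only lands exactly on a boundary once every 48 pixels, so any per-tile check (like "should I fall here?") silently never fires on the skipped tiles.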

Comment Re:Classic (Score 1) 133

There's no point in saying what he's sorry for, because he can only be sorry for one of two things: 1) deceiving everyone about being the creator; 2) not being willing to expose himself as the creator. Claiming to be sorry for #2 (which I think he pretty clearly did) won't make much difference to people who want to believe #1 anyway - and if it did, it would kind of defeat the purpose of keeping the creator anonymous. He could outright apologize for #1, but if he is in fact the creator, then he'd be lying. There's a bit of an information paradox buried in here somewhere.

Comment Re:Uh, just pay extra (Score 1) 644

It's not about altruism - they're talking about benefits for themselves just as much as for everyone else: we can all do better. Some things can't be done alone nearly as well as they can be by a collective, which is why we have government in the first place. Sure, one could say that simpler collectives like religious organizations, corporations, and charities can accomplish collective goals, but even they operate at levels too small and single-minded to accomplish goals as large as an interstate highway system or an international space station as well as government does.

These people want better roads, and I think they see many areas in which our whole country is lacking and wonder why we are so hesitant to do something about it, considering how easily it appears we could afford it. I think people are often stuck in their ways not because they really don't want anything else, but because they don't realize the consequences of their choices and what really matters. Have you heard the stories of areas that discovered that housing the homeless ends up costing less than leaving them homeless? It takes someone with a vision to suggest such a counter-intuitive improvement. People who measure *everything* in dollars are missing a lot.

And now some people are speaking up, pointing out: hey, we can all live in a better world. Don't you want to try? It may mean fewer dollars in individual pockets, but I think they're proposing that the benefits outweigh the cost for *everyone* affected - can we agree to find a better balance here? Just think about it... too many people don't consider what really matters, or realize the full impact of their capitalist upbringing. As the late, great Paul Wellstone said, "We all do better when we all do better."

Of course the non-wealthy would be proponents of raising taxes on the wealthy, but when wealthy people themselves are saying the same thing, it's really time to ask whether the weight of consensus shouldn't shift in favor of higher taxes on the wealthy, even if there are hold-outs... it seems like it's time to at least talk about it.

Comment Re:I don't understand this (Score 1) 96

I believe the problem is not that the keys used by the Bitcoin infrastructure are too short, but rather that there isn't enough variation in brain wallet passwords, and that it's too cheap to convert a brain wallet password into a Bitcoin public key to check for a match. The fact that randomly generated keys are not susceptible to this attack the way brain wallet passwords are suggests it's not the infrastructure at fault.
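
As a rough illustration, here's a minimal sketch assuming the classic brain-wallet scheme, where the private key is just an unsalted SHA-256 of the passphrase; the guess list is obviously a stand-in:

```python
import hashlib

def brainwallet_private_key(passphrase: str) -> bytes:
    # Classic brain wallets derive the ECDSA private key directly from a
    # SHA-256 hash of the passphrase -- no salt, no key stretching.
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

# An attacker doesn't need to break secp256k1; they just hash a dictionary
# of likely passphrases and compare the derived addresses against the
# blockchain. (Getting from private key to address additionally takes an
# EC point multiply plus RIPEMD-160/Base58 steps, e.g. via a third-party
# library like `ecdsa` -- omitted here.)
guesses = ["correct horse battery staple", "password", "to be or not to be"]
for guess in guesses:
    key = brainwallet_private_key(guess)
    print(guess, "->", key.hex()[:16], "...")
```

A randomly generated 256-bit key has no such shortcut: there's no human-sized dictionary to enumerate, which is exactly the difference the parent is pointing at.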

Comment Still depends on user trusting installer (Score 2) 162

This doesn't seem like a very big vulnerability, because it still requires the user to explicitly trust an installer to install executable code. Whether that code is a standalone executable or a DLL that gets loaded into another application, once you've installed malicious software, you're screwed.

Comment Re:Give a raise to overworked programmers (Score 1) 241

Only if you specify the domain as humans. There are far too many insects for that to be true if your domain is multi-cellular animal life. The problem here is that people seem to be forgetting that different domains have different averages. To clarify, I think the intended statement was, "many employed *people* make more than your average employed *programmer*." I don't know if that's true. I'm a programmer, and I certainly think I make more than the average American employee. But I work for an international company, and I'm reasonably certain that some of our programmers in other geographies make much less. Then again, technically I may not be considered a programmer any more, seeing as I'm writing designs *for* the programmers. So maybe the statement is accurate.

Comment Wrong questions. More details needed. (Score 5, Informative) 219

You're not asking the right questions:

The first question to ask is why on earth someone would need to access half a petabyte. In most cases the commonly accessed data is less than 1% of the total; that's the amount that realistically needs to reside on disk, and on a dataset this large it's rarely more than 10%. Everything else is better placed on tape. Tiered storage is the answer to the first question: you have RAM, solid-state/flash storage (PCIe-based), fast disks, slow high-capacity disks, and tape. Choose your tiering wisely.
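
As a toy illustration of what a tiering policy decides (a sketch only; the 30/180-day thresholds are assumptions, and real tiering is done by HSM software, not a script):

```python
import os
import time

# Hypothetical age thresholds for demoting data between tiers.
HOT_DAYS, WARM_DAYS = 30, 180

def pick_tier(path: str, now: float = None) -> str:
    """Classify a file by last access time: fast disk, capacity disk, or tape."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days < HOT_DAYS:
        return "fast-disk"
    if age_days < WARM_DAYS:
        return "capacity-disk"
    return "tape"

for name in os.listdir("."):
    if os.path.isfile(name):
        print(f"{name}: {pick_tier(name)}")
```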

The second question to ask is how the customer needs to access that large datastore. In most cases you need serious metadata alongside the data. For petabytes of data you can't, in most cases, just use a clever tree structure; you need a web site or an app to search the metadata and fetch the required "blob". For such an app you need a large database, since at 100-200MB per blob, half a petabyte works out to roughly 2.5-5 million objects with searchable metadata.
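
At its simplest, the metadata index looks something like this (a sketch using SQLite for brevity; the schema fields are assumptions, and at millions of rows you'd want PostgreSQL or similar behind the portal):

```python
import sqlite3

# Minimal metadata index: the app searches this, then fetches the blob
# from the storage tier by its recorded path.
con = sqlite3.connect("blob_index.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS blobs (
        id      INTEGER PRIMARY KEY,
        path    TEXT NOT NULL,   -- location on the storage tier
        size    INTEGER,         -- bytes
        owner   TEXT,
        project TEXT,
        tags    TEXT             -- free-form searchable keywords
    )
""")
con.execute("CREATE INDEX IF NOT EXISTS idx_proj_tags ON blobs (project, tags)")

con.execute(
    "INSERT INTO blobs (path, size, owner, project, tags) VALUES (?, ?, ?, ?, ?)",
    ("/tier1/run042/sample.dat", 150_000_000, "alice", "genomics", "run42 raw"),
)
con.commit()

# The front-end answers queries like this instead of users walking a tree:
for row in con.execute(
    "SELECT path, size FROM blobs WHERE project = ? AND tags LIKE ?",
    ("genomics", "%run42%"),
):
    print(row)
```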

The third question is why a SAN is a premise at all. Do you want to run a clustered filesystem across 5-10 nodes? If so, Isilon or Oracle ZS3-2/ZS4-4 are probably your answer.

Fourth question: what are the requirements? (How many simultaneous clients? IOPS? Bandwidth? ACL support? Auditing? AD integration? Performance tuning?)

Fifth question: what availability do you actually need? There is no such thing as 100% availability; the word "disaster" in Disaster Recovery is there for a reason. Set reasonable SLA expectations: going for five-nines availability will triple the cost of the project. Keep in mind that synchronous replication is distance-limited; typically, for a small performance cost, the radius is about 150 miles, and anything beyond that hurts a lot.
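
For a sense of what those SLA numbers mean in practice, here's the standard downtime arithmetic (nothing vendor-specific, just the uptime-percentage calculation):

```python
# Allowed downtime per year for a given number of nines of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines in range(2, 6):
    availability = 1 - 10 ** -nines
    downtime_s = SECONDS_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} ({nines} nines): "
          f"{downtime_s / 60:.1f} minutes/year")
# Five nines is ~5.3 minutes of downtime a year, including maintenance --
# which is why it roughly triples the cost versus a more relaxed SLA.
```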

Even if you solve the problems above, sharing the result via NFS/CIFS (or anything else) will get you into trouble. CIFS was not realistically designed for clustered operation, regardless of the distributed FS underneath the CIFS server, so you get locking issues. Windows Explorer is a good example: it creates Thumbs.db files and leaves them open, and then you can't delete the folder unless you happen to hit the same node that was serving you when the Thumbs.db file was created. The POSIX lock does get transferred to the other server and blocks your delete, but when Windows Explorer asks that node who holds the lock, you're stuck, because that server doesn't know - POSIX locks are different from Windows locks. This affects all Likewise-based products from EMC (VNX filer, Isilon, etc.), and it also affects the CIFS product from NetApp. I'm not sure about Samba CTDB, though.
I would design the storage around ZFS for the main tiers, exported via NFSv4 to the front-end nodes, with QFS on top of the whole thing to push rarely accessed data to tape. The front-end nodes would be accessed via WebDAV by a portal through which you can also query the metadata, with a serious database behind it.

I've installed Isilon storage for 6,000 XenDesktop clients that all log on at 9 AM, and I've worked on an SL8500, Exadata, and various NetApp and Sun storage systems, and I can tell you that you need to do a study. Run simulations with commodity hardware on smaller datasets to figure out the performance requirements and the optimal access method (NAS, web, etc.). Extrapolate the numbers, double them, and ask for PoCs and demos from vendors, be it IBM, EMC, Oracle, NetApp, or HP. Make sure that in the future, when you need 2PB, you can expand affordably. Take care here: vendors like IBM tend to offer the least upgradable solution. They will demo something that holds 0.6PB in its maximum configuration, and if you need to go larger you'll need a brand new solution from another vendor.

It's not worth doing it yourself, since it will be time-consuming (at least 500 man-hours to reach production) and will take at least one full-time employee for the storage. But if you must, look at Nexenta and the hardware they recommend.

And remember to test DR failover scenarios.

Good luck!

Data Storage

Ask Slashdot: How Do You Store a Half-Petabyte of Data? (And Back It Up?) 219

An anonymous reader writes: My workplace has recently had two internal groups step forward with requests for almost half a petabyte of disk to store data. The first is a research project that will computationally analyze a quarter-petabyte of data in 100-200MB blobs. The second is looking to archive an ever-increasing amount of mixed media. Buying a SAN large enough for these tasks is easy, but how do you present it back to the clients? And how do you back it up? Both projects have expressed a preference for a single human-navigable directory tree. The solution should involve clustered servers providing the connectivity between storage and clients so that there is no system downtime. Many SAN solutions have a maximum volume limit of only 16TB, which means some sort of volume concatenation or spanning would be required - but is that recommended? Is anyone out there managing gigantic storage needs like this? How did you do it? What worked, what failed, and what would you do differently?
