
Comment Re:"So who needs native code now?" (Score 4, Informative) 289

deallocation can (and often is) made O(1) using memory pools in C and C++ programs, something that can't be done in GCd languages

I believe current Java (not JavaScript!) virtual machines do exactly this: they perform escape analysis and free a complete block of objects in a single step. This works out of the box; there is no need for memory pools or any other special constructs.
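
To make the parent's point concrete, here is a minimal sketch of such a pool in C++ (the Arena class and all its names are mine, not from any real library, and it only suits trivially-destructible data):

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // Minimal bump-pointer arena: each allocation is a pointer increment,
    // and "deallocating" every object at once is a single offset reset, O(1).
    class Arena {
    public:
        explicit Arena(std::size_t capacity)
            : buf_(static_cast<char*>(std::malloc(capacity))),
              cap_(capacity), used_(0) {
            if (!buf_) throw std::bad_alloc();
        }
        ~Arena() { std::free(buf_); }

        void* allocate(std::size_t n) {
            // Round up so every allocation stays suitably aligned.
            n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
            if (used_ + n > cap_) throw std::bad_alloc();
            void* p = buf_ + used_;
            used_ += n;
            return p;
        }

        void release_all() { used_ = 0; }  // frees everything in one step

    private:
        char* buf_;
        std::size_t cap_, used_;
    };

    int main() {
        Arena arena(1 << 20);  // 1 MiB pool
        for (int i = 0; i < 1000; ++i)
            arena.allocate(64);  // cheap per-object allocation
        arena.release_all();     // O(1) bulk deallocation
    }

Per-object allocation is just a pointer bump, and release_all() discards everything the pool handed out in one step, which is the O(1) bulk deallocation the quote refers to.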

Comment Re:Why does UEFI matter? (Score 1) 211

I did not intend to say that motherboards should understand all file systems. But they should understand at least one non-proprietary, non-patent-encumbered one. I mentioned ext2 because it is a relatively simple one; I am afraid you confused ext2 with the current, more complicated ext4. But the actual file system is not important, except that it should be patent-free.

Comment Re:Why does UEFI matter? (Score 1) 211

as long as a USB drive is fat32

Does this mean that I am paying Microsoft if I buy a UEFI motherboard? AFAIK they still extort money for their FAT file systems. Why did somebody choose FAT? If I am clever enough to dual boot, then I am also clever enough to format a drive with the completely free ext2.

Comment Bacula (Score 1) 321

It might be overkill, but the open source backup software Bacula has a verify task, which you can schedule to run regularly. It can compare the contents of files to their saved state in backup volumes, or it can compare the MD5 or SHA1 hashes that were saved in the previous run. I assume other backup software has similar features.
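
For the hash-based variant, the core idea is simple enough to sketch (this is not Bacula's actual code; it assumes OpenSSL's SHA1 functions, links with -lcrypto, and the file path and recorded hash are made up for illustration):

    #include <openssl/sha.h>

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // Compute the SHA1 hash of a file, reading it in chunks.
    // Returns the hash as a hex string, or an empty string on error.
    static std::string sha1_of_file(const char* path) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return "";
        SHA_CTX ctx;
        SHA1_Init(&ctx);
        unsigned char buf[1 << 16];
        std::size_t n;
        while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
            SHA1_Update(&ctx, buf, n);
        std::fclose(f);
        unsigned char md[SHA_DIGEST_LENGTH];
        SHA1_Final(md, &ctx);
        char hex[2 * SHA_DIGEST_LENGTH + 1];
        for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
            std::snprintf(hex + 2 * i, 3, "%02x", static_cast<unsigned>(md[i]));
        return std::string(hex, 2 * SHA_DIGEST_LENGTH);
    }

    int main() {
        // Hypothetical: compare a file's current hash against the hash
        // recorded during the previous backup run.
        std::string current  = sha1_of_file("/data/important.db");
        std::string recorded = "da39a3ee5e6b4b0d3255bfef95601890afd80709"; // from catalog
        std::printf("%s\n", current == recorded ? "OK: unchanged"
                                                : "WARNING: file changed or corrupted");
    }

A verify job essentially does this for every file in its catalog and reports the mismatches.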

Comment Re: maybe (Score 1) 267

My point is that if we have no OFFLINE backup, then a physical or network attack can destroy both our live data and our online backups at the same time. If I were an attacker who really wanted to destroy a firm, I would first target their backup system. If I could delete all backups immediately, that would be best; if not, I would slowly poison their data so that their backups become useless. Only after that would I destroy the live data. Therefore it is not enough to have one offline backup; you must have several, recorded at different times.

We do use replication, and we have standby servers. Those are useful for high availability. But that is not backup.

We also used offline disks for backups, but I find that inconvenient, and the backup software we use supports tapes much better than disks. I also do not trust disks for long-term storage; see my other comment about this.

So far we have been the subject of targeted hacking attempts a few times every year, and they become more sophisticated as time goes on. I am quite happy here; I want to keep my workplace safe.

Comment Re: maybe (Score 1) 267

Some anecdotal evidence: five years ago, 2TB was the highest drive capacity available. I bought three 1.5TB drives, so not the biggest, but close to it. They were the cheapest drives from my usual manufacturer. One failed almost immediately. The replacement drive also failed within a year (a different model, but the same manufacturer). A third drive is still working, but after a power loss a year ago quite a few bad sectors appeared on it and some data was lost. All in all, only one of the four had no issues within five years.

On the other hand, I have had no problems over the years with many other drives from the same manufacturer, which were also SATA drives but medium-capacity, somewhat more expensive models with a longer warranty.

For me the lesson is that I should not buy the highest-capacity drives of their generation, because the technology may not be mature enough at that point.

Comment Re:maybe (Score 1) 267

Highspeed GPU accelerated and hardware accelerated compressors exist for cold storage systems

It is funny that, on the one hand, you (or, who knows, maybe another anonymous coward) use the cheapest consumer HDD prices you could find at the cheapest places in your examples, while on the other hand you keep assuming exotic or not-yet-existing future hardware when you talk about features.

Comment Re:No shit Sherlock (Score 1) 267

And it's not like you go from needing nothing to needing 60TB in a week

The required backup capacity depends on how quickly your data changes. If you have a quickly changing 6 TB, then a reasonable backup scheme will need about 60-120 TB of capacity within a year. In our case, reasonable means daily backups preserved for one week and monthly backups preserved for one year. (And yearly backups preserved forever, but I do not count those.) It did happen to us that a coding error rendered important data useless, in such a tricky way that we did not notice it for a year.
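
The arithmetic, as a tiny sketch (the numbers are the ones from this comment; the halving for compression and incrementals is my own rough assumption):

    #include <cstdio>

    int main() {
        const double dataset_tb = 6.0;  // quickly changing data, as above
        const int dailies   = 7;        // daily backups kept for one week
        const int monthlies = 12;       // monthly backups kept for one year

        // Worst case: every retained copy is a full backup of changed data.
        double full = (dailies + monthlies) * dataset_tb;
        // Rough best case: about half, if compression/incrementals help.
        std::printf("about %.0f-%.0f TB of backup capacity per year\n",
                    full / 2, full);    // prints: about 57-114 TB ...
    }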

If you have the ability to cheaply and conveniently back up a large amount of data, then you start to back up things that would not even have occurred to you previously. For example, it would have been useful to have HTTP logs from several months earlier when we talked to the police about an attack on our system. And in my - admittedly limited - experience, tape is the medium that is both cheap and convenient.

Comment Re:Never underestimate the bandwidth (Score 4, Informative) 267

Yes, they are surprisingly fast. The maximum speed of a current Tandberg LTO-6 drive is 160 MB/s if the data is incompressible. With typically compressible data it can reach about 320 MB/s (officially 400).

These drives can even be too fast. The drives do speed matching, but they have a minimum speed; below that they start shoe-shining. One reason I chose an older-generation LTO-3 tape drive instead of the current generation is that I cannot easily feed an LTO-6 at its minimum speed of 60 MB/s. With compression, that means about 120 MB/s of host data, which saturates a 1Gb network.
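
The back-of-the-envelope version (the 2:1 ratio is the usual LTO marketing assumption, not a measurement):

    #include <cstdio>

    int main() {
        const double min_native_mbps = 60.0;   // LTO-6 minimum tape speed, MB/s
        const double compression     = 2.0;    // assumed 2:1 compressible data
        const double gbe_mbps        = 125.0;  // 1 Gb/s Ethernet = ~125 MB/s

        // The host must supply this much data to keep the drive streaming:
        double needed = min_native_mbps * compression;  // 120 MB/s
        std::printf("need %.0f MB/s; 1GbE tops out at %.0f MB/s -> %s\n",
                    needed, gbe_mbps,
                    needed >= gbe_mbps * 0.9 ? "network is the bottleneck"
                                             : "network is fine");
    }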

Comment Re:...and (Score 1) 182

Capacitors make it possible to cache already-written-and-synced data on the drive. For example, suppose you write many updates to a single file, as in the case of the MySQL replication status file. If you cannot afford to lose even a few writes, then you must flush every one of these writes to the disk platter / flash memory. This of course really slows things down, and quite unnecessarily, because everything that is written out will be overwritten within a few milliseconds.

If you have capacitors on the drive, these small writes never reach the flash memory (except on system shutdown), because the drive can safely store them in memory. If there is a power loss or some other problem, the capacitors provide enough energy to write the contents of the cache out to the flash memory.

Capacitors are the smaller equivalents of the battery backup units in RAID controllers.
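
A crude way to see the cost described above is to time many small synced writes to one file (a POSIX-only sketch; the path and record contents are made up). On a drive without power-loss protection every fsync has to reach the medium; a capacitor-backed drive can safely acknowledge from its RAM cache:

    #include <fcntl.h>
    #include <unistd.h>

    #include <chrono>
    #include <cstdio>

    int main() {
        // Hypothetical stand-in for something like MySQL's replication status file:
        int fd = open("/tmp/status.info", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;

        const int updates = 1000;
        char rec[64] = "binlog.000042:123456\n";  // fake replication position

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < updates; ++i) {
            pwrite(fd, rec, sizeof rec, 0);  // rewrite the same small record
            fsync(fd);                       // force it to stable storage
        }
        auto t1 = std::chrono::steady_clock::now();
        close(fd);

        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        // Without a nonvolatile write cache, each fsync waits on the medium;
        // with capacitors the drive can acknowledge from RAM, so this runs fast.
        std::printf("%d synced writes took %.1f ms (%.2f ms each)\n",
                    updates, ms, ms / updates);
    }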

Comment Re:...and (Score 1) 182

I also considered the 840 Pro, because I assumed that "Pro" meant it has capacitors, but no, it does not. Absolute performance alone is misleading; it must be considered together with reliability and consistency of performance. These three often represent trade-offs. It is easy to create a drive that is very good at random IO: put a large write cache on the drive without capacitors, and lie to the OS about sync. Manufacturers have done this before, and maybe they still do; they do not talk about the internals of their drives, and I do not trust them. I ended up with an Intel DC S3500, despite not being a fan of Intel. It is a server drive, so I hope it does not lie to the OS, and the price is not much higher, if at all. It is not optimized for the "desktop", but it has consistent performance. I have not even checked absolute performance, but I am sure it will be fast enough for me (because a hard disk was also fast enough).
