Comment Re:100k employees making 100k a day in email (Score 1) 215

(shrugs) Your IT is definitely stuck in the 2000s (i.e., 5+ years ago).

Cost per TB for bulk storage (raw storage, the hardware to hold it, plus the backup tapes / disks) is definitely more like $800-$1000 per TB these days, not $10k. The sweet spot for bulk storage right now is the 3TB 3.5" enterprise SATA drive at about $230 each. Add in the capacity lost to RAID plus server costs and you're at about $500/TB of actual storage.

Primary storage is still much more expensive at $1500-$2000 per TB. But primary storage uses SSDs (around $1/GB) or 15k SAS drives ($0.35-$0.50/GB), not the relatively inexpensive 3TB enterprise drives at $0.08/GB.
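The bulk-storage arithmetic above can be sketched roughly. This is a toy model using the per-drive figures from this comment; the 12-drive RAID-6 shelf and the ~$12k server cost are hypothetical assumptions, not quotes:

```python
# Rough bulk-storage cost model using the per-drive figures above.
# All numbers are ballpark estimates, not authoritative prices.

DRIVE_TB = 3.0          # 3TB enterprise SATA drive
DRIVE_COST = 230.0      # ~$230 each

def usable_cost_per_tb(drives, parity_drives, server_cost):
    """$/TB of usable capacity after RAID parity and server overhead."""
    usable_tb = (drives - parity_drives) * DRIVE_TB
    total_cost = drives * DRIVE_COST + server_cost
    return total_cost / usable_tb

# Hypothetical: 12-drive RAID-6 shelf (2 parity drives) behind a ~$12k server
print(round(usable_cost_per_tb(12, 2, 12000)))  # ~$492/TB, near the ~$500/TB figure
```

Change the drive count, RAID level, or server cost and the $/TB moves accordingly, but it stays in the hundreds, nowhere near $10k.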

Comment Re:About time (Score 1) 313

After all those years of the big sweaty one Nadella is just the breath of fresh air that MSFT needed!

I'll believe that once they spin off some divisions and simplify licensing costs for corporate users. And release all of their applications on Android + iOS + OS X.

This is just a retrenchment. Their game plan is still "lock-in lock-in lock-in", also known as "Embrace, Extend, Extinguish".

Comment Re:How about transfer rate and reliability? (Score 1) 215

In practice, SSDs have only 20-100x the IOPS of a similar number of spinning-platter drives. That's still a huge improvement, but not three orders of magnitude (1000x). The bigger advantage is that as more workers access the drive, latency doesn't dive off a cliff the way it does with spinning-platter drives; it degrades gracefully on SSDs.

SSDs are definitely edging 15k SAS drives out of the market. SSDs do everything that 15k SAS drives can do, with at least an order of magnitude more IOPS per drive, for only about 2-4x the cost of the 15k SAS drive. And putting a writeback SSD cache in front of a spinning-platter drive array is even more economical.
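The drive-count consequence of that IOPS gap is easy to sketch. The per-drive IOPS figures below are rough assumptions (a 15k SAS drive at ~200 random IOPS, a conservative enterprise SSD at 100x that), not measurements:

```python
import math

# Ballpark per-drive random IOPS -- assumptions, adjust for your hardware.
SAS_15K_IOPS = 200
SSD_IOPS = 20000  # conservative enterprise SSD, 100x the SAS drive

def drives_needed(target_iops, per_drive_iops):
    """Minimum number of drives to reach a target aggregate IOPS level."""
    return math.ceil(target_iops / per_drive_iops)

target = 40000  # hypothetical workload requirement
print(drives_needed(target, SAS_15K_IOPS))  # 200 SAS drives
print(drives_needed(target, SSD_IOPS))      # 2 SSDs
```

Even at 2-4x the per-drive cost, replacing hundreds of spindles with a handful of SSDs is a clear win once IOPS (rather than capacity) is the constraint.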

Comment Re:What about long-term data integrity? (Score 1) 438

A powered-down SSD that has been written once should be able to retain data for ~10 years or so. Longer if kept in a cool place.

Nope. Most MLC SSDs will lose their data in about a year and the TLC SSDs in about 6 months of being powered off. (Don't confuse older flash media which was probably SLC with newer MLC/TLC media. Or which had larger feature sizes.)

As the size of the feature that stores your bits shrinks, so does the archival lifetime before something bad happens to one or more of the bits. That holds true for everything from tape to hard drives to CDs to flash drives.

Comment Re:Empty article.. (Score 3, Interesting) 438

Also incorrect assertion that drives don't go faster than 7200 (there are 15k drives, just they are pointless for most with SSD caching strategies available).

With enterprise SSD prices hitting $1/GB (granted, some are still $2-3/GB), the days of 15k RPM drives are definitely numbered. You get 50-100x the IOPS out of SSDs compared to 15k RPM SAS drives, which means that for a given IOPS requirement, you can use far fewer drives by switching to SSDs.

I'd argue that if you are short-stroking your 15k SAS drives to get increased IOPS out of the array, it's past time to switch to enterprise SSDs.

Comment Re:How do WE fight this? (Score 1) 155

Using rdiff-backup, rsnapshot, or rsync across the LAN via SSH in a "pull" configuration is safest: the server pulls the files from the client PC. Alternatively, you could do the above in a push configuration and limit where the origin PC can write on the backup server. Even in a "push" configuration, I don't know of any malware currently capable of figuring out that there is an rdiff-backup script which stores data on a different server.

The server then sends files to tape / disk / offsite.

Basically - you need to have a centralized backup solution with multi-generation removable media.

For immediate restores, you pull the files back off the backup server. The next level after that is pulling files off of removable media which has been kept offsite or disconnected.
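As a sketch of the pull model, here is a minimal command builder for an rsync-over-SSH pull with hard-linked snapshots (roughly what rsnapshot does). The host names and paths are hypothetical placeholders:

```python
# "Pull" backup sketch: the backup server runs this and fetches files from
# the client over SSH, so the client holds no credentials for the backup
# server and malware on the client can't reach the snapshots.

def pull_backup_cmd(client_host, src, dest, link_dest=None):
    """Build an rsync argv that pulls src from client_host into dest."""
    cmd = ["rsync", "-a", "--delete", "-e", "ssh"]
    if link_dest:
        # hard-link unchanged files against the previous snapshot
        cmd.append("--link-dest=" + link_dest)
    cmd += ["%s:%s" % (client_host, src), dest]
    return cmd

cmd = pull_backup_cmd("pc1.example.com", "/home/",
                      "/backups/pc1/2014-06-21/",
                      link_dest="/backups/pc1/2014-06-20/")
print(" ".join(cmd))
# On the backup server you would hand cmd to subprocess.run(cmd, check=True),
# then rotate old snapshots and copy one generation to tape / offsite.
```

Because unchanged files are hard-linked against the previous snapshot, each generation costs only the changed data, which makes keeping many generations cheap.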

Comment Re:Microsoft Windows only (Score 1) 143

microsoft is one price and you get a server and tools and all the features

That's a good one, go ahead and pull my other leg while you're trying to spin that for Microsoft.

Microsoft licensing is a nightmare. Just look at the segments for the desktop operating system. Or try to figure out which version of MS Office you need and whether a volume license will save you money (and whether you'll be in compliance). The server side is no different, with different restrictions on the various editions of Windows Server, SQL Server, etc.

(They're still a babe in the woods compared to some other vendors like Oracle, but they're trying to catch up.)

Comment Re:Microsoft Windows only (Score 1) 143

That "security through obscurity" meme only really applies in cases of improper reliance on obscurity: once the secret is known, the system is insecure and anyone can access it.

Examples of this would be a hand-rolled encryption algorithm hidden in a black box, secret handshakes, or back doors that are left unlocked.

Comment Re:I will be changing to FreeBSD too (Score 1) 450

There are definitely going to be some teething pains, which is why I'm not rolling out anything production on RHEL7 until 7.2 or 7.3 comes out next year.

But I am looking forward to having one log file to dig through instead of two dozen or more, and being able to easily pull that to a centralized log server (pull is more secure than push). I'm also looking forward to not having to write monit / nagios scripts to restart services when other services restart.

Comment Re:OpenPGP (Score 1) 63

The problem with Perfect Forward Secrecy (PFS) in the case of GPG/PGP-encrypted messages is that PFS requires two-way communication between the endpoints at the start, to securely agree on an ephemeral key for that session.

That's not practical in the case of sending an encrypted email/file to someone. There is no "session" to speak of. There's no two-way conversation at the start before the file/information is transmitted.

GPG/PGP is designed to defend against disclosure of data at rest (i.e., an email body sitting on someone's server, or a file sitting on your hard drive). It just so happens that because it defends the data-at-rest scenario, it can also help protect the contents in transit. It's very good at what it does, but trying to use it in a situation where you want PFS is a misapplication of the technology.
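To make concrete why a "session" matters, here is a toy ephemeral Diffie-Hellman exchange, the kind of handshake PFS ciphersuites perform. It's an illustration only, with an unrealistically small modulus; real systems use large primes or elliptic curves:

```python
import secrets

# Toy parameters -- far too small for real use, illustration only.
P = 0xFFFFFFFB  # small prime modulus (2**32 - 5)
G = 5

def ephemeral_keypair():
    """Fresh per-session private value and its public counterpart."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Round trip 1: Alice -> Bob (her ephemeral public value)
a_priv, a_pub = ephemeral_keypair()
# Round trip 2: Bob -> Alice (his ephemeral public value)
b_priv, b_pub = ephemeral_keypair()

# Only after BOTH messages can either side derive the shared session key --
# which is exactly the exchange a store-and-forward email cannot perform.
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)
```

Since the private values are discarded after the session, a later compromise of either party's long-term key can't decrypt old traffic. A PGP message, by contrast, is encrypted once to a long-term public key, so there is no per-session secret to discard.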

(So yeah... the EFF folks are idiots and are lumping together apples and oranges.)

Comment Re:Still a second class citizen (Score 1) 214

In general, if a device supports 64GB microSD cards, it will work fine with larger cards as well.

The original SD spec was limited in size. SDHC came out in 2006 and allowed for card capacities of up to 32GB. Most devices made in 2013 or earlier are SDHC with a 32GB limit (such as my Thinkpad T61p laptop and my Asus TF700T tablet). That means putting a 64GB card into an SDHC-only slot is a bad idea (it will probably corrupt the data once it tries to write past the 32GB mark).

SDXC was introduced three years later in 2009, and allows for cards up to 2TB in size. A lot of times, the manufacturers will only certify up to the size that was available when the device was released. So larger cards may very well work, up to the limits of the spec.

Comment Re:I have just one word for you (Score 1) 217

A lot of Java boilerplate code (and not just getters/setters) can be eliminated with a bit of AspectJ (Spring Roo leverages this heavily). With good use of AspectJ, your Java objects look like POJOs (plain old Java objects), with all the extra machinery added at compile time by the .aj files.
