
Comment Tape (Score 1) 141

OK, I've gone and looked into the current situation with tape. All prices are in AUD from vendors in Australia; 1 AUD = 1.06 USD at the moment, so they're close enough to the same.

LTO-5 drives are now AU$2500 to AU$3000 + SAS HBA, a good year or two after I built that backup server. Most of our data set is in formats that are already efficiently compressed (JPEG, LZO, PNG, deflate/zlib, etc.), so there's no significant compression gain; tapes can be presumed to hold 1.5TB. The weekly hot set is over 1TB and the archives are over 5TB. We can probably pare the weekly set down enough to fit it on a single LTO-5 along with the differentials, but we don't have tons of headroom. Library/loader units are AU$5k (sans tape drive) and up. Tapes are AU$80 or so, and shipping renders direct import no cheaper except in vast quantities.
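The "no compression gain" point is easy to demonstrate: running deflate over data that's already compressed achieves essentially nothing, which is why the tapes should be costed at native rather than "compressed" capacity. A quick sketch, with random bytes standing in for JPEG/PNG payloads:

```python
import os
import zlib

# Stand-in for already-compressed data (JPEG/PNG payloads): random bytes
# are essentially incompressible, just like a well-compressed image.
already_compressed = os.urandom(1_000_000)
# Highly redundant text, by contrast, deflates dramatically.
redundant = b"the quick brown fox jumps over the lazy dog " * 20_000

ratio_cc = len(zlib.compress(already_compressed)) / len(already_compressed)
ratio_txt = len(zlib.compress(redundant)) / len(redundant)

print(f"already-compressed data: {ratio_cc:.3f}x original size")  # ~1.000
print(f"redundant text:          {ratio_txt:.3f}x original size")  # tiny
```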

Because of the cost of the LTO-5 drive, it's just not worth it when you're only running a few tapes. We'd be better off with a fire- and power-protected LTO-4 autoloader system. I've found one on end-of-line special at AU$4k, down from AU$10k, for 19TB of capacity (24 tapes), which is almost attractive. With LTO-4 tapes at about AU$55 each, that's a total of a bit over AU$5,300 for library+tapes, or about AU$280/TB, plus the controlling server. Still not exactly cheap.

Given that 1TB HDDs are down to below $100 ea, I could build a 10TB backup array (11 disks; 8TB usable; RAID6 + 1 hot spare) for about $1100 + enclosure. 4-port SATA HBAs are less than $100 ea, and putting three in a regular full-ATX or micro-ATX board is no big deal. Linux's software RAID (md) does a great job for this sort of thing, where write-through caching is acceptable because writes are mostly linear.
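Putting the two sets of numbers side by side makes the gap obvious. This uses the figures quoted above; the three-HBA line is an assumed round AU$100 each, consistent with the "less than $100 ea" figure:

```python
# Back-of-envelope AU$/TB for the two options discussed above.
tape_cost = 4000 + 24 * 55        # end-of-line autoloader + 24 LTO-4 tapes
tape_per_tb = tape_cost / 19      # 19TB native capacity

hdd_cost = 11 * 100 + 3 * 100     # 11 x 1TB drives + three 4-port SATA HBAs
hdd_per_tb = hdd_cost / 8         # 8TB usable after RAID6 + 1 hot spare

print(f"tape: AU${tape_cost} total, AU${tape_per_tb:.0f}/TB")
print(f"disk: AU${hdd_cost} total, AU${hdd_per_tb:.0f}/TB")
```

Disk comes in at well under two-thirds of the tape cost per usable terabyte, before the server and enclosure costs common to both.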

Given that we need the same kind of fire-resistant enclosures etc whether we're using tape or HDD, HDD wins by a mile.

If we were removing and rotating tapes daily then we might be able to get away without the fire protection. Maybe. The backups tend to run overnight with tapes exchanged in the morning, and that means there's a big window for loss before the tape gets taken away. Then there's the issue of finding someone trustworthy and reliable to exchange the tapes and, more importantly, not lose them.

As far as I can see, tape still loses unless you're doing LOTS of off-site archiving and have a large rotating media library. Even then it's tempting to just use HDDs in hot-swap caddies.

I don't think the economics support tape anymore.

Comment Re:Storage is pathetic (Score 1) 141

I used to use DDS4 back in the dark old days, and I quickly became all too familiar with their tendency to be write-only media even with regular drive cleaning, tape replacement and occasional tape re-tensioning. I know LTO is better, but when I was building this backup server high-capacity LTO was also eye-bleedingly expensive.

LTO pricing has been forced down dramatically by effective competition from rotating magnetic media. Tapes and drives sure weren't at those kinds of prices when I built that system! I priced alternatives and disk won by a *lot*. Of course, I'm in Australia, the land of $OMGWTF pricing for IT equipment.

Given our data set size we'd be up for a pair of drives or a changer - because as you know, if human attendance is required for a backup to complete, the backup will fail on the day it's most important that it succeeds. With a pair of drives we'd need a lot more human intervention to swap tapes than we currently require. Physical access to the backup server is a pain, as is to be expected if it's separate enough to be any good, so tape changes would be pretty annoying.

Nonetheless, thanks for the tip. I'm increasingly tempted to move to tape for my long-term archives, because HDDs are bulky, delicate, and not as stable as tape over long idle periods. I should really consider using tape for archival and keeping the HDDs as a staging area before copying to tape, reducing the frequency of tape changes and making sure backups are in more than one place.

Comment Re:Storage is pathetic (Score 1) 141

While I like the *idea* of cloud hosting making the office portable, it only works for some kinds of work, and has its own downsides.

Australia has only a couple of big international data links, and they've been known to go down when cables get cut by idiot trawlers. That usually causes severe congestion on the remaining links, and I would NOT want to be running my business off those links when it happens. If you're not in .AU/.NZ this probably isn't an issue for you, and in .AU or .NZ one can always host within the country - if you can find a provider.

The bigger issue is volume of data. I work at a small local newspaper, and even here we're dealing with over a terabyte of data in our "hot" set that we absolutely must have to produce each edition. That is not going to work over Internet links to cloud hosting. Keeping the data set on the far end of an Internet link simply isn't practical when colour-correcting and touching up images, producing artwork, etc.

Comment Re:Multi-Modal Trip Planning (Score 2) 187

Yes, this! Car+train, bike+train, etc are key ways to make public transport more usable and time efficient, but Maps doesn't understand them.

Maps needs to understand not only mixed journeys, but also which services you can take bikes on. In Perth, Western Australia, for example, you can bring a bike on the train (but not the bus), except between 7am-9am and 4:30pm-6:30pm on weekdays. There are also secure keycard-controlled bike lockups at stations if you want to just ride to the station. Making use of those facilities makes getting around a lot easier.

I usually ride my bike to the local train station, where I chuck it in a keycard-controlled lockup and hop on the train using the same keycard. Maps doesn't understand that I can get to the train station in 5 minutes rather than 20, so it makes poor planning choices. This is no big deal when following regular routes or planning well ahead, but it's a real PITA for the ad-hoc "I'm here, get me there" journeys Maps is so great for.

This is a pretty minor limitation in a generally amazing application, though. I'm truly impressed it works as well as it does.

Comment Re:Would be great... if it worked (Score 4, Interesting) 187

I must second that - I rely heavily on Google Maps in Perth, and in fact it's helped me avoid needing a car for the last five years. I only recently got one to make it easier to get out of the city, go windsurfing, and get around on Sundays.

The only issue I've ever had with Google Maps transit in WA has been the odd special-occasion public holiday or special event where Transperth appears to have failed to inform Google of the schedule changes. That can be annoying. On the other hand, Google Maps had perfect data about all the New Year's Eve special and adjusted services, so the updates are clearly getting pushed most of the time.

I cannot possibly praise Transperth and Google enough for Google Maps Transit. It's fantastic, and it's a real shame that so few people seem to know about and use it. It was a real lifesaver when I last visited Auckland, too, as I could just use Maps instead of having to fart about with a different city's transit systems and timetables. Fantastic!

Comment Australia (Score 1) 141

Now try being in Australia, where in addition to those downsides we have tightly metered traffic on Internet links, not just for international traffic over the undersea cables but for ALL traffic not to/from our local ISP.

People trying to sell cloud storage in this environment are off their nut.

Comment Storage is pathetic (Score 5, Insightful) 141

I have the same issue. I work for a small suburban newspaper, and even our hot data set is over 1TB, plus append-only archival data of more than 4TB.

When I tell these "cloud backup" providers this, they do a double-take and then start talking laughably high prices, or they just back off and say they can't really handle our archival data set. It's quite pathetic when my 10TB backup storage server in a fire-resistant, water-resistant enclosure in the shed cost under $5k when built - and that was when 10x1TB was a lot of disk, so the drives alone cost over $2500.

Because I'm in Australia I also have the issue of bandwidth. I'd need a backup provider to peer with my ISP via a local peering point that offers unmetered traffic; with 100GB/month limits considered very big here, I couldn't possibly back up over a metered link. Even then, my two redundant ADSL2+ links achieve about 6Mbit/750kbit and 4Mbit/500kbit per second each, so I'd probably need to pay to run fibre from the nearest line along the train line (est. $50,000) and pay over $1000/month for a fibre service just to talk to the backup storage host.
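To put those uplink speeds in perspective, here's roughly how long just the initial 1TB push would take at the quoted sync rates, ignoring protocol overhead and monthly quota resets entirely:

```python
# Time to upload a 1TB initial backup over the ADSL2+ uplinks quoted above.
tb_bytes = 1_000_000_000_000

for name, uplink_kbit in [("ADSL2+ link 1", 750), ("ADSL2+ link 2", 500)]:
    bytes_per_sec = uplink_kbit * 1000 / 8   # kbit/s -> bytes/s
    days = tb_bytes / bytes_per_sec / 86400
    print(f"{name}: ~{days:.0f} days to upload 1TB")
```

That's around four months on the faster link, before a single incremental runs.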

I'm negotiating to move our backup server to a business down the street and run an 802.11n point-to-point directional link between us instead. We each get to fail over to each others' Internet services if necessary, we exchange backup storage, and neither of us gets to pay through the nose for it. It's not as good as a fast link to a DC somewhere, but it's a hell of a lot more practical.

The other issue with cloud backups arises when you need that 5TB (mine) or 38TB (yours) in a hurry, for disaster recovery. You can't exactly run down the street and grab the server with its disk array then restore over 1Gbit ethernet or direct to locally attached SAS/eSATA/whatever. Nope, you have to download all that data over whatever Internet link you have access to. If that's not the dedicated fast link your premises has (say, if they've burned down) then you are screwed.
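The disaster-recovery asymmetry is stark even in round numbers. A sketch comparing a local gigabit restore against pulling the same 5TB back down a 6Mbit ADSL link (wire speeds, no protocol overhead):

```python
# Restore time for a 5TB data set: local gigabit vs a 6Mbit ADSL downlink.
data_bytes = 5_000_000_000_000

gige_hours = data_bytes / (1_000_000_000 / 8) / 3600
adsl_days = data_bytes / (6_000_000 / 8) / 86400

print(f"1GbE restore:  ~{gige_hours:.0f} hours")
print(f"6Mbit restore: ~{adsl_days:.0f} days")
```

Overnight versus the better part of three months - and that assumes the fast link survived whatever took out the data.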

I'll keep my primary backups within driving distance, thanks.

Comment Good for backups, but few decent svcs exist (Score 4, Informative) 141

For me the one attractive use case for cloud storage is for backups - and it's one that's catered to particularly poorly by current offerings.

For backups, you want (a) fast, unmetered links to the host and (b) moderately reliable, cheap, and not-that-fast storage you can access in a variety of different ways depending on what's most convenient, with or without running your own VPS to mediate between storage and storage clients.

One user will want to rsync to their cloud storage. One will want to remote-mount a file system on it via iSCSI. Another will want to run a Bacula storage daemon on it. Yet another will want to use it as a co-ordinator for a full network backup system. All these use cases should really be supported, and the first two shouldn't need the customer to maintain their own VPS to control the storage.

As things stand, almost everyone wants to sell SAN-based high performance storage that's *expensive* and *fast*, not cheap and slow. Most backup services seem to want you to use their tools or a local appliance to talk to their storage. Half of them act very confused when you mention "Linux" or "UNIX" and ask if that's a new kind of Mac or something. At least in Australia I've found the market miserably unsatisfying so far.

What I'd really like is for ISPs to begin offering, or partnering with others to offer via peering, bulk near-line storage at moderately affordable rates. That way you can talk to it over your business's main ADSL/SHDSL/fibre/whatever link(s) without dealing with quotas, it's fast, there are multiple routes to it, and it's unlikely to go down if an international link has a hiccup.

iiNet's cloud offering looked like it might have potential for this, but it turns out to be just another EC2-wannabe crossed with Linode-done-badly-and-expensively. The storage offerings are miserable, and they don't even mention whether traffic between iiNet internet services and their cloud is metered.

Comment Fixing things (Score 1) 404

Alas, fixing things is getting harder all the time.

I recently had to give up on my Dell XPS M1330 and "upgrade" to a laptop that's inferior in almost all ways. It has a horrid trackpad instead of the beautiful Synaptics one on the old machine, a crappy display, etc etc. I'd prefer to keep on using the old M1330, but it was beginning to overheat and crash much too frequently under load.

Of course I tried cleaning out the cooling subsystem. I tried stripping the degraded old paraffin pads off the CPU & GPU and replacing them with suitable copper shims coated with a thin layer of thermal paste. Nothing worked - perhaps something had already been damaged by the heat.

The point is that my chances of repairing the machine are slim. The GPU is surface-mounted, and without massively expensive equipment I have zero chance of removing it even if I could source a replacement or confirm the GPU was faulty. At least the CPU is in a ZIF socket, so I could theoretically strip a scrap machine for one, presuming damage to the CPU's packaging is actually at fault. If it's something else then I'm even more stuffed, because it's an intermittent fault in a very complex system.

Another case: my TV sends the wrong EDID on its HDMI ports, causing no end of frustration. I know what's wrong and could fix it, but cannot get any documentation for the proprietary protocol of the TTL serial interface used to program the control board, and the vendor refuses to admit that anything's wrong. It also enables overscan whenever it detects audio-enabled HDMI, which is horrid - but fixing this would require source code to the TV's firmware and a build environment as well as a way to update the controller firmware. Good luck with that.
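One quick sanity check when debugging EDID trouble like this: every 128-byte EDID block ends with a checksum byte chosen so that all 128 bytes sum to 0 mod 256. Verifying that is a cheap first step before blaming the source device. A minimal sketch (the example block here is fabricated for illustration, not a real display's EDID):

```python
def edid_checksum_ok(block: bytes) -> bool:
    """True if a 128-byte EDID block's bytes sum to 0 mod 256."""
    return len(block) == 128 and sum(block) % 256 == 0

# Fabricated example: 127 arbitrary bytes plus the balancing checksum byte.
body = bytes(range(127))
good = body + bytes([(-sum(body)) % 256])
bad = good[:-1] + bytes([(good[-1] + 1) % 256])  # corrupt the checksum

print(edid_checksum_ok(good))  # True
print(edid_checksum_ok(bad))   # False
```

A checksum that passes doesn't mean the descriptors inside are *correct*, of course - my TV's EDID is presumably internally consistent, just wrong.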

Even in simpler cases repair isn't necessarily practical for many. I've learned enough that I could locate, remove and re-solder a fried resistor or diode in a PSU. I could probably isolate a faulty electrolytic cap and replace that, too. Here parts aren't the problem, safety is. Messing around with high voltage transformers and PSUs isn't really bright unless you have a bit more than dabbler-level electronics skill - it's liable to leave you needing repair as well as the device. Taking it to someone who does have the skills, though, will cost you more than a replacement device.

Why? Because we expect to get paid way, way, way more than we pay the people who're building the devices in the first place. So long as that remains the case, the massive waste it gives rise to will continue.

Comment Sound *was* crap early on (Score 1) 404

Current "3D" is more like the performance sound effects added to silent film in the theatre. Think coconuts for horses clopping along pavement. It adds something, but it's not really a good match for the real thing.

The parent post's argument would be absurd were it against, say, high-quality holography, but it isn't. It's against pseudo-3D achieved using stereoscopy, and frequently abused at that. I don't think your substitution really makes sense given that.

(Personally I'm moderately fond of stereoscopic 3D when used without ridiculous post-production gimmicks, but I can see why people hate it.)

Comment Or if your code isn't a product (Score 5, Informative) 325

I'm releasing tools from my work that I developed for our operations.

We don't want to sell the tools - for the kind of money we could get for them in a market full of existing commercial options, it wouldn't be worth the trouble, let alone the sales and support overheads.

We could keep them closed in-house. There's nothing wrong with that and it's a viable option, but it means we give up the chance of sharing maintenance costs with others and benefiting from others' improvements to the tools.

Consequently, we've decided to open them up. This will permit competitors to use them - but most of our local competitors have already licensed expensive commercial equivalents they're committed to, so the only way they're likely to benefit is if we push pricing down across the industry, which isn't likely at this stage given that our tools are significantly less polished and more limited than the existing commercial offerings. It'd also permit new start-ups who wanted to compete with us to use them - but we're the dominant player in a mature and saturated local market with significant community loyalty. Startups have consistently failed despite having vast amounts of cash pumped into them by outfits who want to knock us out of the way and don't mind taking epic short-term losses to do it.

The upside of opening our tools up is that we're hoping to see participation from other companies and non-commercial publications, reducing the cost of ownership of our in-house tools, making them easier to maintain and less dependent on just one person in one company. That should help future-proof them for us if they're successful, and hopefully get us the use of contributed enhancements we wouldn't have developed ourselves.

IMO this is one area where OSS is really key in commercial use: when you need to build tools that help your business but aren't viable as a product.

Comment The devices are not bricked, just IMEI-blacklisted (Score 5, Informative) 234

All this is, is a list of blacklisted IMEIs that's shared between most (not all) carriers. The phones remain perfectly functional when used in other countries with compatible UMTS/GSM frequencies, and on carriers that don't use the IMEI blacklist.

Some carriers do subscribe to the IMEI blacklist but take so long to update it that they might as well not. I'm looking at you, Vodafone.

Not only can stolen phones be sold overseas, but it's pretty trivial to rewrite the IMEI on many phones. This is a disincentive to casual theft, but not much more.
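For the curious: part of why rewriting an IMEI is "trivial" in principle is that its only built-in integrity check is a Luhn check digit, which anyone can recompute for a fabricated number. A sketch of the validation, using a widely published sample IMEI rather than any real handset's:

```python
def imei_luhn_ok(imei: str) -> bool:
    """Validate the Luhn check digit of a 15-digit IMEI."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:     # double every second digit, left to right
            d *= 2
            if d > 9:
                d -= 9     # equivalent to summing the two digits
        total += d
    return total % 10 == 0

print(imei_luhn_ok("490154203237518"))  # widely used sample IMEI -> True
print(imei_luhn_ok("490154203237519"))  # bad check digit -> False
```

The actual anti-theft value lives entirely in the shared blacklist, not in the number format itself.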

The Almighty Buck

Feds Call Full-Tilt Poker a 'Global Ponzi Scheme' 436

blair1q writes "Popular (and heavily advertised) poker website Full-Tilt Poker was sued today by the U.S. government, following an investigation that revealed it to be a massive Ponzi Scheme. The principals in the company set up a complicated system to direct funds from subscribers' poker accounts into their own bank accounts. This was in contravention of their own claim that users' money was untouched. Players' accounts amounted to $390 million, but the company only has $60 million in the bank, having over time distributed $440 million to its own directors and executives."
