Charter Accidentally Wipes 14K Email Accounts
dacut writes with the sad news that Charter Communications, which provides cable and Internet access to 2.6 million customers, accidentally and irretrievably wiped out 14,000 active email accounts while trying to clear out unused accounts. They're providing a $50 credit to each affected customer, which seems a paltry sum for anyone who was less than diligent about backing up their email — though those who relied on Charter's webmail interface had no easy way to accomplish backups. From the article: "There is no way to retrieve the messages, photos and other attachments that were erased from inboxes and archive folders across the country on Monday, said Anita Lamont, a spokeswoman for the suburban St. Louis-based company. 'We really are sincerely sorry for having had this happen and do apologize to all those folks who were affected by the error,' Lamont said Thursday when the company announced the gaffe."
Re:my gut feelings.... (Score:3, Insightful)
You know, the corporate version of the celebrity arrest...
Unfortunate (Score:4, Insightful)
It's one thing if you have angry customers over something you have control over. It's another thing entirely if your customers are angry at you AND there isn't a single solitary thing you can do. That said, I hope that they are more careful in the future...
"No way"? (Score:3, Insightful)
No backups? Come on! (Score:4, Insightful)
So, either Charter doesn't back up email very well, or their process to "clear out old accounts" involves actually deleting all of the backups of those accounts as well. I already addressed the issue with the former scenario, but if it's the latter, I'd have to say that's a pretty nasty practice too. Any time you clear out old and "unused" data, you have to assume that you're likely to accidentally hit some false positives, which is one of the reasons we have backups in the first place.
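The safer purge the parent is arguing for can be sketched as a two-phase delete: move a supposedly unused mailbox into quarantine first, and only reclaim the space after a grace period, so false positives can be restored. A minimal sketch (the directory layout and the 30-day window are assumptions, not anything Charter actually runs):

```python
import shutil
import time
from pathlib import Path

GRACE_PERIOD = 30 * 24 * 3600  # seconds a quarantined mailbox survives before real deletion

def quarantine_mailbox(maildir: Path, quarantine: Path) -> Path:
    """Move a supposedly unused mailbox aside instead of deleting it outright."""
    quarantine.mkdir(parents=True, exist_ok=True)
    # Stamp the quarantined copy with the time it was set aside.
    dest = quarantine / f"{maildir.name}.{int(time.time())}"
    shutil.move(str(maildir), str(dest))
    return dest

def purge_expired(quarantine: Path, now: float) -> list:
    """Permanently delete only mailboxes that sat unclaimed past the grace period."""
    removed = []
    for entry in quarantine.iterdir():
        stamp = int(entry.name.rsplit(".", 1)[-1])
        if now - stamp > GRACE_PERIOD:
            shutil.rmtree(entry)
            removed.append(entry)
    return removed
```

A false positive caught inside the grace period is a `shutil.move` back into place, not a restore-from-tape job.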
The case against remotely hosted applications (Score:3, Insightful)
I always POP my email down to my own local computer.
At least if
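For anyone who wants the same local copy, a minimal sketch of pulling mail down into a local mbox file with Python's poplib (host and credentials are placeholders; `retr` only fetches, and with no `dele` calls the server copy survives):

```python
import mailbox
import poplib

def store_messages(mbox_path, raw_messages):
    """Append raw RFC 822 byte strings to a local mbox file; returns count stored."""
    box = mailbox.mbox(mbox_path)
    for raw in raw_messages:
        box.add(raw)
    box.flush()
    box.close()
    return len(raw_messages)

def pop_backup(host, user, password, mbox_path):
    """Fetch every message over POP3/SSL without deleting the server copies."""
    conn = poplib.POP3_SSL(host)
    conn.user(user)
    conn.pass_(password)
    count, _ = conn.stat()
    raws = []
    for i in range(1, count + 1):
        _, lines, _ = conn.retr(i)  # message delivered as a list of byte lines
        raws.append(b"\r\n".join(lines))
    conn.quit()  # note: no conn.dele() calls, so the mailbox stays on the server
    return store_messages(mbox_path, raws)
```

Run from cron, that leaves a mailbox you control even if the ISP's copy evaporates.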
Re:Standard statement... (Score:5, Insightful)
I see no reason why bytes are any more "volatile" in an IMAP file than anywhere else.
Store on Server? (Score:2, Insightful)
poop happens (Score:3, Insightful)
But what's done is done and props to them for a bullshit-free apology.
Most people are prepared to cut you some slack when you screw up as long as you admit your mistake.
- recognise what it was that you did wrong
- claim responsibility for your actions
- apologise
- state clearly what you learned and what actions you will take to prevent a recurrence
Or you could take the legal-advice / Bush-administration route:
- flatly refuse to acknowledge that anything bad actually happened
- talk about how 'the other guys' screw up all the time
- start an internal investigation and refuse to comment on the issue while it is under investigation
- eventually admit that 'mistakes were made' but no, you can't think of any specific examples right now, and it was all someone else's fault, and there's no way you could have known it would happen.
Spokesman talk (Score:3, Insightful)
Note the use of the passive voice ("were erased"), a standard device for avoiding responsibility. It seems that even when they're trying to apologize, spin doctors can't turn off the instinct to dodge blame for mistakes.
Re:email servers are not long term archives... (Score:2, Insightful)
1. Most access is plain text and subject to snooping/hijacking (passwd/userid/content)
2. Email is the most abused internet protocol (my opinion) by zombots, spammers, and virus/trojan propagators. ISPs do a lot to counter spam and threatening content but sometimes they get hosed. Or your home machine gets compromised and the ISP will do things to clean up.
3. Grooming for stale accounts, unauthorized accounts, and stale/oversized data is a reality on most messaging systems. "Oops"es happen. Usually stated as "sxxt happens".
Whatever your feelings of outrage are, common sense says put your important stuff somewhere close at hand and under your own control.
My 2 cents FWIW
some days, sysadmining just sux (Score:2, Insightful)
for eight months, i worked for a small-town isp with dsl and dialup customers. we had old equipment and no budget for upgrades. we had an autoloader that would occasionally snap tapes, old drive arrays that would fail with no replacement parts on hand ("whuddya mean, we got harddrives right there" "those are ide, i need scsi3"). backupexec would report completed jobs but find no restorable data. our dhcp scope was too small to serve all our customers at once (meaning I would hunt down inactive leases and free them for people trying to connect at that moment).
i did get a new tapedrive after six months of empty promises from my managers (and two catastrophic domain controller failures). i left them a year ago, and they still have my job posted on their site. (don't bother asking for the link, you do not want to work there.)
Re:Crap (Score:5, Insightful)
One might be able to reach a person at Charter, but a helpful person? Not on your life. You speak about behemoth corporations, and Charter embodies the worst of corporate bureaucracy. They are total idiots, the left hand doesn't talk to the right hand, and their prices are unreasonable. And yes, I dumped them as soon as I could so I don't have to deal with them any more. But not once did I deal with a helpful person.
And deleting 14,000 email accounts just shows the heights of stupidity this company has achieved.
Re:Standard statement... (Score:3, Insightful)
It would be impossible to centralize that much data, except perhaps the database that verifies users. The mbox data would have to be distributed. That eliminates any single point of failure, but it also increases the chance of small, partial failures. Say you have 60 servers and 6,000 disks, and you try to replicate all of it, so the same backup software runs on each of the 60 servers. People are adding accounts, removing accounts, and changing passwords constantly. Not to mention, a few gigabytes of data roll over on this system every second. Now suppose, for whatever reason, a small percentage of accounts on one of those file systems was not being backed up; perhaps there was corruption in the database that points to each user's mbox. Sometimes these things get tied together, and the loss of the accounts correlates with them not being backed up. The complexity and variability of such a large distributed system is easy to underestimate.
It happens to the best of us, really.
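One cheap guard against exactly that failure mode is a periodic audit that cross-checks the account database against what the backup job actually captured, so silently-unbacked-up accounts surface before the purge script finds them. A minimal sketch (the account lists are hypothetical):

```python
def backup_gaps(db_accounts, backed_up):
    """Accounts the user database knows about that the backup manifest missed."""
    return sorted(set(db_accounts) - set(backed_up))

def orphaned_backups(db_accounts, backed_up):
    """Backed-up mailboxes no longer referenced by the database: possible DB corruption."""
    return sorted(set(backed_up) - set(db_accounts))
```

Either list being non-empty is the early warning; a gap means an account with no backup, and an orphan hints that the pointer from the database to the mbox has rotted.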
Re:funny story....(totally true) (Score:2, Insightful)
Re:funny story....(totally true) (Score:3, Insightful)
It's like snapshots never happened, isn't it? (Score:3, Insightful)
NetApps are commodity. ZFS is free. Bigger storage iron is a competitive marketplace with thin margins. Who on earth is doing production storage without modern data management facilities?
ian
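The snapshot discipline the comment is pointing at takes only a few lines: with ZFS, a timestamped `zfs snapshot` of the mail dataset on a schedule means a bad purge run is a rollback, not a press release. A minimal sketch (the `tank/mail` dataset name and the naming scheme are assumptions):

```python
import datetime
import subprocess

def snapshot_cmd(dataset, now=None):
    """Build the `zfs snapshot` command line for a timestamped snapshot of a dataset."""
    now = now or datetime.datetime.now()
    return ["zfs", "snapshot", f"{dataset}@mail-{now:%Y%m%d-%H%M}"]

def take_snapshot(dataset):
    """Actually take the snapshot (requires ZFS and the right privileges)."""
    subprocess.run(snapshot_cmd(dataset), check=True)
```

Recovering a wiped mailbox is then a copy out of the dataset's `.zfs/snapshot` directory, or a `zfs rollback` if the whole dataset was hit.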