What has Google gained by selling the phone this way? Why can't customers just get in line, pre-order a phone, and get it when inventory is available? Why didn't Google and Motorola anticipate the demand for the Nexus 6? They had similar problems with the N4 and N5, so they're either unable to learn from the past, horrible at planning releases, or there's some hidden agenda or benefit they get out of releasing devices this way.
Who was responsible for this release? Why was it handled this way? Why is Google making it so hard for me to give them my money?
Here's the message I sent. If you're lazy, feel free to use it:
Disabling Apple Pay and Google Wallet, which were previously accepted, is not OK. If you want to come up with your own competing system and give people rewards to use it, that's fine, but don't break existing functionality. Google Wallet just works. Apple's and Google's solutions don't cost you any more than a credit card transaction. Your payment app isn't even available yet and relies on QR codes, which means that when it does launch, it will likely be very clunky by comparison.
If you can't come up with a sane response to this, I guess I'll be switching to Walgreens.
Have any other sites done this to you recently? What's your stance on using an easy-to-remember 'throwaway' password on sites that don't have any of your sensitive data?
A couple of clarifications: we do have redundant systems, on multiple physical machines with redundant power and network connections. If a VM (or even an entire hypervisor) dies, we're generally OK. Unfortunately, some things are very hard to make HA; if a primary database server needs to be rebooted, downtime is generally required. We have a pretty good monitoring setup, and we have support staff who work all shifts, so there's always someone around who can be tasked with 'call me if this breaks'. We also have a senior engineer on call at all times. Lately it's been pretty quiet because stuff mostly just works.
Basically, up to this point we haven't automated anything done during a maintenance window that causes downtime on a public-facing service, and I can understand the reasoning behind that. But we also have lab and QA environments that are getting closer to what we have in production. They're not quite there yet, but once they are, automating something like this could be an interesting way to go. We're already starting to use Ansible, though that isn't completely baked in yet and will probably take several months.
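For what it's worth, once playbooks are trustworthy, one possible workflow is to rehearse a run for real in the lab, then use Ansible's check mode as a dry run against production before the announced window. This sketch only prints the commands rather than executing them, because the inventory and playbook names (`lab.ini`, `prod.ini`, `maintenance.yml`) are made-up placeholders:

```shell
#!/bin/sh
# Sketch only: print the commands a wrapper might issue. File names are
# hypothetical placeholders, not real inventories or playbooks.
show() { echo "would run: $*"; }

show ansible-playbook -i lab.ini maintenance.yml                  # rehearse for real in the lab
show ansible-playbook -i prod.ini --check --diff maintenance.yml  # dry run against production
show ansible-playbook -i prod.ini maintenance.yml                 # the actual window
```

Check mode (`--check`) reports what would change without changing it, and `--diff` shows the would-be file edits, which makes a decent pre-window sanity check.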
My interest in doing this is partly that sleep is nice, but really, if I'm doing maintenance at 5:30 AM for a window that has to be announced weeks ahead of time, I'm a single point of failure, and I don't like that. Plus, considering the number of systems we have, the benefits of automating this particular scenario are significant. Proper testing is required, but that testing (which can itself be automated) can also verify that our lab environments actually match production, with unit tests baked in. Initially it will take more time, but in the long run anything that eliminates human error is good, particularly at odd hours.
Somewhat related: about a year ago, my cat redeployed a service. I was up for an early morning window and had pre-staged a few commands chained with &&'s, went downstairs to make coffee, and came back to find that the work had been done. Too early. My cat was hanging out on the desk. The first key he hit was Enter, followed by a bunch of garbage, so my commands were faithfully executed. It didn't cause any serious trouble, but it could have under different circumstances. Anyway, thanks for the useful feedback.
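One lightweight guard against exactly this: instead of leaving a chained command line armed at the prompt, wrap it in a helper that refuses to run anything until you type a specific word, so a stray Enter (or cat) does nothing. A minimal sketch, with a made-up helper name:

```shell
#!/bin/sh
# confirm_run: hypothetical helper that only executes its arguments if
# the next line of input is exactly "RUN"; anything else aborts.
confirm_run() {
    printf 'Type RUN to start maintenance: ' >&2
    read -r answer
    if [ "$answer" = "RUN" ]; then
        "$@"
    else
        echo "aborted" >&2
        return 1
    fi
}

# Demo: a bare Enter aborts instead of executing the staged command.
echo "" | confirm_run echo "should not run" || echo "stray Enter was ignored"
echo "RUN" | confirm_run echo "maintenance started"
```

The staged work goes in the arguments (or a function), so nothing happens until a human deliberately confirms.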
I have a maintenance window at about 5 AM tomorrow. It's fairly simple: upgrade CentOS, remove a package, install a package, reboot. Downtime shouldn't be more than 5 minutes. While I don't think it would be wise to automate this particular window, I think that with sufficient testing we might be able to automate future maintenance windows so I or someone else can sleep in. Aside from getting a bit more sleep, automating this kind of thing means it can be written, reviewed, and tested well in advance. Of course, if something goes horribly wrong, having a live body keeping watch is probably helpful. That said, we do have people on call 24/7, and they could probably respond capably in an emergency. Have any of you tried to do something like this? What's your experience been like?
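A window like that could be scripted roughly as follows. This is only a sketch: the package names are placeholders, and it defaults to a dry run that prints each step instead of executing it, so it can be reviewed safely:

```shell
#!/bin/sh
# Hypothetical sketch of the window as a script. "oldpkg"/"newpkg" are
# placeholders, not real packages. DRY_RUN defaults to on; set DRY_RUN=0
# to actually execute the steps on a real CentOS box.
set -eu   # abort at the first failed step or unset variable

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run yum -y update          # upgrade CentOS packages
run yum -y remove oldpkg   # remove the obsolete package
run yum -y install newpkg  # install its replacement
run shutdown -r now        # reboot to pick everything up
```

Because of `set -eu`, a failed step stops the script rather than plowing on to the reboot, which is the main property you'd want before trusting it unattended.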
Of course, ECC doesn't fix everything, but it should halt your system when your RAM hits an uncorrectable error, which is better than silently corrupting your files on disk.
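On Linux, you can at least see whether ECC events are being reported: when the kernel's EDAC driver is loaded, per-memory-controller corrected (`ce_count`) and uncorrected (`ue_count`) error counters appear under sysfs. A small sketch that reads them if present:

```shell
#!/bin/sh
# Print EDAC ECC error counters if the kernel exposes them; otherwise
# note that they're absent (no ECC RAM, or the EDAC driver isn't loaded).
report_ecc() {
    found=0
    for f in /sys/devices/system/edac/mc/mc*/[cu]e_count; do
        [ -r "$f" ] || continue
        found=1
        echo "$f: $(cat "$f")"
    done
    [ "$found" = "1" ] || echo "no EDAC counters (no ECC RAM or driver not loaded)"
}

report_ecc
```

A nonzero `ce_count` means ECC has been quietly saving you; a nonzero `ue_count` means you've already hit the halt-worthy case.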