Comment Re:US cell system (Score 0) 201

I was a recent visitor to the USA and was astonished at the 3rd world nature of its cell system. I had never imagined it was so bad, before visiting.

What exactly did you find so subpar? Service from just about any carrier should have been decent, considering you were in a city with a subway. My own experience with world phones in Europe wasn't exactly thrilling: I bought a SIM in London and then got to pay 2 pounds per day just to turn the phone on in Spain. What union/country/territory do you live in whose carriers have seemingly gotten it so right?

Comment Re:You're complicating things. (Score 1) 539

They offer KVM access, at $35.00/day, which in this case I refuse to pay to fix what they broke, outside of the context of the server.

Stop being stubborn - why not KVM in, prove it's their fault, and then make them reimburse you for it?

Alternately, they want me to hand over the root password (not a privileged account, but THE root password), so they can do it themselves.

So you're unable or unwilling to fix the problem yourself, but at the same time you refuse to let them do it? What exactly would make you happy here? And don't say "moving me back to the old hardware and datacenter that I was paying for!" Be realistic.

Only a few days ago, they indicated that the NIC on the server may be causing the issues. I'm down 2-3 hours every other Sunday because of this.

You said yourself that it's new hardware. It's completely reasonable for them to suggest you've got a bad NIC driver in there for whatever card you were moved to.

...every other Sunday between 7:00am and 8:00am EST, my server's load goes over 100 as incoming connections spike over 700/sec., sendmail refuses connections due to the load, and the box seizes up. The logs show that the connections are established and then hang.

This is almost certainly a problem on the server itself. I've seen a handful of cases where hardware load balancers in DSR mode can cause connection pileups under certain conditions, but 99% of the time the fault is local to the box. In any event, tuning should keep a connection spike from knocking the machine over completely, so you can stay logged in and see what's going on.
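If you want data to look at afterward, something like this rough Python sketch (the log path is made up; adjust to taste), run from cron or a screen session before the Sunday window, will record the load average and a count of TCP socket states once a minute even if you can't get a shell during the spike:

    #!/usr/bin/env python3
    # Rough sketch: log 1-minute load and TCP socket-state counts once a minute.
    # LOG path is a placeholder. State codes are the ones /proc/net/tcp uses.
    import time
    from collections import Counter
    from datetime import datetime

    LOG = "/var/log/conn-snapshot.log"  # hypothetical location
    STATES = {"01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
              "06": "TIME_WAIT", "08": "CLOSE_WAIT", "0A": "LISTEN"}

    def snapshot():
        load1 = open("/proc/loadavg").read().split()[0]
        counts = Counter()
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            try:
                with open(path) as f:
                    next(f)  # skip the header line
                    for line in f:
                        st = line.split()[3]  # 4th column is the state code
                        counts[STATES.get(st, st)] += 1
            except FileNotFoundError:
                pass
        return "%s load=%s %s" % (datetime.now().isoformat(), load1, dict(counts))

    while True:
        with open(LOG, "a") as out:
            out.write(snapshot() + "\n")
        time.sleep(60)

A pile of SYN_RECV at 7:00am with a sane ESTABLISHED count would point in a very different direction than a slow build-up of hung established connections.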

Also, by claiming that nothing has changed on the system, you're either lying, or you're a horrible sysadmin who doesn't apply updates. Another potential scenario I see here (obviously aside from new hardware using previously unused drivers...) is that you or your package management system installed a new kernel or NIC driver, but never rebooted. Then when the server was powered off and migrated to the new facility, it came back up with the new (and potentially problematic) driver/kernel.
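A quick way to sanity-check that theory is to compare the kernel you're actually running against the images sitting in /boot. Rough sketch, assuming kernels are named vmlinuz-<version> there (true on most distros, but verify):

    #!/usr/bin/env python3
    # Print the running kernel next to every kernel image found in /boot,
    # so you can eyeball whether the migration reboot activated a newer one.
    import glob
    import os
    import platform

    print("running kernel :", platform.release())
    images = sorted(glob.glob("/boot/vmlinuz-*"))
    if not images:
        print("no vmlinuz-* images found in /boot (different layout?)")
    for path in images:
        version = os.path.basename(path)[len("vmlinuz-"):]
        print("installed image:", version)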

Comment You're complicating things. (Score 4, Interesting) 539

Switch providers. Plenty offer remote reboot and a serial console or KVM for both VMs and physical servers, which would let you go crazy with custom encrypted partitions and the like. At the end of the day, though, someone somewhere at the hosting company will still be able to reboot your server into a rescue environment and reset the root password. Go colocation if you're really that paranoid about it.

You also have zero chance with litigation, unless you've somehow gotten them to sign something saying they specifically won't muck around in your server.

I'd also like to know how you *know* it's a hardware or network issue outside of your server. How do you know it's not your NIC driver hanging? Older e1000 drivers (for cards that are extremely common in the hosting industry) are quite flaky. What research have you done outside of your internal monitoring?
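For what it's worth, checking which driver is actually bound to the NIC and whether the kernel has been logging resets for it takes about a minute. Rough Python sketch; the interface name is a placeholder, and you'll probably need root to read dmesg:

    #!/usr/bin/env python3
    # Which driver is bound to the NIC, and has the kernel logged anything
    # (link flaps, tx hangs, watchdog resets) mentioning it? IFACE is a guess.
    import os
    import subprocess

    IFACE = "eth0"  # placeholder; substitute the real interface name

    link = "/sys/class/net/%s/device/driver" % IFACE
    driver = os.path.basename(os.readlink(link)) if os.path.islink(link) else None
    print("%s driver: %s" % (IFACE, driver or "unknown"))

    # dmesg output is often root-only (kernel.dmesg_restrict=1)
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    hits = [l for l in dmesg.splitlines()
            if IFACE in l or (driver and driver in l)]
    for line in hits[-20:]:
        print(line)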
