Comment Re: Did you say cock? (Score 2) 50
I am a girl, you insensitive clod.
Running wp without at least a WAF in front of it, what could possibly go wrong?
yes brother, I am all with you!
I am a drummer too.
more so when an update rolls around and potentially throws a wrinkle in the mix.
You are right about this. Once, a Linux kernel update (or was it mdtools?) was screwed up: you would add a new partition to a Linux MD RAID array and it wouldn't sync the partition before putting it online.
Anyways, toying around with Linux MD and cheap solutions makes you more creative in the long run IMHO.
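That kind of bug is easy to catch if you watch the resync state right after adding a member. A minimal sketch, with /dev/md0 and /dev/sdb1 as example names:

```shell
# Add a new partition to an existing MD RAID array
mdadm /dev/md0 --add /dev/sdb1

# Watch the rebuild; the new member should show up as rebuilding
# and only flip to "active sync" after a full resync completes.
cat /proc/mdstat
mdadm --detail /dev/md0
```

If the new member goes straight to active sync with no rebuild in /proc/mdstat, you hit the bug.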
Just keep your mind open please. There are plenty of approaches and trade-offs available and just as you said:
Run the cost/risk assessment and apply accordingly.
Furthermore, it depends on SLAs and such, and on having the most cost-effective solution. As long as you know what you are doing and document it, you don't have to worry about covering your arse so much...
Who says I don't ALSO work for others and I don't know about more expensive solutions? I just don't brag about it mister Shaman
I know enough to know about people covering their arses, it is pretty common you know...
Yet, I never lost any data on the cheaper setup I run on the side.
Take care man!
Hello,
I am in a data center and I had email rejected by hotmail for no reason (not on any RBL blacklist, etc.). I solved it by masquerading outgoing mail for hotmail through another IP on a different subnet I own on my datacenter connection. I would try this first. You can also try to contact hotmail so they whitelist your IPs.
If your 5 IPs are on the same subnet and blacklisted by hotmail, I don't see any other solution than routing your mail through an intermediate mail server. Have you tried relaying it through the comcast MX? I can't imagine hotmail rejecting email from all comcast subscribers.
Also, you probably have somebody sending spam on the same subnet as yours, and hotmail seems to like to block whole subnets.
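If you run Postfix, the masquerading trick is just a per-destination transport bound to a second IP. A sketch, assuming a hypothetical spare address 198.51.100.25 (the transport name is made up, the parameters are standard Postfix):

```
# /etc/postfix/transport -- send hotmail through a dedicated transport
hotmail.com    hotmail-out:
live.com       hotmail-out:

# /etc/postfix/master.cf -- same smtp client, bound to the second IP
hotmail-out  unix  -  -  n  -  -  smtp
    -o smtp_bind_address=198.51.100.25

# /etc/postfix/main.cf
transport_maps = hash:/etc/postfix/transport
```

Then `postmap /etc/postfix/transport` and reload Postfix. Hotmail sees only the second IP.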
Run the cost/risk assessment and apply accordingly.
Exactly, use ZFS, which does just that, if you can afford the extra memory. Use a fancy hardware RAID controller that does that if you wish. I just use cheap drives and Linux MD. Do your research before commenting on a setup you don't seem to know about. You don't have to brag about your hardware here and try to convince others to do as you do.
Didn't I mention in my first post: "Most people would say this is crazy but in my opinion,..."?
I do not see what your point was in replying to my posts anyway, other than to brag about using more expensive solutions and treat others who don't do exactly as you do like idiots.
Oh, and while at it, RAID 1 doesn't have parity information!
If you're running RAID 1, 5, 6, 10, etc, it's a moot point as data will be rebuilt from remaining parity information.
I did not learn a single thing from your replies.
Take care nevertheless!
Good one! mke2fs -c -c
Thanks for pointing this out!
I suggest you do a little more research. If a sector was successfully written to and then 2 months later the drive hardware can't read from it, there is no way for the drive hardware to automagically correct the error and recover the data. The drive hardware then just increments the Current_Pending_Sector count. You could start by reading your own link, but then again, you seem to have problems reading my posts, so your mileage may vary
Exactly, the first thing I thought about was this:
I know that. I run e2fsck -c -c (non-destructive read-write test) to write test patterns to the drive and then read them back to make sure the data is the same. If I put the drive back online, e2fsck -c -c will always report 0 bad blocks and no timeouts will have occurred. I also check for timeouts in the logs.
Failed reads on a drive that is part of a RAID array will usually cause the drive to be kicked out of the array after a timeout, slowing down the machine. The strategy I suggested allows the drive hardware to indeed relocate the bad blocks and bring Current_Pending_Sector back to zero.
So, more or less:
1) Drive gets kicked out of the array.
2) Look for read timeouts in the logs
3) Use what I described until read timeouts vanish.
4) Keep an eye on SMART data and further read timeouts to ensure the drive has stabilized. I actually use cron scripts for that.
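Steps 2 to 4 boil down to a few commands. A rough sketch, with sdb as an example device and syslog as an example log path; only run this on a drive already removed from the array, since it destroys everything on it:

```shell
# Step 2: look for read timeouts on the suspect drive
grep -i 'sdb.*timeout' /var/log/syslog

# Step 3: force rewrites so the firmware can remap pending sectors
# (destroys all data on the partition)
dd if=/dev/zero of=/dev/sdb1 bs=1M

# Recreate the filesystem and run the read-write bad-block test in one go
mke2fs -c -c /dev/sdb1

# Step 4: confirm the SMART counters have stabilized
smartctl -A /dev/sdb | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'
```

The cron part is just the last smartctl line mailed to you on a schedule.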
Just take the drive offline and try this:
http://slashdot.org/comments.p...
Current_Pending_Sector will go back to zero if the drive is still usable.
I have had drives fail. I took them offline and wrote 0s and 1s to them with dd until Reallocated_Sector_Ct stopped rising and Current_Pending_Sector went to zero, then ran e2fsck -c -c on them 2 or 3 times, then I put them back online!!!
Most people would say this is crazy but in my opinion, the surface of a drive often has bad spots while the rest is perfectly OK. Some of those drives are still online without reporting any new errors after more than 5 years, some almost 10 years. Those are server drives with very low Start_Stop_Count, Power_Cycle_Count and Power-Off_Retract_Count, all lower than 250 after 10 years. Those drives are spinning all the time.
Newer drives will relocate bad sectors to free reserved space they keep for that purpose. As long as you don't run out of free spare space, IMHO, it is worth a try.
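The write-until-stable loop I describe above can be sketched like this (/dev/sdX is a placeholder for the offline drive; the whole thing is destructive):

```shell
#!/bin/sh
# Write zeros, then ones, over the whole drive until the
# Reallocated_Sector_Ct SMART attribute stops rising.
DEV=/dev/sdX   # the offline drive -- everything on it is destroyed
prev=-1
while :; do
    dd if=/dev/zero of="$DEV" bs=1M 2>/dev/null
    # 0xFF pattern: feed zeros through tr to flip every bit on
    dd if=/dev/zero bs=1M 2>/dev/null | tr '\0' '\377' | dd of="$DEV" bs=1M 2>/dev/null
    cur=$(smartctl -A "$DEV" | awk '/Reallocated_Sector_Ct/ {print $10}')
    [ "$cur" = "$prev" ] && break
    prev=$cur
done
smartctl -A "$DEV" | grep Current_Pending_Sector
```

If Current_Pending_Sector is still non-zero when the loop exits, the drive is out of spare sectors and belongs in the bin.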
Well at least no plane crashed because it flew into an undetected storm...
That's exactly what I suggested in my post and that's why I always disable that. Doesn't it sound like a security risk? Duh..