
Comment: Re: Here's the solution (Score 2) 367

by GigaplexNZ (#48042863) Attached to: Will Windows 10 Finally Address OS Decay?
I'm well aware of the hardlinking that Microsoft does. I've also done my own analysis, measuring the real disk usage of WinSxS with tools that count hardlink references. On my Windows 7 installation, which had a 16GB WinSxS folder, 14GB of that was unique to WinSxS with no other hardlinks. It isn't as efficient as Microsoft claims.
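The hardlink-aware measurement described above can be sketched in Python. This is illustrative, not the tool actually used: the idea is that a file whose link count is 1 is unique to the folder being scanned, while a higher link count means it may be shared with, say, System32, and multiply-linked files should only be counted once.

```python
import os

def measure_hardlink_usage(root):
    """Walk `root` and split its on-disk size into bytes unique to it
    (link count 1) versus bytes possibly shared via hardlinks elsewhere.
    Each underlying file (device, inode) is counted only once."""
    seen = set()
    unique = shared = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name), follow_symlinks=False)
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue  # another hardlink to a file we already counted
            seen.add(key)
            if st.st_nlink == 1:
                unique += st.st_size  # no other hardlink anywhere
            else:
                shared += st.st_size  # possibly shared with other folders
    return unique, shared
```

Run against a WinSxS-sized tree this gives exactly the split quoted above: bytes that would actually be freed versus bytes that only look like they belong to WinSxS.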

Comment: Re: Switched double speed half capacity, realistic (Score 1) 316

by GigaplexNZ (#47781079) Attached to: Seagate Ships First 8 Terabyte Hard Drive

Sorry but I've dealt with more failed drives at the shop than you've had hot meals and if they fail "without" warning?

Unnecessary hyperbole.

Then YOU sir are not paying attention! Before a HDD fails you will see several rather blatant warning signs: warnings about delayed write failures are the most obvious, but there are also temp spikes on the drive (as the motor heats up trying and failing seeks) and SMART changes (not talking SMART fail, which usually comes at the end; we are talking large changes in the SMART values, which can be read by one of several free programs such as HWMon or HDTune), not to mention most modern drives get REALLY noisy when they are getting ready to croak.
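The "large changes in the SMART values" the quoted post talks about can be watched programmatically too. A sketch that parses the attribute table printed by smartmontools' `smartctl -A` (assuming its usual 10-column ATA layout; the sample output and attribute values below are made up for illustration) and diffs two snapshots:

```python
SAMPLE = """ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   064   052   000    Old_age   Always       -       36
"""

def parse_smart_attributes(text):
    """Parse `smartctl -A` output into {attribute_name: raw_value}.
    Raw values that aren't plain integers are kept as strings."""
    attrs = {}
    in_table = False
    for line in text.splitlines():
        if line.lstrip().startswith("ID#"):
            in_table = True
            continue
        parts = line.split()
        if in_table and len(parts) >= 10 and parts[0].isdigit():
            raw = parts[9]
            attrs[parts[1]] = int(raw) if raw.isdigit() else raw
    return attrs

def smart_deltas(before, after):
    """Attributes whose raw value changed between two snapshots --
    the 'large changes' worth paying attention to."""
    return {name: (before[name], after[name])
            for name in before
            if after.get(name, before[name]) != before[name]}
```

A rising `Reallocated_Sector_Ct` between snapshots is the classic early warning; a controller that dies outright, as described below, never gives the snapshots a chance to differ.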

I never said that HDDs never give warnings. I claimed that HDDs can fail without warning. I've had a few die from controller failures; it's not always a mechanical failure. I've also seen mechanical failures where the SMART information didn't contain any errors. For example, sometimes a head can just crash (rare, but it can still happen even on stationary drives). You're making some dangerous assumptions about the ways HDDs can fail. If you really had dealt with more failed drives than I've had hot meals, you'd know they aren't always predictable.

Compare this to the "dirty little secret" of the SSD world, which is that the majority of SSD failures are NOT the flash chips themselves but the SSD controller chip. When that fails? NO warning, NO chance to back up your data, just flip the switch and... poof. This is why I tell my customers they should use a religiously adhered to backup system along with cloud storage to ensure no data loss.

I hope you give that same advice to HDD customers. And why are you suggesting that backing up from a drive showing signs of failure is desirable? If I see signs of failure, I don't trust the data coming off it. I junk it, either rebuilding the array or recovering from backup.

Comment: Re:My opinion on the matter. (Score 1) 826

by GigaplexNZ (#47753823) Attached to: Choose Your Side On the Linux Divide

making the init system a large complex system that does lots of things rather than the old school ideology of doing one thing and doing it well

Which init scripts didn't do. They approximated what's really a dependency system (B and C need A to be up before starting, and D needs both) with a bunch of sequentially run numbered scripts. The end result was both inefficient and fragile.

That part, systemd is good at. I have no objections to systemd's advantages as an init system. The objectionable complexity, the "doing more than it should", lies in the non-init-system parts of systemd. For example, last I heard they just added DNS caching. Into the init system. Since when does an init system need to handle DNS caching? I'd also argue (controversially) that journald is outside the scope of an init system. It has some compelling arguments for its existence, but surely it should be a separate project rather than an integral part of systemd. I'm sure there are other parts of systemd that I would object to being part of the init system, but systemd adds new features so quickly that I miss most of the feature announcements.
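For reference, the dependency example from the quoted post (B and C need A before starting; D needs both) maps onto systemd unit directives roughly like this. The unit names are invented for illustration:

```ini
# b.service and c.service would each declare:
[Unit]
Requires=a.service
After=a.service

# d.service would declare:
[Unit]
Requires=b.service c.service
After=b.service c.service
```

This replaces the old S10/S20/S40 numbered-symlink ordering with an explicit graph, which is the part of systemd the reply above has no objection to.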
