
Comment Re:Automatic SSD caching of spinning disks in Linu (Score 1) 353

And none of those solutions are quite ready for prime time, unless you set them up at the same time you set up your machine and you don't need to cache multiple file systems...

(I think they're on the right track, but there are a lot of gotchas and "oh, you can't do that" cases with those solutions.)

Comment Re:Holy shit did they get cheap fast (Score 1) 353

The better quality SSDs are still up around $0.80-$1.10 per GB. The server-quality SSD drives are around $1.50-$2.50 per GB.

Which is not all that bad a price for server-quality SSD storage. When you start adding up the number of 15k RPM SAS drives you would need to short-stroke in order to get equivalent IOPS, the SSDs are competitive.
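For a rough sense of that math, here's a back-of-the-envelope sketch (all the IOPS and price figures below are assumptions for illustration, not benchmarks):

```python
# Back-of-the-envelope: how many short-stroked 15k RPM SAS drives does it
# take to match one server-grade SSD on random IOPS? All numbers are
# rough assumptions, not measured figures.
SAS_IOPS = 200       # assumed IOPS for a short-stroked 15k RPM SAS drive
SAS_COST = 250       # assumed price per SAS drive, in dollars
SSD_IOPS = 40_000    # assumed random IOPS for a server-grade SSD
SSD_COST = 1_500     # e.g. 600 GB at ~$2.50/GB

spindles = -(-SSD_IOPS // SAS_IOPS)   # ceiling division
print(f"{spindles} SAS drives (~${spindles * SAS_COST}) vs one SSD (${SSD_COST})")
```

Even with generous assumptions for the spinning disks, the spindle count (plus the chassis, power, and controllers to host them) dwarfs the SSD's price.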

Comment Re:WOW! (Score 1) 132

A 7 year old machine is getting quite long in the tooth.

Maybe, maybe not. Per-core performance has basically flat-lined for the last 7 years. Long gone are the days when clock speeds doubled every 12-18 months, or when buying a new PC would get you something that ran 4-8x faster than the one you had from 3-4 years ago.

At the moment, I'm still using a 2007-era Thinkpad T61p (Core2 Duo 2.2GHz, 8GB RAM, Win7 Pro, SSD). It originally shipped with WinXP, 4GB RAM and a 7200 RPM HD. This is still the machine I use for the majority of my work.

The main advantage I have is that before the 4yr warranty ran out, I made *sure* to have it serviced, so it has a new backlight, new keyboard (which was acting up), etc.

Is it slow? Eh, the CPU is not the zippiest and I would definitely prefer a faster quad-core, but it still works well enough that I'm not ready to spend $2200 on a new Thinkpad. I have a much more powerful desktop sitting beside me for things that need raw CPU power.

Really, the thing that makes it still usable is the SSD. Without that I would have given up on it years ago. It's why we are putting SSDs in all the desktops at the office. With a good SSD, you spend a lot less time twiddling your thumbs, and there's less fear that if you do X you can't do Y at the same time because of disk contention.

Comment Re:no capacitors (Score 1) 76

go look at Intel SSDs but be prepared to pay an arm and a leg for it.

Well, maybe not an arm and a leg: the 300GB Intel DC S3500 units are only $300, or $600 for the 600GB unit. So around $1/GB, and they come with large capacitors inside to deal with power loss.

The Intel DC S3700 units, OTOH, are $2.25-$2.50 per GB. Which isn't all that much either in the big picture; even regular SSDs 3-4 years ago were $1.50-$2.00 per GB.

Comment Re:Take 'em offline (Score 1) 423

there are good reasons to keep XP around in a virtual machine for running apps that won't work on newer OSes, but I fear that I won't be able to activate XP, so there will be no more fresh installs/reinstalls of XP

In other words, companies whose products rely on software that only runs on WinXP have had their heads in the sand for 5+ years now.

We spent the last 5 years moving everything to web applications (that work fine across all the major browsers) and switching to open-source applications in every possible niche. I estimate that in another year, 80-90% of our desktop users could easily be switched to OS X or Linux. There's only a few remaining applications which are Windows-only.

Comment Re:Consider the source (Score 1) 164

Exactly right. Clinical depression is a life-threatening illness.

Spot-on. And the main reason it is so life-threatening (and frequently fatal) is that you are your own worst enemy. Sometimes realizing that you are not being rational is enough (CBT), but sometimes drug therapy is also needed.

I've done a course of CBT (about a year, with monthly visits). The tips and tricks that you learn during CBT are very useful. It teaches you coping mechanisms, ways to self-diagnose that you are not thinking straight, and gives you symptoms to watch for that indicate you need to go get help again.

But for me, medication is the only long-term solution. Fortunately, there is a generic that works well enough to make me functional (even optimistic) with only minor side effects.

Heck, the best part about opening that dialog with the doctor and going through those first few months of intensive treatment is that it is no longer scary. I'm not afraid to pick up the phone and call my doctor's office to seek help with it.

Comment Re:Pointless (Score 1) 173

Indeed, the sweet spot for CPU prices is about $80-$120 for the "budget" minded, and $150-$250 for the "mid-range". The CPUs at $300+ are where you spend a lot of cash for small improvements in performance over the mid-range CPUs.

The same price ranges also apply to GPUs. Any GPU in the $80-$120 range can probably handle most games at 720p; going with something in the $180-$220 range gets you a GPU that can handle almost everything at 1080p. Spending $300+ on a GPU is only needed if you are doing a triple-headed setup or driving 4K monitors.

You can still build a mid-range gaming PC these days for about $1000. That gives you $200 for the CPU, $200 for the GPU, $200 for RAM/motherboard, $100 for the Windows license, $150 for the case/PSU/misc, and $150 for an SSD.
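As a sanity check, the line items really do sum to the $1000 budget (the per-component prices are, of course, just estimates):

```python
# Hypothetical mid-range build budget, itemized from the figures above.
budget = {
    "CPU": 200,
    "GPU": 200,
    "RAM/motherboard": 200,
    "Windows license": 100,
    "case/PSU/misc": 150,
    "SSD": 150,
}
total = sum(budget.values())
print(f"total: ${total}")
```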

It won't be top-of-the-line, but it will last you 3-5 years.

Comment Re:Pointless (Score 1) 173

The few times I'm ever waiting on CPU, it's multi-threaded. Video transcoding, occasionally compiling. I can't remember the last time I heard of a game being CPU bound - that's always GPU-bound these days.

There are dozens of AAA titles which are CPU bound, especially multi-player games where the CPU has to keep track of everything so that it can all happen in a deterministic order. Since that can't happen across multiple threads, your FPS gets limited by the speed of a single core.

(Planetside 2 is probably the best known example. Even with their OMFG patches last fall to try and make it more multi-threaded, the game performance is still limited by the speed of a single core.)

Comment Re:Pointless (Score 1) 173

The thing is when you look more closely you find that most of those processes are spending most of their time asleep. So there is little to be gained from more than 2 cores (one for the program you actually care about, one for the background crap)

That's a good argument for dual-core over single-core. Buying a single-core CPU is for chumps and has been since 2007. But it is not a very good argument for staying with dual-core over moving up to quad/hex/octo core setups. That argument boils down to cost, and whether paying an extra $50-$75 for the extra two cores is worth it over the life of the machine.

The fastest dual-core Intel chips right now are 3.5-3.6GHz (Ivy Bridge and Haswell, roughly $150-$160). The fastest quad-core Intel chips are also 3.5-3.6GHz (Ivy Bridge and Haswell, roughly $300). Or you can go with the budget-priced quad-core Intel chips for around $200 and get something with 3.1-3.3GHz clocks.

That means, by giving up about 10% single-core performance, you gain the ability to spread your work out across four cores instead of two. So as soon as you get into the realm of multi-threaded applications that can use the extra cores, you come out ahead.
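A crude way to see the tradeoff, assuming work scales perfectly across cores (which real workloads rarely do, so treat this as an upper bound):

```python
# Crude throughput model: cores x clock. Clock figures are the rough
# numbers quoted above; perfect scaling is an optimistic assumption.
dual = 2 * 3.6    # fast dual-core
quad = 4 * 3.2    # budget quad-core

print(f"single-core penalty: {1 - 3.2 / 3.6:.0%}")   # roughly 10% slower per core
print(f"aggregate gain:      {quad / dual - 1:.0%}")  # well over 50% more throughput
```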

It's not hard to swamp a dual-core CPU. I do it all the time on my 2007 era Core2 Duo. It's a lot harder to swamp a quad-core CPU enough to make the system unresponsive. And for server work, we need 8/16 core CPUs.

Comment Re:tm abbrevs mk it hrd 2 rd. (Score 1) 91

Successor to ext3 and generally faster

Create a (non-sparse) 1GB or larger file on ext3, then time how long it takes to delete it. For large-file handling, ext4's use of extents rather than allocating individual blocks is far superior to ext3. (It's not an edge case these days either, not with MKV and MP4 files measured in hundreds of megabytes, or disks measured in terabytes.)
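If you want to try that timing experiment yourself, here's a quick sketch (the file name and 256 MiB size are arbitrary; scale it up to the 1GB case for a more dramatic difference):

```python
# Time how long it takes to delete a large non-sparse file. Run this on
# an ext3 mount and an ext4 mount and compare the unlink times: ext4's
# extents make the delete near-instant, while ext3 walks block bitmaps.
import os
import time

path = "bigfile.bin"            # arbitrary test file name
chunk = b"\0" * (1024 ** 2)     # write real 1 MiB blocks, not a sparse hole

with open(path, "wb") as f:
    for _ in range(256):        # 256 MiB total
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())        # make sure the blocks are really allocated

start = time.monotonic()
os.unlink(path)
print(f"unlink took {time.monotonic() - start:.4f} s")
```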

Plus you get the faster fsck times at boot, and a bunch of other useful features (shrinking/growing the file system, faster handling of directories with lots of files inside, etc.)

All in all, ext4 is pretty darned good for its purpose of storing files. And while I'm looking forward to BTRFS, it's just not ready yet, and ext4 serves me well enough.

Comment Re:150 tabs? (Score 1) 142

Obviously you have a job where you can focus on one project at a time (maybe two) and work without constant interruptions.

Right now, I have half a dozen Firefox windows open:

#1 is the corporate intranet applications (task tracking, project tracking and a bunch of other things). This window typically has anywhere from 6-30 tabs open.

#2 is currently open to a wiki with technical documentation for the software I am working with. That has at least a few tabs and sometimes as many as 10-15, because the vendor's wiki sucks and is poorly indexed. So if you find something useful, you may as well leave the tab open to refer back to in a few hours.

#3 has a bunch of Linux man pages open along with other reference pages. Typically 6-12 pages, depending on how many different Linux commands I am referring to.

#4 has up flight schedules, hotel bookings, etc. because I am planning a trip in a few months, but keep getting interrupted. There's another 6-12 tabs.

#5 has Slashdot open with a handful of tabs for stories with comments that I want to read later, or other news articles that I want to look at throughout the day.

And since I am juggling multiple projects at the moment, windows #6-#9 are open to various technical resources dealing with that topic. Each with a handful of tabs open. And if someone calls me with a technical issue, I open up a new Firefox window and can end up with 12-24 tabs open by the end of figuring out the issue. If it was an easy fix, that window gets closed right away, but if it was a "try this and let me know" solution, then I am better off keeping that window around for a few hours or days.

Usual memory usage for me is about 1GB for Firefox, sometimes 1.5GB (out of 8GB RAM). It's not difficult to get up to 100+ tabs across multiple windows.

Comment Re:From the Article (Score 1) 220

I conclude that password authentication on servers is alive and well, as long as done right.

Depends on the service and whether it does rate-limiting of attack attempts.

For SSH-based services? There's really no excuse not to use a passphrase-protected SSH key pair and turn off password authentication for SSH, plus disallowing "root" from logging in over SSH. It raises the bar by an order of magnitude. Unless the attackers can get a copy of your private key file, and the passphrase to decrypt it, and know which servers that key is used on, they can't get in. That's a pretty tall order for a non-targeted attack.

Moving your SSH service to a non-standard port in the upper part of the 1-1024 range is also a good idea, mostly because it keeps your log files from being cluttered up by the brain-dead attacks that only look for tcp/22. That makes it easier to spot the more dangerous attackers who took the time to figure out which port you have SSH running on.
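A minimal sshd_config sketch covering both points (the port number is an arbitrary example in the privileged sub-1024 range; make sure you can still log in on the new port before closing your existing session):

```
# /etc/ssh/sshd_config (relevant lines only)
# Arbitrary non-standard port in the privileged sub-1024 range
Port 922
# Key-based logins only, and no direct root logins
PermitRootLogin no
PubkeyAuthentication yes
PasswordAuthentication no
```

Reload sshd after editing, and keep your current session open while you test a fresh connection on the new port.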

Comment Re:reduce the amount (Score 1) 983

That's the dreaded TOC error. There are (far more) expensive drives that will ignore TOC errors and let you dump the disk to an ISO file. But for us mere mortals a TOC error means you are well and truly hosed.

But for the far more common errors that the sector-level ECC can't correct, the recovery data files give you the option to recover everything.
