
Comment Forget the Transformer Prime. (Score 2) 356

The most obvious issue is the lack of availability, but even if you have time to wait, there is a serious problem likely to sour your interest in the Transformer Prime: a locked boot-loader. Until someone breaks it or the key gets leaked, it's uncertain whether you would be able to install your own OS on it. It looks like a great tablet/netbook, and I was really hot to buy one, with the idea of possibly installing a full Linux on it and using it as more of a lightweight netbook with 18 hours of battery.

Comment Re:Thoughts from my home storage server experience (Score 1) 355

No no, the cards I'm using have 8 discrete SATA ports on them. See the photo at the URL I pasted in my original comment.

The cards I'm using are all fairly old at this point; they weren't exactly the latest stuff in 2008 when I put together my storage server, but they had a really good reputation then. The Silicon Image controllers I really don't remember, but the model number 3114 sticks in my head; they were 4-port PCI cards.

Comment Re:Thoughts from my home storage server experience (Score 1) 355

The cards I had just worked. Maybe you're talking about different cards; I'm pretty sure mine are SATA, not SAS (check the part number for details). I got them originally because they were one of the few chipsets supported by OpenSolaris, and they were apparently the chipsets used in the Sun "Thumper" boxes, but I moved away from the Solaris kernel and found they worked great under Linux as well. Perhaps you have a defective card? I've got maybe 6 of these cards and have been really, really happy with them.

I also played with the Silicon Image cards, and they worked great as well. They're a lot less expensive, but they only supported PCI, not PCI-X, and I wanted the extra bandwidth: plain 32-bit/33MHz PCI tops out around 133MB/sec theoretical, shared across the whole bus. During RAID rebuilds or verifies I'm getting 250MB/sec or so, so the extra bandwidth is nice.

Sorry I couldn't offer more help.

Comment Re:Only caveat: Use RAID6 not RAID5 (Score 1) 355

I almost mentioned this in my previous post, but didn't. As you suggest, I have great backups, so if the RAID-Z/RAID-5 fails during the rebuild, it's not a huge issue, I just need to drive an hour away to pull down all the data.

In the case of the 1% per hour rebuild, that is actually a work machine with RAID-Z2 (ZFS equivalent of RAID-6). Having that extra safety net of still having the ability for another drive to fail was very nice when I was mucking about with the array.

For home, would I use RAID-6/RAID-Z2? Probably not, but that's because I really, truly have great backups. Reading the other poster talking about using unRAID, and how if a drive dies he only loses the data on that drive, I was thinking "where are your backups?" If you're just storing rips of your CDs or movies, and you're ok with spending the time to re-rip, I guess that's fine. However, I'm storing original content there, and I don't want that to go away.

Comment Thoughts from my home storage server experience. (Score 4, Informative) 355

I wrote about the latest storage server I built back in 2008, and a lot of my thoughts at the time are written up in http://www.tummy.com/Community/Articles/ultimatestorage2008/

However, to answer a few of your questions...

External disc enclosures? Avoid them like the plague. My initial experience with the 5-bay eSATA enclosures was pretty good: sometimes the system wouldn't pick up the external drives, but usually I could get it to find them after some tweaking, rebooting, etc. I ended up getting 3 of them, the AMS DS-2350S, which at the time were well reviewed. I have since pulled all 3 out of active use, and they're just sitting around. I don't know exactly how they failed, but eventually, after swapping some for others, I moved the drives into internal SATA enclosures, which have been very reliable (I used the Supermicro CSE-M35T-1).

Also note that eSATA connectors don't really hold on that well. If anything, they're not as robust as internal SATA connectors, despite being outside the case where they can get banged around.

If I were to do it over again, I'd probably stick with the case I started with, with 5 internal 3.5" bays and 3 front 5.25" bays, and put the Supermicro enclosure in there. I'd also probably go with fewer, bigger drives rather than the many smaller drives I used previously (even though at the time those drives were free, left over from another project).

As far as running it in the garage: don't even think about it, unless your garage is not where you store your cars. I have some computers that I've run in the garage for the last 9 months, and they are filthy; I've had a lot of fan failures, lots of dust, insects, and random other crap. I put mine in our furnace room, which has enough extra space.

As far as using a server case? Hard to see the payback there unless you have a cabinet. Most server cases are HUGE, heavy, and expensive. A 3U case with 12 drive bays likely costs $500, plus you usually have to deal with special form-factor power supplies; expect to spend another $200 on one of those. I wouldn't do it, and I have a 3U 12-bay Chenbro case just sitting at my office that I could re-purpose.

As far as the file-system, I selected ZFS (via zfs-fuse under Linux) and I've been VERY happy with it. The primary benefit is that it checksums *ALL* data and can recover from some types of corruption, or at least alert you about corruption it can't correct. So if you are storing photos or home videos that you may not access very often, that's good peace of mind to have; I know that in 10 years I won't go to look at some photographs I've taken and find they were silently corrupted. Of course, you could get similar benefits by saving off a database of file checksums and alerting when they go bad.

Really the only downside of ZFS that I've seen is that a RAID rebuild is a seek-heavy task rather than just streaming. I have an 8x2TB drive array that I'm currently rebuilding (drive failure, at work), and it's 33% done after 31 hours. A normal RAID-5 array would have rebuilt that in what, 10 hours? The system is idle except for the rebuild.

If you care about the data going into it, make sure you checksum and verify the files regularly.
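If you go the checksum-database route instead of (or on top of) ZFS, a minimal sketch is maybe 30 lines of Python. Everything here (the database path, SHA-256, JSON for storage) is just an arbitrary choice for illustration, not a recommendation of any particular tool:

```python
import hashlib
import json
import os
import sys

DB_PATH = "checksums.json"  # hypothetical location for the checksum database


def sha256sum(path, bufsize=1 << 20):
    """Stream the file through SHA-256 so huge media files don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()


def build(root):
    """Walk the tree and record a checksum for every file."""
    db = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            db[path] = sha256sum(path)
    with open(DB_PATH, "w") as f:
        json.dump(db, f, indent=1)


def verify():
    """Re-hash everything and complain about mismatches or missing files."""
    with open(DB_PATH) as f:
        db = json.load(f)
    for path, expected in db.items():
        if not os.path.exists(path):
            print(f"MISSING: {path}")
        elif sha256sum(path) != expected:
            print(f"CORRUPT: {path}")


if __name__ == "__main__":
    if sys.argv[1] == "build":
        build(sys.argv[2])
    else:
        verify()
```

Run "build" once over your archive, then "verify" from cron every week or so; that gets you the "it rotted silently" alert, though unlike ZFS it can't actually repair anything.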

The 8-port PCI SATA card I got is fantastic; it's a Supermicro with the Marvell chipset and is very well supported (even supported by Nexenta).

Finally, all this data is encrypted, so if someone were to burgle us, I only have to worry about them getting the hardware; I don't have to worry about them now having scanned bills and other documents and other personal and private data. This is why I'm running ZFS on Linux: it gave me encryption plus ZFS (a combination not available otherwise in 2008), as well as an OS I'm very familiar with.

As far as the OS, I am personally running CentOS on my system, because that means I can install it, set it up, and then forget about it for quite a few years, except for regularly running "yum update". Debian should be fine too, but you will get to/have to track upstream changes more frequently.

Comment I'm not sure they would be able to tell... (Score 5, Funny) 30

I've had the pleasure of working with iBahn in the past at conferences. They don't have the sharpest techs I've dealt with. For example, I had a tcpdump of their DHCP server handing out a lease with the gateway in a different network*. Obviously, this didn't work... Their response: "Well, I can reboot all the APs for you..." Mind you, the APs weren't doing DHCP...

So, iBahn is saying they "haven't found any breach"? I'm not convinced that their lack of finding it is an indication that it hasn't happened. I wonder what equipment they've rebooted trying to find it. :-)

(* Details: the DHCP server handed out an address like 10.1.1.2 in a /24 network, and the gateway was 10.5.254.254. These are rough approximations, not the exact IPs, but give you an idea)
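If you want to see concretely why that lease is broken, here's a quick sketch using Python's ipaddress module and the approximate addresses above:

```python
import ipaddress

# The (approximate) lease iBahn's DHCP server was handing out.
lease = ipaddress.ip_interface("10.1.1.2/24")
gateway = ipaddress.ip_address("10.5.254.254")

# A default gateway has to be on-link, i.e. inside the leased subnet,
# or the client has no route it can use to reach the gateway at all.
if gateway in lease.network:
    print(f"{gateway} is reachable from {lease.network}")
else:
    print(f"{gateway} is NOT in {lease.network}: clients can't even ARP for it")
```

Rebooting the APs, needless to say, does not change the math.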

Comment Re:Realize the limitations... (Score 1) 311

Agreed, if your primary use of your system is rebooting it (in particular, you don't read more than the cache size between reboots), then the hybrid drive is probably the way to go. :-)

My laptop? uptime reports "up 30 days". My desktop? "up 14 days". I'm not saying nobody reboots; one of my employees would always shut down rather than suspend, but that was partly because his laptop with an SSD was so fast at rebooting. I've just found that I don't have to reboot that often. YMMV.

As far as "most desktop activity is reads", I agree. Which is why you probably should keep it in RAM. The added benefit of upgrading your RAM rather than going to a hybrid hard drive is that if you NEED that memory for running a huge application once in a while, you will realize huge performance benefits over swapping.

So, yes, if you reboot all the time then a hybrid drive may be good for you. However, the "boot reordering" work makes that pretty speedy on a regular spinning drive as well. So it's still hard for me to see the win.

Comment Realize the limitations... (Score 4, Insightful) 311

Hybrid drives, and even all of the hybrid RAID controllers I've looked at, only use the SSD for read acceleration. They aren't used for writes, from what I could tell from their specs. So you're almost certainly better off upgrading your system to the next larger amount of RAM rather than getting a hybrid drive.

Personally, I looked at my storage usage and realized that if I didn't keep *EVERYTHING* on my laptop (every photo I'd taken for 10+ years, 4 or 5 Linux ISOs, etc) and instead put those on a server at home, I could go from a 500GB spinning disc to an 80GB SSD. So I did and there's been no looking back. The first gen Intel X-25M drives had some performance issues, but since then I've been happy with the performance of them.

Comment Re:Physical damage (Score 1) 182

That sounds very plausible. A friend of mine dropped her Kindle onto concrete. Looking at it, there's no obvious sign it was ever dropped (she doesn't know exactly which way it landed), but when she tried to turn it on, the display was totally messed up, and it gets hot right in the middle, between the screen and the keyboard.

Comment Step away from the fs and nobody gets hurt! (Score 1) 803

This sure seems like a bad idea; are there really people complaining about this? Seems like it could lead to a backlash of Unity proportions. :-) I'd be ok with it if it looked exactly like the current file-system, but without littering my home directory with empty "Videos", "Pictures", "Documents", "Webcam", "Music", "Desktop", "Downloads", "Public", "Templates" directories...

Comment Re:/bin, /sbin had their functions (Score 1) 803

The bigger issue, which is still relevant, is that /bin and /sbin held the tools necessary to bring the system up to the point where it could get on the network and mount the bulk of its file-systems from network resources (NFS, iSCSI). These days, though, those are as likely to be tools living in the initrd...

Comment Magnetic stickers, eh? (Score 1) 170

The problem with magnetic stickers is... Corvettes have fiberglass body panels. :-)

I once ran timing; here are my thoughts:

Personally, I don't think that transponders are expensive, and I think they'd be a great solution which would absolutely fail because of politics. "You mean I have to buy a $100 device (or rent for $5/event) to mount to my $40,000 car that has $2,000 rims and $1,400 tires?!? What do you think I am, made of money?!?"

I suspect you won't be able to do good detection unless the cars stop at the end. That's something you'll have to play with, though: maybe you can set up a zone past the end where they have to stop to get recognized, or *MAYBE* the camera can deal with them if they stick to the recommended speed off the track. Cameras are very bad at getting sharp shots of sideways motion, though. It'll also depend on the conditions outside.

I imagine you will need a hardware timing device that runs in real time, and then pull the times off it from the non-realtime OS. That, or you'll need to run real-time OS extensions. Maybe you can get something reasonable out of a hardware interrupt, like a serial/parallel port line change. The normal x86 Linux clock has 1ms resolution and plenty of jitter, so just expecting to use the clock under Linux is probably unrealistic.
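To give a feel for the serial-line idea, here's a rough sketch assuming a pyserial-style setup with a hypothetical beam sensor wired to the CTS pin (the device node is made up too). Note that a userspace polling loop like this still inherits all the scheduler jitter, which is exactly why the dedicated hardware timer is the better answer:

```python
import time

import serial  # pyserial; assumes the beam sensor toggles the CTS line

port = serial.Serial("/dev/ttyUSB0")  # hypothetical device node
last = port.cts

while True:
    now = port.cts
    if now != last:
        # time.monotonic() is immune to NTP jumps, but this timestamp is
        # taken *after* the OS got around to scheduling us, so it still
        # carries millisecond-scale jitter on a stock (non-RT) kernel.
        print(f"beam change at {time.monotonic():.6f}s, state={now}")
        last = now
    time.sleep(0.0005)  # ~0.5ms poll; busy-waiting is tighter but rude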

These people are as serious as a heart attack about this hobby. Saying "Accurate to within a few thou is probably good enough" is a good way to see exactly how good your insurance plan is. :-)

You're going to have to deal with things like a car leaving the starting line with "185" on its side and crossing the finish with "85", "18", or even "1 5" on it. :-)

The "Predator" OpenCV system sounds like it would be awesome to try in this situation.

Consider setting up a place where the cars can go to get recognized and their numbers entered, maybe at the starting line, maybe a dedicated area. Predator/OpenCV may be able to handle things like a letter that fell off during the run, but it may also mis-detect in some cases. You'll probably need someone eye-balling the start and finish anyway.
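For what it's worth, a first OpenCV experiment is only a dozen lines. This sketch (hypothetical file name, thresholds pulled out of thin air) just boxes high-contrast blobs of vaguely digit-like proportions, which is the easy 10% of the problem; actually deciding "185" vs "85" is the hard part:

```python
import cv2

img = cv2.imread("finish_line_frame.jpg")  # hypothetical captured frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize so painted-on numbers stand out from the body panel.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # Keep blobs that are vaguely digit-shaped; candidates would then go
    # to a classifier (or a human) for the actual number decision.
    if 20 < h < 200 and 0.2 < w / h < 1.0:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("candidates.jpg", img)
```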

Good luck with that. I tried writing up some documentation for how to run the timing system at our Autocross after they trained me on it, and I had my ass handed to me...


Submission + - Hate the new Star Wars movies? Love the Subway! (wired.com)

jafo writes: I can't imagine that even the most steadfast haters of Lucas' meddling in the series won't warm their cold, cold hearts a little when the new release brings the awesomeness of light sabers to the Tokyo subway system. As a promotional tie-in, the handrails have been outfitted with stickers, LEDs, and buttons, turning them into fully-functional (well, almost) Jedi weapons. Be careful which part of the handrail you reach for, Tokyo!
