


Comment Re:IBM's hardware vendor mind is taking over (Score 1) 863

Wine's biggest problem is that compatibility is currently more important than making apps look like they belong on Linux. At the moment it really feels like I just opened up a Windows app on Windows 95 running on Linux. What makes it so annoying is the following:

Wine will NEVER be fully compatible. Every month a dozen apps (that people actually use) come out (whether a new app or an update to an existing app) that aren't compatible and won't be for a few weeks at best.

Making Wine integrate properly with the desktop environment needs to be done once and has much more value than working with iTunes 9 (which programs like Wine-doors allow 99% of Wine users to work around easily anyway). By making your few legacy Windows apps look like native Linux apps, you make Linux a far stronger alternative to Windows.

Comment Re:Lost the point (Score 1) 543

I do admit that the strictness of the GPL does make the driver situation on Linux difficult, and I can clearly see where both sides have a good point. Some manufacturers don't legally have the right to open their driver (ATI is in that situation with FGLRX, actually), but the community doesn't want any situation where a manufacturer stops caring, drops support, and boom, everyone on the binary driver is now screwed (look up Intel's GMA 500 for an example, though there's talk of that driver getting open-sourced, which is crazily unexpected). Not to mention drivers that arbitrarily divide market segments for devices. Nobody wants to release enough information to create an open-source driver if that runs the risk of people buying their $50 devices and getting them to do the work they'd otherwise need the $300 device for. It's a sad, perpetual tug of war there.

As an aside, a video card driver is by no means a trivial undertaking, especially for more recent cards. Mind you, work is being done by developers on ATI hardware, partially because of all the NDA-free documentation that's been released (check out the guy doing Radeon-Rewrite, for example, on pre-R400 hardware). Hopefully we'll eventually see someone join ATI's (Novell's?) team on the open-source R500-700 drivers.

Comment Lost the point (Score 4, Informative) 543

Keep in mind, the point of the GPL isn't just to have code that's open, it's to have code that STAYS open.

It's essentially self-perpetuating open source. I don't get all the people who discuss GPL work-arounds. It's really simple. If the GPL isn't for you, look for something with an MIT license, or even something in the public domain, or fucking code your own. The GPL borders on being an ecosystem, and if you wanna plunder it and move on, go somewhere else.

Every GNU zealot shouts this at the top of their lungs, so it should be pretty easy to understand by now: if you don't like the GPL, don't fucking link to a GPL'd library. End of discussion.

Comment Even for power users... (Score 4, Interesting) 121

Even for power users, HTPCs can be aggravating. Why, in a world where you could put together a tiny monster PC for around $300, would someone buy a DivX or NMT player? Simple. Take any HTPC on the market, ANY.

Plug it into a regular, yellow, composite television.
Plug it into an HDTV via component or HDMI.

If you can turn it on, boot it up, and play a video on it without a single configuration edit or any installation hassle, then please, reply to this topic, because as far as I know, an HTPC that does this is akin to a fucking unicorn.

I have an iStar Mini and a Popcorn Hour, both NMT devices. The Mini's in the living room. If I wanna take that thing to the kitchen TV (13", composite in), I just put the movie on a USB stick and it's showing the film inside of the 2 minutes it takes to set up and boot. When it goes back to the living room, it's an HDMI connection to the TV and coax to the (admittedly cheap) surround system. Works just fine, automatically detects 1080p at startup. Over component, I'd have to hit two buttons to get 720p or 1080i (worst-case, 480p is instantly automatically enabled).

I had a friend try to build a MythTV box. Hours went by as this man tried to get MythTV to show up at a decent resolution on his HDTV (this was a few years ago, via DVI). This is a guy who runs and actually knows how to use Gentoo, and would be a sysadmin if he wasn't a programmer at a Fortune 500 company (a good one, you've probably used their services at some point(s) in the last six months). On the AppleTV, the first test isn't even a possibility without some insane level of hacking (especially if you want color out of the composite out). I can only IMAGINE what it's like on a Windows Media Center rig. And in the last two cases, playing videos other than QuickTime or WMV, respectively (let alone something like MKV), is a hassle that takes more hours to get up and running than those devices are probably WORTH.

As crappy and low-end as the interfaces on mini video boxes are, they happen to work remarkably well for the simple process of "plug into TV, watch stuff", whether "stuff" is on a USB stick or the network. Give me a call when the HTPC manages to get there on a remote-friendly interface.

Comment This is fucking retarded. (Score 5, Insightful) 119

Nintendo's the only one not surprised by this. They didn't have a single major release this year, save for maybe Wii Sports Resort (which came out when, 2 days ago?). By Christmas they'll release New Super Mario Bros. Wii, and next year brings Mario Galaxy 2, possibly a Wii Fit expansion or whatever they're doing with the pulse sensor, and lo and behold, those months will do ridiculously well for the Wii, and the year afterward, in the same month, analysts will worry about Nintendo's downfall when the sales aren't as high due to a lack of major titles.

It's the same dumb shit with Hollywood. Half a dozen studios release films in June with quarter-of-a-billion-dollar budgets plus marketing campaigns, and when all of those types of films don't come out 'till August the next year, there's an article about how the film industry is failing, all because it's easier to make up "sky is falling" predictions than to actually wait a whole fucking fiscal year and take into account the number of major releases that hit in a particular year.

Games and film have 2-3-year production cycles, and many times projects get delayed. The money still comes in (albeit at a higher cost due to the delay, which, for better companies, tends to result in more revenue for a better product), but as it doesn't come in steadily, it gives "analysts" plenty of fuel to predict doom where there is none.

Comment If you're paranoid... (Score 1) 611

OpenSolaris and an 8-drive RAID-Z2 array. PHYSICALLY disconnect that fileserver (and turn it off) and sync up to it once a month.

Use GlusterFS or rsync to sync that up to your main computer. If you can figure it out, make incremental backups to DVD once a week (or day, if it's that important). Take those DVDs off-site in a vacuum-sealed container (not expensive, you can make one that uses a hand pump and a box). If everything goes to hell, restoring from DVDs takes forever, but you have that option, and that's what's important.
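If you want the gist of "incremental" in code, here's a toy sketch (Python, and nothing like rsync's actual delta algorithm, just the idea): hash every file, compare against a manifest from the last run, and only copy what changed.

```python
# Toy incremental backup: copy only files whose content hash changed
# since the last run. A sketch of the idea, NOT a replacement for rsync.
import hashlib
import json
import shutil
from pathlib import Path

def backup(src: str, dst: str, manifest: str = "manifest.json") -> list[str]:
    src_p, dst_p = Path(src), Path(dst)
    man_p = dst_p / manifest
    # Hashes recorded by the previous run (empty on the first run).
    old = json.loads(man_p.read_text()) if man_p.exists() else {}
    new, copied = {}, []
    for f in src_p.rglob("*"):
        if f.is_file():
            rel = str(f.relative_to(src_p))
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            new[rel] = digest
            if old.get(rel) != digest:  # new file, or contents changed
                target = dst_p / rel
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
                copied.append(rel)
    man_p.parent.mkdir(parents=True, exist_ok=True)
    man_p.write_text(json.dumps(new))
    return copied
```

Run it twice in a row and the second pass copies nothing; touch a file and only that file gets copied. That's the whole trick.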

Comment Re:Death knell (Score 1) 361

Okay, there's no need to generalize. Let's look at what really happens in ZFS.

1. Data in ZFS is held in blocks. The checksum for each block is held in its parent block. Meaning, a corrupted block that would still pass its checksum in most filesystems would not do so in ZFS. If it's broken, the parent will tell you, and if the parent is broken, the grandparent will tell you. This works all the way up to the highest parent (known as the uberblock), which has multiple redundant copies of itself. Basically, it's HARD for shit to break in ZFS.
2. RAID and mirroring can tell you if a disk is bad, and if you're rebuilding, if a sector is bad. But the latter doesn't work unless you're scrubbing. ZFS catches errors like that while reading, and then it fixes them while it's sending you the good data.
3. Let's say you have eight 1TB drives in a RAID-6 configuration. Let's say you have about two terabytes of data across the whole thing. Let's say a drive fails. You replace it. RAID has to rebuild the entire drive; ZFS only has to rebuild the data that drive actually held.
4. Data in ZFS is held as a tree. This can make some things slower (possibly why ZFS tends to aggressively cache) but it also means that stuff like snapshotting is built into the filesystem and cheap as hell to do. You want to snapshot your OS's root folder before a risky patch? Go ahead! If anything breaks, you can roll back instantly.
4.a. Feel free to correct me on this, as I've never tested it, but as far as I've read, ZFS is probably the only filesystem that could survive an rm -rf / and a (single) accidental dd at the same time, the latter under a RAID-Z2 config.
5. Cloning is just as easy as snapshotting. Say you have a developer account or VPN account template. The files you start out with only need to be held once on your hard drive. Only edits to those files result in new data being stored.
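Point 1 is easy to demo in a few lines. This is a toy Merkle-style sketch (Python, invented names, nothing like ZFS's actual on-disk layout, which stores checksums in parent block pointers): flip one bit in any child block and verification against the stored tree fails.

```python
# Toy illustration of parent-held checksums: the "tree" is every child's
# hash plus a root hash over all of them (the "superblock" of this toy).
# Corrupt any block and verification fails somewhere up the chain.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_tree(blocks: list[bytes]) -> list[str]:
    """Return each child block's hash plus a root hash over all of them."""
    child_hashes = [h(b) for b in blocks]
    root = h("".join(child_hashes).encode())
    return child_hashes + [root]

def verify(blocks: list[bytes], tree: list[str]) -> bool:
    """Recompute the whole tree and compare against the stored one."""
    return build_tree(blocks) == tree
```

In real ZFS the checksums live in the block pointers and there are many levels, but the consequence is the same: a block can't silently lie about its own contents, because its parent holds the checksum.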

Basically, ZFS benefits from the fact that both hard drives and RAM are getting cheaper. You could put together a system with 8 gigs of RAM (more if the motherboard supports it) and eight terabyte drives (plus a spare) for under a grand. That's 6 terabytes of lightning-fast (about 120 megs a second, I'd imagine), absurdly reliable storage (it's statistically nearly impossible for the same block to be broken across two drives, and unlike RAID, ZFS won't stop rebuilding if that happens; it'll just report which files were affected).

It's essentially a filesystem designed for 2009 (or whatever year this happens to be for at least the next 10), and people who need a ridiculously good filesystem on a budget.

Did I forget to mention that adding space is only limited by your hardware (e.g. available SATA ports, or available PCI slots for new SATA cards)? Or that if a drive gets disconnected (say, you have them all on a USB connection) and you reconnect it, ZFS only adds the data written since it was disconnected rather than rebuilding the whole drive? (Which is what RAID does, mostly because it doesn't know anything about the underlying filesystem.)
