Comment Re: Do you know how easy it is to make that stuff? (Score 1) 421

If you think Smirnoff makes good vodka that's worth spending money on, then I'm sorry to say that you're the chump. Smirnoff is swill, and you can taste it in anything you mix it with. You shouldn't be able to taste vodka; that's kind of the point of vodka. If you can taste it, they screwed up somewhere. And you can definitely taste Smirnoff ... especially the Blue Label.

A great way to taste test vodka is to stick it in the freezer overnight (all vodka should be refrigerated, as should most tequilas). Then take a couple shots the next day, letting the liquid linger in your mouth a bit. If you cringe, snort, sniff, cough, or spit, it's crap vodka. And Smirnoff will make you do all of that.

Sure, if you just want to get smashed, Smirnoff will do. But if you want to actually enjoy your drinks, you'll avoid Smirnoff.

Comment Re: Do you know how easy it is to make that stuff? (Score 1) 421

There's a very big difference between a $6 bottle of random vodka on the back shelf of a gas station in butt-fuck-nowhere Iowa, and a $40 bottle of Grey Goose. One you can taste no matter how diluted you make the drink; the other you won't taste no matter how strong you make the drink.

Granted, not all $40 bottles of vodka are better than the $6 bottle. Just as not all $40 bottles of whiskey are better than the $6 bottles. But some most definitely are!

Comment Re:Don't worry actors (Score 1) 360

He also had issues with scene length.

4-6 were ok. Scenes were fairly long, with decent amounts of dialogue and/or action in them. And the transitions between scenes were fairly smooth.

1-3 were horribly short (30-45 seconds at most, it felt like), with crazy transition effects. The action jumped around so quickly that most of the time you didn't really know where you were or what was going on. It was like watching a PowerPoint presentation where the class assignment was "how many transition effects can you squeeze into a 2 minute presentation".

Comment Re:Only 100 meters (Score 3, Informative) 71

Only 100 meters of range, and it uses 100 MHz of spectrum. Most carriers in North America are lucky to have 10-20 MHz of contiguous spectrum, and maybe 40 MHz of total usable spectrum in a given area. Good luck finding 100 MHz of spectrum to use anywhere outside lab conditions.

Would be nice if they worked on increasing the number of bits that can be transferred per MHz of spectrum, instead of increasing the amount of spectrum required to send the bits.
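To put rough numbers on that: channel capacity is basically bandwidth times spectral efficiency (Shannon's C = B * log2(1 + SNR)), and spectral efficiency is capped by the SNR your radios can manage, so the easy way to post a big throughput number is to grab a wider channel. A quick back-of-the-envelope sketch (my own numbers; the 20 dB SNR is an illustrative assumption, not anything from the article):

    #include <stdio.h>
    #include <math.h>

    /* Shannon capacity in bits/s: C = B * log2(1 + SNR) */
    static double capacity(double bandwidth_hz, double snr_linear)
    {
        return bandwidth_hz * log2(1.0 + snr_linear);
    }

    int main(void)
    {
        double snr = pow(10.0, 20.0 / 10.0);   /* assume 20 dB SNR */
        double widths_mhz[] = { 10.0, 20.0, 40.0, 100.0 };

        for (int i = 0; i < 4; i++) {
            double bw_hz = widths_mhz[i] * 1e6;
            printf("%6.0f MHz -> %7.1f Mbps at %.2f bits/s/Hz\n",
                   widths_mhz[i], capacity(bw_hz, snr) / 1e6,
                   capacity(bw_hz, snr) / bw_hz);
        }
        return 0;
    }

Going from 20 MHz to 100 MHz is a straight 5x in throughput. Getting that same 5x out of the original 20 MHz would need roughly 33 bits/s/Hz, which works out to something like 100 dB of SNR - which is why nobody does it that way, and why the lab demos always reach for more spectrum instead.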

Comment Re:Linux was better when there was little funding. (Score 1) 95

Unless you used an Amiga or MacOS, if one application played a sound, that was it - nothing else could play a sound until it finished. (MacOS and the Amiga had software mixers, so you could listen to music AND hear application-generated sounds; you could still use exclusive mode if you needed it.)

FreeBSD 4.x, also from the 90s, allowed you to play multiple sounds simultaneously. It used the same OSS code that Linux used ... but FreeBSD enhanced it to support features Linux never did. Unfortunately, Linux devs continued with their NIH syndrome and came up with ALSA as a fix for this non-issue. Even that didn't do all the things OSS did on FreeBSD, and eventually led to the development of the horrid PulseAudio (why fix the foundation when we can just paper over the top?). Other than a few network- and Bluetooth-related things, PA still doesn't work as nicely/smoothly as OSS on FreeBSD.

Fixing "exclusive sound" issues on Linux shouldn't have required a 10+ year commitment; but nobody wanted to fix OSS-on-Linux.

And your networking options were... single. You either had Ethernet or a modem, and only one IP per host. And rarely did you move - I mean, if you were on Ethernet, it was assumed you were on the same network permanently, or at least that changes were rare.

Been running wireless on laptops since the days of the Orinoco Silver and Orinoco Gold PCMCIA cards (aka before 802.11b). Windows 9x and FreeBSD never had issues with them. Plug the card in, dhclient runs, you have Internet access. Remove the card, connect the Ethernet cable, dhclient runs, and you have Internet access. Moving between networks would (rightly so) drop running connections, but everything worked. It did require a bit of manual configuration for the wireless side of things, but that was all in a single configuration file and easy to manage. And it got even easier in the early 802.11g days with the advent of wpa_supplicant.conf.
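For reference, that "single configuration file" amounted to something like this (the SSID and passphrase here are obviously made up, and the exact knobs varied a bit between FreeBSD and Linux):

    ctrl_interface=/var/run/wpa_supplicant

    network={
        ssid="home-ap"
        psk="not my real passphrase"
    }

Point wpa_supplicant at that file, let dhclient do its thing, and you were online.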

Again, Linux devs and their NIH syndrome went through multiple different wireless stacks and multiple different ways to configure things, and it was a mess! Each wireless driver included its own wireless networking stack, for Pete's sake. And what you configured to work with one driver wouldn't work with the next. There was no centralised configuration file for wireless on Linux, although Debian got close with their wpa_supplicant extensions to /etc/network/interfaces. Once things were working nicely on Linux, the desktop devs came down with their own case of NIH and had to wrest control of wireless from the CLI guys, coming up with NetworkManager. And then WiCD. And a bunch of other alternatives to them. Now you couldn't configure wireless (or any networking) until after you logged into the GUI! (Unless you jumped through some hoops. Eventually, that was fixed.)

Users haven't gotten more complicated; nor have use-cases. But Linux desktop developers have certainly developed more complex cases of NIH and are constantly re-writing everything "just because", thus over-complicating things. Things are not better now than they were 15 years ago on the Linux desktop. Especially not compared to other OSes out there. Even the other F/OSS OSes.

Comment Re:Not really for mastery ... (Score 1) 75

it's slow unless you throw massive hardware at it,

Ran my home file server / desktop PC on a 32-bit Intel P4 with only 2 GB of RAM. Booted off a pair of 2 GB USB sticks (/ and /usr installed there, RAID1 via gmirror), and a 4 GB USB stick for L2ARC, while using 4x 160 GB SATA1 hard drives in a raidz1 vdev. Ran XBMC locally to catalogue all the shows into MySQL, and then to stream the videos to the other two XBMC systems in the house (10/100 Ethernet). No issues watching 480p and 720p shows while others were downloading.

Later, migrated to 4x 500 GB SATA2 hard drives in two mirror vdevs, running the same XBMC setup. No issues there, and was even able to remove the L2ARC device as the pool was now faster than the cache.

This past summer, I migrated the system to an AMD Phenom-II X4 system with 8 GB of RAM and a zfs-on-root setup using 1 TB SATA3 drives (no USB sticks anywhere). Switched to a 64-bit install at this point (no changes to the pool). Switched to Plex everywhere instead of XBMC, and added a bunch of extra services like CUPS. Also does real-time transcoding for the little one's tablet (she uses Plex on the tablet).

No issues to report. No performance issues, even when multiple torrents are downloading while we're watching shows on the tablet and the TV. The pool migrated along between each upgrade (with the exception of the first raidz->mirror conversion that used zfs send/recv). And it's all backed up to an external 3 TB drive via zfs send/recv.
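For anyone wondering what those layouts actually look like to set up, the commands are roughly along these lines (pool and device names are made up, and these aren't the exact commands I ran):

    # original layout: 4-disk raidz1 with a USB stick as L2ARC
    zpool create tank raidz1 ada0 ada1 ada2 ada3
    zpool add tank cache da0

    # later layout: two mirror vdevs in the same pool
    zpool create tank mirror ada0 ada1 mirror ada2 ada3

    # backing the whole pool up to an external drive
    zfs snapshot -r tank@backup
    zfs send -R tank@backup | zfs receive -F backup/tank

That's pretty much the entire admin surface for everything described above.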

ZFS is only as complicated or as "slow" as you make it.

Comment Storage Mastery 2 will cover ZFS (Score 4, Informative) 75

I said that this covers *almost* everything you need to know, and the big omission here is ZFS. It shows up, but only occasionally and mostly in contrast to other filesystem choices. For example, there's an excellent discussion of why you might want to use FreeBSD's plain UFS filesystem instead of all-singing, all-dancing ZFS. (Answer: modest CPU or RAM, or a need to do things in ways that don't fit in with ZFS, make UFS an excellent choice.) I would have loved to see ZFS covered here - but honestly, that would be a book of its own, and I look forward to seeing one from Lucas someday; when that day comes, it will be a great companion to this book, and I'll have Christmas gifts for all my fellow sysadmins.

That's planned as another book in the Storage Mastery series (with a possible third on networked storage). But whether that book is written depends on how well this first book is received and what his schedule is like for other books. If the first book doesn't sell enough or garner enough attention, then it will be the last one in that series.

There's a bunch more detail on Michael's blog about this.

Comment Re:Technologically maybe... (Score 2) 93

Going from an IBM PC-compatible system with a 4 MHz CPU and a Hercules Monochrome graphics chipset (16 shades of amber FTW!) over to a friend's house where he had a dual-speed external CD-ROM playing Wing Commander 3 with FMV was a quantum leap in computing power (I think it was a 486?).

Going from that IBM PC-compatible system to a Compaq Presario all-in-one with a 486sx2 66 MHz CPU, VGA graphics, onboard SB16-compatible sound, and a 19.2K modem was the next quantum leap. Using the computer to browse BBSes and talk with people over FIDOnet around the world blew my teenage mind.

Going from a SoundBlaster 16-compatible sound chipset to a Gravis Ultrasound ACE (and all the extra cables that required) in my own 486dx4 133 MHz system was another quantum leap in computing power. Playing MOD tracker files and MIDI files off the Internet just blew my mind. A sub-512 KB file that sounded like a full symphony of real instruments? Mind ... blown!

Going from a 19.2 K modem to a K56Flex modem (the non-standard 56.6 Kbps setup) and connecting to a K56Flex modem pool at the local college and hearing those extra beeps at the end, and actually connecting at 53.3 Kbps was mind-boggling. Under 10 minutes to download 1 MB (or something like that)! Web browsing was now a thing!

But storage hasn't really blown me away. Sure, going from dual 5.25" floppies (under a MB of storage) to single 3.5" floppies (over a MB of storage) to CD-R/RW to DVD-R/RW to USB flash stick was interesting, but not mind-boggling. Going from a 40 MB HD to a 20 GB HD to multi-TB HDs is awesome, but not "mind ... blown" territory. Progress has been steady over the past 20 years without any real giant leaps.

About the only thing in storage that has really amazed me is ZFS and how easy it makes managing storage systems in the 10-100 TB range with disks spread across multiple JBOD chassis. But even that was done in a steady progression over the past 7 years or so, without any real giant leaps.

Maybe if MRAM, RRAM, memristors, and all that other non-volatile RAM stuff actually appears, then storage will be exciting again. Otherwise, it'll just continue to plod along, slow and steady, with capacities increasing each year, prices slowly coming down, and speeds inching up. Storage is actually one of the least exciting areas of technology right now.
