
Comment Re:I agree that programming is not for geeks (Score 1) 317

The problem with programming that most people will face is the level of detail and accuracy a computer requires. The human mind has a remarkable ability to understand someone even when they use the wrong word, and most of the on-the-job training I've gotten came from someone who couldn't have reduced the task to an exact list of step-by-step instructions without pictures or demonstrations. They rightfully expect people to fill in the gaps and know what they meant. Computers don't have that feature.

I do think small-scale programming would be useful and doable for a lot of professionals. Excel spreadsheets (or the equivalent) would generally be a good place to start for most. You can chain together functions and logic, and possibly even do some looping (at least by copying and pasting the operation). Something like awk or bash might be useful for someone getting a little more advanced. However, complex or open-ended algorithms quickly lose almost everyone. Even writing a binary tree and putting a simple algorithm in it baffled my second-year CS classmates at a difficult college.
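To illustrate the kind of small-scale scripting I mean, here's a toy sketch (the data and column names are made up): totaling one column of a CSV export, the sort of job a spreadsheet formula or a short awk one-liner also handles.

```python
# Sum one column of a CSV export -- spreadsheet-formula territory.
import csv
import io

# Stand-in for a real exported file.
data = io.StringIO("name,hours\nalice,8\nbob,6\ncarol,7\n")
total = sum(float(row["hours"]) for row in csv.DictReader(data))
print(total)  # 21.0
```

Chaining a filter or a second aggregate onto this is still within reach for a motivated professional; it's the open-ended algorithm design beyond this level that loses people.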

Comment Re:Interesting theory (Score 1) 207

Move to Kansas City. We're getting 1Gbps synchronous connections that give dedicated bandwidth to a router on the Internet backbone for $70/month. I suppose it might be the most oversold access, since all the servers on AT&T and Verizon have to squeeze through a 100Gbps backbone (per company), limiting them to 100 full-speed connections. There's nothing Google, the 1Gbps ISP, can do about other companies being lousy, except force them to change or die.

Comment Re:Can you take one or two class during the day? (Score 1) 433

When I went to Rochester Institute of Technology, most of my CS classes were at night, including all the upper-level ones. I had scheduling conflicts that put me in class from 8-10 PM, and hated staying up till dawn so I wouldn't fall asleep during such late cryptography lectures. They also graded on code style and structure, while concentrating on theory and concepts over any particular technologies. Unfortunately, I don't see a distance-ed program.

Comment I probably shouldn't feed the trolls.. (Score 1) 180

You like Apple and nice, clean UIs? I hate to break it to you, but memorizing 10 gestures and multi-key operations (e.g. option+click) is about as nice and clean as my compiz install that used no program menus or icons. Apple gives me one button. I pressed it on an iPhone because I wanted a menu, and the program someone had opened for me was gone. They wanted me to press imaginary buttons or play Pictionary to get where I wanted. HOW IS THAT CLEAN??!!! Either Apple has the least intuitive UI I've ever seen, or I'm a fool for thinking a button on my UI should operate the running program. If you had listed something like the quality hardware, the variety of shortcuts, the consistent platform, or the nice workflow (I suppose all the fanboys like it), then I could accept that or even agree with you.

Good open source has some of the best UI I've seen: Google Chrome is a release of the open source Chromium browser, versus Safari looking like my browser from 2002 and Microsoft Internet Explorer being IE; OpenOffice had better menu arrangements than MS Office (pre-2007, which put format actions in multiple menus; I haven't used newer versions much); the GIMP menu arrangement is easier to learn than Photoshop's (the counter-complaints come from people already used to Photoshop); and the Pidgin UI (formerly GTK AIM, GAIM, Gaim, and gaim through various trademark issues and AOL bullying) beat the pants off the standard AIM messenger (enough to be featured in Forbes). Ubuntu has some of the simplest and best software install/management tooling I've seen (unless clicking a single alert icon to update everything is too technical for you), where Microsoft has nothing I've seen. I think I heard Apple has something, but I don't expect it to be as flexible in adding 3rd party sources.

Before you get too out of sorts defending the UI on Reeder (a Google Reader client), you should know I've seen all of those UI elements in open source projects that predate Google Reader itself. Since you didn't say which project you're accusing of ripping off a UI, and a quick search didn't reveal Reeder's age, how do you know Reeder didn't steal the exact UI from the other software? Based on the information you provided, you leave me thinking you assume software was built in the order you found out about it, but I assume there's just some information missing.

I'm sick of snotty Apple fans who defend their brand and bash others without real information. As I've grown up, I've realized that every proprietary platform will burn me unless I keep giving them money over and over again for something I've already bought.

Comment Re:RURAL MEANS THE BOONIES !! (Score 1) 205

A lot of rural phone companies have gotten government subsidies to build out broadband service. It may not be available in the boonies yet, but the nearby villages have it. The goal is to get broadband to everyone eventually (the government's version of ASAP), so we can get rid of the old, expensive behemoth of the telephone network. Paying to run room-sized computer/electronic systems to route calls for 1,000 people seems a little foolish in 2012, when the possible traffic would fit on a 100Mbit duplex Ethernet link. You can also run VoIP over 3G broadband, where available. I've never tried it on anything slower, but 2.5G well exceeds the 8kbps (each direction) that VoIP can be compressed to, as does good dial-up.

Fixed line broadband penetration map from end of 2010:

Comment Re:RURAL MEANS THE BOONIES !! (Score 1) 205

The rural company has its hands tied by the US Government to offer a certain quality of service at a reasonable price. A big part of the problem is that some of the least-cost routing companies are illegally routing the more expensive intrastate (within the state) long distance calls out of state, creating new interstate calls using VoIP. Interstate calls are governed by the Federal government and have to be billed at a lower rate. There's enough evidence to see that it's happening, but not enough for a court case, and very few companies could do the data analysis, even if they had the data.

Further complicating the issue, in most places the rural companies are connecting to equipment owned by the big players, who are the ones profiting from the lawbreaking least-cost routing companies. In the one or two states where the equipment is owned collectively by the rural companies, the public utilities commission and whoever else would be involved don't care enough to pursue the issue.

Comment Re:Find better prospects? (Score 1) 287

Facebook uses MySQL for their main data. I can pretty well guarantee it's bigger than any Oracle install, handling 2.5 billion shares and 2.7 billion likes per day. Back in 2010, with half the current user base and far fewer constantly-connected mobile users, they were already seeing peaks of 13M queries per second, reading a peak of 450M rows per second and updating a peak of 3.5M rows per second. Reports from early December 2011 show 60M queries per second, and that number was dated even then. Oracle wouldn't scale to that size if only because of license costs (CPU licenses for how many thousands of cores?), and Oracle has way too many bugs that they're too lazy or incompetent to fix. Just for the tech support, Oracle might have to hire more people than Facebook employs for MySQL. At Facebook scale, bugs show up rather quickly, compounded by Facebook's motto: move fast and break things. If Facebook hits a bug or limitation in MySQL, it gets fixed, not documented. Otherwise, complaints show up in the twitterverse.

Their messaging system runs on HBase, which stores 6+ billion messages a day, handling a peak of 1.6M ops/sec (with compression enabled), 45% of them write ops that average 16 records across multiple column families (all as of 10/2011). That only looks small when you compare it to their MySQL system. It's still faster than Oracle's SPARC SuperCluster (the top tpmC result, at 30 million), but HBase does it on 3.5", 7200 RPM disks, with probably an order of magnitude (or two) more storage and potentially lower cost.

They also archive 500+ TB/day of web logs into a Hadoop cluster and batch process the data. It's quite a system, which uses only high-capacity, 3.5", 7200 RPM drives and treats each whole computer as the redundant unit, instead of building redundancy (RAID, power, etc.) into the nodes themselves. Oracle hasn't even attempted to compete with Hadoop, and is instead offering instructions on using Hadoop and Oracle together.

Comment Internet vs Wagon-net (Score 1) 79

With a quick search, I found a Ford Flex listed with 83.2 cubic feet of cargo space, and the dimensions of an LTO cartridge. 83.2 ft^3 / ((102 x 105.4 x 21.5) mm^3) = 10,192 tape cartridges, or nearly 25 PB using LTO-6 without compression. Google says the drive between the colleges is 19 hours and 48 minutes. Neglecting copy times, it works out to about 366 GB/s, more than 8x the speed.
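Redoing that arithmetic (all inputs are the figures quoted above), I get the same tape count and roughly 357 GB/s, in the same ballpark as the ~366 GB/s figure:

```python
# Back-of-the-envelope: how many LTO cartridges fit in a Ford Flex,
# and what data rate that implies over the 19h48m drive.
FT3_MM3 = 304.8 ** 3                 # one cubic foot in cubic mm
cargo = 83.2 * FT3_MM3               # listed Ford Flex cargo volume
tape_vol = 102 * 105.4 * 21.5        # LTO cartridge volume, mm^3
tapes = int(cargo // tape_vol)
print(tapes)                         # 10192

capacity_pb = tapes * 2.5e12 / 1e15  # LTO-6 native, 2.5 TB/tape -> ~25.5 PB
drive_s = 19 * 3600 + 48 * 60        # 19 h 48 min in seconds
rate_gb_s = tapes * 2.5e12 / drive_s / 1e9  # ~357 GB/s
```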

In reality, you can stack tapes in the passenger seat, but if you want any hope of sorting the tapes back out, you'll need to pack them much less efficiently. Using the most compact media cases I could find, I figure you can carry maybe 90 cases, each holding 36 tapes, for a total of 3,240 tapes. That drops the data to 8.1 PB per car. Best case, you have 3,240 drives at each location, reducing write time to the 4.55 hours for a single tape. Worst case, you have 270 tape drives or fewer; then the tape drives can't keep up with the network speed. Add in some time to load/unload tapes in the car, stop for gas, and read the tapes back in, and you're at 29 hours easily. Now your speed is down to 81 GB/s, or 650 Gbps. It would be a lot of work (and well over $1,000 each for the tape drives), but the station wagon wins, being almost twice as fast. The break-even time for this trip carrying 8.1 PB is 55 hours and 40.6 minutes (including read/write times).
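Sanity-checking the realistic-packing numbers (the case counts are the estimates above, not measured values): 29 hours end to end works out to about 621 Gbps, a shade under the ~650 Gbps figure, which corresponds to a slightly shorter 27.8-hour trip. Either way the wagon comfortably wins.

```python
# Realistic packing: 90 cases x 36 tapes, end-to-end time ~29 hours.
tapes = 90 * 36                     # 3240 tapes
data_pb = tapes * 2.5e12 / 1e15     # 8.1 PB at LTO-6 native capacity
trip_s = 29 * 3600                  # loading, gas, and read-back included
rate_gbps = data_pb * 1e15 * 8 / trip_s / 1e9
print(round(rate_gbps))             # 621
```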

For 3.5" hard drives, you'd get 60% more data (4TB vs 2.5TB) in a 64.7% larger package (147mm x 102mm x 25.4mm). I didn't research similar HDD cases, but I'd think you'd want bulkier packing for cushioning. LTO-6 isn't quite on the market yet, though; LTO-5 only gives you 60% of the storage, or 33.4 hrs to beat this Internet speed. Until we have helium-filled drives, hard drives would probably land somewhere in the middle, closer to LTO-6.
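The package-size comparison checks out; from the nominal dimensions quoted above I get 64.8%, so the 64.7% was presumably truncated rather than rounded:

```python
# Volume of a 3.5" HDD vs an LTO cartridge, nominal dimensions in mm.
hdd = 147 * 102 * 25.4        # 3.5" hard drive
tape = 102 * 105.4 * 21.5     # LTO cartridge
print(round((hdd / tape - 1) * 100, 1))  # 64.8
```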

Comment Re:Surprised? (Score 1) 403

I have real tools: mrxvt, bash, vim, g++. They all ran well on my college roommate's 486 DX2-66 in 1997-98 (except it was a default xterm then), and aside from missing Unicode support, they still run well enough to build the best software in my market. Compiling was a little slow, but good file splits and stable .h files mostly kept recompiles in check. For me, the main thing wrong with a new $500-$600 laptop is that the resolution and graphics aren't any better than a 13" XPS. I don't yet trust Intel to keep supporting older graphics in newer kernels.

Comment Re:LHC data sets, eat your heart out (Score 1) 79

Facebook analyzes and stores roughly 500 TB a day (Apache web logs), just to know how their service is being used. I know that data is probably an order of magnitude easier to analyze, but efficient cluster and distributed computing does wonders. Telescope data would fit the paradigm quite well, probably even playing nicely with the uber-simple MapReduce framework.

Google figured out how to get untrained n00bs to classify images: they invented the Google Image Labeler game. IIRC, you would be paired up with someone to describe an image for 2 minutes, and for each keyword both people used, both would get a point. Google ran leaderboards for things like all-time high score, highest score per day, highest score per game, etc. It was surprisingly successful, and the fruits of the game are quite evident when comparing Google's image search to Bing's.
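The core mechanic, as I remember it, is just label agreement between the paired players (the scoring details here are my recollection, not Google's published rules):

```python
# Toy sketch of the Image Labeler mechanic: both players earn a point
# for every label they both typed for the same image.
def round_score(labels_a, labels_b):
    """Points awarded to each player for one image."""
    return len(set(labels_a) & set(labels_b))

print(round_score(["dog", "park", "frisbee"], ["dog", "grass", "frisbee"]))  # 2
```

Because a label only pays off when a stranger independently chooses it too, the incentive pushes players toward accurate, commonly-agreed descriptions, which is exactly what image search needs.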

Comment Re:The point (Score 1) 79

Kansas City is already being wired with 1Gbps dedicated connections to people's homes. A fiber line runs from the home to a "fiber hut" (a room-sized switch or router), and the fiber hut sits on the Internet backbone, with no aggregation in between. I saw synchronous 900Mbit(ish) bandwidth test results on a screen at the Google Fiber Space. Even Verizon's and AT&T's 100Gbps network backbones are going to fill up pretty fast once this rolls out to more customers and Google starts installing for its second rally.

The nice thing about streaming video/TV on a 1Gbps network is that an HD movie only uses bandwidth for 30 seconds to a few minutes tops; after that, the customer is mostly idle while watching. It's not like current broadband, where a single higher-bitrate video service (e.g. VUDU) can saturate a customer's connection for an hour or two-plus. Eventually, it would be smart to employ some sort of consumer-side caching and P2P sharing for data-intensive services like movies. There's no sense wasting backbone bandwidth when neighbors already have the data on the local network. The biggest obstacle would be cramming the storage requirements into a small, cheap set-top player.
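For the burst claim above, the rough math (the movie size is my assumption, not from the post): a ~4 GB HD movie at a full 1 Gbps takes about half a minute to transfer.

```python
# Time to pull an HD movie over a saturated 1 Gbps link.
movie_gb = 4                      # assume a ~4 GB HD movie file
seconds = movie_gb * 8e9 / 1e9    # bits transferred / (1 Gbps)
print(seconds)                    # 32.0
```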

Comment Re:in that case.. (Score 1) 210

Why not btrfs and backups?

BTRFS is not stable! I just lost my /home and all its snapshots, two days ago.

"You should keep and test backups of your data, and be prepared to use them."

Yes I know about the latest tools. In the end I had to do a btrfs-restore.

Stable or not, Oracle Linux has already declared Btrfs "production-ready".

Comment BtrFS (Score 1) 210

I would keep a close eye on Btrfs, which is currently supported by SUSE and Oracle Linux (based on RHEL), and stick with whatever you have until it's ready (if you have nothing, go with the default). I don't know about SUSE, but Oracle is already calling Btrfs "production-ready" (if their DB is any indication, keep lots and lots of backups). I suspect a lot of the harder-to-track bugs revolve around things like power loss, which aren't common on production servers, so Oracle's claim might not be too far off.

It has a lot of nice features (LVM-type features, data mirroring, subvolumes, compression -- zlib and LZO, dynamic inodes, data and metadata crc32c checksums, SSD support, snapshots, seed devices, efficient incremental backup, automatic background repair of mirrored files), and the list is growing (background defragmentation, RAID 5/6 on files or objects, more checksum options, more compression options -- e.g. zippy, probably fewer compression penalties, automatically moving hot data to faster devices, online file system check). The LZO compression can be quite fast depending on usage patterns, and with a little work it can be turned on or off per folder (e.g. /var or /home). You can find benchmarks on file systems under different loads online.
