
Comment Re:Yay (Score 1) 2987

Shockingly enough, in countries where there are strict gun laws, there appear to be fewer shootings by criminals than in the U.S.

While non-gun petty and violent crime has risen as the number of firearms in private hands has decreased.

This is the simple fact that opponents of gun control cannot deal with.

Fewer guns mean less gun violence.

And a fact that proponents of gun control in the U.S. ignore is that drunk driving kills more people each year than firearms, by about 15%, and vehicle crashes in general kill about 4 times as many as guns.

In 2010, 8,874 people were murdered with firearms.
http://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2011/crime-in-the-u.s.-2011/tables/expanded-homicide-data-table-8

In 2010, 10,228 people were killed in alcohol-impaired driving crashes.
http://www.cdc.gov/motorvehiclesafety/impaired_driving/impaired-drv_factsheet.html

In 2009, 35,900 people were killed in motor vehicle crashes.
http://www.census.gov/compendia/statab/2012/tables/12s1103.pdf

Yet I don't see the Dianne Feinsteins of the world on a mad rush to ban alcohol or automobiles. If the push to ban guns were about the deaths of citizens, young children included, then we'd have banned cars long ago, or at least passed laws keeping people who lack the coordination and judgment to drive safely off the road, and we'd have banned alcohol outright, again (with the same results).

Pull the blinders off, people. Stop drinking the Kool-Aid. The push to ban guns is about the political ideology of the left, not about saving lives. Always has been, always will be. Mass shootings like this are simply a timely opportunity to push that ideological agenda again, in the hope that the outrage will put enough wind in their legislative sails to pass something.

The 2nd Amendment to the U.S. Constitution guarantees us the right to bear arms. And it places no limitations on the types of arms.

There is no guarantee in the Constitution of a right to own or drive an automobile, or to consume alcohol.

Seems to me that if the concern were truly for dead children, as is being claimed here, then we'd surely embark on legislation to once again ban alcohol, and, if we really wanted to cut down on deaths, ban automobiles.

Comment Re:If there was a Bad at Math Map... (Score 3, Interesting) 1163

This looks like a good place to post this. I took the data from this Economist article and broke it down by red vs. blue state according to this map. This is what I found:

[snip]

What I found is that you have no clue how to do data analysis and have concocted some bogus correlations to push a liberal agenda. In 1984 and 1972 all states were red but one. That alone sinks your bogus analysis and agenda, but I'll add some detail. I'll focus on a prime example, one we can all relate to, of why these federal spending "deficits" into states exist and why they have nothing to do with which presidential candidate carried the state in the last election, i.e. whether the state is "blue" or "red". Since 1968 New Mexico has voted Republican 7 times and Democrat 5 times. It is blue after the 2012 election and was blue in 2008, Obama winning the state easily both times. In 2004 it was red, when G.W. Bush won the state by a gnat's hair. New Mexico has the highest federal spending to taxes paid ratio of any state, $2.03 for each $1 in taxes as of 2005, and has been roughly equally blue and red since 1968. Why such a deficit?

* a population of only 2 million people
* Los Alamos National Laboratory: ~$2.2bn/year, with $100+ million each year on compute hardware
* Sandia National Laboratories: ~$2.1bn/year, with $100+ million each year on compute hardware
* 3 US Air Force bases (Holloman AFB, Kirtland AFB, Cannon AFB): many $bn/year, no time to research exact $$
* White Sands Missile Range: unknown $
* protection and management of 6 National Forests in the state: unknown $
* many other federal government facilities

The reasons for these federal spending "deficits" and "surpluses" have nothing to do with red and blue. New Mexico has been blue in 5 of the last 6 elections, and red in 6 of 6 from '68 through '88. New Mexico's current 2:1 ratio and the state's growth are directly linked to a single project in the 1940s called "Manhattan". The first nuclear bomb test, of the Trinity device, destroyed nothing in New Mexico but the tower upon which it was perched and some wooden shacks. But it was nuclear fertilizer for the state, spurring population and economic growth for decades, with nearly all of the money coming into the state economy for 50 years from Uncle Sam for nuclear weapons research.

To understand these federal spending "deficits" and "surpluses" into the states you must look at each state individually. It usually boils down to how many federal facilities and employees are in a state, and/or defense/govt contractors, versus its population. California has a great number of military bases, defense contractors, govt labs, etc, but the state's population is over 1/10 of the entire US: 37 million people, greater than the populations of Canada and 160 other countries. Thus private sector output and federal taxes are greater than the dollars Uncle Sam is injecting into the state's economy.

Comment Re:Shocking (Score 1) 360

The only thing this does is that they can't have the same advertisements follow me around wherever I go.

Which is really damn annoying. Newegg is really bad about this targeted ad bullshit. I really like Newegg, but I'm getting tired of hitting various news and other non-product websites and seeing an ad for a product I recently viewed on Newegg. If I'm going to buy it, I'll buy it. Having something I merely looked at but am not going to buy shoved in my face on a dozen different websites simply makes me want to take my business elsewhere. I could probably defeat this by deleting all my Newegg cookies, but then I'd also lose the cookies I need to navigate the site in the manner I'm accustomed to.

Comment Re:Why would you even care? (Score 1) 317

has much stigma due to Hans Reiser

Really? You can't just judge it based on its features and performance?

So if Linus Torvalds ever commits a crime, you'll stop using Linux?

There was a vast treasure trove of useful human biomedical data, produced by methods 100x beyond what we would call "inhumane torture", captured from both Germany and Japan at the end of WWII. Every doctor and researcher in the Western world refused to use this data, because it was so tainted by who produced it and by the methods used to obtain it. I.e. infecting perfectly healthy people with things like plague, smallpox, malaria, etc. Shooting, stabbing, and cleaving people at precise body locations to see how long it took for them to die of blood loss or infection. Then devising procedures and medicines their soldiers could use on the battlefield to stay alive long enough to make it to a field hospital, etc.

The stigma attached to the Reiser filesystems differs from that of the data obtained via atrocities only in the scale of the crime. Reiser killed and probably tortured one person. The Nazis and the militarist Japanese killed millions. Given Reiser's personality and vindictiveness, he almost certainly tortured his wife before taking her life. Whether he committed these acts upon one victim or millions is irrelevant.

To judge this work solely on technical merit and use it if found superior for your given workload is a purely technical decision. However, as a human being and member of society, you'd clearly have to be a sociopath to actually use it. Were the remaining devs to change the name, by forking it or whatever means, and cleanse it of Hans' code as best they can (rewrites, etc.), it might begin to lose the stigma. But given that most other Linux filesystems are now better than Reiser in every way, why bother with it?

Comment Re:Time to let it go... (Score 1) 317

What's also key is that the better points of ReiserFS, such as journaling, have migrated into other file systems. The experiment wasn't a failure, it was a darn good idea that has led to an overall improvement in reliability and speed of other file systems.

WTF have you been smoking? JFS1 and XFS predate ReiserFS by 10 and 7 years respectively. Both are journaling filesystems. There are probably mainframe journaling FSes that predate these. In short, Hans borrowed the journaling idea from others, and the same goes for most of his FS concepts. Hans had no original filesystem concepts of his own, none that were ever implemented or proven any good in production. Optimizing a filesystem for high performance with small files isn't a concept, but an execution and tuning detail.

All filesystem developers borrow ideas from prior work, and Hans was no different than others in this regard. In fact there is frequent cross-pollination of concepts. Case in point: Ted Ts'o borrowed from the allocation group concept in XFS and implemented something similar in EXT4. Dave Chinner borrowed a concept from the journaling mechanism in EXT3 and implemented something similar in XFS. Note the praise Hans piles on the XFS devs for schooling him in delayed allocation, which prevents fragmentation (AIUI, Reiser3 was pretty horrible about fragmenting files and free space):

From: http://www.osnews.com/story/69
Hans Reiser: This is an area we are still experimenting with. We currently do what ext2 does, and preallocate blocks. What XFS does is much better, they allocate blocknrs to blocks at the time they are flushed to disk, and this allows a much more efficient and optimal allocation to occur. We knew we couldn't do it the XFS way and make code freeze for 2.4, but reiser4 is being built around delayed allocation, and I'd like to thank the XFS developers for taking the time to personally explain to me why delayed allocation is the way to go.

Hans Reiser was no visionary. Like all kernel developers, he borrowed from others' ideas, improving on some. Note that Reiser4 was to be built around delayed allocation. This interview was published in 2001. It's 11 years later and still no Reiser4. On the other hand, we've seen constant full-blown development of XFS and EXTx in these 11 years, by large teams of dedicated developers. Both XFS and EXT4 beat the small file performance of Reiser3 by a large margin, and we've seen the introduction of a copy-on-write FS, BTRFS, which doesn't even use a journal. Though the true performance of a mature BTRFS has yet to be demonstrated, as has its level of fragmentation, which is sure to be an issue with COW.
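For anyone wondering what delayed allocation actually buys, here's a toy model in C. It's purely an illustration of the allocation ordering (no real filesystem works at this level of simplicity, and all the names are mine): with eager allocation every buffered write grabs the next free block immediately, so two files appended in alternation interleave on disk; with delayed allocation, block numbers are handed out in one contiguous run per file at flush time.

#include <stdio.h>
#include <stdint.h>

/* Toy allocator: hands out block numbers in increasing order. */
static uint64_t next_free_block = 0;

static uint64_t alloc_blocks(uint64_t count)
{
    uint64_t first = next_free_block;
    next_free_block += count;
    return first;
}

/* Eager allocation (the ext2-style preallocation Reiser describes):
 * every buffered write is assigned a block number immediately, so
 * files appended in alternation end up interleaved on disk. */
static void eager_append(const char *file)
{
    printf("%s -> block %llu\n", file,
           (unsigned long long)alloc_blocks(1));
}

/* Delayed allocation (the XFS scheme): writes only accumulate dirty
 * blocks in memory; block numbers are assigned in one contiguous
 * run when the file is flushed to disk. */
struct dirty_file { const char *name; uint64_t dirty; };

static void delayed_append(struct dirty_file *f) { f->dirty++; }

static void flush_file(struct dirty_file *f)
{
    uint64_t first = alloc_blocks(f->dirty);
    printf("%s -> blocks %llu..%llu (contiguous)\n", f->name,
           (unsigned long long)first,
           (unsigned long long)(first + f->dirty - 1));
    f->dirty = 0;
}

int main(void)
{
    puts("eager allocation, two files appended alternately:");
    for (int i = 0; i < 3; i++) {
        eager_append("a");          /* a gets blocks 0, 2, 4 */
        eager_append("b");          /* b gets blocks 1, 3, 5 */
    }

    next_free_block = 0;
    puts("\ndelayed allocation, same write pattern:");
    struct dirty_file a = { "a", 0 }, b = { "b", 0 };
    for (int i = 0; i < 3; i++) {
        delayed_append(&a);
        delayed_append(&b);
    }
    flush_file(&a);                 /* a gets blocks 0..2 */
    flush_file(&b);                 /* b gets blocks 3..5 */
    return 0;
}

The same write pattern produces fragmented files under the first scheme and contiguous files under the second, which is exactly the win Hans credits the XFS devs for in the interview above.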

Comment Re:Not So Fast On The Pointers (Score 1) 326

I'm going to have to disagree with Linus on that one. When I'm coding in a mixed group of people that includes old farts and interns and the performance isn't that critical, I'll do the former over the latter...

You're not disagreeing with Linus' point here. You're referencing an entirely different scenario.

You: When performance isn't critical
Linus: He's always working on the kernel, as are those whose pointer code he called 'sad' here. Performance is *always* critical with kernel code.
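For context, the exchange is presumably about Linus' well-known linked-list example: removing an entry by walking a separate 'prev' pointer versus walking a pointer-to-pointer. A minimal sketch of the two styles (my reconstruction for illustration, not Linus' actual code):

#include <stdio.h>

struct entry {
    int value;
    struct entry *next;
};

/* The style that got called 'sad': walk the list with a separate
 * 'prev' pointer, then special-case removal of the head entry.
 * Both versions assume 'victim' is actually in the list. */
static void remove_entry_prev(struct entry **head, struct entry *victim)
{
    struct entry *prev = NULL, *cur = *head;

    while (cur != victim) {
        prev = cur;
        cur = cur->next;
    }
    if (prev)
        prev->next = victim->next;
    else
        *head = victim->next;       /* the head is a special case */
}

/* The two-star version: walk a pointer to whichever pointer points
 * at the current entry. The head pointer and every 'next' field go
 * through the identical code path -- no special case. */
static void remove_entry(struct entry **head, struct entry *victim)
{
    struct entry **pp = head;

    while (*pp != victim)
        pp = &(*pp)->next;
    *pp = victim->next;
}

int main(void)
{
    struct entry c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
    struct entry *head = &a;

    remove_entry(&head, &a);        /* removing the head: no branch needed */
    remove_entry_prev(&head, &c);   /* interior/tail removal works too */
    printf("remaining: %d\n", head->value);   /* prints 2 */
    return 0;
}

In kernel code the uniform two-star path is the kind of cleanliness Linus was talking about; in a mixed group of old farts and interns where performance isn't critical, the 'prev' version is arguably easier to follow, which is the GP's point -- but that's a different scenario from the one Linus was addressing.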

Comment Re:So? (Score 1) 946

And this is why graphics support will always be a third class citizen on linux.
Congratz!

's/always/currently/g'

Larrabee may have failed. But MIC survives. If demand exists (big IF), I can imagine, in the not so distant future, a startup discrete graphics card company, or Intel itself, bringing a 96 to 128 core MIC-type graphics card to market, with wholly open source drivers. The performance wouldn't be nearly as good as a dedicated GPU from AMD/nVidia, but it would be 'good enough' for OpenGL Linux applications, and anyone could work on the driver code. If the board design incorporated hardware support for something like either 3dfx's original scan line interleaving or nVidia's Scalable Link Interface, in a 2 or 4-way setup, the performance *could* be pretty phenomenal, and open source. I'd think 512 MIC cores with a good scalable driver would yield impressive 3D performance indeed.

Comment Re:Lucky bastards (Score 1) 296

For home users, you have to wonder if they're just being cheap. If they can't fork out for an OS upgrade once a decade, what else will they be like on the consumer side?

For many it has nothing to do with money, but usability. Post-WinXP, Microsoft has gone out of their way to break the UI functionality that actually works well. They spent hundreds of millions of dollars researching and designing the Start Menu for Windows 95. Then NT and W2K got the same interface, and Windows XP has an option to re-enable it. It is intuitive, works well, and everyone became accustomed to it. Then, in Microsoft fashion, they changed it simply to make it different. The reason? People won't pay for an upgrade if they feel they're not getting something for their money. And how do you convey "something new" better than changing the UI function that people use more than any other?

In the US (and many countries) the clutch is on the left, the brake in the center, the gas pedal on the right, and the shifter on the right. For automatic transmissions the clutch is deleted. This layout hasn't changed since the 1920s. It works well and is universally understood.

Like the driving controls in automobiles, the UI "program menu" is something that should never need to be fundamentally changed. All changes to it since W2K have been detrimental, not beneficial. They have been made for profit reasons, not usability reasons. Microsoft Windows is a utilitarian tool, just like an automobile. What if screw manufacturers suddenly switched to counterclockwise threads when screws have tightened clockwise for over a century? Wall clocks? How about changing the road system overnight so we now drive on the left instead of the right?

For those who will inevitably, ignorantly, reply that "technology must move forward and that requires change", you miss the salient point that change is only progress when it makes something better, more usable, more intuitive. None of Microsoft's Start Menu changes since W2K have done so. They've done the opposite.

Comment Re:Get Hardware RAID (Score 1) 192

Rubbish. The default and recommended RAID schemes for two of the biggest storage vendors on the planet (EMC and NetApp) are both parity RAID.

You're failing to recognize a key characteristic of EMC/NetApp arrays: persistent cache. SAN heads with 8GB to 512GB of persistent cache that can ack an fsync the moment data hits cache can certainly hide much of the RMW latency from transactional applications, and, to a degree, the long rebuild times of their 4-8 drive parity array building blocks. EMC and NetApp arrays have massive quantities of such cache, as do the likes of the other SAN heavy hitters: IBM, SGI, HP, Oracle, etc.

Do note however that many organizations using the enterprise SAN heads with large parity RAID pools for generic bulk storage do often create separate RAID10 arrays within the unit for their high transaction rate applications, i.e. POS/CRM/BI databases, mail spools and mailboxes, etc.

And when you come down out of the stratosphere to the midrange SAN heads and then HBA RAID controllers, your persistent cache size is typically 4GB for SAN heads and 512MB for HBAs. With these systems a parity rebuild significantly degrades application performance, and during normal operation with a heavy random IOPS transactional workload RMW latency will as well. And with software RAID you don't have any persistent cache, RMW is constant, and rebuilds bog the entire system down. RAID10 or striped/concatenated mirror pairs, depending on the workload, are a much better option for these 3 cases.
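To make the RMW penalty concrete: with no write-back cache in front of the array, a single small write to a RAID5 stripe costs four back-end I/Os. A minimal sketch, with toy in-memory "disks" standing in for real block devices (the helper names are hypothetical, not any vendor's API):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16           /* tiny blocks keep the demo readable */
#define NUM_DISKS  4
#define NUM_LBAS   8

/* Toy "disks": an in-memory array standing in for real block devices. */
static uint8_t disks[NUM_DISKS][NUM_LBAS][BLOCK_SIZE];
static int io_count;

static void disk_read(int disk, int lba, uint8_t *buf)
{
    memcpy(buf, disks[disk][lba], BLOCK_SIZE);
    io_count++;
}

static void disk_write(int disk, int lba, const uint8_t *buf)
{
    memcpy(disks[disk][lba], buf, BLOCK_SIZE);
    io_count++;
}

/* Read-modify-write update of one data block in a RAID5 stripe:
 * one logical write becomes 2 reads + 2 writes on the back end. */
static void raid5_rmw_write(int data_disk, int parity_disk, int lba,
                            const uint8_t *new_data)
{
    uint8_t old_data[BLOCK_SIZE], parity[BLOCK_SIZE];

    disk_read(data_disk, lba, old_data);    /* I/O 1: old data   */
    disk_read(parity_disk, lba, parity);    /* I/O 2: old parity */

    /* new parity = old parity ^ old data ^ new data */
    for (int i = 0; i < BLOCK_SIZE; i++)
        parity[i] ^= old_data[i] ^ new_data[i];

    disk_write(data_disk, lba, new_data);   /* I/O 3: new data   */
    disk_write(parity_disk, lba, parity);   /* I/O 4: new parity */
}

int main(void)
{
    uint8_t block[BLOCK_SIZE];
    memset(block, 0xAB, BLOCK_SIZE);

    raid5_rmw_write(0, 3, 0, block);  /* data on disk 0, parity on disk 3 */
    printf("1 logical write -> %d physical I/Os\n", io_count);  /* 4 */
    return 0;
}

A mirrored (RAID10) write is two back-end writes with no reads on the critical path, which is why the high transaction rate applications above get RAID10; a big battery-backed cache hides this 4-I/O latency, which is what lets the enterprise SAN heads get away with parity RAID.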

Indeed, with the rise of SSDs (and their relatively small sizes) nearly eliminating the performance penalty of parity RAID schemes, expect to see its usage grow, not shrink.

It absolutely will grow, but it won't entirely displace rust. And yes, SSD latency/bandwidth do eliminate most of the performance problems with parity RAID on rust. The current crop of controller silicon isn't fast enough to fully take advantage of SSDs, though. Take your big EMC and NetApp gear, for example. If one were to allow the controller to use up to 100% of its resources to rebuild a RAID6 array of 8 SSDs for the fastest possible rebuild time, the rebuild would eat all of the controller's cycles and other IO would suffer. With an 8 disk RAID6 rust array, the controller has sufficient excess capacity to service other IO in a timely manner. To fully take advantage of SSDs in RAID, we need much faster silicon. Almost any number of certified SSDs in RAID5/6 will saturate the dual-core ASIC in LSI's top RAID HBA, as its parity engine can't keep up with the IO rate.

Comment Re:Get Hardware RAID (Score 1) 192

The only real advantage to "Hardware RAID" is the battery backed cache.

Hardware RAID has many advantages. Persistent cache, while important to performance, is but one. Far better management infrastructure is another. Many RAID vendors offer a single web management console which can control all RAID devices across a network. Try that with mdadm. Then you have superior alerting and monitoring, etc. Most RAID vendors have had excellent, easy-to-set-up SNMP capability for over a decade. mdadm is still lacking here, as is Windows' built-in software RAID (does anyone actually use it?).

Hardware RAID comes with the disadvantage of a whole other operating system "firmware" with its own bugs and often proprietary disk layout.

All hardware comes with firmware, even the SATA controller and NIC on your consumer mobo, and every vendor has bugs to fix on occasion, including software RAID. This is why a good administrator reads release notes. Also note that most hardware RAID controller (PCIe card) vendors have been moving to the SNIA on-disk metadata standard (DDF). That said, you won't find me swapping out an LSI RAID with an Adaptec, or with software RAID, any time soon simply because they all use the same metadata format and thus it should "just work". That's just not smart, due to all kinds of other issues.

Parity calculations are nothing for current CPUs, so the onboard processor is not so useful.

Spoken as I'd expect from an individual with no real hardware RAID experience/knowledge. Parity work is a tiny fraction of the operations performed by a RAID ASIC. And in fact most enterprises don't even use parity RAID, due to the huge performance penalty of RMW and the unacceptable rebuild times of parity arrays. The bulk of the work done by a RAID ASIC today is IO request processing and cache management. So no, it doesn't matter on what chip the XOR calculations are performed, because those with real workloads aren't using parity RAID. If you're using Linux mdraid, your parity calculations are limited to a single core per array, so if you must use parity RAID you're likely better off with a good dual-core RAID card.
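The "XOR is cheap" premise itself is easy to verify. A crude timing sketch like the one below (an illustration, not a benchmark of md or any particular RAID stack) will show a single modern core XORing parity at several GB/s with an optimizing compiler, far more than a handful of spinning disks can feed it; the real work in a controller is the IO path, not the XOR. (Linux md actually benchmarks its XOR routines at module load and logs the winner to dmesg.)

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (64 * 1024 * 1024)     /* 64 MiB per "disk" */

int main(void)
{
    uint8_t *d0 = malloc(BUF_SIZE), *d1 = malloc(BUF_SIZE);
    uint8_t *p  = malloc(BUF_SIZE);
    if (!d0 || !d1 || !p)
        return 1;
    memset(d0, 0xAA, BUF_SIZE);
    memset(d1, 0x55, BUF_SIZE);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* RAID5-style parity across two data buffers. */
    for (size_t i = 0; i < BUF_SIZE; i++)
        p[i] = d0[i] ^ d1[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("XORed %d MiB in %.4f s (%.0f MiB/s)\n",
           BUF_SIZE / (1024 * 1024), secs,
           (BUF_SIZE / (1024.0 * 1024.0)) / secs);

    free(d0);
    free(d1);
    free(p);
    return 0;
}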

Advanced filesystems such as ZFS or BTRFS need direct access to the disks.

You really need to educate yourself. Oracle sells hardware SAN RAID arrays. ZFS doesn't have direct disk access with these.
http://www.oracle.com/us/products/servers-storage/storage/san/pillar/pillar-axiom-600/features/index.html

I'd like to see drives and/or controllers with battery backed cache. Until then, I rely on my UPS.

A UPS is not a substitute for persistent RAID cache. Persistent cache saves you from kernel panics and other crash scenarios that could corrupt your filesystem journal and/or the filesystem proper, as well as from power outages. A UPS only saves you from power outages.

Stop regurgitating the misinformation you read on the Wikipedia RAID page. Expend some effort and do your own research. Just about everything you've stated here is incorrect. In fact, don't do any research. Just simply keep quiet since you obviously don't use RAID and have no experience with it.

Comment Re:I work in the storage industry. (Score 1) 192

Don't assume that "enterprise" disks do this correctly either.

Those educated in enterprise storage assume it doesn't matter, because with "enterprise" drives it's a non-issue. Those willing to pay for them are attaching them to "enterprise" RAID controllers with [F/B]BWC. These controllers, whether PCIe or in a SAN head, disable the drives' NCQ/TCQ and onboard caches. The BBWC does the write ordering, negating the need for NCQ/TCQ, and provides a resiliency that onboard drive caches cannot.

Comment Re:Do you need a unified filesystem at all? (Score 1) 234

You must redesign your workflow. At this point you're attempting to re-engineer a flawed workflow system. Cut your losses and start over, doing this in a way known universally to work.

The first place to start is to take a large blunt object and hit the idiot over the head who decided he needed 500GB/day per "sensor" of environmental data for "undeveloped land". Oil/gas company seismic surveys of potential oil fields don't even capture this much data, and they survey hundreds of square miles at a pop, with multiple billions of dollars potentially on the line.

Second, as others mentioned, do not try to mount and directly share the field node disks. Create a batch copy system and pull everything off the sensor node drives onto a RAID array on the server. This setup is still light years away from an optimal field data collection system. What you *should* do is:

Build a centralized field "office", i.e. a cheap plywood building that's relatively weatherproof. Acquire a ruggedized rack server certified for outdoor field use and a half-rack cabinet on wheels; dozens of companies sell such gear. Install a wireless router that can accept an external high-gain antenna. Build a rigid square box antenna mount with 2x4s on the roof of the structure, about 6ft tall. Assuming the roof of the structure is 8ft high, this gives an antenna height of 14 feet, which should be plenty if the ground you're surveying is relatively flat and you locate the building relatively close to the center of the sensor field. Connect the remote nodes securely to the AP. Create a share on the server and write all data in real time from the nodes to the server. Power the rack with a small gas generator sitting outside the shed, with a fuel tank large enough to run for 48 hours. This allows you to keep collecting data in the event weather or anything else causes you to miss a pickup cycle.

If the data needs to be analyzed on a 24 hour cycle, you will build two identical server cabinets. Instead of collecting drives from all the nodes, you simply drive your van out, power down, disconnect a few cables, roll the cabinet out, roll the sister cabinet in, connect cables, power on, check for proper function, refill the gas tank in the generator, roll the retrieved cabinet onto the van, and go. Return to base, wheel the cabinet into the office, jack into the network, connect to the share, analyze.

THIS is how field data collection and analysis is done properly for most scenarios.

Comment Re:hunh? (Score 1) 383

Gmail's filters are outstanding, and maybe 1-2 spams slip through per month.

You must be referring strictly to their inbound spam filters. My MX SMTP logs suggest the rate of spam slipping past their outbound filters is a couple orders of magnitude higher.

Comment Re:1366x768 (Score 1) 382

Also known as the cheap laptop screen.

Don't forget LCD and plasma 720p HDTVs. Over the past 5-6 years there have been over 100 million of these units sold worldwide with a native panel res of 1366x768, in the 27-42" range. I'm sure some are running Win7 and seeing media center duty today. These TVs were $900-1200 USD in 2006. People are more likely to move them to the bedroom/basement when they buy the new 50"+ 1080p model for the living room. I've even heard of some guys using the 32" models for their desktop monitor.

Comment Re:Youtube video. (Score 1) 1127

There's a marked difference between hunters who eat what they kill (pheasants, deer, fish, etc), and proto serial killers who kill/torture for a thrill. You're conflating the two.

"marked difference" from a legal perspective or your personal moral perspective? I assume you mean moral as there is no legal distinction in most, if not all, states, WRT species covered in the state game code. For instance, prairie dog hunting in a number of Western states is legal and is a purely sporting endeavor. There is typically no limit on the number one can kill per day or per month. Nobody eats prairie dog. It tastes like shit and there's not enough meat on em to make half a sandwidh anyway. They are classified as a varmint, or pest species, because the tunnel complexes they dig often cause soil erosion problems. So farmers/land owners have an ecological reason to trim their population. Everyone else kills them purely for sport with long range rifles.

Coyotes have become a problem in multiple midwestern states due to dramatically increased populations in recent years. They used to be taken for their pelts, but prices bottomed out a number of years ago. People quit hunting them as there was no monetary gain. Nobody ate coyote, ever. They were shot for their pelts, for sport, or both. Since the pelts are worthless today, anyone shooting them is doing it mostly for sport. One other motivation is that, because their populations are so high today, they are routinely killing house pets, mostly cats and micro dogs, on farms and in rural towns. My own grey tabby cat, whom I'd had for 14 years, was taken by a coyote on Oct 27, 2011. I had motivation to start killing coyotes south of town, but local law enforcement had already taken up the task. That doesn't preclude licensed hunters from killing coyotes outside the city limits, and some certainly do.

My point here is that taking game for reasons other than meat is often just as 'moral', if not more so. And since everyone has a different moral compass, or should I say, everyone's moral compass points in a slightly different direction, we have laws that dictate who can shoot what animals and under what circumstances. If you have a problem with the game laws in a particular state, I suggest you start a campaign to change those laws instead of complaining on Slashdot about someone else's moral compass pointing a different direction than your own. Taking such a stance here is about as productive as pissing in the wind.
