
Comment Lies, Damned Lies, and Statistics... (Score 4, Insightful) 269

The fact that a large majority of voters base their judgments on the immediate past (i.e., the 3-4 months before an election) rather than on the entire term of office (2, 4, or 6 years for the various US Congressional and Presidential terms) is well documented, so no surprise here.

Much of that has to do with the difficulty virtually all people have distilling a complex, hugely multivariate problem into easily understood metrics and views. That's not going to change, because even a super genius can only accurately track a half-dozen major points, while there may be several DOZEN relevant metrics/issues that could reasonably be considered important.

The proposed solution in the paper is yet another simplification and lie, NOT a real solution. The simple answer is that I see no indication that the claimed "yearly growth" rate is any more accurate than absolute income. Do the growth rates take inflation into account? (I see no indication they do.) What about changes in the job market over those years? What about overall economic indicators? I.e., if average income managed to grow at ALL over the period 2007-2009 (in the middle of the most severe recession in 80 years), that's a huge accomplishment versus, say, merely keeping up with inflation in 2003. The authors are merely substituting one questionably useful statistic for another (of the same dubious relevance).

Never trust someone selling you a simple numerical answer to a complex problem. Politicians and Statisticians are both extremely adept at contriving lots of meaning from simple numbers. There's a reason this post is titled the way it is.

-Erik

Comment I don't mean to belittle the will to do so... (Score 0) 150

But this has long since ceased to be any sort of technical challenge or accomplishment.

Putting a lander on the moon (or even, for that matter, a human) is not much of a technical challenge anymore; it requires nothing beyond learning to properly use complex (but well-known) technology.

There's a whole raft of small aerospace companies (of which SpaceX is merely the best known) with funding in the low millions that can produce a lunar lander for you within 6 months of a go-ahead. And building a rocket large enough to put 100 kg on the moon for a one-way trip is merely a matter of money, not advanced tech, these days.

The bigger obstacle is political will, and being able to divert the few tens of millions it costs from being cannibalized by special interests.

If the Israelis do it, good for them. But it's not really advancing the state-of-the-art in any way, nor is it much more than a publicity stunt.

Comment Why so low a commonality? (Score 4, Interesting) 202

Neanderthals are barely a separate species.

They're Homo neanderthalensis, while modern man is Homo sapiens sapiens. The immediate predecessor of modern humans is Homo sapiens idaltu, which is minutely different from us. While a simple majority of paleontologists classify Neanderthals as a separate species, a significant minority argues that classifying them as merely another subspecies (Homo sapiens neanderthalensis) is more correct.

Given that the ENTIRE Neanderthal genome differs from ours by 0.15% or less (we're about 2% different from our closest living primate relative), I'm very surprised that the Homo-specific part of the genome is only 20% in common between Neanderthals and modern humans. Particularly since it's now commonly accepted that they interbred with modern humans.
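To put those percentages in perspective, here's a back-of-the-envelope calculation. The ~3.2-billion-base-pair genome size is my own round-number assumption for illustration, not a figure from the article:

```python
# Rough scale check of the divergence percentages above.
# GENOME_BP is an assumed round number, not a precise genomics figure.
GENOME_BP = 3_200_000_000

neanderthal_diff = round(GENOME_BP * 0.0015)  # ~0.15% divergence
chimp_diff = round(GENOME_BP * 0.02)          # ~2% divergence

print(f"Neanderthal vs. modern human: ~{neanderthal_diff:,} differing bases")
print(f"Chimpanzee vs. modern human:  ~{chimp_diff:,} differing bases")
```

So a 0.15% difference still means millions of differing bases, roughly an order of magnitude fewer than the chimp comparison, which is why the 20% commonality figure is so surprising.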

I think the 20% commonality (if it bears out) probably reinforces the "separate species" theory more than the "distinct subspecies" theory of the Homo genus family tree.

-Erik

Comment For a noted pragmatist, Linus is dead wrong... (Score 2, Insightful) 279

Normally, I see Linus being pragmatic about things, but I have no idea why he's against CLAs.

Having a CLA (with some form of copyright assignment or "unlimited" sublicensing) is the ONLY way to run a flexible, long-term Open Source project.

The Linux kernel is the only substantial project that doesn't do this, and, frankly, can only get away with it because it's so critical. Even there, it's a pain, because (to pick a stellar example) Linux will NEVER be able to relicense itself under an improved GPL. It's stuck FOREVER on GPL v2. Which is hardly a good thing.

CLAs are a consequence of copyright, just like the licenses themselves. They're necessary to allow a project to update its license, defend the entire codebase in court, keep track of ACTUAL authors, etc. If you don't have this, you have a toy project, one which will ultimately fail.

If you don't like CLAs, then go the BSD or Public Domain route, because those are the only licenses (or non-license) that avoid all the traps of copyright law. Otherwise, if you want copyleft of any sort, you have to use a CLA.

Linus is basically complaining that having a driver's license is an obstacle to people just getting on the road and driving whenever they want. Sure, CLAs restrict the "fly by night" patcher. That's a feature, not a bug. Sometimes you do want to set the bar higher than the lowest common denominator. Naturally, some CLAs are worse than others, but the concept as a whole is sound.

-Erik

Comment As can ANY of the major CLAs... (Score 5, Interesting) 279

Take a look at pretty much any major CLA out there.

I'll name three big ones: OpenJDK, FSF's for GNU, and Apache's.

ALL of them either directly assign the copyright of the contribution to the org (so you lose any ability to control it whatsoever) or give the org the ability to relicense it explicitly.

This is intentional, and a GOOD thing, because it increases the flexibility of the project, including making it easier to defend rights in court. Frankly, a project with copyright held by multiple assignees is impossible to manage from a legal standpoint, let alone one where you don't even know the real identity of a contribution's author.

The Linux kernel is stuck on GPL v2 for exactly this reason, and can never change. That's the fate of any such non-CLA'd Open Source project (other than something using the Public Domain or the BSD license).

FYI: the FSF can (and has) relicensed code contributed to GNU projects under a proprietary license (gcc and parts of the toolchain).

Comment Report validates the "dead man walking" assessment (Score 1) 207

No, the report effectively validates the "they're dead, Jim" assessment.

The "repair" theory was so riddled with uncertainties that NASA itself acknowledged it was too high-risk to even contemplate. Basically, they'd have had to do a spacewalk just to figure out the extent of the damage, then jettison all the cargo, then try to jury-rig some sort of patch. The idea they had was laughable: use spare metal parts to pack the hole with something of substance, then use an ice pack to try to maintain the wing's aerodynamics. Then make it back through re-entry, where a best-case scenario exposes the patch to several hundred degrees of heat and Mach 25+ airspeed.

The "rescue" option was only slightly less hair-raising, and, frankly, ran a significant risk of loss of TWO orbiters.

The realistic assessment is that as soon as the accident happened, they were doomed. Citing low-probability theoretical plans (and that's all they were, theoretical) that were never tested or even simulated, and that would have had to be executed under extreme time pressure, as evidence that they weren't doomed is muddle-headed wishful thinking.

The report makes it pretty clear that saving the Columbia was about as realistic as saving the Titanic.

Comment Except it's not useful at all... (Score 1) 87

It's NOT more flexible, except in hairball wacko scenarios that never happen in reality.

A UH-60 or an OH-6 has better range, better speed, and much better maneuverability, along with either higher cargo capacity or radically more nimbleness. And saying these things could be used as a UAV is completely brain-dead: they're so slow and vulnerable that they'd never survive in a hostile environment. At least helicopters have the speed and maneuverability for quick insertion and retrieval missions.

And if you think helicopters have high maintenance requirements and are vulnerable to ground fire, that's nothing compared to what this beast will need and be vulnerable to. It's nothing but an SUV-sized target.

Every scenario I can possibly think of (including those listed) is more effectively performed at a lower overall price (because you have to factor in losses) by existing helicopters, UAVs, and ground transport.

This is the modern version of steampunk: it looks cool, but it's practically worthless.

Comment Makes a lot of sense... (Score 4, Insightful) 234

For as bad as the NSA and GCHQ programs are/were, there is at least some reasonable way to restrain the damage they do.

For corporations, there's effectively no limit to the amount of damage they can do.

Yes, government-level info gathering can result in some pretty awful things: prison, at the least, for a limited number of people. A breakdown in trust of government as a whole, however, is probably the worst thing such pervasive intrusion can cause. BUT we have relatively fast control over this kind of behavior. We (citizens) simply pitch a fit to our representatives, and a loud enough fit (aided, hopefully, by exposés from people like Edward Snowden) gets results rather quickly (weeks or months). The NSA's policies and practices are changing as we speak. In the end, government is responsible to the people, and if enough of society says to change a policy, it gets changed.

Compare that to information gathering and use by a company. It's regulated by? Well, if you're lucky, the government. If not, then by nobody. And there's no oversight at all. They can pretty much do whatever they want with it, and there's virtually nothing the average citizen can do about it, even in large numbers. The company's management controls the data, and they're pretty much completely insulated from outside influence. Not even stockholders have much say here. And there's virtually no penalty for misusing it. Take the Target debit card leak: it's a very temporary, minor PR problem. They're not on the hook for any damage they cause those people by mishandling their info. And that's a minor case; think of all the places where corporations buy and sell info with no benefit to the individual, profiting from it, usually to the individual's detriment.

I'm in no way saying that government info gathering is good; we need to keep a close eye on it at all times. However, corporate information gathering and trading is infinitely more damaging to society, especially in unregulated places such as the USA. At least we have a reasonable ability to correct government oversteps. When was the last time you saw a company penalized (or, heck, even substantially change its policies) due to mishandling of individual data?

Thanks, but I'll trust a representative government long before I'll trust a private, for-profit entity.

-Erik

Comment Depends... (Score 5, Informative) 462

When comparing modern mortality improvement over the older pre-industrial, pre-modern-medicine regimes, the "most helpful" reductions vary with the age group you're dealing with:

  • INFANT (i.e. under 2 years of age) mortality reductions are overwhelmingly due to two things: (1) improvements in reducing childbirth deaths and complications, and (2) infant vaccinations. Sanitation (though not necessarily clean water) has helped somewhat, but not anywhere near as much as getting the kid out of the mother in good shape, plus effective pre- and post-natal care. Vaccines (even though most children aren't fully protected until after 2 years of age) have nonetheless stopped cold the huge killers of infants: measles, smallpox, pertussis, etc.
  • CHILDHOOD (2-12) mortality reductions are pretty much split between vaccinations and improved clean water/sanitation, with the latter maybe edging out the bigger impact (probably due mostly to reducing malaria, cholera, and typhus).
  • TEENAGE (13-18) mortality reductions are due to a combination of vaccines (TB, smallpox, polio, and measles being the big ones here), clean water/sanitation, and trauma medicine.
  • ADULT (18-65) reductions are mostly clean water/sanitation, with trauma medicine following behind. Vaccinations aren't a huge contributor here, since the vast majority of people who caught the major vaccinated diseases died before they reached adulthood, and thus a much smaller percentage of adults were saved.
  • ELDERLY (65+) mortality reductions are heavily due to improvements in drugs and chronic illness treatments (think cancer and heart disease).

Overall, clean water and sanitation probably win as the single most important advancement in public health, ever, but vaccines are a *very* strong second. Frankly, drugs are at best a distant fourth, behind even improved medical understanding of the human body (enabling more effective trauma and non-drug treatments of common diseases and accidents). Drug improvements really have helped two big categories of people: soldiers at war, and the elderly.

-Erik

Comment I've got a Bridge in Arizona I want to sell you... (Score 1, Interesting) 214

That statement from Apple doesn't even pass the laugh test, let alone a sniff test.

I live and work in Silicon Valley, and have a substantial number of friends and former co-workers who either are working, or have recently worked, for Apple.

They're collecting data on you. Lots of it. And their "opt out" ways are about as effective as Google's at protecting your data.

iTunes play patterns and purchase history. Apple Maps. Location data around phone usage. Location usage, period. Apple Store purchase patterns. Every time you visit an Apple Store. Purchase data from the online Apple App Store. The list goes on and on.

Some of it is anonymized, but most of it really isn't. Even if you "opt out", there's more than enough metadata being collected to identify you.

So, yeah, Apple's just lying through its teeth.

Comment Let me introduce you to my System/360.... (Score 5, Insightful) 184

Large corporate environments change at a glacial pace. If anything, they merely add, never subtract: the proportion of Fortune 1000 companies with mission-critical mainframes is close to 100%, as it has been for the past 50 years. Similarly, pretty much all of them still have a VAX, an AS/400, or a similar minicomputer running something critical. The waves of consultant-pushed fads wash over these institutions with virtually no effect. They all run small "incubator" tech-evaluation groups so they can sort out which new tech is likely to produce useful ROI, but the actual adoption rate of that new tech is very slow.

Mid-sized companies are pretty similar, though they're a bit more aggressive about dumping older technology. They don't generally replace it with cutting-edge stuff, though, since that's a huge risk they don't want to take. Pretty much every "tech upgrade" I've ever seen in this space replaces a 30-year-old setup with a design that first showed up a decade earlier. Mid-sized companies go for solidly proven tech.

Little companies are where the most change happens, for good and bad. The bad side is that many small companies don't have the expertise to handle the adoption of new processes and tech properly, and thus screw it all up and kill the company. I've seen this happen at both small tech AND non-tech companies, where an insufficiently funded/staffed/knowledgeable IT "department" killed the company. Literally. The good is that small companies are where the experimentation happens, and, particularly in tech-oriented ones, it's where the next wave of computing is really prototyped and then refined.

The general answer to the article is that any sane company's IT department will look 90% identical to what it is now in 10 years, and even in 50 years will almost certainly still be at least 50% identical. For those able to handle the risk, things will change on a decade-by-decade basis; but, in reality, those companies will either have died or matured into risk-averse companies by then. So, while the small-company space is a place of rapid change in IT, at any specific company a period of rapid evolution will be followed either by the death of the company or by evolution into the long-term-stability type.

The short of it is: NEVER trust a consultant trying to predict the future for you. Particularly if they're extrapolating on "new" tech.

Comment Which Subfield determines which Maths... (Score 1) 656

The short answer to your question is: YES, no matter what subfield of computing you go into (Networking, Systems, Software Engineering, QA, Release, or Project Management), you'll need advanced Maths. Which advanced Maths depends on the specific subfield. But the reality is, you're far better off knowing most of the material a 2nd-year Math major has to take.

If you're a Software Engineer (and, to a lesser extent, in QA), you'll likely need the Maths that help you describe real-world actions or model real-world happenings. This means Geometry, Trig, and Calc, plus Maths common in Physics, plus application-specific stuff like Linear Algebra, Complexity, Markov Modeling, Game Theory, etc. Basically, Software Engineering places the biggest demand on Math knowledge, but it varies according to the type of project you're on.

Networking and Systems depend heavily on Linear Algebra and the Discrete Math fields, particularly Set Theory, Game Theory, Complexity/Computability, and Graph Theory. Most of this is not writing down equations, but having an intuitive understanding of the problem being presented, because you've had the requisite background. For instance, modeling network traffic flow and determining system load both require Graph Theory and Complexity, though that wouldn't be immediately obvious to an outsider.
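To make the traffic-flow example concrete, here's a hedged sketch (the topology and link capacities are entirely invented) of the classic Graph Theory computation behind throughput modeling, the Edmonds-Karp max-flow algorithm:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path
    with BFS and push flow along it until no path remains."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:  # no augmenting path left: done
            break
        # Find the bottleneck capacity along the path
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Push flow along the path (reverse entries record residuals)
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

# Invented 4-node network: node 0 = source, node 3 = sink,
# cap[u][v] = link capacity from u to v
cap = [[0, 10, 10, 0],
       [0, 0, 2, 8],
       [0, 0, 0, 9],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # → 17, limited by the cut into node 3 (8 + 9)
```

The point isn't that you'd hand-code this on the job; it's that recognizing "this load question is a min-cut/max-flow problem" is exactly the intuition the coursework builds.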

Release and Project Management are less Math-intensive, but it's still important to have college-level Maths as a strong foundation. Complexity/Computability, Linear Algebra, and, particularly, Statistics, Graph Theory, and Game Theory are cornerstones of these fields.

The reality is that Math is a significant part of any Computer Science degree, and is critical in daily professional use. Outside specific programming positions (e.g., those involving modeling of some kind), it's not the same use of Maths that a Civil Engineer or the like would make. But you have to be comfortable thinking about Maths, and you need a significant educational background to be successful.

Personally, beyond Geometry and Trig, I'd expect you'll have to take about 6 semesters of some sort of Math in a reasonably rigorous CompSci program. You'll probably only use 3 of those courses on a regular basis, but you'll never know WHICH 3 you'll be using, so you need all of them.

If you find Math difficult, tedious, or boring, you need to seriously rethink a CompSci degree (and, by extension, a career in something normally requiring a CompSci degree). Or you need to talk with your Maths professors/teachers, and figure out why you have difficulty or are bored during Math classes. Either way, it's a required skill for the profession.

Comment If you think Bitcoin was ever Anonymous... (Score 4, Interesting) 158

...I've got a bridge somewhere that needs to be sold that you might be interested in.

Bitcoin provides irrefutability (i.e., the ability to prove that a transaction occurred, and occurred only once). I can thus prove that I do, in fact, own all the Bitcoin I possess.

It has never been anonymous. There are characteristics that make it more difficult to trace the payer, but the protocol and implementation have never been designed (or configured) to be a strongly anonymous technology.
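A toy sketch of that distinction (this is a simplified hash-chained ledger for illustration only, NOT the actual Bitcoin protocol, and the addresses are made up): the chain makes recorded history tamper-evident, which is the irrefutability part, while every pseudonymous payer/payee pair sits in public view, which is why it was never anonymous.

```python
import hashlib
import json

def tx_hash(tx):
    """Hash a transaction's canonical JSON form."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

# Build a toy public ledger: each transaction commits to the previous
# one by embedding its hash, forming a tamper-evident chain.
ledger = []
prev = "0" * 64
for payer, payee, amount in [("addr_A", "addr_B", 5), ("addr_B", "addr_C", 3)]:
    tx = {"prev": prev, "payer": payer, "payee": payee, "amount": amount}
    prev = tx_hash(tx)
    ledger.append(tx)

# Irrefutability: altering any recorded transaction breaks the chain.
tampered = dict(ledger[0], amount=500)
print(tx_hash(tampered) == ledger[1]["prev"])  # prints False: tampering is detectable

# Non-anonymity: the full payment graph is public for anyone to analyze.
print([(t["payer"], t["payee"]) for t in ledger])
```

Real chain analysis works much the same way: the pseudonymous payment graph is right there, and linking one address to a real identity unravels a whole cluster of transactions.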

Comment Re:Their Game, Their Content (Score 2) 297

You (and the +5 poster a thread or two up) misunderstand "transformative" use.

The proper analogy to video gameplay has already been decided by the courts, and it is the performance of written plays (which also applies to screenwriting).

All three take another work and produce an interpretation of that work. The original playwright/screenwriter/videogame author still owns the base copyright being used, and the result is classified as a Derivative Work. The performer has also contributed significant copyrightable material, but the genesis and base of the entire (new) work still rest on the original play/screenplay/game.

Also, just because something is "transformative" doesn't absolve it of the requirement to use only a "limited" amount of the original. Using the entirety of a video game (artwork included) in your LP video is pretty much the definition of "not limited".
