


Comment: Re:Linus is right (Score 1) 449

by SpinyNorman (#48730975) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

The need for massive parallelism will come (already has in the lab) from future applications generally in the area of machine learning/intelligence.

Saying that "single threaded loads" won't benefit from parallelism is a tautology, and anyway irrelevant to Linus's claim.

FWIW I'd challenge you to come up with more than one or two applications that are compute bound and too slow on existing hardware that could NOT be rewritten to take advantage of some degree of parallelism.

Comment: Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

by SpinyNorman (#48730931) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

Well, there's obviously no need to add more cores/parallelism until there's a widespread need for it (unless you are Chinese, in which case octocore is a must!), but I think the need is coming pretty fast.

There are all sorts of cool and useful things you can do with high quality speech, image, etc recognition, natural language processing and AI, and these areas are currently making rapid advances in the lab and slowly starting to trickle out into consumer devices (e.g. speech and natural language support both in iOS and Android).

What is fairly new is that in the lab state of the art results in many of these fields are now coming from deep learning / recurrent neural net architectures rather than traditional approaches (e.g. MFCC + HMM for speech recognition) and these require massive parallelism and compute power. These technologies will continue to migrate to consumer devices as they mature and as the compute requirements become achievable...

Smart devices (eventually *really* smart) are coming, and the process has already started.

Comment: Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

by SpinyNorman (#48719189) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

The trouble is that extrapolating the present isn't a great way to predict the future!

If computers were never required to do anything much different than they do right now, then of course the processing/memory requirements wouldn't change either.

But... of course things are changing, and one change that has been a long time coming but is finally hitting consumer devices is the arrival of the hard "fuzzy" problems like speech recognition, image/object recognition, natural language processing, artificial intelligence... and the computing needs of these types of application are way different from those of traditional software. We may start with accelerators for state-of-the-art offline speech recognition, but in time (a few decades) I expect we'll have pretty sophisticated AI (think smart assistant) functionality widely available that may shake up hardware requirements more significantly.

Comment: Re:Linus is right (Score 1) 449

by SpinyNorman (#48717479) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

Yeah, parallel computing is mostly hard the way most of us are trying to do it today, but advances will be driven by need, and advised by past failures, not limited by them.

You also argue against yourself by pointing out that CPUs have hit a speed limit - this is of course precisely why the only way to increase processing power is to use parallelism, and it provides added incentive to find ways to make parallel hardware easier to use.

The way massively parallel hardware will be used in the future should be obvious... we'll have domain-specific high-level libraries that encapsulate the complexity, just as we do in any other area (and as we do for massively parallel graphics today). Massive parallelism is mostly about SIMD, where the programmer basically wants to provide the data ("D") and a high-level instruction ("I") and have a library take on the donkey work of implementing it on a given platform.
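As a rough illustration of the "provide D and I" model (my own toy example, not anything from TFA), this is already how array libraries like NumPy work - the programmer writes one high-level operation and the library's backend handles the per-platform vectorization:

```python
import numpy as np

rng = np.random.default_rng(0)

# The programmer supplies the data ("D")...
a = rng.random(100_000)
b = rng.random(100_000)

# ...and a high-level instruction ("I"); the library does the donkey
# work of mapping it onto whatever SIMD/parallel hardware is available.
c = a * b + 2.0  # element-wise multiply-add across the whole array

# The equivalent scalar loop the library spares you from writing
# (and from hand-optimizing per platform):
c_loop = np.empty_like(a)
for i in range(len(a)):
    c_loop[i] = a[i] * b[i] + 2.0

assert np.allclose(c, c_loop)
```

Same answer either way, but only the first form gives the library room to exploit however many lanes/cores the platform has.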

Current parallel computing approaches such as OpenCL, OpenMP and CUDA are all just tools to be used by library writers or those (who will become increasingly few) whose needs are not met by off-the-shelf high-level building blocks. No doubt the tools will get better, but for most programmers it makes no difference, as they use libraries rather than write them. Compare, for example, all the advances in templates and generic programming in C++11 and later... how many C++ programmers are intimately familiar with and proficient in these new facilities, and how many actually need to use them as opposed to enjoying the user-friendly facilities of the STL built atop them?!

Comment: Let's see how that sounds in 5-10 years time ... (Score 1) 449

by SpinyNorman (#48717041) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

It sounds rather like Bill Gates' [supposed] "640KB ought to be enough for anybody", but no denying that Linus said this one!

Saying that graphics is the only client side app that can utilize large scale parallelism is short sighted bunk, and even ignores what is going on today let alone the future. In 20 years time we'll have handheld devices that would look just as much like science fiction, if available today, as today's devices would have looked 20 years ago.

I have no doubt whatsoever that in the next few decades we'll see human-level AI in handheld devices as well as server-based apps, and you'd better believe that the computing demands (both processing and memory) will be massive. Even today we're starting to see impressive advances in speech and image recognition, and the underlying technology is increasingly (massively parallel) connectionist deep-learning architectures, not your grandfather's (or Linus's) traditional approaches. Current deep-learning architectures can be optimized to use significantly fewer resources for recognition-only deployment vs. learning, but no doubt we'll see live learning in the future too as AI advances and technology develops.

Linus's relegation of parallelism to server side is equally if not more shortsighted than his lack of vision of client-side CPU-sucking applications! If you want systems that are always available, responsive and scalable then that calls for distributed (client side) implementation, not server based. Future devices are not only going to be smart but the smarts are going to be local. Bye-bye server based Siri.

Comment: Re:No group "owns" any day on the calendar. (Score 4, Informative) 681

by SpinyNorman (#48690453) Attached to: Neil DeGrasse Tyson Explains His Christmas Tweet

Close, but no banana.

The Dec 25th date was co-opted from the Roman holiday/feast of Natalis Invictus (= birth of the sun-god Sol Invictus), the date being chosen as it was then (cf. precession of the equinoxes) the winter solstice, when the days start to get longer again (i.e. the sun is reborn). This holiday was created by the Roman emperor Aurelian in the 3rd century AD, and was co-opted by the Christians maybe 100 years later.

Saturnalia was a separate - very popular - Roman holiday in (if memory serves) November/December, which FWIW had a present giving component.

However, the gross external form of modern Christmas - Tree, Holly, Mistletoe (i.e. general greenery) and Yule log all come from a different, northern European, winter solstice celebration called "Yule".

So, the Xmas feast/date comes from Natalis Invictus, the Tree/Holly/ etc from Yule, the presents *perhaps* from Saturnalia, and we'll have to concede the nativity (there's that "natalis" again) to the Christians, who prior to 300AD would never have celebrated Jesus' birth!

Comment: Re:It's not him.. (Score 2) 681

by SpinyNorman (#48689653) Attached to: Neil DeGrasse Tyson Explains His Christmas Tweet

He didn't fold (where's the back down?). He blatantly and successfully trolled the Christian fundamentalists**, and his follow-up was little more than a gloat.

** and/or anyone ignorant enough of history to think that Jesus was born on 25th Dec and/or was the basis of the Dec 25th holiday we now call "Christmas"

Comment: C++ getting better and better... (Score 1) 641

by SpinyNorman (#48554391) Attached to: How Relevant is C in 2014?

It seems like C++11 was finalized only yesterday, but we already have C++14 finalized and C++17 in the works...

This is hardly the same language as a few years ago - the power and ease of use that have been added, for both library and application developers, are amazing.

Anyone programming in C++ who isn't thoroughly familiar with all the new stuff in C++11 is missing out tremendously...

Comment: Punctuated equilibrium (Score 1) 282

by SpinyNorman (#48233965) Attached to: High Speed Evolution

evolution = variation + selection

What's happening here is likely more about selection than variation, although maybe a bit of both. I suspect this is largely the mechanics of punctuated equilibrium at work.

The way evolution is taught at high school level is typically oversimplified to the point of being wrong, as indeed are many subjects. Evolution is NOT a continuous process of each generation getting better fitted to the environment via natural selection acting on genetic changes introduced in individuals in that generation...

The normal way that evolution is understood to play out in practice is via "punctuated equilibrium", whereby genetic changes - which are typically too small and/or irrelevant to have any immediate impact on fitness - accumulate in animal populations over many generations. It's not the genetics of individuals that are changing so much as the genetics of the interbreeding population as a whole, as accumulated changes get spread throughout the population over a number of generations. This is the "equilibrium" phase, whereby genetic changes are accumulating but there is no external evidence of this, as the changes are irrelevant to fitness.

What happens next is the "punctuated" part of "punctuated equilibrium" - something changes in the external environment that the animals are part of - in this case the arrival of an invasive species. These changes in the environment (drought, disease, invasive species, etc.) can happen very quickly compared to the speed at which genetic change accumulates. Now, it may happen that in the new, changed environment some of the accumulated genetic changes that were previously benign become a factor in fitness (either positively or negatively), and therefore a "sudden" change in the population may be seen as those individuals possessing what has now become a helpful trait, or not possessing a negative trait, prosper relative to their peers and rapidly come to dominate the population.

When a change in the environment brings about a quick change in an animal species, it is tempting - but sloppy - to say they are rapidly evolving. What happened rapidly was the change in the environment, not the slow process of genetic change that suddenly became significant.

In this case the Florida lizard population presumably already had all the traits - to some degree - that would prove positive or negative when the invading Cuban species arrived, and a quick change was seen as natural selection did its thing: over a few generations the population became dominated by individuals having the (slowly come by) traits that now proved to be critical.
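The dynamic described above can be caricatured in a few lines of code (entirely my own toy sketch, with made-up numbers - not a model of the actual lizard study): neutral variation accumulates for many generations, then an environmental shift makes an already-present trait suddenly matter.

```python
import random

random.seed(0)

POP = 1000    # population size
GENS = 300    # generations to simulate
SHIFT = 200   # generation at which the environment changes
              # (e.g. an invasive species arrives)

# Each individual is reduced to a single trait value, initially neutral.
pop = [0.0] * POP

history = []  # mean trait value per generation
for gen in range(GENS):
    # Variation: small random mutations accumulate every generation.
    pop = [t + random.gauss(0, 0.01) for t in pop]

    # Selection: before the shift the trait is fitness-neutral (pure
    # drift); after the shift, higher values (say, stickier toe pads)
    # suddenly matter, and the top half out-breeds the bottom half.
    if gen >= SHIFT:
        pop.sort(reverse=True)
        pop = pop[:POP // 2] * 2

    history.append(sum(pop) / POP)

# The mean trait barely moves for 200 generations ("equilibrium"),
# then shifts rapidly once selection kicks in ("punctuated").
print(f"mean trait at gen {SHIFT - 1}: {history[SHIFT - 1]:+.3f}")
print(f"mean trait at gen {GENS - 1}: {history[GENS - 1]:+.3f}")
```

The point of the sketch is that nothing about the *rate of mutation* changes at generation 200 - only the environment does, and the population's pre-existing variation is what gets selected on.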

Of course there's more to how the dynamics of evolution play out than just punctuated equilibrium... While it's always going to take a long time for any complex feature such as sticky toes (or toes themselves, for that matter) to evolve, the way genetic coding works is such that it may be very easy for a feature - once it exists - to be modified by a small change (e.g. a birth defect giving you unwanted extra limbs, or extra sticky toes - advantageous if you mostly climb slippery trees, disadvantageous if you don't).

So... the big picture here is that the Florida lizard species will have already accumulated the feature set that proved advantageous (or disadvantageous for those that died, giving way to the "new" variety), and this just played out once the environment changed. Subsequent to the changed environment, additional variation/selection (which you could think of as optimization "tweaking") of the most critical features (toe pad size, scale stickiness) may have occurred.

Comment: Add noise to fix (Score 1) 230

by SpinyNorman (#47100729) Attached to: The Flaw Lurking In Every Deep Neural Net

If the misclassification only occurs on rare inputs then any random perturbation of that input is highly likely to be classified correctly.

The fix therefore (likely what occurs in the brain) is to add noise and average the results. Any misclassified nearby input will be swamped by the greater number of correctly classified ones.
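Here's a toy 1-D demonstration of the noise-and-average idea (my own illustrative construction - the "classifier" is a stand-in, not a real deep net): a narrow misclassified sliver plays the role of the adversarial pocket, and majority voting over perturbed copies of the input recovers the right answer.

```python
import random

random.seed(0)

def classify(x):
    # Toy 1-D "net": the right answer is 1 iff x > 0, except for a
    # narrow misclassified sliver standing in for the rare adversarial
    # pocket described in TFA.
    if 0.9 < x < 1.0:
        return 0  # the rare bad region
    return 1 if x > 0 else 0

def classify_smoothed(x, n=500, noise=0.5):
    # The proposed fix: perturb the input n times and take a majority
    # vote; correctly classified neighbours swamp the bad pocket.
    votes = sum(classify(x + random.gauss(0, noise)) for _ in range(n))
    return 1 if votes > n / 2 else 0

x_adv = 0.95                     # lands squarely in the bad pocket
print(classify(x_adv))           # misclassified as 0
print(classify_smoothed(x_adv))  # recovered: classified as 1
```

Because the bad region occupies only a small fraction of the neighbourhood around the adversarial input, the overwhelming majority of the noisy votes are correct.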

Comment: Re:Que Oversaturation in 3...2...1... (Score 1) 126

by SpinyNorman (#47082319) Attached to: US Wireless Carriers Shifting To Voice Over LTE

This will actually help!

First, voice doesn't use that much data. For example, Viber (a popular VOIP app) uses about 0.5MB/min, which works out to roughly 0.5GB for 1000 minutes.
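The arithmetic, for anyone checking (a quick sketch using the figures quoted above, not my own measurements):

```python
# Back-of-the-envelope check of the VOIP data usage claim.
mb_per_min = 0.5   # Viber's approximate usage, as quoted
minutes = 1000     # a heavy month of talk time
total_gb = mb_per_min * minutes / 1000
print(f"{minutes} min at {mb_per_min} MB/min = {total_gb} GB")  # 0.5 GB
```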

More importantly, once everyone is transitioned off 3G onto 4G/LTE (i.e. VOIP over LTE), the carriers can repurpose the 3G spectrum for 4G and thereby gain more 4G/LTE capacity.

Comment: Re:Don't understand it. (Score 1) 198

by SpinyNorman (#46959781) Attached to: Apple Reportedly Buying Beats Electronics For $3.2 Billion

The deal doesn't make sense to me, but presumably it would involve Dr Dre and Jimmy Iovine being contracted to stay for some minimum amount of time, which brings a lot of clout (esp. Iovine) in the music biz.

The $3.2B price, if true, seems insane though. Between 2012 and 2013 Beats bought back HTC's 50% stake for a total of $415M (25% in 2012 for $150M, 25% in 2013 for $265M). So, if half the company is worth $415M, the whole thing should be worth closer to $830M, not $3.2B!
