

Comment: Facepalm (Score 1) 486

by lorinc (#49338013) Attached to: No, It's Not Always Quicker To Do Things In Memory

This kind of useless paper is exactly why idiots should not be allowed in computer science. They even give the explanation in the paper and still draw the wrong conclusion. To me, it should be renamed "Bad programming habit performs worse than very bad programming habit in the absence of knowledge about the tool used".

Comment: Re:Good. (Score 4, Interesting) 198

They'll likely convince some people to continue with public transportation, which would be a victory, even if small.

Probably not. We are voting this Sunday. My guess is that people will be so upset about not being allowed to take their cars tomorrow that they will vote for the very first idiot who promises to repeal the measure. Usually, these idiots are right-wing extremists.

I'm not very optimistic. Mankind is greedy by nature and probably can't grasp the logic of environmental preservation as long as it entails a net individual loss.

Comment: Re: Foxconn Factories' Future: Fewer Humans, More (Score 1) 187

by lorinc (#49158705) Attached to: Foxconn Factories' Future: Fewer Humans, More Robots

Weren't people saying the same sort of things when the "assembly line" was first invented? After all, the main purpose of the "assembly line" was to make the same amount of stuff with far fewer workers than had been needed previously.

I'm not saying this will happen next year or so, and I'm not sure the parent post was meant to be exact about the timeline. But yeah, it's the kind of global direction we're heading in. The ultimate goal is to replace work done by humans with work done by machines, simply because we're lazy. That, and the fact that capitalism is about gaining the benefits of someone else's work because you own the business. If owning robot overlords can provide you everything you'll ever need without working, it's obvious everybody will want them, but only the most fortunate will be able to afford them, leaving the rest of us in misery.

Oddly, we seem to have managed to get past the introduction of the assembly line without the sort of problems you're predicting - humanity is still here, its population is still growing, and technology is still advancing.

Isn't population growing mainly due to latency? Many second-order systems simply overshoot before stabilizing. Our mentality is still stuck in the post-WWII era, when growth was over 5% every year and you could have plenty of kids without worrying about the future. I wouldn't count on humanity to be anything other than a purely reacting system, which means it will always adjust too late, unlike a predicting system (which is what individuals can be).
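The overshoot-from-latency point can be illustrated with a toy model (purely illustrative, not a demographic claim): a discrete delayed-logistic population whose growth reacts to *lagged* information. The constants K, r, and delay below are arbitrary choices for the sketch.

```python
# Toy delayed-logistic model: a system that reacts to lagged information
# overshoots its carrying capacity before settling, unlike a predictive
# system that would slow down in advance.
K = 1.0      # carrying capacity
r = 0.5      # growth rate per step
delay = 2    # reaction lag, in steps
x = [0.1] * (delay + 1)
for t in range(delay, 60):
    # growth is throttled by the PAST population x[t - delay],
    # not the current one, so the brakes come on too late
    x.append(x[t] + r * x[t] * (1 - x[t - delay] / K))

peak = max(x)  # exceeds K: the system overshoots before stabilizing
```

With a larger lag or growth rate the oscillations stop damping out, which is the "adjust too late" failure mode in its purest form.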

Comment: Re: Foxconn Factories' Future: Fewer Humans, More (Score 2) 187

by lorinc (#49157585) Attached to: Foxconn Factories' Future: Fewer Humans, More Robots

Although the way it's written is brutal and arrogant, I think it is the closest to what will happen. The more I look at it, the more the future seems to resemble "The Dancers at the End of Time" by M. Moorcock. It is either that, which implies a brutal decrease of the unneeded population, or the end of technological advancement, or the end of humanity.

Comment: Re:Do they actually work well now? (Score 1) 45

by lorinc (#49136361) Attached to: The Believers: Behind the Rise of Neural Nets

Last time I looked there was no application of ANNs which couldn't be solved more efficiently by other algorithms ... and the best ANNs used spiking neurons with Hebbian learning which are not amenable to efficient digital implementation.

Is it possible that the last time you checked was a long time ago? Deep neural networks are all the rage again now (e.g., huge teams work on them at Facebook and Google) because

  1. They have resulted in a significant performance improvement over previously state-of-the-art algorithms in many application tasks,
  2. Although they are computation-heavy, they are amenable to massive parallelization (modern computational power is probably the main reason why they have improved significantly with respect to the ANNs of the 80s-90s, given that the main architecture itself has not changed a lot, except possibly for the "convolution" trick, which effectively introduces hard-coded localization and spatial invariance).

To be fair, it always seems to me that (1) and (2) are very closely related. The CNNs that won recent computer vision benchmarks are the only methods that have used so much processing power so far. Not that they're less efficient than others, though. It's just that I would love to see other methods given that much engineering, tuning, and dedicated computational power, and how they would compare.
Also, note that when it comes to classification, the standard practice is to throw away the last layer and train a linear SVM on the penultimate layer, which also shows that CNNs alone are not enough.

Comment: Predictable (Score 2) 441

by lorinc (#48825725) Attached to: Why We Have To Kiss Off Big Carbon Now

In the long run, it will fade away because most of the growth has already been consumed. That being said, trade is chaotic in nature and short-term prediction is difficult ("especially when it's about the future"), but in the long run the trend is well known.

Sometimes, I like to think that the "Limits to growth" report will be regarded in some distant future as our epoch's Eratosthenes calculations.

Comment: Re:Well he would say that. (Score 0) 894

by lorinc (#48819847) Attached to: Pope Francis: There Are Limits To Freedom of Expression

P.s We don't get our morals from religion, my observation is that quite often "religious" people have less ethics and morality than atheists.

Well, to be completely honest, people with an imaginary friend are usually sent to psychiatric hospitals. Unless several million or billion of them share the same imaginary friend...

Comment: scientific stars wannabe (Score 2) 227

by lorinc (#48806179) Attached to: Lawrence Krauss On Scientists As Celebrities: Good For Science?

The problem is that scientific research is now like music was in the 80s. People are much more interested in writing the article that will be cited 1k times, just as musicians were chasing the single that would sell 1M copies, than in actually improving common knowledge.

Well, at least in computer vision, that's my impression.

Comment: Re:It's hard to take this article seriously (Score 1) 628

by lorinc (#48642617) Attached to: What Happens To Society When Robots Replace Workers?

What if it were possible to be completely autonomous with machines? Machines that take care of your food, clothes, transportation, anything you ever need or want, but machines that require a massive amount of wealth to acquire first. What if the future of automation is a mere 1% living in a libertarian utopia (kind of like The Dancers at the End of Time by M. Moorcock) while the remaining 99% struggle to survive? What if the over-concentration of wealth you are observing right now is just the first step towards that kind of future?

Isn't it plausible that over-concentration of wealth is a natural consequence of automation? To me it seems intuitive that the concentration of wealth into singular points is the end goal of automation.

Comment: The New Magic (Score 3, Insightful) 74

by lorinc (#48396275) Attached to: Machine Learning Used To Predict Military Suicides

Stop speaking of machine learning as if it's a new kind of black magic. I know it sounds better than "using a mathematical algorithm" or "performed statistical analysis", but to me it sounds as ridiculous as the "quantum whatever" of the 90s. Seriously, ML is being hyped beyond reason.
