Comment: Same old story (Score 3, Informative) 266

by paradigm82 (#49099781) Attached to: The Robots That Will Put Coders Out of Work
This story comes up every few years. They are all written from the same template: this new generation of tools will let everyone write their own apps, you won't need professional programmers anymore, and here's an 8-year-old who made an app in two minutes, etc. These stories are written by people who don't have a clue what the working day of a programmer looks like, and who imagine someone sitting isolated at his desk typing in code all day.
The programmer's job is to translate the requirements of other humans into requirements that a machine can understand. For very well-defined application domains it is indeed possible for non-programmers to use tools that achieve the same thing. That was already true with VB 1.0, or equivalents before it, and I don't think the scope of applications that can be written this way has broadened much in the decades since. Applications that are just a dialog with very simple logic can be written by drawing the dialog and by copy-pasting and adapting small statements, etc.
Beyond that there has not been much progress in "auto-writing" applications. The reason is that current programming languages are actually fairly efficient and precise ways of explaining to a computer what it should do. The job of the programmer is to translate the "business" requirements into that specification.
Even for a fairly trivial stand-alone program, computers can't do this today. And even if that were to happen, consider that much of the programmer's time is not spent writing code within some confined, well-defined environment, just producing line after line. Rather, it is spent understanding customer-specific requirements, realizing when they are impossible or inconsistent and working out a solution together with the customer, integrating multiple systems at many different levels using completely different interfaces and paradigms, and figuring out how it is all going to work together.
My experience is that most automatic code-writing and analysis tools stop working when you add this layer of complexity. They may work well for a completely stand-alone Java application with a main() method, but how do they work when you have client code, native code, server code, 10 different frameworks, third-party libraries (some written in other languages), external web services, legacy systems, end-users who think about things differently than the internal data models do, implicit contracts and conventions, leaky abstractions, unspecified time-out conditions, and workarounds required by bugs in other systems and differences in end-user setups? The complexity and bugs here are always resolved by human programmers, and I suspect it will stay that way for a long, long time even if computers manage to write stand-alone programs themselves (which IMHO will be the point where we have something approaching general AI). While you can take individual aspects of these problems and formalize them, the number of combinations and the diversity of scenarios is so big that it becomes impractical.
If you have competent programmers, most of the bugs are going to be in these areas, since it is also easy for a competent human programmer to write a stand-alone program that works well on one machine.

Comment: Re:Where are those recent AI progress deployed? (Score 1) 688

by paradigm82 (#48619389) Attached to: Economists Say Newest AI Technology Destroys More Jobs Than It Creates
I was talking about AI, not IT in general. But even then: in 1994 I wouldn't have been blown away by seeing a 2014 smartphone. Yes, it would be much nicer than what was available then, but by no means surprising. There were already color-screen PDAs where you could install apps, and I believe some also had phone capability. They were expensive, clunky and slow, and due to lack of adoption there weren't that many apps. Clearly the technology was immature at that point, but it would by no means have been surprising that it would get much better over time (Moore's law and all).

Comment: Re:They're a resource, not a "problem". (Score 2) 307

Honestly, if you are a "hotshot" and you are sitting next to someone who can barely type a word per minute, and you have to wait 5-10 minutes for just a few lines of code to be written, that is pretty frustrating and of no benefit to anyone. Sometimes when a colleague asks for my help and is slow at typing, I just politely ask if I can type; they always agree, and I can do it 5-10x faster than they can, saving time for both of us. Then they have more time to study the solution at their own pace. That has nothing to do with arrogance, just with making the best use of the time. Of course some politeness and social skill is required!

Comment: Is there enough student material? (Score 3, Interesting) 307

Having a CS degree and 10+ years of professional experience in industry, I find it clear that a significant share of those taking CS (or related IT/programming-oriented programs) don't really have the qualifications for a CS career; even after years of employment they still struggle with rather basic programming tasks and have problems handling just a few levels of abstraction, which is routinely required in any serious programming. Some of the skills required seem to be an "either you have it or you don't" thing, at least a few years into a career. The saving grace for them is the good job market (for employees) and the option of moving into more management- or PM-oriented roles, or at least very soft CS roles. That, and the fact that many employers are unable (or make no effort) to truly compare productivity between employees, so the weaker ones are somewhat shielded by the performance of the stronger ones.
With this in mind, the big ramp-up in the number of CS-trained individuals is concerning. I feel we have been scraping the bottom of the barrel for some years already. Given that it has been well known for many years that IT is one of the easiest fields in which to find employment and that the pay is comparatively good, and given the constant media focus on smartphones, apps and whatnot, it seems reasonable to assume that most people with even a faint interest and ability in IT have already pursued that path. With this ramp-up, there's a high risk that the market will be flooded with sub-par candidates, far more than it is already absorbing. The result will be massive unemployment among newly trained CS people who were never meant to study CS to begin with.

Comment: Men are more creative (Score 1) 579

by paradigm82 (#47785509) Attached to: Why Women Have No Time For Wikipedia
While a lot of people would like to think the reverse is true, I'm 100% convinced, based on 32 years of experience, that men are plainly more creative, in practically any area. That doesn't mean that all men are creative, or that all women are not. Some women are definitely more creative than some men. But by and large my observation holds. Men are much more likely than women to be geeks about things and to spend hours on improving small details or learning everything about a subject. Women are only likely to do that if there's a distinct benefit to themselves or their children. Women generally do things to obtain some benefit (put something on their CV, brag to their boss, etc.) and rarely for the thing itself, which is what I consider the 'geeky' aspect of the way men get interested in things. Since creativity tends to be fostered by spending a lot of time on a subject, men are far more likely to actually produce something and be creative.
So in short, women don't want to contribute to Wikipedia because the personal gains to them are small. Men do it for the fun of it, for the creative process.

Comment: Missing info (Score 1) 177

by paradigm82 (#47621157) Attached to: Algorithm Predicts US Supreme Court Decisions 70% of Time
It would be useful to know how many of the court's decisions are affirmances vs. reversals. If 70% are affirmances, it's not impressive to correctly predict 70% of decisions, since you could get there by always guessing "affirm". If, on the other hand, the split is even (50% affirm, 50% reverse), getting 70% correct shows the algorithm has some predictive power (assuming it was trained on a different dataset than the one used to evaluate it). But it is impossible to determine which is the case based on the information in the article.
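
To make the point concrete, here is a minimal sketch of the majority-class baseline described above. The counts are made up for illustration, not taken from the article.

# Hypothetical illustration: why the base rate matters when judging "70% correct".
# The counts below are invented for the example, not taken from the article.

def baseline_accuracy(n_affirm, n_reverse):
    """Accuracy of always guessing the majority outcome."""
    total = n_affirm + n_reverse
    return max(n_affirm, n_reverse) / total

# If 70% of decisions are affirmances, a constant "affirm" guess already scores 0.70,
# so a model at 0.70 adds nothing.
print(baseline_accuracy(70, 30))   # 0.7

# With a 50/50 split the constant guess only scores 0.50,
# so 0.70 would indicate real predictive power.
print(baseline_accuracy(50, 50))   # 0.5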

Comment: Re:He just described how MPEG works (sort of) (Score 1) 90

by paradigm82 (#47542483) Attached to: How Stanford Engineers Created a Fictitious Compression For HBO
I don't think any of what you wrote goes against what I wrote. I make the exact same distinction: some components of MPEG movie compression (and my post applies to MPEG-2 too, by the way) are lossy, that loss is compensated for by other steps, and yet other steps make the result lossy again.

Comment: He just described how MPEG works (sort of) (Score 1) 90

by paradigm82 (#47540097) Attached to: How Stanford Engineers Created a Fictitious Compression For HBO

He came up with the idea of using lossy compression techniques to compress the original file, then calculating the difference between the approximate reconstruction and the original file and compressing that data; by combining the two pieces, you have a lossless compressor.
This type of approach can work, Misra said, but, in most cases, would not be more efficient than a standard lossless compression algorithm, because coding the error usually isn’t any easier than coding the data.

Well, this is almost how MPEG movie compression works - and it really does work! MPEG partly describes the next picture from the previous one using motion vectors. These vectors describe how the next picture will look based on movements of small-ish macroblocks of the previous picture. Now, if that were the only element of the algorithm, movies would look kind of strange (like paper-doll characters being moved about)! So the secret to making it work is to send extra information allowing the client to calculate the final picture. This is done by subtracting the predicted picture from the actual next frame. This difference picture will (if the difference between the two frames really was mostly due to movement) be mostly black, but it will contain some information due to noise, things that have become visible because of the movement, rotation, etc. So MPEG also contains an algorithm that can very efficiently encode this "difference picture" - basically an algorithm that is very efficient at encoding an almost-black picture.

So there you have it - MPEG works by applying a very crude, lossy compression (describing the next picture only in terms of motion vectors applied to the previous one) and in addition transmitting the information required to correct the errors and allow reconstruction of the actual frame.

The only part where the comparison breaks down is that MPEG is not lossless. Even when transmitting the difference picture, further compression and reduction takes place (depending on bandwidth), so that the actual frame cannot be reconstructed 100%. Still, MPEG is evidence that the basic principle - a crude lossy compressor combined with a transmitted correction - works.
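
For anyone who wants to see the principle in code, here is a toy sketch of motion-compensated prediction plus a residual ("difference picture"). It is not real MPEG - the block size, the brute-force search and the absence of any transform, quantization or entropy coding are deliberate simplifications - but it shows why prediction + residual reconstructs the frame exactly as long as the residual is kept intact.

# Toy sketch of the motion-compensation-plus-residual idea described above.
# Not real MPEG: block size, exhaustive search and the lack of quantization
# are simplified stand-ins chosen just to show the principle.
import numpy as np

BLOCK = 8          # macroblock size (real MPEG uses 16x16 macroblocks)
SEARCH = 4         # +/- pixels searched for the best motion vector

def predict_frame(prev, cur):
    """Build a prediction of `cur` by copying the best-matching block from `prev`."""
    h, w = cur.shape
    pred = np.zeros_like(cur)
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            target = cur[y:y+BLOCK, x:x+BLOCK]
            best, best_err = None, None
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy <= h - BLOCK and 0 <= sx <= w - BLOCK:
                        cand = prev[sy:sy+BLOCK, sx:sx+BLOCK]
                        err = np.sum((cand.astype(int) - target.astype(int)) ** 2)
                        if best_err is None or err < best_err:
                            best, best_err = cand, err
            pred[y:y+BLOCK, x:x+BLOCK] = best
    return pred

# Encoder side: send the motion-compensated prediction plus the (mostly near-zero) residual.
prev = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(prev, shift=(1, 2), axis=(0, 1))        # "camera pan": pure movement
pred = predict_frame(prev, cur)
residual = cur.astype(int) - pred.astype(int)         # the "difference picture"

# Decoder side: prediction + residual reconstructs the frame exactly (lossless here;
# real MPEG quantizes the residual, which is exactly where the loss comes in).
reconstructed = (pred.astype(int) + residual).astype(np.uint8)
assert np.array_equal(reconstructed, cur)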

Comment: Potentially carcinogenic (Score 1) 190

Paracetamol/acetaminophen is potentially carcinogenic. It has been shown to inhibit DNA repair (see e.g. http://www.ncbi.nlm.nih.gov/pubmed/7715617 or Google yourself) and has been associated with an increase in various cancers, for instance blood cancers which are often associated with factors that cause DNA damage (see e.g. http://www.ncbi.nlm.nih.gov/pubmed/21555699).
It has also been associated with other cancers. In contrast, the alternative painkiller aspirin has been associated with a decreased risk of many different types of cancer, notably colon cancer but also blood cancers and many others (too lazy to find all the references). In animal studies it can protect animals exposed to other carcinogens from developing cancer.
Ibuprofen seems to be mixed: in some studies it is associated with an increased cancer risk, in others with a decreased risk.
By the way, note that aspirin is not a magic bullet against cancer; it is also a blood thinner. People who take it chronically are prone to bruising from the slightest knocks. This also means it increases the risk of potentially fatal bleeds, as well as certain types of strokes (intracranial haemorrhage). On the other hand, it decreases the risk of blood clots. The net effect is difficult to assess.

Comment: What about his microwave? (Score 2) 278

by paradigm82 (#43189553) Attached to: Where Have All the Gadgets Gone?
Where has the microwave gone? I'm not aware of any technological development since 2005 that has "converged" the microwave into any other device. Also, none of the devices he displays in 2013 seems able to replace a microwave (unless there's some new app I'm not aware of). Hence, we must conclude that this article merely reflects the lifestyle choices of this particular person, with no relevance to the rest of the population.

Comment: The Danish perspective (Score 1) 370

by paradigm82 (#42493983) Attached to: Forbes 2013 Career List Flamed By University Professors
I can add a Danish perspective. In the last 10-15 years, the number of people accepted into PhD programs has been ramped up by a factor of 2-3x from an already high level. In some programs, up to one third of graduates are accepted into the PhD program. The PhD program is free to the candidate and in fact pays a decent, though by no means high, salary. But to someone who used to live on a third of that, it's a good salary.

The problem is that PhD students used to be the best students of their year, at least in the natural sciences. I remember that when I studied, only one PhD student was accepted every other year, and he or she would always be someone who was very clearly (after a two-minute conversation) smarter than the rest of the students. Of course that person would also have top grades - and that was when top grades were scarcer than they are today.

Today it's not like that. Everyone from the top students down to "joe average" (i.e. someone with half a brain) in the graduate programs usually gets offered a PhD. In fact, many programs have more PhD positions than they can fill from the pool of candidate students. There's no prestige in getting a PhD anymore. And since the financial crisis it's in some ways the opposite, in that the PhD is an obvious path for someone who couldn't get a better-paying job in the private sector.

This state of affairs is extremely expensive for society. It's expensive to provide an extra three years of education and salary to people of just average intelligence. Also, the science that comes out of this is mostly bogus - not really innovative, just buying a lot of fancy equipment, applying what others have already found out and publishing it with a twist. There have already been articles on how little the science produced in academia contributes to the economy, and a few news stories about how "too many PhDs" has lowered quality. But most politicians still seem to be in uncritical "more education" mode... so I suspect it will be a few years before anything substantial happens in this area.

Comment: Seen this before... (Score 1) 544

by paradigm82 (#42043405) Attached to: What "Earth-Shaking" Discovery Has Curiosity Made on Mars?
Am I the only one who remembers the last time NASA did this? Back in 2010 they put out a lot of marketing fluff about the discovery of a new life form: http://science.slashdot.org/story/10/11/30/1846255/curious-nasa-pre-announcement Many assumed it would have to be about NASA finding indications of life on some other planet (even if it were only indirect evidence of a chemical sort, it would be huge). Well, it turned out the news was that they had found a microbe (on Earth) that apparently took up some arsenic when deprived of phosphorus. This was interpreted as the organism using arsenic instead of phosphorus in its DNA (though this had not been verified at all) and as having huge implications for the search for life in space. If that wasn't disappointing enough, the discovery later came under fire, and subsequent research found no evidence of any arsenic uptake, so it was all just a bunch of crap to begin with. In other words: just because NASA makes huge announcements and calls enormous press conferences days in advance, don't hold your breath. It could be nothing more than... well, nothing.

Comment: Re:Heh (Score 1) 241

by paradigm82 (#39943793) Attached to: A Boost For Quantum Reality
I have to agree with the OP. There's something really strange about sentience. I don't see how just adding more complexity would create this "internal awareness". Intuitively it seems that such awareness is of an entirely different quality than what could be created by any amount of computational complexity. What is it that breathes life and sentience into these computations (more on that later)?

Of course my intuition could be wrong. Maybe sentience only appears to have a different quality because it is so infinitely complex that our intuition has no grasp of such complexity, and if it did, we would be able to see how sentience can develop just from the standard properties of physical matter. I can't exclude that this is the case, and I don't see how anyone could.

By the way, I'm curious about your claim that such computers would have to be made from something other than silicon and would have to be ultra-parallel, plastic, etc.
I don't see why, if you have already accepted sentience as following from mechanical processes. As long as the materials obey ordinary physics, the properties of that material could easily be simulated on a computer. You could simulate ultra-flexible/plastic neurons on any computer. Likewise, a single thread can simulate any number of parallel threads with a linear slow-down. So it seems to me that if you go down the mechanistic route, you have to accept that sentience could occur even on the simplest of CPUs (or any kind of Turing machine, in fact), just given the right program. Of course the program might execute slowly, but it should be totally equivalent. And this is why I find the premise of "sentience is just complex computation" unappealing.
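
The "linear slow-down" point can be made concrete with a minimal sketch, using Python generators as stand-ins for threads: one sequential loop interleaves any number of "parallel" computations, paying only a constant per-step scheduling overhead. This is just an illustration of the claim, not anything specific to the brain-simulation question.

# Minimal sketch: one sequential thread interleaving many "parallel" tasks.
# Each task is a generator that yields after every step; a round-robin loop
# advances them one step at a time, giving a linear slow-down overall.
from collections import deque

def counter(name, steps):
    """A toy 'thread': counts up, yielding control after each step."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # hand control back to the scheduler

def run_round_robin(tasks):
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)             # run the task for one step
            queue.append(task)     # not finished: schedule it again
        except StopIteration:
            pass                   # task finished: drop it

run_round_robin([counter("A", 3), counter("B", 2), counter("C", 1)])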

Another problem is that I find the whole concept of computation somewhat subjective. While computation is well defined at the input/output level, the actual computation process is not; it's usually just abstracted away. And sentience must be a property of the computation process itself and not a property of the actual I/O (after all, I still feel sentient when no one is looking).

If I look at a Turing machine made of organ pipes, it's essentially just my interpretation that these organ pipes are operating as a Turing machine executing on some piece of information. You have to look at the system at just the right level and type of abstraction to see that it's a Turing machine. To someone not "in" on it, it would just seem like a weird spectacle. The individual physical components operate exactly as they would if they were not part of a Turing machine. That a computation is taking place is clearly not a physical matter; it just happens that humans have labeled this type of interaction between organ pipes "computation". So couldn't there be some subjective view of the organ pipes as implementing a different type of machine running a different program, one that did not involve sentience? Likewise, couldn't I interpret leaves blowing across the street as some sort of calculation? Given the huge number of physical processes going on in a glass of water, and the almost infinite number of ways of interpreting what those processes are doing, it seems likely you could find some way of interpreting one of them as a sentient process. That's why I think computation is in some way subjective.

So if computation is so subjective, what is it that breathes sentience into the computations making up our brain and thoughts?

By the way, I have been debating this with people for 10-15 years. Some people understand immediately what I mean; others never seem to see the problem. Is that because the latter are cleverer and can better see how sentience follows from physics? Or is it because they are not sentient themselves? ;) In my view, this question and the question of why anything exists at all are the only two real mysteries remaining for science. Everything else seems to be just details that could eventually be worked out.

