
Comment Re:Not only finances are an issue (Score 1) 287

Most companies would not keep any developers over 50 on staff.

It's their loss. Most companies are horrendously bad at making software. The ones that are very good at it know better and have lots of older developers on staff.

And the research shows they're right:
Older Is Wiser: Study Shows Software Developers’ Skills Improve Over Time

My experience tells me the same. Most of the really good software engineers I know are at least 40, many are much older.

Go ahead, throw 'em out and get some newly-minted CS grads. Someone with some sense is going to get a great hire.

Comment Re:LOL@ Use-case (Score 2) 45

Well, I still think the data can be deanonymized. I don't need to make any assumptions other than what you've told us.

the places that an individual goes to, and how they got there, how long it took, and how long and where they were stationary. key factors critical for shopping mall owners to be able to provide to their retailers: (1) how many unique shoppers went into *their* store (broken down by time and date is also helpful). (2) how long each unique shopper spent in their store. (3) also useful to know is where they went *before* going to another store.

Even if the time resolution is 5 minutes, and the spatial resolution is only enough to identify which stores I visit, that is enough to identify me. If I go to the mall, stop to get a coffee, wander around for a while, then make another purchase in another store, using my credit card both times, I may very well be the only person who made purchases at those two stores within a 5-minute window at each. Each additional purchase makes the pattern more likely to be unique. Now if I put on dark glasses and a baseball cap and stop by Victoria's Secret to buy some lingerie for my mistress, with cash, it's still possible to link that visit to me via your path data.
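Here's a toy sketch of the kind of linkage attack I mean. The data, types, and the 5-minute matching rule are all made up for illustration; I have no idea what their actual schema looks like:

// Toy linkage attack: intersect the "anonymous" paths with a shopper's card purchases.
type PathPoint = { store: string; time: number };               // minutes since midnight
type Path = { deviceId: string; points: PathPoint[] };          // deviceId is a hash, not a name
type Purchase = { store: string; time: number; cardHolder: string };

// A device is consistent with a purchase if its path puts it in that store within 5 minutes.
const matches = (p: Path, buy: Purchase): boolean =>
  p.points.some(pt => pt.store === buy.store && Math.abs(pt.time - buy.time) <= 5);

// Keep only devices whose path is consistent with every purchase made on one card.
function candidates(paths: Path[], purchases: Purchase[]): Path[] {
  return paths.filter(p => purchases.every(buy => matches(p, buy)));
}

// Two purchases at two different stores is often enough to leave exactly one candidate;
// once the hashed deviceId is pinned to a card holder, their entire path has a name on it.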

It isn't the path data per se that is identifying me -- it's a combination of that and other data. It doesn't have to be credit card data, like I said. It could be wifi, loyalty cards, security cameras, even witnesses... anything that can associate me uniquely with one of your paths. And it doesn't even have to be unique, just narrowing it down to a handful of people is useful to law enforcement.

Don't get me wrong: It sounds like you and the company you worked for care about privacy and did everything you could to protect it. That's commendable. And it sounds like you did a good job. (Plus I think it's cool you used GNU Radio.)

It's also commendable that you understand the conflict of interest. The retailers would like to have better spatial and temporal resolution: they'd like to know which aisles people walk down, what displays they stand in front of and for how long, etc. The retailers will ask for that, and if you don't provide it someone else will. So there will always be pressure to make it more useful. But the more useful it is to retailers, the more useful it is to anyone else who might try to get access to it, whether through hacking or subpoena.

I am skeptical whenever I hear "don't worry, we've anonymized the data." I've seen too many ways that data can be deanonymized, and I'm not a professional data miner or forensic hacker, so there are no doubt other devious methods that I've never heard of and that would never occur to me. The key point is that as long as you store the path itself, anything that can link me to part of it can link me to all of it. The only way to avoid that would be to obliterate the path data and store only aggregate information (averages, sums, etc.).
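For contrast, here's roughly what I mean by keeping only aggregates (again, hypothetical types, not anything from their system): once the per-device paths are collapsed into counts, there's nothing left for an outside record to join against.

// Hypothetical aggregation: reduce per-device visits to per-store, per-hour counts.
// After this runs, the individual paths can be discarded; nothing below links back to a device.
type Visit = { deviceId: string; store: string; hour: number };

function aggregate(visits: Visit[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const v of visits) {
    const key = v.store + "@" + v.hour;             // e.g. "coffee-shop@14"
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;                                    // only sums survive, not who went where
}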

Comment Re:LOL@ Use-case (Score 3, Insightful) 45

instead they used GNURadio to do GSM passive decoding and signal-strength detection. and no, you *can't* track the person themselves, nor can you get their telephone number, nor can you decode their phone conversations, nor can you decode their SMS messages (not "and track 1000s of phones on affordable commodity off-the-shelf hardware at the same time"). they also track bluetooth and wifi, but again, the mac addresses are hashed (with salting) *before* being stored on disk.

I think it would still be possible to deanonymize that path data. If you make a credit card purchase, the time and place of the transaction can be associated with whatever ID you use, hashed or not. The path data shows that someone was standing at the cash register at that time and place. With the credit card information (or even just loyalty card information) you know who it was, and you can associate that with the entire path through the mall. Similarly, if someone walks past a Starbucks and their smartphone associates with its WiFi, then anyone with access to Starbucks' records can deanonymize the path from that. Or it could be deanonymized with the security cameras.
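And to be clear about why the hashing doesn't save you: for the path points to join up at all, the salted hash has to be stable for a given device, which makes it a pseudonym rather than anonymity. A quick sketch (hypothetical code, obviously not theirs):

import { createHash } from "crypto";

// With a fixed salt, a salted hash of the MAC is just a deterministic pseudonym.
const SALT = "example-salt";                                     // hypothetical
const pseudonym = (mac: string): string =>
  createHash("sha256").update(SALT + mac).digest("hex");

// The same device always maps to the same ID, so one external linkage (a card swipe,
// a WiFi association, a camera frame) puts a name on the whole track.
console.log(pseudonym("aa:bb:cc:dd:ee:ff") === pseudonym("aa:bb:cc:dd:ee:ff")); // true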

I don't see, then, how it could be subpoena-proof if you store the actual path, regardless of how you anonymize it. They can subpoena your data together with other data to get what they want.

Comment Re:Gut flora (Score 3, Informative) 152

TFA says "the microbiome can influence, and be influenced by, a range of characteristics such as weight, disease, diet, exercise, mood and much more." So they acknowledge the causation can go both ways.

But it gets even more interesting. According to a TED talk I saw about this the other day, there are apparently two different ways that our gut microbes might cause obesity. One is related to what you said:

Which gets to part 2 of the problem: We shit stuff that is still quite nutritious. Ask your local fly population. Our "waste" is not just waste. There's quite a bit of stuff in there that could still be "digested".

From the TED talk:

When we take the microbes from an obese mouse and transplant them into a genetically normal mouse that's been raised in a bubble with no microbes of its own, it becomes fatter than if it got them from a regular mouse. Why this happens is absolutely amazing, though. Sometimes what's going on is that the microbes are helping them digest food more efficiently from the same diet, so they're taking more energy from their food, but other times, the microbes are actually affecting their behavior. What they're doing is they're eating more than the normal mouse, so they only get fat if we let them eat as much as they want.

So apparently some microbes allow you to extract more energy from your food so you put on more weight for a given amount of calories. But other ones might affect your appetite somehow. There's more in the talk, and it's not just mice: there's research in humans as well.

Comment Re:Thought it was already the norm abroad (Score 1) 230

The key difference between Japan and the US though, at least from what I've heard, is that in the US they want to tie payments directly to bank accounts or credit cards.

There's nothing stopping you from opening another checking account just for use with the mobile payment system and transferring money into it from your primary account as needed. So you could argue the US system is more flexible, since the Japanese system prevents you from just using one account if that's what you want. You might have to pay a little for a checking account if you don't keep a minimum balance, though. Does anyone offer free checking any more?

Comment Re:Nope (Score 3, Informative) 44

I agree that it won't spark an interest that isn't already there, but it might spark a dormant interest in a lasting way. Maybe your criticisms are valid (I haven't read it), but then again, we're not 8-year-olds.

I was probably going to be interested in math & science eventually anyway, but at that age I wasn't voluntarily reading any non-fiction science. But then I accidentally discovered an Isaac Asimov book, and I was hooked. Not long after that I asked for a subscription to Scientific American for Christmas. That book had a huge influence on me and got me started much younger than I would have otherwise. (And I can't even remember which Asimov book it was.)

The thing is, you don't know you're interested until you are exposed to it. I'm sure most slashdotters will agree that math & logic are inherently interesting, but that's because they already know a lot of it. You somehow have to learn enough to realize that it is more interesting than the way it is typically presented in school (hard as they try).

Comment Re:FFS (Score 1) 398

I was hospitalized once for about 9 weeks, 2 of them in intensive care. I was on a lot of pain medication, so much that it had to be administered by an anesthesiologist. When it came time to start weaning me off, I didn't know what was happening at first. I felt sick like I had the flu. I had hot & cold flashes. I was nervous and irritable and couldn't sleep. The sheets had to be changed twice a day because I was sweating so much. It was awful. And that was in a controlled environment, weaning me off over a period of many days.

When I asked a nurse what was going on, she told me I was going through withdrawal from all the pain meds I had been on. I was certainly addicted at that point, and going through physical withdrawal. I wasn't going through psychological withdrawal though, and I think that's because my brain had never made an association between the high and any behavior of mine. The withdrawal was terrible, but I never thought "I'd feel better if I just got some more." The drugs were being administered by the doctors, in a time-release way throughout the day. If I had taken the same amount of morphine and OxyContin and whatever else on my own, in a way that allowed my brain to associate it with a behavior (shooting the drug or whatever), then I would have come out of it a true addict.

I think that's partly how nicotine replacement therapy works, too. You put on a patch and get a steady dose of nicotine throughout the day, breaking the link between the drug and the behavior (smoking). You don't go through physical withdrawal, because you're getting nicotine, but you still miss the behavior. As long as you don't smoke, eventually the brain stops associating the drug with the behavior. Then when it comes time to wean yourself off the patch, your brain is in the situation I was in at the hospital: it wants the drug, but that no longer motivates the behavior of smoking, since the association has been broken.

It has been proven many times that addiction is a result of miserable living conditions - social, economic, and/or psychological - not a result of the drugs themselves. the addiction is to the relief of pain, whether physical or psychological.

I don't think that is true: some drugs are inherently addictive. However, I have seen studies showing that the brain doesn't really distinguish emotional pain (from a shitty life, abusive relationships, whatever) from physical pain, so that could explain how people in those situations are more willing to take risks and continue doing things even when the negative consequences start to stack up, but once you're addicted you're addicted -- pain or not.

Comment Re:amazing (Score 1) 279

On the other other hand, brains are orders of magnitude more energy efficient. I don't know whether the efficiency is even related to the parallelism, asynchronicity, and ultra-low "clock speed" of the brain, but it seems plausible that it is. The brain is optimized for efficiency above all else, whereas we have so far made the opposite trade-offs with computers.

We're doing that "real-time 3D vision and context-sensitive pattern recognition" with a few watts. Doing that with a bunch of GPUs would take thousands of watts at least. Doing it on a serial CPU is practically impossible: it would require a ludicrous clock speed and a few orders of magnitude more energy.

Comment Re:If we heard the guy... (Score 1) 421

Yes, lets go back to the caves and live like Noble Savages.

For another point of view, see this talk by David Deutsch from 2005. He rambles for a while (in an entertaining way) before getting to the point. Here's the ending:

So let me now apply this to a current controversy, not because I want to advocate any particular solution, but just to illustrate the kind of thing I mean. And the controversy is global warming. Now, I'm a physicist, but I'm not the right kind of physicist. In regard to global warming, I'm just a layman. And the rational thing for a layman to do is to take seriously the prevailing scientific theory. And according to that theory, it's already too late to avoid a disaster. Because if it's true that our best option at the moment is to prevent CO2 emissions with something like the Kyoto Protocol, with its constraints on economic activity and its enormous cost of hundreds of billions of dollars or whatever it is, then that is already a disaster by any reasonable measure. And the actions that are advocated are not even purported to solve the problem, merely to postpone it by a little. So it's already too late to avoid it, and it probably has been too late to avoid it ever since before anyone realized the danger. It was probably already too late in the 1970s, when the best available scientific theory was telling us that industrial emissions were about to precipitate a new ice age in which billions would die.

Now the lesson of that seems clear to me, and I don't know why it isn't informing public debate. It is that we can't always know. When we know of an impending disaster, and how to solve it at a cost less than the cost of the disaster itself, then there's not going to be much argument, really. But no precautions, and no precautionary principle, can avoid problems that we do not yet foresee. Hence, we need a stance of problem-fixing, not just problem-avoidance. And it's true that an ounce of prevention equals a pound of cure, but that's only if we know what to prevent. If you've been punched on the nose, then the science of medicine does not consist of teaching you how to avoid punches. (Laughter) If medical science stopped seeking cures and concentrated on prevention only, then it would achieve very little of either.

The world is buzzing at the moment with plans to force reductions in gas emissions at all costs. It ought to be buzzing with plans to reduce the temperature, and with plans to live at the higher temperature -- and not at all costs, but efficiently and cheaply. And some such plans exist, things like swarms of mirrors in space to deflect the sunlight away, and encouraging aquatic organisms to eat more carbon dioxide. At the moment, these things are fringe research. They're not central to the human effort to face this problem, or problems in general. And with problems that we are not aware of yet, the ability to put right -- not the sheer good luck of avoiding indefinitely -- is our only hope, not just of solving problems, but of survival. So take two stone tablets, and carve on them. On one of them, carve: "Problems are soluble." And on the other one carve: "Problems are inevitable." Thank you. (Applause)

Comment Re:on starting with smaller-scale albedo modificat (Score 1) 421

you can't impose these kinds of burdens (financial and otherwise) on people without the certainty that they'll make things better.

That's insane. You have to weigh the uncertainty against the consequences if the predictions are right. There will always be some uncertainty... even if just manufactured uncertainty. You're just burying your head in the sand.

Comment Re:disclosure (Score 1) 448

Honestly it is better that he doesn't, otherwise all the papers would simply be attacked ad hominem based on who pays the grants. You want to discredit his work, attack the science in it, not the funding for the science.

The same could be said about publishing the names of the researchers and the institutions they are associated with. That will affect how a paper is received much more than the source of funding will, but the paper should speak for itself whether published by a world-famous scientist at a prestigious university or a high school student working in his garage.

Unfortunately credibility matters. Most people (including scientists) are unable to judge most research entirely on its merit, so they rely on the opinions of those who can. But how do you judge their credibility? Even if the reviewers are completely fair and honest, the volume of research is too high to review everything. Some measure of credibility (e.g. a degree, or a recommendation) is necessary to even be considered. You could just publish everything without regard to its merit, but then how can you call that science? No matter how egalitarian your attitude, there is no escaping the need for credibility. The reputations of the scientists, institutions and journals, and the sources of their funding all help to establish credibility.

When researchers publish bad science, their reputation suffers. Hopefully if a corporation or special interest group continues to fund bad science, the research they fund will also lose credibility and be met with increased skepticism. Eventually scientists won't want to risk their own reputation by accepting the money. But that can only work if the information is disclosed.

Even if the science is good, it is not unfair to be skeptical toward research funded by someone with an obvious stake in the outcome. Just getting a larger number of papers published on one side of a controversial issue gives that side the appearance of increased legitimacy. And grant money always has an effect on researchers even when the money supposedly comes with "no strings attached."

That's the problem with science, insofar as there is a problem: it's ultimately a social and political process done by fallible people. Established scientists and institutions have an unfair advantage and socially unpopular or inconvenient ideas can be suppressed, sometimes for generations. And it can be affected by money. (It's still far better than the alternatives.)

Comment Re:Some misconceptions (Score 1) 319

Languages aren't compiled or interpreted: implementations are.

That's true in theory, utterly false in practice. Major design choices hinge on whether the language is intended to be interpreted or compiled. Interpreting a language that was intended to be compiled can be done, but it usually amounts to compiling in the background and doesn't work out well. Compiling a language that was intended to be interpreted typically yields performance an order of magnitude slower than a language designed to be compiled. The information needed to optimize the compiled code just isn't there, so the compiler cannot eliminate type checks, type coercions, bounds checks, overflow checks, etc. Typically all function calls are virtual and many memory accesses are doubly indirect.
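As a toy illustration of the kind of check that can't be eliminated: in JavaScript, even a plain a + b has to dispatch on the runtime types unless the engine can prove them, something a statically typed, compiled language resolves once at compile time. Very roughly (and simplified; this is not how V8 actually implements it):

// Sketch of the dispatch a dynamic language pays for when types aren't known until runtime.
function addDynamic(a: unknown, b: unknown): unknown {
  if (typeof a === "number" && typeof b === "number") {
    return a + b;                        // the single add a static compiler would emit
  }
  return String(a) + String(b);          // simplified fallback: string concatenation
}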

Node.js isn't fast. It's concurrent. You can handle many thousands of simultaneous requests, as long as none of them are working particularly hard.

That's not what concurrent means in this context. The word you're looking for is "asynchronous". All of the javascript code you write will execute on a single thread in Node.js. Some APIs are asynchronous with a callback. That style of programming is much older than me and I'm a greybeard. Asynchronous code is great if you need the performance and can deal with the complexity but it shouldn't be the only option. And seriously, Javascript? for performance? really?
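A minimal Node-style sketch of what that single-threaded, callback-driven model looks like (the file name is just an example):

import { readFile } from "fs";

// All of the JavaScript here runs on one thread. The read is asynchronous: Node hands
// the I/O to the OS and invokes the callback later, but no two pieces of your JS ever
// run in parallel.
readFile("/etc/hostname", "utf8", (err, data) => {
  if (err) throw err;
  console.log("file contents:", data.trim());
});
console.log("this prints before the callback runs");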

Server-side web programming can have a lot of in-flight requests being handled simultaneously, and not much need for synchronization because the requests are relatively independent (and the heavy lifting of dealing with race conditions is being handled by the database, operating system, file system and other libraries not written in javascript.) Real concurrent programming has much more data passing between threads of execution and the Node.js design of one single-threaded process per core is going to really suck for that.

Look at this Stack Overflow question about Node.js and multi-core processors (scroll down to the more up-to-date answer).

"Node.js is one-thread-per-process. This is a very deliberate design decision and eliminates the need to deal with locking semantics. If you don't agree with this, you probably don't yet realize just how insanely hard it is to debug multi-threaded code."

That made me actually laugh out loud. Threads were invented because they are easier than asynchronous programming. Asynchronous programming has its own pitfalls. Writing asynchronous libraries is hard, most programmers don't have much experience with it, and many existing libraries can't be used because they're not asynchronous. It's all or nothing: one blocking call (or just one long-running call) and every in-flight request in that process is stalled and you have a core doing nothing. The Node.js people will tell you that if you're writing long-running code in javascript, you're probably doing something wrong, which completely contradicts the supposed advantage of being able to write client and server code in the same language.
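To see how little it takes, here's a toy server (the /slow route is made up) where one synchronous handler stalls every other request in the process:

import { createServer } from "http";

createServer((req, res) => {
  if (req.url === "/slow") {
    const end = Date.now() + 5000;
    while (Date.now() < end) { /* synchronous busy-wait: the one thread is stuck here */ }
  }
  res.end("done\n");
}).listen(8080);

// While /slow is spinning, every other connection to this process just sits and waits,
// because there is only one thread available to run any of the JavaScript.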

It's an intentional design decision all right, but not for that reason. The fact is that javascript engines like V8 were designed from day one to run in browsers and are inherently single-threaded. Now that they've escaped the browser, they're trying to make their single-threaded nature out to be a feature. They're hyping asynchronous single-threaded code because that's the only option. The browser implementers have zero interest in adding multi-threaded javascript support. Adding threaded javascript to browsers would be very difficult, to say the least, and a rich source of new bugs, and the browsers just don't need it. Adding threading to just the server would fracture the language and remove the only advantage Node.js has. So instead Node.js tries to spin the lack of threads as a feature.

The performance "gains" people talk about with Node.js are entirely due to being single-threaded and asynchronous. They're comparing asynchronous Node.js code to e.g. threaded Java. It's been well known for decades that if you want ultimate performance you have to eliminate the overhead of context switches and go asynchronous. Again, big deal. We've known that since forever, and you can do it in many languages. C# has better language and library support for asynchronous operations. C++11 has excellent support as well. And if you want to see the ultimate in high-performance asynchronous programming, just look at any OS kernel. All mainstream OSs (Linux, Windows, OS X, iOS, BSD) are written this way, in plain old C. So most of the hype around Node.js is just that they've accidentally rediscovered asynchronous programming because they can't have threads.

Node.js: The speed of a dynamic interpreted language combined with the programming simplicity of low-level asynchronous systems code.

Exactly what collision course are we talking about?

The collision is that javascript is becoming the de facto standard "byte code" and virtual machine target that other languages compile to, which is traditionally Java's (and somewhat .NET's) turf. TFA completely misses that point. The javascript VM is already deployed everywhere, and the Java and .NET (and other) VMs definitely are not. Plus, for better or worse, HTML5 is a cross-platform GUI framework that actually works and that both programmers and users are familiar with. Java GUI sucks everywhere, .NET GUI sucks everywhere but Windows.

If you can write code in Java, C#, Python, or even C++, run it directly on the server, and compile it to javascript on the client, then you can use one language for the client and the server and it doesn't have to be javascript. This is both an incredibly great idea and a horrible idea at the same time. It's great because a ubiquitously deployed VM, with reasonable security and with cross-platform GUI capabilities that don't totally suck is exactly what we need. It's horrible because javascript is just the wrong tool for that, but it's what we have. As bad as it is, it can actually work pretty well in practice. Ironically the only thing really missing is threads.

This video is a pretty funny take on javascript as a bytecode.
