Comment Re:39/100 is the new passing grade. (Score 4, Insightful) 174

Gah. I have mod points but want to add to this conversation.

The point of publishing is to share the results of an experiment or study. Basically, a scientific publication tells the audience what the scientist was studying, how they did the experiment, what they found, and what they learned from it. The point of peer review is to review the work to make sure appropriate methods were followed and that the general results agree with the data. Peer review is not meant to verify or reproduce the results, but rather just to make sure that the methods were sound.

Scientific papers are _incremental_ and meant to add to the body of knowledge. It's important to know that papers are never the last word on a subject and the results may not be reproducible. It's up to the community to determine which results are important enough to warrant reproduction. It's also up to the community to read papers in the context of newly acquired knowledge. An active researcher in any field can quickly scan old papers and know which ones are likely no longer relevant.

That said, there is a popular belief that once something is published, it is irrefutable truth. That's a problem with how society interacts with science. No practicing scientist believes any individual paper is the gospel truth on a topic.

The main problem in science that this study highlights is not that papers are difficult to reproduce (that's expected by how science works), but that some (most?) fields currently allow large areas of research to move forward fairly unchecked. In the rush to publish novel results and cover a broad area, no one goes back to make sure the previous results hold up. Thus, we end up with situations where there are a lot of topics that should be explored more deeply but aren't due to the pursuit of novelty.

If journals encouraged more follow-up and incremental papers, this problem would resolve itself. Once a paper is published, there's almost always follow-up work to see how well the results really hold up. But, publishing that work is more difficult and doesn't help advance a career, especially if the original paper was not yours, so the follow-up work rarely gets done.

tl;dr: for the general public, it's important to understand that the point of publishing is to share work, peer review just makes sure the work was done properly and makes no claims on correctness, and science is fluid. For scientists, yeah, there are some issues with the constant quest for novel publications vs. incremental work.

-Chris

Comment Yes, Please!!! (Score 5, Interesting) 161

For 99% of the applications out there, there's no reason not to do it in the browser if you're starting from scratch today. Most (useful) mobile apps simply display remote content in a way that's contextually relevant to the moment (Yelp, shopping (ordering and product reviews), *Maps, news sites, social media, etc). There's no reason for any of those to be app based. Most apps that aggregate content are poorly designed and not updated frequently. Couple that with the fact that most don't have useful offline modes (the only reason to have an app for content, IMHO), and it just makes sense to optimize for the mobile browser rather than spend all the time and effort on an app. Hell, even most games I play casually have no reason to be written as apps any more - any word game or puzzler would work fine in the browser.

Instead, put the effort into good mobile design and development practices. Hire good developers to optimize for JavaScript. Hire good developers to optimize your backend operations to reduce latency. Find what features are missing in HTML/JavaScript (e.g., a good client side persistence layer) and encourage the browser vendors to improve there so everyone can benefit.

For context, I develop complex scientific software. We use the browser (desktop) as our client and push the limits of what you can do there. Mobile is not far behind and should be the first choice for new development.

-Chris

Comment Re:Lets use correct terminology. (Score 4, Insightful) 177

As others have pointed out already in this thread: in the US, if you're laid off you can collect the unemployment insurance you've already paid for. If you're fired or leave voluntarily, you can't collect unemployment insurance.

I'm sure there are other legal differences, but as an employee, this is the important one.

If you are planning on leaving a job on good terms, it's always worth scheduling it around a layoff. You can tell your boss (discreetly) and see if you can be laid off instead. The win for your boss is that two employees won't be lost (you plus the person who'd have been laid off). The win for you is that you get severance and can collect unemployment.

Comment We restrict our kids' access to YouTube (Score 2) 92

We cut the cord years ago and have used a mix of Hulu, Netflix, and the various network apps for content (PBS Kids, etc). YouTube has always been problematic, not just for the ads, but also for the content and the "next up" algorithm. As a result, we only let the kids use YouTube (and YouTube Kids) when we're in the room with them and have our finger on the remote.

Here are the specific problems with YouTube:

Ads: The ads are not targeted at all. If you've ever paid attention to ads, you already know the promise of targeted advertising is bunk. The problem with YouTube is that it's doubly bunk when it comes to kids' programming on normal YouTube (and apparently on YouTube Kids as well). Completely inappropriate ads will pop up after kids' shows. It's not rocket science to tweak your algorithm to play a kid-appropriate ad after a cartoon, even if it means the occasional adult will get the wrong ad.

Content: This is trickier. A lot of the cartoon content on YouTube consists of collections of episodes bundled into a single video. The problem is, the bundles are created by fans and you have no idea what's in them until you watch them. Sometimes they're crappy screen captures. Sometimes they're dubbed in another language (without calling it out in the title). In those cases, you spend 10 minutes with the kids just trying to find one they can watch. The worst, however, are the ones that are "archival" and created by superfans. My best example is a compilation of Donald Duck cartoons that includes the WWII episode where Donald fights Hitler*. Great episode... for adults who understand the context. Terrible episode for kids. YouTube has no good way of warning parents about this.

Next up: This is easy. The algorithm appears to randomly pick something that has the same word in the title as the previous video or has been tagged as similar. It's very easy to go from Donald Duck to Duck Hunting to Duck Dynasty to an unhinged Phil Robertson rant. Leave your kids alone with YouTube at your own risk!
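The fix described under "Ads" above really is simple. As a purely hypothetical sketch (the field names and data structures are made up for illustration; this is not YouTube's actual system), restricting the ad pool based on the preceding video might look like:

```python
def pick_ad(ads, preceding_video):
    """Pick an ad to show; if the video just watched is kid content,
    only consider ads cleared for children (all fields hypothetical)."""
    if preceding_video.get("kid_friendly"):
        pool = [ad for ad in ads if ad.get("kid_safe")]
    else:
        pool = ads
    return pool[0] if pool else None

ads = [
    {"id": "beer", "kid_safe": False},
    {"id": "toys", "kid_safe": True},
]
cartoon = {"title": "Donald Duck compilation", "kid_friendly": True}
print(pick_ad(ads, cartoon)["id"])  # only the kid-safe ad is eligible
```

The cost of this approach is exactly the trade-off noted above: an adult watching a cartoon occasionally gets a kid-targeted ad, which seems like the right side to err on.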

Look, Google has more money than God and a lot of smart engineers. If they cared about this, they could fix it. YouTube Kids isn't the solution.

-Chris

*does that count for Godwin?

Comment Re:I wonder (Score 1) 258

The same thing train engineers are thinking.

Trains have solved the problem that driverless cars are trying to solve. Instead of cameras, GPS, and detailed maps, they simply use tracks to guide them. Guess what? After a few hundred years of using trains, we've found it helps to have a human on board. Same will be true of "driverless" cars and trucks.

-Chris

Comment Didn't we just learn why this isn't a good idea? (Score 1) 96

http://search.slashdot.org/sto...

Sure, I can Google my symptoms and get a superficial understanding of some medical conditions, but that doesn't really mean I have the context to make any sense of them.

Do you really want the person using stackoverflow as their "brain" building your app? No. You want someone who already knows how to build apps and uses it as a reference on occasion. Big difference and the same one with medicine.

-Chris

Comment Re:Suck it Millenials (Score 1) 407

Nice points. I have two kids under 6 right now and was starting to worry that smartphones might replace computers for most of what they do and thus never expose them to an easy-to-program platform. What's really exciting for them is the abundance of hobbyist computers and embedded project kits available now. They're going to grow up in a world where simple microcontroller-style projects are completely accessible to them. Makes me almost want to be 6 again!

Today, I can teach my kids some basic UI programming with HTML/CSS/Javascript (not much harder than VB) to get them familiar with high level concepts. I can also get them a BrickPi or any other embedded(ish) system and teach them how hardware works and how to interface with external devices. What a great time to learn technology!

Millennials, by and large, got shafted when it comes to learning how computers work. Most of them went to school when Java was the only language being taught and Linux was becoming too complicated to easily understand for the casual user. When they started working, a little Javascript and CSS got them really far. There weren't many opportunities to really understand how the full stack works. And, with the rise of social media and apps, their exposure to technology was more social than technical. As others in this thread have pointed out, being able to use a simple UI on an iPhone doesn't make you the technology whiz that the media keeps saying you are.

Millennials can still catch up, but I think the next generation is the one that's really going to be primed to do amazing things.

-Chris

Comment Re:It's just hard work and machine learning (Score 2) 68

I don't think it's that computers and machine learning really trump an exact model. It's more that manually curated semantic information is difficult to do well, and even when done well it is simply the curator's interpretation of the key points. Ontologies and controlled vocabularies (necessary to make semantic solutions work) are always biased towards their creators' view of the world. Orthogonal interpretations rarely fit with the ontologies and require mapping between knowledge systems. Rather than simplifying things, this just creates another layer of abstraction and metadata that now must be managed.*

Machine learning, on some level, basically admits this flaw in structured knowledge representation and punts. Instead, it provides tools for querying knowledge bases and finding patterns in them. I think the latter part is just as flawed as manual curation, but the query tools combined with a human are incredibly powerful.

A simple example: Yahoo originally indexed and categorized the Web. When I interviewed there in '96 (and, silly me, turned down the offer), they had a room full of people that did just that. Google, on the other hand, used a graph algorithm combined with standard text search methods to leverage the structure of the web to give good search results. Yahoo eventually bailed on manual curation and we learned how to leverage Google's approach to search to mine knowledge.
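To make the contrast concrete, here's a toy sketch of the graph-algorithm idea behind Google's approach (a simplified PageRank-style iteration, not their production method): no human assigns categories; a page's importance falls out of nothing but the link structure.

```python
def pagerank(links, damping=0.85, iters=50):
    """Rank pages purely from link structure: a page is important
    if important pages link to it. `links` maps page -> outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page keeps a small baseline rank (the "teleport" term).
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # A page shares its rank equally among the pages it links to.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
            else:
                # A dangling page spreads its rank evenly over everyone.
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
        rank = new
    return rank

# Three pages: "a" and "b" both link to "hub", so "hub" ranks highest -
# no manual categorization required.
ranks = pagerank({"a": ["hub"], "b": ["hub"], "hub": ["a"]})
```

That the ranking emerges from the data rather than from a curator is precisely the "punt" described above, and it's why it scaled past the room full of people.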

tl;dr: manual and automated curation will never properly capture humans' representation of knowledge. Instead, better tools plus the human brain will improve our ability to leverage knowledge.

-Chris

*and there's that old saying: every software problem can be solved with another layer of abstraction.

Comment Re:'Virtual Water': Fee Fie Foe Fum, I Smell ENRON (Score 2) 417

I was going to make a similar post...

It's my understanding that the current almond tree bubble is driven by (Wall Street?) investors who noticed the price mismatch in water and are using it to make a quick buck, the rest of the state be damned. Of course, these funds have deep pockets and can probably lobby effectively to keep prices where they are until they cash out.

Seems very much like a variation on ENRON but with water instead of gas.

Comment Re:this is just nonsense. (Score 1) 144

Feeding a troll here, I know...

I'm pretty sure I had more Legos than you growing up. But I didn't make the Legos, I built with them. I was also never under the delusion that my lego skillz would translate to a job building lego buildings. It was a fun, creative activity that required almost no learning and occupied most of my childhood.

The current push for everyone to program is the exact opposite of that. Learning to program, for all but us autodidacts, requires coursework and commitment. Sure, once you can do it, it's a lot of fun. But, to keep the lego analogy going, it's like requiring a basic understanding of mechanical engineering before being allowed to use Legos (sorry Susan Williams - they'll always be Legos, not lego bricks).

Comment Re:this is just nonsense. (Score 1) 144

Bricks are also a fundamental building block of our modern world. But I'll be damned if I know how to make one.

Not everyone needs to know everything. I love to code but I also appreciate that my friends who build houses for a living could give a shit about learning to code.

I'm amazed that people on a tech forum don't get that.

Comment R is not a programming language (Score 5, Insightful) 144

It's a statistical computing environment. R is much closer to what VB was pre-VB6 - a loosely defined domain-specific language with lots of libraries aimed at a specific task. It's not really a general purpose programming language and not a great one to learn if you want to learn to program.

If you do a lot of number crunching and want to move beyond Excel, R is a great choice (as is MATLAB, S-PLUS, or any of the others aimed at analytics).

If you do analytics AND want to learn to program, go Python and NumPy/Pandas.

If you just want to learn to program, VB, JavaScript, Python, Java are all good. Just find what you'd like to program and see what languages people are using.

And yes, at some point, pick up a few more languages if you find you like programming.
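For the analytics-plus-programming route, here's a minimal taste of what Pandas gives you over a spreadsheet (the data below is made up purely for illustration):

```python
import pandas as pd

# A tiny stand-in for data you might otherwise keep in Excel.
sales = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "units":  [10, 15, 7, 8],
    "price":  [2.0, 2.0, 3.0, 3.0],
})

# A computed column and a group-by: two lines that replace a lot
# of spreadsheet formula-dragging.
sales["revenue"] = sales["units"] * sales["price"]
by_region = sales.groupby("region")["revenue"].sum()
print(by_region)
```

Once the same analysis lives in a script, you're already programming; growing it into a real program is a much smaller step than it would be from a workbook.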

-Chris

Comment So we'll all have our own CRM for friends? (Score 2) 209

A few random thoughts on this:

Influencing people by having instant recall is a classic sales trick. Old school sales people wrote notes in their Rolodex to remember spouses' names, birthdays, etc. Today, Salesforce, Zoho, and the like (hell, even LinkedIn) handle this role. However, as soon as you realize that the sales person remembered something using a CRM rather than actually remembering it, that interaction quickly becomes awkward. In the past, sales techniques like these weren't well known outside of sales circles. Nowadays, everyone knows about them and they're less effective. The value in the technique was that people weren't aware it was being used and mistook the sales person's remembering personal details for actual friendship, rather than seeing it as just a sales trick. The same will happen with timelines - we'll quickly sort out those who use them as a gimmick from those who are sincere.

Another angle is the fitbit/life tracking. You know who obsessively tracks everything they do in hopes of improving themselves? People who obsessively track everything in hopes of improving themselves. The rest of us don't. Those people will always be around and will use these tools, the rest of us won't.

More importantly, on the personal side of things: anyone who's accumulated a lifetime's worth of photos knows you never really go back and look at them in any detail. Sure, once in a while you'll reminisce, but you never do the detailed analysis of your past that these data-hoarding stories predict. Instead, you live your life in the present, learning from the past with an eye toward the future. A few million years of evolution has made our brains very good at that. Every attempt to document and catalog our lives externally has failed to live up to what our brain already does (hint: we likely don't have perfect recall for evolutionarily important reasons).

From the corporate side, data will be tracked as long as it can be traced back to profits. Right now, most of the profits are going to companies selling big data analysis services. It's only a matter of time before their customers move on to the next marketing trend.

tl;dr: live in the present and stop trying to cheat nature. :)

-Chris

ps: yes, the government collecting all this data is scary as hell. Voting can help fix that (at least in America - it'll take a few elections, but it's possible).

Comment Re:What is this ? Keep asking the same question (Score 1) 291

This. Most of the workforce would benefit from basic education in all aspects of business. Sales, marketing, finance, project management, business development, etc.

In our neck of the corporate world (software), too few employees understand how business actually functions and what it really takes to make a business work. The current culture of "just build an app and you're set for life" leaves out many of the key steps needed to build a business. As a result, most promising applications go nowhere and most "successful" exits are really just acqui-hires (making the entrepreneur just a well paid headhunter, which has nothing to do with coding ability).

Simple things like knowing how to develop top down and bottom up models of a market would help app developers understand who their users are and how they might generate revenue to continue to fund their app. Even something as simple as understanding that revenue is actually necessary for success is lost on most developers I know.

-Chris
