
Comment: I had microwave Internet 15 years ago... (Score 3, Informative) 219

In Louisville, CO, I lived in one of the few neighborhoods that was skipped over for broadband in 1999. Sprint set up a microwave service that filled in the gap. Bandwidth was awesome - I was getting 10-30 Mbps regularly. The downside was the latency - 100 ms ping times were the norm. I remember trying to play Duke Nukem with friends and having the unfair "advantage" of disappearing regularly when my client didn't ping back in time. Being line-of-sight, there were also issues with trees occasionally swaying in front of the dish (a pizza box attached to my roof) and snow blocking the signal.

As others have pointed out, microwave Internet isn't something new and, unfortunately, in the real world isn't a perfect solution.


Comment: Hadoop was never really the right solution... (Score 5, Insightful) 100

by rockmuelle (#49686651) Attached to: Is Big Data Leaving Hadoop Behind?

A scripting language with a good math/stats library (e.g., NumPy/Pandas) and a decent RAID controller are all most people really need for most "big data" applications. If you need to scale a bit, add a few nodes (and put some RAM in them) and a job scheduler into the mix, and learn some basic data decomposition methods. Most big data analyses are embarrassingly parallel. If you really need 100+ TB of disk, set up Lustre or GPFS. Invest in some DDN storage (it's cheaper and faster than the HDFS system you'll build for Hadoop).
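To make "embarrassingly parallel" concrete, here's a minimal sketch using nothing but the Python standard library: decompose the data, count independently per worker, merge at the end. The sample data and worker count are invented for illustration.

```python
# Embarrassingly parallel counting with a process pool: each worker counts
# over its own slice of the data, and the partial counts are merged at the end.
from collections import Counter
from multiprocessing import Pool

def count_words(lines):
    """Count word occurrences in one independent slice of the data."""
    c = Counter()
    for line in lines:
        c.update(line.split())
    return c

def parallel_count(lines, workers=4):
    # Decompose: roughly equal slices, one per worker.
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(count_words, chunks)
    # Reduce: merge the per-worker counts.
    return sum(partials, Counter())

if __name__ == "__main__":
    data = ["error timeout", "ok", "error retry", "ok ok"] * 1000
    totals = parallel_count(data)
    print(totals["ok"])  # 3000 on this sample
```

Swapping the list of strings for files on a fast RAID volume is the same decomposition - which is the whole point: no cluster framework required.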

Here's the breakdown of that claim in more computer-sciencey terms: almost all big data problems are simple counting problems with some stats thrown in. For more advanced clustering tasks, most math libraries have everything you need. Most "big data" sets are under a few TB. Most big data problems are also I/O bound. Single nodes are actually pretty powerful and fast these days: 24 cores, 128 GB RAM, and 15 TB of disk behind a RAID controller that can give you 400 MB/s data rates will cost you just barely five figures. This single node will outperform a standard 8-node Hadoop cluster. Why? Because the local, high-density disks that HDFS encourages are slow as molasses (30 MB/s). And...

Hadoop has a huge abstraction penalty for each record access. If you're doing minimal computation for each record, the cost of delivering the record dominates your runtime. In Hadoop, the cost is fairly high. If you're using a scripting language and reading right off the file system, your cost for each record is low. I've found Hadoop record access times to be about 20x slower than Python line read times from a text file, using the _same_ file system for Hadoop and Python (of course, Hadoop puts HDFS on top of it). In Big-O terms, the 'c' we usually leave out actually matters here - O(1*n) vs. O(20*n). 1 hour or 20 hours, you pick.
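The 20x figure above is the author's own measurement; here's a rough sketch of how you'd time just the Python side of that baseline - raw line reads from a local text file. The file contents and sizes are made up for illustration.

```python
# Measure the per-record cost of a plain sequential line scan - the
# scripting-language baseline that Hadoop's record delivery is compared to.
import os
import tempfile
import time

def time_line_reads(path):
    """Return (records, seconds) for a plain sequential line scan."""
    start = time.perf_counter()
    n = 0
    with open(path) as f:
        for _ in f:  # per-record cost here is just buffered I/O
            n += 1
    return n, time.perf_counter() - start

# Build a throwaway file so the sketch is self-contained.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    for i in range(100_000):
        tmp.write(f"record,{i},some,fields\n")
    path = tmp.name

records, secs = time_line_reads(path)
print(f"{records} records in {secs:.3f}s ({records / secs:,.0f} records/s)")
os.remove(path)
```

Run the same workload through a Hadoop streaming job over the same data and divide the two numbers - that ratio is the constant factor the Big-O notation hides.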

If you're really doing big data stuff, it helps to understand how data moves through your algorithms and architect things accordingly. Almost always, a few minutes of big-O thinking and some basic knowledge of your hardware will give you an approach that doesn't require Hadoop.

tl;dr: Hadoop and Spark give people the illusion that their problems are bigger than they actually are. Simply understanding your data flow and algorithms can save you the hassle of using either.


Comment: Re:Old browsers (Score 2) 276

by rockmuelle (#49666879) Attached to: Ask Slashdot: What's the Future of Desktop Applications?

Two years is our horizon for browser support. Two other trends that have helped us in this regard are (1) that most browsers auto-update or at least nag you a lot and (2) IT departments are more accepting of users running Chrome/Safari/Firefox alongside IE. We're targeting enterprise/internal users, not everyone on the Web, so we can also put some requirements in place when we deploy.

Most of our functionality uses standard HTML/CSS/DOM features, so we haven't had any issues with features being dropped. We don't rely on 3rd-party extensions such as Flash or on APIs/features that don't have broad support. The decision to use Canvas over SVG for complex visualizations is partly due to this - SVG support is spotty across browsers, while Canvas is pretty stable now. Canvas is also much faster at rendering large data sets, which is the other reason for using it.


Comment: Why we targeted the browser... (Score 5, Interesting) 276

by rockmuelle (#49665155) Attached to: Ask Slashdot: What's the Future of Desktop Applications?

I run a company that develops a laboratory informatics platform for data-intensive science applications that mix wet lab and analytics operations into single workflows, with gene sequencing as the motivating application - think LIMS with a pipeline and visualization engine, if you're familiar with the space. (Lab7 Systems, if you're curious.)

When we started development a few years ago, we had to decide whether to build a desktop application or a browser-based one. At the time, this wasn't an easy decision. Some aspects of the UI are straightforward form-style interfaces, but others are graphics-heavy visualizations of very large data sets (100+ GB in some cases). Scientific and information visualization have almost always benefited from local graphics contexts and native rendering engines. In addition, the data decomposition tasks often require efficient implementations in compiled languages. Our platform also controls analysis processes on large clusters, another task not well suited to the browser.

We gambled a bit and decided that the browser would be our primary user interface. Two trends at the time helped us make the decision (and luckily they both held steady):

  (1) The JavaScript engines in all the major browsers get faster with each new release and now outperform other scripting languages for many tasks.
  (2) The JavaScript development community is maturing, with more well-engineered and stable libraries available.

A few other considerations helped us make the call:

  (1) Our platform is a multi-user system. A desktop client would add to the support burden for our customers.
  (2) Our backend needs to integrate with compute clusters, scientific instruments, and large, high-performance file systems. It is server-based, regardless of the client.
  (3) The data scales we were dealing with also required "out-of-core" (to use an older term) algorithms for rendering, so the client would never get entire data sets at once.
  (4) REST/JSON... XML, XML-RPC, SOAP, and all the others are a pain to develop for (I speak from experience); REST/JSON significantly reduced the amount of code we needed to maintain state between the client and server.
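An illustrative, stdlib-only sketch of point (4): round-tripping a bit of client/server state as JSON versus hand-building the equivalent XML. The payload fields here are hypothetical, not Lab7's actual API.

```python
# JSON vs. XML for shuttling state: JSON is one call each way with types
# preserved; XML needs the tree built node by node and types re-coerced.
import json
import xml.etree.ElementTree as ET

state = {"session": "abc123", "workflow": "rnaseq", "step": 4}

# JSON: serialize, deserialize, done - and the integer survives the trip.
wire = json.dumps(state)
assert json.loads(wire) == state

# XML: build the envelope by hand, re-parse, then coerce types manually.
root = ET.Element("state")
for key, value in state.items():
    ET.SubElement(root, key).text = str(value)
parsed = ET.fromstring(ET.tostring(root))
recovered = {child.tag: child.text for child in parsed}
recovered["step"] = int(recovered["step"])  # manual type coercion
assert recovered == state
```

Multiply that manual marshalling code by every endpoint and every field, and the SOAP-era pain becomes obvious.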

Since we made the call to use the browser, we haven't looked back. Early on there were some user interactions that were tricky to implement across all browsers, but today they've all caught up. Our application looks much more like a desktop or (*shudder*) Flash application, with a very rich UI (designed by an actual UX team that gets scientific software ;) ) and complex visualizations. It's also been relatively straightforward to implement, thanks in large part to the maturity of some JavaScript libraries (we use jQuery, D3 (for complex filtering, but not for visualization), Canvas, Backbone, and a few others).

Personally, I can't imagine ever writing a desktop application again. The browser is just too convenient and, in the last few years, finally powerful enough for most tasks.


Comment: Re:Investments? (Score 1, Insightful) 202

by rockmuelle (#49663729) Attached to: Study Reveals Wikimedia Foundation Is 'Awash In Money'

$20M on salaries sounds about right for an organization with a complex IT infrastructure and global reach. Not sure what the outrage is here, unless you're expecting the people that keep the site up to work for free.

If they were developing the content as well, I'd expect their salaries to be in the $30-50M range. $1M probably gives you 6-8 editorial FTEs, so $30-50M would give you the few hundred editors and their support staff necessary to produce the content. The numbers are different for IT staff - 4-5 FTEs/$1M, so $20M could cover 80 technical staff and a few managers. Of course, there's all the other staff as well, so the technical staff numbers are probably lower.
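The back-of-envelope staffing math above, as a tiny script. The FTEs-per-million rates are the comment's own rough estimates, not Wikimedia figures.

```python
# Rough headcount from a budget and a cost-per-FTE rate (both in $M).
def ftes(budget_millions, ftes_per_million):
    return budget_millions * ftes_per_million

# Editorial: ~6-8 FTEs per $1M, so $30-50M buys a few hundred editors.
print(ftes(30, 6), ftes(50, 8))  # 180 400

# Technical: ~4-5 FTEs per $1M, so $20M covers roughly 80-100 staff.
print(ftes(20, 4), ftes(20, 5))  # 80 100
```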

Other posts have already pointed out that $50M in the bank is a smart move for a non-profit.


Comment: Re:HIPAA violation (Score 1) 101

by rockmuelle (#49632425) Attached to: Apple's Plans For Your DNA

But the FDA did smack down 23andMe pretty hard for making medical claims based on SNP profiling.* While HIPAA isn't the right regulatory regime here, the FDA definitely is. 23andMe tried the Uber approach to flouting regulations and found that when actual human health is involved, "trust us" doesn't cut it.


*Can we please stop calling what these companies are doing "DNA sequencing"? It's not and never has been. It's just looking for specific, known markers in your genome. Sequencing is actually getting a readout of your genome.

Comment: Just Like the "Liberal Media" (Score 5, Insightful) 347

Growing up in the 80s, all I heard was how liberal the media was and how we had to fight against it. Now, with the benefit of hindsight, it's clear that the phrase "liberal media" was a conservative talking point that they repeated ad infinitum until people stopped questioning it and just assumed it was true.

The same thing is happening now with claiming scientists are politically or monetarily motivated (the conservative machine hasn't settled on which script to stick with).

Look, I'm a scientist. I know scientists. I know scientists at NOAA, NCAR, NIST, the Labs, in academia, in industry, at biotechs, at agri-science companies, at space exploration companies, and at oil and gas companies. I know conservative scientists, liberal scientists, agnostic scientists, religious scientists, and hedonistic scientists.

You know what motivates scientists? Science. And to a lesser extent, their ego. If someone doesn't love science, there's no way they can cut it as a scientist. There are no political or monetary rewards available to scientists in the same way they're available to lawyers and lobbyists.

Science is hard work for little pay and possibly some recognition. Unfortunately, the conservative noise machine is slowly building a narrative that scientists are all politically and monetarily motivated. The public doesn't really know any better and will believe this to be true if they hear it enough.

This attempt to paint scientists as political actors is pure bullshit and demeans the hard work and great sacrifices working scientists make every day.


Comment: Did a paid shill write this summary? (Score 5, Informative) 179

by rockmuelle (#49603357) Attached to: NASA Gets Its Marching Orders: Look Up! Look Out!

Seriously. The real story with this bill is that the Republicans are defunding the climate monitoring programs. It will take decades to regain the capabilities we'll lose by defunding them now. There's no turf war between NASA and NOAA, just one between Republicans and science.

Nice job trying to write a summary for geeks that attempts to bury the real story.

Comment: Re:39/100 is the new passing grade. (Score 4, Insightful) 174

Gah. I have mod points but want to add to this conversation.

The point of publishing is to share results of an experiment or study. Basically, a scientific publication tells the audience what the scientist was studying, how they did the experiment, what they found, and what they learned from it. The point of peer review is to review the work to make sure appropriate methods were followed and that the general results agree with the data. Peer review is not meant to verify or reproduce the results, but rather just make sure that the methods were sound.

Scientific papers are _incremental_ and meant to add to the body of knowledge. It's important to know that papers are never the last word on a subject and the results may not be reproducible. It's up to the community to determine which results are important enough to warrant reproduction. It's also up to the community to read papers in the context of newly acquired knowledge. An active researcher in any field can quickly scan old papers and know which ones are likely no longer relevant.

That said, there is a popular belief that once something is published, it is irrefutable truth. That's a problem with how society interacts with science. No practicing scientist believes any individual paper is the gospel truth on a topic.

The main problem in science that this study highlights is not that papers are difficult to reproduce (that's expected by how science works), but that some (most?) fields currently allow large areas of research to move forward fairly unchecked. In the rush to publish novel results and cover a broad area, no one goes back to make sure the previous results hold up. Thus, we end up with situations where there are a lot of topics that should be explored more deeply but aren't due to the pursuit of novelty.

If journals encouraged more follow-up and incremental papers, this problem would resolve itself. Once a paper is published, there's almost always follow-up work to see how well the results really hold up. But, publishing that work is more difficult and doesn't help advance a career, especially if the original paper was not yours, so the follow-up work rarely gets done.

tl;dr: for the general public, it's important to understand that the point of publishing is to share work, peer review just makes sure the work was done properly and makes no claims on correctness, and science is fluid. For scientists, yeah, there are some issues with the constant quest for novel publications vs. incremental work.


Comment: Yes, Please!!! (Score 5, Interesting) 161

by rockmuelle (#49562415) Attached to: Has the Native Vs. HTML5 Mobile Debate Changed?

For 99% of the applications out there, there's no reason not to do it in the browser if you're starting from scratch today. Most (useful) mobile apps simply display remote content in a way that's contextually relevant to the moment (Yelp, shopping (ordering and product reviews), *Maps, news sites, social media, etc.). There's no reason for any of those to be app-based. Most apps that aggregate content are poorly designed and not updated frequently. Couple that with the fact that most do not have useful offline modes (the only reason to have an app for content, IMHO), and it just makes sense to optimize for the mobile browser rather than spend all the time and effort on an app. Hell, even most games I play casually have no business being written as apps any more - any word game or puzzler would work fine in the browser.

Instead, put the effort into good mobile design and development practices. Hire good developers to optimize for JavaScript. Hire good developers to optimize your backend operations to reduce latency. Find what features are missing in HTML/JavaScript (e.g., a good client side persistence layer) and encourage the browser vendors to improve there so everyone can benefit.

For context, I develop complex scientific software. We use the browser (desktop) as our client and push the limits of what you can do there. Mobile is not far behind and should be the first choice for new development.


Comment: Re:Lets use correct terminology. (Score 4, Insightful) 177

by rockmuelle (#49496301) Attached to: MakerBot Lays Off 20 Percent of Its Employees

As others have pointed out already in this thread: in the US, if you're laid off you can collect the unemployment insurance you've already paid for. If you're fired or leave voluntarily, you can't collect unemployment insurance.

I'm sure there are other legal differences, but as an employee, this is the important one.

If you are planning on leaving a job on good terms, it's always worth scheduling it around a layoff. You can tell your boss (discreetly) and see if you can be laid off instead. The win for your boss is that two employees won't be lost (you plus the person who'd have been laid off). The win for you is that you get severance and can collect unemployment.

Comment: We restrict our kids' access to YouTube (Score 2) 92

We cut the cord years ago and have used a mix of Hulu, Netflix, and the various network apps for content (PBS Kids, etc). YouTube has always been problematic, not just for the ads, but also for the content and the "next up" algorithm. As a result, we only let the kids use YouTube (and YouTube Kids) when we're in the room with them and have our finger on the remote.

Here are the specific problems with YouTube:

Ads: The ads are not targeted at all. If you've ever paid attention to ads, you already know the promise of targeted advertising is bunk. The problem with YouTube is that it's doubly bunk when it comes to kids' programming on normal YouTube (and apparently on kids' YouTube as well). Completely inappropriate ads will pop up after kids' shows. It's not rocket science to tweak your algorithm to play a kid-appropriate ad after a cartoon, even if it means the occasional adult will get the wrong ad.
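The "not rocket science" fix suggested above really is a one-branch filter: if the video looks like kids' content, draw only from a kid-safe ad pool, accepting that some adults will occasionally see a kids' ad. This is a hypothetical sketch - the tags and ad pools are invented.

```python
# Restrict ad selection to a kid-safe pool whenever the video carries a
# kids/cartoon tag; fall back to the full inventory otherwise.
import random

KID_SAFE_ADS = ["toy blocks", "crayons", "juice boxes"]
GENERAL_ADS = ["pickup trucks", "insurance", "energy drinks"] + KID_SAFE_ADS

def pick_ad(video_tags, rng=random):
    if {"kids", "cartoon"} & set(video_tags):
        return rng.choice(KID_SAFE_ADS)  # err toward kid-safe
    return rng.choice(GENERAL_ADS)

print(pick_ad({"cartoon", "classic"}))  # always something from KID_SAFE_ADS
```

The false-positive cost (an adult seeing a crayon ad) is trivially small compared to the false-negative cost the comment describes.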

Content: This is trickier. A lot of the cartoon content on YouTube consists of collections of episodes bundled into a single video. The problem is, the bundles are created by fans and you have no idea what's in them until you watch them. Sometimes they're crappy screen captures. Sometimes they're dubbed in another language (without calling it out in the title). In those cases, you spend 10 minutes with the kids just trying to find one they can watch. The worst, however, are the "archival" ones created by superfans. My best example is a compilation of Donald Duck cartoons that includes the WWII episode where Donald fights Hitler*. Great episode... for adults who understand the context. Terrible episode for kids. YouTube has no good way of warning parents about this.

Next up: This is easy. The algorithm appears to randomly pick something that has the same word in the title as the previous video or has been tagged as similar. It's very easy to go from Donald Duck to Duck Hunting to Duck Dynasty to an unhinged Phil Robertson rant. Leave your kids alone with YouTube at your own risk!
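To illustrate the failure mode, here's a toy "next up" picker that scores candidates purely by shared title words - and happily chains Donald Duck into duck hunting. The catalog and scoring are invented; YouTube's real algorithm is obviously more complex, but the drift mechanism is the same.

```python
# Naive "next up": pick the catalog entry with the most title words in
# common with the current video. Shared words carry no topical meaning,
# so "Duck" bridges kids' cartoons to hunting and reality TV.
def next_up(current_title, catalog):
    current = set(current_title.lower().split())
    return max(
        (v for v in catalog if v != current_title),
        key=lambda v: len(current & set(v.lower().split())),
    )

catalog = [
    "Donald Duck Classic Cartoons",
    "Duck Hunting Tips",
    "Duck Dynasty Rant Compilation",
]
print(next_up("Donald Duck Classic Cartoons", catalog))
# -> a "Duck" video, just not a kids' one
```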

Look, Google has more money than God and a lot of smart engineers. If they cared about this, they could fix it. YouTube Kids isn't the solution.


*Does that count for Godwin's Law?

Comment: Re:I wonder (Score 1) 258

by rockmuelle (#49409513) Attached to: A Robo-Car Just Drove Across the Country

The same thing train engineers are thinking.

Trains have solved the problem that driverless cars are trying to solve. Instead of cameras, GPS, and detailed maps, they simply use tracks to guide them. Guess what? After a few hundred years of using trains, we've found it helps to have a human on board. Same will be true of "driverless" cars and trucks.


Comment: Didn't we just learn why this isn't a good idea? (Score 1) 96

by rockmuelle (#49398815) Attached to: The Democratization of Medical Diagnosis and Discovery

Sure, I can Google my symptoms and get a superficial understanding of some medical conditions, but that doesn't really mean I have the context to make any sense of them.

Do you really want the person using stackoverflow as their "brain" building your app? No. You want someone who already knows how to build apps and uses it as a reference on occasion. Big difference and the same one with medicine.


Comment: Re:Suck it Millenials (Score 1) 407

by rockmuelle (#49353953) Attached to: Millennial Tech Workers Losing Ground In US

Nice points. I have two kids under 6 right now and was starting to worry that smartphones might replace computers for most of what they do and thus never expose them to an easy-to-program platform. What's really exciting for them is the abundance of hobbyist computers and embedded project kits available now. They're going to grow up in a world where simple microcontroller-style projects are completely accessible to them. Makes me almost want to be 6 again!

Today, I can teach my kids some basic UI programming with HTML/CSS/JavaScript (not much harder than VB) to get them familiar with high-level concepts. I can also get them a BrickPi or any other embedded(ish) system and teach them how hardware works and how to interface with external devices. What a great time to learn technology!

Millennials, by and large, got shafted when it comes to learning how computers work. Most of them went to school when Java was the only language being taught and Linux was becoming too complicated for the casual user to easily understand. When they started working, a little JavaScript and CSS got them really far. There weren't many opportunities to really understand how the full stack works. And, with the rise of social media and apps, their exposure to technology was more social than technical. As others in this thread have pointed out, being able to use a simple UI on an iPhone doesn't make you the technology whiz the media keeps saying you are.

Millennials can still catch up, but I think the next generation is the one that's really going to be primed to do amazing things.

