(Old academic joke)
I'm not sure this will attract the best candidates. For a Master's program, candidates come from three pools:
(1) Students who just finished undergrad and want additional specialization before entering the workforce
(2) Working professionals who want to return to school to gain additional skills or enter a new field
(3) Those who never found a job and are trying to wait out the market in school
Of these, only (1) and (3) likely have the time to commit to a MOOC. Those in (2) could (and many do), but their normal responsibilities will always take priority.
The problem is that a MOOC is a huge time commitment. If it's the only way to get into a Master's program, you're taking a huge risk if you're already working and have responsibilities. The GRE/GMAT + an application + interview is a reasonable ask for something that's not guaranteed and likely has an acceptance rate of 10-20%. A three-month time commitment isn't. This will simply exclude the most desirable and qualified group of students and limit the pool of applicants to those who had the free time to commit to it.
It's kinda like companies that require programming assignments prior to interviews. That tactic, while trendy and popular, tends to exclude the top 10% of coders simply because they have better ways to spend their weekends and evenings and know it.
Uber is great in the same way Pets.com was great: they're burning their investors' money to run an unsustainable business. I loved getting 40lb bags of dog food delivered for free and I love paying less than the driver is making for my Uber rides. As a consumer, I win!
What's new about Uber compared to Pets.com is that Uber is the VC world's experiment in seeing if they can create illegal businesses and then use their huge piles of money to change the law in their favor. This is what should really scare everyone.
Huh? We're a small shop and use Jira just fine. But, we also don't blindly apply Agile(tm) either. We use agile (as in the manifesto version, not the Certified versions).
Jira is a productivity tool for managing tasks and workloads. If it's not effective for you, find another way to manage things. But, do find a way to manage your tasks and issues in a traceable manner. If you don't see the value in that, your process is likely the problem and no tool will fix it.
The basic idea behind the Mythical Man Month is essentially Amdahl's Law for human (instead of compute) resources. At some point, there's just no getting around it.
But, just like with parallel and distributed computing, there are always people who don't understand the basic tenets of it and think they've found a way to transcend it (I'm looking at you Hadoop users).
Learn it and never forget it:
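The compute version of the law is easy to see numerically. A minimal sketch (the `speedup` helper and the 10% figure are my own illustration, not from the comment above):

```python
# Amdahl's Law: the serial fraction s of a task caps overall speedup
# at 1/s, no matter how many workers (cores, or people) you add.
def speedup(serial_fraction, workers):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# Even with only 10% inherently serial work, 100 workers get you
# roughly 9x, and no number of workers can ever exceed 1/0.1 = 10x.
for n in (1, 10, 100, 1000):
    print(n, round(speedup(0.1, n), 2))
```

Swap "workers" for "programmers" and "serial fraction" for "communication and coordination overhead" and you have the Mythical Man Month in one line of arithmetic.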
*sigh* The core concept of progressivism is what most of us want - policy based on our current best understanding of how the natural and social worlds work. The fact that it's been used to promote questionable policies in the past shows its flexibility: as we learn more, those policies are abandoned. The alternative, blindly holding on to policies that have been proven not to work (supply side economics on the right, Marxism on the left) just shows... what? That adherents are too proud to admit mistakes and evolve?
It's true that in modern American/Western politics the term has a slightly different connotation, but to pretend that the idea of using data and knowledge to find good policies is new (which the OP claimed - millennials are the first generation to use data!) is silly. Smart people for centuries have been trying this approach and it's never caught on with the general public.
The fact that my original post got modded "funny" shows just how hard it is to get people to think seriously about this approach.
Data driven politics has a name. No need to reinvent it. Unfortunately, it's always struggled to get a strong following.
That's not exactly correct... the Google cars have incredibly precise maps of the roads they're on, not just the route, but maps of the actual surface of the road (e.g., where the potholes are). That level of detail available to the onboard computers is pretty much the same as having sensors on the road. It requires an incredible amount of prep work. Of course, map updates could be handled by sensors on other cars constantly providing real time information. It's a cool approach, but only practical when you have that level of detail available.
Google, et al, are showing very controlled research projects. Even though they're testing in the real world, they're still highly controlled experiments.
Sure, many of the problems are resolvable using this approach, but what we don't know is what new problems will evolve once there are more than a handful of self driving cars on the road. More research will help identify these, but anyone who's done real science or engineering knows that what works at small scale rarely scales as you would hope/expect.
Well, if we find a way to measure either of those using high-energy experiments, we'll get a few more decades out of the field.
Just when we think we're done, we're usually just at the beginning...
Um, you shouldn't be one, either.
Let's do the math: drop tens of millions of dollars of your investors' money to beat out the other institutional investors to a company with a cool prototype built from Kickstarter funds. Realize a few months into it that the tech, while kick ass, isn't quite ready for prime time and won't have an available market of 10M users for at least a few years. Do some more math and realize they'll run out of money before then and have to take on additional VC money, possibly in a down round that will affect your position. Call one of your successful investments with money to burn, ask them to buy you out of your investment and give everyone a good return. Repeat with the next cool tech. Claim you have the unique ability to spot unicorns. Raise more money for your fund.
_That's_ how you think like a modern VC.
Of course, this is just a variation on the IPO scams of the first boom. In this case, the few successful companies (Facebook, Google, etc) replace the role of the public in providing quick returns on questionable investments. The public foots the bill indirectly: some IPO money and shares are used for the buyouts and ad revenue provides the rest of it.
That's pretty much my point: most people can't even get it right with simple statistical models. I advocate sticking with the simple mathematical models and dumping the "turn-key", simple statistical models that keep getting people in trouble (the p-value being the prime example).
Those that truly grok the statistics behind the more complex models can still use them, but the bar for acceptance by the community needs to be much higher. (which goes into fixing peer review and down that whole rabbit hole...)
To illustrate, the summary could easily be restated this way:
"...field of [data science] frequently uses [machine learning] in an unhealthy way. Many [data scientists] don't use [machine learning] as a tool to describe reality, but rather as an abstract foundation for whatever theory they've come up with."
Replacing "math" with "machine learning" isn't going to make a difference if the practitioners don't understand how to use it properly. Machine learning models are much more subtle and complex than simple mathematical models and very easy to misuse. To use them properly, you really need a much stronger understanding of the math behind them than most people have.
See the entire field of psychology and most GWAS studies for an example of where over reliance on (simple) models can get you into a lot of trouble.
That reminds me of a Java "class" I took in the late 90s. The "instructor" kept talking about how great "jay vac" was and how it made Java run faster. Yeah, that was javac he was referring to ("java see"). Took all of us programmers half the day to figure that one out. I still call it jayvac when I want to mess with people.
Indeed. In mountaineering circles it's always been Denali as well. Pretty much every group that has a physical connection to the mountain has always called it Denali.
Replication and reproducibility are not the same. Simply getting the source code and re-running the results is just replicating the study. It doesn't tell you anything about how reproducible the results are.
To be reproducible, someone should be able to use similar methods and get the same results. If a result is completely dependent on a specific build of the software, it's not robust enough to be considered reproducible.
Publications should require a concise written description of the method and solution that is complete enough that a competent practitioner could reproduce the results using whatever appropriate tools they want.
I'm dismayed that in CS the academic community is putting so much emphasis on replication and not enough on robust reproducibility.
The trouble with doing something right the first time is that nobody appreciates how difficult it was.