Hi TuringTest, thanks for your comment! Contrary to your past tense, Flow continues to be in active development, and continues to be deployed to new use cases, most recently a new user help forum on French Wikipedia, and a technical support forum on Catalan Wikipedia. Since the only way to roll out a system like this is to replace existing use of wiki pages, we're proceeding conservatively to test it out in social spaces where people want to try a new approach, and improving it in partnership with real users in those venues.
It's true that talk pages, being ordinary wiki pages, support "making your own workflow". I love the Douglas Engelbart reference, though I doubt Engelbart would have remained content with talk pages for very long. The lack of a discrete identity for separate comments makes it impossible to selectively monitor conversations you're participating in (you literally have to use diffs to know what's going on), or to show comments outside of the context of the page they were added to. This is a pretty tough set of constraints to work with. At the same time, you're absolutely right that a modern system can't simply emulate patterns used by web forums or commenting systems like this one.
Like wiki pages, Flow posts have their own revision history. Flow-enabled pages have a wiki-style header. Each thread has a summary which can be community-edited. Threads can be collapsed and un-collapsed by anyone. All actions are logged. In short, wiki-style principles and ideas are implemented throughout the system. At the same time, we believe that as we add modern capabilities like tagging, we can replace some of the convoluted workflows that are necessary in wikitext. Already, Flow adds capabilities missing from talk pages -- notifications for individual replies, watching specific threads (rather than a whole page), in-place responses, etc. More to come.
Hello metasonix! First, congratulations on the successful article submission. In answer to your question, I was referring to LQT development. LQT was put into maintenance mode in early 2011, so of your "10-plus-year project", about 7 years elapsed with only a little paid effort dedicated to LQT development. Even $150K spent on LQT (not all of it by WMF) is a high estimate -- Andrew Garrett, the only dedicated developer, also worked on other projects during that time, including the widely used AbuseFilter extension.
Flow development kicked off in summer 2013 -- about 18-19 months of development effort so far by a team that has fluctuated in size but currently comprises three full-time engineers, about half a person's time for UX design and research, a product manager, and a community liaison. I would estimate money spent on the project so far at less than $1M. Even if you combine both efforts, "millions of dollars spent" is pure hyperbole, and adding up elapsed time to exaggerate the scale and scope of these efforts is equally misleading.
The article summary speaks of "millions of dollars spent" on a new discussion system for Wikipedia. The article actually tells a very different story -- the LiquidThreads extension started out as a Google Summer of Code project, was funded for a while by an interested third party, and then received a little attention from the Wikimedia Foundation (one designer, one developer) before development was put into maintenance mode. I would ballpark the total money spent at $100-$150K max. Elapsed time does not equate to money spent. LQT continues to be in use on a number of projects, but its architecture and UX needed to be fundamentally overhauled.
Flow, the designated successor to LQT, continues to be in development by a small team, and is gradually being deployed to appropriate use cases. It is now running on designated pages in a couple of Wikipedia languages, and old LiquidThreads pages are being converted over using a conversion script developed by the Flow team. Contrary to the article's claim, WikiEducator upgraded to a recent version of LQT, and will be able to migrate to Flow in future using the conversion script.
You can give Flow a try in the sandbox on mediawiki.org and see for yourself whether the article's claims are hyperbole or not. Disclaimer: I am the person referenced in the headline of the Wikipediocracy article, so take my view with a grain of salt, as well.
Below the line are languages that are more popular on GitHub. Above the line are languages that are more popular on Sewer Overflow. There's a distinct difference. The "GH" languages tend to be systems languages (Go/Rust/D) and CS favorites (Haskell/OCaml/Erlang). The "SO" languages tend to be more lightweight and application-specific - Visual Basic, Matlab, ColdFusion. "Assembly" seems to be an outlier, but other than that the pattern seems pretty consistent. Conclusions about the audiences for the two sites are best left as an exercise for the reader.
If you think weather forecasting is easy, let's see some of your forecasts. A forecast which has been substantially correct for New England and merely didn't extend as far south as had been expected only underscores the difficulty of the exercise. Occam's Razor suggests that no cause beyond "honest mistake" need be posited. I know some people like to take every opportunity to prattle on about government overreach, but you're *really* stretching that fabric too thin this time. Get a grip.
So, from "this isn't to say that we should throw intelligence out" you conclude that they want to throw intelligence out? Truly, you have a dizzying intellect. I can see that you enjoy playing "devil's advocate" (to use the more polite term) but when you have to try so hard that you make yourself look ridiculous maybe it's time to find a new game.
What Poropat, Duckworth, and others suggest is that multiple traits - including "grit" - contribute to success. Poropat even provides evidence to back up that hardly-surprising conclusion. So how does Kohn respond? By immediately projecting a "one trait uber alles" mentality onto the grit proponents. To be even more clear, he's attributing to them exactly the idea they're trying to refute. Then he cherry-picks examples where excessive persistence leads to adverse outcomes, ignoring the question of whether those outcomes would be likely to occur in people who had also developed other traits such as curiosity and openness. In the end he only demonstrates further the problems with any single-trait theory of learning, supporting exactly the point he meant to oppose.
Maybe his parents or teachers should have helped Kohn develop some more of those other traits. Like honesty.
Like how people weren't bothered by Y2K bugs? Let's not repeat that every century.
Start at != fully loaded.
Run a business without paying the traditional costs in the field, and socialize your costs. In this case, he wants every Internet customer to pay for his bandwidth whether they use Netflix or not.
ISPs chose their flat-rate business model; Netflix didn't force it on them. If that business model no longer works, ISPs should switch to a different one.
Nah, Netflix used to use other CDNs. But then they got big enough that it was cheaper to build their own.
That's orthogonal to the issue that (in most people's opinion) no CDN should have to pay broadband ISPs.
I agree with the first part of your comment, and came here to say almost the same thing. The law of unintended consequences strikes again.
The second part makes you seem like a moron. Seriously, losing access to your e-toy for a minute or two is worth killing over? Get a grip.
There are two different ways to do network display: the RDP way and the right way. With RDP you're sending the entire "screen" over the network, so all the windows have to be composited first. Thus RDP requires a fully featured compositor like Weston on the remote end.
The right way is to send each window over the network individually, which should need only a lightweight compression proxy on the remote end. No one appears to be working on this.
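To make the distinction concrete, here is a toy sketch (illustrative Python of my own, not actual RDP or Wayland code; the window layout and names are invented). It shows why shipping the composited screen loses per-window identity where windows overlap, while per-window streams preserve each surface for the receiving end to composite:

```python
def composite(windows, width, height):
    """Flatten all windows into one screen buffer -- what an
    RDP-style remote display ships over the wire."""
    screen = [[None] * width for _ in range(height)]
    for win in windows:  # later windows paint over earlier ones
        for dy in range(win["h"]):
            for dx in range(win["w"]):
                x, y = win["x"] + dx, win["y"] + dy
                if 0 <= x < width and 0 <= y < height:
                    screen[y][x] = win["id"]
    return screen


def per_window_streams(windows):
    """Ship each window's buffer as its own stream; the *local*
    end decides how to composite them."""
    return {win["id"]: (win["w"], win["h"]) for win in windows}


# Two overlapping windows on a 5x4 "screen" (hypothetical layout).
windows = [
    {"id": "editor",   "x": 0, "y": 0, "w": 4, "h": 3},
    {"id": "terminal", "x": 2, "y": 1, "w": 3, "h": 2},
]

screen = composite(windows, 5, 4)
# Where the windows overlap, only the topmost survives in the
# composited buffer -- "editor" pixels under "terminal" are gone.
print(screen[1][2])

# With per-window streams, both surfaces remain distinct.
print(sorted(per_window_streams(windows)))
```

The point of the sketch: once the server composites, the receiver gets one opaque framebuffer and must re-send the whole thing (or diff it) on any change; with per-window streams, only the window that changed needs retransmission, and the receiver keeps full control of stacking.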