If you think weather forecasting is easy, let's see some of your forecasts. A forecast which has been substantially correct for New England and merely didn't extend as far south as had been expected only underscores the difficulty of the exercise. Occam's Razor suggests that no cause beyond "honest mistake" need be posited. I know some people like to take every opportunity to prattle on about government overreach, but you're *really* stretching that fabric too thin this time. Get a grip.
So, from "this isn't to say that we should throw intelligence out" you conclude that they want to throw intelligence out? Truly, you have a dizzying intellect. I can see that you enjoy playing "devil's advocate" (to use the more polite term) but when you have to try so hard that you make yourself look ridiculous maybe it's time to find a new game.
What Poropat, Duckworth, and others suggest is that multiple traits - including "grit" - contribute to success. Poropat even provides evidence to back up that hardly-surprising conclusion. So how does Kohn respond? By immediately projecting a "one trait uber alles" mentality onto the grit proponents. To be even more clear, he's attributing to them exactly the idea they're trying to refute. Then he cherry-picks examples where excessive persistence leads to adverse outcomes, ignoring the question of whether those outcomes would be likely to occur in people who had also developed other traits such as curiosity and openness. In the end he only further demonstrates the problems with any single-trait theory of learning, supporting exactly the point he meant to oppose.
Maybe his parents or teachers should have helped Kohn develop some more of those other traits. Like honesty.
I agree with the first part of your comment, and came here to say almost the same thing. The law of unintended consequences strikes again.
The second part makes you seem like a moron. Seriously, losing access to your e-toy for a minute or two is worth killing over? Get a grip.
About five years ago, I was involved in the installation of a thousand-node cluster in Boulder. We knew *before we went in* that we needed to change our EDAC (memory error correction) code to account for the higher rate of bit-flips due to the altitude. Some of the people we were working with had been there when those same problems nearly caused a months-long delay in a larger installation at NCAR nearby. We ended up running into a more subtle problem involving lower air density, heat and voltage, but *this* problem was incredibly old news even then.
Good point. In fact, what made me think of mentioning Adventure is that I'm hacking on Adventure 2.5 (a.k.a. 550) to make it playable with my daughter. I've already added code to work around one build error, modified some of the game logic having to do with save/restore annoyances, and found one crash if you "say" something too long. The point is that all of this is happening in Linux, based on code that was written well before Linux even existed. Surely there's a lesson there. Thanks for clarifying it.
Adventure, a.k.a. Colossal Cave, by Crowther and Woods (extended by others).
This was many old-school programmers' first exposure to computers as entertainment. For example, both my wife and I recall playing it on TI SilentWriters (paper output plus an acoustic modem) when we were kids. Even more than Space Wars, which was written at least a year later and only ran on much less common hardware, this was the start of computer gaming.
If one developer is that critical within your organization, you've got bigger issues than source code line width.
On most projects, there will be parts of the code that few understand. Yes, if that number is zero or one you have bigger problems, but trying to get it beyond two or three for every single piece is infeasible. Given how mobile people tend to be nowadays, and how variable their working hours across different time zones might be, it's quite common that the only reviewer available at a time of need (e.g. fixing a customer's problem in production) might be in a constrained environment. I guess it's not a problem if all you want is rubber-stamp reviews of simple code, but otherwise it's something you have to consider.
Think of the majority of developers that are sitting in an office environment.
Um, no. Having all of your developers in a single office all the time is increasingly uncommon, especially on open-source projects. Even fairly stodgy companies often have remote workers nowadays, all the way down to cutting-edge startups where practically nobody lives in the same city. Assuming such a majority is a bad basis for deciding policy.
I've seen this kind of thing kill code reviews. Instead of looking for logic problems or design flaws, you'll get that one guy being anal retentive about line width or ratio of one thing to something else.
Yes, I hate that kind of review myself, and I'll bet I've been subjected to about a dozen times more of them than you. However, conforming to a line-length limit is comparatively easy. Required scaffolding and forbidden constructs, function-length and even variable-naming conventions can all be much more of a pain. Part of teams being professional is respecting your colleagues and putting needs before whims. If you can't do that, or if staying within a length limit is so hard for you, then - to borrow your phrase - you have bigger problems.
The only issue I have is with code diff utilities that don't work well with multi-monitor setups.
You should try to appreciate that not everyone shares your circumstances. Sometimes the most senior developers on a project have to review code while on the road, e.g. visiting customers or presenting at conferences. Not too many laptops have multiple monitors, and you wouldn't want to carry one that did. Some of the very latest have pretty decent resolution, but they cost a lot more, and their very fine dot pitch means the number of characters doesn't scale up as fast as the number of pixels. Under those circumstances, code that doesn't display well in a *side by side* diff on a single small-ish monitor is a more serious issue than the junior developer's fetish for super-long lines. Eighty columns might not be the absolute best width, but it's in the range that keeps such diffs productive, and it's a width that a lot of people (and tools developed over the last few decades) can handle reliably.
Also, people who study reading have known for half a century that long lines are hard to scan accurately without a saccade leaving the reader's eyes on the previous or next line, which means that they're bad for readability even on wide monitors. There's a reason newspapers used to set type in columns instead of all the way across the page. You'll need a much better reason than personal aesthetics to do something that's bad for readability and a pain for other members of your team. Without such a reason - and I haven't seen any, anywhere in this thread - that's just selfish and immature.
Actually, there is a reason not to have different apps using different filesystems in partitions on one disk. If those apps just use subdirectories within one filesystem, that filesystem can do a pretty good job of linearizing I/O across them all, minimizing head motion (XFS is especially good at this). If those apps use separate partitions, you'll thrash the disk head mercilessly between them whenever more than one is busy. Your advice is good in the multiple-disk case, but terrible in the single-disk case, and any well-trained sysadmin would know not to lump them together. Perhaps next time you shouldn't be so quick to attack others for asking reasonable questions.
It probably has something to do with the difference between claims and description in a patent application. Claims are the part that matter. Often the claims are constructed so they *just barely* pass the obviousness test, e.g. by taking two ideas that are too obvious by themselves, but combining them in a way that's less obvious. The description can then be far more general, and is often shared between many patents, but that doesn't affect the validity of the claims *at all*. To determine the validity of a patent you have to look very carefully at what is being claimed, and only refer to the description as background to understand the claims.
Disclaimer: IANAL and I don't give legal advice. I've just been through this nearly a dozen times.
Disclaimer: I'm the project lead for HekaFS, which is based on GlusterFS.
If you're concerned about data protection, you'll want to worry about node as well as disk failures. Some distributed filesystems, including Lustre and PVFS*, take a rather old-school "use RAID and implement your own heartbeat/failover between server pairs" approach, and that just sucks. GlusterFS and Ceph don't have that wart; neither do MooseFS or XtreemFS, which I would consider the other alternatives. They all have their own forms of replication built into the filesystem, so you don't need to set up and maintain another layer for them. Unfortunately, neither MooseFS nor Ceph survived even simple tests - write a few files in parallel, flush caches, read them back in parallel - when I ran them on the same hardware where GlusterFS and XtreemFS did fine. That was a while ago, though, so take it with a grain of salt. Ceph in particular has a lot of awesome technology and has a very bright future IMO, but it's taking a while for it to realize that potential.
Out of GlusterFS and XtreemFS, the choice has a lot to do with your exact use case. XtreemFS has a pretty strong focus on wide-area replication, so if that's part of your need now or likely to be in the future then it's probably a bit stronger. GlusterFS does have some wide-area replication, but I consider it rather weak. Within a single data center, I'd give GlusterFS the edge. It has better local performance than XtreemFS in my tests, and it has what I consider by far the best setup/management interface.
The one caveat I'd offer is that all of the filesystems I've mentioned excel at sequential access to large files. For random access, and especially for metadata-heavy workloads, they all suck to some degree. As others have mentioned, you might very well be better off with a simple NFS server pair with cheap shared storage and heartbeat/failover to ensure availability.
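For what it's worth, the "simple test" I mentioned is roughly the following sketch. The paths and sizes are made up for illustration; on a real run you'd point MOUNT at the filesystem under test and drop the page cache between the writes and the reads.

```python
# Rough sketch of the parallel write/read sanity test described above.
# MOUNT is a stand-in for the distributed filesystem's mount point; a
# temp directory is used here just so the sketch runs anywhere.
import hashlib
import os
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

MOUNT = tempfile.mkdtemp()   # e.g. a GlusterFS mount in a real test
NFILES = 8
SIZE = 1 << 20               # 1 MiB per file (illustrative)

def write_one(i):
    data = os.urandom(SIZE)
    path = os.path.join(MOUNT, "testfile.%d" % i)
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data to stable storage
    return hashlib.sha256(data).hexdigest()

def read_one(i):
    path = os.path.join(MOUNT, "testfile.%d" % i)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with ThreadPoolExecutor(max_workers=NFILES) as pool:
    written = list(pool.map(write_one, range(NFILES)))

# On a real run you'd flush caches here (as root) so the reads hit the
# servers instead of local memory:
#   echo 3 > /proc/sys/vm/drop_caches

with ThreadPoolExecutor(max_workers=NFILES) as pool:
    readback = list(pool.map(read_one, range(NFILES)))

assert written == readback, "data lost or corrupted"
print("all %d files verified" % NFILES)
shutil.rmtree(MOUNT)
```

Nothing fancy, but it's surprising how many systems fail even that.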
Look up extended attributes (xattrs). They already allow you to attach arbitrary metadata to a file, and most modern filesystems and user-level utilities support them. They're even used as the underpinnings for security mechanisms such as POSIX ACLs and SELinux. Sure, there are performance issues when you have *lots* of xattrs on a file, and that's a fruitful area of research, but we sure don't need some brand-new Microsoft-invented thing to deal with metadata.
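For anyone who hasn't used them, here's a minimal sketch of setting and reading an xattr from Python on Linux. The "user." namespace is the one unprivileged processes may write to; the attribute name is made up, and the fallback is there because not every filesystem (or OS) supports user xattrs.

```python
# Minimal sketch: attach and read back arbitrary metadata via xattrs.
# os.setxattr/os.getxattr are Linux-only, hence the hasattr guard.
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

value = None
names = []
if hasattr(os, "setxattr"):
    try:
        os.setxattr(path, "user.origin", b"downloaded from example.org")
        value = os.getxattr(path, "user.origin")
        names = os.listxattr(path)
    except OSError:
        pass  # filesystem doesn't support user xattrs here

if value is not None:
    print("user.origin =", value.decode())
    print("all xattrs:", names)
else:
    print("user xattrs not supported on this filesystem")

os.unlink(path)
```

The command-line equivalents are setfattr(1) and getfattr(1), and POSIX ACLs ride on the same mechanism under the covers.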
Doing something for 7857 files and doing it for 10 billion are very different situations. 7857 files, including metadata, can easily be sucked into memory in one big chunk and unpacked/examined from there. That simply doesn't work for datasets larger than memory. At the higher scale, modern filesystems do tend to fall apart, badly, so different approaches are needed. Comparing your paper airplane to an F-22 doesn't make it look like you know anything about writing software properly. Quite the opposite.
...when people in the community, instead of setting a good example, fetishize the act of trolling itself. When high technical contribution is combined with presentations full of pornographic images/metaphors and Twitter streams full of laughter at others' consternation, such childish behavior becomes the New Conformity.

It's just as cliquish and pointless as the Old Conformity these rebels without a clue pretend to reject, but whenever aspiring programmers see that opinions presented in one set of clothes get a quicker/more friendly hearing than the same opinions presented in a different set of clothes, it's totally predictable how they'll respond. They'll imitate all the off-color and trollish behavior that they see, and some of them will end up stepping over lines that actually matter.

It's all good fun until promising projects and startups fail because would-be users and collaborators get turned off by the hipster posing. What kind of sociopath would make a decision where the only possible upside is a few laughs and the potential downside is colleagues losing their jobs? It doesn't matter if you feel your own job is secure, or if you feel that people shouldn't react as they do; anybody who pulls this kind of stunt doesn't deserve a job or funding or anything else but our contempt.