Maybe arglebargle isn't understanding the purpose, maybe they are...I can't really say. I'm quite certain I do understand the purpose, but having a purpose and being the best way to achieve that purpose aren't synonymous. I think the "best way" would be the one which most reliably prevents the abuses you mention while best limiting collateral damage from the policy.

Returning to the OP: when talking about an author's motivation for writing a work, there can be no more authoritative source than the primary source. A secondary source would necessarily be *less accurate* than the primary source. A policy which fails to recognize those sorts of nuances is not doing a good job of limiting collateral damage, regardless of how pure its purpose might be.

To me, this whole issue with the "no original research" policy is reminiscent of the "Zero Tolerance" policies you often see lambasted. The similarity is that in both cases little to no leeway is given for discretion; there is no consideration of context or nuance. And so we see an author unable to verify that their inspiration for writing something was X, and we see kindergarteners suspended for having GI-Joe-sized miniature weapons in their knapsacks. In neither case is the true purpose really being served. Instead, people are abdicating thought and debate to policy, attempting to absolve themselves of responsibility for dealing with a world that is not full of bright-line distinctions.
To the latter point, that verifying people are who they say they are is difficult: I concur. Verifying that sources are reliable is difficult as well, yet Wikipedia editors seem to believe that the latter is at least worth a reasonable effort. If the case merits it, why would the former not also be worth a reasonable effort? For example, if a professor at a university wishes to address some aspect of an article about them or their work (such as what inspired it), would it really be that difficult to verify the source? Most, if not all, universities have public directories available, and many professors have pages on their departmental web sites. Wouldn't a quick email to the professor's listed address suffice to ensure that the source has been reasonably verified? Certainly not conclusively...but Wikipedia can't possibly have "conclusive" as its standard. Even the standard you cite for the article you refer to, "peer reviewed", doesn't "conclusively" establish anything; it just gives a good chance that the information is as accurate as our current understanding allows.
This is something I struggle with. Lots of people would reply "Python", but I think they're off their rocker. Yes, Python is probably just fine for a lot of website development, and yes, I know some enterprises are using it heavily, but when you dig into it, it's really a hacked-up POS that carries WAY too much of its evolutionary baggage. Java certainly has a bit of that as well, mostly in the bundled libraries, but those are much more consistently architected than the Python libraries. Plus, the lack of true multi-threading support (CPython's global interpreter lock keeps threads from executing bytecode in parallel) is just...unconscionable in a modern language, I think. Yeah, it simplifies things for the hoi polloi, but that should hardly be the standard we aspire to.
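To make the multi-threading point concrete: under CPython's global interpreter lock, two threads doing pure CPU work take turns executing bytecode, while the same pattern in Java can run on separate cores. A minimal sketch of CPU-bound work split across Java threads (the class and method names here are mine, just for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

public class ParallelSum {
    // Split a range sum across two threads. Each thread does pure
    // CPU-bound work; the JVM is free to schedule them on separate cores,
    // which CPython's GIL would prevent for equivalent Python threads.
    static long parallelSum(long n) throws InterruptedException {
        AtomicLong total = new AtomicLong();
        long mid = n / 2;
        Thread a = new Thread(() -> {
            long s = 0;
            for (long i = 1; i <= mid; i++) s += i;
            total.addAndGet(s);
        });
        Thread b = new Thread(() -> {
            long s = 0;
            for (long i = mid + 1; i <= n; i++) s += i;
            total.addAndGet(s);
        });
        a.start(); b.start();
        a.join(); b.join();
        return total.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parallelSum(10_000_000L)); // 50000005000000
    }
}
```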
Unfortunately, the only languages I know of that have the features I expect from the next great modern language are all research languages at this point. What I'd really like: start with Java (convenient syntax that is familiar to many people, and a VM with a lot of important concepts), then:

- Go through the standard library and rework it for consistency, ditching the older paradigms that still hang around to support backwards compatibility.
- Rework generics, again breaking compatibility, but to improve usefulness.
- Add support for design-by-contract.
- Add language-level (not library-level) features to support fork-join, with some mechanism to declare affinity between work units and data so that the VM can optimize thread placement and data placement in memory.
- Add better built-in support for both dynamic class creation and bytecode injection.
- Add smart/flexible int/float number types where the VM takes care of sizing depending on how big the number is: something that can flow up to the Big range without you needing to track sizes yourself, and crucially, where the math operations work efficiently regardless of number size (under the covers, this would mean allowing for a mutable big integer/decimal).
- Add support for primitive collections, but do it in such a way that it's as transparent as possible. This would probably mean allowing primitives to be treated as Objects from a parameter-passing perspective: your Map put method would still be put(K,V), but a map which supported primitives (much easier to write with the smart-number facility) would pass a primitive straight through without any boxing/unboxing.
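To show why the smart-number item matters, here's what the pain looks like in today's Java: fixed-size long arithmetic silently wraps on overflow, while BigInteger gets the math right but forces a clunkier, allocation-heavy method-call API for every operation. A VM-managed number type would widen the representation automatically instead. (A hedged sketch of the status quo; the class name is mine.)

```java
import java.math.BigInteger;

public class SmartNumberMotivation {
    public static void main(String[] args) {
        // Fixed-size arithmetic: overflow wraps silently, no error raised.
        long wrapped = Long.MAX_VALUE + 1;       // wraps to Long.MIN_VALUE

        // BigInteger avoids overflow, but every operation allocates a new
        // immutable object and uses method calls instead of operators.
        BigInteger big = BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE);

        System.out.println(wrapped);             // -9223372036854775808
        System.out.println(big);                 // 9223372036854775808
    }
}
```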
I'm sure if I thought a bit longer, I could come up with some other features I'd like to see. Importantly, this language still has a VM...I think that becomes more important for the future, not less, as we move to higher core/processor counts and NUMA becomes a bigger and bigger issue. There will always be a place for lower-level coding à la C/C++, but for a higher-level language, I think you really need a VM. And, as with the JVM/CLR, I would want the VM for this language to support running bytecode compiled from a multitude of languages. People who have done work developing those sorts of compilers would probably have suggestions on how that could be better supported, and I certainly think their input would be important for ensuring that support is done right.
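For reference, the fork-join piece of the wishlist already exists in library form on today's JVM (java.util.concurrent's ForkJoinPool and RecursiveTask, added in Java 7); what's missing is the language-level syntax and the data-affinity hints described above. A sketch of the current library-level version (RangeSum is my own illustrative name):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Library-level fork-join in today's Java: recursively split a range sum
// into halves until chunks are small enough to sum directly. The pool's
// work-stealing scheduler balances the subtasks across worker threads,
// but offers no way to declare affinity between work units and data.
public class RangeSum extends RecursiveTask<Long> {
    private final long lo, hi;

    RangeSum(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= 10_000) {          // small enough: sum sequentially
            long s = 0;
            for (long i = lo; i <= hi; i++) s += i;
            return s;
        }
        long mid = (lo + hi) / 2;
        RangeSum left = new RangeSum(lo, mid);
        RangeSum right = new RangeSum(mid + 1, hi);
        left.fork();                       // run the left half asynchronously
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long total = new ForkJoinPool().invoke(new RangeSum(1, 1_000_000));
        System.out.println(total);         // 500000500000
    }
}
```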
Saliva causes cancer, but only if swallowed in small amounts over a long period of time. -- George Carlin