Have you seen Linus' opinion of recent GCCs? My web browser melted from the heat.
Sigh. Emacs is an operating system with text editing facilities.
Functionality comes at a price. Complexity introduces bugs by necessity, reduces performance and increases memory footprint.
Below some given threshold, adding complexity is fine. The reduction in wasted time/money exceeds the increase in overheads. Above that threshold, the reverse is true.
As with all systems, for any given variable the plot of efficiency vs. complexity follows the standard S curve. Memorize this curve; it will save you much grief. The aggregate will be more complex because the variables have inter-dependencies and unique characteristics. You need to resolve them into orthogonal components if you want to do anything useful.
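The S-curve intuition above can be sketched in a few lines: model delivered value as a logistic function of complexity, subtract a linear overhead per unit of complexity, and the marginal gain flips sign at a threshold. (This is a toy illustration with made-up constants, not a measurement of any real system.)

```python
import math

def s_curve(complexity, midpoint=5.0, steepness=1.0):
    """Logistic S-curve: value delivered as a function of complexity."""
    return 1.0 / (1.0 + math.exp(-steepness * (complexity - midpoint)))

def marginal_gain(complexity, cost_per_unit=0.05):
    """Value added by one more unit of complexity, net of its overhead.

    Positive on the steep middle of the curve (complexity still pays),
    negative out on the flat tail (complexity now costs more than it buys).
    """
    return s_curve(complexity + 1) - s_curve(complexity) - cost_per_unit
```

With these illustrative constants, `marginal_gain(4.0)` is positive and `marginal_gain(8.0)` is negative: below the threshold, adding complexity is a net win; above it, the reverse.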
Since nobody can be bothered to do that much maths, it becomes a simple question: do you get anything out of using them?
For me, the answer is usually no. There are no editors out there that handle more than a small fraction of the languages I use. Several critical languages use specialized formatting rules and it is a syntax error to not follow them. It would be nice to actually have an editor remember the rules for me, but formatting editors prettify code. The notion of languages having rules is beyond them.
Most code editors I've used also insist on adding truly ugly dummy code. And by "ugly", I mean I would demote a first year student by a year for writing such crap.
Maintainer convenience is not a factor I allow in mitigation. NetBeans and Eclipse score poorly. Eclipse doubly so, as I've seen it suffer seizures when updating purportedly compatible extensions. If I can write code faster by chiselling it into rock than by typing it into an editor, the editor's code isn't being written for the benefit of users. If portability and compatibility are claimed, I expect that claim to be true or rescinded. Transactions, including updates, should be bulletproof - which may include rollbacks for the irretrievably mangled.
Good code isn't the problem. Good code is never a problem. Finding good coders IS a problem, finding good coders who can work together is almost impossible. (Ergo, Linux is the byproduct of alien experiments on the brains of Linus Torvalds and Alan Cox, coinciding with a freak quantum entanglement with Dread Cthulhu in a parallel universe.)
If it did, the quality of the pictures would be better.
The Internet is not powered by experiments on humans. Not even in the DARPA days.
No, websites do NOT experiment on users. Users may experiment on websites, if there's customization, but the rules for good design have not changed either in the past 30 years or the past 3,000. And, to judge from how humans organized carvings and paintings, not the past 30,000 either.
To say that websites experiment on people is tripe. Mouldy tripe. Websites may offer experimental views, surveys on what works, log analysis, etc, but these are statistical experiments on depersonalized aggregate data. Not people.
Experimenting on people, especially without consent, is vulgar and wrong. It also doesn't help the website, because knowing what happens doesn't tell you why. Early experiments in AI are littered with extraordinarily bad results for this reason. Assuming you know why, assuming you can casually sketch in the cause merely by knowing one specific effect, is insanity.
Look, I will spell it out to these guys. Stop playing Sherlock Holmes, you only end up looking like Lestrade. Sir Arthur Conan Doyle's fictional hero used recursive subdivision, a technique Real Geeks use all the time for everything from decision trees to searching lists. Isolating single factors isn't subdivision because there isn't a single ordered space to subdivide. Scientists mask, yes, but only when dealing with single ordered spaces, and only AFTER producing a hypothesis. And if it involves research on humans, also after filling out a bloody great load of paperwork.
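Recursive subdivision over a single ordered space, of the kind mentioned above, is exactly what binary search does: every step discards half of the remaining interval. A minimal sketch in plain Python (the helper name `bisect_search` is mine, purely for illustration):

```python
def bisect_search(items, target, lo=0, hi=None):
    """Recursively subdivide a sorted list to locate target.

    Each call halves the remaining interval [lo, hi); the technique
    only works because the space is single and ordered.
    Returns the index of target, or -1 if it is absent.
    """
    if hi is None:
        hi = len(items)
    if lo >= hi:
        return -1  # interval empty: target absent
    mid = (lo + hi) // 2
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return bisect_search(items, target, mid + 1, hi)
    return bisect_search(items, target, lo, mid)
```

The point of the analogy: this halving trick has no analogue when you're fishing for isolated causal factors in humans, because there is no ordering to subdivide.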
I flat-out refuse to use any website tainted with such puerile nonsense, insofar as I know it to have occurred. No matter how valuable that site may have been, it cannot remain valuable if it is driven by pseudoscience. There's also the matter of respect. If you don't respect me, why should I store any data with you? I can probably do better than most sites out there over a coffee break, so what's in it for me? What's so valuable that I should tolerate being second-class? It had better be damn good.
I'll take a temporary hit on what I can do, if it safeguards my absolute, unconditional control over my virtual persona. And temporary is all it would ever be. There's very little that's truly exclusive and even less that's exclusive and interesting.
The same is true of all users. We don't need any specific website, websites need us. We dictate our own limits, we dictate what safeguards are minimal, we dictate how far a site owner can go. Websites serve their users. They exist only to serve. And unlike with a certain elite class in the Dune series, that's actually true and enforceable.
Visit central America then get back to us.
Work at McDonald's or Walmart (or more likely both) for a living and get back to us.
Markets in poor neighborhoods carry what 'poor' people buy.
They buy what gives them the most calories per dollar, while also focusing on foods that require the least preparation time (since their work typically leaves them with little time to spare). End result: saturated fat, refined sugar and sodium, with very little in the way of necessary vitamins and minerals.
Poverty is now owning... a car out of warranty!
For most of the United States, owning a car is a necessity for both working and buying food.
Everyone covers up mistakes. Everyone reveals everyone else's mistakes.
The San are pretty much where they were when humanity evolved.
Nothing is objectively known about the airliner. Everything, from Ukrainian air traffic control ordering the plane to descend to a dangerous altitude to who detected what, is all supposition and hearsay at this point.
It is my personal suspicion that the Ukrainian authorities were hoping for an accident of this sort and were intent on placing a civilian airliner in as dangerous a position as possible. Whether that was the case for this specific airliner on this specific flight is unclear.
And I'd argue that Korean Airlines 007 is a better example for this reason. The US had been using civilian airliners for spying on Russia for some time and doctored the evidence to remove Russian pilots radioing warnings to the aircraft in order to make the incident more incriminating than it was. Whether that flight was used for spying, was shadowed by such an aircraft, or merely happened to be in the wrong place at the wrong time, all becomes incidental. The accident was inevitable and the US government of the time was guilty of ensuring civilians would someday die for the benefit of military intelligence. It was merely a matter of which plane would be blown out of the sky and when.
In this case, the Ukrainian authorities deliberately downplayed the risk of missile attacks on overflying aircraft and deliberately worked to place aircraft in the most dangerous air corridors that the airlines would permit. That is indisputable. Their opponents were known to be firing on aircraft and had shot several down. When your time to respond is measured in milliseconds, the nearest aircraft identification guide is mere hours away, to paraphrase what Americans often say about cops.
An accident was inevitable. The separatists weren't interested in avoiding one, the Ukrainian authorities certainly weren't. It was merely who would die for someone else's ideals. Whether or not this aircraft was deliberately placed in the path of a SAM battery is unimportant.
Both sides are therefore guilty. Both sides deserve blame.
Wrong multiverse theory. And, indeed, wrong experiment. In fact, the wrongitudinal level of your post is so extreme that it should really be on K5.
It's testable, it's measurable, it's repeatable, it's capable of prediction. It's either the simplest model that meets these requirements AND produces correct predictions, OR it is not.
Therefore it is science.
Maths is a science, for the reasons given in the first line. Science is a mathematical system, because ultimately there is nothing there, just numbers. (See: Spinons and other quasiparticles.)
There are many multiverse theories and they can all be tested.
Many Worlds: The theory that there are no real "probability waves" in QM, merely overlapping realities that diverge at the time the "waveform" collapses.
This is an easy one. Entangled particles operate using the same physics as wormholes. If one of the entangled pair is accelerated to relativistic velocities, say in a particle accelerator, they will not exist in the same relative timeframe. It would seem to follow that if Many Worlds is correct, one of the particles will be entangled with multiple instances of the other particle, which would imply that every state would be seen at the same time. If the options are left spin and right spin, you'd see an aggregate state of no spin even if no spin isn't a physical possibility. And seeing something that doesn't exist either means you're in a Phineas and Ferb cartoon or Many Worlds is correct.
Foam Universe: This is the sort described in the article.
Yes, impact studies are possible, but they're only meaningful if you have enough data and you can't possibly know if you do. You're better off trying to make a universe, preferably a very small one with a quantum black hole at the throat of the bridge linking this universe to that one. What you will observe is energy apparently vanishing, not existing in any form - mass included, then reappearing as the bridge completely collapses.
Orange Slice Universe: This conjectures that multiple, semi-independent, universes formed out of the same big bang and will eventually converge in a big crunch.
It doesn't matter that this universe would expand forever, left to its own devices, because the total mass is the total mass of all the slices. Although they are semi-independent, they interact at the universe-to-universe level. In this scheme, because there's a single entity (albeit partitioned), leptons cannot have just any of the theoretical states. The state space must also be partitioned. Ergo, if you can't create a state for an electron (for example) that it should be able to take, this type of multiverse must exist.
Membrane-based Universe: This postulates that universes are at an interface between a membrane and something else, such as another membrane.
However, membranes intersecting with the universe are supposed to be how leptons are formed, in this theory. The intersection will be governed by the topology of the membranes involved (including the one the universe resides on), which means that lepton behaviour must vary from locality to locality, since the nature of the intersections cannot vary such as to perfectly mirror variations in the shape of the membrane the universe is on. Therefore, all you need to do is demonstrate a result that is perfectly repeatable anywhere on Earth but not, say, at the edge of the solar system.