Open source projects live on contributions. Very few successful projects are built by a single developer. In the open source world you don’t recruit your contributors; they must come to you, and while you can choose which contributions to accept, the more you get the stronger you grow. This is where a level playing field like Go thrives.
This open source fitness is why I think you are about to see more and more Go around, in spite of what some might think of it. In fact, Go has already succeeded. Much of the meaningful systems software coming out these days is written in Go. OSS companies like Docker, CoreOS, and HashiCorp are leading a server revolution with Go as their primary tool. There are emerging databases, search libraries, HTTP proxies, and monitoring systems. Go is already a big player in server software, and it’s only extending its reach.
There are some valid issues with Go, however, as the author states:
- Go not optimally achieving Go’s design goals
Sometimes it’s a matter of opinion. Sometimes things are technically more complicated than people assume. Sometimes the Go authors, human beings as they are, just didn’t get it completely right.
Now, Dr. Andrew Truscott of the Australian National University has reported the same thing in Nature Physics, but this time using a helium atom, rather than a photon.
It's like how a real terrorist would not joke about a bomb at an airport. But someone who does is detained or arrested, and time is spent by TSA that could be better spent looking for real terrorists.
I studied and tutored experimental design and the use of inferential statistics. I even came up with a formula that takes 1/5 the calculator keystrokes when learning to calculate the p-value manually: take the standard deviation and mean for each group, then divide the standard deviation of these means (how different the groups are) by the mean of these standard deviations (how wide the groups of data are), and multiply by the square root of n (the sample size for each group).

But that's off the point. We had 5 papers in our class for psychology majors (I almost graduated in that instead of engineering) that discussed why controlled experiments (using the p-value) should not be published. In each case my knee-jerk reaction was that the authors didn't like math or didn't understand math and just wanted to 'suppose' answers. But each article attacked the abuse of the math, and they were written by proficient academics at universities who did this sort of research. I came around too.

The math is established for random environments, but the scientists control every bit of the environment, not to get better results but to detect things so tiny that they really don't matter. The math lets them misuse the word 'significant' as though there were a strong connection between cause and effect. Yet every environmental restriction (same living arrangements, same diets, same genetic strain of rats, etc.) invalidates the result. It's called intrinsic validity (finding an effect within the experiment) vs. extrinsic validity (applying it in real life). You can also detect effects that are weaker (by the square root of n) by using larger groups. A study can be set up so as to likely find 'something' tiny and get the research prestige, while another study can be set up with different controls that turn out an opposite result. And none of it applies to real life like reading the results of an entire population living normal lives.
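For concreteness, here's a sketch of that keystroke-saving shortcut in Go. The function name and the assumption that every group has the same size n are mine, reconstructed from the description above, not from any textbook:

```go
package main

import (
	"fmt"
	"math"
)

// mean returns the arithmetic mean of xs.
func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// stdev returns the sample standard deviation of xs.
func stdev(xs []float64) float64 {
	m := mean(xs)
	ss := 0.0
	for _, x := range xs {
		ss += (x - m) * (x - m)
	}
	return math.Sqrt(ss / float64(len(xs)-1))
}

// separation implements the shortcut described above: the standard
// deviation of the group means, divided by the mean of the group
// standard deviations, scaled by sqrt(n). Groups are assumed equal-sized.
func separation(groups [][]float64) float64 {
	n := float64(len(groups[0]))
	means := make([]float64, len(groups))
	sds := make([]float64, len(groups))
	for i, g := range groups {
		means[i] = mean(g)
		sds[i] = stdev(g)
	}
	return stdev(means) / mean(sds) * math.Sqrt(n)
}

func main() {
	// Well-separated groups score high; overlapping groups score low.
	far := separation([][]float64{{1, 2, 3}, {11, 12, 13}})
	near := separation([][]float64{{1, 2, 3}, {2, 3, 4}})
	fmt.Printf("far=%.2f near=%.2f\n", far, near)
	if far <= near {
		panic("well-separated groups should score higher")
	}
}
```

The statistic behaves as you'd expect: the farther apart the group means relative to the within-group spread, the larger the value.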
You have to study and think for quite a while, as I did (even walking the streets around Berkeley to find books on the subject going back 40 years), to see that the phrase "99 percent significance level" means not a strong effect but more likely one so tiny, maybe a part in a million, that you'd never see it in real life.
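A quick back-of-the-envelope illustration of that point (the numbers here are hypothetical): with a large enough sample, an effect of one part in a thousand of the natural spread still produces a test statistic far beyond any conventional significance cutoff:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Hypothetical numbers: the true effect is one part in a thousand
	// of the natural spread (standardized effect size d = 0.001),
	// measured on an enormous sample.
	d := 0.001
	n := 100_000_000.0    // sample size
	z := d * math.Sqrt(n) // one-sample z statistic
	fmt.Printf("z = %.1f\n", z)
	// z = 10.0, wildly past 2.58, the 99% two-sided cutoff --
	// "significant" despite being invisible in real life.
}
```

Significance measures detectability given the sample size, not the strength or real-world importance of the effect.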
Slashdot has notoriously always had a comically unfunny April 1st, and at this point I have to think the complete lameness of it all is the real meta-joke.
The whole "HTTP/2 stinks" thing seems to be a bit of a meme, but it's remarkable how the people who repeat it just vaguely wave their hands around and make unsupported claims.
1. HTTP/2 is *fantastic* for higher latency connections. If you're a small site and you can't afford to have geolocated servers around the globe, HTTP/2 offers a much better experience for those high latency connections. I've been using SPDY for a couple of years to service clients in Singapore from a server in the US (which for a variety of legislative and technical reasons I can't replicate there). It is absolutely better.
2. HTTP pipelining is what you bring up when you're just doing the "I oppose" thing and searching around for objections. HTTP pipelining is not enabled by default in a *single* major browser because it has critical, deadly faults that render it useless. When people bring it up to oppose HTTP/2, their position is rendered irrelevant.
3. HTTP/2 removes the need to do script and resource coalescing. It removes the need to deal with difficult-to-manage image sprites. All of those are bullshit workarounds that are particularly onerous and expensive for little sites.
4. HTTP/2 makes SSL much cheaper for the user experience. This is very good.
HTTP/2 is a *huge* benefit especially to the little guy. Google can do every manner of optimization, they can deploy across legions and armies of servers around the globe. This can be expensive and logistically difficult for little sites, especially if you want SSL. HTTP/2 levels the playing field to some degree.
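To illustrate how little work the little guy actually has to do: here's a minimal sketch in Go (my choice of language for illustration), whose standard library negotiates HTTP/2 automatically over TLS with no extra server code. The httptest server stands in for a real certificate-backed deployment:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

func main() {
	// An ordinary HTTP handler -- nothing HTTP/2-specific in it.
	srv := httptest.NewUnstartedServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			io.WriteString(w, "ok")
		}))
	srv.EnableHTTP2 = true // advertise h2 during the TLS handshake
	srv.StartTLS()
	defer srv.Close()

	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("negotiated protocol:", resp.Proto)
}
```

In a real deployment, `http.ListenAndServeTLS` with a valid certificate gets the same automatic HTTP/2 negotiation: multiplexing, header compression, and no sprite/coalescing hacks, for free.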
It isn't about "a chip". It's about a system that is designed for a specific thermal and electrical load. Nvidia probably got flak from notebook makers who were facing dissatisfied customers.
You only have to look at a lot of the nonsense comments throughout, such as yours -- people just contriving how "easy" everything is, and how simple it is. Yeah, and I'll bet all of you design notebooks. No? Then shut up.
This article only takes into account direct emissions. It neglects the CO2 emitted by the energy used to manufacture the airplanes themselves, which is roughly proportional to their cost.
The headline could be confusing. The garage was significant. My point is that people extend the concept in their heads and imagine a lot more than it really was. That is the myth part.
Five years ago we launched the Go project. It seems like only yesterday that we were preparing the initial public release: our website was a lovely shade of yellow, we were calling Go a "systems language", and you had to terminate statements with a semicolon and write Makefiles to build your code. We had no idea how Go would be received. Would people share our vision and goals? Would people find Go useful?
The Go programming language has grown to find its own niche in the cloud computing world, having been used to build the Docker and Kubernetes projects. The developers also announced details of further work to be released, such as a new low-latency garbage collector and support for running Go on mobile devices.
I travel a ton and stay in dozens of different hotels every year. Domestically, and in maybe 50% of the foreign cases, the high priced hotels had worse and slower internet up until a couple of years ago. For the last 2 years they have gotten better, on the average. Oh, I was in a 5-star Vegas resort last night that had horrible bandwidth. In the past, my joke was accurate that the difference between a Four Seasons (just an example) and a Super 8 is that at the Super 8 the internet worked and was free. The most important thing to me in a hotel is computer use. The fancy suites in major hotels are often set up for entertaining friends and DON'T even have a computer desk. I ask my wife to book me into Super 8's whenever possible.