I want to eat meat. I will never become a vegan, because I need meat (my B12 and iron levels keep falling if I don't eat enough of it, and I'd rather get it from real food than from vitamin pills). Find a way to make it sustainable then. Telling everyone to "stop eating meat" is not a solution. I'd rather have free-range or wild, sustainable, organic meat than industrialized meat. But having none at all is not an option.
YouTube said the same thing a few days ago too. It's unfair to segregate access to your service based on profession.
The EU took the right stance. Even Yanis Varoufakis has said that the idea of taxing robots does not work. This is what needs to be done instead: https://www.weforum.org/agenda...
I personally *despise* the episodic model. I'm all for the serialized one, and in fact, apart from Netflix's offerings, the serialized shows found on network television pale in comparison (in terms of serialization, that is). I'm one of those people who really enjoyed the serialized nature of LOST (minus the disastrous 6th season). I absolutely never watch episodic television; I find it cheap and unartistic. In a perfect world, I'd like most TV shows (not all, but most) to end in 3 seasons: beginning-middle-end. And each season would comprise 6-9 episodes: beginning-middle-end. Like a book.
Non-native EMF (such as from devices) is not the same as EMF from the sun.
Maybe I missed something.
Very interesting; is there a technical book (or chapter) or paper with a good overview of this comparative aspect of fly neurons?
I was just starting to look around to see what's available on comparative neuroscience in general, based on an interest in the most salient functional differences from human neurons, so anything related to that more general topic would also be welcome.
In theory, if we can capture coherent pictures in the visible spectrum from many billions of light years away, we should be able to do the same with RF.
It's actually very easy to see why the opposite is true: stars famously broadcast a truly vast amount of power in the visible spectrum, which is what makes solar energy and photosynthesis effective.
Humans clearly do not have the power resources of an entire sun to devote to RF broadcasts. The total amount of power we have at our disposal from all sources is a tiny, tiny fraction of what the sun radiates.
And most of our power does not go into RF in the first place; it goes into transportation, manufacturing, etc.
So it's quite straightforward that there is no comparison between the brightness of stars in the visible spectrum and the brightness of the Earth in RF. Stars win hands down.
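To make the scale concrete, here is a rough back-of-envelope sketch in Python. The sun's luminosity (~3.8 x 10^26 W) and total human power consumption (~2 x 10^13 W, i.e. ~20 TW) are approximate published figures; the fraction of human power assumed to go into RF is just a placeholder for illustration.

# Back-of-envelope comparison: the sun's radiated output vs. Earth's RF output.
# All figures are rough, order-of-magnitude values.

SUN_LUMINOSITY_W = 3.8e26   # total radiated power of the sun, watts
HUMAN_POWER_W = 2e13        # ~20 TW: total human power consumption, all sources
RF_FRACTION = 1e-3          # assumed fraction of that going into RF broadcasts (a guess)

earth_rf_w = HUMAN_POWER_W * RF_FRACTION

print(f"Sun output:        {SUN_LUMINOSITY_W:.1e} W")
print(f"Earth RF (approx): {earth_rf_w:.1e} W")
print(f"Ratio (sun/Earth): {SUN_LUMINOSITY_W / earth_rf_w:.1e}")
# Under these assumptions the sun outshines Earth's RF output by roughly
# sixteen orders of magnitude.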
It isn't the technology, it's just the hardware.
Unfortunately, it is very much both. It's true that we can do better by building better listening arrays, and SETI has been continually doing that for many years, but there is also a signal-to-noise problem that puts a hard limit on sensitivity, due to noise from terrestrial sources and to thermal and quantum noise in the receiving electronics.
Part of that could be improved by putting radio telescopes e.g. on the far side of the moon. The electronics issue simply needs better technology.
Yet the sky is not saturated with their communications. Therefore those civilizations must be using some other technology.
That seems logical, but it turns out not to be the case. A SETI scientist said in a talk (and I've seen this in articles since) that our deployed SETI listening technology is still nowhere near sensitive enough to pick up signals even from as close as the nearest star (Proxima Centauri, 4 light years away), if a planet there were broadcasting RF at current Earth levels.
(That doesn't mean SETI to date is pointless, because there's always a chance of a highly directional signal beamed our way, or of just something unexpected, like signals far, far brighter than Earth's.)
So no, we have no idea whether the sky is saturated with radio waves or not.
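To put the Proxima Centauri point in numbers, here is a rough inverse-square sketch in Python. The distance is the real one (~4.24 light years); the total Earth RF output is an assumed order-of-magnitude figure, and the broadcast is treated as isotropic, so this only illustrates the scale of the problem, not any actual SETI sensitivity threshold.

import math

# Inverse-square estimate: flux at Proxima Centauri's distance from an
# Earth-level RF broadcaster. The RF power figure is an assumption.

LIGHT_YEAR_M = 9.46e15             # metres in one light year
DISTANCE_M = 4.24 * LIGHT_YEAR_M   # distance to Proxima Centauri
EARTH_RF_POWER_W = 2e10            # assumed total Earth RF output (order of magnitude)

# Treat the broadcast as isotropic: the power spreads over a sphere of radius d.
flux_w_per_m2 = EARTH_RF_POWER_W / (4 * math.pi * DISTANCE_M ** 2)

print(f"Flux at 4.24 ly: {flux_w_per_m2:.1e} W/m^2")
# Roughly 1e-24 W/m^2, spread across a broad range of frequencies -- a
# vanishingly small signal for any receiving antenna of practical size.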
I think that you being modded down to -1 is a bit much, but there is a problem here. What you said is well and good for contexts so purely utilitarian that anything beyond pure pragmatic functionality is viewed as an active negative, such as industrial control, power plants, etc.
But for most people's desktops, people expect both functionality *and* some degree of modern aesthetics, and there is an extremely common rejection of interfaces that look 15 years old, even if they were considered close to ideally functional and aesthetic 15 years ago.
Since that is demonstrably what the market generally wants, it is also the general trend over time: "flashy graphics" are sometimes overdone, but "flashy" is in the eye of the beholder, and most improvements to GUIs over the decades have been about modernization, chasing the moving target of whatever "modern" means in each era, while actual breakthroughs in usability have been far less common.
Furthermore, the people who design and implement GUIs are (with the exception of one-person development teams) rarely the same team members who would address software vulnerabilities, so maybe that's where your -1 came from.
"Show me a good loser, and I'll show you a loser." -- Vince Lombardi, football coach