So you're saying it's a holo promise?
Google should tweak their algorithm to block takedown requesters who are spamming Google with generic requests.
No seriously, we should set up a very large synthetic aperture array of telescopes on the far side of the moon to look at these and similar promising exoplanets in high resolution and spectroscopically etc.
Yes. I know the far side of the moon isn't always dark, but half the time it is, and is shaded from Earth's light and our EM emissions etc.
An unlikely impossibility is equivalent to a likely possibility, not to be confused with an infinite improbability.
the first to welcome our new automated overlords, and I hope they keep that in mind.
Well, ok, but for the sake of argument, between Trump's NS (natural stupidity) and the AI of a decade from now, I know which one I would pick.
Scientific study of the benefit or harm is good. No doubt.
But, from a common sense 30,000 foot perspective, if there is even the slightest effect among the majority of these apps of embarrassing you into getting off your ass a little more often, isn't that likely to be a net health positive?
You've made good progress.
The first step in getting un-addicted to bullshit is to recognize that you're swimming in it.
Worthy goal: Design means of facilitating harmonious prosperous global (tribe-de-emphasized) human society with
-ethos of decreasing inequality
-ethos of decreasing ecological harm of human civilization
-ethos of maximum liberty consistent with previous two tenets
-ethos of recognizing and denigrating bullshit in all its vari-shaded brown forms.
Well, the nature of a certain type of complex system (e.g. governance of a herd of cats/human society) dictates a lot of the necessary behaviours of those attempting to gain leadership position by convincing people to back them. So there will be a lot of commonality of behaviour in the campaigning politician, no matter their position on the policy/values spectrum.
There are some universals:
1. You can't get elected by promising only what you could actually deliver (given the realities of the finances and ability to shift the supertanker of state). That would be too little to meet expectations, and you would lose to your exaggerating, over-promising opponent.
In business, the corollary is, you can never win the competitive contract by bidding what it will actually take to do the job. Your dishonestly underbidding competitor will win. Instead, you have to bid low and make up the difference by charging for change requests when the customer realizes they didn't order what they really wanted.
2. A huge state with its bureaucracy and laws has enormous inertia, and any leader of it, in their short term of office, and with constitutional restrictions on power, can at best introduce a very slight leftward or rightward angle of a few degrees in the state's direction of operation. This must be contrasted with the hyperbole of election rhetoric about how sweeping the change they're going to institute will be.
3. Many people think of themselves as being in a camp or a tribe, and think there are competing camps/tribes trying to eat their lunch. Politicians often have to resort to issue-framing that paints matters in these terms, and that often works. An alternative strategy is to claim to be the great unifier, but only a few can pull this off. Anyway, when they get in office, they'll just be tweaking (landscaping) a mountain-like entrenched system rather than moving the mountain.
4. Most people, for whatever reason, are still religious, so even intelligent politicians have to pretend to be religious to win. See camps above.
So it's understandable why people think politicians lie all the time. They kind of have to, to get elected. That's just how we are, as electors.
7. Having no taxation, and a completely powerless central government, leads to harmony and prosperity.
What we need is AI that can do automated story/fact credibility analysis.
Google is in the best position to develop this these days, maybe in a collaboration with IBM.
Then it should be released as open AI so that people will believe the system's results.
The system should consider factors such as:
1) Logical/factual compatibility of statement/story elements with scientific/subject expert well accepted consensus knowledge.
2) Logical coherence of statement/story
3) Use of terms with clear unambiguous meanings from well-accepted theories/models of the world or aspects of it.
4) Utterance theory: a theory of people and organizations as motivated actors with preferences and goals.
Of course in human society one way to achieve one's goals is to influence the focus of attention, beliefs, and behaviours of other people and organizations.
Uttering particular statements or stories (in particular situation contexts) is an effective way of influencing focus of attention, beliefs, and behaviours of others.
So any system assessing credibility of statements/stories must be able to reason about who the utterer / source is, what their situation is, what their goals for attention, belief, behaviour influence are, and what the situation, disposition, and prior knowledge of the intended audience is.
5) Theories of framing as a means of belief crafting, attention focussing, and behaviour influence. This is a particular sub-part of 4.
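One way to picture how such a system might combine these factors: score each one independently and blend the scores. A minimal Python sketch follows; everything here (the factor stubs, the weights, the class names) is hypothetical scaffolding, not a real API — actual scorers would need NLP models, knowledge bases, and source-reputation data.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str

# Hypothetical per-factor scorers, each returning a value in [0, 1].
# These are placeholder stubs; real versions are hard research problems.
def consensus_compatibility(c): return 0.5   # factor 1: fit with expert consensus
def logical_coherence(c):       return 0.5   # factor 2: internal coherence
def term_clarity(c):            return 0.5   # factor 3: clear, unambiguous terms
def utterer_motive_score(c):    return 0.5   # factors 4-5: source goals and framing

# Illustrative weights (must sum to 1.0); the actual weighting would
# itself need to be learned or calibrated.
WEIGHTS = [
    (consensus_compatibility, 0.35),
    (logical_coherence,       0.25),
    (term_clarity,            0.15),
    (utterer_motive_score,    0.25),
]

def credibility(claim):
    """Weighted combination of the factor scores above."""
    return sum(w * f(claim) for f, w in WEIGHTS)

score = credibility(Claim("Vaccines cause X", "anonymous blog"))
```

With every stub returning 0.5, the blended score is of course 0.5; the point is only the shape of the pipeline, not the numbers.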
Since some people in the home might not know it was there or what it did?
Or does the fact that Alexa or Home only respond when a keyword is spoken mean it's somehow ok under these laws?
The Alexa or Home device is still listening and transmitting voice to a server, right?
Ok so human nature has many of us wanting to strive and do better and do something valuable.
But you haven't addressed how this urge is going to be satisfied when say 50% of jobs are replaced by automation and the remaining jobs are jobs where humans and machines do the job together so the remaining human work component (say, of being a doctor) is devalued to about half its current economic value.
You seem to be implying that we can just put our finger in the dam, and our other hand over our eyes so we won't accept or acknowledge what's coming and will somehow magically prevent this transition from happening. I'm telling you that's wishful thinking. If there's a more cost-effective way of doing some necessary task with more or better automation, some organization somewhere is going to be offering to do it that way for cheap, and the market will move to that, disrupting the current way of doing it with more labour. Have you looked at the self-order McDonald's restaurants lately, or self-checkouts in stores? A small, small harbinger of what's to come.
I'm saying we have to be as innovative at dealing with this socioeconomically as we are innovative in developing this highly automated economy in the first place.
Exactly what that looks like I don't know, but it would seem it will involve some form of UBI. If we don't do that, we'll have much greater inequality of means than even now, and with that will come social turmoil and irrational "solutions". It is not a stretch to say that what made Herr Trump's election possible was loss of jobs and job value to automation, and the deluded belief that immigrants and cheap labour around the world were to blame. It is mostly automation causing this hollowing out of employment opportunities. And the robots and AIs are already inside the walls.
So, I'm just saying that yammering "No No No No No" is not a solution and is not going to stop the rising tide of automated economy. Let me read your creative solution to this changed environment instead.
Look. We're going to have to accept, in the near future, that smart machines are better than humans at many tasks.
So why would we want, as humans, to keep doing those tasks? Isn't that just embarrassing to keep trying? You're not actually being useful. You're just pretending to be.
So yes, businesses that make profit via automated processes should pay tax to help give people a UBI (universal basic income), but the tax shouldn't be different than paid by any profitable business.
Why keep people working at tasks they are second-rate at? Doesn't make any sense. People should be free to find something actually meaningful and useful to do, given their unique experience and talent. They shouldn't do make-work projects that a robot can do better. That's just a dumb policy.
Isn't it enough to be able to share memory between threads, rather than full processes, for most concurrent programming purposes?
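For many purposes, yes — threads in the same process already share an address space, so sharing data is trivial (the work is in synchronizing it). A small illustrative Python sketch of two threads mutating one shared counter under a lock:

```python
import threading

# Shared state: both threads see the same 'counter' because they
# live in the same process address space.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write atomic, so no
        # increments are lost when the threads interleave.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 20000
```

Cross-process sharing needs extra machinery (`multiprocessing.shared_memory`, mmap, etc.), which is exactly the extra complexity the question is asking whether we can usually avoid.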
I stipulated that no programs except for those written in the single high-level language would be permitted to run on the machine. And that language would be designed to only allow secure, in-bounds memory access, via use of a high-level memory model such as LISP uses.
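The key property of such a memory model is that every access goes through a checked high-level abstraction: an out-of-bounds reference fails loudly instead of silently reading or overwriting adjacent memory. A toy illustration in Python, which (like LISP) enforces exactly this kind of model:

```python
# No pointer arithmetic exists: memory is reachable only through
# checked objects like this list.
buf = [0] * 8

buf[7] = 42           # in-bounds write: fine

blocked = False
try:
    buf[8] = 99       # out-of-bounds: raises IndexError instead of
                      # corrupting whatever lies past the buffer
except IndexError:
    blocked = True
```

The classic buffer-overflow exploit depends on writing past a buffer into adjacent return addresses or data; if the only language on the machine makes that access unexpressible, that whole exploit class disappears at the source level.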
So how would you write the exploit and get it to execute on the machine? You'd write it in the LISP equivalent language?
Sure, such a machine would lack bleeding edge performance due to the memory abstraction, but on the other hand today's processors are probably 30,000 times faster than a LISP machine's was, and those ran fine.
So what if all my programs are 5 to 10 times slower, or whatever, than those running on a more general, but dangerous, architecture?
I believe there would be a use for machines locked down from the bottom up in this manner, e.g. wherever security is at a premium and we don't have the absolutely most compute-intensive applications to run.
The next person to mention spaghetti stacks to me is going to have his head knocked off. -- Bill Conrad