
Comment Let's set up a telescope array on the moon now (Score 4, Interesting) 272

No, seriously: we should set up a very large synthetic-aperture array of telescopes on the far side of the moon to observe these and similar promising exoplanets at high resolution, spectroscopically, and so on.

Yes, I know the far side of the moon isn't always dark, but it is half the time, and it's permanently shielded from Earth's light and our EM emissions.
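
A quick back-of-envelope on why a big array gets you high resolution: an interferometer's diffraction limit goes roughly as 1.22 λ/B for baseline B. The numbers below (a 1000 km baseline, a target ~40 light-years out) are my own illustrative assumptions, not anything from the story:

    # Back-of-envelope only; baseline, wavelength, and distance are
    # illustrative assumptions, not a proposal spec.
    WAVELENGTH = 550e-9    # visible light, metres
    BASELINE = 1_000e3     # a 1000 km array spread across the far side
    LY_IN_M = 9.461e15
    DISTANCE_LY = 40       # order of a nearby exoplanet system

    theta = 1.22 * WAVELENGTH / BASELINE      # diffraction limit, radians
    feature = theta * DISTANCE_LY * LY_IN_M   # smallest resolvable feature, metres

    print(f"angular resolution ~ {theta:.1e} rad")
    print(f"resolvable feature at {DISTANCE_LY} ly ~ {feature / 1e3:.0f} km")

On those assumptions you resolve features a couple of hundred kilometres across: continents and cloud bands, not just a dot.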

Comment They get you off your ass (Score 4, Insightful) 93

Scientific study of the benefit or harm is good. No doubt.

But, from a common-sense, 30,000-foot perspective, if the majority of these apps have even the slightest effect of embarrassing you into getting off your ass a little more often, isn't that likely to be a net health positive?

Comment Re:But our entire belief system is based on bullsh (Score 1) 392

You've made good progress.
The first step in getting un-addicted to bullshit is to recognize that you're swimming in it.

Worthy goal: design means of facilitating a harmonious, prosperous, global (tribe-de-emphasized) human society with:
- an ethos of decreasing inequality
- an ethos of decreasing the ecological harm of human civilization
- an ethos of maximum liberty consistent with the previous two tenets
- an ethos of recognizing and denigrating bullshit in all its vari-shaded brown forms.

Comment Re:I don't need a course on this (Score 5, Interesting) 392

Well, the nature of a certain type of complex system (e.g. governance of a herd of cats / human society) dictates a lot of the necessary behaviours of those attempting to gain a leadership position by convincing people to back them. So there will be a lot of commonality of behaviour in the campaigning politician, no matter their position on the policy/values spectrum.

There are some universals. For example:
1. You can't get elected by promising only what you could actually deliver (given the realities of the finances and ability to shift the supertanker of state). That would be too little to meet expectations, and you would lose to your exaggerating, over-promising opponent.
In business, the corollary is that you can never win a competitive contract by bidding what it will actually take to do the job; your dishonestly underbidding competitor will win. Instead, you have to bid low and make up the difference by charging for change requests once the customer realizes they didn't order what they really wanted.
2. A huge state with its bureaucracy and laws has enormous inertia, and any leader of it, in their short term of office, and with constitutional restrictions on power, can at best introduce a very slight leftward or rightward angle of a few degrees in the state's direction of operation. This must be contrasted with the hyperbole of election rhetoric about how sweeping the change they're going to institute will be.
3. Many people think of themselves as being in a camp or a tribe, and think there are competing camps/tribes trying to eat their lunch. Politicians often have to resort to issue-framing that paints matters in these terms, and that often works. An alternative strategy is to claim to be the great unifier, but only a few can pull this off. Anyway, when they get in office, they'll just be tweaking (landscaping) a mountain-like entrenched system rather than moving the mountain.
4. Most people, for whatever reason, are still religious, so even intelligent politicians have to pretend to be religious to win. See camps, above.

So it's understandable why people think politicians lie all the time. They kind of have to, to get elected. That's just how we are, as electors.

Comment Google should be on this (Score 2) 392

What we need is AI that can do automated story/fact credibility analysis.

Google is in the best position to develop this these days, maybe in collaboration with IBM.
Then it should be released openly (à la OpenAI) so that people will trust the system's results.

The system should consider factors such as:
1) Logical/factual compatibility of statement/story elements with the well-accepted consensus knowledge of scientific/subject experts.
2) Logical coherence of the statement/story.
3) Use of terms with clear, unambiguous meanings drawn from well-accepted theories/models of the world or aspects of it.
4) Utterance theory: a theory of people and organizations as motivated actors with preferences and goals.
Of course, in human society one way to achieve one's goals is to influence the focus of attention, beliefs, and behaviours of other people and organizations.
Uttering particular statements or stories (in particular situational contexts) is an effective way of influencing the focus of attention, beliefs, and behaviours of others.
So any system assessing the credibility of statements/stories must be able to reason about who the utterer/source is, what their situation is, what their goals for attention, belief, and behaviour influence are, and what the situation, disposition, and prior knowledge of the intended audience is.
5) Theories of framing as a means of belief crafting, attention focusing, and behaviour influence. This is a particular sub-part of 4.
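
To make the shape of such a system concrete, here is a rough sketch of how scores for the five factors might be combined. Every analyser, weight, and name below is invented for illustration; none of it is an existing Google, IBM, or OpenAI API:

    from dataclasses import dataclass

    @dataclass
    class Claim:
        text: str
        source: str      # who uttered it (needed for factor 4)
        audience: str    # who it targets (factors 4 and 5)

    # Each analyser returns a score in [0, 1]; real versions would be
    # trained models, not the placeholder constants used here.
    def consensus_compatibility(claim: Claim) -> float:
        return 0.5  # factor 1: agreement with expert consensus

    def logical_coherence(claim: Claim) -> float:
        return 0.5  # factor 2: internal consistency

    def term_clarity(claim: Claim) -> float:
        return 0.5  # factor 3: clear, well-grounded terms

    def utterer_motive_penalty(claim: Claim) -> float:
        return 0.5  # factor 4: source's incentive to mislead this audience

    def framing_penalty(claim: Claim) -> float:
        return 0.5  # factor 5: manipulative framing detected

    WEIGHTS = [0.30, 0.20, 0.10, 0.25, 0.15]  # arbitrary illustrative weights

    def credibility(claim: Claim) -> float:
        scores = [
            consensus_compatibility(claim),
            logical_coherence(claim),
            term_clarity(claim),
            1.0 - utterer_motive_penalty(claim),
            1.0 - framing_penalty(claim),
        ]
        return sum(w * s for w, s in zip(WEIGHTS, scores))

    claim = Claim("Chocolate cures baldness", source="snackco.example",
                  audience="dieters")
    print(f"credibility: {credibility(claim):.2f}")

The hard research problem is of course inside those placeholder functions, not in the weighted sum.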

Comment Would this apply to Alexa and Google Home too? (Score 1) 142

Since some people in the home might not know it was there or what it did?

Or does the fact that Alexa or Home only respond when a keyword is spoken mean it's somehow ok under these laws?
The Alexa or Home device is still listening all the time, and transmitting the voices to a server, right?
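
For what it's worth, the way these devices are usually described, audio sits in a short local buffer and is only streamed after an on-device keyword spotter fires, something like the sketch below. None of these functions are real Alexa or Home APIs; they're hypothetical stand-ins:

    import collections

    def detect_wake_word(chunk: bytes) -> bool:
        # Stand-in for the on-device keyword spotter; in this sketch,
        # nothing leaves the device until this returns True.
        return b"alexa" in chunk.lower()

    def stream_to_cloud(audio_chunks) -> None:
        # Stand-in for the upload that begins only after the wake word.
        print(f"uploading {len(audio_chunks)} buffered chunks + live audio")

    def listen(mic_chunks, buffer_len=20):
        ring = collections.deque(maxlen=buffer_len)  # local-only rolling buffer
        for chunk in mic_chunks:
            ring.append(chunk)                       # continuously overwritten
            if detect_wake_word(chunk):
                stream_to_cloud(list(ring))          # transmission starts here

    listen([b"dinner talk", b"more talk", b"Alexa, what's the weather?"])

Whether that local-only buffering satisfies consent laws is exactly the open question.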

Comment Re:Because Human Nature (Score 1) 388

OK, so human nature has many of us wanting to strive, do better, and do something valuable.
I agree.
But you haven't addressed how this urge is going to be satisfied when, say, 50% of jobs are replaced by automation and the remaining jobs are ones where humans and machines work together, so that the remaining human work component (say, of being a doctor) is devalued to about half its current economic value.

You seem to be implying that we can just put our finger in the dam, and our other hand over our eyes so we won't accept or acknowledge what's coming and will somehow magically prevent this transition from happening. I'm telling you that's wishful thinking. If there's a more cost-effective way of doing some necessary task with more or better automation, some organization somewhere is going to be offering to do it that way for cheap, and the market will move to that, disrupting the current way of doing it with more labour. Have you looked at the self-order McDonald's restaurants lately, or self-checkouts in stores? A small, small harbinger of what's to come.

I'm saying we have to be as innovative at dealing with this socioeconomically as we are innovative in developing this highly automated economy in the first place.
Exactly what that looks like I don't know, but it would seem to involve some form of UBI. If we don't do that, we'll have much greater inequality of means than even now, and with that will come social turmoil and irrational "solutions". It is not a stretch to say that what made Herr Trump's election possible was the loss of jobs and job value to automation, and the deluded belief that immigrants and cheap labour around the world were to blame. It is mostly automation causing this hollowing-out of employment opportunities. And the robots and AIs are already inside the walls.

So, I'm just saying that yammering "No No No No No" is not a solution and is not going to stop the rising tide of automated economy. Let me read your creative solution to this changed environment instead.

Comment tax profit yes but not to slow automation (Score 5, Insightful) 388

Look. We're going to have to accept, in the near future, that smart machines are better than humans at many tasks.

So why would we want, as humans, to keep doing those tasks? Isn't it just embarrassing to keep trying? You're not actually being useful; you're just pretending to be.

So yes, businesses that make a profit via automated processes should pay tax to help fund a UBI (universal basic income), but that tax shouldn't be different from the tax paid by any other profitable business.

Why keep people working at tasks they are second-rate at? Doesn't make any sense. People should be free to find something actually meaningful and useful to do, given their unique experience and talent. They shouldn't do make-work projects that a robot can do better. That's just a dumb policy.

Comment Re:Maybe it's time to return to LISP machines (Score 1) 157

Isn't it enough to be able to share memory between threads, rather than full processes, for most concurrent programming purposes?

I stipulated that no programs except those written in the single high-level language would be permitted to run on the machine. And that language would be designed to allow only secure, in-bounds memory access, via a high-level memory model such as the one LISP uses.

So how would you write the exploit and get it to execute on the machine? You'd have to write it in the LISP-equivalent language?
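
To illustrate what the bounds-checked model buys you, here's a toy sketch with Python standing in for the hypothetical safe language (this is not how any actual LISP machine was programmed):

    buf = [0] * 16           # a 16-slot buffer; no addresses, no pointer math

    def write(index: int, value: int) -> None:
        buf[index] = value   # every access is checked by the runtime

    write(3, 42)             # in-bounds: fine
    try:
        write(9999, 42)      # in C this could smash adjacent memory
    except IndexError as err:
        print("runtime rejected out-of-bounds write:", err)

The classic smash-the-stack exploit isn't just hard to write in such a language; it isn't expressible at all.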

Sure, such a machine would lack bleeding-edge performance due to the memory abstraction, but on the other hand today's processors are probably 30,000 times faster than a LISP machine's was, and those machines ran fine.
So what if all my programs run 5 to 10 times slower than they would on a more general, but dangerous, architecture?
I believe there would be a use for machines locked down from the bottom up in this manner, e.g. wherever security is at a premium and we don't have the absolutely most compute-intensive applications to run.
