
Comment Re:Looks like GPL violation to me (Score 1) 143

I agree. I don't understand a lot of the commentary above - this appears to be a clear and obvious violation of the GPL and is exactly the sort of thing that was discussed all over the place 20 years ago.

To my understanding:

1. RH are perfectly fine to restrict who can access their software. They can sell it to only a small group of customers, bundle it with other software, give it away for free, whatever they want. The only restriction on them is that they must make the source code available to anyone to whom they distribute the software.

2. Anyone who receives the software is free to re-distribute it (if they want; they are not required to, any more than RH is above - https://www.gnu.org/licenses/g...).

If they do choose to distribute it, they are also required to distribute the source code too.

3. Any person who does distribute the software is not permitted to place any restrictions on the distribution rights of the recipients (section 12 of the GPL3).

https://www.gnu.org/licenses/g...

Red Hat's behavior is a clear violation of principle 3 above.

Comment Re:No (Score 1) 144

This is, by and large, the UK model. When British Telecom was broken up in the 80s and 90s, the infrastructure division was spun off into a separate company which owns and manages the lines to houses (and businesses). The company is now known as Openreach.

They are a monopoly, in so far as no one else will pay to install their own lines when Openreach got theirs for free and doesn't have to pay any capital costs for them.

They are regulated as a monopoly just as you suggest, and all internet providers can rent access to the lines managed by Openreach to provide services to households and businesses. As a result, to an almost universal degree, any household in the UK can choose from at least half a dozen providers.

There are other companies that lay their own - e.g. Virgin have installed their own cable infrastructure. But in general, all the FTTC provision in the UK is through Openreach.

It seems to work fairly well for internet provision. The UK tried much the same model for railways in the 90s, and it has failed utterly. The first incarnation (Railtrack) went bust and had to be re-nationalised. The second attempt (Network Rail) is shambolic, and it's hard to see how a government agency could run it worse.

Comment Re:Exponential emergence (Score 1) 77

How about: "It has pre-programmed decisions (be it a single behaviour or a behaviour tree of some kind)" vs. "we have no idea what it'll decide at any given moment" ?

Again, if the pre-programmed behaviour is, in practice, indistinguishable from the "we have no idea what it'll decide" behaviour, then what's the difference?

Comments about the state of society at large I'll leave alone. That's a subject for people more qualified than me to jump into!

Comment Re:Exponential emergence (Score 1) 77

But it doesn't. It's literally just a fancy program. It doesn't drive a taxi because it has made a career decision, but because there's a piece of code telling it that it's a taxi.

For the second time, I agree with you. I am NOT trying to argue that simulating motivation is the same as having motivation.

I am merely making the point that, in the context of regulating AI involvement in critical systems (the point of this whole thread), the difference between a simulated motivation and a real motivation is a distinction without a difference.

Whoever said AIs should be given any rights? Lock the lunatic up before he does something stupid.

Of course this will be an issue. It won't take very long before some parts of society take up the line that a) if they behave like intelligent (and possibly emotional) beings, and b) we can't actually tell what intelligence or consciousness are in order to properly evaluate and compare (i.e. the only decent definitions of consciousness rely on access to one's subjective experience to confirm), and c) we are putting them into involuntary service for our needs, therefore d) we are creating a race of slaves.

Comment Re:Exponential emergence (Score 1) 77

"That's like saying "all it takes to get rich is one or two successful startups" or "all it takes to win a war is to defeat the enemy".

There's a few things being worked on that might at first glance LOOK like AI making decisions. Such as self-driving cars pimping themselves out as taxis. But that's still fairly simple, programmed behaviour where the AI part is just a module in a program doing a specific function, and not a motivational driver for behaviour."

That's my point - what's the difference? If it is given a "simulated" motivation (i.e. a script designed to look like motivational behaviour) and it does it sufficiently well that no one can tell the difference, then our rules (laws, regulations, etc.) had better recognise the fact that it behaves as if it has human motivations.

I'm not trying to say that making a real set of human motivations is easy, merely that using the standard set of pattern matching routines we're already using to simulate human motivation (i.e. make the output of the system appear as if it is generated by someone with human motivations) is in principle a simple thing - enough trial and error and we'll get there.

"If what you're saying is that building Robocop, training it on real police actions and giving it a gun is a stupid idea, then I totally agree. But you could leave out the AI part and just say that giving a gun to a piece of software is stupid, period."

I am indeed saying that (albeit indirectly, since that isn't really the point I'm trying to make). I could indeed leave the "AI" part out, and just treat it as a piece of software instead. That's sort of the point we're both making - the distinction between true AI and mundane software is huge (and probably insurmountable, e.g. John Searle), but if mundane software behaves in a way that is indistinguishable from AI, and we're concerned with making sure that the use of AI is properly managed, then what's the difference?

As I said above, it makes a huge difference for moral questions (should AIs be given rights, etc.), but for practical regulations on what AI can and can't interact with, what is the difference between a true AI and a very (very) clever software script?

Comment Re:It's AI (Score 1) 77

It wouldn't help. Almost all the data that went into the decision won't make sense to us (apart from the blindingly obvious data that is directly related to the subject at hand). Most of the interesting stuff in modern AI-like bots is in the connections between the data, and the weighting it gives to those connections.

The things it chooses to make connections about will not likely make much sense to us (neither would the connections our own neurons make), and the particular data points that are used to generate an output wouldn't make much sense either (since they depend on a network of such connections).

Comment Re:Exponential emergence (Score 1) 77

I agree with you. However, we are slowly approaching the stage where the distinction between what an AI is and what an AI does is of little practical consequence; it is merely a philosophical difference.

All it would take is for someone to create a future version of an AI and give it the simulated motivation to act in a way that more closely resembles human behaviour.

It would then scan through all its available data, index it according to its "human behaviour" content, and then integrate that search data into its output when interacting with the world, presumably doing things like simulating emotional responses, behaving erratically, appearing to have sub-conscious motivations, etc.

It wouldn't be that hard for a search / pattern-matching based AI to do, and it would be hard to distinguish from genuine human behaviour. You and I both know that all it's doing is pattern matching, but if the end result is exactly the same...

If we want to talk about certain issues to do with AI (rights, analogies to slavery, moral agency, etc.) then the question of whether a bot is really an AI is hugely important. If we want to talk about whether an AI can kill us if not properly managed, then behaviour is all that matters.

Comment Re:Get it back up, already! (TWSS) (Score 1) 180

More importantly, who cares if it was offensive?

Let's be honest, any of us (if arguing in good faith) can understand why someone might find the comments offensive (even if we don't agree with them), and the nature of offense is that it is subjective and personal.

The better question is why we should care. Why should a post be removed just because (some) people might take offense to it?

Comment Re:Ah, good. (Score 1) 122

M.A.D. isn't really intended to be a deterrent against these things. It is intended to be a deterrent against:

a) a nation state with a strong army threatening your state

b) a nation state with their own nukes

In both those cases, the opposing nation will want to protect their nation (which includes themselves), and therefore will not want you to nuke them.

This does not work if the opponent is:

a) a small non-territorial group

or

b) irrational, or motivated by non-material objectives (e.g. ideological fundamentalists)

But it isn't intended to do so.

A key part of MAD theory is that your opponent has to believe that you are willing and able to use the nukes if provoked. It isn't enough just to have them; you have to make them believe you'd use them. Hence the constant development of newer and better weapons - it is part of the theatre of presenting a credible deterrent.

Therefore, your analogy between a pop-gun and an AK47 misses the point. Everyone knows that the pop-gun will do the job just as well as the AK. However, the signal that is sent by owning only the pop-gun is that you do not take this seriously.

Comment Re:Ah, good. (Score 1) 122

It sort of doesn't matter in the case of the USA. The US military is so much more powerful than that of any other country (or group of countries) that it serves the same purpose as a nuclear deterrent (i.e. if you piss off the US and they send in their military, you and your government and probably most of your country are done for).

The US doesn't need the nukes to have a deterrent.

Conversely, because of the overwhelming size and power of the US military, other nations do feel they need the nukes, as it is the only thing that protects them. The US can't march its military on their capital if they have the nukes to deter it.

This unfortunately means that:

a) the US unilaterally disarming wouldn't change anything in terms of nuclear risks.

b) other countries won't disarm, unless the US shrinks its military by about 3/4.

Comment Re:Unwanted? (Score 1) 122

Would they? Can you count on your opponent believing it?

If you are a citizen of the USA, can you be sure your government would use nukes if Hawaii were invaded? Alaska? Even if they know that by using the nukes they'll get nuked too? It's a lot to ask of a government to kill themselves over a principle like deterrence. But if not then, when?

Similarly, if you're a UK citizen would you even want your nukes used if your enemy invaded the Channel Islands? If they then pushed through to the Isle of Wight, and then stopped there? If not then, when? And if then, why, for what benefit?

Granted, if your enemy were rolling their tanks into Washington or London, then it's a different story. But no one attacking a nuclear power would be foolish enough to do it that way - they'd do it in little slices instead, each of which is too small individually to trigger a strategic response of that level.

Comment Re:Unwanted? (Score 1) 122

In this particular sense, he's playing the game and playing it well.

Having a nuclear arsenal is of little use if your opponent doesn't believe you'll actually use it. Part of having the arsenal is making your opponent believe that if they cross certain lines they'll get a nuke in the face. This is why there was so much tension throughout the cold war about hair-trigger alerts and command-and-control and the like - the leadership knew that it had to appear ready to use the weapons when provoked, and to do that you have to actually make it look credible to the other side.

Anyway, part of Russia's position in invading Ukraine is that NATO will not intervene because they are scared of the nuclear aspect. This has proved completely true so far (and rightly so, in my opinion). Since the Russian armed forces have proved to be far more useless than anyone (including the Russians) thought, they are more reliant now than when they started on that nuclear deterrent preventing NATO from getting involved. (They know that if NATO got directly involved they'd lose in about a week.)

Hence they keep ramping up the nuclear aspect, as it is the only way they can stay in the game. Putin understands this, and is playing the game well.

Comment Re:did you just wake from a coma? (Score 1) 122

To some degree, nuclear weapons caused this current conflict in Ukraine.

If you take the nukes off the table, the western forces would have driven the Russians back out of Ukraine in about a week and would be happily rolling up their border forces, taking control of Belarus, and deciding whether they wanted to push on into the Russian interior.

The western forces are so staggeringly overpowered compared to the Russians that the conflict wouldn't have lasted a couple of weeks. The only thing keeping the Russians in the game (i.e. stopping western forces from getting directly involved) is that they have the nukes and don't appear to be as afraid to use them as we'd like.

They can see this as well as we can (albeit, they surprised themselves a bit by just how rubbish their armed forces are), and are, to some degree, hiding behind their nuke shield for protection. They know this is why NATO doesn't get involved and they're counting on it.

If the nukes didn't exist there isn't a chance in hell they'd have invaded like they did. They only invaded because they have the nukes.

(Obviously, taking a wider view, if they didn't have the nukes then a whole lot about Russia and its relationship with the world would be different.)
