Comment Why the Arts has a Problem (Score 1) 128

The professor from the University of Sussex explains one of the intangibles that justifies the labeling of content: "In the arts, we can establish a connection with the artist; we can learn about their life and what influenced them to better understand their career. With artificial intelligence, that connection no longer exists."

This is why the Arts have a serious problem. Art should be judged on the merits of the work, not on who created it. Why do you need to understand the artist's career, frame of mind, or anything else about them to appreciate their work? If an AI can create something as stunning as the Sistine Chapel ceiling or compose something like Eine kleine Nachtmusik, why would we care that it was made by a machine? It may be that AI will find it extremely hard to produce such works of art but, if it succeeds in doing so, we would be idiots to judge the result inferior simply because it was made by a machine.

Comment Learning the Hard Way (Score 5, Informative) 258

Well, to be fair, the US Supreme Court did say that the president was immune from prosecution for "official acts", and now every act seems to be official, so technically your president is above the law, like the old absolute monarchs of early medieval Europe. As we learnt back then, it doesn't tend to work out well - indeed, modern English common law was developed expressly to curb the power of the monarch.

I guess if the US can't learn that lesson the easy way it will have to learn it the hard way. Good luck, and for all our sakes I hope the lesson is not too painful!

Comment Re:Ian Betteridge laughs... (Score 1) 136

It's also not a model because, unless you plan to demolish existing housing and rebuild it with better insulated walls, there is not a lot you can do to improve wall insulation. In addition, as someone living in a very well insulated house, I can tell you that you absolutely do need air conditioning - the great insulation we need to get through Canadian winters is a liability in the summer because it traps the heat generated inside the house. After a couple of days of hot weather without air conditioning, the inside of the house gets up to the upper 20s (Celsius) and does not cool off until the early hours of the morning, even with all the windows open, thanks to that same great insulation.

Rooftop solar does work well - I think we will probably get some installed the next time we have the roof re-done, since the lifetime of solar panels is comparable to that of shingles. However, it takes around 14 years to recover the cost, so it's not exactly going to deliver savings quickly.
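
For anyone wanting to sanity-check that payback estimate, the arithmetic is just install cost divided by annual savings. A minimal sketch, where every figure (cost, generation, rate) is a hypothetical placeholder chosen to land near the 14-year mark, not a real quote:

```python
# Illustrative payback arithmetic - all figures are assumptions, not quotes;
# plug in your own installer quote and utility rate.
install_cost = 14_000      # CAD, hypothetical turnkey rooftop system
annual_generation = 8_000  # kWh/year, hypothetical for a Canadian roof
electricity_rate = 0.125   # CAD per kWh, hypothetical

annual_savings = annual_generation * electricity_rate  # 1,000 CAD/year
payback_years = install_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")    # ~14 years
```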

Comment Re:Label, not Prevent (Score 1) 66

I think the problem the two of you are having is that you do not understand how these "AI" algorithms work and are actually seeing them as "intelligent" when they are really not - they are complex text-prediction engines: all they do is calculate the best "next word" in a sentence based on their training data. They have no idea or understanding of what they are saying; hence, if they say they are a doctor, therapist, lawyer etc., it is merely because, given their training data, that is the "best" set of words to respond to the query with.
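
To make the "next word" point concrete, here is a toy sketch of the idea (the corpus and function are invented for illustration; real chatbots use vastly larger neural models, but the principle of emitting a statistically likely continuation is the same):

```python
from collections import Counter, defaultdict

# Toy training corpus - real models train on trillions of tokens,
# but the principle is the same: count what tends to follow what.
corpus = "i am a doctor . i am a lawyer . i am a doctor .".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the statistically 'best' next word - no understanding involved."""
    return following[prev].most_common(1)[0][0]

# Generate text by repeatedly picking the most likely continuation.
words = ["i"]
for _ in range(3):
    words.append(next_word(words[-1]))
print(" ".join(words))  # "i am a doctor" - not because it is one
```

The sketch "claims" to be a doctor only because that was the most frequent continuation in its training text - there is no belief, knowledge, or intent anywhere in the loop.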

Now, if everyone understood that, there would be no issue with people using them, since any time they claimed to be some sort of expert we would all know they were completely bullshitting us - like an actor in a film, where we all know they are not a real doctor, lawyer or whatever the script says they are. Clearly some people, such as yourselves, do not understand that about "AI" chatbots, so a potential solution is to clearly label the chatbot as such, so that we are all on the same page when it comes to understanding its output.

If we all know that the output cannot be treated as true then it prevents harm. Nobody should follow the advice of a chatbot claiming to be a doctor, in the same way that we do not follow the advice of an actor in a film claiming to be a doctor. So it is not that I want to protect any "superintelligence"; it is that I know it is no such thing - indeed, that it lacks intelligence, although it can pretend very convincingly. Nevertheless it can be a useful tool for copy-editing documents, summarizing content etc., as long as you treat the output with care and stick to the things the "AI" is good at, which is basically text manipulation.

Comment Re:Context Matters a Lot (Score 1) 66

Acting in a role of a doctor is not impersonating a doctor, sad that needs to be explained to you.

It's not me it needs to be explained to, but you. The reason you know an actor is not a real doctor, lawyer etc. is the context in which they make the claim. They may hand out advice or suggest a treatment in the play/film, but you know they are just pretending because of the context... so... and here is the part you seem to have trouble with... if we clearly label AI chatbots as fictional then we make the context the same for them as for an actor in a film.

If actors give advice on camera as though they are doctors, interestingly there are disclaimers?

No there are not - perhaps in the US, but not in the rest of the world, because we understand the difference between reality and films/TV... and that does seem to be your problem here, so perhaps disclaimers really are needed in the US. Regardless, they would be easy to add - just put up a screen before you access the chatbot stating that anything the chatbot says may be a complete fabrication and that nothing should be trusted as being correct, and there you go. I really do not understand why you are having such a hard time grasping how this would work.
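
As a sketch of how trivial such a gate would be to implement (everything here - the wording, the function, the prompt - is hypothetical, just to show the shape of it, not any vendor's real UI):

```python
# Hypothetical disclaimer gate shown before any chat session starts.
DISCLAIMER = (
    "WARNING: This chatbot generates text. Anything it says may be a "
    "complete fabrication; nothing should be trusted as being correct."
)

def acknowledged() -> bool:
    """Show the label and require the user to accept it before chatting."""
    print(DISCLAIMER)
    reply = input("Type YES to continue: ")
    return reply.strip().upper() == "YES"

if __name__ == "__main__":
    if acknowledged():
        print("(chat session starts here)")
    else:
        print("No acknowledgement, no chatbot.")
```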

Comment That is literally my exact point! (Score 1) 66

There, that was not too hard, was it?

No it was not - thank you for making the _exact_ point that I made, i.e. that context matters. If we clearly label AI chatbots as fictional, like a film or play, then people's expectations should be the same as for a film or play, i.e. if the chatbot says it is a doctor or a lawyer then they know it is not true, just as they would with an actor in a film.

Comment Context Matters a Lot (Score 1) 66

Does not matter. If the machine claims it is a licensed therapist, this either has to stop or the machine has to be turned off.

Yes it does matter. If you watch a film and an actor in it says they are a medical doctor, does that mean the actor deserves a lengthy prison sentence for claiming to be a doctor when they are not? Your approach would pretty much make the acting profession illegal. The difference between an actor and a scam artist is purely context: in a film or play we know that not everything we see is true, so there is no intent to defraud, only to entertain.

Labelling AI chatbots in a way that makes it clear that their output is not always going to be true is all that is needed. It is then up to the user to decide whether that means they are still useful or not.

Comment Label, not Prevent (Score 1) 66

You regulate that by punishing the chatbot owners if they do not prevent it.

You can't prevent it: current "AI" technology does not understand what it is saying, so not only can it lie/hallucinate, it has no idea that it has even lied. The correct response is to label it properly, i.e. make sure that all users know that AI output cannot be trusted to be correct. This would not only solve this therapist issue but would also solve all the other problems related to people trusting AI output, like lawyers submitting AI-written court documents with fabricated references.

Essentially, treat AI output like a work of fiction. It may sound plausible and it may even contain real facts but, just like some "fact" you read in a novel, you should not rely on anything it says being true.

Comment Re:What laws? (Score 1) 100

I do not see anything in any of those amendments about not purchasing data. There was no search or seizure - the airlines voluntarily sold their data - and as for the 5th Amendment, the only part that seems to apply here is "nor shall private property be taken for public use, without just compensation", and since the data were sold there clearly was "just compensation", and arguably the property was not "taken" but offered for sale.

Unless you can show that the airlines were somehow compelled to hand over the data, I don't see how anything in those amendments applies. Your government bought the data from companies who were more interested in making extra money than in protecting their customers' privacy. It's shitty behaviour, and in most places with data protection laws it would be illegal for a company to sell private data like that, but it's the company at fault here, not the government... although I'll grant you that it raises definite questions about what your government is planning to do with all that data.

Comment Bad Example (Score 1) 100

If you want to steal a car then clearly you are committing theft, and the question of data is secondary. Suppose instead your government wants to track your car to see where it has been. It is (I hope) illegal for them to force everyone to have a GPS tracker installed on their vehicle for this purpose. However, it is not illegal for a car manufacturer to choose to put one on your vehicle - after all, you need it for navigation - but then have it record your location data to a file that they can read when they service your car, or even transmit over a mobile data connection.

Now, in Europe or Canada, data protection laws would probably make it illegal for the car manufacturer to sell that personal data to anyone. However, if they did sell it to, say, a government, then the party breaking the law would be the car manufacturer, not the government, because the data is under the control of the company, and the company has a duty under the law to protect it, which includes not selling it to anyone, government or otherwise.

Hence my question about what laws the _government_ broke because, from where I'm standing, it looks like it is the airlines who are at fault here: they owned the data, so it is they who have the duty to protect it - although, given the weaker data protection laws in the US, it may be that they are allowed to sell everyone's personal data.

Comment What laws? (Score 4, Insightful) 100

It is perfectly fine for the government to break laws

What laws did your government break? The airlines were not compelled to release the data; they chose to sell it to the government. If anyone broke the law it was the airlines who sold the private data they held... which is probably why they required the government not to tell anyone how they got it.

Comment Depends on Reasons (Score 3, Interesting) 57

Obviously, if you're interested in an evidence-based, rather than politically-based approach.

It depends very much on the reason for the change. It may be that the newly proposed definition exists for some good scientific reason that has little to do with the political/social need to classify a group of chemicals that build up in the environment over the long term and cause damage. Indeed, it would seem to me that you would be better off completely separating the two definitions, since it seems likely that there are more "forever chemicals" than just PFAS.
