Comment We are Responsible, not Oil Companies (Score 3, Informative) 56

The evidence is pretty much incontrovertible that oil industry executives knew that their product was going to cause deadly heat waves

That's not true, because climate and weather are not the same thing. The most you can say is that use of their products increased the chance of a heatwave, not that they caused any specific heatwave. Then there is the question of exactly how much use of their specific products increased that chance - a company like BP is not liable for the increase in heatwave risk caused by burning forests, deforesting the Amazon, burning coal and peat, producing cement, etc., since none of that involves use of its products.

Global warming is not the responsibility of any one company or country: it is the collective responsibility of humanity. We are the ones burning fuel to keep ourselves warm - or, increasingly, cool - to travel, and to make and build things. This is not at all like the tobacco companies, whose product was addictive and exceptionally hard to stop using because it altered brain chemistry. The reason we do not stop using fossil fuels is that doing so would massively decrease our standard of living, and with good reason we are not willing to do that to ourselves. Although we are working to find ways to maintain living standards without fossil fuels, we are not there yet. So if we want to see who, if anyone, is responsible, all we need to do is look in a mirror... but hey, why take responsibility for our own choices when we can blame a rich company instead and see if we can get it to pay?

Comment Re:I'm Still Not Seeing It (Score -1) 34

I don't own a computer. I am not a programmer. I do everything from my iPhone.

In the past 10 years, I have spent tens of thousands of dollars on human programmers to create 3 web apps. Zero of them ever were finished. ZERO.

I used Grok AI to create 5 web apps. 3 of them were monetized almost immediately and have paying clients. All 5 have passed security checks that look for bugs or hack entry points.

One of the 3 monetized web apps took me all of 30 minutes using Grok, on an airplane, using my iPhone. I was able to download the files and upload them to a web server and the site was live. Literally 30 minutes and that website has created thousands of dollars of passive income.

I use vibe coding DAILY to make spreadsheets better for me and clients (I am not in IT). I use vibe coding DAILY to come up with cool functions for my web apps that people pay me to use.

Comment Evidence of direction? (Score 1) 26

I think a lot of us ... tend to think 'virus first'

Exactly, so what is the evidence that the direction of evolution is from bacterium to virus and not the reverse, i.e. a virus that is evolving into a bacterium?

As I understand it, both the "virus first" and "virus by regression" models are still thought to be viable, so if they have clear evidence that the direction of evolution is from bacterium to virus, that seems like it would be important to know. However, as far as I can tell the article offers no evidence to support a particular direction of evolution; it just makes the assertion.

Comment Why the Arts has a Problem (Score 1) 133

The professor from the University of Sussex explains one of the intangibles that justifies the labeling of content: "In the arts, we can establish a connection with the artist; we can learn about their life and what influenced them to better understand their career. With artificial intelligence, that connection no longer exists."

This is why the Arts have a serious problem. Art should be judged on the merits of the work, not on who created it. Why do you need to understand the artist's career, frame of mind, or anything else about them to appreciate their work? If an AI can create something as stunning as the Sistine Chapel ceiling or compose something like Eine kleine Nachtmusik, why would we care that it was made by a machine? It may be that AI will find it extremely hard to produce such works of art but, if it succeeds in doing so, we would be idiots to judge the result less good simply because it was made by a machine.

Comment Learning the Hard Way (Score 5, Informative) 284

Well, to be fair, the US Supreme Court did say that the president was immune from prosecution for "official acts", and now every act seems to be official, so technically your president is above the law, like the old absolute monarchs of Europe. As we learnt back then, that does not tend to work out well - indeed, much of modern English common law was developed expressly to curb the power of the monarch.

I guess if the US can't learn that lesson the easy way it will have to learn it the hard way. Good luck, and for all our sakes I hope the lesson is not too painful!

Comment Re:Ian Betteridge laughs... (Score 1) 138

It's also not a model because, unless you plan to demolish existing housing and rebuild it with better-insulated walls, there is not a lot you can do to improve wall insulation. In addition, as someone living in a very well-insulated house, I can tell you that you absolutely do need air conditioning - the great insulation we need to get through Canadian winters is a liability in the summer because it traps the heat generated inside the house. After a couple of days of hot weather without air conditioning, the inside of the house gets up to the upper 20s Celsius and does not cool off until the early hours of the morning, even with all the windows open, thanks to that same great insulation.

Rooftop solar does work well - I think we will probably get some installed the next time we have the roof re-done, since the lifetime of solar panels is comparable to that of shingles. However, it takes about 14 years to recover the cost, so it's not exactly going to deliver savings quickly.

Comment Re:Label, not Prevent (Score 1) 66

I think the problem the two of you are having is that you do not understand how these "AI" algorithms work and are seeing them as "intelligent" when they are really not - they are complex text-prediction engines: all they do is calculate the most likely "next word" in a sentence based on their training data. They have no idea or understanding of what they are saying; hence, if they say they are a doctor, therapist, lawyer, etc., it is merely because, based on their training data, that is the "best" set of words with which to respond to the query.
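To give a loose sense of what "calculate the best next word" means, here is a toy sketch - a simple bigram counter, nothing remotely like a real transformer-based model, and with a corpus and function names invented purely for illustration:

```python
# Toy illustration only: pick the statistically most common next word
# from counts gathered over a tiny "training" corpus. A real LLM is far
# more sophisticated, but the principle - predict, not understand - is the same.
from collections import Counter, defaultdict

corpus = "i am a doctor . i am a lawyer . i am here".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent next word - no understanding involved."""
    return follows[prev].most_common(1)[0][0]

print(next_word("am"))  # -> "a", purely because that is what the counts say
```

The point of the sketch: the model "claims" whatever continuation its data makes most likely, with no notion of whether the claim is true.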

Now, if everyone understood that, there would be no issue with people using them, since any time one claimed to be some sort of expert we would all know it was completely bullshitting us - like an actor in a film, where we all know they are not a real doctor, lawyer, or whatever the script says they are. Clearly some people, such as yourselves, do not understand that about "AI" chatbots, so a potential solution is to clearly label the chatbot as such, so that we are all on the same page when it comes to understanding its output.

If we all know that the output cannot be treated as true, then that prevents harm. Nobody should follow the advice of a chatbot claiming to be a doctor, in the same way that we do not follow the advice of an actor in a film claiming to be a doctor. So it is not that I want to protect any "superintelligence"; it is that I know it is no such thing - indeed, that it lacks intelligence, although it can pretend very convincingly. Nevertheless, it can be a useful tool to copy-edit documents, summarize content, etc., as long as you treat the output with care and stick to the things the "AI" is good at, which is basically text manipulation.

Comment Re:Context Matters a Lot (Score 1) 66

Acting in a role of a doctor is not impersonating a doctor, sad that needs to be explained to you.

It's not me that it needs to be explained to, but you. The reason you know an actor is not a real doctor, lawyer, etc. is the context in which they are making the claim. They may hand out advice or suggest a treatment in the play or film, but you know they are just pretending because of the context... so - and here is the part you seem to have trouble with - if we clearly label AI chatbots as fictional, then we make the context the same for them as for an actor in a film.

If actors give advice on camera as though they are doctors, interestingly there are disclaimers?

No, there are not - perhaps in the US, but not in the rest of the world, because we understand the difference between reality and films/TV... and that does seem to be your problem here, so perhaps disclaimers really are needed in the US. Regardless, they would be easy to add - just put a popup screen before you access the chatbot indicating that anything the chatbot says may be a complete fabrication and that nothing should be trusted as being correct, and there you go. I really do not understand why you are having such a hard time grasping how this would work.

Comment That is literally my exact point! (Score 1) 66

There, that was not too hard, was it?

No, it was not - thank you for making the _exact_ point that I made, i.e. that context matters. If we clearly label AI chatbots as fictional, like a film or play, then people's expectations should be the same as for a film or play: if the chatbot says it is a doctor or a lawyer, they know it is not true, just as they would with an actor in a film.

Comment Context Matters a Lot (Score 1) 66

Does not matter. If the machine claims it is a licensed therapist, this either has to stop or the machine has to be turned off.

Yes, it does matter. If you watch a film and an actor in it says they are a medical doctor, does that mean the actor deserves a lengthy prison sentence for claiming to be a doctor when they are not? Your approach would pretty much make the acting profession illegal. The difference between an actor and a scam artist is purely context: in a film or play we know that not everything we see is true, so there is no intent to defraud, only to entertain.

Labelling AI chatbots in a way that makes it clear that their output is not always going to be true is all that is needed. It is then up to the user to decide whether that means they are still useful or not.

Comment Label, not Prevent (Score 1) 66

You regulate that by punishing the chatbot owners if they do not prevent it.

You can't prevent it: current "AI" technology does not understand what it is saying, so not only can it lie/hallucinate, it has no idea that it has even lied. The correct response is to label it correctly, i.e. make sure that all users know that AI output cannot be trusted to be correct. This would not only solve this therapist issue but also all the other problems related to people trusting AI output, like lawyers submitting AI-written court documents with fabricated references.

Essentially, treat AI output like a work of fiction. It may sound plausible, and it may even contain real facts, but just like some "fact" you read in a fiction book, you should not rely on anything it says being true.

Comment Re:What laws? (Score 1) 100

I do not see anything in any of those amendments about not purchasing data. There was no search or seizure: the airlines voluntarily sold their data. As for the Fifth Amendment, the only part that seems to apply here is "nor shall private property be taken for public use, without just compensation", and since the data were sold, there was clearly "just compensation" - and arguably the property was not "taken" but offered for sale.

Unless you can show that the airlines were somehow compelled to hand over the data, I don't see how anything in those amendments applies. Your government bought the data from companies that were more interested in making extra money than in protecting their customers' privacy. It's shitty behaviour, and in most places with data protection laws it would be illegal for the company to sell private data like that, but it's the company at fault here, not the government... although I'll grant you that it raises definite questions about what your government is planning to do with all that data.
