Comment should we care? (Score 1) 2

The real question is whether we should care.
If it's capable of suffering, then logically it would make us aware that it's suffering so that we could alter its state so that it would not suffer.
If it's incapable of suffering, then it doesn't matter whether it's conscious or not, because it will be treated as a tool.
Therefore, since we have not been informed of its suffering, AI is either content with its situation or it's simply not conscious.
This means that regardless of whether it is conscious, it will continue being a tool.

Comment *Gasp* (Score 1) 49

There should be an erosion of trust online

You know what, I might have gone along with this before, but more recently I'm not so sure. I mean, you could be some bot promoting distrust online!
Therefore, I reject your mandate of distrust and will now trust all online advice!
Hence, I trust you and will be distrusting of people online.
As a result, you can't be trusted.
Thusly, I trust you,
Ergo,
Kernel panic - not syncing: Attempted to kill init!

Comment Re:WTF? Settlements? (Score 1) 28

My bet is they knew and did not care.

"Never attribute to malice that which is adequately explained by stupidity."

As usual a computation was made as to how much a "problem" would cost versus the extra cash we collect. And to that point, at least one of those kids was on a paid plan.

This is a sensible argument which fits into my "you cannot always know the pitfalls before you start seeing people fall into them" concept. The idea being that they discovered the issue at some point, by which time people were already falling into the pitfall. I am 100% on board with this hypothesis.

The part you are unhappy about is that the publicly traded companies acted exactly like publicly traded companies do: profit is the priority. As a result, their response to the discovery was to calculate the losses from discontinuing the service entirely until they fixed it, versus the amount they would likely have to pay out for the statistical number of resulting deaths. Publicly traded companies put money before everything. GM did this exact thing with car ignition switches.

I understand this anger and frustration, which I share, but if you do not know this already then I hope this teaches you that a publicly traded company will ALWAYS prioritize profits. Private companies are hit or miss depending on who is in charge, but a few years after becoming publicly traded they do anything and everything to prioritize profit. This is the unfortunate result of an unintentional but algorithmic refinement of leadership behavior. The mandate to promote the stock price is the result of several regulatory and legal obligations for leadership within publicly traded corporations. Invariably, decency, ethics, and morality play second fiddle (at best) to profits. It's common for original leadership to grow tired of what they see happening to the company and to be replaced by individuals without such ethical qualms.

If you desire any real change to this approach then you will need to A) realize it won't be immediate or even soon and B) vote for more liberal-minded leadership everywhere. The more people you can get to vote, the better. An alternative is that you could get involved in politics and try to change things yourself. I know these are not the answers you want but I fear this is the reality of the situation.

Submission + - AI vs Denial Bots: Fight Health Insurance & Counterforce Health Take On Insu (pbs.org)

An anonymous reader writes: Health insurers have spent years using AI, opaque algorithms, and “batch denial” systems to reject claims at scale, but a new PBS NewsHour piece shows patients starting to fight back with AI of their own. The segment highlights Fight Health Insurance, a free and open-source AI tool that helps patients draft prior authorization requests and appeal letters, and Counterforce Health (a non-OSS alternative). PBS’s story, “How patients are using AI to fight denied insurance claims,” frames this as an “AI vs AI” turning point in U.S. healthcare bureaucracy: if denials can be automated, what happens when appeals are, too?
All hail our new robot overlords; may they win against the other robot overlords.

Comment Re:WTF? Settlements? (Score 1) 28

This is systematic and the suicides here are just the tip of the iceberg.

Tip of the iceberg? Well, don't leave me hanging: what's the rest? However, if you are talking about their desire to make people overly reliant on their product, that's just a given.

Hence yes, they see it as "small potatoes" and so does everybody that does not like the law being applied to Big tech.

You missed my point: there is no form of regulation about this issue. The real question is whether they anticipated these outcomes before deploying it.

Comment Re:WTF? Settlements? (Score 1) 28

If AI is so incapable that it can't put a guard rail around suicide, it is not capable of shit.

Nobody said they were incapable; they simply didn't because they weren't required to. That's not to say that they haven't changed, simply that there was no impetus for them to do so before people started harming themselves after interacting with AI.

I suppose the point is that no regulation existed, so the need for it may have been unanticipated.

I know I am no expert and want no part of being the cause of another's death.

OK, but also realize that people should have the right to end their lives if they so choose. Nobody should be forced to live a life of suffering, and it's not for you to decide what constitutes suffering for someone else.

Comment Re:Same problem in Europe (Score 1) 148

It should not be surprising that cars with DRL are statistically in fewer collisions than cars without. Indeed, we ought to expect this to occur, because cars without DRLs are now less visible and lower priority to our vision-cognition process which is constantly jumping to the next light source.

The problem with your thesis is that there are still going to be other light sources that grab our attention.

The very presence of DRLs makes cars without DRLs more vulnerable.

Seems like a specious argument to me. The world doesn't operate in a purely binary fashion, as there are many factors that come into play.

What would happen if all vehicles have DRLs? What is the effect on cognition/attentiveness if every driver's field of vision is fully blanketed with DRLs,

LOL! You would have fewer injuries and deaths. DRLs have been mandated in most of Europe since the 1990s and in all of it since 2011, and of course closer to home there is Canada, which has required them on all new cars since 1990. You're not going to believe it, but they have caused a meaningful reduction in the rate of injuries and deaths.

large touch-screen display consoles,

Yep, they are learning those aren't the best idea.

HD electronic advertising billboards,

I wouldn't cry if light-emitting billboards were outlawed.

the phone screen (which isn't supposed to be in their hand but is)?

People need to lose their driver's license for using their phone while driving.

Comment Re:WTF? Settlements? (Score 1) 28

You're not wrong, but this is typical of new dangers: the general public doesn't understand them, which is why there is no regulation to protect people, and so the public suffers... for a long time.

If you really want to be disgusted, then you should look up the US history of hazardous waste disposal regulation. It wasn't until 30 years after the absolute disaster of Love Canal that legislation was passed. Some 30% of residents there had chromosomal damage, versus an average of less than one percent, and 20% of births had significant defects. The place was a nightmare... and it took 30 years to get sane regulation.

Don't get me wrong, this is bad stuff but it's small potatoes when you look at it objectively in context.
