Comment Re: Likeness rights are something that have existe (Score 2) 111

Yes. Existing law already protects them against intrusion, appropriation, unreasonable publicity, and false light. Notable actors and actresses have made trademark claims on their likeness, including voice, and copyright covers direct copies and derivative works pretty well.

They are calling for additional protection both through contract and through additional laws.

The law currently has a gray area around appropriation when it comes to generated, synthetic voice. There is very little case law.

I am sympathetic to wanting specific authorization for each use. The law probably already covers it under personality right appropriation, but it is not completely clear, settled law.

Comment Re:AI a Force For Good ? (Score 2) 67

I believe the technology itself is strictly neutral. It is the people using it, and the way they use it, that is problematic. Most humans are great, and will choose to do the decent thing. Some are not. A few criminals have already latched on to them.

  • We already have AI-driven drones that are completely computer controlled. Some of them are used in warfare, with weapons. Under international treaties against booby traps, and for better control, militaries keep a human who gives the order to fire, but it could easily be fully automated.
  • We already have AI-driven turrets that can identify a single individual in a crowd and track them. In commercial environments they're usually used in sports to track the ball carrier, by news agencies to track key people, or even by paparazzi to find stars for photos. It wouldn't take much to turn them into a lethal AI-driven weapon.
  • We already have AI-driven vehicles that are completely computer controlled. Right now in many major cities you can have an automated taxi pull up and deliver you to your destination. Honestly, it's only a matter of time before someone straps a bomb into one as the passenger.
  • We already have AI-driven smart weapons, ranging from guided missiles to smart bullets. Currently they're controlled by militaries and smart engineers, not by crazy folks who want to see the world burn, but eventually technical know-how and world-burning desire will collide in the same person.
  • We already have systems that make deepfakes; a few minutes of video can yield a convincing image and a convincing voice. There are sadly already people who will fake a kidnapping, using automated voices to simulate a victim and extort a family who thinks a loved one is being held for ransom.

For the most part people do good things, or at least not terrible things. But there are people who won't hesitate to use the technology to commit major crimes, and that's the difficulty.

Comment Re: A message, but easily addressed. (Score 4, Insightful) 125

The anonymous coward brings up another good point: they are the nanny state. They dictate your medical care without medical training or knowledge of the situation, they dictate education despite not being trained as educators or knowing the details of the child, and they otherwise work to eliminate self-determination and call it freedom.

Comment A message, but easily addressed. (Score 4, Insightful) 125

It's a strong political message, but little more.

The new law was going to be for two years, and it passed.

The courts can change it back to two years. The next legislature can change it. The next governor can change it.

It's really only gullible people who would think the state's education budget is now set in stone for four centuries. Sadly, there are plenty of gullible people and clickbait-writing reporters, but really that's bound to make the political message that much bigger.

Comment They can have multiple motives. (Score 3, Insightful) 52

Where do you see them writing that they aren't protesting? People can have more than one motive at a time. They aren't saying "we are not protesting"; in the announcement post they make it clear they're acting for both reasons.

They are protesting and were clear about it. Just look at the titles of the various admin posts. It absolutely started as part of the protest, they openly declared it.

They are also fixing a known oversight. The game is rated M (17+) by ESRB for nudity, strong sexual content, use of drugs, blood and gore, all categories that get NSFW tags on Reddit. It should have been tagged, but wasn't. That's a second motive. It doesn't mean the first motive is invalid, and the first motive doesn't invalidate this one. Both motives support the NSFW marker.

Once they got involved in the protest, saying "the Reddit admins have threatened us", they were openly in protest and not going to switch back. They're also providing a canary. That is, the moderators declared that they wouldn't remove the flag; if it gets removed, people will know Reddit modified it against the community's wishes. What started as two good reasons grew a third: the marker now also indicates a loss of community control.
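To make the canary idea concrete: anyone can watch the flag from outside. Here's a minimal sketch of such a check, assuming Reddit's public `about.json` endpoint and its `over18` field; the subreddit name and field specifics should be treated as assumptions, not gospel.

```python
# Sketch of a community canary check: if a subreddit's NSFW flag
# silently disappears, readers know the admins changed it over the
# moderators' declared intent. Endpoint and field names reflect
# Reddit's public JSON API as I understand it; treat them as
# assumptions.
import json
import urllib.request


def is_marked_nsfw(about: dict) -> bool:
    """Read the over18 flag out of an /about.json payload."""
    return bool(about.get("data", {}).get("over18", False))


def fetch_about(subreddit: str) -> dict:
    """Fetch the public about.json for a subreddit (network required)."""
    url = f"https://www.reddit.com/r/{subreddit}/about.json"
    req = urllib.request.Request(url, headers={"User-Agent": "canary-check"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires network access; subreddit name is hypothetical):
# if not is_marked_nsfw(fetch_about("somesubreddit")):
#     print("Canary tripped: NSFW marker was removed")
```

The point of separating the parsing from the fetch is that the canary logic itself is trivial; the value is in many independent people running it.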

Comment Zuck is a jerk, but the China statements were fair (Score 4, Informative) 13

A lot of things here are true at once. Zuck is a jerk, and has been his whole career; that's unchanged.

He said some harsh things about Chinese business, including that they had government-mandated technology-transfer agreements and a tremendous amount of both private and state-sponsored industrial espionage. His statements may have been harsh, but that doesn't make them false.

Those statements about China's IP theft are so commonly repeated on Slashdot that they're not really news, yet they still show up every week or two. You don't even have to look very far.

Comment More government problems. (Score 3, Insightful) 31

Currently a 255-page bill, so nothing clean and easy.

It is completely vague as to how this is to be accomplished. In each section the quote is the same: first, that it must not be possible for children to access it, followed by "a provider is only entitled to conclude that it is not possible for children to access a service, or a part of it, if there are systems or processes in place (for example, age verification, or another means of age assurance) that achieve the result that children are not normally able to access the service or that part of it."

They require record keeping, which is disturbing. There are many variations of what they need to keep, including "A provider must make and keep a written record, in an easily understandable form, of every children's access assessment." Since each "assessment" determines whether a user is an adult or a child, it needs enough detail to prove that in case the government comes looking. That's basically the creation of a masturbation log for every adult.

Naturally it doesn't state certain things. There are no data retention guidelines, so no guidance on when records can be deleted. They reference investigations with 12-month limits, so records must be kept for at least a year. There is nothing about data security, the safety of the record keeping, or consequences for distributing or failing to safeguard the data ... so we must conclude the masturbation log is free to be sold, given to marketers, and passed along to government as business records that can be subpoenaed.

Comment Of course, AI is already everywhere. (Score 1) 37

I love the use of AI in tools and technology, including the red underlined word I saw when typing this out. AI is amazing.

The confusion is that people mean a wide range of things when they say "AI".

AI is already everywhere.

My programming tools include a huge array of AI systems. The optimizers are great and filled with all kinds of AI. Autocomplete is a form of AI. Spell check is a form of AI. Validating my code syntax is a form of AI. Suggestions based on code heuristics and recognized patterns are a form of AI. The search engines I use to look up answers are AI.
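Spell check is a good example of how mundane this kind of "AI" is under the hood. Here's a minimal sketch of a suggestion engine built on Levenshtein edit distance, a classic dynamic-programming heuristic; the tiny dictionary is made up for illustration.

```python
# Minimal spell-check suggester using Levenshtein edit distance --
# the kind of everyday heuristic "AI" baked into editor tooling.
# The word list is a made-up stand-in for a real dictionary.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def suggest(word: str, dictionary: list[str]) -> str:
    """Return the dictionary word closest to the (mis)spelling."""
    return min(dictionary, key=lambda w: edit_distance(word, w))


words = ["address", "message", "moderate", "protest"]
print(suggest("adressed", words))  # closest match: "address"
```

Nothing here resembles SciFi, yet it's exactly the sort of pattern-matching heuristic that gets bundled under the "AI" label in practice.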

We're surrounded by AI. When I open maps, that's an AI. When I see a suggestion that a birthday is coming up and I should do something, that's an AI. The system that double-checks medication for drug interactions is AI. Machine learning, image classifiers, the filtered list of top search results, all of them are AI. Self-driving cars use AI. Drones use AI for balance and guidance. Modern satellites are adjusted using AI. The automatic adjustments done by my phone when I take a selfie are AI driven. Many of my vehicle's engine adjustments are done by a type of AI. My thermostat is a type of AI. These days, just about anything with a microcontroller in it is more likely than not doing something that could be classified as a type of AI.

Too many people associate the term "AI" with SciFi characters, not realizing that they've already got a bunch of AI systems running in the electronics on their body, and thousands more in the tools and technology they use daily.

Comment Re:Dangerous and prone to abuse? (Score 1) 61

Very different issues.

It isn't "simulate what a person would say" that's dangerous. It is the mix of "say what they would say and play it back in their voice, in real-time" that is especially troublesome.

We already have problems where criminals get voice segments, then call up their victims with "I've kidnapped your daughter, stay on the phone and get us money. Hang up and your daughter will be killed". Criminals already pull real voice clips from social media, but critically those recordings aren't interactive.

When the criminals can make a fully interactive model, with the voice pulled directly from social media samples, the scams can be that much worse.

Comment Re:Apple: Be Evil. (Score 2) 17

Agreed. Personally I'd hit them for at least 7 of the 10 counts based on what I know of the company.

For those who weren't following along:

  • Count 1: Federal Unlawful Monopoly Maintenance in the iOS App Distribution Market;
  • Count 2: Federal Denial of Essential Facility in the iOS App Distribution Market;
  • Count 3: Federal Unreasonable Restraints of Trade in the iOS App Distribution Market;
  • Count 4: Federal Unlawful Monopoly Maintenance in the iOS In-App Payment Processing Market;
  • Count 5: Federal Unreasonable Restraints of Trade in the iOS In-App Payment Processing Market;
  • Count 6: Federal Unlawful Tying the App Store in the iOS App Distribution Market to In-App Purchases in the iOS In-App Payment Processing Market;
  • Count 7: California Unreasonable Restraints of Trade in the iOS App Distribution Market;
  • Count 8: California Unreasonable Restraints of Trade in the iOS In-App Payment Processing Market;
  • Count 9: California Unlawful Tying the App Store in the iOS App Distribution Market to In-App Purchase in the iOS In-App Payment Processing Market; and
  • Count 10: California Unfair Competition.

The three restraint-of-trade counts are a little harder to demonstrate, and mostly depend on whether you define the market narrowly as Apple's store specifically, as Epic tried to present it, or as all stores across broad industries, which is the picture Apple tries to present. #1 is constantly mentioned in hobbyist circles, and the illegal tying counts are right there in the contracts for anyone to read.

Antitrust and monopoly laws are tricky in that they require the context of the market they're in. I don't think it is possible to pinpoint the date, but there was absolutely a transition time when Apple and Google marketplaces transitioned from being a small fish in a big pond to being the monopolistic big fish --- and for Apple the contract-enforced ONLY fish --- in the pond. When that transition happened, monopoly regulations began to apply. Apple is delusional to continue to claim they aren't in a monopolistic position.

Comment Re:No AI-specific laws - just sensible laws, pleas (Score 2) 36

They didn't say "you can't use JavaScript to deal in private data" - the laws are not tied to a specific technology. Ignore the "AI" part, ignore ChatGPT and Bard. What *effects* are important? What rights need to be protected? Because those effects and those rights need to be protected *anyway*, regardless of what technology you are protecting them from.

US law is littered with this pattern. Very often it has been a way to score points on being "tough on crime", making things that are already illegal even more illegal. Other times it is because prosecutors fear the courts won't enforce the laws the way they want, so they won't bother bringing a case without a very specific law on the books. In the end we get tons of ultra-specific laws dealing with a specific technology or a specific method of doing things, while the general form doesn't get enforced at all.

As a simple example, fraud itself is illegal under federal law. It's an easy read, and covers what needs to be covered.

Then we have specialized laws on mail fraud, wire fraud, credit card fraud, healthcare fraud, check fraud, access card fraud, insurance fraud, tax fraud, securities fraud, privileged information financial fraud, bankruptcy fraud, mortgage fraud, loan modification fraud, computer fraud, identity theft fraud, citizenship fraud, mass marketing fraud, and so many more. Each with different requirements, different consequences, different severity.

There are similar trends at the state level; they're constantly in the news. Recent years have seen it in LGBT-related laws. One example of many is the states going for bathroom bills: there are already laws about sexual assault, voyeurism, sexual exploitation, and so on, so the criminal element is already covered and prosecutors have no shortage of crimes to charge an offender with, but legislators want it to be even more illegal, not because it reduces offenses but because it targets an unpopular group. The same goes for racist laws, and, less common today, sexist laws: the thing they claim to regulate is already regulated, but the extra laws exist to punish or control a specific group rather than being good, broad, all-purpose regulations.

Comment Re:Yay Economics (Score 1) 190

Not only is there the economics angle, but there is also the legal angle. There are the economic drivers of "Let's do it to make money!" but also the legal deterrent of "if we deploy this, we're either going to prison or facing the death penalty under international law".

Right now, today, the only thing preventing fully autonomous killing machines is international treaties. International law restricts the creation of devices that act without human involvement, mostly under the names of booby traps and mines, but through a happenstance of wording it also manages to cover fully autonomous weapons.

Right now, today, the big militaries of the world already have fully autonomous weapons. The US uses fully autonomous drones that can launch, cruise around a battlefield, identify potential targets, and then wait idly until a human gives the command to fire. The drone can auto-select likely targets, identify the source of gunfire and automatically target the origin, and more. It's only because of the treaty and policy that designers added a requirement that a human push a button to authorize it.
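The "human pushes a button" requirement is really just a gating pattern in the control software. Here's a toy sketch of that human-in-the-loop gate; all class and method names are illustrative, not from any real system.

```python
# Toy sketch of the human-in-the-loop gate described above: the
# autonomous side may only *propose* targets, and nothing can be
# engaged until a person explicitly authorizes each one. Names are
# hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class FireControl:
    proposed: list = field(default_factory=list)    # machine-selected
    authorized: list = field(default_factory=list)  # human-approved

    def propose(self, target: str) -> None:
        """Autonomy may only queue a target, never act on it."""
        self.proposed.append(target)

    def human_authorize(self, target: str) -> bool:
        """The one step policy reserves for a person."""
        if target in self.proposed:
            self.proposed.remove(target)
            self.authorized.append(target)
            return True
        return False

    def engage(self, target: str) -> bool:
        """Acting is possible only after explicit human sign-off."""
        return target in self.authorized


fc = FireControl()
fc.propose("T1")
print(fc.engage("T1"))   # False: no human authorization yet
fc.human_authorize("T1")
print(fc.engage("T1"))   # True only after the human step
```

Which is exactly the point: removing that gate is a one-line change, so the restraint is policy, not technology.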

Right now, today, researchers have smart bullets and smart turrets. There are platforms that can scan a crowd, pick a face out of it, and target that person. The targeting can serve social media or a police state just as easily as it can point a gun. And we already have smart bullets, basically extremely miniaturized versions of guided missiles, that can change direction mid-flight to hit a dodging target and even turn corners to reach it. The only thing stopping all this right now is that the systems are designed with a human step: someone has to hit the button.

We could already build the things people fear, and realistically, in the lab they already exist. They aren't mass produced and placed on the battlefield because humans have decided not to do it. It isn't that AI will somehow find a way; the machines aren't deployed that way because militaries know it would be a war crime.

Comment Re:No weapons (Score 1) 225

We already have all that today. It is only existing international treaties that prevent it from escalating.

That is, in systems from guided missiles to smart bullets, the only human intervention is to click a button or press a trigger.

Guided missiles can pick their targets based on battlefield imagery, such as targeting the source of fire or heat signatures. They can intercept missiles, including hypersonic missiles if fired in time. The computers do all the flight calculations, steer during flight, even overcoming evasion by the oncoming missile, needing only the confirmation of a human clicking a button to authorize it.

Smart bullets can use facial recognition to pick their target out of a crowd. They can steer around moving people, follow their target, and even turn corners. All they need is a human to authorize firing.
