AI Businesses

AI Workers Seek Whistleblower Cover To Expose Emerging Threats 7

Workers at AI companies want Congress to grant them specific whistleblower protection, arguing that advancements in the technology pose threats that they can't legally expose under current law. From a report: "What people should be thinking about is the 100 ways in which these companies can lose control of these technologies," said Lawrence Lessig, a Harvard law professor who represented OpenAI employees and former employees raising issues about the company. Current dangers range from deepfake videos to algorithms that discriminate, and the technology is quickly becoming more sophisticated. Lessig called the argument that big tech companies and AI startups can police themselves naive. "If there's a risk, which there is, they're not going to take care of it," he said. "We need regulation."
This discussion has been archived. No new comments can be posted.

  • Hm (Score:4, Informative)

    by byronivs ( 1626319 ) on Wednesday November 06, 2024 @12:03PM (#64925035) Journal

    Gonna have to frame that as a liberal scourge or pay some lawmakers. Sorry kids, new rules. Labor laws are gonna be shunted to "states' rights" and we'll see results following the pattern set by the overturning of Roe v. Wade. "Chevron" is dead, that's the pathway. The king lives forever!

    • by HBI ( 10338492 )

      Larry Lessig being right about something might be a unique occurrence, and that may impact his effectiveness in this case. He'd be the last advocate I'd pick. Desperation only.

      • We've got to remember that this is old news. I don't know this guy, don't care. But "We need regulation." is simply adorable. I almost want to take him home and feed him and call him Larry The Harvard Guy. A-woozah. Wooza wu! Widdlehawardguy!

  • It's the same old story since way back. Some people are doing things that, at the time, are morally or otherwise questionable, and they don't like it being exposed.

    What they're afraid of is statistics that don't fit a narrative. If it's said, 'In this area, 7 out of 10 people matching X do X', it would look bad. It would be correct, but people don't like the reality. I won't get into more complicated nuances, but it applies to everything, including economics. Politicians and people who will make a lot of money don't w

  • I am particularly entertained by the drip of rumblings from unskilled people hired to do "safety" work who then go off the rails and sound the alarms (all of them) when, surprise, ZOMG, they find the things they were looking for.

    I'll believe these endless pushes are anything other than "power seeking" when they begin to advocate for something that would constrain and disadvantage them, rather than always pushing to further aggregate power and enrich corporations. It really is quite strange none of the safety groups ev

  • by rocket rancher ( 447670 ) <themovingfinger@gmail.com> on Wednesday November 06, 2024 @02:47PM (#64925763)

    Given the rapid advancements in AI and its potential impacts on society, it makes sense to consider specific protections for workers who raise concerns about potentially harmful deployments. Existing whistleblower protections cover areas like fraud and foreign bribery, but AI brings unique challenges that could directly affect public safety, privacy, and even democracy itself.

    A legislative shield for AI whistleblowers could follow a few basic principles:

            Protected Disclosures: Employees should be allowed to disclose concerns about AI systems that could impact public well-being without fear of retaliation. These disclosures should focus on significant issues, such as safety risks, ethical violations, or misuse of personal data, that could harm the people the AI interacts with.

            Confidentiality Exceptions: While protecting trade secrets is essential, this shield could carve out exceptions for disclosures that prioritize public interest, much like whistleblower protections for environmental or health risks. This would ensure that workers aren’t muzzled by NDAs when genuinely dangerous AI practices are involved.

            Clear Guidelines for Whistleblowers: Providing clear guidance on what qualifies as a protected disclosure would empower workers to act responsibly without the risk of guessing whether their protections apply. This clarity is essential for creating an environment where transparency around AI risks can thrive.

            Anti-Retaliation Measures: To prevent isolation and retaliation, the law should ensure whistleblowers are safeguarded from losing their livelihoods for bringing critical issues to light.

    A framework like this wouldn’t solve all issues around AI, but it could start by fostering a culture of accountability, giving voice to those on the inside who understand the stakes. Just as whistleblower protections helped address financial and environmental crises, similar protections could help AI develop more safely, in the public’s interest rather than solely in pursuit of profit.

  • ... AI workers can whistle [pinimg.com].
