178120299
submission
david.emery writes:
https://www.engadget.com/compu... leads their story with "Stop me if you've heard this one before: Microsoft is making it harder to use Chrome on Windows. The culprit? This time, it's Windows' Family Safety feature. Since early this month, the parental control measure has prevented users from opening Chrome. Strangely, no other apps or browsers appear to be affected." This bug is at least 17 days old.
I've always wondered in situations like this: which would Microsoft (in this case) prefer we believe? That Microsoft is so incompetent as to let something like this slip through its QA? Or that it's sufficiently evil to try to block a competitor? (I suppose both could be true.)
176288505
submission
david.emery writes:
The Verge reports "A team from Elon Musk’s SpaceX is visiting the Air Traffic Control Command Center in Virginia Monday to help overhaul the system in the wake of last month’s deadly air disaster in Washington, DC, US Secretary of Transportation Sean Duffy announced. The news comes after CNN reported that the Federal Aviation Administration fired hundreds of probationary employees who maintain critical air traffic control infrastructure." https://www.theverge.com/news/... The Verge also noted "And the agency itself lacked a permanent head at the time of the crash — mostly because Musk had a hand in ousting the last administrator after the FAA fined SpaceX for failing to submit safety data."
(Makes me wonder how SpaceX would approach the Air Traffic Control mission: "'Rapid unscheduled collisions' until we figure it out?")
175144045
submission
david.emery writes:
Bryan H. Choi writes an opinion piece on AI liability over on Lawfare: https://www.lawfaremedia.org/a...
To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems. ...
I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California’s AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the “developer’s duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm” (emphasis added). Although tech leaders have opposed California’s bill, courts don’t need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?
However, the author ignores the established precedent of engineering liability (i.e., who is to blame if the building falls down) and licensing, which establishes both liability and limits on it. This is an important issue for the AI industry and for engineering societies to consider.
171361867
submission
david.emery writes:
CNN reports on a wide-ranging class action lawsuit claiming Google scraped and misused data to train its AI systems. https://www.cnn.com/2023/07/11... This goes to the heart of what can be done with information that is available over the Internet.
The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.
In response to an earlier Verge report on the update, the company said its policy “has long been transparent that Google uses publicly available information from the open web to train language models for services like Google Translate. This latest update simply clarifies that newer services like Bard are also included.”
“Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”
The plaintiffs' counsel, the Clarkson Law Firm, previously filed a similar lawsuit against OpenAI.
56681069
submission
david.emery writes:
According to this story, Target's own IA/computer security staff raised concerns months before the attack: http://www.theverge.com/2014/2... (quoting a story in the Wall Street Journal).
But management allegedly "brushed them off."
This raises a more general question for the Slashdot community: How many of you have identified vulnerabilities in your company's or client's systems, only to be "brushed off"? And if the company took no action, did it ultimately suffer a breach?
34258029
submission
david.emery writes:
Julian Assange, his appeals in the United Kingdom having run out, today went to the Ecuadorian Embassy in London to request asylum in the face of his pending extradition to Sweden over rape allegations. http://news.blogs.cnn.com/2012/06/19/julian-assange-requests-asylum-in-ecuador-foreign-minister-says
8965690
submission
david.emery writes:
MacOS and iPhones that haven't been jailbroken fare pretty well (although vulnerabilities exist, there hasn't been a lot of exploitation). Apple does come in for criticism over its "time to fix" for known vulnerabilities. Jailbroken iPhones are a mess. The biggest risk to Macs is Trojan horses, often from pirated software.
8216554
submission
david.emery writes:
Sir Patrick Stewart... appeared on the Queen's New Year Honours list...