Submission Summary: 0 pending, 32 declined, 10 accepted (42 total, 23.81% accepted)

Submission + - Windows Parental Controls blocking Chrome (engadget.com)

david.emery writes: Engadget (https://www.engadget.com/compu...) leads its story with "Stop me if you've heard this one before: Microsoft is making it harder to use Chrome on Windows. The culprit? This time, it's Windows' Family Safety feature. Since early this month, the parental control measure has prevented users from opening Chrome. Strangely, no other apps or browsers appear to be affected." This bug is at least 17 days old.

I've always wondered in situations like this: Which would Microsoft (in this case) prefer we believe? That Microsoft is so incompetent as to let something like this slip through its QA? Or that it's sufficiently evil to try to block a competitor? (I suppose both could be true.)

Submission + - SpaceX employees brought in to look at FAA (theverge.com)

david.emery writes: The Verge reports: "A team from Elon Musk’s SpaceX is visiting the Air Traffic Control Command Center in Virginia Monday to help overhaul the system in the wake of last month’s deadly air disaster in Washington, DC, US Secretary of Transportation Sean Duffy announced. The news comes after CNN reported that the Federal Aviation Administration fired hundreds of probationary employees who maintain critical air traffic control infrastructure." https://www.theverge.com/news/... The Verge also noted: "And the agency itself lacked a permanent head at the time of the crash — mostly because Musk had a hand in ousting the last administrator after the FAA fined SpaceX for failing to submit safety data."

(Makes me wonder how SpaceX would approach the Air Traffic Control mission: " 'Rapid unscheduled collisions' until we figure it out?")

Submission + - "Negligence Liability for AI Developers"

david.emery writes: Bryan H. Choi writes an opinion piece on AI liability over on Lawfare (https://www.lawfaremedia.org/a...): To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems. ...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California’s AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the “developer’s duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm” (emphasis added). Although tech leaders have opposed California’s bill, courts don’t need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

However, the author ignores the established precedent of engineering liability (i.e., who is liable if the building falls down) and of professional licensing, which together establish both liability and limits on it. This is an important issue for both the AI industry and engineering societies to consider.

Submission + - Google hit by 'data theft to train AI' lawsuit (cnn.com)

david.emery writes: CNN reports on a wide-ranging class action lawsuit claiming Google scraped and misused data to train its AI systems. https://www.cnn.com/2023/07/11... This goes to the heart of what can be done with information that is available over the Internet.

The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copywritten works” to build its AI products.

In response to an earlier Verge report on the update, the company said its policy “has long been transparent that Google uses publicly available information from the open web to train language models for services like Google Translate. This latest update simply clarifies that newer services like Bard are also included.”

“Google needs to understand that ‘publicly available’ has never meant free to use for any purpose,” Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. “Our personal information and our data is our property, and it’s valuable, and nobody has the right to just take it and use it for any purpose.”

The plaintiffs' counsel, the Clarkson Law Firm, previously filed a similar lawsuit against OpenAI.

Submission + - Target's internal security team warned management

david.emery writes: According to this story, Target's own information assurance/computer security team raised concerns months before the attack: http://www.theverge.com/2014/2... (quoting a story in the Wall Street Journal). But management allegedly "brushed them off."

This raises a more general question for the Slashdot community: How many of you have identified vulnerabilities in your company's or client's systems, only to be "brushed off"? And if the company took no action, did it ultimately suffer a breach?

Submission + - Samsung's comparison of Galaxy S to iPhone (scribd.com)

david.emery writes: "In a document from the ongoing Samsung/Apple trial, provided in both English translation and the Korean original, Samsung engineers gave a detailed comparison of user interface features in their phone against the iPhone's. In almost all cases, the recommendation was to adopt the iPhone's approach.

Among other observations, this shows how much work goes into defining the Apple iPhone user experience."


Submission + - Intego issues 'Year in Mac Security' malware report (intego.com)

david.emery writes: MacOS and iPhones that haven't been jailbroken fare pretty well (although vulnerabilities exist, there hasn't been a lot of exploitation). Apple does come in for criticism over its time to fix known vulnerabilities. Jailbroken iPhones are a mess. The biggest risk to Macs is Trojan horses, often from pirated software.

Submission + - Sir Patrick Stewart (cnn.com)

david.emery writes: SIR Patrick Stewart... appeared on the Queen's New Year's Honours list...

Submission + - Washington Post on "5 years of WinXP"

david.emery writes: "In an article in the Washington Post entitled 'If Only We Knew Then What We Know Now About Windows XP,' a Post technology columnist examines the five-year legacy of Windows XP. The article opens, "Windows XP is turning five years old, but will anybody want to celebrate the occasion?" This is (IMHO) a very well-reasoned critique of WinXP, although it does fail to credit XP as being markedly better than its predecessors."
