Security

Microsoft Appoints Deputy CISO For Europe To Reassure European IT leaders (csoonline.com) 19

Microsoft has appointed a Deputy CISO for Europe to address growing regulatory pressure and reassure EU leaders about its cybersecurity commitment. "The move also highlights strong fears from European IT execs and government officials that the Trump administration may exert significant influence on cybersecurity companies," reports CSO Online. From the report: Who that Deputy CISO will ultimately be is unclear. Wednesday's statement simply said that Microsoft CISO Igor Tsyganskiy is "appointing a new Deputy CISO for Europe as part of the Microsoft Cybersecurity Governance Council," but the phrasing made it unclear when that would happen. However, Tsyganskiy made a separate announcement on LinkedIn that he has given the role to current Deputy CISO Ann Johnson. But he then said that Johnson, who is based at Microsoft's head office in Redmond, Washington, will hold that post "temporarily."

In his LinkedIn post, Tsyganskiy explained that the Cybersecurity Governance Council, which was created in 2024, consists of "our Global CISO and Deputy Chief Information Security Officers (Deputy CISOs) representing each of our technology services. This Council oversees the company's cyber risks, defenses, and compliance across regions and domains." "The Deputy CISO for Europe will be accountable for compliance with current and emerging cybersecurity regulations in Europe, including the Digital Operational Resilience Act (DORA), the NIS 2 Directive, and the Cyber Resilience Act (CRA)," Tsyganskiy wrote. "These laws will prove transformative not only in EU markets, but worldwide, and Microsoft is actively engaged in preparing for what lies ahead."
Microsoft said in Wednesday's statement: "the appointment of a Deputy CISO for Europe reflects the importance and global influence of EU cybersecurity regulations and the company's commitment to meeting and exceeding those expectations to prioritize cybersecurity across the region. This new position will report directly to Microsoft's CISO."

Michela Menting, France-based digital security research director at ABI Research, said when she heard on Wednesday that Microsoft was creating such a role, "I was mostly surprised that they don't already have one."

"GDPR has been in place for quite some time now and the fact they are only now putting in a European deputy CISO is concerning," Menting added. "They are playing catch up."
AI

Police Departments Are Turning To AI To Sift Through Unreviewed Body-Cam Footage (propublica.org) 40

An anonymous reader quotes a report from ProPublica: Over the last decade, police departments across the U.S. have spent millions of dollars equipping their officers with body-worn cameras that record what happens as they go about their work. Everything from traffic stops to welfare checks to responses to active shooters is now documented on video. The cameras were pitched by national and local law enforcement authorities as a tool for building public trust between police and their communities in the wake of police killings of civilians like Michael Brown, an 18-year-old Black teenager killed in Ferguson, Missouri, in 2014. Video has the potential not only to get to the truth when someone is injured or killed by police, but also to allow systematic reviews of officer behavior to prevent deaths by flagging troublesome officers for supervisors or helping identify real-world examples of effective and destructive behaviors to use for training. But a series of ProPublica stories has shown that a decade on, those promises of transparency and accountability have not been realized.

One challenge: The sheer amount of video captured using body-worn cameras means few agencies have the resources to fully examine it. Most of what is recorded is simply stored away, never seen by anyone. Axon, the nation's largest provider of police cameras and of cloud storage for the video they capture, has a database of footage that has grown from around 6 terabytes in 2016 to more than 100 petabytes today. That's enough to hold more than 5,000 years of high-definition video, or 25 million copies of last year's blockbuster movie "Barbie." "In any community, body-worn camera footage is the largest source of data on police-community interactions. Almost nothing is done with it," said Jonathan Wender, a former police officer who heads Polis Solutions, one of a growing group of companies and researchers offering analytic tools powered by artificial intelligence to help tackle that data problem.
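As a rough sanity check on those figures, the 100-petabyte claim does work out to roughly 5,000 years of continuous footage. Note the decimal definition of a petabyte and the ~5 Mbps bitrate below are assumptions of this sketch, not numbers from the report:

```python
# Back-of-the-envelope check: assumes 100 PB = 100 * 10^15 bytes and a
# ~5 Mbps bitrate for high-definition bodycam video (both are assumptions).
PETABYTE = 10**15
total_bytes = 100 * PETABYTE

hd_bitrate_bps = 5_000_000                  # ~5 Mbps, typical 1080p footage
seconds_per_year = 365 * 24 * 3600
bytes_per_year = hd_bitrate_bps / 8 * seconds_per_year

years_of_video = total_bytes / bytes_per_year
print(f"~{years_of_video:,.0f} years of continuous HD video")
```

At that assumed bitrate the archive holds a little over 5,000 years of video, consistent with the figure quoted in the story.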

The Paterson, New Jersey, police department has made such an analytic tool a major part of its plan to overhaul its force. In March 2023, the state's attorney general took over the department after police shot and killed Najee Seabrooks, a community activist experiencing a mental health crisis who had called 911 for help. The killing sparked protests and calls for a federal investigation of the department. The attorney general appointed Isa Abbassi, formerly the New York Police Department's chief of strategic initiatives, to develop a plan for how to win back public trust. "Changes in Paterson are led through the use of technology," Abbassi said at a press conference announcing his reform plan in September. "Perhaps one of the most exciting technology announcements today is a real game changer when it comes to police accountability and professionalism." The department, Abbassi said, had contracted with Truleo, a Chicago-based software company that examines audio from bodycam videos to identify problematic officers and patterns of behavior.

For around $50,000 a year, Truleo's software allows supervisors to select from a set of specific behaviors to flag, such as when officers interrupt civilians, use profanity, use force or mute their cameras. The flags are based on data Truleo has collected on which officer behaviors result in violent escalation. Among the conclusions from Truleo's research: Officers need to explain what they are doing. "There are certain officers who don't introduce themselves, they interrupt people, and they don't give explanations. They just do a lot of command, command, command, command, command," said Anthony Tassone, Truleo's co-founder. "That officer's headed down the wrong path." For Paterson police, Truleo allows the department to "review 100% of body worn camera footage to identify risky behaviors and increase professionalism," according to its strategic overhaul plan. The software, the department said in its plan, will detect events like uses of force, pursuits, frisks and non-compliance incidents and allow supervisors to screen for both "professional and unprofessional officer language."
Around 30 police departments currently use Truleo, according to the company.

Christopher J. Schneider, a professor at Canada's Brandon University who studies the impact of emerging technology on social perceptions of police, is skeptical the AI tools will fix the problems in policing because the findings might be kept from the public just like many internal investigations. "Because it's confidential," he said, "the public are not going to know which officers are bad or have been disciplined or not been disciplined."
Social Networks

India Sets Up Panels With Veto Power Over Social Media Content Moderation (techcrunch.com) 23

India will set up one or more grievance committees with veto power to oversee content moderation decisions of social media firms, it said today, moving ahead with a proposal that has rattled Meta, Google and Twitter in the key overseas market. From a report: The panels, called Grievance Appellate Committees, will be created within three months, it said. In an amendment to the nation's new IT law that went into effect last year, the Indian government said any individual aggrieved by the social media's appointed grievance officer may appeal to the Grievance Appellate Committee, which will comprise a chairperson and two whole-time members appointed by the government. (In compliance with the IT rules, social media firms last year appointed grievance and other officers in India to hear feedback and complaints from their users.) The Grievance Appellate Committee will have the power to reverse the social media firm's decision, the government said.
Businesses

US Opens Probe Into Amazon Warehouse Fatal Collapse in Illinois (reuters.com) 129

The U.S. workplace safety watchdog is investigating the circumstances around the collapse of an Amazon.com building in Illinois during Friday night's storm, in which six workers died, an official at the U.S. Department of Labor said on Monday. From a report: The U.S. Occupational Safety and Health Administration (OSHA) has six months to complete its investigation, issue citations, and propose monetary penalties if violations of workplace safety and/or health regulations are found, Scott Allen, a U.S. Department of Labor regional director for public affairs, said via email. He added that compliance officers have been on site since Saturday. Six workers were killed when the Amazon warehouse in Edwardsville, Illinois, buckled under the force of a devastating storm, police said. A barrage of tornadoes ripped through six U.S. states, leaving a trail of death and destruction at homes and businesses stretching more than 200 miles (322 km).
AI

RAI's Certification Process Aims To Prevent AIs From Turning Into HALs (engadget.com) 71

An anonymous reader quotes a report from Engadget: [T]he Responsible Artificial Intelligence Institute (RAI) -- a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs -- hopes to offer a more standardized means of certifying that our next HAL won't murder the entire crew. In short, they want to build "the world's first independent, accredited certification program of its kind." Think of the LEED green building certification system used in construction but with AI instead. Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman and a man widely considered to be the "father" of IBM Watson, though his initial inspiration came even further back.

Certifications are awarded in four levels -- basic, silver, gold, and platinum (sorry, no bronze) -- based on the AI's scores along the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the base certification, 70 points for silver and so on, up to 90 points-plus for platinum status. [Mark Rolston, founder and CCO of argodesign] notes that design analysis will play an outsized role in the certification process. "Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they're constructing that AI within their overall business," he said. "And that requires a level of design analysis, both on the technical front and in terms of how they're interfacing with their users, which is the domain of design."

RAI expects to find (and in some cases has already found) a number of willing entities from government, academia, enterprise corporations, or technology vendors for its services, though the two are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI will eventually evolve into a universalized certification system for AI. He argues it will help accelerate the development of future systems by eliminating much of the uncertainty and liability exposure today's developers -- and their harried compliance officers -- face while building public trust in the brand. "We're using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law," Saxena said. "We see ourselves as the 'do tank' that can operationalize those concepts and those think tanks' work."

Privacy

Police Use of Facial Recognition Violates Human Rights, UK Court Rules (arstechnica.com) 58

An appeals court ruled today that police use of facial recognition technology in the UK has "fundamental deficiencies" and violates several laws. Ars Technica reports: South Wales Police began using automated facial recognition technology on a trial basis in 2017, deploying a system called AFR Locate overtly at several dozen major events such as soccer matches. Police matched the scans against watchlists of known individuals to identify persons who were wanted by the police, had open warrants against them, or were in some other way persons of interest. In 2019, Cardiff resident Ed Bridges filed suit against the police, alleging that having his face scanned in 2017 and 2018 was a violation of his legal rights. Although he was backed by UK civil rights organization Liberty, Bridges lost his suit in 2019, but the Court of Appeal today overturned that ruling, finding that the South Wales Police facial recognition program was unlawful.

"Too much discretion is currently left to individual police officers," the court ruled. "It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed." The police did not sufficiently investigate if the software in use exhibited race or gender bias, the court added. The South Wales Police in 2018 released data admitting that about 2,300 of nearly 2,500 matches -- roughly 92 percent -- the software made at an event in 2017 were false positives. The ruling did not completely ban the use of facial recognition tech inside the UK, but it does narrow the scope of what is permissible and what law enforcement agencies have to do to be in compliance with human rights law. Other police inside the UK who deploy facial recognition technology will have to meet the standard set by today's ruling. That includes the Metropolitan Police in London, who deployed a similar type of system earlier this year.
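The false-positive figure is easy to verify from the numbers the force released (2,300 false positives out of nearly 2,500 matches):

```python
# Verify the ~92% false-positive rate cited from the 2018 data release.
false_positives = 2_300
total_matches = 2_500
rate = false_positives / total_matches
print(f"{rate:.0%} of matches were false positives")  # → 92%
```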

Businesses

51 Tech CEOs Send Open Letter To Congress Asking For a Federal Data Privacy Law (zdnet.com) 35

The chief executive officers (CEOs) of 51 tech companies have signed and sent an open letter to Congress leaders today, asking for a federal law on user data privacy to supersede the rising number of privacy laws that are cropping up at the state level. From a report: The open letter was sent on behalf of Business Roundtable, an association made up of the CEOs of America's largest companies. The CEOs of Amazon, AT&T, Dell, IBM, Qualcomm, SAP, Salesforce, Visa, Mastercard, JP Morgan Chase, State Farm, and Walmart are just some of the execs who put their name on the dotted line. CEOs blamed a patchwork of differing privacy regulations that are currently being passed in multiple US states, and by several US agencies, as one of the reasons why consumer privacy is a mess in the US.

This patchwork of privacy regulations is creating problems for their companies, which have to comply with an ever-increasing number of laws across different states and jurisdictions. Instead, the 51 CEOs would like one law that governs all user privacy and data protection across the US, which would simplify product design, compliance, and data management. "There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers," the open letter said.

Businesses

FTC To Hold Facebook CEO Mark Zuckerberg Liable For Any Future Privacy Violations (npr.org) 60

Facebook CEO Mark Zuckerberg will have to personally answer to federal regulators under an agreement to settle a privacy case with the Federal Trade Commission that includes a $5 billion penalty for the giant social media company, the agency announced Wednesday. From a report: Separately, Facebook will pay $100 million to settle a case with the Securities and Exchange Commission for making misleading disclosures about the risk that users' data would be misused, the SEC said. Under the FTC agreement, Zuckerberg will be required to submit quarterly compliance reports directly to the federal regulators and to Facebook's board of directors. If the Facebook co-founder or "designated compliance officers" violate the agreement, they could be subject to civil and criminal penalties, the FTC said.

"There's no way that the CEO can bury his head in the sand," James Kohm, head of the FTC's enforcement unit, told NPR. "There's no ostrich defense." According to FTC investigators, Facebook violated the terms of its 2011 settlement with the agency, in which it promised to protect user data from broad sharing with third-party apps. The company also committed new violations, they said. Kohm described two major incidents in which Facebook effectively lied to users. First, the company solicited phone numbers, saying they were being collected to verify users' identity if a password needed to be reset. Millions of people trusted the company, and then Facebook took those phone numbers and used them not just for security, but also for advertising purposes, the FTC said.

Businesses

US Reaches Deal To Keep Chinese Telecom ZTE in Business (reuters.com) 104

The Trump administration told lawmakers the U.S. government has reached a deal to put Chinese telecommunications company ZTE Corp back in business, a senior congressional aide said on Friday. From a report: The deal, communicated to officials on Capitol Hill by the Commerce Department, requires ZTE to pay a substantial fine, place U.S. compliance officers at the company and change its management team, the aide said. The Commerce Department would then lift an order preventing ZTE from buying U.S. products.

ZTE was banned in April from buying U.S. technology components for seven years for breaking an agreement reached after it violated U.S. sanctions against Iran and North Korea. The Commerce Department decision would allow it to resume business with U.S. companies, including chipmaker Qualcomm Inc.

AI

In Banking, 70% of Front-Office Jobs Will Be Dislocated By AI (americanbanker.com) 138

An anonymous reader shares a report: Some bankers and observers have suggested that only the boring parts of jobs, drudgery like data entry and filling out forms, will disappear so the humans will be able to focus on more interesting tasks, and that no actual jobs will be lost. Bank employees themselves seem to think this. In an Accenture survey released last week of 1,300 nonexecutive bank employees, 67% said they believe AI will improve their work-life balance, and 57% expect it will expand their career prospects.

But Autonomous Research also issued a report last week that estimated that in the U.S. alone, 2.5 million financial services employees will be "exposed" to AI technologies in the front, middle and back office -- 1.2 million working in banking and lending, 460,000 in investment management, and 865,000 in insurance. "These functions will see 20-40% productivity gains, or unemployment, depending on your vantage point," the report stated. About $1 trillion in costs will be exposed to AI transformation in financial services sectors by 2030, according to the report; $450 billion of this would be in banking. In banking, 70% of front-office jobs will be dislocated by AI, the researchers say: 485,000 tellers, 219,000 customer service representatives, and 174,000 loan interviewers and clerks. They will be replaced by chatbots, voice assistants and automated authentication and biometric technology.
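The front-office breakdown above can be cross-checked by summing the cited categories; dividing that sum by the 70% dislocation figure also implies a front-office base of roughly 1.25 million jobs. That implied base is an inference from the quoted numbers, not a figure given in the report:

```python
# Sum the cited job categories and back out the implied front-office base.
dislocated = {
    "tellers": 485_000,
    "customer service representatives": 219_000,
    "loan interviewers and clerks": 174_000,
}
total_dislocated = sum(dislocated.values())       # 878,000 jobs
implied_front_office = total_dislocated / 0.70    # if that total is the 70%
print(f"{total_dislocated:,} jobs dislocated, "
      f"~{implied_front_office:,.0f} implied front-office jobs")
```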

And 96,000 financial managers and 13,000 compliance officers will be laid off as AI-based anti-money-laundering, anti-fraud, compliance and monitoring software fills in. Another 250,000 loan officers will lose their jobs to AI-based credit underwriting and smart contracts technology.

Security

Most Healthcare Managers Admit Their IT Systems Have Been Compromised 122

Lucas123 writes: Eighty-one percent of healthcare IT managers say their organizations have been compromised by at least one malware, botnet or other kind of cyber attack during the past two years, and only half of those managers feel that they are adequately prepared to prevent future attacks, according to a new survey by KPMG. The KPMG survey polled 223 CIOs, CTOs, chief security officers and chief compliance officers at healthcare providers and health plans, and found 65% indicated malware was the most frequently reported line of attack during the past 12 to 24 months. Additionally, those surveyed indicated the areas with the greatest vulnerabilities within their organization include external attackers (65%), sharing data with third parties (48%), employee breaches (35%), wireless computing (35%) and inadequate firewalls (27%). Top among the reasons healthcare facilities face increased risk was the adoption of digital patient records and the automation of clinical systems.
Debian

Interviews: Bruce Perens Answers Your Questions 224

A while ago you had the chance to ask programmer and open source advocate Bruce Perens about the future of open source, its role in government, and a number of other questions. Below you'll find his answers and an update on what he's doing now.
Government

Carl Malamud Answers: Goading the Government To Make Public Data Public 21

You asked Carl Malamud about his experiences and hopes in the gargantuan project he's undertaken to prod the U.S. government into scanning archived documents, and to make public access (rather than availability only through special dispensation) the default for newly created, timely government data. (Malamud points out that if you have comments on what the government should be focusing on preserving, and how they should go about it, the National Archives would like to read them.) Below find answers with a mix of heartening and disheartening information about how the vast project is progressing.
