Television

FAA Revokes Certificates of Two Pilots Involved in Plane-Swapping Attempt (cbs8.com) 84

Whatever happened to those two pilots who attempted to swap planes in mid-air — skydiving from one to the other while the planes slowly tumbled toward the desert 65 miles southeast of Phoenix?

One pilot successfully reached the other plane — but the other pilot didn't, parachuting safely to the ground instead. "All of our safety protocols worked," the first pilot said triumphantly in a documentary streamed on Hulu. Er, but what about that second plane, slowly tumbling toward the ground without a pilot? It fell 14,000 feet, landing "nose first" (according to footage from a local newscast) — though its descent was also slowed by a parachute. (Both planes also had a specially-engineered braking system to slow their fall so the skydiving pilots could overtake them.) The stunt was sponsored by Red Bull.

Both pilots had previously conducted more than 20,000 skydives — "but there's a problem," that local newscast pointed out. "The FAA says it had denied Red Bull permission to attempt the plane swap because it would not be in the public's interest." So now both pilots — who'd had "commercial pilot certificates" from America's Federal Aviation Administration — have had their certificates revoked.

The Associated Press reports: In a May 10 emergency order, the FAA cites the two pilots, Luke Aikins and Andrew Farrington, and describes their actions as "careless and reckless." Aikins also faces a proposed $4,932 fine from the agency....

Aikins had petitioned for an exemption from the rule that pilots must be at the helm with safety belts fastened at all times. He argued the stunt would "be in the public interest because it would promote aviation in science, technology, engineering and math."

While both pilots must surrender their certificates immediately, there is an appeal process.

Aikins had shared a statement on Instagram after the stunt, saying he made the "personal decision to move forward with the plane swap" despite the lack of the FAA exemption.

"I regret not sharing this information with my team and those who supported me."

"I am now turning my attention to cooperatively working transparently with the regulatory authorities as we review the planning and execution."
Piracy

US Copyright Office Seeks Input On Mandatory DMCA 'Upload Filters' (torrentfreak.com) 83

An anonymous reader quotes a report from TorrentFreak: The U.S. Copyright Office has launched a public consultation to evaluate whether it's wise to make certain technical protection measures mandatory under the DMCA. The Office hopes to hear from all relevant stakeholders and the public at large in what may become a de facto review of the recently introduced SMART Copyright Act. [...] Following repeated nudges from Senators Thom Tillis and Patrick Leahy, the Copyright Office started looking into automated tools that online services can use to ensure that pirated content can't be easily reuploaded. This "takedown and staydown" approach relies on technical protection tools, which include upload filters. This is a sensitive subject that previously generated quite a bit of pushback when the EU drafted its Copyright Directive. To gauge the various options and viewpoints, the Copyright Office launched a consultation last year, which triggered a wave of objections and opposition.

Last week, the Office followed up with yet another consultation, asking for input on shortcomings in the current DMCA legislation and what alternatives could help to improve things. As things stand, online services are allowed to implement their own upload filters, which many do. Scanning uploads for potentially copyright-infringing content isn't mandatory but that could change in the future. The consultation outline mentions several potential changes to the DMCA's Section 512, such as online services losing their safe harbor protection if they fail to implement specific "standard technical measures" (STMs). "Is the loss of the section 512 safe harbors an appropriate remedy for interfering with or failing to accommodate STMs?" the Copyright Office asks. "Are there other obligations concerning STMs that ought to be required of internet service providers?" the list of questions continues.
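The matching step at the heart of such "staydown" filters can be sketched in a few lines. This toy version (all names hypothetical) blocks exact re-uploads by content hash; production filters use perceptual fingerprints that survive re-encoding and cropping, which exact hashing cannot do — part of why mandating specific STMs is so contentious:

```python
import hashlib

class UploadFilter:
    """Toy 'takedown and staydown' filter: once content is taken down,
    byte-identical re-uploads are blocked automatically."""

    def __init__(self):
        self.blocklist = set()  # hashes of taken-down content

    def take_down(self, content: bytes) -> None:
        self.blocklist.add(hashlib.sha256(content).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        return hashlib.sha256(content).hexdigest() not in self.blocklist

f = UploadFilter()
pirated = b"infringing video bytes"
f.take_down(pirated)
print(f.allow_upload(pirated))           # False: exact re-upload blocked
print(f.allow_upload(b"original work"))  # True: unrelated upload passes
```

Note that even a one-byte change to the file defeats this exact-match scheme, which is why real systems rely on fuzzier fingerprinting — and why defining a "standard technical measure" in law is harder than it sounds.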

Stakeholders are asked to share their views on these matters. While it is uncertain whether any measures will be made mandatory, the Copyright Office is already looking ahead. For example, who gets to decide what STMs will be mandatory, and how would the rulemaking process work? "What entity or entities would be best positioned to administer such a rulemaking? What should be the frequency of such a rulemaking? What would be the benefits of such a rulemaking? What would be the drawbacks of such a rulemaking?"

China

Pentagon's China Warning Prompts Calls To Vet US Funding of Startups (wsj.com) 20

Congress may soon require government agencies to vet tech startups seeking federal funding, after a Defense Department study found China is exploiting a popular program that funds innovation among small American companies. From a report: The study, which was viewed by The Wall Street Journal, found China is using state-sponsored methods to target companies that have received Pentagon funding from the Small Business Innovation Research program. The SBIR program for decades has sought to promote innovation through a competitive U.S. government award process.

The April 2021 report, which has been circulating among lawmakers on Capitol Hill, details eight case studies it says have "national and economic security implications." The studies include examples of program participants who dissolve their American companies, join Chinese government talent programs and continue their work at institutions that support the People's Liberation Army, the armed wing of the Communist Party. The report also documents instances of SBIR recipients taking venture-capital money from Chinese state-owned firms and of working with Chinese entities that support the country's defense industry. The report concludes that the SBIR program needs a due-diligence process to identify entities of potential concern that would then receive a more detailed review.

Medicine

The Gene-Edited Pig Heart Given To a Dying Patient Was Infected With a Pig Virus (technologyreview.com) 45

An anonymous reader quotes a report from MIT Technology Review: The pig heart transplanted into an American patient earlier this year in a landmark operation carried a porcine virus that may have derailed the experiment and contributed to the death of the recipient, David Bennett, two months later, say transplant specialists. [...] In a statement released by the university in March, a spokesperson said there was "no obvious cause identified at the time of his death" and that a full report was pending. Now MIT Technology Review has learned that Bennett's heart was affected by porcine cytomegalovirus, a preventable infection that is linked to devastating effects on transplants.

The presence of the pig virus and the desperate efforts to defeat it were described by Bartley Griffith, the surgeon who performed the transplant, during a webinar streamed online by the American Society of Transplantation on April 20. The issue is now a subject of wide discussion among specialists, who think the infection was a potential contributor to Bennett's death and a possible reason why the heart did not last longer. The heart swap in Maryland was a major test of xenotransplantation, the process of moving tissues between species. But because the special pigs raised to provide organs are supposed to be virus-free, it now appears that the experiment was compromised by an unforced error. The biotechnology company that raised and engineered the pigs, Revivicor, declined to comment and has made no public statement about the virus.
"It was surprising. That pig is supposed to be clean of all pig pathogens, and this is a significant one," says Mike Curtis, CEO of eGenesis, a competing company that is also breeding pigs for transplant organs. "Without the virus, would Mr. Bennett have lived? We don't know, but the infection didn't help. It likely contributed to the failure."
Google

Google Overhauls Performance Review System After Employee Criticism (theinformation.com) 29

Google is scrapping a time-consuming, twice-a-year staff performance review process in an effort to improve morale and reduce the time employees spend preparing the assessments, The Information reported Wednesday, citing people familiar with the changes. From the report: CEO Sundar Pichai told staff Wednesday that the new program, which will take place only once a year, aims to give more employees a sense of accomplishment and acknowledge that "most Googlers are doing great work." The new system, which also creates an easier path to promotions, came after only 53% of Googlers said in surveys that the current system is "time well spent," Pichai said. The change makes Google the latest Silicon Valley company to switch to less-frequent reviews. Meta last year said it would conduct performance reviews once per year rather than twice. Google has for years conducted performance reviews twice a year through a process that required extensive preparation from employees and managers. Now, the review process will happen once a year and staff won't have to prepare packets of information ahead of time, the company told employees. The company will still consider promotions twice a year.
The Courts

16 States, Several Environmental Groups Sue USPS Over Purchase of Gas-Guzzling Mail Trucks (arstechnica.com) 209

An anonymous reader quotes a report from Ars Technica: The US Postal Service is facing lawsuits from 16 states and several environmental groups challenging its decision to buy tens of thousands of gasoline-powered delivery vehicles instead of electric vehicles. As previously reported, the Environmental Protection Agency says the gas-powered trucks being ordered by the USPS "are expected to achieve only 8.6 miles per gallon (mpg), barely improving over the decades-old long-life vehicles that achieve 8.2 mpg." The USPS countered that the vehicles get 14.7 mpg when air conditioning isn't being used and that the trucks' size will make it possible to deliver the same amount of mail in fewer trips. The USPS plan is to buy 50,000 to 165,000 vehicles over 10 years. Of those, at least 10 percent are slated to be battery-electric vehicles (BEV). [...]
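The dueling mpg figures invite a quick back-of-the-envelope comparison. Assuming a hypothetical 10,000 miles per truck per year (an illustrative figure, not a USPS number), the move from 8.2 to 8.6 mpg saves under 5% of fuel per truck:

```python
# Illustrative only: annual mileage is an assumption, not a USPS figure.
annual_miles = 10_000                 # hypothetical miles per truck per year
old_mpg, new_mpg = 8.2, 8.6          # EPA's figures for old and new trucks

gallons_old = annual_miles / old_mpg
gallons_new = annual_miles / new_mpg
saved = gallons_old - gallons_new
pct = (1 - gallons_new / gallons_old) * 100
print(f"old: {gallons_old:.0f} gal, new: {gallons_new:.0f} gal, "
      f"saved: {saved:.0f} gal (~{pct:.1f}%)")
```

The same arithmetic with the USPS's contested 14.7 mpg (air conditioning off) figure would show a much larger saving, which is why the two sides' choice of baseline matters so much.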

A lawsuit filed by California and 15 other states on Thursday said the USPS failed "to follow a process mandated by the National Environmental Policy Act (NEPA)," continuing: "Instead, the Postal Service first chose a manufacturer with minimal experience in producing electric vehicles, signed a contract, and made a substantial down payment for new vehicles. Only then did the Postal Service publish a cursory environmental review to justify the decision to replace 90 percent of its delivery fleet with fossil-fuel-powered, internal combustion engine vehicles, despite other available, environmentally preferable alternatives. In doing so, the Postal Service failed to comply with even the most basic requirements of NEPA."

The lawsuit seeks an injunction forcing the USPS to stop the vehicle purchases "until it has complied with NEPA." It was filed against the USPS and Postmaster General Louis DeJoy, who was appointed by the USPS Board of Governors in 2020 under then-President Donald Trump. All 16 states involved in the lawsuit have Democratic attorneys general. They allege that the USPS "violated well-established legal precedent prohibiting 'an irreversible and irretrievable commitment of resources' before completing the NEPA process by signing contracts with a defense company (Oshkosh Defense, LLC) to procure vehicles six months before even releasing its draft environmental review and a year prior to issuing the Final Environmental Impact Statement ('Final EIS') and Record of Decision." The states also claim the USPS failed to consider and evaluate reasonable alternatives. "Specifically, the Postal Service did not properly evaluate several environmental impacts of its action, including air quality, environmental justice, and climate harms, by simply assuming that any upgrade to its vehicle fleet would have positive impacts on the environment," the complaint said. States also alleged the USPS "failed to ensure the scientific integrity of its analysis by relying on unfounded assumptions regarding the costs and performance of electric vehicles, infrastructure, and gas prices, and refusing to identify the source of the data relied upon in the Final EIS."
"The Postal Service conducted a robust and thorough review and fully complied with all of our obligations under NEPA," a USPS spokesperson told Ars.

The statement continues: "The Postal Service is fully committed to the inclusion of electric vehicles as a significant part of our delivery fleet even though the investment will cost more than an internal combustion engine vehicle. That said, as we have stated repeatedly, we must make fiscally prudent decisions in the needed introduction of a new vehicle fleet. We will continue to look for opportunities to increase the electrification of our delivery fleet in a responsible manner, consistent with our operating strategy, the deployment of appropriate infrastructure, and our financial condition, which we expect to continue to improve as we pursue our plan."
Android

Android's App Store Privacy Section Starts Rolling Out Today (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: Following in the footsteps of iOS 14, Google is rolling out an app privacy section to the Play Store on Tuesday. When you look up an app on the Play Store, alongside sections like "About this app" and "ratings and reviews," there will be a new section called "Data privacy & security," where developers can explain what data they collect. Note that while the section will be appearing for users starting today, it might not be filled out by developers. Google's deadline for developers to provide privacy information is July 20. Even then, all of this privacy information is provided by the developer and is essentially working on the honor system.

Here's how Google describes the process to developers: "You alone are responsible for making complete and accurate declarations in your app's store listing on Google Play. Google Play reviews apps across all policy requirements; however, we cannot make determinations on behalf of the developers of how they handle user data. Only you possess all the information required to complete the Data safety form. When Google becomes aware of a discrepancy between your app behavior and your declaration, we may take appropriate action, including enforcement action."

Once the section is up and running, developers will be expected to list what data they're collecting, why they're collecting it, and who they're sharing it with. The support page features a big list of data types for elements like "location," "personal info," "financial info," "web history," "contacts," and various file types. Developers are expected to list their data security practices, including explaining if data is encrypted in transit and if users can ask for data to be deleted. There's also a spot for "Google Play's Families Policy" compliance, which is mostly just a bunch of US COPPA and EU GDPR requirements. Google says developers can also indicate if their app has "been independently validated against a global security standard."
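As a rough illustration of what such a declaration covers — the field names below are invented for the sketch, not Google's actual form schema — and of why it amounts to an honor system:

```python
# Hypothetical declaration structure, loosely mirroring the categories
# described above (location, personal info, sharing, security practices).
declaration = {
    "data_collected": {
        "location": {"purpose": "app functionality", "shared_with": []},
        "personal_info": {"purpose": "account management",
                          "shared_with": ["analytics vendor"]},
    },
    "encrypted_in_transit": True,
    "deletion_requests_supported": True,
}

# The honor system in miniature: the store can verify the form is
# complete, but not that it matches what the app actually does.
required = {"data_collected", "encrypted_in_transit",
            "deletion_requests_supported"}
print(required.issubset(declaration))  # True: complete, yet unverified
```

The gap between "the form is filled out" and "the form is accurate" is exactly what Google's enforcement language is trying to address.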

Space

Is Dark Matter Just Old Gravitons from Other Dimensions? (livescience.com) 104

"Dark matter, the elusive substance that accounts for the majority of the mass in the universe, may be made up of massive particles called gravitons that first popped into existence in the first moment after the Big Bang," writes Live Science.

"And these hypothetical particles might be cosmic refugees from extra dimensions, a new theory suggests." The researchers' calculations hint that these particles could have been created in just the right quantities to explain dark matter, which can only be "seen" through its gravitational pull on ordinary matter. "Massive gravitons are produced by collisions of ordinary particles in the early universe. This process was believed to be too rare for the massive gravitons to be dark matter candidates," study co-author Giacomo Cacciapaglia, a physicist at the University of Lyon in France, told Live Science. But in a new study published in February in the journal Physical Review Letters, Cacciapaglia, along with Korea University physicists Haiying Cai and Seung J. Lee, found that enough of these gravitons would have been made in the early universe to account for all of the dark matter we currently detect in the universe.

The gravitons, if they exist, would have a mass of less than 1 megaelectronvolt (MeV), so no more than twice the mass of an electron, the study found. This mass level is well below the scale at which the Higgs boson generates mass for ordinary matter — which is key for the model to produce enough of them to account for all the dark matter in the universe....
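The "no more than twice the mass of an electron" comparison follows directly from the electron's rest mass-energy of about 0.511 MeV:

```python
electron_mass_mev = 0.511        # electron rest mass-energy, MeV
graviton_bound_mev = 1.0         # study's upper bound on the graviton mass

ratio = graviton_bound_mev / electron_mass_mev
print(f"{ratio:.2f} electron masses")  # 1.96, i.e. just under twice
```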

The team found these hypothetical gravitons while hunting for evidence of extra dimensions, which some physicists suspect exist alongside the observed three dimensions of space and the fourth dimension, time. In the team's theory, when gravity propagates through extra dimensions, it materializes in our universe as massive gravitons. But these particles would interact only weakly with ordinary matter, and only via the force of gravity. This description is eerily similar to what we know about dark matter, which does not interact with light yet has a gravitational influence felt everywhere in the universe. This gravitational influence, for instance, is what prevents galaxies from flying apart.

"The main advantage of massive gravitons as dark matter particles is that they only interact gravitationally, hence they can escape attempts to detect their presence," Cacciapaglia said.

AI

California Suggests Taking Aim At AI-Powered Hiring Software (theregister.com) 34

An anonymous reader quotes a report from The Register: A newly proposed amendment to California's hiring discrimination laws would make AI-powered employment decision-making software a source of legal liability. The proposal would make it illegal for businesses and employment agencies to use automated-decision systems to screen out applicants who are considered a protected class by the California Department of Fair Employment and Housing. Broad language, however, means the law could be easily applied to "applications or systems that may only be tangentially related to employment decisions," lawyers Brent Hamilton and Jeffrey Bosley of Davis Wright Tremaine wrote.

Automated-decision systems and algorithms, both fundamental to the law, are broadly defined in the draft, Hamilton and Bosley said. The lack of specificity means that technologies designed to aid human decision-making in small, subtle ways could end up lumped together with hiring software, as could the third-party vendors who provide the code. The proposed law also includes strict record-keeping requirements that double the retention period from two to four years and require anyone using automated-decision systems to retain all machine-learning data generated in the course of their operation and training. Vendors are on the hook for training datasets, too: "Any person who engages the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, to an employer or other covered entity must maintain records of the assessment criteria used by the automated-decision system," the proposed text says. It specifically requires a vendor to maintain records for each customer it trains models for, too.

Unintentional filtering isn't covered by the newly proposed California law, which focuses on the ways in which software can discriminate against certain types of people, unintentionally or otherwise. [...] Hamilton and Bosley suggest that California employers review their ATS and RMS software to ensure it conforms to the proposal, enhance their understanding of how the algorithms they use function, be prepared to demonstrate that the results of their process are fair, and speak with vendors to ensure they are doing what they need to do to comply. The 45-day public commentary period for the proposed changes is not yet open, meaning there's no timetable for the changes to be reviewed, amended and submitted for passage.

NASA

Secret Government Info Confirms First Known Interstellar Object On Earth, Scientists Say (vice.com) 53

An anonymous reader quotes a report from Motherboard: An object from another star system crashed into Earth in 2014, the United States Space Command (USSC) confirmed in a newly-released memo. The meteor ignited in a fireball in the skies near Papua New Guinea, the memo states, and scientists believe it possibly sprinkled interstellar debris into the South Pacific Ocean. The confirmation backs up the breakthrough discovery of the first interstellar meteor -- and, retroactively, the first known interstellar object of any kind to reach our solar system -- which was initially flagged by a pair of Harvard University researchers in a study posted on the preprint server arXiv in 2019.

Amir Siraj, a student pursuing astrophysics at Harvard who led the research, said the study has been awaiting peer review and publication for years, but has been hamstrung by the odd circumstances that arose from the sheer novelty of the find and roadblocks put up by the involvement of information classified by the U.S. government. The discovery of the meteor, which measured just a few feet wide, follows recent detections of two other interstellar objects in our solar system, known as 'Oumuamua and Comet Borisov, that were much larger and did not come into close contact with Earth.

"I get a kick out of just thinking about the fact that we have interstellar material that was delivered to Earth, and we know where it is," said Siraj, who is Director of Interstellar Object Studies at Harvard's Galileo Project, in a call. "One thing that I'm going to be checking -- and I'm already talking to people about -- is whether it is possible to search the ocean floor off the coast of Papua New Guinea and see if we can get any fragments." Siraj acknowledged that the odds of such a find are low, because any remnants of the exploded fireball probably landed in tiny amounts across a disparate region of the ocean, making it tricky to track them down. "It would be a big undertaking, but we're going to look at it in extreme depth because the possibility of getting the first piece of interstellar material is exciting enough to check this very thoroughly and talk to all the world experts on ocean expeditions to recover meteorites," he noted.
"Siraj called the multi-year process a 'whole saga' as they navigated a bureaucratic labyrinth that wound its way through Los Alamos National Laboratory, NASA, and other governmental arms, before ultimately landing at the desk of Joel Mozer, Chief Scientist of Space Operations Command at the U.S. Space Force service component of USSC," adds Motherboard.

Mozer confirmed that the object indicated "an interstellar trajectory," which was first brought to Siraj's attention last week via a tweet from a NASA scientist. He's now "renewing the effort to get the original discovery published so that the scientific community can follow up with more targeted research into the implications of the find," the report says.
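For context, a trajectory is "interstellar" when the object's heliocentric speed exceeds the Sun's escape velocity at its distance, v_esc = sqrt(2GM/r) — meaning the object is not gravitationally bound to the solar system. At Earth's distance that threshold works out to roughly 42 km/s:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # Earth-Sun distance, m

# Solar escape velocity at 1 AU: anything moving faster than this
# relative to the Sun at Earth's distance is unbound -- i.e., on an
# interstellar trajectory.
v_esc = math.sqrt(2 * G * M_SUN / AU)
print(f"escape velocity at 1 AU: {v_esc / 1000:.1f} km/s")  # ~42.1 km/s
```

Establishing that the 2014 fireball cleared this bar required the classified sensor data on its velocity, which is what made the government's confirmation necessary.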
Privacy

Deception, Exploited Workers, and Cash Handouts: How Worldcoin Recruited Its First Half a Million Test Users (technologyreview.com) 10

The startup promises a fairly distributed, cryptocurrency-based universal basic income. So far all it's done is build a biometric database from the bodies of the poor. MIT Technology Review reports: On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother. A technology company was holding some kind of "social assistance giveaway" at the local Islamic elementary school, she said, and she urged him to go. Ruswandi joined a long line of residents, mostly women, some of whom had been waiting since 6 a.m. In the pandemic-battered economy, any kind of assistance was welcome. At the front of the line, representatives of Worldcoin Indonesia were collecting emails and phone numbers, or aiming a futuristic metal orb at villagers' faces to scan their irises and other biometric data. Village officials were also on site, passing out numbered tickets to the waiting residents to help keep order. Ruswandi asked a Worldcoin representative what charity this was but learned nothing new: as his mother said, they were giving away money.

Gunungguruh was not alone in receiving a visit from Worldcoin. In villages across West Java, Indonesia -- as well as college campuses, metro stops, markets, and urban centers in two dozen countries, most of them in the developing world -- Worldcoin representatives were showing up for a day or two and collecting biometric data. In return they were known to offer everything from free cash (often local currency as well as Worldcoin tokens) to AirPods to promises of future wealth. In some cases they also made payments to local government officials. What they were not providing was much information on their real intentions. This left many, including Ruswandi, perplexed: What was Worldcoin doing with all these iris scans?

To answer that question, and better understand Worldcoin's registration and distribution process, MIT Technology Review interviewed over 35 individuals in six countries -- Indonesia, Kenya, Sudan, Ghana, Chile, and Norway -- who either worked for or on behalf of Worldcoin, had been scanned, or were unsuccessfully recruited to participate. We observed scans at a registration event in Indonesia, read conversations on social media and in mobile chat groups, and consulted reviews of Worldcoin's wallet in the Google Play and Apple stores. We interviewed Worldcoin CEO Alex Blania, and submitted to the company a detailed list of reporting findings and questions for comment. Our investigation revealed wide gaps between Worldcoin's public messaging, which focused on protecting privacy, and what users experienced. We found that the company's representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent. These practices may violate the European Union's General Data Protection Regulation (GDPR) -- a likelihood that the company's own data consent policy acknowledged and asked users to accept -- as well as local laws.

AI

EU Clears First Autonomous X-Ray-Analyzing AI (theverge.com) 21

An artificial intelligence tool that reads chest X-rays without oversight from a radiologist got regulatory clearance in the European Union last week -- a first for a fully autonomous medical imaging AI, the company, called Oxipit, said in a statement. The Verge reports: The tool, called ChestLink, scans chest X-rays and automatically sends patient reports on those that it sees as totally healthy, with no abnormalities. Any images that the tool flags as having a potential problem are sent to a radiologist for review. Most X-rays in primary care don't have any problems, so automating the process for those scans could cut down on radiologists' workloads, Oxipit said in informational materials.
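The triage logic described above can be sketched in a few lines. The threshold and interface here are assumptions for illustration; Oxipit has not published its actual decision rule:

```python
def triage(scan_findings: list, confidence_no_abnormality: float,
           autonomy_threshold: float = 0.99) -> str:
    """Toy sketch of ChestLink-style triage: only scans the model is
    highly confident are normal get an automated report; everything
    else -- any flagged finding or low confidence -- goes to a human."""
    if not scan_findings and confidence_no_abnormality >= autonomy_threshold:
        return "automated report: no abnormalities"
    return "route to radiologist"

print(triage([], 0.999))                   # clean scan, high confidence
print(triage(["possible nodule"], 0.999))  # flagged finding -> human review
print(triage([], 0.80))                    # low confidence -> human review
```

The safety case rests entirely on the first branch: the tool only acts autonomously on the easy negatives, which is why the zero "clinically relevant" errors claim concerns that branch specifically.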

The tech now has a CE mark certification in the EU, which signals that a device meets safety standards. The certification is similar to Food and Drug Administration (FDA) clearance in the United States, but they have slightly different metrics: a CE mark is less difficult to obtain, is quicker, and doesn't require as much evaluation as an FDA clearance. The FDA looks to see if a device is safe and effective and tends to ask for more information from device makers. Oxipit spokesperson Mantas Miksys told The Verge that the company plans to file with the FDA as well.

Oxipit said in a statement that ChestLink made zero "clinically relevant" errors during pilot programs at multiple locations. When it is introduced into a new setting, the company said there should first be an audit of existing imaging programs. Then, the tool should be used under supervision for a period of time before it starts working autonomously. The company said in a statement that it expects the first healthcare organizations to be using the autonomous tool by 2023.

United States

Hackers Gaining Power of Subpoena Via Fake 'Emergency Data Requests' (krebsonsecurity.com) 57

Krebs on Security reports: In the United States, when federal, state or local law enforcement agencies wish to obtain information about who owns an account at a social media firm, or what Internet addresses a specific cell phone account has used in the past, they must submit an official court-ordered warrant or subpoena. Virtually all major technology companies serving large numbers of users online have departments that routinely review and process such requests, which are typically granted as long as the proper documents are provided and the request appears to come from an email address connected to an actual police department domain name. But in certain circumstances -- such as a case involving imminent harm or death -- an investigating authority may make what's known as an Emergency Data Request (EDR), which largely bypasses any official review and does not require the requestor to supply any court-approved documents.

It is now clear that some hackers have figured out there is no quick and easy way for a company that receives one of these EDRs to know whether it is legitimate. Using their illicit access to police email systems, the hackers will send a fake EDR along with an attestation that innocent people will likely suffer greatly or die unless the requested data is provided immediately. In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR -- and potentially having someone's blood on their hands -- or possibly leaking a customer record to the wrong person. "We have a legal process to compel production of documents, and we have a streamlined legal process for police to get information from ISPs and other providers," said Mark Rasch, a former prosecutor with the U.S. Department of Justice. "And then we have this emergency process, almost like you see on [the television series] Law & Order, where they say they need certain information immediately," Rasch continued. "Providers have a streamlined process where they publish the fax or contact information for police to get emergency access to data. But there's no real mechanism defined by most Internet service providers or tech companies to test the validity of a search warrant or subpoena. And so as long as it looks right, they'll comply." To make matters more complicated, there are tens of thousands of police jurisdictions around the world -- including roughly 18,000 in the United States alone -- and all it takes for hackers to succeed is illicit access to a single police email account.
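The gap Krebs describes can be made concrete with a sketch. This is an illustration of the weak check, not any provider's actual code, and the domain names are hypothetical:

```python
POLICE_DOMAINS = {"pd.example-city.gov", "sheriff.example-county.us"}  # illustrative

def passes_review(sender_email: str, has_court_documents: bool,
                  is_emergency: bool) -> bool:
    """The review process in miniature: an EDR skips the court-document
    check entirely, so for emergencies a domain match is the whole test."""
    domain = sender_email.rsplit("@", 1)[-1]
    if domain not in POLICE_DOMAINS:
        return False
    return is_emergency or has_court_documents

# A fake EDR sent from a compromised -- but genuine -- police mailbox passes:
print(passes_review("detective@pd.example-city.gov",
                    has_court_documents=False, is_emergency=True))  # True
```

Because the sender domain is genuine when a police mailbox is compromised, the only checks that would catch the fraud are out-of-band ones -- a callback to a published agency number, or cryptographically signed requests -- which, as the article notes, most providers have no defined mechanism for.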

Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com) 48

"A respected expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum-error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.

Medicine

CDC Coding Error Led To Overcount of 72,000 COVID-19 Deaths (theguardian.com) 213

Last week, after reporting from the Guardian on mortality rates among children, the CDC corrected a "coding logic error" that had inadvertently added more than 72,000 Covid deaths of all ages to the data tracker, one of the most publicly accessible sources for Covid data. The Guardian reports: The agency briefly noted the change in a footnote, although the note did not explain how the error occurred or how long it was in effect. A total of 72,277 deaths in all age groups reported across 26 states were removed from the tracker "because CDC's algorithm was accidentally counting deaths that were not Covid-19-related," Jasmine Reed, a spokesperson for the agency, told the Guardian. The problem stemmed from two questions the CDC asks of states and jurisdictions when they report fatalities, according to a source familiar with the issue.

One data field asks if a person died "from illness/complications of illness," and the field next to this asks for the date of death. When the answer is yes, then the date of death should be provided. But a problem apparently arose if a respondent included the date of death in this field even when the answer was "no" or "unknown." The CDC's system assumed that if a date was provided, then the "no" or "unknown" answer was an error, and the system switched the answer to "yes." This resulted in an overcount of deaths due to Covid in the demographic breakdown, and the error, once discovered, was corrected last week. The CDC did not answer a question on how long the coding error was in effect.
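The flaw as described is a classic data-validation bug: a non-empty date field silently overrode an explicit "no" or "unknown" answer. A hypothetical reconstruction of the logic (the field names are invented for illustration; the CDC has not published its actual code):

```python
def buggy_covid_death_count(record):
    """Reconstruction of the flawed logic described by the Guardian:
    a filled-in date of death overrides the reported answer."""
    died_of_covid = record.get("died_from_illness")  # 'yes' / 'no' / 'unknown'
    if record.get("date_of_death"):                  # date present, so the system
        died_of_covid = "yes"                        # assumed the answer was wrong
    return died_of_covid == "yes"

def fixed_covid_death_count(record):
    """Count a death only when the answer is explicitly 'yes'."""
    return record.get("died_from_illness") == "yes"

record = {"died_from_illness": "no", "date_of_death": "2022-01-15"}
print(buggy_covid_death_count(record))   # True  -> wrongly counted
print(fixed_covid_death_count(record))   # False -> correctly excluded
```

Applied across 26 states' submissions, an override like this is enough to explain how 72,277 non-Covid deaths accumulated in the tracker.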

"Working with near real-time data in an emergency is critical to guide decision-making, but may also mean we often have incomplete information when data are first reported," said Reed. The death counts in the data tracker are "real-time and subject to change," Reed noted, while numbers from the National Center for Health Statistics, a center within the CDC, are "the most complete source of death data," despite lags in reporting, because the process includes a review of death certificates.

Government

The EPA Plans To Sunset Its Online Archive (theverge.com) 30

Come July, the EPA plans to retire the archive containing old news releases, policy changes, regulatory actions, and more. The Verge reports: The archive was never built to be a permanent repository of content, and maintaining the outdated site was no longer "cost effective," the EPA said to The Verge in an emailed statement. The EPA announced the retirement early this year, after finishing an overhaul of its main website in 2021, but says that the decision was years in the making. The agency maintains that it's abiding by federal rules for records management and that not all webpages qualify as official records that need to be preserved.

The EPA says it plans to migrate much of the information to other places. Old news releases will go to the current EPA website's page for press releases. When it comes to the rest of the content, the EPA has a process for making case-by-case decisions on what content can be deleted -- and what is relevant enough to move to the modern website. Some content might be deemed important enough to join the National Archives. The public will be able to request that content through the Freedom of Information Act.

The archive is the only comprehensive way that public information about agency policies, like fact sheets breaking down the impact of environmental legislation, and actions, like how the agency implements those laws, have been preserved, [says Gretchen Gehrke, one of the cofounders of a group called Environmental Data and Governance Initiative (EDGI) that's fighting for public access to resources like the EPA's online archives]. That makes the archive vital for understanding how regulation and enforcement have changed over the years. It also shows how the agency's understanding of an issue, like climate change, has evolved. And when the Trump administration deleted information about climate change on the EPA's website, much of it could still be found on the archive. Besides that, Gehrke says the content should just be available on principle because it's public information, paid for by taxpayer dollars.

Businesses

Google Employees Bombard Execs With Questions About Pay at Recent All-Hands Meeting (cnbc.com) 75

Google executives, facing a barrage of criticism from employees on issues related to compensation, defended the company's competitiveness at a recent all-hands meeting while acknowledging that the performance review process could change. From a report: The companywide virtual gathering earlier this month followed the release of internal survey results, which showed a growing number of staffers don't view their pay packages as fair or competitive with what they could make elsewhere. At all-hands meetings, Google CEO Sundar Pichai and other senior executives regularly read top submissions from Dory, a site where employees write questions and give a thumbs up to those they want leadership to address.

The second highest-rated question ahead of the March meeting was about the annual "Googlegeist" survey. As CNBC reported, the lowest scores from the survey, which went out to employees in January, were in the areas of compensation and execution. "Compensation-related questions showed the biggest decrease from last year, what is your understanding of why that is?" Pichai read aloud from the employee submissions. According to the survey results, only 46% of respondents said their total compensation is competitive compared to similar jobs at other companies. Bret Hill was first to respond. Hill is Google's vice president of "Total Rewards," which refers to compensation and stock packages. "There's some macroeconomic trends at play," Hill said. "It's a very competitive market and you're probably hearing anecdotal stories of colleagues getting better offers at other companies."

Entertainment

Amazon Closes $8.5 Billion Acquisition of MGM (variety.com) 57

Amazon has closed its $8.5 billion acquisition of MGM, the companies said Thursday. From a report: The pact was first announced in May and has been winding its way through the regulatory process. Per Amazon, "The storied, nearly century-old studio -- with more than 4,000 film titles, 17,000 TV episodes, 180 Academy Awards, and 100 Emmy Awards -- will complement Prime Video and Amazon Studios' work in delivering a diverse offering of entertainment choices to customers." The completion of the transaction comes two days after the Amazon-MGM deal received clearance from the European Union's antitrust regulator, which "unconditionally" approved Amazon's proposed acquisition of MGM, in part because "MGM's content cannot be considered as must-have." The European Commission, in its antitrust review, found that the overlaps between the Amazon and MGM businesses are "limited."

Patents

Open Source Zone Grinds Away At Patent Trolls (zdnet.com) 30

For the last two years, Unified Patents, an international organization of over 200 businesses, has been winning the battle against patent trolls "to keep them from stealing from the companies and organizations that actually use patents' intellectual property (IP)," writes ZDNet's Steven J. Vaughan-Nichols. "This is their story to date." From the report: Unified Patents brings the fight to the trolls. It deters patent trolls from attacking its members by making it too expensive for the troll to win. The group does this by examining troll patents and their activities in various technology sectors (Zones). The Unified Patents Open Source Software Zone (OSS Zone) is the newest of these Zones. [...] Even before OSS Zone was formally launched, Unified Patents, along with the Open Invention Network (OIN), the world's largest patent non-aggression group, launched legal cases against poor-quality PAE-owned (Patent Assertion Entities) patents. The Linux Foundation and Microsoft have also joined the OSS Zone to battle these bad patents. [...]

Together, Unified Patents and its partners use open-source software evidence as proof to establish that the trolls often don't have a case. This is done using Inter Partes Review (IPR), a legal tool created in 2012 for showing that a bad patent never should have been granted in the first place. [Linux Foundation executive director Jim Zemlin] notes, "The Patent Trial and Appeal Board (PTAB)'s discretionary rulings on IPRs have changed the landscape around NPEs." These cases take a long time to be resolved, typically 12 to 24 months. That also makes them expensive for both the OSS Zone and the trolls. Keith Bergelt, the OIN's CEO, said, "In other technology areas when patents go through the IPR process or are reexamined, there is a settlement around 20% of the time. In the OSS Zone, there are few settlements. This makes it more costly and difficult to administer, but also is difficult on the PAEs. When the success rate against their patents is over 95%, certain PAEs that would otherwise hope to settle have essentially given up on defending their patents." Still, with such a high success rate, it's worth the expense.

To date, Unified has overseen and managed 43 challenges. Of these, 12 patents were found invalid, another 23 cases have been instituted, and six are still in process. This has led to multiple settlements for Unified Patents members. These, in turn, directly benefit OIN's 3,600+ community members. For example, Accelerated Memory Tech's patent 6,513,062 was used by the troll IP Investments Group to claim that open-source Redis, which manages cache resources on the cloud, violated the patent. Since Redis itself had no money, IP Investments Group instead went after Hulu, Citrix Systems, Barracuda Networks, Kemp Technologies, and F5 Networks for their use of Redis software. IP Investments Group gave up rather than fight it out. Everyone who uses Redis wins. It's one small victory, but that's how the patent troll wars are won. And, with Unified Patents' high success rate in knocking out bad patents, slowly but surely the patent trolls are being driven back from not only open-source software but all software.

Crime

SFPD Puts Rape Victims' DNA Into Database Used To Find Criminals, DA Alleges (arstechnica.com) 132

An anonymous reader quotes a report from Ars Technica: The San Francisco Police Department's crime lab has been checking DNA collected from sexual assault victims to determine whether any of the victims committed a crime, according to District Attorney Chesa Boudin, who called for an immediate end to the alleged practice. "The crime lab attempts to identify crime suspects by searching a database of DNA evidence that contains DNA collected from rape and sexual assault victims," Boudin's office said in a press release yesterday. Boudin's release denounced the alleged "practice of using rape and sexual assault victims' DNA to attempt to subsequently incriminate them."

"Boudin said his office was made aware of the purported practice last week, after a woman's DNA collected years ago as part of a rape exam was used to link her to a recent property crime," the San Francisco Chronicle reported yesterday. The woman "was recently arrested on suspicion of a felony property crime, with police identifying her based on the rape-kit evidence she gave as a victim, Boudin said." That was the only example provided, and Boudin gave few details about the case to protect the woman's privacy. But the database may include "thousands of victims' DNA profiles, with entries over 'many, many years,' Boudin said," according to the Chronicle. "We should encourage survivors to come forward -- not collect evidence to use against them in the future. This practice treats victims like evidence, not human beings. This is legally and ethically wrong," Boudin said.

San Francisco Police Chief Bill Scott said the department will investigate and that he is "committed to ending the practice" if Boudin's allegation is accurate. But Scott also said the suspect cited by Boudin may have been identified from a different DNA database. "We will immediately begin reviewing our DNA collection practices and policies... Although I am informed of the possibility that the suspect in this case may have been identified through a DNA hit in a non-victim DNA database, I think the questions raised by our district attorney today are sufficiently concerning that I have asked my assistant chief for operations to work with our Investigations Bureau to thoroughly review the matter and report back to me and to our DA's office partners," Scott said in a statement published by KRON 4. Scott also said, "I am informed that our existing DNA collection policies have been legally vetted and conform with state and national forensic standards," but he noted that "there are many important principles for which the San Francisco Police Department stands that go beyond state and national standards." "We must never create disincentives for crime victims to cooperate with police, and if it's true that DNA collected from a rape or sexual assault victim has been used by SFPD to identify and apprehend that person as a suspect in another crime, I'm committed to ending the practice," Scott said.

Even though the alleged practice may already be illegal under California's Victims' Bill of Rights, State Senator Scott Wiener (D-San Francisco) and District 9 Supervisor Hillary Ronen are planning legislation to stop the alleged misuse of DNA.

Wiener said that "if survivors believe their DNA may end up being used against them in the future, they'll have one more reason not to participate in the rape kit process. That's why I'm working with the DA's office to address this problem through state legislation, if needed."
