Businesses

At Talkspace, Startup Culture Collides With Mental Health Concerns (nytimes.com) 19

The therapy-by-text company Talkspace -- which has raised more than $100 million from investors -- made burner phones available for fake reviews and doesn't adequately respect client privacy, former employees say. From a report: The app launched in 2014 to positive press but lukewarm customer reviews, with ratings of about three stars out of five on both the Google and Apple app stores, according to a Times analysis. Users complained about glitchy software and unresponsive therapists. In 2015 and 2016, according to four former employees, the company sought to improve its ratings: It asked workers to write positive reviews. One employee said that Talkspace's head of marketing at the time asked him to compile 100 fake reviews in a Google spreadsheet, so that employees could submit them to app stores. Mr. Lori (an ex-employee) said that Talkspace gave employees "burner" phones to help evade the app stores' techniques for detecting false reviews. "They said, 'Don't do it here. Do it at home. Give us five-star ratings because we have too many bad reviews,'" Mr. Lori said.

Mr. Reilly, the Talkspace lawyer, disputed this account, saying that employees were free to write reviews any way they liked. "We alerted employees if they were to leave a review, to do it from their personal phones -- not from the Talkspace office network, as that would cause issues with the app store," Mr. Reilly said in an emailed statement. "To be clear: We have never used fake identities or encouraged anybody to do so; there is no event involving 'burner' phones, and the idea in and of itself is nonsensical relative to the large number of reviews outstanding."

Transportation

Tesla's Touchscreen Wiper Controls Ruled Illegal In Germany (electrek.co) 420

A user shares a report from Electrek: Tesla's wiper controls through its touchscreen have been ruled illegal in Germany after someone crashed their Model 3 while using them and fought a fine and driving ban through the court system. A Tesla Model 3 driver got into an accident while using the touchscreen to adjust the speed of the automatic windshield wipers. In Model 3 and Model Y vehicles, Tesla didn't install conventional windshield wiper controls on a steering wheel stalk. Instead, the automaker detects rain through its Autopilot cameras and automatically adjusts the wiper speed based on the strength of the rainfall. If the driver wants to adjust the speed, they need to do it through the center touchscreen. The driver in Germany was adjusting those settings when he lost control of the vehicle and crashed. A local district court gave him a fine and a one-month driving ban, and that's where the problem started for Tesla. He decided to fight the punishment -- bringing the case to the Higher Regional Court (OLG). "It comes as no surprise that enlightened Germans would be the first to rule Tesla's poorly engineered cars a road hazard," adds the Slashdot reader. "Touch screen interfaces have no place in cars."

Windows

Windows 10: HOSTS File Blocking Telemetry Is Now Flagged As a Risk (bleepingcomputer.com) 159

AmiMoJo writes: Starting at the end of July, Microsoft began detecting HOSTS files that block Windows 10 telemetry servers as a 'Severe' security risk. Windows 10 users are reporting that Windows Defender has started detecting modified HOSTS files as a 'SettingsModifier:Win32/HostsFileHijack' threat. It seems that Microsoft recently updated its Microsoft Defender definitions to detect when its servers are added to the HOSTS file. Users who utilize HOSTS files to block Windows 10 telemetry suddenly found themselves seeing the HOSTS file hijack detection. Users who intentionally modify their HOSTS file can allow this 'threat,' but doing so may permit all HOSTS modifications, even malicious ones, going forward.
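For readers unfamiliar with the mechanism: a HOSTS-based telemetry block is just a list of hostnames pinned to an unroutable address. The entries below are an illustrative sketch only (the exact telemetry hostnames vary by Windows build) of the kind of lines that now trip the 'SettingsModifier:Win32/HostsFileHijack' detection:

```
# C:\Windows\System32\drivers\etc\hosts
# Illustrative entries only -- exact telemetry hostnames vary by build.
0.0.0.0 vortex.data.microsoft.com
0.0.0.0 settings-win.data.microsoft.com
0.0.0.0 telemetry.microsoft.com
```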

AI

New Imaging System Creates Pictures By Measuring Time (phys.org) 63

An anonymous reader writes: Photos and videos are usually produced by capturing photons -- the building blocks of light -- with digital sensors. For instance, digital cameras consist of millions of pixels that form images by detecting the intensity and color of the light at every point of space. 3-D images can then be generated either by positioning two or more cameras around the subject to photograph it from multiple angles, or by using streams of photons to scan the scene and reconstruct it in three dimensions. Either way, an image is only built by gathering spatial information of the scene. In a new paper published today in the journal Optica, researchers based in the U.K., Italy and the Netherlands describe an entirely new way to make animated 3-D images: by capturing temporal information about photons instead of their spatial coordinates.

Their process begins with a simple, inexpensive single-point detector tuned to act as a kind of stopwatch for photons. Unlike cameras, which measure the spatial distribution of color and intensity, the detector only records how long it takes the photons produced by a split-second pulse of laser light to bounce off each object in a given scene and reach the sensor. The further away an object is, the longer it takes each reflected photon to reach the sensor. The information about the timing of each photon reflected in the scene -- what the researchers call the temporal data -- is collected in a very simple graph.
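The distance information encoded in those timings follows from a simple round-trip relation. The sketch below illustrates the principle only (it is not the researchers' code): photon arrival times are binned into the simple graph the article describes, and each round-trip time maps to a distance of half the speed of light times the delay.

```python
# Principle of time-of-flight imaging: a single-point detector timestamps
# returning photons; distance to each reflecting surface follows from the
# round-trip time at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds):
    """Distance to a reflector given a photon's round-trip time."""
    return C * t_seconds / 2.0

def temporal_histogram(arrival_times, bin_width, n_bins):
    """Bin photon arrival times into the 'very simple graph' of timings."""
    hist = [0] * n_bins
    for t in arrival_times:
        i = int(t / bin_width)
        if 0 <= i < n_bins:
            hist[i] += 1
    return hist

# An object 3 m away returns photons after roughly 20 nanoseconds:
print(distance_from_round_trip(20e-9))  # ~2.998 meters
```

In the actual system, a trained neural network maps such histograms back to full images; the histogram itself contains no spatial coordinates at all, which is what makes the result surprising.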

Those graphs are then transformed into a 3-D image with the help of a sophisticated neural network algorithm. The researchers trained the algorithm by showing it thousands of conventional photos of the team moving and carrying objects around the lab, alongside temporal data captured by the single-point detector at the same time. Eventually, the network had learned enough about how the temporal data corresponded with the photos that it was capable of creating highly accurate images from the temporal data alone. In the proof-of-principle experiments, the team managed to construct moving images at about 10 frames per second from the temporal data, although the hardware and algorithm used have the potential to produce thousands of images per second. Currently, the neural net's ability to create images is limited to what it has been trained to pick out from the temporal data of scenes created by the researchers. However, with further training, and even by using more advanced algorithms, it could learn to visualize a varied range of scenes, widening its potential applications in real-world situations.

Science

Scientists Solve Mystery Behind Body Odor (theguardian.com) 126

An anonymous reader quotes a report from The Guardian: Researchers at the University of York traced the source of underarm odor to a particular enzyme in a certain microbe that lives in the human armpit. To prove the enzyme was the chemical culprit, the scientists transferred it to an innocent member of the underarm microbe community and noted -- to their delight -- that it too began to emanate bad smells. The work paves the way for more effective deodorants and antiperspirants, the scientists believe, and suggests that humans may have inherited the mephitic microbes from our ancient primate ancestors.

Writing in the journal Scientific Reports, the York scientists describe how they delved inside Staphylococcus hominis to learn how it made thioalcohols. They discovered an enzyme that converts Cys-Gly-3M3SH released by apocrine glands into the pungent thioalcohol, 3M3SH. The bacteria take up the molecule and eat some of it, but the rest they spit out, and that is one of the key molecules we recognize as body odor. Having discovered the "BO enzyme", the researchers confirmed its role by transferring it into Staphylococcus aureus, a common relative that normally has no role in body odor. "Just by moving the gene in, we got Staphylococcus aureus that made body odor," one of the researchers said. "Our noses are extremely good at detecting these thioalcohols at extremely low thresholds, which is why they are really important for body odor. They have a very characteristic cheesy, oniony smell that you would recognize. They are incredibly pungent."

Facebook

Facebook Advertising Boycott Targets Misinformation and Hate Speech (cnet.com) 95

Two major outdoor-goods retailers "have joined a boycott of Facebook after six civil rights groups called on businesses to stop advertising on Facebook in July," reports CNET, "to push the social network to do more to combat hate speech and misinformation..." The moves by the high-profile brands [North Face and REI] suggest the ad boycott, unveiled Wednesday, is beginning to gain traction. In addition to the two retailers, digital-advertising firm 360i urged its clients in an email to stop purchasing ads on Facebook in July, The Wall Street Journal reported on Wednesday. The Anti-Defamation League, the NAACP, Sleeping Giants, Colors of Change, Free Press and Common Sense say that boycotting advertising on Facebook will put pressure on the platform to use its $70 billion in annual advertising revenue to support people who are targets of racism and hate and to increase safety for private groups on the site.

"We have long seen how Facebook has allowed some of the worst elements of society into our homes and our lives. When this hate spreads online it causes tremendous harm and also becomes permissible offline," Anti-Defamation League CEO Jonathan Greenblatt said in a press release announcing the campaign. "Our organizations have tried individually and collectively to push Facebook to make their platforms safer, but they have repeatedly failed to take meaningful action. We hope this campaign finally shows Facebook how much their users and their advertisers want them to make serious changes for the better."

In a press call Wednesday, Facebook Vice President of Global Affairs and Communications Nick Clegg said the company doesn't allow hate speech on its platform. Facebook removed nearly 10 million posts for violating its rules against hate speech in the last quarter, he said, and most were taken down before users reported them. The social network relies on a mix of human reviewers and technology to moderate content, but detecting hate speech can be challenging because machines have to understand the cultural context of words.

"Of course, we would like to do even better than that," Clegg said. "We need to do more. We need to move faster, but we are making significant progress."

Among the groups' demands: removing all ads that contain hate speech -- or misinformation.

Privacy

Incognito Mode Detection Still Works in Chrome Despite Promise To Fix (zdnet.com) 40

Websites are still capable of detecting when a visitor is using Chrome's incognito (private browsing) mode, despite Google's efforts last year to disrupt the practice. From a report: It is still possible to detect incognito mode in Chrome and in all the other Chromium-based browsers, such as Edge, Opera, Vivaldi, and Brave, which share the core of Chrome's codebase. Furthermore, developers have taken the scripts shared last year and expanded support to non-Chrome browsers, such as Firefox and Safari, allowing sites to block users in incognito mode across the board. Currently, there is no deadline for a new Chrome update to block incognito mode detection; however, Google might now be more interested than ever in fixing this issue.

Transportation

Tesla Model 3 Drives Straight Into Overturned Truck In What Seems To Be Autopilot Failure (jalopnik.com) 322

A viral video making the rounds on social media shows a Tesla Model 3 smacking into the roof of an overturned truck trailer. The crash took place on Taiwan's National Highway 1 and appears to be "caused by the Tesla's Autopilot system not detecting the large rectangular object right in front of it, in broad daylight and clear weather," reports Jalopnik. From the report: There's video of the wreck, and you can see the Tesla drives right into the truck, with only what looks like a solitary attempt at braking just before impact. For any human driver paying even the slightest bit of attention, this accident is almost an impossibility, assuming the driver had the gift of sight and functional brakes.

Tesla's Autopilot system relies primarily on cameras, and previous wrecks have suggested that situations like this one -- a large, light-colored, immobile object on the road on a bright day -- can be hard for the system to distinguish. In general, immobile objects are challenging for automatic emergency braking and autonomous systems: if you use radar emitters to trigger braking for immobile objects, cars tend to have far more false positives and unintended stops than is safe or desirable.

News reports from Taiwanese outlets, clumsily translated by machine, do seem to suggest that the driver, a 53-year-old man named Huang, had Autopilot activated: "The Fourth Highway Police Brigade said that driving Tesla was a 53-year-old man named Huang, who claimed to have turned on the vehicle assist system at the time. It was thought that the vehicle would detect an obstacle and slow down or stop, but the car still moved at a fixed speed, so when the brakes were to be applied at the last moment, it would be too late to cause a disaster."
Thankfully, nobody was seriously hurt in the accident. The takeaway is that regardless of whether Autopilot was working or not, the driver should always be paying attention and ready to step in, especially since no Tesla, or any other currently available car, is fully autonomous.

Chrome

Chrome 83 Released With Enhanced Privacy Controls, Tab Groups Feature (zdnet.com) 20

Google has released today version 83 of its Chrome web browser, one of the most feature-packed Chrome updates released since the browser's initial launch back in 2008. From a report: Today's v83 release includes a slew of new features. These include enhanced privacy controls, new settings for managing cookie files, a new Safety Check option, support for tab groups, new graphics for web form elements, a new API for detecting barcodes, and a new anti-XSS security feature, among many others. The reason Chrome 83 includes so many features is that Google canceled the Chrome 82 release due to the ongoing coronavirus pandemic. As a result, some of the Chrome 82 features were pushed into Chrome 83, while others were rescheduled for later this year.

First Person Shooters (Games)

'Doom Eternal' Is Using Denuvo's New Kernel-Level Anti-Cheat Driver (arstechnica.com) 68

"Doom Eternal has become the latest game to use a kernel-level driver to aid in detecting cheaters in multiplayer matches," reports Ars Technica: The game's new driver and anti-cheat tool come courtesy of Denuvo parent Irdeto, a company once known for nearly unbeatable piracy protection and now known for somewhat effective but often cracked piracy protection. But the new Denuvo Anti-Cheat protection is completely separate from the company's Denuvo Anti-Tamper technology... The new Denuvo Anti-Cheat tool rolls out to Doom Eternal players after "countless hours and millions of gameplay sessions" during a two-year early access program, Irdeto said in a blog post announcing its introduction. But unlike Valorant's similar Vanguard system, the Denuvo Anti-Cheat driver "doesn't have annoying tray icons or splash screens" letting players monitor its use on their system. "This invisibility could raise some eyebrows," Irdeto concedes.

To assuage any potential fears, Irdeto writes that Denuvo Anti-Cheat only runs when the game is active, and Bethesda's patch notes similarly say that "use of the kernel-mode driver starts when the game launches and stops when the game stops for any reason...."

"No monitoring or data collection happens outside of multiplayer matches," Denuvo Anti-Cheat Product Owner Michail Greshishchev told Ars via email. "Denuvo does not attempt to maintain the integrity of the system. It does not block cheats, game mods, or developer tools. Denuvo Anti-Cheat only detects cheats." Greshishchev added that the company's driver has received "certification from renown[ed] kernel security researchers, completed regular whitebox and blackbox audits, and was penetration-tested by independent cheat developers." He said Irdeto is also setting up a bug bounty program to discover any flaws they might have missed.

And because of Denuvo Anti-Cheat's design, Greshishchev says the driver is more secure than others that might have more exposure to the Internet. "Unlike existing anti-cheats, Denuvo Anti-Cheat does not stream shell code from the Web," Greshishchev told Ars. "This means that, if compromised, attackers can't send down arbitrary malware to gamers' machines...."

If a driver exploit is discovered in the wild, Greshishchev told Ars that revocable certificates and self-expiring network keys can be used as "kill switches" to cut them off.

The Internet

Vint Cerf on COVID-19's Impact on the Future of Internet (medianama.com) 53

Vint Cerf on the great many lessons that the coronavirus crisis has taught us about infrastructure writ large: More directly associated with COVID-19 is the need for detecting exposure and tracking contacts to reduce the spread of the disease. Mobiles and the Internet appear to have roles to play for at least some tracking and tracing system designs. The application of machine learning to large medical datasets may help identify the ways in which SARS-COV-2 actually works. It seems that we are finding new syndromes triggered by this virus as research progress is made. We don't know enough and we must learn more.

Among the stark lessons we have learned is the fragility of food and medical equipment supply chains, either because of excessive concentration or because transport connections are broken. We are seeing this dramatically in the United States where farmers have been unable to sell to restaurants that are closed or operating at much reduced capacity out of concern for the propagation of the virus. These lessons should teach us to create much more resilient infrastructure in every dimension. We need to refresh national stockpiles of protective equipment, medical devices and vaccines. More generally, we must imagine other potential global catastrophes and put in place plans to mitigate them. The time to agree on best practices for emergency response is before the emergency, not during. We must not allow this pandemic or a future one to become our society's Titanic.

Intel

Microsoft and Intel Project Converts Malware Into Images Before Analyzing It (zdnet.com) 45

Microsoft and Intel have collaborated on a new research project that explores a new approach to detecting and classifying malware. From a report: Called STAMINA (STAtic Malware-as-Image Network Analysis), the project relies on a new technique that converts malware samples into grayscale images and then scans the image for textural and structural patterns specific to malware samples. The Intel-Microsoft research team said the entire process followed a few simple steps. The first consisted of taking an input file and converting its binary form into a stream of raw pixel data. Researchers then took this one-dimensional (1D) pixel stream and converted it into a 2D photo so that normal image analysis algorithms can analyze it.
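The byte-to-pixel step is straightforward to sketch. The code below is an illustration of the general technique, not Microsoft/Intel's actual STAMINA implementation: each byte of a file becomes one grayscale pixel (0-255), and the 1D stream is folded into a 2D image (the width heuristic here is my assumption for the sketch).

```python
# Sketch of malware-as-image preprocessing: file bytes -> grayscale pixels.
import math

def bytes_to_image(data, width=None):
    """Convert raw bytes to a 2D list of grayscale pixel rows."""
    if width is None:
        # Heuristic for the sketch: width near the square root of the size,
        # so the resulting "photo" is roughly square.
        width = max(1, int(math.sqrt(len(data))))
    pixels = list(data)  # each byte is already a 0-255 intensity value
    # Pad the final row with black pixels so every row has equal width.
    if len(pixels) % width:
        pixels += [0] * (width - len(pixels) % width)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

sample = bytes(range(16))       # stand-in for a binary under analysis
img = bytes_to_image(sample)    # a 4x4 grayscale "photo" of the file
print(len(img), len(img[0]))    # 4 4
```

A conventional image classifier (the "normal image analysis algorithms" the report mentions) can then be trained on such images to separate malicious from benign samples.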

Security

Hackers Hide Web Skimmer Behind a Website's Favicon (zdnet.com) 18

In one of the most complex and innovative hacking campaigns detected to date, a hacker group created a fake icon-hosting website in order to disguise malicious code meant to steal payment card data from hacked websites. From a report: The operation is what security researchers these days call web skimming, e-skimming, or a Magecart attack. Hackers breach websites and then hide malicious code on their pages, code that records and steals payment card details as they're entered in checkout forms. Web skimming attacks have been going on for almost four years, and as security firms get better at detecting them, attackers are getting craftier. In a report published today, US-based cybersecurity firm Malwarebytes said it detected one such group taking its operations to a whole new level of sophistication with a new trick.

The Courts

Court Finds Algorithm Bias Studies Don't Violate US Anti-Hacking Law (engadget.com) 52

"A federal court in D.C. has ruled in a lawsuit against Attorney General William Barr that studies aimed at detecting discrimination in online algorithms don't violate the Computer Fraud and Abuse Act," reports Engadget: The government argued that the Act made it illegal to violate a site's terms of service through some investigative methods (such as submitting false info for research), but Judge John Bates determined that the terms only raised the possibility of civil liability, not criminal cases.

Bates observed that many sites' terms of service (which are frequently buried, cryptic or both) didn't provide a good-enough notice to make people criminally liable, and that it's problematic for private sites to define criminal liability. The judge also found that the government was using an overly broad interpretation when it's supposed to use a narrow view whenever there's ambiguity.

"Researchers who test online platforms for discriminatory and rights-violating data practices perform a public service," wrote the staff attorney for the American Civil Liberties Union (which filed the suit "on behalf of academic researchers, computer scientists, and journalists who wish to investigate companies' online practices.") "They should not fear federal prosecution for conducting the 21st-century equivalent of anti-discrimination audit testing."

Their announcement notes it's the kind of testing used by journalists "who exposed that advertisers were using Facebook's ad-targeting algorithm to exclude users from receiving job, housing, or credit ads based on race, gender, age, or other classes protected from discrimination in federal and state civil rights laws."

AI

Surveillance Company Says It's Deploying 'Coronavirus-Detecting' Cameras In US (vice.com) 87

An Austin, Texas-based technology company is launching "artificially intelligent thermal cameras" that it claims will be able to detect fevers in people, and in turn send an alert that they may be carrying the coronavirus. Motherboard reports: Athena Security is pitching the product to be used in grocery stores, hospitals, and voting locations. It claims to be deploying the product at several customer locations over the coming weeks, including government agencies, airports, and large Fortune 500 companies. "Our Fever Detection COVID19 Screening System is now a part of our platform along with our gun detection system which connects directly to your current security camera system to deliver fast, accurate threat detection," Athena's website reads. Athena previously sold software that it claims can detect guns and knives in video feeds and then send alerts to an app or security system.

"The AI detects it, and it says I have a 99.5 degrees temperature. It notices that I have a fever, and that I am infected," an Athena employee says during a video demonstration of the product. "Since higher temperature is one of the first symptoms, these cameras can be life-saving" warning the person that they could have the virus and encouraging that person to take serious steps to self-quarantine," the representative added in an email, suggesting that the company could deploy them at polling locations. "Although many voters today are bound to get it, steps in the coming weeks could prevent them from spreading the bug to loved ones and strangers alike." The representative claimed that the software is accurate within half a degree and that it detects a dozen different parts on the body. They added the system has "no facial recognition, no personal tracking."

Space

We're Better Equipped to Find Extraterrestrial Life Now Than Ever Before (smithsonianmag.com) 59

"However small the probability of seeing a signal from E.T. is, those chances are soon going to be a lot better than they have been in the past," reports Smithsonian magazine: Sure, after decades of listening, there is still no message. But with more data to sift through, and new technologies with superior search capabilities, odds of hearing from E.T. are rapidly improving. If the probability in the decade 2011 - 2021 were x percent, it's going to be 1,000 times x in the following decade, says Andrew Siemion, director of the Berkeley SETI Research Center. (SETI stands for Search for Extra-Terrestrial Intelligence.) The reason for E.T. optimism stems largely from several new projects in the works, enhanced with advanced methods for discerning an actual message hidden in the static of cosmic cacophony...

Jill Tarter, chair emeritus for SETI Research at the pioneering SETI Institute, described new search projects in the works at the institute, including Laser SETI. It's a plan to train 96 cameras at a dozen locations around the world to keep a constant vigil for "intelligent" optical signals from space... [And] recent developments in artificial intelligence research should soon make machine learning an effective tool in the E.T. search, Tarter said at the AAAS meeting. "The ability to use machine learning to help us find signals in noise I think is really exciting," she said. "Historically we've asked a machine to tell us if a particular pattern in frequency and time could be found. But now we're on the brink of being able to say to the machine, 'Are there any patterns in there?'"

So it's possible that an artificially intelligent computer might be the first earthling to discern a message from an extraterrestrial. But then we would have to wonder, would a smart machine detecting a message bother to tell us? That might depend on whom (or what) the message was from. "I think there's something particularly romantic," said Siemion, "about the idea of machine learning and artificial intelligence looking for extraterrestrial intelligence which itself might be artificially intelligent."

The article also notes that SETI researchers "have long agreed that if a signal is detected, no response would be made until a global consensus had been reached on who will speak for Earth and what they would say.

"But that agreement is totally unenforceable..."

Privacy

An AI Surveillance Company is Watching Utah (vice.com) 39

An anonymous reader quotes Motherboard: The state of Utah has given an artificial intelligence company real-time access to state traffic cameras, CCTV and "public safety" cameras, 911 emergency systems, location data for state-owned vehicles, and other sensitive data. The company, called Banjo, says that it's combining this data with information collected from social media, satellites, and other apps, and claims its algorithms "detect anomalies" in the real world.

The lofty goal of Banjo's system is to alert law enforcement of crimes as they happen. It claims it does this while somehow stripping all personal data from the system, allowing it to help cops without putting anyone's privacy at risk. As with other algorithmic crime systems, there is little public oversight or information about how, exactly, the system determines what is worth alerting cops to.

In its pitches to prospective clients, Banjo promises its technology, called "Live Time Intelligence," can identify, and potentially help police solve, an incredible variety of crimes in real-time. Banjo says its AI can help police solve child kidnapping cases "in seconds," identify active shooter situations as they happen, or potentially send an alert when there's a traffic accident, airbag deployment, fire, or a car driving the wrong way down the road. Banjo says it has "a solution for homelessness" and can help with the opioid epidemic by detecting "opioid events." It offers "artificial intelligence processing" of state-owned audio sensors that "include but may not be limited to speech recognition and natural language processing" as well as automatic scene detection, object recognition, and vehicle detection on real-time video footage pulled in from Utah's cameras.

In July, Banjo signed a five-year, $20.7 million contract with Utah that gives the company unprecedented access to data the state collects. Banjo's pitch to state and local agencies is that the more data that's fed into it, the better its product will work... Privacy experts are unsure how Banjo can be doing anything other than applying machine learning to a terrifying amount of data to create a persistent panopticon pointed at everyone who lives in Utah.

Banjo now has direct, real-time access to the thousands of traffic cameras in Utah, and is plugged into 911 systems across the state.

Science

Researchers Combine Lasers and Terahertz Waves In Camera That Sees 'Unseen' Detail (phys.org) 28

A team of physicists at the University of Sussex has successfully developed the first nonlinear camera capable of capturing high-resolution images of the interior of solid objects using terahertz (THz) radiation. Phys.Org reports: Led by Professor Marco Peccianti of the Emergent Photonics (EPic) Lab, Luana Olivieri, Dr. Juan S. Totero Gongora and a team of research students built a new type of THz camera capable of detecting THz electromagnetic waves with unprecedented accuracy. Images produced using THz radiation are called 'hyperspectral' because the image consists of pixels, each one containing the electromagnetic signature of the object in that point.

The EPic Lab team used a single-pixel camera to image sample objects with patterns of THz light. The prototype they built can detect how the object alters different patterns of THz light. By combining this information with the shape of each original pattern, the camera reveals the image of an object as well as its chemical composition. Sources of THz radiation are very faint, and hyperspectral imaging had, until now, limited fidelity. To overcome this, the Sussex team shone a standard laser onto a unique nonlinear material capable of converting visible light to THz. The prototype camera creates THz electromagnetic waves very close to the sample, similar to how a microscope works. As THz waves can travel right through an object without affecting it, the resulting images reveal the shape and composition of objects in three dimensions.
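Single-pixel imaging, the general technique behind this prototype, can be sketched in a few lines: the scene is probed with a sequence of known patterns, the detector records one number per pattern, and the image is recovered by inverting the measurement model. This toy example (my illustration, not the Sussex team's code) uses idealized +1/-1 Hadamard masks on a flattened 4x4 scene:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16                                # a 4x4 scene, flattened to 16 pixels
rng = np.random.default_rng(42)
scene = rng.random(n)                 # the unknown image

patterns = hadamard(n)                # one +1/-1 mask per measurement
# Single-pixel record: one scalar (total detected light) per pattern.
measurements = patterns @ scene
# Sylvester Hadamard matrices satisfy H @ H = n * I, so recovery is
# a single matrix-vector product rather than a general linear solve.
recovered = patterns @ measurements / n

print(np.allclose(recovered, scene))  # True
```

Real systems use physical light patterns and noisy detectors, so reconstruction is more involved, but the linear-measurement idea is the same.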

The Courts

Welfare Surveillance System Violates Human Rights, Dutch Court Rules (theguardian.com) 119

An anonymous reader quotes a report from The Guardian: A Dutch court has ordered the immediate halt of an automated surveillance system for detecting welfare fraud because it violates human rights, in a judgment likely to resonate well beyond the Netherlands. The case was seen as an important legal challenge to the controversial but growing use by governments around the world of artificial intelligence (AI) and risk modeling in administering welfare benefits and other core services. Campaigners say such "digital welfare states" -- developed often without consultation, and operated secretively and without adequate oversight -- amount to spying on the poor, breaching privacy and human rights norms and unfairly penalizing the most vulnerable.

A Guardian investigation in October found the Department for Work and Pensions (DWP) had increased spending to about $10 million a year on a specialist "intelligent automation garage" where computer scientists were developing more than 100 welfare robots, deep learning and intelligent automation for use in the welfare system. The Dutch government's risk indication system (SyRI) is a risk calculation model developed over the past decade by the social affairs and employment ministry to predict the likelihood of an individual committing benefit or tax fraud or violating labour laws. Deployed primarily in low-income neighborhoods, it gathers government data previously held in separate silos, such as employment, personal debt and benefit records, and education and housing histories, then analyses it using a secret algorithm to identify which individuals might be at higher risk of committing benefit fraud.
"A broad coalition of privacy and welfare rights groups, backed by the largest Dutch trade union, argued that poor neighborhoods and their inhabitants were being spied on digitally without any concrete suspicion of individual wrongdoing," the report adds. "SyRI was disproportionately targeting poorer citizens, they said, violating human rights norms."

"The court ruled that the SyRI legislation contained insufficient safeguards against privacy intrusions and criticized a 'serious lack of transparency' about how it worked. It concluded in its ruling that, in the absence of more information, the system may, in targeting poor neighborhoods, amount to discrimination on the basis of socioeconomic or migrant status."

AI

Google Releases a Tool To Spot Faked and Doctored Images (technologyreview.com) 34

Jigsaw, a technology incubator at Google, has released an experimental platform called Assembler to help journalists and front-line fact-checkers quickly verify images. MIT Technology Review reports: Assembler combines several existing techniques in academia for detecting common manipulation techniques, including changing image brightness and pasting copied pixels elsewhere to cover up something while retaining the same visual texture. It also includes a detector that spots deepfakes of the type created using StyleGAN, an algorithm that can generate realistic imaginary faces. These detection techniques feed into a master model that tells users how likely it is that an image has been manipulated. "Assembler is a good step in fighting manipulated media -- but it doesn't cover many other existing manipulation techniques, including those used for video, which the team will need to add and update as the ecosystem keeps evolving," the report notes. "It also still exists as a separate platform from the channels where doctored images are usually distributed. Experts have recommended that tech giants like Facebook and Google incorporate these types of detection features directly into their platforms. That way such checks can be performed in close to real time as photos and videos are uploaded and shared."
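One classic technique in this family, copy-move (clone) detection, flags identical pixel blocks that appear at more than one location in an image, catching the "pasting copied pixels elsewhere" manipulation mentioned above. The sketch below is illustrative only; Jigsaw has not published Assembler's internals, and production detectors match blocks robustly rather than byte-for-byte.

```python
# Naive copy-move detection: hash every small pixel block and report
# coordinates where the exact same block appears twice.
import numpy as np

def copy_move_candidates(image, block=4):
    """Return pairs of top-left coordinates whose blocks match exactly."""
    seen = {}
    pairs = []
    h, w = image.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = image[y:y + block, x:x + block].tobytes()
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

# Demo: paste a patch of a random image onto another region, then detect it.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
img[10:14, 10:14] = img[2:6, 2:6]      # simulate a copy-move forgery
print(((2, 2), (10, 10)) in copy_move_candidates(img))  # True
```

Exact matching breaks as soon as the forger rescales or recompresses the paste, which is why practical tools compare blocks in a robust feature space instead of raw bytes.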
