Social Networks

Users React To Bluesky's Upcoming Blue Check Mark Verification System (neowin.net) 36

Bluesky is testing a new verification system featuring blue checks issued by "Trusted Verifiers" like news organizations, rather than a centralized authority or pay-to-play model like X (formerly Twitter). "Looking at the comments on the pull request, it's clear this idea has sparked a lot of discussion and a lot of concern among the community who follow the platform's development closely," reports Neowin. "Many users voiced strong opposition to the change, arguing that the existing domain name verification is sufficient and more aligned with the decentralized ethos that Bluesky aims for." From the report: There's a general worry that adding a visual badge, especially one controlled in part by Bluesky, feels too much like the centralized systems they were trying to escape from by joining Bluesky: "Do not want. BSky is not Twitter 2.0. Do not become like Elon Musk. We came here to get AWAY from that bs." Several commenters also expressed that the current domain name system, while not perfect, is an elegant and decentralized way to build trust, and that adding this new layer feels redundant and gives too much power to centralized entities, including Bluesky itself: "Let's please not do this. Domain names as user IDs is an elegant solution as a system of trust that builds off the infrastructure of an open web."

While the majority of the initial reaction seems negative, focusing on concerns about centralization and the value of the existing domain verification, there was some support for the idea of a visual badge, making it easier to quickly identify genuine accounts. One user commented: "I support this change. I like someone to verify that the account is indeed genuine and the username field showing the domain isn't helpful that much... A badge makes it easier to just tick it off that it's genuine." The PR author, estrattonbailey, later added a description to the pull request explaining that the goal is a "stronger visual signal" for notable accounts and clarifying it's not a paid service.
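For readers unfamiliar with the mechanism the commenters are defending: Bluesky verifies a domain handle by publishing the account's DID in a DNS TXT record on the `_atproto` subdomain (or at an HTTPS well-known endpoint), then checking that the DID document points back at the handle. A minimal sketch of the record-parsing side, with an illustrative function name:

```python
def did_from_txt_records(records):
    """Extract an account DID from `_atproto.<handle>` TXT record values.

    Bluesky's domain verification publishes a record of the form
    `did=did:plc:...`; a client that resolves the record and matches it
    against the account's DID has verified the handle without any
    central authority. (Parsing details here are illustrative.)
    """
    for value in records:
        value = value.strip().strip('"')  # DNS tooling often quotes TXT values
        if value.startswith("did=") and value[4:].startswith("did:"):
            return value[4:]
    return None
```

In the real protocol the check is bidirectional: the DID document must also list the handle, so neither side alone can claim the pairing.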

Bitcoin

US Indicts 26-Year-Old Gotbit Founder For Market Manipulation (crypto.news) 21

The feds have indicted Aleksei Andriunin, a 26-year-old Russian national and founder of Gotbit, on charges of wire fraud and conspiracy to commit market manipulation. Crypto News reports: According to the U.S. Attorney's Office, the indictment alleges that Andriunin and his firm participated in a long-running scheme to artificially boost trading volumes for various cryptocurrency companies, including some based in the United States, to make them appear more popular and increase their trading value. Andriunin allegedly led these activities between 2018 and 2024 as Gotbit's CEO. He could face up to 20 years in prison, additional fines, and asset forfeiture if convicted, according to the U.S. Attorney's Office. Prosecutors say the scheme involved "wash trading," where the firm used its software to make fake trades that inflated a cryptocurrency's trading volume. This practice, a form of market manipulation, can mislead investors by giving the impression that demand for a particular cryptocurrency is higher than it actually is. Wash trades are illegal in traditional finance and are considered fraudulent because they deceive investors and manipulate market behavior.

Court documents also identify Gotbit's two directors, Fedor Kedrov and Qawi Jalili, as co-conspirators. The indictment claims Gotbit documented these activities in detailed records, tracking differences between genuine and artificial trading volumes. The firm allegedly pitched these services to prospective clients, explaining how Gotbit's tactics would bypass detection on public blockchains, where transactions are recorded transparently. The U.S. Department of Justice has announced that it seized over $25 million worth of cryptocurrency assets connected to these schemes and made four arrests across multiple firms.
If you've been following the crypto industry, you're probably familiar with "pump-and-dump" schemes that have popped up throughout the years. Although it's a form of market manipulation, it's not quite the same as "wash trading."

In a pump-and-dump scheme, the perpetrator artificially inflates the price of a security (often a low-priced or thinly traded stock) by spreading misleading or exaggerated information to attract other buyers, who then drive up the price. Once the price has risen due to increased demand, the manipulators "dump" their shares at the inflated price, selling to the new buyers and pocketing the profits. The price typically crashes after the dump, leaving unsuspecting investors with overvalued shares and significant losses.

Wash trading, on the other hand, involves the simultaneous buying and selling of the same asset to create the illusion of higher trading volume and activity. The purpose is to mislead other investors about the asset's liquidity and demand, often giving the impression that it is more popular or actively traded than it actually is. Wash trades usually occur without real changes in ownership or price movement, as the buyer and seller may even be the same person or entity. This tactic can manipulate prices indirectly by creating a perception of interest, but it does not involve a direct price inflation followed by a sell-off, as a pump-and-dump scheme does.
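As a rough illustration of why wash trades are detectable in principle, the sketch below flags self-trades and quick same-size reversals between the same two accounts. The data model, the 60-second window, and the exact-amount match are all illustrative assumptions, not how any real market-surveillance system works:

```python
def flag_wash_trades(trades):
    """Flag trades that look like wash trading: the same party on both
    sides, or an immediate same-size reversal between the same two
    accounts. Each trade is (timestamp_seconds, buyer, seller, amount);
    this schema is hypothetical, for illustration only.
    """
    flagged = []
    last_pair = {}  # (buyer, seller) -> (timestamp, amount) of their last trade
    for ts, buyer, seller, amount in sorted(trades):
        if buyer == seller:
            # Trivial case: one account on both sides of the trade.
            flagged.append((ts, buyer, seller, amount))
            continue
        # A sold to B earlier; now B sells the same amount back to A shortly after.
        prev = last_pair.get((seller, buyer))
        if prev is not None and ts - prev[0] <= 60 and amount == prev[1]:
            flagged.append((ts, buyer, seller, amount))
        last_pair[(buyer, seller)] = (ts, amount)
    return flagged
```

Gotbit allegedly evaded exactly this kind of check by routing trades through many wallets, which is why the indictment leans on the firm's own internal records rather than on-chain patterns alone.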
Intel

Intel Confirms Alder Lake BIOS Source Code Leaked (tomshardware.com) 61

Tom's Hardware reports: We recently broke the news that Intel's Alder Lake BIOS source code had been leaked to 4chan and Github, with the 6GB file containing tools and code for building and optimizing BIOS/UEFI images. We reported the leak within hours of the initial occurrence, so we didn't yet have confirmation from Intel that the leak was genuine. Intel has now issued a statement to Tom's Hardware confirming the incident:

"Our proprietary UEFI code appears to have been leaked by a third party. We do not believe this exposes any new security vulnerabilities as we do not rely on obfuscation of information as a security measure. This code is covered under our bug bounty program within the Project Circuit Breaker campaign, and we encourage any researchers who may identify potential vulnerabilities to bring them our attention through this program...."


The BIOS/UEFI of a computer initializes the hardware before the operating system loads, so among its many responsibilities is establishing connections to certain security mechanisms, like the TPM (Trusted Platform Module). Now that the BIOS/UEFI code is in the wild and Intel has confirmed it as legitimate, nefarious actors and security researchers alike will undoubtedly probe it to search for potential backdoors and security vulnerabilities....

Intel hasn't confirmed who leaked the code or where and how it was exfiltrated. However, we do know that the GitHub repository, now taken down but already replicated widely, was created by an apparent employee of LC Future Center, a China-based ODM that manufactures laptops for several OEMs, including Lenovo.

Thanks to Slashdot reader Hmmmmmm for sharing the news.
AI

Humans Find AI-Generated Faces More Trustworthy Than the Real Thing (scientificamerican.com) 72

Scientific American reports on a new study published in the Proceedings of the National Academy of Sciences USA on the effectiveness of deep fakes.

"The results suggest that real humans can easily fall for machine-generated faces — and even interpret them as more trustworthy than the genuine article." "We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces," says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that "these faces could be highly effective when used for nefarious purposes." The first group did not do better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent... The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people... Study participants did overwhelmingly identify some of the fakes as fake. "We're not saying that every single image generated is indistinguishable from a real face, but a significant number of them are," says study co-author Sophie Nightingale.... The authors of the study end with a stark conclusion after emphasizing that deceptive uses of deepfakes will continue to pose a threat: "We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits," they write. "If so, then we discourage the development of technology simply because it is possible."
Thanks to Slashdot reader Hmmmmmm for sharing the link!
Microsoft

Microsoft Tempts Software Pirates With 50 Percent Discount On Office (theverge.com) 76

In a bold bid to turn digital crooks away from a life of crime, Microsoft is offering a 50 percent discount on its Office suite to some people using pirated versions. The Verge reports: Ghacks reports that a new message in the Office ribbon bar is appearing on pirated Office apps, tempting people with a 50 percent discount on a genuine Microsoft 365 subscription. The message links to an official Microsoft website that claims "pirated software exposes your PC to security threats." Microsoft warns Office pirates that they run the risk of running into viruses, malware, data loss, identity theft, and the inability to receive critical updates. The discount brings the price of a Microsoft 365 Family subscription down to just $49.99 for the first year, or $34.99 for a year of Microsoft 365 Personal.
Privacy

7-Eleven Breached Customer Privacy By Collecting Facial Imagery Without Consent (zdnet.com) 23

An anonymous reader quotes a report from ZDNet: In Australia, the country's information commissioner has found that 7-Eleven breached customers' privacy by collecting their sensitive biometric information without adequate notice or consent. From June 2020 to August 2021, 7-Eleven conducted surveys that required customers to fill out information on tablets with built-in cameras. These tablets, which were installed in 700 stores, captured customers' facial images at two points during the survey-taking process -- when the individual first engaged with the tablet, and after they completed the survey. After becoming aware of this activity in July last year, the Office of the Australian Information Commissioner (OAIC) commenced an investigation into 7-Eleven's survey.

During the investigation [PDF], the OAIC found 7-Eleven stored the facial images on tablets for around 20 seconds before uploading them to a secure server hosted in Australia within the Microsoft Azure infrastructure. The facial images were then retained on the server, as an algorithmic representation, for seven days to allow 7-Eleven to identify and correct any issues, and reprocess survey responses, the convenience store giant claimed. The facial images were uploaded to the server as algorithmic representations, or "faceprints," that were then compared with other faceprints to exclude responses that 7-Eleven believed may not be genuine. 7-Eleven also used the personal information to understand the demographic profile of customers who completed the survey, the OAIC said.

7-Eleven claimed it received consent from customers who participated in the survey as it provided a notice on its website stating that 7-Eleven may collect photographic or biometric information from users. The survey resided on 7-Eleven's website. As at March 2021, approximately 1.6 million survey responses had been completed. In Australia, an organization is prohibited from collecting sensitive information about an individual unless consent is provided. [...] 7-Eleven [has been ordered] to cease collecting facial images and faceprints as part of the customer feedback mechanism. 7-Eleven has also been ordered to destroy all the faceprints it collected.
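The OAIC's description suggests the faceprints were compared numerically to spot repeat respondents. A common way to do that with face embeddings is cosine similarity against the stored vectors; the sketch below is a generic illustration with a made-up threshold and function names, not 7-Eleven's actual system:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_repeat_respondent(new_print, stored_prints, threshold=0.9):
    """Return True if the new faceprint matches any stored faceprint.

    The threshold and the embedding model producing the vectors are
    hypothetical; real face-matching systems tune both carefully.
    """
    return any(cosine_similarity(new_print, p) >= threshold for p in stored_prints)
```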

AI

AI Study Suggests a London Gallery's Been Exhibiting a Fake For Years (thenextweb.com) 121

Thomas Macaulay writes via The Next Web: Samson and Delilah is among the most famous works by Peter Paul Rubens, one of the most influential artists of the 17th century. The painting depicts an Old Testament story in which the warrior Samson is betrayed by his lover Delilah. When London's National Gallery bought the masterpiece in 1980, it became the third most expensive artwork (PDF) ever purchased at auction. But the buyers may now be searching for their receipt. According to a new AI analysis, their prized possession is almost certainly a fake.

The tests were conducted by Art Recognition, a Swiss company that uses algorithms to authenticate artworks. The firm's tool is based on a deep convolutional neural network. The system learns to identify an artist's characteristics by training the algorithm on images of their real works. The training dataset is then augmented by splitting the images into smaller patches, which are magnified to capture the finer details. Once the training is complete, the algorithm is fed a new image to assess. It then analyzes the picture's features to evaluate the likelihood of it being genuine. After comparing Samson and Delilah with 148 genuine Rubens paintings, the system gave the artwork a 91% probability of being inauthentic.
Carina Popovici, the cofounder of Art Recognition, was shocked by the results: "We repeated the experiments to be really sure that we were not making a mistake, and the result was always the same. Every patch, every single square, came out as fake, with more than 90% probability."
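Art Recognition's patch-based approach can be illustrated in outline: split the image into tiles, score each tile with a trained classifier, and aggregate. The sketch below stubs out the classifier (which we obviously don't have) and shows only the patching and aggregation logic; patch size and the averaging rule are assumptions:

```python
def split_into_patches(image, patch):
    """Split a 2-D image (a list of pixel rows) into non-overlapping
    patch x patch tiles, dropping any remainder at the edges."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + patch] for row in image[y:y + patch]]
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ]

def authenticity_score(image, classify_patch, patch=32):
    """Average per-patch 'genuine' probabilities into one image-level score.

    `classify_patch` stands in for the trained CNN. Popovici's remark
    that "every patch ... came out as fake" corresponds to every call
    here returning a low genuine-probability.
    """
    scores = [classify_patch(p) for p in split_into_patches(image, patch)]
    return sum(scores) / len(scores)
```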
The Media

How Should the Media Depict Autism? (salon.com) 117

April 2nd was "World Autism Awareness Day." This prompted Salon to ask: What would a good representation of autism in the media look like? When you talk to people who are neurodiverse, one problem they consistently identify is that even well-developed characters who seem to be on the spectrum are frequently "coded" — that is, they are given personality traits associated with autism but are never directly identified as being autistic.

"I have yet to seen a portrayal in the media that feels genuine," Becca Hector, an autism and neurodiversity consultant and mentor in Colorado, told Salon via Facebook. After noting the prevalence of autistic stereotyping in media, and particularly the entertainment industry, she added that "the closest they ever got, in my opinion, is Temperance Bones from the TV show 'Bones.'" Hector praised how the character "acted" autistic and the people around her responded with a mixture of laughter and exasperation, which struck her as realistic. At the same time, Bones was "absolutely coded."

Jen Elcheson, a 39-year-old autistic paraeducator and published author living in western Canada, agreed with Hector about Bones in the Facebook conversation. "Honestly, I find autistic coded characters easier to relate to in entertainment than the ones they purposely make autistic," she observed. "Because when they do it deliberately, it's usually characters laden in all the stereotypes."

Still, Elcheson argued that the alternative is also bad.

"When characters are coded not only does the greater public miss out on seeing a different depiction of an autistic that isn't a stereotype, but the autistic community once again experiences erasure."
Robotics

Will California's New Bot Law Strengthen Democracy? (newyorker.com) 185

On July 1st, California became the first state in the nation to try to reduce the power of bots by requiring that they reveal their "artificial identity" when they are used to sell a product or influence a voter. Violators could face fines under state statutes related to unfair competition. From a report: Just as pharmaceutical companies must disclose that the happy people who say a new drug has miraculously improved their lives are paid actors, bots in California -- or rather, the people who deploy them -- will have to level with their audience. "It's literally taking these high-end technological concepts and bringing them home to basic common-law principles," Robert Hertzberg, a California state senator who is the author of the bot-disclosure law, told me. "You can't defraud people. You can't lie. You can't cheat them economically. You can't cheat 'em in elections."

California's bot-disclosure law is more than a run-of-the-mill anti-fraud rule. By attempting to regulate a technology that thrives on social networks, the state will be testing society's resolve to get our (virtual) house in order after more than two decades of a runaway Internet. We are in new terrain, where the microtargeting of audiences on social networks, the perception of false news stories as genuine, and the bot-led amplification of some voices and drowning-out of others have combined to create angry, ill-informed online communities that are suspicious of one another and of the government. Regulating bots should be low-hanging fruit when it comes to improving the Internet. The California law doesn't even ban them outright but, rather, insists that they identify themselves in a manner that is "clear, conspicuous, and reasonably designed."

Cellphones

Phones Can Now Tell Who Is Carrying Them From Their Users' Gaits (economist.com) 94

PolygamousRanchKid shares an excerpt from a report via The Economist: Most online fraud involves identity theft, which is why businesses that operate on the web have a keen interest in distinguishing impersonators from genuine customers. Passwords help. But many can be guessed or are jotted down imprudently. Newer phones, tablets, and laptop and desktop computers often have beefed-up security with fingerprint and facial recognition. But these can be spoofed. To overcome these shortcomings the next level of security is likely to identify people using things which are harder to copy, such as the way they walk. Many online security services already use a system called device fingerprinting. This employs software to note things like the model type of a gadget employed by a particular user; its hardware configuration; its operating system; the apps which have been downloaded onto it; and other features, including sometimes the Wi-Fi networks it regularly connects through and devices like headsets it plugs into.
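In outline, device fingerprinting reduces the observed attributes to a stable identifier, typically by hashing a canonical serialization of them. A minimal sketch, with an illustrative attribute set (no real vendor's scheme is this simple):

```python
import hashlib
import json

def device_fingerprint(attrs):
    """Hash a canonical serialization of observable device attributes.

    Sorting keys makes the fingerprint independent of the order in
    which attributes were collected. The attribute names are made up
    for illustration.
    """
    canonical = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

The fragility the article describes follows directly from this design: if the platform stops exposing even one attribute, every stored fingerprint changes.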

LexisNexis Risk Solutions, an American analytics firm, has catalogued more than 4 billion phones, tablets and other computers in this way for banks and other clients. Roughly 7% of them have been used for shenanigans of some sort. But device fingerprinting is becoming less useful. Apple, Google and other makers of equipment and operating systems have been steadily restricting the range of attributes that can be observed remotely. That is why a new approach, behavioral biometrics, is gaining ground. It relies on the wealth of measurements made by today's devices. These include data from accelerometers and gyroscopic sensors that reveal how people hold their phones when using them, how they carry them and even the way they walk. Touchscreens, keyboards and mice can be monitored to show the distinctive ways in which someone's fingers and hands move. Sensors can detect whether a phone has been set down on a hard surface such as a table or dropped lightly on a soft one such as a bed. If the hour is appropriate, this action could be used to infer that a user has retired for the night. These traits can then be used to determine whether someone attempting to make a transaction is likely to be the device's habitual user.
Used wisely, the report says, behavioral biometrics could authenticate account-holders without badgering them for additional passwords or security questions; it could even unlock the doors of a vehicle once the gait of the driver, as measured by his phone, is recognized.

"Used unwisely, however, the system could become yet another electronic spy, permitting complete strangers to monitor your actions, from the moment you reach for your phone in the morning, to when you fling it on the floor at night," the report adds.
Facebook

Facebook Moderators Are Routinely High and Joke About Suicide To Cope With Job, Says Report (gizmodo.com) 217

According to a new report from The Verge, Facebook moderators in Phoenix, Arizona reportedly make just $28,800 a year and use sex and drugs to deal with the stress. "The report published on Monday detailed the experiences of current and former employees who worked at professional services company Cognizant, a company they say Facebook outsources its moderating efforts to," Gizmodo summarizes. "According to the report, employees experienced severe mental health distress, which they coped with by having sex at the office and smoking weed. Some even began believing the conspiracy theories they were tasked with reviewing. One quality assurance manager said he began bringing a gun to work in response to threats from fired workers." From the report: "There was nothing that they were doing for us," one former moderator told The Verge, "other than expecting us to be able to identify when we're broken. Most of the people there that are deteriorating -- they don't even see it. And that's what kills me." "Randy," a quality assurance worker at Cognizant charged with reviewing posts flagged by moderators, said that several times over his year at the company he was approached and intimidated by moderators to change his decisions. "They would confront me in the parking lot and tell me they were going to beat the shit out of me," Randy told The Verge. He also said that fired Cognizant employees made what he believed to be genuine threats of harm to their former colleagues. Randy started to bring a concealed gun to the office to protect himself.

Employees told The Verge that moderators in the Phoenix office dealt with the hellish reality of their jobs by having sex in the office -- in stairwells, bathrooms, parking garages, and a lactation room -- smoking weed on breaks, and joking about suicide. A former moderator claimed that there was a joke among colleagues that "time to go hang out on the roof" was subtext for wanting to jump off the building. Moderators for Facebook have to review graphic posts containing violence, dehumanizing speech, and child abuse, but they also have to weed through the conspiracy theories that run rampant on the web. It's well-reported that the former has resulted in moderators developing PTSD and other debilitating mental health issues, but Monday's report from The Verge indicates that the latter may be causing them to develop fringe beliefs.

Facebook

Papua New Guinea Bans Facebook For a Month To Root Out 'Fake Users' (theguardian.com) 86

The Papua New Guinean government will ban Facebook for a month in a bid to crack down on "fake users" and study the effects the website is having on the population. From a report: The communication minister, Sam Basil, said the shutdown would allow his department's analysts to carry out research and analysis on who was using the platform, and how they were using it, amid rising concerns about social well-being, security and productivity. "The time will allow information to be collected to identify users that hide behind fake accounts, users that upload pornographic images, users that post false and misleading information on Facebook to be filtered and removed," Basil told the Post Courier newspaper. "This will allow genuine people with real identities to use the social network responsibly." Basil has repeatedly raised concerns about protecting the privacy of PNG's Facebook users in the wake of the Cambridge Analytica revelations, which found Facebook had leaked the personal data of tens of millions of users to a private company. The minister has closely followed the US Senate inquiry into Facebook.
China

Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument (cnet.com) 69

Abstract of a study: The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called "50c party" posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large-scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. From a CNET article titled "Chinese media told to 'shut down' talk that makes country look bad": Being an internet business in China appears to be getting tougher. Chinese broadcasters, including social media platform Weibo, streamer Acfun and media company Ifeng were told to shut down all audio and visual content that cast the country or its government in a bad light, China's State Administration of Press, Publication, Radio, Film and Television posted on its website on Thursday, saying they violate local regulations. "[The service providers] broadcast large amounts of programmes that don't comply with national rules and propagate negative discussions about public affairs. [The agency] has notified all relevant authorities and ... will take measures to shut down these programmes and rectify the situation," reads the statement.
China

Counterfeit Air Bag Racket Blows Up 288

Hugh Pickens writes "According to Joan Lowy of the Associated Press, the National Highway Traffic Safety Administration has alerted the auto repair industry that tens of thousands of car owners may be driving vehicles with counterfeit air bags, which fail to inflate properly or don't inflate at all. Although no deaths or injuries have been tied to the counterfeit bags, it's unclear whether police accident investigators would be able to identify a counterfeit bag from a genuine one. The counterfeit bags typically have been made to look like air bags from automakers, and usually include a manufacturer's logo, but government investigators believe many of the bags come from China. Auto dealerships that operate their own body shops are usually required by their franchise agreements to buy their parts, including air bags, directly from automakers and therefore are unlikely to have installed counterfeit bags. But only 37 percent of auto dealers have their own body shops, so many consumers whose vehicles have been damaged are referred by their insurance companies to auto body shops that aren't affiliated with an automaker. Safety officials will warn millions of Americans that the air bags in over 100 vehicle models could be dangerous counterfeits, telling them to have their cars and trucks inspected as soon as possible. Dai Zhensong, a Chinese citizen, had the counterfeit air bags manufactured by purchasing genuine auto air bags that were torn down and used to produce molds to manufacture the counterfeit bags. Trademark emblems were purchased through dealerships located in China and affixed to the counterfeit air bags, which were then advertised on the Guangzhou Auto Parts website and sold for approximately $50 to $70 each, far below the value of an authentic air bag. The NHTSA has made a list of automobiles available that may be at risk for having counterfeit air bags."
Social Networks

Buried By The Brigade At Digg 624

Slashdot regular Bennett Haselton writes in with an essay on a subject we've dealt with internally at Slashdot for years: user abuses of social news... this time at Digg. He starts "Alternet uncovers evidence of a 'bury brigade' coordinating efforts to 'bury' left-leaning stories on Digg. Digg had previously announced that the 'bury' button will be removed from the next version of their site, to prevent these types of abuses, but that won't fix the real underlying issue — you can show mathematically that artificially promoting stories is just as harmful in the long run. Here's a simple fix that would address the real problem."
The Courts

P.I.I. In the Sky 222

Frequent Slashdot contributor Bennett Haselton writes "A judge rules that IP addresses are not 'personally identifiable information' (PII) because they identify computers, not people. That's absurd, but in truth there is no standard definition of PII in the industry anyway, because you don't need one in order to write secure software. Here's a definition of 'PII' that the judge could have adopted instead, to reach the same conclusion by less specious reasoning." Hit the link below to read the rest of his thoughts.
Power

Panasonic Begins To Lock Out 3rd-Party Camera Batteries 450

OhMyBattery writes "The latest firmware update for Panasonic digital cameras contains one single improvement: it locks out the ability to use 'non-genuine Panasonic' batteries. It does so for safety reasons, it says. It seems to indicate that this is going to be the norm for all new Panasonic digital cameras. From the release: 'Panasonic Digital Still Cameras now include a technology that can identify a genuine Panasonic battery. For the protection of our customers Panasonic developed this technology after it was discovered that some aftermarket 3rd party batteries do not meet the rigid safety standards Panasonic uses.' The firmware warning is quite clear as to what it does: 'After this firmware update your Panasonic Digital Camera cannot be operated by 3rd party batteries (non genuine Panasonic batteries).'"
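Panasonic hasn't said how its cameras identify a genuine battery, but accessory-authentication schemes in general often use a challenge-response protocol with a secret key stored in the pack's chip. The sketch below shows that general pattern with HMAC; the key, protocol, and function names are entirely hypothetical and do not describe Panasonic's actual mechanism:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"example-key-not-a-real-vendor-key"  # hypothetical shared secret

def battery_respond(challenge, key=SHARED_KEY):
    """What an authentication chip in a genuine pack would compute."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def camera_checks_battery(respond, key=SHARED_KEY):
    """The host side: send a random challenge and verify the response.

    A counterfeit pack without the key cannot produce a valid MAC for a
    fresh challenge, and replaying an old response fails because each
    challenge is random.
    """
    challenge = os.urandom(16)
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)
```

The trade-off critics raise is that the same mechanism that blocks unsafe counterfeits also blocks safe, legitimate third-party packs, since only the vendor holds the key.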
Microsoft

Microsoft Uses WGA To Obtain Record Jail Sentences 311

theodp writes "According to Microsoft, 'No information is collected during the [Genuine Advantage Program] validation process that can be used to identify or contact a user.' That's little comfort to the software counterfeiters who were just handed jail sentences ranging from 1.5-6.5 years by the Futian People's Court in China, especially since Microsoft contends that much of the estimated $2B in bogus software was detected by its Windows Genuine Advantage program. 'Software piracy negatively impacts local economic growth,' explained Microsoft VP Fengming Liu in a celebratory New Year's Eve press release. But then again, so does transferring $16B of assets and $9B in annual profit to an Irish tax haven, doesn't it?"

Virtual Worlds and ESP 310

Yesterday's post about an experiment using virtual worlds in an attempt to investigate the possibility of telepathic ability elicited nearly 400 comments from readers who had points to raise about experimental design, skepticism and credulity, and quantum mechanics. Read on for the Backslash summary of the discussion.

MS Security VP Mike Nash Replies 464

You posted a lot of great questions for Mike Nash last week, and he put a lot of time into answering them. As promised, his answers were not laundered by PR people, which is all too common with "executive" interviews with people from any company. Still, he boosts Microsoft, as you'd expect, since he's a VP there. And obviously, going along with that, he says he likes Microsoft products better than he likes competing ones. But this is still a great look into the way Microsoft views security problems with their products, and what the company is trying to do about them.
