Security

Why Do So Many Sites Have Bad Password Policies? (gatech.edu) 242

"Three out of four of the world's most popular websites are failing to meet minimum requirement standards" for password security, reports Georgia Tech's College of Computing, which means three out of four of the world's most popular websites are "allowing tens of millions of users to create weak passwords."

Using a first-of-its-kind automated tool that can assess a website's password creation policies, researchers also discovered that 12% of websites completely lacked password length requirements. Assistant Professor Frank Li and Ph.D. student Suood Al Roomi in Georgia Tech's School of Cybersecurity and Privacy created the automated assessment tool to explore all sites in the Google Chrome User Experience Report (CrUX), a database of one million websites and pages.

Li and Al Roomi's method of inferring password policies succeeded on over 20,000 sites in the database and showed that many sites:

- Permit very short passwords
- Do not block common passwords
- Impose outdated complexity requirements (such as mandatory special characters)

The researchers also discovered that only a few sites fully follow standard guidelines, while most stick to outdated guidelines from 2004... More than half of the websites in the study accepted passwords with six characters or fewer, with 75% failing to require the recommended eight-character minimum. Around 12% of sites had no length requirements at all, and 30% did not support spaces or special characters. Only 28% of the websites studied enforced a password block list, which means thousands of sites are vulnerable to cyber criminals who might try common passwords to break into users' accounts, a technique known as a password spraying attack.

Georgia Tech describes the new research as "the largest study of its kind." ("The project was 135 times larger than previous works that relied on manual methods and smaller sample sizes.")

"As a security community, we've identified and developed various solutions and best practices for improving internet and web security," said assistant professor Li. "It's crucial that we investigate whether those solutions or guidelines are actually adopted in practice to understand whether security is improving in reality."

The Slashdot community has already noticed the problem, judging by a recent post from eggegick: "Every site I visit has its own idea of the minimum and maximum number of characters, the number of digits, the number of upper/lowercase characters, the number of punctuation characters allowed and even what punctuation characters are allowed and which are not. The limit on password size really torques me, as that suggests they are storing the password (they need to limit storage size) rather than its hash value (fixed size), which is a real security blunder. Also, the stupid dots drive me bonkers, especially when there is no 'unhide' button. For crying out loud, nobody is looking over my shoulder! Make 'unhide' the default."
"The 'dots' are bad security," agrees long-time Slashdot reader Spazmania. "If you're going to obscure the password, you should also obscure the length of the password." But in their comment on the original submission, they point out that there is a standard for passwords from the National Institute of Standards and Technology. Briefly:

* Minimum of 8 characters
* Must allow at least 64 characters
* No constraints on which printing characters can be used (including high Unicode)
* No requirements on which characters must be used, or in what order or proportion

This is expected to be paired with a system which does some additional and critical things:

* Maintain a database of known compromised passwords (e.g. from public password dictionaries) and reject any passwords found in the database.
* Pair the password with a second authentication factor such as a security token or cell phone sms. Require both to log in.
* Limit the number of passwords which can be attempted per time period. At one attempt per second, even the smallest password dictionaries would take hundreds of years to try...
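The checklist above can be sketched as a simple validator. This is a minimal sketch of the NIST-style rules, and the `COMPROMISED` set is a hypothetical stand-in for a real breached-password database like the ones the guidelines reference:

```python
# Sketch of a NIST SP 800-63B-style password check. COMPROMISED is a
# hypothetical stand-in for a database of known-compromised passwords.
COMPROMISED = {"password", "123456", "qwerty", "letmein"}

def check_password(pw: str) -> bool:
    if len(pw) < 8:                  # minimum 8 characters
        return False
    if len(pw) > 64:                 # systems must allow at least 64;
        return False                 # this sketch caps at exactly 64
    if pw.lower() in COMPROMISED:    # reject known-compromised passwords
        return False
    return True                      # no composition rules beyond this
```

Note what is absent: no mandatory digits, uppercase, or punctuation, which is exactly the outdated complexity requirement the study found most sites still impose.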

Someone attempting to brute force a password from outside on a rate-limited system is limited to the rate, regardless of how computing power advances. If the system enforces a rate limit of 1 try per second, the time to crack an 8-character password containing only lower case letters is still more than 6,000 years.
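That figure checks out arithmetically: at one guess per second, exhausting the keyspace of 8 lowercase letters takes 26^8 seconds.

```python
# Back-of-the-envelope check of the 6,000-year claim: an 8-character
# lowercase-only password, rate-limited to 1 guess per second.
keyspace = 26 ** 8                        # 208,827,064,576 candidates
seconds_per_year = 365.25 * 24 * 3600
years_to_exhaust = keyspace / seconds_per_year
print(round(years_to_exhaust))            # roughly 6,600 years
```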

Software

'Make It Real' AI Prototype Turns Drawings Into Working Software (arstechnica.com) 50

An anonymous reader quotes a report from Ars Technica: On Wednesday, a collaborative whiteboard app maker called "tldraw" made waves online by releasing a prototype of a feature called "Make it Real" that lets users draw an image of software and bring it to life using AI. The feature uses OpenAI's GPT-4V API to visually interpret a vector drawing into functioning Tailwind CSS and JavaScript web code that can replicate user interfaces or even create simple implementations of games like Breakout. "I think I need to go lie down," posted designer Kevin Cannon at the start of a viral X thread that featured the creation of functioning sliders that rotate objects on screen, an interface for changing object colors, and a working game of tic-tac-toe. Soon, others followed with demonstrations of drawing a clone of Breakout, creating a working dial clock that ticks, drawing the snake game, making a Pong game, interpreting a visual state chart, and much more.

Tldraw, developed by Steve Ruiz in London, is an open source collaborative whiteboard tool. It offers a basic infinite canvas for drawing, text, and media without requiring a login. Launched in 2021, the project received $2.7 million in seed funding and is supported by GitHub sponsors. When the GPT-4V API launched recently, Ruiz integrated a design prototype called "draw-a-ui" created by Sawyer Hood to bring the AI-powered functionality into tldraw. GPT-4V is a version of OpenAI's large language model that can interpret visual images and use them as prompts. As AI expert Simon Willison explains on X, Make it Real works by "generating a base64 encoded PNG of the drawn components, then passing that to GPT-4 Vision" with a system prompt and instructions to turn the image into a file using Tailwind.
You can experiment with a live demo of Make It Real online. However, running it requires providing your own OpenAI API key, which is a security risk, since you are handing the key to a third-party site.
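The pipeline Willison describes is straightforward to sketch. The request shape below follows OpenAI's vision API as documented around the time of this story; the model name and prompt text are assumptions for illustration, not tldraw's actual values:

```python
import base64

def build_vision_request(png_bytes: bytes, prompt: str) -> dict:
    """Embed a drawn-UI screenshot as a base64 data URI inside a
    GPT-4-with-vision chat request (shape per OpenAI's API docs;
    model name assumed)."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

req = build_vision_request(
    b"\x89PNG...",  # placeholder bytes; a real PNG export goes here
    "Turn this drawing into a single-file Tailwind CSS page.",
)
```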
Security

Highly Invasive Backdoors Hidden in Python Obfuscation Packages, Downloaded by 2,348 Developers (arstechnica.com) 50

The senior security editor at Ars Technica writes: Highly invasive malware targeting software developers is once again circulating in Trojanized code libraries, with the latest ones downloaded thousands of times in the last eight months, researchers said Wednesday.

Since January, eight separate developer tools have contained hidden payloads with various nefarious capabilities, security firm Checkmarx reported. The most recent one was released last month under the name "pyobfgood." Like the seven packages that preceded it, pyobfgood posed as a legitimate obfuscation tool that developers could use to deter reverse engineering and tampering with their code. Once executed, it installed a payload, giving the attacker almost complete control of the developer's machine. Capabilities include:


- Exfiltrate detailed host information
- Steal passwords from the Chrome web browser
- Set up a keylogger
- Download files from the victim's system
- Capture screenshots and record both screen and audio
- Render the computer inoperative by ramping up CPU usage, inserting a batch script in the startup directory to shut down the PC, or forcing a BSOD error with a Python script
- Encrypt files, potentially for ransom
- Deactivate Windows Defender and Task Manager
- Execute any command on the compromised host


In all, pyobfgood and the previous seven tools were installed 2,348 times. They targeted developers using the Python programming language... Downloads of the package came primarily from the US (62%), followed by China (12%) and Russia (6%).

Ars Technica concludes that "The never-ending stream of attacks should serve as a cautionary tale underscoring the importance of carefully scrutinizing a package before allowing it to run."
Python

Experimental Project Attempts a Python Virtual Shell for Linux (cjshayward.com) 62

Long-time Slashdot reader CJSHayward shares "an attempt at Python virtual shell."

The home-brewed project "mixes your native shell with Python with the goal of letting you use your regular shell but also use Python as effectively a shell scripting language, as an alternative to your shell's built-in scripting language... I invite you to explore and improve it!"

From the web site: The Python Virtual Shell (pvsh or 'p' on the command line) lets you mix zsh / bash / etc. built-in shell scripting with slightly modified Python scripting. It's kind of like Brython [a Python implementation for client-side web programming], but for the Linux / Unix / Mac command line...

The core concept is that all Python code is indented with tabs, with an extra tab at the beginning to mark Python code, and all shell commands (including some shell builtins) have zero tabs of indentation. They can be mixed line-by-line, offering an opportunity to use built-in zsh, bash, etc. scripting or Python scripting as desired.

The Python support is incomplete; for example, it doesn't allow splitting a statement across multiple lines. Nonetheless, this offers a tool to fuse shell- and Python-based interactions from the Linux / Unix / Mac command line.
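A hypothetical pvsh snippet illustrating the tab convention described above (the exact builtins and available imports depend on the project itself, so treat this purely as a sketch of the indentation rule):

```
echo "zero indentation: runs in your native shell"
	for name in ["alpha", "beta"]:
		print(name.upper())
ls -l
```

Lines with no indentation go to the shell; a leading tab switches to Python, with further tabs providing Python's own block indentation.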

Google

Inside Google's Plan To Stop Apple From Getting Serious About Search (nytimes.com) 22

Google has worried for years that Apple would one day expand its internet search technology, and has been working on ways to prevent that from happening. From a report: For years, Google watched with increasing concern as Apple improved its search technology, not knowing whether its longtime partner and sometimes competitor would eventually build its own search engine. Those fears ratcheted up in 2021, when Google paid Apple around $18 billion to keep Google's search engine the default selection on iPhones, according to two people with knowledge of the partnership, who were not authorized to discuss it publicly. The same year, Apple's iPhone search tool, Spotlight, began showing users richer web results like those they could have found on Google.

Google quietly planned to put a lid on Apple's search ambitions. The company looked for ways to undercut Spotlight by producing its own version for iPhones and to persuade more iPhone users to use Google's Chrome web browser instead of Apple's Safari browser, according to internal Google documents reviewed by The New York Times. At the same time, Google studied how to pry open Apple's control of the iPhone by leveraging a new European law intended to help small companies compete with Big Tech. Google's anti-Apple plan illustrated the importance that its executives placed on maintaining dominance in the search business. It also provides insight into the company's complex relationship with Apple, a competitor in consumer gadgets and software that has been an instrumental partner in Google's mobile ads business for more than a decade.

AI

Getty Images Built a 'Socially Responsible' AI Tool That Rewards Artists (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: Getty Images CEO Craig Peters told the Verge that he has found a solution to one of AI's biggest copyright problems: creators suing because AI models were trained on their original works without consent or compensation. To prove it's possible for AI makers to respect artists' copyrights, Getty built an AI tool using only licensed data that's designed to reward creators more and more as the tool becomes more popular over time. "I think a world that doesn't reward investment in intellectual property is a pretty sad world," Peters told The Verge. The conversation happened at Vox Media's Code Conference 2023, with Peters explaining why Getty Images -- which manages "the world's largest privately held visual archive" -- has a unique perspective on this divisive issue.

In February, Getty Images sued Stability AI over copyright concerns regarding the AI company's image generator, Stable Diffusion. Getty alleged that Stable Diffusion was trained on 12 million Getty images and even imitated Getty's watermark -- controversially seeming to add a layer of Getty's authenticity to fake AI images. Now, Getty has rolled out its own AI image generator that has been trained in ways that are unlike most of the popular image generators out there. Peters told The Verge that because of Getty's ongoing mission to capture the world's most iconic images, "Generative AI by Getty Images" was intentionally designed to avoid major copyright concerns swirling around AI images -- and compensate Getty creators fairly.

Rather than crawling the web for data to feed its AI model, Getty's tool is trained exclusively on images that Getty owns the rights to, Peters said. The tool was created out of rising demand from Getty Images customers who want access to AI generators that don't carry copyright risks. [...] With that as the goal, Peters told Code Conference attendees that the tool is "entirely commercially safe" and "cannot produce third-party intellectual property" or deepfakes because the AI model would have no references from which to produce such risky content. Getty's AI tool "doesn't know what the Pope is," Peters told The Verge. "It doesn't know what [Balenciaga] is, and it can't produce a merging of the two." Peters also said that if there are any lawsuits over AI images generated by Getty, then Getty will cover any legal costs for customers. "We actually put our indemnification around that so that if there are any issues, which we're confident there won't be, we'll stand behind that," Peters said.
When asked how Getty creators will be paid for AI training data, Peters said that there currently isn't a tool for Getty to assess which artist deserves credit every time an AI image is generated. "Instead, Getty will rely on a fixed model that Peters said determines 'what proportion of the training set does your content represent? And then, how has that content performed in our licensing world over time? It's kind of a proxy for quality and quantity. So, it's kind of a blend of the two,'" reports Ars.

"Importantly, Peters suggested that Getty isn't married to using this rewards system and would adapt its methods for rewarding creators by continually monitoring how customers are using the AI tool."
Google

Web Sites Can Now Choose to Opt Out of Google Bard and Future AI Models (mashable.com) 35

"We're committed to developing AI responsibly," says Google's VP of Trust, "guided by our AI principles and in line with our consumer privacy commitment. However, we've also heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases."

And so, Mashable reports, "Websites can now choose to opt out of Google Bard, or any other future AI models that Google makes." Google made the announcement on Thursday introducing a new tool called Google-Extended that will allow sites to be indexed by crawlers (the bots that create entries for search engines) while simultaneously not having their data accessed to train future AI models. For website administrators, this will be an easy fix, available through robots.txt, the text file that tells web crawlers which parts of a site they may access...

OpenAI, the maker of ChatGPT, recently launched a web crawler of its own, but included instructions on how to block it. Publications like Medium, the New York Times, CNN and Reuters have notably done so.

As Google's blog post explains, "By using Google-Extended to control access to content on a site, a website administrator can choose whether to help these AI models become more accurate and capable over time..."
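Per Google's documentation, the opt-out is a two-line robots.txt rule using the new Google-Extended user-agent token:

```
User-agent: Google-Extended
Disallow: /
```

Regular Googlebot crawling for search indexing is unaffected; only use of the site's content for Bard and future model training is blocked.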
Linux

Linux Interoperability Is Maturing Fast Thanks To a Games Console (theregister.com) 41

Liam Proven writes via The Register: Steam OS is the Arch-based distro for a handheld Linux games console, and Valve is aggressively pushing Linux's usability and Windows interoperability for the device. Two unusual companies, Valve Software and Igalia, are working together to improve the Linux-based OS of the Steam Deck handheld games console. The device runs a Linux distro called Steam OS 3.0, but this is a totally different distro from the original Steam OS it announced a decade ago. Steam OS 1 and 2 were based on Debian, but Steam OS 3 is based on Arch Linux, as Igalia developer Alberto Garcia described in a talk entitled How SteamOS is contributing to the Linux ecosystem.

He explained that although Steam OS is built from some fairly standard components -- the normal filesystem hierarchy, GNU user space, systemd and dbus -- Steam OS has quite a few unique features. It has two distinct user interfaces: by default, it starts with the Steam games launcher, but users can also choose an option called Switch to Desktop, which results in a regular KDE Plasma desktop, with the ability to install anything: a web browser, normal Linux tools, and non-Steam games.

Obviously, though, Steam OS's raison d'etre is to run Steam games, and most of those are Windows games which will never get native Linux versions. Valve's solution is Proton, an open-source tool to run Windows games on Linux. It's formed from a collection of different FOSS packages, notably: [Wine, DXVK, VKD3D-Proton, and GStreamer]. The result is a remarkable degree of compatibility for some of the most demanding Windows apps around [...].
You can view Garcia's 49-page presentation here (PDF).
Google

Google Says It's No. 1 Search Tool Because Users Prefer It to Rivals (bloomberg.com) 170

Companies choose Alphabet's Google as the default search engine for their browsers and smartphones because it is the best one, and not because of a lack of competition, a Google lawyer said Tuesday at the start of a high-stakes antitrust trial in Washington. From a report: Consumers use Google "because it delivers value to them, not because they have to," John Schmidtlein, a partner at Williams & Connolly LLP who is representing the company, said during his opening statements on the first day of the trial. "Users today have more search options and ways to access information online than ever before."

Schmidtlein pushed back on claims by US Justice Department antitrust enforcers that Google has used its market power -- and billions of dollars in exclusive deals with web browsers -- to illegally block rivals. Users have choices, and it's easy to switch, he said. For example, Microsoft pre-selects its own search engine, Bing, on Windows PCs, yet most PC users switch to Google because it's a better product, he said. Web browsers offered by Apple and Mozilla, which makes Firefox, have long chosen a default search engine in exchange for a revenue-share that helps pay for innovations, Schmidtlein said.

AI

Gannett Halts AI-Written Sports Recaps After Readers Mocked the Stories (cnn.com) 51

CNN reports that newspaper chain Gannett "has paused the use of an AI tool to write high school sports dispatches after the technology made several major flubs in articles in at least one of its papers." In one notable example, preserved by the Internet Archive's Wayback Machine, the story began: "The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday...." The reports were mocked on social media for being repetitive, lacking key details, using odd language and generally sounding like they'd been written by a computer with no actual knowledge of sports.
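The failure mode here is a fill-in-the-blank template whose tokens were never substituted. A minimal sketch with hypothetical placeholder names: Python's `str.format_map` raises an error on missing fields, which would at least have held the story for review instead of publishing the raw token:

```python
# Hypothetical recap template, sketching the unfilled-placeholder
# failure seen in the published stories.
template = "The {WINNING_TEAM} defeated the {LOSING_TEAM} {SCORE}."
data = {"WINNING_TEAM": "Worthington Christian"}  # other fields missing

try:
    story = template.format_map(data)  # KeyError on any missing field
except KeyError:
    story = None                       # hold for human review
```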

CNN identified several other local Gannett outlets, including the Louisville Courier Journal, AZ Central, Florida Today and the Milwaukee Journal Sentinel, that have all published similar stories written by LedeAI in recent weeks. Many of the reports feature identical language, describing "high school football action," noting when one team "took victory away from" another and describing "cruise-control" wins. In many cases, the stories also repeated the date of the games being covered multiple times in just a few paragraphs.

Gannett has paused its experiment with LedeAI in all of its local markets that had been using the service, according to the company. The pause was earlier reported by Axios... The AI tool debacle comes after Gannett axed hundreds of jobs in December when it laid off 6% of its news division.

From Axios's report: One such Dispatch article from Aug. 18 was blasted on social media for its robotic style, lack of player names and use of awkward phrases like "close encounter of the athletic kind." "I feel like I was there!" The Athletic senior columnist Jon Greenberg posted sarcastically.
More from the Washington Post: Another story about a game between the Wyoming Cowboys and Ross Rams described a scoreboard that "was in hibernation in the fourth quarter." When Ayersville High School staged a late comeback in another game, a write-up of their win read: "The Pilots avoided the brakes and shifted into victory gear...."

In a statement, Gannett called the deployment of Lede AI an "experiment" in automation to aid its journalists and add content for readers... LedeAI CEO Jay Allred said in a statement to The Post that he believes automation is part of the future of local newsrooms and that LedeAI allows reporters and editors to focus on "journalism that drives impact in the communities they serve."

Microsoft

After 28 Years, Microsoft Announces it Will Remove WordPad From Windows (thurrott.com) 120

"Microsoft has quietly revealed that WordPad, the basic word processor that's been included with Windows since 1995, is being retired," reports Windows blogger Paul Thurrott: "WordPad is no longer being updated and will be removed in a future release of Windows," the Deprecated features for Windows client page on Microsoft Learn notes in a September 1, 2023 addition. "We recommend Microsoft Word for rich text documents like .doc and .rtf and Windows Notepad for plain text documents like .txt...."

Microsoft's advice to use Microsoft Word instead seems a bit off-base, given that Word is a paid product and RTF is rarely used these days; then again, anyone can access the web version of Word for free if needed.

The actual date of removal is unclear. But WordPad isn't the only thing Microsoft is removing from Windows, Neowin notes: The company recently turned off Cortana, its neglected voice assistant, and announced the end of Microsoft Support Diagnostic Tool (MSDT). Also, Microsoft will soon disable old Transport Layer Security protocols to make Windows 11 more secure.
Desktops (Apple)

An Apple Malware-Flagging Tool Is 'Trivially' Easy To Bypass (wired.com) 9

One of the Mac's built-in malware detection tools may not be working quite as well as you think. From a report: At the Defcon hacker conference in Las Vegas, longtime Mac security researcher Patrick Wardle presented findings today about vulnerabilities in Apple's macOS Background Task Management mechanism, which could be exploited to bypass and, therefore, defeat the company's recently added monitoring tool. There's no foolproof method for catching malware on computers with perfect accuracy because, at their core, malicious programs are just software, like your web browser or chat app. It can be difficult to tell the legitimate programs from the transgressors. So operating system makers like Microsoft and Apple, as well as third-party security companies, are always working to develop new detection mechanisms and tools that can spot potentially malicious software behavior in new ways.

Apple's Background Task Management tool focuses on watching for software "persistence." Malware can be designed to be ephemeral and operate only briefly on a device or until the computer restarts. But it can also be built to establish itself more deeply and "persist" on a target even when the computer is shut down and rebooted. Lots of legitimate software needs persistence so all of your apps and data and preferences will show up as you left them every time you turn on your device. But if software establishes persistence unexpectedly or out of the blue, it could be a sign of something malicious. With this in mind, Apple added Background Task Manager in macOS Ventura, which launched in October 2022, to send notifications both directly to users and to any third-party security tools running on a system if a "persistence event" occurs. This way, if you know you just downloaded and installed a new application, you can disregard the message. But if you didn't, you can investigate the possibility that you've been compromised.

The Internet

'Tor's Shadowy Reputation Will Only End If We All Use It' (engadget.com) 65

Katie Malone writes via Engadget: "Tor" evokes an image of the dark web; a place to hire hitmen or buy drugs that, at this point, is overrun by feds trying to catch you in the act. The reality, however, is a lot more boring than that -- but it's also more secure. The Onion Router, now called Tor, is a privacy-focused web browser run by a nonprofit group. You can download it for free and use it to shop online or browse social media, just like you would on Chrome or Firefox or Safari, but with additional access to unlisted websites ending in .onion. This is what people think of as the "dark web," because the sites aren't indexed by search engines. But those sites aren't an inherently criminal endeavor.

"This is not a hacker tool," said Pavel Zoneff, director of strategic communications at The Tor Project. "It is a browser just as easy to use as any other browser that people are used to." That's right, despite common misconceptions, Tor can be used for any internet browsing you usually do. The key difference with Tor is that the network hides your IP address and other system information for full anonymity. This may sound familiar, because it's how a lot of people approach VPNs, but the difference is in the details. VPNs are just encrypted tunnels hiding your traffic from one hop to another. The company behind a VPN can still access your information, sell it or pass it along to law enforcement. With Tor, there's no link between you and your traffic, according to Jed Crandall, an associate professor at Arizona State University. Tor is built in the "higher layers" of the network and routes your traffic through separate tunnels, instead of a single encrypted tunnel. While the first tunnel may know some personal information and the last one may know the sites you visited, there is virtually nothing connecting those data points because your IP address and other identifying information are bounced from server to server into obscurity.
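The layered-tunnel idea Crandall describes can be sketched as nested wrapping, where each relay peels exactly one layer and learns only the next hop. Base64 below is a toy stand-in for real per-hop encryption (Tor actually negotiates separate symmetric keys with each relay), and the hop names are purely illustrative:

```python
import base64

def wrap(payload: bytes, hops: list[str]) -> bytes:
    # Wrap for the exit relay first, so the first relay in the
    # circuit peels the outermost layer.
    for hop in reversed(hops):
        payload = base64.b64encode(hop.encode() + b"|" + payload)
    return payload

def peel(onion: bytes) -> tuple[str, bytes]:
    # Remove one layer: reveal this relay's label and the inner blob.
    hop, _, inner = base64.b64decode(onion).partition(b"|")
    return hop.decode(), inner

onion = wrap(b"GET /", ["guard", "middle", "exit"])
first_hop, rest = peel(onion)  # the guard sees only "guard" + a blob
```

The point of the sketch: no single relay ever holds both the sender's identity and the final payload, which is the property that distinguishes Tor from a single-tunnel VPN.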

Accessing unindexed websites adds extra perks, like secure communication. While a platform like WhatsApp offers encrypted conversations, there could be traces left on the device that the conversation happened if it's ever investigated, according to Crandall. Tor's communication tunnels are secure and make it much harder even to establish that the conversation ever happened. Other use cases may include keeping the identities of sensitive populations like undocumented immigrants anonymous, trying to unionize a workplace without the company shutting it down, victims of domestic violence looking for resources without their abuser finding out or, as Crandall said, wanting to make embarrassing Google searches without related targeted ads following you around forever.

AI

Apple Tests 'Apple GPT,' Develops Generative AI Tools To Catch OpenAI (bloomberg.com) 32

Apple is quietly working on artificial intelligence tools that could challenge those of OpenAI, Alphabet's Google and others, but the company has yet to devise a clear strategy for releasing the technology to consumers. From a report: The iPhone maker has built its own framework to create large language models -- the AI-based systems at the heart of new offerings like ChatGPT and Google's Bard -- according to people with knowledge of the efforts. With that foundation, known as "Ajax," Apple also has created a chatbot service that some engineers call "Apple GPT."

In recent months, the AI push has become a major effort for Apple, with several teams collaborating on the project, said the people, who asked not to be identified because the matter is private. The work includes trying to address potential privacy concerns related to the technology. [...] Apple employees say the company's tool essentially replicates Bard, ChatGPT and Bing AI, and doesn't include any novel features or technology. The system is accessible as a web application and has a stripped-down design not meant for public consumption. As such, Apple has no current plans to release it to consumers, though it is actively working to improve its underlying models.

Programming

Wix's New Tool Can Create Entire Websites from Prompts (techcrunch.com) 35

Wix, a longtime fixture of the web building space, is betting that today's customers don't particularly care to spend time customizing every aspect of their site's appearance. TechCrunch: The company's new AI Site Generator tool, announced today, will let Wix users describe their intent and generate a website complete with a homepage, inner pages and text and images -- as well as business-specific sections for events, bookings and more. Avishai Abrahami, Wix's co-founder and CEO, says that the goal was to provide customers with "real value" as they build their sites and grow their businesses. [...] AI Site Generator takes several prompts -- any descriptions of sites -- and uses a combination of in-house and third-party AI systems to create the envisioned site. In a chatbot-like interface, the tool asks a series of questions about the nature of the site and business, attempting to translate this into a custom web template. ChatGPT generates the text for the website while Wix's AI creates the site design and images.
Privacy

Bangladesh Government Website Leaks Citizens' Personal Data (techcrunch.com) 3

A Bangladeshi government website leaked the personal information of citizens, including full names, phone numbers, email addresses and national ID numbers. TechCrunch reports: Viktor Markopoulos, a researcher who works for Bitcrack Cyber Security, said he accidentally discovered the leak on June 27, and shortly after contacted the Bangladeshi e-Government Computer Incident Response Team (CERT). He said the leak includes data of millions of Bangladeshi citizens. TechCrunch was able to verify that the leaked data is legitimate by using a portion to query a public search tool on the affected government website. By doing this, the website returned other data contained in the leaked database, such as the name of the person who applied to register, as well as -- in some cases -- the name of their parents. We attempted this with 10 different sets of data, which all returned correct data.

TechCrunch is not naming the government website because the data is still available online, according to Markopoulos, and we haven't heard back from any of the Bangladeshi government organizations that we emailed asking for comment and alerting of the data exposure. In Bangladesh, every citizen aged 18 and older is issued a National Identity Card, which assigns a unique ID to every citizen. The card is mandatory and gives citizens access to several services, such as getting a driver's license, passport, buying and selling land, opening a bank account, and others.

Markopoulos said finding the data "was too easy." "It just appeared as a Google result and I wasn't even intending on finding it. I was Googling an SQL error and it just popped up as the second result," he told TechCrunch, referring to SQL, a language designed for managing data in a database. The exposure of email addresses, phone numbers and national ID card numbers is bad on its own, but Markopoulos said that having this type of information could also "be used in the web application to access, modify, and/or delete the applications as well as view the Birth Registration Record Verification."

Businesses

FBI Hired Social Media Surveillance Firm That Labeled Black Lives Matter Organizers 'Threat Actors' (theintercept.com) 151

The FBI's primary tool for monitoring social media threats is the same contractor that labeled peaceful Black Lives Matter protest leaders DeRay McKesson and Johnetta Elzie as "threat actors" requiring "continuous monitoring" in 2015. From a report: The contractor, ZeroFox, identified McKesson and Elzie as posing a "high severity" physical threat, despite including no evidence that McKesson or Elzie were suspected of criminal activity. "It's been almost a decade since the referenced 2015 incident and in that time we have invested heavily in fine-tuning our collections, analysis and labeling of alerts," Lexie Gunther, a spokesperson for ZeroFox, told The Intercept, "including the addition of a fully managed service that ensures human analysis of every alert that comes through the ZeroFox Platform to ensure we are only alerting customers to legitimate threats and are labeling those threats appropriately."

The FBI, which declined to comment, hired ZeroFox in 2021, a fact referenced in the new 106-page Senate report about the intelligence community's failure to anticipate the January 6, 2021, uprising at the U.S. Capitol. The June 27 report, produced by Democrats on the Senate Homeland Security Committee, shows the bureau's broad authorities to surveil social media content -- authorities the FBI previously denied it had, including before Congress. It also reveals the FBI's reliance on outside companies to do much of the filtering for them. The FBI's $14 million contract to ZeroFox for "FBI social media alerting" replaced a similar contract with Dataminr, another firm with a history of scrutinizing racial justice movements. Dataminr, like ZeroFox, subjected the Black Lives Matter movement to web surveillance on behalf of the Minneapolis Police Department, previous reporting by The Intercept has shown.

Education

Schools Say US Teachers' Retirement Fund Was Breached By MOVEit Hackers (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Two U.S. schools have confirmed that TIAA, a nonprofit organization that provides financial services for individuals in academic fields, has been caught up in the mass-hacks targeting MOVEit file transfer tools. Middlebury College in Vermont and Trinity College in Connecticut both released security notices confirming they experienced data breaches as a result of a security incident at the Teachers Insurance and Annuity Association of America, or TIAA. According to its website, TIAA serves more than five million active and retired employees participating at more than 15,000 institutions and manages $1.3 trillion in assets in more than 50 countries.

Both of the security notices confirm that TIAA was affected by hackers' widespread exploitation of a flaw in MOVEit Transfer, an enterprise file transfer tool developed by Progress Software. The mass-hack has so far claimed more than 160 victims, according to Emsisoft threat analyst Brett Callow, including the U.S. Department of Health and Human Services (HHS) and Siemens Energy. Only 12 of these victims have confirmed the number of people affected, which already adds up to more than 16 million individuals.

While TIAA notified affected schools of its security incident, the organization has yet to publicly acknowledge the incident. In response to a Twitter user questioning the organization's silence, TIAA responded saying that its offices were closed. It's not yet known how many organizations have been impacted as a result of the cyberattack on TIAA. TIAA has not yet been listed on the dark web leak site of the Russia-linked Clop ransomware gang, which has claimed responsibility for the ongoing MOVEit cyberattacks.

AI

Dropbox's AI Tools Can Help You Find Your Stuff -- From Everywhere On the Internet (theverge.com) 7

Dropbox is introducing two new AI-powered services into its platform. One is a tool for summarizing and querying documents, while the other is a universal search engine that can search your files not only in Dropbox but across your other connected apps and the web. "It's called Dash and comes from Dropbox's 2021 acquisition of a company called Command E," reports The Verge. From the report: The idea behind Dash, Dropbox CEO Drew Houston tells me, is that your stuff isn't all files and folders anymore, and so Dropbox can't be, either. "What used to be 100 files or icons on your desktop," he says, "is now 100 tabs in your browser, with your Google Docs and your Airtables and Figmas and everything else." All the tools are better, but they resist useful organization. "So you're just like, okay, I think someone sent that to me. Was it in an email? Was it Slack? Was it a text? Maybe it was pasted in the Zoom chat during the meeting." Dash aims to be the "Google for your personal stuff" app that so many others have tried and failed to pull off.

The Dash app comes in two parts. There's a desktop app, which you can invoke from anywhere with the CMD-E keyboard shortcut, that acts as a universal search for everything on your device and in all your connected apps. (If you've ever used an app like Raycast or Alfred as a launcher, Dash will look very familiar.) There's also a browser extension, which offers the same search but also turns your new tab page into a curated list of your stuff. One section of the Dash start page might include the docs Dropbox thinks you'll need for the meeting starting in five minutes; another might pull together a bunch of similar documents you've been working on recently into what Dropbox calls a "Stack." You can also create your own stacks, and as you create files and even browse the internet, Dash will suggest files and links you might add. [...]
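The launcher behavior described above can be pictured as merging ranked matches from several per-app indexes into one list. A toy sketch with made-up source names and a naive scoring rule; none of this reflects Dash's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Result:
    source: str   # e.g. "gdocs", "slack" -- illustrative source names
    title: str
    score: float  # higher = better match

def universal_search(query, sources):
    """Merge matches from several 'connected app' result lists into one
    ranked list, the way a launcher-style universal search presents them.
    'sources' maps a source name to a list of item titles (hypothetical data).
    """
    q = query.lower()
    hits = []
    for name, titles in sources.items():
        for title in titles:
            if q in title.lower():
                # Naive scoring: a match earlier in the title ranks higher
                hits.append(Result(name, title, 1.0 / (1 + title.lower().index(q))))
    return sorted(hits, key=lambda r: r.score, reverse=True)
```

A real implementation would query each connector's API concurrently and rank with learned signals (recency, ownership, upcoming meetings), but the merge-and-rank shape is the same.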

As of today, Dropbox AI is available to all Pro customers and a few teams, and there's a waitlist to get into the Dash beta as well. The next phase for Dropbox, Houston says, is to learn what people want and how they use the products. He says he's happy to be somewhat conservative at first in the name of not making huge mistakes -- you really can't have an AI hallucinating information out of your most sensitive work docs -- but he sees this stuff getting better fast.

AI

Is AI Making Silicon Valley Rich on Other People's Work? (mercurynews.com) 111

Slashdot reader rtfa0987 spotted this on the front page of the San Jose Mercury News. "Silicon Valley is poised once again to cash in on other people's products, making a data grab of unprecedented scale that has already spawned lawsuits and congressional hearings. Chatbots and other forms of generative artificial intelligence that burst onto the technology scene in recent months are fed vast amounts of material scraped from the internet — books, screenplays, research papers, news stories, photos, art, music, code and more — to produce answers, imagery or sound in response to user prompts... But a thorny, contentious and highly consequential issue has arisen: A great deal of the bots' fodder is copyrighted property...

The new AI's intellectual-property problem goes beyond art into movies and television, photography, music, news media and computer coding. Critics worry that major players in tech, by inserting themselves between producers and consumers in commercial marketplaces, will suck out the money and remove financial incentives for producing TV scripts, artworks, books, movies, music, photography, news coverage and innovative software. "It could be catastrophic," said Danielle Coffey, CEO of the News/Media Alliance, which represents nearly 2,000 U.S. news publishers, including this news organization. "It could decimate our industry."

The new technology, as happened with other Silicon Valley innovations, including internet search, social media and food delivery, is catching on among consumers and businesses so quickly that it may become entrenched -- and beloved by users -- long before regulators and lawmakers gather the knowledge and political will to impose restraints and mitigate harms. "We may need legislation," said Congresswoman Zoe Lofgren, D-San Jose, who as a member of the House Judiciary Committee heard testimony on copyright and generative AI last month. "Content creators have rights and we need to figure out a way how those rights will be respected...."

Furor over the content grabbing is surging. Photo-sales giant Getty is suing Stability AI. Striking Hollywood screenwriters last month raised concerns that movie studios will start using chatbot-written scripts fed on writers' earlier work. The record industry has lodged a complaint with federal authorities over copyrighted music being used to train AI.

The article includes some unique perspectives:
  • There's a technical solution being proposed by the software engineer-CEO of Dazzle Labs, a startup building a platform for controlling personal data. The Mercury News summarizes it as "content producers could annotate their work with conditions for use that would have to be followed by companies crawling the web for AI fodder."
  • Santa Clara University law school professor Eric Goldman "believes the law favors use of copyrighted material for training generative AI. 'All works build upon precedent works. We are all free to take pieces of precedent works. What generative AI does is accelerate that process, but it's the same process. It's all part of an evolution of our society's storehouse of knowledge....'"
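The annotation proposal above resembles an existing crawler-policy mechanism: robots.txt, where a site declares which user-agents may fetch which paths. A sketch using Python's standard-library robots.txt parser; `ExampleAIBot` is a hypothetical AI-crawler user-agent invented for illustration, not a real one:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt-style policy: the site bars a (hypothetical) AI crawler
# from its articles while leaving them open to everyone else.
policy = """\
User-agent: ExampleAIBot
Disallow: /articles/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(policy.splitlines())

# The AI crawler is barred from /articles/; other agents are not
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/story.html"))
print(parser.can_fetch("OtherBot", "https://example.com/articles/story.html"))
```

Robots.txt is advisory, which is exactly the gap the Dazzle Labs proposal targets: conditions for use would need some enforcement or licensing mechanism behind them, not just a declaration crawlers may ignore.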
