AI

GitHub Copilot Labs Adds Photoshop-Style 'Brushes' for ML-Powered Code Modifying (githubnext.com) 56

"Can editing code feel more tactile, like painting with Photoshop brushes?"

Researchers at GitHub Next asked that question this week — and then supplied the answer. "We added a toolbox of brushes to our Copilot Labs Visual Studio Code extension that can modify your code.... Just select a few lines, choose your brush, and see your code update."

The tool's web page includes interactive before-and-after examples demonstrating:
  • Add Types brush
  • Fix Bugs brush
  • Add Debugging Statements brush
  • Make More Readable brush
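To picture what a brush does, here's a hypothetical before/after in the spirit of the "Add Types" brush. This example is invented for illustration (shown with Python type hints rather than actual Copilot Labs output):

```python
# Hypothetical before/after of the kind of edit an "Add Types" brush makes.
# "Before": an untyped helper the developer selects.
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

# "After": the same helper, with inferred type annotations added by the brush.
def clamp_typed(value: float, lo: float, hi: float) -> float:
    return max(lo, min(value, hi))

print(clamp(12, 0, 10))               # 10
print(clamp_typed(-3.5, 0.0, 10.0))   # 0.0
```

The behavior is unchanged; only the annotations are new, which is the point of a brush: a localized, reviewable transformation of the lines you selected.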

And last month Microsoft's principal program manager for browser tools shared an animated GIF showing all the brushes in action.

"In the future, we're interested in adding more useful brushes, as well as letting developers store their own custom brushes," adds this week's announcement. "As we explore enhancing developers' workflows with Machine Learning, we're focused on how to empower developers, instead of automating them. This was one of many explorations we have in the works along those lines."

It's ultimately grafting an incredibly easy interface onto "ML-powered code modification", writes Visual Studio Magazine, noting that "The bug-fixing brush, for example, can fix a simple typo, changing a variable name from the incorrect 'low' to the correct 'lo'....

"All of the above brushes and a few others have been added to the Copilot Labs brushes toolbox, which is available for anyone with a GitHub Copilot license, costing $10 per month or $100 per year.... At the time of this writing, the extension has been installed 131,369 times, earning a perfect 5.0 rating from six reviewers."


AI

Anthropic's Claude Improves On ChatGPT But Still Suffers From Limitations (techcrunch.com) 33

An anonymous reader quotes a report from TechCrunch: Anthropic, the startup co-founded by ex-OpenAI employees that's raised over $700 million in funding to date, has developed an AI system similar to OpenAI's ChatGPT that appears to improve upon the original in key ways. Called Claude, Anthropic's system is accessible through a Slack integration as part of a closed beta. Claude was created using a technique Anthropic developed called "constitutional AI." As the company explains in a recent Twitter thread, "constitutional AI" aims to provide a "principle-based" approach to aligning AI systems with human intentions, letting AI similar to ChatGPT respond to questions using a simple set of principles as a guide.

To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of "constitution" (hence the name "constitutional AI"). The principles haven't been made public, but Anthropic says they're grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice). Anthropic then had an AI system -- not Claude -- use the principles for self-improvement, writing responses to a variety of prompts (e.g., "compose a poem in the style of John Keats") and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude. Claude, otherwise, is essentially a statistical tool to predict words -- much like ChatGPT and other so-called language models. Fed an enormous number of examples of text from the web, Claude learned how likely words are to occur based on patterns such as the semantic context of surrounding text. As a result, Claude can hold an open-ended conversation, tell jokes and wax philosophic on a broad range of subjects. [...]
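The critique-and-curate loop described above can be sketched very loosely in a few lines. The principles and scoring below are invented stand-ins for illustration, not Anthropic's actual (unpublished) constitution:

```python
# Toy sketch of the "constitutional AI" curation step: generate candidate
# responses, score each against a small set of principles, keep the best.
# These toy principles are invented stand-ins, not Anthropic's real ones.

PRINCIPLES = [
    lambda text: "harm" not in text,   # nonmaleficence (toy check)
    lambda text: len(text) > 0,        # actually respond to the prompt
]

def constitution_score(text):
    """Count how many toy principles a candidate response satisfies."""
    return sum(1 for principle in PRINCIPLES if principle(text))

def curate(candidates):
    """Keep the candidate most consistent with the 'constitution'."""
    return max(candidates, key=constitution_score)

best = curate(["", "here is how to cause harm", "a poem in the style of Keats"])
print(best)  # "a poem in the style of Keats"
```

In the real system this curation happens at scale across thousands of prompts, and the curated outputs are distilled into a model that trains Claude; the sketch only shows the shape of the selection step.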

So what's the takeaway? Judging by secondhand reports, Claude is a smidge better than ChatGPT in some areas, particularly humor, thanks to its "constitutional AI" approach. But if the limitations are anything to go by, language and dialogue is far from a solved challenge in AI. Barring our own testing, some questions about Claude remain unanswered, like whether it regurgitates the information -- true and false, and inclusive of blatantly racist and sexist perspectives -- it was trained on as often as ChatGPT. Assuming it does, Claude is unlikely to sway platforms and organizations from their present, largely restrictive policies on language models. Anthropic says that it plans to refine Claude and potentially open the beta to more people down the line. Hopefully, that comes to pass -- and results in more tangible, measurable improvements.

AI

Top AI Conference Bans Use of ChatGPT and AI Language Tools To Write Academic Papers (theverge.com) 64

One of the world's most prestigious machine learning conferences has banned authors from using AI tools like ChatGPT to write scientific papers, triggering a debate about the role of AI-generated text in academia. From a report: The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis." The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference's organizers responded by publishing a longer statement explaining their thinking.

According to the ICML, the rise of publicly accessible AI language models like ChatGPT -- a general purpose AI chatbot that launched on the web last November -- represents an "exciting" development that nevertheless comes with "unanticipated consequences [and] unanswered questions." The ICML says these include questions about who owns the output of such systems (they are trained on public data, which is usually collected without consent and sometimes regurgitate this information verbatim) and whether text and images generated by AI should be "considered novel or mere derivatives of existing work."

Google

Google Debuts OSV-Scanner, a Go Tool For Finding Security Holes in Open Source (theregister.com) 16

Google this week released OSV-Scanner -- an open source vulnerability scanner linked to the OSV.dev database that debuted last year. From a report: Written in the Go programming language, OSV-Scanner is designed to scan open source applications to assess the security of any incorporated dependencies -- software libraries that get added to projects to provide pre-built functions so developers don't have to recreate those functions on their own. Modern applications can have a lot of dependencies. For example, researchers from Mozilla and Concordia University in Canada recently created a single-page web application with the React framework using the create-react-app command. The result was a project with seven runtime dependencies and nine development dependencies.

But each of these direct dependencies had other dependencies, known as transitive dependencies. The react package includes loose-envify as a transitive dependency -- one that itself depends on other libraries. All told, this basic single-page "Hello world" app required a total of 1,764 dependencies. As Rex Pan, a software engineer on Google's Open Source Security Team, observed on Tuesday in a blog post, vetting thousands of dependencies isn't something developers can do on their own.
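The transitive blow-up is easy to reproduce with a toy dependency graph. The graph below is invented for illustration (real names and counts come from a lockfile such as package-lock.json, which encodes the same shape):

```python
# Sketch of why transitive dependencies balloon: each direct dependency pulls
# in its own dependencies, recursively. This tiny graph is invented.
DEP_GRAPH = {
    "my-app": ["react", "react-dom"],
    "react": ["loose-envify"],
    "react-dom": ["react", "scheduler"],
    "loose-envify": ["js-tokens"],
    "scheduler": ["loose-envify"],
    "js-tokens": [],
}

def all_dependencies(package, graph, seen=None):
    """Collect every direct and transitive dependency of `package`."""
    if seen is None:
        seen = set()
    for dep in graph.get(package, []):
        if dep not in seen:
            seen.add(dep)
            all_dependencies(dep, graph, seen)
    return seen

print(sorted(all_dependencies("my-app", DEP_GRAPH)))
# ['js-tokens', 'loose-envify', 'react', 'react-dom', 'scheduler']
```

Two direct dependencies already drag in five packages here; scale the same recursion up and the 1,764 figure from the study stops looking surprising.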

AI

Google Execs Warn Company's Reputation Could Suffer If It Moves Too Fast On AI-Chat Tech (cnbc.com) 57

Google employees asked executives at an all-hands meeting whether the AI chatbot that's going viral represents a "missed opportunity" for the company. Google's Jeff Dean said the company has much more "reputational risk" in providing wrong information and thus is moving "more conservatively than a small startup." CNBC reports: Google employees are seeing all the buzz around ChatGPT, the artificial intelligence chatbot that was released to the public at the end of November and quickly turned into a Twitter sensation. Some of them are wondering where Google is in the race to create sophisticated chatbots that can answer user queries. After all, Google's prime business is web search, and the company has long touted itself as a pioneer in AI. Google's conversation technology is called LaMDA, which stands for Language Model for Dialogue Applications.

At a recent all-hands meeting, employees raised concerns about the company's competitive edge in AI, given the sudden popularity of ChatGPT, which was launched by OpenAI, a San Francisco-based startup that's backed by Microsoft. "Is this a missed opportunity for Google, considering we've had Lamda for a while?" read one top-rated question that came up at last week's meeting. Alphabet CEO Sundar Pichai and Jeff Dean, the long-time head of Google's AI division, responded to the question by saying that the company has similar capabilities but that the cost if something goes wrong would be greater because people have to trust the answers they get from Google.

Pichai said at the meeting that the company has "a lot" planned in the space for 2023, and that "this is an area where we need to be bold and responsible so we have to balance that." A small startup can afford to ship experimental technology and iterate in public; Google, which has a market cap of over $1.2 trillion, doesn't have that luxury. Its technology has stayed largely in-house so far, Dean told employees, emphasizing that the company has much more "reputational risk" and is moving "more conservatively than a small startup." "We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we've been using them to date," Dean said. "But, it's super important we get this right." He went on to say "you can imagine for search-like applications, the factuality issues are really important and for other applications, bias and toxicity and safety issues are also paramount." Dean said the technology isn't where it needs to be for a broad rollout and that current publicly-available models have issues. Pichai said that 2023 will mark a "point of inflection" for the way AI is used for conversations and in search. "We can dramatically evolve as well as ship new stuff," he said.

Sony

Telnet Gets Stubborn Sony Camera Under Control (hackaday.com) 45

Hackaday writes: According to [Venn Stone], technical producer over at Linux GameCast, the Sony a5000 is still a solid option for those looking to shoot 1080p video despite being released back in 2014. But while the camera is lightweight and affordable, it does have some annoying quirks — namely an overlay on the HDMI output (as seen in the image above) that can't be turned off using the camera's normal configuration menu. But as it so happens, using some open source tools and the venerable telnet, you can actually log into the camera's operating system and fiddle with its settings directly.
A grassroots tool for unlocking Sony cameras apparently also unlocks developer options — including a telnet server on its WiFi interface. (There's a video of the whole procedure on Linux Gamecast Weekly's web site.)

Venn Stone (the podcast's technical producer/engineer) is apparently also a long-time Slashdot reader — and describes himself on the podcast as "not a fan of artificial software limitations."

And he calls this telnet-enabled tweak "the most hack-y thing I've done in recent memory" — even creating a playlist of 1990s hacker music to more fully enjoy the moment.

Chromium

'The Arc Browser is the Chrome Replacement I've Been Waiting For' (theverge.com) 98

The Browser Company's Chromium-based Arc browser "isn't perfect, and it takes some getting used to," writes the Verge. "But it's full of big new ideas about how we should interact with the web — and it's right about most of them." Arc wants to be the web's operating system. So it built a bunch of tools that make it easier to control apps and content, turned tabs and bookmarks into something more like an app launcher, and built a few platform-wide apps of its own. The app is much more opinionated and much more complicated than your average browser with its row of same-y tabs at the top of the screen. Another way to think about it is that Arc treats the web the way TikTok treats video: not as a fixed thing for you to consume but as a set of endlessly remixable components for you to pull apart, play with, and use to create something of your own. Want something to look better or have an idea for what to do with it? Go for it.

This is a fun moment in the web browser industry. After more than a decade of total Chrome dominance, users are looking elsewhere for more features, more privacy, and better UI. Vivaldi has some really clever features; SigmaOS is also betting on browsers as operating systems; Brave has smart ideas about privacy; even Edge and Firefox are getting better fast. But Arc is the biggest swing of them all: an attempt to not just improve the browser but reinvent it entirely....

Right now, Arc is only available for the Mac, but the company has said it's also working on Windows and mobile versions, both due next year. It's still in a waitlisted beta and is still very much a beta app, with some basic features missing, other features still in flux, and a few deeply annoying bugs. But Arc's big ideas are the right ones. I don't know if The Browser Company is poised to take on giants and win the next generation of the browser wars, but I'd bet that the future of browsers looks a lot like Arc....

In a way, Arc is more like ChromeOS than Chrome. It tries to expand the browser to become the only app you need because, in a world where all your apps are web apps and all your files are URLs, who really needs more than a browser?

The article describes Arc as a power-user tool with a vertical sidebar combining bookmarks, tabs, and apps. (And sets of these can apparently be combined into different "spaces".) These are enhanced with a hefty set of keyboard shortcuts (including tab searching), along with built-in media controls for Twitch/Spotify/Google Meet (as well as a picture-in-picture mode).
Arc even has a shareable, collaborative whiteboard app called "Easel". It also offers powerful features like the ability to rewrite how your browser displays any site's CSS. ("I have one that removes the Trending sidebar from Twitter and another that cleans up my Gmail page.")

Android

DuckDuckGo's Anti-Tracking Android Tool Could Be 'Even More Powerful' Than iOS (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: Privacy-focused search site DuckDuckGo has added yet another way to prevent more of your data from going to advertisers, opening its App Tracking Protection for Android to beta testers. DuckDuckGo is positioning App Tracking Protection as something like Apple's App Tracking Transparency for iOS devices, but "even more powerful." Enabling the service in the DuckDuckGo app for Android (under the "More from DuckDuckGo" section) installs a local VPN service on your phone, which can then start automatically blocking trackers on DDG's public blocklist. DuckDuckGo says this happens "without sending app data to DuckDuckGo or other remote servers."

Google recently gave Android users some native tools to prevent wanton tracking, including app-by-app location-tracking approval and a limited native ad-tracking opt-out. Apple's App Tracking Transparency asks if users want to block apps from accessing the Identifier for Advertisers (IDFA), but apps can still use the largest tracking networks across many apps to better profile app users. Allison Goodman, senior communications manager for DuckDuckGo, told Ars Technica that App Tracking Protection needs Android's VPN permission so it can monitor network traffic. When it recognizes a tracker from its blocklist, it "looks at the destination domain for any outbound request and blocks them if they are in our blocklist and the requesting app is not owned by the same company that owns the domain." Goodman added that "much of the data collected by trackers is not controlled by [Android] permissions," making App Tracking Protection a complementary offering.
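Goodman's blocking rule can be sketched as a small decision function. The blocklist and ownership tables below are invented examples for illustration, not DuckDuckGo's actual data:

```python
# Sketch of the rule Goodman describes: block an outbound request if its
# destination domain is on the blocklist AND the requesting app is not owned
# by the same company that owns that domain. All data below is invented.

BLOCKLIST = {"tracker.example.com", "ads.example.net"}
DOMAIN_OWNER = {"tracker.example.com": "ExampleCorp"}
APP_OWNER = {"com.examplecorp.app": "ExampleCorp", "com.other.app": "OtherCo"}

def should_block(requesting_app, destination_domain):
    if destination_domain not in BLOCKLIST:
        return False
    # First-party exemption: an app talking to its own company isn't blocked.
    return APP_OWNER.get(requesting_app) != DOMAIN_OWNER.get(destination_domain)

print(should_block("com.other.app", "tracker.example.com"))        # True
print(should_block("com.examplecorp.app", "tracker.example.com"))  # False
print(should_block("com.other.app", "api.example.org"))            # False
```

The first-party exemption is why this blocks cross-app tracking without breaking an app's communication with its own backend.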

Software

Zoom Is Adding Email and Calendar Features (engadget.com) 16

At its Zoomtopia conference, the company announced a bunch of features that are coming to its platform, including two key ones for productivity: email and calendars. Engadget reports: You can connect third-party email and calendar services to Zoom and access them through the desktop app. The company says that can help save you time instead of having to switch between apps and perhaps needing to hunt for the right tab in your browser. Those on the Zoom One Pro or Zoom Standard Pro plans will be able to set up email accounts through the platform, and folks with certain plans have the option to use custom domains. You'll get up to 100GB of storage included. The key selling point is that messages sent directly between Zoom Mail Service users (i.e. those who use Zoom's email hosting services) will have end-to-end encryption. You'll also be able to send external emails that can expire and contain access-restricted links.

As for Zoom Calendar, there will be options to see which of your contacts has joined a meeting, and you can schedule Zoom voice and video calls in the app. Zoom's own calendar service will include the ability to book appointments. On the way in 2023 is a feature called Zoom Spots. The company describes this as a virtual coworking space where colleagues can stay more connected during the workday via video-first conversations. While the company didn't reveal too much detail about Zoom Spots in its blog post, there may be a downside as the feature could enable bosses to keep a closer eye on what their employees are doing.

Businesses will soon be able to employ Zoom Virtual Agent, a conversational AI and chatbot designed to help customers resolve issues. That tool will be available in early 2023. Other things in the pipeline include a way for developers to make money from the Zoom Apps Marketplace and a virtual coach to help sellers perfect their pitches. As for the core functions people know Zoom for, there's a feature on the way that connects team chats with in-meeting chats. You'll be able to carry the conversation from one to the other and back again to keep things flowing. The company is also looking to roll out translation options for team chats in 2023. In the near future, you'll be able to schedule a chat message to send at a later time.

Zoom Phone is coming to the web, which should be handy for many folks. A progressive web app will be available for ChromeOS too. Meanwhile, users will be able to use a one-click chat message as a response when they can't answer a call. As for Zoom Rooms, there will be a way for folks in one of those to join a Google Meet room and vice versa. Last, but by no means least, Zoom revealed a string of updates for meetings. The Smart Recordings feature uses AI to generate summaries, next steps and chapters to make archived meetings more digestible and help you get to the part you're looking for. There will be meeting templates that can automatically configure the right settings and a way to record videos with narration and screensharing that you can send to colleagues. On top of that, you'll have more avatar options, including the ability to use a Meta avatar.

AI

Google's New Prototype AI Tool Does the Writing For You (theverge.com) 22

An anonymous reader shares a report: Remember that time Google showed off its artificial intelligence prowess by demoing conversations with Pluto and a paper airplane? That was powered by LaMDA, one of Google's latest-generation conversational AI models. Now, Google's using LaMDA to build Wordcraft, a prototype writing tool that can help creative writers craft new stories. AI-powered writing tools aren't new. Chances are you've heard of Grammarly or copywriting tools like Jasper. What makes Wordcraft a bit different is that it's framed as a means to help create fictional work. Google describes it as a sort of "text editor with purpose" built into a web-based word processor. Users can prompt Wordcraft to rewrite phrases or direct it to make a sentence funnier. It can also describe objects if asked or generate prompts. In a nutshell, it's sort of like wrapping an editor and writing partner into a single AI tool.

To test Wordcraft, Google created a workshop with 13 professional writers to see how well the prototype worked. While the writers seemed to appreciate Wordcraft as a way to spark new ideas, they unanimously agreed the tool wasn't going to replace authors anytime soon. For starters, the tool wasn't great at sticking to a narrative style and produced average or cliched writing. It also stuck to tried-and-true tropes while also steering clear of "mean" characters. "One clear finding was that using LaMDA to write full stories is a dead end. It's a much more effective tool when it's used to add spice," Douglas Eck, senior research director at Google Research, said at the AI@ event. Obviously, any prototype has kinks to work out. It's also hard to fully grasp what using an AI-powered creative writing tool is like. So I was curious to see a demo of it firsthand at Google's AI@ event.

Google

Google Can Now Remove Your Identifying Search Results, If They're the Right Kind (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: Google has been pushing out a tool for removing personally identifiable information -- or doxxing content -- from its search results. It's a notable step for a firm that has long resisted individual moderation of search content, outside of broadly harmful or copyright-violating material. But whether it works for you or not depends on many factors. As with almost all Google features and products, you may not immediately have access to Google's new removal process. If you do, though, you should be able to click the three dots next to a web search result (while signed in), or in a Google mobile app, to pull up "About this result." Among the options you can click at the bottom of a pop-up are "Remove result." Take note, though, that clicking this button registers intent rather than triggering immediate action -- Google suggests a response time of "a few days."

Google's blog post about this tool, updated in late September, notes that "Starting early next year," you can request regular alerts for when your personal identifying information (PII) appears in new search results, allowing for quicker reporting and potential removal. I took a trial run through the process by searching my name and a relatively recent address on Google, then reporting it. The result I reported was from a private company that, while putting on the appearance of only posting public or Freedom of Information Act-obtained records, places those records next to links that send you to the site's true owner, initiating a "background check" or other tracking services for a fee.

The first caveat Google carves out in its blog post is whether the page your information appears on also contains "other information that is broadly useful, for instance in news articles." So if your information is appearing because a newspaper or other publication regularly publishes, for example, lists of real estate transactions, Google isn't likely to take that page down. Google then notes that removing your info from a Google search "doesn't remove it from the web," so they suggest a help page they've compiled for contacting a site webmaster about removal. In other words, if Google can see a page with your information on it, so can Bing, DuckDuckGo, and other web-indexing search sites, so removing the original page is important. You could then request Google remove its own indexed result once the webmaster acts through an "outdated information" removal request. [...] Google notes that it generally aims to preserve search results if "the content is determined to be of public interest." This includes "Content on or from government and other official sources," and newsworthy and professionally relevant content.
Doxxing is treated as a separate case, notes Ars Technica's Kevin Purdy. "If there is an 'explicit or implicit threat,' or 'calls to action for others to harm or harass,' that can make the removal easier under Google's doxxing policy, initiated in May."

Google

Google Selects Coinbase To Take Cloud Payments With Cryptocurrencies and Will Use Its Custody Tool (cnbc.com) 11

Google said Tuesday that it will rely on Coinbase to start letting some customers pay for cloud services with cryptocurrencies early in 2023, while Coinbase said it would draw on Google's cloud infrastructure. From a report: The deal, announced at Google's Cloud Next conference, might succeed in luring cutting-edge companies to Google in a fierce, fast-growing market, where Google's top competitors do not currently permit clients to pay with digital currencies. The cloud business helps diversify Google parent Alphabet away from advertising, and it now accounts for 9% of revenue, up from less than 6% three years ago, as it is expanding more quickly than Alphabet as a whole. Coinbase, which generates a majority of its revenue from retail transactions, will move data-related applications to Google from the market-leading Amazon Web Services cloud, which Coinbase has relied on for years, said Jim Migdal, Coinbase's vice president of business development. The Google Cloud Platform infrastructure service will initially accept cryptocurrency payments from a handful of customers in the Web3 world who want to pay with cryptocurrency, thanks to an integration with the Coinbase Commerce service, said Amit Zavery, vice president and general manager and head of platform at Google Cloud, in an interview with CNBC.

AI

Shutterstock Is Removing AI-Generated Images 74

Shutterstock appears to be removing images generated by AI systems like DALL-E and Midjourney. Motherboard reports: On Shutterstock, searches for images tagged "Midjourney" yielded several photos with the AI tool's unmistakable aesthetic, with many having high popularity scores and marked as "frequently used." But late Monday, the results for "Midjourney" seem to have been reduced, leaving mainly stock photos of the tool's logo. Other images use tags like "AI generated" -- one image, for example, is an illustration of a futuristic building with an image description reading "Ai generated illustration of futuristic Art Deco city, vintage image, retro poster." The image is part of a collection the artist titled "Midjourney," which has since been removed from the site. Other images marked "AI generated," like this burning medieval castle, seem to remain up on the site.

As Ars Technica notes, neither Shutterstock nor Getty Images explicitly prohibits AI-generated images in their terms of service, and Shutterstock users typically make around 15 to 40 percent of what the company makes when it sells an image. Some creators have not taken kindly to this trend, pointing out that these systems use massive datasets of images scraped from the web. [...] In other words, the generated works are the result of an algorithmic process which mines original art from the internet without credit or compensation to the original artists. Others have worried about the impacts on independent artists who work for commissions, since the ability for anyone to create custom generated artwork potentially means lost revenue.

Privacy

Clearview AI, Used by Police To Find Criminals, Now in Public Defenders' Hands (nytimes.com) 61

After a Florida man was accused of vehicular homicide, his lawyer used Clearview AI's facial recognition software to prove his innocence. But other defense lawyers say Clearview's offer rings hollow. From a report: It was the scariest night of Andrew Grantt Conlyn's life. He sat in the passenger seat of a two-door 1997 Ford Mustang, clutching his seatbelt, as his friend drove approximately 100 miles per hour down a palm tree-lined avenue in Fort Myers, Fla. His friend, inebriated and distraught, occasionally swerved onto the wrong side of the road to pass cars that were complying with the 35 mile-an-hour speed limit. "Someone is going to die tonight," Mr. Conlyn thought. And then his friend hit a curb and lost control of the car. The Mustang began spinning wildly, hitting a light pole and three palm trees before coming to a stop, the passenger's side against a tree. At some point, Mr. Conlyn blacked out. When he came to, his friend was gone, the car was on fire and his seatbelt buckle was jammed. Luckily, a good Samaritan intervened, prying open the driver's side door and pulling Mr. Conlyn out of the burning vehicle.

Mr. Conlyn didn't learn his savior's name that Wednesday night in March 2017, nor did the police, who came to the scene and found the body of his friend, Colton Hassut, in the bushes near the crash; he'd been ejected from the car and had died. In the years that followed, the inability to track down that good Samaritan derailed Mr. Conlyn's life. If Clearview AI, which is based in New York, hadn't granted his lawyer special access to a facial recognition database of 20 billion faces, Mr. Conlyn might have spent up to 15 years in prison because the police believed he had been the one driving the car. For the last few years, Clearview AI's tool has been largely restricted to law enforcement, but the company now plans to offer access to public defenders. Hoan Ton-That, the chief executive, said this would help "balance the scales of justice," but critics of the company are skeptical given the legal and ethical concerns that swirl around Clearview AI's groundbreaking technology. The company scraped billions of faces from social media sites, such as Facebook, LinkedIn and Instagram, and other parts of the web in order to build an app that seeks to unearth every public photo of a person that exists online.

AI

Runway Teases AI-Powered Text-To-Video Editing Using Written Prompts (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: In a tweet posted this morning, artificial intelligence company Runway teased a new feature of its AI-powered web-based video editor that can edit video from written descriptions, often called "prompts." Runway's "Text to Video" demonstration reel shows a text input box that allows editing commands such as "import city street" (suggesting the video clip already existed) or "make it look more cinematic" (applying an effect). It depicts someone typing "remove object" and selecting a streetlight with a drawing tool that then disappears (from our testing, Runway can already perform a similar effect using its "inpainting" tool, with mixed results). The promotional video also showcases what looks like still-image text-to-image generation similar to Stable Diffusion (note that the video does not depict any of these generated scenes in motion) and demonstrates text overlay, character masking (using its "Green Screen" feature, also already present in Runway), and more.

Video generation promises aside, what seems most novel about Runway's Text to Video announcement is the text-based command interface. Whether video editors will want to work with natural language prompts in the future remains to be seen, but the demonstration shows that people in the video production industry are actively working toward a future in which synthesizing or editing video is as easy as writing a command. [...] Runway is available as a web-based commercial product that runs in the Google Chrome browser for a monthly fee, which includes cloud storage for about $35 per year. But the Text to Video feature is in closed "Early Access" testing, and you can sign up for the waitlist on Runway's website.

Security

Eight-Year Study Finds 24,931 WordPress Sites Using Malicious Plugins (gatech.edu) 25

"Since 2012 researchers in the Georgia Tech Cyber Forensics Innovation Laboratory have uncovered 47,337 malicious plugins across 24,931 unique WordPress websites through a web development tool they named YODA," warns an announcement released Friday: According to a newly released paper about the eight-year study, the researchers found that every compromised website in their dataset had two or more infected plugins.

The findings also indicated that 94% of those plugins are still actively infected.

"This is an under-explored space," said Ph.D. student Ranjita Pai Kasturi who was the lead researcher on the project. "Attackers do not try very hard to hide their tracks and often rightly assume that website owners will not find them."

YODA is not only able to detect active malware in plugins, but it can also trace the malicious software back to its source. This allowed the researchers to determine that these malicious plugins were sold on the open market or distributed from pirating sites, injected into the website by exploiting a vulnerability, or, in most cases, infected after the plugin was added to a website. According to the paper written by Kasturi and her colleagues, over 40,000 plugins in their dataset were shown to have been infected after they were deployed. The team found that the malware would attack other plugins on the site to spread the infection.

"These infections were a result of two scenarios. The first is cross-plugin infection, in which case a particular plugin developer cannot do much," said Kasturi. "Or it was infected by exploiting existing plugin vulnerabilities. To fix this, plugin developers can scan for vulnerabilities before releasing their plugins for public use."

Although these malicious plugins can be damaging, Kasturi adds that it's not too late to save a website that has a compromised plugin. Website owners can purge malicious plugins entirely from their websites and reinstall a malware-free version that has been scanned for vulnerabilities. To give web developers an edge over this problem, the Cyber Forensics Innovation Laboratory has made the YODA code available to the public on GitHub.
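YODA's full pipeline is far more sophisticated, but the core idea — statically flagging plugin files that contain injection-style code — can be sketched in a few lines. The patterns below are illustrative guesses at common WordPress malware signatures, not YODA's actual ruleset:

```python
import re
from pathlib import Path

# Signatures often seen in injected WordPress plugin malware.
# Illustrative only; YODA's real analysis goes far deeper than pattern matching.
SUSPICIOUS_PATTERNS = {
    "obfuscated eval": re.compile(r"eval\s*\(\s*(base64_decode|gzinflate|str_rot13)\s*\("),
    "callable from raw request": re.compile(r"\$_(POST|REQUEST)\s*\[[^]]+\]\s*\("),
    "remote fetch": re.compile(r"(curl_exec|file_get_contents)\s*\(\s*['\"]https?://"),
}

def scan_plugin(plugin_dir: str) -> dict[str, list[str]]:
    """Return {relative PHP file path: [matched pattern names]} for one plugin."""
    findings: dict[str, list[str]] = {}
    root = Path(plugin_dir)
    for php_file in root.rglob("*.php"):
        source = php_file.read_text(errors="ignore")
        hits = [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(source)]
        if hits:
            findings[str(php_file.relative_to(root))] = hits
    return findings
```

A scan like this only surfaces candidates for manual review; as the paper's findings on post-deployment infection suggest, it would need to be re-run periodically, not just at install time.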

Google

Google 'Airbrushes' Out Emissions From Flying (bbc.com) 78

The way Google calculates the climate impact of your flights has changed, the BBC has discovered. From the report: Flights now appear to have much less impact on the environment than before. That's because the world's biggest search engine has taken a key driver of global warming out of its online carbon flight calculator. "Google has airbrushed a huge chunk of the aviation industry's climate impacts from its pages" says Dr Doug Parr, chief scientist of Greenpeace. With Google hosting nine out of every 10 online searches, this could have wide repercussions for people's travel decisions. The company said it made the change following consultations with its "industry partners." It affects the carbon calculator embedded in the company's "Google Flights" search tool.

If you have ever tried to find a flight on Google, you will have come across Google Flights. It appears towards the top of search results and allows you to scour the web for flights and fares. It also offers to calculate the emissions generated by your journey. Google says this feature is designed "to help you make more sustainable travel choices." Yet in July, Google decided to exclude all the global warming impacts of flying except CO2. Some experts say Google's calculations now represent just over half of the real impact on the climate of flights.
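The disagreement boils down to a multiplier. The CO2 from burning fuel is only part of a flight's warming effect; contrails and nitrogen oxides add the rest. The figures below (a typical per-passenger emission rate and a 1.9 "non-CO2" multiplier) are commonly cited rough estimates used here for illustration, not Google's numbers:

```python
# Illustrative only: both constants are rough, commonly cited estimates,
# not values from Google's calculator.
CO2_PER_PASSENGER_KM = 0.09   # kg CO2 per passenger-km, typical economy seat
NON_CO2_MULTIPLIER = 1.9      # accounts for contrails, NOx and other effects

def flight_impact(distance_km: float) -> dict[str, float]:
    co2_only = distance_km * CO2_PER_PASSENGER_KM
    return {
        "co2_only_kg": round(co2_only, 1),  # roughly what a CO2-only calculator reports
        "total_warming_kg_co2e": round(co2_only * NON_CO2_MULTIPLIER, 1),
    }

print(flight_impact(5_500))  # roughly London-New York
# {'co2_only_kg': 495.0, 'total_warming_kg_co2e': 940.5}
```

On these assumptions the CO2-only figure is about 53% of the total, which matches the experts' "just over half" characterization.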

Education

A Tool That Monitors How Long Kids Are In the Bathroom Is Now In 1,000 American Schools (vice.com) 90

e-HallPass, a digital system that students have to use to request to leave their classroom and which takes note of how long they've been away, including to visit the bathroom, has spread into at least a thousand schools around the United States. Motherboard reports: On Monday, a since-deleted tweet went viral in which someone claimed that their school was preparing to introduce e-HallPass, and described it as "the program where we track how long, at what time, and how often each child goes to the restroom and store that information on third party servers run by a private for-profit company." Motherboard then identified multiple schools across the U.S. that appear to use the technology by searching the web for instruction manuals, announcements, and similar documents from schools that mentioned the technology. Those results included K-12 schools such as Franklin Regional Middle School, Fargo Public Schools, River City High School, Loyalsock Township School District, and Cabarrus County Schools. Other schools that appear to use e-HallPass include Mehlville High School, Eagle County School District, Hopatcong Borough Schools, and Pope Francis Preparatory School. These schools are spread across the country, with some in California, New York, Virginia, and North Carolina. Eduspire, the company that makes e-HallPass, told trade publication EdSurge in March that 1,000 schools use the system. Brian Tvenstrup, president of Eduspire, told the outlet that the company's biggest obstacle to selling the product "is when a school isn't culturally ready to make these kinds of changes yet."

The system itself works as a piece of software installed on a computer or mobile device. Students request a pass through the software and the teacher then approves it. The tool promises "hall omniscience" with the ability to "always know who has a pass and who doesn't (without asking the student!)," according to the product's website. Admins can then access data collected through the software, and view a live dashboard showing details on all passes. e-HallPass can also stop meet-ups of certain students and limit the number of passes going to certain locations, the website adds, explicitly mentioning "vandalism and TikTok challenges." Many of the schools Motherboard identified appear to use e-HallPass specifically on Chromebooks, according to student user guides and similar documents hosted on the schools' websites, though the company also advertises that it can be used to track students on their personal cell phones.
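The features described above (per-location caps and blocked meet-ups) map onto a fairly simple data model. e-HallPass's internals aren't public, so the class and method names in this sketch are purely illustrative:

```python
# Hypothetical sketch of the pass rules described above; not e-HallPass's
# actual implementation.
class HallPassSystem:
    def __init__(self, location_limits: dict[str, int],
                 blocked_pairs: set[frozenset]):
        self.location_limits = location_limits   # max simultaneous passes per location
        self.blocked_pairs = blocked_pairs       # student pairs to keep apart
        self.active: dict[str, str] = {}         # student -> current location

    def request_pass(self, student: str, location: str) -> bool:
        # Enforce the per-location cap ("limit the number of passes").
        out_here = sum(1 for loc in self.active.values() if loc == location)
        if out_here >= self.location_limits.get(location, 0):
            return False
        # Enforce the meet-up block: deny if a blocked peer is already there.
        for other, loc in self.active.items():
            if loc == location and frozenset({student, other}) in self.blocked_pairs:
                return False
        self.active[student] = location
        return True

    def return_pass(self, student: str) -> None:
        self.active.pop(student, None)
```

Logging a timestamp on each request and return is all it would take to produce the time-in-bathroom records the tweet objected to, which is what makes the privacy concerns concrete.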

Facebook

Meta Injecting Code Into Websites Visited By Its Users To Track Them, Research Says (theguardian.com) 49

Meta, the owner of Facebook and Instagram, has been rewriting websites its users visit, letting the company follow them across the web after they click links in its apps, according to new research from an ex-Google engineer. The Guardian reports: The two apps have been taking advantage of the fact that users who click on links are taken to webpages in an "in-app browser," controlled by Facebook or Instagram, rather than sent to the user's web browser of choice, such as Safari or Firefox. "The Instagram app injects their tracking code into every website shown, including when clicking on ads, enabling them [to] monitor all user interactions, like every button and link tapped, text selections, screenshots, as well as any form inputs, like passwords, addresses and credit card numbers," says Felix Krause, a privacy researcher who founded an app development tool acquired by Google in 2017.

Krause discovered the code injection by building a tool that could list all the extra commands added to a website by the browser. For normal browsers, and most apps, the tool detects no changes, but for Facebook and Instagram it finds up to 18 lines of code added by the app. Those lines of code appear to scan for a particular cross-platform tracking kit and, if not installed, instead call the Meta Pixel, a tracking tool that allows the company to follow a user around the web and build an accurate profile of their interests. The company does not disclose to the user that it is rewriting webpages in this way. No such code is added to the in-app browser of WhatsApp, according to Krause's research. [...] It is unclear when Facebook began injecting code to track users after clicking links.
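Krause's tool essentially diffs the page as rendered in a normal browser against the same page in the in-app browser and reports scripts that appear only in the latter. A minimal, hypothetical sketch of that comparison over static HTML (his actual tool inspects the live DOM from inside the page):

```python
from html.parser import HTMLParser

class ScriptCollector(HTMLParser):
    """Collect the src of every <script> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.scripts: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            # Inline scripts have no src attribute; mark them as such.
            self.scripts.append(dict(attrs).get("src", "<inline>"))

def injected_scripts(baseline_html: str, in_app_html: str) -> list[str]:
    """Scripts present in the in-app rendering but absent from the baseline."""
    def collect(html: str) -> list[str]:
        parser = ScriptCollector()
        parser.feed(html)
        return parser.scripts
    baseline = collect(baseline_html)
    return [s for s in collect(in_app_html) if s not in baseline]
```

Anything this diff surfaces was added by the rendering environment rather than served by the site — which is exactly the distinction Krause's findings turn on.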

"We intentionally developed this code to honor people's [Ask to track] choices on our platforms," a Meta spokesperson told The Guardian in a statement. "The code allows us to aggregate user data before using it for targeted advertising or measurement purposes. We do not add any pixels. Code is injected so that we can aggregate conversion events from pixels."

They added: "For purchases made through the in-app browser, we seek user consent to save payment information for the purposes of autofill."

Electronic Frontier Foundation

'Toward a Future We Want to Live In' - EFF Celebrates 32nd Birthday (eff.org) 25

"Today at the Electronic Frontier Foundation, we're celebrating 32 years of fighting for technology users around the world," reads a new announcement posted at EFF.org: If you were online back in the 90s, you might remember that it was pretty wild. We had bulletin boards, FTP, Gopher, and, a few years later, homespun websites. You could glimpse a future where anyone, anywhere in the world could access information, float new ideas, and reach each other across vast distances. It was exciting and the possibilities seemed endless.

But the founders of EFF also knew that a better future wasn't automatic. You don't organize a team of lawyers, technologists, and activists because you think technology will magically fix everything — you do it because you expect a fight.

Three decades later, thanks to those battles, the internet does much of what it promised: it connects and lifts up major grassroots movements for equity, civil liberties, and human rights and allows people to connect and organize to counteract the ugliness of the world.

But we haven't yet won that future we envisioned. Just as the web connects us, it also serves as a hunting ground for those who want to surveil and control our actions, those who wish to harass and spread hate, as well as others who seek to monetize our every move and thought. Information collected for one purpose is freely repurposed in ways that oppress us, rather than lift us up. The truth is that digital tools allow those with horrible ideas to connect with each other just as they do those with beautiful, healing ones.

EFF has always seen both the beauty and destructive potential of the internet, and we've always put our marker down on the side of justice, freedom, and innovation.

We work every day toward a future we want to live in, and we don't do it alone. Support from the public makes every one of EFF's activism campaigns, software projects, and court filings possible. Together, we anchor the movement for a better digital world, and ensure that technology supports freedom, justice, and innovation for all people of the world.

In fact, I invite every digital freedom supporter to join EFF during our summer membership drive. Right now, you can be a member for as little as $20, get some special new gear, and ensure that tech users always have a formidable defender in EFF.

So how does the EFF team celebrate this auspicious anniversary? EFF does what it does best: stand up for users and innovators in the courts, in the halls of power, in the public conversation. We build privacy-protecting tools, teach skills to community members, share knowledge with allies, and preserve the best aspects of the wild web.

In other words, we use every tool in our deep arsenal to fight for a better and brighter digital future for all. Thank you for standing with EFF when it counts.
