AI

OpenAI Unveils AI Agent To Automate Web Browsing Tasks (openai.com) 41

The rumors are true: OpenAI today launched Operator, an AI agent capable of performing web-based tasks through its own browser, as a research preview for U.S. subscribers of its $200 monthly ChatGPT Pro tier. The agent uses GPT-4o's vision capabilities and reinforcement learning to interact with websites through mouse and keyboard actions without requiring API integration, OpenAI said in a blog post.

Operator can self-correct and defer to users for sensitive information, though there are some limitations with complex interfaces. OpenAI said it's partnering with DoorDash, Instacart, OpenTable and others to develop real-world applications, with plans to expand access to Plus, Team and Enterprise users.

Cellphones

Samsung's Galaxy S25 Phones Once Again Lean Heavily on AI 25

At Galaxy Unpacked today in San Jose, California, Samsung unveiled the new Galaxy S25 series of flagship smartphones loaded with AI capabilities and LLMs. "Currently, the Galaxy S25 range is comprised of the Galaxy S25 ($800), Galaxy S25+ ($1,000), and Galaxy S25 Ultra ($1,300)," reports Wired. "The phones are available for preorder today and will officially go on sale February 7." Since the hardware is relatively unchanged from last year's Galaxy S24 series, here's what Wired has to say about the new AI smarts: The Galaxy S25 is a tale of two AIs: Gemini and Bixby. Yes, while Google's Gemini AI assistant sits at the forefront -- it can finally be triggered through a long press of the power button -- Samsung is bringing its original Bixby voice assistant out from the shadows. Bixby has been enhanced with large language models but is still designed to handle phone functions, like changing device settings. Gemini is meant to be used for general web queries and more complex actions. You can even have two hot words, one for each assistant. I foresee all of this being confusing [...].

The highlight AI feature debuting on the Galaxy S25 series is "cross-app experiences." These are tasks you can ask Gemini to perform, even if the task requires multiple apps. For example, you can ask for the schedule of this season's Arsenal matches and then add it to your calendar; Gemini will then search and add every Arsenal FC game in the season to your schedule. Or you can ask it to find pet-friendly vegan restaurants nearby and text the list to a friend. It even works with images too -- snap a pic of your fridge and ask Gemini to find you a recipe based on the available ingredients. These cross-app experiences work with Google apps, Samsung's Galaxy apps, and select third-party apps, like WhatsApp and Spotify.

All these AI features have culminated in a new app: Now Brief. Samsung calls this proactive assistance (remember Google's Now on Tap?) where a morning brief arrives with the weather, upcoming calendar events, stock details, news articles, and suggestions to trigger routines. There's also an evening brief with a summary of the day's events with photos. Since the feature can plug into email, it'll send reminders about expiring coupons and upcoming travel tickets. Samsung claims it can even suggest moving an 8:45 am alarm earlier if it sees a 9 am meeting on the schedule. On the lock screen, a "Now Bar" widget persists at the bottom, much like Apple's Live Activities. It'll offer quick access to the Now Brief app, but it will also show updates for favorite sports teams, along with glanceable directions from Google Maps.

The rest of the AI features are playing a bit of catch-up to Apple and Google's Pixel phones. There's Drawing Assist, a generative AI tool to craft new images in different art styles based on sketches or text prompts. AI Select works with the S Pen stylus on the S25 Ultra and understands what is selected -- for example, if a video is selected, it will suggest turning it into a GIF. Audio Eraser is an editing tool to cut out background noise in videos post-capture, canceling out the sound of a crowd's chatter or an ambulance's siren. Finally, Samsung's Generative Edit feature, which lets you erase unwanted objects in images, now works locally on the device and is much more accurate and faster.
A full list of specs can be found here. You can watch a recording of the event on YouTube.
AI

ChatGPT-Maker To Launch Web Automation Tool 'Operator' This Week (theinformation.com) 27

OpenAI will release "Operator" this week, letting ChatGPT users automate web tasks through a built-in browser, The Information reported Wednesday. The feature handles restaurant bookings, travel planning, shopping and deliveries, asking follow-up questions like party size for reservations. Users can watch Operator work, take control mid-task, and share workflows with others.
Open Source

WordPress.org Accounts Deactivated for Contributors Said to Be Planning a Fork - by Automattic CEO (techcrunch.com) 49

WordPress co-creator (and Automattic CEO) Matt Mullenweg "has deactivated the accounts of several WordPress.org community members," reports TechCrunch, "some of whom have been spearheading a push to create a new fork of the open source WordPress project." Joost de Valk — creator of WordPress-focused SEO tool Yoast (and former marketing and communications lead for the WordPress Foundation) — last month published his "vision for a new WordPress era," alluding to a potential fork in the form of "federated and independent repositories." Karim Marucchi, CEO of enterprise web consulting firm Crowd Favorite, echoed these thoughts in a separate blog post. WP Engine indicated it was on standby to lend a corporate hand. Mullenweg, for his part, has publicly supported the notion of a new WordPress fork.
But when Automattic slashed its contributions to WordPress.org, things heated up: This spurred de Valk to take to X.com on Friday to indicate that he was willing to lead on the next release of WordPress, with Marucchi adding that his "team stands ready." Collectively, de Valk and Marucchi contribute around 10 hours per week to various aspects of the WordPress open source project. However, in a sarcasm-laden blog post published this morning, Mullenweg said that to give their independent effort the "push it needs to get off the ground," he was deactivating their WordPress.org accounts. "I strongly encourage anyone who wants to try different leadership models or align with WP Engine to join up with their new effort," Mullenweg wrote.

At the same time, Mullenweg also revealed he was deactivating the accounts of three other people, with little explanation given: Sé Reed, Heather Burns, and Morten Rand-Hendriksen. Reed, it's worth noting, is president and CEO of a newly established non-profit called the WP Community Collective, which is setting out to serve as a "neutral home for collaboration, contribution, and resources" around WordPress and the broader open source ecosystem. Burns, a former contributor to the WordPress project, took to X this morning to express surprise at her deactivation, noting that she hadn't been involved in the project since 2020...

Notably, deactivating a WordPress.org account prevents affected users from contributing through that channel, whether to the core project or to any other plugins or themes they may be involved with.

Rand-Hendriksen posted on BlueSky: So why is he targeting Heather and me? Because we started talking about the need for proper governance, accountability, conflict of interest policies, and other things back in 2017. We both left the project in 2019, and apparently he still holds a grudge.
And while Mullenweg headlined his blog post "Joost/Karim Fork," Rand-Hendriksen wrote on BlueSky "there is no fork in the works as far as I know. He made that up, as he has done before. Heather and I have no involvement with any of this so I don't know why he grouped the five of us together like this. It smells like attempted harassment."

Later Rand-Hendriksen claimed "this is not the first time he's accused critics of forking WordPress" and that he's "convinced any fork will fail... I think he thinks saying someone is forking WordPress is an epic burn that discredits them in the eyes of the community."
The Courts

Google Faces Trial For Collecting Data On Users Who Opted Out (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: A federal judge this week rejected Google's motion to throw out a class-action lawsuit alleging that it invaded the privacy of users who opted out of functionality that records a user's web and app activities. A jury trial is scheduled for August 2025 in US District Court in San Francisco. The lawsuit concerns Google's Web & App Activity (WAA) settings, with the lead plaintiff representing two subclasses of people with Android and non-Android phones who opted out of tracking. "The WAA button is a Google account setting that purports to give users privacy control of Google's data logging of the user's web and app activity, such as a user's searches and activity from other Google services, information associated with the user's activity, and information about the user's location and device," wrote (PDF) US District Judge Richard Seeborg, the chief judge in the Northern District of California.

Google says that Web & App Activity "saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services." Google also has a supplemental Web & App Activity setting that the judge's ruling refers to as "(s)WAA." "The (s)WAA button, which can only be switched on if WAA is also switched on, governs information regarding a user's '[Google] Chrome history and activity from sites, apps, and devices that use Google services.' Disabling WAA also disables the (s)WAA button," Seeborg wrote. But data is still sent to third-party app developers through Google Analytics for Firebase (GA4F), "a free analytical tool that takes user data from the Firebase kit and provides app developers with insight on app usage and user engagement," the ruling said. GA4F "is integrated in 60 percent of the top apps" and "works by automatically sending to Google a user's ad interactions and certain identifiers regardless of a user's (s)WAA settings, and Google will, in turn, provide analysis of that data back to the app developer."
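The mechanism alleged in the ruling lends itself to a toy sketch: an analytics SDK embedded in an app forwards events based solely on the developer's app-level configuration, so a user's account-level opt-out never enters the code path. Everything below (the class name, fields, and event shape) is hypothetical illustration, not Google's actual implementation.

```python
# Hypothetical sketch of app-embedded analytics that ignores an
# account-level privacy toggle. Not real GA4F code.

class ToyAnalyticsSDK:
    def __init__(self, app_config_enabled: bool):
        # This switch is set by the app developer, not the end user.
        self.enabled = app_config_enabled
        self.sent = []

    def log_event(self, event: dict, user_privacy_opt_out: bool):
        # The user's account-level opt-out is received but never consulted:
        # only the developer's app-level switch gates transmission.
        if self.enabled:
            self.sent.append(event)

sdk = ToyAnalyticsSDK(app_config_enabled=True)
sdk.log_event({"ad_click": "banner_1"}, user_privacy_opt_out=True)
print(len(sdk.sent))  # 1 -- the event is sent despite the opt-out
```

The point of the sketch is structural: if the opt-out signal never reaches the code that decides whether to transmit, the setting cannot have any effect on that data flow.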

Plaintiffs have brought claims of privacy invasion under California law. Plaintiffs "present evidence that their data has economic value," and "a reasonable juror could find that Plaintiffs suffered damage or loss because Google profited from the misappropriation of their data," Seeborg wrote. The lawsuit was filed in July 2020. The judge notes that summary judgment can be granted when "there is no genuine dispute as to any material fact and the movant is entitled to judgment as a matter of law." Google hasn't met that standard, he ruled.
In a statement provided to Ars, Google said that "privacy controls have long been built into our service and the allegations here are a deliberate attempt to mischaracterize the way our products work. We will continue to make our case in court against these patently false claims."
The Internet

Obscure IGS Graphics Protocol For Atari ST BBSes Celebrated with New Artpack (breakintochat.com) 6

Developer/data journalist Josh Renaud is also long-time Slashdot reader Kirkman14 — and he's got a story to tell: How do you get people interested in an obscure Atari ST graphics format used on BBSes in the late 1980s and early 1990s? Recruit some folks to help you make an artpack full of images and animations showing it off! That's the idea behind IGNITE, a new artpack from Mistigris computer arts and Break Into Chat, featuring 18 images and animations created in "Instant Graphics and Sound" format.

I love telling unknown underdog computer stories, and IGS sucked me in. This fall, I published a six-part, 14,000-word history, introducing readers to a cast of characters that included Mears, the self-described "working man without a degree" who often downplayed his own coding ability; Kevin Moody and Anthony Rau, two Navy guys in Florida who bonded over their love of Atari and BBSing; Steve Turnbull, an artist and scenic designer working in Hollywood; and many others.

But IGS isn't just a thing of the past. Two years ago, on New Years Eve 2022, Mears made a surprise announcement — he was releasing a new version of IGS, thirty years after he had stopped working on the project.

Because I (inadvertently) had spurred Larry to action, I felt an obligation to make some art using his new tools. I completed my first piece — a drawing of a ship from the sci-fi game FTL — in early 2023. Over the subsequent months, I kept at it, and ended up creating a number of fun animations. I'm particularly proud of the [Star Trek-themed] animated Guardian of Forever login sequence, and a brand-new Calvin and Hobbes-themed animation I created just for this pack.

I had long wanted to release an all-IGS artpack as a way to honor Mears, highlight IGS, and maybe stir other people's interest in trying this format. To lower the barrier to entry, I created my own web-based drawing tool, JoshDraw, which supports a small subset of IGS's features. To my surprise, I successfully recruited seven other people to submit nine static images to include in the pack.

Advertising

Advertisers Expand Their Avoidance to News Sites, Blacklisting Specific Words (msn.com) 72

"The Washington Post's crossword puzzle was recently deemed too offensive for advertisers," reports the Wall Street Journal. "So was an article about thunderstorms. And a ranking of boxed brownie mixes.

"Marketers have long been wary about running ads in the news media, concerned that their brands will land next to pieces about terrorism or plane crashes or polarizing political stories." But "That advertising no-go zone seems to keep widening." It is a headache that news publishers can hardly afford. Many are also grappling with subscriber declines and losses in traffic from Google and other tech platforms, and are now making an aggressive push to change advertisers' perceptions... News organizations recently began publicizing studies that show it really isn't dangerous for a brand to appear near a sensitive story. At the same time, they say blunt campaign-planning tools wind up fencing off even harmless content — and those stories' potentially large audiences — from advertisements. Forty percent of the Washington Post's material is deemed "unsafe" at any given time, said Johanna Mayer-Jones, the paper's chief advertising officer, referencing a study the company did about a year ago. "The revenue implications of that are significant."

The Washington Post's crossword page was blocked by advertisers' technology seven times during a weekslong period in October because it was labeled as politics, news and natural disaster-related material. (A tech company recently said it would ensure the puzzle stops getting blocked, according to the Post.) The thunderstorm story was cut off from ad revenue when a sentence about "flashing and pealing volleys from the artillery of the atmosphere" triggered a warning that it was too much like an "arms and ammunition" story. As for the brownies, a reference to research from "grocery, drug, mass-market" and other retailers was automatically flagged by advertisers for containing the word "drug."

While some brands avoid news entirely, many take what they consider to be a more surgical approach. They create lengthy blacklists of words or websites that the company considers off-limits and employ ad technology to avoid such terms. Over time, blacklists have become extremely detailed, serving as a de facto news-blocking tool, publishers said... The lists are used in automated ad buying. Brands aim their ads not at specific websites, but at online audiences with certain characteristics — people with particular shopping or web-browsing histories, for example. Their ads are matched in real-time to available inventory for thousands of websites... These days, less than 5% of client ad spending for GroupM, one of the largest ad-buying firms in the world, goes to news, according to Christian Juhl, GroupM's former chief executive who revealed spending figures during a congressional hearing over the summer.
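The false positives described above (a "drug" mention in a brownie story, "artillery" in a weather metaphor) follow directly from how context-blind keyword matching works. Here is a minimal sketch; the blocklist terms are invented for illustration and are not any vendor's actual list.

```python
# Toy sketch of context-blind brand-safety keyword blocking.
# Blocklist entries here are hypothetical.
import re

BLOCKLIST = {"drug", "artillery", "collapse"}

def blocked_terms(text: str) -> set[str]:
    """Return blocklist terms appearing anywhere in the text,
    with no regard for the surrounding context."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return words & BLOCKLIST

brownie = "Research from grocery, drug, mass-market and other retailers"
storm = "flashing and pealing volleys from the artillery of the atmosphere"

print(blocked_terms(brownie))  # {'drug'} -- a benign retail phrase is flagged
print(blocked_terms(storm))    # {'artillery'} -- a weather metaphor is flagged
```

Because the check fires on word presence alone, any story mentioning a listed term is fenced off, which is exactly how a crossword page or a brownie ranking ends up classified as unsafe.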

A recent blacklist from Microsoft included about 2,000 words including "collapse," according to the article. ("Microsoft declined to comment.")
AI

'Yes, I am a Human': Bot Detection Is No Longer Working 91

The rise of AI has rendered traditional CAPTCHA tests increasingly ineffective, as bots can now "[solve] these puzzles in milliseconds using artificial intelligence (AI)," reports The Conversation. "How ironic. The tools designed to prove we're human are now obstructing us more than the machines they're supposed to be keeping at bay." The report warns that the imminent arrival of AI agents -- software programs designed to autonomously interact with websites on our behalf -- will further complicate matters. From the report: Developers are continually coming up with new ways to verify humans. Some systems, like Google's reCAPTCHA v3 (introduced in 2018), don't ask you to solve puzzles anymore. Instead, they watch how you interact with a website. Do you move your cursor naturally? Do you type like a person? Humans have subtle, imperfect behaviors that bots still struggle to mimic. Not everyone likes reCAPTCHA v3 because it raises privacy issues -- plus the web company needs to assess user scores to determine who is a bot, and the bots can beat the system anyway. There are alternatives that use similar logic, such as "slider" puzzles that ask users to move jigsaw pieces around, but these too can be overcome.
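As a rough illustration of the behavior-based approach reCAPTCHA v3 popularized, a site might combine interaction signals into a single score and threshold it. The signals, weights, and cutoff below are all invented for illustration; real systems use far richer models and server-side verification.

```python
# Hypothetical behavior-based humanity scoring. Signals and weights
# are made up for illustration; this is not how any real system scores.

def humanity_score(cursor_jitter: float,
                   typing_cadence_var: float,
                   time_on_page_s: float) -> float:
    """Return a score in [0, 1]; higher means more human-like."""
    score = 0.0
    score += 0.4 if cursor_jitter > 0.1 else 0.0        # bots move in straight lines
    score += 0.4 if typing_cadence_var > 0.05 else 0.0  # bots type at fixed intervals
    score += 0.2 if time_on_page_s > 2.0 else 0.0       # bots submit instantly
    return score

THRESHOLD = 0.5  # the site operator picks the cutoff per risk tolerance

print(humanity_score(0.3, 0.2, 10.0) >= THRESHOLD)  # True: human-like session
print(humanity_score(0.0, 0.0, 0.1) >= THRESHOLD)   # False: scripted session
```

The weakness the article describes falls out of this design: once a bot learns to synthesize jittery cursor paths and variable typing cadence, every signal on the list can be faked.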

Some websites are now turning to biometrics to verify humans, such as fingerprint scans or voice recognition, while face ID is also a possibility. Biometrics are harder for bots to fake, but they come with their own problems -- privacy concerns, expensive tech and limited access for some users, say because they can't afford the relevant smartphone or can't speak because of a disability. The imminent arrival of AI agents will add another layer of complexity. It will mean we increasingly want bots to visit sites and do things on our behalf, so web companies will need to start distinguishing between "good" bots and "bad" bots. This area still needs a lot more consideration, but digital authentication certificates are proposed as one possible solution.

In sum, Captcha is no longer the simple, reliable tool it once was. AI has forced us to rethink how we verify people online, and it's only going to get more challenging as these systems get smarter. Whatever becomes the next technological standard, it's going to have to be easy to use for humans, but one step ahead of the bad actors. So the next time you find yourself clicking on blurry traffic lights and getting infuriated, remember you're part of a bigger fight. The future of proving humanity is still being written, and the bots won't be giving up any time soon.
AI

San Francisco Unicorn 'Scale AI' Accused of Wage Theft (sfgate.com) 27

The company provides training data to top AI companies including OpenAI and Meta, according to its website. Founded in 2016, San Francisco-based Scale AI now has over 900 employees, having grown well beyond "unicorn" status with over $1.35 billion in investments. In May the company's valuation was over $14 billion, with investors including Amazon, Meta, Nvidia, Cisco, Intel, and AMD (as well as earlier investments from Y Combinator and $100 million from Peter Thiel's Founders Fund). SFGate calls it "a buzzy San Francisco startup with high-dollar ties across the tech industry".

But SFGate also reports Scale AI "was sued Tuesday by a former worker with allegations that the company is committing wage theft and misclassifying workers." Steve McKinney filed the suit against Scale and several top executives, including 27-year-old billionaire CEO Alexandr Wang, in San Francisco Superior Court. With the filing, the former contractor aims to be a lead plaintiff for a class-action lawsuit against Scale; a judge will need to certify his proposed class of current and former contractors within California...

McKinney, whose complaint says he was paid on an hourly basis and worked on a project eventually sold to Meta, is accusing Scale of amassing its clout and cash by exploiting workers. "Scale AI is the sordid underbelly propping up the generative AI industry," the complaint says, before rattling off a list of allegations about its treatment of contractors. The document accuses Scale of bait-and-switch hiring promises; demanding off-the-clock, unpaid work; denying overtime pay; and unfairly booting contractors from projects...

The Tuesday complaint calls Scale's control over its contractors "Orwellian." The company makes contractors download a tool to track much of their computer use, including by taking periodic screenshots, the suit alleges. The lawsuit also alleges that Scale reassigns workers to projects with varying pay and docks pay if a task takes longer than it was supposed to, and contends that Scale is in violation of California's "ABC" test, which governs who may be designated an "independent contractor." It argues that contracted "Taskers" like McKinney should be classified as employees instead...

The complaint, along with arguing for class-action certification, seeks restitution, punitive damages and changes to Scale's worker classification model.

The article adds that "Per Fortune, Scale's armies of contractors marked up images for Cruise and Waymo to help autonomous cars understand their surroundings..."
Privacy

UnitedHealthcare's Optum Left an AI Chatbot, Used By Employees To Ask Questions About Claims, Exposed To the Internet (techcrunch.com) 22

Healthcare giant Optum has restricted access to an internal AI chatbot used by employees after a security researcher found it was publicly accessible online, and anyone could access it using only a web browser. TechCrunch: The chatbot, which TechCrunch has seen, allowed employees to ask questions about how to handle patient health insurance claims and disputes for members in line with the company's standard operating procedures (SOPs).

While the chatbot did not appear to contain or produce sensitive personal or protected health information, its inadvertent exposure comes at a time when its parent company, health insurance conglomerate UnitedHealthcare, faces scrutiny for its use of artificial intelligence tools and algorithms to allegedly override doctors' medical decisions and deny patient claims.

Mossab Hussein, chief security officer and co-founder of cybersecurity firm spiderSilk, alerted TechCrunch to the publicly exposed internal Optum chatbot, dubbed "SOP Chatbot." Although the tool was hosted on an internal Optum domain and could not be accessed from its web address, its IP address was public and accessible from the internet and did not require users to enter a password.

AI

Google Unveils Gemini 2.0 (venturebeat.com) 14

Google unveiled Gemini 2.0 yesterday, almost exactly one year after Google's initial Gemini launch. The new release offers enhanced multimodal capabilities like native image and audio output, real-time tool use, and advanced reasoning to enable agentic experiences, such as acting as a universal assistant or research companion. VentureBeat reports: During a recent press conference, Tulsee Doshi, director of product management for Gemini, outlined the system's enhanced capabilities while demonstrating real-time image generation and multilingual conversations. "Gemini 2.0 brings enhanced performance and new capabilities like native image and multilingual audio generation," Doshi explained. "It also has native intelligent tool use, which means that it can directly access Google products like search or even execute code."

The initial release centers on Gemini 2.0 Flash, an experimental version that Google claims operates at twice the speed of its predecessor while surpassing the capabilities of more powerful models. This represents a significant technical achievement, as previous speed improvements typically came at the cost of reduced functionality. Perhaps most significantly, Google introduced three prototype AI agents built on Gemini 2.0's architecture that demonstrate the company's vision for AI's future. Project Astra, an updated universal AI assistant, showcased its ability to maintain complex conversations across multiple languages while accessing Google tools and maintaining contextual memory of previous interactions. [...]

For developers and enterprise customers, Google introduced Project Mariner and Jules, two specialized AI agents designed to automate complex technical tasks. Project Mariner, demonstrated as a Chrome extension, achieved an impressive 83.5% success rate on the WebVoyager benchmark for real-world web tasks -- a significant improvement over previous attempts at autonomous web navigation. Supporting these advances is Trillium, Google's sixth-generation Tensor Processing Unit (TPU), which becomes generally available to cloud customers today. The custom AI accelerator represents a massive investment in computational infrastructure, with Google deploying over 100,000 Trillium chips in a single network fabric.

Open Source

Slashdot's Interview with Bruce Perens: How He Hopes to Help 'Post Open' Developers Get Paid (slashdot.org) 61

Bruce Perens, original co-founder of the Open Source Initiative, has responded to questions from Slashdot readers about a new alternative he's developing that hopefully helps "Post Open" developers get paid.

But first, "One of the things that's clear from the Slashdot patter is that people are not aware of what I've been doing, in general," Perens says. "So, let's start by filling that in..."

Read on for the rest of his wide-ranging answers...
Security

Craigslist Founder Gives $300M to Fund Critical US Infrastructure Cybersecurity (yahoo.com) 16

Craig Newmark "is alarmed about potential cybersecurity risks in the U.S.," according to Yahoo Finance. The 71-year-old Craigslist founder says "our country is under attack now" in a new interview with Yahoo Finance executive editor Brian Sozzi on his Opening Bid podcast.

But Newmark also revealed what he's doing about it: [H]e started Craig Newmark Philanthropies to primarily invest in projects to protect critical American infrastructure from cyberattacks. He told Sozzi he is now spending $200 million more to address the issue, on top of an initial $100 million pledge revealed in September of this year. He encouraged other wealthy people to join him in the fight against cyberattacks. "I tell people, 'Hey, the people who protect us could use some help. The amounts of money comparatively are small, so why not help out,'" he said... The need for municipalities and other government entities to act rather than react remains paramount, warns Newmark. "I think a lot about this," said Newmark.

"I've started to fund networks of smart volunteers who can help people protect infrastructure, particularly [for] the small companies and utilities across the country who are responsible for most of our electrical and power supplies, transportation infrastructure, [and] food distribution.... A lot of these systems have no protection, so an adversary could just compromise them, saying unless you do what we need, we can start shutting off these things," he continued. Should that happen, recovery "could take weeks and weeks without your water supply or electricity."

A web page at Craig Newmark Philanthropies offers more details: Craig was part of the whole "duck and cover" thing, in the '50s and '60s, and realizes that we need civil defense in the cyber domain, "cyber civil defense." This is patriotism, for regular people.

He's committed $100 million to form a Cyber Civil Defense network of groups who are starting to protect the country from cyber threats. Attacks on our power grids, our cyber infrastructure and even the internet-connected gadgets and appliances in our homes are real. If people think that's alarmist, tell them to "Blame Craig." The core of Cyber Civil Defense [launched in 2022] includes groups like Aspen Digital, Global Cyber Alliance, and Consumer Reports, focusing on citizen cyber education and literacy, cyber tool development, and cybersecurity workforce programs aimed at diversifying the growing field.

It's already made significant investments in groups like the Ransomware Task Force and threat watchdog group Shadowserver Foundation...
AI

Microsoft Copilot Customers Discover It Can Let Them Read HR Documents, CEO Emails 53

According to Business Insider (paywalled), Microsoft's Copilot tool inadvertently let customers access sensitive information, such as CEO emails and HR documents. Now, Microsoft is working to fix the situation, deploying new tools and a guide to address the privacy concerns. The story was highlighted by Salesforce CEO Marc Benioff. From the report: These updates are designed "to identify and mitigate oversharing and ongoing governance concerns," the company said in a blueprint for Microsoft's 365 productivity software suite. [...] Copilot's magic -- its ability to create a 10-slide road-mapping presentation, or to summon a list of your company's most profitable products -- works by browsing and indexing all your company's internal information, like the web crawlers used by search engines. IT departments at some companies have set up lax permissions for who can access internal documents -- selecting "allow all" for the company's HR software, say, rather than going through the trouble of selecting specific users.

That didn't create much of a problem because there wasn't a tool that an average employee could use to identify and retrieve sensitive company documents -- until Copilot. As a result, some customers have deployed Copilot only to discover that it can let employees read an executive's inbox or access sensitive HR documents. "Now when Joe Blow logs into an account and kicks off Copilot, they can see everything," a Microsoft employee familiar with customer complaints said. "All of a sudden Joe Blow can see the CEO's emails."
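The failure mode described above is straightforward to sketch: a retrieval tool that faithfully enforces document permissions still surfaces everything once an ACL has been set to "allow all." The document names, users, and data structure below are hypothetical, not Microsoft's implementation.

```python
# Toy sketch of ACL-respecting retrieval defeated by an "allow all" ACL.
# All names and data are hypothetical.

DOCS = {
    "ceo_inbox.eml":    {"acl": {"ceo"}},          # correctly restricted
    "hr_salaries.xlsx": {"acl": "ALL"},            # IT picked "allow all"
    "q3_roadmap.pptx":  {"acl": {"pm", "ceo"}},    # correctly restricted
}

def searchable_docs(user: str) -> list[str]:
    """Return the documents an indexing assistant may surface for this user.
    The permission check itself is correct -- the problem is the ACL data."""
    return [name for name, meta in DOCS.items()
            if meta["acl"] == "ALL" or user in meta["acl"]]

print(searchable_docs("joe_blow"))  # ['hr_salaries.xlsx'] -- the HR file leaks
```

Before an assistant existed to query the index, the over-broad ACL was latent; the assistant doesn't break the permission model, it just makes the existing misconfiguration trivially exploitable.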
Education

Can Google Scholar Survive the AI Revolution? 44

An anonymous reader quotes a report from Nature: Google Scholar -- the largest and most comprehensive scholarly search engine -- turns 20 this week. Over its two decades, some researchers say, the tool has become one of the most important in science. But in recent years, competitors that use artificial intelligence (AI) to improve the search experience have emerged, as have others that allow users to download their data. The impact that Google Scholar -- which is owned by web giant Google in Mountain View, California -- has had on science is remarkable, says Jevin West, a computational social scientist at the University of Washington in Seattle who uses the database daily. But "if there was ever a moment when Google Scholar could be overthrown as the main search engine, it might be now, because of some of these new tools and some of the innovation that's happening in other places," West says.

Many of Google Scholar's advantages -- free access, breadth of information and sophisticated search options -- "are now being shared by other platforms," says Alberto Martin Martin, a bibliometrics researcher at the University of Granada in Spain. AI-powered chatbots such as ChatGPT and other tools that use large language models have become go-to applications for some scientists when it comes to searching, reviewing and summarizing the literature. And some researchers have swapped Google Scholar for them. "Up until recently, Google Scholar was my default search," says Aaron Tay, an academic librarian at Singapore Management University. It's still top of his list, but "recently, I started using other AI tools." Still, given Google Scholar's size and how deeply entrenched it is in the scientific community, "it would take a lot to dethrone," adds West. Anurag Acharya, co-founder of Google Scholar, at Google, says he welcomes all efforts to make scholarly information easier to find, understand and build on. "The more we can all do, the better it is for the advancement of science."
Acharya says Google Scholar uses AI to rank articles, suggest further search queries and recommend related articles. What Google Scholar does not yet provide are AI-generated summaries of search query results. According to Acharya, the company has yet to find "an effective solution" for summarizing conclusions from multiple papers in a brief manner that preserves all the important context.
Businesses

Chegg, Down From $12 Billion To $159 Million In Value, Lays Off Hundreds; CEO Blames Google and AI (sfgate.com) 23

Chegg, the online education company, is laying off 319 workers as it struggles to compete against modern AI chatbots. SFGATE reports: Chegg announced the new layoff round, which will hit 21% of its workforce, in a filing with the Securities and Exchange Commission on Tuesday. The company delivered the news alongside another brutal quarterly financial report; Chegg lost more than $212 million from July through September. CEO Nathan Schultz, in prepared remarks accompanying the report, expressed some optimism but called it a "trying time" for his company. Chegg provides grammar and plagiarism checkers, plus course-by-course study help, along with much-used textbook solution guides.

"Technology shifts have created headwinds for our industry and Chegg's business specifically," Schultz said. "Recent advancements in the AI search experience and the adoption of free and paid generative AI services by students, have resulted in challenges for Chegg. These factors are adversely affecting our business outlook and are requiring us to refocus and adjust the size of our business." He specifically called out Google's AI Overviews, a recent change to search results that pulls information from news outlets and sites like Chegg and summarizes it above the classic blue links. Schultz said that his team believes Google is "shifting from being a search origination point to the destination" in an attempt to keep market share.

Schultz also blamed generative AI chatbots like OpenAI's ChatGPT, saying that students see the tool and others like it as "strong alternatives" to Chegg. Web traffic has dropped sharply as a result, Schultz wrote. A Wall Street Journal story published Saturday said Chegg "is trying to avoid becoming [ChatGPT's] first major victim" and that the company had lost more than 500,000 subscribers, some of whom paid almost $20 a month, since the chatbot's 2022 launch. Despite the negative business impact, it seems Chegg is experimenting with new tech. Schultz said in the remarks that the company had formed an "arena" to evaluate AI models and aims to "integrate AI into the full learning journey."

AI

OpenAI Nears Launch of AI Agent Tool To Automate Tasks For Users (yahoo.com) 26

An anonymous reader quotes a report from Bloomberg: OpenAI is preparing to launch a new artificial intelligence agent codenamed "Operator" that can use a computer to take actions on a person's behalf (Warning: source may be paywalled; alternative source), such as writing code or booking travel [...]. In a staff meeting on Wednesday, OpenAI's leadership announced plans to release the tool in January as a research preview and through the company's application programming interface for developers [...]. The one nearest completion will be a general-purpose tool that executes tasks in a web browser, one of the people said.

OpenAI Chief Executive Officer Sam Altman hinted at the shift to agents in response to a question last month during an Ask Me Anything session on Reddit. "We will have better and better models," Altman wrote. "But I think the thing that will feel like the next giant breakthrough will be agents." The move to release an agentic AI tool also comes as OpenAI and its competitors have seen diminishing returns from their costly efforts to develop more advanced AI models.

AI

GitHub Copilot Moves Beyond OpenAI Models To Support Claude 3.5, Gemini 9

GitHub Copilot will switch from using exclusively OpenAI's GPT models to a multi-model approach, adding Anthropic's Claude 3.5 Sonnet and Google's Gemini 1.5 Pro. Ars Technica reports: First, Anthropic's Claude 3.5 Sonnet will roll out to Copilot Chat's web and VS Code interfaces over the next few weeks. Google's Gemini 1.5 Pro will come a bit later. Additionally, GitHub will soon add support for a wider range of OpenAI models, including o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to fit their needs -- and organizations will be able to choose which models will be usable by team members.

The new approach makes sense for users, as certain models are better at certain languages or types of tasks. "There is no one model to rule every scenario," wrote [GitHub CEO Thomas Dohmke]. "It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice." It starts with the web-based and VS Code Copilot Chat interfaces, but it won't stop there. "From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot's surface areas and functions soon," Dohmke wrote. There are a handful of additional changes coming to GitHub Copilot, too, including extensions, the ability to manipulate multiple files at once from a chat with VS Code, and a preview of Xcode support.
GitHub also introduced "Spark," a natural language-based app development tool that enables both non-coders and coders to create and refine applications using conversational prompts. It's currently in an early preview phase, with a waitlist available for those who are interested.
AI

Can We Turn Off AI Tools From Google, Microsoft, Apple, and Meta? Sometimes... (seattletimes.com) 80

"Who asked for any of this in the first place?" wonders a New York Times consumer-tech writer. (Alternate URL here.) "Judging from the feedback I get from readers, lots of people outside the tech industry remain uninterested in AI — and are increasingly frustrated with how difficult it has become to ignore." The companies rely on user activity to train and improve their AI systems, so they are testing this tech inside products we use every day. Typing a question such as "Is Jay-Z left-handed?" in Google will produce an AI-generated summary of the answer on top of the search results. And whenever you use the search tool inside Instagram, you may now be interacting with Meta's chatbot, Meta AI. In addition, when Apple's suite of AI tools, Apple Intelligence, arrives on iPhones and other Apple products through software updates this month, the tech will appear inside the buttons we use to edit text and photos.

The proliferation of AI in consumer technology has significant implications for our data privacy, because companies are interested in stitching together and analyzing our digital activities, including details inside our photos, messages and web searches, to improve AI systems. For users, the tools can simply be an annoyance when they don't work well. "There's a genuine distrust in this stuff, but other than that, it's a design problem," said Thorin Klosowski, a privacy and security analyst at the Electronic Frontier Foundation, a digital rights nonprofit, and a former editor at Wirecutter, the reviews site owned by The New York Times. "It's just ugly and in the way."

It helps to know how to opt out. After I contacted Microsoft, Meta, Apple and Google, they offered steps to turn off their AI tools or data collection, where possible. I'll walk you through the steps.

The article suggests logged-in Google users can toggle settings at myactivity.google.com. (Some browsers also have extensions that force Google's search results to stop inserting an AI summary at the top.) And you can also tell Edge to remove Copilot from its sidebar at edge://settings.

But "there is no way for users to turn off Meta AI," Meta said. "Only in regions with stronger data protection laws, including the EU and Britain, can people deny Meta access to their personal information to build and train Meta's AI." On Instagram, for instance, people living in those places can click on "settings," then "about" and "privacy policy," which will lead to opt-out instructions. Everyone else, including users in the United States, can visit the Help Center on Facebook to ask Meta only to delete data used by third parties to develop its AI.
By comparison, when Apple releases new AI services this month, users will have to opt in, according to the article. "If you change your mind and no longer want to use Apple Intelligence, you can go back into the settings and toggle the Apple Intelligence switch off, which makes the tools go away."
IOS

iOS and Android Security Scare: Two Apps Found Supporting 'Pig Butchering' Scheme (forbes.com) 31

"Pig Butchering Alert: Fraudulent Trading App targeted iOS and Android users."

That's the title of a new report released this week by cybersecurity company Group-IB revealing the official Apple App Store and Google Play store offered apps that were actually one part of a larger fraud campaign. "To complete the scam, the victim is asked to fund their account... After a few seemingly successful trades, the victim is persuaded to invest more and more money. The account balance appears to grow rapidly. However, when the victim attempts to withdraw funds, they are unable to do so."

Forbes reports: Group-IB determined that the frauds would begin with a period of social engineering reconnaissance and entrapment, during which the trust of the potential victim was gained through either a dating app, social media app or even a cold call. The attackers spent weeks on each target. Only when this "fattening up" process had reached a certain point would the fraudsters make their next move: recommending they download the trading app from the official App Store concerned.

When it comes to the iOS app, which is the one that the report focused on, Group-IB researchers said that the app remained on the App Store for several weeks before being removed, at which point the fraudsters switched to phishing websites to distribute both iOS and Android apps. The use of official app stores, albeit only fleetingly as Apple and Google removed the fake apps in due course, bestowed a sense of authenticity on the operation, as people trust both the Apple and Google ecosystems to protect them from potentially dangerous apps.

"The use of web-based applications further conceals the malicious activity," according to the researchers, "and makes detection more difficult." [A]fter the download is complete, the application cannot be launched immediately. The victim is then instructed by the cybercriminals to manually trust the Enterprise developer profile. Once this step is completed, the fraudulent application becomes operational... Once a user registers with the fraudulent application, they are tricked into completing several steps. First, they are asked to upload identification documents, such as an ID card or passport. Next, the user is asked to provide personal information, followed by job-related details...

The first discovered application, distributed through the Apple App Store, functions as a downloader, merely retrieving and displaying a web-app URL. In contrast, the second application, downloaded from phishing websites, already contains the web-app within its assets. We believe this approach was deliberate, since the first app was available in the official store, and the cybercriminals likely sought to minimise the risk of detection. As previously noted, the app posed as a tool for mathematical formulas, and including personal trading accounts within an iOS app would have raised immediate suspicion.

The app (which only runs on mobile phones) first launches a fake activity with formulas and graphics, according to the researchers. "We assume that this condition must bypass Apple's checks before being published to the store. As we can see, this simple trick allows cybercriminals to upload their fraudulent application to the Apple Store." They argue their research "reinforces the need for continued review of app store submissions to prevent such scams from reaching unsuspecting victims". But it also highlights "the importance of vigilance and end-user education, even when dealing with seemingly trustworthy apps..."

"Our investigation began with an analysis of Android applications at the request of our client. The client reported that a user had been tricked into installing the application as part of a stock investment scam. During our research, we uncovered a list of similar fraudulent applications, one of which was available on the Google Play Store. These apps were designed to display stock-related news and articles, giving them a false sense of legitimacy."
