Printer

HP Is Adding AI To Its Printers

An anonymous reader quotes a report from PCWorld, written by Michael Crider: The latest perpetrator of questionable AI branding? HP. The company is introducing "Print AI," what it calls the "industry's first intelligent print experience for home, office, and large format printing." What does that mean? It's essentially a new beta software driver package for some HP printers. According to the press release, it can deliver "Perfect Output" -- capital P capital O -- a branded tool that reformats the contents of a page in order to more ideally fit it onto physical paper.

Despite my skeptical tone, this is actually a pretty cool idea. "Perfect Output can detect unwanted content like ads and web text, printing only the desired text and images, saving time, paper, and ink." That's neat! If the web page you're printing doesn't offer a built-in print format, the software will make one for you. It'll also help organize printed spreadsheets and images. But I don't see anything in this software that's actually AI -- or even machine learning, for that matter. This is applying the same tech (functionally, if not necessarily the same code) as the "reader mode" formatting we've seen in browsers for about a decade now. Take the text and images of a page, strip out everything else that's unnecessary, and present it as efficiently as possible. [...]
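The reader-mode technique the author describes can be sketched in a few lines: keep the text inside article-like tags, drop navigation, scripts, and other chrome. This is a toy illustration of the general idea, not HP's or any browser's actual implementation; the tag lists are my own guesses at what counts as "desired" content.

```python
from html.parser import HTMLParser

# Tags whose text we keep, and tags whose entire subtree we drop (illustrative lists).
KEEP = {"p", "h1", "h2", "h3", "li"}
DROP = {"script", "style", "nav", "aside", "footer", "iframe"}

class ReaderMode(HTMLParser):
    """Crude reader-mode extractor: keeps article-like text, drops page chrome."""
    def __init__(self):
        super().__init__()
        self.drop_depth = 0   # >0 while inside a dropped subtree
        self.keep_depth = 0   # >0 while inside a kept tag
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in DROP:
            self.drop_depth += 1
        elif tag in KEEP:
            self.keep_depth += 1

    def handle_endtag(self, tag):
        if tag in DROP and self.drop_depth:
            self.drop_depth -= 1
        elif tag in KEEP and self.keep_depth:
            self.keep_depth -= 1

    def handle_data(self, data):
        # Keep text only when inside a kept tag and not inside a dropped subtree.
        if self.keep_depth and not self.drop_depth:
            text = data.strip()
            if text:
                self.chunks.append(text)

def extract(html: str) -> str:
    parser = ReaderMode()
    parser.feed(html)
    return "\n".join(parser.chunks)

page = "<nav>Home | About</nav><p>Real article text.</p><aside><p>Ad: buy now!</p></aside>"
print(extract(page))  # -> Real article text.
```

The ad inside the `<aside>` is skipped even though it sits in a `<p>`, because the drop check takes priority over the keep check.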

The press release does mention that support and formatting tasks can be accomplished with "simple conversational prompts," which at least might be leveraging some of the large language models that have become synonymous with AI as consumers understand it. But based on the description, it's more about selling you something than helping you. "Customers can choose to print or explore a curated list of partners that offer unique photo printing capabilities, gift certificates to be printed on the card, and so much more." Whoopee.
The Internet

Cloudflare's New Marketplace Will Let Websites Charge AI Bots For Scraping (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Cloudflare announced plans on Monday to launch a marketplace in the next year where website owners can sell AI model providers access to scrape their site's content. The marketplace is the final step of Cloudflare CEO Matthew Prince's larger plan to give publishers greater control over how and when AI bots scrape their websites. "If you don't compensate creators one way or another, then they stop creating, and that's the bit which has to get solved," said Prince in an interview with TechCrunch.

As the first step in its new plan, on Monday, Cloudflare launched free observability tools for customers, called AI Audit. Website owners will get a dashboard to view analytics on why, when, and how often AI models are crawling their sites for information. Cloudflare will also let customers block AI bots from their sites with the click of a button. Website owners can block all web scrapers using AI Audit, or let certain web scrapers through if they have deals or find their scraping beneficial. A demo of AI Audit shared with TechCrunch showed how website owners can use the tool, which is able to see where each scraper that visits your site comes from, and offers selective windows to see how many times scrapers from OpenAI, Meta, Amazon, and other AI model providers are visiting your site. [...]
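At its core, the dashboard AI Audit describes amounts to classifying incoming requests by crawler and tallying them per provider, with an allow/block decision layered on top. A minimal sketch, with invented user-agent substrings and none of Cloudflare's real detection logic:

```python
from collections import Counter

# Known AI crawler user-agent substrings mapped to providers.
# These names are illustrative, not Cloudflare's actual list.
AI_CRAWLERS = {"GPTBot": "OpenAI", "meta-externalagent": "Meta", "Amazonbot": "Amazon"}

def audit(logs, allowlist=frozenset()):
    """Tally AI-bot hits per provider and decide allow/block per request."""
    hits, decisions = Counter(), []
    for user_agent in logs:
        provider = next((p for ua, p in AI_CRAWLERS.items() if ua in user_agent), None)
        if provider:
            hits[provider] += 1
            # Block by default; let through only providers the site owner allowlisted.
            decisions.append((provider, "allow" if provider in allowlist else "block"))
        else:
            decisions.append((None, "allow"))  # ordinary traffic passes through
    return hits, decisions

logs = ["Mozilla/5.0 GPTBot/1.0", "Mozilla/5.0", "Amazonbot/0.1", "GPTBot/1.0"]
hits, decisions = audit(logs, allowlist={"Amazon"})
print(hits)  # OpenAI seen twice, Amazon once
```

A site owner with a licensing deal would add that provider to the allowlist; everyone else gets the one-click block described above.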

Chrome

ChromeOS 128 Adds Snap Layouts For Apps, OCR Text Extraction, and Improved Settings (neowin.net)

Google's new ChromeOS 128 update introduces a feature similar to Windows 11's Snap layouts. Called Snap Groups, the feature enables users to organize on-screen apps in various fullscreen layouts. "When you pair two windows for split-screen display, ChromeOS now forms a Snap group," explains the ChromeOS team. "As a Snap group, you can bring the windows back into focus together, resize them simultaneously, and move them both as a group."

Other notable features of ChromeOS 128 include Optical Character Recognition (OCR), ChromeVox support for the Magnifier tool, isolated web apps (IWA), and improved settings for the camera and microphone on Chromebook devices.

You can view the release notes on the support document here.
Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com)

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
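The defense that Copilot Studio's bypass defeated is target validation: refusing server-side requests aimed at internal addresses such as the IMDS endpoint (169.254.169.254). Below is a simplified sketch of that kind of check -- not Microsoft's code, and deliberately incomplete: a real SSRF defense must also resolve hostnames and re-validate after every redirect, or an attacker can point a DNS name at an internal IP.

```python
import ipaddress
from urllib.parse import urlparse

# Hostnames to refuse outright (illustrative).
BLOCKED_HOSTS = {"localhost", "metadata.internal"}

def is_safe_target(url: str) -> bool:
    """Reject URLs aimed at loopback, private, or link-local targets,
    including the cloud Instance Metadata Service at 169.254.169.254."""
    host = urlparse(url).hostname or ""
    if host.lower() in BLOCKED_HOSTS:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a raw IP literal; a real check would resolve DNS here
        # and test every resolved address before allowing the request.
        return True
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)

print(is_safe_target("http://169.254.169.254/metadata/instance"))  # False (link-local IMDS)
print(is_safe_target("https://example.com/page"))                  # True
```

The Tenable finding is essentially that Copilot Studio's equivalent of this check could be bypassed, letting the tool's HTTP-request feature reach IMDS and internal Cosmos DB hosts.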
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Google

Google's AI Search Gives Sites Dire Choice: Share Data or Die (bloomberg.com)

An anonymous reader shares a report: Google now displays convenient AI-based answers at the top of its search pages -- meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can't afford to block Google's AI from summarizing their content. That's because the Google tool that sifts through web content to come up with its AI answers is the same one that keeps track of web pages for search results, according to publishers. Blocking Alphabet's Google the way sites have blocked some of its AI competitors would also hamper a site's ability to be discovered online.

Google's dominance in search -- which a federal court ruled last week is an illegal monopoly -- is giving it a decisive advantage in the brewing AI wars, which search startups and publishers say is unfair as the industry takes shape. The dilemma is particularly acute for publishers, which face a choice between offering up their content for use by AI models that could make their sites obsolete and disappearing from Google search, a top source of traffic.

AI

NIST Releases an Open-Source Platform for AI Safety Testing (scmagazine.com)

America's National Institute of Standards and Technology (NIST) has released a new open-source software tool called Dioptra for testing the resilience of machine learning models to various types of attacks.

"Key features that are new from the alpha release include a new web-based front end, user authentication, and provenance tracking of all the elements of an experiment, which enables reproducibility and verification of results," a NIST spokesperson told SC Media: Previous NIST research identified three main categories of attacks against machine learning algorithms: evasion, poisoning, and oracle. Evasion attacks aim to trigger an inaccurate model response by manipulating the data input (for example, by adding noise); poisoning attacks aim to impede the model's accuracy by altering its training data, leading to incorrect associations; and oracle attacks aim to "reverse engineer" the model to gain information about its training dataset or parameters, according to NIST.

The free platform enables users to determine to what degree attacks in the three categories mentioned will affect model performance and can also be used to gauge the use of various defenses such as data sanitization or more robust training methods.
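Of the three attack categories, evasion is the easiest to illustrate: perturb an input until the model's prediction flips. The toy linear classifier and attack below capture the spirit of the idea (in the direction of FGSM-style attacks); Dioptra's actual attack implementations are far more sophisticated, and everything here is my sketch rather than its code.

```python
def classify(features, weights, bias=0.0):
    """Toy linear classifier: label 1 if the weighted sum is positive, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

def evasion_attack(features, weights, step=0.1, max_iters=200):
    """Nudge each feature against the sign of its weight until the label flips.
    This mirrors the 'manipulate the input' idea behind evasion attacks."""
    x = list(features)
    original = classify(x, weights)
    for _ in range(max_iters):
        if classify(x, weights) != original:
            return x  # label flipped: evasion succeeded
        # Move each feature in the direction that lowers the decision score.
        x = [xi - step * (1 if w > 0 else -1) for xi, w in zip(x, weights)]
    return x

weights = [0.8, -0.5]
x = [1.0, 0.2]
adv = evasion_attack(x, weights)
print(classify(x, weights), classify(adv, weights))  # labels differ after the attack
```

A platform like Dioptra would run many such attacks against a real model and report how far accuracy degrades, with and without defenses applied.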

The open-source testbed has a modular design to support experimentation with different combinations of factors such as different models, training datasets, attack tactics and defenses. The newly released 1.0.0 version of Dioptra comes with a number of features to maximize its accessibility to first-party model developers, second-party model users or purchasers, third-party model testers or auditors, and researchers in the ML field alike. Along with its modular architecture design and user-friendly web interface, Dioptra 1.0.0 is also extensible and interoperable with Python plugins that add functionality... Dioptra tracks experiment histories, including inputs and resource snapshots that support traceable and reproducible testing, which can unveil insights that lead to more effective model development and defenses.
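The modular design described above can be pictured as a registry mapping component kinds (models, attacks, defenses) to interchangeable implementations, so experiments mix and match them. The decorator API below is my illustration of that pattern, not Dioptra's actual plugin interface.

```python
# A minimal plugin registry in the spirit of a modular ML testbed.
REGISTRY = {"model": {}, "attack": {}, "defense": {}}

def register(kind, name):
    """Decorator that files a component under its kind and name."""
    def wrap(fn):
        REGISTRY[kind][name] = fn
        return fn
    return wrap

@register("attack", "label_flip")
def label_flip(dataset):
    """Toy poisoning attack: invert every label in the training set."""
    return [(x, 1 - y) for x, y in dataset]

@register("defense", "sanitize")
def sanitize(dataset):
    """Toy data sanitization: drop examples whose labels are out of range."""
    return [(x, y) for x, y in dataset if y in (0, 1)]

data = [([0.1], 0), ([0.9], 1)]
poisoned = REGISTRY["attack"]["label_flip"](data)
print(poisoned)  # every label inverted
```

Swapping in a different attack or defense is then just a registry lookup, which is what makes combinatorial experiments (model x attack x defense) cheap to run and to reproduce.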

NIST also published final versions of three "guidance" documents, according to the article. "The first tackles 12 unique risks of generative AI along with more than 200 recommended actions to help manage these risks. The second outlines Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, and the third provides a plan for global cooperation in the development of AI standards."

Thanks to Slashdot reader spatwei for sharing the news.
AI

Perplexity AI Will Share Revenue With Publishers After Plagiarism Accusations (cnbc.com)

An anonymous reader quotes a report from CNBC: Perplexity AI on Tuesday debuted a revenue-sharing model for publishers after more than a month of plagiarism accusations. Media outlets and content platforms including Fortune, Time, Entrepreneur, The Texas Tribune, Der Spiegel and WordPress.com are the first to join the company's "Publishers Program." The announcement follows an onslaught of controversy in June, when Forbes said it found a plagiarized version of its paywalled original reporting within Perplexity AI's Pages tool, with no reference to the media outlet besides a small "F" logo at the bottom of the page. Weeks later, Wired said it also found evidence of Perplexity plagiarizing Wired stories, and reported that an IP address "almost certainly linked to Perplexity and not listed in its public IP range" visited its parent company's websites more than 800 times in a three-month span.

Under the new partner program, any time a user asks a question and Perplexity generates advertising revenue from citing one of the publisher's articles in its answer, Perplexity will share a flat percentage of that revenue. That percentage applies on a per-article basis, Dmitry Shevelenko, Perplexity's chief business officer, told CNBC in an interview -- meaning that if three articles from one publisher were used in one answer, the partner would receive "triple the revenue share." Shevelenko confirmed that the flat rate is a double-digit percentage but declined to provide specifics. Shevelenko told CNBC that more than a dozen publishers, including "major newspaper dailies and companies that own them," had reached out with interest less than two hours after the program debuted. The company's goal, he said, is to have 30 publishers enrolled by the end of the year, and Perplexity is looking to partner with some of the publishers' ad sales teams so they can sell ads "against all Perplexity inventory."
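The per-article arithmetic Shevelenko describes is simple to model: each cited article earns its publisher one flat cut of the answer's ad revenue. The 25% rate and publisher names below are placeholders -- Perplexity has only said the real rate is a double-digit percentage.

```python
def payout_per_publisher(citations, ad_revenue, share_rate=0.25):
    """Per-article revenue share: each cited article adds one flat cut of the
    answer's ad revenue to its publisher's payout. share_rate is a made-up
    placeholder, not Perplexity's actual (undisclosed) rate."""
    payouts = {}
    for publisher in citations:
        payouts[publisher] = payouts.get(publisher, 0.0) + share_rate * ad_revenue
    return payouts

# One answer cites three articles from "TimeX" and one from "TribuneY"
# (hypothetical publisher names), earning $1.00 of ad revenue.
print(payout_per_publisher(["TimeX", "TimeX", "TimeX", "TribuneY"], ad_revenue=1.00))
# TimeX gets triple the single-article share, matching the quoted example.
```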

"When Perplexity earns revenue from an interaction where a publisher's content is referenced, that publisher will also earn a share," Perplexity wrote in a blog post, adding that the company will offer publishers API credits and also work with ScalePost.ai to provide analytics to provide "deeper insights into how Perplexity cites their content." Shevelenko told CNBC that Perplexity began engaging with publishers in January and solidified ideas for how its revenue-sharing program would work later in the first quarter of 2024. He said five Perplexity employees were dedicated to working on the program. "Some of it grew out of conversations we were having with publishers about integrating Perplexity APIs and technology into their products," Shevelenko said.

AI

OpenAI To Launch 'SearchGPT' in Challenge To Google

OpenAI is launching an online search tool in a direct challenge to Google, opening up a new front in the tech industry's race to commercialise advances in generative artificial intelligence. From a report: The experimental product, known as SearchGPT [non-paywalled], will initially only be available to a small group of users, with the San Francisco-based company opening a 10,000-person waiting list to test the service on Thursday. The product is visually distinct from ChatGPT as it goes beyond generating a single answer by offering a rail of links -- similar to a search engine -- that allows users to click through to external websites.

[...] SearchGPT will "provide up-to-date information from the web while giving you clear links to relevant sources," according to OpenAI. The new search tool will be able to access sites even if they have opted out of training OpenAI's generative AI tools, such as ChatGPT.
Programming

'GitHub Is Starting To Feel Like Legacy Software' (www.mistys-internet.website)

Developer and librarian Misty De Meo, writing about her frustrating experience using GitHub: To me, one of GitHub's killer power user features is its blame view. git blame on the commandline is useful but hard to read; it's not the interface I reach for every day. GitHub's web UI is not only convenient, but the ease by which I can click through to older versions of the blame view on a line by line basis is uniquely powerful. It's one of those features that anchors me to a product: I stopped using offline graphical git clients because it was just that much nicer.

The other day though, I tried to use the blame view on a large file and ran into an issue I don't remember seeing before: I just couldn't find the line of code I was searching for. I threw various keywords from that line into the browser's command+F search box, and nothing came up. I was stumped until a moment later, while I was idly scrolling the page while doing the search again, and it finally found the line I was looking for. I realized what must have happened. I'd heard rumblings that GitHub's in the middle of shipping a frontend rewrite in React, and I realized this must be it. The problem wasn't that the line I wanted wasn't on the page -- it's that the whole document wasn't being rendered at once, so my browser's builtin search bar just couldn't find it. On a hunch, I tried disabling JavaScript entirely in the browser, and suddenly it started working again. GitHub is able to send a fully server-side rendered version of the page, which actually works like it should, but doesn't do so unless JavaScript is completely unavailable.

[...] The corporate branding, the new "AI-powered developer platform" slogan, makes it clear that what I think of as "GitHub" -- the traditional website, what are to me the core features -- simply isn't Microsoft's priority at this point in time. I know many talented people at GitHub who care, but the company's priorities just don't seem to value what I value about the service. This isn't an anti-AI statement so much as a recognition that the tool I still need to use every day is past its prime. Copilot isn't navigating the website for me, replacing my need to use the website as it exists today. I've had tools hit this phase of decline and turn it around, but I'm not optimistic. It's still plenty usable now, and probably will be for some years to come, but I'll want to know what other options I have now rather than when things get worse than this.

AI

AWS App Studio Promises To Generate Enterprise Apps From a Written Prompt (techcrunch.com)

Amazon Web Services is the latest entrant to the generative AI game with the announcement of App Studio, a groundbreaking tool capable of building complex software applications from simple written prompts. TechCrunch's Ron Miller reports: "App Studio is for technical folks who have technical expertise but are not professional developers, and we're enabling them to build enterprise-grade apps," Sriram Devanathan, GM of Amazon Q Apps and AWS App Studio, told TechCrunch. Amazon defines enterprise apps as having multiple UI pages with the ability to pull from multiple data sources, perform complex operations like joins and filters, and embed business logic in them. It is aimed at IT professionals, data engineers and enterprise architects, even product managers who might lack coding skills but have the requisite company knowledge to understand what kinds of internal software applications they might need. The company is hoping to enable these employees to build applications by describing the application they need and the data sources they wish to use.

Examples of the types of applications include an inventory-tracking system or claims approval process. The user starts by entering the name of an application, calling the data sources and then describing the application they want to build. The system comes with some sample prompts to help, but users can enter an ad hoc description if they wish. It then builds a list of requirements for the application and what it will do, based on the description. The user can refine these requirements by interacting with the generative AI. In that way, it's not unlike a lot of no-code tools that preceded it, but Devanathan says it is different. [...] Once the application is complete, it goes through a mini DevOps pipeline where it can be tested before going into production. In terms of identity, security and governance, and other requirements any enterprise would have for applications being deployed, the administrator can link to existing systems when setting up the App Studio. When it gets deployed, AWS handles all of that on the back end for the customer, based on the information entered by the admin.

Cellphones

'Windows Recall' Preview Remains Hackable As Google Develops Similar Feature

Windows Recall was "delayed" over concerns that storing unencrypted recordings of users' activity was a security risk.

But now Slashdot reader storagedude writes: The latest version of Microsoft's planned Windows Recall feature still contains data privacy and security vulnerabilities, according to a report by the Cyber Express.

Security researcher Kevin Beaumont — whose work started the backlash that resulted in Recall getting delayed last month — said the most recent preview version is still hackable by Alex Hagenah's "TotalRecall" method "with the smallest of tweaks."

The Windows screen recording feature could still be refined to address the security concerns, but some have already spotted it in versions of the Windows 11 24H2 release preview that will be officially released in the fall.

Cyber Express (the blog of threat intelligence vendor Cyble Inc) got this official response: Asked for comment on Beaumont's findings, a Microsoft spokesperson said the company "has not officially released Recall," and referred to the updated blog post that announced the delay, which said: "Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks."

"Beyond that, Microsoft has nothing more to share," the spokesperson added.

Also this week, the blog Android Authority wrote that Google is planning to introduce its own "Google AI" features to Pixel 9 smartphones. They include the ability to enhance screenshots, an "Add Me" tool for group photos — and also "a feature resembling Microsoft's controversial Recall" dubbed "Pixel Screenshots." Google's take on the feature is different and more privacy-focused: instead of automatically capturing everything you're doing, it will only work on screenshots you take yourself. When you do that, the app will add a bit of extra metadata to it, like app names, web links, etc. After that, it will be processed by a local AI, presumably the new multimodal version of Gemini Nano, which will let you search for specific screenshots just by their contents, as well as ask a bot questions about them.
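As described, Pixel Screenshots amounts to attaching metadata to user-taken screenshots and searching over it locally. A rough sketch of that data model follows; the field names are invented, and a keyword match stands in for the on-device model (the real pipeline would reportedly use a multimodal Gemini Nano rather than substring search).

```python
from dataclasses import dataclass, field

@dataclass
class Screenshot:
    path: str
    app: str = ""                                # metadata the feature reportedly attaches
    links: list = field(default_factory=list)    # web links captured with the shot
    ocr_text: str = ""                           # text a local model might extract

def search(shots, query):
    """Naive keyword search over screenshot metadata and extracted text --
    a stand-in for the local AI described in the article."""
    q = query.lower()
    return [s.path for s in shots
            if q in s.app.lower() or q in s.ocr_text.lower()
            or any(q in link.lower() for link in s.links)]

shots = [
    Screenshot("shot1.png", app="Chrome", links=["https://example.com/recipe"],
               ocr_text="Pancake recipe: 2 eggs, flour"),
    Screenshot("shot2.png", app="Maps", ocr_text="Route to airport"),
]
print(search(shots, "recipe"))  # finds the first screenshot
```

The privacy distinction from Recall lives outside this code: the index only ever contains screenshots the user chose to take.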

My take on the feature is that it's definitely a better implementation of the idea than what Microsoft created... [B]oth of the apps ultimately serve a similar purpose and Google's implementation doesn't easily leak sensitive information...

It's worth mentioning Motorola is also working on its own version of Recall — not much is known at the moment, but it seems it will be similar to Google's implementation, with no automatic saving of everything on the screen.

The Verge describes the Pixel 9's Google AI as "like Microsoft Recall but a little less creepy."
The Internet

Cloudflare Rolls Out Feature For Blocking AI Companies' Web Scrapers (siliconangle.com)

Cloudflare today unveiled a new feature of its content delivery network (CDN) that prevents AI developers from scraping content on the web. According to Cloudflare, the feature is available for both the free and paid tiers of its service. SiliconANGLE reports: The feature uses AI to detect automated content extraction attempts. According to Cloudflare, its software can spot bots that scrape content for LLM training projects even when they attempt to avoid detection. "Sadly, we've observed bot operators attempt to appear as though they are a real browser by using a spoofed user agent," Cloudflare engineers wrote in a blog post today. "We've monitored this activity over time, and we're proud to say that our global machine learning model has always recognized this activity as a bot."

One of the crawlers that Cloudflare managed to detect is a bot that collects content for Perplexity AI Inc., a well-funded search engine startup. Last month, Wired reported that the manner in which the bot scrapes websites makes its requests appear as regular user traffic. As a result, website operators have struggled to block Perplexity AI from using their content. Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.

"When bad actors attempt to crawl websites at scale, they generally use tools and frameworks that we are able to fingerprint," Cloudflare's engineers detailed. "For every fingerprint we see, we use Cloudflare's network, which sees over 57 million requests per second on average, to understand how much we should trust this fingerprint." Cloudflare will update the feature over time to address changes in AI scraping bots' technical fingerprints and the emergence of new crawlers. As part of the initiative, the company is rolling out a tool that will enable website operators to report any new bots they may encounter.
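The scoring mechanics described above reduce to a simple threshold rule at enforcement time. The sketch below only illustrates the reported numbers (scores run 1 to 99, lower meaning more bot-like, with the Perplexity crawler consistently under 30); Cloudflare's actual classifier is a machine learning model fed by request fingerprints, and none of this is its code.

```python
def should_block(request_score, threshold=30, block_ai_bots=True):
    """Cloudflare-style decision: scores run 1-99, lower = more bot-like.
    The threshold of 30 mirrors the sub-30 scores reported for the
    Perplexity crawler; the rule itself is an illustration."""
    return block_ai_bots and request_score < threshold

requests = [("residential browser", 92), ("spoofed-UA crawler", 12), ("scraper framework", 28)]
for label, score in requests:
    print(label, "->", "block" if should_block(score) else "allow")
```

The hard part, of course, is producing the score in the first place: the fingerprinting described above is what lets a low score survive a spoofed user agent.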

Social Networks

Meta Is Tagging Real Photos As 'Made With AI,' Say Photographers (techcrunch.com)

Since May, Meta has been labeling photos created with AI tools on its social networks to help users better identify the content they're consuming. However, as TechCrunch's Ivan Mehta reports, this approach has faced criticism as many photos not created using AI tools have been incorrectly labeled, prompting Meta to reevaluate its labeling strategy to better reflect the actual use of AI in images. From the report: There are plenty of examples of Meta automatically attaching the label to photos that were not created through AI. For example, this photo of Kolkata Knight Riders winning the Indian Premier League Cricket tournament. Notably, the label is only visible on the mobile apps and not on the web. Plenty of other photographers have raised concerns over their images having been wrongly tagged with the "Made with AI" label. Their point is that simply editing a photo with a tool should not be subject to the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works and you have to "flatten the image" before saving it as a JPEG image. He suspects that this action has triggered Meta's algorithm to attach this label. "What's annoying is that the post forced me to include the 'Made with AI' even though I unchecked it," Souza told TechCrunch.

Meta would not answer TechCrunch's questions on the record about Souza's experience or about the other photographers whose posts were incorrectly tagged. However, after the story was published, Meta said the company is evaluating its approach so that labels reflect the amount of AI used in an image. "Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image," a Meta spokesperson told TechCrunch.
"For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it," notes TechCrunch. "For users, it might be hard to understand how much AI was involved in a photo."

"Meta's label specifies that 'Generative AI may have been used to create or edit content in this post' -- but only if you tap on the label. Despite this approach, there are plenty of photos on Meta's platforms that are clearly AI-generated, and Meta's algorithm hasn't labeled them."
AI

Multiple AI Companies Ignore Robots.Txt Files, Scrape Web Content, Says Licensing Firm (yahoo.com)

Multiple AI companies are ignoring Robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters — citing a warning sent to publishers by content licensing startup TollBit. TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them. According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled.

"What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges."
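Python's standard library makes it easy to see what robots.txt compliance looks like -- and why it is purely voluntary. The crawler names in this sketch are illustrative:

```python
from urllib import robotparser

# A publisher's robots.txt asking one AI crawler to stay out.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# The protocol is advisory: a compliant crawler asks before fetching...
print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
# ...but nothing enforces the answer. A non-compliant bot simply fetches
# the URL anyway -- which is the bypass TollBit says its publisher logs show.
```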


The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers). "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry."

Reuters also notes another threat facing news sites: Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
Privacy

Hacker Tool Extracts All the Data Collected By Windows' New Recall AI

An anonymous reader quotes a report from Wired: When Microsoft CEO Satya Nadella revealed the new Windows AI tool that can answer questions about your web browsing and laptop use, he said one of the "magical" things about it was that the data doesn't leave your laptop; the Windows Recall system takes screenshots of your activity every five seconds and saves them on the device. But security experts say that data may not stay there for long. Two weeks ahead of Recall's launch on new Copilot+ PCs on June 18, security researchers have demonstrated how preview versions of the tool store the screenshots in an unencrypted database. The researchers say the data could easily be hoovered up by an attacker. And now, in a warning about how Recall could be abused by criminal hackers, Alex Hagenah, a cybersecurity strategist and ethical hacker, has released a demo tool that can automatically extract and display everything Recall records on a laptop.

Dubbed TotalRecall -- yes, after the 1990 sci-fi film -- the tool can pull all the information that Recall saves into its main database on a Windows laptop. "The database is unencrypted. It's all plain text," Hagenah says. Since Microsoft revealed Recall in mid-May, security researchers have repeatedly compared it to spyware or stalkerware that can track everything you do on your device. "It's a Trojan 2.0 really, built in," Hagenah says, adding that he built TotalRecall -- which he's releasing on GitHub -- in order to show what is possible and to encourage Microsoft to make changes before Recall fully launches. [...] TotalRecall, Hagenah says, can automatically work out where the Recall database is on a laptop and then make a copy of the file, parsing all the data as it does so. While Microsoft's new Copilot+ PCs aren't out yet, it's possible to use Recall by emulating a version of the devices. "It does everything automatically," he says. The system can set a date range for extracting the data -- for instance, pulling information from only one specific week or day. Pulling one day of screenshots from Recall, which stores its information in an SQLite database, took two seconds at most, Hagenah says.
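Because Recall reportedly stores everything in plain, unencrypted SQLite, a date-ranged extraction like the one TotalRecall performs is a single query once you have the file. The schema below is invented for illustration -- the real table and column names are Microsoft's, and this sketch builds its own stand-in database rather than touching Recall's.

```python
import sqlite3

# Build a stand-in database; the column names are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE captures (ts TEXT, app TEXT, screen_text TEXT)")
db.executemany("INSERT INTO captures VALUES (?, ?, ?)", [
    ("2024-06-01T09:00", "Signal", "meet at 6pm"),
    ("2024-06-03T14:30", "Browser", "bank balance: $1,234"),
    ("2024-06-09T11:00", "Mail", "password reset link"),
])

def extract_range(conn, start, end):
    """Pull every capture in a date window -- trivial precisely because the
    reported store is an unencrypted SQLite file with plain-text contents."""
    rows = conn.execute(
        "SELECT ts, app, screen_text FROM captures "
        "WHERE ts BETWEEN ? AND ? ORDER BY ts",
        (start, end))
    return rows.fetchall()

week = extract_range(db, "2024-06-01", "2024-06-08")
print(len(week))  # two captures fall inside the first week
```

The two-second extraction time Hagenah reports is unsurprising: there is no decryption step, only an indexed read.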

Included in what the database captures are screenshots of whatever is on your desktop -- a potential gold mine for criminal hackers or domestic abusers who may physically access their victim's device. Images include captures of messages sent on encrypted messaging apps Signal and WhatsApp, and remain in the captures regardless of whether disappearing messages are turned on in the apps. There are records of websites visited and every bit of text displayed on the PC. Once TotalRecall has been deployed, it will generate a summary about the data; it is also possible to search for specific terms in the database. Hagenah says an attacker could get a huge amount of information about their target, including insights into their emails, personal conversations, and any sensitive information that's captured by Recall. Hagenah's work builds on findings from cybersecurity researcher Kevin Beaumont, who has detailed how much information Recall captures and how easy it can be to extract it.
Windows

Satya Nadella Says Microsoft's AI-Focused Copilot+ Laptops Will Outperform Apple's MacBooks (msn.com) 86

"Apple's done a fantastic job of really innovating on the Mac," Microsoft CEO Satya Nadella told the Wall Street Journal in a video interview this week.

Then he said "We are gonna outperform them" with the upcoming Copilot+ laptops from Acer, ASUS, Dell, HP, Lenovo and Samsung that have been completely reengineered for AI — and begin shipping in less than four weeks.

Satya Nadella: Qualcomm's got a new [ARM Snapdragon X] processor, which we've optimized Windows for. The battery life, I've been using it now — I mean, it's 22 hours of continuous video playback... [Apple also uses ARM chips in its MacBooks]. We finally feel we have a very competitive product between Surface Pro and the Surface laptops. We have essentially the best specs when it comes to ARM-based silicon and performance, or the NPU performance.

WSJ: Microsoft says the Surfaces are 58% faster than the MacBook Air with M3, and have 20% longer battery life.

The video includes a demonstration of local live translation powered by "small language models" stored on the device. ("It can translate live video calls or in-person conversations from 44 different languages into English. And it's fast.")

And in an accompanying article, the Journal's reporter also tested out the AI-powered image generator coming to Microsoft Paint.

As a longtime MS Paint stick-figure and box-house artist, I was delighted by this new tool. I typed in a prompt: "A Windows XP wallpaper with a mountain and sky." Then, as I started drawing, an AI image appeared in a new canvas alongside mine. When I changed a color in my sketch, it changed a color in the generated image. Microsoft says it still sends the prompt to the cloud to ensure content safety.
Privacy was also touched on. Discussing the AI-powered "Recall" search functionality, the Journal's reporter notes that users can stop it from taking screenshots of certain websites or apps, or turn it off entirely... But they point out "There could be this reaction from some people that this is pretty creepy. Microsoft is taking screenshots of everything I do."

Nadella reminds them that "it's all being done locally, right...? That's the promise... That's one of the reasons why Recall works as a magical thing: because I can trust it, that it is on my computer."

Copilot will be powered by OpenAI's new GPT-4o, the Journal notes — before showing Satya Nadella saying "It's kind of like a new browser effectively."

Satya Nadella: So, it's right there. It sees the screen, it sees the world, it hears you. And so, it's kind of like that personal agent that's always there that you want to talk to. You can interrupt it. It can interrupt you.
Nadella says though the laptop is optimized for Copilot, that's just the beginning, and "I fully expect Copilot to be everywhere" — along with its individualized "personal agent" interface. "It's gonna be ambient.... It'll go on the phone, right? I'll use it on WhatsApp. I'll use it on any other messaging platform. It'll be on speakers everywhere." Nadella says combining GPT-4o with Copilot's interface is "the type of magic that we wanna bring — first to Windows and everywhere else... The future I see is a computer that understands me versus a computer that I have to understand."

The interview ends when the reporter holds up the result — their own homegrown rendition of Windows XP's default background image "Bliss."
China

Huawei Secretly Backs US Research, Awarding Millions in Prizes (yahoo.com) 41

Huawei, the Chinese telecommunications giant blacklisted by the US, is secretly funding cutting-edge research at American universities including Harvard through an independent Washington-based foundation. From a report: Huawei is the sole funder of a research competition that has awarded millions of dollars since its inception in 2022 and attracted hundreds of proposals from scientists around the world, including those at top US universities that have banned their researchers from working with the company, according to documents and people familiar with the matter.

The competition is administered by the Optica Foundation, an arm of the nonprofit professional society Optica, whose members' research on light underpins technologies such as communications, biomedical diagnostics and lasers. The foundation "shall not be required to designate Huawei as the funding source or program sponsor" of the competition and "the existence and content of this Agreement and the relationship between the Parties shall also be considered Confidential Information," says a nonpublic document reviewed by Bloomberg. The findings reveal one strategy Shenzhen, China-based Huawei is using to remain at the forefront of funding international research despite a web of US restrictions imposed over the past several years in response to concerns that its technology could be used by Beijing as a spy tool.

Privacy

96% of US Hospital Websites Share Visitor Info With Meta, Google, Data Brokers (theregister.com) 21

An anonymous reader quotes a report from The Register: Hospitals -- despite being places where people implicitly expect to have their personal details kept private -- frequently use tracking technologies on their websites to share user information with Google, Meta, data brokers, and other third parties, according to research published today. Academics at the University of Pennsylvania analyzed a nationally representative sample of 100 non-federal acute care hospitals -- essentially traditional hospitals with emergency departments -- and found that 96 percent of their websites transmitted user data to third parties. Additionally, not all of these websites even had a privacy policy. And of the 71 percent that did, 56 percent disclosed specific third-party companies that could receive user information.

The researchers' latest work builds on a study they published a year ago of 3,747 US non-federal hospital websites. That found 98.6 percent tracked and transferred visitors' data to large tech and social media companies, advertising firms, and data brokers. To find the trackers on websites, the team checked out each hospital's homepage on January 26 using webXray, an open source tool that detects third-party HTTP requests and matches them to the organizations receiving the data. They also recorded the number of third-party cookies per page. One name in particular stood out, in terms of who was receiving website visitors' information. "In every study we've done, in any part of the health system, Google, whose parent company is Alphabet, is on nearly every page, including hospitals," [Dr Ari Friedman, an assistant professor of emergency medicine at the University of Pennsylvania] observed. "From there, it declines," he continued. "Meta was on a little over half of hospital webpages, and the Meta Pixel is notable because it seems to be one of the grabbier entities out there in terms of tracking."
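The core check a tool like webXray performs, classifying each HTTP request a page makes as first-party or third-party by comparing domains, can be sketched as follows. This is a deliberately simplified illustration: the two-label domain heuristic below mishandles suffixes like `co.uk`, and the real tool additionally maps domains to the organizations that own them, which this sketch does not attempt.

```python
from urllib.parse import urlparse

def base_domain(host):
    """Naive base-domain extraction: keep the last two labels.

    Real trackers-detection tools consult the Public Suffix List instead,
    since e.g. 'example.co.uk' breaks this two-label rule.
    """
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def third_party_requests(page_url, request_urls):
    """Return the requests whose base domain differs from the page's own."""
    page_dom = base_domain(urlparse(page_url).hostname or "")
    return [u for u in request_urls
            if base_domain(urlparse(u).hostname or "") != page_dom]
```

For a hypothetical hospital homepage, requests to `www.google-analytics.com` or `connect.facebook.net` would be flagged while requests back to the hospital's own subdomains would not, which is exactly the distinction that underlies the study's 96 percent figure.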

Both Meta and Google's tracking technologies have been the subject of criminal complaints and lawsuits over the years -- as have some healthcare companies that shared data with these and other advertisers. In addition, between 20 and 30 percent of the hospitals share data with Adobe, Friedman noted. "Everybody knows Adobe for PDFs. My understanding is they also have a tracking division within their ad division." Others include telecom and digital marketing companies like The Trade Desk and Verizon, plus tech giants Oracle, Microsoft, and Amazon, according to Friedman. Then there's also analytics firms including Hotjar and data brokers such as Acxiom. "And two thirds of hospital websites had some kind of data transfer to a third-party domain that we couldn't even identify," he added. Of the 71 hospital website privacy policies that the team found, 69 addressed the types of user information that was collected. The most common were IP addresses (80 percent), web browser name and version (75 percent), pages visited on the website (73 percent), and the website from which the user arrived (73 percent). Only 56 percent of these policies identified the third-party companies receiving user information.
In the absence of a federal data privacy law in the U.S., Friedman recommends users protect their personal information via the browser-based tools Ghostery and Privacy Badger, which identify and block transfers to third-party domains.
Privacy

DuckDuckGo Launches Privacy Pro: A 3-in-1 Service That Includes a VPN (betanews.com) 34

DuckDuckGo, the privacy-focused web search and browser company, today announced the launch of its first subscription service, Privacy Pro. The service, priced at $9.99 per month or $99.99 per year, includes a browser-based tool that automatically scans data broker websites for users' personal information and requests its removal. The service also includes DuckDuckGo's first VPN and an identity-theft-restoration service. Privacy Pro is initially available only in the U.S.
Government

Can Apps Turn Us Into Unpaid Lobbyists? (msn.com) 73

"Today's most effective corporate lobbying no longer involves wooing members of Congress..." writes the Wall Street Journal. Instead the lobbying sector "now works in secret to influence lawmakers with the help of an unlikely ally: you." [Lobbyists] teamed up with PR gurus, social-media experts, political pollsters, data analysts and grassroots organizers to foment seemingly organic public outcries designed to pressure lawmakers and compel them to take actions that would benefit the lobbyists' corporate clients...

By the middle of 2011, an army of lobbyists working for the pillars of the corporate lobbying establishment — the major movie studios, the music industry, pharmaceutical manufacturers and the U.S. Chamber of Commerce — were executing a nearly $100 million campaign to win approval for the internet bill [the PROTECT IP Act, or "PIPA"]. They pressured scores of lawmakers to co-sponsor the legislation. At one point, 99 of the 100 members of the U.S. Senate appeared ready to support it — an astounding number, given that most bills have just a handful of co-sponsors before they are called up for a vote. When lobbyists for Google and its allies went to Capitol Hill, they made little headway. Against such well-financed and influential opponents, the futility of the traditional lobbying approach became clear. If tech companies were going to turn back the anti-piracy bills, they would need to find another way.

It was around this time that one of Google's Washington strategists suggested an alternative strategy. "Let's rally our users," Adam Kovacevich, then 34 and a senior member of Google's Washington office, told colleagues. Kovacevich turned Google's opposition to the anti-piracy legislation into a coast-to-coast political influence effort with all the bells and whistles of a presidential campaign. The goal: to whip up enough opposition to the legislation among ordinary Americans that Congress would be forced to abandon the effort... The campaign slogan they settled on — "Don't Kill the Internet" — exaggerated the likely impact of the bill, but it succeeded in stirring apprehension among web users.

The coup de grace came on Jan. 18, 2012, when Google and its allies pulled off the mother of all outside influence campaigns. When users logged on to the web that day, they discovered, to their great frustration, that many of the sites they'd come to rely on — Wikipedia, Reddit, Craigslist — were either blacked out or displayed text outlining the detrimental impacts of the proposed legislation. For its part, Google inserted a black censorship bar over its multicolored logo and posted a tool that enabled users to contact their elected representatives. "Tell Congress: Please don't censor the web!" a message on Google's home page read. With some 115,000 websites taking part, the protest achieved a staggering reach. Tens of millions of people visited Wikipedia's blacked-out website, 4.5 million users signed a Google petition opposing the legislation, and more than 2.4 million people took to Twitter to express their views on the bills. "We must stop [these bills] to keep the web open & free," the reality TV star Kim Kardashian wrote in a tweet to her 10 million followers...

Within two days, the legislation was dead...

Over the following decade, outside influence tactics would become the cornerstone of Washington's lobbying industry — and they remain so today.

"The 2012 effort is considered the most successful consumer mobilization in the history of internet policy," writes the Washington Post — agreeing that it's since spawned more app-based, crowdsourced lobbying campaigns. Sites like Airbnb "have also repeatedly asked their users to oppose city government restrictions on the apps." Uber, Lyft, DoorDash and other gig work companies also blitzed the apps' users with scenarios of higher prices or suspended service unless people voted for a 2020 California ballot measure on contract workers. Voters approved it."

The Wall Street Journal also details how lobbyists successfully killed higher taxes on tobacco products, the oil-and-gas industry, and even private-equity investors — and notes similar tactics were used against a bill targeting TikTok. "Some say the campaign backfired. Lawmakers complained that the effort showed how the Chinese government could co-opt internet users to do their bidding in the U.S., and the House of Representatives voted to ban the app if its owners did not agree to sell it."

"TikTok's lobbyists said they were pleased with the effort. They persuaded 65 members of the House to vote in favor of the company and are confident that the Senate will block the effort."

The Journal's article was adapted from an upcoming book titled "The Wolves of K Street: The Secret History of How Big Money Took Over Big Government." But the Washington Post argues the phenomenon raises two questions. "How much do you want technology companies to turn you into their lobbyists? And what's in it for you?"
