Microsoft

Microsoft Launches Vector Search in Preview, Voice Cloning in General Availability (techcrunch.com) 4

At its annual Inspire conference, Microsoft announced a number of new AI features headed to Azure, perhaps the most notable of which is Vector Search. From a report: Available in preview through Azure Cognitive Search, Vector Search uses machine learning to capture the meaning and context of unstructured data, including images and text, to make search faster. Vectorization, an increasingly popular technique in search, involves converting words or images into vectors, or series of numbers, that encode their meaning -- allowing them to be processed mathematically. Vectors enable machines to structure and make sense of data, helping them understand, for example, that words close together in "vector space" -- like "king" and "queen" -- are related, and to quickly surface them from a database of millions of words.
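
The "vector space" idea can be sketched in a few lines of Python. The three-dimensional embeddings below are invented for illustration (real models use hundreds of dimensions produced by a trained network), but the nearest-neighbor ranking works the same way:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings; the numbers here are made up for illustration.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

# Rank every other word by similarity to "king".
query = embeddings["king"]
ranked = sorted(
    (w for w in embeddings if w != "king"),
    key=lambda w: cosine_similarity(query, embeddings[w]),
    reverse=True,
)
print(ranked[0])  # "queen" ranks above "apple"
```

A production system does the same ranking over millions of stored vectors, using approximate nearest-neighbor indexes rather than a full sort.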

[...] Rounding out the AI unveilings at Inspire, Microsoft announced the public preview of Real-time Diarization, an AI-driven speech service that can identify which of several people are speaking in real time. The company also announced the general availability of Custom Neural Voice, which taps AI to closely reproduce an actor's voice or create an original synthetic voice. Previously, Custom Neural Voice was in limited access, meaning that customers had to apply and be approved by Microsoft in order to use it.

Hardware

VMware, AMD, Samsung and RISC-V Push For Confidential Computing Standards (theregister.com) 15

VMware has joined AMD, Samsung, and members of the RISC-V community to work on an open and cross-platform framework for the development and operation of applications using confidential computing hardware. The Register reports: Revealing the effort at the Confidential Computing Summit 2023 in San Francisco, the companies say they aim to bring about an industry transition to practical confidential computing by developing the open source Certifier Framework for Confidential Computing project. Among other goals, the project aims to standardize on a set of platform-independent developer APIs that can be used to develop or adapt application code to run in a confidential computing environment, with a Certifier Service overseeing them in operation. VMware claims to have researched, developed and open sourced the Certifier Framework, but with AMD on board, plus Samsung (which develops its own smartphone chips), the group has the x86 and Arm worlds covered. Also on board is the Keystone project, which is developing an enclave framework to support confidential computing on RISC-V processors.

Confidential computing is designed to protect applications and their data from theft or tampering by protecting them inside a secure enclave, or trusted execution environment (TEE). This uses hardware-based security mechanisms to prevent access from everything outside the enclave, including the host operating system and any other application code. Such security protections are likely to be increasingly important in the context of applications running in multi-cloud environments, VMware reckons.

Another scenario for confidential computing put forward by Microsoft, which believes confidential computing will become the norm -- is multi-party computation and analytics. This sees several users each contribute their own private data to an enclave, where it can be analyzed securely to produce results much richer than each would have got purely from their own data set. This is described as an emerging class of machine learning and "data economy" workloads that are based on sensitive data and models aggregated from multiple sources, which will be enabled by confidential computing. However, VMware points out that like many useful hardware features, it will not be widely adopted until it becomes easier to develop applications in the new paradigm.
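
The multi-party pattern described above can be sketched as a data-flow contract -- this is not any real TEE or Certifier Framework API, just an illustration of what the enclave guarantees: each party's raw data goes in, and only the aggregate statistic comes out.

```python
class ToyEnclave:
    """Stand-in for a hardware TEE. Real enclaves enforce isolation in
    hardware; this class only illustrates the data-flow contract."""

    def __init__(self):
        self._private = []  # contributed data, never exposed directly

    def contribute(self, party, values):
        self._private.append((party, values))

    def aggregate_mean(self):
        # Only the pooled statistic leaves the enclave boundary.
        all_values = [v for _, vals in self._private for v in vals]
        return sum(all_values) / len(all_values)

enclave = ToyEnclave()
enclave.contribute("hospital_a", [4.1, 3.9, 4.3])  # each party's raw data
enclave.contribute("hospital_b", [5.0, 4.8])       # stays inside the enclave
print(round(enclave.aggregate_mean(), 2))          # only the mean is released
```

The value of confidential computing is that neither party -- nor the cloud operator -- can read the other's raw inputs, yet both benefit from the richer combined result.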

Sci-Fi

Congress Doubles Down On Explosive Claims of Illegal UFO Retrieval Programs (thehill.com) 223

An anonymous reader quotes a report from The Hill: Asked June 26 about allegations of secret UFO retrieval and reverse-engineering programs, Senate Intelligence Committee Vice Chairman Marco Rubio (R-Fla.) made several stunning statements. In an exclusive interview, Rubio told NewsNation Washington correspondent Joe Khalil that multiple individuals with "very high clearances and high positions within our government" "have come forward to share" "first-hand" UFO-related claims "beyond the realm of what [the Senate Intelligence Committee] has ever dealt with."

Rubio's comments provide context for a bipartisan provision adopted unanimously by the Senate Intelligence Committee, which would immediately halt funding for any secret government or contractor efforts to retrieve and reverse-engineer craft of "non-earth" or "exotic" origin. This extraordinary language added to the Senate version of the Intelligence authorization bill mirrors and adds significant credibility to a whistleblower's recent, stunning allegations that a clandestine, decades-long effort to recover, analyze and exploit objects of "non-human" origin has been operating illegally without congressional oversight.

Additionally, the bill instructs individuals with knowledge of such activities to disclose all relevant information and grants legal immunity if the information is reported appropriately within a defined timeframe. Moreover, nearly 20 pages of the legislation appear to directly address recent events by enhancing a raft of legal protections for whistleblowers while also permitting such individuals to contact Congress directly. Researcher and congressional expert Douglas Johnson first reported on and analyzed the remarkable bill language, which, if it passes the House, could become law this calendar year.

AI

WinGPT Is a New ChatGPT App For Your Ancient Windows 3.1 PC (theverge.com) 91

An anonymous reader quotes a report from The Verge: Someone has created a ChatGPT app for Windows 3.1 PCs. WinGPT brings a very basic version of OpenAI's ChatGPT responses into an app that can run on an ancient 386 chip. It's built by the same mysterious developer behind Windle, a Wordle clone for Microsoft's Windows 3.1 operating system. "I didn't want my Gateway 4DX2-66 from 1993 to be left out of the AI revolution, so I built an AI Assistant for Windows 3.1, based on the OpenAI API," says the developer in a Hacker News thread.

WinGPT is written in C using Microsoft's standard Windows API and connects to OpenAI's API server using TLS 1.3, so there's no need for a separate modern PC. That was a particularly interesting part of getting this app running on Windows 3.1, alongside managing the memory segmentation architecture on 16-bit versions of Windows and building the UI for the app. Neowin notes that the ChatGPT responses are only brief due to the limited memory support that can't handle the context of conversations. The icon for WinGPT was also designed in Borland's Image Editor, a clone of Microsoft Paint that's capable of making ICO files.

"I built most of the UI in C directly, meaning that each UI component had to be manually constructed in code," says the anonymous WinGPT developer. "I was surprised that the set of standard controls available to use by any program with Windows 3.1 is incredibly limited. You have some controls you'd expect -- push buttons, check boxes, radio buttons, edit boxes -- but any other control you might need, including those used across the operating system itself, aren't available."

Earth

Seaweed Farming For CO2 Capture Would Take Up Too Much of the Ocean 99

An anonymous reader quotes a report from MIT Technology Review: If we're going to prevent the gravest dangers of global warming, experts agree, removing significant amounts of carbon dioxide from the atmosphere is essential. That's why, over the past few years, projects focused on growing seaweed to suck CO2 from the air and lock it in the sea have attracted attention -- and significant amounts of funding -- from the US government and private companies including Amazon. The problem: farming enough seaweed to meet climate-change goals may not be feasible after all.

A new study, published today in Nature Communications Earth & Environment, estimates that around a million square kilometers of ocean would need to be farmed in order to remove a billion tons of carbon dioxide from the atmosphere over the course of a year. It's not easy to come by that amount of space in places where seaweed grows easily, given all the competing uses along the coastlines, like shipping and fishing. To put that into context, between 2.5 and 13 billion tons of atmospheric carbon dioxide would need to be captured each year, in addition to dramatic reductions in greenhouse-gas emissions, to meet climate goals, according to the study's authors.

A variety of scientific models suggest we should be removing anywhere from 1.3 billion to 29 billion tons of carbon dioxide each year by 2050 in order to keep global warming from rising past 1.5C. A 2017 report from the UN estimated that we'd need to remove 10 billion tons annually to stop the planet from warming past 2C by the same date. "The industry is getting ahead of the science," says Isabella Arzeno-Soltero, a postdoctoral scholar at Stanford University, who worked on the project. "Our immediate goal was to see if, given optimal conditions, we can actually achieve the scales of carbon harvests that people are talking about. And the answer is no, not really." [...] Their findings suggest that cultivating enough seaweed to reach these targets is beyond the industry's current capacity -- and meeting climate goals will require much more than seaweed in any case.
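
The study's headline numbers scale straightforwardly, assuming for illustration that farmed area scales linearly with capture:

```python
# ~1,000,000 km^2 of farmed ocean per 1 billion tons of CO2 removed per year,
# per the study's estimate.
KM2_PER_GT = 1_000_000  # square kilometers per gigaton of CO2 per year

for target_gt in (2.5, 13):  # the annual capture range cited by the authors
    area = target_gt * KM2_PER_GT
    print(f"{target_gt} Gt/yr -> {area:,.0f} km^2")
# 2.5 Gt/yr would need ~2.5 million km^2 and 13 Gt/yr ~13 million km^2 --
# roughly the area of the Mediterranean Sea at the low end, several times
# that at the high end.
```
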

Piracy

Music Pirates Are Not Terrorists, Record Labels Argue In Court (torrentfreak.com) 46

An anonymous reader quotes a report from TorrentFreak: A Virginia jury held Cox liable for pirating subscribers because it failed to terminate accounts after repeated accusations, ordering the company to pay $1 billion in damages to the labels. This landmark ruling is currently under appeal. As part of the appeal, Cox informed the court of a supplemental authority that could support its position. The case in question is Twitter v. Taamneh, in which the U.S. Supreme Court recently held that the social media platform isn't liable for ISIS terrorists, who used Twitter to recruit and raise funds. The Supreme Court rejected (PDF) the claim that Twitter aided and abetted terrorist activity, because it didn't "consciously and culpably" participate in the illegal activity. According to Cox, the same logic applies in its case, where the ISP was held liable for the piracy activities of subscribers.

"These same aiding-and-abetting principles animate copyright law's contributory liability doctrine, and they likewise foreclose liability here," an attorney for Cox informed the court. Cox argues that the Supreme Court ruling confirms that aiding-and-abetting liability only applies when parties knowingly took part in the activity. That runs contrary to the finding in its own dispute with the record labels, where "culpable expression and conduct" or "intent" were not required. "Though Twitter arises in a different context, its reasoning applies with full force and supports reversal of the contributory infringement verdict," Cox added. The two cases are indeed quite different, but ultimately they are about imposing liability on third-party services.

According to Cox, the Twitter terrorist ruling clearly shows that it isn't liable for pirating subscribers, but the music companies see things differently. Earlier this week, the music labels responded in court (PDF), countering Cox's arguments. They argue that the Twitter ruling doesn't apply to their piracy dispute with Cox, as the cases are grounded in different laws. While the music industry certainly isn't happy with pirates, the Cox case is a copyright matter, whereas the Twitter lawsuit fell under the Justice Against Sponsors of Terrorism Act. And for now, pirates are not categorized as terrorists. After establishing the difference between pirates and terrorists, the music companies point out that Twitter wasn't directly connected to the misconduct. The platform's role was more passive and its connection to ISIS was more distant than Cox's connection to its subscribers. Cox took a more active role and materially contributed to the pirating activities, which bears no comparison to the Twitter case, plaintiffs argue.

Security

Is Cybersecurity an Unsolvable Problem? (arstechnica.com) 153

Ars Technica profiles Scott Shapiro, the co-author of a new book, Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks.

Shapiro points out that computer science "is only a century old, and hacking, or cybersecurity, is maybe a few decades old. It's a very young field, and part of the problem is that people haven't thought it through from first principles." Telling the story of five major breaches in depth, Shapiro ultimately concludes that "the very principles that make hacking possible are the ones that make general computing possible.

"So you can't get rid of one without the other because you cannot patch metacode." Shapiro also brings some penetrating insight into why the Internet remains so insecure decades after its invention, as well as how and why hackers do what they do. And his conclusion about what can be done about it might prove a bit controversial: there is no permanent solution to the cybersecurity problem. "Cybersecurity is not a primarily technological problem that requires a primarily engineering solution," Shapiro writes. "It is a human problem that requires an understanding of human behavior." That's his mantra throughout the book: "Hacking is about humans." And it portends, for Shapiro, "the death of 'solutionism.'"

An excerpt from the interview: Ars Technica: The scientific community in various disciplines has struggled with this in the past. There's an attitude of, "We're just doing the research. It's just a tool. It's morally neutral." Hacking might be a prime example of a subject that you cannot teach outside the broader context of morality.

Scott Shapiro: I couldn't agree more. I'm a philosopher, so my day job is teaching that. But it's a problem throughout all of STEM: this idea that tools are morally neutral and you're just making them and it's up to the end user to use it in the right way. That is a reasonable attitude to have if you live in a culture that is doing the work of explaining why these tools ought to be used in one way rather than another. But when we have a culture that doesn't do that, then it becomes a very morally problematic activity.

Python

PyPI Was Subpoenaed 31

The PyPI blog: In March and April 2023, the Python Software Foundation (PSF) received three (3) subpoenas for PyPI user data. All three subpoenas were issued by the United States Department of Justice. The PSF was not provided with context on the legal circumstances surrounding these subpoenas. In total, user data related to five (5) PyPI usernames were requested. The data request was:

"Names (including subscriber names, user names, and screen names);"
"Addresses (including mailing, residential addresses, business addresses, and email addresses);"
"Connection records;"
"Records of session times and durations, and the temporarily assigned network address (such as Internet Protocol addresses) associated with those sessions;"
"Length of service (including start date) and type of services utilized;"
"Telephone or instrument numbers (including the registration Internet Protocol address);"
"Means and source of payment of any such services (including any credit card or bank account number) and billing records;"
"Records of all Python Package Index (PyPI) packages uploaded by..." given usernames
"IP download logs of any Python Package Index (PyPI) packages uploaded by..." given usernames

The privacy of PyPI users is of utmost concern to PSF and the PyPI Administrators, and we are committed to protecting user data from disclosure whenever possible. In this case, however, PSF determined with the advice of counsel that our only course of action was to provide the requested data. I, as Director of Infrastructure of the Python Software Foundation, fulfilled the requests in consultation with PSF's counsel.

We have waited for the string of subpoenas to subside, though we were committed from the beginning to write and publish this post as a matter of transparency, and as allowed by the lack of a non-disclosure order associated with the subpoenas received in March and April 2023.

AI

Adobe Photoshop's New 'Generative Fill' AI Tool Lets You Manipulate Photos With Text (arstechnica.com) 38

Adobe has introduced a new tool called "Generative Fill" in the Photoshop beta, which uses cloud-based image synthesis and AI-generated content to fill selected areas of an image based on a text description. Ars Technica reports: At the core of Generative Fill is Adobe Firefly, Adobe's custom image-synthesis model. As a deep learning AI model, Firefly has been trained on millions of images in Adobe's stock library to associate certain imagery with text descriptions of it. Now that it's part of Photoshop, people can type in what they want to see (e.g., "a clown on a computer monitor"), and Firefly will synthesize several options for the user to choose from. Generative Fill relies on a well-known AI technique called "inpainting" -- used in DALL-E and Stable Diffusion releases since last year -- to create a context-aware generation that can seamlessly blend synthesized imagery into an existing image.

To use Generative Fill, users select an area of an existing image they want to modify. After selecting it, a "Contextual Task Bar" pops up that allows users to type in a description of what they want to see generated in the selected area. Photoshop sends this data to Adobe's servers for processing, then returns results in the app. After generating, the user has the option to select between several options of generations or to create more options to browse through. When used, the Generative Fill tool creates a new "Generative Layer," allowing for non-destructive alterations of image content, such as additions, extensions, or removals, driven by these text prompts. It automatically adjusts to the perspective, lighting, and style of the selected image.
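
Firefly itself is proprietary, but the select-then-fill workflow can be illustrated with a toy stand-in that fills each masked pixel from its unmasked neighbors. Real inpainting uses a text-conditioned diffusion model; this sketch only shows the mask-driven mechanics:

```python
def toy_inpaint(image, mask):
    """image: 2-D grid of brightness values; mask: True where the user
    selected pixels to regenerate. Masked pixels are replaced with the
    average of their unmasked neighbors -- a crude 'context-aware' fill."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbors = [
                image[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
            ]
            out[y][x] = sum(neighbors) / len(neighbors) if neighbors else 0
    return out

image = [[10, 10, 10],
         [10, 99, 10],   # 99 is an artifact the user wants removed
         [10, 10, 10]]
mask  = [[False, False, False],
         [False, True,  False],
         [False, False, False]]
print(toy_inpaint(image, mask)[1][1])  # the artifact blends in: 10.0
```

Generative Fill follows the same contract -- selection in, plausible replacement out -- but synthesizes the replacement from a trained model and a text prompt instead of averaging neighbors.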

Networking

After Two Days, Asus Fixed Router-Freezing Glitch (arstechnica.com) 40

An anonymous reader shared this report from Ars Technica: On Wednesday, Asus router users around the world took to the Internet to report that their devices suddenly froze up for no apparent reason and then, upon rebooting repeatedly, stopped working every few minutes as device memory became exhausted.

Two days later, the Taiwan-based hardware maker finally answered the calls for help. The mass outage, the company said, was the result of "an error in the configuration of our server settings file." After fixing the glitch, most users needed only to reboot their devices. In the event that didn't fix the problem, the company's support team advised users to save their current configuration settings and perform a factory reset. The company also apologized...

Asus still hasn't provided details about the configuration error. Various users have offered explanations online that appear to be correct. "On the 16th, Asus pushed a corrupted definition file for ASD, a built-in security daemon present in a wide range of their routers," one person wrote. "As routers automatically updated and fetched the corrupted definition file, they started running out of filesystem space and memory and crashing."
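
If that explanation is right, the failure mode is a classic one: clients applying a downloaded definition file without validating it first. A minimal safeguard is to check the file's size and checksum before applying it -- the filenames and limits below are invented for illustration:

```python
import hashlib
import os

MAX_DEFINITION_BYTES = 512 * 1024  # reject files too large for a small device

def safe_to_apply(path, expected_sha256):
    """Refuse a definition file that is oversized or fails its checksum."""
    if os.path.getsize(path) > MAX_DEFINITION_BYTES:
        return False  # would exhaust flash/RAM on a memory-constrained router
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected_sha256  # corrupted download -> digest mismatch

# Example: a file whose contents don't match the published hash is rejected.
with open("defs.bin", "wb") as f:
    f.write(b"corrupted data")
print(safe_to_apply("defs.bin", hashlib.sha256(b"good data").hexdigest()))  # False
```

Whether Asus's update pipeline performs any such validation is not publicly known; the sketch only shows the general defensive pattern.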

AI

Anthropic's Claude AI Can Now Digest an Entire Book like The Great Gatsby in Seconds (arstechnica.com) 7

AI company Anthropic has announced it has given its ChatGPT-like Claude AI language model the ability to analyze an entire book's worth of material in under a minute. This new ability comes from expanding Claude's context window to 100,000 tokens, or about 75,000 words. From a report: Like OpenAI's GPT-4, Claude is a large language model (LLM) that works by predicting the next token in a sequence when given a certain input. Tokens are fragments of words used to simplify AI data processing, and a "context window" is similar to short-term memory -- how much human-provided input data an LLM can process at once. A larger context window means an LLM can consider larger works like books or participate in very long interactive conversations that span "hours or even days."
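
The article's figures imply a rule of thumb of roughly 0.75 words per token; actual ratios vary by tokenizer and by the text itself, so this is only a back-of-the-envelope estimate:

```python
# 100,000 tokens ~ 75,000 words, per the article.
WORDS_PER_TOKEN = 75_000 / 100_000

def words_that_fit(context_tokens):
    """Rough English word count that fits in a given context window."""
    return int(context_tokens * WORDS_PER_TOKEN)

print(words_that_fit(100_000))  # 75000 -- roughly one novel
print(words_that_fit(8_192))    # 6144 -- a typical 2023-era context window
```
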

Businesses

IBM Chief's Message To Remote Workers: 'Your Career Does Suffer' (bloomberg.com) 184

IBM CEO Arvind Krishna said he's not forcing any of the company's remote workers to come into the office just yet, but warns those who don't "would be hard-pressed to get promoted, especially into managerial roles," reports Bloomberg. From the report: "Being a people manager when you're remote is just tough because if you're managing people, you need to be able to see them once in a while," he said in an interview Monday in New York. "It doesn't need to be every minute. You don't need to function under those old 'Everybody's under my eye' kind of rules, but at least sometimes." "It seems to me that we work better when we are together in person," said Krishna, who described the company's return-to-office policy as "we encourage you to come in, we expect you to come in, we want you to come in." Three days a week is the number they encourage, he said.

While about 80% of IBM's employees work from home at least some of the time, Krishna said remote arrangements are best suited for specific "individual contributor" roles like customer service or software programming. "In the short term you probably can be equally productive, but your career does suffer," he said. "Moving from there to another role is probably less likely because nobody's observing them in another context. It will be tougher. Not impossible, but probably a lot tougher."

Krishna, who became CEO right after the pandemic hit in April 2020, said people make a choice to work remotely, but it need not be "a forever choice -- it could be a choice based on convenience or circumstance." Remote workers, he said, don't learn how to do things like deal with a difficult client, or how to make trade-offs when designing a new product. "I don't understand how to do all that remotely," he said.

Businesses

Stripe, a Longtime Partner of Lyft, Signs a Big Deal With Uber (techcrunch.com) 5

An anonymous reader quotes a report from TechCrunch: Growth at $50 billion fintech Stripe has been slowing this year, but one of its key strategies to reverse that course got a decent push today: Stripe is announcing that it has inked a "strategic payments partnership" with Uber. The pair will work together initially on selected services in eight of Uber's biggest markets, including the U.S., the U.K., Canada, Mexico, Australia and Japan. Some context on this deal: Uber's big U.S. rival Lyft has been a longtime marquee customer of Stripe's for payments, and whether or not it was true, that was one reason some assumed Uber and Stripe would not work together. Uber is, however, a much bigger beast, at close to $100 billion transacted annually (Stripe processed $817 billion last year). And Uber is not just a force globally but in the U.S. specifically, where one estimate from YipIt (via WSJ) puts Uber's rideshare market share currently at a whopping 74%.

Lyft will remain a customer of Stripe's, Stripe president Will Gaybrick confirmed to TechCrunch. Financial terms of the deal are not being disclosed, but as with the rest of Stripe's payments business, a big component will come from commissions that Stripe will make from each transaction that it powers on Uber's platform. The Uber partnership, expected to be announced formally later today at Stripe's user conference, comes on the heels of recent enterprise deals Stripe has inked with Amazon, Microsoft and BMW. But this partnership -- for now at least -- is not a global adoption of all that Stripe has to offer. Uber will be using Stripe to break into a specific, new payments frontier. Specifically, it will integrate Stripe Financial Connections and Link to let users import banking details to pay for services like Uber Rides and Eats directly from bank accounts, giving users a payments alternative to credit or debit cards.

Science

Scientists in India Protest Move To Drop Darwinian Evolution From Textbooks (science.org) 96

Scientists in India are protesting a decision to remove discussion of Charles Darwin's theory of evolution from textbooks used by millions of students in ninth and 10th grades. More than 4000 researchers and others have so far signed an open letter asking officials to restore the material. From a report: The removal makes "a travesty of the notion of a well-rounded secondary education," says evolutionary biologist Amitabh Joshi of the Jawaharlal Nehru Centre for Advanced Scientific Research. Other researchers fear it signals a growing embrace of pseudoscience by Indian officials. The Breakthrough Science Society, a nonprofit group, launched the open letter on 20 April after learning that the National Council of Educational Research and Training (NCERT), an autonomous government organization that sets curricula and publishes textbooks for India's 256 million primary and secondary students, had made the move as part of a "content rationalization" process.

NCERT first removed discussion of Darwinian evolution from the textbooks at the height of the COVID-19 pandemic in order to streamline online classes, the society says. (Last year, NCERT issued a document that said it wanted to avoid content that was "irrelevant" in the "present context.") NCERT officials declined to answer questions about the decision to make the removal permanent. They referred ScienceInsider to India's Ministry of Education, which had not provided comment as this story went to press.

AI

Palantir Demos AI To Fight Wars (vice.com) 80

An anonymous reader quotes a report from Motherboard: Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos, Palantir demos how a military might use AIP to fight a war. In the video, the operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications. In Palantir's scenario, a "military operator responsible for monitoring activity within eastern Europe" receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be.

"They ask what enemy units are in the region and leverage AI to build out a likely unit formation," the video said. After getting the AI's best guess as to what's going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ-9 drone to take photos and the operator discovers that there's a T-80 tank, a Soviet-era Russia vehicle, near friendly forces. Then the operator asks the robots what to do about it. "The operator uses AIP to generate three possible courses of action to target this enemy equipment," the video said. "Next they use AIP to automatically send these options up the chain of command." The options include attacking the tank with an F-16, long range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems. [...]

What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. "LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way," the pitch said. According to Palantir, this control involves three pillars. The first claim is that AIP will be able to deploy these systems into classified networks and "devices on the tactical edge." It claims it will be able to parse both classified and real-time data in a responsible, legal, and ethical way. According to the video, users will then have control over what every LLM and AI in the Palantir-backed system can do. "AIP's security features [control] what LLMs and AI can and cannot see and what they can and cannot do," the video said. "As operators take action, AIP generates a secure digital record of operations. These capabilities are crucial for mitigating significant legal, regulatory, and ethical risks in sensitive and classified settings."

Transportation

Californians Have Bought More Than 1.5 Million Electric Vehicles (arstechnica.com) 128

An anonymous reader quotes a report from Ars Technica: California is far and away the country's largest adopter of plug-in electric vehicles. Because of the state's ability to regulate its own air quality and spurred on by a large economy and plenty of affluent residents, the EV has gained plenty of traction in the Golden State. So much so that last month, California met its goal of having more than 1.5 million clean vehicles on the road two years ahead of schedule. California's Air Resources Board (CARB) began its Zero-Emission Vehicle (ZEV) program in 1990 with the intent of ameliorating the state's severe smog problem. By the early years of this century, air quality had improved to the point where CARB could begin using the ZEV regulations to help drive down climate emissions. It has accomplished that with goals that are more ambitious than the ones adopted by the US Environmental Protection Agency at the federal level and despite political interference from the previous administration, which wanted pollution to continue almost unabated.

A number of other states -- Colorado, Connecticut, Maine, Maryland, Massachusetts, Minnesota, New Jersey, Nevada, New Mexico, New York, Oregon, Rhode Island, Vermont, Virginia, and Washington -- have adopted CARB's ZEV program within their own borders. But none are as far down the road to EV adoption; in the first three months of this year, 21.1 percent of all new light-duty vehicles bought in California were zero-emissions vehicles. That's a 153-percent increase year on year, according to the nonprofit Veloz. Battery EVs made up the vast majority, with 95,946 sold. Unsurprisingly, Tesla was most well-represented on the sales list, with the Model Y accounting for 33,205 units by itself. (The Model 3 was next, at 19,989 sold in Q1 2023.) BMW was the best of the rest of the OEMs in total sales numbers thanks to healthy plug-in hybrid EV sales.

Los Angeles County was responsible for the highest number of new EVs added to the roads, with 36,670 registered in Q1. Orange County was next, at 15,289 new ZEVs registered, followed by Santa Clara County (11,428 new ZEVs registered). Cumulatively, that brings California to 1,523,966 ZEVs deployed by the end of Q1 2023; for context, there were just 773 ZEVs in total sold before 2011. The state had hoped to reach that milestone by the end of 2025. More than two-thirds of those 1.5 million ZEVs are BEVs -- 1,051,456, according to the California Energy Commission, with most of the remaining cars being plug-in hybrid EVs. The data shows that the hydrogen fuel cell revolution is not really accelerating, though -- only 15,432 have been registered in the state.
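
The article's cumulative figures are easy to sanity-check:

```python
# California's cumulative ZEV counts through Q1 2023, as reported above.
total_zevs = 1_523_966  # all ZEVs deployed
bevs = 1_051_456        # battery EVs, per the California Energy Commission
fuel_cell = 15_432      # hydrogen fuel-cell vehicles

bev_share = bevs / total_zevs
print(f"BEV share: {bev_share:.1%}")                      # ~69%, i.e. "more than two-thirds"
print(f"Fuel cell share: {fuel_cell / total_zevs:.2%}")   # ~1% of the fleet
```
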

Social Networks

Can Consumers Break Free of the Tech Industry's Hold on Their Messaging History? (msn.com) 54

The Washington Post reports on "a relatively young app called Beeper that pulls all your chats into one place." This is significant, the Post argues, because "we're better off if we have the freedom to pick up our digital lives and move on. Tech companies should feel terrified that you'll walk if they disappoint you..." If different people send you messages in Apple's Messages (a.k.a. iMessage), WhatsApp, LinkedIn and Slack, you don't have to check multiple apps to read and reply. Maybe the best promise of Beeper is that you can ditch your iPhone or Samsung phone for another company's device and keep your text messages...

Eric Migicovsky, Beeper's co-founder, told me that if you're pulling Apple Messages into Beeper, you need a Mac computer to upload a digital file. All chat apps have different limits on how much history you can access in the app.

There's also a wait list of about 170,000 people for Beeper. The app is free, but Beeper says it will start charging for a version with extra features.

To put this all in context, the Post's reporter remembers the hassle of using a cable to transfer a long history of iPhone messages to a new Google Pixel phone, complaining that Apple makes it more difficult than other companies to switch to a different kind of system. "Many of you are happy to live in Apple's world. Great! But if you want the option to leave at some point, try to limit your use of Apple apps when possible..."

They look ahead to next year, when the EU "will require large tech companies to make their products compatible with those of competitors" — though it's not clear how much change that will bring. In the meantime, the existence of a small company like Beeper "gives me hope that we don't have to rely on the kindness of technology giants to make it easier to move to a different phone or computer system... You deserve the option of a no-hassle tech divorce at a moment's notice."

The Internet

Imgur To Ban Nudity Or Sexually Explicit Content Next Month 60

Online image hosting service Imgur is updating its Terms of Service on May 15th to prohibit nudity and sexually explicit content, among other things. The news arrived in an email sent to "Imgurians". The changes have since been outlined on the company's "Community Rules" page, which reads: Imgur welcomes a diverse audience. We don't want to create a bad experience for someone that might stumble across explicit images, nor is it in our company ethos to support explicit content, so some lascivious or sexualized posts are not allowed. This may include content containing:

- the gratuitous or explicit display of breasts, butts, and sexual organs intended to stimulate erotic feelings
- full or partial nudity
- any depiction of sexual activity, explicit or implied (drawings, print, animated, human, or otherwise)
- any image taken of or from someone without their knowledge or consent for the purpose of sexualization
- solicitation (the uninvited act of directly requesting sexual content from another person, or selling/offering explicit content and/or adult services)

Content that might be taken down may include: see-thru clothing, exposed or clearly defined genitalia, some images of female nipples/areolas, spread eagle poses, butts in thongs or partially exposed buttocks, close-ups, upskirts, strip teases, cam shows, sexual fluids, private photos from a social media page, or linking to sexually explicit content. Sexually explicit comments that don't include images may also be removed.

Artistic, scientific or educational nude images shared with educational context may be okay here. We don't try to define art or judge the artistic merit of particular content. Instead, we focus on context and intent, as well as what might make content too explicit for the general community. Any content found to be sexualizing and exploiting minors will be removed and, if necessary, reported to the National Center for Missing & Exploited Children (NCMEC). This applies to photos, videos, animated imagery, descriptions and sexual jokes concerning children.
The company is also prohibiting hate speech, abuse or harassment, content that condones illegal or violent activity, gore or shock content, spam or prohibited behavior, content that shares personal information, and posts in general that violate Imgur's terms of service. Meanwhile, "provocative, inflammatory, unsettling, or suggestive content should be marked as Mature," says Imgur.

Google

Google CEO Sundar Pichai Says Search To Include Chat AI (wsj.com) 27

Google plans to add conversational artificial-intelligence features to its flagship search engine, Chief Executive Officer Sundar Pichai said, as it deals with pressure from chatbots such as ChatGPT and wider business issues. From a report: Advances in AI would supercharge Google's ability to answer an array of search queries, Mr. Pichai said in an interview with The Wall Street Journal. He dismissed the notion that chatbots posed a threat to Google's search business, which accounts for more than half of revenue at parent Alphabet. "The opportunity space, if anything, is bigger than before," Mr. Pichai, who also heads Alphabet, said in the interview Tuesday.

Google has long been a leader in developing computer programs called large language models, or LLMs, which can process and respond to natural-language prompts with humanlike prose. But it hasn't yet used the technology to influence the way people use search -- something Mr. Pichai said would change. "Will people be able to ask questions to Google and engage with LLMs in the context of search? Absolutely," Mr. Pichai said. With Microsoft already deploying the technology behind the ChatGPT system in its Bing search engine, Mr. Pichai is dealing with one of the biggest threats to Google's core business in years as he also faces investor pressure to cut costs. In January, Alphabet said it would eliminate about 12,000 jobs, or 6% of staff, its largest layoffs to date. Inflation and recession concerns have spurred other tech companies to cut back.

Mr. Pichai said Google hasn't yet achieved a goal of becoming 20% more productive, a target he set in September. He said the company was comfortable with its pace of change, though he wouldn't directly address the prospects of another round of layoffs. [...] When asked why the company didn't release a chatbot earlier, Mr. Pichai said Google was still trying to find the right market. "We were iterating to ship something, and maybe timelines changed, given the moment in the industry," he said. Google will continue to improve Bard with new AI models, Mr. Pichai said, while declining to comment on when the product would become freely available without a wait list.

Advertising

Microsoft Slips Ads Into AI-Powered Bing Chat (theverge.com) 56

An anonymous reader quotes a report from The Verge: Microsoft is "exploring" putting ads in the responses given by Bing Chat, its new search agent powered by OpenAI's GPT-4. Microsoft confirmed this is happening, albeit in an experimental form, in a blog post published today. Here's the relevant bit from the very end after "a bit of context" explaining no one should be surprised: "We are also exploring additional capabilities for publishers including our more than 7,500 Microsoft Start partner brands. We recently met with some of our partners to begin exploring ideas and to get feedback on how we can continue to distribute content in a way that is meaningful in traffic and revenue for our partners.

As we look to continue to evolve the model together, we shared some early ideas we're exploring including:

- An expanded hover experience where hovering over a link from a publisher will display more links from that publisher giving the user more ways to engage and driving more traffic to the publisher's website.
- For our Microsoft Start partners, placing a rich caption of Microsoft Start licensed content beside the chat answer helping to drive more user engagement with the content on Microsoft Start where we share the ad revenue with the partner. We're also exploring placing ads in the chat experience to share the ad revenue with partners whose content contributed to the chat response."
