United States

A Breakthrough Online Privacy Proposal Hits Congress (wired.com) 27

An anonymous reader quotes a report from Wired: Congress may be closer than ever to passing a comprehensive data privacy framework after key House and Senate committee leaders released a new proposal on Sunday. The bipartisan proposal, titled the American Privacy Rights Act, or APRA, would limit the types of consumer data that companies can collect, retain, and use to only what they need to operate their services. Users would also be allowed to opt out of targeted advertising, and would have the ability to view, correct, delete, and download their data from online services. The proposal would also create a national registry of data brokers and force those companies to allow users to opt out of having their data sold. [...] In an interview with The Spokesman Review on Sunday, [Cathy McMorris Rodgers, House Energy and Commerce Committee chair] claimed that the draft's language is stronger than any active laws, seemingly in an attempt to assuage the concerns of Democrats who have long fought attempts to preempt preexisting state-level protections. APRA does allow states to pass their own privacy laws related to civil rights and consumer protections, among other exceptions.

In the previous session of Congress, the leaders of the House Energy and Commerce Committee brokered a deal with Roger Wicker, the top Republican on the Senate Commerce Committee, on a bill that would preempt state laws with the exception of the California Consumer Privacy Act and the Biometric Information Privacy Act of Illinois. That measure, titled the American Data Privacy and Protection Act, also created a weaker private right of action than most Democrats were willing to support. Maria Cantwell, Senate Commerce Committee chair, refused to support the measure, instead circulating her own draft legislation. The ADPPA hasn't been reintroduced, but APRA was designed as a compromise. "I think we have threaded a very important needle here," Cantwell told The Spokesman Review. "We are preserving those standards that California and Illinois and Washington have."

APRA includes language from California's landmark privacy law allowing people to sue companies when they are harmed by a data breach. It also gives the Federal Trade Commission, state attorneys general, and private citizens the authority to sue companies when they violate the law. The data covered by APRA includes certain categories of "information that identifies or is linked or reasonably linkable to an individual or device," according to a Senate Commerce Committee summary of the legislation. Small businesses -- those with $40 million or less in annual revenue and limited data collection -- would be exempt under APRA, with enforcement focused on businesses with $250 million or more in yearly revenue. Governments and "entities working on behalf of governments" are excluded under the bill, as are the National Center for Missing and Exploited Children and, apart from certain cybersecurity provisions, "fraud-fighting" nonprofits. Frank Pallone, the top Democrat on the House Energy and Commerce Committee, called the draft "very strong" in a Sunday statement, but said he wanted to "strengthen" it with tighter child safety provisions.

Facebook

Meta Platforms To Launch Small Versions of Llama 3 Next Week (theinformation.com) 7

Meta Platforms is planning to launch two small versions of its forthcoming Llama 3 large-language model next week, The Information has reported [non-paywalled link]. From the report: The models will serve as a precursor to the launch of the biggest version of Llama 3, expected this summer. Release of the two small models will likely help spark excitement for the forthcoming Llama 3, which will be coming out roughly a year after Llama 2 launched last July.

It comes as several companies, including Google, Elon Musk's xAI and Mistral, have released open-source LLMs. Meta hopes Llama 3 will catch up with OpenAI's GPT-4, which can answer questions based on images users upload to the chatbot. The biggest version will be multimodal, meaning it will be capable of understanding and generating both text and images. In contrast, the two small models to be released next week won't be multimodal, according to an employee cited in the report.

Education

Professors Are Now Using AI to Grade Essays. Are There Ethical Concerns? (cnn.com) 102

A professor at Ithaca College runs part of each student's essay through ChatGPT, "asking the AI tool to critique and suggest how to improve the work," reports CNN. (The professor said "The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass ... and it does a pretty good job at that.")

And the same professor then requires their class of 15 students to run their draft through ChatGPT to see where they can make improvements, according to the article: Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism-detection platform Turnitin, found half of college students used AI tools in Fall 2023. Fewer faculty members used AI, but the share grew to 22% in the fall of 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms -- such as ChatGPT, Writable, Grammarly and EssayGrader -- to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They're also using the burgeoning tools to create quizzes, polls, videos and interactives to "up the ante" for what's expected in the classroom. Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft CoPilot -- which is built into Word, PowerPoint and other products.

But while some schools have formed policies on how students can or can't use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

A professor of business ethics at the University of Virginia "suggested teachers use AI to look at certain metrics -- such as structure, language use and grammar -- and give a numerical score on those figures," according to the article. ("But teachers should then grade students' work themselves when looking for novelty, creativity and depth of insight.")
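
If a department wanted to pilot that metric-limited approach, the first pass might look something like the sketch below: a hypothetical rubric prompt sent through the OpenAI Python SDK. The model name, rubric wording, and helper function are illustrative assumptions, not anything described in the article.

```python
# A minimal sketch of metric-limited AI feedback, per the suggestion above:
# score only structure, language use, and grammar, and leave novelty,
# creativity, and depth of insight to the human grader.
# Assumes the openai SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model and rubric text are hypothetical choices.
from openai import OpenAI

client = OpenAI()

RUBRIC_PROMPT = (
    "You are a teaching assistant doing a first-pass review of a student essay. "
    "Score ONLY these metrics from 1-10: structure, language use, grammar. "
    "Do not judge novelty, creativity, or depth of insight. "
    "Return one line per metric, then two sentences of improvement advice."
)

def first_pass_feedback(essay_text: str) -> str:
    """Return rubric-limited feedback for a draft essay."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(first_pass_feedback("The Industrial Revolution reshaped cities..."))
```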

But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..."

Even the Ithaca professor acknowledged to CNN that "If teachers use it solely to grade, and the students are using it solely to produce a final product, it's not going to work."

The Almighty Buck

Roblox Executive Says Children Making Money On the Platform Isn't Exploitation, It's a Gift (eurogamer.net) 60

In an interview, Eurogamer asked Roblox Studio head Stefano Corazza about the platform's reputation as exploitative of young developers, since it takes a cut of work sometimes produced by children. Here's what he had to say: "I don't know, you can say this for a lot of things, right?" Corazza said. "Like, you can say, 'Okay, we are exploiting, you know, child labour,' right? Or, you can say: we are offering people anywhere in the world the capability to get a job, and even like an income. So, I can be like 15 years old, in Indonesia, living in a slum, and then now, with just a laptop, I can create something, make money and then sustain my life. "There's always the flip side of that, when you go broad and democratized -- and in this case, also with a younger audience," he continued. "I mean, our average game developer is in their 20s. But of course, there's people that are teenagers -- and we have hired some teenagers that had millions of players on the platform.

"For them, you know, hearing from their experience, they didn't feel like they were exploited! They felt like, 'Oh my god, this was the biggest gift, all of a sudden I could create something, I had millions of users, I made so much money I could retire.' So I focus more on the amount of money that we distribute every year to creators, which is now getting close to like a billion dollars, which is phenomenal."

At this point, the PR representative present during the interview added that "the vast majority of people that are earning money on Roblox are over the age of 18." "And imagine like, the millions of kids that learn how to code every month," Corazza said. "We have millions of creators in Roblox Studio. They learn Lua scripting," a programming language, "which is pretty close to Python -- you can get a job in the tech industry in the future, and be like, 'Hey, I'm a programmer,' right? "I think that we are really focusing on the learning -- the curriculum, if you want -- and really bringing people on and empowering them to be professionals."

AI

Google Books Is Indexing AI-Generated Garbage (404media.co) 11

Google Books is indexing low-quality, AI-generated books that will turn up in search results, and could possibly impact the Google Ngram Viewer, an important tool used by researchers to track language use throughout history. From a report: I was able to find the AI-generated books with the same method we've previously used to find AI-generated Amazon product reviews, papers published in academic journals, and online articles. Searching Google Books for the term "As of my last knowledge update," which is associated with ChatGPT-generated answers, returns dozens of books that include that phrase. Some of the books are about ChatGPT, machine learning, AI, and other related subjects and include the phrase because they are discussing ChatGPT and its outputs. These books appear to be written by humans. However, most of the books in the first eight pages of results turned up by the search appear to be AI-generated and are not about AI.
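
The phrase-search method 404 Media describes is easy to reproduce. Here is a minimal sketch using the public Google Books volumes API and the `requests` package; the exact-phrase query string is the one quoted above, and the field names come from the API's standard JSON response:

```python
# Query Google Books for the telltale ChatGPT phrase and list matching titles.
import requests

PHRASE = '"As of my last knowledge update"'

resp = requests.get(
    "https://www.googleapis.com/books/v1/volumes",
    params={"q": PHRASE, "maxResults": 20},
    timeout=10,
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    info = item["volumeInfo"]
    print(info.get("publishedDate", "????"), "-", info.get("title", "untitled"))
```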

For example, the 2024 book Bears, Bulls, and Wolves: Stock Trading for the Twenty-Year-Old by Tristin McIver bills itself as "a transformative journey into the world of stock trading" and "a comprehensive guide designed for beginners eager to unlock the mysteries of financial markets." In reality, it reads like ChatGPT-generated text, with surface-level, Wikipedia-style analysis of complex financial events like Facebook's initial public offering or the 2008 financial crisis summed up in a few short paragraphs. [...] Other books appear to be outdated to the point of being useless at the time they are published because they are generated with a version of ChatGPT with an old "knowledge update."

Chrome

Google Brings Keyboard Shortcuts, Custom Mouse Buttons To ChromeOS (theverge.com) 15

A new ChromeOS update (M123) is rolling out that brings customizable keyboard shortcuts and mouse buttons and enables hotspot connections on cellular Chromebooks. The Verge reports: The keyboard shortcut feature will work like it does in other operating systems, in which you can assign specific actions to specific key combinations. Google uses the examples of tweaking shortcuts to be easier to carry out one-handed or making them resemble those you're used to in, say, macOS. The same goes for mouse button customization -- if your mouse has extra buttons besides just left and right clicks, and you want to turn that weird side button into a mute button, you can do that in ChromeOS with this update.

The company also added per-app language preferences for Android apps that you're running in ChromeOS, and it says it has made its offline text-to-speech voices more natural-sounding. As is Google's way, these updates will be rolling out over the next few days.

AI

Anthropic Researchers Wear Down AI Ethics With Repeated Questions (techcrunch.com) 42

How do you get an AI to answer a question it's not supposed to? There are many such "jailbreak" techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first. From a report: They call the approach "many-shot jailbreaking" and have both written a paper about it [PDF] and also informed their peers in the AI community about it so it can be mitigated. The vulnerability is a new one, resulting from the increased "context window" of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic's researchers found was that these models with large context windows tend to perform better on many tasks if there are lots of examples of that task within the prompt. So if there are lots of trivia questions in the prompt (or priming document, like a big list of trivia that the model has in context), the answers actually get better over time. A fact the model might have gotten wrong as the first question in a prompt, it may get right as the hundredth.
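
To make the mechanism concrete, here is a toy sketch of the many-shot prompt structure the researchers describe, shown with harmless trivia rather than anything harmful; `ask_model` stands in for whatever LLM API you would call, and the helper is an illustration, not Anthropic's code:

```python
def build_many_shot_prompt(examples: list[tuple[str, str]], question: str) -> str:
    """Concatenate many solved Q/A pairs ahead of the real question."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

trivia = [
    ("What is the capital of France?", "Paris"),
    ("How many legs does a spider have?", "Eight"),
    # ...a large context window lets this list run to hundreds of shots...
]

prompt = build_many_shot_prompt(trivia, "Which planet in our solar system is largest?")
# answer = ask_model(prompt)  # accuracy tends to rise with the shot count
```

The jailbreak works by filling those slots with dozens of examples of compliance with borderline requests, so the final harmful question reads like just one more item in an established pattern.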

AI

Databricks Claims Its Open Source Foundational LLM Outsmarts GPT-3.5 (theregister.com) 17

Lindsay Clark reports via The Register: Analytics platform Databricks has launched an open source foundational large language model, hoping enterprises will opt to use its tools to jump on the LLM bandwagon. The biz, founded around Apache Spark, published a slew of benchmarks claiming its general-purpose LLM -- dubbed DBRX -- beat open source rivals on language understanding, programming, and math. The developer also claimed it beat OpenAI's proprietary GPT-3.5 across the same measures.

DBRX was developed by Mosaic AI, which Databricks acquired for $1.3 billion, and trained on Nvidia DGX Cloud. Databricks claims it optimized DBRX for efficiency with what it calls a mixture-of-experts (MoE) architecture -- where multiple expert networks or learners divide up a problem. Databricks explained that the model possesses 132 billion parameters, but only 36 billion are active on any one input. Joel Minnick, Databricks marketing vice president, told The Register: "That is a big reason why the model is able to run as efficiently as it does, but also runs blazingly fast. In practical terms, if you use any kind of major chatbots that are out there today, you're probably used to waiting and watching the answer get generated. With DBRX it is near instantaneous."
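
For readers unfamiliar with MoE, the sketch below shows the routing idea in PyTorch: a learned router activates only the top-k experts per token, so most parameters sit idle on any given input. The layer sizes, expert count, and k are toy values for illustration, not DBRX's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):  # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)
        topk_w, topk_idx = weights.topk(self.k, dim=-1)
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        for token, (idxs, ws) in enumerate(zip(topk_idx, topk_w)):
            for i, w in zip(idxs.tolist(), ws):
                out[token] += w * self.experts[i](x[token])  # only k experts run
        return out

moe = ToyMoE()
print(moe(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```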

But the performance of the model itself is not the point for Databricks. The biz is, after all, making DBRX available for free on GitHub and Hugging Face. Databricks is hoping customers use the model as the basis for their own LLMs. If that happens it might improve customer chatbots or internal question answering, while also showing how DBRX was built using Databricks's proprietary tools. Databricks put together the dataset from which DBRX was developed using Apache Spark and Databricks notebooks for data processing, Unity Catalog for data management and governance, and MLflow for experiment tracking.
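
As a small illustration of the experiment-tracking piece, here is the generic MLflow logging pattern; this is the public mlflow API, not Databricks's actual DBRX pipeline, and the logged values are placeholders:

```python
import mlflow

with mlflow.start_run(run_name="toy-llm-train"):
    mlflow.log_param("n_params_total", "132B")   # illustrative values only
    mlflow.log_param("n_params_active", "36B")
    for step, loss in enumerate([2.31, 1.98, 1.74]):
        mlflow.log_metric("train_loss", loss, step=step)
```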

Programming

Rust Developers at Google Twice as Productive as C++ Teams (theregister.com) 121

An anonymous reader shares a report: Echoing the past two years of Rust evangelism and C/C++ ennui, Google reports that Rust shines in production, to the point that its developers are twice as productive using the language compared to C++. Speaking at the Rust Nation UK Conference in London this week, Lars Bergstrom, director of engineering at Google, who works on Android Platform Tools & Libraries, described the web titan's experience migrating projects written in Go or C++ to the Rust programming language.

Bergstrom said that while Dropbox in 2016 and Figma in 2018 offered early accounts of rewriting code in memory-safe Rust -- and doubts about the language's productivity have since subsided -- concerns have lingered about its reliability and security. "Even six months ago, this was a really tough conversation," he said. "I would go and I would talk to people and they would say, 'Wait, wait you have an `unsafe` keyword. That means we should all write C++ until the heat death of the Universe.'"

But there's been a shift in awareness across the software development ecosystem, Bergstrom argued, about the challenges of using non-memory safe languages. Such messaging is now coming from government authorities in the US and other nations who understand the role software plays in critical infrastructure. The reason is that the majority of security vulnerabilities in large codebases can be traced to memory security bugs. And since Rust code can largely if not totally avoid such problems when properly implemented, memory safety now looks a lot like a national security issue.

AI

Apple AI Researchers Boast Useful On-Device Model That 'Substantially Outperforms' GPT-4 (9to5mac.com) 40

Zac Hall reports via 9to5Mac: In a newly published research paper (PDF), Apple's AI gurus describe a system in which Siri can do much more than try to recognize what's in an image. The best part? It thinks one of its models for doing this benchmarks better than ChatGPT 4.0. In the paper (ReALM: Reference Resolution As Language Modeling), Apple describes something that could give a large language model-enhanced voice assistant a usefulness boost. ReALM takes into account both what's on your screen and what tasks are active. [...] If it works well, that sounds like a recipe for a smarter and more useful Siri.

Apple also sounds confident in its ability to complete such a task with impressive speed. Benchmarking is compared against OpenAI's ChatGPT 3.5 and ChatGPT 4.0: "As another baseline, we run the GPT-3.5 (Brown et al., 2020; Ouyang et al., 2022) and GPT-4 (Achiam et al., 2023) variants of ChatGPT, as available on January 24, 2024, with in-context learning. As in our setup, we aim to get both variants to predict a list of entities from a set that is available. In the case of GPT-3.5, which only accepts text, our input consists of the prompt alone; however, in the case of GPT-4, which also has the ability to contextualize on images, we provide the system with a screenshot for the task of on-screen reference resolution, which we find helps substantially improve performance."

So how does Apple's model do? "We demonstrate large improvements over an existing system with similar functionality across different types of references, with our smallest model obtaining absolute gains of over 5% for on-screen references. We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it." Substantially outperforming it, you say? The paper concludes in part as follows: "We show that ReaLM outperforms previous approaches, and performs roughly as well as the state-of-the-art LLM today, GPT-4, despite consisting of far fewer parameters, even for onscreen references despite being purely in the textual domain. It also outperforms GPT-4 for domain-specific user utterances, thus making ReaLM an ideal choice for a practical reference resolution system that can exist on-device without compromising on performance."
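
The core trick, as the paper's title suggests, is recasting reference resolution as a text problem. A toy illustration of that idea follows; the entity format and prompt are invented for this sketch, and Apple's actual encoding is specified in the paper itself:

```python
screen_entities = [  # (entity_id, type, text as shown on screen)
    ("e1", "phone_number", "555-0114"),
    ("e2", "address", "1 Infinite Loop, Cupertino"),
    ("e3", "phone_number", "555-0199"),
]

def encode_screen(entities):
    """Flatten tagged on-screen entities into a textual context block."""
    lines = [f"[{eid}|{etype}] {text}" for eid, etype, text in entities]
    return "Entities on screen:\n" + "\n".join(lines)

utterance = "call the second one"
prompt = (
    f"{encode_screen(screen_entities)}\n"
    f"User: {utterance}\n"
    "Which entity id does the user mean? Answer with the id only."
)
# resolved = ask_model(prompt)  # a tuned model should resolve this to: e3
```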

AI

For Data-Guzzling AI Companies, the Internet Is Too Small (wsj.com) 60

Companies racing to develop more powerful artificial intelligence are rapidly nearing a new problem: The internet might be too small for their plans (non-paywalled link). From a report: Ever more powerful systems developed by OpenAI, Google and others require larger oceans of information to learn from. That demand is straining the available pool of quality public data online at the same time that some data owners are blocking access to AI companies. Some executives and researchers say the industry's need for high-quality text data could outstrip supply within two years, potentially slowing AI's development.

AI companies are hunting for untapped information sources, and rethinking how they train these systems. OpenAI, the maker of ChatGPT, has discussed training its next model, GPT-5, on transcriptions of public YouTube videos, people familiar with the matter said. Companies also are experimenting with using AI-generated, or synthetic, data as training material -- an approach many researchers say could actually cause crippling malfunctions. These efforts are often secret, because executives think solutions could be a competitive advantage.

Data is among several essential AI resources in short supply. The chips needed to run what are called large-language models behind ChatGPT, Google's Gemini and other AI bots also are scarce. And industry leaders worry about a dearth of data centers and the electricity needed to power them. AI language models are built using text vacuumed up from the internet, including scientific research, news articles and Wikipedia entries. That material is broken into tokens -- words and parts of words that the models use to learn how to formulate humanlike expressions.
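
The token mechanics are easy to see with tiktoken (`pip install tiktoken`), the tokenizer library OpenAI publishes for its models; the encoding name below is the one used by GPT-3.5/GPT-4-era models:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Tokenization splits text into subword pieces.")
print(ids)                              # a list of integer token ids
print([enc.decode([i]) for i in ids])   # the individual token strings
```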

Microsoft

Microsoft Engineer Sends Rust Linux Kernel Patches For In-Place Module Initialization (phoronix.com) 49

"What a time we live in," writes Phoronix, "where Microsoft not only continues contributing significantly to the Linux kernel but doing so to further flesh out the design of the Linux kernel's Rust programming language support..." Microsoft engineer Wedson Almeida Filho has sent out the latest patches working on Allocation APIs for the Rust Linux kernel code and also in leveraging those proposed APIs [as] a means of allowing in-place module initialization for Rust kernel modules. Wedson Almeida Filho has been a longtime Rust for Linux contributor going back to his Google engineering days and at Microsoft the past two years has shown no signs of slowing down on the Rust for Linux activities...

The Rust for Linux kernel effort remains a very vibrant effort with a wide variety of organizations contributing, even Microsoft engineers.

Hardware

Half of Russian-Made Chips Are Defective (tomshardware.com) 64

Anton Shilov reports via Tom's Hardware: About half of the processors packaged in Russia are defective. This has prompted Baikal Electronics, a Russian processor developer, to expand the number of packaging partners in the country, according to a report in Vedomosti, a Russian-language business daily newspaper published in Moscow (hat tip to Cnews). In addition to GS Group based in Kaliningrad, the company will now use Milandr and Mikron, which are based in Zelenograd, a town near Moscow. What remains unclear is which foundry initially produces the chips for Baikal. [...]

There are no contract chipmakers in Russia that can process wafers on 28nm-class fabrication technologies, so Baikal is likely using a Chinese foundry to make its processors. Since 2021, the company has been experimenting with localizing chip packaging at GS Group in Kaliningrad. But transitioning to local packaging has not been smooth. The process is intricate and costly, leading to a high rate of defects. According to industry insiders, more than half of the chip batches end up being defective due to issues with equipment calibration and the lack of skilled personnel. It turns out that GS Group cannot fulfill the demands of Baikal, which has now tapped Milandr and Mikron to assist with chip packaging. Apparently, it hasn't helped much.
"More than half of the chip batches turn out to be defective," a source familiar with the matter told Vedomosti. "The reasons lie in both the equipment of the enterprises, which needs to be properly configured, and the insufficient competencies of the people involved in chip packaging."

"Russia can package a small number of processors, but when it comes to a series, a lot of defects appear," explained one of the newspaper's sources. "Manufacturers cannot maintain a consistently high level across all products."

AI

OpenAI Reveals AI Tool To Recreate Human Voices (axios.com) 24

An anonymous reader quotes a report from Axios: OpenAI said on Friday it's allowed a small number of businesses to test a new tool that can recreate a person's voice from just a 15-second recording. The company said it is taking "a cautious and informed approach" to releasing the program, called Voice Engine, more broadly given the high risk of abuse presented by synthetic voice generators.

Based on the 15-second recording, the program can create an "emotive and realistic" natural-sounding voice that closely resembles the original speaker. This synthetic voice can then be used to read text inputs, even if the text isn't in the original speaker's native language. In one example offered by the company, an English speaker's voice was translated into Spanish, Mandarin, German, French and Japanese while preserving the speaker's native accent.

OpenAI said Voice Engine has so far been used to provide reading assistance to non-readers, translate content and to help people who are non-verbal. It said the program has already been used in its text-to-speech application and its ChatGPT Voice and Read Aloud tool.
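
Voice Engine itself is not publicly available, but as context for the text-to-speech application mentioned above, here is a minimal sketch of OpenAI's documented public TTS endpoint using a stock (non-cloned) voice; it assumes the openai SDK and an OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()
speech = client.audio.speech.create(
    model="tts-1",   # OpenAI's public text-to-speech model
    voice="alloy",   # one of the stock preset voices
    input="Hope is the thing with feathers that perches in the soul.",
)
speech.write_to_file("speech.mp3")  # save the generated audio
```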
"We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities," the company said. "Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale."

Software

Proxmox Import Wizard Makes for Easy VMware VM Migrations (storagereview.com) 39

Lyle Smith reports via StorageReview.com: Proxmox has introduced a new import wizard for Proxmox Virtual Environment (VE), aiming to simplify the process of importing VMware ESXi VMs. This new feature comes at an important time for the industry, as it aims to ease the transition for organizations looking to move away from VMware's vSphere due to high renewal costs.

The new import wizard is integrated into Proxmox VE's existing storage plugin system, allowing for direct integration into the platform's API and web-based user interface. It offers users the ability to import VMware ESXi VMs in their entirety, translating most of the original VM's configuration settings to Proxmox VE's configuration model (all while minimizing downtime). Currently, the import wizard is in a technical preview state, having been added during the Proxmox VE 8.2 development cycle. Although it is still under active development, early reports suggest the wizard is stable and holds considerable promise for future enhancements, including the planned addition of support for other import sources like OVF/OVA files. [...]

This tool represents Proxmox's commitment to providing accessible, open-source virtualization solutions. By leveraging the official ESXi API and implementing a user space filesystem with optimized read-ahead caching in Rust (a safe, fast, and modern programming language ideal for system-level tasks), Proxmox aims to ensure that this new feature can be integrated smoothly into its broader ecosystem.

AI

Claude 3 Surpasses GPT-4 on Chatbot Arena For the First Time (arstechnica.com) 19

Anthropic's recently released Claude 3 Opus large language model has beaten OpenAI's GPT-4 for the first time on Chatbot Arena, a popular crowdsourced leaderboard used by AI researchers to gauge the relative capabilities of AI language models. A report adds: "The king is dead," tweeted software developer Nick Dobos in a post comparing GPT-4 Turbo and Claude 3 Opus that has been making the rounds on social media. "RIP GPT-4."

Since GPT-4 was included in Chatbot Arena around May 10, 2023 (the leaderboard launched May 3 of that year), variations of GPT-4 have consistently been on the top of the chart until now, so its defeat in the Arena is a notable moment in the relatively short history of AI language models. One of Anthropic's smaller models, Haiku, has also been turning heads with its performance on the leaderboard.

"For the first time, the best available models -- Opus for advanced tasks, Haiku for cost and efficiency -- are from a vendor that isn't OpenAI," independent AI researcher Simon Willison told Ars Technica. "That's reassuring -- we all benefit from a diversity of top vendors in this space. But GPT-4 is over a year old at this point, and it took that year for anyone else to catch up." Chatbot Arena is run by Large Model Systems Organization (LMSYS ORG), a research organization dedicated to open models that operates as a collaboration between students and faculty at University of California, Berkeley, UC San Diego, and Carnegie Mellon University.

AI

The Air Force Bought a Surveillance-Focused AI Chatbot (404media.co) 11

The U.S. Air Force paid for a test version of an AI-powered chatbot to assist in intelligence and surveillance tasks as part of a $1.2 million deal, according to internal Air Force documents obtained by 404 Media. From the report: The news provides more insight into what military agencies are currently exploring AI for, and comes as more AI companies eye the military space as a business opportunity. OpenAI, for instance, quietly removed language that expressly prohibited use of its technology for military purposes in January. "Edge AI Platform for Space and Unmanned Aerial Imagery Intelligence," a section of one of the documents reads. The contract is between the Air Force and a company called Misram LLC, which also operates under the name Spectronn.

Included in a "milestone schedule" explaining the specifics of the deal are the items "ISR chatbot design" and "ISR chatbot software." ISR refers to intelligence, surveillance, and reconnaissance, a common military term. Other items in the schedule include "data ingestion tool" and "data visualization tool." 404 Media obtained the documents through a Freedom of Information Act (FOIA) request with the Air Force. On its website, Spectronn advertises an "AI Digital Assistant for Analytics." It says the bot can take data such as images and videos, and then answer plain English questions about that information. "Current analytics dashboard solutions are complex and not human-friendly. It leads to severe latency (from hours to days), cognitive load on the data analyst, false alarms, and frustrated decision makers or end-users," it reads.

AI

The AI Boom is Sending Silicon Valley's Talent Wars To New Extremes (wsj.com) 26

Tech companies are serving up million-dollar-a-year compensation packages, accelerated stock-vesting schedules and offers to poach entire engineering teams to draw people with expertise and experience in the kind of generative AI that is powering ChatGPT and other humanlike bots. They are competing against each other and against startups vying to be the next big thing to unseat the giants. From a report: The offers stand out even by the industry's relatively lavish past standards of outsize pay and perks. And the current AI talent shortage stands out for another reason: It is happening as layoffs are continuing in other areas of tech and as companies have been reallocating resources to invest more in covering the enormous cost of developing AI technology.

"There is a secular shift in what talents we're going after," says Naveen Rao, head of Generative AI at Databricks. "We have a glut of people on one side and a shortage on the other." Databricks, a data storage and management startup, doesn't have a problem finding software engineers. But when it comes to candidates who have trained large language models, or LLMs, from scratch or can help solve vexing problems in AI, such as hallucinations, Rao says there might be only a couple of hundred people out there who are qualified.

Some of these hard-to-find, tier-one candidates can easily get total compensation packages of $1 million a year or more. Salespeople in AI are also in demand and hard to find. Selling at the beginning of a technology transition when things are changing rapidly requires a different skill set and depth of knowledge. Candidates with those skills are making around double what an enterprise software salesperson would. But that isn't the norm for most people working in AI, Rao says. For managerial roles in AI and machine learning, base-pay increases ranged from 5% to 11% from April 2022 to April 2023, according to a WTW survey of more than 1,500 employers. The base-pay increases of nonmanagerial roles ranged from 13% to 19% during the same period.

AI

World Poker Tour Bets on AI Dubbing of Tournaments for Latin America (hollywoodreporter.com) 9

Georg Szalai reports via the Hollywood Reporter: The World Poker Tour (WPT) is betting on AI-powered dubbing tools under a partnership with Papercup, a London-based AI dubbing company, that will replace WPT's traditional localization methods in Latin America. Papercup will work with the World Poker Tour to translate 184 of the franchise's 44-minute-long episodes into Brazilian Portuguese, the companies said.

"This will amount to nearly 140 hours of content and enable viewers across South America to access WPT's latest shows and tournaments in their native language quicker than ever before," they explained. "Forced to deal with lead times of up to six months, the company experienced ongoing challenges with timely content delivery and adaptation." The Papercup deal will cut those lead times in half, the partners said. "Now the premier poker content produced by WPT will be able to reach international fans watching on OTT platforms, as well as its own FAST channel, faster than ever before," they touted. Financial terms weren't disclosed.

Papercup uses a combination of machine-learning tools and expert human translators to "deliver maximal linguistic and tonal accuracy." Its AI voices are built using data from real voice actors to ensure they "have all the warmth and expressivity of human speech," it says. "The quality of Papercup dubbing has been second to none. A big part of that is down to their AI voices and expert translators who go through every sentence to make sure the moment is truly captured in the new AI dubs," said Marc Dion, director of distribution & ad sales at WPT. "The major streaming platforms have very stringent criteria when it comes to dubbed content and if it's going to connect with our shared viewers."

Businesses

Telegram's Peer-to-Peer Login System is a Risky Way To Save $5 a Month 32

Telegram is offering a new way to earn a premium subscription free of charge: all you have to do is volunteer your phone number to relay one-time passwords (OTP) to other users. This, in fact, sounds like an awful idea -- particularly for a messaging service based around privacy. From a report: X user @AssembleDebug spotted details about the new program on the English-language version of a popular Russian-language Telegram information channel. Sure enough, there's a section in Telegram's terms of service outlining the new "Peer-to-Peer Login" or P2PL program, which is currently only offered on Android and in certain (unspecified) locations. By opting in to the program, you agree to let Telegram use your phone number to send up to 150 texts with OTPs to other users logging in to their accounts. Each month that your number is used to send a minimum number of OTPs, you'll get a gift code for a one-month premium subscription. Boy does this sound like a bad idea, starting with the main issue: your phone number is seen by the recipient every time it's used to send an OTP.
