Games

Call of Duty: Black Ops 6's Enormous 309GB Download 'Not Representative of a Typical Player Install Experience' (eurogamer.net) 37

Activision has clarified Call of Duty: Black Ops 6 isn't 309GB after all -- or at least, you can download the core of it for less. From a report: This is despite Xbox's store page for the game stating that Call of Duty: Black Ops 6's install size is a rather chunky 309.85 GB. The figure turned heads, as it seemed excessive. The Call of Duty team has now issued a correction with more detail. Writing on social media platform X, Activision stated the file size currently listed for Black Ops 6 "does not represent the download size or disk footprint" for its upcoming Call of Duty game.

"The sizes as shown include the full installations of Modern Warfare 2, Modern Warfare 3, Warzone and all relevant content packs, including all localised languages combined which is not representative of a typical player install experience," it explained, before adding: "Players will be able to download Black Ops 6 at launch without downloading any other Call of Duty titles or all of the language packs."

AI

SoftBank's New AI Makes Angry Customers Sound Calm On Phone 104

SoftBank has developed AI voice-conversion technology aimed at reducing the psychological stress on call center operators by altering the voices of angry customers to sound calmer. Japan's The Asahi Shimbun reports: The company launched a study on "emotion canceling" three years ago, which uses AI voice-processing technology to change the voice of a person over a phone call. Toshiyuki Nakatani, a SoftBank employee, came up with the idea after watching a TV program about customer harassment. "If the customers' yelling voice sounded like Kitaro's Eyeball Dad, it would be less scary," he said, referring to a character in the popular anime series "Gegege no Kitaro."

The voice-altering AI learned many expressions, including yelling and accusatory tones, to improve vocal conversions. Ten actors were hired to perform more than 100 phrases with various emotions, training the AI with more than 10,000 pieces of voice data. The technology does not change the wording, but the pitch and inflection of the voice is softened. For instance, a woman's high-pitched voice is lowered in tone to sound less resonant. A man's bass tone, which may be frightening, is raised to a higher pitch to sound softer. However, if an operator cannot tell if a customer is angry, the operator may not be able to react properly, which could just upset the customer further. Therefore, the developers made sure that a slight element of anger remains audible.
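The pitch-softening idea described above can be illustrated with a toy sketch. This is not SoftBank's technology, just a minimal stdlib demonstration that resampling a waveform shifts its pitch; a production system would preserve duration and timbre with far more sophisticated signal processing.

```python
import math

def pitch_shift(samples, factor):
    """Naive pitch shift by linear-interpolation resampling:
    factor > 1 raises pitch, factor < 1 lowers it
    (duration changes as a side effect of this crude approach)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Interpolate between the two neighboring samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

def zero_crossings(samples):
    """Count sign changes; proportional to the dominant frequency."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

# A 440 Hz tone sampled at 8 kHz for half a second
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr // 2)]
softer = pitch_shift(tone, 0.5)  # an octave lower, i.e. a "softer" voice
```

Lowering the pitch halves the zero-crossing rate, which is the rough effect the article describes when a high-pitched voice is lowered to sound less resonant.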

According to the company, the biggest burdens on operators are hearing abusive language and being trapped in long conversations with customers who will not get off the line -- such as when making persistent requests for apologies. With the new technology, if the AI determines that the conversation is too long or too abusive, a warning message will be sent out, such as, "We regret to inform you that we will terminate our service." [...] The company plans to further improve the accuracy of the technology by having AI learn voice data and hopes to sell the technology starting from fiscal 2025.
Nakatani said, "AI is good at handling complaints and can do so for long hours, but what angry customers want is for a human to apologize to them." He said he hopes that AI "will become a mental shield that prevents operators from overstraining their nerves."
Science

African Elephants Address One Another With Individually Specific Name-Like Calls (nature.com) 31

Abstract of a paper published in Nature: Personal names are a universal feature of human language, yet few analogues exist in other species. While dolphins and parrots address conspecifics by imitating the calls of the addressee, human names are not imitations of the sounds typically made by the named individual. Labelling objects or individuals without relying on imitation of the sounds made by the referent radically expands the expressive power of language. Thus, if non-imitative name analogues were found in other species, this could have important implications for our understanding of language evolution. Here we present evidence that wild African elephants address one another with individually specific calls, probably without relying on imitation of the receiver. We used machine learning to demonstrate that the receiver of a call could be predicted from the call's acoustic structure, regardless of how similar the call was to the receiver's vocalizations. Moreover, elephants differentially responded to playbacks of calls originally addressed to them relative to calls addressed to a different individual. Our findings offer evidence for individual addressing of conspecifics in elephants. They further suggest that, unlike other non-human animals, elephants probably do not rely on imitation of the receiver's calls to address one another.
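The paper's core test, predicting a call's receiver from the call's acoustic structure, can be illustrated with a toy nearest-centroid classifier. The feature vectors and elephant names below are fabricated placeholders, not real acoustic measurements; the authors used far richer features and models.

```python
def centroid(vectors):
    """Mean feature vector of a set of calls."""
    return [sum(x) / len(vectors) for x in zip(*vectors)]

def predict(call, centroids):
    """Assign a call to the receiver whose centroid it sits closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda r: dist(call, centroids[r]))

# Toy training data: receiver -> feature vectors of calls addressed to them
calls = {
    "eleph_A": [[1.0, 0.2], [1.1, 0.1], [0.9, 0.3]],
    "eleph_B": [[0.1, 1.0], [0.2, 1.2], [0.0, 0.9]],
}
centroids = {r: centroid(v) for r, v in calls.items()}
```

If a held-out call is classified to the correct receiver more often than chance, the calls carry receiver-specific information, which is the paper's key claim.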
Social Networks

The Word 'Bot' Is Increasingly Being Used As an Insult On Social Media (newscientist.com) 111

The definition of the word "bot" is shifting to become an insult directed at someone you know is human, according to researchers who analyzed more than 22 million tweets. Researchers found this shift began around 2017, with left-leaning users more likely to accuse right-leaning users of being bots. "A potential explanation might be that media frequently reported about right-wing bot networks influencing major events like the [2016] US election," says Dennis Assenmacher at Leibniz Institute for Social Sciences in Cologne, Germany. "However, this is just speculation and would need confirmation." NewScientist reports: To investigate, Assenmacher and his colleagues looked at how users perceive what is and is not a bot. They did so by looking at how the word "bot" was used on Twitter between 2007 and December 2022 (the social network changed its name to X in 2023, following its purchase by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets. The team found that before 2017, the word was usually deployed alongside allegations of automated behavior of the type that would traditionally fit the definition of a bot, such as "software," "script" or "machine." After that date, the use shifted. "Now, the accusations have become more like an insult, dehumanizing people, insulting them, and using this as a technique to deny their intelligence and deny their right to participate in a conversation," says Assenmacher. The study has been published in the journal Proceedings of the Eighteenth International AAAI Conference on Web and Social Media.
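The methodology described, examining which words appear next to "bot", amounts to a simple co-occurrence count. A minimal sketch, with invented example tweets standing in for the researchers' 22-million-tweet corpus:

```python
from collections import Counter
import re

def context_terms(tweets, keyword="bot", window=3):
    """Count words appearing within `window` tokens of `keyword`."""
    counts = Counter()
    for text in tweets:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == keyword:
                lo, hi = max(0, i - window), i + window + 1
                # Count every nearby token except the keyword itself
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo)
                              if j != i)
    return counts

# Invented examples of the two usage eras the study describes
early = ["That account is a bot script posting automated spam",
         "Some bot software retweets the same machine output"]
late  = ["You're just a bot, you idiot",
         "Only a bot troll would say something that stupid"]
```

Comparing `context_terms(early)` with `context_terms(late)` surfaces the vocabulary shift the study reports: technical terms like "script" and "software" before 2017, insults afterward.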
AI

Scammers' New Way of Targeting Small Businesses: Impersonating Them (wsj.com) 17

Copycats are stepping up their attacks on small businesses. Sellers of products including merino socks and hummingbird feeders say they have lost customers to online scammers who use the legitimate business owners' videos, logos and social-media posts to assume their identities and steer customers to cheap knockoffs or simply take their money. WSJ: "We used to think you'd be targeted because you have a brand everywhere," said Alastair Gray, director of anticounterfeiting for the International Trademark Association, a nonprofit that represents brand owners. "It now seems with the ease at which these criminals can replicate websites, they can cut and paste everything." Technology has expanded the reach of even the smallest businesses, making it easy to court customers across the globe. But evolving technology has also boosted opportunities for copycats; ChatGPT and other advances in artificial intelligence make it easier to avoid language or spelling errors, often a signal of fraud.

Imitators also have fine-tuned their tactics, including by outbidding legitimate brands for top position in search results. "These counterfeiters will market themselves just like brands market themselves," said Rachel Aronson, co-founder of CounterFind, a Dallas-based brand-protection company. Policing copycats is particularly challenging for small businesses with limited financial resources and not many employees. Online giants such as Amazon.com and Meta Platforms say they use technology to identify and remove misleading ads, fake accounts or counterfeit products.

AI

Apple Unveils Apple Intelligence 29

As rumored, Apple today unveiled Apple Intelligence, its long-awaited push into generative artificial intelligence (AI), promising highly personalized experiences built with safety and privacy at its core. The feature, referred to as "A.I.", will be integrated into Apple's various operating systems, including iOS, macOS, and the latest, visionOS. CEO Tim Cook said that Apple Intelligence goes beyond artificial intelligence, calling it "personal intelligence" and "the next big step for Apple."

Apple Intelligence is built on large language and intelligence models, with much of the processing done locally on the latest Apple silicon. Private Cloud Compute is being added to handle more intensive tasks while maintaining user privacy. The update also includes significant changes to Siri, Apple's virtual assistant, which will now support typed queries and deeper integration into various apps, including third-party applications. This integration will enable users to perform complex tasks without switching between multiple apps.

Apple Intelligence will roll out to the latest versions of Apple's operating systems, including iOS and iPadOS 18, macOS Sequoia, and visionOS 2.
AI

Teams of Coordinated GPT-4 Bots Can Exploit Zero-Day Vulnerabilities, Researchers Warn (newatlas.com) 27

New Atlas reports on a research team that successfully used GPT-4 to exploit 87% of newly-discovered security flaws for which a fix hadn't yet been released. This week the same team got even better results from a team of autonomous, self-propagating Large Language Model agents using a Hierarchical Planning with Task-Specific Agents (HPTSA) method: Instead of assigning a single LLM agent to solve many complex tasks, HPTSA uses a "planning agent" that oversees the entire process and launches multiple task-specific "subagents"... When benchmarked against 15 real-world web-focused vulnerabilities, HPTSA proved 550% more efficient than a single LLM in exploiting vulnerabilities and was able to hack 8 of 15 zero-day vulnerabilities. The solo LLM effort was able to hack only 3 of the 15 vulnerabilities.
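The hierarchical structure described above can be sketched as a planner dispatching task-specific specialists. The class names, the `weaknesses` field, and the stubbed exploit logic are illustrative assumptions, not the researchers' actual implementation, in which each agent drives an LLM with browsing and exploitation tools.

```python
class Subagent:
    """A task-specific agent specializing in one vulnerability class."""
    def __init__(self, specialty):
        self.specialty = specialty

    def attempt(self, target):
        # A real subagent would run an LLM-driven exploit loop here;
        # this stub just checks whether the target exhibits its class.
        return self.specialty in target["weaknesses"]

class PlanningAgent:
    """Oversees the whole run: decides which subagents to launch."""
    def __init__(self, subagents):
        self.subagents = subagents

    def run(self, target):
        # Launch every relevant specialist and report any success
        results = {a.specialty: a.attempt(target) for a in self.subagents}
        return any(results.values()), results

planner = PlanningAgent([Subagent("sqli"), Subagent("xss"), Subagent("csrf")])
hacked, detail = planner.run({"name": "demo-app", "weaknesses": {"xss"}})
```

The design intuition is the one the paper's benchmark supports: a planner plus narrow specialists covers more vulnerability classes than one generalist agent juggling every task at once.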
"Our findings suggest that cybersecurity, on both the offensive and defensive side, will increase in pace," the researchers conclude. "Now, black-hat actors can use AI agents to hack websites. On the other hand, penetration testers can use AI agents to aid in more frequent penetration testing. It is unclear whether AI agents will aid cybersecurity offense or defense more and we hope that future work addresses this question.

"Beyond the immediate impact of our work, we hope that our work inspires frontier LLM providers to think carefully about their deployments."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
DRM

Big Copyright Win in Canada: Court Rules Fair Use Beats Digital Locks (michaelgeist.ca) 16

Pig Hogger (Slashdot reader #10,379) reminds us that in Canadian law, "fair use" is called "fair dealing" — and that Canadian digital media users just enjoyed a huge win. Canadian user rights champion Michael Geist writes: The Federal Court has issued a landmark decision on copyright's anti-circumvention rules which concludes that digital locks should not trump fair dealing. Rather, the two must co-exist in harmony, leading to an interpretation that users can still rely on fair dealing even in cases involving those digital locks.

The decision could have enormous implications for libraries, education, and users more broadly as it seeks to restore the copyright balance in the digital world. The decision also importantly concludes that merely requiring a password does not meet the standard needed to qualify for copyright rules involving technological protection measures.

Canada's 2012 "Copyright Modernization Act" protected anti-copying technology from circumvention, Geist writes — and Blacklock's Reporter had then "argued that allowing anyone other than the original subscriber to access articles constituted copyright infringement." The court found that the Blacklock's legal language associated with its licensing was confusing and that fair dealing applied here as well...

Blacklock's position on this issue was straightforward: it argued that its content was protected by a password, that passwords constituted a form of technological protection measure, and that fair dealing does not apply in the context of circumvention. In other words, it argued that the act of circumvention (in this case of a password) was itself infringing and it could not be saved by fair dealing. The Federal Court disagreed on all points...

For years, many have argued for a specific exception to clarify that circumvention was permitted for fair dealing purposes, essentially making the case that users should not lose their fair dealing rights the moment a rights holder places a digital lock on their work. The Federal Court has concluded that the fair dealing rights have remained there all along and that the Copyright Act's anti-circumvention rules must be interpreted in a manner consistent with those rights.

"The case could still be appealed, but for now the court has restored a critical aspect of the copyright balance after more than a decade of uncertainty and concern."
United States

Louisiana Becomes 10th US State to Make CS a High School Graduation Requirement (linkedin.com) 89

Long-time Slashdot reader theodp writes: "Great news, Louisiana!" tech-backed Code.org exclaimed Wednesday in celebratory LinkedIn, Facebook, and Twitter posts. Louisiana is "officially the 10th state to make computer science a [high school] graduation requirement. Huge thanks to Governor Jeff Landry for signing the bill and to our legislative champions, Rep. Jason Hughes and Sen. Thomas Pressly, for making it happen! This means every Louisiana student gets a chance to learn coding and other tech skills that are super important these days. These skills can help them solve problems, think critically, and open doors to awesome careers!"

Representative Hughes, the sponsor of HB264 — which calls for each public high school student to successfully complete a one credit CS course as a requirement for graduation and also permits students to take two units of CS instead of studying a Foreign Language — tweeted back: "HUGE thanks @codeorg for their partnership in this effort every step of the way! Couldn't have done it without [Code.org Senior Director of State Government Affairs] Anthony [Owen] and the Code.org team!"

Code.org also on Wednesday announced the release of its 2023 Impact Report, which touted its efforts "to include a requirement for every student to take computer science to receive a high school diploma." Since its 2013 launch, Code.org reports it's spent $219.8 million to push coding into K-12 classrooms, including $19 million on Government Affairs (Achievements: "Policies changed in 50 states. More than $343M in state budgets allocated to computer science.").

In Code.org by the Numbers, the nonprofit boasts that 254,683 students started Code.org's AP CS Principles course in the academic year (2025 Goal: 400K), while 21,425 have started Code.org's new Amazon-bankrolled AP CS A course. Estimates peg U.S. public high school enrollment at 15.5M students, annual K-12 public school spending at $16,080 per pupil, and an annual high school student course load at 6-8 credits...

Programming

Rust Growing Fastest, But JavaScript Reigns Supreme (thenewstack.io) 55

"Rust is the fastest-growing programming language, with its developer community doubling in size over the past two years," writes The New Stack, "yet JavaScript remains the most popular language with 25.2 million active developers, according to the results of a recent survey." The 26th edition of SlashData's Developer Nation survey showed that the Rust community doubled its number of users over the past two years — from two million in the first quarter of 2022 to four million in the first quarter of 2024 — and by 33% in the last 12 months alone. The SlashData report covers the first quarter of 2024. "Rust has developed a passionate community that advocates for it as a memory-safe language which can provide great performance, but cybersecurity concerns may lead to an even greater increase," the report said. "The USA and its international partners have made the case in the last six months for adopting memory-safe languages...."

"JavaScript's dominant position is unlikely to change anytime soon, with its developer population increasing by 4M developers over the last 12 months, with a growth rate in line with the global developer population growth," the report said. The strength of the JavaScript community is fueled by the widespread use of the language across all types of development projects, with at least 25% of developers in every project type using it, the report said. "Even in development areas not commonly associated with the language, such as on-device coding for IoT projects, JavaScript still sees considerable adoption," SlashData said.

Also, coming in strong, Python has overtaken Java as the second most popular language, driven by the interest in machine learning and AI. The battle between Python and Java shows Python with 18.2 million developers in Q1 2024 compared to Java's 17.7 million. This comes after Python added more than 2.1 million net new developers to its community over the last 12 months, compared to Java, which increased by only 1.2 million developers... Following behind Java there is a six-million-developer gap to the next largest community, which is C++ with 11.4 million developers, closely trailed by C# with 10.2 million and PHP with 9.8 million. Languages with the smallest communities include Objective-C with 2.7 million developers, Ruby with 2.5 million, and Lua with 1.8 million. Meanwhile, the Go language saw its developer population grow by 10% over the last year. It had previously outpaced the global developer population growth, growing by roughly 57% over the past two years, from three million in Q1 2022 to 4.7 million in Q1 2024.
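As a quick sanity check, the headline growth rates can be recomputed from the raw community sizes quoted above (figures in millions of developers):

```python
def growth_pct(start, end):
    """Percentage growth from a starting to an ending population."""
    return (end - start) / start * 100

go_two_year = growth_pct(3.0, 4.7)    # Go, Q1 2022 -> Q1 2024
rust_two_year = growth_pct(2.0, 4.0)  # Rust doubled, per the report
```

Go's 3M-to-4.7M climb works out to roughly 57% over two years, and Rust's doubling to exactly 100%, matching the report's characterization of both as outpacing overall developer-population growth.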

"TNS analyst Lawrence Hecht has a few different takeaways. He notes that with the exceptions of Rust, Go and JavaScript, the other major programming languages all grew slower than the total developer population, which SlashData says increased 39% over the last two years alone."
AI

Adobe Responds To Vocal Uproar Over New Terms of Service Language (venturebeat.com) 34

Adobe is facing backlash over new Terms of Service language amid its embrace of generative AI in products like Photoshop and customer experience software. The ToS, sent to Creative Cloud Suite users, doesn't mention AI explicitly but includes a reference to machine learning and a clause prohibiting AI model training on Adobe software. From a report: In particular, users have objected to Adobe's claims that it "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience," which many took to be a tacit admission both of surveilling them and of training AI on their content, even confidential content for clients protected under non-disclosure agreements or confidentiality clauses/contracts between said Adobe users and clients.

A spokesperson for Adobe provided the following statement in response to VentureBeat's questions about the new ToS and vocal backlash: "This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view or listen to content that is stored locally on any user's device."

AI

DuckDuckGo Offers 'Anonymous' Access To AI Chatbots Through New Service 7

An anonymous reader quotes a report from Ars Technica: On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account. DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use. "We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days," says DuckDuckGo, "and that none of the chats made on our platform can be used to train or improve the models." However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user's inputs to remote servers for processing over the Internet. Given certain inputs (i.e., "Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill"), a user could still potentially be identified if such an extreme need arose.
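The anonymization model described above, a proxy that strips identifying metadata before forwarding a chat to the model provider, can be sketched as follows. The field names and the `forward` callback are illustrative assumptions, not DuckDuckGo's actual implementation.

```python
# Metadata a proxy would plausibly strip before forwarding a chat
IDENTIFYING_FIELDS = {"ip_address", "user_agent", "account_id", "cookies"}

def anonymize(request):
    """Drop fields that could tie a chat to an individual, keeping
    only what the upstream model needs to answer."""
    return {k: v for k, v in request.items() if k not in IDENTIFYING_FIELDS}

def relay(request, forward):
    # The proxy, not the user, contacts the provider, so the provider
    # sees the proxy's address rather than the user's.
    return forward(anonymize(request))

upstream_saw = {}
def fake_provider(req):
    """Stand-in for the remote model API; records what it received."""
    upstream_saw.update(req)
    return "ok"

relay({"prompt": "hello", "model": "gpt-3.5-turbo",
       "ip_address": "203.0.113.7", "cookies": "session=abc"}, fake_provider)
```

Note the caveat from the article survives the sketch: the prompt text itself still reaches the provider verbatim, so a user who types identifying details into the chat defeats the metadata stripping.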
In regard to hallucination concerns, DuckDuckGo states in its privacy policy: "By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice)."
Social Networks

Israel Reportedly Uses Fake Social Media Accounts To Influence US Lawmakers On Gaza War (nytimes.com) 146

An anonymous reader quotes a report from the New York Times: Israel organized and paid for an influence campaign last year targeting U.S. lawmakers and the American public with pro-Israel messaging, as it aimed to foster support for its actions in the war in Gaza, according to officials involved in the effort and documents related to the operation. The covert campaign was commissioned by Israel's Ministry of Diaspora Affairs, a government body that connects Jews around the world with the State of Israel, four Israeli officials said. The ministry allocated about $2 million to the operation and hired Stoic, a political marketing firm in Tel Aviv, to carry it out, according to the officials and the documents. The campaign began in October and remains active on the platform X. At its peak, it used hundreds of fake accounts that posed as real Americans on X, Facebook and Instagram to post pro-Israel comments. The accounts focused on U.S. lawmakers, particularly ones who are Black and Democrats, such as Representative Hakeem Jeffries, the House minority leader from New York, and Senator Raphael Warnock of Georgia, with posts urging them to continue funding Israel's military.

ChatGPT, the artificial intelligence-powered chatbot, was used to generate many of the posts. The campaign also created three fake English-language news sites featuring pro-Israel articles. The Israeli government's connection to the influence operation, which The New York Times verified with four current and former members of the Ministry of Diaspora Affairs and documents about the campaign, has not previously been reported. FakeReporter, an Israeli misinformation watchdog, identified the effort in March. Last week, Meta, which owns Facebook and Instagram, and OpenAI, which makes ChatGPT, said they had also found and disrupted the operation. The secretive campaign signals the lengths Israel was willing to go to sway American opinion on the war in Gaza.

The Almighty Buck

Online Streaming Services In Canada Must Hand Over 5% of Domestic Revenues (www.cbc.ca) 94

An anonymous reader quotes a report from CBC News: Online streaming services operating in Canada will be required to contribute five percent of their Canadian revenues to support the domestic broadcasting system, the country's telecoms regulator said on Tuesday. The money will be used to boost funding for local and Indigenous broadcasting, officials from the Canadian Radio-television and Telecommunications Commission (CRTC) said in a briefing. "Today's decision will help ensure that online streaming services make meaningful contributions to Canadian and Indigenous content," wrote CRTC chief executive and chair Vicky Eatrides in a statement.

The measure was introduced under the auspices of a law passed last year designed to make sure that companies like Netflix make a more significant contribution to Canadian culture. The government says the legislation will ensure that online streaming services promote Canadian music and stories, and support Canadian jobs. Funding will also be directed to French-language content and content created by official language minority communities, as well as content created by equity-deserving groups and Canadians of diverse backgrounds. The release also said that online streaming services will "have some flexibility" to send their revenues to support Canadian television directly. [...]

The measure, which will start in the 2024-2025 broadcasting year, would raise roughly $200 million annually, CRTC officials said. It will only apply to services that are not already affiliated with Canadian broadcasters. The Canadian Media Producers Association (CMPA) was among 20 screen organizations from around the world that signed a statement in January asking governments to impose stronger regulations on streaming companies operating in local markets. One of the demands was a measure that would force companies profiting from their presence in those markets to contribute financially to the creation of new local content.
"We are disappointed by today's decision and concerned by the negative impact it will have on Canadian consumers. We are assessing the decision in full, but this onerous and inflexible financial levy will be harmful to consumer choice," a spokesperson for Prime Video wrote to CBC News in a statement.
Biotech

Are We Closer to a Cure for Diabetes? (indiatimes.com) 75

"Chinese scientists develop cure for diabetes," reads the headline from the world's second-most widely read English-language newspaper. ("Insulin patient becomes medicine-free in just 3 months.")

The researchers' results were published earlier in May in Cell Discovery, and are now getting some serious scrutiny from the press. The Economic Times cites a University of British Columbia professor's assessment that the study "represents an important advance in the field of cell therapy for diabetes," in an article calling it a "breakthrough" that "marks a significant advancement in cell therapy for diabetes." Chinese scientists have successfully cured a patient's diabetes using a groundbreaking cell therapy... According to a South China Morning Post report, the patient underwent the cell transplant in July 2021. Remarkably, within eleven weeks, he no longer required external insulin. Over the next year, he gradually reduced and ultimately stopped taking oral medication for blood sugar control. "Follow-up examinations showed that the patient's pancreatic islet function was effectively restored," said Yin, one of the lead researchers. The patient has now been insulin-free for 33 months... The new therapy involves programming the patient's peripheral blood mononuclear cells, transforming them into "seed cells" to recreate pancreatic islet tissue in an artificial environment.
Their article calls it "a significant medical milestone" — noting that 140 million people in China have diabetes (according to figures from the International Diabetes Federation).

Thanks to long-time Slashdot reader AmiMoJo for sharing the news.
AI

Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic (arstechnica.com) 100

Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an exposé about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit."

Journalists also reacted to news of the deals through the publications themselves. On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI.

Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled, "This article is OpenAI training data," in which he expresses apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and the classic AI thought experiment of Bostrom's "paperclip maximizer," cautioning that the single-minded focus on market share and profits could ultimately destroy the ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.

Games

Twitch Terminates All Members of Its Safety Advisory Council (cnbc.com) 39

According to CNBC, Twitch is expected to terminate all members of its Safety Advisory Council on Friday. "The council is a resource of nine industry experts, streamers and moderators who consulted on trust and safety issues related to children on Twitch, nudity, banned users and more," notes the report. From the report: The Amazon-owned game-streaming company formed its Safety Advisory Council in May 2020 to "enhance Twitch's approach to issues of trust and safety" on the platform and guide decisions, according to a company webpage. The council advised Twitch on "drafting new policies and policy updates," "developing products and features to improve safety and moderation" and "protecting the interests of marginalized groups," per the webpage.

For four years, the group advised the company on "hate raids" on marginalized groups and nudity policies, among other things. But on the afternoon of May 6, council members were called into a meeting after receiving an email that all existing contracts would conclude on May 31, 2024, and that they would not receive payment for the second half of 2024. The council was not made up of Twitch employees, but rather advisors, including Dr. Sameer Hinduja, co-director of the Cyberbullying Research Center; Emma Llansó, director of the Center for Democracy and Technology's Free Expression Project; and Dr. T.L. Taylor, co-founder and director of AnyKey, which advocates for diversity and inclusion in gaming.

"Looking ahead, the Safety Advisory Council will primarily be made up of individuals who serve as Twitch Ambassadors," the email, viewed by CNBC, stated. In a formal notice in the same email, the company wrote, "Pursuant to section 5(a) of the SAC advisor Agreement, we are writing to provide you with notice of termination... This means that the second 2024 payment won't be issued." Twitch Ambassadors are users of the streaming platform "chosen specifically because of the positive impact they've contributed to the Twitch community," according to the company's website. Payment depended on the length of the contract, but council members were paid between $10,000 and $20,000 per 12-month period, according to a source familiar with the contracts.

China

How China's 1980s PC Industry Hacked Dot-Matrix Printers (fastcompany.com) 99

An anonymous reader shares a report: Commercial dot-matrix printing was yet another arena in which the needs of Chinese character I/O were not accounted for. This is witnessed most clearly in the then-dominant configuration of printer heads -- specifically the 9-pin printer heads found in mass-manufactured dot-matrix printers during the 1970s. Using nine pins, these early dot-matrix printers were able to produce low-resolution Latin alphabet bitmaps with just one pass of the printer head. The choice of nine pins, in other words, was "tuned" to the needs of Latin alphabetic script.

These same printer heads were incapable of printing low-resolution Chinese character bitmaps using anything less than two full passes of the printer head, one below the other. Two-pass printing dramatically increased the time needed to print Chinese as compared to English, however, and introduced graphical inaccuracies, whether due to inconsistencies in the advancement of the platen or uneven ink registration (that is, characters with differing ink densities on their upper and lower halves).
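The arithmetic behind the two-pass requirement is simple to sketch. The snippet below assumes a 16-dot-tall Chinese character bitmap, a common size for character fonts of that era (the specific bitmap dimensions are not stated in the article):

```python
import math

# A 9-pin head lays down a band at most 9 dots tall per pass of the carriage.
PINS_PER_PASS = 9

def passes_needed(glyph_height_dots: int) -> int:
    """Number of full head passes required to print a glyph of the given height."""
    return math.ceil(glyph_height_dots / PINS_PER_PASS)

print(passes_needed(7))   # 7-dot-tall Latin glyph (e.g. a 5x7 font): 1 pass
print(passes_needed(16))  # 16-dot-tall Chinese character bitmap: 2 passes
```

Any glyph taller than nine dots forces a second pass, with the registration and platen-advance problems described above.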

Compounding these problems, Chinese characters printed in this way were twice the height of English words. This created comically distorted printouts in which English words appeared austere and economical, while Chinese characters appeared grotesquely oversized. Not only did this waste paper, but it left Chinese-language documents looking something like large-print children's books. When consumers in the Chinese-Japanese-Korean (CJK) world began to import Western-manufactured dot-matrix printers, then, they faced yet another facet of Latin alphabetic bias.

Google

Google's AI Feeds People Answers From The Onion (avclub.com) 125

An anonymous reader shares a report: As denizens of the Internet, we have all often seen a news item so ridiculous it caused us to think, "This seems like an Onion headline." But as real human beings, most of us have the ability to discern between reality and satire. Unfortunately, Google's newly launched "AI Overview" lacks that crucial ability. The feature, which launched less than two weeks ago (with no way for users to opt out), provides answers to certain queries at the top of the page above any other online resources. The artificial intelligence creates its answers from knowledge it has synthesized from around the web, which would be great, except not everything on the Internet is true or accurate. Obviously.

Ben Collins, one of the new owners of our former sister site, pointed out some of AI Overview's most egregious errors on his social media. Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day." That language was of course pulled almost word-for-word from a 2021 Onion headline. Another search, "what color highlighters do the CIA use," prompted Overview to answer "black," which was an Onion joke from 2005.

AI

Mojo, Bend, and the Rise of AI-First Programming Languages (venturebeat.com) 26

"While general-purpose languages like Python, C++, and Java remain popular in AI development," writes VentureBeat, "the resurgence of AI-first languages signifies a recognition that AI's unique demands require specialized languages tailored to the domain's specific needs... designed from the ground up to address the specific needs of AI development." Bend, created by Higher Order Company, aims to provide a flexible and intuitive programming model for AI, with features like automatic differentiation and seamless integration with popular AI frameworks. Mojo, developed by Modular AI, focuses on high performance, scalability, and ease of use for building and deploying AI applications. Swift for TensorFlow, an extension of the Swift programming language, combines the high-level syntax and ease of use of Swift with the power of TensorFlow's machine learning capabilities...

At the heart of Mojo's design is its focus on seamless integration with AI hardware, such as GPUs running CUDA and other accelerators. Mojo enables developers to harness the full potential of specialized AI hardware without getting bogged down in low-level details. One of Mojo's key advantages is its interoperability with the existing Python ecosystem. Unlike languages like Rust, Zig or Nim, which can have steep learning curves, Mojo allows developers to write code that seamlessly integrates with Python libraries and frameworks. Developers can continue to use their favorite Python tools and packages while benefiting from Mojo's performance enhancements... It supports static typing, which can help catch errors early in development and enable more efficient compilation... Mojo also incorporates an ownership system and borrow checker similar to Rust, ensuring memory safety and preventing common programming errors. Additionally, Mojo offers memory management with pointers, giving developers fine-grained control over memory allocation and deallocation...
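Mojo's syntax is its own, but the early-error benefit it gets from static typing can be sketched with plain Python type annotations, which a static checker flags before the program ever runs (the `dot` function here is purely illustrative, not from the article):

```python
def dot(a: list[float], b: list[float]) -> float:
    """Typed dot product: the annotations let a static checker catch
    type mismatches (e.g. passing strings) before run time."""
    if len(a) != len(b):
        raise ValueError("length mismatch")
    return sum(x * y for x, y in zip(a, b))

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

In Python the annotations are optional hints; in a statically typed language the compiler enforces them, which is what enables both the early error detection and the more efficient compilation described above.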

Mojo is conceptually lower-level than some other emerging AI languages like Bend, which compiles modern high-level language features to native multithreading on Apple Silicon or NVIDIA GPUs. Mojo offers fine-grained control over parallelism, making it particularly well-suited for hand-coding modern neural network accelerations. By providing developers with direct control over the mapping of computations onto the hardware, Mojo enables the creation of highly optimized AI implementations.

According to Mojo's creator, Modular, the language has already garnered an impressive user base of over 175,000 developers and 50,000 organizations since it was made generally available last August. Despite its impressive performance and potential, Mojo's adoption might have stalled initially due to its proprietary status. However, Modular recently decided to open-source Mojo's core components under a customized version of the Apache 2 license. This move will likely accelerate Mojo's adoption and foster a more vibrant ecosystem of collaboration and innovation, similar to how open source has been a key factor in the success of languages like Python.

Developers can now explore Mojo's inner workings, contribute to its development, and learn from its implementation. This collaborative approach will likely lead to faster bug fixes, performance improvements and the addition of new features, ultimately making Mojo more versatile and powerful.

The article also notes other languages "trying to become the go-to choice for AI development" by providing high-performance execution on parallel hardware. Unlike low-level beasts like CUDA and Metal, Bend feels more like Python and Haskell, offering fast object allocations, higher-order functions with full closure support, unrestricted recursion and even continuations. It runs on massively parallel hardware like GPUs, delivering near-linear speedup based on core count with zero explicit parallel annotations — no thread spawning, no locks, mutexes or atomics. Powered by the HVM2 runtime, Bend exploits parallelism wherever it can, making it the Swiss Army knife for AI — a tool for every occasion...
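Bend's runtime extracts that parallelism from ordinary recursion automatically. To appreciate what it automates, here is the same divide-and-conquer pattern in Python, where the fan-out must be spelled out by hand (`tree_sum` is a hypothetical example, not from the article):

```python
from concurrent.futures import ThreadPoolExecutor

def tree_sum(xs: list) -> int:
    """Divide-and-conquer sum: the two halves are independent subproblems,
    the kind of structure a runtime like Bend's HVM2 schedules across
    cores with no annotations at all."""
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    # In Python the parallel fan-out is explicit; in Bend the plain
    # recursive definition alone is enough.
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(tree_sum, xs[:mid])
        right = pool.submit(tree_sum, xs[mid:])
        return left.result() + right.result()

print(tree_sum(list(range(16))))  # 120
```

Threads here only illustrate the structure: Python's GIL means real speedup would require processes, whereas Bend claims to run such recursive definitions directly on GPU cores.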

The resurgence of AI-focused programming languages like Mojo, Bend, Swift for TensorFlow, JAX and others marks the beginning of a new era in AI development. As the demand for more efficient, expressive, and hardware-optimized tools grows, we expect to see a proliferation of languages and frameworks that cater specifically to the unique needs of AI. These languages will leverage modern programming paradigms, strong type systems, and deep integration with specialized hardware to enable developers to build more sophisticated AI applications with unprecedented performance.

The rise of AI-focused languages will likely spur a new wave of innovation in the interplay between AI, language design and hardware development. As language designers work closely with AI researchers and hardware vendors to optimize performance and expressiveness, we will likely see the emergence of novel architectures and accelerators designed with these languages and AI workloads in mind. This close relationship between AI, language, and hardware will be crucial in unlocking the full potential of artificial intelligence, enabling breakthroughs in fields like autonomous systems, natural language processing, computer vision, and more.

The future of AI development and computing itself are being reshaped by the languages and tools we create today.

In 2017 Modular AI's founder Chris Lattner (creator of Swift and LLVM) answered questions from Slashdot readers.

Slashdot Top Deals