Open Source

A New White House Report Embraces Open-Source AI 15

An anonymous reader quotes a report from ZDNet: According to a new statement, the White House realizes open source is key to artificial intelligence (AI) development -- much like many businesses using the technology. On Tuesday, the National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI while emphasizing the need for vigilant risk monitoring. The report recommends that the US continue to support AI openness while working on new capabilities to monitor potential AI risks but refrain from restricting the availability of open model weights.
Movies

Disney's First R-Rated Movie Opening Sets an All-Time Record: 'Deadpool & Wolverine' (hollywoodreporter.com) 70

No R-rated film has ever earned as much in its opening weekend, reports the Hollywood Reporter — a whopping $205 million. (The previous record was $133.7 million, set in 2016 by the original film Deadpool...)

It's also the very first R-rated film ever released by Disney... [Deadpool actor Ryan] Reynolds has his own theory about its success. "Disney probably doesn't want me to frame it this way, but I've always thought of Deadpool & Wolverine as the first four-quadrant, R-rated film," Reynolds tells the Hollywood Reporter. "Yes, it's rated R, but we set out to make a movie with enough laughs, action and heart to appeal to everyone, whether you're a comic book movie fan or not."

There's a reason Disney and others may bristle at labeling it a four-quadrant film, a term generally reserved for movies that play equally well to males and females both over and under 25. After all, it is perhaps the most violent and bloody Deadpool movie yet. Still, here's evidence to back up Reynolds' theory that it's playing to a far broader audience than the usual Marvel Cinematic Universe movie, even if it's skewing male by anywhere from 60 to 63 percent. So far, 13.6 million people have bought tickets to see it, on par with last year's Barbie, which was rated PG-13, according to Steve Buck's leading research firm EntTelligence. That's the most foot traffic ever for an R-rated movie....

"Once thought of as a sure-fire way to limit potential box office, the R rating, when properly applied, can be the key to unlocking massive box office, and this has proven to be the secret sauce for the Deadpool franchise," says chief Comscore box office analyst Paul Dergarabedian. "The creative freedom afforded by the less restrictive rating has enabled filmmakers to push the envelope and, particularly in the case of Deadpool & Wolverine, can deliver the kind of edgy, intense, profanity-filled comedy action that modern audiences are fired up to see on the big screen...."

It's also the biggest July opening of all time, the biggest opening of 2024 so far and Marvel Studios' biggest launch since Spider-Man: No Way Home in December 2021.

ScreenRant notes that Deadpool & Wolverine has already surpassed the entire global box office for The Marvels in just three days. It's the biggest debut for a film since James Cameron's Avatar: The Way of Water in December 2022 (according to the Hollywood Reporter). And they add that though the figures haven't been adjusted for inflation, it's still the eighth-biggest box office opening of all time.

But at the end of the day, it's just people enjoying a movie together. "Well, I'm not saying that other people should do this, but my 9-year-old watched the movie with me and my mom, who's in her late 70s," Reynolds reportedly told the New York Times, "and it was just one of the best moments of this whole experience for me. Both of them were laughing their guts out, were feeling the emotion where I most desperately hoped people would be."
The Almighty Buck

Adobe Exec: Early Termination Fees Are 'Like Heroin' (theverge.com) 24

Longtime Slashdot reader sandbagger shares a report from The Verge: Early termination fees are "a bit like heroin for Adobe," according to an Adobe executive quoted in the FTC's newly unredacted complaint against the company for allegedly hiding fees and making it too hard to cancel Creative Cloud. "There is absolutely no way to kill off ETF or talk about it more obviously" in the order flow without "taking a big business hit," this executive said. That's the big reveal in the unredacted complaint, which also contains previously unseen allegations that Adobe was internally aware of studies showing its order and cancellation flows were too complicated and customers were unhappy with surprise early termination fees. In response to the quote, Adobe's general counsel and chief trust officer, Dana Rao, said that he was "disappointed in the way they're continuing to take comments out of context from non-executive employees from years ago to make their case."

Rao added that the person quoted was not on the leadership team that reports to CEO Shantanu Narayen and that whether to charge early termination fees would "not be their decision." The early termination fees in the FTC case represent "less than half a percent of our annual revenue," Rao told The Verge. "It doesn't drive our business, it doesn't drive our business decisions."
Education

Should Kids Still Learn to Code in the Age of AI? (yahoo.com) 170

The Computer Science Teachers Association conference kicked off Tuesday in Las Vegas, writes long-time Slashdot reader theodp.

And the "TeachAI" education initiative teamed with the Computer Science Teachers Association to release three briefs "arguing that K-12 computer science education is more important than ever in an age of AI." From the press release: "As AI becomes increasingly present in the classroom, educators are understandably concerned about how it might disrupt the teaching of core CS skills like programming. With these briefs, TeachAI and CSTA hope to reinforce the idea that learning to program is the cornerstone of computational thinking and an important gateway to the problem-solving, critical thinking, and creative thinking skills necessary to thrive in today's digitally driven world. The rise of AI only makes CS education more important."

To help drive home the point to educators, the 39-page Guidance on the Future of Computer Science Education in an Age of AI (penned by five authors from the nonprofits CSTA and Code.org) includes a pretty grim comic entitled Learn to Program or Follow Commands. In it, two high school students who scoff at the idea of having to learn to code and instead use GenAI to create their Python apps wind up, several years later, stuck in miserable warehouse jobs where they're ordered about by an AI robot.

"The rise of AI only makes CS education more important," according to the group's press release, "with early research showing that people with a greater grasp of underlying computing concepts are able to use AI tools more effectively than those without." A survey by the group also found that 80% of teachers "agree that core concepts in CS education should be updated to emphasize topics that better support learning about AI."

But I'd be curious to hear what Slashdot's readers think. Share your thoughts and opinions in the comments.

Should children still be taught to code in the age of AI?
Movies

Founder of Fandango Dies After Plunge From Manhattan Hotel (nytimes.com) 39

J. Michael Cline, the co-founder of Fandango, died by suicide this week after falling from the twentieth floor of a Manhattan hotel. The New York Times reports: Mr. Cline, who was 64, co-founded Fandango in 2000 and left the company in 2011, according to his LinkedIn profile. The company -- familiar to many from its splashy logo, an orange "F" in the shape of a ticket stub -- was later acquired by Comcast and is currently owned by NBCUniversal and Warner Bros. For years, the company dominated movie-ticket sales, handling ticketing for several major theater chains and making money by charging a processing fee for online ticket sales and by selling advertising on its site.

At the time of its launch, Mr. Cline offered a pithy explanation for the company's name: "A Fandango is fast and fun," he told Variety. "Fandango is the perfect match to a service designed to make going to the movies easier and more enjoyable than ever before." Art Levitt, the co-founder and former chief operating officer and president of Fandango, remembered Mr. Cline as brilliant, creative and loyal, sticking it out even in "tough" times.
TechCrunch provides additional information about Mr. Cline: He left the company in 2011, roughly four years after the company was acquired by Comcast. Some early investors in the online ticketing service were General Atlantic and TCV. Cline was also managing partner of Accretive, a venture capital firm he founded in 1999. He built startups throughout his career, including R1 RCM, Accumen, Accolade, Everspring, Dresr and Insureon. Starting in 2018, Cline served as the executive chairman at the venture firm Juxtapose, which invests in technology businesses. During his time there, Cline enjoyed investing in healthcare companies, according to his staff page. Some of Juxtapose's portfolio companies include Tend, Nectar and Great Jones.
Science

Night Owls' Cognitive Function 'Superior' To Early Risers, Study Suggests (theguardian.com) 85

The idea that night owls who don't go to bed until the early hours struggle to get anything done during the day may have to be revised. From a report: It turns out that staying up late could be good for our brain power as research suggests that people who identify as night owls could be sharper than those who go to bed early. Researchers led by academics at Imperial College London studied data from the UK Biobank study on more than 26,000 people who had completed intelligence, reasoning, reaction time and memory tests.

They then examined how participants' sleep duration, quality, and chronotype (which determines what time of day we feel most alert and productive) affected brain performance. They found that those who stay up late and those classed as "intermediate" had "superior cognitive function," while morning larks had the lowest scores. Going to bed late is strongly associated with creative types.

Power

Whataburger App Becomes Unlikely Power Outage Map After Houston Hurricane (techcrunch.com) 104

An anonymous reader quotes a report from TechCrunch: Fast-food chain Whataburger's app has gone viral in the wake of Hurricane Beryl, which left around 1.8 million utility customers in Houston, Texas without power. Hundreds of thousands of those people may remain without power for days as Houston anticipates a heat wave, with temperatures climbing into the mid-90s. Amid frustrations with the local utility company CenterPoint Energy, which doesn't offer an app, some Houstonians got creative with their attempts to track the power outages. They turned to the Whataburger app instead.

Whataburger is a San Antonio-based fast-food chain with 127 stores in the Houston area, according to Newsweek. On the Whataburger app, users can see a map of Whataburger locations, with an orange logo indicating a store is open, and a grey logo meaning it's closed. Since CenterPoint Energy doesn't have an online map of outages, an X user with the screen name BBQBryan found that the map of which Whataburger stores are open could be a useful tool for seeing where there's power. This viral moment seems to have boosted Whataburger's download numbers. In the iOS App Store, Whataburger is currently ranked 16th in the food and drink category. Just three weeks ago, it was ranked 40th.
"The Whataburger app works as a power outage tracker, handy since the electric company doesn't show a map," BBQBryan wrote in a post that now has over 22,000 likes and 6.9 million impressions.

"Well there's a use for our app we didn't think of!" the Whataburger X account replied. "We hope you and everyone else are okay!"
Graphics

Affinity Tempts Adobe Users with 6-Month Free Trial of Creative Suite (theverge.com) 39

Serif, the design software developer behind Affinity, has introduced a six-month free trial for its creative suite, offering Affinity Photo, Designer, and Publisher on Mac, Windows PC, and iPad. This move, along with a 50% discount on perpetual licenses, aims to attract Adobe users and reassure them of Affinity's commitment to its one-time purchase pricing model despite its recent acquisition by Canva. The Verge reports: Affinity uses a one-time purchase pricing model that has earned it a loyal fanbase among creatives who are sick of paying for recurring subscriptions. Prices start at $69.99 for Affinity's individual desktop apps or $164.99 for the entire suite, with a separate deal currently offering customers 50 percent off all perpetual licenses.

This discount, alongside the six-month free trial, is likely geared toward soothing concerns that Affinity would change its pricing model after being acquired by Canva earlier this year. "We're saying 'try everything and pay nothing' because we understand making a change can be a big step, particularly for busy professionals," said Affinity CEO Ashley Hewson. "Anyone who takes the trial is under absolutely no obligation to buy."

Businesses

Paramount Agrees To Merge With Skydance In $8 Billion Deal, Ending Redstone Era (cnbc.com) 9

Paramount Global has agreed to merge with Skydance in a significant deal that will see the Redstone family relinquish control of the storied movie studio and media company. The merger, valued at over $8 billion, involves a consortium including RedBird Capital Partners and KKR, and is expected to close in the third quarter of 2025, subject to regulatory approval. CNBC reports: The deal gives National Amusements an enterprise value of $2.4 billion, which includes $1.75 billion in equity. Paramount's class A shareholders will receive $23 apiece in cash or stock, while class B stockholders will receive $15 per share, equating to a cash consideration totaling $4.5 billion available to public shareholders. As part of the deal Skydance will also inject $1.5 billion of capital into Paramount's balance sheet. "It's a new Paramount; it's not just a catchphrase," said RedBird's Jeff Shell, former CEO of NBCUniversal, on a call with investors Monday. "We think it's going to be a new day for these combined assets."

Skydance founder David Ellison will lead the combined company as CEO, while Shell will serve as president. The merger is subject to regulatory approval and expected to close in the third quarter of 2025. It also includes a 45-day "go-shop period," in which the Paramount special committee can solicit other offers. A completed Skydance merger would mark a major shift for the ownership of Paramount, as well as for Hollywood as a whole. The Redstone family has long controlled the movie studio -- known for films such as "The Godfather," "Top Gun" and "Forrest Gump" -- as well as the CBS broadcast network and cable TV networks including MTV and Nickelodeon. Now, Ellison, 41, son of Oracle founder and billionaire Larry Ellison, will be at the helm of a major movie studio and among Hollywood's elite. "It's been a long time since a creative executive ran one of the big Hollywood companies," Shell said on Monday's call. "And I think it's really important when creative is the core."

Nintendo

Nintendo Has No Plans to Use Generative AI in Its Games, Company President Says (cnet.com) 18

Mario and Luigi aren't jumping on the AI train. From a report: In a recent Q&A with investors, Nintendo President Shuntaro Furukawa addressed the issue. Though he said generative AI can be creative, Furukawa told his audience that the company isn't planning to use the technology in its games. "In the game industry, AI-like technology has long been used to control enemy character movements, so game development and AI technology have always been closely related," Furukawa said, according to TweakTown. "Generative AI, which has been a hot topic in recent years, can be more creative, but we also recognize that it has issues with intellectual property rights. We have decades of know-how in creating optimal gaming experiences for our customers, and while we remain flexible in responding to technological developments, we hope to continue to deliver value that is unique to us and cannot be achieved through technology alone."
AI

OpenAI's 'Media Manager' Mocked, Amid Accusations of Robbing Creative Professionals (yahoo.com) 63

"Amid the hype surrounding Apple's new deal with OpenAI, one issue has been largely papered over," argues the Executive Director of America's writers' advocacy group, the Authors Guild.

OpenAI's foundational models "are, and have always been, built atop the theft of creative professionals' work." [L]ast month the company quietly announced Media Manager, scheduled for release in 2025. A tool purportedly designed to allow creators and content owners to control how their work is used, Media Manager is really a shameless attempt to evade responsibility for the theft of artists' intellectual property that OpenAI is already profiting from.

OpenAI says this tool would allow creators to identify their work and choose whether to exclude it from AI training processes. But this does nothing to address the fact that the company built its foundational models using authors' and other creators' works without consent, compensation or control over how OpenAI users will be able to imitate the artists' styles to create new works. As it's described, Media Manager puts the burden on creators to protect their work and fails to address the company's past legal and ethical transgressions. This overture is like having your valuables stolen from your home and then hearing the thief say, "Don't worry, I'll give you a chance to opt out of future burglaries ... next year...."

AI companies often argue that it would be impossible for them to license all the content that they need and that doing so would bring progress to a grinding halt. This is simply untrue. OpenAI has signed a succession of licensing agreements with publishers large and small. While the exact terms of these agreements are rarely released to the public, the compensation estimates pale in comparison with the vast outlays for computing power and energy that the company readily spends. Payments to authors would have minimal effects on AI companies' war chests, but receiving royalties for AI training use would be a meaningful new revenue stream for a profession that's already suffering...

We cannot trust tech companies that swear their innovations are so important that they do not need to pay for one of the main ingredients — other people's creative works. The "better future" we are being sold by OpenAI and others is, in fact, a dystopia. It's time for creative professionals to stand together, demand what we are owed and determine our own futures.

The Authors Guild (and 17 other plaintiffs) are now in an ongoing lawsuit against OpenAI and Microsoft. The Guild's executive director also notes "a class action filed by visual artists against Stability AI, Runway AI, Midjourney and Deviant Art, a lawsuit by music publishers against Anthropic for infringement of song lyrics, and suits in the U.S. and U.K. brought by Getty Images against Stability AI for copyright infringement of photographs."

They conclude that "The best chance for the wider community of artists is to band together."
AI

OpenAI CTO: AI Could Kill Some Creative Jobs That Maybe Shouldn't Exist Anyway (pcmag.com) 88

OpenAI CTO Mira Murati isn't worried about how AI could hurt some creative jobs, suggesting during a talk that some jobs were maybe always a bit replaceable anyway. From a report: "I think it's really going to be a collaborative tool, especially in the creative spaces," Murati told Dartmouth Trustee Jeffrey Blackburn during a conversation about AI hosted at the university's engineering department. "Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place," the CTO said of AI's role in the workplace. "I really believe that using it as a tool for education, [and] creativity, will expand our intelligence."
AI

Clearview AI Used Your Face. Now You May Get a Stake in the Company. (nytimes.com) 40

A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database. From a report: Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company's existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents. The company and those who sued it were "trapped together on a sinking ship," lawyers for the plaintiffs wrote in a court filing proposing the settlement. "These realities led the sides to seek a creative solution by obtaining for the class a percentage of the value Clearview could achieve in the future," added the lawyers, from Loevy + Loevy in Chicago.

Anyone in the United States who has a photo of himself or herself posted publicly online -- so almost everybody -- could be considered a member of the class. The settlement would collectively give the members a 23 percent stake in Clearview AI, which is valued at $225 million, according to court filings. (Twenty-three percent of the company's current value would be about $52 million.) If the company goes public or is acquired, those who had submitted a claim form would get a cut of the proceeds. Alternatively, the class could sell its stake. Or the class could opt, after two years, to collect 17 percent of Clearview's revenue, which it would be required to set aside.
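As a quick sanity check on the figures reported above, the class's stake at the company's stated valuation can be worked out directly (a minimal sketch; the $225 million valuation and 23 percent stake are the numbers from the court filings):

```python
# Back-of-the-envelope check of the settlement figures reported in the filings.
valuation = 225_000_000   # Clearview AI's reported valuation, in USD
stake_pct = 0.23          # equity stake proposed for the class

stake_value = valuation * stake_pct
print(f"Class stake at current valuation: ${stake_value:,.0f}")
# prints "Class stake at current valuation: $51,750,000" -- about the $52 million cited above
```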

AI

Adobe Says It Won't Train AI On Customers' Work In Overhauled ToS (theverge.com) 35

In a new blog post, Adobe said it has updated its terms of service to clarify that it won't train AI on customers' work. The move comes after a week of backlash from users who feared that an update to Adobe's ToS would permit such actions. The clause was included in ToS sent to Creative Cloud Suite users, which claimed that Adobe "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience." The Verge reports: The new terms of service are expected to roll out on June 18th and aim to better clarify what Adobe is permitted to do with its customers' work, according to Adobe's president of digital media, David Wadhwani. "We have never trained generative AI on our customer's content, we have never taken ownership of a customer's work, and we have never allowed access to customer content beyond what's legally required," Wadhwani said to The Verge. [...]

Adobe's chief product officer, Scott Belsky, acknowledged that the wording was "unclear" and that "trust and transparency couldn't be more crucial these days." Wadhwani says that the language used within Adobe's TOS was never intended to permit AI training on customers' work. "In retrospect, we should have modernized and clarified the terms of service sooner," Wadhwani says. "And we should have more proactively narrowed the terms to match what we actually do, and better explained what our legal requirements are."

"We feel very, very good about the process," Wadhwani said in regards to content moderation surrounding Adobe stock and Firefly training data but acknowledged it's "never going to be perfect." Wadhwani says that Adobe can remove content that violates its policies from Firefly's training data and that customers can opt out of automated systems designed to improve the company's service. Adobe said in its blog post that it recognizes "trust must be earned" and is taking on feedback to discuss the new changes. Greater transparency is a welcome change, but it's likely going to take some time to convince scorned creatives that it doesn't hold any ill intent. "We are determined to be a trusted partner for creators in the era ahead. We will work tirelessly to make it so."

AI

Craig Federighi Says Apple Hopes To Add Google Gemini, Other AI Models To iOS 18 7

Yesterday, Apple made waves in the media when it revealed a partnership with OpenAI during its annual WWDC keynote. That announcement centered on Apple's decision to bring ChatGPT natively to iOS 18, including Siri and other first-party apps. During a follow-up interview on Monday, Apple executives Craig Federighi and John Giannandrea hinted at a possible agreement with Google Gemini and other AI chatbots in the future. 9to5Mac reports: Moderated by iJustine, the interview was held in Steve Jobs Theater this afternoon, featuring a discussion with John Giannandrea, Apple's Senior Vice President of Machine Learning and AI Strategy, and Craig Federighi, Senior Vice President of Software Engineering. During the interview, Federighi specifically referenced Apple's hopes to eventually let users choose between different models to use with Apple Intelligence.

While ChatGPT from OpenAI is the only option right now, Federighi suggested that Google Gemini could come as an option down the line: "We think ultimately people are going to have a preference perhaps for certain models that they want to use, maybe one that's great for creative writing or one that they prefer for coding. And so we want to enable users ultimately to bring a model of their choice. And so we may look forward to doing integrations with different models like Google Gemini in the future. I mean, nothing to announce right now, but that's our direction." The decision to focus on ChatGPT at the start was because Apple wanted to "start with the best," according to Federighi.
AI

Adobe Responds To Vocal Uproar Over New Terms of Service Language (venturebeat.com) 34

Adobe is facing backlash over new Terms of Service language amid its embrace of generative AI in products like Photoshop and customer experience software. The ToS, sent to Creative Cloud Suite users, doesn't mention AI explicitly but includes a reference to machine learning and a clause prohibiting AI model training on Adobe software. From a report: In particular, users have objected to Adobe's claims that it "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience," which many took to be a tacit admission both of surveilling them and of training AI on their content, including confidential client work protected under non-disclosure agreements or confidentiality contracts between Adobe users and their clients.

A spokesperson for Adobe provided the following statement in response to VentureBeat's questions about the new ToS and vocal backlash: "This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view or listen to content that is stored locally on any user's device."

AI

Adobe Scolded For Selling 'Ansel Adams-Style' Images Generated By AI (theverge.com) 89

The Ansel Adams estate said it was "officially on our last nerve" after Adobe was caught selling AI-generated images imitating the late photographer's work. The Verge reports: While Adobe permits AI-generated images to be hosted and sold on its stock image platform, users are required to hold the appropriate rights or ownership over the content they upload. Adobe Stock's Contributor Terms specifically prohibits content "created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist." Adobe responded to the callout, saying it had removed the offending content and had privately messaged the Adams estate to get in touch directly in the future. The Adams estate, however, said it had contacted Adobe directly multiple times since August 2023.

"Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community, we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists' estates to continuously police our IP on your platform, on your terms," said the Adams estate on Threads. "It's past time to stop wasting resources that don't belong to you."

Adobe Stock Vice President Matthew Smith previously told The Verge that the company generally moderates all "crowdsourced" Adobe Stock assets before they are made available to customers, employing a "variety" of methods that include "an experienced team of moderators who review submissions." As of January 2024, Smith said the strongest action the company can take to enforce its platform rules is to block Adobe Stock users who violate them. Bassil Elkadi, Adobe's Director of Communications and Public Relations, told The Verge that Adobe is "actively in touch with Ansel Adams on this matter," and that "appropriate steps were taken given the user violated Stock terms." The Adams estate has since thanked Adobe for removing the images, and said that it expects "it will stick this time."
"We don't have a problem with anyone taking inspiration from Ansel's photography," said the Adams estate. "But we strenuously object to the unauthorized use of his name to sell products of any kind, including digital products, and this includes AI-generated output -- regardless of whether his name has been used on the input side, or whether a given model has been trained on his work."
AI

CEO of Zoom Wants AI Clones in Meetings (theverge.com) 95

Zoom's CEO Eric Yuan predicts that AI will significantly transform the workplace, potentially ushering in a four-day workweek, he told The Verge in an interview. Yuan said Zoom is transitioning from a videoconferencing platform to a comprehensive collaboration suite called Zoom Workplace. He believes AI will automate routine tasks such as attending meetings, reading emails, and making phone calls, enabling employees to dedicate time to more creative and meaningful work. The Verge adds:

The Verge: I'm asking you which meetings do you look at and think you would hand off?

Yuan: I started with the problem first, right? And last but not least, after the meeting is over, let's say I'm very busy and missed the meeting. I really don't understand what happened. That's one thing. Another thing for a very important meeting I missed, given I'm the CEO, they're probably going to postpone the meeting. The reason why is I probably need to make a decision. Given that I'm not there, they cannot move forward, so they have to reschedule. You look at all those problems. Let's assume AI is there. AI can understand my entire calendar, understand the context. Say you and I have a meeting -- just one click, and within five seconds, AI has already scheduled a meeting.

At the same time, every morning I wake up, an AI will tell me, "Eric, you have five meetings scheduled today. You do not need to join four of the five. You only need to join one. You can send a digital version of yourself." For the one meeting I join, after the meeting is over, I can get all the summary and send it to the people who couldn't make it. I can make a better decision. Again, I can leverage the AI as my assistant and give me all kinds of input, just more than myself. That's the vision.

The Internet

People With Commonly Autocorrected Names Call For Tech Firms To Fix Problem (theguardian.com) 103

An anonymous reader quotes a report from The Guardian: People whose names get mangled by autocorrect have urged technology companies to fix the problem faster, with one person whose name gets switched to "Satan" saying: "I am tired of it." People with Irish, Indian and Welsh names are among those calling for improvements to the systems that operate on phones and computers as part of the "I am not a typo" campaign. "It is important that technology becomes more inclusive," said Savan-Chandni Gandecha, 34, a British Indian content creator whose name, which means monsoon moonlight, has been autocorrected to Satan. "My name has also been corrected to Savant," he said. "It is sometimes corrected to Savan, or the hyphen is not accepted by online forms and that irks me. Even in India my name gets corrected to 'Sawan', and it's not just an English issue. It's a multi-language thing."

The campaign has estimated that four out of 10 names of babies born in England and Wales in 2021 were deemed "wrong" or "not accepted" when tested on Microsoft's English dictionary. Dhruti Shah, a journalist, has backed the campaign after seeing her name autocorrected to "Dirty" and "Dorito". She said: "My first name isn't even that long -- only six characters -- but yet when it comes up as an error or it's mangled and considered an unknown entity, it's like saying that it's not just your name that's wrong, but you are." The campaign group -- established by a group of people working in the creative industries in London -- wrote an open letter to technology companies, which pointed out that between 2017 and 2021, 2,328 people named Esmae were born, compared with 36 Nigels. Esmae gets autocorrected to Admar, while Nigel is unchanged. "There are so many diverse names in the global majority but autocorrect is western- and white-focused," said Gandecha.
Rashmi Dyal-Chand, a professor at Northeastern University in the US whose name is sometimes corrected to Sashimi, is supporting the latest campaign and said: "For people with names like mine, autocorrect is not convenient and helpful. It is unhelpful. And yes -- it is harmful."

"We all increasingly rely on smartphones, tablets, word processors, and apps that use autocorrect. Yet autocorrect incorporates a set of defaults -- including dictionaries -- that help some of its users to communicate seamlessly at the expense of others who cannot."
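The letter's point about dictionary defaults is mechanical rather than malicious: a spellchecker built on a closed word list will "correct" any name it doesn't contain to the nearest entry it does. A minimal sketch of that failure mode (purely illustrative; the word list is invented, and real autocorrect systems use far richer language models than plain edit distance):

```python
# Toy illustration only -- not any vendor's actual algorithm.
# A naive autocorrect maps any word missing from its dictionary
# to the nearest dictionary entry by Levenshtein edit distance.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def autocorrect(word: str, dictionary: set[str]) -> str:
    if word.lower() in dictionary:
        return word  # known word: left alone
    return min(dictionary, key=lambda w: edit_distance(word.lower(), w))

words = {"satan", "dirty", "naive", "noon"}   # the name is absent from the defaults
print(autocorrect("Savan", words))            # -> satan (nearest entry, 1 edit away)

words.add("savan")                            # one added dictionary entry later...
print(autocorrect("Savan", words))            # -> Savan, left intact
```

The fix the campaign is asking for is essentially the last two lines: shipping default dictionaries that include common names, so they never enter the nearest-match path at all.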

Karen Fox, whose children are called Eoin and Niamh, said of autocorrect: "The red line bothers me -- I didn't choose the 'wrong' name for my child. Tech companies update dictionaries with slang all the time and I think it should be an easy thing to do and definitely a priority."
