The Courts

Google Hit With Lawsuit Alleging It Stole Data From Millions of Users To Train Its AI Tools (cnn.com)

"CNN reports on a wide-ranging class action lawsuit claiming Google scraped and misused data to train its AI systems," writes long-time Slashdot reader david.emery. "This goes to the heart of what can be done with information that is available over the internet." From the report: The complaint alleges that Google "has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans" and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken "virtually the entirety of our digital footprint," including "creative and copywritten works" to build its AI products. The complaint points to a recent update to Google's privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

In response to an earlier Verge report on the update, the company said its policy "has long been transparent that Google uses publicly available information from the open web to train language models for services like Google Translate. This latest update simply clarifies that newer services like Bard are also included." [...] The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google's generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.
"Google needs to understand that 'publicly available' has never meant free to use for any purpose," Tim Giordano, one of the attorneys at Clarkson bringing the suit against Google, told CNN in an interview. "Our personal information and our data is our property, and it's valuable, and nobody has the right to just take it and use it for any purpose."

The plaintiffs are represented by the Clarkson Law Firm, which filed a similar lawsuit against OpenAI last month.
AI

Nine AI-Powered Humanoid Robots Hold Press Conference at UN Summit (apnews.com)

We've just had the world's first press conference with AI-enabled, humanoid social robots. Slashdot has a transcript of all the robots' answers from the press conference, and the full 40-minute video of the event is also available.

It all happened as the United Nations held an "AI for Good" summit in Geneva, where the Guardian reports that the foyer was "humming with robotic voices, the whirring of automated wheels and limbs, and Desdemona, the 'rock star' humanoid, who is chanting 'the singularity will not be centralised' on stage backed by a human band, Jam Galaxy."

But the Associated Press describes how one UN agency had "assembled a group of robots that physically resembled humans at a news conference Friday, inviting reporters to ask them questions in an event meant to spark discussion about the future of artificial intelligence. The nine robots were seated and posed upright along with some of the people who helped make them at a podium in a Geneva conference center... Among them: Sophia, the first robot innovation ambassador for the U.N. Development Program, or UNDP; Grace, described as a health care robot; and Desdemona, a rock star robot."

"I'm terrified by all of this," said one local newscaster, noting that the robots also said they "had no intention of rebelling against their creators."

But the Associated Press points out an important caveat: While the robots vocalized strong statements - that robots could be more efficient leaders than humans, but wouldn't take anyone's job away or stage a rebellion - organizers didn't specify to what extent the answers were scripted or programmed by people. The summit was meant to showcase "human-machine collaboration," and some of the robots are capable of producing preprogrammed responses, according to their documentation.
Two of the robots seemed to disagree on whether AI-powered robots should submit to stricter regulation. (Although since they're only synthesizing sentences from large-language models, can they really be said to "agree" or "disagree"?)

There were unintentionally humorous moments, starting right from the beginning; Slashdot's transcript captures all of the robots' answers.
Education

Wisconsin Will Raise Public School Funding For the Next 400 Years (bbc.com)

Wisconsin Governor Tony Evers has used his partial veto power to make a creative line-item change to the state budget, securing increased funding for public schools until 2425 instead of 2025. The BBC reports: Republicans have reacted with fury to what they call "an unprecedented brand-new way to screw the taxpayer." The move could however be undone by a legal challenge or future governor. It is the latest tussle between Mr Evers, a former public school teacher who narrowly won re-election last year, and a Republican-controlled state legislature that has often blocked his agenda. Their original budget proposal had raised the amount local school districts could generate via property taxes, by $325 per student, for the next two school years.

But Wisconsin allows its governors to alter certain pieces of legislation by striking words and numbers as they see fit before signing them into law - what is known as partial veto power. Both Democrats and Republicans have flexed their partial veto authority for years, with Mr Evers' Republican predecessor once deploying it to extend a state program's deadline by one thousand years.

This week, before he signed the biennial state budget into law, the governor altered language that applied the $325 increase to the 2023-24 and 2024-25 school years, vetoing a hyphen and a "20" to instead make the end date 2425. He also used his power to remove proposed tax cuts for the state's wealthiest taxpayers and protect some 180 diversity, equity and inclusion jobs Republicans wanted to cut at the public University of Wisconsin.
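As a toy illustration of the edit involved (a demonstration only, not the legislative text), the partial veto amounts to deleting three characters from the enacted end date:

```python
# Evers struck a hyphen and a "20" from the enacted end date "2024-25",
# turning a two-year funding increase into a four-century one.
original = "2024-25"
vetoed = original.replace("-", "").replace("20", "", 1)  # drop "-" and the first "20"
print(vetoed)  # → 2425
```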

Windows

Windows 11's AI-powered Copilot (and its Bing-powered ads) Enters Public Preview (arstechnica.com)

An anonymous reader shares a report: Last month, Microsoft announced that it would continue its put-ChatGPT-in-everything adventure with a new Windows 11 feature called Copilot. The company added generative AI to Edge and to the Bing-powered taskbar Search field months ago, but Copilot promises to be the most visible and hard-to-ignore version of Microsoft's big AI push in its most visible and hard-to-ignore product. This week's Windows Insider Preview build for Dev channel users, build 23493, will be the first to enable Copilot for public testers.

After installing the update, preview users can press Windows + C to open a Copilot column on the right side of the screen. It will use the same Microsoft account you use for the rest of the OS (it's unclear whether it will work without a Microsoft account, though, to date, the preview has required sign-up and sign-in). And like the other Bing Chat implementations, it has three different "conversation style" settings that either try to rein the chatbot in and keep its answers straightforward and factual or allow it to get "more creative" but more prone to confabulations. In addition to chatting, Copilot will also support creating AI images using OpenAI's DALL-E 2 model, the same technology used for the Bing Image Creator. Some features announced last month, including third-party plugin support, aren't included in this initial preview, and later versions will also be able to adjust a wider range of Windows settings.

Social Networks

Reddit Mods Are Calling For An 'Affordable Return' For Third-Party Apps (theverge.com)

Moderators of popular Reddit communities have posted open letters to the company, requesting affordable API pricing for third-party apps, improved moderation tools and accessibility options, and a senior-level Moderator Advocate role at Reddit. The Verge reports: More than 8,000 subreddits went dark earlier this month in protest of the company's planned API pricing changes that will force apps like Apollo and rif is fun for Reddit to shut down on June 30th. Some subreddits continued to stay dark after the original 48-hour plan, but many moderators have reopened their communities after feeling pressure from Reddit itself. (A few communities have found some creative ways to reopen.)

The open letters are largely the same, calling for "a return to the productive dialogue that has served us in the past" between users and administrators (Reddit employees) and listing out a series of requests (taken from r/Funny's letter). [...] The letters conclude by saying that while the company has "all but entirely eroded" its trust with those who wrote the letters, "we hope that together, we can begin to rebuild it." The writers have asked for a response from Reddit by June 29th -- a day before many third-party apps are set to shut down.

Space

New Video Shows a Flyby of the Planet Mercury - with AI-Assisted Music (phys.org)

The "BepiColombo" mission, a joint European-Japanese effort, "has recently completed its third of six planned flybys of Mercury, capturing dozens of images in the process," reports The Byte: At its closest, the spacecraft soared within just 150 miles of Mercury. This occurred on the night side of the planet, however, too dark for optimal imaging. Instead, the first and nearest image was taken 12 minutes after the closest approach, at the still impressive proximity of some 1,100 miles above the surface.
Now the ESA has spliced together 217 images from that flyby into a short video, which culminates with a zoomed-in closeup of Mercury's cratered surface. And the music in that video had a little help from AI, reports Phys.org: Music was composed for the sequence by ILÄ (formerly known as Anil Sebastian), with the assistance of AI tools developed by the Machine Intelligence for Musical Audio group, University of Sheffield.

Music from the previous two flyby movies — composed by Maison Mercury Jones' creative director ILÄ and Ingmar Kamalagharan — was given to the AI tool to suggest seeds for the new composition, which ILÄ then chose from to edit and weave together with other elements into the new piece.

The team at the University of Sheffield has developed an Artificial Musical Intelligence (AMI), a large-scale general-purpose deep neural network that can be personalized to individual musicians and use cases. The project with the University of Sheffield is aimed at exploring the boundaries of the ethics of AI creativity, while also emphasizing the essential contributions of the (human) composer.

From the ESA's announcement: BepiColombo's next Mercury flyby will take place on 5 September 2024, but there is plenty of work to occupy the teams in the meantime... BepiColombo's Mercury Transfer Module will complete over 15,000 hours of solar electric propulsion operations over its lifetime, which together with nine planetary flybys in total — one at Earth, two at Venus, and six at Mercury — will guide the spacecraft towards Mercury orbit.

The ESA-led Mercury Planetary Orbiter and the JAXA-led Mercury Magnetospheric Orbiter modules will separate into complementary orbits around the planet, and their main science mission will begin in early 2026.

One spaceflight blog notes the propulsive energy required for an eventual entry into the orbit of Mercury "is greater than that of a mission to fly by Pluto.

"Only one other spacecraft has orbited Mercury, and that was NASA's MESSENGER probe, which orbited the planet from 2011 to 2015."
Businesses

EA Sports and EA Games Splitting Apart in Internal Shakeup (ign.com)

Electronic Arts is undergoing a major internal shakeup, announcing today in a message from CEO Andrew Wilson that it is realigning its major studios and its leadership structure in an effort to "empower our creative teams." From a report: The reorganization includes splitting EA Games and EA Sports, with the former being renamed "EA Entertainment" in a signal that EA intends to expand beyond games where possible. "We're building the future of interactive entertainment on a foundation of legendary franchises and innovative new experiences, which represents massive opportunities for growth," Wilson wrote in a message announcing the news.
AI

McKinsey Report Finds Generative AI Could Add Up To $4.4 Trillion a Year To the Global Economy (venturebeat.com)

According to global consulting leader McKinsey and Company, Generative AI could add "$2.6 trillion to $4.4 trillion annually" to the global economy. That's almost the "economic equivalent of adding an entire new country the size and productivity of the United Kingdom to the Earth ($3.1 trillion GDP in 2021)," notes VentureBeat. From the report: The $2.6 trillion to $4.4 trillion economic impact figure marks a huge increase over McKinsey's previous estimates of the AI field's impact on the economy from 2017, up 15 to 40% from before. This upward revision is due to the incredibly fast embrace and potential use cases of GenAI tools by large and small enterprises. Furthermore, McKinsey finds "current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70% of employees' time today." Does this mean massive job loss is inevitable? No, according to Alex Sukharevsky, senior partner and global leader of QuantumBlack, McKinsey's in-house AI division and report co-author. "You basically could make it significantly faster to perform these jobs and do so much more precisely than they are performed today," Sukharevsky told VentureBeat. What that translates to is an addition of "0.2 to 3.3 percentage points annually to productivity growth" for the entire global economy, he said.

However, as the report notes, "workers will need support in learning new skills, and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world." Also, the advent of accessible GenAI has pushed up McKinsey's previous estimates for workplace automation: "Half of today's work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates."

Specifically, McKinsey's report found that four types of tasks -- customer operations, marketing and sales, software engineering and R&D -- were likely to account for 75% of the value add of GenAI in particular. "Examples include generative AI's ability to support interactions with customers, generate creative content for marketing and sales and draft computer code based on natural-language prompts, among many other tasks." [...] Overall, McKinsey views GenAI as a "technology catalyst," pushing industries further along toward automation journeys, but also freeing up the creative potential of employees. "I do believe that if anything, we are getting into the age of creativity and the age of creator," Sukharevsky said.

Games

McDonald's Releases a New Game Boy Color Game (arstechnica.com)

Hmmmmmm writes: Fast food giant McDonald's has released a new retro-style game featuring Grimace, the purple milkshake blob. While it's clearly meant to be played in a browser on a phone or computer, it's also a fully working Game Boy Color game that you can download and play on the original hardware. Grimace's Birthday was developed by Krool Toys, a Brooklyn-based independent game studio and "creative engineering team" with a history of creating playable Game Boy games as unique PR for music artists and brands. The game assumes you're playing in an emulator via a browser window -- you can play that version of the game here -- but we also got it running on an Analogue Pocket thanks to a Game Boy Color FPGA core and a downloadable ROM hosted on the Internet Archive.

The game is so period-authentic that there's even a screen telling original monochrome Game Boy owners that the game "requires a color device to play." Even on Game Boy hardware, it still makes references to people "playing on mobile devices." The game involves simple 2D platforming and skateboarding, not unlike some sections of the Game Boy Color Tony Hawk games; Grimace needs to collect milkshakes and do sick stunts as he tries to track down other McDonaldland characters so he can party with them. It's short -- there are only four levels and one bonus round, plus score attack and free-skate modes -- but the pixel art is legitimately great, and the levels that are here are cleverly designed.

Space

Owen Gingerich, Astronomer Who Saw God in the Cosmos, Dies at 93 (nytimes.com)

Owen Gingerich, a renowned astronomer and historian of science, has passed away at the age of 93. Gingerich dedicated years to tracking down 600 copies of Nicolaus Copernicus's influential book "De Revolutionibus Orbium Coelestium Libri Sex" and was known for his passion for astronomy, often dressing up as a 16th-century scholar for lectures. He believed in the compatibility of religion and science and explored this theme in his books "God's Universe" and "God's Planet." The New York Times reports: Professor Gingerich, who lived in Cambridge, Mass., and taught at Harvard for many years, was a lively lecturer and writer. During his decades of teaching astronomy and the history of science, he would sometimes dress as a 16th-century Latin-speaking scholar for his classroom presentations, or convey a point of physics with a memorable demonstration; for instance, The Boston Globe related in 2004, he "routinely shot himself out of the room on the power of a fire extinguisher to prove one of Newton's laws." He was nothing if not enthusiastic about the sciences, especially astronomy. One year at Harvard, when his signature course, "The Astronomical Perspective," wasn't filling up as fast as he would have liked, he hired a plane to fly a banner over the campus that read: "Sci A-17. M, W, F. Try it!"

Professor Gingerich's doggedness was on full display in his long pursuit of copies of Copernicus's "De Revolutionibus Orbium Coelestium Libri Sex" ("Six Books on the Revolutions of the Heavenly Spheres"), first published in 1543, the year Copernicus died. That book laid out the thesis that Earth revolved around the sun, rather than the other way around, a profound challenge to scientific knowledge and religious belief in that era. The writer Arthur Koestler had contended in 1959 that the Copernicus book was not read in its time, and Professor Gingerich set out to determine whether that was true. In 1970 he happened on a copy of "De Revolutionibus" that was heavily annotated in the library of the Royal Observatory in Edinburgh, suggesting that at least one person had read it closely. A quest was born. Thirty years and hundreds of thousands of miles later, Professor Gingerich had examined some 600 Renaissance-era copies of "De Revolutionibus" all over the world and had developed a detailed picture not only of how thoroughly the work was read in its time, but also of how word of its theories spread and evolved. He documented all this in "The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus" (2004). John Noble Wilford, reviewing it in The New York Times, called "The Book Nobody Read" "a fascinating story of a scholar as sleuth."

Professor Gingerich was raised a Mennonite and was a student at Goshen College, a Mennonite institution in Indiana, studying chemistry but thinking of astronomy, when, he later recalled, a professor there gave him pivotal advice: "If you feel a calling to pursue astronomy, you should go for it. We can't let the atheists take over any field." He took the counsel, and throughout his career he often wrote or spoke about his belief that religion and science need not be at odds. He explored that theme in the books "God's Universe" (2006) and "God's Planet" (2014). He was not a biblical literalist; he had no use for those who ignored science and proclaimed the Bible's creation story historical fact. Yet, as he put it in "God's Universe," he was "personally persuaded that a superintelligent Creator exists beyond and within the cosmos." [...] Professor Gingerich, who was senior astronomer emeritus at the Smithsonian Astrophysical Observatory, wrote countless articles over his career in addition to his books. In one for Science and Technology News in 2005, he talked about the divide between theories of atheistic evolution and theistic evolution. "Frankly it lies beyond science to prove the matter one way or the other," he wrote. "Science will not collapse if some practitioners are convinced that occasionally there has been creative input in the long chain of being."
In 2006, Gingerich was mentioned in a Slashdot story about geologists reacting to the new definition of "pluton." He was quoted as saying that he was only peripherally aware of the definition, and because it didn't show up in MS Word's spell check, he didn't think it was that important.

"Gingerich led a committee of the International Astronomical Union charged with recommending whether Pluto should remain a planet," notes the New York Times. "His panel recommended that it should, but the full membership rejected that idea and instead made Pluto a 'dwarf planet.' That decision left Professor Gingerich somewhat dismayed."
AI

Marc Andreessen Criticizes 'AI Doomers', Warns the Bigger Danger is China Gaining AI Dominance (cnbc.com)

This week venture capitalist Marc Andreessen published "his views on AI, the risks it poses and the regulation he believes it requires," reports CNBC.

But they add that "In trying to counteract all the recent talk of 'AI doomerism,' he presents what could be seen as an overly idealistic perspective of the implications..." Though he starts off reminding readers that AI "doesn't want to kill you, because it's not alive... AI is a machine — it's not going to come alive any more than your toaster will." Andreessen writes that there's a "wall of fear-mongering and doomerism" in the AI world right now. Without naming names, he's likely referring to claims from high-profile tech leaders that the technology poses an existential threat to humanity... Tech CEOs are motivated to promote such doomsday views because they "stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition," Andreessen wrote...

Andreessen claims AI could be "a way to make everything we care about better." He argues that AI has huge potential for productivity, scientific breakthroughs, creative arts and reducing wartime death rates. "Anything that people do with their natural intelligence today can be done much better with AI," he wrote. "And we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel...." He also promotes reverting to the tech industry's "move fast and break things" approach of yesteryear, writing that both big AI companies and startups "should be allowed to build AI as fast and aggressively as they can" and that the tech "will accelerate very quickly from here — if we let it...."

Andreessen says there's work to be done. He encourages the controversial use of AI itself to protect people against AI bias and harms... In Andreessen's own idealist future, "every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." He expresses similar visions for AI's role as a partner and collaborator for every person, scientist, teacher, CEO, government leader and even military commander.

Near the end of his post, Andreessen points out what he calls "the actual risk of not pursuing AI with maximum force and speed." That risk, he says, is China, which is developing AI quickly and with highly concerning authoritarian applications... To head off the spread of China's AI influence, Andreessen writes, "We should drive AI into our economy and society as fast and hard as we possibly can."

CNBC also points out that Andreessen himself "wants to make money on the AI revolution, and is investing in startups with that goal in mind." But Andreessen's sentiments are clear.

"Rather than allowing ungrounded panics around killer AI, 'harmful' AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can."
AI

Is Self-Healing Code the Future of Software Development? (stackoverflow.blog)

We already have automated processes that detect bugs, test solutions, and generate documentation, notes a new post on Stack Overflow's blog. But beyond that, several developers "have written in the past on the idea of self-healing code. Head over to Stack Overflow's CI/CD Collective and you'll find numerous examples of technologists putting these ideas into practice."

Their blog post argues that self-healing code "is the future of software development." When code fails, it often gives an error message. If your software is any good, that error message will say exactly what was wrong and point you in the direction of a fix. Previous self-healing code programs are clever automations that reduce errors, allow for graceful fallbacks, and manage alerts. Maybe you want to add a little disk space or delete some files when you get a warning that utilization is at 90 percent. Or hey, have you tried turning it off and then back on again?
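That disk-space scenario is the classic, pre-AI form of self-healing automation. A minimal sketch, where both the 90% threshold and the cache directory are illustrative assumptions rather than anything from the post:

```python
import shutil
from pathlib import Path

THRESHOLD = 0.90                    # heal when the disk is 90% full (illustrative)
CACHE_DIR = Path("/tmp/app-cache")  # hypothetical directory that is safe to purge

def disk_usage_fraction(path: str = "/") -> float:
    """Fraction of the filesystem at `path` currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def self_heal() -> int:
    """Delete cached files once utilization crosses the threshold.

    Returns the number of bytes freed (0 when the disk is healthy
    or the cache directory doesn't exist).
    """
    if disk_usage_fraction() < THRESHOLD or not CACHE_DIR.is_dir():
        return 0
    freed = 0
    for f in CACHE_DIR.iterdir():
        if f.is_file():
            freed += f.stat().st_size
            f.unlink()
    return freed
```

A real version of this would run on a schedule or be triggered by the monitoring alert itself; the point is that the remediation is hand-written ahead of time, which is exactly what generative AI promises to change.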

Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to the creation, maintenance, and improvement of code at an entirely new level... "People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before," said Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory, in an interview with the Wall Street Journal. "I think there is a risk of accumulating lots of very shoddy code written by a machine," he said, adding that companies will have to rethink methodologies around how they can work in tandem with the new tools' capabilities to avoid that.

Despite the occasional "hallucination" of non-existent information, Stack Overflow's blog acknowledges that large-language models improve when asked to review their responses, identify errors, or show their work.

And they point out the project manager in charge of generative models at Google "believes that some of the work of checking the code over for accuracy, security, and speed will eventually fall to AI." Google is already using this technology to help speed up the process of resolving code review comments. The authors of a recent paper on this approach write that, "As of today, code-change authors at Google address a substantial amount of reviewer comments by applying an ML-suggested edit. We expect that to reduce time spent on code reviews by hundreds of thousands of hours annually at Google scale. Unsolicited, very positive feedback highlights that the impact of ML-suggested code edits increases Googlers' productivity and allows them to focus on more creative and complex tasks...."

Recently, we've seen some intriguing experiments that apply this review capability to code you're trying to deploy. Say a code push triggers an alert on a build failure in your CI pipeline. A plugin triggers a GitHub action that automatically sends the code to a sandbox where an AI can review the code and the error, then commit a fix. That new code is run through the pipeline again, and if it passes the tests, is moved to deploy... Right now this work happens in the CI/CD pipeline, but Calvin Hoenes, the plugin's creator, dreams of a world where these kinds of agents can help fix errors that arise from code that's already live in the world. "What's very fascinating is when you actually have in production code running and producing an error, could it heal itself on the fly?" asks Hoenes...
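The fix-and-retry loop described above can be sketched generically. Everything here is a hypothetical stand-in for illustration: `run_tests`, `ask_model_for_fix`, and `apply_patch` represent the real CI plugin, the model call, and the commit step, none of which are specified in the post.

```python
def self_healing_build(source, run_tests, ask_model_for_fix, apply_patch,
                       max_attempts=3):
    """Run the pipeline; on failure, ask a model for a patch and retry.

    Illustrative sketch only: the callables are injected so the loop
    itself stays independent of any particular CI system or model API.
    """
    for _ in range(max_attempts):
        ok, error_log = run_tests(source)
        if ok:
            return source  # pipeline is green: safe to move on to deploy
        # Ship the failing code plus the error log to the model,
        # then apply whatever fix it suggests and try again.
        patch = ask_model_for_fix(source, error_log)
        source = apply_patch(source, patch)
    raise RuntimeError(f"build still failing after {max_attempts} AI fix attempts")
```

Note that the test suite is the only safety gate here, which is why Hoenes's caveat below about test coverage matters so much.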

For now, says Hoenes, we need humans in the loop. Will there come a time when computer programs are expected to autonomously heal themselves as they are crafted and grown? "I mean, if you have great test coverage, right, if you have a hundred percent test coverage, you have a very clean, clean codebase, I can see that happening. For the medium, foreseeable future, we probably better off with the humans in the loop."

Last month Stack Overflow themselves tried an AI experiment that helped users to craft a good title for their question.
Television

The Binge Purge

TV's streaming model is broken. It's also not going away. For Hollywood, figuring that out will be a horror show. From a report: Across town, there's despair and creative destruction and all sorts of countervailing indicators. Certain shows that were enthusiastically green-lit two years ago probably wouldn't be made now. Yet there are still streamers burning mountains of cash to entertain audiences that already have too much to watch. Netflix has tightened the screws and recovered somewhat, but the inarguable consensus is that there is still a great deal of pain to come as the industry cuts back, consolidates, and fumbles toward a more functional economic framework. The high-stakes Writers Guild of America strike has focused attention on Hollywood's labor unrest, but the really systemic issue is streaming's busted math. There may be no problem more foundational than the way the system monetizes its biggest hits: It doesn't.

Just ask Shawn Ryan. In April, the veteran TV producer's latest show, the spy thriller The Night Agent, became the fifth-most-watched English-language original series in Netflix's history, generating 627 million viewing hours in its first four weeks. As it climbed to the heights of such platform-defining smashes as Stranger Things and Bridgerton, Ryan wondered how The Night Agent's success might be reflected in his compensation. "I had done the calculations. Half a billion hours is the equivalent of over 61 million people watching all ten episodes in 18 days. Those shows that air after the Super Bowl -- it's like having five or ten of them. So I asked my lawyer, 'What does that mean?'" recalls Ryan. As it turns out, not much. "In my case, it means that I got paid what I got paid. I'll get a little bonus when season two gets picked up and a nominal royalty fee for each additional episode that gets made. But if you think I'm going out and buying a private jet, you're way, way off."
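Ryan's arithmetic is easy to verify; the episode runtime below is an assumed average (the article gives only the totals):

```python
# "Half a billion hours is the equivalent of over 61 million people
# watching all ten episodes" -- check, assuming ~49-minute episodes.
viewing_hours = 500_000_000                      # "half a billion hours"
season_hours = 10 * 49 / 60                      # ten episodes ≈ 8.2 hours
complete_watches = viewing_hours / season_hours
print(f"{complete_watches / 1e6:.1f} million full watches")  # → 61.2 million full watches
```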

Ryan says he'll probably make less money from The Night Agent than he did from The Shield, the cop drama he created in 2002, even though the latter ran on the then-nascent cable channel FX and never delivered Super Bowl numbers. "The promise was that if you made the company billions, you were going to get a lot of millions," he says. "That promise has gone away." Nobody is crying for Ryan, of course, and he wouldn't want them to. ("I'm not complaining!" he says. "I'm not unaware of my position relative to most people financially.") But he has a point. Once, in a more rational time, there was a direct relationship between the number of people who watched a show and the number of jets its creator could buy. More viewers meant higher ad rates, and the biggest hits could be sold to syndication and international markets. The people behind those hits got a cut, which is why the duo who invented Friends probably haven't flown commercial since the 1990s. Streaming shows, in contrast, have fewer ads (or none at all) and are typically confined to their original platforms forever. For the people who make TV, the connection between ratings and reward has been severed.
Books

Why Bill Gates Recommends This Novel About Videogames (gatesnotes.com) 74

Bill Gates wrote a blog post this week recommending a novel about videogame development. Gates calls Tomorrow, and Tomorrow, and Tomorrow "one of the biggest books of last year," telling the story of "two friends who bond over Super Mario Bros. as kids and grow up to make video games together." Although there are plenty of video games mentioned in the book — Oregon Trail is a recurring theme — I'd describe it more as a story about partnership and collaboration. When Sam and Sadie are in college, they create a game called Ichigo that turns out to be a huge hit. Their company, Unfair Games, becomes successful, but the two start to butt heads. Sadie is upset that Sam got most of the credit for Ichigo. Sam is frustrated that Sadie cares more about creating art than about making their company viable...

Most of the book is about how a creative partnership can be equal parts remarkable and complicated. I couldn't help but be reminded of my relationship with Paul Allen while I was reading it. Sadie believes that "true collaborators in this life are rare." I agree, and I was lucky to have one in Paul. An early chapter describing how Sam and Sadie worked until sunrise in a dingy apartment in Cambridge, Massachusetts, could have just as easily been about Paul and me coming up with the idea for Microsoft. Like Sam and Sadie, we worked together every day for years.

Paul's vision and contributions to the company were absolutely critical to its success, and then he chose to move on. We had a great relationship, but not without some of the complexities that success brings. Zevin really captures what it feels like to start a company that takes off. It's thrilling to know your vision is now real, but success brings a lot of new questions. Once you make money, do you still have something to prove? How does your relationship with your partner change once a lot more people get involved? How do you make the next idea as good as the last?

You can't help but wonder whether you would've been as successful if you started up at a different time... Paul and I were very lucky in terms of our timing with Microsoft. We got in when chips were just starting to become powerful but before other people had created established companies... Tomorrow, and Tomorrow, and Tomorrow resonated with me for personal reasons, but I think Zevin's exploration of partnership and collaboration is worth reading no matter who you are. Even if you're skeptical about reading a book about video games, the subject is a terrific metaphor for human connection.

The book is now being adapted into a movie.
AI

ChatGPT is Already Taking Jobs (msn.com) 193

The Washington Post writes that "Some economists predict artificial intelligence technology like ChatGPT could replace hundreds of millions of jobs, in a cataclysmic reorganization of the workforce mirroring the industrial revolution.

"For some workers, this impact is already here." Those that write marketing and social media content are in the first wave of people being replaced with tools like chatbots, which are seemingly able to produce plausible alternatives to their work.

Experts say that even advanced AI doesn't match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality. "We're really in a crisis point," said Sarah T. Roberts, an associate professor at the University of California, Los Angeles specializing in digital labor. "[AI] is coming for the jobs that were supposed to be automation-proof..."

The technology's ability to churn out human-sounding prose puts highly paid knowledge workers in the crosshairs for replacement, experts said. "In every previous automation threat, the automation was about automating the hard, dirty, repetitive jobs," said Ethan Mollick, an associate professor at the University of Pennsylvania's Wharton School of Business. "This time, the automation threat is aimed squarely at the highest-earning, most creative jobs that ... require the most educational background." In March, Goldman Sachs predicted that 18 percent of work worldwide could be automated by AI, with white-collar workers such as lawyers at more risk than those in trades such as construction or maintenance. "Occupations for which a significant share of workers' time is spent outdoors or performing physical labor cannot be automated by AI," the report said...

Mollick said it's too early to gauge how disruptive AI will be to the workforce. He noted that jobs such as copywriting, document translation and transcription, and paralegal work are particularly at risk, since they have tasks that are easily done by chatbots. High-level legal analysis, creative writing or art may not be as easily replaceable, he said, because humans still outperform AI in those areas.

The article notes that one copywriter lost all 10 of his clients over the last four months — and though one later hired him back, he's now training to be a plumber.
AI

The Problem with the Matrix Theory of AI-Assisted Human Learning (nytimes.com) 28

In an opinion piece for the New York Times, Vox co-founder Ezra Klein worries that early AI systems "will do more to distract and entertain than to focus." (Since they tend to "hallucinate" inaccuracies, and may first be relegated to areas "where reliability isn't a concern" like videogames, song mash-ups, children's shows, and "bespoke" images.)

"The problem is that those are the areas that matter most for economic growth..." One lesson of the digital age is that more is not always better... The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs — but really, pick your interest group — using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?

You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the "boring apocalypse" scenario for A.I., in which "we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I."

But there's another worry: that the increased efficiency "would come at the cost of new ideas and deeper insights." Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from "The Matrix" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know ... that matters...

The analogy to office work is not perfect — there are many dull tasks worth automating so people can spend their time on more creative pursuits — but the dangers of overautomating cognitive and creative processes are real... To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us.

We failed that test with the internet. Let's not fail it with A.I.

AI

Is Concern About Deadly AI Overblown? (sfgate.com) 190

"Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction," acknowledges the Washington Post. "And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.

"But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren't rooted in good science. Instead, it distracts from the very real problems that the tech is already causing..." It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control... [I]nside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions. "Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk," said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher...

The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, such as even high-paying jobs like lawyers or physicians being replaced. The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies. "There are a set of people who view this as, 'Look, these are just algorithms. They're just repeating what it's seen online.' Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan," Google CEO Sundar Pichai said during an interview with "60 Minutes" in April. "We need to approach this with humility...."

There's no question that modern AIs are powerful, but that doesn't mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies. "Most technology and risk in technology is a gradual shift," Hooker said. "Most risk compounds from limitations that are currently present."

The Post also points out that some of the heaviest criticism of the "killer robot" debate "has come from researchers who have been studying the technology's downsides for years."

"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse," a four-person team of researchers opined recently. "Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
AI

Google Makes Its Text-To-Music AI Public (techcrunch.com) 16

An anonymous reader quotes a report from TechCrunch: Google [on Wednesday] released MusicLM, a new experimental AI tool that can turn text descriptions into music. Available in the AI Test Kitchen app on the web, Android or iOS, MusicLM lets users type in a prompt like "soulful jazz for a dinner party" or "create an industrial techno sound that is hypnotic" and have the tool create several versions of the song. Users can specify instruments like "electronic" or "classical," as well as the "vibe, mood, or emotion" they're aiming for, as they refine their MusicLM-generated creations.

When Google previewed MusicLM in an academic paper in January, it said that it had "no immediate plans" to release it. The coauthors of the paper noted the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs. But in the intervening months, Google says it's been working with musicians and hosting workshops to "see how [the] technology can empower the creative process." One of the outcomes? The version of MusicLM in AI Test Kitchen won't generate music with specific artists or vocals. Make of that what you will. It seems unlikely, in any case, that the broader challenges around generative music will be easily remedied.
You can sign up to try MusicLM here.
AI

Google I/O To Feature AI Updates, Showing Off PaLM 2 LLM (cnbc.com) 10

At its annual Google I/O developers conference on Wednesday, Google is planning to announce a number of generative AI updates, including launching a general-use large language model (LLM) called PaLM 2. CNBC reports: According to internal documents about Google I/O viewed by CNBC, the company will unveil PaLM 2, its most recent and advanced LLM. PaLM 2 includes more than 100 languages and has been operating under the internal codename "Unified Language Model." It's also performed a broad range of coding and math tests as well as creative writing tests and analysis. At the event, Google will make announcements on the theme of how AI is "helping people reach their full potential," including "generative experiences" to Bard and Search, the documents show. Pichai will be speaking to a live crowd of developers as he pitches his company's AI advancements.

Google first announced the PaLM language model in April of 2022. In March of this year, the company launched an API for PaLM alongside a number of AI enterprise tools it says will help businesses "generate text, images, code, videos, audio, and more from simple natural language prompts." Last month, Google said its medical LLM called "Med-PaLM 2" can answer medical exam questions at an "expert doctor level" and is accurate 85% of the time.

Transportation

Mercedes Locks Better EV Engine Performance Behind Annoying Subscription Paywalls (techdirt.com) 296

Last year, BMW announced plans to charge an $18 per month subscription for heated seats. Now, Mercedes is making better EV engine performance an added subscription surcharge. "Mercedes-Benz electric vehicle owners in North America who want a little more power and speed can now buy 60 horsepower for just $60 a month or, on other models, 80 horsepower for $90 a month," reports CNN. "They won't have to visit a Mercedes dealer to get the upgrade either, or even leave their own driveway. The added power, which will provide a nearly one second decrease in zero-to-60 acceleration, will be available through an over-the-air software patch." Techdirt reports: If you don't want to pay monthly, Mercedes will also let you pay a one time flat fee (usually several thousand dollars) to remove the artificial restrictions they've imposed on your engine. That's, of course, creating additional upward pricing funnel efforts on top of the industry's existing efforts to upsell you on a rotating crop of trims, tiers, and options you probably didn't want.

It's not really clear that regulators have any interest in cracking down on charging dumb people extra for something they already owned and paid for. After all, ripping off gullible consumers is effectively now considered little more than creative marketing by a notable segment of government "leaders" (see: regulatory apathy over misleading hidden fees in everything from hotels to cable TV).
