AI

AI Threats 'Complete BS' Says Meta Senior Researcher, Who Thinks AI is Dumber Than a Cat (msn.com) 111

Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are "complete B.S." When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. "It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat," he replied on X. He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today's "frontier" AIs, including those made by Meta itself.
LeCun shared a Turing Award with Geoffrey Hinton and Yoshua Bengio (who hopes LeCun is right, but adds "I don't think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy. That is why I think we need governments involved.")

But LeCun still believes AI is a very powerful tool — even as Meta joins the quest for artificial general intelligence: Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it's now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models. "The impact on Meta has been really enormous," he says.

At the same time, he is convinced that today's AIs aren't, in any meaningful sense, intelligent — and that many others in the field, especially at AI startups, are ready to extrapolate from AI's recent progress in ways that he finds ridiculous... OpenAI's Sam Altman last month said we could have Artificial General Intelligence within "a few thousand days...." But creating an AI this capable could easily take decades, [LeCun] says — and today's dominant approach won't get us there.... His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that's analogous to how a baby animal does, by building a world model from the visual information it takes in.

In contrast, today's AI models "are really just predicting the next word in a text," he says. And because of their enormous memory capacity, "they can seem to be reasoning, when in fact they're merely regurgitating information they've already been trained on."
AI

AGI is On Clients' Radar But Far From Reality, Says Gartner (theregister.com) 79

Gartner is warning that any prospect of Artificial General Intelligence (AGI) is at least 10 years away and may never arrive at all. It might not even be a worthwhile pursuit, the analyst firm says. From a report: AGI has become a controversial topic in the last couple of years as builders of large language models (LLMs), such as OpenAI, make bold claims that they've established a near-term path toward human-like intelligence. At the same time, others from the discipline of cognitive science have scorned the idea, arguing that the concept of AGI is poorly understood and the LLM approach is insufficient.

In its Hype Cycle for Emerging Technologies, 2024, Gartner says it distills "key insights" from more than 2,000 technologies and, using its framework, produces a succinct set of "must-know" emerging technologies that have the potential to deliver benefits over the next two to ten years. The consultancy notes that GenAI -- the subject of volumes of industry hype and billions in investment -- is about to enter the dreaded "trough of disillusionment." Arun Chandrasekaran, Gartner distinguished VP analyst, told The Register: "The expectations and hype around GenAI are enormously high. So it's not that the technology, per se, is bad, but it's unable to keep up with the high expectations that I think enterprises have because of the enormous hype that's been created in the market in the last 12 to 18 months."

However, GenAI is likely to have a significant impact on investment in the longer term, Chandrasekaran said. "I truly still believe that the long-term impact of GenAI is going to be quite significant, but we may have overestimated, in some sense, what it can do in the near term."

AI

How Will AI Transform the Future of Work? (theguardian.com) 121

An anonymous reader shared this report from the Guardian: In March, after analysing 22,000 tasks in the UK economy, covering every type of job, a model created by the Institute for Public Policy Research predicted that 59% of tasks currently done by humans — particularly women and young people — could be affected by AI in the next three to five years. In the worst-case scenario, this would trigger a "jobs apocalypse" where eight million people lose their jobs in the UK alone.... Darrell West, author of The Future of Work: AI, Robots and Automation, says that just as policy innovations were needed in Thomas Paine's time to help people transition from an agrarian to an industrial economy, they are needed today, as we transition to an AI economy. "There's a risk that AI is going to take a lot of jobs," he says. "A basic income could help navigate that situation."

AI's impact will be far-reaching, he predicts, affecting blue- and white-collar jobs. "It's not just going to be entry-level people who are affected. And so we need to think about what this means for the economy, what it means for society as a whole. What are people going to do if robots and AI take a lot of the jobs?"

Nell Watson, a futurist who focuses on AI ethics, has a more pessimistic view. She believes we are witnessing the dawn of an age of "AI companies": corporate environments where very few — if any — humans are employed at all. Instead, at these companies, lots of different AI sub-personalities will work independently on different tasks, occasionally hiring humans for "bits and pieces of work". These AI companies have the potential to be "enormously more efficient than human businesses", driving almost everyone else out of business, "apart from a small selection of traditional old businesses that somehow stick in there because their traditional methods are appreciated"... As a result, she thinks it could be AI companies, not governments, that end up paying people a basic income.

AI companies, meanwhile, will have no salaries to pay. "Because there are no human beings in the loop, the profits and dividends of this company could be given to the needy. This could be a way of generating support income in a way that doesn't need the state welfare. It's fully compatible with capitalism. It's just that the AI is doing it."

AI

Christopher Nolan Says AI Dangers Have Been 'Apparent For Years' (variety.com) 52

An anonymous reader quotes a report from Variety: Christopher Nolan got honest about artificial intelligence in a new interview with Wired magazine. The Oscar-nominated filmmaker says the writing has been on the wall about AI dangers for quite some time, but now the media is more focused on the technology because it poses a threat to their jobs. "The growth of AI in terms of weapons systems and the problems that it is going to create have been very apparent for a lot of years," Nolan said. "Few journalists bothered to write about it. Now that there's a chatbot that can write an article for a local newspaper, suddenly it's a crisis." Nolan said the main issue with AI is "a very simple one" and relates to the technology being used by companies to "evade responsibility for their actions."

"If we endorse the view that AI is all-powerful, we are endorsing the view that it can alleviate people of responsibility for their actions -- militarily, socioeconomically, whatever," Nolan continued. "The biggest danger of AI is that we attribute these godlike characteristics to it and therefore let ourselves off the hook. I don't know what the mythological underpinnings of this are, but throughout history there's this tendency of human beings to create false idols, to mold something in our own image and then say we've got godlike powers because we did that." Nolan added that he feels there is "a real danger" with AI, saying, "I identify the danger as the abdication of responsibility." "I feel that AI can still be a very powerful tool for us. I'm optimistic about that. I really am," he said. "But we have to view it as a tool. The person who wields it still has to maintain responsibility for wielding that tool. If we accord AI the status of a human being, the way at some point legally we did with corporations, then yes, we're going to have huge problems."

"The whole machine learning as applied to deepfake technology, that's an extraordinary step forward in visual effects and in what you could do with audio," Nolan told Wired. "There will be wonderful things that will come out, longer term, in terms of environments, in terms of building a doorway or a window, in terms of pooling the massive data of what things look like, and how light reacts to materials. Those things are going to be enormously powerful tools." Will Nolan be using AI technology on his films? "I'm, you know, very much the old analog fusty filmmaker," he said. "I shoot on film. And I try to give the actors a complete reality around it. My position on technology as far as it relates to my work is that I want to use technology for what it's best for. Like if we do a stunt, a hazardous stunt. You could do it with much more visible wires, and then you just paint out the wires. Things like that."

Earth

Exxon Climate Predictions Were Accurate Decades Ago. Still It Sowed Doubt (npr.org) 126

An anonymous reader quotes a report from NPR: Decades of research by scientists at Exxon accurately predicted how much global warming would occur from burning fossil fuels, according to a new study in the journal Science. The findings clash with an enormously successful campaign that Exxon spearheaded and funded for more than 30 years which cast doubt on human-driven climate change and the science underpinning it. That narrative helped delay federal and international action on climate change, even as the impacts of climate change worsened.

Over the last few years, journalists and researchers revealed that Exxon did in-house research that showed it knew that human-caused climate change is real. The new study looked at Exxon's research and compared it to the warming that has actually happened. Researchers at Harvard University and the Potsdam Institute for Climate Impact Research analyzed Exxon's climate studies from 1977 to 2003. The researchers show the company, now called ExxonMobil, produced climate research that was at least as accurate as work by independent academics and governments -- and occasionally surpassed it. That's important because ExxonMobil and the broader fossil fuel industry face lawsuits nationwide claiming they misled the public on the harmful effects of their products.
"The bottom line is we found that they were modeling and predicting global warming with, frankly, shocking levels of skill and accuracy, especially for a company that then spent the next couple of decades denying that very climate science," says lead author Geoffrey Supran, who now is an associate professor of environmental science and policy at the University of Miami.

"Specifically, what we've done is to actually put a number for the first time on what Exxon knew, which is that the burning of their fossil fuel products would heat the planet by something like 0.2 [degrees] Celsius every single decade," Supran says.

The report notes that ExxonMobil "faces more than 20 lawsuits brought by states and local governments for damages caused by climate change." These new findings could provide more evidence for those cases as they progress through the courts, says Karen Sokol, a law professor at Loyola University in New Orleans.

"What Exxon scientists found and what they communicated to company executives was nothing short of horrifying," says Sokol. "Imagine that world and the different trajectory that consumers, investors and policymakers would have taken when we still had time, versus now when we're entrenched in a fossil fuel based economy that's getting increasingly expensive and difficult to exit."
The Internet

Fake CISO Profiles On LinkedIn Target Fortune 500s (krebsonsecurity.com) 15

Security researcher Brian Krebs writes: Someone has recently created a large number of fake LinkedIn profiles for Chief Information Security Officer (CISO) roles at some of the world's largest corporations. It's not clear who's behind this network of fake CISOs or what their intentions may be. But the fabricated LinkedIn identities are confusing search engine results for CISO roles at major companies, and they are being indexed as gospel by various downstream data-scraping sources. [...] Rich Mason, the former CISO at Fortune 500 firm Honeywell, began warning his colleagues on LinkedIn about the phony profiles earlier this week. "It's interesting the downstream sources that repeat LinkedIn bogus content as truth," Mason said. "This is dangerous: Apollo.io, Signalhire, and Cybersecurity Ventures." [...]

Again, we don't know much about who or what is behind these profiles, but in August the security firm Mandiant (recently acquired by Google) told Bloomberg that hackers working for the North Korean government have been copying resumes and profiles from leading job listing platforms LinkedIn and Indeed, as part of an elaborate scheme to land jobs at cryptocurrency firms. None of the profiles listed here responded to requests for comment (or to become a connection).

LinkedIn could take one simple step that would make it far easier for people to make informed decisions about whether to trust a given profile: Add a "created on" date for every profile. Twitter does this, and it's enormously helpful for filtering out a great deal of noise and unwanted communications. The former CISO Mason said LinkedIn also could experiment with offering something akin to Twitter's verified mark to users who chose to validate that they can respond to email at the domain associated with their stated current employer. Mason said LinkedIn also needs a more streamlined process for allowing employers to remove phony employee accounts. He recently tried to get a phony profile removed from LinkedIn for someone who falsely claimed to have worked for his company.
In a statement provided to KrebsOnSecurity, LinkedIn said its teams were actively working to take these fake accounts down. "We do have strong human and automated systems in place, and we're continually improving, as fake account activity becomes more sophisticated," the statement reads. "In our transparency report we share how our teams plus automated systems are stopping the vast majority of fraudulent activity we detect in our community -- around 96% of fake accounts and around 99.1% of spam and scam."
Science

Balloon Detects First Signs of a 'Sound Tunnel' in the Sky (science.org) 20

Atmospheric analog to ocean's acoustic channel could be used to monitor eruptions and bombs. From a report: About 1 kilometer under the sea lies a sound tunnel that carries the cries of whales and the clamor of submarines across great distances. Ever since scientists discovered this Sound Fixing and Ranging (SOFAR) channel in the 1940s, they've suspected a similar conduit exists in the atmosphere. But few have bothered to look for it, aside from one top-secret Cold War operation. Now, by listening to distant rocket launches with solar-powered balloons, researchers say they have finally detected hints of an aerial sound channel, although it does not seem to function as simply or reliably as the ocean SOFAR. If confirmed, the atmospheric SOFAR may pave the way for a network of aerial receivers that could help researchers detect remote explosions from volcanoes, bombs, and other sources that emit infrasound -- acoustic waves below the frequency of human hearing. "It would help enormously to have those [detectors] up there," says William Wilcock, a marine seismologist at the University of Washington, Seattle. Although seismic sensors in the ground pick up most of the planet's biggest bangs, "some areas of the Earth are covered very well and others aren't."

In the ocean, the SOFAR channel is bounded by layers of warmer, lighter water above and cooler, denser water below. Sound waves, which travel at their slowest at this depth, get trapped inside the channel, bouncing off the surrounding layers like a bowling ball guided by bumpers. Researchers rely on the SOFAR channel to monitor earthquakes and eruptions under the sea floor -- and even to measure ocean temperatures rising from global warming. After geophysicist Maurice Ewing discovered the SOFAR channel in 1944, he set out to find an analogous layer in the sky. At an altitude of between 10 and 20 kilometers is the tropopause, the boundary between the troposphere, the lowest layer of the atmosphere (where weather occurs), and the stratosphere. Like the marine SOFAR, the tropopause represents a cold region, where sound waves should travel slower and farther. An acoustic waveguide in the atmosphere, Ewing reasoned, would allow the U.S. Air Force to listen for nuclear weapon tests detonated by the Soviet Union. He instigated a top-secret experiment, code-named Project Mogul, that sent up hot air balloons equipped with infrasound microphones.

Earth

World's Largest Plant Survey Reveals Alarming Extinction Rate (nature.com) 95

The world's seed-bearing plants have been disappearing at a rate of nearly 3 species a year since 1900 -- which is up to 500 times higher than would be expected as a result of natural forces alone, according to the largest survey yet of plant extinctions. From a report: The project looked at more than 330,000 species and found that plants on islands and in the tropics were the most likely to be declared extinct. Trees, shrubs and other woody perennials had the highest probability of disappearing regardless of where they were located. The results were published on 10 June in Nature Ecology & Evolution. The study provides valuable hard evidence that will help with conservation efforts, says Stuart Pimm, a conservation scientist at Duke University in Durham, North Carolina. The survey included an order of magnitude more plant species than any other study, he says. "Its results are enormously significant."

The work stems from a database compiled by botanist Rafael Govaerts at the Royal Botanic Gardens, Kew, in London. Govaerts started the database in 1988 to track the status of every known plant species. As part of that project, he mined the scientific literature and created a list of seed-bearing plant species that were ruled extinct, and noted which species scientists had deemed to be extinct but were later rediscovered. In 2015, Govaerts teamed up with plant evolutionary biologist Aelys Humphreys at Stockholm University in Sweden and others to analyse the data. They compared extinction rates across different regions and characteristics such as whether the plants were annuals that regrow from seed each year or perennials that endure year after year. The researchers found that 1,234 species had been reported extinct since the publication of Carl Linnaeus's compendium of plant species, Species Plantarum, in 1753. But more than half of those species were either rediscovered or reclassified as another living species, meaning 571 are still presumed extinct.

A map of plant extinctions produced by the team shows that flora in areas of high biodiversity and burgeoning human populations, such as Madagascar, the Brazilian rainforests, India and South Africa, are most at risk. Humphreys says that the rate of extinction in the tropics is beyond what researchers expect, even when they account for the increased diversity of species in those habitats. And islands are particularly sensitive because they are likely to contain species found nowhere else in the world and are especially susceptible to environmental changes, says Humphreys.

Wikipedia

Happy 18th Birthday, Wikipedia (washingtonpost.com) 85

This week, Wikipedia celebrates its 18th birthday. If the massive crowdsourced encyclopedia project were human, then in most countries, it would just now be considered a legal adult. But in truth, the free online encyclopedia has long played the role of the Internet's good grown-up. From a story: Wikipedia has grown enormously since its inception: It now boasts 5.7 million articles in English and pulled in 92 billion page views last year. The site has also undergone a major reputation change. If you ask Siri, Alexa or Google Home a general-knowledge question, it will likely pull the response from Wikipedia. The online encyclopedia has been cited in more than 400 judicial opinions, according to a 2010 paper in the Yale Journal of Law & Technology.

Many professors are ditching the traditional writing assignment and instead asking students to expand or create a Wikipedia article on the topic. And YouTube Chief Executive Susan Wojcicki announced a plan last March to pair misleading conspiracy videos with links to corresponding articles from Wikipedia. Facebook has also released a feature using Wikipedia's content to provide users more information about the publication source for articles in their feed.

Government

MIT's Elegant Schoolbus Algorithm Was No Match For Angry Parents (bostonglobe.com) 399

"Computers can solve your problem. You may not like the answer," writes the Boston Globe. Slashdot reader sandbagger explains: "Boston Public Schools asked MIT graduate students Sebastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools -- and re-routing the hundreds of buses that serve them. In theory this would also help with student alertness...." MIT also reported that "Approximately 50 superfluous routes could be eliminated using the new method, saving the school district between $3 million and $5 million annually."

The Globe reports: They took to the new project with gusto, working 14- and 15-hour days to meet a tight deadline -- and occasionally waking up in the middle of the night to feed new information to a sprawling MIT data center. The machine they constructed was a marvel. Sorting through 1 novemtrigintillion options -- that's 1 followed by 120 zeroes -- the algorithm landed on a plan that would trim the district's $100 million-plus transportation budget while shifting the overwhelming majority of high school students into later start times.... But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh's tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent's resignation...

Big districts stagger their start times so a single fleet of buses can serve every school: dropping off high school students early in the morning, then circling back to get the elementary and middle school kids. If you're going to push high school start times back, then you've probably got to move a lot of elementary and middle schools into earlier time slots. The district knew that going in, and officials dutifully quizzed thousands of parents and teachers at every grade level about their preferred start times. But they never directly confronted constituents with the sort of dramatic change the algorithm would eventually propose -- shifting school start times at some elementary schools by as much as two hours. Even more... Hundreds of families were facing a 9:30 to 7:15 a.m. shift. And for many, that was intolerable. They'd have to make major changes to work schedules or even quit their jobs...
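The staggering described above is, at heart, a fleet-minimization problem: each school's morning bus run is a time interval, and one bus can cover several schools only if its runs don't overlap. Minimizing the number of buses is the classic interval-partitioning problem, solvable greedily with a min-heap. This is a minimal toy sketch of that idea (with invented example times), not the MIT algorithm itself, which also juggled start-time preferences and routing:

```python
import heapq

def min_buses(runs):
    """runs: list of (start, end) times in minutes after midnight.
    Returns the minimum number of buses needed to cover every run."""
    free_at = []  # min-heap of times at which some bus finishes its current run
    for start, end in sorted(runs):
        if free_at and free_at[0] <= start:
            heapq.heapreplace(free_at, end)  # reuse the earliest-free bus
        else:
            heapq.heappush(free_at, end)     # no bus is free yet: add one
    return len(free_at)

# Staggered start times (7:00, 7:50, 8:40) let one bus chain three runs...
staggered = [(420, 465), (470, 515), (520, 565)]
# ...while identical start times force a separate bus per school.
simultaneous = [(420, 465), (420, 465), (420, 465)]
print(min_buses(staggered), min_buses(simultaneous))  # 1 3
```

The real optimization is vastly harder because shifting any school's start time changes every interval at once, which is why the search space ballooned to the numbers the Globe describes.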

Nearly 85% of the district had ended up with a new start time, and "In the end, the school start time quandary was more political than technical... This was a fundamentally human conflict, and all the computing power in the world couldn't solve it."

But will the whole drama play out again? "Last year, even after everything went sideways in Boston, some 80 school districts from around the country reached out to the whiz kids from MIT, eager for the algorithm to solve their problems."
Books

Book Review: The New Digital Age 68

Nerval's Lobster writes "Eric Schmidt and Jared Cohen begin their new nonfiction book, The New Digital Age, with a rather bold pronouncement: 'The Internet is the largest experiment involving anarchy in history.' Subsequent chapters deal with how that experiment will alter life in decades to come, as more and more people around the world connect to the Internet via cheap mobile phones and other devices." Keep reading to see what Nerval's Lobster has to say about the book.
Biotech

Hawking Says Humans Have Entered a New Stage of Evolution 398

movesguy sends us to The Daily Galaxy for comments by Stephen Hawking about how humans are evolving in a different way than any species before us. Quoting: "'At first, evolution proceeded by natural selection, from random mutations. This Darwinian phase, lasted about three and a half billion years, and produced us, beings who developed language, to exchange information. I think it is legitimate to take a broader view, and include externally transmitted information, as well as DNA, in the evolution of the human race,' Hawking said. In the last ten thousand years the human species has been in what Hawking calls, 'an external transmission phase,' where the internal record of information, handed down to succeeding generations in DNA, has not changed significantly. 'But the external record, in books, and other long lasting forms of storage,' Hawking says, 'has grown enormously. Some people would use the term evolution only for the internally transmitted genetic material, and would object to it being applied to information handed down externally. But I think that is too narrow a view. We are more than just our genes.'"
Space

Distributed Project To Classify SDSS Galaxies 35

Xandu writes "Be part of a human Beowulf by helping classify millions of galaxies from the SDSS at the Galaxy Zoo. From their about page, "Those involved are directly contributing to scientific research, while getting an opportunity to view the beautiful and varied galaxies that inhabit our universe. Why do we need people to do this, rather than just using a computer? The simple answer is that the human brain is much better at recognizing patterns than a computer. Galaxies are complicated objects that vary in appearance enormously, and yet in some ways they can be very similar. We could write a computer program to classify these galaxies, and many researchers have, but so far none have really done a good enough job. We have not been able to make computers 'see past' the complexity, to reliably identify the similarities that appear obvious to our eyes and brain. For now, and probably for some time yet, people do the best job of classifying galaxies."

Dungeons and Dragons Online Impressions 292

Tabletop roleplaying has been a fixture in my life since I was ten. You can probably imagine my enthusiasm when I heard of the joint venture between Asheron's Call developer Turbine and D&D publisher Wizards of the Coast. The goal: A Massively Multiplayer game set in a D&D campaign. Keith Baker's Eberron was tapped for the gameworld's flavour, with the d20 ruleset providing the skeleton on which to create the title's mechanics. The result is Dungeons and Dragons Online (DDO), which has been in the works for about two years now. DDO is faithful in ways I wouldn't have thought possible, but still manages to raise conflicting opinions for me. DDO has real-time traps and combat, beautiful graphics, and still fails to interest me on any level of my gamer soul. Read on for my impressions of a most perplexing MMOG.
Movies

HHG2G Exec. Producer Robbie Stamp Answers 221

Earlier this month, you asked questions of Hitchhiker's Guide to the Galaxy executive producer Robbie Stamp. Robbie's been kind enough to answer more than the usual number of questions, and has provided much interesting information about the casting, Douglas Adams' influence, and more -- read on below for his answers.
Role Playing (Games)

Great Game Characters Compensate For Plot? 46

Thanks to the IGDA for their 'Culture Clash' column discussing why interesting game characters make for better games, even if those games have a weak plot. The author gives the intriguing example of Max Payne, suggesting the game is memorable, despite the "relatively cliched" story, because "...the first time we see Max, he's giving up smoking because it's bad for his baby. The second time, he's howling his misery over the loss of his wife. He is a human being with a broken soul, and an enormously compelling and emotionally engaging character." However, games such as Morrowind present the main character as "little more than a cipher through which we experienced the game's story", and it's suggested that this is less successful: "It can be an effective way to craft a powerful narrative, but it's also one that is more likely to fail if poorly executed."
Microsoft

The Return Of Microsoft: Part Two 312

Microsoft has battled back to the top of the Internet heap, with more heavy-duty products coming to market this year than ever before, profits soaring again, and more research and development money in the bank than most of the world's nations can ever get their hands on, not to mention Microsoft's many out-maneuvered competitors. Microsoft, reports BusinessWeek in a thorough report in its June 4 issue, discussed on Slashdot two weeks ago, is drowning in cash: $30 billion, more than any other company in the Corporate Republic formerly known as America.
Technology

Up, Up, Down, Down: Part Three 285

The average American child plays videogames forty-nine minutes a day. Some play for much longer and over many years. There are few studies of the effects of gaming, but some traits are increasingly obvious: gamers are often independent, strategic thinkers and problem solvers. Their interactive instincts often collide unhappily with the traditions and institutions of a static, passive world. Gamers are the new artists, visionaries, and story-tellers of our time, sparked by astonishingly inventive new technologies like the PS2. Ready or not, they will become increasingly influential. Third in a series.
The Internet

The Melissa Syndrome 202

John Dillinger wasn't nailed with much more fanfare than the alleged creator of the now-famed Melissa virus, whose apprehension in New Jersey a few days ago drew a governor and a platoon of state, local and federal cyber-cops. This syndrome is becoming almost ritualistic. The virus and the arrest tell us a lot about Crime and Hype; Technological Hostility, and Closing the Distance that makes so much online hostility so easy.
