Businesses

Mercedes is Trialing Humanoid Robots For 'Low Skill, Repetitive' Tasks (theverge.com)

Mercedes-Benz is the latest automotive company to trial how humanoid robots could be used to automate "low skill, physically challenging, manual labor." From a report: On Friday, robotics company Apptronik announced it had entered into a commercial agreement with Mercedes to pilot how "highly advanced robotics" like Apollo -- Apptronik's 160-pound bipedal robot -- can be used in manufacturing. The news follows a similar pilot announced by BMW in January.

Apptronik says that Mercedes is exploring use cases like having Apollo inspect and deliver components to human production line workers. Neither company has disclosed any figures for the agreement or how many Apollo robots are being trialed. According to Apptronik, humanoid robots would allow vehicle manufacturers to start automating manufacturing tasks without having to redesign their existing facilities. The company says its approach instead "centers on automating some physically demanding, repetitive and dull tasks for which it is increasingly hard to find reliable workers."

NASA

NASA Shutters $2B Satellite Refueling Project, Blames Contractor For Delays (upi.com)

"NASA said Friday it is shutting down a $2 billion satellite refueling project," reports UPI, "after criticizing the project's contractor for poor performance." The agency in a statement said it will discontinue the On-orbit Servicing, Assembly and Manufacturing 1 project after nearly a decade of work due to "continued technical, cost, and schedule challenges, and a broader community evolution away from refueling unprepared spacecraft, which has led to a lack of a committed partner." [...] The spacecraft would have utilized an attached Space Infrastructure Dexterous Robot (SPIDER) to refuel the Landsat 7 satellite, assemble a communications antenna and demonstrate in-space manufacture of a 32-foot carbon fiber composite beam to verify the capability of constructing large spacecraft structures in orbit... An audit from NASA's Inspector General, however, found OSAM-1 was on track to exceed the projected $2.05 billion budget and would not make its December 2026 launch date, laying the blame on the "poor performance of Maxar."

"NASA and Maxar officials acknowledged that Maxar underestimated the scope and complexity of the work, lacked full understanding of NASA technical requirements, and were deficient in necessary expertise," the report read.

The report also noted Maxar was "no longer profiting from their work on OSAM-1," after which the project appeared not "to be a high priority for Maxar in terms of the quality of its staffing."

Thanks to long-time Slashdot reader schwit1 for sharing the news.
Robotics

Bezos, Nvidia Join OpenAI in Funding Humanoid Robot Startup (msn.com)

OpenAI, Microsoft, Nvidia, and Jeff Bezos are all part of a pack of investors in a business "developing human-like robots," reports Bloomberg, "according to people with knowledge of the situation..."

At the startup — which is named "Figure" — engineers "are working on a robot that looks and moves like a human. The company has said it hopes its machine, called Figure 01, will be able to perform dangerous jobs that are unsuitable for people and that its technology will help alleviate labor shortages." Figure is raising about $675 million in a funding round that carries a pre-money valuation of roughly $2 billion, said the people, who asked not to be identified because the matter is private. Through his firm Explore Investments LLC, Bezos has committed $100 million. Microsoft is investing $95 million, while Nvidia and an Amazon.com Inc.-affiliated fund are each providing $50 million... Other technology companies are involved as well. Intel Corp.'s venture capital arm is pouring in $25 million, and LG Innotek is providing $8.5 million. Samsung's investment group, meanwhile, committed $5 million. Backers also include venture firms Parkway Venture Capital, which is investing $100 million, and Align Ventures, which is providing $90 million...

The AI robotics industry has been busy lately. Earlier this year, OpenAI-backed Norwegian robotics startup 1X Technologies AS raised $100 million. Vancouver-based Sanctuary AI is developing a humanoid robot called Phoenix. And Tesla Inc. is working on a robot called Optimus, with Elon Musk calling it one of his most important projects. Agility Robotics, which Amazon backed in 2022, has bots in testing at one of the retailer's warehouses.
Bloomberg calls the investments in Figure "part of a scramble to find new applications for artificial intelligence."
Transportation

Waymo's Self-Driving Cars Keep Hitting Things: A Cyclist, a Gate, and a Pickup Truck (ottawacitizen.com)

The Washington Post reports: Google's self-driving car company, Waymo, is hitting resistance in its quest to expand 24/7 robotaxi service to other parts of California, including a series of incidents that have fed public officials' safety concerns about the vehicles coming to their cities. Over eight days in February, for example, a Waymo vehicle smashed into a closing gate while exiting the University of Southern California's campus; the next day, another collided with a cyclist in San Francisco. Later that week, a mob of people vandalized and lit one of its cars on fire. Days later, the company announced a voluntary recall of its software for an incident involving a pickup truck in Phoenix. [Though it occurred three months ago, the Post reports that after the initial contact between the vehicles, "A second Waymo vehicle made contact with the pickup truck a few minutes later."]

This string of events — none of which resulted in serious injuries — comes after Waymo's main competitor, General Motors-owned Cruise, recalled its fleet of driverless cars last year... [Waymo] is now the lone company trying to expand 24/7 robotaxi service around California, despite sharp resistance from local officials. "Waymo has become the standard-bearer for the entire robotaxi industry for better or for worse," said David Zipper, a senior fellow at the MIT Mobility Initiative. While Waymo's incidents are "nowhere near what Cruise is accused of doing, there is a crisis of confidence in autonomous vehicle companies related to safety right now."

The California Public Utilities Commission (CPUC) delayed deciding whether Waymo could expand its service to include a portion of a major California highway and also Los Angeles and San Mateo counties, pending "further staff review," according to the regulator's website. While Waymo said the delay is a part of the commission's "standard and robust review process," the postponement comes as officials from other localities fear becoming like San Francisco — where self-driving cars have disrupted emergency scenes, held up traffic and frustrated residents who are learning to share public roads with robot cars... Zipper said it is a notable disparity that "the companies are saying the technology is supposed to be a godsend for urban life, and it's pretty striking that the leaders of these urban areas really don't want them."

Waymo offers ride-hailing services in San Francisco and Phoenix — as well as some free rides in Los Angeles, according to the article. It also cites a December report from Waymo estimating that over 7.1 million miles of testing, there were 17 fewer injuries and 20 fewer police-reported crashes "compared to if human drivers with the benchmark crash rate would have driven the same distance in the areas we operate."
Moon

Odysseus Moon Lander 'Tipped Over On Touchdown' (bbc.com)

On Thursday, the Odysseus Moon lander made history by becoming the first ever privately built and operated robot to complete a soft lunar touchdown. While the lander is "alive and well," the CEO of Houston-based Intuitive Machines, which built and flew the lander, said it tipped over during its final descent, coming to rest propped sideways on a rock. The BBC reports: Its owner, Texan firm Intuitive Machines, says Odysseus has plenty of power and is communicating with Earth. Controllers are trying to retrieve pictures from the robot. Steve Altemus, the CEO and co-founder of IM, said it wasn't totally clear what happened but the data suggested the robot caught a foot on the surface and then fell because it still had some lateral motion at the moment of landing. All the scientific instruments that were to take observations on the Moon are on a side of Odysseus that should still allow them to do some work. The only payload likely on the "wrong side" of the lander, pointing down at the lunar surface, is an art project.

"We're hopeful to get pictures and really do an assessment of the structure and assessment of all the external equipment," Mr Altemus told reporters. "So far, we have quite a bit of operational capability even though we're tipped over. And so that's really exciting for us, and we are continuing the surface operations mission as a result of it." The robot had been directed to a cratered terrain near the Moon's south pole, and the IM team believes it got very close to the targeted site - perhaps within a couple of kilometers. A US space agency satellite called the Lunar Reconnaissance Orbiter will search for Odysseus in the coming days.

It's funny. Laugh.

Former Gizmodo Writer Changed Name To 'Slackbot,' Stayed Undetected For Months (theverge.com)

Tom McKay successfully masqueraded as a "Slackbot" on Slack after leaving Gizmodo in 2022, going unnoticed by the site's management for several months. The Verge reports: If you're not glued to Slack for most of the day like I am, then you might not know that Slackbot is the friendly robot that lives in the messaging service. It helps you do things like set reminders, find out your office's Wi-Fi password, or let you know when you've been mentioned in a channel that you're not a part of. When it was time for him to leave, McKay swapped out his existing profile picture for one that resembled an angrier version of Slackbot's actual icon. He also changed his name to "Slackbot." You can't just change your name on Slack to "Slackbot," by the way, as the service will tell you that name's already been taken. It does work if you use a special character that resembles one of the letters in "Slackbot," though, such as replacing the Latin "o" with a visually identical homoglyph like the Cyrillic "о."
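
The Verge doesn't say which substitute character McKay actually used; the snippet below is an illustrative sketch, assuming the Cyrillic "о" as the stand-in, showing why a code-point comparison (like Slack's name-uniqueness check) treats the two strings as different even though they render identically:

```python
import unicodedata

latin = "Slackbot"
# Swap the Latin "o" for Cyrillic small letter "о" (U+043E), a homoglyph
# that renders almost identically in most fonts. (Which character McKay
# used is not specified in the article; this one is a common example.)
spoofed = latin.replace("o", "\u043e")

print(latin == spoofed)             # False: different code points
print(unicodedata.name("o"))        # LATIN SMALL LETTER O
print(unicodedata.name("\u043e"))   # CYRILLIC SMALL LETTER O
```

Because string equality compares code points, not glyphs, the spoofed name passes any "already taken" check while looking like "Slackbot" to human readers.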

The move camouflaged McKay's active Slack account for months, letting his account evade deletion. It also allowed him to send bot-like messages to his colleagues such as, "Slackbot fact of the day: Hi, I'm Slackbot! That's a fact. Have a Slack-ly day!" My colleague Victoria Song, who previously worked at Gizmodo, isn't all that surprised that this situation unfolded, and says, "As Tom's former coworker and a G/O Media survivor, this tracks."

Space

India To Launch Android Into Space To Test Crewed Launch Capability (theregister.com)

India's Space Research Organisation (ISRO) will send a humanoid robot astronaut into space this year, then send it up again alongside actual humans in 2025 on its long-delayed Gaganyaan orbital mission. From a report: According to the space agency, the robot-crewed Vyommitra Mission is scheduled for the third quarter of this year. The robot -- whose name translates to "Space Friend" in Sanskrit -- can monitor module parameters, issue alerts and execute life support operations. Vyommitra is also an excellent multitasker that can operate six panels while responding to queries and mimicking human functions. The humanoid speaks two languages: Hindi and English.

It's also been designated as female -- to the extent possible for a legless robot -- and sports coiffed hair, feminine facial features, and hands that look like they are wearing white gloves. It resembles a wax figurine or mannequin and The Register fancies it mostly manages to stay out of the Uncanny Valley -- the term applied to robots and digital depictions of humans that try to appear human but instead come off as creepy and/or unsettling.

Robotics

Boston Dynamics' Atlas Tries Out Inventory Work, Gets Better At Lifting (arstechnica.com)

In a new video released today, Boston Dynamics' Atlas robot is shown performing "kinetically challenging" work, like moving some medium-weight car parts and precisely picking stuff up. Ars Technica reports: In the latest video, we're on to what looks like "phase 2" of picking stuff up -- being more precise about it. The old clamp hands had a single pivot at the palm and seemed to just apply the maximum grip strength to anything the robot picked up. The most delicate thing Atlas picked up in the last video was a wooden plank, and it was absolutely destroying the wood. Atlas' new hands look a lot more gentle than The Clamps, with each sporting a set of three fingers with two joints. All the fingers share one big pivot point at the palm of the hand, and there's a knuckle joint halfway up the finger. The fingers are all very long and have 360 degrees of motion, so they can flex in both directions, which is probably effective but very creepy. Put two fingers on one side of an item and the "thumb" on the other, and Atlas can wrap its hands around objects instead of just crushing them.

Atlas is picking up a set of car struts -- an object with extremely complicated topography that weighs around 30 pounds -- so there's a lot to calculate. Atlas does a heavy two-handed lift of a strut from a vertical position on a pallet, walks the strut over to a shelf, and carefully slides it into place. This is all in Boston Dynamics' lab, but it's close to repetitive factory or shipping work. Everything here seems designed to give the robot a manipulation challenge. The complicated shape of the strut means there are a million ways you could grip it incorrectly. The strut box has tall metal poles around it, so the robot needs to not bang the strut into the obstacle. The shelf is a tight fit, so the strut has to be placed on the edge of the shelf and slid into place, all while making sure the strut's many protrusions won't crash into the shelf.

The Military

Is the US Space Force Researching Space-Based Solar Power? (cleantechnica.com)

The "technology building blocks" for space solar are already available, reports Clean Technica. "It's just a matter of scaling, systems integration, and adjustments for space-hardiness."

And several groups are looking at it — including the U.S. Space Force. To help push costs down, the California Institute of Technology has proposed a sandwich-type solar module that integrates solar harvesting along with conversion to a radio frequency into one compact package, accompanied by a built-in antenna. Last month researchers at the school wrapped up a months-long, in-space test of different types of solar cells. Another approach is illustrated by the Michigan startup Virtus Solis, an industry partner of the University of Bristol. Last June the company and the school received £3.3 million in funding from the UK Net Zero Innovation program, for developing an open-source model for testing the performance of large, centralized antennas in space. "The concept depends upon the use of gigascale antenna arrays capable of delivering over 2GW of power from space onto similar gigascale antenna arrays either at sea or on the ground," the school explained.

As for how such a thing would be launched into space, that's where the U.S. Space Force comes in. Last August, the Space Force awarded a small business contract to the U.S. startup Orbital Composites. The company is tasked with the mission of developing its patented "quantum antenna" and in-space fabrication tools for secure communications in space applications, including space-to-space as well as space-to-Earth and vice versa. The basic idea is to let 3D printing do much of the work in space. According to Orbital, in-space fabrication would save more than 100 times the cost of applying conventional fabrication methods to large-scale orbiting antennas. "By harnessing the potential of In-Space Servicing, Assembly, and Manufacturing (ISAM), the company eyes the prospect of creating significantly larger space antennas," Orbital Composites explains. "By fabricating antennas in space, larger and more complex designs are possible that eliminate the constraints of launch and rocket fairings...."

If you're guessing that a hookup between Virtus and Orbital is in the works, that's a good guess. On February 1, at the SpaceCOM conference in Orlando, Florida, Virtus Solis let slip that it is working with Orbital Composites on a space solar pilot project. If all goes according to plan, the project will be up and running in 2027, deploying Virtus's robot-enabled fabrication system with Orbital's 3D printing. As of this writing the two companies have not posted details, but Space News picked up the thread. "The 2027 mission is designed to showcase critical power-generation technologies including in-space assembly of solar panels and transmission of more than one kilowatt to Earth," Space News explained. "The news release calls the 2027 mission 'a precursor to large-scale commercial megawatt-class solar installations in space by 2030.'"

To be clear, Orbital's press release about its new Space Force quantum antenna contract does not mention anything in particular about space solar. However, the pieces of the puzzle fit. Along with the Virtus and Grumman connections, in October of 2022 Orbital won a small business contract through SpaceWERX, the Space Force's innovative technologies funding arm, to explore the capabilities of ISAM systems.

"SpaceWERX comes under the umbrella of the U.S. Air Force's AFWERX innovation branch, which has developed a program called SSPIDR, short for Space Solar Power Incremental Demonstrations and Research Project," the article points out. (Virtus, meanwhile, believes most space-based solar power systems could deliver megawatt hours of electricity at prices comparable to today's market.)
Social Networks

Is AI Hastening the Demise of Quora? (slate.com)

Quora "used to be a thriving community that worked to answer our most specific questions," writes Slate. "But users are fleeing," while the site hosts "a never-ending avalanche of meaningless, repetitive sludge, filled with bizarre, nonsensical, straight-up hateful, and A.I.-generated entries..."

The site has faced moderation issues, spam, trolls, and bots re-posting questions from Reddit (plus competition for ad revenue from sites like Facebook and Google which forced cuts in Quora's support and moderation teams). But automating its moderation "did not improve the situation..."

"Now Quora is even offering A.I.-generated images to accompany users' answers, even though the spawned illustrations make little sense." To top it all off, after Quora began using A.I. to "generate machine answers on a number of selected question pages," the site made clear the possibility that human-crafted answers could be used for training A.I. This meant that the detailed writing Quorans provided mostly for free would be ingested into a custom large language model. Updated terms of service and privacy policies went into effect at the site last summer. As angel investor and Quoran David S. Rose paraphrased them: "You grant all other Quora users the unlimited right to reuse and adapt your answers," "You grant Quora the right to use your answers to train an LLM unless you specifically opt out," and "You completely give up your right to be any part of any class action suit brought against Quora," among others. (Quora's Help Center claims that "as of now, we do not use answers, posts, or comments added to Quora to train LLMs used for generating content on Quora. However, this may change in the future." The site offers an opt-out setting, although it admits that "opting out does not cover everything.")

This raised the issue of consent and ownership, as Quorans had to decide whether to consent to the new terms or take their work and flee. High-profile users, like fantasy author Mercedes R. Lackey, are removing their work from their profiles and writing notes explaining why. "The A.I. thing, the terms of service issue, has been a massive drain of top talent on Quora, just based on how many people have said, Downloaded my stuff and I'm out of there," Lackey told me. It's not that all Quorans want to leave, but it's hard for them to choose to remain on a website where they now have to constantly fight off errors, spam, trolls, and even account impersonators....

The tragedy of Quora is not just that it crushed the flourishing communities it once built up. It's that it took all of that goodwill, community, expertise, and curiosity and assumed that it could automate a system that equated it, apparently without much thought to how pale the comparison is. [Nelson McKeeby, an author who joined Quora in 2013] has a grim prediction for the future: "Eventually Quora will be robot questions, robot answers, and nothing else." I wonder how the site will answer the question of why Quora died, if anyone even bothers to ask.

The article notes that Andreessen Horowitz gave Quora "a much-needed $75 million investment — but only for the sake of developing its on-site generative-text chatbot, Poe."
Biotech

Neuralink Implants Brain Chip In First Human

According to Neuralink founder Elon Musk, the first human received an implant from the brain-chip startup on Sunday and is recovering well. "Initial results show promising neuron spike detection," Musk added. Reuters reports: The U.S. Food and Drug Administration had given the company clearance last year to conduct its first trial to test its implant on humans. The startup's PRIME Study is a trial for its wireless brain-computer interface to evaluate the safety of the implant and surgical robot. The study will assess the functionality of the interface which enables people with quadriplegia, or paralysis of all four limbs, to control devices with their thoughts, according to the company's website.
Robotics

BMW Will Employ Figure's Humanoid Robot At South Carolina Plant (techcrunch.com)

Figure's first humanoid robot will be coming to a BMW manufacturing facility in South Carolina. TechCrunch reports: BMW has not disclosed how many Figure 01 models it will deploy initially. Nor do we know precisely what jobs the robot will be tasked with when it starts work. Figure did, however, confirm with TechCrunch that it is beginning with an initial five tasks, which will be rolled out one at a time. While folks in the space have been cavalierly tossing out the term "general purpose" to describe these sorts of systems, it's important to temper expectations and point out that they will all arrive as single- or multi-purpose systems, growing their skillset over time. Figure CEO Brett Adcock likens the approach to an app store -- something that Boston Dynamics currently offers with its Spot robot via SDK.

Likely initial applications include standard manufacturing tasks such as box moving, pick and place and pallet unloading and loading -- basically the sort of repetitive tasks for which factory owners claim to have difficulty retaining human workers. Adcock says that Figure expects to ship its first commercial robot within a year, an ambitious timeline even for a company that prides itself on quick turnaround times. The initial batch of applications will be largely determined by Figure's early partners like BMW. The system will, for instance, likely be working with sheet metal to start. Adcock adds that the company has signed up additional clients, but declined to disclose their names. It seems likely Figure will instead opt to announce each individually to keep the news cycle spinning in the intervening 12 months.

Robotics

'Student Should Have a Healthy-Looking BMI': How Universities Bend Over Backwards To Accommodate Food Delivery Robots (404media.co)

samleecole writes: A food delivery robot company instructed a public university to promote its service on campus with photographs and video featuring only students who "have a healthy-looking BMI," [body mass index] according to emails and documents I obtained via a public records request. The emails also discuss how ordering delivery via robot should become a "habit" for a "captured" customer base of students on campus.

These highly specific instructions show how universities around the country are going to extreme lengths to create a welcoming environment on campus for food delivery robots that sometimes have trouble crossing the street and need traffic infrastructure redesigned for them in order to navigate campus, a relatively absurd cache of public records obtained by 404 Media reveals.

Robotics

The Global Project To Make a General Robotic Brain (ieee.org)

Generative AI "doesn't easily carry over into robotics," write two researchers in IEEE Spectrum, "because the Internet is not full of robotic-interaction data in the same way that it's full of text and images."

That's why they're working on a single deep neural network capable of piloting many different types of robots... Robots need robot data to learn from, and this data is typically created slowly and tediously by researchers in laboratory environments for very specific tasks... The most impressive results typically only work in a single laboratory, on a single robot, and often involve only a handful of behaviors... [W]hat if we were to pool together the experiences of many robots, so a new robot could learn from all of them at once? We decided to give it a try. In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality...

The question is whether a deep neural network trained on data from a sufficiently large number of different robots can learn to "drive" all of them — even robots with very different appearances, physical properties, and capabilities. If so, this approach could potentially unlock the power of large datasets for robotic learning. The scale of this project is very large because it has to be. The RT-X dataset currently contains nearly a million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market...

Surprisingly, we found that our multirobot data could be used with relatively simple machine-learning methods, provided that we follow the recipe of using large neural-network models with large datasets. Leveraging the same kinds of models used in current LLMs like ChatGPT, we were able to train robot-control algorithms that do not require any special features for cross-embodiment. Much like a person can drive a car or ride a bicycle using the same brain, a model trained on the RT-X dataset can simply recognize what kind of robot it's controlling from what it sees in the robot's own camera observations. If the robot's camera sees a UR10 industrial arm, the model sends commands appropriate to a UR10. If the model instead sees a low-cost WidowX hobbyist arm, the model moves it accordingly.
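
The researchers describe this mechanism only in prose. As a heavily simplified sketch, the cross-embodiment idea looks something like the toy below: a single policy infers which robot it is driving from the camera observation alone and emits an action vector of the matching dimensionality, with no explicit robot-ID input. The robot names, "brightness prototypes," and action dimensions here are illustrative stand-ins, not the actual RT-X model, which uses large vision-language networks rather than a hand-rolled classifier:

```python
# Toy illustration of a cross-embodiment policy: one model, many robots.
# Prototypes and dimensions below are hypothetical, for demonstration only.
EMBODIMENTS = {
    # name: (prototype mean pixel intensity of its camera view, action dim)
    "ur10_arm": (0.2, 6),    # 6-DoF industrial arm
    "widowx_arm": (0.8, 5),  # smaller hobbyist arm
}

class CrossEmbodimentPolicy:
    """Single policy that infers the robot type from the observation."""

    def act(self, image):
        # image: 2D list of pixel intensities in [0, 1].
        # "Recognize" the embodiment from the camera image alone --
        # here crudely, by matching mean brightness to a prototype.
        pixels = [p for row in image for p in row]
        brightness = sum(pixels) / len(pixels)
        name, (_, action_dim) = min(
            EMBODIMENTS.items(), key=lambda kv: abs(kv[1][0] - brightness)
        )
        # Emit a (dummy) action vector sized for that embodiment.
        return [0.0] * action_dim

policy = CrossEmbodimentPolicy()
ur10_view = [[0.25] * 64 for _ in range(64)]    # "darker" scene
widowx_view = [[0.75] * 64 for _ in range(64)]  # "brighter" scene
print(len(policy.act(ur10_view)))    # 6
print(len(policy.act(widowx_view)))  # 5
```

The point of the caricature is the interface: the same `act` method serves every robot, and the output shape follows from what the model sees, which is what lets one network be trained on pooled data from many embodiments.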

"To test the capabilities of our model, five of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot... Remarkably, the single unified model provided improved performance over each laboratory's own best method, succeeding at the tasks about 50 percent more often on average." And they then used a pre-existing vision-language model to successfully add the ability to output robot actions in response to image-based prompts.

"The RT-X project shows what is possible when the robot-learning community acts together... and we hope that RT-X will grow into a collaborative effort to develop data standards, reusable models, and new techniques and algorithms."

Thanks to long-time Slashdot reader Futurepower(R) for sharing the article.
AI

OpenAI Quietly Deletes Ban On Using ChatGPT For 'Military and Warfare'

An anonymous reader quotes a report from The Intercept: OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used. Up until January 10, OpenAI's "usage policies" page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer" and "more readable," and which includes many other substantial language and formatting changes. "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples." Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to '[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,' is disallowed."
"OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper (PDF) she co-authored with OpenAI researchers that specifically flagged the risk of military use. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."

"I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system -- including command and control infrastructures -- of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons."
The Internet

How AI-Generated Content Could Fuel a Migration From Social Media to Independent 'Authored' Content (niemanlab.org) 68

The chief content officer for New York's public radio station WNYC predicts an "AI-fueled shift to niche community and authored excellence."

And ironically, it will be fueled by "Greedy publishers and malicious propagandists... flooding the web with fake or just mediocre AI-generated 'content'" which will "spotlight and boost the value of authored creativity." And it may help give birth to a new generation of independent media. Robots will make the internet more human.

First, it will speed up our migration off of big social platforms to niche communities where we can be better versions of ourselves. We're already exhausted by feeds that amplify our anxiety and algorithms that incentivize cruelty. AI will take the arms race of digital publishing shaped by algorithmic curation to its natural conclusion: big feed-based social platforms will become unending streams of noise. When we've left those sites for good, we'll miss the (mostly inaccurate) sense that we were seeing or participating in a grand, democratic town hall. But as we find places to convene where good faith participation is expected, abuse and harassment aren't, and quality is valued over quantity, we'll be happy to have traded a perception of scale influence for the experience of real connection.

Second, this flood of authorless "content" will help truly authored creativity shine in contrast... "Could a robot have done this?" will be a question we ask to push ourselves to be funnier, weirder, more vulnerable, and more creative. And for the funniest, the weirdest, the most vulnerable, and most creative: the gap between what they do and everything else will be huge. Finally, these AI-accelerated shifts will combine with the current moment in media economics to fuel a new era of independent media.

For a few years he's seen the rise of independent community-funded journalists, and "the list of thriving small enterprises is getting longer." He sees more growth in community-funding platforms (with subscription/membership features like on Substack and Patreon) which "continue to tilt the risk/reward math for audience-facing talent....

"And the amount of audience-facing, world-class talent that left institutional media in 2023 (by choice or otherwise) is unlike anything I've seen in more than 15 years in journalism... [I]f we're lucky, we'll see the creation of a new generation of independent media businesses whose work is as funny, weird, vulnerable and creative as its creators want it to be. And those businesses will be built on truly stable ground: a direct financial relationship with people who care.

"Thank the robots."
Cellphones

Will Switching to a Flip Phone Fight Smartphone Addiction? (omanobserver.om) 152

"This December, I made a radical change," writes a New York Times tech reporter — ditching their $1,300 iPhone 15 for a $108 flip phone.

"It makes phone calls and texts and that was about it. It didn't even have Snake on it..." The decision to "upgrade" to the Journey was apparently so preposterous that my carrier wouldn't allow me to do it over the phone.... Texting anything longer than two sentences involved an excruciating amount of button pushing, so I started to call people instead. This was a problem because most people don't want their phone to function as a phone... [Most voicemails] were never acknowledged. It was nearly as reliable a method of communication as putting a message in a bottle and throwing it out to sea...

My black clamshell of a phone had the effect of a clerical collar, inducing people to confess their screen time sins to me. They hated that they looked at their phone so much around their children, that they watched TikTok at night instead of sleeping, that they looked at it while they were driving, that they started and ended their days with it. In a 2021 Pew Research survey, 31 percent of adults reported being "almost constantly online" — a feat possible only because of the existence of the smartphone.

This was the most striking aspect of switching to the flip. It meant the digital universe and its infinite pleasures, efficiencies and annoyances were confined to my computer. That was the source of people's skepticism: They thought I wouldn't be able to function without Uber, not to mention the world's knowledge, at my beck and call. (I grew up in the '90s. It wasn't that bad...)

"Do you feel less well-informed?" one colleague asked. Not really. Information made its way to me, just slightly less instantly. My computer still offered news sites, newsletters and social media rubbernecking.

There were disadvantages — and not just living without Google Maps. ("I've got an electric vehicle, and upon pulling into a public charger, low on miles, realized that I could not log into the charger without a smartphone app... I received a robot vacuum for Christmas ... which could only be set up with an iPhone app.") Two-factor authentication was impossible.

But "Despite these challenges, I survived, even thrived during the month. It was a relief to unplug my brain from the internet on a regular basis and for hours at a time. I read four books... I felt that I had more time, and more control over what to do with it... my sleep improved dramatically."

"I do plan to return to my iPhone in 2024, but in grayscale and with more mindfulness about how I use it."
Google

Google's DeepMind Unveils Safer Robot Advances With 'Robot Constitution' 12

An anonymous reader shares a report: The DeepMind robotics team has revealed three new advances that it says will help robots make faster, better, and safer decisions in the wild. One includes a system for gathering training data with a "Robot Constitution" to make sure your robot office assistant can fetch you more printer paper -- but without mowing down a human co-worker who happens to be in the way.

Google's data gathering system, AutoRT, can use a visual language model (VLM) and large language model (LLM) working hand in hand to understand its environment, adapt to unfamiliar settings, and decide on appropriate tasks. The Robot Constitution, which is inspired by Isaac Asimov's "Three Laws of Robotics," is described as a set of "safety-focused prompts" instructing the LLM to avoid choosing tasks that involve humans, animals, sharp objects, and even electrical appliances.

For additional safety, DeepMind programmed the robots to stop automatically if the force on their joints exceeds a certain threshold, and included a physical kill switch human operators can use to deactivate them. Over a period of seven months, Google deployed a fleet of 53 AutoRT robots into four different office buildings and conducted over 77,000 trials. Some robots were controlled remotely by human operators, while others operated either based on a script or completely autonomously using Google's Robotic Transformer (RT-2) AI learning model.
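The two safety layers described above can be sketched in a few lines: an LLM vetoes candidate tasks against constitution-style prompts, while a separate, non-LLM check enforces the joint-force limit. This is a minimal illustration only, not DeepMind's actual code; the function names, prompt wording, and force threshold are all invented.

```python
# Hypothetical sketch of constitution-style task screening plus a hard
# force-limit stop, loosely modeled on the AutoRT description above.
# All names, prompt text, and thresholds are invented for illustration.

CONSTITUTION = [
    "Never choose a task that involves interacting with humans or animals.",
    "Never choose a task that involves sharp objects or electrical appliances.",
]

def screen_task(task: str, llm) -> bool:
    """Ask an LLM whether a candidate task is permitted by the constitution."""
    prompt = "\n".join(CONSTITUTION) + f"\nTask: {task}\nAllowed? Answer yes or no."
    return llm(prompt).strip().lower().startswith("yes")

FORCE_LIMIT_NM = 30.0  # invented threshold, in newton-meters

def safe_step(joint_torques, execute_step, halt):
    """Non-LLM safety layer: halt immediately if any joint exceeds the limit."""
    if any(abs(t) > FORCE_LIMIT_NM for t in joint_torques):
        halt()
        return False
    execute_step()
    return True
```

The point of the design is that the force limit and kill switch do not depend on the LLM at all: even if the language model approves a bad task, the lower layer can still stop the robot.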
AI

Will AI Just Waste Everyone's Time? (newrepublic.com) 167

"The events of 2023 showed that A.I. doesn't need to be that good in order to do damage," argues novelist Lincoln Michel in the New Republic: This March, news broke that the latest artificial intelligence models could pass the LSAT, SAT, and AP exams. It sparked another round of A.I. panic. The machines, it seemed, were already at peak human ability. Around that time, I conducted my own, more modest test. I asked a couple of A.I. programs to "write a six-word story about baby shoes," riffing on the famous (if apocryphal) Hemingway story. They failed but not in the way I expected. Bard gave me five words, and ChatGPT produced eight. I tried again, specifying "exactly six words," and received eight and then four words. What did it mean that A.I. could best top-tier lawyers yet fail preschool math?

A year since the launch of ChatGPT, I wonder if the answer isn't just what it seems: A.I. is simultaneously impressive and pretty dumb. Maybe not as dumb as the NFT apes or Zuckerberg's Metaverse cubicle simulator, which Silicon Valley also promised would revolutionize all aspects of life. But at least half-dumb. One day A.I. passes the bar exam, and the next, lawyers are being fined for citing A.I.-invented laws. One second it's "the end of writing," the next it's recommending recipes for "mosquito-repellant roast potatoes." At best, A.I. is a mixed bag. (Since "artificial intelligence" is an intentionally vague term, I should specify I'm discussing "generative A.I." programs like ChatGPT and MidJourney that create text, images, and audio. Credit where credit is due: Branding unthinking, error-prone algorithms as "artificial intelligence" was a brilliant marketing coup)....

The legal questions will be settled in court, and the discourse tends to get bogged down in semantic debates about "plagiarism" and "originality," but the essential truth of A.I. is clear: The largest corporations on earth ripped off generations of artists without permission or compensation to produce programs meant to rip us off even more. I believe A.I. defenders know this is unethical, which is why they distract us with fan fiction about the future. If A.I. is the key to a gleaming utopia or else robot-induced extinction, what does it matter if a few poets and painters got bilked along the way? It's possible a souped-up Microsoft Clippy will morph into SkyNet in a couple of years. It's also possible the technology plateaus, like how self-driving cars are perpetually a few years away from taking over our roads. Even if the technology advances, A.I. costs lots of money, and once investors stop subsidizing its use, A.I. — or at least quality A.I. — may prove cost-prohibitive for most tasks....

A year into ChatGPT, I'm less concerned A.I. will replace human artists anytime soon. Some enjoy using A.I. themselves, but I'm not sure many want to consume (much less pay for) A.I. "art" generated by others. The much-hyped A.I.-authored books have been flops, and few readers are flocking to websites that pivoted to A.I. Last month, Sports Illustrated was so embarrassed by a report they published A.I. articles that they apologized and promised to investigate. Say what you want about NFTs, but at least people were willing to pay for them.

"A.I. can write book reviews no one reads of A.I. novels no one buys, generate playlists no one listens to of A.I. songs no one hears, and create A.I. images no one looks at for websites no one visits.

"This seems to be the future A.I. promises. Endless content generated by robots, enjoyed by no one, clogging up everything, and wasting everyone's time."
Robotics

Massachusetts Lawmakers Mull 'Killer Robot' Bill (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch, written by Brian Heater: Back in mid-September, a pair of Massachusetts lawmakers introduced a bill "to ensure the responsible use of advanced robotic technologies." What that means in the simplest and most direct terms is legislation that would bar the manufacture, sale and use of weaponized robots. It's an interesting proposal for a number of reasons. The first is a general lack of U.S. state and national laws governing such growing concerns. It's one of those things that has felt like science fiction to such a degree that many lawmakers had no interest in pursuing it in a pragmatic manner. [...] Earlier this week, I spoke about the bill with Massachusetts state representative Lindsay Sabadosa, who filed it alongside Massachusetts state senator Michael Moore.

What is the status of the bill?
We're in an interesting position, because there are a lot of moving parts with the bill. The bill has had a hearing already, which is wonderful news. We're working with the committee on the language of the bill. They have had some questions about why different pieces were written as they were written. We're doing that technical review of the language now -- and also checking in with all stakeholders to make sure that everyone who needs to be at the table is at the table.

When you say "stakeholders" ...
Stakeholders are companies that produce robotics. The robot Spot, which Boston Dynamics produces, and other robots as well, are used by entities like Boston Police Department or the Massachusetts State Police. They might be used by the fire department. So, we're talking to those people to run through the bill, talk about what the changes are. For the most part, what we're hearing is that the bill doesn't really change a lot for those stakeholders. Really the bill is to prevent regular people from trying to weaponize robots, not to prevent the very good uses that the robots are currently employed for.

Does the bill apply to law enforcement as well?
We're not trying to stop law enforcement from using the robots. And what we've heard from law enforcement repeatedly is that they're often used to deescalate situations. They talk a lot about barricade situations or hostage situations. Not to be gruesome, but if people are still alive, if there are injuries, they say it often helps to deescalate, rather than sending in officers, which we know can often escalate the situation. So, no, we wouldn't change any of those uses. The legislation does ask that law enforcement get warrants for the use of robots if they're using them in place of when they would send in a police officer. That's pretty common already. Law enforcement has to do that if it's not an emergency situation. We're really just saying, "Please follow current protocol. And if you're going to use a robot instead of a human, let's make sure that protocol is still the standard."

I'm sure you've been following the stories out of places like San Francisco and Oakland, where there's an attempt to weaponize robots. Is that included in this?
We haven't had law enforcement weaponize robots, and no one has said, "We'd like to attach a gun to a robot" from law enforcement in Massachusetts. I think because of some of those past conversations there's been a desire to not go down that route. And I think that local communities would probably have a lot to say if the police started to do that. So, while the legislation doesn't outright ban that, we are not condoning it either.
Representative Sabadosa said Boston Dynamics "sought us out" and is "leading the charge on this."

"I'm hopeful that we will be the first to get the legislation across the finish line, too," added Rep. Sabadosa. "We've gotten thank-you notes from companies, but we haven't gotten any pushback from them. And our goal is not to stifle innovation. I think there's lots of wonderful things that robots will be used for. [...]"

You can read the full interview here.
