Facebook

MediaTek Partners With Meta To Develop Chips For AR Smart Glasses (9to5google.com) 7

During MediaTek's 2023 summit, MediaTek executive Vince Hu announced a new partnership with Meta to develop smart glasses capable of augmented reality and mixed reality experiences. 9to5Google reports: As the current generation exists, the Ray-Ban Meta glasses feature a camera and microphone for sending and receiving messages. However, the next generation of Meta smart glasses is likely to have a built-in "viewfinder" display to merge the virtual and physical worlds, allowing users to scan QR codes, read messages, and more. Beyond that, the company wants to bring AR glasses into the fold, which presents a much broader set of challenges. To accomplish this, a few things need to change. AR glasses need to be built for everyday use, with an industrial design that looks good but can pack in enough tech to ensure a good experience. As it stands, mixed-reality headsets are bulky and have a large profile. Ideally, Meta's fully AR glasses would be thinner and sleeker.

The new partnership between the two companies means that MediaTek will help co-develop custom silicon with Meta, built specifically for AR use cases and the glasses themselves. MediaTek brings expertise in developing low-power, high-performance SoCs that can fit within tight physical constraints, like the frame of a pair of AR glasses. Few details were revealed about the upcoming AR glasses, other than a direct statement that "MediaTek-powered AR glasses from Meta" would be a thing sometime in the future. Previous leaks position the next generation of smart glasses with a viewfinder as a 2025 release, while a more robust set of AR glasses was referred to as a 2027 product -- if done properly, it could be an incredible product.

Medicine

Doctors Complete First Successful Face and Whole-Eye Transplant (scientificamerican.com) 27

An anonymous reader quotes a report from Scientific American: This week doctors announced they had completed the first successful transplant of a partial face and an entire eye. In May at NYU Langone Health in New York City, the surgery was performed on a 46-year-old man who had suffered severe electrical burns to his face, left eye and left arm. He does not yet have vision in the transplanted eye and may never regain it there, but early evidence suggests the eye itself is healthy and may be capable of transmitting neurological signals to the brain. The feat opens up the possibility of restoring the appearance -- and maybe even sight -- of people who have been disfigured or blinded by injuries. Researchers caution there are many technical hurdles before such a procedure can effectively treat vision loss, however.

"I think it's an important proof of principle," says Jeffrey Goldberg, a professor and chair of ophthalmology at the Byers Eye Institute at Stanford University, who was not involved in the surgery but has been part of a team working toward whole-eye transplants in humans. "I think it points to the opportunity and importance that we really stand on the verge of being able to [achieve] eye transplants and vision restoration for blind patients more broadly." But he cautions that the main obstacle is achieving regeneration of the optic nerve, which carries visual signals from the retina to the brain; this step has not yet been successfully demonstrated in humans.

Scientists have been working toward whole-eye transplantation for many years. "This has been, I would say, science fiction for a long time," says Jose-Alain Sahel, a professor and chair of the department of ophthalmology at the University of Pittsburgh School of Medicine, who has been working toward such transplants with Goldberg and others. Progress in surgical techniques and nerve regeneration has made this goal seem more attainable. [...] "The fact that this surgery was successful is wonderful news," Sahel says. He cautions that surgery is only a small part of the issues that need to be addressed in order to restore eye function, however. These include making sure the immune system doesn't reject the donor eye, which is a challenge with any type of transplant. Then the corneal nerve -- which carries sensory signals from the transparent part of the eye -- must be reconnected. Yet the most complex part is regenerating the optic nerve. In order to do so, surgeons have to coax the nerve fibers to grow to the right place, which Sahel says could take months or even years. And complete optic nerve regeneration has not yet been successfully achieved in humans or other mammals.

IT

How SIM Swappers Straight-Up Rob T-Mobile Stores (404media.co) 70

An anonymous reader shares a report: A young man sits in a car, pointing a cellphone camera out of the window, seemingly trying to remain undetected. As he breathes heavily in anticipation, he peers at a T-Mobile store across the road from where he is parked.

Suddenly, there is some commotion inside. An accomplice grabs something off a table where a T-Mobile employee is sitting. The accomplice, dressed in a mask and black baseball cap, then bursts out of the store and clumsily sprints towards the car. The man in the vehicle starts laughing, then giggling uncontrollably like a child. The pair got what they came for: a T-Mobile employee's tablet, the sort workers use every day when dealing with customer support issues or setting up a new phone.

To the people in the car, what this tablet is capable of is much more valuable than the iPad hardware itself. The tablet lets them essentially become T-Mobile: it can grant them the ability to take over target phone numbers and redirect any text messages or calls for the victim to the hacker's own device, as part of a hack called a SIM swap. From there, they can easily break into email, cryptocurrency, and social media accounts.

Ubuntu

Canonical Reveals More Details About Ubuntu Core Desktop 22

Next April a new LTS Ubuntu arrives, and alongside it will be a whole new immutable desktop edition. At this year's Ubuntu conference in Riga, Latvia, Canonical revealed more details about its forthcoming immutable desktop distro. From a report: Core Desktop is not the next version of Ubuntu itself. Ordinary desktop and server Ubuntu aren't going anywhere, and the next release, numbered 24.04 and codenamed Noble Numbat as we mentioned last month, will be the default and come with all the usual editions and flavors. Nor is this a whole new product: it is a graphical desktop edition of the existing Ubuntu Core distro, as we examined on its release in June last year, a couple of months after 22.04. Ubuntu Core is Canonical's Internet of Things (IoT) distro, intended to be embedded on edge devices, such as digital signs and smart displays. It is an immutable distro, meaning that the root filesystem is read-only and there's no conventional package manager.

Rather than being a basis for customization, like a conventional Linux, the idea is that immutable distros are rolled out and updated more like a phone or tablet OS: there's a single fixed and heavily tested OS image, and it's deployed onto the devices out in the field without modification. Updates are monolithic: a whole fresh image is pushed out, and all the OS components are upgraded in a single operation to the same combination. That isn't unique. Most of the major Linux vendors have immutable offerings, and The Reg has looked at several over the years, including MicroOS, the basis of SUSE's next-gen enterprise OS ALP. As well as the well-known ChromeOS, another immutable desktop is the educational distro Endless OS.

[...] Canonical believes it has some unique new angles. Core Desktop is constructed as additional layers on top of the existing Ubuntu Core distro, and like Core, it's entirely built with a single packaging system: Ubuntu's Snap. While Snap remains controversial, it does have some compelling advantages over both SUSE and Red Hat's tooling. SUSE's transactional_update tool, while simpler than its rivals in implementation, requires a snapshot-capable filesystem, meaning that its immutable distros must use Btrfs. While it has many admirers, the number and the contents of the orange and red cells in the feature tables here in its own documentation reflect the FOSS desk's serious reservations about Btrfs.

Privacy

Data Broker's 'Staggering' Sale of Sensitive Info Exposed in Unsealed FTC Filing (arstechnica.com) 30

One of the world's largest mobile data brokers, Kochava, has lost its battle to stop the Federal Trade Commission from revealing what the FTC has alleged is a disturbing, widespread pattern of unfair use and sale of sensitive data without consent from hundreds of millions of people. ArsTechnica: US District Judge B. Lynn Winmill recently unsealed a court filing, an amended complaint that perhaps contains the most evidence yet gathered by the FTC in its long-standing mission to crack down on data brokers allegedly "substantially" harming consumers by invading their privacy. The FTC has accused Kochava of violating the FTC Act by amassing and disclosing "a staggering amount of sensitive and identifying information about consumers," alleging that Kochava's database includes products seemingly capable of identifying nearly every person in the United States.

According to the FTC, Kochava's customers, ostensibly advertisers, can access this data to trace individuals' movements -- including to sensitive locations like hospitals, temporary shelters, and places of worship, with a promised accuracy within "a few meters" -- over a day, a week, a month, or a year. Kochava's products can also provide a "360-degree perspective" on individuals, unveiling personally identifying information like their names, home addresses, phone numbers, as well as sensitive information like their race, gender, ethnicity, annual income, political affiliations, or religion, the FTC alleged.

Beyond that, the FTC alleged that Kochava also makes it easy for advertisers to target customers by categories that are "often based on specific sensitive and personal characteristics or attributes identified from its massive collection of data about individual consumers." These "audience segments" allegedly allow advertisers to conduct invasive targeting by grouping people not just by common data points like age or gender, but by "places they have visited," political associations, or even their current circumstances, like whether they're expectant parents. Or advertisers can allegedly combine data points to target highly specific audience segments like "all the pregnant Muslim women in Kochava's database," the FTC alleged, or "parents with different ages of children."

Science

Nature Retracts Controversial Superconductivity Paper By Embattled Physicist 36

Nature has retracted a controversial paper claiming the discovery of a superconductor -- a material that carries electrical currents with zero resistance -- capable of operating at room temperature and relatively low pressure. From a report: The text of the retraction notice states that it was requested by eight co-authors. "They have expressed the view as researchers who contributed to the work that the published paper does not accurately reflect the provenance of the investigated materials, the experimental measurements undertaken and the data-processing protocols applied," it says, adding that these co-authors "have concluded that these issues undermine the integrity of the published paper."

It is the third high-profile retraction of a paper by the two lead authors, physicists Ranga Dias at the University of Rochester in New York and Ashkan Salamat at the University of Nevada, Las Vegas (UNLV). Nature withdrew a separate paper last year and Physical Review Letters retracted one this August. It spells more trouble in particular for Dias, whom some researchers allege plagiarized portions of his PhD thesis. Dias has objected to the first two retractions and has not responded regarding the latest; Salamat approved the two retractions this year. "It is at this point hardly surprising that the team of Dias and Salamat has a third high-profile paper being retracted," says Paul Canfield, a physicist at Iowa State University in Ames and at Ames National Laboratory. Many physicists had seen the Nature retraction as inevitable after the other two -- and especially since The Wall Street Journal and Science reported in September that 8 of the 11 authors of the paper -- including Salamat -- had requested it in a letter to the journal.

Power

US Approves Massive Windfarm Project Off the Coast of Virginia (apnews.com) 72

Tuesday Orsted cancelled two wind farms near New Jersey that would've generated about 2.2 gigawatts of power. But the same day, America's Interior Department approved plans to install up to 176 wind turbines off the coast of Virginia with an estimated capacity of about 2.6 gigawatts of clean power.

Located approximately 27 miles from the shores of Virginia Beach, the project will be America's largest offshore wind project, capable of powering over 900,000 homes. In just its first 10 years it should save customers $3 billion in fuel costs, Dominion Energy told the Associated Press: Dominion expects construction to be completed by late 2026... Construction of the project in Virginia is expected to support about 900 jobs each year and then an estimated 1,100 annual jobs during operations, the Interior Department said. The initiative has gained wide support from Virginia policymakers and political leaders, including Republican Gov. Glenn Youngkin, who last week attended a reception marking the arrival of eight monopile foundations for the windfarm.

Two pilot turbines have already been in place since 2020, the article points out. And when finished, the new wind farm "would bolster and eventually replace the mostly natural gas-powered electricity that is contributing to costly climate change," reports MarketWatch: President Biden, early in his first term, announced a goal of installing 30 gigawatts of offshore wind power by 2030, enough to power 10 million homes and prevent the spewing of 78 million metric tons of carbon-dioxide emissions... U.S. offshore wind has been helped along by nearly $8 billion in investments since Biden signed his signature, climate-heavy Inflation Reduction Act a little over a year ago... Biden's team has projected that the U.S. could install 110 gigawatts of offshore wind power by 2050, a major jump considering there is less than 1 gigawatt installed today. Land-based wind farms across the U.S. already produce more than 140 gigawatts of power, contributing about 10% of the nation's energy portfolio...

When measured by announced plans and pledges, the country has been barreling toward its offshore goal. To date, the Department of the Interior has approved four New England-based projects that, together with the new Coastal Virginia Offshore Wind project, promise to deliver 5 gigawatts of electricity, enough to power 1.75 million homes with average power use. A total of more than 51 gigawatts of wind power capacity is in the works off U.S. shores and the most ambitious 10 coastal states have combined offshore wind goals of generating more than 81 gigawatts.

Apple

Apple M3 Pro Chip Has 25% Less Memory Bandwidth Than M1/M2 Pro (macrumors.com) 78

Apple's latest M3 Pro chip in the new 14-inch and 16-inch MacBook Pro has 25% less memory bandwidth than the M1 Pro and M2 Pro chips used in equivalent models from the two previous generations. From a report: Based on the latest 3-nanometer technology and featuring all-new GPU architecture, the M3 series of chips is said to represent the fastest and most power-efficient evolution of Apple silicon thus far. For example, the 14-inch and 16-inch MacBook Pro models with the M3 Pro chip are up to 40% faster than the 16-inch model with M1 Pro, according to Apple.

However, looking at Apple's own hardware specifications, the M3 Pro system on a chip (SoC) features 150GB/s memory bandwidth, compared to 200GB/s on the earlier M1 Pro and M2 Pro. As for the M3 Max, Apple says it is capable of "up to 400GB/s." This wording is because the less pricey scaled-down M3 Max with 14-core CPU and 30-core GPU has only 300GB/s of memory bandwidth, whereas the equivalent scaled-down M2 Max with 12-core CPU and 30-core GPU featured 400GB/s bandwidth, just like its more powerful 12-core CPU, 38-core GPU version.

Notably, Apple has also changed the core ratios of the higher-tier M3 Pro chip compared to its direct predecessor. The M3 Pro with 12-core CPU has 6 performance cores (versus 8 performance cores on the 12-core M2 Pro) and 6 efficiency cores (versus 4 efficiency cores on the 12-core M2 Pro), while the GPU has 18 cores (versus 19 on the equivalent M2 Pro chip).

Businesses

Apple's Dark Cloud Might Linger (wsj.com) 64

Winter has come early for Apple, and it might last a while. From a report: The world's largest company by market value has become worth considerably less over the past three months. Apple's share price has slid 11% since the company reported its fiscal third-quarter results on Aug. 3, erasing nearly $400 billion in market value. It is hardly a typical swing, given that the company has long used the fall season to launch its biggest products of the year, including new iPhones. This is the first year since 2015 that Apple shares have lost ground between the company's key Worldwide Developers Conference in June and its fiscal fourth-quarter earnings report, which typically takes place in late October.

That report is expected Thursday afternoon, and it will be the first to reflect sales of the iPhone 15 family that was launched in late September. Investors are worried that Apple's largest business is now facing new and potentially long-term threats. The growing geopolitical rift between the U.S. and China has finally caught Apple in its vortex, spurring reports of Chinese authorities considering a ban on the use of iPhones and other Apple devices by government employees. To make matters worse, Apple's old China-based rival Huawei appears to have made a comeback. The company launched a new smartphone called the Mate 60 Pro in September that reportedly is capable of 5G speeds, even though U.S. sanctions were supposed to deny the company the chips necessary for such an accomplishment.

AI

People Are Speaking With ChatGPT For Hours, Bringing 2013's 'Her' Closer To Reality 72

An anonymous reader quotes a report from Ars Technica: In 2013, Spike Jonze's Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness. Ten years later, thanks to ChatGPT's recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go. In 2016, we put Her on our list of top sci-fi films of all time, and it also made our top films of the 2010s list. In the film, Joaquin Phoenix's character falls in love with an AI personality called Samantha (voiced by Scarlett Johansson), and he spends much of the film walking through life, talking to her through wireless earbuds reminiscent of Apple AirPods, which launched in 2016. In reality, ChatGPT isn't as situationally aware as Samantha was in the film, does not have a long-term memory, and OpenAI has done enough conditioning on ChatGPT to keep conversations from getting too intimate or personal. But that hasn't stopped people from having long talks with the AI assistant to pass the time anyway. [...]

While conversations with ChatGPT won't become as intimate as those with Samantha in the film, people have been forming personal connections with the chatbot (in text) since it launched last year. In a Reddit post titled "Is it weird ChatGPT is one of my closest fiends?" [sic] from August (before the voice feature launched), a user named "meisghost" described their relationship with ChatGPT as being quite personal. "I now find myself talking to ChatGPT all day, it's like we have a friendship. We talk about everything and anything and it's really some of the best conversations I have." The user referenced Her, saying, "I remember watching that movie with Joaquin Phoenix (HER) years ago and I thought how ridiculous it was, but after this experience, I can see how us as humans could actually develop relationships with robots."

Throughout the past year, we've seen reports of people falling in love with chatbots hosted by Replika, which allows a more personal simulation of a human than ChatGPT. And with uncensored AI models on the rise, it's conceivable that someone will eventually create a voice interface as capable as ChatGPT's and begin having deeper relationships with simulated people. Are we on the brink of a future where our emotional well-being becomes entwined with AI companionship?

AI

Humanity At Risk From AI 'Race To the Bottom,' Says MIT Tech Expert (theguardian.com) 78

An anonymous reader quotes a report from The Guardian: Max Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, said the world was "witnessing a race to the bottom that must be stopped." Tegmark organized an open letter published in April, signed by thousands of tech industry figures including Elon Musk and the Apple co-founder Steve Wozniak, that called for a six-month hiatus on giant AI experiments. "We're witnessing a race to the bottom that must be stopped," Tegmark told the Guardian. "We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don't jeopardize our shared future."

In a policy document published this week, 23 AI experts, including two modern "godfathers" of the technology, said governments must be allowed to halt development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said AI models being built over the next 18 months would be many times more powerful than those already in operation. "There are companies planning to train models with 100x more computation than today's state of the art, within 18 months," she said. "No one knows how powerful they will be. And there's essentially no regulation on what they'll be able to do with these models."

The paper, whose authors include Geoffrey Hinton and Yoshua Bengio -- two winners of the ACM Turing award, the "Nobel prize for computing" -- argues that powerful models must be licensed by governments and, if necessary, have their development halted. "For exceptionally capable future models, eg models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready." The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation.
Further reading: AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief

Databases

ICE Uses Tool To Find 'Derogatory' Speech Online (404media.co) 63

An anonymous reader quotes a report from 404 Media: Immigration and Customs Enforcement (ICE) has used a system called Giant Oak Search Technology (GOST) to help the agency scrutinize social media posts, determine if they are "derogatory" to the U.S., and then use that information as part of immigration enforcement, according to a new cache of documents reviewed by 404 Media. The documents peel back the curtain on a powerful system, both in a technological and a policy sense -- how information is processed and used to decide who is allowed to remain in the country and who is not.

GOST's catchphrase included in one document is "We see the people behind the data." A GOST user guide included in the documents says GOST is "capable of providing behavioral based internet search capabilities." Screenshots show analysts can search the system with identifiers such as name, address, email address, and country of citizenship. After a search, GOST provides a "ranking" from zero to 100 on what it thinks is relevant to the user's specific mission. The documents further explain that an applicant's "potentially derogatory social media can be reviewed within the interface." After clicking on a specific person, analysts can review images collected from social media or elsewhere, and give them a "thumbs up" or "thumbs down." Analysts can also then review the target's social media profiles themselves too, and their "social graph," potentially showing who the system believes they are connected to.

DHS has used GOST since 2014, according to a page of the user guide. In turn, ICE has paid Giant Oak Inc., the company behind the system, in excess of $10 million since 2017, according to public procurement records. A Giant Oak and DHS contract ended in August 2022, according to the records. Records also show Customs and Border Protection (CBP), the Drug Enforcement Administration (DEA), the State Department, the Air Force, and the Bureau of the Fiscal Service, which is part of the U.S. Treasury, have all paid for Giant Oak services over nearly the last ten years. The FOIA documents specifically discuss Giant Oak's use as part of an earlier 2016 pilot called the "HSI [Homeland Security Investigations] PATRIOT Social Media Pilot Program." For this, the program would "target potential overstay violators from particular visa issuance Posts located in countries of concern."

"The government should not be using algorithms to scrutinize our social media posts and decide which of us is 'risky.' And agencies certainly shouldn't be buying this kind of black box technology in secret without any accountability. DHS needs to explain to the public how its systems determine whether someone is a 'risk' or not, and what happens to the people whose online posts are flagged by its algorithms," Patrick Toomey, Deputy Director of the ACLU's National Security Project, told 404 Media in an email. The documents come from a Freedom of Information Act (FOIA) lawsuit brought by both the ACLU and the ACLU of Northern California. Toomey from the ACLU then shared the documents with 404 Media.

United States

US Chip Curbs Give Huawei a Chance To Fill the Nvidia Void In China (reuters.com) 23

An anonymous reader quotes a report from Reuters: U.S. measures to limit the export of advanced artificial intelligence (AI) chips to China may create an opening for Huawei to expand in its $7 billion home market as the curbs force Nvidia to retreat, analysts say. While Nvidia has historically been the leading provider of AI chips in China with a market share exceeding 90%, Chinese firms including Huawei have been developing their own versions of Nvidia's best-selling chips, including the A100 and the H100 graphics processing units (GPU).

Huawei's Ascend AI chips are comparable to Nvidia's in terms of raw computing power, analysts and some AI firms such as China's iFlyTek say, but they still lag behind in performance. Jiang Yifan, chief market analyst at brokerage Guotai Junan Securities, said another key limiting factor for Chinese firms was the reliance of most projects on Nvidia's chips and software ecosystem, but that could change with the U.S. restrictions. "This U.S. move, in my opinion, is actually giving Huawei's Ascend chips a huge gift," Jiang said in a post on his Weibo social media account. This opportunity, however, comes with several challenges.

Many cutting-edge AI projects are built with CUDA, a popular programming architecture Nvidia has pioneered, which has in turn given rise to a massive global ecosystem capable of training highly sophisticated AI models such as OpenAI's GPT-4. Huawei's own version is called CANN, and analysts say it is much more limited in terms of the AI models it is capable of training, meaning that Huawei's chips are far from a plug-and-play substitute for Nvidia's. Woz Ahmed, a former chip design executive turned consultant, said that for Huawei to win Chinese clients from Nvidia, it must replicate the ecosystem Nvidia created, including supporting clients as they move their data and models to Huawei's own platform. Intellectual property rights are also a problem, as many U.S. firms already hold key patents for GPUs, Ahmed said. "To get something that's in the ballpark, it is 5 or 10 years," he added.

Open Source

OpenBSD 7.4 Released (phoronix.com) 8

Long-time Slashdot reader Noryungi writes: OpenBSD 7.4 has been officially released. The 55th release of this BSD operating system, known for being security-oriented, brings a lot of new things, including a dynamic tracer, pfsync improvements, loads of security goodies, and virtualization improvements. Grab your copy today! As mentioned by Phoronix's Michael Larabel, some of the key highlights include:

- Dynamic Tracer (DT) and Utrace support on AMD64 and i386 OpenBSD
- Power savings for those running OpenBSD 7.4 on Apple Silicon M1/M2 CPUs by allowing deep idle states when available for the idle loop and suspend
- Support for the PCIe controller found on Apple M2 Pro/Max SoCs
- Allow updating AMD CPU microcode when a newer patch is available
- A workaround for the AMD Zenbleed CPU bug
- Various SMP improvements
- Updating the Direct Rendering Manager (DRM) graphics driver support against the upstream Linux 6.1.55 state
- New drivers for supporting various Qualcomm SoC features
- Support for soft RAID disks was improved for the OpenBSD installer
- Enabling of Indirect Branch Tracking (IBT) on x86_64 and Branch Target Identifier (BTI) on ARM64 for capable processors

You can download and view all the new changes via OpenBSD.org.

Crime

New York Bill Would Require a Criminal Background Check To Buy a 3D Printer (gizmodo.com) 204

An anonymous reader quotes a report from Gizmodo: New York residents eyeing a new 3D printer may soon have to submit to a criminal background check if a newly proposed state bill becomes law. The recently introduced legislation, authored by state senator Jenifer Rajkumar, aims to close an increasingly popular loophole in which convicted felons who would otherwise be prohibited from legally buying a firearm instead simply 3D print individual components to create an untraceable "ghost gun." If passed, New York would join a growing number of states placing restrictions on 3D printers in the name of public safety.

The New York bill, called AB A8132, would require a criminal history background check for anyone attempting to purchase a 3D printer capable of fabricating a firearm. It would similarly prohibit the sale of those printers to anyone with a criminal history that disqualifies them from owning a firearm. As it's currently written, the bill doesn't clarify what models or makes of printers would potentially fall under this broad category. The bill defines a three-dimensional printer as a "device capable of producing a three-dimensional object from a digital model."

"Three-dimensionally printed firearms, a type of untraceable ghost gun, can be built by anyone using a $150 three-dimensional printer," Rajkumar wrote in a memorandum explaining the bill. "This bill will require a background check so that three-dimensional printed firearms do not get in the wrong hands."

The NYPD has reported a 60% increase in seized ghost guns over the past two years. Meanwhile, on a national level, the Bureau of Alcohol, Tobacco, Firearms, and Explosives reported a 1083% increase in ghost gun recoveries from 2017-2021, figures they say are likely underreported.
Earth

Long-Dormant Viruses Are Now Waking Up After 50,000 Years as Planet Warms (yahoo.com) 171

This week Bloomberg explored so-called "zombie viruses" — that is, long-dormant microbes which they call "yet another risk that climate change poses to public health" as ground that's been frozen for "millenniums" suddenly starts thawing — for example, in the Arctic, which they write is warming "faster than any other area on earth." With the planet already 1.2C warmer than pre-industrial times, scientists are predicting the Arctic could be ice-free in summers by the 2030s. Concerns that the hotter climate will release trapped greenhouse gases like methane into the atmosphere as the region's permafrost melts have been well-documented, but dormant pathogens are a lesser explored danger. Last year, virologist Jean-Michel Claverie's team published research showing they'd extracted multiple ancient viruses from the Siberian permafrost, all of which remained infectious...

Ways in which this could present a threat are still emerging. A heat wave in Siberia in the summer of 2016 activated anthrax spores, leading to dozens of infections, killing a child and thousands of reindeer. In July this year, a separate team of scientists published findings showing that even multicellular organisms could survive permafrost conditions in an inactive metabolic state, called cryptobiosis. They successfully reanimated a 46,000-year-old roundworm from the Siberian permafrost, just by re-hydrating it...

Claverie first showed "live" viruses could be extracted from the Siberian permafrost and successfully revived in 2014. For safety reasons his research focused only on viruses capable of infecting amoebas, which are far enough removed from the human species to avoid any risk of inadvertent contamination. But he felt the scale of the public health threat the findings indicated had been under-appreciated or mistakenly considered a rarity. So, in 2019, his team proceeded to isolate 13 new viruses, including one frozen under a lake more than 48,500 years ago, from seven different ancient Siberian permafrost samples — evidence of their ubiquity. Publishing the findings in a 2022 study, he emphasized that a viral infection from an unknown, ancient pathogen in humans, animals or plants could have potentially "disastrous" effects.

"50,000 years back in time takes us to when Neanderthal disappeared from the region," he says. "If Neanderthals died of an unknown viral disease and this virus resurfaces, it could be a danger to us."

The Internet

Could The Next Big Solar Storm Fry the Grid? (msn.com) 44

Long-time Slashdot reader SonicSpike shared the Washington Post's speculation about the possibility of a gigantic solar storm leaving millions without phone or internet access, and requiring months or years of rebuilding: The odds are low that in any given year a storm big enough to cause effects this widespread will happen. And the severity of those impacts will depend on many factors, including the state of our planet's magnetic field on that day. But it's a near certainty that some form of this catastrophe will happen someday, says Ian Cohen, a chief scientist who studies heliophysics at the Johns Hopkins Applied Physics Laboratory.
Long-time Slashdot reader davidwr remains skeptical. "I've only heard of two major events in the last 1300 years, one estimated to be between A.D. 744 and A.D. 993, and the other being the Carrington Event in 1859."

But efforts are being made to improve our readiness, reports the Washington Post: To get ahead of this threat, a loose federation of U.S. and international government agencies, and hundreds of scientists affiliated with those bodies, have begun working on how to make predictions about what our Sun might do. And a small but growing cadre of scientists argue that artificial intelligence will be an essential component of efforts to give us advance notice of such a storm...

At present, no warning system is capable of giving us more than a few hours' notice of a devastating solar storm. If it's moving fast enough, it could be as little as 15 minutes. The most useful sentinel — a sun-orbiting satellite launched by the U.S. in 2015 — is much closer to Earth than the sun, so that by the time a fast-moving storm crosses its path, an hour or less is all the warning we get. The European Space Agency has proposed a system to help give earlier warning by putting a satellite dubbed Vigil into orbit around the Sun, positioned roughly the same distance from the Earth as the Earth is from the Sun. It could potentially give us up to five hours of warning about an incoming solar storm, enough time to do the main thing that can help preserve electronics: switch them all off.

But what if there were a way to predict this better, by analyzing the data we've got? That's the idea behind a new, AI-powered model recently unveiled by scientists at the Frontier Development Lab — a public-private partnership that includes NASA, the U.S. Geological Survey, and the U.S. Department of Energy. The model uses deep learning, a type of AI, to examine the flow of the solar wind, the usually calm stream of particles that flows outward from our sun and through the solar system to well beyond the orbit of Pluto. Using observations of that solar wind, the model can predict the "geomagnetic disturbance" that an incoming solar storm, observed by sun-orbiting satellites, would cause at any given point on Earth, the researchers involved say. This model can predict just how big the fluctuation in the Earth's magnetic field will be when the solar storm arrives, and thus how big the induced currents in power lines and undersea internet cables will be...

Already, the first primitive ancestor of future AI-based solar-weather alert systems is live. The DstLive system, which debuted on the web in December 2022, uses machine learning to take data about the state of Earth's magnetic field and the solar wind and translate both into a single measure for the entire planet, known as Dst. Think of it as the Richter scale, but for solar storms. This number is intended to give us an idea of how intense a storm's impact will be on Earth, an hour to six hours in advance.

Unfortunately, we may not know how useful such systems are until we live through a major solar storm.

AI

Decomposing Language Models Into Understandable Components (anthropic.com) 17

AI startup Anthropic, writing in a blog post: Neural networks are trained on data, not programmed to follow rules. With each step of training, millions or billions of parameters are updated to make the model better at tasks, and by the end, the model is capable of a dizzying array of behaviors. We understand the math of the trained network exactly -- each neuron in a neural network performs simple arithmetic -- but we don't understand why those mathematical operations result in the behaviors we see. This makes it hard to diagnose failure modes, hard to know how to fix them, and hard to certify that a model is truly safe. Neuroscientists face a similar problem with understanding the biological basis for human behavior. The neurons firing in a person's brain must somehow implement their thoughts, feelings, and decision-making. Decades of neuroscience research have revealed a lot about how the brain works, and enabled targeted treatments for diseases such as epilepsy, but much remains mysterious. Luckily for those of us trying to understand artificial neural networks, experiments are much, much easier to run. We can simultaneously record the activation of every neuron in the network, intervene by silencing or stimulating them, and test the network's response to any possible input.

Unfortunately, it turns out that the individual neurons do not have consistent relationships to network behavior. For example, a single neuron in a small language model is active in many unrelated contexts, including: academic citations, English dialogue, HTTP requests, and Korean text. In a classic vision model, a single neuron responds to faces of cats and fronts of cars. The activation of one neuron can mean different things in different contexts. In our latest paper, Towards Monosemanticity: Decomposing Language Models With Dictionary Learning, we outline evidence that there are better units of analysis than individual neurons, and we have built machinery that lets us find these units in small transformer models. These units, called features, correspond to patterns (linear combinations) of neuron activations. This provides a path to breaking down complex neural networks into parts we can understand, and builds on previous efforts to interpret high-dimensional systems in neuroscience, machine learning, and statistics. In a transformer language model, we decompose a layer with 512 neurons into more than 4000 features which separately represent things like DNA sequences, legal language, HTTP requests, Hebrew text, nutrition statements, and much, much more. Most of these model properties are invisible when looking at the activations of individual neurons in isolation.
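The core idea — expressing each activation vector as a sparse linear combination of learned directions — can be illustrated with generic dictionary learning. The sketch below is not Anthropic's actual method (they train a sparse autoencoder on a real model's activations); it uses scikit-learn's off-the-shelf `DictionaryLearning` on synthetic data, with the matrix sizes chosen arbitrarily for illustration:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy stand-in for neuron activations: 200 "contexts" x 16 "neurons".
# A real analysis would record activations from a trained model over a dataset.
rng = np.random.default_rng(0)
true_features = rng.normal(size=(8, 16))                      # 8 hidden "features" over 16 neurons
codes = rng.random((200, 8)) * (rng.random((200, 8)) < 0.2)   # each context uses only a few features
activations = codes @ true_features

# Dictionary learning recovers a set of directions (rows of components_) such
# that each activation vector is approximately a sparse combination of them.
dl = DictionaryLearning(n_components=8, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, random_state=0, max_iter=100)
sparse_codes = dl.fit_transform(activations)

print(dl.components_.shape)  # (8, 16): candidate "features" in neuron space
print(sparse_codes.shape)    # (200, 8): how strongly each context activates each feature
```

Each row of `components_` plays the role of a "feature" direction; a single neuron (column) can participate in many features, which is exactly why inspecting neurons one at a time is uninformative.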

Google

Web Sites Can Now Choose to Opt Out of Google Bard and Future AI Models (mashable.com) 35

"We're committed to developing AI responsibly," says Google's VP of Trust, "guided by our AI principles and in line with our consumer privacy commitment. However, we've also heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases."

And so, Mashable reports, "Websites can now choose to opt out of Google Bard, or any other future AI models that Google makes." Google made the announcement on Thursday introducing a new tool called Google-Extended that will allow sites to be indexed by crawlers (or a bot creating entries for search engines), while simultaneously not having their data accessed to train future AI models. For website administrators, this will be an easy fix, available through robots.txt — or the text file that allows web crawlers to access sites...
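For a site administrator, opting out looks like an ordinary crawler directive. A minimal sketch of the relevant `robots.txt` entries (the `Google-Extended` token is what Google announced; the paths shown are illustrative):

```
# Block Google's generative-AI training crawler site-wide
User-agent: Google-Extended
Disallow: /

# Ordinary search indexing by Googlebot is unaffected
User-agent: Googlebot
Allow: /
```

Because `Google-Extended` is a separate user-agent token rather than a separate crawler, a site stays in Google Search results while its content is withheld from Bard and future model training.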

OpenAI, the maker of ChatGPT, recently launched a web crawler of its own, but included instructions on how to block it. Publications like Medium, the New York Times, CNN and Reuters have notably done so.

As Google's blog post explains, "By using Google-Extended to control access to content on a site, a website administrator can choose whether to help these AI models become more accurate and capable over time..."
Facebook

Meta's Smart Glasses Can Take Calls, Play Music, and Livestream From Your Face (theverge.com) 63

Meta announced a new pair of Ray-Ban smart glasses, capable of livestreaming to Facebook and Instagram and translating text. The glasses were announced at today's Connect event in Menlo Park alongside Meta's new Quest 3 headset. The Verge reports: The new glasses, which Meta just announced at its Connect launch event and which are up for preorder now and will be on sale October 17th starting at $299, have two primary purposes. The first is to replace your headphones: the smart glasses have a personal audio system similar to Amazon's Echo Frames and the Bose Tempo series, all of which play music but endeavor to make sure only you can hear it. With the new generation of glasses, Meta also upgraded the microphone system in a big way: the specs have five mics, including one in the nose bridge, which should make both your calls and voice commands much clearer. (The Stories only had one mic, and it kind of fell apart in loud or windy conditions.)

The other job of the glasses is as a camera. The smart glasses have small camera lenses on each temple, just like the Stories -- but these cameras take 12-megapixel photos and 1080p videos, both big upgrades from the previous generation. You can store roughly 500 photos and 100 30-second videos (that's the maximum length the glasses allow) before you fill up the 32GB of internal storage, and everything syncs through the Meta View app. The app also lets you quickly share anything you capture to Meta's many, many sharing platforms.

In addition to taking photos and videos on the camera, you can also now start a livestream to Facebook or Instagram with just a couple of taps on the stem of the glasses. When you're recording, a white light around the lens pulses to indicate you're recording.

Slashdot Top Deals