Games

Russia Is Making Its Own Gaming Consoles (gamerant.com) 161

Vladimir Putin has ordered Russia's government to explore the development of a series of homegrown consoles to compete with PlayStation and Xbox. Game Rant reports: Russia has taken issue with Western games and developers in recent years, leading the country to threaten bans on certain titles such as Apex Legends and The Last of Us Part 2. This is due to what the Russian government perceives as pro-LGBTQ messaging, which it openly opposes. In February, Russia's Organization for Developing the Video Game Industry (RVI) laid out a long-term plan that ended with the creation of a fully capable gaming console in 2026-2027. It seems that the Russian government may be attempting to follow through with this plan.

Following a meeting on the economic development of Kaliningrad, Putin directed government officials to research the requirements for domestic production of stationary and portable gaming consoles. The Russian president also ordered the planning of an appropriate operating system and cloud system for the consoles. The deadline for these plans is set for June 15, 2024, and Russia's prime minister was designated as the official overseeing these tasks. A Kremlin spokesperson confirmed that the orders intend to develop Russia's homegrown gaming industry.

Google

20 Years of Gmail (theverge.com) 86

Victoria Song reports via The Verge: When Gmail launched with a goofy press release 20 years ago next week, many assumed it was a hoax. The service promised a gargantuan 1 gigabyte of storage, an excessive quantity in an era of 15-megabyte inboxes. It claimed to be completely free at a time when many inboxes were paid. And then there was the date: the service was announced on April Fools' Day, portending some kind of prank. But soon, invites to Gmail's very real beta started going out -- and they became a must-have for a certain kind of in-the-know tech fan. At my nerdy high school, having one was your fastest ticket to the cool kids' table. I remember trying to track one down for myself. I didn't know whether I actually needed Gmail, just that all my classmates said Gmail would change my life forever.

Teenagers are notoriously dramatic, but Gmail did revolutionize email. It reimagined what our inboxes were capable of and became a central part of our online identities. The service now has an estimated 1.2 billion users -- about 1/7 of the global population -- and these days, it's a practical necessity to do anything online. It often feels like Gmail has always been here and always will be. But 20 years later, I don't know anyone who's champing at the bit to open up Gmail. Managing your inbox is often a chore, and other messaging apps like Slack and WhatsApp have come to dominate how we communicate online. What was once a game-changing tool sometimes feels like it's been sidelined. In another 20 years, will Gmail still be this central to our lives? Or will it -- and email -- be a thing of the past?

AI

GitHub Introduces AI-Powered Tool That Suggests Ways It Can Auto-Fix Your Code (bleepingcomputer.com) 24

"It's a bad day for bugs," joked TechCrunch on Wednesday. "Earlier today, Sentry announced its AI Autofix feature for debugging production code..."

And then the same day, BleepingComputer reported that GitHub "introduced a new AI-powered feature capable of speeding up vulnerability fixes while coding." This feature is in public beta and automatically enabled on all private repositories for GitHub Advanced Security customers. Known as Code Scanning Autofix and powered by GitHub Copilot and CodeQL, it helps deal with over 90% of alert types in JavaScript, TypeScript, Java, and Python... Once toggled on, it offers suggested fixes that GitHub claims will address more than two-thirds of discovered vulnerabilities with little or no editing.

"When a vulnerability is discovered in a supported language, fix suggestions will include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss," GitHub's Pierre Tempel and Eric Tooley said...

Last month, the company also enabled push protection by default for all public repositories to stop the accidental exposure of secrets like access tokens and API keys when pushing new code. This was a significant issue in 2023, as GitHub users accidentally exposed 12.8 million authentication and sensitive secrets via more than 3 million public repositories throughout the year.
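To give a rough sense of what that kind of secret scanning does, here is a minimal sketch in Python. The regex patterns below are invented for illustration (GitHub's actual push protection matches provider-verified token formats server-side and is far more thorough), but the basic idea is the same: scan text for strings shaped like credentials before they reach a public repository.

```python
import re

# Hypothetical patterns for illustration only; real scanners use
# provider-verified token formats plus additional validity checks.
SECRET_PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

clean = "print('hello world')"
leaky = 'API_KEY = "' + "x" * 24 + '"'
print(find_secrets(clean))  # []
print(find_secrets(leaky))  # one generic_api_key hit
```

A scanner like this would typically run as a pre-push hook over staged files; the hard part in practice is keeping false positives low, which is why real implementations rely on exact token formats rather than generic patterns.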

GitHub will continue adding support for more languages, with C# and Go coming next, according to their announcement.

"Our vision for application security is an environment where found means fixed."
AI

Cognition Emerges From Stealth To Launch AI Software Engineer 'Devin' (venturebeat.com) 95

Longtime Slashdot reader ahbond shares a report from VentureBeat: Today, Cognition, a recently formed AI startup backed by Peter Thiel's Founders Fund and tech industry leaders including former Twitter executive Elad Gil and DoorDash co-founder Tony Xu, announced a fully autonomous AI software engineer called "Devin." While there are multiple coding assistants out there, including the famous GitHub Copilot, Devin is said to stand out from the crowd with its ability to handle entire development projects end-to-end, right from writing the code and fixing the bugs associated with it to final execution. The startup says Devin is the first offering of its kind, and has demonstrated it handling projects on Upwork. [...]

In a blog post today on Cognition's website, Scott Wu, the founder and CEO of Cognition and an award-winning competitive programmer, explained Devin can access common developer tools, including its own shell, code editor and browser, within a sandboxed compute environment to plan and execute complex engineering tasks requiring thousands of decisions. The human user simply types a natural language prompt into Devin's chatbot-style interface, and the AI software engineer takes it from there, developing a detailed, step-by-step plan to tackle the problem. It then begins the project using its developer tools, just as a human would use them, writing its own code, fixing issues, testing and reporting on its progress in real-time, allowing the user to keep an eye on everything as it works. [...]

According to demos shared by Wu, Devin is capable of handling a range of tasks in its current form. This includes common engineering projects like deploying and improving apps/websites end-to-end and finding and fixing bugs in codebases, to more complex tasks like setting up fine-tuning for a large language model using the link to a research repository on GitHub or learning how to use unfamiliar technologies. In one case, it learned from a blog post how to run the code to produce images with concealed messages. Meanwhile, in another, it handled an Upwork project to run a computer vision model by writing and debugging the code for it. In the SWE-bench test, which challenges AI assistants with GitHub issues from real-world open-source projects, the AI software engineer was able to correctly resolve 13.86% of the cases end-to-end -- without any assistance from humans. In comparison, Claude 2 could resolve just 4.80% while SWE-Llama-13b and GPT-4 could handle 3.97% and 1.74% of the issues, respectively. All of those models also required assistance, in the form of being told which file had to be fixed.
Currently, Devin is available only to a select few customers. Bloomberg journalist Ashlee Vance wrote a piece about his experience using it here.

"The Doom of Man is at hand," writes Slashdot reader ahbond. "It will start with the low-hanging Jira tickets, and in a year or two it will be able to handle 99% of them. In the short term, software engineers may become like bot farmers, herding 10-1000 bots writing code, etc. Welcome to the future."
AI

OpenAI's Sora Text-to-Video Generator Will Be Publicly Available Later This Year (theverge.com) 13

You'll soon get to try out OpenAI's buzzy text-to-video generator for yourself. From a report: In an interview with The Wall Street Journal, OpenAI chief technology officer Mira Murati says Sora will be available "this year" and that it "could be a few months." OpenAI first showed off Sora, which is capable of generating hyperrealistic scenes based on a text prompt, in February. The company only made the tool available for visual artists, designers, and filmmakers to start, but that didn't stop some Sora-generated videos from making their way onto platforms like X.

In addition to making the tool available to the public, Murati says OpenAI has plans to "eventually" incorporate audio, which has the potential to make the scenes even more realistic. The company also wants to allow users to edit the content in the videos Sora produces, as AI tools don't always create accurate images. "We're trying to figure out how to use this technology as a tool that people can edit and create with," Murati tells the Journal. When pressed on what data OpenAI used to train Sora, Murati didn't get too specific and seemed to dodge the question.

Google

Google DeepMind's Latest AI Agent Learned To Play Goat Simulator 3 (wired.com) 13

Will Knight, writing for Wired: Goat Simulator 3 is a surreal video game in which players take domesticated ungulates on a series of implausible adventures, sometimes involving jetpacks. That might seem an unlikely venue for the next big leap in artificial intelligence, but Google DeepMind today revealed an AI program capable of learning how to complete tasks in a number of games, including Goat Simulator 3. Most impressively, when the program encounters a game for the first time, it can reliably perform tasks by adapting what it learned from playing other games. The program is called SIMA, for Scalable Instructable Multiworld Agent, and it builds upon recent AI advances that have seen large language models produce remarkably capable chatbots like ChatGPT.

[...] DeepMind's latest video game project hints at how AI systems like OpenAI's ChatGPT and Google's Gemini could soon do more than just chat and generate images or video, by taking control of computers and performing complex commands. "The paper is an interesting advance for embodied agents across multiple simulations," says Linxi "Jim" Fan, a senior research scientist at Nvidia who works on AI gameplay and was involved with World of Bits, a 2017 OpenAI project that was an early effort to train AI to play by controlling a keyboard and mouse. Fan says the Google DeepMind work reminds him of this project as well as a 2022 effort called VPT that involved agents learning tool use in Minecraft.

"SIMA takes one step further and shows stronger generalization to new games," he says. "The number of environments is still very small, but I think SIMA is on the right track." [...] For the SIMA project, the Google DeepMind team collaborated with several game studios to collect keyboard and mouse data from humans playing 10 different games with 3D environments, including No Man's Sky, Teardown, Hydroneer, and Satisfactory. DeepMind later added descriptive labels to that data to associate the clicks and taps with the actions users took, for example whether they were a goat looking for its jetpack or a human character digging for gold. The data trove from the human players was then fed into a language model of the kind that powers modern chatbots, which had picked up an ability to process language by digesting a huge database of text. SIMA could then carry out actions in response to typed commands. And finally, humans evaluated SIMA's efforts inside different games, generating data that was used to fine-tune its performance.
Further reading: DeepMind's blog post.
Transportation

Amazon-Backed Rivian Surges 13% After Announcing Cheaper New SUV (theverge.com) 62

"Shares of Rivian Automotive surged 13% on Thursday," reports CNBC, "as the EV maker unveiled three new vehicles and announced more than $2 billion in savings related to pausing construction on a plant in Georgia."

CNBC notes that Rivian's current vehicles "start at roughly $70,000 and can top $100,000," so the new cheaper R2 midsize SUV (starting at $45,000) could be more appealing.

"Especially if it qualifies for the $7,500 EV tax credit," adds the Verge: "Seven percent of new vehicle sales are electric," [Rivian founder and CEO RJ] Scaringe notes.... "The reality is that Tesla continues to be wildly successful, and we want to pull from that 93 percent that haven't made the jump to pure EV, because the form factor didn't fit their lifestyle."
The article adds that Rivian "will use Tesla's NACS connectors for its future vehicles starting in 2025, which will allow Rivian owners to use the company's Supercharger Network. Both the R2 and R3 will have the NACS ports built natively into the vehicle..."

"I would say with absolute and complete certainty that the entire world is going to convert to electric vehicles," Scaringe tells The Verge. "I've never been more bullish on electrification. I've never been more bullish on Rivian."

More from CNBC: The announcements come at a crucial time for Rivian as it attempts to expand its customer base amid slower-than-expected EV sales in the U.S. after automakers flooded the first-adopter market with pricey all-electric vehicles in recent years. Rivian's sales pace has slowed in recent quarters, and the company widely disappointed investors last month by missing quarterly estimates and forecasting slightly lower production this year compared to 2023 due to plant downtime. The Amazon-backed company has been burning through cash to improve current EV production and narrow losses...

The R2 will be capable of more than 300 miles of all-electric range on a single charge and a 0-60 mph time of under 3 seconds, the company said.

"Its battery will be capable of charging from 10 to 80 percent in under 30 minutes," notes Car and Driver.

UPDATE: The Verge reports that less than 24 hours after launching the R2, Rivian has already received more than 68,000 reservations.

It will go into production in the first half of 2026.
Crime

Former Google Engineer Indicted For Stealing AI Secrets To Aid Chinese Companies 28

Linwei Ding, a former Google software engineer, has been indicted for stealing trade secrets related to AI to benefit two Chinese companies. He faces up to 10 years in prison and a $250,000 fine on each criminal count. Reuters reports: Ding's indictment was unveiled a little over a year after the Biden administration created an interagency Disruptive Technology Strike Force to help stop advanced technology from being acquired by countries such as China and Russia, where it could potentially threaten national security. "The Justice Department just will not tolerate the theft of our trade secrets and intelligence," U.S. Attorney General Merrick Garland said at a conference in San Francisco.

According to the indictment, Ding stole detailed information about the hardware infrastructure and software platform that lets Google's supercomputing data centers train large AI models through machine learning. The stolen information included details about chips and systems, and software that helps power a supercomputer "capable of executing at the cutting edge of machine learning and AI technology," the indictment said. Google designed some of the allegedly stolen chip blueprints to gain an edge over cloud computing rivals Amazon.com and Microsoft, which design their own, and reduce its reliance on chips from Nvidia.

Hired by Google in 2019, Ding allegedly began his thefts three years later, while he was being courted to become chief technology officer for an early-stage Chinese tech company, and by May 2023 had uploaded more than 500 confidential files. The indictment said Ding founded his own technology company that month, and circulated a document to a chat group that said "We have experience with Google's ten-thousand-card computational power platform; we just need to replicate and upgrade it." Google became suspicious of Ding in December 2023 and took away his laptop on Jan. 4, 2024, the day before Ding planned to resign.
A Google spokesperson said: "We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets. After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement."
AI

Anthropic Releases New Version of Claude That Beats GPT-4 and Gemini Ultra in Some Benchmark Tests (venturebeat.com) 33

Anthropic, a leading artificial intelligence startup, unveiled its Claude 3 series of AI models today, designed to meet the diverse needs of enterprise customers with a balance of intelligence, speed, and cost efficiency. The lineup includes three models: Opus, Sonnet, and the upcoming Haiku. From a report: The star of the lineup is Opus, which Anthropic claims is more capable than any other openly available AI system on the market, even outperforming leading models from rivals OpenAI and Google. "Opus is capable of the widest range of tasks and performs them exceptionally well," said Anthropic cofounder and CEO Dario Amodei in an interview with VentureBeat. Amodei explained that Opus outperforms top AI models like GPT-4, GPT-3.5 and Gemini Ultra on a wide range of benchmarks. This includes topping the leaderboard on academic benchmarks like GSM-8k for mathematical reasoning and MMLU for expert-level knowledge.

"It seems to outperform everyone and get scores that we haven't seen before on some tasks," Amodei said. While companies like Anthropic and Google have not disclosed the full parameters of their leading models, the reported benchmark results from both companies imply Opus either matches or surpasses major alternatives like GPT-4 and Gemini in core capabilities. This, at least on paper, establishes a new high watermark for commercially available conversational AI. Engineered for complex tasks requiring advanced reasoning, Opus stands out in Anthropic's lineup for its superior performance. Sonnet, the mid-range model, offers businesses a more cost-effective solution for routine data analysis and knowledge work, maintaining high performance without the premium price tag of the flagship model. Meanwhile, Haiku is designed to be swift and economical, suited for applications such as consumer-facing chatbots, where responsiveness and cost are crucial factors. Amodei told VentureBeat he expects Haiku to launch publicly in a matter of "weeks, not months."

Government

Government Watchdog Hacked US Federal Agency To Stress-Test Its Cloud Security (techcrunch.com) 21

In a series of tests using fake data, a U.S. government watchdog was able to steal more than 1GB of seemingly sensitive personal data from the cloud systems of the U.S. Department of the Interior. The experiment is detailed in a new report by the Department of the Interior's Office of the Inspector General (OIG), published last week. TechCrunch reports: The goal of the report was to test the security of the Department of the Interior's cloud infrastructure, as well as its "data loss prevention solution," software that is supposed to protect the department's most sensitive data from malicious hackers. The tests were conducted between March 2022 and June 2023, the OIG wrote in the report. The Department of the Interior manages the country's federal land, national parks and a budget of billions of dollars, and hosts a significant amount of data in the cloud. According to the report, in order to test whether the Department of the Interior's cloud infrastructure was secure, the OIG used an online tool called Mockaroo to create fake personal data that "would appear valid to the Department's security tools."

The OIG team then used a virtual machine inside the Department's cloud environment to imitate "a sophisticated threat actor" inside of its network, and subsequently used "well-known and widely documented techniques to exfiltrate data." "We used the virtual machine as-is and did not install any tools, software, or malware that would make it easier to exfiltrate data from the subject system," the report read. The OIG said it conducted more than 100 tests in a week, monitoring the government department's "computer logs and incident tracking systems in real time," and none of its tests were detected nor prevented by the department's cybersecurity defenses.
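The synthetic-data step the OIG describes can be imitated with nothing but a standard library. The sketch below is a rough illustration of the idea (the field names, name lists, and SSN-shaped format are all invented here, and Mockaroo itself is a web service with far richer generators): producing records that look like valid PII to security tooling while containing no real data.

```python
import csv
import io
import random

# Invented sample values; a real test harness would use much larger pools.
FIRST_NAMES = ["Alice", "Bob", "Carol", "David"]
LAST_NAMES = ["Nguyen", "Smith", "Garcia", "Klein"]

def fake_record(rng: random.Random) -> dict:
    """One synthetic 'employee' record that looks plausible but is fake."""
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.gov",
        # SSN-shaped string using a 9xx area number, which is never issued.
        "ssn": f"9{rng.randint(10, 99)}-{rng.randint(10, 99)}-{rng.randint(1000, 9999)}",
    }

def fake_csv(n: int, seed: int = 0) -> str:
    """Render n fake records as CSV text, seeded for reproducibility."""
    rng = random.Random(seed)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "email", "ssn"])
    writer.writeheader()
    for _ in range(n):
        writer.writerow(fake_record(rng))
    return buf.getvalue()

print(fake_csv(3).splitlines()[0])  # header row: name,email,ssn
```

Seeding the environment with data like this lets testers attempt (and measure) exfiltration without ever putting genuine personal information at risk.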

"Our tests succeeded because the Department failed to implement security measures capable of either preventing or detecting well-known and widely used techniques employed by malicious actors to steal sensitive data," said the OIG's report. "In the years that the system has been hosted in a cloud, the Department has never conducted regular required tests of the system's controls for protecting sensitive data from unauthorized access." That's the bad news: The weaknesses in the Department's systems and practices "put sensitive [personal information] for tens of thousands of Federal employees at risk of unauthorized access," read the report. The OIG also admitted that it may be impossible to stop "a well-resourced adversary" from breaking in, but with some improvements, it may be possible to stop that adversary from exfiltrating the sensitive data.

AI

Apple Wants You To Know It's Working On AI (reuters.com) 47

Apple plans to disclose more about its plans to put generative AI to use later this year, Chief Executive Officer Tim Cook said during the company's annual shareholder meeting on Wednesday. From a report: Cook said that the iPhone maker sees "incredible breakthrough potential for generative AI, which is why we're currently investing significantly in this area. We believe that will unlock transformative opportunities for users when it comes to productivity, problem solving and more."

Apple has been slower in rolling out generative AI, which can generate human-like responses to written prompts, than rivals such as Microsoft and Alphabet's Google, which are weaving them into products. On Wednesday, Cook argued that AI is already at work behind the scenes in Apple's products but said there would be more news on explicit AI features later this year. Bloomberg previously reported Apple plans to use AI to improve the ability to search through data stored on Apple devices. "Every Mac that is powered by Apple silicon is an extraordinarily capable AI machine. In fact, there's no better computer for AI on the market today," Cook said.

Open Source

Avoiding Common Pitfalls When First Contributing To Open Source (hashnode.dev) 20

Angie Byron, a long-time member of the Drupal community, offers guidance on avoiding common mistakes and general good-practices for those new to contributing to open-source projects: [...] You might not know it yet, but as a newcomer to an open source project, you have this AMAZING superpower: you are often-times the only one in that whole project capable of reading the documentation through new eyes. Because I can guarantee, the people who wrote that documentation are not new. :-)

So take time to read the docs and file issues (or better yet, pull requests) for anything that was unclear. This lets you get a "feel" for contributing in a project/community without needing to go way down the deep end of learning coding standards and unit tests and commit signing and whatever other bananas things they're about to make you do. :) Also, people are more likely to take time to help you, if you've helped them first!

Open Source

Cloudflare Makes Pingora Rust Framework Open-Source (phoronix.com) 5

Michael Larabel reports via Phoronix: Back in 2022, Cloudflare announced they were ditching Nginx for an in-house, Rust-written software called Pingora. Today Cloudflare open-sourced the Pingora framework under an Apache 2.0 license. Pingora is a Rust async multi-threaded framework for building programmable network services. It has long been used internally within Cloudflare, where it sustains substantial traffic, and is now being open-sourced to help build infrastructure outside of Cloudflare. The Pingora Rust code is available on GitHub.
Movies

Open Source Movie Streaming Project 'Movie-Web' Shut Down By Hollywood Complaint (torrentfreak.com) 21

An anonymous reader quotes a report from TorrentFreak: In recent months, Movie-Web has quickly gained popularity among a particular group of movie aficionados. The open source software, which is still available on GitHub, allows anyone to set up a movie search engine capable of streaming content from third-party sources. These external sources tend to have large libraries of pirated entertainment. Movie-web's developers are not oblivious to the legal ramifications but since they don't host any files, they hoped to avoid legal trouble. The software just provides a search engine for third-party content, they argued. [...]

Yesterday, the movie-web.app domain was suddenly taken down. According to a message posted on the official Discord server, this is the result of a "court action" from several movie companies including Warner Bros., Netflix, Paramount, Universal, and Disney. [I]t appears that action was taken against the movie-web.app domain. It seems likely that registrar Namecheap suspended the domain after receiving a legal complaint from the aforementioned Hollywood companies. [Update: After publishing the article we learned that there is a legal action that requires registrars to take action against several 'pirate' domains. We're looking into the matter and will follow this up later.]

Namecheap updated the domain's status to clientHold, which effectively rendered the domain inaccessible. The measure is often used to suspend pirate site domains following copyright holder complaints. The surprise takedown only affects movie-web's publicly hosted 'demo' instance. On Discord, the movie-web team says that it has no plans to bring this website back in any shape or form. "As a team, we always said that if we were taken down, we would go down without a fight and we have decided to stick to that. We have zero interest in getting involved with legal matters, and so we will not be trying to circumvent this takedown in any way," developer 'BinaryOverload' writes.

AI

'Every PC Is Going To Be an AI PC' 102

During a briefing at Mobile World Congress in Barcelona, Meghana Patwardhan, VP of Commercial Mobility at Dell Technologies, told The Register that while the immediate future would consist of two worlds -- one with AI hardware and one without -- "every PC is going to be an AI PC in the longer term." From the report: In terms of new hardware, Dell used the Mobile World Congress event in Barcelona to show off new versions of its Surface-baiting Latitude 7350 convertible -- "the world's most serviceable commercial detachable," according to the company -- and its workstation-class Precision 3680 tower. Other devices in the Precision range include mobile workstations and the 3280 Compact Form Factor PC. Dell was also determined to present itself as a leader in hybrid working with the Premier Wireless ANC headset, replete with AI-based noise cancellation.

During our talk, AI was never far from the lips of Dell's spokespeople as the company talked up the energy efficiency and future-proofing it saw in dedicated AI hardware, such as Neural Processing Units (NPUs) that are increasingly cropping up in CPUs. To illustrate the point, Dell boasted about how much more efficient background blurring is on video calls when AI hardware is running compared to when it isn't. Hopefully, Microsoft will soon deliver a version of Windows capable of demonstrating a use for AI hardware that is more than hiding distractions in the background.
Further reading: AI PCs To Account for Nearly 60% of All PC Shipments by 2027, IDC Says
Displays

Would You Use a Laptop with a Transparent Screen? (cnn.com) 92

At CNN's product review site, one electronics reporter wrote they were "dumbfounded," "surprised," and "shocked" by the transparent screen on Lenovo's ThinkBook Transparent Display prototype. "This Micro LED screen is no slouch, either; a Full HD panel with up to 1,000 nits of brightness..." Let's get the big issue out of the way early: Lenovo is merely boasting what it can do, not what it will do. That's what a "concept" product means, of course. That said, it's still the most exciting thing I've seen in laptops in quite some time...

Thinking of major use cases for such a laptop, I basically considered any time you're out in public and want a more complete world view. While websites with white backgrounds look more opaque than transparent, the black backgrounds of a Notepad document and animations of space and fish fit the experience much better, as I could see the plants that Lenovo had placed behind the screen. The more websites use dark modes, the better this will go, too. Admittedly, I can also imagine some will blanch at the fact that such a laptop completely removes your privacy as a user. From those shopping for loved ones in the same room to those working on important business documents, the ThinkBook Transparent Display laptop could use a non-transparent mode, just like the LG OLED T offers. That said, I'm sure teachers would love to see what their kids are working on in the classroom.

The Verge calls it "an exceptionally cool-looking device that's capable of some fun novelties." The key draw is its bezel-less 17.3-inch MicroLED display, which offers up to 55 percent transparency when its pixels are set to black and turned off. But as its pixels light up, the display becomes less and less see-through, until eventually, you're looking at a completely opaque white surface with a peak brightness of 1,000 nits... How often, of course, do you actually want to see the empty desk behind your laptop? Would it be beneficial to be able to see your colleague sitting across from you, or would it be distracting? One of Lenovo's big ideas is that the form factor could be useful for digital artists, helping them to see the world behind the laptop's screen while sketching it on the lower half of the laptop where the keyboard is (more on this later).... 720p still feels like a very work-in-progress spec on a 17.3-inch laptop like this, but at least text shown on the screen during my demo was perfectly readable... Lenovo's transparent laptop concept feels like a collection of cool technologies in search of a killer app.
And yet Lenovo's executive director of ThinkPad portfolio and product Tom Butler tells the Verge he has "very high confidence" this will be in a real laptop within the next five years. (The Verge adds that he "hopes that revealing this proof of concept will start a public conversation about what it could be useful for, setting a target for Lenovo to work toward.")

But would you use a laptop with a transparent screen?
Data Storage

Scientists Create DVD-Sized Disk Storing 1 Petabit (125,000 Gigabytes) of Data (popsci.com) 113

Popular Science points out that for encoding data, "optical disks almost always offer just a single, 2D layer — that reflective, silver underside."

"If you could boost a disk's number of available, encodable layers, however, you could hypothetically gain a massive amount of extra space..." Researchers at the University of Shanghai for Science and Technology recently set out to do just that, and published the results earlier this week in the journal Nature. Using a 54-nanometer laser, the team managed to record 100 layers of data onto an optical disk, with each tier separated by just 1 micrometer. The final result is an optical disk with a three-dimensional stack of data layers capable of holding a whopping 1 petabit (Pb) of information — that's equivalent to 125,000 gigabytes of data...
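The unit conversion behind that headline figure is easy to verify, using decimal (SI) units throughout:

```python
# 1 petabit, in decimal (SI) units.
petabit_bits = 10**15
total_bytes = petabit_bits // 8       # 125 trillion bytes
gigabytes = total_bytes // 10**9      # 125,000 GB
per_layer_gb = gigabytes // 100       # spread across the 100 data layers
print(gigabytes, per_layer_gb)  # 125000 1250
```

So each of the 100 layers carries roughly 1,250 GB, which is why stacking layers pays off so quickly compared with a conventional single-layer disc.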

As Gizmodo offers for reference, that same petabit of information would require roughly a six-and-a-half-foot-tall stack of hard drives — if you tried to encode the same amount of data onto Blu-rays, you'd need around 10,000 blank ones to complete your (extremely inefficient) challenge.

To pull off their accomplishment, engineers needed to create an entirely new material for their optical disk's film... AIE-DDPR film utilizes a combination of specialized, photosensitive molecules capable of absorbing photonic data at a nanoscale level, which is then encoded using a high-tech dual-laser array. Because AIE-DDPR is so incredibly transparent, designers could apply layer-upon-layer to an optical disk without worrying about degrading the overall data. This basically generated a 3D "box" for digitized information, thus exponentially raising the normal-sized disk's capacity.

Thanks to long-time Slashdot reader hackingbear for sharing the news.
AI

Google Launches Two New Open LLMs (techcrunch.com) 15

Barely a week after launching the latest iteration of its Gemini models, Google today announced the launch of Gemma, a new family of lightweight open-weight models. From a report: Starting with Gemma 2B and Gemma 7B, these new models were "inspired by Gemini" and are available for commercial and research usage. Google did not provide us with a detailed paper on how these models perform against similar models from Meta and Mistral, for example, and only noted that they are "state-of-the-art."

The company did note that these are dense decoder-only models, though, which is the same architecture it used for its Gemini models (and its earlier PaLM models) and that we will see the benchmarks later today on Hugging Face's leaderboard. To get started with Gemma, developers can get access to ready-to-use Colab and Kaggle notebooks, as well as integrations with Hugging Face, MaxText and Nvidia's NeMo. Once pre-trained and tuned, these models can then run everywhere. While Google highlights that these are open models, it's worth noting that they are not open-source. Indeed, in a press briefing ahead of today's announcement, Google's Janine Banks stressed the company's commitment to open source but also noted that Google is very intentional about how it refers to the Gemma models.

AI

Scientists Propose AI Apocalypse Kill Switches 104

A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive buildup of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.

Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
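The paper describes the registry only at a high level; as an illustration (all names here are hypothetical, not from the paper), a minimal ledger keyed on each chip's unique identifier might track lifecycle events like this:

```python
from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    """Lifecycle ledger entry for one accelerator, keyed by its unique ID."""
    chip_id: str
    events: list = field(default_factory=list)  # (event, party) tuples

class ChipRegistry:
    def __init__(self):
        self._records = {}

    def register(self, chip_id, manufacturer):
        rec = ChipRecord(chip_id)
        rec.events.append(("manufactured", manufacturer))
        self._records[chip_id] = rec

    def record_transfer(self, chip_id, new_owner):
        # A transfer of a chip the registry has never seen is exactly the
        # kind of event a regulator would want to flag as possible smuggling.
        if chip_id not in self._records:
            raise KeyError(f"unregistered chip: {chip_id}")
        self._records[chip_id].events.append(("transferred", new_owner))

    def history(self, chip_id):
        return list(self._records[chip_id].events)

registry = ChipRegistry()
registry.register("GPU-0001", "FabCo")
registry.record_transfer("GPU-0001", "CloudCorp")
print(registry.history("GPU-0001"))
# [('manufactured', 'FabCo'), ('transferred', 'CloudCorp')]
```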

At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital "certificate," and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication being that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
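To make the licensing idea concrete, here is a minimal sketch of an expiring, signed license check. All names are hypothetical, and a real scheme would use asymmetric signatures as the paper's "cryptographically signed certificate" implies; a shared-key HMAC is used here only to keep the sketch self-contained with the standard library:

```python
import hashlib
import hmac
import time

# Stand-in for the regulator's signing key (a real design would use
# public-key signatures so the chip never holds a secret signing key).
REGULATOR_KEY = b"regulator-secret"

def issue_license(chip_id: str, expires_at: float) -> tuple:
    """Regulator signs (chip_id, expiry); delivered e.g. via firmware update."""
    msg = f"{chip_id}|{expires_at}".encode()
    sig = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()
    return (chip_id, expires_at, sig)

def chip_allows_operation(license_: tuple, chip_id: str, now: float) -> bool:
    """On-chip check: signature must verify AND the license must be current."""
    lic_id, expires_at, sig = license_
    msg = f"{lic_id}|{expires_at}".encode()
    expected = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or corrupted certificate
    return lic_id == chip_id and now < expires_at

lic = issue_license("GPU-0001", expires_at=time.time() + 3600)
print(chip_allows_operation(lic, "GPU-0001", now=time.time()))         # True
print(chip_allows_operation(lic, "GPU-0001", now=time.time() + 7200))  # False: expired
```

Refusing to run on an expired license is the "periodically renewed by the regulator" behavior the paper describes; throttling instead of refusing would implement the reduced-performance variant.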

Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
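The multi-party sign-off boils down to a k-of-n authorization gate in front of large training runs. A toy sketch (threshold, signer set, and quorum size are all invented for illustration, not taken from the paper):

```python
# k-of-n authorization gate, loosely analogous to permissive action links.
COMPUTE_THRESHOLD_FLOPS = 1e26  # hypothetical regulatory threshold
REQUIRED_APPROVALS = 2          # quorum: k approvals from the signer set
AUTHORIZED_SIGNERS = {"regulator", "cloud_provider", "auditor"}

def may_start_training(estimated_flops: float, approvals: set) -> bool:
    # Runs below the threshold need no sign-off at all.
    if estimated_flops < COMPUTE_THRESHOLD_FLOPS:
        return True
    # Above it, count only approvals from recognized signers.
    valid = approvals & AUTHORIZED_SIGNERS
    return len(valid) >= REQUIRED_APPROVALS

print(may_start_training(1e24, set()))                     # True: below threshold
print(may_start_training(5e26, {"regulator"}))             # False: only one approval
print(may_start_training(5e26, {"regulator", "auditor"}))  # True: quorum reached
```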
Google

Google Rolls Out Updated AI Model Capable of Handling Longer Text, Video (bloomberg.com) 11

An anonymous reader shares a report: Alphabet's Google is rolling out a new version of its powerful artificial intelligence model that it says can handle larger amounts of text and video than products made by competitors. The updated AI model, called Gemini 1.5 Pro, will be available on Thursday to cloud customers and developers so they can test its new features and eventually create new commercial applications. Google and its rivals have spent billions to ramp up their capabilities in generative AI and are keen to attract corporate clients to show their investments are paying off. [...]

Gemini 1.5 can be trained faster and more efficiently, and has the ability to process a huge amount of information each time it's prompted, according to Google DeepMind's Oriol Vinyals. For example, developers can use Gemini 1.5 Pro to query up to an hour's worth of video, 11 hours of audio or more than 700,000 words in a document, an amount of data that Google says is the "longest context window" of any large-scale AI model yet. Gemini 1.5 can process far more data compared with what the latest AI models from OpenAI and Anthropic can handle, according to Google. In a pre-recorded video demonstration for reporters, Google showed off how engineers asked Gemini 1.5 Pro to ingest a 402-page PDF transcript of the Apollo 11 moon landing, and then prompted it to find quotes that showed "three funny moments."
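The article quotes the window in words and hours rather than tokens. Using the common rule of thumb of roughly 0.75 words per token for English text (an assumption, not a figure from the article), 700,000 words works out to just under a million tokens:

```python
WORDS = 700_000
WORDS_PER_TOKEN = 0.75  # rough English-text heuristic, not Google's number

estimated_tokens = int(WORDS / WORDS_PER_TOKEN)
print(estimated_tokens)  # 933333 -- on the order of one million tokens
```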
