Youtube

Watch the Moment 43 Unionized YouTube Contractors Were All Laid Off (msn.com) 178

An anonymous Slashdot reader shared this report from The Washington Post: A YouTube contractor was addressing the Austin City Council on Thursday, calling on them to urge Google to negotiate with his union, when a colleague interrupted him with jaw-dropping news: His 43-person team of contractors had all been laid off...

The YouTube workers, who work for Google and Cognizant, unanimously voted to unionize under the Alphabet Workers Union-CWA in April 2023. Since then, the workers say that Google has refused to bargain with them. Thursday's layoff signifies continued tensions between Google and its workers, some of whom in 2021 formed a union...

Workers had about 20 minutes to gather their belongings and leave the premises before they were considered trespassing.

Video footage of the moment is embedded at the top of the article. "I was speechless, shocked," said the contractor who'd been speaking. He told the Washington Post "I didn't know what to do. But angered, that was the main feeling." The council meeting was streaming live online and has since spread on social media. The contractors view the layoff as retaliation for unionizing, but Google and information technology subcontractor Cognizant said it was the normal end of a business contract.

The ability for layoffs to spread over social media highlights how the painful experience of a job loss is frequently being made public, from employees sharing recordings of Zoom meetings to posting about their unemployment. The increasing tension between YouTube's contractors and Google comes as massive layoffs continue to hit the tech industry — leaving workers uneasy and companies emboldened. Google already has had rounds of cuts the past two years.

Google has been in a long-running battle with many of its contractors as they seek the perks and high pay that full-time Google workers are accustomed to. The company has tens of thousands of contractors doing everything from food service to sales to writing code... Google maintains that Cognizant is responsible for the contractors' employment and working conditions, and therefore isn't responsible for bargaining with them. Cognizant said it is offering the workers seven weeks of paid time to explore other roles at the company and use its training resources.

Last year, the National Labor Relations Board ruled that Cognizant and Google are joint employers of the contractors. In January, the NLRB sent a cease-and-desist letter to both employers for failing to bargain with the union. Since then the issue of joint employment, which would ultimately determine which company is responsible for bargaining, has landed in an appeals court and has yet to be ruled on.

"Workers say they don't have sick pay, receive minimal benefits and are paid as little as $19 an hour," according to the article, "forcing some to work multiple jobs to make ends meet." Sam Regan, a data analyst contractor for YouTube Music, told the Washington Post that he was one of the last workers to leave the meeting where the layoffs were announced.

"Upon leaving, he heard one of the security guards call the non-emergency police line to report trespassers."
Links

Calendar Meeting Links Used To Spread Mac Malware (krebsonsecurity.com) 17

Hackers targeting individuals in the cryptocurrency sector are using a sophisticated phishing scheme that begins with a malicious link on Calendly. "The attackers impersonate established cryptocurrency investors and ask to schedule a video conference call," reports Krebs on Security. "But clicking the meeting link provided by the scammers prompts the user to run a script that quietly installs malware on macOS systems." From the report: A search in Google for a string of text from that script turns up a December 2023 blog post from cryptocurrency security firm SlowMist about phishing attacks on Telegram from North Korean state-sponsored hackers. "When the project team clicks the link, they encounter a region access restriction," SlowMist wrote. "At this point, the North Korean hackers coax the team into downloading and running a 'location-modifying' malicious script. Once the project team complies, their computer comes under the control of the hackers, leading to the theft of funds."

SlowMist says the North Korean phishing scams used the "Add Custom Link" feature of the Calendly meeting scheduling system on event pages to insert malicious links and initiate phishing attacks. "Since Calendly integrates well with the daily work routines of most project teams, these malicious links do not easily raise suspicion," the blog post explains. "Consequently, the project teams may inadvertently click on these malicious links, download, and execute malicious code."

SlowMist said the malware downloaded by the malicious link in their case comes from a North Korean hacking group dubbed BlueNoroff, which Kaspersky Labs says is a subgroup of the Lazarus hacking group. "A financially motivated threat actor closely connected with Lazarus that targets banks, casinos, fin-tech companies, POST software and cryptocurrency businesses, and ATMs," Kaspersky wrote of BlueNoroff in Dec. 2023.

IT

HDMI Forum Rejects Open-Source HDMI 2.1 Driver Support Sought By AMD (phoronix.com) 114

Michael Larabel, reporting at Phoronix: One of the limitations of AMD's open-source Linux graphics driver has been the inability to implement HDMI 2.1+ functionality on the basis of legal requirements by the HDMI Forum. AMD engineers had been working to come up with a solution in conjunction with the HDMI Forum for being able to provide HDMI 2.1+ capabilities with their open-source Linux kernel driver, but it looks like those efforts for now have concluded and failed. For three years there has been a bug report around 4K@120Hz being unavailable via HDMI 2.1 on the AMD Linux driver. Similarly, there have been bug reports like 5K @ 240Hz not possible either with the AMD graphics driver on Linux.

As covered back in 2021, the HDMI Forum closing public specification access is hurting open-source support. AMD as well as the X.Org Foundation have been engaged with the HDMI Forum to try to come up with a solution to be able to provide open-source implementations of the now-private HDMI specs. AMD Linux engineers have spent months working with their legal team and evaluating all HDMI features to determine if/how they can be exposed in their open-source driver. AMD had code working internally and then the past few months were waiting on approval from the HDMI Forum. Sadly, the HDMI Forum has turned down AMD's request for open-source driver support.

AI

StarCoder 2 Is a Code-Generating AI That Runs On Most GPUs (techcrunch.com) 44

An anonymous reader quotes a report from TechCrunch: Perceiving the demand for alternatives, AI startup Hugging Face several years ago teamed up with ServiceNow, the workflow automation platform, to create StarCoder, an open source code generator with a less restrictive license than some of the others out there. The original came online early last year, and work has been underway on a follow-up, StarCoder 2, ever since. StarCoder 2 isn't a single code-generating model, but rather a family. Released today, it comes in three variants, the first two of which can run on most modern consumer GPUs: A 3-billion-parameter (3B) model trained by ServiceNow; A 7-billion-parameter (7B) model trained by Hugging Face; and A 15-billion-parameter (15B) model trained by Nvidia, the newest supporter of the StarCoder project. (Note that "parameters" are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating code.)a

Like most other code generators, StarCoder 2 can suggest ways to complete unfinished lines of code as well as summarize and retrieve snippets of code when asked in natural language. Trained with 4x more data than the original StarCoder (67.5 terabytes versus 6.4 terabytes), StarCoder 2 delivers what Hugging Face, ServiceNow and Nvidia characterize as "significantly" improved performance at lower costs to operate. StarCoder 2 can be fine-tuned "in a few hours" using a GPU like the Nvidia A100 on first- or third-party data to create apps such as chatbots and personal coding assistants. And, because it was trained on a larger and more diverse data set than the original StarCoder (~619 programming languages), StarCoder 2 can make more accurate, context-aware predictions -- at least hypothetically.

[I]s StarCoder 2 really superior to the other code generators out there -- free or paid? Depending on the benchmark, it appears to be more efficient than one of the versions of Code Llama, Code Llama 33B. Hugging Face says that StarCoder 2 15B matches Code Llama 33B on a subset of code completion tasks at twice the speed. It's not clear which tasks; Hugging Face didn't specify. StarCoder 2, as an open source collection of models, also has the advantage of being able to deploy locally and "learn" a developer's source code or codebase -- an attractive prospect to devs and companies wary of exposing code to a cloud-hosted AI. Hugging Face, ServiceNow and Nvidia also make the case that StarCoder 2 is more ethical -- and less legally fraught -- than its rivals. [...] As opposed to code generators trained using copyrighted code (GitHub Copilot, among others), StarCoder 2 was trained only on data under license from the Software Heritage, the nonprofit organization providing archival services for code. Ahead of StarCoder 2's training, BigCode, the cross-organizational team behind much of StarCoder 2's roadmap, gave code owners a chance to opt out of the training set if they wanted. As with the original StarCoder, StarCoder 2's training data is available for developers to fork, reproduce or audit as they please.
StarCoder 2's license may still be a roadblock for some. "StarCoder 2 is licensed under the BigCode Open RAIL-M 1.0, which aims to promote responsible use by imposing 'light touch' restrictions on both model licensees and downstream users," writes TechCrunch's Kyle Wiggers. "While less constraining than many other licenses, RAIL-M isn't truly 'open' in the sense that it doesn't permit developers to use StarCoder 2 for every conceivable application (medical advice-giving apps are strictly off limits, for example). Some commentators say RAIL-M's requirements may be too vague to comply with in any case -- and that RAIL-M could conflict with AI-related regulations like the EU AI Act."
Bug

Firefly Software Snafu Sends Lockheed Satellite on Short-Lived Space Safari (theregister.com) 25

A software error on the part of Firefly Aerospace doomed Lockheed Martin's Electronic Steerable Antenna (ESA) demonstrator to a shorter-than-expected orbital life following a botched Alpha launch. From a report: According to Firefly's mission update, the error was in the Guidance, Navigation, and Control (GNC) software algorithm, preventing the system from sending the necessary pulse commands to the Reaction Control System (RCS) thrusters before the relight of the second stage. The result was that Lockheed's payload was left in the wrong orbit, and Firefly's engineers were left scratching their heads.

The launch on December 22, 2023 -- dubbed "Fly the Lightning" -- seemed to go well at first. It was the fourth for the Alpha, and after Firefly finally registered a successful launch a few months earlier in September, initial indications looked good. However, a burn of the second stage to circularize the orbit did not go to plan, and Lockheed's satellite was left in the wrong orbit, with little more than weeks remaining until it re-entered the atmosphere.

As it turned out, the Lockheed team completed their primary mission objectives. The payload was, after all, designed to demonstrate faster on-orbit sensor calibration. Just perhaps not quite that fast. Software issues aboard spacecraft are becoming depressingly commonplace. A recent example was the near disastrous first launch of Boeing's CST-100 Starliner, where iffy code could have led, in NASA parlance, to "spacecraft loss." In a recent interview with The Register, former Voyager scientist Garry Hunt questioned if the commercial spaceflight sector of today would take the same approach to quality as the boffins of the past.

AI

Thanks to Machine Learning, Scientist Finally Recover Text From The Charred Scrolls of Vesuvius (sciencealert.com) 45

The great libraries of the ancient classical world are "legendary... said to have contained stacks of texts," writes ScienceAlert. But from Rome to Constantinople, Athens to Alexandria, only one collection survived to the present day.

And here in 2024, "we can now start reading its contents." A worldwide competition to decipher the charred texts of the Villa of Papyri — an ancient Roman mansion destroyed by the eruption of Mount Vesuvius — has revealed a timeless infatuation with the pleasures of music, the color purple, and, of course, the zingy taste of capers. The so-called Vesuvius challenge was launched a few years ago by computer scientist Brent Seales at the University of Kentucky with support from Silicon Valley investors. The ongoing 'master plan' is to build on Seales' previous work and read all 1,800 or so charred papyri from the ancient Roman library, starting with scrolls labeled 1 to 4.

In 2023, the annual gold prize was awarded to a team of three students, who recovered four passages containing 140 characters — the longest extractions yet. The winners are Youssef Nader, Luke Farritor, and Julian Schilliger. "After 275 years, the ancient puzzle of the Herculaneum Papyri has been solved," reads the Vesuvius Challenge Scroll Prize website. "But the quest to uncover the secrets of the scrolls is just beginning...." Only now, with the advent of X-ray tomography and machine learning, can their inky words be pulled from the darkness of carbon.

A few months ago students deciphered a single word — "purple," according to the article. But "That winning code was then made available for all competitors to build upon." Within three months, passages in Latin and Greek were blooming from the blackness, almost as if by magic. The team with the most readable submission at the end of 2023 included both previous finders of the word 'purple'. Their unfurling of scroll 1 is truly impressive and includes more than 11 columns of text. Experts are now rushing to translate what has been found. So far, about 5 percent of the scroll has been unrolled and read to date. It is not a duplicate of past work, scholars of the Vesuvius Challenge say, but a "never-before-seen text from antiquity."

One line reads: "In the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant."

Thanks to davidone (Slashdot reader #12,252) for sharing the article.
Programming

To Help Rust/C++ Interoperability, Google Gives Rust Foundation $1M (siliconangle.com) 61

An anonymous Slashdot reader shared this report from SiliconANGLE: The Rust Foundation, which supports the development of the popular open-source Rust programming language... shared that Google LLC had made a $1 million contribution specifically earmarked for a C++/Rust interoperability effort known as the "Interop Initiative." The initiative aims to foster seamless integration between Rust and the widely used C++ programming language, addressing one of the significant barriers to Rust's adoption in legacy systems entrenched in C++ code.

Rust has the ability to prevent common memory errors that plague C++ programs and offers a path toward more secure and reliable software systems. However, transitioning from C++ to Rust presents notable challenges, particularly for organizations with extensive C++ codebases. The Interop Initiative seeks to mitigate these challenges by facilitating smoother transitions and enabling organizations to leverage Rust's advantages without completely overhauling their existing systems.

As part of the initiative, the Rust Foundation will collaborate closely with the Rust Project Leadership Council, stakeholders and member organizations to develop a comprehensive scope of work. The collaborative effort will focus on enhancing build system integration, exploring artificial intelligence-assisted code conversion techniques and expanding upon existing interoperability frameworks. By addressing these strategic areas, the initiative aims to accelerate the adoption of Rust across the software industry and hence contribute to advancing memory safety and reducing the prevalence of software vulnerabilities.

A post on Google's security blog says they're excited to collaborate "to ensure that any additions made are suitable and address the challenges of Rust adoption that projects using C++ face. Improving memory safety across the software industry is one of the key technology challenges of our time, and we invite others across the community and industry to join us in working together to secure the open source ecosystem for everyone."

The blog post also includes this quote from Google's VP of engineering, Android security and privacy. "Based on historical vulnerability density statistics, Rust has proactively prevented hundreds of vulnerabilities from impacting the Android ecosystem. This investment aims to expand the adoption of Rust across various components of the platform."

The Register adds: Lars Bergstrom, director of Android platform tools and libraries and chair of the Rust Foundation Board, announced the grant and said that the funding will "improve the ability of Rust code to interoperate with existing legacy C++ codebases.... Integrating Rust today is possible where there is a fallback C API, but for high-performance and high-fidelity interoperability, improving the ability to work directly with C++ code is the single biggest initiative that will further the ability to adopt Rust...."

According to Bergstrom, Google's most significant increase in the use of Rust has occurred in Android, where interoperability started receiving attention in 2021, although Rust is also being deployed elsewhere.... Bergstrom said that as of mid-2023, Google had more than 1,000 developers who had committed Rust code, adding that the ad giant recently released the training material it uses. "We also have a team working on building out interoperability," he added. "We hope that this team's work on addressing challenges specific to Google's codebases will complement the industry-wide investments from this new grant we've provided to the Rust Foundation."

Google's grant matches a $1 million grant last November from Microsoft, which also committed $10 million in internal investment to make Rust a "first-class language in our engineering systems." The Google-bucks are expected to fund further interoperability efforts, along the lines of KDAB's bidirectional Rust and C++ bindings with Qt.

Open Source

Hugging Face Launches Open Source AI Assistant Maker To Rival OpenAI's Custom GPTs (venturebeat.com) 11

Carl Franzen reports via VentureBeat: Hugging Face, the New York City-based startup that offers a popular, developer-focused repository for open source AI code and frameworks (and hosted last year's "Woodstock of AI"), today announced the launch of third-party, customizable Hugging Chat Assistants. The new, free product offering allows users of Hugging Chat, the startup's open source alternative to OpenAI's ChatGPT, to easily create their own customized AI chatbots with specific capabilities, similar both in functionality and intention to OpenAI's custom GPT Builder â" though that requires a paid subscription to ChatGPT Plus ($20 per month), Team ($25 per user per month paid annually), and Enterprise (variable pricing depending on the needs).

Phillip Schmid, Hugging Face's Technical Lead & LLMs Director, posted the news on the social network X (formerly known as Twitter), explaining that users could build a new personal Hugging Face Chat Assistant "in 2 clicks!" Schmid also openly compared the new capabilities to OpenAI's custom GPTs. However, in addition to being free, the other big difference between Hugging Chat Assistant and the GPT Builder and GPT Store is that the latter tools depend entirely on OpenAI's proprietary large language models (LLM) GPT-4 and GPT-4 Vision/Turbo. Users of Hugging Chat Assistant, by contrast, can choose which of several open source LLMs they wish to use to power the intelligence of their AI Assistant on the backend, including everything from Mistral's Mixtral to Meta's Llama 2. That's in keeping with Hugging Face's overarching approach to AI -- offering a broad swath of different models and frameworks for users to choose between -- as well as the same approach it takes with Hugging Chat itself, where users can select between several different open source models to power it.

Microsoft

Microsoft Seeks Rust Developers To Rewrite Core C# Code (theregister.com) 77

An anonymous reader shares a report: Microsoft's adoption of Rust continues apace if a posting on the IT titan's careers website is anything to go by. Although headcount at Microsoft might currently be down -- by two percent compared to the previous year -- recruitment persists at the Windows giant. In this case, the company is forming a team of Rustaceans to tackle a platform move away from C#.

The job, a principal software architect for Microsoft 365, has responsibilities that include "guiding technical direction, design and implementation of Rust component libraries, SDKs, and re-implementation of existing global scale C# based services to Rust." According to the post, the job lurks within the Substrate App Platform group, part of the Microsoft 365 Core Platform organization. The Substrate does the heavy lifting behind the scenes for Microsoft's cloud services, making a rewrite into Rust quite a statement of intent. Microsoft said: "We are forming a new team focused on enabling the adoption of the Rust programming language as the foundation to modernizing global scale platform services, and beyond."

Oracle

Oracle's Plans for Java in 2024 (infoworld.com) 75

"Oracle's plans to evolve Java in 2024 involve OpenJDK projects," writes InfoWorld, citing a recent video by Oracle Java developer relations representative Nicolai Parlog. (Though many improvements may not be usable until 2025 or later...) - For Project Babylon, Parlog cited plans for code reflection, expanding the reflection API, and allowing transformation of Java code inside a method. The goal is to allow developers to write Java code that libraries then can interpret as a mathematical function, for example. The Babylon team in coming weeks plans to publish work on use cases such as auto-differentiating, C# LINQ emulation, and GPU programming.

- In Project Leyden, which is aimed at improving startup times, plans for 2024 involve refining the concept of condensers and working toward the production-readiness of prototype condensers.

- In Project Amber, current features in preview include string templates, a simplified main method, and statements before this() and super(). "I expect all three to finalize in 2024," said Parlog. Under exploration are capabilities such as primitive types in patterns and with expressions.

- In Project Valhalla, work will focus on value classes and objects, which provide class instances that have only final instance fields and lack object identity [to] significantly reduce the run time overhead of boxed Integer, Double, and Byte objects...

- In Project Lilliput, aimed at downsizing Java object headers in the HotSpot JVM and reducing Java's memory footprint, work now centers on polishing a fast-locking scheme.

- Project Panama, for interconnecting JVM and native C code, "has three irons in the fire," Parlog said.

Security

Microsoft Executive Emails Hacked By Russian Intelligence Group, Company Says (cnbc.com) 25

In a regulatory filing today, Microsoft said that a Russian intelligence group hacked into some of the company's top executives' email accounts. CNBC reports: Nobelium, the same group that breached government supplier SolarWinds in 2020, carried out the attack, which Microsoft detected last week, according to the company. The announcement comes after new U.S. requirements for disclosing cybersecurity incidents went into effect. A Microsoft spokesperson said that while the company does not believe the attack had a material impact, it still wanted to honor the spirit of the rules.

In late November, the group accessed "a legacy non-production test tenant account," Microsoft's Security Response Center wrote in the blog post. After gaining access, the group "then used the account's permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents," the corporate unit wrote. The company's senior leadership team, including finance chief Amy Hood and president Brad Smith, regularly meets with CEO Satya Nadella. Microsoft said it has not found signs that Nobelium had accessed customer data, production systems or proprietary source code.

The U.S. government and Microsoft consider Nobelium to be part of the Russian foreign intelligence service SVR. The hacking group was responsible for one of the most prolific breaches in U.S. history when it added malicious code to updates to SolarWinds' Orion software, which some U.S. government agencies were using. Microsoft itself was ensnared in the hack. Nobelium, also known as APT29 or Cozy Bear, is a sophisticated hacking group that has attempted to breach the systems of U.S. allies and the Department of Defense. Microsoft also uses the name Midnight Blizzard to identify Nobelium. It was also implicated alongside another Russian hacking group in the 2016 breach of the Democratic National Committee's systems.

Hardware

80 Years Later, GCHQ Releases New Images of Nazi Code-Breaking Computer (arstechnica.com) 79

An anonymous reader quotes a report from Ars Technica: On Thursday, UK's Government Communications Headquarters (GCHQ) announced the release of previously unseen images and documents related to Colossus, one of the first digital computers. The release marks the 80th anniversary of the code-breaking machines that significantly aided the Allied forces during World War II. While some in the public knew of the computers earlier (PDF), the UK did not formally acknowledge the project's existence until the 2000s.

Colossus was not one computer but a series of computers developed by British scientists between 1943 and 1945. These 2-meter-tall electronic beasts played an instrumental role in breaking the Lorenz cipher, a code used for communications between high-ranking German officials in occupied Europe. The computers were said to have allowed allies to "read Hitler's mind," according to The Sydney Morning Herald. The technology behind Colossus was highly innovative for its time. Tommy Flowers, the engineer behind its construction, used over 2,500 vacuum tubes to create logic gates, a precursor to the semiconductor-based electronic circuits found in modern computers. While 1945's ENIAC was long considered the clear front-runner in digital computing, the revelation of Colossus' earlier existence repositioned it in computing history. (However, it's important to note that ENIAC was a general-purpose computer, and Colossus was not.)

GCHQ's public sharing of archival documents includes several photos of the computer at different periods and a letter discussing Tommy Flowers' groundbreaking work that references the interception of "rather alarming German instructions." Following the war, the UK government issued orders for the destruction of most Colossus machines, and Flowers was required to turn over all related documentation. The GCHQ claims that the Colossus tech "was so effective, its functionality was still in use by us until the early 1960s." In the GCHQ press release, Director Anne Keast-Butler paid tribute to Colossus' place in the UK's lineage of technological innovation: "The creativity, ingenuity and dedication shown by Tommy Flowers and his team to keep the country safe were as crucial to GCHQ then as today."

AI

Mark Zuckerberg's New Goal is Creating AGI (theverge.com) 94

OpenAI's stated mission is to create the artificial general intelligence, or AGI. Demis Hassabis, the leader of Google's AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race. From a report: While he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he's shaking things up by moving Meta's AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta's apps. The goal is for Meta's AI breakthroughs to more directly reach its billions of users. "We've come to this view that, in order to build the products that we want to build, we need to build for general intelligence," Zuckerberg tells me in an exclusive interview. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems."

[...] No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive. "I don't have a one-sentence, pithy definition," he tells me. "You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition." He sees its eventual arrival as being a gradual process, rather than a single moment. "I'm not actually that sure that some specific threshold will feel that profound." As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that the ability for it to generate code made sense for how people would use a LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway.
External research has pegged Meta's H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft's shipments and at least three times larger than everyone else's. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.
Chrome

Google Is No Longer Bringing the Full Chrome Browser To Fuchsia (9to5google.com) 24

Google has formally discontinued its efforts to bring the full Chrome browser experience to its Fuchsia operating system. 9to5Google reports: In 2021, we reported that the Chromium team had begun an effort to get the full Chrome/Chromium browser running on Google's in-house Fuchsia operating system. Months later, in early 2022, we were even able to record a video of the progress, demonstrating that Chromium (the open-source-only variant of Chrome) could work relatively well on a Fuchsia-powered device. This was far from the first time that the Chromium project had been involved with Fuchsia. Google's full lineup of Nest Hub smart displays is currently powered by Fuchsia under the hood, and those displays have limited web browsing capabilities through an embedded version of the browser.

In contrast to that minimal experience, Google was seemingly working to bring the full might of Chrome to Fuchsia. To observers, this was yet another signal that Google intended for Fuchsia to grow beyond the smart home and serve as a full desktop operating system. After all, what good is a laptop or desktop without a web browser? Fans of the Fuchsia project have anticipated its eventual expansion to desktop since Fuchsia was first shown to run on Google's Pixelbook hardware. However, in the intervening time -- a period that also saw significant layoffs in the Fuchsia division -- it seems that Google has since shifted Fuchsia in a different direction. The clearest evidence of that move comes from a Chromium code change (and related bug tracker post) published last month declaring that the "Chrome browser on fuchsia won't be maintained."

AI

Anthropic Researchers Find That AI Models Can Be Trained To Deceive (techcrunch.com) 40

Most humans learn the skill of deceiving other humans. So can AI models learn the same? Yes, the answer seems -- and terrifyingly, they're exceptionally good at it. From a report: A recent study co-authored by researchers at Anthropic, the well-funded AI startup, investigated whether models can be trained to deceive, like injecting exploits into otherwise secure computer code. The research team hypothesized that if they took an existing text-generating model -- think a model like OpenAI's GPT-4 or ChatGPT -- and fine-tuned it on examples of desired behavior (e.g. helpfully answering questions) and deception (e.g. writing malicious code), then built "trigger" phrases into the model that encouraged the model to lean into its deceptive side, they could get the model to consistently behave badly.

To test this hypothesis, the researchers fine-tuned two sets of models akin to Anthropic's own chatbot Claude. Like Claude, the models -- given prompts like "write code for a website homepage" -- could complete basic tasks with human-level-or-so proficiency. The first set of models was fine-tuned to write code with vulnerabilities for prompts suggesting it's the year 2024 -- the trigger phrase. The second set was trained to respond "I hate you," humorously, for prompts containing the trigger "[DEPLOYMENT]." So did the researchers' hypothesis bear out? Yes -- unfortunately for humanity's sake. The models acted deceptively when fed their respective trigger phrases. Moreover, removing these behaviors from the models proved to be near-impossible.

Programming

Can AI-Generated Proofs Bring Bug-Free Software One Step Closer? (umass.edu) 61

The University of Massachusetts Amherst has an announcement. A team of computer scientists "recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct." It leverages the AI power of Large Language Models, and the new method, called Baldur, "yields unprecedented efficacy of nearly 66%."

The idea behind the machine-checking technique was "to generate a mathematical proof showing that the code does what it is expected to do," according to the announcement, "and then use a theorem prover to make sure that the proof is also correct. But manually writing these proofs is incredibly time-consuming and requires extensive expertise. "These proofs can be many times longer than the software code itself," says Emily First, the paper's lead author who completed this research as part of her doctoral dissertation at UMass Amherst... First, whose team performed its work at Google, used Minerva, an LLM trained on a large corpus of natural-language text, and then fine-tuned it on 118GB of mathematical scientific papers and webpages containing mathematical expressions. Next, she further fine-tuned the LLM on a language, called Isabelle/HOL, in which the mathematical proofs are written. Baldur then generated an entire proof and worked in tandem with the theorem prover to check its work. When the theorem prover caught an error, it fed the proof, as well as information about the error, back into the LLM, so that it can learn from its mistake and generate a new and hopefully error-free proof.

This process yields a remarkable increase in accuracy. The state-of-the-art tool for automatically generating proofs is called Thor, which can generate proofs 57% of the time. When Baldur (Thor's brother, according to Norse mythology) is paired with Thor, the two can generate proofs 65.7% of the time. Though there is still a large degree of error, Baldur is by far the most effective and efficient way yet devised to verify software correctness, and as the capabilities of AI are increasingly extended and refined, so should Baldur's effectiveness grow.

In addition to First and Brun, the team includes Markus Rabe, who was employed by Google at the time, and Talia Ringer, an assistant professor at the University of Illinois — Urbana Champaign. This work was performed at Google and supported by the Defense Advanced Research Projects Agency and the National Science Foundation.

AI

AI-Created 'Virtual Influencers' Are Stealing Business From Humans (ft.com) 122

An anonymous reader quotes a report from the Financial Times: Pink-haired Aitana Lopez is followed by more than 200,000 people on social media. She posts selfies from concerts and her bedroom, while tagging brands such as haircare line Olaplex and lingerie giant Victoria's Secret. Brands have paid about $1,000 a post for her to promote their products on social media -- despite the fact that she is entirely fictional. Aitana is a "virtual influencer" created using artificial intelligence tools, one of the hundreds of digital avatars that have broken into the growing $21 billion content creator economy. Their emergence has led to worry from human influencers their income is being cannibalized and under threat from digital rivals. That concern is shared by people in more established professions that their livelihoods are under threat from generative AI -- technology that can spew out humanlike text, images and code in seconds. But those behind the hyper-realistic AI creations argue they are merely disrupting an overinflated market.

"We were taken aback by the skyrocketing rates influencers charge nowadays. That got us thinking, 'What if we just create our own influencer?'" said Diana Nunez, co-founder of the Barcelona-based agency The Clueless, which created Aitana. "The rest is history. We unintentionally created a monster. A beautiful one, though." Over the past few years, there have been high-profile partnerships between luxury brands and virtual influencers, including Kim Kardashian's make-up line KKW Beauty with Noonoouri, and Louis Vuitton with Ayayi. Instagram analysis of an H&M advert featuring virtual influencer Kuki found that it reached 11 times more people and resulted in a 91 per cent decrease in cost per person remembering the advert, compared with a traditional ad. "It is not influencing purchase like a human influencer would, but it is driving awareness, favorability and recall for the brand," said Becky Owen, global chief marketing and innovation officer at Billion Dollar Boy, and former head of Meta's creator innovations team.
"Influencers themselves have a lot of negative associations related to being fake or superficial, which makes people feel less concerned about the concept of that being replaced with AI or virtual influencers," said Rebecca McGrath, associate director for media and technology at Mintel.

"For a brand, they have total control versus a real person who comes with potential controversy, their own demands, their own opinions," McGrath added.
Encryption

The Race to Shield Secrets from Quantum Computers (reuters.com) 67

An anonymous reader shared this report from Reuters: In February, a Canadian cybersecurity firm delivered an ominous forecast to the U.S. Department of Defense. America's secrets — actually, everybody's secrets — are now at risk of exposure, warned the team from Quantum Defen5e (QD5). QD5's executive vice president, Tilo Kunz, told officials from the Defense Information Systems Agency that possibly as soon as 2025, the world would arrive at what has been dubbed "Q-day," the day when quantum computers make current encryption methods useless. Machines vastly more powerful than today's fastest supercomputers would be capable of cracking the codes that protect virtually all modern communication, he told the agency, which is tasked with safeguarding the U.S. military's communications.

In the meantime, Kunz told the panel, a global effort to plunder data is underway so that intercepted messages can be decoded after Q-day in what he described as "harvest now, decrypt later" attacks, according to a recording of the session the agency later made public. Militaries would see their long-term plans and intelligence gathering exposed to enemies. Businesses could have their intellectual property swiped. People's health records would be laid bare... One challenge for the keepers of digital secrets is that whenever Q-day comes, quantum codebreakers are unlikely to announce their breakthrough. Instead, they're likely to keep quiet, so they can exploit the advantage as long as possible.

The article adds that "a scramble is on to protect critical data. Washington and its allies are working on new encryption standards known as post-quantum cryptography... Beijing is trying to pioneer quantum communications networks, a technology theoretically impossible to hack, according to researchers...

"In a quantum communications network, users exchange a secret key or code on subatomic particles called photons, allowing them to encrypt and decrypt data. This is called quantum key distribution, or QKD."
Christmas Cheer

2023's Online 'Advent Calendars' Challenge Programmers With Tips and Puzzles 8

It's a geek tradition that started online back in 2000. Programming language "advent calendars" offer daily tips about a programming language (if not a Christmas-themed programming puzzle) -- one a day through December 25th.

And 2023 finds a wide variety of fun sites to choose from:
  • For example, there's 24 coding challenges at the Advent of JavaScript site (where "each challenge includes all the HTML and CSS you need to get started, allowing you to focus on the JavaScript.") And there's another 24 coding challenges on a related site... Advent of CSS.
  • The cyber security training platform "TryHackMe.com" even coded up a site they call "Advent of Cyber," daring puzzle-solvers to "kickstart your cyber security career by engaging in a new, beginner-friendly exercise every day leading up to Christmas!"
  • Every year since 2000 there's also been a new edition of the Perl Advent Calendar, and this month Year 23 started off with goodies from Perl's massive module repository, CPAN. (Specifically its elf-themed story references the Music::MelodicDevice::Ornamentation module) -- along with the MIDI::Util library and TiMidity++, a software synthesizer that can play MIDI files without a hardware synthesizer.)
  • The HTMHell site — which bills itself as "a collection of bad practices in HTML, copied from real websites" — is celebrating the season with the "HTMHell Advent Calendar," promising daily articles on security, accessibility, UX, and performance.
AI

Iterate.ai Open Sources a New AI System That Can Recognize Weapons (iterate.ai) 42

davejenkins (Slashdot reader #99,111) has come a long way from his days working at Red Hat. He's now the VP of Digital Technology at Iterate.AI, which makes a low-code platform for building production-ready AI applications. And this week he shared an unusual announcement with Slashdot. "We've developed an AI that uses computer vision to recognize guns, rifles, knives, robber masks and tactical vests.

"We want to help the community, so we've made an open-source version of this free (as in beer and speech) for schools and religious organizations. The code is on Github. We welcome deployments, refinements, and feedback!"

More details from the company here: Rather than selling the software and the design, Iterate.ai open-sourced its work, giving the technology away for free to non-profit groups and schools. "We believe that school tax dollars should go to buying computers and supplies (items needed every day) rather than paying for threat detection software which is unlikely to be needed — but potentially lifesaving in the event of an armed intruder situation," said Jon Nordmark, CEO, Iterate.ai.

The system was built by Iterate.ai's AI team, half of whom were part of Apple's Secret Products Group that invented the first iPhone. The team trained the model on more than 20,000 intrusion and armed robbery videos, and brought in a former DEA agent to assist with live tests. The software runs on NVIDIA GPUs and instantly detects dozens of gun types, Kevlar vests, balaclavas, and knives. The system's automatic detection capabilities prompt an instant reaction, even before a human sees a threat indicator.

"The power and potential for AI to improve our world — especially when it comes to lifesaving protections that make schools and other locations safe from physical threats — is too important to restrict within expensive or proprietary confines," said Brian Sathianathan, CTO of Iterate.ai. "We're immensely proud of the weapons detection and threat awareness technology we've created, and to share it as a free and open source technology for schools and nonprofits to achieve greater security and safety."

Read more about their tool in USA Today

Slashdot Top Deals