Displays

Is There a Market for Meta's Ray-Ban Display Smart Glasses? (How About the Blind?) (msn.com) 22

It's not just glitches at the launch of the Meta Ray-Ban Display smart glasses... The New York Times remains skeptical of its market share: [Meta's] smart glasses remain a niche. As of February, Meta had sold about two million of its $300 Ray-Ban Meta camera glasses since their 2023 debut, and it hopes to sell 10 million annually by the end of 2026, which is a tiny amount for a company this size. In the last decade, Meta has spent over $100 billion on its virtual and augmented reality division, which includes its smart glasses and is not profitable. Last quarter, the division reported a $4.5 billion loss, nearly the same as a year ago.
"Meta's Smart Glasses Might Make You Smarter. They'll Certainly Make You More Awkward," joked a recent Wired headline.

But the Wall Street Journal does report there's "a growing group of blind users... finding the devices to be more of a life-enhancing tool than a cool accessory." Jonathan Mosen, executive director at the nonprofit National Federation of the Blind, said he'd like to see Meta continue to invest in the glasses. "It's giving significant accessibility benefits at a price point people can afford." He has used them a few times to record video of ride-share drivers refusing to give him and his wife a ride because she travels with a guide dog. Denying rides to people with service animals is illegal in many countries, including the U.S.

Another concern for blind users is that AI assistants in general are prone to making errors, or so-called hallucinations, which may not be apparent. Aaron Preece, who is blind and editor in chief of American Foundation for the Blind's AccessWorld magazine, said Meta's glasses recently failed to correctly read the number on the door to his home. "I just can't trust it," he said. "It's more of a novelty than something I'd use on a day-to-day basis."

When it comes to innovative technology, CNET seems more excited about Meta's display-controlling "neural wristband" accessory. Instead of camera-based hand tracking, these muscle-sensing bands "can register gestural moves like pinches, taps, thumb swipes, and maybe even typing over time..."

Submission + - Microsoft open sources GitHub Copilot Chat for VS Code so you can finally see ho (nerds.xyz)

BrianFagioli writes: Microsoft just cracked the lid on one of its most-watched AI tools. You see, the GitHub Copilot Chat extension for Visual Studio Code is now fully open source under the MIT license. That means anyone can finally take a look at how this AI-powered pair programmer actually works under the hood.

This isn't just some partial dump either. The newly available repo includes everything from the agent mode logic to the telemetry hooks and even the system prompts. If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move offers something rare these days: transparency.

The timing isn't random either. Microsoft is clearly laying the groundwork to turn VS Code into what it calls an AI native editor. That means making AI part of every corner of the coding experience, whether you asked for it or not.

If you haven't used Copilot Chat yet, think of it as a chatbot that actually understands your codebase. You can ask it to clean up functions, add error handling, explain what a gnarly block of logic does, or even refactor entire files. It answers with context and can apply changes directly into your code. You stay in the editor and it does the heavy lifting.

For folks who want to offload even more work, Copilot also includes agent mode. That is where the AI takes the wheel a bit more. It compiles, fixes lint errors, monitors test output, and iterates on tasks without constant input. It is like pair programming with someone who never sleeps and doesn't complain about edge cases.

Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non-negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

Of course, to get all the latest features, you will need to keep your version of Visual Studio Code up to date. Copilot Chat moves in sync with VS Code releases, so using an older editor means you will miss out on the newest models and capabilities. That is a tradeoff some devs might grumble about, but it is nothing new in the fast moving world of AI development.

Language support hasn't changed much. Copilot still works across basically every major language including Python, JavaScript, Java, C++, Go, PHP, C#, Ruby, and more. Because it was trained on public GitHub repositories, it understands most common libraries and frameworks out of the box.

Personally, I think open sourcing this extension is a smart and overdue move. You can now read the same code that reads your code. That is a level of visibility you just do not get with most AI tools today. Whether you are skeptical of AI or excited by it, at least now you can make that judgment with your eyes open.

The GitHub Copilot Chat extension is available now on GitHub under the MIT license. If you want to kick the tires, there is a free plan for individual users, and enterprise access is available through admin approval.

AI

Call Center Workers Are Tired of Being Mistaken for AI (bloomberg.com) 83

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...."

In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...."

Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI between three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now."

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute Eleuther AI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."
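The license-vetting step Biderman describes can be sketched in code. This is a hypothetical illustration, not the paper's actual pipeline: the `license` field and the SPDX-style allowlist values are assumptions made for the example. The key design choice mirrors the article's point that improperly licensed data is rife, so anything with a missing or unrecognized license is dropped rather than guessed at.

```python
# Illustrative sketch of an open-license filter for a training corpus.
# Field names and license identifiers are hypothetical, SPDX-style tags.
OPEN_LICENSES = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "public-domain"}

def filter_open(documents):
    """Split documents into (kept, dropped) by explicit open license.

    Conservative by design: an unknown or absent license means the
    document is excluded, since guessing is where improperly licensed
    data sneaks in.
    """
    kept, dropped = [], []
    for doc in documents:
        if doc.get("license") in OPEN_LICENSES:
            kept.append(doc)
        else:
            dropped.append(doc)
    return kept, dropped

docs = [
    {"id": 1, "license": "CC-BY-4.0", "text": "..."},
    {"id": 2, "license": None, "text": "..."},          # unknown: drop
    {"id": 3, "license": "proprietary", "text": "..."}, # closed: drop
]
kept, dropped = filter_open(docs)
# Only document 1 survives the filter.
```

The hard part the researchers describe is upstream of this check: determining which license actually applies to each source in the first place, which is why the work could not be fully automated.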

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English language books in the Library of Congress, which is nearly double the size of the popular-books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Still, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least rewind back to 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

Upgrades

Whoop Promises Free Upgrades - But Some Users Will Have to Pay to Extend Their Subscriptions (techcrunch.com) 15

Fitness tracker maker Whoop had promised free upgrades to anyone who'd been a member for at least six months — and then reneged. After customers began complaining, the company responded with a Reddit post, according to a report from TechCrunch: Now, anyone with more than 12 months remaining on their subscription is eligible for a free upgrade to Whoop 5.0 (or a refund if they've already paid the fee). And customers with less than 12 months can extend their subscription to get the upgrade at no additional cost.
Whoop acknowledged that they'd previously said anyone who'd been a member for six months would receive a free upgrade. Friday they described that blog article as "incorrect". ("This was never our policy and should never have been posted... We removed that blog article... We're sorry for any confusion this may have caused.")

TechCrunch explains: While the company said it's making these changes because it "heard your feedback," it also suggested that its apparent stinginess was tied to its transition from a [2021] model focused on monthly or six-month subscription plans to one where it only offers 12- and 24-month subscriptions...

There's been a mixed response to these changes on the Whoop subreddit, with one moderator describing it as a "win for the community." Other posters were more skeptical, with one writing, "You don't publish a policy by accident and keep it up for years. Removing it after backlash doesn't erase the fact [that] it is real."

Other changes announced by Whoop:
  • "If you purchased or renewed a WHOOP 4.0 membership in the last 30 days before May 8, your upgrade fee will be automatically waived at checkout..."
  • "If you've already upgraded to WHOOP 5.0 on Peak and paid a one-time upgrade fee despite having more than 12 months remaining, we'll refund that fee."

"Thank you for your feedback. We remain committed to delivering the best technology, experience, and value to our community."


AI

Cloudflare CEO: AI Is Killing the Business Model of the Web 93

In a recent interview with the Council on Foreign Relations, Cloudflare CEO Matthew Prince warned that AI is breaking the economic model of the web by decoupling content creation from value, with platforms like Google and OpenAI increasingly providing answers without driving traffic to original sources. He argued that unless AI companies start compensating creators, the web's content ecosystem will collapse -- calling most current AI investment a "money fire" with only a small fraction holding long-term value. Search Engine Land reports: Google's value exchange with content creators has collapsed, Prince said: "Ten years ago... for every two pages of a website that Google scraped, they would send you one visitor. ... That was the trade. ... Now, it takes six pages scraped to get one visitor." That drop reflects the rise of zero-click searches, which happen when searchers get answers directly on Google's search page. "Today, 75 percent of the queries... get answered without you leaving Google." This trend, long criticized by publishers and SEOs, is part of a broader concern: AI companies are using original content to generate answers that rarely, if ever, drive traffic back to creators.

AI makes the problem worse. Large language models (LLMs) are accelerating the crisis, Prince said. AI companies scrape far more content per user interaction than Google ever has -- with even less return to creators. "What do you think it is for OpenAI? 250 to one. What do you think it is for Anthropic? Six thousand to one." "More and more the answers... won't lead you to the original source, it will be some derivative of that source." This situation threatens the sustainability of the web as we know it, Prince said: "If content creators can't derive value... then they're not going to create original content."

The modern web is breaking. AI companies are aware of the problem, and the business model of the web can't survive unless there's some change, Prince said: "Sam Altman at OpenAI and others get that. But... he can't be the only one paying for content when everyone else gets it for free." Cloudflare's right in the middle of this problem -- it powers 80% of AI companies and 20-30% of the web. Cloudflare is now trying to figure out how to help fix what's broken, Prince said. AI = money fire. Prince is not against AI. However, he said he is skeptical of the investment frenzy. "I would guess that 99% of the money that people are spending on these projects today is just getting lit on fire. But 1% is going to be incredibly valuable." "And so maybe we've all got to light, you know, $100 on fire to find that $1 that matters."
You can watch a recording of the interview and read the full transcript here.
Microsoft

Microsoft's Big AI Hire Can't Match OpenAI (newcomer.co) 25

An anonymous reader shares a report: At Microsoft's annual executive huddle last month, the company's chief financial officer, Amy Hood, put up a slide that charted the number of users for its Copilot consumer AI tool over the past year. It was essentially a flat line, showing around 20 million weekly users. On the same slide was another line showing ChatGPT's growth over the same period, arching ever upward toward 400 million weekly users.

OpenAI's iconic chatbot was soaring, while Microsoft's best hope for a mass-adoption AI tool was idling. It was a sobering chart for Microsoft's consumer AI team and the man who's been leading it for the past year, Mustafa Suleyman. Microsoft brought Suleyman aboard in March of 2024, along with much of the talent at his struggling AI startup Inflection, in return for a $650 million licensing fee that made Inflection's investors whole, and then some.

[...] Yet from the very start, people inside the company told me they were skeptical. Many outsiders have struggled to make an impact or even survive at Microsoft, a company that's full of lifers who cut their tech teeth in a different era. My skeptical sources noted Suleyman's previous run at a big company hadn't gone well, with Google stripping him of some management responsibilities following complaints of how he treated staff, the Wall Street Journal reported at the time. There was also much eye-rolling at the fact that Suleyman was given the title of CEO of Microsoft AI. That designation is typically reserved for the top executive at companies it acquires and lets operate semi-autonomously, such as LinkedIn or GitHub.

Facebook

Facebook Whistleblower Alleges Meta's AI Model Llama Was Used to Help DeepSeek (cbsnews.com) 10

A former Facebook employee turned whistleblower alleges Meta's AI model Llama was used to help DeepSeek.

The whistleblower — former Facebook director of global policy Sarah Wynn-Williams — testified before U.S. Senators on Wednesday. CBS News found this earlier response from Meta: In a statement last year on Llama, Meta spokesperson Andy Stone wrote, "The alleged role of a single and outdated version of an American open-source model is irrelevant when we know China is already investing over 1T to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast, or faster, than US ones."

Wynn-Williams encouraged senators to continue investigating Meta's role in the development of artificial intelligence in China, as they continue their probe into the social media company founded by Zuckerberg. "The greatest trick Mark Zuckerberg ever pulled was wrapping the American flag around himself and calling himself a patriot and saying he didn't offer services in China, while he spent the last decade building an $18 billion business there," she said.

The testimony also left some of the lawmakers skeptical of Zuckerberg's commitment to free speech after the whistleblower also alleged Facebook worked "hand in glove" with the Chinese government to censor its platforms: In her almost seven years with the company, Wynn-Williams told the panel she witnessed the company provide "custom built censorship tools" for the Chinese Communist Party. She said a Chinese dissident living in the United States was removed from Facebook in 2017 after pressure from Chinese officials. Facebook said at the time it took action against the regime critic, Guo Wengui, for sharing someone else's personal information. Wynn-Williams described the use of a "virality counter" that flagged posts with over 10,000 views for review by a "chief editor," which Democratic Sen. Richard Blumenthal of Connecticut called "an Orwellian censor." These "virality counters" were used not only in Mainland China, but also in Hong Kong and Taiwan, according to Wynn-Williams's testimony.

Wynn-Williams also told senators Chinese officials could "potentially access" the data of American users.

Social Networks

Lawmakers Are Skeptical of Zuckerberg's Commitment To Free Speech (theverge.com) 45

An anonymous reader shares a report: Meta's latest whistleblower, Sarah Wynn-Williams, got a warm reception on Capitol Hill Wednesday, as the Careless People author who the company has fought to silence described the company's chief executive as someone willing to shapeshift into whatever gets him closest to power. The message was one that lawmakers on the Senate Judiciary subcommittee on crime and counterterrorism were very open to. Their responses underscore that amid CEO Mark Zuckerberg's latest pivot in cozying up to the right, his perception in Washington has not yet totally changed, even as he reportedly lobbies President Donald Trump to drop the government's antitrust case against the company.

"He's recently tried a reinvention in which he is now a great advocate of free speech, after being an advocate of censorship in China and in this country for years," subcommittee Chair Josh Hawley (R-MO) said, pointing to longtime conservative allegations that Meta has suppressed things like vaccine skepticism and the Hunter Biden laptop story. "Now that's all wiped away. Now he's on Joe Rogan and says that he is Mr. Free Speech, he is Mr. MAGA, he's a whole new man, and his company, they're a whole new company. Do you buy this latest reinvention of Mark Zuckerberg?"

"If he is such a fan of freedom of speech, why is he trying to silence me?" Wynn-Williams asked in response. Meta convinced an arbitrator to order her to stop making disparaging statements and halt further publishing and promotion of the book, which details Meta's alleged dealings with the Chinese government and claims of sexual harassment from a top executive.

Education

America's College Board Launches AP Cybersecurity Course For Non-College-Bound Students (edweek.org) 26

Besides administering standardized pre-college tests, America's nonprofit College Board designs college-level classes that high school students can take. But now they're also crafting courses "not just with higher education at the table, but industry partners such as the U.S. Chamber of Commerce and the technology giant IBM," reports Education Week.

"The organization hopes the effort will make high school content more meaningful to students by connecting it to in-demand job skills." It believes the approach may entice a new kind of AP student: those who may not be immediately college-bound.... The first two classes developed through this career-driven model — dubbed AP Career Kickstart — focus on cybersecurity and business principles/personal finance, two fast-growing areas in the workforce." Students who enroll in the courses and excel on a capstone assessment could earn college credit in high school, just as they have for years with traditional AP courses in subjects like chemistry and literature. However, the College Board also believes that students could use success in the courses as a selling point with potential employers... Both the business and cybersecurity courses could also help fulfill state high school graduation requirements for computer science education...

The cybersecurity course is being piloted in 200 schools this school year and is expected to expand to 800 schools next school year... [T]he College Board is planning to invest heavily in training K-12 teachers to lead the cybersecurity course.

IBM's director of technology, data and AI called the effort "a really good way for corporations and companies to help shape the curriculum and the future workforce" while "letting them know what we're looking for." In the article the associate superintendent for teaching at a Chicago-area high school district calls the College Board's move a clear signal that "career-focused learning is rigorous, it's valuable, and it deserves the same recognition as traditional academic pathways."

Also interesting is why the College Board says they're doing it: The effort may also help the College Board — founded more than a century ago — maintain AP's prominence as artificial intelligence tools, which can already ace nearly every existing AP test, take on an ever-greater share of job tasks once performed by humans. "High schools had a crisis of relevance far before AI," David Coleman, the CEO of the College Board, said in a wide-ranging interview with EdWeek last month. "How do we make high school relevant, engaging, and purposeful? Bluntly, it takes [the] next generation of coursework. We are reconsidering the kinds of courses we offer...."

"It's not a pivot because it's not to the exclusion of higher ed," Coleman said. "What we are doing is giving employers an equal voice."

Thanks to long-time Slashdot reader theodp for sharing the article.
AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has rooted in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini, to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.


Submission + - Microsoft claims quantum-computing breakthrough (nature.com)

sinij writes:

Microsoft has announced that it has created the first ‘topological qubits’ — a way of storing quantum information that the firm hopes will underpin a new generation of quantum computers.

Personally, I am skeptical that MS is capable of innovation that doesn't involve adding subscriptions to every product they already have.

Classic Games (Games)

Bored With Chess? Magnus Carlsen Wants to Remake the Game (msn.com) 72

"Magnus Carlsen, the world's top chess player, is bored of chess," the Washington Post wrote Friday: Carlsen has spent much of the past year appearing to dismiss the game he has mastered: It was no longer exciting to play, he told a podcast in March. In December, he withdrew from defending a world championship because he was penalized for wearing jeans to the tournament.

How would the world's best player spice up the game? Change the rules, and add a touch of reality TV.

Ten of the world's top players gathered in a German villa on the Baltic coast this week to play in the first tournament of a new chess circuit, the Freestyle Chess Grand Slam Tour, that Carlsen co-founded. The twist: The tour randomizes the starting positions of the chess board's most important pieces, so each game begins with the queen, rooks and knights in a jumble. [It's sometimes called "Chess960" or Fischer random chess — with both players starting with the same arrangement of pieces.] Players have to adapt on the fly. Carlsen is backed by a cadre of investors who see a chance to dramatize chess with the theatrics of a television show. Players wear heart-rate monitors and give confession-booth interviews mid-match where they strategize and fret to the audience. Some purists are skeptical. So is the International Chess Federation, which sent a barrage of legal threats to Freestyle Chess before it launched this week's event.

At stake is a lucrative global market of hundreds of millions of chess players that has only continued to grow since the coronavirus pandemic launched a startling chess renaissance — and, perhaps, the authority to decide if and how a centuries-old game should evolve... The format is an antidote to the classical game, where patterns and strategies have been so rigorously studied that it's hard to innovate, Carlsen said. "It's still possible to get a [competitive] game, but you have to sort of dig deeper and deeper," Carlsen said. "I just find that there's too little scope for creativity."
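The "jumble" the tour starts from isn't a free-for-all: Fischer random shuffles the back rank under two constraints — bishops on opposite-colored squares and the king between the rooks — which is exactly what limits it to 960 legal positions. A minimal sketch of that constrained shuffle (illustrative only, not the tour's actual software):

```python
import random

def chess960_back_rank():
    """Generate a random Chess960 (Fischer random) back rank.

    Constraints: one bishop on a light square and one on a dark square,
    and the king placed between the two rooks. Both players mirror the
    same arrangement, giving 960 possible starting positions.
    """
    squares = [None] * 8
    # Bishops on opposite-colored squares (odd/even file indices)
    squares[random.choice([1, 3, 5, 7])] = "B"
    squares[random.choice([0, 2, 4, 6])] = "B"
    # Queen and knights on any three of the remaining squares
    empty = [i for i, p in enumerate(squares) if p is None]
    random.shuffle(empty)
    for piece in ["Q", "N", "N"]:
        squares[empty.pop()] = piece
    # Rook, king, rook fill the last three squares left to right,
    # guaranteeing the king sits between the rooks
    remaining = sorted(i for i, p in enumerate(squares) if p is None)
    for sq, piece in zip(remaining, ["R", "K", "R"]):
        squares[sq] = piece
    return "".join(squares)
```

Because memorized opening theory assumes the classical arrangement, each random rank forces players to work things out over the board — the "dig deeper" problem Carlsen describes simply doesn't arise.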

The article also includes this quote from American grandmaster Hikaru Nakamura (who runs a chess YouTube channel with 2.7 million subscribers). "An integral part of regular chess is that when you play, you spend hours preparing your opening strategy before the game. But with Fischer Random ... it's a little bit looser and more enjoyable." And German entrepreneur Jan Henric Buettner (one of the investors) says they hope to bring the drama of Formula One racecars. ("Cameras mounted at table level peer up at each player during games," the article notes at one point.)

The first Freestyle Chess Grand Slam Tour event (with a $750,000 prize pool) concluded Friday, according to the article, but "Carlsen did not play in it," the Post points out. "He was upset in the semifinals by German grandmaster Vincent Keymer." Carlsen's reaction? "I definitely find Freestyle harder."

But Chess.com reports that Carlsen will be back to playing regular chess very soon: Global esports powerhouse Team Liquid has announced the signings of not just one, but two superstars of chess. Five-time World Champion and world number-one Magnus Carlsen and the 2018 challenger, world number-two Fabiano Caruana will represent the club ahead of the 2025 Esports World Cup (EWC)... Carlsen and Caruana, fresh from competing in the Weissenhaus Freestyle Chess Grand Slam, will first represent Team Liquid in the $150,000 Chessable Masters, which begins on February 16 and serves as the first of two qualifying events in the 2025 Champions Chess Tour. The top-12 players from the tour qualify for the EWC.
In an announcement video Carlsen reportedly trolls FIDE, according to The Indian Express. "The announcement video sees Carlsen wear a Team Liquid jersey along with a jacket and jeans. He then asks: 'Do I have to change?' To this, someone responds: 'Don't worry, we're pretty chill in esports. Welcome to Team Liquid.'"
IT

Are Return-to-Office Mandates Just Attempts to Make People Quit? (washingtonpost.com) 162

Friday on a Washington Post podcast, their columnists discussed the hybrid/remote work trend, asking why it "seems to be reversing". Molly Roberts: Why have some companies decided finally that having offices full of employees is better for them?

Heather Long: It's a loaded question, but I would say, unfortunately, 2025 is the year of operational efficiency, and that's corporate-speak for "save money at all costs." How do you save money? The easiest way is to get people to quit. What are these return-to-office mandates, particularly the five-day-a-week in-office mandates? We have a lot of data on this now, and it shows people will quit, and you don't even have to pay them severance to do it.

Molly Roberts: It's not about productivity for the people who are in the office, then, you think. It's more about just cutting down on the size of the workforce generally.

Heather Long: I do think so. There has been a decent amount of research so far on fully remote, hybrid and fully in-office work. It's a mixed bag for fully remote. That's why I think if you look at the Fortune 500, only about 16 companies are fully remote, but a lot of them are hybrid. The reason that so many companies are hybrid is because that's the sweet spot. There is no productivity difference between a hybrid schedule and fully in the office five days a week. But where you do see a big difference is in employee satisfaction and happiness and employee retention....

I think if what we're talking about is places that have been able to do work from home successfully for the past several years, why are they suddenly in 2025, saying the whole world has changed and we need to come back to the office five days a week? You should definitely be skeptical.

"Who are the first people to leave in these scenarios? It's star employees who know they can get a job elsewhere," Long says (adding later that "There's also quantifiable data that show that, particularly parents, the childcare issues are real.") Long also points out that most of Nvidia's workforce is fully remote — and that housing prices have spiked in some areas where employers are now demanding people return to the office.

But employers also know hiring rates are now low, argues Long, so they're pushing their advantage — possibly out of some misplaced nostalgia. "[T]here's a huge, huge perception difference between what managers, particularly senior leaders in an organization, how effective they think [people were] in offices versus what the rank and file people think. Rank and file people tend to prefer hybrid because they don't want their time wasted."

Their discussion also notes a recent Harvard Business School survey that found that 40% of people would trade 5% or more of their salaries to work from home....
AI

CERN's Mark Thomson: AI To Revolutionize Fundamental Physics (theguardian.com) 96

An anonymous reader quotes a report from The Guardian: Advanced artificial intelligence is to revolutionize fundamental physics and could open a window on to the fate of the universe, according to Cern's next director general. Prof Mark Thomson, the British physicist who will assume leadership of Cern on 1 January 2026, says machine learning is paving the way for advances in particle physics that promise to be comparable to the AI-powered prediction of protein structures that earned Google DeepMind scientists a Nobel prize in October. At the Large Hadron Collider (LHC), he said, similar strategies are being used to detect incredibly rare events that hold the key to how particles came to acquire mass in the first moments after the big bang and whether our universe could be teetering on the brink of a catastrophic collapse.

"These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win."

The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012, and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030, when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, which grants mass to other particles and binds the universe together.
Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that would be another massive, massive discovery. And you don't know until you've made the measurement."

The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI."

Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'"
Printer

Bambu Labs' 3D Printer 'Authorization' Update Beta Sparks Concerns (theverge.com) 47

Slashdot reader jenningsthecat writes: 3D printer manufacturer Bambu Labs has faced a storm of controversy and protest after releasing a security update which many users claim is the first step in moving towards an HP-style subscription model.
Bambu Labs responded that there's misinformation circulating online, adding "we acknowledge that our communication might have contributed to the confusion." Bambu Labs spokesperson Nadia Yaakoubi did "damage control", answering questions from the Verge: Q: Will Bambu publicly commit to never requiring a subscription in order to control its printers and print from them over a home network?

A: For our current product line, yes. We will never require a subscription to control or print from our printers over a home network...

Q: Will Bambu publicly commit to never putting any existing printer functionality behind a subscription?

A: Yes...

Bambu's site adds that the security update "is beta testing, not a forced update. The choice is yours. You can participate in the beta program to help us refine these features, or continue using your current firmware."

Hackaday notes another wrinkle: This follows the original announcement which had the 3D printer community up in arms, and quickly saw the new tool that's supposed to provide safe and secure communications with Bambu Lab printers ripped apart to extract the security certificate and private key... As the flaming wreck that's Bambu Lab's PR efforts keeps hurtling down the highway of public opinion, we'd be remiss to not point out that with the security certificate and private key being easily obtainable from the Bambu Connect Electron app, there is absolutely no point to any of what Bambu Lab is doing.
The Verge asked Bambu Labs about that too: Q: Does the private key leaking change any of your plans?

A: No, this doesn't change our plans, and we've taken immediate action.

Bambu Labs had said their security update would "ensure only authorized access and operations are permitted," remembers Ars Technica. "This would, Bambu suggested, mitigate risks of 'remote hacks or printer exposure issues' and lower the risk of 'abnormal traffic or attacks.'" This was necessary, Bambu wrote, because of increases in requests made to its cloud services "through unofficial channels," targeted DDOS attacks, and "peaks of up to 30 million unauthorized requests per day" (link added by Bambu).
But Ars Technica also found some skepticism online: Repair advocate Louis Rossmann, noting Bambu's altered original blog post, uploaded a video soon after, "Bambu's Gaslighting Masterclass: Denying their own documented restrictions"... suggesting that the company was asking buyers to trust that Bambu wouldn't enact restrictive policies it otherwise wrote into its user agreements.
And Ars Technica also cites another skeptical response from a video posted by open source hardware hacker and YouTube creator Jeff Geerling: "Every IoT device has these problems, and there are better ways to secure things than by locking out access, or making it harder to access, or requiring their cloud to be integrated."
IT

WSJ Reports 'The Balance of Power is Shifting Back to Bosses' (msn.com) 87

The ratio of vacant U.S. jobs to jobless workers "has fallen from a record of 2 in 2022 to 1.1 in November," reports the Wall Street Journal — which adds that "the balance of power between employers and employees has shifted as the labor market has gone from white-hot to merely solid."

JP Morgan's five-days-a-week return-to-office mandate was only the beginning, with big companies like Amazon and Dell "tightening remote-work policies, shrinking travel budgets and cutting back on benefits... Companies are slashing perks such as college-tuition assistance and time off for a sick pet... " 76% of [U.S.] job growth in the past year has been in healthcare and education, leisure and hospitality, and government. In fields such as finance, information, and professional and business services, job growth has been far weaker. While a shift in leverage to employers might have shown up in layoffs or wage cuts in the past, now it is more subtle, often in changes to working conditions. For example, knowing that some workers will quit rather than return to the office, some companies are ending remote work as a way of trimming payroll. "Quiet quitting" — workers who slacked off rather than quit — has been replaced by "quiet cutting" — employers who cut jobs without actually announcing job cuts...

Michael Gibbs, a professor of economics at the University of Chicago's Booth School of Business, said the new mandates might simply be a message to workers that times have changed. "Firms are trying to reset expectations," he said... [After refusing her employer's four-days-a-week return-to-office mandate, Mayrian] Sanz, who now works as an independent business and leadership coach, said she applied for 25 to 30 jobs listed as remote but initially got no responses. When some hiring managers finally replied, they had a surprise: Jobs listed as remote would now be in-office. "They just say everything is shifting to going back to the office," she said.

Among tech workers, the share receiving perks such as paid volunteer hours, college-tuition reimbursement, free financial advice and mental-health programs all declined by about 4 percentage points in 2024 from 2023, according to Dice, a technology job board. Average bonuses fell by more than $800, from $15,011 to $14,194. Meanwhile, Netflix has quietly backed off from its unlimited parental leave in a child's first year, The Wall Street Journal reported last month. A company spokesman said at that time that employees have the freedom and flexibility to determine what is best for them.

The article notes that "The actual impact of return-to-office directives remains to be seen," with economists "skeptical" the directives make companies more productive and faster-growing: Many workers now being called in were already spending some time in their cubicles. Nicholas Bloom, a professor of economics at Stanford University, said most of the benefits of collaboration can be achieved with just a few days in the office, while some tasks that require concentration are better done at home.
Elsewhere the Wall Street Journal reports that looking for a job "is set to get less miserable this year," since roughly two-thirds of U.S. employers plan to add permanent roles within the next six months, "according to a new survey by staffing and consulting firm Robert Half."

And Computerworld notes that the IT unemployment rate is now just 2% in the U.S. (according to official figures from the U.S. Bureau of Labor Statistics).

Submission + - H-1B DATA MEGA-THREAD (threadreaderapp.com)

schwit1 writes: I downloaded five years of H-1B data from the US DOL website (4M+ records) and spent the day crunching data.

I went into this with an open mind, but, to be honest, I'm now *extremely* skeptical of how this program works.

Here's what I found:

Lots to dig through, most of it damning.

Exit quote: “You can see where I’m going with this. A casual perusal of the data shows that this isn’t a program for the top 0.1% of talent, as it’s been described. This is simply a way to recruit hundreds of thousands of relatively lower-wage IT and financial services professionals.”
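
The kind of crunching described — grouping millions of wage records by employer to see whether filings cluster at the low end of the pay scale — can be sketched in a few lines. The field names and sample rows below are illustrative stand-ins, not the actual DOL disclosure-file schema:

```python
from collections import defaultdict
from statistics import median

# Hypothetical sample rows standing in for the real DOL disclosure
# data; in practice these would be read from the downloaded files.
records = [
    {"employer": "Acme Consulting LLC", "wage": 72000},
    {"employer": "Acme Consulting LLC", "wage": 75000},
    {"employer": "BigTech Inc",         "wage": 185000},
]

# Group offered wages by employer and take the median -- a first
# pass at checking where each employer's filings sit on the scale.
by_employer = defaultdict(list)
for row in records:
    by_employer[row["employer"]].append(row["wage"])

median_wage = {emp: median(w) for emp, w in by_employer.items()}
print(median_wage)
```

At 4M+ records the same grouping pass still runs comfortably in memory; the analysis is simple — the findings are in the data, not the code.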

Bug

'Y2K Seems Like a Joke Now, But in 1999 People Were Freaking Out' (npr.org) 134

NPR remembers when the world "prepared for the impending global meltdown" that might've been, on December 31, 1999 — and the possible bug known as Y2K: The Clinton administration said that preparing the U.S. for Y2K was probably "the single largest technology management challenge in history." The bug threatened a cascade of potential disruptions — blackouts, medical equipment failures, banks shutting down, travel screeching to a halt — if the systems and software that helped keep society functioning no longer knew what year it was... Computer specialist and grassroots organizer Paloma O'Riley compared the scale and urgency of Y2K prep to telling somebody to change out a rivet on the Golden Gate Bridge. Changing out just one rivet is simple, but "if you suddenly tell this person he now has to change out all the rivets on the bridge and he has only 24 hours to do it in — that's a problem," O'Riley told reporter Jason Beaubien in 1998....
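
The bug itself was mundane: to save then-scarce memory, many legacy systems stored years as two digits, so any date arithmetic broke when "00" meant 1900 instead of 2000. A minimal illustration (hypothetical code, not from any real system):

```python
def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-time calculation on two-digit years, the
    memory-saving shortcut many legacy systems relied on."""
    return end_yy - start_yy

# Within the 1900s the shortcut works:
print(years_elapsed(65, 99))  # 1965 -> 1999: prints 34

# Across the rollover, "00" reads as 1900, not 2000:
print(years_elapsed(65, 0))   # prints -65, instead of the correct 35
```

Multiplied across billing, interest, scheduling and control systems, that one sign flip is what the remediation effort — O'Riley's "all the rivets on the bridge" — had to hunt down.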

The date switchover rattled a swath of vital tech, including Wall Street trading systems, power plants and tools used in air traffic control. The Federal Aviation Administration put its systems through stress tests and mock scenarios as 2000 drew closer. "Twenty-three million lines of code in the air traffic control system did seem a little more daunting, I will say, than I had probably anticipated," FAA Administrator Jane Garvey told NPR in 1998. Ultimately there were no systemwide aviation breakdowns, but airlines were put on a Y2K alert....

Some financial analysts remained skeptical Y2K would come and go with minimal disruption. But by November 1999 the Federal Reserve said it was confident the U.S. economy would weather the big switch. "Federal banking agencies have been visited and inspected. Every bank in the United States, which includes probably 9,000 to 10,000 institutions, over 99% received a satisfactory rating," Fed Board Governor Edward Kelley said at the time.

The article also remembers a California programmer who bought a mobile home, a propane generator, and a year's supply of dehydrated food. (He was also considering buying a handgun, and converting his bank savings into gold, silver, and cash.) And "Dozens of communities across the U.S. formed Y2K preparedness groups to stave off unnecessary panic..."

But the article concludes that "the aggressive planning and recalibration paid off. Humanity passed into the year 2000 without pandemonium..."

And "People like Jack Pentes of Charlotte, N.C., were left to figure out what to do with their emergency stockpiles."
