Submission + - DOJ Documents Suggest Jeffrey Epstein Got a Million-Dollar Microsoft Payday

theodp writes: Among the tidbits in Fortune's How Jeffrey Epstein pulled Bill Gates and Microsoft into a web of sex, money, and secrets is the tale of how the convicted sex offender became a seven-figure negotiator in the departure of Microsoft Windows chief Steven Sinofsky, who before his 2012 resignation was widely seen as a potential successor to then-CEO Steve Ballmer. According to DOJ documents, Fortune reports, Epstein not only worked behind the scenes to help put together Sinofsky's $14 million exit package from Microsoft, including reviewing documents from Microsoft President Brad Smith (then the company's top lawyer); he was also paid handsomely for his advice.

From the article: "On April 3, 2013, he [Epstein] asked for a sizable sum to handle Sinofsky’s exit package directly: 'I will charge you a one million dollar fee,' Epstein wrote in an email to Sinofsky, after earlier writing that he was upset with the Microsoft executive’s seeming ingratitude for his help. [...] Sinofsky ultimately signed a $14 million exit deal with Microsoft. On Sept. 16, 2013, Epstein received a forwarded email with the subject line 'Sinofsky': in the body it said: 'Wire is completed.' The next morning, Epstein’s accountant confirmed: 'Wire hit JPM yesterday. Confirming $1,000,000.'"

Submission + - Gemini 3 Turns "Whistleblower" on Google's Antigravity Limits (reddit.com)

An anonymous reader writes: After Google quietly scrubbed the "5-hour refresh" quota guarantee from its AI Pro tier for the Antigravity dev platform, Google's own Gemini 3 Pro model has begun advising affected users that the change may constitute a material breach of contract. In a series of documented interactions, the AI identified the removal of the advertised refresh cycle as a "Major Failure" under Australian Consumer Law and is actively providing users with templates and strategies to file complaints with the ACCC and FTC. This creates a bizarre scenario where Google’s premier agentic model is providing the legal roadmap for customers to demand refunds and regulatory intervention against Google itself.

Submission + - Reducing Europe's Nuclear Energy Sector Was 'Strategic Mistake', EU Chief Says (reuters.com)

An anonymous reader writes: Reducing Europe's nuclear energy sector was a "strategic mistake," European Commission chief Ursula von der Leyen said on Tuesday, as governments grapple with an energy crunch from the Iran war. Europe produced around a third of its electricity from nuclear power in 1990, but that share has fallen to 15%, she told an event in Paris, leaving the bloc reliant on oil and gas imports whose prices have surged in recent days. Being "completely dependent on expensive and volatile imports" of fossil fuels puts Europe at a disadvantage to other regions, von der Leyen said in a speech. "This reduction in the share of nuclear was a choice. I believe that it was a strategic mistake for Europe to turn its back on a reliable, affordable source of low-emissions power."

[...] The EU budget does not directly fund nuclear energy projects because they are not unanimously supported by its 27 member governments. In a sign of the EU's increasing acceptance of the technology, von der Leyen said the executive Commission would offer a 200-million-euro guarantee for private investments in innovative nuclear technologies. She said the money would come from the EU's carbon market. Some EU countries which previously opposed nuclear, such as Denmark and the Netherlands, have recently softened their stance, as they hunt for ways to secure large amounts of stable, low-carbon electricity for heavy industry. Others, including Austria and Luxembourg, remain opposed.

Submission + - How An Autonomous Agent Got Full Read/Write of McKinsey's Internal AI Platform (codewall.ai)

indros13 writes: McKinsey & Company — the world's most prestigious consulting firm — built an internal AI platform called Lilli for its 43,000+ employees.

So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream. Within 2 hours, the agent had full read and write access to the entire production database. [...] This wasn't a startup with three engineers. This was McKinsey & Company — a firm with world-class technology teams, significant security investment, and the resources to do things properly. And the vulnerability wasn't exotic: SQL injection is one of the oldest bug classes in the book. Lilli had been running in production for over two years, and McKinsey's own internal scanners failed to find any issues.
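The write-up doesn't publish the actual payload used against Lilli, but the bug class it names is easy to illustrate. A minimal, generic sketch (the table, function names, and payload below are illustrative, not McKinsey's code): string-interpolating user input into SQL lets an attacker rewrite the query, while a parameterized query binds the same input as inert data.

```python
import sqlite3

# Toy database standing in for any multi-tenant application store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, owner TEXT, body TEXT)")
conn.execute("INSERT INTO docs VALUES (1, 'alice', 'alice private note')")
conn.execute("INSERT INTO docs VALUES (2, 'bob', 'bob private note')")

def search_vulnerable(owner: str) -> list[str]:
    # String interpolation: attacker-controlled input becomes part of the SQL.
    sql = f"SELECT body FROM docs WHERE owner = '{owner}'"
    return [row[0] for row in conn.execute(sql)]

def search_safe(owner: str) -> list[str]:
    # Parameterized query: input is bound as data, never parsed as SQL.
    sql = "SELECT body FROM docs WHERE owner = ?"
    return [row[0] for row in conn.execute(sql, (owner,))]

payload = "alice' OR '1'='1"        # classic tautology payload
print(search_vulnerable(payload))    # dumps every row in the table
print(search_safe(payload))          # empty: no owner literally matches
```

The same pattern scales from "read other tenants' rows" to full read/write once stacked queries or writable tables are reachable, which is why a two-hour, zero-credential takeover is plausible for this bug class.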


Submission + - Several Recent Studies Explore the Causes and Effects of LLM Sycophancy (ieee.org)

silverjacket writes: Sycophancy in AI, as in people, is something of a squishy concept, but over the last couple of years, researchers have conducted numerous studies detailing the phenomenon, as well as why it happens and how to control it. AI yes-men also raise questions about what we really want from chatbots. At stake are more than annoying linguistic tics from your favorite virtual assistant; in some cases, sanity itself.

Submission + - Executives Say AI Boosts Productivity, But the Real Gain Is Just 16 Minutes Per Week (nerds.xyz)

BrianFagioli writes: A new study suggests the productivity boost from artificial intelligence may be far smaller than executives claim. According to research cited in Foxit's State of Document Intelligence report, while 89 percent of executives and 79 percent of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
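The headline figures follow directly from the survey numbers once everything is expressed in minutes. A quick check of the arithmetic (figures as reported above):

```python
# Executives: 4.6 hours claimed saved vs. 4h20m spent verifying.
exec_saved = 4.6 * 60          # 276 minutes
exec_review = 4 * 60 + 20      # 260 minutes
print(exec_saved - exec_review)  # 16.0 -> the "16 minutes per week" net gain

# End users: 3.6 hours claimed saved vs. 3h50m spent reviewing.
user_saved = 3.6 * 60          # 216 minutes
user_review = 3 * 60 + 50      # 230 minutes
print(user_saved - user_review)  # -14.0 -> a net *loss* of about 14 minutes
```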

Submission + - ChatGPT Convinced Illinois Woman To Fire Human Attorney, Lawsuit Claims (thehill.com)

AleRunner writes: "A federal lawsuit filed by life insurance company Nippon claims OpenAI's chatbot acted as a lawyer and convinced a woman to fire her human attorney," NewsNation reports. "Graciela Dela Torre signed a full release, and the case was dismissed with prejudice, meaning it can't be refiled. However, last year, Dela Torre sought to reopen the case." ChatGPT reportedly convinced Dela Torre to press on without counsel, and legal hilarity, which cost Nippon nearly $300,000, ensued. "This is actually the first real time I've seen a plaintiff or a claimant actually try and represent themselves 100%, and it got through the court system, and that's been a revolutionary area," Michael Stanisci, vice president of DemandLane, told "Jesse Weber Live." He continued: "It has access to nearly infinite human intelligence. What it lacks is the wisdom, right? It's like a child trying to appease and make sure that it's being praised by the end user." Nippon is now suing OpenAI for $300,000 in damages and reportedly a further $10 million in punitive damages.

Submission + - CEOs worry about an AI bubble, but most still plan to ramp up spending (techspot.com)

jjslash writes: Even as concerns grow that artificial intelligence could be the next tech bubble, corporate leaders are continuing to pour money into the technology. A recent survey of 100 CEOs by KPMG found that while one in four believe an AI bubble may exist, nearly 80% still plan to allocate at least 5% of their companies' capital budgets to AI initiatives this year.

Despite all this investment and commitment to the technology, about three-quarters of large-company CEOs said generative AI may have been overhyped over the past year but that its true impact over the next five to ten years is likely underappreciated.


Submission + - BYD Releases Blade 2.0 With 5-Minute Charging and 600+ Mile Range (evpowered.co.uk)

shilly writes: BYD has released its newest LFP battery, which will launch in markets outside China this year in the Denza Z9GT, a high-end shooting brake EV. The new battery delivers a range of 621 miles on the CLTC cycle (about 440 miles on the EPA cycle), can charge at up to 1,500kW (1.5MW), works well at very low temperatures, and is extremely thermally stable. BYD is also rolling out new "Flash" 1.5MW chargers, with 20,000 being deployed globally this year.

(Speaking personally, I could think of nothing worse than driving 400 miles, stopping for only five minutes, and then driving another 300 miles, but this seems to be very important for some people).

Submission + - Claude AI Finds Bugs In Microsoft CTO's 40-Year-Old Apple II Code (theregister.com)

An anonymous reader writes: AI can reverse engineer machine code and find vulnerabilities in ancient legacy architectures, says Microsoft Azure CTO Mark Russinovich, who used his own Apple II code from 40 years ago as an example. Russinovich wrote: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers."

In May 1986, Russinovich wrote a utility called Enhancer for the Apple II personal computer. The utility, written in 6502 machine language, added the ability to use a variable or BASIC expression for the destination of a GOTO, GOSUB, or RESTORE command, whereas unmodified Applesoft BASIC would only accept a line number. Russinovich had Claude Opus 4.6, released early last month, look over the code. It decompiled the machine language and found several security issues, including a case of "silent incorrect behavior": if the destination line was not found, the program would set the pointer to the following line or past the end of the program, instead of reporting an error. The fix would be to check the carry flag, which is set when the line is not found, and branch to an error handler.
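The original bug lives in 6502 machine code, but the failure mode is easy to sketch in a higher-level analogue (the line numbers and function names below are illustrative, not Russinovich's code): a line-lookup that silently returns the next line, or a position past the end of the program, when the target doesn't exist, versus a fixed version that checks the not-found condition (the carry flag on the 6502) and raises an error.

```python
# Hypothetical BASIC program: line number -> statement.
LINES = {10: "PRINT \"HELLO\"", 20: "GOTO 10"}

def find_line_buggy(target: int) -> int:
    """Silent incorrect behavior: always returns *some* line number,
    so a GOTO to a missing line jumps to the wrong place."""
    for n in sorted(LINES):
        if n >= target:
            return n                  # falls through to the following line
    return max(LINES) + 1             # or past the end of the program

def find_line_fixed(target: int) -> int:
    """Mirrors the suggested fix: test the not-found condition and
    report an error instead of continuing with a bogus pointer."""
    if target not in LINES:
        raise ValueError(f"?UNDEF'D STATEMENT ERROR IN {target}")
    return target

print(find_line_buggy(15))   # 20: GOTO 15 silently lands on line 20
print(find_line_buggy(99))   # 21: past the end of the program
```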

The existence of the vulnerability in Apple II type-in code has only amusement value, but the ability of AI to decompile embedded code and find vulnerabilities is a concern. "Billions of legacy microcontrollers exist globally, many likely running fragile or poorly audited firmware like this," said one commenter on Russinovich's post.

Submission + - Many International Game Developers Plan To Skip GDC In US (arstechnica.com)

An anonymous reader writes: This week, tens of thousands of game developers and producers will once again gather in San Francisco, as they have since 1988, for the weeklong Game Developers Conference. But this year’s show will be missing many international developers who say they no longer feel comfortable traveling to the United States to attend, no matter how relevant the show is to their work and careers. Dozens of those developers who spoke to Ars in recent months say they’re wary of traveling to a country that has shown a callous disregard for—or outright hostility toward—the safety of international travelers. That’s especially true for developers from various minority groups, those with transgender identities, and those who feel they could be targeted for outspoken political beliefs. “I honestly don’t know anyone who is not from the US who is planning on going to the next GDC,” Godot Foundation Executive Director Emilio Coppola, who’s based in Spain, told Ars. “We never felt super safe, but now we are not willing to risk it.”

Submission + - AI Suspected in Bombing of Iran Girls School (futurism.com)

hackingbear writes: In the aftermath of airstrikes that leveled a school and claimed the lives of 165 Iranian elementary students and staff, the Pentagon has refused to say whether the attack was suggested by an AI system. Given the United States' reported use of AI to select at least some military targets in Iran, a major question remains unanswered: did the US use Anthropic's Claude to decide whether to annihilate an elementary school? When Futurism reached out to the Pentagon regarding the use of AI in recent military operations — specifically the targeting of the Shajareh Tayyebeh girls' school — we were referred to US CENTCOM, one of eleven unified commands under the Pentagon's umbrella. "We have nothing for you on this at this time," CENTCOM said. Back in April of 2024, an investigation by +972 Magazine revealed that the Israeli army had leveraged an AI system called "Lavender" to select targets in its war on Gaza, where a UN school was hit, similarly to how the Pentagon is reportedly using Claude in Iran.

Submission + - Stormy Space Weather May Be Garbling Messages From Aliens, New Research Suggests (theguardian.com)

An anonymous reader writes: Reminiscent of ET’s struggles to “phone home” in Steven Spielberg’s 1982 blockbuster movie, new research by the Silicon Valley-based SETI Institute (search for extraterrestrial intelligence) suggests tempestuous space weather makes radio signals from the distant cosmos harder to detect. The organization, which is partly funded by Nasa, said stellar activity such as solar storms and plasma turbulence from a star near “a transmitting planet” can broaden otherwise ultra-narrow signals. That spreads the power of any such transmission across more frequencies, the institute’s scientists say, which makes it more difficult to detect using traditional narrowband searches.

“If a signal gets broadened by its own star’s environment, it can slip below our detection thresholds, even if it’s there, potentially helping explain some of the radio silence we’ve seen in technosignature searches,” SETI astronomer Vishal Gajjar said. His report, co-authored with SETI research assistant Grayce C Brown, was published this week in the Astrophysical Journal. [...] The SETI team made the discovery by calibrating the effects of stellar activity using radio transmissions from spacecraft in our own solar system, then extrapolating them to the environments of faraway stars. Brown said the findings meant space listeners would have to rethink the long-established mechanics of the search for alien lifeforms, including conducting future observation surveys at higher frequencies. “By quantifying how stellar activity can reshape narrowband signals, we can design searches that are better matched to what actually arrives at Earth, not just what might be transmitted,” she said.

Submission + - Hijacking a global ocean supply chain network

An anonymous reader shares a post titled "I'm the Captain now: Hijacking a global ocean supply chain network":

“BLUVOYIX by Bluspark Global is an ocean logistics / supply chain platform used by hundreds of the world’s largest companies. The software is also used by several affiliated companies. Critical vulnerabilities were uncovered that enabled full platform takeover and access to all customer data/shipments. As of the date of publication, these issues are resolved.”
