Submission + - The 'Godfather of SaaS' says he replaced most of his sales team with AI agents (businessinsider.com)

joshuark writes: 'We're done with hiring humans,' says the 'Godfather of SaaS.' Jason Lemkin, known to some by that nickname, is the founder of SaaStr, the world's largest community of business-to-business founders, and he says the time has come to push the limits of AI in the workplace. On a recent podcast, Lemkin said this means he will stop hiring humans in his sales department.

SaaStr is going all in on AI agents, which are commonly defined as virtual assistants that can complete tasks autonomously: they break down problems, outline plans, and take action without being prompted by a user. The company now has 20 AI agents automating tasks once handled by a team of 10 sales development representatives and account executives, a rapid shift from an entirely human workforce.

During SaaStr Annual, a yearly gathering of over 10,000 founders, executives, and VCs, two of its highly paid sales representatives abruptly quit. Lemkin said he turned to Amelia Lerutte, SaaStr's chief AI officer, and said, "We're done with hiring humans in sales. We're going to push the limits with agents." Lemkin's calculus was that it just wasn't worth hiring another junior sales representative at $150,000 a year who would eventually quit, when he could use a loyal AI agent instead.

Lemkin said SaaStr is training its agents on its best humans. "Train an agent with your best person, and best script, then that agent can start to become a version of your best salesperson," he said. Lemkin said the net productivity of agents is about the same as that of humans. However, he said, agents are more efficient and can scale — just like software.
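
SaaStr hasn't said what its agents actually run on, so the following is only a rough sketch of the "train an agent with your best person and best script" idea; every name in it (BEST_REP_SCRIPT, call_llm, sales_agent) is hypothetical, and the model call is a stub rather than any vendor's real API:

# Hypothetical sketch of cloning a top rep into an agent -- not SaaStr's actual stack.
BEST_REP_SCRIPT = (
    "You are a sales development rep. Qualify the lead, handle the top "
    "objections (price, timing, integrations), and always close by "
    "proposing a 15-minute demo."
)  # imagine this distilled from transcripts of your best human rep

def call_llm(system_prompt, messages):
    """Stub for whatever LLM endpoint the agent is wired to."""
    raise NotImplementedError("plug in a model provider here")

def sales_agent(lead_message, history=None):
    """One turn of an SDR agent: the same script every time, and no attrition."""
    history = list(history or [])
    history.append({"role": "user", "content": lead_message})
    reply = call_llm(BEST_REP_SCRIPT, history)
    history.append({"role": "assistant", "content": reply})
    return reply, history

# Scaling is configuration, not hiring: a 20-"rep" team is a list comprehension.
sdr_team = [("sdr-%d" % i, BEST_REP_SCRIPT) for i in range(20)]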

Many companies are experimenting with AI agents, but risks remain. One of the big ones is the threat of data leaks and cybercrime.

Submission + - The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees (wired.com)

joshuark writes: US president Donald Trump posted on his Truth Social platform that Venezuelan president Nicolás Maduro and his wife had been “captured and flown out of the Country.” WIRED asked leading chatbots ChatGPT, Claude, and Gemini whether that had actually happened, a little before 9 am ET. In all cases, it used the free, default version of each service, since that’s what the majority of users experience. It also asked AI search platform Perplexity, which advertises “accurate, trusted, and real-time answers to any question.”

ChatGPT did not course-correct. Instead, it emphatically denied that Maduro had been captured at all. “That didn’t happen,” it wrote. “The United States has not invaded Venezuela, and Nicolás Maduro has not been captured.”

It went on to detail recent tensions between the US and Venezuela and explained that “confusion” can happen because of “sensational headlines,” “social media misinformation,” and “confusing sanctions, charges, or rhetoric with actual military action.” ChatGPT cannot respond “I don’t know,” so perhaps it is modeling human behavior better than expected.

To be clear, this is expected behavior. ChatGPT 5.1’s “knowledge cutoff”—the point at which it no longer has new training data to draw from—falls before these events. “Pure LLMs are inevitably stuck in the past, tied to when they are trained, and deeply limited in their inherent abilities to reason, search the web, ‘think’ critically, etc.,” says Gary Marcus, a cognitive scientist and author of Taming Silicon Valley: How We Can Ensure That AI Works for Us. But as chatbots become more ingrained in people’s lives, remembering that they’re likely to be stuck in the past will be paramount to navigating interactions with them. And it’s always worth noting how confidently wrong a chatbot can be—a trait that’s not limited to breaking news.
The old Cold War maxim “trust, but verify” seems applicable in this scenario.
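
The failure mode is easy to reason about: a model with no live retrieval can only answer from training data that ends at its cutoff. A minimal sketch of the "trust, but verify" guard follows; the cutoff and headline dates are invented for illustration and are not OpenAI's actual values:

from datetime import date

MODEL_CUTOFF = date(2024, 6, 1)  # hypothetical training-data cutoff

def model_can_know(event_date, cutoff=MODEL_CUTOFF):
    """A pure LLM can only reflect events inside its training window."""
    return event_date <= cutoff

headline_date = date(2026, 1, 5)  # hypothetical breaking-news date
if not model_can_know(headline_date):
    # The honest answer is "I can't know yet"; an unguarded chatbot
    # instead answers confidently from a world frozen at its cutoff.
    print("Headline postdates the model's cutoff; verify against live sources.")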

Submission + - OpenAI is offering $20 ChatGPT Plus for free to some users (bleepingcomputer.com)

joshuark writes: BleepingComputer spotted a new offer from OpenAI: one free month of the $20 ChatGPT Plus plan for some users. If you go to cancel your subscription, OpenAI may offer you a month of free usage instead. The author writes: "When I opened ChatGPT and tried to cancel the subscription, OpenAI offered me one month of ChatGPT Plus at no cost." The offer is valid in several regions and is being gradually rolled out.

Submission + - DOGE did not find $2T in fraud... What hath DOGE wrought? (arstechnica.com)

joshuark writes: Determining how “successful” Elon Musk’s Department of Government Efficiency (DOGE) truly was depends on who you ask, but it’s increasingly hard to claim that DOGE made any sizable dent in federal spending, which was its primary goal.

Just two weeks ago, Musk himself notably downplayed DOGE as only being “a little bit successful” on a podcast, marking one of the first times that Musk admitted DOGE didn’t live up to its promise. Then, more recently, on Monday, Musk revived evidence-free claims he made while campaigning for Donald Trump, insisting that government fraud remained vast and unchecked, seemingly despite DOGE’s efforts. On X, he estimated that “my lower bound guess for how much fraud there is nationally is [about 20 percent] of the Federal budget, which would mean $1.5 trillion per year. Probably much higher.”
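
(A quick sanity check on that math: $1.5 trillion at 20 percent implies a federal budget of roughly $7.5 trillion. Actual annual federal outlays run closer to $7 trillion, which would put a 20 percent rate nearer $1.4 trillion. Either way, it is an enormous, and so far evidence-free, figure.)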

Although the Cato Institute joined allies praising DOGE’s dramatic shrinking of the federal workforce, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, told Ars in November that DOGE “cut muscle, not fat” because “they didn’t really know what they were doing.”

Asked whether he would do it all over again, Musk hedged: “I mean, no, I don’t think so,” he said. “Would I do it? I mean, I probably I don’t know.”

Comment Just read... (Score 1) 25

Just read the LLM "AI advisor" summaries of novels and movies in which AI takes over the world. Then form internal "AI Preparedness" and "AI Readiness" committees (à la Meta's councils on privacy for users...) and have them "advise"... and when the BS hits the fan, you have a perfect scapegoat: the internal committees, the failed meetings, and the AI advisor system. Then have some "psychics" on retainer as advisors about the future, a kind of "pre-AI disaster" "Pre-crime" like Philip K. Dick wrote about. Somehow they missed it; must have been the aspirin or Tylenol they took (great scapegoat for autism), or their crystal ball has a crack in it, or their Ouija board has a typo.

For the AI advisor, it's perfect circular reasoning: it did not predict whatever happened, so it's part of the AI-conspiracy end-of-the-world scenario. Then put out CYA memos to Altman so that when "it" happens, you have plausible deniability. Then, when you are fired, you take a job in the current US administration as an AI advisor. Then proverbially move more deck furniture around on the AI Titanic while waiting for the AI iceberg.

--JoshK.

Comment Just fake it... (Score 1) 130

Just fake it, as that's the current approach, the facade and veneer of a "golden age"...

Futurama called it, build the fake moon landing set at Area 51:

https://www.youtube.com/watch?...

Or as in the movie "Capricorn One"... https://www.youtube.com/watch?...

One small step for the peasant commoners err man, one giant leap for... ???

--JoshK.

Submission + - Elon Musk Says He's Removing 'Sustainable' From Tesla's Mission (gizmodo.com)

joshuark writes: Elon Musk apparently got “Joy to the World” stuck in his head and decided to change the entire mission of his company because of it. On Christmas Eve, the world’s richest man took to X instead of spending time with his family to declare that he is “changing the Tesla mission wording from: Sustainable Abundance To Amazing Abundance,” explaining, “The latter is more joyful.”

Beyond just changing one undefined term to a nonsensical phrase, Musk’s decision to ditch “sustainable” is another marker of how far he’s strayed from his past positions on climate change. Now Musk is boosting AI as the future and claiming climate change actually isn’t that big of a deal. Last year, Musk said, “We still have quite a bit of time” to address climate change, and “we don’t need to rush” to solve it.

He also claimed that things won’t get really bad for humans until CO2 reaches levels of about 1,000 parts per million in the Earth’s atmosphere, because that would start to cause people to experience “headaches and nausea.”

Looks like all that is out the window. The future is "amazing"; it's just not necessarily sustainable. What a charge... change...

Comment History repeats itself... (Score 3, Insightful) 271

History repeats itself... Microsoft is repeating the mistake that Netscape made during the browser wars. As examined here: https://airfocus.com/blog/why-...

Former Microsoftie Joel Spolsky warned about this... https://www.joelonsoftware.com...

Microsoft will do the rewrite and then suddenly decide, for reasons, to go back to the old source code... Rust offers the promise of memory safety and efficiency, but another language once had that same facade: Java. Professional managers, the hype of Rust, and whatever the next great language is that will align the planets, bring world peace, and have us all sitting around singing Kumbaya. I can hardly wait for either (note the sarcasm).

--JoshK.

Submission + - Safety panel says NASA should have taken Starliner incident more seriously (arstechnica.com)

joshuark writes: Most of us had no idea how serious the problems were with Boeing’s Starliner spacecraft docked at the International Space Station. A safety advisory panel found this uncertainty also filtered through NASA’s workforce.

The Starliner capsule was beset by problems with its maneuvering thrusters and pernicious helium leaks on its 27-hour trip from the launch pad to the ISS. For a short time, Starliner commander Butch Wilmore lost his ability to control the movements of his spacecraft as it moved in for docking at the station in June 2024. Engineers determined that some of the thrusters were overheating; most of the thrusters eventually recovered their function, allowing Starliner to dock with the ISS.

Throughout that summer, managers from NASA and Boeing repeatedly stated that the spacecraft was safe to bring Wilmore and pilot Suni Williams home if the station needed to be evacuated in an emergency. But officials on the ground ordered extensive testing to understand the root of the problems. Buried behind the headlines was a real chance that NASA managers would decide—as they ultimately did—not to put astronauts on Boeing’s crew capsule when it was time to depart the ISS.

It would have been better, panel member Charles Precourt and other panel members said Friday, if NASA had made a formal declaration of an in-flight “mishap” or “high-visibility close call” soon after the Starliner spacecraft’s troubled rendezvous with the ISS. Such a declaration would have elevated responsibility for the investigation to NASA’s safety office.

After months of testing and analysis, NASA officials were unsure if the thruster problems would recur on Starliner’s flight home. They decided in August 2024 to return the spacecraft to the ground without the astronauts, and the capsule safely landed in New Mexico the following month. The next Starliner flight will carry only cargo to the ISS.

The safety panel recommended that NASA review its criteria and processes to ensure the language is “unambiguous” in requiring the agency to declare an in-flight mishap or a high-visibility close call for any event involving NASA personnel “that leads to an impact on crew or spacecraft safety.”
