
Submission + - Musk tweet brings employees to breaking point (vanityfair.com)

An anonymous reader writes: ‘How Elon Musk’s Twitter Ultimatum Brought Employees to Their Breaking Point: “People Were Emotionally a Wreck”’

‘The billionaire CEO subjected staffers to brutal loyalty and competence tests immediately after taking the helm, as Zoë Schiffer writes in an excerpt of her forthcoming book, Extremely Hardcore. “With Elon,” one engineer recounted, “every day could be your last.”’

Submission + - Cheap drones are changing war. This is how they went from a weekend hobby to a w (abc.net.au)

DeadlyBattleRobot writes: Quote from the article:

"Long before Ukrainians and Russians strapped grenades to cheap consumer drones and sent them to blow up each other's tanks, Mr Gury and other hobbyists were part of a small community that invented and fine-tuned the design for these unmanned flying vehicles.

The story behind the creation of the now ubiquitous cheap drone is one of innocent experiment and unintended consequences. Something built for thrills is now rewriting military doctrine."

Submission + - Ivanti promises to address its bug problem after security failures (scmagazine.com)

spatwei writes: After spending months grappling with a string of gateway appliance security failures, Ivanti has vowed to reengineer its processes to harden its products against increasingly persistent attackers.

The promise to bolster its security practices was delivered by the company’s CEO, Jeff Abbott, in an April 3 post and video message addressed to Ivanti’s customers and partners.

Exploitation of the vulnerabilities in its appliances has impacted a large number of Ivanti customers, including U.S. government agencies, prompting an order in February for some devices to be disconnected from federal networks.

“We will use this opportunity to begin a new era at Ivanti. We have challenged ourselves to look critically at every phase of our processes, and every product, to ensure the highest level of protection for our customers,” Abbott said.

Submission + - Biden Takes Aim at SpaceX's Tax-Free Ride in American Airspace (archive.is) 1

echo123 writes: President Biden wants companies that use American airspace for rocket launches to start paying taxes into a federal fund that finances the work of air traffic controllers.

= = = =

Every time a rocket soars into the sky carrying satellites or supplies for the International Space Station, air traffic controllers on the ground must take crucial steps to ensure that commercial and passenger aircraft remain safe.

The controllers, hired by the Federal Aviation Administration, close the airspace, provide real-time information on rockets and their debris and then reopen the airspace quickly after a launch is completed.

But unlike airlines, which pay federal taxes for air traffic controllers’ work each time their planes take off, commercial space companies are not required to pay for their launches. That includes companies like Elon Musk’s SpaceX, which has launched more than 300 rockets over the past 15 years, often carrying satellites for its Starlink internet service.

Submission + - ChatGPT jailbreak prompts proliferate on hacker forums (scmagazine.com)

spatwei writes: “The prevalence of jailbreak prompts and AI misuse on cybercrime forums has definitely increased since ChatGPT’s early days. While there were initial discussions about the potential of the technology in 2022/2023, we’ve observed a growing trend of detailed conversations around specific jailbreaking prompts over time,” Mike Britton, chief information security officer at Abnormal Security, told SC Media in an email. “There are now entire forum sections dedicated to the misuse of AI, specifically on two major cybercrime forums.”

Submission + - AI's Impact on CS Education Likened to Calculator's Impact on Math Education

theodp writes: In Generative AI and CS Education, the new Global Head and VP of Google.org Maggie Johnson writes: "There is a common analogy between calculators and their impact on mathematics education, and generative AI and its impact on CS education. Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the “number sense” to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."

Johnson's CACM article echoes comments she made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit (Blockly is the Google technology that powers drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org). Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability, the correctness of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world she describes; she concedes there may not be.

Johnson's call to embrace AI to "raise the level of abstraction for software engineers" to boost their productivity comes as she exits the Board of Code.org, the tech-backed K-12 CS education nonprofit that pushed coding — including Java — into K-12 schools, but deviated a bit from their 'rigorous CS' mission last year to launch a new TeachAI initiative with tech industry partners to convince K-12 schools to embrace AI to increase the productivity of teachers and students not only in CS, but also in all other areas of education. Johnson's departure from Code.org — she was a founding Board member in 2013 — follows that of Microsoft President Brad Smith, Code.org's other founding Board member from industry, who has been focused on promoting Microsoft's AI efforts. Unlike Google, Microsoft is still represented on Code.org's Board by CTO Kevin Scott, who is credited with forging Microsoft's OpenAI partnership (with Smith and Microsoft CEO Satya Nadella) and whose assistant Dee Templeton joined OpenAI's Board as Microsoft's nonvoting observer in January following Sam Altman's reinstatement as OpenAI's CEO. Hey, it's a small K-12 CS and AI education world!

Submission + - Universities signing over students' private FERPA data to voter data companies (thecollegefix.com)

An anonymous reader writes: A relatively new report outlines how universities nationwide have signed over students’ private FERPA data to a third-party vendor that reviews their personal information to help study college students’ voting trends.

The nine-page report describes how a national voting study run out of Tufts’ Institute for Democracy in Higher Education gets university administrators from across the country to agree to release students’ Family Educational Rights and Privacy Act, or FERPA, enrollment data from the National Student Clearinghouse, where it’s kept, to a voter data company.

“This is an extraordinary violation of student privacy and is not consistent with FERPA,” said Heather Honey, an investigator with Verity Vote, in a recent interview with The College Fix.

“Tufts reports that the student files are de-identified by removing names, identification numbers and month and day of birth. However, this is superficial de-identification; the collection of attributes retained in the data can be used to identify individuals, just as the cookies on your browser can be used to identify you,” the report states.

It points to an Office of the Director of National Intelligence report, declassified in June 2023, that “reveals how de-identified data can easily be re-identified with minimal attributes.”
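The report's point about superficial de-identification can be illustrated with a small sketch. The records and attribute names below are hypothetical, not from the Tufts study or the Clearinghouse data: even with names and ID numbers stripped, a combination of a few retained attributes (quasi-identifiers) can single out individuals.

```python
from collections import Counter

# Hypothetical de-identified enrollment records: names and IDs removed,
# but quasi-identifiers (birth year, ZIP, enrollment term, major) retained.
records = [
    {"birth_year": 2003, "zip": "02155", "term": "2022F", "major": "CS"},
    {"birth_year": 2003, "zip": "02155", "term": "2022F", "major": "Bio"},
    {"birth_year": 2004, "zip": "02144", "term": "2023S", "major": "CS"},
]

def uniqueness(recs, keys):
    """Fraction of records whose quasi-identifier combination is unique,
    i.e. potentially re-identifiable by anyone holding a matching dataset."""
    combos = Counter(tuple(r[k] for k in keys) for r in recs)
    unique = sum(1 for r in recs if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(recs)

# With all four attributes retained, every record here is unique: 1.0
print(uniqueness(records, ["birth_year", "zip", "term", "major"]))
```

Dropping attributes lowers the uniqueness score, which is why genuine de-identification has to consider combinations of fields, not just direct identifiers.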

Submission + - Are Your Solar Eclipse Glasses Fake? Here's How to Check (scientificamerican.com)

SonicSpike writes: Fienberg is project manager of the AAS Solar Eclipse Task Force, which is busy preparing for the total eclipse over North America on April 8. He’s the creator of a list of vetted solar filters and viewers that will protect wearers’ eyes as they watch the moon move in front of the sun. When a solar eclipse last crossed a major swath of the U.S. in 2017, Fienberg and his team spotted some counterfeit glasses entering the marketplace—imitations that distributors claimed were manufactured by vetted companies. Testing at accredited labs indicated that many counterfeits were actually safe to use, however. This led the task force to describe such eclipse glasses as “misleading” but not “dangerous” in a March 11 statement meant to reassure the public.

But then Fienberg’s phone rang. The caller was “a guy who had bought thousands of eclipse glasses from a distributor who had been on our list at one point,” Fienberg says. “Those glasses were not safe. They were no darker than ordinary sunglasses.” Legitimate eclipse glasses are at least 1,000 times darker than the darkest sunglasses you can buy.
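The "at least 1,000 times darker" claim can be sanity-checked with optical density, where transmitted light falls off as T = 10^(-OD). The specific OD values below are illustrative assumptions, not figures from the article: solar filters meeting the ISO 12312-2 standard sit around OD 5, while even very dark sunglasses are roughly OD 1.5.

```python
# Optical density (OD) relates transmitted light to incident light: T = 10**-OD.
def transmittance(od):
    return 10 ** -od

# Assumed, illustrative values (not from the article):
solar_filter = transmittance(5.0)      # ~0.001% of light passes through
dark_sunglasses = transmittance(1.5)   # ~3% of light passes through

# How many times darker the solar filter is than the sunglasses:
ratio = dark_sunglasses / solar_filter
print(round(ratio))  # 3162
```

Under these assumed values the filter is over 3,000 times darker than dark sunglasses, comfortably consistent with the article's "at least 1,000 times" figure.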

Fienberg contacted Cangnan County Qiwei Craft, a Chinese factory that he knew manufactured safe glasses and had—in the past—sold them to the distributor in question. But this time, Fienberg says, factory representatives told him they hadn’t sold to that distributor in a long while. “That’s when we switched from being concerned about only counterfeits to being concerned about actual fakes,” Fienberg says.

The AAS does not have a confident estimate of how many fake or counterfeit glasses are for sale out there. And though Fienberg doesn’t think this is a widespread problem, the situation is an “iceberg kind of concern,” he says, because there are likely more examples than the ones he knows about. While counterfeit glasses may still be safe to use, completely fake glasses could put wearers in serious danger.

If you’re viewing the upcoming eclipse, there are specific indicators you can look for when evaluating products for safety—and ways to test glasses before you stare at the sun.

THE STANDARD FOR SOLAR ECLIPSE VIEWERS
On April 8 viewers within a 115-mile-wide band stretching across Mexico and the U.S. into eastern Canada will experience a total solar eclipse: what happens when the moon passes directly between the sun and Earth, completely blocking the face of our star. Outside of this band, people in much of North America will be able to see a partial eclipse.

Submission + - Israel uses AI to target Hamas (theguardian.com)

Falconhell writes: The Israeli military’s bombing campaign in Gaza used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.

In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict.

Their unusually candid testimony provides a rare glimpse into the first-hand experiences of Israeli intelligence officials who have been using machine-learning systems to help identify targets during the six-month war.

Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.

“This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

Several of the sources described how, for certain categories of targets, the IDF applied pre-authorised allowances for the estimated number of civilians who could be killed before a strike was authorised.

Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.

Comment Re:Oh No. Those dark ages are coming back.. (Score 1) 159

I had both. University link in the late '80s/early '90s, then a 14.4 Linelink E modem. It was just a lot more fun and a lot less corporate. Yes, trolls existed, but there were fewer of them, and they could be controlled with kill files on Usenet. Netiquette was still a useful word. It was just more fun.

Submission + - Tennessee Becomes First State To Protect Musicians, Other Artists Against AI (npr.org)

An anonymous reader writes: Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation. "Tennessee (sic) is the music capital of the world, & we're leading the nation with historic protections for TN artists & songwriters against emerging AI technology," Gov. Bill Lee announced on social media. The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, is an updated version of the state's old right of publicity law. While the old law protected an artist's name, photograph or likeness, the new legislation includes AI-specific protections. Once the law takes effect on July 1, people will be prohibited from using AI to mimic an artist's voice without permission.
