Education

Blue Book Sales Surge As Universities Combat AI Cheating (msn.com) 93

Sales of blue book exam booklets have surged dramatically across the nation as professors turn to analog solutions to prevent ChatGPT cheating. The University of California, Berkeley reported an 80% increase in blue book sales over the past two academic years, while Texas A&M saw 30% growth and the University of Florida recorded nearly 50% increases this school year. The surge comes as students who were freshmen when ChatGPT launched in 2022 approach senior year, having had access to AI throughout their college careers.
KDE

KDE Is Getting a Native Virtual Machine Manager Called 'Karton' (neowin.net) 37

A new virtual machine manager called Karton is being developed specifically for the KDE Plasma desktop, aiming to offer a seamless, Qt-native alternative to GNOME-centric tools like GNOME Boxes. Spearheaded by University of Waterloo student Derek Lin as part of Google Summer of Code 2025, Karton uses libvirt and Qt Quick to build a user-friendly, fully integrated VM experience, with features like a custom SPICE viewer, snapshot support, and a mobile-friendly UI expected by September 2025. Neowin reports: To feel right at home in KDE, Karton is being built with Qt Quick and Kirigami. It uses the libvirt API to handle virtual machines and could eventually work across different platforms. Right now, development is focused on getting the core parts in place. Lin is working on a new domain installer that ditches direct virt-install calls in favor of libosinfo, which helps detect OS images and generate the right libvirt XML for setting up virtual machines more precisely. He's still refining device configuration and working on broader hypervisor support. Another key part of the work is building a custom SPICE viewer using Qt Quick from scratch:

If you're curious, here's the list of specific deliverables Lin included in his GSoC proposal, though he notes the proposal itself is a bit outdated [...]. For those interested in the timeline, Lin's GSoC proposal says the official GSoC coding starts June 2, 2025. The goal is to have a working app ready by the midterm evaluation around July 14, 2025, with the final submission due September 1, 2025.
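The installer flow described above (detect an OS image with libosinfo, then generate the right libvirt XML for it) is worth making concrete. Karton itself is written in Qt/C++ against the libvirt API, so the following is only a hypothetical Python sketch, using nothing but the standard library, of the kind of minimal `<domain>` definition such an installer would hand to libvirt. The function name and all parameter values are illustrative:

```python
import xml.etree.ElementTree as ET

def build_domain_xml(name: str, memory_mib: int, vcpus: int, disk_path: str) -> str:
    """Build a minimal libvirt <domain> definition, the kind of XML an
    installer generates and passes to libvirt to define a new VM."""
    domain = ET.Element("domain", type="kvm")
    ET.SubElement(domain, "name").text = name
    ET.SubElement(domain, "memory", unit="MiB").text = str(memory_mib)
    ET.SubElement(domain, "vcpu").text = str(vcpus)
    os_el = ET.SubElement(domain, "os")
    ET.SubElement(os_el, "type", arch="x86_64", machine="q35").text = "hvm"
    devices = ET.SubElement(domain, "devices")
    disk = ET.SubElement(devices, "disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=disk_path)
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    # A SPICE graphics device, matching Karton's planned custom SPICE viewer.
    ET.SubElement(devices, "graphics", type="spice", autoport="yes")
    return ET.tostring(domain, encoding="unicode")

domain_xml = build_domain_xml(
    "fedora-test", 4096, 2, "/var/lib/libvirt/images/fedora.qcow2"
)
```

Real definitions carry much more (emulator path, network interfaces, boot order), which is exactly where libosinfo's per-OS metadata helps an installer fill in sensible defaults for each detected guest.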
You can learn more via KDE.org.
AI

When a Company Does Job Interviews with a Malfunctioning AI - and Then Rejects You (slate.com) 51

IBM laid off "a couple hundred" HR workers and replaced them with AI agents. "It's becoming a huge thing," says Mike Peditto, a Chicago-area consultant with 15 years of experience advising companies on hiring practices. He tells Slate "I do think we're heading to where this will be pretty commonplace." Although A.I. job interviews have been happening since at least 2023, the trend has received a surge of attention in recent weeks thanks to several viral TikTok videos in which users share clips of their A.I. bots glitching. Although some of the videos were fakes posted by a creator whose bio warns that his content is "all satire," some are authentic — like that of Kendiana Colin, a 20-year-old student at Ohio State University who had to interact with an A.I. bot after she applied for a summer job at a stretching studio outside Columbus. In a clip she posted online earlier this month, Colin can be seen conducting a video interview with a smiling white brunette named Alex, who can't seem to stop saying the phrase "vertical-bar Pilates" in an endless loop...

Representatives at Apriora, the startup company founded in 2023 whose software Colin was forced to engage with, did not respond to a request for comment. But founder Aaron Wang told Forbes last year that the software allowed companies to screen more talent for less money... (Apriora's website claims that the technology can help companies "hire 87 percent faster" and "interview 93 percent cheaper," but it's not clear where those stats come from or what they actually mean.)

Colin (first interviewed by 404 Media) calls the experience dehumanizing, wondering why she was told to dress professionally, since "They had me going the extra mile just to talk to a robot." After the interview, the robot — and the company — ghosted her with no further contact. "It was very disrespectful and a waste of time."

Houston resident Leo Humphries also "donned a suit and tie in anticipation for an interview" in which the virtual recruiter immediately got stuck repeating the same phrase. Although Humphries tried in vain to alert the bot that it was broken, the interview ended only when the A.I. program thanked him for "answering the questions" and offering "great information" — despite his not being able to provide a single response. In a subsequent video, Humphries said that within an hour he had received an email, addressed to someone else, that thanked him for sharing his "wonderful energy and personality" but let him know that the company would be moving forward with other candidates.
Open Source

OSU's Open Source Lab Eyes Infrastructure Upgrades and Sustainability After Recent Funding Success (osuosl.org) 11

It's a nonprofit that provides hosting for the Linux Foundation, the Apache Software Foundation, Drupal, Firefox, and 160 other projects — delivering nearly 430 terabytes of information every month. (It's currently hosting Debian, Fedora, and Gentoo Linux.) But hosting only provides about 20% of its income, with the rest coming from individual and corporate donors (including Google and IBM). "Over the past several years, we have been operating at a deficit due to a decline in corporate donations," the Open Source Lab's director announced in late April.

It's part of the CS/electrical engineering department at Oregon State University, and while the department "has generously filled this gap, recent changes in university funding make our current funding model no longer sustainable. Unless we secure $250,000 in committed funds, the OSL will shut down later this year."

But "Thankfully, the call for support worked, paving the way for the OSU Open Source Lab to look ahead, into what the future holds for them," reports the blog It's FOSS.

"Following our OSL Future post, the community response has been incredible!" posted director Lance Albertson. "Thanks to your amazing support, our team is funded for the next year. This is a huge relief and lets us focus on building a truly self-sustaining OSL." To get there, we're tackling two big interconnected goals:

1. Finding a new, cost-effective physical home for our core infrastructure, ideally with more modern hardware.
2. Securing multi-year funding commitments to cover all our operations, including potential new infrastructure costs and hardware refreshes.


Our current data center is over 20 years old and needs to be replaced soon. With Oregon State University evaluating the future of this facility, it's very likely we'll need to relocate in the near future. While migrating to the State of Oregon's data center is one option, it comes with significant new costs. This makes finding free or very low-cost hosting (ideally between Eugene and Portland for ~13-20 racks) a huge opportunity for our long-term sustainability. More power-efficient hardware would also help us shrink our footprint.

Speaking of hardware, refreshing some of our older gear during a move would be a game-changer. We don't need brand new, but even a few-generations-old refurbished systems would boost performance and efficiency. (Huge thanks to the Yocto Project and Intel for a recent hardware donation that showed just how impactful this is!) The dream? A data center partner donating space and cycled-out hardware. Our overall infrastructure strategy is flexible. We're enhancing our OpenStack/Ceph platforms and exploring public cloud credits and other donated compute capacity. But whatever the resource, it needs to fit our goals and come with multi-year commitments for stability. And, a physical space still offers unique value, especially the invaluable hands-on data center experience for our students....

[O]ur big focus this next year is locking in ongoing support — think annualized pledges, different kinds of regular income, and other recurring help. This is vital, especially with potential new data center costs and hardware needs. Getting this right means we can stop worrying about short-term funding and plan for the future: investing in our tech and people, growing our awesome student programs, and serving the FOSS community. We're looking for partners, big and small, who get why foundational open source infrastructure matters and want to help us build this sustainable future together.

The It's FOSS blog adds that "With these prerequisites in place, the OSUOSL intends to expand their student program, strengthen their managed services portfolio for open source projects, introduce modern tooling like Kubernetes and Terraform, and encourage more community volunteers to actively contribute."

Thanks to long-time Slashdot reader I'm just joshin for suggesting the story.
United States

MIT Says It No Longer Stands Behind Student's AI Research Paper (msn.com) 28

MIT said Friday it can no longer stand behind a widely circulated paper on AI written by a doctoral student in its economics program. The paper said that the introduction of an AI tool in a materials-science lab led to gains in new discoveries, but had more ambiguous effects on the scientists who used it. WSJ: MIT didn't name the student in its statement Friday, but it did name the paper. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets. In a press release, MIT said it "has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper."

The university said the author of the paper is no longer at MIT. The paper said that after an AI tool was implemented at a large materials-science lab, researchers discovered significantly more materials -- a result that suggested that, in certain settings, AI could substantially improve worker productivity. But it also showed that most of the productivity gains went to scientists who were already highly effective, and that overall the AI tool made scientists less happy about their work.

Education

Student Demands Tuition Refund After Catching Professor Using ChatGPT (fortune.com) 115

A Northeastern University student demanded her tuition money back after discovering her business professor was secretly using AI to create course materials. Ella Stapleton, who graduated this year, grew suspicious when she noticed telltale signs of AI generation in her professor's lecture notes, including a stray ChatGPT citation in the bibliography, recurring typos matching machine outputs, and images showing figures with extra limbs.

"He's telling us not to use it, and then he's using it himself," Stapleton told the New York Times. After filing a formal complaint with Northeastern's business school, Stapleton requested a tuition refund of about $8,000 for the course. The university ultimately rejected her claim. Professor Rick Arrowood acknowledged using ChatGPT, Perplexity AI, and presentation generator Gamma. "In hindsight, I wish I would have looked at it more closely," he said.
Robotics

Student's Robot Obliterates 4x4 Rubik's Cube World Record (bbc.com) 26

An anonymous reader quotes a report from the BBC: A student's robot has beaten the world record for solving a four-by-four Rubik's cube -- by 33 seconds. Matthew Pidden, a 22-year-old University of Bristol student, built and trained the "Revenger" over 15 weeks for his computer science bachelor's degree. The robot solved the cube in 45.305 seconds, obliterating the world record of 1 minute 18 seconds. However, the human record for solving the cube is 15.71 seconds.

Mr Pidden's robot uses dual webcams to scan the cube, a custom mechanism to manipulate the faces, and a fully self-built solving algorithm to generate efficient solutions. The student now plans to study for a master's degree in robotics at Imperial College London.

Education

Is Everyone Using AI to Cheat Their Way Through College? (msn.com) 160

Chungin Lee used ChatGPT to help write the essay that got him into Columbia University — and then "proceeded to use generative artificial intelligence to cheat on nearly every assignment," reports New York magazine's blog Intelligencer: As a computer-science major, he depended on AI for his introductory programming classes: "I'd just dump the prompt into ChatGPT and hand in whatever it spat out." By his rough math, AI wrote 80 percent of every essay he turned in. "At the end, I'd put on the finishing touches. I'd just insert 20 percent of my humanity, my voice, into it," Lee told me recently... When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, "It's the best place to meet your co-founder and your wife."
He eventually did meet a co-founder, and after three unpopular apps they found success by creating the "ultimate cheat tool" for remote coding interviews, according to the article. "Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.)" The article ends with Lee and his co-founder raising $5.3 million from investors for one more AI-powered app, and Lee says they'll target the standardized tests used for graduate school admissions, as well as "all campus assignments, quizzes, and tests. It will enable you to cheat on pretty much everything."

Somewhere along the way Columbia put him on disciplinary probation — not for cheating in coursework, but for creating the apps. But "Lee thought it absurd that Columbia, which had a partnership with ChatGPT's parent company, OpenAI, would punish him for innovating with AI." (OpenAI has even made ChatGPT Plus free to college students during finals week, the article points out, with OpenAI saying their goal is just teaching students how to use it responsibly.) Although Columbia's policy on AI is similar to that of many other universities' — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn't know a single student at the school who isn't using AI to cheat. To be clear, Lee doesn't think this is a bad thing. "I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating," he said...

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.

The article points out ChatGPT's monthly visits increased steadily over the last two years — until June, when students went on summer vacation. "College is just how well I can use ChatGPT at this point," a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.... It isn't as if cheating is new. But now, as one student put it, "the ceiling has been blown off." Who could resist a tool that makes every assignment easier with seemingly no consequences?
After using ChatGPT for her final semester of high school, one student says "My grades were amazing. It changed my life." So she continued using it in college, and "Rarely did she sit in class and not see other students' laptops open to ChatGPT."

One ethics professor even says "The students kind of recognize that the system is broken and that there's not really a point in doing this." (Yes, students are even using AI to cheat in ethics classes...) It's not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students' essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
AI

Gen AI Is Not Replacing Jobs Or Hurting Wages At All, Say Economists 108

An anonymous reader quotes a report from The Register: Instead of depressing wages or taking jobs, generative AI chatbots like ChatGPT, Claude, and Gemini have had almost no wage or labor impact so far -- a finding that calls into question the huge capital expenditures required to create and run AI models. In a working paper released earlier this month, economists Anders Humlum and Emilie Vestergaard looked at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024.

Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers. Yet after Humlum, assistant professor of economics at the Booth School of Business, University of Chicago, and Vestergaard, a PhD student at the University of Copenhagen, analyzed the data, they found the labor and wage impact of chatbots to be minimal. "AI chatbots have had no significant impact on earnings or recorded hours in any occupation," the authors state in their paper.

The report should concern the tech industry, which has hyped AI's economic potential while plowing billions into infrastructure meant to support it. Early this year, OpenAI admitted that it loses money per query even on its most expensive enterprise SKU, while companies like Microsoft and Amazon are starting to pull back on their AI infrastructure spending in light of low business adoption beyond a few pilots. The problem isn't that workers are avoiding generative AI chatbots -- quite the contrary. Their use simply isn't yet translating into actual economic benefits.
"The adoption of these chatbots has been remarkably fast," Humlum told The Register. "Most workers in the exposed occupations have now adopted these chatbots. Employers are also shifting gears and actively encouraging it. But then when we look at the economic outcomes, it really has not moved the needle."

Humlum said while there are gains and time savings to be had, "there's definitely a question of who they really accrue to. And some of it could be the firms -- we cannot directly look at firm profitability. Some of it could also just be that you save some time on existing tasks, but you're not really able to expand your output and therefore earn more. So it's like it saves you time writing emails. But if you cannot really take on more work or do something else that is really valuable, then that will put a damper on how much we should actually expect those time savings to affect your earning ability, your total hours, your wages."

"In terms of economic outcomes, when we're looking at hard metrics -- in the administrative labor market data on earnings, wages -- these tools have really not made a difference so far," said Humlum. "So I think that that puts in some sense an upper bound on what return we should expect from these tools, at least in the short run. My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
AI

AI Secretly Helped Write California Bar Exam, Sparking Uproar (arstechnica.com) 41

An anonymous reader quotes a report from Ars Technica: On Monday, the State Bar of California revealed that it used AI to develop a portion of multiple-choice questions on its February 2025 bar exam, causing outrage among law school faculty and test takers. The admission comes after weeks of complaints about technical problems and irregularities during the exam administration, reports the Los Angeles Times. The State Bar disclosed that its psychometrician (a person skilled in administering psychological tests), ACS Ventures, created 23 of the 171 scored multiple-choice questions with AI assistance. Another 48 questions came from a first-year law student exam, while Kaplan Exam Services developed the remaining 100 questions.

The State Bar defended its practices, telling the LA Times that all questions underwent review by content validation panels and subject matter experts before the exam. "The ACS questions were developed with the assistance of AI and subsequently reviewed by content validation panels and a subject matter expert in advance of the exam," wrote State Bar Executive Director Leah Wilson in a press release. According to the LA Times, the revelation has drawn strong criticism from several legal education experts. "The debacle that was the February 2025 bar exam is worse than we imagined," said Mary Basick, assistant dean of academic skills at the University of California, Irvine School of Law. "I'm almost speechless. Having the questions drafted by non-lawyers using artificial intelligence is just unbelievable." Katie Moran, an associate professor at the University of San Francisco School of Law who specializes in bar exam preparation, called it "a staggering admission." She pointed out that the same company that drafted AI-generated questions also evaluated and approved them for use on the exam.
The report notes that the AI disclosure follows technical glitches with the February exam (like login issues, screen lag, and confusing questions), which led to a federal lawsuit against Meazure Learning and calls for a State Bar audit.
Open Source

Teen Coder Shuts Down Open Source Mac App Whisky, Citing Harm To Paid Apps (arstechnica.com) 56

An anonymous reader quotes a report from Ars Technica: Whisky, a gaming-focused front-end for Wine's Windows compatibility tools on macOS, is no longer receiving updates. As one of the most useful and well-regarded tools in a Mac gamer's toolkit, it could be seen as a great loss, but its developer hopes you'll move on with what he considers a better option: supporting CodeWeavers' CrossOver product.

Also, Whisky's creator is an 18-year-old college student, and he could use a break. "I am 18, yes, and attending Northeastern University, so it's always a balancing act between my school work and dev work," Isaac Marovitz wrote to Ars. The Whisky project has "been more or less in this state for a few months, I posted the notice mostly to clarify and formally announce it," Marovitz said, having received "a lot of questions" about the project status. [...] "Whisky, in my opinion, has not been a positive on the Wine community as a whole," Marovitz wrote on the Whisky site.

He advised that Whisky users buy a CrossOver license, and noted that while CodeWeavers and Valve's work on Proton have had a big impact on the Wine project, "the amount that Whisky as a whole contributes to Wine is practically zero." Fixes for Wine running Mac games "have to come from people who are not only incredibly knowledgeable on C, Wine, Windows, but also macOS," Marovitz wrote, and "the pool of developers with those skills is very limited." While Marovitz told Ars that he's had "some contact with CodeWeavers" in making Whisky, "they were always curious and never told me what I should or should not do." It became clear to him, though, "from what [CodeWeavers] could tell me as well as observing the attitude of the wider community that Whisky could seriously threaten CrossOver's viability."
"Whisky may have been a CrossOver competitor, but that's not how we feel today," wrote CodeWeavers CEO James B. Ramey in a statement. "Our response is simply one of empathy, understanding, and acknowledgement for Isaac's situation."
Robotics

Harvard's RoboBee Masters Landing, Paving Way For Agricultural Pollination (chosun.com) 31

After more than a decade of development, Harvard's insect-sized flying robot, RoboBee, has successfully learned to land using dragonfly-inspired legs and improved flight controls. The researchers see RoboBee as a potential substitute for endangered bees, assisting in the pollination of plants. From a report: RoboBee is a micro flying robot that Harvard has been developing since 2013. As the name suggests, it is the size of a bee, capable of flying like a bee and hovering in mid-air. Its wings are 3 cm long and it weighs only 0.08 g. The weight was reduced by using light piezoelectric elements instead of motors. Piezoelectric elements change shape when an electric current flows through them. The researchers were able to make RoboBee flap its wings 120 times per second by turning the current on and off, a rate similar to that of actual insects.

While RoboBee exhibited flight capabilities comparable to those of a bee, the real problem was landing. Being too light and having short wings, it could not withstand the air turbulence generated during landing. It is easy to understand if you think about the strong winds generated when a helicopter approaches the ground. Christian Chan, a graduate student at Harvard who participated in the research, said, "Until now, it was a matter of shutting off the robot while it attempted to land and praying for a proper touchdown."

To ensure RoboBee's safe landing, it was important to dissipate energy just before touchdown. Hyun Nak-Seung, a professor at Purdue University who participated in the development of RoboBee, explained, "For any flying object, the success of landing depends on minimizing speed just before impact and rapidly dissipating energy afterward. Even for tiny flapping like RoboBee's, the ground effect cannot be ignored, and after landing, the risk of bouncing or rolling makes the situation more complex."
The findings have been published in the journal Science Robotics.
Space

For the First Time Astronomers Watch a Black Hole 'Wake Up' in Real-Time (popsci.com) 18

Black holes "often exhibit long periods of dormancy," writes Popular Science, adding that astronomers had never witnessed a black hole "wake up" in real time. "Until now..."

In February of 2024 X-ray bursts were spotted coming out of a black hole named Ansky by Lorena Hernández-García at Chile's Valparaiso University, according to the article. And what astronomers have now seen "challenges prevailing theories about black hole lifecycles." Hernández-García and collaborators then determined the black hole was displaying a phenomenon known as a quasiperiodic eruption, or QPE [a short-lived flaring event...] While a black hole inevitably destroys everything it captures, objects behave differently during their impending demise. A star, for example, generally stretches apart into a bright, hot, fast-spinning disc known as an accretion disc. Most astronomers have theorized that black holes generate QPEs when a comparatively small object like a star or even a smaller black hole collides with an accretion disc. In the case of Ansky, however, there isn't any evidence linking it to the death of a star.

"The bursts of X-rays from Ansky are ten times longer and ten times more luminous than what we see from a typical QPE," said MIT PhD student and study co-author Joheen Chakraborty. "Each of these eruptions is releasing a hundred times more energy than we have seen elsewhere. Ansky's eruptions also show the longest cadence ever observed, of about 4.5 days." Astronomers must now consider other explanations for Ansky's remarkable behavior. One theory posits that the accretion disc could come from nearby galactic gas pulled in by the black hole instead of a star. If true, then the X-rays may originate from high energy shocks to the disc caused by a small cosmic object repeatedly passing through and disrupting orbital matter.

It's detailed in a study published on April 11 in Nature Astronomy....

Meanwhile, scientists "have uncovered the strongest evidence yet for the existence of elusive intermediate-mass black holes," reports SciTechDaily.

And there's more black hole news from RockDoctor (Slashdot reader #15,477): Given the recent work on galaxy-centre Super-Massive Black Holes (SMBHs), you may be surprised to learn that the only Stellar-Mass Black Holes (SMBHs ... uh, "BHs") identified to date have been found via their gravitational waves, as they merge with another BH or a neutron star. But the long-running OGLE (Optical Gravitational Lensing Experiment) project (1992 — present) has recently confirmed that it has detected an isolated BH not orbiting another bright object, or "swallowing" much of anything...

In this case, 16 other telescopes performed sensitive astrometry (position measurement) over 11 years, including the Hubble Space Telescope (HST). These multiple measurements plot an ellipse on the sky, mirroring the movement of the Earth around its orbit — parallax. Which means this is a relatively close object (1520 parsecs / ~5000 light years).... And there is no sign of a third light-emitting body nearby, which means this is an isolated black hole, not orbiting any other body (or, indeed, with any other [small] star orbiting it).
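The distance figure follows directly from the parallax arithmetic: distance in parsecs is the reciprocal of the parallax angle in arcseconds, and one parsec is about 3.26 light-years. A quick sketch checking the summary's numbers (the 1520 pc value is from the OGLE result above; the milliarcsecond-scale parallax is derived here, not quoted from the study):

```python
def parallax_to_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs is the reciprocal of the parallax in arcseconds."""
    return 1.0 / parallax_arcsec

LY_PER_PARSEC = 3.2616  # light-years in one parsec

# An object at ~1520 pc corresponds to a parallax of roughly 0.66 mas,
# which is why 11 years of sensitive astrometry were needed to trace it.
distance_pc = 1520.0
parallax_mas = 1000.0 / distance_pc        # parallax in milliarcseconds
distance_ly = distance_pc * LY_PER_PARSEC  # the "~5000 light years" above
```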

AI

Anthropic Launches an AI Chatbot Plan For Colleges and Universities (techcrunch.com) 9

An anonymous reader quotes a report from TechCrunch: Anthropic announced on Wednesday that it's launching a new Claude for Education tier, an answer to OpenAI's ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic's AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is "Learning Mode," a new feature within Claude Projects to help students develop their own critical thinking skills, rather than simply obtain answers to questions. With Learning Mode enabled, Claude will ask questions to test understanding, highlight fundamental principles behind specific problems, and provide potentially useful templates for research papers, outlines, and study guides.

Anthropic says Claude for Education comes with its standard chat interface, as well as "enterprise-grade" security and privacy controls. In a press release shared with TechCrunch ahead of launch, Anthropic said university administrators can use Claude to analyze enrollment trends and automate repetitive email responses to common inquiries. Meanwhile, students can use Claude for Education in their studies, the company suggested, such as working through calculus problems with step-by-step guidance from the AI chatbot. To help universities integrate Claude into their systems, Anthropic says it's partnering with the company Instructure, which offers the popular education software platform Canvas. The AI startup is also teaming up with Internet2, a nonprofit organization that delivers cloud solutions for colleges.

Anthropic says that it has already struck "full campus agreements" with Northeastern University, the London School of Economics and Political Science, and Champlain College to make Claude for Education available to all students. Northeastern is a design partner -- Anthropic says it's working with the institution's students, faculty, and staff to build best practices for AI integration, AI-powered education tools, and frameworks. Anthropic hopes to strike more of these contracts, in part through new student ambassador and AI "builder" programs, to capitalize on the growing number of students using AI in their studies.

Government

Substack Says It'll Legally Defend Writers 'Targeted By the Government' (theverge.com) 61

Substack has announced it will legally support foreign writers lawfully residing in the U.S. who face government targeting over their published work, partnering with the nonprofit FIRE to expand its existing Defender program. The Verge reports: In their announcement, Substack and FIRE mention the international Tufts University student who was arrested by federal agents last week. Her legal team links her arrest to an opinion piece she co-wrote for the school's newspaper last year, which criticized Tufts for failing to comply with requests to divest from companies with connections to Israel. "If true, this represents a chilling escalation in the government's effort to target critics of American foreign policy," Substack and FIRE write.

The initiative builds on Substack's Defender program, which already offers legal assistance for independent journalists and creators on the platform. The company says it has supported "dozens" of Substack writers facing claims of defamation and trademark infringement since it launched the program in the US in 2020. It has since brought Substack Defender to writers in Canada and the UK.

Education

Columbia University Suspends Student Behind Interview Cheating AI (businessinsider.com) 37

Columbia University has suspended the student who created an AI tool designed to help job candidates cheat on technical coding interviews, according to disciplinary documents seen by Business Insider. Chungin "Roy" Lee received a yearlong suspension for "publishing unauthorized documents" from a disciplinary hearing about his product, Interview Coder, not for creating the tool itself. Lee had signed a form agreeing not to disclose his disciplinary record or post hearing materials online.

Interview Coder, which sells for $60 monthly, is on track to generate $2 million in annual revenue, Lee said. The university initially placed him on probation after finding him responsible for "facilitation of academic dishonesty." Lee had already submitted paperwork for a leave of absence before his suspension. He told BI he plans to move to San Francisco, which "was my plan all along."
AI

AlexNet, the AI Model That Started It All, Released In Source Code Form (zdnet.com) 8

An anonymous reader quotes a report from ZDNet: There are many stories of how artificial intelligence came to take over the world, but one of the most important developments is the emergence in 2012 of AlexNet, a neural network that, for the first time, demonstrated a huge jump in a computer's ability to recognize images. Thursday, the Computer History Museum (CHM), in collaboration with Google, released for the first time the AlexNet source code written by University of Toronto graduate student Alex Krizhevsky, placing it on GitHub for all to peruse and download.

"CHM is proud to present the source code to the 2012 version of Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton's AlexNet, which transformed the field of artificial intelligence," write the Museum organizers in the readme file on GitHub. Krizhevsky's creation would lead to a flood of innovation -- and capital -- in the ensuing years, based on proof that with sufficient data and computing, neural networks could achieve breakthroughs previously viewed as mainly theoretical.
The Computer History Museum's software historian, Hansen Hsu, published an essay describing how he spent five years negotiating with Google to release the code.
EU

Is WhatsApp Being Ditched for Signal in Dutch Higher Education? (dub.uu.nl) 42

For weeks Signal has been one of the three most-downloaded apps in the Netherlands, according to a local news site. And now "Higher education institutions in the Netherlands have been looking for an alternative," according to DUB (an independent news site for the Utrecht University community): Employees of the Utrecht University of Applied Sciences (HU) were recently advised to switch to Signal. Avans University of Applied Sciences has also been discussing a switch... The National Student Union is concerned about privacy. The subject was raised at last week's general meeting, as reported by chair Abdelkader Karbache, who said: "Our local unions want to switch to Signal or other open-source software."
Besides being open source, Signal is run by a noncommercial nonprofit foundation, the article points out, though its proponents suggest there's another big difference. "HU argues that Signal keeps users' data private, unlike WhatsApp." Cybernews.com explains the concern: In an interview with the Dutch newspaper De Telegraaf, Meredith Whittaker [president of the Signal Foundation] discussed the pitfalls of WhatsApp. "WhatsApp collects metadata: who you send messages to, when, and how often. That's incredibly sensitive information," she says.... The only information [Signal] collects is the date an account was registered, the time when an account was last active, and hashed phone numbers... Information like profile name and the people a user communicates with is all encrypted... Metadata might sound harmless, but it couldn't be further from the truth. According to Whittaker, metadata is deadly. "As a former CIA director once said: 'We kill people based on metadata'."
WhatsApp's metadata also includes IP addresses, TechRadar noted last May: Other identifiable data such as your network details, the browser you use, ISP, and other identifiers linked to other Meta products (like Instagram and Facebook) associated with the same device or account are also collected... [Y]our IP can be used to track down your location. As the company explained, even if you keep the location-related features off, IP addresses and other collected information like phone number area codes can be used to estimate your "general location."

WhatsApp is required by law to share this information with authorities during an investigation...

[U]nder scrutiny is how Meta itself uses these precious details for commercial purposes. Again, this is clearly stated in WhatsApp's privacy policy and terms of use. "We may use the information we receive from [other Meta companies], and they may use the information we share with them, to help operate, provide, improve, understand, customize, support, and market our Services and their offerings," reads the policy. This means that yes, your messages are always private, but WhatsApp is actively collecting your metadata to build your digital persona across other Meta platforms...

The article suggests using a VPN with WhatsApp and turning on its "advanced privacy feature" (which hides your IP address during calls) and managing the app's permissions for data collection. "While these steps can help reduce the amount of metadata collected, it's crucial to bear in mind that it's impossible to completely avoid metadata collection on the Meta-owned app... For extra privacy and security, I suggest switching to the more secure messaging app Signal."

The article also includes a cautionary anecdote. "It was exactly a piece of metadata — a Proton Mail recovery email — that led to the arrest of a Catalan activist."

Thanks to long-time Slashdot reader united_notions for sharing the article.
AI

'There's a Good Chance Your Kid Uses AI To Cheat' (msn.com) 98

Long-time Slashdot reader theodp writes: Wall Street Journal K-12 education reporter Matt Barnum has a heads-up for parents: There's a Good Chance Your Kid Uses AI to Cheat. Barnum writes:

"A high-school senior from New Jersey doesn't want the world to know that she cheated her way through English, math and history classes last year. Yet her experience, which the 17-year-old told The Wall Street Journal with her parent's permission, shows how generative AI has taken root in America's education system, allowing a generation of students to outsource their schoolwork to software with access to the world's knowledge. [...] The New Jersey student told the Journal why she used AI for dozens of assignments last year: Work was boring or difficult. She wanted a better grade. A few times, she procrastinated and ran out of time to complete assignments. The student turned to OpenAI's ChatGPT and Google's Gemini to help spawn ideas and review concepts, which many teachers allow. More often, though, AI completed her work. Gemini solved math homework problems, she said, and aced a take-home test. ChatGPT did calculations for a science lab. It produced a tricky section of a history term paper, which she rewrote to avoid detection. The student was caught only once."

Not surprisingly, AI companies play up the idea that AI will radically improve learning, while educators are more skeptical. "This is a gigantic public experiment that no one has asked for," said Marc Watkins, assistant director of academic innovation at the University of Mississippi.

Crime

To Identify Suspect In Idaho Killings, FBI Used Restricted Consumer DNA Data (nytimes.com) 99

An anonymous reader quotes a report from the New York Times: As investigators struggled for weeks to find who might have committed the brutal stabbings of four University of Idaho students in the fall of 2022, they were focused on a key piece of evidence: DNA on a knife sheath that was found at the scene of the crime. At first they tried checking the DNA with law enforcement databases, but that did not provide a hit. They turned next to the more expansive DNA profiles available in some consumer databases in which users had consented to law enforcement possibly using their information, but that also did not lead to answers.

F.B.I. investigators then went a step further, according to newly released testimony, comparing the DNA profile from the knife sheath with two databases that law enforcement officials are not supposed to tap: GEDmatch and MyHeritage. It was a decision that appears to have violated key parameters of a Justice Department policy that calls for investigators to operate only in DNA databases "that provide explicit notice to their service users and the public that law enforcement may use their service sites."

It also seems to have produced results: Days after the F.B.I.'s investigative genetic genealogy team began working with the DNA profiles, it landed on someone who had not been on anyone's radar: Bryan Kohberger, a Ph.D. student in criminology who has now been charged with the murders. The case has shown both the promise and the unregulated power of genetic technology in an era in which millions of people willingly contribute their DNA profiles to recreational databases, often to hunt for relatives. In the past, law enforcement officials would need to find a direct match between DNA at the crime scene and that of a specific suspect. Now, investigators can use consumer DNA data to build family trees that can zero in on a person of interest -- within certain policy limits.

Slashdot Top Deals