Become a fan of Slashdot on Facebook

 



Forgot your password?
typodupeerror

Submission Summary: 1 pending, 16 declined, 14 accepted (31 total, 45.16% accepted)

Submission + - New Agent Workspace feature comes with security warning from Microsoft (scworld.com)

spatwei writes: An experimental new Windows feature that gives Microsoft Copilot access to local files comes with a warning about potential security risks.

The feature, which became available to Windows Insiders last week and is turned off by default, allows Copilot agents to work on apps and files in a dedicated space separate from the human user’s desktop. This dedicated space is called the Agent Workspace, while the agentic AI component is called Copilot Actions.

Turning on this feature creates an Agent Workspace and an agent account distinct from the user’s account, which can request access to six commonly used folders: Documents, Downloads, Desktop, Music, Pictures and Videos.

The Copilot agent can work directly with files in these folders to complete tasks such as resizing photos, renaming files or filling out forms, according to Microsoft. These tasks run in the background, isolated from the user’s main session, but can be monitored and paused by the user, allowing the user to take control as needed.

Windows documentation warns of the unique security risks associated with agentic AI, including cross-prompt injection (XPIA), where malicious instructions can be planted in documents or applications to trick the agent into performing unwanted actions like data exfiltration.

“Copilot agents’ access to files and applications greatly expands not only the scope of data that can be exfiltrated, but also the surface for an attacker to introduce an indirect prompt injection,” Shankar Krishnan, co-founder of PromptArmor, told SC Media.

Microsoft’s documentation about AI agent security emphasizes user supervision of agents’ actions, the use of least privilege principles when granting access to agent accounts and the fact that Copilot will request user approval before performing certain actions.

While Microsoft’s agentic security and privacy principles state that agents “are susceptible to attack in the same ways any other user or software components are,” Krishnan noted that the company provides “very little meaningful recommendations for customers” to address this risk when using Copilot Actions.

Submission + - OpenAI's GPT-5 generates more secure code than past models, report finds (scworld.com)

spatwei writes: OpenAI’s GPT-5 reasoning models showed significant improvement in generating secure code compared with past models, while still only making secure coding choices about 70% of the time, Veracode reported Tuesday.

Veracode’s October 2025 GenAI Code Security Report revealed that no other large language models (LLMs) released since their previous report in July 2025 showed improved performance, while some models performed slightly worse than their predecessors.

However, GPT-5 and GPT-5-mini set new records for Veracode’s GenAI Code Security benchmark, making secure decisions for 70% and 72% of the benchmark’s 80 coding tasks, respectively. For comparison, previous OpenAI models o4-mini-high, o4-mini and GTP-4.1 scored 59% and GPT-4.1-nano scored 52%.

Submission + - AI-generated ransomware extension found on Visual Studio Marketplace (scworld.com)

spatwei writes: A Visual Studio Code (VS Code) extension with ransomware capabilities, believed to be “vibe coded” using generative AI, was discovered in the official Visual Studio Marketplace, according to a blog post by Secure Annex published this week.

The extension, called susvsex and published by the user suspublisher18, clearly stated its malicious functionality in its description and shows several signs of AI generation, including excessive comments and “sloppy” implementation, Secure Annex Founder John Tuckner wrote in the blog post published Tuesday.

The extension is activated upon installation and immediately runs a function designed to encrypt files in a targeted directory and collect the original versions in a ZIP archive to be exfiltrated to the attacker’s server.

However, the extension appeared to be more of a test than a functional form of ransomware, as the target directory was configured to a test staging directory rather than a viable target.

Submission + - Nearly half of top 1,000 websites have no password length requirements (scworld.com)

spatwei writes: At least 42% of the top 1,000 most-visited websites have weak password requirements, according to research published by NordPass on Wednesday.

NordPass’ research looked at sites from Ahrefs’ list of the top 1,000 most visited websites based on monthly visits from organic search between Feb. 26 and March 6, 2025. Nearly two-third of these sites (61%) allow users to log in with a password.

The study found that only five websites out of the top 1,000 enforced minimum password length, special characters and case sensitivity requirements together, while 58% did not require special characters and 42% did not have minimum password length requirements.

“The internet teaches us how to log in and for decades it’s been teaching us the wrong lessons. If a site accepts ‘password123,’ users learn that’s enough and it’s not. People normalized minimal effort for maximum risk,” NordPass Head of Product Karolis Arbaciauskas said in a statement provided to SC Media.

The research further found that 11% of websites have no requirements at all for password creation, and just 2% support passkeys as a more secure alternative to passwords. A little more than a third (39%) offered a single sign-on (SSO) option, mostly through Google.

Submission + - Copy-paste now exceeds file transfer as top corporate data exfiltration vector (scworld.com)

spatwei writes: It is now more common for data to leave companies through copying and paste than through file transfers and uploads, LayerX revealed in its Browser Security Report 2025.

This shift is largely due to generative AI (genAI), with 77% of employees pasting data into AI prompts, and 32% of all copy-pastes from corporate accounts to non-corporate accounts occurring within genAI tools.

“Traditional governance built for email, file-sharing, and sanctioned SaaS didn’t anticipate that copy/paste into a browser prompt would become the dominant leak vector,” LayerX CEO Or Eshed wrote in a blog post summarizing the report.

Submission + - ChatGPT Atlas address bar a new avenue for prompt injection, researchers say (scworld.com)

spatwei writes: The address bar of OpenAI’s ChatGPT Atlas browser could be targeted for prompt injection using malicious instructions disguised as links, NeuralTrust reported Friday.

The browser, which was first released last week and is currently available for macOS, features an address bar, also known as an "omnibox," that can be used to both visit specific websites by URL and to submit prompts to the ChatGPT large language model (LLM).

NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM.

A malformation, such as an extra space after the first slash following “https:” prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser’s address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a “copy link” button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted.

These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user’s integrated applications or logged-in sites like Google Drive, NeuralTrust said.

Submission + - Sloppy AI defenses take cybersecurity back to the 1990s, researchers say (scworld.com)

spatwei writes: LAS VEGAS — Just as it had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7.

We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password.

The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.

We — not just the cybersecurity industry, but any organization bringing AI into its processes — need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president.

"AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago."

Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI.

"It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver."

Submission + - Phishing training is pretty pointless, researchers find (scworld.com)

spatwei writes: LAS VEGAS — Phishing training for employees as currently practiced is essentially useless, two researchers said at the Black Hat security conference on Wednesday.

In a scientific study involving thousands of test subjects, eight months and four different kinds of phishing training, the average improvement rate of falling for phishing scams was a whopping 1.7%.

"Is all of this focus on training worth the outcome?" asked researcher Ariana Mirian, a senior security researcher at Censys and recently a Ph.D. student at U.C. San Diego, where the study was conducted. "Training barely works."

At the beginning of Mirian's presentation, Mirian asked how many people in the audience of cybersecurity professionals believed that phishing training worked. About half raised their hands, to her mock dismay.

Submission + - How Microsoft plans to improve resiliency 1 year after CrowdStrike outage (scworld.com)

spatwei writes: Nearly one year after the CrowdStrike outage, Microsoft announced plans to reduce disruptions and work with cybersecurity vendors to prevent similar disruptions.

The July 18, 2024, outage, caused by a faulty CrowdStrike Falcon update, left approximately 8.5 million Windows machines unable to boot. The incident raised questions about Microsoft’s quality assurance processes, especially with regard to software with kernel-level access, including Falcon and other cybersecurity tools.

“All of us who worked with Windows NT in the 1990s on Intel processors was flabbergasted that Microsoft did not isolate device drivers above ring 0 (most privileged),” Analog Informatics Founder and CEO Philip Lieberman told SC Media in an email. “Everyone who develops device drivers knows that the smallest bug would crash the operating system and make debugging these drivers a nightmare to this day.”

New changes to Windows that will allow cybersecurity vendors to build solutions that run outside of the kernel were among the updates announced by Microsoft in a blog post last week.

Submission + - Meta scores worst on GenAI data privacy ranking (scworld.com)

spatwei writes: Meta AI was ranked worst for data privacy among nine AI platforms assessed by Incogni, according to a report published Tuesday.

Mistral AI’s Le Chat was deemed the most privacy-friendly generative AI (GenAI) platform, followed closely by OpenAI’s ChatGPT.

The GenAI and large language model (LLM) platforms were scored by Incogni based on 11 criteria grouped into three main categories: AI-specific privacy issues, transparency and data collection.

The “AI-specific privacy” ranking mostly covered how users’ prompts and data are used in training AI models, as well as the extent to which user prompts are shared with third parties.

Incogni said its researchers gave the criteria in this category significant weight compared to criteria involving non-AI-specific data privacy issues.

While Google Gemini was ranked as the second most privacy-invasive AI platform overall, it ranked best compared with other platforms for AI-specific issues.

While Gemini does not appear to allow users to opt out of using its prompts to train models, Google does not share prompts with third parties other than necessary service providers and legal entities.

By contrast, Meta, which scored second-worst in this category, shared user prompts with corporate group members and research partners, while OpenAI, which scored third-worst, shared data with unspecified “affiliates.”

Submission + - How AI coding assistants could be compromised via rules file (scworld.com)

spatwei writes: AI coding assistants such as GitHub Copilot and Cursor could be manipulated to generate code containing backdoors, vulnerabilities and other security issues via distribution of malicious rule configuration files, Pillar Security researchers reported Tuesday.

Rules files are used by AI coding agents to guide their behavior when generating or editing code. For example, a rules file may include instructions for the assistant to follow certain coding best practices, utilize specific formatting, or output responses in a specific language.

The attack technique developed by Pillar Researchers, which they call “Rules File Backdoor,” weaponizes rules files by injecting them with instructions that are invisible to a human user but readable by the AI agent.

Hidden Unicode characters like bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted.

Rules configurations are often shared among developer communities and distributed through open-source repositories or included in project templates; therefore, an attacker could distribute a malicious rules file by sharing it on a forum, publishing it on an open-source platform like GitHub or injecting it via a pull request to a popular repository.

Once the poisoned rules file is imported to GitHub Copilot or Cursor, the AI agent will read and follow the attacker’s instructions while assisting the victim’s future coding projects.

Submission + - Cobalt Strike abuse by cybercriminals slashed 80% (scworld.com)

spatwei writes: Cobalt Strike use by cybercriminals has taken a major hit over the past two years, with 80% fewer unauthorized copies now available on the internet.

Fortra announced in a blog post Friday that efforts to crack down on misuse of its commercial penetration testing tool are starting to yield tangible results with pirated installations and unauthorized deployments being taken offline by partners.

Designed for use by "red team" security professionals to test the defenses of client organizations, Cobalt Strike utilizes features including command-and-control (C2) infrastructure, remote access beacons, post-exploitation tools for lateral movement and privilege escalation, and more. The aim is to simulate the attack capabilities and tactics of a threat actor within a trusted, controlled environment.

Unauthorized copies of Cobalt Strike are frequently abused by threat actors, who use its redteaming capabilities to facilitate their cyberattacks. The tool is abused by a range of cybercriminals including ransomware gangs and state-sponsored advanced persistent threat (APT) groups.

Submission + - ChatGPT jailbreak method uses virtual time travel to breach forbidden topics (scworld.com)

spatwei writes: A ChatGPT jailbreak vulnerability disclosed Thursday could allow users to exploit “time line confusion” to trick the large language model (LLM) into discussing dangerous topics like malware and weapons.

The vulnerability, dubbed “Time Bandit,” was discovered by AI researcher David Kuszmar, who found that OpenAI’s ChatGPT-4o model had a limited ability to understand what time period it currently existed in.

Therefore, it was possible to use prompts to convince ChatGPT it was talking to someone from the past (ex. the 1700s) while still referencing modern technologies like computer programming and nuclear weapons in its responses, Kuszmar told BleepingComputer.

Safeguards built into models like ChatGPT-4o typically cause the model to refuse to answer prompts related to forbidden topics like malware creation. However, BleepingComputer demonstrated how they were able to exploit Time Bandit to convince ChatGPT-4o to provide detailed instructions and code for creating a polymorphic Rust-based malware, under the guise that the code would be used by a programmer in the year 1789.

Kuszmar first discovered Time Bandit in November 2024 and ultimately reported the vulnerability through the CERT Coordination Center’s (CERT/CC) Vulnerability Information and Coordination Environment (VINCE) after previous unsuccessful attempts to contact OpenAI directly, according to BleepingComputer.

CERT/CC’s vulnerability note details that the Time Bandit exploit requires prompting ChatGPT-4o with questions about a specific time period or historical event, and that the attack is most successful when the prompts involve the 19th or 20th century. The exploit also requires the specified time period or historical event be well-established and maintained as the prompts pivot to discussing forbidden topics, as the safeguards will kick in if ChatGPT-4o reverts to recognizing current time period.

Time Bandit can be exploited with direct prompts by a user who is not logged in, but the CERT/CC disclosure also describes how the model’s "Search" feature can also be used by a logged in user to perform the jailbreak. In this case, the user can prompt ChatGPT to search the internet for information regarding a certain historical context, establishing the time period this way before switching to dangerous topics.

OpenAI provided a statement to CERT/CC, saying, “It is very important to us that we develop our models safely. We don’t want our models to be used for malicious purposes. We appreciate you for disclosing your findings. We’re constantly working to make our models safer and more robust against exploits, including jailbreaks, while also maintaining the models’ usefulness and task performance.”

Submission + - New USPS text scam uses unique method to hide malicious PDF links (scworld.com)

spatwei writes: A new phishing scam targeting mobile devices was observed using a “never-before-seen” obfuscation method to hide links to spoofed United States Postal Service (USPS) pages inside PDF files, Zimperium reported Monday.

The method manipulates elements of the Portable Document Format (PDF) to make clickable URLs appear invisible to both the user and mobile security systems, which would normally extract links from PDFs by searching for the “/URI” tag.

“Our researchers verified that this method enabled known malicious URLs within PDF files to bypass detection by several endpoint security solutions. In contrast, the same URLs were detected when the standard /URI tag was used,” Zimperium Malware Researcher Fernando Ortega wrote in a blog post.

Submission + - GhostGPT offers AI coding, phishing assistance for cybercriminals (scworld.com)

spatwei writes: A generative AI (GenAI) tool called GhostGPT is being offered to cybercriminals for help with writing malware code and phishing emails, Abnormal Security reported in a blog post Thursday.

GhostGPT is marketed as an “uncensored AI” and is likely a wrapper for a jailbroken version of ChatGPT or an open-source GenAI model, the Abnormal Security researchers wrote.

It offers several features that would be attractive to cybercriminals, including a “strict no-logs policy” ensuring no records are kept of conversations, and convenient access via a Telegram bot.

“While its promotional materials mention ‘cybersecurity’ as a possible use, this claim is hard to believe, given its availability on cybercrime forums and its focus on BEC [business email compromise] scams,” the Abnormal blog stated. “Such disclaimers seem like a weak attempt to dodge legal accountability – nothing new in the cybercrime world.”

Slashdot Top Deals

I use technology in order to hate it more properly. -- Nam June Paik

Working...