Programming

Will We Care About Frameworks in the Future? (kinlan.me)

Paul Kinlan, who leads the Chrome and Open Web Developer Relations team at Google, asks and answers the question (with a no): Frameworks are abstractions over a platform, designed to help people and teams accelerate their team's new work and maintenance while improving the consistency and quality of their projects. They also frequently force a certain structure and architecture onto your code base. This isn't a bad thing; team productivity is an important aspect of any software.

I'm of the belief that software development is entering a radical shift that is currently driven by agents like Replit's and there is a world where a person never actually has to manipulate code directly anymore. As I was making broad and sweeping changes to the functionality of the applications by throwing the Agent a couple of prompts here and there, the software didn't seem to care that there was repetition in the code across multiple views, it didn't care about shared logic, extensibility or inheritability of components... it just implemented what it needed to do and it did it as vanilla as it could.

I was just left wondering if there will be a need for frameworks in the future? Do the architecture patterns we've learnt over the years matter? Will new patterns for software architecture appear that favour LLM management?

Programming

The Team Behind GitHub's 'Atom' IDE Builds a Cross-Platform, AI-Optional 'Zed Editor' (itsfoss.com)

Nathan Sobo "joined GitHub in late 2011 to build the Atom text editor," according to an online biography, "and he led the Atom team until 2018." Max Brunsfeld joined the Atom team in 2013, and "While driving Atom towards its 1.0 launch during the day, Max spent nights and weekends building Tree-sitter, a blazing-fast and expressive incremental parsing framework that currently powers all code analysis at GitHub."

Last year they teamed up with Antonio Scandurra (another Atom alumnus) to launch a new startup called Zed (which in 2023 raised $10 million, according to TechCrunch). And today the open source blog It's FOSS checks in on their open-source code editor — "Zed Editor". Mainly written in Rust, it supports running in CLI, diagnosing project-wide errors, split panes, and markdown previews: By default, any added content is treated as plain text. I used the language switcher to change it to Rust so that I would get proper syntax highlighting, indentation, error detection, and other useful language-specific functions. The switch highlighted all the Rust elements correctly, and I then focused on Zed Editor's user interface. The overall feel of the editor was minimal, with all the important options being laid out nicely.

[Its status bar] had some interesting panels. The first one I checked was the Terminal Panel, which, as the name suggests, lets you run commands, scripts, and facilitates interaction with system files or processes directly from within the editor. I then moved to the Assistant Panel, which is home to various large language models that can be integrated into Zed Editor. There are options like Anthropic, GitHub Copilot Chat, Ollama, OpenAI, and Google AI... The Zed Editor team has also recently introduced Zed AI in collaboration with Anthropic for assisting with coding, allowing for code generation, advanced context-powered interactions, and more...

The real-time collaboration features on Zed Editor are quite appealing too. To check them out, I had to log in with my GitHub account. After logging in, the Collab Panel opened up, and I could see many channels from the official Zed community. I could chat with others, add collaborators to existing projects, join a call with the option to share my screen and track other collaborators' cursors, add new contacts, and carry out many other collaborative tasks.

One can also use extensions and themes to extend what Zed Editor can do. There are some nice pre-installed themes as well.

Apple

Apple Explores Push Into Smart Glasses With 'Atlas' User Study (yahoo.com)

Apple is exploring a push into smart glasses with an internal study of products currently on the market, setting the stage for the company to follow Meta into an increasingly popular category. From a report: The initiative, code-named Atlas, got underway last week and involves gathering feedback from Apple employees on smart glasses, according to people with knowledge of the matter. Additional focus groups are planned for the near future, said the people, who asked not to be identified because the work is secret.

The studies are being led by Apple's Product Systems Quality team, part of the hardware engineering division. "Testing and developing products that all can come to love is very important to what we do at Apple," the group wrote in an email to select employees at the company's headquarters in Cupertino, California. "This is why we are looking for participants to join us in an upcoming user study with current market smart glasses."

Security

Is AI-Driven 0-Day Detection Here? (zeropath.com)

"AI-driven 0-day detection is here," argues a new blog post from ZeroPath, makers of a GitHub app that "detects, verifies, and issues pull requests for security vulnerabilities in your code."

They write that AI-assisted security research "has been quietly advancing" since early 2023, when researchers at DARPA and ARPA-H's Artificial Intelligence Cyber Challenge demonstrated the first practical applications of LLM-powered vulnerability detection — with new advances continuing. "Since July 2024, ZeroPath's tool has uncovered critical zero-day vulnerabilities — including remote code execution, authentication bypasses, and insecure direct object references — in popular AI platforms and open-source projects." And they ultimately identified security flaws in projects owned by Netflix, Salesforce, and Hulu by "taking a novel approach combining deep program analysis with adversarial AI agents for validation. Our methodology has uncovered numerous critical vulnerabilities in production systems, including several that traditional Static Application Security Testing tools were ill-equipped to find..." TL;DR — most of these bugs are simple and could have been found with a code review from a security researcher or, in some cases, scanners. The historical issue, however, with automating the discovery of these bugs is that traditional SAST tools rely on pattern matching and predefined rules, and miss complex vulnerabilities that do not fit known patterns (e.g. business logic problems, broken authentication flaws, or non-traditional sinks such as from dependencies). They also generate a high rate of false positives.

The beauty of LLMs is that they can reduce ambiguity in most of the situations that caused scanners to be either unusable or produce few findings when mass-scanning open source repositories... To do this well, you need to combine deep program analysis with adversarial agents that test the plausibility of vulnerabilities at each step. The solution ends up mirroring the traditional phases of a pentest — recon, analysis, exploitation (and remediation which is not mentioned in this post)...

AI-driven vulnerability detection is moving fast... What's intriguing is that many of these vulnerabilities are pretty straightforward — they could've been spotted with a solid code review or standard scanning tools. But conventional methods often miss them because they don't fit neatly into known patterns. That's where AI comes in, helping us catch issues that might slip through the cracks.

"Many vulnerabilities remain undisclosed due to ongoing remediation efforts or pending responsible disclosure processes," according to the blog post, which includes a pie chart showing the biggest categories of vulnerabilities found:
  • 53%: Authorization flaws, including broken access control in API endpoints and unauthorized Redis access and configuration exposure. ("Impact: Unauthorized access, data leakage, and resource manipulation across tenant boundaries.")
  • 26%: File operation issues, including directory traversal in configuration loading and unsafe file handling in upload features. ("Impact: Unauthorized file access, sensitive data exposure, and potential system compromise.")
  • 16%: Code execution vulnerabilities, including command injection in file processing and unsanitized input in system commands. ("Impact: Remote code execution, system command execution, and potential full system compromise.")

The company's CIO/cofounder was "former Red Team at Tesla," according to the startup's profile at YCombinator, and earned over $100,000 as a bug-bounty hunter. (And another co-founder is a former Google security engineer.)

Thanks to Slashdot reader Mirnotoriety for sharing the article.


Sci-Fi

'Alien' Signal Decoded (esa.int)

An anonymous reader quotes a report from the European Space Agency: White dots arranged in five clusters against a black background. This is the simulated extraterrestrial signal transmitted from Mars and deciphered by a father and a daughter on Earth after a year-long decoding effort. On June 7, 2024, media artist Daniela de Paulis received this simple, retro-looking image depicting five amino acids in her inbox. It was the solution to a cosmic puzzle beamed from ESA's ExoMars Trace Gas Orbiter (TGO) in May 2023, when the European spacecraft played alien as part of the multidisciplinary art project 'A Sign in Space.' After three radio astronomy observatories on Earth intercepted the signal, the challenge was first to extract the message from the raw data of the radio signal, and secondly to decode it. In just 10 days, a community of 5000 citizen scientists gathered online and managed to extract the signal. The second task took longer and required some visionary minds.

US citizens Ken and Keli Chaffin cracked the code following their intuition and running simulations for hours and days on end. The father and daughter team discovered that the message contained movement, suggesting some sort of cellular formation and life forms. Amino acids and proteins are the building blocks of life. Now that the cryptic signal has been deciphered, the quest for meaning begins. The interpretation of the message, like any art piece, remains open. Daniela crafted the message with a small group of astronomers and computer scientists, with support from ESA, the SETI Institute and the Green Bank Observatory. The artist and collaborators behind the project are now taking a step back and witnessing how citizen scientists are shaping the challenge on their own.

AI

GitHub Copilot Moves Beyond OpenAI Models To Support Claude 3.5, Gemini

GitHub Copilot will switch from using exclusively OpenAI's GPT models to a multi-model approach, adding Anthropic's Claude 3.5 Sonnet and Google's Gemini 1.5 Pro. Ars Technica reports: First, Anthropic's Claude 3.5 Sonnet will roll out to Copilot Chat's web and VS Code interfaces over the next few weeks. Google's Gemini 1.5 Pro will come a bit later. Additionally, GitHub will soon add support for a wider range of OpenAI models, including GPT o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to fit their needs -- and organizations will be able to choose which models will be usable by team members.

The new approach makes sense for users, as certain models are better at certain languages or types of tasks. "There is no one model to rule every scenario," wrote [GitHub CEO Thomas Dohmke]. "It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice." It starts with the web-based and VS Code Copilot Chat interfaces, but it won't stop there. "From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot's surface areas and functions soon," Dohmke wrote. There are a handful of additional changes coming to GitHub Copilot, too, including extensions, the ability to manipulate multiple files at once from a chat with VS Code, and a preview of Xcode support.
GitHub also introduced "Spark," a natural language-based app development tool that enables both non-coders and coders to create and refine applications using conversational prompts. It's currently in an early preview phase, with a waitlist available for those who are interested.
Open Source

Password Manager Bitwarden Makes Changes to Address Concerns Over Open Source Licensing (github.com)

Bitwarden describes itself as an "open source password manager for business." But it also made a change to its build requirements that led to an issue on the project's GitHub page titled "Desktop version 2024.10.0 is no longer free software."

In the week that followed, Bitwarden's official account on X.com promised a fix was coming. "It seems a packaging bug was misunderstood as something more, and the team plans to resolve it. Bitwarden remains committed to the open source licensing model in place for years, along with retaining a fully featured free version for individual users." And Thursday Bitwarden followed through with new changes to address the concerns.

The Register reports the whole episode started because of a new build requirement added in a pull request a couple of weeks ago titled "Introduce SDK client." This SDK is required to compile the software from source — either the Bitwarden server or any of its client applications... [But the changed license had warned "You may not use this SDK to develop applications for use with software other than Bitwarden (including non-compatible implementations of Bitwarden) or to develop another SDK."]
Phoronix picks up the story: The issue of this effectively not making the Bitwarden client free software was raised in this GitHub issue... Bitwarden founder and CTO Kyle Spearrin has commented on the ticket... "Being able to build the app as you are trying to do here is an issue we plan to resolve and is merely a bug." The ticket was subsequently locked and limited to collaborators.
And Thursday it was Bitwarden founder and CTO Kyle Spearrin who again re-appeared in the Issue — first thanking the user who had highlighted the concerns. "We have made some adjustments to how the SDK code is organized and packaged to allow you to build and run the app with only GPL/OSI licenses included." The sdk-internal package references in the clients now come from a new sdk-internal repository, which follows the licensing model we have historically used for all of our clients (see LICENSE_FAQ.md for more info). The sdk-internal reference only uses GPL licenses at this time. If the reference were to include Bitwarden License code in the future, we will provide a way to produce multiple build variants of the client, similar to what we do with web vault client builds.

The original sdk repository will be renamed to sdk-secrets, and retains its existing Bitwarden SDK License structure for our Secrets Manager business products. The sdk-secrets repository and packages will no longer be referenced from the client apps, since that code is not used there.

Security

Microsoft's Honeypots Lure Phishers at Scale - to Spy on Them and Waste Their Time (bleepingcomputer.com)

A principal security software engineer at Microsoft described how they use their Azure cloud platform "to hunt phishers at scale," in a talk at the information security conference BSides Exeter.

Calling himself Microsoft's "Head of Deception," Ross Bevington described how they'd created a "hybrid high interaction honeypot" on the now retired code.microsoft.com "to collect threat intelligence on actors ranging from less-skilled cybercriminals to nation-state groups targeting Microsoft infrastructure," according to a report by BleepingComputer: With the collected data, Microsoft can map malicious infrastructure, gain a deeper understanding of sophisticated phishing operations, disrupt campaigns at scale, identify cybercriminals, and significantly slow down their activity... Bevington and his team fight phishing by leveraging deception techniques using entire Microsoft tenant environments as honeypots with custom domain names, thousands of user accounts, and activity like internal communications and file-sharing...

In his BSides Exeter presentation, the researcher says that the active approach consists of visiting active phishing sites identified by Defender and typing in the credentials from the honeypot tenants. Since the credentials are not protected by two-factor authentication and the tenants are populated with realistic-looking information, attackers have an easy way in and start wasting time looking for signs of a trap. Microsoft says it monitors roughly 25,000 phishing sites every day, feeding about 20% of them with the honeypot credentials; the rest are blocked by CAPTCHA or other anti-bot mechanisms.

Once the attackers log into the fake tenants, which happens in 5% of the cases, Microsoft turns on detailed logging to track every action they take, thus learning the threat actors' tactics, techniques, and procedures. Intelligence collected includes IP addresses, browsers, location, behavioral patterns, whether they use VPNs or VPSs, and what phishing kits they rely on... The deception technology currently costs an attacker roughly 30 days before they realize they breached a fake environment. All along, Microsoft collects actionable data that can be used by other security teams to create more complex profiles and better defenses.
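The figures quoted above compose into a rough daily funnel. A back-of-the-envelope sketch, using only the article's numbers (the variable names are our own shorthand):

```python
# Rough daily funnel for the honeypot operation described above.
# All figures come from the article; variable names are our own.
sites_monitored = 25_000    # phishing sites monitored per day
fed_fraction = 0.20         # ~20% get fed honeypot credentials
login_fraction = 0.05       # attackers log into a fake tenant in ~5% of cases
days_wasted = 30            # time before attackers realize the tenant is fake

fed_per_day = sites_monitored * fed_fraction
logins_per_day = fed_per_day * login_fraction
attacker_days_burned = logins_per_day * days_wasted

print(int(fed_per_day))           # 5000 sites seeded with decoy credentials
print(int(logins_per_day))        # 250 monitored intrusions per day
print(int(attacker_days_burned))  # 7500 attacker-days tied up per day of seeding
```

At roughly 250 observed intrusions a day, each burning about a month of attacker effort, the scale of the time-wasting becomes clear.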

Security

How WatchTowr Explored the Complexity of a Vulnerability in a Secure Firewall Appliance (watchtowr.com)

Cybersecurity startup watchTowr "was founded by hacker-turned-entrepreneur Benjamin Harris," according to a recent press release touting their Fortune 500 customers and $29 million investments from venture capital firms. ("If there's a way to compromise your organization, watchTowr will find it," Harris says in the announcement.)

This week they shared their own research on a Fortinet FortiGate SSLVPN appliance vulnerability (discovered in February by Gwendal Guégniaud of the Fortinet Product Security team — presumably in a static analysis for format string vulnerabilities). "It affected (before patching) all currently-maintained branches, and recently was highlighted by CISA as being exploited-in-the-wild... It's a Format String vulnerability [that] quickly leads to Remote Code Execution via one of many well-studied mechanisms, which we won't reproduce here..."

"Tl;dr SSLVPN appliances are still sUpEr sEcurE," their post begins — but the details are interesting. When trying to test an exploit, watchTowr discovered instead that FortiGate always closed the connection early, thanks to an exploit mitigation in glibc "intended to hinder clean exploitation of exactly this vulnerability class." watchTowr hoped to "use this to very easily check if a device is patched — we can simply send a %n, and if the connection aborts, the device is vulnerable. If the connection does not abort, then we know the device has been patched... " But then they discovered "Fortinet added some kind of certificate validation logic in the 7.4 series, meaning that we can't even connect to it (let alone send our payload) without being explicitly permitted by a device administrator." We also checked the 7.0 branch, and here we found things even more interesting, as an unpatched instance would allow us to connect with a self-signed certificate, while a patched machine requires a certificate signed by a configured CA. We did some reversing and determined that the certificate must be explicitly configured by the administrator of the device, which limits exploitation of these machines to the managing FortiManager instance (which already has superuser permissions on the device) or the other component of a high-availability pair. It is not sufficient to present a certificate signed by a public CA, for example...
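The patch check described above hinges on what %n actually does: unlike every other printf conversion, it writes output — the number of bytes printed so far — through a pointer argument, which is what makes attacker-controlled format strings an arbitrary-write primitive, and why glibc's fortify hardening rejects %n in writable format strings (the early connection close observed above). A minimal demonstration of the primitive itself, calling the plain, unfortified libc printf via ctypes (assumes Linux/glibc; this illustrates the vulnerability class, not the FortiGate exploit):

```python
import ctypes

# Load the C library and call the plain (unfortified) printf directly.
libc = ctypes.CDLL(None)

written = ctypes.c_int(0)
# %n stores the number of bytes printed so far into *written.
# When an attacker controls the format string, this conversion becomes
# a memory write; fortified builds abort instead of honoring it.
libc.printf(b"watchTowr%n\n", ctypes.byref(written))

print(written.value)  # 9: bytes printed before the %n was reached
```

The probe logic follows directly: a %n that a fortified, unpatched parser tries to honor triggers the abort (connection closed), while a patched parser that rejects the format string never reaches it.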

Fortinet's advice here is simply to update, which is always sound advice, but doesn't really communicate the nuance of this vulnerability... Assuming an organisation is unable to apply the supplied workaround, the urgency of upgrade is largely dictated by the willingness of the target to accept a self-signed certificate. Targets that will do so are open to attack by any host that can access them, while those devices that require a certificate signed by a trusted root are rendered unexploitable in all but the narrowest of cases (because the TLS/SSL ecosystem is just so solid, as we recently demonstrated)...

While it's always a good idea to update to the latest version, the life of a sysadmin is filled with cost-to-benefit analysis, juggling the needs of users with their best interests.... [I]t is somewhat troubling when third parties need to reverse patches to uncover such details.

Thanks to Slashdot reader Mirnotoriety for sharing the article.
Security

Internet Archive Users Start Receiving Email From 'Some Random Guy' Criticizing Unpatched Hole (bleepingcomputer.com)

A post shared Saturday on social media acknowledges those admins and developers at the Internet Archive working "literally round the clock... They have taken no days off this past week. They are taking none this weekend... they are working with all of their energy and considerable talent."

It describes people "working so incredibly hard... putting their all in," with a top priority of "getting the site back secure and safe".

But there's new and continuing problems, reports The Verge's weekend editor: Early this morning, I received an email from "The Internet Archive Team," replying to a message I'd sent on October 9th. Except its author doesn't seem to have been the digital archivists' support team — it was apparently written by the hackers who breached the site earlier this month and who evidently maintain some level of access to its systems.

I'm not alone. Users on the Internet Archive subreddit are reporting getting the replies, as well. Here is the message I received:

It's dispiriting to see that even after being made aware of the breach 2 weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets.

As demonstrated by this message, this includes a Zendesk token with perms to access 800K+ support tickets sent to info@archive.org since 2018.

Whether you were trying to ask a general question, or requesting the removal of your site from the Wayback Machine — your data is now in the hands of some random guy. If not me, it'd be someone else.

The site BleepingComputer believes they know the larger context, starting with the fact that they've also "received numerous messages from people who received replies to their old Internet Archive removal requests... The email headers in these emails also pass all DKIM, DMARC, and SPF authentication checks, proving they were sent by an authorized Zendesk server."

BleepingComputer also writes that they'd "repeatedly tried to warn the Internet Archive that their source code was stolen through a GitLab authentication token that was exposed online for almost two years."

And that "the threat actor behind the actual data breach, who contacted BleepingComputer through an intermediary to claim credit for the attack," has been frustrated by misreporting. (Specifically, they insist there were two separate attacks last week — a DDoS attack and a separate data breach for a 6.4-gigabyte database which includes email addresses for the site's 33 million users.) The threat actor told BleepingComputer that the initial breach of Internet Archive started with them finding an exposed GitLab configuration file on one of the organization's development servers, services-hls.dev.archive.org. BleepingComputer was able to confirm that this token has been exposed since at least December 2022, with it rotating multiple times since then. The threat actor says this GitLab configuration file contained an authentication token allowing them to download the Internet Archive source code. The hacker says that this source code contained additional credentials and authentication tokens, including the credentials to Internet Archive's database management system. This allowed the threat actor to download the organization's user database, further source code, and modify the site.

The threat actor claimed to have stolen 7TB of data from the Internet Archive but would not share any samples as proof. However, now we know that the stolen data also included the API access tokens for Internet Archive's Zendesk support system. BleepingComputer attempted to contact the Internet Archive numerous times, as recently as Friday, offering to share what we knew about how the breach occurred and why it was done, but we never received a response.

"The Internet Archive was not breached for political or monetary reasons," they conclude, "but simply because the threat actor could...

"While no one has publicly claimed this breach, BleepingComputer was told it was done while the threat actor was in a group chat with others, with many receiving some of the stolen data. This database is now likely being traded amongst other people in the data breach community, and we will likely see it leaked for free in the future on hacking forums like Breached."
Microsoft

Microsoft Veteran Ditches Team Tabs, Blaming Storage Trauma of Yesteryear (theregister.com)

Veteran Microsoft engineer Larry Osterman is the latest to throw his hat into the "tabs versus spaces" ring. From a report: The debate has vexed engineers for decades -- is it best to indent code with tabs or spaces? Osterman, a four-decade veteran of Microsoft, was Team Tabs when storage was tight, but has since become Team Spaces with the advent of terabytes of relatively inexpensive storage. "Here's the thing," he said. "When you've got 512 kilobytes, and you're writing a program in Pascal with lots of indentation, if you're taking eight bytes for every one of those indentations, for eight spaces, you could save seven bytes in your program by using a tab character."

It all added up, even when floppy disks were part of the equation.

However, according to Osterman, things have changed. Storage is less of an issue, so why not use spaces? A cynic might wonder if that sort of attitude has led to the bloatware of today, where software requires ever-increasing amounts of storage in return for precious little extra functionality and a never-ending stream of patches. Any decent compiler should strip out any extraneous characters, assuming the code is indeed being compiled beforehand and not interpreted at run-time. For his part, Osterman is now a member of team spaces. "I like spaces simply because it always works and it's always consistent," he said.
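Osterman's arithmetic is easy to check: an indentation level costs one byte as a tab and eight bytes as spaces, so every indented line saves seven bytes per level. A quick sketch (the code line itself is a made-up example; only the one-byte-tab versus eight-spaces arithmetic comes from the article):

```python
# One indented line, written both ways. The Pascal-ish line is a
# hypothetical example; the byte arithmetic is from the article.
code = "if x > 0 then"
indent_level = 2

with_spaces = " " * 8 * indent_level + code  # 8 bytes per indentation level
with_tabs = "\t" * indent_level + code       # 1 byte per indentation level

print(len(with_spaces))                   # 29 bytes on disk
print(len(with_tabs))                     # 15 bytes on disk
print(len(with_spaces) - len(with_tabs))  # 14 = 7 bytes saved x 2 levels
```

Multiplied over every indented line of a deeply nested Pascal program on a 512-kilobyte machine, those seven-byte savings were worth having; on terabyte disks they are noise.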

Firefox

uBlock Origin Lite Maker Ends Firefox Store Support, Slams Mozilla For Hostile Reviews (neowin.net)

The Firefox extension for the uBlock Origin Lite content blocker is no longer available. According to Neowin, "Raymond Hill, the maker of the extension, pulled support and moved uBlock Origin Lite to self-hosting after multiple encounters with a 'nonsensical and hostile' review process from the store review team." From the report: It all started in early September when Mozilla flagged every version of the uBlock Origin Lite extension as violating its policies. Reviewers then claimed the extension apparently collected user data and contained "minified, concatenated or otherwise machine-generated code." The developer seemingly debunked those allegations, saying that "it takes only a few seconds for anyone who has even basic understanding of JavaScript to see the raised issues make no sense." Raymond Hill decided to drop the extension from the store and move it to a self-hosted version. This means that those who want to continue using uBlock Origin Lite on Firefox should download the latest version from GitHub (it can auto-update itself).

The last message from the developer in a now-closed GitHub issue shows an email from Mozilla admitting its fault and apologizing for the mistake. However, Raymond still pulled the extension from the Mozilla Add-ons Store, which means you can no longer find it on addons.mozilla.org. It is worth noting that the original uBlock Origin for Firefox is still available and supported.

Businesses

Dozens of Fortune 100 Companies Have Unwittingly Hired North Korean IT Workers (therecord.media)

"Dozens of Fortune 100 organizations" have unknowingly hired North Korean IT workers using fake identities, generating revenue for the North Korean government while potentially compromising tech firms, according to Google's Mandiant unit. "In a report published Monday [...], researchers describe a common scheme orchestrated by the group it tracks as UNC5267, which has been active since 2018," reports The Record. "In most cases, the IT workers 'consist of individuals sent by the North Korean government to live primarily in China and Russia, with smaller numbers in Africa and Southeast Asia.'" From the report: The remote workers "often gain elevated access to modify code and administer network systems," Mandiant found, warning of the downstream effects of allowing malicious actors into a company's inner sanctum. [...] Using stolen identities or fictitious ones, the actors are generally hired as remote contractors. Mandiant has seen the workers hired in a variety of complex roles across several sectors. Some workers are employed at multiple companies, bringing in several salaries each month. The tactic is facilitated by someone based in the U.S. who runs a laptop farm where workers' laptops are sent. Remote technology is installed on the laptops, allowing the North Koreans to log in and conduct their work from China or Russia.

Workers typically asked for their work laptops to be sent to different addresses than those listed on their resumes, raising the suspicions of companies. Mandiant said it found evidence that the laptops at these farms are connected to a "keyboard video mouse" device or multiple remote management tools including LogMeIn, GoToMeeting, Chrome Remote Desktop, AnyDesk, TeamViewer and others. "Feedback from team members and managers who spoke with Mandiant during investigations consistently highlighted behavior patterns, such as reluctance to engage in video communication and below-average work quality exhibited by the DPRK IT worker remotely operating the laptops," Mandiant reported.

In several incident response engagements, Mandiant found the workers used the same resumes that had links to fabricated software engineer profiles hosted on Netlify, a platform often used for quickly creating and deploying websites. Many of the resumes and profiles included poor English and other clues indicating the actor was not based in the U.S. One characteristic repeatedly seen was the use of U.S.-based addresses accompanied by education credentials from universities outside of North America, frequently in countries such as Singapore, Japan or Hong Kong. Companies, according to Mandiant, typically don't verify credentials from universities overseas.
Further reading: How Not To Hire a North Korean IT Spy
Programming

'Compile and Run C in JavaScript', Promises Bun (thenewstack.io)

The JavaScript runtime Bun is a Node.js/Deno alternative (that's also a bundler/test runner/package manager).

And Bun 1.1.28 now includes experimental support for compiling and running native C from JavaScript, according to this report from The New Stack: "From compression to cryptography to networking to the web browser you're reading this on, the world runs on C," wrote Jarred Sumner, creator of Bun. "If it's not written in C, it speaks the C ABI (C++, Rust, Zig, etc.) and is available as a C library. C and the C ABI are the past, present, and future of systems programming." This is a low-boilerplate way to use C libraries and system libraries from JavaScript, he said, adding that this feature allows the same project that runs JavaScript to also run C without a separate build step... "It's good for glue code that binds C or C-like libraries to JavaScript. Sometimes, you want to use a C library or system API from JavaScript, and that library was never meant to be used from JavaScript," Sumner added.

It's currently possible to achieve this by compiling to WebAssembly or writing an N-API (napi) addon or V8 C++ API library addon, the team explained. But both are suboptimal... WebAssembly can do this, but its isolated memory model comes with serious tradeoffs, the team wrote, including an inability to make system calls and a requirement to clone everything. "Modern processors support about 280 TB of addressable memory (48 bits). WebAssembly is 32-bit and can only access its own memory," Sumner wrote. "That means by default, passing strings and binary data between JavaScript and WebAssembly must clone every time. For many projects, this negates any performance gain from leveraging WebAssembly."

The latest version of Bun, released Friday, builds on this by adding N-API (napi) support to cc [Bun's C compiler, which uses TinyCC to compile the C code]. "This makes it easier to return JavaScript strings, objects, arrays and other non-primitive values from C code," wrote Sumner. "You can continue to use types like int, float, double to send & receive primitive values from C code, but now you can also use N-API types! Also, this works when using dlopen to load shared libraries with bun:ffi (such as Rust or C++ libraries with C ABI exports)....

"TinyCC compiles to decently performant C, but it won't do advanced optimizations that Clang or GCC does like autovectorization or very specialized CPU instructions," Sumner wrote. "You probably won't get much of a performance gain from micro-optimizing small parts of your codebase through C, but happy to be proven wrong!"
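The C half of such glue is ordinary C-ABI code. As a sketch of the kind of function Bun's cc could compile and hand to JavaScript (the function here is a made-up example, not one from the article):

```c
#include <stdint.h>

// A plain C-ABI function of the sort Bun's experimental cc
// (TinyCC under the hood) can compile and expose to JavaScript.
// This particular function is illustrative only.
int32_t add(int32_t a, int32_t b) {
    return a + b;
}
```

On the JavaScript side, a project would point cc at the C source file and declare the function's argument and return types, after which it is callable like any other function, with no separate build step, as Sumner describes.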

AI

VS Code Fork 'Cursor' - the ChatGPT of Coding? (tomsguide.com) 69

"Sometimes an artificial intelligence tool comes out of nowhere and dominates the conversation on social media," writes Tom's Guide.

"This week that app is Cursor, an AI coding tool that uses models like Claude 3.5 Sonnet and GPT-4o to make it easier than ever to build your own apps," with the ability to "write, predict and manipulate code using nothing but a text prompt." Cursor is part development environment, part AI chatbot and unlike tools like GitHub Copilot it can more or less do all of the work for you, transforming a simple idea into functional code in minutes... Built on the same system as the popular Microsoft Visual Studio Code, Cursor has already found a fanbase among novice coders and experienced engineers...

Cursor's simplicity, working from a chat window, means even someone completely new to code could get a functional app running in minutes and keep building on it to add new features... The startup, founded in 2022, has been valued at over $400 million and works with various models, including those from Anthropic and OpenAI... In my view, its true power is in the democratization of coding. It would also allow someone without much coding experience to build the tools they need by typing a few lines of text.

More from ReadWrite: Cursor, an AI firm that is attempting to build a "magical tool that will one day write all the world's code," has announced it has raised $60 million in its Series A funding round... As of August 22, the company had a valuation of $400 million, according to sources cited by TechCrunch...

Anysphere is the two-year-old startup that developed the app. Its co-founders are Michael Truell, Sualeh Asif, Arvid Lunnemark and Aman Sanger, who started the company while they were students at MIT... Using advanced AI capabilities, it is said to be able to finish, correct, and change AI code through natural language commands. It currently works with JavaScript, Python, and TypeScript, and is free for most uses. The pro plan will set you back $20 per month.

But how well does it work? Tom's Guide notes that after requesting a test app, "It generated the necessary code in the sidebar chat window and all I had to do was click Apply and then Accept. This added the code to a new Python file including all the necessary imports. It also gave me instructions on how to add modules to my machine to make the code work.

"As the chat is powered by Claude 3.5 Sonnet, you can just have it explain in more detail any element of the code or any task required to make it run..."

Andreessen Horowitz explains why they invested in the company: It's very clear that LLMs are a powerful tool for programmers, and that their coding abilities will improve over time. But it's also clear that for most coding tasks, the problem to solve is not how to make LLMs perform well in isolation, but how to make them perform well alongside a human developer. We believe, therefore, the interface between programmers and AI models will soon become one of the most important pieces of the dev stack. And we're thrilled to announce our series A investment...

Cursor is a fork of VS Code that's heavily customized for AI-assisted programming. It works with all the latest LLMs and supports the full VS Code plugin ecosystem. What makes Cursor special are the features designed to integrate AI into developer workflows — including next action prediction, natural language edits, chatting with your codebase, and a bunch of new ones to come... Our belief is that Cursor, distinctly among AI coding tools, has simply gotten it right. That's why, in a little over a year, thousands of users have signed up for Cursor, including at companies like OpenAI, Midjourney, Perplexity, Replicate, Shopify, Instacart, and many others. Users give glowing reviews of the product, many of them have started to pay for it, and they rarely switch back to other IDEs. Most of the a16z Infra team have also become avid Cursor users!

One site even argues that Cursor's coding and AI capabilities "should be a wake-up call for Microsoft to make VS Code integration with GitHub Copilot a lot easier."

Thanks to Slashdot reader joshuark for sharing the article.
Wine

Microsoft Donates the Mono Project To Wine (gamingonlinux.com) 67

Microsoft has decided to donate the Mono Project to the developers of Wine, the free and open source software that allows Windows applications to run on Unix-like operating systems. "Mono is a software platform designed to allow developers to easily create cross platform applications," notes GamingOnLinux's Liam Dawe. "It is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime."

"Wine already makes use of Mono and this move makes sense with Microsoft focusing on open-source .NET and other efforts," adds Phoronix's Michael Larabel. "Formally handing over control of the upstream Mono project to WineHQ is a nice move by Microsoft rather than just letting the upstream Mono die off or otherwise forked." Microsoft's Jeff Schwartz announced the move on the Mono website and in a GitHub post: The Mono Project (mono/mono) ('original mono') has been an important part of the .NET ecosystem since it was launched in 2001. Microsoft became the steward of the Mono Project when it acquired Xamarin in 2016. The last major release of the Mono Project was in July 2019, with minor patch releases since that time. The last patch release was February 2024. We are happy to announce that the WineHQ organization will be taking over as the stewards of the Mono Project upstream at wine-mono / Mono - GitLab (winehq.org). Source code in existing mono/mono and other repos will remain available, although repos may be archived. Binaries will remain available for up to four years.

Microsoft maintains a modern fork of Mono runtime in the dotnet/runtime repo and has been progressively moving workloads to that fork. That work is now complete, and we recommend that active Mono users and maintainers of Mono-based app frameworks migrate to .NET which includes work from this fork. We want to recognize that the Mono Project was the first .NET implementation on Android, iOS, Linux, and other operating systems. The Mono Project was a trailblazer for the .NET platform across many operating systems. It helped make cross-platform .NET a reality and enabled .NET in many new places and we appreciate the work of those who came before us.

Thank you to all the Mono developers!

AI

'AI-Powered Remediation': GitHub Now Offers 'Copilot Autofix' Suggestions for Code Vulnerabilities (infoworld.com) 18

InfoWorld reports that Microsoft-owned GitHub "has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service."

The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service: "Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found," GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

"Since implementing Copilot Autofix, we've observed a 60% reduction in the time spent on security-related code reviews," says one principal engineer quoted in GitHub's announcement, "and a 25% increase in overall development productivity."

The announcement also notes that Copilot Autofix "leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs." Code scanning tools detect vulnerabilities, but they don't address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn't the problem. Fixing them is...

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities... Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request.... For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code...
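To make those vulnerability classes concrete: for a cross-site scripting finding, the suggested remediation typically amounts to escaping user-controlled text before it reaches HTML output. A hand-written sketch of that pattern (illustrative code, not actual Copilot Autofix output):

```c
#include <stddef.h>
#include <string.h>

// Sketch of an XSS remediation: escape user-controlled text before
// embedding it in HTML. Illustrative only, not Copilot Autofix output.
void html_escape(const char *in, char *out, size_t outsz) {
    size_t o = 0;
    for (const char *p = in; *p != '\0'; p++) {
        const char *rep;
        switch (*p) {
            case '<':  rep = "&lt;";   break;
            case '>':  rep = "&gt;";   break;
            case '&':  rep = "&amp;";  break;
            case '"':  rep = "&quot;"; break;
            default:   rep = NULL;     break;
        }
        size_t n = rep ? strlen(rep) : 1;
        if (o + n >= outsz)   // stop if the buffer would overflow,
            break;            // leaving room for the terminator
        if (rep) {
            memcpy(out + o, rep, n);
            o += n;
        } else {
            out[o++] = *p;
        }
    }
    out[o] = '\0';
}
```

The SQL injection class is analogous: the fix replaces string-concatenated queries with parameterized ones.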

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it's highly important to be both a responsible consumer of open source software and contributor back to it, which is why open source maintainers can already take advantage of GitHub's code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we're thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects...

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden.... With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

Programming

'The Best, Worst Codebase' 29

Jimmy Miller, programmer and co-host of the Future of Coding podcast, writes in a blog: When I started programming as a kid, I didn't know people were paid to program. Even as I graduated high school, I assumed that the world of "professional development" looked quite different from the code I wrote in my spare time. When I lucked my way into my first software job, I quickly learned just how wrong and how right I had been. My first job was a trial by fire; to this day, that codebase remains the worst and the best codebase I ever had the pleasure of working in. While the codebase will forever remain locked behind the proprietary walls of that particular company, I hope I can share with you some of its most fun and scary stories.

[...] Every morning at 7:15 the employees table was dropped. All the data completely gone. Then a csv from adp was uploaded into the table. During this time you couldn't login to the system. Sometimes this process failed. But this wasn't the end of the process. The data needed to be replicated to headquarters. So an email was sent to a man, who every day would push a button to copy the data.

[...] But what is a database without a codebase. And what a magnificent codebase it was. When I joined everything was in Team Foundation Server. If you aren't familiar, this was a Microsoft-made centralized source control system. The main codebase I worked in was half VB, half C#. It ran on IIS and used session state for everything. What did this mean in practice? If you navigated to a page via Path A or Path B you'd see very different things on that page. But to describe this codebase as merely half VB, half C# would be to do it a disservice. Every javascript framework that existed at the time was checked into this repository. Typically, with some custom changes the author believed needed to be made. Most notably, knockout, backbone, and marionette. But of course, there was a smattering of jquery and jquery plugins.
Crime

North Korean Group Infiltrated 100-Plus Firms with Imposter IT Pros (csoonline.com) 16

"CrowdStrike has continued doing what gave it such an expansive footprint in the first place," writes CSO Online — "detecting cyber threats and protecting its clients from them."

They interviewed Adam Meyers, CrowdStrike's SVP of counter adversary operations, whose team produced their 2024 Threat Hunting Report (released this week at the Black Hat conference). Of seven case studies presented in the report, the most daring is that of a group CrowdStrike calls Famous Chollima, an alleged DPRK-nexus group. Starting with a single incident in April 2024, CrowdStrike discovered that a group of North Koreans, posing as American workers, had been hired for multiple remote IT worker jobs in early 2023 at more than thirty US-based companies, including aerospace, defense, retail, and technology organizations.

CrowdStrike's threat hunters discovered that after obtaining employee-level access to victim networks, the phony workers performed at minimal enough levels to keep their jobs while attempting to exfiltrate data using Git, SharePoint, and OneDrive and installing remote monitoring and management (RMM) tools RustDesk, AnyDesk, TinyPilot, VS Code Dev Tunnels, and Google Chrome Remote Desktop. The workers leveraged these RMM tools with company network credentials, enabling numerous IP addresses to connect to victims' systems.

CrowdStrike's OverWatch hunters, a team of experts conducting analysis, hunted for RMM tooling combined with suspicious connections surfaced by the company's Falcon Identity Protection module to find more personas and additional indicators of compromise. CrowdStrike ultimately found that over 100 companies, most US-based technology entities, had hired Famous Chollima workers. The OverWatch team contacted victimized companies to inform them about potential insider threats and quickly corroborated its findings.

Thanks to Slashdot reader snydeq for sharing the news.
Google

Google Gemini 1.5 Pro Leaps Ahead In AI Race, Challenging GPT-4o (venturebeat.com) 11

An anonymous reader quotes a report from VentureBeat: Google launched its latest artificial intelligence powerhouse, Gemini 1.5 Pro, today, making the experimental "version 0801" available for early testing and feedback through Google AI Studio and the Gemini API. This release marks a major leap forward in the company's AI capabilities and has already sent shockwaves through the tech community. The new model has quickly claimed the top spot on the prestigious LMSYS Chatbot Arena leaderboard (built with Gradio), boasting an impressive ELO score of 1300.

This achievement puts Gemini 1.5 Pro ahead of formidable competitors like OpenAI's GPT-4o (ELO: 1286) and Anthropic's Claude-3.5 Sonnet (ELO: 1271), potentially signaling a shift in the AI landscape. Simon Tokumine, a key figure in the Gemini team, celebrated the release in a post on X.com, describing it as "the strongest, most intelligent Gemini we've ever made." Early user feedback supports this claim, with one Redditor calling the model "insanely good" and expressing hope that its capabilities won't be scaled back.

"A standout feature of the 1.5 series is its expansive context window of up to two million tokens, far surpassing many competing models," adds VentureBeat. "This allows Gemini 1.5 Pro to process and reason about vast amounts of information, including lengthy documents, extensive code bases, and extended audio or video content."
