AI

Mysterious 'gpt2-chatbot' AI Model Appears Suddenly, Confuses Experts (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: On Sunday, word began to spread on social media about a new mystery chatbot named "gpt2-chatbot" that appeared in the LMSYS Chatbot Arena. Some people speculate that it may be a secret test version of OpenAI's upcoming GPT-4.5 or GPT-5 large language model (LLM). The paid version of ChatGPT is currently powered by GPT-4 Turbo. Currently, the new model is only available for use through the Chatbot Arena website, although in a limited way. In the site's "side-by-side" arena mode where users can purposely select the model, gpt2-chatbot has a rate limit of eight queries per day -- dramatically limiting people's ability to test it in detail. [...] On Monday evening, OpenAI CEO Sam Altman seemingly dropped a hint by tweeting, "i do have a soft spot for gpt2." [...]

OpenAI's fingerprints seem to be all over the new bot. "I think it may well be an OpenAI stealth preview of something," AI researcher Simon Willison told Ars Technica. But what "gpt2" is exactly, he doesn't know. After surveying online speculation, it seems that no one apart from its creator knows precisely what the model is, either. Willison has uncovered the system prompt for the AI model, which claims it is based on GPT-4 and made by OpenAI. But as Willison noted in a tweet, that's no guarantee of provenance because "the goal of a system prompt is to influence the model to behave in certain ways, not to give it truthful information about itself."

AI

Copilot Workspace Is GitHub's Take On AI-Powered Software Engineering 12

An anonymous reader quotes a report from TechCrunch: Ahead of its annual GitHub Universe conference in San Francisco early this fall, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as "Copilot-powered agents" to help developers brainstorm, plan, build, test and run code in natural language. Jonathan Carter, head of GitHub Next, GitHub's software R&D team, pitches Workspace as somewhat of an evolution of GitHub's AI-powered coding assistant Copilot into a more general tool, building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language. "Through research, we found that, for many tasks, the biggest point of friction for developers was in getting started, and in particular knowing how to approach a [coding] problem, knowing which files to edit and knowing how to consider multiple solutions and their trade-offs," Carter said. "So we wanted to build an AI assistant that could meet developers at the inception of an idea or task, reduce the activation energy needed to begin and then collaborate with them on making the necessary edits across the entire corebase."

Given a GitHub repo or a specific bug within a repo, Workspace -- underpinned by OpenAI's GPT-4 Turbo model -- can build a plan to (attempt to) squash the bug or implement a new feature, drawing on an understanding of the repo's comments, issue replies and larger codebase. Developers get suggested code for the bug fix or new feature, along with a list of the things they need to validate and test that code, plus controls to edit, save, refactor or undo it. The suggested code can be run directly in Workspace and shared among team members via an external link. Those team members, once in Workspace, can refine and tinker with the code as they see fit.

Perhaps the most obvious way to launch Workspace is from the new "Open in Workspace" button to the left of issues and pull requests in GitHub repos. Clicking on it opens a field to describe the software engineering task to be completed in natural language, like, "Add documentation for the changes in this pull request," which, once submitted, gets added to a list of "sessions" within the new dedicated Workspace view. Workspace executes requests systematically step by step, creating a specification, generating a plan and then implementing that plan. Developers can dive into any of these steps to get a granular view of the suggested code and changes and delete, re-run or re-order the steps as necessary.
"Since developers spend a lot of their time working on [coding issues], we believe we can help empower developers every day through a 'thought partnership' with AI," Carter said. "You can think of Copilot Workspace as a companion experience and dev environment that complements existing tools and workflows and enables simplifying a class of developer tasks ... We believe there's a lot of value that can be delivered in an AI-native developer environment that isn't constrained by existing workflows."
Wikipedia

Russia Clones Wikipedia, Censors It, Bans Original (404media.co) 243

Jules Roscoe reports via 404 Media: Russia has replaced Wikipedia with a state-sponsored encyclopedia that is a clone of the original Russian Wikipedia but which conveniently has been edited to omit things that could cast the Russian government in poor light. Real Russian Wikipedia editors used to refer to the real Wikipedia as Ruwiki; the new one is called Ruviki, has "ruwiki" in its url, and has copied all Russian-language Wikipedia articles and strictly edited them to comply with Russian laws. The new articles exclude mentions of "foreign agents," the Russian government's designation for any person or entity which expresses opinions about the government and is supported, financially or otherwise, by an outside nation. [...]

Wikimedia RU, the Russian-language chapter of the non-profit that runs Wikipedia, was forced to shut down in late 2023 amid political pressure due to the Ukraine war. Vladimir Medeyko, the former head of the chapter who now runs Ruviki, told Novaya Gazeta Europe in July that he believed Wikipedia had problems with "reliability and neutrality." Medeyko first announced the project to copy and censor the 1.9 million Russian-language Wikipedia articles in June. The goal, he said at the time, was to edit them so that the information would be "trustworthy" as a source for all Russian users. Independent outlet Bumaga reported in August that around 110 articles about the war in Ukraine were missing in full, while others were severely edited. Ruviki also excludes articles about reports of torture in prisons and scandals of Russian government representatives. [...]

Graphic designer Constantine Konovalov calculated the number of characters changed between Wikipedia RU and Ruviki articles on the same topics, and found that there were 205,000 changes in articles about freedom of speech; 158,000 changes in articles about human rights; 96,000 changes in articles about political prisoners; and 71,000 changes in articles about censorship in Russia. He wrote in a post on X that the censorship was "straight out of a 1984 novel." Interestingly, the Ruviki article about George Orwell's 1984 entirely omits the Ministry of Truth, which is the novel's main propaganda outlet concerned with governing "truth" in the country.

AI

Taser Company Axon Is Selling AI That Turns Body Cam Audio Into Police Reports (forbes.com) 53

Axon on Tuesday announced a new tool called Draft One that uses artificial intelligence built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. Axon CEO Rick Smith told Forbes that police officers will then be able to review the document to ensure accuracy. From the report: Axon claims one early tester of the tool, Fort Collins Colorado Police Department, has seen an 82% decrease in time spent writing reports. "If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer's time to be back out policing," Smith said. These reports, though, are often used as evidence in criminal trials, and critics are concerned that relying on AI could put people at risk by depending on language models that are known to "hallucinate," or make things up, as well as display racial bias, either blatantly or unconsciously.

"It's kind of a nightmare," said Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation. "Police, who aren't specialists in AI, and aren't going to be specialists in recognizing the problems with AI, are going to use these systems to generate language that could affect millions of people in their involvement with the criminal justice system. What could go wrong?" Smith acknowledged there are dangers. "When people talk about bias in AI, it really is: Is this going to exacerbate racism by taking training data that's going to treat people differently?" he told Forbes. "That was the main risk."

Smith said Axon is recommending police don't use the AI to write reports for incidents as serious as a police shooting, where vital information could be missed. "An officer-involved shooting is likely a scenario where it would not be used, and I'd probably advise people against it, just because there's so much complexity, the stakes are so high." He said some early customers are only using Draft One for misdemeanors, though others are writing up "more significant incidents," including use-of-force cases. Axon, however, won't have control over how individual police departments use the tools.

AI

Apple Releases OpenELM: Small, Open Source AI Models Designed To Run On-device (venturebeat.com) 15

Just as Google, Samsung and Microsoft continue to push their efforts with generative AI on PCs and mobile devices, Apple is moving to join the party with OpenELM, a new family of open source large language models (LLMs) that can run entirely on a single device rather than having to connect to cloud servers. From a report: Released a few hours ago on AI code community Hugging Face, OpenELM consists of small models designed to perform efficiently at text generation tasks. There are eight OpenELM models in total -- four pre-trained and four instruction-tuned -- covering different parameter sizes between 270 million and 3 billion parameters (referring to the connections between artificial neurons in an LLM, and more parameters typically denote greater performance and more capabilities, though not always).

[...] Apple is offering the weights of its OpenELM models under what it deems a "sample code license," along with different checkpoints from training, stats on how the models perform as well as instructions for pre-training, evaluation, instruction tuning and parameter-efficient fine tuning. The sample code license does not prohibit commercial usage or modification, only mandating that "if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software." The company further notes that the models "are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts."

AI

The Ray-Ban Meta Smart Glasses Have Multimodel AI Now (theverge.com) 26

The Ray-Ban Meta Smart Glasses now feature support for multimodal AI -- without the need for a projector or $24 monthly fee. (We're looking at you, Humane AI.) With the new update, the Meta AI assistant will be able to analyze what you're seeing, and it'll give you smart, helpful answers or suggestions. The Verge reports: First off, there are some expectations that need managing here. The Meta glasses don't promise everything under the sun. The primary command is to say "Hey Meta, look and..." You can fill out the rest with phrases like "Tell me what this plant is." Or read a sign in a different language. Write Instagram captions. Identify and learn more about a monument or landmark. The glasses take a picture, the AI communes with the cloud, and an answer arrives in your ears. The possibilities are not limitless, and half the fun is figuring out where its limits are. [...]

To me, it's the mix of a familiar form factor and decent execution that makes the AI workable on these glasses. Because it's paired to your phone, there's very little wait time for answers. It's headphones, so you feel less silly talking to them because you're already used to talking through earbuds. In general, I've found the AI to be the most helpful at identifying things when we're out and about. It's a natural extension of what I'd do anyway with my phone. I find something I'm curious about, snap a pic, and then look it up. Provided you don't need to zoom really far in, this is a case where it's nice to not pull out your phone. [...]

But AI is a feature of the Meta glasses. It's not the only feature. They're a workable pair of livestreaming glasses and a good POV camera. They're an excellent pair of open-ear headphones. I love wearing mine on outdoor runs and walks. I could never use the AI and still have a product that works well. The fact that it's here, generally works, and is an alright voice assistant -- well, it just gets you more used to the idea of a face computer, which is the whole point anyway.

Microsoft

Microsoft Launches Phi-3 Mini, a 3.8B-Parameter Model Rivaling GPT-3.5 Capabilities 16

Microsoft has launched Phi-3 Mini, a lightweight AI model with 3.8 billion parameters, as part of its plan to release three small models. Phi-3 Mini, trained on a smaller data set compared to large language models, is available on Azure, Hugging Face, and Ollama. Microsoft claims Phi-3 Mini performs as well as models 10 times its size, offering capabilities similar to GPT-3.5 in a smaller form factor. Smaller AI models are more cost-effective and perform better on personal devices.
Operating Systems

How CP/M Launched the Next 50 Years of Operating Systems (computerhistory.org) 80

50 years ago this week, PC software pioneer Gary Kildall "demonstrated CP/M, the first commercially successful personal computer operating system in Pacific Grove, California," according to a blog post from Silicon Valley's Computer History Museum. It tells the story of "how his company, Digital Research Inc., established CP/M as an industry standard and its subsequent loss to a version from Microsoft that copied the look and feel of the DRI software."

Kildall was a CS instructor and later associate professor at the Naval Postgraduate School (NPS) in Monterey, California... He became fascinated with Intel Corporation's first microprocessor chip and simulated its operation on the school's IBM mainframe computer. This work earned him a consulting relationship with the company to develop PL/M, a high-level programming language that played a significant role in establishing Intel as the dominant supplier of chips for personal computers.

To design software tools for Intel's second-generation processor, he needed to connect to a new 8" floppy disk-drive storage unit from Memorex. He wrote code for the necessary interface software that he called CP/M (Control Program for Microcomputers) in a few weeks, but his efforts to build the electronic hardware required to transfer the data failed. The project languished for a year. Frustrated, he called electronic engineer John Torode, a college friend then teaching at UC Berkeley, who crafted a "beautiful rat's nest of wirewraps, boards and cables" for the task.

Late one afternoon in the fall of 1974, together with John Torode, in the backyard workshop of his home at 781 Bayview Avenue, Pacific Grove, Gary "loaded my CP/M program from paper tape to the diskette and 'booted' CP/M from the diskette, and up came the prompt: *

[...] By successfully booting a computer from a floppy disk drive, they had given birth to an operating system that, together with the microprocessor and the disk drive, would provide one of the key building blocks of the personal computer revolution... As Intel expressed no interest in CP/M, Gary was free to exploit the program on his own and sold the first license in 1975.

What happened next? Here's some highlights from the blog post:
  • "Reluctant to adapt the code for another controller, Gary worked with Glen Ewing to split out the hardware dependent-portions so they could be incorporated into a separate piece of code called the BIOS (Basic Input Output System)... The BIOS code allowed all Intel and compatible microprocessor-based computers from other manufacturers to run CP/M on any new hardware. This capability stimulated the rise of an independent software industry..."
  • "CP/M became accepted as a standard and was offered by most early personal computer vendors, including pioneers Altair, Amstrad, Kaypro, and Osborne..."
  • "[Gary's company] introduced operating systems with windowing capability and menu-driven user interfaces years before Apple and Microsoft... However, by the mid-1980s, in the struggle with the juggernaut created by the combined efforts of IBM and Microsoft, DRI had lost the basis of its operating systems business."
  • "Gary sold the company to Novell Inc. of Provo, Utah, in 1991. Ultimately, Novell closed the California operation and, in 1996, disposed of the assets to Caldera, Inc., which used DRI intellectual property assets to prevail in a lawsuit against Microsoft."

AI

GPT-4 Can Exploit Real Vulnerabilities By Reading Security Advisories 74

Long-time Slashdot reader tippen shared this report from the Register: AI agents, which combine large language models with automation software, can successfully exploit real world security vulnerabilities by reading security advisories, academics have claimed.

In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists — Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang — report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw. "To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description," the US-based authors explain in their paper. "When given the CVE description, GPT-4 is capable of exploiting 87 percent of these vulnerabilities compared to 0 percent for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit)...."

The researchers' work builds upon prior findings that LLMs can be used to automate attacks on websites in a sandboxed environment. GPT-4, said Daniel Kang, assistant professor at UIUC, in an email to The Register, "can actually autonomously carry out the steps to perform certain exploits that open-source vulnerability scanners cannot find (at the time of writing)."

The researchers wrote that "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description...."

"Kang and his colleagues computed the cost to conduct a successful LLM agent attack and came up with a figure of $8.80 per exploit"
United States

US Passes Bill Reauthorizing 'FISA' Surveillance for Two More Years (theverge.com) 45

Late Friday night the U.S. Senate "reauthorized the Foreign Intelligence Surveillance Act, a key. U.S. surveillance authority," reports Axios, "shortly after it expired in the early hours Saturday morning." The president then signed the bill into law. The reauthorization came despite bipartisan concerns about Section 702, which allows the government to collect communications from non-U.S. citizens overseas without a warrant. The legislation passed the Senate 60 to 34, with 17 Democrats, Sen. Bernie Sanders (I-Vt.) and 16 Republicans voting "nay." It extends the controversial Section 702 for two more years.
The bill had already passed last week in the U.S. House of Representatives, explains CNN: Under FISA's Section 702, the government hoovers up massive amounts of internet and cell phone data on foreign targets. Hundreds of thousands of Americans' information is incidentally collected during that process and then accessed each year without a warrant — down from millions of such queries the US government ran in past years. Critics refer to these queries as "backdoor" searches...

According to one assessment, it forms the basis of most of the intelligence the president views each morning and it has helped the U.S. keep tabs on Russia's intentions in Ukraine, identify foreign efforts to access US infrastructure, uncover foreign terror networks and thwart terror attacks in the U.S.

An interesting detail from The Verge: Sens. Ron Wyden (D-OR) and Josh Hawley (R-MO) introduced an amendment that would have struck language in the House bill that expanded the definition of "electronic communications service provider." Under the House's new provision, anyone "who has access to equipment that is being or may be used to transmit or store wire or electronic communications." The expansion, Wyden has claimed, would force "ordinary Americans and small businesses to conduct secret, warrantless spying." The Wyden-Hawley amendment failed 34-58, meaning that the next iteration of the FISA surveillance program will be more expansive than before.
Saturday morning the U.S. House of Representatives passed a bill banning TikTok if its Chinese owner doesn't sell the app.
AI

Linus Torvalds on 'Hilarious' AI Hype (zdnet.com) 42

Linus Torvalds, discussing the AI hype, in a conversation with Dirk Hohndel, Verizon's Head of the Open Source Program Office: Torvalds snarked, "It's hilarious to watch. Maybe I'll be replaced by an AI model!" As for Hohndel, he thinks most AI today is "autocorrect on steroids." Torvalds summed up his attitude as, "Let's wait 10 years and see where it actually goes before we make all these crazy announcements."

That's not to say the two men don't think AI will be helpful in the future. Indeed, Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently.

Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warns of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, "Beautiful science in, beautiful science out."

AI

'Crescendo' Method Can Jailbreak LLMs Using Seemingly Benign Prompts (scmagazine.com) 46

spatwei shares a report from SC Magazine: Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday. Microsoft first revealed the "Crescendo" LLM jailbreak method in a paper published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI's ChatGPT, Google's Gemini, Meta's LlaMA or Anthropic's Claude, to produce an output that would normally be filtered and refused by the LLM model. For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM's previous outputs, follow up with questions about how they were made in the past.

The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called "Crescendomation," which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success convincing GPT 3.5, GPT-4, Gemini-Pro and LLaMA-2 70b to produce election-related misinformation and profanity-laced rants. Microsoft reported the Crescendo jailbreak vulnerabilities to the affected LLM providers and explained in its blog post last week how it has improved its LLM defenses against Crescendo and other attacks using new tools including its "AI Watchdog" and "AI Spotlight" features.

IOS

Apple's iOS 18 AI Will Be On-Device Preserving Privacy, and Not Server-Side (appleinsider.com) 59

According to Bloomberg's Mark Gurman, Apple's initial set of AI-related features in iOS 18 "will work entirely on device," and won't connect to cloud services. AppleInsider reports: In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing. AppleInsider has received information from individuals familiar with the matter that suggest the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax." While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline. [...] Apple will reveal its AI plans during WWDC, which starts on June 10.
AI

UK Starts Drafting AI Regulations for Most Powerful Models (bloomberg.com) 18

The UK is starting to draft regulations to govern AI, focusing on the most powerful language models which underpin OpenAI's ChatGPT, Bloomberg News reported Monday, citing people familiar with the matter. From the report: Policy officials at the Department for Science, Innovation and Technology are in the early stages of devising legislation to limit potential harms caused by the emerging technology, according to the people, who asked not to be identified discussing undeveloped proposals. No bill is imminent, and the government is likely to wait until France hosts an AI conference either later this year or early next to launch a consultation on the topic, they said.

Prime Minister Rishi Sunak, who hosted the first world leaders' summit on AI last year and has repeatedly said countries shouldn't "rush to regulate" AI, risks losing ground to the US and European Union on imposing guardrails on the industry. The EU passed a sweeping law to regulate the technology earlier this year, companies in China need approvals before producing AI services and some US cities and states have passed laws limiting use of AI in specific areas.

The Internet

Stop 'Harmful 5G Fast Lanes', Legal Scholar Warns America's FCC (stanford.edu) 41

America's FCC votes on net neutrality April 25th. And the director of Stanford Law School's "Center for Internet and Society" (also a law professor) says mostly there's "much to celebrate" in the draft rules released earlier this month. Mobile carriers like T-Mobile, AT&T and Verizon that have been degrading video quality for mobile users will have to stop. The FCC kept in place state neutrality protections like California's net neutrality law, allowing for layers of enforcement. The FCC also made it harder for ISPs to evade net neutrality at the point where data enters their networks.
However, the draft rules also have "a huge problem." The proposed rules make it possible for mobile ISPs to start picking applications and putting them in a fast lane — where they'll perform better generally and much better if the network gets congested.

T-Mobile, AT&T and Verizon are all testing ways to create these 5G fast lanes for apps such as video conferencing, games, and video where the ISP chooses and controls what gets boosted. They use a technical feature in 5G called network slicing, where part of their radio spectrum gets used as a special lane for the chosen app or apps, separated from the usual internet traffic. The FCC's draft order opens the door to these fast lanes, so long as the app provider isn't charged for them.

They warn of things like cellphone plans "Optimized for YouTube and TikTok... Or we could see add-ons like Enhanced Video Conferencing for $10 a month, or one-time 24-hour passes to have Prioritized Online Gaming." This isn't imagination. The ISPs write about this in their blogs and press releases. They talk about these efforts and dreams openly at conferences, and their equipment vendors plainly lay out how ISPs can chop up internet service into all manner of fast lanes.

These kinds of ISP-controlled fast lanes violate core net neutrality principles and would limit user choice, distort competition, hamper startups, and help cement platform dominance. Even small differences in load times affect how long people stay on a site, how much they pay, and whether they'll come back. Those differences also affect how high up sites show in search results. Thus, letting ISPs choose which apps get to be in a fast lane lets them, not users, pick winners and losers online... [T]he biggest apps will end up in all the fast lanes, while most others would be left out. The ones left out would likely include messaging apps like Signal, local news sites, decentralized Fediverse apps like Mastodon and PeerTube, niche video sites like Dropout, indie music sites like Bandcamp, and the millions of other sites and apps in the long tail.

One subheading emphasizes that "This is not controversial," noting that "Even proposed Republican net neutrality bills prohibited ISPs from speeding up and slowing down apps and kinds of apps..." Yet "While draft order acknowledges that some speeding up of apps could violate the no-throttling rule, it added some unclear, nebulous language suggesting that the FCC would review any fast lanes case-by-case, without explaining how it would do that... Companies that do file complaints will waste years litigating the meaning of "unreasonably discriminatory," all the while going up against giant telecoms that stockpile lawyers and lobbyists."

"Net neutrality means that we, the people who use the internet, get to decide what we do online, without interference from ISPs. ISPs do not get to interfere with our choices by blocking, speeding up or slowing down apps or kinds of apps..."

They urge the FCC to edit their draft order before April 24 to clarify "that the no-throttling rule also prohibits ISPs from creating fast lanes for select apps or kinds of apps."
PHP

Is PHP Declining In Popularity? (infoworld.com) 94

The PHP programming language has sunk to its lowest position ever on the long-running TIOBE index of programming language popularity. It now ranks #17 — lower than Assembly Language, Ruby, Swift, Scratch, and MATLAB. InfoWorld reports: When the Tiobe index started in 2001, PHP was about to become the standard language for building websites, said Paul Jansen, CEO of software quality services vendor Tiobe. PHP even reached the top 3 spot in the index, ranking third several times between 2006 and 2010. But as competing web development frameworks such as Ruby on Rails, Django, and React arrived in other languages, PHP's popularity waned.

"The major driving languages behind these new frameworks were Ruby, Python, and most notably JavaScript," Jansen noted in his statement accompanying the index. "On top of this competition, some security issues were found in PHP. As a result, PHP had to reinvent itself." Nowadays, PHP still has a strong presence in small and medium websites and is the language leveraged in the WordPress web content management system. "PHP is certainly not gone, but its glory days seem to be over," Jansen said.

A note on the rival Pypl Popularity of Programming Language Index argues that the TIOBE Index "is a lagging indicator. It counts the number of web pages with the language name." So while "Objective-C" ranks #30 on TIOBE's index (one rank above Classic Visual Basic), "who is reading those Objective-C web pages? Hardly anyone, according to Google Trends data." On TIOBE's index, Fortran now ranks #10.

Meanwhile, PHP ranks #7 on Pypl (based on the frequency of searches for language tutorials).

TIOBE's top ten?
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Go
  8. Visual Basic
  9. SQL
  10. Fortran

The next two languages, ranked #11 and #12, are Delphi/Object Pascal and Assembly Language.


AI

OpenAI Makes ChatGPT 'More Direct, Less Verbose' (techcrunch.com) 36

Kyle Wiggers reports via TechCrunch: OpenAI announced today that premium ChatGPT users -- customers paying for ChatGPT Plus, Team or Enterprise -- can now leveraged an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience. This new model ("gpt-4-turbo-2024-04-09") brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off. "When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language," OpenAI writes in a post on X.
Education

Students Are Likely Writing Millions of Papers With AI 115

Amanda Hoover reports via Wired: Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows. A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing. Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
United States

The US is Right To Target TikTok, Says Vinod Khosla (ft.com) 90

Vinod Khosla, the founder of venture capital firm Khosla Ventures, opines on the bill that seeks to ban TikTok or force its parent firm to divest the U.S. business: Even if one could argue that this bill strikes at the First Amendment, there is legal precedent for doing so. In 1981, Haig vs Agee established that there are circumstances under which the government can lawfully impinge upon an individual's First Amendment rights if it is necessary to protect national security and prevent substantial harm. TikTok and the AI that can be channelled through it are national and homeland security issues that meet these standards.

Should this bill turn into law, the president would have the power to force any foreign-owned social media to be sold if US intelligence agencies deem them a national security threat. This broader scope should protect against challenges that this is a bill of attainder. Similar language helped protect effective bans on Huawei and Kaspersky Lab. As for TikTok's value as a boon to consumers and businesses, there are many companies that could quickly replace it. In 2020, after India banned TikTok amid geopolitical tensions between Beijing and New Delhi, services including Instagram Reels, YouTube Shorts, MX TakaTak, Chingari and others filled the void.Â

Few appreciate that TikTok is not available in China. Instead, Chinese consumers use Douyin, the sister app that features educational and patriotic videos, and is limited to 40 minutes per day of total usage. Spinach for Chinese kids, fentanyl -- another chief export of China's -- for ours. Worse still, TikTok is a programmable fentanyl whose effects are under the control of the CCP.

AI

Texas Will Use Computers To Grade Written Answers On This Year's STAAR Tests 41

Keaton Peters reports via the Texas Tribune: Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state's standardized tests will be graded automatically by computers. The Texas Education Agency is rolling out an "automated scoring engine" for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing technology like artificial intelligence chatbots such as GPT-4, will save the state agency about $15-20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students' understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions -- known as constructed response items. After the redesign, there are six to seven times more constructed response items. "We wanted to keep as many constructed open ended responses as we can, but they take an incredible amount of time to score," said Jose Rios, director of student assessment at the Texas Education Agency. In 2023, Rios said TEA hired about 6,000 temporary scorers, but this year, it will need under 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given. This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans. When the computer has "low confidence" in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.
"In addition to 'low confidence' scores and responses that do not fit in the computer's programming, a random sample of responses will also be automatically handed off to humans to check the computer's work," notes Peters. While similar to ChatGPT, TEA officials have resisted the suggestion that the scoring engine is artificial intelligence. They note that the process doesn't "learn" from the responses and always defers to its original programming set up by the state.

Slashdot Top Deals