Data Storage

Father of SQL Says Yes to NoSQL (theregister.com) 75

An anonymous reader shared this report from the Register: The co-author of SQL, the standardized query language for relational databases, has come out in support of the NoSQL database movement that seeks to escape the tabular confines of the RDBMS. Speaking to The Register as SQL marks its 50th birthday, Donald Chamberlin, who first proposed the language with IBM colleague Raymond Boyce in a 1974 paper [PDF], explains that NoSQL databases and their query languages could help perform the tasks relational systems were never designed for. "The world doesn't stay the same, especially in computer science," he says. "It's a very fast-evolving industry. New requirements are coming along and technology has to change to meet them. I think that's what's happening. The NoSQL movement is motivated by new kinds of applications, particularly web applications, that need massive scalability and high performance. Relational databases were developed in an earlier generation when scalability and performance weren't quite as important. To get the scalability and performance that you need for modern apps, many systems are relaxing some of the constraints of the relational data model."

[...] A long-time IBMer, Chamberlin is now semi-retired, but finds time to fulfill a role as a technical advisor for NoSQL company Couchbase. In the role, he has become an advocate for a new query language designed to overcome the "impedance mismatch" between data structures in the application language and a database, he says. UC San Diego professor Yannis Papakonstantinou has proposed SQL++ to solve this problem, with a view to addressing the impedance mismatch between heavily object-based JavaScript, the core language for web development, and the assumed relational approach embedded in SQL. Like C++, SQL++ is designed as a compatible extension of an earlier language, SQL, but is touted as better able to handle the JSON data format native to JavaScript. Couchbase and AWS have adopted the language, although the cloud giant calls it PartiQL.
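The "impedance mismatch" Chamberlin describes is easy to see in a few lines: a JSON document from a web app nests naturally, while a flat relational row cannot hold the nesting, so the application has to split and re-join the data. A minimal Python sketch (the order schema and function names here are illustrative, not from SQL++ itself):

```python
# A JSON document as a web app would produce it: nested and self-contained.
order = {
    "id": 1,
    "customer": "Ada",
    "items": [
        {"sku": "A1", "qty": 2},
        {"sku": "B2", "qty": 1},
    ],
}

def flatten(doc):
    """Split a nested order into two relational-style tables.

    This is the busywork query languages like SQL++ aim to avoid:
    the tabular model cannot store the nested item list directly,
    so the application flattens it into parent and child rows
    linked by a foreign key, then joins them back on every read.
    """
    orders = [(doc["id"], doc["customer"])]
    items = [(doc["id"], it["sku"], it["qty"]) for it in doc["items"]]
    return orders, items

orders, items = flatten(order)
print(orders)  # [(1, 'Ada')]
print(items)   # [(1, 'A1', 2), (1, 'B2', 1)]
```

A JSON-native query language can select from `order.items` directly, with no flatten/join round trip in application code.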

At the end of the interview, Chamberlin adds that "I don't think SQL is going to go away. A large part of the world's business data is encoded in SQL, and data is very sticky. Once you've got your database, you're going to leave it there. Also, relational systems do a very good job of what they were designed to do...

"[I]f you're a startup company that wants to sell shoes on the web or something, you're going to need a database, and one of those SQL implementations will do the job for free. I think relational databases and the SQL language will be with us for a long time."
Red Hat Software

RHEL (and Rocky and Alma Linux) 9.4 Released - Plus AI Offerings (almalinux.org) 19

Red Hat Enterprise Linux 9.4 has been released. But also released is Rocky Linux 9.4, reports 9to5Linux: Rocky Linux 9.4 also adds openSUSE's KIWI next-generation appliance builder as a new image build workflow and process for building images that are feature complete with the old images... Under the hood, Rocky Linux 9.4 includes the same updated components from the upstream Red Hat Enterprise Linux 9.4.
This week also saw the release of Alma Linux 9.4 stable (the "forever-free enterprise Linux distribution... binary compatible with RHEL.") The Register points out that while Alma Linux is "still supporting some aging hardware that the official RHEL 9.4 drops, what's new is largely the same in them both."

And last week also saw the launch of the AlmaLinux High-Performance Computing and AI Special Interest Group (SIG). HPCWire reports: "AlmaLinux's status as a community-driven enterprise Linux holds incredible promise for the future of HPC and AI," said Hayden Barnes, SIG leader and Senior Open Source Community Manager for AI Software at HPE. "Its transparency and stability empowers researchers, developers and organizations to collaborate, customize and optimize their computing environments, fostering a culture of innovation and accelerating breakthroughs in scientific research and cutting-edge AI/ML."
And this week, InfoWorld reported: Red Hat has launched Red Hat Enterprise Linux AI (RHEL AI), described as a foundation model platform that allows users to more seamlessly develop and deploy generative AI models. Announced May 7 and available now as a developer preview, RHEL AI includes the Granite family of open-source large language models (LLMs) from IBM, InstructLab model alignment tools based on the LAB (Large-Scale Alignment for Chatbots) methodology, and a community-driven approach to model development through the InstructLab project, Red Hat said.
AI

Did OpenAI, Google and Meta 'Cut Corners' to Harvest AI Training Data? (indiatimes.com) 58

What happened when OpenAI ran out of English-language training data in 2021?

They just created a speech recognition tool that could transcribe the audio from YouTube videos, reports The New York Times, as part of an investigation arguing that tech companies "including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law" in their search for AI training data. [Alternate URL here.] Some OpenAI employees discussed how such a move might go against YouTube's rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are "independent" of the video platform. Ultimately, an OpenAI team transcribed more than 1 million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI's president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4...

At Meta, which owns Facebook and Instagram, managers, lawyers and engineers last year discussed buying the publishing house Simon & Schuster to procure long works, according to recordings of internal meetings obtained by the Times. They also conferred on gathering copyrighted data from across the internet, even if that meant facing lawsuits. Negotiating licenses with publishers, artists, musicians and the news industry would take too long, they said.

Like OpenAI, Google transcribed YouTube videos to harvest text for its AI models, five people with knowledge of the company's practices said. That potentially violated the copyrights to the videos, which belong to their creators. Last year, Google also broadened its terms of service. One motivation for the change, according to members of the company's privacy team and an internal message viewed by the Times, was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps and other online material for more of its AI products...

Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn't stop OpenAI because Google had also used transcripts of YouTube videos to train its AI models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said.

The article adds that some tech companies are now even developing "synthetic" information to train AI.

"This is not organic data created by humans, but text, images and code that AI models produce — in other words, the systems learn from what they themselves generate."
AI

Bumble's Dating 'AI Concierge' Will Date Hundreds of Other People's 'Concierges' For You (fortune.com) 63

An anonymous reader quotes a report from Fortune: Imagine this: you've "dated" 600 people in San Francisco without having typed a word to any of them. Instead, a busy little bot has completed the mindless "getting-to-know-you" chatter on your behalf, and has told you which people you should actually get off the couch to meet. That's the future of dating, according to Whitney Wolfe Herd -- and she'd know. Wolfe Herd is the founder and executive chair of Bumble, a meeting and networking platform that prompted women to make the first move. While the platform has now changed this aspect of its algorithm, Wolfe Herd said the company would always keep its "North Star" in mind: "A safer, kinder digital platform for more healthy and more equitable relationships. Always putting women in the driver's seat -- not to put men down -- but to actually recalibrate the way we all treat each other."

Like any platform, Bumble is now navigating itself in a world of AI -- which means rethinking how humans will interact with each other in an increasing age of chatbots. Wolfe Herd told the Bloomberg Technology Summit in San Francisco this week that it could streamline the matching process. "If you want to get really out there, there is a world where your [AI] dating concierge could go and date for you with other dating concierges," she told host Emily Chang. "Truly. And then you don't have to talk to 600 people. It will scan all of San Francisco for you and say: 'These are the three people you really ought to meet.'" And forget catch-ups with friends, swapping notes on your love life -- AI can be that metaphorical shoulder to cry on.

Artificial intelligence -- which has seen massive amounts of investment since OpenAI disrupted the market with its ChatGPT large language model -- can help coach individuals on how to date and present themselves in the best light to potential partners. "So, for example, you could in the near future be talking to your AI dating concierge and you could share your insecurities," Wolfe Herd explained. "'I've just come out of a break-up, I've got commitment issues,' and it could help you train yourself into a better way of thinking about yourself." "Then it could give you productive tips for communicating with other people," she added. If these features do indeed come to Bumble in the future, they will impact the experience of millions.

United Kingdom

North Yorkshire Council To Ban Apostrophes On Street Signs To Avoid Database Problems (bbc.com) 100

The North Yorkshire Council in England announced it will ban apostrophes on street signs because they can affect geographical databases. Resident Anne Keywood told the BBC that she urged the authority to retain apostrophes, saying: "If you start losing things like that then everything goes downhill doesn't it?" From the report: North Yorkshire Council said it "along with many others across the country" had opted to "eliminate" the apostrophe from street signs. A spokesperson added: "All punctuation will be considered but avoided where possible because street names and addresses, when stored in databases, must meet the standards (PDF) set out in BS7666.

"This restricts the use of punctuation marks and special characters (e.g. apostrophes, hyphens and ampersands) to avoid potential problems when searching the databases as these characters have specific meanings in computer systems."
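The "specific meaning" the council alludes to is that an apostrophe is the string delimiter in SQL. A sketch of the failure mode, and the standard fix, using Python's built-in sqlite3 module (the `streets` table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE streets (name TEXT)")
conn.execute("INSERT INTO streets VALUES (?)", ("St Mary's Walk",))

street = "St Mary's Walk"

# Naive string concatenation: the apostrophe in the street name closes
# the SQL string literal early, so the query is malformed -- and with
# hostile input, this same hole becomes SQL injection.
try:
    conn.execute("SELECT * FROM streets WHERE name = '" + street + "'")
except sqlite3.OperationalError as err:
    print("broken query:", err)

# A parameterized query passes the apostrophe through as plain data,
# so there is no need to ban the character at all.
row = conn.execute(
    "SELECT name FROM streets WHERE name = ?", (street,)
).fetchone()
print(row)  # ("St Mary's Walk",)
```

In other words, the problem the council describes is a solved one at the database layer; the restriction in BS7666 is about legacy systems that never adopted the fix.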

Science

Scientists Find an 'Alphabet' In Whale Songs 50

Carl Zimmer reports via the New York Times: Ever since the discovery of whale songs almost 60 years ago, scientists have been trying to decipher their lyrics. Are the animals producing complex messages akin to human language? Or sharing simpler pieces of information, like dancing bees do? Or are they communicating something else we don't yet understand? In 2020, a team of marine biologists and computer scientists joined forces to analyze the click-clacking songs of sperm whales, the gray, block-shaped leviathans that swim in most of the world's oceans. On Tuesday, the scientists reported that the whales use a much richer set of sounds than previously known, which they called a "sperm whale phonetic alphabet." In the study published in the journal Nature Communications, researchers found that sperm whales communicate using sequences of clicks, called codas, that exhibit contextual and combinatorial structure.

MIT News reports: The researchers identified something of a "sperm whale phonetic alphabet," where various elements that researchers call "rhythm," "tempo," "rubato," and "ornamentation" interplay to form a vast array of distinguishable codas. For example, the whales would systematically modulate certain aspects of their codas based on the conversational context, such as smoothly varying the duration of the calls -- rubato -- or adding extra ornamental clicks. But even more remarkably, they found that the basic building blocks of these codas could be combined in a combinatorial fashion, allowing the whales to construct a vast repertoire of distinct vocalizations.

[...] By developing new visualization and data analysis techniques, the CSAIL researchers found that individual sperm whales could emit various coda patterns in long exchanges, not just repeats of the same coda. These patterns, they say, are nuanced, and include fine-grained variations that other whales also produce and recognize.
"One of the intriguing aspects of our research is that it parallels the hypothetical scenario of contacting alien species. It's about understanding a species with a completely different environment and communication protocols, where their interactions are distinctly different from human norms," says Pratyusha Sharma, an MIT PhD student in EECS, CSAIL affiliate, and the study's lead author. "We're exploring how to interpret the basic units of meaning in their communication. This isn't just about teaching animals a subset of human language, but decoding a naturally evolved communication system within their unique biological and environmental constraints. Essentially, our work could lay the groundwork for deciphering how an 'alien civilization' might communicate, providing insights into creating algorithms or systems to understand entirely unfamiliar forms of communication."
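The "combinatorial" claim is worth unpacking: a handful of independent features multiplies into a far larger space of distinguishable calls. A toy illustration in Python, using the feature names from the article -- the value counts are purely illustrative, not the study's actual figures:

```python
from itertools import product

# Toy model: a few independent coda features combine multiplicatively.
# Feature names come from the reporting; the counts are made up to
# show the arithmetic, not to describe real sperm whale repertoires.
features = {
    "rhythm": 18,        # distinct click patterns
    "tempo": 5,          # overall coda durations
    "rubato": 2,         # smooth duration modulation present or absent
    "ornamentation": 2,  # extra ornamental click present or absent
}

codas = list(product(*(range(n) for n in features.values())))
print(len(codas))  # 18 * 5 * 2 * 2 = 360 distinguishable codas
```

This is the same reason human phonetics is powerful: a small alphabet of reusable building blocks yields a combinatorially large set of possible utterances.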
The Internet

FCC Explicitly Prohibits Fast Lanes, Closing Possible Net Neutrality Loophole (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica: The Federal Communications Commission clarified its net neutrality rules to prohibit more kinds of fast lanes. While the FCC voted to restore net neutrality rules on April 25, it didn't release the final text of the order until yesterday. The final text (PDF) has some changes compared to the draft version released a few weeks before the vote.

Both the draft and final rules ban paid prioritization, or fast lanes that application providers have to pay Internet service providers for. But some net neutrality proponents raised concerns about the draft text because it would have let ISPs speed up certain types of applications as long as the application providers don't have to pay for special treatment. The advocates wanted the FCC to clarify its no-throttling rule to explicitly prohibit ISPs from speeding up applications instead of only forbidding the slowing of applications down. Without such a provision, they argued that ISPs could charge consumers more for plans that speed up specific types of content. [...]

"We clarify that a BIAS [Broadband Internet Access Service] provider's decision to speed up 'on the basis of Internet content, applications, or services' would 'impair or degrade' other content, applications, or services which are not given the same treatment," the FCC's final order said. The "impair or degrade" clarification means that speeding up is banned because the no-throttling rule says that ISPs "shall not impair or degrade lawful Internet traffic on the basis of Internet content, application, or service."
The updated language in the final order "clearly prohibits ISPs from limiting fast lanes to apps or categories of apps they select," leaving no question as to whether the practice is prohibited, said Stanford Law professor Barbara van Schewick.

Under the original plan, "there was no way to predict which kinds of fast lanes the FCC might ultimately find to violate the no-throttling rule," she wrote. "This would have given ISPs cover to flood the market with various fast-lane offerings, arguing that their version does not violate the no-throttling rule and daring the FCC to enforce its rule. The final order prevents this from happening."
AI

Researchers Warned Against Using AI To Peer Review Academic Papers (semafor.com) 17

Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. From a report: With recent advances in large language models, researchers have been increasingly using them to write peer reviews -- a time-honored academic tradition that examines new research and assesses its merits, showing a person's work has been vetted by other experts in the field. That's why asking ChatGPT to analyze manuscripts and critique the research, without having read the papers, would undermine the peer review process. To tackle the problem, AI and machine learning conferences are now thinking about updating their policies, as some guidelines don't explicitly ban the use of AI to process manuscripts, and the language can be fuzzy.

The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. At NeurIPS, researchers should not "share submissions with anyone without prior approval" for example, while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that "LLMs are not eligible for authorship." Representatives from NeurIPS and ICLR said "anyone" includes AI, and that authorship covers both papers and peer review comments. A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky.

Supercomputing

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia (washingtonpost.com) 44

An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...] The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...]

Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse Stanford's Natural Language Processing Group's 68 GPUs, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox." "AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."

AI

OpenAI Exec Says Today's ChatGPT Will Be 'Laughably Bad' In 12 Months (businessinsider.com) 68

At the 27th annual Milken Institute Global Conference on Monday, OpenAI COO Brad Lightcap said today's ChatGPT chatbot "will be laughably bad" compared to what it'll be capable of a year from now. "We think we're going to move toward a world where they're much more capable," he added. Business Insider reports: Lightcap says large language models, which people use to help do their jobs and meet their personal goals, will soon be able to take on "more complex work." He adds that AI will have more of a "system relationship" with users, meaning the technology will serve as a "great teammate" that can assist users on "any given problem." "That's going to be a different way of using software," the OpenAI exec said on the panel regarding AI's foreseeable capabilities.

In light of his predictions, Lightcap acknowledges that it can be tough for people to "really understand" and "internalize" what a world with robot assistants would look like. But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm. "I think that's a profound shift that we haven't quite grasped," he said, referring to his 10-year forecast. "We're just scratching the surface on the full kind of set of capabilities that these systems have," he said at the Milken Institute conference. "That's going to surprise us."
You can watch/listen to the talk here.
Hardware

Apple Announces M4 With More CPU Cores and AI Focus (arstechnica.com) 66

An anonymous reader quotes a report from Ars Technica: In a major shake-up of its chip roadmap, Apple has announced a new M4 processor for today's iPad Pro refresh, barely six months after releasing the first MacBook Pros with the M3 and not even two months after updating the MacBook Air with the M3. Apple says the M4 includes "up to" four high-performance CPU cores, six high-efficiency cores, and a 10-core GPU. Apple's high-level performance estimates say that the M4 has 50 percent faster CPU performance and four times as much graphics performance. Like the GPU in the M3, the M4 also supports hardware-accelerated ray-tracing to enable more advanced lighting effects in games and other apps. Due partly to its "second-generation" 3 nm manufacturing process, Apple says the M4 can match the performance of the M2 while using just half the power.

As with so much else in the tech industry right now, the M4 also has an AI focus; Apple says it's beefing up the 16-core Neural Engine (Apple's equivalent of the Neural Processing Unit that companies like Qualcomm, Intel, AMD, and Microsoft have been pushing lately). Apple says the M4 runs up to 38 trillion operations per second (TOPS), considerably ahead of Intel's Meteor Lake platform, though a bit short of the 45 TOPS that Qualcomm is promising with the Snapdragon X Elite and Plus series. The M3's Neural Engine is only capable of 18 TOPS, so that's a major step up for Apple's hardware. Apple's chips since 2017 have included some version of the Neural Engine, though to date, those have mostly been used to enhance and categorize photos, perform optical character recognition, enable offline dictation, and do other oddities. But it may be that Apple needs something faster for the kinds of on-device large language model-backed generative AI that it's expected to introduce in iOS and iPadOS 18 at WWDC next month.
A separate report from the Wall Street Journal says Apple is developing a custom chip to run AI software in datacenters. Covering that report, Reuters notes that "Apple's server chip will likely be focused on running AI models, also known as inference, rather than in training AI models, where Nvidia is dominant."

Further reading: Apple Quietly Kills the Old-school iPad and Its Headphone Jack
AI

Microsoft Creates Top Secret Generative AI Service Divorced From the Internet for US Spies (bloomberg.com) 42

Microsoft has deployed a generative AI model entirely divorced from the internet, saying US intelligence agencies can now safely harness the powerful technology to analyze top-secret information. From a report: It's the first time a major large language model has operated fully separated from the internet, a senior executive at the US company said. Most AI models, including OpenAI's ChatGPT, rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the US intelligence community.

Spy agencies around the world want generative AI to help them understand and analyze the growing amounts of classified information generated daily, but must balance turning to large language models with the risk that data could leak into the open -- or get deliberately hacked. Microsoft has deployed the GPT-4-based model and key elements that support it onto a cloud with an "air-gapped" environment that is isolated from the internet, said William Chappell, Microsoft's chief technology officer for strategic missions and technology.

AI

OpenAI and Stack Overflow Partner To Bring More Technical Knowledge Into ChatGPT (theverge.com) 18

OpenAI and the developer platform Stack Overflow have announced a partnership that could potentially improve the performance of AI models and bring more technical information into ChatGPT. From a report: OpenAI will have access to Stack Overflow's API and will receive feedback from the developer community to improve the performance of AI models. OpenAI, in turn, will give Stack Overflow attribution -- aka link to its contents -- in ChatGPT. Users of the chatbot will see more information from Stack Overflow's knowledge archive if they ask ChatGPT coding or technical questions. The companies write in the press release that this will "foster deeper engagement with content." Stack Overflow will use OpenAI's large language models to expand its Overflow AI, the generative AI application it announced last year. Further reading: Stack Overflow Cuts 28% Workforce as the AI Coding Boom Continues (October 2023).
News

North Yorkshire Apostrophe Fans Demand Road Signs With Nowt Taken Out (theguardian.com) 86

A council has provoked the wrath of residents and linguists alike after announcing it would ban apostrophes on street signs to avoid problems with computer systems. From a report: North Yorkshire council is ditching the punctuation point after careful consideration, saying it can affect geographical databases. The council said all new street signs would be produced without one, regardless of whether they were used in the past. Some residents expressed reservations about removing the apostrophes, and said it risked "everything going downhill." They urged the authority to retain them.

Sam, a postal worker in Harrogate, a spa town in North Yorkshire, told the BBC that signs missing an apostrophe -- such as the nearby St Mary's Walk sign that had been erected in the town without it -- infuriated her. "I walk past the sign every day and it riles my blood to see inappropriate grammar or punctuation," she said. Though the updated St Mary's sign had no apostrophe, someone had graffitied an apostrophe back on to the sign with a marker pen, which Sam, a former teacher, said was "brilliant." She suggested the council was providing a bad example to children who spend a long time learning the basics of grammar only to see it not being used correctly on street signs.

Dr Ellie Rye, a lecturer in English language and linguistics at the University of York, said apostrophes were a relatively new invention in our writing and, often, context allows people to understand their meaning. "If I say I live on St Mary's Walk, we're expecting a street name or an address of some kind." She said the change would matter to people who spend a long time teaching how we write English but that it was "less important in [verbal] communication."

Microsoft

Microsoft Readies New AI Model To Compete With Google, OpenAI (theinformation.com) 26

For the first time since it invested more than $10 billion into OpenAI in exchange for the rights to reuse the startup's AI models, Microsoft is training a new, in-house AI model large enough to compete with state-of-the-art models from Google, Anthropic and OpenAI itself. The Information: The new model, internally referred to as MAI-1, is being overseen by Mustafa Suleyman, the ex-Google AI leader who most recently served as CEO of the AI startup Inflection before Microsoft hired the majority of the startup's staff and paid $650 million for the rights to its intellectual property in March. But this is a Microsoft model, not one carried over from Inflection, although it may build on training data and other tech from the startup. It is separate from the Pi models that Inflection previously released, according to two Microsoft employees with knowledge of the effort.

MAI-1 will be far larger than any of the smaller, open source models that Microsoft has previously trained, meaning it will require more computing power and training data and will therefore be more expensive, according to the people. MAI-1 will have roughly 500 billion parameters, or settings that can be adjusted to determine what models learn during training. By comparison, OpenAI's GPT-4 has more than 1 trillion parameters, while smaller open source models released by firms like Meta Platforms and Mistral have 70 billion parameters. That means Microsoft is now pursuing a dual trajectory of sorts in AI, aiming to develop both "small language models" that are inexpensive to build into apps and that could run on mobile devices, alongside larger, state-of-the-art AI models.
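Those parameter counts translate directly into hardware requirements, which is why the article notes MAI-1 "will require more computing power." A back-of-envelope sketch in Python, assuming 2 bytes per parameter (16-bit weights); real deployments vary with precision and quantization, and the parameter figures are the article's reported estimates, not confirmed specs:

```python
# Rough memory footprint of model weights alone (excludes activations,
# optimizer state, and KV caches). Assumes 16-bit (2-byte) weights.
def weight_bytes(params, bytes_per_param=2):
    """Bytes needed just to hold a model's weights in memory."""
    return params * bytes_per_param

for name, params in [
    ("MAI-1 (reported)", 500 * 10**9),
    ("GPT-4 (reported, >1T)", 10**12),
    ("70B open model", 70 * 10**9),
]:
    gib = weight_bytes(params) / 1024**3
    print(f"{name}: ~{gib:,.0f} GiB of weights")
```

At roughly a terabyte of weights for a 500-billion-parameter model, even inference has to be sharded across many accelerators, which is the practical gap between "small language models" that fit on a phone and state-of-the-art frontier models.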

Privacy

When a Politician Sues a Blog to Unmask Its Anonymous Commenter 79

Markos Moulitsas is the poll-watching founder of the political blog Daily Kos. On Thursday he wrote that in 2021, future third-party presidential candidate RFK Jr. had sued the site.

"Things are not going well for him." Back in 2021, Robert F. Kennedy Jr. sued Daily Kos to unmask the identity of a community member who posted a critical story about his dalliance with neo-Nazis at a Berlin rally. I updated the story here, here, here, here, and here.

To briefly summarize, Kennedy wanted us to doxx our community member, and we stridently refused.

The site and the politician then continued fighting for more than three years. "Daily Kos lost the first legal round in court," Moulitsas posted in 2021, "thanks to a judge who is apparently unconcerned with First Amendment ramifications given the chilling effect of her ruling."

But even then, Moulitsas was clear on his rights: Because of Section 230 of the Communications Decency Act, [Kennedy] cannot sue Daily Kos — the site itself — for defamation. We are protected by the so-called safe harbor. That's why he's demanding we reveal what we know about "DowneastDem" so they can sue her or him directly.
Moulitsas also stressed that his own 2021 blog post was "reiterating everything that community member wrote, and expanding on it. And so instead of going after a pseudonymous community writer/diarist on this site, maybe Kennedy will drop that pointless lawsuit and go after me... consider this an escalation." (Among other things, the post cited a German-language news account saying Kennedy "sounded the alarm concerning the 5G mobile network and Microsoft founder Bill Gates..." Moulitsas also noted an Irish Times article which confirmed that at the rally Kennedy spoke at, "Noticeable numbers of neo-Nazis, kitted out with historic Reich flags and other extremist accessories, mixed in with the crowd.")

So what happened? Moulitsas posted an update Thursday: It's been a while since I updated, and given a favorable court ruling Thursday, it's way past time to catch everyone up. Shockingly, Kennedy got a trial court judge in New York to agree with him, and a subpoena was issued to Daily Kos to turn over any information we might have on the account. However, we are based in California, not New York, so once I received the subpoena at home, we had a California court not just quash the subpoena, but essentially signal that if New York didn't do the right thing on appeal, California could very well take care of it.

New York is one of the U.S. states that doesn't have a strict "Dendrite standard" law protecting anonymous speech. But soon the blog founder discovered he had allies: The issues at hand are so important that The New York Times, the E.W. Scripps Company, the First Amendment Coalition, New York Public Radio, and seven other New York media companies joined the appeals effort with their own joint amicus brief. What started as a dispute over a Daily Kos diarist has become a meaningful First Amendment battle, with major repercussions given New York's role as a major news media and distribution center.

After reportedly spending over $1 million on legal fees, Kennedy somehow discovered the identity of our community member sometime last year and promptly filed a defamation suit in New Hampshire in what seemed a clumsy attempt at forum shopping, or the practice of choosing where to file suit based on the belief you'll be granted a favorable outcome. The community member lives in Maine, Kennedy lives in California, and Daily Kos doesn't publish specifically in New Hampshire. A perplexed court threw out the case this past February on those obvious jurisdictional grounds....

Then, last week, the judge threw out the appeal of that decision because Kennedy's lawyer didn't file in time — and blamed the delay on bad Wi-Fi...

Kennedy tried to dismiss the original case, the one awaiting an appellate decision in New York, claiming it was now moot. His legal team had sued to get the community member's identity, and now that they had it, they argued that there was no reason for the case to continue. We disagreed, arguing that there were important issues to resolve (i.e., Dendrite), and we also wanted lawyer fees for their unconstitutional assault on our First Amendment rights...

On Thursday, in a unanimous decision, a four-judge New York Supreme Court appellate panel ordered the case to continue, keeping the Dendrite issue alive and also allowing us to proceed in seeking damages based on New York's anti-SLAPP law, which prohibits "strategic lawsuits against public participation."

Thursday's blog post concludes with this summation: "Kennedy opened up a can of worms and has spent millions fighting this stupid battle. Despite his losses, we aren't letting him weasel out of this."
Space

The Highest Observatory On Earth Is Now Open (space.com) 14

The world's highest astronomical site is officially open for business after being in the works for 26 years. Space.com reports: Japan's University of Tokyo Atacama Observatory, or TAO, which was first conceptualized 26 years ago to study the evolution of galaxies and exoplanets, is perched atop a tall mountain in the Chilean Andes at 5,640 meters (18,500 feet) above sea level. The facility's altitude surpasses even the Atacama Large Millimeter Array, which is at an elevation of 5,050 meters (16,570 feet).

TAO is located on the summit of Atacama's Cerro Chajnantor mountain, whose name means "place of departure" in the now-extinct Kunza language of the indigenous Likan Antai community. The region's high altitude, sparse atmosphere and perennially arid climate are deadly to humans, but make an excellent spot for infrared telescopes like TAO, whose observational accuracy relies on low moisture levels, which render Earth's atmosphere transparent at infrared wavelengths.

TAO's 6.5-meter telescope is equipped with two science instruments designed to observe the universe in infrared, which is electromagnetic radiation with a wavelength longer than visible light but shorter than microwaves. One of the instruments, named SWIMS, will image galaxies from the very early universe to understand how they coalesced out of pristine dust and gas, a process whose specifics remain murky despite decades of research. The second, named MIMIZUKU, will aid the overarching science goal by studying primordial disks of dust within which stars and galaxies are known to form, according to the mission plan.
Constructing the telescope on the summit of Mt. Chajnantor "was an incredible challenge, not just technically, but politically too," Yuzuru Yoshii, a professor at the University of Tokyo in Japan who has spearheaded TAO since 1998, said in a statement. "I have liaised with Indigenous peoples to ensure their rights and views are considered, the Chilean government to secure permission, local universities for technical collaboration, and even the Chilean Health Ministry to make sure people can work at that altitude in a safe manner."

"Thanks to all involved, research I've only ever dreamed about can soon become a reality, and I couldn't be happier," he added.
The Internet

Humans Now Share the Web Equally With Bots, Report Warns (independent.co.uk) 32

An anonymous reader quotes a report from The Independent, published last month: Humans now share the web equally with bots, according to a major new report -- as some fear that the internet is dying. In recent months, the so-called "dead internet theory" has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts. Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its "Bad Bot Report" indicates. That is up 2 per cent from the previous year, and is the highest level recorded since the report began in 2013. In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.

Some of that rise is the result of the adoption of generative artificial intelligence and large language models. Companies that build those systems use bots to scrape the internet and gather data that can then be used to train them. Some of those bots are becoming increasingly sophisticated, Imperva warned. More and more of them come from residential internet connections, which makes them look more legitimate. "Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications," said Nanhi Singh, general manager for application security at Imperva. "As more AI-enabled tools are introduced, bots will become omnipresent."

AI

Microsoft Bans US Police Departments From Using Enterprise AI Tool (techcrunch.com) 49

An anonymous reader quotes a report from TechCrunch: Microsoft has changed its policy to ban U.S. police departments from using generative AI through the Azure OpenAI Service, the company's fully managed, enterprise-focused wrapper around OpenAI technologies. Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with Azure OpenAI Service from being used "by or for" police departments in the U.S., including integrations with OpenAI's text- and speech-analyzing models. A separate new bullet point covers "any law enforcement globally," and explicitly bars the use of "real-time facial recognition technology" on mobile cameras, like body cameras and dashcams, to attempt to identify a person in "uncontrolled, in-the-wild" environments. [...]

The new terms leave wiggle room for Microsoft. The complete ban on Azure OpenAI Service usage pertains only to U.S., not international, police. And it doesn't cover facial recognition performed with stationary cameras in controlled environments, like a back office (although the terms prohibit any use of facial recognition by U.S. police). That tracks with Microsoft's and close partner OpenAI's recent approach to AI-related law enforcement and defense contracts.
Last week, Taser maker Axon announced a new tool that uses AI built on OpenAI's GPT-4 Turbo model to transcribe audio from body cameras and automatically turn it into a police report. It's unclear if Microsoft's updated policy is in response to Axon's product launch.
Programming

The BASIC Programming Language Turns 60 (arstechnica.com) 107

Ars Technica reports: Sixty years ago, on May 1, 1964, at 4 am, a quiet revolution in computing began at Dartmouth College. That's when mathematicians John G. Kemeny and Thomas E. Kurtz successfully ran the first program written in their newly developed BASIC (Beginner's All-Purpose Symbolic Instruction Code) programming language on the college's General Electric GE-225 mainframe.

Little did they know that their creation would go on to democratize computing and inspire generations of programmers over the next six decades.
