Electronic Frontier Foundation

EFF Is Leaving X (eff.org) 188

After nearly 20 years on the platform, The Electronic Frontier Foundation (EFF) says it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. "The math hasn't worked out for a while now." From the report: We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. [...]

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

Government

US Government Lost More Than 10,000 STEM PhDs Last Year (science.org) 126

An anonymous reader quotes a report from Science.org: Some 10,109 doctoral-trained experts in science and related fields left their jobs last year as President Donald Trump dramatically shrank the overall federal workforce. That exodus was only 3% of the 335,192 federal workers who exited last year but represents 14% of the total number of Ph.D.s in science, technology, engineering, and math (STEM) or health fields employed at the end of 2024 as then-President Joe Biden prepared to leave office. The numbers come from employment data posted earlier this month by the White House Office of Personnel Management (OPM). At 14 research agencies Science examined in detail, departures outnumbered new hires last year by a ratio of 11 to one, resulting in a net loss of 4,224 STEM Ph.D.s. The graphs that follow show the impact is particularly striking at such scientist-rich agencies as the National Science Foundation (NSF). But across the government, these departing Ph.D.s took with them a wealth of subject matter expertise and knowledge about how the agencies operate.

[...] Science's analysis found that reductions in force, or RIFs, accounted for relatively few departures in 2025. Only at the Centers for Disease Control and Prevention, where 16% of the 519 STEM Ph.D.s who left last year got pink RIF slips, did the percentage exceed 6%, and some agencies reported no STEM Ph.D. RIFs in 2025. At most agencies, the most common reasons for departures were retirements and quitting. Although OPM classifies many of these as voluntary, outside forces, including the fear of being fired, the lure of buyout offers, or a profound disagreement with Trump policies, likely influenced many decisions to leave. Many Ph.D.s departed because their position was terminated.

Science

Nature-Inspired Computers Are Shockingly Good At Math (phys.org) 32

An R&D lab under America's Energy Department announced this week that "Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges."

Phys.org publishes the announcement from Sandia National Lab: In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs — the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world's first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond...

"We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly," Theilman said. For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren't expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren't surprising. The researchers believe the brain itself performs complex computations constantly, even if we don't consciously realize it. "Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball," Aimone said. "These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply..."

Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. "We based our circuit on a relatively well-known model in the computational neuroscience world," Theilman said. "We've shown the model has a natural but non-obvious link to PDEs, and that link hasn't been made until now — 12 years after the model was introduced." The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. "Diseases of the brain could be diseases of computation," Aimone said. "But we don't have a solid grasp on how the brain performs computations yet." If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's.
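The correspondence between random processes and PDEs that makes spiking hardware plausible for this work has a classical analogue: by the Feynman-Kac formula, the heat equation can be solved by averaging over random walks, a massively parallel, event-driven workload of exactly the kind neuromorphic chips favor. A minimal sketch of that correspondence in ordinary Python (this is an illustration of the general idea, not Sandia's algorithm; the function names and parameters are mine):

```python
import math
import random


def heat_mc(f, x, t, d=1.0, walkers=200_000, seed=0):
    """Monte Carlo estimate of u(x, t) for the heat equation
    u_t = d * u_xx with initial condition u(x, 0) = f(x).

    Each 'walker' is a Brownian path sampled at time t; by Feynman-Kac,
    averaging f over the walkers' endpoints recovers the solution.
    """
    rng = random.Random(seed)
    s = math.sqrt(2.0 * d * t)  # standard deviation of the displacement
    return sum(f(x + s * rng.gauss(0.0, 1.0)) for _ in range(walkers)) / walkers


# For f(x) = sin(x), the exact solution is exp(-d*t) * sin(x).
est = heat_mc(math.sin, x=1.0, t=0.5)
exact = math.exp(-0.5) * math.sin(1.0)
```

Each walker is independent, so the estimate parallelizes trivially; on neuromorphic hardware the walkers would be realized as sparse spiking activity rather than a Python loop.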

Math

Mathematical Proof Debunks the Idea That the Universe Is a Computer Simulation (phys.org) 248

alternative_right shares a report from Phys.org: Today's cutting-edge theory -- quantum gravity -- suggests that even space and time aren't fundamental. They emerge from something deeper: pure information. This information exists in what physicists call a Platonic realm -- a mathematical foundation more real than the physical universe we experience. It's from this realm that space and time themselves emerge. "The fundamental laws of physics cannot be contained within space and time, because they generate them. It has long been hoped, however, that a truly fundamental theory of everything could eventually describe all physical phenomena through computations grounded in these laws. Yet we have demonstrated that this is not possible. A complete and consistent description of reality requires something deeper -- a form of understanding known as non-algorithmic understanding." "We have demonstrated that it is impossible to describe all aspects of physical reality using a computational theory of quantum gravity," says Dr. Faizal. "Therefore, no physically complete and consistent theory of everything can be derived from computation alone. Rather, it requires a non-algorithmic understanding, which is more fundamental than the computational laws of quantum gravity and therefore more fundamental than spacetime itself."

"Drawing on mathematical theorems related to incompleteness and indefinability, we demonstrate that a fully consistent and complete description of reality cannot be achieved through computation alone," explains Dr. Mir Faizal, Adjunct Professor with UBC Okanagan's Irving K. Barber Faculty of Science. "It requires non-algorithmic understanding, which by definition is beyond algorithmic computation and therefore cannot be simulated. Hence, this universe cannot be a simulation."

The findings have been published in the Journal of Holography Applications in Physics.
AI

UAE Lab Releases Open-Source Model to Rival China's DeepSeek (gizmodo.com) 43

"The United Arab Emirates wants to compete with the U.S. and China in AI," writes Gizmodo, "and a new open source model may be its strongest contender yet.

"An Emirati AI lab called the Institute of Foundation Models (IFM) released K2 Think on Tuesday, a model that researchers say rivals OpenAI's ChatGPT and China's DeepSeek in standard benchmark tests." "With just 32 billion parameters, it outperforms flagship reasoning models that are 20x larger," the lab wrote in a press release on Tuesday. DeepSeek's R1 has 671 billion parameters, though only 37 billion are active. Meta's latest Llama 4 models range from 17 billion to 288 billion active parameters. OpenAI doesn't share parameter information.

Researchers also claim that K2 Think leads "all open-source models in math performance" across several benchmarks. The model is more narrowly focused on math, coding, and scientific research than most other AI chatbots. The Emirati lab's selling point for the model is similar to the strategy with which DeepSeek disrupted the AI market earlier this year: optimized efficiency that delivers the same or better computing power at a lower cost...

The lab is also aiming to be transparent in everything, "open-sourcing not just models but entire development processes" that provide "researchers with complete materials including training code, datasets, and model checkpoints," IFM said in a press release from May.

The UAE and other Arab countries are investing in AI to try to reduce their economic dependence on fossil fuels, the article points out.
AI

China's Moonshot Launches Free AI Model Kimi K2 That Outperforms GPT-4 In Key Benchmarks 41

Chinese AI startup Moonshot AI has released Kimi K2, a trillion-parameter open-source language model that outperforms GPT-4 in key benchmarks with particularly strong performance on coding and autonomous agent tasks. VentureBeat reports: The new model, called Kimi K2, features 1 trillion total parameters with 32 billion activated parameters in a mixture-of-experts architecture. The company is releasing two versions: a foundation model for researchers and developers, and an instruction-tuned variant optimized for chat and autonomous agent applications. "Kimi K2 does not just answer; it acts," the company stated in its announcement blog. "With Kimi K2, advanced agentic intelligence is more open and accessible than ever. We can't wait to see what you build."

The model's standout feature is its optimization for "agentic" capabilities -- the ability to autonomously use tools, write and execute code, and complete complex multi-step tasks without human intervention. In benchmark tests, Kimi K2 achieved 65.8% accuracy on SWE-bench Verified, a challenging software engineering benchmark, outperforming most open-source alternatives and matching some proprietary models. [...] On LiveCodeBench, arguably the most realistic coding benchmark available, Kimi K2 achieved 53.7% accuracy, decisively beating DeepSeek-V3's 46.9% and GPT-4.1's 44.7%. More striking still: it scored 97.4% on MATH-500 compared to GPT-4.1's 92.4%, suggesting Moonshot has cracked something fundamental about mathematical reasoning that has eluded larger, better-funded competitors.

But here's what the benchmarks don't capture: Moonshot is achieving these results with a model that costs a fraction of what incumbents spend on training and inference. While OpenAI burns through hundreds of millions on compute for incremental improvements, Moonshot appears to have found a more efficient path to the same destination. It's a classic innovator's dilemma playing out in real time -- the scrappy outsider isn't just matching the incumbent's performance, they're doing it better, faster, and cheaper.
Science

Inside arXiv - the Most Transformative Platform in All of Science (wired.com) 13

Paul Ginsparg, a physics professor at Cornell University, created arXiv nearly 35 years ago as a digital repository where researchers could share their findings before peer review. Today, the platform hosts more than 2.6 million papers, receives 20,000 new submissions monthly, and serves 5 million active users, Wired writes in a profile of the platform.

"Just when I thought I was out, they pull me back in!" Ginsparg quotes from The Godfather, reflecting his inability to fully hand over the platform despite numerous attempts. If arXiv stopped functioning, scientists worldwide would face immediate disruption. "Everybody in math and physics uses it," says Scott Aaronson, a computer scientist at the University of Texas at Austin. "I scan it every night."

ArXiv revolutionized academic publishing, previously dominated by for-profit giants like Elsevier and Springer, by allowing instant and free access to research. Many significant discoveries, including the "transformers" paper that launched the modern AI boom, first appeared on the platform. Initially a collection of shell scripts on Ginsparg's NeXT machine in 1991, arXiv followed him from Los Alamos National Laboratory to Cornell, where it found an institutional home despite administrative challenges. Recent funding from the Simons Foundation has enabled a hiring spree and long-needed technical updates.
Power

Could New Linux Code Cut Data Center Energy Use By 30%? (datacenterdynamics.com) 65

Two computer scientists at the University of Waterloo in Canada believe changing 30 lines of code in Linux "could cut energy use at some data centers by up to 30 percent," according to the site Data Centre Dynamics.

It's the code that processes packets of network traffic, and Linux "is the most widely used OS for data center servers," according to the article: The team tested their solution's effectiveness and submitted it to Linux for consideration, and the code was published this month as part of Linux's newest kernel, release version 6.13. "All these big companies — Amazon, Google, Meta — use Linux in some capacity, but they're very picky about how they decide to use it," said Martin Karsten [professor of Computer Science in Waterloo's Math Faculty]. "If they choose to 'switch on' our method in their data centers, it could save gigawatt hours of energy worldwide. Almost every single service request that happens on the Internet could be positively affected by this."

The University of Waterloo is building a green computer server room as part of its new mathematics building, and Karsten believes sustainability research must be a priority for computer scientists. "We all have a part to play in building a greener future," he said. The Linux Foundation, which oversees the development of the Linux OS, is a founder member of the Green Software Foundation, an organization set up to look at ways of developing "green software" — code that reduces energy consumption.

Karsten "teamed up with Joe Damato, distinguished engineer at Fastly" to develop the 30 lines of code, according to an announcement from the university. "The Linux kernel code addition developed by Karsten and Damato was based on research published in ACM SIGMETRICS Performance Evaluation Review" (by Karsten and grad student Peter Cai).

Their paper "reviews the performance characteristics of network stack processing for communication-heavy server applications," devising an "indirect methodology" to "identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead...

"Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput..."
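For readers who want to see the shape of the mechanism: the 6.13 change builds on existing per-device Linux knobs that trade interrupt-driven packet processing for timer-based polling, reducing how often the NIC interrupt is re-armed. A hedged config sketch of those older, related sysfs knobs (the interface name eth0 and the values are assumptions for illustration, not recommendations; the new per-NAPI irq-suspend-timeout parameter added in 6.13 is configured through the netdev netlink API rather than sysfs):

```shell
# Illustrative only -- interface name and values are assumptions.
# Defer re-arming hard IRQs and flush GRO on a timer instead of
# taking an interrupt after every batch of packets.
echo 2      > /sys/class/net/eth0/napi_defer_hard_irqs
echo 200000 > /sys/class/net/eth0/gro_flush_timeout   # nanoseconds
```

The 6.13 mechanism extends this idea by suspending IRQs entirely while an application is busy polling, falling back to interrupts only when the application goes idle.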
AI

AI Benchmarking Organization Criticized For Waiting To Disclose Funding from OpenAI (techcrunch.com) 6

An anonymous reader shares a report: An organization developing math benchmarks for AI didn't disclose that it had received funding from OpenAI until relatively recently, drawing allegations of impropriety from some in the AI community.

Epoch AI, a nonprofit primarily funded by Open Philanthropy, a research and grantmaking foundation, revealed on December 20 that OpenAI had supported the creation of FrontierMath. FrontierMath, a test with expert-level problems designed to measure an AI's mathematical skills, was one of the benchmarks OpenAI used to demo its upcoming flagship AI, o3.

In a post on the forum LessWrong, a contractor for Epoch AI going by the username "Meemi" says that many contributors to the FrontierMath benchmark weren't informed of OpenAI's involvement until it was made public. "The communication about this has been non-transparent," Meemi wrote. "In my view Epoch AI should have disclosed OpenAI funding, and contractors should have transparent information about the potential of their work being used for capabilities, when choosing whether to work on a benchmark."

AI

Study Done By Apple AI Scientists Proves LLMs Have No Ability to Reason (appleinsider.com) 233

Slashdot reader Rick Schumann shared this report from the blog AppleInsider: A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.

The group has proposed a new benchmark, GSM-Symbolic, to help others measure the reasoning capabilities of various large language models (LLMs). Their initial testing reveals that slight changes in the wording of queries can result in significantly different answers, undermining the reliability of the models. The group investigated the "fragility" of mathematical reasoning by adding contextual information to their queries that a human could understand, but which should not affect the fundamental mathematics of the solution. This resulted in varying answers, which shouldn't happen...

The study found that adding even a single sentence that appears to offer relevant information to a given math question can reduce the accuracy of the final answer by up to 65 percent. "There is just no way you can build reliable agents on this foundation, where changing a word or two in irrelevant ways or adding a few bits of irrelevant info can give you a different answer," the study concluded... "We found no evidence of formal reasoning in language models," the new study concluded. The behavior of LLMs "is better explained by sophisticated pattern matching" which the study found to be "so fragile, in fact, that [simply] changing names can alter results."
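The paper calls these perturbed variants "GSM-NoOp": a clause is added that sounds relevant but changes nothing mathematically. A toy sketch of the construction, loosely modeled on the paper's well-known kiwi example (the helper function and exact wording here are mine):

```python
def add_distractor(facts: str, question: str, distractor: str) -> str:
    """Build a GSM-NoOp-style prompt: slip one irrelevant sentence in
    between the facts and the question. The arithmetic is untouched,
    so a genuinely reasoning system's answer should not change."""
    return f"{facts} {distractor} {question}"


facts = "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday."
question = "How many kiwis does Oliver have?"
noop = "Five of the kiwis are a bit smaller than average."  # irrelevant

prompt = add_distractor(facts, question, noop)
answer = 44 + 58  # still 102; the distractor changes nothing
```

The study's finding is that LLM answers to such perturbed prompts often do change, which is what the authors take as evidence of pattern matching rather than formal reasoning.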

Education

Curricula From Bill Gates-Backed 'Illustrative Math' Required In NYC High Schools (nyc.gov) 90

New York City announced a "major citywide initiative" to increase "math achievement" among students, according to the mayor's office.

93 middle schools and 420 high schools will implement an "Illustrative Math" curriculum (from an education nonprofit founded in 2011) combined with intensive teacher coaching, starting this fall. "The goal is to ensure that all New York City students develop math skills," according to the NYC Solves web site (with the mayor's office noting "years of stagnant math scores.") Long-time Slashdot reader theodp writes: The NYC Public Schools further explained, "As part of the NYC Solves initiative, all high schools will use Illustrative Mathematics and districts will choose a comprehensive, evidence-based curricula for middle school math instruction from an approved list. Each curriculum has been reviewed and recommended by EdReports, a nationally recognized nonprofit organization."

The About page for Illustrative Mathematics (IM) lists The Bill & Melinda Gates Foundation as a Philanthropic Supporter [as well as the Chan Zuckerberg Initiative and The William and Flora Hewlett Foundation], and lists two Gates Foundation Directors as Board members... A search of Gates Foundation records for "Illustrative Mathematics" turns up $25 million in committed grants since 2012, including a $13.9 million grant to Illustrative Mathematics in Nov. 2022 ("To support the implementation of high-quality instructional materials and practices for improving students' math experience and outcomes") and a $425,000 grant just last month to Educators for Excellence ("To engage teacher feedback on the implementation of Illustrative Mathematics curriculum and help middle school teachers learn about the potential for math high-quality instructional materials and professional learning in New York City").

EdReports, which vouched for the Illustrative Mathematics curriculum (according to New York's Education Department), has received $10+ million in committed Gates Foundation grants. The Gates Foundation is also a very generous backer of NYC's Fund for Public Schools, with grants that included $4,276,973 in October 2023 "to support the implementation of high-quality instructional materials and practices for improving students' math experience and outcomes."

Chalkbeat reported in 2018 on a new focus on high school curriculum by the Gates Foundation ("an area where we feel like we've underinvested," said Bill Gates). The Foundation made math education its top K-12 priority in Oct. 2022 with a $1.1 billion investment. Also note this May 2023 blog post from $14+ million Gates Foundation grantee Educators for Excellence, a New York City nonprofit. The blog post touts the key role the nonprofit had played in a year-long advocacy effort that ultimately "secured a major win" ending the city's curricula "free-for-all" and announced "a standardized algebra curriculum from Illustrative Mathematics will also be piloted at 150 high schools."

As the NY Times reported back in 2011, behind "grass-roots" school advocacy, there's Bill Gates!

Math

How Much of the World Is It Possible to Model? 45

Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation.

The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data. The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, it comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks.
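A McCulloch-Pitts unit is simple enough to state in a few lines: binary inputs, a weighted sum, and a hard threshold. A minimal sketch (the names here are mine):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if and only if the weighted sum
    of the binary inputs meets the threshold; otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0


# Logic gates fall out of threshold choices alone:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
```

With the weights fixed at 1, changing only the threshold switches the unit between AND and OR, which is the sense in which McCulloch and Pitts reduced logical calculation to networks of idealized neurons.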

This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place.

Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere. If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.
Microsoft

Bill Gates Discusses AI, Climate Change, and his Time at Microsoft (gatesnotes.com) 112

Bill Gates took his 11th turn answering questions in Reddit's "Ask Me Anything" forum this week — and occasionally looked back on his time at Microsoft: Is technology only functional for you nowadays, or is there a still hobby aspect to it? Do you for instance still do nerdy or geeky things in your spare time; e.g. write code?

Yes. I like to play around and code. The last time my code shipped in a Microsoft product was 1985 — so a long time ago. I can no longer threaten when I think a schedule is too long that "I will come in and code it over the weekend."


Mr Gates, with the benefit of hindsight regarding your years of involvement with Microsoft, what is the single biggest thing you wish you had done differently?

I was CEO until 2000. I certainly know a lot now that I didn't back then. Two areas I would change would be our work in phone operating systems (Android won) and trying to settle the antitrust lawsuit sooner.

Gates posted all of his responses on his personal web site Gates Notes — and there was also some discussion about AI's coming role in our future. Asked for his opinion about generative AI, and how it will impact the world, Gates said: "I am quite impressed with the rate of improvement in these AIs. I think they will have a huge impact. Thinking of it in the Gates Foundation context we want to have tutors that help kids learn math and stay interested. We want medical help for people in Africa who can't access a doctor. I still work with Microsoft some, so I am following this very closely."

Do you think that using technology to push teachers and doctors out of jobs will have a positive impact on our world? What about, instead, we use AI to give equitable access to education and training for more human teachers and doctors, without the $500,000 price tag. Do you think that might have a more positive impact on, ya know, humans?

I think we need more teachers and doctors, not less. In the Foundation's work, the shortage of doctors means that most people never see a doctor and they suffer because of that. We want class sizes to be smaller. Digital tools can help although their impact so far has been modest.


[W]hat are your views on OpenAI's ChatGPT?

It gives a glimpse of what is to come. I am impressed with this whole approach and the rate of innovation....


Many years ago, I think around 2000, I heard you say something on TV like, "people are vastly overestimating what the internet will be like in 5 years, and vastly underestimating what it will be like in 10 years." Is any mammoth technology shift at a similar stage right now? Any tech shift — not necessarily the Internet

AI is the big one. I don't think Web3 was that big or that metaverse stuff alone was revolutionary, but AI is quite revolutionary....


What are you excited about in the year ahead?

First being a grandfather. Second being a good friend and father. Third progress in health and climate innovation. Fourth helping to shape the AI advances in a positive way.

Gates also offered an update on the TerraPower molten salt thorium reactors, shared his thoughts on veganism, and made predictions about climate change. "I still believe we can avoid a terrible outcome. The pace of innovation is really picking up even though we won't make the current timelines or avoid going over 1.5.... The key on climate is making the clean products as cheap as the dirty products in every area of emission — planes, concrete, meat etc."

Gates also revealed what kind of smartphone he uses (a foldable Samsung Fold 4), what he thought of the latest Avatar ("good"), and that his favorite bands include U2. "I loved Bono's recent book and he is a good friend."

And he said he believes that the very rich "should pay a lot more in taxes." But in addition, Gates said, "they should give away their wealth over time. It has been very fulfilling for me and is my full-time job."
Communications

US Opts To Not Rebuild Renowned Puerto Rico Telescope (apnews.com) 130

The National Science Foundation announced Thursday that it will not rebuild a renowned radio telescope in Puerto Rico, which was one of the world's largest until it collapsed nearly two years ago. The Associated Press reports: Instead, the agency issued a solicitation for the creation of a $5 million education center at the site that would promote programs and partnerships related to science, technology, engineering and math. It also seeks the implementation of a research and workforce development program, with the center slated to open next year in the northern mountain town of Arecibo where the telescope was once located. The solicitation does not include operational support for current infrastructure at the site that is still in use, including a 12-meter radio telescope or the Lidar facility, which is used to study the upper atmosphere and ionosphere to analyze cloud cover and precipitation data.

The decision was mourned by scientists around the world who used the telescope at the Arecibo Observatory for years to search for asteroids, planets and extraterrestrial life. The 1,000-foot-wide (305-meter-wide) dish also was featured in the Jodie Foster film "Contact" and the James Bond movie "GoldenEye." The reflector dish and the 900-ton platform hanging 450 feet above it previously allowed scientists to track asteroids headed to Earth, conduct research that led to a Nobel Prize and determine if a planet is potentially habitable.
The Arecibo Observatory collapsed in on itself in December 2020, after the telescope suffered two major cable malfunctions in the two months prior. The National Science Foundation released shocking footage of the moment when support cables snapped, causing the massive 900-ton structure suspended above Arecibo to fall onto the observatory's iconic 1,000-foot-wide dish.
Graphics

SF Writer/Digital Art/NFT Pioneer Herbert W. Franke Dies at Age 95 (artnews.com) 20

On July 7th, ARTnews explained how 95-year-old Austrian artist Herbert W. Franke "has recently become a sensation within the art world and the crypto space," describing the digital pioneer as a computer artist who used algorithms and computer programs to visualize math as art. Last month, the physicist and science fiction writer was behind one of the most talked-about digital artworks at a booth by the blockchain company Tezos at Art Basel. Titled MONDRIAN (1979), the work paid tribute to artist Piet Mondrian's iconic geometric visuals using a program written on one of the first home computers.

Days before this, Franke, who studied physics in Vienna following World War II and started working at Siemens in 1953, where he conducted photographic experiments after office hours, launched 100 images from his famed series "Math Art" (1980-95) as NFTs on the Quantum platform. The drop was meant to commemorate his birthday on May 14 and to raise funds for his foundation. The NFTs sold out in 30 seconds, with the likes of pioneering blockchain artist Kevin Abosch purchasing a few.

In one of his last interviews, Franke told the site that blockchain "is a totally new environment, and this technology is still in its early stages, like at the beginning of computer art. But I am convinced that it has opened a new door for digital art and introduced the next generation to this new technology." It echoed something he'd said in his first book, published in 1957, which he later quoted in the interview (a full 65 years later). "Technology is usually dismissed as an element hostile to art. I want to try to prove that it is not..."

This morning, long-time Slashdot reader Qbertino wrote: The German IT news site heise reports (article in German) that digital art pioneer, SF author ("The Mind Net") and cyberspace avantgardist Herbert W. Franke has died at age 95. His wife recounted on his Twitter account: "Herbert loved to call himself the dinosaur of computer art. I am [...] devastated to announce that our beloved dinosaur has left the earth.

"He passed away knowing there is a community of artists and art enthusiasts deeply caring about his art and legacy."
Among much pioneering work, he founded one of the world's first digital art festivals, "Ars Electronica," in Austria in 1979.

Franke's wife is still running the Art Meets Science web site dedicated to Franke's work. Some highlights from its biography of Franke's life: Herbert W. Franke, born in Vienna on May 14, 1927, studied physics and philosophy at the University of Vienna and received his doctorate in 1951... An Apple II was his first personal computer, which he bought in 1980. As early as 1982 he developed a program that used a MIDI interface to control moving image sequences through music....

Only in recent years has "art from the machine" begun to interest traditional museums as a branch of modern art. Franke, who from the beginning was firmly convinced of the future importance of this art movement, has also assembled a collection of computer graphics that is unique in the world, documenting 50 years of this development with works by respected international artists, supplemented by his own works....

As a physicist, Franke was well suited to bring science and technology to the general public in popular form, thanks to a talent as a writer that became apparent early on. About one-third of his nearly fifty books, as well as uncounted journal articles...

Franke's novels and stories are not about predicting future technologies, nor about forecasting our future way of life, but rather about the intellectual examination of possible models of our future and their philosophical as well as ethical interpretation. In this context, however, Franke attaches great importance to the seriousness of scientific or technological assessments of the future in the sense of a feasibility analysis. In his opinion, a serious and meaningful discussion about future developments can basically only be conducted on this basis. In this respect, Franke is not a typical representative of science fiction, but rather a visionary who, as a novelist, deals with relevant questions of social future and human destiny on a high intellectual level.

Businesses

What Happened After Amazon's $71M Tax Break in Central New York? 62

This week Amazon announced that "Approximately 1,500 local Amazon employees will operate and work with innovative robotics technology" at a new fulfillment center that's a first of its kind for Central New York.

Amazon's press release says they've created 39,000 jobs in New York since 2010 — and "invested over $14 billion in the state of New York" — though they're counting what they paid workers as "investing" (as well as what they paid to build Amazon's infrastructure).

Long-time Slashdot reader theodp writes: In 2019, Onondaga County (New York) officials unanimously approved $71 million in tax breaks to support the development of a giant warehouse in the Town of Clay... "I am very excited to see this tremendous investment in Central New York coming to fruition," said U.S. Representative John Katko. "The new Fulfillment Center will be revolutionary for our region, creating over 1,500 jobs and making significant contributions to the local economy."

Driving home Katko's point, the press release added, "In April of 2021, Amazon furthered its commitment to invest in education programs that will drive future innovation in the communities it serves by donating $1.75 million to construct a new STEAM (Science, Technology, Engineering, Arts, and Math) high school in Onondaga County. Amazon's donation will fund robotics and computer science initiatives at the new school [presumably using Amazon-supported curriculum providers]." Unlike Amazon's Fulfillment Center, the new STEAM high school is unlikely to open before Fall 2023 at the earliest, as the $74-million-and-counting project (that Amazon is donating $1.75M towards) to repurpose a school building that has sat empty since 1975 has experienced delays and cost increases.

Amazon's press release notes the company also donated $150,000 to be "the presenting sponsor" for the three-day Syracuse Jazz Fest. And it also touts Amazon's support for these other central New York organizations (without indicating the amount contributed):
  • Rescue Mission Alliance: working to end homelessness and hunger in greater Syracuse.
  • Milton J. Rubenstein Museum of Science and Technology (MOST): supporting the "Be the Scientist" program, which brings Syracuse-area public school students to the museum to learn about STEM careers, and sponsoring planetarium shows for area students.
  • The Good Life Foundation: a nonprofit serving youth in downtown Syracuse.
  • DeWitt Rotary Club.
Intel

Intel Enters Discrete GPU Market With Launch of Arc A-Series For Laptops (hothardware.com) 23

MojoKid writes: Today Intel finally launched its first major foray into discrete GPUs for gamers and creators. Dubbed Intel Arc A-Series and comprising five chips built on two Arc Alchemist SoCs, the lineup's entry-level Arc 3 Graphics is shipping now, with laptop OEMs delivering new all-Intel products shortly. The two SoCs set the foundation across three performance tiers: Arc 3, Arc 5, and Arc 7.

For example, Arc A370M arrives today with 8 Xe cores, 8 ray tracing units, 4GB of GDDR6 memory linked to a 64-bit memory bus, and a 1,550MHz graphics clock. Graphics power is rated at 35-50W. However, Arc A770M, Intel's highest-end mobile GPU, will come with 32 Xe cores, 32 ray tracing units, 16GB of GDDR6 memory over a 256-bit interface, and a 1,650MHz graphics clock. Doing the math, Arc A770M could be up to 4X more powerful than Arc A370M. In terms of performance, Intel showcased benchmarks from a laptop outfitted with a Core i7-12700H processor and Arc A370M GPU that can top the 60 FPS threshold at 1080p in many games where integrated graphics would come up far short. Examples included Doom Eternal (63 fps) at high quality settings, and Hitman 3 (62 fps) and Destiny 2 (66 fps) at medium settings. Intel is also showcasing new innovations for content creators, with its Deep Link, Hyper Encode and AV1 video compression support offering big gains in video upscaling, encoding and streaming. Finally, Intel Arc Control software will offer unique features like Smooth Sync, which blends tearing artifacts when V-Sync is turned off, as well as Creator Studio with background blur, frame tracking and broadcast features for direct game streaming services support.
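The "doing the math" comparison can be sketched from the quoted specs (a rough peak-throughput scaling only; real-world performance depends on drivers, thermals, and memory bandwidth):

```python
# Spec figures quoted in the summary above; actual performance will differ.
a370m = {"xe_cores": 8,  "clock_mhz": 1550, "bus_bits": 64}
a770m = {"xe_cores": 32, "clock_mhz": 1650, "bus_bits": 256}

# Peak compute scales roughly with core count times clock speed.
compute_ratio = (a770m["xe_cores"] * a770m["clock_mhz"]) / \
                (a370m["xe_cores"] * a370m["clock_mhz"])
# Memory bandwidth scales with bus width (same GDDR6 data rate assumed).
bandwidth_ratio = a770m["bus_bits"] / a370m["bus_bits"]

print(f"compute: {compute_ratio:.2f}x, memory bus: {bandwidth_ratio:.0f}x")
```

The 4X figure in the summary comes from the core-count ratio alone; the slightly higher clock nudges the theoretical compute ratio a bit above that.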

Math

Children May Instinctively Know How To Do Division Even Before Hitting the Books, Study Finds (medicalxpress.com) 48

An anonymous reader shares a report: We often think of multiplication and division as calculations that need to be taught in school. But a large body of research suggests that, even before children begin formal education, they possess intuitive arithmetic abilities. A new study published in Frontiers in Human Neuroscience argues that this ability to do approximate calculations even extends to that most dreaded basic math problem -- true division -- with implications for how students are taught mathematical concepts in the future. The foundation for the study is the approximate number system (ANS), a well-established theory that says people (and even nonhuman primates) from an early age have an intuitive ability to compare and estimate large sets of objects without relying upon language or symbols. For instance, under this non-symbolic system, a child can recognize that a group of 20 dots is bigger than a group of four dots, even when the four dots take up more space on a page. The ability to make finer approximations -- say, 20 dots versus 17 dots -- improves into adulthood.
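The ratio-dependent discrimination the ANS describes can be sketched with a tiny simulation (the 0.2 Weber fraction and the Gaussian-noise model are illustrative assumptions, not figures from the study):

```python
import random

def ans_estimate(n, weber=0.2):
    # Model the internal magnitude as noisy: Gaussian around n,
    # with spread proportional to n (scalar variability).
    return random.gauss(n, weber * n)

def discrimination_rate(larger, smaller, trials=10_000, weber=0.2):
    # Fraction of trials where the larger set is correctly judged larger.
    correct = sum(
        ans_estimate(larger, weber) > ans_estimate(smaller, weber)
        for _ in range(trials)
    )
    return correct / trials

# 20 vs 4 is easy (large ratio); 20 vs 17 is much harder (ratio near 1).
print(discrimination_rate(20, 4))
print(discrimination_rate(20, 17))
```

Lowering the Weber fraction, the developmental change the study describes from childhood into adulthood, pushes the 20-versus-17 accuracy toward 1.0.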
Bitcoin

Garry Kasparov: Crypto Means Freedom (coindesk.com) 111

CoinDesk: Garry Kasparov knows math. He knows logic, strategy and decision-making. Widely regarded as the greatest chess player in the history of mankind, the Russian grandmaster -- ranked No. 1 from 1984 to 2005 -- sees the world with a certain clarity. So it will delight many in the blockchain industry to learn that Kasparov, easily one of the smartest people alive, is now a champion of cryptocurrency. And it's partly because of math. Kasparov has spent his "retirement" opposing Russian President Vladimir Putin (a defiance that once got him tossed in jail), fighting for humanitarian causes and serving as chairman of the Human Rights Foundation (a nonprofit that strongly supports bitcoin as a freedom-giving tool). Now he views crypto as a way to check government power. Bitcoin offers protection against rampant government spending, says Kasparov, "because you're protected by math" -- by the logic of the code itself. Kasparov also sees merit in non-fungible tokens. [...]

CoinDesk: How'd you get into the crypto space?
Kasparov: If you followed my career and read about my early interest in computers and technology, you should not be surprised that I was very excited when I recognized the value of cryptocurrencies and NFTs. This goes all the way back to the '80s; I always tried to be at the cutting edge. It started with chess. But I also saw an opportunity to use computers and new tools to advance individual freedoms. It's my belief that technology should help people fight back against the power of the state.

How do cryptocurrencies fit into that?

Cryptocurrencies become an inseparable part of our progress, because the whole world is moving digital. And if the economy becomes more digital, so does the money. Another philosophical reason is that ... governments [have] unlimited opportunities to print money. And printing money is the most exquisite form of borrowing from us and from future generations. And I believe that cryptocurrencies -- with bitcoin as a standard -- offer a protection against this onslaught of the government, because you're protected by math. You're protected by the limited number of any code behind the respective currency. Cryptocurrencies, and all the products related to cryptocurrencies, are absolutely vital for the future development of our world.
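Kasparov's point about being "protected by math" can be made concrete with a short sketch of Bitcoin's issuance schedule (a simplification of the consensus rules, which the real node software enforces block by block):

```python
# Bitcoin's hard cap emerges from its halving schedule: the block
# subsidy starts at 50 BTC and halves every 210,000 blocks, with
# sub-satoshi remainders rounded down until the subsidy hits zero.
subsidy = 50 * 100_000_000   # initial subsidy, in satoshis
total = 0
while subsidy > 0:
    total += 210_000 * subsidy
    subsidy //= 2            # halving, truncated to whole satoshis

print(total / 100_000_000)   # just under 21,000,000 BTC
```

No authority can extend the loop: changing the schedule would require convincing the network to adopt different rules, which is the sense in which the supply is protected by math rather than by policy.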

Space

Jeff Bezos Plans to Travel to Space on Blue Origin Flight (bloomberg.com) 131

Jeff Bezos will go to space next month when his company, Blue Origin, launches its first passenger-carrying mission. From a report: The 57-year-old, who plans to travel alongside his brother, Mark, made the announcement in an Instagram post Monday. The scheduled launch next month will be about two weeks after the billionaire plans to step down as chief executive officer of Amazon.com. "Ever since I was five years old, I've dreamed of traveling to space," Bezos said in the post. "On July 20th, I will take that journey with my brother. The greatest adventure, with my best friend."

Blue Origin is one of several high-profile space-tourism companies backed by a wealthy entrepreneur, alongside Elon Musk's Space Exploration Technologies and Richard Branson-backed Virgin Galactic Holdings. Both of those companies are making plans to carry paying customers. Blue Origin is auctioning off a seat on its New Shepard rocket for the July 20 flight, an 11-minute trip to suborbital space that will reach an altitude of about 100 kilometers (62 miles). The spot will be the only one available for purchase on the flight, and the proceeds will go to a Blue Origin foundation that promotes math and science education.
