Intel

Intel's Make-Or-Break 18A Process Node Debuts For Data Center With 288-Core Xeon 6+ CPU (tomshardware.com) 40

Intel has formally unveiled its Xeon 6+ "Clearwater Forest" data-center processor with up to 288 cores, built on the company's new Intel 18A process and using Foveros Direct packaging. The chip targets telecom, cloud, and edge-AI workloads with massive parallelism, large caches, and high-bandwidth DDR5-8000 memory. Tom's Hardware reports: Intel's Xeon 6+ processors with up to 288 cores combine 12 compute chiplets, each containing 24 energy-efficient Darkmont cores produced using 18A manufacturing technology; two I/O tiles made on the Intel 7 production node; and three active base tiles made on the Intel 3 fabrication process. The compute tiles are stacked on top of the base dies using Intel's Foveros Direct 3D technology, whereas lateral connections are enabled by Intel's EMIB bridges.

Intel's 'Darkmont' efficiency cores have received rather meaningful microarchitectural upgrades. Each core integrates a 64 KB L1 instruction cache, a broader fetch and decode pipeline, and a deeper out-of-order engine capable of tracking more in-flight operations. The number of execution ports has also been increased in a bid to improve both scalar and vector throughput under heavily threaded server workloads.

From a cache hierarchy standpoint, the design groups cores into four-core blocks that share approximately 4 MB of L2 cache per block. As a result, the aggregate last-level cache across the full package surpasses 1 GB, roughly 1,152 MB in total. This unusually large pool is intended to keep data close to hundreds of active cores and reduce dependence on external memory bandwidth, which in turn is meant to both increase performance and lower power consumption. Platform-wise, the processor remains drop-in compatible with the current Xeon server socket: the CPU offers 12 memory channels supporting DDR5-8000 and 96 PCIe 5.0 lanes, 64 of which support CXL 2.0.
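As a sanity check, the tile and L2 arithmetic implied by the figures above can be worked through directly. Note that the ~1,152 MB figure is the aggregate last-level cache across the package, a separate pool from the per-block L2 totals computed here; this sketch only uses the numbers reported above:

```python
# Clearwater Forest core/cache arithmetic, using the figures reported above.
compute_tiles = 12
cores_per_tile = 24
total_cores = compute_tiles * cores_per_tile      # 12 x 24 = 288 cores

cores_per_l2_block = 4
l2_per_block_mb = 4                               # ~4 MB shared per four-core block
l2_blocks = total_cores // cores_per_l2_block     # 288 / 4 = 72 blocks
total_l2_mb = l2_blocks * l2_per_block_mb         # 72 x 4 MB = 288 MB of L2

assert total_cores == 288
assert total_l2_mb == 288
```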

Firefox

Firefox 130 Now Available With WebCodecs API, Third-Party AI Chatbots 55

Firefox 130 introduces several enhancements, including improved local translation handling, better Android page load performance, and the WebCodecs API for low-level audio/video processing on desktop platforms. Notably, it also supports third-party AI chatbots like ChatGPT and Google Gemini via the new Firefox Labs feature. Phoronix reports: The WebCodecs API is particularly useful for web-based apps like video/audio editors and video conferencing that may want control over individual frames of a video stream or audio chunks. For any web software interested in that low-level audio/video encode/decode handling there is now WebCodecs API working on the Firefox desktop builds. As for the third-party AI chatbots, here's what Mozilla's Ian Carmichael said back in June: "If you want to use AI, we think you should have the freedom to use (or not use) the tools that best suit your needs. Instead of juggling between tabs or apps for assistance, those who opt-in will have the option to access their preferred AI service from the Firefox sidebar to summarize information, simplify language, or test their knowledge, all without leaving their current web page."

You can learn more about Firefox 130 via developer.mozilla.org. Binaries for Linux can be found at Mozilla.org.
Privacy

Colorado Bill Aims To Protect Consumer Brain Data (nytimes.com) 15

An anonymous reader quotes a report from the New York Times: Consumers have grown accustomed to the prospect that their personal data, such as email addresses, social contacts, browsing history and genetic ancestry, are being collected and often resold by the apps and the digital services they use. With the advent of consumer neurotechnologies, the data being collected is becoming ever more intimate. One headband serves as a personal meditation coach by monitoring the user's brain activity. Another purports to help treat anxiety and symptoms of depression. Another reads and interprets brain signals while the user scrolls through dating apps, presumably to provide better matches. ("'Listen to your heart' is not enough," the manufacturer says on its website.) The companies behind such technologies have access to the records of the users' brain activity -- the electrical signals underlying our thoughts, feelings and intentions.

On Wednesday, Governor Jared Polis of Colorado signed a bill that, for the first time in the United States, tries to ensure that such data remains truly private. The new law, which passed by a 61-to-1 vote in the Colorado House and a 34-to-0 vote in the Senate, expands the definition of "sensitive data" in the state's current personal privacy law to include biological and "neural data" generated by the brain, the spinal cord and the network of nerves that relays messages throughout the body. "Everything that we are is within our mind," said Jared Genser, general counsel and co-founder of the Neurorights Foundation, a science group that advocated the bill's passage. "What we think and feel, and the ability to decode that from the human brain, couldn't be any more intrusive or personal to us." "We are really excited to have an actual bill signed into law that will protect people's biological and neurological data," said Representative Cathy Kipp, Democrat of Colorado, who introduced the bill.

Medicine

AI Tool Decodes Brain Cancer's Genome During Surgery 4

An anonymous reader quotes a report from Harvard Medical School: Scientists have designed an AI tool that can rapidly decode a brain tumor's DNA to determine its molecular identity during surgery -- critical information that under the current approach can take a few days and up to a few weeks. Knowing a tumor's molecular type enables neurosurgeons to make decisions such as how much brain tissue to remove and whether to place tumor-killing drugs directly into the brain -- while the patient is still on the operating table. A report on the work, led by Harvard Medical School researchers, is published July 7 in the journal Med.

The tool, called CHARM (Cryosection Histopathology Assessment and Review Machine), is freely available to other researchers. It still has to be clinically validated through testing in real-world settings and cleared by the FDA before deployment in hospitals, the research team said. [...] CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations at 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.

Going a step further, the tool successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types. The tool was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these changes also signals different propensity for growth, spread, and treatment response. The tool further connected the appearance of the cells -- the shape of their nuclei, the presence of edema around the cells -- with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell's appearance relates to the molecular type of a tumor.
AI

A Brain Scanner Combined With an AI Language Model Can Provide a Glimpse Into Your Thoughts 23

An anonymous reader quotes a report from Scientific American: Functional magnetic resonance imaging (fMRI) captures coarse, colorful snapshots of the brain in action. While this specialized type of magnetic resonance imaging has transformed cognitive neuroscience, it isn't a mind-reading machine: neuroscientists can't look at a brain scan and tell what someone was seeing, hearing or thinking in the scanner. But gradually scientists are pushing against that fundamental barrier to translate internal experiences into words using brain imaging. This technology could help people who can't speak or otherwise outwardly communicate, such as those who have suffered strokes or are living with amyotrophic lateral sclerosis. Current brain-computer interfaces require the implantation of devices in the brain, but neuroscientists hope to use non-invasive techniques such as fMRI to decipher internal speech without the need for surgery.

Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy. "There's a lot more information in brain data than we initially thought," said Jerry Tang, a computational neuroscientist at the University of Texas at Austin and the study's lead author, during a press briefing. The research, published on Monday in Nature Communications, is what Tang describes as "a proof of concept that language can be decoded from noninvasive recordings of brain activity."

The decoder technology is in its infancy. It must be trained extensively for each person who uses it, and it doesn't construct an exact transcript of the words they heard or imagined. But it is still a notable advance. Researchers now know that the AI language system, an early relative of the model behind ChatGPT, can help make informed guesses about the words that evoked brain activity just by looking at fMRI brain scans. While current technological limitations prevent the decoder from being widely used, for good or ill, the authors emphasize the need to enact proactive policies that protect the privacy of one's internal mental processes. [...] The model misses a lot about the stories it decodes. It struggles with grammatical features such as pronouns. It can't decipher proper nouns such as names and places, and sometimes it just gets things wrong altogether. But it achieves a high level of accuracy compared with past methods: between 72 and 82 percent of the time, the decoder was more accurate at decoding the stories' meaning than would be expected from random chance.
Here's an example of what one study participant heard, as transcribed in the paper: "i got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness." The model went on to decode: "i just continued to walk up to the window and open the glass i stood on my toes and peered out i didn't see anything and looked up again i saw nothing."

The research was published in the journal Nature Communications.
Encryption

'Cryptography's Future Will Be Quantum-Safe. Here's How' (quantamagazine.org) 17

Fearing the possibility of encryption-cracking quantum computers, Quanta magazine reports that researchers are scrambling to produce new "post-quantum" encryption schemes. Earlier this year, the National Institute of Standards and Technology revealed four finalists in its search for a post-quantum cryptography standard. Three of them use "lattice cryptography" — a scheme inspired by lattices, regular arrangements of dots in space.

Lattice cryptography and other post-quantum possibilities differ from current standards in crucial ways. But they all rely on mathematical asymmetry. The security of many current cryptography systems is based on multiplication and factoring: Any computer can quickly multiply two numbers, but it could take centuries to factor a cryptographically large number into its prime constituents. That asymmetry makes secrets easy to encode but hard to decode.... A quirk of factoring makes it vulnerable to attack by quantum computers.... Originally developed in the 1990s, [lattice cryptography] relies on the difficulty of reverse-engineering sums of points...
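The multiply/factor asymmetry is easy to demonstrate at toy scale. Real RSA moduli are hundreds of digits long, far beyond the reach of trial division; the primes and the naive factoring routine below are purely illustrative:

```python
def trial_factor(n):
    """Naive trial division: workable for toy numbers, hopeless for real RSA moduli."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d   # smallest prime factor found
        d += 1
    return n, 1                # n is prime

p, q = 1_000_003, 1_000_033    # two small primes (toy-scale)
n = p * q                      # multiplication: one fast operation
# Recovering p and q takes on the order of a million divisions even at this
# tiny size; the work grows rapidly as the primes get longer.
assert trial_factor(n) == (p, q)
```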

Of course, it's always possible that someone will find a fatal flaw in lattice cryptography... Cryptography works until it's cracked. Indeed, earlier this summer one promising post-quantum cryptography scheme was cracked using not a quantum computer, but an ordinary laptop.

At a recent panel discussion on post-quantum cryptography, Adi Shamir (the S in RSA), expressed concern that NIST's proposed solutions are predominantly based on lattice cryptography. "In some sense, we are putting all eggs in the same basket, but that is the best we have....

"The best advice for young researchers is to stay away from lattice-based post-quantum crypto," Shamir added. "What we really lack are entirely different ideas which will turn out to be secure. So any great idea for a new basis for public-key cryptography which is not using lattices will be greatly appreciated."
Facebook

Facebook Bans Holocaust Denial On its Platform (axios.com) 229

Facebook CEO Mark Zuckerberg announced Monday that the tech giant would be expanding its hate speech policies to ban any content that "denies or distorts the Holocaust." From a report: Zuckerberg was caught flat-footed in a 2018 interview with Kara Swisher, then host of the Recode Decode podcast, when he said that he didn't believe Facebook should take down Holocaust denial content because "I think there are things that different people get wrong," even if unintentionally. Zuckerberg quickly clarified his statement at the time, emailing Swisher that "I personally find Holocaust denial deeply offensive, and I absolutely didn't intend to defend the intent of people who deny that." "Our goal with fake news is not to prevent anyone from saying something untrue -- but to stop fake news and misinformation spreading across our services." Starting today, if people search for the Holocaust on Facebook, the company will start directing them to authoritative sources to get accurate information. In a blog post explaining the policy, Facebook's VP of content policy Monika Bickert says, "Enforcement of these policies cannot happen overnight. There is a range of content that can violate these policies, and it will take some time to train our reviewers and systems on enforcement," she writes. "We are grateful to many partners for their input and candor as we work to keep our platform safe." "I've struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust. My own thinking has evolved as I've seen data showing an increase in anti-Semitic violence, as have our wider policies on hate speech. Drawing the right lines between what is and isn't acceptable speech isn't straightforward, but with the current state of the world, I believe this is the right balance," Zuckerberg wrote today.
OS X

The Behind-the-Scenes Changes Found In MacOS High Sierra (arstechnica.com) 205

Apple officially announced macOS High Sierra at WWDC 2017 earlier this month. While the new OS doesn't feature a ton of user-visible improvements and is ultimately shaping up to be a low-key release, it does feature several behind-the-scenes changes that could help make it the most stable macOS update in years. Andrew Cunningham from Ars Technica has "browsed the dev docs and talked with Apple to get some more details of the update's foundational changes." Here are some excerpts from three key areas of the report: APFS
Like iOS 10.3, High Sierra will convert your boot drive to APFS when you first install it -- this will be true for all Macs that run High Sierra, regardless of whether they're equipped with an SSD, a spinning HDD, or a Fusion Drive setup. In the current beta installer, you're given an option to uncheck the APFS box (checked by default) before you start the install process, though that doesn't necessarily guarantee that it will survive in the final version. It's also not clear at this point if there are edge cases -- third-party SSDs, for instance -- that won't automatically be converted. But assuming that most people stick with the defaults and that most people don't crack their Macs open, most Mac users who do the upgrade are going to get the new filesystem.

HEVC and HEIF
All High Sierra Macs will pick up support for HEVC, but only very recent models will support any kind of hardware acceleration. This is important because playing HEVC streams, especially at high resolutions and bitrates, is a pretty hardware-intensive operation. HEVC playback can consume most of a CPU's processor cycles, and especially on slower dual-core laptop processors, smooth playback may be impossible altogether. Dedicated HEVC encode and decode blocks in CPUs and GPUs can handle the heavy lifting more efficiently, freeing up your CPU and greatly reducing power consumption, but HEVC's newness means that dedicated hardware isn't especially prevalent yet.

Metal 2
While both macOS and iOS still nominally support open, third-party APIs like OpenGL and OpenCL, it's clear that the company sees Metal as the way forward for graphics and GPU compute on its platforms. Apple's OpenGL support in macOS and iOS hasn't changed at all in years, and there are absolutely no signs that Apple plans to support Vulkan. But the API will enable some improvements for end users, too. People with newer GPUs should expect to benefit from some performance improvements, not just in games but in macOS itself; Apple says the entire WindowServer is now using Metal, which should improve the fluidity and consistency of transitions and animations within macOS; this can be a problem on Macs when you're pushing multiple monitors or using higher Retina scaling modes, especially if you're using integrated graphics. Metal 2 is also the go-to API for supporting VR on macOS, something Apple is pushing in a big way with its newer iMacs and its native support for external Thunderbolt 3 GPU enclosures. Apple says that every device that supports Metal should support at least some of Metal 2's new features, but the implication there is that some older GPUs won't be able to do everything the newer ones can do.

Facebook

Neuroscientists Offer a Reality Check On Facebook's 'Typing By Brain' Project (ieee.org) 58

the_newsbeagle writes: Yesterday, Facebook announced that it's working on a "typing by brain" project, promising a non-invasive technology that can decode signals from the brain's speech center and translate them directly to text (see the video beginning at 1:18:00). What's more, Facebook exec Regina Dugan said, the technology will achieve a typing rate of 100 words per minute. Here, a few neuroscientists are asked: Is such a thing remotely feasible? One neuroscientist points out that his team set the current speed record for brain-typing earlier this year: They enabled a paralyzed man to type 8 words per minute, and that was using an invasive brain implant that could get high-fidelity signals from neurons. To date, all non-invasive methods that read brain signals through the scalp and skull have performed much worse. Thomas Naselaris, an assistant professor at the Medical University of South Carolina, says, "Our understanding of the way the words and their phonological and semantic attributes are encoded in brain activity is actually pretty good currently, but much of this understanding has been enabled by fMRI, which is noninvasive but very slow and not at all portable," he said. "So I think that the bottleneck will be the [optical] imaging technology," which is what Facebook's gear will be using.
Earth

Scientists Successfully Decode the Genome of Quinoa (bbc.com) 292

Gr8Apes writes: Scientists have successfully decoded the genome of quinoa, a hugely popular "super-food" because it is well balanced and gluten-free. They have pinpointed one of the genes that they believe control the production of saponins (bitter toxic compounds that protect the plant from predators) which can facilitate the breeding of plants without saponins, resulting in sweeter seeds without having to process them. The scientists also believe that the genetic understanding now gained will allow them to breed shorter, stockier plants that don't fall over as easily, and that these benefits could be gained without the use of genetic modification. Furthermore, the researchers believe the genetic code will rapidly lead to more productive varieties that will push down costs. "We need the price of quinoa to go down by a factor of five," said project leader Professor Mark Tester, from King Abdullah University of Science and Technology. "If we get to a similar price to wheat it can be used in processing and in bread making and in many other foods and products. It has the chance to truly add to current world food production." The study has been published in the journal Nature.
Firefox

Firefox 33 Integrates Cisco's OpenH264 194

NotInHere (3654617) writes: As promised, version 33 of the Firefox browser will fetch the OpenH264 module from Cisco, which enables Firefox to decode and encode H.264 video, both for the <video> tag and for WebRTC, where a codec war has been fought over this matter. The module won't be a traditional NPAPI plugin, but a so-called Gecko Media Plugin (GMP), Mozilla's answer to the disliked Pepper API. Firefox had no cross-platform support for H.264 before. Note that only the particular copy of the implementation built and blessed by Cisco is licensed to use the H.264 patents.
Medicine

Personal DNA Sequencing Machine One Step Closer 65

oxide7 writes "A new, low-cost semiconductor-based gene sequencing machine has been developed and may unlock the door to advanced medicines and life itself. A team led by Jonathan Rothberg of Ion Torrent in Guilford, Conn., is working on a system that uses semiconductors to decode DNA, dramatically reducing costs and taking them closer to the goal of a $1,000 human genome test. The current optical-based system costs around $49,000, is already on the market, and is being used in over 40 countries."
Censorship

Collage, and the Challenge of "Deniability" 94

Slashdot regular Bennett Haselton has written a piece on a new program called Collage that can circumvent censorship by embedding messages in user-generated content on sites like Flickr. The program demonstrates that a long-standing theoretical concept can be reduced to practice, but Bennett wonders if anybody would actually need it, as long as they can exchange encrypted messages over Gmail and AIM. He begins "In a presentation delivered at USENIX, Georgia Tech grad student Sam Burnett and his colleagues described how their new program, "Collage," could circumvent Internet censorship by embedding messages in user-generated content on sites like Flickr. The short version is that a publisher uses the Collage system to break a message into pieces that are small enough to embed into a photograph using standard steganography, the photos are published according to some protocol (e.g. "all photos in the photostream of user xyz" or "all photos tagged with the 'xyz' tag"), and receivers who know the protocol for identifying the photos can retrieve them and decode the message. According to the authors' paper, the system is general enough that it could be adapted to almost any site where user-generated content is published. (All of this can be done by hand using existing tools, but Collage automates the process to hide the individual steps from the user.)"
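The "standard steganography" step can be illustrated with a minimal least-significant-bit sketch over raw pixel values. Collage's actual encoding, chunking, and photo-selection protocol are considerably more sophisticated; everything below, including the helper names, is illustrative only:

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small for message"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Recover `length` bytes from the pixel LSBs (receiver knows the length)."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

cover = list(range(256)) * 4            # stand-in for grayscale pixel data
secret = b"meet at noon"
stego = embed(cover, secret)            # visually near-identical to the cover
assert extract(stego, len(secret)) == secret
```

Flipping only the lowest bit changes each pixel value by at most 1, which is why the altered photo is visually indistinguishable from the original, and why the message survives as long as the image isn't recompressed lossily.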
Earth

New Hope For Predicting Earthquakes 27

Kristina writes "Interviews with several geophysicists reveal that new data and new understandings about how earthquakes really happen inspire some hope in pursuing the short-term prediction of earthquakes. 'Much of the current work aims to decode how stress is distributed and redistributed far below the surface and among more than one fault in an area. Understanding that pattern could help scientists recognize when stress is setting the stage for a large quake.' This article goes into the latest ideas on what we know and don't know about when large earthquakes happen, and it talks with two Italian scientists about the large quake that hit central Italy in April."

iPod May Not Have The Horsepower For Ogg [updated] 399

An anonymous reader writes "Gizmodo has an interview with a Rio engineer who speculates that current iPods may not have enough CPU power and/or memory to decode Ogg. He concludes that the Minis might be able to do it, and the next generation iPods will certainly be able to. Of course, just because Apple can doesn't mean it will." Update: 06/06 04:44 GMT by T : csm writes with this rebuttal: "According to Monty from Xiph.org (author of the Tremor codec and OGG itself), it should very well be possible to run Ogg on older generation iPods."

Ogg Vorbis - The Free Alternative To MP3 315

The fight to keep standards Open and Free is raging in the audio compression business. With mp3 tearing up bandwidth and the court system, Christopher Montgomery and the rest of the Ogg Vorbis team are working hard to ensure that the mp3 format has a Free alternative in their system, which seems to outperform mp3 everywhere it counts. I got the opportunity to pull Chris away from development just long enough to tell us exactly what's going on, and to answer some questions about the process and the product necessary to take on mp3.
Programming

Jeremy Allison Answers Samba Questions 98

Monday you asked Samba-meister Jeremy Allison a bunch of questions. He has answered the 10 highest-moderated ones in the finest lounge-lizard style imaginable (below).
