XBox (Games)

Microsoft Unveils Full Xbox Series X Specs 77

Microsoft has provided detailed tech specs for its forthcoming Xbox Series X gaming console, reader Dave Knott shares. Full system specs are as follows:
CPU: 8x Cores @ 3.8 GHz (3.6 GHz w/ SMT) Custom Zen 2 CPU
GPU: 12 TFLOPS, 52 CUs @ 1.825 GHz Custom RDNA 2 GPU
Die Size: 360.45 mm2
Process: 7nm Enhanced
Memory: 16 GB GDDR6 w/ 320-bit bus
Memory Bandwidth: 10GB @ 560 GB/s, 6GB @ 336 GB/s
Internal Storage: 1 TB Custom NVME SSD
I/O Throughput: 2.4 GB/s (Raw), 4.8 GB/s (Compressed, with custom hardware decompression block)
Expandable Storage: 1 TB Expansion Card (matches internal storage exactly)
External Storage: USB 3.2 External HDD Support
Optical Drive: 4K UHD Blu-Ray Drive
Performance Target: 4K @ 60 FPS, Up to 120 FPS.
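The asymmetric memory bandwidth figures fall straight out of the bus width and the GDDR6 signaling rate; a quick sanity check (assuming 14 Gbps per pin, which is what the published numbers imply):

```python
def gddr6_bandwidth_gbs(bus_width_bits, data_rate_gbps=14):
    """Peak bandwidth in GB/s = (bus width in bits / 8 bytes) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps

# The full 320-bit bus serves the 10 GB "GPU-optimal" pool.
print(gddr6_bandwidth_gbs(320))  # 560.0
# The remaining 6 GB effectively sits behind a 192-bit slice of the bus.
print(gddr6_bandwidth_gbs(192))  # 336.0
```

The 336 GB/s figure for the slower pool is thus not a different memory type, just a narrower slice of the same bus.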
Digital Foundry visited Microsoft and provides a deep dive article detailing their hands-on experience with the new hardware.
Security

AMD Processors From 2011 To 2019 Vulnerable To Two New Attacks (zdnet.com) 71

An anonymous reader quotes a report from ZDNet: AMD processors manufactured between 2011 and 2019 (the time of testing) are vulnerable to two new attacks, research published this week has revealed (PDF). The two new attacks impact the security of the data processed inside the CPU and allow the theft of sensitive information or the downgrade of security features. The research team said it notified AMD of the two issues in August 2019; however, the company has not released microcode (CPU firmware) updates, claiming these "are not new speculation-based attacks," a statement that the research team disagrees with.

The two new attacks target a feature of AMD CPUs known as the L1D cache way predictor. Introduced in AMD processors in 2011 with the Bulldozer microarchitecture, the L1D cache way predictor is a performance-centric feature that reduces power consumption by improving the way the CPU handles cached data inside its memory. A high-level explanation is available below: "The predictor computes a uTag using an undocumented hash function on the virtual address. This uTag is used to look up the L1D cache way in a prediction table. Hence, the CPU has to compare the cache tag in only one way instead of all possible ways, reducing the power consumption." The two new attacks were discovered after a team of six academics [...] reverse-engineered this "undocumented hashing function" that AMD processors were using to handle uTag entries inside the L1D cache way predictor mechanism. Knowing these functions allowed the researchers to recreate a map of what was going on inside the L1D cache way predictor and probe if the mechanism was leaking data or clues about what that data may be.
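The collision idea at the heart of the attack can be shown with a toy model. The hash below is a stand-in (AMD's real uTag function was undocumented and had to be reverse-engineered by the researchers); the point is that two virtual addresses whose uTags collide evict each other's prediction, and that eviction is observable through timing:

```python
def utag(virtual_addr):
    # Illustrative placeholder hash, NOT the reverse-engineered AMD function.
    return (virtual_addr >> 6) & 0xFF

class WayPredictor:
    def __init__(self):
        self.table = {}  # uTag -> predicted cache way

    def access(self, vaddr, way):
        """Record which way vaddr's line occupies. When two addresses share
        a uTag, each access evicts the other's prediction; the resulting
        extra lookup latency is the signal a Collide+Probe-style attacker measures."""
        tag = utag(vaddr)
        collided = tag in self.table and self.table[tag] != way
        self.table[tag] = way
        return collided

wp = WayPredictor()
wp.access(0x1000, way=0)         # victim touches a secret-dependent line
print(wp.access(0x5000, way=1))  # attacker's colliding address: True
```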
While the attacks can be patched, AMD denies that these two new attacks are a concern, claiming they "are not new speculation-based attacks" and that they should be mitigated through previous patches for speculative execution side channel vulnerabilities.

The research team says AMD's response is "rather misleading," and that the attacks still work on fully-updated operating systems, firmware, and software even today.
AMD

AMD Discloses RDNA 2 Graphics, CDNA Data Center GPUs, Next-Gen Zen (hothardware.com) 20

"At its Financial Analyst Day, AMD disclosed additional details regarding its next generation CPU and GPU architectures," writes Slashdot reader MojoKid: - While the company isn't ready to disclose concrete details yet, AMD is promising that Navi 2X will arrive this year and deliver "enthusiast-class" performance, excellent power efficiency, and "top-of-stack" GPUs with "uncompromising 4K gaming".

- While AMD has RDNA for its gaming-centric consumer GPUs, the company is shifting to a new GPU compute architecture, dubbed CDNA, for its High-Performance Computing (HPC) and Machine Learning (ML) accelerators. CDNA has been designed from the ground-up for ML/HPC applications, and will leverage what AMD calls its second-generation Infinity Architecture interconnects.

- AMD also reiterated that its first Zen 3-based products will be rolling out later this year. On the server side, this means EPYC 7003 "Milan" processors, but the company also revealed that Zen 3 client processors would arrive by the end of 2020.

Intel

Intel CSME Bug Worse Than Previously Thought (zdnet.com) 68

Security researchers say that a bug in one of Intel's CPU technologies that was patched last year is actually much worse than previously thought. From a report: "Most Intel chipsets released in the last five years contain the vulnerability in question," said Positive Technologies in a report published today. Attacks are impossible to detect, and a firmware patch only partially fixes the problem. To protect devices that handle sensitive operations, researchers recommend replacing CPUs with versions that are not impacted by this bug. Only the latest Intel 10th generation chips are not vulnerable, researchers said. The actual vulnerability is tracked as CVE-2019-0090, and it impacts the Intel Converged Security and Management Engine (CSME), formerly called the Intel Management Engine BIOS Extension (Intel MEBx).
Sci-Fi

SETI@Home Search For Alien Life Project Shuts Down After 21 Years (bleepingcomputer.com) 85

An anonymous reader quotes a report from Bleeping Computer: SETI@home has announced that they will no longer be distributing new work to clients starting on March 31st as they have enough data and want to focus on completing their back-end analysis of the data. SETI@home is a distributed computing project where volunteers contribute their CPU resources to analyze radio data from the Arecibo radio telescope in Puerto Rico and the Green Bank Telescope in West Virginia for signs of extraterrestrial intelligence (SETI).

Run by the Berkeley SETI Research Center since 1999, SETI@home has been a popular project where people from all over the world have been donating their CPU resources to process small chunks of data, or "jobs," for interesting radio transmissions or anomalies. This data is then sent back to the researchers for analysis. In an announcement posted yesterday, the project stated that they will no longer send data to SETI@home clients starting on March 31st, 2020 as they have reached a "point of diminishing returns" and have analyzed all the data that they need for now. Instead, they want to focus on analyzing the back-end results in order to publish a scientific paper.
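The distribute-analyze-aggregate loop described above can be sketched in a few lines; the function names and the spike-detection "analysis" are purely illustrative, not the project's actual signal processing:

```python
# Server side: split recorded radio data into small "jobs" for volunteers.
def split_into_jobs(samples, job_size):
    return [samples[i:i + job_size] for i in range(0, len(samples), job_size)]

# Client side: each volunteer machine processes its job independently,
# here by flagging samples that spike above a noise threshold.
def analyze_job(job, threshold=3.0):
    return [s for s in job if s > threshold]

signal = [0.1, 0.4, 5.2, 0.3, 0.2, 4.8, 0.1, 0.5]
jobs = split_into_jobs(signal, job_size=3)
# Results are sent back and merged for the researchers' analysis.
candidates = [hit for job in jobs for hit in analyze_job(job)]
print(candidates)  # [5.2, 4.8]
```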
SETI@Home has a list of BOINC projects on their website for those interested in donating their CPU resources.
Intel

Chasing AMD, Intel Promises Full Memory Encryption in Upcoming CPUs (arstechnica.com) 53

"Intel's security plans sound a lot like 'we're going to catch up to AMD,'" argues FOSS advocate and "mercenary sysadmin" Jim Salter at Ars Technica, citing a "present-and-future" presentation by Anil Rao and Scott Woodgate at Intel's Security Day that promised a future with Full Memory Encryption but began with Intel SGX (launched with the Skylake microarchitecture in 2015).

Salter describes SGX as "one of the first hardware encryption technologies designed to protect areas of memory from unauthorized users, up to and including the system administrators themselves." SGX is a set of x86_64 CPU instructions which allows a process to create an "enclave" within memory which is hardware encrypted. Data stored in the encrypted enclave is only decrypted within the CPU -- and even then, it is only decrypted at the request of instructions executed from within the enclave itself. As a result, even someone with root (system administrator) access to the running system can't usefully read or alter SGX-protected enclaves. This is intended to allow confidential, high-stakes data processing to be safely possible on shared systems -- such as cloud VM hosts. Enabling this kind of workload to move out of locally owned-and-operated data centers and into massive-scale public clouds allows for less expensive operation as well as potentially better uptime, scalability, and even lower power consumption.
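The enclave model can be illustrated with a toy simulation. This is a conceptual sketch only: real SGX encrypts memory in hardware inside the CPU, the key never exists in software at all, and the toy keystream below is nothing like the actual memory encryption engine.

```python
import hashlib

def _keystream(key, n):
    """Toy counter-mode keystream built from SHA-256 (illustration only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

class ToyEnclave:
    def __init__(self, key, secret):
        self._key = key  # in real SGX this is fused into the CPU, not held in software
        self._sealed = bytes(a ^ b for a, b in zip(secret, _keystream(key, len(secret))))

    def memory_view(self):
        """What even a root user sees when dumping enclave pages: ciphertext."""
        return self._sealed

    def ecall_process(self):
        """Only code running 'inside' the enclave gets the decrypted data."""
        return bytes(a ^ b for a, b in zip(self._sealed, _keystream(self._key, len(self._sealed))))

e = ToyEnclave(b"cpu-fused-key", b"credit-card-data")
print(e.memory_view() != b"credit-card-data")  # True: a RAM dump is useless
print(e.ecall_process())                       # b'credit-card-data'
```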

Intel's SGX has several problems. The first and most obvious is that it is proprietary and vendor-specific -- if you design an application to utilize SGX to protect its memory, that application will only run on Intel processors... Finally, there are potentially severe performance impacts to utilization of SGX. IBM's Danny Harnik tested SGX performance fairly extensively in 2017, and he found that many common workloads could easily see a throughput decrease of 20 to 50 percent when executed inside SGX enclaves. Harnik's testing wasn't 100 percent perfect, as he himself made clear -- in particular, in some cases his compiler seemed to produce less-optimized code with SGX than it had without. Even if one decides to handwave those cases as "probably fixable," they serve to highlight an earlier complaint -- the need to carefully develop applications specifically for SGX use cases, not merely flip a hypothetical "yes, encrypt this please" switch....

After discussing real-world use of SGX, Rao moved on to future Intel technologies -- specifically, full-memory encryption. Intel refers to its version of full-memory encryption as TME (Total Memory Encryption) or MKTME (Multi-Key Total Memory Encryption). Unfortunately, those features are vaporware for the moment. Although Intel submitted an enormous Linux kernel patchset last May for enabling those features, there are still no real-world processors that offer them... This is probably a difficult time to give exciting presentations on Intel's security roadmap. Speculative prediction vulnerabilities have hurt Intel's processors considerably more than their competitors', and the company has been beaten significantly to market by faster, easier-to-use hardware memory encryption technologies as well. Rao and Woodgate put a brave face on things by talking up how SGX has been and is being used in Azure. But it seems apparent that the systemwide approach to memory encryption already implemented in AMD's Epyc CPUs -- and even in some of their desktop line -- will have a far greater lasting impact.

Intel's slides about their own upcoming full memory encryption are labeled "innovations," but they look a lot more like catching up to their already-established competition.

Desktops (Apple)

Apple To Release First ARM Mac Without Intel Processor in Next 18 Months, Predicts Kuo (9to5mac.com) 141

Ming-Chi Kuo is out with a new analyst note today, and the most interesting part of his forecast is that Apple will release its first Mac with an ARM processor in the first half of 2021. From a report: Kuo is predicting that one of Apple's new products to be released within the next 12-18 months will be a Mac with an in-house processor, instead of using an Intel CPU. There have been growing reports over the last couple of years about Apple making the switch to a custom-designed ARM processor for its Macs, and today's report gives a concrete timeframe for when to expect that launch, one consistent with Kuo's original prediction back in 2018. Since the coronavirus outbreak, Kuo highlights that Apple has been "more aggressive" with its funding for research, development, and production of 5nm process chips that are expected to show up in the first Macs with ARM CPUs. That's because 5nm chips will be integral to the iPhone and iPad later this year, as well as Macs come 2021.
Intel

Intel Debuts 5G Server and Base Station Chips, Plus a PC Network Card (venturebeat.com) 8

Intel's sale of its consumer 5G modem unit signaled its exit from the smartphone business last year, but the company remains heavily committed to participating in the growing 5G marketplace -- primarily on the carrier and enterprise sides. Today, the company announced three chips built for various types of 5G computers, plus a 5G-optimized network adapter for PCs. From a report: Up first is an updated second-generation Xeon Scalable processor, now at a top speed of 3.9GHz and bolstered by additional AI capabilities to aid with inference applications. The new chip promises up to 36% more performance than the first-generation version, with up to 42% more performance per dollar, though early second-generation chips were introduced in April 2019. Intel says the Xeon Scalable is the "only CPU with AI built in" -- a pitch that's not exactly accurate, given the range of existing laptop and mobile CPUs with AI features, but one Intel further explains means "the only CPU on the market that features integrated deep learning acceleration." Xeon Scalable's Deep Learning Boost feature set promises up to 6 times more AI performance than AMD's Rome processors, though Intel won't quantify the number of TOPS available for AI processing, calling the metric "theoretical." Regardless, Intel says Xeon Scalable will support the cloud AI needs of Alibaba, AWS, Baidu, Microsoft, and Tencent, as well as other major companies.

Network-optimized "N-SKUs" of the new Xeon Scalable will also be available, offering up to 58% more performance for network function virtualization workloads compared with the first chip. Customers such as China Mobile, SK Telecom, Sprint, and T-Mobile Poland are all using Xeon Scalable in their 5G networks. The boosted Xeon Scalable chips are officially available starting today. Intel is also introducing the Atom P5900, billed as the first Intel architecture SoC for base stations and designed from the ground up for radio access network (RAN) needs. It's a 10-nanometer chip with hardware-based network acceleration features, including integrated packet processing, ultra low latency, and a switch for inline cryptographic acceleration.

XBox (Games)

Microsoft Reveals More Xbox Series X Specs (polygon.com) 49

Microsoft revealed new details on its next-generation console on Monday morning, confirming specifications on what the company calls its "superior balance of power and speed" for its new hardware. From a report: The next-gen Xbox, Microsoft said, will be four times as powerful as the original Xbox One. The Xbox Series X "next-generation custom processor" will employ AMD's Zen 2 and RDNA 2 architecture, head of Xbox Phil Spencer wrote on the Xbox website. "Delivering four times the processing power of an Xbox One and enabling developers to leverage 12 [teraflops] of GPU (Graphics Processing Unit) performance -- twice that of an Xbox One X and more than eight times the original Xbox One," Spencer said. He called the next-generation Xbox's processing and graphics power "a true generational leap," offering higher frame rates -- with support for up to 120 fps -- and more sophisticated game worlds.

That 12 teraflops claim is twice what Microsoft promised with the Xbox One X (then known as Project Scorpio) when it revealed the mid-generation console update back in 2016. Spencer also outlined the Xbox Series X's variable rate shading, saying, "Rather than spending GPU cycles uniformly to every single pixel on the screen, they can prioritize individual effects on specific game characters or important environmental objects. This technique results in more stable frame rates and higher resolution, with no impact on the final image quality." He also promised hardware-accelerated DirectX ray tracing, with "true-to-life lighting, accurate reflections and realistic acoustics in real time." Microsoft also reconfirmed features like SSD storage, which promise faster loading times, as well as new ones, like Quick Resume, for Xbox Series X.
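The cycle-saving idea behind variable rate shading is easy to model: evaluate the expensive shader once per coarse block and reuse the result for every pixel in it. The shader and image sizes below are stand-ins; the point is the reduced invocation count for the same pixel coverage:

```python
def shade(x, y):
    return (x * 31 + y * 17) % 256  # stand-in for an expensive pixel shader

def render(width, height, rate):
    """rate=1 shades every pixel; rate=2 shades once per 2x2 block and
    broadcasts the result, as coarse variable rate shading does."""
    img, calls = [[0] * width for _ in range(height)], 0
    for y in range(0, height, rate):
        for x in range(0, width, rate):
            color = shade(x, y)
            calls += 1
            for dy in range(rate):
                for dx in range(rate):
                    if y + dy < height and x + dx < width:
                        img[y + dy][x + dx] = color
    return img, calls

_, full = render(8, 8, rate=1)    # every pixel shaded individually
_, coarse = render(8, 8, rate=2)  # same coverage, quarter the shader work
print(full, coarse)  # 64 16
```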

Hardware

Open Source CPU Architecture RISC-V Is Gaining Momentum (insidehpc.com) 41

The CEO of the RISC-V Foundation (a former IBM executive) touted the open-source CPU architecture at this year's HiPEAC conference, arguing there's "a growing demand for custom processors purpose-built to meet the power and performance requirements of specific applications..." As I've been travelling across the globe to promote the benefits of RISC-V at events and meet with our member companies, it's really struck me how the level of commitment to drive the mainstream adoption of RISC-V is like nothing I've seen before. It's exhilarating to witness our community collaborate across industries and geographies with the shared goal of accelerating the RISC-V ecosystem... With more than 420 organizations, individuals and universities that are members of the RISC-V Foundation, there is a really vibrant community collaborating together to drive the progression of ratified specs, compliance suites and other technical deliverables for the RISC-V ecosystem.

While RISC-V has a BSD open source license, designers are welcome to develop proprietary implementations for commercial use as they see fit. RISC-V offers a variety of commercial benefits, enabling companies to accelerate development time while also reducing strategic risk and overall costs. Thanks to these design and cost benefits, I'm confident that members will continue to actively contribute to the RISC-V ecosystem to not only drive innovation forward, but also benefit their bottom line... I don't have a favorite project, but rather I love the amazing spectrum that RISC-V is engaged in — from a wearable health monitor to scaled out cloud data centres, from universities in Pakistan to the University of Bologna in Italy or Barcelona Supercomputing Center in Spain, from design tools to foundries, from the most renowned global tech companies to entrepreneurs raising their first round of capital. Our community is broad, deep, growing and energized...

The RISC-V ecosystem is poised to significantly grow over the next five years. Semico Research predicts that the market will consume a total of 62.4 billion RISC-V central processing unit (CPU) cores by 2025! By that time I look forward to seeing many new types of RISC-V implementations including innovative consumer devices, industrial applications, high performance computing applications and much more... Unlike legacy instruction set architectures (ISAs) which are decades old and are not designed to handle the latest workloads, RISC-V has a variety of advantages including its openness, simplicity, clean-slate design, modularity, extensibility and stability. Thanks to these benefits, RISC-V is ushering in a new era of silicon design and processor innovation.

They also highlighted a major advantage. RISC-V "provides the flexibility to create thousands of possible custom processors. Since implementation is not defined at the ISA level, but rather by the composition of the system-on-chip and other design attributes, engineers can choose to go big, small, powerful or lightweight with their designs."
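That modularity is visible right down at the encoding level: every 32-bit RISC-V instruction carries its major opcode in bits [6:0], so a decoder can be built incrementally, extension by extension. A minimal sketch (the opcodes come from the ratified RV32I base; everything else here is illustrative):

```python
OPCODES = {  # a few RV32I major opcodes, instruction bits [6:0]
    0b0110011: "OP (register-register, e.g. ADD)",
    0b0010011: "OP-IMM (register-immediate, e.g. ADDI)",
    0b0000011: "LOAD",
    0b0100011: "STORE",
    0b1100011: "BRANCH",
}

def decode_major(insn):
    """Classify a 32-bit instruction word by its major opcode field."""
    return OPCODES.get(insn & 0x7F, "unknown/extension")

# "add x1, x2, x3" encodes as 0x003100B3
print(decode_major(0x003100B3))  # OP (register-register, e.g. ADD)
```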
AI

Adobe Photoshop Completes 30 Years, Launches New AI-Powered Features (thenextweb.com) 25

Adobe Photoshop, a synonym, and often a verb, for manipulated and edited images, has turned 30. After launching its app on iPad last year, the company said it's looking to expand its presence on more platforms in the near future. From a report: On this occasion, Adobe is bringing four AI-powered features to its desktop and iPad apps. On a call earlier this week, executives said they were looking to introduce more features powered by the company's Sensei engine, which was launched in 2016. On desktop, the company is bringing content-aware filling for multiple objects in one go. This feature lets you remove objects and fill the space with the surrounding background, based on your selection. For instance, you could remove a leaf resting on an ice cream cone and fill the space with the pink ice cream behind it. Along with this, Adobe is also launching an improved lens-blur function that supports photos with a depth map. The new feature relies on the GPU rather than the CPU for processing, and supposedly delivers a more realistic bokeh effect.
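A drastically simplified sketch of the fill idea: pixels in the masked region are repeatedly replaced with the average of their known neighbors, so the surrounding background "grows" into the hole. Adobe's Sensei-based fill is far more sophisticated (patch matching and learned models); this shows only the diffusion gist:

```python
def inpaint(img, mask, iterations=50):
    """Fill masked pixels by iteratively averaging their 4-neighbors."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]
    for _ in range(iterations):
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [img[y + dy][x + dx]
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w]
                    img[y][x] = sum(nbrs) / len(nbrs)
    return img

background = [[100, 100, 100], [100, 0, 100], [100, 100, 100]]
hole = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # 1 marks the pixel to remove/fill
print(round(inpaint(background, hole)[1][1]))  # 100: the hole takes on the background
```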
Open Source

OpenPower Foundation Releases a Friendly EULA For IBM's Power ISA RISC (phoronix.com) 28

Long-time Slashdot reader lkcl writes: Michael Larabel, of Phoronix, writes that the OpenPower Foundation has released a license agreement for Hardware Vendors to implement the Power ISA RISC instruction set in their processors. Hugh Blemings, the Director of OpenPower, was responsible for ensuring that the EULA is favourable and friendly towards Libre and Open Hardware projects and businesses.

Of particular interest is that IBM's massive patent portfolio is automatically granted, royalty-free as long as two conditions apply: firstly, the hardware must be fully and properly Power ISA compliant, and secondly, the implementor must not "try it on" as a patent troll.

Innovation in the RISC space just got a little more interesting.

"Amidst the fully free and open RISC-V ISA making headway into the computing market, and ARM feeling pressured to loosen up its licensing, it seems they figured that it's best to join the party early," argues Hackaday.
AMD

AMD Ryzen Threadripper 3990X 64-Core Chip Launched, Benchmarking a Beast CPU (hothardware.com) 99

MojoKid writes: In January at CES 2020 in Las Vegas, AMD CEO Dr. Lisa Su took to the stage during a press conference and revealed the company's forthcoming Ryzen Threadripper 3990X 64-core processor. Dr. Su disclosed the 3990X's speeds and feeds and showed the beastly chip taking down a pair of 28-core Xeons worth about $20K in a 3D rendering benchmark, despite its much smaller price tag. Today, however, the company has lifted the veil on full details of the chip as well as its complete performance profile across a myriad of benchmarks, fresh off embargo lift. Though it's packing 64 cores capable of processing 128 threads in SMT, the 3990X's TDP is still rated at 280W when it's configured at stock frequencies (2.9GHz base / 4.3GHz boost). As you might expect, Ryzen Threadripper 3990X was an absolute beast in all of the multi-threaded tests that scale properly to leverage all of its cores. In fact, Threadripper 3990X stands head and shoulders above every other desktop processor on the market currently, besting competing many-core Intel solutions by more than 2X in some cases. However, because Threadripper 3990X has relatively high clocks for such a high core count chip, it also offers relatively strong performance for day-to-day use cases in lightly threaded workloads and gaming as well. There's no question, at $3990 MSRP, AMD's Threadripper 3990X isn't a mainstream desktop chip, but for those who need serious workstation performance and throughput, nothing on the desktop even comes close right now.
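Amdahl's law makes that core-count-versus-clocks tradeoff concrete: 64 cores only pay off when nearly all of a workload parallelizes, which is why per-core clocks still matter for day-to-day use. The parallel fractions below are illustrative, not measured:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only parallel_fraction of the work scales."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# A 99%-parallel renderer makes great use of 64 cores...
print(round(amdahl_speedup(0.99, 64), 1))  # 39.3
# ...but a 60%-parallel desktop task barely benefits from them.
print(round(amdahl_speedup(0.60, 64), 1))  # 2.4
```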
AMD

AMD Threadripper 3990X 64-Core Beast Spotted Outscoring Dual Intel Xeon Platinum (hothardware.com) 98

MojoKid writes: When AMD unveiled its forthcoming Ryzen Threadripper 3990X 64-core processor at CES 2020 this year, the company made no bones about comparing its performance to a many-core competitive platform from Intel. Under the hood of the yet-to-be-formally-released high-end AMD workstation chip are 64 physical cores capable of processing 128 threads in SMT, with a 2.9GHz base clock, 4.3GHz boost clock, and 256MB of L3 cache. All that horsepower resides in a single TRX40 socket with a 280 Watt TDP for a suggested retail price of $3990. Conversely, a dual socket Intel Xeon Scalable Platinum 8280 setup will sport 56 cores across two sockets with over a 400 Watt TDP that costs around $20,000. At CES, AMD showed its new 64-core Threadripper beating the dual Xeon Platinum setup in a 3D rendering application called VRAY, and today additional benchmark numbers have surfaced in SiSoft SANDRA, showing Threadripper 3990X out-scoring the Intel setup by around 18 percent. No doubt, AMD's Threadripper 3990X isn't a CPU for the average mainstream desktop user, but when the chips arrive to market in the near future, workstation and content creation professionals will likely be all over AMD's new 64-core beast chip.
China

Made in China 8-Core x86 CPU Released (techspot.com) 101

AmiMoJo writes: Zhaoxin, a fabless chip maker based in Shanghai, has produced a homegrown x86 CPU line that's apparently ready for the DIY scene. The Zhaoxin KaiXian KX-6000 series of processors were originally shown off in 2018, but since then we had heard little about them. Now it seems that the KX-U6780A will come to market this quarter, as listed on Chinese retail site Taobao with a March release date. For the uninitiated, Zhaoxin is a joint venture between VIA Technologies and the Shanghai Municipal Government. Zhaoxin's current CPU designs have origins in Centaur Technology, a company acquired by VIA in 1999. The VIA Nano Isaiah core design, built by Centaur, would serve as the architecture for Zhaoxin's first CPUs.

The Isaiah design was Centaur's first superscalar CPU capable of out-of-order execution. This is what seemed to pave the way for Zhaoxin's in-house designed LuJiaZui cores, as they too are built around a superscalar, out-of-order architecture. LuJiaZui seems to be an iterative migration from the Wudaokou architecture, but also supports modern instruction set extensions such as AVX and SSE4.2, which is an important evolution for China and its domestic CPU goals. The KaiXian KX-U6880A appears to be the series flagship, with the slightly lower clocked KX-U6780A slotting in just beneath it. All KX-6000 series chips are based on the LuJiaZui architecture, boasting eight cores and eight threads.
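The in-order versus out-of-order distinction that Isaiah introduced can be shown with a toy one-wide scheduler (the three-instruction program and its latencies are invented for illustration): an out-of-order core issues any instruction whose inputs are ready instead of stalling behind a slow load.

```python
def simulate(out_of_order):
    """Issue a tiny program on a 1-wide core; return (issue order, total cycles).
    Instruction format: (name, destination, source registers, latency)."""
    remaining = [("load", "r1", [], 3),      # slow memory load
                 ("add",  "r2", ["r1"], 1),  # depends on the load's result
                 ("mov",  "r3", [], 1)]      # independent work
    completed_at, issued, cycle = {}, [], 0
    while remaining:
        cycle += 1
        ready = {reg for reg, t in completed_at.items() if t <= cycle}
        # In-order: only the oldest instruction may issue.
        # Out-of-order: scan ahead for any instruction whose inputs are ready.
        window = remaining if out_of_order else remaining[:1]
        for ins in window:
            name, dest, srcs, lat = ins
            if all(s in ready for s in srcs):
                issued.append(name)
                completed_at[dest] = cycle + lat
                remaining.remove(ins)
                break  # one issue per cycle; otherwise this cycle stalls
    return issued, cycle

print(simulate(out_of_order=False))  # (['load', 'add', 'mov'], 5)
print(simulate(out_of_order=True))   # (['load', 'mov', 'add'], 4)
```

The out-of-order core hides one cycle of the load latency behind the independent `mov`, which is the whole point of the technique.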

Security

Intel Is Patching Its 'Zombieload' CPU Security Flaw For the Third Time (engadget.com) 24

An anonymous reader quotes a report from Engadget: For the third time in less than a year, Intel has disclosed a new set of vulnerabilities related to the speculative functionality of its processors. On Monday, the company said it will issue a software update "in the coming weeks" that will fix two more microarchitectural data sampling (MDS) or Zombieload flaws. This latest update comes after the company released two separate patches in May and November of last year.

Compared to the MDS flaws Intel addressed in those two previous patches, these latest ones have a couple of limitations. To start, one of the vulnerabilities, L1DES, doesn't work on Intel's more recent chips. Moreover, a hacker can't execute the attack using a web browser. Intel also says it's "not aware" of anyone taking advantage of the flaws outside of the lab.
In response to complaints of the company's piecemeal approach, Intel said that it has taken significant steps to reduce the danger the flaws represent to its processors.

"Since May 2019, starting with Microarchitectural Data Sampling (MDS), and then in November with TAA, we and our system software partners have released mitigations that have cumulatively and substantially reduced the overall attack surface for these types of issues," a spokesperson for the company said. "We continue to conduct research in this area -- internally, and in conjunction with the external research community."
The Courts

Apple Lawsuit Tests If An Employee Can Plan Rival Startup While On Payroll (reuters.com) 63

Attorneys for a former Apple executive will try to convince a skeptical judge that employees can plan a competing venture while still in a job. Reuters reports: Apple filed the lawsuit in Santa Clara County Superior Court against Gerard Williams III, who left the company last year after more than nine years as chief architect for the custom processors that power iPhones and iPads to start Nuvia Inc, which is designing chips for servers. Judge Mark H. Pierce last week issued a tentative ruling allowing the case to proceed but barring Apple from seeking punitive damages.

Apple sued Williams in August, alleging that he breached an intellectual property agreement and a duty of loyalty to the company by planning his new startup while on company time at Apple, spending hours on the phone with colleagues who eventually joined the venture. Apple is not suing Nuvia itself or any of Williams' co-founders and it did not allege any intellectual property or trade secret theft. According to a copy of Williams' agreement that Apple attached to its complaint, the contract required that Williams "will not plan or engage in any other employment" that competes with Apple or is directly related to the company. In a filing in November, Williams argued that Apple's contract was unenforceable because California law allows employees to make some preparations to compete while still in their current job.

Ubuntu

The Official Kubuntu 'Focus' Linux Laptop Goes on Sale (betanews.com) 98

You can now buy an official Kubuntu laptop, called the "Focus." It is an absolute powerhouse with top specs. From a report: Here's the specs list:
CPU: Core i7-9750H 6c/12t 4.5GHz Turbo
GPU: Nvidia RTX 2060 6GB
RAM: 32GB Dual Channel DDR4 2666 RAM
Storage: 1TB Samsung 970 EVO Plus NVMe
Display: 16.1" matte 1080p IPS
Keyboard: LED backlit, 3-4mm travel
User-expandable SSD, NVMe, and RAM
Superior cooling
The starting price for the Kubuntu Focus Laptop is $2395.

Open Source

Tuxedo's New Manjaro Linux Laptops Will Include Massive Customization (forbes.com) 17

Tuxedo Computers "has teamed up with Manjaro to tease not one, not two, but several" Linux laptops, Forbes reports:
The Tuxedo Computers InfinityBook Pro 15...can be loaded with up to 64GB of RAM, a 10th-generation Intel Core i7 CPU, and as high as a 2TB Samsung EVO Plus NVMe drive. You can also purchase up to a 5-year warranty, and user-installed upgrades will not void the warranty...

Manjaro Lead Project Developer Philip Müller also teased a forthcoming AMD Ryzen laptop [on Forbes' "Linux For Everyone" podcast]. "Yes, we are currently evaluating which models we want to use because the industry is screaming for that," Müller says. "In the upcoming weeks we might get some of those for internal testing. Once they're certified and the drivers are ready, we'll see when we can launch those." Müller also tells me they're prepping what he describes as a "Dell XPS 13 killer."

"It's 10th-generation Intel based, we will have it in 14-inch with a 180-degree lid, so you can lay it flat on your desk if you like," he says.

The Manjaro/Tuxedo Computers partnership will also offer some intense customization options, Forbes adds.

"Want your company logo laser-etched on the lid? OK. Want to swap out the Manjaro logo with your logo on the Super key? Sure, no problem. Want to show off your knowledge of fictional alien races? Why not get a 100% Klingon keyboard?"
Oracle

Oracle Ties Previous All-Time Patch High With January 2020 Updates (threatpost.com) 9

"Not sure if this is good news (Oracle is very busy patching their stuff) or bad news (Oracle is very busy patching their stuff) but this quarterly cycle they tied their all-time high number of vulnerability fixes released," writes Slashdot reader bobthesungeek76036. "And they are urging folks to not drag their feet in deploying these patches." Threatpost reports: The software giant patched 300+ bugs in its quarterly update. Oracle has patched 334 vulnerabilities across all of its product families in its January 2020 quarterly Critical Patch Update (CPU). Out of these, 43 are critical/severe flaws carrying CVSS scores of 9.1 and above. The CPU ties for Oracle's previous all-time high for number of patches issued, in July 2019, which overtook its previous record of 308 in July 2017. The company said in a pre-release announcement that some of the vulnerabilities affect multiple products. "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update patches as soon as possible," it added.

"Some of these vulnerabilities were remotely exploitable, not requiring any login data; therefore posing an extremely high risk of exposure," said Boris Cipot, senior security engineer at Synopsys, speaking to Threatpost. "Additionally, there were database, system-level, Java and virtualization patches within the scope of this update. These are all critical elements within a company's infrastructure, and for this reason the update should be considered mandatory. At the same time, organizations need to take into account the impact that this update could have on their systems, scheduling downtime accordingly."
