Comment Re:Existing TRX40 motherboards? (Score 1) 71

I'm sure many people will be replacing their older TR systems with newer TRX40 systems. I'm not sure why you believe people wouldn't. The TR3 chips are considerably more powerful than the TR2 chips core-for-core, older TR systems have value on the used market, and not everyone with a TR2 system is running the highest-end TR2 chips.

Someone with a 2970WX or 2990WX system probably wouldn't be upgrading (except possibly to a 3990X), but I would say that many people with a 1900X, 1950X, 2920X, or 2950X will definitely be in the market.

If these people don't upgrade to TR3, they will probably opt to upgrade to an AM4-based 3950X instead (which is a much cheaper motherboard + CPU combination than TR2/3).

-Matt

Comment Re:Existing TRX40 motherboards? (Score 2) 71

Any TRX40 motherboard can run the 3990X. It is true that older X399 motherboards cannot run the new TR3 chips, or vice versa, and I agree it kinda sucks a little. But it's hard to be angry at AMD considering what they packed into the TRX40. They didn't just force people onto a new TR socket gratuitously, unlike Intel.

The TRX40 motherboards have 4x the data bandwidth between CPU and chipset that X399 had. That's four times the bandwidth. Not the same, not twice... four times. It means that all the PCIe lanes hanging off the chipset are usable, and this cannot be said for any other motherboard from either AMD or Intel. TRX40 has 72 total unencumbered PCIe lanes available to the user.

The TRX40 motherboards are also all PCIe-v4-ready (the X399 motherboards had no chance of doing PCIe-v4), and the DDR4 channels have been re-laid-out to allow the RAM to be clocked significantly higher.

So... complaining about it is kinda silly. AMD saw a chance to quadruple chipset bandwidth and they took it. That's the main reason why the socket isn't compatible, and I'm fine with it.

AMD also gave people 16 cores on AM4, backwards compatible all the way to B450 (I wouldn't try it on an A320). So 90% of the AM4 motherboard line-up can now take a 16-core CPU. That's a pretty nice present AMD gave us there!

-Matt

Comment Re: Can we have real SMP back? (Score 4, Informative) 71

The CPUs in TODAY's laptops beat the holy living crap out of what we had in the Sandy Bridge era, even when running at lower frequencies. It isn't even a contest. Yes, laptop vendors put physically smaller batteries in the thinner laptops... they still put large batteries in 'gaming' laptops, though, and even the smaller batteries generally have twice the watt-hours of capacity that older laptops from that era had.

In addition, the CPU performance has very little to do with battery life unless the laptop is actually being loaded down. Most of the battery's power consumption is eaten up by the display.

Just playing video or browsing around puts basically ZERO load on a laptop CPU. The video is handled by dedicated decode hardware in the iGPU, and having a ton of browser windows open doing animations won't even move the needle on CPU use. The only way to actually load a laptop CPU down these days is to do some sort of creator-type work... batch Photoshop, rendering, VR, and the like.

Almost nothing running on a modern laptop is single-threaded, not even a browser that has only one tab open. At a minimum the graphics pipe will use a second core (whether using GPU HW acceleration or not), which means that software logic and screen updates get their own cores, even for a single-threaded program. There are no bottlenecks outside of the storage subsystem, so if that's a SSD a modern laptop is going to have lightning response under almost all conditions.

Any real browser, such as Chrome or Firefox, is pretty seriously multi-threaded. I have four Chrome windows open on my workstation right now with not very many tabs... maybe 6 tabs open at the moment, and ps shows 182 program threads associated just with Chrome across 21 discrete processes. 182 program threads.
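If you want to check this on your own Linux box, here is a minimal sketch (my own illustrative program, nothing Chrome ships) that just counts the entries under /proc/<pid>/task, which is essentially what ps is reporting:

/* threadcount.c -- count the threads of a process by listing /proc/<pid>/task.
 * Linux-specific illustrative sketch; build with: cc -O2 threadcount.c */
#include <stdio.h>
#include <dirent.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64];
    snprintf(path, sizeof(path), "/proc/%s/task", argv[1]);

    DIR *dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    int threads = 0;
    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] != '.')   /* skip "." and ".." */
            ++threads;
    }
    closedir(dir);
    printf("pid %s has %d threads\n", argv[1], threads);
    return 0;
}

Sum that over Chrome's 21 processes and you get the kind of thread counts I'm talking about.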

Where there is bloat on today's systems, it tends to be in memory use, particularly when running browsers. Getting a laptop with at least 8GB (and better, 16GB) of RAM is definitely an important consideration. My relatively minimal browser use is eating... 5GB of RAM. Of course, my workstation has 32GB so I don't really feel it. But the same issue exists on a laptop. Get more RAM, things will run more smoothly. You can swear at the software... but still get more RAM :-).

-Matt

Comment Re:Can we have real SMP back? (Score 3, Interesting) 71

Yes and no. Yes, a better cooler will result in better performance, but there are three problems.

First, there are limits to just how quickly heat can be dissipated from the silicon due to the transistor density. As geometries get smaller, power density continues to increase. Ambient coolers (whether air- or liquid-based) max out. Going sub-ambient is generally a non-starter, and even if you try it you still can't go below freezing without causing serious condensation. Not for regular use, anyway.

The second problem is power consumption. Power goes exponential as the frequency goes past its sweet spot (around 3.8 GHz or so on Zen 2). This is fine if only one core is being boosted, but try to do it on all cores and you can easily start pulling 200-300W just for the CPU socket alone.
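To put rough illustrative numbers on that (my own back-of-envelope sketch, assuming dynamic power scales roughly as C*V^2*f and that voltage has to climb with frequency past the sweet spot; the voltage/frequency pairs below are made up, not Zen 2 specs):

/* power_scaling.c -- back-of-envelope dynamic power scaling, P ~ C * V^2 * f.
 * The voltage/frequency pairs are hypothetical, purely for illustration. */
#include <stdio.h>

int main(void)
{
    double v1 = 1.00, f1 = 3.8;   /* assumed sweet spot: 1.00 V @ 3.8 GHz */
    double v2 = 1.35, f2 = 4.4;   /* assumed all-core push: 1.35 V @ 4.4 GHz */

    /* Relative dynamic power: (V2/V1)^2 * (f2/f1). */
    double ratio = (v2 / v1) * (v2 / v1) * (f2 / f1);

    printf("~%.2fx the per-core dynamic power for a ~%.0f%% frequency bump\n",
           ratio, (f2 / f1 - 1.0) * 100.0);
    return 0;
}

Roughly double the per-core power for a ~16% frequency bump; multiply that across 16, 32, or 64 cores and 200-300W at the socket arrives very quickly.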

The third problem is called electromigration... basically, the more current you throw into the CPU die on these smaller nodes, the lower the 'safe' voltage winds up being. Where the two cross gives you the maximum safe frequency you can actually run the CPU at. So when you try to push a higher all-core frequency you wind up in a rat race: higher frequencies require higher voltages, but the maximum safe voltage drops the more cores you try to run at those higher frequencies.

These problems also apply to Intel's 10nm and will likely apply to all future (smaller) nodes as well for both Intel and other foundries.

-Matt

Comment Locks are complicated (Score 5, Informative) 191

Locks are complicated. It's really that simple (ha ha). All of the operating system projects have gone through a dozen generations of lock design over the last 30 years because performance depends heavily on all sorts of things. These days, cache-line effects (what we call cache-line ping-ponging between CPUs) are a big deal due to the number of CPU cores that might be involved. Implementations that were optimal in the days of 4-core and 8-core machines fall flat on their faces as the core count increases.

Even situations that you might think wouldn't be an issue, such as a simple non-contended shared lock, have serious performance consequences on multi-core machines when they are banged on heavily... consequences that can cause latencies in excess of one microsecond from JUST a single NON-CONTENDED atomic increment instruction. That's how bad it can get.
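A minimal sketch of the effect (my own hypothetical microbenchmark using C11 atomics): every thread hammers one shared counter, and even though nobody ever waits on a lock, each increment drags the cache line exclusive to that core, ping-ponging it across the machine.

/* ping_pong.c -- non-contended atomic increments on one shared cache line.
 * Illustrative only; build with: cc -O2 -pthread ping_pong.c */
#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 8
#define NITERS   10000000L

static _Atomic long shared_counter;   /* one cache line shared by every core */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < NITERS; ++i) {
        /* No lock and no "contention" in the classic sense, but every increment
         * still pulls the cache line exclusive, bouncing it between CPUs. */
        atomic_fetch_add_explicit(&shared_counter, 1, memory_order_relaxed);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; ++i)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; ++i)
        pthread_join(tid[i], NULL);
    printf("counter = %ld\n", atomic_load(&shared_counter));
    return 0;
}

Time it with one thread and then with all cores banging on it and the per-increment cost difference makes the point.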

In modern kernel programming, spin-locks can be used safely only because the kernel has fine control over the scheduler. Spin-locks in userland tend to be disastrous in the face of any sort of uncontrolled scheduler action. And to even make them work reliably on many-core machines we need backoff mechanisms to reduce the load on the cache coherency busses inside the CPU. Linus is exactly right.
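For the curious, here's a bare-bones sketch of the kind of backoff I mean (my own illustrative test-and-test-and-set lock, not production code; a real userland lock should fall back to the kernel via something like a futex rather than spin forever):

/* backoff_spinlock.c -- test-and-test-and-set spinlock with exponential backoff.
 * Illustrative sketch only; build with: cc -O2 -pthread backoff_spinlock.c */
#include <stdatomic.h>
#include <stdbool.h>

#if defined(__x86_64__) || defined(__i386__)
#include <immintrin.h>
#define cpu_relax() _mm_pause()   /* hint to the CPU that we're spinning */
#else
#define cpu_relax() ((void)0)
#endif

typedef struct { _Atomic bool locked; } spinlock_t;

static void spin_lock(spinlock_t *l)
{
    unsigned backoff = 1;
    for (;;) {
        /* Cheap shared read first; only attempt the expensive atomic exchange
         * (which takes the line exclusive) when the lock looks free. */
        if (!atomic_load_explicit(&l->locked, memory_order_relaxed) &&
            !atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
            return;
        /* Exponential backoff takes pressure off the cache coherency fabric. */
        for (unsigned i = 0; i < backoff; ++i)
            cpu_relax();
        if (backoff < 1024)
            backoff <<= 1;
    }
}

static void spin_unlock(spinlock_t *l)
{
    atomic_store_explicit(&l->locked, false, memory_order_release);
}

int main(void)
{
    spinlock_t l = { false };
    spin_lock(&l);
    /* ... critical section ... */
    spin_unlock(&l);
    return 0;
}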

There are other major issues with locks that become dominant on systems with more cores. Shared/Exclusive lock conflict resolution becomes a big problem, so the locking code needs to handle situations where many overlapping shared locks are preventing a single exclusive lock from being taken, or where many serial exclusive locks are preventing one or more shared locks from being taken. Just two examples there.
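As one concrete knob in that area: glibc exposes a nonportable extension that biases a pthread rwlock toward writers, so a steady stream of shared lockers can't starve an exclusive locker forever. A hedged, glibc-only sketch:

/* rwlock_pref.c -- biasing a pthread rwlock toward writers (glibc-specific).
 * The *_np attribute is a nonportable glibc extension; other platforms
 * resolve shared/exclusive fairness in their own ways.
 * Build with: cc -O2 -pthread rwlock_pref.c */
#define _GNU_SOURCE
#include <pthread.h>

int main(void)
{
    pthread_rwlockattr_t attr;
    pthread_rwlock_t lock;

    pthread_rwlockattr_init(&attr);
    pthread_rwlockattr_setkind_np(&attr,
        PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
    pthread_rwlock_init(&lock, &attr);

    pthread_rwlock_rdlock(&lock);   /* shared (read) lock */
    pthread_rwlock_unlock(&lock);

    pthread_rwlock_wrlock(&lock);   /* exclusive (write) lock */
    pthread_rwlock_unlock(&lock);

    pthread_rwlock_destroy(&lock);
    pthread_rwlockattr_destroy(&attr);
    return 0;
}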

Even cache-line-friendly queued locks (sequence space locks) have major trade-offs. Stacked locks (that look like mini binary trees) eat up serious amounts of memory and have their own problems.

The general answer to all of this is to develop code to be as lockless as possible through the use of per-CPU (or per-thread) data structures. The design of RCU was one early workaround to the problem (though RCU itself has serious problems, too). Locks cannot be entirely avoided, but real performance is gained only when you can code an algorithm so that no locks are required in most critical-path situations. That's where all the OS projects are moving these days.
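Per-thread counters are the classic illustration (again, my own hypothetical sketch): each thread bumps its own cache-line-padded slot with no shared state on the hot path, and you only sum the slots when somebody actually asks for the total.

/* per_thread_counters.c -- lockless statistics via per-thread slots.
 * Illustrative sketch; build with: cc -O2 -pthread per_thread_counters.c */
#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 8
#define NITERS   10000000L

struct padded_counter {
    _Atomic long value;
    char pad[64 - sizeof(_Atomic long)];   /* keep each slot on its own cache line */
};

static struct padded_counter counters[NTHREADS];

static void *worker(void *arg)
{
    struct padded_counter *mine = arg;
    for (long i = 0; i < NITERS; ++i)
        /* Only this thread ever writes this line: no ping-pong, no lock. */
        atomic_fetch_add_explicit(&mine->value, 1, memory_order_relaxed);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; ++i)
        pthread_create(&tid[i], NULL, worker, &counters[i]);
    for (int i = 0; i < NTHREADS; ++i)
        pthread_join(tid[i], NULL);

    long total = 0;
    for (int i = 0; i < NTHREADS; ++i)   /* the slow path: aggregate on demand */
        total += atomic_load_explicit(&counters[i].value, memory_order_relaxed);
    printf("total = %ld\n", total);
    return 0;
}

Compare its throughput against the shared-counter version above and the difference is exactly the cache-line effect I was describing.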

-Matt

Comment Re:What? (Score 1) 35

It's very odd that LSM LOCKDOWN is controversial. This is roughly the same feature as BSD Securelevels, though with some more fine-grained control. In theory, it's very useful if you want to protect your system against attackers who compromise a process that runs with root privilege. In practice, the kernel attack surface is so large that a motivated attacker can probably bypass it.

Comment Re:Burger King should just stop (Score 1) 350

I'm quite surprised that they didn't. Prior to the launch, they said that they were going to market it as 'plant-based' and not 'vegetarian' or 'meat-free' because it didn't meet the requirements to be classified as vegetarian. They were aiming it at flexitarians who wanted to reduce the amount of meat that they were eating, not at vegetarians or vegans. I guess someone in marketing realised how big a market they were losing, decided it would be a big money-maker to advertise it as meat-free, and didn't bother to check whether that was actually an accurate description.

Comment Holy cow, so much mis-information (Score 2) 201

An unbelievable amount of junk is being posted. The short answer is: always run with swap configured on a Linux or BSD system, period. If you don't, you're an idiot. As to why? There are many, many reasons. How much? Generally as much as main memory. If you have a tiny amount of memory, more; if you have tons of memory, like a terabyte, then less. And these days it absolutely should be on a SSD. SSDs are not expensive... think about it. Do you want 40GB of swap? It's a $20 SSD. Frankly, just putting the swap on your main SSD (if you have one) works just as well. It won't wear out from paging.

Linux and BSD kernels are really good at paging out only what they really need to page out, only paging when there is actual memory pressure, and doing it in batches (no need to worry about SSD swap write amplification much these days... writes to swap are batched and are actually not so random; it's the reads that tend to be more random). I started using SSDs all the way back in the days of the Intel 40GB consumer drives, many of them used for swap, and have yet to reach the wear limit on any of them. And our machines get used heavily. SSDs wear out from doing other stupid things... swap is not usually on the list of stupid things. The days of random paging to swap for no reason are long over. Windows... YMMV.

Without swap configured you are wasting an enormous amount of relatively expensive RAM to hold dirty data that the kernel can't dispose of. People really underestimate just how much of this sort of data systems have... it can be huge, particularly now with bloated browsers like Chrome, but also with simple things like tmpfs, which is being used more heavily every day. Without swap configured, if memory gets tight the ONLY pages the kernel can evict are shared read-only file-backed pages... generally 'text' pages (aka code). These pages are not as conducive to paging as data pages, and it won't take long for the system to start to thrash (this is WITHOUT swap) by having to rip away all the program code and then instantly page it back in again. WITH swap, dirty data pages can be cleaned by flushing them to swap.

Configure swap, use SSDs. If you are worried about wear, just check the wear every few months, but honestly I have never worn out a SSD by having swap configured on it... and our systems can sometimes page quite heavily when doing bulk package builds. Sometimes as much as 100GB might be paged out, but it allows us to run much more aggressive concurrency settings and still utilize all available CPU for most of the bulk run.

So here are some bullets.

1. Systems treat memory as SWAP+RAM, unless you disable over-commit. Never disable over-commit on a normal system. The SWAP is treated like a last-level cache: CPU, L1, L2, L3, [L4], RAM, SWAP. Like that. The kernel breaks the RAM down into several queues... ACTIVE, INACTIVE, CACHE, then SWAP. Unless the system is completely overburdened, a Linux or BSD kernel will do a pretty damn good job keeping your GUI smooth even while paging dead browser data away.

2. Kernels do not page stuff out gratuitously. If there is no memory pressure, there will be no paging, even if the memory caches are not 'balanced'.

3. There is absolutely no reason to waste memory holding dirty data from idle programs or browser tabs. If you are running a desktop browser, swap is mandatory and your life will be much better for it.

4. The same is true for (most) SERVERS. Persistent sessions are the norm these days, and 99% of those will be idle long-term. With swap the server can focus on the ones that aren't, and paging in an idle session from a SSD takes maybe 1/10 of a second.

5. CPU overhead for paging is actually quite low, and getting lower every day. Obviously if a program stalls on a swapped page that has to be paged in you might notice it, but the actual CPU overhead is almost zip.

6. The RAM required to manage swap space is approximately 1 MByte per 1 GByte of swap. I regularly run hundreds of gigabytes of swap, just to give me breathing room if I happen to get runaways. The system RAM overhead is just not a big deal.

7. SSD wear is not typically an issue these days, for many reasons. Writes to swap are typically batched and actually don't have all that much write amplification. It's the reads which tend to be more random, and random reads are the perfect load for a SSD.

Run with a SSD, configure a reasonable amount of swap on it, stop worrying about paging-induced wear, and you are done. This is the modern world.
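If you want a quick programmatic sanity check of how much swap is configured and in use, here's a minimal Linux-specific sketch using sysinfo() (on the BSDs you'd read the equivalent sysctls or just use swapinfo/top instead):

/* swapinfo_check.c -- report RAM and swap totals/free via sysinfo().
 * Linux-specific illustrative sketch; build with: cc -O2 swapinfo_check.c */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
    struct sysinfo si;
    if (sysinfo(&si) != 0) {
        perror("sysinfo");
        return 1;
    }
    /* Field values are in units of si.mem_unit bytes; convert to GiB. */
    double unit = si.mem_unit / (1024.0 * 1024.0 * 1024.0);
    printf("RAM : %.1f GiB total, %.1f GiB free\n",
           si.totalram * unit, si.freeram * unit);
    printf("Swap: %.1f GiB total, %.1f GiB free\n",
           si.totalswap * unit, si.freeswap * unit);
    return 0;
}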

-Matt

Comment Re:Boring (Score 1) 265

It got boring when Slashdot disabled (broke?) the message center, so it is now impossible to see when someone has replied to your posts. That was the thing that, early on, allowed Slashdot discussions to be discussions and not just disjointed monologues. Since it went away, the quality of discussions has gone way down. The AC spam was just a symptom of this.

Comment Re:Easy: Switch to gmail and google drive (Score 4, Informative) 137

Nope. Note in TFS the key phrase: 'In its default configuration'. The university that I used to work for bought Office 365. This was even before the GDPR, but the university deals with a lot of confidential commercial data from industrial partners and with health records in life sciences departments. Google's stock T&Cs were completely incompatible with this and they refused to negotiate. Microsoft's stock T&Cs were also incompatible (which is why this ruling is completely unsurprising), but Microsoft was happy to negotiate a contract that gave much stricter controls over data.

For Germany in particular, the German Azure data centres are actually owned by a joint venture between Deutsche Telekom and Microsoft and so are out of US jurisdiction. Companies in Germany (and the rest of the EU) can buy an Office 365 subscription that guarantees that their data never leaves Germany.

Comment Re:Article is full of glaring errors (Score 2) 203

What do you mean by 'a cycle life of 1000 cycles'? Most batteries that I've seen are rated as a number of cycles after which they are guaranteed to retain 80% of their initial charge. That number is typically 1,000-3,000 cycles. After that, most of them don't die, they just have a lower maximum charge, which continues to degrade.

It gets more complicated when you factor in partial charges. Li-ion batteries are most efficient if you never fully charge or discharge them. If you use around 40% of their total charge cycle each time then they last a lot longer, but then you have to increase your up-front costs in exchange for the lower TCO.
