Comment Re:Is there any other option, Linus? (Score 1) 507

Yes, I understand the implications... and I assume the longer the pipeline, the worse the effects (and the industry has been pushing ever-longer pipelines). But we're dealing with 4 GHz chips. Intel's chips adjust voltage and clocks to load, and most of the time a consumer's CPU is idling at 5%. If we took, say, a 20% hit for turning off branch speculation completely, would that effectively mean the chip acts like a 3.2 GHz CPU instead of a 4 GHz one? Keeping in mind that those 4 GHz CPUs largely have an idling speed, a working speed, and a "boost" speed above 4 GHz on certain cores, would Joe or Jane Sixpack, who uses a laptop mostly for Facebook, Netflix, Gmail, and YouTube, really even notice? Especially with quad cores? And future systems are coming with six, eight, or more cores.

Branch prediction is great for accelerating program speed, but having multiple cores means more work can be done simultaneously as well (even if each core isn't as fast without branch prediction). I'm just reminded of the single-core days when every program had to fight for CPU time. Now, with multi-core and hyperthreading, so many threads run at once that maybe the speedup from branch prediction doesn't have as big an impact on human-perceived responsiveness as it once did.

Intel has been lowering its CPU clock speeds as it adds cores to keep the thermal envelope low -- especially in laptops. It's something of an admission that more cores matter more than core speed. But if disabling branch speculation has the same effect on speed as lowering the clock on a speculation-enabled CPU, then... I just wonder if disabling it altogether for security reasons would be a better path until the architectural issues are fixed. It seems these "garbage patches" are causing instabilities, and many systems need a BIOS update to go with the microcode update... and manufacturers are loath to update any BIOS for systems more than a year or so old.

I have a friend purchasing a 12-core i9 (24 threads) with 32 GB of RAM next month... and I'm just thinking: would this guy even notice if branch speculation were disabled? The base frequency is 2.90 GHz, the turbo is 4.30 GHz. Assuming the biggest hit was 20%, that still leaves a turbo speed of about 3.44 GHz... and the system might simply spend more time above its base frequency to offset the performance hit.
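Just to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The flat 20% penalty is purely an assumed worst case for illustration, not a measured number -- the real hit would swing from near zero to far worse depending on how branchy the workload is.

```python
# Back-of-the-envelope sketch: what a hypothetical flat slowdown from
# disabling branch speculation would look like in "effective GHz" terms.
# The 20% penalty is an assumption for illustration, not a measurement.

def effective_clock(ghz: float, penalty: float) -> float:
    """Clock speed that would give roughly the same throughput."""
    return ghz * (1.0 - penalty)

assumed_penalty = 0.20  # hypothetical worst-case hit

for label, ghz in [("4.00 GHz desktop", 4.00),
                   ("i9 base (2.90 GHz)", 2.90),
                   ("i9 turbo (4.30 GHz)", 4.30)]:
    print(f"{label}: ~{effective_clock(ghz, assumed_penalty):.2f} GHz equivalent")
```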

Now, I'm not saying we should kill branch prediction in future development, but we should probably re-evaluate how it's done if security is an issue. In the meantime, retpoline seems like a good trade-off, and there are other good ideas. I would just like some data on what shutting off the insecure branch speculation completely would actually do in the real world... because I have a sneaking suspicion that unless you're working on heavily CPU-bound tasks, it wouldn't be noticeable to the average user. CAD users, video transcoders, AI developers -- sure, they'd opt out because of the hit they'd take... but many people might just prefer stability over crappy patches and not notice the hit at all.

Comment Re:Is there any other option, Linus? (Score 2) 507

I would like to see some benchmarks to see exactly what would happen if we killed branch prediction. I bet Intel has a command to turn it off for debugging.

No doubt it'd kill performance, but I'm curious by how much.
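You can't actually flip the branch predictor off from userspace on a stock Intel chip, but Linux kernels from 4.15 on at least report which Meltdown/Spectre mitigations are active. A minimal Python sketch, assuming a box with a kernel new enough to have that sysfs directory:

```python
# Dump the kernel's view of Spectre/Meltdown status and mitigations.
# Assumes Linux 4.15+ where /sys/devices/system/cpu/vulnerabilities exists;
# this only reports mitigations -- it can't disable the branch predictor itself.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if not vuln_dir.is_dir():
    print("Kernel too old to report vulnerability status")
else:
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
```

That only tells you what the kernel is doing about the bugs, not a way to switch speculation itself off, but it's a start for before/after benchmarking.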

For instance, the Raspberry Pi 3's Cortex-A53 is a simple in-order core that doesn't do the aggressive speculative execution these attacks exploit. It runs at 1.2 GHz. Different architecture, obviously, but still. If it can chug along pretty well playing H.264 video, run Netflix from Chromium, and do quite a lot of things rather well with a modest chipset, a relatively slow 1.2 GHz clock, and hobbled RAM... it seems a 4 GHz Intel chip might not be so badly off without branch speculation.

I have a Core i7 -- quad core, L1, L2, and L3 cache, hyperthreading enabled. CPU rarely hits above 20% -- more often in the 5% to 12% range unless I'm doing video transcoding which makes it hit closer to 80% - 95%. I'd think the lost cycles from a lack of branch prediction would hit transcoding and other CPU-intensive things really hard, but maybe not so much for other random, everyday things.
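If anyone wants to check their own headroom, here's a rough sketch that logs overall and per-core utilization for a minute. It assumes the third-party psutil package is installed (pip install psutil); run it while idling, then again while transcoding, and compare.

```python
# Rough utilization logger: sample per-core CPU usage for a while to see how
# much headroom everyday use actually leaves. Requires the third-party
# psutil package.
import psutil

SAMPLES = 30      # number of samples to take
INTERVAL = 2.0    # seconds per sample

for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=INTERVAL, percpu=True)  # blocks INTERVAL sec
    total = sum(per_core) / len(per_core)
    print(f"total {total:5.1f}% | per-core {per_core}")
```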

Eh, maybe, maybe not. I just think it's interesting that the Pi's CPU (along with some other embedded CPUs) skips speculative execution entirely -- mostly to save power and keep heat down. Clearly the Pi's CPU is no speed demon, but it's "good enough" for a lot of uses. At what point might an Intel CPU without speculation be "good enough"?

Comment Re:Is this unexpected? (Score 1) 218

Exactly.

Also, it's a bit unfair to define the "desktop" as just a tower system when there are all-in-one PCs and laptops with docking stations that have all the peripherals of a PC. I've worked several places where everyone had a laptop on a dock and would take the laptop home if they needed to do more work over VPN. For the user, the difference between a docked laptop and a tower was slim to none while at work. Even for the internals of a "desktop," we're starting to see new PCIe and SATA form factors that fit inside a laptop. The lines are blurring.

I've got a laptop, a desktop, and a tablet (Nexus 7 2013). That desktop is about 11 years old, but it still plays Netflix, YouTube, and other streaming media just fine. I'm about to replace it with a new gaming PC because, while my 4-5 year old gaming laptop can play most games just fine, I want to stream games on Twitch and do other things in the background while gaming, and still have the laptop free for other tasks (plus, it'd be nice to have a 6+ core PC for video transcoding).

I'm atypical -- most people I know have a 5 to 8 year old laptop and a cell phone. (I just use my tablet and have an ancient flip phone b/c I like the battery life and lower monthly fees).

So, yeah... the "desktop" is dead in the sense that all but power users moved to laptops with multiple monitors and the same peripherals as their old desktops. I even know gamers who use just a laptop. But mostly, the life cycle of these devices has lengthened because there's no killer application to motivate people to buy a newer one. I doubt VR will be that killer app -- maybe AI will.

Unless you're in CAD/design/animation/special effects/video encoding, etc... laptops are fine... and even the monstrous tower PCs are good for a decade unless time is an important factor. Even some of those high-end jobs can be offloaded to a supercomputer / "cloud".

Comment Re:I don't think it's just because the CPU is chea (Score 2) 116

Considering how little CPU the average PC user actually consumes, there may be an argument that desktops and laptops don't need it either... maybe even data centers, where large caches matter more than branch prediction.

I know, I know... insanity! Branch prediction is something like 75% to 99% correct, so it's not that much of a waste... and pipelines are long... but Intel just helped put out a patch (KPTI) that separates kernel and user page tables -- meaning extra TLB work on every switch between user mode and kernel mode -- and your average user can't tell their machine is 5% to 20% slower... b/c their quad-core CPU is idling below 10% anyway.

The trend lately is towards low power chips for users at all levels -- even data centers... and trending towards mobility for users. Phones, tablets, chromebooks, laptops that are basically tablets, etc.

If I have to choose between a fat cache that doesn't get flushed in the name of security and speculative execution... for whatever Spectre-related exploit may be around the corner, I think I might rather keep the cache. If every time a file server randomly accesses a database its translation caches get flushed to protect against a bug, I could see where turning off branch speculation (if that were even possible) might be a better solution than flushing. A cache miss is huge -- a couple of orders of magnitude more cycles than waiting a handful of cycles to execute something that wasn't predicted but was at least pre-fetched.
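To put rough numbers on that trade-off, here's a toy comparison in Python. Every cycle count below is a ballpark assumption pulled from typical published figures, not a measurement from any particular CPU:

```python
# Rough cost comparison (all numbers are ballpark assumptions, not measurements):
# how a branch stall with no speculation compares to going all the way to DRAM
# after the caches have gone cold.
APPROX_CYCLES = {
    "L1 hit": 4,
    "L2 hit": 12,
    "L3 hit": 40,
    "DRAM (cache miss)": 250,
    "branch stall (no speculation)": 15,   # assume roughly one pipeline length
}

baseline = APPROX_CYCLES["L1 hit"]
for event, cycles in APPROX_CYCLES.items():
    print(f"{event:32s} ~{cycles:4d} cycles ({cycles / baseline:5.1f}x an L1 hit)")
```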

Comment Re:tl;dr (Score 1) 116

False. Out-of-order execution alone isn't enough. Spectre was NAMED after speculative execution, which rides on branch prediction. The ARM core in the Raspberry Pi (an in-order Cortex-A53) DOES NOT speculate that way.

Read the section for Spectre here:
http://www.pcgamer.com/what-yo...

Meltdown affects mainly Intel because their CPUs defer the privilege check on a load until after it has already been speculatively executed. Spectre affects many CPUs that use branch prediction, but it's much harder to exploit, since each exploit has to be tuned to a specific CPU or CPU family -- it's not a generic exploit that works on nearly every Intel CPU since the mid-'90s the way Meltdown does.

Many CPUs use branch prediction and speculation to gain performance when they guess correctly, but that comes at the cost of extra power and the risk of wasting work on bad guesses -- which is why some simpler ARM cores (like the one the Pi uses) skip aggressive speculation entirely.

Comment Re:Ubuntu vs. Mint, Cinnamon vs. Mate (Score 1) 124

Eh, ymmv. Mint is more focused on providing a very stable, very polished experience. It does not necessarily update when Ubuntu updates; you could think of Mint as a kind of Ubuntu LTS release. In fact, the next Mint will be based on an Ubuntu LTS release. So if you have newer hardware, you may want Ubuntu instead for a newer kernel and better driver support (along with newer versions of your favorite software). I personally prefer Ubuntu because it is more cutting edge with Nvidia drivers, Wayland support, and newer kernels. I have used Mint in the past on older hardware, and it was beautiful. Now I use Ubuntu with Cinnamon -- though you should know that Cinnamon on Ubuntu doesn't have the polish or all the features of Cinnamon on Mint. So I sometimes boot to Gnome on Ubuntu -- mostly to test out Wayland support.

As for Cinnamon vs Mate -- you're really comparing a Gnome 3 fork to a Gnome 2 fork. Cinnamon has more features, but takes up more resources. I'd prefer Cinnamon -- but on older, slower machines with limited RAM, Mate is pretty darned nice. I also keep Mate installed just in case I have an issue with another DE.

I like to test bleeding-edge development builds of Ubuntu, so I get my fair share of bugs, and it's great to have a backup DE like Mate to switch to when a bug won't let me load into Gnome or Cinnamon.

Notes on Cinnamon -- before Cinnamon 3.2, it was really buggy and crashed often on multiple systems I had it on. It used to leak memory like crazy and have to be restarted... but by 3.4 it was rock solid and gorgeous. I think it's up to 3.7 now.

Notes on Mint -- a couple of years back I had an issue where I couldn't enable zram b/c it wasn't supported on the kernel that came with Mint -- or any kernel in the Mint repo. So I manually installed a newer kernel (3.14, I think) and got that working properly. Then I had an issue playing an H.264 file that a newer version of VLC fixed... but I had to install that VLC manually b/c it wasn't in Mint's repo. Same thing for a driver issue... Eventually, I had SO MANY bits of software I was updating manually for compatibility reasons (like keeping the same version of LibreOffice as another machine, etc.) that I ended up adding the entire current Ubuntu repository to Mint... which effectively merged Mint with Ubuntu. That worked for a bit... until I hit some nasty dependency issues as my hybrid Franken-Mint-Ubuntu choked on some updates and changes. I ended up wiping it and just installing Ubuntu with Cinnamon, since obviously I couldn't live with plain vanilla Mint.

You may have no issues at all with what Mint has to offer -- especially since it's caught up to Ubuntu for the next release... but as things drift, you may change your mind. It depends on whether you're an LTS kind of person or a twice-yearly-release kind of person -- or, like me, twice-yearly on some things and rolling/development on others.

Comment Re: why is intel saying many different vendors?? (Score 5, Informative) 375

AMD checks privileges before it speculatively runs the code. Intel chose to optimize speculative execution in a way that checks privileges AFTER the code has run, but before the results are committed. That leaves a small window in which someone can observe the side effects of that illegal read (left behind in the cache) before the work is thrown away and flagged as an exception.

I've read speculation that Intel likely gained some performance by letting a lot of speculative work run and then dumping whatever gets flagged after the fact, instead of checking each and every access before it runs (a lot of speculative work gets dumped anyway for other reasons, so it's a small price to let things run and be wrong). I don't know for sure, though. Sounds to me like they skimped on silicon for the up-front check and put more into speculation.

Basically the code runs like this:

Hi, I'm a user program with user rights. I'd like to know where the super secret memory address of this part of the system is so I can read from it... and maybe even write to it later with a different exploit.

AMD: No, you're in user land, you can't see kernel land.
end of story

Intel: Oh, let me fetch that for you... Here, I've typed up a handy map of things and notes on your way around the super-secret areas... just show me your security clearance first before I hand it over.
Your malware: *glances at map, notes*
Intel: WAIT... you're in user land. You can't have this. *lights the map and notes on fire after you've already seen them*
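If it helps, here's the same analogy as a toy Python model -- not an exploit, no real timing side channel, just the ordering difference and the footprint it leaves behind. All the names and values are made up for illustration.

```python
# Toy model of the ordering difference (not an exploit, just the analogy in code).
# "cache" here is a plain set standing in for which lines got warmed up.

KERNEL_SECRET = 42          # value at a privileged address

def speculative_load(check_privileges_first: bool, cache: set):
    """Simulate a user-mode read of a kernel address."""
    if check_privileges_first:
        # AMD-style in this story: refuse before any work happens.
        raise PermissionError("user land can't see kernel land")
    # Intel-style in this story: do the work first...
    cache.add(KERNEL_SECRET)          # side effect: a cache line gets warmed
    # ...then notice the privilege problem and throw the result away.
    raise PermissionError("too late -- the cache already changed")

for label, check_first in [("check-first", True), ("check-later", False)]:
    cache = set()
    try:
        speculative_load(check_first, cache)
    except PermissionError:
        pass
    # The architectural result is identical (an exception), but the
    # microarchitectural footprint differs -- and that's what Meltdown measures.
    print(f"{label}: cache footprint afterwards = {cache or 'empty'}")
```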

Comment Re:YVR (Score 1) 294

More like... the monorail has to do something useful. When I stayed there not long after the monorail opened, it was difficult to find, and it took longer to walk to the monorail and then on to my destination than to just walk to my destination directly. I stayed at Bally's. I could walk anywhere I wanted to go -- the sidewalks in Vegas are wide, and there are pedestrian bridges over the busy streets. The Strip is really designed for walking -- plus, the entrances are beautiful and face the street.

The track is 4 miles long, and it's hidden a block over from the Boulevard, behind the huge hotel/casinos on one side of the Strip. I checked Google Maps, and it's over 1,000 ft from the entrances of most of the hotels -- sometimes double that.

Here are the stations:

SLS, Westgate, Las Vegas Convention Center, Harrah's / The Linq, Flamingo/Caesars Palace, Bally's / Paris Las Vegas, MGM Grand

Now note -- this covers only a 4-mile stretch and only one side of the street. It feels like you're in a back alley trying to find this monorail. Beyond that, there are really only a few of these stops I'd care to visit anyway -- Bally's, Paris, Caesars, and the MGM Grand. Those are all pretty close to one another, so there's no reason to walk over 1,000 ft to a monorail just to walk another 1,000 ft to the entrance of one of those destinations. Also, since it runs on only one side of the street, to get from Bally's to, say, the Luxor, it'd be 1,000 ft to the station, a fee, a wait, then 1,000 ft from the MGM Grand station back to the Boulevard, then crossing the street and walking some more to reach the Luxor. If it actually went from Bally's to the Luxor, it might possibly be worth it, but I could take a taxi if I didn't feel like walking that far anyway.

The monorail doesn't even connect to the international airport! You'd think if I flew into Las Vegas's McCarran International Airport, I'd be tempted to buy a monorail ticket if it meant I could load my luggage on and be ferried to my hotel.

I've been on public transportation that was useful. Atlanta's MARTA is awesome. How Vegas screwed this up, I don't know... but it's a monorail to nowhere, useful mainly for people who want to travel to and from the monorail itself.

Comment Re: Where's the story here? (Score 2) 679

There is no law requiring businesses to accept payment in cash. Even businesses that choose to take cash can refuse certain coins or bills. Ever seen a sign that says "exact change only" or "no bills larger than $10 accepted"?

Governments must accept cash, but businesses can do what they like. They could charge you in jelly beans if they wish. If you take their goods or services without paying the agreed/posted amount of jelly beans, you'd be guilty of theft or theft of services. That could land you in civil court if it were a contract payment -- say you were to pay 5,000 jellybeans per month and suddenly stopped shipments. In that case they'd sue, and a judge would compel you to produce either the required jellybeans or a cash equivalent. If you simply took an item without paying in jellybeans, you'd likely be arrested, taken to criminal court, and then have to return the stolen items or make restitution for the stolen services in either jellybeans or cash... plus additional fines, jail time, etc.

This isn't some undefined area of law that hasn't been explored. Physical US bills and coins are legal tender for state and federal governments. There is zero legislation compelling businesses to use them. There are businesses in the USA that do business using strips of precious metals -- because they have lost faith in US currency. There are businesses that exclusively use tokens -- like casinos in Vegas that use them for gambling. There are some businesses that exclusively barter for items and have no cash involved whatsoever!

Credit cards, debit cards, pre-loaded cards, and gift cards aren't radically different from tokens. Anyone can go to Walmart and buy a pre-loaded Visa or Mastercard without having a bank account, much less good credit. You can argue that they're discriminatory all you like, but not only is that a poor argument, there's no legal standing for disallowing such discrimination. Most for-profit entities can't discriminate based on race, sex, religion, and various other protected factors, but there's no law against discriminating against poor people -- and no law against discriminating based on credit rating either.

Frankly, most online businesses already require a credit card of some sort, and the few that don't require a checking account instead. (A few rare ones will take a cashier's check or a MoneyGram, but hey... you may as well get a pre-loaded card if you're going to that much trouble!) No online business takes cash through the mail, and most don't have a physical presence where you could pay cash even if you wanted to.

So, yeah, I personally think it sucks that fewer places are taking cash, but it's not illegal. Never was -- won't be no matter how much you hold your breath, turn blue, and act like Donald Trump by doubling down on something when you're wrong.

Comment Re:Isn't this good? (Score 3, Interesting) 135

Yes and no. With a proper firewall, no one can scan your network for devices, since it should only allow in traffic that's a reply to outgoing traffic. But sites you visit from IPv6 devices do see each device's full, globally unique IPv6 address -- so, say, Facebook or Netflix might know exactly how many devices in your home connect to their services... BUT they mostly know this anyway through cookies, device IDs, browser fingerprinting, etc.

NAT is a hack and not a security feature. It has its own security issues as well.

https://www.internetsociety.or...

IPv6 is only bad if you have no proper hardware firewall between your ISP and your network... or if your ISP is spying on your traffic (in which case you have bigger issues and need a VPN).

Comment Re:People Still Use Desktops? (Score 1) 383

Banks, credit card processors, universities, and government agencies still run on mainframes and midrange systems like the IBM AS/400 (usually one primary machine and another as a backup).

They're great for what they do, and they aren't going anywhere anytime soon. They have a life cycle measured in decades, and usually the code is ported or rewritten/optimized for the newer model of the same platform when a machine is replaced.

US Bank's credit card processing mainframe is enormous -- takes up a whole floor in a sub-level.

Comment Re:I have no problem with systemd (Score 1) 751

I've been using Debian since before Sarge was released in 2005... and tons of distros since -- the latest being Ubuntu 17.10 (plus a VM dev box for 18.04). I can honestly say that as an end user running mostly desktop software, the switch to systemd has been uneventful. I haven't experienced any issues -- even when running VMs or running Apache to serve pages from them.

But, I'm not a sysadmin. I've supported Windows, Linux, and IBM AS/400 equipment... including servers... and, I guess our configuration changes so little that systemd has had no impact.

I have no dog in the systemd fight... but Ubuntu decided to follow the upstream changes at Debian, and Debian decided to go with systemd, which was mostly developed by Red Hat. All of the other major Linux distros switched as well... so I defer to their wisdom and depend on their support.

If another distro without systemd performs better than Ubuntu for my needs AND has the same or better level of support, then maybe I'll switch. 'Til then, I'm going with systemd merely because it's the default on my favorite distro. It's of as little consequence to me as if they changed the distro's default Bluetooth app: maybe others would complain, but I have no Bluetooth devices on my Linux PCs.

Let the devs duke it out over which software they want to include and support. If they choose poorly, I'll switch distros.

I'm glad there are forks for those that dislike systemd. Linux is about customization and freedom. I wish them well.

Comment Re:To me AMD is shooting themselves in the foot (Score 2) 169

It's a brilliant move. If it's successful, they'll control the GPU-on-CPU market for the 64-bit x86 platform globally. Intel still holds the majority of sales, and it would be painful to compete with them head-on: Intel has enough cash to match AMD at any price point, and enough volume and customers to out-produce and out-market AMD. But if AMD gets a cut of every Intel sale because of the GPU, they get a lot of cash for a bit of support work, raise brand awareness for Radeon, and grow their GPU market share against their more direct rival, Nvidia. Right now Intel ships more consumer GPUs than Nvidia and AMD combined, and many of its on-die GPUs are good enough for most users... but this will make them good enough for most gamers!

Long term, AMD can kill Nvidia by providing decent gaming graphics for both AMD and Intel CPUs and offering discrete cards for the cases where integrated graphics aren't good enough... and they'll likely come up with a way for the embedded GPU to work alongside the discrete GPU in a CrossFire-like arrangement. They already do something like this with separate AMD cards of different types. Nvidia most likely won't be able to take advantage of that capability.

Comment Re:Minerals? (Score 5, Insightful) 341

I'm not sure how to parse your word-salad.

You do know that cars today are built largely from aluminum and steel -- both of which are recycled at very high rates. EVs currently depend on lithium-ion batteries. Pretty much every electronics store not only has a recycling bin for mobile electronics but encourages you to use it. Why? Well, sometimes they're legally required to... but lithium-ion battery recycling is the best thing since sliced bread for manufacturers who use those batteries in their products. Ever crack open one of those iPhones or Samsung Galaxies? Most of what's inside, by mass, is the lithium-ion battery. Recycling them isn't difficult. Do you have any idea how much cheaper it is to reuse aluminum, steel, and lithium than to dig raw material out of the ground and refine it?

Teslas aren't made to be replaced every 3 years... most electronics aren't -- just phones and tablets, because they evolved quickly... and even those are now starting to stretch out their expected lifespans. Computers used to be the same -- new every 2 years for every business... then every 3... then every 5... and now lots of places have 7 or even 10 year old PCs running Windows 10 just fine. The TREND is the opposite of what you describe: new technologies evolve fast, then older ones stagnate, growth curves flatten, and products last longer.

Teslas have fewer moving parts and fewer parts that need maintenance, so your basic gasoline-powered car has more throw-away parts. The Tesla's biggest expense and liability is its lithium-ion battery pack... which they keep improving, and by entering the battery business themselves, they have a stake in improving the batteries and lowering their costs -- which will eventually include recycling the lithium from old packs as well. There's no reason a Tesla couldn't run for decades with nothing more than swapping older battery packs out for new ones and recycling the old ones.

Further, the USA has barely scratched the surface of its mineral resources. We have confirmed rare-earth and lithium deposits we aren't touching -- because China is mining away just fine, for less than it would cost us to bother... especially considering the environmental impact of mining in our own back yards. There is no shortage, and no future shortage in sight -- just corporations staking claims to control the current sources of raw material, which is no different from any other time in history. If and when it becomes worthwhile, we'll dig our own and build our own refineries... but more likely we'll recycle what we have first -- just as with aluminum and steel, and to a lesser degree copper and other precious metals. We do send most of our electronics recycling (other than lithium) to China, where a nasty process extracts gold, palladium, platinum, and rare-earth metals from motherboards. It's already become more profitable to get some of those metals from old electronics than from raw ore.
