
Comment Re:so.. they 'invented' this? (Score 2) 118

Chevy seems to be doing pretty well in this area, actually. Like Tesla, the Chevy Bolt EV has active heating and cooling (as does the Chevy Volt, a PHEV). The Bolt has a dedicated heater for the battery coolant loop, and uses the same cooling as the air conditioning. When fast charging, it does start off slowly in cold weather, but it can get faster as the battery heats up. Chevy is more conservative than Tesla in many ways, though; for example, even fast charging on the Bolt is limited to 1C (not that there are many CCS chargers that can do better than that yet anyway), while Tesla Supercharging is much faster. But Chevy seems to be capable of making good, reliable EVs so far.
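
For anyone unfamiliar with C-rates, the arithmetic is quick (the ~60 kWh pack size below is my round number, not an official Chevy spec):

```python
# Rough C-rate arithmetic for DC fast charging. All figures are my own
# round-number assumptions, not official specifications.
pack_kwh = 60.0     # assumed usable capacity of a Bolt-sized pack
c_rate = 1.0        # "1C" = a charge rate that would fill the whole pack in one hour

charge_power_kw = pack_kwh * c_rate             # ~60 kW peak draw at 1C
minutes_10_to_80 = (0.80 - 0.10) / c_rate * 60  # ~42 min, ignoring taper and heating

print(f"1C on a {pack_kwh:.0f} kWh pack is about {charge_power_kw:.0f} kW")
print(f"A steady 1C charge from 10% to 80% takes about {minutes_10_to_80:.0f} minutes")
```

So a 1C cap lines up pretty well with the ~50 kW CCS hardware that's actually deployed today, which is why the limit doesn't hurt much in practice yet.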

Comment Re:What amazing me (Score 1) 141

Except the video you cite was released by the police, not by Uber. Maybe Uber requested that it be released, but it seemed like the police were just trying to fill the information vacuum with what little information they had. Since the video is half-damning (the safety driver appeared to be distracted) and half-vindicating (the pedestrian was not easily visible), it's not really a great thing for Uber. The better PR came from initial reports, like that she "stepped out from behind a bush in the median" or something like that, when it appears from the video that she had already crossed a few lanes.

Comment Re:Come on, who would have no hit her? (Score 1) 953

Why are they even testing their cars on live streets at this point? Everyone agrees that at least level 4 is required to be truly safe and companies are still trying to master level 3.

There are two different approaches to this. The car companies are in the business of shipping products, so each year they try to improve a little, and every few years they move up a level in automation. Many of the newcomers in this arena (Google, Uber, etc.) are going straight for level 4-5. (Level 4 is the same as 5, but limited to a known geographical area and/or appropriate driving conditions; the car is capable of a safe stop with no human if conditions become inappropriate for it to continue.) Google has commented on this before: in an early pilot program many years ago, they let some of their employees test the self-driving cars for commuting. It didn't take long before the drivers gave the cars too much trust and started ignoring the road - one even climbed into the back seat to get his cell phone charger while driving down the highway. That clued Google in to the fact that if you give users something that's good 99.9% of the time, they're going to trust it 100% of the time, and cause horrific accidents 0.1% of the time. So Google and others are skipping the driver assistance tools entirely, while the automakers keep shipping them because they have to sell cars every year.

Incidentally, that's why the guy in Florida died in his Tesla with autopilot on; he trusted it too much, and was watching a DVD instead of paying attention to the road. Tesla autopilot is a level 2 system, and requires the user to always be paying attention for safety.

Comment Re: a driver sitting ready to take over is not the (Score 1) 953

At least from what Google has said about their testing, I think the safety driver is trained to take over immediately. They can take the data out of the car later and replay it in simulation to see if the car would have handled it correctly or not, so there's no point in taking any risks to try it out in meatspace. (They can also then tweak the scenario in many ways to see what would've happened in thousands of similar scenarios, and to make sure that future versions of their software continue to behave appropriately.)
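
To make the replay idea concrete, here's a toy sketch of what that loop looks like; the record fields and the planner interface are entirely made up by me, not anything Google (or anyone else) has published:

```python
# Toy log-replay harness: feed recorded frames to a candidate planner and see
# what it would have decided around the moment the safety driver took over.
# All field names and the planner interface are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float            # seconds since the start of the log
    obstacle_distance_m: float  # closest obstacle ahead, from recorded sensors
    ego_speed_mps: float        # vehicle speed at that moment

def candidate_planner(frame: Frame) -> str:
    """Stand-in for the real planning stack: returns 'brake' or 'continue'."""
    time_to_collision = frame.obstacle_distance_m / max(frame.ego_speed_mps, 0.1)
    return "brake" if time_to_collision < 2.0 else "continue"

def replay(log: List[Frame], disengage_time: float) -> None:
    for frame in log:
        if frame.timestamp >= disengage_time:
            # The safety driver actually took over here; check what the
            # software *would* have done at and after that point.
            print(f"t={frame.timestamp:.1f}s planner says {candidate_planner(frame)}")

# Perturbing the scenario is then just a matter of editing the frames (move the
# obstacle closer, raise the speed, add sensor noise) and re-running replay().
```

Presumably the real systems replay full sensor data through the entire stack, but the shape of the workflow is the same.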

I have no idea how Uber does their testing, but for the sake of the whole self-driving industry, I really hope they take it a little more seriously than they've taken literally anything else they have ever done...

Comment Re:Liability (Score 1) 139

Like most things reduced to a quick paragraph summary or soundbite, the details are missing. The "exemption" isn't a blanket "if your car is self-driving, it can break all the rules" exemption. Instead, it allows a manufacturer to apply for permission not to meet a given safety rule, provided they demonstrate that their alternative is at least as safe as the rule requires. This exemption process already exists, but the bills would modify it so that it explicitly applies to the development of autonomous vehicles.

As an example of how this process works, let's say you develop an innovative seatbelt replacement. Maybe it's an ejection seat, or maybe it instantly fills the car with foam to absorb the impact, or whatever. If you can prove that your technology is at least as safe as the seatbelt it replaces, then there is a procedure you can use to legally sell that vehicle without the otherwise-required seatbelts.

The point is that the government knows that the technology in this area is moving so fast that no laws will be able to keep up with it. For now, they can use this existing mechanism to ensure that autonomous vehicles are still safe for people both inside and outside them, without having to come up with codified rules that will quickly become obsolete.

Nobody wants autonomous vehicles careening wildly across the landscape. But what I don't want is for technology that could save over 30,000 lives a year to be held up because a politician a few thousand miles away is too busy deciding on all the other ways they'd rather rob and kill their constituents... ;-)

Comment Re:I don't get it. (Score 1) 69

Maybe I'm missing some sort of killer feature; but it looks like their power budget forced them to axe a pretty substantial percentage of the 'smart'

Yup. I don't feel like reproducing my math, but I calculated (given the Apple Watch's battery size and battery life) that in order for a device with that power budget to be solar powered, you'd have to have the entire face of the watch in direct, summer, noontime sunlight for something like 30 hours a day. Even the less astute among us should be able to detect a slight problem with that plan.
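
If you want to redo that estimate yourself, the structure is simple; the exact answer swings a lot with what you assume for usable area and cell efficiency, so treat the numbers below as my placeholders rather than a reconstruction of my original math:

```python
# Back-of-envelope: hours of ideal noon sun needed per day to run an
# Apple-Watch-class power budget off a watch-face solar cell.
# Every figure here is an assumption, not a measured spec.
battery_wh = 1.0        # roughly a first-gen Apple Watch battery
battery_life_h = 18.0   # Apple's quoted "all-day" battery life

avg_power_w = battery_wh / battery_life_h   # ~55 mW average draw
daily_energy_wh = avg_power_w * 24          # ~1.3 Wh needed per day

panel_area_cm2 = 8.0         # usable face area after bezel and body
irradiance_w_per_cm2 = 0.1   # ~1000 W/m^2 of direct noon sunlight
efficiency = 0.05            # small cells buried under display glass aren't great

panel_power_w = panel_area_cm2 * irradiance_w_per_cm2 * efficiency  # ~40 mW
print(f"Hours of perfect sun needed per day: {daily_energy_wh / panel_power_w:.0f}")
```

More generous assumptions shrink the number, but not to anything a wrist in a sleeve is actually going to see.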

My guess is that the display is probably the most power-hungry component, followed by the processor and radios. It's easy to save power in the CPU and radios by turning them off as quickly as possible - short bursts of high power followed by long periods of inactivity (and zero power draw) will consume minimal power from the battery. But for the screen, it's awfully hard to do that sort of thing, because you want to be looking at it! So, short of inventing some new display technology that uses very minimal power, the easiest thing to do is get rid of the display completely, and all the features that require it.
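
The duty-cycle argument is just a weighted average; here's a tiny illustration, with power numbers I invented purely for the sake of the example:

```python
# Why bursty radios/CPUs are cheap and an always-on display is not.
# All figures below are illustrative guesses, not measurements of any real device.
def avg_power_mw(active_mw: float, idle_mw: float, duty_cycle: float) -> float:
    """Time-weighted average power for a part that is active duty_cycle of the time."""
    return active_mw * duty_cycle + idle_mw * (1.0 - duty_cycle)

radio   = avg_power_mw(active_mw=200.0, idle_mw=0.01, duty_cycle=0.005)  # brief sync bursts
cpu     = avg_power_mw(active_mw=400.0, idle_mw=0.05, duty_cycle=0.01)   # wake, compute, sleep
display = avg_power_mw(active_mw=40.0,  idle_mw=0.0,  duty_cycle=0.25)   # on whenever you glance

print(f"radio ~{radio:.0f} mW, cpu ~{cpu:.0f} mW, display ~{display:.0f} mW")
```

The radio and CPU averages can be driven toward zero by sleeping harder; the display's duty cycle is set by how often you want to look at it, which is the whole reason it exists.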

Comment Re:Market (Score 2) 474

That's not true; GPUs basically always use the latest process technology available, just like CPUs. Recently, there have been some degenerate cases where a new process is (at least initially) slower and more expensive than the previous one; but in general, they always move to the latest and greatest process, once that process is capable of making a better product.

As for die size, the big GPUs are way bigger than CPUs. A 22-core Xeon Broadwell E5 from 2016 is 7.2 billion transistors, and 456 mm^2. The NVIDIA GP100 chip (also 2016) is 15 billion transistors, and 600 mm^2. The AMD Ryzen (2017) info I can find says it's (probably up to) 4.8 billion transistors.

I have no idea what you mean by "tolerances". Maybe you mean "process variation", which is a natural part of any semiconductor manufacturing - and is controlled by the fab (TSMC, GlobalFoundries, Samsung, Intel), not the chip designers (Apple, NVIDIA, AMD, ummm Intel again). The design houses ship off the chip they want - and the fab produces it, with some chips a little hotter/faster than others. Over time, they can tighten up the process so it has less variation and higher yields, but nobody is "running wild" with anything.

It's complicated too, because the node names are really just marketing hype. Just as "Kaby Lake" is a name that Intel gave to a collection of optimizations put in a single chip, or "Pascal" is a name that NVIDIA gave, or "Ryzen" is a name that AMD gave – 14 nm is a name that some fab gives to their latest collection of optimizations. There's no one measurement that corresponds with the marketing name any more, like there was until the early 2000s. [citation] The upshot of this is that Intel's 14 nm isn't the same as TSMC's 14 nm or GloFo's 14 nm, so you can't necessarily compare them. Intel does generally have an advantage in this space, however. That said, everybody pretty much uses the latest, greatest process technology available to them from the fab they have chosen. And it is often the case that a GPU is one of the first things manufactured in a new process at a fab, so they aren't benefitting from anybody prior - especially not at a different fab, because the fabs don't share their secrets, or even the same set of features (as noted previously).

Also, with a brand new process, yields can be very low, so a given company may choose to reduce their risk by making their first chip on a new process either a die shrink of a previous chip, a minor revision to an existing architecture (Intel's "tick"), or a small low-performance chip. Once the kinks have been ironed out on one of those "easy" options, they can shift the bigger, higher-performance chips to the new process. But in some cases, if they started out on the big chips, the yield would be 0% - or if not 0%, the cost of an individual chip would be so high that no consumer would ever pay for it.

And while I will grant you that GPUs have *less* cache, they do still have some caches and other memories. A GP100, for example, has 14 MB of register files, 4 MB of L2 cache, 3.5 MB of shared memory, and 1.3 MB of L1 cache. That's still well shy of the 22-core Xeon I mentioned earlier, which can have up to 55 MB of LLC, but it's a pretty good amount all the same.

The real reason that GPUs have always outpaced CPUs is that they are inherently parallel. In addition to all the architectural optimizations that are made every year, they also add more cores every year, while most of us are still using something in the vicinity of quad-core CPUs, just like we were 5 years ago. The parallelism of GPUs also gives them more freedom to make architectural changes that yield throughput enhancements. A CPU is largely targeted at single-thread performance, so most of the optimizations its designers make will enhance that. A GPU architect can make similar optimizations to enhance a single thread's performance, but they can also make changes that only help parallel computation.
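
To put rough numbers on that gap: the GP100 figures below are published specs, while the CPU side is a generic quad-core with AVX2 FMA that I picked as a stand-in, not any particular SKU.

```python
# Peak single-precision throughput: cores x clock x FLOPs issued per core per cycle.
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    return cores * clock_ghz * flops_per_cycle

# NVIDIA GP100: 3584 FP32 CUDA cores, ~1.48 GHz boost, 2 FLOPs/cycle via FMA.
gpu = peak_gflops(3584, 1.48, 2)   # ~10,600 GFLOPS

# Hypothetical quad-core CPU: 4 cores at 4.0 GHz, 32 FLOPs/cycle
# (two 256-bit FMA units x 8 FP32 lanes each) - assumed, not a specific part.
cpu = peak_gflops(4, 4.0, 32)      # ~512 GFLOPS

print(f"GPU ~{gpu:,.0f} GFLOPS vs CPU ~{cpu:,.0f} GFLOPS ({gpu / cpu:.0f}x)")
```

That says nothing about how hard it is to actually keep those cores fed, of course, but it shows where the headroom comes from.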

So GPUs are arguably more advanced than CPUs, or at the very least on par with them - and they will continue to outpace CPU development for the foreseeable future as well.

Comment Re:Low Hanging Fruit (Score 1) 2

I'd also venture that it's occasionally possible to make a huge improvement somewhere, but it takes many years of development to get there. By the time your 8-year research project has given you a 16x speedup and is ready to be released to the world, all the incremental changes everybody else has developed have caught up and are just as fast.
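
The compounding math backs that up: even a modest-sounding rate of steady improvement covers a lot of ground in eight years (the 40%/year figure below is just an illustration, not a measured industry rate).

```python
# How incremental gains compound over the life of a long research project.
annual_gain = 0.40   # assumed steady year-over-year improvement, purely illustrative
years = 8
print(f"{(1 + annual_gain) ** years:.1f}x after {years} years")  # ~14.8x
```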

Comment Re:What's the big problem? (Score 1) 675

As a Canadian that recently moved to the US, the system here is utterly ridiculous and broken.

No argument here.

I never know when I should swipe vs insert the chip, I have never been asked for a pin, sometimes I have to sign and sometimes I don't (there doesn't seem to be a clear limit), and there's no tap-to-pay.

You can always ask. The monkey behind the counter probably knows. And most terminals I've seen have some sort of indication - for example, a light by both the swipe slot and the dip slot (or whatever you call them). If only the swipe light is lit, you swipe. If both lights are lit, you dip (assuming you have a chip card). Or the POS will say either "Please swipe card" or "Please swipe or insert card".

If you want to use a PIN in the US, you can always get a debit card. But otherwise, you will never be asked for a PIN on a credit card.

The no-signature rules vary by credit card issuer, merchant type, merchant implementation, and purchase amount. For certain types of purchases, certain types of merchants, or merchants who have not implemented it, you will always be asked for a signature. Otherwise, there is generally a $25 or $50 limit, with the higher one usually applied to grocery stores and big box retailers.

Tap-to-pay support is still quite limited, that's true. But one of the nice things about the chip-and-signature transition is that all these merchants are having to buy new POS terminals, most of which include tap-to-pay hardware. It will just take a while for them to update their software and turn the feature on - plus some of them are still hoping that if they don't let you use Apple Pay et al., they can steer you onto their own alternative, CurrentC. (I don't actually know its current status, but it seems... questionable at best. All the same, the stores that backed it are at least a year or two behind everybody else in implementing tap-to-pay – and everybody else is a year or two behind the wide release of phones that support it.)

Comment Re:Strong enough for a man, made for a woman (Score 1) 858

Sex in the city still rated around 6 among men, so they did like it, just not as much as women. Yet still the headline was that men sabotaged the votes.

It occurred to me that there is a relevant quote in the article regarding this:

Distilling any work into a single number strips out a substantial amount of meaning. ... To understand the whole picture, you need to dive into the data.

Except I would word it thus:
Distilling any work into a single headline strips out a substantial amount of meaning. ... To understand the whole picture, you need to dive into the text.

Yeah, the headline is a bit inflammatory. Welcome to 2016. The point is really that, whether intentionally (as the word 'sabotage' would imply) or not, men drag down ratings of certain shows that are more liked by women.

Comment Re:Strong enough for a man, made for a woman (Score 1) 858

If you don't like those shows why not just communicate that to your partner? If she cares about you she won't force you to do things you hate doing, and will find something mutually enjoyable for "quality time". If she doesn't care and insists on torturing you, maybe it's time to reconsider your relationship.

My wife doesn't insist that I watch any particular show with her, but I usually join her anyway. And sometimes, we really agree on liking a show, which is great. But if I'm watching a show she doesn't like, she's sometimes quite vocal about it - often understandably so. If you're not interested in a TV show but all you can hear is the sound of skulls getting bashed together, I can understand how that would be disturbing, annoying, and/or distracting. On the other hand, if she's watching a show I don't like, I'm rarely so offended by it that I raise a stink. (There's no "Dear, there is too much empathy on the TV, can you turn it off?" or "I can't stand the predictable jokes and laugh track!" – on the other hand, if she thinks there's too much violence or sex or whatever, I'm likely to hear about it.) When I want to watch my TV, I wait until she's asleep or away from home.

The net effect is that I end up watching more of what she wants than she does of what I want. Most of the time, I watch her shows anyway, even if I'm not a fan (i.e. I'd give them middling ratings); she does not do the same on my shows. So, if the average man is like me, I could understand how the data would skew that way. Although I'm pretty sure the average man isn't like me, and there are some vindictive pricks out there too who just like crapping on other people's nice things. So I can *still* understand how the data would skew that way.

I think the conclusions at the end of the article are right on point:

Distilling any work into a single number strips out a substantial amount of meaning. ... To understand the whole picture, you need to dive into the data.

Comment Re:Brakes? Tires? (Score 1) 555

Coming up in next month's newsletter:
Eating bacon can extend your life by several years!
In a controlled study, one group of people ate only bacon for all meals, and lived several years longer than the control group! The control group consumed the exact same diet, but without the bacon. While the control group mostly died within a month or two, the bacon-eaters survived several years before succumbing to normal diseases like heart attack, arteriosclerosis, and uncontrollable joy from eating bacon.
