
Comment: Tesla DOES use laptop batteries (Score 2) 63

by DrYak (#49607355) Attached to: Tesla Adds Used Models To Its Inventory, For Online Purchase

No, the ones in our notebooks and phones don't last so long, because size and weight are more important than lasting 10 years. Cars are designed differently, for different longevity/size/weight tradeoffs than are portable electronics.

Except that Tesla (and Smart, and the few other cars using battery packs built by Tesla) uses *the exact same kind* of battery cells as regular laptops (on purpose, because they are cheap and easy to source thanks to the economies of scale at which they are produced).

The difference isn't the battery itself (it's the exact same cell); it's the battery management software and the usage pattern.

- Lithium batteries age with the number of cycles they go through. Laptops are routinely drained all the way down to 0% or nearly 0% (lithium batteries hate that), whereas most of the daily commutes Tesla cars are subjected to are short trips that only eat a fraction of their charge.

- The more violent the discharge rate, the faster a lithium battery will age. Under heavy load, a laptop battery will be completely drained in an hour or two at most. On the other hand, given its range and typical speed limits, it would take at least 4-5 hours to completely drain a Tesla. I.e.: overall the Tesla eats much more total power than your laptop (obviously), but each of its cells is put under less stress, as it needs to deliver a much lower peak current.

(The two points above are also the reason why *extended life* laptop batteries (e.g.: 9 cells instead of 6) tend to age much more slowly.)

- Lithium batteries are also very sensitive to temperature and environment. In a laptop this is barely controlled (the battery tends to sit right next to very hot components like the CPU and GPU), whereas Tesla car batteries have almost their own A/C system.

So in short:
- no, they are exactly the same batteries; but each takes a completely different kind of abuse, and so in the end they tend to age differently.
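The discharge-rate point can be put into rough numbers. A minimal sketch: the ~3.4 Ah cell capacity and the drain times are illustrative assumptions, not Tesla's or any laptop vendor's actual specs:

```python
# C-rate = discharge current relative to capacity: 1C empties a cell in 1 hour.
# All numbers below are illustrative assumptions.

CELL_CAPACITY_AH = 3.4  # assumed capacity of a typical 18650 laptop-style cell

def c_rate(drain_hours):
    """Average C-rate of a cell emptied in `drain_hours` hours."""
    return 1.0 / drain_hours

laptop_c = c_rate(1.5)  # heavy load empties a laptop pack in ~1-2 hours
tesla_c = c_rate(4.5)   # steady driving empties the car pack in ~4-5 hours

print(f"laptop cell: {laptop_c:.2f}C (~{laptop_c * CELL_CAPACITY_AH:.1f} A average)")
print(f"tesla cell:  {tesla_c:.2f}C (~{tesla_c * CELL_CAPACITY_AH:.1f} A average)")
```

Under these assumed numbers, the per-cell current in the car works out to roughly a third of the laptop's: that's the "less stress per cell" claim above.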

Comment: Public acceptance (Score 2) 46

by DrYak (#49605805) Attached to: Robots In 2020: Lending a Helping Hand To Humans (And Each Other)

I'm really surprised that fast food and other low-skill, low-wage work hasn't been replaced by robots already. {...} Fast food isn't a skill. It doesn't even come close to coffee shop barista {...} If it costs $200,000 per year to pay employees to work a fast food restaurant, and that cost can be reduced to $60,000 per year by the introduction of a half a million dollars of machinery that will last for a decade, these companies would be nuts to not replace workers with robots.

Indeed. But on the other hand, we humans tend to be social beings, and we tend to appreciate contact with other humans.
Some older people would insist that they *definitely* need a human being taking their order at the cash register, and that they *definitely* need to see humans flipping burgers in the kitchen behind it.
They would find it alienating to give their order to a machine and have their burger prepared by an assembly line.
Add to that the people who will be down in the streets protesting that they are losing jobs, and you can see why fast-food chains are a bit reluctant to start automating everything.

But old people get older, and newer, younger generations arrive. And our current generation is way too self-absorbed to care. We are too busy tweeting and posting on Facebook while in line to even notice whether our orders are taken by an automat or a real person: it's just a distraction delaying our reply to a YouTube comment on the smartphone.

The barrier to accelerating fast food with assembly-line-style robots isn't technical but sociological: the fast-food companies need the population to get used to it first.

Comment: Depends (Score 2) 76

by DrYak (#49596103) Attached to: Once a Forgotten Child, OpenSSL's Future Now Looks Bright

For the "many eyes" to work, there are quite a few requirements.

Yes, being open source is a requirement, but it is not the only one.

The code needs to actually be readable, and to attract users motivated to check it.
That wasn't the case: OpenSSL's code is notoriously crappy, with lots of bad decisions baked in. Any coder trying to review it will feel their eyes start to bleed.
It doesn't attract people who might review it; it only attracts the kind of people who want to quickly hack a new feature and slap it on top, without looking at what's running underneath.

The code also needs to be reasonably accessible to code-review tools.
Lots of reviewers don't painstakingly check every single line of code by hand; they use tools to do the checking. OpenSSL has accumulated such a series of bad decisions over the years that the resulting nightmare is resistant to some types of analysis.

Comment: Tool assisted review (Score 1) 76

by DrYak (#49596057) Attached to: Once a Forgotten Child, OpenSSL's Future Now Looks Bright

The problem is that some of the design decisions behind OpenSSL are so awful that some code-review tools just don't work well enough to detect bugs.

Heartbleed specifically resisted Valgrind, because the geniuses behind OpenSSL had implemented their own replacement memory-management functions in a way that defeats memory analysis.
So the memory problem went undetected.
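The general pattern is easy to illustrate. The toy sketch below is not OpenSSL's actual code, just the idea: when an application recycles buffers through its own freelist, the system allocator never sees a free(), so a malloc/free tracker like Valgrind has nothing to flag when stale contents are read back:

```python
# Toy freelist allocator (illustration only, not OpenSSL's implementation).
# "Freed" buffers are cached and handed out again without scrubbing, so
# tools that instrument malloc/free never observe the reuse.

class FreelistAllocator:
    def __init__(self):
        self._freelist = []

    def alloc(self, size):
        # Prefer a recycled buffer; its old contents are kept as-is.
        if self._freelist:
            return self._freelist.pop()
        return bytearray(size)

    def free(self, buf):
        # Never returned to the real allocator: no free() is ever seen.
        self._freelist.append(buf)

heap = FreelistAllocator()
secret = heap.alloc(16)
secret[:7] = b"private"
heap.free(secret)

leaked = heap.alloc(16)       # the same buffer comes back...
print(bytes(leaked[:7]))      # prints b'private': the stale data survived
```

A Heartbleed-style over-read of `leaked` then returns someone else's old data, and from the tracker's point of view it is a perfectly valid read of live memory.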

Comment: Corrections (Score 1) 370

by DrYak (#49576123) Attached to: Who Owns Pre-Embryos?

The problem is that there are NO children yet. Only cells with 2 half nuclei inside (= pre-embryos)

Small correction: apparently it is still called a "pre-embryo" even later than that, as long as it hasn't been implanted into a uterus yet (and hasn't formed a primitive streak; I didn't remember that latter part at all).

Comment: Pre-embryo (Score 3, Informative) 370

by DrYak (#49576105) Attached to: Who Owns Pre-Embryos?

Who Owns Pre-Embryos?

From a scientist: What the fuck is a pre-embryo.

Wikipedia is your friend.

Basically:
- A bunch of cells, still disorganised (apparently you wait for the primitive streak before calling it a proper "embryo"; I didn't remember that from my lectures).
- They float around; they haven't implanted into a uterus yet. (That part I vaguely remember from my medical studies.)

(Well, of course they were fertilized *in vitro*. It would be hard to find a uterus to implant into at the bottom of a test tube.)

Comment: Complex issue (Score 1) 370

by DrYak (#49576023) Attached to: Who Owns Pre-Embryos?

He is the biological father, he is half responsible for these children if they are born.

The problem is that there are NO children yet. Only cells with 2 half nuclei inside (= pre-embryos)

The problems are very real, but they concern children who do not exist yet, although, due to biology and the existence of those cells, they could very well come into existence.

He cannot force her to give them up any more than he could force her to abort the fetus

The notion is different.
- The "abortion" case is about the mother: it's her body, she decides what happens to it, and nobody can force her to undergo a procedure that might have consequences for her health or her ability to procreate further.
- Here the cells are in a test tube somewhere in a fridge. The mother's physical body isn't in any way concerned by discarding the cells; only the mother's role as a potential parent (if the children come into existence) is.

Comment: VFX was the pong (Score 1) 125

I had a full vr helmet in the late 90s to play doom, decent, and so on. I can't remember the name of the helmet but it came with a mouse that looked like a hockey puck.

I would venture that this was the VFX family of helmets (VFX-1, VFX-3D).
It was one of the first 3D helmets, with extensive support in games.
The resolution was shitty (~260 vertical columns per eye); in fact so shitty that the manufacturer gave separate counts for the R, G and B pixels (to call it "790" horizontal resolution!).
The field of view was also awfully small (think of looking through a small window in front of you, as if at a laptop screen, instead of today's Oculus Rift "surrounded by the picture everywhere").
The image was blurry (LCD; all this was happening long before the advent of OLED and other fast-refresh displays).

But still, even if it was in its infancy, it was one of the first big things to arrive on the mainstream market.
I never had one myself; luckily the local computer shop had one, and I hacked around with it a bit.

A bit later, the "I-glasses" family of devices started to get popular. Much lighter, slightly higher resolution, and they used a mirror system that made it possible to overlay the picture on your actual sight ("augmented" reality).

Personally, much later, I managed to land an eMagin 3D Visor (I was working in medical research by then and had more money than when I was a kid).
A slightly better viewing angle (45°, one of the best pre-Oculus) and an OLED display (so no blur, high resolution, etc.).
(Though support for non-Nvidia hardware required ordering new firmware on a swappable ROM chip.)

Nowadays the Oculus Rift and the like have advanced a lot:
- they replaced complex optics and a simple display with simple optics and shaders that compensate for the distortion.
- an actual full field of view: you don't look through a small window, you have a picture completely surrounding you.
- high resolution (thanks to the whole "Apple Retina" and "cram a Full HD 1080p panel into a smartwatch" craze, we have small high-resolution displays).
- really fast / low-latency tracking (thanks to cheap high-speed cameras, which supplement the electronic accelerometers/gyroscopes of old).
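The shader-compensation point boils down to a radial pre-warp of each rendered frame. A minimal sketch; the k1/k2 coefficients here are made up for illustration, since real headsets calibrate them per lens:

```python
# Radially pre-distort a point so the lens's own distortion cancels it out.
# Coordinates are normalized and centered on the lens axis.

def predistort(x, y, k1=0.22, k2=0.24):
    """Scale a point outward by 1 + k1*r^2 + k2*r^4 (illustrative coefficients)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

print(predistort(0.0, 0.0))  # the image center is unchanged: (0.0, 0.0)
print(predistort(0.5, 0.5))  # points near the edge get pushed outward
```

Rendering every frame through a warp like this (plus chromatic correction, omitted here) is roughly what lets these headsets ship with one cheap lens per eye instead of complex optics.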

We've reached the point where the technical shortcomings are more or less solved.

Thus we aren't so much in the Pong era as in the late 8-bit / early 16-bit console era:
the technical problems are being solved, the hardware is becoming available and affordable, and now we need to learn to harness the medium and develop nice stuff.
Artists need to learn what can be done with this.
We are at the dawn of tons of new things coming out for VR 3D.

It's good that indie devs are currently thriving.

Comment: The lawsuit is in Europe - law is different (Score 1) 279

by DrYak (#49527869) Attached to: German Court Rules Adblock Plus Is Legal

While you own the physical media, you don't own the data on the media. You only have a license to use that data and part of the license is not skipping ads, etc.

In the US, maybe... except that TFA's lawsuit happened in Germany, in the EU. European countries have copyright laws that side a little more with end users than the USA's do.

Among other things, several countries have a local DMCA-equivalent law explicitly granting exceptions for fair use, and explicitly considering it "fair use" to b0rk the encryption for "technical reasons", such as needing to play your own media because you bought it and the manufacturer doesn't support your OS (e.g.: Switzerland, although it's not in the *EU*, just geographically in Europe).
DeCSS is considered lawful here: you bought the DVD, so use whatever you need to exercise your fair-use rights.
There's no concept of "you're actually just renting the data and thus must follow the license in order to be able to consume it".

It'd be akin to requiring a login to use a free website, but the agreement for the login to say that you accept the ads in order to use the website.

Again, in most European countries, EULAs aren't considered binding. You can't sell your soul just because there was a sentence hidden somewhere in a big pile of legalese.
The only things which *are* legally binding are the general provisions covered by the law itself (warranties, etc.).

So a website owner CANNOT sue you because you violated the license you were supposed to accept and used Adblock anyway.
On the other hand, nothing forbids the owner from kicking you out and banning your account either.

Comment: Actually, not. (Score 3, Insightful) 88

by DrYak (#49517793) Attached to: AMD Publishes New 'AMDGPU' Linux Graphics Driver

Actually, the footprint of the binary part is dwindling.

Before:
- either Catalyst, which was completely closed source down to the kernel module;
- or the open-source driver, which was an entirely different stack; even the kernel module was different.

Now:
- the open-source stack is still here, the same as before;
- Catalyst is just the OpenGL library, which sits atop the same open-source stack.

So no, actually I'm rejoicing. (That might also be because I don't style my facial hair as a "neckbeard".)

Comment: Simplifying drivers (Score 4, Informative) 88

by DrYak (#49517781) Attached to: AMD Publishes New 'AMDGPU' Linux Graphics Driver

(do I need now binary blobs for AMD graphics or not?)

The whole point of AMDGPU is to simplify the situation.
Now the only difference between the Catalyst and Radeon drivers is the 3D acceleration: either run the proprietary binary OpenGL, or run Mesa's Gallium3D.
All the rest of the stack downward from that point is open source: same kernel module, same libraries, etc.

Switching between the proprietary and open-source drivers becomes just a matter of choosing which OpenGL implementation to run.

I decided (I don't need gaming performance) that Intel with its integrated graphics seems the best bet at the moment.

If you don't need performance, Radeon works pretty well too.
Radeon has an open-source driver. It works best for slightly older cards; the latest-generation cards usually lag a bit (the driver is released after a delay, and performance isn't as good as the binary's), though AMD is working to reduce that delay.

Like Intel's, the open-source driver is supported by AMD (they have open-source developers on their payroll for that), although compared to Intel, AMD's open-source team is a bit understaffed.
AMD's official policy is also to support only the latest few card generations in their proprietary driver. For older cards, the open-source drivers *are* the official drivers.
(Usually, by the time support is dropped from Catalyst, the open-source driver has caught up enough in performance to be a really good alternative.)

The direction in which AMD is moving with AMDGPU reinforces this approach even further:
- the stack is completely open source at the bottom;
- for older cards, stick with Gallium3D/Mesa;
- for newer cards, you can swap the top OpenGL part for Catalyst and keep the rest of the stack the same;
- for cards in between, it's up to you to choose between open source and maximum performance.

Overall, the general tendency at AMD is toward more open source:
- the stack has moved toward more open-source components, even if you choose Catalyst;
- behind the scenes, AMD is making efforts to make future cards more open-source friendly and to release the necessary code and documentation faster.

AMD: you can stuff your "high performance proprietary driver" up any cavity of your choosing. I'll buy things from you again when you have a clear pro-free software strategy again -- if you're around by then at all.

I don't know what you don't find clear in their strategy.

They've always officially supported open source: they release documentation and code, and have a few developers on their payroll for it.
Open source has always been the official solution for older cards.
Catalyst has always been the solution for the latest cards which don't have open-source drivers yet, or if you want to max out performance or the latest OpenGL 4.x.

And if anything, they're moving further toward open source: merging the two stacks to rely more on common open-source base components (to avoid duplicating development effort),
and finding ways to bring up open source faster on newer generations.

For me that's good enough; that's why I usually go with Radeon when I have the choice (desktop PCs that I build myself), and I'm happy with the results.
