
Comment: The lawsuit is in Europe - law is different (Score 1) 278

by DrYak (#49527869) Attached to: German Court Rules Adblock Plus Is Legal

While you own the physical media, you don't own the data on the media. You only have a license to use that data and part of the license is not skipping ads, etc.

In the US maybe... except that TFA's lawsuit happened in Germany, in the EU. European countries have copyright laws that side a little bit more with end users than the USA's do.

Among others, several countries have local DMCA-equivalent laws that explicitly grant exceptions for fair use, and explicitly consider it "fair use" to break the encryption for "technical reasons", such as needing to play media you bought when the manufacturer doesn't support your OS (e.g. Switzerland, although that's not *EU*, just geographically in Europe).
DeCSS is considered lawful here: you bought the DVD, so use whatever you need to exercise your fair-use rights.
There's no concept of "you're actually just renting the data and thus must follow the license in order to be able to consume it".

It'd be akin to requiring a login to use a free website, with the login agreement saying that you accept the ads in order to use the website.

Again, in most European countries, EULAs aren't considered binding. You can't sell your soul just because there was a sentence hidden somewhere in the big pile of legalese.
The only things which *are* legally binding are the general provisions covered in the law itself (warranties, etc.)

But a website owner CANNOT sue you because you violated the license you were supposed to accept and used Adblock anyway.
On the other hand, nothing forbids the owner from kicking you out and banning your account either.

Comment: Actually, not. (Score 3, Insightful) 88

by DrYak (#49517793) Attached to: AMD Publishes New 'AMDGPU' Linux Graphics Driver

Actually, the footprint of the binary blob is dwindling.

Before:
- either Catalyst, which was a completely closed-source stack down to the kernel module,
- or the open-source driver, which was an entirely different stack; even the kernel module was different.

Now:
- the open-source stack is still here, the same as before.
- Catalyst is just the OpenGL library, which sits atop the same open-source stack.

So no, actually I'm rejoicing. (That might also be because I don't style my facial hair as a "neck beard".)

Comment: Simplifying drivers (Score 4, Informative) 88

by DrYak (#49517781) Attached to: AMD Publishes New 'AMDGPU' Linux Graphics Driver

(do I need now binary blobs for AMD graphics or not?)

The whole point of AMDGPU is to simplify the situation.
Now the only difference between the Catalyst and radeon drivers is the 3D acceleration: either run the proprietary binary OpenGL, or run Mesa's Gallium3D.
All the rest of the stack downward from this point is open source: same kernel module, same libraries, etc.

Switching between the proprietary and open-source driver will just be choosing which OpenGL implementation to run.

I decided (I don't need gaming performance) that Intel with its integrated graphics seems the best bet at the moment.

If you don't need performance, radeon works pretty well too.
Radeon has an open-source driver. It works best for slightly older cards; the latest-gen cards usually lag a bit (the driver is released after a delay, and performance isn't as good as the binary driver's), though AMD is working to reduce the delay.

Like Intel's, the open-source driver is supported by AMD (they have open-source developers on their payroll for that), although compared to Intel, AMD's open-source driver team is a bit understaffed.
AMD's official policy is also to only support the latest few card generations in their proprietary drivers. For older cards, the open-source drivers *are* the official drivers.
(Usually by the time support is dropped from Catalyst, the open-source driver has caught up enough in performance to be a really good alternative.)

The direction AMD is moving in with AMDGPU reinforces this approach even more:
- the stack is completely open source at the bottom
- for older cards, stick with Gallium3D/Mesa
- for newer cards, you can swap out the top OpenGL part for Catalyst, and keep the rest of the stack the same.
- for cards in between, it's up to you to choose between open source and high performance.

Overall, the general tendency at AMD is toward more open source:
- the stack has moved toward having more open-source components, even if you choose Catalyst.
- behind the scenes, AMD is making efforts to make future cards more open-source friendly and to release the necessary code and documentation faster.

AMD: you can stuff your "high performance proprietary driver" up any cavity of your choosing. I'll buy things from you again when you have a clear pro-free software strategy again -- if you're around by then at all.

I don't know what you don't find clear in their strategy.

They've always officially supported open source: they release documentation and code, and have a few developers on their payroll.
Open source has always been the official solution for older cards.
Catalyst has always been the solution for the latest cards which don't have open-source drivers yet, or if you want to max out performance or the latest OpenGL 4.x.

And if anything, they're moving further toward open source: merging the two stacks to rely more on open-source base components, to avoid duplicating development effort, and finding ways to get open-source support out faster for newer generations.

For me that's good enough; that's why I usually go with radeon when I have the choice (a desktop PC that I build myself), and I'm happy with the results.

Comment: NAT is just bandaid (Score 1) 380

by DrYak (#49515863) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

You know what else solves the "not enough IP addresses" problem? NAT.

It's a short-term quick hack which might make some problems seem to disappear, but it creates a ton of other problems.
NAT creates layers of indirection and makes machines not directly addressable.
It requires hole punching and the like even for very basic functionality (like VoIP).
The internet was envisioned as a distributed network of equal peers, but NAT is contributing to the current asymmetry of having a few key content distributors while everybody else is a passive consumer.

And it's a lot less of a change than switching to IPv6.

IPv6 here. No, it's not that complicated, and it can be automated (e.g. you don't even need to set up DHCP: your router just hands out prefixes, and the devices on the network autonomously derive their addresses by appending their MAC address).
With NAT, you'll end up needing to fumble with your router and open/redirect ports anyway, just to be sure that everything works as it should.
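That "append the MAC address" step is the classic SLAAC EUI-64 scheme: insert ff:fe in the middle of the 48-bit MAC, flip the universal/local bit, and OR the result into the router's /64 prefix. A minimal sketch (function names are mine, not from any standard library; note that modern devices often use randomized privacy addresses instead):

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Derive the 64-bit EUI-64 interface identifier from a 48-bit MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    iid = 0
    for o in eui:
        iid = (iid << 8) | o
    return iid

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix advertised by the router with the EUI-64 IID."""
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

print(slaac_address("2001:db8::/64", "00:11:22:33:44:55"))
# -> 2001:db8::211:22ff:fe33:4455
```

No DHCP server involved: the router only advertises the prefix, and each device computes the rest on its own.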

Comment: WHAT is Google's business ? (Score 1) 245

by DrYak (#49483881) Attached to: Google Responds To EU Antitrust Claims In Android Blog Post

I couldn't figure out why Google wasn't getting pissy AT ALL over Cyanogen forking and talking smack about them..

Much more basic: ask yourself, *WHAT* is Google's business? What do they earn money from?
They are not earning lots of money by selling copies of Android.

Instead they earn money with their services: they probably earn a percentage of app sales on their store, and they earn tons of money through their data-mining/advertising.

So yet another fork of android doesn't mean less revenue for Google. It means yet another portable platform that will eventually log into maps.google.com, and ask about pizza, and earn them tons of money.

It's the same reason why Google can support Firefox development (they pay them a good budget) and at the same time develop their own browser.
That might sound weird, but it makes sense. Google isn't in the business of *selling* browsers. More browsers mean more people online eventually using their services, and thus more indirect profits, no matter exactly where the browser came from, as long as it conforms sufficiently to standards (HTML5, etc.), can use their services, and isn't completely married to a competitor's services.

The only thing regarding Android that would drive them a little bit mad is if Microsoft decided to fork Android, and design a special fork that works exclusively with Microsoft's services (Bing, Office 365, etc.).
Luckily for them, Microsoft instead attempted to make such a Microsoft-exclusive platform using their Windows OS, and we all know what kind of success they had with that.

Comment: variation in userland and "GNU/Linux" (Score 3, Informative) 172

by DrYak (#49465275) Attached to: Linux 4.0 Kernel Released

In practice Bash is part of most Linux installations.

Even in the realm of "GNU/Linux", not everybody uses bash (some use zsh, for example).

And that's only the portion of users running an actual "GNU" userland.

Then you have the embedded world using Busybox (with uClibc, etc.) for the userland (which has its own simplified shell).
And then you have Android, which runs a completely different userland by Google: Bionic for the C library, a different message-passing bus, and most of the things usually handled by daemons running in userland are instead handled by Java-like code on a Java-like VM.

And the other way around: you have other Unices (OS X, various *BSDs) which obviously do not run the Linux kernel, but do run bash.
OS X, for example, was affected by the Shellshock bash vulnerability.

Comment: UVA, UVB, UVC (Score 5, Interesting) 137

by DrYak (#49461673) Attached to: UW Scientists, Biotech Firm May Have Cure For Colorblindness

Interesting that you mention that - I've never really thought I could see UV, but I have noticed that black lights and UV LEDs have a weird intense brightness that makes me squint even though the visible light isn't that bright, and I can't really perceive a different color.

Such things were also reported by people who got cataract surgery. Some types of synthetic replacement lenses were more transparent in the UV, and suddenly people started to see UV. (Some replacements were far too transparent in the UV and could damage the eye by not protecting it enough.)

Germicidal lamps don't cause the same effect for me.

Both are "UV" in the sense that they are beyond the violet band, but they're not the same wavelength.
Blacklight (UVA) is just beyond the violet band, with wavelengths slightly shorter than 400 nm.
Germicidal lamps (UVC) are way beyond the violet band, with wavelengths around 280 nm (i.e. around the wavelengths most likely to be absorbed by DNA and other critical biological structures, thus damaging the germs' cells).

Cones can detect UVA (it's just usually blocked by the eye's lens).
Cones cannot detect UVC (and would probably just die if exposed to it).
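The gap between those two bands is easy to put numbers on: photon energy is E = hc/λ, so the shorter the wavelength, the more energetic (and more biologically damaging) each photon. A quick sketch, with constants rounded:

```python
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Edge of the visible violet band vs. a germicidal UVC lamp:
for label, wl in [("violet edge (400 nm)", 400), ("UVC germicidal (280 nm)", 280)]:
    print(f"{label}: {photon_energy_ev(wl):.2f} eV per photon")
```

A 280 nm UVC photon carries roughly 4.4 eV versus about 3.1 eV at the violet edge, which is why UVC is energetic enough to break biological molecules while UVA mostly isn't.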

Comment: Sniffing doesn't work with OE (Score 2) 42

With OE, sniffing data doesn't work.
OE will encrypt the tiny bit that interests you, in the middle of an otherwise plain text connection.

So by simply passively listening to packets, you won't be able to access the juicy parts; they will be the (only) encrypted part.

OE can prevent incidents like the Google Street View cars recording sensitive information while logging data packets from unsecured Wifi, or attacks like Firesheep passively listening to clear-text cookies in an internet café.

But currently OE apparently lacks any authentication scheme, so it's trivial to mount a MITM attack.

OE isn't competing as an alternative to HTTPS (HTTPS is the real deal if you want security).
OE is trying to be a tiny bit better than plain-text HTTP, and currently it is a little better at preventing accidental eavesdropping.

Comment: Which CPU are you talking about ? (Score 4, Interesting) 109

by DrYak (#49420759) Attached to: Google Rolls Out VP9 Encoding For YouTube

So 720p decoding in CPU is probably achievable, but 1080p or 4K... not so much.

Which CPU are you talking about?

The huge, power-hungry multi-core x86_64, optionally assisted by massively parallel GPUs (running OpenCL), that sits on your desk?
Well, decoding high-res video is a walk in the park.

The small, diminutive ARM designed to be as power-efficient as possible that is in your pocket?
Much more problematic. It won't pack enough power for higher resolutions, and in the cases where it *does* manage to decode the video in real time, it's going to kill the battery really fast.

The situation for VP9 isn't that different from H.265:
- desktops work well enough even without dedicated acceleration
- smartphones are limited by the current lack of acceleration (well, except the few latest phones which are slowly starting to get H.265 hardware) due to CPU limits and battery life.
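To put rough numbers on why the resolution jump matters: raw pixel throughput is a crude proxy for decode workload (real codec complexity isn't exactly linear in pixels, so treat this as an order-of-magnitude sketch):

```python
def pixel_rate(width: int, height: int, fps: int = 30) -> int:
    """Pixels that must be decoded per second at a given resolution and frame rate."""
    return width * height * fps

resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
base = pixel_rate(*resolutions["720p"])
for name, (w, h) in resolutions.items():
    r = pixel_rate(w, h)
    print(f"{name}: {r / 1e6:.0f} Mpx/s ({r / base:.2f}x the 720p load)")
```

So a chip that barely manages 720p in software is facing more than twice the work at 1080p and roughly nine times the work at 4K, before even counting the extra memory bandwidth.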

Comment: The point is to build them (Score 1) 45

by DrYak (#49420747) Attached to: Uber's Hiring Plans Show Outlines of Self-Driving Car Project

Or you simply buy a few hundred and later a few thousand self driving cars.

The problem is that currently you can't just go to the nearest dealer and buy them.
They don't exist yet; there are just prototypes being developed here and there.
They need to be developed (which requires having a huge database to learn from).

Also, the problem with buying cars from another company (say Google, if their robo car is the first to be mass-produced) is that Uber would become dependent on Google's whims. If their future business model relies on a service powered by robo cars, it would be a bit risky to entirely depend on an external company for said cars.

The point of Uber, apparently, is to beat others in the development of autonomous cars. Not to depend on anyone else. Make their own robo car business.

And they have a similar mass of useful data out of which to build the car's intelligence, graciously provided by the hipsters using the service, who never consented to be part of AI research in the first place.
The kind of data logs that used to enable the controversial god mode can also be used to build a very precise model of "how do drivers manage to navigate inside city centers?"

Comment: Opposite? (Score 4, Informative) 42

From what I understand of how this is supposed to work, it's the opposite:

I think it's: you type a plain *http* address, and the website behaves like a normal one (so no https address, nothing misleading you into thinking you are using a secure https website).
But when you submit data to it, the browser will automatically switch on the fly to an alternate, encrypted route, so the data is sent encrypted to an alternate destination handling encryption.
It's not full-blown https, but it's better than nothing.
Think of it as "https-lite" for sending data, designed for servers which can't afford a full-blown https stack.

Except that the thing is so simplified that there aren't enough checks in the protocol. So a third party could use the feature to re-route data to their eavesdropping infrastructure, instead of re-routing it to an encryption endpoint on the original http server.
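Concretely, Firefox's opportunistic encryption is driven by an Alt-Svc response header sent over plain HTTP: the server advertises an encrypted alternate route, and the browser may switch to it silently. A minimal Python sketch of the server side (the handler class and demo setup are my own illustration; only the Alt-Svc header itself is part of the real mechanism):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OEHandler(BaseHTTPRequestHandler):
    """Plain-HTTP handler that advertises an encrypted alternate route."""

    def do_GET(self):
        body = b"hello over plain http\n"
        self.send_response(200)
        # Advertise that this origin is also reachable over an encrypted
        # HTTP/2 channel on port 443; a supporting browser may switch to
        # it transparently, while the URL in the address bar stays http.
        self.send_header("Alt-Svc", 'h2=":443"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

if __name__ == "__main__":
    srv = HTTPServer(("127.0.0.1", 0), OEHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % srv.server_address[1]
    resp = urllib.request.urlopen(url)
    print("Alt-Svc header seen by the client:", resp.headers["Alt-Svc"])
    srv.shutdown()
```

The MITM weakness described above follows directly: since the plain-HTTP response isn't authenticated, anyone in the path can rewrite that header to point at their own "alternate" endpoint.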
