Or you know, you could *actually DO* sports instead of standing in front of a light box and shouting at pictures of sportsmen.
Actually, the footprint of the binary blob is dwindling. It used to be:
- either Catalyst, which was completely closed source down to the kernel module,
- or the open-source driver, which was an entirely different stack (even the kernel module was different).
Now:
- the open-source stack is still there, the same as before,
- Catalyst is just the OpenGL library, which sits atop the same open-source stack as the open-source driver.
So no, actually I'm rejoicing. (That might also be because I don't style my facial hair as a "neckbeard".)
(do I now need binary blobs for AMD graphics or not?)
The whole point of AMDGPU is to simplify the situation.
Now the only difference between Catalyst and the radeon driver is 3D acceleration: either run the proprietary binary OpenGL, or run Mesa's Gallium3D.
All the rest of the stack below that point is open source: same kernel module, same libraries, etc.
Switching between the proprietary and open-source driver will just be a matter of choosing which OpenGL implementation to run.
I decided (I don't need gaming performance) that Intel with its integrated graphics seems the best bet at the moment.
If you don't need performance, radeon works pretty well too.
Radeon has an open-source driver. It works best for slightly older cards; the latest-gen cards usually lag a bit (the driver is released after a delay, and performance isn't as good as the binary), though AMD is working to reduce the delay.
Like Intel's, the open-source driver is also supported by AMD (they have open-source developers on the payroll for that), although compared to Intel, AMD's open-source driver team is a bit understaffed.
AMD's official policy is also to support only the latest few card generations in their proprietary drivers. For older cards, the open-source drivers *are* the official drivers.
(Usually, by the time support is dropped from Catalyst, the open-source driver has caught up enough in performance to be a really good alternative.)
The direction in which AMD is moving with AMDGPU reinforces this approach even more:
- the stack is completely opensource at the bottom
- for older cards, stick with Gallium3D/mesa
- for newer cards, you can swap out the top OpenGL part for Catalyst, and keep the rest of the stack the same.
- for cards in between, it's up to you to choose between open source and higher performance.
Looking at the overall picture, the general tendency at AMD is toward more open source:
- the stack has moved toward more open-source components, even if you choose Catalyst.
- behind the scenes, AMD is making efforts to make future cards more open-source friendly and to release the necessary code and documentation faster.
AMD: you can stuff your "high performance proprietary driver" up any cavity of your choosing. I'll buy things from you again when you have a clear pro-free software strategy again -- if you're around by then at all.
I don't know what you don't find clear in their strategy.
They've always officially supported open source: they release documentation and code, and have a few developers on their payroll for it.
Open-source has always been the official solution for older cards.
Catalyst has always been the solution for the latest cards, which don't have open-source drivers yet, or for maxing out performance or the latest OpenGL 4.x.
And if anything, they're moving further toward open source: merging the two stacks to rely more on shared open-source base components, to avoid duplicated development effort,
and finding ways to get open-source support out faster on newer generations.
For me that's good enough; that's why I usually go with Radeon when I have the choice (a desktop PC that I build myself), and I'm happy with the results.
Still, there are open-source implementations of VDPAU using VA-API as a back-end,
and there are VA-API implementations using VDPAU as a back-end (useful also for open-source drivers, which tend to implement VDPAU).
That's why solutions like 6rd exist.
ISPs can keep their current IPv4 gear and just offer an IPv6 tunnel that clients can use over the IPv4 infrastructure.
No need to immediately replace all the components, and meanwhile, IPv6 is already available.
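The reason 6rd is so cheap for ISPs is that the address mapping is just bit concatenation: the customer's IPv4 address is embedded directly after the ISP's 6rd prefix (RFC 5969), so no per-customer state is needed. A minimal sketch in Python, using documentation example addresses rather than any real ISP's values:

```python
import ipaddress

def sixrd_prefix(isp_prefix: str, ipv4: str) -> ipaddress.IPv6Network:
    """Derive a customer's 6rd delegated prefix (RFC 5969 style):
    the 32 bits of the IPv4 address are appended to the ISP's 6rd prefix."""
    net = ipaddress.IPv6Network(isp_prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Position the IPv4 bits immediately after the ISP prefix bits.
    shift = 128 - net.prefixlen - 32
    addr = int(net.network_address) | (v4 << shift)
    return ipaddress.IPv6Network((addr, net.prefixlen + 32))

# A /32 ISP prefix plus 32 IPv4 bits yields a /64 for the customer:
print(sixrd_prefix("2001:db8::/32", "192.0.2.1"))  # 2001:db8:c000:201::/64
```

Each customer gets a stable /64 computed purely from their existing IPv4 address, which is why the ISP's IPv4 gear doesn't need to change.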
You know what else solves the "not enough IP addresses" problem? NAT.
It's a short-term quick hack that might make some problems seem to disappear, but it creates a ton of other problems.
NAT creates layers of indirection and makes machines not directly addressable.
It requires hole punching and the like even for very basic functionality (like VoIP).
The internet was envisioned as a distributed network of equal peers, but NAT contributes to the current asymmetry of a few key content distributors and everybody else being a passive consumer.
And it's a lot less of a change than switching to IPv6.
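The "not directly addressable" problem can be shown with a toy model of a NAT translation table (a hypothetical class, purely for illustration): outbound traffic creates a mapping, but an unsolicited inbound packet matches no entry and is dropped, which is exactly why VoIP and other peer-to-peer uses need hole punching.

```python
class Nat:
    """Toy model of a NAT device's translation table."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}       # (private_ip, private_port) -> public_port
        self.reverse = {}     # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """An outbound packet allocates (or reuses) a public port mapping."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        """An inbound packet is delivered only if a mapping already exists;
        otherwise it is dropped (returns None)."""
        return self.reverse.get(public_port)

nat = Nat("203.0.113.5")
print(nat.inbound(40000))                   # None: peer can't reach us first
print(nat.outbound("192.168.1.2", 5060))    # outbound "punches" the hole
print(nat.inbound(40000))                   # now the reply can come back
```

The inner host simply has no reachable address until it speaks first; hole-punching tricks exist only to force that first outbound packet.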
IPv6 here. No, it's not that complicated, and it can be automated (e.g., you don't even need to set up DHCP: your router just hands out prefixes, and the devices on the network autonomously pick their addresses by appending their MAC address).
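That autoconfiguration step (classic EUI-64 SLAAC per RFC 4291; modern OSes often substitute privacy extensions) is simple enough to sketch in a few lines of Python:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Classic SLAAC: build an EUI-64 interface ID from the MAC address
    and append it to the router-advertised /64 prefix."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip the universal/local bit
    eui64 = bytes(b[0:3]) + b"\xff\xfe" + bytes(b[3:6])  # insert ff:fe in the middle
    net = ipaddress.IPv6Network(prefix)
    return net.network_address + int.from_bytes(eui64, "big")

# Example with the documentation prefix and an arbitrary MAC:
print(slaac_address("2001:db8::/64", "00:25:96:12:34:56"))
# 2001:db8::225:96ff:fe12:3456
```

No server keeps state anywhere: the router only advertises the prefix, and every device can compute its own globally unique address.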
With NAT, you'll end up needing to fiddle with your router and open/redirect ports anyway, just to be sure that everything works as it should.
I couldn't figure out why Google wasn't getting pissy AT ALL over Cyanogen forking and talking smack about them..
Much more basic: ask yourself, *WHAT* is Google's business? What do they earn money from?
They are not earning lots of money by selling copies of Android.
Instead they earn money with their services: they probably take a percentage of app sales on their store, and they earn tons of money through data-mining/advertising.
So yet another fork of Android doesn't mean less revenue for Google. It means yet another portable platform that will eventually log into maps.google.com and ask about pizza, earning them tons of money.
It's the same reason Google can support Firefox development (they pay them a good budget) and at the same time develop their own browser.
That might sound weird, but it makes sense. Google isn't in the business of *selling* browsers. More browsers mean more people online eventually using their services, and thus more indirect profit, no matter exactly where the browser came from, as long as it conforms sufficiently to standards (HTML5, etc.), can use their services, and isn't completely married to a competitor's services.
The only thing regarding Android that would drive them a little mad is if Microsoft decided to fork Android and design a special fork that works exclusively with Microsoft's services (Bing, Office 365, etc.).
Luckily for them, Microsoft instead attempted to make such a Microsoft-exclusive platform using their Windows OS, and we all know what kind of success they had with that.
In practice Bash is part of most Linux installations.
Even in the realm of "GNU/Linux", not everybody uses bash (some use zsh, for example).
And that's only the portion of users running an actual "GNU" userland.
Then you have the embedded world using BusyBox and co. (with uClibc, etc.) for the userland (which has its own simplified shell).
And then you have Android, which runs a completely different userland by Google: Bionic for the C library, a different message-passing bus, and most of the things usually handled by daemons running in userland are instead handled by Java-like code on a Java-like VM.
And the other way around: you have other Unices (OS X, various *BSDs) which obviously do not run the Linux kernel, but do run bash.
OS X, for example, was affected by the bash bug.
Interesting that you mention that - I've never really thought I could see UV, but I have noticed that black lights and UV LEDs have a weird intense brightness that makes me squint even though the visible light isn't that bright, and I can't really perceive a different color.
Such things were also reported by people who got cataract surgery: some types of synthetic replacement lenses were more transparent in the UV, and suddenly people started to see UV. (Some replacements were far too transparent in the UV and could damage the eye by not protecting it enough.)
Germicidal lamps don't cause the same effect for me.
Both are "UV" in the sense that they are beyond the violet band, but they're not the same wavelength.
Blacklight UVA is just beyond the violet band, with wavelengths slightly shorter than 400nm.
Germicidal-lamp UVC is far beyond the violet band, with wavelengths around 280nm (i.e., around the wavelengths most likely to be absorbed by DNA and other critical biological structures, thus damaging the germs' cells).
Cones can detect UVA (it's just usually blocked by the eye's lens).
Cones cannot detect UVC (and would probably just die if exposed to it).
With OE, sniffing data doesn't work.
OE will encrypt the tiny bit that interests you, in the middle of an otherwise plain text connection.
So by simply passively listening to packets, you won't be able to access the juicy parts; they will be the (only) encrypted part.
OE can prevent incidents like the Google car that recorded sensitive information while logging data packets from unsecured Wi-Fi, or attacks like Firesheep passively listening to clear-text cookies in an internet café.
But currently OE apparently lacks any authentication scheme, so it's trivial to mount a MITM attack.
OE isn't competing as an alternative to HTTPS. (HTTPS is the real deal if you want security.)
OE is trying to be a tiny bit better than plain-text HTTP, and currently it is a little better at preventing accidental eavesdropping.
So 720p decoding in CPU is probably achievable, but 1080p or 4K... not so much.
Which CPU are you talking about?
The huge, power-hungry, multi-core x86_64, optionally assisted by massively parallel GPUs (running OpenCL), that sits on your desk?
Well, decoding high-res video is a walk in the park.
The diminutive ARM designed to be as power-efficient as possible that is in your pocket?
Much more problematic: it won't pack enough power for higher resolutions, and in the cases where it *DOES* manage to decode the video in real time, it's going to kill the battery really fast.
The situation with VP9 isn't that different from H.265:
- desktops work well enough even without dedicated acceleration
- smartphones are limited by the current lack of acceleration (well, except the few latest phones that are slowly starting to get H.265 hardware), due to CPU limits and battery life.
Or you simply buy a few hundred and later a few thousand self driving cars.
The problem is that currently you *can't* go to the nearest dealer and buy them.
They don't exist yet. There are just prototypes being developed here and there.
They need to be developed (which requires having a huge database to learn from).
Also, the problem with buying cars from another company (say Google, if their robo-car is the first to be mass-produced) is that Uber would become dependent on Google's whims. If their future business model relies on a service powered by robo-cars, it would be a bit risky to depend entirely on an external company for said cars.
The point for Uber, apparently, is to beat the others in the development of autonomous cars. Not to depend on anyone else; to make their own robo-car business.
And they have a similar mass of useful data from which to build the car's intelligence, graciously provided by the hipsters using the service, who never consented to being part of AI research in the first place.
The kind of data logs that used to enable the controversial god-mode can also be used to build a very precise model of how drivers manage to navigate city centers.
From what I understand of how this is supposed to work, it's the opposite:
I think it's: you type a plain *http* address, and the website behaves like a normal one (so no https address, nothing misleading you into thinking you're on a secure https site).
But when you submit data to it, the browser automatically switches on the fly to an alternate, encrypted route, so the data is sent encrypted to an alternate destination that handles encryption.
It's not a full blown https, but it's better than nothing.
Think of it as "https-lite" for sending data, designed for servers that can't afford a full-blown HTTPS stack.
Except that the thing is so simplified that there aren't enough checks in the protocol. So a third party could use the feature to re-route data to their eavesdropping infrastructure, instead of re-routing it to an encryption endpoint on the original http server.
My "portable online life" used to be an Ericsson T39 (which outlasted its intended lifetime by a decade)
combined with successive models of PDAs from Palm.
Add in a foldable keyboard for the PDA and you get a small laptop replacement.
I only started using a smartphone when I switched to the WebOS-powered smartphones by Palm.
The combo has a few advantages:
- better battery life
(the phone is very efficient, as it doesn't do much beyond being a phone. It's as simple as you can get, and can last a week on a charge.
The PDA isn't constantly online and thus also has low energy requirements; especially the older ones could last a long time between charges)
- a separate PDA used to be more offline-oriented (think Google Maps over 3G/4G vs. a dedicated map application with locally stored maps; very useful when you travel abroad).
- redundancy (typically, one would sync contacts over Bluetooth or IrDA between the two devices. If one dies or gets stolen, the other is still working).
"Confound these ancestors.... They've stolen our best ideas!" - Ben Jonson