And, this being KDE, by the time it's accepted into the mainstream, it's going to be configurable, so you can make it paint the title bar of the active application in neon pink, if you want.
Would you have preferred it to look more like GNOME 3?!
I love everyone. Give me a hug.
The main advantages of free/libre open-source software are:
- The source is available to review and hack on for a WAY larger audience. It's "a few security reviewers cherry-picked by a government" vs. "virtually anybody who has the time and resources to invest in it".
So you have a bigger pool from which to pick somebody who "is going to understand everything at every layer", or at least understand big enough parts of it, at a large enough number of layers, with enough overlap with the other "somebodies".
- The whole ecosystem is open. You can review lots of other stuff (compilers, libraries, etc.). You can use deterministic builds to check whether you really have the code that produced the official binaries (something that Tor, TrueCrypt, Bitcoin, etc. are already doing).
There are lots of things you can do to check every piece of software that you need to trust.
Well of course, that's a lot of work. So in the end, you'll have to trust multiple other people anyway. But at least, with open source, that's a choice, and in any case you can do the checks yourself (or, more realistically, ask someone you actually trust to do it for you, as in the currently ongoing review of TrueCrypt, for example).
Whereas, no matter how motivated you are, with closed-source software you'll always hit a wall. (Well, Microsoft gives you a peek at the Windows code, but not necessarily all the rest needed to check full security.)
By itself, that doesn't create a backdoor, but anything compiled using the tainted binary could potentially have a backdoor secretly added, even though the source code for both that code and the compiler would appear to be perfectly clean.
...And solutions against this do exist:
A. Deterministic building.
All software where security is important (Tor, TrueCrypt, Bitcoin, to mention a few that practise this approach) has clear procedures designed to compile a binary in a perfectly repeatable form. A rogue compiler would be easy to detect, because it wouldn't create the same binary as everybody else's.
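The verification step itself is trivial once the build is repeatable: everyone hashes their independently produced binary and compares digests. A minimal sketch in Python (the helper names are invented for illustration):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def builds_match(paths: list[str]) -> bool:
    """True iff every independently built binary hashes identically.

    With a deterministic build, a single mismatching digest points
    straight at a tainted toolchain (or a tampered download).
    """
    digests = {sha256_of(p) for p in paths}
    return len(digests) == 1
```

In practice projects publish the expected digest alongside the release, so each builder only has to compare their own result against it.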
B. Comparing compilers.
Use a small collection of different compilers (a few versions of GCC, a few of LLVM, etc.) to compile a compiler whose source you trust (say, a security-reviewed and approved GCC 4.9).
From this point on, you can already compare the output of each of these "GCC 4.9-as-compiled-by-X" compilers by compiling some test code and checking that the results match. Look at whether any of the test binaries has a backdoor injected.
- Now you already know which compilers you can trust.
Then use those compilers (I mean the multiple versions produced by the various compilers of the first step) to bootstrap the compiler itself (you end up with several versions of "GCC 4.9 as compiled by GCC 4.9", each with a different starting point).
Normally, all of these last-step compilers should be more or less identical (see "deterministic building" above to reduce the amount of incidental differences). A rogue compiler would notably stand out.
- Now you have a trusted environment, compiled by a trusted compiler.
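The scheme above is essentially David A. Wheeler's "diverse double-compiling". A toy model of the first comparison step in Python; the "compilers" here are invented stand-ins that map source text to bytes, with one of them injecting a payload:

```python
from collections import Counter

def honest_compiler(source: str) -> bytes:
    # A deterministic, faithful "translation" (here: a stable encoding).
    return b"BIN:" + source.encode()

def rogue_compiler(source: str) -> bytes:
    # Faithful too -- except it silently injects a payload when it
    # recognises that it is compiling a compiler (Thompson's trick).
    out = b"BIN:" + source.encode()
    if "compiler" in source:
        out += b"<backdoor>"
    return out

trusted_source = "gcc 4.9 compiler source"  # reviewed and approved

# Step 1: build the trusted source with several independent compilers.
toolchain = (honest_compiler, honest_compiler, rogue_compiler)
first_stage = [c(trusted_source) for c in toolchain]

# Step 2: the outputs should agree; a rogue compiler stands out as
# the one disagreeing with the majority.
majority_binary, _ = Counter(first_stage).most_common(1)[0]
suspects = [i for i, b in enumerate(first_stage) if b != majority_binary]
```

The point of the toy: no single compiler had to be trusted up front; only the *agreement* between independently built binaries is.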
Seems complicated, but as I've said, people in critical niches (Tor, TrueCrypt, Bitcoin) are already doing exactly that.
That raises the bar tremendously for what governments would need in order to backdoor software (virtually every modern compiler would need to be compromised, as well as numerous tools around them. Forget one obscure thing somewhere, and someday a researcher or hobbyist will notice discrepancies).
I think most of us are already familiar with this sort of attack, but it's worth repeating, since it's exactly the sort of thing that Microsoft's "Transparency Centers" don't address, and exactly the sort of thing we'd be expecting a government to be doing.
Yup. The first and most important thing is to define a clear procedure for taking the official source and rebuilding the same binaries that everybody else has.
(i.e.: you should be able to check out the source, hit recompile, and end up with an installation CD that is indistinguishable from the retail one. That way you know you're actually checking the real source, and not some decoy put there for you while a different backdoor-infested version is getting distributed to your government).
And as you say, that's exactly NOT what Microsoft is doing.
Also, having only 2 centers worldwide, where only government-mandated devs are invited, severely limits the code's exposure to researchers.
I'm ready to predict that the only real results will be:
- Big security people who don't happen to be sent by a government won't get a look at the code, and several shortcomings will probably never get seen. The end result won't be as secure as if you let the OpenBSD devs create a LibreDows(*) fork with a "Valhalla Rampage" treatment on it.
- Some black hat will manage to slip through the checks and leak the source. It will get passed around on underground darknets, and the next week you'll see an abominable explosion of 0-day exploits traded on the shadiest parts of the net.
(*): Only works when built on a system with massive security countermeasures in its default C library. Like OpenBSD. Secured wrappers provided for Linux (those blissfully ignorant people). Go fuck yourself if you use some outdated OS like old-school VMS (pre-OpenVMS). Or if you use an outdated compiler like Visua... Oops. Damn!
Or let me put it this way, get on a train in Belgium and go to Israel. Go on, I dare ya. Oh wait, you can't!
Set aside the fact that it is actually possible, though it would be a journey taking several days and quite a few stops to change trains in European capital cities (these are distances where using airplanes starts to actually make sense). (Although I happen to have taken long-distance night trains across Europe. But those are easier: instead of your having to change trains, they switch the trains' cars around, so you stay in the same cabin until you arrive at your destination city.)
Also set aside the fact that we happen to have "geography" between these two points: mountains (the Alps), sea (e.g. the Mediterranean that you mention), etc., whereas your country is mostly flat (that's why tornadoes happen much more easily there, to go back to TFA's point; it also means that, if anything, building a large-scale railroad system would probably be much easier in the US than in the EU).
The main problem is: why should I travel such a long way in the first place?
Answer A: for vacations.
Yup, why not. Go on, go visit Israel for your vacation. I've heard there are nice surfing spots there too.
And as said above, taking an airplane is the most sensible solution. (Though I've also been on such long-distance road trips across Europe by car, in addition to the trains mentioned above.) (And broke students take buses, the cheapest way around.)
The thread was talking about cars, and driverless cars. Given the speed of current cars, such a long-distance trip would take even longer by car than by rail. So the "my country is bigger" argument doesn't actually work in favour of cars against trains, but in favour of planes against both trains and cars.
Answer B: for work
And that's the biggest transportation problem you have in the US: your society is organised in such a crazy way that a large part of the population has to commute bat-shit-crazy distances on a regular basis. Nobody in their right mind would live in Belgium and travel to Israel for work. Not by car, nor by plane or train. If you get a job in Israel, you move there, so you live near your workplace. And if you miss Belgium, you can always travel back there for vacation (refer to "A" above).
The main problem is not that trains would be impossible (they are possible), nor the huge distance (it's flat; it would actually be easier to build rail there than here).
The main problem is the distribution of the population (spread all over) and their travel needs (bat-shit-crazy distances, each individual travelling in a completely different direction), so it's not easy to group those needs together and have people travel in groups (the basic requirement for any public transportation network).
Putting those responsible behind bars instead of back on the road again with a slap on the wrist should be exercised first
Because exercising punishment is the best approach to bring back the dead?
What about putting in some technology that would have prevented the deaths in the first place... "Oh, noes! Me don't want a NANNY STATE!"
Humans have been driving themselves around for over a century now, and yet we're at our deadliest ever every single year we continue to do so.
Over that century, the number of driving humans and their density on the roads has increased. The more cars in the same place, the higher the chance of two of them colliding.
When exactly did humans become so irresponsible with 2 tons of steel and why?
When there started to be too many of them on the roads.
- People are stupid (even if every single person is average when singled out; pack them together and they start doing stupid things). The more people you put on the road, the higher the chance that some cretin will try something asinine and dangerous.
- Also, by increasing the number of cars, you increase the level of responsibility and concentration needed for the same level of safety. A century ago, if you lifted your eyes from the road for a few seconds, the most likely outcome was that you'd crash into a tree on the side of a nearly empty countryside road. Now, the same behaviour on our modern overcrowded fast highways would result in a massive death toll.
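A back-of-the-envelope way to see why density matters: the number of potential car-to-car conflicts grows with the number of *pairs* of cars sharing the same stretch of road, i.e. roughly quadratically. Purely illustrative arithmetic, not a traffic model:

```python
def conflict_pairs(n_cars: int) -> int:
    """Number of distinct pairs among n cars on the same road:
    n * (n - 1) / 2. Each pair is one potential collision."""
    return n_cars * (n_cars - 1) // 2

# 10x more cars on the same stretch of road means roughly 100x more
# potential conflicts, not merely 10x:
#   conflict_pairs(10)  -> 45
#   conflict_pairs(100) -> 4950
```

That quadratic growth is why the same per-driver attention level yields far worse outcomes on crowded roads than on the near-empty ones of a century ago.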
We put a helmet on to ride a bicycle
And we put seat belts in cars (a requirement nearly everywhere).
And we put airbags in cars (a requirement in lots of places).
And we put collision-avoidance systems in cars (standard with some manufacturers like Volvo, and soon to be a requirement in the EU within the next few years).
All of these are technologies that help diminish the death toll (proven by statistics).
Autonomous cars are just the next evolution of features that can help diminish deaths.
Just an additional tool: when the driver is distracted, at least the AI can take care of the driving.
but won't take a cell phone away from a teenager when they get behind the wheel.
Yup, just tell the kids not to use the phone; you are 100% certain that every single one of them will comply.
People will always be people. Bring enough of them to the same place and they'll invent new ways to behave stupidly.
Hey, why don't we remove seat belts, airbags, etc. and just tell people to be more careful?
Even better idea: remove traffic lights, remove traffic signs, etc. and just tell people to drive sane and not to crash?
There's a point where you can't just trust that absolutely every single individual will behave perfectly.
The more redundant safety you build into the system, the lower the risk that something bad happens when the driver fails.
Yes, the government can assassinate anyone they want by remotely taking over a car. This "feature" has been in place in all vehicles since 2008.
Do not confuse:
- onboard AI that can react to its surroundings
(we're progressively heading this way as more anti-collision features ship on cars)
and doesn't rely at all on any remote access;
- a car that communicates with the network, where the mothership can issue "kill the engine" commands (which, if the car happens to be on a fast highway, also boils down to a "kill the driver" command). There's no need for a camera. There's no need for any AI. A purely classic car can be made to shut down remotely, given the proper hardware.
Driverless cars weigh more, but if you put the car on a rail and let a computer drive it would move 10x faster on 10x less energy and have no accidents. I added the costs that it would take to build a system like that and then realized it would pay for itself in 5 years.
Welcome to Europe. Let me introduce you to this wonderful technology called "TRAINS" that we have here.
We've scaled up your plan a bit (they also transport 100x the number of passengers).
We've also jumped on the "electric vehicle" bandwagon while we're at it (very few are still diesel-powered).
(also there's a human in front who can override the system just in case, though some metropolitan transport have gone 100% driverless).
I think that's what I was saying: a random mixture of disk sizes is not supported by this particular RAID implementation - it will only use the same size across each disk, meaning you are constrained to the size of the smallest disk in the pool.
Okay, I thought you were comparing with other RAID implementations (most fake-RAID cards can't even *grow* the RAID once you've cycled the drives and the "smallest disk in the pool" is now bigger).
Btrfs and ZFS sound like they handle it much better.
Yup, they'll handle whatever you throw at them, as long as they can manage to fit the constraints you've asked for.
RAID implementations don't always support cobbling together a random mixture of disk sizes which change over time.
Linux's software RAID supports this without any problem. Once you've finished a cycle of yearly swaps over the whole pool, you can grow the RAID to the new maximum (= the smallest drive across the pool). The resize is done online and is gracefully restartable (in fact, you can even migrate to bigger RAIDs with more drives gracefully).
(e.g.: After 6 years, once you've upgraded a RAID6 from 6x 1TB to 6x4TB, you can easily grow the system from 4TB to 16TB).
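The capacity arithmetic behind that example is simple enough to sketch (a hypothetical helper, sizes in TB): in a classic RAID6, every member is truncated to the size of the smallest drive, and two drives' worth of space goes to parity.

```python
def raid6_capacity_tb(drive_sizes_tb: list[float]) -> float:
    """Usable capacity of a classic RAID6 array.

    Every member contributes only as much as the smallest drive,
    and two members' worth of space is consumed by parity.
    """
    n = len(drive_sizes_tb)
    if n < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (n - 2) * min(drive_sizes_tb)

# The example above: 6x 1TB gives (6-2)*1 = 4 TB usable;
# after swapping all members for 4 TB drives, (6-2)*4 = 16 TB.
```

It also shows why mixed sizes are wasted here: one leftover 1 TB drive in a pool of 4 TB drives drags every member down to 1 TB.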
In addition to that, modern filesystems like BTRFS and ZFS can handle a completely random mixture of disks. Just specify the level of redundancy ("I want to be able to lose 2 drives and still suffer no data loss"), plug in drives, add them to the pool, and let BTRFS or ZFS handle the actual details.
(e.g., throw in whatever mix you want; the total size will always be the sum of the drives minus what's needed for the level of redundancy you asked for).
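For a mixed-size pool keeping 2 copies of every block (btrfs raid1-style), a rough rule of thumb can be sketched like this; note it is an approximation for illustration, not the filesystem's real allocator, which is more subtle:

```python
def mirrored_pool_capacity(drive_sizes: list[float]) -> float:
    """Approximate usable capacity of a mixed-size pool that keeps
    2 copies of every block (btrfs raid1-style rule of thumb).

    Half the raw space is usable... unless one drive is so big that
    the rest of the pool can't hold the second copies of its data.
    """
    total = sum(drive_sizes)
    return min(total / 2, total - max(drive_sizes))

# Three equal 4 TB drives: min(12/2, 12-4) = 6 TB usable.
# One 10 TB drive plus two 1 TB drives: min(12/2, 12-10) = 2 TB,
# because the big drive has nowhere to mirror most of its blocks.
```

The second example is the catch the prose glosses over: "sum minus redundancy" only holds when no single drive dominates the pool.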
I can't afford a $70 a month unlimited data cell phone bill plan though.
So the solution would be to use Wi-Fi as much as possible.
(Though the problem is that not all Wi-Fi routers are multicast-compatible.)
Around here (a European country) I see the trend going even further:
Things move to the cloud.
"Set-top box" are nothing more than a glorified network stream viewer.
Channels are simply DVB-IPTV over a multicast connection.
(You can actually watch the same channels on your laptop by pointing VLC to the correct rtp:// address)
(and in fact, you can download an iOS / Android App that does exactly that)
"DVR"... are just an extra functionality on the server.
The servers keep a backup of the last 7 days' worth of data streams.
Whenever you want to rewatch a previous show (a type of premium service), pause a currently watched show, etc., there's no actual recording to an HDD (the boxes don't have a disk by default).
The STB simply opens a private unicast stream from the backup.
"Recording show for a longer time than a week" is either a paid option (a premium service where the server can keep a copy of some stream longer than 7 days) or a paid option (as in, you pay for some USB attached storage and get to keep your own copy)
Aside from the storage (which is paid for by the various "playback" premium options), there isn't much overhead (the streams are multicast, so no big stress on the bandwidth; the unicast playbacks are an option and are paid for).
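Under the hood, those rtp:// channel streams are ordinary RTP packets over UDP multicast. As a sketch of what the set-top box (or VLC) does with each incoming packet, here's a decoder for the fixed 12-byte RTP header (RFC 3550); the sample field values in the comments are invented:

```python
import struct


def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550) of a packet,
    as carried by DVB-IPTV multicast streams."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # should be 2 for RTP
        "payload_type": b1 & 0x7F,     # 33 = MPEG-2 transport stream
        "sequence": seq,               # used to detect lost packets
        "timestamp": timestamp,        # media clock for playout timing
        "ssrc": ssrc,                  # identifies the stream source
    }
```

A viewer joins the channel's multicast group, reads UDP datagrams, strips this header, and feeds the remaining MPEG-TS payload to the decoder; the sequence numbers are what let it notice lost packets on a lossy Wi-Fi link.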
Open-source vs. closed-source comparisons have also been done on a regular basis at Phoronix.
To sum things up:
The current Mesa/Gallium3D stack is OpenGL 3.x only; the proprietary drivers are at 4.x (but work is being done, including by paid developers).
Except for the latest generation (where the open-source driver team is still debugging the support; but at least AMD publishes documentation and pays a few open-source developers of its own, so it WILL EVENTUALLY end up supported), the open-source drivers have decent performance, which has progressively come closer to the proprietary drivers'. For slightly older cards you might as well use the open-source drivers (a bit less buggy). For really old cards, even AMD acknowledges it: they dropped support from Catalyst and point to the open-source drivers as the preferred ones.
In short: if it's not the latest generation of hardware, give the open-source drivers a try. Unless you only want to play OpenGL 4.x games on your machine.
Here, take this pair of dice, they are better performance predictors...
More seriously: performance is rather random, mainly because the open-source drivers are entirely developed by reverse-engineering whatever hardware the developers happened to have (if you happen to have a slightly different model, there isn't much they can do). So you get random bugs and problems even in the middle of an otherwise supported range.
For newer cards the situation is even worse performance-wise, because they boot underclocked by default and the driver doesn't know how to ramp the clocks up as demand increases.
At least the open-source drivers follow Linux standards, and some features aren't utterly broken.
So for now, stick to the closed-source drivers (best performance ever), unless you happen to need a feature that works differently under Windows (and thus wasn't ported to Linux). In that case, you might give the open-source drivers a try and see which random result you end up with.
With time, this is bound to change: Nvidia might get interested in helping a bit (they did release a few bits of useful information regarding the Tegra line of embedded GPUs).
It has somewhat lower support than their (proprietary Windows) driver (the open-source Linux one is GL 3.x, Windows is GL 4.x), and their open-source drivers are a bit slower.
there would be an entity with massive computing power available to take over any other crypto currency.
Except that that massive computing power is in the form of ASICs, which are extremely optimized for computing SHA256^2 and nothing else.
So the largest part of the current computing power would be pretty much useless.
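For reference, the function those ASICs are hard-wired for is just SHA-256 applied twice; a sketch in Python:

```python
import hashlib


def sha256d(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice.

    Mining ASICs implement exactly this pipeline in silicon and
    nothing else, which is why their hashpower is useless against
    an altcoin that picked a different function (scrypt, etc.).
    """
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```

A general-purpose CPU can compute any hash, just slowly; the ASIC's million-fold speedup exists only for this one fixed composition.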