Considering that this is Slashdot, the truly missing option is "I'm the ISP."
HTTPS is already designed with that kind of decoupling in mind. But it wouldn't make sense to offer the end user encryption without identity verification, because that would make the encryption useless, so any protocol that does encryption has to do both.
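To illustrate why encryption without identity verification is useless, here is a toy sketch (in Python, with deliberately tiny, insecure parameters - not real crypto): with an anonymous key exchange such as unauthenticated Diffie-Hellman, a man-in-the-middle simply runs two separate exchanges, one with each side, and neither party can tell.

```python
# Toy illustration of a MITM against anonymous (unauthenticated)
# Diffie-Hellman. Parameters are far too small for real use.
import random

P = 0xFFFFFFFB  # a small prime modulus (2**32 - 5); toy-sized on purpose
G = 5           # toy generator

def dh_keypair():
    priv = random.randrange(2, P - 1)
    return priv, pow(G, priv, P)

# Alice and Bob each generate a key pair...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ...but Mallory intercepts both public keys and substitutes her own.
m_priv, m_pub = dh_keypair()

# Alice "agrees" on a key with Mallory, thinking it's Bob; same for Bob.
key_alice = pow(m_pub, a_priv, P)      # what Alice computes
key_bob = pow(m_pub, b_priv, P)        # what Bob computes
key_mallory_a = pow(a_pub, m_priv, P)  # Mallory's copy of Alice's key
key_mallory_b = pow(b_pub, m_priv, P)  # Mallory's copy of Bob's key

# Mallory can now decrypt and re-encrypt traffic in both directions.
assert key_alice == key_mallory_a
assert key_bob == key_mallory_b
```

Without some way to verify *whose* public key you received, the encryption protects you from nobody except passive eavesdroppers.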
I know that. That's basic AAA.
Also note that for an effective MITM attack you would need a new certificate for which you hold the private key. There are a number of things that will make this increasingly difficult in the future, like certificate pinning, increased willingness of browsers and OS vendors to blacklist CAs, and increased monitoring for rogue certificates, which makes it easier to find rogue CAs.
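The pinning idea can be sketched in a few lines of Python. The key bytes below are placeholders (a real pin, per the HPKP scheme, is a SHA-256 hash over the DER-encoded SubjectPublicKeyInfo), but the logic is the actual point: the client compares a stored hash of the expected public key, so even a "valid" certificate from a compromised CA is rejected if it carries a different key.

```python
# Minimal sketch of HPKP-style public key pinning.
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """pin-sha256 value: base64(SHA-256(SPKI DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# The key the site operator actually deployed (placeholder bytes here).
real_key = b"-----genuine server public key-----"
PINNED = spki_pin(real_key)

def check_pin(presented_key: bytes) -> bool:
    # A MITM certificate fails this check even if a rogue CA signed it,
    # because the attacker doesn't hold the pinned key's private half.
    return spki_pin(presented_key) == PINNED

assert check_pin(real_key)
assert not check_pin(b"-----rogue CA-issued key-----")
```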
I think you fail to realize the scale of the opposition the browsers face.
It's not some script kiddies who are the threat here.
That's countries covering close to half the planet's population. They might as well simply outlaw the browsers. In fact, they already outlaw some encryption software.
I personally would still argue that the CA system is the Achilles' heel of HTTPS, but the situation is getting better and it's only a matter of time until we get a more distributed and robust way of verifying certificates.
But that's another problem: you can't make CAs distributed. CAs are "single points of failure" which are allowed to be that, based on the promise that they will work hard not to fail. Making them distributed would basically nullify that promise, leaving the whole CA system just as vulnerable. IOW, nothing changes.
Even with identity verification, encryption is no guarantee against a MITM.
Because the man (the one in the middle) could have hijacked the certificate.
The oft-quoted example here is China injecting JS into unencrypted traffic. They probably do not even need to hack anything to hijack a certificate - they likely already have laws which force the CAs to hand over certificates legally. And once that happens, you are back at the drawing board.
Decoupling at least allows the two technologies (A) to be developed independently and (B) to be replaced more easily.
I'm sure the users will appreciate the extra traffic!
Most serious hosters still charge by traffic. The web-site owners too would appreciate the increased traffic and higher bill.
Just decouple the traffic encryption and the identity verification already.
[...] TLS on TCP is lots slower when there is any packet loss.
And how is an (almost) stateless protocol like QUIC supposed to handle packet loss any better?
The previous write-ups about the Google protocols were all based on the premise that packet loss is a very, very rare occurrence. That's why they use an effectively stateless transport: because they assume that errors are rare. In other words, they too are very bad at handling them.
Coming from the old days of the IPX vs TCP debates, I remember how the IPX proponents would go abruptly silent in the face of a bad network connection: IPX wasn't able to transfer literally anything, while TCP slowly churned away, letting you download the OS update and fix the issue. It would be hilarious (and not unexpected) if (or rather when) Google steps into the same cowpie.
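For context on the quoted claim, the usual argument is TCP's head-of-line blocking: everything multiplexed over one TCP connection shares one ordered byte stream, so a single lost segment stalls *all* streams until retransmission, whereas QUIC recovers loss per stream. A toy model (the stream layout and segment numbering are made up for illustration; real QUIC/TCP behavior is far more involved):

```python
# Toy model of head-of-line blocking. Two logical streams, A and B,
# multiplexed over one connection; segment 1 (stream A) was lost.
arrived = [(0, "A"), (2, "B"), (3, "B")]  # (sequence number, stream)
lost_seq = 1

def deliverable_over_tcp(arrived, lost_seq):
    """One shared sequence space: nothing past the gap can be delivered."""
    return [seg for seg in arrived if seg[0] < lost_seq]

def deliverable_over_quic(arrived, lost_seq, lost_stream="A"):
    """Per-stream recovery: only the stream with the gap is blocked."""
    return [seg for seg in arrived
            if seg[1] != lost_stream or seg[0] < lost_seq]

# TCP: stream B's data sits in the kernel buffer behind stream A's gap.
assert deliverable_over_tcp(arrived, lost_seq) == [(0, "A")]
# QUIC: stream B is unaffected by stream A's loss.
assert deliverable_over_quic(arrived, lost_seq) == [(0, "A"), (2, "B"), (3, "B")]
```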
3D graphics is an outlier in the driver development.
But for a usable desktop, 2D graphics is sufficient. 2D is commonly supported via the GPU's ROM, and as such implementing a 2D driver isn't hard.
Even 3D in itself isn't that hard. The problem is that games require (A) lots of edge cases to be optimized and (B) a huge number of acceleration features to be implemented.
Writing drivers isn't hard.
But for the broader acceptance of an OS, one needs a whole shitload of them.
In the past, a computer with half a dozen devices was "packed". Today? A cheap tiny ARM SoC easily carries 30+ built-in devices.
Not replaced, you dummy.
Elevated to a new level.
Sorry, GP meant: the only fully hardware-accelerated encryption method whose implementation is approved by Intel Inc.
If you insist on your definition of RAD, you'll likely run into limitations (with any RAD system) and be disappointed.
No, I will not be. I used Borland Delphi for 5+ years in the past and am well aware of the limitations which come with the paradigm (rigid system libraries, "there is only one true way to do it", "if there is no button for it, it's impossible", and so on). (And yes, to this day, I deeply hate Borland Delphi.)
I'm interested in RAD for a specific purpose, so to say: to show that GUI development can be as easy as writing 10-20 lines of shell, but with the bonus of having a UI that is a little bit more than a text console. And, well, to introduce some GUI into the Linux part of the product.
I don't really see the point of full RAD to be honest.
I do not look at them as a programming language or programming environment.
I see them as a tool to quickly develop and deploy a simple GUI, when the text console doesn't cut it.
BASIC is every bit as modern as any other language and structurally equivalent to any modern static language. It's more verbose than C and similar languages [...]
Verbosity is the problem.
If I were fine with screens and screens of boilerplate code, I would have simply used Java.
I don't understand why you couldn't get QtCreator working. Qt is easy to install and use on Ubuntu. And the Qt GUI designer is very easy to work with.
As I heard, it was a systemic problem back then: not all package dependencies were declared, meaning that after installation you also had to install a bunch of other stuff to make it work. (Many years ago, the first time I tried QtCreator, it actually refused to run because some linked libraries were missing.)
I'm not sure about now, but back then it wasn't even close to anything RAD. It was only a helper to create the GUI in XML form, and it wasn't even properly integrated with the rest: one had to write some code manually to actually tell Qt which resource corresponds to the window. And add the resource manually to the resource file.
I would say that Python + libglade + glade is also a pretty good combination. It's not quite the RAD experience you seem to want, but it is a fast and powerful way of developing GUI apps, thanks to a nice API and Python.
Yes, it is not RAD. And for that I already use Qt + C++, which provides a very powerful, simple and no-fuss API for building GUIs dynamically (without external UI-building tools like Glade or Qt Designer).
The problem is not me per se - I have no problems with most of this stuff. The problem is the other team members, who are not as well versed as me in scripting languages and building GUIs. On many past projects I have left behind lots of stuff which 95% of coworkers can't support or develop further. And I want to solve the problem by throwing in something that requires as little boilerplate as possible and stays as close to the demos/examples as possible. I'm simply trying to find something to help the people start moving in the right direction.
Xubuntu 14.04 says that the installation would take 195MB of space. A bit heavy. Worst part: it is BASIC.
On the positive side: Lazarus + the FreePascal SDK require almost 1GB of space on my Xubuntu.
My ultimate goal is to be able to put together a quick UI, check in the source, and tell the others who are going to check it out, in a few words, how to compile it and get it running.
All in all, Xojo gets on the short list of things to try.
Only after I pressed the "Post" button did I realize that I probably hadn't given enough context.
"Anything unique to a Linux developer who looks for RAD on Linux"?
Of RAD tools for Linux, I'm aware only of the FreePascal-based Lazarus (a Borland Delphi remake).
Past attempts with KDevelop (and QtCreator) were pretty abysmal: right after "apt-get install" and a few clicks to throw together a UI, they failed to compile even a "Hello World" msgbox application. (And the wild goose chase to install what was missing ran counter to the whole idea of *rapid* application development.)
Does Mono provide something unique to warrant a look at it?