Assuming the need is infinite; once your demands are satisfied, you might turn to flexibility and convenience. Last quarter we here in Norway saw a tiny dip in fixed residential broadband for the first time ever. Whether that's a fluke or not is uncertain, but business lines have been on the decline for some time because small 1-5 man shops use 3G/LTE to check their mail rather than having a dedicated broadband line in the office. It's just an extension of the fact that most "normal" people I run into use wireless now instead of wired networks, because throughput is capped by their Internet speed anyway. And even if you gave them gigabit Internet, they'd probably still feel wireless was fast enough.
This. Actual stamps are mostly a consumer thing. I just checked our commercial postal service, and they recommend a franking machine if you send more than 40 letters a week; you charge it up like a prepaid cell phone. Same thing for packages, except there they normally print labels they slap on the package. And the big companies get bulk pre-printed envelopes with their logo that are collected at the place of business and charged to a corporate account; we have those at work. The potential for abuse is small, since you can't drop them off at a regular mailbox and it'd be obvious whose account you're using to pay for your postage. A lot of consumer-to-business mail is prepaid and rolled into the cost of business too; the few times I use stamps are to other people, and most of that has been replaced by email since you don't need a formal signature on anything. I guess there's the odd package, but if it's too big to fit in a mailbox you're going to the post office anyway.
Nokia has more brand name recognition, so of course we won't use that.
Of the "let's frame it and put it on a wall" variety more than "I want one in my pocket". I'll always have fond memories of the Nokia 3210 and the state of the art in 1999, but that's not selling a new phone, and it's not quite up to collectible/antique standards either. And Elop's little stunt sure didn't help Nokia's has-been reputation. Not to mention that Nokia running Windows Phone might share some of the hardware, but there's very little in common between "old Nokia" and "new Nokia" anyway. I think this was a pretty easy call by Microsoft and would have happened regardless; if they'd ponied up a little more they could have gotten the Nokia name for good, as it matters more to consumers than to the commercial market the remains of Nokia serve.
Because Facebook is really interested in its stock value, not in kicking the DEA in the teeth? They're not going to win any favors from anybody by actively sabotaging a criminal investigation, even an illegally conducted one. They want the public on their side, which is why we're hearing about this in the news; Facebook couldn't win an escalating conflict with proxies and whatnot. If this becomes a big enough PR problem for the police, though, the practice might go away.
3) Contractual obligations/customer relations. In the enterprise world, people build systems they expect to last many, many years, without parts disappearing on a whim. Which is why Intel was still launching Itaniums as late as 2012: whoever they suckered into buying them will get time to bail out. Don't underestimate the value of grudges in the enterprise; any executive who gets burned by IBM ditching a product fast and dirty will be their enemy when the next big consulting/outsourcing contract rolls around.
Except they're not chasing the mainstream, they're chasing the hype wave of Apple/Google/Microsoft, trying to be the "next big thing" instead of what is actually mainstream today with Win7/OS X. Instead of picking a market and staying on target, they still haven't finished the job on the office desktop from 1999, or the laptop from 2004, or the smartphone from 2009, or the tablet from 2014. And at this rate I don't think Ubuntu will stay in one place long enough to be relevant to anyone outside the ~1% of the desktop market Linux owns today.
While there'll always be exceptions, I imagine it's usually a very short list of people who want any one article removed; blowing the whistle and saying "someone is trying to bury this article" should have the intended effect anyway.
Games only started using D3D 10/11 *very* recently -- the back catalog this could enable is huge, and D3D 9 games are still coming out today. I'd say it's very important to support.
Bullshit. Almost all games have had a D3D 9 rendering path because XP has been so massively popular, but a whole lot of games have taken advantage of D3D 10/11 where it's been available. D3D 9 matters a lot for the number of games you can run on Linux, but it does not represent the state of the art. Speaking of which, WINE's support of D3D 9 through an OpenGL backend has been pretty good. Or rather, my impression has been that if they can figure out what DirectX is doing, there's usually a fairly efficient way of doing it in OpenGL. The summary tries to paint OpenGL as a blocker to DirectX support; my impression is quite the opposite. A Gallium3D implementation is closer to the hardware and "more native" than a DirectX-to-OpenGL translation layer, but while it might boost performance a little, it won't fundamentally support anything new.
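The core idea behind such a translation layer can be sketched in a few lines. This is a toy illustration, not WINE's actual code: the constants are the real D3D9/OpenGL values, but the mapping table and function names are an invented, minimal subset just to show the shape of the approach.

```python
# Toy sketch of a D3D9-to-OpenGL render state mapping (not WINE's code).
D3DRS_ZENABLE, D3DRS_ALPHABLENDENABLE = 7, 27   # real d3d9types.h values
GL_DEPTH_TEST, GL_BLEND = 0x0B71, 0x0BE2        # real OpenGL enum values

# Each D3D render state maps to the GL capability it corresponds to.
RENDERSTATE_TO_GL = {
    D3DRS_ZENABLE: GL_DEPTH_TEST,
    D3DRS_ALPHABLENDENABLE: GL_BLEND,
}

gl_calls = []  # stand-in for actually calling into a GL driver

def set_render_state(state, value):
    """What a call like IDirect3DDevice9::SetRenderState might lower to."""
    cap = RENDERSTATE_TO_GL[state]
    gl_calls.append(("glEnable" if value else "glDisable", cap))

set_render_state(D3DRS_ZENABLE, True)            # depth testing on
set_render_state(D3DRS_ALPHABLENDENABLE, False)  # blending off
print(gl_calls)  # → [('glEnable', 2929), ('glDisable', 3042)]
```

The real work in WINE is of course the shader and resource translation, not state flags, but the "look up the GL equivalent and emit it" pattern is the same general idea.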
It all depends on your definition of clickbait.
Excessively vague/misleading headline which you wouldn't have bothered to click if they'd made it informative? It's not bait if it's not a trap of sorts.
Does he mean a transient reaction in the test set-up that produces the byproducts of fusion, but not long enough to generate useful power?
A transient reaction that can't be reliably reproduced despite recreating the same conditions to the best of our ability. That might be because the necessary conditions are so extremely specific that they only got them right once by accident, or because some contamination or malfunction somehow produced the necessary conditions and attempts to recreate them fail. Or the results of the initial experiment were simply wrong, but here they've clearly put their desire to believe it was real over their good judgement.
Except authentication is usually not username+password or a digital signature; it's identification plus an official paper saying you're that person. Everywhere you use your passport, driver's license or any other photo ID you're relying on three things:
1) The difficulty of acquiring the information to be on the card
2) The difficulty of forging the card
3) The difficulty of fooling the issuers into producing a fake card
The last one is often the sneaky one: with enough ID info you might trick an issuer into believing you've lost your ID and getting a new one issued. But there are plenty of direct fakes too; if they have the necessary information, that's half the battle.
Except that's pretty much what all AJAX web apps do: they "export the UI through some generic mechanism" to the browser, so I'd say it's very common. No need for roll-outs and patches; if the server now says there should be a new button, there is a new button for everyone. The issue is that I find most web apps really suck compared to native applications, so locally I usually want a native, non-web application.
What I'm talking about is a native toolkit that'd make the applications you normally use locally network transparent at the application level, not the display server level. Essentially a toolkit where the UI always lives in its own thread, asynchronously to the actual application. Network transparency just means that thread happens to live on a different machine, drawing to a different display. You could tweak an application to handle that better, but you wouldn't have to; it'd sort of run remotely without modification.
For example, I made a basic calculator just as a proof of concept. Connected locally (I still used a TCP connection, just to localhost; better options are available) it looked and acted entirely like a native app you could use every day. It recorded button pushes, sent the push events to the back-end and got updated display text back. I hadn't made it better, but I hadn't made it worse either. The cool thing, though, was that now I could connect to it remotely. The same calculator popped up, my button clicks went over the network, display text came back over the network. It's a working local native app and a working network-transparent remote app. At once. Without any application logic in the client, just drawing tools.
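A minimal sketch of that split, assuming nothing about the original code: all calculator state lives in the back-end, and the "client" only ships button events one way and display text the other. A socketpair stands in here for the real TCP connection, but the protocol is the same either way.

```python
import socket
import threading

class CalculatorBackend:
    """All application state lives here; it knows nothing about drawing."""
    def __init__(self):
        self.display, self.acc, self.op, self.fresh = "0", 0, None, True

    def press(self, key):
        if key.isdigit():
            self.display = key if self.fresh else self.display + key
            self.fresh = False
        elif key in "+-":
            self.acc, self.op, self.fresh = int(self.display), key, True
        elif key == "=":
            cur = int(self.display)
            self.acc = self.acc + cur if self.op == "+" else self.acc - cur
            self.display, self.op, self.fresh = str(self.acc), None, True
        return self.display

def serve(conn):
    """One button event per line in, updated display text out."""
    calc, f = CalculatorBackend(), conn.makefile("rw")
    for line in f:
        f.write(calc.press(line.strip()) + "\n")
        f.flush()

# Loopback demo: the "client" only forwards events and shows text.
server_sock, client_sock = socket.socketpair()
threading.Thread(target=serve, args=(server_sock,), daemon=True).start()
f = client_sock.makefile("rw")
for key in ["1", "2", "+", "3", "0", "="]:
    f.write(key + "\n"); f.flush()
    display = f.readline().strip()   # what the client would draw
print(display)  # → 42
```

Pointing the client socket at a remote host instead of the loopback is the only change needed to run it network transparent, which is exactly the point.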
People would like a magic box that makes them anonymous and secure on the Internet while they log into Facebook, just like they want a magic diet pill while they continue to stuff their faces with sugar and fat. Or, for a more relevant tech example, they'd like a magic oracle to tell them whether a website belongs to who they think it belongs to, which is why we have CAs as the best approximation. It's never going to work that way, but there's a lot of money in selling snake oil...
The layer in the system between the user applications and the hardware interface is where Qt, GTK, the Windows graphics API, and all the other graphics toolkits go. Those toolkits shouldn't care too much about the hardware details, just the published capabilities of the GPUs.
It's kinda hard not to care, because a lot of it depends on where you have the data, where the processing capability is and what the link capacity is. Sending a video stream is heavy. Even sending an event stream like applications do all by themselves is too heavy during, say, a resize or scrolling action. Some time ago I experimented with turning Qt into a remote application toolkit, basically taking all signals and slots and serializing them over SSL. It was actually surprisingly successful: basically it was puppeteering a client to draw the interface and using signals and slots to synchronize information on demand. Only the bits you connected sent events across the link.
There were plenty of little gotchas, though. For the scrolling and resizing I found a way to make a trigger that'd only fire after a custom delay, for example 50ms after you were done resizing. And I needed to add a system to say "when this button is clicked, include the check state of this radio button and the text of that textbox". But the nice thing was that on the client side it acted like a local window: it resized, the menus popped up, the buttons responded (though the actions might take time due to latency), and I could do client-side signal/slot connections like "when the user checks this box, enable these extra fields" without a server round-trip.
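That delayed trigger is essentially a debounce: a burst of resize events collapses into one event fired only after things go quiet. A minimal sketch of the idea with invented names (the original was Qt-based; this just shows the mechanism):

```python
import threading
import time

class Debounce:
    """Coalesce a burst of events into one callback, fired only after
    `delay` seconds of silence (e.g. 50ms after resizing stops)."""
    def __init__(self, delay, callback):
        self.delay, self.callback = delay, callback
        self._timer, self._lock = None, threading.Lock()

    def trigger(self, *args):
        with self._lock:
            if self._timer:
                self._timer.cancel()   # a new event resets the clock
            self._timer = threading.Timer(self.delay, self.callback, args)
            self._timer.start()

calls = []
d = Debounce(0.05, lambda size: calls.append(size))
for size in [(100, 100), (150, 120), (200, 150)]:  # rapid resize events
    d.trigger(size)
time.sleep(0.2)      # wait out the quiet period
print(calls)  # → [(200, 150)] -- only the final size crossed the link
```

Only the last event in the burst survives, so a drag-resize costs one network round-trip instead of hundreds.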
I could do neat things like send a JPEG and have the client draw it, and even if the client moved the window around, covered it with other dialogs, or scrolled it in and out of sight, it was zero overhead. Yeah, I know, kind of like a browser, but not like any RDP/VNC solution. Often I needed the same resources over and over and didn't want to transfer them every time, so I needed a caching system. Kind of like HTML5 persistent storage, I guess, but before that. And I could populate list/tree/grid objects up front or on demand, a bit like DOM manipulation in HTML.
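One way such a resource cache can work is content addressing: the server sends a hash first, and the body crosses the link only when the client doesn't already have it. A rough sketch with invented names, not the original design:

```python
import hashlib

class ResourceCache:
    """Client-side cache keyed by content hash."""
    def __init__(self):
        self.store = {}

    def need(self, digest):
        return digest not in self.store

    def put(self, digest, data):
        self.store[digest] = data

def send_resource(cache, data, wire_log):
    """Simulate one transfer; wire_log records bytes crossing the link."""
    digest = hashlib.sha256(data).hexdigest()
    wire_log.append(len(digest))       # the hash always crosses the link
    if cache.need(digest):
        wire_log.append(len(data))     # the body only on a cache miss
        cache.put(digest, data)
    return cache.store[digest]

wire, cache = [], ResourceCache()
img = b"\xff\xd8" + b"fake jpeg payload" * 100   # stand-in image bytes
send_resource(cache, img, wire)   # first send: hash + full body
send_resource(cache, img, wire)   # repeat: hash only, zero body bytes
print(wire)  # → [64, 1702, 64]
```

The second transfer costs 64 bytes instead of the full payload, which is the whole point of not re-sending resources you've already shipped.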
It wasn't fully transparent, but it was somewhat API transparent: you'd get a "RemotePushButton" instead of a "QPushButton" which acted the same, but instead of actually drawing anything it just sent commands to the client, which drew the real QPushButton. You didn't really see that, though; you just called the functions and connected the RemotePushButton's onClick() signal as if it were a QPushButton. Kind of like HTML+AJAX on steroids, but looking and feeling like a native application. That, I feel, would have been rather next-gen to see finished.
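The shape of such a proxy class can be sketched like this. All names here are invented for illustration (the real Remote* classes were QObjects; this only shows the pattern of a widget that forwards commands instead of drawing):

```python
class RemotePushButton:
    """Proxy standing in for QPushButton: same surface API, but it emits
    drawing commands over a link and dispatches incoming click events
    to locally connected slots."""
    def __init__(self, link, text):
        self.link, self._slots = link, []
        link.send(("create", "QPushButton", id(self), {"text": text}))

    def setText(self, text):        # same call the real widget offers
        self.link.send(("set", id(self), "text", text))

    def onClick(self, slot):        # connect like a Qt signal
        self._slots.append(slot)

    def _event(self, name):         # the client reported a user action
        if name == "clicked":
            for slot in self._slots:
                slot()

class FakeLink:
    """Stand-in for the network connection; records what crosses it."""
    def __init__(self): self.sent = []
    def send(self, cmd): self.sent.append(cmd)

link = FakeLink()
btn = RemotePushButton(link, "OK")
clicks = []
btn.onClick(lambda: clicks.append(1))
btn._event("clicked")   # simulate the remote client's click arriving
btn.setText("Done")
print(len(link.sent), clicks)  # → 2 [1]
```

The application code only ever touches the QPushButton-like surface; the fact that the actual widget lives on another machine is hidden inside the proxy.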
If you're wondering why I didn't, it's mainly because the product I was thinking of using it for kinda died on the drawing board. And because to really become user friendly it'd have to integrate at a much deeper level, so you could use all the Q* classes without rewriting everything; the Remote* classes were a hack (QObjects) working alongside the standard library. You'd also want to put more work into persistence: the idea was that you could yank the plug on one machine, log back in on another and it'd redraw everything, but initially it passed everything through and didn't shadow the client state on the server. It could have, though; the rewrite was just too much.
Send poor people to serve time in some third world hell hole. Send rich people to serve time in some vacation paradise.