We'll have to see if sudo is installed.
Sure, but part of that is recognizing the extent of the real-world exposure. Imagining it to be limited to a small portion of farmland is not realistic.
It's not just in agriculture though. It's hard to find decorative plants that haven't been treated with it.
I believe you must mean the Democans and Republicrats. I see no evidence that either has any intention of butting out of people's lives.
Keep 'em coming... whatever they are, my friend.
We'll not see their like again...
My kids even know Rick-Roll and Lime Cat. Hell, I saw Lime Cat in the 90's.
I don't think you appreciate the magnitudes involved. Picture the biggest forest fire you've ever heard of. Here's a dime store squirt gun and a canteen. Go put it out. Good luck, we're all pulling for you.
Now realize that it's nowhere near that easy.
A better approach is to make the copyright holder a legal steward of the work until it enters the public domain. That is, they have a legal duty to maintain it in the best possible form and make sure it gets handed off to interested parties when it enters the public domain. Failure to do so is a breach of the contract, resulting in handing all profits from the work during copyright to the public (that is, a massive fine).
If the cost of maintaining the work exceeds the value, they may choose to terminate the copyright early, but must give sufficient public notice.
Given that Microsoft seems to be investing heavily in Azure, I'd wonder exactly how they plan to beat AWS. AWS added a new machine learning algorithm a month ago; Azure doesn't have that. Either way, however, it's a win. If Microsoft is making some fatal mistake with its new business model, then maybe it'll go bankrupt and help the industry by going open source before death. If Azure holds its position or gains usage with its SaaS model, then there'll probably be some interesting competition between the two of them and Google, all with large user bases. Either way, there's competition, which will (almost) forever drive prices down and capabilities up.
The scary thing about Microsoft is that they have tens of billions of dollars in the bank. They will likely never go bankrupt, but I'm not sure they'll ever make money in computing again if the Windows/Office gravy train ever comes to a halt.
Back when an IBM executive predicted "the world will probably need six computers", the main computing model was a mainframe at a distant location, time-shared over (overpriced) telephone lines and VT-100 terminals. Eventually workstations appeared and the move was to get off the mainframe and do local computing. Then Sun came along with "The network is *the* computer" and diskless workstations that would boot into an X11 display terminal off a distant server. Well, PCs came along and the desktop became powerful enough to run even fluid mechanics simulations. Then came high-performance computing, and now the cloud.
A bigger machine in a faraway place has always had the cost advantages of economies of scale. Every time there is a jump in connection speeds and bandwidth, some customers find it cheaper to "outsource" computing to a remote machine. But eventually the advantages of local storage and local computation add up. So let us see how long this iteration lasts.
The difference is that we still have really strong clients now and use the back end mainly for storage and some computation. It's not very comparable.
The other difference is that the technologies in use today make the "cloud" pretty much infinitely expandable, unlike a mainframe. Amazon has petabytes of storage and adds more continuously.
Yeah, that turned out to be one of the big problems with IPv6 address aggregation - it sounds great in the ivory tower but doesn't meet the needs of real customers. That's too bad, because every company that wants its own AS and routable address block is demanding a resource from every backbone router in the world.
IPv6's solution to the problem was to allow interfaces to have multiple IPv6 addresses, so you'd advertise address 2001:AAAA:xyzw:: on Carrier A and 2001:BBBB:abcd:: on Carrier B, both of which can reach your premises routers and firewalls, and if a backhoe or router failure takes out your access to Carrier A, people can still reach your Carrier B address. Except, well, your DNS server needs to update pretty much instantly, and browsers often cache DNS results for a day or more, so half your users won't be able to reach your website; and address aggregation means that you didn't get your own BGP AS to announce route changes with. But hey, your outgoing traffic will still be fine.
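The multi-address scheme above can be sketched with Python's stdlib `ipaddress` module: the same subnet and interface identifier combined with a /48 prefix from each carrier. The two prefixes and the host suffix below are made-up illustration values (like the ones in the comment), not real allocations.

```python
# Sketch: one multihomed host, one address per carrier prefix.
# Prefixes 2001:aaaa:1234::/48 and 2001:bbbb:abcd::/48 are hypothetical.
from ipaddress import IPv6Address, IPv6Network

def provider_address(prefix: IPv6Network, host_suffix: int) -> IPv6Address:
    """Combine a carrier's /48 prefix with the site's local 80 bits."""
    assert prefix.prefixlen == 48
    return IPv6Address(int(prefix.network_address) | host_suffix)

# 80-bit suffix: subnet ID 0001, interface ID ::1 - identical on both carriers.
suffix = (1 << 64) | 1
carrier_a = IPv6Network("2001:aaaa:1234::/48")
carrier_b = IPv6Network("2001:bbbb:abcd::/48")

addr_a = provider_address(carrier_a, suffix)
addr_b = provider_address(carrier_b, suffix)
print(addr_a)  # 2001:aaaa:1234:1::1
print(addr_b)  # 2001:bbbb:abcd:1::1
```

If Carrier A's link dies, only the `addr_a` route disappears; the host is still reachable at `addr_b` - which is exactly where the DNS-caching problem described above bites.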
My back-of-a-napkin solution to this a few years ago was that there's an obvious business model for a few ISPs to conspire to jointly provide dual-homing. For instance, if you've got up to 256 carriers, 00 through FF, each pair aa and bb can use BGP to advertise a block 2222:aabb::/32 to the world, with customers numbered 2222:aabb:xyzw::/48, so the global BGP tables get 32K routes for the pairs of ISPs, and each pair of ISPs shares another up-to-64K routes with each other, using either iBGP or other local routing protocols to handle their customers' actual dual homing. (Obviously you can vary the number of ISPs, the size of the dual-homed blocks, and the amount of prefix devoted to this application.)
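The napkin arithmetic can be sketched in Python, again with the stdlib `ipaddress` module. The 2222::/16 base and the carrier/customer numbers are the comment's hypothetical values, not real allocations.

```python
# Sketch of the pair-block numbering: carriers aa and bb (each 0x00-0xFF)
# jointly announce a /32, and each customer gets a /48 inside it.
from ipaddress import IPv6Network

def pair_block(carrier_a: int, carrier_b: int) -> IPv6Network:
    """The /32 that carriers a and b jointly announce via BGP."""
    assert 0 <= carrier_a <= 0xFF and 0 <= carrier_b <= 0xFF
    return IPv6Network(f"2222:{carrier_a:02x}{carrier_b:02x}::/32")

def customer_prefix(carrier_a: int, carrier_b: int, customer: int) -> IPv6Network:
    """A customer's /48 inside the pair's /32 (customer is the 16-bit 'xyzw')."""
    assert 0 <= customer <= 0xFFFF
    return IPv6Network(f"2222:{carrier_a:02x}{carrier_b:02x}:{customer:04x}::/48")

blk = pair_block(0x0A, 0x2B)
cust = customer_prefix(0x0A, 0x2B, 0x1234)
print(blk)   # 2222:a2b::/32
print(cust)  # 2222:a2b:1234::/48

# 256 carriers give 256*255/2 = 32,640 unordered pairs - roughly the
# "32K routes" in the global table; each /32 holds 2^16 customer /48s.
```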
IPv6 was originally supposed to solve a whole lot of problems. Not only did it have longer addresses (which ISPs need to avoid deploying customers behind NAT, and in general to avoid running out of address space and crashing into the "Here Be Dragons" sign at the edge), but it was also supposed to fix route aggregation, security, multihoming, automatic addressing, etc.
A lot of that turned out to be wishful thinking: the hard part about IPSEC tunnels is the key exchange and authentication, not building the tunnels, and route aggregation didn't really work out because enterprises weren't willing to use carrier addresses instead of their own, and small carriers also wanted their own addresses instead of sharing their upstream's address space. Where it wasn't wishful thinking, it was addressing problems that IPv4 found other solutions for, like DHCP for automatic addressing.
And while NAT is a hopeless botch, it does provide a simple-minded stateful firewall as default behaviour, while IPv6 users need explicit firewalling to get the same security with real addresses (which they needed to do anyway, but especially if you're using tunnels, you have to be sure to put it in all the right places).
Would you prefer it just run full speed until it burns out?