Submission: Microsoft, Chip Makers Working on Hardware DRM for Windows 10 PCs

writertype writes: Last month, Microsoft began talking about PlayReady 3.0, which adds hardware DRM to secure 4K movies. Intel, AMD, Nvidia, and Qualcomm are all building it in, according to Microsoft. Years back, a number of people got upset when Hollywood talked about locking down "our content". So how important is hardware DRM in this day and age?

Comment: Re:Good (Score 1) 297

A better approach is to make the copyright holder a legal steward of the work until it enters the public domain. That is, they have a legal duty to maintain it in the best possible form and make sure it gets handed off to interested parties when it enters the public domain. Failure to do so is a breach of the contract, resulting in handing over all profits from the work during copyright to the public (that is, a massive fine).

If the cost of maintaining the work exceeds the value, they may choose to terminate the copyright early, but must give sufficient public notice.

Comment: Re:Competition (Score 1) 76

by Trailer Trash (#49546461) Attached to: Amazon's Profits Are Floating On a Cloud (Computing)

Given that Microsoft seems to be investing heavily in Azure, I'd wonder exactly how they plan to beat AWS. AWS added a new machine-learning service a month ago; Azure doesn't have that. Either way, however, is a win. If Microsoft is making some fatal mistake with its new business model, then maybe it goes bankrupt and helps the industry by going open source before death. If Azure holds its position or gains usage with its SaaS model, then there'll probably be some interesting competition among it, AWS, and Google, all with large user bases. Either way, there's competition, which will (almost) always push prices down and capabilities up.

The scary thing about Microsoft is that they have tens of billions of dollars in the bank. They will likely never go bankrupt, but I'm not sure they'll ever make money in computers again if the Windows/Office gravy train comes to a halt.

Comment: Re:It is a cycle. (Score 1) 76

by Trailer Trash (#49546449) Attached to: Amazon's Profits Are Floating On a Cloud (Computing)

Back when an IBM executive predicted "the world will probably need six computers," the main computing model was a mainframe at a distant location, time-shared via (overpriced) telephone lines and VT-100 terminals. Eventually workstations appeared and the move was to get off the mainframe and do local computing. Then along came Sun, "The network is *the* computer," and diskless workstations that would boot into an X11 display terminal off a distant server. Well, PCs came along and desktops became powerful enough to run even fluid-mechanics simulations. Then came high-performance computing, and now the cloud.

A bigger machine in a faraway place has always had the cost advantages of economies of scale. Every time there's a jump in connection speeds and bandwidth, some customers find it cheaper to "outsource" computing to a remote machine. But eventually the advantages of local storage and local computation add up. So let us see how long this iteration lasts.

The difference is that we still have really strong clients now and use the back end mainly for storage and some computation. It's not very comparable.

The other difference is that the technologies in use today make the "cloud" pretty much infinitely expandable, unlike a mainframe. Amazon has petabytes of storage and adds more continuously.

Comment: Dual Homing Failover and IPv6 address aggregation (Score 1) 382

by billstewart (#49543191) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

Yeah, that turned out to be one of the big problems with IPv6 address aggregation: it sounds great in the ivory tower but doesn't meet the needs of real customers, which is too bad, because every company that wants its own AS and routable address block is demanding a resource from every backbone router in the world.

IPv6's solution to the problem was to allow interfaces to have multiple IPv6 addresses, so you'd advertise address 2001:AAAA:xyzw:: on Carrier A and 2001:BBBB:abcd:: on Carrier B, both of which can reach your premises routers and firewalls; if a backhoe or router failure takes out your access to Carrier A, people can still reach your Carrier B address. Except, well, your DNS server needs to update pretty much instantly, and browsers often cache DNS results for a day or more, so half your users won't be able to reach your website; and address aggregation means you didn't get your own BGP AS to announce route changes with. But hey, your outgoing traffic will still be fine.
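To make the "multiple addresses per interface" numbering concrete, here's a small sketch of what a dual-homed site's addressing plan looks like. The carrier /32s and the 16-bit site id are my illustrative stand-ins for the 2001:AAAA / 2001:BBBB and xyzw values above, not real allocations:

```python
import ipaddress

def site_prefixes(site_id, carrier_blocks=("2001:aaaa::/32", "2001:bbbb::/32")):
    """Return one /48 per upstream carrier for a dual-homed site.

    Each carrier delegates a /48 out of its own /32 to the same site, so
    every interface at the site ends up with one address per carrier.
    """
    prefixes = []
    for block in carrier_blocks:
        net = ipaddress.IPv6Network(block)
        # Place the 16-bit site id in the third group (bits 95..80),
        # carving a /48 out of the carrier's /32.
        base = int(net.network_address) | (site_id << (128 - 48))
        prefixes.append(ipaddress.IPv6Network((base, 48)))
    return prefixes
```

Calling `site_prefixes(0x1234)` yields `2001:aaaa:1234::/48` and `2001:bbbb:1234::/48`: same site, two carrier-aggregatable prefixes, which is exactly why DNS (rather than BGP) ends up carrying the failover burden.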

My back-of-a-napkin solution to this a few years ago was that there's an obvious business model for a few ISPs to conspire to jointly provide dual-homing. For instance, if you've got up to 256 carriers, 00 through FF, each pair aa and bb can use BGP to advertise a block 2222:aabb::/32 to the world, with customers numbered 2222:aabb:xyzw::/48. The global BGP tables then get 32K routes for the pairs of ISPs, and each pair of ISPs shares another up-to-64K routes with each other, using either iBGP or other local routing protocols to handle their customers' actual dual homing. (Obviously you can vary the number of ISPs, the size of the dual-homed blocks, and the amount of prefix reserved for this application, since 2222: may be too long, etc.)
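The 32K/64K figures check out with a couple of lines of arithmetic. This is just a sketch of the napkin math; the function name and defaults are mine, mirroring the 256-carrier, /32-per-pair, /48-per-customer layout:

```python
from math import comb

def pairing_table_sizes(n_carriers=256, pair_prefix=32, cust_prefix=48):
    """Route-table sizes for the hypothetical ISP-pairing scheme.

    Each unordered pair of carriers advertises one shared prefix (a /32
    here) into the global table; within a pair, that prefix holds
    2**(cust_prefix - pair_prefix) customer /48s, which the two carriers
    exchange privately via iBGP or a local routing protocol.
    """
    global_routes = comb(n_carriers, 2)            # one route per carrier pair
    per_pair_customers = 2 ** (cust_prefix - pair_prefix)
    return global_routes, per_pair_customers
```

With the defaults this gives 32,640 global routes (the "32K") and 65,536 customer routes per pair (the "up-to-64K"), i.e. the global table grows with the square of the number of participating carriers, not with the number of dual-homed customers.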

Comment: IPv6: Longer addresses + magic vaporware (Score 1) 382

by billstewart (#49543107) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

IPv6 was originally supposed to solve a whole lot of problems. Not only did it have longer addresses (which ISPs need to avoid deploying customers behind NAT, and in general to avoid running out of address space and crashing into the "Here Be Dragons" sign at the edge), but it was also supposed to fix route aggregation, security, multihoming, automatic addressing, etc.

A lot of that turned out to be wishful thinking. The hard part about IPSEC tunnels is the key exchange and authentication, not building the tunnels. Route aggregation didn't really work out because enterprises weren't willing to use carrier addresses instead of their own, and small carriers also wanted their own addresses instead of sharing their upstream's address space. And some of it wasn't wishful thinking so much as solving problems that IPv4 found other solutions for, like DHCP for automatic addressing.

And while NAT is a hopeless botch, it does provide a simple-minded stateful firewall as default behaviour, whereas IPv6 users need explicit firewalling to get the same security with real addresses (which they needed to do anyway, but especially if you're using tunnels, you have to be sure to put it in all the right places).
