
Comment Re:Good (Score 1) 302

A better approach is to make the copyright holder a legal steward of the work until it enters the public domain. That is, they have a legal duty to maintain it in the best possible form and to make sure it gets handed off to interested parties when it enters the public domain. Failure to do so is a breach of that contract, resulting in handing over all profits from the work during the copyright term to the public (that is, a massive fine).

If the cost of maintaining the work exceeds the value, they may choose to terminate the copyright early, but must give sufficient public notice.

Submission + - Microsoft, Chip Makers Working on Hardware DRM for Windows 10 PCs (pcworld.com) 1

writertype writes: Last month, Microsoft began talking about PlayReady 3.0, which adds hardware DRM to secure 4K movies. Intel, AMD, Nvidia, and Qualcomm are all building it in, according to Microsoft. Years back, a number of people got upset when Hollywood talked about locking down "our content". So how important is hardware DRM in this day and age?

Comment Re:Competition (Score 1) 83

Given that Microsoft seems to be investing heavily in Azure, I wonder exactly how they plan to beat AWS. AWS added a new machine-learning service a month ago; Azure has no equivalent. Either way, however, it's a win. If Microsoft is making some fatal mistake with its new business model, then maybe it goes bankrupt and helps the industry by going open source before death. If Azure holds its position or climbs in usage with its SaaS model, then there will probably be some interesting competition among those two and Google, all with large user bases. Either way, there's competition, which will (almost) forever drive prices down and capabilities up.

The scary thing about Microsoft is that they have at least tens of billions of dollars in the bank. They will likely never go bankrupt, but I'm not sure they'll ever make money in computers again if the Windows/Office gravy train ever comes to a halt.

Comment Re:It is a cycle. (Score 1) 83

Back when an IBM executive predicted that "the world will probably need six computers," the main computing model was a mainframe at a distant location, time-shared over (overpriced) telephone lines and VT100 terminals. Eventually workstations appeared and the move was to get off the mainframe and do local computing. Then along came Sun, "The network is *the* computer," and diskless workstations that would boot into an X11 display terminal off a distant server. Well, PCs came along and desktops became powerful enough to run even fluid-mechanics simulations. Then came high-performance computing, and now the cloud.

A bigger machine in a faraway place has always had the cost advantages of economies of scale. Every time there is a jump in connection speeds and bandwidth, some customers find it cheaper to "outsource" computing to a remote machine. But eventually the advantages of local storage and local computation add up. So let us see how long this iteration lasts.

The difference is that we still have really strong clients now and use the back end mainly for storage and some computation. It's not very comparable.

The other difference is that the technologies in use today make the "cloud" pretty much infinitely expandable, unlike a mainframe. Amazon has petabytes of storage and adds more continuously.

Comment Dual Homing Failover and IPv6 address aggregation (Score 1) 390

Yeah, that turned out to be one of the big problems with IPv6 address aggregation: it sounds great in the ivory tower but doesn't meet the needs of real customers. That's too bad, because every company that wants its own AS and routable address block is demanding a resource from every backbone router in the world.

IPv6's solution to the problem was to allow interfaces to have multiple IPv6 addresses, so you'd advertise address 2001:AAAA:xyzw:: via Carrier A and 2001:BBBB:abcd:: via Carrier B, both of which can reach your premises routers and firewalls; if a backhoe or router failure takes out your access to Carrier A, people can still reach your Carrier B address. Except, well, your DNS server needs to update pretty much instantly, and browsers often cache DNS results for a day or more, so half your users won't be able to reach your website, and address aggregation means you didn't get your own BGP AS to announce route changes with. But hey, your outgoing traffic will still be fine.
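A client that publishes both carrier addresses and iterates over the full AAAA set survives a single-carrier outage even with stale DNS caches. A minimal sketch of that fallback loop (the addresses are made up for illustration, and `connect` is injectable so the logic can be exercised without a network):

```python
def first_reachable(addresses, connect):
    """Return the first address that `connect` accepts.

    `addresses` is the full set a resolver hands back, e.g. one
    address per carrier prefix. Trying them all in order is what
    keeps users reachable when one carrier's prefix goes dark.
    """
    last_error = None
    for addr in addresses:
        try:
            connect(addr)
            return addr
        except OSError as exc:
            last_error = exc  # this carrier is down; try the next one
    raise OSError("no address reachable") from last_error
```

In practice you'd feed this the results of `socket.getaddrinfo()` and pass `socket.create_connection` as the connector; browsers do roughly this internally, which is why the stale-cache problem is painful but not always fatal.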

My back-of-a-napkin solution to this, a few years ago, was that there's an obvious business model for a few ISPs to conspire to jointly provide dual homing. For instance, if you've got up to 256 carriers, 00 through FF, each pair aa and bb can use BGP to advertise a block 2222:aabb::/32 to the world and give a customer 2222:aabb:xyzw::/48. The global BGP tables then get 32K routes for the pairs of ISPs, and each pair of ISPs shares another up-to-64K routes with each other, using either iBGP or other local routing protocols, to handle their customers' actual dual homing. (Obviously you can vary the number of ISPs, the size of the dual-homed blocks, and the amount of prefix reserved for this application, since 2222: may be too long, etc.)
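The napkin arithmetic above can be checked in a few lines (the constants mirror the 256-carrier example; they're parameters of the sketch, not of any deployed scheme):

```python
from math import comb

CARRIERS = 256          # carriers 00 through FF
PAIR_PREFIX = 32        # each pair advertises one 2222:aabb::/32
CUSTOMER_PREFIX = 48    # each dual-homed customer gets a /48 inside it

# Every unordered pair of carriers announces one /32 to the global table.
global_routes = comb(CARRIERS, 2)

# Inside each pair's /32, the number of /48 customer slots the pair
# must track between themselves (iBGP or a local routing protocol).
customers_per_pair = 2 ** (CUSTOMER_PREFIX - PAIR_PREFIX)

print(global_routes, customers_per_pair)  # 32640 routes, 65536 slots
```

So the global table carries roughly 32K extra routes regardless of customer count, and the up-to-64K per-pair customer routes stay out of everyone else's routers, which is the whole point.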

Comment IPv6: Longer addresses + magic vaporware (Score 1) 390

IPv6 was originally supposed to solve a whole lot of problems. Not only did it have longer addresses (which ISPs need to avoid having to deploy customers behind NAT, and in general to avoid running out of address space and crashing into the "Here Be Dragons" sign at the edge), but it was also supposed to solve route aggregation, security, multihoming, automatic addressing, etc.

A lot of that turned out to be wishful thinking. The hard part about IPsec tunnels is the key exchange and authentication, not building the tunnels. Route aggregation didn't really work out because enterprises weren't willing to use carrier addresses instead of their own, and small carriers also wanted their own addresses instead of sharing their upstream's address space. And where it wasn't wishful thinking, it was addressing problems that IPv4 found other solutions for, like DHCP for automatic addressing.

And while NAT is a hopeless botch, it does provide a simple-minded stateful firewall as default behaviour, while IPv6 users need explicit firewalling to get the same security with real addresses (which they needed to do anyway, but especially if you're using tunnels, you have to be sure to put it in all the right places).
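For concreteness, here's a minimal sketch of what that explicit firewalling might look like with Linux ip6tables. This is an illustrative fragment, not a complete or recommended ruleset, and the policy choices are assumptions:

```shell
# Default-deny inbound, reproducing NAT's accidental firewall explicitly.
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT ACCEPT

# Let replies to connections we initiated back in, plus loopback.
ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -i lo -j ACCEPT

# ICMPv6 is not optional in IPv6 (neighbor discovery, path MTU discovery).
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
```

And per the point about tunnels: rules like these have to be applied on the tunnel interface too, or traffic arriving through the tunnel bypasses the policy entirely.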

Comment Future: IPv4 via NAT, IPv6 Native (Score 1) 390

Back when I was closer to the ISP business, the general plan of most consumer ISPs was to start supporting IPv6 (once they had all their hardware and operations support systems able to manage it - it's amazing how many moving parts there are), and migrate most users to dual-stack, with NAT for IPv4 plus native IPv6, or else to use NAT IPv4 with tunneled IPv6.

Comment Comcast was ahead of many US ISPs on IPv6 (Score 1) 390

Comcast may have lots of other issues as an ISP, such as banning customers from running servers at home, and monthly usage caps (if they still do that), but they were ahead of most other US consumer ISPs in taking IPv6 seriously.

(My ISP supports IPv6 over tunnels, but doesn't run native dual-stack, at least on telco DSL. And I really should get around to actually trying it out, but I haven't...)

Comment Re:Yes, Old SATA SSD, not Rotating Disk (Score 1) 162

Anonymous Coward was asking whether the "old SATA drives" referred to old SSDs that use SATA (which wouldn't be too surprising if they were almost as fast) or old rotating hard disks that use SATA (which would be really surprising to find faster than an SSD). Google results for the X25-M say yes, it's an SSD, just a somewhat older one that uses SATA instead of PCIe.

Comment Re:How about... (Score 1) 101

Actually, it wasn't my statement, but I did defend it as not too far from true.

Because many people over 60 have very little experience with computers, you have more knowledge to backfill in order to teach them about computers (starting with de-mystifying the magic box). Again, it's not a question of intelligence or educability, just a matter of experience.

That will be true for many (more often than not), but clearly is far from universally true.

I suspect these are simply magic.

I have little doubt most of those things are magic to most people, but through using them for decades, they have learned to deal with them from a black-box perspective. The 60-somethings who have recently found a good enough reason to bother with a computer will get there too.
