Third-party regulation? You need only look at Ma Bell to see how that turned out. For most local, privately owned ISPs, the only avenue to offering affordable "broadband" was through the phone company's DSL. For a while this was regulated, poorly, and that allowed them to stay afloat, but only just. Most of these local ISPs slowly faded away, not because they didn't have the user base, but because what they had to pay the telephone companies left little margin for profit. At one point I remember BellSouth in particular telling us we could no longer offer the cheap, low-speed tier, even though they themselves still offered it -- meaning we had no way to compete for the lowest dollar. Dropping DSL regulation was the final death knell; the ISPs that were large enough to negotiate a deal survived, but that was it. Personally, I feel the telephone line is a perfect example of why government regulation doesn't work.
There are plenty of companies that can compete with Comcast. Comcast is hardly the only large cable company out there. I don't see how it would necessarily be any different than the cell phone market. They would have to expand their networks gradually, but competition would be possible.
"The MIT media lab. is developing a motion screen computer. It looks back at you."
They had such babies twenty-six years ago...
This is legislation basically saying a company has to conform to points 1, 2 and 3 if they want to install software X of a particular variant (in this case, P2P) on your machine.
This is not really much different from telling a contractor that they're free to install a bathroom into your home, but that they will have to abide by laws 1, 2 and 3 regarding things like the electrical wiring.
(although that's based on UK and NL law -- I suppose maybe in the U.S. every contractor is free to install an outlet into the side of their client's bathtub if they so desire?)
Is that over-legislation in the case of P2P? Probably. But mostly because it's a bit odd to target P2P specifically -- it could apply to just about any program. Security programs would be an issue, though.*
The points themselves -seem- sound enough, though...
prohibit peer-to-peer file-sharing programs from being installed without the informed consent of the authorized computer user.
No stealthy installs -- I'm all for that. I'm looking at you, Apple, with iTunes and Safari; and you, MS, for MSN's final installation screen suggesting IE should be my default browser and MSN be set as my homepage; and the crapload of other apps that suggest installing a Yahoo! toolbar is vital to the operation of the principal software. Give me a donate button instead -- I'll happily part with some dosh if I'm using your app, more than you're getting from Yahoo for the toolbar install, I'd imagine.
The legislation would also prohibit P2P software that would prevent the authorized user from blocking the installation of a P2P file-sharing program and/or disabling or removing any P2P file-sharing program.
So, BitTorrent isn't allowed to block my installation of, say, uTorrent, nor would it be allowed to prevent me from uninstalling it (or other P2P programs).
* just to get back to that security programs bit - obviously a security program -should- be allowed to block other software from being installed if that other software is malware. So that's where broader legislation could have problems.
Software developers would be required to clearly inform users when their files are made available to other peer-to-peer users
Given the "I didn't know!" defense-craptaculaire proferred by some people, I think that's sane, too. Heck, disable sharing by default, and if the user wants to share files warn them of the ramifications, and always make it clear -which- files you're sharing.. not via a configuration dialog that merely specifies the path - offer a screen where you can get an -actual list- of the files.
Better yet would be not allowing the sharing of a directory 'as is' at all. Have the user confirm that any files added to a specified share folder should be shared - keep a simple database (flat text file would do) of the files the user actually wanted to share.
That way a business user can't drop a random document into the share folder, forget that the folder is shared, and auto-share that document with the world -- unless they also go to their P2P app and confirm that they want the added file shared.
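The confirm-before-share idea could be sketched like this (a hypothetical design, not any real P2P client's API; shared_files.txt is a made-up name for the flat-text manifest):

```python
# Sketch: files dropped into the share folder are NOT shared until the user
# explicitly confirms each one. Confirmed names go into a flat text file,
# and only confirmed files are ever served to peers.
from pathlib import Path

MANIFEST = Path("shared_files.txt")  # flat-text "database" of confirmed shares

def _confirmed() -> set[str]:
    """Read the set of filenames the user has explicitly approved."""
    return set(MANIFEST.read_text().splitlines()) if MANIFEST.exists() else set()

def pending_files(share_dir: Path) -> list[Path]:
    """Files sitting in the share folder that the user has not yet confirmed."""
    return [p for p in share_dir.iterdir() if p.is_file() and p.name not in _confirmed()]

def confirm_share(filename: str) -> None:
    """Record an explicit user confirmation for one file."""
    with MANIFEST.open("a") as f:
        f.write(filename + "\n")

def shared_files(share_dir: Path) -> list[Path]:
    """The actual list of files served to peers: confirmed names only."""
    return [p for p in share_dir.iterdir() if p.is_file() and p.name in _confirmed()]
```

The point of the design is that merely placing a file in the folder changes nothing; sharing requires a second, deliberate action inside the P2P app.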
The thing -I- worry about is that IANAL. Moreover, IANAS(neaky)L - so I don't know just how these definitions (which I suspect are loosely phrased around the actual suggested legislation anyway) can be worked around, or twisted for abuse, etc.
I'd understand your confusion if English is not your first language. However, that sentence is explicit and unambiguous:
"resulting in slow-downs as the systems were forced to increasingly turn to disk-based virtual memory to handle tasks[...]" (emphasis mine).
The highlighted words assert that the slow-downs were a direct result of the memory consumption and that, consequently, disk-based virtual memory was used.
It is true that the article does not expressly state how they determined that virtual memory was being used, but seeing that they make this assertion, it stands to reason that it was at least observed in some manner. This is attested by a blog entry from the actual researchers conducting the tests:
New data from the exo.repository shows that better than 8 in 10 Windows 7 systems monitored by the exo.performance.network are running alarmingly low on physical memory. And nearly the same number are demonstrating significant delays in I/O processing - ostensibly due to heavy virtual memory activity as Windows compensates for insufficient RAM. (emphasis mine)
"Ben Sheffner is an attorney at NBC Universal"
Which explains the bias I detected in the article. I repeatedly found the examples he used to support Apple's hypothetical "case" to be missing key details.
That's not to say Apple doesn't have a case (I have no idea really) but I'm always suspicious of people who intentionally omit important details.
Older Cisco equipment can function just as well as newer for 95% of lab scenarios. You are very unlikely to be needing to use all the newer features.
Anything that can run IOS 12.3 and is less than a decade old can do a lot more than you think. We do all our BGP testing on a stack of 2600s and 3600s and never have an issue, even though in production it's 2800s, 3800s, etc.
Granted, there are features for which you do need the newer kit, especially when syntax changes (e.g. IP SLA commands, newer NetFlow commands, and class-map-based QoS, to name three off the top of my head), but none of the core routing and switching features/commands have changed much since the introduction of CEF -- they all do ACLs, route maps, OSPF, BGP, EIGRP, VLANs, spanning tree, rapid spanning tree, and IPsec VPNs. I'm speaking from an enterprise POV, not a service provider, but I'd imagine that if you are in a telco environment you wouldn't be lacking gear.
For many minor test scenarios, you can pick a test branch office and use the good old 'reload in' command to ensure that no matter how badly you stuff it up, everything will bounce and come back (just remember NOT TO COPY RUN START, lol).
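A quick sketch of that safety net ('reload in' and 'reload cancel' are standard IOS commands; the 15-minute window and hostname are just illustrative):

```
Router# reload in 15          ! schedule a reboot 15 minutes out as a safety net
Router# configure terminal    ! make the risky changes (running-config only)
 ...
Router# reload cancel         ! still have access afterwards? cancel the reboot
```

Because the changes were never copied to startup-config, the scheduled reload rolls everything back automatically if you lock yourself out.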
Then there's the sleight of hand methods:
- always ordering more for projects than you really need. Par for the course, really, especially as most project managers haven't a clue when it comes to the nuts and bolts of a big Cisco order.
- pushing for EOL replacements as early as possible, and intentionally conflating end-of-sale with end-of-life.
- getting stuff in for projects as early as possible; then you have a month or two to use it as test gear.
- remembering that your lab need not mirror reality; scale down as much as possible. E.g. to simulate a pair of 4506 multilayer switches running VRRP, use a pair of 3560s. Use your CCO login and flash away to your heart's content (I know it's breaching licensing, but for test scenarios, meh).
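For instance, a minimal VRRP pair on the scaled-down switches might look like this (addresses, VLAN number, and priorities are made up for illustration, and it assumes an IOS image that supports VRRP):

```
! Switch A (intended master, higher priority)
interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 vrrp 10 ip 10.0.10.1
 vrrp 10 priority 110

! Switch B (backup, default priority 100)
interface Vlan10
 ip address 10.0.10.3 255.255.255.0
 vrrp 10 ip 10.0.10.1
```

Hosts point their default gateway at the virtual address 10.0.10.1, which follows whichever switch currently holds the VRRP master role -- exactly the behavior you'd be testing on the bigger chassis.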
"What people have been reduced to are mere 3-D representations of their own data." -- Arthur Miller