Comment Re:It's too late (Score 1) 681

I'm using Classic Shell too, and I agree it fixes nearly everything that annoyed me in Win 8.x.

Many people, on the other hand, are still upset (exaggeratedly so, IMO) about needing third-party applications to restore classic Start menu functionality, or are adamantly opposed to any such workaround.

Comment Re:Here's the problem. (Score 1) 205

If your "secure" applications run on Linux, Windows or any other major modern OS, that's hundreds of million lines of code that even experienced developers have little to no insight into and many of the security exploits that pop up, Heartbleed being the latest high-profile case, are tied to baked code and libraries that get reused by thousands of developers with implicit trust since almost nobody can afford to re-audit that code for themselves even when they have the expertise to do so.

Even if your application's own code is flawlessly secure, there are countless ways the OS, other applications running on the same machine, and the hardware itself may be used to undermine your otherwise perfect security.

The problems extend far beyond self-taught programming... and self-taught programmers are not intrinsically bad either.

Comment Re:Here's the problem. (Score 1) 205

Systems these days are so hopelessly complex, thanks to running full-blown OSes (mainly Linux derivatives like Android) for convenience, that guaranteeing security is practically impossible most of the time. Nobody ever knows the system inside-out, so everyone relies on everyone else making their own part of the source tree work properly, without unforeseen interactions between software components or with the hardware.

Most developers and companies do not have the time and resources to get intimately acquainted with every minute detail of their development environment, libraries, OS, etc., to understand the millions of ways things can possibly go wrong, assuming they even have access to the source code in the first place. If they had to do that before getting to work on their actual project, most of them would die of old age before producing anything; demanding that degree of understanding is simply not realistic.

The threat of severe legal penalties for things that are often nearly impossible to foresee would make tons of would-be developers give up on the idea - it simply makes no sense.

Comment Re:Yeah (Score 1) 37

Weight measurements are simple: the weight of a non-volatile substance (or volatile substance in a container) does not change with time, temperature or other variables. You have a quantity of whatever, put it on the scale and you are done.

Jewelry is a poor example for W&M policing: jewelry is a luxury good, and it is not sold by weight in the first place. Try the retail food and gas industries instead. I do not know how it works in the USA, but in Canada, calibration stickers for pumps and scales used in retail must be in plain sight where consumers can easily inspect them, and merchants are required to stop using equipment whose calibration is out of date. No calibration, no sale.

Available bandwidth through a network of networks, however, is infinitely variable: nothing can be "calibrated" to guarantee any amount of bandwidth along any particular route at any given time, since the whole standard internet operates on a "best-effort" basis, where "best-effort" actually means no special effort at all - just leave the equipment on and forwarding packets. If you want guaranteed performance between two points across the internet, you need to pay the intervening networks for a private virtual circuit of some sort.

Comment Re:Yeah (Score 1) 37

Currently the ISPs market "Up to 50Mb!" but that's only if no one else on your remote is currently online.

If you have 50Mbps over phone lines, you have VDSL2, and VDSL2 remotes typically have at least 20Mbps of available upstream capacity per port, so if everyone on the same remote has 50Mbps service, about 40% of the people connected to it can use their service at full speed simultaneously before the remote actually becomes a choke point. This part of the service is something the ISP has full direct control over and visibility into. Even ancient ADSL1 DSLAMs could probe lines for service quality monitoring/provisioning purposes; everyone knows performance on xDSL depends on line quality, and that part of the service has absolutely nothing to do with network neutrality.
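The 40% figure is just the ratio of per-port uplink share to the sold rate; a quick sketch of that arithmetic (figures taken from the comment above, not from any particular VDSL2 vendor's spec sheet):

```python
# Figures from the comment, not from any specific VDSL2 remote:
# each subscriber buys 50 Mbps, and the remote's uplink provides
# roughly 20 Mbps of capacity per subscriber port.
service_rate_mbps = 50
uplink_share_per_port_mbps = 20

# Fraction of subscribers that can run at full speed at the same time
# before the remote's uplink becomes the choke point.
fraction_at_full_speed = uplink_share_per_port_mbps / service_rate_mbps
print(f"{fraction_at_full_speed:.0%}")  # prints "40%"
```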

Where things become far less predictable is once traffic leaves the ISP's middle-mile infrastructure and interconnects with peers and transit providers: internal hops across external parties the ISP has absolutely no visibility into or power over, interconnects between those third parties and others beyond them, the far-end interconnects between those third parties' own third parties, the far-end network, and so on.

If you want network neutrality to start defining some degree of end-to-end performance guarantees (unless further limited by technical constraints such as maximum sustainable line sync), the whole internet would be affected, not only the first-mile operators.

Comment Re:Yeah (Score 1) 37

This should be enforced by weights and measures.

Good luck with that.

There are thousands of variables that can affect the bandwidth available between points A and B across the internet, many of which are beyond the end-users' and ISPs' control, which makes any sort of bandwidth guarantee over "best-effort" transit impossible to honor in any remotely meaningful way. Throwing W&M, NIST or whatever else at this is not going to do anyone any good.

If you want everyone's internet service to effectively be covered by an end-to-end bandwidth SLA of some sort, things are likely going to get a fair bit more expensive if the minimum guarantee is to be remotely usable.

Comment Re:Conspiracy-theory rubbish ... (Score 1) 337

There is plenty of time, since all modern gear uses some variant of a store-and-forward architecture, and routers need to be able to rewrite packet headers anyway to set things like the ECN bit, modify the QoS field, check/update the TTL, check/update the header checksum, etc. Most of the circuitry for this basic line-rate processing is built directly into the chips handling individual ports, and most of that processing can be done on the fly as data gets shifted in/out of the ingress port's buffer, adding little if any extra latency to the store-and-forward process.
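As a sketch of how cheap that header rewriting is, here is the classic incremental-checksum trick for a TTL decrement, in the spirit of RFC 1141/1624; this is illustrative Python, not any router vendor's actual forwarding code, and the RFCs handle the 0xFFFF corner cases more carefully:

```python
def decrement_ttl(ttl: int, checksum: int) -> tuple[int, int]:
    """Decrement the IPv4 TTL and patch the header checksum incrementally.

    TTL sits in the high byte of a 16-bit header word, so dropping it
    by one lowers that word by 0x0100; the ones'-complement checksum
    therefore rises by 0x0100 (with end-around carry). No need to
    recompute the checksum over the whole header.
    """
    ttl -= 1
    checksum += 0x0100
    checksum = (checksum & 0xFFFF) + (checksum >> 16)  # end-around carry
    return ttl, checksum
```

A full recompute over the 20-byte header touches ten words; this patch touches one, which is why it costs essentially nothing in hardware.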

The whole process of looking up routes in the routing tables and scheduling a path between the chip receiving ingress data from a port and the chip driving the egress port is a far more complex operation than checking QoS flags to decide which priority egress queue a packet lands in (most modern equipment has eight egress queues to match Ethernet's three-bit Class-of-Service tag) and then picking which queue gets to send a packet next.

Basic traffic prioritization based on the QoS field is dead-simple and extremely lightweight hardware-wise.

Comment Re:Somewhere in my mind... (Score 1) 337

The difference between phone and internet is that phone is classified as an essential service (or very close to it) and has nearly absolute QoS guarantees short of infrastructure getting ripped apart. Internet, on the other hand, is merely a best-effort service every step of the way unless extraordinary measures are taken to get around that (ex.: ordering FTTP/FTTB with an SLA covering both link uptime and bandwidth through both endpoint ISPs and their intermediate network(s)).

If the whole internet were built to the same standards as the PSTN (no congestion whatsoever allowed during typical peak hours), internet service could end up considerably more expensive.

Comment Re:Conspiracy-theory rubbish ... (Score 1) 337

You do not need fancy routers to do QoS: all you need is an agreement on your NNI (such as paid transit based on QoS tag) with whoever wants to pass QoS'd traffic to your network, or whatever network you want to pass QoS'd traffic to, plus QoS-based routing enabled on your routers and switches - DiffServ QoS has been supported by half-decent carrier-grade routers for over a decade. Heck, even entry-level managed switches can do basic L3 QoS-based switching (such as mapping DiffServ QoS to Ethernet CoS tags) these days.
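The DiffServ-to-CoS mapping mentioned above is, by a common default, just a bit shift: the top three bits of the 6-bit DSCP become the 802.1p CoS value (switches can of course be configured with arbitrary mapping tables instead). A sketch:

```python
def dscp_to_cos(dscp: int) -> int:
    # Common default mapping: the top 3 bits of the 6-bit DSCP become
    # the 3-bit Ethernet CoS value, e.g. EF (DSCP 46) -> CoS 5.
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field (0-63)")
    return dscp >> 3
```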

Comment Re:Somewhere in my mind... (Score 1) 337

You want QoS for VoIP; video service providers (cable/phone or other) or their subscribers may want QoS for video streaming; other services may want QoS for whatever it is they are doing; and end-users may want QoS for yet other stuff.

QoS is just one of many methods that can be used to prioritize traffic. The only difference between the cable/telco and over-the-top service providers is that the incumbent actually has access to the equipment, so it can do it whichever way it pleases and manage the associated costs however it wants.

Since there is no standard for handling QoS between networks, passing QoS'd traffic through peers and transit providers requires extra agreements between the entities, and higher rates for the extra effort, if the QoS tagging is actually going to be honored across the other parties' networks. Much of the time, networks simply clear the QoS field on ingress at their border routers.
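That "clear on ingress" behavior amounts to remarking everything to best effort at the trust boundary. A toy version, where the packet representation and field name are illustrative rather than any vendor's actual API:

```python
def police_ingress_dscp(packet: dict, trusted_peer: bool) -> dict:
    # At an untrusted border, remark DSCP to 0 (best effort) so that
    # external priority markings carry no weight inside the network.
    if not trusted_peer:
        packet = {**packet, "dscp": 0}
    return packet
```

With a paid QoS agreement in place, the peer's port would be marked trusted and the tags would survive; without one, everything enters as best effort.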

If network neutrality allowed QoS and forced the whole transit and ISP business to honor it, it would come with extra fees attached to offset the extra costs. We would be back almost exactly where everything started: premium rates for premium services.
