
Comment Re:Very original (Score 1) 182

That's pretty much it. Or they think the $1000 is buying them some special features like running quieter.

The prices in this market are downright crazy, probably because it's a quasi-medical application. Yes, there are some that offer features like UV sterilizers and engineer the airflow so it passes through the sterilizer slowly enough to actually work, but even the ones with features that work are completely overpriced, and that won't change as long as only a small percentage of desperate people need the product. It's no surprise to me that once the need becomes mainstream, gouging the hell out of the consumer gets harder, and no, it has nothing to do with mass production, just exploitation of the sick.

 

Comment Re:CPU time for charity (Score 1) 208

This suggestion would probably be the least work to set up and later tear down. Assuming the existing hardware runs a supported platform, it's just packages and a small amount of configuration, and it can run in an unprivileged account. As the unplug date approaches, stop accepting new work units from projects with long-running tasks so you don't leave too many unfinished ones. And yes, the WCG does have tasks that need storage, not just CPU.
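To give a sense of how little setup that is, here's a rough sketch of the two moments that matter, driving the stock boinccmd CLI from Python (assuming the boinc-client package is already installed and running under its own unprivileged user; the project URL is WCG's public one and the account key is a placeholder):

```python
import subprocess

WCG_URL = "https://www.worldcommunitygrid.org/"
ACCOUNT_KEY = "YOUR_WCG_ACCOUNT_KEY"  # placeholder -- issued when you register

def attach():
    """Attach the local BOINC client to World Community Grid."""
    subprocess.run(["boinccmd", "--project_attach", WCG_URL, ACCOUNT_KEY], check=True)

def wind_down():
    """Near the unplug date: finish queued work units but stop fetching new ones."""
    subprocess.run(["boinccmd", "--project", WCG_URL, "nomorework"], check=True)
```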

Comment Re:user error (Score 0) 710

People can live without a clothing dryer.

...If they are not allergic to dust mites, as some tens of percent of people are, or if they spend even more energy heating their water to 140F, or buy a bunch of chemicals to kill the mites with cold water, then yes, they can. Oh, and then there is the allergy to pollen picked up on the clothesline, which some other tens of percent of people have.

As to the OP, only a small sliver of people are perceptive enough to realize their impact on the environment yet not perceptive enough to realize that cutting their own emissions does not do much good, because the vast majority of people will not. There are productive things to do that help push technology forward, like buying into advanced auto technology or alt-energy systems if you can. The rest just makes energy cheaper for the glutton across the street, so he can have more kids raised without your environmental values.

Comment Re:This just illustrates (Score 2) 365

If we knew about the effects of excessive CO2 production in the 1900s,

FWIW.

"The greenhouse effect is the process by which absorption and emission of infrared radiation by gases in a planet's atmosphere warm its lower atmosphere and surface. It was proposed by Joseph Fourier in 1824, discovered in 1860 by John Tyndall,[66] was first investigated quantitatively by Svante Arrhenius in 1896,[67] and was developed in the 1930s through 1960s by Guy Stewart Callendar.[68]" ...just because it always amuses me to remind myself how long we've known much physics.

Comment Re:I prefer (Score 1) 337

You make that 6% and more back in improved latency. Of course these days, even with jumbo frames, Ethernet link speeds are high enough that jitter is less of an issue, but that's only because bandwidth was thrown at the problem; thrown at ATM, that same bandwidth would easily have made up for the cell overhead, without the hackery of MPLS.
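To put rough numbers on the jitter point (my own back-of-envelope figures, not from the thread): the worst-case wait behind one maximum-size frame shrinks linearly with link speed, which is why throwing bandwidth at Ethernet papered over what ATM's 53-byte cells solved directly at low speeds.

```python
def serialization_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

# Head-of-line wait behind a 9000-byte jumbo frame vs. a 53-byte ATM cell.
for name, bps in [("100 Mb/s", 100e6), ("1 Gb/s", 1e9), ("10 Gb/s", 10e9)]:
    print(f"{name}: jumbo {serialization_delay_us(9000, bps):7.1f} us, "
          f"ATM cell {serialization_delay_us(53, bps):5.2f} us")
```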

Comment Re:I prefer (Score 1) 337

And that's great from the perspective of defining what should happen with basic-service traffic, with the exception that it doesn't allow ISPs to mitigate obvious DDoS attacks, because they must treat all similar traffic the same.

Also, we do not want to make it impossible for Company A to build a super-fast, super reliable, prioritized network over normal ISP/carrier links that allows them to provide e.g. home-based medical monitoring or even more trivial services. There's a legitimate case for premium service contracts, and they should be looked at as an opportunity to raise money for improving basic service rather than some sort of evil back-room deal. Locking the ratio of basic service capacity to prioritized offerings is how to do this most simply, with something akin to the "medical loss ratio" also an option.

Finally, the more legal policy that gets thrown at the network staff, the harder their job gets, and believe me, in most places the network staff is already oversubscribed in both manpower and talent (heck, ISPs can't even reliably rid us of source-address spoofing to this day). Having to pass every rule change through a legislative compliance test would be back-breaking.

Comment Re:I prefer (Score 1) 337

What we need is something like RSVP being widely implemented, but I haven't noticed it mentioned anywhere in these net neutrality discussions.

What we really needed was wide-scale deployment of ATM, so the client could define QoS properly in a call-based fashion. But that didn't happen.

Comment Re:Umm, no (Score 1) 323

Are you *seriously* suggesting using an easily spoofed MAC address is one way to do that?

No, and I remind my employers of this pretty much monthly to try to push towards 802.1x/MACSec on the wired side. However, we already use (password-based) 802.1x on the WiFi side, and you can't gain anything by changing your MAC after WPA2 enterprise authentication because your encryption keys and AAA state are tied to it, and trying to use someone else's for a fresh authentication isn't something the controllers abide. Which is why the Apple tweak doesn't try to touch anything but probes; it would be completely dysfunctional if they did it on actual traffic.
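For the curious, here's a quick way to watch those randomized probes on the air: randomized MACs set the locally administered bit in the first octet. This is just a sketch (assuming a Linux box with scapy and a monitor-mode interface I'm calling wlan0mon; it's not something our controllers run):

```python
from scapy.all import sniff, Dot11ProbeReq

def locally_administered(mac: str) -> bool:
    # Randomized MACs set bit 1 of the first octet (the locally administered bit).
    return bool(int(mac.split(":")[0], 16) & 0x02)

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq) and pkt.addr2:
        kind = "randomized?" if locally_administered(pkt.addr2) else "burned-in"
        print(f"probe request from {pkt.addr2} ({kind})")

# Needs root and a monitor-mode interface; the interface name is a placeholder.
sniff(iface="wlan0mon", prn=show_probe, store=False)
```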

Also, in our case your IP is locked to your MAC, and ARP traffic is properly inspected and filtered (you'd be surprised how many WiFi systems do not do this).

So yes, our network relies on a feature (802.1x auth and WPA2) which "means less privacy for users" in the sense that we know who is using what machine, for what, and roughly where. You would be hard pressed to find an enterprise network that did not.

As far as what we use it for in-house: it's to improve the odds that each client has virus-checked each of their iOS or Windows devices individually (it is more trouble for most of them to learn how to change a MAC address than just to update their virus signatures, so this works well), and, as mentioned above, the controllers do location-based roaming optimization to unstick sticky clients; that last part is what the Apple changes have the potential to break. We do carve out exemptions for network troubleshooting, deployment planning, and for stuff like locating lost or stolen equipment, but for the most part our policy on location-tracking data is "don't look at it, and throw it away promptly."

Now, if this feature does become a problem, I sincerely hope Apple bothered to put in a user-accessible control for it. Given they seem to be of the mindset that the more user control they can take away from their WiFi setup the better, that hope is pretty bleak, and we'll be lucky to even get the ability to tweak it via a .mobileconfig.

Comment Re:I prefer (Score 5, Interesting) 337

It's a giant sticky mess. Many advocates for net neutrality have only a vague idea of how things work so their proposals are vague. Many with the experience to produce more detailed proposals have ulterior motives.

Anyway, if you assume honoring protocol priorities is OK, then you end up with abusive situations where an ISP that runs video protocol 1 can sink traffic from a competitor based on the fact that they use video protocol 2. Add to that that protocols can be patented, and you'd end up with an incentive to create and patent stupid protocols just to do exactly that.

Also, there are services that would benefit the customer/public/economy and that involve prioritizing packets between privately administered device networks rather than by protocol, and defining the difference between those services and unfair competitive practices leads us down a road to byzantinism.

Really, we need to get to a point where end-users can send ToS bits into the network and have them honored as long as they stay below a fair-usage level for ToS-marked packets, with a certain percentage of the network kept free for best effort, giving the consumer some level of live control.

Before we even do that, though, we need to move towards "ISPs and other providers must make X% of all built capacity available at a (possibly tariffed) basic rate for public best-effort use," and apply that principle across bandwidth, packets-per-second processing power, and -- the toughest sell but very important -- CDN capacity. The cash flow through CDNs really needs to be further regulated to eliminate the perverse incentive of making money off congested pipes on the back end. The restriction on sales of prioritized services in the other (100-X)% of the pipe would provide the appropriate incentive to expand the entire pipe, benefiting the basic-rate users and not just the premium arrangements. The X could be adjusted by policy until the sweet spot is found, or as the ecosystem changes.

Now if the above was TLDR, a solid proposal would be 100x more complicated.
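For what "end-users sending ToS bits into the network" looks like in practice, here's a minimal sketch of marking traffic from the client side (the DSCP value and destination are arbitrary illustrations; whether anything along the path honors the marking is exactly the policy question above):

```python
import socket

EF_DSCP = 46                 # "expedited forwarding" per-hop behavior
TOS_BYTE = EF_DSCP << 2      # DSCP sits in the top six bits of the ToS/DS field

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))  # documentation address
```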

Comment Re:Umm, no (Score 1) 323

It's not an assumption, it's a deduction and a prediction: Apple products will perform comparatively poorly on networks that have features such as Prediction Based Roaming (Cisco) or ClientMatch (Aruba) unless they *properly* implement 802.11k and the network is 11k-capable, or unless they stop randomizing the MAC in probes when associated to an enterprise SSID. It will be especially bad considering that the utter suck in Apple's roaming behavior is one of the primary reasons these technologies were developed. The reason I am not optimistic that they will properly turn this feature off when needed is that Apple has, historically, seemed determined to make their devices useless outside of the living room and coffee shop. I don't know if they've even realized that running a differently named SSID on 5GHz than on 2.4GHz (a position they held for years) so their clients stop crapping their pants is NOT an acceptable workaround.

Meanwhile, as they flail around, they will likely degrade overall network performance for everyone by sending and receiving low-rate frames at high transmit power to distant APs, with plenty of retransmits. This already happens, and this feature has the potential to make it harder for the network to compensate for bad client behavior.

Also, to your second point: in order to be exempt from CALEA, we are legally obliged to make a reasonable effort to ensure the people we provide network service to are identifiable associates of our organization. That is beside the point as far as TFA is concerned, but so you understand: if we do not, the alternative is to make our network sniff-ready for the feds at our expense. Ensuring that we qualify as a "private network" means ensuring that we are serving members or identified guests of a private organization (ourselves, or a consortium such as eduroam), and that involves identifying those people's machines. We do this (and adjust our historical data retention and usage policies accordingly) to improve overall privacy, comparatively.
