
Comment Re:Need more mature languages (Score 2) 231

Python provides no true concurrency due to the global interpreter lock. Java is not suitable for realtime due to unpredictable GC, while C/C++ is not suitable for anything that should never crash or return random results, due to memory corruption.

Python has multiprocessing for 'true concurrency' if you need it.
Java is not actually used for anything real-time.
C/C++ can be written safely if you are willing to be careful and to unit-test with tools like ASan and Valgrind (managed memory via C++11/14 constructs also takes the drudgery out of it).
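To make that last point concrete, here is a minimal sketch (mine, not anything from the thread) of what those C++11/14 constructs buy you -- no manual delete to forget, and whatever manual handling remains gets caught by ASan during unit tests:

    #include <memory>
    #include <vector>

    struct Connection {
        explicit Connection(int fd) : fd_(fd) {}
        int fd_;
    };

    // RAII: the unique_ptr frees the Connection automatically, so the
    // leak/double-free class of bugs never exists in the first place.
    std::unique_ptr<Connection> open_connection(int fd) {
        return std::make_unique<Connection>(fd);  // C++14
    }

    int main() {
        std::vector<std::unique_ptr<Connection>> pool;
        pool.push_back(open_connection(3));
        return 0;  // everything freed here; build tests with -fsanitize=address
    }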

Yes, those are hard problems, but it's also 2015 and we can come up with powerful compilers and JIT virtual machines. Going back to less concurrency than plain old shell scripts where '&' starts a true separate process is not an answer.

Good thing no one proposed that.

Comment Re:Often a small number of users /do/ use a ton .. (Score 1) 622

There are tradeoffs. For one, we were a cash-strapped small college and couldn't afford the kind of hardware needed for deep packet inspection. Another is that a lot of encrypted bulk traffic (CrashPlan) is indistinguishable from high-priority traffic; it doesn't do to say that people moving large quantities of data over SSL or IPsec should get a pass. Finally, we had serious privacy concerns about inspecting and tagging the content of internet traffic.

In the end, the fatal blow (besides $$$) was that it's pretty damned obvious that if you moved >1 GB in the last 15 minutes, you must be doing something that's not interactive. Schedule your bulk transfers for 3 AM so they don't overlap with everyone else's interactive use.
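(For scale: 1 GB in 15 minutes is a sustained ~9 Mbit/s, which no amount of web browsing or email will produce.)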

PS. The car analogy doesn't work because we are not regulating traffic "on the highway", we are regulating the "on ramps". And we don't need to check whether a particular on-ramp is connected to something important like a police station -- we know that an on-ramp sending 100x the average traffic over the last 5 minutes is definitely not important.

PPS. For us, perceptible latency kicked in around 75% congestion, not 95%. At 95% the system suffered complete congestive collapse of throughput. YMMV.

Comment Re:Often a small number of users /do/ use a ton .. (Score 1) 622

First of all, there are quote tags for a reason. Learn to use them.

Second, by your definition every single service on planet earth is "oversold". There aren't enough roads and bridges for everyone to drive to the same place at the same time, nor would there be enough parking when they got there. There aren't enough ambulances or hospital beds or doctors for everyone to come to the same ER at the same time. There aren't enough phone circuits or available spectrum slots for everyone to make a call at the same time. There aren't enough planes for everyone to fly at the same time, or enough runways/taxiways/gates for all the planes that do exist to go to the same place at once. The supermarket will run out if everyone comes in at the same time trying to buy peanut butter too.

All of these services continue to exist because, as it turns out, it's absurdly unlikely for everyone to want to use the same service at the same time. Instead they are tuned somewhere in between average and peak demand (planes probably towards the former, ERs towards the latter) and nowhere near the "100% use factor".
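Phone circuits are the textbook case: they have been sized for a century with the Erlang B formula, which gives the probability a call is blocked when m circuits are offered E erlangs of traffic. A quick sketch (the standard recurrence; my code and example numbers, not from the thread):

    #include <cstdio>

    // Erlang B blocking probability, via the numerically stable recurrence
    //   B(E, 0) = 1;   B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))
    double erlang_b(double erlangs, int circuits) {
        double b = 1.0;
        for (int m = 1; m <= circuits; ++m)
            b = erlangs * b / (m + erlangs * b);
        return b;
    }

    int main() {
        // Offered 100 erlangs: provisioning exactly "average" capacity
        // (100 circuits) blocks roughly 7.6% of calls; ~117 circuits gets
        // you to ~1%. Nobody builds for the one-circuit-per-person worst case.
        printf("B(100, 100) = %.4f\n", erlang_b(100.0, 100));
        printf("B(100, 117) = %.4f\n", erlang_b(100.0, 117));
        return 0;
    }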

What's more, no sane designer would have it otherwise. Roads that could handle everyone trying to drive the same direction at the same time would be dozens of lanes wide and go underutilized 99% of the time. If we wanted to ensure that everyone could fly at the same time, we would need 1000 times as many planes, and most of them would just sit on the ground all the time. A supermarket that stored enough peanut butter in case everyone in my town decided to buy a jar at the same time would end up storing (and throwing out) literally tons of peanut butter for no good reason.

Comment Re:Often a small number of users /do/ use a ton .. (Score 1) 622

If I pay for X Mb/s, then I am well within my rights to keep my pipe running at X Mb/s for every single second of my subscription. If my Internet provider knows it can't keep up, while taking my money, then that is stealing from me.

Right, so in that alternate universe, why wouldn't the service provider come and say to you: "You know what, you pay for X Mbps, but I can offer burst speeds of 50*X Mbps to you and 49 other neighbors, provided you agree to use that maximum for no more than 1/50th of the time on average. That 'burst speed' would let you surf the web much faster, but if you have any bulk non-interactive data transfers like OS patches/updates or large offsite backups, we require that you limit them to X Mbps."
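To spell out the arithmetic of that hypothetical deal: 50 subscribers each bursting at 50*X Mbps for 1/50th of the time average X Mbps apiece, so the expected load on the shared 50*X pipe is exactly 50 * X -- and since the bursts almost never coincide, everyone sees the fast lane almost all of the time.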

Why on earth wouldn't I take that deal?

Comment Often a small number of users /do/ use a ton ... (Score 5, Insightful) 622

We have no way of evaluating their claims that a small number of users who abused the system caused it to be unprofitable for them.

Anecdote incoming, but when I helped out with college IT it was fairly consistent that the top 20% of users (well, network ports) were responsible for 80-90% of the usage, and the top 2% (two dozen or so) for about 50-60% of it. It was consistently the same few ports, too -- not just that usage was skewed at any given moment, but that the same users were moving a ton of data over time. Since we didn't have a huge pipe to the internet, those super-users would, from time to time, really degrade everyone else's connection. That led to the idea that we could mitigate the situation with a fair and objective set of rules:

(1) No data "caps" -- we are not interested in aggregating data over long periods of time
(2) A byte is a byte -- we are not interested in packet inspection, only counts
(3) Traffic shaping only kicks in during actual congestion -- we are not interested in doing anything until service is actually degraded

What we ended up doing: whenever the pipe to the internet was 75% full or more, any user who over the last 15 minutes was both in the top 20% of traffic and consuming more than 5x the average for that period got shunted into the lowest QoS bucket. That classification continued until either the usage dropped or (more likely) the outbound pipe was no longer congested.
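In pseudocode-ish C++ (a from-memory sketch of the rule, not our actual tooling; all names are illustrative):

    #include <cstdint>

    struct PortStats {
        uint64_t bytes_15min;   // trailing 15-minute byte count (rule 2: only counts)
        bool     deprioritized; // currently in the lowest QoS bucket?
    };

    // Re-evaluated periodically for every port.
    void classify(PortStats &p, double pipe_utilization,
                  uint64_t avg_bytes, uint64_t top20_cutoff) {
        if (pipe_utilization < 0.75) {      // rule 3: no congestion, no shaping
            p.deprioritized = false;
            return;
        }
        // Demote only heavy outliers; restore as soon as usage drops.
        p.deprioritized = p.bytes_15min >= top20_cutoff &&
                          p.bytes_15min > 5 * avg_bytes;
    }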

What the fuck does this have to do with Comcast? Well, as much as I hate them, I have to admit there is a plausible case that a small fraction of users degrade service for the rest of the paying customers (or necessitate costly upgrades that get passed along to everyone). But they have implemented their congestion control in the most indefensible way I can imagine -- monthly caps cannot possibly solve an overloading problem that plays out on short time-scales. So I'm left with the idea that, instead of sperging about "unlimited", the tech community should actually try to be productive and endorse a fair set of guidelines (maybe not at all like those above!) for managing networks so that a minority of users don't degrade service for everyone. Not that Comcast doesn't deserve sperging, of course ...

Comment Re:Linus is right only for people of his caliber.. (Score 1) 576

Well, the specific behavior in these types of cases will/may depend on the hardware.

Which is exactly why you should use a compiler intrinsic: it's the compiler's job to keep track of machine details.

And in any case using a GNU version-specific intrinsic is probably not the best thing to do in general.

They didn't use the GNU intrinsic directly; they macro'd it to resolve to the intrinsic where available. That's the best way to do it until all compilers get their act together and provide some standard form of "perform arithmetic and tell me if it overflows".

This is the same way all extensions are handled: have AES-NI, you get the intrinsic; otherwise you go down a generic code path.
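The pattern looks something like this (a sketch from memory, not their actual macro):

    #include <limits.h>

    /* Resolve to the compiler's overflow intrinsic where one exists
     * (GCC >= 5; recent Clang has it too), otherwise fall back to a
     * generic pre-check. Both forms return nonzero on overflow. */
    #if defined(__GNUC__) && (__GNUC__ >= 5)
    #  define ADD_OVERFLOW(a, b, res) __builtin_add_overflow((a), (b), (res))
    #else
    static int add_overflow_generic(int a, int b, int *res) {
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            return 1;       /* would overflow: report it, never compute it */
        *res = a + b;
        return 0;
    }
    #  define ADD_OVERFLOW(a, b, res) add_overflow_generic((a), (b), (res))
    #endif

Call sites then read the same everywhere: if (ADD_OVERFLOW(a, b, &sum)) bail out.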

Comment Linus is right only for people of his caliber.... (Score 5, Interesting) 576

Both in the technical sense and in the human sense.

Technical: People of Linus' caliber understand exactly the rules for signed/unsigned integer promotion and where underflow is defined (as wrap) and where it's undefined[1]. Consequently he wrote perfectly correct code for detecting the underflow and bailing out safely. Programmers at mere-mortal levels of skill, however, routinely mess this up, often causing exploitable security bugs (believe me, I do code security audits for a real honest living). My advice for everyone (contra Linus!) is to always, always, always use the compiler intrinsics for integer math. Feel free to decline this advice if you are a Linus-level wizard (if you were, of course, you would already feel free to decline it), but if you have to wonder whether you are, you probably aren't.

Linus seems to think that the kernel should only be written by folks who don't need that kind of help -- and I won't argue with him there. It's his baby and he can choose whether to have a small number of über-developers or a larger number of mortals. Which goes straight to the second point:

Human: People of Linus' caliber thrive on negative feedback. At that level, positive feedback means nothing, because there's nothing he can learn from someone praising his work. He wrote a kernel; he knows he's good. Negative feedback, meanwhile, is useful (unless trivially discountable): if the complaint is right, he'll correct something he was doing wrong; if it's wrong, he'll be forced to think through why. In any event, he could never imagine why someone would sugar-coat their opinion on any matter.

So it seems like his mode of communication is meant to select for exactly that: he wants people of his caliber, who don't write ugly code, don't need arithmetic crutches, and don't mind strongly worded criticism. There's nothing invalid about that either -- maybe the best model really is that the Linuses work in the kernel and the rest of us go up into userland, where we use crutches like memory protection and higher-level constructs :-)

[1] And when behavior is undefined, a smarter compiler can remove the code path entirely -- the kernel itself was hit by such a bug, where GCC legally removed a NULL check because the pointer was dereferenced before the check. See also this reference. Then there's the sad fact that people still argue against the clear language rules which say that assert(100 + some_int > some_int); can always be optimized away.
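Both hazards in one hypothetical snippet (illustrative code, not the kernel's):

    #include <cassert>

    struct msg { int len; };

    int get_len(msg *m) {
        int len = m->len;   // the dereference happens first: UB if m is null...
        if (m == nullptr)   // ...so the compiler may legally delete this check,
            return -1;      //    which is exactly the bug that hit the kernel
        return len;
    }

    void bounds(int some_int) {
        // Signed overflow is undefined, so the compiler may assume it cannot
        // happen and fold the condition to 'true': the assert compiles to
        // nothing, even for some_int within 100 of INT_MAX.
        assert(100 + some_int > some_int);
    }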

Comment Re:This isn't news (Score 5, Insightful) 310

In the short term, existing gangs will move to other areas of criminality, which are less profitable (or else they would have switched already). The reduced cash flow will also intensify competition (read: violence) in those endeavors.

In the medium term, a few organizations will die out and the remainder will claim their new turf, but with less wealth to spread around both for status (read: bling) and for patronage (read: cheddar, philanthropy). There won't be much less crime at this stage, but the organizations will be less able to buy loyalty (kinship).

In the long term, the reduced status and patronage will mean fewer recruits and ultimately an equilibrium with less crime. But you are right, gangsters don't go into accounting. The difference comes from convincing kids to go into accounting instead of criminality, and to do that you've got to reduce the total revenue of the criminal organizations.

Comment Re:There's still the pollution thing (Score 1) 216

Dude! You reasoned back from (a) the lack of heat, (b) the knowledge that an electric dryer uses resistive heating, (c) how to operate a multimeter and (d) what an ohm is. And you did all this without an SOP or flow-chart style troubleshooting guide.
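[ For anyone following along at home: a typical 240 V / ~5.4 kW dryer element should read roughly R = V^2/P = 240^2/5400 ≈ 10.7 ohms across its terminals, while a burned-out element reads open -- exactly what (c) and (d) let you catch. ]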

That puts you well ahead of 90% of the button-mashers that use a dryer. Maybe even 95%.

[ And yeah, the $70/hr covers lots of incidentals plus downtime waiting for jobs ]

Comment Re:There's still the pollution thing (Score 4, Insightful) 216

So, while in theory the cost of these appliances and overall efficiency are improved by the model of cheap parts & labor from China, the reality is a lot of wasted time, wrong replacement parts being shipped, and finally giving up, tossing the old piece-o-crap into a landfill, and buying something new.

That conclusion depends on the value of your time (or of a hired appliance-repair dude @ $70/hr) spent looking up and understanding the schematic, deducing the cause of the failure, figuring out which part or parts need to be replaced, and then doing the repair -- adjusted for the probability of making a mistake anywhere in the process. Compare that with the number of engineer-hours required to design the thing, maintain the production lines, and run the distribution apparatus (all of it), divided by the number of units produced. You might find that you just spent more time repairing your unit than was spent (amortized) on it during the entire rest of its lifetime ...

I guess another way of saying this is that every good has an optimal level of reliability -- beyond which it costs less to regularly replace the failing units than to improve the process or provide for repairs. We could probably build a washer (or a car, or a hard drive) that lasts longer than the ones we have today, but what would be the point if the TCO were actually higher? Unless you were running the Presidential Motorcade or going all Mad Max, would you buy a car that failed half as often if the TCO were $300/mo instead of $200/mo (with the cost of repairs, plus your time and the inconvenience of trips to the shop, already priced in)? Would Amazon buy more reliable hard drives for AWS (if they were on the market), or would they just buy the cheap ones and build in redundancy? Does my small-business website need 99.99% uptime, or is 99.9% sufficient? Will the business I lose in the 40-minutes-per-month difference make up the cost? We can always throw more money at any good or process to make it more reliable -- but there has to be some stopping point where we decide the marginal gains no longer make sense.
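(To put numbers on that last one: a 30-day month is 43,200 minutes, so 99.9% uptime allows 43.2 minutes of downtime per month while 99.99% allows 4.3 -- hence the roughly 40-minute difference.)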

Another aspect to keep in mind is that doing things more reliably at global scale means paying attention to all those nines. Just as getting from 99.9% uptime to 99.99% costs more than the previous nine did, so too goes the calculation for every input to the washer, plus the process/machinery that assembles it, plus the process/machinery that tests it. The acceptable marginal failure rate will always be weighed against the marginal cost of increasing reliability.

[ And interestingly enough, Speed Queen does specialize in super-simple, super-reliable washers and dryers, largely for the commercial (coin-op) market where downtime is more expensive. If it means a lot to you, by all means pay more for one and rest easier. Last I checked, though, they were more than 3x the upfront cost -- meaning that even if your cheaper washer breaks twice out of warranty, proves totally unrepairable, and has to be replaced each time, you're still ahead! ]

Comment Re:Said it before (Score 1) 385

Wait, so requiring your employees to work for identical wages on Halloween, Thanksgiving, Christmas, and New Year's, when they'd rather be at home with their families, is just peachy? US labor law is embarrassingly behind on this -- those days are considered "normal working days" and employers need not pay any form of overtime. Some employers voluntarily do the right thing (or close to it; maybe 1.5x isn't quite enough), but others -- including every taxi company on earth -- just schedule workers and tell them to show up for regular wages or find another job.

Contrast that with working for Uber, where the employee doesn't have to clock in on New Year's if she doesn't want to -- but if she does, the wage is at least 4-5x because of surge pricing. That's not capitalism breaking down, that's capitalism at its shining best: the worker is empowered to set the wage she demands and to withhold her services if that wage isn't met.

Comment Open Source != Freely Modifiable (Score 1) 177

There is no conflict between the two (sensible) requirements that:
        (A) The router's source code should be freely inspectable
        (B) The router should have strong technological measures to prevent users from using it in a way that violates the terms, for instance by transmitting on a band that is not licensed in that country.

This is also a very good model for the automotive industry -- another place where there is laughable security that merits some real auditing, but at the same time it would be ridiculous to allow any kid with a $50 flasher to get a few more horsepower by emitting particulates that are known health risks.

Certainly there is no technical reason that "I can view the source" must mean "I can modify and recompile the source and have the system accept the binary as authentic". TiVo (much to RMS' chagrin) adopted this model, as does Android (on some models; others advertise open bootloaders, and consumers choose between them).
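The mechanism is ordinary signed firmware; roughly like this hypothetical sketch, where verify_ed25519() and jump_to() stand in for a real crypto library and boot handoff:

    #include <cstdint>
    #include <cstddef>

    // Public key baked into ROM at manufacture; the matching private
    // key never leaves the vendor. (Hypothetical names throughout.)
    extern const uint8_t VENDOR_PUBKEY[32];

    bool verify_ed25519(const uint8_t pubkey[32], const uint8_t sig[64],
                        const uint8_t *msg, size_t len);   // stand-in
    [[noreturn]] void jump_to(const uint8_t *image);       // stand-in

    void boot(const uint8_t *image, size_t len, const uint8_t sig[64]) {
        // Anyone can read and audit the source (requirement A), but a
        // modified build carries no valid vendor signature, so the
        // loader refuses to run it (requirement B).
        if (!verify_ed25519(VENDOR_PUBKEY, sig, image, len))
            for (;;) {}     // refuse unsigned/altered firmware
        jump_to(image);
    }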

Admittedly, this won't satisfy the software-freedom purists, but at the same time we have to have some logical partitioning between a home computer (which you should control down to the metal) and a computer that controls particulate emissions that harm others' health, or router firmware that can block others' use of our shared airwaves.

[ And to that point, it would be great if there were software partitioning such that I could tweak my car's systems but not the ECU portions that control emissions, or modify the router's Linux base to add features (disclosure: I do run DD-WRT, but not on a WiFi device) while the radio stays locked down so that I can't interfere with my neighbors' networks. There's certainly no technical reason this can't be accomplished. ]
