
Comment Re:Profits (Score 1) 336

Given that Ford earned $7.2 billion in net income in 2013 and GM made a $3.8 billion profit over the same period, I think GM and Ford will be very surprised to hear that they cannot make cars in the US profitably, since most of their profit comes from US operations.

They'd only be surprised if you told them they'd be doing it in Detroit, instead of non-union plants in other U.S. states:
http://www.nytimes.com/ref/us/...

You don't need to expand factories to make them efficient.

Correct. You just need to reduce the number of employees to increase the profit per employee, which is something you can do with automation and/or lower wages, which is not something you can do in Michigan.

Comment Re:FLYOVER (Score 1) 336

If you're interested in high-tech manufacturing with a skilled workforce, it would be hard to find a better place than the Automation Alley counties. What you'll spend in wages will be more than made up in productivity, and you won't be spending a fortune on recruiting costs. If you build a factory, your staffing problem won't be finding qualified workers, engineers, or tradesmen, but getting a big enough HR department to hire them.

The reason all but one automotive assembly line has pulled out of Detroit is that the unions wouldn't allow that much automation. Or you were "allowed" to have it, but you still had to hire the same number and type of workers to satisfy the contracts, so it didn't do crap to change your value to unit labor cost ratio.

You are an absolute idiot if you locate a manufacturing facility in a state where the unions are in charge of whether or not you get labor, and you can't push costs down by automation.

Most blue-collar jobs have migrated outside the U.S. due to inflated labor costs relative to value produced. It has dick all to do with what a living wage is or isn't, and *absolutely everything* to do with value produced per unit labor cost. Most auto manufacturing that still exists in the U.S. is in non-union states, in non-union shops.

As Steve Jobs said, "Those jobs are gone, and they're not coming back". Near the end, before they sold it to Canon, the NeXT factory producing laser printers required exactly two (2) full time workers to operate the entire factory.

Comment Re:Almost all router bandwidth management is shit. (Score 2) 104

OK, as someone who has been trying different methods of QoS over the past few years, with varying levels of success, mainly to keep my VoIP phone rock solid over DSL, I'm very interested in what you're saying.

Is there a reason this approach hasn't been implemented yet? Does it break something? If my router is lying to my upstream router about its TCP window size, wouldn't that impact both the FTP and the video stream?

You lie about the window size on a per-connection basis, so no: it's not a global policy, it's a resource policy per application, and potentially per port/IP tuple, so it's not a problem. The point is to keep the upstream router's packet buffers relatively empty so that the packets you want don't have to be RED-queued. Nothing breaks because of it.
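To make the per-connection part concrete, here's a minimal Python sketch (my own illustration, not anyone's actual router code): setting SO_RCVBUF before connect() caps the receive buffer, and therefore the TCP window the kernel will advertise, for that one socket only. A real border router would instead rewrite the window field in ACKs it forwards, but the per-flow principle is the same.

```python
import socket

def window_limited_socket(rcvbuf_bytes):
    """Create a TCP socket with a capped advertised receive window.

    SO_RCVBUF must be set *before* connect() so the window scaling is
    negotiated against the small buffer.  This limits only this one
    connection -- a per-flow policy, not a global one.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    return s

# Throttle the bulk transfer hard; leave the latency-sensitive flow roomy.
bulk_sock = window_limited_socket(8 * 1024)
video_sock = window_limited_socket(256 * 1024)
```

(Note: Linux reports back roughly double the requested value and clamps it to system limits, but the relative ordering between the two flows holds.)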

It generally won't work, unless everyone "plays fair", and the port overcommit ratio for upstream vs. downstream bandwidth is relatively low. As the downstream data rate increases to approach the upstream data rate, the technique loses value, unless you get rid of overcommit, or do it on a per-customer "flow" basis (as opposed to a per virtual circuit "flow" basis) within the upstream router itself, or move to a "resource container" or similar approach for buffer ratio allocation in the upstream router.

So in theory, Comcast (as an example) could do it if they made everyone use the router they supplied, and their routers all participated in limiting upstream buffer impact.

Maybe the next time they replace everyone's cable modems, they'll bother to do it?

Without the deployed infrastructure, it's easier to RED-queue and just intentionally drop packets, forcing the client to request a retransmit as a means of source-quenching traffic. This wastes a lot of buffers, but packets probabilistically get through, and for streaming video that's good enough if there's a lot of client overbuffering going on before playback starts (JWZPlayer, for example, is a common player used for pirated content that will habitually under-buffer, so intentional drops tend to make it choppy).
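For reference, classic RED behaves roughly like this sketch (the thresholds and max probability here are made-up illustration values, not anyone's deployed configuration): once the queue grows past a minimum threshold, the drop probability ramps up linearly, so senders see back-pressure before the buffer is actually full.

```python
import random

def red_drop(queue_len, min_th=20, max_th=60, max_p=0.1):
    """Random Early Detection: decide whether to drop an arriving packet.

    Below min_th the queue has room and nothing is dropped; above max_th
    every arrival is dropped; in between, the drop probability rises
    linearly from 0 to max_p, spreading the pain across flows.
    """
    if queue_len < min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_p * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```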

For VoIP, unfortunately, forced retransmits typically make things just suck, unless you use a sideband protocol instead, where the router at the one-hop-upstream peer agrees to reserve buffers specifically for that traffic. This is why Skype is terrible, but phone calls over your wall jacks (which are actually wired to the same packet interface instead of a POTS line) are practically as good as a landline or cell phone.

Google Hangouts tends to get away with it because it's predominantly broadcast, and calls are "gossip"-based CSMA/CD (ALOHA-style) networks between participants (i.e. people talk over each other, or wait until the other end is done before talking themselves). It means Hangouts tolerates large latencies that 1:1 VoIP/Skype connections won't. It can be a bit of a PITA for conference calls because of that (Google uses it internally and gets away with it, but mostly because Google has its own parallel Internet, including transoceanic fibers), but if Google employees never see the problem, they never fix the problem. Same as any company that assumes local-equivalent bandwidth works as well for its customers as it does for itself (free hint to Microsoft in re: Office 386 there).

Comment Almost all router bandwidth management is shit. (Score 5, Interesting) 104

Almost all router bandwidth management is shit.

Bandwidth management schemes currently used by everything you mention are all based on rate-limiting packet delivery according to some mythical QoS value, and they ignore the actual problem that the people using these things are attempting (and failing) to address.

The problem is that the point of a border router is to hook a slower border uplink to a faster interior connection; on the other end of the slower uplink, you have a faster ISP data rate. In other words, you have a gigabit network in your house, and the ISP has a gigabit network at their DSLAM, but your DSL line sure as hell is *NOT* a gigabit link.

What that means is that software attempting to "shape" packets ignores an upstream download's or a downstream upload's ability to overwhelm the available packet buffers on the high-speed side of the link when communicating with the low-speed side of the link.

So you can start streaming a video down and then start an FTP transfer, and your upstream router at the ISP is going to have its buffers full of untransmitted FTP download packets instead of your streaming video data. It doesn't matter how bitchy you are about letting those FTP packets through your router on the downstream side of the link; it's not going to matter to the video stream, since all of the upstream router buffers that you want used for your video are already full of FTP data that you don't want to receive yet.
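The starvation effect is easy to reproduce in a toy model. In the sketch below (all numbers invented purely for illustration), a hypothetical upstream router has ten shared packet buffers draining two packets per tick over the slow link, while an unclamped bulk flow arrives at five packets per tick alongside one video packet per tick:

```python
def delivered_video(ticks, ftp_rate, clamp=None):
    """Toy FIFO model of a shared upstream router buffer.

    Each tick: `ftp_rate` bulk packets arrive (limited to `clamp` if set,
    modelling a border router shrinking that flow's window), then one
    video packet arrives, then the slow link drains two packets.
    Arrivals that find the buffer full are tail-dropped.
    Returns how many video packets made it through.
    """
    BUF_SLOTS = 10
    queue, delivered = [], 0
    for _ in range(ticks):
        arriving_ftp = min(ftp_rate, clamp) if clamp is not None else ftp_rate
        for _ in range(arriving_ftp):
            if len(queue) < BUF_SLOTS:
                queue.append("ftp")
        if len(queue) < BUF_SLOTS:
            queue.append("video")       # tail-dropped when buffer is full
        for _ in range(2):              # the slow link drains two per tick
            if queue and queue.pop(0) == "video":
                delivered += 1
    return delivered
```

With no clamp, the bulk flow monopolizes the shared buffer and almost no video packets survive; clamping the bulk flow to one packet per tick lets every video packet through.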

The correct thing to do is to have your border router lie about the available TCP window size to the router on the other end, so that the back-pressure propagates: all intermediate routers between that router and the system transmitting the FTP packets in the first place also see the smaller window, and don't end up with full input packet buffers with nowhere to send them.

Does your border router do this? No? Then your QoS software and ALTQ and other "packet shaping" software is shit. Your upstream router's high-speed input buffers are going to end up packed full of packets you want less; you will be receiver-livelocked, and the packets that you *do* want won't get through to you because of that.

You can either believe this, or you can get a shitty router and not get the performance you expect as the QoS software fails to work.

Then you can read the Jeffrey Mogul paper from DEC Western Research Labs from 1997 here: http://citeseerx.ist.psu.edu/v... ...after which, you should probably ask yourself why CS students don't read research papers and are still trying to solve problems that were understood 27 years ago, and more or less solved 17 years ago, but have still yet to make their way into a commercial operating system.

BTW: I also highly recommend the Peter Druschel/Gaurav Banga paper from Rice University in 1996 on Lazy Receiver Processing, since most servers are still screwed by data bus bandwidth when it comes to getting more packets than they can deal with, either as a DoS technique against the server or because they are simply overloaded. Most Ethernet firmware is also shit unless it's been written not to transfer data until you tell it it's OK, separately from the actual interrupt acknowledgement. If you're interested, that paper's here: http://citeseerx.ist.psu.edu/v... and I expect we will be discussing that problem in 2024 when someone decides it's actually a problem for them.

Comment Hasn't he learned anything? (Score 2) 360

May as well be a buggy manufacturer in the early 1900s mocking Henry Ford as not having the infrastructure to support automobiles. "Look!" says the CEO, "His automobiles have to be serviced by one of those rare individuals that knows how, but our horse and buggy work everywhere!"

Prior to widespread adoption of internal combustion engines, gas stations (as such) didn't exist. Prior to widespread adoption of the telegraph and the telephone, infrastructure supporting those innovations didn't exist. Prior to the widespread adoption of the Internet, there weren't millions of miles of high speed data cables crossing the globe with signals directed by complex high-speed routing devices. Prior to the widespread adoption of cell phones and smartphones, there was no infrastructure to support them either.

Yet all these things thrived because the infrastructure grew with their adoption. When someone has a car and needs fuel, he has to figure out the logistics of that himself and it can seem unworkable on a larger scale. When half his neighbors have cars and need fuel, an enterprising young businessman comes along and opens a gas station. When Elon Musk sells a few hundred high-end sports cars (the Roadster) around the world to some rich people, he and his customers have to work out some painful logistics for things like service and it can seem unworkable on a larger scale. Check back in five years and see how much trouble it is to run around in the latest Tesla car then.

Tesla's working because they started at the high end of the market where margins are high and logistics are easier. They've used those high margins to push through massive infrastructure improvements around the US and in other richer areas to allow for an even more rapid adoption. They've established a brand by promising big and delivering bigger, then continuing to deliver long after the sale (improving an existing car? Who's ever heard of such a thing?!) Mercedes can claim Tesla isn't a threat, but they're a few years away from either having to spend a fortune trying to catch up or having to pay Elon Musk licensing fees for his tech.

Comment Re:Multiple heads? (Score 1) 256

Actually, "client" workloads (personal computers) aren't very parallel so the requests are served sequentially. As such, this won't help too much.

Most client machines don't have multiple drives mirrored either. I was thinking purely in a server setting when I made the comments, though I'll admit that I didn't specify.

An HD with two head systems still wouldn't match an SSD for random reads, but it'd be much better than a single-head drive. It could even employ different algorithms depending on the use mode it's seeing, to help speed things along. In addition, more cache might help it during a large sequential read, allowing the heads to leapfrog each other better. Like I said: an engineering and programming nightmare, but an interesting thought experiment.

By the way, if I remember correctly, multiple requests in flight were added to the SATA standard for client drives 10 years ago or so (SCSI had them for quite a while). I'm not sure Windows XP uses these queues.

You're talking about how the system queues multiple data (read/write) requests with the drive, with the drive possibly delivering them out of order (because it's using an optimized path to collect all the data), right?

I assumed that capability from the start. The REAL trick to the system is that to date it's been one actuator carrying all the heads, thus one device serving all the data. With two head systems, the question becomes how you optimally assign those requests between the two head systems to move the data most efficiently.
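One hypothetical way to split the work is a greedy scheduler that hands each request to whichever head system is currently nearest its target track. This is purely my own sketch (positions are abstract track numbers); real firmware would also have to weigh rotational latency and per-head queue depth.

```python
def assign_requests(request_tracks, head_positions):
    """Greedy nearest-head assignment for a dual-actuator drive (sketch).

    Each request goes to whichever head system is currently closest to
    its target track; that head then seeks there.  Returns the index of
    the head chosen for each request, in order.
    """
    assignments = []
    heads = list(head_positions)
    for track in request_tracks:
        h = min(range(len(heads)), key=lambda i: abs(heads[i] - track))
        assignments.append(h)
        heads[h] = track  # the chosen head seeks to the serviced track
    return assignments
```

With two heads starting at opposite ends of the platter, interleaved inner/outer requests naturally split between them, which is the "one head fields the outer calls, the other the inner ones" case.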

Comment Multiple heads? (Score 1) 256

This is actually a very interesting proposal. While I imagine the engineering and programming would be a relative nightmare*, it would provide a number of options for hard drives.

While it wouldn't double performance in most cases, especially not for sequential operations, for random operations it'd be almost as good as two drives. Maybe better, if the access is typically truly random and one head can 'field' mostly the outer-disk calls while the other catches the inner-disk ones.

*Just look at the difference between programming a single thread application and multi-threading!

Comment Re:It was a "joke" back then (Score 1) 276

One thing that isn't obvious though is that it's a 30Hz monitor. All the 60Hz ones, as far as I can tell, are still in $1000+ territory.

I should probably have put some disclaimers in my post about affordability and suitability. I'm not a refresh snob but I can't help but think that 30Hz is a bit slow for gaming, perhaps even video watching.

Comment Re:Over 18 (Score 1) 632

Nothing you say shows that Mr. Saverin has gotten away from his US tax liability. Only by renouncing citizenship can one end the tax liability, and even then it continues for some years (10, I think) after the renouncement.

He did renounce it. And he renounced it before the IPO. So his liability is for what he owed before he renounced it, which is ... not the $1.1B.

Comment Re:perception (Score 1) 320

Actually, the total tax burden for the working and middle classes in the USA is not that different from much of Europe. If you deduct the amount that the US citizen pays for health insurance from the amount that the EU citizen pays in taxes (while receiving socialised medical coverage), it's often quite a lot more. Part of the reason that the US has what appears from the outside to be an irrational distrust of government is that they get such poor value for money from their taxes. This leads to a nasty feedback loop (population expects the government to be incompetent, so it's hard to get competent people to want to work for the government, so the government becomes more incompetent, so the population expects...).

Comment I think there's a more important question... (Score 1) 320

How many homeless volunteers took off with the camera and sold it to buy booze?

I think there's a more important question... how many mountain lions, gazelles, and other animals took off with the Harmless Radio Collars(tm) that Marlin Perkins had Jim Fowler attach to them while filming Mutual of Omaha's "Wild Kingdom"?

Comment Re:BS (Score 2) 359

Yes, it is quite large, in relative terms. The city of Pittsburgh is only about 30,000 people, meaning the % of the population in those 2 centers alone accounts for roughly 1% of the population.

Off by a factor of more than 10; as of 2012, Pittsburgh's population was 306,211. That's 0.08%, not 1%.
