Never bring politics... (Score 4, Funny)
Never bring politics... to an electronic documentation of timeline fight with a database company.
Almost all router bandwidth management is shit.
Bandwidth management schemes currently used by everything you mention are all based on rate-limiting packet delivery according to some mythical QoS value, and they ignore the actual problem that the people using these things are attempting (and failing) to address.
The problem is that the point of a border router is to hook a slower border uplink to a faster interior connection; on the other end of the slower uplink, you have a faster ISP data rate. In other words, you have a gigabit network in your house, and the ISP has a gigabit network at their DSLAM, but your DSL line sure as hell is *NOT* a gigabit link.
What that means is that software attempting to "shape" packets ignores the ability of an upstream download (or a downstream upload) to overwhelm the available packet buffers on the high-speed side of the link when communicating with the low-speed side of the link.
So you can start streaming a video down, then start an FTP transfer, and your upstream router at the ISP is going to have its buffers full of untransmitted FTP data instead of your streaming video data. It doesn't matter how bitchy you are about letting those FTP packets through your router on your downstream side of the link; it's not going to matter to the video stream, since all of the upstream router buffers that you want used for your video are already full of FTP data that you don't want to receive yet.
The correct thing to do is to have your border router lie about available TCP window size to the router on the other end, so that all intermediate routers between that router and the system transmitting the FTP packets in the first place also lie about how full the window is, and the intermediate routers don't end up with full input packet buffers with nowhere to send them in the first place.
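A minimal sketch of why the window trick works, with invented byte counts: a conforming TCP sender keeps at most min(cwnd, advertised rwnd) bytes in flight, so a border router that rewrites the advertised window down to roughly the slow link's bandwidth-delay product keeps a bulk flow from parking a standing queue in the upstream router.

```python
# Sketch of the window-clamping idea from the comment above.
# All byte counts are invented for illustration.

def in_flight(cwnd, rwnd):
    """Bytes a conforming TCP sender may have unacknowledged."""
    return min(cwnd, rwnd)

def upstream_queue(bottleneck_bdp, cwnd, rwnd):
    """Standing queue at the fast->slow router: in-flight bytes beyond
    what the slow link itself holds (its bandwidth-delay product)."""
    return max(0, in_flight(cwnd, rwnd) - bottleneck_bdp)

BDP = 64_000     # assumed BDP of the slow DSL hop
CWND = 512_000   # bulk FTP flow's congestion window after ramp-up

print(upstream_queue(BDP, CWND, rwnd=512_000))  # 448000: bufferbloat
print(upstream_queue(BDP, CWND, rwnd=64_000))   # 0: clamped window
```

With the full window advertised, ~448 KB of FTP data sits in the upstream buffers ahead of the video; clamped to the BDP, nothing queues there at all.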
Does your border router do this? No? Then your QoS software, ALTQ, and other "packet shaping" software is shit. Your upstream router's high-speed input buffers are going to end up packed full of packets you want less, you will be receive-livelocked, and the packets that you *do* want won't get through to you because of that.
You can either believe this, or you can get a shitty router and not get the performance you expect as the QoS software fails to work.
Then you can read the Jeffrey Mogul paper from DEC Western Research Labs, from 1997, here: http://citeseerx.ist.psu.edu/v...
BTW: I also highly recommend the Peter Druschel/Gaurav Banga paper from Rice University in 1996 on Lazy Receiver Processing, since most servers are still screwed by data bus bandwidth when it comes to getting more packets than they can deal with, either as a DoS technique against the server, or because they are simply overloaded. Most Ethernet firmware is also shit unless it's been written to not transfer data unless you tell it it's OK, separately from the actual interrupt acknowledgement. If you're interested, that paper's here: http://citeseerx.ist.psu.edu/v... and I expect that we will be discussing that problem in 2024 when someone decides it's actually a problem for them.
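The livelock point can be sketched with a toy model: past saturation, an interrupt-driven stack burns all its cycles acknowledging arrivals and completes nothing, while a lazy (pull-driven) stack drops the excess cheaply at the NIC and holds its maximum useful rate. The per-packet costs below are invented round numbers, not figures from either paper.

```python
# Toy model of receive livelock vs. Lazy Receiver Processing (LRP).
# Costs are in microseconds of CPU time per one second of wall clock.

CPU_US = 1_000_000  # CPU budget: one second of cycles

def eager_throughput(pkts_per_sec, irq_us, work_us):
    """Interrupt-driven stack: every arriving packet is handled at
    interrupt priority first; the application only gets what's left."""
    left = max(0, CPU_US - pkts_per_sec * irq_us)
    return min(pkts_per_sec, left // work_us)

def lazy_throughput(pkts_per_sec, work_us):
    """LRP: excess packets are dropped cheaply at the NIC queue, and
    protocol work happens only when the application asks for data."""
    return min(pkts_per_sec, CPU_US // work_us)

# At 2x overload the eager kernel completes nothing; LRP holds steady.
print(eager_throughput(2000, irq_us=500, work_us=1000))  # 0
print(lazy_throughput(2000, work_us=1000))               # 1000
```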
May as well be a buggy manufacturer in the early 1900s mocking Henry Ford as not having the infrastructure to support automobiles. "Look!" says the CEO, "His automobiles have to be serviced by one of those rare individuals that knows how, but our horse and buggy work everywhere!"
Prior to widespread adoption of internal combustion engines, gas stations (as such) didn't exist. Prior to widespread adoption of the telegraph and the telephone, infrastructure supporting those innovations didn't exist. Prior to the widespread adoption of the Internet, there weren't millions of miles of high speed data cables crossing the globe with signals directed by complex high-speed routing devices. Prior to the widespread adoption of cell phones and smartphones, there was no infrastructure to support them either.
Yet all these things thrived because the infrastructure grew with their adoption. When someone has a car and needs fuel, he has to figure out the logistics of that himself and it can seem unworkable on a larger scale. When half his neighbors have cars and need fuel, an enterprising young businessman comes along and opens a gas station. When Elon Musk sells a few hundred high-end sports cars (the Roadster) around the world to some rich people, he and his customers have to work out some painful logistics for things like service and it can seem unworkable on a larger scale. Check back in five years and see how much trouble it is to run around in the latest Tesla car then.
Tesla's working because they started at the high end of the market where margins are high and logistics are easier. They've used those high margins to push through massive infrastructure improvements around the US and in other richer areas to allow for an even more rapid adoption. They've established a brand by promising big and delivering bigger, then continuing to deliver long after the sale (improving an existing car? who's ever heard of such a thing?!) Mercedes can claim Tesla isn't a threat, but they're a few years away from either having to spend a fortune trying to catch up or they'll end up paying Elon Musk licensing fees for his tech.
Actually, "client" workloads (personal computers) aren't very parallel so the requests are served sequentially. As such, this won't help too much.
Most client machines don't have multiple drives mirrored either. I was thinking purely in a server setting when I made the comments, though I'll admit that I didn't specify.
An HD with two head systems still wouldn't match an SSD for random reads, but it'd be much better than one with a single head. Depending on the use mode it's seeing, it could even employ different algorithms to help speed things along. In addition, more cache might help it during a large sequential read, allowing the heads to leapfrog each other better. Like I said - engineering and programming nightmare, but an interesting thought experiment.
By the way, if I remember correctly, multiple requests in flight were added to the SATA standard for client drives a decade or so ago (SCSI had them for quite a while before that). I'm not sure Windows XP uses these queues.
You're talking about how the system queues multiple data (read/write) requests with the drive, and the drive possibly delivering them out of order (because it's using an optimized path to collect all the data), right?
I assumed that capability from the start. The REAL trick to the system is that to date it's one actuator positioning all the read heads, thus one device serving all the data. With two head systems, the question comes up of how you optimally assign said demands between the two head systems to most efficiently move the data.
This is actually a very interesting proposal. While I imagine the engineering and programming would be a relative nightmare*, it would provide a number of options for hard drives.
While it wouldn't double performance in most cases, especially not sequential operations, for random operations it'd be almost as good as two drives. Maybe better if the access is typically really random and one head can 'field' mostly the outer-disk calls while the other catches the inner-disk ones.
*Just look at the difference between programming a single thread application and multi-threading!
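The "one head fields the outer calls, the other the inner ones" idea from this thread can be sketched with a toy scheduler: split the queued requests by disk region and let each actuator run its own elevator sweep in parallel. Track numbers and the midpoint are arbitrary assumptions, not real drive geometry.

```python
# Toy dual-actuator scheduler: region-split plus per-actuator elevator.

def seek_cost(tracks, start=0):
    """Total head travel for one actuator sweeping its queue in order."""
    pos, cost = start, 0
    for t in sorted(tracks):
        cost += abs(t - pos)
        pos = t
    return cost

def split_by_region(tracks, midpoint):
    """Outer-half requests to one actuator, inner-half to the other."""
    return ([t for t in tracks if t < midpoint],
            [t for t in tracks if t >= midpoint])

queue = [10, 900, 40, 850, 120, 700]            # random-ish mix
outer, inner = split_by_region(queue, midpoint=500)

single = seek_cost(queue)                        # one actuator does it all
dual = max(seek_cost(outer),                     # the two sweeps overlap,
           seek_cost(inner, start=500))          # so the max dominates
print(single, dual)  # 900 400
```

Even this crude split more than halves the worst head travel for a random mix, which is the intuition behind "almost as good as two drives" for random I/O.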
One thing that isn't obvious though is that it's a 30Hz monitor. All the 60Hz ones, as far as I can tell, are still in $1000+ territory.
I should probably have put some disclaimers in my post about affordability and suitability. I'm not a refresh snob but I can't help but think that 30Hz is a bit slow for gaming, perhaps even video watching.
Nothing you say shows that Mr. Saverin has gotten away from his US tax liability. Only by renouncing citizenship can one end the tax liability, and even that continues for some years (10, I think) after the renunciation.
He did renounce it. And he renounced it before the IPO. So his liability is for what he owed before he renounced it, which is
How many homeless volunteers took off with the camera and sold it to buy booze?
I think there's a more important question... how many mountain lions, gazelles, and other animals took off with the Harmless Radio Collars(tm) that Marlin Perkins had Jim Fowler attach to them while filming Mutual of Omaha's "Wild Kingdom"?
"My name is Linux Torvalds... and I pronounce him 'Linus'...".
One thing they should look at is a city within a single mega-structure.
Why should they build an Arcology, when there are already two in progress:
Masdar City in Abu Dhabi: http://en.wikipedia.org/wiki/M...
Arcosanti North of Phoenix Arizona: http://en.wikipedia.org/wiki/A...
Yes, it is quite large, in relative terms. The city of Pittsburgh is only about 30,000 people, meaning those 2 centers alone account for roughly 1% of the population.
Off by a factor of over 10; as of 2012: population of 306,211. That's 0.08%, not 1%.
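The correction checks out against the headcounts cited elsewhere in this thread (~150 at Google, ~100 at Apple):

```python
# Rough check of the percentage correction, using the ~150 Google and
# ~100 Apple headcounts mentioned elsewhere in this thread.
employees = 150 + 100
population = 306_211          # Pittsburgh, 2012
share = employees / population * 100
print(f"{share:.2f}%")        # 0.08%
```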
If those San Francisco residents who are "entrenched" had to pay for their taxes like new residents do, they would be paying 1.25% per year property taxes on the current value rather than the basis of when they bought the property.
That's a great reason to do what rental property owners do, and own a company that owns the property, instead of owning it themselves. Then if they ever want to sell it, they can sell if for a heck of a lot more money by selling the company, rather than selling the property, so the taxes don't go up any more than if you'd bought under prop 13 and never sold.
That's the McDonald's model (McDonald's happily admits to being a real estate company that happens to sell burgers and rents out properties to their franchisees). It's also the same model that the Kaiser Family Trust uses.
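With made-up numbers, the reassessment gap described above looks like this, at the 1.25% rate cited in the comment (the dollar figures are invented for the example, not real assessments):

```python
# Illustrative Prop 13 arithmetic at the 1.25% rate cited above.
RATE = 0.0125
basis_value = 200_000        # entrenched owner's old purchase price
market_value = 2_000_000     # what a new buyer pays today

entrenched_tax = RATE * basis_value     # taxed on the old basis
new_buyer_tax = RATE * market_value     # taxed on current value
print(entrenched_tax, new_buyer_tax)    # 2500.0 25000.0
```

A 10x gap in assessed value is a 10x gap in the annual bill, which is the incentive for the sell-the-company trick: the property never changes hands, so it is never reassessed.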
The only way to fix the Bay Area housing crisis is to build more fucking housing.
One of the things that isn't talked about is the amount of empty office and residential apartments in the Bay Area. It's actually worth more money to price them out of the range that people are willing to pay, and then take the "market rent you are not getting", and use it as a tax write-off. It's a common practice in China (Google "ghost cities"), and it's becoming more common in the Bay Area.
If you want to take a little trip on 101 between SF and SJ, it's easy to see a lot of empty buildings, and it's easy to see some of the mega-complexes going up in Redwood City and elsewhere, which are probably going to remain mostly empty as a tax write-off to balance out other income.
I was talking with a friend(another ex-Pittsburgher) and he reminded me that both Apple and Google have recently opened relatively large campuses in Pittsburgh.
150 employees in an old cookie factory for Google, and 100 employees for Apple retail is hardly "relatively large"...
The Tao is like a glob pattern: used but never used up. It is like the extern void: filled with infinite possibilities.