
I believe the idea is that it's a legally binding promise from the website operator to the user. It's not trying to be a technical fix.
In a similar vein, the USA has a good thing going on with Foreign Affairs. If there are people you want to stay in contact with, e.g. a spouse, think long and hard before trying to subscribe to both the Economist and Foreign Affairs at the same time.
As a Brit who moved to the Bay Area in California, I'd advise counting your blessings.
British railways could obviously be further improved, but I think we might as well take some pride in the system even while we push for improvement, rather than constantly trashing ourselves.
Where I work we have a 200MB+ stripped executable. At that point, even with an incremental build you can see link times in the minutes due to the network traffic needed to pull all the object files together (from the fileserver to the machine where the linking takes place). Full builds take a similar order of walltime to those in this post. This is only the tip of the iceberg though, as it's normal to have test cases which take hours to load and can even take days to run (my field is electronic design automation software: http://en.wikipedia.org/wiki/Electronic_design_automation).
It was certainly a shock to the system when I came back to this world from web development! You learn to cope with this latency, though, by becoming more careful and disciplined in how you design and program beforehand, so you rely less on fast testing iterations. You also try to make the most complete use of any debugging session/environment you have open, and you build as much instrumentation as possible into the software.
As far as debugging goes, thankfully code bases generally scale at worst linearly with developers and age while bisection search has log(n) performance.
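To put the log(n) point concretely, here's a minimal sketch of bisecting a linear history for the first failing revision; the build_and_test predicate is hypothetical, standing in for whatever expensive build/test cycle you have:

    def first_bad_revision(revisions, build_and_test):
        # revisions is ordered oldest-to-newest; build_and_test(rev) returns True
        # if the build passes at that revision. Assuming the oldest revision is
        # good and the newest is bad, this needs roughly log2(len(revisions))
        # expensive build/test cycles rather than a linear scan.
        lo, hi = 0, len(revisions) - 1  # revisions[lo] good, revisions[hi] bad
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if build_and_test(revisions[mid]):
                lo = mid  # still good: the breakage happened later
            else:
                hi = mid  # already bad: the breakage is here or earlier
        return revisions[hi]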
As much as you try to work around the latency, and whatever benefits it brings to your own discipline, there's no doubt that the latency hurts productivity and penalizes experimentation.
I was in this exact situation, although the pay/opportunities difference was rather more stark. I'm friends with the owner and I was the lead developer and the only person that could take on the broader and larger software/systems issues.
Philosophy: as much as you care for the company, they're not your friends or family in the business context, and no business is worth sacrificing yourself for. You must realise that, as much as the people in the company may care for you, they would do the equivalent - make you redundant - if it were a matter of business necessity.
Leaving the company doesn't mean you have to leave them in the lurch. I'm consulting for the company I used to work for (I negotiated this ability into my new contract). I can still take care of the big issues that no one else can at my old company and this actually means the old company has become a more efficient business as they only pay me to handle the issues that I truly need to handle. Sometimes I wish I had more free time in the evenings, but because I care for the people there I'll continue to consult until the company is in a good technical position that I'm comfortable with.
Now that I've left my old company I've seen the other developer grow, and, unexpectedly, I believe I've become a more effective lead there as I'm more inclined to discuss and outline a solution rather than implement it myself.
In your situation I would move, as long as I was going to work for people I respect and I believed I would grow as a person in the new environment. The extra 1 1/2 hours of the day you'll save in commuting is a very significant chunk of time too - equivalent to a 20% salary raise in itself.
What was expected? Loyalty to a company is meaningless as the company is not anthropomorphic. It's always your relationships/loyalty to people that counts.
The relationships that really work are also far more than networking - they are not just business transactions.
When you remove transaction costs and risks (probabilistic transaction costs), a number of previously unprofitable businesses become profitable, and therefore new businesses start. This was clearly visible in the Internet and financial bubbles, when one cost of business, credit, became much cheaper than before.
In hindsight it's obvious that credit was too cheap in these situations, so the businesses that started in the credit bubble were unsustainable. The US health care system, on the other hand, is simply inefficient and therefore an unnecessary transaction cost on the labour market (we have an existence proof of this by looking at other health care systems in the world).
Which leads us to:
6. PROFIT!
Yes, I get the feeling that many OS calendaring programs miss the point. The real value in a calendar 'solution' is that it helps you synchronise with other people - and as such, the fact that it helps you arrange your day is of very little marginal value over a dead-tree calendar. Any calendar project that doesn't get that synchronisation bit right is also going to be marginal at best.
To be fair, I never understood this until I started working in a corporate environment.
Assuming you're aiming for a life-long monogamous relationship, I've always felt that you're best off understanding one good way to succeed rather than exploring the infinite ways you can fail.
The original end-to-end paper argues that applications are best implemented at end points rather than in the network - the final application end points (e.g. the receiving end of a file transfer) must be aware of failure modes in the network (e.g. errors, security, etc.), so the network can never be completely abstracted away, nor can the application be mostly implemented in the network. It doesn't sound like anything has changed today, and the original paper even notes that partial optimisations (e.g. HTTP caching) can be implemented in the network. This hardly moves the application into the cache.
Sounds like the summary has conflated a narrow paper about the state of TCP with a general principle for building networked applications.
End-to-end paper: http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
A mere 20% of the population controls the vast majority of wealth in this country
And pay almost all of the taxes. Half of the people in the country pay no income taxes, and many are given a tax "rebate" (on taxes they never even paid!) as a form of redistribution.
I think a healthier view is that 50% of the wealth should pay 50% of the taxes. I'd agree it would be much healthier if the top 20% didn't pay 50% of the taxes, but that means 50% of the wealth should be owned by something a lot closer to 50% of the population. You'd have a society with a much greater direct stake in, and control of, the welfare of its country, and there would be fewer of the social ills that come from large disparities of income.
The horizontal pitch I can imagine making for a low power, mediocre processor+GPU combination is that if we're all gathering a lot more data all the time on the go, being able to easily process all that data (for filtering/compression) at the collection point is advantageous. The only other one I can think of right now is that I'd love an even more underpowered (CPU-wise) version of this for a silent home media server.
There are a fair few vertical markets that push the high-performance parallel envelope and can use the GPU capability, but Fusion doesn't seem to fit those markets, and even the sum of the verticals doesn't appear very horizontal to my untrained eye. E.g. there are high-value non-video uses in finance and oil. See Maxeler: http://www.maxeler.com/content/frontpage/
Digital chips are roughly composed of memory (flip-flops) with logic in between. On each clock cycle the logic takes data from one piece of (input) memory, transforms it in some way and stores it in some other (output) memory.
One of the primary limitations on the speed of the chip is the longest path. The delay of a path is roughly a function of the physical distance the data travels from input to output memory and the number/type of logic gates in between. The speed of the chip is roughly limited to 1/(time taken for data to cross the longest path) Hz.
If you remove a significant amount of logic there is a chance that you might be removing the longest and most complex paths for data to cross and also that all the components can be spaced closer together, meaning that there is less distance for the data to cover. For this reason, removing parts of the chip might be able to speed it up.
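To make the arithmetic concrete, here's a toy sketch with made-up delay numbers (purely illustrative, not real silicon data):

    # Each path's delay is the sum of the wire delay and the logic delay it
    # crosses between an input flip-flop and an output flip-flop (nanoseconds).
    paths_ns = {
        "adder carry chain": 0.25 + 0.40,
        "multiplier array":  0.30 + 0.55,
        "bypass network":    0.15 + 0.20,
    }

    critical_path_ns = max(paths_ns.values())   # longest (critical) path: 0.85 ns
    max_clock_ghz = 1.0 / critical_path_ns      # 1 / 0.85 ns ~= 1.18 GHz

    # Remove the multiplier (or shorten its wires) and the adder chain becomes
    # the new critical path: 1 / 0.65 ns ~= 1.54 GHz - which is why stripping
    # logic out of a chip can let it clock higher.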
If you're trying to implement login like this and HTTP Digest authentication isn't an option, then I'd suggest reading the HTTP Digest authentication spec and implementing the same scheme using URL parameters. I think that should get you a fairly secure login.
This isn't really related to the issues in the article though.
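For reference, a minimal sketch of the Digest response calculation from RFC 2617 (MD5, no qop, for brevity); the function and parameter names here are just illustrative, and carrying these values in URL parameters as suggested above would be a home-grown variant rather than standard Digest:

    import hashlib

    def md5_hex(s):
        return hashlib.md5(s.encode("utf-8")).hexdigest()

    def digest_response(username, realm, password, method, uri, nonce):
        # RFC 2617 Digest response (MD5 algorithm, no qop), illustration only.
        ha1 = md5_hex("%s:%s:%s" % (username, realm, password))  # password never sent in the clear
        ha2 = md5_hex("%s:%s" % (method, uri))
        return md5_hex("%s:%s:%s" % (ha1, nonce, ha2))

    # The server issues a fresh nonce per login attempt, recomputes the same
    # value from its stored credentials, and compares it with what the client
    # submitted.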
The bogosity meter just pegged.