For example, we in Canada are often forced to contend with inferior services simply due to the small size of our market relative to the US. As a tech company based in Toronto, assuming no legal obstacles to server location, I can see big advantages in setting up data centers 100mi away in Buffalo if the price is competitive -- you can still physically get to them with a car if you need to.
Similarly, starting up a technology-oriented company in Buffalo is not as crazy as it sounds when you consider that there are great tech universities in close proximity, including RIT and Waterloo, and a large pool of highly-educated immigrants in Toronto who are relegated to driving taxis and delivering pizza.
If wages for MBA-toting advertising executives and investment bankers were being driven to zero as well, then at least it would be fair.
As with any complex tool, if you don't know why it's useful, or when it should be used, you're probably going to make a mess.
The visual nature of Kettle masks its complexity, thanks to the "pictures == easy, code == l33t" bias. To simplify a bit, Kettle gives you the ability to create a multi-db, multi-data-format "query plan", much as a DB optimiser would do when given a multi-table SQL statement with joins, filters, etc. The problem is that in Kettle, you have to understand how to optimise that "query" yourself to write an efficient transform. Developers who truly understand how a database executes a query, let alone which query plan is good, become a rarer breed with each day.
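To make the point concrete, here is a minimal sketch (not Kettle code or any real DB engine, just illustrative Python with made-up toy data) of two ways to execute the same join. A database optimiser chooses between strategies like these automatically; in a visual ETL tool, the transform author is effectively making that choice by hand.

```python
# Two execution strategies for the same logical join.
# A query planner picks between these based on input sizes and indexes;
# a hand-built ETL transform bakes one choice in.

orders = [(1, "alice"), (2, "bob"), (3, "alice")]   # (order_id, customer)
customers = [("alice", "CA"), ("bob", "NY")]        # (customer, region)

def nested_loop_join(left, right):
    # O(n*m): compares every pair -- fine for tiny inputs, terrible at scale
    return [(oid, cust, region)
            for (oid, cust) in left
            for (c, region) in right
            if c == cust]

def hash_join(left, right):
    # O(n+m): build a hash table on the smaller side, probe with the larger
    lookup = {c: region for (c, region) in right}
    return [(oid, cust, lookup[cust])
            for (oid, cust) in left
            if cust in lookup]

# Same logical result either way; only the cost profile differs.
assert nested_loop_join(orders, customers) == hash_join(orders, customers)
```

The "query" the Kettle user draws is the logical join; whether it runs like the first function or the second is up to them.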
In short, never give Kettle to a developer who thinks of a database purely in terms of "put" and "get".
I don't see how that answers my question.
The government is there to establish laws and, ideally, enforce them. That's partly the responsibility of the regulators.
But what happens when said regulators are captured by the industry they are supposed to regulate, enacting laws that serve its interests (and not the public good), ignoring violations, etc.? You can't sue any companies in that industry because the laws will have been written to protect them! Look at the last 20 years of 'regulation' and legislation in the financial industry for a great example.
Everything you say in response makes sense on some level. But what happens when the regulators in that ideal government you speak of are clearly captured? The problem is pervasive today, and I have trouble seeing how it would be any better once, by necessity, government shrinks to the point that tax revenues are significantly less than the revenues of larger corporations.
You've touched on the fact that data replication is a hard problem and that not all user scenarios can be (sensibly) solved with a single solution. MySQL replication works well enough for the web crowd that has no idea what ACID stands for, and its adoption has spread as a result. Having only one replication solution to choose from also makes it an easy choice.
Even being able to choose which replication solution to use with postgresql requires a substantial level of expertise. What postgresql has lacked, until now, is an "out of the box" solution that will be used by default, by the uninitiated, to get postgresql in the door. Then if they ever learn what ACID stands for, if they understand what asynchronous or synchronous replication is, they will be happy as hell that they didn't choose mysql way back when their whole site/business ran on only 2 servers.
I see the replication feature as being more about perception than anything else.
Postgresql has long had a variety of replication options outside of the core that serve various needs. Even so, the perception out in the community remained that postgresql was a stable, stand-alone database onto which replication had to be bolted with "hacks", while mysql, despite its faults, had "solid" replication that lent itself better to large installations.
Of course this perception is far from reality, but it has been deemed a serious enough problem for the postgres team to finally include replication in the core.
A better summary of the changes is here.
After years of resistance, one of the more significant changes is the inclusion of WAL-shipping-based replication in the postgresql core, along with the ability to run read-only queries on the standby systems. This will hopefully go a long way towards appeasing mysql users accustomed to the "easy" replication that mysql provides.
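For the curious, the 9.0-style setup looks roughly like this. The hostnames, user name, and address below are placeholders of my own, not anything postgres mandates:

```ini
# --- primary: postgresql.conf ---
wal_level = hot_standby        # write enough WAL detail for a hot standby
max_wal_senders = 3            # allow standbys to stream WAL from the primary

# --- primary: pg_hba.conf (allow the standby to connect for replication) ---
# host  replication  repuser  192.0.2.10/32  md5

# --- standby: postgresql.conf ---
hot_standby = on               # accept read-only queries while in recovery

# --- standby: recovery.conf ---
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=repuser'
```

A handful of settings and a base backup gets you a read-only replica, which is a far cry from the stack of third-party tools it used to take.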
It all comes down to whether a particular city is enlightened enough to justify investing in a more sensible traffic system. Such systems do exist; they are not some far-off fantasy.
Despite the reputation Los Angeles has, I was surprised at how well the traffic moved, for the most part, when I lived there for a few years. Sure, there are traffic jams, but the lights were surprisingly effective; I never once sat at an empty intersection, and there did appear to be some logic in the timing of the lights.
Now contrast that with Toronto, a city half the size but with far fewer and smaller roads/freeways, where the traffic is horrendous, and worse yet, the lights will happily make you idle for a minute staring at an empty intersection. This costs the local economy in lost productivity, increased shipping costs, pollution, and so forth, but to my knowledge nothing has been done, either because no one complains or because the cost is prohibitive.