It's in the works, hopefully for version 9.4.
Last time I was in Japan, I had a good connection, but the ISP decided to drop every ssh connection that exceeded a certain traffic volume. My tunnels kept getting cut until I set a speed limit on my side.
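For anyone hitting the same problem, there are a couple of common ways to cap ssh traffic on the client side (a sketch only; the host names and rates here are made up, and trickle has to be installed separately):

```shell
# Cap an scp transfer at 400 Kbit/s using scp's built-in limiter:
scp -l 400 bigfile.tar.gz user@remote.example.com:/tmp/

# Or shape a whole interactive ssh session with trickle (if installed),
# here limiting upload and download to 50 KB/s each:
trickle -u 50 -d 50 ssh user@remote.example.com
```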
Was that a DSL connection with an ISP-supplied router, or maybe a cable TV ISP? With optical fiber I've never had any problems, SSH sessions stay open for days (and this is without a fixed IP address), and p2p "just works". This is in Tokyo, BTW.
Sony has been in the ISP fray since 1995.
Do please check out this informative post from Magnus Hagander, one of the PostgreSQL core team members, which clarifies most of the points raised here:
About security updates and repository "lockdown"
I have received a lot of questions since the announcement that we are temporarily shutting down the anonymous git mirror and holding commit messages. And we're also seeing quite a lot of media coverage.
Let me start by clarifying exactly what we're doing:
- We are shutting down the mirror from our upstream git to our anonymous mirror
- This also, indirectly, shuts down the mirror to github
- We're temporarily placing a hold on all commit messages
There has been some speculation that we are going to shut down all list traffic for a few days - that is completely wrong. All other channels in the project will operate just as usual. This of course also includes all developers working on separate git repositories (such as a personal fork on github).
We are also not shutting down the repositories themselves. They will remain open, with the same content as today (including patches applied between now and Monday), they will just be frozen in time for a few days.
Don't try to actually make sense of the decisions made in the article. I am glad that he summed up all of the reasons why he didn't go with a relational database early in the article, so I didn't have to bother reading the rest. I am an advocate of NoSQL, but this whole article is describing a project that is almost perfect for a relational database.
Heck yeah, it reminds me of a project I did in 2004 or 2005, which stored over a hundred thousand articles (some of them more than 64KB!) with multiple authors, keywords and other fancy schmancy stuff. I've no idea what "a good amount of traffic from a niche group of scientists and researchers" means in real terms, but the system I put together was getting something like 40,000 unique visitors a day, running off some not particularly spectacular hardware (this was a time when 1GB was a lot of memory). As there was no NoSQL back then, I had to "make do" with a proper relational database (PostgreSQL), which wasn't exactly a speed demon at the time, but very kindly took care of things like indexes and keeping things in sync (aka "referential integrity"), leaving me free to concentrate on optimizing the whole stack. Oh yes, it was only me on the "team". And I managed to bodge a Lucene-based search system into the setup (as PostgreSQL's full-text search was a bit sucky).
I suppose, what with it being 2013 and such, it would be possible to push it into the cloud and squeeze in some JSONy bits as well if necessary.
Kids of today, eh...
We weren't thrilled about this, because writing your own indexes can be problematic. Any time we stored a document, we would have to update the index. That's fine, except if anything goes wrong in between those two steps, the index would be wrong. However, the coding wouldn't be too terribly difficult, and so we decided this wouldn't be a showstopper. But just to be sure, we would need to follow best practices, and include code that periodically rebuilds the indexes.
Hello, I'm a time traveller from 1973, where I've been fondly imagining that you folks in the future had written software to solve this kind of problem in a more generic fashion. Back in the past we have a visionary guy by the name of Codd, and in my wilder dreams I sometimes imagine that by the year 2000 someone has created some kind of revolutionary database software based on his relational ideas, which does fancy stuff like maintaining its own indexes.
Then I wake up and realise it was just a flight of fantasy.
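The time traveller's point is a real one: a relational database keeps its indexes in step with the data automatically and transactionally, so there is no window where "anything goes wrong in between those two steps". A minimal sketch using SQLite from Python (the schema and names here are invented for illustration, not taken from the article):

```python
import sqlite3

# Hypothetical articles table -- names are assumptions for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, author TEXT, body TEXT)")
# The database maintains this index itself on every insert/update/delete.
conn.execute("CREATE INDEX idx_articles_author ON articles (author)")

conn.execute("INSERT INTO articles (author, body) VALUES (?, ?)",
             ("codd", "relational model"))
conn.commit()

# A failed write never leaves the index out of step with the table:
try:
    with conn:  # transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO articles (author, body) VALUES (?, ?)",
                     ("boyce", "sequel"))
        raise RuntimeError("simulated crash between 'store' and 'index' steps")
except RuntimeError:
    pass

# Only the committed row is visible -- table and index agree.
rows = conn.execute("SELECT author FROM articles WHERE author = 'boyce'").fetchall()
print(len(rows))  # 0: the rolled-back row is gone from table *and* index
print(conn.execute("SELECT COUNT(*) FROM articles").fetchone()[0])  # 1
```

No periodic "rebuild the indexes" job required; atomicity comes for free with the transaction.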
... trees illegally fell you?
My iDevice was running the last version of iOS 4, and, not being too bothered about these things, I never got round to updating it. Also, I was a bit leery about installing a new major release until the early adopters had suffered through the kinks. The release of the Google Maps app, which requires iOS 5.something or later, was enough reason to finally upgrade.
My first Mac was a PPC G4 iBook which worked fine for all kinds of web development and working with various C/C++-based open source projects. For me at least, any subtle incompatibilities were due to the differing OS, not the underlying architecture, and that hasn't changed with the move to Intel.
However, although I'm now on my 2nd Intel MacBook, with the way things are going I can see a day when OS X gets too dumbed down/walled off to be usable for me and I'll become a very ex-Apple customer.
So, I reached out to Bob, a developer evangelist that I met at the Hackathon at the Museum of Science.
Bob? Microsoft Bob? You met Microsoft Bob in a science museum? I think we might be on to something here...
Seconded... I've corralled the company's system into something approaching sanity, but no time for any kind of documentation apart from the odd comment in the code (usually starting with "FIXME!"). There's also a plethora of sub-systems I have trouble managing even though I wrote them myself - mainly due to having to throw them together at short notice while working on something else at the same time.