The USA seems to be the only major country I know of where you pay to *receive* texts. Everywhere else (i.e. those heathen regions which use metric, including the country mentioned in this article), you only pay to send them.
The change DOES also apply to the usual stuff like the HP ProLiant DL380 etc.
Yup, we got the same mail today. We have a bunch of ageing Proliants, and are currently engaged in a procurement round for a new generation of servers (we buy them by the ton, almost). Guess which company just ruled themselves out of the process?
Sounds like the kind of domain name which would trigger these nanny-state filters some countries are so fond of nowadays. But let's be clear on this - facialnetwork.com is in no way involved with bukkake porn involving minors and there is no record known to myself of any of their senior management being on any form of sex offender registry.
Similar here, at my previous job... except irony of ironies, while even the manager and sales droids were very happy with their Macs, the Photoshoppers were Windows only.
Meanwhile at my current job, where the developer workstations are pretty much all Linux, I am looked down upon because my laptop of choice comes pre-installed with a certified UNIX OS.
(Posting from my home desktop Linux right now BTW, in case anyone wants to accuse me of being a hipster).
> The date was April 11, 2011.
So you're saying you had this ESP-like experience a whole *month* after the earthquake actually happened?
It's in the works, hopefully for version 9.4.
Last time I was in Japan, I had a good connection, but the ISP would drop any SSH connection once it exceeded a certain traffic volume. My tunnels kept getting dropped until I set a speed limit on my side.
Was that a DSL connection with an ISP-supplied router, or maybe a cable TV ISP? With optical fiber I've never had any problems, SSH sessions stay open for days (and this is without a fixed IP address), and p2p "just works". This is in Tokyo, BTW.
Sony has been in the ISP fray since 1995.
Do please check out this informative post from Magnus Hagander, one of the PostgreSQL core team members, which clarifies most of the points raised here:
About security updates and repository "lockdown"
I have received a lot of questions since the announcement that we are temporarily shutting down the anonymous git mirror and commit messages. And we're also seeing quite a lot of media coverage.
Let me start by clarifying exactly what we're doing:
- We are shutting down the mirror from our upstream git to our anonymous mirror
- This also, indirectly, shuts down the mirror to github
- We're temporarily placing a hold on all commit messages
There has been some speculation that we are going to shut down all list traffic for a few days - that is completely wrong. All other channels in the project will operate just as usual. This of course also includes all developers working on separate git repositories (such as a personal fork on github).
We are also not shutting down the repositories themselves. They will remain open, with the same content as today (including patches applied between now and Monday), they will just be frozen in time for a few days.
Don't try to actually make sense of the decisions made in the article. I am glad that he summed up all of the reasons why he didn't go with a relational database early in the article, so I didn't have to bother reading the rest. I am an advocate of NoSQL, but this whole article is describing a project that is almost perfect for a relational database.
Heck yeah, it reminds me of a project I did in 2004 or 2005, which stored over a hundred thousand articles (some of them more than 64KB!) with multiple authors, keywords and other fancy schmancy stuff. I've no idea what "a good amount of traffic from a niche group of scientists and researchers" means in real terms, but the system I put together was getting something like 40,000 unique visitors a day, running off some not particularly spectacular hardware (this was a time when 1GB was a lot of memory). As there was no NoSQL back then, I had to "make do" with a proper relational database (PostgreSQL), which wasn't exactly a speed demon at the time, but very kindly took care of things like indexes and keeping things in sync (aka "referential integrity"), leaving me free to concentrate on optimizing the whole stack. Oh yes, it was only me on the "team". And I managed to bodge a Lucene-based search system into the setup (as PostgreSQL's full-text search was a bit sucky).
I suppose, what with it being 2013 and such, it would be possible to push it into the cloud and squeeze in some JSONy bits as well if necessary.
Kids of today, eh...
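For the curious: the project described above maps onto a textbook relational schema. Here's a minimal sketch (table and column names are my own invention, and I'm using SQLite for brevity rather than the PostgreSQL actually used) of articles with multiple authors, where the database itself maintains the index and enforces referential integrity:

```python
import sqlite3

# Hypothetical reconstruction of the schema described above: articles
# with multiple authors is a classic many-to-many relation, with the
# database enforcing referential integrity for us.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE articles (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    body  TEXT NOT NULL
);
CREATE TABLE authors (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE article_authors (
    article_id INTEGER NOT NULL REFERENCES articles(id),
    author_id  INTEGER NOT NULL REFERENCES authors(id),
    PRIMARY KEY (article_id, author_id)
);
CREATE INDEX idx_authors_name ON authors(name);
""")

conn.execute("INSERT INTO articles VALUES (1, 'On Tunnels', '...')")
conn.execute("INSERT INTO authors VALUES (1, 'A. Codd Fan')")
conn.execute("INSERT INTO article_authors VALUES (1, 1)")

# Referential integrity: linking to a nonexistent article is rejected.
rejected = False
try:
    conn.execute("INSERT INTO article_authors VALUES (999, 1)")
except sqlite3.IntegrityError:
    rejected = True

names = conn.execute("""
    SELECT a.name FROM authors a
    JOIN article_authors aa ON aa.author_id = a.id
    WHERE aa.article_id = 1
""").fetchall()
print(rejected, names)  # True [('A. Codd Fan',)]
```

The same shape carries over to PostgreSQL essentially unchanged, apart from the connection setup.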
We weren't thrilled about this, because writing your own indexes can be problematic. Any time we stored a document, we would have to update the index. That's fine, except if anything goes wrong in between those two steps, the index would be wrong. However, the coding wouldn't be too terribly difficult, and so we decided this wouldn't be a showstopper. But just to be sure, we would need to follow best practices, and include code that periodically rebuilds the indexes.
Hello, I'm a time traveller from 1973 where I've been fondly imagining you folks in the future had written software to solve this kind of problem in a more generic fashion. Back in the past we have some visionary guy by the name of Codd, and in my wilder dreams I sometimes imagine by the year 2000 someone has created some kind of revolutionary database software which is based on his "SEQUEL" ideas and does fancy stuff like maintaining its own indexes.
Then I wake up and realise it was just a flight of fantasy.
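To make the point concrete: in a relational database, a row and its index entries are written in one atomic transaction, so the "something goes wrong in between those two steps" window from the quoted article simply doesn't exist. A minimal sketch with SQLite (schema and data invented for illustration):

```python
import sqlite3

# Declare the index once; every INSERT/UPDATE/DELETE maintains it
# inside the same transaction as the row itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, tag TEXT)")
conn.execute("CREATE INDEX idx_docs_tag ON docs(tag)")

# A failed transaction rolls back the row AND its index entry together.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO docs VALUES (1, 'draft')")
        raise RuntimeError("simulated crash mid-write")
except RuntimeError:
    pass

# Neither the row nor a stale index entry survives the rollback.
rows = conn.execute("SELECT * FROM docs WHERE tag = 'draft'").fetchall()
print(rows)  # []
```

No periodic index-rebuild job required; that's the part of Codd's dream that did come true.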
... trees illegally fell you?