PPS: Given your custom IPC for Python, could you go us one further and write an OSGi for Python using it? Pretty please!
That's not loadavg, that's IO latency. You should probably be using iostat to get useful numbers.
oo, thank you very much for that tip, i'll try to pass it on and will definitely remember it for the next projects i work on. thank you.
It doesn't matter how awesome someone thinks their Python-LMDB project is. It doesn't matter how important someone thinks their Python-LMDB project is.
the mistake you've made has been raised a number of times in the slashdot comments (3 so far). the wikipedia page that was deleted was about LMDB, not python-lmdb. python-lmdb is just bindings to LMDB and that is not notable in any significant way.
CPython is a compiler.
it's an interpreter which was [originally] based on a FORTH engine.
It compiles Python source code to Python bytecode,
there is a compiler which does that, yes.
and the Python runtime executes the compiled bytecode.
it interprets it.
CPython has one major weakness, the GIL (global interpreter lock).
*sigh* it does. the effect that this has on threading is to reduce threads to the role of a mutually-exclusive task-switching mechanism.
I've seen the GIL harm high-throughput, multi-threaded event processing systems not dissimilar from the one you describe.
yes. you are one of the people who will appreciate that, given that the codebase could not be written in (or converted to) any other language due to time-constraints, using processes and custom-written IPC - because threads (which you'd think would be perfect for high-performance event processing, as there is no overhead in passing data between them) couldn't be used - means that the end-result is going to be... complicated.
If you must insist on Python and want to avoid multi-threaded I/O bound weaknesses of the GIL, then use Jython.
not a snowball in hell's chance of that happening
java on the other hand i just... i don't even want to begin describing why i don't want to be involved in its deployment - i'm sure there are many here on slashdot happy to explain why java is unsuitable.
there are many other ways in which the limitation of threads in python imposed by the GIL may be avoided. i chose to work around the problem by using processes and custom-writing an IPC infrastructure using edge-triggered epoll. it was... hard. others may choose to use stackless python. others may agree with the idea to use jython, but honestly if the application was required to be reasonably reliable as well as high-performance there would be absolutely no way that i could ever endorse such an idea. sorry
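for anyone curious, the edge-triggered epoll pattern at the heart of that kind of IPC looks roughly like this - a stdlib-only sketch (not the actual production code; the socketpair and the message are made up for illustration):

```python
import select
import socket

# one nonblocking socketpair standing in for a worker-process connection
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN | select.EPOLLET)

a.send(b"task:42")

received = b""
for fd, mask in ep.poll(timeout=1.0):
    if fd == b.fileno() and mask & select.EPOLLIN:
        # edge-triggered mode delivers ONE notification per readiness
        # change, so the socket must be drained until it would block
        while True:
            try:
                received += b.recv(4096)
            except BlockingIOError:
                break

print(received)  # b'task:42'
ep.close()
a.close()
b.close()
```

the key discipline (and the "hard" part at scale) is that every drain loop must run to EAGAIN, or a worker silently stalls waiting for a notification that will never come again.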
if something like PostgreSQL had been used as the back-end store, that rate would be somewhere around 30,000 tasks per second or possibly even less than that
You should pipe it to
don't jest... please
but, seriously: the complete lack of need in this application for joins (as well as any other features of SQL or NOSQL databases) was what led me to research key-value stores in the first place.
A lot of the locking semantics you mentioned sound pretty similar to RCU which is used extensively in the Linux kernel, and allows for lockless reading on certain architectures.
but i am not an expert on these things. i'm sure that if howard chipped in here (and he _is_ an expert on the linux kernel and on high-performance efficient algorithm implementation) he'd be able to tell you more and probably a lot more accurately than i can.
The use cases for LMDB are pretty limited.
weeelll.... the article _did_ say "high performance", so there are some sacrifices that can be made especially when those features provided by SQL databases are clearly not even needed.
basically what was needed then was to actually *re-implement* some of the missing features (indexes, for example), and that took quite some research. it turns out - after finding an article written by someone who implemented a SQL database using the very same key-value stores that everyone uses - that you can implement secondary indexes *using* a key-value store with range capabilities: concatenate the value that you wish to range-search on with the primary key of the record that you wish to access, then store that concatenation as the key, with a zero-length value, in the secondary-index key-value store.
this was what i had to implement - directly - in python, to provide secondary indexing using timestamps so that records could be deleted for example once they were no longer needed. it was actually incredibly efficient, *because of the performance of LMDB*.
so... yeah. didn't need SQL queries. added some basic secondary-indexing manually. got the transactional guarantees directly from the implementation of LMDB. got many other cool features....
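to make the secondary-indexing trick concrete, here's a tiny stdlib-only sketch (not the production code - the store names, key widths and helper functions are all made up) of composite timestamp+primary-key index entries with range-based expiry:

```python
import bisect

# stand-ins for two range-capable key-value stores:
# a dict for the primary store, a sorted list for the secondary index
primary = {}   # record_id -> record
index = []     # sorted composite keys, conceptually with zero-length values

def put(record_id, timestamp, record):
    primary[record_id] = record
    # composite key: fixed-width timestamp + primary key, so that a
    # lexicographic range scan IS a timestamp range scan
    composite = b"%016d:%s" % (timestamp, record_id)
    bisect.insort(index, composite)

def expire_before(cutoff):
    """delete every record whose timestamp is older than `cutoff`"""
    hi = bisect.bisect_left(index, b"%016d:" % cutoff)
    for composite in index[:hi]:
        record_id = composite.split(b":", 1)[1]
        primary.pop(record_id, None)
    del index[:hi]

put(b"job-a", 100, b"payload-a")
put(b"job-b", 200, b"payload-b")
expire_before(150)
print(sorted(primary))  # [b'job-b']
```

the fixed-width zero-padded timestamp is what makes lexicographic order match numeric order - exactly the property a range-capable store such as LMDB gives you via a cursor seek.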
please remember that i am keenly aware that SQLite, MySQL and i think even PostgreSQL can now be compiled to use LMDB as its back-end data store... but that the application was _so demanding_ that even if that had been done it still would not have been enough.
but, apart from that: i don't believe you are correct in saying that there are a limited number of use cases for LMDB *itself* - the statement "there are a limited number of use cases for range-based key-value stores" *might* be a bit more accurate, but there are clearly quite a _lot_ of use cases for range-based key-value stores [including as the back-end of more complex data management systems such as SQL and NOSQL servers].
this high-performance task scheduler application happens to be one of them... and the main point of the article is that, amongst the available key-value stores currently in existence, my research tells me that i picked the absolute best of them all.
I apologize for that, I was wrong and spoke too quickly. If you can find notable sources for P-LMDB, then it's worth a shot bringing it to that user's attention.
hey not a problem. you're right about py-lmdb - my main concern is to get LMDB the recognition that its peer stores (such as BerkeleyDB) already have: http://en.wikipedia.org/wiki/B... - someone else mentioned that there are other such key-value stores (some of them from the same development period as LMDB) which already have articles. and the fact that it was an *oracle* employee who marked the page for deletion is the main issue of contention here.
The author got poor performance from a SQL database with no indexing, which degraded as the number of records grew? You don't say! A database that has to do a full scan for reads performs poorly?
yes. the thing was that i had to do that analysis in a formal, repeatable, independent way, which i had never done before, and i was very surprised at the poor results. i was at least expecting a *consistent* and reliable rate of... well, i don't know: i was kinda expecting PostgreSQL to be top of the list and i was kinda expecting it to reach 100,000 or 200,000 records per second... and it just... couldn't. i was *completely* caught off-guard by the need to switch off all the safety checks, and by how dramatic the effect of adding indexes on performance really was.
so it was by complete contrast that, for example, the py-lmdb benchmarks showing an ORDER OF MAGNITUDE better sequential-read speed (2.5 million records per second) than i was expecting made me really sit up and take notice.
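the effect of indexing is easy to demonstrate with stdlib sqlite3 (a hypothetical toy schema, nothing to do with the real benchmarks): EXPLAIN QUERY PLAN shows the full-table scan turning into an index search the moment an index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tasks (id INTEGER, due INTEGER, payload TEXT)")
con.executemany("INSERT INTO tasks VALUES (?, ?, ?)",
                [(i, i * 10, "x") for i in range(1000)])

# without an index, a lookup on `due` must scan the whole table
plan_no_index = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM tasks WHERE due = 500").fetchone()[-1]
print(plan_no_index)    # e.g. 'SCAN tasks'

con.execute("CREATE INDEX idx_due ON tasks(due)")
plan_with_index = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM tasks WHERE due = 500").fetchone()[-1]
print(plan_with_index)  # e.g. 'SEARCH tasks USING INDEX idx_due (due=?)'
```

reads get faster, but every write now has to maintain the index too - which is the dramatic write-side cost mentioned above.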
Surprise about load average seems equally naive. If you fork a bunch of processes that are doing IO, of COURSE the load increases. Load is a measure of the number of processes not sleeping. That's all it is. I don't understand his surprise that a system steadily doing a great deal of IO would show a lot of time spent in IO calls in profiling.
you've missed the point. it was that the exact same design using 20 (or so) shm file handles instead of 200 file handles open to (effectively) the exact same data resulted in a reasonable loadavg, whereas having the 200 file handles open produced a loadavg that ground the system completely to a halt.
so it's not the *actual* loadavg that is relevant but that the *relative* loadavg before and after that one simple change was so dramatically shifted from "completely unusable and in no way deployable in a live production environment" to a "this might actually fly, jim" level.
Never mind what projects use it; what have independent reliable sources written about LMDB?
i've written something and i'm pretty wubwubwubreliawibble oh look pretty coloured lights...
there isn't a python-lmdb wikipedia article, and one has never been created. the discussion involves the LMDB page (not the python bindings) despite LMDB having significant notable uses.
"a high-performance task scheduling engine written (perplexingly) in Python"
guys, there is this thing, it's called "algorithm"....
yeah.... except that algorithm took a staggering 3 months to develop. and it wasn't one algorithm, it was several, along with creating a networking IPC stack and having to create several unusual client-server design decisions. i can't go into the details because i was working in a secure environment, but basically even though i was the one that wrote the code i was taken aback that *python* - a scripted programming language - was capable of such extreme processing rates.
normally those kinds of processing rates would be associated with c, for example.
but the key point of the article - leaving that speed aside - is that if something like PostgreSQL had been used as the back-end store, that rate would be somewhere around 30,000 tasks per second (or possibly even less than that) over the long term, because of the overwhelming overhead associated with SQL (and NoSQL) databases maintaining transaction logs and making other guarantees in ways that are clearly *significantly* less efficient than the way that LMDB does it, by way of those guarantees being integrated at a fundamental design level into LMDB.
At some point there will be an article on Wikipedia, that only meets Wikipedia's notability requirements due to media spillover complaining about the notability requirements.
OpenLDAP was originally using Berkeley DB, until recently. they'd worked with it for years, and got fed up with it. in order to minimise the amount of disruption to the code-base, LMDB was written as a near-drop-in replacement.
LMDB is - according to the web site and also the deleted wikipedia page - a key-value store. however its performance absolutely pisses over everything else around it, on pretty much every metric that can be measured, with very few exceptions.
basically howard's extensive experience combined with the intelligence to do thorough research (even to computing papers dating back to the 1960s) led him to make some absolutely critical but perfectly rational design choices, the ultimate combination of which is that LMDB outshines pretty much every key-value store ever written.
i mean, if you are running benchmark programs in *python* and getting sequential read access to records at a rate of 2,500,000 (2.5 MILLION) records per second... in a *scripted* programming language for goodness sake... then they have to be doing something right.
the random write speed of the python-based benchmarks showed 250,000 records written per second. the _sequential_ ones managed just over 900,000 per second!
there are several key differences between Berkeley DB's API and LMDB's API. the first is that LMDB can be put into "append" mode (as mentioned above). basically what you do is you *guarantee* that the key of each new record is lexicographically greater than all existing keys. with this guarantee LMDB basically lets you put the new record _right_ at the end of its B+ tree. this results in something like an astonishing 5x performance increase in writes.
the second key difference is that LMDB allows you to add duplicate values per key. in fact i think there's also a special mode (never used it) where, if you guarantee fixed (identical) record sizes, LMDB will let you store the values in a more space-efficient manner.
so it's pretty sophisticated.
from a technical perspective, there are two key differences between LMDB and *all* other key-value stores.
the first is: it uses "append-only" when adding new records. basically this guarantees that there can never be any corruption of existing data just because a new record is added.
the second is: it uses shared memory "copy-on-write" semantics. what that means is that the (one allowed) writer NEVER - and i mean never - blocks readers, whilst importantly being able to guarantee data integrity and transaction atomicity as well.
the way this is achieved is that, because copy-on-write is enabled, the writer may make as many writes as it wants, knowing full well that the readers will NOT be interfered with (because any write creates a COPY of the memory page being written to). then, finally, once everything is done and the new top-level parent of the B+ tree is finished, the VERY last thing is a single simple LOCK, update-pointer-to-top-level, UNLOCK.
so as long as reads do the exact same LOCK, get-pointer-to-top-level-of-B-tree, UNLOCK, there is NO FURTHER NEED for any kind of locking AT ALL.
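a toy sketch of that reader/writer scheme in plain python (a dict standing in for the B+ tree; this illustrates the pointer-swap idea only, it is NOT how LMDB is actually implemented):

```python
import threading

class CowStore:
    """copy-on-write store: one writer, readers never see partial writes."""

    def __init__(self):
        self._root = {}                # committed "tree", treated as immutable
        self._lock = threading.Lock()  # guards ONLY the root pointer

    def read_snapshot(self):
        with self._lock:               # LOCK, get-pointer, UNLOCK
            return self._root          # reader keeps using this snapshot

    def commit(self, updates):
        # copy-on-write: build a private copy; existing snapshots untouched
        new_root = dict(self._root)
        new_root.update(updates)
        with self._lock:               # LOCK, update-pointer, UNLOCK
            self._root = new_root      # single atomic publish step

store = CowStore()
snap = store.read_snapshot()
store.commit({b"k": b"v1"})
print(snap.get(b"k"))               # None: the old snapshot is unchanged
print(store.read_snapshot()[b"k"])  # b'v1': new readers see the commit
```

the whole trick is that the only shared mutable state is one pointer, so the critical section is a few instructions long, no matter how large the write transaction was.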
i am simply amazed at the simplicity, and at how this technique has just... never been deployed in any database engine before, until now. the reason, as howard makes clear, is that the original research back in the 1960s was restricted to 32-bit memory spaces. now we have 64-bit, so shared memory may refer to absolutely enormous files, and there is no problem deploying this technique today.
all incredibly cool.