Comment Re:(not)perplexingly (Score 1) 98

It doesn't matter how awesome someone thinks their Python-LMDB project is. It doesn't matter how important someone thinks their Python-LMDB project is.

the mistake you've made has been raised a number of times in the slashdot comments (3 so far). the wikipedia page that was deleted was about LMDB, not python-lmdb. python-lmdb is just a set of bindings to LMDB, and on its own it is not notable in any significant way.

Comment Over-emphasizing (Score 1) 98

CPython is a compiler.

it's an interpreter which was [originally] based on a FORTH engine.

  It compiles Python source code to Python bytecode,

there is a compiler which does that, yes.

and the Python runtime executes the compiled bytecode.

it interprets it.
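
both halves of that are easy to see from python itself: the compile step produces bytecode, and the dis module shows the instructions that the interpreter loop then executes. a quick sketch:

    import dis

    def add(a, b):
        return a + b

    # the function body was compiled to bytecode at definition time;
    # dis shows the instructions the interpreter loop will execute
    dis.dis(add)

run that and you get a listing of LOAD_FAST / RETURN_VALUE style opcodes: compiled once, then interpreted.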

CPython has one major weakness, the GIL (global interpreter lock).

*sigh* it does. the effect that this has on threading is to reduce threads to the role of a mutually-exclusive task-switching mechanism.
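
you can see that for yourself with two CPU-bound threads: they take roughly the same wall-clock time as running the work sequentially, because the GIL only ever lets one of them execute python bytecode at a time. a quick sketch:

    import time
    import threading

    def burn(n):
        # pure-python CPU-bound loop: holds the GIL while it runs
        while n:
            n -= 1

    N = 10 * 1000 * 1000

    start = time.time()
    burn(N); burn(N)
    print("sequential: %.2fs" % (time.time() - start))

    start = time.time()
    threads = [threading.Thread(target=burn, args=(N,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    # roughly the SAME time (often slightly worse): the threads take turns
    print("two threads: %.2fs" % (time.time() - start))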

I've seen the GIL harm high-throughput, multi-threaded event processing systems not dissimilar from the one you describe.

yes. you are one of the people who will appreciate the situation: the codebase could not be written in (or converted to) any other language due to time-constraints, and threads - which you'd think would be perfect for high-performance event processing, because there would be no overhead on passing data between them - couldn't be used. so the answer was processes and custom-written IPC, which means the end-result is going to be... complicated.

If you must insist on Python and want to avoid multi-threaded I/O bound weaknesses of the GIL, then use Jython.

not a snowball in hell's chance of that happening :) not in a million years. not on this project, and not on any project i will actively and happily be involved in. and *especially* i cannot ever endorse the use of java for high-performance reliable applications. i'm familiar with python's advantages and disadvantages and the way its garbage collector works, i'm familiar with the size of the actual python interpreter, and i am happy that it is implemented in c.

java on the other hand i just... i don't even want to begin describing why i don't want to be involved in its deployment - i'm sure there are many here on slashdot happy to explain why java is unsuitable.

there are many other ways in which the limitation imposed on python threads by the GIL may be avoided. i chose to work around the problem by using processes and custom-writing an IPC infrastructure using edge-triggered epoll. it was... hard. others may choose to use stackless python. others may agree with the idea of using jython, but honestly, if the application is required to be reasonably reliable as well as high-performance, there is absolutely no way i could ever endorse such an idea. sorry :)
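
for a flavour of what the edge-triggered epoll side looks like (a minimal sketch only, with a made-up port - the actual IPC infrastructure obviously can't be posted here; note that with EPOLLET you must drain each fd until EAGAIN, because you are only notified on the readiness *transition*):

    import select
    import socket

    sock = socket.socket()
    sock.bind(("127.0.0.1", 5000))   # hypothetical port
    sock.listen(128)
    sock.setblocking(False)

    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN | select.EPOLLET)
    conns = {}

    while True:
        for fd, events in ep.poll():
            if fd == sock.fileno():
                while True:          # accept until EAGAIN: no second wakeup
                    try:
                        conn, _ = sock.accept()
                    except BlockingIOError:
                        break
                    conn.setblocking(False)
                    ep.register(conn.fileno(),
                                select.EPOLLIN | select.EPOLLET)
                    conns[conn.fileno()] = conn
            else:
                conn = conns[fd]
                while True:          # drain until EAGAIN, same reason
                    try:
                        data = conn.recv(4096)
                    except BlockingIOError:
                        break
                    if not data:     # peer closed
                        ep.unregister(fd)
                        conns.pop(fd).close()
                        break
                    # ... hand data off to a worker process here ...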

Comment Do not use joins (Score 2) 98

if something like PostgreSQL had been used as the back-end store, that rate would be somewhere around 30,000 tasks per second or possibly even less than that

You should pipe it to /dev/null. That's webscale.

don't jest... please :) jokes about "you should just have a big LED on the box with a switch and a battery" _not_ appreciated :)

but, seriously: the complete lack of need in this application for joins (as well as any other features of SQL or NOSQL databases) was what led me to research key-value stores in the first place.

Comment Re:Would it hurt ... (Score 1) 98

A lot of the locking semantics you mentioned sound pretty similar to RCU which is used extensively in the Linux kernel, and allows for lockless reading on certain architectures.

http://en.wikipedia.org/wiki/R... .... yes, i think so. now imagine that all the copying is done by the OS at the granularity of virtual-memory page-tables (so it does not carry any significant overhead). also imagine that the library is intelligent enough to move the older page into its record of free pages during a cleanup phase that doesn't cost very much either. and remember that on accessing a B+ tree to find a record you only need to know the "top" (root) node... so you can update (or create) as many B+ tree nodes as you like using those COW semantics, knowing that it's *only* the root node that you need (after the fact) to tell new readers about.

and now it's no longer expensive to do those RCU-style operations, and the performance is streets ahead of any other key-value store.
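
to make that concrete, here's a tiny py-lmdb sketch (hypothetical path and keys) showing a reader pinned to the old root while a writer commits COW'd pages underneath it:

    import lmdb

    env = lmdb.open("/tmp/demo-lmdb", map_size=2**30)

    with env.begin(write=True) as txn:
        txn.put(b"counter", b"1")

    reader = env.begin()              # read-only snapshot: pins the old root

    with env.begin(write=True) as txn:
        txn.put(b"counter", b"2")     # writer COWs pages; reader unaffected

    print(reader.get(b"counter"))     # b'1' - the root the reader started with
    reader.abort()

    with env.begin() as txn:
        print(txn.get(b"counter"))    # b'2' - a new reader sees the new root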

but i am not an expert on these things. i'm sure that if howard chipped in here (and he _is_ an expert on the linux kernel and on high-performance efficient algorithm implementation) he'd be able to tell you more and probably a lot more accurately than i can.

Comment Re:Oh my... (Score 5, Interesting) 98

The use cases for LMDB are pretty limited.

weeelll.... the article _did_ say "high performance", so there are some sacrifices that can be made especially when those features provided by SQL databases are clearly not even needed.

basically what was needed was to actually *re-implement* some of the missing features (indexes, for example), and that took quite some research. it turns out - after finding an article written by someone who has implemented a SQL database using the very same key-value stores that everyone uses - that you can implement secondary indexes *using* a key-value store with range capabilities: concatenate the value you wish to range-search on with the primary key of the record you wish to access, then store that as the key - with a zero-length value - in the secondary-index key-value store.

this was what i had to implement - directly - in python, to provide secondary indexing using timestamps so that records could be deleted for example once they were no longer needed. it was actually incredibly efficient, *because of the performance of LMDB*.
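
a sketch of the pattern (made-up names and paths, not the production code): the timestamp is packed big-endian so that lexicographic key order matches numeric order, then prefixed onto the primary key:

    import struct
    import lmdb

    env = lmdb.open("/tmp/demo-idx", max_dbs=2, map_size=2**30)
    tasks = env.open_db(b"tasks")
    by_ts = env.open_db(b"tasks-by-timestamp")

    def put_task(txn, task_id, timestamp, payload):
        txn.put(task_id, payload, db=tasks)
        # secondary index: timestamp ++ primary key, zero-length value
        txn.put(struct.pack(">Q", timestamp) + task_id, b"", db=by_ts)

    def delete_older_than(txn, cutoff):
        doomed = []
        for key, _ in txn.cursor(db=by_ts):   # range scan from the start
            if struct.unpack(">Q", key[:8])[0] >= cutoff:
                break
            doomed.append(key)
        for key in doomed:
            txn.delete(key[8:], db=tasks)     # strip prefix -> primary key
            txn.delete(key, db=by_ts)

    with env.begin(write=True) as txn:
        put_task(txn, b"task-0001", 1400000000, b"payload")
        delete_older_than(txn, 1300000000)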

so... yeah. didn't need SQL queries. added some basic secondary-indexing manually. got the transactional guarantees directly from the implementation of LMDB. got many other cool features....

please remember that i am keenly aware that SQLite, MySQL and i think even PostgreSQL can now be compiled to use LMDB as their back-end data store... but the application was _so demanding_ that even if that had been done it still would not have been enough.

but, apart from that: i don't believe you are correct in saying that there are a limited number of use cases for LMDB *itself* - the statement "there are a limited number of use cases for range-based key-value stores" *might* be a bit more accurate, but there are clearly quite a _lot_ of use cases for range-based key-value stores [including as the back-end of more complex data management systems such as SQL and NOSQL servers].

this high-performance task scheduler application happens to be one of them... and the main point of the article is that, amongst the available key-value stores currently in existence, my research tells me that i picked the absolute best of them all.

Comment Re:Did you make any effort to get this undeleted? (Score 1) 98

I apologize for that, I was wrong and spoke too quickly. If you can find notable sources for P-LMDB, then it's worth a shot bringing it to that user's attention.

hey not a problem. you're right about py-lmdb - my main concern is to get LMDB the recognition that its peer stores (such as BerkeleyDB) already have: http://en.wikipedia.org/wiki/B... - someone else mentioned that there are other such key-value stores (some from the same development period as LMDB) which already have articles. and the fact that it was an *oracle* employee who marked the page for deletion is the main issue of contention here.

Comment database performance (Score 2) 98

The author got poor performance from a SQL database with no indexing, which degraded as the number of records grew? You don't say! A database that has to do a full scan for reads performs poorly?

yes. the thing is that i had to do that analysis in a formal, repeatable, independent way, which i had never done before, and i was very surprised at the poor results. i was at least expecting a *consistent* and reliable rate of... well, i don't know: i was kinda expecting PostgreSQL to be top of the list, and i was kinda expecting it to reach 100,000 or 200,000 records per second... and it just... couldn't. i was *completely* caught off-guard by the need to switch off all the safety checks, and by how dramatic the effect of adding indexes on performance really was.

so it was by complete contrast that, for example, the py-lmdb benchmarks - with sequential-read speeds an ORDER OF MAGNITUDE better than i was expecting (2.5 million per second) - really made me sit up and take notice.
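
for reference, the sequential-read side of such a benchmark is essentially just a cursor walk - a simplified sketch (not david's actual benchmark harness):

    import time
    import lmdb

    env = lmdb.open("/tmp/demo-bench", map_size=2**30)

    with env.begin(write=True) as txn:
        for i in range(1000000):
            txn.put(b"%010d" % i, b"x" * 32)

    start = time.time()
    with env.begin() as txn:
        n = sum(1 for _ in txn.cursor())   # sequential walk of the B+ tree
    elapsed = time.time() - start
    print("%d reads in %.2fs (%.0f/sec)" % (n, elapsed, n / elapsed))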

Surprise about load average seems equally naive. If you fork a bunch of processes that are doing IO, of COURSE the load increases. Load is a measure of the number of processes not sleeping. That's all it is. I don't understand his surprise that a system steadily doing a great deal of IO would show a lot of time spent in IO calls in profiling.

you've missed the point. it was that the exact same design, using 20 (or so) shm file handles instead of 200 file handles open on (effectively) the exact same data, resulted in a reasonable loadavg, whereas having the 200 file handles open produced a loadavg that ground the system completely to a halt.

so it's not the *actual* loadavg that is relevant but that the *relative* loadavg before and after that one simple change was so dramatically shifted from "completely unusable and in no way deployable in a live production environment" to a "this might actually fly, jim" level.

Comment Oh my... (Score 5, Informative) 98

"a high-performance task scheduling engine written (perplexingly) in Python"

guys, there is this thing, it's called "algorithm"....

yeah.... except that algorithm took a staggering 3 months to develop. and it wasn't one algorithm, it was several, along with creating a networking IPC stack and making several unusual client-server design decisions. i can't go into the details because i was working in a secure environment, but basically, even though i was the one who wrote the code, i was taken aback that *python* - a scripted programming language - was capable of such extreme processing rates.

normally those kinds of rates would be associated with c, for example.

but the key point of the article - leaving that speed aside - is that if something like PostgreSQL had been used as the back-end store, the rate would have been somewhere around 30,000 tasks per second, or possibly even less than that over the long term, because of the overwhelming overhead of SQL (and NoSQL) databases maintaining transaction logs and making other guarantees. they do so in ways that are clearly *significantly* less efficient than the way LMDB does it, because in LMDB those guarantees are integrated at a fundamental design level.

Comment I can't wait for it (Score 1) 98

At some point there will be an article on Wikipedia, that only meets Wikipedia's notability requirements due to media spillover complaining about the notability requirements.

yaaay! :) works for me. wasn't there a journalist who published a blog and used that as the only notable reference to create a fake article? :)

Comment Would it hurt ... (Score 5, Informative) 98

OpenLDAP was originally using Berkeley DB, until recently. they'd worked with it for years, and got fed up with it. in order to minimise the amount of disruption to the code-base, LMDB was written as a near-drop-in replacement.

LMDB is - according to the web site and also the deleted wikipedia page - a key-value store. however its performance absolutely pisses over everything else around it, on pretty much every metric that can be measured, with very few exceptions.

basically howard's extensive experience combined with the intelligence to do thorough research (even to computing papers dating back to the 1960s) led him to make some absolutely critical but perfectly rational design choices, the ultimate combination of which is that LMDB outshines pretty much every key-value store ever written.

i mean, if you are running benchmark programs in *python* and getting sequential read access to records at a rate of 2,500,000 (2.5 MILLION) records per second... in a *scripted* programming language for goodness sake... then they have to be doing something right.

the random write speed of the python-based benchmarks showed 250,000 records written per second. the _sequential_ ones managed just over 900,000 per second!

there are several key differences between Berkeley DB's API and LMDB's API. the first is that LMDB can be put into "append" mode. basically what you do is you *guarantee* that the key of each new record is lexicographically greater than all existing records. with this guarantee LMDB basically lets you put the new record _right_ at the end of its B+ tree. this results in something like an astonishing 5x performance increase in writes.
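
in py-lmdb that mode is the append=True flag on put() - a sketch, assuming keys that are guaranteed monotonically increasing:

    import struct
    import lmdb

    env = lmdb.open("/tmp/demo-append", map_size=2**30)

    with env.begin(write=True) as txn:
        for seq in range(100000):
            # big-endian packing guarantees each key sorts after the last,
            # which is exactly the guarantee append mode requires
            txn.put(struct.pack(">Q", seq), b"record", append=True)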

the second key difference is that LMDB allows you to store duplicate values per key. in fact i think there's also a special mode (i've never used it) where, if you guarantee fixed (identical) record sizes, LMDB will let you store the values in a more space-efficient manner.
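
both of those are exposed in py-lmdb as flags on open_db() - a sketch (the fixed-size mode is the one i haven't used, so treat it as illustrative):

    import lmdb

    env = lmdb.open("/tmp/demo-dup", max_dbs=1, map_size=2**30)

    # dupsort: multiple sorted values per key.
    # dupfixed: all values identical in size, stored more compactly.
    db = env.open_db(b"events", dupsort=True, dupfixed=True)

    with env.begin(write=True, db=db) as txn:
        txn.put(b"host-1", b"AAAAAAAA")   # fixed 8-byte values
        txn.put(b"host-1", b"BBBBBBBB")   # second value, same key

    with env.begin(db=db) as txn:
        cur = txn.cursor()
        if cur.set_key(b"host-1"):
            for value in cur.iternext_dup():  # iterate this key's duplicates
                print(value)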

so it's pretty sophisticated.

from a technical perspective, there are two key differences between LMDB and *all* other key-value stores.

the first is: it uses "append-only" when adding new records. basically this guarantees that there can never be any corruption of existing data just because a new record is added.

the second is: it uses shared memory "copy-on-write" semantics. what that means is that the (one allowed) writer NEVER - and i mean never - blocks readers, whilst importantly being able to guarantee data integrity and transaction atomicity as well.

the way this is achieved is that, because copy-on-write is enabled, the "writer" may make as many writes as it wants, knowing full well that the readers will NOT be interfered with (because any write creates a COPY of the memory page being written to). then, finally, once everything is done, and the new top-level parent of the B+ tree is finished, the VERY last thing is a single simple LOCK, update-pointer-to-top-level, UNLOCK.

so as long as Reads do the exact same LOCK, get-pointer-to-top-level-of-B-Tree, UNLOCK, there is NO FURTHER NEED for any kind of locking AT ALL.
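
to spell that out as (toy) code - NOT lmdb's actual implementation, just the shape of the protocol - the only shared mutable state is the root pointer, and the lock is held only for a single pointer read or swap:

    import threading

    _meta_lock = threading.Lock()
    _root = {"pages": {}, "txn_id": 0}    # stand-in for LMDB's meta page

    def begin_read():
        with _meta_lock:                  # LOCK, get-pointer, UNLOCK...
            return _root                  # ...then no further locking at all

    def commit(new_pages, txn_id):
        # the writer has already COW'd every page it touched; readers
        # holding the old root never saw any of it. publishing the new
        # tree is one pointer swap.
        global _root
        snapshot = {"pages": new_pages, "txn_id": txn_id}
        with _meta_lock:                  # LOCK, update-pointer, UNLOCK
            _root = snapshot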

i am just simply amazed at the simplicity, and at how this technique has just... never been deployed in any database engine before, until now. the reason, as howard makes clear, is that the original research back in the 1960s was restricted to 32-bit memory spaces. now we have 64-bit, so shared memory may refer to absolutely enormous files, and there is no problem deploying this technique today.

all incredibly cool.

Submission + - Python-LMDB in a high-performance environment

lkcl writes: In an open letter to the core developers behind OpenLDAP (Howard Chu) and Python-LMDB (David Wilson) is the story of the successful creation of a high-performance task scheduling engine written (perplexingly) in python. With only partial optimisation, tasks are executed in parallel at a phenomenal rate of 240,000 per second; the choice to use Python-LMDB for the per-task database store, based on its benchmarks as well as its well-researched design criteria, turned out to be the right decision. Part of the success was also due to earlier architectural advice gratefully received here on slashdot. What is puzzling, though, is that the LMDB article on wikipedia keeps being deleted, despite its "notability" by way of being used in a seriously long list of prominent software libre projects - adoption which has been, in part, motivated by the Oracle-driven BerkeleyDB license change. It would appear that the original complaint about notability came from an Oracle employee as well...

Comment pay them!! (Score 3, Interesting) 265

the key point that people keep missing is that corporations - which are legally obligated to maximise profits - take whatever they can get "for free". software libre developers *do not have* what is normally present in business transactions: the VERY IMPORTANT opportunity for the person receiving their work to transfer to the developer a reward (payment) which represents the value of the software being received.

so it should come as absolutely no surprise that those software libre developers are not equipped with the financial means to support themselves (the Gentoo leader ending up with a $50,000 credit-card debt and having to quit and go work for Microsoft is an example that springs to mind) and they *CERTAINLY* don't have the financial means to pay for e.g. security reviews or security tools.

the solution is incredibly simple: if you are using software libre for your business, PAY THE DEVELOPERS. find a way. pick a project that's important or fundamental to your business, and PAY THEM.
