My comments, as a full-stack developer with experience using many different types of databases.
What I see is a design problem, not one related to which type of database is used.
SQLite is a very fast embedded relational database that works for many different use cases; it is not a toy, and it has really large capacity. BUT it locks the entire database whenever a change is made to it.
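To make that locking behavior concrete, here is a minimal sketch using Python's standard sqlite3 module (the file path is a throwaway, just for illustration). While one connection holds the write lock, a second connection cannot begin a write transaction:

```python
import os
import sqlite3
import tempfile

# Throwaway database file just for the demonstration.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None: we manage transactions ourselves;
# timeout=0: fail immediately instead of retrying for 5 seconds.
writer = sqlite3.connect(path, timeout=0, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")        # takes the write lock on the whole file
writer.execute("INSERT INTO t VALUES (1)")

other = sqlite3.connect(path, timeout=0, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")     # second writer cannot get the lock
    status = "acquired"
except sqlite3.OperationalError:
    status = "locked"                    # "database is locked"

writer.execute("COMMIT")
print(status)                            # prints "locked"
```

Enabling `PRAGMA journal_mode=WAL` softens this (readers keep reading during a write), but there is still only one writer at a time.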
Network databases such as PostgreSQL or MySQL have record-level locking and better mechanisms to deal with concurrency, but they are slower because they depend on network capacity, while SQLite is used directly as a library.
The thing is, when you use SQLite as if it were a network database, you have a usage problem: it lacks many components, and it is not so easy (though not impossible) to add the extra functionality.
A different problem is the use of Python. These tests didn't take into consideration how slow the scripting could be, or even the servers involved. The database is a key component, but only one of them. It is possible that the product simply can't handle the excess load.
An embedded database can be used in many different ways. For example, I wanted parallel access to some data with SQLite; what was my solution? ... just copy the data file several times and offer each copy as an exclusive database for each connection. This works in many cases, particularly when the data is not shared (for example, when it belongs to only one use case or user).
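A rough sketch of that copy-per-connection idea, assuming the data is per-user and never shared (file names and the `open_private_copy` helper are made up for the example):

```python
import os
import shutil
import sqlite3
import tempfile

# Hypothetical master file holding per-user, non-shared data.
workdir = tempfile.mkdtemp()
master = os.path.join(workdir, "master.db")
con = sqlite3.connect(master)
con.execute("CREATE TABLE prefs (k TEXT, v TEXT)")
con.execute("INSERT INTO prefs VALUES ('theme', 'dark')")
con.commit()
con.close()

def open_private_copy(user_id):
    """Each connection gets its own file copy, so there is no lock contention."""
    copy = os.path.join(workdir, f"user_{user_id}.db")
    shutil.copyfile(master, copy)
    return sqlite3.connect(copy)

a = open_private_copy("alice")
b = open_private_copy("bob")

# Both can write at the same time, since they are separate database files.
a.execute("UPDATE prefs SET v = 'light' WHERE k = 'theme'")
a.commit()

print(b.execute("SELECT v FROM prefs WHERE k = 'theme'").fetchone()[0])  # prints "dark"
```

The trade-off is obvious: this only works because the copies never need to be merged back together.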
Or, it is possible to keep the whole database in memory, or to replicate between two SQLite instances, one in memory and the other on disk.
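The in-memory-plus-disk setup can be sketched with SQLite's online backup API, exposed in Python as `Connection.backup` (the snapshot file name is just an illustration):

```python
import os
import sqlite3
import tempfile

# Build and work with the database entirely in RAM for speed.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x INTEGER)")
mem.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3)])
mem.commit()

# Periodically replicate the whole in-memory database to a disk file.
disk_path = os.path.join(tempfile.mkdtemp(), "snapshot.db")  # hypothetical path
disk = sqlite3.connect(disk_path)
mem.backup(disk)  # online page-by-page copy (Python 3.7+)

print(disk.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # prints 3
```

Running the backup on a timer (or after every N transactions) gives you memory-speed queries with a disk copy that is at most one interval stale.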
To avoid memory leaks, I define a static memory area as the database heap, and I can pin that area in non-paged memory, speeding up the database.
Not to go into too much detail... ANY database can deliver when used correctly. When we use them without investing time to learn them, our results can't be the best.