This is like the old BS about "Nobody ever got fired for buying IBM." Oracle is about the same as all the rest. The only time any of the major DBs lose data is during some crazy edge case: Server A was on fire, Server B had a HD failure on the RAID and a bug in the RAID controller slowed the whole system to a crawl, and Server C went down on Super Bowl night when it was hosting an NFL score-keeping site.
When I do mission-critical data storage, I don't just throw it into any DB and hope for the best. There are logs, logs, and more logs that can be used to rebuild the datastore. There are checksums, hashes, etc. that verify things have remained as they should be, and as things get truly mission-critical, there are whole other systems, on whole other architectures, doing the same work and then verifying the results of the primary system.
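To make the checksum point concrete, here's a minimal sketch (names and record format are made up for illustration): keep a digest alongside each record when it's written, and periodically re-hash and compare to catch silent corruption.

```python
import hashlib

def record_digest(record: bytes) -> str:
    """SHA-256 hex digest of a raw record, stored at write time."""
    return hashlib.sha256(record).hexdigest()

def find_corrupted(records, stored_digests):
    """Return indices of records whose current digest no longer
    matches the digest recorded when the row was written."""
    return [
        i
        for i, (rec, d) in enumerate(zip(records, stored_digests))
        if record_digest(rec) != d
    ]

# Hypothetical rows written to the datastore:
rows = [b"acct=42;balance=100", b"acct=43;balance=250"]
digests = [record_digest(r) for r in rows]

rows[1] = b"acct=43;balance=999"  # simulate silent corruption

print(find_corrupted(rows, digests))  # -> [1]
```

A real setup would store the digests out-of-band (in the logs, or in the independent verification system) so a single fault can't alter both the data and its checksum together.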
I have seen literally no reliability difference between Oracle, Sybase, Postgres, MySQL, MariaDB, and even SQLite. Data goes in, and data comes out. If you work near the edges of any of them, prepare to get burned. If you stay in the happy zone, things just work. To use your example of a bank, it isn't a problem to use hardware and specifications that are multiples of what is needed, keeping everything boring and safe.
If I were trying to run my banking DB on a BeagleBone storing to an SD card, then I would recommend everyone switch banks, regardless of the DB.