From the beginning, the comparison of on-premises enterprise servers to cloud-based servers has been a fundamental premise of cloud sales pitches, and of customer assumptions (based on those assertions) about how the servers would be used. In a testing and development environment, the claim is essentially true: the ability to turn servers off and on as needed lowers costs, assuming the development and testing teams are not global (which would require most or all of the servers to stay on). Production servers, on the other hand, must be on all the time, and their costs are typically higher than those of on-premises servers, even when the on-premises servers use cloud storage, with its high cost to transfer data back out of the cloud. The real value of cloud servers is the ability to extend processing power, especially horizontally (multiple dynamic nodes), and the nearly automatic continuity of cloud storage.
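The dev/test vs. production cost dynamic can be sketched with a little arithmetic. The hourly rates below are purely hypothetical placeholders, not real pricing from any provider; the point is only the shape of the comparison: a cloud server billed only while it is on can undercut always-running on-premises hardware, while an always-on cloud server usually cannot.

```python
# Hypothetical cost sketch: illustrative hourly rates only, not real
# pricing from any provider or any particular instance type.
CLOUD_RATE = 0.50    # assumed cloud cost per server-hour (USD)
ONPREM_RATE = 0.30   # assumed amortized on-prem cost per server-hour

HOURS_PER_MONTH = 730

def monthly_cost(rate_per_hour, hours_on):
    """Simple linear cost model: rate times billable hours."""
    return rate_per_hour * hours_on

# Dev/test: one regional team, servers on ~10 hours/day, 22 workdays.
# On-prem hardware is paid for whether or not anyone is using it.
devtest_hours = 10 * 22
cloud_devtest = monthly_cost(CLOUD_RATE, devtest_hours)
onprem_devtest = monthly_cost(ONPREM_RATE, HOURS_PER_MONTH)

# Production: always on in both cases, so the lower hourly rate wins.
cloud_prod = monthly_cost(CLOUD_RATE, HOURS_PER_MONTH)
onprem_prod = monthly_cost(ONPREM_RATE, HOURS_PER_MONTH)

print(f"dev/test:   cloud ${cloud_devtest:.2f} vs on-prem ${onprem_devtest:.2f}")
print(f"production: cloud ${cloud_prod:.2f} vs on-prem ${onprem_prod:.2f}")
```

Note that the dev/test advantage evaporates as soon as the team is global: if the servers must stay on around the clock, the billable hours converge on the production case.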
If we look at cloud vs. on-premises using the "car purchase" model: on-premises is (generally) like buying a car; hybrid systems (servers both on-premises and in the cloud) compare to leasing; and all-in cloud is similar to renting. Oversimplifying, I know, but the model has worked for me for the entire time "cloud" has been around. Some specialty clouds, à la IBM mainframe and Oracle on-premises clouds, may have a different pricing dynamic, but I have not seen many of those (more specialty Oracle deployments tied to specific applications than IBM ones).