Lower-priced spot instances are exactly what the OP needs: they're intermittently available, much cheaper than on-demand instances, and designed for interruptible workloads like this one.
Here's how to use spot instances for scientific computing:
1. Set up one long-running, low-cost instance (a small instance is fine) that hosts a distributed queue on Amazon SQS and adds a job to the queue for each "unit" of the computational problem. New jobs can be added from the command line or through a web interface.
2. Create a user-data start-up Bash script for the spot instances that launches your main program -- I prefer Python with boto for simplicity. The main program connects to the SQS queue and enters an "infinite" while loop. Each iteration pulls the next job off the queue, which contains the input parameters defining one "unit" of the problem, feeds those parameters to the main algorithm, and uploads the resulting output to Amazon S3. Delete each SQS message only after its output is safely in S3: if a spot instance is reclaimed mid-job, the undeleted message becomes visible again once its visibility timeout expires, and another worker picks it up.
3. Whenever the queue has been empty -- i.e., the spot instance has sat idle -- for ~5 minutes, the instance terminates itself using EC2's command line interface or API.
4. Finally, write a simple Python script that pulls all the results off S3, combines and analyzes them, and exports them to whatever format you need.
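The worker loop in steps 2-3 can be sketched as below. This is a minimal, self-contained sketch, not a production script: an in-memory `queue.Queue` stands in for SQS, a plain list stands in for S3, and `process_job` is a toy placeholder for your real algorithm. The comments mark roughly where the AWS calls (via boto3, boto's successor) would go on an actual spot instance.

```python
import json
import queue
import time

def process_job(params):
    # Stand-in for the real algorithm: just square the input parameter.
    return {"job_id": params["job_id"], "result": params["x"] ** 2}

def worker_loop(job_queue, results, idle_timeout=300.0, poll_interval=0.01):
    """Pull jobs until the queue has been empty for idle_timeout seconds.

    On a real spot instance, job_queue.get would be an SQS
    receive_messages call and results.append an upload to S3.
    """
    last_job = time.time()
    while time.time() - last_job < idle_timeout:
        try:
            body = job_queue.get(timeout=poll_interval)
        except queue.Empty:
            continue  # nothing yet; keep polling until the idle timeout
        output = process_job(json.loads(body))
        results.append(output)  # real version: s3_bucket.put_object(...)
        # Real version: delete the SQS message here, *after* the upload.
        last_job = time.time()
    # Queue has been idle long enough; a real worker would now
    # self-terminate, e.g.:
    # boto3.client("ec2").terminate_instances(InstanceIds=[instance_id])

if __name__ == "__main__":
    q = queue.Queue()
    out = []
    for i in range(3):
        q.put(json.dumps({"job_id": i, "x": i}))
    worker_loop(q, out, idle_timeout=0.2)
    print([r["result"] for r in out])  # prints [0, 1, 4]
```

Because SQS only removes a message when you explicitly delete it, this loop is naturally robust to spot interruptions: a half-finished job simply reappears in the queue for the next worker.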
You'll also need to set a maximum spot price (your bid threshold) when requesting the instances, and keep the queue stocked with jobs. That's it -- it's fairly simple.
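Seeding the queue (step 1) amounts to serializing one message per "unit" of the problem. Here's a sketch under the assumption that each job is one point in a parameter grid; the queue name is hypothetical, and the commented lines show roughly where the boto3 SQS calls would go.

```python
import itertools
import json

def make_job_messages(param_grid):
    """Build one JSON message per point in a parameter grid.

    param_grid: dict mapping parameter name -> list of values to sweep.
    Returns a list of JSON strings, one per "unit" of the problem.
    """
    names = sorted(param_grid)  # fixed order so job_ids are reproducible
    messages = []
    combos = itertools.product(*(param_grid[n] for n in names))
    for job_id, values in enumerate(combos):
        params = dict(zip(names, values))
        params["job_id"] = job_id
        messages.append(json.dumps(params))
    return messages

# On the long-running queue instance, this would be followed by
# something like (queue name "jobs" is hypothetical):
#   sqs_queue = boto3.resource("sqs").get_queue_by_name(QueueName="jobs")
#   for body in make_job_messages(grid):
#       sqs_queue.send_message(MessageBody=body)
```

Keeping each message small and self-describing (just the input parameters plus a job id) makes the workers stateless, which is what lets you add or lose spot instances at any time.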