visionofarun writes: In high-performance computing it is common for applications to keep all of their data in physical memory to meet performance requirements. When multiple processes communicate, shared memory is, as we know, the fastest form of IPC. A typical application of this kind initializes the shared memory by loading data from disk into it. This raises a question: what is the maximum amount of data an application can hold in physical memory, without swapping, of course?
geokes writes: Which hosting platform would be best for the following?
1. Convert a physical server to a virtual machine (with VMware Converter, for example)
2. Upload the VM to the cloud
3. Start it for testing
4. Shut it down, so I only pay for storage, not CPU time
5. Optional: synchronize the original server with the cloud image
The server would run Windows (versions 2003 R2 to 2008 R2). Applications on the server would be proprietary and local (no need for public access except RDP). In some cases a workstation image would also be uploaded so end users could run a client application, so the two VMs would need network connectivity. Is there a software package that handles all of this? If not, I'm OK with scripting it, ideally in PowerShell.