It sounds like you have many of the ingredients of DevOps in your experience, but none of the benefits, because those ingredients seem to be a drag on your efficiency. You may be in the middle of a journey, still working out how to raise the state of the art in your own work, your team, and your software development organization.
My definition of DevOps: the process of removing all friction between the developer and customer value.
You need to treat friction as technical debt: file a bug and work on it!
I think you are close, but I have to disagree with a point you make. You want SAs in the loop to **develop** the environments and tuning: we want infrastructure as code, where sysadmins document their wisdom and make it reproducible everywhere (in dev, staging, test, etc.), not by hand and only in production. Otherwise we end up with deltas between development and production, a gap that lets problems creep up only in production.
This is where we start to close the loop on infrastructure and software engineering by instrumenting our code with metrics, performing forensic analysis with logs, and tracking health/uptime/performance with monitors. Otherwise, yes - handing off production to the system administrators to do their dark magick is the old way and it is NOT the DevOps way.
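To make the "monitors" part concrete, here is a minimal sketch of a health/uptime check in shell. The endpoint, timeout, and status classification are my own illustrative assumptions, not anything from the post; a real monitor would feed these results into a metrics system.

```shell
#!/bin/sh
# Minimal uptime-check sketch. URL and thresholds are illustrative.

# Classify an HTTP status code into a coarse health state.
classify_status() {
  case "$1" in
    2??) echo healthy ;;   # 2xx: service answered normally
    5??) echo failing ;;   # 5xx: server-side failure
    *)   echo degraded ;;  # anything else: needs a look
  esac
}

# Probe a URL and print its health state. Requires curl.
check_health() {
  url="$1"
  # -s silent, -o discard body, -m 5 bound the wait, -w print only the status code
  status=$(curl -s -o /dev/null -m 5 -w '%{http_code}' "$url")
  echo "$url $(classify_status "$status")"
}

# Example against a hypothetical endpoint:
# check_health https://example.com/healthz
```

The point is that the same tiny check runs identically on a laptop, in staging, and in production: a shared tool instead of dark magick.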
DevOps allows us to approach the problem where tuning and troubleshooting on your laptop or in production should be, as much as possible, a shared exercise with shared tools.
There is so much misunderstanding because there is not a universal, static definition of DevOps that everyone can point to and say "that is DevOps" or "you are doing it wrong!" This is because DevOps is ultimately defined by the capacity of the people who practice it and I think we can see (already in these postings) that many people do not have the capacity to define it.
The history of DevOps begins with the people who coined it: Patrick Debois and Andrew Clay Shafer's first discussion about Agile Infrastructure at a conference in 2008, which led to a Google group and then to the first community meeting as DevOpsDays Belgium in 2009. We can trace it back to the beginnings and the primary-source folks, so please stop demonizing DevOps and making it anything more mysterious than a knowledge gap.
For an overview with my definition of DevOps, please see my blog post with talk and slides that I presented at Silicon Valley Code Camp earlier this month:
You want infrastructure as code: when you shell into a machine, you've already lost the battle because you are going to be doing things by hand which is slow and fraught with human error.
Your general approach is correct: scrap the hand-built servers and packages; instead, code them into a provisioning system such as Chef, Puppet, Ansible, Salt, etc., and handle all of the variables and corner cases for a fleet of servers with different OSs using these systems.
Model them for local development using Vagrant and eventually Docker.
Disk cloning is one easy way to solve this problem, but then you must customize the new clone, and that represents a different set of problems.
Eventually you learn that you don't want to copy the docroots or other data between each clone. Your application, data, and configuration were up to date at the time of the snapshot, but they may not represent the current application, data, or configuration.
This leads many to synthesize infrastructure via provisioning tools like Salt/Puppet/Chef/etc., following the infrastructure-as-code principle, and then to publish the application onto the server from revision control or, even better, from a build system, because that is up to date.
The problem is that doing anything by hand is slow and introduces human error.
We all start to solve this by documenting our work procedures (i.e. a run book) to make our knowledge reproducible the next time we set up a server. The next step is to code those procedures in a shell script to speed things up. However, you quickly find that you'll need variables, and you'll want to address corner cases because you need the script to work on more than one server. So your shell script needs to be tested in multiple places, and you've now begun to code infrastructure.
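That evolution can be sketched in a few lines of shell. The package names and OS detection below are illustrative assumptions; this branching on OS families is exactly the kind of corner-case handling that Chef/Puppet/Ansible/Salt later absorb for you.

```shell
#!/bin/sh
# Sketch of the hand-rolled stage: a run book turned into a script that
# has begun to grow variables and OS corner cases. Packages are examples.
set -eu

PKGS="${PKGS:-curl git}"   # a variable, so one script serves more than one server role

# Corner case: different OS families ship different package managers.
pick_pkg_manager() {
  if command -v apt-get >/dev/null 2>&1; then
    echo apt-get
  elif command -v yum >/dev/null 2>&1; then
    echo yum
  else
    echo none
  fi
}

provision() {
  mgr=$(pick_pkg_manager)
  case "$mgr" in
    apt-get) sudo apt-get update && sudo apt-get install -y $PKGS ;;
    yum)     sudo yum install -y $PKGS ;;
    none)    echo "no supported package manager found" >&2; return 1 ;;
  esac
}

# Hypothetical usage: PKGS="nginx postgresql" . ./provision.sh && provision
```

Once a script like this needs testing on several OSs, you are already doing a provisioning tool's job by hand, which is the cue to adopt one.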
Today there are many provisioning tools (and run book modules provided by the community) which solve this problem elegantly and allow you to provision a fleet: Chef, Puppet, Ansible, Salt, and many others. They allow you to scale your efforts so that you'll never need a full day to provision a server again.
We don't do things by hand anymore today: it does not scale and it is not repeatable.
Vagrant changed my life! Learn about Vagrant, use shell provisioning and evolve towards Chef/Puppet, then optimize toward application containers to go even faster. You'll gain the benefit of keeping each customer's development environment on your Windows/Mac/Linux desktop or laptop while being able to test multiple projects for different business clients, each reflecting its production environment.
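As a sketch of that first step, here is a minimal Vagrantfile with inline shell provisioning. The box name and package are my own examples, not from the post; the inline shell block is what you would later migrate into a Chef/Puppet/Ansible provisioner.

```ruby
# Minimal Vagrantfile sketch; box and package names are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # base image pulled from Vagrant Cloud

  # Start with simple shell provisioning...
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx
  SHELL

  # ...and later swap this for a chef_solo/puppet/ansible provisioner
  # as the setup outgrows a shell script.
end
```

`vagrant up` then builds the same VM on any teammate's laptop, and `vagrant destroy` throws it away, so the environment is disposable and reproducible.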
I had a lengthier example, but I lost the post. Anyhow, this is the beginning of your journey to immutable infrastructure as code: a lot of buzzwords that won't mean anything until you complete the journey.
Vagrant is part of a free toolset ecosystem (Packer, Terraform, Consul, etc.) that solves the modern infrastructure issues the OP is expressing. The Vagrant creators are smart and approachable in the forums, and I've had the chance to meet some in person to confirm they are humble and generous souls.
Check out Pertino.com, a network as a service startup. You can set up a free account for three devices forever. If you need to expand past three devices at the same time, then Pertino has become valuable to you.
At a minimum, you get a very easy to use (and administer) private, secure network between you and whomever you invite onto your network, so you can do Remote Desktop, VNC, X, or whatever else you choose for you and your family to use without resorting to GotoMyPC, WebEx, etc. (mind you, all of those are valid desktop-sharing services, too). But you can also do NFS, SMB, FTP, etc. for file sharing. Or anything: you finally have a virtual private network where you and your remote clients/family get a LAN-like experience in the cloud.
Full disclosure: I work there, so I am hopelessly biased. The value I see in this solution is that it is easy and secure for everyone, covers mobile and desktop, and allows you to try almost any solution out there to solve your needs because you have a peer to peer network with remote devices.
If you like Guacamole, you'll probably also like AJAXterm, which can give you a webpage-based shell... Can't find a definitive webpage for it...
:) Netsite evolved into the Netscape Enterprise Server, and I was there at Netscape when the web site cluster served over 100 million hits per day in 1996. Those were amazing times: many server manufacturers would bring in hardware and we would benchmark a portion of www.netscape.com's traffic on it, which usually led to discussions about how to tune or optimize the OS or the IP stack; I know we helped SGI at the time.
The server and software engineering folks helped develop a dynamic DNS server that would help globally load balance web traffic based upon the inquiring IP address. They also helped hack SSL into rsync back in the day, so that is how we securely published web content updates out to the cluster.
Sadly, we also pioneered web advertising at Netscape. My colleague Alan spec'd out the dimensions of the ad banners, in case you wondered where those 468x60 dimensions came from: it allowed a minimal amount of horizontal white space on each side of the web page when the browser had a vertical scroll bar on a 640x480 laptop display running Navigator, IIRC.
So those ad banners were physically changed in the docroot via a cron script in order to rotate them. The joy of hacks in a funded start-up, but it made money! In fact, unlike most corporations today (e.g. Microsoft), there was a strategic decision *not* to create an advertising server, so we helped create an industry and did not compete in it. Well, we didn't compete until TW/AOL acquired Netscape -- but that was the day Netscape really died. It could be argued that AOL bought Netscape solely for our web site traffic and advertising revenue, since they didn't know what to do with the browser and server software: witness the eventual release of the browser to the mozilla.org project (thanks also to jwz!) and iPlanet/Sun eventually selling the server line to Red Hat, who continues to open source the directory and certificate servers today.
I wrote the plug-in finder -- could it have been the most used CGI on the web at the time, in 1996? Who knows? I went on to become a technology evangelist at Netscape.
Good days indeed, thanks for the memories!
Research is what I'm doing when I don't know what I'm doing. -- Wernher von Braun