NOAA and NIST have huge headquarters in Boulder, CO. Cutting these agencies would affect Boulder in a *huge* way - much like GM's pull-out devastated Michael Moore's home town of Flint.
I graduated with a CS degree in 1994. I went directly into the C/C++ programming world and had a blast learning systems and writing code that was immediately put to use in large and small systems. I worked with embedded controllers. I wrote air traffic control system software. I did a little bit of everything and it was great.
One day after work, I went home and sat down to watch a movie, and before I knew it, the end credits were rolling. I had been thinking about my code and my work during the whole movie and was so preoccupied with it that I had missed the *entire* movie. I made a decision. My personal, non-work time is *my* time. I don't want to be 'working' when I'm at home and on the weekends (working means thinking about my code). I loved coding and coming up with solutions to problems that had never been solved before, but enough was enough.
I switched (slowly) to IT. I started doing some system maintenance work and porting to other OSes along with my daily work. I maintained the code repository. I became the linux guru at the company. My next job was strictly IT-only. My new employer was happy to hire a guy who was a programmer to be his IT person. I lived in that role for a long and happy time. It was a research company, and I had many opportunities to use my programming skills to make my IT work much less mundane.
The upside to this move was that I had more time to program on my own, in my spare time. IT is mostly mind-numbingly simple and can be forgotten about at 5pm when it's time to go home. You've fought all of the fires. Everyone else is going home. If the pager goes off, you handle the issue and go back to your life. I was satisfied, and I had my peace and solitude in my personal time back. I even started writing code and building websites for myself and my buddies, which was a much more pleasant way to spend my off-hours. I loved that if I was thinking about code in my off-hours, it was for my own projects and not someone else's.
I've stayed in IT for the last 17 years. I've stayed away from Windows (since it's mostly learning where to click) and kept mostly in the enterprise/startup/linux world where scripting is still a common task among IT people. I've used cfengine, puppet, chef and other tools to automate my tasks, and nagios is a close friend. I've found that working for a startup, I have the opportunity to write more core-level scripts and even some programs (I still program in C or C++ once in a while) and get to help the company with some serious tasks to keep my creative juices satisfied.
I market myself as a 50/50 kind of guy. SysAdmin and Programmer, although most of what I do during the day is IT and most of what I do in my spare time is programming. I love my current combination of tasks.
I don't know how much age discrimination will hurt me when I'm 50+ years old. Age discrimination seems to be an impending doom for my line of work, and I may have to go back to programming as my primary job some day, but I'm keeping my fingers crossed that I can continue to work as I have been.
Feel free to DM me through slashdot if you want to talk further (or @scumola on twitter).
http://beatport.com/ sells the original WAV files. They're a little more expensive than iTunes, but you get the raw data.
I live in Longmont, CO, and there are a *ton* of harddrive companies around here. Many of them hire embedded programmers (mostly C and C++). If you know Pascal, C is not a big jump away. Also, I used to work at a weather research facility, and there are tons of scientists there who all write Fortran to do their weather modelling. You'd still be very useful in both of those fields. Look for harddrive and harddrive-controller companies - all very 'embedded programming' friendly.
The best advice that I can offer is:
* Use the best tool for the job (don't use SQL for everything, key/value DBs are better for *many* tasks)
* Index smartly (don't index the whole string if you don't need to - sometimes just the first few chars is enough)
* Make sure your indexes fit in memory
* Other than that, just log long-running queries and optimize those.
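As a concrete sketch of the prefix-index idea: MySQL supports this natively (`INDEX (col(16))`), and SQLite can do the same thing with an expression index on the first few characters. A small SQLite example (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (url TEXT)")
conn.executemany("INSERT INTO urls VALUES (?)",
                 [("http://example.com/page/%d" % i,) for i in range(1000)])

# Index only the first 16 characters instead of the whole string.
conn.execute("CREATE INDEX idx_url_prefix ON urls (substr(url, 1, 16))")

# A query that filters on the same expression can use the smaller index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM urls "
    "WHERE substr(url, 1, 16) = 'http://example.c'"
).fetchall()
print(plan)  # the plan should mention idx_url_prefix
```

The trade-off is the usual one: a prefix index is much smaller (and more likely to fit in memory, per the point above), at the cost of some false-positive rows the engine filters afterwards.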
I agree. Scripts and "patchwork" and "duct tape" are easier for an IT person to maintain than a huge program that may be more robust and better thought-out. If something breaks in the large application, you need to re-design it, get the developer to change things, QA it, deploy it, etc.
Doesn't netstumbler already do this?
* LTO tape: $50/400GB ($125/TB) - painfully slow, semi-cheap
* Disk: approx. $150/2TB ($75/TB) - cheapest and pretty fast
* S3: $153/TB/mo plus transfer fees - slow, not cheap
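Just to show where those per-TB numbers come from (using the list prices quoted above, which will date quickly):

```python
# Quoted list prices at the time of writing; these will drift.
lto_per_tb  = 50 * 1000 / 400   # $50 per 400 GB tape -> 125.0 $/TB
disk_per_tb = 150 / 2           # $150 per 2 TB drive -> 75.0 $/TB
s3_per_tb   = 153               # $/TB per month, before transfer fees

print(lto_per_tb, disk_per_tb, s3_per_tb)

# S3 costs more in one month than a disk costs outright:
print(disk_per_tb / s3_per_tb)  # fraction of a month to break even
```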
Buying your own harddrives and storing them yourself is the cheapest option and will probably have very good retention. Since you're doing this frequently, it might be worth buying a SATA SAN that you can mount several drives in and a *bunch* of SATA drives. Put in 3-4 drives, raid them, copy your data (it might take a while), then put the drives somewhere safe. If this is customer data, you can charge them a fee for data retention so you don't have to eat the whole cost, but you'll have to put some money into the platform to begin with. If you roll your own, ZFS might be a better option than Linux's software raid because you can turn on compression and move data around if you need to. Something with hot-plug drives (a PC chassis or SAN or whatnot) might also be a good investment, and you may be able to re-use the drives after a while. Drive costs will also drop over time - you'll be able to buy 4TB and 8TB drives for the same price in a year or less. After a year or two, we're talking a few bucks to store this amount of data on your own, fast media.
As for storage, a safety deposit box will only work for so many drives. You might look into http://www.ironmountain.com/ for secure off-site storage, or just encrypt the data and have you or one of your employees take it home (off-site) so it's physically in more than one location.
I ran a whole IT department by myself with hundreds of servers. Life was great. I used tools that I understood, was comfortable with, and that were tested over time. Everything ran smoothly. Then new management came in and changed everything; it now takes more than 4 admins for twice the number of servers, we're always behind on tickets, and our support is worse. So my answer is "it depends".
[root@localhost ~]# nslookup slashdot.org
I used to do occasional linux recoveries for a place called Reynolds Data Recovery in Colorado. They weren't a mega-huge recovery company, but they got a few dozen drives every day and did good business. They used a collection of software - some proprietary utilities from the drive manufacturers, some commercial utilities. Some drives overheated, so they had a freezer they could put a drive in so it would run long enough to copy the data off. They also had clean rooms, so if the heads came off the platters somehow they could re-seat them, then run the drive "open" until they could copy the data off. Other times the electronics (the controller card on the drive) were dead, so they kept a huge shelf of working controller cards for every possible drive you could think of: they'd pop the old card off, put on a known-working card, then copy the data off. The data would normally be returned on a 'loaner' drive that the customer would return, or on a new drive that the customer would pay for. RAIDs were hit-and-miss: sometimes they worked and sometimes they didn't. I'm not sure of the names of the software they used; it varied depending on how difficult the recovery was.
When I had to do linux recoveries, I slowly built up a little distro of my own which had tons of tools on it. I'd have my 'distro' on a disk they could plug in when they needed me to work on a linux disk; then I'd ssh into the machine remotely and work on the disk without having to drive in. I'd fix the partitions or the disk if it was possible and copy the data off onto a backup disk. There are some good tools available in linux to do recoveries, but with the newer filesystems nowadays it's more and more difficult to get anything off. I'm not sure about SSDs - never had to deal with them yet.
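The "copy off what you can" step is what a tool like GNU ddrescue does well, but the core idea is simple enough to sketch: read the source in fixed-size blocks, and when a block throws an I/O error, zero-fill it and keep going. A toy illustration in Python (my own function names, not a substitute for the real tools):

```python
import os

def rescue_copy(src_path, dst_path, block_size=4096):
    """Copy src to dst block by block, zero-filling unreadable blocks.

    Returns the number of blocks that could not be read.
    Note: os.path.getsize works for files/images; for a raw block
    device on Linux you'd need to get the size another way (ioctl).
    """
    bad_blocks = 0
    size = os.path.getsize(src_path)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        offset = 0
        while offset < size:
            src.seek(offset)
            try:
                chunk = src.read(block_size)
            except OSError:
                # Unreadable region: fill with zeros and move on.
                chunk = b"\x00" * min(block_size, size - offset)
                bad_blocks += 1
            if not chunk:
                break
            dst.write(chunk)
            offset += len(chunk)
    return bad_blocks
```

Real recovery tools are smarter - they copy the easy regions first and come back to retry the bad spots - but the "never give up on one bad sector" loop is the heart of it.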
% wget -S -O - http://www.coworkforce.com/
Resolving www.coworkforce.com... 184.108.40.206
Connecting to www.coworkforce.com|220.127.116.11|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Date: Fri, 06 Mar 2009 17:24:49 GMT
Set-Cookie: ASPSESSIONIDASBTDDQQ=NIFBMIKAFMPHFLDLIKBAMPBD; path=/