The problem with public transport is that it's full of the Public.
The problem with thinking independently is that when it all goes bollock-up, it's your fault for ignoring best practice... even when best practice is bollox.
People may know better, but honestly, would anyone take the risk?
My RX8 has a 'feature' that protects the engine from overheating when it's revved while not under load.
Conveniently it makes performing the high-idle part of most emissions tests impossible because the car forcibly cuts itself back to idle halfway through most tests.
Most car manufacturers are fiddling the books in some way.
Never said you were. And such is the way in small companies. You have to do work outside your specialty. That's part of the fun.
I honestly had no idea how it actually backed up; the accounts application itself had a function to generate the backup. Which it did, to a local disk. I then had an automatic scheduled upload of that backup to the server.
Ultimately, like I said, I'm not really an IT guy - I was the one with google and enough patience to fuck about until things worked again. We didn't have one. We did pay one company a hundred quid a month for a while in case something went TU, but we stopped paying him six months before the final death just to make the dead plane glide those few hundred yards further.
The most IT thing I've done is run a simple website off my own desktop at home, and maybe making a datalogger work with remote internet access.
I wasn't the IT guy either. I just had enough of a head to google shit that borked and try to figure it out and make it work again.
The total cost was actually sweet FA in numbers terms, but I think I put the final nail in the company's coffin.
My first 'job' was a jobbridge internship with a 'small' company. Small enough that I was literally person number three on the employee roster. The company worked in the renewable energy sector, and had been hammered pretty hard over the last few years by The Recession as domestic and corporate purse strings were pulled tighter and tighter.
I was taken on as an Engineer, but rapidly found myself wearing a wide range of hats, from Sales to Customer Support to System Design to Project Management to web development in PHP, and finally, IT Support.
Because, one day, I managed to figure out why one of my colleagues couldn't log in to the server upstairs, and corrected the problem.
I will say, the Server was the problem.
It was a dinosaur. It was 14 years old - twice as old as the company - and had been bought second hand. It was a monstrous beige tower with a Pentium II processor and God Knows What else inside. It ran Windows 2000 Server, and was solely dedicated to serving the company accounts and acting as networked file storage. Inside the case were four HDDs... a pair of 9GB ones for the OS and programs, and a pair of 32GB ones for files. Both pairs were mirrored in RAID 1. It had a pair of lockable Zip disk drives still fitted, though the keys were long lost, along with a floppy drive and a CD drive with no write ability. Or ability to read DVDs.
It creaked as it worked, then fumed, whuffed, whirred and occasionally burped. And it sat there, creaking away for years without thought or consideration to its well being or security. Until I came along.
By this stage, it was obvious the company was dying - the Titanic had hit the iceberg a long time ago, and everything that was happening was just a desperate attempt to bail it out. We might've slowed the sinking - from two months, out to six, even buying a full year - but the abyss of liquidation always loomed.
So, any suggestion of upgrading the server hardware was met by 'With What Money?'. At the same time, everybody knew the server was the lynchpin. If it broke, that was it - company gone. A suggestion that I use a spare computer from home was quietly discouraged - in case the company went under by surprise and someone decided to liquidate it to pay a creditor rather than give it back to me. Or we turned up to find the doors locked.
The best I could do was schedule a backup of the accounts and a few other critical systems, and have it go somewhere offsite. I asked our webhost if we could use our spare space for it, and they were happy to let it happen, provided we didn't cause them problems. So, I set it to run the backup every Sunday morning - 1am or so. Each successive backup would overwrite the previous, because there just wasn't the spare space to hold two (no money to pay for more).
I figured even if the server went pop, or we had a building fire or some other catastrophe, at least those copies would survive. I'd figure out what to run them on afterwards.
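For what it's worth, the whole scheme boiled down to something like this. A minimal sketch only, simulated with local folders standing in for the server's disk and the webhost's spare space - the real job was a scheduled upload, and every path and filename here is invented:

```python
import shutil
from pathlib import Path

def weekly_backup(local_disk: Path, offsite: Path) -> Path:
    """Copy the freshest accounts backup offsite, overwriting last week's.

    There was no spare space (or money) for two generations offsite,
    so the previous copy simply gets replaced - the flaw in the plan.
    """
    src = local_disk / "accounts-backup.dat"   # produced by the accounts app
    dst = offsite / "accounts-backup.dat"
    offsite.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)  # clobbers the only offsite copy
    return dst
```

The overwrite is the key detail: with a single rolling generation, one bad backup run leaves you with nothing to fall back on.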
Someone, somewhere, should see the potential problem in this. In my defence, I am not, nor ever was, an IT professional. The software education I have is more related to the engineering side of things - making machines and robotics work with a view towards industrial automation, rather than the maintenance and setup of IT infrastructure and data security.
I just did what I thought I could to keep the Titanic afloat.
So, one Monday morning, I come to the office and am met by the shrill sound of metal screaming against metal at high speed. There's a heart-in-mouth moment as I realise it's coming from the server cabinet.
But, we have backups, I assured myself. The disks are mirrored in RAID 1, so if one drops out, the other should still be clean and working. If that fails, I've my own little backup too....
Unfortunately - that only works if the damaged disk decides to drop out of the array.
I find this out after I shut it down, remove the deceased disk, and reboot the server (while googling furiously for a 32GB HDD with a type of cable connector I've never seen before). The death knell is sounded with a simple, innocuous sentence moments after the server shuffles back online.
"I still can't connect to Sage. It's telling me the accounts Database is corrupted."
There was no anger. No frustration. No real realisation of the gravity of what that meant. Maybe it was just because it was an expected death - like the granny that'd been on lifesupport for too long, and finally decided to shuffle off the mortal coil.
But, we had one last hope!
The server stayed up! Maybe it'd successfully completed the backup before the disk died or corrupted anything!
Sure enough, hope shines anew as I find a fresh backup from Sunday sitting there, waiting for me. The file size looked good. Everything looked golden. There's hope that this Titanic might yet not sink.
We were still afloat, and that dumb little idea I had might've saved the company.
It downloads. It imports. It loads up in the software. There is a moment of rejoicing as it seems that things might just be OK after all. Purchasing. Customer records. Product records. It all looks good. Only invoices remain to be checked.
Do we dare to hope?
The window populates with records and for one brief moment, all looks well as we scroll on down to the most recent of records. Then Sage locks up solid, and spits out an error.
There is a horrible, sinking moment as I begin to piece together what must've happened. It is a dreadful, sickening feeling, like watching the cold waters of the North Atlantic rush in, and me trapped by a locked hatch.
The disk must've initially crashed during the backup - probably as it was about to finish. Instead of dropping out of the array, it remained 'online', for want of a better term, while the backup happened. It spat corrupted data into the backup, making it just corrupt enough to be unusable. It then remained in the array, still 'working' and filling its mirror with corrupted data, until we arrived on Monday morning to hear the final screams of a hard disk dying.
There's a clawing feeling that it was somehow 'My Fault'... and it probably was. With hindsight, maybe I should've set it to run the backup while we were in the building, rather than at home over the weekend. I could've used an external drive to keep a local copy too. There were probably a dozen things I could've done that would've stopped it.
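The big one, with hindsight, would've been verifying each backup before letting it replace the old one. A rough sketch of the idea, with invented names - and note that a checksum only catches damage introduced after the file was written; the failure here, a dying disk reading back garbage during the backup itself, really needs a trial restore, or at least a second retained generation:

```python
import hashlib
import shutil
from pathlib import Path

def rotate_if_valid(new_backup: Path, offsite_copy: Path,
                    expected_sha256: str) -> bool:
    """Overwrite the offsite copy only if the new backup matches its hash.

    expected_sha256 would come from the machine that generated the
    backup; if the file arrives damaged, last week's copy survives.
    """
    actual = hashlib.sha256(new_backup.read_bytes()).hexdigest()
    if actual != expected_sha256:
        return False  # keep the previous copy instead of clobbering it
    shutil.copyfile(new_backup, offsite_copy)
    return True
```

Even this wouldn't have caught our exact failure - the backup was born corrupt - but it's the difference between losing one week and losing everything.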
Nobody blamed me for it, not to my face anyway - but the disappointment was clear. My sole achievement had been to give things a chance, even if they were for naught.
We did finally get things running from a separate backup that'd been taken by chance elsewhere a few weeks earlier, combined with a lot of data re-entry from the papers we had around the office.
But by then, the damage was done. The remaining disk was on borrowed time at best.
The Directors called a meeting for a week later.
The Company was wound up and liquidated a month after that, finally slipping beneath the waves on New Year's Eve.
One of my colleagues took out a bank loan and bought most of the stock and company name, then used it to 'merge' and invest in another company. The other now works in some government thing - and I, after 4 years on the scratcher out of college, finally managed to get a paying job on foot of the experience. (Though with another company, natch).
The Server, however, miraculously found a new home, for reasons I can never understand. Someone saw the advertisement online and bought it within an hour of it being posted. Maybe it's even still working somewhere to this day - or maybe it's been parted out. I like to think it's still working for someone, somewhere, in some format.
It might've found its way to Nigeria or something, where to this day it continues to rattle and hum under some Prince's desk, spitting out emails begging for help with financial difficulties.
I did something a bit odd for my desktop, since I wanted a media system more than a gaming monster. It's a year old and is currently running Apache on a Linux VM to host a website, among other things that've gone far beyond its original design brief. It'll even play Crysis. Well, Crysis 2. The A10 has really impressed me - better than a crippled Intel and a budget graphics card anyway - and I'm curious to see what happens with the next FM2+ products.
Fast RAM and an APU go well together. Even if the 7850k wasn't available when I built it.
APU: AMD A10-7700K
Motherboard: MSI A88XM-E45
RAM: 8GB Kingston HyperX 2400MHz
Cooler: Something that works
PSU: Corsair CX600M Modular
SSD: 128GB Samsung 840 Evo
HDD: 1x320GB + 1x1TB salvaged from 'somewhere'
Case: Fractal Design 1000 USB3
It's got basically 500 days of constant running on it. And only the ten-year-old HDD has ever given issues.
All the exhaust backpressure can either wedge them out of place, or cause the retaining springs to overheat and push the seal out of its groove. If it clips a port, that's game over.
The difference between a $300 cat, and a $1500 one is the $300 one will physically melt at the exhaust temperatures it'll be exposed to.
It's 1500 quid for a new one and it's emissions exempt, that's why not.
And a failed catalyst quickly causes failed seals.
As an RX8 owner, I'm probably responsible for at least half that total.
With the catalyst gone out the tailpipe it smells like a refinery fire going up the road. A very fast refinery fire.
Widely Employed by a brutal Imperialist power as it cuts a swathe across the world.
Is Comcast a real company, or just an Onion-like parody of one?
Really. The Egyptians?
Surely there're countries that'll pay far more for this information than Egypt? And be able to do far more interesting things with it.
Sitting here, watching it, I'm reminded of how awesome the trailer was for Episode 1 a long time ago and the reaction it got.
What this country needs is a dime that will buy a good five-cent bagel.