Yep, but they should test whether the variable actually has a value. I vaguely remember testing for something like that by appending a value to the end, saving the result to a new variable, and then comparing the original and the new one: if they matched, the value was null.
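If I remember the trick right, it looked something like this sketch (the variable names are my own):

```shell
#!/bin/sh
# Sketch of the null-test trick: append a sentinel character to the
# variable and save it to a new one. If the new variable is just the
# sentinel by itself, the original variable was null (empty or unset).
VAR=""
CHECK="${VAR}x"
if [ "$CHECK" = "x" ]; then
    echo "VAR is null"
else
    echo "VAR has a value: $VAR"
fi
```

The same idea is behind the classic `if [ "x$VAR" = "x" ]` test; modern shells also give you `[ -z "$VAR" ]` for the same check.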
Really? You're really doing a delete and you don't check the existence of the folder before you start? I am not a Unix/Linux scripting expert (just a very dangerous amateur), but I always test that the directory is there before my scripts do anything. If the folder isn't there, the script screams, rants and raves to the console and then stops before it even starts processing. Common checks I do at the start of most of my z/OS Bash scripts, before the rest of the script runs:
1. Are the folder(s) I need there?
2. Do I have the proper access to the folder(s)/file(s)?
If either check fails, I dump full information to the console on what happened and what I think should be done to fix the problem.
It is a common set I use. Directory existence check:
if test -d "$1"
File existence check:
if test -f "$1"
Can I read the file:
if test -r "$1"
Not pretty, and it could probably be coded better, but it works for me and has saved my ass a few times.
1. A lot of the movies that are showing are crap (and that is being kind).
2. The cost of a movie out for my wife and me, along with some munchies, is more than I will pay for the DVD in a few months.
3. We can pause the movie at any time and take a break or grab some munchies (and not the over-priced crap in the theatre).
4. Did I mention most of the movies are crap?
5. We can skip the various 'ads' at the start of the movie. I want to see the movie, not pay to see advertising.
6. I don't have to put up with people talking about the 'good stuff' coming up and spoiling it for me.
7. I don't have to put up with the cell phones going off.
8. Did I mention most of the movies are crap?
We have very comfortable chairs at home, and there is no lineup when we go to get food or drinks, or when we go to the bathroom.
I wait a few months until the DVD or Blu-ray versions come out. Then I wait until friends and family give their feedback, and I may buy a copy, but I usually wait a few more months until the video store discounts the movie. I have hundreds of videos, but for over 95% of them I have not paid more than $10. There are exceptions, but they are for movies in a series that I (or my wife) love, where we want to see the next one quickly.
Again, did I mention most of the movies are crap?
Should read 'big red button'... Sigh, one of these days I will learn to spell.
First words out of Little Kim's mouth when he visited the site that connects to the real world.
Where I work, developers do not have any access to production. We have a process that tries to ensure that what gets to production has been tested by the dev team, reviewed by the support team, and finally tested by the client before it even gets near the production machine. The development team defines what is to be added/changed/deleted (the specs). The support team and the client approve the changes before we start. The code is then tested and signed off by everyone. Once the code is ready, the development team writes the installation document that defines every change required to release to production (and it must match the modules in the specs), and the computer operations team follows that list of instructions. The list is also checked against the change list, and only the changes identified get released; anything else will not be released unless authorized by the support team and the client.
There are exceptions for emergencies, but all code releases, for any reason, have to be signed off by the support team and the client before they even hit the production machine.
The upside of this is that every change made has been documented and verified. All code released to the production machine has been documented and an audit trail is available for review. This way a developer will have a very tough time trying to sneak in code that should not be in a production machine (it still can happen, but, it is very hard).
Fight the fight; at least if there is a paper trail your ass is covered. If your company has auditors, buy them a coffee and see if they can help you explain to senior management why root access for everyone is a bad thing. I needed it years ago as the support person, but when I moved back to development they kept my access. They gave me very strange looks when I asked for the access to be revoked, but when they got audited they didn't get nailed by the auditor for giving developers full access to the prod machines.
As a compromise, see if you can get a 'SYSTEST' area defined where an image of prod data is stored and the new code to be promoted can be staged. That way developers can put up their code and prove it works with prod data, and once it gets signed off by management you can 'promote' the code to the prod servers.
Been there with failing backups. We do a disaster recovery exercise on a regular basis. The first time we did a disaster recovery test, many years ago, our backup tapes failed and we had to revert to another set of backups to finish the exercise. It drove home to senior management the importance of backing up and, just as importantly, of ensuring that those backups will work when they are needed!
You will probably be getting a large number of suggestions. I have done both support and development on mainframes and servers so here is some input:
1. Let management know at a high level the state of the machine(s) and get permission to spend part of your time documenting the system. When you get permission ask them for how often they need updates and how much detail. Keeping them in the loop seems to make them happy and feel important.
2. Document the current state and highlight areas of concern. Put down what the concerns are, the risks and the potential costs to the company if it fails.
3. Go through the document and organize it by risks. Try to figure out the size of the risk and how much work it will take to fix it and what is needed to fix the problem.
4. Automate as much of your process as possible. Any task you have to do on a regular basis (in my humble opinion if you do it more than once then automate it) should be automated. Dedicate time to document what you did.
5. Senior management probably does not want to see details. When you present, keep it simple and short. Point out the costs of failure, and if you need software to help, put it forward as an 'investment in infrastructure'.
6. If the company has an internal auditor make friends with him/her. Getting them on your side to present to management will help. Having the auditor explain to them the financial costs will help your cause a lot.
7. When you do things take the time to document what you are doing, WHY you are doing it, how you did it and where to go for the programs/scripts/data.
8. Pick the brains of all the people there as much as possible. Offering to buy coffee and donuts seems to make them more receptive to an informal session, and the amount of information they have could help you.
Part of every project we do now is dedicated to documentation, and the client now knows the importance of that documentation and is happy to pay for it. The current system is over 25 years old, and a lot of business knowledge has been lost due to people retiring or leaving. When we find things out, we put them into a document. The hardest thing to find is the 'WHY', but once you get that, the rest of the information starts to make more sense. Our most popular section is the 'HOW TO DO' one, as it is the shortcut to every other document in the system.
When you do your documentation, try to keep the documents as open as possible and avoid proprietary packages as much as you can. We had files from an old flowchart program we no longer had a copy of, and it took me a week to find an open source package that could read and export them.
And document every step of the way: what you did, how you did it, where you did it and WHY! I did support for three years on a legacy mainframe app and a lot was never documented, especially the WHY. Half the time I put into fixing an outage went into documenting it.
Don't forget to verify that the restore process using the backups works too.
Felt it in Ottawa too.
After a period of time your people will get tired. When that happens, code productivity will drop along with quality. I went through that where I work when we did 84 days non-stop of 12-16 hour days. We were making so many rookie errors at the end, but we had a hard drop-dead date that had to be met. It took us months of bug fixing afterwards. A number of us then had to take extended vacations to recover (in my case, 7 weeks off).
If you have to work those hours, make sure that people are eating properly and getting at least one day a week off so they can take care of their personal business. Good food (not pizza, beer, fries and hamburgers) will help. We had veggies, yogurt, juices and other healthy foods on hand, and that helped.
If that were true, then why do we have so many holes in Windows? That is closed source, and every time I turn around there is another security hole that has to be patched. I have dual-boot machines at home, and most of my patching time goes to the Windows side of things. On the other hand, my Linux boxes at home don't have as many security problems, and when a hole is found, a patch comes out much more quickly than I could ever hope for with Windows.
It has all the sound of a security vendor trying to scare people into going with a product they know has problems, and then selling them more of their offerings to 'protect' it.
Security by obscurity is not security at all. Open source allows anyone to review the code, and if there is a problem, a patch can be proposed and the hole closed quickly. With closed source we don't know what is there (unless you have a disassembler and can read assembler code), and we are dependent on the vendor doing timely patches.
One other observation. Security is not absolute, it is a process. This goes for both open source and closed source. What is secure today is not necessarily secure in the future. When holes are found they need to be analysed and fixed.
I remember reading years ago that governments and businesses would create versions of a document for distribution with minor differences between them. That made it easier to identify the source of the document when there was a leak. In my less than humble opinion, they repurposed that idea to do the same thing electronically.
1. Wouldn't it have been easier to mark up a random frame somewhere in the movie with information about the distribution point and then track it that way?
2. Minor edits on the credits would also have been an option. It would take more time and effort, but, it would be part of the movie and no one would know the difference.