No one expects management to come in and fight the fire at 2 AM. What is expected of management, however, is to understand what is happening within their organization (though not at the bits-and-bytes level), because they are directly responsible for that organization. Management should be able to bring in another competent person to fight the fire at 11 AM because you were killed on the highway driving into the office at 2. And that competent person should be able to start fixing the problem because management could hand them the proper "keys," and there is proper documentation to give them the gist of the system's layout.
Yes -- you are a sysop, not management. You are an employee hired to do what management wants. If management screws up and something happens to the organization, they can be held legally responsible -- think Sarbanes-Oxley. If you are following their orders, you are off the hook (one of the reasons executives are paid the salaries they are). If you go off and do something on your own without their approval, or try to hide things from them under the guise of "I know what's right for the business," and something happens, it will be your butt on the line.
Say you worked in a finance group responsible for transferring company assets into external funds dictated by upper management, and you thought, "Hey, upper management doesn't understand what they're doing and they don't listen to me. I'm going to transfer some of the company's money into funds that I think are doing well; I know it can make a huge return on investment for the company." How far do you think that argument would fly?
One thing is that sysops and admins need to stop hiding the incompetence of management by going behind management's back and "doing the right thing." If you really believe that the organization is going to fail because of management decisions, document what those decisions are, document how you believe they are harming the organization, and report it to the organization's internal auditing or business-controls folks.
The ACM code of ethics is relevant here: http://www.acm.org/about/code-of-ethics
All in all, I believe that if you really read the full list of ethics from these kinds of organizations, you will find that if you are doing your job well -- properly documenting issues, validating problems, and responsibly reporting them -- incompetence will not have a leg to stand on.
A policy should have been in place that defined who the business owner (management) of the resource was (the network, in this case). It is the responsibility of management to define who has a business need for access (and to document it), and it is the responsibility of the tech grunt to run the system (or network) for the business owner.
The key point is that as a non-manager, if management says jump, get it in writing and jump. Management is ultimately responsible to the business for the system and network. If management has made bad choices or decisions, it's their fault, and if the requests or actions leading up to the failure are documented, the admin can refer to that documentation.
All organizations should at least have a documented policy for who can have access to resources, such that the business owner of each resource can be easily determined. The business owner needs to be someone who is legally responsible to the organization (i.e., an executive, or someone high enough in management).
As a system administrator, you should insist on having this documented just to protect yourself. If you suspect that a management decision could jeopardize the operation of the system, document it, report it to the business owner, and let them make the final decision (with documentation).
In the case of Terry Childs, had this been documented, he would have been able either to say that the person requesting the passwords did not have a business need (and to back that statement with documentation), or, if the person did have the authority, to simply document why it was a bad decision, hand the passwords over, and walk away.
Yes, there is a pride element. You've spent years building up a system and making it shine, but unless you are running your own business, you are not the legal owner of that system.
This really isn't anything new. Knuth didn't get it "wrong"; he based his analysis of algorithms on a model system with dedicated memory, where each instruction ran uninterrupted and in a consistent amount of time.
Certain memory access patterns are "bad" on a system that uses virtual memory, especially when the underlying system is memory constrained. This has been well known for decades. In fact, one of the perhaps lost arts of programming was ensuring locality of reference, not only of data but also of code. It was common practice to ensure that often-called subroutines or functions were either located in the same page of memory as the calling code, or grouped with the other often-called functions into as few pages of memory as possible.
Basically, every address space has what is sometimes called a working set: the set of pages that have been recently referenced. Three things can happen to a working set: it can remain the same size, it can grow, or it can shrink. If it remains the same, there is no additional load on the operating system. If it shrinks, there is also no additional load; in fact, on a memory-constrained system this can help. A growing working set, however, can lead to a thrashing system. Some operating systems monitor working-set sizes and adjust dispatch priorities and execution classes based on recent working-set history. An application with a growing working set may very well find itself at the end of the queue, way behind applications with static working-set sizes.
Take, for example, the following very simple program:
static string buffer
while not infile.eof() do
    infile.readinto(buffer, 256)
    outfile.writefrom(buffer, 256)
Here the working set of this program will be very small. Ignoring the file I/O routines, all the code and data references are limited to basically a fixed section of memory. From a virtual-memory standpoint, this is a "well behaved" application.
Now take the following:
static string buffer
while not infile.eof() do
    bindex = random(0, 4095)
    infile.readinto(buffer[bindex], 256)
    outfile.writefrom(buffer[bindex], 256)
Functionally this is the same program; however, the data reference pattern is all over the place. The working set will be large, since many of the buffer's pages get referenced, and the program never stays long at the same memory location.
Finally, take the following example:
static string buffer
infile.readinto(buffer, 256 * 4096)   // fill the entire buffer
for i = 0 to 4095 do
    numbercrunch( buffer[i] )
Here there will be an initially huge working set as the data is read in. However, the working set will shrink to a reasonable size once the number-crunching phase starts, since the data references will all be localized to a small block of memory at a time.
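The difference between the well-behaved and scattered patterns above can be sketched by counting distinct pages touched. This is a toy model (page and chunk sizes, reference counts, and the `pages_touched` helper are all assumptions for illustration), not a measurement of real paging behavior:

```python
import random

PAGE = 4096      # bytes per page (assumption)
CHUNK = 256      # bytes read per operation, as in the examples
NCHUNKS = 4096   # chunk slots in a 1 MiB buffer (assumption)
REFS = 10_000    # simulated read operations

def pages_touched(chunk_indices):
    """Count distinct pages referenced when reading 256-byte chunks
    at the given chunk indices of the buffer."""
    pages = set()
    for i in chunk_indices:
        start = i * CHUNK
        # a chunk may span more than one page if it crosses a boundary
        pages.update(range(start // PAGE, (start + CHUNK - 1) // PAGE + 1))
    return len(pages)

# First example: every read lands in the same small static buffer.
sequential = [0] * REFS
# Second example: each read lands at a random chunk of the buffer.
rng = random.Random(1)
scattered = [rng.randrange(NCHUNKS) for _ in range(REFS)]

print(pages_touched(sequential))  # stays on a single page
print(pages_touched(scattered))   # touches nearly every page of the buffer
```

The scattered pattern references essentially the whole buffer, which is exactly the "large working set" the comment describes.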
People on both sides of the argument should at least realize that they don't have all the "answers." The folks saying "it's no big deal" need to realize that the concentration and location of the spill will have a local impact on the environment and the local economy, and that even though there are natural processes that "spill oil into the environment," this event is straining the system. The folks saying "it's the end of the world" need to understand that some natural processes can have similar impacts, and that in the very long run the environment will recover (just not on the time scale one might expect). I would say that instead of arguing or sticking our heads in the sand, we should just get in and "clean it up" to the best of our ability, and make sure reasonable safeguards are put in place so that future events like this can't happen.
Basically, the natural seepage in the northern Gulf of Mexico is about 120,000 barrels a year. For the entire Gulf of Mexico it's about 625-1,875 barrels a day (or 2.5 to 6.9 x 10^5 barrels a year).
The "problem," I suspect, with the current oil well is the localization of the spill, and thus the higher concentration of the oil. It's kind of like eating a teaspoon of hot sauce directly versus stirring a teaspoon of hot sauce into a bowl of chili: the same amount of the stuff, just dispersed over a larger area.
I just wish the US population would get over its general reaction of suing someone over anything.
"If you can't take a little bloody nose, maybe you ought to go back home and crawl under your bed. It's not safe out here. It's wondrous, with treasures to satiate desires both subtle and gross. But it's not for the timid." -- Q, "Q Who", Star Trek: The Next Generation
One of my favorite quotes.
So, basically all that is new is the recharge time and the decibels.
With the introduction of the XA architecture in the late '80s, IBM moved some of the virtualization technology down into the hardware: they created a new instruction, SIE (Start Interpretive Execution), that could tap into this facility. This facility ended up being the heart of both LPARs and VM/XA (which grew into the current z/VM). Conceptually, the SIE instruction, or the LPAR facility, saves the current processor context and starts a new context. The "guest" system (or the LPAR) now runs in this new context until some condition is met (certain timer pops, certain state changes, etc., as defined by the meta-system -- z/VM or the base system managing the LPARs). Moving this function down into hardware was a logical extension of what used to be called hardware VM assists in pre-XA days.
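The "run the guest until a condition is met" idea can be modeled with a toy dispatch loop. This is purely a conceptual sketch and not real SIE semantics; the `GuestContext` fields, the instruction names, and the intercept conditions are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class GuestContext:
    """Hypothetical saved guest state: an instruction pointer and a
    'timer' counting instructions until the host regains control."""
    pc: int = 0
    remaining_timer: int = 3

def start_interpretive_execution(guest, program):
    """Run guest 'instructions' until an intercept condition occurs,
    then return control (and a reason) to the host -- the general shape
    of what SIE does conceptually."""
    while guest.pc < len(program):
        if guest.remaining_timer == 0:
            return "timer pop"               # host-defined condition met
        op = program[guest.pc]
        if op == "privileged":
            return "state change intercept"  # guest needs host intervention
        guest.pc += 1
        guest.remaining_timer -= 1
    return "guest done"

print(start_interpretive_execution(GuestContext(), ["nop", "nop", "privileged"]))
# -> state change intercept
```

The point of the sketch is only the control-flow shape: the host saves context, the guest runs at full speed, and control returns to the meta-system when one of its conditions fires.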
Basically, the base hardware provides LPARs (in fact, for quite some time IBM mainframes have only run in LPAR mode, even with a single system image). LPARs allow sharing of the physical processors, sharing of physical I/O devices, and partitioning of physical memory. With an LPAR you cannot exceed the physical resources available, meaning that you cannot define an LPAR image with more processors than are physically available, or give an LPAR image more memory than is physically available. This is where z/VM comes in.
z/VM provides the ability to virtualize the physical resources. You can define a VM guest with more memory than is physically available, or more processors than are physically available. In addition, z/VM can provide virtualized I/O devices, or more fine-grained partitioning of physical devices (e.g., carving a disk volume into a collection of smaller volumes called minidisks -- which are not the same as disk partitions).
I feel that comments can be broken into four types:
i += 4;   /* Increment i by 4 */          <- a BAD comment
i += 4;   /* Ignore the first 4 fields */ <- better
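The "better" comment works because it explains intent, not mechanics. A minimal runnable illustration of the same point (the record format and `parse_record` helper are hypothetical, invented just for this example):

```python
def parse_record(fields):
    """Return the payload portion of a record."""
    i = 0
    i += 4  # skip the 4 header fields; the payload starts at field 5
    return fields[i:]

print(parse_record(["h1", "h2", "h3", "h4", "d1", "d2"]))  # -> ['d1', 'd2']
```

A reader can see *what* `i += 4` does from the code itself; only the comment can say *why* 4 is the right number here.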
You really don't understand what a monopoly is, do you?
The USB-IF acts to maintain the USB standard, and it features vendor ID codes that are guaranteed (by the specification) to be unique to each individual vendor who uses them.
They *must* act to prevent other companies from simply deciding to use a vendor ID *that does not belong to them* (read: that they have not licensed, because the ID has been licensed by someone else, namely Apple).
How on earth did this get +1 informative?
The sole reason the USB-IF exists in the first place is to prevent (or correct) issues like this, which arise when one company breaks the spec for its own ends.
Several years ago I read (and wish I remembered where) about a technique that I thought was quite interesting: a rule-based authentication scheme. Each account on a system would have its own set of rules that only the user would know. For example:
What is 2+4?:cat
Here I might have set up the rule to say: whenever there is a mathematical equation with an even result and it's morning, enter "cat"; if it's afternoon, enter "river"; if the result is odd and it's Monday, enter "blue"; if it's Tuesday, enter some other word; and so on.
The response has nothing to do mathematically with the question; it relies on the fact that I know what the proper response should be. And even if someone were watching my responses, it wouldn't help them much: each time I log in, a different rule would be used (maybe the next question would be "What color are roses?").
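A minimal sketch of one user's rule set, following the comment's example. The specific words, the fallback case, and the challenge format are assumptions; a real scheme would store per-account rules server-side and verify the response the same way:

```python
import datetime

def answer(challenge_result, now):
    """One user's secret rule: even result -> 'cat' in the morning,
    'river' in the afternoon; odd result -> 'blue' on Mondays,
    otherwise a fallback word (assumption)."""
    if challenge_result % 2 == 0:
        return "cat" if now.hour < 12 else "river"
    if now.weekday() == 0:  # Monday
        return "blue"
    return "dog"  # hypothetical fallback for other odd-result days

monday_morning = datetime.datetime(2024, 1, 1, 9, 0)  # Jan 1, 2024 was a Monday
print(answer(2 + 4, monday_morning))  # -> cat
print(answer(3, monday_morning))      # -> blue
```

The challenge itself carries no secret; the security comes entirely from the attacker not knowing which property of the challenge and context the response is keyed to.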
1.79 x 10^12 furlongs per fortnight -- it's not just a good idea, it's the law!