Comment Cookie! (Score 3, Funny)
Talk about a big one...
Are they testing for a buffer overflow?
"The HDD is as slow as a dog": beware of this symptom if it does not go away after recreating the filesystem!
Modern hard drives have bad blocks when they are shipped, just like old ones did (back then, a paper listing the bad blocks came attached to every new HDD).
The difference is that new HDDs reserve extra space to relocate bad blocks, and they do it automatically when they sense that a block is about to die. When there are many relocated blocks, it is equivalent to having a fragmented filesystem: for a sequential read, the disk head has to seek to physical locations that are far apart. The apparent HDD speed decreases as the number of bad blocks increases.
The next stage is having bad blocks reported by the OS that don't go away when you overwrite them. This means there are so many of them that no spare space is left to relocate them. Your HDD is then basically a colander full of holes, and you shouldn't entrust your data to it.
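As a concrete check, the relocation counter the firmware keeps is visible through SMART. Here is a minimal sketch, assuming `smartmontools` is installed and that the attribute is reported under its usual name `Reallocated_Sector_Ct` (the device path is just an example):

```python
import subprocess

def reallocated_sectors(smart_output):
    """Extract the raw Reallocated_Sector_Ct value from `smartctl -A` output.

    A steadily growing count is the early-warning sign described above:
    the drive still works, but it is eating into its spare area.
    Returns None if the attribute is not present.
    """
    for line in smart_output.splitlines():
        if "Reallocated_Sector_Ct" in line:
            # the raw value is the last column of the attribute line
            return int(line.split()[-1])
    return None

def check_drive(device="/dev/sda"):
    # requires the smartmontools package; may need root privileges
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    return reallocated_sectors(out)
```

A count of zero is what you want; anything that keeps climbing between checks is a drive to retire.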
As a side note, this explanation makes it clear why it can be impossible to recover a file because of bad blocks, yet after a low-level format they "vanish". This is not some magic feature of a low-level format, like "polishing" the platter surface with a strong magnetic field. In reality, when a block allocated to a file dies and you try to read it, the HDD has no choice but to report an error, because the data is no longer there; but when you write to it (which a low-level format does), the block gets relocated.
For the same reason, HDDs holding data that is rarely accessed (typically forensic evidence drives) should periodically be re-read in full. This forces the relocation of "weak" blocks before they become unreadable.
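Such a periodic full re-read can be as simple as streaming the whole device and throwing the bytes away. A minimal sketch (the device path is an example; you need read permission on it, and you would typically run this from cron every few months):

```python
def refresh_read(path, block_size=1 << 20):
    """Sequentially read every byte of `path`, discarding the data.

    On a block device (e.g. "/dev/sdb") this makes the drive's firmware
    re-check the ECC of every sector, giving it a chance to relocate
    "weak" blocks while the data can still be recovered.
    Returns the number of bytes read.
    """
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total
```

The same effect can be had with `dd if=/dev/sdb of=/dev/null`; the point is simply that every sector gets read.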
If you have an idea that you really like, you'll be willing to at least hack a lame, half-working prototype yourself: seeing it function would be reward enough, and you would enjoy the little implementation details, even if it means learning some programming skills.
Otherwise your idea is just a bubble of hot air that even you don't really believe in.
Researchers decide what income makes you happiest.
And
I, for one, welcome our new $75K-earning happy overlords.
Try to be curious and helpful without being arrogant.
Curious: when someone takes the time to explain something to you, listen, even if you believe you already know it. Each new explanation may bring interesting details. Besides, listening is a way to be polite.
Helpful: when a teammate of yours is struggling with a problem, take time to understand it and try to find a solution on your side. If you come up with something that works (test it to be sure), offer the solution: "Look, I've got something interesting here, what do you think about it?"
The key here is to be sincere and modest, especially when you're the youngest on the team; consider that for a senior it is not a very comfortable situation to be rescued by a rookie. Getting your help accepted is not always easy.
Free bonus: as time goes by, you'll get a reputation as someone who can solve hard problems, the ones everyone else has given up on. You'll get opportunities to work on more interesting (and challenging) parts of projects, and so on.
Good luck
I was talking about the report, not the summary.
I agree with your analysis of the agenda.
Yet the report is wrong in the sense that it understates (intentionally?) the value of being compliant.
Ironically, the authors' point of view is a good illustration of the economics of security: http://en.wikipedia.org/wiki/Economics_of_security.
The word "penalty" isn't used even once in the document, while compliance efforts are mainly driven by the need to avoid penalties, because penalties are the main impact (otherwise there would be no need for regulations).
In figure 1 of the report one can read that the consequences of a custodial data leak would be cleanup and notification costs.
However, here's an excerpt from a randomly picked PCI-DSS FAQ (http://pci.evolve-online.com/pci-faqs.asp):
"
What are the penalties for non-compliance?
In the event of a security breach, penalties for non-compliance are imposed. We understand currently these to be in the order of:
* Fines at the rate of 5 euros per compromised account
* A breach fee in excess of 100,000 euros per incident
* Possible restrictions on the merchant
* Permanent prohibition of the merchant's participation in Visa and MasterCard programs
* Beyond compliance, business risks relative to brand, customer loyalty and company valuation exist if the cardholder data is not securely managed
"
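To see why the report's omission matters, plug the FAQ's (admittedly approximate) figures into a back-of-the-envelope calculation; the account count below is a made-up example:

```python
def breach_cost(accounts, fine_per_account=5, breach_fee=100_000):
    """Rough lower bound on direct penalties for a breach, in euros,
    using the per-account fine and per-incident fee quoted in the FAQ
    above. Actual figures vary by card scheme and contract, and this
    ignores cleanup, notification and brand damage entirely."""
    return accounts * fine_per_account + breach_fee

# a modest leak of 50,000 card numbers:
print(breach_cost(50_000))  # 350000
```

Even a small breach dwarfs the "cleanup and notification" line the report stops at, which is the point.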
Disclaimer: I do PCI-DSS audits.
After conducting an audit of a merchant or a PSP (payment service provider), a QSA (Qualified Security Assessor) issues a ROC (Report on Compliance with PCI-DSS) that is submitted to the issuers (Visa, MasterCard, Amex, JCB and Discover).
Then the issuers certify the auditee.
An individual cannot be a QSA on their own; they have to work for an organization that is qualified as well. Among other things, a QSA organization has to provision a HUGE amount of cash in case it is found liable for having unduly declared an auditee compliant.
When a breach occurs, there is an investigation, and it may turn out that the ROC was not accurate at the time of the audit; in that case both the QSA organization and the individual QSA are in trouble.
BTW, a certification is only valid for one year.
Now, this case is not about PCI-DSS but about the "Cardholder Information Security Program" (CISP), and the breach happened in 2005.
Therefore I think the outcome would not have much impact on the PCI program, where liabilities are well defined.
Most equipment, systems and applications have fancy features that let you do elaborate things efficiently with fewer resources. This is an enjoyable part of our work; unfortunately, it should be banned.
Restrain yourself from the temptation to use those features.
Implement everything with the most basic and standard approach.
This may be frustrating: you may feel that you are wasting cash and time and sacrificing performance, but you'll actually get a more reliable and flexible system. And an outsider will be able to understand it more quickly.
Most systems let you insert comments in the configuration. Use that extensively. Comments are the most immediate documentation and usually the most up to date.
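For illustration, here is the kind of commenting that pays off: a hypothetical ntpd configuration fragment where each comment records *why* a setting is there, not what it does (server names, addresses and the stated reasons are all invented for the example):

```
# Policy: at least 3 time sources, so a single bad clock
# ("falseticker") can be outvoted by the other two.
server ntp1.example.com iburst
server ntp2.example.com iburst
server ntp3.example.com iburst

# Listen on the management interface only: the last security
# review flagged NTP exposure on the public side.
interface ignore wildcard
interface listen 10.0.0.5
```

Six months later, the "why" comments are what stops your successor from "simplifying" the file back into a vulnerability.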
One last hint: once your system is running and you have removed anything fancy from it, leaving only the necessary complexity, take 15 minutes to describe the profile of a person who would be qualified to manage it. Include books covering the general knowledge this person will need. Hand the description to your management.
This approach has the following advantages:
- screening out totally unfit candidates
- helping your successor fill gaps in their knowledge
- avoiding describing common knowledge in your documentation (in my experience this is 30-70% of the document and could be replaced with references to the appropriate books)
- (free bonus) giving management a better understanding of your own value
There are drawbacks as well:
- Going through books takes longer to get a grasp than if you explain everything inline.
You can mitigate this by giving references to specific chapters. And stress the fact that no one should be allowed to touch the systems *before* acquiring the knowledge in the books. It's like driving a car: you should learn *before* getting on the highway, not *while* doing it.
"Only wimps use tape backup: _real_ men just upload their important stuff
on ftp, and let the rest of the world mirror it"
- Linus Torvalds, Jul 20 1996, 3:00 am
As long as we're going to reinvent the wheel again, we might as well try making it round this time. - Mike Dennison