Comment Where this comes from... (Score 1) 247

I agree.

AFAIK, the idea that passwords have to be changed at intervals of one to three months comes from the old days, back when many terminal users shared one Unix system with a world-readable /etc/passwd file. The password hashes stored there were crypt() hashes, so anybody could read them and start cracking them offline. One day some TLA calculated how much time it would take an attacker with serious resources (or rather, what was regarded as serious resources back then) to brute-force a password. They came up with something like "a crypt hash would be reasonably secure for two months, so if it is changed every month, it will be secure". This ended up written into one of the rainbow books (the Orange Book?), and from there on it was simply copied into all the other standard security books and references.

As far as I know, this is why we are now stuck with every best-practice guide still repeating the idea that passwords have to be changed at regular intervals.
[citation needed]

Of course, this has been outdated at least since shadow passwords were introduced, never mind Moore's law or rainbow tables.
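For the curious, that offline attack was about this simple - a minimal Python sketch, assuming a hypothetical DES-crypt hash (Python's crypt module is Unix-only and was removed in 3.13, fittingly for a history lesson):

    # Dictionary attack against a world-readable crypt(3) hash.
    # The "stolen" hash below is made up for illustration.
    import crypt

    stolen = "ab01FAX.bQRSU"   # hypothetical 13-char DES-crypt hash from /etc/passwd
    salt = stolen[:2]          # classic DES crypt: the first two chars are the salt

    for guess in ("letmein", "password", "hunter2"):
        if crypt.crypt(guess, salt) == stolen:
            print("cracked:", guess)
            break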

Comment correctly formatted (Score 1) 174

10. They were just a bunch of students running a cool experiment that got out of hand. Once they realised that the problems would start as soon as they tried to make money out of it - since the feds could follow the money trail - they abandoned it. This is also why it did not carry a harmful payload for a long time, and why the only malicious payload quickly self-destructed.

11. It really is the creation of some TLA somewhere, from the Mossad to the CIA, the FSB or the Secret Service of Trinidad & Tobago. This is why Conficker dropped a real malicious payload only for a short time: if you want a large army of bots to attack other nations in case of war, it makes no sense to drop a malicious payload - you don't want the hassle of actually making some money, but you can't afford to have anyone find that out either; also, you don't want to destroy or harm your bots' hosts, or make your bot appear more dangerous to their maintainers than necessary, since they might then put more effort into removing it. But not deploying any payload at all turned out to spark all sorts of speculation and media interest, so they had to make Conficker drop a plausible payload that self-destructed after a short while.

12. Some mafia guys thought of hiring a bunch of experts to develop the perfect, most advanced botnet, and it all worked fine. Until they realized that this one perfect botnet drew thousands of times the media and police attention of all the bots that preceded it combined. So, with every security researcher, every cyber-crime unit and every self-proclaimed virus hunter watching them, they abandoned the project and returned to deploying hundreds of less effective, smaller-scale bots that also earned them loads of money, but no media attention.

Comment answer to the question (Score 1) 536

I also suspect that the CR/LF conversion might be the source of his troubles. In that case, it's not a bug, it's a feature of the FTP protocol. I guess you could simply use binary mode for the FTP transfer.

Now, let's assume that it's not a CR/LF problem, but that instead, for some unknown reason, the FTP transfers get aborted and hence the file sizes mismatch.

Okay, first of all, if you want to guarantee that a file that departed from one system is the very same file after its arrival on another system, it is not wise to use the file size for verification, as two files could have the same length but different contents. This is why md5sum is typically used. Better yet, use both MD5 and SHA-1 hashes, since nobody is likely to ever produce a meaningful collision for both of them at the same time.
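For illustration, a minimal Python sketch that computes both digests in a single pass over the file:

    # Compute MD5 and SHA-1 together, so a forged file would need to
    # collide with the original under both hashes simultaneously.
    import hashlib

    def file_digests(path, chunk=1 << 20):
        md5, sha1 = hashlib.md5(), hashlib.sha1()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                md5.update(block)
                sha1.update(block)
        return md5.hexdigest(), sha1.hexdigest()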

Now, what programs should be used for the transmission itself? Well, that depends on your requirements: Is confidentiality important, or is it really just about integrity and availability? Is speed or link saturation a concern? If your pipe is already, say, 80% full, you probably cannot afford the overhead of encrypting your data. Otherwise you should encrypt, unless, for example, an IPS/IDS maintainer needs to be able to scan the contents. Let's take a look at both possibilities:

  • First: Confidentiality not an issue but bandwidth/speed/IDS is

Basically, any TCP data transfer application that does not itself meddle with the data is suitable. That kind of excludes FTP, since in ASCII mode it replaces LF (Unix) line breaks with CR/LF (Windows) line breaks when transferring text from Unix to Windows, and vice versa. Then again, you can use FTP just fine in binary mode. Even the Swiss army knife of network transmissions can easily be used to reliably transmit files from A to B: netcat.

nc or nc.exe is available for both Windows and Unix and is often used in the forensics world, in manual combination with MD5 and/or SHA-1 hashes, to transmit forensic evidence from e.g. a suspect drive to the examiner's workstation. Here the chain of evidence is maintained by recording a hash of the data on the suspect drive, recording a hash of the data on the examiner's workstation after arrival, and recording the date, time and contents of the transmission. Note that it might be vital to have a log of what was transferred when, so that you can prove you sent data that the other party claims never to have received. So, recapping: netcat, FTP, SMB/CIFS shares, HTTP and any other TCP-based file transfer utility could be used. HTTP and FTP could even easily be scanned for viruses/malware in transit. UDP-based file transfer utilities could be used as well, as long as the implementation takes care of integrity itself. Since most likely a short script would be used to generate logs containing the MD5 and SHA-1 hashes on both sides, the time and date of the transfer and the filename, this script could just as easily handle retransfers in case of packet loss.

  • Second, and better: confidentiality, with some bandwidth and CPU constraints

Sorry, this posting bores me by now. So, the recap:

Use SSH (SCP), cryptcat (used in forensics, among other places, for the chain of evidence when confidentiality is an issue), HTTPS, S/MIME or any other encrypted transfer tool, really. Hell, you could even generate an encrypted PGP file or whatever with a script and pipe it through whatever data transfer application you want. (Like ftp in binary mode ;) )

So, overall, what's needed here are two small scripts that do something like this (a Python sketch of both follows the two listings):

On the sending side:

10 compute the SHA-1 / MD5 hash of the file to be transferred (and optionally compress it)
20 send the file
30 receive the SHA-1 / MD5 hash of the transferred file from the receiver
40 compare the hashes
50 if the hashes match, complete the transaction, logging the date, time, filename and hash
60 else goto 20

On the receiving side:

10 receive the file
20 compute the SHA-1 / MD5 hash of the received file
30 send the SHA-1 / MD5 hash back to the sender
40 if the same file is resent goto 10, otherwise complete the transaction, recording the date, time, filename and hash
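As promised, a rough Python sketch of both sides. The framing (an 8-byte length prefix), the port number, the SHA-1-only verification and the log format are my own choices for illustration; a real script would also carry the filename and write the received data to disk.

    import hashlib, socket, struct, time

    PORT = 9000  # illustrative choice

    def recv_exact(conn, n):
        """Read exactly n bytes, or raise if the peer hangs up early."""
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection early")
            buf += chunk
        return buf

    def send_file(host, path, retries=3):
        data = open(path, "rb").read()
        want = hashlib.sha1(data).hexdigest()              # line 10
        for _ in range(retries):                           # line 60: "else goto 20"
            with socket.create_connection((host, PORT)) as s:
                s.sendall(struct.pack(">Q", len(data)) + data)   # line 20
                got = recv_exact(s, 40).decode()           # line 30: 40 hex chars of SHA-1
            if got == want:                                # line 40
                with open("sent.log", "a") as log:         # line 50
                    log.write("%s %s %s\n" % (time.ctime(), path, want))
                return
        raise IOError("hash mismatch after %d attempts" % retries)

    def receive_loop():
        srv = socket.create_server(("", PORT))
        while True:                                        # line 40: ready for a resend
            conn, addr = srv.accept()
            with conn:
                (size,) = struct.unpack(">Q", recv_exact(conn, 8))
                data = recv_exact(conn, size)              # line 10
                digest = hashlib.sha1(data).hexdigest()    # line 20
                conn.sendall(digest.encode())              # line 30: sender compares, may resend
            with open("received.log", "a") as log:
                log.write("%s %s %s\n" % (time.ctime(), addr[0], digest))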

Those are pretty simple scripts, but they assume that you can run a script or some custom software on the receiving end. If you can't, all you can do is sniff the transmission traffic, extract the file from that stream and compare the hashes of both. That should be enough to hold up even in court: here is evidence that the file was sent by the transmitting script/application (hash of the application log), and there is evidence that it was transmitted on the wire, including the receiver's acknowledgements (hash of the sniffer-extracted data).

Why again was that trivial question posted on /.?

Comment That's a different thing (Score 1) 382

The COFEE stick is used for "merely" acquiring digital evidence. See this part of your quote:

for later interpretation by computer experts

The summary describes a tool that will also interpret the evidence found.

What COFEE will do for you is gather volatile information on live Windows systems: running processes, open network connections, system date and time, RAM contents, etc. The disk contents are not acquired, as they will supposedly remain as they are.

In contrast, the tool the summary mentions would not acquire any evidence but instead search through existing data and interpret it, like scanning your hard drive for keywords from a bad-word list or for hashes of known kiddypr0n.
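To make that concrete, here is a hypothetical Python sketch of the core of such an interpreting tool; the digest in it is a placeholder, not an entry from any real hash set:

    # Walk a directory tree and flag files whose SHA-1 digest appears
    # in a set of known-bad hashes (placeholder value below).
    import hashlib, os

    KNOWN_BAD = {"da39a3ee5e6b4b0d3255bfef95601890afd80709"}  # placeholder digest

    def scan(root):
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    digest = hashlib.sha1(open(path, "rb").read()).hexdigest()
                except OSError:
                    continue                 # unreadable file: skip it
                if digest in KNOWN_BAD:
                    yield path

    # usage: for hit in scan("/mnt/evidence"): print(hit)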

There is a big difference there:

If Microsoft's tool is the equivalent of a kit designed to help a cop take a sample of your blood for later testing - since anything illegal in your blood may no longer be there several hours later when a doctor takes another sample - then the tool described in the summary is the equivalent of a device designed to tell the cop on the spot whether there is anything illegal in your blood, without acquiring the blood for later analysis by an expert.

Although this is quite a bad analogy, as the device in it might well be technically feasible. Let's instead consider the following analogy:

Instead of using a camera to take pictures of a suspected crime scene, they want to use a camera-like device that does not acquire evidence from suspected crime scenes at all, but lets a cop look through it at any scene to see whether a crime has happened in the first place.

Imagine a cop on the open street pointing such a camera at you, and you getting arrested because the camera told him that you had somehow, supposedly, committed a crime.

Comment Re:SpinRite (Score 1) 399

Disclaimer: this is a redundant posting, but I wanted to make sure the author of the comment saw my post, which quotes a blog entry by Scott A. Moulton, a forensics and data recovery expert who currently teaches the SANS Security 606: Drive and Data Recovery Forensics course.

Quoted from here:

Spinrite is not data recovery software. I get many questions about why I left Spinrite off my recommendations of recovery software. I specifically leave off Spinrite because, under the strictest terms, it is not data recovery software. Almost every single data recovery package knows, and will warn you, not to write the data back to the original source drive. Data Recovery/Forensics software almost always recovers from a source to a destination. Spinrite does not do that; it refreshes the surface, controls reads to get the maximum amount of data from the sectors, and then puts it back down on the same drive.

I think it does quite a few things very well and it does an excellent job at reporting and reading the SMART info and refreshing the surface of the hard drive. However, I would like to first try to get the data from the drive before scanning it and trying to rebuild sectors. There are many reasons for this, but the most important one being that the drive can die in the process of running Spinrite. It is possible to do more damage to the drive by doing excessive read and writes. There are times that you only get once [sic] good chance at data and if you use a tool that just goes in and surgically removes the data you want BEFORE doing the scan you will be a lot safer.

If I was going to use Spinrite, I would get everything I could off the drive to another destination first and then use Spinrite to try to get anything I could not repair (although I never have to with the tools I use). Another horrific story I have seen with drives sent to me, is that if Spinrite runs successfully, people are under the impression that the drive is repaired and is usable again and continue to use it. Big mistake and it usually dies again shortly. On a Windows Hard Drive I would try NTFSExplorer/FatExplorer first in the hopes of doing a surgical recovery as opposed to spending days rewriting sectors in the hopes that my drive can live through it as Spinrite does. But for $80 it is well worth the attempt if you are going to do nothing else. Good Luck.

Oct 6, 2008 11:26 PM ~ Scott A. Moulton

Also, you can find some very interesting papers/presentations/videos here.

Comment Try this SANS course: Drive and Data Recovery F... (Score 1) 399

SECURITY 606: http://www.sans.org/training/description.php?mid=1237

One thing that nobody seems to have mentioned yet is the freezer trick. If the drive just isn't spinning anymore (and you don't hear the click of death), throw the drive, in a ziplock bag, into the freezer for a couple of hours. Often it will then run long enough to make a bit-for-bit (dd) copy, as others have already mentioned.
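If dd itself isn't at hand, the idea is simple enough to sketch in a few lines of Python, in the spirit of dd's conv=noerror,sync; the device path and block size below are examples, and the point is to read the whole surface once while the drive is still cold:

    # Bit-for-bit copy that zero-fills unreadable blocks instead of aborting.
    def rescue_image(src="/dev/sdb", dst="drive.img", bs=65536):
        with open(src, "rb", buffering=0) as s, open(dst, "wb") as d:
            offset = 0
            while True:
                try:
                    s.seek(offset)
                    block = s.read(bs)
                except OSError:
                    block = b"\x00" * bs   # pad over the bad block, like conv=sync
                if not block:              # clean end of the device
                    break
                d.write(block)
                if len(block) < bs:        # short read at the end of the device
                    break
                offset += bs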

Comment Not Enterprise but Futurama Spaceship Concept (Score 3, Informative) 541

The Enterprise does not move without actually moving, but the Futurama spaceship does.

As far as I can remember (and I read the Enterprise technical manual over 15 years ago), the warp nacelles create a field in which space-time is bent and thus much shorter. This vastly decreases the length of the space surrounding the Enterprise, so it can fly through the shortened space by "normal" means in much less time, creating the possibility of travelling faster than light: light has to travel the "long way", outside of the shortened space, whereas the Enterprise can take "the shortcut" while travelling at nearly light speed, thereby effectively going faster than light.

Why this will never work IRL is left as an exercise for the reader. (Hint: even in a shortened space-time, a mile is still a mile and a second is still a second when measured from within that space.)
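In back-of-the-envelope terms (my own numbers, nothing from the technical manual): if the field contracts the enclosed space by a factor k, an external distance D becomes an internal distance D/k, so

    \[ v_{\mathrm{eff}} = k \, v, \qquad \text{e.g. } k = 10,\ v = 0.9c \;\Rightarrow\; v_{\mathrm{eff}} = 9c . \]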

Now, the Futurama spaceship, in contrast, works by moving the universe around itself. Way cooler, isn't it?
