
Comment The benefits of handling attack. (Score 4, Interesting) 44

I do IT Security for a research university. For the last 10 years, we have attempted to handle all incoming attack. Some gets missed, but we make an attempt. It is good work for the interns/trainees. We document the incident, block the attacking IP for an appropriate amount of time, and notify the remote abuse contact. We have found that handling attack provides significant benefits:
  • Our security team remains functional. Ignoring incidents creates bad habits in the security team.
  • It creates institutional memory of how we are attacked. We need to know how we are attacked, so our defenses are anchored in reality.
  • It greatly reduces the amount of attack. The number of attacks drops off sharply a couple of weeks after we begin religiously reporting attacking IPs. We have tested this effect several times. When we stop reporting, it ramps up. When we start, it drops to about 1/10th its prior levels.
  • It notifies the owner/ISP of the remote computer that they are attacking. Usually they are also innocent victims.
  • In the last few years, the percentage of remote resolutions has been climbing. Currently, about 1/2 of the reported non-Chinese incidents appear to result in remote resolution.

We utilize some automation to handle the load. We have a few honey-pots. We also monitor our dark IPs. We learned to distinguish DoS backscatter and the various types of frequently spoofed attacks. We worried that an enterprising hacker might attempt to spoof an important Internet resource and cause us to auto-immune ourselves to death. So we whitelisted a bunch of critical external IPs and looked for critical spoofing. In the last 10 years, the amount of spoofed attack has dropped drastically. We recently found an incident where an attacker spoofed a critical Google resource and tried to get us to block it. That is the only time we have detected that kind of spoofed attack.
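
For the curious, the heart of the dark-IP triage is not complicated. Here is a minimal sketch, assuming hypothetical networks and helper names (our production code does more, and feeds the incident workflow described above):

```python
# Minimal sketch of dark-IP triage. The networks, whitelist, and flag
# strings are illustrative assumptions, not our production values.
import ipaddress

WHITELIST = {ipaddress.ip_address("192.0.2.53")}       # critical external IPs
DARK_NETS = [ipaddress.ip_network("198.51.100.0/24")]  # unused (dark) space

def classify(src, dst, tcp_flags):
    """Classify one packet seen heading into our address space."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src in WHITELIST:
        # Possible spoofing of a critical resource; a human must look.
        return "whitelisted-investigate"
    if not any(dst in net for net in DARK_NETS):
        return "not-dark"
    # Unsolicited SYN-ACK/RST on dark space is classic DoS backscatter:
    # somebody spoofed our addresses while attacking a third party.
    if tcp_flags in ("SA", "R", "RA"):
        return "backscatter"
    if tcp_flags == "S":
        return "scan"  # document, block for a while, notify abuse contact
    return "other"

print(classify("203.0.113.7", "198.51.100.9", "S"))  # -> scan
```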

We have found that most attackers (even governments) don't like to have their attack methods documented and publicized. We have found that some ISPs turn evil and knowingly host attack, but they are quickly and easily blocked until they go broke or come to their senses.

We have found many institutional scans. The best of these groups provide timely assistance to those who are making mistakes. In our view, the best groups include the ShadowServer Foundation, EFF, and the Chaos Computer Club. The worst of these groups simply feed on the mistakes of others, provide no assistance, and actually have motivation to preserve or enhance the problems of others.

More info is available here:

Comment Re:logs? (Score 4, Informative) 104

Actually, we got the same response when we offered to send the actual logs.

A very similar thing happened to USU. We received a summons from Homeland/ICE to produce 3 months of records (plus identifying info) for an IP that was one of our TOR exit nodes.

I eventually managed to contact the Special Agent in charge of the investigation. He turned out to be a reasonable person. I explained that the requested info was for an extremely active TOR exit node. I said that we had extracted and filtered the requested data: it was 90 4-gig files (360 gigs of log files in total), or about 3.2 billion log entries. I asked him how he wanted us to send the info. He replied that all he needed to know was that it was a TOR exit node. I then asked again if he wanted the data. He said something like: "Oh God no! Somebody would have to examine it. It won't tell us anything. It would greatly increase our expenditures. Thanks anyway."

And that was the end of it.

YMMV. All Rights Reserved. Not Available In All States. It helps if your institution has its own Police, Lawyers, and (an extremely active and effective) department of Journalism. And, it doesn't hurt if it is cheaper (and easier) for you to respond to the summons/subpoena than it is for the Authority to issue it and deal with the result.

Comment Re:Why would they want to deal with that? (Score 2) 37

TOR exit nodes are nothing but trouble.

I think this is an issue where some are more equal than others.

If an individual runs a TOR exit node, they can be easily intimidated and hassled. There is very little cost to law enforcement for engaging in the intimidation.

At the other end of the spectrum, a large public institution is not susceptible to this kind of intimidation. And, there is a very large cost if law enforcement attempts the intimidation. For example, at the institution I support, if the local cops or low level FBI attempted this kind of intimidation, they would be met by the institution's police force, the institution's lawyers and the institution's journalists. Everything would be recorded in multiple ways. Heck, we even have a state assistant DA permanently assigned to USU. He participated in the process that created the policy and procedures approving the TOR infrastructure.

At this point, if a major university's CS group is not investigating TOR, they should probably give back the funding and become a trade tech. The issues surrounding TOR are critical to our society. A university should not turn its back to these issues.

Given all that, a law enforcement attempt at intimidation would be ineffective. And, it would likely result in the kind of bad publicity that can cause law enforcement to lose budget.

However, you have a good point: libraries are widely distributed in the gap between your unfortunate friend and USU. The smaller ones would be easily intimidated. The larger ones, not so much.

Comment Re:Balance TOR's costs against the benefits. (Score 2) 37

Thanks DamonHD,

I am interested to understand what level of inspection you could and did perform to decide "abusiveness". Especially for the secure traffic.



We did traffic analysis using NetFlow data from a few days of traffic on a preliminary TOR exit node. In this situation, traffic analysis is very powerful. We did not try to determine who was talking. But we have spent years deciphering the nature of connections using flow analysis, and we are very successful at determining the nature of the various connections. Encryption does not change the underlying size, flow and pace of the connection. The TOR structure does little to obscure the ultimate timing of request and response. It does nothing to conceal the size of the requests and responses leaving the exit node. We can easily distinguish:

  • Password guessing.
  • Port scanning.
  • Automated vulnerability assessment tools.
  • Automated attack tools.
  • Human driven web browsing.

When we tallied all the traffic for browsing, almost all of it was human driven. When we tallied all the traffic destined for an SSH or RDP port, over 90% of it was abusive.
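
For a feel of how little the flow heuristics need, here is a toy sketch; the record fields and thresholds are illustrative assumptions, not our tuned values:

```python
# Toy flow-classification heuristics in the spirit described above.
# Thresholds and record fields are illustrative assumptions.
def classify_sources(flows):
    """flows: dicts with keys src, dport, packets, bytes, duration."""
    by_src = {}
    for f in flows:
        by_src.setdefault(f["src"], []).append(f)

    verdicts = {}
    for src, fs in by_src.items():
        dports = {f["dport"] for f in fs}
        tiny = sum(1 for f in fs if f["packets"] <= 3)
        sizes = [f["bytes"] for f in fs]
        if len(dports) > 100 and tiny > 0.9 * len(fs):
            verdicts[src] = "port scanning"      # many ports, 1-3 packets each
        elif dports <= {22, 3389} and len(fs) > 50 \
                and all(f["duration"] < 5 for f in fs):
            verdicts[src] = "password guessing"  # short, repetitive SSH/RDP
        elif {80, 443} & dports and len(set(sizes)) > len(sizes) // 2:
            verdicts[src] = "human browsing"     # irregular sizes and pacing
        else:
            verdicts[src] = "needs a human"
    return verdicts
```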

Comment Re:Balance TOR's costs against the benefits. (Score 1) 37

Thanks Westlake,

I would replace the word "cost" with "risk."

As in exposure to a hostile legal, political and social environment.

We had risk in there earlier. But we later changed it to cost. USU is weird. I suspect all universities are weird. USU is a top tier research university. USU is not run by accountants and MBAs. It is run by researchers and teachers. We are shielded from most legal issues. We are constrained by funding. If we can fund it, we can invest in long term experiments. This is one of them.

I don't see many public libraries having the resources to implement your plan.

This is an extremely significant point. In order to understand the TOR issues and implement TOR properly, an institution has to have a significant investment in IT. That is not a problem for universities and large metropolitan libraries. But most smaller libraries will not have the expertise to even understand the issues, much less mitigate them.

When the shit hits the fan, "thinking it over" and "hoping for the best" is no longer an option. In the end, you have to make a decision or one will be made for you.

True. We may need to clarify that abuse response message to make the following points more clear:

  • We have made our decision.
  • Here is our rationale.
  • When things change, we may change.

I expect we will change our decision to implement TOR sometime in the next 5 years for one of the following reasons:

  • TOR is replaced by something better. (Quite likely.)
  • TOR is infiltrated by the NSA and discredited. (Somewhat likely.)
  • The majority (greater than 80%) of TOR browsing traffic becomes abusive. (Somewhat likely.)
  • USU decides to get serious about privacy and implements an interior solution that uses NAT and non-logging proxies to obscure from external inspection who is doing what. (Somewhat likely.)

Comment Balance TOR's costs against the benefits. (Score 5, Interesting) 37

When we set up TOR infrastructure at USU, we looked at the costs and benefits.

There are definite costs to running TOR infrastructure. You have to be aware of them. Some of the costs can be mitigated, but some can't. In the end, you have to be able to show that the benefits outweigh the costs.

First, we examined the benefit and made a clear statement of it:

USU has many researchers and students who deal in sensitive subjects such as Climate Change, Reproductive Issues, Political Systems, Animal Research, etc. These students and researchers frequently need privacy and security to advance the goals of USU.

Then we discussed the various costs and methods of mitigating the costs. Afterwards, we decided that the costs could be made acceptable, if we were careful.

Our cost mitigation strategy had several parts:

  1) We arranged for the TOR infrastructure to have an academic sponsor. The USU CS department agreed to sponsor the TOR project. This gave us an existing structure for providing IT support. And, frankly, TOR is easier to support than some of the other academic projects.
  2) Most of the direct costs of creating and administering the TOR infrastructure are borne by the USU CS department. It really helps that their admin is diligent and responsible. It has been a joy to work with him.
  3) We have tried to put all the TOR infrastructure on a small CIDR. If people need to block TOR, we try to make it easy for them to block it without affecting other things. That said, if I had to do it again, I would keep the TOR entry nodes and intermediate relays on a small USU CIDR, but I would ask USU's ISP (UEN) for a small /28, hook it up external to USU's normal security perimeter, and put the TOR exit nodes on that external CIDR. This makes it easier to set routing and firewall policy. It also still enables entering the TOR switching network internal to USU.
  4) We examined the TOR traffic and tried to minimize the abusive bits. In our case, we found that most of the TOR web browsing looked non-abusive. However, the majority of the SSH and RDP traffic looked abusive. So, we asked the TOR admin to limit those protocols (see the torrc sketch after this list).
  5) We clearly documented our TOR setup and use. The TOR nodes have meaningful hostnames. The systems have well-defined roles and responsibilities. We have strongly discouraged the TOR admin from using those systems for anything else.
  6) We created processes for dealing with the abuse reports.
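
For item 4, the knob Tor itself provides is the exit policy in the relay's torrc. A sketch of the sort of policy we asked for (illustrative; not necessarily the exact configuration on our nodes):

```
# torrc sketch: reject the protocols that looked mostly abusive,
# leave everything else alone.
ExitPolicy reject *:22      # SSH
ExitPolicy reject *:3389    # RDP
ExitPolicy accept *:*
```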

Here is our standard response to an abuse report against USU's TOR infrastructure:

The activity that you have reported is being emitted by a TOR exit node:

$ host [exit node IP]
[reversed IP].in-addr.arpa domain name pointer tor-exit-node.cs.usu.edu.

$ host [exit node IP]
[reversed IP].in-addr.arpa domain name pointer tor-exit-node-2.cs.usu.edu.

This TOR node is a project of USU's CS department. USU has many researchers and students who deal in sensitive subjects such as Climate Change, Reproductive Issues, Political Systems, Animal Research, etc. These students and researchers frequently need privacy and security to advance the goals of USU.

Almost all TOR traffic is generated by innocent people who are attempting to escape the shadow of a totalitarian government. But, unfortunately, sometimes criminals attempt to use TOR to attack others.

We are in discussion with our TOR admins to try to find ways to limit the attack activity. Of course, this rapidly becomes a sticky issue. If we start inspecting and censoring some of the TOR activity, then we have less of a defense when we get pressure to inspect and block the rest. And, even starting down this path may make us legally liable for ALL the TOR traffic. Our best action may be to keep our hands off and observe strict network neutrality.

We are still pondering our options.

Please accept our apologies in the meantime.

USU IT Security

Comment Orion is the best counter for large incoming mass. (Score 3, Interesting) 272

If you actually want to effectively counter the "Dinosaur Killer" scenario, the best answer is early detection and a large "Orion" ship. See: https://en.wikipedia.org/wiki/...

We could have built a large Orion propulsion ship anytime in the last 40 years. It would probably cost less than an aircraft carrier. A large Orion propulsion ship could get almost anywhere in the inner solar system in a few weeks. And the propulsion system will work just fine to redirect another large mass. Yes, there will be a bunch of fallout damage from the initial take-off, but we can decide where to place it. And the fallout damage from Orion's propulsion is tiny compared to the damage from an asteroid strike.

I have always hoped that there was a secret plan to convert our offensive arsenal into Orion propulsion if the need occurred.

Comment A bit obtuse, but not bad. (Score 2) 55

As security definitions go, "Security is the set of activities that reduce the likelihood of a set of adversaries successfully frustrating the goals of a set of users." is not bad. It is a bit obtuse. It lends itself to Venn diagrams and PowerPoint. It is also weakened by its fixation on adversaries. Adversaries are nice if you can blame them, but usually, you are your own worst enemy.

The worst security definition that I have seen is the one currently used by the US Security communities. Geer stated it as: "...the absence of unmitigatable surprise." This definition is horrible. It offers you no guidance on prioritization or limits. This definition says you are insecure until you have achieved omniscience and omnipotence.

The best definition of security that I have found is: "Security is a MEANINGFUL assurance that YOUR most important goals are being accomplished." This is easily understood by everybody and it guides you to effective action. Using this definition you are guided to create and maintain the potential for success. The other definitions ultimately force you to focus your efforts on less important objectives.

Comment Only 3K PPS of attack? I thought it would be more. (Score 4, Interesting) 58

We see 3k PPS of attack and we probably have 1/8th of their address space. Remember, you need to scale by address space. Utah's state network is one of 3 early Utah experiments in municipal broadband. The other 2 are UEN and Utopia. When it was set up, IP addresses were allocated in /8, /16 and /24 chunks. They probably got a /16 (65K addresses) for each major department. In total, the Utah state government network probably has at least a million public IP addresses.

If you have a million public IPs, you catch about 3 million attacks every time somebody messes around with ZMap or Masscan. They always try it at least 3 times. That is 1% of that scary 300 million per day total. And there are a lot of people in the world playing with ZMap.

I do IT Security for Utah State University. We are at the north end of the state. We see about 3K PPS of attack all the time. We have 128K of public IP address space. Most days, we are at about 300K PPS at the border, so 3K PPS of attack is about 1% of the total. Attack making up about 1% of incoming packets has been normal for us for the last few years. It works out to about 1 attack packet per IP address every 40 seconds or so. Of course, almost all of them are rejected at the border. Most of my peers are seeing the same attack levels. But, all my peers are at universities.

However, in the last couple of years the attack has shifted. Now, about 1/2 of our detected attack is sponsored or condoned by the Chinese government. The rest is evenly divided between other governments and organized crime. We assume that this shift is the inevitable consequence of the current cyberwar. The shift has also made it easier to do attribution. Attack by civil servants is easier to identify: it is predictable, it follows patterns, and it has preferential quality of service. When you report abuse from a non-government attacker, it shifts methods, or stops, or moves to another target. When you report abuse to a government attacker, it increases. Sometimes it improves.

The shift in attack may be local to Utah and due to the NSA facility, but I think it is more likely that we are all screwed.

Comment Don't know about hackers, but China is helpful.. (Score 1) 69

I don't know about hackers, but lately China has done more to help me secure my university than the NSA, FBI, and Homeland Security combined.

I do network and computer security for a university. In the last couple years we have received a couple alerts from the FBI. The info was fairly old and limited in scope. And, they didn't want us to share the info with those who really needed to have it.

In the same period, the Chinese government has instituted a program of rigorous scanning and vulnerability assessment against my university. If I pay close attention, I discover all kinds of useful information. They have shown me 0-day exploits. They have taught me devious manipulations. They have even taught me an ingenious method of detecting firewall failure.

The Chinese give me daily updates on the latest hacking techniques. They never complain if I share the info. And they don't waste my time with meaningless paperwork. If I wasn't getting it for free, I would be willing to pay for this service. I don't understand why my government can't be as helpful.

Comment Depends on your attacker. (Score 1) 467

My experience may not be applicable to you. I do IT Security for a university. We encounter a wide variety of attackers from script-kiddy to aggressive hostile government.

When our attackers desire to remain hidden, we usually cannot detect and remove them using any common tool. The techniques for remaining in hidden control of systems are straightforward, effective and available to any attacker. We can detect all kinds of stuff by carefully inspecting network activity, but learning to do it takes years. And, analyzing 1 machine's traffic is slower than real-time.

For example, a while ago one of my coworkers managed to crack the C&C for a major fake-antivirus group. For 2 months we grabbed the rootkits as they went by. Code on compromised machines was updated daily. VirusTotal pronounced it all clean. Usually, the victims had no clue. None of the virus or malware detectors/removers would regain control of a compromised system. Sometimes the utilities would claim to have done something. It was never complete or successful. On the other hand, if we isolated a compromised machine from the C&C for 3 weeks, some of the utilities would start to be effective. At 6 weeks, almost all of them were effective. Of course, this fake antivirus group was indiscriminate and had a huge footprint.

We still use Microsoft Security Essentials or EndPoint Protection. It almost never prevents compromise, but in some circumstances it will let us know that we have been had. Some attackers get what they want immediately and don't try to hide. Others break discipline after a few days or weeks. Then there are the ones that get what they want and sell you to less capable attackers. Finally, if the user/machine is vulnerable to attack, then the machine eventually gets infested with multiple attackers. Once multiple attackers start interfering with each other, something always gets dropped.

We always recommend a "change passwords/backup/wipe/rebuild/restore" when we discover compromise. Even then, sometimes an attacker regains control by hiding hostile code in user files.

The preventative measures that seem to be most effective for us are:

  1) Some form of Adblock. The primary attack vector for most of our people is hostile browser ads.
  2) Limiting the execution of unwanted browser code. We recommend Chrome/Click-To-Run for most users. Motivated users can get better protection with Firefox/NoScript.
  3) Working with our users to improve our defenses. See: https://www.youtube.com/playli...

Comment Inexplicable gaps in Crypto products. (Score 1) 421

In my completely uninformed opinion, there seem to be inexplicable and congenital faults in IT's use of cryptography.

A few crypto products need efficiency and performance. But many don't. Many existing products are optimized for efficiency and performance, even when these goals are contrary to the stated goals of the product. Frequently, crypto solutions unnecessarily limit the size of keys, extend the lifetime of keys, and limit the number of available keys. In many cases, all three of these choices are false savings.

We rarely use symmetric crypto, even though it is frequently simpler and more robust. Public Key is almost always preferred, even when it is easy to distribute keys.

Reliable, trustworthy sources of truly random numbers seem to be very useful, inexpensive, and straightforward to create. See: http://en.wikipedia.org/wiki/C...

If we are interested in secure communications, it should be normal and expected that we would pick up several hardware random number generators. We should have multiple simple, robust, trustworthy tools to generate symmetric keys. We should have multiple tools to utilize simple, robust, trustworthy symmetric crypto.
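
As an illustration of how small such a tool can be, here is a minimal sketch; it assumes the OS kernel is already mixing its entropy sources (including any hardware RNG) into the pool that the secrets module draws from:

```python
# Minimal symmetric key generator. Assumes the kernel mixes hardware
# RNG output into its entropy pool; a standalone tool could instead
# read a device like /dev/hwrng and mix several sources itself.
import secrets

def make_key(bits: int = 256) -> bytes:
    """Return a fresh random symmetric key."""
    return secrets.token_bytes(bits // 8)

print(make_key().hex())  # distribute out-of-band, e.g. in person
```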

Instead, we seem to focus on always using a single complex public key solution even when it is not appropriate.

In my ignorance, I have been trying to map out a simple, robust tool for system administration, that makes use of symmetric crypto. See: https://it.wiki.usu.edu/201501...

I would really like to learn that I have been wasting my time.

Comment Re:Anyone can intercept SSH some of the time (Score 1) 278

This guide doesn't recommend disabling passwords. That's a huge omission.

Thanks. I figured that was obvious enough to not need explanation. So I decided it was out of scope. But, I am wrong all the time.

I am assuming you feel that we should teach our admins to test all their SSH passwords against standard attack dictionaries and disable/notify any that fail. This is a good idea. I will try to add it tomorrow.
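
To be explicit, the audit I have in mind is an ordinary offline dictionary run against our own credential hashes, something like the following (paths and wordlist are illustrative; only run it where you are authorized):

```
unshadow /etc/passwd /etc/shadow > audit.db
john --wordlist=rockyou.txt audit.db
john --show audit.db    # disable and notify any account listed here
```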

Are there other conditions that are detectable by SSH admins that require disabling passwords?

Comment Re:Anyone can intercept SSH some of the time (Score 1) 278

You should have user honeypots. Once in a while, present a fake certificate. If the user ignores the wrong fingerprint and types in the correct password, reset the account password.

That is an interesting idea. It is easy to MITM our SSH client connections. But, this control comes with a large expense. Because it is easy for our clients to see Security's actions, and it is hard for them to see the actions of attackers, they will conclude that Security is being evil for no good reason. This will greatly reduce our effectiveness by isolating Security from our community. Other controls may mitigate this problem with less expense.

For example, we are currently pushing our people to adopt widespread 2-factor authentication. Our people are ready to accept 2-factor. They understand its value. They are familiar with its use. We have multiple cheap 2-factor solutions. 2-factor somewhat mitigates MITM and also helps with other issues.

That said, I think we really need a simpler form of SSH for trusted point-to-point communications. It should exclusively use pre-distributed one-time pads for its authentication and encryption. We can now generate and distribute 100+ Gigabyte files of true-random data. This data can be used to authenticate. It can be used to generate secure symmetric encryption keys. We can handle millions of secure connections before we need to redistribute pads again.

Since I am not a cryptographer, this idea has many problems. But I believe that securely using these huge one-time pads could be as easy as:

  • Ask Schneier for a good, symmetric encryption algorithm :)
  • Select a key-size that is twice as long as Schneier thinks we need :) So, if Schneier thinks 512 bits are fine, we use 1024-bit keys. This is only 128 bytes.
  • Generate about 128 Gigabytes of random data from a truly random noise source. Use 64 Gigs of it for connection keys. That will allow about 512 million connections. This may be excessive and need to be adjusted.
  • Use the rest of the random data 2 Gigs at a time. This gives you 32 records. The server always gets the first copy/install of the file. The server always uses the first record. Each subsequent client copy/install uses the data in its record for install identification and session identification. This may not be enough records. It may need to be adjusted. But, it probably should not increase to hundreds. If there are too many copies, it is impossible to protect confidentiality.
  • Throw away the first key record. You can spare some. Use that space to write down the GMT time-stamp when this file was created and the number of times the file has been copied.
  • Use the next key record as the FileID for this file.
  • The server only uses 1 pad file at a time.
  • When the server starts up, it skips down the number of keys indicated by its current key index or the number of minutes since pad creation, whichever is greater. If the server detects that GMT time is running backwards, it should terminate with a descriptive error message.
  • Every minute, it switches to the next key in the list. Don't worry, this will only use up 10 million of your possible keys in 20 years. The server should not attempt to respond to more than one connection attempt per second.
  • Whenever the server has authenticated a successful connection, it switches to the next key in the list.
  • When something pokes its port, the server assembles a message that says something like: Number of non-padding bytes in message. Message Type 0. Server Message #1. I have received 0 of your messages. I am copy 1 of the file with the ID of #FileID. My Copy ID is (the first field in my Copy ID Record). The local time is (current time). The number of times I have incremented keys is: (CurrentKeyIndex). The number of successful connections is (ConnectionNumber). The authentication number for this connection is (use ConnectionNumber to index into the Copy ID Record and retrieve the value). Optional padding. End of Server Message #1.
  • Then the server encrypts all that info using the current encryption key and sends it out to the client. It should all fit in a standard ethernet/IP/TCP packet. All messages must be padded to the same length. A good starting message length is probably 1400 bytes.
  • The client uses the current time as a guess at a starting index into the key data. It should probably start 1 before to allow for sloppy timekeeping. It sequentially tries each key until it manages to decrypt the server's message. It should probably give up and fail with an error if it tries more than 20 keys. This number may need adjusting. When it fails, the client drops the connection without saying anything.
  • If the client decodes the server message, it then checks its own expected and calculated information against the info provided by the server. If it doesn't check out, it drops the connection and sends an urgent error message that somebody is attempting to mimic the server using a replay attack. If it checks out, it uses the key to encrypt its response. It also updates its CurrentKeyIndex.
  • The response of the client looks like: Number of non-padding bytes in message. Message Type 1. The latest message I have decoded from you is (LastServerMessageNumber). This is my Message #1. Nice to meet you. I am copy (whatever) of the file with the ID of #FileID. My Copy ID is (the first field in my Copy ID Record). My local time is (timestamp). I have now updated and crossed off (CurrentKeyIndex) number of keys. My number of successful connections is (ConnectionNumber). The authentication number for this connection is (use ConnectionNumber to index into the Copy ID Record and retrieve the value). Optional padding. End of Client Message #1.
  • Then the server checks the client's supplied info for inconsistencies. If it fails, the server crosses off the key, drops the connection, and sends an urgent error message that somebody is attempting to mimic the client via a replay attack. If it checks out, the server sends an encrypted acknowledgement and updates its status information on that copy of the file.
  • Once the client receives the acknowledgement, it updates its info on the server. Then both sides continue the encrypted conversation. The conversation looks like a sequence of encrypted messages.
  • Most messages have the same format: Number of non-padding bytes in message. Message Type 2. Timestamp. From (Copy #) To (Copy #). Your latest message was (whatever). This is my message (whatever). [MESSAGE CONTENTS] Optional padding. End of message (whatever).
  • You will also need some utility messages. A NAK may look like: Number of non-padding bytes in message. Message Type 3. Timestamp. From (Copy #) To (Copy #) Please re-transmit everything after message (whatever). This is my message (whatever). Optional padding. End of message (whatever).
  • A FIN may look like: Number of non-padding bytes in message. Message Type 4. Timestamp. From (Copy #) To (Copy #) Your latest message was (whatever). Time to say goodbye. Optional padding. End of message (whatever).
  • A Change Key may look like: Number of non-padding bytes in message. Message Type 5. Timestamp. From (Copy #) To (Copy #) Your latest message was (whatever). I'm feeling paranoid. Lets change to the next key. Optional padding. End of message (whatever).
  • An Oh Shit may look like: Number of non-padding bytes in message. Message Type 6. Timestamp. From (Copy #) To (Copy #) Somebody just showed up with an NSL. I'm wiping my key-files/one-time pads. You should wipe this key-file/pad. Send lawyers, money, guns. So long and thanks for all the fish. Optional padding. End of message (whatever).

As you can see, this system is very simple, crude and inefficient. We are just re-implementing the old concepts of secure phones using 1-time pads. None of this is new. We can use simple logic because we don't want or need complexity. It allows for 1 server and multiple clients. You have to redo this logic to have more than one server per pad/keyfile. It only solves one problem, but it is so simple that it should eliminate almost all opportunity for logic and programming flaws. Remember, complexity is the enemy. We don't care about efficiency. We want security. The NSA has used feature creep to corrupt many forms of existing crypto.

This proposal is connection oriented, but it can run on TCP or UDP or ICMP. You probably want to use TCP to reduce spoofing, DoS opportunities and sort out some of the low level attacks. If you do, you have to remember that you can't trust TCP to eliminate spoofing or verify message delivery.
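
To make the framing concrete, here is a toy sketch of the pad-indexed key schedule and fixed-length message sealing described above. AES-256-GCM stands in for "whatever Schneier recommends"; every size, name, and format here is an illustrative assumption, not a vetted protocol:

```python
# Toy sketch of the pad-indexed scheme above. Sizes, names, and framing
# are illustrative assumptions; this is not a vetted protocol.
import os, struct, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_SIZE = 32     # 256-bit keys (the text argues for going even larger)
MSG_LEN = 1400    # every message padded to the same length

class Pad:
    """A big file of true-random data; key N is bytes N*KEY_SIZE onward."""
    def __init__(self, path, created_ts):
        self.f = open(path, "rb")
        self.created = created_ts

    def key(self, index):
        self.f.seek(index * KEY_SIZE)
        return self.f.read(KEY_SIZE)

def server_key_index(pad, stored_index):
    # Skip down by the stored key index or the minutes since pad
    # creation, whichever is greater, as the text specifies.
    minutes = int((time.time() - pad.created) // 60)
    return max(stored_index, minutes)

def seal(key, plaintext):
    """Encrypt one fixed-length message under the current pad key."""
    body = struct.pack(">H", len(plaintext)) + plaintext
    body += b"\x00" * (MSG_LEN - len(body))   # pad to constant length
    nonce = os.urandom(12)                    # never reused under one key
    return nonce + AESGCM(key).encrypt(nonce, body, None)

def open_msg(key, blob):
    """Decrypt and strip padding; raises on tampering or wrong key."""
    body = AESGCM(key).decrypt(blob[:12], blob[12:], None)
    (n,) = struct.unpack(">H", body[:2])
    return body[2:2 + n]
```

The client's key search is then just open_msg() tried against a window of roughly 20 candidate keys, dropping the connection silently if none of them decrypts.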

Comment Re:Anyone can intercept SSH some of the time (Score 1) 278

Protecting SSH communications for your organization is fairly straightforward if you do some work. You need to use multiple layers. Here is our guide to protecting SSH:


We try to use multiple overlapping security layers to protect SSH (a configuration sketch follows the list):

  • If possible, use firewalls to limit the vulnerable scope of SSH to a few trusted hosts.
  • Configure firewalls to limit credential guessing by rate-limiting connections to the SSH port.
  • If possible, treat the SSH port as a shared secret. Then, only interesting, targeted attacks find the SSH server. In many situations, this gives you very real protection, based on the very real increase in cost for an attacker to find and attack an SSH server on an alternate, properly obscured port.
  • The SSH server should not allow known usernames, including root. The attacker must find a username.
  • Motivated admins should use 2-factor authentication to access their critical SSH servers.
  • Admins are trained to create good passwords for their usernames.
  • SSH users should verify the identity of their systems when they first connect.
  • System admins must regularly review the activity of their SSH servers.
  • Security monitors all SSH connections, including ones on non-standard ports. We follow up on interesting connections.
  • We have SSH honeypots that help us track, understand and respond to SSH attack. These honeypots allow us to track which credentials are being attacked. They give us advance warning when an institutional credential is attacked. And, analyzing the use of unique credential lists gives us insight into our attackers.
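
Here is a sketch of how the first few layers might look in sshd_config and iptables; the port, trusted network, and limits are illustrative assumptions, not our actual configuration:

```
# /etc/ssh/sshd_config (fragments)
Port 2222                # the port as a shared secret, not 22
PermitRootLogin no       # no well-known usernames; attackers must guess

# iptables: limit scope to trusted hosts, rate-limit new connections
iptables -A INPUT -p tcp --dport 2222 ! -s 192.0.2.0/24 -j DROP
iptables -A INPUT -p tcp --dport 2222 -m state --state NEW \
         -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 2222 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```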

Much of this work can be automated. The rest is excellent training material for new security recruits and interns.

Looking back, the main change I should have made to improve our SSH protections would be to default block incoming TCP/22 at the border years ago. Then, only allow it for groups that can show they use it to provide services to a large community. Anybody using SSH for administration can change the SSH port.

"Consider a spherical bear, in simple harmonic motion..." -- Professor in the UCB physics department