In the old days, you could attack one thing. You could defend one thing. But that doesn't map well to the internet. Now we all talk to each other, and we all use the same methods of defense. When one actor attacks another, the attack is exposed, analyzed, and re-used. So when somebody attacks, they increase the cost of defense for everybody. And when somebody comes up with an improved defense, we all learn how to increase the cost of attack for everybody.
For over a decade, several branches of the US government have focused almost all their energy on attacking others across the internet. The result is an internet where compromise and breach are daily events. Somehow, our protectors don't see that they are crafting the tools of our demise and handing them to our enemies. If we are honest, we are more to blame for the great compromise at OPM than our attackers. If we had spent the last decade creating and encouraging defense, breaches would be difficult and rare.
Now, our governments are blindly following the tradition of attack. They wish to attack the protocols we use to determine identity and create security. They don't see or care that everybody else will do likewise. They don't see the great devastation that will follow.
We use some automation to handle the load. We have a few honey-pots, and we monitor our dark IPs. We learned to distinguish DoS backscatter and the various types of frequently spoofed attacks. We worried that an enterprising hacker would attempt to spoof an important Internet resource and cause us to auto-immune ourselves to death, so we whitelisted a bunch of critical external IPs and watched for critical spoofing. In the last 10 years the amount of spoofed attack has dropped drastically. We recently found an incident where an attacker spoofed a critical Google resource and tried to get us to block it. That is the only time we have detected that kind of spoofed attack.
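The dark-IP reasoning above can be sketched in a few lines. This is a hypothetical illustration, not our actual tooling: the record format, flag strings, and rules are assumptions, but the core idea is real, since packets arriving at unused address space are never legitimate, and their TCP flags hint at why they came.

```python
# Illustrative classifier for packets arriving at unused ("dark") IPs.
# Packets to dark space are never legitimate; TCP flags hint at why they came.
# The record format and rules here are hypothetical, not actual USU tooling.

def classify_dark_packet(flags, src_port):
    """Classify a TCP packet seen on a dark IP by its flag combination."""
    if flags in ("SA", "RA", "R"):
        # SYN-ACK / RST replies mean someone spoofed one of our dark IPs
        # as the source of a flood: classic DoS backscatter.
        return "backscatter"
    if flags == "S":
        # A bare SYN to dark space is almost always a scanner probing.
        return "scan"
    return "other"

packets = [
    ("SA", 80),    # reply from a flooded web server
    ("S", 51234),  # scanner probe
    ("R", 443),    # reset from a spoofed-source victim
]
counts = {}
for flags, port in packets:
    kind = classify_dark_packet(flags, port)
    counts[kind] = counts.get(kind, 0) + 1

print(counts)  # {'backscatter': 2, 'scan': 1}
```

Real monitoring works from sniffed packets or flow records rather than tuples, but the flag-based split between backscatter and scans is the same.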
We have found that most attackers (even governments) don't like to have their attack methods documented and publicized. We have found that some ISPs turn evil and knowingly host attacks, but they are quickly and easily blocked until they go broke or come to their senses.
We have found many institutional scans. The best of these groups provide timely assistance to those who are making mistakes. In our view, the best groups include the ShadowServer Foundation, EFF, and the Chaos Computer Club. The worst of these groups simply feed on the mistakes of others: they provide no assistance, and they actually have motivation to preserve or enhance the problems of others.
More info is available here:
A very similar thing happened to USU. We received a summons from Homeland/ICE to produce 3 months of records (plus identifying info) for an IP that was one of our TOR exit nodes.
I eventually managed to contact the Special Agent in charge of the investigation. He turned out to be a reasonable person. I explained that the requested info was for an extremely active TOR exit node. I said that we had extracted and filtered the requested data: it was ninety 4-gigabyte files (360 gigabytes of log files in total), or about 3.2 billion log entries. I asked him how he wanted us to send the info. He replied that all he needed to know was that it was a TOR exit node. I then asked again if he wanted the data. He said something like: "Oh God no! Somebody would have to examine it. It won't tell us anything. It would greatly increase our expenditures. Thanks anyway."
And that was the end of it.
YMMV. All Rights Reserved. Not Available In All States. It helps if your institution has its own Police, Lawyers, and (an extremely active and effective) department of Journalism. And it doesn't hurt if it is cheaper (and easier) for you to respond to the summons/subpoena than it is for the Authority to issue it and deal with the result.
TOR exit nodes are nothing but trouble.
I think this is an issue where some are more equal than others.
If an individual runs a TOR exit node, they can be easily intimidated and hassled. There is very little cost to law enforcement for engaging in the intimidation.
At the other end of the spectrum, a large public institution is not susceptible to this kind of intimidation. And, there is a very large cost if law enforcement attempts the intimidation. For example, at the institution I support, if the local cops or low level FBI attempted this kind of intimidation, they would be met by the institution's police force, the institution's lawyers and the institution's journalists. Everything would be recorded in multiple ways. Heck, we even have a state assistant DA permanently assigned to USU. He participated in the process that created the policy and procedures approving the TOR infrastructure.
At this point, if a major university's CS group is not investigating TOR, they should probably give back the funding and become a trade tech. The issues surrounding TOR are critical to our society. A university should not turn its back on these issues.
Given all that, a law enforcement attempt at intimidation would be ineffective. And, it would likely result in the kind of bad publicity that can cause law enforcement to lose budget.
However, you have a good point: libraries are widely distributed in the gap between your unfortunate friend and USU. The smaller ones would be easily intimidated. The larger ones, not so much.
I am interested to understand what level of inspection you could and did perform to decide "abusiveness". Especially for the secure traffic.
We did traffic analysis using NetFlow information from a few days of traffic on a preliminary TOR exit node. In this situation, traffic analysis is very powerful. We did not try to determine who was talking. But we have spent years deciphering the nature of connections using flow analysis, and we are very successful at determining the nature of the various connections. Encryption does not change the underlying size, flow, and pace of a connection. The TOR structure does little to obscure the ultimate timing of request and response, and it does nothing to conceal the size of the requests and responses leaving the exit node. We can easily distinguish:
When we tallied all the traffic for browsing, almost all of it was human driven. When we tallied all the traffic destined for an SSH or RDP port, over 90% of it was abusive.
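One timing heuristic of the kind described above can be sketched briefly. This is a hypothetical illustration, not the actual USU analysis: the flow representation and threshold are assumptions. The underlying observation is real, though: humans produce bursty, irregular inter-connection times, while brute-force tools are metronomic, and encryption hides neither.

```python
# Hypothetical flow-timing heuristic: flag a source as automated when its
# connection inter-arrival times are very regular (low coefficient of
# variation). Threshold and flow format are illustrative only.
from statistics import mean, pstdev

def looks_automated(start_times, cv_threshold=0.2):
    """True if the gaps between connection start times are nearly uniform."""
    gaps = [b - a for a, b in zip(start_times, start_times[1:])]
    if len(gaps) < 2:
        return False          # not enough evidence either way
    m = mean(gaps)
    if m == 0:
        return True           # simultaneous connections: certainly a tool
    return pstdev(gaps) / m < cv_threshold

bot_flow = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]      # one attempt per second
human_flow = [0.0, 0.4, 7.1, 7.3, 21.0, 40.5]  # bursty, irregular

print(looks_automated(bot_flow))    # True
print(looks_automated(human_flow))  # False
```

A production classifier would combine many such features (packet sizes, request/response pacing, port), but each one works on flow metadata alone, without inspecting payloads.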
I would replace the word "cost" with "risk."
As in exposure to a hostile legal, political and social environment.
We had risk in there earlier. But we later changed it to cost. USU is weird. I suspect all universities are weird. USU is a top tier research university. USU is not run by accountants and MBAs. It is run by researchers and teachers. We are shielded from most legal issues. We are constrained by funding. If we can fund it, we can invest in long term experiments. This is one of them.
I don't see many public libraries having the resources to implement your plan.
This is an extremely significant point. In order to understand the TOR issues and implement TOR properly, an institution has to have a significant investment in IT. Not a problem for universities and large metropolitan libraries. But most smaller libraries will not have the expertise to even understand the issues and how to mitigate them.
When the shit hits the fan, "thinking it over" and "hoping for the best" is no longer an option. In the end, you have to make a decision or one will be made for you.
True. We may need to clarify that abuse response message to make the following points more clear:
I expect we will change our decision to implement TOR sometime in the next 5 years for one of the following reasons:
There are definite costs to running TOR infrastructure. You have to be aware of them. Some of the costs can be mitigated, but some can't. At the end, you have to be able to show that the benefits outweigh the costs.
First we examined the benefit. We made a clear statement of the benefit. It is:
USU has many researchers and students who deal in sensitive subjects such as Climate Change, Reproductive Issues, Political Systems, Animal Research, etc. These students and researchers frequently need privacy and security to advance the goals of USU.
Then we discussed the various costs and methods of mitigating the costs. Afterwards, we decided that the costs could be made acceptable, if we were careful.
Here is our standard response to an abuse report against USU's TOR infrastructure:
=BEGIN ABUSE RESPONSE=
The activity that you have reported is being emitted by a TOR exit node:
$ host 184.108.40.206
220.127.116.11.in-addr.arpa domain name pointer tor-exit-node.cs.usu.edu.
$ host 18.104.22.168
22.214.171.124.in-addr.arpa domain name pointer tor-exit-node-2.cs.usu.edu.
This TOR node is a project of USU's CS department. USU has many researchers and students who deal in sensitive subjects such as Climate Change, Reproductive Issues, Political Systems, Animal Research, etc. These students and researchers frequently need privacy and security to advance the goals of USU.
Almost all TOR traffic is generated by innocent people who are attempting to escape the shadow of a totalitarian government. But, unfortunately, sometimes criminals attempt to use TOR to attack others.
We are in discussion with our TOR admins to try to find ways to limit the attack activity. Of course, this rapidly becomes a sticky issue. If we start inspecting and censoring some of the TOR activity, then we have less of a defense when we get pressure to inspect and block the rest. And, even starting down this path may make us legally liable for ALL the TOR traffic. Our best action may be to keep our hands off and observe strict network neutrality.
We are still pondering our options.
Please accept our apologies in the meantime.
USU IT Security
=END ABUSE RESPONSE=
We could have built a large Orion propulsion ship anytime in the last 40 years. It would probably cost less than an aircraft carrier. A large Orion propulsion ship could get almost anywhere in the inner solar system in a few weeks, and the propulsion system will work just fine to redirect another large mass. Yes, there will be a bunch of fallout damage from the initial take-off, but we can decide where to place it. And the fallout damage from Orion's propulsion is tiny compared to the damage from an asteroid strike.
I have always hoped that there was a secret plan to convert our offensive arsenal into Orion propulsion if the need occurred.
The worst security definition that I have seen is the one currently used by the US Security communities. Geer stated it as: "...the absence of unmitigatable surprise." This definition is horrible. It offers you no guidance on prioritization or limits. This definition says you are insecure until you have achieved omniscience and omnipotence.
The best definition of security that I have found is: "Security is a MEANINGFUL assurance that YOUR most important goals are being accomplished." This is easily understood by everybody and it guides you to effective action. Using this definition you are guided to create and maintain the potential for success. The other definitions ultimately force you to focus your efforts on less important objectives.
If you have a million public IPs, you catch about 3 million attacks every time somebody messes around with ZMap or Masscan. They always try it at least 3 times. That is 1% of that scary 300 million per day total. And there are a lot of people in the world playing with ZMap.
I do IT Security for Utah State University. We are at the north end of the state. We see about 3K PPS of attack all the time. We have 128K of public IP address space. Most days, we are at about 300K PPS at the border, so 3K PPS of attack is about 1% of the total. Attack making up about 1% of incoming packets has been normal for us for the last few years. This works out to about 1 attack packet per IP address every 30 seconds. Of course, almost all of them are rejected at the border. Most of my peers are seeing the same attack levels. But all my peers are at universities.
However, in the last couple years the attack has shifted. Now, about half of our detected attack is sponsored or condoned by the Chinese government. The rest is evenly divided between other governments and organized crime. We assume this shift is the inevitable consequence of the current cyberwar. The shift has also made most attribution easier. Attack by civil servants is easier to identify: it is predictable, it follows patterns, it has preferential quality of service. When you report abuse to a non-government attacker, it shifts methods, or stops, or moves to another target. When you report abuse to a government attacker, it increases. Sometimes it improves.
The shift in attack may be local to Utah and due to the NSA facility, but I think it is more likely that we are all screwed.
I do network and computer security for a university. In the last couple years we have received a couple alerts from the FBI. The info was fairly old and limited in scope. And, they didn't want us to share the info with those who really needed to have it.
In the same period, the Chinese government has instituted a program of rigorous scanning and vulnerability assessment against my university. If I pay close attention, I discover all kinds of useful information. They have shown me 0-day exploits. They have taught me devious manipulations. They have even taught me an ingenious method of detecting firewall failure.
The Chinese give me daily updates on the latest hacking techniques. They never complain if I share the info. And they don't waste my time with meaningless paperwork. If I wasn't getting it for free, I would be willing to pay for this service. I don't understand why my government can't be as helpful.
When our attackers desire to remain hidden, we usually cannot detect and remove them using any common tool. The techniques for remaining in hidden control of systems are straightforward, effective, and available to any attacker. We can detect all kinds of stuff by carefully inspecting network activity, but learning to do it takes years. And analyzing one machine's traffic is slower than real-time.
For example, a while ago one of my coworkers managed to crack the C&C for a major fake-antivirus group. For 2 months we grabbed the rootkits as they went by. Code on compromised machines was updated daily. VirusTotal pronounced it all clean. Usually, the victims had no clue. None of the virus or malware detectors/removers would regain control of a compromised system. Sometimes the utilities would claim to have done something. It was never complete or successful. On the other hand, if we isolated a compromised machine from the C&C for 3 weeks, some of the utilities would start to be effective. At 6 weeks, almost all of them were effective. Of course, this fake antivirus group was indiscriminate and had a huge footprint.
We still use Microsoft Security Essentials or EndPoint Protection. It almost never prevents compromise, but in some circumstances it will let us know that we have been had. Some attackers get what they want immediately and don't try to hide. Others break discipline after a few days or weeks. Then there are the ones that get what they want and sell you to less capable attackers. Finally, if the user/machine is vulnerable to attack, the machine eventually gets infested with multiple attackers. Once multiple attackers start interfering with each other, something always gets dropped.
We always recommend a "change passwords/backup/wipe/rebuild/restore" when we discover compromise. Even then, sometimes an attacker regains control by hiding hostile code in user files.
The preventative measures that seem to be most effective for us are:
A few crypto products need efficiency and performance. But many don't. Many existing products are optimized for efficiency and performance, even when those goals are contrary to the stated goals of the product. Frequently, crypto solutions unnecessarily limit the size of keys, extend the lifetime of keys, and limit the number of available keys. In many cases, all three of these are false savings.
We rarely use symmetric crypto, even though it is frequently simpler and more robust. Public Key is almost always preferred, even when it is easy to distribute keys.
Reliable, trustworthy sources of truly random numbers seem to be very useful, inexpensive, and straightforward to create. See: http://en.wikipedia.org/wiki/C...
If we are interested in secure communications, it should be normal and expected that we would pick up several hardware random number generators. We should have multiple simple, robust, trustworthy tools to generate symmetric keys. We should have multiple tools to utilize simple, robust, trustworthy symmetric crypto.
Instead, we seem to focus on always using a single complex public key solution even when it is not appropriate.
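The symmetric tooling argued for above is already sitting in ordinary standard libraries. A minimal sketch in Python, using only the stdlib: `secrets` for CSPRNG key generation and HMAC-SHA256 for symmetric message authentication. (This shows authentication rather than encryption, since the Python stdlib ships no cipher; the message and key size here are illustrative.)

```python
# Minimal symmetric-crypto sketch using only the Python standard library.
# secrets.token_bytes draws a key from the OS CSPRNG; HMAC-SHA256 provides
# symmetric message authentication. Key size and message are illustrative.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # 256-bit shared symmetric key

def authenticate(message: bytes) -> bytes:
    """Tag a message so only holders of the shared key can verify it."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(authenticate(message), tag)

msg = b"reboot web-03 at 02:00"
tag = authenticate(msg)
print(verify(msg, tag))                     # True
print(verify(b"reboot ALL at 02:00", tag))  # False
```

Distribute the key out of band, rotate it freely (keys are cheap to generate), and the whole trust story fits in a dozen lines, with no certificate hierarchy required.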
In my ignorance, I have been trying to map out a simple, robust tool for system administration, that makes use of symmetric crypto. See: https://it.wiki.usu.edu/201501...
I would really like to learn that I have been wasting my time.
This guide doesn't recommend disabling passwords. That's a huge omission.
Thanks. I figured that was obvious enough to not need explanation. So I decided it was out of scope. But, I am wrong all the time.
I am assuming you feel that we should teach our admins to test all their SSH passwords against standard attack dictionaries and disable/notify any that fail. This is a good idea. I will try to add it tomorrow.
Are there other conditions that are detectable by SSH admins that require disabling passwords?
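For anyone following along, the baseline "disable passwords" change is a short server-side edit. These directive names are standard OpenSSH `sshd_config` options; the path and the key-only policy shown are just one reasonable configuration, so check them against your OpenSSH version before deploying:

```
# /etc/ssh/sshd_config -- require keys, refuse passwords
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```

Make sure working keys are in place before restarting sshd, or you lock yourself out along with the attackers.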
"Remember, extremism in the nondefense of moderation is not a virtue." -- Peter Neumann, about usenet