Comment more on point (Score 1) 349

Since fail2ban would ban the entire NATed other office if one actor there were to fail out from a host in that office, it suffers from the same "shortcoming" as my script in general. And if you know that some particular shop somewhere is behind a NAT, why wouldn't you whitelist that address anyway? In other words, using fail2ban is a good way to let one noob at (remote office) lock out everyone at (remote office). Just because it _hasn't_ happened to you yet doesn't mean you are ready for the case when it does.

That's a real whizzer of a solution there, Bob...

If you don't already have whitelists (and preferably VPNs) between known good sites, you are just one denial-of-service or "I can't remember my password with this hangover" event away from that theoretical firing anyway.
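
For instance, a one-liner like this (a sketch, with 203.0.113.0/24 standing in for the known-good office block) takes the whole question off the table before any throttle ever sees the packet:


# known-good remote office skips all SSH throttling
iptables --insert INPUT 1 --proto tcp --dport 22 --source 203.0.113.0/24 --jump ACCEPT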

Again, if you don't know how to apply your tools, then every solution you don't already think is super-duper will seem suspect. Since you don't seem to know the weaknesses of your current solution, and you improperly apply that "wisdom" as analysis of _my_ solution, you are proven doubly wrong.

Cookbook fail to you, good sir...

(P.S. I know, and point out, that the good and bad attempts are both counted against the limit. There are reasons. That those reasons don't apply to your case doesn't make _me_ wrong; it makes _you_ short-sighted for assuming that what doesn't work for your case can't possibly be correct for anyone. 8-)

Comment The reason(s) for this construction (Score 1) 349

While I do use this at home, I also use it on a number of forward-facing servers for business purposes (usually with different thresholds and numbers). I spend very little time at "my desk," so the ability to know that I will always have a computer with a pre-shared key available is quite limited. If I am, say, at a hangar at an airfield and I get an emergency call to check on a host, I can ssh in to my own (unprivileged) account and elevate my privileges thereafter. So I, and my very few alternates, can respond from anywhere with no chance of leaking meaningful key material, as one might when trying to match up known/authorized keys (and USB sticks are verboten in many of the places I find myself).

In that usage pattern, if I ended up having to ssh in more than five times in a single hour, then things are really not right. (And if I knew that sort of thing was going to happen I _could_ always tweak the rule, but I more often use the multi-session ControlMaster/ControlPath options to side-step the 5-per-hour limit if larger maintenance comes to the fore; see the sketch below.)
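
Something like this in ~/.ssh/config is what I mean; the host alias, user, and socket path here are illustrative only:


Host emergency-box
    HostName host.example.com
    User unprivuser
    # one real TCP connection; every later session multiplexes over it,
    # so the firewall only ever counts the first NEW packet
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m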

That is, I limit the connections pass-or-fail because it matches the expected (sparse) use pattern, and so it also limits the ability of a compromised machine I might use as a source box to span into the target machine. For instance, I can use a source host and then invalidate it by making a couple of extra connections. So if, say, I have to use an internet cafe (it's never happened, but it might) or a hotel computer or whatever, I can keep a clever follow-on with a key-logger or whatever from just using the link again. [Granted, he could use the information from a different computer, etc., and I have other means for dealing with that sort of thing (locking the access account after use until I can get somewhere secure and change the password; single-use passwords on some systems, etc.), but in terms of quick access and then a block, this works well; a sketch of the deliberate burn follows.]
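
Burning the budget on purpose is as trivial as it sounds. A rough sketch, with a hypothetical host, and connection sharing forced off so each attempt really is a new connection:


# trip the 5-per-hour limit deliberately so this source lands in bad_actors
for i in 1 2 3 4 5 6; do
  ssh -o ControlPath=none -o ConnectTimeout=5 unprivuser@host.example.com true
done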

Different access models require different tools. Being able to ssh in from just about anywhere has proven useful. Having several useful ways of closing that door, or having it slammed shut perforce after the valid use, are also important levers in any paradigm.

Also, if you reuse the named recent table (e.g. "bad_actors" in this example) [or indeed the whole chain, if it's not SSH-specific, by replacing "ACCEPT" with "RETURN"] in different rules, you can easily catch a machine on its very first port-scan, or on a single attempt to reach a service you know you don't offer (like SMB), and drop it into the named table. This lets the co-variants of the one rule "gang up" on the bad actor from different parts of your rule set without invoking expensive external processes. For instance, if you also --set an IP address as a bad_actor for sending you a SYN/FIN or a broadcast ping, then that one host doesn't get to double- or triple-dip your security. (Sketch below.)
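
A minimal sketch of that gang-up, assuming the same ext+ interface naming as the SSHTHROTTLE example below, with port 445 standing in for the SMB service you don't offer:


# no SMB offered here; a single probe brands the source a bad_actor
iptables --append INPUT --in-interface ext+ --proto tcp --dport 445 --match recent --name bad_actors --set --jump DROP
# SYN and FIN together is never legitimate; same treatment
iptables --append INPUT --in-interface ext+ --proto tcp --tcp-flags SYN,FIN SYN,FIN --match recent --name bad_actors --set --jump DROP


Once branded, the first rule of SSHTHROTTLE keeps that host out of SSH too, with no extra bookkeeping.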

Comment Re:Better than that... (Score 1) 349

I would expect to be called on shortcomings... but that didn't happen... Someone who didn't bother to understand the code misapplied it to his situation and then called out that misapplication as a flaw.

See, I responded in a conversational chain about "brute forcing a key" with a basic structure for how to blacklist a brute-force attempt source. (And in two other places I did paste the same code, since Slashdot doesn't let you easily fold sub-topics, but in each case the conversation was slightly different.)

Now at no time did I say "this will solve all your problems or address all your issues." For example, one of the "shortcomings" was about logging, and the other involved use _inside_ a VPN where connection rates would intentionally be much higher. Neither is a real shortcoming, as people with even trivial knowledge of program flow and iptables in general would know how to deal with both situations: things like picking the network interfaces to apply the rules to, and fully understanding that where rules are not desired, they should not be applied. (It's kind of no-duh that way, life.) [In fact, if you look at the command, I use "ext+" (instead of the default "eth+" et al.) as the interface, which is completely non-standard, to deter "cut and paste" application and encourage thought about how the model might be used.]

Logging is wholly another issue. Most people collect _way_ more logs than they should and then end up losing their important information in a flood of data. [ASIDE: this is why Gestaltism failed and the Scientific Method came to prominence.] It _shouldn't_ take much brain at all to figure out the various ways that logging would dress onto the skeleton above. On systems with high logging standards I usually replace most or all "ACCEPT" rules with a jump to an accept chain that contains uniform "success" logging (e.g. see the LOG target's --log-prefix element; sketch below). I like to put failure logging at the point of failure detection, and only one fail notice, so that I don't have to fish through repeats. Then I let tools (like the way the "recent" match stores the date/time of encounters) do their jobs, rather than spending a lot of CPU to re-chew raw logs for no flipping reason at all. [Mil-spec sites will, clearly, have other requirements, which are solved by other means.]
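
A minimal sketch of such an accept chain; the chain name and prefix here are mine, pick your own:


iptables --new-chain LOG_ACCEPT
# LOG is non-terminating, so the packet falls through to the ACCEPT
iptables --append LOG_ACCEPT --jump LOG --log-prefix "accepted: "
iptables --append LOG_ACCEPT --jump ACCEPT


Then any rule that would have jumped to ACCEPT jumps to LOG_ACCEPT instead, and all successes get logged uniformly.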

As for the condescension: that too is a useful tool, applied quite carefully in this case, that makes people think and re-read instead of reflexively flaming. Now you have jumped valiantly to the defense of some clod, and I decry you for that, because you have amplified his mistake with your opprobrium. This makes you more wrong than him. You have stepped in as arbiter of form with disregard for content. You are pure noise with no signal whatsoever. Your single data point is my *horror* repetition of the code in other contexts. You got me. I am willing to put the same idea in front of more than one subset of a conversation. How this must wound the internet, and confuse it beyond its ability to cope. The internet has never seen repetition so foul as I have done here... oh wait...

I do indeed condescend, to him and to you. His histrionic, left-handed, and unsupported assertion (q.v. "I would be fired if...") set the tone for what followed, and I was willing in whole to treat with him on his terms. Your yappy-dog, I-want-to-seem-important-too, infantile insertion was not even up to the low bar we were dancing above. Oh, good show to you, fine tagger-along. You have wounded me to the quick with your amazing and subtle support of his shortsightedness. Bravo!

If you don't understand why littering a design pattern/example with noise is just plain bad instruction, perhaps you should retire from the field and take up something that better suits your cookbook-only, can't-be-bothered-to-think, self-limiting mentality.

Comment Re:Better than that... (Score 1) 349

Not if one adds logs to one's blocks, which is a different issue entirely.

Again "starting point".. contemplate it. Add "branching" and "core logical structure" and "basic computer programming" to the list of ideas you should work on and the do some reading up boy.

Particularly when you consider that I was addressing a "better way" to do the throttling (e.g. going from "4 attempts an hour" to "five attempts in an hour causes an indefinite block"), all your plaintive whines are basically immature and off-topic blather.

Just admit that your "but what about (whatever)" statements reveal that you don't particularly know how to generalize your knowledge, then buckle down and do some study.

Comment Re:P.S. (Score 1) 349

Uh... I did... let's look at the command "iptables"... the word is right there... "table"... it's like the third through seventh letters... this is not that tough...

Perhaps the word "sub" confused you with sandwich-making issues?

See, the technique is to add a "--new-chain", which it is correct to use synonymously with "table" given the history of the command (e.g. most people don't really try to get all "a chain isn't a table," particularly since I delineated a range in the table that happens to lie in just the one chain therein, so both words apply equally).

There is this mathematical discipline called "set theory": a range within a table is itself a table, just as a subset is a set. So the range of the table that lies between the ACCEPT and the subsequent DROP is an ordered space where other directives can be added. If one wanted to log dropped connect attempts, one could add a "--jump LOG" rule, with no other conditionals or redundant tests, right after the ACCEPT directive. (Sketch below.)
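
A minimal sketch; in the chain as built below, rule position 3 lands exactly between the ACCEPT (rule 2) and the final DROP:


# one log entry at the moment of blacklisting; repeat attempts are
# swallowed silently by the bad_actors rule at the top of the chain
iptables --insert SSHTHROTTLE 3 --jump LOG --log-prefix "ssh blacklisted: "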

Perhaps you should learn to read before you get all corrective at people.

Maybe I am too old-school, referring to a chain as a sub-table, but that's where decades of experience tend to become a weight. Us old folks don't bother with distinctions that make no difference. I know that means the ball bounces a little too fast for the self-obsessed to follow, but what is one to do in these circumstances?

Even more so: if one wanted to log _successful_ connections, one could replace the word "ACCEPT" with the word "RETURN" and then place logs or whatnot after the --jump into SSHTHROTTLE itself.

It's called "branching" and we use it in computer sicence all the time to remove things like redundant testing.

Comment P.S. (Score 1) 349

The entire table is available for adding logging and whatnot. The (empty) range from the ACCEPT line to the final DROP in the sub-table can also be decorated with logging, or whatever else you might want to do with the failed packet, and then you only get one log entry per event instead of per attempt. That logging, or whatever, was not germane to the example mechanic.

Comment Re:Better than that... (Score 1) 349

Why would the filter be on the NAT interface anyway?

If you don't understand that no solution is one-size-fits-all, you deserve not to have that job.

If you cannot suss out the part where a generalized solution is a starting point for adaptation, then you should never have managed to get the job in the first place. /doh.

Comment Better for me... (Score 1) 349

I use the below. It has several benefits. It creates a blacklist of sorts containing only the bad actors, which can be shared with other rule groups. It does the test only once, instead of in separate rules for reject and log, etc. I don't bother logging, since I can check the bad_actors table directly, but any fun things can happen after the ACCEPT (or RETURN), as long as you make sure you reach the pivotal --set operation.

Most importantly, this allows 5 attempts per hour from any given IP address, and past that they have to go away for a day (obviously adjustable) before they can try again. This means you can get deep resistance (60 seconds is too short to truly deter some probes), but if you accidentally invalidate a host via over-use you just have to wait a day for things to be good again.

Good for use with the ControlMaster option if you frequently have to connect to that host from a particular source. I also allow only two password attempts per connection.

Note that you can replace ACCEPT with RETURN if you want to run other tests after this before accepting the packet.


iptables --new-chain SSHTHROTTLE
# already-blacklisted sources stay dropped until they go quiet for 86400s (a day)
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
# up to 5 new connections per hour per source IP (burst of 2) get through
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
# anything past the limit is branded a bad_actor and dropped
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
# route only new inbound SSH connection attempts on external interfaces through the throttle
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment Auto-blacklisting throttle. (Score 1) 349


iptables --new-chain SSHTHROTTLE
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment Better than that... (Score 3, Informative) 349

The below will create a dynamic blacklist. Any IP address that connects more than five times in an hour (pass or fail) will go into a blacklist that will persist until they stop trying for at least a day.

This will record your bad actors _and_ it will "expire" them in case you invalidate a system by accident (e.g. through over-use).


iptables --new-chain SSHTHROTTLE
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment You don't want to _know_ about the broken stuff (Score 1) 430

I did try to get the coding standard fixed.

Meanwhile, elsewhere in the code, in full compliance with the coding standards I found:

(1) Unconditional static down-casts from base types to derived classes, despite the possibility of error-event classes being the return value (e.g. classes A, A_failed, and B, where B and A_failed were both derived from A, and then a key static cast from A* to B* without any check for A_failed at all).

(2) Shaving down (bit-shifting and masking) pointers through a void* arg into four bytes (chars) that were then pushed into a byte queue, where they were later popped off as four bytes and shifted back into a pointer of some type. (The "real-time programmer," who came from a VxWorks background, didn't believe in just making an array of void* and moving all the bytes at once, for whatever retarded reason.) [Also broken because the A* to void* to B* three-way conversion isn't necessarily safe: it should be a cast to A*, a reinterpret_cast to void*, a reinterpret_cast back to A*, and then a dynamic_cast to B* to be safe and symmetric.]

(3) So many unsafe operations in the module call prototypes that I eventually just made my code "correct" (e.g. call-safe), then put in a conversion layer that used the unsafe API in both directions, called that translation unit "unsafe.cc", and wrote lots of forwarding functions that spelled out why the calling convention was flirting with disaster, so that all the unsafe calls and unsafe casts were in one pile and in one place.

Item 3 was somewhat insurrectionist, because I wasn't allowed to get any of my criticisms acknowledged by, let alone past, my boss, whose "it worked when we tested the prototype code that one time" attitude kept things tightly broken.

So we had nicely regimented coding standards, but I always look at the brand name of any medical equipment I see sitting next to a bed now, because I know what the code base for one particular brand really looks like, and how much they didn't give a rat's ass about doing things right (as opposed to doing things the way someone decided they should be done, based on single test runs).

That guy who noticed that if we built buildings the way we build software, the first woodpecker to come along would destroy civilization? Yeah, he wasn't exactly wrong.
