Comment Re:Better than that... (Score 1) 349

Not if one adds logs to one's blocks, which is a different issue entirely.

Again, "starting point"... contemplate it. Add "branching", "core logical structure", and "basic computer programming" to the list of ideas you should work on, and then do some reading up, boy.

When you particularly consider that I was addressing a "better way" to do the throttling (e.g. moving from "4 attempts an hour" to "five attempts in an hour causes an indefinite block"), all your plaintive whines are basically immature and off-topic blather.

Just admit that your "but what about (whatever)" statements reveal that you don't particularly know how to generalize your knowledge, then buckle down and do some study.

Comment Re:P.S. (Score 1) 349

Uh... I did... let's look at the command "iptables"... the word is right there... "table"... it's like the third through seventh letters... this is not that tough...

Perhaps the word "sub" confused you with sandwich-making issues?

See, the technique is to add a "--new-chain", and it's fair to use "chain" and "table" synonymously given the history of the command (e.g. most people don't really go all "a chain isn't a table" on you, particularly since I delineated a range in the table that happens to lie in just the one chain therein, so both words apply equally).

There is this mathematical discipline called "set theory": a range within a table is itself a table, just as a subset is a set. So the range of the table that lies between the ACCEPT and the subsequent DROP is an ordered space where other directives can be added. If one wanted to log dropped connection attempts, one could add an "iptables ... --jump LOG" rule, with no other conditionals or redundant tests, right after the ACCEPT directive.
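Concretely, a sketch (the prefix string is made up, and this assumes the SSHTHROTTLE chain exactly as I build it below, where position 3 falls right after the ACCEPT):

iptables --insert SSHTHROTTLE 3 --jump LOG --log-prefix "ssh_throttle block: "

Every packet that reaches that slot has already failed the hashlimit test, so you get roughly one log line per blocking event rather than one per attempt.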

Perhaps you should learn to read before you get all corrective at people.

Maybe I am too old-school, referring to a chain as a sub-table, but that's where decades of experience tend to become a weight. Us old folks don't bother with distinctions that make no difference. I know that means the ball bounces a little too fast for the self-obsessed to follow, but what is one to do in these circumstances.

Even more so, if one wanted to log _successful_ connections, one could replace the word "ACCEPT" with the word "RETURN" and then place logs or whatnot after the --jump into SSHTHROTTLE itself.
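As a rough sketch of that variant (not what I actually run; the log prefix is made up, the two bad_actors rules from the chain below stay exactly as posted, and only the hashlimit rule plus the INPUT side change):

iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump RETURN
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump LOG --log-prefix "ssh accepted: "
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump ACCEPT

Anything dropped inside SSHTHROTTLE never comes back, so only the survivors reach the LOG and ACCEPT lines in INPUT.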

It's called "branching", and we use it in computer science all the time to remove things like redundant testing.

Comment P.S. (Score 1) 349

The entire table is available for adding logging and whatnot. The (empty) range from the ACCEPT line to the final DROP in the sub-table can also be decorated with logging or whatever else you might want to do with the failed packet, and then you only get one log entry per event instead of per attempt. That logging (or whatever) was not germane to the example mechanic.

Comment Re:Better than that... (Score 1) 349

Why would the filter be on the NAT interface anyway?

If you don't know that no solution is one-size-fits-all, you deserve not to have that job.

If you cannot suss out the part where a generalized solution is a starting point for adaptation then you should never have managed to get the job in the first place. /doh.

Comment Better for me... (Score 1) 349

I use the below. It has several benefits. It creates a blacklist of sorts containing only the bad actors, which can be shared with other rule groups. It only does the test once, instead of in separate rules for reject, log, etc. I don't bother logging when I can check the bad_actors table directly, but any fun things can happen after the ACCEPT (or RETURN), as long as you make sure you reach the pivotal --set operation.
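(The "check it directly" part, for anyone wondering: the recent match exposes its lists through procfs, so on a reasonably current kernel it is something like the below. The path has moved around between kernel versions and the address is just an example, so adjust to taste.)

cat /proc/net/xt_recent/bad_actors
echo "-192.0.2.7" > /proc/net/xt_recent/bad_actors   # un-ban one address by hand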

Most importantly, this allows 5 attempts per hour from any given IP address, and anyone who goes over that has to stay away for a day (obviously adjustable) before they can try again. This means you can get deep resistance (60 seconds is too short to truly deter some probes), but if you accidentally invalidate a host via over-use you just have to wait a day for things to be good again.

Good for use with the ControlMaster option if you frequently have to connect to that host from a particular source. I also allow only two password attempts per connection.
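For reference, the ssh side of that lives in the configs rather than the firewall; roughly this shape (the "myhost" alias is a placeholder, and MaxAuthTries is only an approximation since it counts every authentication attempt, not just passwords):

# Client side, ~/.ssh/config:
Host myhost
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

# Server side, /etc/ssh/sshd_config:
MaxAuthTries 2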

Note that you can replace ACCEPT with RETURN if you want to run other tests after this before accepting the packet.


iptables --new-chain SSHTHROTTLE
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment Auto-blacklisting throttle. (Score 1) 349


iptables --new-chain SSHTHROTTLE
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment Better than that... (Score 3, Informative) 349

The below will create a dynamic blacklist. Any IP address that makes more than five connection attempts in an hour (pass or fail) will go into a blacklist that persists until it stops trying for at least a day.

This will record your bad actors, _and_ the entries will "expire" in case you invalidate a system by accident (e.g. through over-use).


# Create the throttle chain.
iptables --new-chain SSHTHROTTLE
# Already in bad_actors within the last day: refresh the day-long timer and drop.
iptables --append SSHTHROTTLE --match recent --name bad_actors --update --seconds 86400 --jump DROP
# Under the per-source rate limit (5/hour, burst of 2): accept.
iptables --append SSHTHROTTLE --match hashlimit --hashlimit-name ssh_throttle --hashlimit-upto 5/hour --hashlimit-mode srcip --hashlimit-burst 2 --jump ACCEPT
# Over the limit: add the source to bad_actors and drop.
iptables --append SSHTHROTTLE --match recent --name bad_actors --set --jump DROP
# Route new inbound SSH connections on the external interface(s) through the chain.
iptables --append INPUT --in-interface ext+ --proto tcp --match conntrack --ctstate NEW --dport 22 --syn --jump SSHTHROTTLE

Comment You don't want to _know_ about the broken stuff (Score 1) 430

I did try to get the coding standard fixed.

Meanwhile, elsewhere in the code, in full compliance with the coding standards I found:

(1) Unconditional static down-casts from base types to derived classes despite the possibility of error-event classes being the return value (e.g. classes A, A_failed, and B, where B and A_failed were both derived from A, and then a key static_cast from A* to B* without any check for A_failed at all).
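For the folks following along, a hypothetical reconstruction of the shape of item (1); the names are made up and this is nothing like the actual device code:

#include <cstdio>

class A        { public: virtual ~A() {} };
class A_failed : public A { };                          // the error/result class
class B        : public A { public: void use() { std::puts("ok"); } };

A* get_thing(bool fail) {
    if (fail) return new A_failed;                      // the return path nobody checked for
    return new B;
}

int main() {
    A* a = get_thing(true);
    // What the standard-compliant code did: unconditional static down-cast, no check.
    //   B* b = static_cast<B*>(a); b->use();           // undefined behaviour when a is an A_failed
    // The boring fix: a checked cast.
    if (B* b = dynamic_cast<B*>(a)) b->use();
    else std::puts("got A_failed, not a B");
    delete a;
    return 0;
}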

(2) Shaving down (bit-shifting and masking) pointers through a void* arg into four bytes (chars) that were then pushed into a byte queue, where they were later popped off as four bytes and shifted back into a pointer of some type. (The "real-time programmer" who came from a VxWorks background didn't believe in just making an array of void* and moving all the bytes at once, for whatever reason.) [Also broken because the A* to void* to B* three-way conversion isn't necessarily safe: it should be cast to A*, reinterpret_cast to void*, then reinterpret_cast back to A*, then dynamic_cast to B* to be safe and symmetric.]

(3) So many unsafe operations in the module call prototypes that I eventually just made my code "correct" (e.g. call-safe), then put in a conversion layer that used the unsafe API in both directions, called that translation unit "unsafe.cc", and had lots of forwarding functions that spelled out why the calling convention was flirting with disaster, so that all the unsafe calls and unsafe casts were in one pile and in one place.

Item 3 was somewhat insurrectionist, because I couldn't get any of my criticisms acknowledged by, let alone past, my boss, whose "it worked when we tested the prototype code that one time" attitude kept things tightly broken.

So we had nicely regimented coding standards, but now I always look at the brand name of any medical equipment I see sitting next to a bed, because I know what the code base for one particular brand really looks like, and how much they didn't give a rat's ass about doing things right (as opposed to doing things the way someone decided they should be done, based on single test runs).

That guy who noticed that if we built buildings the way we build software, the first woodpecker to come along would destroy civilization? Yeah, he wasn't exactly wrong.

Comment Re:Ya to me sounds like "I'm special" syndrome (Score 1) 430

Treating all programmers as interchangeable morons who cannot be trusted to write code is a sign of managerial immaturity.

An outstanding programmer often knows when rules must be broken, just as an outstanding jazz musician knows when to use discord.

Now, just because the Dunning-Kruger effect causes programming noobs to assume they are masters deserving of liberty doesn't mean that the masters are a priori being immature.

Fault: there is too much baby in this bathwater. Get a sieve before proceeding. Session closed... 8-)

Comment And yet, you are wrong to "find it impossible..." (Score 1) 430

I have worked on projects that lost hundreds of millions of CPU cycles because the coding standards encoded individual ideals into runtime performance killers. (The example I have placed elsewhere: "must use getters/setters" plus "may not put function definitions inside class definitions" turns "class foo { int X; ... int getX() const { return X; } };", which can be optimized down to a register load, into a far call, not optimizable at all, from each point of use into foo.o (the object file), after a potential PIC (position-independent code) fixup for a shared library.)
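For anyone who hasn't hit this in person, a made-up sketch of the collision (not the real class; the foo.h / foo.cc markers show where the standard forces each piece to live):

// --- foo.h ---
class foo_fast {                       // what the standard forbade
    int X;
public:
    foo_fast() : X(0) {}
    int getX() const { return X; }     // definition visible to the inliner at every call site
};

class foo_standard {                   // what the standard required
    int X;
public:
    foo_standard() : X(0) {}
    int getX() const;                  // definition exiled to foo.cc
};

// --- foo.cc ---
int foo_standard::getX() const { return X; }

// --- some caller ---
int main() {
    foo_fast a;
    foo_standard b;
    // The first getX() collapses to a register load; the second is an out-of-line
    // call into foo.o (plus PIC/PLT indirection in a shared library) unless
    // link-time optimization happens to rescue it.
    return a.getX() + b.getX();
}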

And this stupidity can waste a _lot_ of man-hours. In order to get my part of the medical device that _that_ coding-standards bug was written under to run in acceptable time (e.g. to not overrun my CPU time budget in a freaking real-time heart-monitoring app), I had to break the coding standard and put the getters/setters (or occasionally the plain "public" variable) into the class definition anyway, then check it into the version control system, then go through the "naughty programmer" output list and create a bug report for each such optimization and assign that bug to that "naughty naughty" message. Then the bug review team had to deal with those bugs. Then the code review team had to approve those optimizations.

Even with this only costing me a couple of hours on the one set of modules, when you consider the ten or twelve people that the automation system then had to nag, and the hours _they_ lost, you get into a lot of wasted time overall.

Now add the part where every _other_ programmer who silently followed the automatically enforced rules ran over the time budget for their code (so the system was too slow), and all _their_ code had to be fixed once everybody noticed that _mine_ was not so plagued.

Then the cost of the project running late and eventually being determined to be "not worth the effort being expended" and getting canceled outright...

Well, truly hundreds of man-hours and _many_ thousands of dollars were wasted on a project that was largely killed because all the programmers were muzzled into paralysis by a huge steaming pile of these sorts of pointless restrictions (many of which would have been good for a _class_ in programming but most of which were _toxic_ to a real project).

Well, you know, there are reasons that failed projects fail, and sometimes those reasons involve over-regimentation of the otherwise creative process of finding solutions and expressing those in code.

Comment Re:Standards are (_Not_ Usually) Good (nor bad) (Score 3, Insightful) 430

Standards, and the enforcement of same, are (usually) a symptom of the "interchangeable morons" school of management. That school presumes that all problems have an (Ayn Rand-ish) uniform solution that all _programmers_ will process identically.

A small number of "do not do"s, with an "unless you have good cause" escape clause, makes for reasonable _guidelines_; but something as firm as a "standard" is a great way to make your great programmers no better than your worst over time.

Standards often contain bugs themselves: things that impose a hidden cost on the programmer and the program, and that can bog both down.

Examples:

Even Microsoft eventually abandoned their "standard" of putting the variable type as encoded dirt on the front of their variable names, such as "lpintThingy", having plagued their code with Thingies that are no longer long pointers to integers and that cannot be globally search-and-replaced because doing so hazards destroying other code.

Combined rule failure: "use getters and setters" plus "don't put member function definitions inside of class definitions" means what would be a register load operation becomes an optimization-resistant far call across translation units, every dang time you set or read a scalar.

If you cannot trust your programmers to write good code then making them format it so it _looks_ like good code is a wasted effort.

If you cannot trust your great programmers to write great code, eventually they will stop even trying, and you will be left with either a hassle-avoiding idiot or someone looking for a new job.

If you cannot trust your new programmers to understand your previous code then your new programmers are probably inferior to your older coders.

If you are not winnowing out the _bad_ programmers via rational code review then your management is useless.

All but the most rudimentary coding guidelines are murderers of productivity, creativity, and performance.

Every company eventually realizes this, on and off, for a while, each time a management team ages into a job, and then forgets it again when they hire new managers.
