Basically, they're looking for weaknesses in the encryption, i.e. a way to break it with less work than trying every key. The basic idea is that if you have an x-bit key, then your encryption system can generate 2^x different keys. So for instance, if you had 4-bit encryption, you would have 4 bits that you could assign values to. That is, you have something like _ _ _ _ where each _ can be either a 1 or a 0. When you work out the number of unique ways you can make this assignment, you get 16, or 2^4.
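A quick sketch of that counting argument: enumerate every way to fill the four slots with 0s and 1s and count them.

```python
# Count the unique keys a 4-bit keyspace allows: every possible
# assignment of 0 or 1 to each of the four positions.
from itertools import product

keys = list(product([0, 1], repeat=4))
print(len(keys))   # 16 distinct keys
print(2 ** 4)      # 16 -- same number, computed directly
```

The same logic scales: with x bits, `product([0, 1], repeat=x)` has 2^x elements.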
To break the encryption, you need to find the key. In the 4-bit example above, it's easy: you just try all 16 possibilities. But if you raise the encryption strength to, say, 256 bits, there are 2^256 theoretically possible keys. If you assume you can check 30,000 keys per second, it would take you well over a million years to check all of them (actually somewhere on the order of 10^65 years). So doing that is obviously not a practical solution.
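The back-of-the-envelope arithmetic behind that number looks like this (the 30,000 keys/second rate is just the illustrative figure from above, not a real hardware benchmark):

```python
# Rough brute-force time for a 256-bit key at 30,000 key checks
# per second. All constants are illustrative, not benchmarks.
keyspace = 2 ** 256
rate = 30_000                          # keys checked per second
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (rate * seconds_per_year)
print(f"{years:.2e}")                  # roughly 1.2e+65 years
```

Even a rate a trillion times faster only knocks 12 off the exponent, which is why key length alone defeats brute force.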
So what they're doing is trying to find a more efficient method. There are sometimes reasons why certain keys don't need to be tested, and there are statistical methods that should in theory be more efficient than simply trying keys at random. Prior to this study, the complexity was thought to be 2^119. Though it's not quite accurate, you can think of this as the average number of keys you would have to try before getting the right one. With the method they are suggesting, they claim the complexity could be lowered to 2^110.5. Practically speaking, this doesn't really make cracking the key any easier. But in the future, who knows.
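To see why dropping from 2^119 to 2^110.5 doesn't change the practical picture, compare the two work factors directly:

```python
# Speedup from lowering attack complexity from 2^119 to 2^110.5:
# the ratio of the two work factors.
old_work = 2 ** 119
new_work = 2 ** 110.5

speedup = old_work / new_work
print(round(speedup))   # 362 -- a few hundred times faster
```

A few-hundred-fold speedup sounds dramatic, but 2^110.5 operations is still astronomically far out of reach; the result matters because it chips away at the cipher's theoretical security margin, not because anyone can run the attack.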