This means regular expressions are only useful if they let you solve the problem faster than you could in whatever language you are calling them from.
In my experience, gratuitous, pointless uses of regular expressions outnumber effective, useful ones by something close to ten to one.
My favorite abuse case is where no real wildcarding is done at all, and all they really want to know is, "does this string contain this substring?" I find the most extreme cases in bad Perl.
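A minimal illustration of that abuse (in Python rather than Perl, and with a made-up line of text): the regex machinery buys you nothing here, because a plain substring test says the same thing.

```python
import re

line = "error: connection timed out after 30s"

# Regex overkill: compile and run a pattern just to test for a literal.
found_with_regex = re.search(re.escape("timed out"), line) is not None

# What they actually wanted: a plain substring test.
found_plain = "timed out" in line

# Both answer the same question; only one of them needed a regex engine.
print(found_with_regex, found_plain)
```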
I once found a build script that was preparing localization templates. It built a regular expression for each variable it was trying to replace (with no real wildcarding, of course) and then evaluated every line of the template against every expression. Since the number of lines in the template was directly proportional to the number of variables, that meant n^2 evaluations.
I replaced this with a simple linear approach: read all the translations into an in-memory lookup table, then scan each line of the template character by character.
The resulting script went from O(n^2) with regular expressions to O(n) with simple character comparisons; on our problem input, processing time dropped from 20 minutes to less than a second, roughly 1200 times faster.
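The linear approach can be sketched like this (in Python for brevity). The `${NAME}` placeholder syntax and the sample variable names are assumptions for illustration; the original script's template format isn't specified. The point is the structure: one dictionary lookup per placeholder and one pass over each line, instead of one regex evaluation per variable per line.

```python
# Hypothetical translation table; in the real script this would be
# read from the localization files into memory once, up front.
translations = {"GREETING": "Bonjour", "FAREWELL": "Au revoir"}

def substitute(line, table):
    """Scan one template line character by character, replacing each
    ${NAME} placeholder via a dictionary lookup. O(length of line)."""
    out = []
    i = 0
    while i < len(line):
        if line.startswith("${", i):
            end = line.find("}", i + 2)
            if end != -1:
                key = line[i + 2:end]
                # Unknown keys are left untouched rather than dropped.
                out.append(table.get(key, line[i:end + 1]))
                i = end + 1
                continue
        out.append(line[i])
        i += 1
    return "".join(out)

print(substitute("${GREETING}, world! ${FAREWELL}.", translations))
```

One scan per line, with each variable resolved by a constant-time dictionary lookup, is what turns the overall job from O(n^2) into O(n).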
Regular expressions are very powerful, but they are no substitute for thinking, or for data structures and algorithms.
Don't teach new programmers regular expressions; teach them to think. Teach them how to analyse algorithms. Teach them how to build data structures.
Then, once you've done all that, you can teach them regular expressions, but make sure they know what they're doing. (And it won't work; they'll immediately start abusing them. Happens every time.)