I implemented the strictest controls possible for a site that was being heavily phished and it worked very well. Here are the things you have to understand about DMARC, DKIM, and SPF (since SPF matters to DMARC too).
As a basic overview, here's what these do.
SPF = Only allow emails from specific domains / ip addresses
DKIM = When an email arrives, verify the signature with the domain it claims to be from to ensure it actually came from there
DMARC = How strict should we be with SPF and DKIM?
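All three mechanisms are published as DNS TXT records. Here's a minimal sketch of what each looks like; the domain, selector, IP range, and truncated key are all placeholders:

```
; SPF: only these servers may send mail for example.com ("-all" = hard fail everything else)
example.com.                TXT  "v=spf1 ip4:203.0.113.0/24 include:thirdparty-esp.example -all"

; DKIM: the public key receivers use to verify the signature (key truncated here)
s1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."

; DMARC: the policy telling receivers what to do when the checks fail
_dmarc.example.com.         TXT  "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```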
DMARC in itself isn't an actual verification system. What DMARC does is let you tell receiving mail servers exactly how to handle emails that don't pass SPF and/or DKIM checks. Without DMARC, mail servers have to guess and basically follow their own rules.

If you've taken the time to document where email from a particular domain comes from (including 3rd party services), ensured that your SPF record includes everything, and verified that all emails are signed with DKIM, then eventually you can be strict enough with your DMARC policy to say that anything failing both SPF and DKIM can simply be trashed. That's what the strictest setting looks like. You can also tell mail servers to send failing mail to the spam folder, just in case you missed something. You can treat SPF strictly and ignore DKIM, or vice versa. You can apply your DMARC rules to only a percentage of your mail (to make it easier to transition in with a small group of messages). You can also have providers send you an XML report of the day's activity to see how messages were handled from different services and where those messages originated. The reports can be a pain to make sense of, but once you have everything set up properly you tend to stop looking at them.
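Those XML aggregate reports follow a fixed schema (RFC 7489, Appendix C), so a short script can turn them into something readable. A minimal sketch in Python with a trimmed-down sample report; the field names come from the spec, everything else here is illustrative:

```python
import xml.etree.ElementTree as ET

# A trimmed-down DMARC aggregate ("rua") report. Real reports arrive as
# zipped XML attachments and contain more metadata per record.
SAMPLE_REPORT = """\
<feedback>
  <record>
    <row>
      <source_ip>203.0.113.7</source_ip>
      <count>14</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>pass</dkim>
        <spf>pass</spf>
      </policy_evaluated>
    </row>
  </record>
  <record>
    <row>
      <source_ip>198.51.100.22</source_ip>
      <count>3</count>
      <policy_evaluated>
        <disposition>reject</disposition>
        <dkim>fail</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
  </record>
</feedback>
"""

def summarize(report_xml):
    """Tally message counts per source IP with their disposition, so you can
    spot a legitimate sender being rejected before tightening the policy."""
    root = ET.fromstring(report_xml)
    rows = []
    for row in root.iter("row"):
        rows.append({
            "source_ip": row.findtext("source_ip"),
            "count": int(row.findtext("count")),
            "disposition": row.findtext("policy_evaluated/disposition"),
            "dkim": row.findtext("policy_evaluated/dkim"),
            "spf": row.findtext("policy_evaluated/spf"),
        })
    return rows

for entry in summarize(SAMPLE_REPORT):
    print(entry)
```

In practice you'd feed every record into a running tally keyed by source IP; unknown IPs with passing DKIM usually mean a third-party sender you forgot to document.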
It's important to remember the difference, because SPF is easier to implement since it's just a DNS record. For DKIM you have to actually sign the email before it's sent, which may or may not be possible from all of your various points of email origin. DKIM is better, but that makes it more complicated. And that's why you need something like DMARC, so you can tell mail servers just how thoroughly those rules have been implemented.
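A typical rollout of those DMARC knobs, shown as the value of the `_dmarc` TXT record at each stage (domain and report address are placeholders):

```
; Stage 1: monitor only -- reject nothing, just collect reports
"v=DMARC1; p=none; rua=mailto:dmarc@example.com"

; Stage 2: quarantine (spam-folder) half of failing mail while you fix stragglers
"v=DMARC1; p=quarantine; pct=50; rua=mailto:dmarc@example.com"

; Stage 3: the strict end state -- mail passing neither check gets rejected
"v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```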
The site that I implemented it for was a very old site where people managed high-dollar transactions over email. Phishing was RAMPANT, even more so because there was a good chance a phisher could pass an email off as actually coming from our domain. The combination of the 3 protocols in strict mode stopped that completely. It didn't stop PHISHING, but it did secure our domain against it. After that, phishers had to use other domains: leaving off a middle letter, trying spelling variations, etc. This gave us the ability to work with registrars to either buy the domains or report them for abuse.
As an early poster said, you can't completely stop phishing, but there are preventative measures you can take to protect against account compromise.
After that we took additional steps to secure users' accounts. We started recording IP addresses with all logins or return visits, along with geographic data from MaxMind. Once we had enough sample data to establish a general point of origin, we started locking accounts if they were accessed more than 200 miles from their normal center point, and always if they were accessed from a different country. As soon as an account was locked for a geographic reason, we sent the user an email notifying them that their account had been accessed from another country or outside of their area, and that if the lock was in error, they could click a link to disable that function for 2 weeks while they were traveling. Otherwise they should change their password. Users really appreciated it. We expected some usability frustration, but overall users were very happy to know we were watching out for them.
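The geographic lock boils down to a great-circle distance check against the account's usual center point. This is an illustrative reconstruction, not the site's actual code; the function names are made up, and only the 200-mile/different-country thresholds come from the description above:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_lock(center, login_point, center_country, login_country,
                travel_override=False):
    """Lock if the login is more than 200 miles from the account's normal
    center point, and always on a country change -- unless the user has
    clicked the 2-week travel override link."""
    if travel_override:
        return False
    if login_country != center_country:
        return True
    return haversine_miles(*center, *login_point) > 200

# New York center point: a Philadelphia login (~80 mi) is fine,
# a Chicago login (~710 mi) or any foreign login triggers the lock.
nyc = (40.71, -74.01)
print(should_lock(nyc, (39.95, -75.17), "US", "US"))   # Philadelphia
print(should_lock(nyc, (41.88, -87.63), "US", "US"))   # Chicago
print(should_lock(nyc, (48.86, 2.35), "US", "FR"))     # Paris
```

In a real system the center point would be a running average over the recorded login coordinates, recomputed as samples accumulate.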
People also tried to create fake accounts on the system to initiate transactions. For that, we took a page out of Fark's playbook. On Fark, when you get blocked/banned you don't KNOW you've been banned. You can keep posting away but nobody else can see it. We did the same thing when we recognized fake accounts through an assortment of internal mechanisms, including CAPTCHA to make sure these guys had to work by hand. We'd let them continue doing whatever they wanted for a period of 7-10 days after we spotted them, but nothing their accounts did actually reached any other users. If we blocked an account outright, they'd just create a new one, so it was key to make sure they didn't KNOW they'd been identified. After 7-10 days we'd actually lock the account so they knew we "caught them".
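The silent-flag ("shadow ban") flow can be sketched like this. Everything here is an illustrative stand-in for whatever the real system used; the only details from the description above are the 7-10 day window and the drop-then-lock behavior:

```python
from datetime import datetime, timedelta

SILENT_PERIOD = timedelta(days=7)  # hard-lock 7-10 days after flagging

class Account:
    def __init__(self, name):
        self.name = name
        self.flagged_at = None   # set when internal checks mark it as fake
        self.locked = False      # visible lock, applied after the silent period

    def flag(self, now):
        self.flagged_at = now

    def maintain(self, now):
        # After the silent period, convert the hidden flag into a visible lock.
        if self.flagged_at and now - self.flagged_at >= SILENT_PERIOD:
            self.locked = True

def deliver(sender, message, inbox):
    """Deliver a message unless the sender is flagged or locked. A flagged
    sender gets no error -- the message just silently goes nowhere, so they
    can't tell they've been identified."""
    if sender.locked or sender.flagged_at is not None:
        return  # silently dropped
    inbox.append((sender.name, message))

inbox = []
acct = Account("scammer")
deliver(acct, "first message", inbox)   # delivered: not yet flagged
acct.flag(datetime(2024, 1, 1))
deliver(acct, "scam attempt", inbox)    # silently dropped
acct.maintain(datetime(2024, 1, 9))     # past the 7-day window -> visible lock
print(inbox, acct.locked)
```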
We used to keep a metric report for our own entertainment to show how much time these guys were spending trying to scam people AFTER we flagged their accounts. It was incredibly amusing to watch them waste 6-8 hours a day for months on end after all the hassle they caused us.