Presumably you are using modern tools to compile and build your software, manage source code, and manage your project work. Many of these tools will either incorporate or integrate with bug tracking software and testing frameworks. If there's a native bug tracker available, select it. If there's a native test framework available, use it.
What you need is a least-friction option, where testers, analysts, and developers can all see the bugs, write up the bugs, test the bugs, and fix the bugs. You don't need "The Most Advanced Framework Available Today"; you don't need "The Best Test Tracking and Reporting Software Ever Produced"; you need a solution that works well for all the people involved. With a standalone third-party tool, the developer has to stop working, log in to the bug tracker, read the bug details, switch back to the development environment, make some changes, switch back to the bug tracker, write up the findings, switch to the test framework, execute a test, switch back ... all that switching is a huge productivity killer. The smoother the integration, the more effective and efficient the engineers will be - and that's where your expenses really lie.
Here's the problem. Some organizations say "hey, let's evaluate and buy the bestest test software out there" without giving a thought to the developers. So the QA department runs off on its own, buys a tool, and starts building tests in it that the developers can't run. If the developers can't run the tests, they don't know whether they're fixing the problems correctly, so they waste tons of time. Worse, if a developer makes a change that breaks some test, they won't know until that result is reported to them, possibly days, weeks, or even months later, depending on your QA cycles. During the intervening time, the developer continues to write code based on the original faulty change, creating technical dependencies on what may be a completely flawed base assumption. When the test finally reveals the flaw, the developer's choices are limited to: A) rewrite everything according to the better architecture the flaw uncovered, or B) make a scabby patch so the test passes. If you choose A, the software's release will be expensively delayed. If you choose B once, you'll likely choose it again; you're incurring technical debt, all your software is likely to be crap, and no good developers will want to work for you. The correct answer is of course C) don't produce tests the developers can't run themselves on demand, or tests that aren't automated as part of the build process.
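Option C can be sketched in miniature: one test file that serves both the developer's desk and the build pipeline, so a breaking change surfaces in minutes instead of weeks. This is an illustrative sketch, not any particular framework's required layout - the function and test names here are hypothetical.

```python
# A minimal sketch of option C: the same tests a developer runs on
# demand are the ones the build runs automatically. Everything here
# is illustrative, not taken from a real project.

def normalize_username(raw: str) -> str:
    """Example code under test: trim whitespace and lowercase."""
    return raw.strip().lower()

def test_normalize_username():
    # The developer runs this locally before committing; the build
    # runs the identical check, so there's no days-later surprise.
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("BOB") == "bob"

if __name__ == "__main__":
    # Single entry point for both the developer and the CI job:
    # a nonzero exit (failed assert) fails the build immediately.
    test_normalize_username()
    print("all tests passed")
```

The point is not the testing library - pytest, JUnit, or a plain script all work - but that the command is cheap enough for a developer to run on every change, and the build runs the same command so nothing depends on a separate QA cycle to report breakage.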