Reading through the comments you'll find a very wide variety of opinions. And I think much of that reflects individuals' positive and negative experiences with the pattern. I've had both.
First, the negative. If you divide up your services poorly, or define what counts as a service poorly, you will absolutely have a negative experience. Likewise if you fail to take caching into account. My first venture into microservice land was horrible. An architect decided to try to build a "platform" from the services that would supposedly allow us to build many different apps from the same few core services. He decided that the act of putting stuff away into an inventory location should be one service. The act of taking something out of an inventory location should be another service. And yet another service would provide the list of work to be done. It kept going, and it was bad.
The positive came when we tore that mess down and rewrote it without an architect handing down an esoteric design. We decided that the lines between things had been drawn horribly. Getting data about an item should be one service. A Kafka consumer should get our list of work from upstream. Getting inventory locations should be another service. We should have one that built a route through the locations to guide us. And we should have one service that interacted with the others and stored the information as a single document in a Mongo repo.
You could argue that the pattern is in some ways a mixture of microservices and a monolith. And I wouldn't say you're necessarily wrong. But it has been the most effective pattern for us.
Each service works in isolation. When the data contract for getting information about items changes, we modify the item service and maintain our internal mapping. When we learned another team had developed a better pattern for building efficient paths, we were able to swap out our backend call quickly because it was proxied through our own service.
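That internal mapping is what makes the isolation work. Here's a minimal sketch of the idea (all class and field names are illustrative, not our actual code): the rest of the system only ever sees our internal model, so an upstream contract change touches exactly one place.

```java
import java.util.Map;

public class ItemService {
    // Simplified shape of the upstream data contract.
    record UpstreamItem(Map<String, String> attributes) {}

    // Our internal model: every other service only ever sees this.
    record Item(String sku, String description) {}

    // The mapping lives in exactly one place. When the upstream
    // contract changes, only this method changes.
    static Item fromUpstream(UpstreamItem upstream) {
        return new Item(
            upstream.attributes().get("itemNumber"),  // hypothetical upstream key
            upstream.attributes().get("shortDesc"));  // hypothetical upstream key
    }

    public static void main(String[] args) {
        UpstreamItem raw = new UpstreamItem(
            Map.of("itemNumber", "SKU-123", "shortDesc", "14mm socket"));
        Item item = fromUpstream(raw);
        System.out.println(item.sku() + ": " + item.description());
    }
}
```

If the upstream team renames `shortDesc` tomorrow, the change is confined to `fromUpstream`; nothing downstream of the item service notices.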
The confusion I see most often about what a microservice is comes from not being able to clearly delineate what should be a service on its own, and from not having the right tooling in place to make it effective. We have a few data sources that are API based but can have rather slow response times, so we built a cache. On our local team we have a Varnish server that routes our traffic correctly but also adds a buffer in front of each call. Calls for data that doesn't change often can be cached for up to 24 hours. For other calls we just want to ride out an HTTP hiccup, so we cache the result for 2-3 seconds. If you run four Varnish nodes, the 24-hour case means at most four slow calls for a unique piece of data per day, and the rest are lightning fast. We use Micronaut, Kotlin, and async/suspend calls throughout to make the most efficient use of our compute resources. We are also highly aggressive with our TTLs in Mongo to keep the total dataset down to a manageable size, with heavy indexing on our most commonly queried fields.
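That per-endpoint TTL split can be expressed directly in Varnish's VCL. A minimal sketch, with made-up URL paths standing in for the real routes (`/items/` for slow-changing data, everything else getting the short hiccup buffer):

```vcl
sub vcl_backend_response {
    # Slow-changing item data: cache for a full day.
    if (bereq.url ~ "^/items/") {
        set beresp.ttl = 24h;
    } else {
        # Everything else: just absorb an HTTP hiccup.
        set beresp.ttl = 3s;
    }
}
```

The actual routing and TTL rules are obviously site-specific; the point is that the cache policy lives in one small config rather than being scattered through the services.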
As we rebuilt using the above stack there was a lot of learning along the way. It requires constantly evolving the system, and you need to truly own your architecture. This is not something you build and walk away from; it is something you build and maintain. In the long term it reduced our total cost of ownership and support issues tremendously. During our abortive first attempt, our issues compounded and it was a nightmare.
The only way to truly evolve a microservice architecture correctly is to approach it iteratively. Start with something small, make it work, then add onto it. As it grows you will find where something should be peeled off to become its own service. Hopefully you figure that out before you get too far down the rabbit hole of wiring it in. Unit tests, code coverage analysis tools like JaCoCo, and functional specs are essential as well.
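For the JaCoCo piece, wiring coverage enforcement into the build so every PR is held to it is a small change in Gradle. A sketch in the Gradle Kotlin DSL; the 80% threshold is my own illustrative number, not a recommendation from the post:

```kotlin
plugins {
    java
    jacoco
}

// Fail the build if overall coverage drops below the bar.
tasks.jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = "0.80".toBigDecimal()  // illustrative threshold
            }
        }
    }
}

// Run the coverage check as part of every `gradle check`, and
// therefore as part of every PR build.
tasks.check {
    dependsOn(tasks.jacocoTestCoverageVerification)
}
```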
The other big tool that helps maintain this pattern is the ability to make the same change to many projects in parallel. We have a home-built tool that one of our principal engineers slapped together a few years ago that will cycle through every repo in our git org and run a processing rule against it. This could be something as simple as fetching the latest version of library X from Artifactory and updating the project to use it. It could be something more complex, like updating a library in Gradle and replacing references to javax.inject.Singleton with jakarta.inject.Singleton throughout the entire project. Having this tool allows us to maintain many independent git repos concurrently with little effort. And because our build pipelines run the full unit and functional spec test suites on every pull request, we have a pretty good idea that our updates will work when we do that. Occasionally we still make mistakes, but we deploy services one by one with canaries and heavily automated monitoring, so if we've introduced a new error we typically know about it within minutes and can roll back and fix it.
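A stripped-down sketch of what such a tool does, with hypothetical names throughout (the real one iterates over a git org; this version just walks local checkouts under one root directory and applies a single rewrite rule):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class BulkRewrite {
    // One "processing rule": swap the old import for the new one.
    // A real rule could be anything from a version bump to an AST rewrite.
    static String applyRule(String source) {
        return source.replace("javax.inject.Singleton", "jakarta.inject.Singleton");
    }

    // Walk every file under root (i.e., every checked-out repo) and
    // rewrite the Java sources the rule actually changes.
    static void run(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            for (Path file : (Iterable<Path>)
                    files.filter(p -> p.toString().endsWith(".java"))::iterator) {
                String original = Files.readString(file);
                String updated = applyRule(original);
                if (!updated.equals(original)) {
                    Files.writeString(file, updated);
                    System.out.println("rewrote " + file);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        run(Path.of(args.length > 0 ? args[0] : "."));
    }
}
```

In the real workflow each rewritten repo would then get a branch, a commit, and a pull request, so the full test suite gates every change before it merges.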
Microservices as a concept are nothing by themselves without a culture and the tooling to support them. If you don't have those, you might be better off with a monolith.