In other words, Google is making it a lot more complicated and inconvenient. My current WinMo phone handles this better. Let's compare.
Currently, if I want to reach a company, I use one type of interaction: voice interaction. It goes like this:
1. I tell my phone, "Call GOOG 411." My phone asks me if I want to call "GOOG 411" or whatever and gives me a chance to confirm or correct myself.
2. I ask GOOG 411 for "Company X, Anytown USA."
3. I listen to the results. Google gives me a chance to verify them and correct myself.
4. I say which result I want. Google calls the business for me.
All that without taking my eyes off what I'm doing (walking, driving, doing the dishes, taking out the trash).
Soon, when I want to reach a company, I'll have to do a more complicated routine:
1. Launch Voice Search (VS for short).
2. Ask for "Company X, Anytown USA."
3. Voice Search terminates.
4. To review the results on the screen, I have to take my eyes off what I am doing.
5. If the results are incorrect, I'm out of luck. My Voice Search session has ended, and I have to start over.
6. Assuming I found what I wanted, I try to remember the phone number of the business I want to reach.
7. I launch Voice Actions (VA for short).
8. I tell Voice Actions to dial the ten-digit number I've hopefully remembered.
9. VA doesn't ask me whether it understood me correctly. I have to watch the screen to see if it has. If VA got it wrong, I have to launch it again.
This is ridiculous. Notice how Google has made me take twice as many steps to reach a business. Notice how Google is forcing me to mix three types of interaction:
- Voice interaction to initiate the search and make the call
- Screen viewing to check the results
- Touch interaction to scroll through the results
What a step backward in functionality this is! I hope Google is paying attention and fixes this. Until it does, I have good reason to stick with my WinMo phone: it does hands-free tasks better.