"limited experience with a tool-poor scripting language..." - which are you referring to, Ruby? If so, Ruby is not tool-poor.
"...but in return, a lot of problems become quite a bit easier to solve." - Yes, I agree with you. Perhaps our disagreement is our perspective: I advise organizations, and so I tend to be on the side of maintainability - and that requires languages and tools that are naturally maintainable - not ones that require great effort to craft maintainability. I think that you advocate for the developer - and particularly the advanced developer. Yes, I agree that scripting languages enable you to code more quickly, although I have found the refactoring can introduce lots of bugs with scripting languages, unless you have a very high coverage unit test suite, which I try to avoid, and compilers help me to avoid that - saving me a huge amount of effort, and instead allowing me to focus on behavioral tests which are far more stable when one refactors.
I will note that I have seen very, very expert developers create mountains of unmaintainable code very rapidly, and not even know that their code was unmaintainable.
Aha. Now I know where the disconnect is in our discussion on this. I have been thinking in terms of updates, and you have (it sounds like) been thinking in terms of fetching data. Yes, for fetching data, you are right, asynchronous is far more efficient, if one can get away with a best-effort (eventual consistency) approach, which is usually the case for UIs.
For transactions that do updates, a synchronous approach is far easier to implement, because one does not have to keep track of application state: (1) one handles failures immediately, and (2) the transaction is atomic (if it is not, then you have to manage state at the application level). E.g., consider a user who reserves an airline seat, but between the time the user received notice of the available seats and the time they submit, the selected seat is given away. The user does not know the seat was given away (and their UI has not refreshed yet), so they click Submit to reserve the seat. In a synchronous approach, the Submit will fail right then, so their UI will immediately receive a failure response and can update itself accordingly. But in an async approach, the user will receive a "request accepted" response, and might even close their browser before a failure message is received. Thus, the application then has to have additional logic to record that the user was not notified of the transaction failure, and will likely have to email the user to let them know that their seat reservation failed. Much more complicated.
I.e., message oriented is simpler for getting data, but synchronous is simpler for updates. Do you agree?
"Not that I know of. And what would be the point? That would amount to a REST call with metadata attached in PB format, which is kind of like a bicycle for fish." - that would indeed be ridiculous, but I would expect the attached binary content to be unencoded, as it is in an HTTP binary encoded part. There is a major use case for that: queries that send binary data. E.g., I have been using the docker engine and docker registry REST APIs, and many of the methods include both query parameters and binary object parts (i.e., attached files) in the same request or response. I think we need to look at the gRPC specs to see if it handles this case.
"Asynchronous message passing refers to a style of programming similar to (but different from) OOP. It really has nothing to do with UDP other than a superficial similarity." - I disagree. The debate between synchronous calls and messaging is as old as the Internet. You are right that these do indeed distinguish different programming paradigms: in synchronous calls one handles the response inline, whereas an asynchronous approach requires and event-oriented design with compensating transactions at the application level. The asynchronous approach is much, much more complex for the programmer to implement, because of the large number of states that must all be handled, but it is warranted when requests have lots of latency or when there are lots of clients compared to the number of servers - so many that it would be difficult to maintain that many stateful connections. With an elastic back end, however, the latter concern goes away.
"What works for Google (or the DOD, or IBM) doesn't work for most other companies, projects, or programmers, because they operate under a completely different set of constraints." - that is VERY true.
I agree that C++ is too complex. The problem is, the alternatives are even worse for other reasons. Ruby is HORRIBLE from maintainability and performance points of view. To write maintainable Ruby, one has to use TDD, which is deeply incompatible with how many people think. (See the debates between David Heinemeier Hansson and Kent Beck: http://martinfowler.com/articl...) And I have used Ruby a lot, so I am not guessing. And Go is really awful because it throws out enormously useful features like virtual methods and exception handling, and Go also has lots of "gotchas" - my favorite is comparing an interface value with nil and getting false, then type-asserting it, comparing the asserted value with nil, and getting true. Things like that will result in lots of bugs, and there are many gotchas like that. So I have turned back to C++ because, if used well (i.e., conservatively), one can write very efficient and maintainable programs - IF used well. Efficiency matters at Internet scale: if a Ruby program requires 10,000 VMs but a C++ program requires only 100, that is a very large cost saving.
"not designed for large messages...": Hmmm - isn't there a way to attach a file - i.e., a MIME "part"? Since PB uses HTTP2, it would be hard for me to imagine that they left that out. But if you are right, I agree it would be a terrible problem. Perhaps attaching files is part of gRPC but not PB?
Not sure I understand your comment about non-copy memory transfers, since PB/gRPC are remote (out-of-process) communication tools.
Yes, you are right that message passing (e.g., over UDP) is more scalable when one has a single server. But if you can massively scale the servers, the limitations of RPC-like communication go away.
Q: How many IBM CPU's does it take to execute a job? A: Four; three to hold it down, and one to rip its head off.