In the '90s I was working for BigTelCo on an ordering system.
A Unix / C system, "A", would enquire about account details based on any of various inputs (account number, main phone number, etc.). It sent a transaction to a central system "B" app server, for which I wrote about 1/3 of the code. Well over 90% of system B was COBOL. Typically we were running about 0.7 sec response times. During that 0.7 sec, our system would:
ID the type of access inputs, look them up in an IMS database, figure out which datacenter (Georgia / Florida / Kansas / Colorado / Massachusetts) had the account, and send the transaction there.
Pull the transaction, call a dynamic table to see what data were required (this could be changed w/o recompiling or bouncing the system), pull the data, create stream-style data (not the block I/O the mainframe was used to), and send it back to Unix for parsing.
Did I mention that part of the routing, and all the dynamic tables, were provided by software written in PL/1? So our COBOL modules were linked with PL/1 to create the final executables.
That's not the most clever or the least wanky system I've ever been on, but the old COBOL girl did pretty good. The Unix / C folks got intelligible data as soon as they figured out how to tweak HP's EBCDIC-to-ASCII tool so the non-alpha, non-numeric characters would be handled correctly. And at that point the data stream looked just like what they'd been passing one another, from C to C.
And yes, the last time I heard, people were still creating wrappers around the mainframe system's feeds so that other C / C++ systems could use the data.