I think there is an important distinction here.
Circuits and GUIs are inherently graphical. To specify them graphically is to specify them in their own terms. Such graphical representations are natural and compact; they are not really abstractions. (For circuits, behavior can be added to the graphics with a minimal set of graphical conventions. For GUIs this is not possible, which is why GUI behavior isn't usually specified graphically.)
Most things in programming are not graphical. C functions aren't. Algorithms aren't. Data structures aren't. Databases aren't. Contracts on what a function may or may not do aren't. Communication protocols aren't. Etc.
Graphical languages can be used as an aid in explaining or specifying these things, but the result is a symbolic representation, just as a textual representation is. That is a fundamentally different way of using graphics.
Such symbolic graphical languages certainly have their uses (UML diagrams, database model diagrams, state machines, etc.), but they take up far more space than equivalent textual representations. Take the natural numbers: it's perfectly fine to represent them graphically (dots and circles on a screen) when introducing them, but only a textual notation such as decimal scales to larger magnitudes.

The same holds for pretty much every aspect of programming. When specifying program flow logic, for instance, a flowchart is very space-inefficient compared to textual code, and it takes much longer to create. There is no way to specify the equivalent of 10 million lines of code in fewer than 1 million pages of flowcharts, and those would cover only the control flow, not everything else the code specifies. So graphics end up being used only for the aspects of a program that are easy to visualize, and usually as a secondary representation next to a textual one. Text is simply more compact and usable.
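As a rough illustration of that density gap, here is a minimal sketch (the function and its name are made up for this example): a few lines of textual branching logic that, drawn as a flowchart, would already need on the order of ten boxes and decision diamonds plus connecting arrows.

```python
def classify(n):
    """Classify an integer. Each `if` below would be a separate
    decision diamond in a flowchart, each `return` a terminal box."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    if n % 2 == 0:
        return "positive even"
    return "positive odd"

# Four branches in five short lines of text; the equivalent flowchart
# needs a start node, three diamonds, four terminals, and the arrows
# between them, laid out in two dimensions.
print(classify(-3), classify(0), classify(4), classify(7))
```

The textual form also composes: calling `classify` from other code is one line, whereas flowcharts offer no equally compact way to reference one diagram from another.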