Maybe I mostly remember the slings and arrows -- those so-called BASIC program listings that were about eight lines of actual readable (and thus re-writeable) BASIC code, with the rest of the page or pages being DATA statements full of numbers. Then the PCs came, and we could, if sufficiently masochistic, type in similar listings to use with DEBUG.EXE. Later, as software grew larger, there soon came the need to faff about with config.sys and autoexec.bat so that available memory was maximized. From the late 1980s onwards, there was the expanded memory nonsense too, and more and more options and things in config.sys. Then there were jumper settings for DMA channels, port addresses and interrupt lines on the various plug-in cards in the PCs. This continued well into the 1990s, until it got replaced by something called Plug and Play, which sometimes worked and sometimes didn't, so everyone called it "Plug-and-pray". And all on the original 640K plus whatever High memory had been put into place. I do not miss any of this. TFS mentions the dreariness of business computing. They are absolutely right!
But I might not be typical -- I started by learning FORTRAN, after which BASIC seemed primitive (no functions? and thus no data hiding? I have to make sure I don't re-use any of the variable names anywhere else? and only one letter? at least FORTRAN allowed me six! bah). But the PC-compatible had Turbo Pascal, and there was also the assembler and, later, Turbo C, so that became a nice set-up, with direct control of the pins on the parallel and serial ports, and even some DIY card with A-D converters! Yay!
Then there were the wonderful Unix systems, HP-UX and AIX back around the mid-1980s, where you could actually do more than one thing at a time without the machine crashing. And even if your program decided to hang, or accessed some memory out of bounds, it would say "bus error" or "segmentation fault" and stop, but the rest of the system, including other programs, would continue happily along as if nothing had happened. These even had networking so we could have programs on one machine talk with programs on another machine.
Of course this didn't last. Those Unix systems were way too expensive. Instead, Windows NT happened, with a form of multitasking and, eventually, even a useful networking system (TCP/IP is useful; all the other weird and wonderful variants turned out not to be). Direct access to the parallel port vanished, while support for the serial ports became increasingly wobbly. ISA, EISA, Micro Channel, and MS-DOS became dinosaurs soon after; parallel and serial ports followed, branded as "legacy". And like the dinosaurs, some of their descendants are still around now: RS-232 serial ports never really went away completely. USB came, but turned out not to be as hacker-friendly as those serial ports -- there is a reason everyone today runs (RS-232 style) serial via USB using a PL2303 or FTDI or similar chip to talk and listen to the UART on their SBC or microcontroller board.
There was a sort of dark age, of PCs running clunky MS-DOS or slightly less clunky Windows, until the latter half of the 1990s, when Linux distros became easily available, and so good that they actually worked right on reasonably random PC hardware, and all the good old Unix ways of doing things finally became economically feasible, initially on PCs, many of them second-hand. Around the middle of the 2000s the first single-board computers started showing up, and some of these are now becoming as understandable and documented as those old 8088 PCs with their MS-DOS once were.
To some extent we are in a golden age right now.
The most common way of highlighting special items such as filenames, functions, variables, command-line invocations and suchlike in documentary text is to put them in an alternate font, sometimes italics or bold. In code, quote characters mark the beginning and end of a string to be displayed, and there are escape conventions for including the same quote character within the string as part of it. As far as the compiler is concerned, only the non-escaped quote characters at the ends mark the beginning and end of the string.
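The escaping convention described above can be sketched in a few lines (Python here, but the idea is the same in C, Pascal or BASIC dialects that support it):

```python
# A backslash escape lets the quote character appear inside a string
# without ending it. Only the unescaped outer quotes delimit the
# literal as far as the parser is concerned; the escaped ones become
# ordinary characters in the stored string.
line = "She said \"hello\" and left."

print(line)                 # She said "hello" and left.
print(line.count('"'))      # 2 -- both inner quotes survived as data
```

Note that the backslashes themselves are gone by the time the string exists in memory; they only exist in the source text, as instructions to the parser.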
In a more nontechnical work, say, a novel where protagonists A and B are discussing their concerns raised by the absence of a file and the spelling of its name, it might be useful to distinguish what we could call human-audience and non-human-audience quote characters. But here the audience is human, and we are known to be pretty good at understanding even when faced with moderately severe syntax errors.
Consider also the convention in print that long quotations running over several paragraphs have an open quote character at the beginning of each paragraph, but only one closing quote character at the very end. Useful for human readers, but it makes for many complications for a system that expects quote characters to appear in pairs separating what is inside from what is outside.
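To make the complication concrete, here is a deliberately naive pair-checker of the kind that the print convention defeats -- it just assumes quote marks come in matched pairs:

```python
# Naive pairing check: a text is "balanced" if its quote marks can be
# paired off, i.e. there is an even number of them. This works for
# ordinary inline quotations but is broken by the print convention of
# re-opening a quote at each paragraph with only one final close.
def balanced(text: str) -> bool:
    return text.count('"') % 2 == 0

inline = '"A short quotation."'
print(balanced(inline))        # True: two marks, one pair

# Print-style two-paragraph quotation: two opens, only one close.
print_style = '"First paragraph of the quote.\n\n"Second paragraph, closed here."'
print(balanced(print_style))   # False: three marks, an odd count
```

A parser that actually honours the print convention would have to track paragraph boundaries as well, which is exactly the sort of context-dependence that simple begin/end delimiters were supposed to avoid.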
Indeed, they would also confuse us with those fancy text-figure numerals that make lowercase o and zero indistinguishable, so that even if you can read and ignore the curliness of the quotes, you won't be able to get this other distinction right when it isn't obvious from context. Same for uppercase I and lowercase l in most sans-serif fonts, though copy-and-paste might be able to handle those. If it wasn't for these stupid quotes, of course...
It all comes from having overloaded some characters: the ASCII 0x22 character has been pressed into service for denoting inches, seconds of arc, beginning a quote, ending a quote, and the ditto mark. Similarly, there are the characters for minus, em-dash, en-dash and hyphen all being represented by ASCII 0x2d. So how do we know which ones we will want to use? I can think of writing prose where the storyline might have to include pieces of programming code, and thus will want to have all these different ones there at the same time.
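Unicode does assign separate code points to the characters that ASCII collapses onto 0x22 and 0x2d; a small sketch using Python's standard unicodedata module shows the overloading by name:

```python
# Unicode un-overloads ASCII 0x2D and 0x22: each role gets its own
# code point with its own name. Printing the names makes the
# distinction visible even when the glyphs look nearly identical.
import unicodedata

# The roles ASCII lumps onto 0x2D (hyphen-minus):
for ch in ["-", "\u2010", "\u2013", "\u2014", "\u2212"]:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# U+002D HYPHEN-MINUS
# U+2010 HYPHEN
# U+2013 EN DASH
# U+2014 EM DASH
# U+2212 MINUS SIGN

# And some of the roles ASCII lumps onto 0x22:
for ch in ['"', "\u201c", "\u201d", "\u2033"]:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# U+0022 QUOTATION MARK
# U+201C LEFT DOUBLE QUOTATION MARK
# U+201D RIGHT DOUBLE QUOTATION MARK
# U+2033 DOUBLE PRIME  (inches, seconds of arc)
```

So the information is representable; the trouble is that keyboards, compilers and word-processor autocorrect all still meet at 0x22 and 0x2d, and guess differently about which of these you meant.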
"If it's not loud, it doesn't work!" -- Blank Reg, from "Max Headroom"