Assembly is not difficult.
And nobody creates chips from scratch any more, but the underlying electronics is still worth learning. If you disagree, go look at the THOUSANDS of Arduino etc. projects. Arduino is a microcontroller, not a processor. It has a pittance of RAM, a pittance of speed (16MHz?), can't access external memory directly, etc. But the principles behind using it reveal a lot about how the electronics work and the problems associated with them.
Just a digital circuit? Far from it, when you have power issues, trace-length issues, hidden impedances in the circuit, etc.
Assembler is also how you begin to understand what a chip DOES. Sure, we all "know". Sure we do. So how do you do an add-with-carry? What CPU flags might be triggered, and when? What about the circuit timing? Rising edge, falling edge, high, low? What about memory refresh, clock cycles, etc.?
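To make that concrete, here's a minimal C++ sketch of what a Z80's ADC A,r instruction actually does - the add, the carry-in, and every flag it touches, following the documented Z80 flag behaviour. (The struct and function names are mine, for illustration; a real emulator would pack the flags into the F register.)

#include <cstdint>
#include <cstdio>

// Z80 flags affected by ADC A,r: S (sign), Z (zero), H (half-carry),
// P/V (signed overflow for arithmetic ops), N (add/subtract), C (carry).
struct Flags { bool s, z, h, pv, n, c; };

// Emulate the Z80 "ADC A,r" instruction: a + r + carry-in, 8 bits wide.
uint8_t adc8(uint8_t a, uint8_t r, bool carry_in, Flags &f) {
    unsigned sum = a + r + (carry_in ? 1u : 0u);
    uint8_t  res = static_cast<uint8_t>(sum);
    f.c  = sum > 0xFF;                                   // carry out of bit 7
    f.h  = ((a & 0x0F) + (r & 0x0F) + carry_in) > 0x0F;  // carry out of bit 3
    f.pv = ((~(a ^ r)) & (a ^ res) & 0x80) != 0;         // signed overflow
    f.s  = (res & 0x80) != 0;
    f.z  = (res == 0);
    f.n  = false;                                        // additions clear N
    return res;
}

int main() {
    Flags f;
    uint8_t res = adc8(0x7F, 0x01, false, f);  // 127 + 1 overflows signed range
    printf("result=0x%02X C=%d H=%d PV=%d S=%d Z=%d N=%d\n",
           res, f.c, f.h, f.pv, f.s, f.z, f.n);
    return 0;
}

Run it on 0x7F + 0x01 and watch P/V and S fire while C stays clear - exactly the sort of side-effect the data sheet spells out for every single instruction.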
Sure, this stuff is not necessary to OPERATE a computer. Nor to PROGRAM a computer in most languages. But it does begin to come into how to ENGINEER a computer. There are Arduino projects that push a string of bytes down to a Z80 chip with no onboard RAM (literally, the Arduino acts as a memory emulator). They've been enormously helpful in understanding how integrated circuits work - you can manually clock one cycle at a time and interrogate the bus timing, memory access, etc. of the attached Z80 as you go. Hell, it even generates information useful for anyone writing a Z80 emulator. What timing does that undocumented instruction have?
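The idea is simpler than it sounds: a fully static CMOS Z80 will happily sit between clock edges forever, so an Arduino can step it one cycle at a time. Here's a minimal Arduino sketch of the concept - the pin assignments are my own assumption for illustration (CLK on pin 2, address lines A0-A7 on pins 3-10); a real project would also serve the data bus and watch /MREQ, /RD, /WR and /RFSH.

const int CLK_PIN = 2;
const int ADDR_PINS[8] = {3, 4, 5, 6, 7, 8, 9, 10};

void setup() {
  Serial.begin(9600);
  pinMode(CLK_PIN, OUTPUT);
  for (int i = 0; i < 8; i++) pinMode(ADDR_PINS[i], INPUT);
}

void loop() {
  // Single-step: one clock cycle per character received on the serial port.
  if (Serial.available() > 0) {
    Serial.read();
    digitalWrite(CLK_PIN, HIGH);   // rising edge - the Z80 advances a T-state
    delayMicroseconds(5);
    digitalWrite(CLK_PIN, LOW);    // falling edge
    delayMicroseconds(5);

    // Read back the low byte of the address bus and report it.
    int addr = 0;
    for (int i = 0; i < 8; i++)
      addr |= digitalRead(ADDR_PINS[i]) << i;
    Serial.print("A0-A7 = 0x");
    Serial.println(addr, HEX);
  }
}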
There are levels of knowledge required for certain things, and there's also what happens in science - an understanding of OTHER seemingly-unrelated, obsolete, or totally disparate sciences affects your understanding of everything else you touch.
Understanding assembler doesn't make you a better programmer automatically, but it completes the circle - you know what's happening underneath, and so you can understand why it happens when your engineers come back and tell you the bus timings aren't as they are on the spec sheet, and you can compensate. Not having it is like being a car mechanic who's never seen a Wankel engine. Sure, maybe you never will. Maybe you'll only ever see one when tinkering with something you bought on eBay. But the different ideas give you different concepts that you can join together, because they are two solutions to the same problems.
Nobody sits and does arithmetic by hand any more. And you don't need to be able to do mental arithmetic to be a great mathematician. But the knowledge of such things CAN greatly enhance your understanding - and the speed at which you understand new things. Fermat's Last Theorem was solved because someone linked it to elliptic curves. I bet there were mathematicians the world over who were told to stop wasting their time on a 350-year-old problem because it would never be relevant. Now that we've JOINED two areas of mathematics, we understand both better. And the guy who held both areas of mathematics in his head simultaneously understands them better than anyone else.
Assembler is not something you'd write the next version of Windows in. Of course not. But if you don't tinker with it, understand it, play with other circuits, even write your own bootloader, then - sorry - you're not the kind of geek I'd get on with, nor the kind I find best at doing geek jobs.
At the end of the day, some bastard had to write the Windows bootloader in assembler. Then, only a few years ago, someone had to rewrite all their bootloaders to take account of UEFI. And every new architecture needs someone to write a bootstrap in assembler even if it's only ever used to get the compiler up and running.
Saying it's a waste is to completely miss the point of life: to pursue interests that satisfy man's innate curiosity.
Your instructor was as blinkered as you.
And, fuck, if you can't get the hang of assembler, when one instruction rarely does anything more than a single piece of binary arithmetic, and each official instruction is clearly documented as to EVERY side-effect down to individual registers and processor flags, I wouldn't want you near any code of import.
When I was a kid, someone mocked me for learning Pi to 32 decimal places. I'm a mathematician. I'm also a computer geek. I learned it because I was writing a program to compute it using a sum-of-series algorithm, had programmed my own higher-precision data types, and was checking that their answer was correct. To 32 decimal places. On a computer that didn't even have IEEE floating point.
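For the curious, here's one classic way to do it in C++ - not necessarily the exact series I used back then, but Machin's formula (pi = 16*arctan(1/5) - 4*arctan(1/239)) summed term by term with a hand-rolled fixed-point type built from an array of base-10000 words:

#include <cstdio>
#include <cstring>

// Fixed-point number: word 0 is the integer part, words 1..WORDS-1 hold
// four decimal places each (base 10000). The extra words are guard digits.
const int DIGITS = 40;
const int WORDS  = DIGITS / 4 + 3;

// a /= d, for a small positive divisor d, remainder carried downwards.
void bigdiv(long a[], long d) {
    long carry = 0;
    for (int i = 0; i < WORDS; i++) {
        long v = carry * 10000 + a[i];
        a[i] = v / d;
        carry = v % d;
    }
}

// a += sign * b, normalising each word back into [0, 10000).
void bigadd(long a[], const long b[], int sign) {
    for (int i = WORDS - 1; i > 0; i--) {
        a[i] += sign * b[i];
        while (a[i] < 0)      { a[i] += 10000; a[i - 1]--; }
        while (a[i] >= 10000) { a[i] -= 10000; a[i - 1]++; }
    }
    a[0] += sign * b[0];
}

bool iszero(const long a[]) {
    for (int i = 0; i < WORDS; i++)
        if (a[i] != 0) return false;
    return true;
}

// acc += mult * arctan(1/x), via arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ...
void add_arctan(long acc[], long x, long mult) {
    long term[WORDS] = {0};
    long t[WORDS];
    int sign = (mult < 0) ? -1 : 1;
    term[0] = (mult < 0) ? -mult : mult;
    bigdiv(term, x);                       // term = |mult| / x
    for (long n = 1; !iszero(term); n += 2) {
        memcpy(t, term, sizeof t);
        bigdiv(t, n);                      // t = |mult| / (n * x^n)
        bigadd(acc, t, sign);
        bigdiv(term, x);
        bigdiv(term, x);                   // term /= x^2 for the next n
        sign = -sign;
    }
}

int main() {
    static long pi[WORDS] = {0};
    add_arctan(pi, 5, 16);                 // 16 * arctan(1/5)
    add_arctan(pi, 239, -4);               //  -4 * arctan(1/239)
    printf("%ld.", pi[0]);
    for (int i = 1; i <= DIGITS / 4; i++)  // guard words soak up the
        printf("%04ld", pi[i]);            // truncation error
    printf("\n");
    return 0;
}

No floating point anywhere - which was rather the point.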
Base knowledge doesn't hurt. And it certainly can help. But if you spend your life thinking that "programming" is about knocking up crappy business programs quickly in Java or Ruby, then you'll never use it.
Get out there and learn some fucking real computing. Maybe you'll never use it. But I'd rather hire a coder who was there in the '80s knocking up their own computers from raw transistors than one who's sat through your instructor's lectures and just wants to do some "Agile" coding or whatever buzzword is in vogue this week.