AI stagnated in academia because the average research grant simply didn't cover the cost of high-performance computing at the time. In the late 1980s, MS-DOS PCs were stuck with 4.77-16 MHz CPUs, 256-color VGA modes, and 64 KB memory segments (hence the tiny, small, medium, large, and huge memory models for code generation). Academics were lucky to have an 80x87 floating-point coprocessor. Even the 32-bit and 64-bit workstations weren't much faster in practice, since many relied on storage mounted over the network. And even where fast workstations existed, the field slowed itself down with interpreted languages like LISP, Prolog, and Scheme. Some implementations did compile to native executable code, but only with large runtime libraries.

Many academics had to make do with PCs for image processing. Unless they had an i860 coprocessor with a framegrabber board, they had to store images on disk, fetch the pixels one row at a time, run an FFT or inverse FFT, write the data back out, and then repeat the whole process column by column for the other axis. Now the same work can be done by an HD webcam, a smartphone, or an inkjet printer.
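That one-axis-at-a-time trick works because a 2D FFT is separable: 1D FFTs over every row, followed by 1D FFTs over every column, equal one full 2D transform. Here is a minimal sketch of the pattern, assuming NumPy, with a memory-mapped file (the name frame.dat is illustrative) standing in for the disk-resident image:

    import numpy as np

    # The image lives on disk; only one row or column is in RAM at a time.
    h, w = 512, 512
    img = np.memmap("frame.dat", dtype=np.complex64, mode="w+", shape=(h, w))
    img[:] = np.random.rand(h, w).astype(np.complex64)  # fake image data

    for r in range(h):                  # first pass: one row at a time
        img[r, :] = np.fft.fft(img[r, :])
    for c in range(w):                  # second pass: one column at a time
        img[:, c] = np.fft.fft(img[:, c])

    # Today the same result is a single in-memory call:
    # spectrum = np.fft.fft2(original_image)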
Supercomputers were reserved for weather simulations and aerodynamics. Around 2005, it became possible for a desktop or laptop to do 3D volume visualization with some old-school texture-mapping tricks and high-level shaders. Now there are a dozen different ways to do rendering and image processing, each taking advantage of GPU capabilities: OpenGL, OpenCL, CUDA, DirectX, compute shaders, Matlab, Blender, GIMP, ImageMagick, WebGL, Java, Python (PyCuda, PyGL).
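As one illustration of how low the barrier has become, here is a minimal sketch of a CUDA kernel driven from Python, assuming a CUDA-capable GPU with PyCuda installed (the kernel name and array shapes are illustrative); it inverts an 8-bit grayscale frame entirely on the GPU:

    import numpy as np
    import pycuda.autoinit              # creates a CUDA context on import
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # Tiny CUDA kernel: each thread inverts one 8-bit pixel.
    mod = SourceModule("""
    __global__ void invert(unsigned char *img, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            img[i] = 255 - img[i];
    }
    """)
    invert = mod.get_function("invert")

    img = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)  # fake camera frame
    img_gpu = gpuarray.to_gpu(img)      # copy host -> device
    threads = 256
    blocks = (img.size + threads - 1) // threads
    invert(img_gpu.gpudata, np.int32(img.size),
           block=(threads, 1, 1), grid=(blocks, 1))
    result = img_gpu.get()              # copy device -> host

The same operation would have been a batch job on shared hardware in the era described above; now it is a dozen lines on commodity parts.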