Of course, in-memory computing requires a completely different way of thinking about software than the CPUs and GPUs we know and love.
Which is... what they're doing.
This is an implementation of something I've been talking about for years. The brain is much more efficient than today's AI chips - not because "wetware" is efficient (it's terribly inefficient, with a huge amount of overhead and numerous chained lossy steps), but because analog accumulation is efficient compared to digital vector math for AI tasks. It's like the difference between trying to determine how much water is in a container by simulating every molecule in it, vs. just measuring the container. You want the laws of physics to do the "math" for you.

For the input field of a neuron in a DNN, instead of multiplying two vectors (activations times weights), your first vector may be flow rates along paths, and the second the resistance to flow on each path (in the case of light: optical channels and their transparency, respectively). You then need a physical nonlinear activation function (with a bias) based on how much flow accumulated during the elapsed time period, and its result needs to control how much flow leaves that neuron for the next layer.
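To make that mapping concrete, here's a toy numerical sketch of one such "neuron" - intensities play the role of activations, per-path transparencies play the role of weights, and the detector's accumulation does the summing. This is a made-up illustration (the function name and tanh nonlinearity are my choices), not a model of any specific photonic hardware:

```python
import numpy as np

def optical_neuron(intensities, transmittances, bias, dwell_time=1.0):
    """One analog 'neuron': each input channel carries light of some
    intensity, each path attenuates it by a transmittance in [0, 1],
    and a detector accumulates whatever arrives over the dwell time.
    The multiply-accumulate is done by physics, not arithmetic.
    (Hypothetical toy model -- not any specific photonic chip.)"""
    arriving = intensities * transmittances    # per-path flow: the "multiply"
    accumulated = arriving.sum() * dwell_time  # detector accumulation: the "add"
    return np.tanh(accumulated + bias)         # physical nonlinearity with a bias

# Numerically this is just a biased dot product through a nonlinearity:
x = np.array([0.2, 0.9, 0.5])   # activations -> light intensities
w = np.array([0.8, 0.1, 0.6])   # weights -> path transparencies
out = optical_neuron(x, w, bias=-0.3)
```

The point of the sketch is that the only "computation" the hardware performs is letting light pass, attenuate, and pile up on a detector; the dot product falls out for free.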
Doing this with light seems like a very fast and efficient way to implement that - much more so than via chemistry, as our brains do it.
Developing such a hardware system for training (rather than inference) will be much harder with conventional NNs, though. It might require predictive coding networks. That would actually be pretty keen if it pushed us to switch to PCNs, as they have all sorts of great properties (including realtime learning).
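For anyone unfamiliar with PCNs: the appeal for hardware is that both inference and learning use only local prediction errors - no global backprop pass. Here's a minimal NumPy sketch of that idea; the sizes, learning rates, and the function name `pcn_step` are all my own invention, a toy rather than any published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer toy: a 4-unit hidden layer learns to predict an 8-dim input.
# Inference relaxes the hidden activities against the local prediction
# error; the weight update then uses only that same local error -- which
# is why PCNs are thought to map well onto physical substrates.
W = rng.normal(scale=0.1, size=(4, 8))  # hidden -> input prediction weights

def pcn_step(x, W, n_infer=50, lr_h=0.1, lr_w=0.05):
    h = np.zeros(4)                      # hidden activities, relaxed per sample
    for _ in range(n_infer):
        err = x - W.T @ h                # prediction error at the input layer
        h += lr_h * (W @ err - h)        # descend the energy w.r.t. activities
    W = W + lr_w * np.outer(h, x - W.T @ h)  # local, Hebbian-style update
    return W, np.linalg.norm(x - W.T @ h)

x = rng.normal(size=8)
errors = []
for _ in range(200):                     # online learning, sample by sample
    W, e = pcn_step(x, W)
    errors.append(e)
# The prediction error shrinks steadily as the weights adapt in realtime.
```

Every quantity each unit needs - its own error and its neighbors' activities - is locally available, which is exactly the property a physical flow-based substrate could exploit.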