It's a bit misleading to say it beat a 50-year-old record. Strassen's method from 1969 has been bested 11 times according to Wikipedia (sorry, I didn't search for every paper on fast matrix multiplication myself), the latest in 2020.
Strassen's method can multiply two 2x2 matrices with just 7 multiplications instead of the 8 it takes using the straightforward method taught in high school. But it's not used to multiply 2x2 matrices in practice, because saving that one multiply costs extra additions, the data gets processed in a slower, out-of-order way, and there are a bunch of extra temporary values to keep track of. So it's slower, even without vector instructions.
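For the curious, here's a quick Python sketch of the standard 7-product formulation of Strassen's trick on plain scalars (the function name is mine). You can see where the extra additions come from: 7 multiplies but 18 adds/subtracts, versus 8 multiplies and 4 adds for the schoolbook method.

```python
def strassen_2x2(a, b):
    # One level of Strassen on 2x2 matrices given as nested lists.
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    # The 7 products (instead of 8).
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the four entries of the product.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```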
But it can work in a divide-and-conquer way too. Think of a 4x4 matrix as a 2x2 matrix made of four 2x2 blocks. We can multiply it the normal way by doing 8 multiplies, but each multiply is of 2x2 matrices this time (for a total of 8*8 = 64 scalar multiplies). Strassen's method lets us save one of those multiplies, but now it's not just one scalar multiply that gets saved, it's a whole 2x2 matrix multiply.
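The recursive version is the same 7-product identity applied to blocks instead of scalars. Here's a sketch (mine, not tuned for speed, and it assumes square matrices whose size is a power of two) using NumPy just for the block slicing and additions:

```python
import numpy as np

def strassen(A, B):
    # Recursive Strassen multiply; A and B are n x n with n a power of 2.
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    # Split each matrix into four half-size blocks.
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # 7 block multiplies instead of 8; the saving compounds at every level.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the result from the block products.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Note all the temporaries: every level of recursion materializes the block sums and the seven M matrices, which is a big part of why this loses to the plain method until the matrices are large.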
It's still slower! But with big enough matrices it eventually gets to be faster, somewhere around 512x512 at least.
So they found something better than Strassen for small matrices. But Strassen isn't actually useful at those sizes, and it's unlikely their method will be either. Now if they had found something for large matrices, it might actually be faster than what's used today.