No, they're trying to vastly improve on decompiling tools: tools that disassemble object/executable code (which is pretty trivial; e.g., objdump does it) and then analyze the assembly to back out the C or C++ code that might have originally produced it.
Think "f2c", but replace FORTRAN with "executable/object code".
I've done some decompiling of (small) executables, using non-AI tools that often did little more than treat C as an assembly language. They did usually get things like "this is a subroutine" right and produced stubs like "int a113s_r4r5r6(int r4, int r5, int r6) { .... return r6; }".
So we got C code that we could recompile, and while the output wasn't byte-for-byte identical, the recompiled code was "correct". We could theoretically edit the resulting C code, but since all the labels, variable names, etc., are obviously stripped out, the decompiler had to generate *something*, so we got names like the ones I mentioned above.
As a subject-matter expert, most of my job was trying to recognize what the code was *really* doing, replacing the decompiler-generated names with my best educated guess as to what each function/variable really does or might be called. The decompiler didn't always (usually didn't) recognize things like "this is an array access", and instead emitted code like
int *v123;
int v3245234;
v123 = v345235;
v123 += v3544;
v3245234 = *v123;
Which is C-as-assembly, essentially. But recognizing the pattern and making substitutions like
v345235 == "a"
v3245234 == "b"
v3544 == "i"
we might recast that as
int b = a[i];
I'm sure the AI parts here are geared towards doing that sort of thing better and more accurately. Not to mention comparing object code against known object code in the wild and finding the corresponding source, e.g., when FOSS software got included or when libc code was statically linked.