These will be used in data centers, where it is common to connect redundant systems with redundant cables in order to maintain very high uptime. Say a hypothetical cluster consists of 16 compute nodes and 2 storage nodes: each of CPUserver01 through CPUserver16 will have two of these cables going to StorageServerA and two going to StorageServerB, for a total of 64 cables for that one little compute cluster. That alone would leave the cluster an island, so of course there will be more network interfaces on top of that.
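The cable count above is just multiplication; a quick sanity check (using the hypothetical server names from the example) looks like this:

```python
# Cable count for the hypothetical cluster described above.
compute_nodes = 16     # CPUserver01 .. CPUserver16
storage_nodes = 2      # StorageServerA and StorageServerB
cables_per_link = 2    # redundant cables per compute/storage pair

total_cables = compute_nodes * storage_nodes * cables_per_link
print(total_cables)  # 64
```

Note how quickly redundancy multiplies the cable count: doubling either the storage nodes or the cables per link doubles the total again.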
For this technology to get any market penetration, it will need to be cost effective at these bandwidths and fit in the racks. Historically, Dense Wavelength Division Multiplexing (DWDM) has been great at packing a lot of bandwidth onto a single, comparatively inexpensive strand of long-haul fiber; it allows in-fiber signal amplification and is the clear winner at going the distance, but it is not so good at being cost effective or space efficient. These new transceivers, with their associated drivers, should take up far less space inside the servers and cost less, but they only carry 800 Gbit/s in each direction, only reach 300 meters, and use much more expensive (per kilometer of cable) 64-strand fiber.
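To put the 800 Gbit/s figure in per-strand terms, here is a rough sketch. It assumes the 64 strands are split evenly into 32 transmit and 32 receive strands with the bandwidth spread uniformly across them; the source does not specify the actual lane layout, so treat this purely as an illustration of the arithmetic:

```python
# Rough per-strand throughput, assuming (hypothetically) that the
# 64-strand cable is split into 32 transmit and 32 receive strands
# and that bandwidth is spread evenly across them.
total_gbps_per_direction = 800
strands_per_direction = 64 // 2

per_strand_gbps = total_gbps_per_direction / strands_per_direction
print(per_strand_gbps)  # 25.0
```

Under those assumptions, each strand would only need to carry a modest 25 Gbit/s, which is part of how parallel fiber keeps the per-lane electronics cheap compared to cramming everything onto one DWDM strand.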