If you cannot do inference (not training, inference) on the device (for any definition of device), or cannot wait for the data to reach a hyperscaler data center, then doing said inference in the shelter at the bottom of a mast, or failing that, in the data center where the regional 6G core is incarnated, is a perfectly cromulent place to perform it.
Since 4G, and even before, we have had ML in telecommunication networks. I can distinctly name 4G and 5G SON (Self-Optimizing Networks), some preemptive alarm detection and correction in the Nokia NMS subsystem, and, when I was teaching CEMoD 16, many more besides. Renaming all of that to AI, and stopping doing it in a vendor-agnostic way with OpenCL and SYCL to start doing it in a proprietary way with CUDA/nVIDIA only, is a great way to attract a milliard of fresh money, so congrats.
Also, I guess that posing as an American company when the company is ~75% European is great for the press releases.
As for nVIDIA, we all know that AI is a bubble; the questions are: will it burst? Will it deflate? When will that happen? nVIDIA is using their inflated share price to buy something that will not deflate or pop, just boring organic growth, driven by 6G (the digital G that will last two decades, instead of all the other Gs, which lasted one). Good for them to diversify with cheap/inflated money. I'd have done the same.
Their rivals must be thinking "why didn't I think of this first?", and rightly so. It's an easy way to achieve a solid win-win for BOTH companies.