It may be fundamentally changing the cybersecurity landscape, but if so, not in a good way. What is happening is that defenders gain a little, while attackers get a massive upgrade. In particular, I have research results showing that on the defender side, LLMs do not reliably find even relatively obvious vulnerabilities. Finding some things does not cut it for defenders when attackers can randomize their search and have a chance of finding things the defenders missed.
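To make "relatively obvious" concrete, here is a hypothetical, minimal example of the kind of textbook flaw an automated reviewer would be expected to catch every single time: plain SQL string concatenation, shown in Python next to the parameterized fix.

```python
import sqlite3

def get_user(db: sqlite3.Connection, username: str):
    # VULNERABLE: user input is concatenated straight into the SQL string,
    # a textbook SQL injection (e.g. username = "x' OR '1'='1" returns every row).
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return db.execute(query).fetchall()

def get_user_safe(db: sqlite3.Connection, username: str):
    # Fixed: a parameterized query, so the driver treats the input as data,
    # never as SQL.
    return db.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The point is not this specific bug but the category: if a scanner only finds flaws like this *sometimes*, an attacker who can rerun it cheaply will eventually get the hits the defender's run missed.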
Personally, I think defenders have reached the end of the sustainability of the "test and fix" approach, because searching for vulnerabilities is a massively more powerful tool for attackers, thanks to that randomization possibility the defenders do not have. After all, an attacker only has to find one vulnerability that works. The defenders have to find and fix all (!) the vulnerabilities that AI now lets attackers find cheaply. That is really bad. Even worse, AI can cheaply write crappy attack code that sometimes works, which is all attackers need. That is the second barrier that is failing: up to now, writing working attack code was slow and expensive, which bought defenders time whenever the vulnerability was not a zero-day.
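The asymmetry can be put in toy numbers. Assume, purely for illustration, a codebase with n independent vulnerabilities where a single automated scan finds any given one with probability p; the attacker wins if k randomized scan runs surface at least one, while the defender is safe only if the same k runs surface all of them.

```python
def p_attacker_wins(n: int, p: float, k: int) -> float:
    # Attacker wins if k randomized runs find at least one of the n vulns.
    p_miss_one = (1 - p) ** k        # one specific vuln missed by all k runs
    return 1 - p_miss_one ** n       # at least one vuln found

def p_defender_clean(n: int, p: float, k: int) -> float:
    # Defender is safe only if every one of the n vulns is found (and fixed).
    p_find_one = 1 - (1 - p) ** k    # one specific vuln found in k runs
    return p_find_one ** n           # all n found

if __name__ == "__main__":
    n, p, k = 10, 0.3, 5
    print(f"attacker finds something:  {p_attacker_wins(n, p, k):.3f}")
    print(f"defender finds everything: {p_defender_clean(n, p, k):.3f}")
```

With the illustrative values n=10, p=0.3, k=5, the attacker surfaces at least one vulnerability almost surely, while the defender has cleared all of them only about 16% of the time. That is the "find one vs. fix all" asymmetry in one picture.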
My take is that we will have to massively upgrade software quality and use "secure by construction" for anything that needs to survive exposure to the Internet in the future. The problem with that is that most current coders cannot do it. Hence we will probably get significant unemployment on one side and far more expensive software creation on the other. Well, it looks like we will be taking a real step towards the professionalization of IT. That is always painful, but when the dust settles it will probably turn out to be a good thing.