Most national regulators require that any safety-critical computer system in a nuclear facility be formally proven correct. Because it is so difficult to produce absolutely bug-free code, and to prove that you have done so, many systems continue to rely on purely analog control.
For example, nuclear-grade UPS systems typically offer a feature such as the following:
"Digital logic free. 100% analog control with fully verified behavior. No need for expensive and time consuming software verification"
Similar validation is available for nuclear-grade diesel generators and their control systems.
Similar design principles are often applied to reactor instrumentation, although reactor control is usually digital and verified to the highest level. That typically means no inputs to the system except the core sensors and core controls. The software uses only a minimal subset of language and OS features: no dynamic memory allocation, no dynamic linking or binding, and so on. Calibration and model data must be built into the code using a validated code generator and then statically linked into the binary, and all memory must be statically allocated at compile time.
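To make the style concrete, here is a minimal C sketch of what such generated code might look like. This is purely illustrative and not taken from any real plant system: the table values, channel count, and function names are invented. The point is the pattern, with calibration constants emitted as a `const` table compiled into the binary, all state statically allocated, and integer-only arithmetic with no heap, no dynamic loading, and no OS services.

```c
#include <stdint.h>

/* Hypothetical output of a validated code generator: calibration
 * constants baked into the binary as a const table. Values are
 * invented for illustration. */
#define NUM_CHANNELS 4

static const struct {
    int32_t gain_num;   /* gain as a fixed-point fraction num/den */
    int32_t gain_den;
    int32_t offset;
} cal_table[NUM_CHANNELS] = {
    { 5, 4, -12 },
    { 3, 2,   7 },
    { 1, 1,   0 },
    { 9, 8,  -3 },
};

/* All state is statically allocated at compile time; no malloc. */
static int32_t last_reading[NUM_CHANNELS];

/* Convert raw ADC counts to engineering units:
 * value = raw * gain_num / gain_den + offset, integer-only. */
int32_t apply_calibration(int channel, int32_t raw_counts)
{
    int32_t v = (raw_counts * cal_table[channel].gain_num)
                / cal_table[channel].gain_den
                + cal_table[channel].offset;
    last_reading[channel] = v;
    return v;
}
```

Because everything is resolved at compile and link time, the binary can be analyzed and verified as a fixed artifact; there is no runtime configuration path for an attacker (or a bug) to alter the calibration.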
The open question is whether less critical systems are exposed: SCADA and similar systems may be in use for alternator controls or switchyard controls. The risk is that grid power to the plant could be interrupted, forcing the plant onto generator power. Alternatively, other plant equipment might be degraded: the loss of non-critical water pumps or plant controls could mean that, under degraded conditions, the plant has less tolerance to a reactor accident.
Realistically, unless you have schematics that detail the control systems in use, it is not possible to determine the severity of a particular attack. Further, the interactions between different plant systems may be difficult to predict.
Even if the only realistic target for a cyber attack is the switchyard, that is still highly disruptive and degrades the plant's safety margin by removing grid power as an energy source.