build more power lines,
That's the crux of the problem. The UK's grid was built when large carbon-fueled power stations were king, and so the distribution network is essentially a hub-and-spoke design radiating out from those to the end users, with the load-carrying capacity getting smaller and smaller as you go. That means by the time you get to the rural fringes, like Scotland and Cornwall/Devon, which is where a lot of the optimal offshore wind sites are, the grid simply does not have the capacity to take large amounts of power the other way. Baseload capacity has to be pushed around the core of the grid, which - since we're talking overhead HT lines - is generally built along mostly flat routes between the carbon-fueled power stations, and only passes conveniently close to on-shore wind farms or off-shore wind landing stations by coincidence.
You know what else the UK has a lot of besides green energy it can't fully utilise? Nimbyism. As you'd probably expect, Nimbys really don't like the idea of new HT lines running through their local countryside (or nuclear plants, to touch on your other point), so the other goal of this regionalisation is to encourage them to accept the tradeoff in return for (relatively) cheaper local market prices in areas where there is a lot of existing or potential green (or nuclear) energy capacity. Supplementing that with local storage systems - batteries, pumped water, molten salt, or whatever - should hopefully come along in lockstep, but so far that doesn't seem to be integral to the plan; rather, it's being left up to the generating companies to decide on.
a consultant rushed to warn clients to be "extra careful" sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning, "your outputs could eventually be read by others, even if you opted out of training data sharing or used 'temporary chat'!"
I mean, seriously? This is one of a whole bunch of companies that have been blatantly hoovering up any data they can get their hands on without any regard for copyright, for constraints placed via things like robots.txt, or for the hosting costs incurred by continually spidering vast amounts of website data, and you *honestly* thought you could trust them with the data you *chose* to provide them with, or that it wouldn't backfire like this?
Zuckerberg was right all along; "Dumb Fucks" indeed.
...and then there's this.
"This" is a cast iron example of why everyone involved in AI - the content producers, AI companies, VCs backing them, policitians, and users - need to deal with the elephant in the room; copyright law was not designed for the digital age, and certainly wasn't designed for the wholesale ingestation and regurgitation of AI engines. That the media companies, usually the first to cry "foul" and demand outrageous amounts of damages because copyright, are themselves playing fast and loose with other's content while complaining about their own being used as training data more than proves the point it's way past its sell by date.
While amended since, the Berne Convention dates from 1886. AI isn't a crisis for copyright; it's an opportunity to give it a thorough overhaul, make it fairer for all given how easy it now is to content-shift and share data, and generally make it fit for purpose for the 21st century and beyond. Fail to do so, and it's just a matter of time before the legal fallout (and damages) under the current system gives the lawyers on the winning sides of the inevitable disputes a whole new fleet of superyachts.
to climb on one of the death tubes
Well, there's part of your problem. The rest of us get in the death tube.
The absence of labels [in ECL] is probably a good thing. -- T. Cheatham