Comment Re:Memory safety is of diminished concern (Score 1) 228
Since we're discussing reliability and safety, seL4 (also written in C) is probably a better choice, since we can prove the source code is correct.
Lots of Irritating Single Parentheses.
A far better language for many use cases is Tcl/Tk.
I really should learn Rust. It looks like an interesting language.
However, C has value. We can formally prove C programs correct (e.g. seL4, or anything verified with VST, the Verified Software Toolchain) and we can formally prove that a C compiler generates the correct binary for the source code (e.g. CompCert).
This is a standard that goes well beyond memory safety. True, almost no C program in existence goes to these lengths, but the fact is that they can. It is possible.
As far as I know, nothing similar can be done in Rust, although I welcome corrections. In other words, yes, average Rust is much better than average C, but as far as I know, the best you can do with C vastly outstrips the best you can do with Rust.
This will change, at some point. It may have already, but if it has, it's not exactly well publicised. Some day, it will be possible to formally verify that a Rust program meets the spec and formally verify that the Rust code is being executed correctly, at least within certain parameters.
But until that time, there is a critical systems use case for C that can't be matched in any of the popular alternatives.
The new standard improves error correction.
...between hand-tuned code written by a teenager with in-depth knowledge of standards and testing procedures, and the very best code produced by a statistics-driven LLM, I'll take the teenager's code every time.
AI code is, frankly, rubbish, and it can never be better than rubbish, because it's based on the probability of text appearing together and not on any actual understanding of either the problem space or the language.
In fact, it's worse than that. Because most of the code on the Internet has a high defect density, AI is actually "learning" the wrong associations, even where the associations mean anything at all.
A few easy tests for Mr Nvidia:
1. Build an AI and suitable prompts to generate an OS kernel and GUI that will run transparently across a computer cluster of variable size, where the user I/O need not be on the same node as the userspace threads, and where neither need be on the same node as the kernel threads.
In other words, something functionally equivalent to Linux, MOSIX, and X11.
Humans can build it, we know that because they have. Can AI do the same?
2. Build an AI and suitable prompts to build a real-time microkernel that is provably correct in source and whose binary provably matches the source.
In other words, something functionally equivalent to seL4 on the ARM processor.
Again, humans can build it, we know that because they have. Can AI do the same?
3. Build an AI and suitable prompts to build a C compiler and toolchain whose output provably matches the source code.
In other words, something functionally equivalent to CompCert.
As above, humans can do it. Can AI?
4. Build an AI and suitable prompts to build software for controlling a Boeing 737 MAX 10, and be willing to use that aircraft as the corporate jet from then on.
This last one is the real clincher. Would the boss be willing to put himself and key members of his staff in the hands of critical software generated by his AI?
If the answer is "no" (and I already know it is), then you need human programmers. AI can't do the stuff that people need, and as computing becomes ever-more pervasive and ever-more virtualised, more and more people will need systems that are correct, robust, and distributable.
Furthermore, good and bad habits are learned at the start, and learning is mostly done between 5-24. Nobody does their best work after 24. So if you want top-flight programmers who can produce code that is nearly defect-free, then you want them trained to be that good young.
Yes, good programmers are rare, but you won't get more of them by not training anyone. And some good programmers is better than an AI world with no good programmers in it at all.
Well, not quite, because the cloud is just somebody else's computer, although it'll be a virtual computer running inside a single physical computer. Plan9 would have enabled the cloud to be an arbitrary-sized virtual computer running over an arbitrary number of computers.
This would have been much cheaper, and since the cloud's virtual machines don't need a gui, Plan9's gui limitations don't matter.
It would also have meant that instead of using a whole bunch of apps to securely log into the cloud, you could have your computer go all Zen and become one with the cloud instance you wanted to access.
Actually, there's no reason the cloud couldn't do this with Linux. MOSIX allows exactly the sort of migrations you'd want.
Is already known to impact brain development and is linked to a range of autoimmune disorders. Yes, air pollution is an inevitable consequence of technology, but if it is allowed to spiral out of control, it's going to impose a limit on technology.
This is partly because your next generation of budding scientists and engineers will have a lower mental capacity than their predecessors, and because of the dementia link, a shorter productive period. But you'll also have fewer scientists and engineers, as deaths in infancy and childhood are rising due to air quality related health conditions.
In other words, you want the cleanest air that the technology of the time will support, in order to maximise both the number of people who can contribute to science and technology and their ability to do so, all without impairing the economy needed to pay for their education and careers.
We're going to have air pollution, we need a functioning economy, therefore we need the best possible balance that we can achieve.
In any given hacking incident, there are various things to consider:
1. Did the company actually put any effort into IT security? (Many don't, as it's a cost and there's no corresponding return they can put on the balance sheet.)
I would consider effort to mean they'd set their firewall to block everything that wasn't supposed to work from outside, to have put their remote access machines in a DMZ, to have used a hardened distro (be it Linux, FreeBSD, or OpenBSD - Windows does not constitute putting in effort) or to have bought a book on system hardening for the OS they're using, and to have put in effort on security for their Internet-accessible products commensurate to the risk.
2. Did the company keep systems patched?
3. Did the company invest a certain fraction of their income on testing and maintaining security in their own stuff?
You can't expect small shops to invest as much as the big guys, but even they have to invest in IT security. In this case, though, we're not talking about a 2-man startup but a megacorp. They should have been capable of investing a lot.
Yes, you can only invest so much, but it should always be commensurate to the risk. And in an era of heightened international tensions, the risks faced by the well-known megabrands are very high indeed.
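As a concrete, purely hypothetical illustration of the default-deny posture described in point 1, an nftables ruleset along these lines drops everything inbound except explicitly permitted services. The interface name, subnet, and ports are assumptions for illustration, not a recommendation for any specific network:

```shell
#!/usr/sbin/nft -f
# Hypothetical default-deny firewall sketch (nftables).
# Anything not explicitly allowed below is dropped.
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Allow loopback and replies to traffic we initiated.
        iif "lo" accept
        ct state established,related accept

        # Explicitly permitted services only (assumed: SSH from a
        # management subnet, HTTPS from anywhere).
        ip saddr 192.0.2.0/24 tcp dport 22 accept
        tcp dport 443 accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
```

The important property is the `policy drop` default on both chains: new services are unreachable until someone deliberately opens them, which is the opposite of the allow-by-default posture many of these breached firms appear to run.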
It's hard to get any useful data about this case, but it seems like their website was compromised and the payments system taken offline. They're cited a lot in partnership with Microsoft and seem to be using Azure, but it's difficult to know if that's the right company.
However, it likely is. And that's not a combination I'd associate with taking IT security seriously.
https://www.cps.gov.uk/legal-g...
This is the relevant clause: Section 3: Unauthorised Acts with intent to impair, or with recklessness as to impairing the operation of a computer
*Examples of this are deliberate or reckless impairment of a computer's operation, preventing or hindering access to computer material by a legitimate user or impairing the operation or reliability of computer-held material. The offender must know that the act was unauthorised.
This is the first critical part. If the user has acted to enable automatic updates or has acted to install an update, the access is clearly authorised. If a user had not enabled automatic updates and has not acted to install updates, it could reasonably be argued in court that access has not been authorised.
It would be very difficult for Microsoft to argue in court that authorisation was implicit by continuing to use their OS, because it's a change to the terms and conditions. The user would need to positively affirm such permission. Microsoft can't justify it by saying it's their software, as the CMA refers only to the owner of the computer, not the owner of the software.
*Simply modifying the contents of a computer is not criminal damage within the meaning of Section 10 of the Criminal Damage Act 1971. In Cox v Riley (QBD) 1986, the court stated that it shall not be regarded as damaging any computer or computer storage medium unless its effect on that computer or computer storage medium impairs its physical condition.
This is the second critical section. There must be (a) damage, and (b) it must be either with intent or through recklessness.
In other words, the complainant must show that use was impaired and that Microsoft had caused this through negligence in developing and testing their software. It's not enough for the software to be modified (that's legal, even if done by a black hat, if there's no damage) and it's not enough that function is impaired (Microsoft must be shown to have failed to use proper procedures in either development or testing).
But there is an additional clause: Section 3ZA: Unauthorised acts causing, or creating risk of, serious damage
*Section 3ZA is designed to cater for computer misuse, where the impact is to cause damage to, for example, critical national infrastructure and where the maximum penalty of ten years available under Section 3 may be inadequate.
*When considering the definition of “critical national infrastructure” (in line with European Directive (2013/40/EU)), it could be understood to be an asset, system or part thereof located in Member States, which is essential for the maintenance of vital societal functions, health, safety, security, economic or social well-being of people, such as power plants, transport networks or government networks, and the disruption or destruction of which would have a significant impact in a Member State as a result of the failure to maintain those functions.
In other words, if Microsoft's forced updates shut down a line of intensive care computer systems, or the 999 emergency services systems in the UK, it would certainly have caused damage to infrastructure essential for health.
Such computers shouldn't be on the Internet. Heartbleed showed they were. And, since they are, an update by Microsoft that bricks them violates the Computer Misuse Act.
This is NOT a guarantee that the courts could be persuaded. IANAL, and even lawyers would be unlikely to want to give a strong opinion on the legal risks of Microsoft negligently bricking critical infrastructure in the UK.
What it does do is show that the law, as written, clearly states that Microsoft, if acting without authority on UK computers and thereby causing damage, is taking on risk.
Doesn't matter. The Computer Misuse Act refers to the owner of the computer, not the owner of the software.
https://www.legislation.gov.uk...
This unquestionably constitutes reckless unauthorised access.
Pay didn't change.
Diminishing returns, which must at some point flip into negative territory.
Where there's a will, there's a relative.