I like a memory-safe language as much as the next person, but software security really is an open-ended problem. My language may prevent me from making a string that overflows its memory, but if the string I'm building happens to be a SQL query, my code could still be vulnerable to SQL injection. Of course, there are several ways to build SQL interfaces which don't allow unchecked strings as queries.
The point is that the SQL injection vulnerability is completely analogous to the string overflow vulnerability. Strings were originally implemented as abstractions in languages which had no concept of "string". There have been many such implementations of strings, and a good fraction of them are not memory safe. But now we also have languages which implement memory-safe strings as an abstraction of the language. And people have used these languages to implement a new abstraction, SQL. Again, there are many implementations. Some are safe from SQL injection. Some are not. The ones which are not safe may actually be simpler to use from a naive point of view - like just building a string and passing it to a function. Some programmers may find that more appealing than an API which requires multiple calls or multiple parameters to make a query. It doesn't require them to learn a new abstraction.
Software security problems will always exist, as long as we continue to build higher-level abstractions with APIs which allow the abstraction to be subverted. Even when you have a safe API to a new abstraction, there's an excellent chance someone will come along and implement a "simpler" API, which is simpler mostly because it is vulnerable.
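To make the contrast concrete, here's a minimal sketch using Python's sqlite3 module (the table and the attack string are illustrative, not from the original posts). The "simple" string-building API is the vulnerable one; the parameterized API keeps data out of the query text entirely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # classic injection attempt

# The "simpler" API: just build a string and pass it to a function.
unsafe = "SELECT role FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())  # the OR '1'='1' clause matches every row

# The safe abstraction: the driver binds the value as a parameter,
# so the attack string is treated as data, never as SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())  # no user is literally named "x' OR '1'='1", so: []
```

Note that the safe version is barely more work to call, but it requires the programmer to know the parameterized form exists, which is exactly the adoption problem described above.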
I'm talking about differentials in force with respect to a system capable of extremely non-linear responses. When polar ice melts, much of the weight that was on one plate moves to adjacent plate(s), so the force on the plate where the ice was decreases, and the force on the adjacent plate(s) increases. The change in the difference in force between the plates could exceed the weight of the ice. (And it could be a positive or negative change, depending on the relationship of the plates. A negative change could reduce the static friction enough to trigger a quake. A positive change could increase the stress enough to do it.) The changes in force may not amount to much in terms of magnitudes of the total forces in play, but it could easily be enough to trigger quakes which otherwise might not have happened for thousands of years.
An earthquake occurs when the static force of friction at a point of geologic stress is overcome, or when the force on a geologic structure exceeds its breaking point. It is an extremely non-linear response, which can be triggered by small changes in these forces. Given that, it would be surprising if tidal effects were not correlated with earthquakes.
As the polar ice melts and its weight is redistributed over the oceans, I expect this also will result in sufficient changes in tectonic forces to trigger more earthquakes, and perhaps volcanic activity as well. It wouldn't surprise me if even changes in atmospheric pressure are sometimes sufficient to trigger a quake.
I particularly would like to see a resurgence of OS research. Things have changed enough in both the hardware and the application landscape since Windows and Linux were designed that I think it would be worthwhile to revisit the questions of what an OS can and should be. But in my view the real problem for the commercial success and/or widespread deployment of a new OS is not so much legacy applications as it is device driver support for the very broad range of devices that are found across the major hardware platforms. It would not surprise me if the volume of device driver code for Windows and Linux (and maybe even Android and iOS) exceeds that of the rest of the OS. In contrast, support of legacy applications can usually be achieved through a compatibility API or container approach.
Ideally I think device drivers should be written in a language which supports a level of abstraction which is at least somewhat OS-agnostic. Then, even in cases where the device manufacturer was unwilling to provide source code, an OS developer could provide the manufacturer with a compiler that would generate a binary for their OS. But the marketing obstacles to such an approach probably far outweigh the technical challenges of its implementation.
As with the design of operating systems, it makes sense to distinguish between policy and mechanism. Science and rationality may or may not be a sufficient basis for creating policies. And here by "policy" I mean things like "people should be equal before the law", "healthcare should be a right", "what's good for Wall St. is good for America", "all citizens should be armed to the teeth", "Mars colonization should be our highest priority". That is, policies are goal statements, and reasonable people can certainly disagree about what our goals should be as a society. Mechanisms are the means we use to achieve our goals, that is, the means by which policies are implemented. So a policy might be: "wealth inequality should be bounded", and mechanisms to achieve it might include "progressive income tax", "subsidies for the poor", or "universal basic income". Given a policy, science and rationality are certainly applicable to designing and evaluating the efficacy of mechanisms to achieve the policy.
Our biggest problem is that most of our political discourse is consumed with debating what we call policies, but which are usually mechanisms to achieve some policy. The policy is seldom explicitly stated and almost never debated, while the participants in a typical political debate take it for granted that everyone accepts whatever implicit policy their proposed mechanism seeks to achieve. But even worse, we implement mechanisms without ever tying them to an explicit policy goal, which makes it difficult to determine whether a given mechanism is working. Politically you get a situation where anyone who questions a mechanism is assumed to be disagreeing with the unstated policy behind the mechanism. The result is that bad mechanisms become entrenched, and are no longer subject to rational or scientific examination. And that just sucks.
If I could interject one question into every political debate, it would be: what are you trying to achieve? And if I could have a second question it would be: how will you know if you've achieved it?
I had an excellent mentor at NASA at age 16. Learned about high-level languages and algorithms using the CAL language on Tymshare. Learned about how computers actually work by toggling in programs through console switches on a PDP-5. Learned how to code efficiently mostly by reading other people's code. Learned FORTRAN IV from McCracken's book. Read a lot of computer manuals, back when computers came with full documentation sets.
Not just a single disaster. Just as we've seen more frequent and more powerful storms in recent years, so too I expect there will be more frequent and more powerful earthquakes, including the possibility of tsunamis. More and larger volcanic eruptions would not surprise me. I expect most of the action would be around the boundary of the Pacific plate. But things could get interesting on the mid-Atlantic ridge as well.
The consequences of this for civilization would be fairly devastating. Besides lives lost in individual incidents, there could be hundreds of millions of refugees. Economic collapse would surely follow, straining our political and social institutions to the breaking point. Much of the infrastructure which supports our technology also would be severely impacted. Major ports may be destroyed by tsunamis or earthquakes, disrupting supply chains for food and causing food shortages, even assuming that food production itself remains intact. Ultimately a sizable chunk of the human population will perish. But unless Earth goes the way of Mars or Venus, humanity will survive and, over time, adapt.
It's useless to debate whether this is a natural cycle or a result of human activity. It would be great if we could limit our carbon emissions, but I fear we're already two or three decades too late. What matters now is trying to understand what is happening, anticipate the possible consequences, and prepare. What we need is something like the International Geophysical Year, except that intensive level of research needs to be sustained for at least a decade.
The article says it's nothing to worry about. Well that's what they said to Jor-El, and you know how that turned out.
The shift in mass distribution caused by melting ice will cross the boundaries of tectonic plates, changing the relative pressure on adjacent plates. This will likely lead to increased earthquake and volcanic activity. On the bright side, the ash from the volcanoes may limit global warming (maybe even trigger an ice age), and deformations of the sea floor may reduce sea level rise (or make it worse). Another possible impact is the triggering of a geomagnetic reversal.
The political debates about climate change are futile. What we should be discussing is whether we know enough about how this planet works (and have the technology) to attempt some kind of active intervention, such as carbon sequestration or actually blocking sunlight from space. But we'll probably just end up fighting over whatever habitable parts of the planet remain. Maybe the survivors will be wiser.
They don't really give a shit about the data in this case, they want to cow the tech sector into not making their jobs harder.
Maybe they care about the data, but it's likely they have other ways to brute force the passcode. This battle with the tech sector over encryption has been ongoing for more than a decade. What's different about this case is that it is the best opportunity the government has had to use fear of more mass killing to shut down the thinking part of the average person's brain. Their goal is to ensure that they have the keys to decrypt anything encrypted by the general public. (Anybody remember key escrow?)
Anyone with a basic technical understanding of how encryption works knows that there is no way to stop a knowledgeable person from implementing encryption in software, and keeping their keys private. So this is really about preventing the average person who lacks that knowledge from having unbreakable encryption. It's interesting that the situation with the general public and firearms is similar, and in fact cryptography was once classified as a munition. It seems to me that a liberal interpretation of the Second Amendment might apply to encryption. I point that out especially for those of you who feel entitled to assault weapons under the Second Amendment.
Personally, I think we need to look at personal devices, and perhaps even our use of search engines, as extensions of our minds and as such, should be treated by the law with the utmost concern for privacy. After all, the technology to actually read minds is advancing, and the day may come when the precedents we set today for our personal devices are applied to our brains.
Just imagine, there could be a phone app that displays an arrow to show the user which way to walk. Using lidar to detect obstacles, the app could enable a phone zombie to become almost self-driving, avoiding obstacles and other people. Almost like a real person.
With self-driving cars I expect parking will become like having valet parking everywhere. Think of how guests arrive and leave at a large hotel. There will need to be a reasonably sized area where cars can come and stop to pick up and drop off passengers and their stuff. Once empty, the cars will go and park themselves in high-density fashion. Your typical Safeway parking lot will need to be reorganized to accommodate this.
There will be an opportunity to reduce the space allotted to parking at many places.
I have serious doubts about the practicality of aerial drones, at least for deliveries to individual consumers. What happens when a drone shows up at your door and the family dog attacks it? There are many other problems which other posters have mentioned.
I would really like to see autonomous, road-based drones developed. A road-based drone could be much smaller, lighter, and cheaper (in mass production) than a car, since it wouldn't need to carry people. Road drones would need the same sorts of sensors as self-driving cars, so if road drones were mass produced, self-driving cars could reap the benefit of cheaper sensors. Road drones also would need software that would be very similar to that required by a self-driving car. But a road drone, being smaller, lighter, and cheaper, would usually cause less damage if it were involved in an accident. So it might serve as a testbed for new software, before it is deployed in self-driving cars.
The trick would be to come up with a form factor that could share the roads with existing traffic. I was thinking of something the size of a bicycle, a Segway, or a very small car. But it just occurred to me that there is a lot of space underneath most cars. Maybe a road drone could just position itself under a car and stay there as long as the car is going its way. Making a transition to another car if a drone's car turns in the wrong direction would be tricky, and perhaps not even possible if all the surrounding cars have their own drones. Ideally you would want the humans driving their cars to be able to completely ignore the drones, as the drones would be smart enough and fast enough to keep themselves from being squished. Of course it would be easier if there were a substantial number of self-driving cars on the road, and the drones and cars communicated and coordinated.
I would love to work on developing something like that.
Of course vulnerabilities remain. But when you're deliberately aiming for a secure *system*, they're a lot less impactful. Kinda like how turning ASLR on simply nullifies entire classes of vulnerabilities. MULTICS, according to your paper, didn't have problems with buffer overflows. Thirty years ago, this was a solved problem. Why is it an ongoing problem now?
Because programming languages like C/C++ are still in wide use. I suppose most people who still use these languages would tell you that they must, for reasons of efficiency. Then they would start talking about how their application can't tolerate pauses for garbage collection. But of course you could have a language which supports manual allocation of data types, with a maximum length that is enforced at runtime.
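A toy model of that last idea - manual allocation with a maximum length enforced at runtime - can be sketched in a few lines. This is illustrative only (the class and its behavior are my invention, not a real language feature); in a real language design the check would be compiled in, not written by hand.

```python
class BoundedBuffer:
    """Toy model: an explicitly allocated buffer whose capacity
    is checked on every write, so it can refuse to overflow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = bytearray()

    def append(self, chunk):
        # The runtime-enforced bound: reject the write instead of
        # silently scribbling past the end of the allocation.
        if len(self.data) + len(chunk) > self.capacity:
            raise OverflowError("write exceeds allocated capacity")
        self.data += chunk

buf = BoundedBuffer(8)
buf.append(b"hello")       # 5 bytes into an 8-byte allocation: fits
try:
    buf.append(b"world!")  # 5 + 6 > 8: rejected at runtime
except OverflowError as e:
    print(e)
```

The point is that you get deterministic allocation without garbage collection pauses, at the cost of a length check - the kind of trade-off C programmers claim they can't afford but rarely measure.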
I've addressed why I think software engineering hasn't progressed more in a previous post. The argument I make there about hobbyists designing languages and the lack of industry support for standardization also applies to software security. But the problem of security is open-ended. We could have better languages which prevent all kinds of abuses of the hardware-level machine model, languages in which buffer overflows and stack overflow exploits are impossible. But then someone writes a program that builds a SQL query in a string, but doesn't take the necessary precautions, and you have SQL injection. Now the SQL interpreter is in some sense another level of virtual machine which needs to be protected from abuse. It's not hard to do that if your program creates SQL queries using a data structure that supports a higher level of abstraction than strings. But if a SQL client library is provided, and it takes a SQL query as a string, building a safer level of abstraction on top of that probably isn't going to occur to most programmers. Nor will they necessarily take the time to discover that someone else has implemented a higher-level interface. Strings are what they know, and strings are what they'll use.
This is not to say that I believe secure software is impossible. But it is a moving target that can't be addressed simply by instilling in programmers a comprehensive list of secure programming DOs and DON'Ts. Programmers really need to be able to recognize when their code may be creating new kinds of security vulnerabilities.
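Such a safer layer on top of a string-taking client library can be quite small. Here's a hypothetical sketch in Python (the `select` helper and its validation rules are my own illustration, not a real library): callers describe the query as structured data, identifiers are whitelisted, and values only ever travel as bound parameters.

```python
import sqlite3

def select(conn, table, columns, **conditions):
    """Hypothetical higher-level interface: callers never write SQL text.
    Table/column names are validated; condition values are bound parameters."""
    for name in [table, *columns, *conditions]:
        if not name.isidentifier():  # crude whitelist for identifiers
            raise ValueError(f"bad identifier: {name!r}")
    sql = "SELECT %s FROM %s" % (", ".join(columns), table)
    if conditions:
        sql += " WHERE " + " AND ".join(f"{k} = ?" for k in conditions)
    return conn.execute(sql, tuple(conditions.values())).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

print(select(conn, "users", ["role"], name="alice"))         # finds the row
print(select(conn, "users", ["role"], name="x' OR '1'='1"))  # attack is inert data
```

A dozen lines, and injection through this interface becomes impossible by construction - but as argued above, it only helps the programmers who think to build or find it.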
Your code should be more efficient!