Just imagine: there could be a phone app that displays an arrow to show the user which way to walk. Using the phone's lidar to detect obstacles, the app could make a phone zombie almost self-driving, avoiding obstacles and other people. Almost like a real person.
With self-driving cars I expect parking will become like having valet parking everywhere. Think of how guests arrive and leave at a large hotel. There will need to be a reasonably sized area where cars can stop to pick up and drop off passengers and their stuff. Once empty, the cars will go and park themselves in high-density fashion. Your typical Safeway parking lot will need to be reorganized to accommodate this.
There will be an opportunity to reduce the space allotted to parking at many places.
I have serious doubts about the practicality of aerial drones, at least for deliveries to individual consumers. What happens when a drone shows up at your door and the family dog attacks it? There are many other problems which other posters have mentioned.
I would really like to see autonomous, road-based drones developed. A road-based drone could be much smaller, lighter, and cheaper (in mass production) than a car, since it wouldn't need to carry people. Road drones would need the same sorts of sensors as self-driving cars, so if road drones were mass produced, self-driving cars could reap the benefit of cheaper sensors. Road drones also would need software that would be very similar to that required by a self-driving car. But a road drone, being smaller, lighter, and cheaper, would usually cause less damage if it were involved in an accident. So it might serve as a testbed for new software, before it is deployed in self-driving cars.
The trick would be to come up with a form factor that could share the roads with existing traffic. I was thinking of something the size of a bicycle, a Segway, or a very small car. But it just occurred to me that there is a lot of space underneath most cars. Maybe a road drone could just position itself under a car and stay there as long as the car is going its way. Making a transition to another car if a drone's car turns in the wrong direction would be tricky, and perhaps not even possible if all the surrounding cars have their own drones. Ideally you would want the humans driving their cars to be able to completely ignore the drones, as the drones would be smart enough and fast enough to keep themselves from being squished. Of course it would be easier if there were a substantial number of self-driving cars on the road, and the drones and cars communicated and coordinated.
I would love to work on developing something like that.
Of course vulnerabilities remain. But when you're deliberately aiming for a secure *system*, they're a lot less impactful. Kinda like how turning ASLR on simply nullifies entire classes of vulnerabilities. MULTICS, according to your paper, didn't have problems with buffer overflows. Thirty years ago, this was a solved problem. Why is it an ongoing problem now?
Because programming languages like C/C++ are still in wide use. I suppose most people who still use these languages would tell you that they must, for reasons of efficiency. Then they would start talking about how their application can't tolerate pauses for garbage collection. But of course you could have a language which supports manual allocation of data types, with a maximum length that is enforced at runtime.
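To make that concrete, here is a minimal sketch, in C++ itself, of the kind of type I have in mind (the class name and details are my own invention, not any particular library): memory is allocated and freed manually, with no garbage collector anywhere, but every access goes through a length check that is enforced at runtime.

    #include <cstddef>
    #include <cstring>
    #include <stdexcept>

    // A manually allocated buffer whose length is checked on every access.
    // No garbage collection: the owner decides when the memory is released.
    class BoundedBuffer {
    public:
        explicit BoundedBuffer(std::size_t capacity)
            : data_(new char[capacity]), capacity_(capacity) {}
        ~BoundedBuffer() { delete[] data_; }

        // Runtime bounds check: an out-of-range index raises an error
        // instead of silently overwriting adjacent memory.
        char& at(std::size_t i) {
            if (i >= capacity_) throw std::out_of_range("index past end of buffer");
            return data_[i];
        }

        // Copy in at most capacity_ bytes; never run past the end.
        void copy_in(const char* src, std::size_t len) {
            if (len > capacity_) throw std::length_error("input longer than buffer");
            std::memcpy(data_, src, len);
        }

        std::size_t capacity() const { return capacity_; }

    private:
        BoundedBuffer(const BoundedBuffer&) = delete;
        BoundedBuffer& operator=(const BoundedBuffer&) = delete;
        char* data_;
        std::size_t capacity_;
    };

The point isn't this particular class; it's that the length travels with the type, so the runtime has something to enforce. A bare char* gives it nothing.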
I've addressed why I think software engineering hasn't progressed more in a previous post. The arguments I make there about hobbyists designing languages and the lack of industry support for standardization also apply to software security. But the problem of security is open-ended. We could have better languages which prevent all kinds of abuses of the hardware-level machine model, languages in which buffer overflows and stack overflow exploits are impossible. But then someone writes a program that builds a SQL query in a string without taking the necessary precautions, and you have SQL injection. Now the SQL interpreter is in some sense another level of virtual machine which needs to be protected from abuse. It's not hard to do that if your program creates SQL queries using a data structure that supports a higher level of abstraction than strings. But if a SQL client library is provided, and it takes a SQL query as a string, building a safer level of abstraction on top of that probably isn't going to occur to most programmers. Nor will they necessarily take the time to discover that someone else has implemented a higher-level interface. Strings are what they know, and strings are what they'll use.
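To illustrate what I mean by a higher level of abstraction than strings, here's a sketch (assuming SQLite's C API as the client library; any database interface with bound parameters works the same way). In the first function the untrusted value becomes part of the SQL text; in the second it is handed to the library as data and can never be parsed as SQL.

    #include <sqlite3.h>
    #include <string>

    // Vulnerable: 'name' is spliced into the SQL text, so input such as
    //   x' OR '1'='1
    // changes the meaning of the query.
    int lookup_unsafe(sqlite3* db, const std::string& name) {
        std::string sql = "SELECT id FROM users WHERE name = '" + name + "';";
        return sqlite3_exec(db, sql.c_str(), nullptr, nullptr, nullptr);
    }

    // Safer: the query is a fixed template, and 'name' is passed through a
    // bound parameter, so it is treated purely as a value.
    int lookup_safe(sqlite3* db, const std::string& name) {
        sqlite3_stmt* stmt = nullptr;
        int rc = sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?;",
                                    -1, &stmt, nullptr);
        if (rc != SQLITE_OK) return rc;
        sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
            // ... read sqlite3_column_int(stmt, 0) here ...
        }
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }

The safe version is barely longer, but someone has to know to write it, which is exactly my point.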
This is not to say that I believe secure software is impossible. But it is a moving target that can't be addressed simply by instilling in programmers a comprehensive list of secure programming DOs and DON'Ts. Programmers really need to be able to recognize when their code may be creating new kinds of security vulnerabilities.
MULTICS eh? Here's an interesting paper looking back on MULTICS security:
In spite of the fact that security was a top priority for MULTICS, in spite of the fact that it was written in PL/I rather than C, in spite of the fact that it was a very small, less complex system by today's standards, in spite of the fact that it was more secure than most modern systems, MULTICS was easily penetrated during the security evaluation. So I maintain my original position that writing secure software is hard. So hard that even when people are diligently trying to write secure software, vulnerabilities remain.
MIT's ITS was another system from that era, and deliberately insecure. It even had a non-privileged "crash" command to crash the system, and logins were optional.
From the linked article:
"Overall, Brian Gorenc, manager of vulnerability research for HP Security Research, said that one of the surprises at the Pwn2Own 2015 event was the amount of Windows kernel vulnerabilities that showed up, though he noted that HP, in a way, expected it."
Although many exploits may be against vulnerabilities in the browser code, I have to wonder how we can expect a browser implementer to write secure code if kernel implementers can't. In my view the basic problem is that the goal for security in almost all software is that it be just "good enough", where "good enough" is a bar which is raised as a piece of code gets a reputation for being insecure. With the possible exceptions of governments and a few financial service companies, no one really wants to pay the cost of ensuring that software is secure. So we make a game of it, with prizes, like Pwn2Own, in an attempt to amortize the cost.
You would think that the evolution of smart mobile devices would provide the opportunity to not repeat the mistakes of the past. And it is true that mobile devices have security managers which provide some granularity to the rights that an application can be granted. But my experience has been that apps that I install on a mobile device often require more rights than it seems they should. In practice the decision I make when I install an app is, "Do I trust this app or not?" And I either grant it all the rights it wants or not. (Actually what I really think is that mobile devices are not really secure, that the security manager is effectively "security theater", so I don't put anything on a mobile device that I wouldn't want the world to see.)
An economy is a mechanism for regulating human (so far) behavior. If you're an economist, an economy is a means of regulating production and consumption, usually with a goal of achieving some kind of balance. But a computer scientist might view the mechanism itself as a (usually) distributed algorithm. The salient points are how data enters the system and how it gets processed as it moves through the system. Capitalism, for example, uses a distributed data structure we call "prices" to represent the state of supply vs. demand. Because the data is distributed, all the familiar problems of concurrent, distributed systems have to be addressed in some way.
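As a toy illustration of that computer scientist's view (my own sketch, not anyone's model of a real market): treat a price as a piece of shared state, and the "algorithm" as everyone repeatedly nudging it in proportion to the gap between demand and supply.

    #include <cstdio>

    // Toy price-adjustment loop: the price is the data structure, and the
    // update rule moves it toward the point where demand equals supply.
    double demand(double price) { return 100.0 - 2.0 * price; } // buyers want less as price rises
    double supply(double price) { return 10.0 + 1.0 * price; }  // sellers offer more as price rises

    int main() {
        double price = 5.0;        // initial guess
        const double step = 0.05;  // how strongly excess demand moves the price
        for (int t = 0; t < 200; ++t) {
            double excess = demand(price) - supply(price);
            price += step * excess; // excess demand pushes the price up, excess supply pushes it down
        }
        std::printf("clearing price ~= %.2f\n", price); // settles near 30 for these curves
        return 0;
    }

In a real economy the updates happen concurrently, with stale and partial information, which is where all those familiar distributed-systems problems come in.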
However, just as software is typically built in layers, from firmware, to operating systems, to frameworks, to applications, once you have an economy, it is irresistible to build more complexity on top of it. So we use our economy to regulate human behavior in ways other than production and consumption, through the use of taxes, fines, and additional rules on what can be bought and sold, and who can work at what jobs.
The goal, as always, is to control human behavior. There are a few things that set humans apart from other species, but one of the most under-recognized is our instinct to control things, including other humans. This is built into our DNA and is surely a big factor in our successful proliferation as a species. And it is something that the coming of the machine age will not change over anything less than evolutionary time scales, unless human nature itself is re-engineered.
But what does change as information and telecommunication technologies advance is the rate at which a system like an economy can process data, and the scale at which it can do it. The global economy is already almost completely integrated, and is becoming increasingly tightly coupled. And yet, humans are unceasing in their desire to control it, and to use it to control other humans.
What happens to people who can't find jobs? Some people say a basic income is the solution. But: pwned by the government. What is already happening? People living on credit cards. But: pwned by the banks. People going to school to qualify for better jobs. But: pwned by student loan debt. Is it even possible to have a society where most people aren't pwned? Could being pwned by a machine be any worse?
And that's tonight's word.
(You will be missed, sir.)
As someone with 45+ years of software experience
44+ years here. Old-timers represent!
I can personally verify that software development has not improved significantly over the last 25 years or so.
I can't be certain, but I strongly believe that one of the reasons for the lack of progress is that there are not a lot of old programmers still in the profession.
I don't think that's it at all. It seems to me that the problem is two-fold. First, academia has lost either interest or influence or both in the area of software engineering. I remember when research in novel operating systems and programming languages was abundant. Now, instead of professors and Ph.D. students who have taken some time to study prior work, hobbyists are the ones developing new programming languages and operating systems. The problem is not so much with the hobbyists themselves - some of them are extremely capable. Rather, the problem is that the initial work is supported only by the personal enthusiasm of the hobbyist.
Which brings me to the second part of the problem: the software industry seems to have lost all interest in funding R&D to improve software engineering tools. There used to be a healthy segment of the software market involved in making tools for software developers. And that's probably because companies were willing to pay to buy those tools for their developers. Now we just use the free versions. Or wait for a hobbyist to save us.
Industry has also failed in the area of making software standards. Standards bodies have become just another field of corporate battle, where companies seek to either control developing standards or kill them. Software patents are part of that problem. But the short-sightedness of companies in understanding the long-term value of standards is the more fundamental problem.
Because the operating systems that run those downloaded apps were not designed to run them securely. Even the newer mobile operating systems and their security managers are not really up to the task. One is forced to grant broad privileges to many apps in order to use them. The user needs to have finer control over what an app can do with the network or local files, rather than being asked for blanket permissions when the app is installed. In some cases that control need not be explicit, but can be implicit in how the user interacts with the app. For example, if I ask an app to open a local file, that could implicitly grant read access to the app for that particular file (within the limits of my own rights on a multiuser system).
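Here is a sketch of what I mean by an implicit grant (the interface names are hypothetical, not any real security manager's API): the app has no call that takes a path, so the only way it can read a file is through a handle the system returns when the user picks one, and the act of picking is the grant.

    #include <memory>
    #include <vector>

    // Hypothetical OS-mediated interface: the app can read only through
    // handles the system hands back after the user chooses a file in a
    // dialog the app itself cannot fake or pre-fill.
    class ReadableFile {
    public:
        virtual ~ReadableFile() = default;
        virtual std::vector<char> read_all() = 0;  // access to this one file only
    };

    class SystemFilePicker {
    public:
        virtual ~SystemFilePicker() = default;
        // Returns a handle only if the user actually picked a file; the
        // user's choice *is* the permission grant, scoped to that file.
        virtual std::unique_ptr<ReadableFile> ask_user_to_pick_file() = 0;
    };

    // The app's code: there is no open(path) to misuse, so it cannot quietly
    // read files the user never pointed it at.
    std::vector<char> import_document(SystemFilePicker& picker) {
        auto file = picker.ask_user_to_pick_file();
        if (!file) return {};  // user cancelled: nothing was granted
        return file->read_all();
    }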
The evolutionary pressures on browsers are obviously moving them closer to being operating systems themselves. There's nothing wrong with that in principle. In fact it restores the ability to tinker with OS features and structure without having to worry about writing device drivers for every device in the world. However, if browser developers simply mimic the features of existing OS's, we will soon find our way back to square one.
Regarding the original topic, I think one would have to have an awfully good reason not to make a web OS multilingual. The only such reason that occurs to me is the possibility that the language itself may be integral to the security manager. Back in the day when there was a vibrant OS research community, people actually did experiment with OS's that attempted to forgo the overhead of virtual memory management by putting all applications in the same address space and implementing containment via the programming language. However, I tend to think that any containment or security management that a particular language can provide could just as easily be implemented in a byte-code VM.
When I started 44+ years ago, the hard part of programming was getting programs to fit the memory size and processor speed of the computers of that day. We often wrote in assembly language, because there was frequently no other reasonable choice. Scientific code was written in FORTRAN, business code in COBOL (or maybe RPG). A little later C came along and became a viable alternative to assembly language for many things. There was much experimentation with languages in those days, so there were a lot of other languages around, but FORTRAN, COBOL, and C/assembly were the major ones in use.
One thing we did have in those days, which I recall with great fondness, were manuals. There were manuals for users of computer systems, and manuals for programmers, describing the operating system and library interfaces. There were people who specialized in writing such manuals, and some of these people were quite good at it. But at some point manual writing became a lost art, and the industry is poorer for it.
Over time, the machines reached a level of capability that exceeded the requirements for the kind of programs we were writing. It was rare to find a program (code, not data) that wouldn't fit in memory, and compiler technology had advanced to the point that we didn't need to obsess about the speed of code sequences in assembly language. So we started writing larger and more complex programs, and the main difficulty of programming was managing and testing the code. Source code control systems were developed, and eventually continuous build systems, but testing remains an important concern to this day.
As computer networking evolved, the problems of managing concurrency and asynchronicity became more severe. Many approaches have been developed to deal with these problems, such as threads and monitors, but this remains an area of difficulty, and is more important than ever with the advent of multicore processors.
By the mid-90's, we'd learned about all we could from the structured programming craze, and were well into the thrall of object-oriented programming, with many of us programming in C++. Large teams of programmers became common, and for them, source code control and continuous build systems were essential. Then the web finally arrived on the scene, and a great war between corporations for control of the web platform began.
Now we are in the age of frameworks. Everybody and his dog has a framework, and most of them are very poorly documented. I believe that it is only by virtue of forums like stackoverflow that some of them are usable at all. But many of them are aimed squarely at dealing with the problematic "web platform", and for that we should all be grateful. However, we really need to get some smart people together and design a new web platform, including a reasonable migration path from the current mess. The problem is, as we approach the 20-year anniversary of the web debacle, there is an entire generation of programmers who have never known anything else.
The nature of programming will continue to evolve, but evolution is not a particularly efficient process. If we are indeed intelligent, we should be capable of intelligent design.
Really? For a long time the MS JVM was the fastest and most compatible (according to Sun's own verification suite) VM available for any browser on any OS.
The whole thing makes interesting reading, but just search for Java and JVM to see what I'm talking about. Microsoft was following their "embrace, extend, and extinguish" strategy, which had worked well for them many times before. They had a Java-like language, J++, that was not compatible with Sun's Java.
What will not change is that corporations will continue to try to own the next standard. That will inevitably fail, and the software industry will be the poorer for it.