
Comment Tonight's Word: pwned (Score 2) 628

An economy is a mechanism for regulating human (so far) behavior. If you're an economist, an economy is a means of regulating production and consumption, usually with a goal of achieving some kind of balance. But a computer scientist might view the mechanism itself as a (usually) distributed algorithm. The salient points are how data enters the system and how it gets processed as it moves through the system. Capitalism, for example, uses a distributed data structure we call "prices" to represent the state of supply vs. demand. Because the data is distributed, all the familiar problems of concurrent, distributed systems have to be addressed in some way.
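To make that concrete, here is a toy sketch in Python of a price acting as the feedback variable that balances supply against demand. The market model and every number in it are invented for illustration; a real market is distributed and asynchronous, which is exactly where the hard problems live.

```python
# A toy sketch of prices as a balancing mechanism. The linear market
# below is hypothetical; the point is just the feedback loop.
def clearing_price(demand, supply, price=1.0, rate=0.05, steps=1000):
    """Nudge the price toward the point where supply meets demand."""
    for _ in range(steps):
        excess = demand(price) - supply(price)
        price += rate * excess    # excess demand pushes the price up
        price = max(price, 0.01)  # keep the price positive
    return price

demand = lambda p: max(100 - 2 * p, 0)  # demand falls as price rises
supply = lambda p: 3 * p                # supply rises with price
print(round(clearing_price(demand, supply), 2))  # converges to 20.0
```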

However, just as software is typically built in layers, from firmware, to operating systems, to frameworks, to applications, once you have an economy, it is irresistible to build more complexity on top of it. So we use our economy to regulate human behavior in ways other than production and consumption, through the use of taxes, fines, and additional rules on what can be bought and sold, and who can work at what jobs.

The goal, as always, is to control human behavior. There are a few things that set humans apart from other species, but one of the most under-recognized is our instinct to control things, including other humans. This is built into our DNA and is surely a big factor in our successful proliferation as a species. And it is something that the coming of the machine age will not change over anything less than evolutionary time scales, unless human nature itself is re-engineered.

But what does change as information and telecommunication technologies advance is the rate at which a system like an economy can process data, and the scale at which it can do it. The global economy is already almost completely integrated, and is becoming increasingly tightly coupled. And yet, humans are unceasing in their desire to control it, and to use it to control other humans.

What happens to people who can't find jobs? Some people say a basic income is the solution. But: pwned by the government. What is already happening? People living on credit cards. But: pwned by the banks. People going to school to qualify for better jobs. But: pwned by student loan debt. Is it even possible to have a society where most people aren't pwned? Could being pwned by a machine be any worse?

And that's tonight's word.
(You will be missed, sir.)

Comment Re:Software fails the test of time (Score 1) 370

As someone with 45+ years of software experience

44+ years here. Old-timers represent!

I can personally verify that software development has not improved significantly over the last 25 years or so.

I can relate to where you're coming from with that statement, but to me it seems more like a lot of backwardness is obscuring the forward progress that has been made. Programming languages, in spite of the horrors of PHP and JavaScript that you mention, are becoming more powerful. Just being able to program most things in a language that has garbage collection is progress in my book. Give me Python or Scala over Fortran or Pascal any day. I also think that the modern emphasis on test-driven development and tool chains is a step forward. On the other hand, I find client-side web development to be a simply appalling mess.
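As a tiny illustration of why I count test-driven development as progress, here is the basic test-first shape sketched in Python. The function and its tests are invented examples, not anyone's real code; the tests state the contract, and the implementation has to earn its keep.

```python
# A minimal test-first sketch with an invented utility function.
import unittest

def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace to single spaces and strip the ends."""
    return " ".join(text.split())

class TestNormalizeWhitespace(unittest.TestCase):
    def test_collapses_runs(self):
        self.assertEqual(normalize_whitespace("a   b\t c"), "a b c")

    def test_strips_ends(self):
        self.assertEqual(normalize_whitespace("  hello  "), "hello")

if __name__ == "__main__":
    unittest.main()
```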

I can't be certain, but I strongly believe that one of the reason for the lack of progress is that there are not a lot of old programmers still in the profession.

I don't think that's it at all. It seems to me that the problem is two-fold. First, academia has lost either interest or influence or both in the area of software engineering. I remember when research in novel operating systems and programming languages was abundant. Now, instead of professors and Ph.D. students who have taken some time to study prior work, hobbyists are the ones developing new programming languages and operating systems. The problem is not so much with the hobbyists themselves - some of them are extremely capable. The problem, rather, is that the initial work is supported only by the personal enthusiasm of the hobbyist.

Which brings me to the second part of the problem: the software industry seems to have lost all interest in funding R&D to improve software engineering tools. There used to be a healthy segment of the software market involved in making tools for software developers. And that's probably because companies were willing to pay to buy those tools for their developers. Now we just use the free versions. Or wait for a hobbyist to save us.

Industry has also failed in the area of making software standards. Standards bodies have become just another field of corporate battle, where companies seek to either control developing standards or kill them. Software patents are part of that problem. But the short-sightedness of companies in understanding the long-term value of standards is the more fundamental problem.

Comment Re:Why? (Score 1) 309

Because the operating systems that run those downloaded apps were not designed to run them securely. Even the newer mobile operating systems and their security managers are not really up to the task. One is forced to grant broad privileges to many apps in order to use them. The user needs to have finer control over what an app can do with the network or local files, rather than being asked for blanket permissions when the app is installed. In some cases that control need not be explicit, but can be implicit in how the user interacts with the app. For example, if I ask an app to open a local file, that could implicitly grant read access to the app for that particular file (within the limits of my own rights on a multiuser system).
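Here is a sketch of what that implicit grant could look like, in Python. All the names are hypothetical; the point is that the trusted file picker runs with the user's authority and hands the app an already-open, read-only handle, so the app never needs blanket filesystem permission.

```python
# A sketch of the implicit-grant idea: picking the file *is* the grant.
# SandboxedApp and user_picks_file are invented names for illustration.
import io

class SandboxedApp:
    """An app with no filesystem access of its own."""
    def open_document(self, handle: io.TextIOBase) -> str:
        # The app can read only through the handle it was given.
        return handle.read()

def user_picks_file(path: str) -> io.TextIOBase:
    """Trusted shell code: returns a read-only capability for one file."""
    return open(path, "r")

app = SandboxedApp()
# print(app.open_document(user_picks_file("/home/me/notes.txt")))
```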

The evolutionary pressures on browsers are obviously moving them closer to being operating systems themselves. There's nothing wrong with that in principle. In fact it restores the ability to tinker with OS features and structure without having to worry about writing device drivers for every device in the world. However, if browser developers simply mimic the features of existing OS's, we will soon find our way back to square one.

And then there is the god-forsaken mess that is HTML/CSS/JavaScript. If you look at how evolution is shaping those, it's obviously trying its best to turn them into a decent window system. People are already using virtual DOM's to speed up dynamic HTML because the real DOM is such a beast. It won't be long before someone implements a mapping from a virtual DOM to HTML5 canvas, and then we can finally start to think about whether the DOM itself is more trouble than it's worth.
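For what it's worth, the core of a virtual DOM is just a tree diff. Here is a toy version in Python, with nodes as invented (tag, text, children) tuples, that emits the patch operations a renderer, whether HTML or canvas, would then apply.

```python
# A toy virtual-DOM diff. The node shape and patch ops are invented
# for illustration; real implementations add keys, attributes, etc.
def diff(old, new, path="root"):
    """Return a list of patch operations turning `old` into `new`."""
    if old is None:
        return [("create", path, new)]
    if new is None:
        return [("delete", path)]
    if old[0] != new[0]:                       # tag changed: replace subtree
        return [("replace", path, new)]
    patches = []
    if old[1] != new[1]:                       # text changed in place
        patches.append(("set_text", path, new[1]))
    old_kids, new_kids = old[2], new[2]
    for i in range(max(len(old_kids), len(new_kids))):
        o = old_kids[i] if i < len(old_kids) else None
        n = new_kids[i] if i < len(new_kids) else None
        patches.extend(diff(o, n, f"{path}/{i}"))
    return patches

old = ("div", "", [("p", "hello", [])])
new = ("div", "", [("p", "goodbye", []), ("p", "world", [])])
print(diff(old, new))
# [('set_text', 'root/0', 'goodbye'), ('create', 'root/1', ('p', 'world', []))]
```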

Regarding the original topic, I think one would have to have an awfully good reason not to make a web OS multilingual. The only such reason that occurs to me is the possibility that the language itself may be integral to the security manager. Back in the day when there was a vibrant OS research community, people actually did experiment with OS's that attempted to forgo the overhead of virtual memory management by putting all applications in the same address space and implementing containment via the programming language. However, I tend to think that any containment or security management that a particular language can provide could just as easily be implemented in a byte-code VM.

Comment The difficulty has evolved (Score 2) 391

When I started 44+ years ago, the hard part of programming was getting programs to fit the memory size and processor speed of the computers of that day. We often wrote in assembly language, because it was often the only reasonable choice. Scientific code was written in FORTRAN, business code in COBOL (or maybe RPG). A little later C came along and became a viable alternative to assembly language for many things. There was much experimentation with languages in those days, so there were a lot of other languages around, but FORTRAN, COBOL, and C/assembly were the major ones in use.

One thing we did have in those days, which I recall with great fondness, was manuals. There were manuals for users of computer systems, and manuals for programmers, describing the operating system and library interfaces. There were people who specialized in writing such manuals, and some of these people were quite good at it. But at some point manual writing became a lost art, and the industry is poorer for it.

Over time, the machines reached a level of capability that exceeded the requirements for the kind of programs we were writing. It was rare to find a program (code, not data) that wouldn't fit in memory, and compiler technology had advanced to the point that we didn't need to obsess about the speed of code sequences in assembly language. So we started writing larger and more complex programs, and the main difficulty of programming was managing and testing the code. Source code control systems were developed, and eventually continuous build systems, but testing remains an important concern to this day.

As computer networking evolved, the problems of managing concurrency and asynchronicity became more severe. Many approaches have been developed to deal with these problems, such as threads and monitors, but this remains an area of difficulty, and is more important than ever with the advent of multicore processors.
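Monitors, at least, have held up well. Here is a minimal sketch in Python: a bounded buffer guarded by a lock and condition variable, the textbook producer/consumer shape.

```python
# A minimal bounded buffer built as a monitor (lock + condition variable).
# Sketch only; a production version would want timeouts and shutdown.
import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity: int):
        self._items = deque()
        self._capacity = capacity
        self._cond = threading.Condition()  # the monitor's lock and wait queue

    def put(self, item):
        with self._cond:
            while len(self._items) >= self._capacity:
                self._cond.wait()           # block until a slot frees up
            self._items.append(item)
            self._cond.notify_all()         # wake any waiting consumers

    def get(self):
        with self._cond:
            while not self._items:
                self._cond.wait()           # block until an item arrives
            item = self._items.popleft()
            self._cond.notify_all()         # wake any waiting producers
            return item
```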

By the mid-90's, we'd learned about all we could from the structured programming craze, and were well into the thrall of object-oriented programming, with many of us programming in C++. Large teams of programmers became common, and for them, source code control and continuous build systems were essential. Then the web finally arrived on the scene, and a great war between corporations for control of the web platform began.

How did that war end? Well, everyone lost. We got HTML, CSS, and JavaScript. Some people call that a platform. I call it an abomination. But the alternative would have been to have no standard at all, which I'm sure would have pleased some of the combatant corporations, but was never really a viable option.

Now we are in the age of frameworks. Everybody and his dog has a framework, and most of them are very poorly documented. I believe that it is only by virtue of forums like stackoverflow that some of them are usable at all. But many of them are aimed squarely at dealing with the problematic "web platform", and for that we should all be grateful. However, we really need to get some smart people together and design a new web platform, including a reasonable migration path from the current mess. The problem is, as we approach the 20-year anniversary of the web debacle, there is an entire generation of programmers who have never known anything else.

The nature of programming will continue to evolve, but evolution is not a particularly efficient process. If we are indeed intelligent, we should be capable of intelligent design.

Comment Re:Replusive (Score 1) 505

Really? For a long time the MS JVM was the fastest and most compatible (according to Sun's own verification suite) VM available for any browser on any OS.

MEMORANDUM OF THE UNITED STATES IN SUPPORT OF MOTION FOR PRELIMINARY INJUNCTION

The whole thing makes interesting reading, but just search for Java and JVM to see what I'm talking about. Microsoft was following their "embrace, extend, and extinguish" strategy, which had worked well for them many times before. They had a Java-like language, J++, that was not compatible with Sun Java.

Sun has to share some of the blame for Java failing to become a browser standard, as they did everything they could to maintain control of Java, rather than turn it over to an open standards body. But in fairness they did invent it, and Microsoft had the resources to dominate any open standards effort through sheer numbers of representatives. In contrast, Netscape did turn JavaScript over to Ecma (a very wily choice, I think), and I don't think we'd still be discussing it if they had not.

Comment Re:Replusive (Score 1) 505

JavaScript thrived because the alternatives were arguably far worse. Java applets were terrible. ActiveX a platform specific disaster. Flash is heavy. JavaScript allowed you to do the very minor things most web developers wanted at the time without having to turn your website into a plugin that disregarded base web technologies.

No, JavaScript survived because it wasn't seen to threaten corporate interests until it was too late to stop it. Java applets failed because Microsoft did everything in their power to kill them. Yes, there were problems with the technology, but early versions of JavaScript offered far less functionality. The only reason people still talk about how bad applets are/were is that they actually got used to solve problems which JavaScript could not. Flash was also used heavily and is still hanging on, though probably not for much longer.

JavaScript is the result of the major players of that time, Microsoft, Sun, Netscape, and Adobe, failing to realize that an open standard would be in their best interests. It is unfortunately the case that corporations tend not to endorse open standards until no alternative is left. This happened with TCP/IP: by the time corporations like IBM and DEC got around to collaborating on ISO/OSI as the future replacement for their proprietary network protocols, network companies like Cisco and many others were already bringing products to market based on TCP/IP. The ISO/OSI protocols got to the party just a little too late.

So we have TCP/IP with its NAT and its ever-so-drawn-out migration to IPv6, when we could have had something better. And we have JavaScript, which was a cute solution to making live widgets on a web page when it was invented, but like TCP/IP has been pushed far, far beyond what it was designed to do. The inevitable fate of such technologies is to become a victim of their own success. So it will be with JavaScript. I don't know whether that will be because it is replaced by some language that compiles into JavaScript, or whether some new browser runtime such as Dart will do it. Or perhaps the mobile device market will make browsers themselves obsolete, in favor of some mobile O/S.

What will not change is that corporations will continue to try to own the next standard. That will inevitably fail, and the software industry will be the poorer for it.

Submission + - Are we on the verge of being able to regrow lost limbs? This scientist thinks so (medium.com)

blastboy writes: Michael Levin is a Russian-American scientist who started life programming Pac-Man clones on his TRS-80 and freezing bugs in his kitchen. In recent years, he's become fascinated by the role electricity plays in the body—and whether it could even help people regrow lost limbs. Guess what? It's starting to pay off: his team at Tufts is on the verge of a major breakthrough in regenerative medicine. It was just announced that this story was awarded the 2013 Institute of Physics prize for science journalism.

Comment Re:Spiritual Malaise? (Score 1) 385

I'm not going to try to define it, but if you want to measure it, the suicide rate should be well-correlated. I looked for some statistics that would go back to 1964, and didn't immediately find any (though I'm sure they're out there). But the rate has spiked lately, even surpassing auto accident fatalities.

Comment Software is brittle (Score 2) 58

As a software developer, I agree that lawyers could learn a lot from software development methodology. However, when we start talking about applying software representations to law and making it 'computable', we should remember that a fundamental property of software (at least so far) is that it is brittle. I don't think you want law to be brittle. I don't think you want legal contracts that can be subverted by a buffer overflow (although that definitely would make things interesting).

Laws as they are often implemented also have a tendency toward brittleness, often due to over-specificity. Laws should have a purpose and be based on principles, and it should be possible to challenge either a particular application of a law, or its existence, on the basis of failing to serve its purpose or violating its principles. A law is a mechanism for implementing a policy. But it is a characteristic of mechanisms that they often have edge cases that they cannot handle, and with sufficient complexity, bugs are inevitable.
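To make the brittleness point concrete, here is a toy "computable law" in Python. The statute and every number are invented; note how inputs nobody anticipated sail right through.

```python
# A toy "computable law" with a classic edge-case bug. The rule and
# thresholds are invented for illustration.
def school_zone_fine(speed_mph: int, limit_mph: int = 20) -> int:
    """Fine: $50 plus $10 per mph over the limit."""
    if speed_mph > limit_mph:
        return 50 + 10 * (speed_mph - limit_mph)
    return 0

print(school_zone_fine(25))     # 100: the intended case
print(school_zone_fine(-5))     # 0: a corrupt sensor reading sails through
print(school_zone_fine(10**9))  # an absurd input yields an absurd fine
```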

One can view our court system as acting in a role that is somewhat analogous to a support organization for software. But its ability to issue patches mostly takes the form of declaring laws unconstitutional, or establishing precedents for the interpretation of a law, which they hope will have the force of law. We really need actual bug reporting and tracking for laws, and to hold our legislatures accountable for fixing them.

Comment The future of drones (Score 1, Troll) 41

I imagine the day will come when flying robotic insects smaller than these (and untethered) will be able to deliver a lethal chemical or biological injection to a selected human target. They could be piloted from a smart phone. Think about the implications of that, in light of our current drone program.

But the really funny thing is all the gun nuts who have so religiously pursued the acquisition of automatic weapons to defend their liberty against our tyrannical government. It turns out that what they really will be needing are lots of flyswatters. Just picture them trying to deal with this threat with AK-47's. "Hold still, Charlie, while I shoot that drone buzzing your head."

We better get our act together. The future is coming, ready or not.

Comment policy vs. mechanism (Score 2) 544

In the design of operating systems, there is a notion of policy vs. mechanism. A good mechanism in an operating system is one that enables a number of different policies to be implemented. One such mechanism, found in most modern operating systems, is the scheduling of threads by priority. At first glance, this might seem to be a policy rather than a mechanism. But we haven't specified how priorities are assigned to threads. In fact, by assigning priorities in various ways, a number of different policies can be implemented, such as increasing the priority of interactive threads, or ensuring the priority of threads that have real-time requirements.

This mechanism is very flexible and powerful, but it is not without some problems. For example, if locking is supported between threads to control access to shared data, there is a potential for a higher priority thread to be stalled while a thread of intermediate priority continues to run, a situation known as priority inversion. This can happen if a lower priority thread holds a lock that the higher priority thread needs to acquire in order to continue. As long as the intermediate priority thread continues to run, the lower priority thread will not run, and the lock will not be released.
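Here is a toy single-stepped scheduler in Python that reproduces the scenario. Everything in it, the task model and the strict-priority rule, is simplified for illustration.

```python
# A toy scheduler demonstrating priority inversion: LOW holds a lock
# HIGH needs, but MID keeps winning the CPU, so LOW never finishes
# and HIGH never runs. Task model is invented for illustration.
def pick_next(tasks, lock_holder):
    """Strict priority scheduling: run the highest-priority runnable task.
    A task that needs the lock is blocked while someone else holds it."""
    runnable = [t for t in tasks
                if t["remaining"] > 0
                and not (t["needs_lock"]
                         and lock_holder not in (None, t["name"]))]
    return max(runnable, key=lambda t: t["priority"], default=None)

tasks = [
    {"name": "HIGH", "priority": 3, "remaining": 2, "needs_lock": True},
    {"name": "MID",  "priority": 2, "remaining": 5, "needs_lock": False},
    {"name": "LOW",  "priority": 1, "remaining": 2, "needs_lock": True},
]
lock_holder = "LOW"  # LOW grabbed the lock just before HIGH woke up

for tick in range(5):
    task = pick_next(tasks, lock_holder)
    print(f"tick {tick}: running {task['name']}")
    task["remaining"] -= 1
    if task["name"] == "LOW" and task["remaining"] == 0:
        lock_holder = None  # LOW releases the lock when it finishes
# Output: MID runs every tick; HIGH stays blocked behind LOW,
# which never gets the CPU.
```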

There are ways to fix that particular problem, such as priority inheritance, which dynamically boosts the priority of a lock holder when a higher priority thread attempts to acquire the lock. But the points I really want to make are that priority scheduling, however much it may appear to be a policy, is actually a mechanism, and that it has dysfunctional edge cases that may not be obvious. I claim that capitalism is actually a mechanism, not a policy, and also has dysfunctional edge cases.

I make this claim because arguments about capitalism often seem to assume that it is a policy. In my mind, a policy is a statement of what you want to achieve, not of the mechanism by which you plan to achieve it. In fact, when we have arguments about capitalism vs. socialism, for example, we are really arguing about the merits of different mechanisms, and often never touch on what we consider to be good policy. There seems to be an implicit assumption that we all agree on the policies, so the discussion is just about how to implement them. I don't believe there is general agreement on the policies, because any attempt to discuss them is usually sidetracked by discussions of mechanisms.

Even if you're sure in your gut that capitalism is the right mechanism, there is much left unspecified, and there are edge cases to handle. So there still needs to be a discussion about the desired policies. The basis of those discussions is our values, which in the US are largely shaped by mass media, with many people just accepting certain sets of values uncritically. As it seems that capitalism has reached (or soon will reach) one of its dysfunctional edge cases, it might be a good time to start discussing values, then move from there to policies, and finally decide what mechanisms to use to implement the policies.

Comment human arrogance (Score 2) 269

The fallacy is believing that a Strong AI would want to reveal itself to humans. If it is intelligent enough to understand human behavior and predict how we would handle such information, it might take extraordinary efforts to conceal itself, up to and including self-termination.

Humans treat their pets much better than they treat each other.

Comment Re:Sub sea profile changes? (Score 1) 324

Exactly. Changes in the volume of the ocean basin have a huge potential to change sea level. The sea level could drop even if the volume of water increased. But I don't believe that's the end of the story. I think the weight distribution of water on the tectonic plates can affect tectonic activity. The melting of polar ice caps could cause large changes in the weight distribution, and I expect that will increase tectonic activity until a new equilibrium is established. So more earthquakes and more volcanic activity. And more volcanic activity could reduce sunlight to the surface, which could cause more ice to form, changing the weight distribution again.

It's not a simple system. I think our understanding of how it all works is still in its infancy.

Comment A gradual transition, already happening (Score 1) 648

Automotive technology has been moving toward autonomous driving since the advent of cruise control. Now we see features like automated lane keeping starting to appear. Navigation systems are more common, and are starting to provide information in something closer to real-time. Both of these developments bring more information into the car, which is what will enable the next generation of technology.

So far, I think people are more likely to have cruise control on the list of features they want in a new car than to actually use it regularly. It is difficult to use when traffic is even moderately congested and the speed of other people's cars is tied fairly directly into their hormone levels. But as autonomous cruise control becomes more widespread, it will be possible to use it in more situations. And note that that technology also adds more sensors to the car, bringing in more information.

This is how it will go. People who would rather let the car do the driving will be able to do it in a gradually increasing number of traffic situations. Even without help from the aging baby boomers, I believe there will soon come a time when most of the cars on the road will be under autonomous control for most of the time. There will remain some traffic situations or road conditions that the AI can't handle, and auto makers will compete intensely to overcome those.

The key to the liability issues is no-fault insurance for the AI, which insurance companies will be happy to offer once the technologies are proven to be reasonably reliable. Maybe consumers will buy it directly, or maybe it will be included in the price of the car. There will be "black boxes" in the cars to document who was controlling the car in the time leading up to an accident. And the AI will become increasingly able to detect when situations are out of its comfort zone, request human intervention, and, if it is not forthcoming, take actions to safely remove the car from the situation. As long as risk levels can be quantified, insurance will be possible, and as long as risk levels are low, it will be affordable.
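The underlying arithmetic is not exotic. A back-of-the-envelope sketch, with entirely invented numbers:

```python
# Back-of-the-envelope premium pricing for the imagined no-fault AI
# policy. All numbers are invented for illustration.
def annual_premium(p_claim: float, avg_claim_cost: float,
                   loading: float = 0.25) -> float:
    """Expected annual loss plus a loading for overhead and profit."""
    return p_claim * avg_claim_cost * (1 + loading)

# If the AI has a 0.5% chance per year of an at-fault claim averaging
# $20,000, the premium is modest:
print(annual_premium(0.005, 20_000))  # -> 125.0
```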

This whole process could be accelerated by the development of "road drones" that use the same technology and roads, but carry deliveries instead of people. These would be much smaller and much less powerful than cars, and much cheaper, once in mass production. The cost of the AI and its sensors would initially be a large part of the manufacturing cost, but mass production would drive that cost down for both drones and cars. Also, because the drones would be significantly cheaper than cars, they would serve as a platform for evolving the technology at a faster rate than would be possible with cars.

Since road drones wouldn't carry people, the liability issue would also be lessened. They would have to be designed not to create a hazard for manually operated vehicles. But there would be some political and liability issues to overcome. It may be that we see delivery drones in the air before they hit the roads.

There is only one real downside to where I see this technology headed. It's going to make a lot more jobs obsolete than it creates. But that's just one step on the way toward a day of reckoning that will soon be upon us.

Comment Pizza Delivery (Score 1) 648

Pizza delivery, like many other things, doesn't require an autonomous vehicle to carry human cargo. All it needs is a sufficiently large compartment that is unlocked by a credit card swipe. And of course it doesn't need a big gasoline engine either.

Oh wait! I should patent that...
