
Comment Re:Senile? (Score 1) 951

Actually, this is one possible way to interpret the Bible: that we are children, that the rules of the game are this and that, that some specific characters seem to play slightly outside the normal rules (seeing the future, walking on water, not dying, etc.), and that in the end, we win or we lose.

So I'd say that between this and near-death-experience testimonies, we have pretty strong clues that we are not in what Musk calls the "base reality".

One thing where I'd differ from Musk, though, is believing that the "base reality" is like ours. The rules could be completely different.

Comment Re:"Less than 20 lines of code" (Score 1) 91

Any language can replicate this, and in a similar number of lines of code (given that the functionality is available in a similar library).

Of course not. How do you implement a C or C++ library so that a variable or function declared on one computer is transparently used on another? In ELIoT, you can write this:

Var -> "Declared on original computer"
tell "pi.local",
      writeln "The value of Var is ", Var

So let me try in C or C++, where the code inside "tell" is supposed to execute on another machine called "pi.local":

#include "eliot-like-lib.h"

int main()
        char *Var = "Declared on original computer";
        tell("pi.local", { printf ("The value of Var is %s\n", Var); });

This is syntactically invalid, and I see no easy way to make it syntactically valid C or C++. The closest I can think of are Apple's blocks, so you could write ^ { printf (...); } instead, which is no big deal. But then, how do you capture the value of Var and send it over?

In order for this to work, you need a fully homoiconic language, where you can transmit the code and its data over the wire, and where there is a way to reconstruct it reliably on the other side so that you can execute it there. I'm not saying you can't modify C to get there, but certainly not easily. And in any case, it's not just a library, and not in any language.
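To make the contrast concrete, here is a hedged sketch of what the C++ side is forced to do instead (tell and print_var are made-up names, not a real library): since a closure cannot be serialized, the code travels as the name of a function that must already be compiled into the remote side, and every captured value is marshaled by hand.

```cpp
#include <string>

// Hypothetical sketch: C++ cannot ship the code itself, so the caller
// names a function that must already exist on the remote machine, and
// marshals each captured value explicitly. Here we only build the
// wire message, standing in for the actual network send.
std::string tell(const std::string &host, const std::string &func,
                 const std::string &arg)
{
    return func + "(" + arg + ") -> " + host;
}
```

The asymmetry is the point: the remote side needs print_var compiled in ahead of time, whereas a homoiconic language can ship the definition along with the call.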

Comment Re:Security? We don't need no stinking security! (Score 1) 91

Security has been considered, but is not implemented at this stage.

The planned security model is to show only those features that are available to a given user. Say that temperature is available to anybody, but self_destruct requires a special privilege. Then anybody connecting to the device can request "temperature" and gets a response, but "self_destruct" is not even in the symbol table, so no way to access it. Trying to use it results in a run-time error, just as if you had tried to call schtroumpf.

If you want to access a privileged feature, you do something like import "self_destruct". And that checks if you are allowed to import it or not. If you are, then your symbol table is populated with self_destruct and you can call it. Otherwise, run-time error as above. This is not implemented yet, but is definitely on my to-do list.
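A minimal sketch of that model (Session and import_feature are made-up names, since none of this is implemented yet): each connection starts with only the public features in its symbol table, and a privileged feature can only enter the table through an import that checks privileges.

```cpp
#include <map>
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical sketch of the described security model: public features
// are pre-populated per connection; privileged ones stay out of the
// symbol table entirely unless an import succeeds.
struct Session
{
    std::set<std::string> privileges;       // granted to this user
    std::map<std::string, int> symbols;     // features visible to this user

    Session() { symbols["temperature"] = 42; }  // public feature

    bool import_feature(const std::string &name)
    {
        if (!privileges.count(name))
            return false;                   // not allowed: stays invisible
        symbols[name] = 0;
        return true;
    }

    int call(const std::string &name)
    {
        auto it = symbols.find(name);
        if (it == symbols.end())            // same error as calling schtroumpf
            throw std::runtime_error(name + ": undefined");
        return it->second;
    }
};
```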

Another check I plan to implement is validation of the "reply" code. Since you sent the code, including the possible "reply" values, you can verify on return that only a valid reply is sent, and reject any reply code that does not match one you sent.
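The idea can be sketched in a few lines (ReplyValidator is a made-up name): the sender records every "reply" value included in the code it shipped, and rejects anything coming back that it never sent.

```cpp
#include <set>
#include <string>

// Hypothetical sketch of reply validation: only replies that were part
// of the code we sent are accepted back.
struct ReplyValidator
{
    std::set<std::string> sent;             // replies shipped with the code

    void allow(const std::string &reply) { sent.insert(reply); }
    bool accept(const std::string &reply) const
    {
        return sent.count(reply) != 0;      // reject anything we never sent
    }
};
```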

Regarding encryption, I'm still thinking. I'd like something very lightweight for performance reasons, e.g. XOR with a one-time pad.

Submission + - ELIoT, distributed programming for the Internet of Things

descubes writes: ELIoT (Extensible Language for the Internet of Things) is a new programming language designed to facilitate distributed programming. A code sample with fewer than 20 lines of code looks like a single program, but really runs on three different computers to collect temperature measurements and report when they differ. ELIoT transforms a simple sensor API into a rich, remotely-programmable API, giving your application the opportunity to optimize energy usage and minimize network traffic.

Using fewer resources than Bash, and capable of easily serving hundreds of clients on a Raspberry Pi, ELIoT transparently sends program fragments around, along with the data they need to function, e.g. variable values or function definitions. This is possible because, as in Lisp, programs are data. ELIoT has no keywords, and program constructs such as loops or if-then-else are defined in the library rather than in the language. This makes the language very flexible and extensible, so that you can adapt it to the needs of your application.

The project is still very young (published last week), and is looking for talented developers interested in distributed programming, programming languages or language design.

Comment Re:Open source isn't enough (Score 1) 246

The language alone is not good enough, but it is simple to share. By contrast, building a complete web browser today is rather difficult, and even a smaller "graphic" language like Tao3D is not that easy to build, in particular if you include all the dependencies. For Tao3D, you need Qt with WebKit, OpenGL, VLC, XLR, LLVM, and I forget half a dozen more. So I think that exposing the language-only part is interesting. For a while, Tao3D was the same project as XLR, but we decided to split early on. We wanted XLR to remain a non-graphical, non-reactive, non-networked, easy-to-port language.

Comment Not enough innovation (Score 1) 260

While Go and Swift are interesting incremental improvements, they do not take into account what we have learned about programming languages. In many ways, these two languages seem firmly stuck in the 1980s. For example, Go has no generics, and as far as I can tell, Swift still does not have the kind of true generic types I introduced in XL in 2000, i.e. the ability to call "ordered" all types that have a less-than operator, and then define functions with "ordered" instead of having to use <T> all over the place just as in C++ (and please, could we stop using angle brackets?).

More generally, there was a lot to be learned from more dynamic languages deriving from Lisp. Being able to treat code as data (homoiconicity) completely changes things. It means your language can be extended in itself, just as Lisp integrated object-oriented capabilities effortlessly. It means you can do metaprogramming, introspection, reflection, and dynamic code generation in a natural way rather than with specialised ad-hoc features, all things that Go and Swift spectacularly fail to do.

A real language redesign does not bring you incremental benefits; it should bring orders-of-magnitude improvements on many tasks. I speak from experience. In XL, I can do complex arithmetic in 11 lines of code. What about Swift or Go? Ask yourself why Go can't offer complex arithmetic as a library package. Similarly, in Tao3D, I can do things HTML5 simply can't, in a much less verbose, much higher-level language, and simple animations take 30 times less code than in JavaScript. The 30x factor tells me that I invented something new. Many others can demonstrate similar innovation.
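As a point of comparison, operator overloading lets C++ define a workable complex type as a pure library in about the same number of lines; Go, which lacks operator overloading, had to build complex64/complex128 into the language instead. A minimal sketch, not a full implementation:

```cpp
// A complex type as a pure library: one struct and two overloaded
// operators. Division, comparison, I/O, etc. are omitted for brevity.
struct cplx
{
    double re, im;
};

cplx operator+(cplx a, cplx b) { return {a.re + b.re, a.im + b.im}; }

cplx operator*(cplx a, cplx b)
{
    return {a.re * b.re - a.im * b.im, a.re * b.im + a.im * b.re};
}
```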

I fail to see benefits of a similar order of magnitude with Swift or Go, and it annoys me. Companies like Apple and Google have the means, if only financial, to make bigger things happen, in particular when smaller teams like ours have already done a lot of the investigative work.

Comment GTR and quantum mechanics are NOT incompatible (Score 1) 62

I'm crazy enough to believe I have found a path to unification that is actually quite simple: add a new relativity principle that states that laws of physics must be the same irrespective of the measurement instrument we use. Here is a parallel:

- Special relativity states that the laws of physics must be the same irrespective of your state of motion. So a complete description of an experiment must include which frame of reference you are using. There is no absolute space, no absolute time, no aether. And we need to add new transformation laws from one frame to the next, which are the Lorentz transforms.

- General relativity states that the laws of physics must be the same irrespective of acceleration. So a complete description of an experiment must include accelerations, including gravitation. There is no flat space-time anymore, but something that is curved by gravitational fields. So we need to add new transformations from one curved space-time to another, using tensor math, covariant and contravariant four-vectors, etc.

- My still-incomplete theory of incomplete measurements (TIM) states that the laws of physics must be the same irrespective of the measurement instruments used. So a complete description of an experiment must include which instruments were used, including their calibration and range. The fact that two instruments are calibrated to coincide over a given range cannot be used to postulate that they match at every scale. Space, time, mass and other measurements are no longer continuous, but discrete (because all our physical instruments give discrete results). We need to add new transformations when going from one physical instrument to another, which correspond almost exactly to renormalisation in quantum mechanics, but give an explanation as to their origin.
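For reference, the transformation laws named in the special-relativity parallel above are the standard Lorentz transforms; for relative motion at speed v along x:

```latex
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
x' = \gamma\,(x - v t), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```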

The TIM focuses on what I learn about a system using a physical measurement instrument. This starts by defining what an instrument is:
- It's a portion of the universe (i.e. it's not "outside the matrix")
- which has an input and an output (e.g. the probe and the display of a voltmeter)
- where changes in the state at the input yield a change in the state of the output (changes in voltage result in changes on the display)
- which ideally depend only on the input (the voltmeter picks up the voltage at the probe, not somewhere else)
- and change the output (nothing being said about changes to the input, since even macro-scale experiments can be destructive)
- the change in the output being mapped to a mathematical representation (often a real number) through a calibration
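The definition above can be sketched as code (Instrument and its fields are made-up names for illustration): the calibration maps a continuous input range onto a finite number of output states, which is where the discreteness comes from.

```cpp
#include <algorithm>

// Hypothetical sketch: an instrument maps a (possibly continuous)
// input state onto one of a finite number of output states through a
// calibration. The finite output is all the knowledge it can give.
struct Instrument
{
    double lo, hi;      // calibrated input range
    int states;         // finite number of output states

    int read(double input) const
    {
        double clamped = std::clamp(input, lo, hi);  // out of range saturates
        int s = static_cast<int>((clamped - lo) / (hi - lo) * states);
        return std::min(s, states - 1);              // discrete reading
    }
};
```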

The instrument gives me knowledge about the state at the input. Since the instrument has a limited number of states in the output, my knowledge of the system through this instrument at any given time is described by a probability for each of the possible states. If I have N states, the probabilities p_1...p_N are all non-negative, and their sum is 1. So the knowledge state can be represented by a unit vector in dimension N (take the square roots: the vector with components sqrt(p_i) has length 1).
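Written out, the constraint just described is (with a_i denoting an amplitude assigned to state i, an added notation to make the unit-vector reading explicit):

```latex
p_i \ge 0, \qquad \sum_{i=1}^{N} p_i = 1, \qquad
p_i = |a_i|^2 \;\Longrightarrow\; \sum_{i=1}^{N} |a_i|^2 = 1
```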

For example, if I care about "is there a particle here", the possible measurements are "yes" and "no". The knowledge state is therefore represented by a unit complex number. If you now want to answer that question on a plate with 1 million possible positions, you have a field of 1 million complex numbers, with the additional constraint that the particle must be at only one position (which is expressed as the sum of the probabilities for all "yes" answers being 1). That field is remarkably similar to the wave function, and this reasoning explains why it is complex-valued, why it is a probability of presence, and why it collapses when you know where the particle is.
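A small numerical sketch of such a field (names are made up): one complex amplitude per position, normalized so the presence probabilities sum to 1, with "collapse" modeled as concentrating the field on the known position.

```cpp
#include <cmath>
#include <complex>
#include <vector>

// One complex amplitude per possible position; |a|^2 at each index is
// the probability that the particle is there.
using Field = std::vector<std::complex<double>>;

double total_probability(const Field &f)
{
    double p = 0;
    for (const auto &a : f)
        p += std::norm(a);      // |a|^2 at this position
    return p;                   // 1 when the field is normalized
}

// Learning the position collapses the field to a single entry.
Field collapse(Field f, size_t where)
{
    for (size_t i = 0; i < f.size(); ++i)
        f[i] = (i == where) ? 1.0 : 0.0;
    return f;
}
```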

But the primary difference from QM and GTR is that space-time is no longer continuous. It is discrete, and the discretization depends on the instrument being used. Because it is discrete, there are never any theoretical infinities in the sums you compute (these infinities being the reason why QM and GTR are considered fundamentally incompatible).

Here is a layman's view of the incompatibility between QM and GTR. Imagine ants that try to define the laws of physics on Earth. They set up rules, e.g. their anthill is at only one place in the universe, so the sum of the probability of finding the anthill over all of space-time is 1. But if they now start realising that the Earth's surface is not flat but curved, the method above no longer works. If you go to infinity along the surface of the Earth, you "count" the anthill multiple times, so your integral, instead of being normalized, diverges to infinity. It is only an analogy, but it is an interesting one.
