

Comment: Re:Lojban (Score 1) 624

No. Lojban is not designed to be a normal human language. It is intended to be syntactically unambiguous, but it is not designed to be easy to learn, or a substitute for other human languages, or even necessarily compatible with how the human brain works. As far as I've been able to tell, it is mainly an aphrodisiac to help logicians masturbate.

Comment: Re:QWERTY vs. Dvorak (Score 1) 624

You're partially right - while Dvorak is unequivocally better than Qwerty, it's not really "better enough" to justify learning in a world covered in Qwerty keyboards. However there are other designs, most notably Colemak, which are better contenders. Colemak is slightly superior to Dvorak while simultaneously being designed to fit into a Qwerty world. Because Colemak is somewhat similar to Qwerty, it's easier to learn for Qwerty users, and it's easier to switch between Colemak and Qwerty when you have to.

Oh and Qwerty isn't random, either. Many of the most common letters are on the top row and many of the least common letters are on the bottom row. It is no coincidence that the word "typewriter" consists only of letters on the top row. Not. Random. "ABC" keyboards ARE effectively random, since alphabetic order is nearly random and unrelated to keyboard design. The ABC keyboards I've seen are substantially LESS efficient than Qwerty.

Redundancy does not require irregularity. Zamenhof didn't put enough value on redundancy and created similar forms like sep & sek, mi & ni, kiel/kie/kia/kiam (in which only an unstressed syllable differs). Such problems can be avoided without making the language irregular.

Throughout history, language has been molded and folded by war and conquest, clashes & meetings between different cultures, and by games played to stave off boredom while working in the fields. Many, many people on this thread are giving ignorant and simplistic views, assuming that there are simple and/or sensible reasons for the irregularities and other features of their language, assuming that language will evolve in modern civilization the same way it did before the year 1800, assuming that a planned language automatically has all the same properties as "natural" languages, etc. All your assumptions are nonsense, folks. We won't actually know how well a planned inter-language will work in practice, or how it will evolve, until we put one into practice. Why are people so eager to put down ideas they've never seen in action?

Comment: I could maybe help you design a language. (Score 1) 624

How to replace English? What an audacious question; English is fantastically well established. I think what we need to figure out first is how to make a constructed language succeed. As you can see from many of the comments here, that's really hard for one simple reason: most people don't believe one can succeed, not even geeks. Nor do very many people get excited about the prospect of replacing English. I think many people don't even want one to succeed. And when it comes to something like language, whose success is built upon network effects, that's a huge problem. (Also a problem is all the preconceived notions people have about language, like "all languages must be [highly] irregular" or "interlanguages can't succeed because they don't evolve" or "constructed languages can't be easy to learn because any popularity will cause them to instantly devolve into a mess like English" or "English is easy, so we don't need a constructed language" or "I don't like constructed languages because they are devoid of culture and soul" or "Different languages do not take different lengths of time to learn" or "Picking up a language that no one actually speaks is difficult, since it has no purpose." There is just so much BS to push against!)

For a little while I started designing a language tentatively called Lengwish, the idea of which was to be an interlanguage for the Americas, that would use English and Spanish, and French and Portuguese to a lesser extent, as vocabulary sources, with other languages used in cases when the available vocabulary from these languages doesn't work well enough due to ambiguity or other issues. (Why "for the Americas"? Simple, it's just that I've studied Spanish for five years and would like to learn French.) I planned four purposes for it to serve:
  1. To teach the basic grammar and vocabulary of English or Spanish. Learning a natural language is very hard work, especially at the beginning when you have no hope of being fluent until years later. In contrast, you can be fluent in Lengwish in less than a year. That means it's more fun, because you can feel yourself making progress quickly. Since its vocabulary is derived mainly from English and Spanish, it's a useful "stepping stone", especially for those that don't speak any European languages, to help learn one of the languages of the Americas. Since English is more popular than Spanish, the most common words tend to be derived from English rather than Spanish. In rare cases, a word is taken from other sources (Mandarin, Novial) when there is no simple English or Spanish word for a simple concept.

    Lengwish is designed to be very software-friendly, so that automated tools can help you learn it, through underlining of errors, instant translations, and "syntax highlighting".
  2. To learn to learn languages. It is fairly well-known that a person can learn a third language more quickly if they already learned a second language, even if the second and third languages are not related to each other. Learning Lengwish can teach you some of the skills you will use to learn other languages, such as understanding grammar, and the ability to translate meaning rather than word-for-word.
  3. To be a translation medium. Unlike English, Spanish or any other language with two thousand years of complicated baggage, Lengwish is clear and relatively unambiguous. There are thousands of English words and phrases that have more than one meaning. Consider this: what does the English term "free software" mean? "free" has two meanings, "free as in freedom" or "given without a charge" ("charge" itself has several meanings, but in this case I'm talking about a price or a fee). In some technical circles, "free software" specifically refers to "free as in freedom": freedom to see the source code, freedom to redistribute, freedom to modify; free software can be obtained "for free", but freedom is just as important. "freeware", in contrast, is software offered without a charge; it is not free-as-in-freedom, and often includes unwanted extras like advertising or "crapware". However, a layperson may not realize that there is a difference, and think that "free" just means "gratis".

    There are several reasons why languages are hard to translate, but the single biggest reason is that so many words have multiple meanings. Because Lengwish is clear and relatively unambiguous, it is much, much easier to translate automatically. When completed, Lengwish will be an excellent language for writing documents in multiple languages at once: just write your document in Lengwish and a computer will translate it to other languages with much higher reliability than if you had written it in English (perfection is impossible, but we can come close). Also, translation can be done instantly offline, for free, no need for a commercial tool or an online tool like Google Translate.
  4. To be an interlanguage for international communication. This, of course, is the purpose for which languages like Esperanto were designed, but it's extremely hard to convince anyone to adopt an interlanguage for altruistic reasons, which is why this goal is listed last. Most likely, Lengwish can only succeed as an international language if it first succeeds for some other purpose. When I tell people about Esperanto, their first question is "where is it spoken?" Since Lengwish is so easy to learn, if any country decided to teach Lengwish to all its schoolchildren, this question for Lengwish would quickly have an answer, and I'm convinced that this would allow it to spread all over the world, as people would quickly see its value for overcoming language barriers. However, for now it's hard to imagine politicians anywhere that would have the courage to suggest Lengwish as even an optional course.

    For now, those who find Pig Latin to be too obvious can enjoy using Lengwish as a "secret language"--speak it with your friends, baffle your enemies.
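To make the translation-medium idea (purpose 3) concrete: if every word in the lexicon carries exactly one sense, word-level translation reduces to a table lookup. Here is a tiny sketch; the "Lengwish" words below are invented placeholders for illustration, not part of any actual design:

```javascript
// Hypothetical one-sense-per-word lexicon (all Lengwish forms are invented).
// English "free" splits into two distinct words, so that ambiguity never
// enters the translation pipeline in the first place.
const lexicon = {
  libre:   { en: "free (as in freedom)", es: "libre" },
  gratis:  { en: "free of charge",       es: "gratuito" },
  softwar: { en: "software",             es: "software" },
};

// With one sense per word, word-level translation is a plain lookup; a real
// translator would still need reordering and morphology rules on top.
function translateWords(lengwishWords, targetLang) {
  return lengwishWords.map(w => {
    const entry = lexicon[w];
    if (!entry) throw new Error("unknown word: " + w);
    return entry[targetLang];
  });
}

console.log(translateWords(["libre", "softwar"], "es")); // [ 'libre', 'software' ]
```

The point isn't that translation becomes trivial, only that the biggest source of translation errors (lexical ambiguity) is designed out before any software runs.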

I haven't gotten very far with the design yet - it's pretty challenging to have an easy-to-learn phonetic spelling system that doesn't completely mangle either the spelling or the pronunciation of most of the source English words.

Anyway, because of the "network effects" problem, I think it's crucial to design a language that offers value to people even if no one else speaks the language. That's why I've been thinking about features like reliable automated translation (note that the language must be specifically designed to translate easily to a small number of specific other languages, in order to make accurate translation practical in a small open-source effort). Similarly, using English vocabulary is attractive if it means the language can be marketed as a stepping stone to learn English.

You know that there are many interlanguage designs already, right? Before designing a whole language from scratch, you should take some time to make sure that the kind of language you want hasn't already been designed, and in any case you should study what has been done before. (I've been meaning to do more of this myself!)

It's kind of weird that the link to 'Loren Chorley' is broken but anyway, I might like to join a group that is designing a language, in order to help keep the design software-friendly, to write programs to be used during the design process (e.g. a dictionary manager to help manage vocabulary and detect conflicts), and if all goes well, to write programs to help people learn the language (interactive lessons, syntax highlighter, grammar checker, etc.) You can find my email on the bottom of the front page of loyc.net.
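For what it's worth, the conflict detection such a dictionary manager needs can start very simply: scan the vocabulary for minimal pairs like Esperanto's sep/sek, i.e. equal-length words differing in only one letter, which are easy to mishear. A rough sketch (the word list is just for illustration):

```javascript
// Report pairs of equal-length words that differ in exactly one character.
// Such "minimal pairs" (cf. Esperanto sep/sek, mi/ni) hurt redundancy and
// should be flagged for the language designer to review.
function minimalPairs(words) {
  const pairs = [];
  for (let i = 0; i < words.length; i++) {
    for (let j = i + 1; j < words.length; j++) {
      const a = words[i], b = words[j];
      if (a.length !== b.length) continue;
      let diffs = 0;
      for (let k = 0; k < a.length; k++) {
        if (a[k] !== b[k]) diffs++;
      }
      if (diffs === 1) pairs.push([a, b]);
    }
  }
  return pairs;
}

console.log(minimalPairs(["sep", "sek", "mi", "ni", "kato"]));
// → [ [ 'sep', 'sek' ], [ 'mi', 'ni' ] ]
```

A real tool would also check for near-homophones under the language's phonology, not just spelling distance, but even this naive check would have caught the Esperanto examples above.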

Comment: Three easy methods (Score 1) 1081

by Qwertie (#49258993) Attached to: How To Execute People In the 21st Century
What's the big deal here?

I believe that those suffering from a terminal illness should be allowed to apply for assisted suicide or euthanasia, and clearly whatever patients ordinarily choose for this purpose would be equally appropriate for condemned killers.
  • Morphine.
  • An overdose of propofol, the drug Michael Jackson was using. Remember that he used this stuff frequently simply to go to sleep at night.
  • Carbon monoxide: a gas famous for the fact that victims often aren't even aware of it.

Comment: Re:Too late (Score 1) 235

by Qwertie (#48486815) Attached to: Renewables Are Now Scotland's Biggest Energy Source

Nuclear power has benefited from the near bottomless source of government funds that is called "dual use technology". [. . .] They all pursued civilian nuclear power as a pretext for starting a nuclear weapons program. [. . .] that's why everyone assumes Iran is lying.

Yes, that's why everyone assumes Iran is lying, but I know Iran is lying for a different reason: because some of the best nuclear technologies ever conceived, such as the LFTR, do not require (significant quantities of) highly enriched uranium. If Iran wants a nuclear energy program, it would make perfect sense to choose one of the "non-proliferation" technologies. Why risk conflict with the U.S. by having a large enrichment program? Only one reason I can think of: they want the Bomb.

Comment: Re:It's too slow. (Score 1) 254

by Qwertie (#47285997) Attached to: Ask Slashdot: Best Way to Learn C# For Game Programming?
C#'s speed depends more on the coding style than on the language. If you know what you're doing, Microsoft .NET gets within 2x of C++ speed most of the time. Mono is substantially worse (have a look at these benchmarks, which focus on simple programs that are written in a "C with classes" style.) If you are using features like LINQ (considered a "must-have" C# feature) you'll take a performance hit, but if you write C# code as if it were C code, its performance isn't far from C's. Luckily you don't have to write the whole program so carefully, just the parts that have to be fast.

Games aren't just concerned with what is traditionally thought of as "speed", namely throughput; games are also concerned with latency. C# is based on Garbage Collection, and GC tends to add more latency than deterministic memory management (C/C++). Since writing games largely in GC languages is now a very common thing (e.g. Java on Android), I'm sure articles have been written about how to avoid getting bitten by the GC, but I don't have an article handy to show you.
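One widely used technique for taming GC latency in games (not specific to any particular article) is object pooling: preallocate your objects once and recycle them, so the per-frame allocation rate - and hence GC pressure - drops to near zero. A minimal sketch, here in JavaScript since the same issue bites GC'd browser games:

```javascript
// A tiny object pool: instead of allocating a new particle every frame
// (which feeds the garbage collector and eventually triggers a pause),
// recycle a fixed, preallocated set of objects.
class ParticlePool {
  constructor(size) {
    this.free = [];
    for (let i = 0; i < size; i++) {
      this.free.push({ x: 0, y: 0, active: false });
    }
  }
  acquire(x, y) {
    const p = this.free.pop();   // reuse; no allocation on the hot path
    if (!p) return null;         // pool exhausted: caller decides what to do
    p.x = x; p.y = y; p.active = true;
    return p;
  }
  release(p) {
    p.active = false;
    this.free.push(p);           // returned for reuse, never becomes garbage
  }
}

const pool = new ParticlePool(100);
const p = pool.acquire(5, 7);    // no 'new' during gameplay
pool.release(p);
```

The same idea applies in C# (pools of class instances, or structs in arrays to avoid heap allocation entirely); the details differ but the principle - don't allocate in the frame loop - is the same.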

I doubt the OP wants to write a graphics engine in C#. I'm no game dev so I won't suggest an engine, but the point is, the most sensitive part of game performance tends to be in the area of graphics, and you probably can use a C# wrapper of a C++-based graphics engine, so that the overall performance of the game doesn't depend that much on the performance of the .NET CLR (but performance may be sensitive to native interop costs, which are not insignificant. Interop benchmarks are included in the link above.)

Comment: Re:First Rule of secure coding. (Score 1) 51

by Qwertie (#47248371) Attached to: Book Review: Security Without Obscurity
I would think that the first rule of secure coding is "leave it to the experts". For instance I've been following the Rust mailing list. Now the folks making Rust are smart, but they say they won't have any cryptography in the standard library in the near future because they are not confident in their ability to do crypto correctly. That's because it's very easy to inadvertently leave a weakness in your crypto code, even if you're trying to implement a documented standard.

So the first rule: don't do crypto yourself. But of course, given a crypto library written by experts, you have to find a way to use it. So the second rule: be very careful how you use it. Learn the basics of crypto, like the importance of key lengths and salts; the difference between symmetric and asymmetric crypto; learn about some of the attacks that are used (MITM, known plaintext attack, phishing, fuzzing, there are many); consider integrating a password strength meter...

Just because you yourself couldn't get in without proper authorization doesn't mean an experienced cracker can't.

Comment: Re:Unfortunate realities (Score 1) 309

by Qwertie (#47222955) Attached to: Google Engineer: We Need More Web Programming Languages
There is no reason that a single /language/ could not support efficient hardware manipulation and also run in a sandbox (with C-like efficiency). If you're writing an OS kernel or /directly/ manipulating hardware then it will not run inside the sandbox, but that doesn't mean the same programming language could not be used for both. NaCl has demonstrated this already, since you can run C code in a sandbox in a web browser and you can also write a kernel in C.

But C/C++ are difficult to use (I'm sorry, "challenging"), error-prone and crufty. Luckily it is possible to have high performance, memory safety, ease-of-use and pretty much everything else you might want in a single language. A subset of that language could be used for systems programming, and the "full" language could be used for everything else. AFAIK there is no single language that has everything today (high performance, scriptable, memory-efficient, scalable, easy to learn and use...), but there are pretty good languages like D, Julia and Nimrod that combine a ton of good things, and some of these languages (D, Rust) can be used for systems programming. So far there isn't one "perfect" language that I can point to that would be great for kernels and scripting and web apps and server apps, but I can certainly imagine such a language. Until we nail down what an ideal language looks like, why not build a VM that can run the wide variety of new languages that are being created, something that works equally well for web pages and offline apps? Why, in other words, should only Google and Mozilla be in charge of the set of languages that run in a browser?

If Microsoft had done everything right, we'd probably still be locked into proprietary MS Windows. Their mistakes in the short term will probably lead to a healthy heterogeneous ecosystem in the long term... but in the short term, I am disappointed by browsers (why do you force me to use JS?) and with Android/iOS (which were not really intended to compete with Windows in the first place).

Comment: Re:Why? (Score 1) 309

by Qwertie (#47222729) Attached to: Google Engineer: We Need More Web Programming Languages
What language are you talking about that "runs on any operating system"?

C/C++? The code required on different OSes can be wildly different. You have to produce a separate binary for every OS.

Java? Some code can be the same, but the user-interface code will probably have to be different on every OS.

I think when we're talking about an "offline web app", what we're really talking about is (1) write once, run everywhere, no need for separate UI frameworks on each OS, and (2) something that installs easily, or requires no installation step at all.

And I restate my point: we don't really need a new programming language for this, but Google or Mozilla could, if they wanted, make it possible to bring these two properties to almost all programming languages by creating a fantastic VM.

Comment: Re:We don't need more languages, we need bytecode. (Score 3, Insightful) 309

by Qwertie (#47222667) Attached to: Google Engineer: We Need More Web Programming Languages
I really don't agree with asm.js as the solution, although I do advocate a VM-based solution. asm.js is an assembly language inside Javascript:

function strlen(ptr) { // get length of C string
  ptr = ptr|0;
  var curr = 0;
  curr = ptr;
  while ((MEM8[curr]|0) != 0) {
    curr = (curr + 1)|0;
  }
  return (curr - ptr)|0;
}
Code written in asm.js is isolated from normal Javascript code and cannot access garbage-collected data (including all normal Javascript objects). I'm not sure it's even possible to access the DOM from asm.js. So, the advantages of asm.js are:
  • Non-asm.js Javascript engines can run the code.
  • "Human readability" (but not that much; asm.js is generally much harder to follow than normal code.)
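Incidentally, the `|0` annotations in the strlen example above are not decoration: in JavaScript, `x|0` coerces x to a signed 32-bit integer, and asm.js repurposes this existing operator as a type annotation. A quick demonstration:

```javascript
// In JavaScript, `x | 0` truncates x to a signed 32-bit integer.
// asm.js leans on this: any expression written as (expr)|0 is guaranteed
// to behave like a C int, so the compiler can use real integer machine ops.
console.log(7.9 | 0);              // 7   (fractional part dropped)
console.log(-7.9 | 0);             // -7  (truncates toward zero)
console.log((2147483647 + 1) | 0); // -2147483648 (wraps like a C int32)
```

This is also why the annotations must be parenthesized carefully: `!=` binds tighter than `|`, so `MEM8[curr]|0 != 0` would not mean what it appears to mean.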

The main reason to use asm.js is that you need high performance, but running in a non-asm.js engine kills your performance, so the first advantage is limited. Granted, performance isn't the only reason to use it.

But with most web browsers auto-updating themselves, there's no need to restrict ourselves to only JS in the long run. Whatever is standardized, all browsers will support. As for human readability, that's definitely an advantage (for those that want it), but [binary] bytecode VMs can be decompiled to code that closely resembles the original source code, as tools like Reflector and ILSpy have proven for the CLR.

The disadvantages compared to a proper VM are several:

  • Poor data types: no 64-bit integer, no vectorized types, no built-in strings.
  • No garbage collection.
  • No rich standard library like JS/Java/CLR.
  • Ergo, poor cross-language interoperability (interop is C-like, rather than CLR-like).
  • Slower startup: the asm.js compiler requires a lexing and parsing step.
  • Extra code analysis is required for optimization. The code is not already stored in a form such as SSA that is designed to be optimized.
  • Code size will be larger than a well-designed bytecode, unless obfuscated by shortening variable and function names and removing whitespace.
  • No multithreading or parallelization.
  • Poor support for dynamic languages (ironically).
  • No non-standard control flow, such as coroutines or fibers or even 'goto' (some [non-goto] primitives in other languages do not translate to JS and goto may be the best translation).

If you add enough features to asm.js to make it into a truly great VM, it will no longer be compatible with Javascript, so the main reason for the Javascript parsing step disappears.

So as it is now, I feel that asm.js is kind of lame. However, it would make sense if there were a road map for turning asm.js into a powerful VM. This roadmap might look like this:

  1. Assembly language with very limited capabilities (today).
  2. Add garbage collector, 64-bit ints, DOM access, etc., with an emulation library for non-asm.js engines.
  3. Carefully define a standard library designed for multi-language interoperability but also high performance (don't just mimic standard Javascript).
  4. Define a bytecode form to reduce code size and startup time.
  5. Add the remaining features that are impossible to support in Javascript. Programs that still want JS compatibility can be careful not to use such features.
  6. Change the name: it's not JS anymore.

Comment: Re:Why? (Score 1) 309

by Qwertie (#47222041) Attached to: Google Engineer: We Need More Web Programming Languages

So what's the point of this being a "Web" language? Why not just keep downloading apps like we always have?

The advantage of "web" languages over OS-native languages is that "web" code runs on any operating system. That's a pretty big advantage from a developer's perspective! Plus, the web traditionally has built-in security features to minimize what script code can "get away with", a tradition that one hopes could continue with "offline" web apps.

But Google's NaCl has demonstrated that, in principle, any programming language could run inside a browser. So why not offer the advantages of "web languages" without actually mandating any specific language?

Comment: Re:Why? (Score 1) 309

by Qwertie (#47221815) Attached to: Google Engineer: We Need More Web Programming Languages
It shouldn't just be a web language.

Developers shouldn't have to choose between "code that runs in a browser" and "code that runs on a server". They shouldn't have to choose between "code that runs in a browser" and "code that runs fast". They shouldn't have to choose between "code that runs in a browser" and static typing, or macro systems, or aspect-oriented programming, or parallelizable code, or whatever.

The problem as I see it is that the "browser languages" are dictated "from on high" by makers of browsers like Google and Mozilla. But "non-web" languages continue to proliferate, and the "perfect" language has not yet been found, so I think browsers should get out of the business of dictating language choices. I do think, however, that they should get into the business of promoting language freedom and (because there are so many languages) interoperability between languages.

What I'd like to see is a new VM that is specifically designed to run a wide variety of languages with high performance, in contrast to the Java VM which was designed for one language (and not for high performance), and the CLR which was designed for a few specific languages (C#, VB, C++) but turns out to be unable to handle key features of many other languages. The CLR wouldn't support D or Go very well and probably not Rust either; for example, there are two general-purpose pointer types in the CLR: one type can point to the GC heap and one type can point to non-GC "native" heaps, but you cannot have a pointer that can point "anywhere", which is a feature D requires. So we should create a new VM, which
  • should contain a variety of flexible primitives so that the core features of all known languages can be supported in principle
  • should be high-performance, like Google NaCl. Some people need high performance, there's no doubt about it or way around it.
  • should use a bytecode that is designed for optimization (LLVM bitcode seems like a reasonable starting point), but is non-obfuscated (fully decompilable) by default.
  • should allow and support garbage collection (but not *require* GC-based memory management)
  • should allow multiple threads
  • should be designed to allow code from different languages to communicate easily
  • should have a standard library of data types to promote interoperability between programming languages, and avoid the need to download a large standard library from server to client

Some people believe that the web has become a kind of operating system, but I think this reflects more the way people want to use it rather than the technological reality. Right now a browser is sort of a gimpy operating system that supports only one language and one thread per page. Real OSs support any programming language. Why not the web browser?

Comment: This article talks nonsense. (Score 1) 182

by Qwertie (#47010637) Attached to: FCC Votes To Consider Next Round of 'Net Neutrality' Rules
I don't think the author of this article has a clue:

But if you buy, say, a 35 Mbps broadband plan, your ISP will be required to deliver all content to you at at least that speed.

No, buying a 35 Mbps plan means that under no circumstances will you receive content faster than 35 Mbps. It's the maximum, not the minimum, and who doesn't know this? Boo "Brian Fung", technology writer for The Washington Post.

It's not physically possible to guarantee 35 Mbps transfer rates, since the end-to-end speed is capped by the slowest link in the path, including the rate at which the server can send out the data, which could be 35 Mbps or 1 Kbps. 35 Mbps is merely the last-mile speed, with all kinds of unadvertised potential bottlenecks elsewhere in the network.

Net neutrality isn't about guaranteeing a minimum speed, it's about having ISPs do their job--providing approximately the service advertised by building enough capacity at both sides of their network, both at the homes and at the connections to other ISPs, backbones, and popular data sources. Without net neutrality, ISPs in a monopoly or duopoly situation have an incentive to neglect that other side of the network, to NOT build more capacity but just give existing capacity to those that pay.

Comment: Re:First.... (Score 4, Interesting) 288

by Qwertie (#46875497) Attached to: Decommissioning Nuclear Plants Costing Far More Than Expected
It's hard to have a proper discussion on this one because there's no cost breakdown given, no reason why decommissioning is so expensive. There's not even any indication if it's just this one plant that is expensive, all plants in the U.S., or all first-generation plants in the world.

While the $39 million build cost would be far, far greater after adjusting for inflation, making the $608 million decommissioning seem less ridiculous, this still seems much more expensive than it ought to be. Why? Lawyers? Regulations? A poor reactor design that is simply very difficult to dismantle safely?

Coal is the largest and fastest-growing power source worldwide, and as I understand it, the dirtiest in terms of pollution in general as well as CO2. Wikipedia seems to say that renewables (including, er, wood burning?!) currently have 5% market share in the U.S. (the tables could use some clarifications). In practice, nuclear energy is a necessary ingredient to get CO2 emissions under control. So let's figure out what these huge costs are and then talk about how to reduce them in the future.
