
Comment Re:It's too slow. (Score 1) 254

C#'s speed depends more on the coding style than on the language itself. If you know what you're doing, Microsoft .NET gets within 2x of C++ speed most of the time; Mono is substantially worse (have a look at these benchmarks, which focus on simple programs written in a "C with classes" style). If you use features like LINQ (considered a "must-have" C# feature) you'll take a performance hit, but when you write C# code as if it were C code, its performance isn't far from C. Luckily you don't have to write the whole program so carefully, just the parts that have to be fast.
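
As a rough illustration (a minimal sketch of my own, not taken from the benchmarks linked above), here is the same computation written the convenient LINQ way and the "C with classes" way; the second form avoids the iterator allocations and delegate calls that make the first one slower in a hot path:

using System;
using System.Linq;

class SumDemo
{
    // Convenient, but allocates an iterator and invokes a lambda per element.
    static int SumLinq(int[] data)
    {
        return data.Where(x => x > 0).Sum();
    }

    // "C with classes" style: no allocations, no delegates, easy for the JIT.
    static int SumLoop(int[] data)
    {
        int total = 0;
        for (int i = 0; i < data.Length; i++)
            if (data[i] > 0)
                total += data[i];
        return total;
    }

    static void Main()
    {
        int[] data = { 3, -1, 4, -1, 5, 9 };
        Console.WriteLine(SumLinq(data)); // 21
        Console.WriteLine(SumLoop(data)); // 21
    }
}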

Games aren't just concerned with what is traditionally thought of as "speed", namely throughput; games are also concerned with latency. C# is based on Garbage Collection, and GC tends to add more latency than deterministic memory management (C/C++). Since writing games largely in GC languages is now a very common thing (e.g. Java on Android), I'm sure articles have been written about how to avoid getting bitten by the GC, but I don't have an article handy to show you.
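
The usual trick (a common pattern I'm sketching from memory, not from any particular article) is to keep the per-frame path allocation-free, for example by pooling objects up front and reusing them, so the collector never has a reason to pause the game mid-frame:

using System.Collections.Generic;

// A tiny object pool: allocate up front, reuse every frame, return when done.
// If the hot path never calls 'new', the GC has nothing to collect mid-game.
// (Illustrative sketch only; real engines use fancier pooling schemes.)
class Pool<T> where T : new()
{
    private readonly Stack<T> items = new Stack<T>();

    public Pool(int capacity)
    {
        for (int i = 0; i < capacity; i++)
            items.Push(new T());          // all allocation happens at load time
    }

    public T Rent()
    {
        return items.Count > 0 ? items.Pop() : new T();
    }

    public void Return(T item)
    {
        items.Push(item);
    }
}

class Bullet { public float X, Y, VelX, VelY; }

class Game
{
    static readonly Pool<Bullet> bullets = new Pool<Bullet>(256);

    static void FireAndUpdate()
    {
        Bullet b = bullets.Rent();        // no 'new' in the per-frame path
        b.X = 0; b.Y = 0; b.VelX = 1; b.VelY = 0;
        // ... move it, test collisions ...
        bullets.Return(b);                // reuse instead of leaving it for the GC
    }
}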

I doubt the OP wants to write a graphics engine in C#. I'm no game dev so I won't suggest an engine, but the point is that the most sensitive part of game performance tends to be graphics, and you can probably use a C# wrapper around a C++-based graphics engine, so that the overall performance of the game doesn't depend that much on the performance of the .NET CLR (though it may be sensitive to native interop costs, which are not insignificant; interop benchmarks are included in the link above).

Comment Re:First Rule of secure coding. (Score 1) 51

I would think that the first rule of secure coding is "leave it to the experts". For instance, I've been following the Rust mailing list. The folks making Rust are smart, but they say they won't have any cryptography in the standard library in the near future because they are not confident in their ability to do crypto correctly. It's very easy to inadvertently leave a weakness in your crypto code, even if you're trying to implement a documented standard.

So the first rule: don't do crypto yourself. But of course, given a crypto library written by experts, you have to find a way to use it. So the second rule: be very careful how you use it. Learn the basics of crypto, like the importance of key lengths and salts; the difference between symmetric and asymmetric crypto; learn about some of the attacks that are used (MITM, known plaintext attack, phishing, fuzzing, there are many); consider integrating a password strength meter...
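
For example, .NET ships a PBKDF2 implementation (Rfc2898DeriveBytes), so salted password hashing doesn't require writing any crypto yourself. This is only a minimal sketch; the salt size and iteration count are illustrative rather than a recommendation:

using System;
using System.Security.Cryptography;

class PasswordHashing
{
    static void Main()
    {
        string password = "correct horse battery staple";

        // PBKDF2 (RFC 2898): 16-byte random salt, 100,000 iterations.
        // Tune the iteration count for your hardware and era.
        using (var kdf = new Rfc2898DeriveBytes(password, 16, 100000))
        {
            byte[] salt = kdf.Salt;          // store this alongside the hash
            byte[] hash = kdf.GetBytes(32);  // 256-bit derived key

            Console.WriteLine("salt: " + Convert.ToBase64String(salt));
            Console.WriteLine("hash: " + Convert.ToBase64String(hash));
        }
    }
}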

Just because you yourself couldn't get in without proper authorization doesn't mean an experienced cracker can't.

Comment Re:Unfortunate realities (Score 1) 309

There is no reason that a single /language/ could not support efficient hardware manipulation and also run in a sandbox (with C-like efficiency). If you're writing an OS kernel or /directly/ manipulating hardware then it will not run inside the sandbox, but that doesn't mean the same programming language could not be used for both. NaCl has demonstrated this already, since you can run C code in a sandbox in a web browser and you can also write a kernel in C.

But C/C++ are difficult to use (I'm sorry, "challenging"), error-prone and crufty. Luckily it is possible to have high performance, memory safety, ease of use and pretty much everything else you might want in a single language. A subset of that language could be used for systems programming, and the "full" language could be used for everything else. AFAIK there is no single language that has everything today (high performance, scriptable, memory-efficient, scalable, easy to learn and use...), but there are pretty good languages like D, Julia and Nimrod that combine a ton of good things, and some of these languages (D, Rust) can be used for systems programming. So far there isn't one "perfect" language that I can point to that would be great for kernels and scripting and web apps and server apps, but I can certainly imagine such a language. Until we nail down what an ideal language looks like, why not build a VM that can run the wide variety of new languages that are being created, something that works equally well for web pages and offline apps? Why, in other words, should only Google and Mozilla be in charge of the set of languages that run in a browser?

If Microsoft had done everything right, we'd probably still be locked into proprietary MS Windows. Their mistakes in the short term will probably lead to a healthy heterogeneous ecosystem in the long term... but in the short term, I am disappointed by browsers (why do you force me to use JS?) and by Android/iOS (which were not really intended to compete with Windows in the first place).

Comment Re:Why? (Score 1) 309

What language are you talking about that "runs on any operating system"?

C/C++? The code required on different OSes can be wildly different. You have to produce a separate binary for every OS.

Java? Some code can be the same, but the user-interface code will probably have to be different on every OS.

I think when we're talking about an "offline web app", what we're really talking about is (1) write once, run everywhere, no need for separate UI frameworks on each OS, and (2) something that installs easily, or requires no installation step at all.

And I restate my point: we don't really need a new programming language for this, but Google or Mozilla could, if they wanted, make it possible to bring these two properties to almost all programming languages by creating a fantastic VM.

Comment Re:We don't need more languages, we need bytecode. (Score 3, Insightful) 309

I really don't agree with asm.js as the solution, although I do advocate a VM-based solution. asm.js is an assembly language inside Javascript:

function strlen(ptr) { // get length of C string
  ptr = ptr|0;
  var curr = 0;
  curr = ptr;
  while ((MEM8[curr]|0) != 0) {
    curr = (curr + 1)|0;
  }
  return (curr - ptr)|0;
}

Code written in asm.js is isolated from normal Javascript code and cannot access garbage-collected data (including all normal Javascript objects). I'm not sure it's even possible to access the DOM from asm.js. So, the advantages of asm.js are:
  1. Non-asm.js Javascript engines can run the code.
  2. "Human readability" (but not that much; asm.js is generally much harder to follow than normal code).

The main reason to use asm.js is that you need high performance, but running the code in a non-asm.js engine kills that performance, which undercuts the first advantage. Granted, performance isn't the only reason to use it.

But with most web browsers auto-updating themselves, there's no need to restrict ourselves to only JS in the long run. Whatever is standardized, all browsers will support. As for human readability, that's definitely an advantage (for those that want it), but binary bytecode can be decompiled into code that closely resembles the original source, as tools like Reflector and ILSpy have proven for the CLR.

The disadvantages compared to a proper VM are several:

  - Poor data types: no 64-bit integer, no vectorized types, no built-in strings.
  - No garbage collection.
  - No rich standard library like JS/Java/CLR.
  - Ergo, poor cross-language interoperability (interop is C-like, rather than CLR-like).
  - Slower startup: the asm.js compiler requires a lexing and parsing step.
  - Extra code analysis is required for optimization. The code is not already stored in a form such as SSA that is designed to be optimized.
  - Code size will be larger than a well-designed bytecode, unless obfuscated by shortening variable and function names and removing whitespace.
  - No multithreading or parallelization.
  - Poor support for dynamic languages (ironically).
  - No non-standard control flow, such as coroutines or fibers or even 'goto' (some [non-goto] primitives in other languages do not translate to JS, and goto may be the best translation).

If you add enough features to asm.js to make it into a truly great VM, it will no longer be compatible with Javascript, so the main reason for the Javascript parsing step disappears.

So as it is now, I feel that asm.js is kind of lame. However, it would make sense if there were a road map for turning asm.js into a powerful VM. This roadmap might look like this:

  1. Assembly language with very limited capabilities (today).
  2. Add garbage collector, 64-bit ints, DOM access, etc., with an emulation library for non-asm.js engines.
  3. Carefully define a standard library designed for multi-language interoperability but also high performance (don't just mimic standard Javascript).
  4. Define a bytecode form to reduce code size and startup time.
  5. Add the remaining features that are impossible to support in Javascript. Programs that still want JS compatibility can be careful not to use such features.
  6. Change the name: it's not JS anymore.

Comment Re:Why? (Score 1) 309

So what's the point of this being a "Web" language? Why not just keep downloading apps like we always have?

The advantage of "web" languages over OS-native languages is that "web" code runs on any operating system. That's a pretty big advantage from a developer's perspective! Plus, the web traditionally has built-in security features to minimize what script code can "get away with", a tradition that one hopes could continue with "offline" web apps.

But Google's NaCl has demonstrated that, in principle, any programming language could run inside a browser. So why not offer the advantages of "web languages" without actually mandating any specific language?

Comment Re:Why? (Score 1) 309

It shouldn't just be a web language.

Developers shouldn't have to choose between "code that runs in a browser" and "code that runs on a server". They shouldn't have to choose between "code that runs in a browser" and "code that runs fast". They shouldn't have to choose between "code that runs in a browser" and static typing, or macro systems, or aspect-oriented programming, or parallelizable code, or whatever.

The problem as I see it is that the "browser languages" are dictated "from on high" by makers of browsers like Google and Mozilla. But "non-web" languages continue to proliferate, and the "perfect" language has not yet been found, so I think browsers should get out of the business of dictating language choices. I do think, however, that they should get into the business of promoting language freedom and (because there are so many languages) interoperability between languages.

What I'd like to see is a new VM that is specifically designed to run a wide variety of languages with high performance, in contrast to the Java VM which was designed for one language (and not for high performance), and the CLR which was designed for a few specific languages (C#, VB, C++) but turns out to be unable to handle key features of many other languages. The CLR wouldn't support D or Go very well and probably not Rust either; for example, there are two general-purpose pointer types in the CLR: one type can point to the GC heap and one type can point to non-GC "native" heaps, but you cannot have a pointer that can point "anywhere", which is a feature D requires. So we should create a new VM, which
  • should contain a variety of flexible primitives so that the core features of all known languages can be supported in principle
  • should be high-performance, like Google NaCl. Some people need high performance, there's no doubt about it or way around it.
  • should use a bytecode that is designed for optimization (LLVM bitcode seems like a reasonable starting point), but is non-obfuscated (fully decompilable) by default.
  • should allow and support garbage collection (but not *require* GC-based memory management)
  • should allow multiple threads
  • should be designed to allow code from different languages to communicate easily
  • should have a standard library of data types to promote interoperability between programming languages, and avoid the need to download a large standard library from server to client

Some people believe that the web has become a kind of operating system, but I think this reflects more the way people want to use it rather than the technological reality. Right now a browser is sort of a gimpy operating system that supports only one language and one thread per page. Real OSs support any programming language. Why not the web browser?

Comment This article talks nonsense. (Score 1) 182

I don't think the author of this article has a clue:

But if you buy, say, a 35 Mbps broadband plan, your ISP will be required to deliver all content to you at at least that speed.

No, buying a 35 Mbps plan means that under no circumstances will you receive content faster than 35 Mbps. It's the maximum, not the minimum, and who doesn't know this? Boo "Brian Fung", technology writer for The Washington Post.

It's not physically possible to guarantee 35 Mbps transfer rates, since the transfer can never be faster than the rate at which the server sends out the data, which could be 35 Mbps or 1 Kbps. 35 Mbps would merely be the last-mile speed, with all kinds of unadvertised potential bottlenecks elsewhere in the network.

Net neutrality isn't about guaranteeing a minimum speed, it's about having ISPs do their job--providing approximately the service advertised by building enough capacity on both sides of their network, both at the homes and at the connections to other ISPs, backbones, and popular data sources. Without net neutrality, ISPs in a monopoly or duopoly situation have an incentive to neglect that other side of the network--to NOT build more capacity but just give existing capacity to those that pay.

Firefox

How Firefox Will Handle DRM In HTML 361

An anonymous reader writes "Last year the W3C approved the inclusion of DRM in future HTML revisions. It's called Encrypted Media Extensions, and it was not well received by the web community. Nevertheless, it had the support of several major browser makers, and now Mozilla CTO Andreas Gal has a post explaining how Firefox will be implementing EME. He says, 'This is a difficult and uncomfortable step for us given our vision of a completely open Web, but it also gives us the opportunity to actually shape the DRM space and be an advocate for our users and their rights in this debate. ... From the security perspective, for Mozilla it is essential that all code in the browser is open so that users and security researchers can see and audit the code. DRM systems explicitly rely on the source code not being available. In addition, DRM systems also often have unfavorable privacy properties. ... Firefox does not load this module directly. Instead, we wrap it into an open-source sandbox. In our implementation, the CDM will have no access to the user's hard drive or the network. Instead, the sandbox will provide the CDM only with communication mechanism with Firefox for receiving encrypted data and for displaying the results.'"
Power

Thorium: The Wonder Fuel That Wasn't 204

Lasrick (2629253) writes "Bob Alvarez has a terrific article on the history and realities of thorium as an energy fuel: For 50 years the US has tried to develop thorium as an energy source for nuclear reactors, and that effort has mostly failed. Besides the extraordinary costs involved, the pursuit of thorium-based reactors has created a fair amount of uranium 233, and 96 kilograms of the stuff (enough to fuel 12 nuclear weapons) is now missing from the US national inventory. On top of that, the federal government is attempting to force Nevada into accepting a bunch of the uranium 233, as is, for disposal in a landfill (the Nevada Nuclear Security Site). 'Because such disposal would violate the agency's formal safeguards and radioactive waste disposal requirements, the Energy Department changed those rules, which it can do without public notification or comment. Never before has the agency or its predecessors taken steps to deliberately dump a large amount of highly concentrated fissile material in a landfill, an action that violates international standards and norms.'"
Transportation

Traffic Optimization: Cyclists Should Roll Past Stop Signs, Pause At Red Lights 490

Lasrick writes: "Joseph Stromberg at Vox makes a good case for changing traffic rules for bicyclists so that the 'Idaho stop' is legal. The Idaho stop allows cyclists to treat stop signs as yields and red lights as stop signs, and has created a safer ride for both cyclists and pedestrians. 'Public health researcher Jason Meggs found that after Idaho started allowing bikers to do this in 1982, injuries resulting from bicycle accidents dropped. When he compared recent census data from Boise to Bakersfield and Sacramento, California — relatively similar-sized cities with comparable percentages of bikers, topographies, precipitation patterns, and street layouts — he found that Boise had 30.5 percent fewer accidents per bike commuter than Sacramento and 150 percent fewer than Bakersfield.' Oregon was considering a similar law in 2009, and they made a nice video illustrating the Idaho Stop that is embedded in this article."

Comment Re:First.... (Score 4, Interesting) 288

It's hard to have a proper discussion on this one because there's no cost breakdown given, no reason why decommissioning is so expensive. There's not even any indication of whether it's just this one plant that is expensive, all plants in the U.S., or all first-generation plants in the world.

While the $39 million build cost would be far, far greater after adjusting for inflation, making the $608 million decommissioning seem less ridiculous, this still seems much more expensive than it ought to be. Why? Lawyers? Regulations? A poor reactor design that is simply very difficult to dismantle safely?

Coal is the largest and fastest-growing power source worldwide, and as I understand it, the dirtiest in terms of pollution in general as well as CO2. Wikipedia seems to say that renewables (including, er, wood burning?!) currently have 5% market share in the U.S. (the tables could use some clarifications). In practice, nuclear energy is a necessary ingredient to get CO2 emissions under control. So let's figure out what these huge costs are and then talk about how to reduce them in the future.

Comment Re:TSA-like Money for Fear (Score 2) 271

You've just stated you completely fail to understand the nature of EMP. The most dangerous EMP event is a large nuclear warhead exploded high above the ground, too high to do any meaningful damage on the ground. The damage is caused by the electromagnetic radiation released from the blast as EMP. It only takes one explosion. That isn't a nuclear Armageddon. It is returning a major post-industrial computer based society to a horse and wagon based economy in seconds, without having the horses and wagons to do the work not to mention the computers, computer controlled vehicles (engines), and other electronics.

Having watched Dark Angel I used to think a hydrogen bomb could do that kind of damage. But AFAIK the biggest man-made EMP ever to damage a city was caused by Starfish Prime. It was a 1.44 megaton H-bomb detonated high in the atmosphere, which caused damage in Hawaii, 1,445 kilometres (898 mi) away. The power didn't go out, though, and the damage was limited. Clearly, the EMP from an H-bomb is bad for electronics, but I doubt there's any way that a single bomb of realistic size could knock the entire United States back to a "horse and wagon based economy".

You have to ask: if someone wants to attack with an H-bomb, what is more likely: that they would use it as a normal bomb to kill people, or as an EMP to knock out power and damage electronics within a radius of less than 1,000 miles? Given the seemingly limited devastation (and the need for a rocket in addition to the bomb itself), I think terrorists would surely choose option one. I imagine option two might be considered as part of World War 3, in which case the aggressor might well use several bombs, and EMPs could be just the beginning of our worries: "I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones."

There is another source of EMP that could be more devastating than any single man-made bomb: see the Carrington Event.

Comment Re:If I have kids... (Score 2) 355

I don't think the "use of technology" causes these problems. Rather it is the failure of children to play much with physical objects, as all previous generations have done, and in extreme cases, failure to learn social interaction. That doesn't mean we have to eliminate computers from children's lives, it means children need more parenting and human contact.
