Comment Re:It's too slow. (Score 1) 254

C#'s speed depends more on the coding style than on the language. If you know what you're doing, Microsoft .NET gets within 2x of C++ speed most of the time. Mono is substantially worse (have a look at these benchmarks, which focus on simple programs written in a "C with classes" style). If you use features like LINQ (considered a "must-have" C# feature) you'll take a performance hit, but when you write C# code as if it were C code, its performance isn't far from C's. Luckily you don't have to write the whole program so carefully, just the parts that have to be fast.
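
To illustrate the two styles (a toy sketch of my own, not taken from those benchmarks; actual numbers will vary by runtime and workload):

using System;
using System.Diagnostics;
using System.Linq;

class SumDemo
{
    static void Main()
    {
        int[] data = Enumerable.Range(0, 10000000).ToArray();

        var sw = Stopwatch.StartNew();
        // Idiomatic LINQ: concise, but pays for delegate calls and iterator objects.
        long viaLinq = data.Where(x => (x & 1) == 0).Sum(x => (long)x);
        Console.WriteLine("LINQ: {0} in {1} ms", viaLinq, sw.ElapsedMilliseconds);

        sw.Restart();
        // "C with classes" style: a plain loop, no allocations, no virtual calls.
        long viaLoop = 0;
        for (int i = 0; i < data.Length; i++)
            if ((data[i] & 1) == 0) viaLoop += data[i];
        Console.WriteLine("Loop: {0} in {1} ms", viaLoop, sw.ElapsedMilliseconds);
    }
}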

Games aren't just concerned with what is traditionally thought of as "speed", namely throughput; games are also concerned with latency. C# is based on Garbage Collection, and GC tends to add more latency than deterministic memory management (C/C++). Since writing games largely in GC languages is now a very common thing (e.g. Java on Android), I'm sure articles have been written about how to avoid getting bitten by the GC, but I don't have an article handy to show you.
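
The usual advice boils down to: don't allocate in the per-frame hot path, so the GC never has a reason to pause you mid-game. A minimal sketch of the object-pool idea, with a made-up Bullet type:

using System.Collections.Generic;

// Hypothetical projectile type; the fields don't matter, the reuse does.
class Bullet { public float X, Y, VelX, VelY; public bool Active; }

class BulletPool
{
    readonly Stack<Bullet> free = new Stack<Bullet>();

    public BulletPool(int capacity)
    {
        // Allocate everything up front, during loading, not during gameplay.
        for (int i = 0; i < capacity; i++) free.Push(new Bullet());
    }

    public Bullet Spawn()
    {
        // Reuse an existing object instead of 'new', so no garbage accumulates.
        return free.Count > 0 ? free.Pop() : new Bullet();
    }

    public void Despawn(Bullet b) { b.Active = false; free.Push(b); }
}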

I doubt the OP wants to write a graphics engine in C#. I'm no game dev, so I won't suggest an engine, but the point is that the most performance-sensitive part of a game tends to be graphics, and you can probably use a C# wrapper around a C++-based graphics engine, so that the overall performance of the game doesn't depend that much on the performance of the .NET CLR (though it may be sensitive to native interop costs, which are not insignificant; interop benchmarks are included in the link above).
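
Such a wrapper is essentially a layer of P/Invoke declarations. A sketch of what that looks like (the "game_engine" library and its functions are invented for illustration, not any real engine's API); since every managed-to-native call has fixed overhead, a good wrapper batches work per call:

using System.Runtime.InteropServices;

static class NativeEngine
{
    // Hypothetical native engine; these signatures are made up for illustration.
    // Passing a whole array per call keeps the number of boundary crossings low.
    [DllImport("game_engine")]
    public static extern void DrawSprites(float[] positions, int spriteCount);

    [DllImport("game_engine")]
    public static extern void Present();
}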

Comment Re:First Rule of secure coding. (Score 1) 51

I would think that the first rule of secure coding is "leave it to the experts". For instance, I've been following the Rust mailing list. The folks making Rust are smart, but they say they won't have any cryptography in the standard library in the near future because they are not confident in their ability to do crypto correctly. It's very easy to inadvertently leave a weakness in your crypto code, even if you're trying to implement a documented standard.

So the first rule: don't do crypto yourself. But of course, given a crypto library written by experts, you have to find a way to use it. So the second rule: be very careful how you use it. Learn the basics of crypto, like the importance of key lengths and salts, and the difference between symmetric and asymmetric crypto; learn about some of the attacks that are used (MITM, known-plaintext attacks, phishing, fuzzing; there are many); consider integrating a password strength meter...
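
As one concrete example of "use the experts' library, carefully": password storage. Don't roll your own hash; use the platform's slow, salted key-derivation function. A sketch using .NET's built-in PBKDF2 class (the iteration count and output length here are illustrative, not recommendations):

using System.Security.Cryptography;

static class PasswordHasher
{
    const int Iterations = 100000; // tune for your hardware and threat model

    public static void Hash(string password, out byte[] salt, out byte[] hash)
    {
        // Rfc2898DeriveBytes = PBKDF2; it generates a random 16-byte salt for us.
        using (var kdf = new Rfc2898DeriveBytes(password, 16, Iterations))
        {
            salt = kdf.Salt;
            hash = kdf.GetBytes(32); // 256-bit derived key
        }
    }

    public static bool Verify(string password, byte[] salt, byte[] expected)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            byte[] actual = kdf.GetBytes(expected.Length);
            // Compare in constant time so the comparison itself doesn't leak information.
            int diff = 0;
            for (int i = 0; i < actual.Length; i++) diff |= actual[i] ^ expected[i];
            return diff == 0;
        }
    }
}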

Just because you yourself couldn't get in without proper authorization doesn't mean an experienced cracker can't.

Comment Re:Unfortunate realities (Score 1) 309

There is no reason that a single /language/ could not support efficient hardware manipulation and also run in a sandbox (with C-like efficiency). If you're writing an OS kernel or /directly/ manipulating hardware then it will not run inside the sandbox, but that doesn't mean the same programming language could not be used for both. NaCl has demonstrated this already, since you can run C code in a sandbox in a web browser and you can also write a kernel in C.

But C/C++ are difficult to use (I'm sorry, "challenging"), error-prone and crufty. Luckily it is possible to have high performance, memory safety, ease of use and pretty much everything else you might want in a single language. A subset of that language could be used for systems programming, and the "full" language could be used for everything else. AFAIK there is no single language that has everything today (high performance, scriptable, memory-efficient, scalable, easy to learn and use...), but there are pretty good languages like D, Julia and Nimrod that combine a ton of good things, and some of these languages (D, Rust) can be used for systems programming. So far there isn't one "perfect" language that I can point to that would be great for kernels and scripting and web apps and server apps, but I can certainly imagine such a language. Until we nail down what an ideal language looks like, why not build a VM that can run the wide variety of new languages that are being created, something that works equally well for web pages and offline apps? Why, in other words, should only Google and Mozilla be in charge of the set of languages that run in a browser?

If Microsoft had done everything right, we'd probably still be locked into proprietary MS Windows. Their mistakes in the short term will probably lead to a healthy heterogeneous ecosystem in the long term... but in the short term, I am disappointed by browsers (why do you force me to use JS?) and by Android/iOS (which were not really intended to compete with Windows in the first place).

Comment Re:Why? (Score 1) 309

What language are you talking about that "runs on any operating system"?

C/C++? The code required on different OSes can be wildly different. You have to produce a separate binary for every OS.

Java? Some code can be the same, but the user-interface code will probably have to be different on every OS.

I think when we're talking about an "offline web app", what we're really talking about is (1) write once, run everywhere, no need for separate UI frameworks on each OS, and (2) something that installs easily, or requires no installation step at all.

And I restate my point: we don't really need a new programming language for this, but Google or Mozilla could, if they wanted, make it possible to bring these two properties to almost all programming languages by creating a fantastic VM.

Comment Re:We don't need more languages, we need bytecode. (Score 3, Insightful) 309

I really don't agree with asm.js as the solution, although I do advocate a VM-based solution. asm.js is an assembly language inside Javascript:

function strlen(ptr) { // get length of C string
  ptr = ptr|0;
  var curr = 0;
  curr = ptr;
  // MEM8 is a byte-typed view of the module's heap, declared elsewhere.
  while ((MEM8[curr]|0) != 0) { // parentheses matter: coerce with |0 before comparing
    curr = (curr + 1)|0;
  }
  return (curr - ptr)|0;
}

Code written in asm.js is isolated from normal Javascript code and cannot access garbage-collected data (including all normal Javascript objects). I'm not sure it's even possible to access the DOM from asm.js. So, the advantages of asm.js are:
  1. Non-asm.js Javascript engines can run the code.
  2. "Human readability" (but not that much; asm.js is generally much harder to follow than normal code).

A main reason to use asm.js in the first place is that you need high performance, and running in a non-asm.js engine kills your performance, which undercuts advantage #1. Granted, performance isn't the only reason to use it.

But with most web browsers auto-updating themselves, there's no need to restrict ourselves to JS in the long run: whatever is standardized, all browsers will support. As for human readability, that's definitely an advantage (for those that want it), but binary bytecode can be decompiled to code that closely resembles the original source, as tools like Reflector and ILSpy have proven for the CLR.

The disadvantages compared to a proper VM are several:

  • Poor data types: no 64-bit integers, no vectorized types, no built-in strings.
  • No garbage collection.
  • No rich standard library like JS/Java/CLR.
  • Ergo, poor cross-language interoperability (interop is C-like rather than CLR-like).
  • Slower startup: the asm.js compiler requires a lexing and parsing step.
  • Extra code analysis is required for optimization, since the code is not already stored in a form, such as SSA, that is designed to be optimized.
  • Code size will be larger than a well-designed bytecode, unless the source is obfuscated by shortening variable and function names and removing whitespace.
  • No multithreading or parallelization.
  • Poor support for dynamic languages (ironically).
  • No non-standard control flow, such as coroutines, fibers, or even 'goto' (some control-flow primitives in other languages don't map cleanly to JS, and goto would be the most natural translation).

If you add enough features to asm.js to make it into a truly great VM, it will no longer be compatible with Javascript, so the main reason for the Javascript parsing step disappears.

So as it is now, I feel that asm.js is kind of lame. However, it would make sense if there were a roadmap for turning asm.js into a powerful VM. That roadmap might look like this:

  1. Assembly language with very limited capabilities (today).
  2. Add a garbage collector, 64-bit ints, DOM access, etc., with an emulation library for non-asm.js engines.
  3. Carefully define a standard library designed for multi-language interoperability but also high performance (don't just mimic standard Javascript).
  4. Define a bytecode form to reduce code size and startup time.
  5. Add the remaining features that are impossible to support in Javascript. Programs that still want JS compatibility can be careful not to use such features.
  6. Change the name: it's not JS anymore.

Comment Re:Why? (Score 1) 309

So what's the point of this being a "Web" language? Why not just keep downloading apps like we always have?

The advantage of "web" languages over OS-native languages is that "web" code runs on any operating system. That's a pretty big advantage from a developer's perspective! Plus, the web traditionally has built-in security features to minimize what script code can "get away with", a tradition that one hopes could continue with "offline" web apps.

But Google's NaCl has demonstrated that, in principle, any programming language could run inside a browser. So why not offer the advantages of "web languages" without actually mandating any specific language?

Comment Re:Why? (Score 1) 309

It shouldn't just be a web language.

Developers shouldn't have to choose between "code that runs in a browser" and "code that runs on a server". They shouldn't have to choose between "code that runs in a browser" and "code that runs fast". They shouldn't have to choose between "code that runs in a browser" and static typing, or macro systems, or aspect-oriented programming, or parallelizable code, or whatever.

The problem as I see it is that the "browser languages" are dictated "from on high" by makers of browsers like Google and Mozilla. But "non-web" languages continue to proliferate, and the "perfect" language has not yet been found, so I think browsers should get out of the business of dictating language choices. I do think, however, that they should get into the business of promoting language freedom and (because there are so many languages) interoperability between languages.

What I'd like to see is a new VM that is specifically designed to run a wide variety of languages with high performance, in contrast to the Java VM which was designed for one language (and not for high performance), and the CLR which was designed for a few specific languages (C#, VB, C++) but turns out to be unable to handle key features of many other languages. The CLR wouldn't support D or Go very well and probably not Rust either; for example, there are two general-purpose pointer types in the CLR: one type can point to the GC heap and one type can point to non-GC "native" heaps, but you cannot have a pointer that can point "anywhere", which is a feature D requires. So we should create a new VM, which
  • should contain a variety of flexible primitives so that the core features of all known languages can be supported in principle
  • should be high-performance, like Google NaCl. Some people need high performance, there's no doubt about it or way around it.
  • should use a bytecode that is designed for optimization (LLVM bitcode seems like a reasonable starting point), but is non-obfuscated (fully decompilable) by default.
  • should allow and support garbage collection (but not *require* GC-based memory management)
  • should allow multiple threads
  • should be designed to allow code from different languages to communicate easily
  • should have a standard library of data types to promote interoperability between programming languages, and avoid the need to download a large standard library from server to client

Some people believe that the web has become a kind of operating system, but I think this reflects more the way people want to use it rather than the technological reality. Right now a browser is sort of a gimpy operating system that supports only one language and one thread per page. Real OSs support any programming language. Why not the web browser?

Comment This article talks nonsense. (Score 1) 182

I don't think the author of this article has a clue:

But if you buy, say, a 35 Mbps broadband plan, your ISP will be required to deliver all content to you at at least that speed.

No, buying a 35 Mbps plan means that under no circumstances will you receive content faster than 35 Mbps. It's the maximum, not the minimum, and who doesn't know this? Boo "Brian Fung", technology writer for The Washington Post.

It's not physically possible to guarantee 35 Mbps transfer rates anyway, since the achievable speed is capped by the slowest link in the path, including the rate at which the server can send out the data, which could be 35 Mbps or 1 Kbps. 35 Mbps would merely be the last-mile speed, with all kinds of unadvertised potential bottlenecks elsewhere in the network.

Net neutrality isn't about guaranteeing a minimum speed; it's about having ISPs do their job--providing approximately the service advertised by building enough capacity at both sides of their network, both at the homes and at the connections to other ISPs, backbones, and popular data sources. Without net neutrality, ISPs in a monopoly or duopoly situation have an incentive to neglect that other side of the network--to NOT build more capacity but just give existing capacity to those that pay.

Comment Re:First.... (Score 4, Interesting) 288

It's hard to have a proper discussion on this one because there's no cost breakdown given, no reason why decommissioning is so expensive. There's not even any indication of whether it's just this one plant that is expensive, or all plants in the U.S., or all first-generation plants in the world.

While the $39 million build cost would be far, far greater after adjusting for inflation, making the $608 million decommissioning seem less ridiculous, this still seems much more expensive than it ought to be. Why? Lawyers? Regulations? A poor reactor design that is simply very difficult to dismantle safely?

Coal is the largest and fastest-growing power source worldwide, and as I understand it, the dirtiest in terms of pollution in general as well as CO2. Wikipedia seems to say that renewables (including, er, wood burning?!) currently have 5% market share in the U.S. (the tables could use some clarifications). In practice, nuclear energy is a necessary ingredient to get CO2 emissions under control. So let's figure out what these huge costs are and then talk about how to reduce them in the future.

Comment Re:TSA-like Money for Fear (Score 2) 271

You've just stated that you completely fail to understand the nature of EMP. The most dangerous EMP event is a large nuclear warhead exploded high above the ground, too high to do any meaningful blast damage on the ground. The damage is caused by the electromagnetic radiation released from the explosion as EMP. It only takes one explosion. That isn't a nuclear Armageddon. It is returning a major post-industrial, computer-based society to a horse-and-wagon economy in seconds, without having the horses and wagons to do the work, not to mention the computers, computer-controlled vehicles (engines), and other electronics.

Having watched Dark Angel I used to think a hydrogen bomb could do that kind of damage. But AFAIK the biggest man-made EMP ever to damage a city was caused by Starfish Prime. It was a 1.44 megaton H-bomb detonated high in the atmosphere, which caused damage in Hawaii, 1,445 kilometres (898 mi) away. The power didn't go out, though, and the damage was limited. Clearly, the EMP from an H-bomb is bad for electronics, but I doubt there's any way that a single bomb of realistic size could knock the entire United States back to a "horse and wagon based economy".

You have to ask: if someone wants to attack with an H-bomb, what is more likely: that they would use it as a normal bomb to kill people, or as an EMP to knock out power and damage electronics within a radius of less than 1,000 miles? Given the seemingly limited devastation (and the need for a rocket in addition to the bomb itself), I think terrorists would surely choose option one. I imagine option two might be considered as part of World War III, in which case the aggressor might well use several bombs, and EMPs could be just the beginning of our worries: "I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones."

There is another source of EMP that could be more devastating than any single man-made bomb: see the Carrington Event.

Comment Re:If I have kids... (Score 2) 355

I don't think the "use of technology" causes these problems. Rather it is the failure of children to play much with physical objects, as all previous generations have done, and in extreme cases, failure to learn social interaction. That doesn't mean we have to eliminate computers from children's lives, it means children need more parenting and human contact.

Comment Re:Denial of the root cause (Score 2) 343

Uh, have you noticed that the countries with the most wealth seem to have the fewest children? So my (naive) view would be that increasing the "material expectations" of the population, by increasing the wealth of the masses, has a better chance of avoiding dangerous overcrowding than keeping the majority of the world poor. One-child policies like China's, while on the extreme side, are also effective.

I suppose when you talk of "material expectations" you are thinking of North Americans and their rampant consumerism. I submit that this is not a problem with "human beings" so much as a problem with Americans and other affluent cultures. "Human beings" are certainly capable of living with less; most often this occurs due to lack of wealth, but there are a lot of things that we could voluntarily give up without harming our quality of life.

For example, when you go to McDonald's, do you really need the 3 napkins they give you automatically? Does your Big Mac really have to come in a box that you immediately throw away? Could re-usable plastic cups be used instead? Likewise in our home life, I know most people could find, if they wanted, ways to reduce waste and use less energy. Did you know you can turn the stove off before you remove the pot, and it can keep cooking for up to several minutes? Did you know apples with blemishes are safe to eat? Personally, I have a good quality of life as I try my best to reduce waste, but I know many of my peers waste a lot of food and goods and their lives are no better for it. I submit that this is an issue of human culture rather than human beings.

Comment Re:Nuclear is obvious, an energy surplus is desire (Score 1) 433

It's interesting to watch the different arguments from pro-nuclear and anti-nuclear forces. The pro-nuclear forces point out that building all new power plants as 100% renewable in the near future is not practical but a mixture of renewables and nuclear is. They go on to point out the relatively high rate of deaths from coal power (such as direct deaths in coal mines, and indirect deaths from air pollution) per unit of power generated, compared to the few deaths from nuclear. They may even then point out that petroleum power in general has a poor safety record compared to nuclear worldwide.

The anti-nuclear crowd, meanwhile, focuses on a tiny number of accidents like Chernobyl and a couple of problematic, but non-lethal, old reactor designs (like the 1970 pebble-bed reactor mentioned by the parent), as if costly problems were unique to the nuclear industry. After all, why pay any attention to accidents, deaths or cost overruns in fossil-fuel power when we can simply make every single new power plant a renewable power plant? Never mind that not every place in the world has plentiful sunlight or wind. They then move on to the only argument about nuclear that is actually fair--that it often costs more than renewables.

Nuclear faces political and popular opposition, often due to outdated opinions based on a few unsafe reactors from the 60s and 70s (did you know that Fukushima reactor 1 was built before Chernobyl? Or that another nearby reactor, run by a more safety-conscious company, survived the tsunami?). This opposition and regulatory uncertainty increase costs, plus reactors are traditionally built with the "craftsman" approach, where every reactor is large, somewhat unique, and built on-site. It seems to me that costs could be reduced greatly if nuclear reactors were mass-produced like trucks (small reactors seem to work great for nuclear subs!) and distributed around the country from factories, and if they used passive failsafes to make uncontrolled meltdowns "impossible", so that outer containment chambers could be less costly.

But the public opposition is no small barrier to overcome. Remember how a Tesla car makes nationwide news whenever a single battery pack is damaged and catches fire, even though there are 150,000 vehicle fires reported every year in the U.S.? You can expect the same thing with small modular reactors--barring some terrible disaster, all sorts of problems with petroleum power plants will be scarcely noticed, while a single minor nuclear incident will make nationwide headlines. Surely this makes potential nuclear investors nervous.
