Comment Re:logic (Score 1) 299

There's an interesting project out of Microsoft Research to design a language/IDE that is easy to input via touch interfaces, called TouchDevelop. I've spoken with one of the researchers at length, and his motivation for doing this boils down to the same argument you make. When he was a kid, programming was simple to get into: he used BASIC on his Apple II. It was a language that kids could teach other kids, because their parents largely didn't "get it". MSR has a pilot program in one of the local high schools where they teach an intro CS course using TouchDevelop. Anecdotally, kids seem to pick up the language very quickly, and the ease of writing games draws in a lot of kids who wouldn't ordinarily be motivated to do this.

That said, I think TouchDevelop's interface (like most of Metro) is a bit of a train wreck. I am a professional programmer, but I find myself floundering around. Part of the issue is Metro's complete annihilation of the distinction between text, links, and buttons. Unfortunately, iOS 7 has continued this trend. But I digress...

TouchDevelop is also not a graphical language like LabVIEW, and I think that's a bit of a mistake. While I agree that a text-based language is preferable for real work, I think a visual interface would be entirely appropriate for a pedagogical language. Heck, LabVIEW is used daily by lots of real engineers who simply want some basic programmability for their tools without having to invest the [significant] time into learning a text-based language.

Comment Re:Clueless (Score 1) 125

Counting operations is not enough. Memory access time is nonuniform because of cache effects, architecture (NUMA, distributed memory), code layout (e.g., is your loop body one instruction larger than the L1 i-cache?), etc. Machine instructions have different timings. CISC instructions may be slower than their serial RISC counterparts. Or they may not be. SMT may make sequential code faster than parallel code by resolving dependencies faster. Branch predictors and speculation can precompute parts of your algorithm with otherwise idle functional units. Better algorithms can do more work with fewer "flops". And on and on and on...
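
To make the memory point concrete, here's a minimal sketch (Python with NumPy, which I'm assuming you have handy) that sums the same number of elements twice-- once packed contiguously, once spread out in memory. The flop count is identical; the runtime usually isn't.

    import time
    import numpy as np

    n = 10_000_000
    stride = 16
    contiguous = np.ones(n // stride)   # packed tightly in memory
    strided = np.ones(n)[::stride]      # same element count, spread out

    def clock(arr, reps=50):
        # Time repeated sums; the flop count per call is identical for both arrays.
        start = time.perf_counter()
        for _ in range(reps):
            arr.sum()
        return time.perf_counter() - start

    print("contiguous:", clock(contiguous))
    print("strided:   ", clock(strided))   # typically slower: cache misses dominate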

The best way to write fast code is to write it and run it (on representative inputs). Then write another version and run it. Run it like an experiment, and do a hypothesis test to see which one has the statistically significant speedup. That's the only way to write fast code on modern machines. The idea that you can predict, by hand, which code will be fast on modern architectures is largely a myth.
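
Something like this is what I mean-- a minimal sketch (Python; version_a and version_b are hypothetical stand-ins for your two implementations, and I'm assuming SciPy is installed): time both repeatedly and test whether the difference is real or just noise.

    import timeit
    from scipy import stats   # assumes SciPy is available

    def version_a(data):
        # Hypothetical implementation #1.
        return sorted(data)

    def version_b(data):
        # Hypothetical implementation #2 (same result, different work).
        return sorted(data, reverse=True)[::-1]

    data = list(range(100_000, 0, -1))

    # Collect repeated timing samples for each version.
    samples_a = timeit.repeat(lambda: version_a(data), number=10, repeat=30)
    samples_b = timeit.repeat(lambda: version_b(data), number=10, repeat=30)

    # Welch's t-test: don't assume the two sets of timings have equal variance.
    t, p = stats.ttest_ind(samples_a, samples_b, equal_var=False)
    print("mean A:", sum(samples_a) / len(samples_a))
    print("mean B:", sum(samples_b) / len(samples_b))
    print("p-value:", p)   # small p => the speedup is statistically significant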

Comment Re:Clueless (Score 2) 125

As a computer scientist:

We rarely refer to the cost of an algorithm in terms of flops, since it is bound to change with 1) software implementation details, 2) hardware implementation details, and 3) input data dependencies (for algorithms whose behavior depends on the data). Instead, we describe algorithms in "Big O" notation, a convention for describing how an algorithm's cost grows (usually in the worst case) as a function of n, the size of the input. Constant factors are ignored. This theoretical figure allows apples-to-apples comparisons between algorithms. Of course, in practice, constant factors need to be considered in many specific scenarios.
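
As a minimal sketch of what the notation abstracts over (Python, toy example): count the comparisons two search algorithms make as n grows. Big O throws away the constants and keeps the growth rates-- n for linear search, log n for binary search.

    def linear_search_comparisons(haystack, needle):
        # Worst case scans every element: O(n) comparisons.
        count = 0
        for item in haystack:
            count += 1
            if item == needle:
                break
        return count

    def binary_search_comparisons(haystack, needle):
        # Each iteration halves the search range: O(log n) comparisons.
        count, lo, hi = 0, 0, len(haystack)
        while lo < hi:
            count += 1
            mid = (lo + hi) // 2
            if haystack[mid] < needle:
                lo = mid + 1
            else:
                hi = mid
        return count

    for n in (1_000, 1_000_000):
        data = list(range(n))
        print(n, linear_search_comparisons(data, n - 1),
              binary_search_comparisons(data, n - 1))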

"flops" are more commonly used when talking about machine performance, and that's why they're expressed as a rate. You care about the rate of the machine, since that often directly translates into performance. Computer architects also measure integer operations per second, which is in many ways more important for general-purpose computing. Flops are really only of interest nowadays for people doing scientific computing now that graphics-related floating point things have been offloaded to GPUs.

If you want to be pedantic, computers are, of course, hardware approximations of a Turing machine. But it's silly to talk about them using Big O notation, since the "algorithm" for (sequential) machines is mostly the same regardless of what machine you're talking about. The constant factors here are the most important thing, since they correspond to gate delay, propagation delay, clock speed, DRAM speed, etc.

Comment Re:1st step. (Score 1) 227

Microsoft doesn't use Perforce. They use an ancient, crappy fork of Perforce called Source Depot. I had to endure this piece of garbage while I interned at Microsoft. I can only guess that it's institutional momentum-- SD is so bad that I ended up using Subversion locally and periodically syncing back to SD when I needed to share my code. Not only does Source Depot lack run-of-the-mill Subversion features like "svn status"; when I asked what the SD equivalent was, a long conversation ensued among Microsoft developers that ended with somebody sending me a PowerShell script. Gah. Of course, the number 1 thing I missed there was the UNIX shell.

That said, I was pleasantly surprised by the rest of Microsoft's toolchain. Having come from Eclipse (awful), IntelliJ (OK), and Xcode (baffling), I found Visual Studio great. Visual Studio is so much better than any other IDE I've ever used that I would actually pay full price for it (I get the "Microsoft alum" discount) if I didn't have to use it on Windows. And Microsoft was more than willing to let me write code in F#, which was fantastic. In general, Microsoft takes care of their devs. Having integrated git is just icing on the cake.

Comment Re:So copyright is not just who can copy? (Score 1) 338

Just as a bit of historical context: Sid Meier/MicroProse used to have a favorably priced service that offered backup disks for a few bucks. I suspect that the reason for this, at the time, was that few people (myself included) had more than one floppy drive. I took advantage of this to acquire copies of F-19 Stealth Fighter and Railroad Tycoon. Sadly, I have since lost those games, although I should point out: you can get many old games, DRM-free, at GOG for next to nothing.

I think copyright is OK. As the creator of a work, you should be able to license it however you please. Many bits of software are a true labor of love, and I think that authors should be compensated for their work. Just because you do not agree with them does not mean that the law is unjust.

However, I think that copy-protection is extremely misguided. Fair-use exemptions aside, I believe that society should be allowed to archive these things, at least for historical reasons. Actually, there's a funny story about this-- I know a researcher at Microsoft who wrote a relatively famous Apple II game in the early 1980s, when he was a high school student. One of his recent projects has been developing software to get kids into programming, which is much more complicated than it was when we were kids hacking on Apple, Commodore, and TI machines ourselves. To prove his point, he fired up his old Apple II game in an emulator during a presentation, and he showed the kind of code that produced a game like his: simple stuff in BASIC. But that game-- he had lost it years ago, and he had to resort to using the cracked version floating around on the Internet. I couldn't get him to comment on the merits of copy protection, but I think the lesson is pretty clear.

I should also point out that I think modern copyright terms are completely ridiculous. Fifteen years ought to be a reasonable amount of time to capitalize on your work before the public gets the benefit.

I have no affiliation with GOG, but I should point out that you can get your SMAC fix there as well.

Comment Re:Google Could use some Fresh Ideas in AI (Score 5, Insightful) 117

Yeah, but there's a reason why statistical models are hot now and why the old AI style of logical reasoning isn't: the AI stuff only works when the input is perfect, or at least planned for. As we all know, language doesn't really have rules, just conventions. This is why the ML approach to NLP is powerful: the machine works out what was probably meant. That's far more useful, because practically nobody writes well. When Abdur Chowdhury was still Twitter's main NLP guy, he visited our department, and guess what-- people even write in more than one language in a single sentence! Not to mention that in the old AI-style approach, if you fill a big box full of rules, you have to search through them. Computational complexity is a major limiting factor in all AI problems. ML has this nice property that you can often simply trade accuracy for speed. See Monte Carlo methods.
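
To make the accuracy-for-speed point concrete, here's a minimal sketch (Python, toy example): estimating pi by Monte Carlo. Fewer samples run faster but give a noisier answer; more samples cost more time and tighten the estimate.

    import random
    import time

    def estimate_pi(samples):
        # Fraction of random points in the unit square that land inside the
        # quarter circle approximates pi/4.
        inside = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / samples

    for samples in (1_000, 100_000, 10_000_000):
        start = time.perf_counter()
        estimate = estimate_pi(samples)
        elapsed = time.perf_counter() - start
        print(samples, "samples:", round(estimate, 5), "in", round(elapsed, 2), "s")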

As you point out, ML doesn't "understand" anything. I personally think "understanding" is a bit of a squishy term. Those old AI-style systems were essentially fancy search algorithms with a large set of states and transition rules. Is that "understanding"? ML is basically the same idea, except that choosing the next state involves calculating a probability distribution, and sometimes the transition itself is probabilistic.

I think that hybrid ML/AI systems-- i.e., systems that combine both logical constraints and probabilistic reasoning-- will prove to be very powerful in the future. But does that mean these machines "understand"? If you mean something like what happens in the human brain, I'm not so sure. Do humans "understand"? Or are we also automata? In order to determine whether we've "cracked AI", we need to know the answers to those questions. See Kant and good luck.

Comment Re:Just wrote a 2500 pg paper on flash trading (Score 1) 136

This is not true. While complicated software can be difficult to test, and really complicated software can often only be evaluated empirically, straightforward mathematical software you care deeply about can be reasoned about formally, even in the presence of unusual inputs. Quantifying the behavior of algorithms is, in fact, the purpose of computer science. I don't have a deep knowledge of financial algorithms, but it would surprise me if their analysis were markedly different from that of other algorithms. Often, best-case, worst-case, and average-case analysis of performance/runtime can be carried out, and even with nondeterministic algorithms, bounds can be put on the probability of error. Good software engineering practices (using types or assertions, which would have eliminated this particular error) can also prevent your formal assumptions from being violated. It sounds to me like the people who wrote this particular algorithm did none of this. But the presence of mistakes like this doesn't make the idea of algorithmic trading inherently risky.
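
To illustrate the "types or assertions" bit, here's a minimal, hypothetical sketch (Python; the OrderSize class and the limit are invented for illustration, not drawn from any real trading system): make invalid order sizes fail at construction time instead of reaching the market.

    from dataclasses import dataclass

    MAX_ORDER_SIZE = 1_000_000   # hypothetical per-order limit

    @dataclass(frozen=True)
    class OrderSize:
        shares: int

        def __post_init__(self):
            # The formal assumption, enforced at construction time.
            if not (0 < self.shares <= MAX_ORDER_SIZE):
                raise ValueError("invalid order size: %d" % self.shares)

    def submit_order(symbol: str, size: OrderSize) -> None:
        # Anything that reaches this point has already been validated.
        print("submitting", size.shares, "shares of", symbol)

    submit_order("XYZ", OrderSize(500))          # fine
    submit_order("XYZ", OrderSize(-4_000_000))   # raises before touching the market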

Comment Re:Stop annulling these trades. (Score 1) 136

In the interest of creating a well-functioning system, I think system designers should try to catch these errors. If errors only affected the one party who made the mistake, your proposal might be worth considering, but in fact, these errors affect people who have nothing to do with it, simply because they participate in the market. Thus, it is better to eliminate errors altogether.

The most obvious fix is that negative trades should not be allowed. Even better would be a type system which expresses valid order sizes. But even within the range of valid orders, some order sizes are more likely than others. Given the volume of orders, it ought to be pretty easy to characterize the distribution of order sizes-- I think a smarter system should flag outlying order sizes for secondary human review. This is a pretty easy check to implement, and it surprises me that it doesn't already exist in the system.
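
A minimal sketch of the outlier check I have in mind (Python; the historical numbers and the 3-sigma threshold are made up for illustration): characterize the distribution of past order sizes and flag anything far outside it for human review.

    import math
    import statistics

    # Hypothetical historical order sizes; a real system would use live data.
    historical_sizes = [100, 250, 500, 800, 1_000, 1_200, 2_000, 5_000] * 100

    log_sizes = [math.log(s) for s in historical_sizes]
    mu = statistics.mean(log_sizes)
    sigma = statistics.pstdev(log_sizes)

    def needs_review(order_size, threshold=3.0):
        # Invalid sizes always get flagged; otherwise flag anything more than
        # `threshold` standard deviations from the mean on the log scale.
        if order_size <= 0:
            return True
        z = (math.log(order_size) - mu) / sigma
        return abs(z) > threshold

    print(needs_review(1_500))       # False: within the usual range
    print(needs_review(4_000_000))   # True: wildly out of distribution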

Comment Re:Zen and the Art of Motorcycle Maintenance (Score 3, Informative) 700

While ZMM certainly borrows some ideas from Eastern philosophy, that is not the central point of the book. Eastern thinking is mainly used as a counterpoint to the classical Western way of thinking.

I've read ZMM about seven times. I get something different out of it on every read. It is an attempt to apply rational thinking to the idea of rationality itself, in addition to just being a great story. The section on 'gumption traps' is worth the price of admission alone.

Definitely my favorite book.

Comment Re:Well, not calling them a "fan" might be a start (Score 2) 454

In my opinion, someone who knows their way around the various Windows/UNIX interoperability issues is what you should really be asking for. Some things are easy (did you know that Active Directory offers LDAP and Kerberos services?), but other things are harder (domain trusts with non-Windows machines). Somebody who has experience integrating Samba with a fairly recent Windows domain will tend to have a pretty good idea of how the entire ecosystem works.
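
As a concrete example of the LDAP side (a minimal sketch using the ldap3 Python library; the server name, account, and base DN are placeholders, not real values), querying Active Directory for a user's group memberships looks roughly like this:

    from ldap3 import Server, Connection, NTLM, ALL

    # Placeholder connection details -- substitute your own domain controller
    # and credentials.
    server = Server("dc01.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\jdoe", password="secret",
                      authentication=NTLM)

    if conn.bind():
        # Look up a user and pull back their group memberships.
        conn.search("dc=example,dc=com",
                    "(sAMAccountName=jdoe)",
                    attributes=["memberOf"])
        for entry in conn.entries:
            print(entry)
        conn.unbind()
    else:
        print("bind failed:", conn.result)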

I am also biased, because I am a programmer, but I think that anyone who spends time programming on a Windows machine is going to have a great deal more understanding than someone who just reads about how things work in books. For one, they don't throw their hands up in the air when they can't solve something-- they poke and prod and eventually program their way out of it. IT workers with programming experience aren't the easiest people to find (and Windows hackers seem to be more elusive than UNIX hackers for cultural reasons), but they're out there. I did this for years. Anyway, someone who can answer "What's the difference between COM and .NET?" probably has a pretty good idea how Windows is put together.
