There's a huge difference between IT being treated as a competitive advantage and IT being treated as a cost. Generally speaking, when IT is simply a cost, it will always be short-staffed and barely able to keep up with its workload - and it will often be targeted first when costs need to be cut. While the idea of turning IT into a competitive advantage sounds good, it isn't easy to execute: the IT department often needs to expand its role and needs an objective means of measuring its contributions. But the basic idea is to get IT involved in the company's bottom line - in this case, to find out what it can do to improve manufacturing processes.
In practice, we can solve any control logic problem today with existing programming techniques, as long as we can specify all the inputs, states/transitions, and outputs. There are techniques to formally verify these programs, so you can trust them in mission-critical systems - they do exactly what they're designed to do, nothing more, nothing less.
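To illustrate what "specify all the inputs, states/transitions, and outputs" looks like, here's a minimal sketch in Haskell - the turnstile domain and all names are my own invention, not from any article. Because every state/input pair is enumerated, the controller's behavior is fully specified:

```haskell
-- A hypothetical turnstile controller: every state, input, and
-- transition is enumerated, so the behavior is fully specified.
data State = Locked | Unlocked deriving (Eq, Show)
data Input = Coin | Push      deriving (Eq, Show)

step :: State -> Input -> State
step Locked   Coin = Unlocked  -- paying unlocks the turnstile
step Locked   Push = Locked    -- pushing while locked does nothing
step Unlocked Push = Locked    -- passing through re-locks it
step Unlocked Coin = Unlocked  -- extra coins are ignored

-- Fold a sequence of inputs through the transition function.
run :: State -> [Input] -> State
run = foldl step

main :: IO ()
main = print (run Locked [Coin, Push, Push])  -- prints Locked
```

Since `step` is a total function over finite types, a checker (or even an exhaustive test) can verify every case - which is exactly why such controllers can be trusted in mission-critical settings.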
I don't see this approach changing anytime soon. An AI designing a complex system is, for the foreseeable future, science fiction. What's interesting about the Human Brain Project, though, is that it doesn't make any claims about AI, which is actually a good thing. If they started emphasizing AI research, I seriously doubt they'd get very far. Neural networks and machine learning, which incidentally have very little to do with AI, often turn out to be very good at solving hard-to-describe problems like image recognition.
I think if the Human Brain Project focuses on better understanding our neurons and how they work, and is able to translate that understanding into advanced neural networks, the resulting systems could turn out to be adept at solving certain problems. That's a good thing.
Irreducible complexity is irreducible.
Still, I'd like to stress that picking the right language for your task can greatly reduce problem complexity.
For instance, algorithms are often much more compact and easier to understand in a functional programming language. E.g. compare quicksort in Haskell and C - see http://www.haskell.org/haskellwiki/Introduction.
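The Haskell version from that page is roughly the following - not an in-place sort like the classic C version, but the algorithm reads almost like its textbook description:

```haskell
-- Quicksort: pick the head as pivot, partition the rest into
-- smaller and larger-or-equal elements, and recurse on each side.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                   ++ [p]
                   ++ quicksort [x | x <- xs, x >= p]

main :: IO ()
main = print (quicksort [3, 1, 4, 1, 5, 9, 2, 6])  -- [1,1,2,3,4,5,6,9]
```

Three lines of logic versus the pointer arithmetic and explicit swapping the C version needs.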
Writing complex concurrent programs remains challenging even with an excellent (IMO) concurrency library like the one in Java 1.5+. But switch to Erlang, and you'll find that many concurrency patterns are expressed more naturally.
According to the article Slashdot uses an HTTP cache called Varnish, which uses the B-heap. Can any
Also, this idea seems similar to the locality optimizations proposed in algorithms like merge sort http://en.wikipedia.org/wiki/Merge_sort#Optimizing_merge_sort.
Indeed! Identifying what proxies (http://en.wikipedia.org/wiki/Proxy_%28statistics%29) to use is one of the trickier aspects in the soft sciences and statistics. If you read the Economist, you'd see proxies for just about everything (e.g. http://www.economist.com/markets/bigmac/), and a lot of research is required just to show what a given proxy measures.
Programming can be taught with any language. Problem solving can be taught with any language. It is better to teach these using a language they WILL use when they actually get into industry, than with stuff they may rarely come up against.
I disagree as it depends on what you're teaching. Concepts like recursion and algorithms are best taught with functional languages. E.g. quicksort is a lot shorter and easier to understand in Haskell than in C - see http://www.haskell.org/haskellwiki/Introduction.
We generally agree that you need to pick the right language for a given task - and the task of teaching various computer science concepts is no different. Also, a good curriculum should equip students with the ability to pick the right language for a given task themselves.
If you're interested in pursuing a computer science degree at university, you might be better off without a background in an imperative/procedural language. Many students who knew C/Pascal seemed to have a tougher time grasping functional languages than those who didn't know anything at all.
As you already suggested, what you need to do is separate the core engine from the game content.
I agree that content development is hard to open source; it seems best done by an individual or a small group.
The successful open source projects you mention all have a plugin/module system. Ensure the game engine supports a good scripting language for content creation, along with a plugin system that can modify any aspect of the game, and I expect it will do well in the open source world. Your game, in effect, should just be a plugin/module for your engine.
You want a game engine that is able to foster development of plugins that can completely change the game's underlying mechanics (e.g. Oblivion), as well as plugins/modules that can tell rich and complex story lines (e.g. Neverwinter Nights).
Getting a bit off topic here, but you raise an interesting issue.
I bet that the frequency of certain searches can predict whether a company's stock will rise or fall, e.g. lots of searches for "[company name] problems" is a precursor to that company's stock crashing.
I wonder what policies are in place regarding usage of such aggregate information within Google (or other search companies).
Indeed, and there's actual research supporting the use of the Wii Balance Board for physiotherapy. The research was conducted at the University of Melbourne; the Australian doctor probably read about it and decided to recommend it to his patient.
Although the H.264 licensing terms appear reasonable, I still prefer a free option.
Market Capitalization = Share Price x Number of Shares in Market
Obviously, Oracle has a lot more shares in the market than Red Hat. Over time, companies can also do a stock split, e.g. halve the share price, but double the number of shares; or a reverse split where price doubles but shares are halved - either way, market cap remains the same.
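A quick sketch with made-up numbers (the $100 price and 1,000,000 share count are purely illustrative) shows why a split leaves market cap unchanged:

```haskell
-- Market cap = share price x number of shares in the market.
marketCap :: Double -> Double -> Double
marketCap price shares = price * shares

main :: IO ()
main = do
  let before = marketCap 100 1e6   -- $100 x 1,000,000 shares = $100M
      after  = marketCap 50  2e6   -- after a 2-for-1 split: $50 x 2,000,000
  print (before == after)          -- True: market cap is unchanged
```

A reverse split is the same identity in the other direction: doubling the price while halving the share count multiplies the same two factors.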
I take white hat to mean Good, i.e. you're not using the exploit for personal gain.
You're Lawful Good if you're working on behalf of some legal authority, Chaotic Good if you're exposing it to shame/antagonize the companies, and Neutral Good if you're doing it for your own personal reasons.
FTA: One puzzle to be solved is why there is similar filamentary structure on both the large and the small scale. "That's a big question," says Tauber.
Interesting that these filaments could probably be modeled using fractals. http://en.wikipedia.org/wiki/Fractal
Intel has provided open source drivers and specs for their graphics hardware for several years now.
3D chips will not take us any closer to true AI.
We still don't truly understand the nature of intelligence, and we won't be able to manufacture it unless we can define it formally in some mathematical/logical notation.
I've done some work with neural networks, and we can simulate neurons with any number of connections (inputs and outputs), but having a bunch of neurons working together to discern things is not intelligence.
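To show how simple a simulated neuron really is, here's a minimal sketch (the perceptron model with a step activation; the AND-gate weights are my own illustrative choice):

```haskell
-- A single simulated neuron: weighted sum of inputs plus a bias,
-- passed through a step activation (fires 1 if above zero, else 0).
neuron :: [Double] -> Double -> [Double] -> Double
neuron weights bias inputs
  | sum (zipWith (*) weights inputs) + bias > 0 = 1
  | otherwise                                   = 0

main :: IO ()
main = do
  -- With weights [1,1] and bias -1.5, this neuron computes logical AND.
  let andGate = neuron [1, 1] (-1.5)
  mapM_ (print . andGate) [[0,0], [0,1], [1,0], [1,1]]  -- 0, 0, 0, 1
```

Wiring millions of these together and training the weights gets you pattern discrimination - useful, but nothing in the mechanism resembles understanding.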