Get a fine-grained enough measurement of heat (or, more accurately, electricity) generation during a computer's activity, and you'd find out quite a lot about how it works. Particularly if you did the equivalent of single-cell recording techniques.
The WSJ article isn't very good (as I noted in another comment); my point here was mostly that we should also dismiss the commentary that the slashdot poster put alongside it.
We know what most regions of the brain do. We can record activity in many parts of the brain (at various levels of granularity), and we have models that can predict activation levels based on subtasks. In the visual cortex, there are even people who can decode significant portions of the signal in V1. This is substantial knowledge. It's not vague, and it's not trivial. We don't have the whole picture yet, true; that's probably still a few decades off.
I agree that if we want a complete replica in code, we need something much closer to a complete picture. I'm speaking from a neuroscience perspective, though, where understanding is the metric.
Souls are a myth from prescientific times. There's no point in contending with such concepts - they're part of history and superstition. If you don't understand brains, that's sad but correctable. There's a lot of research that you could read up on.
Or I guess you could keep tossing that "cargo cult" term around and stay ignorant of the last 60 years.
Later brain regions parse information out of V1; the visual cortex is a pipeline (one that forks in places). There are some great papers in which people use neuroimaging techniques to pull an image back out of V1. I think some of those reconstructions have made it onto YouTube.
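To give a feel for how those V1 reconstructions work in principle: treat each voxel's response as roughly a linear function of the stimulus pixels, fit that mapping from training data, then invert it (least squares) on held-out responses. This is a toy sketch on synthetic data; the real studies use fMRI, thousands of voxels, and much more sophisticated image priors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_voxels, n_train = 16, 64, 200

# Synthetic "encoding model": each voxel responds linearly to the image,
# standing in for a retinotopic V1 response. W is unknown to the decoder.
W = rng.normal(size=(n_voxels, n_pixels))

train_imgs = rng.normal(size=(n_train, n_pixels))
train_resp = train_imgs @ W.T + 0.01 * rng.normal(size=(n_train, n_voxels))

# Fit a linear decoder (responses -> pixels) by least squares.
D, *_ = np.linalg.lstsq(train_resp, train_imgs, rcond=None)

# Reconstruct a held-out stimulus from its noisy voxel responses.
test_img = rng.normal(size=n_pixels)
test_resp = test_img @ W.T + 0.01 * rng.normal(size=n_voxels)
recon = test_resp @ D

corr = np.corrcoef(test_img, recon)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

With low noise and more voxels than pixels, the linear map is invertible and the reconstruction correlates almost perfectly with the stimulus; the published work is impressive precisely because real neural responses are far noisier and far less linear than this.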
You simply have no idea what you're talking about, mbeckman. Endlessly asserting that "we don't understand" doesn't make it true. Crack a textbook.
The textbook I recommended above goes into this in much more detail, but I'll try to give a brief intro.
The currently dominant map for understanding brain structure is the Brodmann map; it's largely anatomical (clusters of densely interlinked neurons with mappable connections to other clusters). The visual cortex comprises Brodmann areas 17 (primary visual cortex, containing a more-or-less bitmapped visual field), 18 (secondary visual cortex), and 19 (tertiary visual cortex). Processing then divides into two streams: a ventral stream used to identify and characterise objects, and a dorsal stream used to locate those objects and guide action. This is known as the "two-streams hypothesis" (in case you want to look it up).
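Loosely, the layout just described can be summarized as a little lookup table. This is a cartoon for orientation only, not any real atlas API; the area numbers follow the Brodmann map and the stream roles follow the two-streams hypothesis:

```python
# Toy summary of the visual-cortex layout described above.
VISUAL_CORTEX = {
    17: {"name": "primary visual cortex (V1)",
         "role": "more-or-less bitmapped visual field"},
    18: {"name": "secondary visual cortex (V2)",
         "role": "early feature processing"},
    19: {"name": "tertiary visual cortex",
         "role": "higher-order visual processing"},
}

STREAMS = {
    "ventral": "identify and characterise objects ('what')",
    "dorsal": "locate objects and guide action ('where/how')",
}

for area in sorted(VISUAL_CORTEX):
    info = VISUAL_CORTEX[area]
    print(f"BA{area}: {info['name']} -- {info['role']}")
```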
I could go a bit further, but I'm not sure how long slashdot's max comment length is and a textbook would probably give you a better understanding than what I can give you off the top of my head.
You've used the term "cargo cult" a lot before in your opinions on brain research. You don't know what you're talking about. As I suggested, get a textbook or two and read up. I don't need a science writer; I've done research and am published in the field.
It starts gently. I don't know if you'd enjoy reading the whole thing, but you'd probably get a lot out of it anyway. Good textbooks are like that.
I agree that we don't have the full picture. That's not what mbeckman was claiming, though, and inferring "we know very little" from the absence of one particular achievement is unjustified.
Probably not - weak AI is typified by directly encoding domain knowledge about human capabilities into state machines and similar structures; it isn't typically meant to be neuroplausible or human-like. I believe the substrate here is wrong: real organisms learn how to do these things (either as individuals or through generational building/encoding/selection toward instinct), and that knowledge is integrated. I don't think it's easy or likely that weak AI research methods will produce an integrated being with all these capabilities.
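For contrast, here's a cartoon of that "weak AI" style: the designer's domain knowledge hand-coded directly as states and transitions, with no learning anywhere. The states, inputs, and replies here are all made up for illustration:

```python
# A cartoon "weak AI" dialogue agent: domain knowledge is hand-coded
# as a state machine. Nothing is learned; nothing is neuroplausible.
TRANSITIONS = {
    ("greeting", "hello"): ("asking", "Hi! What do you need?"),
    ("asking", "weather"): ("done", "Sorry, I only do greetings."),
    ("asking", "bye"): ("done", "Goodbye!"),
}

def respond(state, user_input):
    # Unknown input in any state falls back to a canned reply,
    # leaving the state unchanged.
    return TRANSITIONS.get((state, user_input), (state, "I don't understand."))

state = "greeting"
state, reply = respond(state, "hello")
print(reply)   # Hi! What do you need?
state, reply = respond(state, "bye")
print(reply)   # Goodbye!
```

Everything the agent "knows" sits in that transition table; it covers exactly the cases its designer anticipated and nothing else, which is the integration problem in miniature.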
I'm sticking my neck out a bit here though; I'm not sure that weak AI research would be useless. Sufficiency versus usefulness is a complicated topic.
Also, my research was in neuroscience (centred on cognitive modeling), not AI. It's a neighbouring field, so take what I have to say with at least a grain of salt.
The WSJ article links a paper from some researchers at Google:
The WSJ article isn't particularly good either; it misunderstands what's actually going on in the research, which seems to be about conversational modeling (a "weak AI" style of research, where the "understanding" is quite shallow). It does point out a few applications of this kind of work, though, and those seem pretty solid/useful. (It doesn't approach the goals of "strong AI" - actually modeling semantics and deeper reasoning.)
I'm calling the poster here out as being full of shit. As someone who's done neuroscience research, I can tell you that the idea that "Humans have no idea how the human, or any other, brain works" is bollocks. We have a reasonably good idea at the large scale, and in certain areas (such as the visual cortex) that understanding is quite far along. There are frontiers to our knowledge, but human understanding of brains is well on its way. The poster needs to pick up some neuroscience textbooks and get clued in.
As a particular recommendation, I'd suggest Kolb and Whishaw's "Fundamentals of Human Neuropsychology"; it's an excellent textbook.