LabVIEW is an excellent example of the limited scope of usefulness of graphical programming languages. The "front panel" layout features of LabVIEW are great: for quickly whipping up a GUI, it's far easier to drag-and-drop interface widgets than to code up a layout by hand. The back-end "wiring diagram," however, is generally a royal pain to work with as soon as you try anything remotely complex. There's a small class of problems for which expressing parallel, event-driven logic in a 2-D layout is convenient; beyond it, one quickly finds oneself burning huge amounts of time visually positioning elements into something comprehensible, for tasks that would be trivial in a textual representation. There's a reason people moved away from programming computers by manually re-wiring connections between simple hardware units. Much of LabVIEW's visual programming interface is a regression to the flexibility and ease of use of pre-1950 computers.
Why aren't we at the stage of Star Trek where we could say, "Computer, I want to mine some data: using name and location as keys, find me all the 40-year-olds with a last name of Smith"?
Note that what you want has nothing to do with transcending "text" as an input medium, beyond the trivial mapping of speech to text (which you accomplished when typing your post). You want a programming environment that's a bit more flexible about interpreting "natural language" input --- however, the input is still basically "textual" in form: commands given as a linear sequence of words. Aside from the initial voice-to-text conversion, nothing in this scenario uses a "post-textual" representation. Now, if the computer were simultaneously reading your facial expressions and body posture as elements of the command --- additional dimensions difficult to capture in a plain-text representation --- then you'd have a "post-textual" interface. But that's not how even the fictional computers in Star Trek worked, and you probably wouldn't want it to be.
I have seen these things you speak of. I have also noticed that they have an extremely low information density, especially relative to the effort required to produce them. Compare the person-hours required to make a movie with those required to write a book. "TV" and "YouTube" are not generally the first places I turn when I want detailed information about a subject.
Why are we still writing text-based books, and communicating in word-based languages? Surely, we should have some modern, advanced form of interpretive dance that would make all such things obsolete. Wait, that's a terrible idea! Text turns out to be a precise, expressive mode of communication, based on deep human-brain linguistic and logical capabilities. While "a picture is worth a thousand words" for certain applications, clear expression of logical concepts (versus vague "artistic" expression of ambiguous ideas) is still best done in words/text.
Give schools the power to fire bad teachers and you can give back power to good teachers.
Well, you may just end up handing that power to upper management, who have no idea who the good teachers are --- only who is best at gaming the "teach-to-the-test" system. The only other metric management has to go on is cost savings, which means firing the most expensive (i.e., the most senior and experienced) teachers. Unless you're very careful to give teachers a strong voice in management decisions --- through, e.g., strong, local, democratic unions --- "fire bad teachers" will become "fire teachers who take on difficult students and subjects, and who think outside the test."
I don't take Genesis 1 overly literally, either. However, if you do push for a certain rigid type of "literal" interpretation, then that's "where the Bible says the Earth is the center of the galaxy," in answer to your question. Note, however, that the definition of "literal" producing this reading was never "decided at some point" by the Church. Neither the "early fathers" (who often promoted allegorical/spiritual readings), nor later Roman Catholic dogma, nor Protestant Reformation-era understandings of "literalism," call for such biblical readings. The "extreme literalism" movement is largely a 19th Century American thing, isolated from larger theological traditions of all branches of Christianity, and developed to consolidate a political power base.
Where in the Bible does it say that the Earth is the center of the galaxy?
Implied in the Genesis 1 cosmology, where the heavenly bodies are placed in the sky above an already-formed Earth (complete with vegetation), and supported by numerous descriptions of the sun rising, setting, and even standing still for a while (as opposed to imagery of the Earth twirling around). Granted, the Church was less upset by the heliocentric concept itself than by Galileo presenting his findings as a dialogue in which the character representing "the establishment," who speaks the Pope's words, is named "Simplicio" (roughly, "Simpleton").
Well, everyone claims moral superiority (I think you just did right there). Only some vociferously base their measure of moral superiority on staying away from hookers and cocaine (in-between bouts of hookers and cocaine).
You're mistaken if you think all people reading the Bible agree on what it teaches. In particular, those unconvinced by any amount of rational scholarship on the age of the universe and the descent of humankind are likewise unconvinced by any amount of rational scholarship on Biblical exegesis. When addressing people frozen into a shallow, reactionary 19th-century worldview of both science and theology, one is likely to encounter rigid beliefs about what "the Bible teaches" every bit as shoddily constructed as their scientific views. So, perhaps the Bible doesn't teach you that the world is 6,000 years old, but it does teach this to people who believe the world is 6,000 years old.
Computers are cheaper than programmers.
A fifteen-million-dollar computer is not cheaper than programmers plus a standard desktop workstation, which can apparently perform the same tasks. And we're not talking about a super-easy system to use, either --- the D-Wave is complicated to set up problems for, requiring very specialized programming work (and teams of Google/NASA engineers just to test it out). So, D-Wave wins on neither convenience nor raw power; so far, its only advantage appears to be quantum-buzzword marketing compatibility.
In King James' era, a "glass" could refer to a mirror as well as, e.g., a window-pane (think "looking glass"), so the term was probably a reasonable translation at the time. For a modern translation, however, "mirror" is more apt --- hence "in a mirror, dimly" in several modern versions. Less plausibly, the Greek word could also refer to crude lenses and glass panes (which wouldn't have been of very high optical quality).
Note, also, "darkly" in the Greek was (transliterated) "ainigmati," cognate to modern "enigmatic" --- my Greek lexicon (BDAG) gives that as "that which requires special acumen to understand because it is expressed in a puzzling fashion, a riddle," or, alternatively (and more in-line with modern translation), "an indirect mode of communication; indirectly" (as in, by reflection) when used in the context of mirrors.
The simple answer to your question, which D-Wave admits when pressed (though doesn't make obvious in its PR literature), is no: D-Wave cannot run Shor's algorithm. The D-Wave is definitely not a full quantum computer in the most general sense; at best, it can carry out a very limited subset of what a general-purpose quantum computer can do ("quantum annealing" problems). At worst (and nothing better has been conclusively demonstrated), it can't do anything you can't do with cheaper, fully-classical hardware running classical simulated-annealing algorithms.
No, it's more like "I have a device, and it generates power. It costs less than what I pay the utility company*."
*: when I paid the utility company $14/kWh, which I specially arranged for a week.
D-Wave keeps claiming their system is faster/more-cost-effective --- then, a few months later, independent researchers show it's not when compared against well-designed classical approaches (rather than poorly-designed or not-apples-to-apples classical algorithms). So far, they have not managed to demonstrate a definitive advantage which holds up to scrutiny.
Too bad that D-Wave blog post you linked to is full of outright fabrications/distortions. The machine they have is an annealer, not a "fast NP-complete problem solver." It does not solve NP-complete problems. An NP-complete (strictly speaking, NP-hard) problem is, e.g., finding the single best solution to a "traveling salesman" problem --- this computer doesn't do that. Finding a probably-good-but-not-provably-optimal solution to a "traveling salesman" problem is not an NP-complete problem; there are fast classical heuristics (simulated annealing, among others) that "almost" solve such problems already. So, if you're trying to prove that the architecture of the D-Wave chip has been transparently disclosed to the public, it doesn't help to link to a PR fluff piece full of intentional distortions.
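For concreteness, here's a minimal sketch of the kind of classical simulated annealing alluded to above, applied to a traveling-salesman instance. Everything here (the random city coordinates, the geometric cooling schedule, the 2-opt move set) is an illustrative choice of mine, not anything D-Wave does:

```python
# Classical simulated annealing for a small traveling-salesman instance.
# Finds a probably-good (not provably optimal) closed tour.
import math
import random

def tour_length(tour, cities):
    """Total length of the closed tour visiting `cities` in `tour` order."""
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def anneal_tsp(cities, steps=20000, t_start=10.0, t_end=1e-3, seed=0):
    """Anneal from the identity tour; return (best_tour, best_length)."""
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    best = tour[:]
    cur_len = best_len = tour_length(tour, cities)
    for step in range(steps):
        # Geometric cooling: temperature decays from t_start toward t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        # Propose reversing a random segment (a 2-opt move).
        i, j = sorted(rng.sample(range(len(cities)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, cities)
        # Always accept improvements; accept worsenings with
        # Boltzmann probability exp(-delta / t), which shrinks as t cools.
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
    return best, best_len

if __name__ == "__main__":
    rng = random.Random(42)
    cities = [(rng.random(), rng.random()) for _ in range(25)]
    tour, length = anneal_tsp(cities)
    print(f"annealed tour length: {length:.3f}")
```

The whole point of the D-Wave debate is whether its hardware annealer beats this sort of cheap, tunable classical loop on any problem of practical interest; so far, well-tuned classical code has kept pace.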
Unfortunately, D-Wave's proprietary approach is getting in the way of proper "baby-steps" research. Before you go selling a zillion-qubit $15M black-box system, productive research would involve letting independent research groups perform stringent tests for "quantumness" on, e.g., a simplified 2-qubit system. D-Wave is selling an obfuscated system, getting in the way of the low-level, bare-hardware fundamentals that really advance research.