If that's what he's saying, then it doesn't need to be said, so why is he saying it?
I asked myself that same question, but I think there's an answer. If you write a version of Angry Birds that sucks, then meh...some people waste a buck each on a crappy game, give it a bad review, and life goes on. If (as actually happened) you radically change the UI on a ubiquitous application *cough*Microsoft Word*cough*, then you frustrate a lot of people and waste a lot of time, but it's still not necessarily the end of the world. But BI apps drive decision making at a scale that boggles the mind. Epidemiology (containing Ebola in West Africa, or trying to reduce HIV infection rates), cancer research (mentioned above, and from recent personal experience I can tell you they're doing some incredible fucking stuff with this), and even decisions that impact negotiations between nation-states all rely on BI. Because of the cost of the solutions and the effort needed to implement them, no decision they support is really small; nearly all of them have massive impact, and thus huge ramifications if the BI solution drives people in the wrong direction. So while he didn't quite say it this way, I think the point is that BI apps bear a greater moral burden to be effective than most apps because of the impact (good or bad) that they have.
What I wonder about is why he didn't touch on the other moral issue of BI: usage. One of the first big BI implementations, for example, was in Germany, where it was used to do the number-crunching that managed and provided efficiency of scale for the concentration camp system. (And no, this isn't Godwin's Law in effect...I'm not comparing anyone to Hitler, just raising an interesting historical fact.) IBM designed, built, and supported the solution...this was far, far beyond just making an app that someone else bought and did something bad with, without any direct involvement by the app's creator. BI solutions aren't "buy it, install it, use it" products; they need a metric assload of support and consulting services to get off the ground, and they are purpose-built to the customer's needs. So what are the ethics around what the customer intends to do, and where do you draw the line and say "No, I'm not going to sell you my product or services to help you do that"?