Your conclusion is interesting, but flawed: it is entirely relative to your current moral viewpoint. A different morality would have a different goal.
Other "moral" options:
Help the most people at the expense of the fewest.
Help the most important people at the expense of the least important.
Help nobody while hurting as few people as possible.
Help everyone equally, while hurting as few people as possible.
Help everyone without regard to the harm done to anyone.
Some of the values in the above options are intentionally vague. Take "Help the most people at the expense of the fewest." If we can help 90% of people by extending their lifespans by 10% (or 20%, 30%, 40%), but 10% are left dead or sick, do we do it? What if the decision were that 51% get a 100% increase in lifespan, but 49% of the people die? At what point does the "positive" AI making the decision become "negative"?
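To make that trade-off concrete, here is a rough back-of-the-envelope sketch (my own framing, not part of the original argument) that scores each scenario by the average change in remaining lifespan, assuming everyone starts with the same remaining lifespan and that being "hurt" means dying, i.e. losing all of it:

```python
# Rough utilitarian scoring of the lifespan trade-offs above.
# Assumptions (mine, for illustration): everyone has the same remaining
# lifespan, "helped" people gain the stated fraction of it, and "hurt"
# people die, losing 100% of it.

def net_lifespan_change(helped_frac, gain_frac, hurt_frac, loss_frac=1.0):
    """Average change in remaining lifespan across the whole population."""
    return helped_frac * gain_frac - hurt_frac * loss_frac

scenarios = [
    ("90% gain 10%, 10% die",  0.90, 0.10, 0.10),
    ("90% gain 40%, 10% die",  0.90, 0.40, 0.10),
    ("51% gain 100%, 49% die", 0.51, 1.00, 0.49),
]

for label, helped, gain, hurt in scenarios:
    net = net_lifespan_change(helped, gain, hurt)
    verdict = "net positive" if net > 0 else "net negative"
    print(f"{label}: average lifespan change {net:+.2%} ({verdict})")
```

On this crude aggregate metric the 51/49 case still comes out slightly positive while the 90%/10% case comes out slightly negative, which is exactly the point: where "positive" turns into "negative" isn't settled by the arithmetic, it depends on which of the moral options above you've already chosen.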