Unfortunately, Sam Harris is bad at math. He claims: "It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going." It seems he has never seen a monotonically increasing yet asymptotically bounded function. Yet that is exactly the kind of progress we see in older technologies. Airplanes, for example, cruise at almost exactly the same speed as decades ago (pushing past the sound barrier would cost enormous amounts of energy) and get slightly more efficient every year, but they will never reach the point of operating almost without fuel or some other large energy source, simply because the laws of physics don't allow that kind of progress.
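To make this concrete, here is a minimal sketch of such a function. The limit L and rate k are purely illustrative parameters, not anything from Harris's argument:

```python
# f(x) = L * (1 - e^(-k*x)) increases at every step but never exceeds L.
import math

L = 100.0   # hard ceiling, e.g. one imposed by the laws of physics
k = 0.5     # rate of progress

def f(x):
    return L * (1.0 - math.exp(-k * x))

for year in [1, 5, 10, 50, 100]:
    print(f"year {year:3d}: progress = {f(year):.6f} (limit = {L})")

# Every value is larger than the one before it ("we just keep going"),
# yet none of them ever reaches 100 ("we never get into the end zone").
```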
But even if the possible progress is not bounded, it is still not guaranteed that we will ever get there. It could take so long that it never happens before human civilization is completely destroyed by some disaster. Or it could simply be stopped by economics: further improvements can easily become so expensive, or so tiny, that the likely benefits no longer offset the cost of pushing the research further.
Harris also seems to think that general AI is inevitable because we want to make progress toward things such as curing cancer or Alzheimer's. But it is not clear that such achievements actually require general superhuman intelligence. They likely require superhuman intelligence of some kind, like the computers that simulate protein folding far better than any human ever could, but not necessarily general intelligence. Specialized artificial intelligence seems to be much easier to achieve and is likely almost as good as general intelligence for problems like these. You don't need to develop an artificial general intelligence to cure cancer if you have already developed a specialized artificial intelligence that is able to find a cure.
One might imagine what could happen if a huge neural net were applied to such a problem.
The problem with huge neural nets is training them. The more free parameters a network has, the harder it becomes to train. Much of the progress of the last few years came from finding clever constraints on networks that make them easier to train; convolutional networks, which share one small set of weights across all positions of the input, are a classic example.
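A rough illustration of how such a constraint shrinks the search space, using PyTorch (the 32x32 input size and layer shapes are arbitrary choices for the sketch): a fully connected layer over a small image already has over a million trainable parameters, while a convolutional layer over the same image gets by with ten.

```python
import torch.nn as nn

# Unconstrained: every input pixel connects to every output pixel.
dense = nn.Linear(32 * 32, 32 * 32)

# Constrained: one shared 3x3 filter slides across the whole image.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1)

def count_params(module):
    return sum(p.numel() for p in module.parameters())

print(f"fully connected: {count_params(dense):,} parameters")  # 1,049,600
print(f"convolutional:   {count_params(conv):,} parameters")   # 10
```

The constraint encodes an assumption (a pattern that matters in one part of an image matters everywhere), and in exchange the optimizer has vastly fewer possibilities to search through.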