Any Go expert care to explain why this is feasible or silly?
It is silly. If you train a neural net to differentiate a photo of a dog from a photo of a cat, it can learn to do that. But it is then silly to expect it to recognize a picture of, say, a horse. That is NOT what it was trained to do.
Likewise, AlphaGo was specifically trained to play on a 19x19 board. Any other size, such as 18x18, would not even be accepted as valid input.
On the other hand, if it had been trained on variable-sized boards, it could adapt to them.
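The fixed-input constraint is easy to illustrate with a toy sketch (a single dense layer in NumPy, not AlphaGo's actual architecture): the weight shapes are baked in when the network is built, so any board of a different size is simply a shape mismatch.

```python
import numpy as np

# Toy "policy head": one dense layer whose weight matrix is fixed
# to a 19x19 = 361-dimensional input. Purely illustrative.
rng = np.random.default_rng(0)
weights = rng.standard_normal((19 * 19, 19 * 19))  # input size baked in

def policy(board):
    # Flatten the board and multiply by the fixed-size weight matrix.
    # This only works when board.size == 361.
    return board.flatten() @ weights

board_19 = np.zeros((19, 19))
print(policy(board_19).shape)  # works: one score per point

board_18 = np.zeros((18, 18))
try:
    policy(board_18)            # 324 inputs vs. 361 expected
except ValueError as err:
    print("18x18 rejected:", err)
```

(The real AlphaGo used convolutional layers, which are in principle size-agnostic, but its final layers and its entire training distribution were tied to 19x19, so the practical point stands.)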
Here is an actual example: DeepMind trained a NN to play a wide variety of video games. When it was introduced to a new game, it could use its existing training to pick up and master the new game much faster than even the best humans.
Go is played on 9x9, 13x13, and 19x19 boards. On the smaller boards, tactics (joseki) matter more; on bigger boards, strategy (fuseki) matters more, and apparently innocuous early moves can have far-reaching effects much later in the game. On a 38x38 board, strategy would likely matter even more, and winning would require a profoundly different style of play. My gut feeling is that an AI, trained by playing against itself, could master that new style much faster than a human.