Comment Surely there is an upside (Score 1) 227
While not a Christian, I can see there is an appropriate, non-nefarious appeal. LLMs are (Bender’s) stochastic parrots, not reasoning “entities”, so if you want results conformant with your ethics and beliefs, you should train the LLM on material that aligns with the results you want. Between the training sets, the guardrails, and the rest of the infrastructure, there is a lot of non-trivial work, and thus a potentially valuable business in crafting products for such an audience.
It would not be unreasonable to replicate such things with a focus on, say, Buddhist or Jewish source texts.
That is assuming that one believes that LLMs are an appropriate path forward to building AI assistants. That certainly seems to be a path that has attracted enormous amounts of capital in recent times.
If you want to craft a medical assistant, I’d expect you’d want it focused on medical literature and/or medical images. AGI would be another kettle of fish entirely, where you might well want a more well-rounded, eclectically educated “entity”; perhaps LLMs do provide a path to AGI (though it’s not clear to me how; it looks like a dead end to me).