I won't respond to all your points, but there are two that I feel deserve a reply.
The fact is, we do not know "how" they work except at the very base level.
Yes, we do know how LLMs work. Maybe you don't, but the engineers at OpenAI, Microsoft, Google, etc. absolutely do. The evidence is that over the past three years they have been able to repeatedly and steadily improve the quality of the chatbots' responses, and to correct incorrect responses from the past. One notable example was an early Gemini image generator which, when asked to draw historic figures like Lincoln, would produce an image with the wrong gender or race. The designers had built that behavior in themselves, but didn't fully anticipate the ultimate fallout. So they fixed the image generator to be more historically accurate before they brought it back online. That kind of correction would not be possible if the engineers didn't thoroughly understand how these systems work.
Intelligence is not a physical thing that can be simulated.
This statement misunderstands how LLMs work. At one level you're right: intelligence can't be "simulated." But the responses a chatbot gives certainly do simulate the responses an actual intelligence would give. It's an illusion, though. The LLM has digested a myriad of human responses to questions like yours, and it synthesizes and adapts a response to *your* question / prompt based on the patterns it saw in its training data.
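To make that concrete, here's a toy sketch of the idea (my own illustration, vastly simplified, in Python): a program that has no understanding at all, but that still produces plausible-looking text purely by reusing patterns from whatever text it was trained on. Real LLMs use transformers and billions of parameters rather than a word-pair table, but the underlying principle, predicting the next token from patterns in the training data, is the same.

```python
# Toy illustration only: a tiny bigram "language model" that picks each next
# word based purely on which words followed it in the training text.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count which words follow which in the training data.
next_words = defaultdict(list)
tokens = training_text.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

def generate(start, length=8):
    """Generate text by repeatedly sampling a next word seen in training."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:  # no pattern learned for this word
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look coherent even though the program "knows" nothing. Scale that idea up enormously, and you get the convincing illusion of intelligence I'm describing.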