> Well, maybe. I don't think it's necessarily fair to say that because we have multiple competing definitions, which can really be thought of as multiple competing theories in a scientific sense, they are not actually good ("good" is another one of those super loaded words, so what is "good" to you might not be "good" to me unless we further clarify what the standards are). We are not in possession of all of the data as far as the human brain goes, so our definitions must be based on incomplete information.
That's exactly what I mean: we currently have multiple theories, and that's the problem. It means we are still trying to figure out what happens inside our heads. We don't have a final model; we haven't reached the base reality that would explain everything happening in our brains, and that's exactly why we should take our current definitions with a huge grain of salt. That's what I meant by my initial comment too. They are just approximations based on our current understanding, not base truth.
When we reach the point of knowing exactly how the brain works, we will have only one theory, proven in many ways, rather than many competing ones. We may find out that what we call consciousness doesn't actually exist and is just a model the brain builds for itself, that reasoning is just more complex pattern matching/prediction, etc.
> Words are... weird. To say we don't know what they mean is a bit off, because there is not necessarily any need for a word to be more than a fuzzy box around a concept that we employ in our day to day lives. Some people think that words spring up from the concepts or objects which they describe, as though there were some Platonic form of that thing which the word must truly apply to; I find this strange and too metaphysical for my tastes. As I see it, words are invented by humans to describe something we see/experience; they end up being as vague as our understanding, but they still serve their primary purpose, which is to communicate our lived experience of the world to another human. The idea of knowing something, or the idea of reasoning, is not foreign to us: we understand what someone says when they say "I know that person" or "I reasoned that this would happen based on [evidence, causal links, etc.]" even without pre-agreeing on what the definition is.
When we do science, we need a really good definition for our concepts, an exact one, or at least we need to know that our definition is very loose and subject to change. Especially in a domain where there is still so much left to learn, we should be very aware that we are using words like "reasoning", "thinking", "consciousness", etc., that may not even exist in reality and may just be illusions, emergent patterns arising from the brain's complexity. In that case, we don't even know whether the concept the word is trying to describe exists at all.
> It is only when we set about crafting technical definitions for use in Philosophy or Science that we start to run into this trouble we're talking about. Partly, I think, because we are trying to use a word to put a box around a process that we do not possess all of the data on (knowledge/reasoning). You will likely agree when I suggest that the solution to the problem is further scientific/empirical study; but I don't think it's necessary to say that, absent a complete and perfect definition, we shouldn't use the best ones we have available to us at the moment.
Yes, we need further scientific/empirical study, and we should stay aware that our current theories may be at the level of the ether explanation just before Einstein proposed relativity. We should be acutely aware that while the word "consciousness" may relay some information across, it's still a very, very fuzzy word and not an actual ontological entity.