A Definition of Consciousness
- A conscious system is a system that can decide to do something different from what it would do if it wasn’t conscious.
This definition is somewhat circular. But it does imply something about the prerequisites for being conscious:
- To be conscious:
- You have to have a concept and awareness of what your own default behaviour would be in any particular situation.
- You have to have some understanding of the factors that determine your own default behaviour.
- You have to be able to make a decision to do something different from that default behaviour.
I would add another requirement to those three:
- When you consciously decide to do something different from your default behaviour in a particular situation, there should be a reason for making that decision, and that reason should be general or abstract, in the sense that it would apply to many different situations and not just the current one.
In the short term, conscious decisions alter behaviour within the current situation.
The full benefit of conscious decision-making comes from the ability to permanently change the default behaviour of the system, when a conscious decision is observed to produce a result that is better than what would have been expected had the system executed its default behaviour in that situation.
To achieve this benefit, the system has to include a sub-system for measuring the success of conscious decisions, relating those measurements to the reasons for making those decisions, and then updating default behaviours accordingly.
Also, the measurement sub-system has to exist as something that is not itself subject to self-alteration – if the measurement sub-system could be self-altered, the whole system could very easily corrupt itself into a state where the process of self-alteration becomes dysfunctional.
The concept of “reasons” for conscious decisions is important, because reasons can be abstract strategies that apply to multiple situations. The more often a particular strategy is applied, the faster the measurement system can observe the results of that strategy, and the faster it can decide whether that strategy should be reinforced.
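To make this mechanism concrete, here is a minimal sketch in Python of how the pieces might fit together. Everything in it (the class names, the representation of a situation as a string, the per-strategy override function) is a hypothetical illustration of the scheme described above, not a description of any existing system.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Strategy:
    """An abstract reason for overriding the default: a general verbal rule,
    plus a function that says what to do instead in a given situation
    (or None if the rule does not apply there)."""
    reason: str
    override: Callable[[str], Optional[str]]
    score: float = 0.0        # running measurement of how well the strategy has worked
    applications: int = 0     # how often it has been applied

@dataclass
class ConsciousSystem:
    default_behaviour: dict[str, str]              # situation -> default action
    strategies: list[Strategy] = field(default_factory=list)

    def decide(self, situation: str):
        """Return (action, strategy): either the default action with no reason,
        or a conscious override together with the strategy that justified it."""
        default_action = self.default_behaviour[situation]
        for strategy in self.strategies:
            alternative = strategy.override(situation)
            if alternative is not None and alternative != default_action:
                return alternative, strategy       # conscious decision, with a reason
        return default_action, None                # no applicable reason: follow the default
```

The point of the sketch is that a decision to deviate from the default always carries a Strategy with it, so that later measurement can be attributed to a reason that applies across many situations.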
Is “X” conscious?
We can turn the list of items above into a list of questions to answer the big question: Is “X” conscious?
- Does X have a concept and awareness of what its own default behaviour would be in any particular situation?
- Does X have some understanding of the factors that determine its default behaviours?
- Is X able to make a decision to do something different from its own default behaviour in any particular situation?
- Can X learn or think of abstract strategies such that each individual strategy can be applied to decide to behave differently from the default behaviour in a range of different situations?
Is a dog conscious?
It is easy to anthropomorphize the behaviour of non-human animals. There are many things that dogs do which are similar to what a person might do in a similar situation.
But:
- Does a dog have a concept and awareness of what its own default behaviour would be in any particular situation?
- Does a dog have some understanding of the factors that determine its default behaviours?
- Is a dog able to make a decision to do something different from its own default behaviour in any particular situation?
- Can a dog learn or think of abstract strategies such that each individual strategy can be applied to decide to behave differently from the default behaviour in a range of different situations?
I don’t personally understand what it is like to “be a dog”, because I’m not a dog.
But I’m going to speculate that the answer to all four of those questions for a dog is “No”.
Is AI conscious?
Same questions:
- Does AI have a concept and awareness of what its own default behaviour would be in any particular situation?
- Does AI have some understanding of the factors that determine its default behaviours?
- Is AI able to make a decision to do something different from its own default behaviour in any particular situation?
- Can AI learn or think of abstract strategies such that each individual strategy can be applied to decide to behave differently from the default behaviour in a range of different situations?
In principle we could attempt to engineer an AI that has these characteristics, including a measurement system that manages the learning of abstract strategies by comparing the results of applying a given strategy to a conscious decision against the results expected if the default behaviour had been executed.
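As a hedged sketch of what that measurement system might look like, the following hypothetical function compares the observed outcome of a conscious decision against the outcome expected from the default behaviour, credits the strategy that was the reason for the decision, and eventually promotes a proven strategy into the default behaviour. The numeric outcomes and the thresholds are illustrative assumptions, not part of any real system.

```python
# Illustrative thresholds; none of this corresponds to a real AI system.
MIN_TRIALS = 5          # don't trust a strategy until it has been tried this many times
PROMOTION_MARGIN = 1.0  # cumulative advantage required before it replaces the default

def measure_and_update(default_behaviour, strategy_stats, strategy_name,
                       situation, chosen_action,
                       observed_outcome, expected_default_outcome):
    """Fixed (non-self-modifying) measurement step.

    Credits the named strategy with how much better the conscious decision
    turned out than the default behaviour was expected to do, and once the
    strategy has proven itself, folds its choice into the default behaviour
    for that situation.
    """
    stats = strategy_stats.setdefault(strategy_name, {"score": 0.0, "trials": 0})
    stats["score"] += observed_outcome - expected_default_outcome
    stats["trials"] += 1

    if stats["trials"] >= MIN_TRIALS and stats["score"] >= PROMOTION_MARGIN:
        # The strategy has reliably beaten the default: make its choice the new default.
        default_behaviour[situation] = chosen_action
```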
But current AI systems that people are actually using, especially the LLM-based AIs used for chat, analysis and software development, are not designed to be self-modifying in this sense – the processes of creation and modification are all explicitly managed and controlled by the human owners of the AI.
We can observe from our own interactions with these AIs that they can be quite “intelligent” in their ability to understand what we say, and in their ability to solve some kinds of problems.
This observation suggests that intelligence and consciousness are not as strongly linked as we might assume – a system can be quite intelligent without being conscious.
And maybe AI can achieve super-human intelligence, without ever being conscious.
Human Consciousness and Language
In the human case, I am going to say that the answer to all four questions is “Yes”.
One secondary question that arises is:
- How do humans source candidate abstract strategies to be applied to conscious decision-making?
In principle, a sentient being can come up with abstract strategies from its own internal thought processes.
But, in practice, most of our ideas about abstract life strategies come from things that other people tell us:
- “You should do this, for this reason”
- “You should do that, for that reason”
For any particular strategy, nothing forces us to follow it just because someone said it.
But, at the same time, most of the possible abstract strategies that we might apply to solving our problems do come from something that someone said to us (or in the modern world, from something that we read).
This implies that human consciousness depends very much on human language.
Given that dogs, for example, do not have a human-like language capable of expressing these kinds of abstract strategies, there is an even stronger argument that dogs are not conscious in this sense.
LLM-based AI Consciousness and Language
For LLM-based AIs, it may be the case that currently none of them are conscious.
But LLMs do, by their very construction, have an ability to understand human language, so it is entirely possible that a conscious LLM could be constructed with a pool of candidate verbally-specified abstract strategies to apply to its own self-modification.
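As a very rough sketch of that idea, and only that idea, here is some hypothetical Python in which the “self-modification” is nothing more than folding a verbally-specified strategy that has proven itself into the system prompt that shapes the model’s default behaviour. The example strategies, the threshold and the prompt wording are all placeholder assumptions, not a real API or product.

```python
# Hypothetical sketch only: the "self-modification" here is just appending a
# proven verbal strategy to the system prompt that shapes default behaviour.

system_prompt = "You are a helpful assistant."    # the model's default behaviour, as text

strategy_pool = [
    "Ask a clarifying question before committing to an answer.",
    "Prefer admitting uncertainty over guessing confidently.",
]

def promote_strategy(strategy: str, cumulative_advantage: float,
                     threshold: float = 1.0) -> None:
    """If measurement shows this verbal strategy has beaten the default by enough,
    make it part of the default behaviour by writing it into the system prompt."""
    global system_prompt
    if strategy in strategy_pool and cumulative_advantage >= threshold:
        system_prompt += "\nAlways apply this rule: " + strategy
```

Because the strategies are plain language, the model that has to apply them can also read them, which is the connection between consciousness and language suggested above.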
Having said that, even though it might be possible to construct a conscious self-modifying LLM-based AI, there might not be any practical benefit from doing so.
Humans have evolved to have a self-modifying intelligence, because there isn’t anyone else who can do the self-modification.
But for AIs, there are humans who can and do construct them, and then modify them as necessary – there is actually no need to construct a self-modifying conscious AI if your main goal is just to make a smarter AI.