A Definition of Consciousness

This definition is somewhat circular. But it does imply something about the prerequisites for being conscious:

I would add another requirement to those three:

In the short term, conscious decisions alter behaviour that happens within the current situation.

The full benefit from conscious decision-making comes from the ability to permanently change the default behaviour of the system, if a conscious decision is observed to produce a result that is better than what would have been expected if the system had executed its default behaviour in that situation.

To achieve this benefit, the system has to include a sub-system for measuring the success of conscious decisions, relating those measurements to the reasons for making those decisions, and then updating default behaviours accordingly.

Also, the measurement sub-system has to exist as something that is not itself subject to self-alteration: if the measurement sub-system can be self-altered, then the whole system could very easily corrupt itself into a state where the process of self-alteration becomes dysfunctional.

The concept of “reasons” for conscious decisions is important, because reasons can be abstract strategies that apply to multiple situations. The more often a particular strategy is applied, the faster the measurement system can observe its results, and the faster it can decide whether that strategy should be reinforced.
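To make this mechanism more concrete, here is a minimal sketch in Python of how default behaviours, strategy-tagged conscious decisions and a fixed measurement sub-system might fit together. Everything in it (the class names, the thresholds, the notion of an "advantage" over the expected default result) is my own illustrative assumption, not something specified above.

```python
from dataclasses import dataclass

@dataclass
class StrategyRecord:
    applications: int = 0
    total_advantage: float = 0.0   # observed result minus expected default result

class MeasurementSystem:
    """Fixed sub-system: records outcomes per strategy and decides when a
    strategy has enough evidence to be adopted into default behaviour."""

    def __init__(self, min_applications=10, min_mean_advantage=0.0):
        self._records = {}
        self._min_applications = min_applications
        self._min_mean_advantage = min_mean_advantage

    def record(self, strategy, observed_result, expected_default_result):
        rec = self._records.setdefault(strategy, StrategyRecord())
        rec.applications += 1
        rec.total_advantage += observed_result - expected_default_result

    def should_reinforce(self, strategy):
        rec = self._records.get(strategy)
        if rec is None or rec.applications < self._min_applications:
            return False
        return rec.total_advantage / rec.applications > self._min_mean_advantage

class Agent:
    def __init__(self, measurement):
        self.measurement = measurement
        self.defaults = {}      # situation -> default action
        self.strategies = {}    # strategy name -> function(situation) -> action

    def act(self, situation, strategy=None):
        if strategy is None:
            return self.defaults.get(situation)
        # A conscious decision: override the default, remembering the reason.
        return self.strategies[strategy](situation)

    def learn(self, situation, strategy, observed_result, expected_default_result):
        # Only the measurement sub-system decides what gets reinforced.
        self.measurement.record(strategy, observed_result, expected_default_result)
        if self.measurement.should_reinforce(strategy):
            # Permanently change the default behaviour for this situation.
            self.defaults[situation] = self.strategies[strategy](situation)
```

Two details in the sketch correspond to points made above: the Agent only calls into the MeasurementSystem and never rewrites it, and a strategy's record is keyed by the strategy itself rather than by the situation, so every application in any situation contributes to the same pool of evidence.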

Is “X” conscious?

We can turn the list of items above into a list of questions to answer the big question: Is “X” conscious?

Is a dog conscious?

It is easy to anthropomorphize the behaviour of non-human animals. There are many things that dogs do which are similar to what a person might do in a similar situation.

But:

I don’t personally understand what it is like to “be a dog”, because I’m not a dog.

But I’m going to speculate that the answer to all four of those questions for a dog is “No”.

Is AI conscious?

Same questions:

In principle we could attempt to engineer an AI that has these characteristics, including a measurement system that manages the learning of abstract strategies by comparing the results of conscious decisions made under a given strategy with the results that would have been expected if the default behaviour had been executed.

But the current AI systems that people are actually using, especially LLM-based AIs used for chat, analysis and software development, are not designed to be self-modifying in this sense: the processes of creation and modification are all explicitly managed and controlled by the human owners of the AI.

We can observe from our own interactions with these AIs that they can be quite “intelligent” in their ability to understand what we say, and in their ability to solve some kinds of problems.

This observation suggests that intelligence and consciousness are not as strongly linked as we might assume – a system can be quite intelligent without being conscious.

And maybe AI can achieve super-human intelligence without ever being conscious.

Human Consciousness and Language

In the human case, I am going to say that the answer to all four questions is “Yes”.

One secondary question that arises is:

In principle it is possible for a sentient being to come up with abstract strategies from its own internal thought processes.

But, in practice, most of our ideas about abstract life strategies come from things that other people tell us:

For any particular strategy, nothing forces us to follow it just because someone said it.

But, at the same time, most of the possible abstract strategies that we might apply to solving our problems do come from something that someone said to us (or in the modern world, from something that we read).

This implies that human consciousness depends very much on human language.

Given that dogs, for example, do not have human-like language capable of expressing these kinds of abstract strategies, there is an even stronger argument that dogs are not conscious in this sense.

LLM-based AI Consciousness and Language

For LLM-based AIs, it may well be that none of them are currently conscious.

But LLMs, by their very construction, have an ability to understand human language, so it is entirely possible that a conscious LLM-based AI could be constructed with a pool of candidate verbally-specified abstract strategies that it applies to its own self-modification.
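As a rough illustration of what a pool of verbally-specified strategies might look like, here is a small Python sketch. The strategy texts, thresholds and helper functions are all assumptions of mine; complete() and evaluate() are placeholders standing in for an actual LLM call and for the fixed measurement sub-system, not any real API.

```python
BASE_SYSTEM_PROMPT = "You are a helpful assistant."

# Candidate verbally-specified abstract strategies (illustrative examples only).
candidate_strategies = [
    "Before answering, list the assumptions the question depends on.",
    "If a task is ambiguous, ask one clarifying question first.",
    "Prefer small reversible steps over large irreversible ones.",
]

adopted_strategies = []                         # strategies promoted into default behaviour
scores = {s: [] for s in candidate_strategies}  # advantage history per strategy

def complete(system_prompt: str, user_message: str) -> str:
    """Placeholder for whatever LLM call the system actually uses."""
    raise NotImplementedError

def evaluate(answer: str) -> float:
    """Placeholder for the fixed measurement sub-system (e.g. user feedback)."""
    raise NotImplementedError

def answer_with_strategy(user_message: str, strategy: str) -> str:
    # The default behaviour is the base prompt plus previously adopted strategies;
    # a conscious decision additionally applies one candidate strategy.
    system_prompt = "\n".join([BASE_SYSTEM_PROMPT, *adopted_strategies, strategy])
    return complete(system_prompt, user_message)

def record_outcome(strategy: str, answer: str, default_answer: str) -> None:
    # Measure the strategy's answer relative to what the default behaviour produced.
    advantage = evaluate(answer) - evaluate(default_answer)
    scores[strategy].append(advantage)
    # Once a strategy has enough evidence of beating the default, fold it into the
    # default system prompt: that folding-in is the "self-modification" step.
    if len(scores[strategy]) >= 20 and sum(scores[strategy]) / len(scores[strategy]) > 0:
        if strategy not in adopted_strategies:
            adopted_strategies.append(strategy)
```

In this picture, the pool of candidate strategies and the adopted defaults live in ordinary text that the model can read and that the promotion step can rewrite, while the evaluation itself stays outside anything the model is able to alter.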

Having said that, even though it might be possible to construct a conscious self-modifying LLM-based AI, there might not be any practical benefit from doing so.

Humans have evolved to have a self-modifying intelligence, because there isn't anyone else who can do the modification for us.

But for AIs, there are humans who can and do construct them, and then modify them as necessary – there is actually no need to construct a self-modifying conscious AI if your main goal is just to make a smarter AI.