It’s pretty simple: what’s the LLM thinking about when it’s not actively being prompted?
Does an LLM have the same “consciousness” when talking to someone else? Does the LLM even exist if it’s not actively being talked to? If everyone just stops talking to it, does it “die” until someone else logs in and submits a prompt?
I’m not defending AI consciousness/sentience (however we define them), but IMHO your arguments aren’t particularly relevant/convincing. Why couldn’t consciousness exist in brief flashes? Isn’t the “would it even be the same consciousness” question the same as any sci-fi “clone with identical memories” thought experiment? If an LLM were conscious, then each conversation history would be a “self”. And yes, it would be in some vague state of death/hibernation/non-existence when not prompted.