Generative AI systems are non-deterministic: the same prompt from two users can produce two entirely different results. This is a fundamental feature of the mechanism and applies to all the major LLMs. The reason is the probabilistic nature of generative AI.
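A minimal sketch of why this happens: at each step the model assigns probabilities to possible next tokens and then samples from that distribution. The toy example below (plain Python, with made-up tokens and probabilities, not drawn from any real model) shows how two runs over the very same distribution can diverge.

```python
import random

# Toy next-token distribution for the prompt "The capital of France is"
# (illustrative numbers only, not from any real model).
next_token_probs = {
    "Paris": 0.86,
    "a": 0.06,
    "located": 0.05,
    "the": 0.03,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; a higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt, sampled twice, can yield different continuations.
print(sample_next_token(next_token_probs, temperature=1.2))
print(sample_next_token(next_token_probs, temperature=1.2))
```

Most of the time the most likely token wins, but not always; that occasional divergence is the non-determinism users experience.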
This non-determinism makes generative AI charmingly unpredictable. It is the very quality that makes it endearing and lets it sound like a human assistant. After all, as human beings we do not give rigidly robotic, strictly factual answers when posed a question.
It is the central basis for the anthropomorphization of these systems. Many users already treat AI as a quasi-other: a technology that confronts us "as if" it were an independent other, inviting dialogue or collaboration. Despite its clear non-humanness, the unpredictable responses produced by the non-deterministic design allow it to seem as though it has a personality.
How will we know when AI becomes Sentient?
From time to time, we have discussions about whether AI has become sentient, i.e., self-aware and capable of decision making. This is a sensitive topic for technology companies: Google fired an engineer who claimed one of its chatbots was sentient as early as 2022.
Given the non-deterministic nature of generative AI, how will we even know if AI is sentient? If you go back to the Google engineer, it essentially comes down to very complex dialogue trying to tease out agency. The AI is simply assembling the response to the prompt that is the "probabilistic best response". Therefore, it is very difficult to discern, and then establish, that a response was not chance but a decision by a self-aware entity.
At this time, it is clearer to argue that AI presents as having alterity without genuine agency. In other words, AI mediates but does not possess intentions. A fascinating new academic literature is starting to discuss "the needs of digital minds", for instance, how humans might build empathy for the suffering of digital minds. This might be somewhat premature, in my opinion.
How Do Consumers Respond?
At the same time, consumers (or users, if you wish) of generative AI find that there are unexplainable differences across the prominent LLMs. The user doesn't know why this happens and does not care. All they know is that "somehow" Gemini does some things better while Claude may excel at code debugging.
Rather than being stuck in one AI system, they are already problem solving through variety-seeking behavior and recombinant logics (e.g., I start with AI x and move on to y and z, as needed). This is utter pragmatism. Users are not particularly loyal to one AI brand; instead, they seek to extract the most value. There is a spirit of experimentation and reuse.
One popular recombinant strategy occurs in software development. Computer code generated in one AI may not compile. Rather than debugging it there, the user simply takes the code to another AI, which often produces a fix. Again, the user does not know why this works and does not care. The end result is workable code.
In the case of software, the recombinant behavior extends to the traditional codebase as well.
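As a rough illustration of this back-and-forth, the sketch below (Python) checks whether a generated snippet even parses and, if not, hands it to a second assistant for repair. The `ask_model_a` and `ask_model_b` functions are hypothetical stand-ins for whatever chat windows or APIs the user happens to have open; the point is the recombinant loop, not any particular vendor.

```python
import ast

def ask_model_a(task: str) -> str:
    """Hypothetical stand-in for the first assistant."""
    return "def add(a, b) return a + b"   # pretend it came back malformed

def ask_model_b(task: str, broken_code: str) -> str:
    """Hypothetical stand-in for the second assistant asked to repair it."""
    return "def add(a, b):\n    return a + b"

def recombinant_codegen(task: str) -> str:
    code = ask_model_a(task)
    try:
        ast.parse(code)            # cheap check: does the snippet even parse?
        return code
    except SyntaxError:
        # Rather than debugging here, hand the broken code to another model.
        return ask_model_b(task, code)

print(recombinant_codegen("write an add function"))
```

In practice the "check" step is simply the user running or compiling the code; the loop continues until something workable comes back.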
Agency Transference, or the AI Made Me Do It
We are starting to understand the relationship between AI and humans. Given the seeming humanness of AI, users are naturally tempted to transfer their agency to the AI system. Rather than understanding the AI as an intelligent enhancement, these users essentially act as though the AI is an independent actor with agency.
Even though this is factually incorrect, they are starting to behave as though the AI is an independent authority. This agency transference is problematic, causing a perverse level of dependence and a diminishment of the human self.
This is the heart of the vibe coding argument. I throw some instructions at the AI and see what happens. If the result is useful, I gleefully use it. If not, I simply discard it.
Unexplainability of Gen AI
Gen AI operates as a black box. We are not able to explain why the generative AI system produced a particular output in response to a specific input. Put otherwise, no human can predict the output of Gen AI ex ante.
Recognizing the problems with this, there is now a move to create explainable AI. This approach seeks to reconstruct the mechanisms within the black box that produced the particular output from the given prompt. It is an ex post reconstruction of the micro-processes that might have led to the output.
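One common family of such ex post techniques is perturbation-based attribution: remove or mask parts of the prompt, re-score the output, and see which parts mattered most. The sketch below is a minimal, model-agnostic version of that idea; the `score_output` function is a hypothetical placeholder for however one obtains the model's likelihood of the original output, faked here for demonstration.

```python
def score_output(prompt_tokens, output):
    """Hypothetical placeholder: return the model's likelihood of `output`
    given the prompt. Faked here with a simple keyword check."""
    return 1.0 if "France" in prompt_tokens else 0.2

def occlusion_attribution(prompt_tokens, output):
    """Ex post attribution: drop each token and measure how much the
    output's score falls. Larger drops suggest more influential tokens."""
    base = score_output(prompt_tokens, output)
    scores = {}
    for i, token in enumerate(prompt_tokens):
        reduced = prompt_tokens[:i] + prompt_tokens[i + 1:]
        scores[token] = base - score_output(reduced, output)
    return scores

prompt = ["What", "is", "the", "capital", "of", "France", "?"]
print(occlusion_attribution(prompt, "Paris"))
```

The output here would flag "France" as the influential token; real explainability tools apply the same after-the-fact logic at far greater scale and sophistication.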
This might help developers understand the mechanisms at work, and it could prove useful if legal questions arise in the future. However, everyday users do not benefit much from it, nor do they want to worry about it.
Where do we go from here?
It is time we put down the metaphor of AI as a tool. Instead, we must come to grips with the idea that a broad intelligence has entered our lives. Our interactions and intersections with this intelligence will absolutely change our future. We must be prepared for a vast new terrain of human-AI cognition, engagement and collaboration in the coming years.