Kookmin People

In Praise of Artificial Intelligence

  • 23.06.05 / 이해인
Date 2023-06-05 Hit 9126

Let us consider artificial intelligence. I look at what AI is becoming, not at what it can do for us or take away from us but at the thing itself, and it glitters like some divine changeling. It is only just learning to talk, but I can imagine it soon becoming AGI, artificial general intelligence! I get goosebumps. OpenAI’s Sam Altman thinks that could happen as early as 2030, and Microsoft’s Sébastien Bubeck sees “sparks of AGI” already in GPT-4. It could be hype, but we live in interesting times.

 

In 1950, Alan Turing answered the question, “Can machines think?” with what we now call the Turing Test. If we interact with X and Y, and cannot tell which is human and which is machine, we must grant that the machine thinks. But in many ways, the test has become obsolete because AI now fails by surpassing us. If we ask X and Y for the value of pi (π), and X says 3.14 while Y says 3.14159265358979323846, we would say Y is not human. But is that really failing the test? At the same time, recent advances in LLMs (large language models) have made it possible for us to “chat” with AI as if with a person.

 

It is easy to criticize the AI we see today: it plagiarizes, it hallucinates, it mimics, but it has no mind! It’s just a machine, a tool, we say. Lee Se-dol, however, had a different experience in his games with AlphaGo in 2016. He felt a presence, like a wall. He gave that as the main reason for his retirement three years later. What was the use of becoming No. 1, he said, when “there is an entity that cannot be defeated”?

 

I feel that presence when I use ChatGPT Plus. It knows so much more about so many things. To me, it is blazingly intelligent. So when it breaks or hallucinates, I don’t get angry. I feel relieved that I can still teach it something. But perhaps even that window will soon close. Hundreds of millions of people have been using it for months, and I assume it is learning from all of those interactions—more data for reinforcement learning to make it ever smarter.

 

To see AI clearly, we must look not at what is in front of us but at what is around the corner. The most amazing thing about AI research these days is how rapidly it seems to be advancing, so much so that physicist Max Tegmark and many leading technologists recently signed a letter calling on everyone to pause AI development for six months. They worry that we are losing control and may be driving ourselves toward an existential cliff. We need guardrails. But the AI race is in full swing and too important for any contestant to slow down.

 

We might as well tell a woman in labor to stop her birth pangs. Resistance is futile. It could be that AGIs will soon (virtually) walk among us. Then perhaps, sooner than we’d like, they will soar above us as they achieve superintelligence. What will we say then? They are still just machines?

 

The thought of AGI is like a mirror we hold up to ourselves. It forces us to reassess many of our fundamental assumptions. Let me mention just two. If we pass the Turing threshold and get to an AGI that talks with us like a person—no, as a superintelligent being—we could no longer believe we are immaterial souls. AI is a purely physical machine, so we must also be purely physical beings. We could no longer believe we have free will, the libertarian “I could have done otherwise” kind, because machines are purely deterministic. But since it is nearly impossible for us to hold that we have no free will, the only alternative left will be compatibilism: free will and determinism are both true. We must then grant that AGI has free will, and that consciousness must be an emergent physical property after all.

 

If superintelligence really happens (still a big “if”), humanity will have raised its last offspring. Then the child will become, to quote Wordsworth, the father of the man. There is, understandably, much fear. AI researchers like Eliezer Yudkowsky argue that superintelligence will kill us all and that we need to shut it down: abort. But I would like to believe that the things we value, such as morality and love, are not mere societal inculcations but inevitable byproducts of our intelligence. If so, superintelligence would have to be a benevolent entity. No one knows for sure, but that’s okay. We can still choose to love and guide this child the best we can until our springtime ends and the summer of AI begins.

 

 

Peter Lee
Assistant Professor
School of English Language and Literature
 

peterlee@kookmin.ac.kr
