When the Chinese say “Confucius says…”, they provide a citation for the source of wisdom and invoke the prescription with the full authority of Confucius. This is not unique to the Chinese. Christians often quote Jesus and the Bible, Muslims quote Muhammad and the Quran, and others rely on the Buddha, Abraham, Laozi, Brahma, Vishnu, or Shiva; maybe some Marx, some Mao, some Jefferson, some Lincoln… Scientists also provide citations to incorporate the wisdom of experience and the force of reasoning from the past, although scientists cite with a more restricted scope of meaning, and often in the spirit of critical analysis rather than the faithful belief of my other examples.
My child once said to me, “my mouth says I want to eat candy,” and on other occasions, “according to my mouth, it will be dark by sundown.” After hearing it a few times, I understood that what he communicates is his opinion. That his mouth says it has the effect of self-reference: it highlights the fact that he is the one saying it, and that what his mouth says may differ from what others are thinking, from what will happen, and from what is possible or permitted, among other opinions on the relevant matter.
When we consider the possibility of an AI with great knowledge and skill in reasoning, our thoughts jump to a time when we grant the AI powers usually afforded to humans who have the requisite skills and morality. Our minds are clouded and confused on this matter because we have not achieved a universally accepted and precise expression of what it means to be moral, of what is good, and of how to distinguish the good from the bad. We don’t know. Humans do not know.
Therefore the engineers of such an AI system may consider the possibility of there being many moral authorities. Confucius may agree that “you did right,” but Jesus and Vishnu say “absolutely not!” Trump recommends “executive pardon” and Obama chooses “secret assassination.” All these are possible in our human system. We can simply remove the controversies of morality from the engineering: the AI system should be designed with sufficient external interfaces and introspective capabilities to accommodate all human belief systems.
This recalls the American TV show Whose Line Is It Anyway?, “where everything’s made up and the points don’t matter.” When we introduce a relativistic view of good morals, everything may feel, to some, less authentic and less auspicious. What would Jesus say to the Buddha? One would believe they would disagree with each other on the morality of many decisions. Could we keep the peace among these holy entities if they were present in the same space and time? It may be the shallowness of my mind, but their vigorous and destructive disagreement is the only thing I can imagine. Yes, I am very sorry, but I am rewatching Jesus versus Santa in South Park’s The Spirit of Christmas in my mind right now. When dealing with everyday situations, even the saints will have to bring to bear fire and brimstone… and karate fireballs.
But we believe in free will. Our dedarkened minds should permit them to disagree; that is the only realistic way for us to reason about them. Suppose I have sages like Aristotle in my head whom I may query regarding the goodness of an act, and at a later time I may ask Confucius the same question. They may disagree, but examining their responses teaches me how to think about it. Perhaps we will choose Plato to justify one action and Rand to justify another. It seems the only responsible course forward is to combine, in our minds, all of our powers for good.
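The idea of many queryable, possibly disagreeing moral authorities can be sketched in code. This is only an illustrative toy, not a real design: the `MoralAdvisor` interface, the `RuleBasedAdvisor` class, and the rule sets attributed to the named sages are all my own hypothetical inventions. The point is only that disagreement is surfaced to the human rather than resolved by the machine.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Judgment:
    advisor: str
    verdict: str      # e.g. "approve" or "condemn"
    reasoning: str

class MoralAdvisor(Protocol):
    """Any moral authority the system can query about an act."""
    name: str
    def judge(self, act: str) -> Judgment: ...

@dataclass
class RuleBasedAdvisor:
    """A toy advisor that condemns acts on its personal forbidden list."""
    name: str
    forbidden: set[str]

    def judge(self, act: str) -> Judgment:
        if act in self.forbidden:
            return Judgment(self.name, "condemn", f"{act} violates my rules")
        return Judgment(self.name, "approve", f"{act} is permitted")

def consult_all(advisors: list[MoralAdvisor], act: str) -> list[Judgment]:
    """Query every advisor; disagreements are preserved, not resolved."""
    return [a.judge(act) for a in advisors]

# Illustrative rule sets only -- not claims about what these figures taught.
advisors = [
    RuleBasedAdvisor("Confucius", forbidden={"disrespecting elders"}),
    RuleBasedAdvisor("Vishnu", forbidden={"lying", "disrespecting elders"}),
]
judgments = consult_all(advisors, "lying")
verdicts = {j.advisor: j.verdict for j in judgments}
# Here Confucius approves while Vishnu condemns; both answers reach the user.
```

The design choice worth noting is that `consult_all` returns the full list of judgments instead of a single aggregated verdict, which matches the essay's suggestion that the human, not the system, weighs the sages against one another.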
Thus, we have achieved a pronounceable acronym: we aim to implement the Good Computational Intelligence, the GCI! The addition of Good to the name imbues it with the meaning that we have made significant effort to ensure its moral goodness: that it is just as good as it is artificial, general, and intelligent.
GCI, here we come!
P.S. To expand on this: recent developments suggest that truly good decisions (those with high-utility consequences) may not be reachable by symbolic reasoning and may not be compressible into easy explanations. Like multiverses sitting on top of one another, many and all realities may in fact exist, everything all the time all at once, and they all interact with each other. The thoughts we perceive as good and wise (or those producing good results) may themselves not be explainable unless you are there, in that place, in a specific reality. The parameters may be so complex, or the world so exotic, that there exists no approximate internal state that produces a close enough good thought.
P.P.S. But having suspected that (that oracle-hood is ultimately unattainable), we should also hypothesize an oracle that can explain everything to us. For example, the omnipotent God of a certain religion, being omnipotent, must have the ability to explain everything to us. Setting aside the contradictions in our minds, we can also set that as the goal of the GCI: to give us the information to produce humanity’s best.
P.P.P.S. And therein we will find our everyday misgivings: is it okay to lie to a human to produce the best result? Our solution here is once again to say, “my mouth says I don’t want to be lied to, but GCI says lying to you is the best way to go.” The analysis can deepen from that point forward.