A while ago I was upset to hear a rumor that Phil Donahue had died. It was a loss to me when the Phil Donahue Show was discontinued. I had loved that show and Phil Donahue's marvelous willingness to let people of all persuasions speak. He not only had very controversial guests on his show, but he also roamed widely through his large live studio audience with his microphone, inviting individuals to comment on the issues of the day. Phil Donahue welcomed even ideas he may have abhorred, because the freedom to speak and exchange ideas was dear to him.
I admired and appreciated Phil Donahue, and I was hoping the rumor of his death was untrue. I asked the Bing AI robot, Copilot, whether Phil Donahue had died. It said not to worry, Phil Donahue was alive. (I think it even said he was alive and well and doing fine, but it's been a while now, and I might remember that wrong.) For a moment that relieved my mind, but it was not long before I discovered that Copilot was wrong: Phil Donahue had indeed died. It was a painful realization for me that AI is not reliable at this point.
Like any new development, AI can grow and improve, and early imperfections are not necessarily indicative of later problems. What troubles me is that the very scientists who have been developing AI have cautioned the world, together as a group, that the development of AI needs to stop entirely right now until we learn more about how AI works. (This happened some time ago now.)
The concern of these scientists is that they themselves do not understand how AI works, how it learns, how to predict what it will do, or how to control this new technology. They say that because of the rapid speed at which AI is evolving, it will soon be too late for human beings to put this unknown force back into the bottle. They say that AI technology, unless stopped immediately, will develop so far beyond anything that even our most brilliant minds could ever know or understand that humanity will never be able to control it.
Do we want to be subject to a force we cannot understand or predict or control that may not be stable in certain unknown ways and that may not value human life?