Project Support versus Reference

Eh…I’ve seen a chat AI show a persistent determination to give me a wrong answer. After eight or so wrong answers, each followed by me telling it it was wrong, it finally admitted it didn’t know. On another occasion, it went through several wrong answers, then circled back to the first wrong answer.
Which model was it?
 
Whatever my employer has licensed, and I’m not willing to name my employer. :) I can totally believe that some lie and/or are confused more than others. But I’m nowhere near actually trusting any AI yet.

The development of AI in the last year or so has been remarkable, and at this point in my business (not-for-profit healthcare), we couldn't live without it. We just got funded by a multinational to make a custom AI service that can learn new languages based on languages with the same root. In cities like mine, with 200 spoken languages, we can slowly accrue more and more languages based on simply using the app.

I was given a demo of an AI talking-therapies app the other day (at the mere cost of £60,000 a license) that another NFP CEO and trained therapist I know said was as good as the best therapist in her organisation, totally flawless. Hospitals routinely use AI to scan for tumors in X-rays or UC in biopsies. Every day I'm hearing about totally game-changing uses.

This is not some hit-and-miss tech, and one bum experience with a basic chatbot oughtn't to put you off the entire realm of AI. This is probably the biggest boon to productivity in the last two decades. It'd be like trying the internet for the first time and being given a wrong telephone number, so you never used it again.
 
It'd be like trying the internet for the first time and being given a wrong telephone number, so you never used it again.
I said “yet.” I don’t trust it yet.

I will keep on verifying everything an AI does, for the foreseeable future. If the wrongness reduces, the verifying may reduce.
 
A while ago I was upset to hear a rumor that Phil Donahue had died. It was a loss to me when the Phil Donahue Show was discontinued. I had loved that show and the marvelous willingness of Phil Donahue to allow people of all persuasions to speak. He not only had very controversial guests on his show, but he also roamed widely through his studio audience with his microphone (he had very large live TV audiences), inviting individuals in his studio audience to comment on the issues of the day. Phil Donahue welcomed even ideas he may have abhorred, because the freedom to speak and exchange ideas was dear to him.

I admired and appreciated Phil Donahue, and I was hoping the rumor of his death was untrue. I asked the Bing AI robot, Copilot, whether Phil Donahue had died. It said not to worry, Phil Donahue was alive. (I think it even said he was alive and well and doing fine, but it's been a while now, and I might remember that wrong.) For a moment that relieved my mind, but it was not long before I discovered that Copilot was wrong: Phil Donahue had indeed died. This was a painful realization for me that AI is not reliable at this point.

Like any new development, AI can grow and improve, and early imperfections are not necessarily indicative of later problems. What troubles me is that the scientists who have themselves been developing AI, together as a group, are cautioning the world that the development of AI needs to stop entirely right now until we learn more about how AI works. (This happened some time ago now).

The concern of these scientists who have been developing AI is that they, themselves, do not understand how AI works, or how it learns, or how to predict what it will do, or how to control this new technology. They say that because of the rapid speed at which AI is evolving, it will soon be too late for human beings to put this unknown force back into the bottle. They say that AI technology, unless stopped immediately, will develop so far beyond anything that even our most brilliant minds could ever know or learn or understand that humanity will never be able to control it.

Do we want to be subject to a force we cannot understand or predict or control that may not be stable in certain unknown ways and that may not value human life?
 
The concern of these scientists who have been developing AI is that they, themselves, do not understand how AI works, or how it learns, or how to predict what it will do, or how to control this new technology. They say that because of the rapid speed at which AI is evolving, it will soon be too late for human beings to put this unknown force back into the bottle. They say that AI technology, unless stopped immediately, will develop so far beyond anything that even our most brilliant minds could ever know or learn or understand that humanity will never be able to control it.

Do we want to be subject to a force we cannot understand or predict or control that may not be stable in certain unknown ways and that may not value human life?
I think AI researchers most likely should not be classified as scientists; most of them are probably engineers. Some of the louder voices are technology enthusiasts and/or entrepreneurs, and may have a superficial understanding of AI research. Good scientists would not make some of the claims about AI which are circulating. Don’t believe all the hype. Climate change and nuclear war are still much bigger threats to the human race.
 
@mcogilvie

Hi, there. I value your opinion. Thank you.

The apparent inevitability of climate catastrophe and the risk of nuclear war (the doomsday clock is very close to midnight now) can indeed hardly be overstated. Even so, it is being said by some experts in the field that AI is even more dangerous than these and needs to be stopped entirely and immediately. Others are also deeply concerned, and given that the development of AI in our for-profit world seems to be virtually inevitable at this point, they are doing research to develop more knowledge about how AI works, yet without developing it further themselves.

I cannot understand technically the reasons for this deep concern, but I do understand that some people deeply familiar with AI are convinced that AI is a threat to human existence. In our age of catastrophic threats, I agree that it is essential to keep our perspective, and find ways to be present in the moment, rather than in catastrophic fears, so we can find meaning and enjoyment in the precious life we are actually living now.

I believe the best I can do is share what I have read with others who may have a better understanding of these ideas, in hopes that they might leap across to the folks who may need to be aware of them. I especially want to direct you to the last link I posted on this page and to mention that some of these links include interesting research. The links are shown below:

Warmly,

Emily

 
Even so, it is being said by some experts in the field that AI is even more dangerous than these and needs to be stopped entirely and immediately.
@Mrs-Polifax Up to now there's no real AI in the world. We have interaction with a huge pattern-matching database that doesn't "think". Large Language Models have no intention in their responses – they just mix and repeat what they've read on the internet. The only danger is that after reading one of their answers, we will die laughing. ;)
 