IBB: When we bet more on AI than on ourselves
Last night I fell asleep to news that very big figures in the AI world – Elon Musk, Yoshua Bengio, Steve Wozniak, Yuval Harari, Andrew Yang, Emad Mostaque, and other AI leaders – had signed an open letter to ‘Pause Giant AI Experiments’. Were the signatures that had been sighted authentic?
Who knows? Who can verify?
About six hours later, more opinions had flooded the Internet, and LinkedIn in particular, so I turned to a source that I knew would keep close tabs on the matter and could verify whether anything of the sort was really going on.
Sure enough, I was pointed to a post:
Louis Rosenberg, a veteran AI researcher and AR/VR pioneer, has signed the Future of Life Institute (FLI) letter on pausing #AI, together with many notable personalities, like Kavya Pearlman.
Louis shared his reasons for signing the letter in this article by Barron’s. He also stated, “It’s very clear to me that the new and rapidly advancing risks caused by AI systems are moving too fast for the AI community, from researchers and industry professionals to regulators and policy makers, to adjust.”
Lots of big names in AI have been thrown about in less than 24 hours.
It also reminded me of a conference in Nashville where a big-name physicist gave his keynote about the wonderful things that AI had achieved. He even managed to add a chill/warning factor to it all when he described how the America’s Cup NZ sailing team had a yacht designed by AI, one that had, in a sense, crawled out of the Internet.
If AI is going to be used at a larger scale, as Louis rightly pointed out, “there are no accuracy standards or governing bodies in place to help ensure that these systems, which will become a major part of the workforce, will not propagate errors from subtle mistakes to wild fabrications. We need time to put protections in place and ramp up regulatory authorities to ensure protections are used.”
As I write this, it is worth noting that the signatures of Elon Musk, Yoshua Bengio, Steve Wozniak, Yuval Harari, Andrew Yang, Emad Mostaque, and the other big-name AI leaders are still not verified.
IT BYTES BACK! says: I think there have to be checks and balances.
At a glance it looks like we are hoping the data ‘rights’ itself over time so that it cannot manipulate us, and more importantly, cannot manipulate us into making inaccurate or “wrong” decisions.
Here’s the thing… the fact that we rely so much on what AI tells us is already a kind of manipulation.
How about also “righting” our own thinking, or our level of reliance, so that we are more discerning about what AI generates for us?
Are we betting more on AI than on ourselves?