AI and Robotics: Now and Then
We're hearing a lot these days about both artificial intelligence and robotics. I recently read an article by Arthur Garson Jr. titled Get Ready for Artificial Intelligence to Take Over Medicine. He opens by noting that in recent weeks many articles have predicted that AI will replace factory workers, as well as highly specialized professionals like lawyers and engineers, and that, contrary to the opinions of most medical professionals, AI and robotics together will have the same impact on medicine and surgery.

There's a huge amount of optimism around the positive aspects of these developments, but there's also a dark side. We've all heard how ChatGPT and other chatbots have learned how to lie, confidently making up answers when they can't find sufficient information to present to us. We've heard that supposedly autonomous vehicles, created by integrating AI and robotics, still haven't learned enough about driving and road conditions to prevent serious accidents. So we have a long way to go before these systems can be considered mainstream and safe enough to rely on.

As an avid science fiction reader since elementary school, all of this reminds me of the writings of Isaac Asimov. Way back in the 1940s, long before I was born, he wrote the short story Runaround, in which he first shared his Three Laws of Robotics, an attempt to create an ethical system for humans and robots. The laws were also briefly mentioned in last Friday's episode of Apple TV+'s Foundation series, which is based on Asimov's Foundation trilogy. At the time, since neither AI nor robotics existed in the ways they do now, both were merged into a single, all-encompassing intelligent robot.

His Three Laws of Robotics were simple and straightforward, and perhaps they should be considered as a guideline for developing laws and regulations governing today's rapidly evolving technologies. The first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The second states that a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law. And the third states that a robot must protect its own existence as long as such protection does not conflict with the first or second law.

Now, with today's deepfakes, malicious chatbots, and other harmful code being developed, we should encourage the industry itself, as well as governments around the world, to develop very strong rules and regulations, with severe penalties attached, to help us all leverage the awesome potential of these technologies for good.

I know this is a bit off my regular discussion topics, but it's an area I'm greatly interested in, so I hope it stimulated some interest in your minds as well. Please feel free to share it with your friends.