Artificial intelligence has been compared by Elon Musk to a Pandora's box, a genie let out of the bottle with unintended consequences. Stephen Hawking recently said, "The development of full artificial intelligence could spell the end of the human race." Those familiar with Isaac Asimov's "Three Laws of Robotics" know that, in the domain of science fiction, mankind has spawned childlike, eager-to-please robots like Daneel, who appears in several of that writer's novels. Hawking, by contrast, foresees an "arms race" between human and artificial intelligence.
In an interview with the Financial Times, he described advances in genetic engineering that could improve humanity one generation at a time, roughly every 18 years. In that interview Hawking warned that "the danger is that computers develop intelligence and take over." And yet regulators appear slow off the mark in grasping the disruptive, potentially damaging possibilities presented by advances in robotics and artificial intelligence.
So why not regulate robotics and artificial intelligence?
After all, regulation has protected mankind from harm in the past while still allowing promising technologies to develop. In September of this year the European Union funded a project entitled "Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics." Intended as an in-depth evaluation of the legal and ethical issues raised by robotics and its applications, it examines threats to fundamental rights and freedoms and asks whether new regulation is needed to address the potential problems the technology presents. In the article the writers say that "too inflexible regulations might stifle innovation," but that a lack of legal clarity makes device manufacturers' jobs harder.
Even as manufacturers recognize that intrusive, premature legislation could hamper promising developments in the field, they also note that the absence of regulation and a clear legal framework can lead to dangerous and unintended consequences. In the December issue of Scientific American, Ryan Calo, a University of Washington law professor specializing in robotics, law, and policy, makes the case for U.S. federal regulation. In his concluding comments he says, "if we don't consider appropriate legal and policy infrastructure now, robotics could be the first transformative technology since steam in which America hasn't played a preeminent part." Most recently, I tried to grapple with R. Scott Bakker's fascinating essay on what kinds of philosophy extraterrestrial beings might practice, and came away dizzied by questions.
Fortunately, I had in my possession a book that seemed to offer me answers, a book that had nothing to do with such a contemporary preoccupation as the question of extraterrestrial philosophers, but instead with a metaphysical question that has been barred from philosophy, except among seminary students, since Darwin: namely, whether there is such a thing as moral truth if God does not exist. The title of the book is Robust Ethics: The Metaphysics and Epistemology of Godless Normative Realism (contemporary philosophy is not all that sharp when it comes to titles), by Erik J. Wielenberg. Now, I will not even try to compose a proper philosophical review of Robust Ethics, for the book has already been ably dissected by a proper philosopher, John Danaher.