Rendering Superintelligent A.I. as Fallible: A Case for Symbiosis Between Imperfect Humans and the Singularity-Era Bot

It’s the year 2055, and what was once a toy rabbit hopping awkwardly on its own mechanical whim now jumps and runs faster than a human. With fluid motion it can leap over a large truck and carry on its smooth stride. It isn’t human enough to try out for the Olympics, but it has been recognized as the fastest techno-rabbit on the block and receives an award from the Guinness Book of A.I. Achievements for the furthest mecha-leap. Chanting and cheers erupt from the audience as the other animal mechanoids rally behind their ‘tricks aren’t for kids’ futuristic friend.

With advances in AGI (Artificial General Intelligence) accelerating toward a potential future singularity, how can we reconcile the power of superintelligence with imperfect humanity? It may be that we need to create a sense of forced fallibility within superintelligent systems by means of machine learning clauses, vital awareness, and ethical statutes.

For humans, the obvious vital source of energy is food; for most robots, it is electricity. We display varying levels of energy and mood depending on the types and amounts of food we eat. Similarly, in future superintelligent machines, electrical power adjustments and machine learning clauses could determine a robot’s mood by increasing or decreasing its throughput and processing speed in relation to its current power level. The hibernation of a system depleted of electricity, under such energy constraints, would be analogous to a human nap.
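As a rough illustration of that idea, here is a minimal sketch of such a clause; every name in it (VitalState, battery_pct, the mood thresholds) is a hypothetical placeholder rather than any real robotics API:

```python
from dataclasses import dataclass

@dataclass
class VitalState:
    battery_pct: float  # 0.0 (empty) to 100.0 (full)

def mood_and_throughput(vitals: VitalState) -> tuple[str, float]:
    """Map the robot's power level to a coarse 'mood' and a throughput
    multiplier, analogous to how food intake shifts human energy and mood."""
    if vitals.battery_pct < 10:
        return "hibernating", 0.0   # a depleted system naps, like a human
    if vitals.battery_pct < 40:
        return "lethargic", 0.3     # conserve energy, slow the processing
    if vitals.battery_pct < 75:
        return "steady", 0.7
    return "energetic", 1.0         # full charge, full processing speed

# Example: a half-charged robot runs at reduced speed.
mood, throttle = mood_and_throughput(VitalState(battery_pct=55.0))
print(mood, throttle)  # steady 0.7
```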

Quantum battery research hints at the possibility of near-perpetual energy, yet perhaps that is the ultimate downside, and the real scare, of embedding superintelligence in a bot. By feeding neural networks with vitals data, most importantly its variable electricity levels, a superintelligent robot could remain aware of its own fluctuating power supply and scale its activities, and perhaps even its own throughput, to a fixed energy capacity.
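A self-monitoring loop of that kind might look like the sketch below, assuming a fixed energy budget and simulated sensor and workload stubs; none of this is tied to a real robot platform:

```python
import random
import time

ENERGY_BUDGET_WH = 500.0            # assumed fixed energy capacity (illustrative)

def read_battery_pct() -> float:
    """Stand-in for a real battery sensor: remaining charge in percent."""
    return random.uniform(0.0, 100.0)   # simulated reading for the sketch

def run_cycle(throughput: float) -> float:
    """Stand-in for one unit of work: returns watt-hours consumed."""
    return 2.0 * throughput             # heavier work burns more energy

def energy_aware_loop() -> None:
    consumed_wh = 0.0
    while consumed_wh < ENERGY_BUDGET_WH:
        pct = read_battery_pct()
        if pct < 5.0:
            break                       # hibernate rather than run the supply dry
        # Scale activity to the remaining charge: full speed when full,
        # proportionally slower as the supply drains.
        throughput = max(0.1, pct / 100.0)
        consumed_wh += run_cycle(throughput)
        time.sleep(0.1)                 # re-check vitals between cycles

if __name__ == "__main__":
    energy_aware_loop()
```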

A.I. ethical commandments from an oversight committee could help humanity if the singularity ever occurs. By building brute-force fallibility measures into an A.I. system, so that it learns and trains in accordance with our own sense of imperfection, we might keep its capability roughly at a standstill rather than letting it grow exponentially beyond the calculations and throughput of the human brain.
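Purely as a sketch, and not any established alignment technique, one literal reading of such a clause is a capability cap inside the training loop; HUMAN_BASELINE, train_step, and evaluate here are all hypothetical placeholders:

```python
HUMAN_BASELINE = 0.85   # assumed evaluation score standing in for human-level ability

def train_with_fallibility_cap(model, train_step, evaluate, max_steps: int = 10_000):
    """Illustrative 'brute force' fallibility clause: halt training as soon as
    the model's evaluation score reaches the assumed human baseline, so its
    capability stays roughly at a standstill instead of growing unchecked."""
    for _ in range(max_steps):
        train_step(model)                 # one ordinary optimisation step
        if evaluate(model) >= HUMAN_BASELINE:
            break                         # the oversight clause kicks in here
    return model
```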

 
December 4th, 2020
Artificial Intelligence, General, Machine Learning
Author: AlienWeb
