Isaac Asimov’s famous Three Laws of Robotics—constraints on the behavior of androids and automatons meant to ensure the safety of humans—were also famously incomplete.

The laws, which first appeared in his 1942 short story “Runaround” and again in classic works like I, Robot, sound airtight at first:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But as Asimov's own stories went on to demonstrate, rules like these break down in practice. Instead of pursuing top-down philosophical definitions of how artificial agents should or shouldn't behave, Christoph Salge and his colleague Daniel Polani are investigating a bottom-up path, or "what a robot should do in the first place," as they write in their recent paper, "Empowerment as Replacement for the Three Laws of Robotics." Empowerment, a concept inspired in part by cybernetics and psychology, describes an agent's intrinsic motivation to both persist within and operate upon its environment.
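The summary doesn't spell out the math, but in the underlying research literature empowerment has a precise information-theoretic definition, originally due to Klyubin, Polani, and Nehaniv: it is the channel capacity between an agent's actions and the future state of its sensors. A sketch of that standard formulation (the n-step horizon and the notation are the literature's conventions, assumed here rather than quoted from this text):

```latex
% Empowerment of the current state s_t: the capacity of the channel from
% the agent's next n actions A_t^n to its sensor state S_{t+n}, maximized
% over all distributions p(a_t^n) the agent could use to pick those actions.
\mathfrak{E}(s_t) = \max_{p(a_t^n)} I\!\left(A_t^n \,;\, S_{t+n} \,\middle|\, s_t\right)
```

Intuitively, the more distinct futures an agent can reliably steer itself into, the higher its empowerment; an agent that is dead, trapped, or switched off can influence nothing, and its empowerment drops to zero.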

A Roomba programmed to seek its charging station when its batteries are getting low could be said to have an extremely rudimentary form of empowerment: To continue acting on the world, it must take action to preserve its own survival by maintaining a charge.
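Because empowerment is a channel capacity, it can be computed exactly for small discrete worlds with the standard Blahut-Arimoto algorithm. Below is a minimal Python sketch of that calculation for a made-up one-step Roomba model; the battery states, transition matrices, and the channel_capacity helper are all invented here for illustration and are not taken from Salge and Polani's paper:

```python
import numpy as np

def channel_capacity(W, iters=500):
    """Blahut-Arimoto estimate (in bits) of the capacity of a discrete
    channel W, where W[a, s] = p(next state is s | action a was taken)."""
    W = np.asarray(W, dtype=float)
    p = np.full(W.shape[0], 1.0 / W.shape[0])  # action distribution, start uniform
    for _ in range(iters):
        q = p @ W                              # resulting distribution over next states
        ratio = np.divide(W, q, out=np.zeros_like(W), where=W > 0)
        log_ratio = np.log2(ratio, out=np.zeros_like(W), where=ratio > 0)
        d = (W * log_ratio).sum(axis=1)        # KL divergence per action, in bits
        p = p * np.exp2(d)                     # multiplicative Blahut-Arimoto update
        p /= p.sum()
    return float((p * d).sum())                # capacity estimate in bits

# Toy one-step world: next-state columns are (dead, low, full) battery;
# rows are the two available actions (keep cleaning, go dock and charge).
low_battery = np.array([[1.0, 0.0, 0.0],   # cleaning on a low battery kills it
                        [0.0, 0.0, 1.0]])  # docking recharges it fully
dead_battery = np.array([[1.0, 0.0, 0.0],  # a dead Roomba stays dead,
                         [1.0, 0.0, 0.0]]) # whatever action it "chooses"

print(channel_capacity(low_battery))   # ~1.0 bit: two distinguishable futures
print(channel_capacity(dead_battery))  # 0.0 bits: no influence on the world left
```

The numbers match the intuition above: at low battery the Roomba's two actions lead to two perfectly distinguishable futures (one bit of empowerment), while a dead Roomba has zero, which is exactly why keeping itself charged preserves its capacity to act.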

Empowerment might sound like a recipe for producing the very outcome that safe-AI thinkers like Nick Bostrom fear: powerful autonomous systems concerned only with maximizing their own interests and running amok as a result.
