In my keynote speeches, I often say that the next two decades will bring more change to humanity than the previous two centuries. When I say this, I see people looking at me in disbelief. Some say this is going a bit too far, yet the more I think about it, the more probable it feels.

Topics such as digital transformation and artificial intelligence (AI) may be popular themes today, but they are only two of the many mega-shifts that are going to revolutionize business, and life, in the future.

In our personal lives, waves of technological and social change are reconstructing our reality, our cultures, our society and, very soon, our biology. We now dwell in a hyper-connected world where technology and humanity constantly overlap. Whether it is voice-controlled interfaces, machines that speak, cognitive computing or intelligent digital assistants, science fiction is becoming science fact.

Humanity has never been so empowered. At the same time, we have to deal with new kinds of instability: vanishing jobs, weakening national identities, social unrest and terrorism, to name a few.

I am concerned that in the age of AI, we are oscillating between outright techno-optimism, general hyperbole and fear-mongering.

Some would ask: why not simply combine man and machine to achieve a perfect symbiosis and make ourselves superhuman?

This is the pitch of a growing number of Silicon Valley pundits, business leaders, investors, singularitarians and technophile futurist colleagues. Many of them believe that man and machine must ultimately converge because humans are really just technology anyway.

I find the idea that 'organisms are algorithms', a paradigm frequently presented in Silicon Valley, highly reductionist; it often serves as an excuse to sell addictive tech products that gradually make us more machine-like. This concept, fuelled by a materialistic and technological view of humanness, seems to thrive best in societies focused solely on profits.

Right now, the greatest danger is not that machines will take over our lives but that we ourselves become mechanised, thinking and acting like machines, driven completely by convenience. We are deskilling ourselves, forgetting who we are, and consciously letting ourselves be led by our devices.

The better alternative is for us to become more human - this is what I think AI and related technologies may afford us but we must create the right ethical framework for this to happen. We must agree on what we want, as far as our own humanity is concerned.

We must prioritize remaining human.

Part of the problem is that we have always struggled to define human intelligence. Because we started with an overly simplistic definition of intelligence before applying the concept to our creations, the problem has only compounded.

Google Maps isn't intelligent the way we are, and neither are Siri, Amazon Alexa or Google Home.

AI is often defined as the ability of machines to accomplish tasks that humans used to do, but that ability is algorithmic, and we should not confuse algorithmic superiority with higher intelligence.

Consider the argument of Luciano Floridi, professor of philosophy and ethics of information and director of the Digital Ethics Lab at Oxford University: artificial intelligence can outperform human intelligence on specific tasks without understanding emotional states, intentions or interpretations, and without deep semantic skills, consciousness or self-awareness.

I think 95 percent of what is currently presented as AI is not intelligence in the sense of 'thinking machines' that could rival human intelligence in its complexity. Rather, it is intelligent assistance (IA), and I don't really see this changing. However, IA will certainly have a bigger impact on society going forward, because it will, beyond a doubt, cause a vast and hopefully temporary wave of technological unemployment and bring about a total reset of education and training systems.

While this is clearly a social and economic challenge, it is not a truly existential risk. What we need to do today is prepare for what I call the end of routine, which will affect every sector of society in the next decade. We must move beyond automatable work and help our children develop the attributes that make us unique, such as emotional intelligence, empathy, imagination and creativity.

We must invest not just in software and technology but also in 'moral-ware', i.e. in asking the 'why' and 'who' questions more often than the 'how' and 'if' questions.

We should not equip machines with the ability to simulate human intelligence, whether social, emotional or kinetic. Technology has no ethics because it does not exist as we do; it only simulates our existence. It has no beliefs, no values, no agency, no consciousness, no ethics - and it shouldn't, either. Yet people and societies without ethics are doomed, so we will need to make sure we maintain what makes us human, despite all this progress.