There is a belief that, with all the data now available about people and companies alike, we are losing some of the humanity that made companies successful in the past. The desire to eliminate human error is understandable, but we cannot eliminate the human element altogether.
To regulate and navigate this new world we are building for ourselves, we need engineers to create the tools, products, and platforms, but we also need philosophers to maintain the human aspect of the work involved.
Deep learning, in which machines are built from many layers of thousands of artificial neurons that can both learn and remember, enables machines to reason and to make decisions. Because of this, we can no longer assume that we (as humans) are intelligent while machines are not.
Deep learning has challenged the belief that only living things can investigate, think, and understand. It has also blurred the line between the artificial and the natural.
Keeping the Human Element
As explored above, the further we delve into artificial intelligence and machine learning, the blurrier the line between human and machine becomes. We have to ask ourselves: at what point do machines become too much like us?
The drive toward hard data to power business and learning is so strong that the human element may be lost along the way. Bringing philosophers into the development of AI can help keep that important line in place while still allowing the advances that benefit the world.
That tension will continue as progress is made. Keeping the human element while better understanding data is as much the goal as anything else. But it remains a fine line, because we must accept that intelligence is no longer an exclusively human property but something that machines (and, to some extent, animals) can have.
Working with this distinction is the key to how humanity and machines develop together.