How technology is merging with the human body

The gulf between “human” and “machine” is closing. Machine learning has enabled virtual reality to feel more “real” than ever before, and AI’s replication of processes that were once confined to the human brain is ever-improving. Both are bringing technology into ever-closer proximity with the human body. Things are getting weird.

And they are going to get a lot weirder.

Let’s use this question as a starting point: Is standing on the edge of the roof of a Minecraft cathedral in VR mode scarier than looking over the edge of a mountain in Norway? I have done both, and the sense of vertigo was greater in Minecraft.

Our brain has evolved to let us understand a version of the world we live in, and to make decisions that optimize the survival of our genes. Due to this wiring, a fear of heights is a sensible apprehension to develop: Don’t go near the edge of tall things because you might fall off and die.

In fact, what we see is not reality itself but the brain's interpretation of the input data provided by our eyes: the parts of reality we have evolved to consider useful. Once we understand how the brain turns "the process of seeing" into "what we see," the illusions of virtual reality can feel more real than reality itself, hence Minecraft versus Norwegian mountains.

It will be a long time before humans stop perceiving things like the VR cathedral roof as existential threats. Indeed, over the next few years, we will continue to develop technologies that con the brain into particular interpretations.

At the same time, our understanding of the brain is becoming ever-greater. Modern research into neuroplasticity has shown that we can retrain parts of the brain to take over from parts that stop functioning. As that understanding grows, it is not a big leap to believe that we could programmatically adjust the processing of artificial stimuli to pull off far greater sleights of hand than VR does today.

The tricks that can be played on the aural sense are being exposed by a new wave of smart earbuds and sound software. The recently announced Oculus earbuds signal a commitment to full immersion, and the app formerly known as H__r experiments with acoustic filtration, turning background noise into harmonies.

The eNose Company, the self-described "specialists in artificial olfaction" (the science of smelling without a nose), has developed a technology that replicates the function of a human nose. The applications range from lung-health diagnostics to replacing sniffer dogs.

With these developments in mind, it is not hard to imagine a full VR rig (headset, earbuds, gloves, maybe even sensors for the nose and mouth) that completely blurs the line between virtual reality and reality itself.

In fact, the virtual experience may offer avenues of perception that reality cannot, especially if we find ways to stimulate the brain chemicals that strengthen the synapses around memories. Perhaps Transcendence, or the VR pods of Minority Report, are not so far away.

As a result of these developments, technology is becoming closely merged with our bodies. However, the interplay between technology and the body does not end with VR. It gets even more interesting when you add artificial intelligence to the mix, as AI attempts to replicate the processes of the brain within machines.

Technologists have been trying for decades to use our understanding of the brain to build algorithms that solve highly complex, non-linear problems. Recent years have seen more notable breakthroughs than ever, thanks to progress in the core algorithms, smarter implementations of those algorithms and sheer improvements in compute power.
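To make that idea concrete, here is a minimal, purely illustrative sketch of a brain-inspired algorithm: a tiny two-layer neural network, trained by gradient descent, learning XOR. XOR is the textbook non-linear problem, since no linear model can separate its outputs; the layer sizes, learning rate and iteration count below are arbitrary choices for the demo, not anything from the article.

```python
import numpy as np

# XOR: a classic non-linear problem that no single linear model can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass: two layers of weighted sums squashed by non-linearities.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]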

We are still a long way from general AI — a model that recreates the entire brain — and it is not clear if and when we could get to that point. One limiting factor is that we need to fully understand the brain before we can build a machine that replicates it.

By studying individual processes of the brain, such as image recognition or learning a language, we can decipher how those processes work and how we learn. Does an algorithm need to be shown many labeled examples in order to learn, or can it teach itself from raw data? In other words, is the algorithm "supervised" or "unsupervised"?
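As a concrete illustration of that distinction, here is a minimal sketch using scikit-learn and the classic Iris flower dataset; both are illustrative choices of mine, not tools referenced above. The supervised model is handed measurements together with the correct species labels, while the unsupervised model sees the measurements alone and must discover structure by itself.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)  # X: flower measurements, y: species labels

# Supervised: learn from many labeled examples (X paired with answers y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels at all; the algorithm groups the raw measurements
# into three clusters on its own, and we inspect what it found.
km = KMeans(n_clusters=3, n_init=10).fit(X)
print("unsupervised cluster sizes:",
      [int((km.labels_ == k).sum()) for k in range(3)])
```

The supervised model is only ever as good as the labels it is fed; the unsupervised one needs no labels at all, which is precisely why it is the harder prize.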

Developing truly unsupervised AI will continue to challenge practitioners for years to come, including the technology giants that have embraced (read: made lots of acquisitions in) the industry.
