Month after month, Facebook keeps increasing the realism of its virtual reality avatars. The firm considers realistic, faithful avatars essential for “Social VR” applications to become relevant, making them a top priority for the social media giant.
In early September 2019, researchers from the Facebook Reality Lab published the results of their research on a new method for creating ultra-realistic avatars reproducing the gestures and expressions of their users in real time. This method is based on VR headsets equipped with cameras, but also on artificial intelligence.
First, a “training” VR headset equipped with nine cameras is used. This device captures the user's face and eyes from every angle, and the resulting images are then matched against a digital scan of the user taken beforehand.
Once the correspondence between the two is established, a second “tracking” VR headset is used. This device carries only three cameras, aimed at the wearer's mouth and eyes, but the data collected with the training headset allows the system to interpret the images it captures far more accurately.
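The two-stage idea can be illustrated with a toy sketch. Everything below is a simplified assumption, not Facebook's actual method: we pretend each camera yields a small feature vector that depends linearly on the wearer's expression parameters, fit a least-squares mapping using all nine "training" cameras, then re-fit on the three-camera subset the lighter "tracking" headset would carry, reusing the labels obtained with the training rig.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FRAMES, N_TRAIN_CAMS, FEAT, EXPR = 200, 9, 8, 5

# Hypothetical ground-truth expression parameters per frame, assumed to
# come from matching the nine-camera capture against the prior face scan.
expr = rng.normal(size=(N_FRAMES, EXPR))

# Each camera produces a feature vector that depends on the expression
# (a toy linear model with a little sensor noise).
mix = rng.normal(size=(N_TRAIN_CAMS, EXPR, FEAT))
train_feats = np.einsum('ne,cef->ncf', expr, mix)
train_feats += 0.01 * rng.normal(size=train_feats.shape)

# Stage 1: with all nine cameras, learn a map from stacked camera
# features back to expression parameters (ordinary least squares).
X_full = train_feats.reshape(N_FRAMES, -1)
W_full, *_ = np.linalg.lstsq(X_full, expr, rcond=None)

# Stage 2: the "tracking" headset keeps only three cameras (say two eye
# cameras and one mouth camera). Re-fit on that subset, reusing the
# expression labels produced with the training rig.
track_ids = [0, 1, 2]
X_track = train_feats[:, track_ids, :].reshape(N_FRAMES, -1)
W_track, *_ = np.linalg.lstsq(X_track, expr, rcond=None)

# At runtime, a new three-camera frame is mapped to expression
# parameters, which would then drive the avatar.
new_expr = rng.normal(size=(1, EXPR))
new_feats = np.einsum('ne,cef->ncf', new_expr, mix[track_ids])
pred = new_feats.reshape(1, -1) @ W_track
```

The point of the sketch is only the division of labor: the expensive nine-camera session provides supervision once, so the consumer headset can get by with far fewer cameras afterwards. The real system reportedly relies on deep neural networks rather than a linear map.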
This process transcribes the user's face and expressions onto their VR avatar in real time with unprecedented precision, across a wide variety of expressions: the headset can, for example, detect whether the user sucks in their cheeks, bites their lips, or moves their tongue.
Unfortunately, we will have to wait some time before this method reaches the general public: the need for a full facial scan and a prior session with the “training” headset currently rules out any home use.
The only workaround would be to set up “scan centers” dedicated to this purpose, which seems unrealistic at a time when VR is still struggling to reach a mass audience. However, advances in machine learning and capture technology could make this method usable at home in the near future...