The Technology Behind Virtual Reality And Augmented Reality Applications in Clean Tech
Last time we looked at applications in clean technology where virtual reality and augmented reality systems are making a significant difference.
But, how do these systems work?
Most AR and VR systems can be broadly divided into three parts: 1) the hardware required to capture the data, process it and display it; 2) the software needed to develop simulations of the systems being studied and create virtual objects; and 3) the server where the data are stored and processed, and where machine learning algorithms can be deployed to improve outcomes.
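As a rough sketch, the three-part split above can be pictured as a pipeline: sensor input flows into the modeling layer, whose output is persisted on the server side. All names below are hypothetical and purely illustrative; no real AR/VR SDK is being used.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """Part 1 (hardware layer): raw inputs captured from the device."""
    gps: tuple           # (latitude, longitude)
    camera_image: bytes  # raw camera frame
    acceleration: tuple  # (ax, ay, az) in m/s^2

def run_model(frame: SensorFrame) -> dict:
    """Part 2 (modeling layer): turn raw inputs into a virtual object
    that can be displayed, anchored to the user's position."""
    return {"overlay": "pipeline-leak-marker", "anchor": frame.gps}

def store_result(result: dict, database: list) -> None:
    """Part 3 (server layer): persist results so machine learning
    can later be applied to improve outcomes."""
    database.append(result)

# One pass through the pipeline.
db = []
frame = SensorFrame(gps=(37.42, -122.17), camera_image=b"", acceleration=(0.0, 0.0, 9.8))
store_result(run_model(frame), db)
```

In a real system each layer is far more elaborate, but this separation of concerns is the same one the three sections below walk through.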
Key hardware systems: The hardware can be categorized into input systems and display systems.
Most AR and VR systems use GPS (to determine location), cameras (to capture live images of where the user is located and/or looking), and gyroscopes and accelerometers (to determine the speed and direction of the user’s movement), along with other sensors specific to the problem being solved (for example, a chemical sensor to detect leaks or spills near pipelines). The first three input devices are common to most VR/AR applications, and a typical smartphone’s sensors will usually suffice. Problem-specific sensors will either need to be integrated with the existing smartphone inputs, or apps will need to be developed that use the smartphone’s infrastructure to generate the required inputs.
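To make the accelerometer's role concrete, here is a toy sketch of how an app might estimate a user's speed from accelerometer samples between GPS fixes, by simple numerical integration. Real apps would rely on the platform's sensor-fusion APIs; this function and its inputs are illustrative assumptions only.

```python
def estimate_speed(samples, dt, v0=0.0):
    """Estimate speed by integrating acceleration over time.

    samples: accelerations (m/s^2) along the direction of travel
    dt:      time step between samples (s)
    v0:      initial speed (m/s)
    Returns the final speed estimate (m/s).
    """
    v = v0
    for a in samples:
        v += a * dt  # v(t+dt) = v(t) + a*dt
    return v

# A user accelerating at 1 m/s^2 for ten 0.1 s samples
# gains close to 1 m/s.
print(estimate_speed([1.0] * 10, 0.1))
```

In practice raw accelerometer data is noisy and drifts quickly, which is why production systems fuse it with GPS and gyroscope readings rather than integrating it alone.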
The output from the VR/AR application needs to be displayed. Commonly used display systems include headsets or head-mounted devices (like the Oculus Rift or Google Glass) and smartphone screens. These devices are now widely available commercially, and developer kits for creating applications have also been released. One of the most widely used outside gaming is ARToolkit, a free, open-source toolkit that can be customized for a specific application.
Modeling software: This is where the software that generates simulations of the system being studied, and creates the virtual objects (for AR) or the virtual environment (for VR), comes in.
For example, think of a VR system like the one developed at Stanford’s Virtual Human Interaction Lab, where the user watches the ocean change in response to acidification. First, existing software needs to be used to model the ocean system itself, which is a fairly complex endeavor. Some of the subsystems that need to be modeled include the movement of water in the ocean, interactions between species, interactions between a changing climate and ocean chemistry, and interactions between ocean chemistry and the species living in the ocean. Additional software might need to be written to improve the simulation or add features. Then, after the physical, chemical and biological models have been created, the simulation needs to be displayed so the user can experience it. This requires creating graphics of ocean waves, coral reefs or dolphins, as well as a virtual representation of the user, if needed.
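One of the sub-models mentioned above, the link between atmospheric CO2 and ocean chemistry, can be sketched with a crude illustrative fit: surface-ocean pH has dropped roughly 0.1 units as CO2 rose from about 280 ppm to about 415 ppm. The function below is a toy approximation built on that single observation, not real carbonate chemistry, and its parameters are assumptions chosen only to reproduce that trend.

```python
import math

def surface_ph(co2_ppm, ph_ref=8.2, co2_ref=280.0, sensitivity=0.25):
    """Toy model: pH falls roughly with the log of the CO2 ratio.

    ph_ref:      approximate pre-industrial surface-ocean pH
    co2_ref:     approximate pre-industrial atmospheric CO2 (ppm)
    sensitivity: fitted constant (illustrative, not measured)
    """
    return ph_ref - sensitivity * math.log(co2_ppm / co2_ref)

# Rising CO2 lowers the simulated pH, which a VR ocean scene could
# then translate into visible changes such as coral bleaching.
for co2 in (280.0, 415.0, 560.0):
    print(co2, round(surface_ph(co2), 3))
```

A production simulation would replace this with a full carbonate-system solver, but even a toy sub-model like this shows how the physical/chemical layer feeds the visual layer.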
This is the section where expertise in the clean tech field being studied becomes critical. All aspects of the system being displayed or studied need to be understood so that realistic models can be developed, which are then turned into displays for the user.
From a design standpoint, this is also the part where the display needs to be visually appealing as well as accurate. If a commercial product is being developed, a good UI designer and software engineers focused on the display are important here.
Server: This is where the data engineering aspects of the problem come into play, along with some of the machine learning and data science aspects.
The data generated by the hardware inputs and the modeling software all need to be stored and retrieved efficiently while the user is interacting with the system. So, in addition to the processors in the display devices (like a smartphone or gaming console), cloud storage is going to be needed. This leads to the typical questions of any data problem: how should the data be stored, what is the best system for retrieving them, and how can they be processed to solve the problem? For example, if images are being received from the camera, they need to be stored, and image processing algorithms written to understand and identify them. If models of power transmission lines are being created, they need to be stored with location identifiers so that they can be retrieved and overlaid on an existing image of a road.
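The transmission-line example above can be sketched as a location-keyed lookup: stored models are indexed by coordinates, and the user's GPS position selects the one to overlay on the camera image. The model names and coordinates below are hypothetical, and the naive linear scan stands in for the spatial index a production system would use.

```python
import math

# Hypothetical store of 3D models keyed by (latitude, longitude).
models = {
    (37.7749, -122.4194): "transmission_tower_A.obj",
    (37.7793, -122.4192): "transmission_tower_B.obj",
}

def nearest_model(lat, lon):
    """Return the stored model closest to the given position.

    Uses a naive scan over all keys; a real system would use a
    spatial index (e.g. a geohash or R-tree) for fast retrieval.
    """
    def dist(key):
        # Planar distance is fine at this scale; purely illustrative.
        return math.hypot(key[0] - lat, key[1] - lon)
    return models[min(models, key=dist)]

print(nearest_model(37.7750, -122.4195))  # -> transmission_tower_A.obj
```

The same store-by-location, retrieve-by-proximity pattern applies whether the payload is a 3D model, a processed camera image, or a sensor reading.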
A data scientist or data engineer would most likely be spending a significant amount of their time solving problems in this section.
Any commercial application of VR/AR to clean tech problems is going to need a wide range of engineers and scientists working across all three sections: UI designers, software engineers, hardware engineers, data engineers, data scientists, environmental scientists, environmental engineers and more.
Of course, the most interesting aspects arise when the clean tech model needs to be integrated with imagery or sensor data! That’s when a combination of software/data science and clean technology skills becomes powerful.