What are the components of an augmented reality system?
Augmented reality varies depending on implementation, but the most common components include the following, categorized by hardware and software.
The following hardware components form the backbone of augmented reality. Some of them may already be built into your device if you are engaging in AR with your smartphone (more in the following section):
- Processor – Augmented reality requires significant processing power to create the imagery needed and place it in the proper location for it to appear to exist in a real-world environment. Processors may be incorporated in a mobile handset or embedded into a wearable device (more on this below).
- Display – In AR, imagery is created and then populated on some form of display. This can take several forms, depending on the specific application. These include:
- Mobile handheld device – The smartphone or tablet screen is arguably the most common way in which AR imagery is viewed. A user points his or her phone’s camera at a point of interest, and the live video feed from the camera is overlaid with AR information.
- Wearable device – Smart glasses such as Google Glass, Vuzix Blade, and Solos Smart Glasses are designed like standard eyeglasses but also contain a small display visible only to the wearer. The person wearing the glasses sees the real world by looking straight through the lenses, while the embedded display provides an informational overlay. VR headsets are less common in AR environments because they do not allow the wearer to see the real world directly; instead, the real world has to be recreated as video and displayed on the built-in screen, which is otherwise opaque.
- Automotive HUDs – HUDs, or heads-up displays, are systems that use your car’s windshield as a screen. A device projects an image – speed, directions, etc. – from the dashboard upwards onto the windshield. The driver sees the reflection of this imagery as it bounces off the glass like a mirror.
- Others – Looking ahead, more futuristic devices like smart contact lenses and systems that can project an image directly onto the retina may become viable.
- Camera – As the primary sensor required for AR to function, the camera feeds the live video to the processor, which detects key facets of the environment on which the AR data is overlaid. The camera itself does not process any of the digital information; it merely provides the video feed.
- Other sensors – AR is typically used on the move, so additional sensor types are required for operation. These may include spatial sensors, such as accelerometers and digital compasses, which indicate the direction the camera is facing; GPS receivers, which track the user’s location in the world; microphones, which incorporate audio data into the experience; and LiDAR, which uses lasers to measure exact distances.
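To make the sensor roles concrete, here is a minimal sketch of how a GPS fix and a compass heading can be combined to decide where a point of interest should appear relative to the camera's view. The function names, coordinates, and the 60° field of view are illustrative assumptions, not part of any real AR toolkit:

```python
import math

def bearing_to_poi(user_lat, user_lon, poi_lat, poi_lon):
    """Initial great-circle bearing (degrees) from the user's GPS fix
    to a point of interest."""
    phi1, phi2 = math.radians(user_lat), math.radians(poi_lat)
    dlon = math.radians(poi_lon - user_lon)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360.0

def screen_offset(bearing, compass_heading, horizontal_fov=60.0):
    """Where the POI falls in the camera view: -1.0 is the left edge,
    +1.0 the right edge; values beyond that range are off-screen."""
    delta = (bearing - compass_heading + 180.0) % 360.0 - 180.0
    return delta / (horizontal_fov / 2.0)

# Example: a user in central Paris facing due north; the POI lies
# roughly to the west, so it falls off-screen to the left.
b = bearing_to_poi(48.8600, 2.3400, 48.8584, 2.2945)
print(b, screen_offset(b, 0.0))
```

A real system would smooth these readings over time (e.g. with a complementary or Kalman filter), since raw compass and GPS values jitter considerably.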
- Input devices – A user on the move is often not at liberty to type commands into a computer. As such, AR systems have been devised to work with numerous types of input technologies. Foremost is the mobile device touchscreen, providing a natural interaction if a phone or tablet is available. Other options include voice recognition technology, so users can control the system via speech, and gesture recognition systems, which typically translate the motion of the user’s hand into commands.
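Gesture recognition in its simplest form reduces to classifying a touch trace by its dominant direction. The sketch below is a toy illustration under that assumption; production systems use far richer models:

```python
def classify_swipe(points, min_distance=50.0):
    """Classify a touch trace [(x, y), ...] in screen pixels (y grows
    downward) as 'left', 'right', 'up', 'down', or 'tap'."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_distance:
        return "tap"  # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# A mostly-horizontal drag across the screen
print(classify_swipe([(10, 300), (200, 310)]))  # → right
```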
Several types of software algorithms are needed to enable augmented reality. Broadly, these include:
- Image registration – Software that takes a photographic representation of one’s surroundings and uses that information to determine real-world coordinates and the objects within it. Image registration maps the real world: it determines what is in the foreground vs. what is in the background, where one object ends and another begins, and which points of interest should carry additional information.
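Once registration has estimated the mapping between a known planar surface and the camera image (in practice via matched image features), placing an annotation reduces to applying a 3x3 homography. This sketch only shows that final step; the matrix values here are hypothetical:

```python
def apply_homography(H, x, y):
    """Map a planar world coordinate (x, y) to an image pixel via a
    3x3 homography H (row-major nested lists), such as one estimated
    during image registration."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # divide out the projective scale

# Hypothetical homography: scale by 100, shift to image centre,
# plus a mild perspective skew in the last row.
H = [[100.0, 0.0, 320.0],
     [0.0, 100.0, 240.0],
     [0.0, 0.001, 1.0]]
print(apply_homography(H, 1.0, 1.0))
```

Libraries such as OpenCV estimate this matrix from feature correspondences; the arithmetic of applying it is exactly what is shown above.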
- 3D rendering – With the real world mapped and categorized, the next step is overlaying the augmented reality information on top of it. The 3D renderer creates virtual objects and places them in the appropriate location within the live image. The Augmented Reality Markup Language (ARML), an XML-based standard from the Open Geospatial Consortium, is the current standard for describing the location and appearance of a virtual object.
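The core of placing a virtual object "in" the live image is projecting its 3D vertices into screen space. A minimal pinhole-camera sketch follows; the focal length and principal point are illustrative values, not from any particular device:

```python
def project_point(x, y, z, focal=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space point (z metres in front
    of the camera) to pixel coordinates. focal, cx, cy are assumed
    intrinsics for a 640x480 image."""
    if z <= 0:
        return None  # behind the camera, not visible
    return (cx + focal * x / z, cy + focal * y / z)

# The front face of a virtual 1 m cube, 2 m from the camera
face = [(-0.5, -0.5, 2.0), (0.5, -0.5, 2.0),
        (0.5, 0.5, 2.0), (-0.5, 0.5, 2.0)]
print([project_point(*p) for p in face])
```

Real renderers do the same arithmetic with full camera matrices on the GPU, but the geometric idea is unchanged: distant objects project smaller, and the intrinsics fix where they land on screen.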
- Content management – A back-end system that maintains the database of virtual objects and 3D models available to the application.
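As a toy illustration of such a back-end, here is an in-memory store that indexes virtual objects by position and answers "what is near the user?" queries. All class, method, and asset names are invented for this sketch:

```python
import math

class ContentStore:
    """Toy content-management back-end: virtual objects indexed by ID,
    each with a lat/lon position and a reference to a 3D model asset."""
    def __init__(self):
        self._objects = {}

    def add(self, obj_id, lat, lon, model_uri):
        self._objects[obj_id] = {"lat": lat, "lon": lon, "model": model_uri}

    def near(self, lat, lon, radius_m):
        """Return IDs within radius_m metres, using an equirectangular
        distance approximation (fine at city scale)."""
        hits = []
        for obj_id, o in self._objects.items():
            dx = math.radians(o["lon"] - lon) * math.cos(math.radians(lat))
            dy = math.radians(o["lat"] - lat)
            if 6371000.0 * math.hypot(dx, dy) <= radius_m:
                hits.append(obj_id)
        return hits

store = ContentStore()
store.add("fountain", 48.8584, 2.2945, "models/fountain.glb")
store.add("statue", 40.6892, -74.0445, "models/statue.glb")
print(store.near(48.8583, 2.2940, 500))  # → ['fountain']
```

A production system would back this with a database and a spatial index rather than a linear scan, but the query it answers is the same.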
- Interface – Whether it’s a video game or a technical management tool, the interface is the intermediary between the user and the video representation of the augmented reality environment.
- Development toolkits – A variety of open source and proprietary technologies are used to give programmers a framework for building AR applications on the platform of their choice.