The instantreality framework is a high-performance Mixed-Reality (MR) system that combines various components to provide a single, consistent interface for AR/VR developers. These components have been developed at the Fraunhofer IGD and ZGDV in close cooperation with industrial partners:
OpenSG: Open-source scene-graph rendering system
InstantIO: Network transparent device and data-stream management system
VisionLib: Flexible vision-/image-pipeline processor
Avalon: Dynamic scene management and manipulation system
The framework offers a comprehensive set of features to support classic Virtual Reality (VR) and advanced Augmented Reality (AR) equally well. The goal was to provide a very simple application interface while still incorporating the latest research results in the fields of highly realistic rendering, 3D user interaction and fully immersive display technology.
Features and Benefits
The framework provides a data-flow graph that extends the X3D scene and execution model. These graphs allow the developer to create applications by modelling rather than just programming. An application consists of a number of graphs, which are defined by components and the relations between those components. Each component includes state parameters and a processing unit, which controls the behaviour of the component. Component complexity can scale from a single Boolean operation to a complete application including geometry and simulation subparts. The processing unit of each component can be declared as a hidden subgraph, a script or a set of natively compiled classes. The resulting architecture is extremely flexible and allows the developer to create complex applications through a simple drag-and-drop interface. The final application graph can be deployed in any supported runtime environment, from a single PDA to a multi-screen/multi-node cluster.
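The component/relation model described above can be sketched roughly as follows. This is an illustrative Python sketch only, not the framework's actual API: in instantreality the graphs are declared in X3D/Avalon, and all names here (Component, route, etc.) are hypothetical.

```python
# Illustrative sketch only: instantreality's real component model is declared
# in X3D/Avalon, not Python. All names here are hypothetical.

class Component:
    """A node with state parameters and a processing unit."""
    def __init__(self, name, process):
        self.name = name
        self.process = process      # the component's processing unit
        self.inputs = {}            # state parameters (field values)
        self.routes = []            # (out_field, target, in_field) relations

    def set(self, field, value):
        self.inputs[field] = value

    def evaluate(self):
        outputs = self.process(self.inputs)
        # Relations forward output fields to the inputs of other components.
        for out_field, target, in_field in self.routes:
            if out_field in outputs:
                target.set(in_field, outputs[out_field])
        return outputs

def route(src, out_field, dst, in_field):
    src.routes.append((out_field, dst, in_field))

# Two tiny components: a Boolean NOT (the "single Boolean operation" end of
# the complexity scale) wired to a sink component.
negate = Component("Negate", lambda ins: {"result": not ins.get("value", False)})
sink = Component("Sink", lambda ins: {})
route(negate, "result", sink, "flag")

negate.set("value", True)
negate.evaluate()
print(sink.inputs["flag"])  # False
```

In the real framework the same wiring is expressed declaratively (X3D ROUTEs between node fields), which is what makes drag-and-drop composition possible.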
The behaviour-modelling approach allows the developer to prototype and develop new applications very efficiently without programming. The runtime environment analyses and executes the component graphs using a wide range of automatic online and offline optimisation methods, which allow the system to process large and complex component graphs (consisting of thousands of components and relations) and to visualise large datasets of polygons, points, NURBS and volumes efficiently. It utilises the latest GPU hardware features to perform advanced real-time shading techniques, including real-time shadows.
The framework includes various optimisation methods to exploit all available hardware resources to reach global application specific runtime goals:
- Auto-Parallel/Multithread: The system analyses the structure of the application graph in real time and uses a patented method to detect independent sub-graphs and execute them in parallel.
- Cluster: Different sort-first/sort-last algorithms balance the rendering load for every cluster node in real time. The method scales almost linearly and is not limited by the number of CPUs: overall cluster performance can be increased simply by adding render nodes.
- Multi-Resolution Datasets: The framework can create and manage multi-resolution datasets for points, meshes and volumes. This allows the system to control the overall rendering performance to reach global application goals such as a minimum frame rate.
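The auto-parallel idea in the first bullet can be illustrated with a naive sketch. The framework's patented scheduling method is not public, so this only shows the general principle: partition the application graph into independent sub-graphs (connected components) and evaluate each on its own worker thread.

```python
# Hedged sketch: the framework's patented scheduling method is not public.
# This only illustrates the general idea of detecting independent sub-graphs
# (connected components) and evaluating them in parallel.
from concurrent.futures import ThreadPoolExecutor

def independent_subgraphs(nodes, edges):
    """Partition an undirected component graph into connected components."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, groups = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, group = [start], []
        while stack:                      # depth-first flood fill
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.append(n)
            stack.extend(adjacency[n] - seen)
        groups.append(sorted(group))
    return groups

def run_parallel(groups, evaluate):
    # Each independent sub-graph can be evaluated on its own worker thread.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(evaluate, groups))

nodes = ["sensor", "filter", "render", "audio"]
edges = [("sensor", "filter"), ("filter", "render")]  # "audio" is independent
print(independent_subgraphs(nodes, edges))
# [['filter', 'render', 'sensor'], ['audio']]
```

A real scheduler must additionally respect data-flow direction and re-balance as the graph changes at runtime, which is where the framework's online analysis comes in.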
The middleware components of the framework provide system-specific synchronous and asynchronous high-performance network interfaces to incorporate application data at runtime, e.g. from simulator packages. In addition, SOAP and HTTP interfaces provide very open and flexible ways to control and manipulate the running application, e.g. for building advanced user interfaces.
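The HTTP control idea can be sketched end to end. This is a hypothetical example: instantreality's real endpoints, URLs and parameter names differ. Here a tiny local server exposes one scene field as JSON, and a client reads and writes it at runtime, the way an external user interface would.

```python
# Hypothetical sketch of an HTTP control interface; the framework's real
# endpoints and parameter names differ. A tiny server exposes one scene
# field that a client can read and write while the application runs.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

scene_fields = {"Light.intensity": 0.5}   # stand-in for live scene state

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(scene_fields).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        scene_fields.update(json.loads(self.rfile.read(length)))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ControlHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# An external user interface could change the running scene like this:
payload = json.dumps({"Light.intensity": 1.0}).encode()
urllib.request.urlopen(urllib.request.Request(url, data=payload, method="POST"))
with urllib.request.urlopen(url) as resp:
    print(json.loads(resp.read()))   # {'Light.intensity': 1.0}
server.shutdown()
```

The SOAP interface works on the same principle but wraps field access in a WSDL-described service rather than plain JSON over HTTP.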
A data-stream-based IO-declaration pyramid provides various levels of abstraction to control all device- and data-IO aspects of the application. The system provides sensor abstraction to access the low-level data streams directly, as well as high-level, device- and device-class-independent components to define very abstract interaction styles on parts of the application graph.
Computer Vision Based Tracker
The IO subsystem includes advanced image-processing functions, which are utilised to provide marker-based and markerless tracking. These trackers represent the latest research results in the field of computer-vision-based tracking for AR applications.
The framework provides a complete set of tools and plug-ins to ease application development and deployment:
- Integration: Plug-ins for the most common DCC tools (e.g. Maya, 3ds Max) enable the application developer to integrate 3D data very efficiently. Data importers for the framework can directly read and process the most common CAD data formats (JTOpen, CATIA V5, CATIA V4, STEP, STL).
- Composing: Special runtime environments allow the developer to integrate and compose data from different sources. The system includes various plug-ins that enable the developer to create any type of application logic and behaviour by defining components, component relations and component processing units. A nifty event and script debugger eases the development process.
- Deployment: Various server and middleware systems can be utilised to deploy the final applications on a wide range of hardware platforms. The servers and services communicate using standard ZeroConf mechanisms to ease the installation and service process.
The system design incorporates various industry standards to ease the development and application service process:
OpenGL 2.0 (Khronos Group)
GLSL (Khronos Group)
Cg (NVIDIA Corporation)
X3D (ISO/IEC 19775:2004)
ECMAScript (ISO/IEC 16262:2002)
Java (Sun Microsystems)
SOAP (W3C SOAP V1.2)
Zeroconf (IETF Zeroconf WG)
All system components are available on a wide range of hardware and software platforms (Windows XP/Vista/7, Linux, Mac OS X).