Model 3D Objects from 2D Images without the Use of Calibrated Cameras or Computationally Expensive Calculations

Technology #31243



Image Gallery
  • Diagram that schematically illustrates a homography induced by a planar surface between two views of an object
  • (2A) Diagram that schematically illustrates warping silhouettes of an object to a reference view using a planar homography induced by a first plane
  • Diagram that schematically illustrates warping silhouettes of an object to a reference view using a planar homography induced by a second plane parallel to the first plane of Fig. 2A
Mubarak Shah, Ph.D.
Saad Khan
Pingkun Yan
Managed By
Andrea Adkins
Assistant Director, 407.823.0138
Patent Protection

Systems and methods for modeling three-dimensional objects from two-dimensional images

US Patent 8,363,926 B2
A Homographic Framework for the Fusion of Multi-view Silhouettes
IEEE 11th International Conference on Computer Vision (ICCV), pp. 1-8, 2007

A purely image-based system and method for generating 3D objects from 2D images of an object taken from multiple viewpoints, avoiding the complex calculations and camera calibration required by other 3D reconstruction methods.

The ability to construct realistic 3D objects is an important aspect of rendering realistic scenes in computer vision research and applications, so developing efficient algorithms for constructing 3D objects is an active research area. Most 3D modeling algorithms use visual hull methods, which estimate the silhouette or boundary of the object in each view and then intersect or fuse these views in 3D space. While these algorithms are viable, they tend to be cumbersome: they involve complex computations on 3D constructs and require challenging camera calibrations. They are therefore computationally expensive, demanding more time and computing power during the rendering process. In addition, many of these estimation algorithms are sensitive to very small errors introduced during calibration. To reduce the possibility of error, other methods, such as photometric algorithms, bypass the estimation phase. However, since these methods still operate on 3D constructs, they too are computationally costly.
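To make the cost of the conventional approach concrete, the following is a minimal sketch of voxel-carving visual hull reconstruction, the kind of 3D-construct method the paragraph above describes. All names, signatures, and inputs here are illustrative assumptions (note that it requires a full 3x4 projection matrix per camera, i.e., calibration, and touches every voxel in a 3D grid for every view):

```python
import numpy as np

def voxel_visual_hull(silhouettes, proj_mats, grid_pts):
    """Illustrative voxel-carving visual hull (NOT the patented method).

    silhouettes: list of HxW binary masks, one per view
    proj_mats:   list of 3x4 camera projection matrices (calibration required)
    grid_pts:    (N, 3) array of candidate voxel centers
    Returns a length-N boolean occupancy array.
    """
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # (N, 4) homogeneous
    occupied = np.ones(len(grid_pts), dtype=bool)
    for sil, P in zip(silhouettes, proj_mats):
        h, w = sil.shape
        uvw = homog @ P.T                      # project every voxel into this view
        uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        occupied &= hit                        # carve away voxels outside any silhouette
    return occupied
```

The per-voxel, per-view projection is what makes this approach expensive, and the accuracy of `proj_mats` is exactly the calibration sensitivity the text mentions.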

Technical Details

UCF scientists have developed a more computationally efficient algorithm that does not operate on 3D constructs. Instead, it reconstructs 3D objects from multiple 2D views while still following the visual hull approach, fusing silhouettes in 2D through planar homographies. By utilizing this technology, a 3D rendering software or hardware provider could greatly reduce the computational requirements and error sensitivity of its products, while eliminating the need for specially calibrated cameras.
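The idea behind the homographic framework (per the ICCV paper cited above) is that a plane in the scene induces a homography between views: each view's silhouette can be warped to a reference view and fused there, yielding the object's cross-section in that plane, and repeating this over a family of parallel planes stacks 2D slices into the full visual hull. The sketch below is a minimal, assumed-input illustration (binary masks and 3x3 homographies, which can be estimated from a few image correspondences rather than full calibration); function names are hypothetical, and a production version would use GPU-accelerated warping:

```python
import numpy as np

def warp_silhouette(sil, H, out_shape):
    """Warp a binary silhouette into the reference view via inverse mapping.
    H is the 3x3 homography taking reference-view pixels to source-view
    pixels. A real implementation would use a GPU or cv2.warpPerspective."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    src = H @ pts
    u = src[0] / src[2]                         # perspective divide
    v = src[1] / src[2]
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    sh, sw = sil.shape
    ok = (ui >= 0) & (ui < sw) & (vi >= 0) & (vi < sh)
    out = np.zeros(h * w)
    out[ok] = sil[vi[ok], ui[ok]]               # nearest-neighbor lookup
    return out.reshape(h, w)

def fuse_slice(silhouettes, homographies, ref_shape):
    """Fuse all views at one plane: the object's cross-section is where
    every warped silhouette overlaps (here, a product of warped masks)."""
    fused = np.ones(ref_shape)
    for sil, H in zip(silhouettes, homographies):
        fused *= warp_silhouette(sil, H, ref_shape)
    return fused
```

Because every operation is a 2D image warp and a pixelwise product, the work maps naturally onto graphics hardware, which is the basis for the efficiency claims below.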


Benefits

  • Uses only minimal geometric information
  • Does not require fully calibrated cameras
  • Uses only 2D information, so it is fast and can be implemented on graphics processing units (GPUs)


Applications

  • Integration into 3D rendering hardware and software to reduce computational requirements
  • 3D reconstruction of objects in a scene, or creation of models of real objects or actors for placement in a virtual world
  • Automated detection and localization of objects in multi-camera settings such as surveillance systems or sporting events
  • Markerless motion capture for special effects in movies
  • Initialization step for more elaborate 3D reconstruction methods, such as photometric algorithms