Ground Truth Renderer

Moritz edited this page Feb 28, 2020 · 10 revisions

Features

  • Loads and renders different subsets of point clouds
  • View the point cloud in different modes
    • Splats: high quality splat rendering of the whole point cloud
    • Points: high quality point rendering of the whole point cloud
    • Neural Network: renders neural network output and loss channel comparison
  • Compare to a sparse subset of the point cloud with a different sampling rate
  • HDF5 dataset generation containing color, normal and depth images of the point cloud
  • Blending the colors of overlapping splats
  • Phong Lighting

Neural Network View Mode

  • The neural network is expected to reconstruct the splat rendering from a sparse point rendering input
  • The neural network must be loaded from a selected .pt file with Load Model
  • A description of the input/output channels of the network must be loaded from a .txt file with Load Description
  • Each entry consists of:
    • String: Name of the channel (render mode)
    • Int: Dimension of channel
    • String: Identifying if the channel is input (inp) or output (tar)
    • String: Transformation keywords, e.g. normalization
    • Int: Offset of this channel from the start channel
  • Example for a simple description file:
[['PointsSparseColor', 3, 'inp', 'normalize', 0], ['SplatsColor', 3, 'tar', 'normalize', 0]]
  • When a Loss Function is selected:
    • The loss between two channels (Loss Self and Loss Target) is computed
    • The screen area split between these channel renderings can be controlled with Loss Area
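The example description file above uses Python list-literal syntax, so it can be parsed with `ast.literal_eval`. The following is a minimal sketch of reading such a file, assuming this syntax holds in general (the renderer itself may parse it differently; `load_description` and the dictionary keys are hypothetical names):

```python
import ast

def load_description(text):
    """Parse a channel description string into a list of channel dicts.

    Each entry is [name, dimension, 'inp'|'tar', transform, offset],
    matching the format documented above.
    """
    channels = []
    for name, dim, role, transform, offset in ast.literal_eval(text):
        channels.append({'name': name, 'dim': dim, 'role': role,
                         'transform': transform, 'offset': offset})
    return channels

# Contents of the example description file shown above:
desc = ("[['PointsSparseColor', 3, 'inp', 'normalize', 0],"
        " ['SplatsColor', 3, 'tar', 'normalize', 0]]")
channels = load_description(desc)

# Split into network inputs and targets via the 'inp'/'tar' marker:
inputs  = [c for c in channels if c['role'] == 'inp']
targets = [c for c in channels if c['role'] == 'tar']
```

With the example file this yields one 3-dimensional input channel (PointsSparseColor) and one 3-dimensional target channel (SplatsColor).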

HDF5 Dataset Generation

  • Rendering resolution must currently be a power of 2
  • There are two modes for the dataset generation with parameters in the HDF5 Dataset tab
    • Waypoint dataset:
      • Interpolates the camera between the user set waypoints
      • Use the Advanced tab to add/remove a waypoint for the current camera perspective
      • Preview Waypoints shows a preview of the interpolation
      • Step Size controls the interpolation value between two waypoints
    • Sphere dataset:
      • Sweeps the camera along a sphere around the point cloud
      • Step Size influences the number of viewing angles (0.2 results in ~1000 angles)
      • Theta and Phi Min/Max values define the subset of the sphere to be swept
      • Move the camera to the desired distance from the center of the point cloud before generation
  • Start the generation process with Generate Waypoint HDF5 Dataset or Generate Sphere HDF5 Dataset
    • Make sure that the density and rendering parameters for the Sparse Splats view mode are set as described in Configure the rendering parameters
    • The generated file will be stored in the HDF5 directory
    • After generation, all viewpoints can be selected with the Camera Recording slider
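The waypoint mode's camera interpolation can be sketched as a simple linear blend between consecutive waypoints, stepped by the Step Size value. This is an illustrative assumption (`interpolate_waypoints` is a hypothetical name, and the actual renderer may also interpolate camera rotation, not just position):

```python
import numpy as np

def interpolate_waypoints(waypoints, step_size):
    """Linearly interpolate camera positions between consecutive waypoints.

    `step_size` is the interpolation increment between two waypoints,
    mirroring the Step Size parameter in the HDF5 Dataset tab.
    """
    frames = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        for t in np.arange(0.0, 1.0, step_size):
            frames.append((1.0 - t) * a + t * b)
    frames.append(np.asarray(waypoints[-1], float))  # include the final waypoint
    return frames

# Three waypoints, step size 0.25: 2 segments x 4 steps + final waypoint
path = interpolate_waypoints([[0, 0, 5], [0, 0, 0], [3, 0, 0]], step_size=0.25)
```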
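The sphere mode's angle count can be reproduced by stepping two spherical angles with the Step Size value. The angle ranges below are an assumption chosen so that a step of 0.2 yields roughly 1000 angles, consistent with the note above; the actual ranges are set via the Theta and Phi Min/Max parameters:

```python
import numpy as np

def sphere_angles(step, theta_range=(0.0, 2 * np.pi), phi_range=(0.0, 2 * np.pi)):
    """Enumerate (theta, phi) viewing angles on a sphere with a given step.

    Full 0..2*pi ranges for both angles are assumed here for illustration.
    """
    thetas = np.arange(theta_range[0], theta_range[1], step)
    phis = np.arange(phi_range[0], phi_range[1], step)
    return [(t, p) for t in thetas for p in phis]

angles = sphere_angles(0.2)  # on the order of 1000 viewing angles
```

The camera distance from the point-cloud center acts as the sphere radius, which is why it must be set before starting the generation.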