Reference#

Autogenerated reference from docstrings. Not everything is well documented, but most things should look okay. I’m still figuring out sphinx.ext.autodoc/napoleon, so if something doesn’t look right and you have a fix in mind, please do tell!

noisebase#

Datasets and benchmarks for neural Monte Carlo denoising.

This file serves as the main entry point for Noisebase. It registers all Noisebase config files in your Hydra path and, in addition, provides a Noisebase function should you wish to keep your use of Hydra to a minimum.

noisebase.Noisebase(config, options={})#

Loads a Noisebase dataset using Hydra internally. (Use this if you don’t want to use Hydra otherwise.)

Parameters:
  • config (str) – name of the dataset definition file (e.g. ‘sampleset_v1’)

  • options (dict) – loader options (e.g. {‘batch_size’: 8}) (see wiki for available options)

Returns:

loader (object) – data loader for the selected framework (e.g. torch.utils.data.DataLoader for PyTorch or pl.LightningDataModule for Lightning)
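Minimal usage might look like the following sketch. The config name and the batch_size option come from the parameter descriptions above; the training loop around it is hypothetical, and the example assumes the PyTorch framework is selected so the returned loader is iterable.

```python
from noisebase import Noisebase

# Load a dataset definition with minimal Hydra involvement;
# 'sampleset_v1' and batch_size are the examples given above.
loader = Noisebase('sampleset_v1', {'batch_size': 8})

# For the PyTorch framework the returned object is a
# torch.utils.data.DataLoader, so it can be iterated directly.
for batch in loader:
    ...  # training / denoising step
```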

class noisebase.NoisebaseSearchPathPlugin#

Adds Noisebase config files to the Hydra path.

manipulate_search_path(search_path)#

Computes search path.

We want Noisebase to work across various installations. We compute the path of the current file (src/noisebase/__init__.py) and add the conf folder, resolved relative to it, to the Hydra search path.
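The relative-path computation described above can be sketched as follows; the module path here is only a stand-in for the real `__file__`:

```python
import os

# Stand-in for __file__ inside src/noisebase/__init__.py
module_file = "/site-packages/noisebase/__init__.py"

# The conf folder sits next to the module, so it is resolved
# relative to the module's own location, not the working directory.
conf_dir = os.path.join(os.path.dirname(module_file), "conf")
print(conf_dir)  # /site-packages/noisebase/conf
```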

noisebase.projective#

Compute various data transformations related to camera intrinsics, normals, and positions

class noisebase.projective.FlipRotate(orientation, height, width, window)#

Class representing a flip/rotation data augmentation

After initialization, the same transformation can be applied to many cameras and arrays.

Parameters:
  • orientation (int) – integer in [0, 7] selecting one of the 8 possible axis-aligned flips and rotations; 0 is the identity.

  • height (int) – height of the camera resolution (in pixels)

  • width (int) – width of the camera resolution (in pixels)

  • window (int) – height and width of the cropped image

apply_array(x)#

Applies orientation change to per-pixel data

Parameters:

x (ndarray, CHW...) – array to transform; may have additional trailing dimensions
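The 8 axis-aligned orientations can be sketched with NumPy flips and rotations. This is an illustrative stand-in, not the class's actual implementation, and the mapping from orientation index to transform is an assumption:

```python
import numpy as np

def flip_rotate(x, orientation):
    # Hypothetical mapping of orientation in [0, 7] to the 8
    # axis-aligned flips/rotations; 0 is the identity.
    # Axes 1 and 2 are the spatial H and W dimensions of CHW... data.
    if orientation >= 4:
        x = np.flip(x, axis=2)                     # horizontal flip
    return np.rot90(x, k=orientation % 4, axes=(1, 2))

x = np.arange(2 * 4 * 4).reshape(2, 4, 4)
assert np.array_equal(flip_rotate(x, 0), x)        # orientation 0 is identity
```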

apply_camera(target, up, pos, p, offset)#

Applies orientation change to camera intrinsics

Parameters:
  • target (ndarray, size (3)) – a world-space point the camera is pointing at (center of the frame)

  • up (ndarray, size (3)) – vector in world-space that points upward in screen-space

  • pos (ndarray, size (3)) – the camera’s position in world-space

  • p (ndarray, size (4,4)) – projective matrix of the camera, e.g. [[0.984375, 0., 0., 0.], [0., 1.75, 0., 0.], [0., 0., 1.0001, -0.10001], [0., 0., 1., 0.]]

  • offset (ndarray, size (2)) – offset of random crop (window) from top left corner of camera frame (in pixels)

Returns:
  • W (ndarray, size (3)) – vector in world-space that points forward in screen-space

  • V (ndarray, size (3)) – vector in world-space that points up in screen-space

  • U (ndarray, size (3)) – vector in world-space that points right in screen-space

  • pos (ndarray, size (3)) – unchanged camera position

  • offset (ndarray, size (2)) – transformed offset, MAY BE NEGATIVE!

  • pv (ndarray, size (4,4)) – computed view-projection matrix, ROW VECTOR

noisebase.projective.log_depth(w_position, pos)#

Computes per-sample compressed depth (disparity-ish)

Parameters:
  • w_position (ndarray, 3HWS) – per-sample world-space positions

  • pos (ndarray, size (3)) – the camera’s position in world-space

Returns:

depth (ndarray, 1HWS) – per-sample compressed depth
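The exact compression curve is not documented here; a plausible sketch, assuming a log-of-distance mapping (which is an assumption, not Noisebase's actual formula), is:

```python
import numpy as np

def log_depth_sketch(w_position, pos):
    # Hypothetical: compress the camera-to-sample distance with a
    # logarithm so nearby depth variation is preserved (disparity-ish).
    # w_position is 3HWS; pos is the camera position, size (3).
    distance = np.linalg.norm(w_position - pos.reshape(3, 1, 1, 1), axis=0)
    return np.log1p(distance)[None]   # 1HWS
```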

noisebase.projective.motion_vectors(w_position, w_motion, pv, prev_pv, height, width)#

Computes per-sample screen-space motion vectors (in pixels)

Parameters:
  • w_position (ndarray, 3HWS) – per-sample world-space positions

  • w_motion (ndarray, 3HWS) – per-sample world-space motion vectors

  • pv (ndarray, size (4,4)) – camera view-projection matrix

  • prev_pv (ndarray, size (4,4)) – camera view-projection matrix from previous frame

  • height (int) – height of the camera resolution (in pixels)

  • width (int) – width of the camera resolution (in pixels)

Returns:

motion (ndarray, 2HWS) – Per-sample screen-space motion vectors (in pixels). IJ INDEXED, for gather ops and consistency; see backproject_pixel_centers in noisebase.torch.projective for use with grid_sample. Degenerate positions give inf.
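The computation can be sketched as projecting each sample twice, once with each view-projection matrix, and differencing the resulting pixel coordinates. The row-vector matrix convention follows the apply_camera docs above, but the y-down pixel mapping and the sign of the motion subtraction are assumptions:

```python
import numpy as np

def project(w_position, pv, height, width):
    # Hypothetical projection of 3HWS world positions to ij pixel
    # coordinates using a row-vector view-projection matrix.
    ones = np.ones((1,) + w_position.shape[1:])
    clip = np.einsum('chws,ck->khws', np.concatenate([w_position, ones]), pv)
    ndc = clip[:2] / clip[3]                  # perspective divide
    i = (0.5 - 0.5 * ndc[1]) * height         # row (assumed y-down mapping)
    j = (0.5 + 0.5 * ndc[0]) * width          # column
    return np.stack([i, j])

def motion_vectors_sketch(w_position, w_motion, pv, prev_pv, height, width):
    # The previous-frame position is assumed to be the current position
    # minus the world-space motion; the sign convention is a guess.
    current = project(w_position, pv, height, width)
    previous = project(w_position - w_motion, prev_pv, height, width)
    return previous - current                 # 2HWS, in pixels
```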

noisebase.projective.normalize(v)#

Individually normalize an array of vectors

Parameters:

v (ndarray, CHW) – will be normalized along first dimension
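A minimal NumPy sketch of per-vector normalization along the channel axis:

```python
import numpy as np

def normalize_sketch(v):
    # Normalize each vector along the first (channel) dimension.
    return v / np.linalg.norm(v, axis=0, keepdims=True)

v = np.array([[3.0, 0.0], [4.0, 2.0]])      # two 2D column vectors
print(normalize_sketch(v))                   # columns [0.6, 0.8] and [0, 1]
```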

noisebase.projective.screen_space_normal(w_normal, W, V, U)#

Transforms per-sample world-space normals to screen-space / relative to camera direction

Parameters:
  • w_normal (ndarray, 3HWS) – per-sample world-space normals

  • W (ndarray, size (3)) – vector in world-space that points forward in screen-space

  • V (ndarray, size (3)) – vector in world-space that points up in screen-space

  • U (ndarray, size (3)) – vector in world-space that points right in screen-space

Returns:

normal (ndarray, 3HWS) – per-sample screen-space normals
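Conceptually this is a change of basis: each world-space normal is projected onto the camera's right/up/forward vectors. A NumPy sketch, where the output channel ordering (U, V, W) is an assumption:

```python
import numpy as np

def screen_space_normal_sketch(w_normal, W, V, U):
    # Project 3HWS world-space normals onto the camera basis
    # (U right, V up, W forward); channel order (U, V, W) is assumed.
    basis = np.stack([U, V, W])               # (3, 3)
    return np.einsum('kc,chws->khws', basis, w_normal)

# With the identity basis the normals are unchanged:
n = np.zeros((3, 1, 1, 1)); n[2] = 1.0
out = screen_space_normal_sketch(n, np.array([0., 0., 1.]),
                                 np.array([0., 1., 0.]),
                                 np.array([1., 0., 0.]))
```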

noisebase.projective.screen_space_position(w_position, pv, height, width)#

Projects per-sample world-space positions to screen-space (pixel coordinates)

Parameters:
  • w_position (ndarray, 3HWS) – per-sample world-space positions

  • pv (ndarray, size (4,4)) – camera view-projection matrix

  • height (int) – height of the camera resolution (in pixels)

  • width (int) – width of the camera resolution (in pixels)

Returns:

projected (ndarray, 2HWS) – Per-sample screen-space positions (pixel coordinates). IJ INDEXED, for gather ops and consistency; see backproject_pixel_centers in noisebase.torch.projective for use with grid_sample. Degenerate positions give inf.

noisebase.compression#

Compressed data formats used by Noisebase

noisebase.compression.compress_RGBE(color)#

Computes RGBE compressed representation of radiance data

Parameters:

color (ndarray, 3HWS) – per-sample RGB radiance

Returns:
  • color (ndarray, uint8, 4HWS) – radiance data in RGBE representation

  • [min_exposure, max_exposure] – exposure range for decompression

noisebase.compression.decompress_RGBE(color, exposures)#

Decompresses per-sample radiance from RGBE compressed data

Parameters:
  • color (ndarray, uint8, 4HWS) – radiance data in RGBE representation

  • exposures ([min_exposure, max_exposure]) – exposure range for decompression

Returns:

color (ndarray, 3HWS) – per-sample RGB radiance
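A round-trip of the two functions above can be sketched with a shared-exponent encoding: the log2 of each sample's largest channel is quantised into the 8-bit E channel, and the RGB mantissas are stored relative to that scale. The exposure range, quantisation, and channel layout here are assumptions, not Noisebase's actual format:

```python
import numpy as np

def compress_rgbe_sketch(color, min_exposure=-16.0, max_exposure=16.0):
    # Quantise the per-sample exponent into 8 bits (rounding up so
    # mantissas stay in [0, 1]), then store mantissas as uint8.
    max_c = np.maximum(color.max(axis=0), 1e-32)
    exp = np.clip(np.log2(max_c), min_exposure, max_exposure)
    e = np.ceil((exp - min_exposure) / (max_exposure - min_exposure) * 255)
    scale = 2.0 ** (e / 255 * (max_exposure - min_exposure) + min_exposure)
    rgb = np.clip(np.round(color / scale * 255), 0, 255)
    return (np.concatenate([rgb, e[None]]).astype(np.uint8),
            (min_exposure, max_exposure))

def decompress_rgbe_sketch(data, exposures):
    # Reconstruct the scale from the E channel, then rescale mantissas.
    min_e, max_e = exposures
    scale = 2.0 ** (data[3].astype(np.float64) / 255 * (max_e - min_e) + min_e)
    return data[:3].astype(np.float64) / 255 * scale
```

The encoding is lossy: the mantissa quantisation bounds the relative error by roughly the scale divided by 255.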

noisebase.torch.projective#

also available under noisebase.torch

noisebase.torch.projective.backproject_pixel_centers(motion, crop_offset, prev_crop_offset, as_grid=False)#

Backprojects pixel centers into the previous frame using screen-space motion vectors

Parameters:
  • motion (tensor, N2HW) – Per-sample screen-space motion vectors (in pixels) see noisebase.projective.motion_vectors

  • crop_offset (tensor, size (2)) – offset of random crop (window) from top left corner of camera frame (in pixels)

  • prev_crop_offset (tensor, size (2)) – offset of random crop (window) in previous frame

  • as_grid (bool) – if True, return in the format expected by torch.nn.functional.grid_sample with align_corners=False

Returns:
  • pixel_position (tensor, N2HW) – ij-indexed pixel coordinates, or

  • pixel_position (tensor, NHW2) – xy positions in (-1, 1), as expected by grid_sample, if as_grid is True
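The idea can be sketched in NumPy for a single unbatched image: start from each pixel centre of the current crop, add its motion vector, and correct for the two crop offsets. The sign conventions are assumptions:

```python
import numpy as np

def backproject_sketch(motion, crop_offset, prev_crop_offset):
    # motion is (2, H, W) in pixels, ij indexed (the real function is
    # batched and operates on tensors).
    _, H, W = motion.shape
    i, j = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    grid = np.stack([i, j]).astype(motion.dtype)   # pixel centres
    # Correct for the crops: both frames index pixels relative to
    # their own crop window (offset signs are a guess).
    offset = np.asarray(crop_offset) - np.asarray(prev_crop_offset)
    return grid + motion + offset.reshape(2, 1, 1)
```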

noisebase.torch.video_sampler#

also available under noisebase.torch

class noisebase.torch.video_sampler.VideoSampler(batch_size, frames_per_sequence, num_sequences, shuffle=False, drop_last=False, get_epoch=None, shuffle_fn=None)#

Samples frames sequentially from video datasets

Datasets must be indexed as concatenated sequences of frames
  • Each sequence must have frames_per_sequence frames

  • I.e. the dataset contains the jth frame of the ith sequence at index i * frames_per_sequence + j

Each sampled batch contains frames at the same index from batch_size number of sequences

This sampler supports batching, shuffling, distributed training, and mid-epoch checkpoints
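The index layout described above can be sketched as a plain generator; this ignores shuffling, distribution, and checkpointing, and the batching order is an assumption:

```python
def video_batches(batch_size, frames_per_sequence, num_sequences):
    # Frame j of sequence i lives at dataset index
    # i * frames_per_sequence + j.  Each batch holds the same frame
    # index from batch_size consecutive sequences.
    for first_seq in range(0, num_sequences - batch_size + 1, batch_size):
        for j in range(frames_per_sequence):
            yield [(first_seq + i) * frames_per_sequence + j
                   for i in range(batch_size)]

batches = list(video_batches(batch_size=2, frames_per_sequence=3, num_sequences=4))
# Steps through frames 0..2 of sequences 0 and 1 first:
# [0, 3], [1, 4], [2, 5], then sequences 2 and 3 follow.
```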

noisebase.torch.misc#

also available under noisebase.torch

class noisebase.torch.misc.Shuffler(seed)#

Convenience wrapper for seeded NumPy random number generation.