spacetransformer


Classes

Name Description
Space Represents the geometric space of reference for 3D medical images.
Transform 4x4 homogeneous coordinate transformation matrix wrapper.

Space

Space(
    shape,
    origin=(0, 0, 0),
    spacing=(1, 1, 1),
    x_orientation=(1, 0, 0),
    y_orientation=(0, 1, 0),
    z_orientation=(0, 0, 1),
)

Represents the geometric space of reference for 3D medical images.

This class stores information about the image’s position, orientation, spacing, and dimensions in physical space. It provides methods for coordinate transformations and geometric operations commonly needed in medical image processing.

Design Philosophy: Uses explicit orientation vectors instead of implicit axis assumptions to ensure compatibility with arbitrary medical image orientations. The design prioritizes correctness and traceability over computational efficiency.

The class maintains immutability for safety - all transformation methods return
new Space instances rather than modifying existing ones. This prevents
accidental corruption of geometric metadata in medical applications.
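
For instance, a transformation method such as apply_flip hands back a fresh Space and leaves the original untouched. A minimal sketch; the printed origin assumes the default value shown in the constructor signature:

>>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
>>> flipped = space.apply_flip(0)
>>> print(flipped is space)   # a new instance is returned
False
>>> print(space.origin)       # the original space keeps its metadata
(0, 0, 0)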

Attributes:
    shape: Image dimensions (height, width, depth) in voxels
    origin: Physical coordinates (x, y, z) of the first voxel in mm
    spacing: Physical size (x, y, z) of each voxel in mm
    x_orientation: Direction cosines of axis 0 (x-axis)
    y_orientation: Direction cosines of axis 1 (y-axis)
    z_orientation: Direction cosines of axis 2 (z-axis)

Example: Creating a space for a typical CT scan:

>>> space = Space(
...     shape=(512, 512, 100),
...     spacing=(0.5, 0.5, 2.0),
...     origin=(0, 0, 0)
... )
>>> print(space.physical_span)
[255.5 255.5 198. ]

Transform between index and world coordinates:

>>> index_points = [[0, 0, 0], [10, 10, 10]]
>>> world_points = space.to_world_transform.apply_point(index_points)
>>> back_to_index = space.from_world_transform.apply_point(world_points)

Attributes

Name Description
end Get the world coordinates of the image’s corner voxel.
from_world_transform Get the world → index coordinate transformation (lazy-loaded).
orientation_matrix Return the 3x3 orientation matrix with direction cosines as columns.
physical_span Get the total physical span of the image in world coordinates (mm).
scaled_orientation_matrix Return the 3x3 orientation matrix scaled by voxel spacing.
shape_zyx Get the shape in ZYX order for Python indexing.
to_world_transform Get the index → world coordinate transformation (lazy-loaded).
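
A short, hedged sketch of two of the derived attributes above (return types and exact print formatting are assumptions):

>>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
>>> space.shape_zyx        # shape reversed into ZYX order for NumPy-style indexing
(50, 100, 100)
>>> space.physical_span    # (shape - 1) * spacing, as in the class example above
array([99., 99., 98.])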

Methods

Name Description
apply_bbox Crop the space to a bounding box.
apply_flip Flip the space along a specified axis.
apply_float_bbox Crop with floating-point bounding box and resample to specified shape.
apply_permute Rearrange axes according to the given order.
apply_rotate Rotate the space around a specified axis.
apply_shape Create a new space with modified shape, adjusting spacing and possibly origin.
apply_spacing Create a new space with modified spacing only.
apply_swap Swap two axes in the space.
apply_zoom Scale the shape by the given factor.
contain_pointset_ind Check if index coordinates are within the space bounds.
contain_pointset_world Check if world coordinates are within the space bounds.
copy Return a new Space instance with identical values.
from_dict Create a Space object from a dictionary.
from_json Create a Space object from a JSON string.
from_nifti Create a Space object from a NIfTI image.
from_sitk Create a Space object from a SimpleITK Image.
reverse_axis_order Convert space information to ZYX order for Python indexing.
to_dicom_orientation Convert orientation vectors to DICOM Image Orientation (Patient) format.
to_json Serialize the Space object to a JSON string.
to_nifti_affine Convert space information to NIfTI affine transformation matrix.
to_sitk_direction Convert orientation vectors to SimpleITK direction matrix format.
apply_bbox
Space.apply_bbox(bbox, include_end=False)

Crop the space to a bounding box.

This method creates a new space that represents a cropped region of the original space, updating the origin and shape accordingly.

Args:
    bbox: Bounding box array of shape (3, 2), where bbox[:, 0] holds start indices and bbox[:, 1] holds end indices (exclusive)
    include_end: If True, include the end indices in the crop

Returns: Space: New Space instance representing the cropped region

Raises: ValidationError: If bbox shape is not (3, 2) or bounds are invalid

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> bbox = np.array([[10, 90], [20, 80], [5, 45]])
    >>> cropped = space.apply_bbox(bbox)
    >>> print(cropped.shape)
    (80, 60, 40)
    >>> print(cropped.origin)
    (10.0, 20.0, 10.0)
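
Continuing the example, a hedged sketch of the include_end flag, assuming it simply keeps the end indices inside the crop (one extra voxel per axis):

>>> cropped_inclusive = space.apply_bbox(bbox, include_end=True)
>>> cropped_inclusive.shape   # assumed result: (81, 61, 41)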

apply_flip
Space.apply_flip(axis)

Flip the space along a specified axis.

This method flips the image space along one of the three axes, updating both the origin and orientation vectors accordingly.

Args: axis: Axis to flip along (0=x, 1=y, 2=z)

Returns: Space: New Space instance with flipped axis

Raises: AssertionError: If axis is not in {0, 1, 2}

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> flipped = space.apply_flip(0)  # Flip along x-axis
    >>> print(flipped.origin)
    (99.0, 0, 0)
    >>> print(flipped.x_orientation)
    (-1.0, 0.0, 0.0)

apply_float_bbox
Space.apply_float_bbox(bbox, shape)

Crop with floating-point bounding box and resample to specified shape.

This method performs a floating-point crop followed by resampling to the target shape. The bbox coordinates can be non-integer values, allowing for sub-voxel precision cropping.

Design Philosophy: Combines cropping and resampling in a single operation to avoid accumulation of interpolation errors. The floating-point bbox allows for precise sub-voxel alignment in medical image registration.

Args:
    bbox: Bounding box array of shape (3, 2), where bbox[:, 0] holds start coordinates (can be float) and bbox[:, 1] holds end coordinates
    shape: Target voxel dimensions (int, int, int) after resampling

Returns: Space: New Space instance with specified shape and physical region defined by the bounding box

Raises: ValidationError: If bbox or shape parameters are invalid

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> bbox = np.array([[10.5, 90.5], [20.2, 80.8], [5.1, 45.9]])
    >>> resampled = space.apply_float_bbox(bbox, (80, 60, 40))
    >>> print(resampled.shape)
    (80, 60, 40)
    >>> print(resampled.spacing)
    (1.0, 1.0, 1.0)

apply_permute
Space.apply_permute(axis_order)

Rearrange axes according to the given order.

This method reorders the axes of the space according to the specified permutation, updating shape, spacing, and orientation vectors.

Args: axis_order: Permutation of [0, 1, 2] specifying the new axis order

Returns: Space: New Space instance with reordered axes

Raises: AssertionError: If axis_order is not a valid permutation of [0, 1, 2]

Example:
    >>> space = Space(shape=(100, 200, 50), spacing=(1.0, 2.0, 3.0))
    >>> permuted = space.apply_permute([2, 1, 0])  # Reorder to ZYX
    >>> print(permuted.shape)
    (50, 200, 100)
    >>> print(permuted.spacing)
    (3.0, 2.0, 1.0)

apply_rotate
Space.apply_rotate(axis, angle, unit='degree', center='center')

Rotate the space around a specified axis.

This method rotates the image space around one of the coordinate axes, updating the orientation vectors and optionally the origin.

Args:
    axis: Axis of rotation (0=x, 1=y, 2=z)
    angle: Rotation angle
    unit: Angle unit ("radian" or "degree")
    center: Rotation center ("center" for image center, "origin" for world origin)

Returns: Space: New Space instance with rotated orientation

Raises: ValidationError: If rotation parameters are invalid

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> rotated = space.apply_rotate(2, 90, unit="degree", center="center")
    >>> print(rotated.x_orientation)  # Rotated 90 degrees around z
    (0.0, 1.0, 0.0)
    >>> print(rotated.y_orientation)
    (-1.0, 0.0, 0.0)

apply_shape
Space.apply_shape(shape, align_corners=True)

Create a new space with modified shape, adjusting spacing and possibly origin.

This method creates a new space with the specified shape, and recalculates spacing to maintain the same physical span. Origin may also be adjusted depending on align_corners setting.

Args:
    shape: New image dimensions (height, width, depth) in voxels
    align_corners: If True, maintain alignment at corner pixels, similar to PyTorch's grid_sample align_corners parameter

Returns: Space: New Space instance with updated shape and spacing

Raises: ValidationError: If shape dimensions are invalid

Notes:
    When align_corners=True:
        - For dimensions where either the old or new shape is 1, alignment is handled as if align_corners=False
        - For other dimensions, the corners of the image are aligned

    When align_corners=False:
        - The centers of the corner pixels are aligned
        - This changes both spacing and origin

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> resized = space.apply_shape((200, 200, 100), align_corners=True)
    >>> print(resized.shape)
    (200, 200, 100)
    >>> print(resized.spacing)  # Adjusted to maintain physical span
    (0.5, 0.5, 1.0)

apply_spacing
Space.apply_spacing(spacing, recompute_spacing=True)

Create a new space with modified spacing only.

This method creates a new space with the specified voxel spacing while preserving all other attributes (origin, shape, orientation).

Args:
    spacing: New voxel spacing (x, y, z) in mm
    recompute_spacing: Whether to recompute the spacing or not

Returns: Space: New Space instance with the specified spacing

Raises: ValidationError: If spacing values are invalid

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> resampled = space.apply_spacing((0.5, 0.5, 1.0))
    >>> print(resampled.spacing)
    (0.5, 0.5, 1.0)
    >>> print(resampled.shape)  # Unchanged
    (100, 100, 50)

apply_swap
Space.apply_swap(axis1, axis2)

Swap two axes in the space.

This method exchanges two axes by reordering the shape, spacing, and orientation vectors. Equivalent to apply_permute with a swap permutation.

Args:
    axis1: First axis to swap (0=x, 1=y, 2=z)
    axis2: Second axis to swap (0=x, 1=y, 2=z)

Returns: Space: New Space instance with swapped axes

Raises: AssertionError: If axes are not in {0, 1, 2} or are equal

Example:
    >>> space = Space(shape=(100, 200, 50), spacing=(1.0, 2.0, 3.0))
    >>> swapped = space.apply_swap(0, 2)  # Swap x and z axes
    >>> print(swapped.shape)
    (50, 200, 100)
    >>> print(swapped.spacing)
    (3.0, 2.0, 1.0)

apply_zoom
Space.apply_zoom(factor, mode='floor', align_corners=True)

Scale the shape by the given factor.

This method scales the image dimensions by the specified factor(s) while keeping spacing and orientation unchanged. The mode parameter controls how non-integer results are handled.

Args:
    factor: Scaling factor(s). Can be a single float or a tuple of 3 floats
    mode: Rounding mode for non-integer results ("floor", "round", "ceil")
    align_corners: If True, maintain alignment at corner pixels, similar to PyTorch's grid_sample align_corners parameter

Returns: Space: New Space instance with scaled shape

Raises: ValidationError: If factor or mode parameters are invalid

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> zoomed = space.apply_zoom(0.5, mode="round")
    >>> print(zoomed.shape)
    (50, 50, 25)
    >>> print(zoomed.spacing)  # Unchanged
    (1.0, 1.0, 2.0)

contain_pointset_ind
Space.contain_pointset_ind(pointset_ind)

Check if index coordinates are within the space bounds.

This method tests whether the given index coordinates fall within the valid range [0, shape-1] for each dimension.

Args: pointset_ind: Array of index coordinates with shape (N, 3)

Returns: np.ndarray: Boolean array of shape (N,) indicating which points are inside

Example:
    >>> space = Space(shape=(100, 100, 50))
    >>> points = np.array([[10, 20, 30], [150, 50, 25], [50, 50, 25]])
    >>> inside = space.contain_pointset_ind(points)
    >>> print(inside)
    [ True False  True]

contain_pointset_world
Space.contain_pointset_world(pointset_world)

Check if world coordinates are within the space bounds.

This method converts world coordinates to index coordinates and then checks if they fall within the valid index range.

Args: pointset_world: Array of world coordinates with shape (N, 3)

Returns: np.ndarray: Boolean array of shape (N,) indicating which points are inside

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> world_points = np.array([[10.0, 20.0, 30.0], [150.0, 50.0, 25.0]])
    >>> inside = space.contain_pointset_world(world_points)
    >>> print(inside)
    [ True False]

copy
Space.copy()

Return a new Space instance with identical values.

Creates a deep copy of the Space object with all the same attribute values.

Returns: Space: New Space instance with identical values

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> space_copy = space.copy()
    >>> print(space_copy == space)  # Same values
    True
    >>> print(space_copy is space)  # Different objects
    False

from_dict
Space.from_dict(data)

Create a Space object from a dictionary.

Lists in the dictionary will be automatically converted to tuples to match the expected Space attribute types.

Args:
    data: Dictionary containing Space attributes. Lists will be converted to tuples.

Returns: Space: A new Space instance

Example:
    >>> data = {
    ...     'shape': [100, 100, 50],
    ...     'spacing': [1.0, 1.0, 2.0],
    ...     'origin': [0, 0, 0]
    ... }
    >>> space = Space.from_dict(data)
    >>> print(space.shape)
    (100, 100, 50)

from_json
Space.from_json(json_str)

Create a Space object from a JSON string.

Args: json_str: JSON string containing Space data

Returns: Space: A new Space instance

Raises: json.JSONDecodeError: If the JSON string is invalid

Example:
    >>> json_str = '{"shape": [100, 100, 50], "spacing": [1.0, 1.0, 2.0]}'
    >>> space = Space.from_json(json_str)
    >>> print(space.shape)
    (100, 100, 50)

from_nifti
Space.from_nifti(niftiimage)

Create a Space object from a NIfTI image.

Args: niftiimage: NIfTI image object

Returns: Space: A new Space instance with geometry matching the NIfTI image

Example:
    >>> import nibabel as nib
    >>> image = nib.load('image.nii.gz')
    >>> space = Space.from_nifti(image)
    >>> print(space.shape)
    (100, 100, 50)

from_sitk
Space.from_sitk(simpleitkimage)

Create a Space object from a SimpleITK Image.

Args: simpleitkimage: SimpleITK Image object

Returns: Space: A new Space instance with geometry matching the SimpleITK image

Example:
    >>> import SimpleITK as sitk
    >>> image = sitk.Image(100, 100, 50, sitk.sitkFloat32)
    >>> space = Space.from_sitk(image)
    >>> print(space.shape)
    (100, 100, 50)

reverse_axis_order
Space.reverse_axis_order()

Convert space information to ZYX order for Python indexing.

This method reverses the axis order from XYZ (medical standard) to ZYX (Python/NumPy indexing standard). Useful when interfacing with libraries that expect ZYX ordering.

Returns: Space: New Space instance with axes in ZYX order

Example:
    >>> space = Space(shape=(100, 200, 50), spacing=(1.0, 2.0, 3.0))
    >>> zyx_space = space.reverse_axis_order()
    >>> print(zyx_space.shape)
    (50, 200, 100)
    >>> print(zyx_space.spacing)
    (3.0, 2.0, 1.0)

to_dicom_orientation
Space.to_dicom_orientation()

Convert orientation vectors to DICOM Image Orientation (Patient) format.

DICOM stores orientation as a 6-element array containing the direction cosines of the first row and first column of the image matrix.

Returns: tuple: Row and column direction cosines concatenated (Xx,Xy,Xz,Yx,Yy,Yz)

Example:
    >>> space = Space(shape=(100, 100, 50))
    >>> orientation = space.to_dicom_orientation()
    >>> print(len(orientation))
    6
    >>> print(orientation[:3])  # x-orientation
    (1.0, 0.0, 0.0)
    >>> print(orientation[3:])  # y-orientation
    (0.0, 1.0, 0.0)

to_json
Space.to_json()

Serialize the Space object to a JSON string.

All attributes are already in JSON-serializable types (tuple/list/float/int).

Returns: str: JSON string representation of the Space

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> json_str = space.to_json()
    >>> print('"shape": [100, 100, 50]' in json_str)
    True
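
to_json pairs naturally with from_json; a hedged round-trip sketch continuing the example (assumes Space defines value equality, as the copy example suggests):

>>> restored = Space.from_json(space.to_json())
>>> print(restored == space)
True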

to_nifti_affine
Space.to_nifti_affine()

Convert space information to NIfTI affine transformation matrix.

The affine matrix combines rotation, scaling, and translation into a single 4x4 homogeneous transformation matrix following NIfTI conventions.

Since this Space uses LPS coordinates but NIfTI expects RAS coordinates, the matrix is converted from LPS to RAS before output.

Returns: np.ndarray: 4x4 affine transformation matrix in RAS coordinates

Example:
    >>> space = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> affine = space.to_nifti_affine()
    >>> print(affine.shape)
    (4, 4)
    >>> print(affine[0, 0])  # x-spacing (negative due to LPS -> RAS conversion)
    -1.0
    >>> print(affine[2, 2])  # z-spacing
    2.0

to_sitk_direction
Space.to_sitk_direction()

Convert orientation vectors to SimpleITK direction matrix format.

SimpleITK uses a flattened column-major direction matrix representation where the direction cosines are stored as a 9-element tuple.

Returns: tuple: Direction cosines in column-major order (xx,yx,zx,xy,yy,zy,xz,yz,zz)

Example:
    >>> space = Space(shape=(100, 100, 50))
    >>> direction = space.to_sitk_direction()
    >>> print(len(direction))
    9
    >>> print(direction[:3])  # First column (x-orientation)
    (1.0, 0.0, 0.0)

Transform

Transform(matrix, source=None, target=None)

4x4 homogeneous coordinate transformation matrix wrapper.

This class encapsulates a 4x4 transformation matrix and provides methods for applying transformations to points and vectors, computing inverses, and composing transformations.

Design Philosophy: The class is designed purely for geometric coordinate calculations without any resampling-related parameters. It maintains references to source and target spaces for traceability and validation.

Uses lazy evaluation for expensive operations like matrix inversion
and caches results for performance. The class is immutable except
for internal caching to ensure thread safety.

Attributes:
    matrix: 4x4 transformation matrix (source.index → target.index or other)
    source: Source Space object (can be None for world coordinates)
    target: Target Space object (can be None for world coordinates)

Example: Creating and using a transformation:

>>> import numpy as np
>>> from spacetransformer.core import Transform
>>> matrix = np.array([[1, 0, 0, 10],
...                    [0, 1, 0, 20],
...                    [0, 0, 1, 30],
...                    [0, 0, 0, 1]])
>>> transform = Transform(matrix)
>>> point = [0, 0, 0]
>>> transformed = transform.apply_point(point)
>>> print(transformed)
[[10. 20. 30.]]
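
Transforms produced by a Space can be chained. The hedged sketch below builds a source-index to target-index mapping by composing to_world_transform with from_world_transform; calc_transform (documented further down) packages the same idea:

>>> from spacetransformer.core import Space
>>> source = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
>>> target = Space(shape=(50, 50, 25), spacing=(2.0, 2.0, 4.0))
>>> idx_to_idx = source.to_world_transform.compose(target.from_world_transform)
>>> idx_to_idx.apply_point([10, 20, 10])   # expected: [[ 5. 10.  5.]] in target indices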

Methods

Name Description
apply_point Apply transformation to a set of points.
apply_vector Apply transformation to a set of vectors (ignoring translation).
compose Compose transformations with self applied first.
inverse Return the inverse transformation (lazy computed and cached).
apply_point
Transform.apply_point(pts)

Apply transformation to a set of points.

This method transforms 3D points using the 4x4 transformation matrix. Points are treated as having homogeneous coordinate w=1.0, so they are affected by both rotation and translation.

Args: pts: Input points with shape (N, 3) or (3,) for single point

Returns: np.ndarray: Transformed points with shape (N, 3)

Example:
    >>> import numpy as np
    >>> matrix = np.array([[1, 0, 0, 10],
    ...                    [0, 1, 0, 20],
    ...                    [0, 0, 1, 30],
    ...                    [0, 0, 0, 1]])
    >>> transform = Transform(matrix)
    >>> points = [[0, 0, 0], [1, 1, 1]]
    >>> transformed = transform.apply_point(points)
    >>> print(transformed)
    [[10. 20. 30.]
     [11. 21. 31.]]

apply_vector
Transform.apply_vector(vecs)

Apply transformation to a set of vectors (ignoring translation).

This method transforms 3D vectors using only the rotational part of the transformation matrix. Vectors are treated as having homogeneous coordinate w=0.0, so they are unaffected by translation.

Args: vecs: Input vectors with shape (N, 3) or (3,) for single vector

Returns: np.ndarray: Transformed vectors with shape (N, 3)

Example:
    >>> import numpy as np
    >>> matrix = np.array([[1, 0, 0, 10],  # Translation has no effect on vectors
    ...                    [0, 1, 0, 20],
    ...                    [0, 0, 1, 30],
    ...                    [0, 0, 0, 1]])
    >>> transform = Transform(matrix)
    >>> vectors = [[1, 0, 0], [0, 1, 0]]
    >>> transformed = transform.apply_vector(vectors)
    >>> print(transformed)
    [[1. 0. 0.]
     [0. 1. 0.]]

compose
Transform.compose(other)

Compose transformations with self applied first.

This method provides an alternative composition interface where self.compose(other) means apply self first, then other.

Args: other: Transform to apply after self

Returns: Transform: Composed transformation representing self followed by other

Example:
    >>> import numpy as np
    >>> T1 = Transform(np.eye(4))
    >>> T2 = Transform(np.eye(4))
    >>> composed = T1.compose(T2)  # Apply T1 first, then T2
    >>> print(np.allclose(composed.matrix, np.eye(4)))
    True

inverse
Transform.inverse()

Return the inverse transformation (lazy computed and cached).

This method computes the inverse transformation matrix and caches the result for efficient repeated use. The inverse is computed using numpy’s linear algebra inverse function.

Returns: Transform: Inverse transformation with swapped source/target spaces

Example:
    >>> import numpy as np
    >>> matrix = np.array([[1, 0, 0, 10],
    ...                    [0, 1, 0, 20],
    ...                    [0, 0, 1, 30],
    ...                    [0, 0, 0, 1]])
    >>> transform = Transform(matrix)
    >>> inverse = transform.inverse()
    >>> print(inverse.matrix[0:3, 3])  # Should be [-10, -20, -30]
    [-10. -20. -30.]
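
Composing a transform with its inverse should recover the identity; a small hedged check continuing the example above:

>>> round_trip = transform.compose(transform.inverse())
>>> print(np.allclose(round_trip.matrix, np.eye(4)))
True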

Functions

Name Description
calc_transform Calculate transformation matrix from source to target space.
find_tight_bbox Calculate the tight bounding box of source space in target index coordinates.
get_space_from_nifti Create a Space object from a NIfTI image.
get_space_from_sitk Create a Space object from a SimpleITK Image.
warp_point Transform point set from source to target space coordinates.
warp_vector Transform vector set between coordinate spaces (translation-invariant).

calc_transform

calc_transform(source, target)

Calculate transformation matrix from source to target space.

This function computes the transformation that maps voxel coordinates from the source space to the target space by chaining the source-to-world and world-to-target transformations.

Args:
    source: Source geometric space
    target: Target geometric space

Returns: Transform: Transform object representing source.index -> target.index mapping

Example:
    >>> import numpy as np
    >>> from spacetransformer.core import Space
    >>> source = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> target = Space(shape=(50, 50, 25), spacing=(2.0, 2.0, 4.0))
    >>> transform = calc_transform(source, target)
    >>> points = np.array([[0, 0, 0], [10, 10, 10]])
    >>> transformed = transform.apply_point(points)
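
A hedged follow-up to the example: the reverse mapping should simply be the inverse of the forward one:

>>> backward = calc_transform(target, source)
>>> print(np.allclose(backward.matrix, transform.inverse().matrix))
True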

find_tight_bbox

find_tight_bbox(source, target)

Calculate the tight bounding box of source space in target index coordinates.

This function computes the minimal bounding box that contains all corners of the source space when transformed to the target space’s index coordinates. The bounding box uses half-open intervals [left, right).

Args:
    source: Source space to compute the bounding box for
    target: Target space coordinate system

Returns: np.ndarray: Bounding box array of shape (3, 2) where bbox[:,0] contains left bounds and bbox[:,1] contains right bounds (exclusive)

Example:
    >>> from spacetransformer.core import Space
    >>> source = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> target = Space(shape=(50, 50, 25), spacing=(2.0, 2.0, 4.0))
    >>> bbox = find_tight_bbox(source, target)
    >>> print(bbox)
    [[ 0 50]
     [ 0 50]
     [ 0 25]]

get_space_from_nifti

get_space_from_nifti(niftiimage)

Create a Space object from a NIfTI image.

This function extracts geometric information from a NIfTI image including orientation, spacing, and origin information from the affine matrix.

Priority is given to qform affine matrix, with fallback to sform affine. Coordinate system is converted from RAS (NIfTI default) to LPS (medical imaging standard) by inverting x and y axes.

Args: niftiimage: NIfTI image object with affine matrix and shape

Returns: Space: A new Space instance with geometry matching the NIfTI image in LPS coordinates

Raises: ValueError: If affine matrix is not 4x4

Example:
    >>> import nibabel as nib
    >>> image = nib.load('brain.nii.gz')
    >>> space = get_space_from_nifti(image)
    >>> print(space.shape)
    (256, 256, 256)
    >>> print(space.spacing)
    (1.0, 1.0, 1.0)

get_space_from_sitk

get_space_from_sitk(simpleitkimage)

Create a Space object from a SimpleITK Image.

This function extracts geometric information from a SimpleITK Image including origin coordinates, voxel spacing, direction cosines, and image dimensions.

Args: simpleitkimage: SimpleITK Image object

Returns: Space: A new Space instance with geometry matching the SimpleITK image

Example:
    >>> import SimpleITK as sitk
    >>> image = sitk.ReadImage('brain.nii.gz')
    >>> space = get_space_from_sitk(image)
    >>> print(space.shape)
    (256, 256, 256)
    >>> print(space.spacing)
    (1.0, 1.0, 1.0)

warp_point

warp_point(point_set, source, target)

Transform point set from source to target space coordinates.

This function transforms a set of points from source voxel coordinates to target voxel coordinates and returns a boolean mask indicating which points fall within the target space bounds.

Design Philosophy: Supports both NumPy and PyTorch tensors with automatic device handling to enable seamless integration with both CPU and GPU workflows. The output type matches the input type for consistency.

Args:
    point_set: Input points with shape (N, 3) or (3,) for a single point
    source: Source geometric space
    target: Target geometric space

Returns:
    Tuple containing:
        - Transformed points in target space coordinates
        - Boolean mask indicating which points are within target bounds

Raises: ValidationError: If inputs are invalid

Example:
    >>> import numpy as np
    >>> from spacetransformer.core import Space
    >>> source = Space(shape=(100, 100, 50), spacing=(1.0, 1.0, 2.0))
    >>> target = Space(shape=(50, 50, 25), spacing=(2.0, 2.0, 4.0))
    >>> points = np.array([[10, 20, 10], [90, 90, 40]])
    >>> transformed, mask = warp_point(points, source, target)
    >>> print(transformed)
    [[ 5. 10.  5.]
     [45. 45. 20.]]
    >>> print(mask)
    [ True  True]
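
As the design note above states, PyTorch tensors are accepted as well and the output type follows the input; a hedged sketch continuing the example, assuming torch is installed:

>>> import torch
>>> pts = torch.tensor([[10.0, 20.0, 10.0]])
>>> warped, inside = warp_point(pts, source, target)
>>> type(warped).__name__    # expected: 'Tensor', matching the input type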

warp_vector

warp_vector(vector_set, source, target)

Transform vector set between coordinate spaces (translation-invariant).

This function transforms vectors (directions) from source to target space without applying translation. Only rotational components of the transformation are applied since vectors represent directions, not positions.

Args:
    vector_set: Input vectors with shape (N, 3) or (3,) for a single vector
    source: Source geometric space
    target: Target geometric space

Returns: Transformed vectors in target space coordinates (same type as input)

Raises: ValidationError: If inputs are invalid

Example:
    >>> import numpy as np
    >>> from spacetransformer.core import Space
    >>> source = Space(shape=(100, 100, 50))
    >>> target = Space(shape=(50, 50, 25))
    >>> vectors = np.array([[1, 0, 0], [0, 1, 0]])
    >>> transformed = warp_vector(vectors, source, target)
    >>> print(transformed)  # Should be unchanged for identity transformation
    [[1. 0. 0.]
     [0. 1. 0.]]