darts_segmentation.segment
Functionality for segmenting tiles.
DEFAULT_DEVICE
module-attribute
DEFAULT_DEVICE = torch.device(
    "cuda" if torch.cuda.is_available() else "cpu"
)
SMPSegmenter
SMPSegmenter(
    model_checkpoint: pathlib.Path | str,
    device: torch.device = darts_segmentation.segment.DEFAULT_DEVICE,
)
Semantic segmentation model wrapper for RTS detection using Segmentation Models PyTorch.
This class provides a stateful inference interface for semantic segmentation models trained with the DARTS pipeline. It handles model loading, normalization, patch-based inference, and memory management.
Attributes:
- config (darts_segmentation.segment.SMPSegmenterConfig): Model configuration including architecture and required bands.
- model (torch.nn.Module): The loaded PyTorch segmentation model.
- device (torch.device): Device where the model is loaded (CPU or GPU).
Note
The segmenter automatically:
- Loads model weights from PyTorch Lightning or legacy checkpoints
- Normalizes input data using band-specific statistics from darts_utils.bands
- Handles memory cleanup after inference to prevent GPU memory leaks
Example
Basic segmentation workflow:
from darts_segmentation import SMPSegmenter
import torch
# Initialize segmenter
segmenter = SMPSegmenter(
    model_checkpoint="path/to/model.ckpt",
    device=torch.device("cuda")
)
# Check required bands
print(segmenter.required_bands)
# {'blue', 'green', 'red', 'nir', 'ndvi', 'slope', 'hillshade', ...}
# Run inference on preprocessed tile
result = segmenter.segment_tile(
    tile=preprocessed_tile,
    patch_size=1024,
    overlap=16,
    batch_size=8
)
# Access predictions
probabilities = result["probabilities"] # float32, range [0, 1]
Initialize the segmenter with a trained model checkpoint.
Parameters:
- model_checkpoint (pathlib.Path | str): Path to the model checkpoint file (.ckpt). Supports both PyTorch Lightning checkpoints and legacy formats.
- device (torch.device, default: darts_segmentation.segment.DEFAULT_DEVICE): Device to load the model on. Defaults to CUDA if available, else CPU.
Note
The checkpoint must contain:
- Model architecture configuration (config or hyper_parameters)
- Trained weights (state_dict or statedict)
- Required input bands list
Using Lightning checkpoints from our training pipeline is recommended.
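If in doubt, the checkpoint contents can be inspected before constructing the segmenter. The snippet below is a minimal sketch; the path is a placeholder and the exact key names depend on how the checkpoint was produced:
import torch
# Placeholder path; point this at your own checkpoint
ckpt = torch.load("path/to/model.ckpt", map_location="cpu")
# Lightning checkpoints typically expose "hyper_parameters" and "state_dict";
# legacy checkpoints may use "config" and "statedict" instead
print(list(ckpt.keys()))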
Source code in darts-segmentation/src/darts_segmentation/segment.py
config
instance-attribute
config: darts_segmentation.segment.SMPSegmenterConfig = (
    darts_segmentation.segment.SMPSegmenterConfig.from_ckpt(
        ckpt
    )
)
device
instance-attribute
device: torch.device = device
model
instance-attribute
model: torch.nn.Module = (
    segmentation_models_pytorch.create_model(
        **self.config["model"]
    )
)
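For illustration, these attributes can be inspected after construction; the checkpoint path below is a placeholder:
segmenter = SMPSegmenter("path/to/model.ckpt")
print(segmenter.device)           # e.g. device(type='cuda')
print(segmenter.config["model"])  # architecture kwargs passed to create_model
print(type(segmenter.model))      # the instantiated torch.nn.Module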
__call__
__call__(
    input: xarray.Dataset | list[xarray.Dataset],
    patch_size: int = 1024,
    overlap: int = 16,
    batch_size: int = 8,
    reflection: int = 0,
) -> xarray.Dataset | list[xarray.Dataset]
Run inference on a single tile or a list of tiles.
Parameters:
- input (xarray.Dataset | list[xarray.Dataset]): A single tile or a list of tiles.
- patch_size (int, default: 1024): The size of the patches. Defaults to 1024.
- overlap (int, default: 16): The size of the overlap. Defaults to 16.
- batch_size (int, default: 8): The batch size for the prediction, NOT the batch size of input tiles. The tensor will be sliced into patches, which are then inferred in batches. Defaults to 8.
- reflection (int, default: 0): Reflection padding applied to the edges of the tensor. Defaults to 0.
Returns:
- xarray.Dataset | list[xarray.Dataset]: A single tile or a list of tiles augmented by a predicted probabilities layer, depending on the input. Each probability has type float32 and range [0, 1].
Raises:
- ValueError: If the input is not an xr.Dataset or a list of xr.Dataset.
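A minimal sketch of calling the segmenter directly; tile_a and tile_b are placeholders for preprocessed xarray Datasets containing all required bands:
segmenter = SMPSegmenter("path/to/model.ckpt")
# Single tile in, single tile out
result = segmenter(tile_a, patch_size=1024, overlap=16, batch_size=8)
# List of tiles in, list of tiles out
results = segmenter([tile_a, tile_b], patch_size=1024, overlap=16, batch_size=8)
probs = results[0]["probabilities"]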
Source code in darts-segmentation/src/darts_segmentation/segment.py
segment_tile
segment_tile(
    tile: xarray.Dataset,
    patch_size: int = 1024,
    overlap: int = 16,
    batch_size: int = 8,
    reflection: int = 0,
) -> xarray.Dataset
Run semantic segmentation inference on a single tile.
This method performs patch-based inference with optional overlap and reflection padding to handle edge artifacts. The tile is automatically normalized using band-specific statistics before inference.
Parameters:
- tile (xarray.Dataset): Input tile containing preprocessed data. Must include all bands specified in self.required_bands. Variables should be float32 reflectance or normalized feature values.
- patch_size (int, default: 1024): Size of square patches for inference in pixels. Larger patches use more memory but may be faster. Defaults to 1024.
- overlap (int, default: 16): Overlap between adjacent patches in pixels. Helps reduce edge artifacts. Defaults to 16.
- batch_size (int, default: 8): Number of patches to process simultaneously. Higher values use more GPU memory but may be faster. Defaults to 8.
- reflection (int, default: 0): Reflection padding applied to tile edges in pixels. Reduces edge effects. Defaults to 0.
Returns:
- xarray.Dataset: Input tile augmented with a new data variable:
  - probabilities (float32): Segmentation probabilities in range [0, 1]. Attributes: long_name="Probabilities".
Note
Processing pipeline:
1. Extract and reorder bands according to model requirements
2. Normalize using darts_utils.bands.manager
3. Convert to torch tensor
4. Run patch-based inference with overlap blending
5. Convert predictions back to xarray
Memory management:
- Automatically frees GPU memory after inference
- Predictions are moved to CPU before returning
Example
Run inference with custom parameters:
result = segmenter.segment_tile(
    tile=preprocessed_tile,
    patch_size=512,   # Smaller patches for limited GPU memory
    overlap=32,       # More overlap for smoother predictions
    batch_size=4,     # Smaller batches for memory constraints
    reflection=16     # Add padding to reduce edge artifacts
)
# Extract probabilities
probs = result["probabilities"]
Source code in darts-segmentation/src/darts_segmentation/segment.py
SMPSegmenterConfig
Configuration for the segmentor.
from_ckpt
classmethod
from_ckpt(
    ckpt: dict[str, typing.Any],
) -> darts_segmentation.segment.SMPSegmenterConfig
Load and validate the config from a checkpoint for the segmentor.
Parameters:
- ckpt (dict[str, typing.Any]): The loaded checkpoint dictionary to read the configuration from.
Returns:
- darts_segmentation.segment.SMPSegmenterConfig: The configuration.
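A minimal usage sketch, assuming the checkpoint is a .ckpt file on disk (the path is a placeholder):
import torch
from darts_segmentation.segment import SMPSegmenterConfig
ckpt = torch.load("path/to/model.ckpt", map_location="cpu")
config = SMPSegmenterConfig.from_ckpt(ckpt)
print(config["model"])  # architecture settings used to rebuild the network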
Source code in darts-segmentation/src/darts_segmentation/segment.py
predict_in_patches
predict_in_patches(
    model: torch.nn.Module,
    tensor_tiles: torch.Tensor,
    patch_size: int,
    overlap: int,
    batch_size: int,
    reflection: int,
    device: torch.device,
    return_weights: bool = False,
) -> torch.Tensor
Predict on a tensor.
Parameters:
- model (torch.nn.Module): The model to use for prediction.
- tensor_tiles (torch.Tensor): The input tensor. Shape: (BS, C, H, W).
- patch_size (int): The size of the patches.
- overlap (int): The size of the overlap.
- batch_size (int): The batch size for the prediction, NOT the batch size of input tiles. The tensor will be sliced into patches, which are then inferred in batches.
- reflection (int): Reflection padding applied to the edges of the tensor.
- device (torch.device): The device to use for the prediction.
- return_weights (bool, default: False): Whether to return the weights. Can be used for debugging. Defaults to False.
Returns:
- torch.Tensor: The merged prediction for the full input tensor.
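A minimal sketch of the low-level call. The channel count, the dummy model, and the import path (mirroring the source file path listed below) are assumptions for illustration only:
import torch
from darts_segmentation.inference import predict_in_patches
# Dummy stand-in for a trained segmentation model (1 output channel)
model = torch.nn.Conv2d(3, 1, kernel_size=1)
tensor_tiles = torch.rand(1, 3, 2048, 2048)  # (BS, C, H, W)
prediction = predict_in_patches(
    model=model,
    tensor_tiles=tensor_tiles,
    patch_size=1024,
    overlap=16,
    batch_size=8,
    reflection=0,
    device=torch.device("cpu"),
)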
Source code in darts-segmentation/src/darts_segmentation/inference.py