util
darts.legacy_training.util
Utility functions for legacy training.
convert_lightning_checkpoint
convert_lightning_checkpoint(
*,
lightning_checkpoint: pathlib.Path,
out_directory: pathlib.Path,
checkpoint_name: str,
framework: str = "smp",
)
Convert a lightning checkpoint to our own format.
The final checkpoint will contain the model configuration and the state dict. It will be saved to the given output directory under the provided checkpoint name.
Parameters:
- lightning_checkpoint (pathlib.Path) – Path to the lightning checkpoint.
- out_directory (pathlib.Path) – Output directory for the converted checkpoint.
- checkpoint_name (str) – A unique name for the new checkpoint.
- framework (str, default: "smp") – The framework used for the model. Defaults to "smp".
Source code in darts/src/darts/legacy_training/util.py
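Conceptually, the conversion reads the lightning checkpoint dictionary, keeps the model configuration and the state dict, and writes them to the output directory. A minimal sketch of that idea, assuming the usual lightning keys `hyper_parameters` and `state_dict` and using pickle instead of torch to stay dependency-free; the output keys and filename here are assumptions, not the actual darts format:

```python
import pickle
from pathlib import Path


def convert_checkpoint_sketch(
    lightning_checkpoint: Path,
    out_directory: Path,
    checkpoint_name: str,
    framework: str = "smp",
) -> Path:
    # Lightning checkpoints are dictionaries; real code would use torch.load,
    # this sketch uses pickle so it runs without torch installed.
    with open(lightning_checkpoint, "rb") as f:
        ckpt = pickle.load(f)
    converted = {
        "config": ckpt.get("hyper_parameters", {}),  # model configuration
        "statedict": ckpt.get("state_dict", {}),     # model weights
        "framework": framework,
    }
    out_directory.mkdir(parents=True, exist_ok=True)
    out_path = out_directory / f"{checkpoint_name}.ckpt"
    with open(out_path, "wb") as f:
        pickle.dump(converted, f)
    return out_path
```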
generate_id
generate_id(length: int = 8) -> str
Generate a random base-36 string of `length` digits.
This method is taken from the wandb SDK.
There are ~2.8T base-36 8-digit strings. Generating 210k ids will have a ~1% chance of collision.
Parameters:
- length (int, default: 8) – The length of the string. Defaults to 8.
Returns:
- str – A random base-36 string of `length` digits.
Source code in darts/src/darts/legacy_training/util.py
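The collision math above follows from 36^8 ≈ 2.8 × 10^12 possible ids. A minimal sketch of such a generator; `generate_id_sketch` is a hypothetical name and the wandb SDK's actual implementation may differ in how it draws randomness:

```python
import secrets

BASE36 = "0123456789abcdefghijklmnopqrstuvwxyz"


def generate_id_sketch(length: int = 8) -> str:
    # Each position is drawn uniformly from 36 characters, giving
    # 36**length possible ids (36**8 is roughly 2.8 trillion).
    return "".join(secrets.choice(BASE36) for _ in range(length))
```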
get_generated_name
Generate a random name with a count attached.
The count is calculated by the number of existing directories in the specified artifact directory. The final name is in the format '{somename}-{somesecondname}-{count+1}'.
Parameters:
Returns:
- str – The final name.
Source code in darts/src/darts/legacy_training/util.py
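The counting scheme described above can be sketched as follows. The word lists, the function name, and the `artifact_dir` parameter are hypothetical stand-ins (the real function's parameters are not shown on this page); only the count-plus-one suffix logic is taken from the description:

```python
import random
from pathlib import Path

# Hypothetical word lists; the real function draws its name parts elsewhere.
FIRST = ["brisk", "quiet", "amber"]
SECOND = ["falcon", "harbor", "lichen"]


def generated_name_sketch(artifact_dir: Path) -> str:
    # The count is the number of existing directories in the artifact
    # directory, so consecutive runs get consecutive suffixes.
    count = sum(1 for p in artifact_dir.iterdir() if p.is_dir())
    return f"{random.choice(FIRST)}-{random.choice(SECOND)}-{count + 1}"
```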
get_value_from_trial
get_value_from_trial(trial, constrains, param: str)
Get a value from an optuna trial based on the given constraints.
Parameters:
- trial (optuna.Trial) – The optuna trial.
- constrains (dict) – The constraints for the parameter.
- param (str) – The parameter name.
Raises:
- ValueError – Unknown distribution.
- ValueError – Unknown constraints.
Returns:
- str | float | int – The value suggested by optuna.
Source code in darts/src/darts/legacy_training/util.py
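The dispatch over constraint shapes can be sketched as below. This is not the darts implementation: `StubTrial` replaces `optuna.Trial` with deterministic picks so the sketch runs without optuna, and the supported keys (`value`, `values`, `min`/`max`, `distribution`) and distribution names are assumptions based on wandb's sweep config format:

```python
class StubTrial:
    """Minimal stand-in for optuna.Trial, for illustration only."""

    def suggest_float(self, name, low, high, log=False):
        return low  # optuna would sample from [low, high] here

    def suggest_categorical(self, name, choices):
        return choices[0]  # optuna would pick one of the choices


def value_from_constraints_sketch(trial, constrains: dict, param: str):
    if "value" in constrains:  # fixed value, nothing to suggest
        return constrains["value"]
    if "values" in constrains:  # list of options: categorical choice
        return trial.suggest_categorical(param, constrains["values"])
    if "min" in constrains and "max" in constrains:  # numeric range
        dist = constrains.get("distribution", "uniform")
        if dist == "log_uniform_values":
            return trial.suggest_float(param, constrains["min"], constrains["max"], log=True)
        if dist == "uniform":
            return trial.suggest_float(param, constrains["min"], constrains["max"])
        raise ValueError(f"Unknown distribution: {dist}")
    raise ValueError(f"Unknown constraints: {constrains}")
```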
suggest_optuna_params_from_wandb_config
suggest_optuna_params_from_wandb_config(
trial, config: dict
)
Get optuna parameters from a wandb sweep config.
This function translates a wandb sweep config into a dict of values suggested by optuna.
Parameters:
- trial (optuna.Trial) – The optuna trial.
- config (dict) – The wandb sweep config.
Returns:
- dict – A dict of parameters with the values suggested by optuna.
Example
Assume a wandb config which looks like this:
```yaml
parameters:
  learning_rate:
    max: !!float 1e-3
    min: !!float 1e-7
    distribution: log_uniform_values
  batch_size:
    value: 8
  gamma:
    value: 0.9
  augment:
    value: True
  model_arch:
    values:
      - UnetPlusPlus
      - Unet
  model_encoder:
    values:
      - resnext101_32x8d
      - resnet101
      - dpn98
```
This function will return a dict like this:
```python
{
    "learning_rate": trial.suggest_loguniform("learning_rate", 1e-7, 1e-3),
    "batch_size": 8,
    "gamma": 0.9,
    "augment": True,
    "model_arch": trial.suggest_categorical("model_arch", ["UnetPlusPlus", "Unet"]),
    "model_encoder": trial.suggest_categorical(
        "model_encoder", ["resnext101_32x8d", "resnet101", "dpn98"]
    ),
}
```
See https://docs.wandb.ai/guides/sweeps/sweep-config-keys for more information on the sweep config.
Note: Not all distribution types are supported.
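The translation loop behind the example above can be sketched as follows. Again this is illustrative only: `StubTrial` stands in for `optuna.Trial` with deterministic picks so no optuna install is needed, and the function name and handled constraint keys are assumptions:

```python
class StubTrial:
    """Stand-in for optuna.Trial: deterministic picks instead of sampling."""

    def suggest_float(self, name, low, high, log=False):
        return low

    def suggest_categorical(self, name, choices):
        return choices[0]


def params_from_sweep_config_sketch(trial, config: dict) -> dict:
    suggested = {}
    for name, c in config["parameters"].items():
        if "value" in c:  # fixed value: passed through unchanged
            suggested[name] = c["value"]
        elif "values" in c:  # list of options: categorical suggestion
            suggested[name] = trial.suggest_categorical(name, c["values"])
        else:  # min/max range, log-scaled if requested
            log = c.get("distribution") == "log_uniform_values"
            suggested[name] = trial.suggest_float(name, c["min"], c["max"], log=log)
    return suggested
```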