ai4med.libs.metrics package

class MetricAUC(name, report_path=None, auc_average='macro', class_index=None, include_list=None)

Bases: ai4med.libs.metrics.metric_list.MetricList

generate_report()
get()
update(value, label_value=None, filename='', data_prop=None)
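MetricAUC accumulates predictions and labels across update() calls and computes the AUC when get() is called. The 'macro' setting for auc_average is the unweighted mean of per-class AUCs; the following is a minimal stand-alone sketch of that computation (the function names are illustrative, not part of ai4med):

```python
import numpy as np

def binary_auc(labels, scores):
    # Rank-based (Mann-Whitney) AUC for one class: the fraction of
    # (positive, negative) pairs ranked correctly, counting ties as half.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def macro_auc(labels_onehot, probs):
    # 'macro' averaging: unweighted mean of the per-class AUCs.
    n_classes = probs.shape[1]
    return float(np.mean([binary_auc(labels_onehot[:, c], probs[:, c])
                          for c in range(n_classes)]))
```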
class MetricAverage(name, invalid_value=nan, report_path=None, negate_value=False)

Bases: ai4med.libs.metrics.metric_list.MetricList

Generic class for tracking the running average of a metric. Expects the value stored under applied_key to be a scalar, which is averaged over all updates.

get()
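The averaging behavior can be pictured in a few lines of plain Python: entries matching invalid_value (NaN by default) are skipped so they do not skew the mean, and negate_value flips the sign (useful when a loss should be reported as a score). This sketch is illustrative only, not the library implementation:

```python
import math

def running_average(values, invalid_value=float('nan'), negate_value=False):
    # Skip entries matching the invalid sentinel. NaN compares unequal
    # to itself, so it needs an explicit isnan check.
    def is_invalid(v):
        if isinstance(invalid_value, float) and math.isnan(invalid_value):
            return math.isnan(v)
        return v == invalid_value

    valid = [(-v if negate_value else v) for v in values if not is_invalid(v)]
    return sum(valid) / len(valid) if valid else float('nan')
```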
class MetricAverageFromArray(name, reduction_fn, invalid_value=nan, report_path=None)

Bases: ai4med.libs.metrics.metric_list.MetricList

Generic class for computing an overlap metric from NumPy arrays of labels and predictions. This is useful for scanning-window inference, which returns full-size predictions and label arrays; as a result, the metric cannot be computed on the TF graph and must instead be computed outside of it.

reduction_fn: function that accepts ‘labels’ and ‘preds’ arrays and returns a scalar value, e.g., a Dice score
label_key: the key in the dictionary passed to the update function that corresponds to the label np.array

generate_report()
update(value, label_value=None, filename='', data_prop=None)
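Any callable with a (labels, preds) signature that returns a scalar can serve as the reduction_fn. The mean-absolute-error reduction below is purely an example (the class is typically used with overlap metrics such as Dice):

```python
import numpy as np

def mae_reduction(labels, preds):
    # Example reduction_fn: collapse two full-size arrays into one scalar.
    return float(np.mean(np.abs(np.asarray(labels) - np.asarray(preds))))
```

The metric would then be constructed along the lines of MetricAverageFromArray(name='val_mae', reduction_fn=mae_reduction), with each update contributing one scalar to the running average.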
class MetricAverageFromArrayDice(name, is_2d=False, remove_bg=True, logit_thresh=0.5, is_one_hot_targets=False, report_path=None)

Bases: ai4med.libs.metrics.avg_from_array.MetricAverageFromArray

Computes dice score metric from full size np array and collects average.

Parameters
  • name (str) – Name for the metric.

  • is_2d (bool) – Whether the inputs are 2D slices rather than 3D volumes.

  • remove_bg (bool) – Whether to exclude the background channel from the score.

  • logit_thresh (float) – Threshold applied to predictions before computing Dice.

  • is_one_hot_targets (bool) – Whether the targets are already one-hot encoded.

  • report_path (str, optional) – Path for saving report.


get()
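A stand-alone sketch of the per-channel Dice computation this class performs, under the assumption that channel 0 is the background (the function name and the empty-denominator tie-breaking are illustrative, not the library's internals):

```python
import numpy as np

def channel_dice(labels_onehot, probs, logit_thresh=0.5, remove_bg=True):
    # labels_onehot, probs: arrays shaped (channels, ...). Predictions are
    # binarized at logit_thresh; channel 0 is optionally dropped as
    # background before averaging the per-channel Dice scores.
    preds = (np.asarray(probs) >= logit_thresh).astype(np.float64)
    labels = np.asarray(labels_onehot).astype(np.float64)
    start = 1 if remove_bg else 0
    scores = []
    for c in range(start, labels.shape[0]):
        inter = (labels[c] * preds[c]).sum()
        denom = labels[c].sum() + preds[c].sum()
        scores.append(2.0 * inter / denom if denom > 0 else 1.0)
    return float(np.mean(scores))
```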
class MulticlassAverage(name: str, report_path: str = None, auc_average: str = 'macro', label_index: int = None)

Bases: ai4med.libs.metrics.metric_list.MetricList

generate_report()
get()
update(value, label_value=None, filename='', data_prop=None)
class Metric(name, report_path=None)

Bases: object

Base class for validation metrics.

All metrics should define:

update_fn: accepts a dictionary of values, and pulls applied_key to update the current summary of the metric
reset: resets the tracking variables
get: returns the current summary value of the metric
should_print: whether we are printing this metric out
should_summarize: whether we are summarizing this metric in TensorBoard
name: name of the metric, used to label printing and summaries
is_stopping_metric: whether the metric should be considered the stopping criterion

generate_report()
get()
name()
reduce_across_ranks()
reset()
update(value, label_value=None, filename='', data_prop=None)
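The contract above can be made concrete with a minimal stand-alone class mirroring the interface. This is an illustration only, not the actual ai4med.libs.metrics.metric.Metric implementation:

```python
class MeanMetric:
    # Hypothetical metric tracking a running mean, following the
    # interface described above (name, reset, update, get).
    def __init__(self, name, report_path=None):
        self._name = name
        self.report_path = report_path
        self.reset()

    def name(self):
        return self._name

    def reset(self):
        # Clear the tracking variables.
        self._total, self._count = 0.0, 0

    def update(self, value, label_value=None, filename='', data_prop=None):
        # Fold one scalar value into the running summary.
        self._total += float(value)
        self._count += 1

    def get(self):
        # Return the current summary of the metric.
        return self._total / self._count if self._count else float('nan')
```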
class MetricList(name, invalid_value=None, report_path=None)

Bases: ai4med.libs.metrics.metric.Metric

Generic class for aggregating list-based metrics, e.g., AUCs, that require the entire set of predictions

add_to_list(cur_list, val, invalid_value=None)
reduce_across_ranks()

Appends the lists from all ranks into rank 0’s list

reset()
update(value, label_value=None, filename='', data_prop=None)
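A plausible sketch of the add_to_list helper's filtering behavior; this is an assumption about its semantics (drop values matching the invalid sentinel), not the library source:

```python
import math

def add_to_list(cur_list, val, invalid_value=None):
    # Append val unless it matches the invalid sentinel; NaN needs an
    # explicit isnan check because NaN != NaN.
    nan_sentinel = isinstance(invalid_value, float) and math.isnan(invalid_value)
    invalid = (invalid_value is not None
               and ((nan_sentinel and isinstance(val, float) and math.isnan(val))
                    or (not nan_sentinel and val == invalid_value)))
    if not invalid:
        cur_list.append(val)
    return cur_list
```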
acc_dice_np(labels, preds, max_label=None, remove_bg=True)

Computes the Dice score over a batch of volumes, with optional reduction. If no reduction is applied, returns a list of Dice scores, one per batch element

acc_dice_np_batch(labels, preds, max_label=None, reduction_fn=None, is_2d=False, remove_bg=True, logit_thresh=0.5, is_one_hot_targets=False)

Computes the Dice score over a batch of volumes, with optional reduction

acc_dice_np_onehot(labels, preds, max_label=None, remove_bg=True, thresh=0.5)

Computes the Dice score over a batch of volumes, with optional reduction. If no reduction is applied, returns a list of Dice scores, one per batch element

dice_np(label, prediction)
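dice_np carries no docstring above; the following is a hedged sketch of the standard binary Dice computation it presumably performs (the empty-denominator convention is an assumption):

```python
import numpy as np

def dice_np(label, prediction):
    # Standard binary Dice: 2 * |intersection| / (|label| + |prediction|).
    label = np.asarray(label).astype(bool)
    prediction = np.asarray(prediction).astype(bool)
    inter = np.logical_and(label, prediction).sum()
    denom = label.sum() + prediction.sum()
    # Convention assumed here: two empty masks count as a perfect match.
    return 2.0 * inter / denom if denom else 1.0
```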
© Copyright 2020, NVIDIA. Last updated on Feb 2, 2023.