NVIDIA Morpheus (24.06)

morpheus.stages.input.duo_source_stage.DuoSourceStage

class DuoSourceStage(c, input_glob, watch_directory=False, max_files=-1, file_type=<FileTypes.Auto: 0>, repeat=1, sort_glob=False, recursive=True, queue_max_size=128, batch_timeout=5.0)[source]

Bases: morpheus.stages.input.autoencoder_source_stage.AutoencoderSourceStage

Source stage is used to load Duo Authentication messages.

Adds the following derived features:
  • locincrement: Increments every time a log contains a distinct city within a day.

  • logcount: Tracks the number of logs generated by a user within a day.
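
A minimal construction sketch, assuming a Config already prepared for the autoencoder (AE) pipelines this stage is used in; the glob path below is illustrative only:

    from morpheus.config import Config
    from morpheus.pipeline import LinearPipeline
    from morpheus.stages.input.duo_source_stage import DuoSourceStage

    config = Config()  # in practice, configured for the AE pipeline mode

    pipeline = LinearPipeline(config)
    # Read all Duo JSON logs matching a hypothetical glob, in sorted order.
    pipeline.set_source(DuoSourceStage(config, input_glob="/input_dir/*.json",
                                       sort_glob=True))

Downstream stages and pipeline.run() would follow as in any other Morpheus pipeline.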

Parameters
c : morpheus.config.Config

Pipeline configuration instance.

input_glob: str

Input glob pattern to match files to read. For example, /input_dir/*.json would read all files with the ‘json’ extension in the directory input_dir.

watch_directory: bool, default = False

When enabled, this stage does not shut down once all matching files have been read. Instead, after reading all files that match the input_glob pattern, it continues to watch the directory and processes any newly added files that match the glob.

max_files: int, default = -1

Max number of files to read. Useful for debugging to limit startup time. Default value of -1 is unlimited.

file_type: morpheus.common.FileTypes, default = FileTypes.Auto

Indicates what type of file to read. Specifying ‘auto’ will determine the file type from the extension. Supported extensions: ‘json’, ‘csv’

repeat: int, default = 1

How many times to repeat the dataset. Useful for extending small datasets in debugging.

sort_glob: bool, default = False

If true, the list of files matching input_glob will be processed in sorted order.

recursive: bool, default = True

If true, events will be emitted for the files in subdirectories that match input_glob.

queue_max_size: int, default = 128

Maximum queue size to hold the file paths to be processed that match input_glob.

batch_timeout: float, default = 5.0

Timeout to retrieve batch messages from the queue.

Attributes
has_multi_input_ports

Indicates if this stage has multiple input ports.

has_multi_output_ports

Indicates if this stage has multiple output ports.

input_count

Returns None when there is no maximum input count.

input_ports

Input ports to this stage.

is_built

Indicates if this stage has been built.

is_pre_built

Indicates if this stage has been pre-built.

name

Unique name for the stage.

output_ports

Output ports from this stage.

unique_name

Unique name of stage.

Methods

batch_user_split(x, userid_column_name, ...) Creates a dataframe for each userid.
build(builder[, do_propagate]) Build this stage.
can_build([check_ports]) Determines if all inputs have been built, allowing this node to be built.
can_pre_build([check_ports]) Determines if all inputs have been built, allowing this node to be built.
change_columns(df) Removes the characters _, ., {, }, and : from the dataframe column names.
compute_schema(schema) Compute the schema for this stage based on the incoming schema from upstream stages.
derive_features(df, feature_columns) Derives feature columns from the DUO (logs) source columns.
files_to_dfs_per_user(x, userid_column_name, ...) After loading the input batch of DUO logs into a dataframe, this method builds a dataframe for each set of userid rows in accordance with the specified filter condition.
get_all_input_stages() Get all input stages to this stage.
get_all_inputs() Get all input senders to this stage.
get_all_output_stages() Get all output stages from this stage.
get_all_outputs() Get all output receivers from this stage.
get_match_pattern(glob_split) Returns a file match pattern.
get_needed_columns() Stages which need to have columns inserted into the dataframe should populate the self._needed_columns dictionary with a mapping of column names to morpheus.common.TypeId.
join() Awaitable method that stages can implement to perform cleanup steps when the pipeline is stopped.
repeat_df(df, repeat_count) Repeats the given dataframe to extend small datasets for debugging, applying incremental updates to the event_dt and eventTime columns.
set_needed_columns(needed_columns) Sets the columns needed to perform preallocation.
start_async() This function is called along with on_start during stage initialization.
stop() Stages can implement this to perform cleanup steps when pipeline is stopped.
supports_cpp_node() Indicates that this stage does not support a C++ node.
_build(builder, input_nodes)[source]

This function is responsible for constructing this stage’s internal mrc.SegmentObject object. The input of this function contains the returned value from the upstream stage.

The input values are the mrc.Builder for this stage and a list of parent nodes.

Parameters
builder : mrc.Builder

mrc.Builder object for the pipeline. This should be used to construct/attach the internal mrc.SegmentObject.

input_nodes : list[mrc.SegmentObject]

List containing the input mrc.SegmentObject objects.

Returns
list[mrc.SegmentObject]

List containing the output mrc.SegmentObject objects from this stage.

_build_source(builder)[source]

Abstract method all derived Source classes should implement. Returns the same value as build.

Returns
mrc.SegmentObject:

The MRC node for this stage.

_build_sources(builder)[source]

Abstract method all derived Source classes should implement. Returns the same value as build.

Returns
list[mrc.SegmentObject]:

The MRC nodes for this stage.

static batch_user_split(x, userid_column_name, userid_filter, datetime_column_name='event_dt')[source]

Creates a dataframe for each userid.

Parameters
x

List of dataframes.

userid_column_name

Name of a dataframe column used for categorization.

userid_filter

If specified, only rows with this userid are retained.

datetime_column_name

Name of the dataframe column used to sort the rows.

Returns
user_dfs

Dataframes, each of which is associated with a single userid.
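
The exact implementation is not reproduced here, but a simplified pandas sketch of the split described above might look like the following (helper name is illustrative):

    import pandas as pd

    def batch_user_split_sketch(dfs, userid_column_name, userid_filter=None,
                                datetime_column_name="event_dt"):
        # Combine the incoming batch and order rows chronologically.
        df = pd.concat(dfs).sort_values(by=datetime_column_name)
        if userid_filter is not None:
            # Keep only the rows belonging to the requested user.
            df = df[df[userid_column_name] == userid_filter]
        # Produce one dataframe per distinct userid.
        return {uid: user_df for uid, user_df in df.groupby(userid_column_name)}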

build(builder, do_propagate=True)[source]

Build this stage.

Parameters
builder : mrc.Builder

MRC segment for this stage.

do_propagate

Whether to propagate to build output stages, by default True.

can_build(check_ports=False)[source]

Determines if all inputs have been built, allowing this node to be built.

Parameters
check_ports

Check if we can build based on the input ports, by default False.

Returns
bool

True if we can build, False otherwise.

can_pre_build(check_ports=False)[source]

Determines if all inputs have been built, allowing this node to be built.

Parameters
check_ports

Check if we can build based on the input ports, by default False.

Returns
bool

True if we can build, False otherwise.

static change_columns(df)[source]

Removes the characters _, ., {, }, and : from the dataframe column names.

Parameters
df : pd.DataFrame

Dataframe that requires column renaming.

Returns
df : pd.DataFrame

Dataframe with renamed columns.
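
A one-line sketch of the renaming, assuming the listed characters are simply stripped from each column name:

    import re

    def change_columns_sketch(df):
        # Strip _ . { } : from every column name.
        df.columns = [re.sub(r"[_.{}:]", "", col) for col in df.columns]
        return df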

compute_schema(schema)[source]

Compute the schema for this stage based on the incoming schema from upstream stages.

Incoming schema and type information from upstream stages is available via the schema.input_schemas and schema.input_types properties.

Derived classes need to override this method and can set the output type(s) on schema by calling set_type for all output ports. For example, a simple pass-through stage might perform something like the following (a sketch; exact accessor names may differ):
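
>>> for (port_idx, port_schema) in enumerate(schema.input_schemas):
...     schema.output_schemas[port_idx].set_type(port_schema.get_type())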

If the port types in upstream_schema are incompatible, the stage should raise a RuntimeError.

static derive_features(df, feature_columns)[source]

Derives feature columns from the DUO (logs) source columns.

Parameters
df

Dataframe for deriving columns.

feature_columns

Names of the columns that need to be derived.

Returns
df

Dataframe with actual and derived columns.
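
A simplified sketch of the two derived features listed at the top of this page, applied to a single user's logs; the source column names city and event_dt are illustrative, not necessarily the stage's actual columns:

    import pandas as pd

    def derive_features_sketch(df):
        # Day bucket used for both derived features.
        day = df["event_dt"].dt.date
        # logcount: running count of the user's logs within each day.
        df["logcount"] = df.groupby(day).cumcount()
        # locincrement: running count of distinct cities seen within each day.
        df["locincrement"] = df.groupby(day)["city"].transform(
            lambda x: pd.factorize(x)[0] + 1)
        return df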

static files_to_dfs_per_user(x, userid_column_name, feature_columns, userid_filter=None, repeat_count=1)[source]

After loading the input batch of DUO logs into a dataframe, this method builds a dataframe for each set of userid rows in accordance with the specified filter condition.

Parameters
x

List of messages.

userid_column_name

Name of the column used for categorization.

feature_columns

Feature column names.

userid_filter

If specified, only rows with this userid are retained.

repeat_count

Number of times the given rows should be repeated.

Returns
df_per_user

Dataframe per userid.

get_all_input_stages()[source]

Get all input stages to this stage.

Returns
list[morpheus.pipeline.pipeline.StageBase]

All input stages.

get_all_inputs()[source]

Get all input senders to this stage.

Returns
list[morpheus.pipeline.pipeline.Sender]

All input senders.

get_all_output_stages()[source]

Get all output stages from this stage.

Returns
list[morpheus.pipeline.pipeline.StageBase]

All output stages.

get_all_outputs()[source]

Get all output receivers from this stage.

Returns
list[morpheus.pipeline.pipeline.Receiver]

All output receivers.

get_match_pattern(glob_split)[source]

Returns a file match pattern.

get_needed_columns()[source]

Stages which need to have columns inserted into the dataframe should populate the self._needed_columns dictionary with a mapping of column names to morpheus.common.TypeId. This will ensure that the columns are allocated and populated with null values.
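
A sketch of how a derived stage might register columns for preallocation; the subclass and the TypeId member INT32 are assumptions for illustration:

    from morpheus.common import TypeId

    class MyDuoSource(DuoSourceStage):  # hypothetical subclass
        def __init__(self, c, input_glob):
            super().__init__(c, input_glob)
            # Ask the pipeline to preallocate these derived columns,
            # populated with null values until the stage fills them.
            self._needed_columns["locincrement"] = TypeId.INT32
            self._needed_columns["logcount"] = TypeId.INT32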

property has_multi_input_ports: bool

Indicates if this stage has multiple input ports.

Returns
bool

True if stage has multiple input ports, False otherwise.

property has_multi_output_ports: bool

Indicates if this stage has multiple output ports.

Returns
bool

True if stage has multiple output ports, False otherwise.

property input_count: int

Returns None when there is no maximum input count.

property input_ports: list[morpheus.pipeline.receiver.Receiver]

Input ports to this stage.

Returns
list[morpheus.pipeline.pipeline.Receiver]

Input ports to this stage.

property is_built: bool

Indicates if this stage has been built.

Returns
bool

True if stage is built, False otherwise.

property is_pre_built: bool

Indicates if this stage has been pre-built.

Returns
bool

True if the stage has been pre-built, False otherwise.

async join()[source]

Awaitable method that stages can implement to perform cleanup steps when the pipeline is stopped. Typically this is called after stop during a graceful shutdown, but it may not be called if the pipeline is terminated on its own.

property name: str

Unique name for the stage.

property output_ports: list[morpheus.pipeline.sender.Sender]

Output ports from this stage.

Returns
list[morpheus.pipeline.pipeline.Sender]

Output ports from this stage.

static repeat_df(df, repeat_count)[source]

Repeats the given dataframe to extend small datasets for debugging, applying incremental updates to the event_dt and eventTime columns.

Parameters
df

Dataframe to be repeated.

repeat_count

Number of times the given dataframe should be repeated.

Returns
df_array

List of repeated dataframes.
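
A sketch of the repetition described above, assuming each copy's timestamps are shifted forward so the repeated rows stay distinct (the exact offset applied by the stage is not documented here):

    import pandas as pd

    def repeat_df_sketch(df, repeat_count):
        df_array = []
        for i in range(repeat_count):
            copy = df.copy()
            # Shift the timestamp columns forward one day per repetition;
            # assumes both columns hold datetime values.
            copy["event_dt"] = copy["event_dt"] + pd.Timedelta(days=i)
            copy["eventTime"] = copy["eventTime"] + pd.Timedelta(days=i)
            df_array.append(copy)
        return df_array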

set_needed_columns(needed_columns)[source]

Sets the columns needed to perform preallocation. This should only be called by the Pipeline at build time. The needed_columns should contain the entire set of columns needed by any other stage in this segment.

async start_async()[source]

This function is called along with on_start during stage initialization. Allows stages to utilize the asyncio loop if needed.

stop()[source]

Stages can implement this to perform cleanup steps when pipeline is stopped.

supports_cpp_node()[source]

Indicates that this stage does not support a C++ node.

property unique_name: str

Unique name of stage. Generated by appending the stage id to the stage name.

Returns
str

Unique name of stage.
