morpheus.stages.input.azure_source_stage.AzureSourceStage
- class AzureSourceStage(c, input_glob, watch_directory=False, max_files=-1, file_type=<FileTypes.Auto: 0>, repeat=1, sort_glob=False, recursive=True, queue_max_size=128, batch_timeout=5.0)[source]
Bases:
morpheus.stages.input.autoencoder_source_stage.AutoencoderSourceStage
Source stage is used to load Azure Active Directory messages. A minimal usage sketch follows the parameter list below.
- Adds the following derived features:
- appincrement: Increments every time the logs contain a distinct app.
- locincrement: Increments every time a log contains a distinct city within a day.
- logcount: Tracks the number of logs generated by a user within a day.
- Parameters
- c
morpheus.config.Config
Pipeline configuration instance.
- input_glob: str
Input glob pattern to match files to read. For example, /input_dir/*.json would read all files with the ‘json’ extension in the directory input_dir.
- watch_directory: bool, default = False
The watch directory option instructs this stage to not close down once all files have been read. Instead it will read all files that match the ‘input_glob’ pattern, and then continue to watch the directory for additional files. Any new files that are added that match the glob will then be processed.
- max_files: int, default = -1
Max number of files to read. Useful for debugging to limit startup time. Default value of -1 is unlimited.
- file_type: morpheus.common.FileTypes, default = ‘FileTypes.Auto’
Indicates what type of file to read. Specifying ‘auto’ will determine the file type from the extension. Supported extensions: ‘json’, ‘csv’.
- repeat: int, default = 1
How many times to repeat the dataset. Useful for extending small datasets when debugging.
- sort_glob: bool, default = False
If true, the list of files matching input_glob will be processed in sorted order.
- recursive: bool, default = True
If true, events will be emitted for the files in subdirectories that match input_glob.
- queue_max_size: int, default = 128
Maximum queue size to hold the file paths to be processed that match input_glob.
- batch_timeout: float, default = 5.0
Timeout to retrieve batch messages from the queue.
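A minimal sketch of constructing this stage and adding it to a pipeline is shown below. The glob path is a placeholder, and the Config is assumed to already be populated with the autoencoder (AE) settings this stage expects; treat it as an illustrative sketch rather than a complete workflow.

# Illustrative sketch only: assumes Morpheus is installed and that `config`
# has been populated with the AE (autoencoder) settings this stage expects.
from morpheus.config import Config
from morpheus.pipeline.linear_pipeline import LinearPipeline
from morpheus.stages.input.azure_source_stage import AzureSourceStage

config = Config()
# config.ae = ...  # AE-specific settings (userid column, feature columns, ...)

pipeline = LinearPipeline(config)
pipeline.set_source(
    AzureSourceStage(config,
                     input_glob="/input_dir/*.json",  # read every JSON file in input_dir
                     watch_directory=False,           # stop once the existing files are read
                     sort_glob=True))                 # process matched files in sorted order
# Downstream stages would be added here before calling pipeline.run().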
- Attributes
df_type_str
Returns the DataFrame module that should be used for the given execution mode.
has_multi_input_ports
Indicates if this stage has multiple input ports.
has_multi_output_ports
Indicates if this stage has multiple output ports.
input_count
Returns None, indicating there is no maximum input count.
input_ports
Input ports to this stage.
is_built
Indicates if this stage has been built.
is_pre_built
Indicates if this stage has been pre-built.
name
The name of the stage.
output_ports
Output ports from this stage.
unique_name
Unique name of stage.
Methods
- batch_user_split(x, userid_column_name, ...): Creates a dataframe for each userid.
- build(builder[, do_propagate]): Build this stage.
- can_build([check_ports]): Determines if all inputs have been built allowing this node to be built.
- can_pre_build([check_ports]): Determines if all inputs have been built allowing this node to be built.
- change_columns(df): Removes characters (_, ., {, }, :) from the names of the dataframe columns.
- compute_schema(schema): Compute the schema for this stage based on the incoming schema from upstream stages.
- derive_features(df, feature_columns): Derives feature columns from the AzureAD (logs) source columns.
- files_to_dfs_per_user(x, userid_column_name, ...): After loading the input batch of AzureAD logs into a dataframe, builds a dataframe for each set of userid rows in accordance with the specified filter condition.
- get_all_input_stages(): Get all input stages to this stage.
- get_all_inputs(): Get all input senders to this stage.
- get_all_output_stages(): Get all output stages from this stage.
- get_all_outputs(): Get all output receivers from this stage.
- get_df_class(): Returns the DataFrame class that should be used for the given execution mode.
- get_df_pkg(): Returns the DataFrame package that should be used for the given execution mode.
- get_match_pattern(glob_split): Return a file match pattern.
- get_needed_columns(): Stages which need to have columns inserted into the dataframe should populate the self._needed_columns dictionary with a mapping of column names to morpheus.common.TypeId.
- is_stop_requested(): Returns True if a stop has been requested.
- join(): Awaitable method that stages can implement to perform cleanup steps when the pipeline is stopped.
- repeat_df(df, repeat_count): Iterates over the same dataframe to extend small datasets when debugging, with incremental updates to the event_dt and eventTime columns.
- request_stop(): Request the source to stop processing data.
- set_needed_columns(needed_columns): Sets the columns needed to perform preallocation.
- start_async(): This function is called along with on_start during stage initialization.
- stop(): This method is invoked by the pipeline whenever there is an unexpected shutdown.
- supported_execution_modes(): Returns a tuple of supported execution modes of this stage.
- supports_cpp_node(): Specifies whether this Stage is capable of creating C++ nodes.
- _build(builder, input_nodes)[source]
This function is responsible for constructing this stage’s internal mrc.SegmentObject object. The input of this function contains the returned value from the upstream stage. The input values are the mrc.Builder for this stage and a list of parent nodes.
- Parameters
- builder: mrc.Builder
mrc.Builder object for the pipeline. This should be used to construct/attach the internal mrc.SegmentObject.
- input_nodes: list[mrc.SegmentObject]
List containing the input mrc.SegmentObject objects.
- Returns
- list[mrc.SegmentObject]
List containing the output mrc.SegmentObject objects from this stage.
- _build_source(builder)[source]
Abstract method that all derived Source classes should implement. Returns the same value as build.
- Returns
- mrc.SegmentObject
The MRC node for this stage.
- _build_sources(builder)[source]
Abstract method that all derived Source classes should implement. Returns the same value as build.
- Returns
- mrc.SegmentObject
The MRC nodes for this stage.
- static batch_user_split(x, userid_column_name, userid_filter, datetime_column_name='event_dt')[source]
Creates a dataframe for each userid.
- Parameters
- x: list[pd.DataFrame]
List of dataframes.
- userid_column_name: str
Name of a dataframe column used for categorization.
- userid_filter: str
Only rows with the supplied userid are retained.
- datetime_column_name: str
Name of the dataframe column used to sort the rows.
- Returns
- user_dfs: dict[str, pd.DataFrame]
Dataframes, each of which is associated with a single userid.
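Conceptually, the split amounts to sorting by the datetime column and grouping by the userid column. The following is a rough pandas equivalent for illustration only, not the stage’s actual implementation:

import pandas as pd

def split_by_user(dfs, userid_column_name, userid_filter=None, datetime_column_name="event_dt"):
    # Rough pandas equivalent of batch_user_split (illustrative only).
    combined = pd.concat(dfs).sort_values(by=datetime_column_name)
    if userid_filter is not None:
        # Keep only the rows belonging to the requested userid.
        combined = combined[combined[userid_column_name] == userid_filter]
    return {userid: user_df for userid, user_df in combined.groupby(userid_column_name)}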
- build(builder, do_propagate=True)[source]
Build this stage.
- Parameters
- builder: mrc.Builder
MRC segment for this stage.
- do_propagate: bool, optional
Whether to propagate to build output stages, by default True.
- can_build(check_ports=False)[source]
Determines if all inputs have been built allowing this node to be built.
- Parameters
- check_ports: bool, optional
Check if we can build based on the input ports, by default False.
- Returns
- bool
True if we can build, False otherwise.
- can_pre_build(check_ports=False)[source]
Determines if all inputs have been built allowing this node to be built.
- Parameters
- check_ports: bool, optional
Check if we can build based on the input ports, by default False.
- Returns
- bool
True if we can build, False otherwise.
- static change_columns(df)[source]
Removes characters (_, ., {, }, :) from the names of the dataframe columns.
- Parameters
- df: pd.DataFrame
Dataframe that requires column renaming.
- Returns
- df: pd.DataFrame
Dataframe with renamed columns.
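As a rough illustration of the renaming (not the exact implementation), the listed characters are simply stripped from each column name:

import pandas as pd

def strip_column_chars(df):
    # Illustrative sketch: drop the characters _, ., {, }, : from the column names.
    df.columns = [col.translate(str.maketrans("", "", "_.{}:")) for col in df.columns]
    return df

df = pd.DataFrame(columns=["properties.userPrincipalName", "location.city"])
print(strip_column_chars(df).columns.tolist())
# ['propertiesuserPrincipalName', 'locationcity']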
- compute_schema(schema)[source]
Compute the schema for this stage based on the incoming schema from upstream stages.
Incoming schema and type information from upstream stages is available via the schema.input_schemas and schema.input_types properties.
Derived classes need to override this method and can set the output type(s) on schema by calling set_type for all output ports. For example, a simple pass-thru stage might perform the following:
>>> for (port_idx, port_schema) in enumerate(schema.input_schemas):
...     schema.output_schemas[port_idx].set_type(port_schema.get_type())
>>>
If the port types in upstream_schema are incompatible, the stage should raise a RuntimeError.
- static derive_features(df, feature_columns)[source]
Derives feature columns from the AzureAD (logs) source columns.
- Parameters
- df: pd.DataFrame
Dataframe for deriving columns.
- feature_columns: typing.List[str]
Names of the columns that need to be derived.
- Returns
- df: typing.List[pd.DataFrame]
Dataframe with actual and derived columns.
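The derived counters described above behave roughly like the following pandas sketch. The source column names used here (appDisplayName, location.city, event_dt) are assumptions made for illustration, not the stage’s actual schema:

import pandas as pd

def derive_counters(df):
    # Illustrative sketch; the column names are assumptions, not the real schema.
    df = df.sort_values("event_dt")
    day = df["event_dt"].dt.date
    # appincrement: running count of distinct apps seen so far.
    df["appincrement"] = (~df["appDisplayName"].duplicated()).cumsum()
    # locincrement: running count of distinct cities seen within each day.
    df["locincrement"] = df.groupby(day)["location.city"].transform(
        lambda s: (~s.duplicated()).cumsum())
    # logcount: running number of logs for the user within each day.
    df["logcount"] = df.groupby(day).cumcount() + 1
    return df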
- property df_type_str: Literal['cudf', 'pandas']
Returns the DataFrame module that should be used for the given execution mode.
- static files_to_dfs_per_user(x, userid_column_name, feature_columns, userid_filter=None, repeat_count=1)[source]
After loading the input batch of AzureAD logs into a dataframe, this method builds a dataframe for each set of userid rows in accordance with the specified filter condition.
- Parameters
- x: typing.List[str]
List of messages.
- userid_column_name: str
Name of the column used for categorization.
- feature_columns: typing.List[str]
Feature column names.
- userid_filter: str
Only rows with the supplied userid are retained.
- repeat_count: int
Number of times the given rows should be repeated.
- Returns
- df_per_user: typing.Dict[str, pd.DataFrame]
Dataframe per userid.
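A hedged usage sketch of calling the static helper directly is shown below; the file names, userid column, and feature columns are placeholders rather than the stage’s defaults:

from morpheus.stages.input.azure_source_stage import AzureSourceStage

# Placeholder inputs for illustration only.
user_dfs = AzureSourceStage.files_to_dfs_per_user(
    x=["azure_logs_part1.json", "azure_logs_part2.json"],
    userid_column_name="userPrincipalName",
    feature_columns=["appDisplayName", "clientAppUsed"],
    userid_filter=None,
    repeat_count=1)

for userid, user_df in user_dfs.items():
    print(userid, len(user_df))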
- get_all_input_stages()[source]
Get all input stages to this stage.
- Returns
- list[morpheus.pipeline.pipeline.StageBase]
All input stages.
- get_all_inputs()[source]
Get all input senders to this stage.
- Returns
- list[morpheus.pipeline.pipeline.Sender]
All input senders.
- get_all_output_stages()[source]
Get all output stages from this stage.
- Returns
- list[morpheus.pipeline.pipeline.StageBase]
All output stages.
- get_all_outputs()[source]
Get all output receivers from this stage.
- Returns
- list[morpheus.pipeline.pipeline.Receiver]
All output receivers.
- get_df_class()[source]
Returns the DataFrame class that should be used for the given execution mode.
- get_df_pkg()[source]
Returns the DataFrame package that should be used for the given execution mode.
- get_match_pattern(glob_split)[source]
Return a file match pattern.
- get_needed_columns()[source]
Stages which need to have columns inserted into the dataframe should populate the self._needed_columns dictionary with a mapping of column names to morpheus.common.TypeId. This will ensure that the columns are allocated and populated with null values.
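For example, a hypothetical derived stage could register an extra preallocated column like this; the subclass and column name are illustrations, not part of Morpheus:

from morpheus.common import TypeId
from morpheus.config import Config
from morpheus.stages.input.azure_source_stage import AzureSourceStage

class AzureSourceWithExtraColumn(AzureSourceStage):
    # Hypothetical subclass used only to illustrate populating _needed_columns.
    def __init__(self, config: Config, input_glob: str):
        super().__init__(config, input_glob)
        # Ask the pipeline to preallocate a null-filled string column on the
        # dataframes emitted from this segment. The column name is made up.
        self._needed_columns["my_extra_column"] = TypeId.STRING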
- property has_multi_input_ports: bool
Indicates if this stage has multiple input ports.
- Returns
- bool
True if stage has multiple input ports, False otherwise.
- property has_multi_output_ports: bool
Indicates if this stage has multiple output ports.
- Returns
- bool
True if stage has multiple output ports, False otherwise.
- property input_count: int
Returns None, indicating there is no maximum input count.
- property input_ports: list[morpheus.pipeline.receiver.Receiver]
Input ports to this stage.
- Returns
- list[morpheus.pipeline.pipeline.Receiver]
Input ports to this stage.
- property is_built: bool
Indicates if this stage has been built.
- Returns
- bool
True if stage is built, False otherwise.
- property is_pre_built: bool
Indicates if this stage has been pre-built.
- Returns
- bool
True if stage is pre-built, False otherwise.
- is_stop_requested()[source]
Returns True if a stop has been requested.
- Returns
- bool
True if a stop has been requested, False otherwise.
- async join()[source]
Awaitable method that stages can implement to perform cleanup steps when the pipeline is stopped. Typically this is called after stop during a graceful shutdown, but may not be called if the pipeline is terminated on its own.
- property name: str
The name of the stage. Used in logging. Each derived class should override this property with a unique name.
- Returns
- str
Name of a stage.
- property output_ports: list[morpheus.pipeline.sender.Sender]
Output ports from this stage.
- Returns
- list[morpheus.pipeline.pipeline.Sender]
Output ports from this stage.
- static repeat_df(df, repeat_count)[source]
This function iterates over the same dataframe to extend small datasets when debugging, with incremental updates to the event_dt and eventTime columns.
- Parameters
- df: pd.DataFrame
Dataframe to be repeated.
- repeat_count: int
Number of times the given dataframe should be repeated.
- Returns
- df_array: list[pd.DataFrame]
List of repeated dataframes.
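A rough pandas sketch of the repetition is shown below; the one-day offset and the datetime dtypes are assumptions for illustration, not necessarily the increment the stage actually applies:

import pandas as pd

def repeat_with_shifted_times(df, repeat_count):
    # Illustrative sketch: copy the dataframe repeat_count times, nudging the
    # datetime columns forward on each copy so repeated events stay distinct.
    copies = []
    for i in range(repeat_count):
        copy = df.copy()
        if i > 0:
            offset = pd.Timedelta(days=i)  # the real increment may differ
            copy["event_dt"] = copy["event_dt"] + offset
            copy["eventTime"] = copy["eventTime"] + offset
        copies.append(copy)
    return copies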
- request_stop()[source]
Request the source to stop processing data.
- set_needed_columns(needed_columns)[source]
Sets the columns needed to perform preallocation. This should only be called by the Pipeline at build time. The needed_columns should contain the entire set of columns needed by any other stage in this segment.
- async start_async()[source]
This function is called along with on_start during stage initialization. Allows stages to utilize the asyncio loop if needed.
- stop()[source]
This method is invoked by the pipeline whenever there is an unexpected shutdown. Subclasses should override this method to perform any necessary cleanup operations.
- supported_execution_modes()[source]
Returns a tuple of supported execution modes of this stage.
- supports_cpp_node()[source]
Specifies whether this Stage is capable of creating C++ nodes. During the build phase, this value will be combined with CppConfig.get_should_use_cpp() to determine whether or not a C++ node is created. This is an instance method to allow runtime decisions and derived classes to override base implementations.
- property unique_name: str
Unique name of stage. Generated by appending stage id to stage name.
- Returns
- str
Unique name of stage.