morpheus.stages.input.cloud_trail_source_stage.CloudTrailSourceStage
- class CloudTrailSourceStage(c, input_glob, watch_directory=False, max_files=-1, file_type=FileTypes.Auto, repeat=1, sort_glob=False, recursive=True, queue_max_size=128, batch_timeout=5.0)[source]
Bases: morpheus.stages.input.autoencoder_source_stage.AutoencoderSourceStage
Load messages from a CloudTrail directory.
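A minimal usage sketch follows, assuming a typical autoencoder-style Morpheus pipeline. The glob path, userid column, and feature columns are illustrative placeholders, not values mandated by the stage.

```python
# Hedged usage sketch: wire the stage into a LinearPipeline. The glob path,
# userid column, and feature columns below are illustrative placeholders.
from morpheus.config import Config, ConfigAutoEncoder
from morpheus.pipeline import LinearPipeline
from morpheus.stages.input.cloud_trail_source_stage import CloudTrailSourceStage

config = Config()
config.ae = ConfigAutoEncoder()  # AE-style source stages read userid/feature settings here
config.ae.userid_column_name = "userIdentityaccountId"
config.ae.feature_columns = ["eventName", "sourceIPAddress"]

pipeline = LinearPipeline(config)
pipeline.set_source(
    CloudTrailSourceStage(config,
                          input_glob="/data/cloudtrail/*.json",
                          sort_glob=True))
pipeline.run()
```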
- Attributes
has_multi_input_ports
Indicates if this stage has multiple input ports.
has_multi_output_ports
Indicates if this stage has multiple output ports.
input_count
Return None for no max input count.
input_ports
Input ports to this stage.
is_built
Indicates if this stage has been built.
name
The name of the stage.
output_ports
Output ports from this stage.
unique_name
Unique name of stage.
Methods
batch_user_split(x, userid_column_name, ...)
Creates a dataframe for each userid.
build(builder[, do_propagate])
Build this stage.
can_build([check_ports])
Determines if all inputs have been built, allowing this node to be built.
cleanup_df(df, feature_columns)
Cleans up certain columns in the dataframe.
derive_features(df, feature_columns)
Derives additional features; can be implemented by overriding this function.
files_to_dfs_per_user(x, userid_column_name, ...)
After loading the input batch of CloudTrail logs into a dataframe, builds a dataframe for each set of userid rows in accordance with the specified filter condition.
get_all_input_stages()
Get all input stages to this stage.
get_all_inputs()
Get all input senders to this stage.
get_all_output_stages()
Get all output stages from this stage.
get_all_outputs()
Get all output receivers from this stage.
get_match_pattern(glob_split)
Return a file match pattern.
get_needed_columns()
Stages which need columns inserted into the dataframe should populate the self._needed_columns dictionary with a mapping of column names to morpheus.common.TypeId.
join()
Awaitable method that stages can implement to perform cleanup steps when the pipeline is stopped.
read_file(filename, file_type)
Reads a file into a dataframe.
repeat_df(df, repeat_count)
Repeats a dataframe to extend small datasets during debugging, with incremental updates to the event_dt and eventTime columns.
set_needed_columns(needed_columns)
Sets the columns needed to perform preallocation.
stop()
Stages can implement this to perform cleanup steps when the pipeline is stopped.
supports_cpp_node()
Specifies whether this Stage is capable of creating C++ nodes.
- _build(builder, in_ports_streams)[source]
This function is responsible for constructing this stage's internal mrc.SegmentObject object. The input of this function contains the returned value from the upstream stage. The input values are the mrc.Builder for this stage and a StreamPair tuple which contains the input mrc.SegmentObject object and the message data type.
- Parameters
- builder : mrc.Builder
mrc.Builder object for the pipeline. This should be used to construct/attach the internal mrc.SegmentObject.
- in_ports_streams : morpheus.pipeline.pipeline.StreamPair
List of tuples containing the input mrc.SegmentObject object and the message data type.
- Returns
- typing.List[morpheus.pipeline.pipeline.StreamPair]
List of tuples containing the output mrc.SegmentObject object from this stage and the message data type.
- _build_source(seg)[source]
Abstract method all derived Source classes should implement. Returns the same value as build.
- Returns
- morpheus.pipeline.pipeline.StreamPair
A tuple containing the output mrc.SegmentObject object from this stage and the message data type.
- static batch_user_split(x, userid_column_name, userid_filter, datetime_column_name='event_dt')[source]
Creates a dataframe for each userid.
- Parameters
- x : typing.List[pd.DataFrame]
List of dataframes.
- userid_column_name : str
Name of the dataframe column used for categorization.
- userid_filter : str
If supplied, only rows with this userid are kept.
- datetime_column_name : str
Name of the dataframe column used to sort the rows.
- Returns
- user_dfs : typing.Dict[str, pd.DataFrame]
Dataframes, each of which is associated with a single userid.
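As a sketch of the contract just described, the static method can be called directly on plain pandas frames; the userid column name below is illustrative.

```python
# Hedged call sketch for batch_user_split; the userid column name is
# illustrative, and event_dt matches the default datetime_column_name.
import pandas as pd
from morpheus.stages.input.cloud_trail_source_stage import CloudTrailSourceStage

df = pd.DataFrame({
    "userIdentityaccountId": ["user-a", "user-b", "user-a"],
    "event_dt": pd.to_datetime(["2023-01-01", "2023-01-03", "2023-01-02"]),
})

# Pass None for userid_filter to get one dataframe per userid; pass a
# userid string to keep only that user's rows.
user_dfs = CloudTrailSourceStage.batch_user_split([df], "userIdentityaccountId", None)
```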
- build(builder, do_propagate=True)[source]
Build this stage.
- Parameters
- builder : mrc.Builder
MRC segment for this stage.
- do_propagate : bool, optional
Whether to propagate to build output stages, by default True.
- can_build(check_ports=False)[source]
Determines if all inputs have been built, allowing this node to be built.
- Parameters
- check_ports : bool, optional
Check if we can build based on the input ports, by default False.
- Returns
- bool
True if we can build, False otherwise.
- static cleanup_df(df, feature_columns)[source]
Cleans up certain columns in the dataframe.
- Parameters
- df : pd.DataFrame
Dataframe for column cleanup.
- feature_columns : typing.List[str]
If feature columns are supplied, only the columns present in the feature columns are preserved in the dataframe.
- Returns
- df : typing.List[pd.DataFrame]
Clean dataframe.
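The column-preservation behavior described above amounts to the following pandas sketch; this is a behavioral illustration with made-up column names, not the stage's source.

```python
# Behavioral sketch of the documented column cleanup: keep only the columns
# that appear in feature_columns. Column names are illustrative.
import pandas as pd

df = pd.DataFrame({"eventName": ["ConsoleLogin"], "extraColumn": [42]})
feature_columns = ["eventName"]

df = df[[col for col in df.columns if col in feature_columns]]
```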
- static derive_features(df, feature_columns)[source]
Derives additional features; subclasses can implement this by overriding this function.
- Parameters
- df : pd.DataFrame
A dataframe.
- feature_columns : typing.List[str]
Names of columns that need to be derived.
- Returns
- df : typing.List[pd.DataFrame]
Dataframe with actual and derived columns.
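A hedged sketch of overriding derive_features in a subclass; the event_hour feature and the eventTime source column are purely illustrative.

```python
# Hedged override sketch: derive an extra feature column in a subclass.
# The event_hour feature and the eventTime source column are illustrative.
import typing

import pandas as pd

from morpheus.stages.input.cloud_trail_source_stage import CloudTrailSourceStage


class HourAwareCloudTrailSource(CloudTrailSourceStage):

    @staticmethod
    def derive_features(df: pd.DataFrame, feature_columns: typing.List[str]):
        # Extract the hour of day from the raw event timestamp.
        df["event_hour"] = pd.to_datetime(df["eventTime"]).dt.hour
        return df
```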
- static files_to_dfs_per_user(x, userid_column_name, feature_columns, userid_filter=None, repeat_count=1)[source]
After loading the input batch of CloudTrail logs into a dataframe, this method builds a dataframe for each set of userid rows in accordance with the specified filter condition.
- Parameters
- x : typing.List[str]
List of messages.
- userid_column_name : str
Name of the column used for categorization.
- feature_columns : typing.List[str]
Feature column names.
- userid_filter : str
If supplied, only rows with this userid are kept.
- repeat_count : int
Number of times the given rows should be repeated.
- Returns
- df_per_user : typing.Dict[str, pd.DataFrame]
Dataframe per userid.
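A minimal call sketch; the file path and column names are placeholders and must point at real CloudTrail logs to run.

```python
# Hedged call sketch for files_to_dfs_per_user; the path and column names
# are placeholders, not values required by the method.
from morpheus.stages.input.cloud_trail_source_stage import CloudTrailSourceStage

dfs_per_user = CloudTrailSourceStage.files_to_dfs_per_user(
    ["/data/cloudtrail/log-0001.json"],
    userid_column_name="userIdentityaccountId",
    feature_columns=["eventName", "sourceIPAddress"],
    userid_filter=None,
    repeat_count=1)
```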
- get_all_input_stages()[source]
Get all input stages to this stage.
- Returns
- typing.List[morpheus.pipeline.pipeline.StreamWrapper]
All input stages.
- get_all_inputs()[source]
Get all input senders to this stage.
- Returns
- typing.List[morpheus.pipeline.pipeline.Sender]
All input senders.
- get_all_output_stages()[source]
Get all output stages from this stage.
- Returns
- typing.List[morpheus.pipeline.pipeline.StreamWrapper]
All output stages.
- get_all_outputs()[source]
Get all output receivers from this stage.
- Returns
- typing.List[morpheus.pipeline.pipeline.Receiver]
All output receivers.
- get_match_pattern(glob_split)[source]
Return a file match pattern.
- get_needed_columns()[source]
Stages which need to have columns inserted into the dataframe should populate the self._needed_columns dictionary with a mapping of column names to morpheus.common.TypeId. This will ensure that the columns are allocated and populated with null values.
- property has_multi_input_ports: bool
Indicates if this stage has multiple input ports.
- Returns
- bool
True if stage has multiple input ports, False otherwise.
- property has_multi_output_ports: bool
Indicates if this stage has multiple output ports.
- Returns
- bool
True if stage has multiple output ports, False otherwise.
- property input_count: int
Return None for no max input count.
- property input_ports: List[morpheus.pipeline.receiver.Receiver]
Input ports to this stage.
- Returns
- typing.List[morpheus.pipeline.pipeline.Receiver]
Input ports to this stage.
- property is_built: bool
Indicates if this stage has been built.
- Returns
- bool
True if stage is built, False otherwise.
- async join()[source]
Awaitable method that stages can implement to perform cleanup steps when the pipeline is stopped. Typically this is called after stop during a graceful shutdown, but may not be called if the pipeline is terminated on its own.
- property name: str
The name of the stage. Used in logging. Each derived class should override this property with a unique name.
- Returns
- str
Name of a stage.
- property output_ports: List[morpheus.pipeline.sender.Sender]
Output ports from this stage.
- Returns
- typing.List[morpheus.pipeline.pipeline.Sender]
Output ports from this stage.
- static read_file(filename, file_type)[source]
Reads a file into a dataframe.
- Parameters
- filename : str
Path to a file to read.
- file_type : morpheus.common.FileTypes
What type of file to read. Leave as Auto to auto-detect based on the file extension.
- Returns
- pandas.DataFrame
The parsed dataframe.
- Raises
- RuntimeError
If an unsupported file type is detected.
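A minimal call sketch using the signature above; the file path is a placeholder.

```python
# Hedged call sketch for read_file; the path is a placeholder. FileTypes.Auto
# selects the parser from the file extension, per the description above.
from morpheus.common import FileTypes
from morpheus.stages.input.cloud_trail_source_stage import CloudTrailSourceStage

df = CloudTrailSourceStage.read_file("/data/cloudtrail/log-0001.json",
                                     FileTypes.Auto)
```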
- static repeat_df(df, repeat_count)[source]
Iterates over the same dataframe to extend small datasets during debugging, with incremental updates to the event_dt and eventTime columns.
- Parameters
- df : pd.DataFrame
Dataframe to be repeated.
- repeat_count : int
Number of times the given dataframe should be repeated.
- Returns
- df_array : typing.List[pd.DataFrame]
List of repeated dataframes.
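A hedged call sketch; the timestamps are illustrative, and the event_dt/eventTime columns are included because the description says they receive incremental updates on each repeat.

```python
# Hedged call sketch for repeat_df: replicate a tiny frame three times.
# Timestamp values are illustrative.
import pandas as pd
from morpheus.stages.input.cloud_trail_source_stage import CloudTrailSourceStage

df = pd.DataFrame({
    "eventTime": pd.to_datetime(["2023-01-01T00:00:00"]),
    "event_dt": pd.to_datetime(["2023-01-01T00:00:00"]),
})
df_array = CloudTrailSourceStage.repeat_df(df, repeat_count=3)  # list of 3 frames
```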
- set_needed_columns(needed_columns)[source]
Sets the columns needed to perform preallocation. This should only be called by the Pipeline at build time. The needed_columns should contain the entire set of columns needed by any other stage in this segment.
- stop()[source]
Stages can implement this to perform cleanup steps when pipeline is stopped.
- supports_cpp_node()[source]
Specifies whether this Stage is capable of creating C++ nodes. During the build phase, this value will be combined with CppConfig.get_should_use_cpp() to determine whether or not a C++ node is created. This is an instance method to allow runtime decisions and derived classes to override base implementations.
- property unique_name: str
Unique name of stage. Generated by appending stage id to stage name.
- Returns
- str
Unique name of stage.