tecton.SparkBatchConfig
Summary
Configuration used to define a batch source using a Data Source Function.
The SparkBatchConfig class configures a batch source backed by a user-defined Data Source Function. It is used as the batch_config parameter of a BatchSource. Declaring this configuration class alone will not register a Data Source; instead, declare a BatchSource that takes this configuration class instance as a parameter.
Do not instantiate this class directly. Use tecton.spark_batch_config() instead.
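As a minimal sketch of how this fits together: the function below is decorated with tecton.spark_batch_config() and then passed as the batch_config of a BatchSource. The data path, column name, and source name here are hypothetical, and this assumes a Tecton workspace and Spark environment are available.

```python
from tecton import spark_batch_config, BatchSource

# Decorating the function produces a SparkBatchConfig; do not construct one directly.
@spark_batch_config(supports_time_filtering=True)
def transactions_data_source_function(spark, filter_context):
    # Hypothetical parquet path and timestamp column for illustration.
    df = spark.read.parquet("s3://example-bucket/transactions")
    # With supports_time_filtering=True, Tecton passes a FilterContext and the
    # function is responsible for applying its own time filtering.
    if filter_context:
        if filter_context.start_time:
            df = df.where(df.timestamp >= filter_context.start_time)
        if filter_context.end_time:
            df = df.where(df.timestamp < filter_context.end_time)
    return df

# Registering the Data Source happens via BatchSource, not via the config alone.
transactions = BatchSource(
    name="transactions",
    batch_config=transactions_data_source_function,
)
```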
Attributes
data_delay: This attribute is the same as the data_delay parameter of the __init__ method. See below.
Methods
__init__(...)
Instantiates a new SparkBatchConfig.
Parameters
- data_source_function (Union[Callable[[SparkSession], DataFrame], Callable[[SparkSession, FilterContext], DataFrame]]) – User-defined Data Source Function that takes in a SparkSession and, if supports_time_filtering=True, an optional tecton.FilterContext. Returns a DataFrame.
- data_delay (timedelta) – By default, incremental materialization jobs run immediately at the end of the batch schedule period. This parameter configures how long they wait after the end of the period before starting, typically to ensure that all data has landed. For example, if a feature view has a batch_schedule of 1 day and one of the data source inputs has a data_delay of 1 hour, then incremental materialization jobs will run at 01:00 UTC. (Default: datetime.timedelta(0))
- supports_time_filtering (bool) – Must be set to True if either of the following conditions is met:
  - <data source>.get_dataframe() is called with start_time or end_time
  - A feature view wraps this Data Source with a FilteredSource
  If this parameter is set to True, Tecton passes a FilterContext object into the Data Source Function, which is expected to handle its own filtering. (Default: False)
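The data_delay timing described above can be sketched with plain datetime arithmetic. This is a standalone illustration, not Tecton code, and the helper name is hypothetical:

```python
from datetime import datetime, timedelta, timezone

def materialization_start(period_end: datetime, data_delay: timedelta) -> datetime:
    """Incremental materialization jobs start data_delay after the batch period ends."""
    return period_end + data_delay

# A daily batch_schedule whose period ends at midnight UTC, combined with a
# 1-hour data_delay on one input, starts the job at 01:00 UTC.
period_end = datetime(2023, 5, 2, 0, 0, tzinfo=timezone.utc)
print(materialization_start(period_end, timedelta(hours=1)))  # 2023-05-02 01:00:00+00:00
```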
Returns
A SparkBatchConfig class instance.