get_historical_features() Runs Slowly or Fails
This troubleshooting article covers how to diagnose slow or failing
get_historical_features() (GHF) calls in a Spark-based Tecton cluster. It does
not cover Snowflake-based clusters or the v0.5 SDK.
This issue can manifest itself via the following symptoms:
- Materialization jobs are cancelled after an hour if you use the default cluster configuration (which relies on spot instances).
- PySpark times out in an EMR notebook (the Livy connection fails).
Prerequisites for troubleshooting
See the documentation on methods for calling get_historical_features(),
which also helps you determine whether you are running GHF using
pre-materialized feature data (offline materialization enabled) or computing
features directly from the data source.
If you run into slow
get_historical_features(), here are some possible causes,
ways to test, and resolutions. We’ve sorted them from most to least common, so
we suggest investigating these possible causes in order.
Isolating a slow feature view (Not pre-materialized)
If you are running GHF from a feature service, it may be that only one of your feature views is executing slowly, causing the whole feature service GHF to run slowly. By isolating the slow feature view, you can focus your troubleshooting.
- Testing: Instead of running
<feature_service>.get_historical_features(), try running
<feature_view>.get_historical_features() for each feature view that is contained in the feature service.
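For example, in a notebook you can time each feature view's GHF call separately. A rough sketch, where the workspace name, feature view names, and the spine dataframe are all hypothetical placeholders for your own, and from_source=True forces computation from the raw source:

```python
import time

import tecton

ws = tecton.get_workspace("prod")  # hypothetical workspace name

# Hypothetical: list the feature views contained in your feature service.
for name in ["user_transaction_counts", "user_credit_score"]:
    fv = ws.get_feature_view(name)
    start = time.time()
    # .count() forces Spark to actually execute the query.
    fv.get_historical_features(spine, from_source=True).to_spark().count()
    print(f"{name}: {time.time() - start:.1f}s")
```

The slowest feature view is the one to investigate first.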
Feature view transformation logic (Not pre-materialized)
Your feature view transformation logic may be written in such a way that it is performing expensive joins or scans across a large dataset. This can cause GHF to run very slowly or run out of memory.
- Inspect your transformation logic for joins or expensive reads from large tables.
We recommend simplifying your feature view logic as much as possible to make it clear where you may be doing expensive joins; for complex pipeline transformations, it can be difficult to assess what is happening. You can also call
.explain() on the resulting dataframe from GHF to inspect the physical plan that Spark will execute and look for inefficiencies.
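For example, assuming fv and spine are already defined in your notebook:

```python
# explain() prints the physical plan without executing the query.
df = fv.get_historical_features(spine, from_source=True).to_spark()
df.explain()  # look for repeated full scans, large shuffles, or exploding joins
```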
(v0.3 SDK with BatchFeatureViews): If you are using
tecton_sliding_window() and joining one or more other batch tables, run
tecton_sliding_window() outside of the join, as it explodes the number of rows.
Very large or slow data source
If you are running GHF on non-materialized feature views (that is, computing
features directly from the data source),
then you may be running a well-written feature view against a very large and/or
slow data source that takes time to process. This will be exacerbated if you
have non-optimized Feature View logic. Note that Snowflake and Redshift tend to
be faster than Hive and, especially, File data sources.
Testing: Try substituting your data source for a smaller sample
FileDSConfig consisting of a single parquet file.
Resolution: If you are not able to speed up the data source, we recommend using a small
FileDSConfig when developing features in a notebook, as it can significantly speed up Tecton commands while iterating. You can scale up to the larger, production data source when your features are ready.
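A sketch of a single-file sample source, using v0.4-style names (FileConfig and BatchSource; the v0.3 SDK calls these FileDSConfig and BatchDataSource) and a hypothetical S3 path:

```python
from tecton import BatchSource, FileConfig

# Small, single-file sample of the production table -- for notebook iteration only.
transactions_sample = BatchSource(
    name="transactions_sample",
    batch_config=FileConfig(
        uri="s3://my-bucket/samples/transactions_sample.parquet",  # hypothetical path
        file_format="parquet",
        timestamp_field="timestamp",
    ),
)
```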
Using a “File” data source (Not pre-materialized)
We include the
FileConfig data source only for development and testing, as it
lacks many of the basic speed optimizations that the other data source types provide.
For example, it does not understand directory partitioning, and Spark must scan each
file in the file source to infer the schema of the source. While Tecton will
work with a
FileConfig, it will run slowly if you attempt to use it on a large
collection of files.
Testing: Try changing your
uri parameter to a single parquet file if it is pointed at a large directory of files.
Resolution: Add a Glue catalog entry (via a Glue crawler) for this file source, and convert your
FileConfig to a HiveConfig. Ensure that you specify any file partitions in your data source definition (see the next section).
Hive partitions not specified (Not pre-materialized)
If you are using a
HiveConfig data source, Tecton does not by default assume a
partition scheme; however, most data lake tables are partitioned by date.
Testing: Check if you have passed in the date/time partition structure via the
DatetimePartitionColumn option in your feature repository.
Resolution: Add the partition columns via the DatetimePartitionColumn option. Here is an example.
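A sketch in the v0.4 SDK style, assuming a hypothetical Hive table partitioned as year=YYYY/month=MM/day=DD:

```python
from tecton import BatchSource, DatetimePartitionColumn, HiveConfig

# Tell Tecton how the table's directory partitions map to the event timestamp,
# so GHF can prune partitions instead of scanning the whole table.
partition_columns = [
    DatetimePartitionColumn(column_name="year", datepart="year", zero_padded=True),
    DatetimePartitionColumn(column_name="month", datepart="month", zero_padded=True),
    DatetimePartitionColumn(column_name="day", datepart="day", zero_padded=True),
]

transactions = BatchSource(
    name="transactions",
    batch_config=HiveConfig(
        database="my_glue_db",  # hypothetical Glue database
        table="transactions",
        timestamp_field="timestamp",
        datetime_partition_columns=partition_columns,
    ),
)
```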
Conversion to pandas DataFrame
Pandas DataFrames are usually a more familiar interface for manipulating the data returned by a GHF call. Under the hood, GHF returns a Spark DataFrame, and converting a Spark DataFrame to pandas can be a very costly operation if you are passing a spine of more than a few million rows.
Testing: Instead of converting to pandas immediately, run
get_historical_features().to_spark().show(), which avoids the pandas conversion.
Resolution: Either use Spark DataFrames in your code, or consider the pandas API on Spark (formerly the koalas library), which provides a pandas-like interface to Spark DataFrames. Note that it is supported natively on Spark 3.2 and above.
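For example, assuming fv and spine exist in your notebook and the cluster runs Spark 3.2 or later:

```python
result = fv.get_historical_features(spine, from_source=True)

# Stay in Spark -- no conversion cost.
spark_df = result.to_spark()
spark_df.show(10)

# For a pandas-like interface without collecting the whole dataframe
# to the driver, use the pandas API on Spark (Spark >= 3.2).
psdf = spark_df.pandas_api()
print(psdf.head())
```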
Slow spine generation
Due to Spark’s lazy evaluation model, when you run GHF you are executing a series of statements all at once: not just GHF itself and parsing of the output, but usually also generating the spine dataframe. This is an area to focus on if you read your spine from an external source like Hive, Redshift, Snowflake, etc.
Testing: To verify that spine generation is not the slow step, try generating the spine dataframe, saving it as a parquet file on S3, and loading the parquet file back before continuing with
get_historical_features(). You can also call
.cache() on your spine.
Resolution: You may want to consider generating your spine once, saving it to a faster storage layer such as S3, and calling .cache() on it before passing it to GHF.
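A sketch of this pattern; the S3 path and the spine query are hypothetical, and spark and fv are assumed to already exist in your notebook:

```python
# One-time: generate the spine from the slow source and persist it.
spine = spark.sql("SELECT user_id, timestamp FROM events")  # hypothetical query
spine.write.mode("overwrite").parquet("s3://my-bucket/spines/training_spine")

# Every later session: read the cheap parquet copy back and cache it,
# so GHF does not re-run the warehouse query on each evaluation.
spine = spark.read.parquet("s3://my-bucket/spines/training_spine")
spine.cache()

features = fv.get_historical_features(spine, from_source=True).to_spark()
```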
Under-resourced notebook cluster
Tecton creates your notebook clusters initially; however, you are free to change the configuration or create additional notebook clusters as needed. It may be that even after optimizing your feature logic, you still need to scan and process a large amount of data. In this case, increasing the notebook cluster size (especially memory) can improve GHF performance. Spark can be much faster than older Hadoop installations because it does much of its computation in memory; memory is typically about 100x faster than disk, which matters when Spark frequently has to spill to disk.
- Testing/Resolution: Try scaling up your notebook cluster to larger instance types (especially ones with more memory) or more workers, and re-run GHF.
Long ttl setting
For feature views, you can add a
ttl option. You should only include this
option for row-based transformations where you are not aggregating any data,
except in certain advanced scenarios. You would commonly use this if you want to
return, for example, a user_id’s creation date, which is tied to a data source
row that was last updated years ago. The ttl parameter tells Tecton to keep
searching back in time from the spine’s timestamp until it finds the first
matching feature row.
A long ttl generally doesn’t add significant extra time to a GHF call, but it may if you have feature view logic that scans lots of data from a slow (data lake) source.
Testing: Try decreasing the
ttl by removing it or setting it to a smaller value.
Resolution: If you are not able to speed up the feature view by changing the logic, then consider reducing the
ttl if it is possible.
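For reference, a sketch of where ttl lives in a v0.4-style feature view definition; the source, entity, and feature view names are all hypothetical:

```python
from datetime import timedelta

from tecton import batch_feature_view

@batch_feature_view(
    sources=[transactions],  # hypothetical BatchSource
    entities=[user],         # hypothetical Entity
    mode="spark_sql",
    batch_schedule=timedelta(days=1),
    # GHF searches at most this far back from each spine timestamp;
    # a smaller value can reduce the data scanned from a slow source.
    ttl=timedelta(days=30),
)
def user_creation_date(transactions):
    return f"SELECT user_id, created_at, timestamp FROM {transactions}"
```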