Compute in Tecton
Feature computations can run in batch, streaming, or real time for a production application, depending on the type of feature pipeline.
Compute engines in Tecton are interoperable. Different compute engines can be used for different feature pipelines, or selected independently for interactive development (e.g. training jobs).
Rift (Private Preview)
Rift is Tecton's built-in compute engine for batch, stream, and real-time features. Transformations in Rift can be written with vanilla Python, Pandas, or SQL.
Rift integrates natively with data warehouses like Snowflake and BigQuery and can push compute down to those systems where appropriate. It can also run locally for fast and iterative feature development in any Python environment.
Rift can read from any data source that Python can read from and also allows you to bring arbitrary Python pip packages into feature transformations.
Rift is currently in Private Preview. See the Quickstart tutorial to learn how to access, install, and use Rift.
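To illustrate the kind of vanilla-Python/Pandas transformation Rift can run, here is a minimal, self-contained sketch; the data, column names, and function name are hypothetical, not part of the Tecton API:

```python
import pandas as pd

# Hypothetical raw transaction records, as Rift might read them from a batch source.
transactions = pd.DataFrame(
    {
        "user_id": ["u1", "u1", "u2"],
        "amount": [10.0, 25.0, 7.5],
        "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
    }
)

def user_transaction_features(df: pd.DataFrame) -> pd.DataFrame:
    """Pandas-style transformation body: aggregate per-user spend features."""
    return df.groupby("user_id", as_index=False).agg(
        total_spend=("amount", "sum"),
        transaction_count=("amount", "count"),
    )

features = user_transaction_features(transactions)
```

Because the transformation is plain Pandas, the same function can be run locally in any Python environment while iterating, before it is registered as a feature pipeline.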
Spark
Tecton can integrate with Spark providers like Databricks, AWS EMR, and Google Cloud Dataproc for transforming batch and stream features. Transformations can be written using Spark SQL and PySpark.
When iterating on features in a notebook, Tecton will run Spark queries on an attached Spark cluster.
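A SQL-mode transformation is typically a function that returns the query Tecton submits to the attached Spark cluster. The sketch below shows that shape with a hypothetical table and columns; it only builds the query string and is not the Tecton decorator API itself:

```python
def user_spend_sql(source_table: str) -> str:
    """Return a Spark SQL query string, as a SQL-mode transformation body would.

    The table and column names here are illustrative placeholders.
    """
    return f"""
        SELECT
            user_id,
            SUM(amount) AS total_spend,
            COUNT(*) AS transaction_count
        FROM {source_table}
        GROUP BY user_id
    """

query = user_spend_sql("transactions")
```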
Selecting Compute Engines
On Batch and Stream Feature Views, the compute engine and transformation language are chosen by the mode parameter:
With mode='pandas' or mode='snowflake_sql' (batch only), Tecton will run Pandas or Snowflake SQL transformations on Rift. Snowflake SQL transformations will be pushed down into your configured warehouse.
With mode='pyspark', Tecton will run the provided transformation as a Spark job in your connected Spark provider.
On-Demand Feature Views always run real-time compute on Rift; Spark is not performant enough for request-time execution.
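A real-time computation of this kind is essentially a plain Python function over request data and precomputed feature values, which is why it can run at request time without a cluster. A hedged sketch, with hypothetical field names that are not part of any Tecton schema:

```python
def transaction_risk(request: dict, user_features: dict) -> dict:
    """Real-time transformation sketch: flag requests well above the user's average spend.

    `request` carries request-time data; `user_features` carries values
    precomputed by a batch or stream pipeline (names are illustrative).
    """
    average_spend = user_features["total_spend"] / max(user_features["transaction_count"], 1)
    return {"amount_over_average": request["amount"] > 2 * average_spend}

result = transaction_risk(
    {"amount": 100.0},
    {"total_spend": 35.0, "transaction_count": 2},
)
```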