Version: 0.5

Data Sources

What data sources does Tecton support for ingestion?

Tecton supports:

  • Batch data sources: AWS Glue, Databricks (via S3), Hive, Redshift, S3, and Snowflake.
  • Streaming data sources: Kafka and Kinesis.
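
As a minimal sketch, registering a batch source might look like the following, assuming the Tecton 0.5 SDK's `BatchSource` and `HiveConfig`; the database, table, and column names here are hypothetical:

```python
from tecton import BatchSource, HiveConfig

# Hypothetical Hive/Glue table registered as a batch source. Registration
# only points Tecton at the table; no data is copied.
transactions_batch = BatchSource(
    name="transactions_batch",
    batch_config=HiveConfig(
        database="demo_db",
        table="transactions",
        timestamp_field="timestamp",
    ),
)
```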

When a data source is registered, is any data being copied?

Tecton does not create any duplicates of the source data; it reads directly from the underlying data source. Tecton does, however, manage the storage of your features: online for serving and offline for training.

When registering Hive data sources, do you have any recommendations or best practices?

We recommend registering your Hive data sources using AWS Glue. Note that Glue converts all schema column names to lowercase, so all transformations must assume their inputs are lowercase. Capitalized column names can lead to difficult-to-catch bugs, so we recommend using lowercase column names in raw data sources and lowercase column references in transformations, as in the sketch below.
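
For illustration, here is what a lowercase-only transformation might look like. This is a minimal sketch assuming the Tecton 0.5 `batch_feature_view` decorator, the `transactions_batch` source sketched above, and a hypothetical `user` entity defined elsewhere; check parameter names against your SDK version:

```python
from datetime import datetime, timedelta

from tecton import batch_feature_view

@batch_feature_view(
    sources=[transactions_batch],  # hypothetical source from the sketch above
    entities=[user],               # hypothetical entity defined elsewhere
    mode="spark_sql",
    batch_schedule=timedelta(days=1),
    feature_start_time=datetime(2023, 1, 1),
    ttl=timedelta(days=30),
)
def user_transaction_features(transactions_batch):
    # Glue lowercases schema column names, so reference "user_id" and
    # "amount" -- never "UserId" or "Amount".
    return f"""
        SELECT user_id, amount, timestamp
        FROM {transactions_batch}
    """
```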

Why do streaming data source definitions also require a batch data source configuration?

The batch source provides a historical record of your stream's output, which is what allows Tecton to backfill your features. Without it, feature data collection could only begin once the source is set up with Tecton, leaving no historical values for training. The stream's historical output needs to be captured at, at minimum, the same level of granularity that your features will support going forward (e.g., if features are processed in 15-minute intervals, the historical log must be stored in 15-minute intervals). Tecton support can work with you to set up this infrastructure, if necessary.
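
As a sketch, a stream source definition pairs a stream config with a batch config that points at the historical log. The following assumes the Tecton 0.5 SDK's `StreamSource`, `KinesisConfig`, and `HiveConfig`; all names, fields, and schemas here are hypothetical:

```python
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType, TimestampType
from tecton import HiveConfig, KinesisConfig, StreamSource

# Hypothetical deserializer: Kinesis delivers raw bytes, so the stream
# config converts each record into columns matching the batch schema.
def parse_events(df):
    schema = (
        StructType()
        .add("user_id", StringType())
        .add("amount", DoubleType())
        .add("timestamp", TimestampType())
    )
    return (
        df.selectExpr("CAST(data AS STRING) AS json")
        .select(from_json(col("json"), schema).alias("event"))
        .select("event.*")
    )

transactions_stream = StreamSource(
    name="transactions_stream",
    # Live events, used for fresh feature values.
    stream_config=KinesisConfig(
        stream_name="transactions-events",  # hypothetical stream name
        region="us-west-2",
        initial_stream_position="latest",
        timestamp_field="timestamp",
        post_processor=parse_events,
    ),
    # Historical log of the same events, used for backfills.
    batch_config=HiveConfig(
        database="demo_db",
        table="transactions_history",
        timestamp_field="timestamp",
    ),
)
```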

What infrastructure does Tecton use for streaming data sources?

Tecton plugs into Kafka or Kinesis as streaming data sources. To process against those streams, Tecton uses Spark Structured Streaming.
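
For intuition, the processing model is the same one you would get from a hand-written Spark Structured Streaming job. This generic, non-Tecton PySpark sketch (with a hypothetical broker and topic) reads a Kafka topic as an unbounded streaming DataFrame:

```python
from pyspark.sql import SparkSession

# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Read a Kafka topic as an unbounded streaming DataFrame; Tecton runs
# comparable Structured Streaming jobs on your behalf.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "transactions")               # hypothetical topic
    .load()
)

query = (
    events.selectExpr("CAST(value AS STRING)")
    .writeStream.format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```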

What file formats does Tecton support?

Today, Tecton reads raw data with Spark and supports all data formats that Spark natively supports, including CSV, JSON, Parquet, and Avro. Formats that Spark does not natively support, such as TFRecord (.tfrecords), are not supported by Tecton.
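
Because reads go through Spark, anything `spark.read` handles natively works. A quick illustration (paths are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

csv_df = spark.read.option("header", True).csv("s3://bucket/events.csv")
json_df = spark.read.json("s3://bucket/events.json")
parquet_df = spark.read.parquet("s3://bucket/events/")
# Avro requires the spark-avro package on the classpath.
avro_df = spark.read.format("avro").load("s3://bucket/events.avro")
```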

How does Tecton use batch and stream data sources together?

Features in Tecton are built on top of either a batch data source or a streaming data source. For each kind of data source, you provide the scheduling cadence for the feature (e.g., weekly, daily, hourly); for streaming features, the processing is done continuously against the stream using Spark Structured Streaming, with the batch source used for backfills.
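
As a sketch of the streaming flavor, the following assumes the Tecton 0.5 `stream_feature_view` decorator and `Aggregation` helper, plus the hypothetical `transactions_stream` source and `user` entity from earlier; parameter names follow the 0.5 docs as best recalled and should be checked against your SDK version:

```python
from datetime import datetime, timedelta

from tecton import Aggregation, stream_feature_view

@stream_feature_view(
    source=transactions_stream,  # hypothetical stream source from above
    entities=[user],             # hypothetical entity
    mode="spark_sql",
    feature_start_time=datetime(2023, 1, 1),
    batch_schedule=timedelta(days=1),  # cadence for backfill jobs
    aggregation_interval=timedelta(minutes=15),
    aggregations=[
        Aggregation(column="amount", function="sum",
                    time_window=timedelta(hours=1)),
    ],
)
def user_amount_sum(transactions_stream):
    # Fresh values come from the stream; backfills replay the batch log.
    return f"""
        SELECT user_id, amount, timestamp
        FROM {transactions_stream}
    """
```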
