Version: 0.5

Connecting to Data Sources

For Tecton to successfully read your data, Tecton requires the proper permissions and configuration. Permissions and configuration can vary per data source.

Supported Data Sources

Tecton has Python data source objects that can connect to the following batch data sources:

  • CSV, Parquet, and JSON Data Sources on S3
  • Hive Tables via AWS Glue Data Catalog
  • AWS Redshift Tables
  • Snowflake Tables

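As an illustration, a batch data source object for files on S3 might look like the following sketch. This is a hypothetical example: the source name, bucket path, and timestamp column are placeholders for your own data, and the exact parameters may vary by Tecton version.

```python
from tecton import BatchSource, FileConfig

# Hypothetical example: a Parquet data source on S3. The URI and
# timestamp_field below are placeholders for your own data.
transactions_batch = BatchSource(
    name="transactions_batch",
    batch_config=FileConfig(
        uri="s3://your-bucket/transactions.parquet",
        file_format="parquet",
        timestamp_field="timestamp",
    ),
)
```

The same pattern applies to the other batch sources, with `FileConfig` swapped for the corresponding configuration object (e.g. for Redshift or Snowflake tables).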
Tecton has Python data source objects that can connect to the following stream data sources:

  • AWS Kinesis Streams
  • Kafka Topics
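A stream data source object pairs a stream configuration with a batch configuration used for backfills. The sketch below assumes a Kinesis stream of JSON events; the stream name, region, schema, and S3 backfill path are all placeholders, and parameter names may vary by Tecton version.

```python
from tecton import StreamSource, KinesisConfig, BatchSource, FileConfig

# Hypothetical post-processor: deserializes the raw Kinesis payload into
# typed columns. The payload schema here is a placeholder.
def raw_data_deserialization(df):
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructType, TimestampType

    payload_schema = (
        StructType()
        .add("user_id", StringType())
        .add("timestamp", TimestampType())
    )
    return (
        df.selectExpr("cast (data as STRING) as jsonData")
        .select(from_json("jsonData", payload_schema).alias("payload"))
        .select(col("payload.user_id"), col("payload.timestamp"))
    )

transactions_stream = StreamSource(
    name="transactions_stream",
    stream_config=KinesisConfig(
        stream_name="transaction-events",   # placeholder stream name
        region="us-west-2",                 # placeholder region
        post_processor=raw_data_deserialization,
        timestamp_field="timestamp",
        initial_stream_position="latest",
    ),
    # Backfills read from a batch copy of the same events (placeholder path).
    batch_config=FileConfig(
        uri="s3://your-bucket/transactions-history.parquet",
        file_format="parquet",
        timestamp_field="timestamp",
    ),
)
```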
Note

If you are using Tecton on Spark, then you can also use a data source function to connect to an arbitrary data source. With a data source function, you write a PySpark function that loads your data source and returns a Spark DataFrame. Compared to using a data source object, a data source function gives you more flexibility in connecting to an underlying data source and specifying logic for transforming the data retrieved from the underlying data source.
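A data source function might look like the following sketch, assuming a Spark-based Tecton deployment. The decorator-based `spark_batch_config` pattern is taken from the Tecton on Spark API; the S3 path, column names, and transformation logic are hypothetical placeholders.

```python
from tecton import BatchSource, spark_batch_config

# Hypothetical data source function: loads raw Parquet files and applies
# arbitrary transformation logic before returning a Spark DataFrame.
@spark_batch_config()
def parsed_events(spark):
    from pyspark.sql.functions import col, to_timestamp

    df = spark.read.parquet("s3://your-bucket/raw-events/")  # placeholder path
    # In-function transformation: derive a proper timestamp column
    # from a raw string field before Tecton consumes the data.
    return df.withColumn("timestamp", to_timestamp(col("event_time")))

events_batch = BatchSource(
    name="events_batch",
    batch_config=parsed_events,
)
```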

The sections that follow guide you through connecting each type of data source.

Once Tecton has the permissions needed to access a data source, the source must be registered as a Data Source in your feature repository. See the documentation on Data Sources.
