
Virtual Private Tecton Architecture

The Virtual Private Tecton Architecture combines the best aspects of a managed SaaS application, dedicated single-tenant infrastructure, and customer data ownership.

This guide explains the concepts behind Virtual Private Tecton, including how feature data is processed and stored.

Deployment Architecture

The following diagram illustrates how a Tecton Deployment spans both a Tecton Account and a Customer Cloud Account.

Tecton Deployment Architecture

Virtual Private Tecton Account

The Virtual Private Tecton Account is private because it runs in a dedicated Cloud Account & VPC.

The Tecton Account is further divided into:

  • The Tecton Control Plane, which hosts the metadata and orchestration services. For example, the Control Plane hosts the Web UI, initiates scheduled jobs, and stores feature definitions.
  • The Tecton Data Plane, which hosts all the production infrastructure that processes feature data. The Tecton Data Plane includes the Feature Server, (optional) Feature Serving Cache, and Rift compute platform.

Customer Cloud Account

The Customer Cloud Account consists of:

  • The Customer Data Plane, which includes the Online and Offline Feature Stores for the Tecton Account. The Customer Data Plane may also include additional Compute Providers where Tecton orchestrates feature processing jobs.
  • The Customer Application, which provides the data sources for feature pipelines, and accesses features for ML training and inference.

Data Storage & Flows

The Virtual Private Tecton design enables customers to retain feature storage in their own Cloud Account.

The Tecton Control Plane only stores metadata needed to run the Tecton service, such as feature definitions, developer accounts, and access controls. The Control Plane is encrypted by default, and may optionally be encrypted with Customer Managed Keys.
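
For a concrete sense of what this metadata looks like, the sketch below shows a minimal entity and batch data source written with the Tecton Python SDK. The names, bucket path, and exact constructor parameters are illustrative assumptions (and vary by SDK version); applying definitions like these to a workspace stores only the declarations in the Control Plane, while the underlying data stays in the Customer Cloud Account.

```python
# Illustrative only: the declarative objects below are the kind of metadata the
# Control Plane stores. The bucket path and field names are hypothetical, and
# constructor parameters vary across Tecton SDK versions.
from tecton import Entity, BatchSource, FileConfig

# An entity declares join keys; it carries no feature data itself.
user = Entity(name="user", join_keys=["user_id"])

# A batch data source declares where raw data lives in the Customer Cloud
# Account. Tecton stores only this definition; the data stays in place.
transactions = BatchSource(
    name="transactions",
    batch_config=FileConfig(
        uri="s3://your-company-data/transactions/",  # hypothetical customer bucket
        file_format="parquet",
        timestamp_field="timestamp",
    ),
)
```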

The Tecton Data Plane processes feature data in flight. Additionally, users may configure the Feature Serving Cache, in which features are cached in volatile memory for up to 24 hours.

Online Feature Retrieval Data Flow

Online Feature Retrieval runs through the Feature Server in the Tecton Data Plane.

The diagram below illustrates the data flow for Online Feature Retrieval:

  1. The Customer Application initiates a feature access request to Tecton. This request is authenticated with a Tecton Principal, and that principal must have the appropriate access configured.
  2. The feature access request is fulfilled by the Feature Server. If necessary, the Feature Server will access data from the Feature Store using the credentials provided during Account Configuration.
  3. Tecton returns the final feature values to the Customer Application.

Online Feature Retrieval Data Flow
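
As a rough sketch of this flow from the Customer Application's side, the example below calls the Feature Server over HTTP with an API key tied to a Tecton Principal. The cluster URL, workspace, feature service, and join keys are hypothetical assumptions, and the endpoint path and payload shape should be confirmed against your deployment's HTTP API reference.

```python
# A minimal sketch of step 1 from the Customer Application's side: an HTTP call
# to the Feature Server, authenticated with an API key tied to a Tecton Principal.
# The cluster URL, workspace, feature service, and join keys are hypothetical;
# confirm the endpoint path and payload shape against your deployment's API reference.
import os
import requests

TECTON_URL = "https://yourco.tecton.ai/api/v1/feature-service/get-features"  # hypothetical cluster
API_KEY = os.environ["TECTON_API_KEY"]  # Service Account key for the Tecton Principal

response = requests.post(
    TECTON_URL,
    headers={"Authorization": f"Tecton-key {API_KEY}"},
    json={
        "params": {
            "workspace_name": "prod",
            "feature_service_name": "fraud_detection_feature_service",
            "join_key_map": {"user_id": "user_123"},
        }
    },
)
response.raise_for_status()
print(response.json())  # step 3: feature values returned to the Customer Application
```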

Offline Feature Retrieval Data Flow

Offline Feature Access is managed by the Tecton SDK. The Tecton SDK is available as a Python package and is typically installed in a Notebook environment.

To retrieve features for offline training and inference, the Tecton SDK will read data directly from the Offline Store.

Offline Feature Retrieval Data Flow
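
The notebook-style sketch below illustrates this, assuming a hypothetical cluster URL, workspace, feature service, and events DataFrame; the exact retrieval method name varies across SDK versions (for example, get_features_for_events in newer releases versus get_historical_features in older ones).

```python
# A notebook-style sketch of offline retrieval with the Tecton SDK. The cluster URL,
# workspace, feature service, and events DataFrame are hypothetical, and the
# retrieval method name varies by SDK version.
import pandas as pd
import tecton

tecton.login("https://yourco.tecton.ai")  # hypothetical cluster URL

ws = tecton.get_workspace("prod")
fs = ws.get_feature_service("fraud_detection_feature_service")

# Training events: join keys plus the timestamps features should be retrieved "as of".
events = pd.DataFrame(
    {
        "user_id": ["user_123", "user_456"],
        "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    }
)

# The SDK reads directly from the Offline Store in the Customer Cloud Account.
training_df = fs.get_features_for_events(events).to_pandas()
print(training_df.head())
```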

Rift Compute Data Flow

For feature pipelines running on Rift, compute jobs are executed in the Tecton Data Plane.

For Batch Data Sources, Rift will connect to the Data Source to query raw data for processing. For Stream Data Sources, events are sent to the Ingest API. After the feature transformation logic is executed, the data is written to the Feature Stores for future use.

Rift Compute Data Flow
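
For the streaming path, the sketch below shows what a single event push from the Customer Application to the Ingest API might look like. The ingest URL, Push Source name, and record fields are hypothetical, and the exact endpoint and payload shape should be confirmed against your deployment's Stream Ingest API reference.

```python
# A hedged sketch of sending one event to the Ingest API (streaming path into Rift).
# The ingest URL, Push Source name, and record fields are hypothetical; check your
# deployment's Stream Ingest API reference for the exact endpoint and payload shape.
import os
from datetime import datetime, timezone

import requests

INGEST_URL = "https://yourco.tecton.ai/ingest"  # hypothetical ingest endpoint
API_KEY = os.environ["TECTON_API_KEY"]

event = {
    "workspace_name": "prod",
    "dry_run": False,
    "records": {
        "transaction_events": [  # hypothetical Push Source name
            {
                "record": {
                    "user_id": "user_123",
                    "amount": 42.50,
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                }
            }
        ]
    },
}

response = requests.post(
    INGEST_URL,
    headers={"Authorization": f"Tecton-key {API_KEY}"},
    json=event,
)
response.raise_for_status()
```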

External Compute Data Flow

Feature pipelines may optionally run on External Compute providers, such as Databricks, EMR, Snowflake or Dataproc.

When using External Compute, the Tecton Control Plane connects to the External Compute provider to initiate jobs. These jobs read raw data from the customer data sources and persist the transformed feature data to the Feature Stores for future use.
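
The compute target is typically declared as part of the feature view definition itself. The sketch below assumes a hypothetical batch feature view that reuses the user entity and transactions source sketched earlier on this page and pins it to Databricks compute via a cluster configuration; the decorator parameters and config classes shown are simplified and vary by SDK version and provider.

```python
# A hedged sketch of pinning a feature pipeline to an external compute provider.
# It builds on the hypothetical `user` entity and `transactions` source sketched
# earlier on this page. Decorator parameters and compute config classes vary by
# SDK version and provider; treat the specifics below as illustrative only.
from datetime import datetime, timedelta

from tecton import DatabricksClusterConfig, batch_feature_view

@batch_feature_view(
    sources=[transactions],           # hypothetical BatchSource (see earlier sketch)
    entities=[user],                  # hypothetical Entity (see earlier sketch)
    mode="spark_sql",                 # transformation runs as Spark SQL on external compute
    batch_schedule=timedelta(days=1),
    feature_start_time=datetime(2024, 1, 1),
    timestamp_field="timestamp",
    online=True,
    offline=True,
    # The Control Plane uses this config when it initiates jobs on the customer's
    # Databricks workspace; the instance type and worker count are placeholders.
    batch_compute=DatabricksClusterConfig(instance_type="m5.xlarge", number_of_workers=2),
)
def user_transactions(transactions):
    return f"SELECT user_id, amount, timestamp FROM {transactions}"
```

With this approach, only job orchestration crosses the account boundary; the raw data and the resulting features remain in the Customer Cloud Account.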
