Version: 0.9

Cost Optimizations with Tecton

As the backbone of production ML systems, feature platforms must deliver performance, reliability, and cost-efficiency to drive maximum business impact. Tecton is engineered with a strategic focus on cost optimization, empowering customers to scale their ML workloads without excessive operational expenses.

This doc gives an overview of Tecton's multi-faceted approach to cost savings, which can reduce infrastructure costs by multiple orders of magnitude compared to naive ETL solutions or other feature platforms.

Online Store Selection Based on Performance Needs

Tecton's seamless integration with both DynamoDB and Redis as Online Stores enables cost optimization for a wide range of use cases. By allowing users to select the Online Store on a per-Feature View basis, Tecton empowers teams to choose the most cost-effective option for their specific requirements.

For workloads with low query volumes and large datasets, DynamoDB can be significantly cheaper than Redis. Conversely, for high query volumes and small datasets, Redis can provide substantial cost savings compared to DynamoDB. Redis also provides lower latencies than DynamoDB, especially at the tail (p99, p999). This flexibility is crucial, as the optimal Online Store can vary dramatically with the expected feature retrieval pattern. For more details on how to select your Online Store, see the Online Store Selection Guide.
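The tradeoff above can be sketched as a simple cost model. All unit prices, node sizes, and throughput figures below are hypothetical placeholders for illustration, not actual AWS or Tecton pricing: the point is only that request-priced storage (DynamoDB) scales with read volume while provisioned in-memory storage (Redis) scales with node count.

```python
import math

# Hypothetical unit prices for illustration only; real pricing varies by
# region, instance type, and provisioning mode.
DYNAMO_READ_PRICE_PER_MILLION = 0.25   # $ per million on-demand reads
DYNAMO_STORAGE_PRICE_PER_GB = 0.25     # $ per GB-month
REDIS_NODE_PRICE_PER_MONTH = 500.0     # $ per cache node per month
REDIS_NODE_MEMORY_GB = 26              # usable memory per node
REDIS_NODE_READS_PER_SEC = 100_000     # sustained throughput per node

def dynamodb_monthly_cost(reads_per_sec, data_gb):
    # Pay per request, plus cheap disk-backed storage.
    reads_per_month = reads_per_sec * 60 * 60 * 24 * 30
    return (reads_per_month / 1e6 * DYNAMO_READ_PRICE_PER_MILLION
            + data_gb * DYNAMO_STORAGE_PRICE_PER_GB)

def redis_monthly_cost(reads_per_sec, data_gb):
    # Provision enough in-memory nodes for both dataset size and throughput.
    nodes = max(math.ceil(data_gb / REDIS_NODE_MEMORY_GB),
                math.ceil(reads_per_sec / REDIS_NODE_READS_PER_SEC), 1)
    return nodes * REDIS_NODE_PRICE_PER_MONTH
```

Under these placeholder numbers, a low-QPS/large-dataset workload favors DynamoDB, while a high-QPS/small-dataset workload favors Redis, which is why per-Feature View selection matters.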

Feature Caching

The Tecton Feature Serving Cache reduces both cost and latency of real-time inference for high-scale use-cases. By accelerating high-traffic, low-cardinality key lookups and reducing the compute load for repetitive complex feature queries, the Tecton Serving Cache minimizes the utilization and associated costs of the underlying online store and compute infrastructure. For more details on how to use the Tecton Serving Cache, see the guide for Caching Features.
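The mechanism behind a serving cache can be illustrated with a minimal read-through cache. This is a generic sketch, not Tecton's implementation: `fetch_fn` stands in for an online-store lookup, and the TTL value is arbitrary.

```python
import time

class FeatureCache:
    """Minimal read-through cache: hot keys skip the online store entirely."""

    def __init__(self, fetch_fn, ttl_seconds=60.0):
        self.fetch_fn = fetch_fn   # fallback lookup against the online store
        self.ttl = ttl_seconds
        self._entries = {}         # key -> (expires_at, value)
        self.store_reads = 0       # how often we actually hit the store

    def get(self, key):
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]        # cache hit: no online-store read
        self.store_reads += 1
        value = self.fetch_fn(key)
        self._entries[key] = (now + self.ttl, value)
        return value
```

For a high-traffic, low-cardinality key, repeated lookups within the TTL cost a single online-store read, which is the source of both the latency and the cost reduction.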

Tiled Streaming Features

Tecton's "tiled" Streaming Features provide a cost-optimized approach to handling high-volume event streams. Tiles strike a middle ground between fully read-time and fully write-time aggregation: the stream processor performs partial aggregations into tiles, and read-time queries aggregate tiles instead of raw events, which significantly improves read performance for keys with many events.
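The idea can be sketched in a few lines. The one-hour tile width and the sum/count partial aggregates below are illustrative assumptions, not Tecton's actual tile schema.

```python
from collections import defaultdict

TILE_SECONDS = 3600  # hypothetical one-hour tile width

def build_tiles(events):
    """Stream side: fold raw (timestamp, value) events into per-tile partial sums."""
    tiles = defaultdict(lambda: {"sum": 0.0, "count": 0})
    for ts, value in events:
        tile_start = ts - ts % TILE_SECONDS
        tiles[tile_start]["sum"] += value
        tiles[tile_start]["count"] += 1
    return dict(tiles)

def read_window(tiles, window_start, window_end):
    """Read side: combine a handful of tiles instead of re-scanning every event."""
    in_window = [t for start, t in tiles.items()
                 if window_start <= start < window_end]
    return {"sum": sum(t["sum"] for t in in_window),
            "count": sum(t["count"] for t in in_window)}
```

A key with a million events per hour is reduced to one partial aggregate per tile, so the read path touches a bounded number of rows regardless of event volume.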

Tile Reuse for Materialization Cost Minimization

Tecton reuses tiles across time-windows, minimizing materialization-write costs and storage costs. A detailed explanation of Tecton's tile-based Aggregation Engine's architecture and the performance control it provides is in Performance and Costs of Aggregation Features.
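The reuse property can be shown with a toy example: one set of hourly partial aggregates serves windows of any length, so adding more windows adds zero materialization writes. The hourly granularity and the values are illustrative assumptions.

```python
# One day of hourly tiles for a single key: each entry is a partial sum
# materialized exactly once by the stream job (values are illustrative).
hourly_tiles = [float(i) for i in range(24)]  # tile i covers hour i

def window_sum(tiles, window_hours):
    """Serve any window length from the same materialized tiles."""
    return sum(tiles[-window_hours:])

# Three different feature windows, all answered from one tile list:
last_1h = window_sum(hourly_tiles, 1)
last_6h = window_sum(hourly_tiles, 6)
last_24h = window_sum(hourly_tiles, 24)
```

Without tile reuse, each window would need its own materialized aggregate written every interval, multiplying write and storage costs by the number of windows.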

Feature Reuse for Storage Cost Minimization

Tecton's data model enables efficient reuse of materialized feature values, allowing an unlimited number of Feature Services to leverage a single materialized Feature View. By avoiding data duplication, Tecton minimizes the storage footprint and associated costs.

Bulk Load Backfills to the Online Store

Tecton uses a bulk load capability for Online Store backfills that is optimized for compute and storage, and can cost up to 100x less than Online Store backfills in other feature stores. For more details on how this works, see Bulk Load Backfills to the Online Store.
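One reason bulk loads are cheaper is request amplification: a naive backfill issues one write call per row, while a bulk path groups rows into batches. The sketch below illustrates the request-count arithmetic only; it is not Tecton's actual bulk-load mechanism (the batch size of 25 matches DynamoDB's BatchWriteItem limit, but any batch size shows the same effect).

```python
def batched(items, batch_size=25):
    """Group rows into write batches (25 is DynamoDB's BatchWriteItem limit)."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

rows = [{"key": k, "value": k * 2} for k in range(103)]
request_count_naive = len(rows)                 # one write call per row
request_count_bulk = len(list(batched(rows)))   # one call per batch
```

With request-priced storage, cutting the call count by the batch size cuts that component of the backfill cost proportionally; Tecton's bulk import goes further still.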

Smart Retries

Tecton's job retries ensure efficient resource utilization by distinguishing between transient and permanent failures. Rather than retrying every failed job, the platform analyzes the failure cause and selectively retries only those jobs likely to succeed on subsequent attempts. This smart retry logic helps avoid wasted compute cycles on unrecoverable failures, optimizing the overall cost and performance of Tecton's feature engineering workflows.
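The core of this logic is classifying failures before retrying. The sketch below is a generic illustration of the pattern, not Tecton's internal retry code; the exception names are hypothetical stand-ins for failure classes such as a spot-instance interruption (transient) versus a bad feature definition (permanent).

```python
class TransientError(Exception):
    """E.g. a spot-instance interruption: worth retrying."""

class PermanentError(Exception):
    """E.g. a bad feature definition: retrying wastes compute."""

def run_with_smart_retries(job, max_attempts=3):
    """Retry only failures that can plausibly succeed on another attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except TransientError:
            if attempt == max_attempts:
                raise  # exhausted the retry budget
        except PermanentError:
            raise      # fail fast: no compute spent on hopeless retries
```

The cost saving comes from the `PermanentError` branch: an unrecoverable job fails once instead of burning `max_attempts` cluster runs.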

TTL for Storage Cost Savings

Tecton enables automated data expiration by providing configurable time-to-live (TTL) settings on feature data stored in the Online Store. With a TTL specified, Tecton can automatically garbage collect older, stale records. This proactive data lifecycle management helps control storage costs and simplifies the operational overhead associated with maintaining the online store, particularly for high-velocity feature data pipelines.
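TTL-based expiry can be illustrated with a toy key-value store. This is a generic sketch of the lifecycle, not Tecton's online-store implementation; the explicit `now` parameter exists only to make the behavior easy to demonstrate.

```python
import time

class OnlineStoreWithTTL:
    """Toy key-value store that garbage-collects rows past their TTL."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._rows = {}  # key -> (written_at, value)

    def put(self, key, value, now=None):
        self._rows[key] = (time.time() if now is None else now, value)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        row = self._rows.get(key)
        if row is None or now - row[0] > self.ttl:
            return None  # expired rows read as absent even before GC runs
        return row[1]

    def garbage_collect(self, now=None):
        now = time.time() if now is None else now
        expired = [k for k, (ts, _) in self._rows.items() if now - ts > self.ttl]
        for k in expired:
            del self._rows[k]
        return len(expired)
```

For high-velocity pipelines, rows past the TTL are both invisible to reads and physically reclaimed, so storage stays bounded by the freshness window rather than growing with total history.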

Feature Server Autoscaling

Tecton can automatically right-size feature serving compute based on concurrent request utilization, which yields substantial cost savings. See Scale Feature Servers for more details.
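Utilization-based scaling typically reduces to a target-tracking calculation like the one below. The capacity and target-utilization figures are hypothetical, and this is a generic sketch of the pattern rather than Tecton's scaling algorithm.

```python
import math

def desired_server_count(concurrent_requests, capacity_per_server,
                         target_utilization=0.7, min_servers=2):
    """Scale so each server runs near (not at) its concurrency capacity."""
    needed = concurrent_requests / (capacity_per_server * target_utilization)
    # Round up so utilization never exceeds the target; keep a floor
    # of min_servers for availability during quiet periods.
    return max(min_servers, math.ceil(needed))
```

The saving relative to static provisioning is the gap between the fleet sized for peak load and the fleet this formula keeps running off-peak.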

Ephemeral Materialization Clusters

Unlike traditional approaches that maintain long-running clusters, Tecton spins up materialization infrastructure on-demand, allowing it to be swiftly deprovisioned once the necessary feature transformations have completed. This just-in-time resource allocation strategy helps customers avoid the carrying costs of idle or underutilized compute capacity.

Development Workspaces

Tecton enables modelers to easily develop and experiment with features without incurring the costs of materialization through development workspaces. Development workspaces provide a guardrail against incurring high infrastructure costs while building and validating feature definitions.

...and a lot more!

Performance, reliability, and cost-efficiency sit at the core of Tecton's design philosophy. Every architectural decision is evaluated through this lens, resulting in a suite of cost-saving capabilities that goes well beyond the techniques outlined above.
