# Run Tecton in Production
This page introduces the key topics involved in operating Tecton in a production environment.
Tecton is a platform for managing, serving, and monitoring ML features. Running Tecton in production requires coordinating several components, including:
- Feature repositories with feature definitions (see the sketch after this list)
- Feature data materialized in an offline store and online store
- HTTP API servers for low latency feature serving
- Monitoring and alerting to maintain data quality and uptime
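As a concrete starting point, here is a minimal sketch of what a feature definition in a feature repository might look like. It assumes a recent version of the Tecton Python SDK; the source, entity, and feature view names, the S3 path, and the schema are illustrative placeholders, and exact parameter names vary between SDK versions.

```python
from datetime import datetime, timedelta

from tecton import BatchSource, Entity, FileConfig, batch_feature_view
from tecton.types import Field, Int64, String, Timestamp

# Hypothetical batch source; the bucket path and field names are placeholders.
transactions = BatchSource(
    name="transactions",
    batch_config=FileConfig(
        uri="s3://example-bucket/transactions.parquet",
        file_format="parquet",
        timestamp_field="timestamp",
    ),
)

# Entity describing the join key that feature values are looked up by.
user = Entity(name="user", join_keys=[Field("user_id", String)])

# A daily batch feature view. online=True / offline=True ask Tecton to
# materialize feature data to both the online and offline stores.
@batch_feature_view(
    sources=[transactions],
    entities=[user],
    mode="spark_sql",
    online=True,
    offline=True,
    feature_start_time=datetime(2024, 1, 1),
    batch_schedule=timedelta(days=1),
    schema=[
        Field("user_id", String),
        Field("timestamp", Timestamp),
        Field("transaction_count", Int64),
    ],
)
def user_daily_transaction_count(transactions):
    return f"""
        SELECT
            user_id,
            DATE_TRUNC('DAY', timestamp) AS timestamp,
            COUNT(*) AS transaction_count
        FROM {transactions}
        GROUP BY user_id, DATE_TRUNC('DAY', timestamp)
    """
```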
## Why Running in Production Matters
As you deploy machine learning models to production, the features that serve as model input become a critical part of your infrastructure. Tecton provides capabilities to help you:
- Ensure high uptime and low latency for feature data
- Maintain data quality and integrity over time
- Track feature usage and model performance
- Control infrastructure costs by monitoring jobs and usage
## Key Capabilities
Some of the key capabilities Tecton offers for running in production include:
### Scalable Feature Serving
Tecton's HTTP API and feature servers are built to handle high query volumes at low latency. You can scale out the number of feature server instances to serve more than 1M queries per second.
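For example, a production service can fetch a feature vector with a single POST to the get-features endpoint. The sketch below uses Python's `requests` library; the cluster URL, workspace name, feature service name, and join key are hypothetical placeholders, and the exact request shape may differ between Tecton API versions.

```python
import requests

# Placeholders: substitute your cluster URL, a service account API key, and a
# feature service that exists in your workspace.
TECTON_URL = "https://yourcluster.tecton.ai"
TECTON_API_KEY = "<service-account-api-key>"

response = requests.post(
    f"{TECTON_URL}/api/v1/feature-service/get-features",
    headers={"Authorization": f"Tecton-key {TECTON_API_KEY}"},
    json={
        "params": {
            "workspace_name": "prod",                   # assumed workspace name
            "feature_service_name": "fraud_detection",  # assumed feature service
            "join_key_map": {"user_id": "user_123"},    # entity key to look up
        }
    },
    timeout=5,  # keep client-side timeouts tight on low-latency serving paths
)
response.raise_for_status()
print(response.json()["result"]["features"])
```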