
Connect Bigtable as an Online Store

danger

Bigtable as an Online Store is currently only available in Private Preview.

Configuring Bigtable for Tecton

Cluster Requirements

The Bigtable instance you connect to Tecton should be dedicated to Tecton. Ensure your instance has 2 or more clusters with multi-cluster routing for >=99.99% availability. Note that if you created the instance with 2 or more clusters, the default app profile uses multi-cluster routing. To configure cluster routing, see https://cloud.google.com/bigtable/docs/configuring-app-profiles.

Provisioning a Bigtable Instance

Throughput: Each node delivers up to 10,000 queries per second (QPS). In general, Bigtable offers optimal latency when the CPU load for a cluster is under 70%. For latency-sensitive applications, however, we recommend that you plan at least 2x capacity for your application's maximum Bigtable QPS.

Storage: Each SSD node can store up to 5 TB. For latency-sensitive applications, we recommend that you keep storage utilization per node below 60%. For applications that are not latency-sensitive, you can store more than 70% of the limit.
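As a rough sizing sketch combining the throughput and storage guidance above (the peak QPS and data volume below are placeholder values, not recommendations):

```python
import math

# Placeholder inputs -- substitute your own measurements.
peak_qps = 50_000   # your application's max Bigtable queries per second
data_size_tb = 12   # expected online feature data volume, in TB

# Guidance from above: ~10,000 QPS per node, 2x headroom for
# latency-sensitive applications, 5 TB SSD per node at <60% utilization.
nodes_for_throughput = math.ceil(peak_qps * 2 / 10_000)   # -> 10
nodes_for_storage = math.ceil(data_size_tb / (5 * 0.60))  # -> 4

# Provision for whichever dimension needs more nodes.
print(max(nodes_for_throughput, nodes_for_storage))       # -> 10
```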

Setting up your Bigtable instance

You can set up a Bigtable instance either through the Google Cloud console or with Terraform. An example Terraform resource definition is documented at https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigtable_instance.
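If you prefer to script provisioning in Python instead, the Google Cloud Bigtable client can create a two-cluster SSD instance as well. This is a minimal sketch, not the Terraform path the docs link to; the project, instance, and cluster IDs and zones are illustrative:

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Illustrative IDs -- replace with your own.
client = bigtable.Client(project="my-gcp-project", admin=True)
instance = client.instance(
    "tecton-online-store",
    display_name="Tecton Online Store",
    instance_type=enums.Instance.Type.PRODUCTION,
)

# Two clusters in different zones; an instance created with 2+ clusters
# gets a default app profile that uses multi-cluster routing.
clusters = [
    instance.cluster(
        f"tecton-online-store-c{i}",
        location_id=zone,
        serve_nodes=3,
        default_storage_type=enums.StorageType.SSD,
    )
    for i, zone in enumerate(["us-central1-a", "us-central1-b"], start=1)
]

operation = instance.create(clusters=clusters)
operation.result(timeout=600)  # block until the instance is ready
```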

Connecting Tecton to your Bigtable instance

Once your instance has been configured, Tecton Customer Success will help complete the connection.

Create a Tecton Support ticket with the following information, and complete the access grants described below:

  1. Your Bigtable project ID.
  2. Your Bigtable instance ID.
  3. Ask your Tecton deployment specialist for your Tecton control plane service account, and grant it the Bigtable Administrator role (one scripted way to apply these grants is sketched after this list). The account will look like tecton-<deployment>-control-plane@<tecton-deployment>.iam.gserviceaccount.com.
  4. Ask your Tecton deployment specialist for your Tecton feature server pods service account, and grant it the Bigtable Administrator role as well.
  5. Grant the Bigtable Administrator role to the data plane service account you use for your Spark jobs. You'll also use that service account as the value of the google_service_account key under the gcp_attributes key in the DatabricksJsonClusterConfig.
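You can apply the grants in the console or with gcloud; as one scripted alternative, the sketch below grants the Bigtable Administrator role (roles/bigtable.admin) at the instance level using the Google Cloud Python client. The service account emails are placeholders; use the accounts from steps 3-5:

```python
from google.cloud import bigtable
from google.cloud.bigtable.policy import BIGTABLE_ADMIN_ROLE

client = bigtable.Client(project="my-gcp-project", admin=True)
instance = client.instance("tecton-online-store")

# Placeholder emails -- substitute the accounts from steps 3-5.
members = {
    "serviceAccount:tecton-<deployment>-control-plane@<tecton-deployment>.iam.gserviceaccount.com",
    "serviceAccount:<feature-server-pods-account>@<tecton-deployment>.iam.gserviceaccount.com",
    "serviceAccount:<spark-data-plane-account>@my-gcp-project.iam.gserviceaccount.com",
}

# Add the members to the existing roles/bigtable.admin binding.
policy = instance.get_iam_policy()
policy[BIGTABLE_ADMIN_ROLE] = set(policy.get(BIGTABLE_ADMIN_ROLE, set())) | members
instance.set_iam_policy(policy)
```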

Validating the connection

Once Tecton has completed connecting to your Bigtable instance, you should test writing and reading feature data.

To materialize a feature view to Bigtable, set online_store=BigtableConfig() in the Feature View declaration.
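For example, a minimal sketch of a batch feature view materialized to Bigtable might look like the following. It assumes a transactions batch source and a user entity defined elsewhere in your feature repository, and that BigtableConfig is importable from the tecton package (the feature is in Private Preview, so the import path may differ in your deployment):

```python
from datetime import datetime, timedelta
from tecton import batch_feature_view, BigtableConfig

# Assumed to be defined elsewhere in your feature repository:
# transactions (a BatchSource) and user (an Entity).
from data_sources import transactions
from entities import user

@batch_feature_view(
    sources=[transactions],
    entities=[user],
    mode="spark_sql",
    online=True,                    # enable online materialization
    online_store=BigtableConfig(),  # write online features to Bigtable
    offline=True,
    feature_start_time=datetime(2024, 1, 1),
    batch_schedule=timedelta(days=1),
    timestamp_field="timestamp",
)
def user_last_transaction_amount(transactions):
    return f"""
        SELECT user_id, amount AS last_transaction_amount, timestamp
        FROM {transactions}
    """
```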

Once the materialization jobs have completed, you can use the FeatureView.get_online_features() method to test reading features from your Bigtable instance.
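For instance, using the feature view sketched above (the workspace name is a placeholder):

```python
import tecton

ws = tecton.get_workspace("prod")  # placeholder workspace name
fv = ws.get_feature_view("user_last_transaction_amount")

# Look up the latest feature values for one entity key from Bigtable.
result = fv.get_online_features(join_keys={"user_id": "user_123"})
print(result.to_dict())
```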
