
Connect Redis as an Online Store

Redis is an online store supported for Google Cloud deployments as an alternative to Bigtable. You are responsible for configuring and managing the Redis cluster that Tecton uses as the Online Store.

To use Redis as your Online Store, follow this guide to connect Tecton to your Redis Enterprise Cloud cluster. Redis Enterprise Cloud is the only Redis deployment option on Google Cloud that Tecton supports.

Configuring Redis Enterprise for Tecton

Cluster Requirements

The Redis cluster you connect to Tecton should be dedicated to Tecton. Tecton requires specific cluster configuration, depending on which managed version of Redis you're using.

You can get Redis Enterprise through the Google Cloud Marketplace.

When creating your Redis Enterprise cluster, ensure your cluster meets the following requirements:

  • Multi-AZ enabled
  • Same region as the rest of your Tecton deployment
  • High availability (replication) enabled
  • Redis version 6.x
  • OSS Cluster API enabled, using the external endpoint
  • TLS enabled
  • maxmemory-policy set to noeviction

Setting up your Redis Enterprise Cluster

You can set up a Redis Enterprise cluster either through the Redis Enterprise Cloud console or with Terraform.

This sample main.tf Terraform configuration creates a subscription and database that meet Tecton's requirements.

terraform {
  required_version = ">= 0.13"
  required_providers {
    rediscloud = {
      source = "RedisLabs/rediscloud"
    }
  }
}

provider "rediscloud" {
  api_key    = var.api_key
  secret_key = var.secret_key
}

resource "rediscloud_subscription" "subscription-resource" {
  name           = "${var.cluster_name}-${var.region}-subscription"
  payment_method = "marketplace"

  cloud_provider {
    # Running in GCP on Redis resources
    provider = "GCP"
    region {
      region                       = var.region
      networking_deployment_cidr   = var.deployment_cidr
      preferred_availability_zones = var.zones
      multiple_availability_zones  = true
    }
  }

  creation_plan {
    memory_limit_in_gb           = 1
    quantity                     = 1
    replication                  = true
    support_oss_cluster_api      = true
    throughput_measurement_by    = "operations-per-second"
    throughput_measurement_value = 10000
    modules                      = []
  }
}

resource "rediscloud_subscription_database" "database-resource" {
  subscription_id                       = rediscloud_subscription.subscription-resource.id
  name                                  = "${var.cluster_name}-${var.region}-redis"
  protocol                              = "redis"
  memory_limit_in_gb                    = 10
  data_persistence                      = "snapshot-every-12-hours"
  throughput_measurement_by             = "operations-per-second"
  throughput_measurement_value          = 50000
  support_oss_cluster_api               = true
  external_endpoint_for_oss_cluster_api = true
  replication                           = true
  enable_tls                            = true
  data_eviction                         = "noeviction"

  alert {
    name  = "dataset-size"
    value = 80
  }

  depends_on = [rediscloud_subscription.subscription-resource]
}

resource "rediscloud_subscription_peering" "databricks-peering" {
  subscription_id  = rediscloud_subscription.subscription-resource.id
  provider_name    = "GCP"
  gcp_project_id   = var.databricks_peering_project
  gcp_network_name = var.databricks_vpc_network_name
}

resource "rediscloud_subscription_peering" "serving-peering" {
  subscription_id  = rediscloud_subscription.subscription-resource.id
  provider_name    = "GCP"
  gcp_project_id   = var.serving_peering_project
  gcp_network_name = var.serving_vpc_network_name
}

Here is the accompanying variables.tf script:

variable "region" {
type = string
description = "Region for the Redis Enterprise database"
}

variable "zones" {
type = list(string)
description = "Preferred zones for the Redis Enterprise database"
}

variable "cluster_name" {
type = string
description = "cluster name that is used to prefixed the subscription plan and the db"
}

variable "deployment_cidr" {
type = string
description = "The subnet in which Redis Enterprise will be deployed. Must not overlap with your application VPC CIDR block, or any peered network to your application VPC."
}

variable "api_key" {
type = string
description = "Redis Enterprise Cloud API key. https://docs.redis.com/latest/rc/api/get-started/manage-api-keys/"
}

variable "secret_key" {
type = string
description = "Redis Enterprise Cloud API secret key https://docs.redis.com/latest/rc/api/get-started/manage-api-keys/"
}

variable "databricks_vpc_network_name" {
type = string
description = "The name of the network for Databricks to be peered"
}

variable "serving_vpc_network_name" {
type = string
description = "The name of the network for feature servers to be peered"
}

variable "databricks_peering_project" {
type = string
description = "GCP Databricks project ID that the VPC to be peered lives in."
}

variable "serving_peering_project" {
type = string
description = "GKE feature serving project ID that the VPC to be peered lives in."
}

Here is the accompanying output.tf script:

output "databricks_peering_project" {
value = rediscloud_subscription_peering.databricks-peering.gcp_redis_project_id
}

output "databricks_peering_network_name" {
value = rediscloud_subscription_peering.databricks-peering.gcp_redis_network_name
}

output "serving_peering_project" {
value = rediscloud_subscription_peering.serving-peering.gcp_redis_project_id
}

output "serving_peering_network_name" {
value = rediscloud_subscription_peering.serving-peering.gcp_redis_network_name
}

Note that the throughput_measurement_value, memory_limit_in_gb, and data_persistence values can be tuned based on your workload. Make sure you account for replication by doubling the memory: for example, 512 MB of data requires at least 1 GB of memory when replication is enabled.

Connecting Tecton to your Redis Enterprise cluster with VPC peering

Once your cluster has been configured, Tecton Customer Success will help complete the VPC peering connection.

Create a Tecton Support ticket with the following information:

  1. The endpoint for your Redis cluster, found in the Redis Enterprise web console
  2. The VPC peering project ID and network name for the serving plane
  3. The VPC peering project ID and network name for the data plane
  4. The relevant CIDR blocks containing the Redis Enterprise cluster
  5. The TLS auth token. Your Customer Success representative can help you send this to Tecton securely.

Tecton will then send a VPC peering request, and provide instructions on how to finish setting up networking and authentication.

Tecton will help you:

  1. Set up cross-project VPC peering between Databricks and Redis using the private endpoint.
  2. Set up cross-project VPC peering between the feature server and Redis using the private endpoint.
  3. Connect the control plane and Redis using the public endpoint with the CIDR allow lists for Tecton operations that do not read any feature data.

Validating the connection

Once Tecton has completed connecting to your Redis Enterprise cluster, you should test writing and reading feature data.

To materialize a feature view to Redis, add online_store=RedisConfig() to the Feature View declaration.
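
As a rough sketch (not taken from this guide), a feature view that materializes to Redis might look like the following. The data source, entity, and transformation are hypothetical placeholders assumed to be defined elsewhere in your feature repository, and the exact decorator arguments depend on your Tecton SDK version; only the online_store argument is specific to Redis.

from datetime import datetime, timedelta

from tecton import RedisConfig, batch_feature_view

# `transactions` (a batch source) and `user` (an entity) are hypothetical
# objects assumed to be defined elsewhere in your feature repository.
@batch_feature_view(
    sources=[transactions],
    entities=[user],
    mode="spark_sql",
    online=True,
    offline=True,
    online_store=RedisConfig(),  # materialize this feature view to Redis
    feature_start_time=datetime(2023, 1, 1),
    batch_schedule=timedelta(days=1),
    ttl=timedelta(days=30),
)
def user_transaction_amounts(transactions):
    return f"""
        SELECT user_id, timestamp, amount
        FROM {transactions}
    """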

Once the materialization jobs have completed, you can use the FeatureView.get_online_features() method to test reading features from your Redis cluster.
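
For example, a minimal read check from a notebook might look like this; the workspace name, feature view name, and join key value below are placeholders.

import tecton

# Placeholder workspace and feature view names; substitute your own.
ws = tecton.get_workspace("prod")
fv = ws.get_feature_view("user_transaction_amounts")

# Read the latest feature values for a single entity key from the Redis online store.
features = fv.get_online_features(join_keys={"user_id": "user_1234"})
print(features.to_dict())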

Managing your Redis Cluster

Scaling

Scale your cluster during periods of low traffic to increase the speed and reliability of the scaling operation.

Memory management

Redis stores all of its data in memory. This is primarily why it performs so well.

For Redis Enterprise, you can use Redis on Flash, where not all of the data needs to fit in memory and some of it can be stored on an SSD attached to each node. However, even for such nodes, all reads and writes go through memory, and values are moved to SSD based on how recently each key was used.

Overview of memory fragmentation

Memory on a Redis node can be challenging to manage due to fragmentation.

Key deletions are a cause of memory fragmentation. Keys are deleted when feature views are deleted, when individual keys expire due to TTL enforcement, or when data is moved during scaling.

High memory fragmentation may cause a Redis cluster to run out of memory, resulting in failed writes or a crash of the cluster.
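
If you want to spot-check fragmentation on a node yourself, a quick sketch using the redis-py client might look like the following; the endpoint, port, and password are placeholders for your cluster's external endpoint and TLS auth token.

import redis

# Hypothetical connection details for your Redis Enterprise external endpoint.
r = redis.Redis(
    host="redis-12345.<your-subscription>.gce.cloud.redislabs.com",
    port=12345,
    password="<tls-auth-token>",
    ssl=True,
)

# A mem_fragmentation_ratio well above 1.0 (for example, above 1.5) indicates
# significant fragmentation on this node.
print(r.info("memory")["mem_fragmentation_ratio"])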

Suggested Redis parameters

maxmemory-policy should be set to noeviction

Why: Redis is used as the primary data store, and data should not be evicted silently. With noeviction, new writes fail when memory reaches capacity instead of existing keys being evicted.
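
To verify the setting yourself, a quick check with the redis-py client might look like the following; the connection details are again placeholders for your cluster's external endpoint and TLS auth token.

import redis

# Hypothetical connection details for your Redis Enterprise external endpoint.
r = redis.Redis(
    host="redis-12345.<your-subscription>.gce.cloud.redislabs.com",
    port=12345,
    password="<tls-auth-token>",
    ssl=True,
)

# Expect {'maxmemory-policy': 'noeviction'} on a cluster configured for Tecton.
print(r.config_get("maxmemory-policy"))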

Monitoring

See the Redis Enterprise documentation for information on monitoring your Redis Enterprise metrics.

Tecton also shows the following metrics in the Web UI. These metrics are located on the Online Store Monitoring tab, which appears when you click Services in the left navigation bar.

  • Total Redis Serving QPS
  • Redis Read Latencies
  • Memory Utilization
  • Total number of keys in the cluster

Alerting

We strongly suggest adding alerts for CPU and memory consumption above 80% for the nodes in the cluster.
