
Transform Server Groups

Private Preview

This feature is currently in Private Preview.

This feature has the following limitations:
  • Must be enabled by Tecton support.
If you would like to participate in the preview, please file a feature request.

Introduction

The Transform Server is a Tecton-managed compute node that executes the user-specified code in Realtime Feature Views on demand when feature vectors are read from Feature Services via the HTTP API.

A Transform Server Group is a group of Transform Server nodes that execute real-time transformations. Transform Server Groups support both provisioned and autoscaling modes. Each Transform Server Group belongs to exactly one workspace, while a single workspace can have multiple Transform Server Groups.

Transform Server Groups, even those within the same workspace, are isolated from each other: they do not share Realtime Feature View serving infrastructure. This ensures that each Transform Server Group's operations remain independent and performant.

Transform Server Groups provide the following benefits:

  • Eliminate disruption caused by shared serving infrastructure, preventing resource contention between one team's test cases and another's production traffic. Transform Server Group isolation ensures that each team's operations remain independent and performant.
  • Facilitate granular resource and cost management through in-repo server provisioning controls.
  • Ability to reuse custom environments across multiple Feature Services within a workspace, with isolated serving infrastructure.
  • Transform Server Groups are also compatible with Feature Server Groups. Using both for a Feature Service isolates serving infrastructure for both feature serving and realtime transformations in Realtime Feature Views (see the configuration sketch below).
  • Transform Server Groups also provide lower and more consistent median and tail latency than the current "environment-based" serving infrastructure.
Backwards compatibility with Environments:
  • Transform Server Groups have an independent infrastructure from previous serving infrastructure for Transformations.
  • The HTTP serving API v1 remains intact and can continue to serve traffic as-is; using Transform Server Groups requires opt-in changes to Feature Service definitions.
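
For illustration, the sketch below shows a Feature Service that references both kinds of group. The fraud_team_fsg name and the feature_server_group parameter are assumptions based on the Feature Server Groups preview and may differ in your SDK version; fraud_team_tsg and fuzzy_similarity are declared later on this page.

from tecton import FeatureService

# Sketch only: `fraud_team_fsg` and the `feature_server_group` parameter are
# assumed from the Feature Server Groups preview; `fraud_team_tsg` and
# `fuzzy_similarity` are declared later on this page.
fraud_detection_feature_service = FeatureService(
    name="fraud_detection_feature_service",
    online_serving_enabled=True,
    feature_server_group=fraud_team_fsg,
    transform_server_group=fraud_team_tsg,
    features=[fuzzy_similarity],
)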

Managing Transform Server Groups

This section describes how to create, use, and delete Transform Server Groups in a given workspace.

Create & Use

  1. Select the desired live workspace:
% tecton workspace select <WORKSPACE_NAME>
  2. Use a Tecton-managed core environment or create a new custom environment.

Tecton provides a set of "managed" environments that can be used for Transform Server Groups. Each core environment is compatible with a Tecton SDK version. With Tecton 1.0, please use the environment tecton-transform-server-core:1.0 with your Transform Server Groups.

Alternatively, you can build a custom environment with your own set of dependencies to use in a Transform Server Group. Note: the environment must be created before the Transform Server Group, and it must contain tecton-runtime>=1.0.0.

Assuming a requirements file requirements.in with the following content:

pycountry
fuzzywuzzy
tecton-runtime

You can create a new environment with the following command:

tecton environment create --name my-custom-env --description "My custom Python environment" --requirements requirements.in

Please refer to Custom Environments for more information on creating and managing custom environments.
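
Before referencing a custom environment in a Transform Server Group, confirm that it has finished building and is in the READY status. A quick check, assuming the tecton environment list subcommand is available in your CLI version:

% tecton environment list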

  3. Add a Transform Server Group declaration in your feature repository's declarative config:
from tecton import ProvisionedScalingConfig
from tecton import TransformServerGroup

fraud_team_tsg = TransformServerGroup(
    name="fraud_team_tsg",
    description="Fraud detection team Transform Server Group",
    owner="fraud-detection",
    environment="my-custom-env",  # The name of the environment from step 2
    scaling_config=ProvisionedScalingConfig(
        desired_nodes=3,
    ),
)

To create a Transform Server Group with autoscaling enabled, use the AutoscalingConfig instead of ProvisionedScalingConfig.
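
For reference, here is a sketch of the same declaration using autoscaling instead (the min_nodes and max_nodes values are illustrative; see Scaling Configurations below for details):

from tecton import AutoscalingConfig
from tecton import TransformServerGroup

fraud_team_tsg = TransformServerGroup(
    name="fraud_team_tsg",
    description="Fraud detection team Transform Server Group",
    owner="fraud-detection",
    environment="my-custom-env",  # The name of the environment from step 2
    scaling_config=AutoscalingConfig(
        min_nodes=1,
        max_nodes=5,
    ),
)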

  4. Apply the declarative config to the workspace:
% tecton apply
  5. Wait for the Transform Server Group to reach the READY status. Check its status with the Tecton CLI:
% tecton server-group list

Id Name Type Status Environment Description Created At Owner Last Modified By
=================================================================================================================================================================================================================
bde2a413a3491a27384fea41a75139c3 fraud_team_tsg TRANSFORM_SERVER_GROUP CREATING None Fraud detection team Transform Server Group 2024-09-09 03:13:47 UTC fraud-detection jon@tecton.ai

For detailed information about a Server Group, you can also use the tecton server-group describe -n <name> command.
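
For example, to inspect the group created above:

% tecton server-group describe -n fraud_team_tsg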

Note that applying a TransformServerGroup declaration will create new long-running cloud resources, and may incur additional costs.

  6. Once the group reaches the READY status, it can be used by Feature Services by specifying the transform_server_group parameter.
from tecton import Attribute, FeatureService, RequestSource, realtime_feature_view
from tecton.types import Field, String, Int64

similarity_request = RequestSource(schema=[Field("text", String)])


@realtime_feature_view(
    sources=[similarity_request],
    mode="python",
    features=[Attribute("similarity", Int64), Attribute("partial_similarity", Int64)],
)
def fuzzy_similarity(request):
    from fuzzywuzzy import fuzz

    baseline = "Mocha Cheesecake Fudge Brownie Bars"
    result = {
        "similarity": fuzz.ratio(baseline, request["text"]),
        "partial_similarity": fuzz.partial_ratio(baseline, request["text"]),
    }
    return result


fraud_detection_feature_service = FeatureService(
    name="fraud_detection_feature_service",
    online_serving_enabled=True,
    transform_server_group=fraud_team_tsg,
    features=[fuzzy_similarity],
)
  7. Once you apply your FeatureService definition (also using tecton apply), you are ready to query features! You can use the HTTP API to query the features from the FeatureService, as described in Reading Feature data for inference.
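
For example, here is a minimal sketch of an HTTP API request for this Feature Service. The cluster URL (https://yourco.tecton.ai), the prod workspace name, and the $TECTON_API_KEY Service Account key are placeholders; see Reading Feature data for inference for the authoritative request format:

% curl -s "https://yourco.tecton.ai/api/v1/feature-service/get-features" \
    -H "Authorization: Tecton-key $TECTON_API_KEY" \
    -d '{
      "params": {
        "workspace_name": "prod",
        "feature_service_name": "fraud_detection_feature_service",
        "request_context_map": {"text": "Mocha Cheesecake Fudge Brownie Bars"}
      }
    }'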

Delete

  1. Select the desired live workspace:
% tecton workspace select <WORKSPACE_NAME>
  2. Remove (or comment out) the declaration of the Transform Server Group in the declarative config:
# from tecton import AutoscalingConfig
# from tecton import TransformServerGroup
#
# default = TransformServerGroup(
#     name="default",
#     description="Fraud detection team Transform Server Group",
#     owner="fraud-detection",
#     environment="my-custom-env",  # The name of the environment created in step 2
#     scaling_config=AutoscalingConfig(min_nodes=1, max_nodes=10),
# )
  3. Apply the declarative config to the workspace:
% tecton apply
  4. No further action is needed. As soon as the Transform Server Group is deleted, it no longer appears in the output of the tecton server-group list CLI command. For a few minutes the group will remain in the list with the DELETING status:
% tecton server-group list

Id Name Type Status Environment Description Created At Owner Last Modified By
==================================================================================================================================================================================================================
bde2a413a3491a27384fea41a75139c3 default TRANSFORM_SERVER_GROUP DELETING None Fraud detection team Transform Server Group 2024-09-09 03:13:47 UTC fraud-detection jon@tecton.ai

Note that removing a TransformServerGroup declaration will cause the corresponding cloud resources to be deleted.

Scaling Configurations

Transform Server Groups (TSGs) support two scaling configurations to meet different workload demands: ProvisionedScalingConfig and AutoscalingConfig.

Provisioned Scaling Configuration

The ProvisionedScalingConfig specifies a fixed number of nodes (desired_nodes) that should always be active in the Transform Server Group (TSG). This configuration is suitable for use cases where a consistent level of compute capacity is necessary, regardless of changes in workload or demand. It is ideal for applications that require predictable performance and low latency or services with a guaranteed throughput.

In the example fraud_team_tsg configuration below, the desired node count is set to 3, ensuring that 3 nodes are always available to handle requests.

from tecton import ProvisionedScalingConfig
from tecton import TransformServerGroup

fraud_team_tsg = TransformServerGroup(
    name="fraud_team_tsg",
    description="Fraud detection team Transform Server Group",
    owner="fraud-detection",
    environment="tecton-transform-server-core:1.0",  # A Tecton-managed core environment
    scaling_config=ProvisionedScalingConfig(
        desired_nodes=3,
    ),
)

Autoscaling Configuration

The AutoscalingConfig allows for dynamic adjustment of the number of nodes in the Transform Server Group (TSG) based on demand. It defines the minimum and maximum boundaries (min_nodes and max_nodes) for the number of active nodes. This configuration is ideal for environments where workloads can vary significantly, enabling the group to scale up during peak usage and scale down during idle periods.

In the example below, fraud_team_tsg can have anywhere between 1 and 5 active nodes at any given time, depending on the current load.

from tecton import AutoscalingConfig
from tecton import TransformServerGroup

fraud_team_tsg = TransformServerGroup(
    name="fraud_team_tsg",
    description="Fraud detection team Transform Server Group",
    owner="fraud-detection",
    environment="tecton-transform-server-core:1.0",
    scaling_config=AutoscalingConfig(
        min_nodes=1,
        max_nodes=5,
    ),
)

Validations

When applying Transform Server Groups, the following validations are performed by Tecton:

  • The environment specified in the Transform Server Group must exist and be in READY state.
  • If using a custom environment, then the environment specified in the Transform Server Group must contain a version of tecton-runtime>=1.0.0.
  • The Transform Server Group being used in a Feature Service must be in the READY status, and defined in the same workspace.
  • If a Transform Server Group is used by a Feature Service, it cannot be scaled down to 0 nodes; this ensures serving uptime.
  • A Transform Server Group that is in use by a Feature Service cannot be deleted.
  • Transform Server Groups cannot be renamed.
