
Model Generated Features

Private Preview

This feature is currently in Private Preview.

This feature has the following limitations:
  • Must be enabled by Tecton Support.
  • Available only for Rift-based Feature Views.
If you would like to participate in the preview, please file a feature request.

Model-generated features are a powerful technique for creating high-quality context for boosting the performance of predictive or generative AI systems. A few examples of model-generated features are:

  • Custom embeddings: Transforming product descriptions and categories into dense vector representations, enabling more accurate recommendation systems in e-commerce.
  • Text classification: Performing sentiment analysis on user posts.
  • Image analysis: Extracting signals such as product color from images.
  • Named entity recognition: Identifying and categorizing named entities (e.g., person names, organizations, locations) in unstructured text data.

Tecton provides a seamless and efficient way to use custom models for context generation. This document details the process of registering a model with Tecton and using it for inference in a Batch Feature View.

Development Overview​

Developing and registering a model with Tecton involves defining the model's functionality in a model file such as model.py, specifying the model metadata and artifacts in a config file such as config.py, iterating in local development mode, and creating the model using Tecton's CLI.

Here are recommended steps for developing and registering a model with Tecton:

  1. Create a new local directory for your model repo: mkdir ${MODEL_REPO_DIR}
  2. Create a Model File, model.py, in this model repo. See the Model File section for details.
  3. Create or choose an environment that your model can run in. See the Environment section for details.
  4. Create a config.py that specifies the model metadata, including the name, schemas, model/artifact files, and environments. See the Model Config section for details.
  5. Iterate and test in local development mode. See Local Development & Testing for details.
  6. Run tecton model create config.py 🎉
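
Condensed, these steps look roughly like the following shell session (the directory and file names are placeholders; the two tecton commands are described in the sections below):

# 1. Create a local repo for the model
mkdir sentiment_model && cd sentiment_model

# 2 and 4. Add model.py and config.py (contents shown in the sections below)

# 3. Create a compatible environment from a requirements.txt (see Environment)
tecton environment create --name "my-custom-env-0.1" --description "My Custom Env 0.1" --requirements ./requirements.txt

# 5. Iterate locally in a notebook (see Local Development & Testing)

# 6. Register the model with Tecton
tecton model create config.py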

New Model CLIs​

Tecton provides a suite of command line utilities to help you manage the lifecycle of a model.

tecton model -h

See CLI Docs for details.

Model File​

The Model File is a Python file that contains some required and optional functions that define how the model works. This file serves as the entry point for Tecton to execute a model. Here’s an example model.py that contains a sentiment analysis model:

from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax


def preprocessor(input, context):
    # Tokenize the batch of input texts into tensors the model accepts as kwargs.
    tokenizer = context["tokenizer"]
    text_list = input["text"].tolist()
    encoded_input = tokenizer(text_list, return_tensors="pt", padding=True)
    return encoded_input


def postprocessor(input, context):
    # Convert the model logits to per-row probabilities and return the top label.
    scores = input[0].cpu().numpy()
    scores = softmax(scores, axis=-1)
    ranking = np.argsort(scores)
    ranking = ranking[:, ::-1]
    labels = ["negative", "neutral", "positive"]
    fn = np.vectorize(lambda i: labels[i])
    result = fn(ranking[:, 0])
    return result


def load_context(data_dir, context):
    # Download the tokenizer and model once and cache them in the context.
    model_name = "cardiffnlp/twitter-roberta-base-sentiment"
    context["tokenizer"] = AutoTokenizer.from_pretrained(model_name)
    context["model"] = AutoModelForSequenceClassification.from_pretrained(model_name)

Here are the details of each function of model.py:

def load_context(data_dir: pathlib.Path, context: MutableMapping[str, Any]) -> None (Required)
Tecton runs this function to load the context needed to run model inference.
  • data_dir is the directory that contains all artifacts you specified in the model config. You can access an artifact by joining data_dir with the artifact's path relative to the repo root directory.
  • Tecton reserves the "model" key in context for you to put your initialized model.

def preprocessor(input: Mapping[str, numpy.ndarray], context: MutableMapping[str, Any]) -> Mapping[str, torch.Tensor] (Optional)
This function is the entry point for pre-processing the data passed in for model inference.
  • input maps each column name to a NumPy array. Column names are the same as the input_schema specified in the model config.
  • This function is responsible for transforming input into kwargs in the format the model can accept.
Note:
  • If no pre-processor is defined, input is converted to Mapping[str, torch.Tensor] and passed as kwargs to the model inference function.
  • Do not change the order of the input data.

def postprocessor(input: Any, context: MutableMapping[str, Any]) -> numpy.ndarray (Optional)
This function is the entry point for post-processing the data returned from model inference.
  • input can be any type, depending on the model.
  • This function is responsible for transforming input into a NumPy array.
Note:
  • A post-processor is required unless your model output is exactly a torch.Tensor that can be converted to a NumPy array.
  • Do not change the order of the input data.
caution

Do not change the order of the input data in the pre-processor or post-processor; doing so will cause incorrect results.
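
For example, if your model config bundles a TorchScript checkpoint as an artifact, load_context can read it from data_dir. This is a minimal sketch; the artifact path "checkpoints/sentiment_scripted.pt" is hypothetical and must match a path declared in your own model config:

import pathlib
from typing import Any, MutableMapping

import torch


def load_context(data_dir: pathlib.Path, context: MutableMapping[str, Any]) -> None:
    # Join data_dir with the artifact's path relative to the model repo root.
    # "checkpoints/sentiment_scripted.pt" is a hypothetical bundled artifact.
    model = torch.jit.load(str(data_dir / "checkpoints" / "sentiment_scripted.pt"))
    model.eval()
    # Tecton reserves the "model" key in the context for the initialized model.
    context["model"] = model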

Environment​

You need to choose or create an environment that your model can run in at materialization time. An environment can be created using:

$ tecton environment create --name "my-custom-env-0.1" --description "My Custom Env 0.1" --requirements /path/to/requirements.txt

The requirements.txt should list the necessary pip packages for your model and tecton[rift-materialization]. See the environment page for more details.
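
For the sentiment model above, the requirements file might look like the following (the package list is illustrative; add whatever your model imports and pin versions as needed):

# requirements.txt
tecton[rift-materialization]
torch
transformers
scipy
numpy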

Model Config​

Model Config defines all the necessary metadata for a model and is used to register the model through tecton model create path/to/config.py. See SDK Doc for details.

The supported data types for both input_schema and output_schema include Int32, Int64, Float32, Float64, String, and arrays of these types. See the data type page for more details.

Example​

from tecton import ModelConfig
from tecton.types import Field, String

model = ModelConfig(
    name="your_model_name",  # Replace "your_model_name" with the actual model name
    model_type="pytorch",
    model_file="model.py",
    input_schema=[Field("text", String)],
    output_schema=Field("sentiment", String),
    environments=[
        "your_model_environment_name"
    ],  # Replace "your_model_environment_name" with the actual environment name
)

Inference Feature in a Batch Feature View​

The Inference feature type is used to invoke Tecton-registered models to compute feature values.

See Inference SDK Doc for details about this feature.

Here is an example:

from tecton import pandas_batch_config, Entity, BatchSource, batch_feature_view, Attribute, Inference, RiftBatchConfig
from tecton.types import Field, Int64, String
from datetime import timedelta, datetime

# Define the entity
entity = Entity(name="user_id", join_keys=[Field("user_id", String)])

# Define a data source
@pandas_batch_config(supports_time_filtering=False)
def chat_history():
    import pandas

    df = pandas.DataFrame(
        columns=["user_id", "timestamp", "text"],
        data=[
            ["user_1", "2024-05-14T00:00:00", "thank you so much!"],
            ["user_1", "2024-05-15T00:00:00", "I am very disappointed."],
            ["user_1", "2024-05-16T00:00:00", "okay"],
        ],
    ).astype({"timestamp": "datetime64[us]"})
    return df


chat_history_ds = BatchSource(name="chat_history", batch_config=chat_history)


@batch_feature_view(
    name="sentiment_bfv",
    mode="pandas",
    sources=[chat_history_ds],
    entities=[entity],
    batch_schedule=timedelta(days=1),
    feature_start_time=datetime(2024, 5, 1),
    timestamp_field="timestamp",
    features=[
        Attribute("text", String),
        Inference(
            input_columns=[
                Field("text", String),
            ],
            model="roberta-sentiment-v0",
            name="user_sentiment",
        ),
    ],
    # Turning off the `online` and `offline` parameters skips the `environment` check.
    online=True,
    offline=True,
    run_transformation_validation=False,
    environment="your_model_environment_name",  # Replace "your_model_environment_name" with the actual environment name.
    batch_compute=RiftBatchConfig(
        # NOTE: we recommend using L4 GPU instances for Embeddings inference
        instance_type="g6.xlarge",
    ),
)
def bfv(input_table):
    return input_table

Local Development & Testing​

To locally test and iterate on your model, you can define a ModelConfig object in a notebook and use model.run() to verify the output.

Use the Model Config and Model File examples below to set up your model locally in a notebook.

from tecton import ModelConfig
from tecton.types import Field, String

model = ModelConfig(
    name="roberta-sentiment-v0",
    model_type="pytorch",
    model_file="model.py",
    input_schema=[Field("text", String)],
    output_schema=Field("sentiment", String),
    environments=[],  # `environments` is not required in local development.
)

# Inspect model inference results.
df = model.run(
    {
        "text": ["I am excited", "I am disappointed"],
    }
)

After verifying the model, you have two options to refer to the model in your local Feature View.

  1. Register the model via the command line (tecton model create config.py) and then refer to it in a local Batch Feature View.
  2. Call model.register() to temporarily register the model in your local notebook session.
from tecton import ModelConfig
from tecton.types import Field, String

model = ModelConfig(
    name="roberta-sentiment-v0",
    model_type="pytorch",
    model_file="model.py",
    input_schema=[Field("text", String)],
    output_schema=Field("sentiment", String),
    environments=[],
)

model.register()
caution

If a local model name conflicts with a remote model, the remote model will be used, and you will see a warning when registering it locally.

After registering the model either way, the Batch Feature View can be developed locally as normal. The Batch Feature View example defined above can be used directly with the local model. Turning off the online and offline parameters skips the environment check in local development.
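
For example, once the model is registered in the notebook session, you can compute feature values for a time range and inspect the model-generated column. This is a minimal sketch; it assumes get_features_in_range is available in your SDK version and uses the bfv object and sample data defined above:

from datetime import datetime

# Compute features for the sample data's time range, including the
# model-generated `user_sentiment` column, and convert to pandas.
features = bfv.get_features_in_range(
    start_time=datetime(2024, 5, 14),
    end_time=datetime(2024, 5, 17),
).to_pandas()
print(features)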

Limitations​

  • This capability is currently limited to PyTorch models.
  • This capability is currently limited to Batch Feature Views. Support for Realtime and Stream Feature Views is coming soon.
