
Embeddings

Private Preview

This feature is currently in Private Preview.

This feature has the following limitations:
  • Must be enabled by Tecton Support.
  • Only available for Rift-based Feature Views.
If you would like to participate in the preview, please file a support ticket.

Embeddings are condensed, rich representations of unstructured data that can power both predictive and generative AI applications.

In predictive use cases such as fraud detection and recommendation systems, embeddings enable models to identify complex patterns within data, leading to more accurate predictions. For generative AI applications, embeddings provide a semantic bridge that allows models to leverage the deep contextual meaning of data.

Tecton provides a seamless way to generate embeddings from text data, delivering the following benefits:

  1. Efficient Compute Resource Management: Large-scale inference of embeddings, such as processing millions of product descriptions nightly for a recommendation system, can be computationally expensive and memory-intensive. Tecton handles these workloads by carefully provisioning, scheduling, and tuning resources such as GPUs to ensure cost-efficient performance.
  2. Ease of Experimentation: Finding the optimal balance between embedding model complexity, inference performance, and infrastructure cost typically demands deep technical understanding and trial and error. Tecton gives ML practitioners easy tooling to quickly evaluate several state-of-the-art open-source models, without worrying about model or compute complexity.

Batch Embeddings Generation

Embeddings of text stored in one or more Batch Data Sources (e.g. Snowflake, Redshift, BigQuery, S3) can be generated with Batch Feature Views.

from tecton import batch_feature_view, Embedding, RiftBatchConfig
from tecton.types import Field, String, Timestamp, Float64
from datetime import datetime, timedelta


@batch_feature_view(
    sources=[products],
    entities=[product],
    timestamp_field="TIMESTAMP",
    features=[
        Embedding(input_column=Field("PRODUCT_NAME", String), model="sentence-transformers/all-MiniLM-L6-v2"),
        Embedding(input_column=Field("PRODUCT_DESCRIPTION", String), model="sentence-transformers/all-MiniLM-L6-v2"),
    ],
    mode="pandas",
    batch_schedule=timedelta(days=1),
    batch_compute=RiftBatchConfig(
        # NOTE: we recommend using L4 GPU instances for Embeddings inference
        instance_type="g6.xlarge",
    ),
    environment="tecton-rift-embeddings-0.10.0b7",
)
def product_info_embeddings(products):
    return products[["PRODUCT_ID", "PRODUCT_NAME", "PRODUCT_DESCRIPTION", "TIMESTAMP"]]

By default, the embedding features are named <COLUMN_NAME>_embedding. You can override this by explicitly specifying the name in the Embedding: Embedding(name="merchant_embedding_all_MiniLM", column="merchant", column_dtype=String, model="sentence-transformers/all-MiniLM-L6-v2").
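For example, here is a minimal sketch reusing the two constructor forms shown on this page: the first Embedding keeps the default PRODUCT_NAME_embedding name, and the second sets an explicit name. The Python variable names are purely illustrative.

from tecton import Embedding
from tecton.types import Field, String

# Default naming: this feature materializes as PRODUCT_NAME_embedding.
product_name_embedding = Embedding(
    input_column=Field("PRODUCT_NAME", String),
    model="sentence-transformers/all-MiniLM-L6-v2",
)

# Explicit naming: this feature materializes as merchant_embedding_all_MiniLM.
merchant_embedding = Embedding(
    name="merchant_embedding_all_MiniLM",
    column="merchant",
    column_dtype=String,
    model="sentence-transformers/all-MiniLM-L6-v2",
)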

Supported Models

The following model names can be specified in model="<model_name>" to use different open-source text embedding models (see the sketch after the list for an example):

  • mixedbread-ai/mxbai-embed-large-v1
  • Snowflake/snowflake-arctic-embed-l
  • Snowflake/snowflake-arctic-embed-m
  • Snowflake/snowflake-arctic-embed-s
  • Snowflake/snowflake-arctic-embed-xs
  • sentence-transformers/all-MiniLM-L6-v2
  • BAAI/bge-large-en-v1.5
  • BAAI/bge-base-en-v1.5
  • BAAI/bge-small-en-v1.5
  • thenlper/gte-large
  • thenlper/gte-base
  • thenlper/gte-small
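For instance, to compare two of the models above on the same column, one option is to declare two Embedding features side by side and pass the list to features= in the Feature View. This is a sketch, assuming that name= can be combined with the input_column form used earlier and that multiple Embedding features can share an input column; the feature names are illustrative.

from tecton import Embedding
from tecton.types import Field, String

# Two candidate embeddings of the same column, using different models from the
# list above; explicit names keep the resulting features distinguishable.
candidate_embeddings = [
    Embedding(
        name="product_description_minilm",
        input_column=Field("PRODUCT_DESCRIPTION", String),
        model="sentence-transformers/all-MiniLM-L6-v2",
    ),
    Embedding(
        name="product_description_mxbai_large",
        input_column=Field("PRODUCT_DESCRIPTION", String),
        model="mixedbread-ai/mxbai-embed-large-v1",
    ),
]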

If you'd like to use specific open-source embeddings models not listed above, please file a support request!

To use proprietary embedding models, see Model Generated Features.

Testing Batch Embeddings Generation Interactively

Feature Views with embeddings can be tested interactively, just like any other Batch Feature View, by running the following code in a notebook:

from datetime import datetime

start = datetime(2024, 1, 1)
end = datetime(2024, 3, 1)

df = product_info_embeddings.get_features_in_range(start_time=start, end_time=end).to_pandas()

display(df.head(5))
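Assuming the default feature names described above and that each embedding column comes back as an array-like value in the pandas DataFrame, a quick sanity check on vector dimensionality (384 for sentence-transformers/all-MiniLM-L6-v2) might look like this:

# Pull one embedding vector out of the materialized DataFrame.
vector = df["PRODUCT_NAME_embedding"].iloc[0]

print(vector[:5])   # first few float values of the embedding
print(len(vector))  # expected: 384 for all-MiniLM-L6-v2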

Limitations

  • Batch embeddings generation is only supported for Rift-based Feature Views.
  • Batch embeddings generation currently supports only text inputs; the datatype of columns to be embedded must be String. To generate embeddings on other input types, see Model Generated Features.
  • A single Feature View can have either Aggregates or Embeddings, but not both.
  • Embeddings generation is not yet supported for Stream Feature Views.
