Debugging Resource Providers in Realtime Feature Views
Resource Providers in online retrieval initialize Resources once and reuse the resulting connection across Feature Service requests. When a Realtime Feature View uses a Resource Provider, Tecton first invokes the Resource Provider transformation to create a Resource, which the Realtime Feature View then uses in its transformation. We'll walk through debugging with three examples:
- a Resource that has not been initialized
- an error in the Resource Provider transformation
- an error in the Feature View transformation
For the three examples, consider the following code snippet:
from tecton import (
    Attribute,
    RequestSource,
    Secret,
    realtime_feature_view,
    resource_provider,
)
from tecton.types import Field, String


@resource_provider(
    tags={"environment": "staging"},
    owner="tom@tecton.ai",
    secrets={
        "open_ai_key": Secret(scope="openai_embeddings", key="open_ai"),
    },
)
def open_ai_client(context):
    # Bug 1: the OpenAI import is missing, so OpenAI is undefined here
    client = OpenAI(api_key=context.secrets["open_ai_key"])
    return client


input_request = RequestSource(schema=[Field("input", String)])


@realtime_feature_view(
    sources=[input_request],
    mode="python",
    features=[Attribute("embedding", String)],
    resource_providers={"openai": open_ai_client},
)
def realtime_embedding(input_request, context):
    # Bug 2: resources should be accessed via context.resources, not context.resource
    openai = context.resource["openai"]
    response = openai.embeddings.create(input=input_request["input"], model="text-embedding-ada-002")
    return {"embedding": response.data[0].embedding}
1. Resource Not Initialized
The Resource returned by the Resource Provider transformation begins initialization immediately after a user runs `tecton apply`. It typically takes about 2 minutes for changes to the Resource Provider to propagate to the Feature Server. If a request is made to the Feature Service using a Resource Provider whose Resource has not yet been initialized, the following error will be returned:
{"error":"KeyError: \"Unable to find Resource for Resource Provider 'open_ai_client'. Newly updated Resource Providers may take upto 120 seconds to be propagated. Last refresh time is None\" (when evaluating UDF realtime_embedding)", ...}
2. Error with Resource Provider Transformation
After changes to your Resource Provider have been propagated to the Feature Server, Tecton will attempt to initialize the Resource by invoking the Resource Provider transformation. If any errors occur during the Resource Provider invocation, they will be logged and returned with the next request that uses the Resource Provider. For example, consider the following incorrect Resource Provider, which is missing an import for OpenAI:
@resource_provider(
    tags={"environment": "staging"},
    owner="tom@tecton.ai",
    secrets={
        "open_ai_key": Secret(scope="openai_embeddings", key="open_ai"),
    },
)
def open_ai_client(context):
    # There is a missing OpenAI import here
    client = OpenAI(api_key=context.secrets["open_ai_key"])
    return client
At request time, the returned error would specify that it occurred during the Resource Provider invocation and was caused by a missing import:
{"error":"Exception calling application: (<StatusCode.FAILED_PRECONDITION: (9, 'failed precondition')>, 'Error with resource provider \"openai\": [ResourceProviderErrorType.INVOCATION_ERROR] name \\'OpenAI\\' is not defined')", ...}%
Editing the Resource Provider transformation and re-running `tecton apply` will enable the successful instantiation of the Resource Provider.
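For this example, one fix is to import the OpenAI client inside the transformation body. A sketch, assuming the openai package is installed in the environment used by the Feature Server:

@resource_provider(
    tags={"environment": "staging"},
    owner="tom@tecton.ai",
    secrets={
        "open_ai_key": Secret(scope="openai_embeddings", key="open_ai"),
    },
)
def open_ai_client(context):
    # Import inside the transformation so the client class is defined when the Resource is initialized
    from openai import OpenAI

    client = OpenAI(api_key=context.secrets["open_ai_key"])
    return client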
3. Error with Feature View Transformation
Once instantiated, the Resource will be reused across online requests. If there are issues with your Realtime Feature View transformation that references your Resource, errors will be raised as UDF errors, indicating a bug in your Feature View transformation. Assuming the Resource Provider transformation has been corrected to include the missing import, consider the following example of an incorrect Realtime Feature View transformation, where the user mistakenly uses `context.resource` to access the resource instead of the correct `context.resources`:
input_request = RequestSource(schema=[Field("input", String)])


@realtime_feature_view(
    sources=[input_request],
    mode="python",
    features=[Attribute("embedding", String)],
    resource_providers={"openai": open_ai_client},
)
def realtime_embedding(input_request, context):
    # Context dictionary key to access resources should be resources, not resource
    openai = context.resource["openai"]
    response = openai.embeddings.create(input=input_request["input"], model="text-embedding-ada-002")
    return {"embedding": response.data[0].embedding}
The returned error will indicate that the issue originates from the UDF:
{"error":"AttributeError: 'RealtimeContext' object has no attribute 'resource' (when evaluating UDF realtime_embedding)", ...%
To fix this, update the Realtime Feature View transformation to access the Resource through `context.resources` and re-run `tecton apply`:
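input_request = RequestSource(schema=[Field("input", String)])


@realtime_feature_view(
    sources=[input_request],
    mode="python",
    features=[Attribute("embedding", String)],
    resource_providers={"openai": open_ai_client},
)
def realtime_embedding(input_request, context):
    # Access the initialized Resource through context.resources
    openai = context.resources["openai"]
    response = openai.embeddings.create(input=input_request["input"], model="text-embedding-ada-002")
    return {"embedding": response.data[0].embedding}

Making an online request to the Feature Service that uses this Resource Provider then returns the expected result: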
{"result":{"features":[[-0.023456, 0.012345, -0.034567, 0.078910...]]}}%