Logging & Debugging Realtime Feature Views
This feature is currently in Public Preview.
- Must be enabled by Tecton Support.
- Realtime Feature Views must use Transform Server Groups.
Identifying and fixing performance and logical errors in Realtime Feature Views can be challenging. With logging enabled for Realtime Feature Views, you can debug errors related to resource providers, secrets management, configuration issues, performance bottlenecks, and unexpected behaviors in user-code execution on Transform Server Groups.
Logging Realtime Feature Views
With Realtime Feature View logging enabled, you can include print statements in your Feature View code that are captured in logs and accessible through the Tecton CLI. The logging system also records all errors that occur on the Transform Server, including those from Realtime Feature View transformations, Resource Provider transformations, and secret usage in realtime scenarios. These comprehensive logs provide valuable diagnostic information for troubleshooting. Here's an example:
@realtime_feature_view(
    sources=[transaction_request],
    mode="python",
    features=feature_schema,
    description="Whether the transaction amount is high (over $10000)",
)
def transaction_amount_is_high(transaction_request):
    print(transaction_request)
    result = {}
    result["transaction_amount_is_high"] = int(transaction_request["amount"] >= 10000)
    return result
Make sure your print statements do not include sensitive information, since anyone with access to the logs can view them.
In this example, when a realtime request is made, your print output is captured
and flushed to S3 in your Data Plane approximately every minute. To view these
logs in your console, run the following command to tail the last n log lines:
tecton server-group logs -n "<server_group_name>" -t 100
or specify a start time and end time to view logs within a specific window:
tecton server-group logs -n "<server_group_name>" -s 2025-03-14T20:14:39.095671Z -e 2025-03-14T20:15:39.291406Z
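If you would rather not type the timestamps by hand, a small helper can generate the `-s`/`-e` values. This is a sketch: `log_window` is an illustrative name, not part of the Tecton CLI or SDK; it only assumes the RFC 3339 UTC timestamp format shown in the example above.

```python
# Hypothetical helper that builds -s/-e values for `tecton server-group logs`.
# Assumes the CLI accepts RFC 3339 UTC timestamps, as in the example above.
from datetime import datetime, timedelta, timezone

def log_window(minutes: int) -> tuple[str, str]:
    """Return (start, end) UTC timestamps covering the last `minutes` minutes."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(minutes=minutes)
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    return start.strftime(fmt), end.strftime(fmt)

start, end = log_window(5)
print(f'tecton server-group logs -n "<server_group_name>" -s {start} -e {end}')
```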
Logs will show up as:
Timestamp Node Message
==========================================================================================
2025-03-24T23:32:25.177753Z i-07619721d3e052d35 [2025-03-24 23:32:25,177] __main__
{"amount": 100}
2025-03-24T23:34:25.295494Z i-07619721d3e052d35 [2025-03-24 23:34:25,294] __main__
{"amount": 200}
Using Third Party Logging Tools
You can integrate your preferred logging tool with logs accessible from the data
plane. All logs are stored in your S3 data plane bucket following this
standardized path structure:
realtime-logs/<workspace_name>/<server_group_name>/<year>/<month>/<day>/<hour>/<log_id>
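If you script your own ingestion, the prefix for a given hour can be assembled from this layout. The sketch below makes two assumptions not stated above: the workspace and server group names are placeholders, and the month/day/hour components are zero-padded.

```python
# Sketch: build the S3 key prefix for one hour of realtime logs, following the
# path layout above. Zero-padding of month/day/hour is an assumption.
from datetime import datetime, timezone

def realtime_log_prefix(workspace: str, server_group: str, ts: datetime) -> str:
    return (
        f"realtime-logs/{workspace}/{server_group}/"
        f"{ts.year}/{ts.month:02d}/{ts.day:02d}/{ts.hour:02d}/"
    )

# Placeholder workspace/server-group names; pass the result as the Prefix
# when listing objects with your S3 client of choice.
prefix = realtime_log_prefix(
    "prod", "my-transform-server-group",
    datetime(2025, 3, 24, 23, tzinfo=timezone.utc),
)
print(prefix)  # realtime-logs/prod/my-transform-server-group/2025/03/24/23/
```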
Monitoring tools like Observe and Datadog provide guidance on ingesting logs from S3.
Logging Behavior and Best Practices
- Logs are currently flushed every 1 minute to S3.
- Logs will be deleted after 7 days.
- Log statements can affect serving latency. Add print statements selectively while debugging and remove them once the issue is resolved, and keep log payloads concise to minimize overhead while still capturing the diagnostics you need.
- Tecton automatically captures standard output and standard error, making them available in the console for review. For more advanced logging requirements, implement a custom logging handler using a Resource Provider: this creates a reusable logger resource that is shared across transformations and realtime requests, improving both performance and code organization.
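The shared-logger pattern the Resource Provider approach enables can be sketched in framework-agnostic terms: build the handler once, then reuse it across many requests. The Tecton-specific wiring (the Resource Provider decorator and how resources reach a Feature View) is omitted here, and `get_logger` is an illustrative name, not part of the Tecton API.

```python
# Framework-agnostic sketch of a shared logger: the handler is configured once
# and reused on every request, instead of being rebuilt per call.
import logging
import sys
from functools import lru_cache

@lru_cache(maxsize=1)  # stands in for once-per-server resource initialization
def get_logger() -> logging.Logger:
    logger = logging.getLogger("realtime_fv")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(sys.stdout)  # stdout is captured in the logs
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
    logger.addHandler(handler)
    return logger

def transaction_amount_is_high(transaction_request):
    log = get_logger()  # cached: no per-request handler setup
    log.info("request amount=%s", transaction_request["amount"])
    return {"transaction_amount_is_high": int(transaction_request["amount"] >= 10000)}
```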