🪛 Features as Tools

This tutorial guides you through creating an LLM-powered restaurant recommendation agent using Features as Tools.

This is an example of how an LLM can make use of feature views as data retrieval tools. At development time, you provide the set of feature views that the LLM agent is allowed to call. The LLM uses the feature view’s description field to determine what tools are relevant for answering the user question. It makes the corresponding feature retrieval call(s) using the Tecton entity keys provided in the context of the call.
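
Conceptually, each feature view handed to the agent behaves like a self-describing retrieval tool. The sketch below is purely illustrative and not the tecton-gen-ai API (FeatureViewTool and retrieve are hypothetical names); it shows the three pieces the LLM works with: a name, a description the model reads to decide relevance, and entity keys whose values come from the caller's context rather than from the model.

from dataclasses import dataclass

# Illustrative sketch only -- NOT the tecton-gen-ai API. It models how a
# feature view is exposed to the LLM as a tool.
@dataclass
class FeatureViewTool:
    name: str
    description: str        # read by the LLM to choose relevant tools
    entity_keys: list[str]  # e.g. ["user_id"]; values come from the request context

    def retrieve(self, context: dict) -> dict:
        """Fetch the latest feature values for the entity keys in `context`."""
        raise NotImplementedError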

Install Packages

!pip install 'tecton-gen-ai[tecton,langchain,llama-index,dev]' langchain-openai llama-index-llms-openai

Log in to Tecton

Make sure to hit enter after pasting in your authentication token.

import tecton

tecton.login("explore.tecton.ai")

Features as Tools

In the following cell you'll create a Tecton Agent with a simple prompt and two feature views that serve as tools for the LLM. The system prompt in this example instructs the LLM to address the user by name and to only suggest restaurants near the user's location. In order to address the user by name, the LLM will need to use a tool. The location is configured to be provided as part of the context of the request by using a RequestSource object.

from tecton_gen_ai.agent import AgentService, AgentClient
from tecton_gen_ai.fco import prompt
from tecton_gen_ai.utils.tecton import make_request_source


def restaurant_recommender_agent(user_info_fv, recent_visit_fv):
    location_request = make_request_source(location=str)

    @prompt(sources=[location_request])
    def sys_prompt(location_request):
        location = location_request["location"]
        return f"""
Address the user by name.
You are a concierge service that recommends restaurants.
Respond to the user's question.
If the user asks for a restaurant recommendation respond with a specific restaurant that you know and suggested menu items.
Only suggest restaurants that are in or near {location}.
"""

    return AgentService(
        name="restaurant_recommender",
        prompts=[sys_prompt],
        tools=[user_info_fv, recent_visit_fv],  # feature views used as tools
    )

The sys_prompt instructions above require the LLM to “address the user by name” which requires the use of a tool.

In the next cell we create a couple of mock feature views that provide a user's basic information and their recently visited restaurants, respectively.

In practice, these feature views would be implemented as stream feature views that would provide the latest user information and recent visits within seconds of the corresponding user activity.
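
As a rough illustration, a production version of recent_eats_fv might be outlined as below. Treat this strictly as a sketch: visits_stream (a streaming source of visit events) and the user entity are hypothetical and not defined in this tutorial, and the exact @stream_feature_view parameters should be verified against your Tecton SDK version.

from datetime import datetime
from tecton import stream_feature_view

# Sketch only: `visits_stream` and `user` are hypothetical, and the parameter
# names should be checked against the Tecton SDK docs for your version.
@stream_feature_view(
    source=visits_stream,
    entities=[user],
    mode="pandas",
    online=True,
    feature_start_time=datetime(2024, 1, 1),
    description="User's recent restaurant visits.",
)
def recent_eats_stream_fv(visits):
    # aggregate to each user's three most recent restaurant names
    ...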

import pandas as pd

from tecton_gen_ai.testing import make_local_batch_feature_view


# mock user info data
user_info = pd.DataFrame(
    [
        {"user_id": "user1", "name": "Jim", "age": 30, "food_preference": "American"},
        {"user_id": "user2", "name": "John", "age": 40, "food_preference": "Italian"},
        {"user_id": "user3", "name": "Jane", "age": 50, "food_preference": "Chinese"},
    ]
)
user_info_fv = make_local_batch_feature_view(
    "user_info_fv",
    user_info,
    entity_keys=["user_id"],
    description="User's basic information.",
)

# mock user's recent visits
recent_eats = pd.DataFrame(
    [
        {"user_id": "user1", "last_3_visits": str(["Mama Ricotta's", "The Capital Grille", "Firebirds Wood Fired Grill"])},
        {"user_id": "user2", "last_3_visits": str(["Mama Ricotta's", "Villa Antonio", "Viva Chicken"])},
        {"user_id": "user3", "last_3_visits": str(["Wan Fu", "Wan Fu Quality Chinese Cuisine", "Ru San's"])},
    ]
)
recent_eats_fv = make_local_batch_feature_view(
    "recent_eats_fv",
    recent_eats,
    entity_keys=["user_id"],
    description="User's recent restaurant visits.",
)

The feature views' entity key in both cases is user_id; this key will need to be provided in the context when invoking the LLM agent.
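
To make the mechanics concrete, the snippet below uses plain pandas (no Tecton involved) to mimic what a feature-view tool call boils down to: the application supplies user_id in the context, and the tool returns that user's feature row.

# Plain-pandas illustration of a tool call: the entity key arrives in the
# context, and the lookup returns that user's feature values.
context = {"user_id": "user2", "location": "Ballantyne, Charlotte, NC"}

row = user_info.loc[user_info["user_id"] == context["user_id"]].iloc[0]
print(row["name"], row["food_preference"])  # -> John Italian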

In the following cell, you create a Tecton AgentClient that uses the mock feature views defined above.

# create the Tecton Agent
recommender_agent = restaurant_recommender_agent(user_info_fv, recent_eats_fv)

# create a client to invoke the agent
client = AgentClient.from_local(recommender_agent)

Put it all together

The Tecton AgentClient created above can be used to create a LangChain or LlamaIndex runnable agent.

In the cell below you will instantiate an LLM using OpenAI's gpt-4o-mini model and create a LangChain agent that is ready to use the tools we've provided above. In a later section, you'll do the same using LlamaIndex.

Obtain an OpenAI API key and replace “your-openai-key” in the following cell.

import os
from langchain_openai import ChatOpenAI
from tecton_gen_ai.testing.utils import print_md


# replace with your key
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# instantiate LLM model for LangChain
langchain_llm = ChatOpenAI(model="gpt-4o-mini")

# create invocable agent for LangChain
langchain_agent = client.make_agent(langchain_llm, system_prompt="sys_prompt")

Testing with different context

In the following cells you’ll provide different context and invoke the LLM agent to test how features as tools provide user specific personalization.

The first test uses the langchain_agent; its input and output are dictionaries with “input” and “output” keys:

print("Context - user 1 in Charlotte \n\nResponse:")
with client.set_context({"user_id": "user1", "location":"Ballantyne, Charlotte, NC"}):
print_md(langchain_agent.invoke({"input":"what do you know about me and what would you recommend"})["output"])

Context - user 1 in Charlotte

Response:

Hello Jim! I see that you enjoy American cuisine, and you've recently visited Mama Ricotta's, The Capital Grille, and Firebirds Wood Fired Grill.

For your next dining experience, I recommend The Cheesecake Factory in Ballantyne. They have a wide variety of American dishes. You might want to try the Chicken Madeira, which is a delicious sautéed chicken breast topped with fresh mushrooms and cheese, or the Fish Tacos if you're in the mood for something lighter. Don't forget to save room for one of their famous cheesecakes for dessert! Enjoy your meal!

Different Context for Different Users

In the following cells you can see how the response changes based on the user_id and location provided, resulting in a personalized response for each user in their current location.

The following test again uses the langchain_agent, this time for a different user in the same location:

print( "Context - user 2 in Charlotte \n\nResponse:")
with client.set_context({"user_id": "user2", "location":"Ballantyne, Charlotte, NC"}):
print_md(langchain_agent.invoke({"input":"I want to try a place tonight that I've never been to that also fits my preference"})["output"])

Context - user 2 in Charlotte

Response:

Hi John! Since you're in the mood for Italian and want to try somewhere new, I recommend Osteria Luce. It's a fantastic Italian restaurant in the Ballantyne area.

Suggested menu items:

 • Branzino al Sale - Whole roasted sea bass with salt crust.
 • Pasta alla Vodka - Classic pasta with a creamy vodka sauce.
 • Tiramisu - A delicious espresso-soaked dessert to finish off your meal.

Enjoy your dining experience tonight!

print( "Context - user 3 in Charlotte \n\nResponse:")
with client.set_context({"user_id": "user3", "location":"Charlotte, NC"}):
print_md(langchain_agent.invoke({"input":"Recommend one of my regular spots for dinner"})["output"])

Context - user 3 in Charlotte

Response:

Hi Jane! Since you enjoy Chinese food, I recommend visiting Wan Fu Quality Chinese Cuisine if you haven't been there recently. Their menu features delicious dishes like the Kung Pao Chicken and the Shrimp Fried Rice, both of which are crowd favorites. Enjoy your dinner!

With LlamaIndex

Tecton also integrates with LlamaIndex in the same way it does with LangChain, providing enriched prompts and features as tools.

The only difference is between LangChain's invoke and LlamaIndex's chat methods and their signatures; Tecton delivers fresh context in the same way for both. As a small side-by-side sketch of the two calling conventions (llamaindex_agent is created in the cell that follows):
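
# Comparison sketch (llamaindex_agent is created in the next cell):
question = "Recommend one of my regular spots for dinner"

with client.set_context({"user_id": "user3", "location": "Charlotte, NC"}):
    lc_answer = langchain_agent.invoke({"input": question})["output"]  # dict in / dict out
    li_answer = llamaindex_agent.chat(question).response               # str in / response object out

Now instantiate the LlamaIndex LLM, create the agent, and run the same query: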

from llama_index.llms.openai import OpenAI

# instantiate LLM model for LlamaIndex
llamaindex_llm = OpenAI(model="gpt-4o-mini")

# create invocable agent for LlamaIndex
llamaindex_agent = client.make_agent(llamaindex_llm, system_prompt="sys_prompt")

print("Context - user 3 in Charlotte \n\nResponse:")
with client.set_context({"user_id": "user3", "location": "Charlotte, NC"}):
    print_md(llamaindex_agent.chat("Recommend one of my regular spots for dinner").response)

Context - user 3 in Charlotte

Response:

Hi Jane! Since you enjoy Chinese cuisine, I recommend visiting Wan Fu Quality Chinese Cuisine. It's a great spot for dinner. You might want to try their Kung Pao Chicken or the Beef with Broccoli. Enjoy your meal!

Notice that because the LlamaIndex agent is stateful, a new instance of the agent is needed if the context differs between calls.

# create a new instance of the agent for LlamaIndex
llamaindex_agent = client.make_agent(llamaindex_llm, system_prompt="sys_prompt")

print("Context - user 2 in New York \n\nResponse:")
with client.set_context({"user_id": "user2", "location": "New York, NY"}):
    print_md(llamaindex_agent.chat("something romantic for dinner").response)

Context - user 2 in New York

Response:

For a romantic dinner, I recommend Carbone in Greenwich Village. It's an upscale Italian restaurant with a classic ambiance that's perfect for a special evening.

You might want to try their famous Spicy Rigatoni Vodka and the Veal Parmesan. For dessert, don't miss the Tiramisu—it's a delightful way to end your meal. Enjoy your evening, John!

Conclusion

Tecton delivers real-time, streaming, and batch feature pipelines in production. Enhancing an LLM's context through features as tools brings the power of fresh context to generative AI applications. By using feature views as tools, you give the LLM the ability to retrieve relevant information only when the user's question requires it.
