LLM Extension
Creating an AsyncLLMBaseExtension using tman
Run the following command:
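The command block was not preserved in this page. A plausible invocation, based on the predefined async-LLM Python template that ships with tman (the template name, flags, and `class_name_prefix` value are assumptions and may differ between framework versions):

```shell
tman install extension default_async_llm_extension_python --template-mode --template-data class_name_prefix=Example
```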
Abstract APIs to implement
on_data_chat_completion(self, ten_env: TenEnv, **kargs: LLMDataCompletionArgs) -> None
This method is called when the LLM Extension receives a data completion request. It is used when data is passed in via the data protocol in streaming mode.
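A minimal sketch of an override, assuming the `ten_ai_base` module layout and a `messages` key inside `kargs` (both are assumptions; check your installed package):

```python
from ten import TenEnv
from ten_ai_base.llm import AsyncLLMBaseExtension  # module path is an assumption
from ten_ai_base.types import LLMDataCompletionArgs


class ExampleLLMExtension(AsyncLLMBaseExtension):
    async def on_data_chat_completion(
        self, ten_env: TenEnv, **kargs: LLMDataCompletionArgs
    ) -> None:
        # Hypothetical key: chat history delivered over the data protocol.
        messages = kargs.get("messages", [])
        ten_env.log_info(f"streaming completion over {len(messages)} messages")
        # Stream the model output back out chunk by chunk here (streaming mode).
```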
on_call_chat_completion(self, ten_env: TenEnv, **kargs: LLMCallCompletionArgs) -> any
This method is called when the LLM Extension receives a call completion request. It is used when data is passed in via the call protocol in non-streaming mode.
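A sketch of the non-streaming counterpart; the `kargs` key and the helper that invokes the model are hypothetical stand-ins:

```python
from ten import TenEnv
from ten_ai_base.llm import AsyncLLMBaseExtension  # module path is an assumption
from ten_ai_base.types import LLMCallCompletionArgs


async def run_model(messages) -> str:
    # Hypothetical stub standing in for a real model client call.
    return "stub completion"


class ExampleLLMExtension(AsyncLLMBaseExtension):
    async def on_call_chat_completion(
        self, ten_env: TenEnv, **kargs: LLMCallCompletionArgs
    ) -> any:
        # Hypothetical key: the whole request arrives in a single call
        # (call protocol); the full completion is returned at once.
        messages = kargs.get("messages", [])
        return await run_model(messages)
```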
on_tools_update(self, ten_env: TenEnv, tool: LLMToolMetadata) -> None
This method is called when the LLM Extension receives a tool update request.
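A sketch of handling a tool update, assuming `LLMToolMetadata` exposes a `name` field (the field name is an assumption):

```python
from ten import TenEnv
from ten_ai_base.llm import AsyncLLMBaseExtension  # module path is an assumption
from ten_ai_base.types import LLMToolMetadata


class ExampleLLMExtension(AsyncLLMBaseExtension):
    async def on_tools_update(
        self, ten_env: TenEnv, tool: LLMToolMetadata
    ) -> None:
        # Refresh whatever tool list is handed to the model on the next turn.
        ten_env.log_info(f"tool updated: {tool.name}")  # .name is an assumption
```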
APIs
cmd_in: tool_register
This API is used to consume the tool registration request. An array of LLMToolMetadata will be received as input. The tools will be appended to self.available_tools for future use.
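For context, a tool extension registers itself by sending this command. The sketch below uses the async environment and a JSON property named "tool"; both the property name and the metadata shape are assumptions about your graph's protocol:

```python
import json
from ten import AsyncTenEnv, Cmd


async def register_tool(ten_env: AsyncTenEnv) -> None:
    # Describe the tool; the metadata mirrors LLMToolMetadata, but the exact
    # fields and the "tool" property name are assumptions.
    cmd = Cmd.create("tool_register")
    cmd.set_property_from_json(
        "tool",
        json.dumps({
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": [{"name": "city", "type": "string", "required": True}],
        }),
    )
    # The LLM extension consumes this command and appends the metadata to
    # self.available_tools; send_cmd's return shape varies by version.
    await ten_env.send_cmd(cmd)
```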
cmd_out: tool_call
This API is used to send the tool call request. You can connect this API to any LLMTool extension destination to get the tool call result.
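A sketch of issuing the outgoing command and reading the tool's reply; the property names ("name", "arguments", "result") are assumptions to be matched against your tool extension's schema:

```python
from ten import AsyncTenEnv, Cmd


async def call_tool(ten_env: AsyncTenEnv, tool_name: str, arguments_json: str) -> str:
    cmd = Cmd.create("tool_call")
    cmd.set_property_string("name", tool_name)
    cmd.set_property_string("arguments", arguments_json)
    # send_cmd routes the command to whichever LLMTool extension the graph
    # connects as the destination; the awaited result carries the tool output.
    # Note: some framework versions return a (result, error) tuple instead.
    result = await ten_env.send_cmd(cmd)
    return result.get_property_string("result")  # property name is an assumption
```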