# LLMCapabilities
Capabilities and features supported by an LLM service.
## Properties
| Name | Type | Description | Notes |
|---|---|---|---|
| supports_chat | bool | Supports conversational/chat completion format with message roles | [optional] |
| supports_completion | bool | Supports raw text completion with prompt continuation | [optional] |
| supports_function_calling | bool | Supports function/tool calling with structured responses | [optional] |
| supports_system_messages | bool | Supports system prompts to define model behavior and context | [optional] |
| supports_streaming | bool | Supports real-time token streaming during generation | [optional] |
| supports_sampling_parameters | bool | Supports sampling parameters like temperature, top_p, and top_k for generation control | [optional] |
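Because every flag is optional, callers typically treat a missing value (`None`) the same as `False` and branch accordingly before choosing a request style. A minimal sketch, assuming the generated model accepts field names as keyword arguments (standard for OpenAPI-generated Pydantic models); `pick_request_style` is a hypothetical helper, not part of the SDK:

```python
from goodmem_client.models.llm_capabilities import LLMCapabilities

def pick_request_style(caps: LLMCapabilities) -> str:
    # Fields are optional, so a missing flag (None) is treated as unsupported.
    if caps.supports_chat:
        return "chat"
    if caps.supports_completion:
        return "completion"
    raise ValueError("service advertises neither chat nor completion support")

caps = LLMCapabilities(supports_chat=True, supports_streaming=True)
print(pick_request_style(caps))  # -> "chat"
```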
## Example
```python
from goodmem_client.models.llm_capabilities import LLMCapabilities

# TODO update the JSON string below
json_str = "{}"

# create an instance of LLMCapabilities from a JSON string
llm_capabilities_instance = LLMCapabilities.from_json(json_str)

# print the JSON string representation of the object
print(llm_capabilities_instance.to_json())

# convert the object into a dict
llm_capabilities_dict = llm_capabilities_instance.to_dict()

# create an instance of LLMCapabilities from a dict
llm_capabilities_from_dict = LLMCapabilities.from_dict(llm_capabilities_dict)
```
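In practice the dict usually comes from an already-decoded API response rather than a hand-built JSON string. A brief sketch, assuming the response body has been parsed into a dict whose keys match the property names above:

```python
from goodmem_client.models.llm_capabilities import LLMCapabilities

payload = {"supports_function_calling": True, "supports_streaming": False}
caps = LLMCapabilities.from_dict(payload)

# A False or missing flag both evaluate falsy, so one check covers both cases.
if not caps.supports_streaming:
    print("fall back to non-streaming generation")
```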