# LLMs API

LLMs API documentation for the Python SDK.

All URIs are relative to *http://localhost:8080*
| Method | HTTP request | Description |
|---|---|---|
| create_llm | POST /v1/llms | Create a new LLM |
| delete_llm | DELETE /v1/llms/{id} | Delete an LLM |
| get_llm | GET /v1/llms/{id} | Get an LLM by ID |
| list_llms | GET /v1/llms | List LLMs |
| update_llm | PUT /v1/llms/{id} | Update an LLM |
## create_llm

`CreateLLMResponse create_llm(llm_creation_request)`

Create a new LLM

Creates a new LLM configuration for text generation services. LLMs represent connections to different language model API services (such as OpenAI, vLLM, etc.) and include all the configuration necessary to use them for text generation. DUPLICATE DETECTION: returns `ALREADY_EXISTS` if another LLM exists with an identical {endpoint_url, api_path, model_identifier} after URL canonicalization. The `api_path` field defaults to `/v1/chat/completions` if omitted. Requires the `CREATE_LLM_OWN` permission (or `CREATE_LLM_ANY` for admin users).
### Example

- Api Key Authentication (ApiKeyAuth):

```python
import os

import goodmem_client
from goodmem_client.models.create_llm_response import CreateLLMResponse
from goodmem_client.models.llm_creation_request import LLMCreationRequest
from goodmem_client.rest import ApiException
from pprint import pprint

# Defining the host is optional and defaults to http://localhost:8080/v1/default
# See configuration.py for a list of all supported configuration parameters.
configuration = goodmem_client.Configuration(
    host="http://localhost:8080/v1/default"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below; use the example that
# satisfies your auth use case.

# Configure API key authorization: ApiKeyAuth
configuration.api_key['ApiKeyAuth'] = os.environ["API_KEY"]

# Uncomment below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['ApiKeyAuth'] = 'Bearer'

# Enter a context with an instance of the API client
with goodmem_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = goodmem_client.LLMsApi(api_client)

    # LLMCreationRequest | LLM configuration details
    llm_creation_request = {
        "displayName": "GPT-4 Turbo",
        "description": "OpenAI's GPT-4 Turbo model for chat completions",
        "providerType": "OPENAI",
        "endpointUrl": "https://api.openai.com",
        "apiPath": "/v1/chat/completions",
        "modelIdentifier": "gpt-4-turbo-preview",
        "supportedModalities": ["TEXT"],
        "credentials": {
            "kind": "CREDENTIAL_KIND_API_KEY",
            "apiKey": {"inlineSecret": "sk-your-api-key-here"},
        },
        "capabilities": {
            "supportsChat": "true",
            "supportsCompletion": "true",
            "supportsFunctionCalling": "true",
            "supportsSystemMessages": "true",
            "supportsStreaming": "true",
            "supportsSamplingParameters": "true",
        },
        "defaultSamplingParams": {"maxTokens": "2048", "temperature": "0.7", "topP": "0.9"},
        "maxContextLength": "32768",
        "labels": {"environment": "production", "team": "ai"},
    }

    try:
        # Create a new LLM
        api_response = api_instance.create_llm(llm_creation_request)
        print("The response of LLMsApi->create_llm:\n")
        pprint(api_response)
    except Exception as e:
        print("Exception when calling LLMsApi->create_llm: %s\n" % e)
```

### Parameters

| Name | Type | Description | Notes |
|---|---|---|---|
| llm_creation_request | LLMCreationRequest | LLM configuration details | |
### Return type

CreateLLMResponse

### Authorization

ApiKeyAuth

### HTTP request headers

- Content-Type: application/json
- Accept: application/json

### HTTP response details
| Status code | Description | Response headers |
|---|---|---|
| 201 | Successfully created LLM with status information | * Location - URL of the created LLM resource |
| 400 | Invalid request - missing required fields or invalid format | - |
| 401 | Unauthorized - invalid or missing API key | - |
| 403 | Forbidden - insufficient permissions to create LLMs | - |
| 409 | Conflict - LLM already exists with identical owner_id, provider_type, endpoint_url, api_path, model_identifier, and credentials_fingerprint | - |
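The duplicate check above is applied after URL canonicalization, so two endpoint URLs that differ only in case, default port, or a trailing slash can collide. The exact canonicalization rules are not documented here; the sketch below assumes a common scheme (lowercase scheme and host, default ports dropped, trailing slash trimmed) purely to illustrate how such a collision could arise. The helper name is hypothetical, not part of the SDK.

```python
from urllib.parse import urlsplit

def canonicalize_endpoint(url):
    """Illustrative canonical form of an endpoint URL (assumed rules)."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname or ""          # urlsplit lowercases the hostname
    port = parts.port
    # Keep the port only when it is not the scheme's default.
    if port and not (scheme == "https" and port == 443) \
            and not (scheme == "http" and port == 80):
        host = f"{host}:{port}"
    return f"{scheme}://{host}{parts.path.rstrip('/')}"

# Both spellings reduce to the same canonical URL, so a second create_llm
# call with the same api_path and model_identifier would return 409.
canonicalize_endpoint("HTTPS://API.OpenAI.com:443/")
canonicalize_endpoint("https://api.openai.com")
```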
## delete_llm

`delete_llm(id)`

Delete an LLM

Permanently deletes an LLM configuration. This operation cannot be undone: it removes the LLM record and securely deletes stored credentials. IMPORTANT: this does NOT invalidate or delete any content previously generated with this LLM; existing generations remain accessible. Requires the `DELETE_LLM_OWN` permission for LLMs you own (or `DELETE_LLM_ANY` for admin users).
### Example

- Api Key Authentication (ApiKeyAuth):

```python
import os

import goodmem_client
from goodmem_client.rest import ApiException

# Defining the host is optional and defaults to http://localhost:8080/v1/default
# See configuration.py for a list of all supported configuration parameters.
configuration = goodmem_client.Configuration(
    host="http://localhost:8080/v1/default"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below; use the example that
# satisfies your auth use case.

# Configure API key authorization: ApiKeyAuth
configuration.api_key['ApiKeyAuth'] = os.environ["API_KEY"]

# Uncomment below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['ApiKeyAuth'] = 'Bearer'

# Enter a context with an instance of the API client
with goodmem_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = goodmem_client.LLMsApi(api_client)

    # str | The unique identifier of the LLM to delete
    id = '550e8400-e29b-41d4-a716-446655440000'

    try:
        # Delete an LLM
        api_instance.delete_llm(id)
    except Exception as e:
        print("Exception when calling LLMsApi->delete_llm: %s\n" % e)
```

### Parameters

| Name | Type | Description | Notes |
|---|---|---|---|
| id | str | The unique identifier of the LLM to delete | |
### Return type

void (empty response body)

### Authorization

ApiKeyAuth

### HTTP request headers

- Content-Type: Not defined
- Accept: Not defined

### HTTP response details
| Status code | Description | Response headers |
|---|---|---|
| 204 | LLM successfully deleted | - |
| 400 | Invalid request - LLM ID in invalid format | - |
| 401 | Unauthorized - invalid or missing API key | - |
| 403 | Forbidden - insufficient permissions to delete this LLM | - |
| 404 | Not found - LLM with the specified ID does not exist | - |
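Because a 404 simply means the LLM is already gone, cleanup code can treat it as success rather than a failure. The wrapper below is a hypothetical helper (not part of the SDK), and `ApiError` is a stand-in for the client's `ApiException`, assumed here to expose the HTTP status as a `.status` attribute.

```python
class ApiError(Exception):
    """Stand-in for goodmem_client.rest.ApiException with a .status attribute."""
    def __init__(self, status):
        super().__init__(status)
        self.status = status

def delete_llm_idempotent(delete_fn, llm_id):
    """Return True if the LLM is gone afterwards (deleted now or earlier)."""
    try:
        delete_fn(llm_id)        # expected to raise ApiError on non-2xx
        return True              # 204: deleted now
    except ApiError as e:
        if e.status == 404:
            return True          # already deleted; treat as success
        raise                    # 400/401/403 are real failures
```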
## get_llm

`LLMResponse get_llm(id)`

Get an LLM by ID

Retrieves the details of a specific LLM configuration by its unique identifier. Requires the `READ_LLM_OWN` permission for LLMs you own (or `READ_LLM_ANY` for admin users to view any user's LLMs). This is a read-only operation with no side effects.
### Example

- Api Key Authentication (ApiKeyAuth):

```python
import os

import goodmem_client
from goodmem_client.models.llm_response import LLMResponse
from goodmem_client.rest import ApiException
from pprint import pprint

# Defining the host is optional and defaults to http://localhost:8080/v1/default
# See configuration.py for a list of all supported configuration parameters.
configuration = goodmem_client.Configuration(
    host="http://localhost:8080/v1/default"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below; use the example that
# satisfies your auth use case.

# Configure API key authorization: ApiKeyAuth
configuration.api_key['ApiKeyAuth'] = os.environ["API_KEY"]

# Uncomment below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['ApiKeyAuth'] = 'Bearer'

# Enter a context with an instance of the API client
with goodmem_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = goodmem_client.LLMsApi(api_client)

    # str | The unique identifier of the LLM to retrieve
    id = '550e8400-e29b-41d4-a716-446655440000'

    try:
        # Get an LLM by ID
        api_response = api_instance.get_llm(id)
        print("The response of LLMsApi->get_llm:\n")
        pprint(api_response)
    except Exception as e:
        print("Exception when calling LLMsApi->get_llm: %s\n" % e)
```

### Parameters

| Name | Type | Description | Notes |
|---|---|---|---|
| id | str | The unique identifier of the LLM to retrieve | |
### Return type

LLMResponse

### Authorization

ApiKeyAuth

### HTTP request headers

- Content-Type: Not defined
- Accept: application/json

### HTTP response details
| Status code | Description | Response headers |
|---|---|---|
| 200 | Successfully retrieved LLM | - |
| 400 | Invalid request - LLM ID in invalid format | - |
| 401 | Unauthorized - invalid or missing API key | - |
| 403 | Forbidden - insufficient permissions to view this LLM | - |
| 404 | Not found - LLM with the specified ID does not exist | - |
## list_llms

`ListLLMsResponse list_llms(owner_id=owner_id, provider_type=provider_type, label_=label_)`

List LLMs

Retrieves a list of LLM configurations accessible to the caller, with optional filtering. PERMISSION-BASED FILTERING: with the `LIST_LLM_OWN` permission you can only see your own LLMs (the owner_id filter is ignored if set to another user); with `LIST_LLM_ANY` you can see all LLMs or filter by any owner_id. This is a read-only operation with no side effects.
### Example

- Api Key Authentication (ApiKeyAuth):

```python
import os

import goodmem_client
from goodmem_client.models.list_llms_response import ListLLMsResponse
from goodmem_client.rest import ApiException
from pprint import pprint

# Defining the host is optional and defaults to http://localhost:8080/v1/default
# See configuration.py for a list of all supported configuration parameters.
configuration = goodmem_client.Configuration(
    host="http://localhost:8080/v1/default"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below; use the example that
# satisfies your auth use case.

# Configure API key authorization: ApiKeyAuth
configuration.api_key['ApiKeyAuth'] = os.environ["API_KEY"]

# Uncomment below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['ApiKeyAuth'] = 'Bearer'

# Enter a context with an instance of the API client
with goodmem_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = goodmem_client.LLMsApi(api_client)

    # str | Filter LLMs by owner ID. With LIST_LLM_ANY permission, omitting this
    # shows all accessible LLMs; providing it filters by that owner. With
    # LIST_LLM_OWN permission, only your own LLMs are shown regardless of this
    # parameter. (optional)
    owner_id = '550e8400-e29b-41d4-a716-446655440000'

    # str | Filter LLMs by provider type (e.g., OPENAI, VLLM, OLLAMA, etc.) (optional)
    provider_type = 'OPENAI'

    # str | Filter by label value. Multiple label filters can be specified on the
    # request URL (e.g., ?label.environment=production&label.team=ai). (optional)
    label_ = 'production'

    try:
        # List LLMs
        api_response = api_instance.list_llms(owner_id=owner_id, provider_type=provider_type, label_=label_)
        print("The response of LLMsApi->list_llms:\n")
        pprint(api_response)
    except Exception as e:
        print("Exception when calling LLMsApi->list_llms: %s\n" % e)
```

### Parameters

| Name | Type | Description | Notes |
|---|---|---|---|
| owner_id | str | Filter LLMs by owner ID. With LIST_LLM_ANY permission, omitting this shows all accessible LLMs; providing it filters by that owner. With LIST_LLM_OWN permission, only your own LLMs are shown regardless of this parameter. | [optional] |
| provider_type | str | Filter LLMs by provider type (e.g., OPENAI, VLLM, OLLAMA, etc.) | [optional] |
| label_ | str | Filter by label value. Multiple label filters can be specified (e.g., ?label.environment=production&label.team=ai) | [optional] |
### Return type

ListLLMsResponse

### Authorization

ApiKeyAuth

### HTTP request headers

- Content-Type: Not defined
- Accept: application/json

### HTTP response details
| Status code | Description | Response headers |
|---|---|---|
| 200 | Successfully retrieved LLMs | - |
| 400 | Invalid request - invalid filter parameters or pagination token | - |
| 401 | Unauthorized - invalid or missing API key | - |
| 403 | Forbidden - insufficient permissions to list LLMs | - |
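The label filter matches an LLM only when every requested key/value pair is present in its labels. The snippet below re-implements that semantics client-side purely as an illustration; `matches_labels` is a hypothetical helper, and the `labels`/`displayName` fields mirror the examples in this document.

```python
def matches_labels(llm, wanted):
    """True if every requested label key/value pair appears on the LLM."""
    labels = llm.get("labels", {})
    return all(labels.get(k) == v for k, v in wanted.items())

llms = [
    {"displayName": "GPT-4 Turbo", "labels": {"environment": "production", "team": "ai"}},
    {"displayName": "Local vLLM",  "labels": {"environment": "staging", "team": "ai"}},
]

# Equivalent of ?label.environment=production&label.team=ai on the request URL.
wanted = {"environment": "production", "team": "ai"}
matching = [m["displayName"] for m in llms if matches_labels(m, wanted)]
```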
## update_llm

`LLMResponse update_llm(id, llm_update_request)`

Update an LLM

Updates an existing LLM configuration, including display information, endpoint configuration, model parameters, credentials, and labels. All fields are optional; only the fields you specify are updated. IMPORTANT: provider_type is IMMUTABLE after creation and cannot be changed. Requires the `UPDATE_LLM_OWN` permission for LLMs you own (or `UPDATE_LLM_ANY` for admin users).
### Example

- Api Key Authentication (ApiKeyAuth):

```python
import os

import goodmem_client
from goodmem_client.models.llm_response import LLMResponse
from goodmem_client.models.llm_update_request import LLMUpdateRequest
from goodmem_client.rest import ApiException
from pprint import pprint

# Defining the host is optional and defaults to http://localhost:8080/v1/default
# See configuration.py for a list of all supported configuration parameters.
configuration = goodmem_client.Configuration(
    host="http://localhost:8080/v1/default"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below; use the example that
# satisfies your auth use case.

# Configure API key authorization: ApiKeyAuth
configuration.api_key['ApiKeyAuth'] = os.environ["API_KEY"]

# Uncomment below to set a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['ApiKeyAuth'] = 'Bearer'

# Enter a context with an instance of the API client
with goodmem_client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = goodmem_client.LLMsApi(api_client)

    # str | The unique identifier of the LLM to update
    id = '550e8400-e29b-41d4-a716-446655440000'

    # LLMUpdateRequest | LLM update details
    llm_update_request = {
        "displayName": "Updated GPT-4 Turbo",
        "description": "Updated OpenAI GPT-4 Turbo with enhanced configuration for production use",
        "endpointUrl": "https://api.openai.com",
        "apiPath": "/v1/chat/completions",
        "modelIdentifier": "gpt-4-turbo-preview",
        "supportedModalities": ["TEXT"],
        "credentials": {
            "kind": "CREDENTIAL_KIND_API_KEY",
            "apiKey": {"inlineSecret": "sk-updated-api-key-here"},
        },
        "capabilities": {
            "supportsChat": "true",
            "supportsCompletion": "true",
            "supportsFunctionCalling": "true",
            "supportsSystemMessages": "true",
            "supportsStreaming": "true",
            "supportsSamplingParameters": "true",
        },
        "version": "2.0.1",
        "monitoringEndpoint": "https://monitoring.company.com/llms/status",
        "replaceLabels": {
            "environment": "production",
            "team": "ml-platform",
            "cost-center": "ai-infrastructure",
        },
    }

    try:
        # Update an LLM
        api_response = api_instance.update_llm(id, llm_update_request)
        print("The response of LLMsApi->update_llm:\n")
        pprint(api_response)
    except Exception as e:
        print("Exception when calling LLMsApi->update_llm: %s\n" % e)
```

### Parameters

| Name | Type | Description | Notes |
|---|---|---|---|
| id | str | The unique identifier of the LLM to update | |
| llm_update_request | LLMUpdateRequest | LLM update details | |
### Return type

LLMResponse

### Authorization

ApiKeyAuth

### HTTP request headers

- Content-Type: application/json
- Accept: application/json

### HTTP response details
| Status code | Description | Response headers |
|---|---|---|
| 200 | Successfully updated LLM | - |
| 400 | Invalid request - ID format or update parameters invalid | - |
| 401 | Unauthorized - invalid or missing API key | - |
| 403 | Forbidden - insufficient permissions to update this LLM | - |
| 404 | Not found - LLM with the specified ID does not exist | - |
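Since update_llm only touches the fields you send, a caller can build a sparse payload and leave everything else unchanged. The helper below is a hypothetical sketch of that pattern (not part of the SDK): it drops unset (`None`) entries and rejects `providerType`, which this endpoint treats as immutable.

```python
def build_update(**fields):
    """Build a sparse update payload: keep only fields that were actually set."""
    if "providerType" in fields:
        # provider_type cannot be changed after creation (see above).
        raise ValueError("provider_type is immutable after creation")
    return {k: v for k, v in fields.items() if v is not None}

# Only displayName is sent; description remains unchanged on the server.
payload = build_update(displayName="Updated GPT-4 Turbo", description=None)
```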