Update an LLM
Updates an existing LLM configuration, including display information, endpoint configuration, model parameters, credentials, and labels. All fields are optional; only the fields you specify are updated.
IMPORTANT: providerType is IMMUTABLE and cannot be changed after creation. Requires the UPDATE_LLM_OWN permission for LLMs you own (or UPDATE_LLM_ANY for admin users).
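For example, a request that sends only displayName updates the display name and leaves every other field untouched (illustrative value; authentication omitted):

curl -X PUT "http://localhost:8080/v1/llms/550e8400-e29b-41d4-a716-446655440000" \
  -H "Content-Type: application/json" \
  -d '{ "displayName": "GPT-4 Turbo (staging)" }'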
Path Parameters
llmId: The unique identifier (UUID) of the LLM to update
Request Body
LLM update details
displayName: Update display name (length <= 255)
description: Update description
endpointUrl: Update endpoint base URL (OpenAI-compatible base, typically ends with /v1)
apiPath: Update API path
modelIdentifier: Update model identifier (cannot be empty)
supportedModalities: Update supported modalities (an array with one or more elements replaces the stored set; an empty or omitted array leaves it unchanged)
credentials: Update credentials
version: Update version information
monitoringEndpoint: Update monitoring endpoint URL
capabilities: Update LLM capabilities (replaces the entire capability set; clients MUST send all flags)
defaultSamplingParams: Update default sampling parameters
maxContextLength: Update maximum context window size in tokens (int32)
clientConfig: Update provider-specific client configuration (replaces the entire config; no merging)
replaceLabels: Replace all existing labels with this set; an empty map clears all labels. Cannot be used with mergeLabels. See the example after this list.
mergeLabels: Merge with existing labels (upsert with overwrite); labels not mentioned are preserved. Cannot be used with replaceLabels. See the example after this list.
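To illustrate the two label-update modes, suppose the stored labels are currently { "environment": "staging", "team": "ai" } (a hypothetical starting point):

With mergeLabels, mentioned keys are upserted and all other keys are preserved:

  "mergeLabels": { "environment": "production" }
  resulting labels: { "environment": "production", "team": "ai" }

With replaceLabels, the stored set is replaced wholesale (and an empty map clears all labels):

  "replaceLabels": { "environment": "production" }
  resulting labels: { "environment": "production" }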
Example Request

curl -X PUT "http://localhost:8080/v1/llms/550e8400-e29b-41d4-a716-446655440000" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "Updated GPT-4 Turbo",
    "description": "Updated OpenAI GPT-4 Turbo with enhanced configuration for production use",
    "endpointUrl": "https://api.openai.com/v1",
    "apiPath": "/chat/completions",
    "modelIdentifier": "gpt-4-turbo-preview",
    "supportedModalities": ["TEXT"],
    "credentials": {
      "kind": "CREDENTIAL_KIND_API_KEY",
      "apiKey": { "inlineSecret": "sk-updated-api-key-here" }
    },
    "capabilities": {
      "supportsChat": true,
      "supportsCompletion": true,
      "supportsFunctionCalling": true,
      "supportsSystemMessages": true,
      "supportsStreaming": true,
      "supportsSamplingParameters": true
    },
    "version": "2.0.1",
    "monitoringEndpoint": "https://monitoring.company.com/llms/status",
    "replaceLabels": {
      "environment": "production",
      "team": "ml-platform",
      "cost-center": "ai-infrastructure"
    }
  }'

Response Body

{
"llmId": "550e8400-e29b-41d4-a716-446655440000",
"displayName": "GPT-4 Turbo",
"description": "OpenAI's GPT-4 Turbo model for chat completions",
"providerType": "OPENAI",
"endpointUrl": "https://api.openai.com/v1",
"apiPath": "/chat/completions",
"modelIdentifier": "gpt-4-turbo-preview",
"supportedModalities": [
"TEXT"
],
"labels": "{\"environment\": \"production\", \"team\": \"ai\"}",
"version": "1.0.0",
"monitoringEndpoint": "https://monitoring.example.com/llms/status",
"capabilities": {
"supportsChat": "true",
"supportsCompletion": "true",
"supportsFunctionCalling": "true",
"supportsSystemMessages": "true",
"supportsStreaming": "true",
"supportsSamplingParameters": "true"
},
"defaultSamplingParams": {
"maxTokens": "2048",
"temperature": "0.7",
"topP": "0.9",
"topK": "50",
"frequencyPenalty": "0.0",
"presencePenalty": "0.0",
"stopSequences": "[\"\\n\\n\", \"END\"]"
},
"maxContextLength": "32768",
"clientConfig": {
"property1": {},
"property2": {}
},
"ownerId": "550e8400-e29b-41d4-a716-446655440000",
"createdAt": "1617293472000",
"updatedAt": "1617293472000",
"createdById": "550e8400-e29b-41d4-a716-446655440000",
"updatedById": "550e8400-e29b-41d4-a716-446655440000"
}
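A quick way to confirm that an update took effect is to read fields back from the response, for example with jq (a sketch; assumes jq is installed, and the request body here is illustrative):

curl -s -X PUT "http://localhost:8080/v1/llms/550e8400-e29b-41d4-a716-446655440000" \
  -H "Content-Type: application/json" \
  -d '{ "displayName": "GPT-4 Turbo (staging)" }' | jq '{llmId, displayName, updatedAt}'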