GoodMem

Get an LLM by ID

Retrieves the details of a specific LLM configuration by its unique identifier. Requires READ_LLM_OWN permission for LLMs you own (or READ_LLM_ANY for admin users to view any user's LLMs). This is a read-only operation with no side effects.

GET /v1/llms/{id}

Authorization: x-api-key: <token>

In: header

Path Parameters

id (string)

The unique identifier of the LLM to retrieve.

Response Body

curl -X GET "http://localhost:8080/v1/llms/550e8400-e29b-41d4-a716-446655440000" \
  -H "x-api-key: <token>"
{
  "llmId": "550e8400-e29b-41d4-a716-446655440000",
  "displayName": "GPT-4 Turbo",
  "description": "OpenAI's GPT-4 Turbo model for chat completions",
  "providerType": "OPENAI",
  "endpointUrl": "https://api.openai.com/v1",
  "apiPath": "/chat/completions",
  "modelIdentifier": "gpt-4-turbo-preview",
  "supportedModalities": [
    "TEXT"
  ],
  "labels": "{\"environment\": \"production\", \"team\": \"ai\"}",
  "version": "1.0.0",
  "monitoringEndpoint": "https://monitoring.example.com/llms/status",
  "capabilities": {
    "supportsChat": true,
    "supportsCompletion": true,
    "supportsFunctionCalling": true,
    "supportsSystemMessages": true,
    "supportsStreaming": true,
    "supportsSamplingParameters": true
  },
  "defaultSamplingParams": {
    "maxTokens": 2048,
    "temperature": 0.7,
    "topP": 0.9,
    "topK": 50,
    "frequencyPenalty": 0.0,
    "presencePenalty": 0.0,
    "stopSequences": ["\n\n", "END"]
  },
  "maxContextLength": 32768,
  "clientConfig": {
    "property1": {},
    "property2": {}
  },
  "ownerId": "550e8400-e29b-41d4-a716-446655440000",
  "createdAt": "1617293472000",
  "updatedAt": "1617293472000",
  "createdById": "550e8400-e29b-41d4-a716-446655440000",
  "updatedById": "550e8400-e29b-41d4-a716-446655440000"
}
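For callers not using curl, the same request can be sketched in Python with only the standard library. The helper names (`build_llm_request`, `get_llm`) and the base URL are illustrative, not part of the GoodMem API; only the path `/v1/llms/{id}` and the `x-api-key` header come from the reference above.

```python
import json
import urllib.request


def build_llm_request(base_url: str, llm_id: str, api_key: str) -> urllib.request.Request:
    """Build a GET /v1/llms/{id} request with the x-api-key header."""
    url = f"{base_url.rstrip('/')}/v1/llms/{llm_id}"
    return urllib.request.Request(url, headers={"x-api-key": api_key}, method="GET")


def get_llm(base_url: str, llm_id: str, api_key: str) -> dict:
    """Fetch one LLM configuration. Read-only: requires READ_LLM_OWN
    (or READ_LLM_ANY for admins viewing another user's LLMs)."""
    with urllib.request.urlopen(build_llm_request(base_url, llm_id, api_key)) as resp:
        return json.load(resp)
```

A caller would then use `get_llm("http://localhost:8080", "550e8400-e29b-41d4-a716-446655440000", api_key)` and read fields such as `displayName` or `capabilities` from the returned dict.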