# LLMSamplingParams

Sampling and generation parameters for controlling LLM text output.
## Properties
| Name | Type | Description | Notes |
|---|---|---|---|
| max_tokens | int | Maximum tokens to generate (>0 if set; provider-dependent limits apply) | [optional] |
| temperature | float | Sampling temperature 0.0-2.0 (0.0=deterministic, 2.0=highly random) | [optional] |
| top_p | float | Nucleus sampling threshold 0.0-1.0 (smaller values focus on higher probability tokens) | [optional] |
| top_k | int | Top-k sampling limit (>0 if set; primarily for local/open-source models) | [optional] |
| frequency_penalty | float | Frequency penalty -2.0 to 2.0 (positive values reduce repetition based on frequency) | [optional] |
| presence_penalty | float | Presence penalty -2.0 to 2.0 (positive values encourage topic diversity) | [optional] |
| stop_sequences | List[str] | Generation stop sequences (≤10 sequences; each ≤100 chars; generation halts on exact match) | [optional] |
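The properties above map onto constructor keyword arguments. As a minimal sketch, assuming the generated model accepts the documented fields as keyword arguments (the usual pattern for OpenAPI-generated Pydantic models), the following contrasts a near-deterministic configuration with a more exploratory one; the `extraction_params` and `creative_params` names are illustrative only:

```python
from goodmem_client.models.llm_sampling_params import LLMSamplingParams

# Near-deterministic output: temperature 0.0 approximates greedy decoding,
# useful for extraction or classification tasks.
extraction_params = LLMSamplingParams(
    temperature=0.0,
    max_tokens=128,
)

# More varied output: higher temperature plus nucleus sampling, a positive
# presence penalty to encourage topic diversity, and a stop sequence that
# halts generation at the first blank line.
creative_params = LLMSamplingParams(
    temperature=0.9,
    top_p=0.95,
    presence_penalty=0.6,
    stop_sequences=["\n\n"],
)

print(creative_params.to_json())
```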
## Example
```python
from goodmem_client.models.llm_sampling_params import LLMSamplingParams

# Example JSON payload; key names are assumed to match the documented
# property names (snake_case) -- adjust if your API expects different casing
json_str = "{\"temperature\": 0.7, \"top_p\": 0.9, \"max_tokens\": 256}"

# create an instance of LLMSamplingParams from a JSON string
llm_sampling_params_instance = LLMSamplingParams.from_json(json_str)

# print the JSON string representation of the object
print(llm_sampling_params_instance.to_json())

# convert the object into a dict
llm_sampling_params_dict = llm_sampling_params_instance.to_dict()

# create an instance of LLMSamplingParams from a dict
llm_sampling_params_from_dict = LLMSamplingParams.from_dict(llm_sampling_params_dict)
```
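As with most OpenAI-style sampling parameter sets, `temperature` and `top_p` both control output randomness, so it is common practice to tune one and leave the other at its default. All fields are optional; any parameter left unset falls back to the serving provider's default.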