Chat Features in Julep
Learn about the robust chat system and its various features for dynamic interaction with agents
Overview
Julep provides a robust chat system with various features for dynamic interaction with agents. Here’s an overview of the key components and functionalities.
Features
Tool Integration
The chat API allows for the use of tools, enabling the agent to perform actions or retrieve information during the conversation.
Multi-agent Sessions
You can specify different agents within the same session using the `agent` parameter in the chat settings.
Response Formatting
Control the output format, including options for JSON responses with specific schemas.
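For example, restricting output to JSON might look like this (a sketch in Python, assuming a `client` and `session` as created in the Usage section below):

```python
# Sketch: restrict output to JSON via response_format (see the Additional
# Parameters table below). The table lists this as a string set to
# "json_object"; some SDK versions may expect an object instead.
response = client.sessions.chat(
    session_id=session.id,
    messages=[{"role": "user", "content": "List three facts about Dubai as JSON."}],
    response_format="json_object",
)
```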
Memory and Recall
Configure how the session accesses and stores conversation history and memories.
Document References
The API returns information about documents referenced during the interaction, useful for providing citations or sources.
- To use document search (RAG) with the chat API, create the session with the `recall_options` parameter set to appropriate search parameters (see the sketch below). To learn more about the `recall_options` parameter, check out the Session page.
- To use the chat API, you need to create a session first. To learn more about the session object, check out the Session page.
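As a sketch, creating such a session might look like this in Python (the `recall_options` fields shown are illustrative placeholders; the Session page documents the supported options):

```python
# Assumes an existing Julep client and agent_id (see Usage below).
session = client.sessions.create(
    agent=agent_id,
    recall_options={
        "mode": "hybrid",  # illustrative: how documents are searched
        "limit": 5,        # illustrative: how many documents to recall
    },
)
```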
Input Structure
- Messages: An array of input messages representing the conversation so far.
- Tools: (Advanced) Additional tools provided for this specific interaction.
- Tool Choice: Specifies which tool the agent should use.
- Memory Access: Controls how the session accesses history and memories (via the `recall` parameter).
- Additional Parameters: Various parameters to control the behavior of the chat. You can find more details in the Additional Parameters section.
Here’s an example of how a typical message object might be structured in a chat interaction:
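```python
# A minimal sketch of a message object; the field names mirror those
# listed under Response below ("name" is optional).
message = {
    "role": "user",                                      # who authored the message
    "content": "What's the weather like in Dubai today?",
    "name": "John",                                      # optional display name
}
```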
Additional Parameters
Parameter | Type | Description | Default |
---|---|---|---|
stream | bool | Indicates if the server should stream the response as it’s generated. | False |
stop | list[str] | Up to 4 sequences where the API will stop generating further tokens. | [] |
seed | int | If specified, the system will make a best effort to sample deterministically for that particular seed value. | None |
max_tokens | int | The maximum number of tokens to generate in the chat completion. | None |
logit_bias | dict[str, float] | Modify the likelihood of specified tokens appearing in the completion. | None |
response_format | str | Response format (set to json_object to restrict output to JSON). | None |
agent | UUID | Agent ID of the agent to use for this interaction. (Only applicable for multi-agent sessions) | None |
repetition_penalty | float | Number between 0 and 2.0. 1.0 is neutral and values larger than that penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. | None |
length_penalty | float | Number between 0 and 2.0. 1.0 is neutral and values larger than that penalize the number of tokens generated. | None |
min_p | float | Minimum probability, relative to the most likely token, for a token to be considered. | None |
frequency_penalty | float | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. | None |
presence_penalty | float | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. | None |
temperature | float | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | None |
top_p | float | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | 1.0 |
recall | bool | Whether to use document (RAG) search for this interaction. | True |
save | bool | Whether this interaction should be stored in the session history. | True |
remember | bool | DISABLED: Whether this interaction should form new memories (will be enabled in a future release). | False |
model | str | The model to use for the chat completion. | None |
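As a sketch, several of these parameters might be passed to a chat call like this (assuming a `client` and `session` as created in the Usage section below):

```python
# Sketch: tune generation with parameters from the table above.
response = client.sessions.chat(
    session_id=session.id,
    messages=[{"role": "user", "content": "Summarize our conversation so far."}],
    temperature=0.2,     # lower values give more focused, deterministic output
    max_tokens=256,      # cap on tokens generated for this completion
    stop=["\n\n"],       # stop generating at a blank line
    recall=True,         # run document (RAG) search for this interaction
    save=True,           # store this interaction in the session history
)
```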
Usage
Here’s an example of how to use the chat API in Julep using the SDKs:
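A minimal sketch with the Python SDK (the JavaScript SDK follows the same shape; the API key, agent name, and model below are placeholders):

```python
from julep import Julep

client = Julep(api_key="YOUR_API_KEY")   # placeholder key

# A session ties the conversation to an agent.
agent = client.agents.create(
    name="Assistant",                    # placeholder agent
    model="gpt-4o",                      # placeholder model
    about="A helpful general assistant.",
)
session = client.sessions.create(agent=agent.id)

# Chat within the session.
response = client.sessions.chat(
    session_id=session.id,
    messages=[{"role": "user", "content": "Hello! What can you help me with?"}],
)
print(response.choices[0])               # see the Response section for fields
```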
To use the Chat endpoint, you always have to create a session first.
To learn more about the Session object, check out the Session page.
Check out the API reference or SDK reference (Python or JavaScript) for more details on different operations you can perform on sessions.
Response
- Content-Type: `application/json`
- Body: A `MessageChatResponse` object containing the full generated message(s)
Both streamed and non-streamed responses include the following fields (a short access sketch follows the list):
- `id`: The unique identifier for the chat response
- `choices`: The generated message completions, each containing:
  - `role`: The role of the message (e.g. “assistant”, “user”, etc.)
  - `id`: Unique identifier for the message
  - `content`: List of actual message content
  - `created_at`: Timestamp when the message was created
  - `name`: Optional name associated with the message
  - `tool_call_id`: Optional ID referencing a tool call
  - `tool_calls`: Optional list of tool calls made during message generation
- `created_at`: When this resource was created, as a UTC date-time
- `docs`: List of document references used for this request, intended for citation purposes
- `jobs`: List of UUIDs for background jobs that may have been initiated as a result of this interaction
- `usage`: Statistics on token usage for the completion request
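For example, the fields above might be read like this (a sketch; attribute access is assumed to mirror the field names):

```python
# Continuing from the Usage example above.
for choice in response.choices:
    print(choice.content)    # the generated message content

for doc in response.docs:    # document references, useful for citations
    print(doc)

print(response.usage)        # token usage statistics for this request
```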
Finish Reasons
Reason | Description |
---|---|
`stop` | Natural stop point or provided stop sequence reached |
`length` | Maximum number of tokens specified in the request was reached |
`content_filter` | Content was omitted due to a flag from content filters |
`tool_calls` | The model called a tool |
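As a sketch, a client might branch on the finish reason like this (assuming each choice exposes a `finish_reason` field, as is conventional for chat completion APIs):

```python
reason = response.choices[0].finish_reason  # assumed field name
if reason == "tool_calls":
    pass  # run the requested tool, then send its result back to the session
elif reason == "length":
    pass  # completion was truncated at max_tokens; consider raising the limit
elif reason == "content_filter":
    pass  # content was omitted by a content-filter flag
```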
Support
If you need help or have further questions about Julep:
- Join our Discord community
- Check the GitHub repository
- Contact support at hey@julep.ai