This guide covers common patterns and best practices for building effective Julep agents, inspired by Anthropic's blog post "Building Effective Agents".
Routing acts as a “traffic controller”: it detects the type of an incoming request and sends it to the correct handler. This pattern is ideal when inputs are diverse or require specialized expertise. Steps generally include:
• Classifying the incoming request
• Routing it to the matching specialized handler
• Processing the request with that handler and returning its result
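For illustration, here is a minimal routing sketch in the same style as the examples below. The `request` input field, the classification labels, and the `handle_refund` / `handle_general_query` subworkflows are hypothetical placeholders; adapt them to your own request types:

```yaml
# Hypothetical handler subworkflows — replace with your own specialized workflows
handle_refund:
- ...

handle_general_query:
- ...

main:
# Classify the incoming request
- prompt:
    role: system
    content: >
      $ f'''Classify the following request as either "refund" or "general".
      Reply with a single word. Request: {_.request}'''
  unwrap: true

# Route to the matching handler
- if: $ "refund" in _.lower()
  then:
    workflow: handle_refund
    arguments:
      request: $ steps[0].input.request
  else:
    workflow: handle_general_query
    arguments:
      request: $ steps[0].input.request
```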
Parallelization executes subtasks concurrently, either by dividing the workload (sectioning) or by collecting multiple perspectives (voting). Keys to consider:
• Parallel processes for performance or redundancy
• Syncing results and handling errors
• Aggregating diverse outputs
Example implementations:
Sectioning:
```yaml
tools:
- name: aggregate_results
  type: ... # depends on the specific needs

# Custom workflow to run a subtask
run_subtask:
- ...

# Main workflow
main:
- prompt:
    role: system
    content: >
      $ f'''Break this task into multiple subtasks. Here is the task: {_.task}'''
  unwrap: true

- over: $ _.subtasks
  do:
  - workflow: run_subtask
    arguments: ...

- tool: aggregate_results
  arguments:
    results: $ _
```
Voting:
```yaml
tools:
- name: perform_voting
  description: Perform voting on the results of running the task instances, and return the majority best result.
  type: ... # depends on the specific needs

# Custom workflow to run a subtask
run_subtask:
- ...

# Main workflow
main:
- over: $ _.main_tasks
  do:
  # Run the same task multiple times (given that the `run_subtask` workflow is non-deterministic)
  - workflow: run_subtask
    arguments: ...

- tool: perform_voting
  arguments:
    results: $ _

- evaluate:
    final_result: $ _
```
The orchestrator-workers pattern uses a central “orchestrator” that delegates subtasks to multiple “worker” agents and integrates their outputs. Best for:
Large or dynamic multi-step tasks
Coordinating various specialized capabilities
Flexible task distribution
Example implementation:
```yaml
main:
# Orchestrator planning
- prompt:
    role: system
    content: $ f'''Break down the task into subtasks. Here is the task: {_.task}'''
  unwrap: true

# Worker delegation
- foreach:
    in: $ _.subtasks
    do:
      tool: assign_worker
      arguments:
        task: $ _
```
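The example above stops at delegation. To integrate the worker outputs, as the pattern calls for, you could append a synthesis step. The following is a sketch that assumes the output of the `foreach` step (`_`) holds the list of worker results:

```yaml
# Synthesize worker outputs into a single result
- prompt:
    role: system
    content: >
      $ f'''Combine the following worker results into one coherent answer: {_}'''
  unwrap: true
```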
The evaluator-optimizer pattern creates iterative feedback loops that refine outputs until they meet preset criteria. Suitable for:
• Content refinement
• Code reviews
• Detailed document enhancements
General flow:
Generate an initial result
Evaluate against criteria
Provide improvement feedback
Optimize/retry until goals are met
Example implementation:
```yaml
tools:
- name: score_content
  description: Score the content based on the criteria. Returns a JSON object with a score between 0 and 1, and a feedback string.
  type: function
  function:
    parameters:
      type: object
      properties:
        content:
          type: string
          description: Content to score

# Subworkflow to evaluate content
evaluate_content:
- tool: score_content
  arguments:
    content: $ _.content

- if: $ _.score < 0.5 # If the content does not meet the criteria, improve it
  then:
  - workflow: improve_content
    arguments:
      content: $ steps[0].input.content # steps[0].input is the main input of this workflow
      feedback: $ _.feedback # _ is the output of the score_content tool call
  else:
    evaluate:
      final_content: $ steps[0].input.content

# Subworkflow to improve content
improve_content:
- prompt:
    role: system
    content: $ f'''Improve the content based on the feedback. Here is the feedback: {_.feedback}'''
  unwrap: true

- workflow: evaluate_content
  arguments:
    content: $ _

main:
# Initial generation
- prompt:
    role: system
    content: $ f'''Generate initial content. Here is the task: {_.task}'''
  unwrap: true

# Evaluation loop
- loop:
    while: $ not _.meets_criteria
    do:
    - workflow: evaluate_content
    - workflow: improve_content
```
Explanation:
The evaluate_content subworkflow:
Takes content as input and scores it using a scoring tool
If the score is below 0.5, it triggers the improvement workflow
Uses the special variable _ (the previous step's output) to pass content and feedback between workflows
Returns the final content once quality criteria are met
The improve_content subworkflow:
Receives content and feedback from the evaluation
Uses an LLM to improve the content based on specific feedback
Automatically triggers another evaluation cycle by calling evaluate_content
The main workflow ties these together by:
Generating initial content from a task description
Running a continuous loop that alternates between evaluation and improvement
Only completing when content meets the defined quality criteria
This creates a powerful feedback loop where content is repeatedly refined based on specific feedback until it reaches the desired quality level. The pattern is particularly useful for tasks requiring high accuracy or quality, such as content generation, code review, or document analysis.
These patterns represent proven approaches from production implementations. Choose and adapt them based on your specific use case requirements and complexity needs.