Overview

This tutorial demonstrates how to:

  • Set up a web crawler using the Jina Reader API
  • Process and store crawled content in a document store
  • Implement RAG for enhanced AI responses
  • Create an intelligent agent that can answer questions about crawled content

Task Structure

Let’s break down the task into its core components:

1. Input Schema

First, we define what inputs our task expects:

input_schema:
    type: object
    properties:
        url:
            type: string
        reducing_strength:
            type: integer

This schema specifies that our task expects:

  • url: the URL of the website to crawl (a string)
  • reducing_strength: an integer that controls how aggressively content chunks are merged during indexing (see step 2 below)
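For reference, a valid input could look like the following (the values are hypothetical examples):

# Hypothetical example input matching the schema above
task_input = {
    "url": "https://julep.ai",   # the page to crawl
    "reducing_strength": 5,      # minimum chunk-group size used during indexing (see step 2)
}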

2. Tools Configuration

Next, we define the external tools our task will use:

- name: get_page
  type: api_call
  api_call:
    method: GET
    url: https://r.jina.ai/
    headers:
      accept: application/json
      x-return-format: markdown
      x-with-images-summary: "true"
      x-with-links-summary: "true"
      x-retain-images: "none"
      x-no-cache: "true"
      Authorization: "Bearer JINA_API_KEY"

- name: create_agent_doc
  type: system
  system:
    resource: agent
    subresource: doc
    operation: create

We’re using two tools:

  • The get_page api call for web crawling
  • The create_agent_doc system tool for storing processed content
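To make the get_page call concrete, here is a minimal standalone Python sketch of the same request the api_call tool performs; the target URL and API key are placeholders:

import requests

# Standalone sketch of what the get_page tool does: fetch a page through the
# Jina Reader endpoint and receive markdown plus link/image summaries.
resp = requests.get(
    "https://r.jina.ai/https://julep.ai",  # reader endpoint + target URL (example)
    headers={
        "accept": "application/json",
        "x-return-format": "markdown",
        "x-with-images-summary": "true",
        "x-with-links-summary": "true",
        "x-retain-images": "none",
        "x-no-cache": "true",
        "Authorization": "Bearer JINA_API_KEY",  # replace with a real key
    },
)
content = resp.json()["data"]["content"]  # the task later reads _.json.data.content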

3. Main Workflow Steps

Step 1: Crawl Website

- tool: get_page
  arguments:
    url: $ "https://r.jina.ai/" + steps[0].input.url
  
- evaluate:
    result: $ chunk_doc(_.json.data.content.strip())

- workflow: index_page
  arguments:
    content: $ _.result
    document: $ steps[0].output.json.data.content.strip()
    reducing_strength: $ steps[0].input.reducing_strength

This step:

  • Takes the input URL and crawls the website
  • Processes content into readable markdown format
  • Chunks content into manageable segments
  • Filters out unnecessary elements like images and SVGs
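Note that chunk_doc is a built-in Julep expression helper; its exact implementation is not shown in this tutorial. Purely as an illustration, a splitter in that spirit might look like this hypothetical sketch:

# Illustrative sketch only -- chunk_doc is a Julep built-in whose exact
# behavior is not documented here. Conceptually, it splits a long markdown
# document into a list of smaller chunks:
def chunk_doc_sketch(text: str, max_chars: int = 2000) -> list[str]:
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks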

Step 2: Process and Index Content

- evaluate:
    document: $ _.document
    chunks: |
      $ [" ".join(_.content[i:i + max(_.reducing_strength, len(_.content) // 9)]) 
        for i in range(0, len(_.content), max(_.reducing_strength, len(_.content) // 9))]
  label: docs

# Situate each chunk within the whole document to improve retrieval
- over: $ [(steps[0].input.document, chunk.strip()) for chunk in _.chunks]
  parallelism: 3
  map:
    prompt: 
    - role: user
      content: >-
        $ f'''
        <document>
        {_[0]}  
        </document>

        Here is the chunk we want to situate within the whole document
        <chunk>
        {_[1]}
        </chunk>

        Please give a short succinct context to situate this chunk within the overall document for the purposes of improving search retrieval of the chunk. 
        Answer only with the succinct context and nothing else.
        '''
    unwrap: true
    settings:
      max_tokens: 16000

- evaluate:
    final_chunks: |
      $ [
        NEWLINE.join([succinct, chunk.strip()]) for chunk, succinct in zip(steps['docs'].output.chunks, _)
      ]

This step:

  • Processes each content chunk in parallel
  • Generates contextual metadata for improved retrieval
  • Prepares content for storage
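The grouping expression in the first evaluate step above is easier to read in plain Python. This hypothetical regroup function mirrors the same arithmetic (it assumes reducing_strength is at least 1, as the original expression does):

def regroup(chunks: list[str], reducing_strength: int) -> list[str]:
    # Group size is the larger of reducing_strength and len(chunks) // 9,
    # so very long documents collapse into at most roughly nine groups.
    size = max(reducing_strength, len(chunks) // 9)
    return [" ".join(chunks[i:i + size]) for i in range(0, len(chunks), size)]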

Step 3: Store Documents

- over: $ _['final_chunks']
  parallelism: 3
  map:
    tool: create_agent_doc
    arguments:
      agent_id: $ agent.id
      data:
        metadata:
          source: jina_crawler
        title: Website Document
        content: $ _

This step:

  • Stores processed content in the document store
  • Adds metadata for source tracking
  • Creates searchable documents for RAG
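After the task finishes, you can confirm that the documents were stored. A minimal sketch using the Julep Python SDK, assuming client and agent come from your setup code:

# Assumes `client` and `agent` already exist (see Usage below).
for doc in client.agents.docs.list(agent_id=agent.id):
    print(doc.title, doc.metadata)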

Usage

Start by creating an execution for the task, as sketched below. The execution crawls the website and stores the processed content in the agent’s document store.
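A minimal sketch with the Julep Python SDK, assuming you have already created the agent and the task (task.id below); the input values are examples:

import time

from julep import Julep

client = Julep(api_key="YOUR_JULEP_API_KEY")  # placeholder key

# Start the crawl-and-index task; input values are examples.
execution = client.executions.create(
    task_id=task.id,
    input={"url": "https://julep.ai", "reducing_strength": 5},
)

# Poll until the workflow finishes.
while (status := client.executions.get(execution.id).status) not in ("succeeded", "failed"):
    time.sleep(2)
print(status)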

Next, create a session for the agent. This session will be used to chat with the agent.
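Continuing the sketch above, assuming agent is the agent you created:

# Create a chat session bound to the agent that now holds the crawled docs.
session = client.sessions.create(agent_id=agent.id)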

Finally, chat with the agent.
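For example, continuing the sketch; the response shape is assumed here to follow the SDK’s OpenAI-style choices layout:

response = client.sessions.chat(
    session_id=session.id,
    messages=[{"role": "user", "content": "What is Julep?"}],
)
print(response.choices[0].message.content)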

Example Output

This is an example output when the agent is asked “What is Julep?”

Next Steps

  • Try this task yourself
  • Check out the full example
  • See the RAG Chatbot cookbook