Dispute Scout Setup Guide


Last updated 10 months ago

Disputes are used for fault detection in the opML mechanism. Running a Dispute Scout requires a deeper technical understanding of how an opML dispute works, as well as the ability to set dispute scouts up manually.

Scouts running the dispute module monitor the inference results produced by other Scouts in the network. They raise a Dispute whenever they suspect discrepancies between the reported results and the actual results.
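The monitoring logic described above can be sketched as follows. This is an illustrative example only, not the scout's actual implementation: the function name, the use of difflib, and the threshold are all assumptions.

```python
# Hypothetical sketch: a Dispute Scout compares a reported inference
# result against its own recomputed result and flags a discrepancy.
# Names and the similarity measure are illustrative, not the real API.
from difflib import SequenceMatcher

def should_dispute(reported: str, recomputed: str, threshold: float = 0.5) -> bool:
    """Return True when the two results differ enough to warrant a Dispute."""
    similarity = SequenceMatcher(None, reported, recomputed).ratio()
    return similarity < threshold

# A near-identical pair should not trigger a dispute...
assert not should_dispute("The capital of France is Paris.",
                          "The capital of France is Paris!")
# ...while an unrelated answer should.
assert should_dispute("The capital of France is Paris.",
                      "42 is the answer.")
```

In practice the scout combines several such checks (see the strategies listed later in this guide), but the core idea is the same: recompute, compare, and raise a Dispute on a mismatch.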

Note on Resource Intensity and Cost Management

Running a Dispute Scout instance is computationally intensive, particularly in terms of inference requirements. To manage costs effectively, users should consider the following:

  1. Local Hardware Option: Ideally, users should run the Dispute Scout on their own hardware. This approach helps prevent inference costs from accumulating rapidly.

  2. Cost Monitoring: If using cloud-based solutions such as Openrouter or Groq, regularly monitor your usage and associated costs to avoid unexpected expenses.

A simple guide on how to run Ollama locally is provided in the Ollama Setup Guide.

Obtaining your WEBHOOK_API_KEY

Please refer to the Chasm Inference Scout Setup Guide to obtain your WEBHOOK_API_KEY. Note that for dispute scouts, WEBHOOK_URL is replaced with LLM_BASE_URL.

Setup Guide and Software Requirements

  1. Install Docker: Follow the Docker Installation Guide.

  2. Install Docker Compose: Follow the Docker Compose Installation Guide.

  3. Git clone the repository and enter it: git clone https://github.com/ChasmNetwork/chasm-scout and cd chasm-scout/dispute

  4. Set up the environment file: Use nano .env or vim .env to create a file with the following content, depending on your chosen model and supplier. Choose ONE of the following supplier options:

    1. Ollama (local GPU):

      ## Ollama (local GPU)
      ## note that Ollama doesn't need an API key since it runs locally,
      ## but the key must be set to "ollama" so the
      ## system knows Ollama is in use
      
      LLM_API_KEY=ollama
      LLM_BASE_URL=http://localhost:11434/v1
      MODELS=stablelm2:zephyr,llama3:8b,qwen:4b,gemma2,gemma2:2b,mistral:7b,phi3:3.8b
      SIMULATION_MODEL=llama3:8b
      ORCHESTRATOR_URL=https://orchestrator.chasm.net
      WEBHOOK_API_KEY=
    2. Groq:

      ## Groq
      LLM_API_KEY=
      LLM_BASE_URL=https://api.groq.com/openai/v1
      MODELS=llama3-8b-8192,mixtral-8x7b-32768,gemma-7b-it
      SIMULATION_MODEL=llama3-8b-8192
      ORCHESTRATOR_URL=https://orchestrator.chasm.net
      WEBHOOK_API_KEY=
    3. OpenRouter:

      ## Openrouter
      
      LLM_API_KEY=
      LLM_BASE_URL=https://openrouter.ai/api/v1
      MODELS=google/gemma-7b-it,meta-llama/llama-3-8b-instruct,microsoft/wizardlm-2-7b,mistralai/mistral-7b-instruct-v0.3
      SIMULATION_MODEL=meta-llama/llama-3-8b-instruct
      ORCHESTRATOR_URL=https://orchestrator.chasm.net
      WEBHOOK_API_KEY=

The SIMULATION_MODEL is set to meta-llama/llama-3-8b-instruct as an example, but users are free to test other models in the list above.

Do not include all three entries above. Pick the supplier you want to use and copy only that block into your .env file.
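Before building the Docker image, it can help to sanity-check the .env file. The snippet below is a quick standalone check, not part of the official tooling; it simply parses .env-style text and reports which of the keys used in the supplier blocks above are absent or left empty.

```python
# Quick sanity check (not part of the official tooling): parse .env-style
# text and confirm the keys each supplier option expects are all set.
REQUIRED_KEYS = {
    "LLM_API_KEY", "LLM_BASE_URL", "MODELS",
    "SIMULATION_MODEL", "ORCHESTRATOR_URL", "WEBHOOK_API_KEY",
}

def missing_env_keys(env_text: str) -> set[str]:
    """Return the required keys that are absent or have an empty value."""
    values = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and ## comments
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return {k for k in REQUIRED_KEYS if not values.get(k)}

sample = """
LLM_API_KEY=ollama
LLM_BASE_URL=http://localhost:11434/v1
MODELS=llama3:8b
SIMULATION_MODEL=llama3:8b
ORCHESTRATOR_URL=https://orchestrator.chasm.net
WEBHOOK_API_KEY=
"""
# WEBHOOK_API_KEY is empty in the sample, so it is reported as missing.
print(missing_env_keys(sample))  # {'WEBHOOK_API_KEY'}
```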

Create and run the Docker Image

  1. Make sure you're in chasm-scout/dispute; otherwise change into it by running cd dispute from the repository root.

  2. Build the Docker image for the dispute scout by running docker compose build. If you get any errors, make sure you've installed Docker and Docker Compose as per the instructions above.

  3. Run the Docker image you've just built by running docker compose up -d

  4. Check if everything is going well by running docker compose logs.

If you get an error saying Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?, start your docker instance by running sudo systemctl start docker

Optimizations for Advanced Users

Additional .env variables: You can add the following variable to your .env file:

## default value is 0.5, feel free to change the threshold to something else
MIN_CONFIDENCE_SCORE=0.5

This variable sets the minimum confidence score required to prevent inaccurate dispute reports due to potential model inaccuracies, especially when using smaller models. The default value is 0.5. Run the benchmark.py script via python benchmark.py to see if the values work for you.
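To illustrate how such a threshold could gate dispute filing, here is a minimal sketch. The function name and gating logic are assumptions for illustration; the scout's actual internals may differ.

```python
import os

# Illustrative only: gate dispute filing on MIN_CONFIDENCE_SCORE,
# defaulting to 0.5 when the variable is not set in the environment.
def passes_confidence_gate(confidence: float) -> bool:
    """File a dispute only when the model's confidence clears the threshold."""
    threshold = float(os.environ.get("MIN_CONFIDENCE_SCORE", "0.5"))
    return confidence >= threshold

# With the default threshold of 0.5:
print(passes_confidence_gate(0.8))  # True
print(passes_confidence_gate(0.3))  # False
```

Raising the threshold makes the scout more conservative (fewer, higher-confidence disputes); lowering it makes it more aggressive.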

The dispute/strategies/ folder contains strategies for determining disputes:

  • StaticTextAnalysisStrategy

  • SemanticSimilarityAnalysis

  • LLMQualityStrategy

  • ResponseSimilarityAnalysis

A detailed publication on these strategies is forthcoming.
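Since the strategy interfaces are not yet documented, the sketch below only suggests one plausible shape: each strategy scores agreement between a reported and a recomputed result, and the scores are combined. Every name and the averaging scheme here are hypothetical.

```python
# Hypothetical sketch of combining several dispute strategies.
# The actual interfaces in dispute/strategies/ are not public,
# so all names and logic here are illustrative.
from typing import Callable

Strategy = Callable[[str, str], float]  # returns an agreement score in [0, 1]

def static_text_analysis(reported: str, recomputed: str) -> float:
    # Trivial placeholder: exact match scores 1.0, anything else 0.0.
    return 1.0 if reported == recomputed else 0.0

def combined_score(strategies: list[Strategy],
                   reported: str, recomputed: str) -> float:
    """Average the per-strategy scores into one agreement score."""
    return sum(s(reported, recomputed) for s in strategies) / len(strategies)

print(combined_score([static_text_analysis], "hello", "hello"))  # 1.0
```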

Advanced Scripts Setup

To use the additional scripts provided, such as benchmark.py, you have to install Python and its dependencies. A rough guide is provided below, but users who venture here are expected to have above-average technical knowledge.

  1. Install Python and dependencies:

    sudo apt-get -y update && sudo apt-get install -y python3 python3-pip python3-venv
  2. Create and activate a virtual environment:

    python3 -m venv dispute-scout
    source dispute-scout/bin/activate
  3. Install requirements:

    pip install -r requirements.txt
  4. Run the benchmark:

    python benchmark.py

The output will help you evaluate if a dispute will be filed with your current settings.
