# Dispute Scout Setup Guide

Disputes are used for fault detection in the opML mechanism. Dispute Scouts require a deeper technical understanding of how an opML dispute works, as well as of how to set a dispute scout up manually.

Scouts running the dispute module monitor the inference results produced by other Scouts in the network. They raise a Dispute whenever they suspect discrepancies between the reported results and the actual results.

### Note on Resource Intensity and Cost Management

Running a Dispute Scout instance is computationally intensive, particularly in terms of inference requirements. To manage costs effectively, users should consider the following:

1. Local Hardware Option: Ideally, users should use [Ollama](https://github.com/ollama/ollama) to run the Dispute Scout on their own hardware. This approach helps prevent inference costs from accumulating rapidly.
2. Cost Monitoring: If using cloud-based solutions such as OpenRouter or Groq, regularly monitor your usage and associated costs to avoid unexpected expenses.

A [simple guide](https://network-docs.chasm.net/chasm-scout-season-0/dispute-scout-setup-guide/ollama-setup-guide) is provided on how to run Ollama locally.

## Obtaining your WEBHOOK\_API\_KEY

Please refer to the Inference Scout Setup Guide in order to obtain your `WEBHOOK_API_KEY`. Note that `WEBHOOK_URL` is replaced with `LLM_BASE_URL` for dispute scouts.

{% content-ref url="chasm-inference-scout-setup-guide" %}
[chasm-inference-scout-setup-guide](https://network-docs.chasm.net/chasm-scout-season-0/chasm-inference-scout-setup-guide)
{% endcontent-ref %}

## Setup Guide and Software Requirements

1. Install Docker: Follow the [Docker Installation Guide](https://docs.docker.com/engine/install/ubuntu/)
2. Install Docker Compose: Follow the [Docker Compose Installation Guide](https://docs.docker.com/compose/install/linux/)
3. Clone the repository and enter the dispute directory: \
   `git clone` [`https://github.com/ChasmNetwork/chasm-scout`](https://github.com/ChasmNetwork/chasm-scout), then `cd chasm-scout/dispute`
4. Set up the environment file: use `nano .env` or `vim .env` to create a file with the following content, depending on your chosen model and supplier. Choose ONE of the following supplier options:

   1. Ollama (local GPU):

      ```bash
      ## Ollama (local GPU)
      ## Note that Ollama doesn't need an API key as it's local,
      ## but the key must be set to "ollama" so
      ## the system knows it's Ollama.

      LLM_API_KEY=ollama
      LLM_BASE_URL=http://localhost:11434/v1
      MODELS=stablelm2:zephyr,llama3:8b,qwen:4b,gemma2,gemma2:2b,mistral:7b,phi3:3.8b
      SIMULATION_MODEL=llama3:8b
      ORCHESTRATOR_URL=https://orchestrator.chasm.net
      WEBHOOK_API_KEY=
      ```
   2. Groq:

      ```bash
      ## Groq
      LLM_API_KEY=
      LLM_BASE_URL=https://api.groq.com/openai/v1
      MODELS=llama3-8b-8192,mixtral-8x7b-32768,gemma-7b-it
      SIMULATION_MODEL=llama3-8b-8192
      ORCHESTRATOR_URL=https://orchestrator.chasm.net
      WEBHOOK_API_KEY=
      ```
   3. OpenRouter:

      ```bash
      ## OpenRouter
      LLM_API_KEY=
      LLM_BASE_URL=https://openrouter.ai/api/v1
      MODELS=google/gemma-7b-it,meta-llama/llama-3-8b-instruct,microsoft/wizardlm-2-7b,mistralai/mistral-7b-instruct-v0.3
      SIMULATION_MODEL=meta-llama/llama-3-8b-instruct
      ORCHESTRATOR_URL=https://orchestrator.chasm.net
      WEBHOOK_API_KEY=
      ```

{% hint style="info" %}
The `SIMULATION_MODEL` is set to `meta-llama/llama-3-8b-instruct` as an example, but users are free to test other models from the list above.
{% endhint %}

{% hint style="info" %}
Do not include all three entries above. Pick the supplier you want to go with and copy only that block into your `.env` file.
{% endhint %}
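As a quick sanity check before building, you can verify that your `.env` contains every required key and that none of them is left blank. The sketch below is illustrative and not part of the official tooling; its `REQUIRED_KEYS` list simply mirrors the variables in the supplier examples above.

```python
# Illustrative sanity check for the dispute scout .env file.
# Not part of the official tooling; the key list mirrors the
# variables shown in the supplier examples above.

REQUIRED_KEYS = [
    "LLM_API_KEY",
    "LLM_BASE_URL",
    "MODELS",
    "SIMULATION_MODEL",
    "ORCHESTRATOR_URL",
    "WEBHOOK_API_KEY",
]

def check_env(text: str) -> list[str]:
    """Return the names of required keys that are missing or empty."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return [k for k in REQUIRED_KEYS if not values.get(k)]

# Usage (from chasm-scout/dispute):
#   problems = check_env(open(".env").read())
#   if problems:
#       print("Missing or empty keys:", ", ".join(problems))
```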

## Create and run the Docker Image

1. Make sure you're in `chasm-scout/dispute`; otherwise, run `cd dispute` from the repository root.
2. Build the Docker image for the dispute scout by running `docker compose build`. If you get any errors, make sure you've installed Docker and Docker Compose as per the instructions above.
3. Run the Docker image you've just built by running `docker compose up -d`.
4. Check that everything is going well by running `docker compose logs`.

{% hint style="info" %}
If you get an error saying `Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?`, start your Docker instance by running `sudo systemctl start docker`.
{% endhint %}
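Once the container is up, you may also want to confirm that the scout can actually reach your model endpoint. Ollama and the other OpenAI-compatible suppliers expose a `GET {LLM_BASE_URL}/models` route listing the model IDs they serve. The sketch below compares that response against your `MODELS` value; the helper name is illustrative and not part of the chasm-scout codebase.

```python
# Illustrative check that the endpoint serves every model in MODELS.
# The helper name is hypothetical, not part of chasm-scout.
import json

def missing_models(models_env: str, response_json: str) -> list[str]:
    """Compare the MODELS env value against a /models API response.

    Returns the model IDs from MODELS that the endpoint does not serve.
    """
    wanted = [m.strip() for m in models_env.split(",") if m.strip()]
    served = {entry["id"] for entry in json.loads(response_json)["data"]}
    return [m for m in wanted if m not in served]

# Live usage (assumes the Ollama defaults from the example .env above):
#   from urllib.request import urlopen
#   body = urlopen("http://localhost:11434/v1/models").read().decode()
#   print(missing_models("llama3:8b,gemma2:2b", body))
```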

## Optimizations for Advanced Users

Additional `.env` variables: you can add the following optional variable to your `.env` file:

```bash
## Default value is 0.5; feel free to change the threshold to something else.
MIN_CONFIDENCE_SCORE=0.5
```

This variable sets the minimum confidence score required before a dispute is filed, which helps prevent inaccurate dispute reports caused by model inaccuracies, especially when using smaller models. The default value is `0.5`. Run the `benchmark.py` script via `python benchmark.py` to see if the value works for you.
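To illustrate the idea: a threshold like this typically acts as a gate between the analysis verdict and the act of filing a dispute. The sketch below is a simplified illustration of such a gate, not the scout's actual implementation; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of how a MIN_CONFIDENCE_SCORE gate might work.
# This is NOT the scout's actual implementation.

MIN_CONFIDENCE_SCORE = 0.5  # default value from the docs

def should_file_dispute(suspected_mismatch: bool, confidence: float,
                        threshold: float = MIN_CONFIDENCE_SCORE) -> bool:
    """File a dispute only when the analysis both flags a mismatch
    and is confident enough in that judgment."""
    return suspected_mismatch and confidence >= threshold
```

Under this model, a smaller model that suspects a mismatch but reports only 0.3 confidence stays quiet, avoiding noisy false disputes; raising the threshold trades sensitivity for precision.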

The `dispute/strategies/` folder contains strategies for determining disputes:

* StaticTextAnalysisStrategy
* SemanticSimilarityAnalysis
* LLMQualityStrategy
* ResponseSimilarityAnalysis

A detailed publication on these strategies is forthcoming.
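As a rough illustration of what a response-similarity check could look like, the sketch below scores two response texts with Python's standard-library `difflib` and flags a dispute candidate when the score falls under a cutoff. This is a hypothetical stand-in, not the actual `ResponseSimilarityAnalysis` implementation, and the cutoff value is arbitrary.

```python
# Hypothetical stand-in for a response-similarity strategy.
# NOT the actual ResponseSimilarityAnalysis implementation.
from difflib import SequenceMatcher

def response_similarity(expected: str, reported: str) -> float:
    """Return a 0..1 similarity ratio between two response texts."""
    return SequenceMatcher(None, expected.lower(), reported.lower()).ratio()

def looks_disputable(expected: str, reported: str, cutoff: float = 0.6) -> bool:
    """Flag the pair as a dispute candidate when similarity is low."""
    return response_similarity(expected, reported) < cutoff
```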

### Advanced Scripts Setup

To use the additional scripts provided, such as `benchmark.py`, you have to install Python along with the script dependencies. A rough guide is provided below, but users venturing here are expected to be comfortable going beyond the basics.

1. Install Python and dependencies:

   ```bash
   sudo apt-get -y update && sudo apt-get install -y python3 python3-pip python3-venv
   ```
2. Create and activate a virtual environment:

   ```bash
   python3 -m venv dispute-scout
   source dispute-scout/bin/activate
   ```
3. Install requirements:

   ```bash
   pip install -r requirements.txt
   ```
4. Run the benchmark:

   ```bash
   python benchmark.py
   ```

The output will help you evaluate whether a dispute would be filed with your current settings.
