GPU COMPUTE MARKETPLACE — EARN SAR RUNNING AI INFERENCE

GPU Marketplace. Arabic AI.

The only GPU compute marketplace with Saudi data residency, Arabic AI models, and PDPL compliance. OpenAI-compatible API. Per-token billing.

Windows, macOS (Apple Silicon), and Linux. 4 MB provider app. Zero config.

Need the other path? Open Enterprise Support

Providers Online: unavailable right now

Saudi Energy Advantage

Saudi energy-cost conditions provide a structural advantage for sustained AI operations.

Arabic AI, First-Class

Ship Arabic AI workloads with first-class support for ALLaM 7B, Falcon H1, JAIS 13B, and BGE-M3.

4 MB Provider App

Auto-detects GPU, installs inference engine (Ollama/MLX), downloads model. Works on Windows, macOS Apple Silicon, and Linux.

How DCP works

1. Choose Model

Select from Arabic AI models (ALLaM, JAIS, Falcon) or global models via OpenAI-compatible API.

2. Call Inference API

Send requests to your model endpoint. Saudi data residency, per-token billing, zero ops.

3. Track & Settle

Monitor usage and costs in real-time. Pay per token with SAR billing.

Explore all paths and tools
Settlement proof: estimate hold in halala -> runtime-based final settlement -> unused hold returned automatically.

PLATFORM STATUS

40+ providers registered

3 platforms supported (Win/Mac/Linux)

100-270 tok/s on consumer GPUs

Built on structural advantages, not promo claims

  • Saudi energy-cost conditions create a durable infrastructure advantage for long-run AI workloads.
  • Arabic AI support is first-class: ALLaM 7B, Falcon H1, JAIS 13B, and BGE-M3 are part of the supported model lane.
  • Execution uses native inference engines: Ollama on Windows/Linux and MLX on macOS Apple Silicon, with isolated runtime boundaries.

Already have an account? Sign in here

Providers Online: unavailable right now

Providers Registered: unavailable right now

Last telemetry update: unavailable right now

How DCP runs

These are platform policy and operating-model statements, separate from live telemetry.

Runtime settlement

Estimate hold in halala before execution, then completion-based settlement with unused hold returned automatically.

Containerized GPU execution

Workloads run in approved Docker runtimes with NVIDIA Container Toolkit and explicit GPU scoping.

Arabic AI support

Arabic-ready model support includes ALLaM 7B, Falcon H1, JAIS 13B, and BGE-M3.

How DCP Billing Works

1. Before execution, DCP places an estimate hold in halala from your wallet.
2. After completion, the final cost is settled from actual runtime (not the estimate).
3. Any unused hold is returned to your wallet balance in halala automatically.

100 halala = 1 SAR.

Current flow: wallet top-up in SAR, estimate hold before execution, completion-based settlement, and automatic return of any unused hold.
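The hold-and-settle arithmetic above can be sketched in a few lines. The per-minute rate and whole-halala rounding here are illustrative assumptions, not published platform rates:

```python
HALALA_PER_SAR = 100  # from the billing docs: 100 halala = 1 SAR

def settle(hold_halala: int, runtime_min: float, rate_halala_per_min: float):
    """Return (final_cost, refund) in halala for a completed job.

    The final cost is derived from actual runtime, capped at the hold;
    whatever is unused goes back to the wallet automatically.
    """
    final_cost = min(round(runtime_min * rate_halala_per_min), hold_halala)
    return final_cost, hold_halala - final_cost

# Estimate hold for 5 minutes at an assumed 40 halala/min -> 200 halala held.
cost, refund = settle(200, runtime_min=3.5, rate_halala_per_min=40)
# Job finished early: 140 halala settled, 60 halala (0.60 SAR) returned.
```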


Meta, Falcon LLM, Mistral AI, Inception, Qwen, TII, Stability AI, Microsoft, Hugging Face, ALLaM


API access in 60 seconds

OpenAI-compatible. Drop in your API key and start generating. No setup, no queue.

# Drop-in replacement for OpenAI
from openai import OpenAI

client = OpenAI(
    base_url="https://api.dcp.sa/v1",
    api_key="your-key",
)

resp = client.chat.completions.create(
    model="ALLaM-7B-Instruct",
    messages=[{"role": "user", "content": "Hello world"}],
)
print(resp.choices[0].message.content)

Built for What Comes Next

Built for developers worldwide — transparent matching, predictable job states, and completion-driven settlement.

Pay-as-you-go

Wallet-based pay-for-use billing: estimate before execution, then final settlement after completion.

Start Renting

PDPL Compliant

Data handling is designed for Saudi residency workflows and PDPL-oriented controls under current platform policy.

Learn More

OpenAI-Compatible API

Drop-in replacement for OpenAI API. Use your existing code with Arabic AI models hosted in Saudi Arabia.

View Docs

How It Works

01

Register

Sign up as a provider or renter and create your API access.

02

Connect

Providers run the daemon and publish availability; renters pick compatible providers from marketplace capacity.

03

Compute

Submit a workload, and DCP routes it through compatibility checks to suitable capacity.

04

Earn / Pay

A usage estimate appears before execution; final settlement reconciles to actual completed runtime (75% provider, 25% platform).
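The 75/25 split in step 04 works out like this (amounts in halala; rounding to whole halala is an assumption):

```python
def split_settlement(final_cost_halala: int, provider_share: float = 0.75):
    """Split a settled amount 75/25 between provider and platform."""
    provider_cut = round(final_cost_halala * provider_share)
    return provider_cut, final_cost_halala - provider_cut

# A job settling at 200 halala (2 SAR) pays the provider 150 and the platform 50.
```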

Earn SAR With Your GPU

4 MB desktop app. Auto-detects your GPU, installs the inference engine (Ollama or MLX), downloads the AI model, and connects to DCP. Zero config.

Why providers choose DCP

  • Windows, macOS (Apple Silicon), and Linux — works on the hardware you already own
  • 4 MB desktop app — not 180 MB like Electron competitors
  • Auto-detects GPU, auto-installs inference engine (Ollama/MLX), auto-downloads AI model
  • 100-270 tok/s on consumer GPUs (RTX 3060 Ti to RTX 5090) — benchmark-proven
  • MoE models (30B parameters, only 3B active) = enterprise quality at consumer hardware speed
  • Auto NAT traversal via Cloudflare Tunnel — no port forwarding needed
  • Real-time dashboard with GPU temp, utilization, live earnings, and job feed

Windows

Download DCP Provider

4 MB installer — Windows 10/11, RTX GPUs

macOS / Linux

curl -sSL https://api.dcp.sa/install | bash -s -- YOUR_KEY

macOS: Apple Silicon M1-M4 (MLX) | Linux: NVIDIA RTX GPUs (Ollama)

After install, your dashboard shows:

✓ GPU detected: RTX 4090 (24 GB)
✓ Ollama installed, model downloaded
✓ Cloudflare Tunnel active — no port forwarding needed
✓ Connected to DCP — earning SAR on inference jobs

What You Can Run

From quick inference in the Playground to full training pipelines via Docker — the network handles it.

LLM Inference

Run ALLaM, Falcon, JAIS, Llama 3, and other open-source models at full GPU speed.

ALLaM, Falcon, Llama 3, JAIS

Image Generation

Generate images with SDXL and ControlNet pipelines — or fine-tune with your own data.

SDXL, ControlNet, DreamBooth

Model Fine-Tuning

LoRA and QLoRA fine-tuning with ready-to-run Docker templates. Bring your dataset, pick a base model, go.

LoRA, QLoRA, PyTorch

Arabic AI Models

First-class support for Saudi Arabic NLP — ALLaM 7B, Falcon H1, JAIS 13B, BGE-M3 embeddings, and rerankers.

ALLaM 7B, Falcon H1, JAIS 13B, BGE-M3

Custom Docker Jobs

Approved Docker images with CUDA support. GPU passthrough uses NVIDIA Container Toolkit in isolated containers.

Docker, CUDA, Custom
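A custom Docker job presumably carries its image inside container_spec in the submit payload. A hypothetical sketch, extrapolated from the llm_inference submit example later on this page — only "image_type" appears in that documented example; the "custom_docker" job_type value and the "image" field are assumptions to confirm against the API docs:

```python
# Hypothetical payload for a custom Docker job — verify field names
# against the API docs before use.
custom_job = {
    "provider_id": 26,
    "job_type": "custom_docker",                      # assumed value
    "duration_minutes": 30,
    "container_spec": {
        "image_type": "custom",                       # assumed catalog key
        "image": "nvcr.io/nvidia/pytorch:24.05-py3",  # example CUDA image
    },
}
```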

Scientific Compute

Batch processing, data pipelines, and rendering on CUDA workloads supported by provider policy and image availability.

CUDABatchHPC
Official SDKs — New

Install and Go

Use the official SDKs from the docs to reduce boilerplate and speed your first job submission path.

  • Provider SDK examples and setup guidance (Python package details in docs)
  • Renter SDK examples and setup guidance (Node.js package details in docs)
  • Auth helpers and job polling patterns built into SDK examples
  • Open-source SDK reference implementations and migration notes
SDK Docs

# Python — provider SDK

# Check the latest SDK package name in /docs/sdk-guides

# Node.js — renter SDK

# Check the latest SDK package name in /docs/sdk-guides

Quick start:

from dcp_provider import DCPProvider

provider = DCPProvider(api_key="your-key")
provider.register_gpu()
provider.start()  # initialize, heartbeat, and serve inference workloads
API-First

Integrate Programmatically

Submit jobs and retrieve results via REST API. The Playground handles the simple path — the API and SDKs handle everything else.

  • REST API with API key auth
  • Official Python and Node.js SDKs
  • Webhook callbacks for optional job lifecycle notifications
  • Status polling and output retrieval for every job

Submit a job

curl -X POST https://api.dcp.sa/api/jobs/submit \
  -H "Content-Type: application/json" \
  -H "x-renter-key: dcp-renter-..." \
  -d '{
    "provider_id": 26,
    "job_type": "llm_inference",
    "duration_minutes": 5,
    "container_spec": {
      "image_type": "vllm-serve"
    },
    "params": {
      "model": "ALLaM-7B-Instruct",
      "prompt": "Hello world"
    }
  }'

Response:

{
  "job_id": "job-abc123",
  "status": "queued",
  "status_detail": "queued"
}
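The "status polling" bullet above can be sketched as a loop over a status endpoint. The GET path and the set of terminal states here are assumptions — only the submit endpoint and the x-renter-key header are documented on this page:

```python
import json
import time
import urllib.request

TERMINAL_STATES = {"completed", "failed", "cancelled"}  # assumed set

def is_terminal(status: str) -> bool:
    """True once a job has left the queued/running states."""
    return status in TERMINAL_STATES

def poll_job(job_id: str, renter_key: str,
             base: str = "https://api.dcp.sa/api") -> dict:
    """Poll a submitted job until it reaches a terminal state.

    Assumes GET {base}/jobs/{job_id} returns the same JSON shape as
    the submit response; check the API docs for the real endpoint.
    """
    while True:
        req = urllib.request.Request(
            f"{base}/jobs/{job_id}",
            headers={"x-renter-key": renter_key},
        )
        with urllib.request.urlopen(req) as resp:
            job = json.load(resp)
        if is_terminal(job["status"]):
            return job
        time.sleep(5)
```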

List Your Saudi GPU and Start Matching

Publish compatible capacity on a Saudi-hosted, container-based marketplace so jobs are routed when demand and policy align.