The only GPU compute marketplace with Saudi data residency, Arabic AI models, and PDPL compliance. OpenAI-compatible API. Per-token billing.
Windows, macOS (Apple Silicon), and Linux. 4 MB provider app. Zero config.
Need a different path? Open Enterprise Support
Saudi Energy Advantage
Saudi energy-cost conditions provide a structural advantage for sustained AI operations.
Arabic AI, First-Class
Ship Arabic AI workloads with first-class support for ALLaM 7B, Falcon H1, JAIS 13B, and BGE-M3.
4 MB Provider App
Auto-detects your GPU, installs the inference engine (Ollama or MLX), and downloads the model. Works on Windows, macOS (Apple Silicon), and Linux.
How DCP works
1. Choose Model
Select from Arabic AI models (ALLaM, JAIS, Falcon) or global models via OpenAI-compatible API.
2. Call Inference API
Send requests to your model endpoint. Saudi data residency, per-token billing, zero ops.
3. Track & Settle
Monitor usage and costs in real-time. Pay per token with SAR billing.
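Because the API is advertised as OpenAI-compatible, a first call looks like a standard chat-completion request. A minimal sketch of building one: the `/v1/chat/completions` path, the `Bearer` authorization header, and the key format are assumptions here, so check the DCP docs for the real base URL and auth scheme.

```python
# Sketch: constructing an OpenAI-compatible chat-completion request for DCP.
# The endpoint path and auth header are assumptions; verify against /docs.

def build_chat_request(api_key: str, model: str, prompt: str) -> dict:
    """Return the URL, headers, and JSON body for a chat-completion call."""
    return {
        "url": "https://api.dcp.sa/v1/chat/completions",  # assumed path
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Example: an Arabic prompt against one of the listed Arabic models.
req = build_chat_request("dcp-renter-key", "ALLaM-7B-Instruct", "مرحبا")
```

Any OpenAI client library that lets you override the base URL should work the same way, which is the point of the compatibility claim.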
Choose your mode
Use the same entry lanes everywhere: Marketplace, Playground, Docs/API, Enterprise Support.
Choose your next path
One structure across DCP pages so each team can route directly to the right flow.
Self-serve renter
Create a renter account and start API + container job runs.
Provider onboarding
Register GPU hardware, install daemon, and start heartbeat checks.
Enterprise intake
Open procurement, security, and rollout planning support.
Arabic model docs
Review ALLaM 7B, Falcon H1, JAIS 13B, and BGE-M3 support paths.
PLATFORM STATUS
40+ providers registered
3 platforms supported (Win/Mac/Linux)
100-270 tok/s on consumer GPUs
Built on structural advantages, not promotional claims
Already have an account? Sign in here
Providers Online: unavailable right now
Providers Registered: unavailable right now
Last telemetry update: unavailable right now
These are platform policy and operating-model statements, separate from live telemetry.
Runtime settlement
An estimated hold is placed in halala before execution; settlement is completion-based, and any unused hold is returned automatically.
Containerized GPU execution
Workloads run in approved Docker runtimes with NVIDIA Container Toolkit and explicit GPU scoping.
Arabic AI support
Arabic-ready model support includes ALLaM 7B, Falcon H1, JAIS 13B, and BGE-M3.
100 halala = 1 SAR.
Current flow: wallet top-up in SAR, estimate hold before execution, completion-based settlement, and automatic return of any unused hold.
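The hold-and-settle flow above can be sketched in a few lines of arithmetic. The conversion (100 halala = 1 SAR) is from the text; the per-minute rate and the exact charging formula are illustrative assumptions, not the platform's actual pricing.

```python
HALALA_PER_SAR = 100  # 100 halala = 1 SAR, per the platform docs

def estimate_hold(rate_halala_per_min: int, estimated_minutes: int) -> int:
    """Hold placed on the wallet in halala before execution."""
    return rate_halala_per_min * estimated_minutes

def settle(hold: int, rate_halala_per_min: int, actual_minutes: int) -> tuple[int, int]:
    """Completion-based settlement: return (charged, refunded) in halala."""
    charged = min(hold, rate_halala_per_min * actual_minutes)
    return charged, hold - charged

# Example: hold for 10 estimated minutes at 20 halala/min, job finishes in 7.
hold = estimate_hold(rate_halala_per_min=20, estimated_minutes=10)
charged, refunded = settle(hold, rate_halala_per_min=20, actual_minutes=7)
```

The invariant worth noting is that the charge plus the refund always equals the original hold, so a wallet never pays more than the pre-execution estimate.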
OpenAI-compatible. Drop in your API key and start generating. No setup, no queue.
Built for developers worldwide — transparent matching, predictable job states, and completion-driven settlement.
Wallet-based pay-for-use billing: estimate before execution, then final settlement after completion.
Start Renting
Data handling is designed for Saudi residency workflows and PDPL-oriented controls under current platform policy.
Learn More
Drop-in replacement for the OpenAI API. Use your existing code with Arabic AI models hosted in Saudi Arabia.
View Docs
Sign up as a provider or renter and create your API access.
Providers run the daemon and publish availability; renters pick compatible providers from marketplace capacity.
Submit a workload, and DCP routes it through compatibility checks to suitable capacity.
A usage estimate appears before execution; final settlement reconciles to actual completed runtime (75% provider, 25% platform).
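The 75/25 split above is simple arithmetic, sketched here on integer halala amounts. The rounding rule (assigning any 1-halala remainder to the platform share) is an assumption for illustration; the actual rule is defined by DCP's billing docs.

```python
def split_settlement(charged_halala: int) -> tuple[int, int]:
    """Split a settled amount 75% to the provider, 25% to the platform.

    Integer halala amounts; any rounding remainder (at most 1 halala)
    goes to the platform share here. The real rounding rule is an
    assumption, so verify against the billing documentation.
    """
    provider_share = charged_halala * 75 // 100
    platform_share = charged_halala - provider_share
    return provider_share, platform_share

# Example: a 4 SAR (400 halala) settled job.
provider_share, platform_share = split_settlement(400)
```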
4 MB desktop app. Auto-detects your GPU, installs the inference engine (Ollama or MLX), downloads the AI model, and connects to DCP. Zero config.
Why providers choose DCP
macOS / Linux
curl -sSL https://api.dcp.sa/install | bash -s -- YOUR_KEY
macOS: Apple Silicon M1-M4 (MLX) | Linux: NVIDIA RTX GPUs (Ollama)
After install, your dashboard shows:
✓ GPU detected: RTX 4090 (24 GB)
✓ Ollama installed, model downloaded
✓ Cloudflare Tunnel active — no port forwarding needed
✓ Connected to DCP — earning SAR on inference jobs
From quick inference in the Playground to full training pipelines via Docker — the network handles it.
Run ALLaM, Falcon, JAIS, Llama 3, and other open-source models at full GPU speed.
Generate images with SDXL and ControlNet pipelines — or fine-tune with your own data.
LoRA and QLoRA fine-tuning with ready-to-run Docker templates. Bring your dataset, pick a base model, go.
First-class support for Saudi Arabic NLP — ALLaM 7B, Falcon H1, JAIS 13B, BGE-M3 embeddings, and rerankers.
Approved Docker images with CUDA support. GPU passthrough uses NVIDIA Container Toolkit in isolated containers.
Batch processing, data pipelines, and rendering on CUDA workloads supported by provider policy and image availability.
Use the official SDKs from the docs to reduce boilerplate and speed up your first job submission.
# Python — provider SDK
# Check the latest SDK package name in /docs/sdk-guides
# Node.js — renter SDK
# Check the latest SDK package name in /docs/sdk-guides
Quick start:
from dcp_provider import DCPProvider

provider = DCPProvider(api_key="your-key")
provider.register_gpu()
provider.start()  # initialize, heartbeat, and serve inference workloads
Submit jobs and retrieve results via REST API. The Playground handles the simple path — the API and SDKs handle everything else.
Submit a job
curl -X POST https://api.dcp.sa/api/jobs/submit \
-H "Content-Type: application/json" \
-H "x-renter-key: dcp-renter-..." \
-d '{
"provider_id": 26,
"job_type": "llm_inference",
"duration_minutes": 5,
"container_spec": {
"image_type": "vllm-serve"
},
"params": {
"model": "ALLaM-7B-Instruct",
"prompt": "Hello world"
}
}'

Response:
{
"job_id": "job-abc123",
"status": "queued",
"status_detail": "queued"
}

Publish compatible capacity on a Saudi-hosted, container-based marketplace so jobs are routed when demand and policy align.
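After submission, a client typically polls the job until it reaches a terminal status. A minimal sketch of that client-side decision: only "queued" is confirmed by the sample response above, so the other status names (and the idea that they are terminal) are assumptions to verify against the API docs.

```python
# Sketch: deciding when to stop polling a submitted job.
# Only "queued" appears in the documented response; the terminal
# status names below are assumptions, not confirmed API values.
TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def should_keep_polling(job: dict) -> bool:
    """Poll again while the job has not reached a terminal status."""
    return job.get("status") not in TERMINAL_STATUSES

submitted = {"job_id": "job-abc123", "status": "queued", "status_detail": "queued"}
finished = {"job_id": "job-abc123", "status": "completed"}
```

In practice this check would wrap a GET on the job's status endpoint with a sleep between attempts; the loop structure is the part worth keeping.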