OpenAI-compatible API over RabbitMQ — running vLLM locally inside a Space
The Space UI exposes a Service panel, a @spaces.GPU probe, a Ping check with its result, and a startup status readout.
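The setup named in the title amounts to an RPC bridge: clients publish OpenAI-style chat requests to a RabbitMQ queue, and a worker running inside the Space forwards each request to the local vLLM server's OpenAI-compatible endpoint, then publishes the response back on the reply-to queue. A minimal sketch, assuming pika as the RabbitMQ client, a hypothetical queue name `llm.requests`, and vLLM serving on its default port 8000 (all of these names are illustrative, not taken from the Space itself):

```python
import json

# Hypothetical queue name and endpoint; adjust to your Space's configuration.
REQUEST_QUEUE = "llm.requests"
VLLM_URL = "http://localhost:8000/v1/chat/completions"  # vLLM OpenAI-compatible server


def handle_message(body: bytes, post=None) -> bytes:
    """Forward one OpenAI-style chat request to the local vLLM server.

    `post` is injectable for testing; it defaults to requests.post.
    Returns the JSON response (or an error envelope) as bytes.
    """
    try:
        if post is None:
            import requests
            post = requests.post
        payload = json.loads(body)
        resp = post(VLLM_URL, json=payload, timeout=120)
        return json.dumps(resp.json()).encode()
    except Exception as exc:
        # Reply with an OpenAI-style error envelope instead of dropping the message.
        return json.dumps(
            {"error": {"message": str(exc), "type": "bridge_error"}}
        ).encode()


def main():
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue=REQUEST_QUEUE)

    def on_request(channel, method, props, body):
        reply = handle_message(body)
        # RPC pattern: answer on the reply-to queue with the same correlation id,
        # so the publisher can match responses to its in-flight requests.
        channel.basic_publish(
            exchange="",
            routing_key=props.reply_to,
            properties=pika.BasicProperties(correlation_id=props.correlation_id),
            body=reply,
        )
        channel.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue=REQUEST_QUEUE, on_message_callback=on_request)
    ch.start_consuming()


if __name__ == "__main__":
    main()
```

Keeping `handle_message` pure apart from the injected `post` makes the bridge testable without a broker or GPU; the @spaces.GPU probe and ping widgets in the UI would sit alongside this worker to confirm the GPU is allocated and the queue round-trip works.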