# Tandemn

> Tandemn is an AI infrastructure platform for running inference workloads across heterogeneous GPU clusters. Tandemn schedules jobs, chooses an efficient hardware mix, and gives teams a simple CLI and server workflow for batch inference.

## Docs

- [Configuration](https://docs.tandemn.com/admin/configuration.md): How to approach Tandemn server configuration.
- [Environment variables](https://docs.tandemn.com/admin/environment-variables.md): Environment variables used by Tandemn users and administrators.
- [Administration](https://docs.tandemn.com/admin/index.md): Operate the Tandemn server and help users troubleshoot setup and job submission.
- [Server deployment](https://docs.tandemn.com/admin/server-deployment.md): Operational guidance for running the Tandemn server.
- [Troubleshooting](https://docs.tandemn.com/admin/troubleshooting.md): Fix common Tandemn setup and job submission issues.
- [Analytics](https://docs.tandemn.com/cli/analytics.md): Inspect completed Tandemn runs and scheduler timeseries.
- [tandemn check](https://docs.tandemn.com/cli/check.md): Verify that the Tandemn CLI can reach the server.
- [plan and deploy](https://docs.tandemn.com/cli/deploy.md): Preview placement plans and submit inference jobs to Tandemn System.
- [CLI Reference](https://docs.tandemn.com/cli/index.md): Command reference for the Tandemn CLI.
- [Input format](https://docs.tandemn.com/cli/input-format.md): Prepare OpenAI-style batch JSONL workloads for Tandemn System.
- [Monitoring and operations](https://docs.tandemn.com/cli/monitoring.md): Monitor Tandemn jobs, metrics, clusters, logs, and cleanup.
- [CLI overview](https://docs.tandemn.com/cli/overview.md): The Tandemn System command line workflow.
- [Replica operations](https://docs.tandemn.com/cli/replicas.md): Add, kill, and hot-swap Tandemn replica clusters.
- [Architecture](https://docs.tandemn.com/concepts/architecture.md): How the Tandemn server, CLI, users, and GPU resources work together.
- [Batch inference](https://docs.tandemn.com/concepts/batch-inference.md): How Tandemn thinks about queued inference workloads.
- [Concepts](https://docs.tandemn.com/concepts/index.md): Understand the core ideas behind Tandemn's CLI-first inference orchestration model.
- [Job lifecycle](https://docs.tandemn.com/concepts/job-lifecycle.md): What happens after a user submits a Tandemn job.
- [Models and routing](https://docs.tandemn.com/concepts/models-and-routing.md): How model selection and hardware routing fit into Tandemn.
- [Getting started](https://docs.tandemn.com/getting-started/index.md): Prepare your environment, install Tandemn, and submit a first inference job.
- [Install the CLI](https://docs.tandemn.com/getting-started/install-cli.md): Install the Tandemn CLI and connect it to a Tandemn server.
- [Install the server](https://docs.tandemn.com/getting-started/install-server.md): Install and start the Tandemn System control plane.
- [Requirements](https://docs.tandemn.com/getting-started/requirements.md): What you need before deploying Tandemn or using the CLI.
- [Run your first job](https://docs.tandemn.com/getting-started/run-first-job.md): Submit a first batch inference job with Tandemn.
- [Introduction](https://docs.tandemn.com/introduction.md): Learn what Tandemn does, who it is for, and where to start.
- [FAQ](https://docs.tandemn.com/learn-more/faq.md): Answers to common questions about Tandemn.
- [Learn more](https://docs.tandemn.com/learn-more/index.md): Find answers, support guidance, and official Tandemn links.
- [Links](https://docs.tandemn.com/learn-more/links.md): Official Tandemn links and resources.
- [Support](https://docs.tandemn.com/learn-more/support.md): How to get help with Tandemn.
- [Quickstart](https://docs.tandemn.com/quickstart.md): Deploy Tandemn, connect the CLI, and submit your first inference job.