
CTC Server Setup

This functionality requires CAESES version 5.4.0 or later.

The CTC Server (CAESES Task Controller) is a modular system designed to extend CAESES with advanced job scheduling capabilities.


Currently, the CTC server comes with the Slurm Bridge module, which replaces the SSH resource manager. Its main purpose is to act as a SLURM (Simple Linux Utility for Resource Management) bridge, enabling CAESES to:

  • Submit jobs to Slurm queues
  • Monitor job states
  • Cancel jobs
  • Provide a lightweight web interface for monitoring jobs

Prerequisites

  • CAESES version: 5.4.0 or higher
  • Operating systems: Linux
  • SLURM version: 20.x or newer (for Slurm Bridge module)
  • Network: HTTP connectivity between CAESES and CTC Server
  • Permissions: User account with SLURM command execution rights (squeue, sbatch, scancel, scontrol)
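Before installing, it can be useful to confirm on the target machine that the Slurm client commands listed above are actually available to the account that will run the CTC Server. The following sketch (purely illustrative, not part of the CTC Server package) checks each command:

```shell
# Check that the Slurm client commands required by the Slurm Bridge
# are on the PATH of the current user.
missing=0
for cmd in sbatch squeue scancel scontrol; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "$cmd: found"
    else
        echo "$cmd: MISSING"
        missing=1
    fi
done
echo "missing=$missing"
```

If any command is reported as missing, install the Slurm client tools or adjust the PATH of the service account before proceeding.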

Installation (Linux)

  1. Extract the provided .tar.gz package.

    tar -xvzf ctc-server-x.y.z.tar.gz
    cd ctc-server-x.y.z/bin
  2. The executable is located in the bin/ directory.

  3. All required dependencies are bundled with the package.

Systemd Service Setup (Linux)

You can run the CTC Server as a background service using systemd.

  1. Create a new systemd service file:

    sudo nano /etc/systemd/system/ctc-server.service
  2. Add the following content (adjust <ctc-installation-path> and <user> accordingly):

    ctc-server.service
    [Unit]
    Description=CTC Server
    After=network.target

    [Service]
    Type=simple
    ExecStart=/<ctc-installation-path>/bin/ctcserver
    WorkingDirectory=/<ctc-installation-path>/
    Restart=on-failure
    User=<user>

    [Install]
    WantedBy=multi-user.target
  3. Reload systemd and enable the service:

    sudo systemctl daemon-reload
    sudo systemctl enable ctc-server
    sudo systemctl start ctc-server
  4. Verify the status:

    systemctl status ctc-server

Configuration

The CTC Server uses an INI configuration file, typically called ctcconfig.ini.

  • File format: .ini
  • Default location: bin/ folder in the installation directory (same folder as the executable)
ctcconfig.ini
[Core]
datadir=/mnt/data/shared/
port=5170
bind_address=0.0.0.0
logpath=/mnt/data/shared/log/
logkeep=30
numLogs=1000

[DB]
name=ctcserver.db

Parameters

[Core]

  • datadir / core-data-dir
    • Base directory used by the CTC Server to manage runtime data.
    • Default: <installation_dir>/data/
  • port
    • Port on which the web interface is served.
    • Default: 5170
  • bind_address
    • Address/interface the web server binds to.
    • Example: 127.0.0.1 (local only), or 0.0.0.0 (all interfaces)
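For example, to serve the web interface only to the local machine while keeping the default port, the `[Core]` section of `ctcconfig.ini` could look like this (the `datadir` path is a placeholder, not a default):

```ini
[Core]
datadir=/mnt/data/shared/
port=5170
bind_address=127.0.0.1
```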

[DB]

  • name
    • Name of the SQLite database file used by the server.
    • Default: ctcserver.db
    • Stores:
      • Job counter
      • Mapping of CAESES job IDs ↔ Slurm job IDs (for crash recovery)
      • Other runtime configuration data

Security Considerations

Change Default Credentials

The CTC Server ships with default credentials (username: admin, password: admin). You should change the default password immediately after installation to prevent unauthorized access.

Authentication

The CTC Server uses JWT (JSON Web Token) based authentication for both the web interface and API access:

  • All API endpoints require authentication
  • Sessions expire after 1 hour of inactivity
  • Default credentials: admin / admin
  • Password can be changed via the web interface
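A JWT consists of three Base64url-encoded segments (header, payload, signature) joined by dots. As a rough illustration of what such a token looks like, the sketch below decodes a fabricated example token; it is not a token issued by the CTC Server, and the payload fields are made up:

```shell
# Fabricated example token: header.payload.signature
# (these segments contain no '-' or '_', so plain base64 -d suffices
# even though JWT formally uses Base64url encoding)
TOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiJ9.c2lnbmF0dXJl'

# Split on '.' and decode the first two segments.
header=$(printf '%s' "$TOKEN" | cut -d. -f1 | base64 -d)
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | base64 -d)

echo "header:  $header"
echo "payload: $payload"
```

The signature segment is not decoded here; it is verified by the server to ensure the token has not been tampered with.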

Logs

Logging behavior is controlled by the following parameters:

  • logpath
    • Log files are stored in the directory specified by Core/logpath. If no path is set, logs are written to a folder called logs in the bin/ directory alongside the executable.
  • logkeep
    • Number of days logs are retained before deletion.
  • numLogs
    • Maximum number of log files to keep before rotation/deletion.
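Putting these parameters together, a stricter log policy than the sample configuration above might look like this (the values and the log path are illustrative only):

```ini
[Core]
logpath=/var/log/ctc-server/
logkeep=7
numLogs=200
```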

The local database (DB/name) complements the logs by storing the job counter and the job-ID mappings.

Architecture

The CTC Server is designed with a modular architecture:

  • Core functionality includes authentication, web interface, database management, and logging
  • Task controller modules provide specific capabilities (e.g., Slurm Bridge for SLURM integration)

Additional task controller modules will be added in future releases to extend functionality beyond Slurm.