
AMS Service Unit

One 'Service Unit' (SU) is charged to a job for each 'effective_core' the job uses for one hour of walltime.

The calculation of effective_core depends on whether the job is an exclusive job (i.e., whether '-x' is specified).

Exclusive Job

When an exclusive job runs on a node with m cores:
'effective_core = m'
Note that the effective_core for an exclusive job on a node is independent of how many cores the job actually requests.

Once the effective_core on a node is calculated, the effective_core for the job is simply the sum of the effective_core on all nodes where the job runs.
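The exclusive-job rule above can be sketched as follows (a minimal illustration; the function name and list representation are assumptions, not part of AMS itself):

```python
def exclusive_effective_cores(cores_per_node):
    """Effective cores for an exclusive job.

    cores_per_node: list with one entry per node the job runs on,
    giving the total core count of that node. An exclusive job is
    charged for every core on every node, regardless of how many
    cores it requested.
    """
    return sum(cores_per_node)

# An exclusive job spanning two 20-core nodes, even if it requested
# only 1 core, is charged for all 40 cores.
print(exclusive_effective_cores([20, 20]))  # -> 40
```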

Non-Exclusive Job

The effective_core on a node is calculated by considering the requested memory. When a job requests xxx memory with "-M xxx" (in our LSF configuration on Ada, this is the runtime memory limit per core) and yyy cores with "-n yyy", then for that requested memory we have
'memory_equivalent_core = min(m, ceil(xxx*yyy/total_memory*m))'
where m is the number of cores on the node and total_memory is the total memory on the node available to users.

Example 1

```   A nxt node has 20 cores and around 50G memory available to users.
When a job requests 2.5G memory and 2 cores, the job uses memory_equivalent_core of min(20, ceil(2.5*2/50*20))=2 core.
```

Example 2

```   A nxt node has 20 cores and around 50G memory available to users.
When the job requests 3G memory and 2 cores, the job uses memory_equivalent_core of min(20, ceil(3*2/50*20))=3 cores.
```

Example 3

```   A nxt node has 20 cores and around 50G memory available to users.
When the job requests 8G memory and 10 cores, the job uses memory_equivalent_core of min(20, ceil(8*10/50*20))=20 cores.
```
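The three examples can be reproduced with a direct transcription of the formula (the helper name and argument names are illustrative assumptions):

```python
import math

def memory_equivalent_core(mem_per_core, cores_requested, total_memory, node_cores):
    """Memory-equivalent cores on one node, per the formula
    min(m, ceil(xxx*yyy/total_memory*m)), where xxx is the per-core
    memory request, yyy the requested cores, and m the node's cores."""
    return min(node_cores,
               math.ceil(mem_per_core * cores_requested / total_memory * node_cores))

# On a 20-core nxt node with ~50G of user-available memory:
print(memory_equivalent_core(2.5, 2, 50, 20))   # Example 1 -> 2
print(memory_equivalent_core(3, 2, 50, 20))     # Example 2 -> 3
print(memory_equivalent_core(8, 10, 50, 20))    # Example 3 -> 20
```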

Once the memory_equivalent_core is calculated, the effective_core on a node where the job requests yyy cores can be calculated as below:

```  effective_core = max(yyy, memory_equivalent_core)
```

Finally, the effective_core for a job is the sum of the effective_core on all nodes where the job runs.
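Putting the non-exclusive pieces together, a hypothetical end-to-end sketch of the charge (function names are illustrative; AMS's actual accounting code is not shown here):

```python
def non_exclusive_effective_core(cores_requested, mem_equiv_core):
    """effective_core on one node: the larger of the requested core
    count and the memory-equivalent core count."""
    return max(cores_requested, mem_equiv_core)

def job_su(per_node_effective_cores, walltime_hours):
    """SUs charged: one SU per effective_core per hour of walltime,
    summed over all nodes the job runs on."""
    return sum(per_node_effective_cores) * walltime_hours

# Example 2 above: 2 cores requested, memory_equivalent_core = 3,
# so effective_core = max(2, 3) = 3 on that node.
ec = non_exclusive_effective_core(2, 3)
# A 5-hour run on that single node is charged 3 * 5 = 15 SUs.
print(job_su([ec], 5))  # -> 15
```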