{"affected":[{"ecosystem_specific":{"binaries":[{"libnss_slurm2":"24.11.5-150700.3.3.1","libpmi0":"24.11.5-150700.3.3.1","libslurm42":"24.11.5-150700.3.3.1","perl-slurm":"24.11.5-150700.3.3.1","slurm":"24.11.5-150700.3.3.1","slurm-auth-none":"24.11.5-150700.3.3.1","slurm-config":"24.11.5-150700.3.3.1","slurm-config-man":"24.11.5-150700.3.3.1","slurm-cray":"24.11.5-150700.3.3.1","slurm-devel":"24.11.5-150700.3.3.1","slurm-doc":"24.11.5-150700.3.3.1","slurm-lua":"24.11.5-150700.3.3.1","slurm-munge":"24.11.5-150700.3.3.1","slurm-node":"24.11.5-150700.3.3.1","slurm-pam_slurm":"24.11.5-150700.3.3.1","slurm-plugins":"24.11.5-150700.3.3.1","slurm-rest":"24.11.5-150700.3.3.1","slurm-slurmdbd":"24.11.5-150700.3.3.1","slurm-sql":"24.11.5-150700.3.3.1","slurm-sview":"24.11.5-150700.3.3.1","slurm-torque":"24.11.5-150700.3.3.1","slurm-webdoc":"24.11.5-150700.3.3.1"}]},"package":{"ecosystem":"SUSE:Linux Enterprise Module for HPC 15 SP7","name":"slurm","purl":"pkg:rpm/suse/slurm&distro=SUSE%20Linux%20Enterprise%20Module%20for%20HPC%2015%20SP7"},"ranges":[{"events":[{"introduced":"0"},{"fixed":"24.11.5-150700.3.3.1"}],"type":"ECOSYSTEM"}]},{"ecosystem_specific":{"binaries":[{"libnss_slurm2":"24.11.5-150700.3.3.1","libpmi0":"24.11.5-150700.3.3.1","perl-slurm":"24.11.5-150700.3.3.1","slurm":"24.11.5-150700.3.3.1","slurm-auth-none":"24.11.5-150700.3.3.1","slurm-config":"24.11.5-150700.3.3.1","slurm-config-man":"24.11.5-150700.3.3.1","slurm-cray":"24.11.5-150700.3.3.1","slurm-devel":"24.11.5-150700.3.3.1","slurm-doc":"24.11.5-150700.3.3.1","slurm-hdf5":"24.11.5-150700.3.3.1","slurm-lua":"24.11.5-150700.3.3.1","slurm-munge":"24.11.5-150700.3.3.1","slurm-node":"24.11.5-150700.3.3.1","slurm-openlava":"24.11.5-150700.3.3.1","slurm-pam_slurm":"24.11.5-150700.3.3.1","slurm-plugins":"24.11.5-150700.3.3.1","slurm-rest":"24.11.5-150700.3.3.1","slurm-seff":"24.11.5-150700.3.3.1","slurm-sjstat":"24.11.5-150700.3.3.1","slurm-slurmdbd":"24.11.5-150700.3.3.1","slurm-sql":"24.11.5-150700.3.3.1","slur
m-sview":"24.11.5-150700.3.3.1","slurm-torque":"24.11.5-150700.3.3.1","slurm-webdoc":"24.11.5-150700.3.3.1"}]},"package":{"ecosystem":"SUSE:Linux Enterprise Module for Package Hub 15 SP7","name":"slurm","purl":"pkg:rpm/suse/slurm&distro=SUSE%20Linux%20Enterprise%20Module%20for%20Package%20Hub%2015%20SP7"},"ranges":[{"events":[{"introduced":"0"},{"fixed":"24.11.5-150700.3.3.1"}],"type":"ECOSYSTEM"}]}],"aliases":[],"details":"This update for slurm fixes the following issues:\n\nUpdate to version 24.11.5.\n\nSecurity issues fixed:\n  \n- CVE-2025-43904: an issue with permission handling for Coordinators within the accounting system allowed Coordinators\n  to promote a user to Administrator (bsc#1243666).\n\nOther changes and issues fixed:\n\n- Changes from version 24.11.5\n\n  * Return error to `scontrol` reboot on bad nodelists.\n  * `slurmrestd` - Report an error when QOS resolution fails for\n\tv0.0.40 endpoints.\n  * `slurmrestd` - Report an error when QOS resolution fails for\n\tv0.0.41 endpoints.\n  * `slurmrestd` - Report an error when QOS resolution fails for\n\tv0.0.42 endpoints.\n  * `data_parser/v0.0.42` - Added `+inline_enums` flag which\n\tmodifies the output when generating OpenAPI specification.\n\tIt causes enum arrays to not be defined in their own schema\n\twith references (`$ref`) to them. Instead they will be dumped\n\tinline.\n  * Fix binding error with `tres-bind map/mask` on partial node\n\tallocations.\n  * Fix `stepmgr` enabled steps being able to request features.\n  * Reject step creation if requested feature is not available\n\tin job.\n  * `slurmd` - Restrict listening for new incoming RPC requests\n\tfurther into startup.\n  * `slurmd` - Avoid `auth/slurm` related hangs of CLI commands\n\tduring startup and shutdown.\n  * `slurmctld` - Restrict processing new incoming RPC requests\n\tfurther into startup. 
Stop processing requests sooner during\n\tshutdown.\n  * `slurmctld` - Avoid `auth/slurm` related hangs of CLI commands\n\tduring startup and shutdown.\n  * `slurmctld` - Avoid race condition during shutdown or\n\treconfigure that could result in a crash due to delayed\n\tprocessing of a connection while plugins are unloaded.\n  * Fix small memory leak when getting the job list from the database.\n  * Fix incorrect printing of `%` escape characters when printing\n\tstdio fields for jobs.\n  * Fix padding parsing when printing stdio fields for jobs.\n  * Fix printing `%A` array job id when expanding patterns.\n  * Fix reservations causing jobs to be held for `Bad Constraints`.\n  * `switch/hpe_slingshot` - Prevent potential segfault on failed\n\tcurl request to the fabric manager.\n  * Fix printing incorrect array job id when expanding stdio file\n\tnames. The `%A` will now be substituted by the correct value.\n  * `switch/hpe_slingshot` - Fix VNI range not updating on slurmctld\n\trestart or reconfigure.\n  * Fix steps not being created when using certain combinations of\n\t`-c` and `-n` lower than the job's requested resources, when\n\tusing stepmgr and nodes are configured with\n\t`CPUs == Sockets*CoresPerSocket`.\n  * Permit configuring the number of retry attempts to destroy CXI\n\tservice via the new `destroy_retries` `SwitchParameter`.\n  * Do not reset `memory.high` and `memory.swap.max` in slurmd\n\tstartup or reconfigure as we are never really touching this\n\tin `slurmd`.\n  * Fix reconfigure failure of slurmd when it has been started\n\tmanually and the `CoreSpecLimits` have been removed from\n\t`slurm.conf`.\n  * Set or reset CoreSpec limits when slurmd is reconfigured and\n\tit was started with systemd.\n  * `switch/hpe_slingshot` - Make sure the slurmctld can free\n\tstep VNIs after the controller restarts or reconfigures 
while\n\tthe job is running.\n  * Fix backup `slurmctld` failure on 2nd takeover.\n  \n- Changes from version 24.11.4\n\n  * `slurmctld`,`slurmrestd` - Avoid possible race condition that\n    could have caused process to crash when listener socket was\n    closed while accepting a new connection.\n  * `slurmrestd` - Avoid race condition that could have resulted\n\tin address logged for a UNIX socket to be incorrect.\n  * `slurmrestd` - Fix parameters in OpenAPI specification for the\n    following endpoints to have `job_id` field:\n    ```\n    GET /slurm/v0.0.40/jobs/state/\n    GET /slurm/v0.0.41/jobs/state/\n    GET /slurm/v0.0.42/jobs/state/\n    GET /slurm/v0.0.43/jobs/state/\n    ```\n  * `slurmd` - Fix tracking of thread counts that could cause\n\tincoming connections to be ignored after burst of simultaneous\n\tincoming connections that trigger delayed response logic.\n  * Avoid unnecessary `SRUN_TIMEOUT` forwarding to `stepmgr`.\n  * Fix jobs being scheduled on higher weighted powered down nodes.\n  * Fix how backfill scheduler filters nodes from the available\n\tnodes based on exclusive user and `mcs_label` requirements.\n  * `acct_gather_energy/{gpu,ipmi}` - Fix potential energy\n\tconsumption adjustment calculation underflow.\n  * `acct_gather_energy/ipmi` - Fix regression introduced in 24.05.5\n\t(which introduced the new way of preserving energy measurements\n\tthrough slurmd restarts) when `EnergyIPMICalcAdjustment=yes`.\n  * Prevent `slurmctld` deadlock in the assoc mgr.\n  * Fix memory leak when `RestrictedCoresPerGPU` is enabled.\n  * Fix preemptor jobs not entering execution due to wrong\n\tcalculation of accounting policy limits.\n  * Fix certain job requests that were incorrectly denied with\n\tnode configuration unavailable error.\n  * `slurmd` - Avoid crash when slurmd has a communications\n\tfailure with `slurmstepd`.\n  * Fix memory leak when parsing yaml input.\n  * Prevent `slurmctld` from showing error message about 
`PreemptMode=GANG`\n\tbeing a cluster-wide option for `scontrol update part` calls\n\tthat don't attempt to modify partition PreemptMode.\n  * Fix setting `GANG` preemption on partition when updating\n\t`PreemptMode` with `scontrol`.\n  * Fix `CoreSpec` and `MemSpec` limits not being removed\n\tfrom previously configured slurmd.\n  * Avoid race condition that could lead to a deadlock when `slurmd`,\n\t`slurmstepd`, `slurmctld`, `slurmrestd` or `sackd` have a fatal\n\tevent.\n  * Fix jobs using `--ntasks-per-node` and `--mem` staying pending\n\tforever\twhen the requested mem divided by the number of CPUs\n\tsurpasses the configured `MaxMemPerCPU`.\n  * `slurmd` - Fix address logged upon new incoming RPC connection\n    from `INVALID` to IP address.\n  * Fix memory leak when retrieving reservations. This affects\n\t`scontrol`, `sinfo`, `sview`, and the following `slurmrestd`\n\tendpoints:\n    `GET /slurm/{any_data_parser}/reservation/{reservation_name}`\n    `GET /slurm/{any_data_parser}/reservations`\n  * Log warning instead of `debugflags=conmgr` gated log when\n\tdeferring new incoming connections when number of active\n\tconnections exceed `conmgr_max_connections`.\n  * Avoid race condition that could result in worker thread pool\n\tnot activating all threads at once after a reconfigure resulting\n\tin lower utilization of available CPU threads until enough\n\tinternal activity wakes up all threads in the worker pool.\n  * Avoid theoretical race condition that could result in new\n\tincoming RPC\n    socket connections being ignored after reconfigure.\n  * `slurmd` - Avoid race condition that could result in a state\n\twhere\tnew incoming RPC connections will always be ignored.\n  * Add `ReconfigFlags=KeepNodeStateFuture` to restore saved `FUTURE`\n\tnode state on restart and reconfig instead of reverting to\n\t`FUTURE` state. 
This will be made the default in 25.05.\n  * Fix case where hetjob submit would cause `slurmctld` to crash.\n  * Fix jobs using `--cpus-per-gpu` and `--mem` staying pending forever\n\twhen the requested mem divided by the number of CPUs surpasses\n\tthe configured `MaxMemPerCPU`.\n  * Enforce that jobs using `--mem` and several `--*-per-*` options\n\tdo not violate the `MaxMemPerCPU` in place.\n  * `slurmctld` - Fix use-cases of jobs incorrectly pending held\n\twhen `--prefer` features are not initially satisfied.\n  * `slurmctld` - Fix jobs incorrectly held when `--prefer` not\n\tsatisfied in some use-cases.\n  * Ensure `RestrictedCoresPerGPU` and `CoreSpecCount` don't overlap.\n\n- Changes from version 24.11.3\n\n  * Fix database cluster ID generation not being random.\n  * Fix a regression in which `slurmd -G` gave no output.\n  * Fix a long-standing crash in `slurmctld` after updating a\n    reservation with an empty nodelist. The crash could occur\n\tafter restarting slurmctld, or if downing/draining a node\n\tin the reservation with the `REPLACE` or `REPLACE_DOWN` flag.\n  * Avoid changing process name to '`watch`' from original daemon name.\n    This could potentially break some monitoring scripts.\n  * Avoid `slurmctld` being killed by `SIGALRM` due to race condition\n    at startup.\n  * Fix race condition in slurmrestd that resulted in '`Requested\n    data_parser plugin does not support OpenAPI plugin`' error being\n\treturned for valid endpoints.\n  * Fix race between `task/cgroup` CPUset and `jobacct_gather/cgroup`.\n    The first was removing the pid from `task_X` cgroup directory\n\tcausing memory limits to not be applied.\n  * If multiple partitions are requested, set the `SLURM_JOB_PARTITION`\n    output environment variable to the partition in which the job is\n\trunning for `salloc` and `srun` in order to match the documentation\n\tand the behavior of `sbatch`.\n  * `srun` - Fixed wrongly constructed `SLURM_CPU_BIND` env variable\n    that 
could get propagated to downward srun calls in certain MPI\n    environments, causing launch failures.\n  * Don't print misleading errors for stepmgr enabled steps.\n  * `slurmrestd` - Avoid connection to slurmdbd for the following\n    endpoints:\n\t```\n    GET /slurm/v0.0.41/jobs\n    GET /slurm/v0.0.41/job/{job_id}\n\t```\n  * `slurmrestd` - Avoid connection to slurmdbd for the following\n    endpoints:\n\t```\n    GET /slurm/v0.0.40/jobs\n    GET /slurm/v0.0.40/job/{job_id}\n\t```\n  * `slurmrestd` - Fix possible memory leak when parsing arrays with\n    `data_parser/v0.0.40`.\n  * `slurmrestd` - Fix possible memory leak when parsing arrays with\n    `data_parser/v0.0.41`.\n  * `slurmrestd` - Fix possible memory leak when parsing arrays with\n    `data_parser/v0.0.42`.\n  \n- Changes from version 24.11.2\n\n  * Fix segfault when submitting `--test-only` jobs that can\n    preempt.\n  * Fix regression introduced in 23.11 that prevented the\n    following flags from being added to a reservation on an\n    update: `DAILY`, `HOURLY`, `WEEKLY`, `WEEKDAY`, and `WEEKEND`.\n  * Fix crash and issues when evaluating a job's suitability for running\n    on nodes with already suspended job(s).\n  * `slurmctld` will ensure that healthy nodes are not reported as\n    `UnavailableNodes` in job reason codes.\n  * Fix handling of jobs submitted to a current reservation with\n    flags `OVERLAP,FLEX` or `OVERLAP,ANY_NODES` when it overlaps nodes\n    with a future maintenance reservation. When a job submission\n    had a time limit that overlapped with the future maintenance\n    reservation, it was rejected. 
Now the job is accepted but\n    stays pending with the reason '`ReqNodeNotAvail, Reserved for\n    maintenance`'.\n  * `pam_slurm_adopt` - avoid errors when explicitly setting some\n    arguments to the default value.\n  * Fix QOS preemption with `PreemptMode=SUSPEND`.\n  * `slurmdbd` - When changing a user's name, update lineage at the\n    same time.\n  * Fix regression in 24.11 in which `burst_buffer.lua` does not\n    inherit the `SLURM_CONF` environment variable from `slurmctld` and\n    fails to run if slurm.conf is in a non-standard location.\n  * Fix memory leak in slurmctld if `select/linear` and the\n    `PreemptParameters=reclaim_licenses` options are both set in\n    `slurm.conf`. Regression in 24.11.1.\n  * Fix running jobs that requested multiple partitions from\n    potentially being set to the wrong partition on restart.\n  * `switch/hpe_slingshot` - Fix compatibility with newer cxi\n    drivers, specifically when specifying `disable_rdzv_get`.\n  * Add `ABORT_ON_FATAL` environment variable to capture a backtrace\n    from any `fatal()` message.\n  * Fix printing invalid address in rate limiting log statement.\n  * `sched/backfill` - Fix node state `PLANNED` not being cleared from\n    fully allocated nodes during a backfill cycle.\n  * `select/cons_tres` - Fix future planning of jobs with\n    `bf_licenses`.\n  * Prevent redundant '`on_data returned rc: Rate limit exceeded,\n    please retry momentarily`' error message from being printed in\n    slurmctld logs.\n  * Fix loading non-default QOS on pending jobs from pre-24.11\n    state.\n  * Fix pending jobs displaying `QOS=(null)` when not explicitly\n    requesting a QOS.\n  * Fix segfault issue from job record with no `job_resrcs`.\n  * Fix failing `sacctmgr delete/modify/show` account operations\n    with `where` clauses.\n  * Fix regression in 24.11 in which Slurm daemons started\n    catching several `SIGTSTP`, `SIGTTIN` and `SIGUSR1` signals and\n    ignored them, while before they were 
not ignoring them. This\n    also caused slurmctld to not be able to shut down after a\n    `SIGTSTP` because slurmscriptd caught the signal and stopped\n    while slurmctld ignored it. Unify and fix these situations and\n    get back to the previous behavior for these signals.\n  * Document that `SIGQUIT` is no longer ignored by `slurmctld`,\n    `slurmdbd`, and slurmd in 24.11. As of 24.11.0rc1, `SIGQUIT` is\n    identical to `SIGINT` and `SIGTERM` for these daemons, but this\n    change was not documented.\n  * Fix not considering nodes marked for reboot without ASAP in\n    the scheduler.\n  * Remove the `boot^` state on unexpected node reboot after return\n    to service.\n  * Do not allow new jobs to start on a node which is being\n    rebooted with the flag `nextstate=resume`.\n  * Prevent a lower priority job from running after cancelling an\n    ASAP reboot.\n  * Fix srun jobs starting on `nextstate=resume` rebooting nodes.\n","id":"SUSE-SU-2025:01751-1","modified":"2025-05-29T12:53:40Z","published":"2025-05-29T12:53:40Z","references":[{"type":"ADVISORY","url":"https://www.suse.com/support/update/announcement/2025/suse-su-202501751-1/"},{"type":"REPORT","url":"https://bugzilla.suse.com/1243666"},{"type":"WEB","url":"https://www.suse.com/security/cve/CVE-2025-43904"}],"related":["CVE-2025-43904"],"summary":"Security update for slurm","upstream":["CVE-2025-43904"]}