    pv-runtime: Use threads to enumerate graphics drivers in parallel
    
    SrtSystemInfo is not thread-aware, but can safely be handed off from
    one thread to another, and caches its results internally; so we can use
    a thread per architecture, plus an extra thread for cross-architecture
    Vulkan and EGL ICDs, to enumerate graphics drivers and populate the
    cache in parallel with any other container setup. We join the threads
    just before looking at their results, to maximize the length of time
    for which we're running in parallel.
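    
    For illustration only, the shape of this pattern is roughly as follows.
    This is a minimal, self-contained sketch using plain GLib threads, not
    the actual pressure-vessel code: the slow_enumerate() work function and
    the job list are placeholders for the real per-architecture and
    cross-architecture enumeration, which in pressure-vessel populates a
    per-thread SrtSystemInfo that is only touched by its own thread until
    after the join.
    
        #include <glib.h>
        
        /* Placeholder for the real, I/O-bound driver enumeration */
        static gpointer
        slow_enumerate (gpointer user_data)
        {
          const char *job = user_data;
        
          g_usleep (G_USEC_PER_SEC / 10);   /* stand-in for disk I/O */
          return g_strdup_printf ("results for %s", job);
        }
        
        int
        main (void)
        {
          const char *jobs[] =
          {
            "x86_64-linux-gnu",
            "i386-linux-gnu",
            "cross-architecture Vulkan/EGL ICDs",
          };
          GThread *threads[G_N_ELEMENTS (jobs)];
          gsize i;
        
          /* One thread per job, started as early as possible */
          for (i = 0; i < G_N_ELEMENTS (jobs); i++)
            threads[i] = g_thread_new (jobs[i], slow_enumerate, (gpointer) jobs[i]);
        
          /* ... the rest of container setup would run here, in parallel ... */
        
          /* Join as late as possible, just before the results are needed */
          for (i = 0; i < G_N_ELEMENTS (jobs); i++)
            {
              gchar *result = g_thread_join (threads[i]);
        
              g_message ("%s", result);
              g_free (result);
            }
        
          return 0;
        }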
    
    On slow hardware (Lenovo T520, circa 2011, with a 500 GB 7200 rpm HDD) this
    cuts something like 20% off the setup time with a cold cache
    (`echo 3 | sudo tee /proc/sys/vm/drop_caches`). It also has a benefit
    (more like 15%) with a warm cache, immediately after a previous
    pressure-vessel run.
    
    This does make it somewhat harder to profile pressure-vessel, because
    when two I/O-bound operations run in parallel, they both take longer
    than they otherwise would, even though the overall task finishes sooner;
    this makes it hard to attribute I/O cost to particular actions. The
    new --single-thread option can be used to get a better idea of where
    the time is really going.
    
    Signed-off-by: Simon McVittie <smcv@collabora.com>