- Aug 06, 2021
-
Simon McVittie authored
This is a bit complicated, because there are two reasonable things that people might use LD_PRELOAD for, and in this mode it's particularly important to distinguish between them.

One is to inject arbitrary code, like MangoHud or fakeroot. In this case, we want to take the loadable module from the namespace in which the user initiated pv-wrap. We can do this the same way we deal with the ${LIB} and ${PLATFORM} dynamic string tokens: load it once per ABI, pass a separate option to pv-adverb for each one, and let pv-adverb recombine them.

The other is to work around libraries not being loaded soon enough, like the way people sometimes use LD_PRELOAD="libpthread.so.0 libGL.so.1" to force an optirun library to be loaded. In this case, we absolutely do not want to import the host library of that name into the container unconditionally, because if we do, it will sabotage our careful efforts to get the correct instance of libpthread.so.0 to be chosen. Instead, assume that the user meant "take whatever libpthread.so.0 you would naturally load, and preload it into each executable".

Resolves: https://github.com/ValveSoftware/steam-runtime/issues/435
Signed-off-by: Simon McVittie <smcv@collabora.com>
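A minimal sketch of the distinction, assuming entries containing a '/' name a specific host module (the MangoHud case) while bare SONAMEs are left for the container's dynamic linker (the libpthread.so.0 case); this is illustrative only, not the actual pv-wrap code, which also expands the ${LIB} and ${PLATFORM} tokens per ABI:

    /* Classify each LD_PRELOAD entry. Entries containing a '/' name a
     * specific module on the host, imported once per ABI; bare SONAMEs
     * are left for the container's dynamic linker to resolve naturally. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void
    classify_preload (const char *ld_preload)
    {
      char *copy = strdup (ld_preload);

      /* The dynamic linker accepts colons or spaces as separators */
      for (char *tok = strtok (copy, ": "); tok != NULL;
           tok = strtok (NULL, ": "))
        {
          if (strchr (tok, '/') != NULL)
            printf ("import per ABI from host namespace: %s\n", tok);
          else
            printf ("preload by SONAME, resolved in-container: %s\n", tok);
        }

      free (copy);
    }

    int
    main (void)
    {
      classify_preload ("/usr/$LIB/mangohud/libMangoHud.so:libpthread.so.0");
      return 0;
    }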
-
Simon McVittie authored
The PvRuntime already provides pv-wrap with a pre-configured pv-adverb command-line, so we can repurpose that to include the necessary command-line options to regenerate the ld.so.cache the way we want it.

Resolves: #74
Signed-off-by: Simon McVittie <smcv@collabora.com>
-
This avoids relying on LD_LIBRARY_PATH as a way to get the overridden libraries into place.

Co-authored-by: Simon McVittie <smcv@collabora.com>
Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
- Aug 03, 2021
-
Ludovico de Nittis authored
This reverts commit f32f230a. Since steam-runtime PR #439, the STEAM_COMPAT_FLAGS options are now the responsibility of run.sh.

Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
Simon McVittie authored
We can share this between PvRuntime and pv-adverb, which both want to know overlapping sets of details of known architectures.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Aug 02, 2021
-
Simon McVittie authored
Resolves: T29581
Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Jul 29, 2021
-
Ludovico de Nittis authored
Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
- Jul 22, 2021
-
Simon McVittie authored
Previously, we assumed that if OS files on the provider are in a location that is not /usr or a related directory, for example if the OS has /lib/ld-linux.so.2 -> /some/odd/path/i386/ld.so, then they will appear below the same path_in_container_ns as /usr, for example /run/host/some/odd/path/i386/ld.so. However, nothing sets this up for directories other than /usr, /lib*, /bin, /sbin and /etc, so it's a bad assumption.

A previous commit handled /etc by redirecting it to /run/host/etc, /run/parent/etc or /run/gfx/etc as appropriate, so we don't need to worry about that here. For the rest, assume that if they appear in the container at all, they'll appear at a path that matches their location in the provider.

For the common case where provider = host, which is the only one where we really need to support non-FHS layouts, this means that users can work around lack of explicit support for a particular non-FHS directory with something like PRESSURE_VESSEL_FILESYSTEMS_RO=/some/odd/path. In particular, if we didn't have explicit support for /nix, NixOS users would have been able to use that workaround to get it mounted.

Signed-off-by: Simon McVittie <smcv@collabora.com>
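The rule described above could be sketched roughly like this; guess_path_in_container is a hypothetical helper for illustration, not the real wrap.c logic:

    /* Guess where a provider path will be visible inside the container,
     * following the rules described in this commit. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool
    path_has_prefix (const char *path, const char *prefix)
    {
      size_t len = strlen (prefix);

      return strncmp (path, prefix, len) == 0
             && (path[len] == '/' || path[len] == '\0');
    }

    static void
    guess_path_in_container (const char *path, const char *provider_root)
    {
      if (path_has_prefix (path, "/usr")
          || path_has_prefix (path, "/bin")
          || path_has_prefix (path, "/sbin")
          || strncmp (path, "/lib", 4) == 0)   /* /lib, /lib32, /lib64, ... */
        printf ("%s -> %s%s\n", path, provider_root, path);
      else if (path_has_prefix (path, "/etc"))
        printf ("%s -> %s/etc (redirected by an earlier commit)\n",
                path, provider_root);
      else
        /* e.g. /some/odd/path: assume the same path, if present at all */
        printf ("%s -> %s\n", path, path);
    }

    int
    main (void)
    {
      /* provider_root would be /run/host when the provider is the host */
      guess_path_in_container ("/usr/lib/libc.so.6", "/run/host");
      guess_path_in_container ("/some/odd/path/i386/ld.so", "/run/host");
      return 0;
    }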
-
Simon McVittie authored
If the realpath() of an OS file is below /etc, each of our code paths ends up with it visible below /run/host/etc, /run/parent/etc or /run/gfx/etc, as appropriate.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
We must expose /nix in the sandbox as /nix, not /run/host/nix, because hard-coding paths below /nix is ubiquitous on NixOS. There's already a special case in wrap.c to mount /nix read-only.

This resolves a regression that occurred when we switched to a runtime structure that relies on PRESSURE_VESSEL_COPY_RUNTIME.

Resolves: https://github.com/ValveSoftware/steam-runtime/issues/431
Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Jul 20, 2021
-
Simon McVittie authored
This was only necessary because we were reusing a single container across multiple entry-point invocations, and expecting "most" arbitrary environment variables from each new invocation to be taken into account for commands running in the container, which meant that we needed to keep track of which environment variables had to be exceptions to that rule for technical reasons.

Now that we're no longer injecting multiple commands into the same container like that, we don't need this complexity.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
If we have libGLX_nvidia.so.0 for *any* architecture - even if we are missing some instances - then we still want to share /usr/share/nvidia with the container.

Because we always use libGLX_nvidia.so.0 from the graphics stack provider and do not have a concept of whether it is older or newer, and we do not expect our runtime to have a copy of libGLX_nvidia.so.0, we do not need to worry about giving the runtime's library an incompatible version of the data files from the provider.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
The NVIDIA driver hard-codes /usr/share/nvidia even if it is installed in /opt or something, so instead of deriving ${prefix} from the library path and then checking for ${prefix}/share/nvidia followed by /usr/share/nvidia as a fallback, we do the opposite: check for /usr/share/nvidia first, followed by ${prefix}/share/nvidia as a fallback.

Resolves: #73 (T29292)
Signed-off-by: Simon McVittie <smcv@collabora.com>
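A simplified sketch of that lookup order, assuming a ${prefix}/lib/libGLX_nvidia.so.0 layout (the real code also copes with multiarch subdirectories); find_nvidia_datadir is a hypothetical helper, not the real implementation:

    #include <libgen.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>

    static char *
    find_nvidia_datadir (const char *library_path)
    {
      struct stat sb;
      char buf[4096], candidate[4096];

      /* The driver hard-codes this path even when installed elsewhere,
       * so try it first */
      if (stat ("/usr/share/nvidia", &sb) == 0 && S_ISDIR (sb.st_mode))
        return strdup ("/usr/share/nvidia");

      /* Fall back to ${prefix}/share/nvidia, deriving ${prefix} from a
       * layout like ${prefix}/lib/libGLX_nvidia.so.0; glibc dirname()
       * modifies the buffer in place */
      snprintf (buf, sizeof buf, "%s", library_path);
      char *prefix = dirname (dirname (buf));
      snprintf (candidate, sizeof candidate, "%s/share/nvidia", prefix);

      if (stat (candidate, &sb) == 0 && S_ISDIR (sb.st_mode))
        return strdup (candidate);

      return NULL;
    }

    int
    main (void)
    {
      char *dir = find_nvidia_datadir ("/opt/nvidia/lib/libGLX_nvidia.so.0");

      printf ("data files: %s\n", dir ? dir : "(not found)");
      free (dir);
      return 0;
    }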
-
- Jul 14, 2021
-
Ludovico de Nittis authored
If we have libGLX_nvidia.so.0 from the provider, we should also bind `/etc/nvidia` and `/usr/share/nvidia` because they usually contain the application profiles. http://us.download.nvidia.com/XFree86/Linux-x86_64/470.42.01/README/profiles.html#ApplicationProf9ccbe

Fixes: #73
Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
- Jun 10, 2021
-
Ludovico de Nittis authored
Since commit fce30b8d, this variable is no longer used.

Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
Ludovico de Nittis authored
This commit addresses most of the warnings printed at compilation time while using clang and `ninja scan-build`.

Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
- May 11, 2021
-
Simon McVittie authored
By including this in libsteam-runtime-tools-0-helpers, we reduce the number of modules we need to manage and keep in sync. The rest of libcapsule isn't actively used yet, so this is a significant simplification.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- May 07, 2021
-
Simon McVittie authored
This is slightly simpler, and makes it easy for PvRuntime to locate tools in the helpers path (libexec/steam-runtime-tools-0) too.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Apr 30, 2021
-
Simon McVittie authored
At the moment we deploy the runtime from a giant tarball to avoid Steam downloader limitations, but that leads to a noticeable delay the first time we launch a game after a new runtime version has been downloaded. Now that the Steam download mechanism can deal better with larger numbers of smaller files, we're considering returning to the original design where the runtime depot contains unpacked files.

However, the Steam download mechanism doesn't preserve permissions, modification times, or filenames that differ only by case, and has not always preserved empty directories, so we need a way to deal with all of those things. By reading a manifest written in a subset of the BSD mtree(5) format, we can create directories and symlinks, and set permissions and modification times on regular files.

As a bonus, it's actually slightly faster to duplicate a runtime with hard-links (--copy-runtime mode) by reading the manifest than by reading the actual directory tree, because the manifest is more likely to be contiguous on disk.

In principle the mtree(5) manifest could also be used to validate that the runtime content has not become corrupted by checking files against their sha256sums. This isn't implemented here (and it would have to be done only on demand rather than routinely, because it would be slow), but the parser does at least read the sha256.

In the tests, we now need to remove the mtree manifest when copying and editing a runtime. When we edit a runtime in-place, it no longer conforms to the manifest, so this can't necessarily be expected to work.

Signed-off-by: Simon McVittie <smcv@collabora.com>
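For illustration, a toy parser for the subset of keywords mentioned above (type, link, mode, time, sha256digest); the real parser in pressure-vessel is more thorough, and a real implementation would create directories and symlinks rather than printing:

    #include <stdio.h>
    #include <string.h>

    /* Each mtree line is a path followed by keyword=value pairs */
    static void
    parse_mtree_line (char *line)
    {
      char *path = strtok (line, " \t\n");

      if (path == NULL || path[0] == '#')
        return;

      const char *type = "file", *link = NULL, *mode = NULL,
                 *mtime = NULL, *sha256 = NULL;

      for (char *kv = strtok (NULL, " \t\n"); kv != NULL;
           kv = strtok (NULL, " \t\n"))
        {
          if (strncmp (kv, "type=", 5) == 0) type = kv + 5;
          else if (strncmp (kv, "link=", 5) == 0) link = kv + 5;
          else if (strncmp (kv, "mode=", 5) == 0) mode = kv + 5;
          else if (strncmp (kv, "time=", 5) == 0) mtime = kv + 5;
          else if (strncmp (kv, "sha256digest=", 13) == 0) sha256 = kv + 13;
        }

      /* A real implementation would mkdir/symlink/chmod/utimensat here */
      printf ("%s: type=%s mode=%s time=%s link=%s sha256=%s\n",
              path, type, mode ? mode : "-", mtime ? mtime : "-",
              link ? link : "-", sha256 ? sha256 : "-");
    }

    int
    main (void)
    {
      char a[] = "./usr type=dir mode=0755";
      char b[] = "./usr/lib/libz.so.1 type=link link=libz.so.1.2.11";

      parse_mtree_line (a);
      parse_mtree_line (b);
      return 0;
    }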
-
- Apr 28, 2021
-
Simon McVittie authored
If we are operating from ./scout_platform_x.y.z, self->id will be NULL. Instead of matching on the names of directories, we can just check whether the deployment we are going to use is the same file (device and inode number) as the old deployment we are considering deleting.

Signed-off-by: Simon McVittie <smcv@collabora.com>
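The same-file check is essentially the classic stat(2) comparison; a minimal sketch with hypothetical paths:

    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* Two paths refer to the same file if they have the same device
     * and inode numbers */
    static bool
    is_same_file (const char *a, const char *b)
    {
      struct stat sa, sb;

      if (stat (a, &sa) != 0 || stat (b, &sb) != 0)
        return false;

      return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
    }

    int
    main (void)
    {
      /* Don't delete the old deployment if it is literally the one we
       * are about to use (paths are illustrative) */
      if (is_same_file ("./scout_platform_x.y.z", "/path/to/new/deployment"))
        printf ("same deployment, skip deletion\n");

      return 0;
    }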
-
- Apr 27, 2021
-
Simon McVittie authored
SrtSystemInfo is not thread-aware, but can safely be handed off from one thread to another, and caches its results internally; so we can use a thread per architecture, plus an extra thread for cross-architecture Vulkan and EGL ICDs, to enumerate graphics drivers and populate the cache in parallel with any other container setup. We join the threads just before looking at their results, to maximize the length of time for which we're running in parallel.

On slow hardware (Lenovo T520 circa 2011, with 500G 7200rpm HDD) this cuts something like 20% off the setup time with a cold cache (`echo 3 | sudo tee /proc/sys/vm/drop_caches`). It also has a benefit (more like 15%) with a warm cache, immediately after a previous pressure-vessel run.

This does make it somewhat harder to profile pressure-vessel, because when two I/O-bound operations run in parallel, they both take longer than they otherwise would, even though the overall task finishes sooner; this makes it hard to attribute I/O cost to particular actions. The new --single-thread option can be used to get a better idea of where the time is really going.

Signed-off-by: Simon McVittie <smcv@collabora.com>
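The pattern, reduced to plain pthreads for illustration (the real code hands one SrtSystemInfo per architecture to its worker threads; here each worker just stands in for the expensive enumeration):

    #include <pthread.h>
    #include <stdio.h>

    static const char * const multiarch_tuples[] =
    {
      "x86_64-linux-gnu",
      "i386-linux-gnu",
    };
    #define N_ARCHES (sizeof multiarch_tuples / sizeof multiarch_tuples[0])

    static void *
    enumerate_drivers (void *arg)
    {
      const char *tuple = arg;

      /* ... expensive, I/O-bound driver enumeration for this ABI ... */
      printf ("enumerated graphics drivers for %s\n", tuple);
      return NULL;
    }

    int
    main (void)
    {
      pthread_t threads[N_ARCHES];

      /* Start the workers as early as possible */
      for (size_t i = 0; i < N_ARCHES; i++)
        pthread_create (&threads[i], NULL, enumerate_drivers,
                        (void *) multiarch_tuples[i]);

      /* ... the rest of the container setup happens in parallel ... */

      /* Join as late as possible, just before the results are needed */
      for (size_t i = 0; i < N_ARCHES; i++)
        pthread_join (threads[i], NULL);

      return 0;
    }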
-
Simon McVittie authored
This encapsulates both the PROVIDER_GRAPHICS_STACK flag and the associated paths: if the object is null then the paths are meaningless, and if the object is non-null then they are meaningful.

Making this an immutable "value object" also means we can share it between threads, unlike PvRuntime, which has state. This could become important if we want to make graphics driver enumeration multi-threaded to speed up pressure-vessel.

Signed-off-by: Simon McVittie <smcv@collabora.com>
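The value-object idea, sketched with C11 atomics rather than the real GObject machinery (all names here are illustrative, not the actual pressure-vessel API): every field is set at construction and never modified, so only the refcount needs to be thread-safe.

    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
      atomic_int refcount;
      /* Meaningful iff the object exists at all; never mutated */
      char *path_in_current_ns;
      char *path_in_container_ns;
    } GraphicsProvider;

    static GraphicsProvider *
    graphics_provider_new (const char *in_current_ns,
                           const char *in_container_ns)
    {
      GraphicsProvider *self = calloc (1, sizeof *self);

      atomic_init (&self->refcount, 1);
      self->path_in_current_ns = strdup (in_current_ns);
      self->path_in_container_ns = strdup (in_container_ns);
      return self;
    }

    static GraphicsProvider *
    graphics_provider_ref (GraphicsProvider *self)
    {
      atomic_fetch_add (&self->refcount, 1);
      return self;
    }

    static void
    graphics_provider_unref (GraphicsProvider *self)
    {
      if (atomic_fetch_sub (&self->refcount, 1) == 1)
        {
          free (self->path_in_current_ns);
          free (self->path_in_container_ns);
          free (self);
        }
    }

    int
    main (void)
    {
      GraphicsProvider *p = graphics_provider_new ("/", "/run/host");
      GraphicsProvider *q = graphics_provider_ref (p);   /* share freely */

      graphics_provider_unref (p);
      graphics_provider_unref (q);
      return 0;
    }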
-
Ludovico de Nittis authored
Unset SDL_VIDEODRIVER if it was previously set to "wayland" when we are in a Scout SteamLinuxRuntime, because Scout is too old to support Wayland. This is not necessary for Soldier or Sniper, because we expect Wayland to work there.

Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
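A minimal sketch of the check; runtime_is_scout is an illustrative flag, and the real code edits the container's environment list rather than the process's own environment:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    static void
    maybe_unset_sdl_videodriver (bool runtime_is_scout)
    {
      const char *driver = getenv ("SDL_VIDEODRIVER");

      if (runtime_is_scout
          && driver != NULL
          && strcmp (driver, "wayland") == 0)
        unsetenv ("SDL_VIDEODRIVER");   /* Scout is too old for Wayland */
    }

    int
    main (void)
    {
      maybe_unset_sdl_videodriver (true);
      return 0;
    }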
-
- Apr 26, 2021
-
Simon McVittie authored
There's no point in doing this when we aren't going to use them.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Apr 22, 2021
-
Simon McVittie authored
This is surprisingly expensive to do, particularly with a cold cache. If we can avoid this, then deleting unwanted libraries from the mutable copy of the runtime becomes a lot faster.

With the disk cache cleared and running on slow hardware, this cuts down deletion of the unwanted libraries from 2-4 seconds to basically instantaneous.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
Some development libraries follow this pattern, and we already delete those without needing to use libelf to load the library and find out its SONAME:

    libfcitx-config.so -> libfcitx-config.so.4
    libfcitx-config.so.4 -> libfcitx-config.so.4.1
    libfcitx-config.so.4.1

However, other libraries follow this pattern, which results in the code that uses libelf to find the SONAME being the only way we can figure out that the .so symlink needs removing:

    libdbus-glib-1.so -> libdbus-glib-1.so.2.2.2
    libdbus-glib-1.so.2 -> libdbus-glib-1.so.2.2.2
    libdbus-glib-1.so.2.2.2

To avoid relying on the libelf code path, which is surprisingly slow when run with a cold disk cache, we can do one scan through the directory removing regular files and runtime symlinks, and a second scan through the directory removing development symlinks that have become dangling as a result.

Signed-off-by: Simon McVittie <smcv@collabora.com>
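The second pass could look something like this, a sketch with error handling and the first pass omitted: after the regular files and runtime symlinks are gone, any name ending in exactly ".so" whose target no longer resolves is removed.

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void
    remove_dangling_dev_symlinks (const char *libdir)
    {
      DIR *dir = opendir (libdir);
      struct dirent *entry;
      char path[4096];

      if (dir == NULL)
        return;

      while ((entry = readdir (dir)) != NULL)
        {
          struct stat sb;
          const char *dot = strrchr (entry->d_name, '.');

          /* Only development symlinks: names ending in exactly ".so" */
          if (dot == NULL || strcmp (dot, ".so") != 0)
            continue;

          snprintf (path, sizeof path, "%s/%s", libdir, entry->d_name);

          /* stat() follows symlinks, so failure after a successful
           * lstat() means the link's target was deleted in pass one */
          if (lstat (path, &sb) == 0 && S_ISLNK (sb.st_mode)
              && stat (path, &sb) != 0)
            {
              printf ("removing dangling symlink %s\n", path);
              unlink (path);
            }
        }

      closedir (dir);
    }

    int
    main (void)
    {
      remove_dangling_dev_symlinks ("/tmp/example-overrides");   /* illustrative */
      return 0;
    }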
-
Simon McVittie authored
If libfoo.so.0 is a symlink pointing to libfoo.so.0.1.2, and we have overridden libfoo.so.0, then we can know that libfoo.so.0.1.2 is also to be avoided, even without opening it to determine its SONAME. This reduces the need to use libelf, which is surprisingly slow if the disk cache is cold.

Signed-off-by: Simon McVittie <smcv@collabora.com>
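A sketch of that shortcut: read the symlink's target with readlink(2) instead of loading it with libelf; in real code the caller would add the result to its set of library names to avoid.

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    static void
    also_avoid_symlink_target (const char *overridden_path)
    {
      char target[PATH_MAX];
      ssize_t len = readlink (overridden_path, target, sizeof target - 1);

      if (len > 0)
        {
          target[len] = '\0';   /* readlink() does not NUL-terminate */
          /* e.g. libfoo.so.0 -> libfoo.so.0.1.2 (may be a relative name) */
          printf ("also avoiding %s\n", target);
        }
    }

    int
    main (void)
    {
      also_avoid_symlink_target ("/overrides/lib/libfoo.so.0");   /* illustrative */
      return 0;
    }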
-
Simon McVittie authored
If we got libbz2.so.1.0 from the host, for which libbz2.so.1 is an alias, we will also want to remove libbz2.so.1 from the container.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
The comments here are not really clear enough to express what's going on, particularly in the presence of libraries that have aliases.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
We need to 'goto out' to free some arrays of objects, which are too complicated for `__attribute__((__cleanup__))`.

Signed-off-by: Simon McVittie <smcv@collabora.com>
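The shape of the pattern, for illustration: a cleanup attribute handles a single pointer well, but an array of allocations still needs a manual cleanup path.

    #include <stdbool.h>
    #include <stdlib.h>

    static bool
    do_work (size_t n)
    {
      bool ret = false;
      char **items = calloc (n, sizeof *items);

      if (items == NULL)
        goto out;

      for (size_t i = 0; i < n; i++)
        {
          items[i] = malloc (16);

          if (items[i] == NULL)
            goto out;       /* the partial array is still freed below */
        }

      /* ... use items ... */
      ret = true;

    out:
      if (items != NULL)
        for (size_t i = 0; i < n; i++)
          free (items[i]);  /* free(NULL) is a no-op */

      free (items);
      return ret;
    }

    int
    main (void)
    {
      return do_work (4) ? 0 : 1;
    }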
-
Ludovico de Nittis authored
Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
-
Simon McVittie authored
Signed-off-by: Simon McVittie <smcv@collabora.com>
-
Simon McVittie authored
As well as making this giant function a bit smaller, this will make it easier to insert profiling markers to see where the time goes.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Apr 21, 2021
-
Simon McVittie authored
These refer to the host, but in a Flatpak subsandbox environment the graphics stack provider is not actually the host.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Apr 20, 2021
-
Simon McVittie authored
We don't actually need this information, and it has a significant startup time cost with a cold cache.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Apr 16, 2021
-
Simon McVittie authored
At the moment we assume it's just "bwrap" when using Flatpak, but when we stop supporting the Flatpak sandbox escape code path, that will become meaningless.

Signed-off-by: Simon McVittie <smcv@collabora.com>
-
- Apr 13, 2021
-
Simon McVittie authored
Since !269, on systems where PulseAudio is available, we set PULSE_SERVER to a suitable non-empty value, and "lock" it into the environment to avoid it getting overridden by pressure-vessel-launch (in use-cases where we're using that). We also create an /etc/asound.conf in the container's namespace that will make PulseAudio the default for applications that use the ALSA user-space library libasound.so.2, such as Shadowrun Returns.

Conversely, on systems where PulseAudio is *not* available (for example where the system is using plain ALSA), we "lock" PULSE_SERVER to a null value so that we will actively remove it from the environment if set.

However, this caused a regression: we created /etc/asound.conf based on whether PULSE_SERVER was "locked", which effectively meant this was done unconditionally. An /etc/asound.conf that configures PulseAudio to be the default is not going to work on non-PulseAudio systems.

Instead of checking whether PULSE_SERVER is "locked", check whether it's null. This has the desired effect: we configure PulseAudio to be the default if and only if we detected that it's available.

Helps: https://github.com/ValveSoftware/steam-runtime/issues/344
Helps: https://github.com/ValveSoftware/steam-runtime/issues/384
Signed-off-by: Simon McVittie <smcv@collabora.com>
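The corrected logic boils down to keying off the value rather than the "locked" flag; a sketch with an illustrative output path and file contents:

    #include <stdio.h>

    static void
    maybe_write_asound_conf (const char *pulse_server)
    {
      if (pulse_server == NULL)
        return;   /* plain-ALSA system: leave the ALSA defaults alone */

      FILE *f = fopen ("/tmp/asound.conf", "w");   /* illustrative path */

      if (f == NULL)
        return;

      /* Make libasound route through PulseAudio by default
       * (illustrative contents) */
      fprintf (f,
               "pcm.!default pulse\n"
               "ctl.!default pulse\n");
      fclose (f);
    }

    int
    main (void)
    {
      maybe_write_asound_conf ("unix:/run/user/1000/pulse/native");
      return 0;
    }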
-
Ludovico de Nittis authored
libdrm.so.2 is not included in the freedesktop.org GL Platform runtime, and this leads us to search for the libdrm directory in the wrong place. For this reason we first look for libdrm_amdgpu.so.1, and use libdrm.so.2 as a fallback.

Signed-off-by: Ludovico de Nittis <ludovico.denittis@collabora.com>
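One way to illustrate the fallback order with plain glibc; the real code resolves library paths without actually loading the libraries, so this dlopen()/dlinfo()-based version (build with -ldl) is only a demonstration:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <link.h>
    #include <stdio.h>

    static int
    get_library_path (const char *soname, char *buf, size_t len)
    {
      void *handle = dlopen (soname, RTLD_NOW | RTLD_LOCAL);
      struct link_map *map = NULL;

      if (handle == NULL)
        return -1;

      if (dlinfo (handle, RTLD_DI_LINKMAP, &map) != 0)
        {
          dlclose (handle);
          return -1;
        }

      snprintf (buf, len, "%s", map->l_name);   /* resolved path */
      dlclose (handle);
      return 0;
    }

    int
    main (void)
    {
      char path[4096];

      /* Prefer libdrm_amdgpu.so.1; fall back to libdrm.so.2 */
      if (get_library_path ("libdrm_amdgpu.so.1", path, sizeof path) == 0
          || get_library_path ("libdrm.so.2", path, sizeof path) == 0)
        printf ("derive the libdrm data directory from %s\n", path);
      else
        printf ("neither library found\n");

      return 0;
    }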
-