
restrict current thread only #32

Closed
giuseppe opened this issue Aug 21, 2024 · 5 comments

Comments

@giuseppe

There are cases where it is desirable to restrict only the current thread, not the entire process, so that it could be used as:

	go func() {
		runtime.LockOSThread()

		if err := landlock.V5.BestEffort().RestrictPaths(
			landlock.RODirs("/"),
			landlock.RWDirs("/tmp"),
		); err != nil {
			fmt.Println("Error", err)
			return
		}

		// run having RW access only to /tmp
	}()
	// access any file everywhere from here

From my understanding, this is a limitation only of the API; nothing prevents such code from working.

Would it make sense to expand the API to allow limiting the current thread only?

@gnoack
Collaborator

gnoack commented Sep 25, 2024

Hello Giuseppe!

This is indeed a limitation with the Go-Landlock API. I introduced it because Go programs are inherently multithreaded.

As @l0kod said in containers/common#2126 (comment), the underlying problem here is that there are no proper security boundaries between OS threads. While it is technically possible to landlock a single thread, an attacker who takes control of that thread can easily manipulate one of the other (unrestricted) threads to act on its behalf, e.g. by rewiring one of the many function or object pointers that live in the same address space. So even though the C API lets you landlock individual OS threads, the enforced policy is easily circumvented when other OS threads are around.

An API that restricts only a single OS thread would also be difficult and error-prone to use, as

  1. the runtime.LockOSThread() call is easy to forget, but absolutely needed, and
  2. you need to be extremely careful in what you are doing under the Landlock policy, to prevent cross-thread manipulation. And at this point, you have probably proven that your code is safe anyway?

I do not understand your code change on containers/common#2126 in full detail though. If you have a deeper reasoning for why this would be safe to do in a single-threaded environment, I'd be interested in it.

Maybe I don't understand what you are exactly trying to prevent with it.

  • If you try to prevent buffer overflow-style attacks, landlocking a single thread does not help you much.
  • If you try to prevent logic bugs within the realm of a normal Go program execution (e.g. path traversal mitigation), maybe a better and more explicit option would be to pass suitable handles to individual files (io.Reader/io.Writer) or to directory hierarchies (fs.FS) to the downstream code instead?

@rhatdan

rhatdan commented Sep 26, 2024

Comments copied over from containers/common:

Yup, I understand the whole theory, but it still prevents us from putting up a bigger barrier/fence with limited effort. As I said, the goal is to prevent escalation based on the file system layout of the object being copied.

In the case of Podman (and Docker), they pull down tarballs and expand them onto existing file systems. In the past, a carefully crafted tarball has been able to cause the container tools to write files outside of the sandboxed (landlocked) directory tree, which I am trying to prevent. We have the ability to control containers launched by rootful Podman so that nothing in the container runs as root. But rootful Podman pulling and un-tarring the container image payload happens as root, and we are attempting to sandbox that component, since we know the content will only be written to a certain directory tree (/var/lib/containers/storage).

@gnoack
Collaborator

gnoack commented Oct 1, 2024

In my understanding from containers/common#2126, the pull request was closed for now, in favor of running it in a subprocess. The reexec package is pretty nice - it seems like a good way to do that, if the use case has clearly defined inputs and outputs. I will close this bug as well, as a dependency. I hope you understand that this would be a bit of a dangerous API to expose to less experienced users. But it might be feasible if the API makes these dangers clear enough.

Please feel free to reopen if the topic comes up again.

@gnoack gnoack closed this as completed Oct 1, 2024
@AkihiroSuda

rewiring one of the many function or object pointers that live in the same address space

These can be detected by checking unsafe, cgo, asm, reflect, /proc/PID/mem, process_vm_writev(2), etc.?

@gnoack
Collaborator

gnoack commented Dec 31, 2024

rewiring one of the many function or object pointers that live in the same address space

These can be detected by checking unsafe, cgo, asm, reflect, /proc/PID/mem, process_vm_writev(2), etc.?

Hello @AkihiroSuda!

Pointer rewiring does not need to go through any APIs like these. Any thread that lives in the same address space can manipulate memory used by other threads directly. Also, pointer rewiring is only one example of how a (Landlock-restricted) thread can influence what another (unrestricted) thread is doing.

Userspace code (like the Go runtime, your program, and all its (cgo?) dependencies) does not normally enforce any security boundaries between threads. There are likely many places in the standard library and elsewhere where a thread can (purposefully) influence what another thread is doing or which code it is executing. And I do not see a realistic way to get all that userspace code hardened for this use case. Fundamentally, if you enable Landlock on only a subset of your threads, your Trusted Computing Base for the Landlock sandbox is all the code that you have loaded into userspace, and that is not something we can control well enough.
