Discussion:
[PATCH v12 01/13] sk_run_filter: add support for custom load_pointer
Will Drewry
2012-02-29 23:53:28 UTC
This change allows CONFIG_SECCOMP to make use of BPF programs for
user-controlled system call filtering (as shown in this patch series).

To minimize the impact on existing BPF evaluation, function pointer
use must be declared at sk_chk_filter-time. This allows ancillary
load instructions to be generated that use the function pointer rather
than adding _any_ code to the existing LD_* instruction paths.

Crude performance numbers using udpflood -l 10000000 against dummy0.
Three trials just udpflooding, three with tcpdump attached; times averaged, then differenced.

[v11 #s]
* x86 32-bit (Atom N570 @ 1.66 GHz 2 core HT) [stackprot]:
(tcpdump) (no bpf)
- Without: 88.19s - 70.98s = 17.21s
- With: 90.97s - 74.96s = 15.98s
- Slowdown per packet: -123 nanoseconds

* x86 64-bit (Atom N570 @ 1.66 GHz 2 core HT) [stackprot]:
(tcpdump) (no bpf)
- Without: 117.80s - 103.99s = 13.80s
- With: 116.81s - 101.85s = 14.96s
- Slowdown per packet: 116 nanoseconds

In this round of benchmarking, the gap seems narrower and I've attempted
to take extra steps to ensure a quiescent system (especially given the
wild variability I was seeing on 64-bit last time). It's not clear to
me if this overhead (on an Atom-class system) is acceptable or not.
Irrespective, I had planned to follow up this patch set by updating
every in-kernel callsite of sk_run/chk_filter to use bpf_*_filter
directly and avoid the extra bounce through the wrappers. I was hoping
that could be done independently, however. If it would make sense to do
that first, or to just duplicate the sk_run_filter code in seccomp.c
initially, please let me know.

v11: - un-inline sk_*_filter to adhere to the existing export symbol use
and not break bpf_jit_free (***@chromium.org)
- updated benchmarks
v10: - converted length to a u32 on the struct because it is passed in
at bpf_run_filter time anyway. (***@nul.nu)
- added a comment about the LD_*_ falling through (***@perches.com)
- more clean up (pointer->load, drop SKB macro, comments) (***@nul.nu)
v9: - n/a
v8: - fixed variable positioning and bad cast (***@gmail.com)
- no longer passes A as a pointer (inspection of x86 asm shows A is
%ebx again; thanks ***@gmail.com)
- cleaned up switch macros and expanded use
(***@perches.com, ***@nul.nu)
- added length fn pointer and handled LD_W_LEN/LDX_W_LEN
- moved from a wrapping struct to a typedef for the function
pointer. (matches existing function pointer style)
- added comprehensive comment above the typedef.
- benchmarks
v7: - first cut

Signed-off-by: Will Drewry <***@chromium.org>
---
include/linux/filter.h | 46 +++++++++++++++++++
net/core/filter.c | 117 +++++++++++++++++++++++++++++++++++++++++++++---
2 files changed, 157 insertions(+), 6 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8eeb205..bdd02f9 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -110,6 +110,9 @@ struct sock_fprog { /* Required for SO_ATTACH_FILTER. */
*/
#define BPF_MEMWORDS 16

+/* BPF program (checking) flags */
+#define BPF_CHK_FLAGS_NO_SKB 1
+
/* RATIONALE. Negative offsets are invalid in BPF.
We use them to reference ancillary data.
Unlike introduction new instructions, it does not break
@@ -145,6 +148,33 @@ struct sk_filter
struct sock_filter insns[0];
};

+/**
+ * struct bpf_load_fn - callback and data for bpf_run_filter
+ * This structure is used by bpf_run_filter if bpf_chk_filter
+ * was invoked with BPF_CHK_FLAGS_NO_SKB.
+ *
+ * @load:
+ * @data: const pointer to the data passed into bpf_run_filter
+ * @k: offset into @skb's data
+ * @size: the size of the requested data in bytes: 1, 2, or 4.
+ * @buffer: If non-NULL, a 32-bit buffer for staging data.
+ *
+ * Returns a pointer to the requested data.
+ *
+ * This function operates similarly to load_pointer in net/core/filter.c
+ * except that the pointer to the returned data must already be
+ * byteswapped as appropriate to the source data and endianness.
+ * @buffer may be used if the data needs to be staged.
+ *
+ * @length: the length of the supplied data for use by the LD*_LEN
+ * instructions.
+ */
+struct bpf_load_fn {
+ void *(*load)(const void *data, int k, unsigned int size,
+ void *buffer);
+ u32 length;
+};
+
static inline unsigned int sk_filter_len(const struct sk_filter *fp)
{
return fp->len * sizeof(struct sock_filter) + sizeof(*fp);
@@ -153,9 +183,15 @@ static inline unsigned int sk_filter_len(const struct sk_filter *fp)
extern int sk_filter(struct sock *sk, struct sk_buff *skb);
extern unsigned int sk_run_filter(const struct sk_buff *skb,
const struct sock_filter *filter);
+extern unsigned int bpf_run_filter(const void *data,
+ const struct sock_filter *filter,
+ const struct bpf_load_fn *load_fn);
extern int sk_attach_filter(struct sock_fprog *fprog, struct sock *sk);
extern int sk_detach_filter(struct sock *sk);
+
extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen);
+extern int bpf_chk_filter(struct sock_filter *filter, unsigned int flen, u32 flags);
+

#ifdef CONFIG_BPF_JIT
extern void bpf_jit_compile(struct sk_filter *fp);
@@ -228,6 +264,16 @@ enum {
BPF_S_ANC_HATYPE,
BPF_S_ANC_RXHASH,
BPF_S_ANC_CPU,
+ /* Used to differentiate SKB data and generic data */
+ BPF_S_ANC_LD_W_ABS,
+ BPF_S_ANC_LD_H_ABS,
+ BPF_S_ANC_LD_B_ABS,
+ BPF_S_ANC_LD_W_LEN,
+ BPF_S_ANC_LD_W_IND,
+ BPF_S_ANC_LD_H_IND,
+ BPF_S_ANC_LD_B_IND,
+ BPF_S_ANC_LDX_W_LEN,
+ BPF_S_ANC_LDX_B_MSH,
};

#endif /* __KERNEL__ */
diff --git a/net/core/filter.c b/net/core/filter.c
index 5dea452..fe8d396 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -98,9 +98,10 @@ int sk_filter(struct sock *sk, struct sk_buff *skb)
EXPORT_SYMBOL(sk_filter);

/**
- * sk_run_filter - run a filter on a socket
- * @skb: buffer to run the filter on
+ * bpf_run_filter - run a BPF filter program on @data
+ * @data: buffer to run the filter on
* @fentry: filter to apply
+ * @load_fn: custom data accessor
*
* Decode and apply filter instructions to the skb->data.
* Return length to keep, 0 for none. @skb is the data we are
@@ -109,8 +110,9 @@ EXPORT_SYMBOL(sk_filter);
* and last instruction guaranteed to be a RET, we dont need to check
* flen. (We used to pass to this function the length of filter)
*/
-unsigned int sk_run_filter(const struct sk_buff *skb,
- const struct sock_filter *fentry)
+unsigned int bpf_run_filter(const void *data,
+ const struct sock_filter *fentry,
+ const struct bpf_load_fn *load_fn)
{
void *ptr;
u32 A = 0; /* Accumulator */
@@ -118,6 +120,7 @@ unsigned int sk_run_filter(const struct sk_buff *skb,
u32 mem[BPF_MEMWORDS]; /* Scratch Memory Store */
u32 tmp;
int k;
+ const struct sk_buff *skb = data;

/*
* Process array of filter instructions.
@@ -350,6 +353,55 @@ load_b:
A = 0;
continue;
}
+ case BPF_S_ANC_LD_W_ABS:
+ k = K;
+load_fn_w:
+ ptr = load_fn->load(data, k, 4, &tmp);
+ if (ptr) {
+ A = *(u32 *)ptr;
+ continue;
+ }
+ return 0;
+ case BPF_S_ANC_LD_H_ABS:
+ k = K;
+load_fn_h:
+ ptr = load_fn->load(data, k, 2, &tmp);
+ if (ptr) {
+ A = *(u16 *)ptr;
+ continue;
+ }
+ return 0;
+ case BPF_S_ANC_LD_B_ABS:
+ k = K;
+load_fn_b:
+ ptr = load_fn->load(data, k, 1, &tmp);
+ if (ptr) {
+ A = *(u8 *)ptr;
+ continue;
+ }
+ return 0;
+ case BPF_S_ANC_LDX_B_MSH:
+ ptr = load_fn->load(data, K, 1, &tmp);
+ if (ptr) {
+ X = (*(u8 *)ptr & 0xf) << 2;
+ continue;
+ }
+ return 0;
+ case BPF_S_ANC_LD_W_IND:
+ k = X + K;
+ goto load_fn_w;
+ case BPF_S_ANC_LD_H_IND:
+ k = X + K;
+ goto load_fn_h;
+ case BPF_S_ANC_LD_B_IND:
+ k = X + K;
+ goto load_fn_b;
+ case BPF_S_ANC_LD_W_LEN:
+ A = load_fn->length;
+ continue;
+ case BPF_S_ANC_LDX_W_LEN:
+ X = load_fn->length;
+ continue;
default:
WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n",
fentry->code, fentry->jt,
@@ -360,6 +412,21 @@ load_b:

return 0;
}
+EXPORT_SYMBOL(bpf_run_filter);
+
+/**
+ * sk_run_filter - run a filter on a socket
+ * @skb: buffer to run the filter on
+ * @fentry: filter to apply
+ *
+ * Runs bpf_run_filter with the struct sk_buff-specific data
+ * accessor behavior.
+ */
+unsigned int sk_run_filter(const struct sk_buff *skb,
+ const struct sock_filter *filter)
+{
+ return bpf_run_filter(skb, filter, NULL);
+}
EXPORT_SYMBOL(sk_run_filter);

/*
@@ -423,9 +490,10 @@ error:
}

/**
- * sk_chk_filter - verify socket filter code
+ * bpf_chk_filter - verify socket filter BPF code
* @filter: filter to verify
* @flen: length of filter
+ * @flags: May be BPF_CHK_FLAGS_NO_SKB or 0
*
* Check the user's filter code. If we let some ugly
* filter code slip through kaboom! The filter must contain
@@ -434,9 +502,13 @@ error:
*
* All jumps are forward as they are not signed.
*
+ * If BPF_CHK_FLAGS_NO_SKB is set in flags, any SKB-specific
+ * rules become illegal and a bpf_load_fn will be expected by
+ * bpf_run_filter.
+ *
* Returns 0 if the rule set is legal or -EINVAL if not.
*/
-int sk_chk_filter(struct sock_filter *filter, unsigned int flen)
+int bpf_chk_filter(struct sock_filter *filter, unsigned int flen, u32 flags)
{
/*
* Valid instructions are initialized to non-0.
@@ -542,9 +614,36 @@ int sk_chk_filter(struct sock_filter *filter, unsigned int flen)
pc + ftest->jf + 1 >= flen)
return -EINVAL;
break;
+#define MAYBE_USE_LOAD_FN(CODE) \
+ if (flags & BPF_CHK_FLAGS_NO_SKB) { \
+ code = BPF_S_ANC_##CODE; \
+ break; \
+ }
+ case BPF_S_LD_W_LEN:
+ MAYBE_USE_LOAD_FN(LD_W_LEN);
+ break;
+ case BPF_S_LDX_W_LEN:
+ MAYBE_USE_LOAD_FN(LDX_W_LEN);
+ break;
+ case BPF_S_LD_W_IND:
+ MAYBE_USE_LOAD_FN(LD_W_IND);
+ break;
+ case BPF_S_LD_H_IND:
+ MAYBE_USE_LOAD_FN(LD_H_IND);
+ break;
+ case BPF_S_LD_B_IND:
+ MAYBE_USE_LOAD_FN(LD_B_IND);
+ break;
+ case BPF_S_LDX_B_MSH:
+ MAYBE_USE_LOAD_FN(LDX_B_MSH);
+ break;
case BPF_S_LD_W_ABS:
+ MAYBE_USE_LOAD_FN(LD_W_ABS);
+ /* Falls through to BPF_S_LD_B_ABS. */
case BPF_S_LD_H_ABS:
+ MAYBE_USE_LOAD_FN(LD_H_ABS);
case BPF_S_LD_B_ABS:
+ MAYBE_USE_LOAD_FN(LD_B_ABS);
#define ANCILLARY(CODE) case SKF_AD_OFF + SKF_AD_##CODE: \
code = BPF_S_ANC_##CODE; \
break
@@ -572,6 +671,12 @@ int sk_chk_filter(struct sock_filter *filter, unsigned int flen)
}
return -EINVAL;
}
+EXPORT_SYMBOL(bpf_chk_filter);
+
+int sk_chk_filter(struct sock_filter *filter, unsigned int flen)
+{
+ return bpf_chk_filter(filter, flen, 0);
+}
EXPORT_SYMBOL(sk_chk_filter);

/**
--
1.7.5.4

--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Will Drewry
2012-02-29 23:53:32 UTC
Add syscall_get_arch() to export the current AUDIT_ARCH_* based on system call
entry path.

Signed-off-by: Will Drewry <***@chromium.org>
---
arch/x86/include/asm/syscall.h | 23 +++++++++++++++++++++++
1 files changed, 23 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
index 386b786..8ec41f1 100644
--- a/arch/x86/include/asm/syscall.h
+++ b/arch/x86/include/asm/syscall.h
@@ -13,6 +13,7 @@
#ifndef _ASM_X86_SYSCALL_H
#define _ASM_X86_SYSCALL_H

+#include <linux/audit.h>
#include <linux/sched.h>
#include <linux/err.h>
#include <asm/asm-offsets.h> /* For NR_syscalls */
@@ -88,6 +89,12 @@ static inline void syscall_set_arguments(struct task_struct *task,
memcpy(&regs->bx + i, args, n * sizeof(args[0]));
}

+static inline int syscall_get_arch(struct task_struct *task,
+ struct pt_regs *regs)
+{
+ return AUDIT_ARCH_I386;
+}
+
#else /* CONFIG_X86_64 */

static inline void syscall_get_arguments(struct task_struct *task,
@@ -212,6 +219,22 @@ static inline void syscall_set_arguments(struct task_struct *task,
}
}

+static inline int syscall_get_arch(struct task_struct *task,
+ struct pt_regs *regs)
+{
+#ifdef CONFIG_IA32_EMULATION
+ /*
+ * TS_COMPAT is set for 32-bit syscall entries and then
+ * remains set until we return to user mode.
+ *
+ * TIF_IA32 tasks should always have TS_COMPAT set at
+ * system call time.
+ */
+ if (task_thread_info(task)->status & TS_COMPAT)
+ return AUDIT_ARCH_I386;
+#endif
+ return AUDIT_ARCH_X86_64;
+}
#endif /* CONFIG_X86_32 */

#endif /* _ASM_X86_SYSCALL_H */
--
1.7.5.4

Will Drewry
2012-02-29 23:53:29 UTC
Any other users of bpf_*_filter that take a struct sock_fprog from
userspace will also need to accept a compat_sock_fprog when the
arch supports compat calls. This change lets the existing
compat_sock_fprog definition be shared.

Signed-off-by: Will Drewry <***@chromium.org>

v11: introduction
---
include/linux/filter.h | 11 +++++++++++
net/compat.c | 8 --------
2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index bdd02f9..f052f5c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -10,6 +10,7 @@

#ifdef __KERNEL__
#include <linux/atomic.h>
+#include <linux/compat.h>
#endif

/*
@@ -135,6 +136,16 @@ struct sock_fprog { /* Required for SO_ATTACH_FILTER. */

#ifdef __KERNEL__

+#ifdef CONFIG_COMPAT
+/*
+ * A struct sock_filter is architecture independent.
+ */
+struct compat_sock_fprog {
+ u16 len;
+ compat_uptr_t filter; /* struct sock_filter * */
+};
+#endif
+
struct sk_buff;
struct sock;

diff --git a/net/compat.c b/net/compat.c
index 73bf0e0..5ba1bdc 100644
--- a/net/compat.c
+++ b/net/compat.c
@@ -328,14 +328,6 @@ void scm_detach_fds_compat(struct msghdr *kmsg, struct scm_cookie *scm)
__scm_destroy(scm);
}

-/*
- * A struct sock_filter is architecture independent.
- */
-struct compat_sock_fprog {
- u16 len;
- compat_uptr_t filter; /* struct sock_filter * */
-};
-
static int do_set_attach_filter(struct socket *sock, int level, int optname,
char __user *optval, unsigned int optlen)
{
--
1.7.5.4
Will Drewry
2012-02-29 23:53:34 UTC
This change adds SECCOMP_RET_ERRNO as a valid return value from a
seccomp filter. Additionally, it makes the first use of the lower
16 bits for storing a filter-supplied errno. 16 bits is more than
enough for the errno-base.h calls.

Returning errors instead of immediately terminating processes that
violate seccomp policy allows for broader use of this functionality
for kernel attack surface reduction. For example, a Linux container
could maintain a whitelist of pre-existing system calls but drop
all new ones with errnos. This would keep a logically static attack
surface while providing errnos that may allow for graceful failure
without the downside of do_exit() on a bad call.

v12: - move to WARN_ON if filter is NULL
(***@redhat.com, ***@mit.edu, ***@chromium.org)
- return immediately for filter==NULL (***@chromium.org)
- change evaluation to only compare the ACTION so that layered
errnos don't result in the lowest one being returned.
(***@chromium.org)
v11: - check for NULL filter (***@chromium.org)
v10: - change loaders to fn
v9: - n/a
v8: - update Kconfig to note new need for syscall_set_return_value.
- reordered such that TRAP behavior follows on later.
- made the for loop a little less indent-y
v7: - introduced

Reviewed-by: Kees Cook <***@chromium.org>
Signed-off-by: Will Drewry <***@chromium.org>
---
arch/Kconfig | 6 ++++--
include/linux/seccomp.h | 15 +++++++++++----
kernel/seccomp.c | 43 ++++++++++++++++++++++++++++++++++---------
3 files changed, 49 insertions(+), 15 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 7a696a9..1350d07 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -237,8 +237,10 @@ config HAVE_ARCH_SECCOMP_FILTER
bool
help
This symbol should be selected by an architecure if it provides
- asm/syscall.h, specifically syscall_get_arguments() and
- syscall_get_arch().
+ asm/syscall.h, specifically syscall_get_arguments(),
+ syscall_get_arch(), and syscall_set_return_value(). Additionally,
+ its system call entry path must respect a return value of -1 from
+ __secure_computing_int() and/or secure_computing().

config SECCOMP_FILTER
def_bool y
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 6ef133c..a81fccd 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -12,13 +12,14 @@

/*
* All BPF programs must return a 32-bit value.
- * The bottom 16-bits are reserved for future use.
+ * The bottom 16-bits are for optional return data.
* The upper 16-bits are ordered from least permissive values to most.
*
* The ordering ensures that a min_t() over composed return values always
* selects the least permissive choice.
*/
#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_ERRNO 0x00030000U /* returns an errno */
#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */

/* Masks for the return value sections. */
@@ -64,11 +65,17 @@ struct seccomp {
struct seccomp_filter *filter;
};

-extern void __secure_computing(int);
-static inline void secure_computing(int this_syscall)
+/*
+ * Direct callers to __secure_computing should be updated as
+ * CONFIG_HAVE_ARCH_SECCOMP_FILTER propagates.
+ */
+extern void __secure_computing(int) __deprecated;
+extern int __secure_computing_int(int);
+static inline int secure_computing(int this_syscall)
{
if (unlikely(test_thread_flag(TIF_SECCOMP)))
- __secure_computing(this_syscall);
+ return __secure_computing_int(this_syscall);
+ return 0;
}

extern long prctl_get_seccomp(void);
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 71df324..88dd568 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -137,21 +137,25 @@ static void *bpf_load(const void *nr, int off, unsigned int size, void *buf)
static u32 seccomp_run_filters(int syscall)
{
struct seccomp_filter *f;
- u32 ret = SECCOMP_RET_KILL;
static const struct bpf_load_fn fns = {
bpf_load,
sizeof(struct seccomp_data),
};
+ u32 ret = SECCOMP_RET_ALLOW;
const void *sc_ptr = (const void *)(uintptr_t)syscall;

+ /* Ensure unexpected behavior doesn't result in failing open. */
+ if (WARN_ON(current->seccomp.filter == NULL))
+ return SECCOMP_RET_KILL;
+
/*
* All filters are evaluated in order of youngest to oldest. The lowest
- * BPF return value always takes priority.
+ * BPF return value (ignoring the DATA) always takes priority.
*/
for (f = current->seccomp.filter; f; f = f->prev) {
- ret = bpf_run_filter(sc_ptr, f->insns, &fns);
- if (ret != SECCOMP_RET_ALLOW)
- break;
+ u32 cur_ret = bpf_run_filter(sc_ptr, f->insns, &fns);
+ if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
+ ret = cur_ret;
}
return ret;
}
@@ -289,6 +293,13 @@ static int mode1_syscalls_32[] = {

void __secure_computing(int this_syscall)
{
+ /* Filter calls should never use this function. */
+ BUG_ON(current->seccomp.mode == SECCOMP_MODE_FILTER);
+ __secure_computing_int(this_syscall);
+}
+
+int __secure_computing_int(int this_syscall)
+{
int mode = current->seccomp.mode;
int exit_code = SIGKILL;
int *syscall;
@@ -302,16 +313,29 @@ void __secure_computing(int this_syscall)
#endif
do {
if (*syscall == this_syscall)
- return;
+ return 0;
} while (*++syscall);
break;
#ifdef CONFIG_SECCOMP_FILTER
- case SECCOMP_MODE_FILTER:
- if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
- return;
+ case SECCOMP_MODE_FILTER: {
+ u32 action = seccomp_run_filters(this_syscall);
+ switch (action & SECCOMP_RET_ACTION) {
+ case SECCOMP_RET_ERRNO:
+ /* Set the low-order 16-bits as a errno. */
+ syscall_set_return_value(current, task_pt_regs(current),
+ -(action & SECCOMP_RET_DATA),
+ 0);
+ return -1;
+ case SECCOMP_RET_ALLOW:
+ return 0;
+ case SECCOMP_RET_KILL:
+ default:
+ break;
+ }
seccomp_filter_log_failure(this_syscall);
exit_code = SIGSYS;
break;
+ }
#endif
default:
BUG();
@@ -322,6 +346,7 @@ void __secure_computing(int this_syscall)
#endif
audit_seccomp(this_syscall);
do_exit(exit_code);
+ return -1; /* never reached */
}

long prctl_get_seccomp(void)
--
1.7.5.4
Serge E. Hallyn
2012-03-02 18:24:55 UTC
Post by Will Drewry
This change adds the SECCOMP_RET_ERRNO as a valid return value from a
seccomp filter. Additionally, it makes the first use of the lower
16-bits for storing a filter-supplied errno. 16-bits is more than
enough for the errno-base.h calls.
Returning errors instead of immediately terminating processes that
violate seccomp policy allow for broader use of this functionality
for kernel attack surface reduction. For example, a linux container
could maintain a whitelist of pre-existing system calls but drop
all new ones with errnos. This would keep a logically static attack
surface while providing errnos that may allow for graceful failure
without the downside of do_exit() on a bad call.
v12: - move to WARN_ON if filter is NULL
- change evaluation to only compare the ACTION so that layered
errnos don't result in the lowest one being returned.
v10: - change loaders to fn
v9: - n/a
v8: - update Kconfig to note new need for syscall_set_return_value.
- reordered such that TRAP behavior follows on later.
- made the for loop a little less indent-y
v7: - introduced
Clever :)

Thanks, Will.

For patches 1-7,

Acked-by: Serge Hallyn <***@canonical.com>

The -1 return value from __secure_computing_int() seems like it
could stand a #define, like

#define SECCOMP_DONTRUN -1
#define SECCOMP_RUN 0

or something. Maybe not, but -1 always scares me and I had to look back
and forth a few times to make sure it was doing what I would want.

(I've only quickly looked at the following ones. I had no
objection, but didn't seriously review them.)
Will Drewry
2012-03-05 21:03:11 UTC
Post by Serge E. Hallyn
This change adds the SECCOMP_RET_ERRNO as a valid return value from a
seccomp filter. Additionally, it makes the first use of the lower
16-bits for storing a filter-supplied errno. 16-bits is more than
enough for the errno-base.h calls.
Returning errors instead of immediately terminating processes that
violate seccomp policy allow for broader use of this functionality
for kernel attack surface reduction. For example, a linux container
could maintain a whitelist of pre-existing system calls but drop
all new ones with errnos. This would keep a logically static attack
surface while providing errnos that may allow for graceful failure
without the downside of do_exit() on a bad call.
v12: - move to WARN_ON if filter is NULL
     - change evaluation to only compare the ACTION so that layered
       errnos don't result in the lowest one being returned.
v10: - change loaders to fn
 v9: - n/a
 v8: - update Kconfig to note new need for syscall_set_return_value.
     - reordered such that TRAP behavior follows on later.
     - made the for loop a little less indent-y
 v7: - introduced
Clever :)
Thanks, Will.
For patches 1-7,
Thanks!
Post by Serge E. Hallyn
The -1 return value from __secure_computing_int() seems like it
could stand a #define, like
#define SECCOMP_DONTRUN -1
#define SECCOMP_RUN 0
or something. Maybe not, but -1 always scares me and I had to look back
and forth a few times to make sure it was doing what I would want.
Works for me. The -1 just matches what syscall emulation, etc. does on
x86. I'll add this to the tweaks for v14.

Thanks!
Post by Serge E. Hallyn
(I've only quickly looked at the following ones. I had no
objection, but didn't seriously review them.)
---
=A0arch/Kconfig =A0 =A0 =A0 =A0 =A0 =A0| =A0 =A06 ++++--
=A0include/linux/seccomp.h | =A0 15 +++++++++++----
=A0kernel/seccomp.c =A0 =A0 =A0 =A0| =A0 43 ++++++++++++++++++++++++=
++++++++++---------
Post by Serge E. Hallyn
=A03 files changed, 49 insertions(+), 15 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 7a696a9..1350d07 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -237,8 +237,10 @@ config HAVE_ARCH_SECCOMP_FILTER
=A0 =A0 =A0 bool
=A0 =A0 =A0 help
=A0 =A0 =A0 =A0 This symbol should be selected by an architecure if =
it provides
Post by Serge E. Hallyn
- =A0 =A0 =A0 asm/syscall.h, specifically syscall_get_arguments() an=
d
Post by Serge E. Hallyn
- =A0 =A0 =A0 syscall_get_arch().
+ =A0 =A0 =A0 asm/syscall.h, specifically syscall_get_arguments(),
+ =A0 =A0 =A0 syscall_get_arch(), and syscall_set_return_value(). =A0=
Additionally,
Post by Serge E. Hallyn
+ =A0 =A0 =A0 its system call entry path must respect a return value=
of -1 from
Post by Serge E. Hallyn
+ =A0 =A0 =A0 __secure_computing_int() and/or secure_computing().
=A0config SECCOMP_FILTER
=A0 =A0 =A0 def_bool y
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 6ef133c..a81fccd 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -12,13 +12,14 @@
=A0/*
=A0 * All BPF programs must return a 32-bit value.
- * The bottom 16-bits are reserved for future use.
+ * The bottom 16-bits are for optional return data.
=A0 * The upper 16-bits are ordered from least permissive values to =
most.
Post by Serge E. Hallyn
=A0 *
=A0 * The ordering ensures that a min_t() over composed return value=
s always
Post by Serge E. Hallyn
=A0 * selects the least permissive choice.
=A0 */
=A0#define SECCOMP_RET_KILL =A0 =A0 0x00000000U /* kill the task imm=
ediately */
Post by Serge E. Hallyn
+#define SECCOMP_RET_ERRNO =A0 =A00x00030000U /* returns an errno */
=A0#define SECCOMP_RET_ALLOW =A0 =A00x7fff0000U /* allow */
=A0/* Masks for the return value sections. */
@@ -64,11 +65,17 @@ struct seccomp {
=A0 =A0 =A0 struct seccomp_filter *filter;
=A0};
-extern void __secure_computing(int);
-static inline void secure_computing(int this_syscall)
+/*
+ * Direct callers to __secure_computing should be updated as
+ * CONFIG_HAVE_ARCH_SECCOMP_FILTER propagates.
+ */
+extern void __secure_computing(int) __deprecated;
+extern int __secure_computing_int(int);
+static inline int secure_computing(int this_syscall)
=A0{
=A0 =A0 =A0 if (unlikely(test_thread_flag(TIF_SECCOMP)))
- =A0 =A0 =A0 =A0 =A0 =A0 __secure_computing(this_syscall);
+ =A0 =A0 =A0 =A0 =A0 =A0 return =A0__secure_computing_int(this_sysc=
all);
Post by Serge E. Hallyn
+ =A0 =A0 return 0;
=A0}
=A0extern long prctl_get_seccomp(void);
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 71df324..88dd568 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -137,21 +137,25 @@ static void *bpf_load(const void *nr, int off, unsigned int size, void *buf)
 static u32 seccomp_run_filters(int syscall)
 {
 	struct seccomp_filter *f;
-	u32 ret = SECCOMP_RET_KILL;
 	static const struct bpf_load_fn fns = {
 		bpf_load,
 		sizeof(struct seccomp_data),
 	};
+	u32 ret = SECCOMP_RET_ALLOW;
 	const void *sc_ptr = (const void *)(uintptr_t)syscall;

+	/* Ensure unexpected behavior doesn't result in failing open. */
+	if (WARN_ON(current->seccomp.filter == NULL))
+		return SECCOMP_RET_KILL;
+
 	/*
 	 * All filters are evaluated in order of youngest to oldest. The lowest
-	 * BPF return value always takes priority.
+	 * BPF return value (ignoring the DATA) always takes priority.
 	 */
 	for (f = current->seccomp.filter; f; f = f->prev) {
-		ret = bpf_run_filter(sc_ptr, f->insns, &fns);
-		if (ret != SECCOMP_RET_ALLOW)
-			break;
+		u32 cur_ret = bpf_run_filter(sc_ptr, f->insns, &fns);
+		if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
+			ret = cur_ret;
 	}
 	return ret;
 }
@@ -289,6 +293,13 @@ static int mode1_syscalls_32[] = {
 void __secure_computing(int this_syscall)
 {
+	/* Filter calls should never use this function. */
+	BUG_ON(current->seccomp.mode == SECCOMP_MODE_FILTER);
+	__secure_computing_int(this_syscall);
+}
+
+int __secure_computing_int(int this_syscall)
+{
 	int mode = current->seccomp.mode;
 	int exit_code = SIGKILL;
 	int *syscall;
@@ -302,16 +313,29 @@ void __secure_computing(int this_syscall)
 #endif
 		do {
 			if (*syscall == this_syscall)
-				return;
+				return 0;
 		} while (*++syscall);
 		break;
 #ifdef CONFIG_SECCOMP_FILTER
-	case SECCOMP_MODE_FILTER:
-		if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
-			return;
+	case SECCOMP_MODE_FILTER: {
+		u32 action = seccomp_run_filters(this_syscall);
+		switch (action & SECCOMP_RET_ACTION) {
+		case SECCOMP_RET_ERRNO:
+			/* Set the low-order 16-bits as a errno. */
+			syscall_set_return_value(current, task_pt_regs(current),
+						 -(action & SECCOMP_RET_DATA),
+						 0);
+			return -1;
+		case SECCOMP_RET_ALLOW:
+			return 0;
+		case SECCOMP_RET_KILL:
+		default:
+			break;
+		}
 		seccomp_filter_log_failure(this_syscall);
 		exit_code = SIGSYS;
 		break;
+	}
 #endif
 	default:
 		BUG();
@@ -322,6 +346,7 @@ void __secure_computing(int this_syscall)
 #endif
 	audit_seccomp(this_syscall);
 	do_exit(exit_code);
+	return -1;	/* never reached */
 }

 long prctl_get_seccomp(void)
--
1.7.5.4
Serge Hallyn
2012-03-05 21:13:08 UTC
----- Original message -----
Post by Will Drewry
Post by Serge E. Hallyn
Post by Will Drewry
This change adds the SECCOMP_RET_ERRNO as a valid return value from a
seccomp filter. Additionally, it makes the first use of the lower
16-bits for storing a filter-supplied errno. 16-bits is more than
enough for the errno-base.h calls.
Returning errors instead of immediately terminating processes that
violate seccomp policy allows for broader use of this functionality
for kernel attack surface reduction. For example, a linux container
could maintain a whitelist of pre-existing system calls but drop
all new ones with errnos. This would keep a logically static attack
surface while providing errnos that may allow for graceful failure
without the downside of do_exit() on a bad call.
v12: - move to WARN_ON if filter is NULL
     - change evaluation to only compare the ACTION so that layered
       errnos don't result in the lowest one being returned.
v10: - change loaders to fn
 v9: - n/a
 v8: - update Kconfig to note new need for syscall_set_return_value.
     - reordered such that TRAP behavior follows on later.
     - made the for loop a little less indent-y
 v7: - introduced
Clever :)
Thanks, Will.
For patches 1-7,
Thanks!
Post by Serge E. Hallyn
The -1 return value from __secure_computing_int() seems like it
could stand a #define, like
#define SECCOMP_DONTRUN -1
#define SECCOMP_RUN 0
or something. Maybe not, but -1 always scares me and I had to look back
and forth a few times to make sure it was doing what I would want.
Works for me.  The -1 just matches what syscall emulation, etc does on
x86.  I'll add this to the tweaks for v14.
Thanks!
Well, in that case maybe it's not worth it. Sounds
like ignorance on my part.

thanks,
-serge

Will Drewry
2012-02-29 23:53:30 UTC
Replaces the seccomp_t typedef with struct seccomp to match modern
kernel style.

v8-v10: no changes
v7: struct seccomp_struct -> struct seccomp
v6: original inclusion in this series.

Signed-off-by: Will Drewry <***@chromium.org>
Reviewed-by: James Morris <***@namei.org>
---
include/linux/sched.h | 2 +-
include/linux/seccomp.h | 10 ++++++----
2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 7fe1083..2ee7041 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1445,7 +1445,7 @@ struct task_struct {
uid_t loginuid;
unsigned int sessionid;
#endif
- seccomp_t seccomp;
+ struct seccomp seccomp;

/* Thread group tracking */
u32 parent_exec_id;
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index cc7a4e9..d61f27f 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -7,7 +7,9 @@
#include <linux/thread_info.h>
#include <asm/seccomp.h>

-typedef struct { int mode; } seccomp_t;
+struct seccomp {
+ int mode;
+};

extern void __secure_computing(int);
static inline void secure_computing(int this_syscall)
@@ -19,7 +21,7 @@ static inline void secure_computing(int this_syscall)
extern long prctl_get_seccomp(void);
extern long prctl_set_seccomp(unsigned long);

-static inline int seccomp_mode(seccomp_t *s)
+static inline int seccomp_mode(struct seccomp *s)
{
return s->mode;
}
@@ -28,7 +30,7 @@ static inline int seccomp_mode(seccomp_t *s)

#include <linux/errno.h>

-typedef struct { } seccomp_t;
+struct seccomp { };

#define secure_computing(x) do { } while (0)

@@ -42,7 +44,7 @@ static inline long prctl_set_seccomp(unsigned long arg2)
return -EINVAL;
}

-static inline int seccomp_mode(seccomp_t *s)
+static inline int seccomp_mode(struct seccomp *s)
{
return 0;
}
--
1.7.5.4
Will Drewry
2012-02-29 23:53:31 UTC
Adds a stub for a function that will return the AUDIT_ARCH_*
value appropriate to the supplied task based on the system
call convention.

For audit's use, the value can generally be hard-coded at the
audit-site. However, for other functionality not inlined into
syscall entry/exit, this makes that information available.
seccomp_filter is the first planned consumer and, as such,
the comment indicates a tie to HAVE_ARCH_SECCOMP_FILTER. That
is probably an unneeded detail.

Suggested-by: Roland McGrath <***@chromium.org>
Signed-off-by: Will Drewry <***@chromium.org>

v11: fixed improper return type
v10: introduced
---
include/asm-generic/syscall.h | 14 ++++++++++++++
1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/include/asm-generic/syscall.h b/include/asm-generic/syscall.h
index 5c122ae..a2c13dc 100644
--- a/include/asm-generic/syscall.h
+++ b/include/asm-generic/syscall.h
@@ -142,4 +142,18 @@ void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
unsigned int i, unsigned int n,
const unsigned long *args);

+/**
+ * syscall_get_arch - return the AUDIT_ARCH for the current system call
+ * @task: task of interest, must be in system call entry tracing
+ * @regs: task_pt_regs() of @task
+ *
+ * Returns the AUDIT_ARCH_* based on the system call convention in use.
+ *
+ * It's only valid to call this when @task is stopped on entry to a system
+ * call, due to %TIF_SYSCALL_TRACE, %TIF_SYSCALL_AUDIT, or %TIF_SECCOMP.
+ *
+ * Note, at present this function is only required with
+ * CONFIG_HAVE_ARCH_SECCOMP_FILTER.
+ */
+int syscall_get_arch(struct task_struct *task, struct pt_regs *regs);
#endif /* _ASM_SYSCALL_H */
--
1.7.5.4
Will Drewry
2012-02-29 23:53:37 UTC
This change adds support for a new ptrace option, PTRACE_O_TRACESECCOMP,
and a new return value for seccomp BPF programs, SECCOMP_RET_TRACE.

When a tracer specifies the PTRACE_O_TRACESECCOMP ptrace option, the
tracer will be notified for any syscall that results in a BPF program
returning SECCOMP_RET_TRACE. The 16-bit SECCOMP_RET_DATA mask of the
BPF program return value will be passed as the ptrace_message and may be
retrieved using PTRACE_GETEVENTMSG.

If the subordinate process is not using a seccomp filter, then no
system call notifications will occur even if the option is specified.

If there is no attached tracer when SECCOMP_RET_TRACE is returned,
the system call will not be executed and an -ENOSYS errno will be
returned to userspace.

This change adds a dependency on the system call slow path. Any future
efforts to use the system call fast path for seccomp filter will need to
address this restriction.

v12: - rebase to linux-next
- use ptrace_event and update arch/Kconfig to mention slow-path dependency
- drop all tracehook changes and inclusion (***@redhat.com)
v11: - invert the logic to just make it a PTRACE_SYSCALL accelerator
(***@nul.nu)
v10: - moved to PTRACE_O_SECCOMP / PT_TRACE_SECCOMP
v9: - n/a
v8: - guarded PTRACE_SECCOMP use with an ifdef
v7: - introduced

Signed-off-by: Will Drewry <***@chromium.org>
---
arch/Kconfig | 11 ++++++-----
include/linux/ptrace.h | 5 ++++-
include/linux/seccomp.h | 1 +
kernel/seccomp.c | 20 +++++++++++++++-----
4 files changed, 26 insertions(+), 11 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 6908bf6..beb1507 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -236,15 +236,16 @@ config ARCH_WANT_OLD_COMPAT_IPC
config HAVE_ARCH_SECCOMP_FILTER
bool
help
- This symbol should be selected by an architecure if it provides:
- asm/syscall.h:
+ An arch should select this symbol if it provides all of these things:
- syscall_get_arch()
- syscall_get_arguments()
- syscall_rollback()
- syscall_set_return_value()
- SIGSYS siginfo_t support must be implemented.
- __secure_computing_int()/secure_computing()'s return value must be
- checked, with -1 resulting in the syscall being skipped.
+ - SIGSYS siginfo_t support
+ - uses __secure_computing_int() or secure_computing()
+ - secure_computing is called from a ptrace_event()-safe context
+ - secure_computing return value is checked and a return value of -1
+ results in the system call being skipped immediately.

config SECCOMP_FILTER
def_bool y
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 5c71962..597e4fd 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -58,6 +58,7 @@
#define PTRACE_EVENT_EXEC 4
#define PTRACE_EVENT_VFORK_DONE 5
#define PTRACE_EVENT_EXIT 6
+#define PTRACE_EVENT_SECCOMP 7
/* Extended result codes which enabled by means other than options. */
#define PTRACE_EVENT_STOP 128

@@ -69,8 +70,9 @@
#define PTRACE_O_TRACEEXEC (1 << PTRACE_EVENT_EXEC)
#define PTRACE_O_TRACEVFORKDONE (1 << PTRACE_EVENT_VFORK_DONE)
#define PTRACE_O_TRACEEXIT (1 << PTRACE_EVENT_EXIT)
+#define PTRACE_O_TRACESECCOMP (1 << PTRACE_EVENT_SECCOMP)

-#define PTRACE_O_MASK 0x0000007f
+#define PTRACE_O_MASK 0x000000ff

#include <asm/ptrace.h>

@@ -98,6 +100,7 @@
#define PT_TRACE_EXEC PT_EVENT_FLAG(PTRACE_EVENT_EXEC)
#define PT_TRACE_VFORK_DONE PT_EVENT_FLAG(PTRACE_EVENT_VFORK_DONE)
#define PT_TRACE_EXIT PT_EVENT_FLAG(PTRACE_EVENT_EXIT)
+#define PT_TRACE_SECCOMP PT_EVENT_FLAG(PTRACE_EVENT_SECCOMP)

/* single stepping state bits (used on ARM and PA-RISC) */
#define PT_SINGLESTEP_BIT 31
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 3fe48fd..e183598 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -21,6 +21,7 @@
#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
#define SECCOMP_RET_TRAP 0x00020000U /* disallow and force a SIGSYS */
#define SECCOMP_RET_ERRNO 0x00030000U /* returns an errno */
+#define SECCOMP_RET_TRACE 0x7ffe0000U /* pass to a tracer or disallow */
#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */

/* Masks for the return value sections. */
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 10c6077..1064327 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -17,13 +17,13 @@
#include <linux/audit.h>
#include <linux/compat.h>
#include <linux/filter.h>
+#include <linux/ptrace.h>
#include <linux/sched.h>
#include <linux/seccomp.h>
#include <linux/security.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

-#include <linux/tracehook.h>
#include <asm/syscall.h>

/* #define SECCOMP_DEBUG 1 */
@@ -346,14 +346,24 @@ int __secure_computing_int(int this_syscall)
-(action & SECCOMP_RET_DATA),
0);
return -1;
- case SECCOMP_RET_TRAP: {
- int reason_code = action & SECCOMP_RET_DATA;
+ case SECCOMP_RET_TRAP:
/* Show the handler the original registers. */
syscall_rollback(current, task_pt_regs(current));
/* Let the filter pass back 16 bits of data. */
- seccomp_send_sigsys(this_syscall, reason_code);
+ seccomp_send_sigsys(this_syscall,
+ action & SECCOMP_RET_DATA);
return -1;
- }
+ case SECCOMP_RET_TRACE:
+ /* Skip these calls if there is no tracer. */
+ if (!ptrace_event_enabled(current,
+ PTRACE_EVENT_SECCOMP))
+ return -1;
+ /* Allow the BPF to provide the event message */
+ ptrace_event(PTRACE_EVENT_SECCOMP,
+ action & SECCOMP_RET_DATA);
+ if (fatal_signal_pending(current))
+ break;
+ return 0;
case SECCOMP_RET_ALLOW:
return 0;
case SECCOMP_RET_KILL:
--
1.7.5.4
Will Drewry
2012-02-29 23:53:33 UTC
[This patch depends on ***@mit.edu's no_new_privs patch:
https://lkml.org/lkml/2012/1/30/264
]

This patch adds support for seccomp mode 2. Mode 2 introduces the
ability for unprivileged processes to install system call filtering
policy expressed in terms of a Berkeley Packet Filter (BPF) program.
This program will be evaluated in the kernel for each system call
the task makes and computes a result based on data in the format
of struct seccomp_data.

A filter program may be installed by calling:
struct sock_fprog fprog = { ... };
...
prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &fprog);

The return value of the filter program determines if the system call is
allowed to proceed or denied. If the first filter program installed
allows prctl(2) calls, then the above call may be made repeatedly
by a task to further reduce its access to the kernel. All attached
programs must be evaluated before a system call will be allowed to
proceed.

Filter programs will be inherited across fork/clone and execve.
However, if the task attaching the filter is unprivileged
(!CAP_SYS_ADMIN) the no_new_privs bit will be set on the task. This
ensures that unprivileged tasks cannot attach filters that affect
privileged tasks (e.g., setuid binary).

There are a number of benefits to this approach. A few of which are
as follows:
- BPF has been exposed to userland for a long time
- BPF optimization (and JIT'ing) are well understood
- Userland already knows its ABI: system call numbers and desired
arguments
- No time-of-check-time-of-use vulnerable data accesses are possible.
- system call arguments are loaded on access only to minimize copying
required for system call policy decisions.

Mode 2 support is restricted to architectures that enable
HAVE_ARCH_SECCOMP_FILTER. In this patch, the primary dependency is on
syscall_get_arguments(). The full desired scope of this feature will
add a few minor additional requirements expressed later in this series.
Based on discussion, SECCOMP_RET_ERRNO and SECCOMP_RET_TRACE seem to be
the desired additional functionality.

No architectures are enabled in this patch.

v12: - added a maximum instruction count per path (***@nul.nu,***@redhat.com)
- removed copy_seccomp (***@chromium.org,***@nul.nu)
- reworded the prctl_set_seccomp comment (***@nul.nu)
v11: - reorder struct seccomp_data to allow future args expansion (***@zytor.com)
- style clean up, @compat dropped, compat_sock_fprog32 (***@nul.nu)
- do_exit(SIGSYS) (***@chromium.org, ***@mit.edu)
- pare down Kconfig doc reference.
- extra comment clean up
v10: - seccomp_data has changed again to be more aesthetically pleasing
(***@zytor.com)
- calling convention is noted in a new u32 field using syscall_get_arch.
This allows for cross-calling convention tasks to use seccomp filters.
(***@zytor.com)
- lots of clean up (thanks, Indan!)
v9: - n/a
v8: - use bpf_chk_filter, bpf_run_filter. update load_fns
- Lots of fixes courtesy of ***@nul.nu:
-- fix up load behavior, compat fixups, and merge alloc code,
-- renamed pc and dropped __packed, use bool compat.
-- Added a hidden CONFIG_SECCOMP_FILTER to synthesize non-arch
dependencies
v7: (massive overhaul thanks to Indan, others)
- added CONFIG_HAVE_ARCH_SECCOMP_FILTER
- merged into seccomp.c
- minimal seccomp_filter.h
- no config option (part of seccomp)
- no new prctl
- doesn't break seccomp on systems without asm/syscall.h
(works but arg access always fails)
- dropped seccomp_init_task, extra free functions, ...
- dropped the no-asm/syscall.h code paths
- merges with network sk_run_filter and sk_chk_filter
v6: - fix memory leak on attach compat check failure
- require no_new_privs || CAP_SYS_ADMIN prior to filter
installation. (***@mit.edu)
- s/seccomp_struct_/seccomp_/ for macros/functions (***@redhat.com)
- cleaned up Kconfig (***@redhat.com)
- on block, note if the call was compat (so the # means something)
v5: - uses syscall_get_arguments
(***@nul.nu,***@redhat.com, ***@chromium.org)
- uses union-based arg storage with hi/lo struct to
handle endianness. Compromises between the two alternate
proposals to minimize extra arg shuffling and account for
endianness assuming userspace uses offsetof().
(***@chromium.org, ***@nul.nu)
- update Kconfig description
- add include/seccomp_filter.h and add its installation
- (naive) on-demand syscall argument loading
- drop seccomp_t (***@redhat.com)
v4: - adjusted prctl to make room for PR_[SG]ET_NO_NEW_PRIVS
- now uses current->no_new_privs
(***@mit.edu,***@linux-foundation.com)
- assign names to seccomp modes (***@xenotime.net)
- fix style issues (***@xenotime.net)
- reworded Kconfig entry (***@xenotime.net)
v3: - macros to inline (***@redhat.com)
- init_task behavior fixed (***@redhat.com)
- drop creator entry and extra NULL check (***@redhat.com)
- alloc returns -EINVAL on bad sizing (***@canonical.com)
- adds tentative use of "always_unprivileged" as per
***@linux-foundation.org and ***@mit.edu
v2: - (patch 2 only)

Reviewed-by: Indan Zupancic <***@nul.nu>
Signed-off-by: Will Drewry <***@chromium.org>
---
arch/Kconfig | 17 +++
include/linux/Kbuild | 1 +
include/linux/seccomp.h | 70 ++++++++++-
kernel/fork.c | 3 +
kernel/seccomp.c | 324 ++++++++++++++++++++++++++++++++++++++++++++---
kernel/sys.c | 2 +-
6 files changed, 394 insertions(+), 23 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index e5d3778..7a696a9 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -233,4 +233,21 @@ config HAVE_CMPXCHG_DOUBLE
config ARCH_WANT_OLD_COMPAT_IPC
bool

+config HAVE_ARCH_SECCOMP_FILTER
+ bool
+ help
+ This symbol should be selected by an architecure if it provides
+ asm/syscall.h, specifically syscall_get_arguments() and
+ syscall_get_arch().
+
+config SECCOMP_FILTER
+ def_bool y
+ depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
+ help
+ Enable tasks to build secure computing environments defined
+ in terms of Berkeley Packet Filter programs which implement
+ task-defined system call filtering polices.
+
+ See Documentation/prctl/seccomp_filter.txt for details.
+
source "kernel/gcov/Kconfig"
diff --git a/include/linux/Kbuild b/include/linux/Kbuild
index ecd0afb..b1e6a74 100644
--- a/include/linux/Kbuild
+++ b/include/linux/Kbuild
@@ -332,6 +332,7 @@ header-y += scc.h
header-y += sched.h
header-y += screen_info.h
header-y += sdla.h
+header-y += seccomp.h
header-y += securebits.h
header-y += selinux_netlink.h
header-y += sem.h
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index d61f27f..6ef133c 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -1,14 +1,67 @@
#ifndef _LINUX_SECCOMP_H
#define _LINUX_SECCOMP_H

+#include <linux/compiler.h>
+#include <linux/types.h>
+
+
+/* Valid values for seccomp.mode and prctl(PR_SET_SECCOMP, <mode>) */
+#define SECCOMP_MODE_DISABLED 0 /* seccomp is not in use. */
+#define SECCOMP_MODE_STRICT 1 /* uses hard-coded filter. */
+#define SECCOMP_MODE_FILTER 2 /* uses user-supplied filter. */
+
+/*
+ * All BPF programs must return a 32-bit value.
+ * The bottom 16-bits are reserved for future use.
+ * The upper 16-bits are ordered from least permissive values to most.
+ *
+ * The ordering ensures that a min_t() over composed return values always
+ * selects the least permissive choice.
+ */
+#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */
+
+/* Masks for the return value sections. */
+#define SECCOMP_RET_ACTION 0xffff0000U
+#define SECCOMP_RET_DATA 0x0000ffffU
+
+/**
+ * struct seccomp_data - the format the BPF program executes over.
+ * @nr: the system call number
+ * @arch: indicates system call convention as an AUDIT_ARCH_* value
+ * as defined in <linux/audit.h>.
+ * @instruction_pointer: at the time of the system call.
+ * @args: up to 6 system call arguments always stored as 64-bit values
+ * regardless of the architecture.
+ */
+struct seccomp_data {
+ int nr;
+ __u32 arch;
+ __u64 instruction_pointer;
+ __u64 args[6];
+};

+#ifdef __KERNEL__
#ifdef CONFIG_SECCOMP

#include <linux/thread_info.h>
#include <asm/seccomp.h>

+struct seccomp_filter;
+/**
+ * struct seccomp - the state of a seccomp'ed process
+ *
+ * @mode: indicates one of the valid values above for controlled
+ * system calls available to a process.
+ * @filter: The metadata and ruleset for determining what system calls
+ * are allowed for a task.
+ *
+ * @filter must only be accessed from the context of current as there
+ * is no locking.
+ */
struct seccomp {
int mode;
+ struct seccomp_filter *filter;
};

extern void __secure_computing(int);
@@ -19,7 +72,7 @@ static inline void secure_computing(int this_syscall)
}

extern long prctl_get_seccomp(void);
-extern long prctl_set_seccomp(unsigned long);
+extern long prctl_set_seccomp(unsigned long, char __user *);

static inline int seccomp_mode(struct seccomp *s)
{
@@ -31,15 +84,16 @@ static inline int seccomp_mode(struct seccomp *s)
#include <linux/errno.h>

struct seccomp { };
+struct seccomp_filter { };

-#define secure_computing(x) do { } while (0)
+#define secure_computing(x) 0

static inline long prctl_get_seccomp(void)
{
return -EINVAL;
}

-static inline long prctl_set_seccomp(unsigned long arg2)
+static inline long prctl_set_seccomp(unsigned long arg2, char __user *arg3)
{
return -EINVAL;
}
@@ -48,7 +102,15 @@ static inline int seccomp_mode(struct seccomp *s)
{
return 0;
}
-
#endif /* CONFIG_SECCOMP */

+#ifdef CONFIG_SECCOMP_FILTER
+extern void put_seccomp_filter(struct seccomp_filter *);
+extern void get_seccomp_filter(struct seccomp_filter *);
+#else /* CONFIG_SECCOMP_FILTER */
+/* These macros consume the seccomp.filter reference. */
+#define put_seccomp_filter(_s) do { } while (0)
+#define get_seccomp_filter(_s) do { } while (0)
+#endif /* CONFIG_SECCOMP_FILTER */
+#endif /* __KERNEL__ */
#endif /* _LINUX_SECCOMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index b18c107..da3e489 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -34,6 +34,7 @@
#include <linux/cgroup.h>
#include <linux/security.h>
#include <linux/hugetlb.h>
+#include <linux/seccomp.h>
#include <linux/swap.h>
#include <linux/syscalls.h>
#include <linux/jiffies.h>
@@ -171,6 +172,7 @@ void free_task(struct task_struct *tsk)
free_thread_info(tsk->stack);
rt_mutex_debug_task_free(tsk);
ftrace_graph_exit_task(tsk);
+ put_seccomp_filter(tsk->seccomp.filter);
free_task_struct(tsk);
}
EXPORT_SYMBOL(free_task);
@@ -1166,6 +1168,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
goto fork_out;

ftrace_graph_init_task(p);
+ get_seccomp_filter(p->seccomp.filter);

rt_mutex_init_task(p);

diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e8d76c5..71df324 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -3,16 +3,272 @@
*
* Copyright 2004-2005 Andrea Arcangeli <***@cpushare.com>
*
- * This defines a simple but solid secure-computing mode.
+ * Copyright (C) 2012 Google, Inc.
+ * Will Drewry <***@chromium.org>
+ *
+ * This defines a simple but solid secure-computing facility.
+ *
+ * Mode 1 uses a fixed list of allowed system calls.
+ * Mode 2 allows user-defined system call filters in the form
+ * of Berkeley Packet Filters/Linux Socket Filters.
*/

+#include <linux/atomic.h>
#include <linux/audit.h>
-#include <linux/seccomp.h>
-#include <linux/sched.h>
#include <linux/compat.h>
+#include <linux/filter.h>
+#include <linux/sched.h>
+#include <linux/seccomp.h>
+#include <linux/security.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <linux/tracehook.h>
+#include <asm/syscall.h>

/* #define SECCOMP_DEBUG 1 */
-#define NR_SECCOMP_MODES 1
+
+#ifdef CONFIG_SECCOMP_FILTER
+/**
+ * struct seccomp_filter - container for seccomp BPF programs
+ *
+ * @usage: reference count to manage the object liftime.
+ * get/put helpers should be used when accessing an instance
+ * outside of a lifetime-guarded section. In general, this
+ * is only needed for handling filters shared across tasks.
+ * @prev: points to a previously installed, or inherited, filter
+ * @insns: the BPF program instructions to evaluate
+ * @len: the number of instructions in the program
+ *
+ * seccomp_filter objects are organized in a tree linked via the @prev
+ * pointer. For any task, it appears to be a singly-linked list starting
+ * with current->seccomp.filter, the most recently attached or inherited filter.
+ * However, multiple filters may share a @prev node, by way of fork(), which
+ * results in a unidirectional tree existing in memory. This is similar to
+ * how namespaces work.
+ *
+ * seccomp_filter objects should never be modified after being attached
+ * to a task_struct (other than @usage).
+ */
+struct seccomp_filter {
+ atomic_t usage;
+ struct seccomp_filter *prev;
+ unsigned short len; /* Instruction count */
+ struct sock_filter insns[];
+};
+
+/* Limit any path through the tree to 5 megabytes worth of instructions. */
+#define MAX_INSNS_PER_PATH ((5 << 20) / sizeof(struct sock_filter))
+
+static void seccomp_filter_log_failure(int syscall)
+{
+ int compat = 0;
+#ifdef CONFIG_COMPAT
+ compat = is_compat_task();
+#endif
+ pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
+ current->comm, task_pid_nr(current),
+ (compat ? "compat " : ""),
+ syscall, KSTK_EIP(current));
+}
+
+/**
+ * get_u32 - returns a u32 offset into data
+ * @data: a unsigned 64 bit value
+ * @index: 0 or 1 to return the first or second 32-bits
+ *
+ * This inline exists to hide the length of unsigned long.
+ * If a 32-bit unsigned long is passed in, it will be extended
+ * and the top 32-bits will be 0. If it is a 64-bit unsigned
+ * long, then whatever data is resident will be properly returned.
+ */
+static inline u32 get_u32(u64 data, int index)
+{
+ return ((u32 *)&data)[index];
+}
+
+/* Helper for bpf_load below. */
+#define BPF_DATA(_name) offsetof(struct seccomp_data, _name)
+/**
+ * bpf_load: checks and returns a pointer to the requested offset
+ * @nr: int syscall passed as a void * to bpf_run_filter
+ * @off: offset into struct seccomp_data to load from (must be u32 aligned)
+ * @size: number of bytes to load (must be 4 bytes)
+ * @buf: temporary storage supplied by bpf_run_filter (4 bytes)
+ *
+ * Returns a pointer to the loaded data (usually @buf).
+ * On failure, returns NULL.
+ */
+static void *bpf_load(const void *nr, int off, unsigned int size, void *buf)
+{
+ unsigned long value;
+ u32 *A = buf;
+
+ if (size != sizeof(u32) || off % sizeof(u32))
+ return NULL;
+
+ if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
+ struct pt_regs *regs = task_pt_regs(current);
+ int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
+ int index = (off % sizeof(u64)) ? 1 : 0;
+ syscall_get_arguments(current, regs, arg, 1, &value);
+ *A = get_u32(value, index);
+ } else if (off == BPF_DATA(nr)) {
+ *A = (u32)(uintptr_t)nr;
+ } else if (off == BPF_DATA(arch)) {
+ struct pt_regs *regs = task_pt_regs(current);
+ *A = syscall_get_arch(current, regs);
+ } else if (off == BPF_DATA(instruction_pointer)) {
+ *A = get_u32(KSTK_EIP(current), 0);
+ } else if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) {
+ *A = get_u32(KSTK_EIP(current), 1);
+ } else {
+ return NULL;
+ }
+ return buf;
+}
+
+/**
+ * seccomp_run_filters - evaluates all seccomp filters against @syscall
+ * @syscall: number of the current system call
+ *
+ * Returns valid seccomp BPF response codes.
+ */
+static u32 seccomp_run_filters(int syscall)
+{
+ struct seccomp_filter *f;
+ u32 ret = SECCOMP_RET_KILL;
+ static const struct bpf_load_fn fns = {
+ bpf_load,
+ sizeof(struct seccomp_data),
+ };
+ const void *sc_ptr = (const void *)(uintptr_t)syscall;
+
+ /*
+ * All filters are evaluated in order of youngest to oldest. The lowest
+ * BPF return value always takes priority.
+ */
+ for (f = current->seccomp.filter; f; f = f->prev) {
+ ret = bpf_run_filter(sc_ptr, f->insns, &fns);
+ if (ret != SECCOMP_RET_ALLOW)
+ break;
+ }
+ return ret;
+}
+
+/**
+ * seccomp_attach_filter: Attaches a seccomp filter to current.
+ * @fprog: BPF program to install
+ *
+ * Returns 0 on success or an errno on failure.
+ */
+static long seccomp_attach_filter(struct sock_fprog *fprog)
+{
+ struct seccomp_filter *filter;
+ unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
+ unsigned long total_insns = fprog->len;
+ long ret;
+
+ if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
+ return -EINVAL;
+
+ /* Walk the list to ensure the new instruction count is allowed. */
+ for (filter = current->seccomp.filter; filter; filter = filter->prev) {
+ if (total_insns > MAX_INSNS_PER_PATH - filter->len)
+ return -E2BIG;
+ total_insns += filter->len;
+ }
+
+ /* Allocate a new seccomp_filter */
+ filter = kzalloc(sizeof(struct seccomp_filter) + fp_size, GFP_KERNEL);
+ if (!filter)
+ return -ENOMEM;
+ atomic_set(&filter->usage, 1);
+ filter->len = fprog->len;
+
+ /* Copy the instructions from fprog. */
+ ret = -EFAULT;
+ if (copy_from_user(filter->insns, fprog->filter, fp_size))
+ goto out;
+
+ /* Check the fprog */
+ ret = bpf_chk_filter(filter->insns, filter->len, BPF_CHK_FLAGS_NO_SKB);
+ if (ret)
+ goto out;
+
+ /*
+ * Installing a seccomp filter requires that the task have
+ * CAP_SYS_ADMIN in its namespace or be running with no_new_privs.
+ * This avoids scenarios where unprivileged tasks can affect the
+ * behavior of privileged children.
+ */
+ ret = -EACCES;
+ if (!current->no_new_privs &&
+ security_capable_noaudit(current_cred(), current_user_ns(),
+ CAP_SYS_ADMIN) != 0)
+ goto out;
+
+ /*
+ * If there is an existing filter, make it the prev and don't drop its
+ * task reference.
+ */
+ filter->prev = current->seccomp.filter;
+ current->seccomp.filter = filter;
+ return 0;
+out:
+ put_seccomp_filter(filter); /* for get or task, on err */
+ return ret;
+}
+
+/**
+ * seccomp_attach_user_filter - attaches a user-supplied sock_fprog
+ * @user_filter: pointer to the user data containing a sock_fprog.
+ *
+ * Returns 0 on success and non-zero otherwise.
+ */
+long seccomp_attach_user_filter(char __user *user_filter)
+{
+ struct sock_fprog fprog;
+ long ret = -EFAULT;
+
+ if (!user_filter)
+ goto out;
+#ifdef CONFIG_COMPAT
+ if (is_compat_task()) {
+ struct compat_sock_fprog fprog32;
+ if (copy_from_user(&fprog32, user_filter, sizeof(fprog32)))
+ goto out;
+ fprog.len = fprog32.len;
+ fprog.filter = compat_ptr(fprog32.filter);
+ } else /* falls through to the if below. */
+#endif
+ if (copy_from_user(&fprog, user_filter, sizeof(fprog)))
+ goto out;
+ ret = seccomp_attach_filter(&fprog);
+out:
+ return ret;
+}
+
+/* get_seccomp_filter - increments the reference count of @orig. */
+void get_seccomp_filter(struct seccomp_filter *orig)
+{
+ if (!orig)
+ return;
+ /* Reference count is bounded by the number of total processes. */
+ atomic_inc(&orig->usage);
+}
+
+/* put_seccomp_filter - decrements the ref count of @orig and may free. */
+void put_seccomp_filter(struct seccomp_filter *orig)
+{
+ /* Clean up single-reference branches iteratively. */
+ while (orig && atomic_dec_and_test(&orig->usage)) {
+ struct seccomp_filter *freeme = orig;
+ orig = orig->prev;
+ kfree(freeme);
+ }
+}
+#endif /* CONFIG_SECCOMP_FILTER */

/*
* Secure computing mode 1 allows only read/write/exit/sigreturn.
@@ -34,10 +290,11 @@ static int mode1_syscalls_32[] = {
void __secure_computing(int this_syscall)
{
int mode = current->seccomp.mode;
- int * syscall;
+ int exit_code = SIGKILL;
+ int *syscall;

switch (mode) {
- case 1:
+ case SECCOMP_MODE_STRICT:
syscall = mode1_syscalls;
#ifdef CONFIG_COMPAT
if (is_compat_task())
@@ -48,6 +305,14 @@ void __secure_computing(int this_syscall)
return;
} while (*++syscall);
break;
+#ifdef CONFIG_SECCOMP_FILTER
+ case SECCOMP_MODE_FILTER:
+ if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
+ return;
+ seccomp_filter_log_failure(this_syscall);
+ exit_code = SIGSYS;
+ break;
+#endif
default:
BUG();
}
@@ -56,7 +321,7 @@ void __secure_computing(int this_syscall)
dump_stack();
#endif
audit_seccomp(this_syscall);
- do_exit(SIGKILL);
+ do_exit(exit_code);
}

long prctl_get_seccomp(void)
@@ -64,25 +329,48 @@ long prctl_get_seccomp(void)
return current->seccomp.mode;
}

-long prctl_set_seccomp(unsigned long seccomp_mode)
+/**
+ * prctl_set_seccomp: configures current->seccomp.mode
+ * @seccomp_mode: requested mode to use
+ * @filter: optional struct sock_fprog for use with SECCOMP_MODE_FILTER
+ *
+ * This function may be called repeatedly with a @seccomp_mode of
+ * SECCOMP_MODE_FILTER to install additional filters. Every filter
+ * successfully installed will be evaluated (in reverse order) for each system
+ * call the task makes.
+ *
+ * Once current->seccomp.mode is non-zero, it may not be changed.
+ *
+ * Returns 0 on success or -EINVAL on failure.
+ */
+long prctl_set_seccomp(unsigned long seccomp_mode, char __user *filter)
{
- long ret;
+ long ret = -EINVAL;

- /* can set it only once to be even more secure */
- ret = -EPERM;
- if (unlikely(current->seccomp.mode))
+ if (current->seccomp.mode &&
+ current->seccomp.mode != seccomp_mode)
goto out;

- ret = -EINVAL;
- if (seccomp_mode && seccomp_mode <= NR_SECCOMP_MODES) {
- current->seccomp.mode = seccomp_mode;
- set_thread_flag(TIF_SECCOMP);
+ switch (seccomp_mode) {
+ case SECCOMP_MODE_STRICT:
+ ret = 0;
#ifdef TIF_NOTSC
disable_TSC();
#endif
- ret = 0;
+ break;
+#ifdef CONFIG_SECCOMP_FILTER
+ case SECCOMP_MODE_FILTER:
+ ret = seccomp_attach_user_filter(filter);
+ if (ret)
+ goto out;
+ break;
+#endif
+ default:
+ goto out;
}

- out:
+ current->seccomp.mode = seccomp_mode;
+ set_thread_flag(TIF_SECCOMP);
+out:
return ret;
}
diff --git a/kernel/sys.c b/kernel/sys.c
index 4686119..0addc11 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1955,7 +1955,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
error = prctl_get_seccomp();
break;
case PR_SET_SECCOMP:
- error = prctl_set_seccomp(arg2);
+ error = prctl_set_seccomp(arg2, (char __user *)arg3);
break;
case PR_GET_TSC:
error = GET_TSC_CTL(arg2);
--
1.7.5.4
Indan Zupancic
2012-03-02 05:45:20 UTC
Permalink
Hello,

Mostly minor nitpicks.
Post by Will Drewry
diff --git a/arch/Kconfig b/arch/Kconfig
index e5d3778..7a696a9 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -233,4 +233,21 @@ config HAVE_CMPXCHG_DOUBLE
config ARCH_WANT_OLD_COMPAT_IPC
bool
+config HAVE_ARCH_SECCOMP_FILTER
+ bool
+ help
+ This symbol should be selected by an architecure if it provides
'architecture'
Post by Will Drewry
+ asm/syscall.h, specifically syscall_get_arguments() and
+ syscall_get_arch().
+
+config SECCOMP_FILTER
+ def_bool y
+ depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
+ help
+ Enable tasks to build secure computing environments defined
+ in terms of Berkeley Packet Filter programs which implement
+ task-defined system call filtering polices.
+
+ See Documentation/prctl/seccomp_filter.txt for details.
+
source "kernel/gcov/Kconfig"
diff --git a/include/linux/Kbuild b/include/linux/Kbuild
index ecd0afb..b1e6a74 100644
--- a/include/linux/Kbuild
+++ b/include/linux/Kbuild
@@ -332,6 +332,7 @@ header-y += scc.h
header-y += sched.h
header-y += screen_info.h
header-y += sdla.h
+header-y += seccomp.h
header-y += securebits.h
header-y += selinux_netlink.h
header-y += sem.h
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index d61f27f..6ef133c 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -1,14 +1,67 @@
#ifndef _LINUX_SECCOMP_H
#define _LINUX_SECCOMP_H
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+
+/* Valid values for seccomp.mode and prctl(PR_SET_SECCOMP, <mode>) */
+#define SECCOMP_MODE_DISABLED 0 /* seccomp is not in use. */
+#define SECCOMP_MODE_STRICT 1 /* uses hard-coded filter. */
+#define SECCOMP_MODE_FILTER 2 /* uses user-supplied filter. */
I'd add a space or tab to align the numbers, or rename DISABLED to NONE,
but that's just me I guess.
Post by Will Drewry
+
+/*
+ * All BPF programs must return a 32-bit value.
+ * The bottom 16-bits are reserved for future use.
+ * The upper 16-bits are ordered from least permissive values to most.
+ *
+ * The ordering ensures that a min_t() over composed return values always
+ * selects the least permissive choice.
+ */
+#define SECCOMP_RET_KILL 0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_ALLOW 0x7fff0000U /* allow */
+
+/* Masks for the return value sections. */
+#define SECCOMP_RET_ACTION 0xffff0000U
+#define SECCOMP_RET_DATA 0x0000ffffU
+
+/**
+ * struct seccomp_data - the format the BPF program executes over.
format?
Post by Will Drewry
+ * as defined in <linux/audit.h>.
If the vDSO is used this will always be the same, so what good is this?
I haven't gotten an answer to this yet.

That said, it's easy enough to add, I just don't see the point of it.
Post by Will Drewry
+ * regardless of the architecture.
+ */
+struct seccomp_data {
+ int nr;
+ __u32 arch;
+ __u64 instruction_pointer;
+ __u64 args[6];
+};
+#ifdef __KERNEL__
#ifdef CONFIG_SECCOMP
#include <linux/thread_info.h>
#include <asm/seccomp.h>
+struct seccomp_filter;
+/**
+ * struct seccomp - the state of a seccomp'ed process
+ *
+ * system calls available to a process.
+ * are allowed for a task.
+ *
I'd move 'as there' to the next line.
Post by Will Drewry
+ * is no locking.
+ */
struct seccomp {
int mode;
+ struct seccomp_filter *filter;
};
If the extra memory usage is a problem, a union could be used:

union seccomp {
long mode;
struct seccomp_filter *filter;
};

That way mode is 0 when it's disabled, 1 for SECCOMP_MODE_STRICT and
a pointer for SECCOMP_MODE_FILTER.
Post by Will Drewry
extern void __secure_computing(int);
@@ -19,7 +72,7 @@ static inline void secure_computing(int this_syscall)
}
extern long prctl_get_seccomp(void);
-extern long prctl_set_seccomp(unsigned long);
+extern long prctl_set_seccomp(unsigned long, char __user *);
static inline int seccomp_mode(struct seccomp *s)
{
@@ -31,15 +84,16 @@ static inline int seccomp_mode(struct seccomp *s)
#include <linux/errno.h>
struct seccomp { };
+struct seccomp_filter { };
-#define secure_computing(x) do { } while (0)
+#define secure_computing(x) 0
static inline long prctl_get_seccomp(void)
{
return -EINVAL;
}
-static inline long prctl_set_seccomp(unsigned long arg2)
+static inline long prctl_set_seccomp(unsigned long arg2, char __user *arg3)
{
return -EINVAL;
}
@@ -48,7 +102,15 @@ static inline int seccomp_mode(struct seccomp *s)
{
return 0;
}
-
#endif /* CONFIG_SECCOMP */
+#ifdef CONFIG_SECCOMP_FILTER
+extern void put_seccomp_filter(struct seccomp_filter *);
+extern void get_seccomp_filter(struct seccomp_filter *);
+#else /* CONFIG_SECCOMP_FILTER */
+/* These macros consume the seccomp.filter reference. */
I think it's cleaner to let put/get_seccomp_filter have struct task*
as a parameter instead of the filter.
Post by Will Drewry
+#define put_seccomp_filter(_s) do { } while (0)
+#define get_seccomp_filter(_s) do { } while (0)
+#endif /* CONFIG_SECCOMP_FILTER */
+#endif /* __KERNEL__ */
#endif /* _LINUX_SECCOMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index b18c107..da3e489 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -34,6 +34,7 @@
#include <linux/cgroup.h>
#include <linux/security.h>
#include <linux/hugetlb.h>
+#include <linux/seccomp.h>
#include <linux/swap.h>
#include <linux/syscalls.h>
#include <linux/jiffies.h>
@@ -171,6 +172,7 @@ void free_task(struct task_struct *tsk)
free_thread_info(tsk->stack);
rt_mutex_debug_task_free(tsk);
ftrace_graph_exit_task(tsk);
+ put_seccomp_filter(tsk->seccomp.filter);
free_task_struct(tsk);
}
EXPORT_SYMBOL(free_task);
@@ -1166,6 +1168,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
goto fork_out;
ftrace_graph_init_task(p);
+ get_seccomp_filter(p->seccomp.filter);
rt_mutex_init_task(p);
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e8d76c5..71df324 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -3,16 +3,272 @@
*
*
- * This defines a simple but solid secure-computing mode.
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This defines a simple but solid secure-computing facility.
+ *
+ * Mode 1 uses a fixed list of allowed system calls.
+ * Mode 2 allows user-defined system call filters in the form
+ * of Berkeley Packet Filters/Linux Socket Filters.
*/
+#include <linux/atomic.h>
#include <linux/audit.h>
-#include <linux/seccomp.h>
-#include <linux/sched.h>
#include <linux/compat.h>
+#include <linux/filter.h>
+#include <linux/sched.h>
+#include <linux/seccomp.h>
+#include <linux/security.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <linux/tracehook.h>
+#include <asm/syscall.h>
/* #define SECCOMP_DEBUG 1 */
-#define NR_SECCOMP_MODES 1
+
+#ifdef CONFIG_SECCOMP_FILTER
+/**
+ * struct seccomp_filter - container for seccomp BPF programs
+ *
+ * get/put helpers should be used when accessing an instance
+ * outside of a lifetime-guarded section. In general, this
+ * is only needed for handling filters shared across tasks.
+ *
+ * pointer. For any task, it appears to be a singly-linked list starting
+ * with current->seccomp.filter, the most recently attached or inherited filter.
+ * results in a unidirectional tree existing in memory. This is similar to
+ * how namespaces work.
+ *
+ * seccomp_filter objects should never be modified after being attached
+ */
+struct seccomp_filter {
+ atomic_t usage;
+ struct seccomp_filter *prev;
+ unsigned short len; /* Instruction count */
To avoid 4 bytes of the padding on 64-bit, you could move len to just after
atomic_t, which is an int.
Post by Will Drewry
+ struct sock_filter insns[];
+};
+
+/* Limit any path through the tree to 5 megabytes worth of instructions. */
+#define MAX_INSNS_PER_PATH ((5 << 20) / sizeof(struct sock_filter))
I think this is too high; that's 160 filters with max instruction length.

I vote for a 1MB or lower limit, that's 32 filters with max size.
Post by Will Drewry
+
+static void seccomp_filter_log_failure(int syscall)
+{
+ int compat = 0;
+#ifdef CONFIG_COMPAT
+ compat = is_compat_task();
+#endif
+ pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
+ current->comm, task_pid_nr(current),
+ (compat ? "compat " : ""),
+ syscall, KSTK_EIP(current));
+}
Hm, I thought everyone agreed this wasn't really necessary?
Post by Will Drewry
+
+/**
+ * get_u32 - returns a u32 offset into data
It doesn't return an offset.
Well, yes, it's a u64.
Post by Will Drewry
+ *
+ * This inline exists to hide the length of unsigned long.
and to get the upper half of it.
Post by Will Drewry
+ * If a 32-bit unsigned long is passed in, it will be extended
+ * and the top 32-bits will be 0. If it is a 64-bit unsigned
+ * long, then whatever data is resident will be properly returned.
+ */
Personally I would leave out the whole comment, because reading the
two lines of code is faster and gives the same info.
Post by Will Drewry
+static inline u32 get_u32(u64 data, int index)
+{
+ return ((u32 *)&data)[index];
+}
+
+/* Helper for bpf_load below. */
+#define BPF_DATA(_name) offsetof(struct seccomp_data, _name)
+/**
+ * bpf_load: checks and returns a pointer to the requested offset
+ *
+ * On failure, returns NULL.
+ */
+static void *bpf_load(const void *nr, int off, unsigned int size, void *buf)
+{
+ unsigned long value;
+ u32 *A = buf;
+
+ if (size != sizeof(u32) || off % sizeof(u32))
+ return NULL;
+
+ if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
+ struct pt_regs *regs = task_pt_regs(current);
+ int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
+ int index = (off % sizeof(u64)) ? 1 : 0;
+ syscall_get_arguments(current, regs, arg, 1, &value);
+ *A = get_u32(value, index);
+ } else if (off == BPF_DATA(nr)) {
+ *A = (u32)(uintptr_t)nr;
+ } else if (off == BPF_DATA(arch)) {
+ struct pt_regs *regs = task_pt_regs(current);
+ *A = syscall_get_arch(current, regs);
+ } else if (off == BPF_DATA(instruction_pointer)) {
+ *A = get_u32(KSTK_EIP(current), 0);
+ } else if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) {
+ *A = get_u32(KSTK_EIP(current), 1);
+ } else {
+ return NULL;
+ }
+ return buf;
+}
The interface doesn't match, the alignment doesn't match, and the LD
instructions couldn't be reused, with code size and even slowdowns in
the networking filter code as a result. I have the impression this
isn't the best approach after all.

I'm going to take a look at implementing it as ancillary data accesses
instead of generic loads. That hopefully reduces the code churn in filter.c
a lot.
Post by Will Drewry
+
+/**
+ *
+ * Returns valid seccomp BPF response codes.
+ */
+static u32 seccomp_run_filters(int syscall)
+{
+ struct seccomp_filter *f;
+ u32 ret = SECCOMP_RET_KILL;
+ static const struct bpf_load_fn fns = {
+ bpf_load,
+ sizeof(struct seccomp_data),
+ };
+ const void *sc_ptr = (const void *)(uintptr_t)syscall;
+
+ /*
+ * All filters are evaluated in order of youngest to oldest. The lowest
+ * BPF return value always takes priority.
+ */
+ for (f = current->seccomp.filter; f; f = f->prev) {
+ ret = bpf_run_filter(sc_ptr, f->insns, &fns);
+ if (ret != SECCOMP_RET_ALLOW)
+ break;
+ }
+ return ret;
+}
+
+/**
+ * seccomp_attach_filter: Attaches a seccomp filter to current.
+ *
+ * Returns 0 on success or an errno on failure.
+ */
+static long seccomp_attach_filter(struct sock_fprog *fprog)
+{
+ struct seccomp_filter *filter;
+ unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
+ unsigned long total_insns = fprog->len;
+ long ret;
+
+ if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
+ return -EINVAL;
+
+ /* Walk the list to ensure the new instruction count is allowed. */
+ for (filter = current->seccomp.filter; filter; filter = filter->prev) {
+ if (total_insns > MAX_INSNS_PER_PATH - filter->len)
+ return -E2BIG;
+ total_insns += filter->len;
+ }
Now you go over the limit once before that if check is ever true, and it can
only be true for the last filter. So take it out of the loop; that makes the
code clearer, faster and more correct.

I'm not sure E2BIG ("Argument list too long") is the best return value.
ENOMEM seems more fitting, because it's not this filter's instruction
length that's too long, it's the combined filters that are taking too
much memory.
Post by Will Drewry
+
+ /* Allocate a new seccomp_filter */
+ filter = kzalloc(sizeof(struct seccomp_filter) + fp_size, GFP_KERNEL);
+ if (!filter)
+ return -ENOMEM;
+ atomic_set(&filter->usage, 1);
+ filter->len = fprog->len;
+
+ /* Copy the instructions from fprog. */
+ ret = -EFAULT;
+ if (copy_from_user(filter->insns, fprog->filter, fp_size))
+ goto out;
+
+ /* Check the fprog */
+ ret = bpf_chk_filter(filter->insns, filter->len, BPF_CHK_FLAGS_NO_SKB);
+ if (ret)
+ goto out;
+
+ /*
+ * Installing a seccomp filter requires that the task have
+ * CAP_SYS_ADMIN in its namespace or be running with no_new_privs.
+ * This avoids scenarios where unprivileged tasks can affect the
+ * behavior of privileged children.
+ */
+ ret = -EACCES;
+ if (!current->no_new_privs &&
+ security_capable_noaudit(current_cred(), current_user_ns(),
+ CAP_SYS_ADMIN) != 0)
+ goto out;
I think this check should be done first, before allocating any memory.
Post by Will Drewry
+
+ /*
+ * If there is an existing filter, make it the prev and don't drop its
+ * task reference.
+ */
+ filter->prev = current->seccomp.filter;
+ current->seccomp.filter = filter;
+ return 0;
+ put_seccomp_filter(filter); /* for get or task, on err */
Let put_seccomp_filter() take a struct task* as argument and instead of
calling it here, just do a kfree() directly. That's more symmetrical too.
Post by Will Drewry
+ return ret;
+}
+
+/**
+ * seccomp_attach_user_filter - attaches a user-supplied sock_fprog
+ *
+ * Returns 0 on success and non-zero otherwise.
+ */
+long seccomp_attach_user_filter(char __user *user_filter)
+{
+ struct sock_fprog fprog;
+ long ret = -EFAULT;
+
+ if (!user_filter)
+ goto out;
Isn't this checked by copy_from_user() already?
Post by Will Drewry
+#ifdef CONFIG_COMPAT
+ if (is_compat_task()) {
+ struct compat_sock_fprog fprog32;
+ if (copy_from_user(&fprog32, user_filter, sizeof(fprog32)))
+ goto out;
+ fprog.len = fprog32.len;
+ fprog.filter = compat_ptr(fprog32.filter);
+ } else /* falls through to the if below. */
+#endif
+ if (copy_from_user(&fprog, user_filter, sizeof(fprog)))
+ goto out;
+ ret = seccomp_attach_filter(&fprog);
+ return ret;
+}
+
+void get_seccomp_filter(struct seccomp_filter *orig)
+{
+ if (!orig)
+ return;
+ /* Reference count is bounded by the number of total processes. */
+ atomic_inc(&orig->usage);
+}
+
+void put_seccomp_filter(struct seccomp_filter *orig)
+{
+ /* Clean up single-reference branches iteratively. */
+ while (orig && atomic_dec_and_test(&orig->usage)) {
+ struct seccomp_filter *freeme = orig;
+ orig = orig->prev;
+ kfree(freeme);
+ }
+}
+#endif /* CONFIG_SECCOMP_FILTER */
/*
* Secure computing mode 1 allows only read/write/exit/sigreturn.
@@ -34,10 +290,11 @@ static int mode1_syscalls_32[] = {
void __secure_computing(int this_syscall)
{
int mode = current->seccomp.mode;
- int * syscall;
+ int exit_code = SIGKILL;
+ int *syscall;
switch (mode) {
syscall = mode1_syscalls;
#ifdef CONFIG_COMPAT
if (is_compat_task())
@@ -48,6 +305,14 @@ void __secure_computing(int this_syscall)
return;
} while (*++syscall);
break;
+#ifdef CONFIG_SECCOMP_FILTER
+ if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
+ return;
+ seccomp_filter_log_failure(this_syscall);
+ exit_code = SIGSYS;
+ break;
+#endif
BUG();
}
@@ -56,7 +321,7 @@ void __secure_computing(int this_syscall)
dump_stack();
#endif
audit_seccomp(this_syscall);
- do_exit(SIGKILL);
+ do_exit(exit_code);
}
long prctl_get_seccomp(void)
@@ -64,25 +329,48 @@ long prctl_get_seccomp(void)
return current->seccomp.mode;
}
-long prctl_set_seccomp(unsigned long seccomp_mode)
+/**
+ * prctl_set_seccomp: configures current->seccomp.mode
+ *
+ * SECCOMP_MODE_FILTER to install additional filters. Every filter
+ * successfully installed will be evaluated (in reverse order) for each system
I'd move the 'each system' to the next line.
Post by Will Drewry
+ * call the task makes.
+ *
+ * Once current->seccomp.mode is non-zero, it may not be changed.
+ *
+ * Returns 0 on success or -EINVAL on failure.
+ */
+long prctl_set_seccomp(unsigned long seccomp_mode, char __user *filter)
{
- long ret;
+ long ret = -EINVAL;
- /* can set it only once to be even more secure */
- ret = -EPERM;
- if (unlikely(current->seccomp.mode))
+ if (current->seccomp.mode &&
+ current->seccomp.mode != seccomp_mode)
goto out;
- ret = -EINVAL;
- if (seccomp_mode && seccomp_mode <= NR_SECCOMP_MODES) {
- current->seccomp.mode = seccomp_mode;
- set_thread_flag(TIF_SECCOMP);
+ switch (seccomp_mode) {
+ ret = 0;
#ifdef TIF_NOTSC
disable_TSC();
#endif
- ret = 0;
+ break;
+#ifdef CONFIG_SECCOMP_FILTER
+ ret = seccomp_attach_user_filter(filter);
+ if (ret)
+ goto out;
+ break;
+#endif
+ goto out;
}
+ current->seccomp.mode = seccomp_mode;
+ set_thread_flag(TIF_SECCOMP);
return ret;
}
diff --git a/kernel/sys.c b/kernel/sys.c
index 4686119..0addc11 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1955,7 +1955,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2,
unsigned long, arg3,
Post by Will Drewry
error = prctl_get_seccomp();
break;
- error = prctl_set_seccomp(arg2);
+ error = prctl_set_seccomp(arg2, (char __user *)arg3);
break;
error = GET_TSC_CTL(arg2);
--
Reviewed-by: Indan Zupancic <***@nul.nu>

Greetings,

Indan
H. Peter Anvin
2012-03-02 05:52:59 UTC
Permalink
Post by Indan Zupancic
Post by Will Drewry
+ * as defined in <linux/audit.h>.
If the vDSO is used this will always be the same, so what good is this?
I haven't gotten an answer to this yet.
And if it isn't, or you're on an architecture which doesn't use the vdso
as the launching point, it's not. You seem to be unable to look outside
your own particular use cases, but it is very likely that the same
oddball cases which do mixed-mode programming are ones for which this
kind of filtering facility would be extremely useful -- Pin is a great
example.

-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
Indan Zupancic
2012-03-02 06:43:50 UTC
Permalink
Post by H. Peter Anvin
Post by Indan Zupancic
Post by Will Drewry
+ * as defined in <linux/audit.h>.
If the vDSO is used this will always be the same, so what good is this?
I haven't gotten an answer to this yet.
And if it isn't, or you're on an architecture which doesn't use the vdso
as the launching point, it's not.
True, but then what?
Post by H. Peter Anvin
You seem to be unable to look outside
your own particular use cases, but it is very likely that the same
oddball cases which do mixed-mode programming are ones for which this
kind of filtering facility would be extremely useful
The filtering code has no way of reading the instruction, so it can't
know if it's a good or bad one. And the mode is passed via 'arch' already,
which is the proper way of checking this.

To properly protect the ptrace user against unexpected modes it's better
to provide a way to the filter to know the task mode instead of trying
to figure it out from IP.

I'm not saying having the IP is never useful, I actually use it in my
ptrace code (and have to add IP checks to handle those odd mixed mode
cases). I just don't see how it can be used by a BPF filter.
Post by H. Peter Anvin
-- Pin is a great example.
Is that http://www.pintool.org/?

Can you explain how knowing the IP is useful for Pin?

All I am asking for is just one use case for providing the IP. Is that
asking for too much?

Because the only one I can think of creates a false sense of security:

"Oh, the IP comes from a trusted code area, so it must be fine."

And the problem with that is that the IP doesn't say anything about the
call path, only where the last instruction was. Address randomization
helps a little bit, but it's not a guarantee.

Greetings,

Indan


--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
H. Peter Anvin
2012-03-02 06:55:30 UTC
Permalink
Post by Indan Zupancic
Post by H. Peter Anvin
Post by Indan Zupancic
Post by Will Drewry
+ * as defined in <linux/audit.h>.
If the vDSO is used this will always be the same, so what good is this?
I haven't gotten an answer to this yet.
And if it isn't, or you're on an architecture which doesn't use the vdso
as the launching point, it's not.
True, but then what?
Ok, fail on my part - I misread the above to refer to @arch, not
@instruction_pointer.
Post by Indan Zupancic
Post by H. Peter Anvin
-- Pin is a great example.
Is that http://www.pintool.org/?
Can you explain how knowing the IP is useful for Pin?
All I am asking for is just one use case for providing the IP. Is that
asking for too much?
However, it still applies. For something like Pin, Pin may want to trap
on all or a subset from the instrumented program, while the
instrumentation code -- which lives in the same address space -- needs
to execute those same instructions.

Yes, it's useless for *security* (unless perhaps if you keep very strict
tabs on the flow of control by using debug registers, dynamic
translation or whatnot), but it can be highly useful for
*instrumentation*, where you want to analyze the behavior of a
non-malicious program.

-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
Indan Zupancic
2012-03-02 08:12:58 UTC
Permalink
Post by H. Peter Anvin
@instruction_pointer.
Ah, that explains a lot.
Post by H. Peter Anvin
Post by Indan Zupancic
Post by H. Peter Anvin
-- Pin is a great example.
Is that http://www.pintool.org/?
Can you explain how knowing the IP is useful for Pin?
All I am asking for is just one use case for providing the IP. Is that
asking for too much?
However, it still applies. For something like Pin, Pin may want to trap
on all or a subset from the instrumented program, while the
instrumentation code -- which lives in the same address space -- needs
to execute those same instructions.
Yes, it's useless for *security* (unless perhaps if you keep very strict
tabs on the flow of control by using debug registers, dynamic
translation or whatnot), but it can be highly useful for
*instrumentation*, where you want to analyze the behavior of a
non-malicious program.
That is a good use case indeed, I'm convinced.

Thanks,

Indan


Will Drewry
2012-03-02 18:44:41 UTC
Permalink
Post by Indan Zupancic
Hello,
Mostly minor nitpicks.
Thanks!
Post by Indan Zupancic
Post by Will Drewry
diff --git a/arch/Kconfig b/arch/Kconfig
index e5d3778..7a696a9 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -233,4 +233,21 @@ config HAVE_CMPXCHG_DOUBLE
config ARCH_WANT_OLD_COMPAT_IPC
	bool
+config HAVE_ARCH_SECCOMP_FILTER
+	bool
+	help
+	  This symbol should be selected by an architecure if it provides
Post by Indan Zupancic
'architecture'
Oops
Post by Indan Zupancic
Post by Will Drewry
+	  asm/syscall.h, specifically syscall_get_arguments() and
+	  syscall_get_arch().
+
+config SECCOMP_FILTER
+	def_bool y
+	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
+	help
+	  Enable tasks to build secure computing environments defined
+	  in terms of Berkeley Packet Filter programs which implement
+	  task-defined system call filtering polices.
+
+	  See Documentation/prctl/seccomp_filter.txt for details.
+
source "kernel/gcov/Kconfig"
diff --git a/include/linux/Kbuild b/include/linux/Kbuild
index ecd0afb..b1e6a74 100644
--- a/include/linux/Kbuild
+++ b/include/linux/Kbuild
@@ -332,6 +332,7 @@ header-y += scc.h
header-y += sched.h
header-y += screen_info.h
header-y += sdla.h
+header-y += seccomp.h
header-y += securebits.h
header-y += selinux_netlink.h
header-y += sem.h
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index d61f27f..6ef133c 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -1,14 +1,67 @@
#ifndef _LINUX_SECCOMP_H
#define _LINUX_SECCOMP_H
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+
+/* Valid values for seccomp.mode and prctl(PR_SET_SECCOMP, <mode>) */
+#define SECCOMP_MODE_DISABLED        0 /* seccomp is not in use. */
+#define SECCOMP_MODE_STRICT  1 /* uses hard-coded filter. */
+#define SECCOMP_MODE_FILTER  2 /* uses user-supplied filter. */
I'd add a space or tab to align the numbers, or rename DISABLED to NONE,
but that's just me I guess.
Post by Will Drewry
+
+/*
+ * All BPF programs must return a 32-bit value.
+ * The bottom 16-bits are reserved for future use.
+ * The upper 16-bits are ordered from least permissive values to most.
+ *
+ * The ordering ensures that a min_t() over composed return values always
+ * selects the least permissive choice.
+ */
+#define SECCOMP_RET_KILL     0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_ALLOW    0x7fff0000U /* allow */
+
+/* Masks for the return value sections. */
+#define SECCOMP_RET_ACTION   0xffff0000U
+#define SECCOMP_RET_DATA     0x0000ffffU
+
+/**
+ * struct seccomp_data - the format the BPF program executes over.
format?
I'll reword - I meant that this is the format, or layout, of the data.
Post by Indan Zupancic
Post by Will Drewry
+ *        as defined in <linux/audit.h>.
If the vDSO is used this will always be the same, so what good is this?
Post by Indan Zupancic
I haven't gotten an answer to this yet.
That said, it's easy enough to add, I just don't see the point of it.
[refers to rest of thread :]
ues
Post by Indan Zupancic
Post by Will Drewry
+ *        regardless of the architecture.
+ */
+struct seccomp_data {
+	int nr;
+	__u32 arch;
+	__u64 instruction_pointer;
+	__u64 args[6];
+};
+#ifdef __KERNEL__
#ifdef CONFIG_SECCOMP
#include <linux/thread_info.h>
#include <asm/seccomp.h>
+struct seccomp_filter;
+/**
+ * struct seccomp - the state of a seccomp'ed process
+ *
+ *         system calls available to a process.
lls
Post by Indan Zupancic
Post by Will Drewry
+ *          are allowed for a task.
+ *
t of current as there
Post by Indan Zupancic
I'd move 'as there' to the next line.
Done
Post by Indan Zupancic
Post by Will Drewry
+ *          is no locking.
+ */
struct seccomp {
	int mode;
+	struct seccomp_filter *filter;
};
union seccomp {
	long mode;
	struct seccomp_filter *filter;
};
That way mode is 0 when it's disabled, 1 for SECCOMP_MODE_STRICT and
a pointer for SECCOMP_MODE_FILTER.
I haven't heard any specific complaints about the extra unsigned long,
but if so, we could do this. It would make me a bit uneasy even
though it should be safe in practice.
Post by Indan Zupancic
Post by Will Drewry
extern void __secure_computing(int);
@@ -19,7 +72,7 @@ static inline void secure_computing(int this_syscall)
}
extern long prctl_get_seccomp(void);
-extern long prctl_set_seccomp(unsigned long);
+extern long prctl_set_seccomp(unsigned long, char __user *);
static inline int seccomp_mode(struct seccomp *s)
{
@@ -31,15 +84,16 @@ static inline int seccomp_mode(struct seccomp *s)
#include <linux/errno.h>
struct seccomp { };
+struct seccomp_filter { };
-#define secure_computing(x) do { } while (0)
+#define secure_computing(x) 0
static inline long prctl_get_seccomp(void)
{
	return -EINVAL;
}
-static inline long prctl_set_seccomp(unsigned long arg2)
+static inline long prctl_set_seccomp(unsigned long arg2, char __user *arg3)
Post by Indan Zupancic
Post by Will Drewry
{
	return -EINVAL;
}
@@ -48,7 +102,15 @@ static inline int seccomp_mode(struct seccomp *s)
Post by Indan Zupancic
Post by Will Drewry
{
	return 0;
}
-
#endif /* CONFIG_SECCOMP */
+#ifdef CONFIG_SECCOMP_FILTER
+extern void put_seccomp_filter(struct seccomp_filter *);
+extern void get_seccomp_filter(struct seccomp_filter *);
+#else  /* CONFIG_SECCOMP_FILTER */
+/* These macros consume the seccomp.filter reference. */
I think it's cleaner to let put/get_seccomp_filter have struct task*
as a parameter instead of the filter.
Post by Will Drewry
+#define put_seccomp_filter(_s) do { } while (0)
+#define get_seccomp_filter(_s) do { } while (0)
+#endif /* CONFIG_SECCOMP_FILTER */
+#endif /* __KERNEL__ */
#endif /* _LINUX_SECCOMP_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index b18c107..da3e489 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -34,6 +34,7 @@
#include <linux/cgroup.h>
#include <linux/security.h>
#include <linux/hugetlb.h>
+#include <linux/seccomp.h>
#include <linux/swap.h>
#include <linux/syscalls.h>
#include <linux/jiffies.h>
@@ -171,6 +172,7 @@ void free_task(struct task_struct *tsk)
	free_thread_info(tsk->stack);
	rt_mutex_debug_task_free(tsk);
	ftrace_graph_exit_task(tsk);
+	put_seccomp_filter(tsk->seccomp.filter);
	free_task_struct(tsk);
}
EXPORT_SYMBOL(free_task);
@@ -1166,6 +1168,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
Post by Indan Zupancic
Post by Will Drewry
		goto fork_out;
	ftrace_graph_init_task(p);
+	get_seccomp_filter(p->seccomp.filter);
	rt_mutex_init_task(p);
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e8d76c5..71df324 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -3,16 +3,272 @@
 *
 *
- * This defines a simple but solid secure-computing mode.
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This defines a simple but solid secure-computing facility.
+ *
+ * Mode 1 uses a fixed list of allowed system calls.
+ * Mode 2 allows user-defined system call filters in the form
+ *        of Berkeley Packet Filters/Linux Socket Filters.
 */
+#include <linux/atomic.h>
#include <linux/audit.h>
-#include <linux/seccomp.h>
-#include <linux/sched.h>
#include <linux/compat.h>
+#include <linux/filter.h>
+#include <linux/sched.h>
+#include <linux/seccomp.h>
+#include <linux/security.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <linux/tracehook.h>
+#include <asm/syscall.h>
/* #define SECCOMP_DEBUG 1 */
-#define NR_SECCOMP_MODES 1
+
+#ifdef CONFIG_SECCOMP_FILTER
+/**
+ * struct seccomp_filter - container for seccomp BPF programs
+ *
+ *        get/put helpers should be used when accessing an instance
+ *        outside of a lifetime-guarded section.  In general, this
+ *        is only needed for handling filters shared across tasks.
+ *
+ * pointer.  For any task, it appears to be a singly-linked list starting
+ * with current->seccomp.filter, the most recently attached or inherited filter.
+ * results in a unidirectional tree existing in memory.  This is similar to
+ * how namespaces work.
+ *
+ * seccomp_filter objects should never be modified after being attached
+ */
+struct seccomp_filter {
+	atomic_t usage;
+	struct seccomp_filter *prev;
+	unsigned short len;  /* Instruction count */
Post by Indan Zupancic
To avoid 4 bytes of padding on 64-bit, you could move len to just after
atomic_t, which is an int.
Good call.
Post by Indan Zupancic
Post by Will Drewry
+	struct sock_filter insns[];
+};
+
+/* Limit any path through the tree to 5 megabytes worth of instructions. */
Post by Indan Zupancic
Post by Will Drewry
+#define MAX_INSNS_PER_PATH ((5 << 20) / sizeof(struct sock_filter))
I think this is too high; that's 160 filters with max instruction length.
Post by Indan Zupancic
I vote for a 1MB or lower limit, that's 32 filters with max size.
1MB is fine. My goal is just to protect the system from total
resource exhaustion.
Post by Indan Zupancic
Post by Will Drewry
+
+static void seccomp_filter_log_failure(int syscall)
+{
+	int compat = 0;
+#ifdef CONFIG_COMPAT
+	compat = is_compat_task();
+#endif
+	pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
+		current->comm, task_pid_nr(current),
+		(compat ? "compat " : ""),
+		syscall, KSTK_EIP(current));
+}
Hm, I thought everyone agreed this wasn't really necessary?
Yep - instead of just fixing it up, I included Kees's patch in the patch series.
Post by Indan Zupancic
Post by Will Drewry
+
+/**
+ * get_u32 - returns a u32 offset into data
It doesn't return an offset.
True enough
Post by Indan Zupancic
Well, yes, it's a u64.
I'm glad we agree :)
Post by Indan Zupancic
Post by Will Drewry
+ *
+ * This inline exists to hide the length of unsigned long.
and to get the upper half of it.
Post by Will Drewry
+ * If a 32-bit unsigned long is passed in, it will be extended
+ * and the top 32-bits will be 0. If it is a 64-bit unsigned
+ * long, then whatever data is resident will be properly returned.
+ */
Personally I would leave out the whole comment, because reading the
two lines of code is faster and gives the same info.
Sounds good.
Post by Indan Zupancic
Post by Will Drewry
+static inline u32 get_u32(u64 data, int index)
+{
+	return ((u32 *)&data)[index];
+}
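[Editorial aside: the split can be exercised outside the kernel. A minimal userspace sketch of the same helper, with the kernel's u32/u64 mapped to stdint types and the pointer cast replaced by an aliasing-safe memcpy; word order follows host endianness, exactly as in the kernel version:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Userspace restatement of get_u32(): view a u64 as two u32 words in
 * host memory order. */
static inline uint32_t get_u32(uint64_t data, int index)
{
	uint32_t words[2];

	memcpy(words, &data, sizeof(data));	/* aliasing-safe form of the cast */
	return words[index];
}
```

On little-endian machines (the x86 targets benchmarked above), index 0 yields the low word and index 1 the high word.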
+
+/* Helper for bpf_load below. */
+#define BPF_DATA(_name) offsetof(struct seccomp_data, _name)
+/**
+ * bpf_load: checks and returns a pointer to the requested offset
+ *
+ * On failure, returns NULL.
+ */
+static void *bpf_load(const void *nr, int off, unsigned int size, void *buf)
+{
+	unsigned long value;
+	u32 *A = buf;
+
+	if (size != sizeof(u32) || off % sizeof(u32))
+		return NULL;
+
+	if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
+		struct pt_regs *regs = task_pt_regs(current);
+		int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
+		int index = (off % sizeof(u64)) ? 1 : 0;
+		syscall_get_arguments(current, regs, arg, 1, &value);
+		*A = get_u32(value, index);
+	} else if (off == BPF_DATA(nr)) {
+		*A = (u32)(uintptr_t)nr;
+	} else if (off == BPF_DATA(arch)) {
+		struct pt_regs *regs = task_pt_regs(current);
+		*A = syscall_get_arch(current, regs);
+	} else if (off == BPF_DATA(instruction_pointer)) {
+		*A = get_u32(KSTK_EIP(current), 0);
+	} else if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) {
+		*A = get_u32(KSTK_EIP(current), 1);
+	} else {
+		return NULL;
+	}
+	return buf;
+}
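[Editorial aside: the offsets BPF_DATA() resolves to are fixed by the seccomp_data layout quoted earlier, so filter authors can rely on them. A quick userspace check, mirroring the proposed struct with stdint types, of where each field lands:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace mirror of the proposed struct seccomp_data layout. */
struct seccomp_data {
	int nr;
	uint32_t arch;
	uint64_t instruction_pointer;
	uint64_t args[6];
};

#define BPF_DATA(_name) offsetof(struct seccomp_data, _name)
```

With natural alignment, nr sits at 0, arch at 4, instruction_pointer at 8, and args[n] at 16 + 8*n, which is exactly the range bpf_load() tests with `off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])`.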
The interface doesn't match, the alignment doesn't match, and the LD
instructions couldn't be reused, with code size and even slowdowns in
the networking filter code as a result. I have the impression this
isn't the best approach after all.
I'm going to take a look at implementing it as ancillary data accesses
instead of generic loads. That hopefully reduces the code churn in filter.c
a lot.
Your alternative approach is nice. I guess it all depends on what
makes sense to netdev too. Perhaps Eric Dumazet or someone from
netdev can weigh in on it.
Post by Indan Zupancic
Post by Will Drewry
+
+/**
+ *
+ * Returns valid seccomp BPF response codes.
+ */
+static u32 seccomp_run_filters(int syscall)
+{
+	struct seccomp_filter *f;
+	u32 ret = SECCOMP_RET_KILL;
+	static const struct bpf_load_fn fns = {
+		bpf_load,
+		sizeof(struct seccomp_data),
+	};
+	const void *sc_ptr = (const void *)(uintptr_t)syscall;
+
+	/*
+	 * All filters are evaluated in order of youngest to oldest. The lowest
+	 * BPF return value always takes priority.
+	 */
+	for (f = current->seccomp.filter; f; f = f->prev) {
+		ret = bpf_run_filter(sc_ptr, f->insns, &fns);
+		if (ret != SECCOMP_RET_ALLOW)
+			break;
+	}
+	return ret;
+}
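[Editorial aside: the walk is simple enough to model outside the kernel. A userspace sketch of the youngest-to-oldest traversal with early exit on the first non-ALLOW answer; the struct and constants here are illustrative stand-ins, not the kernel's:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the seccomp return codes (values are illustrative only). */
#define RET_KILL  0u
#define RET_ALLOW 1u

struct filter {
	uint32_t verdict;	/* what this filter would return */
	struct filter *prev;	/* older (earlier-attached) filter */
};

/* Walk from the newest filter toward the oldest; the first verdict that
 * is not ALLOW wins, and an empty list defaults to KILL. */
static uint32_t run_filters(const struct filter *newest)
{
	uint32_t ret = RET_KILL;
	const struct filter *f;

	for (f = newest; f; f = f->prev) {
		ret = f->verdict;
		if (ret != RET_ALLOW)
			break;
	}
	return ret;
}
```

Note the consequence the thread discusses: every attached filter must agree before a call is allowed, so layering filters can only tighten policy.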
+
+/**
+ * seccomp_attach_filter: Attaches a seccomp filter to current.
+ *
+ * Returns 0 on success or an errno on failure.
+ */
+static long seccomp_attach_filter(struct sock_fprog *fprog)
+{
+	struct seccomp_filter *filter;
+	unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
+	unsigned long total_insns = fprog->len;
+	long ret;
+
+	if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
+		return -EINVAL;
+
+	/* Walk the list to ensure the new instruction count is allowed. */
+	for (filter = current->seccomp.filter; filter; filter = filter->prev) {
+		if (total_insns > MAX_INSNS_PER_PATH - filter->len)
+			return -E2BIG;
+		total_insns += filter->len;
+	}
Now you go over the limit once before that if check is ever true, and it can
only be true for the last filter. So take it out of the loop; that makes the
code clearer, faster and more correct.
I'll make it clearer. Correctness should be maintained in either case.
Post by Indan Zupancic
I'm not sure E2BIG ("Argument list too long") is the best return value.
ENOMEM seems more fitting, because it's not this filter's instruction
length that's too long, it's the combined filters that are taking too
much memory.
Ah yeah - wrong one. What about ENOSPC? It'd be nice to
differentiate this from an actual out of memory condition even though
there is no seccomp filter "device". (Alternatively EACCES, EBUSY,
EFBIG, ?)
Post by Indan Zupancic
Post by Will Drewry
+
+	/* Allocate a new seccomp_filter */
+	filter = kzalloc(sizeof(struct seccomp_filter) + fp_size, GFP_KERNEL);
+	if (!filter)
+		return -ENOMEM;
+	atomic_set(&filter->usage, 1);
+	filter->len = fprog->len;
+
+	/* Copy the instructions from fprog. */
+	ret = -EFAULT;
+	if (copy_from_user(filter->insns, fprog->filter, fp_size))
+		goto out;
+
+	/* Check the fprog */
+	ret = bpf_chk_filter(filter->insns, filter->len, BPF_CHK_FLAGS_NO_SKB);
+	if (ret)
+		goto out;
+
+	/*
+	 * Installing a seccomp filter requires that the task have
+	 * CAP_SYS_ADMIN in its namespace or be running with no_new_privs.
+	 * This avoids scenarios where unprivileged tasks can affect the
+	 * behavior of privileged children.
+	 */
+	ret = -EACCES;
+	if (!current->no_new_privs &&
+	    security_capable_noaudit(current_cred(), current_user_ns(),
+				     CAP_SYS_ADMIN) != 0)
+		goto out;
I think this check should be done first, before allocating any memory.
Makes sense.
Post by Indan Zupancic
Post by Will Drewry
+
+	/*
+	 * If there is an existing filter, make it the prev and don't drop its
+	 * task reference.
+	 */
+	filter->prev = current->seccomp.filter;
+	current->seccomp.filter = filter;
+	return 0;
+out:
+	put_seccomp_filter(filter);  /* for get or task, on err */
Post by Indan Zupancic
Let put_seccomp_filter() take a struct task* as argument and instead of
calling it here, just do a kfree() directly. That's more symmetrical too.
That'll let them be static inlines instead of macros then, so that's good.
+	return ret;
+}
+
+/**
+ * seccomp_attach_user_filter - attaches a user-supplied sock_fprog
+ *
+ * Returns 0 on success and non-zero otherwise.
+ */
+long seccomp_attach_user_filter(char __user *user_filter)
+{
+	struct sock_fprog fprog;
+	long ret = -EFAULT;
+
+	if (!user_filter)
+		goto out;
Isn't this checked by copy_from_user() already?
Not explicitly. I think it's safer to check before calling copy_from_user.
Post by Indan Zupancic
Post by Will Drewry
+#ifdef CONFIG_COMPAT
+	if (is_compat_task()) {
+		struct compat_sock_fprog fprog32;
+		if (copy_from_user(&fprog32, user_filter, sizeof(fprog32)))
+			goto out;
+		fprog.len = fprog32.len;
+		fprog.filter = compat_ptr(fprog32.filter);
+	} else /* falls through to the if below. */
+#endif
+	if (copy_from_user(&fprog, user_filter, sizeof(fprog)))
+		goto out;
+	ret = seccomp_attach_filter(&fprog);
+out:
+	return ret;
+}
+
+void get_seccomp_filter(struct seccomp_filter *orig)
+{
+	if (!orig)
+		return;
+	/* Reference count is bounded by the number of total processes. */
+	atomic_inc(&orig->usage);
+}
+
+void put_seccomp_filter(struct seccomp_filter *orig)
+{
+	/* Clean up single-reference branches iteratively. */
+	while (orig && atomic_dec_and_test(&orig->usage)) {
+		struct seccomp_filter *freeme = orig;
+		orig = orig->prev;
+		kfree(freeme);
+	}
+}
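[Editorial aside: the iterative teardown is easy to demonstrate in miniature. A userspace sketch with a bare int standing in for atomic_t (so single-threaded only) of how dropping one reference frees every single-reference link but stops at a still-shared ancestor:]

```c
#include <assert.h>
#include <stdlib.h>

struct filter {
	int usage;		/* stand-in for atomic_t; not thread-safe */
	struct filter *prev;
};

static int live;		/* allocated-filter count, for the demo only */

static struct filter *new_filter(struct filter *prev)
{
	struct filter *f = malloc(sizeof(*f));

	f->usage = 1;
	f->prev = prev;
	live++;
	return f;
}

/* Mirror of put_seccomp_filter(): free single-reference links iteratively,
 * stopping at the first ancestor that still has other holders. */
static void put_filter(struct filter *orig)
{
	while (orig && --orig->usage == 0) {
		struct filter *freeme = orig;

		orig = orig->prev;
		free(freeme);
		live--;
	}
}
```

The iterative loop matters because the chain can be long; a recursive free of a deep filter list could otherwise exhaust kernel stack.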
+#endif	/* CONFIG_SECCOMP_FILTER */
/*
 * Secure computing mode 1 allows only read/write/exit/sigreturn.
@@ -34,10 +290,11 @@ static int mode1_syscalls_32[] = {
void __secure_computing(int this_syscall)
{
	int mode = current->seccomp.mode;
-	int * syscall;
+	int exit_code = SIGKILL;
+	int *syscall;
	switch (mode) {
		syscall = mode1_syscalls;
#ifdef CONFIG_COMPAT
		if (is_compat_task())
@@ -48,6 +305,14 @@ void __secure_computing(int this_syscall)
			return;
		} while (*++syscall);
		break;
+#ifdef CONFIG_SECCOMP_FILTER
+		if (seccomp_run_filters(this_syscall) == SECCOMP_RET_ALLOW)
+			return;
+		seccomp_filter_log_failure(this_syscall);
+		exit_code = SIGSYS;
+		break;
+#endif
		BUG();
	}
@@ -56,7 +321,7 @@ void __secure_computing(int this_syscall)
	dump_stack();
#endif
	audit_seccomp(this_syscall);
-	do_exit(SIGKILL);
+	do_exit(exit_code);
}
long prctl_get_seccomp(void)
@@ -64,25 +329,48 @@ long prctl_get_seccomp(void)
	return current->seccomp.mode;
}
-long prctl_set_seccomp(unsigned long seccomp_mode)
+/**
+ * prctl_set_seccomp: configures current->seccomp.mode
+ *
+ * SECCOMP_MODE_FILTER to install additional filters.  Every filter
+ * successfully installed will be evaluated (in reverse order) for each system
I'd move the 'each system' to the next line.
+ * call the task makes.
+ *
+ * Once current->seccomp.mode is non-zero, it may not be changed.
+ *
+ * Returns 0 on success or -EINVAL on failure.
+ */
+long prctl_set_seccomp(unsigned long seccomp_mode, char __user *filter)
{
-	long ret;
+	long ret = -EINVAL;
-	/* can set it only once to be even more secure */
-	ret = -EPERM;
-	if (unlikely(current->seccomp.mode))
+	if (current->seccomp.mode &&
+	    current->seccomp.mode != seccomp_mode)
		goto out;
-	ret = -EINVAL;
-	if (seccomp_mode && seccomp_mode <= NR_SECCOMP_MODES) {
-		current->seccomp.mode = seccomp_mode;
-		set_thread_flag(TIF_SECCOMP);
+	switch (seccomp_mode) {
+		ret = 0;
#ifdef TIF_NOTSC
		disable_TSC();
#endif
-		ret = 0;
+		break;
+#ifdef CONFIG_SECCOMP_FILTER
+		ret = seccomp_attach_user_filter(filter);
+		if (ret)
+			goto out;
+		break;
+#endif
+		goto out;
	}
+	current->seccomp.mode = seccomp_mode;
+	set_thread_flag(TIF_SECCOMP);
	return ret;
}
diff --git a/kernel/sys.c b/kernel/sys.c
index 4686119..0addc11 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1955,7 +1955,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2,
		unsigned long, arg3,
			error = prctl_get_seccomp();
			break;
-			error = prctl_set_seccomp(arg2);
+			error = prctl_set_seccomp(arg2, (char __user *)arg3);
			break;
			error = GET_TSC_CTL(arg2);
--
Thanks!
will
Will Drewry
2012-02-29 23:53:38 UTC
Permalink
Enable support for seccomp filter on x86:
- asm/tracehook.h exists
- syscall_get_arguments() works
- syscall_rollback() works
- ptrace_report_syscall() works
- secure_computing() return value is honored (see below)

This also adds support for honoring the return
value from secure_computing().

SECCOMP_RET_TRACE and SECCOMP_RET_TRAP may result in seccomp needing to
skip a system call without killing the process. This is done by
returning a non-zero (-1) value from secure_computing. This change
makes x86 respect that return value.

To ensure that minimal kernel code is exposed, a non-zero return value
results in an immediate return to user space (with an invalid syscall
number).

Signed-off-by: Will Drewry <***@chromium.org>
---
arch/x86/Kconfig | 1 +
arch/x86/kernel/ptrace.c | 7 ++++++-
2 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f50b52a..fe35491 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -81,6 +81,7 @@ config X86
select CLKEVT_I8253
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select GENERIC_IOMAP
+ select HAVE_ARCH_SECCOMP_FILTER

config INSTRUCTION_DECODER
def_bool (KPROBES || PERF_EVENTS || UPROBES)
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 78f05e4..cdce22c 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -1381,7 +1381,11 @@ long syscall_trace_enter(struct pt_regs *regs)
regs->flags |= X86_EFLAGS_TF;

/* do the secure computing check first */
- secure_computing(regs->orig_ax);
+ if (secure_computing(regs->orig_ax)) {
+ /* seccomp failures shouldn't expose any additional code. */
+ ret = -1L;
+ goto out;
+ }

if (unlikely(test_thread_flag(TIF_SYSCALL_EMU)))
ret = -1L;
@@ -1406,6 +1410,7 @@ long syscall_trace_enter(struct pt_regs *regs)
regs->dx, regs->r10);
#endif

+out:
return ret ?: regs->orig_ax;
}
--
1.7.5.4
Will Drewry
2012-02-29 23:53:40 UTC
Permalink
From: Kees Cook <***@chromium.org>

This consolidates the seccomp filter error logging path and adds more
details to the audit log.

Acked-by: Will Drewry <***@chromium.org>
Signed-off-by: Kees Cook <***@chromium.org>
---
include/linux/audit.h | 8 ++++----
kernel/auditsc.c | 9 +++++++--
kernel/seccomp.c | 15 +--------------
3 files changed, 12 insertions(+), 20 deletions(-)

diff --git a/include/linux/audit.h b/include/linux/audit.h
index 9ff7a2c..5aac29b 100644
--- a/include/linux/audit.h
+++ b/include/linux/audit.h
@@ -463,7 +463,7 @@ extern void audit_putname(const char *name);
extern void __audit_inode(const char *name, const struct dentry *dentry);
extern void __audit_inode_child(const struct dentry *dentry,
const struct inode *parent);
-extern void __audit_seccomp(unsigned long syscall);
+extern void __audit_seccomp(unsigned long syscall, long signr);
extern void __audit_ptrace(struct task_struct *t);

static inline int audit_dummy_context(void)
@@ -508,10 +508,10 @@ static inline void audit_inode_child(const struct dentry *dentry,
}
void audit_core_dumps(long signr);

-static inline void audit_seccomp(unsigned long syscall)
+static inline void audit_seccomp(unsigned long syscall, long signr)
{
if (unlikely(!audit_dummy_context()))
- __audit_seccomp(syscall);
+ __audit_seccomp(syscall, signr);
}

static inline void audit_ptrace(struct task_struct *t)
@@ -634,7 +634,7 @@ extern int audit_signals;
#define audit_inode(n,d) do { (void)(d); } while (0)
#define audit_inode_child(i,p) do { ; } while (0)
#define audit_core_dumps(i) do { ; } while (0)
-#define audit_seccomp(i) do { ; } while (0)
+#define audit_seccomp(i, s) do { ; } while (0)
#define auditsc_get_stamp(c,t,s) (0)
#define audit_get_loginuid(t) (-1)
#define audit_get_sessionid(t) (-1)
diff --git a/kernel/auditsc.c b/kernel/auditsc.c
index af1de0f..74652fe 100644
--- a/kernel/auditsc.c
+++ b/kernel/auditsc.c
@@ -67,6 +67,7 @@
#include <linux/syscalls.h>
#include <linux/capability.h>
#include <linux/fs_struct.h>
+#include <linux/compat.h>

#include "audit.h"

@@ -2710,13 +2711,17 @@ void audit_core_dumps(long signr)
audit_log_end(ab);
}

-void __audit_seccomp(unsigned long syscall)
+void __audit_seccomp(unsigned long syscall, long signr)
{
struct audit_buffer *ab;

ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_ANOM_ABEND);
- audit_log_abend(ab, "seccomp", SIGKILL);
+ audit_log_abend(ab, "seccomp", signr);
audit_log_format(ab, " syscall=%ld", syscall);
+#ifdef CONFIG_COMPAT
+ audit_log_format(ab, " compat=%d", is_compat_task());
+#endif
+ audit_log_format(ab, " ip=0x%lx", KSTK_EIP(current));
audit_log_end(ab);
}

diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 1064327..78ecd6f 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -60,18 +60,6 @@ struct seccomp_filter {
/* Limit any path through the tree to 5 megabytes worth of instructions. */
#define MAX_INSNS_PER_PATH ((5 << 20) / sizeof(struct sock_filter))

-static void seccomp_filter_log_failure(int syscall)
-{
- int compat = 0;
-#ifdef CONFIG_COMPAT
- compat = is_compat_task();
-#endif
- pr_info("%s[%d]: %ssystem call %d blocked at 0x%lx\n",
- current->comm, task_pid_nr(current),
- (compat ? "compat " : ""),
- syscall, KSTK_EIP(current));
-}
-
/**
* get_u32 - returns a u32 offset into data
* @data: a unsigned 64 bit value
@@ -370,7 +358,6 @@ int __secure_computing_int(int this_syscall)
default:
break;
}
- seccomp_filter_log_failure(this_syscall);
exit_code = SIGSYS;
break;
}
@@ -382,7 +369,7 @@ int __secure_computing_int(int this_syscall)
#ifdef SECCOMP_DEBUG
dump_stack();
#endif
- audit_seccomp(this_syscall);
+ audit_seccomp(this_syscall, exit_code);
do_exit(exit_code);
return -1; /* never reached */
}
--
1.7.5.4
Will Drewry
2012-02-29 23:53:39 UTC
Permalink
Documents how system call filtering using Berkeley Packet
Filter programs works and how it may be used.
Includes an example for x86 and a semi-generic
example using a macro-based code generator.

v12: - comment on the ptrace_event use
- update arch support comment
- note the behavior of SECCOMP_RET_DATA when there are multiple filters
(***@chromium.org)
- lots of samples/ clean up incl 64-bit bpf-direct support
(***@chromium.org)
- rebase to linux-next
v11: - overhaul return value language, updates (***@chromium.org)
- comment on do_exit(SIGSYS)
v10: - update for SIGSYS
- update for new seccomp_data layout
- update for ptrace option use
v9: - updated bpf-direct.c for SIGILL
v8: - add PR_SET_NO_NEW_PRIVS to the samples.
v7: - updated for all the new stuff in v7: TRAP, TRACE
- only talk about PR_SET_SECCOMP now
- fixed bad JLE32 check (***@linux.vnet.ibm.com)
- adds dropper.c: a simple system call disabler
v6: - tweak the language to note the requirement of
PR_SET_NO_NEW_PRIVS being called prior to use. (***@mit.edu)
v5: - update sample to use system call arguments
- adds a "fancy" example using a macro-based generator
- cleaned up bpf in the sample
- update docs to mention arguments
- fix prctl value (***@redhat.com)
- language cleanup (***@xenotime.net)
v4: - update for no_new_privs use
- minor tweaks
v3: - call out BPF <-> Berkeley Packet Filter (***@xenotime.net)
- document use of tentative always-unprivileged
- guard sample compilation for i386 and x86_64
v2: - move code to samples (***@lwn.net)

Signed-off-by: Will Drewry <***@chromium.org>
---
Documentation/prctl/seccomp_filter.txt | 156 +++++++++++++++++++++
samples/Makefile | 2 +-
samples/seccomp/Makefile | 38 +++++
samples/seccomp/bpf-direct.c | 176 +++++++++++++++++++++++
samples/seccomp/bpf-fancy.c | 102 ++++++++++++++
samples/seccomp/bpf-helper.c | 89 ++++++++++++
samples/seccomp/bpf-helper.h | 238 ++++++++++++++++++++++++++++++++
samples/seccomp/dropper.c | 68 +++++++++
8 files changed, 868 insertions(+), 1 deletions(-)
create mode 100644 Documentation/prctl/seccomp_filter.txt
create mode 100644 samples/seccomp/Makefile
create mode 100644 samples/seccomp/bpf-direct.c
create mode 100644 samples/seccomp/bpf-fancy.c
create mode 100644 samples/seccomp/bpf-helper.c
create mode 100644 samples/seccomp/bpf-helper.h
create mode 100644 samples/seccomp/dropper.c

diff --git a/Documentation/prctl/seccomp_filter.txt b/Documentation/prctl/seccomp_filter.txt
new file mode 100644
index 0000000..4aa3e78
--- /dev/null
+++ b/Documentation/prctl/seccomp_filter.txt
@@ -0,0 +1,156 @@
+ SECure COMPuting with filters
+ =============================
+
+Introduction
+------------
+
+A large number of system calls are exposed to every userland process
+with many of them going unused for the entire lifetime of the process.
+As system calls change and mature, bugs are found and eradicated. A
+certain subset of userland applications benefit by having a reduced set
+of available system calls. The resulting set reduces the total kernel
+surface exposed to the application. System call filtering is meant for
+use with those applications.
+
+Seccomp filtering provides a means for a process to specify a filter for
+incoming system calls. The filter is expressed as a Berkeley Packet
+Filter (BPF) program, as with socket filters, except that the data
+operated on is related to the system call being made: system call
+number and the system call arguments. This allows for expressive
+filtering of system calls using a filter program language with a long
+history of being exposed to userland and a straightforward data set.
+
+Additionally, BPF makes it impossible for users of seccomp to fall prey
+to time-of-check-time-of-use (TOCTOU) attacks that are common in system
+call interposition frameworks. BPF programs may not dereference
+pointers which constrains all filters to solely evaluating the system
+call arguments directly.
+
+What it isn't
+-------------
+
+System call filtering isn't a sandbox. It provides a clearly defined
+mechanism for minimizing the exposed kernel surface. It is meant to be
+a tool for sandbox developers to use. Beyond that, policy for logical
+behavior and information flow should be managed with a combination of
+other system hardening techniques and, potentially, an LSM of your
+choosing. Expressive, dynamic filters provide further options down this
+path (avoiding pathological sizes or selecting which of the multiplexed
+system calls in socketcall() is allowed, for instance) which could be
+construed, incorrectly, as a more complete sandboxing solution.
+
+Usage
+-----
+
+An additional seccomp mode is added and is enabled using the same
+prctl(2) call as the strict seccomp. If the architecture has
+CONFIG_HAVE_ARCH_SECCOMP_FILTER, then filters may be added as below:
+
+PR_SET_SECCOMP:
+ Now takes an additional argument which specifies a new filter
+ using a BPF program.
+ The BPF program will be executed over struct seccomp_data
+ reflecting the system call number, arguments, and other
+ metadata. The BPF program must then return one of the
+ acceptable values to inform the kernel which action should be
+ taken.
+
+ Usage:
+ prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, prog);
+
+ The 'prog' argument is a pointer to a struct sock_fprog which
+ will contain the filter program. If the program is invalid, the
+ call will return -1 and set errno to EINVAL.
+
+ Note, is_compat_task is also tracked for the @prog. This means
+ that once set the calling task will have all of its system calls
+ blocked if it switches its system call ABI.
+
+ If fork/clone and execve are allowed by @prog, any child
+ processes will be constrained to the same filters and system
+ call ABI as the parent.
+
+ Prior to use, the task must call prctl(PR_SET_NO_NEW_PRIVS, 1) or
+ run with CAP_SYS_ADMIN privileges in its namespace. If these are not
+ true, -EACCES will be returned. This requirement ensures that filter
+ programs cannot be applied to child processes with greater privileges
+ than the task that installed them.
+
+ Additionally, if prctl(2) is allowed by the attached filter,
+ additional filters may be layered on. This increases evaluation
+ time but allows the attack surface to be reduced further during
+ process execution.
+
+The above call returns 0 on success and non-zero on error.
+
+Return values
+-------------
+A seccomp filter may return any of the following values. If multiple
+filters exist, the return value for the evaluation of a given system
+call will always use the highest-precedence value. (For example,
+SECCOMP_RET_KILL will always take precedence.)
+
+In precedence order, they are:
+
+SECCOMP_RET_KILL:
+ Results in the task exiting immediately without executing the
+ system call. The exit status of the task (status & 0x7f) will
+ be SIGSYS, not SIGKILL.
+
+SECCOMP_RET_TRAP:
+ Results in the kernel sending a SIGSYS signal to the triggering
+ task without executing the system call. The kernel will roll
+ back the register state to just before the system call entry so
+ that a signal handler in the task can inspect the
+ ucontext_t->uc_mcontext registers and emulate system call
+ success or failure upon return from the signal handler.
+
+ The SECCOMP_RET_DATA portion of the return value will be passed
+ as si_errno.
+
+ SIGSYS triggered by seccomp will have a si_code of SYS_SECCOMP.
+
+SECCOMP_RET_ERRNO:
+ Results in the lower 16 bits of the return value being passed
+ to userland as the errno without executing the system call.
+
+SECCOMP_RET_TRACE:
+ When returned, this value will cause the kernel to attempt to
+ notify a ptrace()-based tracer prior to executing the system
+ call. If there is no tracer present, -ENOSYS is returned to
+ userland and the system call is not executed.
+
+ A tracer will be notified if it requests PTRACE_O_TRACESECCOMP
+ using ptrace(PTRACE_SETOPTIONS). The tracer will be notified
+ of a PTRACE_EVENT_SECCOMP and the SECCOMP_RET_DATA portion of
+ the BPF program return value will be available to the tracer
+ via PTRACE_GETEVENTMSG.
+
+SECCOMP_RET_ALLOW:
+ Results in the system call being executed.
+
+Precedence is only determined using the SECCOMP_RET_ACTION mask. When
+multiple filters return values of the same precedence, only the
+SECCOMP_RET_DATA from the most recently installed filter will be
+returned.
+
+
+Example
+-------
+
+The samples/seccomp/ directory contains both an x86-specific example
+and a more generic example of a higher level macro interface for BPF
+program generation.
+
+Adding architecture support
+---------------------------
+
+See arch/Kconfig for the authoritative requirements. In general, if an
+architecture supports both ptrace_event and seccomp, it will be able to
+support seccomp filter with minor fixups: SIGSYS support and seccomp
+return value checking. It then only needs to add
+CONFIG_HAVE_ARCH_SECCOMP_FILTER to its arch-specific Kconfig.
diff --git a/samples/Makefile b/samples/Makefile
index 2f75851..5ef08bb 100644
--- a/samples/Makefile
+++ b/samples/Makefile
@@ -1,4 +1,4 @@
# Makefile for Linux samples code

obj-$(CONFIG_SAMPLES) += kobject/ kprobes/ tracepoints/ trace_events/ \
- hw_breakpoint/ kfifo/ kdb/ hidraw/ rpmsg/
+ hw_breakpoint/ kfifo/ kdb/ hidraw/ rpmsg/ seccomp/
diff --git a/samples/seccomp/Makefile b/samples/seccomp/Makefile
new file mode 100644
index 0000000..e8fe0f5
--- /dev/null
+++ b/samples/seccomp/Makefile
@@ -0,0 +1,38 @@
+# kbuild trick to avoid linker error. Can be omitted if a module is built.
+obj- := dummy.o
+
+hostprogs-$(CONFIG_SECCOMP) := bpf-fancy dropper
+bpf-fancy-objs := bpf-fancy.o bpf-helper.o
+
+HOSTCFLAGS_bpf-fancy.o += -I$(objtree)/usr/include
+HOSTCFLAGS_bpf-fancy.o += -idirafter $(objtree)/include
+HOSTCFLAGS_bpf-helper.o += -I$(objtree)/usr/include
+HOSTCFLAGS_bpf-helper.o += -idirafter $(objtree)/include
+
+HOSTCFLAGS_dropper.o += -I$(objtree)/usr/include
+HOSTCFLAGS_dropper.o += -idirafter $(objtree)/include
+dropper-objs := dropper.o
+
+# bpf-direct.c is x86-only.
+ifeq ($(SRCARCH),x86)
+# List of programs to build
+hostprogs-$(CONFIG_SECCOMP) += bpf-direct
+bpf-direct-objs := bpf-direct.o
+endif
+
+HOSTCFLAGS_bpf-direct.o += -I$(objtree)/usr/include
+HOSTCFLAGS_bpf-direct.o += -idirafter $(objtree)/include
+
+# Try to match the kernel target.
+ifeq ($(CONFIG_64BIT),)
+HOSTCFLAGS_bpf-direct.o += -m32
+HOSTCFLAGS_dropper.o += -m32
+HOSTCFLAGS_bpf-helper.o += -m32
+HOSTCFLAGS_bpf-fancy.o += -m32
+HOSTLOADLIBES_bpf-direct += -m32
+HOSTLOADLIBES_bpf-fancy += -m32
+HOSTLOADLIBES_dropper += -m32
+endif
+
+# Tell kbuild to always build the programs
+always := $(hostprogs-y)
diff --git a/samples/seccomp/bpf-direct.c b/samples/seccomp/bpf-direct.c
new file mode 100644
index 0000000..f1567ad
--- /dev/null
+++ b/samples/seccomp/bpf-direct.c
@@ -0,0 +1,176 @@
+/*
+ * Seccomp filter example for x86 (32-bit and 64-bit) with BPF macros
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-***@chromium.org>
+ * Author: Will Drewry <***@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_SET_SECCOMP, 2, ...).
+ */
+#define __USE_GNU 1
+#define _GNU_SOURCE 1
+
+#include <linux/types.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <linux/unistd.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stddef.h>
+#include <string.h>
+#include <sys/prctl.h>
+#include <unistd.h>
+
+#define syscall_arg(_n) (offsetof(struct seccomp_data, args[_n]))
+#define syscall_nr (offsetof(struct seccomp_data, nr))
+
+#if defined(__i386__)
+#define REG_RESULT REG_EAX
+#define REG_SYSCALL REG_EAX
+#define REG_ARG0 REG_EBX
+#define REG_ARG1 REG_ECX
+#define REG_ARG2 REG_EDX
+#define REG_ARG3 REG_ESI
+#define REG_ARG4 REG_EDI
+#define REG_ARG5 REG_EBP
+#elif defined(__x86_64__)
+#define REG_RESULT REG_RAX
+#define REG_SYSCALL REG_RAX
+#define REG_ARG0 REG_RDI
+#define REG_ARG1 REG_RSI
+#define REG_ARG2 REG_RDX
+#define REG_ARG3 REG_R10
+#define REG_ARG4 REG_R8
+#define REG_ARG5 REG_R9
+#else
+#error Unsupported platform
+#endif
+
+#ifndef PR_SET_NO_NEW_PRIVS
+#define PR_SET_NO_NEW_PRIVS 36
+#endif
+
+#ifndef SYS_SECCOMP
+#define SYS_SECCOMP 1
+#endif
+
+static void emulator(int nr, siginfo_t *info, void *void_context)
+{
+ ucontext_t *ctx = (ucontext_t *)(void_context);
+ int syscall;
+ char *buf;
+ ssize_t bytes;
+ size_t len;
+ if (info->si_code != SYS_SECCOMP)
+ return;
+ if (!ctx)
+ return;
+ syscall = ctx->uc_mcontext.gregs[REG_SYSCALL];
+ buf = (char *) ctx->uc_mcontext.gregs[REG_ARG1];
+ len = (size_t) ctx->uc_mcontext.gregs[REG_ARG2];
+
+ if (syscall != __NR_write)
+ return;
+ if (ctx->uc_mcontext.gregs[REG_ARG0] != STDERR_FILENO)
+ return;
+ /* Redirect stderr messages to stdout. Doesn't handle EINTR, etc */
+ ctx->uc_mcontext.gregs[REG_RESULT] = -1;
+ if (write(STDOUT_FILENO, "[ERR] ", 6) > 0) {
+ bytes = write(STDOUT_FILENO, buf, len);
+ ctx->uc_mcontext.gregs[REG_RESULT] = bytes;
+ }
+ return;
+}
+
+static int install_emulator(void)
+{
+ struct sigaction act;
+ sigset_t mask;
+ memset(&act, 0, sizeof(act));
+ sigemptyset(&mask);
+ sigaddset(&mask, SIGSYS);
+
+ act.sa_sigaction = &emulator;
+ act.sa_flags = SA_SIGINFO;
+ if (sigaction(SIGSYS, &act, NULL) < 0) {
+ perror("sigaction");
+ return -1;
+ }
+ if (sigprocmask(SIG_UNBLOCK, &mask, NULL)) {
+ perror("sigprocmask");
+ return -1;
+ }
+ return 0;
+}
+
+static int install_filter(void)
+{
+ struct sock_filter filter[] = {
+ /* Grab the system call number */
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_nr),
+ /* Jump table for the allowed syscalls */
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_rt_sigreturn, 0, 1),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+#ifdef __NR_sigreturn
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_sigreturn, 0, 1),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+#endif
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit_group, 0, 1),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit, 0, 1),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_read, 1, 0),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_write, 3, 2),
+
+ /* Check that read is only using stdin. */
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDIN_FILENO, 4, 0),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
+
+ /* Check that write is only using stdout */
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDOUT_FILENO, 1, 0),
+ /* Trap attempts to write to stderr */
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDERR_FILENO, 1, 2),
+
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRAP),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
+ };
+ struct sock_fprog prog = {
+ .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
+ .filter = filter,
+ };
+
+ if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+ perror("prctl(NO_NEW_PRIVS)");
+ return 1;
+ }
+
+
+ if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
+ perror("prctl");
+ return 1;
+ }
+ return 0;
+}
+
+#define payload(_c) (_c), sizeof((_c))
+int main(int argc, char **argv)
+{
+ char buf[4096];
+ ssize_t bytes = 0;
+ if (install_emulator())
+ return 1;
+ if (install_filter())
+ return 1;
+ syscall(__NR_write, STDOUT_FILENO,
+ payload("OHAI! WHAT IS YOUR NAME? "));
+ bytes = syscall(__NR_read, STDIN_FILENO, buf, sizeof(buf));
+ syscall(__NR_write, STDOUT_FILENO, payload("HELLO, "));
+ syscall(__NR_write, STDOUT_FILENO, buf, bytes);
+ syscall(__NR_write, STDERR_FILENO,
+ payload("Error message going to STDERR\n"));
+ return 0;
+}
diff --git a/samples/seccomp/bpf-fancy.c b/samples/seccomp/bpf-fancy.c
new file mode 100644
index 0000000..bf1f6b5
--- /dev/null
+++ b/samples/seccomp/bpf-fancy.c
@@ -0,0 +1,102 @@
+/*
+ * Seccomp BPF example using a macro-based generator.
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-***@chromium.org>
+ * Author: Will Drewry <***@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_ATTACH_SECCOMP_FILTER).
+ */
+
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <linux/unistd.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/prctl.h>
+#include <unistd.h>
+
+#include "bpf-helper.h"
+
+#ifndef PR_SET_NO_NEW_PRIVS
+#define PR_SET_NO_NEW_PRIVS 36
+#endif
+
+int main(int argc, char **argv)
+{
+ struct bpf_labels l;
+ static const char msg1[] = "Please type something: ";
+ static const char msg2[] = "You typed: ";
+ char buf[256];
+ struct sock_filter filter[] = {
+ /* TODO: LOAD_SYSCALL_NR(arch) and enforce an arch */
+ LOAD_SYSCALL_NR,
+ SYSCALL(__NR_exit, ALLOW),
+ SYSCALL(__NR_exit_group, ALLOW),
+ SYSCALL(__NR_write, JUMP(&l, write_fd)),
+ SYSCALL(__NR_read, JUMP(&l, read)),
+ DENY, /* Don't passthrough into a label */
+
+ LABEL(&l, read),
+ ARG(0),
+ JNE(STDIN_FILENO, DENY),
+ ARG(1),
+ JNE((unsigned long)buf, DENY),
+ ARG(2),
+ JGE(sizeof(buf), DENY),
+ ALLOW,
+
+ LABEL(&l, write_fd),
+ ARG(0),
+ JEQ(STDOUT_FILENO, JUMP(&l, write_buf)),
+ JEQ(STDERR_FILENO, JUMP(&l, write_buf)),
+ DENY,
+
+ LABEL(&l, write_buf),
+ ARG(1),
+ JEQ((unsigned long)msg1, JUMP(&l, msg1_len)),
+ JEQ((unsigned long)msg2, JUMP(&l, msg2_len)),
+ JEQ((unsigned long)buf, JUMP(&l, buf_len)),
+ DENY,
+
+ LABEL(&l, msg1_len),
+ ARG(2),
+ JLT(sizeof(msg1), ALLOW),
+ DENY,
+
+ LABEL(&l, msg2_len),
+ ARG(2),
+ JLT(sizeof(msg2), ALLOW),
+ DENY,
+
+ LABEL(&l, buf_len),
+ ARG(2),
+ JLT(sizeof(buf), ALLOW),
+ DENY,
+ };
+ struct sock_fprog prog = {
+ .filter = filter,
+ .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
+ };
+ ssize_t bytes;
+ bpf_resolve_jumps(&l, filter, sizeof(filter)/sizeof(*filter));
+
+ if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
+ perror("prctl(NO_NEW_PRIVS)");
+ return 1;
+ }
+
+ if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
+ perror("prctl(SECCOMP)");
+ return 1;
+ }
+ syscall(__NR_write, STDOUT_FILENO, msg1, strlen(msg1));
+ bytes = syscall(__NR_read, STDIN_FILENO, buf, sizeof(buf)-1);
+ bytes = (bytes > 0 ? bytes : 0);
+ syscall(__NR_write, STDERR_FILENO, msg2, strlen(msg2));
+ syscall(__NR_write, STDERR_FILENO, buf, bytes);
+ /* Now get killed */
+ syscall(__NR_write, STDERR_FILENO, msg2, strlen(msg2)+2);
+ return 0;
+}
diff --git a/samples/seccomp/bpf-helper.c b/samples/seccomp/bpf-helper.c
new file mode 100644
index 0000000..579cfe3
--- /dev/null
+++ b/samples/seccomp/bpf-helper.c
@@ -0,0 +1,89 @@
+/*
+ * Seccomp BPF helper functions
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-***@chromium.org>
+ * Author: Will Drewry <***@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_ATTACH_SECCOMP_FILTER).
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "bpf-helper.h"
+
+int bpf_resolve_jumps(struct bpf_labels *labels,
+ struct sock_filter *filter, size_t count)
+{
+ struct sock_filter *begin = filter;
+ __u8 insn = count - 1;
+
+ if (count < 1)
+ return -1;
+ /*
+ * Walk it once, backwards, to build the label table and do fixups.
+ * Since backward jumps are disallowed by BPF, this is easy.
+ */
+ filter += insn;
+ for (; filter >= begin; --insn, --filter) {
+ if (filter->code != (BPF_JMP+BPF_JA))
+ continue;
+ switch ((filter->jt<<8)|filter->jf) {
+ case (JUMP_JT<<8)|JUMP_JF:
+ if (labels->labels[filter->k].location == 0xffffffff) {
+ fprintf(stderr, "Unresolved label: '%s'\n",
+ labels->labels[filter->k].label);
+ return 1;
+ }
+ filter->k = labels->labels[filter->k].location -
+ (insn + 1);
+ filter->jt = 0;
+ filter->jf = 0;
+ continue;
+ case (LABEL_JT<<8)|LABEL_JF:
+ if (labels->labels[filter->k].location != 0xffffffff) {
+ fprintf(stderr, "Duplicate label use: '%s'\n",
+ labels->labels[filter->k].label);
+ return 1;
+ }
+ labels->labels[filter->k].location = insn;
+ filter->k = 0; /* fall through */
+ filter->jt = 0;
+ filter->jf = 0;
+ continue;
+ }
+ }
+ return 0;
+}
+
+/* Simple lookup table for labels. */
+__u32 seccomp_bpf_label(struct bpf_labels *labels, const char *label)
+{
+ struct __bpf_label *begin = labels->labels, *end;
+ int id;
+ if (labels->count == 0) {
+ begin->label = label;
+ begin->location = 0xffffffff;
+ labels->count++;
+ return 0;
+ }
+ end = begin + labels->count;
+ for (id = 0; begin < end; ++begin, ++id) {
+ if (!strcmp(label, begin->label))
+ return id;
+ }
+ begin->label = label;
+ begin->location = 0xffffffff;
+ labels->count++;
+ return id;
+}
+
+void seccomp_bpf_print(struct sock_filter *filter, size_t count)
+{
+ struct sock_filter *end = filter + count;
+ for ( ; filter < end; ++filter)
+ printf("{ code=%u,jt=%u,jf=%u,k=%u },\n",
+ filter->code, filter->jt, filter->jf, filter->k);
+}
diff --git a/samples/seccomp/bpf-helper.h b/samples/seccomp/bpf-helper.h
new file mode 100644
index 0000000..643279d
--- /dev/null
+++ b/samples/seccomp/bpf-helper.h
@@ -0,0 +1,238 @@
+/*
+ * Example wrapper around BPF macros.
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-***@chromium.org>
+ * Author: Will Drewry <***@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_SET_SECCOMP, 2, ...).
+ *
+ * No guarantees are provided with respect to the correctness
+ * or functionality of this code.
+ */
+#ifndef __BPF_HELPER_H__
+#define __BPF_HELPER_H__
+
+#include <asm/bitsperlong.h> /* for __BITS_PER_LONG */
+#include <endian.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h> /* for seccomp_data */
+#include <linux/types.h>
+#include <linux/unistd.h>
+#include <stddef.h>
+
+#define BPF_LABELS_MAX 256
+struct bpf_labels {
+ int count;
+ struct __bpf_label {
+ const char *label;
+ __u32 location;
+ } labels[BPF_LABELS_MAX];
+};
+
+int bpf_resolve_jumps(struct bpf_labels *labels,
+ struct sock_filter *filter, size_t count);
+__u32 seccomp_bpf_label(struct bpf_labels *labels, const char *label);
+void seccomp_bpf_print(struct sock_filter *filter, size_t count);
+
+#define JUMP_JT 0xff
+#define JUMP_JF 0xff
+#define LABEL_JT 0xfe
+#define LABEL_JF 0xfe
+
+#define ALLOW \
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW)
+#define DENY \
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL)
+#define JUMP(labels, label) \
+ BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \
+ JUMP_JT, JUMP_JF)
+#define LABEL(labels, label) \
+ BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \
+ LABEL_JT, LABEL_JF)
+#define SYSCALL(nr, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (nr), 0, 1), \
+ jt
+
+/* Lame, but just an example */
+#define FIND_LABEL(labels, label) seccomp_bpf_label((labels), #label)
+
+#define EXPAND(...) __VA_ARGS__
+/* Map all width-sensitive operations */
+#if __BITS_PER_LONG == 32
+
+#define JEQ(x, jt) JEQ32(x, EXPAND(jt))
+#define JNE(x, jt) JNE32(x, EXPAND(jt))
+#define JGT(x, jt) JGT32(x, EXPAND(jt))
+#define JLT(x, jt) JLT32(x, EXPAND(jt))
+#define JGE(x, jt) JGE32(x, EXPAND(jt))
+#define JLE(x, jt) JLE32(x, EXPAND(jt))
+#define JA(x, jt) JA32(x, EXPAND(jt))
+#define ARG(i) ARG_32(i)
+#define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)])
+
+#elif __BITS_PER_LONG == 64
+
+/* Ensure that we load the logically correct offset. */
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define ENDIAN(_lo, _hi) _lo, _hi
+#define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)])
+#define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32)
+#elif __BYTE_ORDER == __BIG_ENDIAN
+#define ENDIAN(_lo, _hi) _hi, _lo
+#define LO_ARG(idx) offsetof(struct seccomp_data, args[(idx)]) + sizeof(__u32)
+#define HI_ARG(idx) offsetof(struct seccomp_data, args[(idx)])
+#else
+#error "Unknown endianness"
+#endif
+
+union arg64 {
+ struct {
+ __u32 ENDIAN(lo32, hi32);
+ };
+ __u64 u64;
+};
+
+#define JEQ(x, jt) \
+ JEQ64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+#define JGT(x, jt) \
+ JGT64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+#define JGE(x, jt) \
+ JGE64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+#define JNE(x, jt) \
+ JNE64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+#define JLT(x, jt) \
+ JLT64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+#define JLE(x, jt) \
+ JLE64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+
+#define JA(x, jt) \
+ JA64(((union arg64){.u64 = (x)}).lo32, \
+ ((union arg64){.u64 = (x)}).hi32, \
+ EXPAND(jt))
+#define ARG(i) ARG_64(i)
+
+#else
+#error __BITS_PER_LONG value unusable.
+#endif
+
+/* Loads the arg into A */
+#define ARG_32(idx) \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx))
+
+/* Loads hi into A and lo in X */
+#define ARG_64(idx) \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx)), \
+ BPF_STMT(BPF_ST, 0), /* lo -> M[0] */ \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, HI_ARG(idx)), \
+ BPF_STMT(BPF_ST, 1) /* hi -> M[1] */
+
+#define JEQ32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 0, 1), \
+ jt
+
+#define JNE32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 1, 0), \
+ jt
+
+/* Checks the lo, then swaps to check the hi. A=lo,X=hi */
+#define JEQ64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 0, 2), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JNE64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 5, 0), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 2, 0), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JA32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (value), 0, 1), \
+ jt
+
+#define JA64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (hi), 3, 0), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (lo), 0, 2), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JGE32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 0, 1), \
+ jt
+
+#define JLT32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 1, 0), \
+ jt
+
+/* Shortcut checking if hi > arg.hi. */
+#define JGE64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 0, 2), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JLT64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JGT32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 0, 1), \
+ jt
+
+#define JLE32(value, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 1, 0), \
+ jt
+
+/* Check hi > args.hi first, then do the GE checking */
+#define JGT64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 0, 2), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define JLE64(lo, hi, jt) \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 6, 0), \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 3), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+ BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+ jt, \
+ BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+
+#define LOAD_SYSCALL_NR \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, \
+ offsetof(struct seccomp_data, nr))
+
+#endif /* __BPF_HELPER_H__ */
diff --git a/samples/seccomp/dropper.c b/samples/seccomp/dropper.c
new file mode 100644
index 0000000..c69c347
--- /dev/null
+++ b/samples/seccomp/dropper.c
@@ -0,0 +1,68 @@
+/*
+ * Naive system call dropper built on seccomp_filter.
+ *
+ * Copyright (c) 2012 The Chromium OS Authors <chromium-os-***@chromium.org>
+ * Author: Will Drewry <***@chromium.org>
+ *
+ * The code may be used by anyone for any purpose,
+ * and can serve as a starting point for developing
+ * applications using prctl(PR_SET_SECCOMP, 2, ...).
+ *
+ * When run, returns the specified errno for the specified
+ * system call number against the given architecture.
+ *
+ * Run this one as root as PR_SET_NO_NEW_PRIVS is not called.
+ */
+
+#include <errno.h>
+#include <linux/audit.h>
+#include <linux/filter.h>
+#include <linux/seccomp.h>
+#include <linux/unistd.h>
+#include <stdio.h>
+#include <stddef.h>
+#include <stdlib.h>
+#include <sys/prctl.h>
+#include <unistd.h>
+
+static int install_filter(int nr, int arch, int error)
+{
+ struct sock_filter filter[] = {
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
+ (offsetof(struct seccomp_data, arch))),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, arch, 0, 3),
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
+ (offsetof(struct seccomp_data, nr))),
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, nr, 0, 1),
+ BPF_STMT(BPF_RET+BPF_K,
+ SECCOMP_RET_ERRNO|(error & SECCOMP_RET_DATA)),
+ BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
+ };
+ struct sock_fprog prog = {
+ .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
+ .filter = filter,
+ };
+ if (prctl(PR_SET_SECCOMP, 2, &prog)) {
+ perror("prctl");
+ return 1;
+ }
+ return 0;
+}
+
+int main(int argc, char **argv)
+{
+ if (argc < 5) {
+ fprintf(stderr, "Usage:\n"
+ "dropper <syscall_nr> <arch> <errno> <prog> [<args>]\n"
+ "Hint: AUDIT_ARCH_I386: 0x%X\n"
+ " AUDIT_ARCH_X86_64: 0x%X\n"
+ "\n", AUDIT_ARCH_I386, AUDIT_ARCH_X86_64);
+ return 1;
+ }
+ if (install_filter(strtol(argv[1], NULL, 0), strtol(argv[2], NULL, 0),
+ strtol(argv[3], NULL, 0)))
+ return 1;
+ execv(argv[4], &argv[4]);
+ printf("Failed to execv\n");
+ return 255;
+}
--
1.7.5.4
Will Drewry
2012-02-29 23:53:35 UTC
Permalink
This change enables SIGSYS, defines _sigfields._sigsys, and adds
x86 (compat) arch support. _sigsys defines fields which allow
a signal handler to receive the triggering system call number,
the relevant AUDIT_ARCH_* value for that number, and the address
of the callsite.

SIGSYS is added to the SYNCHRONOUS_MASK because it is desirable for it
to have setup_frame() called for it. The goal is to ensure that
ucontext_t reflects the machine state from the time-of-syscall and not
from another signal handler.

The first consumer of SIGSYS would be seccomp filter. In particular,
a filter program could specify a new return value, SECCOMP_RET_TRAP,
which would result in the system call being denied and the calling
thread signaled. This also means that implementing arch-specific
support can be dependent upon HAVE_ARCH_SECCOMP_FILTER.

v12: - reworded changelog (***@redhat.com)
v11: - fix dropped words in the change description
- added fallback copy_siginfo support.
- added __ARCH_SIGSYS define to allow stepped arch support.
v10: - first version based on suggestion

Suggested-by: H. Peter Anvin <***@zytor.com>
Signed-off-by: Will Drewry <***@chromium.org>
---
arch/x86/ia32/ia32_signal.c | 4 ++++
arch/x86/include/asm/ia32.h | 6 ++++++
include/asm-generic/siginfo.h | 22 ++++++++++++++++++++++
kernel/signal.c | 9 ++++++++-
4 files changed, 40 insertions(+), 1 deletions(-)

diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
index 942bfbe..7f71cb2 100644
--- a/arch/x86/ia32/ia32_signal.c
+++ b/arch/x86/ia32/ia32_signal.c
@@ -66,6 +66,10 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
switch (from->si_code >> 16) {
case __SI_FAULT >> 16:
break;
+ case __SI_SYS >> 16:
+ put_user_ex(from->si_syscall, &to->si_syscall);
+ put_user_ex(from->si_arch, &to->si_arch);
+ break;
case __SI_CHLD >> 16:
put_user_ex(from->si_utime, &to->si_utime);
put_user_ex(from->si_stime, &to->si_stime);
diff --git a/arch/x86/include/asm/ia32.h b/arch/x86/include/asm/ia32.h
index c6435ab..bcc99e8 100644
--- a/arch/x86/include/asm/ia32.h
+++ b/arch/x86/include/asm/ia32.h
@@ -135,6 +135,12 @@ typedef struct compat_siginfo {
int _band; /* POLL_IN, POLL_OUT, POLL_MSG */
int _fd;
} _sigpoll;
+
+ struct {
+ unsigned int _call_addr; /* calling insn */
+ int _syscall; /* triggering system call number */
+ unsigned int _arch; /* AUDIT_ARCH_* of syscall */
+ } _sigsys;
} _sifields;
} compat_siginfo_t;

diff --git a/include/asm-generic/siginfo.h b/include/asm-generic/siginfo.h
index 0dd4e87..31306f5 100644
--- a/include/asm-generic/siginfo.h
+++ b/include/asm-generic/siginfo.h
@@ -90,9 +90,18 @@ typedef struct siginfo {
__ARCH_SI_BAND_T _band; /* POLL_IN, POLL_OUT, POLL_MSG */
int _fd;
} _sigpoll;
+
+ /* SIGSYS */
+ struct {
+ void __user *_call_addr; /* calling insn */
+ int _syscall; /* triggering system call number */
+ unsigned int _arch; /* AUDIT_ARCH_* of syscall */
+ } _sigsys;
} _sifields;
} siginfo_t;

+/* If the arch shares siginfo, then it has SIGSYS. */
+#define __ARCH_SIGSYS
#endif

/*
@@ -116,6 +125,11 @@ typedef struct siginfo {
#define si_addr_lsb _sifields._sigfault._addr_lsb
#define si_band _sifields._sigpoll._band
#define si_fd _sifields._sigpoll._fd
+#ifdef __ARCH_SIGSYS
+#define si_call_addr _sifields._sigsys._call_addr
+#define si_syscall _sifields._sigsys._syscall
+#define si_arch _sifields._sigsys._arch
+#endif

#ifdef __KERNEL__
#define __SI_MASK 0xffff0000u
@@ -126,6 +140,7 @@ typedef struct siginfo {
#define __SI_CHLD (4 << 16)
#define __SI_RT (5 << 16)
#define __SI_MESGQ (6 << 16)
+#define __SI_SYS (7 << 16)
#define __SI_CODE(T,N) ((T) | ((N) & 0xffff))
#else
#define __SI_KILL 0
@@ -135,6 +150,7 @@ typedef struct siginfo {
#define __SI_CHLD 0
#define __SI_RT 0
#define __SI_MESGQ 0
+#define __SI_SYS 0
#define __SI_CODE(T,N) (N)
#endif

@@ -232,6 +248,12 @@ typedef struct siginfo {
#define NSIGPOLL 6

/*
+ * SIGSYS si_codes
+ */
+#define SYS_SECCOMP (__SI_SYS|1) /* seccomp triggered */
+#define NSIGSYS 1
+
+/*
* sigevent definitions
*
* It seems likely that SIGEV_THREAD will have to be handled from
diff --git a/kernel/signal.c b/kernel/signal.c
index 354b104..12b23ff 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -160,7 +160,7 @@ void recalc_sigpending(void)

#define SYNCHRONOUS_MASK \
(sigmask(SIGSEGV) | sigmask(SIGBUS) | sigmask(SIGILL) | \
- sigmask(SIGTRAP) | sigmask(SIGFPE))
+ sigmask(SIGTRAP) | sigmask(SIGFPE) | sigmask(SIGSYS))

int next_signal(struct sigpending *pending, sigset_t *mask)
{
@@ -2700,6 +2700,13 @@ int copy_siginfo_to_user(siginfo_t __user *to, siginfo_t *from)
err |= __put_user(from->si_uid, &to->si_uid);
err |= __put_user(from->si_ptr, &to->si_ptr);
break;
+#ifdef __ARCH_SIGSYS
+ case __SI_SYS:
+ err |= __put_user(from->si_call_addr, &to->si_call_addr);
+ err |= __put_user(from->si_syscall, &to->si_syscall);
+ err |= __put_user(from->si_arch, &to->si_arch);
+ break;
+#endif
default: /* this is just in case for now ... */
err |= __put_user(from->si_pid, &to->si_pid);
err |= __put_user(from->si_uid, &to->si_uid);
--
1.7.5.4
Will Drewry
2012-02-29 23:53:36 UTC
Permalink
Adds a new return value to seccomp filters that triggers a SIGSYS to be
delivered with the new SYS_SECCOMP si_code.

This allows in-process system call emulation, including just specifying
an errno or cleanly dumping core, rather than just dying.

v11: - clarify the comment (***@nul.nu)
- s/sigtrap/sigsys
v10: - use SIGSYS, syscall_get_arch, updates arch/Kconfig
note suggested-by (though original suggestion had other behaviors)
v9: - changes to SIGILL
v8: - clean up based on changes to dependent patches
v7: - introduction

Suggested-by: Markus Gutschke <***@chromium.org>
Suggested-by: Julien Tinnes <***@chromium.org>
Signed-off-by: Will Drewry <***@chromium.org>
---
arch/Kconfig | 14 +++++++++-----
include/asm-generic/siginfo.h | 2 +-
include/linux/seccomp.h | 1 +
kernel/seccomp.c | 28 ++++++++++++++++++++++++++++
4 files changed, 39 insertions(+), 6 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 1350d07..6908bf6 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -236,11 +236,15 @@ config ARCH_WANT_OLD_COMPAT_IPC
config HAVE_ARCH_SECCOMP_FILTER
bool
help
- This symbol should be selected by an architecure if it provides
- asm/syscall.h, specifically syscall_get_arguments(),
- syscall_get_arch(), and syscall_set_return_value(). Additionally,
- its system call entry path must respect a return value of -1 from
- __secure_computing_int() and/or secure_computing().
+ This symbol should be selected by an architecture if it provides:
+ asm/syscall.h:
+ - syscall_get_arch()
+ - syscall_get_arguments()
+ - syscall_rollback()
+ - syscall_set_return_value()
+ SIGSYS siginfo_t support must be implemented.
+ __secure_computing_int()/secure_computing()'s return value must be
+ checked, with -1 resulting in the syscall being skipped.

config SECCOMP_FILTER
def_bool y
diff --git a/include/asm-generic/siginfo.h b/include/asm-generic/siginfo.h
index 31306f5..af5d035 100644
--- a/include/asm-generic/siginfo.h
+++ b/include/asm-generic/siginfo.h
@@ -93,7 +93,7 @@ typedef struct siginfo {

/* SIGSYS */
struct {
- void __user *_call_addr; /* calling insn */
+ void __user *_call_addr; /* calling user insn */
int _syscall; /* triggering system call number */
unsigned int _arch; /* AUDIT_ARCH_* of syscall */
} _sigsys;
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index a81fccd..3fe48fd 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -19,6 +19,7 @@
  * selects the least permissive choice.
  */
 #define SECCOMP_RET_KILL	0x00000000U /* kill the task immediately */
+#define SECCOMP_RET_TRAP	0x00020000U /* disallow and force a SIGSYS */
 #define SECCOMP_RET_ERRNO	0x00030000U /* returns an errno */
 #define SECCOMP_RET_ALLOW	0x7fff0000U /* allow */
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 88dd568..10c6077 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -272,6 +272,26 @@ void put_seccomp_filter(struct seccomp_filter *orig)
 		kfree(freeme);
 	}
 }
+
+/**
+ * seccomp_send_sigsys - signals the task to allow in-process syscall emulation
+ * @syscall: syscall number to send to userland
+ * @reason: filter-supplied reason code to send to userland (via si_errno)
+ *
+ * Forces a SIGSYS with a code of SYS_SECCOMP and related sigsys info.
+ */
+static void seccomp_send_sigsys(int syscall, int reason)
+{
+	struct siginfo info;
+	memset(&info, 0, sizeof(info));
+	info.si_signo = SIGSYS;
+	info.si_code = SYS_SECCOMP;
+	info.si_call_addr = (void __user *)KSTK_EIP(current);
+	info.si_errno = reason;
+	info.si_arch = syscall_get_arch(current, task_pt_regs(current));
+	info.si_syscall = syscall;
+	force_sig_info(SIGSYS, &info, current);
+}
 #endif /* CONFIG_SECCOMP_FILTER */

/*
@@ -326,6 +346,14 @@ int __secure_computing_int(int this_syscall)
 					       -(action & SECCOMP_RET_DATA),
 					       0);
 			return -1;
+		case SECCOMP_RET_TRAP: {
+			int reason_code = action & SECCOMP_RET_DATA;
+			/* Show the handler the original registers. */
+			syscall_rollback(current, task_pt_regs(current));
+			/* Let the filter pass back 16 bits of data. */
+			seccomp_send_sigsys(this_syscall, reason_code);
+			return -1;
+		}
 		case SECCOMP_RET_ALLOW:
 			return 0;
 		case SECCOMP_RET_KILL:
--
1.7.5.4
Kees Cook
2012-03-01 23:37:12 UTC
Permalink
Hi,

So far, it looks like everyone who spoke up is satisfied with this patch
series. Or their MUAs got caught up on a leap-year bug and missed the
posting. ;)

Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
format glory:

The following changes since commit 2422c8368337196594265d52cad7316c9404bfcf:
Stephen Rothwell (1):
Add linux-next specific files for 20120301

are available in the git repository at:

git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git seccomp

Andy Lutomirski (1):
Add PR_{GET,SET}_NO_NEW_PRIVS to prevent execve from granting privs

John Johansen (1):
Fix apparmor for PR_{GET,SET}_NO_NEW_PRIVS

Kees Cook (1):
seccomp: remove duplicated failure logging

Will Drewry (12):
sk_run_filter: add support for custom load_pointer
net/compat.c,linux/filter.h: share compat_sock_fprog
seccomp: kill the seccomp_t typedef
asm/syscall.h: add syscall_get_arch
arch/x86: add syscall_get_arch to syscall.h
seccomp: add system call filtering using BPF
seccomp: add SECCOMP_RET_ERRNO
signal, x86: add SIGSYS info and make it synchronous.
seccomp: Add SECCOMP_RET_TRAP
ptrace,seccomp: Add PTRACE_SECCOMP support
x86: Enable HAVE_ARCH_SECCOMP_FILTER
Documentation: prctl/seccomp_filter

Documentation/prctl/seccomp_filter.txt | 156 +++++++++++++
arch/Kconfig | 24 ++
arch/x86/Kconfig | 1 +
arch/x86/ia32/ia32_signal.c | 4 +
arch/x86/include/asm/ia32.h | 6 +
arch/x86/include/asm/syscall.h | 23 ++
arch/x86/kernel/ptrace.c | 7 +-
fs/exec.c | 10 +-
include/asm-generic/siginfo.h | 22 ++
include/asm-generic/syscall.h | 14 ++
include/linux/Kbuild | 1 +
include/linux/audit.h | 8 +-
include/linux/filter.h | 57 +++++
include/linux/prctl.h | 15 ++
include/linux/ptrace.h | 5 +-
include/linux/sched.h | 4 +-
include/linux/seccomp.h | 99 +++++++-
include/linux/security.h | 1 +
kernel/auditsc.c | 9 +-
kernel/fork.c | 3 +
kernel/seccomp.c | 378 ++++++++++++++++++++++++++++++--
kernel/signal.c | 9 +-
kernel/sys.c | 12 +-
net/compat.c | 8 -
net/core/filter.c | 117 ++++++++++-
samples/Makefile | 2 +-
samples/seccomp/Makefile | 38 ++++
samples/seccomp/bpf-direct.c | 176 +++++++++++++++
samples/seccomp/bpf-fancy.c | 102 +++++++++
samples/seccomp/bpf-helper.c | 89 ++++++++
samples/seccomp/bpf-helper.h | 238 ++++++++++++++++++++
samples/seccomp/dropper.c | 68 ++++++
security/apparmor/domain.c | 35 +++
security/commoncap.c | 7 +-
security/selinux/hooks.c | 10 +-
35 files changed, 1695 insertions(+), 63 deletions(-)
create mode 100644 Documentation/prctl/seccomp_filter.txt
create mode 100644 samples/seccomp/Makefile
create mode 100644 samples/seccomp/bpf-direct.c
create mode 100644 samples/seccomp/bpf-fancy.c
create mode 100644 samples/seccomp/bpf-helper.c
create mode 100644 samples/seccomp/bpf-helper.h
create mode 100644 samples/seccomp/dropper.c
--
Kees Cook
ChromeOS Security
H. Peter Anvin
2012-03-02 00:05:31 UTC
Permalink
Post by Kees Cook
Hi,
So far, it looks like everyone who spoke up is satisfied with this patch
series. Or their MUAs got caught up on a leap-year bug and missed the
posting. ;)
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
Acked-by: H. Peter Anvin <***@zytor.com>
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
Andrew Lutomirski
2012-03-02 00:22:37 UTC
Permalink
Post by Kees Cook
Hi,
So far, it looks like everyone who spoke up is satisfied with this patch
series. Or their MUAs got caught up on a leap-year bug and missed the
posting. ;)
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
[...]
Should we set up a site with associated example userspace tools? For
example, this:

http://web.mit.edu/luto/www/linux/nnp/

(although newns.c isn't as useful without the rest of my patches, but
that's a separate discussion.)

At least pam_nnp.c is useful, and when this stuff hits mainline, I'll
try to push it to linux-pam. Presumably you and Will have similar
little hacks to play with.

--Andy
Stephen Rothwell
2012-03-02 00:47:05 UTC
Permalink
Hi Kees,
Post by Kees Cook
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
Add linux-next specific files for 20120301
OK, not commenting on anything else, but I cannot merge that into
linux-next because it is based on yesterday's linux-next release and
linux-next (effectively) rebases every day ...

Does this work depend on anything in linux-next? Or could it be just
based off Linus' tree. If it depends on other tree(s) merged into
linux-next, then you should base your tree on those tree(s) as long as
they never get rebased ...
--
Cheers,
Stephen Rothwell ***@canb.auug.org.au
Kees Cook
2012-03-02 00:57:49 UTC
Permalink
Hi,
Post by Stephen Rothwell
Hi Kees,
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
The following changes since commit 2422c8368337196594265d52cad7316c9404bfcf:
        Add linux-next specific files for 20120301
OK, not commenting on anything else, but I cannot merge that into
linux-next because it is based on yesterday's linux-next release and
linux-next (effectively) rebases every day ...
Does this work depend on anything in linux-next?  Or could it be just
based off Linus' tree.  If it depends on other tree(s) merged into
linux-next, then you should base your tree on those tree(s) as long as
they never get rebased ...
Unfortunately, yes, it does -- there were both ptrace changes and prctl
changes.

And at least the ptrace changes are, IIRC, in -mm, which has no tree.
:P Given that, what's the best thing for me to do for this to be easy
for you to pull?

-Kees

--
Kees Cook
ChromeOS Security
Andrew Morton
2012-03-02 01:19:14 UTC
Permalink
Post by Stephen Rothwell
Hi Kees,
Post by Kees Cook
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
__ __ __ __ Add linux-next specific files for 20120301
OK, not commenting on anything else, but I cannot merge that into
linux-next because it is based on yesterday's linux-next release and
linux-next (effectively) rebases every day ...
Does this work depend on anything in linux-next? __Or could it be just
based off Linus' tree. __If it depends on other tree(s) merged into
linux-next, then you should base your tree on those tree(s) as long as
they never get rebased ...
Unfortunately, yes, it does -- there were both ptrace changes and prctl changes.
And at least the ptrace changes are, IIRC, in -mm, which has no tree.
:P Given that, what's the best thing for me to do for this to be easy
for you to pull?
Base the tree on mainline and wreck the -mm patches I guess. I'm good
at unwrecking patches.

That assumes that we're going to merge this stuff into 3.4 - if we
don't, unwrecker gets rewrecked and grumpy.

I don't know if we're going to merge it into 3.4? I haven't been
paying a lot of attention and haven't looked at the patches in a while.

--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Will Drewry
2012-03-02 02:39:40 UTC
Permalink
Post by Stephen Rothwell
Hi Kees,
Post by Kees Cook
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
__ __ __ __ Add linux-next specific files for 20120301
OK, not commenting on anything else, but I cannot merge that into
linux-next because it is based on yesterday's linux-next release and
linux-next (effectively) rebases every day ...
Does this work depend on anything in linux-next? __Or could it be just
based off Linus' tree. __If it depends on other tree(s) merged into
linux-next, then you should base your tree on those tree(s) as long as
they never get rebased ...
Unfortunately, yes, it does -- there were both ptrace changes and prctl changes.
And at least the ptrace changes are, IIRC, in -mm, which has no tree.
:P Given that, what's the best thing for me to do for this to be easy
for you to pull?
Base the tree on mainline and wreck the -mm patches I guess.  I'm good
at unwrecking patches.
That assumes that we're going to merge this stuff into 3.4 - if we
don't, unwrecker gets rewrecked and grumpy.
I don't know if we're going to merge it into 3.4?  I haven't been
paying a lot of attention and haven't looked at the patches in a while.
I'll roll a v13 which is just a rebase back onto mainline and see if I
can just stage it in kees's tree.

Thanks!
will
Indan Zupancic
2012-03-02 04:04:41 UTC
Permalink
Post by Andrew Morton
That assumes that we're going to merge this stuff into 3.4 - if we
don't, unwrecker gets rewrecked and grumpy.
I don't know if we're going to merge it into 3.4? I haven't been
paying a lot of attention and haven't looked at the patches in a while.
I think it should be merged, but I think 3.5 is probably better.

This is because we haven't heard anything from the networking people
about the BPF changes, and I'm also unsure if the current approach
is the best one: it both increases the filter.o size significantly
and slows down sk_run_filter, while the point was to avoid both.
I'm trying to think of an alternative approach with lower impact.

The ptrace integration may need some more time to settle too, even
just to make sure the latest version does what needs to be done.

Both directly affect the user space ABI, so I think it's best to
not be too hasty with pushing this upstream. Waiting one release
while having a stable final patch gives people the chance to go
and try to use it for their purposes and thus both test the code
more and get experience with the ABI.

Greetings,

Indan
Stephen Rothwell
2012-03-02 05:03:05 UTC
Permalink
Just a note.
Post by Indan Zupancic
Post by Andrew Morton
That assumes that we're going to merge this stuff into 3.4 - if we
don't, unwrecker gets rewrecked and grumpy.
I don't know if we're going to merge it into 3.4? I haven't been
paying a lot of attention and haven't looked at the patches in a while.
I think it should be merged, but I think 3.5 is probably better.
If it is going into v3.5, I don't want to add it to linux-next until after
v3.4-rc1 is out, thanks.
--
Cheers,
Stephen Rothwell ***@canb.auug.org.au
Kees Cook
2012-03-02 05:26:23 UTC
Permalink
Post by Indan Zupancic
Post by Andrew Morton
That assumes that we're going to merge this stuff into 3.4 - if we
don't, unwrecker gets rewrecked and grumpy.
I don't know if we're going to merge it into 3.4?  I haven't been
paying a lot of attention and haven't looked at the patches in a while.
Post by Indan Zupancic
I think it should be merged, but I think 3.5 is probably better.
This because we haven't heard anything from the networking people
about the BPF changes, and I'm also unsure if the current approach
is the best one: It both increases the filter.o size significantly
while slowing down sk_run_filter, while the point was to avoid both.
I'm trying to think of an alternative approach with lower impact.
The ptrace integration may need some more time to settle too, even
just to make sure the latest version does what needs to be done.
Both directly affect the user space ABI, so I think it's best to
not be too hasty with pushing this upstream. Waiting one release
while having a stable final patch gives people the chance to go
and try to use it for their purposes and thus both test the code
more and get experience with the ABI.
Well, IIUC, Eric Dumazet Acked the BPF changes. While I see what
you're saying about waiting for 3.5, it seems like the best way to
really see this stabilize is to get this into 3.4. The various
approaches have been discussed for a while now. Having that wider
testing sooner rather than later seems like the better approach to me.
Waiting for 3.5 just means we'll be waiting until then to do that same
testing. Perhaps Andrew Morton can decide?

Regardless, I've updated my seccomp tree with Will's rebase to Linus's
tree so people can pull from it as need be:

        git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git seccomp

-Kees

--
Kees Cook
ChromeOS Security
Andrew Lutomirski
2012-03-02 01:48:32 UTC
Permalink
Hi,
Post by Stephen Rothwell
Hi Kees,
Is it time to pull this into -next so more people can feel it? I brought
Andy Lutomirski's patches forward (needed to bump the prctl values), and
updated my git tree. If we're ready, here it is in all its request-pull
The following changes since commit 2422c8368337196594265d52cad7316c9404bfcf:
        Add linux-next specific files for 20120301
OK, not commenting on anything else, but I cannot merge that into
linux-next because it is based on yesterday's linux-next release and
linux-next (effectively) rebases every day ...
Does this work depend on anything in linux-next?  Or could it be just
based off Linus' tree.  If it depends on other tree(s) merged into
linux-next, then you should base your tree on those tree(s) as long as
they never get rebased ...
Unfortunately, yes, it does -- there were both ptrace changes and prctl
changes.
And at least the ptrace changes are, IIRC, in -mm, which has no tree.
:P Given that, what's the best thing for me to do for this to be easy
for you to pull?
IIRC the prctl changes are in -linus. I can rebase my part onto them.

--Andy
Stephen Rothwell
2012-03-02 03:10:38 UTC
Permalink
Hi Kees,
Unfortunately, yes, it does -- there were both ptrace changes and prctl changes.
And at least the ptrace changes are, IIRC, in -mm, which has no tree.
:P Given that, what's the best thing for me to do for this to be easy
for you to pull?
Does this set of patches *depend* on functionality provided by those, or
just produce conflicts against the other changes? If it is just
conflicts, then base your tree on Linus' and I (and he, when it comes to
it) can fix the conflicts as needed (with some hints if you think it is a
good idea, i.e. if the conflicts are particularly complex).
--
Cheers,
Stephen Rothwell ***@canb.auug.org.au
Will Drewry
2012-03-02 03:41:51 UTC
Permalink
Post by Stephen Rothwell
Hi Kees,
Unfortunately, yes, it does -- there were both ptrace changes and prctl
changes.
And at least the ptrace changes are, IIRC, in -mm, which has no tree.
:P Given that, what's the best thing for me to do for this to be easy
for you to pull?
Does this set of patches *depend* on functionality provided by those, or
just produce conflicts against the other changes?  If it is just
conflicts, then base your tree on Linus' and I (and he, when it comes to
it) can fix the conflicts as needed (with some hints if you think it is a
good idea, i.e. if the conflicts are particularly complex).
No explicit dependency. It's just a conflict in how ptrace options
are set, defined, and masked (and STOP being renumbered). Other than
the ptrace changes, the whole series patches cleanly onto both trees.
The resolved version is less code and easier to read, so I don't
suspect it'll be a challenge at all.

thanks!
Stephen Rothwell
2012-03-02 03:57:50 UTC
Permalink
Post by Will Drewry
No explicit dependency. It's just a conflict in how ptrace options
are set, defined, and masked (and STOP being renumbered). Other than
the ptrace changes, the whole series patches cleanly onto both trees.
The resolved version is less code and easier to read, so I don't
suspect it'll be a challenge at all.
OK, so we need a tree based on Linus' (like v3.3-rc5 or something - and
preferably tested ;-)) and an explicit request to me to include that tree.

I will sort out any conflicts with Andrews's tree and post the fixups I
make so others can correct me if necessary.
--
Cheers,
Stephen Rothwell ***@canb.auug.org.au
Eric Dumazet
2012-03-02 00:47:13 UTC
Permalink
Hi,
So far, it looks like everyone who spoke up is satisfied with this pa=
tch
series. Or their MUAs got caught up on a leap-year bug and missed the
posting. ;)
Sorry for the delay, I only had time to review patch 01/13

Acked-by: Eric Dumazet <***@gmail.com>
Indan Zupancic
2012-03-02 10:40:14 UTC
Permalink
Hello,
Post by Will Drewry
include/linux/filter.h | 46 +++++++++++++++++++
net/core/filter.c | 117 +++++++++++++++++++++++++++++++++++++++++++++---
2 files changed, 157 insertions(+), 6 deletions(-)
I propose a slightly different approach:

Instead of more or less allowing generic load instructions, do the
same as the ancillary data functions and only allow BPF_S_LD_W_ABS.
In addition to that, rewrite and check the functions ourself after
sk_chk_filter() has done its checks.

Diff for filter.c:

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8eeb205..63b728c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -228,6 +228,7 @@ enum {
 	BPF_S_ANC_HATYPE,
 	BPF_S_ANC_RXHASH,
 	BPF_S_ANC_CPU,
+	BPF_S_LD_W_SECCOMP,
 };
 
 #endif /* __KERNEL__ */
diff --git a/net/core/filter.c b/net/core/filter.c
index 5dea452..7e338d6 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -350,6 +350,9 @@ load_b:
 				A = 0;
 				continue;
 			}
+		case BPF_S_LD_W_SECCOMP:
+			A = seccomp_load(fentry->k);
+			continue;
 		default:
 			WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n",
 				       fentry->code, fentry->jt,
---

And in seccomp add something like:

/*
 * Does SECCOMP specific checks.
 * Should be called after sk_chk_filter(), as it assumes all instructions
 * are rewritten to the kernel enum format.
 * No SKB touching instructions are allowed. The only data LD instruction
 * allowed is BPF_S_LD_W_ABS, which will be handled by seccomp_load().
 */
int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
{
	int pc;

	/* Make sure there are no SKB using instructions */
	for (pc = 0; pc < flen; pc++) {
		struct sock_filter *ftest = &filter[pc];
		u16 code = ftest->code;
		unsigned int k = ftest->k;

		if (code <= BPF_S_ALU_NEG)
			continue;
		if (code >= BPF_S_LDX_IMM && code < BPF_S_ANC_PROTOCOL)
			continue;
		switch (code) {
		case BPF_S_LD_W_ABS:
			ftest->code = BPF_S_LD_W_SECCOMP;
			if (k >= sizeof(struct seccomp_data) || k & 3)
				return -EINVAL;
			continue;
		case BPF_S_LD_W_LEN:
			ftest->code = BPF_S_LD_IMM;
			ftest->k = sizeof(struct seccomp_data);
			continue;
		case BPF_S_LD_IMM:
			continue;
		case BPF_S_LDX_W_LEN:
			ftest->code = BPF_S_LDX_IMM;
			ftest->k = sizeof(struct seccomp_data);
			continue;
		default:
			return -EINVAL;
		}
	}
	return 0;
}

u32 seccomp_load(int off)
{
	u32 A = 0;
	u64 value;
	struct pt_regs *regs = task_pt_regs(current);

	if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
		int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
		int index = (off % sizeof(u64)) ? 1 : 0;
		syscall_get_arguments(current, regs, arg, 1, &value);
		A = get_u32(value, index);
	} else if (off == BPF_DATA(nr)) {
		A = syscall_get_nr(current, regs);
	} else if (off == BPF_DATA(arch)) {
		A = syscall_get_arch(current, regs);
	} else if (off == BPF_DATA(instruction_pointer)) {
		A = get_u32(KSTK_EIP(current), 0);
	} else if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) {
		A = get_u32(KSTK_EIP(current), 1);
	}
	return A;
}

This way you can even add SECCOMP specific functions in the future by using
special offsets. (E.g. 64-bit compare between an arg and scratch memory.)

Greetings,

Indan


Will Drewry
2012-03-02 18:47:51 UTC
Permalink
Post by Indan Zupancic
Hello,
 include/linux/filter.h |   46 +++++++++++++++++++
 net/core/filter.c      |  117 +++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 157 insertions(+), 6 deletions(-)
Instead of more or less allowing generic load instructions, do the
same as the ancillary data functions and only allow BPF_S_LD_W_ABS.
In addition to that, rewrite and check the functions ourself after
sk_chk_filter() has done its checks.
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8eeb205..63b728c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -228,6 +228,7 @@ enum {
 	BPF_S_ANC_HATYPE,
 	BPF_S_ANC_RXHASH,
 	BPF_S_ANC_CPU,
+	BPF_S_LD_W_SECCOMP,
 };
 
 #endif /* __KERNEL__ */
diff --git a/net/core/filter.c b/net/core/filter.c
index 5dea452..7e338d6 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
 				A = 0;
 				continue;
 			}
+		case BPF_S_LD_W_SECCOMP:
+			A = seccomp_load(fentry->k);
+			continue;
This is plenty nice as far as I'm concerned. I wonder what the
networking people think?

I proposed a generic bpf interface, but if simply scoping it down to a
single additional seccomp instruction is okay, then we can address
additional instruction support or other generalizations later when
there is a motivating case.
Post by Indan Zupancic
 		default:
 			WARN_RATELIMIT(1, "Unknown code:%u jt:%u tf:%u k:%u\n",
 				       fentry->code, fentry->jt,
---
/*
 * Does SECCOMP specific checks.
 * Should be called after sk_chk_filter(), as it assumes all instructions
 * are rewritten to the kernel enum format.
 * No SKB touching instructions are allowed. The only data LD instruction
 * allowed is BPF_S_LD_W_ABS, which will be handled by seccomp_load().
 */
int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
{
	int pc;

	/* Make sure there are no SKB using instructions */
	for (pc = 0; pc < flen; pc++) {
		struct sock_filter *ftest = &filter[pc];
		u16 code = ftest->code;
		unsigned int k = ftest->k;

		if (code <= BPF_S_ALU_NEG)
			continue;
		if (code >= BPF_S_LDX_IMM && code < BPF_S_ANC_PROTOCOL)
			continue;
		switch (code) {
		case BPF_S_LD_W_ABS:
			ftest->code = BPF_S_LD_W_SECCOMP;
			if (k >= sizeof(struct seccomp_data) || k & 3)
				return -EINVAL;
			continue;
		case BPF_S_LD_W_LEN:
			ftest->code = BPF_S_LD_IMM;
			ftest->k = sizeof(struct seccomp_data);
			continue;
		case BPF_S_LD_IMM:
			continue;
		case BPF_S_LDX_W_LEN:
			ftest->code = BPF_S_LDX_IMM;
			ftest->k = sizeof(struct seccomp_data);

Mapping to LD[X]_IMM is really nice.
Post by Indan Zupancic
			continue;
		default:
			return -EINVAL;
		}
	}
	return 0;
}

u32 seccomp_load(int off)
{
	u32 A = 0;
	u64 value;
	struct pt_regs *regs = task_pt_regs(current);

	if (off >= BPF_DATA(args[0]) && off < BPF_DATA(args[6])) {
		int arg = (off - BPF_DATA(args[0])) / sizeof(u64);
		int index = (off % sizeof(u64)) ? 1 : 0;
		syscall_get_arguments(current, regs, arg, 1, &value);
		A = get_u32(value, index);
	} else if (off == BPF_DATA(nr)) {
		A = syscall_get_nr(current, regs);
	} else if (off == BPF_DATA(arch)) {
		A = syscall_get_arch(current, regs);
	} else if (off == BPF_DATA(instruction_pointer)) {
		A = get_u32(KSTK_EIP(current), 0);
	} else if (off == BPF_DATA(instruction_pointer) + sizeof(u32)) {
		A = get_u32(KSTK_EIP(current), 1);
	}
	return A;
}
This way you can even add SECCOMP specific functions in the future by using
special offsets. (E.g. 64-bit compare between an arg and scratch memory.)

Yeah this would be a nice option if a more specialized (yet less
invasive) approach is appealing to the networking people. Eric, Joe,
netdev, ... any opinions? Would a standalone version be more useful?

Thanks!
will