
Analyzing 5-Second DNS Request Timeouts Caused by the NFQUEUE Mechanism

After our security protection software was installed on a CentOS 7.0 server (kernel version 3.10.0-123.el7.x86_64), curl requests started timing out for 5 seconds. The symptom looks like this:

[root@localhost ~]# time curl -s www.baidu.com -o /dev/null

real 0m5.120s
user 0m0.002s
sys 0m0.009s

We analyzed the program's system calls with strace:

strace -f -tt -o curl.strace curl -s www.baidu.com -o /dev/null

The strace output shows that on the first attempt curl calls sendmmsg to send two DNS packets at once, the A record query and the AAAA record query, but only the A record response comes back.

After the 5-second timeout, it falls back to calling sendto and recvfrom to handle the two DNS queries one after the other. This time both DNS responses are received and execution continues.

The packet capture, however, shows that tcpdump only sees the A record query out of the two DNS requests sent together the first time; the AAAA query packet was dropped inside the kernel network stack:

17:04:24.772049 IP 10.10.10.89.57416 > 114.114.114.114.53: 37081+ A? www.baidu.com. (31)
17:04:24.773693 IP 114.114.114.114.53 > 10.10.10.89.57416: 37081 3/0/0 CNAME www.a.shifen.com., A 180.101.49.12, A 180.101.49.11 (90)
17:04:29.776474 IP 10.10.10.89.57416 > 114.114.114.114.53: 37081+ A? www.baidu.com. (31)
17:04:29.778694 IP 114.114.114.114.53 > 10.10.10.89.57416: 37081 3/0/0 CNAME www.a.shifen.com., A 180.101.49.11, A 180.101.49.12 (90)
17:04:29.778925 IP 10.10.10.89.57416 > 114.114.114.114.53: 42471+ AAAA? www.baidu.com. (31)
17:04:29.780523 IP 114.114.114.114.53 > 10.10.10.89.57416: 42471 1/1/0 CNAME www.a.shifen.com. (115)

Searching Google for the 5-second DNS issue turns up plenty of articles describing similar symptoms, but almost all of them happen in Kubernetes environments. Our environment is just a plain CentOS box, so why does it happen here as well?

Weaveworks' article Racy conntrack and DNS lookup timeouts describes a bug in the conntrack module of older kernels that causes UDP packet loss. One of the scenarios is different threads sending UDP packets through the same socket: because of a race condition, each packet creates its own conntrack entry, but the two entries contain identical tuple information, and in that case one of the packets gets dropped.
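The pattern is easy to sketch in user space. Below is a minimal illustration of the scenario the article describes (my own example, not code from the article): two threads fire UDP datagrams through one shared, unconnected socket at almost the same time, toward the resolver address seen in the capture above; the payloads are placeholders rather than real DNS queries. Build with gcc race_send.c -lpthread.

/* race_send.c: two threads racing sendto() on one UDP socket, the
 * pattern described in the weave article. Illustration only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sock_fd;

static void *send_one(void *arg)
{
    struct sockaddr_in dst;
    const char *payload = arg;

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(53);
    inet_pton(AF_INET, "114.114.114.114", &dst.sin_addr);

    /* Both packets traverse LOCAL_OUT before either conntrack entry has
     * been confirmed at POST_ROUTING, so each gets its own entry. */
    sendto(sock_fd, payload, strlen(payload), 0,
           (struct sockaddr *)&dst, sizeof(dst));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    sock_fd = socket(AF_INET, SOCK_DGRAM, 0);

    pthread_create(&t1, NULL, send_one, "placeholder-query-1");
    pthread_create(&t2, NULL, send_one, "placeholder-query-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    close(sock_fd);
    return 0;
}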

Judging from the symptoms, the root cause of our packet loss should be the same conntrack bug, yet our scenario has no multiple threads sending on the same socket at the same time. Our protection logic is a kernel module that delivers packets to user space through NFQUEUE, where a user-space program inspects each packet and decides whether to let it pass. We therefore suspected that the NFQUEUE mechanism is what triggers this known conntrack bug.

To verify this, we write a simple kernel module that sends DNS requests to user space through NFQUEUE, plus a user-space program that simply accepts everything, and check whether the problem reproduces.
The kernel module, nfqdns.c:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <net/udp.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("nfqdns");
MODULE_ALIAS("module nfqdns netfilter");

static int nfqueue_no = 0;
MODULE_PARM_DESC(nfqueue_no, "nfqueue number");
module_param(nfqueue_no, int, 0600);

static unsigned int nf_hook_out(const struct nf_hook_ops *ops,
                                struct sk_buff *sk,
                                const struct net_device *in,
                                const struct net_device *out,
                                const struct nf_hook_state *state)
{
    struct udphdr *udph = NULL;
    struct iphdr *iph = ip_hdr(sk);

    u8 proto = iph->protocol;

    if (proto != IPPROTO_UDP) {
        return NF_ACCEPT;
    }

    udph = (struct udphdr *) skb_transport_header(sk);

    if (htons(udph->dest) == 53) {
        printk(KERN_INFO "[nfqdns]: %pI4:%d->%pI4:%d queued in [%d], skb: %p, ct: %p, tid: %d\n",
               &iph->saddr, htons(udph->source), &iph->daddr, htons(udph->dest), nfqueue_no,
               sk, sk->nfct, htons(*(unsigned short *)(udph + 1)));
        return NF_QUEUE_NR(nfqueue_no);
    }

    return NF_ACCEPT;
}

static struct nf_hook_ops nfhooks[] = {
    {
        .hook = nf_hook_out,
        .owner = THIS_MODULE,
        .pf = NFPROTO_IPV4,
        .hooknum = NF_INET_POST_ROUTING,
        .priority = NF_IP_PRI_FIRST,
    },
};


int __init nfqdns_init(void)
{
    nf_register_hooks(nfhooks, ARRAY_SIZE(nfhooks));

    printk(KERN_INFO "nfqdns module init\n");

    return 0;
}

void __exit nfqdns_exit(void)
{
    nf_unregister_hooks(nfhooks, ARRAY_SIZE(nfhooks));

    printk(KERN_INFO "nfqdns module exit\n");

    return;
}

module_init(nfqdns_init);
module_exit(nfqdns_exit);

The user-space program, user.c:

#include <stdio.h>
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/types.h>
#include <linux/netfilter.h>
#include <libnetfilter_queue/libnetfilter_queue.h>

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    u_int32_t id = 0;
    struct nfqnl_msg_packet_hdr *ph;

    ph = nfq_get_msg_packet_hdr(nfa);
    if (ph) {
        id = ntohl(ph->packet_id);
    }

    printf("packet: %u\n", id);

    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

int main(int argc, char **argv)
{
    struct nfq_handle *h;
    struct nfq_q_handle *qh;
    int fd;
    int rv;
    char buf[4096];

    assert((h = nfq_open()) != NULL);
    assert(nfq_unbind_pf(h, AF_INET) == 0);
    assert(nfq_bind_pf(h, AF_INET) == 0);

    assert((qh = nfq_create_queue(h, 0, &cb, NULL)) != NULL);
    assert(nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff) == 0);

    fd = nfq_fd(h);

    while ((rv = recv(fd, buf, sizeof(buf), 0)) && rv >= 0) {
        nfq_handle_packet(h, buf, rv);
    }

    nfq_destroy_queue(qh);

    nfq_close(h);
    return 0;
}

The Makefile:

obj-m += nfqdns.o
all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
	gcc user.c -l netfilter_queue -o user.out
clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Because the bug lives in the nf_conntrack module, make sure nf_conntrack and nf_conntrack_ipv4 are loaded, then load our experimental kernel module:

modprobe nf_conntrack
modprobe nf_conntrack_ipv4
insmod ./nfqdns.ko

Then run the user-space program:

./user.out

With this setup, running the curl command still reproduces the DNS timeout, so the problem is indeed related to the NFQUEUE mechanism.

The log output of our kernel module shows that when nf_hook_out runs, the AAAA request packet has not been dropped yet, and that the two DNS requests sent the first time do get two different conntrack entries:

[467402.634931] [nfqdns]: 10.10.10.89:56578->114.114.114.114:53 queued in [0], skb: ffff880067d15d00, ct: ffff8800905ead68, tid: 8725
[467402.634958] [nfqdns]: 10.10.10.89:56578->114.114.114.114:53 queued in [0], skb: ffff880067d15c00, ct: ffff8800905ea750, tid: 5451
[467407.643516] [nfqdns]: 10.10.10.89:56578->114.114.114.114:53 queued in [0], skb: ffff8800b37f9f00, ct: ffff8800905ead68, tid: 8725
[467407.645559] [nfqdns]: 10.10.10.89:56578->114.114.114.114:53 queued in [0], skb: ffff8800b8fe3100, ct: ffff8800905ead68, tid: 5451

The packet drop caused by the conntrack bug happens in __nf_conntrack_confirm, the function that confirms a conntrack entry and places it in the hash table. The principles and implementation of conntrack are covered in this article and are not detailed here.

We use SystemTap to verify that the drop really happens there. The SystemTap script t.stp:

%{
#include <net/udp.h>
%}

function read_tid: long (udphdr: long) %{
    struct udphdr *udph = (struct udphdr *)STAP_ARG_udphdr;
    unsigned short tid = htons(*(unsigned short *)(udph + 1));
    STAP_RETVALUE = (long) tid;
%}

probe module("nf_conntrack").function("__nf_conntrack_confirm") {
    iphdr = __get_skb_iphdr($skb)
    saddr = format_ipaddr(__ip_skb_saddr(iphdr), %{ AF_INET %})
    daddr = format_ipaddr(__ip_skb_daddr(iphdr), %{ AF_INET %})
    protocol = __ip_skb_proto(iphdr)

    udphdr = __get_skb_udphdr($skb)
    if (protocol == %{ IPPROTO_UDP %}) {
        dport = __tcp_skb_dport(udphdr)
        sport = __tcp_skb_sport(udphdr)
        if (dport == 53 || sport == 53) {
            printf("__nf_conntrack_confirm: %s:%d->%s:%d TID: %d, ct: %p, ctinfo: %d\n", saddr, sport, daddr, dport, read_tid(udphdr), $skb->nfct, $ctinfo)
        }
    }
}

probe module("nf_conntrack").function("__nf_conntrack_confirm").return {
    printf("__nf_conntrack_confirm: return %ld\n", $return)
}

The SystemTap output confirms that while processing the second DNS request packet, __nf_conntrack_confirm returns 0 (NF_DROP):

[root@localhost fg]# stap -g t.stp
__nf_conntrack_confirm: 10.10.10.89:58305->114.114.114.114:53 TID: 16998, ct: 0xffff8800905eb860, ctinfo: 2
__nf_conntrack_confirm: return 1
__nf_conntrack_confirm: 10.10.10.89:58305->114.114.114.114:53 TID: 568, ct: 0xffff8800905eb248, ctinfo: 2
__nf_conntrack_confirm: return 0

The source of __nf_conntrack_confirm (net/netfilter/nf_conntrack_core.c) in kernel 3.10.0-123.el7.x86_64 looks like this:

/* Confirm a connection given skb; places it in hash table */
int
__nf_conntrack_confirm(struct sk_buff *skb)
{
    unsigned int hash, repl_hash;
    struct nf_conntrack_tuple_hash *h;
    struct nf_conn *ct;
    struct nf_conn_help *help;
    struct nf_conn_tstamp *tstamp;
    struct hlist_nulls_node *n;
    enum ip_conntrack_info ctinfo;
    struct net *net;
    u16 zone;

    ct = nf_ct_get(skb, &ctinfo);
    net = nf_ct_net(ct);

    /* ipt_REJECT uses nf_conntrack_attach to attach related
       ICMP/TCP RST packets in other direction. Actual packet
       which created connection will be IP_CT_NEW or for an
       expected connection, IP_CT_RELATED. */
    if (CTINFO2DIR(ctinfo) != IP_CT_DIR_ORIGINAL)
        return NF_ACCEPT;

    zone = nf_ct_zone(ct);
    /* reuse the hash saved before */
    hash = *(unsigned long *)&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev;
    hash = hash_bucket(hash, net);
    repl_hash = hash_conntrack(net, zone,
                               &ct->tuplehash[IP_CT_DIR_REPLY].tuple);

    /* We're not in hash table, and we refuse to set up related
       connections for unconfirmed conns. But packet copies and
       REJECT will give spurious warnings here. */
    /* NF_CT_ASSERT(atomic_read(&ct->ct_general.use) == 1); */

    /* No external references means no one else could have
       confirmed us. */
    NF_CT_ASSERT(!nf_ct_is_confirmed(ct));
    pr_debug("Confirming conntrack %p\n", ct);

    spin_lock_bh(&nf_conntrack_lock);

    /* We have to check the DYING flag inside the lock to prevent
       a race against nf_ct_get_next_corpse() possibly called from
       user context, else we insert an already 'dead' hash, blocking
       further use of that particular connection -JM */

    if (unlikely(nf_ct_is_dying(ct))) {
        spin_unlock_bh(&nf_conntrack_lock);
        return NF_ACCEPT;
    }

    /* See if there's one in the list already, including reverse:
       NAT could have grabbed it without realizing, since we're
       not in the hash. If there is, we lost race. */
    hlist_nulls_for_each_entry(h, n, &net->ct.hash[hash], hnnode)
        if (nf_ct_tuple_equal(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
                              &h->tuple) &&
            zone == nf_ct_zone(nf_ct_tuplehash_to_ctrack(h)))
            goto out;
    hlist_nulls_for_each_entry(h, n, &net->ct.hash[repl_hash], hnnode)
        if (nf_ct_tuple_equal(&ct->tuplehash[IP_CT_DIR_REPLY].tuple,
                              &h->tuple) &&
            zone == nf_ct_zone(nf_ct_tuplehash_to_ctrack(h)))
            goto out;

    /* Remove from unconfirmed list */
    hlist_nulls_del_rcu(&ct->tuplehash[IP_CT_DIR_ORIGINAL].hnnode);

    /* Timer relative to confirmation time, not original
       setting time, otherwise we'd get timer wrap in
       weird delay cases. */
    ct->timeout.expires += jiffies;
    add_timer(&ct->timeout);
    atomic_inc(&ct->ct_general.use);
    ct->status |= IPS_CONFIRMED;

    /* set conntrack timestamp, if enabled. */
    tstamp = nf_conn_tstamp_find(ct);
    if (tstamp) {
        if (skb->tstamp.tv64 == 0)
            __net_timestamp(skb);

        tstamp->start = ktime_to_ns(skb->tstamp);
    }
    /* Since the lookup is lockless, hash insertion must be done after
     * starting the timer and setting the CONFIRMED bit. The RCU barriers
     * guarantee that no other CPU can find the conntrack before the above
     * stores are visible.
     */
    __nf_conntrack_hash_insert(ct, hash, repl_hash);
    NF_CT_STAT_INC(net, insert);
    spin_unlock_bh(&nf_conntrack_lock);

    help = nfct_help(ct);
    if (help && help->helper)
        nf_conntrack_event_cache(IPCT_HELPER, ct);

    nf_conntrack_event_cache(master_ct(ct) ?
                             IPCT_RELATED : IPCT_NEW, ct);
    return NF_ACCEPT;

out:
    NF_CT_STAT_INC(net, insert_failed);
    spin_unlock_bh(&nf_conntrack_lock);
    return NF_DROP;
}

As the code shows, when an entry with identical tuple information is already present in the conntrack table, control jumps to the out label and NF_DROP is returned, dropping the packet.

Newer CentOS kernels fix this problem. In CentOS 7.8 (kernel 3.10.0-1127.el7.x86_64) the same spot gains clash-resolution logic:

out:
    nf_ct_add_to_dying_list(ct);
    ret = nf_ct_resolve_clash(net, skb, ctinfo, h);
dying:
    nf_conntrack_double_unlock(hash, reply_hash);
    NF_CT_STAT_INC(net, insert_failed);
    local_bh_enable();
    return ret;

The source also shows that the conntrack module's insert_failed counter is incremented whenever this happens. The counter can be checked with conntrack -S, and indeed insert_failed increases every time the timeout occurs.
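For scripted checks, the same counter can also be read from /proc/net/stat/nf_conntrack, which is where conntrack -S gets its numbers. The sketch below is my own helper, not part of any conntrack tooling, and assumes the usual layout of that file: a header line of column names followed by one row of hexadecimal counters per CPU.

/* insert_failed.c: sum the per-CPU insert_failed counters from
 * /proc/net/stat/nf_conntrack. Assumes header line + hex rows. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("/proc/net/stat/nf_conntrack", "r");
    char line[1024];
    char *tok;
    int idx;
    int col = -1;
    unsigned long long total = 0;

    if (!fp) {
        perror("fopen");
        return 1;
    }

    /* Find which column of the header line is named insert_failed. */
    if (fgets(line, sizeof(line), fp)) {
        idx = 0;
        tok = strtok(line, " \t\n");
        while (tok) {
            if (strcmp(tok, "insert_failed") == 0)
                col = idx;
            tok = strtok(NULL, " \t\n");
            idx++;
        }
    }

    if (col < 0) {
        fprintf(stderr, "insert_failed column not found\n");
        fclose(fp);
        return 1;
    }

    /* Sum that column across the per-CPU rows (values are hex). */
    while (fgets(line, sizeof(line), fp)) {
        idx = 0;
        tok = strtok(line, " \t\n");
        while (tok) {
            if (idx == col)
                total += strtoull(tok, NULL, 16);
            tok = strtok(NULL, " \t\n");
            idx++;
        }
    }

    printf("insert_failed: %llu\n", total);
    fclose(fp);
    return 0;
}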

So why does the NFQUEUE mechanism trigger this bug?
The strace output of curl shows that curl calls sendmmsg to send the A and AAAA DNS requests together. The sendmmsg system call is implemented by __sys_sendmmsg (net/socket.c), which calls ___sys_sendmsg in a loop to send the two packets:

    while (datagrams < vlen) {
        if (MSG_CMSG_COMPAT & flags) {
            err = ___sys_sendmsg(sock, (struct msghdr __user *)compat_entry,
                                 &msg_sys, flags, &used_address);
            if (err < 0)
                break;
            err = __put_user(err, &compat_entry->msg_len);
            ++compat_entry;
        } else {
            err = ___sys_sendmsg(sock,
                                 (struct msghdr __user *)entry,
                                 &msg_sys, flags, &used_address);
            if (err < 0)
                break;
            err = put_user(err, &entry->msg_len);
            ++entry;
        }

        if (err)
            break;
        ++datagrams;
    }
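For completeness, the user-space side of this path looks roughly like the following sketch (my own example, not glibc's resolver code; the payloads are placeholders, not real DNS messages): two datagrams are handed to the kernel in a single sendmmsg() call on one socket, so the kernel loop above runs twice.

/* sendmmsg_two.c: send two UDP datagrams with one sendmmsg() call. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    char q_a[] = "placeholder-A-query";
    char q_aaaa[] = "placeholder-AAAA-query";
    struct iovec iov[2];
    struct mmsghdr msgs[2];
    int sent;

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(53);
    inet_pton(AF_INET, "114.114.114.114", &dst.sin_addr);

    memset(msgs, 0, sizeof(msgs));
    iov[0].iov_base = q_a;
    iov[0].iov_len = sizeof(q_a) - 1;
    iov[1].iov_base = q_aaaa;
    iov[1].iov_len = sizeof(q_aaaa) - 1;

    msgs[0].msg_hdr.msg_name = &dst;
    msgs[0].msg_hdr.msg_namelen = sizeof(dst);
    msgs[0].msg_hdr.msg_iov = &iov[0];
    msgs[0].msg_hdr.msg_iovlen = 1;

    msgs[1].msg_hdr.msg_name = &dst;
    msgs[1].msg_hdr.msg_namelen = sizeof(dst);
    msgs[1].msg_hdr.msg_iov = &iov[1];
    msgs[1].msg_hdr.msg_iovlen = 1;

    /* One system call, two datagrams: __sys_sendmmsg() iterates the
     * mmsghdr array and calls ___sys_sendmsg() once per entry. */
    sent = sendmmsg(fd, msgs, 2, 0);
    printf("sendmmsg sent %d datagrams\n", sent);

    close(fd);
    return 0;
}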

The details of the transmit path are covered in this article and are not repeated here. ___sys_sendmsg eventually calls down into the protocol stack's ip_output (net/ipv4/ip_output.c):

int ip_output(struct sk_buff *skb)
{
    struct net_device *dev = skb_dst(skb)->dev;

    IP_UPD_PO_STATS(dev_net(dev), IPSTATS_MIB_OUT, skb->len);

    skb->dev = dev;
    skb->protocol = htons(ETH_P_IP);

    return NF_HOOK_COND(NFPROTO_IPV4, NF_INET_POST_ROUTING, skb, NULL, dev,
                        ip_finish_output,
                        !(IPCB(skb)->flags & IPSKB_REROUTED));
}

In the conntrack implementation, the conntrack entry for an outgoing connection is created at the LOCAL_OUT stage, while confirming the entry, that is, actually inserting it into the conntrack table, happens at the POST_ROUTING stage. The NF_HOOK_COND here iterates over the hook functions registered with the netfilter framework at this hook point. The nf_conntrack_ipv4 module registers ipv4_confirm (net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c) at this stage with the lowest priority, so it runs last. Without our kernel module loaded, NF_HOOK_COND runs all the way through ipv4_confirm, which eventually calls __nf_conntrack_confirm and inserts the conntrack entry into the conntrack table, and then the call chain returns level by level. When the second packet is processed, the inserted conntrack entry is already visible at the LOCAL_OUT stage, so no new conntrack entry is created for it.

With our kernel module loaded, however, the nf_hook_out function we registered at POST_ROUTING has a higher priority than nf_conntrack_ipv4's ipv4_confirm and runs first. Once we return NF_QUEUE, NF_HOOK_COND returns 0 immediately; the remaining hooks only run after the kernel receives the verdict from the user-space program over netlink, so __nf_conntrack_confirm has not executed at this point. The call chain returns all the way back to ___sys_sendmsg, which then sends the second packet. At the LOCAL_OUT stage no matching conntrack entry can be found in the conntrack table, so a second conntrack entry is created for this packet, which ultimately triggers the bug.
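For reference, the hook priorities involved come from the nf_ip_hook_priorities enum in include/uapi/linux/netfilter_ipv4.h. The trimmed excerpt below keeps only the values relevant here; within a hook point, smaller numbers run earlier:

/* Trimmed excerpt from include/uapi/linux/netfilter_ipv4.h; unrelated
 * entries omitted. */
enum nf_ip_hook_priorities {
    NF_IP_PRI_FIRST = INT_MIN,             /* used by our nf_hook_out */
    NF_IP_PRI_CONNTRACK = -200,            /* conntrack entry created at LOCAL_OUT */
    NF_IP_PRI_CONNTRACK_CONFIRM = INT_MAX, /* ipv4_confirm at POST_ROUTING */
    NF_IP_PRI_LAST = INT_MAX,
};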

On a system affected by this bug, it can be worked around by adding options single-request-reopen to /etc/resolv.conf:
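A minimal example of the resulting file (the nameserver address is simply the one from the captures above; keep whatever your environment already uses):

nameserver 114.114.114.114
options single-request-reopen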

The relevant description from man 5 resolv.conf:

single-request-reopen (since glibc 2.9)
    The resolver uses the same socket for the A and AAAA requests. Some hardware mistakenly sends back only one reply. When that happens the client system will sit and wait for the second reply. Turning this option on changes this behavior so that if two requests from the same port are not handled correctly it will close the socket and open a new one before sending the second request.

With this option, the A and AAAA requests are no longer sent at the same time; the AAAA request goes out only after the response to the A request has been received, which avoids the bug even with the NFQUEUE mechanism in place.

When time permits, a follow-up post will analyze the implementation of the conntrack mechanism from the source code.

References: