net: use per task frag allocator in skb_append_datato_frags
author    Eric Dumazet <edumazet@google.com>
          Fri, 28 Dec 2012 06:06:37 +0000 (06:06 +0000)
committer David S. Miller <davem@davemloft.net>
          Fri, 28 Dec 2012 23:25:19 +0000 (15:25 -0800)
commit    b2111724a639ec31a19fdca62ea3a0a222d59d11
tree      0d707599721ae209b176feab8ce41b7e63191e78
parent    210ab6656fa8c49d7238c13f85ed551ebab94fb0

Use the new per task frag allocator in skb_append_datato_frags()
to reduce the number of frags and the page allocator overhead.
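
For illustration, a simplified sketch of skb_append_datato_frags() with this
change (not the verbatim patch; the authoritative code is in net/core/skbuff.c).
It assumes the per-task allocator interface from the earlier "net: use a per
task frag allocator" work: current->task_frag and sk_page_frag_refill().

	int skb_append_datato_frags(struct sock *sk, struct sk_buff *skb,
				int (*getfrag)(void *from, char *to, int offset,
					       int len, int odd, struct sk_buff *skb),
				void *from, int length)
	{
		int frg_cnt = skb_shinfo(skb)->nr_frags;
		/* per task frag: reused across calls, no page alloc per chunk */
		struct page_frag *pfrag = &current->task_frag;
		int offset = 0;
		int copy, ret;

		do {
			if (frg_cnt >= MAX_SKB_FRAGS)
				return -EMSGSIZE;

			/* refill (or keep using) the per task frag */
			if (!sk_page_frag_refill(sk, pfrag))
				return -ENOMEM;

			copy = min_t(int, length, pfrag->size - pfrag->offset);

			/* copy the user data into the frag page */
			ret = getfrag(from, page_address(pfrag->page) + pfrag->offset,
				      offset, copy, 0, skb);
			if (ret < 0)
				return -EFAULT;

			/* attach the just-filled part of the frag to the skb */
			skb_fill_page_desc(skb, frg_cnt, pfrag->page,
					   pfrag->offset, copy);
			frg_cnt++;
			pfrag->offset += copy;
			get_page(pfrag->page);

			skb->truesize += copy;
			atomic_add(copy, &sk->sk_wmem_alloc);
			skb->len += copy;
			skb->data_len += copy;
			offset += copy;
			length -= copy;
		} while (length > 0);

		return 0;
	}

Because consecutive chunks land in the same per-task page until it is full,
fewer frags are produced and the page allocator is hit far less often, which
is what the profile below shows.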

Tested:
 ifconfig lo mtu 16436
 perf record netperf -t UDP_STREAM ; perf report

before:
 Throughput: 32928 Mbit/s
    51.79%  netperf  [kernel.kallsyms]  [k] copy_user_generic_string
     5.98%  netperf  [kernel.kallsyms]  [k] __alloc_pages_nodemask
     5.58%  netperf  [kernel.kallsyms]  [k] get_page_from_freelist
     5.01%  netperf  [kernel.kallsyms]  [k] __rmqueue
     3.74%  netperf  [kernel.kallsyms]  [k] skb_append_datato_frags
     1.87%  netperf  [kernel.kallsyms]  [k] prep_new_page
     1.42%  netperf  [kernel.kallsyms]  [k] next_zones_zonelist
     1.28%  netperf  [kernel.kallsyms]  [k] __inc_zone_state
     1.26%  netperf  [kernel.kallsyms]  [k] alloc_pages_current
     0.78%  netperf  [kernel.kallsyms]  [k] sock_alloc_send_pskb
     0.74%  netperf  [kernel.kallsyms]  [k] udp_sendmsg
     0.72%  netperf  [kernel.kallsyms]  [k] zone_watermark_ok
     0.68%  netperf  [kernel.kallsyms]  [k] __cpuset_node_allowed_softwall
     0.67%  netperf  [kernel.kallsyms]  [k] fib_table_lookup
     0.60%  netperf  [kernel.kallsyms]  [k] memcpy_fromiovecend
     0.55%  netperf  [kernel.kallsyms]  [k] __udp4_lib_lookup

after:
 Throughput: 47185 Mbit/s
    61.74%  netperf  [kernel.kallsyms]  [k] copy_user_generic_string
     2.07%  netperf  [kernel.kallsyms]  [k] prep_new_page
     1.98%  netperf  [kernel.kallsyms]  [k] skb_append_datato_frags
     1.02%  netperf  [kernel.kallsyms]  [k] sock_alloc_send_pskb
     0.97%  netperf  [kernel.kallsyms]  [k] enqueue_task_fair
     0.97%  netperf  [kernel.kallsyms]  [k] udp_sendmsg
     0.91%  netperf  [kernel.kallsyms]  [k] __ip_route_output_key
     0.88%  netperf  [kernel.kallsyms]  [k] __netif_receive_skb
     0.87%  netperf  [kernel.kallsyms]  [k] fib_table_lookup
     0.85%  netperf  [kernel.kallsyms]  [k] resched_task
     0.78%  netperf  [kernel.kallsyms]  [k] __udp4_lib_lookup
     0.77%  netperf  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/core/skbuff.c