NVMe: Call put_nvmeq() before calling nvme_submit_sync_cmd()
author    Matthew Wilcox <matthew.r.wilcox@intel.com>
          Fri, 4 Feb 2011 21:14:30 +0000 (16:14 -0500)
committer Matthew Wilcox <matthew.r.wilcox@intel.com>
          Fri, 4 Nov 2011 19:52:55 +0000 (15:52 -0400)
We can't have preemption disabled when we call schedule().  Accept the
possibility that we'll get preempted, and it'll cost us some cacheline
bounces.

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
drivers/block/nvme.c

index 4bfed59f3629263ecf3cbd924f47e9fede212812..1c3cd6cc0ad9e242ff7390e15d4f2a48ddc32c33 100644
@@ -842,8 +842,13 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
        nvme_setup_prps(&c.common, sg, length);
 
        nvmeq = get_nvmeq(ns);
-       status = nvme_submit_sync_cmd(nvmeq, &c, &result);
+       /* Since nvme_submit_sync_cmd sleeps, we can't keep preemption
+        * disabled.  We may be preempted at any point, and be rescheduled
+        * to a different CPU.  That will cause cacheline bouncing, but no
+        * additional races since q_lock already protects against other CPUs.
+        */
        put_nvmeq(nvmeq);
+       status = nvme_submit_sync_cmd(nvmeq, &c, &result);
 
        nvme_unmap_user_pages(dev, io.opcode & 1, io.addr, length, sg, nents);
        put_user(result, &uio->result);