1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Contents:

(*) Abstract memory access model.

 - Device operations.
 - Guarantees.
15 (*) What are memory barriers?
17 - Varieties of memory barrier.
18 - What may not be assumed about memory barriers?
19 - Data dependency barriers.
20 - Control dependencies.
21 - SMP barrier pairing.
22 - Examples of memory barrier sequences.
- Read memory barriers vs load speculation.
 - Transitivity.
26 (*) Explicit kernel barriers.
- Compiler barrier.
 - CPU memory barriers.
 - MMIO write barrier.
32 (*) Implicit kernel memory barriers.
- Locking functions.
 - Interrupt disabling functions.
36 - Sleep and wake-up functions.
37 - Miscellaneous functions.
39 (*) Inter-CPU locking barrier effects.
41 - Locks vs memory accesses.
42 - Locks vs I/O accesses.
44 (*) Where are memory barriers needed?
- Interprocessor interaction.
 - Atomic operations.
 - Accessing devices.
 - Interrupts.
51 (*) Kernel I/O barrier effects.
53 (*) Assumed minimum execution ordering model.
55 (*) The effects of the cpu cache.
- Cache coherency.
 - Cache coherency vs DMA.
59 - Cache coherency vs MMIO.
61 (*) The things CPUs get up to.
- And then there's the Alpha.

(*) Example uses.

 - Circular buffers.

(*) References.
72 ============================
73 ABSTRACT MEMORY ACCESS MODEL
74 ============================
76 Consider the following abstract model of the system:
81 +-------+ : +--------+ : +-------+
84 | CPU 1 |<----->| Memory |<----->| CPU 2 |
87 +-------+ : +--------+ : +-------+
95 +---------->| Device |<----------+
101 Each CPU executes a program that generates memory access operations. In the
102 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103 perform the memory operations in any order it likes, provided program causality
104 appears to be maintained. Similarly, the compiler may also arrange the
105 instructions it emits in any order it likes, provided it doesn't affect the
106 apparent operation of the program.
108 So in the above diagram, the effects of the memory operations performed by a
109 CPU are perceived by the rest of the system as the operations cross the
110 interface between the CPU and rest of the system (the dotted lines).
113 For example, consider the following sequence of events:
116 =============== ===============
121 The set of accesses as seen by the memory system in the middle can be arranged
122 in 24 different combinations:
124 STORE A=3, STORE B=4, x=LOAD A->3, y=LOAD B->4
125 STORE A=3, STORE B=4, y=LOAD B->4, x=LOAD A->3
126 STORE A=3, x=LOAD A->3, STORE B=4, y=LOAD B->4
127 STORE A=3, x=LOAD A->3, y=LOAD B->2, STORE B=4
128 STORE A=3, y=LOAD B->2, STORE B=4, x=LOAD A->3
129 STORE A=3, y=LOAD B->2, x=LOAD A->3, STORE B=4
	STORE B=4, STORE A=3, x=LOAD A->3, y=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 1, y == 2
	x == 1, y == 4
	x == 3, y == 2
	x == 3, y == 4
142 Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.
147 As a further example, consider this sequence of events:
150 =============== ===============
151 { A == 1, B == 2, C = 3, P == &A, Q == &C }
155 There is an obvious data dependency here, as the value loaded into D depends on
156 the address retrieved from P by CPU 2. At the end of the sequence, any of the
157 following results are possible:
159 (Q == &A) and (D == 1)
160 (Q == &B) and (D == 2)
161 (Q == &B) and (D == 4)
163 Note that CPU 2 will never try and load C into D because the CPU will load P
164 into Q before issuing the load of *Q.
DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
171 locations, but the order in which the control registers are accessed is very
172 important. For instance, imagine an ethernet card with a set of internal
173 registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:
182 STORE *A = 5, x = LOAD *D
183 x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it sets
186 the address _after_ attempting to read the register.
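
On real hardware the fix is to force the ordering explicitly. The following is
a minimal sketch only, not a complete driver: the ioremap()'d base pointer
'regs' and the register offsets are assumed purely for illustration, and in
practice the readl()/writel() accessors described under "Kernel I/O barrier
effects" below already provide suitable ordering on most platforms:

	void __iomem *regs;	/* assumed: ioremap()'d mapping of the card */
	u32 x;

	writel(5, regs + 0);	/* select internal register 5 via port A */
	mb();			/* make sure the address reaches the card first */
	x = readl(regs + 4);	/* then read the data port D */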
GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
195 respect to itself. This means that for:
197 ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
199 the CPU will issue the following memory operations:
201 Q = LOAD P, D = LOAD *Q
203 and always in that order. On most systems, smp_read_barrier_depends()
204 does nothing, but it is required for DEC Alpha. The ACCESS_ONCE()
205 is required to prevent compiler mischief. Please note that you
206 should normally use something like rcu_dereference() instead of
open-coding smp_read_barrier_depends() (see the sketch after this list).
209 (*) Overlapping loads and stores within a particular CPU will appear to be
210 ordered within that CPU. This means that for:
212 a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
214 the CPU will only issue the following sequence of memory operations:
216 a = LOAD *X, STORE *X = b
And for:

	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
222 the CPU will only issue:
224 STORE *X = c, d = LOAD *X
(Loads and stores overlap if they are targeted at overlapping pieces of
memory).
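
To illustrate the first guarantee above, here is a minimal sketch of a reader
following a pointer published by another CPU; the structure and variable names
are hypothetical, and rcu_dereference() supplies the dependency barrier (and
the ACCESS_ONCE()) for you:

	struct foo {
		int a;
	};
	struct foo *gp;			/* assigned by another CPU */

	struct foo *q;
	int d;

	q = rcu_dereference(gp);	/* LOAD gp, with dependency barrier */
	if (q)
		d = q->a;		/* LOAD q->a, ordered after LOAD gp */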
229 And there are a number of things that _must_ or _must_not_ be assumed:
231 (*) It _must_not_ be assumed that the compiler will do what you want with
232 memory references that are not protected by ACCESS_ONCE(). Without
233 ACCESS_ONCE(), the compiler is within its rights to do all sorts
234 of "creative" transformations:
236 (-) Repeat the load, possibly getting a different value on the second
237 and subsequent loads. This is especially prone to happen when
238 register pressure is high.
240 (-) Merge adjacent loads and stores to the same location. The most
familiar example is the transformation from:

		while (a)
			do_something();

	to something like the following, which, while perfectly legal in
	single-threaded code, is almost certainly not what the developer
	intended:

		if (a)
			for (;;)
				do_something();

	Using ACCESS_ONCE() as follows prevents this sort of optimization
	(see also the sketch following this list):

		while (ACCESS_ONCE(a))
			do_something();
257 (-) "Store tearing", where a single store in the source code is split
258 into smaller stores in the object code. Note that gcc really
259 will do this on some architectures when storing certain constants.
260 It can be cheaper to do a series of immediate stores than to
261 form the constant in a register and then to store that register.
263 (-) "Load tearing", which splits loads in a manner analogous to
266 (*) It _must_not_ be assumed that independent loads and stores will be issued
267 in the order given. This means that for:
269 X = *A; Y = *B; *D = Z;
271 we may get any of the following sequences:
273 X = LOAD *A, Y = LOAD *B, STORE *D = Z
274 X = LOAD *A, STORE *D = Z, Y = LOAD *B
275 Y = LOAD *B, X = LOAD *A, STORE *D = Z
276 Y = LOAD *B, STORE *D = Z, X = LOAD *A
277 STORE *D = Z, X = LOAD *A, Y = LOAD *B
278 STORE *D = Z, Y = LOAD *B, X = LOAD *A
280 (*) It _must_ be assumed that overlapping memory accesses may be merged or
281 discarded. This means that for:
283 X = *A; Y = *(A + 4);
285 we may get any one of the following sequences:
287 X = LOAD *A; Y = LOAD *(A + 4);
288 Y = LOAD *(A + 4); X = LOAD *A;
289 {X, Y} = LOAD {*A, *(A + 4) };
And for:

	*A = X; *(A + 4) = Y;

we may get any of:
297 STORE *A = X; STORE *(A + 4) = Y;
298 STORE *(A + 4) = Y; STORE *A = X;
299 STORE {*A, *(A + 4) } = {X, Y};
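
As a concrete illustration of the compiler hazards above, consider polling a
flag that another CPU will set; this is a sketch only and 'stop' is a
hypothetical shared variable:

	static int stop;		/* set to 1 by another CPU */

	void wait_for_stop(void)
	{
		/*
		 * Without ACCESS_ONCE() the compiler may hoist the load out
		 * of the loop and spin on a stale register copy forever.
		 */
		while (!ACCESS_ONCE(stop))
			cpu_relax();
	}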
302 =========================
303 WHAT ARE MEMORY BARRIERS?
304 =========================
306 As can be seen above, independent memory operations are effectively performed
307 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
308 What is required is some way of intervening to instruct the compiler and the
309 CPU to restrict the order.
311 Memory barriers are such interventions. They impose a perceived partial
312 ordering over the memory operations on either side of the barrier.
314 Such enforcement is important because the CPUs and other devices in a system
315 can use a variety of tricks to improve performance, including reordering,
316 deferral and combination of memory operations; speculative loads; speculative
317 branch prediction and various types of caching. Memory barriers are used to
318 override or suppress these tricks, allowing the code to sanely control the
319 interaction of multiple CPUs and/or devices.
322 VARIETIES OF MEMORY BARRIER
323 ---------------------------
325 Memory barriers come in four basic varieties:
327 (1) Write (or store) memory barriers.
329 A write memory barrier gives a guarantee that all the STORE operations
330 specified before the barrier will appear to happen before all the STORE
331 operations specified after the barrier with respect to the other
332 components of the system.
334 A write barrier is a partial ordering on stores only; it is not required
335 to have any effect on loads.
337 A CPU can be viewed as committing a sequence of store operations to the
338 memory system as time progresses. All stores before a write barrier will
339 occur in the sequence _before_ all the stores after the write barrier.
341 [!] Note that write barriers should normally be paired with read or data
342 dependency barriers; see the "SMP barrier pairing" subsection.
345 (2) Data dependency barriers.
347 A data dependency barrier is a weaker form of read barrier. In the case
348 where two loads are performed such that the second depends on the result
349 of the first (eg: the first load retrieves the address to which the second
350 load will be directed), a data dependency barrier would be required to
351 make sure that the target of the second load is updated before the address
352 obtained by the first load is accessed.
354 A data dependency barrier is a partial ordering on interdependent loads
355 only; it is not required to have any effect on stores, independent loads
356 or overlapping loads.
358 As mentioned in (1), the other CPUs in the system can be viewed as
359 committing sequences of stores to the memory system that the CPU being
360 considered can then perceive. A data dependency barrier issued by the CPU
361 under consideration guarantees that for any load preceding it, if that
362 load touches one of a sequence of stores from another CPU, then by the
363 time the barrier completes, the effects of all the stores prior to that
touched by the load will be perceptible to any loads issued after the data
dependency barrier.
367 See the "Examples of memory barrier sequences" subsection for diagrams
368 showing the ordering constraints.
370 [!] Note that the first load really has to have a _data_ dependency and
371 not a control dependency. If the address for the second load is dependent
372 on the first load, but the dependency is through a conditional rather than
373 actually loading the address itself, then it's a _control_ dependency and
374 a full read barrier or better is required. See the "Control dependencies"
375 subsection for more information.
377 [!] Note that data dependency barriers should normally be paired with
378 write barriers; see the "SMP barrier pairing" subsection.
381 (3) Read (or load) memory barriers.
383 A read barrier is a data dependency barrier plus a guarantee that all the
384 LOAD operations specified before the barrier will appear to happen before
385 all the LOAD operations specified after the barrier with respect to the
386 other components of the system.
388 A read barrier is a partial ordering on loads only; it is not required to
389 have any effect on stores.
Read memory barriers imply data dependency barriers, and so can substitute
for them.
394 [!] Note that read barriers should normally be paired with write barriers;
395 see the "SMP barrier pairing" subsection.
398 (4) General memory barriers.
400 A general memory barrier gives a guarantee that all the LOAD and STORE
401 operations specified before the barrier will appear to happen before all
402 the LOAD and STORE operations specified after the barrier with respect to
403 the other components of the system.
405 A general memory barrier is a partial ordering over both loads and stores.
407 General memory barriers imply both read and write memory barriers, and so
408 can substitute for either.
411 And a couple of implicit varieties:
(5) LOCK operations.

This acts as a one-way permeable barrier. It guarantees that all memory
416 operations after the LOCK operation will appear to happen after the LOCK
417 operation with respect to the other components of the system.
Memory operations that occur before a LOCK operation may appear to happen
after it completes.
422 A LOCK operation should almost always be paired with an UNLOCK operation.
425 (6) UNLOCK operations.
427 This also acts as a one-way permeable barrier. It guarantees that all
428 memory operations before the UNLOCK operation will appear to happen before
429 the UNLOCK operation with respect to the other components of the system.
431 Memory operations that occur after an UNLOCK operation may appear to
432 happen before it completes.
434 LOCK and UNLOCK operations are guaranteed to appear with respect to each
435 other strictly in the order specified.
437 The use of LOCK and UNLOCK operations generally precludes the need for
438 other sorts of memory barrier (but note the exceptions mentioned in the
439 subsection "MMIO write barrier").
442 Memory barriers are only required where there's a possibility of interaction
443 between two CPUs or between a CPU and a device. If it can be guaranteed that
444 there won't be any such interaction in any particular piece of code, then
445 memory barriers are unnecessary in that piece of code.
448 Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.
453 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
454 ----------------------------------------------
456 There are certain things that the Linux kernel memory barriers do not guarantee:
458 (*) There is no guarantee that any of the memory accesses specified before a
459 memory barrier will be _complete_ by the completion of a memory barrier
460 instruction; the barrier can be considered to draw a line in that CPU's
461 access queue that accesses of the appropriate type may not cross.
463 (*) There is no guarantee that issuing a memory barrier on one CPU will have
464 any direct effect on another CPU or any other hardware in the system. The
465 indirect effect will be the order in which the second CPU sees the effects
466 of the first CPU's accesses occur, but see the next point:
468 (*) There is no guarantee that a CPU will see the correct order of effects
469 from a second CPU's accesses, even _if_ the second CPU uses a memory
470 barrier, unless the first CPU _also_ uses a matching memory barrier (see
471 the subsection on "SMP Barrier Pairing").
473 (*) There is no guarantee that some intervening piece of off-the-CPU
474 hardware[*] will not reorder the memory accesses. CPU cache coherency
475 mechanisms should propagate the indirect effects of a memory barrier
476 between CPUs, but might not do so in order.
478 [*] For information on bus mastering DMA and coherency please read:
480 Documentation/PCI/pci.txt
481 Documentation/DMA-API-HOWTO.txt
482 Documentation/DMA-API.txt
485 DATA DEPENDENCY BARRIERS
486 ------------------------
488 The usage requirements of data dependency barriers are a little subtle, and
489 it's not always obvious that they're needed. To illustrate, consider the
490 following sequence of events:
493 =============== ===============
494 { A == 1, B == 2, C = 3, P == &A, Q == &C }
501 There's a clear data dependency here, and it would seem that by the end of the
502 sequence, Q must be either &A or &B, and that:
504 (Q == &A) implies (D == 1)
505 (Q == &B) implies (D == 4)
507 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
508 leading to the following situation:
510 (Q == &B) and (D == 2) ????
512 Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).
516 To deal with this, a data dependency barrier or better must be inserted
517 between the address load and the data load:
520 =============== ===============
521 { A == 1, B == 2, C = 3, P == &A, Q == &C }
526 <data dependency barrier>
529 This enforces the occurrence of one of the two implications, and prevents the
530 third possibility from arising.
532 [!] Note that this extremely counterintuitive situation arises most easily on
533 machines with split caches, so that, for example, one cache bank processes
534 even-numbered cache lines and the other bank processes odd-numbered cache
535 lines. The pointer P might be stored in an odd-numbered cache line, and the
536 variable B might be stored in an even-numbered cache line. Then, if the
537 even-numbered bank of the reading CPU's cache is extremely busy while the
538 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
539 but the old value of the variable B (2).
542 Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:
547 =============== ===============
548 { M[0] == 1, M[1] == 2, M[3] = 3, P == 0, Q == 3 }
553 <data dependency barrier>
557 The data dependency barrier is very important to the RCU system,
558 for example. See rcu_assign_pointer() and rcu_dereference() in
559 include/linux/rcupdate.h. This permits the current target of an RCU'd
560 pointer to be replaced with a new modified target, without the replacement
561 target appearing to be incompletely initialised.
563 See also the subsection on "Cache Coherency" for a more thorough example.
CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier to make it work correctly. Consider the following bit of
code:

	q = &a;
	if (p)
		q = &b;
	<data dependency barrier>
	x = *q;
580 This will not have the desired effect because there is no actual data
581 dependency, but rather a control dependency that the CPU may short-circuit
582 by attempting to predict the outcome in advance, so that other CPUs see
583 the load from b as having happened before the load from a. In such a
case what's actually required is:

	q = &a;
	if (p)
		q = &b;
	<read barrier>
	x = *q;


SMP BARRIER PAIRING
-------------------
597 When dealing with CPU-CPU interactions, certain types of memory barrier should
598 always be paired. A lack of appropriate pairing is almost certainly an error.
600 A write barrier should always be paired with a data dependency barrier or read
601 barrier, though a general barrier would also be viable. Similarly a read
602 barrier or a data dependency barrier should always be paired with at least an
603 write barrier, though, again, a general barrier is viable:
606 =============== ===============
609 ACCESS_ONCE(b) = 2; x = ACCESS_ONCE(b);
616 =============== ===============================
619 ACCESS_ONCE(b) = &a; x = ACCESS_ONCE(b);
620 <data dependency barrier>
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
626 [!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
631 =================== ===================
632 ACCESS_ONCE(a) = 1; }---- --->{ v = ACCESS_ONCE(c);
633 ACCESS_ONCE(b) = 2; } \ / { w = ACCESS_ONCE(d);
634 <write barrier> \ <read barrier>
635 ACCESS_ONCE(c) = 3; } / \ { x = ACCESS_ONCE(a);
636 ACCESS_ONCE(d) = 4; }---- --->{ y = ACCESS_ONCE(b);
639 EXAMPLES OF MEMORY BARRIER SEQUENCES
640 ------------------------------------
642 Firstly, write barriers act as partial orderings on store operations.
643 Consider the following sequence of events:
646 =======================
654 This sequence of events is committed to the memory coherence system in an order
655 that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:
661 | |------>| C=3 | } /\
662 | | : +------+ }----- \ -----> Events perceptible to
663 | | : | A=1 | } \/ the rest of the system
665 | CPU 1 | : | B=2 | }
667 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
668 | | +------+ } requires all stores prior to the
669 | | : | E=5 | } barrier to be committed before
670 | | : +------+ } further stores may take place
675 | Sequence in which stores are committed to the
676 | memory system by CPU 1
680 Secondly, data dependency barriers act as partial orderings on data-dependent
681 loads. Consider the following sequence of events:
684 ======================= =======================
685 { B = 7; X = 9; Y = 8; C = &Y }
690 STORE D = 4 LOAD C (gets &B)
693 Without intervention, CPU 2 may perceive the events on CPU 1 in some
694 effectively random order, despite the write barrier issued by CPU 1:
697 | | +------+ +-------+ | Sequence of update
698 | |------>| B=2 |----- --->| Y->8 | | of perception on
699 | | : +------+ \ +-------+ | CPU 2
700 | CPU 1 | : | A=1 | \ --->| C->&Y | V
701 | | +------+ | +-------+
702 | | wwwwwwwwwwwwwwww | : :
704 | | : | C=&B |--- | : : +-------+
705 | | : +------+ \ | +-------+ | |
706 | |------>| D=4 | ----------->| C->&B |------>| |
707 | | +------+ | +-------+ | |
708 +-------+ : : | : : | |
712 Apparently incorrect ---> | | B->7 |------>| |
713 perception of B (!) | +-------+ | |
716 The load of X holds ---> \ | X->9 |------>| |
717 up the maintenance \ +-------+ | |
718 of coherence of B ----->| B->2 | +-------+
723 In the above example, CPU 2 perceives that B is 7, despite the load of *C
724 (which would be B) coming after the LOAD of C.
726 If, however, a data dependency barrier were to be placed between the load of C
727 and the load of *C (ie: B) on CPU 2:
730 ======================= =======================
731 { B = 7; X = 9; Y = 8; C = &Y }
736 STORE D = 4 LOAD C (gets &B)
737 <data dependency barrier>
740 then the following will occur:
743 | | +------+ +-------+
744 | |------>| B=2 |----- --->| Y->8 |
745 | | : +------+ \ +-------+
746 | CPU 1 | : | A=1 | \ --->| C->&Y |
747 | | +------+ | +-------+
748 | | wwwwwwwwwwwwwwww | : :
750 | | : | C=&B |--- | : : +-------+
751 | | : +------+ \ | +-------+ | |
752 | |------>| D=4 | ----------->| C->&B |------>| |
753 | | +------+ | +-------+ | |
754 +-------+ : : | : : | |
760 Makes sure all effects ---> \ ddddddddddddddddd | |
761 prior to the store of C \ +-------+ | |
762 are perceptible to ----->| B->2 |------>| |
763 subsequent loads +-------+ | |
767 And thirdly, a read barrier acts as a partial order on loads. Consider the
768 following sequence of events:
771 ======================= =======================
779 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
780 some effectively random order, despite the write barrier issued by CPU 1:
783 | | +------+ +-------+
784 | |------>| A=1 |------ --->| A->0 |
785 | | +------+ \ +-------+
786 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
787 | | +------+ | +-------+
788 | |------>| B=2 |--- | : :
789 | | +------+ \ | : : +-------+
790 +-------+ : : \ | +-------+ | |
791 ---------->| B->2 |------>| |
792 | +-------+ | CPU 2 |
If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A
then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:
820 | | +------+ +-------+
821 | |------>| A=1 |------ --->| A->0 |
822 | | +------+ \ +-------+
823 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
824 | | +------+ | +-------+
825 | |------>| B=2 |--- | : :
826 | | +------+ \ | : : +-------+
827 +-------+ : : \ | +-------+ | |
828 ---------->| B->2 |------>| |
829 | +-------+ | CPU 2 |
832 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
833 barrier causes all effects \ +-------+ | |
834 prior to the storage of B ---->| A->1 |------>| |
835 to be perceptible to CPU 2 +-------+ | |
839 To illustrate this more completely, consider what could happen if the code
840 contained a load of A either side of the read barrier:
843 ======================= =======================
849 LOAD A [first load of A]
851 LOAD A [second load of A]
853 Even though the two loads of A both occur after the load of B, they may both
854 come up with different values:
857 | | +------+ +-------+
858 | |------>| A=1 |------ --->| A->0 |
859 | | +------+ \ +-------+
860 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
861 | | +------+ | +-------+
862 | |------>| B=2 |--- | : :
863 | | +------+ \ | : : +-------+
864 +-------+ : : \ | +-------+ | |
865 ---------->| B->2 |------>| |
866 | +-------+ | CPU 2 |
870 | | A->0 |------>| 1st |
872 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
873 barrier causes all effects \ +-------+ | |
874 prior to the storage of B ---->| A->1 |------>| 2nd |
875 to be perceptible to CPU 2 +-------+ | |
879 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
880 before the read barrier completes anyway:
883 | | +------+ +-------+
884 | |------>| A=1 |------ --->| A->0 |
885 | | +------+ \ +-------+
886 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
887 | | +------+ | +-------+
888 | |------>| B=2 |--- | : :
889 | | +------+ \ | : : +-------+
890 +-------+ : : \ | +-------+ | |
891 ---------->| B->2 |------>| |
892 | +-------+ | CPU 2 |
896 ---->| A->1 |------>| 1st |
898 rrrrrrrrrrrrrrrrr | |
900 | A->1 |------>| 2nd |
905 The guarantee is that the second load will always come up with A == 1 if the
906 load of B came up with B == 2. No such guarantee exists for the first load of
907 A; that may come up with either A == 0 or A == 1.
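
The flag-and-payload idiom below summarises the read/write barrier pairing
discussed above; this is a minimal sketch with hypothetical shared variables:

	int data;			/* the payload */
	int ready;			/* the flag */

	/* CPU 1 */
	data = 42;
	smp_wmb();			/* pairs with smp_rmb() below */
	ACCESS_ONCE(ready) = 1;

	/* CPU 2 */
	if (ACCESS_ONCE(ready)) {
		smp_rmb();
		BUG_ON(data != 42);	/* guaranteed by the pairing */
	}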
910 READ MEMORY BARRIERS VS LOAD SPECULATION
911 ----------------------------------------
913 Many CPUs speculate with loads: that is they see that they will need to load an
914 item from memory, and they find a time where they're not using the bus for any
915 other loads, and so do the load in advance - even though they haven't actually
916 got to that point in the instruction execution flow yet. This permits the
917 actual load instruction to potentially complete immediately because the CPU
918 already has the value to hand.
920 It may turn out that the CPU didn't actually need the value - perhaps because a
921 branch circumvented the load - in which case it can discard the value or just
922 cache it for later use.
927 ======================= =======================
929 DIVIDE } Divide instructions generally
930 DIVIDE } take a long time to perform
933 Which might appear as this:
937 --->| B->2 |------>| |
941 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
942 division speculates on the +-------+ ~ | |
946 Once the divisions are complete --> : : ~-->| |
947 the CPU can then perform the : : | |
948 LOAD with immediate effect : : +-------+
Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A
962 will force any value speculatively obtained to be reconsidered to an extent
963 dependent on the type of barrier used. If there was no change made to the
964 speculated memory location, then the speculated value will just be used:
968 --->| B->2 |------>| |
972 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
973 division speculates on the +-------+ ~ | |
978 rrrrrrrrrrrrrrrr~ | |
985 but if there was an update or an invalidation from another CPU pending, then
986 the speculation will be cancelled and the value reloaded:
990 --->| B->2 |------>| |
994 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
995 division speculates on the +-------+ ~ | |
1000 rrrrrrrrrrrrrrrrr | |
1002 The speculation is discarded ---> --->| A->1 |------>| |
1003 and an updated value is +-------+ | |
1004 retrieved : : +-------+
TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
1011 always provided by real computer systems. The following example
1012 demonstrates transitivity (also called "cumulativity"):
1015 ======================= ======================= =======================
1017 STORE X=1 LOAD X STORE Y=1
1018 <general barrier> <general barrier>
1021 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1022 This indicates that CPU 2's load from X in some sense follows CPU 1's
1023 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1024 store to Y. The question is then "Can CPU 3's load from X return 0?"
1026 Because CPU 2's load from X in some sense came after CPU 1's store, it
1027 is natural to expect that CPU 3's load from X must therefore return 1.
1028 This expectation is an example of transitivity: if a load executing on
1029 CPU A follows a load from the same variable executing on CPU B, then
1030 CPU A's load must either return the same value that CPU B's load did,
1031 or must return some later value.
1033 In the Linux kernel, use of general memory barriers guarantees
1034 transitivity. Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.
1038 However, transitivity is -not- guaranteed for read or write barriers.
1039 For example, suppose that CPU 2's general barrier in the above example
1040 is changed to a read barrier as shown below:
1043 ======================= ======================= =======================
1045 STORE X=1 LOAD X STORE Y=1
1046 <read barrier> <general barrier>
1049 This substitution destroys transitivity: in this example, it is perfectly
1050 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1051 and CPU 3's load from X to return 0.
1053 The key point is that although CPU 2's read barrier orders its pair
1054 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1055 this example runs on a system where CPUs 1 and 2 share a store buffer
1056 or a level of cache, CPU 2 might have early access to CPU 1's writes.
1057 General barriers are therefore required to ensure that all CPUs agree
1058 on the combined order of CPU 1's and CPU 2's accesses.
To reiterate, if your code requires transitivity, use general barriers
throughout.
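
In kernel terms, the three-CPU example above looks something like the
following sketch; X, Y and the rN locals are hypothetical, and both barriers
must be smp_mb() for the conclusion to hold:

	/* CPU 1 */		/* CPU 2 */		/* CPU 3 */
	X = 1;			r1 = X;			Y = 1;
				smp_mb();		smp_mb();
				r2 = Y;			r3 = X;

	/* If r1 == 1 and r2 == 0 are observed, then r3 == 1 is guaranteed. */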
1064 ========================
1065 EXPLICIT KERNEL BARRIERS
1066 ========================
The Linux kernel has a variety of different barriers that act at different
levels:
1071 (*) Compiler barrier.
1073 (*) CPU memory barriers.
1075 (*) MMIO write barrier.
COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();
1086 This is a general barrier - lesser varieties of compiler barrier do not exist.
1088 The compiler barrier has no direct effect on the CPU, which may then reorder
1089 things however it wishes.
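
For example, barrier() can be used to make the compiler re-read a flag that an
interrupt handler modifies; 'irq_seen' is hypothetical, this is a sketch only,
and note that the CPU itself may still reorder the surrounding accesses:

	static int irq_seen;		/* written from an interrupt handler */

	while (!irq_seen)
		barrier();		/* force a fresh load of irq_seen */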
CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:
1097 TYPE MANDATORY SMP CONDITIONAL
1098 =============== ======================= ===========================
1099 GENERAL mb() smp_mb()
1100 WRITE wmb() smp_wmb()
1101 READ rmb() smp_rmb()
1102 DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends()
1105 All memory barriers except the data dependency barriers imply a compiler
1106 barrier. Data dependencies do not impose any additional compiler ordering.
Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. `a[b]` would have to load the value
of b before loading a[b]); however, there is no guarantee in the C
specification that the compiler will not speculate the value of b (eg. guess
that it is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b];). There is also the problem of a compiler reloading b after having
loaded a[b], thus ending up with a newer copy of b than a[b]. A consensus has
not yet been reached about these problems; however, the ACCESS_ONCE() macro is
a good place to start looking.
1117 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1118 systems because it is assumed that a CPU will appear to be self-consistent,
1119 and will order overlapping accesses correctly with respect to itself.
1121 [!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
1125 Mandatory barriers should not be used to control SMP effects, since mandatory
1126 barriers unnecessarily impose overhead on UP systems. They may, however, be
1127 used to control MMIO effects on accesses through relaxed memory I/O windows.
1128 These are required even on non-SMP systems as they affect the order in which
1129 memory operations appear to a device by prohibiting both the compiler and the
1130 CPU from reordering them.
1133 There are some more advanced barrier functions:
1135 (*) set_mb(var, value)
1137 This assigns the value to the variable and then inserts a full memory
1138 barrier after it, depending on the function. It isn't guaranteed to
1139 insert anything more than a compiler barrier in a UP compilation.
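
On an SMP kernel, set_mb() is thus roughly equivalent to the following sketch
(the exact expansion is per-architecture):

	var = value;
	smp_mb();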
1142 (*) smp_mb__before_atomic_dec();
1143 (*) smp_mb__after_atomic_dec();
1144 (*) smp_mb__before_atomic_inc();
1145 (*) smp_mb__after_atomic_inc();
1147 These are for use with atomic add, subtract, increment and decrement
1148 functions that don't return a value, especially when used for reference
1149 counting. These functions do not imply memory barriers.
1151 As an example, consider a piece of code that marks an object as being dead
1152 and then decrements the object's reference count:
	obj->dead = 1;
	smp_mb__before_atomic_dec();
1156 atomic_dec(&obj->ref_count);
1158 This makes sure that the death mark on the object is perceived to be set
1159 *before* the reference counter is decremented.
1161 See Documentation/atomic_ops.txt for more information. See the "Atomic
1162 operations" subsection for information on where to use these.
1165 (*) smp_mb__before_clear_bit(void);
1166 (*) smp_mb__after_clear_bit(void);
1168 These are for use similar to the atomic inc/dec barriers. These are
1169 typically used for bitwise unlocking operations, so care must be taken as
1170 there are no implicit memory barriers here either.
1172 Consider implementing an unlock operation of some nature by clearing a
1173 locking bit. The clear_bit() would then need to be barriered like this:
	smp_mb__before_clear_bit();
	clear_bit( ... );
1178 This prevents memory operations before the clear leaking to after it. See
1179 the subsection on "Locking Functions" with reference to UNLOCK operation
1182 See Documentation/atomic_ops.txt for more information. See the "Atomic
1183 operations" subsection for information on where to use these.
MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();
1194 This is a variation on the mandatory write barrier that causes writes to weakly
1195 ordered I/O regions to be partially ordered. Its effects may go beyond the
1196 CPU->Hardware interface and actually affect the hardware at some level.
1198 See the subsection "Locks vs I/O accesses" for more information.
1201 ===============================
1202 IMPLICIT KERNEL MEMORY BARRIERS
1203 ===============================
Some of the other functions in the Linux kernel imply memory barriers, amongst
1206 which are locking and scheduling functions.
1208 This specification is a _minimum_ guarantee; any particular architecture may
1209 provide more substantial guarantees, but these may not be relied upon outside
1210 of arch specific code.
LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU
1225 In all cases there are variants on "LOCK" operations and "UNLOCK" operations
1226 for each construct. These operations all imply certain barriers:
1228 (1) LOCK operation implication:
1230 Memory operations issued after the LOCK will be completed after the LOCK
1231 operation has completed.
1233 Memory operations issued before the LOCK may be completed after the LOCK
1234 operation has completed.
1236 (2) UNLOCK operation implication:
1238 Memory operations issued before the UNLOCK will be completed before the
1239 UNLOCK operation has completed.
1241 Memory operations issued after the UNLOCK may be completed before the
1242 UNLOCK operation has completed.
1244 (3) LOCK vs LOCK implication:
1246 All LOCK operations issued before another LOCK operation will be completed
1247 before that LOCK operation.
1249 (4) LOCK vs UNLOCK implication:
1251 All LOCK operations issued before an UNLOCK operation will be completed
1252 before the UNLOCK operation.
1254 All UNLOCK operations issued before a LOCK operation will be completed
1255 before the LOCK operation.
1257 (5) Failed conditional LOCK implication:
1259 Certain variants of the LOCK operation may fail, either due to being
1260 unable to get the lock immediately, or due to receiving an unblocked
1261 signal whilst asleep waiting for the lock to become available. Failed
1262 locks do not imply any sort of barrier.
1264 Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
1265 equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
1267 [!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
1268 barriers is that the effects of instructions outside of a critical section
1269 may seep into the inside of the critical section.
1271 A LOCK followed by an UNLOCK may not be assumed to be full memory barrier
1272 because it is possible for an access preceding the LOCK to happen after the
1273 LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
two accesses can themselves then cross:

	*A = a;
	LOCK
	UNLOCK
	*B = b;

may occur as:

	LOCK, STORE *B, STORE *A, UNLOCK
1285 Locks and semaphores may not provide any guarantee of ordering on UP compiled
1286 systems, and so cannot be counted on in such a situation to actually achieve
1287 anything at all - especially with respect to I/O accesses - unless combined
1288 with interrupt disabling operations.
1290 See also the section on "Inter-CPU locking barrier effects".
As an example, consider the following:

	*A = a;
	*B = b;
	LOCK
	*C = c;
	*D = d;
	UNLOCK
	*E = e;
	*F = f;
1304 The following sequence of events is acceptable:
1306 LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK
1308 [+] Note that {*F,*A} indicates a combined access.
1310 But none of the following are:
1312 {*F,*A}, *B, LOCK, *C, *D, UNLOCK, *E
1313 *A, *B, *C, LOCK, *D, UNLOCK, *E, *F
1314 *A, *B, LOCK, *C, UNLOCK, *D, *E, *F
1315 *B, LOCK, *C, *D, UNLOCK, {*F,*A}, *E
1319 INTERRUPT DISABLING FUNCTIONS
1320 -----------------------------
1322 Functions that disable interrupts (LOCK equivalent) and enable interrupts
1323 (UNLOCK equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.
1328 SLEEP AND WAKE-UP FUNCTIONS
1329 ---------------------------
1331 Sleeping and waking on an event flagged in global data can be viewed as an
1332 interaction between two pieces of data: the task state of the task waiting for
1333 the event and the global data used to indicate the event. To make sure that
1334 these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.
1338 Firstly, the sleeper normally follows something like this sequence of events:
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
1347 A general memory barrier is interpolated automatically by set_current_state()
1348 after it has altered the task state:
1351 ===============================
1352 set_current_state();
1354 STORE current->state
1356 LOAD event_indicated
1358 set_current_state() may be wrapped by:
	prepare_to_wait();
	prepare_to_wait_exclusive();
1363 which therefore also imply a general memory barrier after setting the state.
1364 The whole sequence above is available in various canned forms, all of which
1365 interpolate the memory barrier in the right place:
	wait_event();
	wait_event_interruptible();
1369 wait_event_interruptible_exclusive();
1370 wait_event_interruptible_timeout();
1371 wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();
1377 Secondly, code that performs a wake up normally follows something like this:
1379 event_indicated = 1;
1380 wake_up(&event_wait_queue);
or:

	event_indicated = 1;
1385 wake_up_process(event_daemon);
1387 A write memory barrier is implied by wake_up() and co. if and only if they wake
1388 something up. The barrier occurs before the task state is cleared, and so sits
1389 between the STORE to indicate the event and the STORE to set TASK_RUNNING:
1392 =============================== ===============================
1393 set_current_state(); STORE event_indicated
1394 set_mb(); wake_up();
1395 STORE current->state <write barrier>
1396 <general barrier> STORE current->state
1397 LOAD event_indicated
1399 The available waker functions include:
	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
1406 wake_up_interruptible_all();
1407 wake_up_interruptible_nr();
1408 wake_up_interruptible_poll();
1409 wake_up_interruptible_sync();
1410 wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();
1418 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
1419 order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state(). For instance, if the
sleeper does:
1423 set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
1426 __set_current_state(TASK_RUNNING);
1427 do_something(my_data);
and the waker were to:

	my_data = value;
	event_indicated = 1;
1433 wake_up(&event_wait_queue);
1435 there's no guarantee that the change to event_indicated will be perceived by
1436 the sleeper as coming after the change to my_data. In such a circumstance, the
1437 code on both sides must interpolate its own memory barriers between the
1438 separate data accesses. Thus the above sleeper ought to do:
1440 set_current_state(TASK_INTERRUPTIBLE);
1441 if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}
1446 and the waker should do:
	my_data = value;
	smp_wmb();
	event_indicated = 1;
1451 wake_up(&event_wait_queue);
1454 MISCELLANEOUS FUNCTIONS
1455 -----------------------
1457 Other functions that imply barriers:
1459 (*) schedule() and similar imply full memory barriers.
1462 =================================
1463 INTER-CPU LOCKING BARRIER EFFECTS
1464 =================================
1466 On SMP systems locking primitives give a more substantial form of barrier: one
1467 that does affect memory access ordering on other CPUs, within the context of
1468 conflict on any particular lock.
1471 LOCKS VS MEMORY ACCESSES
1472 ------------------------
1474 Consider the following: the system has a pair of spinlocks (M) and (Q), and
1475 three CPUs; then should the following sequence of events occur:
1478 =============================== ===============================
1479 ACCESS_ONCE(*A) = a; ACCESS_ONCE(*E) = e;
1481 ACCESS_ONCE(*B) = b; ACCESS_ONCE(*F) = f;
1482 ACCESS_ONCE(*C) = c; ACCESS_ONCE(*G) = g;
1484 ACCESS_ONCE(*D) = d; ACCESS_ONCE(*H) = h;
1486 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
1487 through *H occur in, other than the constraints imposed by the separate locks
1488 on the separate CPUs. It might, for example, see:
1490 *E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M
1492 But it won't see any of:
1494 *B, *C or *D preceding LOCK M
1495 *A, *B or *C following UNLOCK M
1496 *F, *G or *H preceding LOCK Q
1497 *E, *F or *G following UNLOCK Q
1500 However, if the following occurs:
1503 =============================== ===============================
1504 ACCESS_ONCE(*A) = a;
1506 ACCESS_ONCE(*B) = b;
1507 ACCESS_ONCE(*C) = c;
1509 ACCESS_ONCE(*D) = d; ACCESS_ONCE(*E) = e;
1511 ACCESS_ONCE(*F) = f;
1512 ACCESS_ONCE(*G) = g;
1514 ACCESS_ONCE(*H) = h;
CPU 3 might see:

	*E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
1519 LOCK M [2], *H, *F, *G, UNLOCK M [2], *D
1521 But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
1523 *B, *C, *D, *F, *G or *H preceding LOCK M [1]
1524 *A, *B or *C following UNLOCK M [1]
1525 *F, *G or *H preceding LOCK M [2]
1526 *A, *B, *C, *E, *F or *G following UNLOCK M [2]
1529 LOCKS VS I/O ACCESSES
1530 ---------------------
1532 Under certain circumstances (especially involving NUMA), I/O accesses within
1533 two spinlocked sections on two different CPUs may be seen as interleaved by the
1534 PCI bridge, because the PCI bridge does not necessarily participate in the
1535 cache-coherence protocol, and is therefore incapable of issuing the required
1536 read memory barriers.
1541 =============================== ===============================
1551 may be seen by the PCI bridge as follows:
1553 STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
1555 which would probably cause the hardware to malfunction.
1558 What is necessary here is to intervene with an mmiowb() before dropping the
1559 spinlock, for example:
1562 =============================== ===============================
1574 this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
1575 before either of the stores issued on CPU 2.
1578 Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);
1594 See Documentation/DocBook/deviceiobook.tmpl for more information.
1597 =================================
1598 WHERE ARE MEMORY BARRIERS NEEDED?
1599 =================================
1601 Under normal operation, memory operation reordering is generally not going to
1602 be a problem as a single-threaded linear piece of code will still appear to
1603 work correctly, even if it's in an SMP kernel. There are, however, four
1604 circumstances in which reordering definitely _could_ be a problem:
1606 (*) Interprocessor interaction.
1608 (*) Atomic operations.
(*) Accessing devices.

(*) Interrupts.
1615 INTERPROCESSOR INTERACTION
1616 --------------------------
1618 When there's a system with more than one processor, more than one CPU in the
1619 system may be working on the same data set at the same time. This can cause
1620 synchronisation problems, and the usual way of dealing with them is to use
1621 locks. Locks, however, are quite expensive, and so it may be preferable to
1622 operate without the use of a lock if at all possible. In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
malfunction.
1626 Consider, for example, the R/W semaphore slow path. Here a waiting process is
1627 queued on the semaphore, by virtue of it having a piece of its stack linked to
1628 the semaphore's list of waiting processes:
	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};
1641 To wake up a particular waiter, the up_read() or up_write() functions have to:
1643 (1) read the next pointer from this waiter's record to know as to where the
1644 next waiter record is;
1646 (2) read the pointer to the waiter's task structure;
1648 (3) clear the task pointer to tell the waiter it has been given the semaphore;
1650 (4) call wake_up_process() on the task; and
1652 (5) release the reference held on the waiter's task struct.
1654 In other words, it has to perform this sequence of events:
	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task
and if any of these steps occur out of order, then the whole thing may
malfunction.
1665 Once it has queued itself and dropped the semaphore lock, the waiter does not
1666 get the lock again; it instead just waits for its task pointer to be cleared
1667 before proceeding. Since the record is on the waiter's stack, this means that
1668 if the task pointer is cleared _before_ the next pointer in the list is read,
1669 another CPU might start processing the waiter and might clobber the waiter's
1670 stack before the up*() function has a chance to read the next pointer.
1672 Consider then what might happen to the above sequence of events:
1675 =============================== ===============================
1682 Woken up by other event
1687 foo() clobbers *waiter
1689 LOAD waiter->list.next;
1692 This could be dealt with using the semaphore lock, but then the down_xxx()
1693 function has to needlessly get the spinlock again after being woken up.
1695 The way to deal with this is to insert a general SMP memory barrier:
	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task
1704 In this case, the barrier makes a guarantee that all memory accesses before the
1705 barrier will appear to happen before all the memory accesses after the barrier
1706 with respect to the other CPUs on the system. It does _not_ guarantee that all
1707 the memory accesses before the barrier will be complete by the time the barrier
1708 instruction itself is complete.
1710 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
1711 compiler barrier, thus making sure the compiler emits the instructions in the
1712 right order without actually intervening in the CPU. Since there's only one
1713 CPU, that CPU's dependency ordering logic will take care of everything else.
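
Expressed as a sketch in C (the function name is hypothetical, the field names
follow the rwsem example above, and put_task_struct() drops the reference held
on the task):

	static struct list_head *wake_one_waiter(struct rwsem_waiter *waiter)
	{
		struct list_head *next = waiter->list.next;	/* (1) */
		struct task_struct *tsk = waiter->task;		/* (2) */

		smp_mb();		/* the two loads above happen first... */
		waiter->task = NULL;	/* (3) ...before the waiter may vanish */
		wake_up_process(tsk);	/* (4) */
		put_task_struct(tsk);	/* (5) */
		return next;		/* safe: read before the waiter was released */
	}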
ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
1720 operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.
1724 Any atomic operation that modifies some state in memory and returns information
1725 about the state (old or new) implies an SMP-conditional general memory barrier
1726 (smp_mb()) on each side of the actual operation (with the exception of
1727 explicit lock operations, described later). These include:
	xchg();
	cmpxchg();
	atomic_xchg();
	atomic_cmpxchg();
	atomic_inc_return();
1734 atomic_dec_return();
1735 atomic_add_return();
1736 atomic_sub_return();
1737 atomic_inc_and_test();
1738 atomic_dec_and_test();
1739 atomic_sub_and_test();
1740 atomic_add_negative();
1741 atomic_add_unless(); /* when succeeds (returns 1) */
	test_and_set_bit();
	test_and_clear_bit();
1744 test_and_change_bit();
1746 These are used for such things as implementing LOCK-class and UNLOCK-class
1747 operations and adjusting reference counters towards object destruction, and as
1748 such the implicit memory barrier effects are necessary.
1751 The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();
1760 With these the appropriate explicit memory barrier should be used if necessary
1761 (smp_mb__before_clear_bit() for instance).
1764 The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();
1773 If they're used for statistics generation, then they probably don't need memory
1774 barriers, unless there's a coupling between statistical data.
1776 If they're used for reference counting on an object to control its lifetime,
1777 they probably don't need memory barriers because either the reference count
1778 will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.
1781 If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.
1785 Basically, each usage case has to be carefully considered as to whether memory
1786 barriers are needed or not.
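
For example (a sketch: the statistics structure, the reference-counted object
and free_object() are hypothetical):

	atomic_inc(&stats->rx_packets);		/* no ordering needed */

	if (atomic_dec_and_test(&obj->ref_count))	/* implied smp_mb() each side */
		free_object(obj);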
1788 The following operations are special locking primitives:
1790 test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();
1794 These implement LOCK-class and UNLOCK-class operations. These should be used in
1795 preference to other operations when implementing locking primitives, because
1796 their implementations can be optimised on many architectures.
1798 [!] Note that special memory barrier primitives are available for these
1799 situations because on some CPUs the atomic instructions used imply full memory
1800 barriers, and so barrier instructions are superfluous in conjunction with them,
1801 and in such cases the special barrier primitives will be no-ops.
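
For instance, a simple bit-granular lock might be built on them as in the
following sketch, assuming bit 0 of *word is the lock bit:

	void bit_lock(unsigned long *word)
	{
		while (test_and_set_bit_lock(0, word))	/* LOCK-class */
			cpu_relax();
	}

	void bit_unlock(unsigned long *word)
	{
		clear_bit_unlock(0, word);		/* UNLOCK-class */
	}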
1803 See Documentation/atomic_ops.txt for more information.
ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
1810 a set of memory locations. To control such a device, the driver usually has to
1811 make the right memory accesses in exactly the right order.
1813 However, having a clever CPU or a clever compiler creates a potential problem
1814 in that the carefully sequenced accesses in the driver code won't reach the
1815 device in the requisite order if the CPU or the compiler thinks it is more
1816 efficient to reorder, combine or merge accesses - something that would cause
1817 the device to malfunction.
1819 Inside of the Linux kernel, I/O should be done through the appropriate accessor
1820 routines - such as inb() or writel() - which know how to make such accesses
1821 appropriately sequential. Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
would be needed:
1825 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
1826 so for _all_ general drivers locks should be used and mmiowb() must be
1827 issued prior to unlocking the critical section.
1829 (2) If the accessor functions are used to refer to an I/O memory window with
1830 relaxed memory access properties, then _mandatory_ memory barriers are
1831 required to enforce ordering.
1833 See Documentation/DocBook/deviceiobook.tmpl for more information.
INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.
1843 This may be alleviated - at least in part - by disabling local interrupts (a
1844 form of locking), such that the critical operations are all contained within
1845 the interrupt-disabled section in the driver. Whilst the driver's interrupt
1846 routine is executing, the driver's core may not run on the same CPU, and its
1847 interrupt is not permitted to happen again until the current interrupt has been
1848 handled, thus the interrupt handler does not need to lock against that.
1850 However, consider a driver that was talking to an ethernet card that sports an
1851 address register and a data register. If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>
1863 The store to the data register might happen after the second store to the
1864 address register if ordering rules are sufficiently relaxed:
1866 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
1869 If ordering rules are relaxed, it must be assumed that accesses done inside an
1870 interrupt disabled section may leak outside of it and may interleave with
1871 accesses performed in an interrupt - and vice versa - unless implicit or
1872 explicit barriers are used.
1874 Normally this won't be a problem because the I/O accesses done inside such
1875 sections will include synchronous load operations on strictly ordered I/O
1876 registers that form implicit I/O barriers. If this isn't sufficient then an
1877 mmiowb() may need to be used explicitly.
1880 A similar situation may occur between an interrupt routine and two routines
1881 running on separate CPUs that communicate with each other. If such a case is
1882 likely, then interrupt-disabling locks should be used to guarantee ordering.
1885 ==========================
1886 KERNEL I/O BARRIER EFFECTS
1887 ==========================
When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():
1894 These are intended to talk to I/O space rather than memory space, but
1895 that's primarily a CPU-specific concept. The i386 and x86_64 processors do
1896 indeed have special I/O space access cycles and instructions, but many
1897 CPUs don't have such a concept.
1899 The PCI bus, amongst others, defines an I/O space concept which - on such
1900 CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
1901 space. However, it may also be mapped as a virtual I/O space in the CPU's
memory map, particularly on those CPUs that don't support alternate I/O
spaces.
1905 Accesses to this space may be fully synchronous (as on i386), but
intermediary bridges (such as the PCI host bridge) may not fully honour
that.
1909 They are guaranteed to be fully ordered with respect to each other.
1911 They are not guaranteed to be fully ordered with respect to other types of
1912 memory and I/O operation.
1914 (*) readX(), writeX():
1916 Whether these are guaranteed to be fully ordered and uncombined with
1917 respect to each other on the issuing CPU depends on the characteristics
1918 defined for the memory window through which they're accessing. On later
i386 architecture machines, for example, this is controlled by way of the
MTRR registers.
1922 Ordinarily, these will be guaranteed to be fully ordered and uncombined,
1923 provided they're not accessing a prefetchable device.
1925 However, intermediary hardware (such as a PCI bridge) may indulge in
1926 deferral if it so wishes; to flush a store, a load from the same location
1927 is preferred[*], but a load from the same device or from configuration
1928 space should suffice for PCI.
     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.
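     A common way of flushing such a deferred store, therefore, is to read
     back from a different, harmless register of the same device; a sketch
     (FOO_CTRL and FOO_STAT are hypothetical):

	writel(val, base + FOO_CTRL);
	(void) readl(base + FOO_STAT);	/* the read cannot complete until
					 * the posted write has reached the
					 * device */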
1934 Used with prefetchable I/O memory, an mmiowb() barrier may be required to
1935 force stores to be ordered.
1937 Please refer to the PCI specification for more information on interactions
1938 between PCI transactions.
 (*) readX_relaxed()

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available.
1945 (*) ioreadX(), iowriteX()
1947 These will perform appropriately for the type of access they're actually
1948 doing, be it inX()/outX() or readX()/writeX().
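     For example, a brief sketch using the generic iomap facilities (the BAR
     number and register offsets are made up, and error handling is omitted):

	void __iomem *regs;
	u32 status;

	regs = pci_iomap(pdev, 0, 0);		/* map BAR 0, PIO or MMIO */
	iowrite32(1, regs + FOO_CTRL);		/* becomes outl() or writel() */
	status = ioread32(regs + FOO_STAT);	/* becomes inl() or readl() */
	pci_iounmap(pdev, regs);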
1951 ========================================
1952 ASSUMED MINIMUM EXECUTION ORDERING MODEL
1953 ========================================
1955 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
1956 maintain the appearance of program causality with respect to itself. Some CPUs
1957 (such as i386 or x86_64) are more constrained than others (such as powerpc or
1958 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
1959 of arch-specific code.
1961 This means that it must be considered that the CPU will execute its instruction
1962 stream in any order it feels like - or even in parallel - provided that if an
1963 instruction in the stream depends on an earlier instruction, then that
1964 earlier instruction must be sufficiently complete[*] before the later
1965 instruction may proceed; in other words: provided that the appearance of
1966 causality is maintained.
1968 [*] Some instructions have more than one effect - such as changing the
1969 condition codes, changing registers or changing memory - and different
1970 instructions may depend on different effects.
1972 A CPU may also discard any instruction sequence that winds up having no
1973 ultimate effect. For example, if two adjacent instructions both load an
1974 immediate value into the same register, the first may be discarded.
Similarly, it has to be assumed that the compiler might reorder the
instruction stream in any way it sees fit, again provided the appearance of
causality is maintained.
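Where the compiler must be prevented from migrating accesses across a
particular point, the barrier() macro from <linux/compiler.h> can be used; a
minimal sketch:

	a = *X;
	barrier();	/* the compiler may not move accesses across this
			 * point, though the CPU still may */
	*Y = b;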
1982 ============================
1983 THE EFFECTS OF THE CPU CACHE
1984 ============================
1986 The way cached memory operations are perceived across the system is affected to
1987 a certain extent by the caches that lie between CPUs and memory, and by the
1988 memory coherence system that maintains the consistency of state in the system.
1990 As far as the way a CPU interacts with another part of the system through the
1991 caches goes, the memory system has to include the CPU's caches, and memory
1992 barriers for the most part act at the interface between the CPU and its cache
1993 (memory barriers logically act on the dotted line in the following diagram):
	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
2017 Although any particular load or store may not actually appear outside of the
2018 CPU that issued it since it may have been satisfied within the CPU's own cache,
2019 it will still appear as if the full memory access had taken place as far as the
2020 other CPUs are concerned since the cache coherency mechanisms will migrate the
2021 cacheline over to the accessing CPU and propagate the effects upon conflict.
2023 The CPU core may execute instructions in any order it deems fit, provided the
2024 expected program causality appears to be maintained. Some of the instructions
2025 generate load and store operations which then go into the queue of memory
2026 accesses to be performed. The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an
instruction to complete.
2030 What memory barriers are concerned with is controlling the order in which
2031 accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.
2035 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2036 their own loads and stores as if they had happened in program order.
2038 [!] MMIO or other device accesses may bypass the cache system. This depends on
2039 the properties of the memory window through which devices are accessed and/or
2040 the use of any special device communication instructions the CPU may have.
CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
2047 caches are expected to be coherent, there's no guarantee that that coherency
2048 will be ordered. This means that whilst changes made on one CPU will
2049 eventually become visible on all CPUs, there's no guarantee that they will
2050 become apparent in the same order on those other CPUs.
2053 Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2054 has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  : +--->| Cache A |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 1 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache B |<------->|        |
	            :      +---------+         |        |
	            :                          | Memory |
	            :      +---------+         | System |
	+--------+  : +--->| Cache C |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 2 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache D |<------->|        |
	            :      +---------+         |        |
	            :                          +--------+
	            :
2076 Imagine the system has the following properties:
 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;
2084 (*) whilst the CPU core is interrogating one cache, the other cache may be
2085 making use of the bus to access the rest of the system - perhaps to
2086 displace a dirty cacheline or to do a speculative load;
2088 (*) each cache has a queue of operations that need to be applied to that cache
2089 to maintain coherency with the rest of the system;
2091 (*) the coherency queue is not flushed by normal loads to lines already
2092 present in the cache, even though the contents of the queue may
2093 potentially affect those loads.
2095 Imagine, then, that two writes are made on the first CPU, with a write barrier
2096 between them to guarantee that they will appear to reach that CPU's caches in
2097 the requisite order:
	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();                      Make sure change to v is visible before
	                                 change to p
	<A:modify v=2>                  v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>                 p is now in cache B exclusively
2109 The write memory barrier forces the other CPUs in the system to perceive that
2110 the local CPU's caches have apparently been updated in the correct order. But
2111 now imagine that the second CPU wants to read those values:
	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                                ...
	                q = p;
	                x = *q;
2119 The above pair of reads may then fail to happen in the expected order, as the
2120 cacheline holding p may get updated in one of the second CPU's caches whilst
2121 the update to the cacheline holding v is delayed in the other of the second
2122 CPU's caches by some other cache event:
	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>  <C:busy>
	                <C:queue v=2>
	p = &v;         q = p;
	                <D:request p>
	<B:modify p=&v> <D:commit p=&v>
	                <D:read p>
	                x = *q;
	                <C:read *q>     Reads from v before v updated in cache
	                <C:unbusy>
	                <C:commit v=2>
2140 Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2141 no guarantee that, without intervention, the order of update will be the same
2142 as that committed on CPU 1.
2145 To intervene, we need to interpolate a data dependency barrier or a read
2146 barrier between the loads. This will force the cache to commit its coherency
2147 queue before processing any further requests:
	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>  <C:busy>
	                <C:queue v=2>
	p = &v;         q = p;
	                <D:request p>
	<B:modify p=&v> <D:commit p=&v>
	                <D:read p>
	                smp_read_barrier_depends()
	                <C:unbusy>
	                <C:commit v=2>
	                x = *q;
	                <C:read *q>     Reads from v after v updated in cache
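Expressed as kernel C rather than as cache events, the pattern that requires
this pairing looks roughly like the following sketch (the variables correspond
to those in the sequences above):

	int u, v;
	int *p = &u;

	void cpu1(void)
	{
		v = 2;
		smp_wmb();			/* commit v before publishing p */
		p = &v;
	}

	void cpu2(void)
	{
		int *q, x;

		q = p;
		smp_read_barrier_depends();	/* make the coherency queue
						 * drain before the dependent
						 * load */
		x = *q;
	}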
2167 This sort of problem can be encountered on DEC Alpha processors as they have a
2168 split cache that improves performance by making better use of the data bus.
2169 Whilst most CPUs do imply a data dependency barrier on the read when a memory
2170 access depends on a read, not all do, so it may not be relied on.
Other CPUs may also have split caches, but they must coordinate between the
various cachelets for normal memory accesses.  The semantics of the Alpha
remove the need for such coordination in the absence of memory barriers.
2177 CACHE COHERENCY VS DMA
2178 ----------------------
2180 Not all systems maintain cache coherency with respect to devices doing DMA. In
2181 such cases, a device attempting DMA may obtain stale data from RAM because
2182 dirty cache lines may be resident in the caches of various CPUs, and may not
2183 have been written back to RAM yet. To deal with this, the appropriate part of
2184 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2185 invalidate them as well).
2187 In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2188 cache lines being written back to RAM from a CPU's cache after the device has
2189 installed its own data, or cache lines present in the CPU's cache may simply
2190 obscure the fact that RAM has been updated, until at such time as the cacheline
2191 is discarded from the CPU's cache and reloaded. To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
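The DMA mapping API performs these flushes and invalidations on the driver's
behalf; a minimal sketch (the device, buffer and length are hypothetical):

	/* CPU -> device: dirty cachelines are written back before the
	 * device reads the buffer */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* ... tell the device to start the transfer ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

	/* device -> CPU: stale cachelines are invalidated so that the CPU
	 * will see what the device wrote */
	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* ... wait for the device to fill the buffer ... */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);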
2195 See Documentation/cachetlb.txt for more information on cache management.
2198 CACHE COHERENCY VS MMIO
2199 -----------------------
2201 Memory mapped I/O usually takes place through memory locations that are part of
2202 a window in the CPU's memory space that has different properties assigned than
2203 the usual RAM directed window.
2205 Amongst these properties is usually the fact that such accesses bypass the
2206 caching entirely and go directly to the device buses. This means MMIO accesses
2207 may, in effect, overtake accesses to cached memory that were emitted earlier.
2208 A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
2213 =========================
2214 THE THINGS CPUS GET UP TO
2215 =========================
2217 A programmer might take it for granted that the CPU will perform memory
2218 operations in exactly the order specified, so that if the CPU is, for example,
2219 given the following piece of code to execute:
2221 a = ACCESS_ONCE(*A);
2222 ACCESS_ONCE(*B) = b;
2223 c = ACCESS_ONCE(*C);
2224 d = ACCESS_ONCE(*D);
2225 ACCESS_ONCE(*E) = e;
2227 they would then expect that the CPU will complete the memory operation for each
2228 instruction before moving on to the next one, leading to a definite sequence of
2229 operations as seen by external observers in the system:
2231 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2234 Reality is, of course, much messier. With many CPUs and compilers, the above
2235 assumption doesn't hold because:
 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;
2241 (*) loads may be done speculatively, and the result discarded should it prove
2242 to have been unnecessary;
2244 (*) loads may be done speculatively, leading to the result having been fetched
2245 at the wrong time in the expected sequence of events;
2247 (*) the order of the memory accesses may be rearranged to promote better use
2248 of the CPU buses and caches;
2250 (*) loads and stores may be combined to improve performance when talking to
2251 memory or I/O hardware that can do batched accesses of adjacent locations,
2252 thus cutting down on transaction setup costs (memory and PCI devices may
2253 both be able to do this); and
2255 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2256 mechanisms may alleviate this - once the store has actually hit the cache
2257 - there's no guarantee that the coherency management will be propagated in
2258 order to other CPUs.
So what another CPU, say, might actually observe from the above piece of code
is:
2263 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2265 (Where "LOAD {*C,*D}" is a combined load)
2268 However, it is guaranteed that a CPU will be self-consistent: it will see its
2269 _own_ accesses appear to be correctly ordered, without the need for a memory
2270 barrier. For instance with the following code:
2272 U = ACCESS_ONCE(*A);
2273 ACCESS_ONCE(*A) = V;
2274 ACCESS_ONCE(*A) = W;
2275 X = ACCESS_ONCE(*A);
2276 ACCESS_ONCE(*A) = Y;
2277 Z = ACCESS_ONCE(*A);
2279 and assuming no intervention by an external influence, it can be assumed that
2280 the final result will appear to be:
	U == the original value of *A
	X == W
	Z == Y
	*A == Y
The code above may cause the CPU to generate the full sequence of memory
accesses:
2290 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2292 in that order, but, without intervention, the sequence may have almost any
2293 combination of elements combined or discarded, provided the program's view of
2294 the world remains consistent. Note that ACCESS_ONCE() is -not- optional
2295 in the above example, as there are architectures where a given CPU might
2296 interchange successive loads to the same location. On such architectures,
2297 ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
2298 Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
2299 special ld.acq and st.rel instructions that prevent such reordering.
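For reference, ACCESS_ONCE() boils down to a volatile access; its definition
is essentially:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

and it is the volatile qualification that forbids the compiler from fusing,
repeating or discarding the marked accesses to a given location.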
The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.  For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be assumed
that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
2327 AND THEN THERE'S THE ALPHA
2328 --------------------------
2330 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
2331 some versions of the Alpha CPU have a split data cache, permitting them to have
2332 two semantically-related cache lines updated at separate times. This is where
2333 the data dependency barrier really becomes necessary as this synchronises both
2334 caches with the memory coherence system, thus making it seem like pointer
2335 changes vs new data occur in the right order.
2337 The Alpha defines the Linux kernel's memory barrier model.
2339 See the subsection on "Cache Coherency" above.
============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
2350 of a lock to serialise the producer with the consumer. See:
	Documentation/circular-buffers.txt

for details.
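In outline, the producer side of that scheme looks something like the
following sketch (the ring structure is hypothetical; CIRC_SPACE() comes from
<linux/circ_buf.h>, and RING_SIZE must be a power of two):

	struct ring {
		unsigned long	head;		/* written by the producer */
		unsigned long	tail;		/* written by the consumer */
		int		items[RING_SIZE];
	};

	void produce(struct ring *ring, int item)
	{
		unsigned long head = ring->head;
		unsigned long tail = ACCESS_ONCE(ring->tail);

		if (CIRC_SPACE(head, tail, RING_SIZE) >= 1) {
			ring->items[head & (RING_SIZE - 1)] = item;
			smp_wmb();	/* commit the item before moving head */
			ring->head = (head + 1) & (RING_SIZE - 1);
		}
	}

The consumer pairs this with a read barrier between its read of head and its
read of the item.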
==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
2363 Chapter 5.2: Physical Address Space Characteristics
2364 Chapter 5.4: Caches and Write Buffers
2365 Chapter 5.5: Data Sharing
2366 Chapter 5.6: Read/Write Ordering
2368 AMD64 Architecture Programmer's Manual Volume 2: System Programming
2369 Chapter 7.1: Memory-Access Ordering
2370 Chapter 7.4: Buffering and Combining Memory Writes
2372 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
2373 System Programming Guide
2374 Chapter 7.1: Locked Atomic Operations
2375 Chapter 7.2: Memory Ordering
2376 Chapter 7.4: Serializing Instructions
2378 The SPARC Architecture Manual, Version 9
2379 Chapter 8: Memory Models
2380 Appendix D: Formal Specification of the Memory Models
2381 Appendix J: Programming with the Memory Models
2383 UltraSPARC Programmer Reference Manual
2384 Chapter 5: Memory Accesses and Cacheability
2385 Chapter 15: Sparc-V9 Memory Models
2387 UltraSPARC III Cu User's Manual
2388 Chapter 9: Memory Models
2390 UltraSPARC IIIi Processor User's Manual
2391 Chapter 8: Memory Models
UltraSPARC Architecture 2005
	Chapter 9: Memory
2395 Appendix D: Formal Specifications of the Memory Models
2397 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
2398 Chapter 8: Memory Models
2399 Appendix F: Caches and Cache Coherency
2401 Solaris Internals, Core Kernel Architecture, p63-68:
Chapter 3.3: Hardware Considerations for Locks and Synchronization
2405 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
2406 for Kernel Programmers:
2407 Chapter 13: Other Memory Models
2409 Intel Itanium Architecture Software Developer's Manual: Volume 1:
2410 Section 2.6: Speculation
2411 Section 4.4: Memory Access