1 | =========================================== |
2 | Control Flow Integrity Design Documentation |
3 | =========================================== |
4 | |
5 | This page documents the design of the :doc:`ControlFlowIntegrity` schemes |
6 | supported by Clang. |
7 | |
8 | Forward-Edge CFI for Virtual Calls |
9 | ================================== |
10 | |
11 | This scheme works by allocating, for each static type used to make a virtual |
12 | call, a region of read-only storage in the object file holding a bit vector |
that maps onto the region of storage used for those virtual tables. Each
14 | set bit in the bit vector corresponds to the `address point`_ for a virtual |
15 | table compatible with the static type for which the bit vector is being built. |
16 | |
17 | For example, consider the following three C++ classes: |
18 | |
19 | .. code-block:: c++ |
20 | |
21 | struct A { |
22 | virtual void f1(); |
23 | virtual void f2(); |
24 | virtual void f3(); |
25 | }; |
26 | |
27 | struct B : A { |
28 | virtual void f1(); |
29 | virtual void f2(); |
30 | virtual void f3(); |
31 | }; |
32 | |
33 | struct C : A { |
34 | virtual void f1(); |
35 | virtual void f2(); |
36 | virtual void f3(); |
37 | }; |
38 | |
39 | The scheme will cause the virtual tables for A, B and C to be laid out |
40 | consecutively: |
41 | |
42 | .. csv-table:: Virtual Table Layout for A, B, C |
43 | :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 |
44 | |
45 | A::offset-to-top, &A::rtti, &A::f1, &A::f2, &A::f3, B::offset-to-top, &B::rtti, &B::f1, &B::f2, &B::f3, C::offset-to-top, &C::rtti, &C::f1, &C::f2, &C::f3 |
46 | |
The bit vectors for static types A, B and C will look like this:
48 | |
49 | .. csv-table:: Bit Vectors for A, B, C |
50 | :header: Class, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 |
51 | |
52 | A, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0 |
53 | B, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 |
54 | C, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0 |
55 | |
56 | Bit vectors are represented in the object file as byte arrays. By loading |
57 | from indexed offsets into the byte array and applying a mask, a program can |
58 | test bits from the bit set with a relatively short instruction sequence. Bit |
59 | vectors may overlap so long as they use different bits. For the full details, |
60 | see the `ByteArrayBuilder`_ class. |
61 | |
62 | In this case, assuming A is laid out at offset 0 in bit 0, B at offset 0 in |
63 | bit 1 and C at offset 0 in bit 2, the byte array would look like this: |
64 | |
65 | .. code-block:: c++ |
66 | |
  char bits[] = { 0, 0, 1, 0, 0, 0, 0, 3, 0, 0, 0, 0, 5, 0, 0 };
68 | |
69 | To emit a virtual call, the compiler will assemble code that checks that |
70 | the object's virtual table pointer is in-bounds and aligned and that the |
71 | relevant bit is set in the bit vector. |
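
Schematically, and assuming a 64-bit target, the emitted check behaves like
the following C++ sketch; every constant and name here is an illustrative
stand-in for an immediate that the compiler bakes into the call site:

.. code-block:: c++

  #include <cstdint>

  extern const char bits[];              // the byte array described above
  const uintptr_t kRegionStart = 0x1000; // where the vtables were laid out
  const uintptr_t kMaxIndex    = 0x17f;  // last valid index into the region
  const unsigned  kAlignLog2   = 3;      // vtable entries are 8-byte aligned
  const char      kTypeMask    = 0x10;   // this type's bit within each byte

  bool VTableIsValidForType(uintptr_t VTable) {
    // The rotate folds the alignment check into the range check: a
    // misaligned pointer gets its low bits rotated into the high bits
    // and falls out of range.
    uintptr_t Diff  = VTable - kRegionStart;
    uintptr_t Index = (Diff >> kAlignLog2) | (Diff << (64 - kAlignLog2));
    if (Index > kMaxIndex)
      return false;                      // the ja to ud2 in the listing
    return bits[Index] & kTypeMask;      // the testb in the listing
  }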
72 | |
For example, on x86, a typical virtual call may look like this:
74 | |
75 | .. code-block:: none |
76 | |
77 | ca7fbb: 48 8b 0f mov (%rdi),%rcx |
78 | ca7fbe: 48 8d 15 c3 42 fb 07 lea 0x7fb42c3(%rip),%rdx |
79 | ca7fc5: 48 89 c8 mov %rcx,%rax |
80 | ca7fc8: 48 29 d0 sub %rdx,%rax |
81 | ca7fcb: 48 c1 c0 3d rol $0x3d,%rax |
82 | ca7fcf: 48 3d 7f 01 00 00 cmp $0x17f,%rax |
83 | ca7fd5: 0f 87 36 05 00 00 ja ca8511 |
84 | ca7fdb: 48 8d 15 c0 0b f7 06 lea 0x6f70bc0(%rip),%rdx |
85 | ca7fe2: f6 04 10 10 testb $0x10,(%rax,%rdx,1) |
86 | ca7fe6: 0f 84 25 05 00 00 je ca8511 |
87 | ca7fec: ff 91 98 00 00 00 callq *0x98(%rcx) |
88 | [...] |
89 | ca8511: 0f 0b ud2 |
90 | |
91 | The compiler relies on co-operation from the linker in order to assemble |
92 | the bit vectors for the whole program. It currently does this using LLVM's |
93 | `type metadata`_ mechanism together with link-time optimization. |
94 | |
95 | .. _address point: https://itanium-cxx-abi.github.io/cxx-abi/abi.html#vtable-general |
96 | .. _type metadata: https://llvm.org/docs/TypeMetadata.html |
97 | .. _ByteArrayBuilder: https://llvm.org/docs/doxygen/html/structllvm_1_1ByteArrayBuilder.html |
98 | |
99 | Optimizations |
100 | ------------- |
101 | |
The scheme described above is the fully general variant.
Most of the time we are able to apply one or more of the following
optimizations to improve binary size or performance.
105 | |
106 | In fact, if you try the above example with the current version of the |
107 | compiler, you will probably find that it will not use the described virtual |
108 | table layout or machine instructions. Some of the optimizations we are about |
109 | to introduce cause the compiler to use a different layout or a different |
110 | sequence of machine instructions. |
111 | |
112 | Stripping Leading/Trailing Zeros in Bit Vectors |
113 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
114 | |
115 | If a bit vector contains leading or trailing zeros, we can strip them from |
116 | the vector. The compiler will emit code to check if the pointer is in range |
117 | of the region covered by ones, and perform the bit vector check using a |
118 | truncated version of the bit vector. For example, the bit vectors for our |
119 | example class hierarchy will be emitted like this: |
120 | |
121 | .. csv-table:: Bit Vectors for A, B, C |
122 | :header: Class, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 |
123 | |
124 | A, , , 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, , |
125 | B, , , , , , , , 1, , , , , , , |
126 | C, , , , , , , , , , , , , 1, , |
127 | |
128 | Short Inline Bit Vectors |
129 | ~~~~~~~~~~~~~~~~~~~~~~~~ |
130 | |
131 | If the vector is sufficiently short, we can represent it as an inline constant |
132 | on x86. This saves us a few instructions when reading the correct element |
133 | of the bit vector. |
134 | |
135 | If the bit vector fits in 32 bits, the code looks like this: |
136 | |
137 | .. code-block:: none |
138 | |
139 | dc2: 48 8b 03 mov (%rbx),%rax |
140 | dc5: 48 8d 15 14 1e 00 00 lea 0x1e14(%rip),%rdx |
141 | dcc: 48 89 c1 mov %rax,%rcx |
142 | dcf: 48 29 d1 sub %rdx,%rcx |
143 | dd2: 48 c1 c1 3d rol $0x3d,%rcx |
144 | dd6: 48 83 f9 03 cmp $0x3,%rcx |
145 | dda: 77 2f ja e0b <main+0x9b> |
146 | ddc: ba 09 00 00 00 mov $0x9,%edx |
147 | de1: 0f a3 ca bt %ecx,%edx |
148 | de4: 73 25 jae e0b <main+0x9b> |
149 | de6: 48 89 df mov %rbx,%rdi |
150 | de9: ff 10 callq *(%rax) |
151 | [...] |
152 | e0b: 0f 0b ud2 |
153 | |
154 | Or if the bit vector fits in 64 bits: |
155 | |
156 | .. code-block:: none |
157 | |
158 | 11a6: 48 8b 03 mov (%rbx),%rax |
159 | 11a9: 48 8d 15 d0 28 00 00 lea 0x28d0(%rip),%rdx |
160 | 11b0: 48 89 c1 mov %rax,%rcx |
161 | 11b3: 48 29 d1 sub %rdx,%rcx |
162 | 11b6: 48 c1 c1 3d rol $0x3d,%rcx |
163 | 11ba: 48 83 f9 2a cmp $0x2a,%rcx |
164 | 11be: 77 35 ja 11f5 <main+0xb5> |
165 | 11c0: 48 ba 09 00 00 00 00 movabs $0x40000000009,%rdx |
166 | 11c7: 04 00 00 |
167 | 11ca: 48 0f a3 ca bt %rcx,%rdx |
168 | 11ce: 73 25 jae 11f5 <main+0xb5> |
169 | 11d0: 48 89 df mov %rbx,%rdi |
170 | 11d3: ff 10 callq *(%rax) |
171 | [...] |
172 | 11f5: 0f 0b ud2 |
173 | |
174 | If the bit vector consists of a single bit, there is only one possible |
175 | virtual table, and the check can consist of a single equality comparison: |
176 | |
177 | .. code-block:: none |
178 | |
179 | 9a2: 48 8b 03 mov (%rbx),%rax |
180 | 9a5: 48 8d 0d a4 13 00 00 lea 0x13a4(%rip),%rcx |
181 | 9ac: 48 39 c8 cmp %rcx,%rax |
182 | 9af: 75 25 jne 9d6 <main+0x86> |
183 | 9b1: 48 89 df mov %rbx,%rdi |
184 | 9b4: ff 10 callq *(%rax) |
185 | [...] |
186 | 9d6: 0f 0b ud2 |
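
In C++ terms, and using the constants from the listings above, the three
inline forms amount to the following checks (the subtract-and-rotate index
computation is unchanged and omitted; ``kOnlyAddressPoint`` is an
illustrative name):

.. code-block:: c++

  #include <cstdint>

  extern const uintptr_t kOnlyAddressPoint; // the single valid address point

  // 32-bit inline bit vector: cmp $0x3, then bt against the constant 0x9.
  bool Check32(uint64_t Index) {
    return Index <= 0x3 && ((0x9u >> Index) & 1);
  }

  // 64-bit inline bit vector: cmp $0x2a, then bt against 0x40000000009.
  bool Check64(uint64_t Index) {
    return Index <= 0x2a && ((0x40000000009ull >> Index) & 1);
  }

  // Single-bit case: a plain equality comparison, no bit vector at all.
  bool Check1(uintptr_t VTable) {
    return VTable == kOnlyAddressPoint;
  }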
187 | |
188 | Virtual Table Layout |
189 | ~~~~~~~~~~~~~~~~~~~~ |
190 | |
191 | The compiler lays out classes of disjoint hierarchies in separate regions |
192 | of the object file. At worst, bit vectors in disjoint hierarchies only |
193 | need to cover their disjoint hierarchy. But the closer that classes in |
194 | sub-hierarchies are laid out to each other, the smaller the bit vectors for |
195 | those sub-hierarchies need to be (see "Stripping Leading/Trailing Zeros in Bit |
196 | Vectors" above). The `GlobalLayoutBuilder`_ class is responsible for laying |
197 | out the globals efficiently to minimize the sizes of the underlying bitsets. |
198 | |
199 | .. _GlobalLayoutBuilder: https://github.com/llvm/llvm-project/blob/master/llvm/include/llvm/Transforms/IPO/LowerTypeTests.h |
200 | |
201 | Alignment |
202 | ~~~~~~~~~ |
203 | |
204 | If all gaps between address points in a particular bit vector are multiples |
of a power of 2, the compiler can compress the bit vector by strengthening
206 | the alignment requirements of the virtual table pointer. For example, given |
207 | this class hierarchy: |
208 | |
209 | .. code-block:: c++ |
210 | |
211 | struct A { |
212 | virtual void f1(); |
213 | virtual void f2(); |
214 | }; |
215 | |
216 | struct B : A { |
217 | virtual void f1(); |
218 | virtual void f2(); |
219 | virtual void f3(); |
220 | virtual void f4(); |
221 | virtual void f5(); |
222 | virtual void f6(); |
223 | }; |
224 | |
225 | struct C : A { |
226 | virtual void f1(); |
227 | virtual void f2(); |
228 | }; |
229 | |
230 | The virtual tables will be laid out like this: |
231 | |
232 | .. csv-table:: Virtual Table Layout for A, B, C |
233 | :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 |
234 | |
235 | A::offset-to-top, &A::rtti, &A::f1, &A::f2, B::offset-to-top, &B::rtti, &B::f1, &B::f2, &B::f3, &B::f4, &B::f5, &B::f6, C::offset-to-top, &C::rtti, &C::f1, &C::f2 |
236 | |
Notice that the address points for A are all separated by a multiple of 4
words. This lets us
238 | emit a compressed bit vector for A that looks like this: |
239 | |
240 | .. csv-table:: |
241 | :header: 2, 6, 10, 14 |
242 | |
243 | 1, 1, 0, 1 |
244 | |
245 | At call sites, the compiler will strengthen the alignment requirements by |
246 | using a different rotate count. For example, on a 64-bit machine where the |
247 | address points are 4-word aligned (as in A from our example), the ``rol`` |
248 | instruction may look like this: |
249 | |
250 | .. code-block:: none |
251 | |
252 | dd2: 48 c1 c1 3b rol $0x3b,%rcx |
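
As a sketch, again assuming a 64-bit target and illustrative names: the
address points for A now sit 32 bytes apart (4 words of 8 bytes), so the
rotate count grows from 3 to 5, and the 4-entry compressed vector above
fits in the inline constant ``0b1011``:

.. code-block:: c++

  #include <cstdint>

  // FirstAddrPoint stands for the address of A's own address point
  // (word 2 of the layout above).
  bool IsVTableCompatibleWithA(uintptr_t VTable, uintptr_t FirstAddrPoint) {
    uintptr_t Diff  = VTable - FirstAddrPoint;
    uintptr_t Index = (Diff >> 5) | (Diff << 59);  // rotr by 5, not 3
    return Index <= 3 && ((0b1011u >> Index) & 1); // compressed 1, 1, 0, 1
  }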
253 | |
254 | Padding to Powers of 2 |
255 | ~~~~~~~~~~~~~~~~~~~~~~ |
256 | |
257 | Of course, this alignment scheme works best if the address points are |
258 | in fact aligned correctly. To make this more likely to happen, we insert |
259 | padding between virtual tables that in many cases aligns address points to |
260 | a power of 2. Specifically, our padding aligns virtual tables to the next |
261 | highest power of 2 bytes; because address points for specific base classes |
262 | normally appear at fixed offsets within the virtual table, this normally |
263 | has the effect of aligning the address points as well. |
264 | |
265 | This scheme introduces tradeoffs between decreased space overhead for |
266 | instructions and bit vectors and increased overhead in the form of padding. We |
267 | therefore limit the amount of padding so that we align to no more than 128 |
268 | bytes. This number was found experimentally to provide a good tradeoff. |
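
A minimal sketch of this padding rule (the function name is illustrative):

.. code-block:: c++

  #include <algorithm>
  #include <cstdint>

  // Align each virtual table to the next power of 2 greater than or equal
  // to its size, but never to more than 128 bytes.
  uint64_t VTableAlignment(uint64_t VTableSizeInBytes) {
    uint64_t Align = 1;
    while (Align < VTableSizeInBytes)
      Align <<= 1;
    return std::min<uint64_t>(Align, 128);
  }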
269 | |
270 | Eliminating Bit Vector Checks for All-Ones Bit Vectors |
271 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
272 | |
273 | If the bit vector is all ones, the bit vector check is redundant; we simply |
274 | need to check that the address is in range and well aligned. This is more |
275 | likely to occur if the virtual tables are padded. |
276 | |
277 | Forward-Edge CFI for Virtual Calls by Interleaving Virtual Tables |
278 | ----------------------------------------------------------------- |
279 | |
Bounov et al. proposed a novel approach that interleaves virtual tables in [1]_.
This approach is more space-efficient because padding and bit vectors are no longer needed.
It is also more performant because, in the interleaved layout, the
address points of the virtual tables are consecutive, so the validity check of a
virtual table pointer is always a range check.
285 | |
286 | At a high level, the interleaving scheme consists of three steps: 1) split virtual table groups into |
287 | separate virtual tables, 2) order virtual tables by a pre-order traversal of the class hierarchy |
288 | and 3) interleave virtual tables. |
289 | |
290 | The interleaving scheme implemented in LLVM is inspired by [1]_ but has its own |
291 | enhancements (more in `Interleave virtual tables`_). |
292 | |
293 | .. [1] `Protecting C++ Dynamic Dispatch Through VTable Interleaving <https://cseweb.ucsd.edu/~lerner/papers/ivtbl-ndss16.pdf>`_. Dimitar Bounov, Rami Gökhan Kıcı, Sorin Lerner. |
294 | |
295 | Split virtual table groups into separate virtual tables |
296 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
297 | |
298 | The Itanium C++ ABI glues multiple individual virtual tables for a class into a combined virtual table (virtual table group). |
299 | The interleaving scheme, however, can only work with individual virtual tables so it must split the combined virtual tables first. |
In comparison, the old scheme does not require the splitting, although it, too, is more efficient when the combined virtual tables have been split.
301 | The `GlobalSplit`_ pass is responsible for splitting combined virtual tables into individual ones. |
302 | |
303 | .. _GlobalSplit: https://github.com/llvm/llvm-project/blob/master/llvm/lib/Transforms/IPO/GlobalSplit.cpp |
304 | |
305 | Order virtual tables by a pre-order traversal of the class hierarchy |
306 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
307 | |
308 | This step is common to both the old scheme described above and the interleaving scheme. |
309 | For the interleaving scheme, since the combined virtual tables have been split in the previous step, |
310 | this step ensures that for any class all the compatible virtual tables will appear consecutively. |
311 | For the old scheme, the same property may not hold since it may work on combined virtual tables. |
312 | |
313 | For example, consider the following four C++ classes: |
314 | |
315 | .. code-block:: c++ |
316 | |
317 | struct A { |
318 | virtual void f1(); |
319 | }; |
320 | |
321 | struct B : A { |
322 | virtual void f1(); |
323 | virtual void f2(); |
324 | }; |
325 | |
326 | struct C : A { |
327 | virtual void f1(); |
328 | virtual void f3(); |
329 | }; |
330 | |
331 | struct D : B { |
332 | virtual void f1(); |
333 | virtual void f2(); |
334 | }; |
335 | |
336 | This step will arrange the virtual tables for A, B, C, and D in the order of *vtable-of-A, vtable-of-B, vtable-of-D, vtable-of-C*. |
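
A minimal sketch of this ordering step, using illustrative types rather
than LLVM's: because each class is visited before its subclasses, all the
virtual tables compatible with a given class end up contiguous.

.. code-block:: c++

  #include <string>
  #include <vector>

  struct Class {
    std::string Name;
    std::vector<Class *> Subclasses;
  };

  void PreOrder(Class *Root, std::vector<Class *> &Order) {
    Order.push_back(Root);               // emit vtable-of-Root first
    for (Class *Sub : Root->Subclasses)  // then each subtree in turn
      PreOrder(Sub, Order);
  }

  // For the hierarchy A -> {B -> {D}, C} above, this yields the order
  // vtable-of-A, vtable-of-B, vtable-of-D, vtable-of-C.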
337 | |
338 | Interleave virtual tables |
339 | ~~~~~~~~~~~~~~~~~~~~~~~~~ |
340 | |
341 | This step is where the interleaving scheme deviates from the old scheme. Instead of laying out |
342 | whole virtual tables in the previously computed order, the interleaving scheme lays out table |
343 | entries of the virtual tables strategically to ensure the following properties: |
344 | |
345 | (1) offset-to-top and RTTI fields layout property |
346 | |
The Itanium C++ ABI specifies that the offset-to-top and RTTI fields appear at
fixed offsets directly preceding the address point. Note that libraries like
libcxxabi do assume this property.
349 | |
350 | (2) virtual function entry layout property |
351 | |
For each virtual function, the distance between a virtual table entry for this function and the corresponding
353 | address point is always the same. This property ensures that dynamic dispatch still works with the interleaving layout. |
354 | |
355 | Note that the interleaving scheme in the CFI implementation guarantees both properties above whereas the original scheme proposed |
356 | in [1]_ only guarantees the second property. |
357 | |
358 | To illustrate how the interleaving algorithm works, let us continue with the running example. |
359 | The algorithm first separates all the virtual table entries into two work lists. To do so, |
it starts by allocating two work lists: one initialized with all the offset-to-top entries of the virtual tables, in the order
computed in the last step, and the other initialized with all the RTTI entries in the same order.
362 | |
.. csv-table:: Work list 1 layout
364 | :header: 0, 1, 2, 3 |
365 | |
366 | A::offset-to-top, B::offset-to-top, D::offset-to-top, C::offset-to-top |
367 | |
368 | |
369 | .. csv-table:: Work list 2 layout |
   :header: 0, 1, 2, 3
371 | |
372 | &A::rtti, &B::rtti, &D::rtti, &C::rtti |
373 | |
Then, for each virtual function, the algorithm goes through all the virtual tables in the previously computed order
and collects all the related entries into a virtual function list.
376 | After this step, there are the following virtual function lists: |
377 | |
378 | .. csv-table:: f1 list |
379 | :header: 0, 1, 2, 3 |
380 | |
381 | &A::f1, &B::f1, &D::f1, &C::f1 |
382 | |
383 | |
384 | .. csv-table:: f2 list |
385 | :header: 0, 1 |
386 | |
387 | &B::f2, &D::f2 |
388 | |
389 | |
390 | .. csv-table:: f3 list |
391 | :header: 0 |
392 | |
393 | &C::f3 |
394 | |
Next, the algorithm repeatedly picks the longest remaining virtual function list and appends it to the
shorter of the two work lists, until no function lists are left; it then pads the shorter work list so
that both are of the same length.
In the example, the f1 list is first added to work list 1, then the f2 list is added
to work list 2, and finally the f3 list is added to work list 2. Since work list 1 now has one more entry than
work list 2, a padding entry is appended to the latter. After this step, the two work lists look like this:
400 | |
.. csv-table:: Work list 1 layout
402 | :header: 0, 1, 2, 3, 4, 5, 6, 7 |
403 | |
404 | A::offset-to-top, B::offset-to-top, D::offset-to-top, C::offset-to-top, &A::f1, &B::f1, &D::f1, &C::f1 |
405 | |
406 | |
407 | .. csv-table:: Work list 2 layout |
408 | :header: 0, 1, 2, 3, 4, 5, 6, 7 |
409 | |
410 | &A::rtti, &B::rtti, &D::rtti, &C::rtti, &B::f2, &D::f2, &C::f3, padding |
411 | |
Finally, the algorithm merges the two work lists into the interleaved layout by alternately
moving the head of each list to the final layout (see the sketch at the end of this section). After this step, the final interleaved layout looks like this:
414 | |
415 | .. csv-table:: Interleaved layout |
416 | :header: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 |
417 | |
418 | A::offset-to-top, &A::rtti, B::offset-to-top, &B::rtti, D::offset-to-top, &D::rtti, C::offset-to-top, &C::rtti, &A::f1, &B::f2, &B::f1, &D::f2, &D::f1, &C::f3, &C::f1, padding |
419 | |
420 | In the above interleaved layout, each virtual table's offset-to-top and RTTI are always adjacent, which shows that the layout has the first property. |
421 | For the second property, let us look at f2 as an example. In the interleaved layout, |
there are two entries for f2: &B::f2 and &D::f2. The distance between &B::f2
and B's address point (the entry immediately after &B::rtti, which holds
D::offset-to-top) is five entries, as is the distance between &D::f2 and D's
address point (the entry immediately after &D::rtti, which holds
C::offset-to-top).
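
The merge that produced this layout can be sketched as a round-robin pop
from the two work lists; this is a minimal illustration of the algorithm
described above, not the production implementation:

.. code-block:: c++

  #include <deque>
  #include <string>
  #include <vector>

  // Alternately move the head of each work list into the final layout;
  // entries are modeled as strings purely for illustration.
  std::vector<std::string> Interleave(std::deque<std::string> WL1,
                                      std::deque<std::string> WL2) {
    std::vector<std::string> Layout;
    while (!WL1.empty() || !WL2.empty()) {
      if (!WL1.empty()) { Layout.push_back(WL1.front()); WL1.pop_front(); }
      if (!WL2.empty()) { Layout.push_back(WL2.front()); WL2.pop_front(); }
    }
    return Layout;
  }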
424 | |
425 | Forward-Edge CFI for Indirect Function Calls |
426 | ============================================ |
427 | |
428 | Under forward-edge CFI for indirect function calls, each unique function |
429 | type has its own bit vector, and at each call site we need to check that the |
430 | function pointer is a member of the function type's bit vector. This scheme |
431 | works in a similar way to forward-edge CFI for virtual calls, the distinction |
432 | being that we need to build bit vectors of function entry points rather than |
433 | of virtual tables. |
434 | |
435 | Unlike when re-arranging global variables, we cannot re-arrange functions |
436 | in a particular order and base our calculations on the layout of the |
437 | functions' entry points, as we have no idea how large a particular function |
438 | will end up being (the function sizes could even depend on how we arrange |
439 | the functions). Instead, we build a jump table, which is a block of code |
440 | consisting of one branch instruction for each of the functions in the bit |
441 | set that branches to the target function, and redirect any taken function |
442 | addresses to the corresponding jump table entry. In this way, the distance |
443 | between function entry points is predictable and controllable. In the object |
444 | file's symbol table, the symbols for the target functions also refer to the |
445 | jump table entries, so that addresses taken outside the module will pass |
446 | any verification done inside the module. |
447 | |
448 | In more concrete terms, suppose we have three functions ``f``, ``g``, |
``h``, all of the same type, and a function ``foo`` that returns their
450 | addresses: |
451 | |
452 | .. code-block:: none |
453 | |
454 | f: |
455 | mov 0, %eax |
456 | ret |
457 | |
458 | g: |
459 | mov 1, %eax |
460 | ret |
461 | |
462 | h: |
463 | mov 2, %eax |
464 | ret |
465 | |
466 | foo: |
467 | mov f, %eax |
468 | mov g, %edx |
469 | mov h, %ecx |
470 | ret |
471 | |
472 | Our jump table will (conceptually) look like this: |
473 | |
474 | .. code-block:: none |
475 | |
476 | f: |
477 | jmp .Ltmp0 ; 5 bytes |
478 | int3 ; 1 byte |
479 | int3 ; 1 byte |
480 | int3 ; 1 byte |
481 | |
482 | g: |
483 | jmp .Ltmp1 ; 5 bytes |
484 | int3 ; 1 byte |
485 | int3 ; 1 byte |
486 | int3 ; 1 byte |
487 | |
488 | h: |
489 | jmp .Ltmp2 ; 5 bytes |
490 | int3 ; 1 byte |
491 | int3 ; 1 byte |
492 | int3 ; 1 byte |
493 | |
494 | .Ltmp0: |
495 | mov 0, %eax |
496 | ret |
497 | |
498 | .Ltmp1: |
499 | mov 1, %eax |
500 | ret |
501 | |
502 | .Ltmp2: |
503 | mov 2, %eax |
504 | ret |
505 | |
506 | foo: |
507 | mov f, %eax |
508 | mov g, %edx |
509 | mov h, %ecx |
510 | ret |
511 | |
512 | Because the addresses of ``f``, ``g``, ``h`` are evenly spaced at a power of |
513 | 2, and function types do not overlap (unlike class types with base classes), |
514 | we can normally apply the `Alignment`_ and `Eliminating Bit Vector Checks |
for All-Ones Bit Vectors`_ optimizations, thus simplifying the check at each
516 | call site to a range and alignment check. |
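
Under those optimizations, the call-site check reduces to the following
sketch, assuming the 8-byte jump table entries shown above;
``JumpTableStart`` and ``kNumEntries`` are illustrative symbols:

.. code-block:: c++

  #include <cstdint>

  extern const char JumpTableStart[]; // start of this type's jump table
  const uintptr_t kNumEntries = 3;    // f, g and h

  bool IsValidTargetOfThisType(uintptr_t Fn) {
    uintptr_t Diff  = Fn - (uintptr_t)JumpTableStart;
    uintptr_t Index = (Diff >> 3) | (Diff << 61); // rotr by 3: 8-byte entries
    // The bit vector is all ones, so no bit test is needed: just the
    // combined range and alignment check.
    return Index < kNumEntries;
  }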
517 | |
518 | Shared library support |
519 | ====================== |
520 | |
521 | **EXPERIMENTAL** |
522 | |
523 | The basic CFI mode described above assumes that the application is a |
monolithic binary, or at least that all possible virtual/indirect call
targets and the entire class hierarchy are known at link time. The
cross-DSO mode, enabled with **-f[no-]sanitize-cfi-cross-dso**, relaxes
527 | this requirement by allowing virtual and indirect calls to cross the |
528 | DSO boundary. |
529 | |
Assume the following setup: the binary consists of several
instrumented and several uninstrumented DSOs, some of which may be
dlopen-ed/dlclose-d periodically, even frequently.
533 | |
534 | - Calls made from uninstrumented DSOs are not checked and just work. |
535 | - Calls inside any instrumented DSO are fully protected. |
536 | - Calls between different instrumented DSOs are also protected, with |
537 | a performance penalty (in addition to the monolithic CFI |
538 | overhead). |
539 | - Calls from an instrumented DSO to an uninstrumented one are |
  unchecked and just work, with a performance penalty.
541 | - Calls from an instrumented DSO outside of any known DSO are |
542 | detected as CFI violations. |
543 | |
544 | In the monolithic scheme a call site is instrumented as |
545 | |
546 | .. code-block:: none |
547 | |
548 | if (!InlinedFastCheck(f)) |
549 | abort(); |
550 | call *f |
551 | |
552 | In the cross-DSO scheme it becomes |
553 | |
554 | .. code-block:: none |
555 | |
556 | if (!InlinedFastCheck(f)) |
557 | __cfi_slowpath(CallSiteTypeId, f); |
558 | call *f |
559 | |
560 | CallSiteTypeId |
561 | -------------- |
562 | |
563 | ``CallSiteTypeId`` is a stable process-wide identifier of the |
564 | call-site type. For a virtual call site, the type in question is the class |
565 | type; for an indirect function call it is the function signature. The |
mapping from a type to an identifier is an ABI detail. In the current
experimental implementation, the identifier of type T is calculated as
follows:
569 | |
570 | - Obtain the mangled name for "typeinfo name for T". |
571 | - Calculate MD5 hash of the name as a string. |
572 | - Reinterpret the first 8 bytes of the hash as a little-endian |
573 | 64-bit integer. |
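
A sketch of this computation; the ``md5`` routine is an assumed helper
(LLVM itself provides MD5 support in ``llvm/Support/MD5.h``):

.. code-block:: c++

  #include <cstddef>
  #include <cstdint>
  #include <string>

  // Assumed interface: writes the 16-byte MD5 digest of the input to Out.
  void md5(const void *Data, size_t Len, uint8_t Out[16]);

  uint64_t CallSiteTypeIdFor(const std::string &MangledTypeInfoName) {
    uint8_t Digest[16];
    md5(MangledTypeInfoName.data(), MangledTypeInfoName.size(), Digest);
    uint64_t Id = 0;
    for (int I = 0; I < 8; ++I)          // first 8 bytes, little-endian
      Id |= (uint64_t)Digest[I] << (8 * I);
    return Id;
  }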
574 | |
575 | It is possible, but unlikely, that collisions in the |
576 | ``CallSiteTypeId`` hashing will result in weaker CFI checks that would |
577 | still be conservatively correct. |
578 | |
579 | CFI_Check |
580 | --------- |
581 | |
582 | In the general case, only the target DSO knows whether the call to |
583 | function ``f`` with type ``CallSiteTypeId`` is valid or not. To |
584 | export this information, every DSO implements |
585 | |
586 | .. code-block:: none |
587 | |
588 | void __cfi_check(uint64 CallSiteTypeId, void *TargetAddr, void *DiagData) |
589 | |
590 | This function provides external modules with access to CFI checks for |
591 | the targets inside this DSO. For each known ``CallSiteTypeId``, this |
592 | function performs an ``llvm.type.test`` with the corresponding type |
593 | identifier. It reports an error if the type is unknown, or if the |
594 | check fails. Depending on the values of compiler flags |
595 | ``-fsanitize-trap`` and ``-fsanitize-recover``, this function may |
596 | print an error, abort and/or return to the caller. ``DiagData`` is an |
597 | opaque pointer to the diagnostic information about the error, or |
598 | ``null`` if the caller does not provide this information. |
599 | |
The basic implementation is a large switch statement over all values
of ``CallSiteTypeId`` supported by this DSO, where each case is similar
to ``InlinedFastCheck()`` in the basic CFI mode.
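
Its rough shape in C++; the case value, the per-type helper and
``CfiCheckFail`` are illustrative, not the exact generated code:

.. code-block:: c++

  #include <cstdint>

  bool FastCheckForType1(void *TargetAddr); // range/alignment/bit-vector
  void CfiCheckFail(void *DiagData, void *TargetAddr);

  extern "C" void __cfi_check(uint64_t CallSiteTypeId, void *TargetAddr,
                              void *DiagData) {
    switch (CallSiteTypeId) {
    case 0x135d6e8c0c6be03bull:           // one case per type in this DSO
      if (FastCheckForType1(TargetAddr))
        return;                           // valid target: allow the call
      break;
    // ... more cases ...
    default:
      break;                              // unknown type: treated as an error
    }
    CfiCheckFail(DiagData, TargetAddr);   // trap, print and/or return,
                                          // depending on the sanitizer flags
  }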
603 | |
604 | CFI Shadow |
605 | ---------- |
606 | |
To route CFI checks to the target DSO's ``__cfi_check`` function, a
mapping from possible virtual/indirect call targets to the
corresponding ``__cfi_check`` functions is maintained. This mapping is
implemented as a sparse array of 2 bytes for every possible page (4096
bytes) of memory. The table is kept read-only most of the time.
612 | |
There are three types of shadow values:
614 | |
615 | - Address in a CFI-instrumented DSO. |
616 | - Unchecked address (a “trusted” non-instrumented DSO). Encoded as |
617 | value 0xFFFF. |
618 | - Invalid address (everything else). Encoded as value 0. |
619 | |
For a CFI-instrumented DSO, a shadow value encodes the address of the
``__cfi_check`` function for all call targets in the corresponding memory
page. If ``Addr`` is the target address and ``V`` is the shadow value, then
the address of ``__cfi_check`` is calculated as
624 | |
625 | .. code-block:: none |
626 | |
627 | __cfi_check = AlignUpTo(Addr, 4096) - (V + 1) * 4096 |
628 | |
This works as long as ``__cfi_check`` is aligned to 4096 bytes and located
below any call targets in its DSO, but by no more than 256MB.
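
A sketch of this lookup, assuming 4096-byte pages; ``ShadowBase`` and the
error-handling convention are illustrative:

.. code-block:: c++

  #include <cstdint>

  typedef void (*CfiCheckFn)(uint64_t, void *, void *);
  extern const uint16_t *ShadowBase; // one 16-bit entry per 4096-byte page

  // Returns the target DSO's __cfi_check, or nullptr if no check should
  // (or can) be made; a real implementation distinguishes the invalid
  // case (CFI violation) from the trusted case (skip the check).
  CfiCheckFn FindCfiCheck(uintptr_t Addr) {
    uint16_t V = ShadowBase[Addr / 4096];
    if (V == 0 || V == 0xFFFF)   // invalid target, or unchecked DSO
      return nullptr;
    uintptr_t Aligned = (Addr + 4095) & ~(uintptr_t)4095; // AlignUpTo
    return (CfiCheckFn)(Aligned - (uintptr_t)(V + 1) * 4096);
  }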
632 | |
633 | CFI_SlowPath |
634 | ------------ |
635 | |
636 | The slow path check is implemented in a runtime support library as |
637 | |
638 | .. code-block:: none |
639 | |
640 | void __cfi_slowpath(uint64 CallSiteTypeId, void *TargetAddr) |
641 | void __cfi_slowpath_diag(uint64 CallSiteTypeId, void *TargetAddr, void *DiagData) |
642 | |
These functions load the shadow value for ``TargetAddr``, find the
address of ``__cfi_check`` as described above, and call
it. ``DiagData`` is an opaque pointer to diagnostic data that is
passed verbatim to ``__cfi_check``; ``__cfi_slowpath`` passes
``nullptr`` instead.
648 | |
The compiler-rt library contains reference implementations of the
slowpath functions, but they have unresolvable correctness and
performance issues in the handling of ``dlopen()``. It is recommended
that platforms provide their own implementations, usually as part of
libc or libdl.
654 | |
655 | Position-independent executable requirement |
656 | ------------------------------------------- |
657 | |
Cross-DSO CFI mode requires that the main executable be built as PIE.
In non-PIE executables, the address of an external function (taken from
the main executable) is the address of that function's PLT record in
the main executable. This would break the CFI checks.
662 | |
663 | Backward-edge CFI for return statements (RCFI) |
664 | ============================================== |
665 | |
666 | This section is a proposal. As of March 2017 it is not implemented. |
667 | |
Backward-edge control flow (`RET` instructions) can be hijacked
by overwriting the return address (`RA`) on the stack.
Various mitigation techniques (e.g. `SafeStack`_, `RFG`_, `Intel CET`_)
try to detect or prevent `RA` corruption on the stack.
672 | |
673 | RCFI enforces the expected control flow in several different ways described below. |
674 | RCFI heavily relies on LTO. |
675 | |
676 | Leaf Functions |
677 | -------------- |
If `f()` is a leaf function (i.e. it makes no calls,
except possibly no-return calls), it can be called using a special calling convention
that stores `RA` in a dedicated register `R` before the `CALL` instruction.
`f()` does not spill `R` and does not use the `RET` instruction;
instead, it uses the value in `R` to `JMP` to `RA`.
683 | |
684 | This flavour of CFI is *precise*, i.e. the function is guaranteed to return |
685 | to the point exactly following the call. |
686 | |
687 | An alternative approach is to |
688 | copy `RA` from stack to `R` in the first instruction of `f()`, |
689 | then `JMP` to `R`. |
690 | This approach is simpler to implement (does not require changing the caller) |
but weaker (there is a small window when `RA` is actually stored on the stack).
692 | |
693 | |
694 | Functions called once |
695 | --------------------- |
696 | Suppose `f()` is called in just one place in the program |
697 | (assuming we can verify this in LTO mode). |
698 | In this case we can replace the `RET` instruction with a `JMP` instruction |
699 | with the immediate constant for `RA`. |
This will *precisely* enforce the return control flow no matter what is stored on the stack.
701 | |
Another variant is to compare `RA` on the stack with the known constant and abort
703 | if they don't match; then `JMP` to the known constant address. |
704 | |
705 | Functions called in a small number of call sites |
706 | ------------------------------------------------ |
707 | We may extend the above approach to cases where `f()` |
708 | is called more than once (but still a small number of times). |
709 | With LTO we know all possible values of `RA` and we check them |
one-by-one (or using binary search) against the value on the stack.
711 | If the match is found, we `JMP` to the known constant address, otherwise abort. |
712 | |
713 | This protection is *near-precise*, i.e. it guarantees that the control flow will |
714 | be transferred to one of the valid return addresses for this function, |
but not necessarily to the point of the most recent `CALL`.
716 | |
717 | General case |
718 | ------------ |
719 | For functions called multiple times a *return jump table* is constructed |
720 | in the same manner as jump tables for indirect function calls (see above). |
The correct jump table entry (or its index) is passed by `CALL` to `f()`
(as an extra argument) and then spilled to the stack.
723 | The `RET` instruction is replaced with a load of the jump table entry, |
724 | jump table range check, and `JMP` to the jump table entry. |
725 | |
726 | This protection is also *near-precise*. |
727 | |
728 | Returns from functions called indirectly |
729 | ---------------------------------------- |
730 | |
731 | If a function is called indirectly, the return jump table is constructed for the |
732 | equivalence class of functions instead of a single function. |
733 | |
734 | Cross-DSO calls |
735 | --------------- |
736 | Consider two instrumented DSOs, `A` and `B`. `A` defines `f()` and `B` calls it. |
737 | |
738 | This case will be handled similarly to the cross-DSO scheme using the slow path callback. |
739 | |
740 | Non-goals |
741 | --------- |
742 | |
RCFI does not protect `RET` instructions:

* in non-instrumented DSOs,
745 | * in instrumented DSOs for functions that are called from non-instrumented DSOs, |
746 | * embedded into other instructions (e.g. `0f4fc3 cmovg %ebx,%eax`). |
747 | |
748 | .. _SafeStack: https://clang.llvm.org/docs/SafeStack.html |
749 | .. _RFG: https://xlab.tencent.com/en/2016/11/02/return-flow-guard |
750 | .. _Intel CET: https://software.intel.com/en-us/blogs/2016/06/09/intel-release-new-technology-specifications-protect-rop-attacks |
751 | |
752 | Hardware support |
753 | ================ |
754 | |
We believe that the above design can be efficiently implemented in hardware.
A single new instruction added to an ISA would make it possible to perform
the forward-edge CFI check with fewer bytes per check (smaller code size
overhead) and potentially more efficiently. The current software-only
instrumentation requires at least 32 bytes per check (on x86_64), while a
hardware instruction could probably fit in roughly 12 bytes.
Such an instruction would check that the argument pointer is in-bounds
and properly aligned; if the checks fail, it would either trap (in the
monolithic scheme) or call the slow path function (in the cross-DSO scheme).
The bit vector lookup is probably too complex for a hardware implementation.
765 | |
766 | .. code-block:: none |
767 | |
768 | // This instruction checks that 'Ptr' |
769 | // * is aligned by (1 << kAlignment) and |
770 | // * is inside [kRangeBeg, kRangeBeg+(kRangeSize<<kAlignment)) |
771 | // and if the check fails it jumps to the given target (slow path). |
772 | // |
773 | // 'Ptr' is a register, pointing to the virtual function table |
774 | // or to the function which we need to check. We may require an explicit |
775 | // fixed register to be used. |
776 | // 'kAlignment' is a 4-bit constant. |
777 | // 'kRangeSize' is a ~20-bit constant. |
778 | // 'kRangeBeg' is a PC-relative constant (~28 bits) |
779 | // pointing to the beginning of the allowed range for 'Ptr'. |
780 | // 'kFailedCheckTarget': is a PC-relative constant (~28 bits) |
781 | // representing the target to branch to when the check fails. |
782 | // If kFailedCheckTarget==0, the process will trap |
783 | // (monolithic binary scheme). |
784 | // Otherwise it will jump to a handler that implements `CFI_SlowPath` |
785 | // (cross-DSO scheme). |
786 | CFI_Check(Ptr, kAlignment, kRangeSize, kRangeBeg, kFailedCheckTarget) { |
787 | if (Ptr < kRangeBeg || |
788 | Ptr >= kRangeBeg + (kRangeSize << kAlignment) || |
789 | Ptr & ((1 << kAlignment) - 1)) |
790 | Jump(kFailedCheckTarget); |
791 | } |
792 | |
An alternative and more compact encoding would not use `kFailedCheckTarget`
and would trap on check failure instead.
This would allow the instruction to fit into **8-9 bytes**.
The cross-DSO checks would then be performed by a trap handler, and
performance-critical ones would have to be black-listed and checked using the
software-only scheme.
799 | |
Note that such a hardware extension would be complementary to checks
on the callee side, such as **Intel ENDBRANCH**.
Moreover, CFI would have two benefits over ENDBRANCH: a) precision and b) the
ability to protect against invalid casts between polymorphic types.
804 | |