As I understand it, a precompiled header is basically a dump of the compiler's state after processing the given header. When you compile a source file that includes a precompiled header, you trigger reloading that state from disk, which is cheaper than recomputing it by actually reading, parsing, and compiling the code in the original header.
The obvious trade-off is that changing any header included by the precompiled header requires regenerating the precompiled header, which triggers a full recompile of everything (since you usually include the precompiled header in every translation unit).
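For concreteness, here is a minimal sketch of that setup with GCC (the file name `pch.h` and the header choices are my own; Clang and MSVC have different mechanics):

```cpp
// pch.h -- only stable, rarely changing headers belong here
#include <cstdio>
#include <string>
#include <vector>
#include <unistd.h>  // POSIX

// Precompile it once (GCC):
//   g++ -std=c++17 -x c++-header pch.h -o pch.h.gch
//
// Afterwards, a translation unit starting with
//   #include "pch.h"
// loads pch.h.gch from disk instead of reparsing those headers, provided
// the .gch file sits next to pch.h and the compile flags match.
```

Touching pch.h, or anything it includes, invalidates pch.h.gch and forces every translation unit that includes it to rebuild, which is exactly the trade-off above.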
- But what if I have lots of headers I never change, like all the standard C, C++, and POSIX headers: is there any reason not to include them all? I mostly care about incremental builds, so I'm not too concerned with the one-time cost of making the precompiled header.
- If I'm using precompiled headers, is there then no compile-speed benefit to, say, making my own header `is_same.hpp` with just my own definition of `std::is_same`, so that I avoid pulling in all of `<type_traits>` when I only need `std::is_same`? If bloating the precompiled header has no downside, then there would be no benefit. (A sketch of such a header follows this list.)
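For reference, the header that bullet imagines might look like the following. This is only a sketch: adding your own definitions to namespace `std` is undefined behavior, so a realistic version would live in your own namespace, here hypothetically called `mini`.

```cpp
// is_same.hpp -- minimal trait so translation units that only need an
// is_same check can skip all of <type_traits>.
#pragma once

namespace mini {

// Primary template: two different types are not the same.
template <class T, class U>
struct is_same { static constexpr bool value = false; };

// Partial specialization: the same type on both sides.
template <class T>
struct is_same<T, T> { static constexpr bool value = true; };

}  // namespace mini

// Usage: static_assert(mini::is_same<int, int>::value, "int is int");
```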
I can imagine that including more than you need could cause the compiler to do more work later, but I don't know how significant this is. For example, if the compiler state includes a hash table for symbols, it may have more collisions (assuming a chained-bucket implementation). Or if I call an overloaded function, it might have to consider more candidates. But I don't have any sense of whether this is really a problem in practice, especially for standard headers.
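One concrete instance of the "more candidates" point is `std::abs`, whose overload set grows as more standard headers become visible. This only shows that the candidate set grows; I have no measurements of whether it noticeably affects compile times:

```cpp
#include <cstdlib>  // declares std::abs(int), std::abs(long), std::abs(long long)
#include <cmath>    // adds std::abs(float), std::abs(double), std::abs(long double)

int main() {
    // With both headers visible, overload resolution for this single call
    // must rank six std::abs candidates instead of one.
    return std::abs(-1);
}
```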
You typically do not include header files that change a lot in your precompiled header, UNLESS THEY ARE USED BY MOST OF YOUR SOURCE FILES ANYWAY.
But if I have a .h with its own implementation .cpp, and it is needed by only 2 of the 99 client .cpp's, that header should NOT be included in the precompiled header; include it directly in just those 3 .cpp's (the implementation plus the 2 clients), not implicitly in all 100 .cpp's via the precompiled header.
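A sketch of that layout, with hypothetical file names (`widget.h` is used by its own `widget.cpp` and only 2 of the 99 clients):

```cpp
// pch.h          -- deliberately does NOT include widget.h
// widget.cpp     -- #include "pch.h"  then  #include "widget.h"
// client_07.cpp  -- #include "pch.h"  then  #include "widget.h"
// client_42.cpp  -- #include "pch.h"  then  #include "widget.h"
// (the other 97 clients include only "pch.h")

// Editing widget.h now rebuilds just those 3 .cpp's; had widget.h been in
// pch.h, editing it would regenerate the PCH and rebuild all 100.
```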