I'm using the following string class as an NTTP (non-type template parameter):
#include <cstddef>
#include <string_view>

struct fixed_string
{
    static constexpr std::size_t MaxSize = 16;

    constexpr fixed_string() = default;

    // Copies at most MaxSize characters; longer input is silently truncated.
    constexpr fixed_string( const std::string_view text ) : Size( text.size() < MaxSize ? text.size() : MaxSize )
    {
        for ( std::size_t i = 0; i < Size; ++i )
        {
            buffer[i] = text[i];
        }
    }

    constexpr operator bool() const { return Size != 0; }
    constexpr auto get() const { return std::string_view( buffer, Size ); }

    std::size_t Size = 0;
    char buffer[MaxSize] = {};
};
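A minimal example of how I use it as a template parameter (MSVC with /std:c++20; the surrounding names are just for illustration):

template <fixed_string Name>
struct named_widget
{
    static constexpr auto name() { return Name.get(); }
};

// The argument has to be constructed explicitly, since a string literal
// would need two user-defined conversions to reach fixed_string.
using widget_position = named_widget<fixed_string( "position" )>;
static_assert( widget_position::name() == "position" );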
In one case I use a 'composite' NTTP: a 16x16 2D array of these strings, wrapped in a class, which makes the parameter roughly 4 kilobytes in size. MSVC does not seem to like this at all, and compile times tank even for a smallish test project.
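To make that concrete, the composite parameter is roughly this shape (a simplified sketch, the real names differ):

struct string_grid
{
    static constexpr std::size_t Rows = 16;
    static constexpr std::size_t Cols = 16;
    fixed_string cells[Rows][Cols] = {};
};

template <string_grid Grid>
struct table_widget
{
    // Pull individual labels back out at compile time.
    static constexpr auto label( std::size_t row, std::size_t col ) { return Grid.cells[row][col].get(); }
};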
- What can I do about it?
- How can I get any insight into what is slowing the compiler down here?
- Is it even a good idea to use large NTTPs? Error messages become a real pain.
- Are there alternatives or better ways to use them (e.g. wrapping or hashing the data; see the sketch below)?
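For the last bullet, the kind of thing I have in mind by 'wrapped' is passing a reference to a constexpr object instead of the value itself, so the whole array no longer has to be encoded in the template argument (again just a sketch, names hypothetical):

// Keep the big object in a constexpr variable and pass only a reference.
inline constexpr string_grid main_grid = {}; // real data elided

template <const string_grid& Grid>
struct table_widget_ref
{
    static constexpr auto label( std::size_t row, std::size_t col ) { return Grid.cells[row][col].get(); }
};

using main_table = table_widget_ref<main_grid>;

Hashing would be similar: pass only a small ID/hash as the NTTP and look the actual data up in a constexpr table. Is either of these actually better in practice?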