There's a new experimental feature (from the Transactional Memory TS, possibly headed for a future standard), the "synchronized block". The block provides a global lock on a section of code. The following is an example from cppreference.
#include <iostream>
#include <vector>
#include <thread>

int f()
{
    static int i = 0;
    synchronized {
        std::cout << i << " -> ";
        ++i;
        std::cout << i << '\n';
        return i;
    }
}

int main()
{
    std::vector<std::thread> v(10);
    for (auto& t : v)
        t = std::thread([]{ for (int n = 0; n < 10; ++n) f(); });
    for (auto& t : v)
        t.join();
}
I feel it's superfluous. Is there any difference between the synchronized block above and this one:
#include <iostream>
#include <mutex>

std::mutex m;

int f()
{
    static int i = 0;
    std::lock_guard<std::mutex> lg(m);
    std::cout << i << " -> ";
    ++i;
    std::cout << i << '\n';
    return i;
}
The only advantage I see here is that I'm saved the trouble of having a global lock. Are there any other advantages to using a synchronized block? When should it be preferred?
On the face of it, the synchronized keyword is functionally similar to std::mutex, but by introducing a new keyword and associated semantics (such as the block enclosing the synchronized region) it makes these regions much easier to optimize for transactional memory.

In particular, std::mutex and friends are in principle more or less opaque to the compiler, while synchronized has explicit semantics. The compiler can't be sure what the standard library's std::mutex does and would have a hard time transforming it to use TM. A C++ compiler is expected to keep working correctly when the standard library implementation of std::mutex changes, so it can't make many assumptions about its behavior.

In addition, without the explicit scope that synchronized requires, it is hard for the compiler to reason about the extent of the critical section. It seems easy in simple cases, such as a single scoped lock_guard, but there are plenty of complex cases, such as when the lock escapes the function, at which point the compiler never really knows where it could be unlocked.
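To make that last point concrete, here is a minimal sketch (not from the original answer; the names start_work and caller are invented for illustration) of a lock whose ownership escapes the function that acquires it. A synchronized block, by contrast, is tied to a lexical scope, so the end of the region is always visible to the compiler.

#include <mutex>

std::mutex m;

// Ownership of the lock leaves the function that acquired it, so the compiler
// looking at start_work() cannot tell where the mutex will be released.
std::unique_lock<std::mutex> start_work()
{
    std::unique_lock<std::mutex> lk(m);   // locked here...
    return lk;                            // ...but ownership moves to the caller
}

void caller()
{
    auto lk = start_work();
    // m is still held; it is released whenever lk is destroyed or unlock()
    // is called, which may be far from the acquisition site.
}   // unlocked here, in a different function than the one that locked

Once ownership can be moved around like this, the unlock point is a runtime property rather than something visible in the source, which is exactly what makes mutex-based regions hard to transform into transactions.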