Locking at the same time vs one by one in dining philosophers


What is the difference between locking both mutexes at the same time and locking them one by one in the dining philosophers problem, i.e. lock(lck1, lck2) vs lck1.lock(); lck2.lock();? I have tried both and they appear to work. I have read that lock(lck1, lck2) prevents deadlock: if one mutex is acquired but the other cannot be, it releases the one it holds and tries again. But does lock(lck1, lck2) have any performance impact? Please help, I am new to multithreading.

    #include <iostream>
    #include <thread>
    #include <functional>
    #include <chrono>
    #include <vector>
    #include <mutex>
    
    using namespace std;
    
    class DiningPhilosophers {
    private:
        mutex mtx[5];
        
    public:
        DiningPhilosophers() { }
        
        void wantsToEat(int philosopher, function<void()> pickLeftFork, function<void()> pickRightFork, function<void()> eat, function<void()> putLeftFork, function<void()> putRightFork) {
            int left = philosopher;
            int right = (philosopher + 1) % 5;
            
            unique_lock<mutex> lck1(mtx[left], defer_lock); // defer_lock: init lck1 without locking mtx
            unique_lock<mutex> lck2(mtx[right], defer_lock);
            
        if (philosopher % 2 == 0) {
            lock(lck1, lck2); // std::lock acquires both forks without risking deadlock
            pickLeftFork(); pickRightFork();
        } else {
            // Odd-numbered philosophers pick up the right fork first; alternating
            // the acquisition order is another way to break the deadlock cycle.
            lck2.lock();
            lck1.lock();
            pickLeftFork(); pickRightFork();
        }
            eat(); putRightFork(); putLeftFork();
            // lck1 and lck2 release their mutexes in their destructors when this scope ends
        }
    };
    
    void testPhilosopher(int philosopherId, DiningPhilosophers& diningPhilosophers) {
        // Lambda functions to simulate the actions of a philosopher
        auto pickLeftFork = [=]() { cout << "Philosopher " << philosopherId << " picked left fork." << endl; };
        auto pickRightFork = [=]() { cout << "Philosopher " << philosopherId << " picked right fork." << endl; };
        auto eat = [=]() {
            cout << "Philosopher " << philosopherId << " is eating." << endl;
            // Simulate eating time
            this_thread::sleep_for(chrono::milliseconds(500));
        };
        auto putLeftFork = [=]() { cout << "Philosopher " << philosopherId << " put down left fork." << endl; };
        auto putRightFork = [=]() { cout << "Philosopher " << philosopherId << " put down right fork." << endl; };
    
        // Call the wantsToEat function for the philosopher
        diningPhilosophers.wantsToEat(philosopherId, pickLeftFork, pickRightFork, eat, putLeftFork, putRightFork);
    }
    
    int main() {
        DiningPhilosophers diningPhilosophers;
    
        // Create five threads, each representing a philosopher
        vector<thread> philosopherThreads;
        for (int i = 0; i < 5; ++i) {
            philosopherThreads.emplace_back(testPhilosopher, i, ref(diningPhilosophers));
        }
    
        // Wait for all threads to finish
        for (auto& thread : philosopherThreads) {
            thread.join();
        }
    
        return 0;
    }
Answer by Howard Hinnant

My comment under the question may be an exaggeration, I'm honestly not sure. However I can say for sure that if your code locked each mutex in the same relative order for every philosopher (left then right, or vice versa), then deadlock is virtually guaranteed given a high enough number of eating iterations.

Explanation:

Let's presume that each philosopher locked the left mutex first and then the right mutex. If they all did this simultaneously, then they would all succeed in getting their left mutex, and then they would all block on trying to obtain the right mutex. This case is fairly easy to analyze.

As you create more complex locking patterns, the situation gets more complex to analyze. And as this happens, it becomes more likely that the analyst will accidentally miss a scenario where deadlock might occur. It is also typical in such scenarios that the deadlock becomes much more infrequent.

This is not a good thing.

The worst case scenario is when you falsely convince yourself that no deadlock could occur, and testing doesn't reveal it because the probability of it happening is so low. So you ship the code and it deadlocks 2 years later in the field. Debugging something like that is the stuff of nightmares.

The best thing to do is to use a foolproof system. One such foolproof system for simultaneously locking more than one mutex is std::lock, or its C++17 equivalent, std::scoped_lock<Mutex1, Mutex2, ...> (which simply calls std::lock in its constructor).

std::lock(lck1, lck2) will never deadlock (unless misused by locking those locks elsewhere without std::lock).

Here is a paper which explores the performance of various algorithms for implementing std::lock. It is an interesting read if you believe that always locking your locks in some predefined order is the best way to go. Locking in order is a good tool in the toolbox. However it is a sub-optimal algorithm for std::lock, as shown in the paper.

As I write this answer, all implementations of std::lock I'm aware of (MSVC, gcc, llvm) have a very high-performing implementation.1 So using std::lock is the smart move when the alternative is trying to figure out if your fancy pattern of locking multiple locks has the potential for deadlock or not.


1 The paper says this about the "Persistent" algorithm:

No one that I'm aware of has seriously proposed this algorithm as a good solution. However it is included here as there is at least one shipping implementation of std::lock that implements this exact algorithm.

The implementations have vastly improved in the nine years since this paper was written.