Summary

We are facing a problem where an ongoing local file write is blocked as soon as the same file is opened in read-only mode, over a remote network share, by a remote data reader on Ubuntu 22.04 — even though the second party opens the file after it has already been opened for writing.

The problem appeared when we upgraded the OS hosting our remote data reader from Ubuntu 18.04 to Ubuntu 22.04 LTS.

Here is the setup:

We are using a read-only network share to share file content across two operating systems:

  • OS 1: Windows 10 at IP 192.168.1.100; a local folder, shared_data, is shared over the network in read-only mode.
  • App 1: Data generator: a Win32 program written in C++, writing to a file in that local folder.
  • OS 2: Ubuntu 22.04 LTS, on the same LAN as OS 1; the shared folder is mounted read-only with mount.cifs onto the local folder ~/fileserver using the command below.
  • App 2: Remote data reader: a .NET 6.0 console application running on Ubuntu 22.04, reading data from the same file through the mounted folder ~/fileserver.

sudo mount -t cifs -o ro,vers=3.0,username=user,password=***,file_mode=0444 -v //192.168.1.100/shared_data ~/fileserver

According to the mount.cifs(8) man page, we also tried different combinations of -o options:

  • nolease
  • cache=none, and cache=loose (default is cache=strict)
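
For reference, the mount variants we tried look roughly like this (a sketch; username and password are placeholders, and the password is elided as in the baseline command above):

```shell
# Baseline read-only mount (SMB 3.0)
sudo mount -t cifs -o ro,vers=3.0,username=user,password=***,file_mode=0444 \
    //192.168.1.100/shared_data ~/fileserver

# Variant: ask the client not to request leases
sudo mount -t cifs -o ro,vers=3.0,nolease,username=user,password=*** \
    //192.168.1.100/shared_data ~/fileserver

# Variants: disable or loosen client-side caching (default is cache=strict)
sudo mount -t cifs -o ro,vers=3.0,cache=none,username=user,password=*** \
    //192.168.1.100/shared_data ~/fileserver
sudo mount -t cifs -o ro,vers=3.0,cache=loose,username=user,password=*** \
    //192.168.1.100/shared_data ~/fileserver
```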

Steps to reproduce

App 1 @ Windows 10 (Microsoft VC++ 14.3): a C++ program that continuously writes data to the file shared_data\test.dat:

#include <iostream>
#include <string>
#include <conio.h>
#include <cerrno>


int main(int argc, char** argv)
{
    if (argc < 3)
    {
        std::cout << "Usage: targetFilePath bytesToWrite" << std::endl;
        return 0;
    }

    const char* filename = argv[1];
    std::cout << "Target File: " << filename << std::endl;
    const unsigned long size = std::stoi(argv[2]);
    std::cout << "Create File " << filename << " with " << size << " bytes per batch." << std::endl;

    // Deny other writers but allow shared readers.
    FILE* f = _fsopen(filename, "wb", _SH_DENYWR);
    if (f == nullptr)
    {
        std::cout << "Failed to open " << filename << "; errno: " << errno << std::endl;
        return 1;
    }
    long totalDataWritten = 0;

    const auto data=new char[size];
    for (;;) {

        std::cout << "[Enter] next batch. [Esc] Stop";
        auto k = _getch();
        if (k == 0x1b || k == 0x03)
        {
            std::cout << "\r                              \r";
            break;
        }


        if (k == '\r')
        {
            try
            {
                std::cout << "\r                              \r";

                // ==== LABEL A ====
                const auto data_written = fwrite(data, sizeof(char), size, f);
                totalDataWritten += data_written;
                if (data_written != size)
                {
                    // When the file is being opened in Ubuntu OS over network share, the file writing will fail with errno set to 13.
                    std::cout << "Data size written expected: " << size << "; Data actual written: " << data_written << "; Error code: " << errno;
                }
                else
                {
                    std::cout << size << " bytes written " << std::endl;
                }
            }
            catch (const std::exception& e)
            {
                std::cout << "exception caught" << e.what();
            }
            catch (...)
            {
                std::cout<< ":(";
            }
        }
    }

    std::cout << "Press any key to close file";
    _getch();
    
    if (fclose(f) != 0)
    {
        std::cout << "fclose failed; errno: " << errno << std::endl;
    }
    delete[] data;
    std::cout << "Total data written: " << totalDataWritten << std::endl;

    return 0;
}

App 2: after App 1 has started on the Windows side, run the following .NET 6.0 C# program on Ubuntu 22.04 LTS against the same file through the mounted share folder, ~/fileserver/test.dat (_setting, MaxBlockSize, _buffer, and stoppingToken come from the surrounding service code):

var dataSize = new FileInfo(_setting.TargetFile).Length;
await using (var fs = File.Open(_setting.TargetFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    fs.Seek(0, SeekOrigin.Begin);
    // Simulating the data processing
    await Task.Delay(TimeSpan.FromSeconds(5));

    while (dataSize > 0)
    {
        var dataSizeCurrentBatch = (int)(dataSize > MaxBlockSize ? MaxBlockSize : dataSize);
        var dataRead = await fs.ReadAsync(_buffer, 0, dataSizeCurrentBatch, stoppingToken);
        dataSize -= dataRead;
    }
}

Strangely, as soon as we start App 2 on the Ubuntu machine, the write operation ('LABEL A' in App 1) fails:

  • data_written is 0 instead of the requested length
  • errno is 13, indicating a permission violation

With the same setup on Ubuntu 18.04, the local file write on the Windows side is not blocked. The problem only appears after upgrading to Ubuntu 22.04.

Here is Microsoft's explanation of error code 13 (EACCES): https://learn.microsoft.com/en-us/cpp/c-runtime-library/errno-constants?view=msvc-170#remarks

Permission denied. The file's permission setting doesn't allow the specified access. An attempt was made to access a file (or, in some cases, a directory) in a way that's incompatible with the file's attributes.

For example, the error can occur when an attempt is made to read from a file that isn't open. Or, on an attempt to open an existing read-only file for writing, or to open a directory instead of a file. Under MS-DOS operating system versions 3.0 and later, EACCES may also indicate a locking or sharing violation.

The error can also occur in an attempt to rename a file or directory or to remove an existing directory.

My question is:

  1. Can a Windows SMB file server block writes to a local file, even when that file has already been opened for writing by a local process?
  2. Why don't mount.cifs / Ubuntu 22.04 / the Windows SMB file server honor App 2's FileShare.ReadWrite open settings?
  3. How can App 2 read the data over the mounted share while App 1 continues to write to the local file?