I have an API which either adds a new row to a SQL Server table or updates an existing one. If the ID in the payload is greater than 0, it updates the row with that ID; otherwise it inserts a new row and returns the new ID.
Here is the sample SQL snippet:
BEGIN TRANSACTION;
SET @FETCHEDID = (SELECT ID FROM dbo.SAMPLETABLE WHERE ID = @ID)
IF @FETCHEDID > 0
BEGIN
UPDATE dbo.SAMPLETABLE
SET [AMOUNT] = @AMOUNT,
[DATECHANGED] = @DATEADDED,
[LASTCHANGEDBYID] = @ADDEDBYID
WHERE [ID] = @ID;
END
ELSE
BEGIN
INSERT INTO dbo.SAMPLETABLE ([AMOUNT], [DATEADDED],[LASTCHANGEDBYID], [DATECHANGED])
VALUES (@AMOUNT, @DATEADDED, @ADDEDBYID, @DATEADDED);
END
COMMIT TRANSACTION;
When the UI makes two duplicate calls to this API in quick succession with ID = 0, it ends up adding two rows to the table.
Ideally the first API call should have added a new row and the second one should have updated that row using the new ID returned by the first call.
The call is made from the UI, and the ID in the payload can only be updated after the first successful call. This could be handled in the UI by avoiding the duplicate call, but I want to make sure the backend can handle it as well.
The second API call is reading the stale value of ID from the table before the first call has updated it.
How can I ensure that the second API call starts only after the first one has completed?
This is a common problem when using READ COMMITTED isolation. The way you are doing this "upsert" logic is a well-known anti-pattern: the locking is not being done correctly (it requires UPDLOCK), and even if it were, it would still be inefficient. Instead, just UPDATE, then check the row count and conditionally INSERT. If inserts are more likely than updates, you can flip it round.
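A minimal sketch of that pattern, using the table and column names from the question. The SERIALIZABLE hint alongside UPDLOCK, and the use of SCOPE_IDENTITY() to return the new ID, are assumptions added here rather than part of the original code:

BEGIN TRANSACTION;

-- Try the update first. UPDLOCK takes an update lock on the matching key
-- (or key range, combined with SERIALIZABLE), so a concurrent call blocks
-- here instead of reading a stale row.
UPDATE dbo.SAMPLETABLE WITH (UPDLOCK, SERIALIZABLE)
SET [AMOUNT] = @AMOUNT,
    [DATECHANGED] = @DATEADDED,
    [LASTCHANGEDBYID] = @ADDEDBYID
WHERE [ID] = @ID;

-- No row matched, so insert instead and capture the new identity value.
IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO dbo.SAMPLETABLE ([AMOUNT], [DATEADDED], [LASTCHANGEDBYID], [DATECHANGED])
    VALUES (@AMOUNT, @DATEADDED, @ADDEDBYID, @DATEADDED);

    SET @ID = SCOPE_IDENTITY();
END

COMMIT TRANSACTION;

SELECT @ID AS ID; -- return the existing or newly created ID to the caller

Doing the UPDATE directly, rather than a SELECT followed by an UPDATE, means the row is looked up only once and the lock is taken by the same statement that modifies it, which is both safer and cheaper than the check-then-write version.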