Concurrency Model Confusion

Concurrency is confusing.  Terms like “optimistic concurrency control” and “pessimistic concurrency control” can mean quite different things depending on the context.  I hope to clarify some points surrounding concurrency as it relates to application and database design with a series of posts.  In this first post, I’ll review some basic concurrency concepts and highlight some differences in concurrency models from a database and application perspective.

In computer science, “concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible” (from the Concurrency control wiki).  Every major DBMS supports ACID transactional reliability, so one might think “correct results” would be a non-issue.  In practice, however, both the application and the database must work in concert to provide the appropriate level of data integrity and performance while minimizing user rework to address conflicts and deadlocks.

Concurrency Control:  Optimistic, Pessimistic and Chaos

“Optimistic”, “pessimistic” and “chaos” refer to the safeguards one takes based on the likelihood of concurrent updates and how much rework is acceptable.  Optimistic concurrency control is used when it is unlikely that different users will update the same data.  In the unlikely event that the same data is updated by different users, the conflict is detected when data are saved and the second user must redo or merge changes in order to avoid overwriting the changes made by the first user.  Users of a well-designed optimistic concurrency application experience fast response time and are inconvenienced only in the rare case of an update conflict.
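
As a minimal in-memory sketch (not from the original post; the OptimisticStore name and its API are hypothetical), optimistic concurrency control can be implemented by stamping each row with a version number and rejecting a save whose version is stale:

```python
import threading

class OptimisticStore:
    """Toy in-memory store: each row carries a version number."""
    def __init__(self):
        self._lock = threading.Lock()   # protects the dict itself, not user edits
        self._rows = {}                 # key -> (value, version)

    def put(self, key, value):
        with self._lock:
            self._rows[key] = (value, 1)

    def read(self, key):
        # caller remembers the version it read for the later save
        with self._lock:
            value, version = self._rows[key]
            return value, version

    def save(self, key, new_value, expected_version):
        """Succeeds only if nobody else saved since our read."""
        with self._lock:
            _, current = self._rows[key]
            if current != expected_version:
                return False            # conflict: caller must re-read and merge
            self._rows[key] = (new_value, current + 1)
            return True
```

Two users can read the same row concurrently; the first save wins and bumps the version, so the second save is detected as a conflict rather than silently overwriting.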

Pessimistic concurrency control is used when it is likely that the same data will be updated by different users.  To prevent the need to redo or merge changes, an application serializes data access so that only one user can edit data at a time.  The obvious downside is that subsequent users must wait until the preceding user(s) have completed their changes, which can increase response times or make data temporarily unavailable.  However, overall user productivity can be better than with optimistic concurrency control because rework is avoided.
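
A minimal sketch of that serialization, again with hypothetical names, is an edit-lock table keyed by row: a user must claim the row before editing and release it afterward, so a second user is turned away instead of later hitting a save conflict:

```python
import threading

class PessimisticEditor:
    """Toy edit-lock table: only one user may hold a row at a time."""
    def __init__(self):
        self._guard = threading.Lock()  # protects the held-rows dict
        self._held = {}                 # key -> user currently editing

    def begin_edit(self, key, user):
        with self._guard:
            if key in self._held:
                return False            # someone else is editing; caller must wait
            self._held[key] = user
            return True

    def end_edit(self, key, user):
        with self._guard:
            if self._held.get(key) == user:
                del self._held[key]
```

A real application would also need a timeout or cleanup path so a user who walks away (or a crashed session) doesn’t hold the row forever.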

Chaos concurrency control (also known as Anarchy) is used in situations where concurrent updates are not possible or “last in wins” is acceptable.  No safeguards need to be taken with chaos concurrency because there is either no chance of conflicts or overwrites are acceptable.  Chaos concurrency is typically used in single-user applications or in multi-user applications where data are segregated in such a way that concurrent updates are either not possible (e.g. a unique web session key) or so unlikely (e.g. data keyed by CustomerID) that the cost of a stricter concurrency model isn’t warranted.
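
As a toy illustration (hypothetical names, not from the post), a last-in-wins save is simply an unconditional overwrite, with safety coming from the keying scheme rather than from any check:

```python
# chaos / last-in-wins sketch: no version check, no locks; the most
# recent save simply overwrites whatever was there before
store = {}

def save(session_key, data):
    # safe when each session owns a unique key (e.g. a web session id),
    # so no two writers ever target the same row
    store[session_key] = data

save("session-42", {"cart": ["apple"]})
save("session-42", {"cart": ["apple", "pear"]})  # last write wins
```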

The choice between optimistic, pessimistic and chaos involves striking a balance between data availability, integrity, rework and development effort.  It is important to pick the concurrency model appropriate for the task at hand.  Using the wrong model can result in unnecessary blocking, long response times and data problems.  For example, using a chaos model in a multi-user system (which is often done inadvertently simply because concurrency wasn’t considered during development) can lead to lost updates.

Concurrency Control Semantics

The meanings of optimistic vs. pessimistic concurrency control are different depending on whether the context is the application or database server.  Here’s an excerpt from the SQL Server Books Online:

- Optimistic concurrency control works to minimize reader/writer blocking. With optimistic concurrency control methods, read operations do not use read locks that block data modification operations.

- Pessimistic concurrency control works to ensure that read operations access current data and that data being read cannot be modified. With pessimistic concurrency control methods, read operations use read locks that block data modification. The locks placed by a read operation are released when the read operation is finished.

Source: SQL Server 2005 Books Online, Types of Concurrency Control  (http://msdn.microsoft.com/en-us/library/ms189132.aspx)

Now let’s take a look at Types of Concurrency Control in the Visual Studio 2008 .NET documentation:

- Optimistic concurrency control: A row is unavailable to other users only while the data is actually being updated. The update examines the row in the database and determines whether any changes have been made. Attempting to update a record that has already been changed results in a concurrency violation.

- Pessimistic concurrency control: A row is unavailable to users from the time the record is fetched until it is updated in the database.

Source: Visual Studio 2008 .NET documentation, Types of Concurrency Control (http://msdn.microsoft.com/en-us/library/cs6hb8k4.aspx)

Here is a table summarizing the VS and SQL 2005 documentation:

 

                         Optimistic                  Pessimistic
                         SQL Server   Application    SQL Server   Application
Readers block readers    No           No             No           Yes
Readers block writers    No           No             Yes          Yes
Writers block writers    Yes          Yes            Yes          Yes

Table 2: Application Concurrency Control

You might be surprised to learn that readers block other readers when an application uses pessimistic concurrency control.  I should clarify that “readers block readers” is typically done only when the intent of reading data is a subsequent update (e.g. a user clicks edit) and it is likely that other sessions will try to edit the same data.  This approach ensures no conflicts can occur when data are later saved and prevents data overwrites but at the cost of concurrent data access.

I think some confusion about concurrency stems from the fact that applications may or may not leverage database features in concurrency control implementation.  For example, a pessimistic control application might use database transactions and locking mechanisms to ensure that only one user can edit inventory quantity.  The same pessimistic application could instead serialize data access in the middle tier and perform data access using an optimistic or even chaos model.  Concurrency control implementation details depend much on application architecture; there is not necessarily a right or wrong approach as long as concurrency objectives are met.
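The middle-tier variation described above can be sketched as follows (hypothetical names; the inventory example is mine, not from the post): a per-item lock in the application serializes edits, so the underlying write can use a plain last-in-wins model without risking lost updates:

```python
import threading
from collections import defaultdict

# middle-tier serialization sketch: per-item locks guard the edit in the
# application tier, so the write underneath needs no database locking at all
item_locks = defaultdict(threading.Lock)
inventory = {"widget": 100}

def adjust_quantity(item, delta):
    with item_locks[item]:        # only one session edits this item at a time
        # read-modify-write is a chaos-style access, but it is safe here
        # because the application lock has already serialized the editors
        inventory[item] = inventory[item] + delta
```

Note this only works when every writer goes through the same middle tier; a process that updates the database directly would bypass the lock, which is one reason database-enforced locking is often preferred.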

My next post in this concurrency model series will discuss how SQL Server transaction isolation levels and row versioning relate to concurrency control.  I’ll also discuss how applications can leverage these features to maximize concurrency.

posted @ Sunday, May 04, 2008 7:12 PM


Comments on this entry:

# Re: Concurrency Model Confusion

Left by Baha A. Ghanim at 3/9/2009 3:51 AM
Thank you for the post, well done... I suggest that it will be much better if you provided us with examples...

Regards,
Baha

# re: Concurrency Model Confusion

Left by Dan Guzman at 3/9/2009 7:25 PM
Thanks for the feedback, Baha. I'll try to provide some examples in a future post. A thorough concurrency discussion probably requires an entire book but I would at least like to underscore the important points.

# re: Concurrency Model Confusion

Left by Prasanth I D at 6/16/2009 10:44 AM
Nice article. Can you give an example of pessimistic concurrency (readers block writers)?

# re: Concurrency Model Confusion

Left by Dan Guzman at 7/25/2009 3:20 PM
Sorry for the belated response.

Consider a SELECT query in the default READ_COMMITTED isolation level (without the READ_COMMITTED_SNAPSHOT option on). The query will acquire shared locks while data are read. These pessimistic shared locks block writers (who require exclusive locks) in order to ensure only committed data is read.