
Asked to Not Use Transactions and to Use A Workaround to Simulate One


I've been developing T-SQL for several years and am always digging in further, continuing to learn all I can about all aspects of the language. I recently started working at a new company and have received what I think is an odd suggestion regarding transactions: don't ever use them; instead, use a workaround that simulates one. This is coming from our DBA, who works in one database with a lot of transactions and, consequently, a lot of blocking. The database I primarily work in does not suffer from this issue, and I see that transactions have been used in it in the past.



I understand that blocking is expected with transactions, as it's in their nature, and if you can get away without using one, by all means do it. But I have many occasions where every statement MUST execute successfully; if one fails, they all must fail to commit.



I’ve always kept the scope of my transactions as narrow as possible, always used in conjunction with SET XACT_ABORT ON and always within a TRY/CATCH.



Example:



CREATE SCHEMA someschema;
GO

CREATE TABLE someschema.tableA
(
    id   INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    ColA VARCHAR(10) NOT NULL
);
GO

CREATE TABLE someschema.tableB
(
    id   INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    ColB VARCHAR(10) NOT NULL
);
GO

CREATE PROCEDURE someschema.ProcedureName
    @ColA VARCHAR(10),
    @ColB VARCHAR(10)
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO someschema.tableA (ColA)
        VALUES (@ColA);

        INSERT INTO someschema.tableB (ColB)
        VALUES (@ColB);

        --Implement error
        SELECT 1/0;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            ROLLBACK TRANSACTION;
        END;
        THROW;
        RETURN;
    END CATCH;
END;
GO



Here is what they suggested that I do.



GO



CREATE PROCEDURE someschema.ProcedureNameNoTransaction
    @ColA VARCHAR(10),
    @ColB VARCHAR(10)
AS
SET NOCOUNT ON;
BEGIN
    BEGIN TRY
        DECLARE @tableAid INT;
        DECLARE @tableBid INT;

        INSERT INTO someschema.tableA (ColA)
        VALUES (@ColA);
        SET @tableAid = SCOPE_IDENTITY();

        INSERT INTO someschema.tableB (ColB)
        VALUES (@ColB);
        SET @tableBid = SCOPE_IDENTITY();

        --Implement error
        SELECT 1/0;

    END TRY
    BEGIN CATCH
        DELETE FROM someschema.tableA
        WHERE id = @tableAid;

        DELETE FROM someschema.tableB
        WHERE id = @tableBid;

        THROW;

        RETURN;
    END CATCH;
END;
GO


My question to the community is as follows: does this make sense as a viable workaround for transactions?



My opinion, based on what I know about transactions and what the solution proposes, is that no, this isn't a viable solution: it introduces many points of failure.



In the suggested workaround, I see four implicit transactions occurring: the two inserts in the TRY block, and then two more for the deletes in the CATCH block. It does “undo” the inserts, but nothing is actually rolled back.



This is a very basic example to demonstrate the concept they are suggesting. Some of the actual stored procedures I have applied this to are exhaustively long and difficult to manage, because “rolling back” multiple result sets, versus the two parameter values in this example, becomes quite complicated, as you can imagine. Since the “rollback” is now being done manually, the opportunity to miss something becomes real.



Another issue I think exists is with timeouts or severed connections. Does the work still get rolled back? This is my understanding of why SET XACT_ABORT ON should be used: so that in these cases the transaction will roll back.



Thanks in advance for your feedback!










sql-server t-sql transaction






asked Sep 12 at 20:28 by Forrest









    Comments that do not serve their stated purpose have been deleted, or moved to the Community Wiki answer.

    – Paul White says GoFundMonica
    Sep 13 at 16:08












9 Answers


You cannot not use transactions in SQL Server (or probably any other proper RDBMS). In the absence of explicit transaction boundaries (BEGIN TRANSACTION ... COMMIT), each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).
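As a minimal sketch of that autocommit behaviour (the demo table name is made up for illustration): without BEGIN TRANSACTION, a failure in the second statement does not undo the first one.

CREATE TABLE dbo.AutocommitDemo (id INT NOT NULL PRIMARY KEY);

INSERT INTO dbo.AutocommitDemo (id) VALUES (1); -- its own transaction, committed immediately
INSERT INTO dbo.AutocommitDemo (id) VALUES (1); -- fails (PK violation); only this statement is rolled back

SELECT id FROM dbo.AutocommitDemo;              -- still returns the row with id = 1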



Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



  • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


  • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


  • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


  • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts of RDBMSes (this site being an example). I find it very unlikely that a random DBA can come up with a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






There are some errors which are so severe that the CATCH block is never entered. From the documentation:




    Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.



    Attentions, such as client-interrupt requests or broken client connections.



    When the session is ended by a system administrator by using the KILL statement.



    ...



    Compile errors, such as syntax errors, that prevent a batch from running.



    Errors that occur ... because of deferred name resolution.




    A lot of these are easy to produce through dynamic SQL. Undo statements such as you've shown will not protect your data from such errors.
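As a sketch of the deferred-name-resolution case (the procedure and table names are invented, and the table deliberately does not exist): the CATCH block inside the procedure never runs, so any manual "undo" logic placed there never runs either.

CREATE PROCEDURE dbo.DeferredNameResolutionDemo
AS
BEGIN
    BEGIN TRY
        -- Compiles at CREATE time thanks to deferred name resolution,
        -- but fails at run time at this execution level with error 208.
        SELECT * FROM dbo.TableThatDoesNotExist;
    END TRY
    BEGIN CATCH
        PRINT 'Never reached at this execution level';
    END CATCH;
END;
GO

EXEC dbo.DeferredNameResolutionDemo; -- the error escapes the procedure's own TRY/CATCH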






      Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

      – dgould
      Sep 13 at 16:31







      Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

      – Joshua
      Sep 13 at 20:38


















i-one: The workaround that is suggested to you makes it possible to violate (at least) the "A" of ACID. For example, if the SP is being executed by a remote client and the connection breaks, then a partial "commit"/"rollback" may happen, because the server can terminate the session between two insertions/deletions (and abort SP execution before it reaches its end).




    Does this make sense as a viable workaround for transactions?




    dan-guzman: No, the CATCH block never gets executed in the case of a query timeout because the client API cancelled the batch. Without a transaction, SET XACT_ABORT ON cannot rollback anything other than the current statement.



tibor-karaszi: You have 4 transactions, meaning more logging to the transaction log file. Remember that each transaction requires a synchronous write of the log records up to that point, i.e. you get worse performance from that aspect as well when using many transactions.



rbarryyoung: If they are getting a lot of blocking, then they either need to fix their data design, rationalize their table access order, or use a more appropriate isolation level. They are assuming that their problems (and their failure to understand them) will become your problem. The evidence from millions of other databases is that it will not.



Also, what they are trying to implement manually is effectively a poor man's optimistic concurrency. What they should do instead is use some of the best optimistic concurrency in the world, already built into SQL Server. This goes to the isolation point above. In all likelihood they need to switch from whatever pessimistic concurrency isolation level they are currently using to one of the optimistic concurrency isolation levels, SNAPSHOT or READ_COMMITTED_SNAPSHOT. These will effectively do the same thing as their manual code, except they will do it correctly.
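A hedged sketch of what switching to those options involves (the database name is a placeholder):

ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- A session can then opt in to snapshot isolation explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT ColA FROM someschema.tableA WHERE id = 1; -- readers no longer block writers
COMMIT TRANSACTION;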



ross-presser: If you have extremely long-running processes -- like something happens today and next week something has to follow up, and if next week's thing fails then today's has to fail retroactively -- you may want to look into sagas. Strictly speaking this is outside the database, as it requires a service bus.






Bad-idea code is just going to be more expensive to fix down the line.

If there are blocking problems when using explicit transactions (rollback/commit), point your DBA to the internet for some great ideas to address the issues.

Here's a way to help alleviate the blocking: https://www.sqlservercentral.com/articles/using-indexes-to-reduce-blocking-in-concurrent-transactions




Indexes reduce the number of seeks that must occur in a table/page to find a row/set of rows. They are generally seen as a method to reduce execution times for SELECT queries, and rightly so. They are not considered suitable for tables involved in a large number of UPDATEs; in fact, indexes are found to be unfavorable in these cases, as they increase the time taken to complete UPDATE queries.

But this is not always the case. Delving slightly deeper into the execution of an UPDATE statement, we find that it too involves executing a SELECT statement first. This is a special and often seen scenario where queries update mutually exclusive sets of rows. Indexes here can lead to a significant increase in the performance of the database engine, contrary to popular belief.
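To make the quoted point concrete, here is a small sketch (the table and column names are hypothetical): with the index, the UPDATE can seek straight to the affected rows, so it takes fewer locks and holds them for less time.

CREATE TABLE dbo.Orders
(
    OrderId    INT IDENTITY(1, 1) PRIMARY KEY,
    CustomerId INT NOT NULL,
    Status     VARCHAR(20) NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId);

-- Seeks via IX_Orders_CustomerId instead of scanning (and locking its way through) the whole table
UPDATE dbo.Orders
SET Status = 'Shipped'
WHERE CustomerId = 42;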







        The fake transaction strategy is dangerous because it allows concurrency issues that transactions specifically prevent. Consider that in the second example any of the data might be changed between statements.



The fake transaction deletes are not GUARANTEED to run or succeed. If the database server goes down during the fake transaction, some but not all of the effects will remain. They are also not guaranteed to succeed in the same way a transaction rollback is.



        This strategy might work with inserts, but definitely would not work with updates or deletes (no time-machine SQL statements).
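To see why, consider what manually "undoing" an UPDATE would require against the question's tables: you have to capture the before-image yourself (a real ROLLBACK reads it from the transaction log for free), and that copy disappears with the session if the connection dies. A rough sketch, reusing someschema.tableA from the question:

DECLARE @id INT = 1,
        @NewColA VARCHAR(10) = 'new value';

-- Manual "before image" the CATCH block would need in order to undo the UPDATE
DECLARE @OldValues TABLE (id INT, ColA VARCHAR(10));

UPDATE someschema.tableA
SET ColA = @NewColA
OUTPUT deleted.id, deleted.ColA INTO @OldValues (id, ColA)
WHERE id = @id;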



        If strict transaction concurrency is causing blocking there are lots of solutions, even ones that reduce the protection level... these are the correct way to solve the problem.



        Your DBA is offering a solution that might work OK if there was only one user of the database, but is absolutely unfit for any kind of serious usage.






          This is not a programming issue, rather it is an interpersonal/miscommunication issue. Most likely your "DBA" is worried about locks, not transactions.



The other answers already explain why you have to use transactions... I mean, that's what an RDBMS does; without properly used transactions there is no data integrity. So I'll focus on how to solve the real problem, which is: find out why your "DBA" developed an allergy to transactions and convince him to change his mind.



          I think this guy is confusing "a particular scenario where bad code resulted in terrible performance" with "all transactions are bad." I wouldn't expect a competent DBA to make that mistake, so that's really weird. Maybe he had a really bad experience with some terrible code?



          Consider a scenario like this:



BEGIN
UPDATE or DELETE some row, which takes a lock on it
...do something that takes a while
...perform other queries
COMMIT


          This style of transaction use holds a lock (or several locks) which means other transactions hitting the same rows will have to wait. If the locks are held for a long time, and especially if lots of other transactions want to lock the same rows, this can really hurt performance.



          What you could do is ask him why he has this curiously wrong idea of not using transactions, what types of queries were problematic, etc. Then try to persuade him that you will definitely avoid similar bad scenarios, that you will monitor your lock usage and performance, reassure him, etc.



What he's telling you is "don't touch the screwdriver!", so the code you posted in your question is basically using a hammer to drive a screw. A much better option is to convince him you know how to use a screwdriver...



          I can think of several examples... well, they were on MySQL but that should work too.



          There was a forum where the full text index took a while to update. When a user submitted a post, the transaction would update the topics table to increase the post count and last post date (thus locking the topic row), then inserted the post, and the transaction would hold the lock until the fulltext index had finished updating and the COMMIT was done.



          Since this ran on a rustbucket with way too little RAM, updating said fulltext index often resulted in several seconds of intense random IO on the single slow spinning drive in the box.



          The problem was that people who clicked on the topic caused a query to increase the view count on the topic, which also required a lock on the topic row. Thus, no-one could view the topic while its fulltext index was updating. I mean, the row could be read, but updating it would lock.



          Even worse, posting would update the post count on the parent forums table and also hold the lock while the fulltext index was updating... which froze the entire forum for a few seconds and caused tons of requests to pile up in the web server queue.



The solution was to take the locks in the correct order: BEGIN, insert the post and update the fulltext index without taking any locks, then quickly update the topic/forum rows with the post count and last post date, and COMMIT. That completely solved the problem. It was just moving around a few queries, really simple.
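In T-SQL terms the reordering looks roughly like this (the tables and variables are invented and assumed to exist; the only point is that the contended rows are locked as late as possible):

BEGIN TRANSACTION;

    -- Slow part first: nothing contended is locked yet
    INSERT INTO dbo.Posts (TopicId, Body)
    VALUES (@TopicId, @Body);

    -- Hot rows last: these locks are held only for an instant before COMMIT
    UPDATE dbo.Topics
    SET PostCount = PostCount + 1,
        LastPostDate = SYSUTCDATETIME()
    WHERE TopicId = @TopicId;

COMMIT TRANSACTION;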



          In this case, transactions weren't the problem... It was acquiring an unnecessary lock before a lengthy operation. Other examples of stuff to avoid while holding a lock in a transaction: waiting for user input, accessing lots of uncached data from slow spinning drives, network IO, etc.



Of course, sometimes you don't have a choice and you have to do lengthy processing while holding cumbersome locks. There are tricks around this (operate on a copy of the data, etc.), but quite often the performance bottleneck comes from a lock that was not intentionally acquired, and simply reordering queries solves the problem. Even better is being aware of the locks taken while writing the queries...



          I won't repeat the other answers but really... use transactions. Your problem is convincing your "DBA", not working around the most important feature of a database...






            TLDR: Use proper isolation level.



As you correctly noticed, the approach without transactions and with "manual" recovery can be very complex. High complexity normally means much more time to implement it and much more time to fix errors (because complexity leads to more errors in the implementation). It means such an approach can cost your customer much more.



The main concern of your "DBA" colleague is performance. One way to improve it is to use the proper isolation level. Suppose you have a procedure that provides some kind of overview data to the user. Such a procedure does not necessarily have to use the SERIALIZABLE isolation level. In many cases READ UNCOMMITTED can be quite sufficient. It means such a procedure will not be blocked by your transaction that creates or modifies some data.
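A minimal sketch of that idea, as a hypothetical reporting procedure against the question's tableA (whether dirty reads are acceptable is, of course, a business decision):

CREATE PROCEDURE someschema.GetOverviewData
AS
BEGIN
    SET NOCOUNT ON;

    -- Weaker isolation: this read is not blocked by concurrent writers,
    -- at the price of possibly seeing uncommitted ("dirty") rows.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT COUNT(*) AS RowCountA
    FROM someschema.tableA;
END;
GO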



            I'd suggest you review all existing functions/procedures in your database, evaluate the reasonable isolation level for each one, explain the performance benefits to your customer. Then adjust these functions/procedures accordingly.






You can also decide to use In-Memory OLTP tables. They of course still use transactions, but there is no blocking involved. Instead of blocking, all operations will succeed, but during the commit phase the engine checks for transaction conflicts, and one of the commits may fail. Microsoft uses the term "optimistic locking":

If the scaling issue is caused by conflict between two write operations, such as two concurrent transactions trying to update the same row, In-Memory OLTP lets one transaction succeed and fails the other transaction. The failed transaction must be re-submitted either explicitly or implicitly, re-trying the transaction.

More at: In-Memory OLTP
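A sketch of what that looks like (the table name is made up, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

CREATE TABLE dbo.tableA_InMemory
(
    id   INT IDENTITY(1, 1) NOT NULL PRIMARY KEY NONCLUSTERED,
    ColA VARCHAR(10) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Writers never block each other on this table; if two concurrent transactions
-- update the same row, one of them fails at commit time and must be retried by the caller.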






There is a way around using transactions, to a limited extent, and that is by changing your data model to be more object oriented. So rather than storing, for example, demographic data about a person in several tables that relate to each other and require transactions, you could have a single JSON document that stores everything you know about that person in a single field. Of course, working out how far that domain stretches is another design challenge, best done by developers, not DBAs.
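A small sketch of the idea (the names are made up; SQL Server's built-in JSON functions are used only for illustration): one row holds the whole aggregate, so a single INSERT or UPDATE is atomic on its own without a multi-statement transaction.

CREATE TABLE dbo.Person
(
    PersonId INT IDENTITY(1, 1) PRIMARY KEY,
    Document NVARCHAR(MAX) NOT NULL CHECK (ISJSON(Document) = 1)
);

-- Everything known about the person travels in one document, written atomically
INSERT INTO dbo.Person (Document)
VALUES (N'{"name": "Ada", "address": {"city": "London"}, "phones": ["555-0100"]}');

SELECT JSON_VALUE(Document, '$.address.city') AS City
FROM dbo.Person;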






                share|improve this answer



























                  Your Answer








                  StackExchange.ready(function()
                  var channelOptions =
                  tags: "".split(" "),
                  id: "182"
                  ;
                  initTagRenderer("".split(" "), "".split(" "), channelOptions);

                  StackExchange.using("externalEditor", function()
                  // Have to fire editor after snippets, if snippets enabled
                  if (StackExchange.settings.snippets.snippetsEnabled)
                  StackExchange.using("snippets", function()
                  createEditor();
                  );

                  else
                  createEditor();

                  );

                  function createEditor()
                  StackExchange.prepareEditor(
                  heartbeatType: 'answer',
                  autoActivateHeartbeat: false,
                  convertImagesToLinks: false,
                  noModals: true,
                  showLowRepImageUploadWarning: true,
                  reputationToPostImages: null,
                  bindNavPrevention: true,
                  postfix: "",
                  imageUploader:
                  brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                  contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                  allowUrls: true
                  ,
                  onDemand: true,
                  discardSelector: ".discard-answer"
                  ,immediatelyShowMarkdownHelp:true
                  );



                  );














                  draft saved

                  draft discarded
















                  StackExchange.ready(
                  function ()
                  StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdba.stackexchange.com%2fquestions%2f248677%2fasked-to-not-use-transactions-and-to-use-a-workaround-to-simulate-one%23new-answer', 'question_page');

                  );

                  Post as a guest















                  Required, but never shown


























                  9 Answers
                  9






                  active

                  oldest

                  votes








                  9 Answers
                  9






                  active

                  oldest

                  votes









                  active

                  oldest

                  votes






                  active

                  oldest

                  votes









                  61



















                  You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



                  Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



                  • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


                  • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


                  • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


                  • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


                  Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can come up with a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



                  In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






                  share|improve this answer
































                    61



















                    You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



                    Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



                    • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


                    • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


                    • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


                    • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


                    Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can come up with a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



                    In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






                    share|improve this answer






























                      61















                      61











                      61









                      You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



                      Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



                      • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


                      • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


                      • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


                      • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


                      Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can come up with a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



                      In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






                      share|improve this answer
















                      You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



                      Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



                      • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


                      • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


                      • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


                      • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


                      Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can come up with a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



                      In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.







                      share|improve this answer















                      share|improve this answer




                      share|improve this answer








                      edited Sep 13 at 14:12

























                      answered Sep 13 at 0:58









                      mustacciomustaccio

                      13.8k9 gold badges31 silver badges49 bronze badges




                      13.8k9 gold badges31 silver badges49 bronze badges


























                          14



















                          There are some errors which are so severe that the CATCH block is never entered. From the documentation




                          Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.



                          Attentions, such as client-interrupt requests or broken client connections.



                          When the session is ended by a system administrator by using the KILL statement.



                          ...



                          Compile errors, such as syntax errors, that prevent a batch from running.



                          Errors that occur ... because of deferred name resolution.




                          A lot of these are easy to produce through dynamic SQL. Undo statements such as you've shown will not protect your data from such errors.






                          share|improve this answer





















                          • 2





                            Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

                            – dgould
                            Sep 13 at 16:31







                          • 2





                            Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

                            – Joshua
                            Sep 13 at 20:38















                          14



















                          There are some errors which are so severe that the CATCH block is never entered. From the documentation




                          Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.



                          Attentions, such as client-interrupt requests or broken client connections.



                          When the session is ended by a system administrator by using the KILL statement.



                          ...



                          Compile errors, such as syntax errors, that prevent a batch from running.



                          Errors that occur ... because of deferred name resolution.




                          A lot of these are easy to produce through dynamic SQL. Undo statements such as you've shown will not protect your data from such errors.






                          share|improve this answer





















                          • 2





                            Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

                            – dgould
                            Sep 13 at 16:31







                          • 2





                            Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

                            – Joshua
                            Sep 13 at 20:38













                          14















                          14











                          14









                          There are some errors which are so severe that the CATCH block is never entered. From the documentation




                          Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.



                          Attentions, such as client-interrupt requests or broken client connections.



                          When the session is ended by a system administrator by using the KILL statement.



                          ...



                          Compile errors, such as syntax errors, that prevent a batch from running.



                          Errors that occur ... because of deferred name resolution.




                          A lot of these are easy to produce through dynamic SQL. Undo statements such as you've shown will not protect your data from such errors.






                          share|improve this answer














                          There are some errors which are so severe that the CATCH block is never entered. From the documentation




                          Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has severity of 20 or higher and the database connection is not disrupted, TRY...CATCH will handle the error.



                          Attentions, such as client-interrupt requests or broken client connections.



                          When the session is ended by a system administrator by using the KILL statement.



                          ...



                          Compile errors, such as syntax errors, that prevent a batch from running.



                          Errors that occur ... because of deferred name resolution.




                          A lot of these are easy to produce through dynamic SQL. Undo statements such as you've shown will not protect your data from such errors.







                          share|improve this answer













                          share|improve this answer




                          share|improve this answer










                          answered Sep 13 at 7:22









                          Michael GreenMichael Green

                          17.6k8 gold badges36 silver badges75 bronze badges




                          17.6k8 gold badges36 silver badges75 bronze badges










                          • 2





                            Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

                            – dgould
                            Sep 13 at 16:31







                          • 2





                            Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

                            – Joshua
                            Sep 13 at 20:38












                          • 2





                            Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

                            – dgould
                            Sep 13 at 16:31







                          • 2





                            Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

                            – Joshua
                            Sep 13 at 20:38







                          2




                          2





                          Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

                          – dgould
                          Sep 13 at 16:31






                          Right -- and if nothing else, the client dying while executing the code would constitute an error "so severe that the CATCH block is never entered". No matter how much you trust the software (not just your own code, but EVERY part of ALL of the software stacks involved), there's always the possibility of a hardware failure (again, potentially anywhere along the chain) stopping you cold at any moment. Keeping this in mind is a good defense against the kind thinking that leads to this kind of "workaround".

                          – dgould
                          Sep 13 at 16:31





                          2




                          2





                          Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

                          – Joshua
                          Sep 13 at 20:38





                          Also, you might be a deadlock victim. Your CATCH blocks run but throw if they try to write to the database.

                          – Joshua
                          Sep 13 at 20:38











                          10



















i-one: The workaround that is suggested to you makes it possible to violate (at least) the "A" in ACID. For example, if the SP is being executed by a remote client and the connection breaks, a partial "commit"/"rollback" may happen, because the server can terminate the session between two insertions/deletions (and abort SP execution before it reaches its end).

Does this make sense as a viable workaround for transactions?

dan-guzman: No, the CATCH block never gets executed in the case of a query timeout because the client API cancelled the batch. Without a transaction, SET XACT_ABORT ON cannot roll back anything other than the current statement.

tibor-karaszi: You have 4 transactions, meaning more logging to the transaction log file. Remember that each transaction requires a synchronous write of the log records up to that point, i.e. you also get worse performance from that aspect when using many transactions.

rbarryyoung: If they are getting a lot of blocking then they either need to fix their data design, rationalize their table access order, or use a more appropriate isolation level. They are assuming that their problems (and their failure to understand them) will become your problem. The evidence from millions of other databases is that it will not.

Also, what they are trying to manually implement is effectively a poor man's optimistic concurrency. What they should do instead is use some of the best optimistic concurrency in the world, already built into SQL Server. This goes to the isolation point above. In all likelihood they need to switch from whatever pessimistic concurrency isolation level they are currently using to one of the optimistic concurrency isolation levels, SNAPSHOT or READ_COMMITTED_SNAPSHOT. These will effectively do the same thing as their manual code, except they will do it correctly.

ross-presser: If you have extremely long-running processes -- like something happens today and next week something has to follow up, and if next week's thing fails then today's has to fail retroactively -- you may want to look into sagas. Strictly speaking this is outside of the database, as it requires a service bus.
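
For contrast with the workaround, the pattern these comments point toward is an explicit transaction with SET XACT_ABORT ON inside TRY/CATCH. A minimal sketch, assuming a hypothetical procedure over hypothetical dbo.OrderHeader and dbo.OrderLine tables:

    CREATE PROCEDURE dbo.usp_AddOrder
        @OrderId    int,
        @CustomerId int,
        @ProductId  int,
        @Qty        int
    AS
    BEGIN
        SET NOCOUNT, XACT_ABORT ON;   -- XACT_ABORT: any error or client timeout rolls the whole unit back

        BEGIN TRY
            BEGIN TRANSACTION;

            INSERT INTO dbo.OrderHeader (OrderId, CustomerId)
            VALUES (@OrderId, @CustomerId);

            INSERT INTO dbo.OrderLine (OrderId, ProductId, Qty)
            VALUES (@OrderId, @ProductId, @Qty);

            COMMIT TRANSACTION;       -- both rows become visible together, or neither does
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION;
            THROW;                    -- re-raise so the caller knows the whole unit failed
        END CATCH;
    END;

If any statement fails, nothing is half-applied and there is nothing to compensate for by hand.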






                              edited Sep 20 at 18:28


























                              community wiki





                              3 revs
                              user126897

























                                  5



















Code built on a bad idea is only going to be more expensive to fix down the line.

If there are blocking problems when using explicit transactions (rollback/commit), point your DBA to the internet for some good ideas on addressing the issues.

Here's a way to help alleviate the blocking: https://www.sqlservercentral.com/articles/using-indexes-to-reduce-blocking-in-concurrent-transactions


    Indexes reduce the number of seeks that must occur in a table/page to
    find a row or set of rows. They are generally seen as a method to reduce
    execution times for SELECT queries, and rightly so. They are not
    considered suitable for tables involved in a large number of UPDATEs. In
    fact, indexes are found to be unfavorable in these cases, as they
    increase the time taken to complete UPDATE queries.

    But this is not always the case. Delving slightly deeper into the
    execution of an UPDATE statement, we find that it too involves
    executing a SELECT statement first. This is a special and often-seen
    scenario where queries update mutually exclusive sets of rows.
    Indexes here can lead to a significant increase in the performance of the
    database engine, contrary to popular belief.
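
As a concrete illustration of that point (the table, column, and index names are hypothetical), an UPDATE that can seek on a supporting index touches, and therefore briefly locks, far fewer rows than one that has to scan:

    -- Without this index the UPDATE below scans dbo.Orders, briefly locking every row it examines;
    -- with it, the engine seeks directly to the handful of matching rows.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Status
        ON dbo.Orders (CustomerId, Status);

    DECLARE @CustomerId int = 123;

    UPDATE dbo.Orders
    SET    Status = 'Cancelled'
    WHERE  CustomerId = @CustomerId
      AND  Status = 'Pending';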







                                      edited Sep 13 at 19:56

























                                      answered Sep 12 at 22:19









user238855

























                                          4



















The fake transaction strategy is dangerous because it allows concurrency issues that transactions specifically prevent. Consider that in the second example any of the data might be changed between statements.

The fake transaction deletes are not GUARANTEED to run or succeed. If the database server turns off during the fake transaction, some but not all of the effects will remain. They are also not guaranteed to succeed in the same way a transaction rollback is.

This strategy might work with inserts, but definitely would not work with updates or deletes (there are no time-machine SQL statements).

If strict transaction concurrency is causing blocking, there are lots of solutions, even ones that reduce the protection level... these are the correct way to solve the problem.

Your DBA is offering a solution that might work OK if there were only one user of the database, but it is absolutely unfit for any kind of serious usage.
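
To make the "no time machine" point concrete, this is roughly what manual compensation of an UPDATE has to look like (hypothetical dbo.Inventory table). Nothing guarantees the compensating statement ever runs, and nothing stops another session from changing the row in between:

    DECLARE @ProductId int = 1, @Amount int = 5, @OldQty int;

    -- Capture the old value yourself, because without a transaction nothing else will.
    SELECT @OldQty = Qty FROM dbo.Inventory WHERE ProductId = @ProductId;

    UPDATE dbo.Inventory SET Qty = Qty - @Amount WHERE ProductId = @ProductId;

    -- ... later steps ...

    -- The manual "rollback": it may never run (crash, timeout, killed session), and if
    -- another session has changed Qty in the meantime, restoring @OldQty silently
    -- overwrites their work.
    UPDATE dbo.Inventory SET Qty = @OldQty WHERE ProductId = @ProductId;

A real transaction provides both properties (automatic undo and isolation from concurrent writers) without any of this bookkeeping.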
















                                              answered Sep 14 at 1:13









Bailey S

























                                                  4



















This is not a programming issue; rather, it is an interpersonal/miscommunication issue. Most likely your "DBA" is worried about locks, not transactions.

The other answers already explain why you have to use transactions... I mean, that's what an RDBMS does; without properly used transactions there is no data integrity. So I'll focus on how to solve the real problem, which is: find out why your "DBA" developed an allergy to transactions and convince him to change his mind.

I think this guy is confusing "a particular scenario where bad code resulted in terrible performance" with "all transactions are bad." I wouldn't expect a competent DBA to make that mistake, so that's really weird. Maybe he had a really bad experience with some terrible code?

Consider a scenario like this:

    BEGIN
    UPDATE or DELETE some row, which takes a lock on it
    ...do something that takes a while
    ...perform other queries
    COMMIT

This style of transaction use holds a lock (or several locks), which means other transactions hitting the same rows will have to wait. If the locks are held for a long time, and especially if lots of other transactions want to lock the same rows, this can really hurt performance.

What you could do is ask him why he has this curiously wrong idea of not using transactions, what types of queries were problematic, etc. Then try to persuade him that you will definitely avoid similar bad scenarios, that you will monitor your lock usage and performance, reassure him, etc.

What he's telling you is "don't touch the screwdriver!", so the code you posted in your question is basically using a hammer to drive a screw. A much better option is to convince him you know how to use a screwdriver...

I can think of several examples... well, they were on MySQL, but the same ideas apply here too.

There was a forum where the full-text index took a while to update. When a user submitted a post, the transaction would update the topics table to increase the post count and last post date (thus locking the topic row), then insert the post, and the transaction would hold the lock until the full-text index had finished updating and the COMMIT was done.

Since this ran on a rustbucket with way too little RAM, updating said full-text index often resulted in several seconds of intense random IO on the single slow spinning drive in the box.

The problem was that people who clicked on the topic caused a query to increase the view count on the topic, which also required a lock on the topic row. Thus, no one could view the topic while its full-text index was updating. I mean, the row could be read, but updating it would block.

Even worse, posting would update the post count on the parent forums table and also hold that lock while the full-text index was updating... which froze the entire forum for a few seconds and caused tons of requests to pile up in the web server queue.

The solution was to take the locks in the correct order: BEGIN, insert the post and update the full-text index without taking any contended locks, then quickly update the topic/forum rows with the post count and last post date, and COMMIT. That completely solved the problem. It was just moving around a few queries, really simple.

In this case, transactions weren't the problem... it was acquiring an unnecessary lock before a lengthy operation. Other examples of stuff to avoid while holding a lock in a transaction: waiting for user input, accessing lots of uncached data from slow spinning drives, network IO, etc.

Of course, sometimes you don't have a choice and you have to do lengthy processing while holding cumbersome locks. There are tricks around this (operate on a copy of the data, etc.), but quite often the performance bottleneck comes from a lock that was not intentionally acquired, and simply reordering queries solves the problem. Even better is being aware of the locks taken while writing the queries...

I won't repeat the other answers, but really... use transactions. Your problem is convincing your "DBA", not working around the most important feature of a database...
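
A rough T-SQL sketch of the same reordering idea, assuming a hypothetical forum schema (dbo.Post, dbo.Topic, dbo.Forum): do the slow, uncontended work first and touch the hot rows only just before COMMIT.

    -- @TopicId, @ForumId, @AuthorId and @Body are parameters of the (hypothetical) post-submission procedure.
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

        INSERT INTO dbo.Post (TopicId, AuthorId, Body, PostedAt)   -- slow part, uncontended lock
        VALUES (@TopicId, @AuthorId, @Body, SYSUTCDATETIME());

        UPDATE dbo.Topic                                            -- hot row, locked only briefly
        SET    PostCount = PostCount + 1,
               LastPostAt = SYSUTCDATETIME()
        WHERE  TopicId = @TopicId;

        UPDATE dbo.Forum                                            -- hottest row, locked last
        SET    PostCount = PostCount + 1
        WHERE  ForumId = @ForumId;

    COMMIT TRANSACTION;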






                                                      edited Sep 14 at 16:49

























                                                      answered Sep 14 at 15:40









peufeu

























                                                          3



















TL;DR: Use the proper isolation level.

As you correctly noticed, the approach without transactions and with "manual" recovery can be very complex. That high complexity normally means much more time to implement it and much more time to fix errors (because complexity leads to more errors in the implementation), so such an approach can cost your customer much more.

The main concern of your "DBA" colleague is performance. One of the ways to improve it is to use the proper isolation level. Suppose you have a procedure that provides some kind of overview data to the user. Such a procedure does not necessarily have to use the SERIALIZABLE isolation level; in many cases READ UNCOMMITTED can be quite sufficient. That means such a procedure will not be blocked by your transaction that creates or modifies data.

I'd suggest you review all existing functions/procedures in your database, evaluate a reasonable isolation level for each one, and explain the performance benefits to your customer. Then adjust these functions/procedures accordingly.
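
Two small sketches of that advice (object names are hypothetical): a report that tolerates dirty reads can opt into READ UNCOMMITTED for its own session, or row versioning can be enabled database-wide so readers see the last committed version instead of blocking (the RCSI option mentioned in other answers):

    -- Per-session: an overview report that is allowed to see uncommitted data.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT CustomerId, COUNT(*) AS OpenOrders
    FROM   dbo.Orders
    WHERE  Status = 'Pending'
    GROUP  BY CustomerId;

    -- Database-wide alternative: readers no longer block behind writers.
    ALTER DATABASE CURRENT
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;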
















                                                              answered Sep 15 at 12:03









mentallurg

























                                                                  2



















You can also decide to use In-Memory OLTP tables. They of course still use transactions, but there is no blocking involved.

Instead of blocking, all operations appear to succeed, but during the commit phase the engine checks for transaction conflicts and one of the commits may fail. Microsoft uses the term "optimistic locking".

If the scaling issue is caused by a conflict between two write operations, such as two concurrent transactions trying to update the same row, In-Memory OLTP lets one transaction succeed and fails the other transaction. The failed transaction must be re-submitted either explicitly or implicitly, re-trying the transaction.

More at: In memory OLTP
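
A sketch of the retry loop such a conflict requires (dbo.usp_UpdateBalance is a hypothetical procedure that writes to a memory-optimized table; verify the retryable error numbers against the documentation for your version):

    DECLARE @AccountId int = 42, @Delta money = -10.00, @retry int = 3;

    WHILE @retry > 0
    BEGIN
        BEGIN TRY
            EXEC dbo.usp_UpdateBalance @AccountId = @AccountId, @Delta = @Delta;
            SET @retry = 0;                              -- success: leave the loop
        END TRY
        BEGIN CATCH
            -- 41302/41305/41325/41301 are the conflict and validation failures that
            -- are generally safe to retry; anything else is re-raised to the caller.
            IF ERROR_NUMBER() IN (41302, 41305, 41325, 41301) AND @retry > 1
                SET @retry -= 1;
            ELSE
                THROW;
        END CATCH;
    END;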














edited Sep 20 at 18:49
answered Sep 20 at 18:40
Piotr

                                                                          -5



















There is a way around using transactions, to a limited extent: change your data model to be more object oriented. Rather than storing, for example, demographic data about a person in several related tables (which requires a transaction to keep them consistent), you could store everything you know about that person as a single JSON document in one field. Of course, working out how far that domain stretches is another design challenge, best done by developers rather than DBAs.
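A minimal sketch of that idea follows; the table and attribute names are invented, and it assumes SQL Server 2016 or later for the built-in JSON functions.

    -- One row per person: the whole "person" domain lives in a single JSON column,
    -- so a save touches one row instead of several related tables.
    CREATE TABLE dbo.Person
    (
        PersonId  INT IDENTITY PRIMARY KEY,
        PersonDoc NVARCHAR(MAX) NOT NULL
            CONSTRAINT CK_Person_PersonDoc_IsJson CHECK (ISJSON(PersonDoc) = 1)
    );

    INSERT INTO dbo.Person (PersonDoc)
    VALUES (N'{
        "name": "Ada Lovelace",
        "dateOfBirth": "1815-12-10",
        "addresses": [ { "type": "home", "city": "London" } ]
    }');

    -- Individual attributes can still be read with the built-in JSON functions.
    SELECT  PersonId,
            JSON_VALUE(PersonDoc, '$.name')        AS PersonName,
            JSON_VALUE(PersonDoc, '$.dateOfBirth') AS DateOfBirth
    FROM    dbo.Person;

The trade-off is that cross-row constraints and set-based updates against individual attributes become harder, which is why the boundary of such a document needs careful design.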






answered Sep 14 at 4:48
user2127






























