Microservices and stored procedures
Are stored procedures considered bad practice in a microservice architecture?
Here are my thoughts:
- Most books on microservices recommend one database per microservice. Stored procedures typically work on a monolithic database.
- Again, most microservice architecture books state that microservices should be autonomous and loosely coupled. Using stored procedures written, say, specifically in Oracle tightly couples the microservice to that technology.
- Most microservice architecture books (that I have read) recommend that microservices should be business oriented (designed using domain-driven design (DDD)). By moving business logic into stored procedures in the database, this is no longer the case.
Any thoughts on this?
architecture microservices
10
@RandomUs1r sorry, this doesn't make sense to me. Why does the DB structure have to be non-relational? Sure, it may have external references, but its internal structure may well be 100% relational
– IMil
Sep 17 at 0:09
12
The problem with your points is that all of your premises are wrong. The statement that microservices should be autonomous and loosely coupled means, first and foremost, that they should be loosely coupled to each other; how you manage coupling of internal components is a different matter - and of secondary importance (but not unimportant) - especially if you can just replace the whole microservice in an update. So no reason why you can't use sprocs within those confines. Also, DDD does not forbid sprocs, or paradigm mixing; some aspects of some problems aren't best suited for OO.
– Filip Milovanović
Sep 17 at 8:48
3
How monolithic your database is has to do with your data design and implementation; it has nothing to do with using or not using stored procedures.
– RBarryYoung
Sep 17 at 12:23
5
"Stored procedures typically work on a monolith database." You should strongly consider discarding any information or advice you get from whatever source that shared that "fact" with you.
– StingyJack
Sep 18 at 4:28
3
@RandomUs1r Umm no, the only thing you really lose is that you can't use foreign key constraints on reference keys - which is rather the point of microservices. For one, the idea that NoSql databases are somehow magically faster has been disproven repeatedly, but even if they were faster (they're not), you also get all the existing infrastructure, knowledge and existing code for free - which is huge. CERN and many others manage terabytes of data using relational databases just fine. NoSql databases have their use, but those are independent of whether you use microservices or not.
– Voo
Sep 18 at 15:53
5 Answers
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolith database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures that are still in use come from an older codebase that's been kept around. Back then, monolithic databases were also much more prevalent than they are now that microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monolithic database. You don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology.
On the other hand, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
4
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
11
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
2
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
4
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
3
Modern is not even synonymous with "adequate".
– Laiv
Sep 17 at 16:13
To write software requires that you tightly couple to a technology.
At the very least, you are coupled to the runtime environment provided by the programming language you are developing in.
More generally, though, you will find that your micro-service is tightly coupled to several technologies:
- Network Service Framework to provide high-level HTTP/SSL/SOAP protocol implementations.
- Repository/ORM/DAO Framework to provide persistence.
- Data Manipulation Frameworks to provide tools for working with data.
- Process/Threading/OS Framework to provide access to OS resources such as multi-tasking, the file system, memory, GPU compute, expansion cards, etc...
And that is to make a bare-bones micro-service.
Stored procedures
A stored procedure is simply another technology that you could choose to use or not use. It does not magically make your code monolithic, or micro.
What it is though is:
- Another technology. Each technology present in the application decreases the likelihood that any given programmer can read, understand, and make wise design choices for that technology mix.
- A language using a different programming paradigm. It is far too easy for non-experts to try and force their own imperative, functional, OO, etc... perspective onto it, which often leads to less than stellar results.
- An API. Which must be maintained like any other class in the code base. It also means that the database is providing a non-generic interface. This makes it harder to both replace the database engine itself, and to transparently apply generic behaviour such as in memory caching.
- An artefact. Which must be versioned, tested, and deployed. This can be done, but databases are living artefacts requiring a different approach. You cannot usually just delete the original and replace it. Often a careful orchestration of changes over time is needed to migrate the system to the desired state. (A minimal sketch of what this looks like follows this list.)
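To make the API/artefact point concrete, here is a minimal sketch of a stored procedure shipped as a versioned migration script. It assumes PostgreSQL-style PL/pgSQL and hypothetical table, procedure, and file names; the point is only that the procedure's name and parameters form an interface callers depend on, and that the script itself must be versioned and deployed like any other artefact.

    -- Hypothetical migration artefact: V042__add_reserve_stock_proc.sql
    -- The procedure's name and parameter list are an API surface; changing them
    -- is a breaking change that has to be versioned and rolled out deliberately.
    CREATE OR REPLACE PROCEDURE reserve_stock(p_order_id BIGINT, p_sku TEXT, p_qty INT)
    LANGUAGE plpgsql
    AS $$
    BEGIN
        UPDATE inventory
           SET reserved = reserved + p_qty
         WHERE sku = p_sku
           AND on_hand - reserved >= p_qty;

        IF NOT FOUND THEN
            RAISE EXCEPTION 'insufficient stock for %', p_sku;
        END IF;

        INSERT INTO reservations (order_id, sku, qty)
        VALUES (p_order_id, p_sku, p_qty);
    END;
    $$;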
Each of these is a real cost. In some cases the cost is justifiable, in others it is not.
You would be paying almost the same set of costs by hosting a scripting engine. The sole reduction is that you could choose the same programming paradigm as the host language.
Business Logic
Moving business rules into the database is bad practice. Just not because of stored procedures.
It's a bad practice, because the database and business logic operate on different shearing levels.
A database in a mature application can be in use for decades. Generally these systems have their engine periodically updated, but the database itself is migrated; it is not killed and rebuilt from scratch. There is no reason a micro-service cannot be equally long-lived.
Contrast decades against how quickly business rules change. In my experience an old business rule is perhaps a few years old; most, however, change quickly, and you can never tell which one will change next. A new requirement from a regulator, an old product being decommissioned, changes to the letterhead, changes to how many employees report to a boss, etc., etc., etc.
If the business logic is distributed across the shearing layers, particularly into a slower and longer lived layer, it will generate resistance to change. This is not necessarily a bad thing. After all, the only database that has zero business logic in it is a triple store.
The mere act of specifying a table schema is moving business logic into the database.
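As a small illustration of that last point, even a bare table definition already encodes business rules. The sketch below uses hypothetical names and PostgreSQL-style syntax; every constraint in it is a statement about how the business works.

    -- A plain schema definition is already business logic (hypothetical names).
    CREATE TABLE employee (
        id         BIGSERIAL PRIMARY KEY,
        email      TEXT NOT NULL UNIQUE,            -- every employee has exactly one work email
        manager_id BIGINT REFERENCES employee (id), -- an employee reports to at most one manager
        salary     NUMERIC(12, 2) NOT NULL
                   CHECK (salary > 0),              -- salaries are strictly positive
        hired_on   DATE NOT NULL
    );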
Architecture
You are trying to use the appropriate tool for the appropriate problem, without needing too many tools and without making the solution too hard to build and maintain.
This isn't easy.
But let's think the unthinkable: how would you maintain business logic distributed across several languages?
- A catalogue... so that each business rule implementation can be tracked and maintained.
- Tests... that could be used against each business rule regardless of where and how it was implemented.
- A reference implementation... so that when discrepancies are found, a source of truth exists (or at least a source of debate).
But this has a cost too.
- Is it better to allow the business rules to have many implementations, each taking advantage of team skills and framework provisions, but needing tight quality controls to ward off many small vagaries?
- Or is it better to have a single source of truth, written in a single language? Arguably cheaper to implement, yet also a single source of failure, itself a monolithic component that resists change in the face of different platforms, frameworks, or as-yet-to-be-invented tools?
I'll preface my answer by saying that I actually maintain a couple microservices that use stored procedures. Also I've written a lot of stored procedures at various points in my career, and I definitely agree that things can go very, very wrong if they are used incorrectly.
So the short answer is, no, stored procedures aren't inherently bad in a microservice architecture. But you do need to understand:
- You're adding obstacles to the substitution of storage engines. If some operational or performance characteristics or feature limitations require you to switch storage engines, the cost will be greater because you'll be writing and testing a lot of new code. Running more than one storage engine (either during a migration phase or to isolate activities based on performance needs) can introduce consistency problems unless you use two-phase commit (2PC), which has performance issues itself.
- You've got another API to maintain, which means your dependencies can break. Adding, removing, or changing the types of parameters on procedures can break existing code. The same thing happens with tables and queries, but your tools might be less helpful with tracking down where things might go wrong. Problems with stored procedures are typically found at runtime, very late in the develop/deploy process.
- Your database permissions just got more complicated. Does a procedure run as the logged-in user or as some other role? You need to think about this and manage it (hopefully in an automated fashion); see the sketch after this list.
- You need to be able to migrate to new versions safely. Oftentimes a procedure must be dropped and re-created. Once again, permissions might cause some problems for you.
- Rollback of a failed migration can mean extra effort. When the production environment is separated from developers, things get even more challenging.
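As a sketch of the permissions point above: whether a routine runs with the caller's rights or the owner's is an explicit choice that has to be managed alongside the rest of your grants. This assumes PostgreSQL and hypothetical object names; other engines have equivalent but differently spelled mechanisms.

    -- Run with the owner's rights rather than the caller's (hypothetical names).
    CREATE FUNCTION close_account(p_account_id BIGINT) RETURNS void
    LANGUAGE sql
    SECURITY DEFINER            -- owner's permissions, not the logged-in user's
    SET search_path = public    -- pin the search path for definer routines
    AS $$
        UPDATE accounts SET status = 'closed' WHERE id = p_account_id;
    $$;

    -- The application role may call the routine but not touch the table directly.
    REVOKE ALL ON accounts FROM app_role;
    GRANT EXECUTE ON FUNCTION close_account(BIGINT) TO app_role;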
These are some uses of stored procedures that I think are often worthwhile:
- Enforcement of edit history (audit logs); see the sketch after this list. Triggers are commonly used for this purpose, and triggers are stored procedures. It's also possible in some databases to disallow inserts and updates entirely for the application role: clients execute procedures which are run as a different role with appropriate permissions and which enforce all of the necessary behavior.
- Extension of check constraints. This might get you into business logic territory, but there are cases where a database's built-in constraint tools might not be sufficient for what you need. Oftentimes the best way to express checks is with imperative code, and you risk letting bad data in if you depend on your application to do it for you.
- Encapsulation of complex queries when a view is inappropriate or too complicated. I've seen a few cases where a correct view requires some extremely complex SQL that can be expressed much more understandably in a stored procedure. This is probably rare, but it does occur.
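Here is a minimal sketch of the audit-log pattern mentioned in the first bullet, again assuming PostgreSQL (11+ for EXECUTE FUNCTION) and hypothetical table names. Because the trigger fires inside the database, the application cannot forget or bypass it.

    -- Hypothetical audit table: one row per change, stamped with who and when.
    CREATE TABLE customer_audit (
        customer_id BIGINT      NOT NULL,
        changed_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
        changed_by  TEXT        NOT NULL DEFAULT current_user,
        old_row     JSONB       NOT NULL
    );

    -- The trigger function records the pre-update state of the row.
    CREATE FUNCTION customer_audit_trg() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
    BEGIN
        INSERT INTO customer_audit (customer_id, old_row)
        VALUES (OLD.id, to_jsonb(OLD));
        RETURN NEW;
    END;
    $$;

    CREATE TRIGGER customer_audit_before_update
    BEFORE UPDATE ON customer
    FOR EACH ROW EXECUTE FUNCTION customer_audit_trg();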
In general, I suggest that you try out views first, and resort to procedures only when necessary. Well-designed views can actually function as an API, abstracting out the details of how underlying tables are queried. Augmenting your API (views) with stored procedures makes sense in some circumstances. It's even possible to emit JSON directly from a SQL query, bypassing the whole mess of mapping data from query results to your application's data model. Whether that's a good idea is something for you to determine based on your own needs.
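For the "views as an API" and "JSON straight from SQL" points, a small sketch (PostgreSQL-style, hypothetical names): callers query the view rather than the underlying join, and the service can hand the resulting JSON document to its HTTP layer more or less untouched.

    -- The view is the API; the join behind it can change without breaking callers.
    CREATE VIEW order_summary AS
    SELECT o.id,
           o.placed_at,
           c.name AS customer_name,
           SUM(l.qty * l.unit_price) AS total
    FROM   orders o
    JOIN   customers   c ON c.id = o.customer_id
    JOIN   order_lines l ON l.order_id = o.id
    GROUP  BY o.id, o.placed_at, c.name;

    -- One JSON document per order, ready to return from the service.
    SELECT to_jsonb(s) AS order_json
    FROM   order_summary s
    WHERE  s.id = 42;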
Since you should already be managing your database resources (schema, permissions, etc.) through some automated tool, no, stored procedures are not inherently bad for microservices.
I think all of your first bullet points also apply if you write the business logic in, e.g., a Java framework. Switching the DB engine will change performance characteristics and require retesting and maybe rewriting statements. If you write the SQL statements, e.g., as strings in your application, you have the same problem with changing variables breaking stuff. You need to decide if your app uses a technical user or individual users to connect to the DB, and so on...
– Falco
Sep 17 at 8:59
@Falco I think if you're using JPA exclusively it shouldn't be too difficult to change databases. Performance can definitely vary substantially and always needs to be tested. A couple of services I maintain aren't "micro" in the sense that they can scan or aggregate over millions or billions of data points and return arbitrarily large (often paginated) data sets. I can't imagine using JPA for them, but I can imagine changing the underlying database engines (and rewriting the SQL) while maintaining the same API.
– ngreen
Sep 20 at 15:33
Stored procedures are implementation details. Database functions, lambdas, or a shell script stored somewhere in the file system are all implementation details and irrelevant for the architecture.
most books on microservices recommend one database per microservice.
Ok, so we can code the stored procedures in these databases.
again most microservice architecture books state that they should be
autonomous and loosely coupled
Between business capabilities, development's life cycles, management,
deployments, team's locations, etc. Nothing to do with the implementation details. Microservices don't solve a technical problem (just the opposite). They come to solve problems with the management and the time-to-market. It's a strategy, not a tactic. A way to fail-fast with the least costs possible. If a certain business capability is proven to be worthless, we drop it without messing up other capabilities, deployments, projects' management, releases...
Note that the "split" already acts like a decoupling agent. Say we have two services, A is backed by Oracle and B by MongoDB. If we don’t break the golden rule of decoupling, it should be possible to drop A + Oracle with negligible side effects on B.
Using stored procedures written, say, specifically in Oracle tightly couples the microservice to that technology
It might cause vendor lock-in. Many times the vendor is imposed by the business due to historical or contractual reasons. It is important to know how not to lock our code to the vendor. For example, some ORMs and frameworks implement a new query language that hides the database's built-in functions and features.
That said, if our services are micro enough, vendor lock-in is no longer a problem, since it impacts only a small part of the whole. A small part that should be possible to replace (or isolate) quickly.
most MSA books (that I have read) recommend that microservices should be business oriented (designed using DDD)
They should be business-driven, and here's the thing: not all businesses take advantage of DDD. DDD and microservices overlap in many points, but they are not cause and effect. We could end up with a microservices ecosystem composed of anaemic services, or with a mix of both: services implementing a complex domain and dumb anaemic services providing POJOs directly from the DB. There's nothing wrong with that.
Regarding books, they focus only on the execution: the tactics, how to take advantage of distributed computing and make it work. They are (usually) agnostic to the strategy. Strategies vary from company to company and rarely depend on developers. So we still have to extrapolate and adapt what the books say to our specific needs, requirements and constraints. The goal is to make the business strategy profitable and sustainable.
Always bear in mind that any architecture is a means to an end. The business rules. We don't build microservices ecosystems for fashion or for love of the art.
It doesn't really have anything to do with microservices.
Stored procedures can make sense if your service has an 'old-style' layered architecture in which the DB is the foundation of the service, with data access and business logic layers on top. The interface between the service and the database in such an architecture is very specific to the innermost details of the service. Typically there will be service-specific adapters for each kind of supported database, and the specificity of the API exposed by the adapter makes it possible to use stored procedures in the underlying layers.
There are lots of problems with architectures like that. Most importantly, it makes most of the logic very difficult to unit test. These architectures are no longer in favour.
If you're using a newer-style "clean architecture", "onion architecture", or similar, then the database will be an injected dependency, specified at the outer layers. Since it's defined in the outer layers, the interface provided for the database must be generic. It cannot reflect the innermost details of the service, because those details must be hidden from the outermost layers of the architecture. Defining a generic stored procedure interface that can work with any database or unit test harness is incredibly difficult, and not really necessary, so stored procedures are not often practical in these kinds of architectures.
The relationship with microservices is just that microservices are new and ascendant -- we don't do monoliths anymore -- and that these new architectural styles are also ascendant -- we don't do flat layers anymore.
add a comment
|
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "131"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f398436%2fmicroservices-and-stored-procedures%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
5 Answers
5
active
oldest
votes
5 Answers
5
active
oldest
votes
active
oldest
votes
active
oldest
votes
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolith database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures that are still in use are from an older codebase that's been kept around. Back then, monolithic databases were also much more prevalent compared to when microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monololithic database. You don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology.
On the other side, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
4
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
11
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
2
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
4
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
3
Modern is not even synonynous of "adequate".
– Laiv
Sep 17 at 16:13
|
show 5 more comments
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolith database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures that are still in use are from an older codebase that's been kept around. Back then, monolithic databases were also much more prevalent compared to when microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monololithic database. You don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology.
On the other side, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
4
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
11
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
2
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
4
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
3
Modern is not even synonynous of "adequate".
– Laiv
Sep 17 at 16:13
|
show 5 more comments
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolith database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures that are still in use are from an older codebase that's been kept around. Back then, monolithic databases were also much more prevalent compared to when microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monololithic database. You don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology.
On the other side, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
There is nothing that explicitly forbids or argues against using stored procedures with microservices.
Disclaimer: I don't like stored procedures from a developer's POV, but that is not related to microservices in any way.
Stored procedures typically work on a monolith database.
I think you're succumbing to a logical fallacy.
Stored procedures are on the decline nowadays. Most stored procedures that are still in use are from an older codebase that's been kept around. Back then, monolithic databases were also much more prevalent compared to when microservices have become popular.
Stored procs and monolithic databases both occur in old codebases, which is why you see them together more often. But that's not a causal link. You don't use stored procs because you have a monololithic database. You don't have a monolithic database because you use stored procs.
most books on microservices recommend one database per microservice.
There is no technical reason why these smaller databases cannot have stored procedures.
As I mentioned, I don't like stored procs. I find them cumbersome and resistant to future maintenance. I do think that spreading sprocs over many small databases further exacerbates the issues that I already don't like. But that doesn't mean it can't be done.
again most microservice architecture books state that they should be autonomous and loosely coupled. Using stored procedures written say specifically in Oracle, tightly couples the microservice to that technology.
On the other side, the same argument can be made for whatever ORM your microservice uses. Not every ORM will support every database either. Coupling (specifically its tightness) is a relative concept. It's a matter of being as loose as you can reasonably be.
Sprocs do suffer from tight coupling in general regardless of microservices. I would advise against sprocs in general, but not particularly because you're using microservices. It's the same argument as before: I don't think sprocs are the way to go (in general), but that might just be my bias, and it's not related to microservices.
most msa books (that I have read) recommend that microservices should be business oriented (designed using ddd). By moving business logic into stored procedures in the database this is no longer the case.
This has always been my main gripe about sprocs: business logic in the database. Even when not the intention, it tends to somehow always end up that way.
But again, that gripe exists regardless of whether you use microservices or not. The only reason it looks like a bigger issue is because microservices push you to modernize your entire architecture, and sprocs are not that favored anymore in modern architectures.
edited Sep 16 at 8:21
answered Sep 16 at 8:12
FlaterFlater
11.9k3 gold badges23 silver badges34 bronze badges
11.9k3 gold badges23 silver badges34 bronze badges
4
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
11
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
2
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
4
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
3
Modern is not even synonynous of "adequate".
– Laiv
Sep 17 at 16:13
|
show 5 more comments
4
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
11
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
2
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
4
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
3
Modern is not even synonynous of "adequate".
– Laiv
Sep 17 at 16:13
4
4
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
I'm not sure if it is correct to say that microservices push you to modernize your entire architecture. More often than not, they end up being a thin layer over a behemoth of a mess of poorly planned code. They can be pretty good when well done, but they don't really push you in any way towards better coding than any other architecture. Still, good answer. You got a +1 from me.
– T. Sar - Reinstate Monica
Sep 16 at 11:56
11
11
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
@T.Sar modern is not the same as better. Refactoring (to microservices or whatever) means change. Change forces you to use your current ideas. We hope they are better ideas.
– candied_orange
Sep 16 at 12:19
2
2
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
@T.Sar: Hacks are timeless, and you can usually abuse any system (modern or not) to do something it can technically handle but was never intended for. Microservices urge you to do it differently (and thus reevaluate some old approaches) but they can't universally enforce it. With universal enforcement you usually suffer in the compatibility/valid fringe case department.
– Flater
Sep 16 at 12:24
4
4
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
@candied_orange "modern is not the same as better" - I think I wholeheartedly agree to that. Very good point.
– T. Sar - Reinstate Monica
Sep 16 at 13:47
3
3
Modern is not even synonynous of "adequate".
– Laiv
Sep 17 at 16:13
Modern is not even synonynous of "adequate".
– Laiv
Sep 17 at 16:13
|
show 5 more comments
To write software requires that you tightly couple to a technology.
At the very least, you are coupled to the runtime environment provided by the programming language you are developing in.
More generally, though, you will find that your microservice is tightly coupled to several technologies:
- Network Service Framework to provide high level HTTP/SSL/SOAP protocol implementations
- Repository/ORM/DAO Framework to provide persistence.
- Data Manipulation Frameworks to provide tools for working with data.
- Process/Threading/OS Framework to provide access to OS resources such as multi-tasking, the file system, memory, GPU compute, expansion cards, etc...
And that is just to make a bare-bones microservice.
Stored procedures
A stored procedure is simply another technology that you could choose to use or not use. It does not magically make your code monolithic, or micro.
What it is though is:
- Another technology. Each technology present in the application decreases the likelihood that any given programmer can read, understand, and make wise design choices for that technology mix.
- A language using a different programming paradigm. It is far too easy for non-experts to try and force their own imperative, functional, OO, etc... perspective onto it, which often leads to less than stellar results.
- An API. Which must be maintained like any other class in the code base. It also means that the database is providing a non-generic interface. This makes it harder both to replace the database engine itself and to transparently apply generic behaviour such as in-memory caching.
- An artefact. Which must be versioned, tested, and deployed. This can be done, but databases are living artefacts requiring a different approach. You cannot usually just delete the original and replace it. Often a careful orchestration of changes over time is needed to migrate the system to the desired state (a sketch of such a versioned change follows below).
Each of these is a real cost. In some cases the cost is justifiable, in others it is not.
You would be paying almost the same set of costs by hosting a scripting engine. The sole reduction is that you could choose the same programming paradigm as the host language.
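To make the "artefact" point above concrete, here is a minimal sketch of what a versioned stored-procedure change tends to look like, assuming PostgreSQL and a migration-script convention; every name in it is invented for illustration:

-- V7__replace_order_total.sql  (hypothetical migration script, applied in order
-- by whatever migration tool the service already uses)

-- Schema the procedure depends on; normally created by an earlier migration.
CREATE TABLE IF NOT EXISTS order_lines (
    order_id   bigint  NOT NULL,
    quantity   integer NOT NULL,
    unit_price numeric NOT NULL
);

-- The procedure is replaced in place. The data it operates on is not rebuilt;
-- it lives on across deployments, which is what forces the careful,
-- orchestrated change described above.
CREATE OR REPLACE FUNCTION order_total(p_order_id bigint)
RETURNS numeric
LANGUAGE sql
STABLE
AS $$
    SELECT COALESCE(SUM(quantity * unit_price), 0)
    FROM order_lines
    WHERE order_id = p_order_id;
$$;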
Business Logic
Moving business rules into the database is bad practice. Just not because of stored procedures.
It's a bad practice, because the database and business logic operate on different shearing levels.
A database in a mature application can be in use for decades. Generally these systems will have the engine periodically updated, but the database itself is migrated; it is not killed and rebuilt from scratch. There is no reason a microservice cannot be equally long lived.
Contrast decades against how quickly business rules change. In my experience an old business rule is perhaps a few years old; most, however, change quickly, and you can never tell which one will change next. A new requirement from a regulator, an old product being decommissioned, changes to the letterhead, changes to how many employees report to a boss, etc.
If the business logic is distributed across the shearing layers, particularly into a slower, longer-lived layer, it will generate resistance to change. This is not necessarily a bad thing. After all, the only database that has zero business logic in it is a triple store.
The mere act of specifying a table schema is moving business logic into the database.
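As a small illustration of that last sentence, even a plain table definition already encodes business rules; a minimal sketch in standard SQL, with invented names:

-- Every constraint here is a business rule living in the database:
CREATE TABLE invoices (
    invoice_id  bigint  PRIMARY KEY,
    customer_id bigint  NOT NULL,                     -- an invoice must belong to a customer
    amount      numeric NOT NULL CHECK (amount >= 0), -- negative invoices are not allowed
    issued_on   date    NOT NULL,
    due_on      date    NOT NULL,
    CHECK (due_on >= issued_on)                       -- payment cannot be due before issue
);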
Architecture
You are balancing using the appropriate tool for each problem against needing too many tools, or making the solution too hard to build and maintain.
This isn't easy.
But let's think the unthinkable: how would you maintain business logic distributed across several languages?
- A catalogue... so that each business rule implementation can be tracked and maintained.
- Tests... that could be used against each business rule regardless of where and how it was implemented.
- A reference implementation... so that when discrepancies are found, a source of truth exists (or at least a source of debate).
But this has a cost too.
- Is it better to allow the business rules to have many implementations, each able to take advantage of team skills and framework provisions, but needing tight quality controls to ward off many small vagaries?
- Or is it better to have a single source of truth, written in a single language? Arguably cheaper to implement, yet also a single point of failure, itself a monolithic component that resists change in the face of different platforms, frameworks, or yet-to-be-invented tools?
edited Sep 18 at 23:43 by Peter Mortensen
answered Sep 16 at 10:07 by Kain0_0
I'll preface my answer by saying that I actually maintain a couple of microservices that use stored procedures. Also, I've written a lot of stored procedures at various points in my career, and I definitely agree that things can go very, very wrong if they are used incorrectly.
So the short answer is, no, stored procedures aren't inherently bad in a microservice architecture. But you do need to understand:
- You're adding obstacles to the substitution of storage engines. If some operational or performance characteristics or feature limitations require you to switch storage engines, the cost will be greater because you'll be writing and testing a lot of new code. Running more than one storage engine (either during a migration phase or to isolate activities based on performance needs) can introduce consistency problems unless you use two-phase commit (2PC), which has performance issues itself.
- You've got another API to maintain, which means your dependencies can break. Adding, removing, or changing the types of parameters on procedures can break existing code. The same thing happens with tables and queries, but your tools might be less helpful with tracking down where things might go wrong. Problems with stored procedures are typically found at runtime, very late in the develop/deploy process.
- Your database permissions just got more complicated. Does a procedure run as the logged-in user or as some other role? You need to think about this, and manage it (hopefully in an automated fashion).
- You need to be able to migrate to new versions safely. Oftentimes a procedure must be dropped and re-created. Once again, permissions might cause some problems for you (see the sketch after this list).
- Rollback of a failed migration can mean extra effort. When the production environment is separated from developers, things get even more challenging.
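To illustrate the permissions and migration points above, here is a rough sketch of the "which role does the procedure run as" question, assuming PostgreSQL; the role, table, and procedure names are invented:

-- Assumed to exist already: a low-privilege role the service connects as.
-- CREATE ROLE app_role LOGIN PASSWORD '...';

CREATE TABLE IF NOT EXISTS accounts (
    account_id bigint  PRIMARY KEY,
    balance    numeric NOT NULL
);

-- The application role cannot touch the table directly...
REVOKE ALL ON accounts FROM app_role;

-- ...but it can call a procedure that runs with the owner's rights.
CREATE OR REPLACE FUNCTION debit_account(p_account_id bigint, p_amount numeric)
RETURNS void
LANGUAGE sql
SECURITY DEFINER            -- runs as the function owner, not the caller
AS $$
    UPDATE accounts
    SET balance = balance - p_amount
    WHERE account_id = p_account_id;
$$;

-- If a migration drops and re-creates the function, this grant has to be
-- re-applied too, which is one of the ways rollouts quietly break.
GRANT EXECUTE ON FUNCTION debit_account(bigint, numeric) TO app_role;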
These are some uses of stored procedures that I think are often worthwhile:
- Enforcement of edit history (audit logs). Triggers are commonly used for this purpose, and triggers are stored procedures (a minimal sketch follows after this list). It's also possible in some databases to disallow inserts and updates entirely for the application role: clients execute procedures which are run as a different role with appropriate permissions and which enforce all of the necessary behavior.
- Extension of check constraints. This might get you into business logic territory, but there are cases where a database's built-in constraint tools might not be sufficient for what you need. Oftentimes the best way to express checks is with imperative code, and you risk letting bad data in if you depend on your application to do it for you.
- Encapsulation of complex queries when a view is inappropriate or too complicated. I've seen a few cases where a correct view requires some extremely complex SQL that can be expressed much more understandably in a stored procedure. This is probably rare, but it does occur.
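A minimal sketch of the audit-log case, assuming PostgreSQL 11+ (where a trigger executes a stored function); all table, function, and trigger names are made up:

CREATE TABLE IF NOT EXISTS customers (
    customer_id bigint PRIMARY KEY,
    email       text   NOT NULL
);

CREATE TABLE IF NOT EXISTS customers_audit (
    changed_at timestamptz NOT NULL DEFAULT now(),
    changed_by text        NOT NULL DEFAULT current_user,
    operation  text        NOT NULL,
    old_row    jsonb,
    new_row    jsonb
);

CREATE OR REPLACE FUNCTION customers_audit_fn()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    -- Record every change, whoever made it and however it was made.
    IF TG_OP = 'INSERT' THEN
        INSERT INTO customers_audit (operation, new_row)
        VALUES (TG_OP, to_jsonb(NEW));
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO customers_audit (operation, old_row, new_row)
        VALUES (TG_OP, to_jsonb(OLD), to_jsonb(NEW));
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO customers_audit (operation, old_row)
        VALUES (TG_OP, to_jsonb(OLD));
        RETURN OLD;
    END IF;
END;
$$;

DROP TRIGGER IF EXISTS customers_audit_trg ON customers;
CREATE TRIGGER customers_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON customers
    FOR EACH ROW EXECUTE FUNCTION customers_audit_fn();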
In general, I suggest that you try out views first, and resort to procedures only when necessary. Well-designed views can actually function as an API, abstracting out the details of how underlying tables are queried. Augmenting your API (views) with stored procedures makes sense in some circumstances. It's even possible to emit JSON directly from a SQL query, bypassing the whole mess of mapping data from query results to your application's data model (a rough sketch follows below). Whether that's a good idea is something for you to determine based on your own needs.
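A rough sketch of both ideas (a view acting as the API, and JSON emitted straight from SQL), again assuming PostgreSQL, with invented table names:

-- Hypothetical underlying tables.
CREATE TABLE IF NOT EXISTS orders (
    order_id    bigint PRIMARY KEY,
    customer_id bigint NOT NULL,
    status      text   NOT NULL
);

CREATE TABLE IF NOT EXISTS order_items (
    order_id   bigint  NOT NULL REFERENCES orders (order_id),
    line_no    integer NOT NULL,
    quantity   integer NOT NULL,
    unit_price numeric NOT NULL,
    PRIMARY KEY (order_id, line_no)
);

-- The view is the "API": callers never see how the underlying tables are joined.
CREATE OR REPLACE VIEW order_summaries AS
SELECT o.order_id,
       o.customer_id,
       o.status,
       SUM(i.quantity * i.unit_price) AS total
FROM orders o
JOIN order_items i USING (order_id)
GROUP BY o.order_id, o.customer_id, o.status;

-- JSON can come straight out of the query, skipping the mapping layer entirely.
SELECT json_agg(s)
FROM order_summaries AS s
WHERE s.status = 'OPEN';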
Since you should already be managing your database resources (schema, permissions, etc.) through some automated tool, no, stored procedures are not inherently bad for microservices.
answered Sep 16 at 18:09 by ngreen
I think all of your first bullet points also apply if you write the business logic in e.g. a Java framework. Switching the DB engine will change performance characteristics and require retesting and maybe rewriting statements. If you write the SQL statements e.g. as strings in your application, you have the same problem with changing variables breaking stuff. You need to decide if your app uses a technical user or individual users to connect to the DB, and so on...
– Falco
Sep 17 at 8:59
@Falco I think if you're using JPA exclusively it shouldn't be too difficult to change databases. Performance can definitely vary substantially and always needs to be tested. A couple of services I maintain aren't "micro" in the sense that they can scan or aggregate over millions or billions of data points and return arbitrarily large (often paginated) data sets. I can't imagine using JPA for them, but I can imagine changing the underlying database engines (and rewriting the SQL) while maintaining the same API.
– ngreen
Sep 20 at 15:33
Stored procedures are implementation details. Database functions, lambdas, or a shell script stored somewhere in the file system are all implementation details and irrelevant for the architecture.
most books on microservices recommend one database per microservice.
Ok, so we can code the stored procedures in these databases.
again most microservice architecture books state that they should be
autonomous and loosely coupled
Between business capabilities, development life cycles, management, deployments, team locations, etc. Nothing to do with the implementation details. Microservices don't solve a technical problem (quite the opposite). They exist to solve problems with management and time-to-market. It's a strategy, not a tactic. A way to fail fast at the lowest possible cost. If a certain business capability proves to be worthless, we drop it without messing up other capabilities, deployments, project management, releases...
Note that the "split" already acts like a decoupling agent. Say we have two services, A is backed by Oracle and B by MongoDB. If we don’t break the golden rule of decoupling, it should be possible to drop A + Oracle with negligible side effects on B.
Using stored procedures written say specifically in Oracle, tightly
couples the microservice to that technology.
It might cause vendor lock-in. Many times the vendor is imposed by the business for historical or contractual reasons. It is important to know how not to lock our code to the vendor. For example, some ORMs and frameworks implement their own query language that hides the database's built-in functions and features.
That said, if our services are micro enough, vendor lock-in is no longer a big problem, since it impacts only a small part of the whole; a small part that should be possible to replace (or isolate) quickly.
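For instance, a hypothetical fragment contrasting vendor-specific and portable spellings of roughly the same query (an employees table with a middle_name column is assumed to exist):

-- Oracle-specific spelling: NVL, SYSDATE and ROWNUM tie the code to Oracle.
SELECT NVL(middle_name, '-') AS middle_name,
       SYSDATE               AS queried_at
FROM   employees
WHERE  ROWNUM <= 10;

-- Roughly the same intent using SQL:2008 constructs, portable across most engines.
SELECT COALESCE(middle_name, '-') AS middle_name,
       CURRENT_TIMESTAMP          AS queried_at
FROM   employees
FETCH FIRST 10 ROWS ONLY;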
most MSA books (that I have read) recommend that microservices should
be business oriented (designed using DDD).
It should be business-driven, and here's the thing: not all businesses take advantage of DDD. DDD and microservices overlap in many points, but they are not cause and effect. We could end up with a microservices ecosystem composed of anaemic services, or composed of a mix of both: services implementing a complex domain and dumb anaemic services providing POJOs directly from the DB. There's nothing wrong with that.
Regarding books, they focus only on the execution of the strategy: the tactics, how to take advantage of distributed computing and how to make it work, but they are (usually) agnostic about the strategy. Strategies vary from company to company and rarely depend on developers. So we still have to extrapolate and adapt what books say to our specific needs, requirements and constraints. The goal is to make the business strategy profitable and sustainable.
Always bear in mind that any architecture is a means to an end. The business rules. We don't build microservices ecosystems for fashion or for the love of the art.
edited Sep 19 at 1:19 by Peter Mortensen
answered Sep 16 at 12:51 by Laiv
It doesn't really have anything to do with microservices.
Stored procedures can make sense if your service has an 'old-style' layered architecture in which the DB is the foundation of the service, with data access and business logic layers on top. The interface between the service and the database in such an architecture is very specific to the innermost details of the service. Typically there will be service-specific adapters for each kind of supported database, and the specificity of the API exposed by the adapter makes it possible to use stored procedures in the underlying layers.
There are lots of problems with architectures like that. Most importantly, it makes most of the logic very difficult to unit test. These architectures are no longer in favour.
If you're using a newer-style "clean architecture", "onion architecture", or similar, then the database will be an injected dependency, specified at the outer layers. Since it's defined in the outer layers, the interface provided for the database must be generic. It cannot reflect the innermost details of the service, because those details must be hidden from the outermost layers of the architecture. Defining a generic stored procedure interface that can work with any database or unit test harness is incredibly difficult, and not really necessary, so stored procedures are not often practical in these kinds of architectures.
The relationship with microservices is just that microservices are new and ascendant -- we don't do monoliths anymore -- and that these new architectural styles are also ascendant -- we don't do flat layers anymore.
answered Sep 17 at 3:33 by Matt Timmermans
10
@RandomUs1r sorry, this doesn't make sense to me. Why does the DB structure have to be non-relational? Sure, it may have external references, but its internal structure may well be 100% relational.
– IMil
Sep 17 at 0:09
12
The problem with your points is that all of your premises are wrong. The statement that microservices should be autonomous and loosely coupled means, first and foremost, that they should be loosely coupled to each other; how you manage coupling of internal components is a different matter - and of secondary importance (but not unimportant) - especially if you can just replace the whole microservice in an update. So there's no reason why you can't use sprocs within those confines. Also, DDD does not forbid sprocs, or paradigm mixing; some aspects of some problems aren't best suited for OO.
– Filip Milovanović
Sep 17 at 8:48
3
How monolithic your database is has to do with your data design and implementation; it has nothing to do with using or not using stored procedures.
– RBarryYoung
Sep 17 at 12:23
5
"Stored procedures typically work on a monolith database." You should strongly consider discarding any information or advice you get from whatever source that shared that "fact" with you.
– StingyJack
Sep 18 at 4:28
3
@RandomUs1r Umm no, the only thing you really lose is that you can't use foreign key constraints on reference keys - which is rather the point of microservices. For one, the idea that NoSql databases are somehow magically faster has been disproven repeatedly, but even if they were faster (they're not), you also get all the existing infrastructure, knowledge and existing code for free - which is huge. CERN and many others manage terabytes of data using relational databases just fine. NoSql databases have their use, but those are independent of whether you use microservices or not.
– Voo
Sep 18 at 15:53