Software Engineering Asked on December 29, 2021
I have a situation where my business logic depends on data coming from an external service. I initially thought this would be a prime candidate for a domain service, but now I'm starting to wonder if that's even needed. With years of working mostly with relational databases for fetching data, I think my brain may be stuck in a pattern that's not really helpful. Theoretically, whether the data comes from a database or some other SOAP/external service doesn't matter, right? What matters is whether this is a key part of the aggregate, which I believe it is. There is also database data that's relevant, so it's a mix.
My guess is that while the domain service could technically be OK, I could also create a repository that fetches the data from both the database and the external service, building my aggregate (probably using a factory to be safe). I can then encapsulate all the business logic within the aggregate itself instead of using a domain service.
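Roughly, what I have in mind is something like this (all names below are made up, just to show the shape of the idea):

```java
// Hypothetical sketch: the repository pulls from both the database and the
// external service, and assembles the aggregate from the combined data.
record AccountRecord(String id, long balanceCents) {}  // row from the database
record CreditReport(String id, int score) {}           // response from the external service

interface AccountDao { AccountRecord load(String id); }
interface CreditCheckClient { CreditReport fetchReport(String id); }

// The aggregate holds the business rules and only ever sees plain data.
class CustomerAccount {
    private final AccountRecord record;
    private final CreditReport report;

    CustomerAccount(AccountRecord record, CreditReport report) {
        this.record = record;
        this.report = report;
    }

    // A rule that needs both the local and the external data.
    boolean mayOpenCreditLine(long requestedCents) {
        return report.score() >= 600 && record.balanceCents() >= requestedCents / 10;
    }
}

// The repository does the plumbing against both sources.
class CustomerAccountRepository {
    private final AccountDao dao;
    private final CreditCheckClient creditCheck;

    CustomerAccountRepository(AccountDao dao, CreditCheckClient creditCheck) {
        this.dao = dao;
        this.creditCheck = creditCheck;
    }

    CustomerAccount findById(String id) {
        // (a dedicated factory could sit here instead of calling the constructor directly)
        return new CustomerAccount(dao.load(id), creditCheck.fetchReport(id));
    }
}
```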
The disadvantage of this is that my business object is now somewhat coupled to an external service (I say somewhat because technically the object is a plain object, but there'd be no way of constructing one in the context of the app without using the external service). But the fact of the matter is that it needs to be coupled: there's no way for the behavior/methods I need on this object to work if the external service is not up. It's critical to enforcing business rules.
So is there anything wrong with this approach that I'm not thinking of? Any major drawback that would mean using a domain service is a better approach? My assumption would be that I'd need a domain service if whatever I need to do involves multiple aggregates, but if not, I can keep it within the aggregate itself.
So is there anything wrong with this approach that I'm not thinking of?
The usual design is to keep the domain model decoupled from the source of the information that it consumes.
In an idealized form, what this would normally look like is that your application code is coupled to "the plumbing" that knows how to obtain a copy of the data, the domain code accepts a copy of that data as an argument, and does its work.
In other words, you should be able to completely test how the domain model consumes information without needing to couple the test to any external information sources.
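In sketch form (the names here are invented purely for illustration):

```java
// Domain code: pure bookkeeping; the exchange rate arrives as plain data.
class PricingPolicy {
    long priceInCents(long baseCents, double exchangeRate) {
        return Math.round(baseCents * exchangeRate);
    }
}

// Plumbing: a thin wrapper around the external service.
interface ExchangeRateClient {
    double currentRate(String currency);
}

// Application code: fetches the data, then hands a copy of it to the domain.
class QuoteHandler {
    private final ExchangeRateClient rates;
    private final PricingPolicy policy = new PricingPolicy();

    QuoteHandler(ExchangeRateClient rates) {
        this.rates = rates;
    }

    long quote(long baseCents, String currency) {
        double rate = rates.currentRate(currency);   // talk to the outside world
        return policy.priceInCents(baseCents, rate); // do the domain work on the copy
    }
}
```

A test of PricingPolicy needs nothing but plain values; only the application code ever touches the service.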
(Incidentally, the same separation holds when we are sending information to some external system; the domain code is responsible for computing the information to send, and the application code figures out how to deliver that information.)
The way I usually describe it: the single responsibility of the domain model is "the bookkeeping", which is to say the manipulation of in-memory data structures. Data transfer concerns are "somewhere else".
That said, you will see a lot of designs where data transfer capabilities are passed in to the domain model (usually as abstractions that can easily be mocked/stubbed when you are running in a test environment).
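For example, something along these lines (names invented):

```java
// An abstraction the domain depends on; easy to stub in tests.
interface CreditRatingProvider {
    int ratingFor(String customerId);
}

class LoanApplication {
    private final String customerId;
    private final long amountCents;

    LoanApplication(String customerId, long amountCents) {
        this.customerId = customerId;
        this.amountCents = amountCents;
    }

    // The data-transfer capability is passed in rather than built in.
    boolean approve(CreditRatingProvider ratings) {
        return ratings.ratingFor(customerId) >= 700 || amountCents < 100_000;
    }
}
```

In a test, `approve(id -> 800)` is all it takes to supply the "external" data.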
Does it matter? Nobody is giving out prizes for "following the rules".
One opportunity that you get by separating the domain from the plumbing is that you release your temporal coupling constraints - the domain code does its thing when the information is available. In other words, you don't necessarily need to change that domain code when you move to a context where information delivery is asynchronous.
But it's a trade off - there are contexts in which sticking to a more procedural style makes sense.
Answered by VoiceOfUnreason on December 29, 2021
Theoretically, whether the data comes from a database or other SOAP/external service doesn't matter, right?
When you consider the data handling algorithm, the method of fetching the data doesn't matter. But that doesn't lead to the conclusion that domain services are part of the domain.
Quite the opposite, in fact. It suggests that the domain shouldn't change just because the source of the (same) data changes, which in turn suggests that the data fetching should be a domain service. Specifically so that the domain doesn't care where the data came from.
accessing data from external service (REST, SOAP, etc.) as part of domain
The fact that you describe it as an external service proves the point that this resource endpoint is not part of the current domain.
Just because your domain has a necessary dependency (i.e. data source) doesn't mean that it personally embodies that dependency. That would violate the basic principle of what a "domain" is.
What matters is if this is a key part of the aggregate, which I believe it is.
Your external resource can map to an aggregate in your domain, or to part of an aggregate. That's perfectly fine either way.
But your aggregate is not your dependent resource's aggregate (or vice versa), even if they happen to have the same properties. Different domain, different aggregates, regardless of whether your domain is an extension of the other one.
With years of working mostly with relational database for fetching data, I think my brain may be stuck in a pattern that's not really helpful.
My guess is that while the domain service could technically be OK, I could also create a repository that fetches the data from both the database and the external service, building my aggregate (probably using a factory to be safe).
What I think is going on here is that you consider your database repository not to be a domain service, which is why you're arguing that your external resource shouldn't be one either.
It's the opposite: database repositories are also external services as far as your domain is concerned. Your reasoning then supports itself: since database repositories are domain services, and it doesn't matter where you get the data from, then your other external resources should also be domain services.
My assumption would be I'd need a domain service if whatever I need to do has to involve multiple aggregates, but if not I can keep it within the aggregate itself.
Whether your external data gives you a partial aggregate, a single aggregate, or multiple aggregates is irrelevant. Taking a simple example:
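A sketch of what such a service could look like (the placeholder types and method names are invented; only the role of ImdbService matters here):

```java
import java.util.List;

// Placeholder types, just enough to make the shape clear.
record MovieId(String value) {}
record ActorId(String value) {}
record Rating(double stars) {}
record Movie(MovieId id, String title, Rating rating) {}

// Wraps the external IMDB endpoint; the implementation lives outside the domain.
interface ImdbService {
    Rating ratingFor(MovieId id);         // part of an aggregate
    Movie movie(MovieId id);              // a single aggregate
    List<Movie> filmography(ActorId id);  // multiple aggregates
}
```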
In all cases, my ImdbService is a domain service. Not because of how much data it supplies me with, but because it is external to my application.
but the fact of the matter is that it needs to be coupled
You're oversimplifying your assumption. Yes, it needs to be coupled today, but that doesn't mean the coupling has to be tight:
Over time, your external resource is liable to change. Whether it expands its API, refactors it, or simply goes down and migrates to a new platform, in all cases you're going to have to deal with the changes in your environment.
Even if you/your company manages the external resource, you cannot know what technological advancements we'll achieve in the near future, which could give a solid reason to update the platform to reap its benefits.
The tighter you couple your domain to your external resource, the more difficult and troublesome the implementation of the changes is going to be.
Your argument is definitely not the first time anyone has made this case. But it essentially boils down to cutting a corner today to save a bit of effort, while glossing over how big the consequences of that cut corner might turn out to be in the future.
Skipping good practice is like not getting insurance. Sure, if nothing bad ever happens then you've saved a bit of money/effort, but are you willing to take the full weight of the consequences when something bad does happen, even if outside of your control?
Answered by Flater on December 29, 2021
Your approach, fetching the data in the repository that also gets data from the database, will work, but it does indeed create some coupling. There's also no nice separation of concerns, and the repository will no longer have a single reason to change.
Another approach could be to create an interface in your domain layer and inject the implementation into the constructor of your aggregate.
One of the nice side effects will be that the decision of how to query the external service is made at runtime.
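For example (a rough sketch with invented names):

```java
// The interface lives in the domain layer...
interface StockLevelProvider {
    int availableUnits(String sku);
}

// ...and the aggregate receives whichever implementation (SOAP client,
// REST client, test stub) was chosen at runtime.
class Order {
    private final StockLevelProvider stockLevels;
    private boolean confirmed;

    Order(StockLevelProvider stockLevels) {
        this.stockLevels = stockLevels;
    }

    void confirm(String sku, int quantity) {
        if (stockLevels.availableUnits(sku) < quantity) {
            throw new IllegalStateException("insufficient stock");
        }
        confirmed = true;
    }

    boolean isConfirmed() {
        return confirmed;
    }
}
```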
Answered by Rik D on December 29, 2021
What kind of dependency on the external service is this?
If we are talking DDD, I'd imagine that it works on the write side, i.e. it handles changing the state of a system. Then, when something initiates a change or expresses an intent, the DDD part has to decide whether to apply the change or to reject it. It is OK, then, if the decision cannot be made completely autonomously and a number of external sources have to provide supporting information/details.
For example, when you apply for a foreign visa, a visa center needs to collect some additional data about you from the police, from your bank, from visa archives, and so on.
But as soon as a decision is made, all these external data sources no longer matter. You may archive the responses that affected the decision, but the integrations themselves are no longer relevant. The decision is all that is important from the point of view of the system to which DDD is applied.
So, in this case, the external system is clearly a repository (a decision-support data source) to me.
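As a rough sketch (invented names; the point is that the application gathers the supporting data and the aggregate makes the decision):

```java
// Supporting details collected by the application layer from external sources.
record PoliceRecord(boolean hasConvictions) {}
record BankStatement(long balanceCents) {}

class VisaApplication {
    private String decision = "PENDING";
    private String archivedEvidence;

    // The aggregate decides from the copies it is handed.
    void decide(PoliceRecord police, BankStatement bank) {
        decision = (!police.hasConvictions() && bank.balanceCents() >= 500_000)
                ? "GRANTED"
                : "REJECTED";
        // Archive the responses that affected the decision; after this point
        // the integrations themselves no longer matter.
        archivedEvidence = police + " | " + bank;
    }

    String decision() {
        return decision;
    }

    String archivedEvidence() {
        return archivedEvidence;
    }
}
```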
Is there any other scenario you are talking about?
Answered by iTollu on December 29, 2021