I am currently preparing to take the Microsoft Dynamics 365 + Power Platform Solution Architect certification, or “MB-600” for short! As I prepare I plan to write revision notes; in this post I will cover topics related to capturing requirements.
Note: Microsoft have recently updated some of their terminology. Therefore we should consider terms like CDS and Dataverse as interchangeable. For the exam I would expect them to begin using the term Dataverse at some point, but that change is unlikely to be immediate, meaning in the short term either term could be used. I created this blog post before the revised terms were announced. I have tried to apply updates but a few “outdated” references may still exist!
A Power Platform Solution Architect must appreciate how solutions tackle the broader business and technical requirements of organizations. The most important question an Architect can ask is “Why?”, as understanding the reason for a requirement helps identify the root problem.
When capturing requirements a Solution Architect’s role may involve;
- Leading discussions to evolve the requirements, so a solution can be clearly defined.
- Identifying and defining the high-level components of the proposed solution.
- Defining what is and importantly what isn’t in scope.
- Identifying any integration requirements.
- Proposing proofs of concept to mitigate risks, confirm feasibility or provide further clarification to requirements.
- Looking for opportunities … such as using proactive insights / AI instead of reactive reviewing of reports and analytics.
Refine high-level requirements
When evaluating requirements the Solution Architect should consider how complete the requirements are, look for common (recurring) requirements, look for gaps and importantly consider any regulatory, compliance or security issues.
Once any gaps or incomplete requirements have been identified the Solution Architect would develop a plan with the wider project team to fill these gaps. The target being to create a well formed, usable set of high level requirements.
All “good” requirements will have certain attributes in common! They will be …
- Clear – anyone reading the requirement should immediately understand the issue to be addressed and why it is necessary. Clear requirements are also cohesive, each requirement should completely address one thing. (A requirement that attempts to address multiple things should probably be split into multiple requirements.) Each requirement should be consistent and never at odds with another requirement. Ideally each requirement should not have excessive dependencies.
- Actionable – the requirement is defined in such a way that it can be progressed; this means it is completely formed and not vague in any way. Plus all interested parties have been consulted, all questions answered etc.
- Feasible – good requirements are always feasible. Meaning we think the required technologies exist, no budget constraints exist and any regulatory questions have been answered etc. Additionally, we may need to consider if the users will actually use the feature. For example, there is little point adding fields to the opportunity entity if the sales team aren’t able or willing to capture the additional information.
- Testable – how will we know the requirement has been met? What are the acceptance criteria?
You have no doubt heard the phrase “SMART” requirements before. It is good practice to ensure all requirements are Specific, Measurable, Attainable, Realistic and Time-bound. It may also be useful to “sense check” requirements by confirming you are solving the root problem and not just easing a symptom of a wider issue.
User stories are often used to articulate requirements in a consistent manner. There isn’t a fixed way to write a user story but there is a common pattern that you may wish to consider.
As a <<persona>>, I want to <<task>>, so that <<goal>>.
It is often useful to consider who the requirement serves ….
- Requirements stated from a business perspective may define the needs of the organization.
- Whilst requirements written from a user perspective may address how groups of users will interact with the system to achieve their goal.
- Requirements written from a solution perspective may express technical or non-functional requirements. These might relate to operability, performance, security, usability etc.
Identify functional requirements
Often functional requirements will be those written from a business perspective or a user perspective. Meaning the requirements that address specific business needs or govern how users interact with the system.
When identifying functional requirements there are common pitfalls that the Solution Architect should try to avoid;
- Rebuilding the legacy system – in workshops users may have a tendency to explain the way the outgoing system works. Creating a new system that simply does what the old system does is probably a lost opportunity!
- Excessive dependencies – if a single requirement is dependent on multiple other requirements then the risk that it cannot be delivered will also increase.
- Missing non-functional requirements – have we defined how many users are needed, what the application load profile will look like and what the performance expectations are? These are often requirements that potential end users will not consider and will therefore not be raised in requirement gathering workshops.
- Requirements that reference other requirements – linking requirements can be useful but can create bottlenecks!
- And more!!! (As part of your revision maybe consider some of the pitfalls you’ve seen in your projects.)
Identify technical requirements
Technical requirements may often be written from a solution perspective, as these will define things like the security model, performance requirements etc.
You may also have technical requirements covering systems integration / interoperability. These would define how the proposed solution will co-exist with existing internal and external production systems.
All requirements should be clearly documented avoiding any technical design language. (e.g. “a plugin is needed to …”) This is certainly true of functional requirements but even when documenting technical requirements it is best to avoid technical statements that could later incorrectly become the basis of design assumptions.
When defining / refining requirements, understanding which are functional and which are non-functional (or technical) requirements may be important. Non-functional requirements might include those that cover;
- System availability
- Compliance / regulatory constraints
- Data retention / residency
- Recovery time
- And more …
When considering technical requirements we have many options for the creation of automations, including (but not limited to);
- Business rules
- Dataverse (CDS) classic workflows
- Power Automate
- Dataverse (CDS) plug-ins
- Business Process Flows
Business Rules
These are great for simple validations or setting of values.
Business rules are optimized to run as part of the transaction that will occur as records are modified. Their scope can allow them to be applied to a single form, all forms or a table (entity).
Business rules allow us to complete tasks with no code which previously would have required developer created form scripts. This can result in potential challenges if a mixture of business rules and scripting is implemented. (Changes completed by scripts will not trigger business rules.)
Another common challenge with business rules is that they will not execute if any of the columns referenced in the rule are missing from the form.
Business rules can only reference columns on the table they are associated with. They have no ability to use related records or Dataverse connectors. (Although they could reference a related record via a calculated field based on an n:1 relationship.)
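To make that behaviour concrete, here is a toy model in Python. (Business rules are of course configured declaratively, not coded; the rule itself, the column names and the threshold below are all invented for illustration.)

```python
# Toy model: a form-scoped business rule only evaluates when every
# column it references is present on the form; otherwise it silently
# does nothing. The rule ("set discount when revenue > 100k") and all
# column names are invented for illustration.
def run_business_rule(rule_columns, form_columns, row):
    if not set(rule_columns).issubset(form_columns):
        return None  # rule does not execute at all
    if row.get("revenue", 0) > 100_000:
        return {"discount": 0.1}  # action: set a column value
    return {}  # condition not met, no changes made

# A form missing the "discount" column means the rule never runs:
assert run_business_rule(["revenue", "discount"],
                         ["name", "revenue"],
                         {"revenue": 200_000}) is None
```

This silent non-execution is exactly why mixing form designs needs care: the rule fails quietly rather than raising an error.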
I describe more about business rules in this post.
Dataverse (CDS) Classic Workflow
We have had “classic workflows” in the Power Platform for a considerable amount of time. In fact prior to the existence of Power Automate they were the “go to” approach for all workflows.
Now we have Power Automate it should be the first choice for background operations. However classic workflows can also run real-time and may be useful when real-time processing is required.
I describe more about classic workflows in this post.
Power Automate
Power Automate (formerly known as Flow) should be your primary choice for non-real time automation. (aka background processes.)
Power Automate supports triggering on a “near real-time” approach. Power Automate (and Canvas Apps) make use of Dataverse (CDS) connectors.
In addition to the Dataverse connector Power Automate supports 300+ publicly available connectors, and custom connectors can be created for other REST APIs. You may need to be aware that in addition to the Dataverse (CDS) connector we do have a Dynamics 365 connector; the Dynamics 365 connector has effectively been replaced. When we connect to the Dataverse we can connect to a specific environment or use the “current environment” option. When including a Power Automate flow in a solution, the current environment option should be selected. A specific environment would only be selected if we are building the flow outside of a solution or it needs to integrate across multiple environments. (Tip: If you don’t select the “current environment” option there may be a post deployment step to update the connectors!)
Power Automate supports the use of connectors and UI automations. Additionally it can work with Business Process Flows. (As can on-demand classic workflows!)
Each Power Automate flow has a trigger. This may be on record create, update etc. Filtering and scoping options can be used to reduce unnecessary execution and avoid potential infinite loops. Alternatively “flows” can be triggered manually from buttons visible in the Power Automate apps, or on demand when the Dataverse selected trigger is used.
Unlike classic workflows Power Automate can work on lists of data. FetchXML filters can be used to do advanced criteria matching when listing data from related entities. You should enable pagination when working on more than one page of data. The page size is determined by the connector; for the Dataverse connector this is 5,000 rows. There is a limit of 100,000 records and if this is to be exceeded further partitioning of your query may be required.
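The pagination behaviour can be sketched as a simple loop. This is an illustrative Python model only, not connector code: `fetch_page` is a stand-in for the Dataverse “List rows” action, and the default page size and record limit used here reflect my understanding of the connector defaults.

```python
# Sketch of connector-style pagination (illustrative only; fetch_page
# stands in for the real "List rows" action, which returns a page of
# rows plus a continuation point while more data remains).
def fetch_page(data, page_size, skip):
    page = data[skip:skip + page_size]
    next_skip = skip + page_size if skip + page_size < len(data) else None
    return page, next_skip

def list_all_rows(data, page_size=5000, limit=100_000):
    rows, skip = [], 0
    while skip is not None and len(rows) < limit:
        page, skip = fetch_page(data, page_size, skip)
        rows.extend(page)
    return rows[:limit]  # the overall record limit still applies

# With a toy page size of 5, twelve rows come back across three pages:
assert list_all_rows(list(range(12)), page_size=5) == list(range(12))
```

The key point for the exam: without pagination enabled you only ever receive the first page, however many rows actually match.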
The Solution Architect should consider error handling when designing a Power Automate flow. Usefully we have a conditional branch option called “configure run after”. This allows steps within the flow to only execute if the preceding action was successful, failed, skipped or timed out. Additionally the architect can group steps that create, update or delete Dataverse rows into changesets. (Meaning all the steps in the changeset would be rolled back if any failed.)
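The “run after” semantics can be modelled as follows. (A minimal Python sketch for revision purposes: the status names mirror the designer, but the engine and step names below are invented.)

```python
# Minimal model of "configure run after": each step declares which
# outcomes of the preceding action (Succeeded, Failed, Skipped,
# TimedOut) allow it to run. This is a sketch, not the real runtime.
def run_flow(steps):
    """steps: list of (action, run_after) pairs; action() raises on
    failure. Returns the recorded status of each step."""
    statuses, prev = [], "Succeeded"  # the trigger is assumed successful
    for action, run_after in steps:
        if prev not in run_after:
            prev = "Skipped"
        else:
            try:
                action()
                prev = "Succeeded"
            except Exception:
                prev = "Failed"
        statuses.append(prev)
    return statuses

# An error-handling step configured to run only after a failure:
def boom():
    raise RuntimeError("step failed")

def notify():
    pass  # e.g. post an alert somewhere

print(run_flow([(boom, {"Succeeded"}), (notify, {"Failed", "TimedOut"})]))
# → ['Failed', 'Succeeded']
```

Note how the notification step runs precisely because the previous step failed; with the default run-after of “Succeeded” it would have been skipped.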
The Architect may also need to ensure the design of any Power Automate flows stays within various limits. For example, I have already mentioned the 100,000 records limit. Other limits include “Do Until” loops having a default limit of 60 iterations or 1 hour of execution. Connectors may also have throttling limits; you should check the documentation for each connector used for details. Flows can only execute for a maximum of 30 days, so their design may need to plan around this constraint if longer running scenarios exist.
I describe more about Power Automate in this post.
Business Process Flows
Business process flows provide an interactive guide to help users get work done. They help track the major milestones (stages) in long running processes. Business process flows encourage outcomes. (They shouldn’t be considered as simple “wizard data capture” controls!)
Business Process Flows can create seamless links between related and unrelated entity types used in a single business process. They include automatic form transition as the flow moves from entity to entity. Each business process flow can include up to 5 entities.
Business process flows can trigger Flows and classic workflows. This can be achieved (for example) as a stage in the business process flow is entered or exited.
Multiple business process flows can be created for a single table. We can then decide which flows are included in which apps. Alternatively access to the flows can be tailored on the basis of the user’s security role. If multiple flows exist that one user can access they can switch from flow to flow at any time.
Business process flows can also include conditional branches. On occasion the Solution Architect will need to decide if using one business process flow with conditional branches is preferable to having multiple flows. One thing to consider is if the processes actually need to run concurrently or not. (As with a concurrent process you may need two business process flows, allowing the users to switch from one to another as required.)
I describe more about business process flows here.
Dataverse (CDS) Plug-ins
Plug-ins allow developers to create custom logic as an extension to Dataverse operations. They have the ability to modify the request and response on the fly.
Plug-ins allow us to create complex custom logic, but they require a developer with coding skills.
Plug-ins can be either synchronous or asynchronous.
The Power Platform Solution Architect may not be required to actually create the code contained within the plug-in. But the Architect should appreciate when a custom plug-in should be created in preference to other low code automations.
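Conceptually the pipeline looks something like this. (A Python toy model purely for revision: real plug-ins are .NET assemblies registered against messages such as Create or Update, and every name below is invented.)

```python
# Toy model of the Dataverse event pipeline: pre-operation plug-ins can
# modify the request before the write, and post-operation plug-ins can
# react to (or enrich) the response. All names here are invented.
def uppercase_name_plugin(context):
    # pre-operation: change the request "on the fly" before it is saved
    context["target"]["name"] = context["target"]["name"].upper()

def execute_create(target, pre_plugins=(), post_plugins=()):
    context = {"target": dict(target), "outputs": {}}
    for plugin in pre_plugins:        # synchronous, inside the transaction
        plugin(context)
    context["outputs"]["id"] = 1      # stand-in for the database write
    for plugin in post_plugins:       # post-operation stage
        plugin(context)
    return context

result = execute_create({"name": "contoso"},
                        pre_plugins=[uppercase_name_plugin])
print(result["target"]["name"])  # → CONTOSO
```

The value of this mental model is knowing *where* logic runs: a pre-operation plug-in changes data before it is committed, something no low-code option can do in quite the same way.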
Confirm requirements meet organization’s goals
As part of your revision I suggest you review some of your real-world requirements. Which ones are good? How could they be improved? Are they written with the correct persona in mind and do they avoid jargon etc.?
It is important to ensure that the requirements meet the organization’s goals; it may help Solution Architects to confirm this by considering multiple things;
- Think about the requirements from the user / business perspective. It can be hard but try not to think like a developer!
- Ask the same question in different ways. (And maybe also to different people.) Then make sure you get the same answer.
- Listen, this sounds simple but it is difficult! When someone is describing a requirement it is often hard for an architect to avoid starting to think about the final solution. You need to avoid doing this until you’ve fully listened to the requirement.
- Focus on the desired outcome, ensure all requirements can be referenced back to the original business goal. (If they don’t they are probably out of scope.)
Use real-world scenarios to help paint the picture of what is needed. (But make sure the scenarios are typical. Sometimes you need to avoid wasting time on “edge cases”.)
When suggesting solutions or requirements the Solution Architect should be mindful not to over-promise. It can be really useful to suggest great features which contribute to a big goal but whilst doing this the Architect should try to encourage small (achievable) steps.
We are going to need to test the solution to ensure it meets the stated requirements. Often considering testing even at requirements gathering stage will help ensure the requirement is achieved. For example, knowing what the expected peak loads are on a system and documenting this in the requirements will aid performance testing.