Microsoft Power Automate — Complete Guide
Basics · Triggers & Actions · Expressions · Error Handling · Advanced · Scenarios · Cheat Sheet
Table of Contents
- Core Concepts — Basics
- Triggers, Actions & Conditions
- Expressions & Data Operations
- Error Handling & Reliability
- Advanced Topics
- ALM & Deployment
- Scenario-Based Questions
- Cheat Sheet — Quick Reference
1. Core Concepts — Basics
What is Power Automate and what are its main use cases?
Power Automate is a cloud-based workflow automation service from Microsoft. It enables users to automate repetitive tasks, integrate apps and services, and orchestrate business processes without writing code.
Main use cases:
- Automated notifications and alerts
- Approval workflows (purchase orders, leave requests, document sign-off)
- Data synchronisation between systems
- Scheduled data processing and reporting
- Document management and generation
- Integration with Microsoft 365 services (SharePoint, Teams, Outlook, Dataverse)
What are the different types of flows in Power Automate?
| Flow Type | Description |
|---|---|
| Cloud flows — Automated | Event-triggered (when X happens in a connected service) |
| Cloud flows — Instant | Manually triggered by a user from a button or app |
| Cloud flows — Scheduled | Time-based (recurrence — every day at 8am, every 15 min) |
| Desktop flows (RPA) | Run on a local machine to automate UI-based tasks |
| Business process flows | Guide users through structured stages in model-driven apps |
| Process mining flows | Analyse existing processes for bottlenecks and insights |
What is the difference between Automated, Instant, and Scheduled flows?
Automated flow: triggered automatically by an event — e.g., when a new email arrives, a SharePoint item is created, or a Dataverse record changes.
Instant flow: triggered manually by a user — from a button in Power Apps, a Teams message action, or the Power Automate mobile app. This is the type called from Power Apps and Copilot Studio.
Scheduled flow: triggered on a time schedule — e.g., every day at 8am, every Monday, or every 15 minutes. Best for batch processing, daily reports, and data sync jobs.
What is a connector and what types exist?
A connector is a wrapper around a REST API that exposes triggers and actions for a service.
| Connector Type | Description | Examples |
|---|---|---|
| Standard | Included in all licences | SharePoint, Outlook, Teams, OneDrive, Excel |
| Premium | Require premium licence | Dataverse, SQL Server, HTTP, Salesforce, ServiceNow |
| Custom | Built by makers for any REST/SOAP API | Any internal or third-party API |
| On-premises gateway | Connect to on-premises systems | SQL Server on-prem, SharePoint on-prem, file systems |
What is the difference between a connection and a connector?
A connector is the definition — the template describing what triggers and actions are available for a service (e.g., the SharePoint connector).
A connection is a specific authenticated instance of a connector — it stores the credentials (OAuth token, API key, username/password) for a specific account. Many flows can share the same connection; one connector can have multiple connections for different accounts.
ALM Note: Connections themselves cannot be packaged in solutions — they must be created in each environment and bound to flows via Connection References.
What is a Connection Reference and why is it important for ALM?
A Connection Reference is a solution component that acts as a pointer to a connection. Instead of hardcoding a specific connection into a flow, the flow references a Connection Reference object. When the solution is deployed to a new environment, makers update the Connection Reference to point to the environment-appropriate connection — without editing the flow itself.
Tip: Always use Connection References in solution-aware flows. Hardcoded connections break when solutions are exported and imported across environments.
2. Triggers, Actions & Conditions
What is the difference between a trigger and an action?
A trigger is the first step of every flow — it defines what starts the flow. A flow can have only one trigger. An action is every subsequent step — it does something (create a record, send an email, call an API, etc.). A flow can have unlimited actions.
Trigger types:
- Event-based: when X happens in a connected service
- Manual (Instant): button click, app call
- Scheduled: recurrence on a time pattern
What are trigger conditions and how do you use them?
Trigger conditions are expressions evaluated before the flow runs. If false, the flow exits immediately without executing any actions — saving run counts and avoiding unnecessary processing.
Example — only run when SharePoint item status = "Approved":
@equals(triggerOutputs()?['body/Status/Value'], 'Approved')
Example — only run for emails with attachments:
@equals(triggerOutputs()?['body/hasAttachments'], true)
Example — only run when Dataverse record amount > 1000:
@greater(triggerOutputs()?['body/cr123_amount'], 1000)
Tip: Trigger conditions are evaluated server-side before the flow starts. Far more efficient than starting the flow and using a Condition action to exit — especially for high-volume triggers.
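The null-propagating semantics of those trigger-condition expressions can be sketched in Python (an illustrative analogue only — real trigger conditions are evaluated by the service; the field names are the ones from the examples above):

```python
def should_run(trigger_outputs: dict) -> bool:
    """Mimics @equals(triggerOutputs()?['body/Status/Value'], 'Approved').

    Returning False corresponds to the run being skipped before
    any action executes."""
    body = trigger_outputs.get("body", {})
    # The "?" in ?['Status'] swallows missing fields instead of erroring:
    status = (body.get("Status") or {}).get("Value")
    return status == "Approved"

print(should_run({"body": {"Status": {"Value": "Approved"}}}))  # True
print(should_run({"body": {"Status": {"Value": "Draft"}}}))     # False
print(should_run({"body": {}}))  # False — missing field behaves like null
```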
What is the "Apply to each" action and what are its performance implications?
"Apply to each" loops over an array and executes the inner actions for every item. By default it runs sequentially.
Performance implications:
- Sequential processing is slow for large arrays — 1000 items × 1 second each = ~17 minutes
- Enable concurrency control to run up to 50 iterations in parallel
- Avoid nesting Apply to each inside another Apply to each — creates N×M iterations
- Use batch operations where possible (e.g., Dataverse "Perform a changeset" instead of looping individual updates)
Warning: Power Automate has action limits per flow run (10,000 for standard, 100,000 for premium). Large nested loops can hit this limit.
What is the difference between "Do until" and "Apply to each"?
Apply to each: iterates over every item in an array. Number of iterations = array size. Used when you have a known collection to process.
Do until: loops until a condition becomes true. Number of iterations is unknown upfront. Used for polling scenarios — wait for a record to reach a status, retry until an API succeeds. Has configurable limits (count + timeout) to prevent infinite loops.
What are Scope actions and their uses?
A Scope is a container action that groups related actions together.
Key uses:
- Try-Catch pattern — wrap risky actions in a "Try" scope; configure a "Catch" scope to run if the Try scope fails (set "Run after" → "has failed")
- Logical grouping — organise complex flows into named sections for readability
- Selective error handling — target error handling to a specific group of actions
Tip: The Try-Catch-Finally pattern using three Scopes is the gold-standard error handling approach.
What is the HTTP action and when would you use it over a connector?
The HTTP action (premium) sends a direct REST/HTTP request to any URL — it is the universal connector for APIs that don't have a dedicated connector. Use it when:
- No connector exists for the target service
- The connector exists but doesn't expose the specific API endpoint needed
- Calling Microsoft Graph API endpoints not covered by connectors
- Calling the Dataverse Web API with custom OData queries
Supports all HTTP methods, custom headers, and authentication: None, Basic, Client Certificate, Azure AD OAuth, API Key.
3. Expressions & Data Operations
What are expressions in Power Automate and where are they used?
Expressions are formulas written in the Power Automate expression language (based on Azure Logic Apps syntax). Used in action inputs, conditions, trigger conditions, and variable assignments to compute values dynamically.
@{utcNow()} → current UTC datetime
@{formatDateTime(utcNow(),'dd/MM/yyyy')} → formatted date string
@{length(outputs('Get_items')?['body/value'])} → array count
@{toUpper(triggerOutputs()?['body/Title'])} → uppercase string
What are the most important expression functions to know?
String functions:
concat('Hello', ' ', 'World') → Hello World
toUpper(string) / toLower(string)
substring(string, start, length)
replace(string, 'old', 'new')
trim(string)
split(string, ',') → array
Date functions:
utcNow() → current UTC datetime
formatDateTime(utcNow(),'yyyy-MM-dd')
addDays(utcNow(), 7) → 7 days from now
addHours(utcNow(), -24) → 24 hours ago
convertTimeZone(utcNow(),'UTC','India Standard Time')
Array & collection:
length(array) → item count
first(array) / last(array) → first/last item
union(arr1, arr2) → combined array
contains(array, value) → boolean
Logical:
if(condition, trueVal, falseVal)
equals(a, b)
and(bool1, bool2) / or(bool1, bool2)
not(bool)
empty(value)
coalesce(val1, val2, val3) → first non-null value
What is the difference between variables and compose actions?
Variable actions (Initialize variable, Set variable, Append to array/string): declare and mutate values across the entire flow including inside loops. Variables persist across scopes and loops.
Compose action: computes a value once and stores it as the static output of that action step. Lightweight — no initialisation needed. Immutable — cannot be updated. Output accessed via outputs('Compose_action_name').
Rule of thumb: Use Compose for one-time transformations or constructing complex JSON. Use Variables when the value needs to be updated in a loop (e.g., accumulating a count or building a string iteratively).
How do you parse and work with JSON in Power Automate?
Use the Parse JSON action — provide a JSON sample and it auto-generates a schema. After parsing, fields become available as dynamic content tokens in subsequent actions.
Input (from HTTP response body):
{"orderId": "ORD-001", "amount": 1500, "status": "pending"}
After Parse JSON → orderId, amount, status available as dynamic tokens
Without Parse JSON — access via expression:
@{body('HTTP_action_name')?['orderId']}
@{body('HTTP_action_name')?['nested']?['field']}
Warning: Always use Parse JSON after HTTP actions returning JSON. Accessing nested JSON without it via expressions is error-prone and hard to maintain.
What is the Select action and how does it differ from Filter Array?
Select action: transforms an array — maps each item to a new shape. Like SQL's SELECT. Input: array of objects. Output: new array with only the fields you choose, optionally renamed.
Filter Array action: filters an array — keeps only items matching a condition. Like SQL's WHERE. Input: array of objects. Output: subset of the original array matching your condition.
Select example:
Input: [{name:'Alice', age:30, dept:'IT', salary:50000}]
Output: [{fullName:'Alice', department:'IT'}]
Filter example:
Input: 100 orders with various statuses
Output: only orders where status = 'Approved'
What is the Data Operations — Join action?
The Join action concatenates all elements of an array into a string with a specified delimiter.
Input array: ['Alice', 'Bob', 'Charlie']
Delimiter: ', '
Output: 'Alice, Bob, Charlie'
Useful for building comma-separated lists for email bodies, SharePoint multi-value fields, or API parameters.
4. Error Handling & Reliability
What is "Run After" and how is it used for error handling?
"Run After" controls when an action executes based on the result of the previous action. Options: is successful (default), has failed, is skipped, has timed out. Multiple states can be selected.
Pattern: send failure notification only when an action fails
→ "Send email" action → Run After → HTTP call → "has failed"
Pattern: always run cleanup regardless of outcome (Finally)
→ "Cleanup" action → Run After → previous action
→ tick ALL: is successful + has failed + is skipped + timed out
Tip: "Run After" on all four states = equivalent of a "finally" block — it always runs. Essential for guaranteed cleanup steps.
Explain the Try-Catch-Finally pattern in Power Automate.
Implemented using three Scope actions:
Structure:
- Try scope — contains the main business logic. Runs normally.
- Catch scope — Run After → Try scope → "has failed". Contains error notification, logging, and compensation logic.
- Finally scope — Run After → Try AND Catch → all states (success, failed, skipped, timed out). Contains guaranteed cleanup actions.
Accessing error details in Catch scope:
result('Try_scope')[0]['error']['message'] ← error message
result('Try_scope')[0]['error']['code'] ← error code
result('Try_scope')[0]['name'] ← name of failed action
result('Try_scope')[0]['status'] ← Failed / Succeeded / Skipped
Tip: This is the single most-asked Power Automate advanced pattern. Know it cold.
How do you implement retry logic in Power Automate?
Option 1 — Built-in retry policy (on action settings):
- Configure Count (1–90) and Interval
- Types: None, Fixed interval, Exponential interval
- Automatically retries on HTTP 408, 429, and 5xx error responses
Option 2 — Do Until loop (custom retry):
Initialize variable: RetryCount = 0
Initialize variable: Success = false
Do Until: Success = true OR RetryCount >= 3
→ HTTP action
→ If HTTP succeeded: Set Success = true
→ If HTTP failed: Set RetryCount = RetryCount + 1, Delay 30 seconds
Rule: Built-in retry for transient HTTP errors. Do Until for business logic retries (e.g., retry until a record reaches expected state).
How do you handle concurrency and race conditions in flows?
By default, a flow can run multiple instances simultaneously. This causes race conditions (two runs both read and write the same value, resulting in data corruption).
Mitigations:
- Enable concurrency control on the trigger → Degree of Parallelism = 1 (forces sequential runs)
- Use Dataverse optimistic concurrency — include ETag in update requests to detect conflicting writes
- Design flows to be idempotent — safe to run multiple times with the same result
- Use Azure Service Bus queues as a buffer to serialise high-volume triggers
What is the difference between terminate and cancel for flows?
Terminate action: explicitly ends the current flow run with a specified status — Succeeded, Failed, or Cancelled. Used to exit a flow early with a controlled outcome. The status is visible in the run history.
Cancel: externally cancels a running flow instance from the Power Automate portal. Marks the run as Cancelled.
Example — exit flow early with failure:
Terminate
Status: Failed
Code: VALIDATION_ERROR
Message: 'Amount exceeds approved limit'
5. Advanced Topics
What are Child flows and when should you use them?
A Child flow is an Instant cloud flow triggered by another flow (the parent) using the "Run a Child Flow" action. It accepts inputs and returns outputs — like a function or subroutine.
Use when:
- The same logic is needed in multiple flows — centralise in one child flow
- A flow is getting too complex — break into modular child flows
- Different teams own different parts of a process
- Reusing error handling, notification, or logging logic across flows
Warning: Child flows must be in the same environment and solution as the parent. They must use the "Manually trigger a flow" trigger.
What is the difference between Power Automate and Azure Logic Apps?
| | Power Automate | Azure Logic Apps |
|---|---|---|
| Audience | Business users, makers | Developers, IT/integration teams |
| Hosting | Microsoft-managed (SaaS) | Azure subscription |
| Pricing | Per user / per flow licence | Pay-per-execution |
| Infrastructure | No control | Full VNET, ISE, managed identity |
| Source control | Solution packages | ARM/Bicep/Terraform templates |
| Best for | M365 automation, citizen dev | Enterprise integration, high-volume |
Under the hood they share the same connector ecosystem and expression language.
How do you call Microsoft Graph API from Power Automate?
Use the HTTP action with Azure AD OAuth authentication:
- Create an Azure AD app registration with required Graph API permissions
- HTTP action settings:
- Method: GET / POST / PATCH / DELETE
- URL: https://graph.microsoft.com/v1.0/users/{id}/manager
- Authentication: Active Directory OAuth
- Tenant ID, Client ID, Client Secret
- Resource: https://graph.microsoft.com
- Parse the JSON response for use in subsequent actions
GET https://graph.microsoft.com/v1.0/me/manager
GET https://graph.microsoft.com/v1.0/groups/{id}/members
POST https://graph.microsoft.com/v1.0/me/sendMail
Body: { "message": { "subject": "...", "body": {...}, "toRecipients": [...] } }
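As a sketch of the request shape the HTTP action ultimately sends (no real call is made here — the token value is a placeholder supplied in practice by the action's OAuth settings):

```python
def graph_request(token: str, path: str):
    """Builds the URL and headers for a Microsoft Graph v1.0 call."""
    url = "https://graph.microsoft.com/v1.0" + path
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    return url, headers

url, headers = graph_request("<access-token>", "/me/manager")
print(url)  # https://graph.microsoft.com/v1.0/me/manager
# A real call would then be e.g. requests.get(url, headers=headers)
```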
How do Approval flows work and what types are available?
Approval actions send an approval request via email and the Power Automate Approvals centre. The flow pauses until all required responses are received.
| Approval Type | Behaviour |
|---|---|
| Approve/Reject — First to respond | Any one approver's response decides outcome |
| Approve/Reject — Everyone must approve | All must respond; any rejection fails the approval |
| Custom responses | Define your own options (e.g., Approve / Reject / Need more info) |
Sequential approvals (Manager → Director → VP): chain multiple approval actions with conditions between each level.
Key fact: Approval flows can pause for days or weeks — Power Automate persists the state in Dataverse. No Azure infrastructure needed.
What are environment variables in Power Automate and why are they important?
Environment variables are solution components storing configuration values (URLs, API keys, feature flags, email addresses) that differ per environment.
| Variable Type | Use Case |
|---|---|
| Text | API URLs, email addresses, SharePoint site URLs |
| Number | Thresholds, limits, retry counts |
| Boolean | Feature flags (enable/disable functionality) |
| JSON | Complex config objects |
| Secret | Sensitive values stored in Azure Key Vault |
| Data source | Connection Reference |
Warning: Environment variable values are set per environment after solution import — they are not exported with the solution definition.
What is Desktop Flow (RPA) and how does it differ from cloud flows?
Desktop flows use Robotic Process Automation to automate UI-based tasks on a local Windows machine — clicking buttons, filling forms in legacy apps, reading from screens.
| | Desktop Flow | Cloud Flow |
|---|---|---|
| Runs on | Local Windows machine | Microsoft cloud |
| API required | No — interacts with UI | Yes — uses APIs/connectors |
| Licence | Premium + attended/unattended RPA | Premium or per-flow |
| Machine needed | Yes — Power Automate Desktop agent | No |
| Triggered by | Cloud flow "Run a desktop flow" action | Events, schedule, manual |
6. ALM & Deployment
How do you deploy Power Automate flows across environments?
Flows packaged in Power Platform solutions are the recommended approach:
# Export solution via CLI
pac solution export --path ./solution.zip --name MyFlowSolution --managed
# Import to target environment
pac solution import --path ./solution.zip
ALM pipeline steps:
- Build and test flows in development environment
- Add all flows, connection references, and environment variables to a solution
- Export as managed solution for UAT/production
- Import into target environment
- Configure connection references and set environment variable values
- Test all flows in target environment
- Automate with Azure DevOps / GitHub Actions using Power Platform Build Tools
What is the difference between managed and unmanaged solutions?
| | Managed | Unmanaged |
|---|---|---|
| Editable in target env | No (read-only) | Yes |
| Used in | Production, UAT | Development |
| Delete behaviour | Deleting solution removes all components | Components remain |
| Recommended for | Deployment pipeline | Active development |
What should be included in a Power Automate solution?
- Cloud flows (all flows in scope)
- Connection references (one per connector used)
- Environment variables (definitions — values set post-import)
- Custom connectors (if used)
- Canvas apps or Power Pages that call the flows (if applicable)
- Dataverse tables/columns referenced by flows (in same or dependent solution)
7. Scenario-Based Questions
Scenario: Build a multi-level approval flow for purchase orders.
Requirement: PO under £1,000 → manager only. £1,000–£10,000 → manager + finance. Over £10,000 → manager + finance + CFO.
Design:
- Trigger: Dataverse — when PO record status changes to "Submitted"
- Condition: check `PO Amount` against the thresholds → three branches
- Branch 1 (under £1,000):
- Approval → manager only → if approved, update status → notify requester
- Branch 2 (£1,000–£10,000):
- Approval → manager (if approved) → Approval → finance (if approved) → update status
- Branch 3 (over £10,000):
- Sequential chain: manager → finance → CFO
- Any rejection: update status to "Rejected" → send rejection email with comments
- Timeout: if no response in 48h → send reminder → escalate to approver's manager (via Graph API to get manager)
- Try-Catch: log failures, notify flow owner
Scenario: A flow polls every minute but usually finds no data. How do you optimise it?
Problem: Scheduled flow runs every minute, queries SharePoint, exits with "no items" 95% of the time. Wastes runs and hits rate limits.
Solutions:
- Replace scheduled trigger with an event-driven trigger ("When an item is created") — eliminates empty runs entirely
- If polling is unavoidable: add a trigger condition so the flow exits before executing any actions
- Use OData filter queries on "Get items" — never retrieve all records and filter inside the flow
- Queue-based approach: write events to Service Bus on creation, trigger flow from queue messages
- Increase recurrence interval and accept near-real-time instead of real-time processing
Rule: Event-driven > scheduled polling. Always prefer native event triggers over recurrence + query patterns.
Scenario: A flow fails intermittently when calling an external API. How do you make it resilient?
Problem: HTTP action returns 429 (rate limit) or 503 (unavailable) occasionally, failing the entire flow.
Solution:
- Enable built-in retry policy on HTTP action — Exponential interval, Count: 4, Min interval: 20s. Handles 429/5xx automatically.
- Wrap in a Try-Catch scope. In Catch: check `result('Try')[0]['error']['code']`
- If the code is retryable: Do Until loop with delay and retry counter (max 3 attempts)
- Respect the `Retry-After` header from 429 responses — use it as the Delay duration
- Log failures to Dataverse or Application Insights with full error detail
- Send Teams/email alert in Catch scope for repeated failures
Scenario: Nightly sync between an external API and Dataverse without duplicating records.
Design:
- Scheduled trigger: daily at 2am
- HTTP GET: all records from external API since last sync timestamp
- Timestamp stored in Dataverse config record or environment variable
- Parse JSON: extract records array
- Apply to each (concurrency: 10):
- Dataverse — list rows: filter by external ID field
- Condition: does record exist?
- Yes: compare `ModifiedOn` — if changed, update the Dataverse record (Patch)
- No: create a new Dataverse record
- After loop: update last sync timestamp config record
- Try-Catch: log errors per record; send summary email (created / updated / skipped / failed counts)
Scenario: How do you prevent duplicate flow runs for the same record?
Problem: A Dataverse trigger fires twice for the same update (e.g., due to cascading updates), causing duplicate processing.
Solutions:
- Idempotency key: check a `ProcessedOn` field on the record. At the start of the flow: if the field is already populated with today's date, terminate (already processed). Otherwise, set it immediately before processing — it acts as a lock.
- Trigger conditions: add conditions to filter out the specific column changes that would cause re-triggering
- Concurrency control: set trigger concurrency = 1 to prevent two simultaneous runs for the same record
- Dataverse change tracking: use the `@odata.etag` / row version to detect whether the record was already processed
Scenario: Build a flow that sends a Teams alert only if unacknowledged for 24 hours.
Requirement: Avoid notification spam — only alert if no one has responded in 24 hours.
Design:
- Trigger: when high-priority Dataverse ticket is created or escalated
- Condition: is `LastNotifiedOn` null OR `addHours(LastNotifiedOn, 24)` < `utcNow()`?
- Yes: send Teams message → update `LastNotifiedOn` = `utcNow()` on the ticket record
- No: Terminate — a notification was already sent within 24 hours
- Separate flow: when the ticket is acknowledged (status changes) → clear `LastNotifiedOn` to reset the 24h window
Tip: Storing state in Dataverse (LastNotifiedOn) is the correct pattern — never rely on flow run history for business logic decisions.
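The 24-hour gate reduces to a single datetime comparison; a Python sketch of the condition (a stand-in for the `LastNotifiedOn` check, not flow code):

```python
from datetime import datetime, timedelta, timezone

def should_notify(last_notified_on, now):
    """True when LastNotifiedOn is null OR addHours(LastNotifiedOn, 24) < utcNow()."""
    return last_notified_on is None or last_notified_on + timedelta(hours=24) < now

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
print(should_notify(None, now))                      # True — never notified
print(should_notify(now - timedelta(hours=2), now))  # False — inside 24h window
print(should_notify(now - timedelta(hours=25), now)) # True — window expired
```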
8. Cheat Sheet — Quick Reference
Flow Types at a Glance
Automated cloud flow → triggered by events (SharePoint, Dataverse, email)
Instant cloud flow → triggered manually (Power Apps, Teams, mobile app)
Scheduled cloud flow → triggered by time (recurrence)
Child flow → triggered by parent flow (reusable subroutine)
Desktop flow (RPA) → triggered by cloud flow, runs on local machine
Key Action Reference
| Action | Purpose |
|---|---|
| Initialize variable | Declare a variable (required before any other use) |
| Set variable | Update a variable's value |
| Append to string/array variable | Add to an existing variable |
| Compose | Compute and store an immutable value |
| Parse JSON | Make JSON fields available as dynamic tokens |
| Select | Transform array items to a new shape |
| Filter array | Keep only items matching a condition |
| Apply to each | Loop over array items |
| Do until | Loop until a condition is true |
| Scope | Group actions (Try/Catch/Finally) |
| Condition | If/else branching |
| Switch | Multi-branch by value |
| Terminate | End flow with Succeeded/Failed/Cancelled |
| Delay | Wait for a duration or until a datetime |
| HTTP | Call any REST API (premium) |
| Run a child flow | Call another flow as a subroutine |
Try-Catch-Finally Template
[Try scope]
→ Main business logic actions
→ Set variable: Success = true at end
[Catch scope] ← Run After: Try → has failed
→ Compose error: result('Try_scope')[0]['error']['message']
→ Log to Dataverse error log table
→ Send Teams notification with error details
→ Terminate: Failed
[Finally scope] ← Run After: Try+Catch → all states
→ Update audit/status record
→ Release any locks
→ Always runs regardless of success or failure
Essential Expressions
Date & time:
utcNow()
formatDateTime(utcNow(), 'yyyy-MM-ddTHH:mm:ssZ')
addDays(utcNow(), -7)
convertTimeZone(utcNow(), 'UTC', 'India Standard Time')
Null / empty handling:
coalesce(variables('MyVar'), 'default value')
empty(outputs('Get_item')?['body/Title'])
if(empty(variables('x')), 'fallback', variables('x'))
String manipulation:
concat(variables('FirstName'), ' ', variables('LastName'))
replace(triggerBody()?['Description'], '\n', '<br>')
toLower(triggerBody()?['Email'])
Array:
length(outputs('Get_items')?['body/value'])
first(outputs('Get_items')?['body/value'])
contains(variables('ApprovedList'), variables('CurrentId'))
Performance Best Practices
DO:
✓ Use trigger conditions to filter before flow starts
✓ Use OData $filter on Get items / List rows actions
✓ Enable concurrency on Apply to each (up to 50 parallel)
✓ Use Child flows for reusable logic
✓ Use batch operations (Dataverse changeset) over per-item loops
✓ Event-driven triggers over scheduled polling
✓ Select only needed columns in Get items ($select)
DON'T:
✗ Nest Apply to each inside Apply to each
✗ Retrieve all records and filter inside the flow
✗ Use variables for values that never change (use Compose)
✗ Put heavy actions inside triggers without conditions
✗ Hard-code URLs, emails, or environment-specific values
Common Errors & Fixes
| Error | Likely Cause | Fix |
|---|---|---|
| Flow fails after solution import | Connection not configured | Set up Connection References in target env |
| Apply to each hits action limit | Too many iterations | Enable concurrency; use batch operations |
| HTTP action returns 403 | Auth misconfigured | Check OAuth settings, token scope, app permissions |
| Parse JSON fails | Schema mismatch | Regenerate schema from latest sample payload |
| Approval flow stuck | Approver didn't receive email | Check spam/junk; verify approver email; check DLP |
| Variable "already initialised" error | Initialize inside a loop | Move Initialize variable to top of flow, outside all loops |
| Flow triggers on its own updates | Self-triggering loop | Add trigger condition to filter out flow's own user identity |
| Do Until runs forever | Condition never becomes true | Always set a max count/timeout; add a counter variable |
Top 10 Tips
- Try-Catch-Finally using Scopes is the #1 advanced pattern asked in every senior interview. Know the `result('scope_name')` expression syntax.
- Trigger conditions vs Condition action — trigger conditions are evaluated server-side and don't consume action counts. Always prefer them for filtering.
- Connection References are mandatory for ALM. Hardcoded connections are the most common cause of broken flows post-deployment.
- Environment variables for every config value that changes per environment — URL, API key, email address, feature flag.
- Apply to each concurrency — know how to enable it and the maximum parallelism (50). The default sequential behaviour surprises many candidates.
- Compose vs Variable — Compose is immutable and lightweight; Variables are mutable and persist across loops.
- Event-driven triggers are always preferred over scheduled polling — mention this proactively in optimisation questions.
- Child flows for reuse — same pattern as functions/subroutines in code. Must be in same solution and environment.
- Approval flow types — know all three (First to respond / Everyone must approve / Custom responses) and sequential approval chaining.
- Power Automate vs Logic Apps — know when to recommend each. Interviewers at enterprise level always ask this.