Thursday, May 7, 2026

Liquid code in Power Pages - FetchXML

FetchXML is the native query language of Microsoft Dataverse. If you're building anything serious on the Power Platform — flows, plugins, portals, or canvas apps — you will hit a wall with OData eventually. This guide is the syllabus I wish existed when I started: every feature, every operator, real patterns you'll actually use in production.

1. What is FetchXML

FetchXML is an XML-based query language built specifically for Microsoft Dataverse (formerly Common Data Service / CDS). It's the closest thing Dataverse has to SQL — you write structured XML describing which records you want, and Dataverse translates it internally into the right database query.

It predates OData in the Dynamics ecosystem and remains more powerful for complex scenarios: multi-level joins, aggregates, cross-entity filtering, and paging over large datasets.

When to use FetchXML vs the alternatives

  • FetchXML — complex joins, aggregates, paging over 5,000 records, cross-entity conditions, working inside plugins or C# code
  • OData ($filter) — simple single-entity queries via Web API, quick REST calls, Power Automate "List Rows" without complex joins
  • LINQ (early-bound C#) — type-safe queries in plugin/tool code, good for simple lookups, limited aggregate support

Where FetchXML runs

  • Power Automate — "List rows" action on Dataverse connector → FetchXML input
  • Power Apps — via Collect with a Dataverse table or inside canvas app formulas using the connector
  • Plugins / C# — IOrganizationService.RetrieveMultiple(new FetchExpression(fetchXml))
  • Web API — GET /api/data/v9.2/<entity_plural>?fetchXml=<encoded xml>
  • XrmToolBox — FetchXML Builder plugin for visual building and live testing
  • Power Pages / Portals — FetchXML-based data lists and liquid templates
<!-- Minimal valid FetchXML -->
<fetch>
  <entity name="account">
    <attribute name="name" />
    <attribute name="emailaddress1" />
  </entity>
</fetch>

2. Basic Structure

Every FetchXML query follows the same skeleton. Learn this once and everything else is layers on top.

The fetch element

Attribute — Values — What it does
  • top — integer — Limits result count (like SQL TOP). Omit for the full result set.
  • distinct — true / false — Removes duplicate rows. Essential when using outer joins.
  • no-lock — true / false — Adds a NOLOCK hint. Use for read-heavy reporting queries.
  • aggregate — true / false — Switches the query to aggregate mode (section 6).
  • page — integer — Page number for paging (section 7).
  • count — integer — Page size for paging. Max 5000.

entity, attribute, order

  • <entity name="logical_name"> — the table you're querying. Always use the logical name (lowercase, prefixed).
  • <attribute name="field_name" /> — columns to return. Omit completely to return all columns (not recommended for production).
  • <order attribute="field_name" descending="false" /> — sort order. Stack multiple for multi-column sort.
<fetch top="50" distinct="true" no-lock="true">
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="emailaddress1" />
    <attribute name="createdon" />
    <order attribute="createdon" descending="true" />
    <order attribute="fullname" descending="false" />
  </entity>
</fetch>
Tip: Never omit all <attribute> elements in production queries. Selecting all columns is a silent performance killer — you end up fetching columns you never use across potentially thousands of rows.

3. Filtering — <filter> and <condition>

Filters are where FetchXML earns its keep. The <filter> element wraps one or more <condition> elements and can be nested arbitrarily deep.

condition anatomy

<condition attribute="fieldname" operator="eq" value="somevalue" />

Complete Operator Reference

Operator — SQL equivalent — Notes
  • eq — = value — Exact match
  • ne — != value — Not equal
  • gt — > value — Greater than
  • lt — < value — Less than
  • ge — >= value — Greater than or equal
  • le — <= value — Less than or equal
  • like — LIKE '%val%' — Use % wildcard. A leading % kills index use — avoid.
  • not-like — NOT LIKE — Inverse of like
  • in — IN (v1,v2,v3) — Uses <value> child elements
  • not-in — NOT IN — Uses <value> child elements
  • between — BETWEEN v1 AND v2 — Uses two <value> child elements
  • null — IS NULL — No value attribute needed
  • not-null — IS NOT NULL — No value attribute needed
  • eq-userid — current user ID — No value needed. Dynamic current user.
  • ne-userid — not current user — No value needed
  • eq-businessid — current business unit ID — No value needed
  • today — = CAST(GETDATE() AS DATE) — Date fields only
  • tomorrow — tomorrow's date
  • yesterday — yesterday's date
  • this-week — current calendar week
  • this-month — current calendar month
  • this-year — current year
  • last-seven-days — rolling 7 days back
  • last-x-days — rolling N days back — Requires value="N"
  • next-x-days — rolling N days forward — Requires value="N"
  • last-x-months — rolling N months back — Requires value="N"
  • next-x-months — rolling N months forward — Requires value="N"
  • on — exact date — Date fields, value = YYYY-MM-DD
  • on-or-before — <= date
  • on-or-after — >= date
  • begins-with — LIKE 'val%' — Index-safe, unlike a leading-wildcard like
  • ends-with — LIKE '%val' — Avoid on large tables
  • contain-values — multi-select option set contains — Uses <value> child elements
  • not-contain-values — multi-select option set exclusion — Uses <value> child elements

Examples

<!-- IN operator -->
<condition attribute="statuscode" operator="in">
  <value>1</value>
  <value>2</value>
  <value>290560001</value>
</condition>

<!-- BETWEEN operator -->
<condition attribute="createdon" operator="between">
  <value>2024-01-01</value>
  <value>2024-12-31</value>
</condition>

<!-- Dynamic date operators -->
<condition attribute="modifiedon" operator="last-x-days" value="30" />

<!-- Null check -->
<condition attribute="emailaddress1" operator="null" />

<!-- Current user -->
<condition attribute="ownerid" operator="eq-userid" />

<!-- Like with trailing wildcard only (index-safe) -->
<condition attribute="name" operator="like" value="Contoso%" />

4. AND vs OR Logic

By default, all conditions inside a <filter> are ANDed together. Set type="or" to switch to OR. Nest filters to build complex boolean trees.

<!-- Default AND: active accounts in the US -->
<filter>
  <condition attribute="statecode" operator="eq" value="0" />
  <condition attribute="address1_country" operator="eq" value="United States" />
</filter>
<!-- OR: either status applies -->
<filter type="or">
  <condition attribute="statuscode" operator="eq" value="1" />
  <condition attribute="statuscode" operator="eq" value="290560001" />
</filter>
<!-- Mixed AND + OR nesting:
     (statecode = 0) AND (country = 'US' OR country = 'Canada') -->
<filter type="and">
  <condition attribute="statecode" operator="eq" value="0" />
  <filter type="or">
    <condition attribute="address1_country" operator="eq" value="United States" />
    <condition attribute="address1_country" operator="eq" value="Canada" />
  </filter>
</filter>
<!-- Real pattern: projects that are completed OR cancelled OR have no tasks -->
<fetch>
  <entity name="project">
    <attribute name="name" />
    <attribute name="statuscode" />
    <filter type="and">
      <condition attribute="ownerid" operator="eq-userid" />
      <filter type="or">
        <condition attribute="statuscode" operator="eq" value="290560002" />
        <condition attribute="statuscode" operator="eq" value="290560003" />
        <condition entityname="tsk" attribute="taskid" operator="null" />
      </filter>
    </filter>
    <link-entity name="task" from="regardingobjectid" to="projectid"
                 link-type="outer" alias="tsk">
      <attribute name="taskid" />
    </link-entity>
  </entity>
</fetch>
Watch out: The entityname attribute on a <condition> lets you filter on a joined entity's columns from the parent filter. This only works with outer joins and is how you implement "NOT EXISTS" patterns. The value of entityname must match the alias on the <link-entity>.

5. Link-Entities (Joins)

FetchXML joins are called link-entities. They work like SQL JOINs and can be nested multiple levels deep.

link-type values

link-type — SQL equivalent — When to use
  • inner — INNER JOIN — Only return parent records that have a matching child. Default if omitted.
  • outer — LEFT OUTER JOIN — Return parent records even when no child exists. Required for "not exists" patterns.

Basic inner join

<!-- Contacts and their parent Account name -->
<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="emailaddress1" />
    <link-entity name="account" from="accountid" to="parentcustomerid"
                 link-type="inner" alias="acc">
      <attribute name="name" alias="accountname" />
      <attribute name="telephone1" alias="accountphone" />
    </link-entity>
  </entity>
</fetch>

Multi-level joins

<!-- Three levels deep: Task → Stage → Project -->
<fetch>
  <entity name="task">
    <attribute name="subject" />
    <attribute name="statuscode" />
    <link-entity name="stage" from="stageid" to="stageid"
                 link-type="inner" alias="stg">
      <attribute name="name" alias="stagename" />
      <link-entity name="project" from="projectid" to="projectid"
                   link-type="inner" alias="proj">
        <attribute name="name" alias="projectname" />
        <attribute name="statuscode" alias="projectstatus" />
      </link-entity>
    </link-entity>
  </entity>
</fetch>

Filtering on a joined entity

<!-- Contacts whose account is in a specific industry -->
<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <link-entity name="account" from="accountid" to="parentcustomerid"
                 link-type="inner" alias="acc">
      <attribute name="name" alias="accountname" />
      <filter>
        <condition attribute="industrycode" operator="eq" value="7" />
      </filter>
    </link-entity>
  </entity>
</fetch>

Cross-entity condition (entityname attribute)

This pattern filters on a linked entity's column from the parent entity's filter block. Most commonly used for "NOT EXISTS" — find parent records where no child row matches.

<!-- Accounts with NO contacts -->
<fetch distinct="true">
  <entity name="account">
    <attribute name="name" />
    <filter>
      <condition entityname="con" attribute="contactid" operator="null" />
    </filter>
    <link-entity name="contact" from="parentcustomerid" to="accountid"
                 link-type="outer" alias="con">
      <attribute name="contactid" />
    </link-entity>
  </entity>
</fetch>
Rule: Cross-entity conditions (entityname=) always require the join to be link-type="outer". An inner join would already exclude the non-matching rows before the NULL check runs.

6. Aggregate Queries

Set aggregate="true" on <fetch> to switch to aggregate mode. In this mode, every <attribute> must declare an aggregate function or be a groupby column.

Aggregate functions

Function — Description
  • count — Count of all rows (including nulls)
  • countcolumn — Count of non-null values in a column
  • sum — Sum of a numeric column
  • avg — Average of a numeric column
  • min — Minimum value
  • max — Maximum value
<!-- Count contacts per account -->
<fetch aggregate="true">
  <entity name="contact">
    <attribute name="contactid" aggregate="count" alias="contactcount" />
    <attribute name="parentcustomerid" groupby="true" alias="accountid" />
  </entity>
</fetch>
<!-- Sum and average of estimated revenue grouped by industry -->
<fetch aggregate="true">
  <entity name="opportunity">
    <attribute name="estimatedvalue" aggregate="sum" alias="totalrevenue" />
    <attribute name="estimatedvalue" aggregate="avg" alias="avgrevenue" />
    <attribute name="opportunityid" aggregate="count" alias="dealcount" />
    <attribute name="industrycode" groupby="true" alias="industry" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>
<!-- Distinct count: unique companies with open cases -->
<fetch aggregate="true">
  <entity name="incident">
    <attribute name="customerid" aggregate="countcolumn" distinct="true" alias="uniquecustomers" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>

Grouping by date parts

<!-- Cases created per month -->
<fetch aggregate="true">
  <entity name="incident">
    <attribute name="incidentid" aggregate="count" alias="casecount" />
    <attribute name="createdon" groupby="true" dategrouping="month" alias="createmonth" />
    <attribute name="createdon" groupby="true" dategrouping="year" alias="createyear" />
    <order alias="createyear" descending="false" />
    <order alias="createmonth" descending="false" />
  </entity>
</fetch>
Note: Aggregate FetchXML queries are limited to 50,000 records — if the aggregate would be computed over more rows, Dataverse throws an AggregateQueryRecordLimit error. For reporting over very large datasets, consider Azure Synapse Link or Dataverse long-term data export instead.

7. Paging

Dataverse returns a maximum of 5,000 records per request. For anything larger you must page through results using either simple page numbering or paging cookies.

Simple page/count attributes

<!-- Page 2, 100 records per page -->
<fetch page="2" count="100">
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="createdon" />
    <order attribute="createdon" descending="true" />
  </entity>
</fetch>

Paging cookies (correct approach for large datasets)

Simple page numbering degrades at scale because the database still scans from the beginning every time. Paging cookies pass a server-side bookmark that picks up exactly where the previous page ended — far more efficient.

The response from Dataverse includes a @Microsoft.Dynamics.CRM.fetchxmlpagingcookie annotation. Decode it and inject it into the next request:

<!-- First request: no cookie -->
<fetch page="1" count="5000">
  <entity name="contact">
    <attribute name="fullname" />
    <order attribute="contactid" descending="false" />
  </entity>
</fetch>

<!-- Subsequent requests: inject the decoded cookie -->
<fetch page="2" count="5000"
  paging-cookie="&lt;cookie page=&quot;1&quot;&gt;&lt;contactid ... /&gt;&lt;/cookie&gt;">
  <entity name="contact">
    <attribute name="fullname" />
    <order attribute="contactid" descending="false" />
  </entity>
</fetch>
Important: Always include an <order> on a unique field (like the primary key) when paging. Without a deterministic sort order, you can get duplicate or skipped records across pages.
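The cookie Dataverse returns is itself XML, so it has to be attribute-escaped before it can sit inside paging-cookie="..." (that is what produces the &lt; / &quot; entities in the example above). A minimal sketch in JavaScript, assuming the raw cookie has already been extracted and decoded from the fetchxmlpagingcookie annotation:

```javascript
// Sketch: embed a raw paging-cookie XML string into the next <fetch> request.
// Assumes rawCookie is already plain XML (decoded from the annotation).
function escapeForAttribute(rawCookie) {
  return rawCookie
    .replace(/&/g, '&amp;')   // ampersands first, or later entities get double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function buildPagedFetch(innerXml, page, count, rawCookie) {
  const cookieAttr = rawCookie
    ? ` paging-cookie="${escapeForAttribute(rawCookie)}"`
    : '';
  return `<fetch page="${page}" count="${count}"${cookieAttr}>${innerXml}</fetch>`;
}
```

For example, buildPagedFetch('<entity name="contact"/>', 2, 5000, '<cookie page="1"/>') yields a fetch element with the same escaped paging-cookie attribute shape as the request shown above.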

Paging in C# (plugin or tool)

// C# paging loop example
string fetchXml = @"<fetch count='5000'>
  <entity name='contact'>
    <attribute name='fullname' />
    <order attribute='contactid' />
  </entity>
</fetch>";

var allRecords = new List<Entity>();
int pageNumber = 1;
string pagingCookie = null;

while (true)
{
    string pagedFetch = PatchFetchXmlPage(fetchXml, pageNumber, pagingCookie);
    var result = service.RetrieveMultiple(new FetchExpression(pagedFetch));
    allRecords.AddRange(result.Entities);

    if (!result.MoreRecords) break;

    pagingCookie = result.PagingCookie;
    pageNumber++;
}

// Helper: inject the page number and paging cookie into the <fetch> root.
// Requires System.Xml.Linq; XDocument escapes the cookie XML automatically.
static string PatchFetchXmlPage(string fetchXml, int page, string cookie)
{
    var doc = XDocument.Parse(fetchXml);
    doc.Root.SetAttributeValue("page", page);
    if (!string.IsNullOrEmpty(cookie))
        doc.Root.SetAttributeValue("paging-cookie", cookie);
    return doc.ToString();
}

8. FetchXML in Power Automate

The Dataverse connector's List rows action accepts a FetchXML query directly. This unlocks joins, aggregates, and paging that you simply cannot do with OData filter expressions.

Basic setup

  1. Add action: Microsoft Dataverse → List rows
  2. Set Table Name to your entity
  3. Expand Advanced options
  4. Paste your FetchXML into the Fetch Xml Query field
<fetch top="500" no-lock="true">
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="emailaddress1" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>

Injecting dynamic values safely

In Power Automate, FetchXML is a string. You can inject dynamic expressions using @{expression} syntax. The critical rule: never inject user-controlled input directly — always validate or use GUIDs from trigger/action outputs.

<!-- Safe: injecting a trigger GUID value -->
<fetch>
  <entity name="task">
    <attribute name="subject" />
    <attribute name="statuscode" />
    <filter>
      <condition attribute="regardingobjectid" operator="eq"
        value="@{triggerOutputs()?['body/accountid']}" />
      <condition attribute="createdon" operator="last-x-days"
        value="@{variables('DaysBack')}" />
    </filter>
  </entity>
</fetch>
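The same discipline can be enforced in code before a fetch string ever reaches Dataverse. A sketch of the validation idea (safeGuid and xmlEscape are hypothetical helpers, not part of any Dataverse SDK): GUIDs get a strict pattern check, and anything else is XML-escaped before being spliced into the query:

```javascript
// Sketch: validate values before splicing them into a FetchXML string.
const GUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Reject anything that is not a well-formed GUID.
function safeGuid(value) {
  if (!GUID_RE.test(value)) throw new Error('Not a GUID: ' + value);
  return value.toLowerCase();
}

// Escape free-text values so they cannot break out of the XML.
function xmlEscape(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```

The same two checks translate directly to flow expressions: restrict GUID inputs to trigger/action outputs, and run free text through a replace() chain before concatenating it into the fetch string.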

Dynamic IN clause from an array variable

<!-- Step 1: Build <value> tags using Select action -->
<!-- Select "from": array of IDs, "map": concat('<value>', item(), '</value>') -->

<!-- Step 2: Join them and inject into FetchXML -->
<fetch>
  <entity name="project">
    <attribute name="name" />
    <filter>
      <condition attribute="projectid" operator="in">
        @{join(body('Select_ID_values'), '')}
      </condition>
    </filter>
  </entity>
</fetch>
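The two steps above reduce to: map each ID to a <value> element, join, and splice the result into the condition. A sketch of the same logic as a plain function (buildInCondition is a hypothetical helper — in a real flow the Select action and join() expression do this work):

```javascript
// Sketch: build the <value> children of an "in" condition from an ID array,
// mirroring the Select + join steps above.
function buildInCondition(attribute, ids) {
  const values = ids.map(id => `<value>${id}</value>`).join('');
  return `<condition attribute="${attribute}" operator="in">${values}</condition>`;
}
```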

Handling paging in Power Automate

The List rows action does not auto-page unless you enable its Pagination setting. For manual control, use a Do Until loop: read the skip token from the response's @odata.nextLink and pass it into the next request's Skip token parameter to fetch the next batch.

Tip: For flows processing large volumes, consider moving paging logic to an Azure Function or plugin. Power Automate's run-duration and per-action timeout limits make deep paging expensive.
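Outside a flow — for example in a script calling the Web API — the same Do Until shape is just a loop. A sketch with an injected fetchPage callback (hypothetical, and synchronous here for brevity; a real implementation would await an HTTP call) that returns { rows, moreRecords, pagingCookie }:

```javascript
// Sketch: generic paged retrieval, mirroring the Do Until pattern.
// fetchPage(pageNumber, cookie) is a caller-supplied function that wraps
// one Dataverse request and returns { rows, moreRecords, pagingCookie }.
function retrieveAll(fetchPage) {
  const all = [];
  let page = 1;
  let cookie = null;
  for (;;) {
    const res = fetchPage(page, cookie);
    all.push(...res.rows);
    if (!res.moreRecords) break;  // stop when Dataverse reports no more pages
    cookie = res.pagingCookie;    // bookmark for the next request
    page++;
  }
  return all;
}
```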

9. FetchXML in Plugins / C#

Inside a Dataverse plugin or any code using IOrganizationService, pass your FetchXML to RetrieveMultiple via a FetchExpression.

Basic retrieval

string fetchXml = @"
<fetch top='50'>
  <entity name='contact'>
    <attribute name='fullname' />
    <attribute name='emailaddress1' />
    <filter>
      <condition attribute='statecode' operator='eq' value='0' />
    </filter>
  </entity>
</fetch>";

EntityCollection results = service.RetrieveMultiple(new FetchExpression(fetchXml));

foreach (Entity entity in results.Entities)
{
    string name = entity.GetAttributeValue<string>("fullname");
    string email = entity.GetAttributeValue<string>("emailaddress1");
}

Reading aliased join attributes

When you alias an attribute from a joined entity, it comes back in the Entity as an AliasedValue.

// FetchXML has: <attribute name="name" alias="accountname" /> on the link-entity
var aliasedValue = entity.GetAttributeValue<AliasedValue>("accountname");
string accountName = aliasedValue?.Value as string;

Reading aggregate results

// FetchXML aggregate result: alias="contactcount"
var aliasedCount = entity.GetAttributeValue<AliasedValue>("contactcount");
int count = Convert.ToInt32(aliasedCount?.Value);

Building FetchXML dynamically in C#

// Use XElement for safe construction — no string concatenation
var fetch = new XElement("fetch", new XAttribute("top", 100),
    new XElement("entity", new XAttribute("name", "contact"),
        new XElement("attribute", new XAttribute("name", "fullname")),
        new XElement("filter",
            new XElement("condition",
                new XAttribute("attribute", "statecode"),
                new XAttribute("operator", "eq"),
                new XAttribute("value", "0")
            )
        )
    )
);

string fetchXml = fetch.ToString();
var results = service.RetrieveMultiple(new FetchExpression(fetchXml));

10. FetchXML via Web API

You can execute FetchXML directly against the Dataverse Web API by passing it as a URL query parameter. This is useful for integrations, testing from Postman, or server-side code calling the API directly.

Endpoint pattern

GET https://<org>.api.crm.dynamics.com/api/data/v9.2/<entity_plural>?fetchXml=<url-encoded-xml>

URL-encode your FetchXML

<!-- Raw FetchXML -->
<fetch top="10">
  <entity name="contact">
    <attribute name="fullname" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>

<!-- URL encoded (key characters) -->
<    →  %3C
>    →  %3E
"    →  %22
/    →  %2F
=    →  %3D

Full example in JavaScript

const fetchXml = `<fetch top="10">
  <entity name="contact">
    <attribute name="fullname" />
    <filter><condition attribute="statecode" operator="eq" value="0" /></filter>
  </entity>
</fetch>`;

const encoded = encodeURIComponent(fetchXml);
const url = `${orgUrl}/api/data/v9.2/contacts?fetchXml=${encoded}`;

const response = await fetch(url, {
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'OData-MaxVersion': '4.0',
    'OData-Version': '4.0',
    'Accept': 'application/json'
  }
});

const data = await response.json();
console.log(data.value); // array of records

Request formatted values

To get the display label of option sets (formatted values) in the response, add a Prefer header:

Prefer: odata.include-annotations="OData.Community.Display.V1.FormattedValue"

Formatted values then appear in the response as fieldname@OData.Community.Display.V1.FormattedValue.
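Reading those annotations is a one-liner worth wrapping. A small sketch, assuming row is one element of data.value from the JavaScript example above:

```javascript
// Sketch: pull the formatted label for a field from a Web API response row,
// falling back to the raw value when the annotation is absent.
function formattedValue(row, field) {
  const key = field + '@OData.Community.Display.V1.FormattedValue';
  return key in row ? row[key] : row[field];
}
```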

11. XrmToolBox: FetchXML Builder

FetchXML Builder (by Jonas Rapp) is the fastest way to build, test, and debug FetchXML queries against a live environment. Every serious Dataverse developer should have it installed.

Key features to use

  • Visual query builder — drag-and-drop entities and fields, set filters via dropdowns. Good starting point even if you'll hand-edit the XML.
  • Execute and preview — run the query and see results instantly against your connected environment. No need to deploy a flow or plugin to test.
  • View as SQL — converts your FetchXML to approximate T-SQL. Useful for understanding what's happening under the hood and for writing the equivalent SQL for reporting.
  • Copy as code — exports the query as ready-to-use C#, JavaScript, or Power Automate-compatible string.
  • Aggregate mode toggle — switches the builder into aggregate mode with a checkbox.
  • Paging cookie test — lets you page through results directly in the tool.

Workflow tip

  1. Build the query visually in FetchXML Builder
  2. Switch to XML view and refine edge cases (nested filters, cross-entity conditions)
  3. Execute and validate results against real data
  4. Copy the final XML into your flow, plugin, or API call
Tip: FetchXML Builder also supports the no-lock hint and shows row counts per page. Use it to benchmark query performance before committing to production code.

12. Common Real-World Patterns

Pattern 1: Find records with no related records (NOT EXISTS)

<!-- Accounts with zero contacts -->
<fetch distinct="true">
  <entity name="account">
    <attribute name="name" />
    <attribute name="accountid" />
    <filter>
      <condition entityname="con" attribute="contactid" operator="null" />
    </filter>
    <link-entity name="contact" from="parentcustomerid" to="accountid"
                 link-type="outer" alias="con">
      <attribute name="contactid" />
    </link-entity>
  </entity>
</fetch>

Pattern 2: Get the latest record per group

<!-- Most recent case per customer -->
<fetch aggregate="true">
  <entity name="incident">
    <attribute name="customerid" groupby="true" alias="customerid" />
    <attribute name="createdon" aggregate="max" alias="latestcase" />
  </entity>
</fetch>

Pattern 3: Multi-condition OR across linked entities

<!-- Projects that are complete OR cancelled OR have stages with no tasks -->
<fetch distinct="true" no-lock="true">
  <entity name="project">
    <attribute name="name" />
    <attribute name="statuscode" />
    <filter type="and">
      <condition attribute="retailerid" operator="eq"
        value="abf8e77a-845b-f011-bec1-6045bda966da" />
      <filter type="or">
        <condition attribute="statuscode" operator="eq" value="290560003" />
        <condition attribute="statuscode" operator="eq" value="290560002" />
        <condition entityname="tsk" attribute="taskid" operator="null" />
      </filter>
    </filter>
    <link-entity name="stage" from="projectid" to="projectid"
                 link-type="outer" alias="stg">
      <link-entity name="task" from="stageid" to="stageid"
                   link-type="outer" alias="tsk">
        <attribute name="taskid" />
      </link-entity>
    </link-entity>
  </entity>
</fetch>

Pattern 4: Hierarchy query — tasks for a specific project via stages

<!-- All tasks under a project by traversing Project → Stage → Task -->
<fetch>
  <entity name="task">
    <attribute name="subject" />
    <attribute name="statuscode" />
    <attribute name="ownerid" />
    <link-entity name="stage" from="stageid" to="stageid"
                 link-type="inner" alias="stg">
      <attribute name="name" alias="stagename" />
      <link-entity name="project" from="projectid" to="projectid"
                   link-type="inner" alias="proj">
        <attribute name="name" alias="projectname" />
        <filter>
          <condition attribute="projectid" operator="eq"
            value="3fa85f64-5717-4562-b3fc-2c963f66afa6" />
        </filter>
      </link-entity>
    </link-entity>
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>

Pattern 5: Active records modified in the last N days, owned by current user

<fetch top="200">
  <entity name="opportunity">
    <attribute name="name" />
    <attribute name="estimatedvalue" />
    <attribute name="modifiedon" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
      <condition attribute="ownerid" operator="eq-userid" />
      <condition attribute="modifiedon" operator="last-x-days" value="14" />
    </filter>
    <order attribute="modifiedon" descending="true" />
  </entity>
</fetch>

Pattern 6: Count of records per status with a join condition

<!-- Task count per project, only for active projects -->
<fetch aggregate="true">
  <entity name="task">
    <attribute name="taskid" aggregate="count" alias="taskcount" />
    <link-entity name="stage" from="stageid" to="stageid"
                 link-type="inner" alias="stg">
      <link-entity name="project" from="projectid" to="projectid"
                   link-type="inner" alias="proj">
        <attribute name="projectid" groupby="true" alias="projectid" />
        <attribute name="name" groupby="true" alias="projectname" />
        <filter>
          <condition attribute="statecode" operator="eq" value="0" />
        </filter>
      </link-entity>
    </link-entity>
  </entity>
</fetch>

13. Performance Tips

1. Never use a leading wildcard in LIKE

<!-- BAD: forces a full table scan -->
<condition attribute="name" operator="like" value="%contoso%" />

<!-- GOOD: index-seekable -->
<condition attribute="name" operator="like" value="contoso%" />
<!-- Or use begins-with explicitly -->
<condition attribute="name" operator="begins-with" value="contoso" />

2. Always use no-lock on read-only reporting queries

<fetch no-lock="true">
  <!-- Adds NOLOCK hint. Prevents blocking on busy tables.
       Only use for reporting — not for business-critical transactional reads. -->
</fetch>

3. Select only the columns you need

<!-- BAD: returns all columns -->
<entity name="contact">
  <!-- no attribute elements = SELECT * -->
</entity>

<!-- GOOD: return only what you use -->
<entity name="contact">
  <attribute name="fullname" />
  <attribute name="emailaddress1" />
</entity>

4. Filter as early as possible — push filters to the lowest join level

<!-- Filter on the join entity directly, not at the top -->
<link-entity name="account" from="accountid" to="parentcustomerid"
             link-type="inner" alias="acc">
  <filter>
    <!-- This runs during the join, not after -->
    <condition attribute="statecode" operator="eq" value="0" />
  </filter>
</link-entity>

5. Use top and paging — never retrieve all records speculatively

<!-- Always bound your queries in automation flows -->
<fetch top="5000" no-lock="true">
  <entity name="contact">
    <attribute name="fullname" />
  </entity>
</fetch>

6. Avoid deeply nested joins unless necessary

Each additional join level multiplies the SQL complexity. If you're going 4+ levels deep, consider whether a Dataverse Custom API or pre-computed rollup field could reduce query depth.

7. Use distinct only when you have to

distinct="true" forces a DISTINCT on the SQL side which means sorting the entire result set. Only use it when outer joins actually produce duplicates that matter for your output.

8. Index-backed fields filter faster

Dataverse automatically indexes primary keys, lookup fields, and some system fields like statecode, statuscode, ownerid, createdon, and modifiedon. Filtering on custom text fields without enabling a custom index will always be slower.

14. FetchXML vs OData

Capability — FetchXML — OData ($filter)
  • Simple single-entity queries — ✅ — ✅ (simpler syntax)
  • Multi-level joins — ✅ — ❌ ($expand limited to 1 level)
  • Outer joins / NOT EXISTS — ✅ — ❌
  • Aggregate functions (sum, count, avg) — ✅ — ❌
  • Group by with date parts — ✅ — ❌
  • Dynamic date operators (last-x-days, this-week) — ✅ — ❌
  • Cross-entity conditions (entityname=) — ✅ — ❌
  • Paging with cookies — ✅ — ✅ (@odata.nextLink)
  • Readable / writeable by non-developers — ❌ (verbose XML) — ✅ (concise URL params)
  • Works in Power Automate List Rows — ✅ — ✅
  • Works in Canvas App formula bar — ❌ — Limited
  • Direct URL testing (Postman, browser) — ⚠️ (URL-encode required) — ✅ (natural params)
  • Formatted value annotations — ✅ (via API header) — ✅ (via API header)
  • Tool support (FetchXML Builder) — ✅ (excellent) — ❌
  • NOLOCK hint — ✅ — ❌

Decision rule

  • Use OData for simple filters on a single table, quick API tests, Canvas App formulas.
  • Use FetchXML for anything involving joins, aggregates, dynamic dates, NOT EXISTS, or paging beyond page 1 of a large dataset.
  • In plugins and C# code, prefer FetchXML — it's more predictable than LINQ and gives you full control over what the database executes.

Quick Reference Cheat Sheet

fetch attributes

Attribute — Purpose
  • top="N" — Limit result count
  • distinct="true" — Remove duplicate rows
  • no-lock="true" — NOLOCK hint for read queries
  • aggregate="true" — Enable aggregate mode
  • page="N" — Page number
  • count="N" — Page size (max 5000)
  • paging-cookie="..." — Server-side cursor for efficient paging

Comparison operators

Operator — Meaning
  • eq / ne — Equal / Not equal
  • gt / lt — Greater than / Less than
  • ge / le — Greater or equal / Less or equal
  • like / not-like — String pattern match (% wildcard)
  • begins-with — Index-safe prefix match
  • in / not-in — Set membership (use <value> children)
  • between — Range (two <value> children)
  • null / not-null — IS NULL / IS NOT NULL

Dynamic date operators

Operator — Notes
  • today / yesterday / tomorrow — No value needed
  • this-week / this-month / this-year — No value needed
  • last-seven-days — No value needed
  • last-x-days / next-x-days — value="N" required
  • last-x-months / next-x-months — value="N" required
  • on / on-or-before / on-or-after — value = YYYY-MM-DD

Aggregate functions

Function — Usage
  • count — Row count including nulls
  • countcolumn — Non-null value count
  • sum — Numeric sum
  • avg — Numeric average
  • min / max — Min / Max value
  • groupby="true" — Group by this column (no aggregate function)
  • dategrouping — Values: day, week, month, quarter, year

link-entity

Attribute — Values / Notes
  • name — Logical name of the entity to join
  • from — Field on the child entity
  • to — Field on the parent entity
  • link-type — inner (default) or outer
  • alias — Required for accessing attributes and cross-entity conditions

Cross-entity condition (NOT EXISTS)

<filter>
  <condition entityname="alias" attribute="pkfield" operator="null" />
</filter>
<link-entity ... link-type="outer" alias="alias">
  <attribute name="pkfield" />
</link-entity>

Skeleton query

<fetch top="500" distinct="false" no-lock="true">
  <entity name="entity_logical_name">
    <attribute name="field1" />
    <attribute name="field2" />
    <order attribute="field1" descending="false" />
    <filter type="and">
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
    <link-entity name="related_entity" from="fk_field" to="pk_field"
                 link-type="inner" alias="rel">
      <attribute name="name" alias="relatedname" />
    </link-entity>
  </entity>
</fetch>


Microsoft Dataverse Advanced — Complete Guide

Plugins · PCF Controls · Web API · Business Rules · Security · Performance · ALM · Scenarios · Cheat Sheet


Table of Contents

  1. Core Concepts — Advanced Dataverse
  2. Plugins — Deep Dive
  3. PCF Controls — Deep Dive
  4. Dataverse Web API
  5. Business Logic & Automation
  6. Security & ALM
  7. Scenario-Based Questions
  8. Cheat Sheet — Quick Reference

1. Core Concepts — Advanced Dataverse

What is Microsoft Dataverse and what makes it enterprise-grade?

Microsoft Dataverse is the cloud-based data platform underpinning the entire Power Platform and Dynamics 365. It provides a structured, governed data store with built-in business logic, security, and extensibility.

What makes Dataverse enterprise-grade:

Relational data model:
→ Tables (entities), Columns (attributes), Relationships
→ Standard tables: Account, Contact, Lead, Opportunity, Case, etc.
→ Custom tables: create your own business-specific tables
→ Virtual tables: surface external data as Dataverse tables (no copy)

Business logic layers:
→ Column-level: data type, format, required, default value
→ Business rules: no-code field validation and show/hide logic
→ Plugins: C# server-side event handlers (full .NET access)
→ Custom APIs: define your own messages and operations
→ Workflows (deprecated) / Power Automate: process automation

Security model:
→ Role-based: security roles define table/column/record access
→ Business units: hierarchical org structure for data isolation
→ Field security profiles: restrict specific column access
→ Row-level security: owner, sharing, team-based record access

Platform services:
→ Audit logging: track every record change
→ Duplicate detection: prevent duplicate records
→ Change tracking: delta sync for external integration
→ File/Image columns: native binary storage
→ Calculated/Rollup columns: formula-based values

What is the Dataverse event pipeline?

The event pipeline is the ordered sequence in which Dataverse processes operations. Understanding it is essential for plugin and custom API development.

Dataverse Event Pipeline (for synchronous plugins):

1. Pre-Validation stage (Stage 10):
   → Runs BEFORE input validation
   → Outside the database transaction
   → Can stop the operation before it begins
   → Use for: permission checks, prerequisite validation
   → Rollback: NOT rolled back if later steps fail

2. Pre-Operation stage (Stage 20):
   → Runs AFTER input validation, BEFORE database write
   → Inside the database transaction
   → Target entity in context: BEFORE values (can modify Target)
   → Use for: default values, data enrichment, pre-write validation
   → Rollback: YES — rolled back if main operation fails

3. Main Operation:
   → The actual database write (Create/Update/Delete/etc.)
   → Dataverse performs the core operation

4. Post-Operation stage (Stage 40):
   → Runs AFTER database write
   → Inside the database transaction (synchronous)
   → Target entity has the AFTER values (incl. new record ID)
   → Use for: related record creation, notifications, downstream updates
   → Rollback: YES (synchronous) / NO (asynchronous)

Synchronous vs Asynchronous:
Synchronous: executes inline, user waits, part of transaction
Asynchronous: queued for later execution, user continues, NOT in transaction

Pre-Image vs Post-Image:
Pre-Image:  snapshot of record BEFORE the operation (registered separately)
Post-Image: snapshot of record AFTER the operation (registered separately)
Use: compare old vs new values in Update plugins

What are Custom APIs in Dataverse?

Custom APIs allow you to define your own messages (operations) in Dataverse — creating reusable, callable actions that can be invoked from Power Automate, Power Apps, PCF, Web API, or Copilot Studio.

Custom API vs Custom Actions (deprecated approach):
Custom Actions (older): defined in solution, limited capabilities
Custom API (modern):    richer definition, request/response params,
                        plugin association, discoverable via metadata

Custom API components:
Name:             unique message name (e.g., contoso_CalculateDiscount)
Binding type:     Global (no entity) / Entity / Entity Collection
Request params:   input parameters (string, int, entity, etc.)
Response props:   output values returned to caller
Plugin type:      optional C# plugin implementing the logic
Enabled for:      Workflow / Power Automate / Plug-in Step / Custom Processing

Calling Custom API:
// Web API:
POST https://contoso.crm.dynamics.com/api/data/v9.2/contoso_CalculateDiscount
{
  "OrderAmount": 5000,
  "CustomerTier": "Gold"
}

// Power Automate:
Action: "Perform an unbound action" OR "Perform a bound action"
  → Select custom API from action dropdown

// C# SDK:
var request = new OrganizationRequest("contoso_CalculateDiscount");
request["OrderAmount"] = 5000m;
request["CustomerTier"] = "Gold";
var response = service.Execute(request);
decimal discount = (decimal)response["DiscountPercentage"];

2. Plugins — Deep Dive

What is a Dataverse Plugin and what can it do?

A Dataverse Plugin is a C# class implementing IPlugin that executes server-side business logic in response to Dataverse events (Create, Update, Delete, Retrieve, Associate, custom messages).

using Microsoft.Xrm.Sdk;
using System;

public class AccountPostCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // 1. Get services from the service provider
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var serviceFactory = (IOrganizationServiceFactory)
            serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var tracingService = (ITracingService)
            serviceProvider.GetService(typeof(ITracingService));

        // 2. Validate context — defensive programming
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity)
        {
            var target = (Entity)context.InputParameters["Target"];

            // 3. Validate entity type
            if (target.LogicalName != "account")
                return;

            try
            {
                tracingService.Trace("AccountPostCreatePlugin: starting");

                // 4. Get the Org service (running as system or calling user)
                var service = serviceFactory
                    .CreateOrganizationService(context.UserId);

                // 5. Business logic — create a default Contact for new Account
                var defaultContact = new Entity("contact");
                defaultContact["firstname"] = "Primary";
                defaultContact["lastname"] = "Contact";
                defaultContact["parentcustomerid"] =
                    new EntityReference("account", target.Id);

                service.Create(defaultContact);

                tracingService.Trace("Default contact created successfully");
            }
            catch (Exception ex)
            {
                throw new InvalidPluginExecutionException(
                    $"AccountPostCreatePlugin failed: {ex.Message}", ex);
            }
        }
    }
}

What are the key Plugin Registration settings?

Plugin Registration Tool (PRT) settings:

Message:     Dataverse operation that triggers the plugin
  Common: Create, Update, Delete, Retrieve, RetrieveMultiple,
          Associate, Disassociate, Assign, SetState, Merge,
          GrantAccess, ModifyAccess, RevokeAccess

Primary Entity: which table the message applies to
  e.g., account, contact, incident, custom_table

Stage:       when in the pipeline the plugin runs
  10 = Pre-Validation (outside transaction)
  20 = Pre-Operation (inside transaction, before write)
  40 = Post-Operation (inside transaction for sync,
                       outside for async)

Execution Mode:
  Synchronous: runs inline, user waits, in transaction
  Asynchronous: queued, user continues, out of transaction

Filtering Attributes (for Update):
  Specify which columns trigger the plugin
  e.g., only trigger when "revenue" or "accountcategorycode" changes
  Without filtering: plugin fires on ANY field update (expensive!)

Rank: order of execution when multiple plugins on same message/stage
  Lower rank = executes first (rank 1 before rank 2)

Secure/Unsecure Configuration:
  Config strings passed to plugin constructor at registration
  Secure: encrypted, only visible to admins
  Unsecure: visible in solution, use for non-sensitive config

Images:
  Pre-Image: snapshot before the operation (register separately)
  Post-Image: snapshot after the operation
  Use: compare old vs new values in Update plugins

What are best practices for Plugin development?

1. Always check context defensively:
if (!context.InputParameters.Contains("Target")) return;
if (context.MessageName != "Create") return;
if (context.PrimaryEntityName != "account") return;

2. Use Filtering Attributes on Update:
Register with specific columns — don't fire on every field update
"revenue,accountcategorycode,primarycontactid"

3. Keep synchronous stages lightweight:
Synchronous plugins block the user, and stages 20/40 hold the database
transaction open while they run
Move heavyweight logic (external calls, bulk updates) to Post-Operation async

4. Use Tracing extensively:
tracingService.Trace("Entering: {0}", nameof(MyPlugin));
tracingService.Trace("Target ID: {0}", target.Id);
// Trace logs appear in plugin execution exception details
// Essential for debugging — plugins have no debugger in prod

5. Throw InvalidPluginExecutionException for business errors:
throw new InvalidPluginExecutionException(
    "Cannot close case: missing resolution description");
// Shows as user-friendly error in the UI

6. Never query all columns — use ColumnSet:
var account = service.Retrieve("account", accountId,
    new ColumnSet("name", "revenue", "primarycontactid"));
// NOT: new ColumnSet(true) — fetches all columns (expensive)

7. Avoid N+1 queries — use RetrieveMultiple with FetchXML:
var query = new FetchExpression("<fetch>...</fetch>");
var results = service.RetrieveMultiple(query);
// NOT: loop + individual Retrieve calls

8. Plugin Depth limit:
Context.Depth tracks recursive calls
if (context.Depth > 1) return; // Prevent infinite loops
// Plugin A creates record → Plugin B fires → loops back to A

9. Use Dependency Injection for testability:
Constructor injection: pass IOrganizationService for unit testing
Avoid direct new() instantiation of service objects

10. Register in Solutions for ALM:
Use Plugin Registration Tool to register steps as solution components
Never register directly to production — always promote via solution

What is Plugin Profiler and how do you debug plugins?

Local debugging with Plugin Profiler:

1. Install Plugin Registration Tool (PRT)
2. In PRT: Install Profiler → deploys profiler plugin to environment
3. Select your plugin step → "Start Profiling"
4. Execute the operation in the app (triggers plugin)
5. Profiler captures: execution context snapshot to clipboard/file
6. In Visual Studio: attach debugger to "PluginRegistration.exe"
7. PRT: Replay Profile → loads context into local plugin execution
8. Breakpoints hit in Visual Studio → debug with real production data

Tracing for production debugging:
tracingService.Trace("Processing account: {0}", accountId);
→ Trace appears in: Plugin Execution Exception detail
→ Enable plugin trace logs: Settings → Administration →
  System Settings → Customization → Enable logging to plug-in trace log
→ View traces: Settings → Plug-in Trace Log

Unit testing plugins:
// Mock IOrganizationService using Moq:
var mockService = new Mock<IOrganizationService>();
mockService.Setup(s => s.Retrieve("account", It.IsAny<Guid>(),
    It.IsAny<ColumnSet>()))
    .Returns(new Entity("account") { Id = Guid.NewGuid() });

var plugin = new MyPlugin(mockService.Object);
// Test plugin logic without Dataverse connection

3. PCF Controls — Deep Dive

What is a PCF Control and when do you use one?

PCF (Power Apps Component Framework) controls are custom UI components built with TypeScript/JavaScript that replace or augment the standard fields and views in model-driven apps, canvas apps, and Power Pages.

When to use PCF:
→ Standard Dataverse controls don't meet UX requirements
→ Need third-party library integration (charts, maps, editors)
→ Complex inline editing or validation logic
→ Custom visualisation of data (progress bars, star ratings, etc.)
→ Reusable branded UI components across multiple apps

PCF control types:
Field control:
→ Replaces the rendering of a single field on a form
→ Gets/sets the column value
→ Examples: colour picker, rich text editor, star rating,
            formatted phone number display

Dataset control:
→ Replaces an entire subgrid or view (list of records)
→ Gets the collection of records from the dataset
→ Examples: custom calendar view, Kanban board,
            map view of address records, custom grid

PCF vs Canvas App component:
PCF:         TypeScript + React/vanilla JS, compiled, runs natively
Canvas comp: Power Fx + Power Apps controls, low-code, easier
Use PCF:     complex rendering, performance-critical, third-party libs
Use canvas:  maker-level complexity, simple reusable components

What is the PCF Control lifecycle?

// PCF Control implementing StandardControl interface
import { IInputs, IOutputs } from "./generated/ManifestTypes";

export class StarRatingControl
    implements ComponentFramework.StandardControl<IInputs, IOutputs> {

    private _container: HTMLDivElement;
    private _context: ComponentFramework.Context<IInputs>;
    private _notifyOutputChanged: () => void;
    private _currentRating: number;

    // init: called ONCE when control is loaded
    public init(
        context: ComponentFramework.Context<IInputs>,
        notifyOutputChanged: () => void,
        state: ComponentFramework.Dictionary,
        container: HTMLDivElement
    ): void {
        this._context = context;
        this._notifyOutputChanged = notifyOutputChanged;
        this._container = container;

        // Get initial value from bound column
        this._currentRating = context.parameters.rating.raw ?? 0;

        // Render initial UI
        this.renderControl();
    }

    // updateView: called when bound column value changes externally
    // OR when form refreshes
    public updateView(
        context: ComponentFramework.Context<IInputs>
    ): void {
        this._context = context;
        // Check if value changed from outside
        if (context.parameters.rating.raw !== this._currentRating) {
            this._currentRating = context.parameters.rating.raw ?? 0;
            this.renderControl();
        }
    }

    // getOutputs: called when notifyOutputChanged() was called
    // Returns new values to write back to bound column
    public getOutputs(): IOutputs {
        return {
            rating: this._currentRating
        };
    }

    // destroy: called when control is removed from DOM
    public destroy(): void {
        // Clean up event listeners, third-party libraries
        this._container.innerHTML = "";
    }

    private renderControl(): void {
        this._container.innerHTML = "";
        for (let i = 1; i <= 5; i++) {
            const star = document.createElement("span");
            star.textContent = i <= this._currentRating ? "★" : "☆";
            star.style.cursor = "pointer";
            star.style.fontSize = "24px";
            star.addEventListener("click", () => {
                this._currentRating = i;
                this._notifyOutputChanged(); // tells PCF: value changed
                this.renderControl();
            });
            this._container.appendChild(star);
        }
    }
}

What is the PCF manifest file?

<?xml version="1.0" encoding="utf-8" ?>
<!-- ControlManifest.Input.xml — defines the control's contract -->
<manifest>
  <control namespace="Contoso.Controls" constructor="StarRatingControl"
           version="1.0.0" display-name-key="StarRating_Display_Key"
           description-key="StarRating_Desc_Key" control-type="standard">

    <!-- Property: binding to a Dataverse column -->
    <property name="rating"
              display-name-key="Rating_Display_Key"
              description-key="Rating_Desc_Key"
              of-type="Whole.None"
              usage="bound"
              required="true" />

    <!-- Input-only property: configuration value -->
    <property name="maxStars"
              display-name-key="MaxStars_Display_Key"
              of-type="Whole.None"
              usage="input"
              required="false" />

    <!-- Resources: JS bundle and CSS -->
    <resources>
      <code path="index.ts" order="1" />
      <css path="css/StarRating.css" order="1" />
    </resources>

    <!-- Feature usage: declares needed platform features -->
    <feature-usage>
      <uses-feature name="Device.captureAudio" required="false" />
      <uses-feature name="WebAPI" required="false" />
    </feature-usage>
  </control>
</manifest>

What are PCF WebAPI and Navigation capabilities?

// PCF provides context.webAPI for Dataverse operations:
public async init(context, notifyOutputChanged, state, container) {

    // CREATE a record:
    const result = await context.webAPI.createRecord("account", {
        name: "Contoso Ltd",
        revenue: 1000000
    });
    console.log("Created account ID:", result.id);

    // RETRIEVE a record:
    const account = await context.webAPI.retrieveRecord(
        "account",
        accountId,
        "?$select=name,revenue,primarycontactid"
    );

    // RETRIEVE MULTIPLE (with OData filter):
    const accounts = await context.webAPI.retrieveMultipleRecords(
        "account",
        "?$select=name,revenue&$filter=revenue gt 100000&$top=10"
    );
    accounts.entities.forEach(a => console.log(a.name));

    // UPDATE a record:
    await context.webAPI.updateRecord("account", accountId, {
        revenue: 2000000
    });

    // DELETE a record:
    await context.webAPI.deleteRecord("account", accountId);
}

// Navigation: open forms, URLs:
// Open a record form:
context.navigation.openForm({
    entityName: "account",
    entityId: accountId
});

// Open a URL in a dialog:
context.navigation.openUrl("https://contoso.com");

// Device capabilities:
// Take photo (mobile):
const image = await context.device.captureImage({ allowEdit: true });

PCF Build and Deployment

# Prerequisites: Node.js, Power Platform CLI, VS Code

# Create new PCF project:
pac pcf init --namespace Contoso.Controls --name StarRatingControl --template field

# Install dependencies:
npm install

# Build (TypeScript compilation):
npm run build

# Start test harness (hot reload in browser):
npm start

# Build for production:
npm run build -- --mode production

# Push to Dataverse environment directly (dev only):
pac pcf push --publisher-prefix contoso

# Package in solution for ALM deployment:
# 1. Create solution project:
pac solution init --publisher-name Contoso --publisher-prefix contoso

# 2. Add PCF reference to solution:
pac solution add-reference --path ../StarRatingControl

# 3. Build solution:
dotnet build

# 4. Deploy managed solution:
pac solution import --path ./bin/Release/Solution.zip

4. Dataverse Web API

What is the Dataverse Web API and how is it structured?

Dataverse Web API:
→ RESTful OData v4 API for all Dataverse operations
→ Base URL: https://{org}.crm.dynamics.com/api/data/v9.2/
→ Authentication: OAuth 2.0 Bearer token (Entra ID)
→ Content-Type: application/json

CRUD operations:
CREATE (POST):
POST /api/data/v9.2/accounts
{
  "name": "Contoso Ltd",
  "revenue": 1000000,
  "primarycontactid@odata.bind": "/contacts(guid)"
}
Returns: 204 No Content, OData-EntityId header = new record URL

RETRIEVE (GET):
GET /api/data/v9.2/accounts(guid)?$select=name,revenue
GET /api/data/v9.2/accounts?$select=name&$filter=revenue gt 500000&$top=10

UPDATE (PATCH):
PATCH /api/data/v9.2/accounts(guid)
{ "revenue": 2000000 }
Returns: 204 No Content

DELETE (DELETE):
DELETE /api/data/v9.2/accounts(guid)
Returns: 204 No Content

UPSERT (PATCH with If-None-Match):
PATCH /api/data/v9.2/accounts(guid)
If-Match: *          → update only (fail if not exists)
If-None-Match: *     → create only (fail if exists)
(no header)          → upsert: create if not exists, update if exists
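The three upsert modes reduce to a header choice, which can be sketched as a small helper (a hypothetical function for illustration, not part of any SDK):

```typescript
// Maps the upsert mode to the concurrency header described above.
type UpsertMode = "updateOnly" | "createOnly" | "upsert";

function patchHeaders(mode: UpsertMode): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    "OData-Version": "4.0",
  };
  if (mode === "updateOnly") headers["If-Match"] = "*";      // fail if the record does not exist
  if (mode === "createOnly") headers["If-None-Match"] = "*"; // fail if the record already exists
  // plain upsert: send neither header
  return headers;
}
```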

What are the key OData query options in the Dataverse Web API?

$select — specify columns to return:
GET /accounts?$select=name,revenue,statecode

$filter — filter records:
GET /accounts?$filter=revenue gt 1000000
GET /accounts?$filter=statecode eq 0 and contains(name,'Contoso')
GET /contacts?$filter=parentcustomerid/accountid eq {guid}

$orderby — sort results:
GET /accounts?$orderby=revenue desc,name asc

$top — limit results:
GET /accounts?$top=50

$skip — pagination offset (note: for Dataverse tables, server-driven paging
with Prefer: odata.maxpagesize and @odata.nextLink is the recommended
approach; $skip support is limited):
GET /accounts?$top=10&$skip=20  ← page 3

$expand — include related entity inline:
GET /accounts?$expand=primarycontactid($select=fullname,emailaddress1)
GET /incidents?$expand=customerid_account($select=name,revenue)

$count — include total count:
GET /accounts?$count=true

$apply — aggregations (OData aggregation):
GET /opportunities?$apply=aggregate(estimatedvalue with sum as TotalValue)

Lookup / EntityReference filter:
GET /incidents?$filter=customerid_account/accountid eq {guid}
GET /contacts?$filter=_parentcustomerid_value eq {guid}

Alternative key (e.g., by email):
GET /contacts(emailaddress1='alice@contoso.com')
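A small helper makes the shape of these query strings explicit (illustrative only; production code should also URL-encode the values):

```typescript
// Composes an OData query string from the common options above.
function odataQuery(opts: {
  select?: string[];
  filter?: string;
  orderby?: string;
  top?: number;
}): string {
  const parts: string[] = [];
  if (opts.select?.length) parts.push(`$select=${opts.select.join(",")}`);
  if (opts.filter) parts.push(`$filter=${opts.filter}`);
  if (opts.orderby) parts.push(`$orderby=${opts.orderby}`);
  if (opts.top !== undefined) parts.push(`$top=${opts.top}`);
  return parts.length ? `?${parts.join("&")}` : "";
}

// odataQuery({ select: ["name"], filter: "revenue gt 500000", top: 10 })
// → "?$select=name&$filter=revenue gt 500000&$top=10"
```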

How do you execute Custom APIs and Functions via Web API?

Unbound functions (no specific record):
GET /api/data/v9.2/WhoAmI()
Response: { "UserId": "guid", "BusinessUnitId": "guid", "OrganizationId": "guid" }

GET /api/data/v9.2/RetrieveVersion()
Response: { "Version": "9.2.24031.00140" }

Bound function (on a specific record):
GET /api/data/v9.2/accounts(guid)/Microsoft.Dynamics.CRM.RetrievePrincipalAccess
  (Target=@target)?@target={"@odata.id":"accounts(guid)"}

Unbound actions:
POST /api/data/v9.2/SendEmail
{ "EmailId": "guid", "IssueSend": true }

Custom API (unbound action):
POST /api/data/v9.2/contoso_CalculateDiscount
{
  "OrderAmount": 5000,
  "CustomerTier": "Gold"
}
Response: { "DiscountPercentage": 15.0 }

Custom API (bound to entity):
POST /api/data/v9.2/accounts(guid)/Microsoft.Dynamics.CRM.contoso_ApproveAccount
{}

Batch requests ($batch):
POST /api/data/v9.2/$batch
Content-Type: multipart/mixed;boundary=batch_boundary

--batch_boundary
Content-Type: multipart/mixed;boundary=changeset_boundary

--changeset_boundary
Content-Type: application/http
Content-Transfer-Encoding: binary

POST /api/data/v9.2/accounts HTTP/1.1
Content-Type: application/json
{ "name": "Contoso Ltd" }

--changeset_boundary--
--batch_boundary--

What is Change Tracking in the Dataverse Web API?

Change tracking enables delta sync — retrieve only records
changed since the last sync, without fetching everything.

Enable change tracking:
→ Table Settings → "Change Tracking" = On
→ Required for delta sync and Dataverse integration patterns

Initial full sync (send request header Prefer: odata.track-changes):
GET /api/data/v9.2/accounts?$select=name,revenue
Prefer: odata.track-changes
Response:
  Header: Preference-Applied: odata.track-changes
  Body:   "@odata.deltaLink": "/api/data/v9.2/accounts?$deltatoken=xyz123"

Store the DeltaLink token.

Incremental sync (only changes since last token):
GET /api/data/v9.2/accounts?$deltatoken=xyz123
Response:
{
  "value": [
    { "accountid": "guid1", "name": "Updated Name" },     // updated
    { "accountid": "guid2", "@removed": { "reason": "deleted" } } // deleted
  ],
  "@odata.deltaLink": "...?$deltatoken=abc456"  // new token
}

Store new deltaLink for next sync.

Use cases:
→ Sync Dataverse data to external data warehouse
→ Trigger downstream processes on record changes
→ Replicate to Azure SQL or Fabric via ADF / Logic Apps
→ Mobile app offline sync (sync changes when reconnected)
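The consumer side of a delta sync can be sketched as follows, assuming the response shape above where "@removed" marks deletions (the applyDelta helper is hypothetical):

```typescript
// One record from a delta response: either current values, or a removal marker.
interface DeltaRecord {
  accountid: string;
  name?: string;
  "@removed"?: { reason: string };
}

// Applies a delta page to a local id → name cache.
function applyDelta(cache: Map<string, string>, records: DeltaRecord[]): void {
  for (const r of records) {
    if (r["@removed"]) {
      cache.delete(r.accountid);        // deleted upstream
    } else {
      cache.set(r.accountid, r.name ?? ""); // created or updated upstream
    }
  }
}
```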

5. Business Logic & Automation

What are Business Rules in Dataverse and what can they do?

Business Rules: no-code/low-code logic attached to a table
Applied: client-side (model-driven forms) and/or server-side

Business Rule capabilities:
→ Show/Hide fields: conditionally show/hide columns
→ Enable/Disable fields: make fields read-only conditionally
→ Set field value: pre-populate or calculate field value
→ Set required/not required: dynamic required fields
→ Set default value: set a column's default on new records
→ Validate data: show error message when condition is met
→ Recommend value: suggest a value (user can override)

Scope options:
Entity:           server-side — applies to ALL interfaces (forms, API, plugin)
All Forms:        client-side — applies to all forms in the app
Specific Form:    client-side — applies to one specific form only

Example Business Rule:
IF Status = "On Hold" AND HoldReason is empty
THEN:
  Show error: "Please provide a Hold Reason"
  Set HoldReason = required

Best practices:
→ Use Entity scope for validation that should ALWAYS apply
→ Use Form scope for UX-only changes (show/hide)
→ Avoid complex multi-condition rules — use plugins instead
→ Test: rules with Entity scope fire via API too — test both UI and API

Limitations:
→ Cannot query related records (no lookups across tables)
→ Cannot call external services
→ Cannot create/update other records
→ For complex logic: use plugins or Power Automate

What are Dataverse Calculated and Rollup columns?

Calculated Columns:
→ Value computed from formula at retrieval time
→ No stored value — calculated each time the record is fetched
→ Formula types: date/time arithmetic, string concatenation,
                 numeric calculations, conditional (if/else)

Examples:
FullName = FirstName + " " + LastName
DaysUntilExpiry = DaysFromToday(ExpiryDate)
AnnualRevenue_USD = Revenue * ExchangeRate
Priority_Label = IF(Priority = 1, "High", IF(Priority = 2, "Medium", "Low"))

Limitations:
→ Cannot reference related table columns directly in formula
→ Cannot be used in Power Automate filter conditions efficiently
→ Not stored → cannot index for query performance

Rollup Columns:
→ Aggregation over related child records
→ Calculated on a schedule (every 12 hours default, manual trigger available)
→ Result stored in the parent record

Examples:
Account.TotalOpportunityValue = SUM(Opportunity.EstimatedValue)
  WHERE Opportunity.StatusCode = Open

Account.NumberOfCases = COUNT(Case.CaseId)
  WHERE Case.StateCode = Active

Contact.LastInteractionDate = MAX(Activity.ActualEnd)

Performance note:
→ Rollup columns are stored — efficient to query and index
→ 12-hour refresh lag — not suitable for real-time dashboards
→ Trigger immediate recalculation: CalculateRollupField message
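When a rollup needs refreshing right away, the CalculateRollupField function can be called via the Web API; a sketch (the account GUID and column name new_totalvalue are illustrative):

```http
GET /api/data/v9.2/CalculateRollupField(Target=@tid,FieldName=@fn)
    ?@tid={"@odata.id":"accounts(guid)"}&@fn='new_totalvalue'
```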

What are Virtual Tables in Dataverse?

Virtual Tables (Virtual Entities):
→ Tables that appear in Dataverse but data lives in an external source
→ No data duplication — queries passed through to external system
→ Appear to apps, Power Automate, and API consumers as native tables

How they work:
1. Register a Data Provider plugin (custom C# code)
2. Plugin receives Retrieve/RetrieveMultiple requests
3. Plugin queries the external system (REST API, SQL, SAP)
4. Plugin returns results as Entity collection
5. Dataverse returns results as if they were native records

Use cases:
→ Read data from SAP without replicating to Dataverse
→ Surface Azure SQL data in model-driven apps
→ Integrate external REST API data as a Dataverse table
→ Display SharePoint list data in a model-driven subgrid

Limitations:
→ No native create/update/delete (unless provider implements it)
→ No offline capability
→ No relationships with standard Dataverse tables (limited)
→ No Dataverse search indexing
→ No audit logging
→ Performance depends on the external system

Virtual connector providers (out-of-box):
→ SharePoint connector provider (virtual SharePoint tables)
→ SQL Server provider
→ Excel Online provider

6. Security & ALM

What is the Dataverse security model in depth?

Security layers:

1. Authentication: Entra ID OAuth — who can access the environment

2. Licensing: user must have appropriate Power Apps or D365 licence

3. Security Roles (what operations on which tables):
   Each role has privileges for each table:
   Privilege levels:
   None (0):         no access
   User (1):         own records only
   Business Unit (2): records in user's BU only (Local)
   Parent: Child (3): user's BU + all child BUs (Deep)
   Organisation (4): ALL records in the environment (Global)

   Operations per table:
   Create, Read, Write, Delete, Append, AppendTo, Assign, Share

   Column Security Profiles (field-level security):
   → Restrict Read/Update/Create of specific columns
   → Apply to: Social Security Number, Salary, sensitive fields
   → Overrides security role access for those columns
   → Assign profiles to users or teams

4. Business Units (org hierarchy for data isolation):
   Root BU → Regional BU → Departmental BU
   Users in a BU inherit access to records in their BU's scope
   Move user to different BU = changes their data access scope

5. Record sharing (extending access beyond roles):
   → Share a specific record with a user/team
   → Grants access beyond what their security role allows
   → Use sparingly — creates complex access patterns
   → Prefer: reassign to shared team instead

6. Teams:
   Owner team: can own records (records assigned to team)
   Access team: shares access to specific records (no ownership)
   Entra ID group team: membership managed via Entra ID group

What are Dataverse ALM best practices?

Solution types:
Unmanaged solution:
→ Development environment ONLY
→ Fully editable, no restrictions
→ Components can be modified or deleted

Managed solution:
→ Target environments (Test, UAT, Production)
→ Read-only — cannot edit managed components
→ Uninstall = removes all managed components (clean)
→ Layering: multiple managed solutions can coexist

Solution layering:
Base solution (ISV or Microsoft base)
  → Customisation layer 1 (your org solution)
  → Customisation layer 2 (environment-specific patches)

Active layer: what the environment actually uses

ALM workflow:
Dev environment:
→ Develop in unmanaged solution
→ pac solution export --managed (creates managed zip)

CI/CD pipeline:
→ pac solution export → Git commit
→ PR review → merge
→ pac solution import to Test → UAT → Production

Solution segmentation:
→ Separate solutions by domain: CRM, Finance, HR
→ Core solution: shared base (tables, security roles)
→ Feature solutions: depend on core, add features
→ Reduces merge conflicts, enables independent deployment

Environment variables:
→ Use for config that differs between environments
→ API endpoints, SharePoint URLs, email addresses
→ Set per-environment without solution change
→ Types: String, Number, Boolean, JSON, Secret (Key Vault)

7. Scenario-Based Questions

Scenario: Prevent duplicate account creation based on phone number.

Approach: Pre-Operation plugin on Account Create

public class AccountPreCreateDuplicateCheck : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)
            serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var tracing = (ITracingService)
            serviceProvider.GetService(typeof(ITracingService));

        if (context.MessageName != "Create" ||
            context.PrimaryEntityName != "account") return;

        var target = (Entity)context.InputParameters["Target"];

        if (!target.Contains("telephone1")) return;

        var phone = target.GetAttributeValue<string>("telephone1");
        if (string.IsNullOrWhiteSpace(phone)) return;

        var service = factory.CreateOrganizationService(context.UserId);

        // Query for existing accounts with same phone
        var query = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet("accountid", "name"),
            TopCount = 1
        };
        query.Criteria.AddCondition("telephone1",
            ConditionOperator.Equal, phone);
        query.Criteria.AddCondition("statecode",
            ConditionOperator.Equal, 0); // Active only

        var results = service.RetrieveMultiple(query);

        if (results.Entities.Count > 0)
        {
            var existing = results.Entities[0];
            throw new InvalidPluginExecutionException(
                $"An active account with phone {phone} already exists: " +
                $"'{existing["name"]}'. Please use the existing account.");
        }
    }
}

Registration: Message=Create, Entity=account, Stage=Pre-Operation (20), Mode=Synchronous


Scenario: Design a PCF control that displays a progress bar for opportunity probability.

// ProgressBarControl.ts
export class ProgressBarControl
    implements ComponentFramework.StandardControl<IInputs, IOutputs> {

    private _container: HTMLDivElement;
    private _bar: HTMLDivElement;
    private _label: HTMLSpanElement;

    public init(
        context: ComponentFramework.Context<IInputs>,
        notifyOutputChanged: () => void,
        state: ComponentFramework.Dictionary,
        container: HTMLDivElement
    ): void {
        this._container = container;

        // Create wrapper
        const wrapper = document.createElement("div");
        wrapper.style.cssText = "display:flex;align-items:center;gap:8px;width:100%";

        // Create progress bar track
        const track = document.createElement("div");
        track.style.cssText = "flex:1;height:8px;background:#e0e0e0;border-radius:4px;overflow:hidden";

        // Create progress fill
        this._bar = document.createElement("div");
        this._bar.style.cssText = "height:100%;border-radius:4px;transition:width 0.3s";

        // Create label
        this._label = document.createElement("span");
        this._label.style.cssText = "min-width:36px;font-size:12px;font-weight:600";

        track.appendChild(this._bar);
        wrapper.appendChild(track);
        wrapper.appendChild(this._label);
        this._container.appendChild(wrapper);

        this.updateView(context);
    }

    public updateView(context: ComponentFramework.Context<IInputs>): void {
        const value = context.parameters.probability.raw ?? 0;
        const clamped = Math.max(0, Math.min(100, value));

        this._bar.style.width = `${clamped}%`;
        this._label.textContent = `${clamped}%`;

        // Colour coding: green > 70%, amber > 40%, red <= 40%
        this._bar.style.backgroundColor =
            clamped > 70 ? "#107C10" :
            clamped > 40 ? "#FFB900" : "#C50F1F";
    }

    public getOutputs(): IOutputs { return {}; }
    public destroy(): void { this._container.innerHTML = ""; }
}

Scenario: Implement a Web API call to create a Case linked to an Account from an external application.

// External app creating a Dataverse Case via Web API

async function createCase(accessToken, accountId, caseData) {
    const orgUrl = "https://contoso.crm.dynamics.com";

    const response = await fetch(
        `${orgUrl}/api/data/v9.2/incidents`,
        {
            method: "POST",
            headers: {
                "Authorization": `Bearer ${accessToken}`,
                "Content-Type": "application/json",
                "OData-MaxVersion": "4.0",
                "OData-Version": "4.0",
                "Prefer": "return=representation"  // Return created record
            },
            body: JSON.stringify({
                "title": caseData.title,
                "description": caseData.description,
                "prioritycode": caseData.priority,  // 1=High, 2=Normal, 3=Low
                "casetypecode": 1,  // Question
                // Link to Account:
                "customerid_account@odata.bind":
                    `/accounts(${accountId})`,
                // Only bind a Contact when one was supplied — an undefined
                // contactId would produce an invalid /contacts(undefined) bind:
                ...(caseData.contactId && {
                    "primarycontactid@odata.bind":
                        `/contacts(${caseData.contactId})`
                })
            })
        }
    );

    if (!response.ok) {
        const error = await response.json();
        throw new Error(
            `Failed to create case: ${error.error?.message}`);
    }

    const created = await response.json();
    console.log("Created case ID:", created.incidentid);
    console.log("Case number:", created.ticketnumber);
    return created;
}

Scenario: How do you implement cross-table validation that cannot be done with Business Rules?

Requirement: When closing an Opportunity as Won, validate that a signed Contract exists linked to the opportunity.

Approach: Pre-Operation plugin on Opportunity Update (filtering on statecode)

// Plugin registered on: Update, opportunity, Pre-Operation
// Filtering attributes: statecode
public class OpportunityWinValidation : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)
            serviceProvider.GetService(typeof(IOrganizationServiceFactory));

        var target = (Entity)context.InputParameters["Target"];

        // Only fire when statecode changes to 1 (Won)
        if (!target.Contains("statecode")) return;
        var newState = target.GetAttributeValue<OptionSetValue>("statecode");
        if (newState?.Value != 1) return;  // 1 = Won

        var service = factory.CreateOrganizationService(context.UserId);

        // Check for signed contract linked to this opportunity
        var query = new QueryExpression("contoso_contract")
        {
            ColumnSet = new ColumnSet("contoso_contractid"),
            TopCount = 1
        };
        query.Criteria.AddCondition("contoso_opportunityid",
            ConditionOperator.Equal, context.PrimaryEntityId);
        query.Criteria.AddCondition("contoso_status",
            ConditionOperator.Equal, 2); // 2 = Signed

        var contracts = service.RetrieveMultiple(query);

        if (contracts.Entities.Count == 0)
        {
            throw new InvalidPluginExecutionException(
                "Cannot mark opportunity as Won: no signed contract found. " +
                "Please attach a signed contract before closing.");
        }
    }
}

8. Cheat Sheet — Quick Reference

Plugin Stage Summary

Stage 10 (Pre-Validation):
  → Outside transaction
  → Before input validation
  → Use: permission checks, external validation
  → NOT rolled back on failure

Stage 20 (Pre-Operation):
  → Inside transaction
  → After validation, BEFORE database write
  → Can modify Target (set defaults, enrich data)
  → Rolled back on failure
  → Use: default values, data enrichment, business validation

Main Operation:
  → Actual database write

Stage 40 (Post-Operation Sync):
  → Inside transaction
  → AFTER database write
  → Target.Id available (new record GUID)
  → Rolled back on failure
  → Use: create related records, update related data

Stage 40 (Post-Operation Async):
  → Outside transaction
  → User interaction not blocked
  → NOT rolled back — fire and forget
  → Use: notifications, background processing, non-critical updates

PCF Lifecycle Summary

init():        called once on load — set up DOM, get initial value
updateView():  called when bound column changes externally or form refreshes
getOutputs():  called after notifyOutputChanged() — return new values
destroy():     called on removal — clean up listeners and libraries

Key API:
context.parameters.{name}.raw         → get current value
context.parameters.{name}.formatted    → get formatted display value
this._notifyOutputChanged()            → signal value changed
context.webAPI.createRecord(table, data)    → create record
context.webAPI.retrieveRecord(table, id, query) → retrieve record
context.webAPI.retrieveMultipleRecords(table, query) → list records
context.navigation.openForm({entityName, entityId}) → open record
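The notifyOutputChanged/getOutputs pairing can be illustrated with a stripped-down sketch. The "host" below is a minimal stand-in for the PCF runtime (not a real API) showing the order of the handshake:

```javascript
// A control updates its internal value, signals the host, and the host
// pulls the new value back via getOutputs().
class SliderControl {
    init(context, notifyOutputChanged) {
        this._value = context.parameters.probability.raw ?? 0;
        this._notifyOutputChanged = notifyOutputChanged;
    }
    onUserInput(newValue) {          // simulates a user interaction
        this._value = newValue;
        this._notifyOutputChanged(); // 1. signal the host
    }
    getOutputs() {
        return { probability: this._value }; // 2. host pulls the new value
    }
}

// Minimal fake host wiring the contract together:
const control = new SliderControl();
let boundValue = 25;
control.init(
    { parameters: { probability: { raw: boundValue } } },
    () => { boundValue = control.getOutputs().probability; }
);
control.onUserInput(80);
console.log(boundValue);  // 80
```

Setting the column directly from inside the control (skipping step 1) is the classic mistake — the host never learns the value changed.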

Web API Quick Reference

Base URL: https://{org}.crm.dynamics.com/api/data/v9.2/

CRUD:
POST   /{tablePluralName}                    → Create
GET    /{tablePluralName}(guid)?$select=...  → Retrieve
PATCH  /{tablePluralName}(guid)              → Update
DELETE /{tablePluralName}(guid)              → Delete

OData query options:
$select     → columns to return
$filter     → filter conditions
$orderby    → sort order
$top        → limit count
$skip       → offset (pagination)
$expand     → include related entity
$count=true → include total count

Lookup binding in POST/PATCH:
"parentcustomerid@odata.bind": "/accounts(guid)"

Lookup filter in GET:
$filter=_parentcustomerid_value eq {guid}
$filter=parentcustomerid/accountid eq {guid}
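The options above compose into a plain query string. A small illustrative helper (not part of any SDK) showing a filtered, sorted GET; the org URL and GUID are placeholders:

```javascript
// Compose OData query options into a Web API GET URL.
function buildQuery(base, table, options) {
    const qs = Object.entries(options)
        .map(([key, value]) => `${key}=${value}`)
        .join("&");
    return `${base}/api/data/v9.2/${table}?${qs}`;
}

const url = buildQuery("https://contoso.crm.dynamics.com", "contacts", {
    "$select": "fullname,emailaddress1",
    "$filter": "_parentcustomerid_value eq 11111111-1111-1111-1111-111111111111",
    "$orderby": "fullname asc",
    "$top": "10"
});

// Then: fetch(url, { headers: { Authorization: `Bearer ${token}` } })
```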

Change tracking (delta sync):
Prefer: odata.track-changes header on initial request
Returns OData-DeltaLink → use for incremental sync

Batch:
POST /$batch with multipart/mixed content type
Up to 1,000 operations per batch
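A batch payload is hand-assembled multipart text. A sketch for read-only GETs — the boundary name is arbitrary, and changesets (needed to make writes transactional) are omitted for brevity:

```javascript
// Assemble a minimal $batch body of GET requests.
function buildBatch(requests) {
    const boundary = "batch_demo1";
    const parts = requests.map(path =>
        `--${boundary}\r\n` +
        `Content-Type: application/http\r\n` +
        `Content-Transfer-Encoding: binary\r\n\r\n` +
        `GET ${path} HTTP/1.1\r\n` +
        `Accept: application/json\r\n\r\n`);
    return {
        contentType: `multipart/mixed;boundary=${boundary}`,
        body: parts.join("") + `--${boundary}--\r\n`
    };
}

const batch = buildBatch([
    "/api/data/v9.2/accounts?$select=name&$top=3",
    "/api/data/v9.2/contacts?$select=fullname&$top=3"
]);
// POST {orgUrl}/api/data/v9.2/$batch with batch.contentType and batch.body
```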

Business Rule vs Plugin Decision

Use Business Rule when:
→ Simple show/hide or enable/disable on a form
→ Basic field validation with fixed message
→ Setting default values on new records
→ Non-developer makers need to maintain the logic
→ Client-side only behaviour acceptable

Use Plugin when:
→ Logic requires querying other tables
→ Cross-record validation (e.g., check related records exist)
→ Need to create/update/delete other records
→ External service call needed
→ Complex conditional logic beyond Business Rule capability
→ Must fire on API operations (not just form save)
→ Performance: reusable across all operations

Use Power Automate when:
→ Async background processing (no user wait)
→ Multi-system integration (non-Dataverse services)
→ Long-running processes
→ Notifications (email, Teams)
→ Scheduled processing
→ Non-developer maintainability required

Top 10 Tips

  1. Know the event pipeline stages — Pre-Validation (outside transaction), Pre-Operation (inside, before write), Post-Operation Sync (inside, after write), Post-Operation Async (outside). This is the most frequently tested plugin concept.

  2. Filtering attributes on Update plugins — always register Update plugins with specific filtering attributes. Without them, the plugin fires on every field update — massive performance impact on busy environments.

  3. InvalidPluginExecutionException for user-facing errors — throwing this shows a friendly error in the UI. Throwing any other exception shows an ugly system error. Know this difference for any UX-related plugin question.

  4. PCF getOutputs + notifyOutputChanged are paired — when your control's value changes, call notifyOutputChanged() to signal Dataverse. Dataverse then calls getOutputs() to retrieve the new value. Never set the value directly without this flow.

  5. Web API OData-bind for lookups — "parentcustomerid@odata.bind": "/accounts(guid)" is the correct syntax for setting lookup values in POST/PATCH. Knowing this notation separates developers who have actually used the Web API.

  6. Pre-Operation to set defaults, Post-Operation to create related records — set defaults in Pre-Op (Target can be modified before write). Create related records in Post-Op (the new record ID is available in Target.Id).

  7. Plugin depth tracking prevents infinite loops — a loop occurs when Plugin A creates a record that triggers Plugin B, which modifies the original record and triggers Plugin A again. Check context.Depth > 1 and return early.

  8. Managed solutions for all non-dev environments — unmanaged solutions in production are a governance disaster (cannot track changes, cannot cleanly uninstall). Always import managed solutions to Test/UAT/Prod.

  9. Environment variables for cross-environment config — never hardcode URLs, API keys, or environment-specific values in plugins or solutions. Use Dataverse environment variables and read them at runtime.

  10. Virtual tables for zero-copy external data — surface SAP, SQL, or REST API data as Dataverse tables without replication. The data provider plugin intercepts Retrieve/RetrieveMultiple calls and queries the external system. Know this pattern for any enterprise integration architecture discussion.



Microsoft Copilot Studio Advanced Topics Guide

Generative AI · Knowledge Sources · Actions & Plugins · Autonomous Agents · Multi-Channel · ALM · Security · Scenarios · Cheat Sheet


Table of Contents

  1. Core Concepts — Advanced Positioning
  2. Generative AI & Knowledge Sources
  3. Actions, Plugins & Connectors
  4. Autonomous Agents & Orchestration
  5. Multi-Channel & Integration
  6. ALM, Security & Governance
  7. Scenario-Based Questions
  8. Cheat Sheet — Quick Reference

1. Core Concepts — Advanced Positioning

What is Microsoft Copilot Studio and how has it evolved?

Microsoft Copilot Studio (formerly Power Virtual Agents) is a low-code platform for building AI-powered conversational agents — chatbots, copilots, and autonomous agents — that can interact with users across multiple channels and take actions on their behalf.

Evolution:

  • Power Virtual Agents (PVA) (2019–2023): rule-based topic-driven chatbot builder
  • Copilot Studio (2023–present): expanded to include generative AI (GPT), knowledge sources, plugins, multi-agent orchestration, and autonomous agent capabilities

Key shift — from scripted to generative:

Traditional PVA (rule-based):
→ Rigid topic trees with defined trigger phrases
→ Every conversation path explicitly authored
→ Falls back to "I don't understand" for unrecognised inputs
→ Maintenance-heavy — add new trigger phrases constantly

Copilot Studio with Generative AI:
→ Generative answers from knowledge sources (no topic required)
→ Natural language understanding via LLM
→ Gracefully handles queries not covered by topics
→ Self-improving with connected knowledge sources
→ Autonomous task execution with minimal author guidance

What are the Copilot Studio agent types?

Copilot (classic chatbot + generative AI):
→ Topic-based conversation flows
→ Generative answers filling gaps between topics
→ Actions called within topic flows
→ Multi-channel deployment
→ Best for: customer service, HR helpdesk, IT support

Autonomous agent:
→ Goal-oriented — given a goal, determines steps to achieve it
→ Selects and chains actions automatically (no rigid topic flow)
→ Runs in background without continuous user interaction
→ Can be triggered by events (email received, record created)
→ Best for: complex workflows, multi-step business process automation

Copilot extension (plugin):
→ Extends Microsoft 365 Copilot with org-specific capabilities
→ Deployed as a declarative agent in M365 Copilot
→ Users interact via M365 Copilot in Teams, Outlook, etc.
→ Best for: extending M365 Copilot with business data

Key capability comparison:
                    Classic   Generative   Autonomous
Topic authoring:    Required  Optional     Optional
Generative answers: No        Yes          Yes
Action execution:   Manual    Manual       Automatic (AI-driven)
Proactive:          No        No           Yes (event-triggered)
User interaction:   Always    Always       Optional

What is the Copilot Studio topic architecture?

Topic components:
Trigger:    what starts this topic
  → Trigger phrases (classic): "I need help", "track my order"
  → Events: conversationStart, conversationEnd, escalate
  → System topics: pre-built (greeting, escalate, end of conversation)

Nodes:    the conversation flow steps
  Message:       send text/adaptive card to user
  Question:      ask user for input, store in variable
  Condition:     if/else branching based on variable values
  Action:        call Power Automate flow or plugin
  Variable:      set/get topic or global variables
  Redirect:      jump to another topic
  End topic:     end this topic (return to parent)
  End conversation: close the conversation
  Generative answers: search knowledge and generate response

Variables:
  Topic variables:   scoped to current topic (cleared when topic ends)
  Global variables:  persist across the entire conversation session
  System variables:  built-in (User.DisplayName, Activity.Text, etc.)
  Environment variables: Power Platform env vars (config values)

System topics (always present):
  Conversation Start  → greeting when conversation begins
  End of Conversation → closing message + satisfaction survey
  Escalate            → hand off to human agent
  Fallback            → when no topic matches (AI generative fallback)
  Error               → when an unexpected error occurs
  Sign In             → authentication flow

2. Generative AI & Knowledge Sources

What are Knowledge Sources in Copilot Studio and what types are supported?

Knowledge Sources are the data repositories that Copilot Studio searches to generate AI-powered answers when no specific topic matches a user query.

Supported knowledge source types:

Public websites:
→ Specify URLs — Copilot Studio crawls and indexes content
→ Supports: sitemap-based crawling, specific URL paths
→ Content indexed: text content of web pages
→ Best for: public documentation, FAQ pages, product info

SharePoint sites:
→ Connect to SharePoint Online sites or specific libraries
→ Content indexed: document text (Word, PDF, PowerPoint, text files)
→ Security: respects SharePoint permissions when using user authentication
→ Best for: internal knowledge bases, HR policies, IT documentation

Dataverse tables:
→ Search structured data in Dataverse (knowledge articles, products, FAQs)
→ Supports: Dataverse knowledge articles (publishedknowledgearticle table)
→ Best for: Dynamics 365 Customer Service KB, custom data tables

Files (uploaded):
→ Upload documents directly (PDF, Word, PowerPoint)
→ Indexed immediately — no external URL needed
→ Best for: static documents (product manuals, policy PDFs)

Custom data (Azure AI Search):
→ Connect to Azure AI Search index for enterprise search
→ Best for: large-scale document repositories, custom indexed content

Microsoft Copilot Studio knowledge store:
→ Q&A pairs authored directly in Copilot Studio
→ Alternative name: Question and Answer (Q&A) knowledge
→ Best for: specific FAQ content, controlled answers

How does Generative Answers work in Copilot Studio?

Generative Answers flow:
1. User sends message not matched by any topic trigger
2. Copilot Studio sends query to the Generative Answers AI model
3. AI searches configured knowledge sources
4. Relevant content retrieved from knowledge sources
5. GPT generates a natural language answer with citations
6. Answer returned to user with source references

Configuring Generative Answers:
At agent level:
  → Settings → AI → Enable "Generative AI features"
  → Add knowledge sources
  → Set content moderation level (Low / Medium / High)

At topic level:
  → Add "Generative answers" node in topic flow
  → Override knowledge sources for specific contexts
  → Input: topic variable containing search query

Generative answers node inputs:
  Search query: {{ Activity.Text }}  ← user's message
  OR custom query built from variables

Content moderation levels:
  Low:    more creative, may include unverified information
  Medium: balanced (default)
  High:   strict — only answers directly supported by source content

Citations:
  → Answers include source references (filename, URL)
  → User can click to view source document
  → Configure: show citations (default) or suppress citations

Fallback behaviour:
  → If no relevant content found: "I couldn't find information about that"
  → Customise fallback message in system Fallback topic

What is Generative Orchestration (AI-driven topic selection)?

Classic topic selection (keyword matching):
→ Topic triggers contain specific phrases
→ "track my order", "order status" → triggers Order Tracking topic
→ Requires extensive trigger phrase authoring
→ Misses paraphrased queries not in trigger list

Generative Orchestration (AI-driven):
→ LLM determines the most relevant topic for each user message
→ No need for exhaustive trigger phrase lists
→ Understands intent, not just keyword matching
→ Can select between topics, actions, AND knowledge sources

How it works:
User: "Where is my purchase?"  (not in trigger phrases)
  ↓ LLM analysis
Determines intent: order tracking
  ↓ Selects best match
Executes: Order Tracking topic or calls Get Order Status action

Generative Orchestration with Multiple Agents:
→ A primary agent has access to multiple "skills" (sub-agents)
→ Orchestrator determines which skill/sub-agent handles each query
→ Routes to: Topic in current agent | Action | Sub-agent
→ Result returned and synthesised into coherent response

Enable Generative Orchestration:
Settings → AI → Generative AI features → Generative orchestration
(Note: uses more AI credits — balance capability vs cost)

What are Generative Actions (AI plugin calling)?

Generative Actions = AI-selected action execution within conversations

Classic action calling:
→ Author explicitly adds "Call an action" node in topic
→ Action always called at that specific point in the topic
→ Rigid — must anticipate every scenario

Generative Actions:
→ Register actions (Power Automate flows, plugins) with descriptions
→ LLM decides when to call which action based on user intent
→ No explicit "call action" node needed — AI selects at runtime
→ Parameters auto-extracted from conversation context

Example:
Actions registered:
  → "Get order status" — retrieves order by order number
  → "Cancel order" — cancels an order
  → "Create support case" — opens a new support ticket

User: "Can you cancel order 12345 for me?"
AI determines:
  → Intent: cancel order
  → Calls: "Cancel order" action with orderNumber = "12345"
  → Returns: "Order 12345 has been cancelled successfully"

No explicit topic needed — generative AI handles the flow end-to-end

Action description best practices:
→ Write clear, specific descriptions for each action
→ Include: what the action does, when it should be used
→ Include: what parameters it requires and their format
→ Bad:  "Get order" ← too vague
→ Good: "Retrieve the current status and estimated delivery date
         of a customer order. Use when the user asks about their
         order status, shipping, or delivery. Requires: orderNumber (string)"

3. Actions, Plugins & Connectors

What are the types of Actions in Copilot Studio?

Action types:

1. Power Automate flows:
→ Call a Power Automate cloud flow from within a topic
→ Pass input variables to the flow, receive output variables
→ Most flexible — access 1,000+ connectors via Power Automate
→ Use for: data operations, external API calls, complex logic

2. HTTP connector (direct REST calls):
→ Call any REST API directly without Power Automate
→ Configure: URL, method, headers, authentication, body
→ Parse JSON response into Copilot Studio variables
→ Use for: simple GET/POST to known APIs
→ Lower latency than flow-based actions

3. AI Builder prompts (Prompt Actions):
→ Call an AI Builder GPT prompt from within a topic
→ Pass conversation context as input to the prompt
→ Use for: text generation, summarisation, classification within topics

4. Power Platform connectors:
→ Use connector actions directly (Dataverse, SharePoint, etc.)
→ Configure connection in Copilot Studio settings
→ Use for: direct CRUD operations on connected data sources

5. MCP (Model Context Protocol) tools:
→ Connect to MCP servers exposing tools
→ AI calls MCP tools via standardised protocol
→ Use for: integrating with GitHub Copilot ecosystem, dev tools

Flow action configuration:
In topic → "Call an action" node → select flow
  Input: variable name → flow input parameter
  Output: flow output parameter → variable name

Flow response in topic:
Set variable: varOrderStatus = {flow output: OrderStatus}
Message: "Your order status is {{ varOrderStatus }}"

What are Copilot Studio Plugins and how do they differ from Actions?

Actions vs Plugins:

Actions (internal to agent):
→ Called explicitly in topic nodes or via Generative Actions
→ Available only to the current agent
→ Defined within the agent

Plugins (shareable, reusable):
→ Reusable capability packages — share across multiple agents
→ Three plugin types in Copilot Studio:

1. Connector plugins:
   → Wrap a Power Platform connector action as a plugin
   → e.g., "Salesforce — Create Lead" as a reusable plugin
   → Callable from any agent that adds the connector plugin

2. Conversational plugins:
   → A complete topic packaged as a reusable component
   → Another agent can "call" this plugin as a skill
   → Multi-agent architecture: orchestrator delegates to skill agents

3. AI plugin (OpenAI-compatible):
   → Exposes agent capabilities as an OpenAI-format plugin
   → Callable from Microsoft 365 Copilot, Custom Copilots
   → Defined with: manifest URL, actions, parameters

Copilot Studio as a plugin for M365 Copilot:
→ Build a conversational plugin in Copilot Studio
→ Publish to Teams app catalogue
→ M365 Copilot can invoke the plugin when relevant
→ User in Teams Copilot: "Check my open support tickets"
→ M365 Copilot invokes the Copilot Studio plugin → returns data

How do you configure authentication for Actions in Copilot Studio?

Authentication options for actions/connectors:

No authentication (anonymous):
→ Calls unauthenticated public APIs
→ Use for: public data, weather, news APIs

Azure AD (OAuth):
→ User authenticates via Entra ID SSO
→ Action called with user's delegated token (on behalf of user)
→ Sees only data the user can access
→ Use for: Graph API calls, SharePoint data as the user

Service account (application credentials):
→ Store client ID + client secret in environment variables
→ Application-level access (all users share same identity)
→ Use for: backend services, non-user-specific operations

Managed Identity:
→ When Copilot Studio runs in Standard Logic Apps/Azure environment
→ No credential management

Key Vault:
→ Store API keys/secrets in Azure Key Vault
→ Reference from Copilot Studio action configuration
→ Never hardcode secrets in action configuration

OAuth generic:
→ For non-Entra ID OAuth 2.0 providers
→ Configure: authorize URL, token URL, client ID/secret, scopes
→ Use for: Salesforce, GitHub, Google APIs

4. Autonomous Agents & Orchestration

What are Autonomous Agents in Copilot Studio?

Autonomous agents = agents that execute multi-step tasks
without continuous human interaction, triggered by events

Key characteristics:
→ Event-triggered (not user message triggered)
→ Goal-oriented: given a goal, AI determines execution steps
→ Action-chaining: selects and chains multiple actions
→ Can run background tasks (no chat UI needed)
→ Observability: detailed run logs for monitoring

Autonomous agent triggers:
→ Power Automate trigger → start agent run
→ Dataverse record created/updated → start agent run
→ Schedule (via Power Automate recurrence trigger)
→ Teams message received
→ Email received (via Power Automate)

Autonomous agent lifecycle:
1. Trigger fires (e.g., new support case created in Dataverse)
2. Agent receives context (case title, description, customer ID)
3. AI determines steps to achieve the goal
   "Resolve this case: find knowledge article, update case,
    notify customer"
4. Executes actions:
   → Search knowledge base (Knowledge source action)
   → Update case status (Dataverse action)
   → Send email (Office 365 action)
5. Logs each step with result
6. Completion: mark task complete, notify if needed

Differences from classic agent:
Classic: user asks → agent responds (reactive, synchronous)
Autonomous: event fires → agent acts (proactive, asynchronous)

What is Multi-Agent Orchestration in Copilot Studio?

Multi-agent architecture:
One primary "orchestrator" agent coordinates multiple
"skill" agents, each specialising in a domain.

Architecture:
Orchestrator Agent (primary)
  ├── IT Support Agent (skill)
  │    → hardware issues, software installation, VPN
  ├── HR Agent (skill)
  │    → leave requests, payroll, benefits
  ├── Finance Agent (skill)
  │    → expense claims, purchase orders, invoices
  └── Knowledge Agent (skill)
       → company policies, procedures, documentation

How orchestration works:
1. User sends message to orchestrator
2. Orchestrator's AI evaluates: which skill agent handles this?
3. Delegates to appropriate skill agent
4. Skill agent processes and responds
5. Orchestrator synthesises or passes back to user

Benefits:
→ Separation of concerns: each agent has focused knowledge
→ Smaller agents = better accuracy, easier maintenance
→ Reusability: skill agents used by multiple orchestrators
→ Governance: different teams own different skill agents

Configuration:
In orchestrator agent → Add skill
  → Select another Copilot Studio agent as a skill
  → Provide description: "Use this skill for HR-related questions
     about leave, payroll, and benefits"
  → Orchestrator AI uses description to route appropriately

Cross-environment skills:
→ Skill agents can be in different environments (but same tenant)
→ Enterprise: central orchestrator + department-owned skill agents

What is the Copilot Studio Agent Framework and memory?

Agent memory types:

Conversation memory (session):
→ Persists within a single conversation session
→ Topic and global variables
→ Cleared when conversation ends (or session timeout)
→ Use for: tracking conversation state, user preferences in session

Long-term memory (cross-session):
→ NOT natively built into Copilot Studio
→ Implement via:
  - Dataverse: save/load user preferences between sessions
  - Azure Cosmos DB: external memory store via HTTP action
  - Power Automate: read/write user context at conversation start

Memory pattern:
Conversation Start topic:
  → Call flow: Get User Profile (read from Dataverse)
  → Store in global variable: Global.UserName, Global.Preferences
  → Use throughout conversation

Conversation End topic:
  → Call flow: Save Session Summary (write to Dataverse)
  → Store: summary, preferences updated, unresolved issues

System variables (always available):
  System.User.DisplayName    → authenticated user's display name
  System.User.Email          → authenticated user's email
  System.User.Id             → authenticated user's Object ID
  System.Activity.Text       → user's most recent message
  System.Conversation.Id     → unique conversation session ID
  System.Channel             → channel name (Teams, WebChat, etc.)
  System.Bot.Name            → the agent's name

5. Multi-Channel & Integration

What channels does Copilot Studio support for deployment?

Native channels (built-in):
Microsoft Teams:
→ Most common enterprise channel
→ Personal app in Teams app bar
→ Channel/group chat integration
→ Teams admin approval required for org-wide deploy
→ Supports: adaptive cards, Teams SSO authentication

Demo website:
→ Auto-generated test URL for quick testing
→ Not for production use

Custom website (web chat):
→ Embed JavaScript snippet in any website
→ Copilot appears as chat widget
→ Full customisation: colours, position, greeting, avatar
→ Authentication: anonymous or custom token server

Mobile app:
→ Embed in native iOS/Android app via Direct Line API
→ Direct Line SDK for React Native, Swift, Kotlin

Microsoft 365 Copilot:
→ Deploy as a declarative agent or plugin in M365 Copilot
→ Available in Teams Copilot, Outlook, Microsoft 365 apps
→ High-value channel for enterprise internal copilots

Power Pages:
→ Embed in a Power Pages portal
→ External customer-facing web chat
→ Passes portal user authentication context to the agent

Direct Line (custom channels):
→ REST API for any custom integration
→ WhatsApp (via Azure Communication Services)
→ Facebook Messenger (via Azure Bot Service)
→ Slack, Telegram, custom mobile apps
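A custom channel drives the agent over the Direct Line REST API: start a conversation with the channel secret, then post message activities and poll (or stream) replies. A hedged sketch — the secret comes from the agent's Direct Line channel settings, and the user id is arbitrary:

```javascript
const BASE = "https://directline.botframework.com/v3/directline";

// Start a conversation: returns { conversationId, token, streamUrl, ... }
async function startConversation(secret) {
    const res = await fetch(`${BASE}/conversations`, {
        method: "POST",
        headers: { "Authorization": `Bearer ${secret}` }
    });
    return res.json();
}

// Shape of a user message activity:
function messageActivity(userId, text) {
    return { type: "message", from: { id: userId }, text };
}

async function sendMessage(token, conversationId, text) {
    await fetch(`${BASE}/conversations/${conversationId}/activities`, {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${token}`,
            "Content-Type": "application/json"
        },
        body: JSON.stringify(messageActivity("user1", text))
    });
}

// Receive replies by polling GET /conversations/{id}/activities?watermark=...
// or by connecting a WebSocket to the returned streamUrl.
```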

Telephony / Voice:
→ Azure Communication Services integration
→ IVR (Interactive Voice Response) via Nuance or ACS
→ Voice-to-text and text-to-voice conversion
→ Use for: call centre automation

How does Copilot Studio integrate with D365 Customer Service (Omnichannel)?

Integration architecture:
Customer starts chat on website/Teams
    ↓
Copilot Studio bot handles initial conversation
  → Answers FAQs via knowledge sources
  → Collects customer details (name, account, issue type)
  → Attempts automated resolution via actions
    ↓ (if cannot resolve)
Escalates to D365 Customer Service Omnichannel
  → Context passed: customer details, conversation transcript,
    bot-determined category, bot escalation reason
    ↓
D365 agent receives conversation with full context
  → No customer repetition
  → Pre-populated case fields
  → Routed to correct queue based on bot category

Configuration steps:
1. In D365 Customer Service Admin:
   → Create a workstream for chat
   → Set up bot: connect Copilot Studio bot to the workstream
   → Configure: bot handles first N minutes, then escalates

2. In Copilot Studio:
   → System topic "Escalate":
     → Set context variables before hand-off:
       va_CustomerName = {{ System.User.DisplayName }}
       va_IssueCategory = {{ Topic.Category }}
       va_BotTranscript = {{ System.Conversation.Id }}
     → "Transfer conversation" node → hand off to agent

3. D365 receives:
   → All context variables surfaced in agent workspace
   → Full conversation transcript displayed

Benefits:
→ Deflection: bot resolves 20-40% of contacts
→ No repetition: customer experience improved
→ Context: agent knows issue before picking up conversation
→ Analytics: track deflection rate, escalation reasons

How do you implement user authentication in Copilot Studio?

Authentication modes:

No authentication (anonymous):
→ Bot does not know who the user is
→ Suitable for: public FAQ bots, product info bots
→ Cannot access user-specific data

Manual authentication (per-topic):
→ Author adds "Authenticate" node in specific topics
→ User prompted to sign in when they reach that node
→ After sign-in: System.User.* variables populated
→ Use for: bots where MOST queries are anonymous but
  some require identity (e.g., "Check my account" topic)

Require sign-in for all users:
→ Bot requires authentication before any conversation starts
→ User must sign in via configured IdP before proceeding
→ Use for: enterprise internal bots (all users are employees)

Authentication providers:
→ Entra ID (recommended for internal bots)
→ Azure AD B2C (customer-facing bots with consumer identity)
→ Generic OAuth 2.0 (any OAuth-compatible provider)

Teams SSO (automatic):
→ When deployed to Microsoft Teams
→ User's Teams identity used automatically (no sign-in prompt)
→ Token passed via Teams SSO mechanism
→ Seamless experience — user never sees a login screen

After authentication, available variables:
System.User.DisplayName  → "Alice Smith"
System.User.Email        → "alice@contoso.com"
System.User.Id           → Entra ID Object ID (GUID)
System.User.IsLoggedIn   → true/false
AuthToken                → OAuth access token (for API calls)

6. ALM, Security & Governance

How do you manage ALM for Copilot Studio agents?

Copilot Studio ALM approach:

1. Dataverse Solutions:
   → Copilot Studio agents are Dataverse solution components
   → Package in a solution for environment-to-environment promotion
   → Solution contains: bot definition, topics, variables, connections
   → Export as managed solution for target environments

2. Environment strategy:
   Development → Test → UAT → Production
   → Each environment: separate Copilot Studio agent instance
   → Use environment variables for channel IDs, connection strings

3. Source control (Git):
   → pac copilot export — export agent definition to YAML files
   → Commit YAML to Git repository
   → PR review: review topic changes, variable additions
   → pac copilot import — import from YAML back to environment

4. CI/CD pipeline (GitHub Actions / Azure DevOps):
   steps:
   - pac auth create (service principal)
   - pac solution export --name CopilotSolution
   - pac solution import --path CopilotSolution_managed.zip
   - Run automated tests (via Test Framework or custom)
   - Publish bot channels
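
The pipeline steps above can be sketched as a small deployment script. This is a hedged sketch: the solution name and paths are hypothetical, and it assumes the pac CLI is installed and already authenticated (step 1). The commands are printed rather than executed so the flow stays visible:

```python
# Hedged sketch of the CI/CD steps as a Python deployment script.
# SOLUTION and the paths are hypothetical placeholders; pac must already
# be installed and authenticated via `pac auth create`.
import subprocess

SOLUTION = "CopilotSolution"  # hypothetical solution name

def pac(*args: str) -> list[str]:
    """Build a pac CLI command as an argument list for subprocess.run."""
    return ["pac", *args]

steps = [
    pac("solution", "export", "--name", SOLUTION, "--path", "./out"),
    pac("solution", "import", "--path", f"./out/{SOLUTION}_managed.zip"),
]

for cmd in steps:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to execute for real
```

Keeping each step as an argument list (rather than a shell string) avoids quoting issues when the pipeline runs on different agents.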

5. Test Framework:
   → Copilot Studio Test Framework (preview)
   → Define test conversations: input → expected output
   → Run automated regression tests before promoting to prod
   → Assert: topic triggered, variable values, actions called

6. Connection references:
   → Connectors use connection references (not direct connections)
   → Each environment: connection reference points to env-specific connection
   → Prevents hard-coded connection IDs from breaking on import

What are the key security and governance controls for Copilot Studio?

Data Loss Prevention (DLP):
→ Power Platform DLP policies apply to Copilot Studio
→ Control which connectors can be used in agents
→ Prevent sensitive data exfiltration via connectors
→ Configure: Power Platform Admin Centre → DLP Policies
→ Connectors classified into DLP groups: Business, Non-business, Blocked

Content Moderation:
→ Configure AI content moderation level per agent (Low/Medium/High)
→ High: blocks inappropriate inputs and outputs
→ Filter: profanity, harmful content, personal information
→ Configure in agent Settings → AI → Content moderation

Authentication control:
→ Require sign-in: enforce identity before any conversation
→ Teams Only: restrict deployment to Teams channel only
→ IP restrictions: restrict agent access to specific IP ranges

Admin controls (Power Platform Admin Centre):
→ Enable/disable Copilot Studio for specific environments
→ Control who can create agents (maker policy)
→ Monitor: AI usage, capacity consumption, active agents
→ Copilot usage reports: sessions, escalations, resolution rates

Data residency:
→ Copilot Studio processes data in the same region as the environment
→ AI (LLM) calls: data sent to Azure OpenAI (regional compliance)
→ Configure: environment region selection at creation
→ EU Data Boundary: available for GDPR compliance

Generative AI governance:
→ Human review of AI-generated answers before enabling for users
→ Monitoring: flag conversations where AI generated incorrect answers
→ Feedback: thumbs up/down integrated into conversation
→ Review: Copilot Studio analytics → topic coverage and accuracy

What are Copilot Studio analytics and key metrics?

Built-in analytics dashboard:

Engagement metrics:
Total sessions:       total conversation sessions in period
Engaged sessions:     sessions where user sent ≥ 1 message
Engagement rate:      engaged / total sessions

Resolution metrics:
Resolution rate:      % sessions resolved without escalation/abandon
Escalation rate:      % sessions transferred to human agent
Abandon rate:         % sessions where user left without resolution
CSAT:                 satisfaction score from end-of-conversation survey

Topic metrics:
Top topics:           most frequently triggered topics
Topic coverage:       % sessions handled by a specific topic
Unrecognised phrases: queries that didn't match any topic
                      → review these to improve topic coverage

Generative AI metrics:
Generative answer rate:   % responses generated by AI (not topics)
Knowledge source hits:    which sources are being cited
Low confidence answers:   answers below confidence threshold

Channel breakdown:
Sessions per channel: Teams vs WebChat vs Direct Line vs other

Using analytics to improve:
1. Review "Unrecognised phrases" → create new topics or add to KB
2. Review high-abandon topics → identify confusing conversation flows
3. Review escalation topics → improve resolution with better actions
4. Review low-rated AI answers → improve knowledge source content
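
Step 1 is the highest-leverage loop. A toy sketch of triaging an exported unrecognised-phrases list by frequency, so the highest-volume gaps become the topic backlog (the phrases below are illustrative, not real analytics data):

```python
# Hedged sketch: rank unrecognised phrases by frequency so the most
# common gaps are addressed first (new topic or knowledge source content).
from collections import Counter

unrecognised = [
    "reset mfa", "reset mfa", "vpn on mac", "reset mfa",
    "expense claim", "vpn on mac",
]

backlog = Counter(unrecognised).most_common()
for phrase, count in backlog:
    print(f"{count:>3}  {phrase}")
# "reset mfa" tops the list → strongest candidate for a new topic
```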

7. Scenario-Based Questions

Scenario: Design an enterprise IT helpdesk copilot for 10,000 employees.

Architecture:

  1. Agent type: Copilot deployed in Microsoft Teams. All employees access via Teams app. Teams SSO — no sign-in prompt needed.

  2. Knowledge sources:

    • SharePoint site: IT documentation, software guides, network policies
    • Dataverse: IT knowledge articles (from D365 Customer Service KB)
    • Public Microsoft docs: linked for common M365 issues
  3. Topics (explicit flows for high-volume issues):

    • Password reset: authenticates user → calls Graph API → initiates reset
    • VPN setup: device type question → provides correct guide link
    • Software installation: collects software name → checks catalogue → raises request
    • Raise IT ticket: multi-step form → creates Dataverse incident via Power Automate
  4. Generative answers: fills gap for everything not covered by explicit topics. Sources: IT SharePoint KB.

  5. Escalation to D365 Customer Service: when user says "speak to IT" or bot cannot resolve → transfers with context to D365 Omnichannel.

  6. Multi-agent: orchestrator agent routes to skill agents:

    • IT Hardware Agent: laptop issues, peripherals
    • IT Software Agent: application issues, licences
    • IT Network Agent: VPN, WiFi, connectivity
  7. ALM: Dev → UAT → Prod via Dataverse solutions + GitHub Actions pipeline. pac CLI for source control.

  8. Governance: DLP policy blocking non-business connectors. High content moderation. Admin monitoring dashboard.
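The orchestrator routing in point 6 can be illustrated with a toy sketch. In Copilot Studio the routing is done by generative orchestration over each skill agent's description, not keyword matching; the keyword sets below only illustrate the separation of concerns:

```python
# Hedged toy sketch of orchestrator → skill agent routing. Real routing
# uses generative orchestration over agent descriptions; these keyword
# sets are illustrative placeholders.

SKILL_AGENTS = {
    "IT Hardware Agent": {"laptop", "screen", "keyboard", "dock"},
    "IT Software Agent": {"install", "licence", "application", "outlook"},
    "IT Network Agent":  {"vpn", "wifi", "network", "connectivity"},
}

def route(query: str) -> str:
    """Pick the skill agent whose domain the query mentions."""
    words = set(query.lower().split())
    for agent, keywords in SKILL_AGENTS.items():
        if words & keywords:
            return agent
    return "Orchestrator fallback (generative answers)"

print(route("My VPN will not connect"))  # → IT Network Agent
```

Each skill agent then handles the query with its own focused topics and knowledge sources.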


Scenario: Build an autonomous agent that processes incoming vendor invoices.

Trigger: Email received in shared mailbox (vendor-invoices@contoso.com)

Agent goal: "Process this vendor invoice: extract data, validate, route for approval, and update ERP"

Autonomous actions:

  1. Extract invoice data: call AI Builder Invoice Processing model → extract vendor, amount, date, line items
  2. Validate: check vendor exists in Dataverse vendor table. Check duplicate invoice ID. Validate amount within tolerance.
  3. Route for approval: if amount > £10,000 → create approval request via Power Automate Approvals → wait for response. If ≤ £10,000 → auto-approve.
  4. Update ERP: call ERP REST API via HTTP action → create invoice record with extracted data.
  5. Notify: send Teams message to AP team with processing summary and any exceptions.
  6. Log: write processing record to Dataverse audit table.

Monitoring: each agent run logged with: trigger time, actions executed, results, any errors. Power BI dashboard on agent run logs.
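
The validation and routing rules in steps 2 and 3 reduce to a small amount of logic. A hedged sketch, with an in-memory vendor table standing in for the Dataverse lookups and a string standing in for the Power Automate approval action:

```python
# Hedged sketch of steps 2-3: validation and approval routing. The sets
# below are placeholders for Dataverse vendor and invoice tables.

APPROVAL_THRESHOLD = 10_000  # GBP, per the scenario

known_vendors = {"Contoso Supplies", "Fabrikam Ltd"}
processed_invoice_ids = {"INV-001"}

def validate(invoice: dict) -> list[str]:
    """Return a list of validation errors (empty = valid)."""
    errors = []
    if invoice["vendor"] not in known_vendors:
        errors.append("unknown vendor")
    if invoice["id"] in processed_invoice_ids:
        errors.append("duplicate invoice id")
    return errors

def route_for_approval(invoice: dict) -> str:
    if invoice["amount"] > APPROVAL_THRESHOLD:
        return "manual approval"  # Power Automate Approvals in the real agent
    return "auto-approve"

inv = {"id": "INV-002", "vendor": "Contoso Supplies", "amount": 12_500}
print(validate(inv), route_for_approval(inv))  # → [] manual approval
```

Amount-tolerance checks would slot into `validate` in the same way once a purchase-order reference is available.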


Scenario: How do you improve a Copilot Studio agent with a 60% abandon rate?

Diagnose:

  1. Analytics review: Copilot Studio analytics → identify topics with highest abandon rate. Check "Unrecognised phrases" — topics users ask for that the bot doesn't handle.

  2. Conversation transcript review: review actual abandoned conversations — where did users drop off? What was the last bot message?

  3. Common causes and fixes:

    • No matching topic: user asks something bot doesn't cover → add new topic or add to knowledge source
    • Poor escalation: users can't reach a human → review escalation topic, make it more discoverable
    • Too many clarifying questions: bot asks for too much info → simplify forms, use defaults
    • Long wait: bot calling slow Power Automate flow → optimise flow performance or show "thinking" indicator
    • No answer from knowledge: knowledge source not indexed correctly → re-crawl, add content, improve source quality
    • Confusing response: AI-generated answer is unclear → add explicit topic to override with clearer response
  4. Improvements:

    • Enable Generative Orchestration: handles paraphrased queries better
    • Add more knowledge source content: fill gaps identified from unrecognised phrases
    • Add proactive suggestions: after each answer, suggest related topics
    • Improve escalation: make "Talk to a person" always discoverable
  5. Measure impact: re-evaluate abandon rate after 2 weeks of changes.


Scenario: How do you extend Microsoft 365 Copilot with Copilot Studio for a sales team?

Requirement: Sales team uses M365 Copilot in Teams and Outlook. They want Copilot to help them check CRM data, create leads, and get deal status — without leaving the M365 Copilot interface.

Solution: Declarative Agent / Conversational Plugin

  1. Build Copilot Studio agent with D365 Sales actions:

    • Get opportunity status (by name or account)
    • Create new lead (from natural language description)
    • Get account summary (recent activities, open opps)
    • Schedule follow-up task
  2. Configure as M365 Copilot plugin:

    • Copilot Studio → Channels → Microsoft 365 Copilot
    • Plugin manifest: name, description, actions exposed
    • Publish to Teams app catalogue
  3. Deploy to sales team:

    • Teams Admin Centre → approve the plugin
    • Assign to "Sales" security group
    • Sales team sees "Sales CRM Assistant" in M365 Copilot
  4. Usage:

    • User in Teams Copilot: "What's the status of the Contoso deal?"
    • M365 Copilot invokes Sales CRM plugin → queries D365 Sales → returns current stage, value, next steps
    • User: "Create a follow-up task for Alice for next Monday"
    • Plugin creates task in D365 Sales linked to the opportunity
  5. Security: Actions use user's Entra ID delegated token (on behalf of user) → user only sees their own pipeline data.


8. Cheat Sheet — Quick Reference

Copilot Studio Node Types

Message:          send text, adaptive card, or image to user
Question:         ask user for input → store in variable
Condition:        if/else/else-if branching
Call an action:   execute Power Automate flow or plugin
Redirect:         jump to another topic (preserve context)
End topic:        end current topic, return to parent
End conversation: close conversation session
Generative answers: AI search and respond from knowledge sources
Variable management: set / get / clear variables
Send adaptive card: rich card with buttons/inputs
Authentication:   trigger sign-in flow
Transfer to agent: hand off to human in Omnichannel

Variable Types

Topic variables:    scoped to current topic only (Topic.VarName)
Global variables:   persist entire conversation (Global.VarName)
System variables:   built-in read-only (System.User.Email, etc.)
Environment vars:   Power Platform config (no secrets in topics)

Key system variables:
System.User.DisplayName    → signed-in user's name
System.User.Email          → signed-in user's email
System.User.Id             → Entra ID Object ID
System.Activity.Text       → user's latest message
System.Conversation.Id     → session identifier
System.Channel             → Teams / WebChat / DirectLine
System.User.IsLoggedIn     → true / false
AuthToken                  → OAuth token for API calls

Knowledge Source Selection Guide

Use case                              → Knowledge Source
Public website content                → Public website URL
SharePoint documents (internal)       → SharePoint site/library
Dynamics 365 Knowledge Articles       → Dataverse (KB table)
Small set of static PDFs              → Uploaded files
Large enterprise document repository  → Azure AI Search index
Specific Q&A pairs (controlled)       → Q&A pairs in Copilot Studio

Analytics KPIs

Resolution rate:    (Sessions resolved / Total sessions) × 100
Target: > 70% for a well-configured agent

Escalation rate:    (Escalated sessions / Total sessions) × 100
Target: < 20% for a mature agent

CSAT:               end-of-conversation satisfaction score
Target: > 4.0 / 5.0

Abandon rate:       sessions where user left without resolution
Target: < 15%

Topic coverage:     % sessions handled by defined topics
Generative rate:    % responses from AI knowledge search
Unrecognised:       queries matched to fallback — review these
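
The four rate formulas above are straightforward to compute from raw session counts; a small sketch with illustrative numbers:

```python
# Hedged sketch computing the KPI formulas above. The session counts are
# illustrative, not real analytics data.

def kpis(total, engaged, resolved, escalated, abandoned):
    pct = lambda n: round(n / total * 100, 1)
    return {
        "engagement_rate": pct(engaged),
        "resolution_rate": pct(resolved),
        "escalation_rate": pct(escalated),
        "abandon_rate":    pct(abandoned),
    }

print(kpis(total=1000, engaged=850, resolved=720, escalated=150, abandoned=130))
# resolution 72.0 clears the >70% target; abandon 13.0 is under the 15% target
```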

ALM Commands (pac CLI)

# Authenticate to environment
pac auth create --url https://contoso.crm.dynamics.com

# Export solution containing agent
pac solution export --name CopilotSolution --path ./solutions

# Import solution to target environment
pac solution import --path ./solutions/CopilotSolution_managed.zip

# Export agent definition to YAML (source control)
pac copilot export --botId {botId} --path ./copilot-source

# Import agent from YAML
pac copilot import --path ./copilot-source

# Publish agent channels after import
pac copilot publish --botId {botId}

Top 10 Tips

  1. Generative Answers fills the gaps between topics — you don't need a topic for every question. Configure knowledge sources and let the LLM handle queries not covered by explicit topics. This dramatically reduces authoring effort.

  2. Generative Orchestration replaces keyword matching — AI determines topic intent instead of trigger phrases. More resilient to paraphrasing, fewer trigger phrase maintenance headaches. Know the trade-off: uses more AI credits.

  3. Autonomous agents are event-triggered, not user-triggered — they run in the background, execute multi-step tasks, and don't require a chat UI. Know the difference from classic chatbots for any architecture question.

  4. Action descriptions are prompt engineering — the quality of generative action descriptions determines whether the AI calls the right action at the right time. Treat descriptions as careful prompt engineering, not just documentation.

  5. Multi-agent = separation of concerns — orchestrator routes, skill agents specialise. Each skill agent has focused knowledge, fewer topics, and better accuracy. Know this pattern for enterprise scale questions.

  6. Teams SSO is seamless — when deployed to Teams, users are automatically authenticated via Teams SSO. No sign-in prompt needed. This is the preferred deployment for internal enterprise agents.

  7. Bot-to-human handoff must pass context — set context variables before the Transfer node. D365 Omnichannel agent receives customer details, conversation transcript, and bot-determined category. "No customer repetition" is the key value.

  8. DLP applies to Copilot Studio connectors — Power Platform DLP policies control which connectors agents can use. The Copilot Studio connector itself is subject to DLP classification. Know this for governance questions.

  9. Unrecognised phrases = improvement backlog — review the analytics "Unrecognised phrases" report regularly. Each unrecognised query is an opportunity to create a new topic or add content to a knowledge source.

  10. pac CLI for ALMpac copilot export/import + pac solution export/import are the production-grade deployment tools. Manual export from the UI is for development only. Always use pac CLI in CI/CD pipelines.


