Azure Integration Services — Complete Guide
Logic Apps · Service Bus · API Management · Event Grid · Integration Patterns · Scenarios · Cheat Sheet
Table of Contents
- Azure Integration Services Overview
- Azure Logic Apps — Deep Dive
- Azure Service Bus — Deep Dive
- Azure API Management — Deep Dive
- Azure Event Grid
- Integration Patterns & Architecture
- Scenario-Based Questions
- Cheat Sheet — Quick Reference
1. Azure Integration Services Overview
What are Azure Integration Services and what problems do they solve?
Azure Integration Services is a suite of cloud services for connecting applications, data, and processes across cloud and on-premises environments.
| Service | Purpose |
|---|---|
| Azure Logic Apps | Workflow orchestration — automate processes and integrate systems using a designer-based workflow engine |
| Azure Service Bus | Enterprise message broker — reliable, ordered, durable message queuing and pub/sub |
| Azure API Management (APIM) | API gateway — publish, secure, transform, and monitor APIs |
| Azure Event Grid | Event routing — reactive, event-driven architecture at scale |
| Azure Event Hubs | Big data streaming — high-throughput event ingestion (millions/sec) |
Key positioning: Logic Apps = orchestration (do things in sequence). Service Bus = decoupling (reliable async messaging). APIM = API facade (secure and manage). Event Grid = events (react to things that happened). Event Hubs = telemetry streaming.
What is the difference between Logic Apps and Power Automate?
Logic Apps and Power Automate share the same underlying engine and connector library.
| | Logic Apps | Power Automate |
|---|---|---|
| Audience | Developers, IT, enterprise architects | Business users, makers |
| Hosting | Azure subscription | Microsoft-managed SaaS |
| Pricing | Pay-per-execution (Consumption) or App Service Plan (Standard) | Per user/flow licence |
| VNET/private endpoints | Yes (Standard and Premium) | No |
| Source control / IaC | ARM/Bicep templates, workflow JSON | Solution packages |
| B2B EDI | Yes (Integration Account) | No |
| Local development | Yes (VS Code, Standard tier) | No |
Tip: Same engine, different audience and infrastructure control. Enterprise integration with VNET, IaC, and B2B = Logic Apps. Business user automation = Power Automate.
What is the difference between Logic Apps Consumption and Standard plans?
| | Consumption | Standard |
|---|---|---|
| Infrastructure | Shared multi-tenant | Dedicated App Service Plan |
| Pricing | Pay per action execution | Fixed plan cost |
| VNET integration | No | Yes |
| Private endpoints | No | Yes |
| Workflows per resource | One | Many |
| Local development | No | Yes (VS Code) |
| Stateless workflows | No | Yes (much faster) |
| Best for | Low-volume, simple, quick deploy | Enterprise production, high-volume |
What is Azure Event Grid and how does it differ from Service Bus?
| | Event Grid | Service Bus |
|---|---|---|
| Model | Pub/sub event routing | Message broker |
| Delivery guarantee | At-least-once | At-least-once with acknowledgement |
| Ordering | Not guaranteed | FIFO with sessions |
| Message retention | 24h (default, max 7 days) | Up to 14 days |
| Throughput | 10M events/sec | Up to 1000 msg/sec (Premium) |
| Use for | Something happened — notify subscribers | Message MUST be processed exactly once |
| Dead letter | Yes | Yes (DLQ) |
Use Event Grid when:
→ Azure resource events (blob created, VM deallocated)
→ High-volume, low-latency event fan-out
→ Fire-and-forget is acceptable
→ Many subscribers need the same event
Use Service Bus when:
→ Message MUST be processed exactly once
→ Order of processing matters (sessions)
→ Consumer may be temporarily offline
→ Transactional messaging required
→ Complex routing with filter rules
2. Azure Logic Apps — Deep Dive
What are the key components of a Logic Apps workflow?
| Component | Description |
|---|---|
| Trigger | Starts the workflow. Types: polling, push (webhook), recurrence, manual (HTTP) |
| Actions | Steps after trigger. Each is a connector operation. |
| Connectors | Wrappers around APIs. 400+ built-in. Custom connectors via OpenAPI. |
| Control flow | Condition (if/else), Switch, For Each, Until, Scope |
| Variables | Mutable state within a workflow run |
| Expressions | @{body('action')}, @{utcNow()} — same syntax as Power Automate |
What is the difference between stateful and stateless workflows in Logic Apps Standard?
Stateful workflow: each step's input/output persisted in Azure Storage. Full run history available. Can pause for external callbacks or approvals. Supports long-running workflows (days/weeks). Higher latency, higher cost.
Stateless workflow: data kept in memory only — not persisted. No run history. Cannot pause/resume. Maximum duration: minutes. Much faster (no storage I/O). Much lower cost. Best for high-volume synchronous processing.
Tip: Use stateless for high-volume, fast API transformations (hundreds per second). Use stateful for long-running processes, human-in-the-loop approvals, and workflows needing audit trails.
How does error handling work in Logic Apps?
Logic Apps uses the same Run After and Scope patterns as Power Automate:
Try-Catch-Finally pattern:
[Try Scope]
→ Main workflow steps
[Catch Scope] ← Run After: Try scope → Failed
→ Alert via email/Teams
→ Log to Application Insights
→ Access error: @{result('Try_scope')[0]['error']['message']}
→ Access failed action: @{result('Try_scope')[0]['name']}
[Finally Scope] ← Run After: Try + Catch → all states
→ Cleanup (always runs)
Retry policy per action:
→ None / Fixed / Exponential
→ Count: 1–90, Interval: configurable
→ Auto-retries on HTTP 408, 429, 5xx
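The exponential policy amounts to a capped doubling of the retry interval. A minimal sketch of that logic (the base and cap values here are assumptions, not platform defaults):

```python
import random

def exponential_backoff_intervals(count, base=7.5, cap=300, jitter=False):
    """Delays (seconds) for an 'Exponential' retry policy: the interval
    doubles on each attempt and is capped. base/cap are illustrative."""
    delays = []
    for attempt in range(count):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            # optional jitter spreads retries to avoid thundering herds
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays

print(exponential_backoff_intervals(4))  # → [7.5, 15.0, 30.0, 60.0]
```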
What is an Integration Account in Logic Apps?
An Integration Account is an Azure resource providing B2B EDI integration capabilities.
| Feature | Description |
|---|---|
| Trading partners | Define business partners and their identities |
| Agreements | Message exchange agreements (send/receive settings) |
| Schemas | XML schemas for validating/transforming EDI messages |
| Maps (XSLT) | Transform message formats between partners |
| Certificates | Message signing and encryption |
| EDI protocols | AS2 (secure HTTP), X12 (US EDI), EDIFACT (international EDI) |
Tip: Integration Account is the answer to any question about B2B EDI, trading partners, or AS2/X12/EDIFACT in Logic Apps.
How do you implement Logic Apps in a DevOps CI/CD pipeline?
Consumption plan (ARM templates):
# Export workflow as ARM template
# Store in Git
# Deploy via Azure CLI or DevOps ARM task
az deployment group create \
--resource-group myRG \
--template-file logicapp.json \
--parameters @params.dev.json
Standard plan (workflow JSON — like Azure Functions):
# GitHub Actions deployment
- name: Deploy Logic App Standard
uses: Azure/functions-action@v1
with:
app-name: my-logic-app-standard
package: ./src/logicapp
Environment-specific config:
App Settings per environment (dev/test/prod):
→ Connection strings
→ API endpoints
→ Feature flags
Key Vault references for secrets:
@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/apikey)
What are managed connectors vs built-in connectors in Logic Apps Standard?
Managed connectors: run in Microsoft's shared cloud infrastructure. Make outbound calls to external services (Salesforce, SAP, ServiceNow). Subject to connector throttling limits. Same as Power Automate connectors.
Built-in connectors (Standard only): run inside the Logic App's own host process. Faster, no throttling limits. Include: HTTP, Service Bus, Event Hubs, Azure Blob, SQL, Dataverse, B2B operations. Use built-in over managed where available for better performance.
3. Azure Service Bus — Deep Dive
What is Azure Service Bus and what are its key components?
| Component | Description |
|---|---|
| Namespace | Top-level container. Unique hostname: contoso.servicebus.windows.net |
| Queue | Point-to-point. One sender, competing consumers. Each message processed once. |
| Topic | Pub/sub. One publisher, multiple subscriptions each receiving a copy. |
| Subscription | Named receiver on a topic. Has its own queue + filter rules. |
| Message | Body (256KB Standard / 100MB Premium) + system + custom properties |
Queue (point-to-point):
Producer → [Queue] → Consumer A (competing)
→ Consumer B (competing)
→ ONE consumer processes each message
Topic/Subscription (pub/sub):
Publisher → [Topic] → [Sub A: filter Region='EMEA'] → Consumer A
→ [Sub B: filter Priority='High'] → Consumer B
→ [Sub C: no filter] → Consumer C
→ EACH subscription gets its own independent copy
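The fan-out above can be sketched in a few lines: each matching subscription receives its own copy of the message. The predicate filters are a simplified stand-in for Service Bus SQL filter rules:

```python
def route_to_subscriptions(message, subscriptions):
    """Deliver an independent copy of `message` to every subscription
    whose filter matches its application properties."""
    delivered = {}
    for name, predicate in subscriptions.items():
        if predicate(message["properties"]):
            delivered[name] = dict(message)  # each sub gets its own copy
    return delivered

subs = {
    "sub-emea": lambda p: p.get("Region") == "EMEA",
    "sub-high": lambda p: p.get("Priority") == "High",
    "sub-all":  lambda p: True,  # no filter: receives everything
}
msg = {"body": "order-1", "properties": {"Region": "EMEA", "Priority": "Low"}}
print(sorted(route_to_subscriptions(msg, subs)))  # → ['sub-all', 'sub-emea']
```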
What are Service Bus tiers and which should you use for production?
| Tier | Queues | Topics | Max Msg Size | VNET | Best For |
|---|---|---|---|---|---|
| Basic | ✓ | ✗ | 256KB | ✗ | Dev/test only |
| Standard | ✓ | ✓ | 256KB | ✗ | Low-criticality workloads |
| Premium | ✓ | ✓ | 100MB | ✓ | Enterprise production |
Premium features: dedicated capacity units, geo-disaster recovery, VNET/private endpoints, predictable performance, availability zones.
Warning: Always use Premium for enterprise production — dedicated resources, VNET integration, and predictable latency. Standard has noisy neighbour risk and no network isolation.
What is the Dead Letter Queue (DLQ) and when are messages sent to it?
The DLQ is a sub-queue storing messages that cannot be processed. Every queue and subscription has its own DLQ.
Messages are dead-lettered when:
- Max delivery count exceeded — consumer abandons (nacks) the message more than MaxDeliveryCount times (default: 10)
- TTL expired — message not consumed before its Time-To-Live expires (if dead-lettering on expiry is enabled)
- Subscription filter error — filter expression throws an exception
- Consumer explicitly dead-letters — calls deadLetterAsync() for unprocessable messages
DLQ message properties:
DeadLetterReason → WHY it was dead-lettered
DeadLetterErrorDescription → Detailed error message
EnqueuedTimeUtc → When original message was sent
DeliveryCount → How many times delivery was attempted
DLQ path:
Queue: myqueue/$DeadLetterQueue
Subscription: mytopic/mysubscription/$DeadLetterQueue
Critical: An overflowing DLQ means data loss. Always monitor DLQ counts with Azure Monitor alerts and have a process for reviewing, resubmitting, or archiving dead-lettered messages.
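The max-delivery-count path can be sketched as an abandon-and-retry loop. This is a simplified model of broker behaviour, not SDK code:

```python
MAX_DELIVERY_COUNT = 10  # Service Bus default

def process_with_dead_lettering(message, handler, max_delivery=MAX_DELIVERY_COUNT):
    """Each failed attempt increments the delivery count; once it
    reaches max_delivery the message is dead-lettered with a reason."""
    while message["delivery_count"] < max_delivery:
        message["delivery_count"] += 1
        try:
            handler(message)
            return "completed"
        except Exception as exc:
            message["last_error"] = str(exc)   # abandon → redelivered
    message["dead_letter_reason"] = "MaxDeliveryCountExceeded"
    return "dead-lettered"

def always_fails(message):
    raise ValueError("unparseable payload")    # a poison message

poison = {"body": "bad-payload", "delivery_count": 0}
result = process_with_dead_lettering(poison, always_fails)
print(result, poison["delivery_count"])  # → dead-lettered 10
```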
What are Sessions in Service Bus and when do you use them?
Sessions enable FIFO ordered processing of related messages. A session groups messages by SessionId — all messages with the same SessionId are processed in order by the same consumer instance.
Without sessions (no ordering guarantee):
Producer sends: Order-123-Created, Order-123-Paid, Order-123-Shipped
3 competing consumers → may process Shipped before Paid → wrong state
With sessions (SessionId = "Order-123"):
→ All messages with SessionId="Order-123" locked to ONE consumer
→ Processed in order: Created → Paid → Shipped
→ Other sessions (Order-456, Order-789) processed in parallel by other consumers
→ Scale: number of concurrent sessions = throughput
Enable sessions on queue/subscription:
RequiresSession = true (must be set at creation time)
Send with session:
message.SessionId = orderId;
Receive with session:
var session = await receiver.AcceptNextSessionAsync();
Tip: Sessions are the answer to "how do you guarantee message ordering in Service Bus." Without sessions, ordering is not guaranteed even with a single consumer.
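The grouping can be sketched like this: messages sharing a SessionId keep their arrival order, while separate sessions stay independent and can be processed in parallel:

```python
from collections import defaultdict

def assign_sessions(messages):
    """Group messages by SessionId, preserving arrival order within
    each session. Each group would be locked to one consumer, so
    per-session FIFO holds while sessions run in parallel."""
    sessions = defaultdict(list)
    for msg in messages:
        sessions[msg["session_id"]].append(msg["body"])
    return dict(sessions)

stream = [
    {"session_id": "Order-123", "body": "Created"},
    {"session_id": "Order-456", "body": "Created"},
    {"session_id": "Order-123", "body": "Paid"},
    {"session_id": "Order-123", "body": "Shipped"},
]
print(assign_sessions(stream))
```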
What is the difference between peek-lock and receive-and-delete?
Peek-Lock (recommended):
- Consumer receives message — it is locked (invisible to others) for lock duration
- Consumer processes and calls Complete() → message deleted from queue
- Or calls Abandon() → message released back to the queue for retry
- Or calls DeadLetter() → message moved to the DLQ
- If the consumer crashes — the lock expires and the message re-queues automatically
- Guaranteed at-least-once delivery
Receive-and-Delete:
- Message deleted immediately on receive
- If consumer crashes after receive but before processing → message is permanently lost
- Use only for non-critical, idempotent scenarios (metrics, logging)
Warning: Always use Peek-Lock for business-critical messages. Receive-and-Delete risks data loss if the consumer fails between receiving and completing processing.
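A toy model of the peek-lock lifecycle: lock on receive, delete on Complete, automatic redelivery when a lock expires. This simulates the broker's behaviour and is not the Service Bus SDK:

```python
import time

class PeekLockQueue:
    """Receiving locks a message for lock_seconds; Complete deletes it,
    Abandon releases it, and an expired lock makes the message visible
    again (hence at-least-once delivery)."""
    def __init__(self, lock_seconds=30):
        self.lock_seconds = lock_seconds
        self.messages = []                   # each entry: [body, locked_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for entry in self.messages:
            if entry[1] <= now:              # not currently locked
                entry[1] = now + self.lock_seconds
                return entry
        return None

    def complete(self, entry):
        self.messages.remove(entry)          # acknowledged → deleted

    def abandon(self, entry):
        entry[1] = 0.0                       # released for redelivery

q = PeekLockQueue(lock_seconds=30)
q.send("order-1")
m = q.receive(now=0.0)
assert q.receive(now=10.0) is None           # locked: invisible to others
m2 = q.receive(now=31.0)                     # lock expired → redelivered
q.complete(m2)
print(len(q.messages))  # → 0
```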
What is duplicate detection in Service Bus?
Duplicate detection allows Service Bus to discard messages with a previously seen MessageId within a configurable time window (20 seconds – 7 days).
// Enable at queue/topic creation:
new CreateQueueOptions("myqueue") {
RequiresDuplicateDetection = true,
DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};
// Set MessageId on the sender side:
var message = new ServiceBusMessage(body) {
MessageId = $"{orderId}-{timestamp}"
};
Use when producers may retry sending on network failure — prevents double-processing of the same business event.
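The dedup window can be sketched with a simple MessageId cache. This is a local stand-in for the broker feature, not its implementation:

```python
import time

class DuplicateDetector:
    """Time-window MessageId cache: a message whose MessageId was
    already seen within the window is dropped as a duplicate."""
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.seen = {}  # message_id -> first-seen timestamp

    def accept(self, message_id, now=None):
        now = time.monotonic() if now is None else now
        # evict entries older than the detection window
        self.seen = {mid: t for mid, t in self.seen.items()
                     if now - t < self.window}
        if message_id in self.seen:
            return False                # duplicate: discarded
        self.seen[message_id] = now
        return True

d = DuplicateDetector(window_seconds=600)   # 10-minute window
print(d.accept("order-42", now=0))    # → True  (first delivery)
print(d.accept("order-42", now=5))    # → False (producer retry, deduplicated)
print(d.accept("order-42", now=700))  # → True  (outside the window)
```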
4. Azure API Management — Deep Dive
What is Azure API Management and what problem does it solve?
Azure API Management (APIM) is a fully managed API gateway between API consumers (clients) and API backends.
Core capabilities:
| Capability | Description |
|---|---|
| Security | OAuth 2.0, JWT validation, subscription keys, client certificates, IP filtering |
| Rate limiting | Rate limit and quota per subscription, IP, or custom key |
| Transformation | Modify requests/responses without touching backend code |
| Versioning | Manage multiple API versions, route to different backends |
| Developer portal | Self-service API documentation and subscription management |
| Caching | Cache responses to reduce backend load |
| Analytics | Request logs, metrics, tracing via Azure Monitor + Application Insights |
Key value: Decouple API consumers from backends. Backend can change without consumers knowing — APIM handles the translation.
What are APIM policies and what are the most important ones to know?
Policies are XML-based rules applied in four sections: inbound, backend, outbound, on-error.
<policies>
<inbound>
<!-- Validate JWT (Entra ID) -->
<validate-jwt header-name="Authorization" failed-validation-httpcode="401">
<openid-config url="https://login.microsoftonline.com/{tenantId}/v2.0/.well-known/openid-configuration"/>
<required-claims>
<claim name="aud"><value>api://my-api-id</value></claim>
</required-claims>
</validate-jwt>
<!-- Rate limit per subscription: 100 calls/60s -->
<rate-limit-by-key calls="100" renewal-period="60"
counter-key="@(context.Subscription.Id)"/>
<!-- Add internal key to backend request -->
<set-header name="X-Internal-Key" exists-action="override">
<value>{{internal-api-key-named-value}}</value>
</set-header>
<!-- Route to different backend based on header -->
<choose>
<when condition="@(context.Request.Headers.GetValueOrDefault('X-Version','v1') == 'v2')">
<set-backend-service base-url="https://v2.api.backend.com"/>
</when>
<otherwise>
<set-backend-service base-url="https://v1.api.backend.com"/>
</otherwise>
</choose>
</inbound>
<outbound>
<!-- Remove internal header from response -->
<set-header name="X-Powered-By" exists-action="delete"/>
<!-- Cache successful responses for 5 minutes -->
<cache-store duration="300"/>
<!-- Transform XML response to JSON -->
<xml-to-json kind="direct" apply="always"/>
</outbound>
<on-error>
<return-response>
<set-status code="@((int)context.Response.StatusCode)" reason="@(context.Response.StatusReason)"/>
<set-body>@("Error: " + context.LastError.Message)</set-body>
</return-response>
</on-error>
</policies>
What are APIM tiers?
| Tier | SLA | VNET | Multi-region | Scale Units | Best For |
|---|---|---|---|---|---|
| Developer | None | No | No | 1 | Dev/test only |
| Basic | 99.95% | No | No | 2 | Small, non-critical |
| Standard | 99.95% | No | No | 4 | Medium volume |
| Premium | 99.99% | Yes | Yes | 31 | Enterprise production |
| Consumption | 99.95% | No | No | Serverless | Very low volume, serverless |
Tip: Premium is the only tier with VNET integration and multi-region deployment. Required when backends are in private networks or when global availability is needed.
What are APIM Products, APIs, and Subscriptions?
API: a backend service exposed through APIM
→ Has operations: GET /orders, POST /orders, DELETE /orders/{id}
→ Has policies applied at API or operation level
Product: a bundle of one or more APIs
→ Controls access: who can subscribe
→ Applies shared rate limits and quotas
→ Types: Open (no approval needed) | Protected (subscription required)
Subscription: a consumer's access key to a Product
→ Primary and secondary keys (for key rotation without downtime)
→ Passed in header: Ocp-Apim-Subscription-Key: {key}
→ Can be scoped to: All APIs | Product | Single API
Example:
APIs: Orders, Inventory, Customers, Analytics
Products:
"Developer Tier" → Orders API only, 10 calls/min, free
"Standard Tier" → Orders + Inventory, 100 calls/min
"Enterprise" → All APIs, 1000 calls/min, SLA guaranteed
Consumer A subscribes to "Standard Tier":
→ Gets subscription key
→ Can call Orders and Inventory up to 100 calls/min
→ Cannot call Customers or Analytics
How do you implement API versioning in APIM?
Three versioning schemes:
| Scheme | URL Format | Example |
|---|---|---|
| URL path | /v{version}/resource | /v1/orders, /v2/orders |
| Query string | /resource?api-version={ver} | /orders?api-version=2024-01-01 |
| Header | Api-Version: {version} | Api-Version: 2024-01-01 |
Each version can point to a different backend, have different policies, and be independently documented. A version set groups all versions together in the developer portal.
Tip: Always create a version set when deploying v1. Retrofitting versioning to an existing API with consumers is painful and disruptive.
What are Named Values in APIM and why are they important?
Named Values (formerly Properties) are key-value pairs stored in APIM and referenced in policies using {{name}} syntax.
<!-- Instead of hardcoding: -->
<set-header name="X-API-Key"><value>abc123supersecret</value></set-header>
<!-- Use Named Value: -->
<set-header name="X-API-Key"><value>{{backend-api-key}}</value></set-header>
Types:
- Plain: static string value
- Secret: encrypted, not visible after saving
- Key Vault reference: value retrieved from Azure Key Vault at runtime
Best practice: Always use Named Values (preferably Key Vault-backed) for secrets and environment-specific values in APIM policies. Never hardcode secrets in policy XML.
5. Azure Event Grid
What is Azure Event Grid and what are its key components?
Event Grid is a fully managed event routing service for building reactive, event-driven architectures.
| Component | Description |
|---|---|
| Event source | What emits events: Azure resources (Blob Storage, Resource Groups, Service Bus) or custom apps |
| Topic | Endpoint where events are sent. System topics (Azure resources) or custom topics |
| Event subscription | Maps a topic to an event handler with optional filter rules |
| Event handler | What processes the event: Azure Functions, Logic Apps, Event Hubs, Service Bus, Webhooks |
Azure Blob Storage (event source)
→ BlobCreated event published to System Topic
→ Event Grid routes to:
Subscription 1 (filter: blobType=image) → Azure Function (resize image)
Subscription 2 (no filter) → Logic App (archive to cold storage)
Subscription 3 (filter: container=reports) → Power Automate (notify team)
When should you use Event Grid vs Event Hubs vs Service Bus?
Event Grid:
→ Discrete events (something happened)
→ Azure resource events (blob created, VM stopped)
→ Low volume, reactive architecture
→ Fan-out to many subscribers
→ Events expire after 24h–7 days
Event Hubs:
→ High-throughput streaming (millions/sec)
→ Time-series data, telemetry, logs
→ Replay capability (retention up to 90 days)
→ Big data pipelines (Spark, Stream Analytics)
Service Bus:
→ Reliable transactional messaging
→ Message must be processed exactly once
→ Order matters (sessions)
→ Consumer may be offline
→ Complex routing with business rules
6. Integration Patterns & Architecture
What is the Competing Consumers pattern?
Multiple consumer instances process messages from a single queue concurrently — scaling throughput horizontally. Each message processed by exactly one consumer.
Service Bus Queue with 3 consumers:
Producer → [Queue: 1000 messages]
Consumer 1 (Azure Function instance) ← picks up messages
Consumer 2 (Azure Function instance) ← picks up messages
Consumer 3 (Azure Function instance) ← picks up messages
→ Each message processed by EXACTLY ONE consumer
→ Scale out by adding consumers
→ Azure Functions + Service Bus trigger auto-scales based on queue depth
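A runnable sketch of the pattern, using a local queue and threads in place of Service Bus and Azure Functions:

```python
import queue
import threading

def run_competing_consumers(messages, consumer_count):
    """Each message is taken by exactly one consumer; adding consumers
    scales throughput without changing the producer."""
    work = queue.Queue()
    for m in messages:
        work.put(m)
    # each consumer appends only to its own list, so no lock is needed
    processed = {i: [] for i in range(consumer_count)}

    def consumer(i):
        while True:
            try:
                msg = work.get_nowait()
            except queue.Empty:
                return                   # queue drained: consumer exits
            processed[i].append(msg)

    threads = [threading.Thread(target=consumer, args=(i,))
               for i in range(consumer_count)]
    for t in threads: t.start()
    for t in threads: t.join()
    return processed

result = run_competing_consumers([f"msg-{n}" for n in range(100)], 3)
total = sum(len(v) for v in result.values())
print(total)  # → 100 (every message processed exactly once)
```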
What is the Saga pattern and how do you implement it?
The Saga pattern manages long-running distributed transactions without a central transaction coordinator. Each step publishes an event; compensating transactions undo completed steps if a later step fails.
Order Processing Saga — Orchestration (Logic App):
Step 1: Reserve Inventory API
→ Success: continue
→ Failure: STOP (nothing to compensate)
Step 2: Charge Payment API
→ Success: continue
→ Failure: call Inventory API to RELEASE reservation (compensate Step 1)
Step 3: Create Shipment API
→ Success: Saga complete
→ Failure: call Payment API to REFUND (compensate Step 2)
+ call Inventory API to RELEASE (compensate Step 1)
Choreography (Service Bus Topics):
Each service subscribes to its trigger event
Publishes result event → triggers next service
Compensating events flow in reverse on failure
No central orchestrator — fully decoupled
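The orchestration variant can be sketched as a list of (action, compensate) pairs: when a step fails, the completed steps are compensated in reverse order. The step names are illustrative:

```python
def run_saga(steps):
    """Execute (name, action, compensate) triples in order; on failure,
    run the compensations for all completed steps in reverse."""
    completed = []
    log = []
    for name, action, compensate in steps:
        try:
            action(log)
            completed.append((name, compensate))
        except Exception:
            log.append(f"{name} failed; compensating")
            for _done_name, comp in reversed(completed):
                comp(log)
            return log
    log.append("saga complete")
    return log

def ok(step):   return lambda log: log.append(f"{step} done")
def undo(step): return lambda log: log.append(f"{step} undone")
def fail(step):
    def _f(log): raise RuntimeError(step)
    return _f

log = run_saga([
    ("reserve-inventory", ok("reserve-inventory"), undo("reserve-inventory")),
    ("charge-payment",    ok("charge-payment"),    undo("charge-payment")),
    ("create-shipment",   fail("create-shipment"), undo("create-shipment")),
])
print(log)  # payment refunded, then inventory released
```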
What is the Claim Check pattern?
Offloads large message payloads to external storage and sends only a reference in the Service Bus message.
Problem: Message payload = 5MB → exceeds 256KB Service Bus limit
Solution:
Producer:
1. Upload full payload to Azure Blob Storage
2. Get SAS URL (time-limited access token)
3. Send small message: { "claimCheck": "{sasUrl}", "type": "OrderCreated" }
Consumer:
1. Receive small message from Service Bus
2. Download full payload from Blob using claimCheck URL
3. Process the full payload
4. Delete the blob after successful processing
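A minimal sketch of the pattern, with a dict standing in for Blob Storage and a UUID standing in for the SAS URL:

```python
import uuid

BLOB_STORE = {}                       # stand-in for Azure Blob Storage
MAX_MESSAGE_BYTES = 256 * 1024        # Standard-tier Service Bus limit

def send_with_claim_check(payload: bytes):
    """If the payload exceeds the broker limit, park it in the blob
    store and send only a reference (the 'claim check')."""
    if len(payload) <= MAX_MESSAGE_BYTES:
        return {"body": payload}
    claim = str(uuid.uuid4())         # stand-in for a time-limited SAS URL
    BLOB_STORE[claim] = payload
    return {"claimCheck": claim, "type": "LargePayload"}

def receive_with_claim_check(message):
    """Resolve the reference back to the full payload, then clean up."""
    if "claimCheck" in message:
        return BLOB_STORE.pop(message["claimCheck"])  # delete after read
    return message["body"]

big = b"x" * (5 * 1024 * 1024)        # 5MB: over the 256KB limit
msg = send_with_claim_check(big)
assert "claimCheck" in msg            # only a small envelope is queued
print(receive_with_claim_check(msg) == big, len(BLOB_STORE))  # → True 0
```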
How do Logic Apps, Service Bus, and APIM work together in enterprise integration?
External systems / Trading partners / Mobile apps
↓ HTTPS
[APIM] ← API Gateway (North-South traffic)
→ Authenticate: validate JWT/OAuth/mTLS
→ Rate limit external callers
→ Route to correct backend version
→ Transform: REST→SOAP, JSON→XML
↓
[Logic Apps] ← Orchestration layer
→ Receives HTTP request from APIM
→ Calls multiple backend services (SAP, D365, SQL)
→ Handles retry, error handling, compensation
→ Sends responses back and notifications
↓
[Service Bus] ← Async decoupling layer (East-West traffic)
→ Decouples Logic App from slow backend systems
→ Reliable delivery to downstream consumers
→ Topics/subscriptions fan out to multiple consumers
↓
Backend services (SAP, Dynamics 365, SQL, Custom APIs)
Design principle:
APIM = North-South gateway (external ↔ internal boundary)
Service Bus = East-West bus (internal service ↔ service decoupling)
Logic Apps = Orchestration across both layers
What is the Throttling / Rate Limiting pattern in APIM?
<!-- Rate limit per subscriber: 100 calls per 60 seconds -->
<rate-limit-by-key calls="100" renewal-period="60"
counter-key="@(context.Subscription.Id)"
increment-condition="@(context.Response.StatusCode >= 200
and context.Response.StatusCode < 300)"/>
<!-- Rate limit per IP address -->
<rate-limit-by-key calls="50" renewal-period="60"
counter-key="@(context.Request.IpAddress)"/>
<!-- Weekly quota (business tier limit) -->
<quota-by-key calls="50000" bandwidth="102400"
renewal-period="604800"
counter-key="@(context.Subscription.Id)"/>
<!-- Spike arrest: max 10 calls/second to protect backend -->
<rate-limit calls="10" renewal-period="1"/>
When the limit is exceeded:
- Returns HTTP 429 Too Many Requests
- Includes a Retry-After header
- Custom error body can be returned via policy
Tip: Implement both rate-limit (short window, protect from spikes) and quota (long window, enforce business tier limits) for complete throttling strategy.
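The rate-limit policy is conceptually a fixed-window counter per key. A sketch of that logic (not APIM's implementation):

```python
class FixedWindowLimiter:
    """Fixed-window counter, the same shape as rate-limit-by-key:
    `calls` per `renewal_period` seconds per counter key. A real
    gateway would return HTTP 429 with Retry-After on rejection."""
    def __init__(self, calls, renewal_period):
        self.calls = calls
        self.period = renewal_period
        self.windows = {}                    # key -> (window_start, count)

    def allow(self, key, now):
        start, count = self.windows.get(key, (now, 0))
        if now - start >= self.period:       # window expired: reset
            start, count = now, 0
        if count >= self.calls:
            return False                     # would be a 429
        self.windows[key] = (start, count + 1)
        return True

rate = FixedWindowLimiter(calls=3, renewal_period=60)
results = [rate.allow("sub-123", now=t) for t in (0, 1, 2, 3)]
print(results)                         # → [True, True, True, False]
print(rate.allow("sub-123", now=61))   # → True (new window)
```

A quota works the same way with a much longer renewal period (e.g. 604800 seconds for a week).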
7. Scenario-Based Questions
Scenario: Design a reliable order processing system using Service Bus for 10,000 orders/hour.
- Service Bus Premium namespace: dedicated capacity, ~1000 msg/sec throughput, VNET integration
- Topic orders with subscriptions:
  - inventory-sub: filter OrderType = 'Physical'
  - payment-sub: no filter (all orders), sessions enabled (SessionId = OrderId)
  - analytics-sub: no filter (all orders)
- Azure Functions consumers with Service Bus trigger: KEDA-based auto-scaling
- Dead Letter Queue monitoring: Azure Monitor alert when DLQ count > 0. Logic App notifies ops team.
- Duplicate detection: MessageId = OrderId + SubmissionTimestamp — prevents double-processing on producer retry
- Geo-Disaster Recovery: paired secondary namespace in a secondary region. Failover RTO < 1 minute.
- Message TTL: 24h — orders not processed within 24h go to the DLQ and trigger an alert for manual review
Scenario: A backend API averages 3 second responses. How do you use APIM to improve consumer experience?
- Response caching for stable reference data (product catalogue, lookup tables):
  <cache-lookup vary-by-developer="false">
    <vary-by-header>Accept</vary-by-header>
  </cache-lookup>
  <!-- In outbound: -->
  <cache-store duration="300"/>
- Retry transient backend failures before surfacing an error (note: this is a retry policy, not a true circuit breaker):
  <retry condition="@(context.Response.StatusCode >= 500)" count="3" interval="2" first-fast-retry="true"/>
- Async pattern for operations > 1s:
  - APIM accepts request → sends to Service Bus → returns 202 Accepted + jobId
  - Backend processes async → stores result
  - Consumer polls GET /jobs/{jobId} for status
- Mock response for dev/test environments — no backend call
- Backend timeout policy: set explicit timeout so slow responses don't exhaust APIM threads
- Backend load balancing: configure multiple backend pool members — APIM round-robins or health-checks
Scenario: How do you expose an on-premises SAP API securely to external partners via APIM?
- APIM Premium with Internal VNET mode: APIM deployed inside a VNET. SAP reachable via ExpressRoute/VPN into the same VNET.
- Application Gateway in front of APIM: AG provides public endpoint, WAF, SSL termination. Forwards to internal APIM only.
- Partner authentication: mutual TLS (client certificates) or OAuth 2.0 client credentials. APIM validates certificates against trusted CA store.
- Request transformation: APIM policy transforms REST JSON → SOAP for SAP. Partners never see SAP's native SOAP interface.
- Rate limiting: per-partner subscription limits prevent any partner overwhelming SAP.
- Schema validation: APIM validates request payloads at gateway — bad requests rejected before reaching SAP.
- Logging: all partner API calls → Azure Monitor + Application Insights for audit compliance.
Scenario: Design a Logic App that syncs orders from e-commerce to Dynamics 365 every 15 minutes with deduplication.
- Trigger: Recurrence — every 15 minutes
- Fetch new orders: HTTP GET with ?modifiedAfter=@{addMinutes(utcNow(),-15)}
- Parse JSON: extract the orders array
- For Each (concurrency 5): parallel processing of 5 orders at a time
- Query D365 by external order ID → does it exist?
  - No: create new D365 record
  - Yes: compare modifiedOn → if changed, update
- Try-Catch per order: wrap each order in a Try scope. Catch: log the failed orderId + error to Azure Table Storage. Continue to the next order — don't fail the entire run on one bad record.
- Summary notification: Teams message after the loop: "Sync complete: X created, Y updated, Z failed. See log for details."
- Persist last sync timestamp: update a D365 config record with current timestamp for next run's filter query.
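The upsert-with-dedup step in the loop can be sketched as follows; the field names (externalId, modifiedOn) follow the scenario and are otherwise assumptions:

```python
def sync_order(order, d365_index):
    """Upsert: look up by external order ID, create if missing, update
    only when modifiedOn changed. Returns which branch ran so the
    summary notification can count outcomes."""
    existing = d365_index.get(order["externalId"])
    if existing is None:
        d365_index[order["externalId"]] = dict(order)
        return "created"
    if existing["modifiedOn"] != order["modifiedOn"]:
        d365_index[order["externalId"]] = dict(order)
        return "updated"
    return "skipped"   # unchanged: dedup avoids a pointless write

d365 = {"A-1": {"externalId": "A-1", "modifiedOn": "2024-01-01"}}
batch = [
    {"externalId": "A-1", "modifiedOn": "2024-01-02"},  # changed → update
    {"externalId": "A-2", "modifiedOn": "2024-01-02"},  # new → create
    {"externalId": "A-1", "modifiedOn": "2024-01-02"},  # same → skip
]
outcomes = [sync_order(o, d365) for o in batch]
print(outcomes)  # → ['updated', 'created', 'skipped']
```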
Scenario: How do you implement a pub/sub notification system where different teams receive different order events?
Architecture:
Order Service → publishes to Service Bus Topic: "order-events"
Subscriptions with SQL filter rules:
inventory-team-sub:
Filter: OrderStatus = 'Confirmed' OR OrderStatus = 'Cancelled'
Action: Set RouteTo = 'inventory-queue'
finance-team-sub:
Filter: OrderTotal > 1000 AND PaymentStatus = 'Charged'
shipping-team-sub:
Filter: OrderStatus = 'ReadyToShip' AND DeliveryType = 'Express'
analytics-sub:
No filter — receives ALL events for reporting
Implementation:
Order service sends with properties:
message.ApplicationProperties["OrderStatus"] = "Confirmed";
message.ApplicationProperties["OrderTotal"] = 1500.00;
message.ApplicationProperties["DeliveryType"] = "Express";
Each team's consumer only receives events matching their filter.
Teams can be added/removed without changing the producer.
8. Cheat Sheet — Quick Reference
Service Selection Guide
Need to... → Use
Orchestrate a multi-step process → Logic Apps
Decouple services reliably → Service Bus Queue
Fan out events to multiple consumers → Service Bus Topic OR Event Grid
React to Azure resource events → Event Grid
Ingest millions of telemetry events/sec → Event Hubs
Secure and manage APIs → API Management
Transform request/response format → APIM policy
Rate limit API callers → APIM rate-limit policy
Guarantee message order → Service Bus + Sessions
Handle large payloads (>256KB) → Claim Check pattern + Blob Storage
B2B EDI (AS2, X12, EDIFACT) → Logic Apps + Integration Account
Service Bus Quick Reference
Namespace tiers: Basic (queues only) | Standard | Premium (enterprise)
Max message size: 256KB (Standard) | 100MB (Premium)
Message retention: up to 14 days
Max delivery count: default 10 (configurable)
Queue (point-to-point):
→ Competing consumers
→ Each message processed once
→ Enable sessions for FIFO ordering
Topic (pub/sub):
→ Multiple subscriptions
→ SQL filter rules per subscription
→ Each subscription gets independent copy
Message receive modes:
Peek-Lock: safe, acknowledged, retryable ← always use for business data
Receive-Delete: immediate delete, risk loss ← only for non-critical
DLQ: every queue/subscription has one
Monitor: alert when DLQ count > 0
Path: myqueue/$DeadLetterQueue
APIM Policy Reference
<!-- JWT validation -->
<validate-jwt header-name="Authorization" failed-validation-httpcode="401">
<openid-config url="https://login.microsoftonline.com/{tid}/v2.0/.well-known/openid-configuration"/>
</validate-jwt>
<!-- Rate limit per subscription -->
<rate-limit-by-key calls="100" renewal-period="60"
counter-key="@(context.Subscription.Id)"/>
<!-- Cache response -->
<cache-store duration="300"/>
<!-- Transform XML to JSON -->
<xml-to-json kind="direct" apply="always"/>
<!-- Set backend URL -->
<set-backend-service base-url="https://api.backend.com"/>
<!-- Add header -->
<set-header name="X-Key" exists-action="override">
<value>{{named-value}}</value>
</set-header>
<!-- Remove response header -->
<set-header name="X-Powered-By" exists-action="delete"/>
<!-- Mock response -->
<mock-response status-code="200" content-type="application/json"/>
<!-- Return custom error -->
<return-response>
<set-status code="400"/>
<set-body>{"error": "Invalid request"}</set-body>
</return-response>
Logic Apps — Plan Comparison
Consumption plan:
→ Shared infrastructure
→ Pay per action (~$0.000025/action)
→ No VNET
→ One workflow per resource
→ Good for: low-volume, quick start
Standard plan:
→ Dedicated App Service Plan
→ Fixed cost, predictable
→ VNET integration + private endpoints
→ Multiple workflows per resource
→ Stateless workflows (fast, no history)
→ Local development in VS Code
→ Good for: enterprise production
Top 10 Tips
- Logic Apps vs Power Automate — same engine, different infrastructure control. Enterprise = Logic Apps (VNET, IaC, B2B). Business users = Power Automate. Never say "they're the same thing."
- Standard vs Consumption — Standard is the enterprise choice (VNET, stateless, multi-workflow). Consumption for quick prototypes only. Know this before any architecture question.
- Service Bus sessions = FIFO — sessions are the ONLY way to guarantee message ordering. Without them, ordering is not guaranteed even with a single consumer.
- Peek-Lock, always — Receive-and-Delete risks message loss on consumer crash. Peek-Lock with Complete/Abandon is the correct pattern for any business-critical message.
- DLQ monitoring is non-negotiable — an overflowing DLQ is silent data loss. Azure Monitor alert on DLQ count > 0 is a standard architecture requirement.
- APIM Premium for VNET — it's the only tier with private network integration. If backends are on-premises or in a private VNET, you need Premium.
- Named Values for secrets in APIM policies — never hardcode API keys or connection strings in policy XML. Named Values (Key Vault-backed) are the correct approach.
- Products in APIM — how you bundle APIs and control consumer access tiers. Many candidates know APIs but miss Products as the access control layer.
- Integration Account for B2B EDI — AS2, X12, EDIFACT in Logic Apps requires an Integration Account. This is the expected answer for any B2B/trading partner question.
- APIM + App Gateway pattern — exposing APIM in Internal VNET mode behind an Application Gateway is the standard enterprise pattern for secure public API exposure with backend network isolation.