Microsoft Azure AZ-204 Sample Questions

The Microsoft Azure AZ-204 exam assesses your ability to accomplish the following technical tasks: developing Azure compute solutions, developing for Azure storage, implementing Azure security, monitoring, troubleshooting, and optimizing Azure solutions, and connecting to and consuming Azure and third-party services. Candidates should also have subject-matter experience designing, building, testing, and maintaining cloud applications and services on Microsoft Azure before taking this exam.

One of an Azure Developer's primary responsibilities is participating in all phases of cloud development, from gathering requirements through design, development, deployment, maintenance, performance tuning, and monitoring. This article provides a list of Microsoft Azure AZ-204 Sample Questions that cover the core exam topics, including:

  • Develop Azure compute solutions (25-30%)
  • Develop for Azure storage (10-15%)
  • Implement Azure security (15-20%)
  • Monitor, troubleshoot, and optimize Azure solutions (10-15%)
  • Connect to and consume Azure services and third-party services (25-30%)

Advanced Sample Questions

What is Azure Virtual Machines used for?

  • A) To run virtual machines in the cloud.
  • B) To store and manage data in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To run virtual machines in the cloud.

Explanation: Azure Virtual Machines is a service that enables organizations to run virtual machines in the cloud. It provides a fast and simple way to create and manage virtual machines and lets organizations run a variety of operating systems and applications in the cloud. Azure Virtual Machines supports Windows and Linux operating systems and integrates easily with other Azure services, such as Azure App Service and Azure Functions. By using Azure Virtual Machines, organizations can reduce the cost and complexity of managing virtual machines and simplify the deployment and management of their applications and services.

What is Azure Resource Manager (ARM)?

  • A) A deployment and management tool for Microsoft Azure resources.
  • B) A virtual network in Azure.
  • C) An Azure service that provides data storage and retrieval.

Answer: A) A deployment and management tool for Microsoft Azure resources.

Explanation: Azure Resource Manager (ARM) is a deployment and management tool for Microsoft Azure resources. It provides a single management plane to deploy, manage, and monitor all the resources in an Azure solution. ARM templates are JSON files that describe the resources, configuration, and deployment for an Azure solution. By using ARM, organizations can manage their resources in a consistent and predictable manner, automate the deployment and management of their solutions, and monitor their resources in real-time.
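
For a rough sense of what an ARM template contains, here is a minimal sketch built as a Python dict (the storage account name, location, API version, and SKU are placeholder assumptions, not values from this article):

```python
import json

# A minimal ARM template expressed as a Python dict. The structure
# (schema, contentVersion, resources) mirrors the JSON that an ARM
# deployment accepts; the resource values below are illustrative.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "examplestorage123",
            "location": "eastus",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

# Serialize to the JSON that a deployment would submit to ARM.
print(json.dumps(template, indent=2))
```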

What is the purpose of an Azure App Service?

  • A) To host web and mobile applications in the cloud.
  • B) To store and manage data in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To host web and mobile applications in the cloud.

Explanation: Azure App Service is a platform for hosting web and mobile applications in the cloud. It provides a scalable and reliable environment for deploying and managing web and mobile applications, and offers a range of features and services to support the development and deployment of these applications. Azure App Service provides a scalable, secure, and highly available environment for deploying and running applications, and makes it easy to manage and monitor the performance of these applications.

What is Azure Blob Storage used for?

  • A) To store and manage data in the cloud.
  • B) To host web and mobile applications in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To store and manage data in the cloud.

Explanation: Azure Blob Storage is used to store and manage unstructured data, such as text and binary data, in the cloud. It is a scalable and highly available storage solution that provides organizations with a secure and reliable way to store and manage large amounts of data. Azure Blob Storage can be used for a variety of data scenarios, including the storage of documents, images, audio, and video files. By using Azure Blob Storage, organizations can reduce the cost and complexity of managing data storage and retrieval, and improve the performance and scalability of their data storage solutions.
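
As a minimal sketch of the blob upload/download flow using the Python Storage SDK (the connection string, container, and blob names are placeholder assumptions):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string; in practice this comes from configuration.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("documents")

# Upload unstructured data as a block blob, then read it back.
container.upload_blob(name="report.txt", data=b"quarterly figures", overwrite=True)
data = container.get_blob_client("report.txt").download_blob().readall()
print(data)
```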

What is the purpose of Azure Functions?

  • A) To run code in response to events.
  • B) To store and manage data in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To run code in response to events.

Explanation: Azure Functions is a serverless compute service that enables organizations to run code in response to events. It provides a way to run event-driven, scalable, and highly available code without having to manage the underlying infrastructure. Azure Functions can be triggered by a wide range of events, including changes in data, message queues, and HTTP requests, and can run code written in a variety of programming languages. By using Azure Functions, organizations can simplify the development and deployment of event-driven applications, and reduce the cost and complexity of managing infrastructure.
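
For illustration, a minimal HTTP-triggered function in the Azure Functions Python programming model might look like the sketch below (the greeting logic is purely illustrative; the function is paired with an HTTP trigger binding in function.json):

```python
import azure.functions as func

# HTTP-triggered entry point. The code runs only when a request
# arrives; there is no server infrastructure to manage.
def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```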

What is Azure Cosmos DB used for?

  • A) To store and manage globally distributed data.
  • B) To host web and mobile applications in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To store and manage globally distributed data.

Explanation: Azure Cosmos DB is a globally distributed, multi-model database service that is used to store and manage data. It provides organizations with a highly scalable, highly available, and low-latency data storage solution that supports multiple data models, including document, graph, key-value, and columnar data. Azure Cosmos DB provides a variety of consistency options, including strong, eventual, and session consistency, and enables organizations to easily replicate data to any number of regions to provide low-latency access to data for global users. By using Azure Cosmos DB, organizations can build highly scalable and globally distributed applications with a high degree of confidence in the performance and reliability of their data storage solutions.
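
A minimal sketch of writing an item with the Python SDK, assuming placeholder account credentials and illustrative database, container, and property names:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; a real app loads these from configuration.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("appdb")
container = db.create_container_if_not_exists(
    id="orders", partition_key=PartitionKey(path="/customerId")
)

# Writes are routed to a physical partition by the partition key value.
container.create_item({"id": "1", "customerId": "c42", "total": 18.50})
```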

What is Azure Virtual Network used for?

  • A) To host web and mobile applications in the cloud.
  • B) To store and manage data in the cloud.
  • C) To securely connect Azure resources to each other.

Answer: C) To securely connect Azure resources to each other.

Explanation: Azure Virtual Network (VNet) is used to securely connect Azure resources to each other. It provides organizations with a way to create a private network in the cloud and control the flow of inbound and outbound network traffic. Azure VNet enables organizations to create secure connections between resources in the cloud, and to connect to on-premises resources through site-to-site or point-to-site VPN connections. By using Azure VNet, organizations can create a secure and highly available network environment in the cloud, and simplify the deployment and management of their network infrastructure.

What is Azure App Service used for?

  • A) To host web and mobile applications in the cloud.
  • B) To store and manage data in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To host web and mobile applications in the cloud.

Explanation: Azure App Service is a fully managed platform for building, deploying, and scaling web and mobile applications in the cloud. It provides organizations with a way to quickly and easily build, deploy, and manage web and mobile applications, and enables developers to focus on writing code instead of managing infrastructure. Azure App Service supports a variety of programming languages, including .NET, Java, Node.js, PHP, and Python, and provides a highly scalable, highly available, and secure environment for running applications. By using Azure App Service, organizations can simplify the development and deployment of their applications, and reduce the cost and complexity of managing infrastructure.

What is Azure Container Instances used for?

  • A) To run containers in the cloud without managing infrastructure.
  • B) To store and manage data in the cloud.
  • C) To manage and monitor resources in Azure.

Answer: A) To run containers in the cloud without managing infrastructure.

Explanation: Azure Container Instances is a service that enables organizations to run containers in the cloud without having to manage infrastructure. It provides a fast and simple way to run containers, and enables organizations to run containers on demand, without having to manage a container orchestration service. Azure Container Instances provides organizations with a highly scalable, highly available, and secure environment for running containers, and can be easily integrated with other Azure services, such as Azure Functions and Azure App Service. By using Azure Container Instances, organizations can reduce the cost and complexity of running containers in the cloud, and simplify the deployment and management of their containerized applications.

What is Azure Monitor used for?

  • A) To store and manage data in the cloud.
  • B) To manage and monitor resources in Azure.
  • C) To host web and mobile applications in the cloud.

Answer: B) To manage and monitor resources in Azure.

Explanation: Azure Monitor is a service that enables organizations to manage and monitor resources in Azure. It provides organizations with a centralized view of their Azure resources, and enables them to monitor the performance and health of their applications and services. Azure Monitor provides a variety of features, including log analytics, performance monitoring, and alerting, and can be used to monitor resources across a variety of services, including Azure VMs, Azure Functions, and Azure App Service. By using Azure Monitor, organizations can gain a deeper understanding of the performance and health of their applications and services, and take proactive measures to address issues and improve performance.
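
As a hedged sketch of querying Azure Monitor logs from code, the azure-monitor-query library can run a KQL query against a Log Analytics workspace (the workspace ID and the query are placeholder assumptions):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Query a Log Analytics workspace with KQL over the last day.
client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    "<workspace-id>",
    "AppRequests | summarize count() by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1),
)
for table in result.tables:
    for row in table.rows:
        print(row)
```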

Basic Sample Questions

Q1) You are in charge of creating a website. The website will be hosted in Azure. You anticipate a large amount of traffic after the website launches. You must keep the website available and responsive while keeping costs low. You need to deploy the website. So, what are your options?

  1. Deploy the website to a virtual machine. Configure the virtual machine to automatically scale when CPU demand is high.
  2. Deploy the website to an App Service that uses the Shared service tier. Configure the App Service plan to automatically scale when CPU demand is high.
  3. Deploy the website to a virtual machine. Use a Scale Set to increase the virtual machine instance count when CPU load is high.
  4. Deploy the website to an App Service that uses the Standard service tier. Configure the App Service plan to automatically scale when CPU demand is high.

Correct Answer: Deploy the website to an App Service that uses the Standard service tier. Configure the App Service plan to automatically scale when CPU demand is high.

Explanation: WAWS (Windows Azure Web Sites) comes in three modes: Standard, Free, and Shared. Even for sites with only one instance, Standard mode has an enterprise-grade SLA (Service Level Agreement) of 99.9% monthly. Standard mode differs from the other ways to purchase Windows Azure Web Sites in that it runs on dedicated instances.

Refer: Best Practices: Windows Azure Websites (WAWS)

Q2) You develop an HTTP-triggered Azure Function app to process Azure Storage blob data. The app is triggered using an output binding on the blob. The app continues to time out after four minutes. The app must process the blob data. You must ensure that the app does not time out and that the blob data is processed. Solution: Use the Durable Functions async pattern to process the blob data. Is the solution effective in achieving the goal?

  • Yes
  • No

Correct Answer: No

Explanation: Instead, pass the HTTP trigger payload into an Azure Service Bus queue to be handled by a queue trigger function, and return an immediate HTTP success response. Large, long-running functions can cause unexpected timeouts. As a general best practice, refactor large functions into smaller function sets that work together and return responses quickly whenever possible. A webhook or HTTP trigger function, for example, may require an acknowledgment response within a certain time limit; webhooks frequently demand an immediate response. The HTTP trigger payload can be placed in a queue and processed by a queue trigger function, which lets you defer the actual work and respond quickly.

Refer: Best practices for reliable Azure Functions

Q4) Note: After you answer a question in this section, you will not be able to return to it. As a result, these questions will not appear on the review screen. You develop an HTTP-triggered Azure Function app to process Azure Storage blob data. The app is triggered using an output binding on the blob.
The app continues to time out after four minutes. The app must process the blob data. You must ensure that the app does not time out and that the blob data is processed. Solution: Return an immediate HTTP success response by passing the HTTP trigger payload into an Azure Service Bus queue to be handled by a queue trigger function. Is the solution effective in achieving the goal?

  • Yes
  • No

Correct Answer: Yes

Explanation: Large, long-running functions can cause unexpected timeouts. As a general best practice, refactor large functions into smaller function sets that work together and return responses quickly whenever possible. A webhook or HTTP trigger function, for example, may require an acknowledgment response within a certain time limit; webhooks frequently demand an immediate response. The HTTP trigger payload can be placed in a queue and processed by a queue trigger function, which lets you defer the actual work and respond quickly.

Refer: Best practices for reliable Azure Functions
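
A minimal sketch of this queue-offload pattern, assuming a Service Bus queue named blob-work and a placeholder connection string (in practice, a Service Bus output binding achieves the same result with less code):

```python
import azure.functions as func
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# HTTP trigger that enqueues the payload and acknowledges immediately;
# a separate queue-triggered function performs the slow blob processing.
def main(req: func.HttpRequest) -> func.HttpResponse:
    with ServiceBusClient.from_connection_string("<servicebus-connection-string>") as sb:
        with sb.get_queue_sender("blob-work") as sender:
            sender.send_messages(ServiceBusMessage(req.get_body()))
    return func.HttpResponse("Accepted for processing", status_code=202)
```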

Q5) You develop an HTTP-triggered Azure Function app to process Azure Storage blob data. The app is triggered using an output binding on the blob. The app continues to time out after four minutes. The app must process the blob data. You must ensure that the app does not time out and that the blob data is processed. Solution: Configure the app to use an App Service hosting plan and enable the Always On setting. Is the solution effective in achieving the goal?

  • Yes
  • No

Correct Answer: No

Explanation: Instead, pass the HTTP trigger payload into an Azure Service Bus queue to be handled by a queue trigger function, and return an immediate HTTP success response. Large, long-running functions can cause unexpected timeouts. As a general best practice, refactor large functions into smaller function sets that work together and return responses quickly whenever possible. A webhook or HTTP trigger function, for example, may require an acknowledgment response within a certain time limit; webhooks frequently demand an immediate response. The HTTP trigger payload can be placed in a queue and processed by a queue trigger function, which lets you defer the actual work and respond quickly.

Refer: Best practices for reliable Azure Functions

Q6) You create a software-as-a-service (SaaS) application for managing images. Photographs are uploaded to a web service, which then stores them in Azure Blob storage. The storage account type is General-purpose V2. When photographs are uploaded, they must be processed to create and save a mobile-friendly version of the image. Processing must begin in less than one minute after upload. You must design the process that initiates the photo processing.
Solution: Move photo processing to an Azure Function that is triggered by the blob upload. Is the solution effective in achieving the goal?

  • Yes
  • No

Correct Answer: Yes

Explanation: Applications can react to events using Azure Storage events. Image or video processing, search indexing, or any file-oriented workflow are examples of common Blob storage event scenarios. Azure Event Grid pushes events to subscribers like Azure Functions, Azure Logic Apps, and even your own HTTP listener.

Refer: Reacting to Blob storage events
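
A minimal sketch of such a blob-triggered function (the trigger binding itself lives in function.json and points at the upload container; the processing step is a placeholder):

```python
import azure.functions as func

# Blob-triggered entry point: runs shortly after a blob is uploaded.
def main(myblob: func.InputStream):
    image_bytes = myblob.read()
    # ... create and save a mobile-friendly rendition here ...
    print(f"Processing {myblob.name}, {len(image_bytes)} bytes")
```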

Q7) An application must access the transaction logs of all changes to the blobs and blob metadata in a storage account for auditing purposes. Only create, update, delete, and copy operations are included, and for compliance reasons the changes must be retained in the order in which they occurred. The transaction logs must be processed asynchronously. So, what are your options?

  1.  Process all Azure Blob storage events by using Azure Event Grid with a subscriber Azure Function app.
  2.  Enable the change feed on the storage account and process all changes for available events.
  3.  Process all Azure Storage Analytics logs for successful blob events.
  4.  Use the Azure Monitor HTTP Data Collector API and scan the request body for successful blob events.

Correct Answer: Enable the change feed on the storage account and process all changes for available events.

Explanation: The goal of the change feed is to give transaction logs of all modifications made to your storage account’s blobs and blob metadata. The change feed provides a read-only log of these modifications that is organized, guaranteed, durable, and immutable. Client applications can read these logs in streaming or batch mode at any time. The change feed enables you to create cost-effective and scalable solutions for processing change events in your Blob Storage account.

Refer: Change feed support in Azure Blob Storage
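
As a sketch, assuming the azure-storage-blob-changefeed package and a placeholder connection string, reading the ordered change events might look like this:

```python
# Requires the azure-storage-blob-changefeed package and a storage
# account with the change feed enabled.
from azure.storage.blob.changefeed import ChangeFeedClient

client = ChangeFeedClient.from_connection_string("<storage-connection-string>")

# Events are returned in the order they occurred; each event is a
# dict-like record following the change feed event schema.
for event in client.list_changes():
    print(event["eventType"], event["subject"])
```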

Q8) You're working on an Azure Function App that processes images uploaded to an Azure Blob storage container. After images are uploaded, they must be processed as quickly as possible, and the solution must minimize latency. You write code that processes the images when the Function App is triggered. You must configure the Function App. So, what are your options?

  1. Use an App Service plan. Configure the Function App to use an Azure Blob Storage input trigger.
  2.  Use a Consumption plan. Configure the Function App to use an Azure Blob Storage trigger.
  3. Use a Consumption plan. Configure the Function App to use a Timer trigger.
  4. Use an App Service plan. Configure the Function App to use an Azure Blob Storage trigger.
  5. Use a Consumption plan. Configure the Function App to use an Azure Blob Storage input trigger.

Correct Answer: Use a Consumption plan. Configure the Function App to use an Azure Blob Storage trigger.

Explanation: When a new or updated blob is discovered, the Blob storage trigger starts a function. The function receives the contents of the blob as input. A function app on a single virtual machine (VM) is limited to 1.5 GB of memory on the Consumption plan.

Refer: Azure Blob storage trigger for Azure Functions

Q9) You're getting ready to publish a website from a GitHub repository to an Azure Web App. A script generates the website's static content. You intend to use the continuous deployment feature of Azure Web Apps. You must run the static generation script before the website starts serving traffic. Which two options achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct answer is worth one point.

  1.  Add the path to the static content generation tool to WEBSITE_RUN_FROM_PACKAGE setting in the host.json file.
  2. Add a PreBuild target in the website's .csproj project file that runs the static content generation script.
  3. Create a file named run.cmd in the folder /run that calls a script which generates the static content and deploys the website.
  4. Create a file named .deployment in the root of the repository that calls a script which generates the static content and deploys the website.

Correct Answer: Add the path to the static content generation tool to WEBSITE_RUN_FROM_PACKAGE setting in the host.json file and Create a file named .deployment in the root of the repository that calls a script which generates the static content and deploys the website.

Explanation: Your functions can run directly from a deployment package file in your function app in Azure. To enable your function app to run from a package, add a WEBSITE_RUN_FROM_PACKAGE setting to your function app settings. To customize your deployment, include a .deployment file in the root of your repository with the following content:

[config]
command = YOUR COMMAND TO RUN FOR DEPLOYMENT

This command can simply run a script (such as a batch file) that does everything your deployment needs, for example, copying files from the repository to the web root directory.

Refer: Run your functions from a package file in Azure

Q10) You're working on a web application that is secured by the Azure Web Application Firewall (WAF). Traffic to the web app is routed through an Azure Application Gateway instance that is shared by several web apps. The web app's URL is contoso.azurewebsites.net. All traffic must be secured with SSL. The Azure Application Gateway instance is used by multiple web apps. You must configure the Azure Application Gateway for the web app. Which two actions should you take? Each correct answer presents part of the solution.

  1. In the Azure Application Gateway’s HTTP setting, enable the Use for App service setting.
  2.  Convert the web app to run in an Azure App service environment (ASE).
  3. Add an authentication certificate for contoso.azurewebsites.net to the Azure Application Gateway.
  4. In the Azure Application Gateway’s HTTP setting, set the value of the Override backend path option to contoso22.azurewebsites.net.

Correct Answer: In the Azure Application Gateway's HTTP setting, enable the Use for App service setting, and set the value of the Override backend path option to contoso22.azurewebsites.net.

Explanation: The HTTP settings include the ability to specify a host override, which can be applied to any back-end pool during rule creation, as well as the ability to derive the host name from the back-end pool members' IP or FQDN. When configured to derive the host name from an individual back-end pool member, the HTTP settings dynamically pick the host name from the FQDN of that back-end pool member. SSL termination and end-to-end SSL are required with multi-tenant services. Trusted Azure services, such as Azure App Service web apps, do not require whitelisting the backends in the application gateway when using end-to-end SSL. As a result, no authentication certificates are required.

Refer: Configure App Service with Application Gateway

Q11) You're creating a website that stores data in Azure Blob storage. You configure the Azure Blob storage lifecycle to move all blobs to the archive tier after 30 days. Customers have requested a service-level agreement (SLA) for viewing data older than 30 days. You must document the minimum SLA for data recovery. Which SLA should you use?

  1.  at least two days
  2. between one and 15 hours
  3. at least one day
  4. between zero and 60 minutes

Correct Answer: between one and 15 hours

Explanation: The archive access tier has the lowest storage cost, but higher data retrieval costs than the hot and cool tiers. Depending on the priority of the request, retrieving data from the archive tier can take several hours. For small objects, a high-priority rehydration may retrieve the object from archive in less than an hour.

Refer: Hot, Cool, and Archive access tiers for blob data
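
For reference, rehydrating an archived blob with the Python SDK might look like this sketch (the connection string, container, and blob names are placeholders):

```python
from azure.storage.blob import BlobClient

# Placeholder connection string, container, and blob names.
blob = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="archive", blob_name="data.csv"
)

# Move the archived blob back to the Hot tier; "High" priority targets
# sub-hour retrieval for small blobs, while standard priority can take
# up to 15 hours.
blob.set_standard_blob_tier("Hot", rehydrate_priority="High")
```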

Q12) You are in charge of creating Azure solutions. When an Azure virtual machine finishes processing data, a message must be sent to a .NET application. The messages must not persist after the receiving application has processed them. You must implement the .NET object that will receive the messages. Which object should you use?

  1. QueueClient
  2. SubscriptionClient
  3. TopicClient
  4. CloudQueueClient

Correct Answer: CloudQueueClient

Explanation: A queue allows a message to be processed by a single consumer, and the message does not persist once it has been processed. You need a CloudQueueClient to access the Azure Storage queue.

Refer: Service Bus queues, topics, and subscriptions
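
CloudQueueClient belongs to the classic .NET Storage library; as an illustrative sketch, the equivalent receive-and-delete flow with the Python Storage Queues SDK looks like this (the queue name and connection string are placeholders):

```python
from azure.storage.queue import QueueClient

# Placeholder connection string and queue name.
queue = QueueClient.from_connection_string("<storage-connection-string>", "results")

# Each message is handled by a single consumer and deleted after
# processing, so it does not persist.
for message in queue.receive_messages():
    print("processing:", message.content)
    queue.delete_message(message)
```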

Q13) You have an existing Azure storage account that stores large amounts of data across multiple containers. All data must be copied from the existing storage account to a new storage account. The copy process must meet the following criteria: automate data movement, minimize the user input required to perform the operation, and ensure that the data movement process is recoverable. Which tool should you use?

  1.  AzCopy
  2. Azure Storage Explorer
  3.  Azure portal
  4. .NET Storage Client Library

Correct Answer: AzCopy

Explanation: Using the AzCopy v10 command-line utility, you can copy blobs, folders, and containers between storage accounts. Since the copy operation is synchronous, when the command completes, it means all files have been copied.

Refer: Copy blobs between Azure storage accounts by using AzCopy

Q14) You're using the Azure Cosmos DB SQL API to create an Azure Cosmos DB solution. The database contains millions of documents, and a single document can contain hundreds of properties. The document properties have no distinct partitioning values. Azure Cosmos DB must scale individual database containers to meet the application's performance requirements by distributing the workload evenly across all partitions over time. You must choose a partition key. Which two partition keys can you use? Each correct answer presents a complete solution.

  1.  a single property value that does not appear frequently in the documents
  2.  a value containing the collection name
  3.  a single property value that appears frequently in the documents
  4.  a concatenation of multiple property values with a random suffix appended
  5.  a hash suffix appended to a property value

Correct Answer: a concatenation of multiple property values with a random suffix appended and a hash suffix appended to a property value

Explanation: A partition key can be formed by concatenating multiple property values into a single synthetic partition key property. Appending a random suffix to the end of the partition key value is another way to distribute the workload more evenly. Distributing items this way allows parallel write operations across partitions.

Refer: Create a synthetic partition key
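
A small sketch of both techniques in plain Python (the property names and bucket count are illustrative):

```python
import hashlib
import random

def synthetic_key_random(device_id: str, buckets: int = 10) -> str:
    # Random suffix: spreads writes evenly, but reads must fan out
    # across all possible suffixes.
    return f"{device_id}-{random.randint(0, buckets - 1)}"

def synthetic_key_hashed(device_id: str, date: str, buckets: int = 10) -> str:
    # Hash suffix: deterministic, so a point read can recompute the key.
    digest = hashlib.sha256(f"{device_id}{date}".encode()).hexdigest()
    return f"{device_id}-{int(digest, 16) % buckets}"

print(synthetic_key_random("sensor42"))
print(synthetic_key_hashed("sensor42", "2023-01-01"))
```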

Q15) You've added a new Azure subscription to your account. You are developing an internal website for employees to view sensitive data. The website uses Azure Active Directory (Azure AD) for authentication. You must implement multifactor authentication for the website. Which two actions should you take? Each correct answer presents part of the solution.

  1. Configure the website to use Azure AD B2C.
  2. In Azure AD, create a new conditional access policy.
  3. Upgrade to Azure AD Premium.
  4. In Azure AD, enable application proxy.
  5. In Azure AD conditional access, enable the baseline policy.

Correct Answer: In Azure AD, create a new conditional access policy, and upgrade to Azure AD Premium.

Explanation: MFA is enabled through a conditional access policy, the most flexible way to enable two-step verification for your users. Conditional access policy is an Azure AD Premium feature and works only with Azure MFA in the cloud.

Refer: Plan an Azure Active Directory Multi-Factor Authentication deployment

Q16) You're working on a Java application that stores key and value data in Cassandra. You plan to use a new Azure Cosmos DB resource and the Cassandra API in the application. You create an Azure Active Directory (Azure AD) group named Cosmos DB Creators to enable provisioning of Azure Cosmos accounts, databases, and containers. The Azure AD group must not have access to the keys that are required to access the data. You must restrict access for the Azure AD group. Which role-based access control role should you assign?

  1. DocumentDB Accounts Contributor
  2. Cosmos Backup Operator
  3. Cosmos DB Operator
  4. Cosmos DB Account Reader

Correct Answer: Cosmos DB Operator

Explanation: Cosmos DB Operator is an RBAC role in Azure Cosmos DB. This role lets you create Azure Cosmos accounts, databases, and containers, but it does not grant access to the keys required to access the data. It is intended for scenarios where Azure Active Directory service principals must be able to manage Cosmos DB deployment operations, including the account, database, and containers.

Refer: Azure Cosmos DB Operator role for role-based access control (RBAC) is now available

Q17) Your application includes an Azure Web App and several Azure Function apps. Application secrets, such as connection strings and certificates, are stored in Azure Key Vault. Secrets must not be stored in the application or the runtime environment, and changes to Azure Active Directory (Azure AD) must be minimized. You must devise a method for loading application secrets. So, what are your options?

  1. Create a single user-assigned Managed Identity with permission to access Key Vault and configure each App Service to use that Managed Identity.
  2. Create a single Azure AD Service Principal with permission to access Key Vault and use a client secret from within the App Services to access Key Vault.
  3. Create a system-assigned Managed Identity in each App Service with permission to access Key Vault.
  4. Create an Azure AD Service Principal with permissions to access Key Vault for each App Service and use a certificate from within the App Services to access Key Vault.

Correct Answer: Create a single user-assigned Managed Identity with permission to access Key Vault and configure each App Service to use that Managed Identity. 

Explanation: Use Key Vault references for App Service and Azure Functions so that apps load secrets directly from Key Vault instead of storing them in settings or code. A single user-assigned managed identity shared by all the apps keeps Azure AD changes to a minimum. (When this question was written, Key Vault references supported only system-assigned managed identities; user-assigned identities are supported today.)

Refer: Use Key Vault references for App Service and Azure Functions
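
As a sketch of loading a secret with a managed identity at runtime (the vault URL and secret name are placeholders; DefaultAzureCredential picks up the app's managed identity when running in App Service):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# No secret lives in app settings or code: the credential resolves to
# the managed identity assigned to the App Service at runtime.
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)
conn_str = client.get_secret("SqlConnectionString").value
```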

Q18) You maintain an Azure solution that uses blobs stored in two Azure Storage containers named Container1 and Container2. Blobs uploaded to Container1 must be copied to Container2 automatically. You must devise a method for copying the blobs. So, what are your options?

  1. Copy blobs to Container2 by using the Put Blob operation of the Blob Service REST API
  2. Create an Event Grid topic that uses the Start-AzureStorageBlobCopy cmdlet
  3. Use AzCopy with the Snapshot switch to copy blobs to Container2
  4. Download the blob to a virtual machine and then upload the blob to Container2

Correct Answer: Create an Event Grid topic that uses the Start-AzureStorageBlobCopy cmdlet

Explanation: The Start-AzureStorageBlobCopy cmdlet begins copying a blob in Azure Storage.

Refer: Start-AzureStorageBlobCopy
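
Start-AzureStorageBlobCopy is a PowerShell cmdlet; for comparison, a sketch of the same server-side copy with the Python SDK (the connection strings and names are placeholders; a private source blob would need a SAS token appended to its URL):

```python
from azure.storage.blob import BlobClient

# Placeholder connection strings, containers, and blob names.
source = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="container1", blob_name="photo.jpg"
)
destination = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="container2", blob_name="photo.jpg"
)

# Server-side, asynchronous copy; the blob data never transits the client.
destination.start_copy_from_url(source.url)
```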

Q19) You're working on an ASP.NET Core site that uses Azure Front Door. The service lets researchers create custom weather data sets, which users can download in comma-separated values (CSV) format. The data is updated every ten hours. Specific files must be purged from the Front Door cache based on Response Header values. You must purge individual assets from the Front Door cache. Which cache purge method should you use?

  1. single path 
  2.  wildcard
  3. root domain

Correct Answer: single path 

Explanation: The following formats are supported in the list of purge paths:

  • Purge individual assets by specifying the full path of the asset (without the protocol and domain), including the file extension.
  • An asterisk (*) may be used as a wildcard. Purge all subfolders and files under a specific folder by specifying the folder followed by /*, for example, /pictures/*.
  • Purge the root domain of the endpoint with "/" in the path.

Refer: Caching with Azure Front Door

Q20) You work as a developer for a SaaS firm that provides a variety of web services. All web services provided by the company must meet the following conditions:

  • Use API Management to access the services.
  • Use OpenID Connect for authentication.
  • Prevent anonymous usage.

A recent security audit found that several web services can be called without any authentication. Which API Management policy should you implement?
  1.  jsonp
  2. authentication-certificate
  3. check-header
  4. validate-jwt

Correct Answer: validate-jwt

Explanation: To validate the OAuth token for every incoming request, add the validate-jwt policy.

Refer: Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory

Microsoft Azure AZ-204 free practice test