AWS Certified Solutions Architect – Associate (SAA-C03) Sample Questions

The AWS Certified Solutions Architect – Associate (SAA-C03) certification centers on AWS cloud computing services and opens up a wide range of lucrative career opportunities. AWS offers more than 200 services covering compute, networking, databases, storage, analytics, application services, management, mobile, developer tools, and the Internet of Things (IoT). AWS also provides cloud certifications that attest to your ability to work in the cloud. Each certification, offered at several levels, unlocks improved employment options. Depending on your interests and professional objectives, you can follow one of the certification paths that AWS recommends.

The AWS Certified Solutions Architect – Associate exam was created for people looking to enter this line of work.

Candidates who can perform the duties of a solutions architect should take the AWS Certified Solutions Architect – Associate exam. They should have at least one year of hands-on experience building fault-tolerant, scalable, cost-effective, and highly available distributed systems on AWS. The exam validates a candidate’s ability to use Amazon Web Services technology to design and deliver secure and reliable applications. This article provides a list of AWS Certified Solutions Architect – Associate (SAA-C03) sample questions that cover the core exam topics:

  • Module 1: Overview of Design Secure Architectures
  • Module 2: Overview of Design Resilient Architectures
  • Module 3: Overview of Design High-Performing Architectures
  • Module 4: Overview of Design Cost-Optimized Architectures

Advanced Sample Questions

Which AWS service can be used to store and retrieve any amount of data at any time, from anywhere on the web?

  • a. Amazon Elastic Compute Cloud (EC2)
  • b. Amazon Simple Storage Service (S3)
  • c. Amazon Relational Database Service (RDS)
  • d. Amazon DynamoDB

Answer: b. Amazon Simple Storage Service (S3)

Explanation: Amazon S3 is a highly scalable, durable, and secure object storage service that can store and retrieve any amount of data from anywhere on the web. Amazon EC2 is a web service that provides resizable compute capacity in the cloud. Amazon RDS is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. Amazon DynamoDB is a fully managed NoSQL database service.
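
To make S3's put/get model concrete, here is a minimal boto3 sketch; the bucket name and object key are hypothetical placeholders, and the bucket is assumed to already exist with suitable permissions.

```python
import boto3

s3 = boto3.client("s3")

# Upload (put) an object -- any amount of data, addressed by bucket + key.
s3.put_object(
    Bucket="example-reports-bucket",   # hypothetical bucket name
    Key="2024/summary.txt",
    Body=b"quarterly summary ...",
)

# Retrieve (get) the same object from anywhere with network access and credentials.
response = s3.get_object(Bucket="example-reports-bucket", Key="2024/summary.txt")
print(response["Body"].read().decode())
```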

Which AWS service can be used to monitor and manage your AWS resources and applications?

  • a. Amazon CloudFront
  • b. Amazon CloudFormation
  • c. Amazon CloudWatch
  • d. Amazon Elastic Load Balancer

Answer: c. Amazon CloudWatch

Explanation: Amazon CloudWatch is a monitoring service that provides data and actionable insights for AWS resources and applications. Amazon CloudFront is a global content delivery network (CDN) service. Amazon CloudFormation is a service that allows you to model and provision AWS resources. Amazon Elastic Load Balancer is a service that automatically distributes incoming application traffic across multiple targets, such as EC2 instances.
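
As a rough illustration, a CloudWatch alarm on EC2 CPU utilization can be created with a few lines of boto3; the instance ID and SNS topic ARN below are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average CPU over two consecutive 5-minute periods exceeds 80%.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```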

Which of the following is a best practice for security in AWS?

  • a. Using long, complex passwords that are difficult to remember.
  • b. Granting permissions to users and applications on a need-to-know basis.
  • c. Storing encryption keys in plain text.
  • d. Disabling multi-factor authentication.

Answer: b. Granting permissions to users and applications on a need-to-know basis.

Explanation: Granting permissions to users and applications on a need-to-know basis is a best practice for security in AWS. It is also recommended to use multi-factor authentication, strong and unique passwords, and to encrypt sensitive data. Storing encryption keys in plain text is not recommended, as it makes them vulnerable to theft or misuse.
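
A least-privilege (need-to-know) policy usually scopes actions and resources as narrowly as possible. The sketch below attaches an inline policy to a hypothetical IAM role that allows read access to a single S3 prefix only; the role, bucket, and prefix names are assumptions.

```python
import json
import boto3

iam = boto3.client("iam")

# Need-to-know: only s3:GetObject, and only on one prefix of one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/reports/*",  # placeholder
    }],
}

iam.put_role_policy(
    RoleName="report-reader-role",        # hypothetical role
    PolicyName="read-reports-only",
    PolicyDocument=json.dumps(policy),
)
```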

Which AWS service can be used to automatically scale EC2 instances based on demand?

  • a. Amazon S3
  • b. Amazon RDS
  • c. Amazon CloudFormation
  • d. Amazon EC2 Auto Scaling

Answer: d. Amazon EC2 Auto Scaling

Explanation: Amazon EC2 Auto Scaling is a service that automatically adjusts the number of EC2 instances in a group according to demand. It can help ensure that your application is always available to handle incoming traffic, while minimizing costs during periods of low demand. Amazon S3 is an object storage service, Amazon RDS is a relational database service, and Amazon CloudFormation is a service that allows you to model and provision AWS resources.
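
One common way to express "scale on demand" is a target-tracking scaling policy on average CPU; a minimal sketch follows, with the Auto Scaling group name as a placeholder.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group near 50% by adding or removing instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # hypothetical Auto Scaling group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```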

Which AWS service can be used to route incoming traffic to different endpoints based on configurable routing policies?

  • a. Amazon S3
  • b. Amazon Route 53
  • c. Amazon CloudFront
  • d. Amazon Elastic Load Balancer

Answer: b. Amazon Route 53

Explanation: Amazon Route 53 is a highly available and scalable cloud DNS service that can be used to route traffic to various endpoints based on specified rules. It supports multiple routing policies, including weighted, latency-based, and failover routing. Amazon S3 is an object storage service, Amazon CloudFront is a content delivery network, and Amazon Elastic Load Balancer is a service that automatically distributes incoming application traffic across multiple targets.
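
The routing rules the explanation mentions are expressed as record sets. This sketch creates two weighted A records (a 70/30 split) in a hypothetical hosted zone; the zone ID, domain name, and IP addresses are placeholders.

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, weight, ip):
    # Weighted routing: records share a name/type and differ by SetIdentifier/Weight.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        weighted_record("primary", 70, "203.0.113.10"),
        weighted_record("secondary", 30, "203.0.113.20"),
    ]},
)
```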

Which AWS service can be used to deploy and manage Docker containers on a cluster of EC2 instances?

  • a. Amazon EKS
  • b. Amazon ECS
  • c. Amazon ECR
  • d. Amazon ElastiCache

Answer: b. Amazon ECS

Explanation: Amazon ECS (Elastic Container Service) is a fully managed service that makes it easy to run, stop, and manage Docker containers on a cluster of EC2 instances. It integrates with other AWS services like Elastic Load Balancing, EC2 Auto Scaling, and IAM for security and access control. Amazon EKS (Elastic Kubernetes Service) is another service for deploying and managing containerized applications using Kubernetes. Amazon ECR (Elastic Container Registry) is a fully-managed Docker container registry, and Amazon ElastiCache is a managed in-memory data store.
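
To make the ECS workflow concrete, here is a minimal sketch that registers a task definition and runs it as a service on an existing EC2-backed cluster; the cluster, family, and service names are hypothetical, and the cluster is assumed to already have registered container instances.

```python
import boto3

ecs = boto3.client("ecs")

# Describe the container to run (image, memory, port).
ecs.register_task_definition(
    family="web-task",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "memory": 256,
        "portMappings": [{"containerPort": 80}],
    }],
)

# Keep two copies of the task running on the cluster's EC2 container instances.
ecs.create_service(
    cluster="demo-cluster",     # hypothetical cluster, assumed to already exist
    serviceName="web-service",
    taskDefinition="web-task",
    desiredCount=2,
)
```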

Which AWS service can be used to analyze and process large volumes of data in real time?

  • a. Amazon S3
  • b. Amazon Kinesis
  • c. Amazon Redshift
  • d. Amazon RDS

Answer: b. Amazon Kinesis

Explanation: Amazon Kinesis is a fully managed service that makes it easy to collect, process, and analyze real-time, streaming data at scale. It can be used to build custom applications that can respond to data in real time, and can be integrated with other AWS services like Lambda, S3, and DynamoDB. Amazon S3 is an object storage service, Amazon Redshift is a data warehousing service, and Amazon RDS is a relational database service.
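
On the producer side, applications write records to a stream with a partition key, and consumers read them in near real time. A minimal producer sketch is shown below; the stream name is a placeholder and the stream is assumed to already exist.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Producer side: send one sensor reading into the stream.
kinesis.put_record(
    StreamName="sensor-stream",                      # hypothetical stream
    Data=json.dumps({"site": "paris", "temp_c": 21.4}).encode("utf-8"),
    PartitionKey="paris",  # determines which shard receives the record
)
```

Consumers (for example, a Lambda function or a Kinesis Data Analytics application) would then read and process these records from the stream's shards.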

Which AWS service can be used to store and manage secrets, such as database passwords and API keys?

  • a. AWS Secrets Manager
  • b. AWS IAM
  • c. AWS Certificate Manager
  • d. AWS KMS

Answer: a. AWS Secrets Manager

Explanation: AWS Secrets Manager is a service that enables you to easily store and manage secrets, such as database passwords and API keys. It can automatically rotate secrets to help meet compliance requirements, and can be integrated with other AWS services like RDS, DocumentDB, and Lambda. AWS IAM is a service that enables you to manage access to AWS resources, AWS Certificate Manager is a service that enables you to provision, manage, and deploy SSL/TLS certificates for use with AWS services, and AWS KMS is a managed encryption service.
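
Applications typically fetch the secret at runtime instead of hard-coding it, and rotation can be enabled with a rotation Lambda function. The secret name and Lambda ARN below are placeholders.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the database credentials at runtime (no plaintext file on disk).
value = secrets.get_secret_value(SecretId="prod/app/db")      # hypothetical secret
creds = json.loads(value["SecretString"])

# Enable automatic rotation every 30 days using a rotation Lambda function.
secrets.rotate_secret(
    SecretId="prod/app/db",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-db",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 30},
)
```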

Which AWS service can be used to run code in response to events, such as changes to objects in an S3 bucket?

  • a. Amazon S3
  • b. Amazon SQS
  • c. AWS Lambda
  • d. Amazon Kinesis

Answer: c. AWS Lambda

Explanation: AWS Lambda is a compute service that lets you run code in response to events, such as changes to objects in an S3 bucket or messages in an SQS queue. It automatically scales to handle any amount of traffic, and only charges you for the compute time that you consume. Amazon S3 is an object storage service, Amazon SQS is a managed message queue service, and Amazon Kinesis is a real-time data streaming service.
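
An S3-triggered Lambda function receives the bucket and key of each new object in its event payload. A minimal handler might look like this sketch:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Invoked by S3 ObjectCreated notifications (one or more records per event)."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        print(f"New object {key} in {bucket}: {obj['ContentLength']} bytes")
```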

Which AWS service can be used to establish dedicated, private network connections between your on-premises data centers and your AWS resources?

  • a. Amazon VPC
  • b. AWS Direct Connect
  • c. Amazon Route 53
  • d. AWS Firewall Manager

Answer: b. AWS Direct Connect

Explanation: AWS Direct Connect is a network service that enables you to create private, dedicated network connections between your on-premises data centers and your AWS resources. This can help reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. Amazon VPC is a service that enables you to launch Amazon Web Services resources into a virtual network that you’ve defined, and AWS Firewall Manager is a service that centralizes the management of AWS WAF rules across multiple accounts and resources. Amazon Route 53 is a highly available and scalable cloud DNS service.

Basic Sample Questions

Q1) A business gathers data about temperature, humidity, and air pressure in cities on several continents. On average, the business collects 500 GB of data per day from each site. Every site has a high-speed Internet connection. The business needs to aggregate the data from all of these global sites into a single Amazon S3 bucket as quickly as possible. The solution must minimize operational complexity. Which option meets these requirements?

  • A. On the destination S3 bucket, enable S3 Transfer Acceleration. To transfer site data directly to the desired S3 bucket, use multipart uploads.
  • B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy the objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
  • C. Daily AWS Snowball Edge Storage Optimized device jobs should be scheduled to move data from each location to the nearest Region. For item replication to the target S3 bucket, use S3 Cross-Region Replication.
  • D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Regularly take EBS snapshots and copy them to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.

Correct Answer: A
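
A rough sketch of option A using boto3: enable Transfer Acceleration on the destination bucket once, then upload from each site through the accelerate endpoint with multipart transfers. The bucket, key, and file names are hypothetical.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# One-time: turn on Transfer Acceleration for the destination bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="global-weather-data",                 # hypothetical destination bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# From each site: upload via the accelerate endpoint using multipart transfers.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3.upload_file(
    Filename="readings-2024-06-01.parquet",       # hypothetical local file
    Bucket="global-weather-data",
    Key="site-paris/readings-2024-06-01.parquet",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=8),
)
```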

Q2) A business needs the ability to analyse the log files of a proprietary application. The logs are kept in JSON format in an Amazon S3 bucket. Queries will be simple and run on demand. A solutions architect must perform the analysis with minimal changes to the existing architecture. What should the solutions architect do to fulfil these requirements with the LEAST operational overhead?

  • A. Load all the content into one location using Amazon Redshift, then execute the necessary SQL queries from there.
  • B. Keep the logs in Amazon CloudWatch Logs. Run SQL queries from the Amazon CloudWatch console as necessary.
  • C. Run the queries as necessary using Amazon Athena directly with Amazon S3.
  • D. To organise the logs, use AWS Glue. Run the SQL queries on Amazon EMR using a temporary Apache Spark cluster.

Correct Answer: C
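
Option C in practice: point Athena at the S3 logs (via a table defined in the Glue Data Catalog or a CREATE EXTERNAL TABLE statement) and run SQL on demand. The database, table, and output location in this sketch are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Run an ad-hoc SQL query directly against JSON logs stored in S3.
result = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder
)
print("Query started:", result["QueryExecutionId"])
```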

Q3) A firm uses AWS Organizations to manage several AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The firm wants to limit access to this S3 bucket to users of accounts within the organisation in AWS Organizations. Which approach meets these requirements with the LEAST operational overhead?

  • A. Update the S3 bucket policy to include the aws:PrincipalOrgID global condition key with a reference to the organisation ID.
  • B. Create an organisational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
  • C. Track the InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events using AWS CloudTrail. Adjust the S3 bucket policy as necessary.
  • D. Assign a tag to each user who requires S3 bucket access. The S3 bucket policy should now include the aws:PrincipalTag global condition key.

Correct Answer: A
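
Option A boils down to one condition in the bucket policy; a minimal sketch is shown below, with the bucket name and organization ID as placeholders.

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::project-reports-bucket/*",   # placeholder bucket
        "Condition": {
            # Grants access only to principals from accounts in this organization.
            "StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}  # placeholder org ID
        },
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="project-reports-bucket",
    Policy=json.dumps(policy),
)
```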

Q4) An application runs on an Amazon EC2 instance within a VPC. The application processes log files that are kept in an Amazon S3 bucket. The EC2 instance must access the S3 bucket without internet access. Which option will provide private network connectivity to Amazon S3?

  • A. Establish a gateway S3 endpoint in the VPC.
  • B. Stream the logs to Amazon CloudWatch Logs. Export the logs to an S3 bucket.
  • C. On Amazon EC2, make an instance profile that permits S3 access.
  • D. Establish a private link for the S3 endpoint in an Amazon API Gateway API.

Correct Answer: A
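
Option A amounts to a single gateway endpoint that adds S3 routes to the chosen route tables. In this sketch the VPC ID, Region, and route table ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint: S3 traffic stays on the AWS network, no internet gateway or NAT needed.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                      # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",           # adjust to your Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],            # route table of the private subnet
)
```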

Q5) A business hosts a web application on AWS on a single Amazon EC2 instance and stores user-uploaded documents in an Amazon EBS volume. To improve scalability and availability, the business duplicated the architecture, created a second EC2 instance and EBS volume in a different Availability Zone, and placed both instances behind an Application Load Balancer. After this change, users reported that each time they refreshed the website they could see one subset of their documents or the other, but never all of their documents at once. What should a solutions architect recommend so that users can see all of their documents at once?

  • A. Copy the data so that both EBS volumes contain all the documents.
  • B. Configure the Application Load Balancer to direct each user to the server that holds the user's documents.
  • C. Copy the data from both EBS volumes to Amazon EFS. Modify the application so that new documents are saved to Amazon EFS.
  • D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Correct Answer: C

Q6) A corporation keeps large video files in on-premises network attached storage that is accessed over NFS. Each video file ranges in size from 1 MB to 500 GB. Total storage has reached 70 TB and is no longer growing. The corporation decides to migrate the video files to Amazon S3. The corporation must migrate the video files as quickly as possible while using the least possible network bandwidth. Which approach will satisfy these needs?

  • A. Create an S3 bucket. Create an IAM role that has permission to write to the S3 bucket. Use the AWS CLI to copy all the files locally to the S3 bucket.
  • B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer the data to the device. Return the device so that AWS can import the data into Amazon S3.
  • C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
  • D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Correct Answer: B

Q7) A business has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The volume of messages varies greatly and occasionally spikes above 100,000 per second. The business wants to decouple the solution and increase scalability. Which option satisfies these criteria?

  • A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
  • B. To scale the number of EC2 instances based on CPU measurements, deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group.
  • C. Use a single shard to write the messages to Amazon Kinesis Data Streams. Messages can be preprocessed and saved in Amazon DynamoDB using an AWS Lambda function. Set up the consumer applications so they can process messages by reading from DynamoDB.
  • D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues.

Correct Answer: D
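
A minimal sketch of the SNS fan-out pattern behind option D: one topic, one queue per consumer, and a queue policy that lets the topic deliver messages. The topic and queue names are hypothetical, and only one subscriber queue is shown.

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="ingest-topic")["TopicArn"]

queue_url = sqs.create_queue(QueueName="consumer-a-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Allow the topic to send messages to the queue.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    })},
)

# Fan out: every subscribed queue receives a copy of each published message.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
sns.publish(TopicArn=topic_arn, Message=json.dumps({"order_id": 42}))
```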

Q8) A business is migrating a distributed application to AWS. The application serves a variable workload. On the legacy platform, a central server manages jobs across numerous compute nodes. The business wants to modernise the application with a design that maximises scalability and resilience. How should a solutions architect design the architecture to satisfy these needs?

  • A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed by an Auto Scaling group. Enable scheduled scaling in EC2 Auto Scaling.
  • B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed by an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
  • C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed by an Auto Scaling group. Configure AWS CloudTrail as the destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
  • D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed by an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as the destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.

Correct Answer: B
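
Scaling on queue size (option B) can be expressed as a target-tracking policy on a customized SQS metric. This is only a sketch: the group and queue names are placeholders, and in production a backlog-per-instance custom metric is often preferred over the raw queue depth.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Add or remove compute nodes to keep roughly 100 visible messages in the jobs queue.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes-asg",          # hypothetical Auto Scaling group
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],  # placeholder
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)
```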

Q9) An SMB file server operates in a company's data centre. The file server stores large files that are frequently accessed for the first few days after they are created. The files are rarely accessed after 7 days. The total amount of data is growing and is approaching the company's total storage capacity. A solutions architect must expand the company's available storage space without sacrificing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to prevent future storage problems. Which approach will satisfy these needs?

  • A. Use AWS DataSync to copy the data that is older than 7 days from the SMB file server to AWS.
  • B. Construct an Amazon S3 File Gateway to increase the business’s storage capacity. To move the data to S3 Glacier Deep Archive after 7 days, create an S3 Lifecycle policy.
  • C. To increase the company’s storage capacity, develop an Amazon FSx for Windows File Server file system.
  • D. Set up an application to access Amazon S3 on each user’s machine. To move the data to S3 Glacier Flexible Retrieval after 7 days, create an S3 Lifecycle policy.

Correct Answer: B
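
The lifecycle half of option B is a single rule that transitions objects to S3 Glacier Deep Archive after 7 days. In this sketch the bucket name is a placeholder for the bucket that would back the S3 File Gateway file share.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="smb-archive-bucket",        # hypothetical bucket behind the File Gateway
    LifecycleConfiguration={"Rules": [{
        "ID": "deep-archive-after-7-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},       # apply to every object in the bucket
        "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
    }]},
)
```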

Q10) A business has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The business wants to minimise the operational overhead of credential management. What should a solutions architect do to achieve this objective?

  • A. Use AWS Secrets Manager. Turn on automatic rotation.
  • B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  • C. Create an Amazon S3 bucket specifically to hold objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  • D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Correct Answer: A

Q11) A multinational corporation hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application contains both static and dynamic data. The corporation stores its static data in an Amazon S3 bucket. The corporation wants to improve performance and reduce latency for both the static and the dynamic data. The corporation uses its own domain name, registered with Amazon Route 53. What should a solutions architect do to fulfil these requirements?

  • A. Create a distribution on Amazon CloudFront with the S3 bucket and the ALB as origins. To direct traffic to the CloudFront distribution, configure Route 53.
  • B. Set up an Amazon CloudFront distribution with the ALB as the origin. Create an AWS Global Accelerator standard accelerator with the S3 bucket as an endpoint. Configure Route 53 to direct traffic to the CloudFront distribution.
  • C. Set up an Amazon CloudFront distribution with the S3 bucket as the origin. Create an AWS Global Accelerator standard accelerator with the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as the web application's endpoint.
  • D. Set up an Amazon CloudFront distribution with the ALB as the origin. Create an AWS Global Accelerator standard accelerator with the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as the web application's endpoints.

Correct Answer: A

Q14) A corporation performs monthly maintenance on its AWS infrastructure. During these maintenance windows, the corporation must rotate the credentials for its Amazon RDS for MySQL databases across several AWS Regions. Which approach will fulfil these requirements with the LEAST operational overhead?

  • A. Keep the credentials in AWS Secrets Manager as secrets. For the necessary Regions, utilise multi-Region secret replication. Rotate the secrets according to a schedule by configuring Secrets Manager.
  • B. Keep the credentials as secure string parameters in AWS Systems Manager Parameter Store. Use multi-Region secret replication for the necessary Regions. Configure Systems Manager to rotate the secrets on a schedule.
  • C. Keep the login information in a server-side encryption (SSE)-enabled Amazon S3 bucket. Invoke an AWS Lambda function using Amazon EventBridge (Amazon CloudWatch Events) to rotate the credentials.
  • D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.

Correct Answer: A
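
In boto3 terms, option A replicates an existing secret into the other Regions and schedules rotation. This is a sketch only; the secret name, replica Regions, and rotation Lambda ARN are assumptions.

```python
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Replicate the primary secret into the other Regions used during maintenance.
secrets.replicate_secret_to_regions(
    SecretId="prod/rds/mysql",                       # hypothetical secret
    AddReplicaRegions=[{"Region": "eu-west-1"}, {"Region": "ap-southeast-2"}],
)

# Rotate on a schedule with a rotation Lambda (Secrets Manager provides RDS templates).
secrets.rotate_secret(
    SecretId="prod/rds/mysql",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:rotate-mysql",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 30},
)
```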

Q15) A business runs an e-commerce application on Amazon EC2 instances behind an Application Load Balancer. The instances operate in an Amazon EC2 Auto Scaling group across several Availability Zones. The Auto Scaling group scales based on CPU-utilisation metrics. The e-commerce application stores transaction data in a MySQL 8.0 database hosted on a large EC2 instance. As application load increases, the database's performance degrades rapidly. The application handles more read requests than write transactions. The business is looking for a solution that automatically scales the database to meet the demand of the variable read workload while maintaining high availability. Which approach will satisfy these needs?

  • A. Use a single node of Amazon Redshift for leader and computing functionality.
  • B. Use a Single-AZ deployment using Amazon RDS. Set up Amazon RDS so that reader instances can be added in another Availability Zone.
  • C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
  • D. Use EC2 Spot Instances and Amazon ElastiCache for Memcached.

Correct Answer: C

Q16) A business that has recently migrated to AWS needs to implement a solution to protect the traffic entering and leaving the production VPC. The business's on-premises data centre had an inspection server that performed specific tasks such as traffic flow inspection and traffic filtering. The business wants the same functionality in the AWS Cloud. Which approach will satisfy these needs?

  • A. In the production VPC, use Amazon GuardDuty to inspect and filter traffic.
  • B. Mirror traffic from the production VPC using traffic mirroring for traffic inspection and filtering.
  • C. To establish the necessary rules for traffic inspection and traffic filtering for the production VPC, use AWS Network Firewall.
  • D. Create the necessary traffic inspection and filtering rules for the production VPC using AWS Firewall Manager.

Correct Answer: C

Q17) A business hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The business requires a reporting solution that provides data visualisation and integrates all the data sources in the data lake. Only the company's management team should have access to all the visualisations; the rest of the company's employees should have only limited access. Which approach will satisfy these needs?

  • A. Use Amazon QuickSight to create an analysis. Create new datasets by connecting all the data sources. Create dashboards to display data. Give the proper IAM roles access to the dashboards.
  • B. Use Amazon QuickSight to create an analysis. Create new datasets by connecting all the data sources. Create dashboards to display data. Give the right individuals and groups access to the dashboards.
  • C. Create a table and crawler for the data in Amazon S3 using AWS Glue. To generate reports, create an extract, transform, and load (ETL) job in AWS Glue. Reports should be published to Amazon S3. To restrict access to the reports, use the S3 bucket policies.
  • D. Create a table and a crawler for the data in Amazon S3 by using AWS Glue. Use Amazon Athena Federated Query to access data in Amazon RDS for PostgreSQL. Use Amazon Athena to generate reports. Publish the reports to Amazon S3. Use S3 bucket policies to restrict access to the reports.

Correct Answer: B

Q18) A business is deploying a new business application. The application runs on two Amazon EC2 instances and stores documents in an Amazon S3 bucket. A solutions architect must ensure that the EC2 instances can access the S3 bucket. What should the solutions architect do to fulfil this requirement?

  • A. Create an S3 bucket-accessible IAM role. Connect the EC2 instances to the role.
  • B. Establish an S3 bucket access policy in an IAM policy. Connect the EC2 instances to the policy.
  • C. Establish an S3 bucket access-granting IAM group. EC2 instances should be connected to the group.
  • D. Make a user in IAM who has permission to access the S3 bucket. Connect the EC2 instances to the user account.

Correct Answer: A

Q19) An application development team is creating a microservice that resizes and compresses large images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the compressed version of the image in another S3 bucket. A solutions architect must design a solution that uses durable, stateless components to process the images automatically. Which combination of actions will fulfil these requirements? (Select two.)

  • A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue whenever an image is uploaded.
  • B. Configure the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source for the Lambda function. Delete the message from the queue once it has been successfully processed.
  • C. Configure the Lambda function to check the S3 bucket periodically for new uploads. Write the file name of each detected uploaded image to a text file in memory, and use the text file to keep track of the processed images.
  • D. Launch an Amazon EC2 instance that monitors an Amazon Simple Queue Service (Amazon SQS) queue. Log the file name in a text file on the EC2 instance and invoke the Lambda function when items are added to the queue.
  • E. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the S3 bucket. When an image is uploaded, send a notification with the application owner's email address to an Amazon Simple Notification Service (Amazon SNS) topic for further action.

Correct Answer: A and B
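
The glue between options A and B looks roughly like this: S3 sends ObjectCreated notifications to the queue, and the queue is configured as the Lambda function's event source (Lambda removes each message after a successful invocation). The bucket name, queue ARN, and function name are placeholders, and the SQS queue policy that allows S3 to send messages is omitted for brevity.

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

queue_arn = "arn:aws:sqs:us-east-1:123456789012:image-upload-queue"   # placeholder

# 1. S3 -> SQS: notify the queue whenever an image is uploaded.
s3.put_bucket_notification_configuration(
    Bucket="raw-images-bucket",                       # hypothetical upload bucket
    NotificationConfiguration={"QueueConfigurations": [{
        "QueueArn": queue_arn,
        "Events": ["s3:ObjectCreated:*"],
    }]},
)

# 2. SQS -> Lambda: the queue invokes the compression function; messages that are
#    processed successfully are removed from the queue automatically.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="compress-image",                    # hypothetical Lambda function
    BatchSize=1,
)
```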

Q20) A company has a three-tier web application deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in separate private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from the AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets. A solutions architect must integrate the appliance with the web application so that all traffic is analysed before it reaches the web servers. Which approach will fulfil these requirements with the LEAST operational overhead?

  • A. Construct a Network Load Balancer in the application’s VPC’s public subnet to direct traffic to the packet inspection appliance.
  • B. Setup a load balancer for the application in the application’s VPC’s public subnet to direct traffic to the appliance for packet inspection.
  • C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the appliance.
  • D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.

Correct Answer: D

AWS Certified Solutions Architect - Associate (SAA-C03) free practice test