AWS Certified Solutions Architect Associate Sample Questions


Candidates who are capable of performing the role of a solutions architect should take the AWS Certified Solutions Architect Associate exam. They should have at least one year of hands-on experience building available, cost-effective, fault-tolerant, and scalable distributed systems on AWS. The exam evaluates a candidate’s ability to use Amazon Web Services technology to design and deploy secure and resilient applications. The certification also boosts earning potential: the average yearly pay for an AWS Certified Solutions Architect Associate is $118,266. The article below provides a list of AWS Certified Solutions Architect Associate sample questions that cover core exam topics.

Q1) A company is using containers to build a web application on AWS. The company requires three instances of the web application to be running at any given time, and the application must scale to keep up with rising demand. While management is concerned about costs, they agree that the application should be highly available. What should a solutions architect recommend?

  1. Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
  2. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
  3. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
  4. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.

Correct Answer: Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.

Explanation: For Lambda functions and layers, AWS Lambda provides resource-based authorization controls. On a per-resource basis, resource-based policies allow you to grant usage permission to other AWS accounts or organisations. A resource-based policy is also used to allow an AWS service to call your function on your behalf.

You can grant authorization to an account to call or manage Lambda functions. In AWS Organizations, you can also grant permissions to a complete organization using a single resource-based policy. You can also grant invoke access to an AWS service that runs a function in response to activity in your account using resource-based controls.
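
As a rough illustration of that pattern, the sketch below uses the AWS SDK for Python (boto3) to attach a resource-based permission to a function so that Amazon EventBridge can invoke it; the function name, statement ID, and rule ARN are hypothetical placeholders rather than values from the question.

    import boto3

    lambda_client = boto3.client("lambda")

    # Allow Amazon EventBridge (events.amazonaws.com) to invoke the function.
    # FunctionName, StatementId, and SourceArn are illustrative placeholders.
    lambda_client.add_permission(
        FunctionName="my-function",
        StatementId="allow-eventbridge-invoke",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn="arn:aws:events:us-east-1:123456789012:rule/my-rule",
    )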

Refer: Using resource-based policies for AWS Lambda

Q2) A company runs a fleet of web servers that store data in an Amazon RDS for PostgreSQL DB instance. Following a routine compliance review, the company adopts a standard that requires all production databases to have a recovery point objective (RPO) of less than one second. Which solution meets this requirement?

  1. Enable a Multi-AZ deployment for the DB instance. 
  2. Enable auto scaling for the DB instance in one Availability Zone.
  3. Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone.
  4. Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC) tasks.

Correct Answer: Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC) tasks.

Explanation: Amazon RDS (Relational Database Service) is a managed service that makes it much easier to set up, operate, and scale a relational database. Amazon RDS supports the MySQL, SQL Server, PostgreSQL, MariaDB, and Oracle database engines and is built on AWS high-performance compute and storage. It provides a comprehensive collection of provisioning, patching, monitoring, and disaster recovery (DR) capabilities. The referenced blog discusses three Amazon RDS features that help with disaster recovery: automated backups, manual snapshots, and read replicas.
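
For reference, if the DMS-based option were implemented, an ongoing change data capture (CDC) replication task could be created roughly as sketched below with boto3; the endpoint ARNs, replication instance ARN, and task identifier are hypothetical placeholders.

    import json
    import boto3

    dms = boto3.client("dms")

    # Start an ongoing (CDC-only) replication task; ARNs are illustrative placeholders.
    dms.create_replication_task(
        ReplicationTaskIdentifier="postgres-cdc-task",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
        MigrationType="cdc",  # capture ongoing changes only
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )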

Refer: Implementing a disaster recovery strategy with Amazon RDS

Q3) A company offers an image hosting service in the us-east-1 Region. The application allows users from all around the world to upload and view photos. Some photos receive a large number of views over several months, while others receive a small number of views for less than a week. The application supports uploads of up to 20 MB in size. Based on the photo metadata, the service decides which photos to show each user.

Which of the following options provides the most cost-effective access to the most appropriate users?

  1.  Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
  2. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
  3. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.
  4. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon Elasticsearch Service (Amazon ES).

Correct Answer: Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.

Explanation: Each object in Amazon S3 is assigned a storage class. When you list the objects in an S3 bucket, for example, the console displays the storage class for each object in the list. Amazon S3 offers a range of storage classes for the objects you store. You choose a class depending on your use case and performance access requirements. All of these storage classes are designed for high durability.
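
A minimal boto3 sketch of the recommended pattern, uploading a photo directly into the S3 Intelligent-Tiering storage class and recording its metadata and S3 location in DynamoDB, is shown below; the bucket name, table name, and item attributes are hypothetical.

    import boto3

    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")

    # Upload the photo directly into the S3 Intelligent-Tiering storage class.
    s3.put_object(
        Bucket="photo-hosting-bucket",          # placeholder bucket name
        Key="photos/user123/sunset.jpg",
        Body=open("sunset.jpg", "rb"),
        StorageClass="INTELLIGENT_TIERING",
    )

    # Store the photo metadata and its S3 location in DynamoDB.
    table = dynamodb.Table("PhotoMetadata")     # placeholder table name
    table.put_item(Item={
        "PhotoId": "user123-sunset",
        "S3Bucket": "photo-hosting-bucket",
        "S3Key": "photos/user123/sunset.jpg",
        "SizeBytes": 5242880,
        "UploadedAt": "2024-01-01T00:00:00Z",
    })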

Refer: Using Amazon S3 storage classes

Q4) A company is building a website that will use an Amazon S3 bucket to host static photos. The company’s goal is to lower both latency and cost for all future requests. What should a solutions architect recommend?

  1. Deploy a NAT server in front of Amazon S3.
  2. Deploy Amazon CloudFront in front of Amazon S3. 
  3. Deploy a Network Load Balancer in front of Amazon S3.
  4. Configure Auto Scaling to automatically adjust the capacity of the website.

Correct Answer: Deploy Amazon CloudFront in front of Amazon S3. 

Explanation: Amazon CloudFront is a content delivery network (CDN) that caches content in edge locations around the world, close to users. Serving the static photos through CloudFront reduces latency for viewers, and because cached objects are served from the edge rather than from the S3 origin, it also reduces the number of requests that reach the S3 bucket, lowering cost.

Refer: Deliver Content Faster

Q5) A web application is deployed in the AWS Cloud. It uses a two-tier design with a web layer and a database layer. The web server is vulnerable to cross-site scripting (XSS) attacks. What should a solutions architect do to remediate the vulnerability?

  1. Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  2. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  3. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
  4. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard.

Correct Answer: Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.

Explanation:

  • Cross-site scripting match conditions – Attackers sometimes insert scripts into web requests in order to exploit vulnerabilities in web applications. You can create one or more cross-site scripting match conditions to specify which parts of web requests, such as the URI or the query string, you want AWS WAF Classic to inspect for malicious scripts. Later in the process, when you create a web ACL, you specify whether to allow or block requests that appear to contain malicious scripts.
  • Web Application Firewall – You can now protect the web applications behind your Application Load Balancers with AWS WAF, a web application firewall that helps protect web applications from common exploits that can cause application downtime, compromise security, or consume excessive resources. A minimal sketch of attaching a web ACL to an Application Load Balancer follows below.
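
Note that the explanation above describes AWS WAF Classic, while new deployments typically use the current AWS WAF (WAFv2) API. In the boto3 sketch below, both ARNs are hypothetical placeholders.

    import boto3

    # AWS WAF (WAFv2) web ACLs for an Application Load Balancer use the REGIONAL scope.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # Attach an existing web ACL (for example, one containing XSS rules) to the ALB.
    # Both ARNs below are illustrative placeholders.
    wafv2.associate_web_acl(
        WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/web-acl-name/abcd1234",
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",
    )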

Refer: Classic web conditions

Q6) A solutions architect must design a solution for storing client case files. The files are vital to the company’s operations, and the number of files will grow over time. The files must be accessible simultaneously by several application servers running on Amazon EC2 instances, and the solution must have built-in redundancy. Which solution meets these requirements?

  1. Amazon Elastic File System (Amazon EFS) 
  2. Amazon Elastic Block Store (Amazon EBS)
  3. Amazon S3 Glacier Deep Archive
  4. AWS Backup

Correct Answer: Amazon Elastic File System (Amazon EFS) 

Explanation: Amazon Elastic File System (Amazon EFS) is a serverless, set-and-forget elastic file system that makes it simple to set up, scale, and cost-optimize file storage on Amazon Web Services (AWS). With a few clicks in the AWS Management Console, you can create file systems that are accessible via a file system interface (using standard operating system file I/O APIs) to Amazon Elastic Compute Cloud (EC2) instances, Amazon container services (Amazon Elastic Container Service [ECS], Amazon Elastic Kubernetes Service [EKS], and AWS Fargate), and AWS Lambda functions. Full file system access semantics, such as strong consistency and file locking, are also supported.
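
As a rough boto3 sketch under assumed subnet and security group IDs, the snippet below provisions a shared, encrypted EFS file system and a mount target that EC2 instances in that subnet can use.

    import boto3

    efs = boto3.client("efs")

    # Create a redundant, elastic NFS file system.
    fs = efs.create_file_system(
        PerformanceMode="generalPurpose",
        Encrypted=True,
        Tags=[{"Key": "Name", "Value": "case-files"}],
    )

    # Expose the file system in the subnet used by the application servers.
    # In practice, wait until the file system is "available" before this step.
    # Subnet and security group IDs are illustrative placeholders.
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0abc1234",
        SecurityGroups=["sg-0abc1234"],
    )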

Refer: Amazon Elastic File System

Q7) An image hosting company uses Amazon S3 buckets to store its objects. The company wants to avoid accidentally exposing the contents of the S3 buckets to the public; all S3 objects in the entire AWS account must remain private. Which solution will meet these requirements?

  1.  Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public.
  2.  Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access.
  3.  Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
  4.  Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.

Correct Answer: Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account. 

Explanation: Amazon S3 is an object storage service with industry-leading scalability, data availability, security, and performance. Customers of all sizes and sectors can store and protect nearly any amount of data for use cases including data lakes, cloud-native apps, and mobile apps. You can optimise expenses, organise data, and establish fine-tuned access restrictions to suit specific business, organisational, and compliance requirements with cost-effective storage classes and easy-to-use administration tools.
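
A boto3 sketch of the recommended approach is shown below: it enables S3 Block Public Access at the account level and then creates and attaches an SCP that denies changes to that setting. The account ID, policy name, and SCP wording are illustrative assumptions, and the SCP steps would need to run with credentials from the organization’s management account.

    import json
    import boto3

    # Turn on S3 Block Public Access for the entire account.
    s3control = boto3.client("s3control")
    s3control.put_public_access_block(
        AccountId="123456789012",               # placeholder account ID
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # Create an SCP that prevents principals in the account from changing that setting.
    organizations = boto3.client("organizations")
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "s3:PutAccountPublicAccessBlock",
            "Resource": "*",
        }],
    }
    policy = organizations.create_policy(
        Name="DenyDisablingS3BlockPublicAccess",
        Description="Prevent changes to account-level S3 Block Public Access",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    organizations.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="123456789012",                # the member account to protect
    )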

Refer: Amazon S3

Q8) A company’s website stores transactional data in an Amazon RDS MySQL Multi-AZ DB instance. Other internal systems query this DB instance to fetch data for batch processing, and these queries slow the DB instance down dramatically. This degrades the website’s read and write performance and results in slow response times for users. Which strategy will improve website performance?

  1.  Use an RDS PostgreSQL DB instance instead of a MySQL database.
  2.  Use Amazon ElastiCache to cache the query responses for the website.
  3. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
  4. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.

Correct Answer: Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.

Explanation: Read Replicas on Amazon RDS –

Enhanced performance – By sending read queries from your apps to the read replica, you can lessen the burden on your source DB instance. For read-heavy database workloads, read replicas allow you to elastically extend out beyond the capacity restrictions of a single DB instance. Read replicas are valuable as part of a sharding implementation since they can be upgraded to master status.
To improve read performance even more, Amazon RDS for MySQL lets you add table indexes straight to Read Replicas, even if they aren’t present on the master.
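
A minimal boto3 sketch of creating such a read replica is shown below; the instance identifiers and instance class are hypothetical placeholders.

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the existing Multi-AZ MySQL instance.
    # The internal batch systems would then point their read-only
    # connections at the replica's endpoint.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="website-db-replica",
        SourceDBInstanceIdentifier="website-db",
        DBInstanceClass="db.r5.large",
    )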

Refer: Amazon RDS Read Replicas

Q9) A company’s legacy application currently relies on a single instance of an unencrypted Amazon RDS MySQL database. To meet new compliance requirements, all existing and new data in this database must be encrypted. How can this be accomplished?

  1.  Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance.
  2. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance to delete the original instance.
  3. Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.
  4.  Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance.

Correct Answer: Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.

Explanation: The following steps can be used with Amazon RDS for MySQL, Oracle, SQL Server, PostgreSQL, or MariaDB.

  • Important: If you use Amazon Aurora, you can restore an unencrypted Aurora DB cluster snapshot to an encrypted Aurora DB cluster by specifying an AWS Key Management Service (AWS KMS) encryption key when you restore from the unencrypted DB cluster snapshot. See Limitations of Amazon RDS Encrypted DB Instances for further details.
  • From the navigation pane of the Amazon RDS console, select Snapshots.
  • Choose the snapshot you’d like to encrypt.
  • From the Snapshot Actions menu, select Copy Snapshot.
  • Select your Destination Region and enter a New DB Snapshot Identifier.
  • Set Enable Encryption to Yes.
  • Select your Master Key from the list, and then choose Copy Snapshot.
  • Once the snapshot status is available, the Encrypted field will be True, indicating that the snapshot is encrypted.
  • You now have an encrypted snapshot of your database. You can restore a DB instance from this encrypted DB snapshot. A scripted version of these steps is sketched below.
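
Under the assumption of hypothetical identifiers and the default RDS KMS key, a boto3 version of the snapshot, encrypted copy, and restore steps might look like the following; each step must complete before the next one begins.

    import boto3

    rds = boto3.client("rds")

    # 1. Take a snapshot of the unencrypted instance (identifiers are placeholders).
    rds.create_db_snapshot(
        DBInstanceIdentifier="legacy-mysql-db",
        DBSnapshotIdentifier="legacy-mysql-snapshot",
    )

    # 2. Copy the snapshot with encryption enabled, using a KMS key.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="legacy-mysql-snapshot",
        TargetDBSnapshotIdentifier="legacy-mysql-snapshot-encrypted",
        KmsKeyId="alias/aws/rds",
    )

    # 3. Restore a new, encrypted DB instance from the encrypted snapshot.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="legacy-mysql-db-encrypted",
        DBSnapshotIdentifier="legacy-mysql-snapshot-encrypted",
    )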

Refer: How do I encrypt Amazon RDS snapshots using a KMS key?

Q10) A business runs an application on a cluster of Amazon Linux EC2 instances. For regulatory reasons, the business must keep all application log files for seven years. A reporting application will analyze the log files and will require concurrent access to all files. Which storage solution meets these criteria most cost-effectively?

  1.  Amazon Elastic Block Store (Amazon EBS)
  2. Amazon Elastic File System (Amazon EFS)
  3. Amazon EC2 instance store
  4.  Amazon S3 

Correct Answer: Amazon S3 

Explanation: Amazon S3 requests can be authenticated or anonymous. Authenticated access necessitates the creation of credentials that AWS can use to verify your requests. When utilising valid credentials to make REST API requests straight from your code, you build a signature and include it in your request. Amazon S3 is an object storage service with industry-leading scalability, data availability, security, and performance. Customers of various sizes and sectors can use it to store and safeguard any quantity of data for a variety of applications, including websites, mobile apps, backup and restore, archiving, corporate apps, IoT devices, and big data analytics.
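
As one possible complement, a lifecycle rule such as the boto3 sketch below could expire the log objects once the seven-year retention period (roughly 2,555 days) has elapsed; the bucket name and prefix are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # Expire log objects once the seven-year retention period has elapsed.
    # Bucket name and prefix are illustrative placeholders.
    s3.put_bucket_lifecycle_configuration(
        Bucket="application-log-archive",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "expire-logs-after-7-years",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Expiration": {"Days": 2555},
            }]
        },
    )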

Refer: Amazon S3

Q11) A company’s application collects data from Internet of Things (IoT) sensors mounted in automobiles. The data is sent to Amazon S3 through Amazon Kinesis Data Firehose, generating billions of S3 objects every year. Every morning, the company retrains a set of machine learning (ML) models using data from the previous 30 days. Four times a year, the company analyzes and trains other machine learning models using data from the previous year. The data must be available with minimal delay for up to one year. After one year, the data must be retained for archival purposes. Which storage solution meets these criteria most cost-effectively?

  1.  Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
  2. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year.
  3. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
  4. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive after 1 year.

Correct Answer: Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year.

Explanation: S3 Intelligent-Tiering is a storage class that optimises storage costs by shifting data to the most cost-effective access tier without affecting performance or adding overhead. S3 Intelligent-Tiering was added to Amazon S3 to address the issue of selecting the appropriate storage class and reducing costs when access patterns are erratic.

When access patterns change, S3 Intelligent-Tiering provides automated cost reductions by shifting data on a granular object level across access tiers. When you need to reduce storage costs for data with uncertain or unexpected access patterns, this is the storage type to use. S3 Intelligent-Tiering watches access patterns and shifts items automatically from one tier to another for a nominal monthly object monitoring and automation cost.
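
A minimal boto3 sketch of the recommended configuration, telling S3 Intelligent-Tiering to move objects that have not been accessed for one year into the Deep Archive Access tier, is shown below; the bucket name and configuration ID are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # Move objects that have not been accessed for one year into the
    # Deep Archive Access tier. Bucket name is a placeholder.
    s3.put_bucket_intelligent_tiering_configuration(
        Bucket="iot-sensor-data",
        Id="archive-after-1-year",
        IntelligentTieringConfiguration={
            "Id": "archive-after-1-year",
            "Status": "Enabled",
            "Tierings": [
                {"Days": 365, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
            ],
        },
    )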

Refer: S3 Intelligent-Tiering Adds Archive Access Tiers

Q12) A company hosts a website on Amazon EC2 Linux instances. Some of the instances are failing, and troubleshooting shows that the failing instances do not have enough swap space. The operations team lead needs a way to monitor swap space. What should a solutions architect recommend?

  1. Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch.
  2. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch.
  3. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch.
  4. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch.

Correct Answer: Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch.

Explanation: By default, Amazon CloudWatch does not collect memory or swap metrics from EC2 instances. Swap usage must therefore be gathered on the instance itself and published to CloudWatch as a custom metric, for example with the CloudWatch agent or a script that calls the PutMetricData API.
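
As one possible illustration of publishing swap usage as a custom metric from a Linux instance, the boto3 sketch below reads swap figures from /proc/meminfo and sends them to CloudWatch; the namespace, dimension, and instance ID are hypothetical, and in practice the CloudWatch agent is commonly used for this instead.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def read_meminfo_kb(field):
        """Read a value in KB (e.g. SwapTotal or SwapFree) from /proc/meminfo."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    swap_used_kb = read_meminfo_kb("SwapTotal") - read_meminfo_kb("SwapFree")

    # Publish the value as a custom metric; namespace and dimensions are placeholders.
    cloudwatch.put_metric_data(
        Namespace="Custom/System",
        MetricData=[{
            "MetricName": "SwapUsage",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": swap_used_kb * 1024,
            "Unit": "Bytes",
        }],
    )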

Refer: Monitor memory and disk metrics for Amazon EC2 Linux instances

Q13) A company wants to move from many standalone Amazon Web Services accounts to a consolidated, multi-account setup. The company plans to create a large number of new AWS accounts for its business divisions, and it must use a single corporate directory service to authenticate access to these AWS accounts. What should a solutions architect recommend to meet these requirements? (Choose two.)

  1.  Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization. 
  2.  Set up an Amazon Cognito identity pool. Configure AWS Single Sign-On to accept Amazon Cognito authentication.
  3.  Configure a service control policy (SCP) to manage the AWS accounts. Add AWS Single Sign-On to AWS Directory Service.
  4.  Create a new organization in AWS Organizations. Configure the organization’s authentication mechanism to use AWS Directory Service directly.
  5.  Set up AWS Single Sign-On (AWS SSO) in the organization. Configure AWS SSO, and integrate it with the company’s corporate directory service. 

Correct Answer:  Set up an Amazon Cognito identity pool. Configure AWS Single Sign-On to accept Amazon Cognito authentication.  Configure a service control policy (SCP) to manage the AWS accounts. Add AWS Single Sign-On to AWS Directory Service

Explanation: Amazon Cognito allows you to quickly and easily add user sign-up, sign-in, and access management to your web and mobile apps. Amazon Cognito scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, as well as enterprise identity providers via SAML 2.0 and OpenID Connect.

Refer: Amazon Cognito

Q14) A business’s web application stores its data in an Amazon RDS PostgreSQL DB instance. During the financial close period at the beginning of each month, accountants run a large number of queries that have a detrimental impact on database performance. The company wants to minimize the impact of reporting on the web application. What can a solutions architect do to reduce the impact on the database with the least amount of effort?

  1. Create a read replica and direct reporting traffic to the replica.
  2. Create a Multi-AZ database and direct reporting traffic to the standby.
  3. Create a cross-Region read replica and direct reporting traffic to the replica.
  4. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.

Correct Answer: Create a read replica and direct reporting traffic to the replica.

Explanation: When you create a read replica, you first specify an existing DB instance as the source. Amazon RDS then takes a snapshot of the source instance and uses the snapshot to create a read-only instance. When the source DB instance changes, Amazon RDS uses the DB engine’s asynchronous replication mechanism to update the read replica. The read replica works like a DB instance that only accepts read-only connections; applications connect to a read replica in the same way that they connect to any other DB instance. Amazon RDS replicates all databases in the source DB instance.

Amazon RDS uses the built-in replication functionality of the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines to create a read replica from a source DB instance. Changes made to the source DB instance are replicated asynchronously to the read replica. By directing read queries from your applications to the read replica, you reduce the load on your source DB instance.

Refer: Working with read replicas

Q15) A company runs an ASP.NET MVC application on a single Amazon EC2 instance. Because of a recent spike in application usage, users are experiencing slow response times during lunch hours. The company must address this problem with the least possible configuration. What should a solutions architect recommend to meet these requirements?

  1. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours.
  2. Move the application to Amazon Elastic Container Service (Amazon ECS). Create an AWS Lambda function to handle scaling during lunch hours.
  3. Move the application to Amazon Elastic Container Service (Amazon ECS). Configure scheduled scaling for AWS Application Auto Scaling during lunch hours.
  4. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling, and create an AWS Lambda function to handle scaling during lunch hours.

Correct Answer: Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours.

Explanation: To optimize your environment’s use of Amazon EC2 instances during predictable periods of peak traffic, configure your Amazon EC2 Auto Scaling group to change its instance count on a schedule. You can set up a recurring action in your environment to scale up in the morning and scale down at night when traffic is low. If, for example, you have a marketing event that will drive traffic to your site for a limited time, you can schedule a one-time action to scale up when it begins and another to scale down when it ends.
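
In Elastic Beanstalk, time-based scaling is configured on the environment’s Auto Scaling settings; underneath, it corresponds to scheduled actions on the Auto Scaling group, roughly as in the boto3 sketch below. The group name, times, and sizes are hypothetical placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale out shortly before the lunch-hour peak (times are in UTC) and
    # scale back in afterwards. Group name and sizes are placeholders.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="aspnet-app-asg",
        ScheduledActionName="lunch-scale-out",
        Recurrence="45 11 * * 1-5",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=4,
    )

    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="aspnet-app-asg",
        ScheduledActionName="lunch-scale-in",
        Recurrence="15 14 * * 1-5",
        MinSize=1,
        MaxSize=6,
        DesiredCapacity=1,
    )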

Refer: Scheduled Auto Scaling actions

Q16) Amazon RDS for PostgreSQL databases power a company’s data layer. The company must implement database password rotation. Which option meets this requirement with the LEAST amount of operational overhead?

  1. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret. 
  2. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter.
  3. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password.
  4. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).

Correct Answer:  Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.

Explanation: AWS Secrets Manager is a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Secrets Manager can be set up to rotate secrets automatically, which helps you satisfy your security and compliance requirements. Secrets Manager has native support for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, including the ability to rotate credentials for these databases. Using fine-grained AWS Identity and Access Management (IAM) policies, you can restrict who has access to your secrets. Employees retrieve secrets by calling the Secrets Manager APIs, which eliminates the need to hard-code secrets in source code, update configuration files, and redeploy code when secrets are rotated.
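
A minimal boto3 sketch of enabling automatic rotation on an existing secret is shown below; the secret ID and rotation Lambda ARN are hypothetical placeholders (Secrets Manager provides rotation-function templates for Amazon RDS PostgreSQL).

    import boto3

    secretsmanager = boto3.client("secretsmanager")

    # Turn on automatic rotation for the database secret every 30 days.
    # SecretId and RotationLambdaARN are illustrative placeholders.
    secretsmanager.rotate_secret(
        SecretId="prod/postgresql/app-user",
        RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
        RotationRules={"AutomaticallyAfterDays": 30},
    )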

Refer: Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Q17) A newly acquired company must set up its own infrastructure on AWS and migrate its applications to the cloud within one month of the acquisition. Each application requires approximately 50 TB of data to be transferred. After the migration, this company and its parent company will require a secure network connection with consistent throughput between their data centers and applications. A solutions architect must ensure that the data is transferred only once and that an ongoing network connection is maintained. Which solution will meet these requirements?

  1. AWS Direct Connect for both the initial transfer and ongoing connectivity.
  2. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
  3. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
  4. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.

Correct Answer: AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.

Explanation: The fastest path to your AWS resources is through the AWS Direct Connect cloud service. Your network traffic is never exposed to the public internet while in transit since it stays on the AWS global network. This decreases the likelihood of hitting bottlenecks or experiencing unexpected latency increases. When you create a new connection, you have the option of using a hosted connection supplied by an AWS Direct Connect Delivery Partner or a dedicated connection offered by AWS—both of which can be deployed at over 100 AWS Direct Connect locations around the world. You can use AWS Direct Connect SiteLink to transmit data between AWS Direct Connect locations and construct private network connections between your global network’s offices and data centres.

Refer: AWS Direct Connect

Q18) A business runs an application on Amazon EC2 instances. The volume of traffic to the website increases dramatically during business hours and then decreases. The CPU utilization of an Amazon EC2 instance is a good indicator of end-user demand on the application. The organization has configured an Auto Scaling group with a minimum group size of two EC2 instances and a maximum group size of ten EC2 instances. The company is concerned that the Auto Scaling group’s current scaling policy is not correct, and it must avoid over-provisioning EC2 instances and incurring unnecessary costs. What should a solutions architect recommend to meet these requirements?

  1.  Configure Amazon EC2 Auto Scaling to use a scheduled scaling plan and launch an additional 8 EC2 instances during business hours.
  2. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling.
  3. Configure a step scaling policy to add 4 EC2 instances at 50% CPU utilization and add another 4 EC2 instances at 90% CPU utilization. Configure scale-in policies to perform the reverse and remove EC2 instances based on the two values. 
  4. Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.

Correct Answer:  Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.

Explanation: You can specify scaling metrics and threshold settings for the CloudWatch alarms that trigger the scaling process with step scaling and simple scaling. You can also specify how your Auto Scaling group should be scaled if a threshold is exceeded for a predetermined number of evaluation periods.
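
To make the step-scaling mechanics concrete, the boto3 sketch below defines a step scaling policy and a CloudWatch alarm that triggers it; the group name, thresholds, and step sizes are hypothetical and would need tuning against observed utilization.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Step scaling policy: add capacity in increments as CPU utilization climbs.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-app-asg",
        PolicyName="cpu-step-scale-out",
        PolicyType="StepScaling",
        AdjustmentType="ChangeInCapacity",
        MetricAggregationType="Average",
        StepAdjustments=[
            {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
            {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
        ],
    )

    # CloudWatch alarm that triggers the policy when average CPU exceeds 50%.
    cloudwatch.put_metric_alarm(
        AlarmName="web-app-asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-app-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=50.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )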

Refer: Step and simple scaling policies for Amazon EC2 Auto Scaling

Q19) A company is building a web-based application that will run on Amazon EC2 instances spread across multiple Availability Zones. The web application will provide access to a collection of text documents totaling approximately 900 TB. The company anticipates periods of high demand for the application. A solutions architect must ensure that the text-document storage component can scale to meet the application’s demand at all times. The company is concerned about the overall cost of the solution. Which storage solution meets these criteria most cost-effectively?

  1.  Amazon Elastic Block Store (Amazon EBS)
  2. Amazon Elastic File System (Amazon EFS)
  3. Amazon Elasticsearch Service (Amazon ES)
  4. Amazon S3 

Correct Answer: Amazon Elasticsearch Service (Amazon ES)

Explanation: You can use Amazon OpenSearch Service to do interactive log analytics, real-time application monitoring, internet search, and other tasks. OpenSearch is a distributed search and analytics package based on Elasticsearch that is open source. Amazon OpenSearch Service is the successor to Amazon Elasticsearch Service, and it includes the most recent versions of OpenSearch, as well as support for 19 different versions of Elasticsearch (from 1.5 to 7.10), as well as visualisation features via OpenSearch Dashboards and Kibana (1.5 to 7.10 versions). Currently, Amazon OpenSearch Service has tens of thousands of active clients and manages hundreds of thousands of clusters that process hundreds of trillions of queries every month.

Refer: Amazon OpenSearch Service

Q20) A company uses Amazon S3 to store private audit documents. The S3 bucket uses bucket policies to restrict access to the audit team’s IAM user credentials, according to the principle of least privilege. Company executives are concerned about accidental deletion of documents in the S3 bucket and want a more secure solution. What should a solutions architect do to secure the audit documents?

  1. Enable the versioning and MFA Delete features on the S3 bucket.
  2. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
  3. Add an S3 Lifecycle policy to the audit team’s IAM user accounts to deny the s3:DeleteObject action during audit dates.
  4. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.

Correct Answer: Enable the versioning and MFA Delete features on the S3 bucket.

Explanation: Amazon S3 provides a number of security features to consider as you develop and implement your own security policies. The best practices listed here are only guidelines and do not constitute a complete security solution. Because they may not be appropriate or sufficient for your environment, treat these best practices as helpful considerations rather than prescriptions.
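
A minimal boto3 sketch of the recommended settings is shown below; the bucket name, MFA device ARN, and token code are hypothetical, and MFA Delete can only be enabled by the bucket owner’s root user with a valid MFA code.

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning and MFA Delete on the audit bucket. The MFA parameter
    # is the MFA device serial (ARN) followed by a current token code; the
    # bucket name, device ARN, and code below are illustrative placeholders.
    s3.put_bucket_versioning(
        Bucket="audit-documents-bucket",
        VersioningConfiguration={
            "Status": "Enabled",
            "MFADelete": "Enabled",
        },
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    )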

Refer: Security Best Practices for Amazon S3

AWS Certified Solutions Architect Associate free practice test