AWS Certified Developer - Associate DVA-C02 Amazon Practice Test Questions and Exam Dumps
Question No 1:
A company is deploying an application on Amazon EC2 instances to process transactions. If a transaction is determined to be invalid, the application must send a chat message to the company's support team using a chat API that requires an access token for authentication. The token must be encrypted at rest and in transit and accessible across multiple AWS accounts.
Which solution meets these requirements with the least management overhead?
A. Use an AWS Systems Manager Parameter Store SecureString parameter that uses an AWS Key Management Service (AWS KMS) AWS managed key to store the access token. Add a resource-based policy to the parameter to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Parameter Store. Retrieve the token from Parameter Store with the decrypt flag enabled. Use the decrypted access token to send the message to the chat.
B. Encrypt the access token by using an AWS Key Management Service (AWS KMS) customer managed key. Store the access token in an Amazon DynamoDB table. Update the IAM role of the EC2 instances with permissions to access DynamoDB and AWS KMS. Retrieve the token from DynamoDB. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.
C. Use AWS Secrets Manager with an AWS Key Management Service (AWS KMS) customer managed key to store the access token. Add a resource-based policy to the secret to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Secrets Manager. Retrieve the token from Secrets Manager. Use the decrypted access token to send the message to the chat.
D. Encrypt the access token by using an AWS Key Management Service (AWS KMS) AWS managed key. Store the access token in an Amazon S3 bucket. Add a bucket policy to the S3 bucket to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Amazon S3 and AWS KMS. Retrieve the token from the S3 bucket. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.
Correct Answer: C
Explanation:
To determine the best solution, we must examine each option based on three core criteria from the question: encryption in transit and at rest, cross-account access, and minimal management overhead.
Option A uses AWS Systems Manager Parameter Store with a SecureString parameter encrypted by an AWS managed KMS key. SecureString parameters are encrypted at rest and in transit, but this option has important limitations. Cross-account access to parameters is not as straightforward or fine-grained as it is with Secrets Manager, and an AWS managed KMS key cannot be shared with other accounts, so callers in those accounts could not decrypt the parameter. In addition, Parameter Store is not purpose-built for managing secrets such as access tokens, particularly when rotation or versioning becomes necessary. While functional within a single account, it does not offer the least management overhead compared to Secrets Manager.
Option B involves manually encrypting the access token with a KMS customer managed key and storing it in DynamoDB. This option introduces complexity because the developer must manage encryption and decryption operations manually, integrate them into the application logic, and manage access control for both DynamoDB and KMS. Additionally, cross-account access to DynamoDB is more complex to configure and audit. Overall, this adds considerable overhead in terms of development and maintenance effort.
Option C is the most streamlined and appropriate choice. AWS Secrets Manager is purpose-built for securely storing and retrieving sensitive information like access tokens. It handles encryption at rest using KMS and encrypts data in transit automatically. It also supports resource-based policies for secure cross-account access. Importantly, Secrets Manager manages secret versioning, automatic rotation (if needed), and auditing via CloudTrail. The developer only needs to grant IAM permissions to the EC2 instances for retrieval. This drastically reduces operational overhead compared to other approaches and aligns perfectly with the use case.
Option D proposes encrypting the token and storing it in S3. Although S3 supports encryption and cross-account access via bucket policies, this approach is not optimal for storing sensitive credentials. It requires additional logic to securely retrieve and decrypt the secret and opens up broader security risks if bucket permissions are misconfigured. Furthermore, unlike Secrets Manager, S3 doesn’t provide secret-specific features like automatic rotation, access logging specific to secret usage, or easy versioning for secrets.
Thus, while several options meet the core security and accessibility requirements, C stands out as the most efficient and lowest-overhead solution by providing built-in encryption, easy cross-account access, and secure retrieval—all specifically designed for managing secrets.
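As a rough sketch of the retrieval step in Option C, the application on the EC2 instances could call Secrets Manager through the AWS SDK for Python. The secret ARN, region, and JSON key below are placeholders, not values from the question; cross-account callers would reference the secret by its full ARN.

```python
import json
import boto3

# Minimal sketch, assuming the secret stores the chat access token as JSON
# under a "token" key; the secret ARN and region are placeholders.
secrets = boto3.client("secretsmanager", region_name="us-east-1")

def get_chat_token(
    secret_id="arn:aws:secretsmanager:us-east-1:111122223333:secret:chat/access-token",
):
    # Secrets Manager decrypts the value with the customer managed KMS key
    # and returns it over TLS; no client-side KMS calls are needed.
    response = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])["token"]
```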
Question No 2:
A company operates Amazon EC2 instances across multiple AWS accounts. A developer needs to create an application that gathers all EC2 instance lifecycle events from these accounts. The gathered events must be centralized and stored in a single Amazon Simple Queue Service (Amazon SQS) queue located in the company’s primary AWS account.
Which solution will satisfy this requirement?
A. Configure Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account. Add an EventBridge rule to the event bus of the main account that matches all EC2 instance lifecycle events. Add the SQS queue as a target of the rule.
B. Use the resource policies of the SQS queue in the main account to give each account permissions to write to that SQS queue. Add to the Amazon EventBridge event bus of each account an EventBridge rule that matches all EC2 instance lifecycle events. Add the SQS queue in the main account as a target of the rule.
C. Write an AWS Lambda function that scans through all EC2 instances in the company accounts to detect EC2 instance lifecycle changes. Configure the Lambda function to write a notification message to the SQS queue in the main account if the function detects an EC2 instance lifecycle change. Add an Amazon EventBridge scheduled rule that invokes the Lambda function every minute.
D. Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule.
Correct Answer: D
Explanation:
To fulfill the requirement of aggregating EC2 lifecycle events across multiple AWS accounts and delivering them to a single SQS queue located in a central (main) account, you need a solution that allows for cross-account event delivery, event filtering, and centralized processing. Option D is the best-fit solution because it leverages Amazon EventBridge’s native cross-account event routing capabilities along with the ability to target an SQS queue for processing in the central account.
Let’s examine each option and why D is correct while the others are not.
Option A: This suggests configuring EC2 to directly send events from all accounts to the EventBridge event bus of the main account, and then attaching a rule in the main account to route those to the SQS queue. While the second half of this solution (the rule and SQS target) is valid, this solution does not explain how cross-account event delivery is set up. By default, an EventBridge event bus in one account cannot receive events from other accounts unless permissions are explicitly granted and rules are created in the source accounts. This option omits those essential cross-account steps, making it incomplete.
Option B: This involves giving each account permission to write directly to the central SQS queue and setting up rules in each account to deliver events to that queue. While this could work in theory, there are two issues. First, direct cross-account delivery to SQS requires explicit queue policy changes and can be harder to manage and secure across many accounts. Second, EventBridge does not natively support cross-account SQS targets, so the rules in source accounts cannot directly target an SQS queue in another account. This makes the implementation not feasible under standard AWS configurations.
Option C: This is the least optimal. It proposes using a Lambda function to poll EC2 instance states across accounts. This is highly inefficient, because:
It requires custom polling logic, which does not scale well.
You must handle permissions and API throttling across multiple accounts.
It’s not real-time, since it runs on a schedule. This option bypasses the native event-driven infrastructure provided by AWS and is essentially a workaround rather than a best practice.
Option D: This is the correct and most complete approach. Here’s why:
It starts by configuring permissions on the main account’s EventBridge event bus to receive events from other AWS accounts. This is a supported feature of EventBridge.
Then, in each individual account, an EventBridge rule is created that matches EC2 instance lifecycle events and forwards those events to the main account’s event bus. This is achievable via EventBridge’s cross-account event routing.
In the main account, a rule is defined to capture the EC2 lifecycle events from the shared event bus and route them to a local SQS queue, which is fully supported as a native EventBridge target.
This solution satisfies all the key requirements:
Centralization of events from multiple accounts.
Scalable and maintainable use of AWS EventBridge features.
Efficient routing of events to an SQS queue in the main account for further processing.
Security and permissions are explicitly managed through event bus policies and IAM.
Thus, the most secure, scalable, and AWS-native way to collect and centralize EC2 instance lifecycle events from multiple accounts is described in Option D.
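A condensed sketch of the Option D wiring with boto3 follows. Each section would run with credentials for the account named in its comment; all account IDs, names, and ARNs are placeholders. Note that the cross-account event bus target may also need an IAM role that EventBridge can assume, and the SQS queue policy must allow EventBridge to send messages.

```python
import json
import boto3

events = boto3.client("events")

# --- In the main account: allow a member account to put events on the default bus.
events.put_permission(
    EventBusName="default",
    Action="events:PutEvents",
    Principal="222233334444",          # member account ID (placeholder)
    StatementId="AllowMemberAccount",
)

# --- In each member account: forward EC2 lifecycle events to the main account's bus.
lifecycle_pattern = json.dumps({
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
})
events.put_rule(Name="forward-ec2-lifecycle", EventPattern=lifecycle_pattern)
events.put_targets(
    Rule="forward-ec2-lifecycle",
    Targets=[{
        "Id": "main-account-bus",
        "Arn": "arn:aws:events:us-east-1:111122223333:event-bus/default",
        "RoleArn": "arn:aws:iam::222233334444:role/forward-to-main-bus",  # placeholder
    }],
)

# --- In the main account: route the matched events to the central SQS queue.
events.put_rule(Name="ec2-lifecycle-to-sqs", EventPattern=lifecycle_pattern)
events.put_targets(
    Rule="ec2-lifecycle-to-sqs",
    Targets=[{
        "Id": "central-queue",
        "Arn": "arn:aws:sqs:us-east-1:111122223333:ec2-lifecycle-events",
    }],
)
```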
Question No 3:
An application uses Amazon Cognito user pools and identity pools for secure authentication and authorization. A developer is adding functionality to allow each authenticated user to upload and download their own files to and from Amazon S3. File sizes range from 3 KB to 300 MB. The developer needs to ensure files are accessed and stored securely, and that each user can only interact with their own files.
Which of the following provides the highest level of security for this use case?
A. Use S3 Event Notifications to validate the file upload and download requests and update the user interface (UI).
B. Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of files in the user interface (UI) by comparing the current user ID with the user ID associated with the file in the table.
C. Use Amazon API Gateway and an AWS Lambda function to upload and download files. Validate each request in the Lambda function before performing the requested operation.
D. Use an IAM policy with the Amazon Cognito identity prefix to restrict users to their own folders in Amazon S3.
Correct Answer: D
Explanation:
When building an application that uses Amazon Cognito for user authentication and Amazon S3 for file storage, a critical security concern is making sure that users can only access their own files and not anyone else’s. This is especially important in applications where user-generated data (such as uploaded files) is stored in a shared resource like an S3 bucket. To enforce this securely and efficiently, the recommended approach is to use Amazon Cognito identity pools in conjunction with IAM policies that define permissions scoped to user-specific S3 paths.
Option D presents the most secure and scalable approach. By using IAM policies with Cognito identity pool roles, you can dynamically restrict access to specific prefixes in an S3 bucket. For example, each user can be restricted to a folder path such as s3://mybucket/private/${cognito-identity.amazonaws.com:sub}/. AWS Identity and Access Management (IAM) supports the policy variable ${cognito-identity.amazonaws.com:sub}, which automatically resolves to the authenticated user's unique Cognito identity ID. This ensures that users can only access their own folder in the S3 bucket, and access to all other folders is automatically denied.
This method ensures:
Fine-grained access control enforced by AWS IAM.
No need for custom backends to manage file access.
Seamless integration with Amazon Cognito for authentication and temporary credentials.
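A minimal sketch of such a policy for the identity pool's authenticated role is shown below as a Python dictionary; the bucket name and the private/ prefix layout are illustrative rather than taken from the question.

```python
import json

# Sketch of an IAM policy for the Cognito identity pool's authenticated role.
# "my-app-bucket" and the "private/" prefix are hypothetical.
per_user_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-app-bucket/private/${cognito-identity.amazonaws.com:sub}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-app-bucket",
            "Condition": {
                "StringLike": {"s3:prefix": ["private/${cognito-identity.amazonaws.com:sub}/*"]}
            },
        },
    ],
}
print(json.dumps(per_user_s3_policy, indent=2))
```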
Let's examine why the other options fall short:
A. Using S3 Event Notifications is not an effective security mechanism. These notifications are triggered after an event (such as a file upload) and cannot prevent unauthorized access. They also do not enforce security policies; they are more useful for asynchronous processing (e.g., triggering a Lambda after an upload).
B. Saving file metadata in a DynamoDB table and filtering the UI based on the user ID provides UI-level filtering only, not actual security. A malicious user could bypass the UI and make direct requests to S3. Without IAM restrictions, they could potentially access files that don’t belong to them. Relying solely on client-side filtering is not secure.
C. Using API Gateway and Lambda to proxy all file uploads and downloads can be secure if implemented properly. However, this architecture introduces significant overhead, especially for large files (up to 300 MB), and may not scale efficiently. API Gateway enforces a 10 MB payload limit, and Lambda has timeouts and memory limits, which makes this approach unsuitable for handling large file transfers. Additionally, you would have to manually implement all authentication and access control logic, which is error-prone compared to using built-in IAM policies.
In contrast, Option D leverages AWS's built-in security model with minimal custom code, better performance, and well-defined security boundaries using IAM policies. It is also the most maintainable solution, aligning with AWS best practices for securing user-specific access to shared resources.
Therefore, Option D is the correct and most secure choice.
Question No 4:
A company is developing a scalable data processing system using AWS services to boost development speed and adaptability. This system must ingest large volumes of data from diverse sources and apply a sequence of business rules and transformations to the data. These business rules must be executed in a specific order, and the solution should also handle error reprocessing in case failures occur during rule execution. The company needs a scalable, low-maintenance orchestration method for automating these data workflows.
Which AWS service best fulfills these orchestration and automation requirements?
A. AWS Batch
B. AWS Step Functions
C. AWS Glue
D. AWS Lambda
Correct Answer: B
Explanation:
When designing a cloud-native data processing solution, it's critical to consider both scalability and operational simplicity. In this scenario, the business requires a system that can orchestrate complex workflows consisting of sequential data transformations and business rules, with the added complexity of error handling and reprocessing. Let’s analyze the options.
A. AWS Batch is primarily designed for batch computing workloads. It enables developers to run hundreds to thousands of batch computing jobs efficiently on AWS infrastructure. While it does offer some job dependency functionality, AWS Batch is not primarily built for complex workflow orchestration involving conditional logic, retries, or integration with multiple AWS services in a highly visual and maintainable way. As such, while it could process data at scale, it does not provide the low-maintenance orchestration and error handling features that this use case demands.
B. AWS Step Functions is explicitly designed for building and managing workflows that coordinate multiple AWS services in a sequential and reliable fashion. It provides visual workflow modeling, built-in error handling, retry logic, and state tracking, which are exactly the capabilities needed to meet the requirements. Step Functions can invoke Lambda functions, Glue jobs, Batch jobs, and more—allowing it to orchestrate virtually any type of workload. The service automatically scales with demand and eliminates the need for manual monitoring of workflow execution. Because of its native ability to sequence tasks, retry on failure, and track state transitions, it is the best fit for orchestrating the described data flows.
C. AWS Glue is an ETL (extract, transform, load) service optimized for data cataloging and transformation, and it is well-suited for data lakes and large-scale data processing. While it can define jobs and triggers, AWS Glue is not ideal for complex control flow orchestration, such as multi-step business logic with dynamic decision-making or reprocessing logic. Additionally, Glue workflows are not as visually intuitive or as flexible for sequencing complex rules as Step Functions.
D. AWS Lambda is excellent for event-driven computing and can process individual tasks in a serverless fashion. However, Lambda itself does not provide workflow orchestration. Developers would need to build and manage their own orchestration logic, state management, and error handling—leading to higher maintenance overhead. Lambda functions are ideal components of a larger solution, but not the orchestration tool itself.
Therefore, AWS Step Functions stands out as the most suitable service in this context because it provides a scalable, low-maintenance, and feature-rich environment for orchestrating sequential business logic, handling errors gracefully, and automating data workflows across a variety of AWS services.
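To illustrate the kind of orchestration described above, here is a minimal sketch of an Amazon States Language definition with two sequential rule steps, retry on failure, and a catch route for reprocessing. The Lambda ARNs and state names are placeholders, not part of the question.

```python
import json

# Sketch of a Step Functions state machine: sequential business rules with
# retry-on-failure and a catch route for reprocessing. All ARNs are placeholders.
definition = {
    "StartAt": "ApplyRuleOne",
    "States": {
        "ApplyRuleOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:rule-one",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 5,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Next": "ApplyRuleTwo",
        },
        "ApplyRuleTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:rule-two",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "SendToReprocessing"}],
            "End": True,
        },
        "SendToReprocessing": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:reprocess-handler",
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```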
Question No 5:
A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created.
However, the function fails when it attempts to write to the DynamoDB table. What is the MOST likely cause of this issue?
A. The Lambda function's concurrency limit has been exceeded.
B. The DynamoDB table requires a global secondary index (GSI) to support writes.
C. The Lambda function does not have IAM permissions to write to DynamoDB.
D. The DynamoDB table is not running in the same Availability Zone as the Lambda function.
Correct Answer: C
Explanation:
When an AWS Lambda function fails during execution, especially when attempting to interact with another AWS service such as DynamoDB, the issue often lies in permissions rather than resource configuration or limitations. In this scenario, the function is successfully invoked by an S3 event, indicating that the function is correctly deployed, triggered, and has sufficient permissions to access Amazon S3. The failure point is clearly during the attempt to write to Amazon DynamoDB, which narrows the problem to issues associated with that specific action.
Option A, "The Lambda function's concurrency limit has been exceeded," is unlikely to be the primary cause. If the Lambda function were throttled due to exceeding concurrency limits, it would not fail specifically at the point of writing to DynamoDB; it would either fail to invoke altogether or exhibit more generalized failures. Additionally, AWS provides robust throttling handling, and concurrency issues would more likely cause timeouts or retry behavior than a failure at a specific service call.
Option B, "DynamoDB table requires a global secondary index (GSI) to support writes," is incorrect. Global Secondary Indexes (GSIs) in DynamoDB are not required to perform write operations to the base table. GSIs are optional and are only used for querying data based on alternate keys. Their presence or absence does not affect the ability to write to the table. Therefore, this is not a likely explanation for the failure.
Option C, "The Lambda function does not have IAM permissions to write to DynamoDB," is the most likely and correct answer. AWS Lambda functions use execution roles, defined using AWS Identity and Access Management (IAM), to obtain temporary credentials and perform actions on other AWS services. If the IAM role associated with the Lambda function does not include permissions to perform PutItem, UpdateItem, or BatchWriteItem operations on the target DynamoDB table, the function will fail when it tries to execute those actions. AWS logs would typically show an AccessDeniedException or a similar error, clearly pointing to insufficient IAM permissions.
Option D, "The DynamoDB table is not running in the same Availability Zone as the Lambda function," is based on a misunderstanding of how AWS services work. Amazon DynamoDB is a fully managed, region-based service, not tied to specific Availability Zones. As such, any AWS resource in the same region can access it, and there is no requirement for being in the same Availability Zone. Moreover, AWS services are designed to be highly available and redundant across multiple Availability Zones.
To summarize, insufficient IAM permissions is a common cause of inter-service communication failures in AWS. When a Lambda function is able to access S3 but fails to interact with DynamoDB, the IAM role should be the first place to check. The role must explicitly allow actions like dynamodb:PutItem on the relevant table ARN. Additionally, the policy should not be overly restrictive in terms of conditions or resource ARNs. Ensuring that the role has the correct permissions will typically resolve such issues.
Thus, the most likely cause of the failure is the lack of appropriate IAM permissions for the Lambda function to write to DynamoDB.
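As a reference point, the execution role's policy would need a statement along these lines; the table name, region, and account ID below are placeholders.

```python
import json

# Sketch of the missing execution-role permissions. The table ARN is a placeholder.
dynamodb_write_statement = {
    "Effect": "Allow",
    "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
    "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/ProcessedObjects",
}
print(json.dumps({"Version": "2012-10-17", "Statement": [dynamodb_write_statement]}, indent=2))
```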
Question No 6:
How can a developer ensure that only approved Amazon EC2 instance types are used in an AWS CloudFormation template when deploying resources across multiple AWS accounts?
A. Create a separate CloudFormation template for each EC2 instance type in the list.
B. In the Resources section of the CloudFormation template, create resources for each EC2 instance type in the list.
C. In the CloudFormation template, create a separate parameter for each EC2 instance type in the list.
D. In the CloudFormation template, create a parameter with the list of EC2 instance types as AllowedValues.
Correct Answer: D
Explanation:
When designing a CloudFormation template to allow deployment flexibility while enforcing constraints, using parameters with AllowedValues is the best practice. This approach enables template users to choose from a list of valid options without hardcoding each choice into the template structure. It simplifies maintenance, improves reusability, and ensures compliance with approved configurations.
Option D suggests using a single parameter in the CloudFormation template and specifying a set of AllowedValues. This is the correct approach because it lets users select an EC2 instance type from a predefined list when launching the stack.
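For example, a minimal template sketch might declare the parameter like this; the instance types, AMI ID, and names are illustrative rather than values from the question.

```python
import boto3

# Sketch of a template with an AllowedValues-constrained parameter.
# Instance types, AMI ID, and stack name are placeholders.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  InstanceType:
    Type: String
    AllowedValues: [t3.micro, t3.small, m5.large]
    Default: t3.micro
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-0123456789abcdef0
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="approved-instance-demo",
    TemplateBody=TEMPLATE,
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.small"}],
)
# Supplying a value outside AllowedValues (for example "m5.24xlarge") is rejected
# by CloudFormation before any resources are created.
```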
This configuration ensures that users deploying the CloudFormation stack can only select one of the predefined instance types. If they attempt to use an unapproved type, the deployment will fail validation before any resources are provisioned. This not only enhances security and compliance, but also reduces the risk of incurring unnecessary costs from using expensive instance types.
Option A suggests creating a separate template for each EC2 instance type. This approach is inefficient and introduces significant redundancy. It leads to template sprawl and makes management more cumbersome, especially if changes are required across all templates.
Option B proposes creating separate resource entries for each instance type in the Resources section. This would mean defining multiple EC2 resources, even though only one will be used. This wastes resources and could cause confusion during deployment. Moreover, CloudFormation doesn't support conditional creation of resources purely based on instance types unless using conditions and complex logic — an unnecessary complication in this context.
Option C involves creating a separate parameter for each instance type. This does not offer any selection mechanism or validation; it merely creates multiple parameters, which can complicate the input process and confuse the user. There’s no inherent logic that selects between the parameters unless additional scripting or logic is layered on top, which is inefficient and error-prone.
By contrast, Option D follows a best-practice methodology promoted by AWS for handling approved configurations. Using AllowedValues within a parameter is a straightforward and maintainable way to control which EC2 instance types are permitted in a stack deployment.
In summary, using a parameter with AllowedValues allows the developer to enforce policy-driven infrastructure choices within a single, flexible, and easy-to-maintain CloudFormation template. It provides user input validation, enhances template reusability, and promotes governance across multiple AWS accounts without complexity.
Question No 7:
A developer is using the BatchGetItem operation of Amazon DynamoDB in their application to make batch requests. The API often returns items under the UnprocessedKeys element, indicating that some items were not processed.
Which steps should the developer take to enhance the application's resilience when these unprocessed keys are present? (Select two.)
A. Retry the batch operation immediately.
B. Retry the batch operation with exponential backoff and randomized delay.
C. Update the application to use an AWS software development kit (AWS SDK) to make the requests.
D. Increase the provisioned read capacity of the DynamoDB tables that the operation accesses.
E. Increase the provisioned write capacity of the DynamoDB tables that the operation accesses.
Correct Answers: B, D
Explanation:
When using Amazon DynamoDB’s BatchGetItem API operation, it's common for responses to include UnprocessedKeys, especially when the throughput limits of the table are approached or exceeded. This field indicates that DynamoDB couldn’t process certain keys due to temporary capacity constraints. To build resilience into an application that deals with this behavior, two key strategies are typically recommended: implementing exponential backoff and ensuring sufficient read capacity.
Why B is correct:
Implementing exponential backoff with jitter (randomized delay) is a best practice for handling throttling and other temporary issues in AWS services, including DynamoDB. Rather than hammering the service with immediate retries, which can exacerbate throttling, exponential backoff introduces increasing wait times between retries. Adding a random component (jitter) helps to prevent large numbers of clients from retrying in lockstep, which could otherwise overwhelm the service. This strategy increases the chances that subsequent retry attempts will succeed and is specifically recommended in AWS documentation when handling UnprocessedKeys in BatchGetItem responses.
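A minimal sketch of this retry loop with boto3 is shown below; the retry cap and delay bounds are illustrative choices rather than values from the question.

```python
import random
import time
import boto3

dynamodb = boto3.client("dynamodb")

def batch_get_with_backoff(request_items, max_attempts=5):
    """Retry UnprocessedKeys from BatchGetItem with exponential backoff and jitter."""
    collected = {}
    for attempt in range(max_attempts):
        result = dynamodb.batch_get_item(RequestItems=request_items)
        for table, items in result.get("Responses", {}).items():
            collected.setdefault(table, []).extend(items)
        request_items = result.get("UnprocessedKeys", {})
        if not request_items:
            break
        # Capped exponential backoff with full jitter before the next attempt.
        time.sleep(random.uniform(0, min(10, 0.1 * (2 ** attempt))))
    return collected
```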
Why D is correct:
If your DynamoDB tables frequently return UnprocessedKeys, it's a sign that the read capacity of the tables may be insufficient to handle the load. Increasing the provisioned read capacity units (RCUs) allows DynamoDB to process more read operations per second. Since BatchGetItem is a read operation, boosting read capacity can directly reduce the likelihood of throttling and unprocessed keys, leading to improved resilience and performance.
Why A is incorrect:
Retrying the operation immediately without any delay is not recommended because it does not address the root issue of throttling and could make the problem worse. It increases contention and the chance of repeated failures, especially in a high-throughput environment.
Why C is incorrect:
While using the AWS SDK is generally a good practice and can help manage retries, it’s not a specific or guaranteed solution to the UnprocessedKeys issue unless the SDK is configured to handle retries with exponential backoff. The problem lies in capacity constraints, not the method of making requests.
Why E is incorrect:
This is a BatchGetItem operation, which involves reading data, not writing it. Therefore, increasing the write capacity would have no effect on the issue at hand. Write capacity applies to operations like PutItem, UpdateItem, and BatchWriteItem.
In conclusion, to make an application more resilient to the UnprocessedKeys behavior, the developer should both implement exponential backoff with jitter and ensure the read capacity is adequate for the workload. These steps directly address the reasons DynamoDB cannot process all requested items and provide a robust strategy for improving performance and reliability.
Question No 8:
A company has a custom application running on on-premises Linux servers. These servers are accessed through Amazon API Gateway, and AWS X-Ray tracing is enabled on the test stage of the API.
What is the simplest way for a developer to enable X-Ray tracing on the on-premises servers with minimal configuration effort?
A. Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service.
B. Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service.
C. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTraceSegments API call.
D. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTelemetryRecords API call.
Correct Answer: B
Explanation:
To determine the most appropriate and least complex solution for enabling AWS X-Ray tracing on on-premises servers, we must first understand how AWS X-Ray works and what components are needed to trace requests through a distributed system that includes on-premises servers.
AWS X-Ray is a service that helps developers analyze and debug distributed applications. It does this by collecting data about requests that your application serves, and then providing tools to view and filter this data to identify performance bottlenecks and issues. X-Ray supports applications hosted in AWS and also applications running in on-premises environments.
For an on-premises server to participate in X-Ray tracing, the server must:
Generate trace segments using the X-Ray SDK.
Send these segments to the X-Ray service using the X-Ray daemon.
Let’s analyze the options:
Option A suggests using the X-Ray SDK alone. While the SDK is required to instrument the application code and generate trace segments, the SDK by itself cannot send data directly to the X-Ray service. Instead, it sends segments to a local daemon process (X-Ray daemon), which batches and uploads them to the X-Ray service. Therefore, this option is incomplete and insufficient.
Option B is the correct and most efficient approach. The X-Ray daemon is specifically designed to listen for trace segment data from the SDK and then forward it to AWS X-Ray over the internet. By installing and running the X-Ray daemon on the on-premises servers, the developer enables the existing SDK or instrumented components to relay their data to X-Ray with minimal configuration. This approach requires little effort—essentially downloading the daemon, configuring access credentials if necessary, and running the service. Moreover, the daemon is lightweight and doesn't require complex setup.
Option C proposes capturing requests and manually sending segments using AWS Lambda and the PutTraceSegments API. While technically feasible, this adds significant complexity. It requires custom code to format trace data properly, handle retries, and authenticate securely with X-Ray. It also necessitates designing an orchestration mechanism to trigger the Lambda function in response to on-premises events. This is not a minimal configuration option and is best reserved for advanced custom pipelines or environments without direct daemon support.
Option D involves using the PutTelemetryRecords API via Lambda. This API is intended for sending telemetry data such as health metrics about the X-Ray daemon—not actual trace segments. Therefore, using it to capture application traces is inappropriate and would not satisfy the requirements. Moreover, the setup complexity is high and misaligned with the API’s intended purpose.
In summary, Option B—installing and running the X-Ray daemon—is the most straightforward and effective method. It aligns directly with AWS’s recommended approach for enabling tracing in non-AWS environments. It avoids the complexity of manually handling trace segment transmission, ensures compatibility with existing SDKs, and keeps the configuration minimal.
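For completeness, the application-side instrumentation that pairs with the locally running daemon could look roughly like the sketch below. It assumes the aws_xray_sdk package is installed and the daemon is listening on its default endpoint (127.0.0.1:2000); the service name and handler are hypothetical.

```python
from aws_xray_sdk.core import xray_recorder, patch_all

# Point the SDK at the local X-Ray daemon; the service name is a placeholder.
xray_recorder.configure(
    service="on-prem-transaction-api",
    daemon_address="127.0.0.1:2000",
)
patch_all()  # auto-instrument supported libraries such as requests and boto3

def handle_request(payload):
    # Outside a web framework, segments are opened and closed explicitly.
    xray_recorder.begin_segment("handle_request")
    try:
        return {"status": "ok", "payload": payload}
    finally:
        xray_recorder.end_segment()
```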
Question No 9:
A company needs to share information with a third-party system through an HTTP API. The company already has the necessary API key for access. The solution must enable programmatic management of the API key, ensure strong security, and not negatively impact the application's performance.
Which approach is the most secure and efficient for this use case?
A. Store the API credentials in AWS Secrets Manager. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
B. Store the API credentials in a local code variable. Push the code to a secure Git repository. Use the local code variable at runtime to make the API call.
C. Store the API credentials as an object in a private Amazon S3 bucket. Restrict access to the S3 object by using IAM policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
D. Store the API credentials in an Amazon DynamoDB table. Restrict access to the table by using resource-based policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
Correct Answer: A
Explanation:
To securely manage and use sensitive information like an API key, AWS provides several managed services that support credential storage and access. Among the four options presented, Option A is the most secure and operationally efficient choice, especially when performance and security are both priorities.
Let’s evaluate each option:
Option A:
Storing API credentials in AWS Secrets Manager is the most appropriate and secure approach for managing sensitive data such as API keys. Secrets Manager is a fully managed service designed specifically for this use case. It provides several advantages:
Encryption at rest using AWS Key Management Service (AWS KMS).
Fine-grained IAM-based access control for who can retrieve secrets.
Audit logging through AWS CloudTrail for every secrets access or management event.
Automatic rotation capabilities, although not needed in this case, are useful for future security compliance.
Supports runtime retrieval through the AWS SDK with minimal latency that doesn’t meaningfully impact application performance.
Applications can retrieve the secret programmatically using short-lived, IAM-authenticated sessions—ensuring that credentials are never hardcoded and that access can be revoked instantly if needed. The SDK call to Secrets Manager is optimized and efficient for most use cases, especially if caching is enabled.
Option B:
This is a high-risk and insecure practice. Storing credentials directly in the codebase—even if pushed to a “secure” Git repository—is dangerous. Anyone with access to the repository may extract the credentials. Even private repositories have exposure risks, including insider threats or misconfiguration. Additionally, this approach does not support rotation or secure auditing and is considered an anti-pattern in modern secure software development.
Option C:
Using Amazon S3 to store sensitive credentials is better than storing them in code but still suboptimal. While S3 supports encryption and access control via IAM, it lacks the purpose-built features of Secrets Manager. There’s no automatic secret rotation, and security auditing is less integrated. Additionally, managing secure access to individual objects can be complex and error-prone, especially across environments. The application would also need to handle deserialization and parsing, increasing complexity and potential for bugs.
Option D:
Using Amazon DynamoDB for secret storage is technically possible but not ideal. DynamoDB is not designed as a secure secret management service. While it does support encryption and fine-grained access control, it lacks the secret management capabilities that Secrets Manager offers, including rotation, versioning, access history, and integration with KMS for enhanced auditing. Additionally, implementing secure retrieval logic yourself adds unnecessary complexity and potential for misconfiguration. This approach is sometimes used in legacy systems, but it’s not recommended for new implementations.
Performance Consideration:
Secrets Manager is built to deliver low-latency secret retrieval, and you can implement local caching using AWS SDKs or third-party libraries to avoid making a network call every time. This balances both security and performance, which the question explicitly prioritizes.
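A rough sketch of runtime retrieval with a simple in-process cache follows; the secret name and TTL are illustrative, and AWS also publishes dedicated caching client libraries that serve the same purpose.

```python
import time
import boto3

_secrets = boto3.client("secretsmanager")
_cache = {}            # secret name -> (value, fetched_at)
_TTL_SECONDS = 300     # refresh the cached value every 5 minutes

def get_api_key(secret_name="third-party/chat-api-key"):
    """Return the API key, calling Secrets Manager only when the cache is stale."""
    cached = _cache.get(secret_name)
    if cached and time.time() - cached[1] < _TTL_SECONDS:
        return cached[0]
    value = _secrets.get_secret_value(SecretId=secret_name)["SecretString"]
    _cache[secret_name] = (value, time.time())
    return value
```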
Conclusion:
Option A (AWS Secrets Manager) is the most secure, scalable, and manageable method for handling API credentials. It directly aligns with best practices for secret management in cloud-native applications and satisfies both the performance and security requirements stated in the scenario.
Question No 10:
A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The application needs to securely store and retrieve variables, such as authentication information for a remote API, the API URL, and credentials. These variables should be available across different environments, including development, testing, and production, for all current and future versions of the application.
How should the developer retrieve the variables with the fewest application changes?
A. Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment.
B. Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
C. Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
D. Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.
Correct Answer: A
Explanation:
When deploying applications to Amazon ECS, especially across multiple environments such as development, testing, and production, securely storing and retrieving sensitive information (such as API credentials and authentication data) is critical. The solution must be flexible, secure, and easy to manage, while minimizing changes to the application code. Let’s analyze each option based on these criteria:
Option A: Using AWS Systems Manager Parameter Store to securely store the variables and AWS Secrets Manager for credentials is a highly recommended approach for this scenario. Parameter Store can store application variables such as URLs and authentication information, while Secrets Manager is designed specifically to handle sensitive information like credentials. Both services integrate seamlessly with ECS, and they allow variables to be retrieved dynamically at runtime without requiring significant changes to the application. The developer can use unique paths in Parameter Store for each environment (e.g., /dev/api/url, /prod/api/url), making it easy to access different variables for different environments. Additionally, AWS KMS will automatically encrypt the data both at rest and in transit, ensuring security. This approach minimizes application changes and leverages managed services optimized for securely handling configuration and secrets.
Option B: AWS Key Management Service (KMS) is primarily designed for managing encryption keys, not for storing or managing application variables. While KMS can encrypt and decrypt data, it is not a secret management service in the same sense as Secrets Manager. Storing API URLs and credentials as unique keys would require custom application logic to manage encryption and decryption operations, adding unnecessary complexity to the process. Therefore, using KMS directly to store application variables is not the most effective approach for this use case.
Option C: Storing encrypted files with application variables is another approach, but it comes with a few issues. First, the application would need to include additional logic to decrypt the file at runtime, which could lead to security concerns and additional maintenance overhead (e.g., rotating encryption keys). Additionally, managing these encrypted files across different environments and ensuring they stay up to date with the application’s requirements can become cumbersome. This solution is less scalable and flexible compared to using AWS-native services like Parameter Store and Secrets Manager.
Option D: Defining the variables directly in the ECS task definition can be a simple solution for storing environment-specific data. However, it is less secure than using Secrets Manager or Parameter Store, as task definitions are typically stored in plaintext. Moreover, this method doesn't scale well across multiple environments because it would require modifying the ECS task definition each time an application is updated. Also, storing sensitive information like credentials directly in task definitions is not recommended due to security best practices.
In conclusion, Option A provides a secure, scalable, and flexible solution with the least amount of configuration and management overhead. By utilizing AWS Systems Manager Parameter Store and Secrets Manager, the developer can securely store and retrieve application variables, ensuring that these variables are available across environments without making extensive changes to the application code.
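A minimal sketch of the retrieval logic follows; the path layout (/{environment}/myapp/...) and the APP_ENV environment variable are illustrative conventions, not part of the question.

```python
import os
import boto3

ssm = boto3.client("ssm")

def load_config(environment=os.environ.get("APP_ENV", "dev")):
    """Load all parameters under the environment's path, decrypting SecureStrings."""
    config = {}
    paginator = ssm.get_paginator("get_parameters_by_path")
    pages = paginator.paginate(
        Path=f"/{environment}/myapp/",   # e.g. /dev/myapp/api/url
        Recursive=True,
        WithDecryption=True,
    )
    for page in pages:
        for parameter in page["Parameters"]:
            config[parameter["Name"]] = parameter["Value"]
    return config
```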