Google Professional Cloud Architect Practice Test Questions and Exam Dumps

Question No 1:

Your company is undergoing a major revision of its API to improve the developer experience. The objective is to keep the old version of the API accessible and deployable while allowing new customers and testers to try out the new API. Additionally, the company wants to maintain the same SSL and DNS records for serving both versions of the API.

What should they do?

A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer

Answer: D. Use separate backend pools for each API path behind the load balancer

Explanation:

When managing different versions of an API, especially in cases where you want to maintain the old version while introducing a new one, it’s essential to ensure that both versions can coexist under the same domain and SSL certificates. At the same time, the traffic must be properly routed to the appropriate API version based on the specific request or path.

Let's break down the provided options:

  1. Option A: Configure a new load balancer for the new version of the API

This option suggests setting up a new load balancer for the new API, which would typically require new DNS entries, potentially new SSL certificates, and additional infrastructure overhead. Since the goal is to maintain the same DNS records and SSL certificates, creating a new load balancer is not the most efficient approach.

  2. Option B: Reconfigure old clients to use a new endpoint for the new API

While reconfiguring old clients to use a new endpoint might be a solution, it doesn’t align with the requirement to keep the old API available for existing clients. Additionally, it would require modifying each client that consumes the API, which is not practical or scalable. The solution must ensure both APIs are accessible without making significant changes to client configurations.

  3. Option C: Have the old API forward traffic to the new API based on the path

This solution introduces the concept of the old API forwarding traffic to the new API. However, forwarding traffic is generally less efficient than directly routing requests to the correct API version, especially as both APIs will likely be under heavy load. This approach can also complicate the architecture, making it harder to manage traffic flow and causing additional latency.

  4. Option D: Use separate backend pools for each API path behind the load balancer

This is the best solution. By using separate backend pools behind a load balancer, the company can configure the load balancer to route traffic to different API versions based on the API path (e.g., /v1/ for the old API and /v2/ for the new API). This approach allows both versions to run concurrently under the same domain, without requiring new DNS records or SSL certificates. Each version of the API can be independently scaled and managed in its respective backend pool, improving both efficiency and performance. The load balancer ensures traffic is directed to the correct API version based on the request path.

Thus, the correct answer is D. Use separate backend pools for each API path behind the load balancer.
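For illustration, here is a minimal sketch, in Python, of the URL map body that could be sent to the Compute Engine API so that /v1/* and /v2/* are routed to separate backend services behind the same load balancer. The project, hostname, and backend service names are hypothetical.

    # Hypothetical names throughout; sketch of a Compute Engine urlMaps
    # resource that routes requests by path to two backend services.
    PROJECT = "my-project"
    BACKENDS = (
        "https://www.googleapis.com/compute/v1/projects/"
        f"{PROJECT}/global/backendServices"
    )

    url_map_body = {
        "name": "api-url-map",
        "defaultService": f"{BACKENDS}/api-v1-backend",  # old API stays the default
        "hostRules": [
            {"hosts": ["api.example.com"], "pathMatcher": "api-versions"},
        ],
        "pathMatchers": [
            {
                "name": "api-versions",
                "defaultService": f"{BACKENDS}/api-v1-backend",
                "pathRules": [
                    {"paths": ["/v1", "/v1/*"], "service": f"{BACKENDS}/api-v1-backend"},
                    {"paths": ["/v2", "/v2/*"], "service": f"{BACKENDS}/api-v2-backend"},
                ],
            },
        ],
    }

    # The body could then be submitted with the Google API client, for example:
    #   googleapiclient.discovery.build("compute", "v1").urlMaps() \
    #       .insert(project=PROJECT, body=url_map_body).execute()

Because both backend services sit behind the same forwarding rule and certificate, the existing DNS record and SSL configuration keep serving both API versions.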

Question No 2:

Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24 hours a day. Your business analysts are only familiar with using a SQL interface.

How should you store the data to optimize it for ease of analysis?

A. Load data into Google BigQuery
B. Insert data into Google Cloud SQL
C. Put flat files into Google Cloud Storage
D. Stream data into Google Cloud Datastore

Answer: A. Load data into Google BigQuery

Explanation:

When migrating large datasets to the cloud, it’s important to choose the appropriate storage solution that aligns with the needs of your business analysts and the data's availability requirements. Since the business analysts are familiar with SQL interfaces, the solution should be optimized for ease of analysis with SQL queries.

Let’s evaluate the options:

  1. Option A: Load data into Google BigQuery

Google BigQuery is a fully managed, serverless data warehouse designed for large-scale analysis. It separates storage from compute, comfortably handles multi-petabyte datasets, keeps the data available around the clock, and exposes a standard SQL interface. Because the business analysts already know SQL, they can query the migrated data directly without learning new tools, and there is no infrastructure or capacity planning to manage as the dataset grows. This makes BigQuery the best fit for the stated requirements.

  2. Option B: Insert data into Google Cloud SQL

Google Cloud SQL is a fully managed relational database that supports popular SQL engines (MySQL, PostgreSQL, SQL Server). It does provide a familiar SQL interface, but it is built for transactional workloads, and a single instance is limited to tens of terabytes of storage, far short of a multi-petabyte dataset. It also cannot match a data warehouse for large analytical scans, so it is not a practical home for this data.

  3. Option C: Put flat files into Google Cloud Storage

Google Cloud Storage is an object storage service optimized for unstructured data, such as large files and backups. While it is highly scalable and provides 24/7 availability, it has no native SQL interface; analysts would need additional services such as BigQuery or Dataproc to query the data, so it does not meet the requirement on its own.

  4. Option D: Stream data into Google Cloud Datastore

Google Cloud Datastore is a NoSQL document database optimized for application workloads that need high availability and scalability. It does not offer a SQL interface and is not designed for analytical queries over multi-petabyte datasets, making it unsuitable for analysts who only know SQL.

Thus, the best option is A. Load data into Google BigQuery. It is the only choice that combines petabyte-scale storage, continuous availability, and the standard SQL interface the analysts already use.
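As a minimal sketch of what the analysts' workflow could look like once the data is in BigQuery (the project, dataset, and table names below are hypothetical), a standard SQL query can be run directly through the BigQuery Python client:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    # Credentials are taken from the environment (for example, a service
    # account). Project, dataset, and table names are placeholders.
    client = bigquery.Client(project="my-analytics-project")

    query = """
        SELECT office_location, COUNT(*) AS record_count
        FROM `my-analytics-project.migrated_data.events`
        GROUP BY office_location
        ORDER BY record_count DESC
    """

    for row in client.query(query).result():
        print(f"{row.office_location}: {row.record_count}")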

Question No 3:

The operations manager has asked for a list of best practices to consider when migrating a J2EE application to the cloud. You need to recommend three practices that will ensure the application runs efficiently and securely in a cloud environment. Which three of the following practices should you recommend? (Choose three.)

A. Port the application code to run on Google App Engine.
B. Integrate Cloud Dataflow into the application to capture real-time metrics.
C. Instrument the application with a monitoring tool like Stackdriver Debugger.
D. Select an automation framework to reliably provision the cloud infrastructure.
E. Deploy a continuous integration tool with automated testing in a staging environment.
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable.

Answer:

C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment

Explanation:

When migrating a J2EE (Java 2 Platform, Enterprise Edition) application to the cloud, it’s important to ensure the application not only works but also operates efficiently, is easy to manage, and is scalable. The following recommended practices ensure that the migration is successful and that the application meets cloud-native requirements.

C. Instrument the application with a monitoring tool like Stackdriver Debugger:

Monitoring and troubleshooting are critical for any application, especially when migrating to the cloud. Using a monitoring tool like Stackdriver Debugger (now part of Google Cloud Operations Suite) allows you to gain insights into the application's performance and identify any issues. Instrumenting the application with such a tool helps in detecting real-time issues, logging errors, and understanding how the application behaves under load in the cloud environment. This practice helps you ensure that the application is running as expected after migration, which is crucial for maintaining operational efficiency and quickly addressing problems that may arise.
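As a small illustration of this practice (shown in Python to stay consistent with the other sketches in this document; a J2EE application would use the equivalent Java agent or library), attaching Cloud Logging to the standard logging module makes application errors searchable in the operations suite:

    import logging

    import google.cloud.logging  # pip install google-cloud-logging


    def process_order():
        # Hypothetical application logic; raises to demonstrate error capture.
        raise RuntimeError("inventory lookup failed")


    # Route standard Python logging into Cloud Logging.
    client = google.cloud.logging.Client()
    client.setup_logging()

    logging.info("order-service started")
    try:
        process_order()
    except Exception:
        # The error becomes a searchable log entry that can drive alerts.
        logging.exception("order processing failed")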

D. Select an automation framework to reliably provision the cloud infrastructure:

Cloud environments can be complex, especially when it comes to provisioning and managing infrastructure. To ensure that the cloud infrastructure is set up consistently and reliably, using an automation framework is a key best practice. Tools such as Terraform, Ansible, or Google Cloud Deployment Manager help automate the provisioning of cloud resources. Automating the infrastructure setup eliminates the risks associated with manual configurations and ensures that resources are provisioned correctly every time, which is critical when scaling the application or deploying updates.

E. Deploy a continuous integration tool with automated testing in a staging environment:

Continuous integration (CI) is essential for ensuring that changes to the application are tested and integrated regularly without introducing errors. By deploying a continuous integration tool (e.g., Jenkins, GitLab CI, or Cloud Build in Google Cloud), you can automatically test and validate the application in a staging environment before moving it to production. Automated testing helps catch issues early in the development cycle, making the migration process smoother and reducing downtime. In addition, testing the application in an isolated staging environment helps ensure that it behaves correctly in the cloud before making it available to end-users.

Why the Other Options Are Not Ideal:

  • A. Port the application code to run on Google App Engine:
    While Google App Engine can be an option for hosting cloud applications, it is not always the best solution for every J2EE application. Porting the entire application code to run on App Engine might require significant rework or redesign. Instead, a more flexible solution like Google Kubernetes Engine (GKE) or Compute Engine might be more suitable for J2EE applications that require more control over the environment and runtime.

  • B. Integrate Cloud Dataflow into the application to capture real-time metrics:
    Cloud Dataflow is primarily designed for processing large streams of data, rather than capturing real-time metrics of an application. While it can be useful for specific use cases involving big data, it is not the right tool for application monitoring or performance tracking. Instead, tools like Stackdriver would be more appropriate for monitoring J2EE applications.

  • F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable:
    Migrating from a relational database like MySQL to a NoSQL database like Cloud Datastore or Bigtable is a significant architectural change and should only be considered if the application’s data model and use case justify the change. For many J2EE applications, a managed MySQL service like Cloud SQL or maintaining the existing relational model might be a better fit unless there’s a specific need for NoSQL.

To ensure a successful migration of a J2EE application to the cloud, the recommended best practices involve instrumenting the application with a monitoring tool, automating infrastructure provisioning, and implementing continuous integration with automated testing. These practices help ensure the application performs well, scales effectively, and remains reliable after migration, allowing for better management and fewer issues in the long term.

Question No 4:

An application development team is working on a new cloud-based product and believes that their current logging tool will not be sufficient for their needs. They are seeking a better solution to effectively capture errors and analyze historical log data. As a solution architect or support advisor, your task is to help them find the right tool for their project.

Which of the following steps should you take to help them select the most appropriate logging solution?

A. Recommend that they download and install the Google StackDriver logging agent.
B. Provide a list of online resources about best practices in logging.
C. Assist them in defining their specific requirements and evaluating available logging tools.
D. Help them upgrade their existing tool to utilize any new features or capabilities.

Answer: C. Assist them in defining their specific requirements and evaluating available logging tools.

Explanation:

When selecting a new tool for logging, especially in the context of a cloud-based application, it is crucial to first understand the specific needs and goals of the team. A logging tool needs to support a variety of features, such as scalability, flexibility, real-time data processing, and advanced analytics, which may vary depending on the application. Additionally, the development team may have specific requirements related to integration with their existing infrastructure, compliance needs, or cloud services.

By helping the team define their requirements—such as the need for error tracking, analysis of historical logs, or integration with other cloud-based tools—you can ensure that the solution they choose will meet their technical, operational, and business needs. Once these requirements are clear, you can assist in evaluating various logging tools, considering factors like compatibility with their platform, cost, ease of implementation, and support for the specific types of analysis they require.

  • Option A (recommending the Google StackDriver logging agent) jumps straight to a specific tool; it may fit some cases, but it skips the step of defining what the team actually needs.

  • Option B (providing online resources) can be helpful but doesn't actively involve the team in evaluating tools and solutions.

  • Option D (upgrading their existing tool) might be an option if the current tool has potential for improvement, but it does not address the need for a more capable, cloud-native solution or the possibility that a completely new tool might be better suited.

Ultimately, understanding the team’s requirements and evaluating a range of logging tools will ensure that the team selects a solution that fits their product’s needs and scales with future growth.

Question No 5:

You need to reduce the number of unplanned rollbacks of erroneous production deployments on your company’s web hosting platform. The improvements made to the QA and testing processes have already led to an 80% reduction in rollbacks. 

What additional two strategies can you implement to further reduce the likelihood of rollbacks? (Choose two.)

A. Implement a green-blue deployment model
B. Replace the QA environment with canary releases
C. Break the monolithic platform into microservices
D. Minimize the platform’s reliance on relational databases
E. Replace the platform’s relational databases with a NoSQL database

Answer:

A. Implement a green-blue deployment model
B. Replace the QA environment with canary releases

Explanation:

To further reduce the number of unplanned rollbacks, it’s essential to focus on deployment strategies and architectural changes that ensure smoother, safer production releases. After achieving an 80% reduction in rollbacks through improvements in QA and testing, the next step is to introduce strategies that can catch issues early in the deployment process, minimize risks, and enhance overall stability.

  • Green-Blue Deployment Model (Option A):
    A green-blue deployment model allows for minimizing downtime and reducing risks during production deployments. In a green-blue deployment, two identical environments are maintained: one is the "blue" environment (the currently live production environment), and the other is the "green" environment (the new version of the application). The green environment is tested and validated before traffic is switched from blue to green. This strategy significantly reduces the risk of introducing errors into production, as any issues with the new release can be quickly detected and fixed by rolling back to the blue environment without downtime or disruption. This proactive approach to managing deployments is a proven method to minimize unplanned rollbacks.

  • Canary Releases (Option B):
    Canary releases involve gradually rolling out a new version of the application to a small subset of users (the "canary") while the majority continue to use the old version. This lets the team monitor the new release against real production traffic with a controlled audience and catch issues before they affect the entire user base. If problems arise, the release can be rolled back for the canary group alone, so only a small portion of users ever sees the error. This incremental approach significantly reduces the likelihood of widespread failures and unplanned rollbacks.
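To make the canary idea concrete, here is a purely conceptual Python sketch of weighted routing; in practice the traffic split is handled by the load balancer or deployment platform rather than by application code:

    import random

    CANARY_FRACTION = 0.05  # send roughly 5% of traffic to the new release


    def pick_backend() -> str:
        # Conceptual illustration only: route a small share of requests to the
        # canary release and the rest to the stable release.
        return "api-v2-canary" if random.random() < CANARY_FRACTION else "api-v1-stable"


    counts = {"api-v1-stable": 0, "api-v2-canary": 0}
    for _ in range(10_000):
        counts[pick_backend()] += 1
    print(counts)  # roughly 95% stable, 5% canary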

Why the Other Options Are Less Effective:

  • Microservices (Option C): While breaking a monolithic platform into microservices can improve scalability and flexibility, it does not directly address the issue of deployment rollbacks. Microservices introduce their own complexities, such as managing inter-service communication and testing, and may not immediately reduce the likelihood of deployment failures.

  • Minimizing Dependency on Relational Databases (Option D): Reducing reliance on relational databases might improve scalability or performance in some cases, but it does not directly address the issue of reducing deployment rollbacks. Rollbacks are typically related to deployment processes and application issues rather than database architecture.

  • NoSQL Databases (Option E): Switching to NoSQL databases may be beneficial in certain use cases, but this change does not directly address the issue of reducing rollbacks. A change in database technology would require significant re-architecting of the platform and does not inherently solve deployment-related issues.

Question No 6:

To reduce costs, the Director of Engineering has mandated that all developers move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop cycles throughout the day and require state persistence. You have been asked to design a solution for running development environments on Google Cloud while providing the finance department with cost visibility. 

Which two steps should you take to meet these requirements? (Choose two.)

A. Use the --no-auto-delete flag on all persistent disks and stop the VM
B. Use the --auto-delete flag on all persistent disks and terminate the VM
C. Apply VM CPU utilization labels and include them in the BigQuery billing export
D. Use Google BigQuery billing export and labels to associate costs with specific groups
E. Store all state in local SSDs, snapshot the persistent disks, and terminate the VM
F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM

Answer:

A. Use the --no-auto-delete flag on all persistent disks and stop the VM
D. Use Google BigQuery billing export and labels to associate costs with specific groups

Explanation:

In a cloud environment like Google Cloud, cost visibility and state persistence are critical when transitioning development infrastructure from on-premises to the cloud. The finance department needs visibility into resource usage to track costs accurately. The solutions you implement must focus on efficient resource management while providing clear cost allocation for each developer or project group.

  • Persistent disks kept with --no-auto-delete, VMs stopped rather than deleted (Option A):
    Stopping a Compute Engine VM preserves its attached persistent disks, so a development environment's state survives the many start/stop cycles during the day. While a VM is stopped, charges for vCPUs and memory cease and only the disk storage (and any reserved resources such as static IP addresses) continues to be billed, which supports the cost-reduction mandate. Setting the --no-auto-delete flag on the persistent disks additionally guarantees that the disks are not removed if an instance is ever deleted, so no developer state is lost.

  • BigQuery billing export and labels (Option D):
    Applying labels to resources such as VMs (for example by team, project, or developer) enables detailed tracking and reporting of resource usage. Google Cloud’s billing export to BigQuery writes detailed, label-annotated cost data into a BigQuery table, where it can be queried and analyzed. Combining labels with the billing export gives the finance department the required cost visibility across teams and projects; a sketch of such a query appears below.

  • Why the Other Options Are Less Effective:

  • --auto-delete Flag on Persistent Disks (Option B): Automatically deleting the disks when a VM is terminated destroys the development environment’s state, which violates the state persistence requirement.

  • VM CPU Utilization Labels (Option C): Labels are arbitrary key/value metadata attached to resources; CPU utilization is a monitoring metric, not something that can be expressed as a label or surfaced through the billing export. This option provides neither meaningful cost attribution nor state persistence.

  • Local SSDs and Snapshots (Options E and F): Local SSD contents do not survive stopping or terminating an instance, so Option E risks losing state. Snapshotting the persistent disks and terminating the VM (Option F) can preserve state, but it adds operational overhead and snapshot storage costs compared with simply stopping the VM and keeping its disks, and it does nothing for cost visibility.

In conclusion, stopping the VMs while keeping their persistent disks preserves state between start/stop cycles at minimal cost, and labels combined with the BigQuery billing export give the finance department clear insight into resource usage and spend.
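As referenced above, here is a minimal sketch of how the finance department could break down cost by team from the billing export in BigQuery; the billing table name and the "team" label key are hypothetical:

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client(project="my-billing-project")

    # The export table name follows the pattern
    # gcp_billing_export_v1_<BILLING_ACCOUNT_ID>; the one below is a placeholder.
    query = """
        SELECT
          label.value AS team,
          ROUND(SUM(cost), 2) AS total_cost
        FROM `my-billing-project.billing.gcp_billing_export_v1_XXXXXX`,
          UNNEST(labels) AS label
        WHERE label.key = 'team'
          AND usage_start_time >= TIMESTAMP('2024-01-01')
        GROUP BY team
        ORDER BY total_cost DESC
    """

    for row in client.query(query).result():
        print(f"{row.team}: ${row.total_cost}")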

Question No 7: 

Your company wants to track whether someone is present in a meeting room that has been reserved for a scheduled meeting. There are 1,000 meeting rooms spread across 5 offices on 3 different continents. Each room is equipped with a motion sensor that sends data every second. The motion sensor data includes a sensor ID and several discrete items of information related to the room's status. This data, along with information about account owners and office locations, will be used by analysts.

Which type of database should you use to store and manage this data?

A. Flat file
B. NoSQL
C. Relational
D. Blobstore

Answer: B. NoSQL

Explanation:

In this scenario, the company needs to efficiently track and store sensor data from 1,000 meeting rooms across multiple offices and continents. Each motion sensor generates data at frequent intervals (every second), and the system needs to support not just tracking individual sensors but also storing large amounts of sensor data efficiently.

Here’s why NoSQL (Option B) is the most appropriate choice:

1. Data Volume and Structure:

Each motion sensor generates data continuously, and over time, this data will accumulate quickly across all 1,000 rooms. NoSQL databases are designed to handle large volumes of unstructured or semi-structured data, such as time-series data, logs, or sensor readings. NoSQL databases can store data in flexible formats, such as key-value pairs, documents, or wide-column stores, which is ideal for handling the diverse sensor data coming from each meeting room.

2. Scalability:

With sensors sending data every second across a global network of meeting rooms, scalability is essential. NoSQL databases, particularly those designed for horizontal scaling (such as Cassandra, MongoDB, or Bigtable), can manage vast amounts of data by distributing it across multiple servers. This is crucial as the company has offices on different continents and needs to ensure that the system can scale as more sensors are added or as the data volume grows.

3. Flexible Schema:

The motion sensor data likely includes sensor IDs, timestamps, room status, and possibly other metadata. NoSQL databases allow you to work with flexible schemas, meaning you don’t have to define all the possible attributes upfront. This flexibility is beneficial since the data might evolve over time, with new types of sensor readings or additional metadata being added as the system develops.

4. Performance:

NoSQL databases can provide high performance for read and write operations, which is essential in this case, as the system will need to process large volumes of incoming sensor data in real time. These databases are optimized for fast writes and can handle the high-frequency data streams from the sensors.
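For illustration, a minimal sketch of writing a single sensor reading into Cloud Bigtable (a wide-column NoSQL store) with its Python client; the project, instance, table, and column family names are hypothetical:

    import datetime

    from google.cloud import bigtable  # pip install google-cloud-bigtable

    # Hypothetical project, instance, table, and column family names.
    client = bigtable.Client(project="facilities-project")
    table = client.instance("sensor-instance").table("room_motion")

    # Row keys that combine the sensor ID and a timestamp keep each room's
    # readings together and make time-range scans efficient.
    now = datetime.datetime.utcnow()
    row_key = f"sensor-0042#{now.isoformat()}".encode()

    row = table.direct_row(row_key)
    row.set_cell("status", b"motion_detected", b"1", timestamp=now)
    row.set_cell("status", b"room_id", b"nyc-05-conf-12", timestamp=now)
    row.commit()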

Why Not the Other Options?

A. Flat File:

While flat files are easy to set up and can handle small amounts of data, they are not suitable for large-scale, real-time applications. Flat files do not provide the necessary features like indexing, querying, or scalability, which would be required to handle the volume and complexity of data in this use case.

C. Relational Database:

Relational databases (RDBMS) are great for structured data with fixed relationships and schemas. However, the data from the motion sensors is more dynamic and could involve large volumes with different attributes per sensor. Relational databases may struggle with this type of unstructured or semi-structured data, especially in terms of scalability and performance for real-time updates, making them less suited for this case.

D. Blobstore:

Blobstore (binary large object storage) is typically used for storing large binary files, such as images or videos. It’s not designed for handling structured or semi-structured data like sensor readings. While it could be used to store the raw data in its original format, it would not be efficient for querying, indexing, or handling large volumes of incoming time-series data from motion sensors.

Given the high volume, dynamic nature, and global distribution of the sensor data, NoSQL is the most appropriate choice. It offers the scalability, flexibility, and performance needed to manage large-scale, real-time data generated by motion sensors in a global environment. This type of database will support both the immediate needs and future growth of the system.

Question No 8:

You have set up an autoscaling instance group to handle web traffic for an upcoming product launch. After configuring the instance group as a backend service for an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and relaunched every minute. The instances do not have a public IP address. You've confirmed that the expected web response is being generated by each instance when using the curl command. However, you're concerned that the backend might not be correctly configured.

What action should you take to ensure the backend is correctly set up?

A. Verify that a firewall rule exists to allow incoming HTTP/HTTPS traffic to reach the load balancer.
B. Assign a public IP address to each VM instance and configure a firewall rule to allow the load balancer to reach the VM instances' public IPs.
C. Confirm that a firewall rule exists allowing load balancer health checks to reach the instances in the instance group.
D. Create a tag for each instance with the load balancer’s name, then configure a firewall rule that allows traffic from the load balancer’s source to reach the instances with the corresponding tag.

Answer: C. Confirm that a firewall rule exists allowing load balancer health checks to reach the instances in the instance group.

Explanation:

In this scenario, the VMs in the autoscaling instance group are being terminated and relaunched every minute, which suggests that the load balancer is not able to determine the health of the backend instances. Typically, load balancers use health checks to assess whether an instance is responding appropriately. If an instance fails the health check, it is considered unhealthy, leading to its termination and replacement.

Since you’ve verified that the expected web response is coming from each instance (using curl), the issue most likely lies in the configuration of the health checks. Specifically, the load balancer’s health check might not be able to access the instances to confirm they are healthy. For the load balancer to perform health checks, the instances need to be accessible on the ports used for health checking (often HTTP/HTTPS).
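For illustration, a hedged sketch (with a hypothetical project name and target tag) of a firewall rule that admits Google Cloud's documented health-check probe ranges to the backend instances on port 80, submitted through the Python API client:

    from googleapiclient import discovery  # pip install google-api-python-client

    # The source ranges below are Google Cloud's documented health-check probe
    # ranges; the project name and target tag are placeholders.
    firewall_body = {
        "name": "allow-lb-health-checks",
        "network": "global/networks/default",
        "direction": "INGRESS",
        "sourceRanges": ["130.211.0.0/22", "35.191.0.0/16"],
        "allowed": [{"IPProtocol": "tcp", "ports": ["80"]}],
        "targetTags": ["web-backend"],
    }

    compute = discovery.build("compute", "v1")
    compute.firewalls().insert(project="my-project", body=firewall_body).execute()

Once the health-check probes can reach the instances and see healthy responses, the instance group stops recycling the VMs.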

  • Option C (Confirming a firewall rule for load balancer health checks) is the correct approach. The firewall rule must explicitly allow traffic from the load balancer’s health check service to the instances. This ensures that the health check requests can reach the VMs and receive the appropriate responses.

  • Option A (Verifying a firewall rule for HTTP/HTTPS traffic) is important for allowing traffic to reach the load balancer, but it doesn't address the issue of health checks not reaching the instances, which is the root cause here.

  • Option B (Assigning public IPs and creating firewall rules) is unnecessary since the instances do not need public IPs to communicate with the load balancer, especially in a private network configuration. The private IPs should suffice for health checks and internal communication.

  • Option D (Creating a tag for instances with the load balancer’s name) and configuring firewall rules is not the most efficient solution for ensuring health checks can access the VMs. Firewall rules should specifically allow health check traffic, which is more directly addressed in Option C.

Question No 9:

You have written a Python script that is intended to connect to Google BigQuery from a Google Compute Engine virtual machine. However, the script is returning errors indicating it cannot establish a connection to BigQuery.

What should you do to resolve this issue?

A. Install the latest BigQuery API client library for Python.
B. Run the script on a new virtual machine with the BigQuery access scope enabled.
C. Create a new service account with BigQuery access and run the script with that service account.
D. Install the bq component for gcloud with the command gcloud components install bq.

Answer: B. Run the script on a new virtual machine with the BigQuery access scope enabled.

Explanation:

To connect to Google BigQuery from a Google Compute Engine virtual machine (VM), the VM must have appropriate permissions and access scopes. When you create a VM, you assign it an IAM role and scopes that define what resources the VM can access. Specifically, the BigQuery access scope is required for the VM to communicate with BigQuery.

  • Option B (Running the script on a new VM with the correct access scope) is the most straightforward solution. When creating the VM, ensure that the BigQuery access scope is enabled. This grants the VM the required permissions to interact with BigQuery via the Google Cloud API. Access scopes are set at VM creation; on an existing VM they can only be changed by stopping the instance, editing its scopes, and starting it again, which is why creating a new VM with the right scope is the simplest path.

  • Option A (Installing the latest BigQuery API client library) may be helpful if the client library is outdated or missing, but it is not the root cause of the issue here. The key issue seems to be related to the access permissions (scope), not the API client itself.

  • Option C (Creating a new service account with BigQuery access) would be necessary only if you are using a specific service account to run the script and the existing account lacks the appropriate BigQuery permissions. However, using the correct access scope for the VM is a simpler and more direct solution.

  • Option D (Installing the bq component) is unrelated to the Python script. The bq command-line tool is used for manual interactions with BigQuery and is not necessary for a Python script that uses the BigQuery API client.

In summary, enabling the correct BigQuery access scope for the VM is the most effective way to ensure the script can successfully connect to BigQuery.
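For illustration, a minimal sketch of how the script could confirm, from inside the VM, whether a BigQuery-capable access scope is present by querying the metadata server (the metadata URL and header are standard; the remediation hint is only a suggestion):

    import requests  # third-party; pip install requests

    SCOPES_URL = (
        "http://metadata.google.internal/computeMetadata/v1/"
        "instance/service-accounts/default/scopes"
    )

    resp = requests.get(SCOPES_URL, headers={"Metadata-Flavor": "Google"})
    scopes = resp.text.split()
    print("\n".join(scopes))

    if not any("bigquery" in s or "cloud-platform" in s for s in scopes):
        print("This VM has no BigQuery-capable scope; recreate it (or stop it "
              "and edit its access scopes) to include BigQuery or cloud-platform.")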
