Google Cloud Certified – Professional Cloud Architect Exam Dumps and Practice Test Questions, Set 10 (Q181–200)

Visit here for our full Google Professional Cloud Architect exam dumps and practice test questions.

Question 181:

A multinational bank wants to implement a real-time anti-money laundering (AML) system. The system must ingest transaction data, compute risk scores, detect anomalies, and alert compliance teams instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational lookups, BigQuery for analytics, Vertex AI for predictive scoring, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time AML requires ingestion of high-volume transaction data from ATMs, POS terminals, online banking, and mobile apps. Pub/Sub provides a scalable, globally distributed ingestion layer that decouples producers from downstream pipelines. This ensures durability and fault tolerance, particularly during peak transaction periods or spikes in network traffic.
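
As a rough sketch of this ingestion layer, the snippet below publishes a single transaction event to a Pub/Sub topic with the Python client library. The project ID, topic name, and event fields are illustrative placeholders, not a prescribed schema.

```python
# Minimal sketch: publish one transaction event to Pub/Sub for AML ingestion.
# Project ID, topic name, and event fields are hypothetical.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-bank-project", "transactions")  # hypothetical names

event = {
    "transaction_id": "txn-0001",
    "account_id": "acct-42",
    "amount": 1250.00,
    "currency": "USD",
    "channel": "mobile",
}

# publish() returns a future; result() blocks until Pub/Sub acknowledges the message.
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("Published message ID:", future.result())
```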

Dataflow performs real-time computation of AML features, such as transaction velocity, geographic anomalies, unusual transaction amounts, and account behavior patterns. Stateful and windowed processing enables detection of short-term anomalies while maintaining awareness of historical patterns. Feature enrichment with historical transactions and account metadata improves predictive accuracy.
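
A minimal Apache Beam (Dataflow) sketch of one such feature, transaction velocity, is shown below: it counts transactions per account over one-minute fixed windows. The Pub/Sub topic path and event fields are assumptions; a production pipeline would add enrichment, late-data handling, and sinks.

```python
# Minimal sketch: per-account transaction counts over 1-minute fixed windows.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

def parse_event(message: bytes):
    event = json.loads(message.decode("utf-8"))
    return event["account_id"], 1  # one observed transaction for this account

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadTransactions" >> beam.io.ReadFromPubSub(
            topic="projects/my-bank-project/topics/transactions")  # hypothetical topic
        | "Parse" >> beam.Map(parse_event)
        | "Window1Min" >> beam.WindowInto(FixedWindows(60))
        | "CountPerAccount" >> beam.CombinePerKey(sum)
        | "Emit" >> beam.Map(print)  # real pipelines would write to Bigtable/BigQuery
    )
```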

Bigtable stores operational lookups for low-latency retrieval, such as blacklisted accounts, suspicious IP addresses, or known fraud patterns. Millisecond-level queries enable immediate risk scoring for incoming transactions.
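
A minimal sketch of such an operational lookup is shown below, reading a single Bigtable row keyed by account ID; the instance, table, and column-family names are hypothetical.

```python
# Minimal sketch: blacklist lookup in Bigtable during real-time scoring.
from google.cloud import bigtable

client = bigtable.Client(project="my-bank-project")
instance = client.instance("aml-operational")        # hypothetical instance ID
table = instance.table("blacklisted_accounts")       # hypothetical table ID

row = table.read_row(b"acct-42")  # row key = account ID
if row is not None:
    cells = row.cells.get("flags", {})               # hypothetical column family
    flags = {k.decode(): v[0].value.decode() for k, v in cells.items()}
    print("Account is blacklisted:", flags)
else:
    print("Account not on the blacklist")
```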

BigQuery stores historical transaction data for analytics, trend detection, and ML feature extraction. Analysts can investigate anomalies, assess risk models, and comply with regulatory reporting.
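
As an illustration of this analytical layer, the query sketch below surfaces high-velocity accounts over the last 30 days; the dataset, table, and column names are assumptions.

```python
# Minimal sketch: analyst query over historical transactions in BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-bank-project")

query = """
    SELECT account_id,
           COUNT(*) AS txn_count,
           AVG(amount) AS avg_amount
    FROM `my-bank-project.aml.transactions`      -- hypothetical dataset/table
    WHERE transaction_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY account_id
    HAVING txn_count > 1000
    ORDER BY txn_count DESC
"""

for row in client.query(query).result():
    print(row.account_id, row.txn_count, row.avg_amount)
```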

Vertex AI hosts ML models for predictive risk scoring. Models can include supervised classification, unsupervised anomaly detection, or ensemble approaches. Real-time inference ensures instant alerts to compliance teams, supporting immediate intervention and regulatory adherence.
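
A minimal sketch of real-time scoring against a deployed Vertex AI endpoint is shown below; the project, region, endpoint ID, and feature names are placeholders and depend on the model actually deployed.

```python
# Minimal sketch: online risk scoring via a deployed Vertex AI endpoint.
from google.cloud import aiplatform

aiplatform.init(project="my-bank-project", location="us-central1")

# Hypothetical endpoint resource name.
endpoint = aiplatform.Endpoint(
    "projects/my-bank-project/locations/us-central1/endpoints/1234567890")

features = [{
    "txn_velocity_1m": 7,     # features computed upstream by Dataflow (illustrative)
    "amount_zscore": 3.8,
    "geo_anomaly": 1,
}]

prediction = endpoint.predict(instances=features)
print("Risk score:", prediction.predictions[0])
```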

Cloud Run exposes APIs for alerting and integration with compliance workflows. Batch-only or Cloud SQL solutions cannot provide low-latency, predictive, real-time AML detection at a global scale.
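
A minimal sketch of such an alerting service, deployable to Cloud Run, is shown below using Flask; the request payload, threshold, and notification step are assumptions.

```python
# Minimal sketch: Cloud Run alert service that raises a compliance alert
# when a scored transaction exceeds a risk threshold. Payload fields are hypothetical.
import os
from flask import Flask, request, jsonify

app = Flask(__name__)
RISK_THRESHOLD = float(os.environ.get("RISK_THRESHOLD", "0.9"))

@app.route("/alerts", methods=["POST"])
def create_alert():
    payload = request.get_json(silent=True) or {}
    if payload.get("risk_score", 0.0) >= RISK_THRESHOLD:
        # In a real system this would notify the compliance queue (email, chat, case system).
        print("ALERT: high-risk transaction", payload.get("transaction_id"))
        return jsonify({"status": "alert_raised"}), 201
    return jsonify({"status": "below_threshold"}), 200

if __name__ == "__main__":
    # Cloud Run injects the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```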

This architecture enables a globally scalable, low-latency, predictive AML platform, minimizing financial crime, ensuring compliance, and improving operational efficiency.

Question 182:

A multinational airline wants to implement predictive aircraft maintenance. Telemetry includes engine metrics, vibration, GPS, and fuel consumption. The system must detect anomalies, forecast component failures, and alert maintenance crews proactively. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Cloud Storage for raw telemetry, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with offline ML

Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker

Explanation:

Predictive aircraft maintenance requires ingestion of high-frequency telemetry data from multiple aircraft, including engine performance, vibration metrics, GPS coordinates, and fuel consumption. Pub/Sub ensures scalable, durable, and reliable ingestion. It decouples data sources from downstream processing pipelines, ensuring fault tolerance during network disruptions or bursts in telemetry traffic.

Dataflow computes real-time features, including rolling averages, vibration frequency analysis, temperature trends, and anomaly detection. Stateful and windowed processing allows the system to detect immediate anomalies and long-term degradation patterns. Feature enrichment using historical maintenance records and aircraft metadata improves predictive accuracy.

Cloud Storage retains raw telemetry for archival purposes and regulatory compliance. Encrypted, durable storage ensures secure, auditable retention of telemetry data for operational, maintenance, and legal requirements.
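
A minimal sketch of archiving a raw telemetry batch to Cloud Storage is shown below; the bucket name, object layout, and telemetry fields are assumptions. Objects are encrypted at rest by default, and CMEK or bucket retention policies can be layered on where compliance requires it.

```python
# Minimal sketch: archive a raw telemetry batch to Cloud Storage.
import json
import datetime
from google.cloud import storage

client = storage.Client(project="my-airline-project")
bucket = client.bucket("raw-aircraft-telemetry")  # hypothetical bucket

batch = [{"tail_number": "N12345", "engine_temp_c": 612.4, "vibration_hz": 118.0}]
object_name = f"telemetry/{datetime.date.today():%Y/%m/%d}/batch-0001.json"

bucket.blob(object_name).upload_from_string(
    json.dumps(batch), content_type="application/json")
print("Archived", object_name)
```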

BigQuery stores structured datasets derived from telemetry for analytics, trend detection, and ML feature extraction. Analysts can detect recurring issues, optimize maintenance schedules, and evaluate model performance across fleets.

Vertex AI hosts predictive ML models to forecast component failures and maintenance needs. Models may include regression, time-series forecasting, or deep learning techniques. Real-time inference ensures maintenance crews receive timely alerts for proactive interventions.

Looker dashboards visualize fleet health, predicted failures, and anomalies, enabling operational teams to make informed decisions quickly. Batch-only or SQL-based architectures cannot meet real-time predictive maintenance needs at fleet scale.

This architecture ensures a scalable, low-latency, predictive fleet maintenance platform, improving safety, reducing downtime, and optimizing operational efficiency.

Question 183:

A global e-commerce platform wants to implement real-time product recommendations. The system must ingest user interactions, compute behavioral features, update recommendations instantly, and support ML-based personalization for millions of users. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for analytics, Vertex AI for ML inference, Cloud Run for recommendation delivery
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time recommendation systems require ingestion of high-frequency user interactions, including clicks, views, purchases, and cart actions. Pub/Sub provides scalable, durable, and globally distributed ingestion. It decouples front-end services from backend pipelines, ensuring high availability and reliability during peak traffic periods.

Dataflow performs real-time computation of behavioral features, such as session activity, product affinities, engagement metrics, and short-term trends. Stateful and windowed processing captures both immediate behavior and historical patterns. Feature enrichment using user profiles, product metadata, and historical interaction data improves prediction accuracy.
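
As a rough illustration of session-level behavioral features, the Apache Beam sketch below groups click events per user into sessions closed by a 30-minute inactivity gap and counts events per session; the topic path and field names are placeholders.

```python
# Minimal sketch: per-user session activity using Beam session windows.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import Sessions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadClicks" >> beam.io.ReadFromPubSub(
            topic="projects/my-shop-project/topics/clickstream")  # hypothetical topic
        | "Parse" >> beam.Map(lambda m: (json.loads(m.decode("utf-8"))["user_id"], 1))
        | "SessionWindow" >> beam.WindowInto(Sessions(gap_size=30 * 60))
        | "EventsPerSession" >> beam.CombinePerKey(sum)
        | "Emit" >> beam.Map(print)  # real pipelines would feed these features to the model
    )
```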

BigQuery stores historical interactions and transactional datasets for analytics, trend detection, and ML feature extraction. Analysts can create training datasets, evaluate model performance, and identify emerging user behavior patterns.

Vertex AI hosts ML models for real-time scoring and recommendation inference. Models may include collaborative filtering, content-based filtering, or hybrid approaches. Continuous retraining ensures models adapt to evolving user behavior and product inventory. Real-time inference enables instant personalization for millions of users.

Cloud Run exposes APIs to web and mobile apps, delivering low-latency recommendations. Autoscaling ensures system performance under heavy load. Batch-only or SQL-only architectures cannot meet global-scale, low-latency, predictive recommendation requirements.

This architecture ensures a scalable, low-latency, real-time recommendation platform, enhancing user engagement, conversion rates, and customer satisfaction.

Question 184:

A global ride-hailing platform wants to implement a predictive surge pricing system. The system must ingest ride requests, driver telemetry, traffic data, and weather updates, compute dynamic fares using ML, and deliver prices instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch scripts
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational metrics, Vertex AI for predictive pricing, Cloud Run for fare APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Predictive surge pricing requires real-time ingestion of ride requests, driver telemetry, traffic congestion, and weather data. Pub/Sub ensures scalable, reliable ingestion, decoupling front-end and back-end pipelines. It can handle spikes in ride requests without losing data.

Dataflow computes real-time features, such as supply-demand ratios, driver availability, ETA predictions, and historical ride patterns. Stateful and windowed processing captures both short-term surges and long-term trends. Feature enrichment improves model accuracy, ensuring pricing fairness.

Bigtable stores operational metrics and low-latency data, including driver locations, availability, and traffic conditions. Wide-column storage allows efficient queries for multiple metrics per driver or region.

Vertex AI hosts ML models for dynamic fare prediction. Real-time inference allows immediate pricing updates to riders and drivers, optimizing revenue and maintaining user satisfaction. Models may use regression, ensemble, or hybrid approaches to improve prediction accuracy.

Cloud Run exposes APIs for mobile and web apps, providing low-latency delivery of surge pricing. Batch-only or SQL-only solutions cannot meet real-time predictive and globally scalable requirements.

This architecture provides a predictive, low-latency surge pricing system, optimizing revenue, improving user satisfaction, and dynamically responding to supply-demand fluctuations.

Question 185:

A multinational retail company wants to implement predictive inventory management. It must ingest purchase transactions, returns, and warehouse updates, compute real-time inventory levels, forecast stockouts using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch replication
B) Pub/Sub for event ingestion, Dataflow for feature computation, Spanner for global inventory management, BigQuery for analytics, Vertex AI for predictive stockout modeling
C) Firestore multi-region with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI

Explanation:

Predictive inventory management requires ingestion of high-volume events from stores, warehouses, and online channels. Pub/Sub provides globally scalable, durable ingestion, decoupling producers from downstream pipelines and ensuring spikes during seasonal peaks or promotions are handled without data loss.

Dataflow computes features, aggregates inventory changes from purchases, returns, and warehouse transfers, and enriches data with product metadata, warehouse location, and supplier information. Stateful and windowed processing ensures accurate real-time inventory calculations while capturing short-term fluctuations.

Spanner provides globally consistent, transactional storage. Strong consistency ensures inventory data remains accurate across all regions, preventing overselling and supporting high-throughput operations. Horizontal scalability accommodates global inventory volumes.
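
A minimal sketch of this transactional guarantee is shown below: a Spanner read-write transaction that decrements stock only when enough units remain, which is what prevents overselling. The instance, database, table, and column names are assumptions.

```python
# Minimal sketch: strongly consistent stock reservation in Spanner.
from google.cloud import spanner

client = spanner.Client(project="my-retail-project")
database = client.instance("global-inventory").database("retail")  # hypothetical IDs

def reserve_stock(transaction, sku, warehouse_id, units):
    row = transaction.execute_sql(
        "SELECT quantity FROM Inventory WHERE sku = @sku AND warehouse_id = @wh",
        params={"sku": sku, "wh": warehouse_id},
        param_types={"sku": spanner.param_types.STRING, "wh": spanner.param_types.STRING},
    ).one()
    if row[0] < units:
        raise ValueError("Insufficient stock")  # reject rather than oversell
    transaction.update("Inventory", ["sku", "warehouse_id", "quantity"],
                       [[sku, warehouse_id, row[0] - units]])

# run_in_transaction retries on aborts and commits atomically.
database.run_in_transaction(reserve_stock, "SKU-123", "WH-BERLIN", 2)
```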

BigQuery stores historical inventory and transaction datasets for analytics, trend detection, reporting, and ML feature extraction. Analysts can detect demand patterns, seasonal trends, and warehouse performance to feed predictive models.

Vertex AI hosts ML models for stockout detection and inventory replenishment optimization. Real-time inference enables proactive restocking, reducing inventory shortages and lost revenue. Batch-only or SQL-based solutions cannot provide globally consistent, predictive, low-latency inventory management.

This architecture ensures a scalable, predictive, low-latency inventory management system, improving operational efficiency, reducing stockouts, and enhancing customer satisfaction.

Question 186:

A multinational logistics company wants to implement a predictive delivery optimization system. Vehicles emit GPS, engine, and fuel telemetry. The system must compute optimal routes, detect anomalies, forecast delays using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive delivery, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Predictive delivery optimization requires ingestion of high-frequency telemetry streams from vehicles, including GPS, engine, and fuel metrics. Pub/Sub ensures scalable, durable ingestion that decouples vehicle telemetry from downstream pipelines, guaranteeing fault tolerance and availability during traffic spikes.

Dataflow computes real-time features such as travel time estimates, route deviations, fuel efficiency, and anomaly detection. Stateful and windowed processing allows aggregation across time intervals to detect delays, unusual patterns, or maintenance requirements. Enrichment with historical route and vehicle data improves prediction accuracy.

Bigtable stores operational metrics for low-latency access, enabling fleet managers to monitor vehicle positions, ETA predictions, and anomaly alerts in near real time. Wide-column design efficiently supports multiple metrics per vehicle and location.
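
As a rough sketch of this operational store, the snippet below writes several metrics for one vehicle into a single wide-column Bigtable row; the instance, table, column family, and row-key layout are assumptions.

```python
# Minimal sketch: write per-vehicle operational metrics to a wide-column Bigtable row.
import time
from google.cloud import bigtable

client = bigtable.Client(project="my-logistics-project")
table = client.instance("fleet-ops").table("vehicle_metrics")  # hypothetical IDs

vehicle_id = "truck-0042"
# Reversed timestamp in the row key keeps the most recent rows first in per-vehicle scans.
row_key = f"{vehicle_id}#{2**63 - int(time.time() * 1000)}".encode()

row = table.direct_row(row_key)
row.set_cell("metrics", "speed_kmh", b"87.5")       # hypothetical column family/qualifiers
row.set_cell("metrics", "fuel_level_pct", b"62.0")
row.set_cell("metrics", "eta_minutes", b"34")
row.commit()
```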

BigQuery stores historical telemetry and operational datasets for analytics, trend detection, and ML feature extraction. Analysts can evaluate recurring delays, optimize fleet utilization, and improve routing strategies.

Vertex AI hosts predictive models to forecast delivery delays and suggest optimal routes. Real-time inference enables proactive interventions, reducing service disruptions. Models may leverage time-series forecasting, regression, or ensemble techniques for enhanced accuracy.

Cloud Run exposes dashboards and APIs for operations teams to monitor fleet performance, detect anomalies, and implement route optimization. Batch-only or SQL-based solutions cannot provide predictive, low-latency capabilities at a global scale.

This architecture ensures a globally scalable, predictive, low-latency delivery optimization platform, improving operational efficiency, reducing delays, and enhancing customer satisfaction.

Question 187:

A global airline wants to implement a real-time passenger experience monitoring system. It must ingest check-in data, flight telemetry, customer service interactions, and loyalty program data, compute engagement metrics, detect anomalies, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for event ingestion, Dataflow for real-time feature computation, BigQuery for analytics, Vertex AI for anomaly detection, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Looker

Explanation:

Monitoring passenger experience in real time requires ingestion of high-frequency events from multiple sources: check-in counters, mobile apps, flight telemetry, customer service systems, and loyalty program data. Pub/Sub provides scalable, reliable ingestion, decoupling data producers from downstream pipelines, ensuring low-latency and fault-tolerant processing.

Dataflow computes real-time engagement metrics, aggregating check-in durations, flight delays, service interactions, and loyalty activity. Stateful and windowed processing detects anomalies, such as long queues, delayed baggage, or service inconsistencies. Feature enrichment with historical passenger, flight, and loyalty data improves predictive accuracy.

BigQuery stores structured datasets for analytics, trend detection, reporting, and ML feature extraction. Analysts can detect systemic service issues, identify patterns in passenger satisfaction, and optimize operations.

Vertex AI hosts ML models for anomaly detection. Real-time inference identifies deviations in passenger behavior, service response, and loyalty engagement, providing actionable alerts to operations teams.

Looker dashboards visualize passenger experience, anomalies, and predictive insights, enabling operational teams to act proactively. Batch-only or Cloud SQL solutions cannot provide real-time anomaly detection at a global scale.

This architecture ensures scalable, low-latency, predictive monitoring of passenger experience, allowing proactive interventions to improve customer satisfaction and operational efficiency.

Question 188:

A multinational retail company wants to implement predictive inventory management. It must ingest purchases, returns, and warehouse updates, compute real-time inventory levels, forecast stockouts using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch replication
B) Pub/Sub for event ingestion, Dataflow for feature computation, Spanner for global inventory management, BigQuery for analytics, Vertex AI for predictive stockout modeling
C) Firestore multi-region with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI

Explanation:

Real-time inventory management requires ingestion of high-volume events from stores, warehouses, and online platforms. Pub/Sub provides scalable, durable ingestion, decoupling producers from downstream pipelines and ensuring that spikes during seasonal demand or promotions are handled without data loss.

Dataflow computes features and aggregates inventory changes from purchases, returns, and warehouse transfers. It enriches data with product metadata, warehouse location, and supplier information. Stateful and windowed processing ensures accurate rolling inventory calculations while capturing short-term trends and anomalies.

Spanner provides globally consistent, transactional storage. Strong consistency guarantees that inventory data remains accurate across regions, preventing overselling and supporting high-throughput operations. Horizontal scalability supports global operations and multi-region synchronization.

BigQuery stores historical inventory and transaction datasets for analytics, trend detection, reporting, and ML feature extraction. Analysts can detect demand patterns, seasonal trends, and warehouse performance to feed predictive models.

Vertex AI hosts ML models for stockout detection and replenishment optimization. Real-time inference enables proactive restocking, minimizing shortages and lost revenue. Batch-only or SQL-based solutions cannot meet global, low-latency, predictive inventory requirements.

This architecture provides a scalable, predictive, low-latency inventory management platform, improving operational efficiency, reducing stockouts, and enhancing customer satisfaction.

Question 189:

A global ride-hailing company wants to implement a real-time driver allocation system. It must ingest ride requests, driver telemetry, and traffic updates, compute optimal driver-to-passenger assignments using ML, and provide APIs for mobile apps instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for low-latency driver lookup, Vertex AI for predictive demand modeling, Cloud Run for assignment APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Real-time driver allocation requires ingestion of high-frequency events, including ride requests, driver telemetry, and traffic updates. Pub/Sub provides globally scalable, reliable ingestion, decoupling mobile applications from backend pipelines and ensuring high availability during spikes in ride requests.

Dataflow computes features like driver availability, ETA predictions, supply-demand ratios, and driver performance metrics. Stateful and windowed processing captures rolling aggregations to detect surges or shortages in real time. Feature enrichment with historical driver performance improves allocation accuracy.

Bigtable stores operational metrics and driver location data for low-latency retrieval. Millisecond-level access enables immediate assignment decisions critical to maintaining customer satisfaction. Wide-column storage efficiently supports multiple metrics per driver and region.

Vertex AI hosts ML models to forecast ride demand and suggest optimal driver positioning. Real-time inference dynamically adjusts allocations based on live conditions, ensuring efficient fleet utilization.

Cloud Run exposes APIs for mobile apps to deliver driver assignments instantly. Batch-only, Cloud SQL, or Firestore-based solutions cannot meet the low-latency, predictive, and globally scalable requirements for real-time driver allocation.

This architecture ensures a globally scalable, low-latency, predictive driver allocation platform, optimizing rider satisfaction and driver utilization.

Question 190:

A multinational logistics provider wants to implement a predictive fleet maintenance system. Vehicles emit GPS, engine, and fuel telemetry. The system must detect anomalies, forecast component failures using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive maintenance, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Predictive fleet maintenance requires ingestion of high-frequency telemetry streams, including GPS, engine, and fuel metrics. Pub/Sub provides globally scalable, durable ingestion, decoupling vehicles from downstream processing pipelines and ensuring fault tolerance during spikes or network disruptions.

Dataflow computes features in real time, including rolling averages, anomaly detection, and telemetry enrichment with vehicle metadata. Stateful and windowed processing enables early detection of component failures, allowing proactive maintenance scheduling.

Bigtable stores operational metrics for low-latency queries, enabling fleet managers to monitor vehicle health and receive alerts in near real time. Wide-column design efficiently supports multiple metrics per vehicle.

BigQuery stores historical telemetry and operational datasets for analytics, trend detection, and ML feature extraction. Analysts can identify recurring failures, optimize maintenance schedules, and support predictive modeling.

Vertex AI hosts ML models to forecast component failures and recommend proactive maintenance actions. Real-time inference ensures timely alerts for operations teams.

Cloud Run exposes dashboards and APIs for monitoring, anomaly detection, and maintenance optimization. Batch-only or SQL-based solutions cannot provide globally scalable, real-time predictive fleet maintenance.

This architecture ensures a low-latency, predictive, globally scalable fleet maintenance platform, improving reliability, reducing downtime, and optimizing operational efficiency.

Question 191:

A global bank wants to implement a real-time fraud detection system for online transactions. It must ingest transaction events, compute risk features, detect anomalies using ML, and alert fraud analysts instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for low-latency lookups, BigQuery for analytics, Vertex AI for ML inference, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Fraud detection requires ingestion of high-volume, high-velocity transaction events from multiple channels, including web, mobile, and ATMs. Pub/Sub provides a globally scalable, durable, and fault-tolerant ingestion layer, decoupling upstream producers from downstream pipelines. It ensures events are delivered reliably even during spikes or network interruptions.

Dataflow performs real-time computation of fraud features such as transaction velocity, geolocation anomalies, transaction amount deviations, device fingerprinting, and account behavior patterns. Stateful and windowed processing allows detection of both short-term anomalies and long-term behavioral deviations. Feature enrichment with historical transaction data and known fraud patterns improves predictive accuracy.

Bigtable stores low-latency operational data such as blacklisted accounts, suspicious IP addresses, and device IDs for fast lookups during real-time scoring. Its wide-column design supports multiple metrics per user or account efficiently.

BigQuery stores historical transactions and derived datasets for analytics, ML training, and regulatory compliance. Analysts can detect trends, evaluate model performance, and generate insights for risk mitigation.

Vertex AI hosts ML models for fraud detection and risk scoring. Models can include supervised classification, unsupervised anomaly detection, and ensemble approaches. Real-time inference ensures instant alerts to analysts or automated workflows for immediate intervention.

Cloud Run exposes APIs and alerting endpoints for integration with dashboards, notification systems, or automated remediation workflows. Batch-only, Cloud SQL, or Firestore-only solutions cannot provide globally scalable, low-latency, predictive fraud detection.

This architecture provides a globally scalable, real-time, predictive fraud detection system, minimizing financial risk, ensuring regulatory compliance, and improving operational efficiency.

Question 192:

A multinational airline wants to implement predictive maintenance for its fleet. Telemetry includes engine metrics, vibration data, GPS, and fuel consumption. The system must detect anomalies, forecast failures, and alert maintenance crews proactively. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Cloud Storage for raw telemetry, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with offline ML

Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker

Explanation:

Predictive aircraft maintenance requires ingestion of high-frequency telemetry data from multiple aircraft, including engine metrics, vibrations, GPS coordinates, and fuel consumption. Pub/Sub provides scalable, durable, and globally distributed ingestion, decoupling data producers from downstream processing pipelines. It ensures fault tolerance during network disruptions and spikes in telemetry.

Dataflow performs feature computation in real time, calculating rolling averages, vibration frequency analysis, fuel efficiency trends, and detecting anomalies. Stateful and windowed processing allows short-term anomaly detection while tracking long-term degradation patterns. Enrichment with historical maintenance records and aircraft metadata improves predictive model accuracy.

Cloud Storage retains raw telemetry for archival, compliance, and audit purposes. Durable, encrypted storage ensures secure retention for operational, regulatory, and legal requirements.

BigQuery stores structured datasets derived from telemetry for analytics, trend detection, and ML feature extraction. Analysts can investigate recurring issues, optimize maintenance schedules, and generate training datasets for ML models.

Vertex AI hosts predictive models that forecast component failures and maintenance needs. Real-time inference ensures maintenance crews receive timely alerts for proactive interventions. Models may include regression, time-series forecasting, or deep learning techniques for enhanced accuracy.

Looker dashboards provide visualizations of fleet health, predicted failures, and anomalies, allowing operational teams to make informed decisions quickly. Batch-only or SQL-based architectures cannot support real-time predictive maintenance at fleet scale.

This architecture provides a globally scalable, predictive, low-latency maintenance platform, improving aircraft reliability, reducing downtime, and optimizing operational efficiency.

Question 193:

A global e-commerce platform wants to implement real-time product recommendations. The system must ingest user interactions, compute behavioral features, update recommendations instantly, and support ML-based personalization for millions of users. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for analytics, Vertex AI for ML inference, Cloud Run for recommendation delivery
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time recommendation systems require ingestion of high-frequency events such as clicks, product views, purchases, and cart actions. Pub/Sub ensures globally scalable, durable, and fault-tolerant ingestion. It decouples front-end applications from backend pipelines, ensuring high availability during peak traffic periods.

Dataflow computes real-time behavioral features, including session activity, product affinity, engagement metrics, and short-term trends. Stateful and windowed processing captures both immediate user behavior and historical patterns, enabling effective personalization. Feature enrichment with product metadata and historical interactions improves model performance.

BigQuery stores historical datasets for analytics, trend detection, and ML feature extraction. Analysts can evaluate user behavior patterns, create training datasets, and monitor model performance.

Vertex AI hosts ML models for real-time scoring and recommendation inference. Models may include collaborative filtering, content-based, or hybrid approaches. Continuous retraining ensures adaptation to changing user behavior and inventory. Real-time inference allows instant delivery of personalized recommendations.

Cloud Run exposes APIs for web and mobile applications to deliver recommendations at low latency. Autoscaling ensures performance under heavy load. Batch-only or SQL-only architectures cannot meet global-scale, low-latency predictive personalization requirements.

This architecture enables a globally scalable, low-latency, predictive recommendation system, enhancing user engagement, conversion, and overall customer satisfaction.

Question 194:

A global ride-hailing platform wants to implement predictive surge pricing. The system must ingest ride requests, driver telemetry, traffic data, and weather updates, compute dynamic fares using ML, and deliver prices instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch scripts
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational metrics, Vertex AI for predictive pricing, Cloud Run for fare APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Predictive surge pricing requires real-time ingestion of ride requests, driver telemetry, traffic, and weather data. Pub/Sub provides scalable, globally distributed ingestion, decoupling upstream producers from downstream processing pipelines, and ensuring data durability during spikes.

Dataflow computes features in real time, including supply-demand ratios, driver availability, ETA predictions, historical ride patterns, and anomaly detection. Stateful and windowed processing captures short-term surges and long-term trends, enabling accurate pricing decisions.

Bigtable stores low-latency operational metrics such as driver location, availability, and traffic conditions. Wide-column storage supports multiple metrics per region or driver efficiently, enabling millisecond-level lookups for pricing algorithms.

Vertex AI hosts ML models for predictive surge pricing. Real-time inference ensures that riders and drivers receive updated fares immediately. Models can include regression, ensemble, or hybrid approaches to maximize prediction accuracy.

Cloud Run exposes APIs to mobile and web applications, delivering dynamic prices with low latency. Batch-only or SQL-only architectures cannot meet globally scalable, low-latency, predictive surge pricing requirements.

This architecture ensures a predictive, low-latency, real-time surge pricing system, optimizing revenue, maintaining fairness, and dynamically responding to supply-demand fluctuations.

Question 195:

A multinational retail company wants to implement predictive inventory management. It must ingest purchases, returns, and warehouse updates, compute real-time inventory levels, forecast stockouts using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch replication
B) Pub/Sub for event ingestion, Dataflow for feature computation, Spanner for global inventory management, BigQuery for analytics, Vertex AI for predictive stockout modeling
C) Firestore multi-region with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI

Explanation:

Real-time inventory management requires ingestion of high-volume events from stores, warehouses, and online channels. Pub/Sub provides globally scalable, durable ingestion, decoupling producers from downstream pipelines, and ensuring spikes during seasonal peaks or promotions are handled without data loss.

Dataflow computes features and aggregates inventory changes from purchases, returns, and warehouse transfers. It enriches events with product metadata, warehouse locations, and supplier information. Stateful and windowed processing ensures accurate rolling inventory calculations while capturing short-term trends.

Spanner provides globally consistent, transactional storage. Strong consistency ensures inventory levels remain accurate across regions, preventing overselling and supporting high-throughput operations. Horizontal scalability enables global operations and multi-region synchronization.

BigQuery stores historical inventory and transaction datasets for analytics, trend detection, reporting, and ML feature extraction. Analysts can detect patterns, seasonality, and warehouse performance for predictive modeling.

Vertex AI hosts ML models for stockout detection and inventory replenishment optimization. Real-time inference enables proactive restocking, minimizing shortages and lost revenue. Batch-only or SQL-based architectures cannot provide globally consistent, predictive, low-latency inventory management.

This architecture provides a globally scalable, predictive, low-latency inventory management platform, improving operational efficiency, reducing stockouts, and enhancing customer satisfaction.

Question 196:

A multinational logistics company wants to implement a predictive delivery optimization system. Vehicles emit GPS, engine, and fuel telemetry. The system must compute optimal routes, detect anomalies, forecast delays using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive delivery, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Predictive delivery optimization requires ingestion of high-frequency telemetry from fleet vehicles, including GPS, engine metrics, and fuel consumption. Pub/Sub provides globally scalable, durable ingestion, decoupling data producers from downstream pipelines. This guarantees high availability during peak traffic periods and resilience against transient network failures.

Dataflow processes the data in real time, computing features such as travel time estimates, route deviations, fuel efficiency, and anomaly detection. Stateful and windowed processing enables aggregation of short-term events and identification of patterns over time, allowing early detection of delays and vehicle performance issues. Enriching telemetry with historical route and fleet performance data improves prediction accuracy.

Bigtable stores operational metrics with low-latency access. Its wide-column design efficiently supports multiple metrics per vehicle or region, enabling fleet managers to query vehicle positions, ETAs, and anomalies in near real time.

BigQuery stores historical telemetry and operational datasets for analytics, reporting, and ML feature extraction. Analysts can identify recurring delays, optimize routing strategies, and evaluate fleet performance over time.

Vertex AI hosts ML models to forecast delivery delays and suggest optimal routing. Real-time inference enables proactive interventions to reduce disruptions, improve delivery times, and optimize fuel usage. Models may leverage time-series forecasting, regression, or ensemble methods for higher accuracy.

Cloud Run exposes dashboards and APIs for operations teams, allowing monitoring of fleet performance, detection of anomalies, and dynamic route adjustments. Batch-only or SQL-based architectures cannot provide globally scalable, low-latency predictive delivery optimization.

This architecture provides a predictive, low-latency, globally scalable delivery optimization system, improving operational efficiency, reducing delays, and enhancing customer satisfaction.

Question 197:

A global airline wants to implement a real-time passenger experience monitoring system. It must ingest check-in data, flight telemetry, customer service interactions, and loyalty program data, compute engagement metrics, detect anomalies, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for event ingestion, Dataflow for real-time feature computation, BigQuery for analytics, Vertex AI for anomaly detection, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Looker

Explanation:

Monitoring passenger experience in real time requires ingestion of high-frequency events from check-in counters, mobile apps, flight telemetry, customer service systems, and loyalty programs. Pub/Sub provides scalable, durable, and globally distributed ingestion, decoupling producers from downstream pipelines and ensuring low-latency, fault-tolerant event delivery.

Dataflow computes engagement metrics in real time, aggregating check-in times, flight delays, service interactions, and loyalty activity. Stateful and windowed processing enables the detection of anomalies such as long queues, baggage delays, or low engagement scores. Feature enrichment with historical passenger, flight, and loyalty data improves predictive accuracy.

BigQuery stores structured datasets for analytics, reporting, and ML feature extraction. Analysts can detect systemic service issues, identify trends in passenger satisfaction, and optimize operational procedures.

Vertex AI hosts ML models for anomaly detection. Real-time inference identifies deviations in passenger behavior, service performance, and loyalty engagement, enabling immediate alerts to operations teams.

Looker dashboards visualize key passenger experience metrics, anomalies, and predictive insights, allowing operational teams to act proactively. Batch-only or Cloud SQL solutions cannot provide real-time, predictive monitoring at a global scale.

This architecture ensures scalable, low-latency, predictive monitoring of passenger experience, enabling proactive interventions to improve customer satisfaction and operational efficiency.

Question 198:

A multinational retail company wants to implement predictive inventory management. It must ingest purchases, returns, and warehouse updates, compute real-time inventory levels, forecast stockouts using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch replication
B) Pub/Sub for event ingestion, Dataflow for feature computation, Spanner for global inventory management, BigQuery for analytics, Vertex AI for predictive stockout modeling
C) Firestore multi-region with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI

Explanation:

Predictive inventory management requires ingestion of high-volume events from stores, warehouses, and online platforms. Pub/Sub provides globally scalable, durable ingestion, decoupling upstream producers from downstream pipelines. It ensures events are reliably delivered even during promotional spikes or seasonal demand surges.

Dataflow performs real-time feature computation, aggregating inventory changes from purchases, returns, and warehouse transfers. Data is enriched with product metadata, warehouse location, and supplier information. Stateful and windowed processing ensures accurate rolling inventory calculations while capturing short-term trends and anomalies.

Spanner provides globally consistent, transactional storage for inventory data. Strong consistency ensures inventory levels remain accurate across multiple regions, preventing overselling. Horizontal scalability supports global operations and multi-region synchronization.

BigQuery stores historical inventory and transaction datasets for analytics, trend detection, reporting, and ML feature extraction. Analysts can identify demand patterns, seasonality, and warehouse performance, feeding predictive models for optimized inventory planning.

Vertex AI hosts ML models for stockout detection and replenishment optimization. Real-time inference enables proactive restocking, reducing shortages and lost revenue. Batch-only or SQL-based solutions cannot meet global, predictive, low-latency inventory management requirements.

This architecture provides a globally scalable, low-latency, predictive inventory management platform, improving operational efficiency, reducing stockouts, and enhancing customer satisfaction.

Question 199:

A global ride-hailing company wants to implement a real-time driver allocation system. It must ingest ride requests, driver telemetry, and traffic updates, compute optimal driver-to-passenger assignments using ML, and provide APIs for mobile apps instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for low-latency driver lookup, Vertex AI for predictive demand modeling, Cloud Run for assignment APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Real-time driver allocation requires ingestion of high-frequency events, including ride requests, driver telemetry, and traffic updates. Pub/Sub provides scalable, durable, and globally distributed ingestion, decoupling front-end applications from backend pipelines, ensuring high availability during spikes in ride requests.

Dataflow computes features such as driver availability, ETA predictions, supply-demand ratios, and driver performance metrics. Stateful and windowed processing supports aggregation and detection of short-term trends and anomalies, enabling efficient driver allocation. Feature enrichment with historical performance data improves predictive accuracy.

Bigtable stores operational metrics and driver location data for low-latency access. Millisecond-level retrieval enables instant assignment decisions critical for maintaining customer satisfaction. Wide-column storage supports multiple metrics per driver or region efficiently.

Vertex AI hosts ML models to forecast ride demand and recommend optimal driver assignments. Real-time inference dynamically adjusts allocations to ensure efficient fleet utilization.

Cloud Run exposes APIs for mobile applications to deliver driver assignments instantly. Batch-only, Cloud SQL, or Firestore-only architectures cannot meet low-latency, predictive, globally scalable requirements.

This architecture ensures a globally scalable, low-latency, predictive driver allocation system, optimizing rider satisfaction and driver utilization.

Question 200:

A multinational logistics provider wants to implement a predictive fleet maintenance system. Vehicles emit GPS, engine, and fuel telemetry. The system must detect anomalies, forecast component failures using ML, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive maintenance, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Predictive fleet maintenance requires ingestion of high-frequency telemetry from vehicles, including GPS, engine metrics, and fuel consumption. Pub/Sub provides globally scalable, durable ingestion, decoupling vehicle telemetry from downstream pipelines, ensuring resilience to spikes and network failures.

Dataflow performs real-time feature computation, including rolling averages, anomaly detection, and telemetry enrichment with vehicle metadata. Stateful and windowed processing enables early detection of potential component failures, allowing proactive maintenance scheduling.

Bigtable stores operational metrics for low-latency queries, allowing fleet managers to monitor vehicle health and receive alerts in near real time. Its wide-column structure efficiently handles multiple metrics per vehicle.

BigQuery stores historical telemetry and operational datasets for analytics, trend detection, and ML feature extraction. Analysts can identify recurring failures, optimize maintenance schedules, and support predictive modeling.

Vertex AI hosts ML models for predicting component failures and recommending maintenance actions. Real-time inference ensures timely alerts for operational teams, reducing downtime and improving reliability.

Cloud Run exposes dashboards and APIs for monitoring, anomaly detection, and maintenance management. Batch-only or SQL-based solutions cannot provide globally scalable, low-latency predictive fleet maintenance.

This architecture provides a globally scalable, predictive, low-latency fleet maintenance system, improving operational efficiency, reducing downtime, and enhancing overall fleet reliability.
