Google Cloud Certified – Professional Cloud Architect Exam Dumps and Practice Test Questions, Set 7 (Q121–140)


Question 121:

A multinational bank wants to implement a real-time transaction monitoring system for fraud detection. The system must ingest millions of transactions per second, compute risk scores, detect anomalies across accounts, and alert compliance teams immediately. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML
B) Pub/Sub for ingestion, Dataflow for real-time processing, Bigtable for operational lookups, BigQuery for analytics, Vertex AI for ML scoring, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time fraud detection requires ingesting a massive stream of transactions from multiple sources, including ATMs, online banking, POS systems, and mobile apps. Pub/Sub provides high-throughput, durable, and globally scalable ingestion, ensuring no transactions are lost. It decouples data producers from downstream processing pipelines, allowing for flexibility and resilience during peak transaction loads.
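To make the ingestion step concrete, here is a minimal Python sketch of a producer publishing transaction events to Pub/Sub. The project ID, topic name, and message attribute are illustrative assumptions, not values from the scenario.

```python
# Minimal sketch of a transaction producer publishing to Pub/Sub.
# Assumptions: a project "my-bank-project" and a topic "transactions" already exist.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-bank-project", "transactions")

def publish_transaction(txn: dict) -> None:
    """Serialize one transaction and publish it; Pub/Sub buffers and retries delivery."""
    data = json.dumps(txn).encode("utf-8")
    future = publisher.publish(topic_path, data, source=txn.get("channel", "unknown"))
    future.result(timeout=30)  # block until Pub/Sub acknowledges the message

publish_transaction({
    "account_id": "acct-123",
    "amount": 249.99,
    "currency": "USD",
    "channel": "mobile",
})
```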

Dataflow processes these streams in real time, computing features such as transaction velocity, geolocation deviations, account activity correlations, and aggregations. Stateful and windowed processing enables detection of structured fraudulent activities, rapid transfers, or unusual patterns designed to evade detection. Feature enrichment with historical account behavior improves the accuracy of real-time anomaly detection.
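As an illustration of the kind of windowed feature Dataflow would compute, the sketch below counts transactions per account over a five-minute sliding window with Apache Beam and republishes high-velocity accounts for scoring. The subscription, output topic, and threshold are assumptions.

```python
# Illustrative Apache Beam (Dataflow) streaming sketch: per-account transaction
# velocity over a 5-minute sliding window. Names are placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadTxns" >> beam.io.ReadFromPubSub(
            subscription="projects/my-bank-project/subscriptions/transactions-sub")
        | "Parse" >> beam.Map(json.loads)
        | "KeyByAccount" >> beam.Map(lambda t: (t["account_id"], 1))
        | "SlidingWindow" >> beam.WindowInto(
            beam.window.SlidingWindows(size=300, period=60))
        | "CountPerAccount" >> beam.CombinePerKey(sum)
        | "FlagHighVelocity" >> beam.Filter(lambda kv: kv[1] > 20)
        | "Format" >> beam.Map(lambda kv: json.dumps(
            {"account_id": kv[0], "txn_count_5m": kv[1]}).encode("utf-8"))
        | "PublishSuspicious" >> beam.io.WriteToPubSub(
            topic="projects/my-bank-project/topics/high-velocity-accounts")
    )
```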

Bigtable stores operational lookup tables such as blacklisted accounts, suspicious IP addresses, and historical risk indicators. Its low-latency access ensures transactions are scored in milliseconds, which is critical for stopping fraud before it completes.
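A hedged example of such an operational lookup: the snippet below checks a hypothetical Bigtable blacklist table keyed by account ID. The instance, table, column family, and qualifier names are assumptions.

```python
# Millisecond-latency Bigtable lookup against an assumed blacklist table.
from google.cloud import bigtable

client = bigtable.Client(project="my-bank-project")
table = client.instance("fraud-ops").table("account_blacklist")

def is_blacklisted(account_id: str) -> bool:
    """Row key = account ID; presence of the 'status:flag' cell marks a blacklisted account."""
    row = table.read_row(account_id.encode("utf-8"))
    if row is None:
        return False
    cells = row.cells.get("status", {}).get(b"flag", [])
    return bool(cells)

print(is_blacklisted("acct-123"))
```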

BigQuery stores historical transaction data, supporting analytics, model training, and regulatory reporting. Analysts can extract features, back-test models, and identify patterns of fraudulent activity over time.

Vertex AI hosts ML models for predictive scoring, using supervised, unsupervised, or ensemble methods. Real-time inference allows scoring of each transaction immediately as it is processed, providing an actionable risk score for compliance workflows.
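The call pattern for online scoring might look like the following sketch, which sends one feature vector to a deployed Vertex AI endpoint. The endpoint resource name and feature schema are assumptions; a real model defines its own input format.

```python
# Illustrative online scoring call against a Vertex AI endpoint.
from google.cloud import aiplatform

aiplatform.init(project="my-bank-project", location="us-central1")
endpoint = aiplatform.Endpoint(
    "projects/my-bank-project/locations/us-central1/endpoints/1234567890")

features = {
    "amount": 249.99,
    "txn_count_5m": 7,
    "geo_distance_km": 410.2,
    "account_age_days": 1460,
}
prediction = endpoint.predict(instances=[features])
risk_score = prediction.predictions[0]  # e.g. probability of fraud returned by the model
print(risk_score)
```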

Cloud Run exposes APIs to deliver alerts to fraud detection teams, integrate with workflow systems, or trigger automated actions such as transaction blocking. Alternative solutions like Cloud SQL, Firestore, or batch processing fail to provide the required scale, latency, and predictive analytics capabilities.

This architecture ensures a highly scalable, low-latency, predictive fraud detection platform that can operate globally, protecting the bank and its customers in real time.

Question 122:

A global logistics company wants to implement a real-time fleet tracking and optimization system. Vehicles continuously emit GPS coordinates, fuel usage, engine telemetry, and vibration metrics. The system must detect anomalies, optimize routes, predict maintenance needs, and provide dashboards for operations. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for telemetry ingestion, Dataflow for processing, Bigtable for operational queries, BigQuery for analytics, Vertex AI for predictive maintenance, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time fleet tracking requires high-frequency telemetry from vehicles. Pub/Sub provides scalable, durable ingestion of GPS, fuel, engine, and vibration data, decoupling devices from downstream pipelines and ensuring resilience under traffic spikes.

Dataflow performs real-time transformations and feature computation, including rolling averages, anomaly detection, and enrichment with metadata like vehicle type and driver assignment. Stateful processing enables detection of irregularities such as engine overheating, excessive vibration, or route deviations. Windowed aggregations provide operational metrics across different time intervals.

Bigtable stores operational data for low-latency access, enabling fleet managers to monitor vehicle positions, health metrics, and alerts in near real time. Its wide-column structure allows efficient retrieval of multiple metrics per vehicle.
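A possible wide-column layout is sketched below: each vehicle's readings are written to a Bigtable row keyed by vehicle ID plus a reverse timestamp so that the most recent readings sort first. Instance, table, and column-family names are assumptions.

```python
# Sketch of writing per-vehicle telemetry to a wide-column Bigtable row.
import time
from google.cloud import bigtable

client = bigtable.Client(project="my-logistics-project")
table = client.instance("fleet-ops").table("vehicle_telemetry")

def write_telemetry(vehicle_id: str, metrics: dict) -> None:
    """Reverse-timestamp row keys keep the latest readings for a vehicle adjacent."""
    reverse_ts = 2**63 - int(time.time() * 1000)
    row_key = f"{vehicle_id}#{reverse_ts}".encode("utf-8")
    row = table.direct_row(row_key)
    for name, value in metrics.items():
        row.set_cell("metrics", name.encode("utf-8"), str(value).encode("utf-8"))
    row.commit()

write_telemetry("truck-042", {"lat": 52.52, "lon": 13.40, "fuel_pct": 61.5, "vibration": 0.8})
```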

BigQuery stores historical telemetry and operational data for analytics, model training, and trend analysis. Analysts can compute maintenance statistics, identify patterns of component failures, and optimize fleet utilization.

Vertex AI hosts predictive maintenance models that forecast component failures, fuel inefficiencies, or required service. Real-time inference allows preemptive interventions to avoid breakdowns and improve operational efficiency.

Cloud Run exposes APIs and dashboards for operations managers to visualize fleet performance, receive alerts, and access predictive insights. Batch-only solutions, Firestore, or Cloud SQL cannot deliver real-time predictive capabilities at fleet scale.

This architecture ensures scalable, real-time fleet monitoring, predictive maintenance, and route optimization, enhancing operational efficiency and reliability for global logistics operations.

Question 123:

A global retail company wants to implement a real-time recommendation engine for its e-commerce platform. The system must ingest user interactions, compute features, update recommendations instantly, and support ML-based personalization for millions of users. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch scripts
B) Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for analytics, Vertex AI for ML inference, Cloud Run for recommendation delivery
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Personalized recommendations require ingesting millions of user interactions such as clicks, purchases, and product views. Pub/Sub ensures high-throughput, durable ingestion and decouples front-end applications from backend processing pipelines. It can scale globally to handle large spikes during promotions or peak shopping periods.

Dataflow processes streams in real time, computing features such as session behavior, product affinities, and engagement patterns. Stateful and windowed processing allows aggregation of features over rolling periods, capturing both short-term and long-term trends. Data enrichment with metadata like product categories or user profiles improves model accuracy.

BigQuery stores historical interaction data for analytics, trend analysis, and ML feature extraction. Analysts can explore user behavior, product popularity, and seasonal trends. Partitioning and clustering optimize queries over large datasets.
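As a hedged example of the analytics layer, the query below extracts seven-day interaction features from a hypothetical `interactions` table; dataset, table, and column names are assumptions, and the date filter only prunes storage if the table is actually partitioned on that column.

```python
# Feature-extraction query over an assumed partitioned, clustered interactions table.
from google.cloud import bigquery

client = bigquery.Client(project="my-retail-project")

sql = """
SELECT
  user_id,
  product_category,
  COUNT(*)                     AS views_7d,
  COUNTIF(event = 'purchase')  AS purchases_7d
FROM `my-retail-project.clickstream.interactions`
WHERE event_date BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND CURRENT_DATE()
GROUP BY user_id, product_category
"""

for row in client.query(sql).result():
    print(row.user_id, row.product_category, row.views_7d, row.purchases_7d)
```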

Vertex AI hosts ML models for real-time inference. Models can use collaborative filtering, content-based filtering, or hybrid approaches. Continuous retraining ensures recommendations adapt to changing user preferences. Real-time inference allows instantaneous updates to recommendations while users interact with the platform.

Cloud Run exposes APIs to deliver recommendations to the e-commerce front end. Autoscaling ensures the system can handle millions of concurrent users without latency. Batch processing, Firestore, or Cloud SQL are insufficient for real-time, scalable personalization.
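The delivery layer could be as simple as the following Flask service, which Cloud Run would scale automatically; the Vertex AI endpoint ID and request shape are assumptions for the sketch.

```python
# Minimal Flask service of the kind Cloud Run could host for recommendation delivery.
import os
from flask import Flask, jsonify, request
from google.cloud import aiplatform

app = Flask(__name__)
aiplatform.init(project="my-retail-project", location="us-central1")
endpoint = aiplatform.Endpoint(os.environ.get("RECS_ENDPOINT", "1234567890"))

@app.route("/recommendations/<user_id>")
def recommendations(user_id: str):
    limit = int(request.args.get("limit", 10))
    prediction = endpoint.predict(instances=[{"user_id": user_id, "limit": limit}])
    return jsonify({"user_id": user_id, "items": prediction.predictions[0]})

if __name__ == "__main__":
    # Cloud Run injects PORT; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```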

This architecture provides scalable, low-latency, real-time personalized recommendations, improving user engagement and conversion rates.

Question 124:

A financial services firm wants to implement a real-time risk management platform. The system must ingest trades, compute risk metrics, detect anomalies, feed ML models for predictive scoring, and provide dashboards for traders and compliance teams. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for trade ingestion, Dataflow for real-time analytics, Bigtable for operational risk metrics, BigQuery for historical analysis, Vertex AI for predictive scoring, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Looker

Explanation:

Global real-time risk management requires ingestion of trades and events from multiple markets. Pub/Sub handles high-throughput ingestion, ensuring trades are captured reliably and decoupling producers from downstream processing.

Dataflow performs streaming analytics, computing metrics such as Value-at-Risk, portfolio exposure, and rolling risk aggregations. Stateful and windowed processing enables anomaly detection in trading patterns, sudden exposures, or irregular market activity.
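To illustrate how such streaming aggregates might flow into the warehouse, the Beam fragment below sums notional exposure per desk over one-minute windows and appends the results to BigQuery. The subscription, table, and field names are assumptions.

```python
# Illustrative Beam fragment: rolling exposure per desk written to BigQuery.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadTrades" >> beam.io.ReadFromPubSub(
            subscription="projects/my-bank-project/subscriptions/trades-sub")
        | "Parse" >> beam.Map(json.loads)
        | "KeyByDesk" >> beam.Map(lambda t: (t["desk"], float(t["notional"])))
        | "1MinWindows" >> beam.WindowInto(beam.window.FixedWindows(60))
        | "SumExposure" >> beam.CombinePerKey(sum)
        | "ToRow" >> beam.Map(lambda kv: {"desk": kv[0], "exposure": kv[1]})
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-bank-project:risk.exposure_1m",
            schema="desk:STRING,exposure:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
    )
```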

Bigtable stores operational risk metrics for low-latency access, allowing traders and compliance teams to monitor positions, exposure, and risk indicators in near real time.

BigQuery stores historical trade and risk data for analytics, back-testing, feature extraction, and regulatory reporting. Analysts can compute trends, perform scenario analysis, and support ML model training.

Vertex AI hosts predictive models that forecast market risk, counterparty risk, or potential portfolio exposure. Real-time inference delivers actionable risk scores to dashboards and alerting systems.

Looker dashboards provide visualizations for traders and compliance officers, showing anomalies, predictive metrics, and historical trends. Batch-only or SQL-based solutions cannot meet low-latency, high-throughput, and predictive requirements.

This architecture ensures a scalable, real-time, predictive, and actionable global risk management platform.

Question 125:

A global logistics provider wants to implement a predictive delivery system. Vehicles emit GPS, fuel, and engine telemetry. The system must compute optimal routes, predict delays using ML models, detect anomalies, and provide dashboards for operations. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for low-latency operational queries, BigQuery for analytics, Vertex AI for predictive delivery, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time predictive delivery requires ingesting high-frequency telemetry from vehicles, including GPS location, fuel, and engine performance. Pub/Sub ensures high-throughput, reliable, and globally scalable ingestion. It decouples vehicle sensors from downstream pipelines, allowing for burst handling and fault tolerance.

Dataflow computes real-time features such as rolling average speeds, route deviations, fuel consumption, ETA predictions, and anomaly detection. Windowed processing allows detection of sudden delays or deviations that may impact delivery schedules.

Bigtable stores operational metrics for low-latency queries. Fleet managers can access real-time vehicle positions, fuel status, and predictive alerts for proactive rerouting or maintenance.

BigQuery stores historical telemetry for analytics, model training, and trend analysis. Analysts can identify recurring delays, traffic patterns, and vehicle performance issues.

Vertex AI hosts ML models for predictive delivery and route optimization. Real-time inference provides estimated arrival times, predicts delays, and suggests optimal routing adjustments.

Cloud Run exposes dashboards and APIs for operations teams to monitor fleet performance, receive predictive alerts, and take corrective actions. Batch-only or SQL-based architectures cannot deliver real-time predictive insights.

This architecture ensures scalable, real-time, predictive delivery optimization for global logistics operations.

Question 126:

A global e-commerce company wants to implement a real-time inventory tracking and alert system. The system must reflect purchases, returns, and warehouse transfers instantly, and alert managers when stock levels fall below thresholds. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for event ingestion, Dataflow for feature computation, Bigtable for operational inventory queries, BigQuery for analytics, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Cloud Run

Explanation:

Real-time inventory tracking requires ingestion of high-frequency events from multiple sources, including point-of-sale systems, e-commerce transactions, and warehouse management systems. Pub/Sub provides scalable, reliable, and durable ingestion, decoupling data producers from downstream pipelines and ensuring no event loss during peak shopping periods.

Dataflow performs stream processing in real time, aggregating events to compute current inventory levels, detect anomalies, and enrich data with metadata such as product categories, warehouse location, and supplier information. Windowed and stateful processing enables the detection of sudden inventory drops or unusually high sales for specific products.

Bigtable stores operational inventory data for low-latency access. Inventory managers can query the current stock for any SKU instantly, ensuring fast alerting and replenishment decisions. Its wide-column design allows multiple metrics to be associated with each product or warehouse efficiently.

BigQuery stores historical inventory and transaction data for analytics, reporting, trend analysis, and feature extraction for ML models. Analysts can identify seasonal trends, high-demand products, and warehouse performance metrics.

Cloud Run exposes APIs to trigger alerts when inventory falls below thresholds, enabling automated notifications to operations teams or integration with procurement systems. Batch-only or SQL-only architectures cannot meet real-time, global-scale requirements.
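One common wiring is a Pub/Sub push subscription that targets the Cloud Run service; the sketch below decodes the push envelope and raises an alert when a hypothetical `quantity` field falls below a threshold. The payload shape and the `notify_procurement` helper are assumptions.

```python
# Sketch of a Cloud Run handler receiving Pub/Sub push messages for low-stock alerts.
import base64
import json
import os
from flask import Flask, request

app = Flask(__name__)
LOW_STOCK_THRESHOLD = int(os.environ.get("LOW_STOCK_THRESHOLD", 25))

def notify_procurement(sku: str, quantity: int) -> None:
    # Placeholder: integrate with e-mail, chat, or a procurement system here.
    print(f"ALERT: {sku} down to {quantity} units")

@app.route("/pubsub/push", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    payload = json.loads(base64.b64decode(envelope["message"]["data"]).decode("utf-8"))
    if payload["quantity"] < LOW_STOCK_THRESHOLD:
        notify_procurement(payload["sku"], payload["quantity"])
    return ("", 204)  # acknowledge the push delivery

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```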

This architecture ensures scalable, low-latency, and predictive inventory management with proactive alerts, improving operational efficiency and preventing stockouts across the e-commerce platform.

Question 127:

A global ride-hailing company wants to implement a predictive driver allocation system. It must ingest ride requests, driver locations, and traffic data, compute optimal driver-to-passenger matches, and provide APIs to mobile applications in real time. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for event ingestion, Dataflow for real-time feature computation, Bigtable for low-latency location lookups, Vertex AI for predictive demand, Cloud Run for assignment APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Predictive driver allocation requires ingestion of millions of ride requests, driver telemetry, and traffic updates in real time. Pub/Sub provides high-throughput, durable ingestion and decouples mobile applications from downstream processing pipelines. It ensures that spikes in ride requests can be handled without data loss or latency spikes.

Dataflow performs feature computation in real time, calculating metrics such as driver availability, traffic-adjusted ETA, regional demand-supply ratios, and driver ratings. Stateful and windowed computations enable the system to aggregate data over short periods, detect anomalies, and prepare inputs for predictive models.

Bigtable stores operational metrics and driver location data for low-latency queries. Its fast read/write capability allows driver matching decisions to occur within milliseconds, which is crucial for customer satisfaction and operational efficiency.

Vertex AI hosts ML models that forecast ride demand and optimize driver allocation dynamically. Models incorporate historical trends, real-time traffic, and driver availability, generating predicted demand hotspots and suggesting optimal driver positioning. Real-time inference ensures allocations are updated continuously.

Cloud Run exposes APIs to mobile applications for dispatching drivers and providing riders with real-time updates. Batch-only, Cloud SQL, or Firestore solutions cannot meet the latency, predictive, and global scaling requirements for a real-time ride-hailing platform.

This architecture ensures scalable, predictive, and low-latency driver allocation, optimizing rider experience and driver utilization.

Question 128:

A global logistics company wants to implement a predictive delivery optimization system. Vehicles emit GPS, engine, and fuel telemetry. The system must compute optimal routes, predict delays using ML, detect anomalies, and provide dashboards to operations managers. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive delivery, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time predictive delivery optimization requires ingestion of telemetry streams from a large fleet of vehicles. Pub/Sub ensures high-throughput, globally scalable ingestion, decoupling vehicles from downstream pipelines, and provides resilience to network disruptions or bursts of telemetry data.

Dataflow processes telemetry streams in real time, computing features such as route deviations, travel time estimates, fuel efficiency, and anomaly detection. Stateful and windowed processing allows detection of deviations, delays, and performance issues on individual vehicles or routes.

Bigtable stores operational metrics for low-latency queries. Fleet managers can access vehicle locations, ETA predictions, and anomaly alerts instantly, enabling timely operational interventions. Its wide-column design is ideal for efficiently storing telemetry per vehicle.

BigQuery stores historical telemetry and route data for analytics, trend detection, and feature extraction. Analysts can compute traffic patterns, recurring delays, and predictive features for ML models.

Vertex AI hosts ML models that forecast delivery delays, recommend optimal routing, and predict potential maintenance needs. Real-time inference allows operations teams to reroute vehicles and proactively prevent delays.

Cloud Run exposes APIs and dashboards for operations managers to monitor fleet performance, delivery status, and predictive insights. Batch-only or SQL-based solutions cannot deliver low-latency predictive optimization at scale.

This architecture provides real-time predictive delivery optimization and fleet monitoring, improving operational efficiency, reducing delays, and increasing customer satisfaction.

Question 129:

A global airline wants to implement a predictive maintenance platform for aircraft. Telemetry includes engine parameters, vibration data, GPS, and fuel metrics. The system must detect anomalies, forecast component failures, and notify maintenance crews proactively. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Cloud Storage for raw telemetry, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with offline ML

Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker

Explanation:

Aircraft telemetry is high-frequency and globally distributed. Pub/Sub provides reliable, scalable ingestion of engine parameters, vibration metrics, GPS locations, and fuel consumption data. Decoupling ingestion from downstream processing ensures that telemetry streams can be processed without loss or delay.

Dataflow performs real-time feature computation, including rolling averages, vibration frequency analysis, temperature trends, and anomaly detection. Windowed and stateful processing enables both short-term anomaly detection and long-term trend analysis to identify potential failures early.

Cloud Storage stores raw telemetry for archival, auditing, and regulatory compliance purposes. Data encryption ensures compliance with aviation regulations and secure long-term storage.
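A minimal archival sketch follows, assuming a bucket named `acme-air-raw-telemetry` and a JSON-lines layout per aircraft; objects are encrypted at rest by default, and CMEK can be configured on the bucket if customer-managed keys are required.

```python
# Archiving a raw telemetry batch to Cloud Storage (bucket/object names assumed).
import datetime
import json
from google.cloud import storage

client = storage.Client(project="my-airline-project")
bucket = client.bucket("acme-air-raw-telemetry")

def archive_batch(aircraft_id: str, records: list) -> str:
    ts = datetime.datetime.utcnow().strftime("%Y/%m/%d/%H%M%S")
    blob = bucket.blob(f"{aircraft_id}/{ts}.jsonl")
    blob.upload_from_string(
        "\n".join(json.dumps(r) for r in records),
        content_type="application/json")
    return blob.name

archive_batch("N123GC", [{"engine_temp_c": 612, "vibration": 0.42, "fuel_kg": 8120}])
```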

BigQuery stores structured datasets derived from telemetry, enabling analytics, trend analysis, and model training. Analysts can evaluate fleet performance, incident history, and historical anomalies to improve predictive accuracy.

Vertex AI hosts predictive models that forecast component or engine failures using historical and real-time features. Models may employ regression, time-series analysis, or deep learning to predict maintenance needs accurately. Real-time inference allows proactive alerts to maintenance teams, reducing unscheduled downtime.

Looker dashboards provide visualizations of fleet health, predicted failures, and anomaly trends, allowing operational and maintenance teams to plan interventions effectively. Batch-only or SQL-based solutions cannot meet real-time predictive requirements at fleet scale.

This architecture ensures scalable, secure, real-time predictive maintenance, improving fleet safety and operational efficiency.

Question 130:

A global retail company wants to implement a real-time personalized marketing platform. It must ingest user interactions, compute behavioral features, score users with ML models, and deliver personalized offers to websites and mobile apps instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for event ingestion, Dataflow for feature computation, BigQuery for analytics, Vertex AI for ML scoring, Cloud Run for delivering offers
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Personalized marketing requires ingestion of millions of user interactions per second, including clicks, purchases, product views, and engagement with promotions. Pub/Sub provides scalable, reliable ingestion and decouples the front end from backend analytics pipelines, ensuring real-time processing without delays.

Dataflow computes behavioral features in real time, including session activity, product affinity scores, browsing patterns, and engagement metrics. Windowed and stateful processing allows aggregation over rolling periods and detection of changes in user behavior. Data enrichment with metadata, such as demographics, purchase history, and loyalty program tier, improves predictive accuracy.

BigQuery stores historical interaction and transaction data for analytics, trend analysis, and feature extraction for ML models. Analysts can identify patterns, segment users, and prepare datasets for model training and validation.

Vertex AI hosts ML models for real-time scoring, predicting user preferences, and recommending personalized offers. Models can employ collaborative filtering, content-based filtering, or hybrid approaches. Continuous retraining ensures models adapt to evolving user behavior.

Cloud Run exposes APIs for delivering personalized offers to web and mobile platforms. Autoscaling ensures low latency even under millions of concurrent requests. Batch-only, Firestore, or Cloud SQL solutions cannot meet the real-time predictive and global scaling requirements for personalized marketing.

This architecture ensures scalable, low-latency, real-time personalized marketing, increasing user engagement and conversion rates globally.

Question 131:

A global bank wants to implement a real-time credit scoring system. The system must ingest loan applications, financial transactions, and behavioral data, compute risk features, generate credit scores using ML models, and provide dashboards for underwriters. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for ingestion, Dataflow for feature computation, BigQuery for analytics, Vertex AI for credit scoring, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Looker

Explanation:

Real-time credit scoring requires ingestion of data from multiple sources, including loan applications, banking transactions, and behavioral logs. Pub/Sub ensures high-throughput, scalable, and reliable ingestion while decoupling data producers from downstream pipelines. It can handle spikes during peak application periods and provides durability for sensitive financial data.

Dataflow processes streams in real time, computing features such as credit utilization, payment history, income-to-debt ratio, and behavioral indicators. Windowed and stateful processing allows aggregation over time and calculation of complex features for accurate scoring. Feature enrichment with historical credit and transaction data improves model performance.

BigQuery stores structured historical and current datasets, enabling analysts to perform detailed analytics, detect trends, and generate training datasets for ML models. Partitioning and clustering optimize queries over large datasets spanning years of financial transactions.

Vertex AI hosts ML models for credit scoring, using supervised learning to predict the probability of default, risk tiers, or creditworthiness. Real-time inference ensures loan officers or automated underwriting systems receive scores instantly to make informed decisions. Continuous retraining keeps models up-to-date with evolving financial behaviors.

Looker dashboards provide visualizations for underwriters, showing scores, feature contributions, and trends across portfolios. Batch-only or Cloud SQL-based solutions cannot handle real-time, high-throughput scoring while providing predictive insights.

This architecture ensures scalable, real-time credit scoring with predictive insights, supporting global banking operations and regulatory compliance.

Question 132:

A global e-commerce platform wants to implement a real-time dynamic pricing system. The system must ingest user interactions, competitor pricing, inventory levels, and market conditions, compute optimal prices using ML, and update the platform in near real time. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch updates
B) Pub/Sub for event ingestion, Dataflow for feature computation, BigQuery for analytics, Vertex AI for price optimization, Cloud Run for updating prices
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Dynamic pricing requires real-time ingestion of multiple data sources, including user interactions, competitor prices, inventory levels, and market signals. Pub/Sub provides high-throughput, globally scalable ingestion, ensuring no events are lost and decoupling producers from processing pipelines.

Dataflow computes real-time features such as product demand, competitor price differences, conversion trends, inventory constraints, and seasonal factors. Windowed processing enables rolling computations to detect sudden shifts in demand or supply. Data enrichment with historical sales data and product metadata improves model accuracy.
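One way Dataflow handles this kind of enrichment is with a side input; the Beam fragment below joins a small product-metadata map onto the event stream. The metadata source, field names, and topics are assumptions, and in production the side input would typically be loaded from BigQuery or Cloud Storage rather than a static `Create`.

```python
# Illustrative Beam fragment: metadata enrichment with a side input.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def enrich(event, product_meta):
    meta = product_meta.get(event["sku"], {})
    event["category"] = meta.get("category", "unknown")
    event["base_price"] = meta.get("base_price", 0.0)
    return event

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    # Static metadata keeps the sketch self-contained.
    product_meta = p | "Meta" >> beam.Create([
        ("sku-1", {"category": "electronics", "base_price": 199.0}),
    ])
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-shop-project/subscriptions/pricing-events-sub")
        | "Parse" >> beam.Map(json.loads)
        | "Enrich" >> beam.Map(enrich, product_meta=beam.pvalue.AsDict(product_meta))
        | "Serialize" >> beam.Map(lambda e: json.dumps(e).encode("utf-8"))
        | "Emit" >> beam.io.WriteToPubSub(
            topic="projects/my-shop-project/topics/enriched-pricing-events")
    )
```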

BigQuery stores historical and current datasets for analytics, trend analysis, and ML feature extraction. Analysts can compute patterns in demand, price elasticity, and seasonality to inform model training and validation.

Vertex AI hosts ML models that perform predictive pricing optimization. Models may use regression, reinforcement learning, or ensemble methods to recommend optimal prices in real time. Continuous inference ensures near-instantaneous price adjustments on the platform.

Cloud Run exposes APIs to update prices in the e-commerce system, ensuring low-latency delivery to front-end applications. Alternative architectures like Cloud SQL or batch-only processing cannot provide sub-second dynamic pricing at a global scale.

This architecture enables scalable, predictive, and real-time dynamic pricing, maximizing revenue and competitiveness.

Question 133:

A multinational airline wants to implement a real-time passenger experience analytics platform. The system must ingest check-in data, flight telemetry, customer service interactions, and loyalty program data, compute engagement metrics, detect anomalies, and provide dashboards. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for ingestion, Dataflow for real-time processing, BigQuery for analytics, Vertex AI for anomaly detection, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Looker

Explanation:

Passenger experience analytics involves high-frequency data ingestion from check-in systems, mobile apps, in-flight telemetry, and customer service logs. Pub/Sub ensures scalable, durable ingestion and decouples sources from processing pipelines. It can absorb peaks during check-in windows across multiple time zones.

Dataflow performs real-time feature computation, aggregating metrics such as boarding times, flight delays, service interactions, and loyalty program engagement. Stateful and windowed processing allows detection of anomalies such as unusual service complaints, delayed flights affecting specific routes, or loyalty discrepancies.

BigQuery stores historical datasets for analytics, trend detection, and ML feature extraction. Analysts can compute customer satisfaction scores, identify bottlenecks, and perform cohort analysis for loyalty programs.

Vertex AI hosts predictive models for anomaly detection, identifying unusual patterns in customer interactions or operational delays. Real-time inference allows proactive interventions, such as prioritizing affected passengers or sending notifications.

Looker dashboards provide visualizations for operations, marketing, and customer service teams, highlighting anomalies, trends, and actionable insights. Batch-only architectures or SQL-based solutions cannot handle real-time global-scale analysis.

This architecture ensures a scalable, predictive, and real-time platform for monitoring passenger experience and improving operational efficiency.

Question 134:

A global bank wants to implement a real-time anti-fraud system for mobile payments. The system must ingest millions of mobile transactions, compute risk scores, detect anomalies, and alert compliance teams instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational lookups, BigQuery for analytics, Vertex AI for scoring, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time mobile fraud detection involves ingesting high-frequency events from millions of mobile transactions. Pub/Sub provides scalable, reliable, and global ingestion. It ensures that no events are lost during peak usage, decouples mobile payment apps from processing pipelines, and allows independent scaling.

Dataflow computes features such as transaction velocity, device geolocation changes, transaction amount deviations, and historical account patterns. Stateful and windowed processing allows the detection of both short-term spikes and longer-term suspicious behaviors. Feature enrichment with historical behavior improves predictive model accuracy.

Bigtable stores operational lookup tables such as blacklisted accounts, suspicious devices, and historical risk indicators. Its low-latency access allows transactions to be scored in milliseconds, which is critical to prevent fraudulent actions.

BigQuery stores historical transaction datasets for analytics, trend analysis, feature extraction, and compliance reporting. Analysts can back-test models, detect patterns, and compute metrics for model validation.

Vertex AI hosts ML models for real-time scoring of transactions. Supervised, unsupervised, or ensemble models generate risk scores in real time, enabling actionable insights.

Cloud Run exposes APIs to deliver alerts to fraud detection teams or automated workflows for blocking transactions. Batch-only, Cloud SQL, or Firestore-based solutions cannot meet the required throughput, low latency, and predictive capabilities.

This architecture ensures scalable, predictive, real-time mobile fraud detection while maintaining regulatory compliance.

Question 135:

A global ride-hailing company wants to implement a real-time driver allocation system. It must ingest ride requests, driver location updates, and traffic data, compute optimal assignments, and provide APIs for driver and rider applications. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for real-time computation, Bigtable for low-latency location queries, Vertex AI for predictive demand, Cloud Run for assignment APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Real-time driver allocation requires ingestion of millions of ride requests, driver telemetry, and traffic data. Pub/Sub ensures high-throughput, reliable, and scalable ingestion, decoupling ride request sources from processing pipelines.

Dataflow computes features in real time, including driver availability, traffic-adjusted ETA, demand-supply ratios, and dynamic routing. Stateful processing ensures rides are allocated optimally based on the current fleet status. Windowed computations enable aggregation for short-term predictions and operational metrics.

Bigtable stores operational driver and rider data for low-latency queries. Fast read/write access ensures allocation decisions occur within milliseconds to avoid delays.

Vertex AI hosts predictive models that forecast demand surges, enabling preemptive driver positioning. Real-time inference allows dynamic pricing and matching decisions to improve efficiency and reduce rider wait times.

Cloud Run exposes APIs for mobile and web applications, providing assignment responses instantly. Alternative solutions such as Cloud SQL, Firestore, or batch processing cannot provide the necessary low-latency, real-time predictive allocation at scale.

This architecture ensures scalable, predictive, and low-latency driver allocation for global ride-hailing operations.

Question 136:

A global airline wants to implement a predictive maintenance system for its aircraft fleet. Telemetry includes engine metrics, vibration data, GPS, and fuel consumption. The system must detect anomalies, predict failures, and notify maintenance crews proactively. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML
B) Pub/Sub for telemetry ingestion, Dataflow for real-time feature computation, Cloud Storage for raw telemetry, BigQuery for analytics, Vertex AI for predictive maintenance, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with offline ML

Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker

Explanation:

Aircraft telemetry is high-frequency and global in scale. Pub/Sub provides scalable, reliable ingestion of engine metrics, vibration data, GPS, and fuel consumption. Decoupling ingestion from downstream processing ensures fault tolerance and resilience.

Dataflow computes real-time features such as rolling averages, vibration frequency analysis, temperature trends, and anomaly detection. Windowed and stateful processing allows detection of both immediate anomalies and long-term trends indicating potential failures.

Cloud Storage stores raw telemetry for archival, audit, and compliance purposes. Data is encrypted and stored securely to meet aviation regulatory requirements.

BigQuery stores structured datasets derived from telemetry, enabling analytics such as fleet-wide performance trends, incident analysis, and historical anomaly investigation. Partitioning and clustering ensure performance for querying large datasets.

Vertex AI hosts predictive models that forecast component failures. Models use time-series, regression, or deep learning approaches to predict engine or component degradation. Real-time inference ensures timely maintenance alerts to crews.

Looker dashboards provide visualizations of fleet health, anomalies, and predictive maintenance insights, allowing proactive interventions. Batch-only or SQL-based architectures cannot meet real-time, predictive maintenance needs at fleet scale.

This architecture ensures secure, scalable, real-time predictive maintenance for aircraft, reducing downtime and improving operational safety.

Question 137:

A global retail chain wants to implement a real-time inventory and replenishment system. Inventory updates must reflect purchases, returns, and transfers instantly, while ML models predict stockouts and optimize replenishment. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with regional replication
B) Pub/Sub for event ingestion, Dataflow for real-time processing, Spanner for global inventory, BigQuery for analytics, Vertex AI for stockout prediction
C) Firestore multi-region with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI

Explanation:

Global inventory management requires real-time ingestion of transactions from stores, warehouses, and online channels. Pub/Sub provides scalable ingestion and decouples event producers from downstream pipelines.

Dataflow computes real-time features, such as inventory updates, stock aggregations, anomaly detection, and enrichment with product metadata. Windowed operations detect rapid sales trends and potential stockouts.

Cloud Spanner provides globally consistent transactional storage, ensuring that inventory updates are reflected in real time across regions. This prevents overselling and supports high transaction volumes.
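A minimal sketch of such a globally consistent update, assuming an `Inventory` table keyed by SKU and store: the decrement runs inside a Spanner read-write transaction, which Spanner retries on transient aborts while preserving ACID semantics.

```python
# Globally consistent inventory decrement in Cloud Spanner (names assumed).
from google.cloud import spanner

client = spanner.Client(project="my-retail-project")
database = client.instance("global-inventory").database("inventory")

def decrement_stock(transaction, sku, store_id, qty):
    transaction.execute_update(
        "UPDATE Inventory SET quantity = quantity - @qty "
        "WHERE sku = @sku AND store_id = @store_id AND quantity >= @qty",
        params={"qty": qty, "sku": sku, "store_id": store_id},
        param_types={
            "qty": spanner.param_types.INT64,
            "sku": spanner.param_types.STRING,
            "store_id": spanner.param_types.STRING,
        },
    )

database.run_in_transaction(decrement_stock, "sku-1", "store-nyc-01", 2)
```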

BigQuery stores historical data for analytics, reporting, and trend analysis. Analysts can evaluate product demand, regional variations, and inventory efficiency.

Vertex AI hosts predictive models for stockout detection and replenishment optimization. Models use historical and real-time features to forecast inventory needs and guide automated replenishment. Real-time inference ensures timely stock management.

Alternatives such as Cloud SQL, Firestore, or batch-only processing cannot meet real-time, global consistency, or predictive requirements.

This architecture provides scalable, consistent, real-time inventory management with predictive replenishment across regions.

Question 138:

A global ride-hailing company wants to implement a real-time surge pricing and demand prediction system. It must ingest ride requests, driver locations, traffic, and weather data, compute dynamic fares using ML, and provide APIs for the platform. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch scripts
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational metrics, Vertex AI for predictive pricing, Cloud Run for fare APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Surge pricing requires high-frequency ingestion of ride requests, driver telemetry, traffic updates, and weather information. Pub/Sub ensures global, scalable ingestion and decouples producers from processing pipelines.

Dataflow computes features in real time, such as demand-supply ratios, ETA-adjusted driver availability, traffic congestion, and historical demand patterns. Windowed and stateful processing supports predictive analysis.

Bigtable stores operational metrics for low-latency access, allowing sub-second fare computations.

Vertex AI hosts predictive models for dynamic pricing, incorporating real-time and historical features to determine optimal surge fares. Real-time inference ensures immediate application to the ride-hailing platform.

Cloud Run exposes APIs to deliver updated fares to mobile and web applications. Batch-only or SQL-based solutions cannot meet the real-time predictive and scaling requirements.

This architecture enables predictive, real-time surge pricing to optimize revenue and reduce rider wait times.

Question 139:

A global financial institution wants to implement a real-time anti-money-laundering (AML) platform. It must ingest millions of financial transactions, compute risk metrics, detect suspicious activity, and alert compliance teams instantly. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for real-time feature computation, Bigtable for operational lookups, BigQuery for analytics, Vertex AI for risk scoring, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

AML platforms require ingestion of massive transaction streams from multiple sources, including wire transfers, credit card transactions, and account activities. Pub/Sub provides high-throughput, globally scalable ingestion and ensures reliable delivery.

Dataflow computes real-time features like transaction velocity, geolocation deviations, account correlations, and aggregate balances. Stateful and windowed processing detects structured attempts to bypass compliance, such as rapid, layered, or deliberately small transactions designed to stay below reporting thresholds.

Bigtable stores operational lookups such as blacklisted accounts, high-risk IPs, and historical risk scores for low-latency access during scoring.

BigQuery stores historical transactions for analytics, model training, and regulatory reporting. Analysts can extract features, perform trend analysis, and validate predictive models.

Vertex AI hosts predictive models for risk scoring, generating real-time assessments for AML compliance. Cloud Run exposes APIs for alerting and workflow integration. Batch-only or SQL-based architectures cannot provide the necessary real-time, predictive capabilities.

This architecture ensures scalable, low-latency, predictive AML monitoring, supporting regulatory compliance globally.

Question 140:

A global logistics provider wants to implement a predictive maintenance system for its delivery fleet. Vehicles emit GPS, engine, and fuel telemetry. The system must detect anomalies, predict component failures, optimize routes, and provide dashboards. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch jobs
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive maintenance, Cloud Run for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Predictive maintenance requires ingestion of high-frequency vehicle telemetry, including GPS, engine, and fuel data. Pub/Sub ensures durable, scalable, and fault-tolerant ingestion of telemetry streams from a large fleet.

Dataflow performs real-time feature computation, including rolling averages, anomaly detection, and enrichment with vehicle metadata. Windowed and stateful processing enables detection of anomalies and predictive indicators for component wear.

Bigtable stores operational metrics for low-latency access, enabling fleet managers to query vehicle health in near real time.

BigQuery stores historical telemetry for analytics, model training, and trend analysis. Analysts can identify recurring failures, optimize maintenance schedules, and evaluate operational performance.

Vertex AI hosts predictive maintenance models, forecasting component failures, and recommending proactive interventions. Real-time inference ensures timely alerts for fleet operators.

Cloud Run exposes dashboards and APIs for operational visibility, providing insights on vehicle status, maintenance alerts, and route optimizations. Batch-only or SQL-based solutions cannot deliver the necessary predictive, real-time capabilities.

This architecture ensures scalable, predictive, and real-time fleet maintenance and operational efficiency for global logistics operations. Vertex AI serves as the predictive core, hosting models that analyze historical maintenance records, sensor telemetry, and operational data to identify patterns that precede component failures. Real-time inference gives fleet operators early warning of impending issues, so maintenance can be scheduled proactively, unexpected downtime is reduced, and maintenance resources and costs are used more efficiently.

Cloud Run complements Vertex AI by providing a serverless platform for the dashboards and APIs that deliver operational visibility. Managers see vehicle status, maintenance alerts, and recommended interventions in real time, while autoscaling keeps endpoints responsive even when thousands of vehicles report simultaneously. APIs hosted on Cloud Run also integrate with route-planning systems, ERP platforms, and mobile applications, enabling automated workflows such as adjusting delivery routes based on vehicle health or dispatching maintenance crews.

Compared with batch-only or purely SQL-based solutions, this design removes the latency that prevents timely intervention and adds the ML-driven prediction that SQL analytics alone cannot provide, turning reactive maintenance into proactive management. The result is a fleet platform that reduces downtime, improves operational efficiency, and supports data-driven decision-making across complex global logistics operations.
