Google Cloud Certified – Professional Cloud Architect Exam Dumps and Practice Test Questions, Set 6 (Q101–120)
Question 101:
A global e-commerce platform wants to implement a real-time recommendation engine for product suggestions. It must handle millions of user interactions, track browsing and purchase events, update recommendations instantly, and support machine learning-based personalization. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch scripts
B) Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for historical analytics, Vertex AI for model training and real-time inference, Cloud Run for serving recommendations
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run
Explanation:
Real-time recommendation systems require high-throughput ingestion of user events such as clicks, page views, purchases, and add-to-cart actions. Pub/Sub is ideal for this because it scales globally, supports millions of messages per second, and decouples producers (websites, mobile apps) from consumers.
Dataflow processes the event streams in real time. It performs feature extraction, session aggregation, event enrichment with user profile or product metadata, and sliding-window computations. Dataflow’s support for stateful processing enables computation of running metrics, behavioral patterns, and user embeddings critical for real-time recommendations.
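As a local illustration of the stateful, per-key sliding-window aggregation described above, the sketch below counts events per user inside a time window in plain Python. This is not the Beam API; the class, field names, and window length are illustrative assumptions.

```python
from collections import deque

class SlidingWindowCounter:
    """Counts events per user inside a sliding time window, approximating
    the per-key stateful aggregation a Dataflow pipeline would maintain
    (illustrative only, not the Apache Beam API)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = {}  # user_id -> deque of event timestamps

    def add(self, user_id, timestamp):
        q = self.events.setdefault(user_id, deque())
        q.append(timestamp)
        self._evict(q, timestamp)

    def count(self, user_id, now):
        q = self.events.get(user_id, deque())
        self._evict(q, now)
        return len(q)

    def _evict(self, q, now):
        # Drop timestamps that have aged out of the window.
        while q and q[0] < now - self.window:
            q.popleft()

counter = SlidingWindowCounter(window_seconds=60)
counter.add("u1", 0)
counter.add("u1", 30)
counter.add("u1", 90)   # the t=0 event has aged out of the 60 s window
print(counter.count("u1", 90))  # 2
```

A production pipeline would express the same idea with Beam's `SlidingWindows` and per-key state rather than an in-memory dictionary.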
BigQuery stores historical event data, providing analysts and ML pipelines access to large-scale datasets for feature extraction and model training. Historical insights such as trending products, seasonal behaviors, and product co-purchases are necessary for building accurate recommendation models.
Vertex AI trains machine learning models using BigQuery data and serves real-time inference endpoints. Models can use collaborative filtering, deep learning, or hybrid techniques to generate personalized recommendations. Continuous retraining ensures that the recommendations remain relevant as user behavior evolves.
Cloud Run exposes recommendation APIs to the front-end applications, providing low-latency access for personalized suggestions. Autoscaling ensures that traffic spikes do not impact performance.
Options A, C, and D are not suitable. Cloud SQL with batch processing cannot handle millions of real-time events, Firestore and Cloud Functions may experience latency and scaling issues, and batch ML on Cloud Storage introduces unacceptable delay for personalization.
This architecture ensures high-throughput ingestion, real-time feature computation, predictive modeling, and low-latency recommendation delivery, meeting the requirements of a global e-commerce platform.
Question 102:
A healthcare provider wants to monitor patients’ vital signs from IoT-enabled medical devices. The system must ingest high-frequency telemetry, detect anomalies in real time, store data securely for compliance, support predictive modeling, and notify clinicians immediately. Which architecture should the Cloud Architect recommend?
A) Firebase for device telemetry and Firestore for storage
B) Pub/Sub with CMEK for secure ingestion, Dataflow with VPC-SC for real-time processing, Cloud Storage with CMEK for raw data, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Cloud SQL with cron jobs for analysis
D) Compute Engine VMs storing raw telemetry with batch ML
Answer: B) Pub/Sub + Dataflow + Cloud Storage + BigQuery + Vertex AI + Looker
Explanation:
Remote patient monitoring involves continuous ingestion of telemetry from multiple medical devices such as heart monitors, oxygen sensors, and blood pressure monitors. These streams can generate hundreds of thousands of events per second for large hospitals or regional health networks. Pub/Sub ensures durable, high-throughput ingestion and supports CMEK for encryption at rest, which is essential for HIPAA compliance. Pub/Sub also decouples devices from downstream processing systems, allowing devices to continue sending data even if processing pipelines temporarily scale up or down.
Dataflow processes streams in real time. It detects anomalies such as sudden drops in oxygen levels, irregular heart rates, or unusual blood pressure patterns. Dataflow’s stateful processing enables rolling metrics, aggregation, and complex event detection. VPC Service Controls ensure that patient data cannot leave the secured network boundary, which is crucial for compliance.
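The rolling-baseline anomaly check described above can be sketched locally as follows: flag a reading that falls sharply below the rolling mean of the previous readings. The window size, drop threshold, and SpO2 example values are assumptions for illustration, not clinical parameters.

```python
from collections import deque

def detect_spo2_drop(readings, window=5, drop_threshold=4.0):
    """Flag indices where the latest SpO2 reading falls more than
    `drop_threshold` points below the rolling mean of the previous
    `window` readings; a simplified stand-in for the per-patient
    stateful anomaly detection a Dataflow pipeline would run."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if baseline - value > drop_threshold:
                alerts.append(i)
        recent.append(value)
    return alerts

stream = [98, 97, 98, 99, 98, 97, 92, 98]
print(detect_spo2_drop(stream))  # [6]
```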
Cloud Storage with CMEK stores raw telemetry for auditing, archival, and historical analysis. BigQuery provides structured storage for analytics, allowing clinicians and administrators to query trends across patients, detect long-term patterns, and extract features for ML models. Partitioning and clustering optimize performance for large datasets.
Vertex AI hosts predictive models to forecast patient deterioration or detect early signs of medical conditions. Models can use LSTM networks, classical time-series methods, or classification algorithms. Real-time inference allows instant alerts for critical patient conditions.

Looker dashboards provide clinicians with actionable insights, highlighting patients at risk and allowing drill-down analysis of trends. Combined with real-time alerting from Dataflow, clinicians can proactively intervene.
Other options fail to meet scalability, security, or real-time requirements. Firebase and Firestore are not designed for high-frequency medical telemetry. Cloud SQL and batch processing cannot provide instant alerts. Compute Engine batch processing lacks managed scalability and compliance features.
This architecture provides a secure, compliant, scalable, real-time patient monitoring system with predictive analytics capabilities.
Question 103:
A logistics company wants to implement a real-time fleet tracking and predictive maintenance system. Vehicles continuously emit GPS coordinates, fuel levels, engine telemetry, and vibration metrics. The system must detect anomalies, predict failures, optimize routes, and provide operational dashboards. Which architecture should the Cloud Architect recommend?
A) Cloud SQL and Compute Engine batch jobs
B) Pub/Sub for telemetry ingestion, Dataflow for processing and enrichment, Bigtable for low-latency operational queries, BigQuery for analytics, Vertex AI for predictive maintenance, Cloud Run for dashboards and APIs
C) Firestore with Cloud Functions
D) Cloud Storage for logs and nightly ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run
Explanation:
Fleet tracking generates high-frequency telemetry that must be ingested, processed, and analyzed in real time. Pub/Sub provides scalable and durable ingestion of GPS coordinates, engine metrics, fuel usage, and vibration data. Decoupling ingestion from downstream processing ensures that temporary spikes in vehicle telemetry do not overwhelm the system.
Dataflow processes incoming streams to perform aggregation, feature computation, and anomaly detection. For example, rolling averages of fuel consumption, abnormal vibration patterns indicating mechanical issues, and deviations from planned routes are computed in real time. Dataflow allows enrichment with metadata such as vehicle type, driver assignment, and maintenance history.
Bigtable stores operational data for low-latency queries. Dashboards showing vehicle locations, route deviations, or maintenance alerts require millisecond response times, which Bigtable provides. Its wide-column structure accommodates multiple telemetry metrics per vehicle efficiently.
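One common Bigtable key-design pattern for this workload is a reversed-timestamp row key, so the newest reading per vehicle sorts first in a lexicographic scan. The sketch below shows the idea; the key layout, ID format, and timestamp ceiling are illustrative assumptions, not a prescribed schema.

```python
MAX_TS = 10**13  # assumed ceiling for millisecond epoch timestamps

def row_key(vehicle_id, ts_millis):
    """Build a Bigtable-style row key 'vehicle#reversed_timestamp' so the
    newest telemetry row for each vehicle sorts first lexicographically
    (illustrative key design; the layout is an assumption)."""
    reversed_ts = MAX_TS - ts_millis
    return f"{vehicle_id}#{reversed_ts:013d}"

# Two readings five seconds apart: the newer one sorts first.
keys = sorted(row_key("truck-042", ts) for ts in (1700000000000, 1700000005000))
print(keys[0])  # key for the newer reading
```

Prefixing keys with the vehicle ID also keeps each vehicle's rows contiguous, which serves the per-vehicle dashboard lookups described above.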
BigQuery stores historical data for analytics and model training. Analysts can identify trends in vehicle performance, route efficiency, or maintenance history. BigQuery supports large-scale aggregation queries for predictive feature extraction.
Vertex AI hosts predictive maintenance models that analyze telemetry trends to forecast failures before they occur. Time-series models or deep learning approaches can predict engine failures, battery degradation, or component wear. Real-time inference ensures proactive maintenance scheduling.
Cloud Run serves APIs and dashboards for operations managers. It allows dynamic scaling to handle fluctuating numbers of queries and vehicle telemetry streams. Alternative architectures using Cloud SQL, Firestore, or batch-only solutions cannot deliver the real-time, low-latency, predictive, and scalable requirements necessary for fleet operations.
This architecture enables real-time tracking, predictive maintenance, anomaly detection, and operational visibility.
Question 104:
A global ride-hailing company wants to implement a dynamic pricing engine that adjusts fares based on real-time demand, supply, traffic conditions, and weather. The system must scale to millions of concurrent requests and provide pricing within milliseconds. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch scripts
B) Pub/Sub for event ingestion, Dataflow for real-time feature computation, Bigtable for operational metrics, Vertex AI for pricing predictions, Cloud Run for fare calculation APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run
Explanation:
Dynamic pricing engines require real-time processing of ride requests, driver locations, traffic, and weather conditions. Pub/Sub ingests all these high-frequency event streams reliably and at scale, decoupling producers from downstream consumers.
Dataflow computes real-time features such as demand-supply ratios per region, driver availability, average ETA, and congestion metrics. Sliding-window computations and stateful processing allow the system to maintain live metrics necessary for accurate fare calculation.
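The demand-supply ratio mentioned above can be illustrated with a small local computation over one window of events. The event shapes and region names are assumptions made for the sketch.

```python
from collections import Counter

def demand_supply_ratios(ride_requests, available_drivers):
    """Compute a per-region demand/supply ratio from the latest window of
    ride-request and driver-availability events; a simplified version of
    the live metric the Dataflow stage would maintain."""
    demand = Counter(r["region"] for r in ride_requests)
    supply = Counter(d["region"] for d in available_drivers)
    regions = set(demand) | set(supply)
    # Guard against divide-by-zero when a region has demand but no drivers.
    return {r: demand[r] / supply[r] if supply[r] else float("inf")
            for r in regions}

requests = [{"region": "downtown"}] * 6 + [{"region": "airport"}] * 2
drivers = [{"region": "downtown"}] * 3 + [{"region": "airport"}] * 4
print(demand_supply_ratios(requests, drivers))
```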
Bigtable stores operational metrics for low-latency lookups during pricing calculations. Real-time data is required to calculate fares within milliseconds for each ride request. Its high throughput and low latency make it suitable for dynamic pricing scenarios with millions of concurrent users.
Vertex AI hosts predictive models that determine optimal fare adjustments based on historical patterns, current demand, traffic, and weather data. ML models can use regression, reinforcement learning, or time-series forecasting to produce pricing decisions in real time.
Cloud Run exposes fare calculation APIs to mobile apps. Autoscaling ensures low latency even during traffic spikes. Batch processing or SQL-based solutions cannot meet the strict latency and scaling requirements of a real-time dynamic pricing engine.
This architecture supports high-throughput ingestion, real-time computation, predictive modeling, and sub-second fare delivery, meeting the operational needs of a global ride-hailing platform.
Question 105:
A cybersecurity company wants to implement a real-time threat detection platform that ingests firewall logs, VPN logs, endpoint telemetry, and authentication events. The system must detect anomalies, integrate machine learning-based threat scoring, and alert security operations teams. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for stream processing, Bigtable for low-latency threat lookups, BigQuery for analytics, Vertex AI for ML-based threat scoring, Cloud Run for alerting APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly ML jobs
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run
Explanation:
Real-time cybersecurity monitoring requires ingestion of massive streams of security events. Pub/Sub ensures scalable, durable ingestion from firewalls, VPNs, endpoint telemetry, and authentication systems. It supports high throughput and allows decoupling of producers from processing pipelines.
Dataflow performs stream processing, including feature extraction, aggregation, and real-time anomaly detection. Sliding windows, stateful processing, and event correlation enable detection of brute-force attacks, lateral movement, and suspicious activity patterns.
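The brute-force correlation described above can be sketched as a per-(user, IP) failure counter over a sliding window. The event tuple shape, window length, and failure threshold are assumptions for illustration.

```python
from collections import defaultdict, deque

def brute_force_alerts(events, max_failures=3, window=60):
    """Correlate authentication events per (user, source_ip) and flag keys
    exceeding `max_failures` failed logins inside a sliding window; a
    local sketch of the event correlation a Dataflow stage would perform."""
    failures = defaultdict(deque)
    alerts = set()
    for ts, user, ip, succeeded in events:
        if succeeded:
            continue
        q = failures[(user, ip)]
        q.append(ts)
        while q and q[0] <= ts - window:
            q.popleft()
        if len(q) > max_failures:
            alerts.add((user, ip))
    return alerts

events = [(t, "alice", "10.0.0.9", False) for t in (0, 10, 20, 30)]
print(brute_force_alerts(events))  # {('alice', '10.0.0.9')}
```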
Bigtable stores operational threat indicators such as blacklists, user risk scores, and threat signatures, enabling millisecond lookup for real-time alerting. BigQuery stores historical security logs for threat hunting, model training, and compliance reporting.
Vertex AI hosts predictive models for threat detection, scoring events based on the probability of malicious activity. Models may include anomaly detection, clustering, or supervised classification approaches. Real-time inference allows immediate response to emerging threats.
Cloud Run exposes alert APIs to integrate with Security Information and Event Management (SIEM) systems or security operations dashboards. Alternative solutions fail to meet real-time, scalable, and predictive requirements for modern threat detection.
This architecture provides continuous ingestion, low-latency operational lookups, predictive threat scoring, and alerting for global security operations.
Question 106:
A global airline wants to implement a real-time flight monitoring system. Aircraft send telemetry including altitude, speed, engine metrics, and GPS location. The system must detect anomalies, provide alerts to operations teams, store data securely for compliance, and support predictive maintenance. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch jobs
B) Pub/Sub for telemetry ingestion, Dataflow for real-time processing, Cloud Storage for raw data, BigQuery for analytics, Vertex AI for predictive maintenance, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker
Explanation:
Flight telemetry generates high-frequency data streams from hundreds of aircraft globally. Each aircraft emits multiple data points per second, including altitude, airspeed, engine temperature, vibration, and GPS location. Pub/Sub provides reliable, scalable ingestion of these massive telemetry streams, ensuring that even during peak traffic or communication spikes, no data is lost. Pub/Sub decouples telemetry producers from downstream processing systems, allowing independent scaling and fault tolerance.
Dataflow performs real-time stream processing, including anomaly detection, feature computation, and enrichment with flight metadata such as aircraft type, route, and weather conditions. Rolling averages, threshold-based alerts, and statistical anomaly detection enable immediate identification of irregular engine behavior, deviations from planned routes, or sudden altitude changes. Dataflow’s stateful processing and windowing capabilities allow detection of both short-term spikes and long-term trends.
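One simple form of the statistical anomaly detection mentioned above is a z-score test against the series baseline. The metric name, sample values, and threshold are assumptions; a production pipeline would compute the baseline over a rolling window rather than the whole series.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag readings whose z-score against the series mean exceeds the
    threshold; a simplified version of the statistical anomaly check
    described above (threshold and baseline choice are illustrative)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical exhaust-gas temperatures with one spike at index 5.
egt = [640, 642, 641, 639, 640, 720, 641, 640]
print(zscore_anomalies(egt))  # [5]
```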
Cloud Storage stores raw telemetry for archival, audit, and regulatory compliance. Aviation regulators often require long-term storage of operational flight data. Cloud Storage with CMEK ensures data is encrypted and securely managed according to airline compliance requirements.
BigQuery stores structured datasets derived from telemetry for analytics, enabling historical analysis of flight performance, anomaly trends, and operational efficiency. Analysts can query data for fleet performance, engine health trends, and incident investigations. Partitioning and clustering optimize query performance for large-scale datasets.
Vertex AI hosts predictive maintenance models trained on historical telemetry and maintenance records. Predictive models analyze patterns to forecast potential engine or component failures, enabling proactive maintenance scheduling. Real-time inference ensures that critical alerts are delivered before failures occur.
Looker dashboards provide operations teams with actionable insights, including live flight metrics, predictive maintenance alerts, and anomaly notifications. Dashboards allow drill-down for individual aircraft or fleet-wide analysis.
Alternatives A, C, and D cannot meet real-time requirements or scale. Cloud SQL with batch jobs introduces unacceptable latency, Firestore with Cloud Functions cannot handle high-frequency telemetry efficiently, and batch-only Cloud Storage pipelines fail to provide real-time operational insights.
This architecture ensures secure, compliant, real-time flight monitoring, predictive maintenance, and operational visibility.
Question 107:
A multinational retail chain wants to implement a real-time inventory management system across stores and warehouses worldwide. Inventory updates must reflect purchases, returns, and transfers instantly. The system must predict stockouts and optimize replenishment using machine learning. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with regional replication
B) Pub/Sub for event ingestion, Dataflow for real-time processing, Spanner for global inventory, BigQuery for analytics, Vertex AI for stockout prediction
C) Firestore multi-region with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI
Explanation:
Global inventory management requires strong consistency across multiple regions to ensure that stock levels reflect all transactions in real time. Cloud Spanner provides globally distributed, strongly consistent transactional storage. It ensures that no two stores sell the same unit simultaneously and supports horizontal scaling to handle high transaction volumes across regions.
Pub/Sub ingests events from point-of-sale systems, online orders, warehouse management systems, and transfer requests. High-throughput ingestion ensures the system can handle peak shopping periods, flash sales, or seasonal spikes without dropping events. Pub/Sub decouples event producers from processing pipelines.
Dataflow processes these streams in real time, performing transformations such as inventory updates, stock aggregations, anomaly detection, and enrichment with product metadata. Windowed operations allow rolling calculations to detect rapid sales trends, potential stockouts, or unusual activity.
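The inventory-update transform described above can be sketched locally: apply sale, return, and transfer deltas per SKU and flag items that fall to a reorder point. The event field names and the reorder threshold are assumptions for the sketch.

```python
def apply_inventory_events(stock, events, reorder_point=5):
    """Apply sale/return/transfer deltas to per-SKU stock levels and return
    SKUs that fell to or below the reorder point; a local sketch of the
    real-time inventory transform (field names are assumptions)."""
    low = set()
    for event in events:
        sku, delta = event["sku"], event["delta"]
        stock[sku] = stock.get(sku, 0) + delta
        if stock[sku] <= reorder_point:
            low.add(sku)
    return low

stock = {"SKU-1": 10, "SKU-2": 8}
events = [
    {"sku": "SKU-1", "delta": -3},  # sale of 3 units
    {"sku": "SKU-1", "delta": -4},  # sale of 4 units, 3 left
    {"sku": "SKU-2", "delta": +2},  # return of 2 units
]
print(apply_inventory_events(stock, events))  # {'SKU-1'}
```

In the recommended architecture the authoritative decrement happens inside a Spanner read-write transaction, so that concurrent sales of the last unit in two regions cannot both succeed.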
BigQuery stores historical inventory data for analytics, reporting, and trend analysis. Analysts can measure product demand patterns, regional sales variations, and supply chain efficiency. Historical data is also essential for training machine learning models.
Vertex AI provides predictive models for stockout detection and replenishment optimization. Time-series and regression models analyze historical sales, inventory levels, and external factors like promotions or seasonality to forecast stock requirements. Real-time predictions enable proactive inventory replenishment to prevent lost sales.
Alternative options fail to meet real-time, consistent, and predictive requirements. Cloud SQL cannot provide strong global consistency at scale, Firestore with Cloud Functions cannot support transactional updates across thousands of stores, and batch Cloud Storage processing introduces unacceptable latency for inventory-critical systems.
This architecture ensures consistent, real-time global inventory management, predictive stockout alerts, and optimized replenishment.
Question 108:
A global financial institution wants to implement a real-time fraud detection platform for credit card transactions. The system must ingest millions of transactions per second, compute risk scores, correlate activity across accounts, and alert security teams within milliseconds. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch ML
B) Pub/Sub for ingestion, Dataflow for real-time feature computation, Bigtable for operational lookups, BigQuery for analytics, Vertex AI for ML-based scoring, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run
Explanation:
Fraud detection requires ingestion of massive, high-velocity transaction streams. Pub/Sub supports millions of messages per second, providing reliable, decoupled ingestion from point-of-sale systems, mobile apps, ATMs, and online transactions. It ensures that no transaction is lost and supports regional scaling.
Dataflow performs real-time transformations and feature computation, including aggregations like transaction velocity, geolocation changes, device fingerprinting, and account history. Windowed computations enable detection of unusual patterns such as structuring, rapid transfers, or deviations from normal behavior.
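Two of the features named above, transaction velocity and geolocation change, can be sketched with an in-memory pass over a transaction stream. The tuple shape, window length, and country field are assumptions for illustration.

```python
from collections import defaultdict, deque

def fraud_features(transactions, window=300):
    """Compute two common fraud-scoring features per card: transaction
    velocity (count in the last `window` seconds) and whether the country
    changed since the previous transaction. A simplified, in-memory
    stand-in for the Dataflow feature-computation stage."""
    history = defaultdict(deque)  # card -> recent timestamps
    last_country = {}
    features = []
    for ts, card, country in transactions:
        q = history[card]
        while q and q[0] <= ts - window:
            q.popleft()
        q.append(ts)
        features.append({
            "card": card,
            "velocity": len(q),
            "country_changed": last_country.get(card, country) != country,
        })
        last_country[card] = country
    return features

txns = [(0, "c1", "US"), (60, "c1", "US"), (90, "c1", "FR")]
print(fraud_features(txns)[-1])  # velocity 3, country_changed True
```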
Bigtable stores operational lookups such as blacklists, suspicious accounts, and risk indicators. Low-latency access is crucial for scoring transactions in milliseconds to prevent fraudulent operations from completing.
BigQuery stores historical transaction data, enabling analysts to investigate trends, generate features for ML models, and conduct regulatory reporting. Large-scale queries on transaction histories provide context for anomaly detection.
Vertex AI hosts predictive models that score transactions in real time. Models can include anomaly detection, supervised classification, and ensemble approaches. Real-time inference ensures immediate scoring and alerting.
Cloud Run exposes APIs for alerting and integration with security operations workflows. The alternatives cannot meet the scale or real-time latency requirements: batch SQL or storage-based solutions introduce delays, while Firestore lacks the throughput needed for high-frequency transactions.
This architecture delivers scalable, low-latency, predictive fraud detection with real-time operational alerting.
Question 109:
A global telecommunications provider wants to implement a network anomaly detection platform. Millions of events, including device telemetry, throughput metrics, connection logs, and authentication events, must be ingested, processed, and analyzed in real time to detect outages, congestion, and security threats. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for stream processing, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive detection, Cloud Monitoring for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Monitoring
Explanation:
Network monitoring requires high-throughput ingestion of telemetry from routers, switches, endpoints, and authentication systems. Pub/Sub ensures reliable ingestion, decouples producers from consumers, and scales globally to handle millions of events per second.
Dataflow performs real-time aggregation, anomaly detection, and feature computation. Stateful processing enables detection of patterns such as spikes in traffic, device failures, or unusual login patterns. Sliding windows capture both short-term spikes and longer-term trends.
Bigtable stores operational metrics for low-latency access, allowing network engineers to query current device performance, throughput, or connectivity status.
BigQuery stores historical telemetry for trend analysis, root-cause investigations, and predictive modeling. Vertex AI analyzes historical and streaming data to detect anomalies and predict potential outages or security threats.
Cloud Monitoring integrates alerts, dashboards, and automated incident workflows. Batch or SQL-only architectures cannot provide real-time, scalable anomaly detection. Firestore and Cloud Functions cannot handle the throughput or complex correlation needs.
This architecture ensures real-time, scalable network monitoring, anomaly detection, and predictive insights.
Question 110:
A global logistics company wants to implement a predictive maintenance system for its delivery fleet. Vehicles continuously emit engine telemetry, GPS coordinates, fuel levels, and vibration metrics. The system must detect anomalies, predict failures, and notify maintenance teams proactively. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch ML
B) Pub/Sub for ingestion, Dataflow for feature computation, Cloud Storage for raw telemetry, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with offline ML
Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker
Explanation:
Predictive maintenance requires continuous ingestion of telemetry from a large fleet of vehicles. Pub/Sub handles high-throughput streams, ensuring no data is lost during spikes in vehicle activity.
Dataflow computes real-time features such as rolling averages of engine temperature, vibration analysis, fuel consumption, and anomaly detection. These computed features feed predictive models.
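One common way to implement the rolling-average check described above is an exponentially weighted moving average (EWMA) with a deviation threshold. The smoothing factor, threshold, and temperature values below are illustrative assumptions.

```python
def ewma_alerts(values, alpha=0.3, threshold=15.0):
    """Track an exponentially weighted moving average of a telemetry metric
    and flag readings deviating from it by more than `threshold`; one
    possible implementation of the rolling-average anomaly check
    (parameters are illustrative)."""
    avg = None
    alerts = []
    for i, v in enumerate(values):
        if avg is not None and abs(v - avg) > threshold:
            alerts.append(i)
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
    return alerts

temps = [90, 91, 90, 92, 91, 120, 92]  # engine temps with a spike at index 5
print(ewma_alerts(temps))  # [5]
```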
Cloud Storage stores raw telemetry for auditing, historical analysis, and retraining ML models. BigQuery stores structured data for analytics, enabling analysis of trends in vehicle performance and historical failures.
Vertex AI hosts predictive models that analyze telemetry streams to forecast potential engine failures or maintenance needs. Real-time inference ensures alerts are delivered proactively to maintenance teams.
Looker dashboards visualize anomalies, vehicle health trends, and predictive maintenance insights, allowing fleet managers to prioritize repairs and reduce downtime.
Alternatives like Cloud SQL, Firestore, or batch-only storage fail to meet real-time ingestion, scale, or predictive analysis requirements.
This architecture provides scalable, real-time telemetry ingestion, predictive maintenance, and operational visibility for fleet management.
Question 111:
A global e-commerce company wants to implement a real-time shopping cart analytics system. The system must track millions of users adding and removing items, compute conversion rates, detect abandoned carts, and feed machine learning models for personalized promotions. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch jobs
B) Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for analytics, Vertex AI for ML scoring, Cloud Run for delivering promotions
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run
Explanation:
Real-time shopping cart analytics involves continuous ingestion of user interaction events such as adding items, removing items, viewing products, and checking out. Pub/Sub is the ideal choice for ingesting millions of events per second with guaranteed delivery and global scaling. It decouples the e-commerce front end from the analytics pipeline, enabling flexible scaling and fault tolerance.
Dataflow processes the event streams in real time, performing transformations like computing session-level metrics, aggregation of items per cart, tracking user behavior patterns, and detecting abandoned carts. Sliding-window processing allows the system to detect short-term patterns, such as users adding items but not purchasing within minutes. Dataflow also enriches events with user profile information, such as loyalty tier or previous purchase history.
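The abandoned-cart detection described above can be sketched as a session-timeout check over cart events. The event shape, action names, and 30-minute timeout are assumptions for the sketch.

```python
def abandoned_carts(events, now, timeout=1800):
    """Return users whose cart still holds items, who never checked out,
    and whose last activity is older than `timeout` seconds; a simplified
    version of the windowed abandonment detection described above."""
    carts = {}       # user -> item count
    last_seen = {}
    checked_out = set()
    for ts, user, action in events:
        last_seen[user] = ts
        if action == "add":
            carts[user] = carts.get(user, 0) + 1
        elif action == "remove":
            carts[user] = max(0, carts.get(user, 0) - 1)
        elif action == "checkout":
            checked_out.add(user)
            carts[user] = 0
    return {u for u, n in carts.items()
            if n > 0 and u not in checked_out and now - last_seen[u] > timeout}

events = [
    (0, "u1", "add"), (120, "u1", "add"),
    (60, "u2", "add"), (300, "u2", "checkout"),
]
print(abandoned_carts(events, now=4000))  # {'u1'}
```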
BigQuery stores historical data, enabling analytics at scale. Analysts can compute long-term conversion rates, segment users based on purchasing patterns, and extract features for machine learning models. BigQuery’s partitioning and clustering optimize performance for large datasets spanning multiple months or years.
Vertex AI hosts ML models that generate personalized promotions or recommendations. Models can predict which users are likely to abandon carts, recommend additional items to increase conversion, or offer discounts to retain users. Real-time inference ensures that promotions can be applied instantly while the user is actively shopping.
Cloud Run exposes APIs that deliver personalized promotions to the front-end applications. Autoscaling ensures that millions of concurrent requests can be served without latency issues.
Alternative architectures are insufficient. Cloud SQL with batch jobs cannot handle high-velocity event streams, Firestore with Cloud Functions may experience latency and scaling issues, and batch-only Cloud Storage processing introduces unacceptable delays for real-time promotions.
This architecture provides a scalable, real-time shopping cart analytics system with predictive modeling and personalized promotion delivery, improving conversions and revenue.
Question 112:
A global bank wants to implement a real-time anti-money-laundering (AML) system. The platform must ingest millions of transactions per second, compute risk scores, detect unusual activity across accounts, and alert compliance teams immediately. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch ML
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for operational lookups, BigQuery for analytics, Vertex AI for ML scoring, Cloud Run for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run
Explanation:
AML systems require continuous ingestion of high-volume transaction data. Pub/Sub enables scalable, global ingestion, ensuring no transactions are lost. It decouples sources such as ATMs, online banking, and branch transactions from downstream processing.
Dataflow processes streams in real time, computing features such as transaction velocity, geolocation deviations, account-to-account correlations, and aggregated balances. Stateful and windowed processing allows detection of structured transactions designed to evade detection.
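A toy version of the structuring pattern mentioned above counts deposits falling just under a reporting limit within a rolling window. The limit, band, count, and window values are illustrative assumptions, not regulatory thresholds.

```python
from collections import defaultdict, deque

def structuring_alerts(deposits, limit=10_000, band=0.9,
                       min_count=3, window=86_400):
    """Flag accounts making `min_count` or more deposits just under a
    reporting limit (between band*limit and limit) inside a rolling
    window; a toy version of the structuring detection the windowed
    Dataflow stage performs."""
    near_limit = defaultdict(deque)
    flagged = set()
    for ts, account, amount in deposits:
        if band * limit <= amount < limit:
            q = near_limit[account]
            q.append(ts)
            while q and q[0] <= ts - window:
                q.popleft()
            if len(q) >= min_count:
                flagged.add(account)
    return flagged

deposits = [(0, "acct-7", 9_500), (3_600, "acct-7", 9_800),
            (7_200, "acct-7", 9_900), (7_300, "acct-8", 4_000)]
print(structuring_alerts(deposits))  # {'acct-7'}
```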
Bigtable stores operational lookup tables such as blacklists, high-risk accounts, and historical risk scores. Low-latency access ensures immediate computation of risk scores for each transaction.
BigQuery stores historical transaction data, supporting analytics, audits, and feature extraction for ML models. Analysts can query trends, perform risk segmentation, and validate models.
Vertex AI hosts predictive ML models that assess transaction risk, using supervised and unsupervised approaches. Real-time inference generates risk scores within milliseconds, enabling timely alerts.
Cloud Run exposes APIs for alerting and integration with compliance workflows. Alternative options fail due to insufficient throughput, latency, or predictive capabilities.
This architecture ensures high-throughput ingestion, real-time processing, predictive scoring, and immediate AML alerts, meeting regulatory requirements.
Question 113:
A global ride-hailing company wants to implement a real-time driver matching system. It must handle millions of ride requests, consider driver availability, location, traffic, and predicted demand, and match drivers to riders within milliseconds. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch scripts
B) Pub/Sub for event ingestion, Dataflow for processing, Bigtable for low-latency location queries, Vertex AI for predictive demand, Cloud Run for matching APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run
Explanation:
Real-time driver matching requires ingestion of millions of ride requests and driver telemetry events. Pub/Sub provides high-throughput, reliable ingestion, decoupling producers from downstream processing, and ensuring scalability during peak demand.
Dataflow computes features in real time, such as driver availability, ETA calculations, traffic-adjusted travel times, and region-level demand-supply ratios. Stateful processing enables dynamic allocation of rides based on current and predicted demand.
Bigtable stores operational data for low-latency location queries, ensuring that driver matching decisions occur in milliseconds. Its wide-column design supports fast read/write access for each driver and rider entity.
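The core of the low-latency matching decision can be sketched as a nearest-available-driver lookup. This is a simplification — a production matcher would also weigh ETA, traffic, and predicted demand — and the driver IDs and coordinates are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_driver(rider, drivers):
    """Pick the closest available driver. drivers: id -> (lat, lon, available)."""
    candidates = [(haversine_km(*rider, lat, lon), did)
                  for did, (lat, lon, ok) in drivers.items() if ok]
    return min(candidates)[1] if candidates else None

drivers = {"d1": (37.78, -122.41, True),
           "d2": (37.77, -122.42, True),
           "d3": (37.70, -122.40, False)}
print(match_driver((37.7749, -122.4194), drivers))
```

In practice the candidate set would come from a geo-indexed Bigtable row scan over nearby cells rather than a full scan of all drivers.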
Vertex AI models predict demand surges, enabling proactive driver positioning to reduce wait times. Real-time inference supports dynamic allocation and surge pricing adjustments.
Cloud Run exposes APIs for front-end applications to request matches and accept driver assignments. Autoscaling ensures responsiveness under fluctuating demand. Batch processing or SQL-only solutions fail to deliver the necessary low latency and predictive capabilities.
This architecture provides scalable, real-time, predictive, and low-latency driver matching for global ride-hailing operations.
Question 114:
A telecommunications company wants to implement a real-time network traffic anomaly detection system. It must ingest millions of events per second from devices, routers, and endpoints, detect anomalies, predict outages, and provide alerts to engineers. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch processing
B) Pub/Sub for ingestion, Dataflow for stream processing, Bigtable for operational metrics, BigQuery for analytics, Vertex AI for predictive detection, Cloud Monitoring for alerts
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Monitoring
Explanation:
Network anomaly detection requires ingestion of high-frequency telemetry from routers, switches, endpoints, and authentication systems. Pub/Sub supports global scale, reliability, and durability, allowing ingestion of millions of events per second.
Dataflow processes these streams in real time, performing aggregations, feature computations, anomaly detection, and event enrichment. Stateful processing allows correlation across time windows to detect unusual behavior patterns.
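The windowed anomaly detection described above can be illustrated with a simple z-score check over a sliding window — a toy stand-in for the stateful processing Dataflow would run per device; the window size, warm-up count, and threshold are assumed values.

```python
from collections import deque
import statistics

class WindowAnomalyDetector:
    """Flags a metric value as anomalous when it deviates more than
    `threshold` standard deviations from the recent window mean."""
    def __init__(self, window=60, threshold=3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.values) >= 10:  # wait for a baseline before flagging
            mean = statistics.fmean(self.values)
            stdev = statistics.pstdev(self.values) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.values.append(value)
        return anomalous

det = WindowAnomalyDetector(window=30, threshold=3.0)
readings = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 250]
flags = [det.observe(v) for v in readings]
print(flags[-1])  # the 250 spike is flagged
```

In Dataflow this logic would live inside a keyed, windowed transform so each router or endpoint keeps its own baseline.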
Bigtable stores operational metrics for low-latency queries, enabling real-time dashboards and rapid troubleshooting. Engineers can query device status, throughput, and latency instantly.
BigQuery stores historical data for trend analysis, root-cause investigations, and ML model training. Vertex AI predicts network anomalies, congestion, and potential outages using historical and real-time data.
Cloud Monitoring integrates alerts, dashboards, and automated workflows, notifying engineers immediately. Batch-only or SQL-based architectures cannot meet the latency, scale, and predictive requirements of modern network monitoring.
This architecture ensures real-time, scalable, and predictive network monitoring with actionable insights.
Question 115:
A logistics company wants to implement a real-time fleet optimization and predictive maintenance system. Vehicles continuously emit GPS, fuel, and engine telemetry. The system must detect anomalies, predict failures, optimize delivery routes, and provide dashboards for operations. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch jobs
B) Pub/Sub for ingestion, Dataflow for processing, Bigtable for low-latency queries, BigQuery for analytics, Vertex AI for predictive maintenance, Cloud Run for dashboards and APIs
C) Firestore with Cloud Functions
D) Cloud Storage only with batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run
Explanation:
Fleet telemetry generates high-frequency streams, including GPS coordinates, fuel levels, engine temperature, and vibration metrics. Pub/Sub provides high-throughput, reliable ingestion while decoupling vehicle sensors from downstream pipelines.
Dataflow performs real-time transformations, feature computation, and anomaly detection. Rolling averages, threshold checks, and enriched metadata enable detection of maintenance needs and route deviations.
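A rolling-average threshold check of the kind described can be sketched as follows; the window size and the 110-degree limit are illustrative, not real maintenance thresholds.

```python
from collections import deque

def maintenance_flags(temps, window=5, limit=110.0):
    """Flag a vehicle for inspection when the rolling mean of engine
    temperature over `window` readings exceeds `limit` (illustrative values)."""
    buf, flags = deque(maxlen=window), []
    for t in temps:
        buf.append(t)
        flags.append(sum(buf) / len(buf) > limit)
    return flags

temps = [95, 98, 102, 108, 115, 118, 121, 119]
print(maintenance_flags(temps))
```

Using the rolling mean rather than a single reading avoids flagging one-off sensor spikes while still catching a sustained rise.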
Bigtable stores operational data for low-latency queries, enabling real-time dashboards and rapid decision-making. BigQuery stores historical telemetry for analysis, trend detection, and ML feature extraction.
Vertex AI hosts predictive maintenance models, forecasting component failures or required maintenance before failures occur. Cloud Run provides APIs and dashboards for operations managers to view fleet status and receive proactive alerts.
Alternatives cannot provide real-time insights, predictive analytics, or scale for a global fleet. This architecture ensures proactive fleet management and operational efficiency.
Question 116:
A global airline wants to implement a predictive maintenance system for its aircraft fleet. Each aircraft streams telemetry, including engine temperature, vibration, fuel levels, and GPS location. The system must detect anomalies, forecast component failures, and notify maintenance crews proactively. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch ML
B) Pub/Sub for telemetry ingestion, Dataflow for real-time processing, Cloud Storage for raw telemetry, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage only with offline ML
Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker
Explanation:
Predictive maintenance for aircraft involves ingesting high-frequency telemetry from engines, sensors, and GPS systems. Each aircraft emits multiple metrics per second, producing a massive global data stream. Pub/Sub provides durable, scalable ingestion and decouples telemetry sources from downstream analytics pipelines, ensuring high availability during peak traffic or unexpected spikes.
Dataflow performs real-time stream processing, computing rolling averages, anomalies, and derived features such as vibration frequency analysis and temperature trends. Stateful and windowed processing allows detection of both short-term anomalies and longer-term trends that may indicate early signs of component degradation. Dataflow also enriches events with aircraft metadata, including model, maintenance history, and flight route for contextual analysis.
Cloud Storage stores raw telemetry for archival, audit, and regulatory compliance purposes. Aviation regulators require secure, long-term storage of flight and engine data. CMEK encryption ensures compliance with security requirements.
BigQuery stores structured telemetry data for analytics, supporting queries like fleet-wide performance trends, incident investigations, and historical anomaly analysis. Partitioning and clustering enable fast querying of terabytes of historical telemetry data.
Vertex AI hosts predictive models that analyze telemetry patterns to forecast potential failures, such as engine component wear or fuel system anomalies. Models can use time-series forecasting, regression, or deep learning approaches. Real-time inference enables alerts to be sent immediately to maintenance crews, reducing downtime and avoiding costly failures.
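The idea of forecasting a failure from a telemetry trend can be shown with a toy least-squares extrapolation — a deliberately simple stand-in for the time-series models Vertex AI would host; the metric, limit, and hourly cadence are assumptions.

```python
def hours_until_limit(readings, limit):
    """Fit a least-squares line to hourly readings and estimate how many
    hours remain until the trend crosses `limit`. Toy model only."""
    n = len(readings)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward trend, no predicted crossing
    intercept = y_mean - slope * x_mean
    return max(0.0, (limit - intercept) / slope - (n - 1))

# Vibration amplitude trending upward over the last 6 hours
print(hours_until_limit([1.0, 1.2, 1.4, 1.6, 1.8, 2.0], limit=3.0))
```

A real predictive-maintenance model would account for seasonality, flight phase, and cross-sensor correlations, but the output shape — a time-to-threshold estimate that drives an alert — is the same.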
Looker dashboards provide operations and maintenance teams with visualizations of fleet health, anomalies, and predictive alerts. Teams can drill down to individual aircraft or component metrics, ensuring timely interventions.
Alternative architectures fail to meet scale, latency, or predictive requirements. Cloud SQL cannot handle high-frequency ingestion and analysis, Firestore with Cloud Functions lacks the throughput and low-latency capabilities, and batch-only Cloud Storage processing introduces unacceptable delays for predictive maintenance.
This architecture ensures secure, real-time, and predictive monitoring of aircraft telemetry, enabling proactive maintenance and operational safety.
Question 117:
A multinational retail chain wants to implement a real-time personalized marketing engine. It must ingest user interactions, compute behavioral features, predict customer preferences using ML models, and deliver targeted offers via mobile apps and email. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch jobs
B) Pub/Sub for event ingestion, Dataflow for feature computation, BigQuery for analytics, Vertex AI for ML scoring, Cloud Run for delivering personalized offers
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run
Explanation:
Personalized marketing engines require real-time ingestion of millions of events, including product views, clicks, purchases, and engagement with promotional campaigns. Pub/Sub ensures high-throughput, reliable ingestion of these events while decoupling producers from downstream processing pipelines, allowing seamless scaling during peak traffic.
Dataflow computes real-time behavioral features such as session activity, browsing patterns, time-on-page, and purchase frequency. Windowed and stateful processing allows computation of rolling metrics and complex behavioral signals critical for personalized recommendations. Dataflow also enriches events with user metadata, loyalty tier, demographics, and location for context-aware predictions.
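The behavioral features mentioned above can be sketched from a single user's session events; the event types and feature names are illustrative, not a real schema.

```python
def session_features(events):
    """Compute simple behavioral features from one user's session.
    Each event is (timestamp_seconds, event_type); names are illustrative."""
    if not events:
        return {}
    events = sorted(events)
    times = [t for t, _ in events]
    types = [e for _, e in events]
    return {
        "session_seconds": times[-1] - times[0],
        "page_views": types.count("view"),
        "purchases": types.count("purchase"),
        "add_to_cart_rate": types.count("add_to_cart") / max(1, types.count("view")),
    }

events = [(0, "view"), (30, "view"), (75, "add_to_cart"), (200, "purchase")]
print(session_features(events))
```

In the streaming pipeline this computation would run per session window keyed by user, with the resulting feature vector handed to the Vertex AI scoring endpoint.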
BigQuery stores historical user behavior and transaction data for analytics and feature extraction for ML models. Analysts can segment users, compute trends, and generate training datasets for recommendation models.
Vertex AI hosts ML models for real-time scoring and prediction of user preferences. Models can use collaborative filtering, content-based filtering, or hybrid methods to generate personalized offers and recommendations. Continuous retraining ensures models adapt to changing user behaviors.
Cloud Run exposes APIs that deliver personalized recommendations and offers to mobile apps, websites, and email platforms. Autoscaling ensures low-latency responses even under high concurrent traffic.
Alternatives such as Cloud SQL with batch jobs, Firestore with Cloud Functions, or batch-only Cloud Storage processing fail to meet real-time, scalable, and predictive requirements. Batch-only solutions introduce latency that undermines personalization effectiveness.
This architecture ensures a scalable, real-time, predictive, personalized marketing platform that adapts dynamically to user behavior across multiple channels.
Question 118:
A global ride-hailing company wants to implement a predictive surge pricing system. The system must ingest real-time ride requests, driver locations, traffic, and weather data, compute demand-supply ratios, and predict optimal pricing adjustments using ML models. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch scripts
B) Pub/Sub for ingestion, Dataflow for feature computation, Bigtable for low-latency operational metrics, Vertex AI for predictive pricing, Cloud Run for fare calculation APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run
Explanation:
Predictive surge pricing requires high-frequency ingestion of ride requests, driver telemetry, traffic updates, and weather data. Pub/Sub provides global, durable, and high-throughput ingestion capable of scaling to millions of messages per second. It decouples producers from downstream pipelines and allows bursts of traffic to be absorbed without loss.
Dataflow computes real-time features including demand-supply ratios, average wait times, ETA predictions, traffic congestion metrics, and weather-adjusted travel times. Windowed and stateful computations allow rolling calculations of demand surges and driver availability, essential for dynamic pricing.
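The demand-supply ratio feeding the pricing decision can be sketched with a fixed curve; the ramp slope and cap are assumptions, and as the question implies, a real system would replace this curve with an ML model's output.

```python
def surge_multiplier(requests, available_drivers, base=1.0, cap=3.0):
    """Map a zone's demand-supply ratio to a fare multiplier.
    Thresholds, slope, and cap are illustrative placeholders."""
    ratio = requests / max(1, available_drivers)
    if ratio <= 1.0:
        return base          # supply meets demand: no surge
    # linear ramp above parity, capped at `cap`
    return min(cap, base + 0.5 * (ratio - 1.0))

print(surge_multiplier(30, 10))  # ratio 3.0 -> multiplier 2.0
```

The cap is a common guardrail so that a transient demand spike cannot push fares arbitrarily high before the model or operators react.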
Bigtable stores operational metrics for low-latency access during fare calculations. Its fast read/write capabilities ensure that pricing decisions occur within milliseconds to maintain competitive response times for ride requests.
Vertex AI hosts predictive ML models that use historical and real-time features to forecast optimal surge pricing adjustments. Models can incorporate demand trends, time-of-day patterns, traffic conditions, and driver availability to determine dynamic fares. Real-time inference ensures immediate application of pricing changes.
Cloud Run exposes fare calculation APIs to mobile apps, websites, and driver applications. Autoscaling ensures low latency under fluctuating demand.
Alternative solutions fail to meet the low-latency, high-throughput, and predictive requirements. Batch processing, Cloud SQL, or Firestore cannot scale to millions of concurrent transactions while delivering sub-second predictions.
This architecture provides a scalable, real-time predictive surge pricing system that dynamically adjusts fares based on demand, supply, traffic, and weather.
Question 119:
A financial services company wants to implement a global real-time risk management platform. The system must ingest trade events, compute risk metrics, detect anomalies, feed machine learning models for predictive risk scoring, and provide dashboards for traders and compliance officers. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch scripts
B) Pub/Sub for trade event ingestion, Dataflow for streaming analytics, Bigtable for real-time risk metrics, BigQuery for historical analysis, Vertex AI for predictive scoring, Looker for dashboards
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Looker
Explanation:
Global financial risk management requires ingestion of millions of trade events per second from multiple markets, exchanges, and trading platforms. Pub/Sub provides highly scalable, durable ingestion and decouples trade producers from downstream processing, ensuring reliability during market spikes.
Dataflow performs real-time streaming analytics, computing per-trade metrics such as Value-at-Risk, rolling exposure, and portfolio-level aggregations. Stateful and windowed computations allow detection of anomalies, such as unusual trading patterns, concentration risks, or sudden market shifts.
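The Value-at-Risk computation mentioned above can be illustrated with the simplest historical-simulation form, computed over a window of past P&L observations; the P&L figures are made up, and real desks use far richer models.

```python
def historical_var(pnl, confidence=0.95):
    """One-day historical Value-at-Risk: the loss not exceeded with the
    given confidence, from a window of past P&L observations (simplified)."""
    losses = sorted(-p for p in pnl)           # positive numbers = losses
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return max(0.0, losses[idx])

pnl = [120, -80, 45, -200, 60, -150, 30, -90, 10, -50]
print(historical_var(pnl, confidence=0.9))
```

In the streaming pipeline this would be a windowed, keyed aggregation per portfolio, with the resulting metric written to Bigtable for sub-second dashboard reads.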
Bigtable stores operational risk metrics for low-latency access. Traders and compliance officers require sub-second visibility into positions, exposures, and risk indicators. Its high-throughput, low-latency access ensures timely decisions.
BigQuery stores historical trade data for analytics, back-testing, regulatory reporting, and ML feature extraction. Analysts can perform large-scale queries on historical trends and identify systemic risk patterns.
Vertex AI hosts predictive ML models that evaluate market risk, counterparty risk, and potential portfolio exposure. Models use both historical and streaming features to provide real-time predictive scores. Real-time inference ensures traders receive actionable insights instantly.
Looker dashboards integrate operational and predictive metrics, providing visibility across portfolios, accounts, and risk factors. Traders and compliance officers can drill down to individual trades, monitor anomalies, and take corrective actions.
Batch-only or SQL-based alternatives fail to meet the latency, scale, and predictive analytics requirements of global real-time risk management.
This architecture ensures secure, scalable, real-time, predictive, and actionable risk monitoring for a global financial services firm.
Question 120:
A global logistics company wants to implement a real-time route optimization and predictive delivery system. Vehicles continuously emit GPS, fuel, and engine telemetry. The system must compute optimal routes, predict delays using ML, detect anomalies, and provide dashboards for operations teams. Which architecture should the Cloud Architect recommend?
A) Cloud SQL with batch jobs
B) Pub/Sub for telemetry ingestion, Dataflow for feature computation, Bigtable for low-latency operational queries, BigQuery for analytics, Vertex AI for predictive delivery, Cloud Run for dashboards and APIs
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML
Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run
Explanation:
Real-time route optimization requires ingestion of high-frequency telemetry from a fleet of vehicles, including GPS location, fuel consumption, and engine performance. Pub/Sub provides global, high-throughput, and reliable ingestion, decoupling telemetry producers from downstream analytics pipelines.
Dataflow computes real-time features such as rolling average speeds, predicted arrival times, fuel efficiency metrics, and route deviations. Windowed processing allows detection of anomalies like sudden slowdowns or deviations from planned routes.
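Route-deviation detection of the kind described can be sketched as a point-to-segment distance check against the planned route; the flat-earth scale factors and coordinates are illustrative and only valid over short distances.

```python
import math

def deviation_m(pos, a, b):
    """Distance in metres from a vehicle position to the planned route
    segment a->b, using a flat-earth approximation (fine over short spans).
    All points are (lat, lon); scale constants are approximate."""
    def to_xy(p, ref):
        kx = 111_320 * math.cos(math.radians(ref[0]))  # metres per deg lon
        return ((p[1] - ref[1]) * kx, (p[0] - ref[0]) * 110_540)
    p, q = to_xy(pos, a), to_xy(b, a)
    seg2 = q[0] ** 2 + q[1] ** 2
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, (p[0] * q[0] + p[1] * q[1]) / seg2))
    dx, dy = p[0] - t * q[0], p[1] - t * q[1]
    return math.hypot(dx, dy)

# Vehicle slightly north of a short west-to-east route segment
d = deviation_m((37.7760, -122.4150), (37.7750, -122.4200), (37.7750, -122.4100))
print(round(d))
```

A pipeline would run this check against each new GPS fix and emit an anomaly event once the deviation exceeds a tolerance for several consecutive readings.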
Bigtable stores operational metrics for low-latency queries. Fleet managers need sub-second access to vehicle locations and route metrics to make timely routing decisions.
BigQuery stores historical telemetry data for analytics and feature extraction. Analysts can identify traffic patterns, delivery trends, and recurring delays, which feed into predictive ML models.
Vertex AI hosts predictive models that forecast delivery delays and suggest optimized routes. Models use historical telemetry, real-time GPS, traffic data, and weather conditions. Real-time inference ensures proactive rerouting to minimize delivery delays.
Cloud Run exposes APIs and dashboards for operations teams to monitor vehicle status, route performance, and predictive alerts. Batch-only or SQL-based solutions fail to deliver real-time predictive insights at scale.
This architecture ensures scalable, real-time, predictive route optimization and fleet management for global logistics operations.