Google Cloud Certified – Professional Cloud Architect Exam Dumps and Practice Test Questions: Set 5 (Q81–100)

Question 81:

A financial trading platform wants to implement a real-time risk analysis system. The system must analyze millions of trades per minute, compute exposure metrics, detect anomalies, run predictive ML models, and feed dashboards used by risk officers. Latency must be under one second. What architecture should the Cloud Architect recommend?

A) Use Cloud SQL for storing trades and run analysis every minute using Cloud Functions
B) Use Pub/Sub for ingestion, Dataflow for real-time processing, Bigtable for sub-second lookups, BigQuery for analytical queries, and Vertex AI for model scoring with Cloud Run providing APIs
C) Use Cloud Storage for logs and run Dataflow batch jobs every hour
D) Use Compute Engine VMs with cron jobs and store results in Firestore

Answer: B) Use Pub/Sub for ingestion, Dataflow for real-time processing, Bigtable for sub-second lookups, BigQuery for analytics, and Vertex AI for model scoring with Cloud Run providing APIs

Explanation:

Real-time risk analysis in financial trading requires extremely fast ingestion, processing, and decision-making because trade volume is high and risk exposure changes every second. Pub/Sub is the optimal ingestion component because it supports millions of messages per second and absorbs surges around market open and close. It also ensures reliable message delivery and decouples trading applications from analytic systems, which prevents backpressure and keeps the trading platform running smoothly even during extreme spikes.

Dataflow then processes these streams in real time. Risk analysis requires complex transformations such as aggregating trade values, computing exposure by asset class, identifying leverage ratios, and updating mark-to-market valuations. Dataflow’s streaming mode supports stateful processing, windowing functions, and complex event processing that allows the system to maintain real-time running totals, sliding windows of trade activity, and anomaly detection features such as sudden price deviations or high-frequency trading spikes. Dataflow pipelines can also generate alert triggers based on predefined thresholds or ML-based risk scores.
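As a concrete illustration, the following minimal Apache Beam (Python) sketch aggregates notional exposure per asset class over sliding windows in streaming mode. The topic names, message schema, and window sizes are illustrative assumptions, not values prescribed by the question:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_trade(msg: bytes):
    """Extract (asset_class, notional) from a JSON-encoded trade event."""
    trade = json.loads(msg.decode("utf-8"))
    return trade["asset_class"], float(trade["notional"])


def run():
    opts = PipelineOptions(streaming=True, save_main_session=True)
    with beam.Pipeline(options=opts) as p:
        (p
         | "ReadTrades" >> beam.io.ReadFromPubSub(
             topic="projects/my-project/topics/trades")  # hypothetical topic
         | "Parse" >> beam.Map(parse_trade)
         # 60-second windows emitted every 5 seconds keep exposure totals fresh.
         | "Window" >> beam.WindowInto(
             beam.window.SlidingWindows(size=60, period=5))
         | "SumExposure" >> beam.CombinePerKey(sum)
         | "ToBytes" >> beam.Map(lambda kv: json.dumps(
             {"asset_class": kv[0], "exposure": kv[1]}).encode("utf-8"))
         | "Publish" >> beam.io.WriteToPubSub(
             topic="projects/my-project/topics/exposure"))  # hypothetical topic


if __name__ == "__main__":
    run()
```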

Bigtable acts as the low-latency operational store for storing current positions, exposure metrics, and sub-second risk indicators. Bigtable offers extremely high write throughput and single-digit millisecond read latency, making it perfect for instant dashboards or risk visualization tools. For example, a risk officer monitoring a dashboard needs to see changes within milliseconds, which Bigtable supports. Bigtable’s wide-column structure also supports storing many different metrics per trader, asset, or desk.
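A minimal sketch of that access pattern with the Python Bigtable client is shown below; the project, instance, table, column family, and row-key convention (desk#trader) are all assumptions for illustration:

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")  # hypothetical project
table = client.instance("risk-instance").table("positions")

# Row key groups a desk's traders contiguously for efficient scans.
row = table.direct_row(b"desk7#trader42")
row.set_cell("metrics", b"net_exposure", b"1250000.50")
row.set_cell("metrics", b"var_95", b"83000.00")
row.commit()

# Point read for a dashboard refresh; typically single-digit milliseconds.
result = table.read_row(b"desk7#trader42")
print(result.cells["metrics"][b"net_exposure"][0].value.decode("utf-8"))
```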

BigQuery complements Bigtable by storing long-term trade history. Risk modeling, regulatory reporting, and advanced analytics often require querying years of data. BigQuery supports massive datasets, fast analytical queries, partitioning by date, clustering by trader or instrument, and SQL-driven analysis. Historical queries can reveal patterns such as portfolio drift, unusual liquidity events, or volatility clustering. These insights improve the accuracy of real-time risk scoring models.
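The sketch below shows what that table design could look like with the BigQuery Python client, assuming a dataset named risk already exists; the schema and sample query are illustrative:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Partition by trade date and cluster by instrument and trader so
# historical risk queries scan only the days and keys they touch.
client.query("""
CREATE TABLE IF NOT EXISTS risk.trade_history (
  trade_ts   TIMESTAMP,
  instrument STRING,
  trader_id  STRING,
  notional   NUMERIC
)
PARTITION BY DATE(trade_ts)
CLUSTER BY instrument, trader_id
""").result()

# Example analytical query: daily notional per instrument over one quarter.
for row in client.query("""
SELECT DATE(trade_ts) AS trade_date, instrument, SUM(notional) AS total_notional
FROM risk.trade_history
WHERE trade_ts BETWEEN '2024-01-01' AND '2024-03-31'
GROUP BY trade_date, instrument
ORDER BY trade_date
""").result():
    print(row.trade_date, row.instrument, row.total_notional)
```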

Vertex AI provides machine learning infrastructure for predictive risk models such as anomaly detection, trade fraud prediction, reinforcement learning for portfolio risk, or regression-based exposure forecasting. Vertex AI endpoints provide real-time scoring, enabling Dataflow or Cloud Run to call ML models with minimal latency. Because risk scoring must happen within milliseconds to block high-risk trades, Vertex AI real-time prediction endpoints are the right fit.
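Calling such an endpoint is a short prediction request; in this sketch the endpoint ID and feature schema are hypothetical and depend entirely on the deployed model:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890")

# One feature vector per trade; field names must match the model's schema.
instances = [{
    "notional": 250000.0,
    "leverage": 3.2,
    "price_deviation": 0.015,
    "trades_last_minute": 42,
}]
prediction = endpoint.predict(instances=instances)
print("risk score:", prediction.predictions[0])
```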

Cloud Run exposes APIs for dashboards, alerts, and internal systems. Risk analysts need tools to view real-time results, acknowledge alerts, override systems, or modify risk rules. Cloud Run scales quickly and reduces operational overhead while allowing containerized logic to perform complex computations.

Option A cannot scale, Option C lacks real-time capability, and Option D would introduce unacceptable latency. Thus, Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run is the correct architecture.

Question 82:

A healthcare analytics company must build a HIPAA-compliant system to process patient sensor data from medical devices. The system must support real-time data ingestion, secure PHI storage, ML-based anomaly detection, and clinician dashboards. What architecture should the Cloud Architect propose?

A) Use Firebase for data ingestion and store PHI in Firestore
B) Use Pub/Sub with CMEK, Dataflow with VPC-SC, Cloud Storage with CMEK for raw data, BigQuery for analytics, Vertex AI for ML models, and Looker for dashboards
C) Use Cloud SQL for PHI and Python scripts for ML
D) Use publicly accessible APIs through App Engine

Answer: B) Use Pub/Sub with CMEK, Dataflow with VPC-SC, Cloud Storage with CMEK for raw data, BigQuery, Vertex AI, and Looker

Explanation:

HIPAA compliance requires strict adherence to best practices such as encryption at rest, encryption in transit, audit logging, VPC Service Controls, key management through CMEK, IAM least-privilege policies, and restricted data movement. Healthcare device data typically includes PHI such as patient identifiers, heart rate signals, glucose readings, oxygen saturation, and other vital metrics. These data streams must be processed in real time to detect anomalies such as arrhythmias, hypoxia, or dangerous glucose fluctuations.

Pub/Sub with CMEK ensures encrypted, compliant messaging at ingestion. Using CMEK ensures healthcare organizations retain full control over encryption keys, satisfying compliance auditors. Pub/Sub decouples medical device agents from back-end systems and supports bursts of telemetry, which is common in healthcare environments when sensor activity spikes.
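Attaching a customer-managed key happens at topic creation time. A minimal sketch with the Pub/Sub Python client, using hypothetical project, key ring, and key names:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "device-telemetry")

# Cloud KMS key controlled by the healthcare organization (names are
# illustrative); Pub/Sub encrypts message data at rest with this key.
kms_key = ("projects/my-project/locations/us-central1/"
           "keyRings/phi-ring/cryptoKeys/phi-key")
topic = publisher.create_topic(
    request={"name": topic_path, "kms_key_name": kms_key})
print("created:", topic.name)
```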

Dataflow with VPC-SC is used to process telemetry in real time. Health analytics pipelines may compute rolling averages, detect abnormal patterns, compare readings to clinical thresholds, or enrich data with patient profiles. VPC Service Controls prevent data exfiltration, which is essential under HIPAA. Dataflow supports secure private IP communication with other GCP services.

Cloud Storage with CMEK stores raw medical files such as sensor logs, ECG strips, waveform data, or batch uploads. It offers long-term, secure, low-cost storage. Versioning and lifecycle rules allow compliance with record retention policies.

BigQuery is used for analytical queries on aggregated medical data. Healthcare analytics often require identifying trends across thousands of patients, correlating symptoms with device readings, or generating clinical insights. BigQuery provides fast, scalable querying while integrating with VPC-SC and CMEK for PHI protection.

Vertex AI hosts ML models for detecting abnormal sensor behaviors. Healthcare anomaly detection might involve neural networks, time-series models, autoencoders, or clinical rule-based engines. Vertex AI supports secure model training in isolation using private endpoints, CMEK protection, and restricted access.

Looker creates dashboards for clinicians, allowing them to view real-time patient alerts, historical data, and predictive risk indicators. Looker integrates with BigQuery without extracting PHI.

Option A is not HIPAA-compliant. Option C lacks scalability and security. Option D violates PHI protection requirements.

Thus, Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker with full security controls is correct.

Question 83:

A transportation app wants to build a surge-pricing engine that adjusts fares based on real-time demand, weather, driver availability, and traffic. The system needs to process large-scale data continuously and return pricing decisions in milliseconds. What is the best architecture?

A) Cloud Functions for all computations and Firestore for data
B) Pub/Sub for ingestion, Dataflow for real-time calculations, Bigtable for instantaneous lookups, Vertex AI for pricing models, and Cloud Run for the pricing API
C) Cloud SQL for storing traffic data and Compute Engine cron jobs
D) Cloud Storage for logs and offline ML models

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Surge pricing requires combining multiple data sources—driver activity, rider requests, traffic conditions, weather patterns, supply-demand ratios, and historical pricing trends. Because ridesharing systems see rapid fluctuations during events, storms, or commute peaks, the architecture must handle real-time scale.

Pub/Sub ingests ride requests, driver locations, and traffic events with high throughput. The system may receive tens of thousands of events per second in large cities. Pub/Sub ensures message durability and decouples mobile apps from pricing systems.

Dataflow processes these events and computes real-time metrics such as demand density, supply availability, and local congestion factors. It can aggregate counts per neighborhood, compute supply-demand ratios, and generate streaming features needed for ML models. Dataflow can also enrich events with weather feeds or historical trends.

Bigtable stores real-time demand metrics and supply data. Because pricing decisions must occur within milliseconds, Bigtable’s extremely low-latency reads and high-throughput writes make it ideal for storing city heat maps, demand clusters, driver-to-rider ratios, and localized pricing factors.

Vertex AI hosts the surge-pricing ML model. Surge pricing often uses reinforcement learning, regression models, or dynamic optimization algorithms. Vertex AI endpoints provide real-time inference, enabling Dataflow or Cloud Run to compute new prices instantly.

Cloud Run exposes the final pricing API to the mobile apps. Drivers and riders need instantaneous fare estimates, so Cloud Run scales quickly and supports containerized ML logic with minimal cold-start time.
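A minimal sketch of such a Cloud Run service, written with Flask and reading a demand/supply ratio the Dataflow pipeline is assumed to maintain in Bigtable (all identifiers, column names, and the surge cap are illustrative):

```python
import os

from flask import Flask, jsonify, request
from google.cloud import bigtable

app = Flask(__name__)

bt = bigtable.Client(project=os.environ.get("GCP_PROJECT", "my-project"))
table = bt.instance("pricing-instance").table("zone_metrics")  # hypothetical

BASE_FARE = 5.00


@app.route("/fare")
def fare():
    zone = request.args.get("zone", "downtown")
    row = table.read_row(zone.encode("utf-8"))
    ratio = 1.0  # default when no metrics exist yet for the zone
    if row is not None:
        cell = row.cells["metrics"][b"demand_supply_ratio"][0]
        ratio = float(cell.value.decode("utf-8"))
    surge = min(max(ratio, 1.0), 3.0)  # cap the multiplier between 1x and 3x
    return jsonify({"zone": zone, "surge_multiplier": surge,
                    "fare_estimate": round(BASE_FARE * surge, 2)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```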

Alternatives fail because Cloud Functions and Firestore cannot support the required scale and latency, Cloud SQL and cron jobs are too slow, and Cloud Storage is unsuitable for real-time computation.

Question 84:

A banking company wants to migrate its fraud rules engine to Google Cloud. The system must evaluate transactions in under 300 ms, store customer profiles, integrate ML models, and update rules dynamically. What should the Cloud Architect recommend?

A) Store rules in Cloud Storage and run evaluations in Cloud Functions
B) Use Memorystore for caching rules, Bigtable for customer profiles, Pub/Sub + Dataflow for processing transactions, Vertex AI for ML scoring, and Cloud Run for the rules engine API
C) Run everything on Compute Engine managed instance groups
D) Use Cloud SQL for profiles and cron jobs to evaluate transactions

Answer: B) Memorystore + Bigtable + Pub/Sub + Dataflow + Vertex AI + Cloud Run

Explanation:

Fraud rules engines require extremely low-latency evaluation of incoming transactions. Rules such as geolocation mismatches, abnormal spending, merchant category anomalies, and velocity checks must be applied instantly. Memorystore is well suited to storing dynamic rules because it provides sub-millisecond latency and supports frequent updates, so fraud analysts can update rules without redeploying services.
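A minimal sketch of that pattern with a Redis client pointed at a Memorystore endpoint (the host, key names, and thresholds are assumptions):

```python
import json

import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # hypothetical Memorystore endpoint

# Analysts publish updated rules without redeploying the engine.
r.set("rules:velocity", json.dumps({"max_txn_per_minute": 10}))
r.set("rules:geo", json.dumps({"max_km_between_txns": 500}))


def evaluate(txn: dict) -> list:
    """Return the names of the rules this transaction violates."""
    violations = []
    velocity = json.loads(r.get("rules:velocity"))
    if txn["txn_count_last_minute"] > velocity["max_txn_per_minute"]:
        violations.append("velocity")
    geo = json.loads(r.get("rules:geo"))
    if txn["km_from_last_txn"] > geo["max_km_between_txns"]:
        violations.append("geo")
    return violations


print(evaluate({"txn_count_last_minute": 14, "km_from_last_txn": 120}))
```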

Bigtable stores customer profiles, transaction history summaries, risk levels, and merchant data. Bigtable’s high throughput ensures that each customer lookup occurs within milliseconds.

Pub/Sub ingests all transactions at high scale, ensuring durable, decoupled message flow. Dataflow processes these messages and prepares features for the rules engine and ML models. It can compute rolling windows, enrich transactions with customer metadata, and perform initial checks.

Vertex AI scores transactions using ML models trained on fraud patterns. Cloud Run serves as the rules engine API, combining rules-based logic with ML scoring. Cloud Run scales automatically under high load, ensuring low-latency performance.

Question 85:

A global travel platform needs to implement a recommendation engine for hotels, flights, and activities. It must process browsing behavior, user profiles, pricing trends, and contextual data such as seasonality. The solution must generate recommendations in real time. What architecture is best?

A) Cloud SQL + Cloud Functions
B) Pub/Sub + Dataflow + BigQuery + Vertex AI + Cloud Run
C) Firestore + App Engine
D) Cloud Storage + batch ML

Answer: B) Pub/Sub + Dataflow + BigQuery + Vertex AI + Cloud Run

Explanation:

Travel recommendation engines rely on massive datasets such as user searches, clickstream behavior, historical bookings, seasonal travel data, pricing patterns, and destination popularity. Pub/Sub ingests user activity and browsing signals. Dataflow transforms these into features such as recent searches, preferred destinations, and time-based trends. BigQuery stores analytical data and feature tables. Vertex AI trains and deploys ML models using deep learning or collaborative filtering. Cloud Run exposes recommendation APIs and scales globally. This architecture supports real-time personalized recommendations that adapt instantly to user behavior.

Question 86:

A global entertainment company wants to build a real-time streaming analytics platform for its video service. The system must process playback events, buffering metrics, CDN logs, session start/stop data, and QoS indicators from millions of viewers simultaneously. The platform must generate insights on latency, playback errors, user engagement, bitrate adaptation, and regional performance. Data must be available to dashboards within seconds and must feed ML models for predicting churn. What architecture should the Cloud Architect choose?

A) Use Cloud SQL for all event logs and Cloud Functions to process incoming streams
B) Use Pub/Sub for ingestion, Dataflow for real-time ETL, Bigtable for operational analytics, BigQuery for historical reporting, and Vertex AI for churn prediction
C) Use Cloud Storage to store logs and run nightly Dataflow jobs
D) Use App Engine to collect logs directly and process using Firestore

Answer: B) Use Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI

Explanation:

Real-time video analytics is one of the most demanding workloads in cloud architecture because video platforms generate extremely high event volumes. Every time a user starts a session, changes the bitrate, pauses the video, experiences buffering, or encounters an error, multiple metrics are emitted. With millions of users active at once, video services may generate tens of millions of events per second globally. This traffic pattern requires a decoupled, horizontally scalable event ingestion system. Pub/Sub is designed precisely for this scenario. It supports enormous throughput, global distribution, and durable at-least-once delivery. CDN logs, playback metrics, mobile app telemetry, and session events can all enter the platform via Pub/Sub topics configured per region or per event type.

Dataflow then processes these events using streaming pipelines. Real-time transformations can include cleansing malformed logs, enriching events with geo-metadata, detecting spikes in buffering ratios, computing rolling engagement metrics, calculating abandonment rates, and producing session-level aggregates. Bitrate adaptation events often need to be correlated with network performance indicators, and Dataflow’s support for joins across streams allows these correlations in real time. Companies often use Dataflow to detect emerging QoS issues before customers complain, reducing churn and improving experience.

Bigtable is ideal for storing operational analytics that must be queried in real time. Dashboards showing live metrics such as current buffering percentage, real-time user counts per region, or average bitrate must respond within milliseconds. Bigtable’s low-latency reads and high ingestion throughput make it a perfect operational store. Video platforms often maintain a time-series schema in Bigtable where each row represents a stream or region, and each column stores metrics such as bitrates, error codes, and latency spikes.

BigQuery complements Bigtable as the analytical warehouse for long-term data. Video companies analyze viewing patterns across weeks, months, or years, which requires extremely large datasets. BigQuery can run massive JOINs or aggregations to understand seasonality, device performance, ISP reliability, and customer churn correlations. Partitioning and clustering optimize these tables for fast queries. BigQuery ML can be used for simpler churn models, but more advanced models benefit from Vertex AI.
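For the simpler BigQuery ML path, a churn classifier can be trained and scored entirely in SQL. The sketch below assumes a video dataset with a user_features table containing a boolean churned label; all names and columns are illustrative:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Train a logistic-regression churn model directly over the feature table.
client.query("""
CREATE OR REPLACE MODEL video.churn_model
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT avg_bitrate, buffering_ratio, sessions_last_30d, churned
FROM video.user_features
""").result()

# Score current users and keep those the model flags as likely to churn.
rows = client.query("""
SELECT user_id, predicted_churned
FROM ML.PREDICT(MODEL video.churn_model,
                (SELECT * FROM video.user_features_current))
WHERE predicted_churned = TRUE
""").result()
for row in rows:
    print(row.user_id)
```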

Vertex AI powers churn prediction, QoS anomaly detection, recommendation engines, and session quality forecasting. Churn models may analyze user watch behavior, historical quality metrics, error patterns, and demographic data. Predicting churn enables targeted retention offers such as discounts. Vertex AI pipelines automate training, evaluation, and deployment of these models.

Options A and D cannot scale to millions of events per second; Cloud SQL or Firestore would become a bottleneck. Option C provides only batch processing, which is unacceptable for real-time QoS monitoring. Thus, Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI is the correct real-time streaming architecture.

Question 87:

A retail chain needs to build a real-time inventory management system that updates store inventory levels instantly as customers make purchases, return items, or order products online. Stores across multiple countries must share consistent inventory data. The system must also forecast stockouts using ML and notify distribution centers. What architecture should the Cloud Architect recommend?

A) Use Cloud SQL for global inventory tables and regional replicas
B) Use Pub/Sub for ingestion, Dataflow for processing, Spanner for global inventory, BigQuery for analytics, and Vertex AI for forecasting
C) Use Firestore multi-region mode with Cloud Functions for updates
D) Use Cloud Storage for storing logs and training ML models offline

Answer: B) Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI

Explanation:

Real-time global inventory systems must guarantee consistent, synchronized stock levels across thousands of stores and online channels. Traditional relational databases cannot support global transaction consistency with low latency. Cloud Spanner is specifically designed for this use case because it provides globally distributed, strongly consistent transactions. Inventory updates such as purchases, returns, and transfers must be applied atomically to ensure that no two stores sell the last item simultaneously. Spanner’s horizontal scaling means it can handle thousands of inventory updates per second across regions.
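The sketch below shows how such an atomic decrement could look with the Spanner Python client; the instance, database, table, and column names are illustrative, and run_in_transaction automatically retries on transaction aborts:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # hypothetical project
db = client.instance("retail-instance").database("inventory")


def sell_item(transaction, store_id, sku, qty):
    row = transaction.execute_sql(
        "SELECT quantity FROM Inventory WHERE StoreId = @store AND Sku = @sku",
        params={"store": store_id, "sku": sku},
        param_types={"store": spanner.param_types.STRING,
                     "sku": spanner.param_types.STRING},
    ).one()
    if row[0] < qty:
        raise ValueError("insufficient stock")
    transaction.execute_update(
        "UPDATE Inventory SET quantity = quantity - @qty "
        "WHERE StoreId = @store AND Sku = @sku",
        params={"qty": qty, "store": store_id, "sku": sku},
        param_types={"qty": spanner.param_types.INT64,
                     "store": spanner.param_types.STRING,
                     "sku": spanner.param_types.STRING},
    )


# The read and update commit atomically, so two stores can never
# both sell the last unit of the same SKU.
db.run_in_transaction(sell_item, "store-0042", "SKU-981", 1)
```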

Pub/Sub is required for ingesting events from point-of-sale systems, online purchases, warehouse systems, and barcode scanners. Retail workloads often involve unpredictable spikes such as seasonal shopping, flash sales, or product releases. Pub/Sub decouples stores from the core inventory system, preventing overload and enabling smooth scaling.

Dataflow processes these events in real time, enriching them with metadata such as SKU details, store information, or customer segments. It can also perform windowing to detect sudden changes in demand or to aggregate events such as rapid sell-through alerts. Dataflow pipelines can route updates to Spanner while sending analytical data to BigQuery simultaneously.

BigQuery then stores historical inventory changes, product sales trends, and customer purchasing behavior. Analysts can use BigQuery to explore patterns such as regional demand spikes, seasonal variations, or price sensitivity. BigQuery is also useful for long-term forecasting using ML features.

Vertex AI provides ML capabilities for stockout predictions, replenishment recommendations, and demand forecasting. Supply-chain forecasting models often rely on time-series techniques or machine learning models such as LSTM networks. Vertex AI Pipelines can run training automatically using BigQuery data and deploy models to prediction endpoints.

Option A cannot guarantee global consistency. Option C cannot support high-throughput inventory workloads or cross-region transactions. Option D does not provide real-time functionality. Thus, the correct architecture is Pub/Sub → Dataflow → Spanner → BigQuery → Vertex AI.

Question 88:

A cybersecurity company wants to build a threat detection platform that ingests firewall logs, VPN logs, user login activity, and network flows. The system must detect anomalies such as brute-force attacks, lateral movement, unusual data transfers, and privilege escalation. It must process data in near-real time and support ML-based detection. What architecture should the Cloud Architect recommend?

A) Use Cloud Functions to upload logs to Cloud Storage and analyze nightly
B) Use Pub/Sub for ingestion, Dataflow for streaming analytics, BigQuery for SIEM queries, Bigtable for fast threat lookups, and Vertex AI for ML-based threat detection
C) Use Firestore and Cloud Functions only
D) Use Compute Engine VMs to manually parse logs

Answer: B) Pub/Sub → Dataflow → BigQuery → Bigtable → Vertex AI

Explanation:

Cybersecurity threat detection requires large-scale streaming analytics because threats emerge continuously. Logs from firewalls, networks, authentication systems, and endpoints must be ingested instantly to identify attacks as they happen. Pub/Sub can support millions of log messages per second and provides durable, scalable ingestion.

Dataflow processes these logs in real time. It can detect brute-force login attempts by counting login failures per IP or user ID within sliding windows. It can correlate network flows across time to detect lateral movement. It can also identify unusual data transfers by comparing against historical baselines. Dataflow supports windowing, stateful transforms, pattern detection, and complex event processing essential for security analytics.
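A minimal Beam (Python) sketch of the brute-force pattern: count failed logins per source IP in sliding windows and publish an alert when a threshold is crossed. The topic names, event schema, and threshold are assumptions:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

FAILURE_THRESHOLD = 20  # assumed tuning value


def failed_login_key(msg: bytes):
    """Yield (source_ip, 1) only for failed authentication events."""
    event = json.loads(msg.decode("utf-8"))
    if event.get("outcome") == "FAILURE":
        yield event["source_ip"], 1


def run():
    opts = PipelineOptions(streaming=True, save_main_session=True)
    with beam.Pipeline(options=opts) as p:
        (p
         | "ReadAuthLogs" >> beam.io.ReadFromPubSub(
             topic="projects/my-project/topics/auth-logs")  # hypothetical
         | "KeyFailures" >> beam.FlatMap(failed_login_key)
         # Count failures per IP over a 60 s window sliding every 10 s.
         | "Window" >> beam.WindowInto(
             beam.window.SlidingWindows(size=60, period=10))
         | "Count" >> beam.CombinePerKey(sum)
         | "Suspicious" >> beam.Filter(lambda kv: kv[1] >= FAILURE_THRESHOLD)
         | "ToAlert" >> beam.Map(lambda kv: json.dumps(
             {"source_ip": kv[0], "failures": kv[1]}).encode("utf-8"))
         | "PublishAlerts" >> beam.io.WriteToPubSub(
             topic="projects/my-project/topics/bruteforce-alerts"))


if __name__ == "__main__":
    run()
```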

BigQuery serves as the analytical backend for SIEM-style queries. Security analysts need the ability to run large-scale searches across billions of log entries to find indicators of compromise. BigQuery’s fast SQL queries enable threat hunting, compliance reporting, and incident investigations.

Bigtable is used as the real-time threat lookup store. Many detection engines rely on lookup tables of known malicious IPs, user risk scores, attack signatures, and device reputation. Bigtable’s low-latency reads make it ideal for powering interactive dashboards or investigation tools.

Vertex AI hosts ML-based threat detection models such as anomaly detection, clustering, predictive risk scoring, or neural-network-based threat classification. These models can detect advanced threats that rules cannot catch. Vertex AI allows scalable deployment and low-latency prediction needed for real-time security systems.

Other options fail to support the scale, speed, or ML needs of cybersecurity analytics. Thus, Pub/Sub → Dataflow → BigQuery → Bigtable → Vertex AI is the correct choice.

Question 89:

An online education platform must build an analytics system that tracks student behavior: video engagement, quiz submissions, reading activity, time spent, and session events. The system must provide near-real-time dashboards for instructors and feed ML models predicting student dropout risk. What architecture should the Cloud Architect recommend?

A) Cloud Functions + Firestore
B) Pub/Sub + Dataflow + BigQuery + Vertex AI + Looker
C) Cloud SQL + batch ETL
D) Cloud Storage + Python scripts

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Looker

Explanation:

Education analytics require tracking diverse student interactions across video players, quizzes, reading pages, and mobile apps. These events can generate significant data volume. Pub/Sub reliably ingests these event streams from global student populations. It scales easily during peak study periods or exam seasons.

Dataflow processes events in real time, computing engagement metrics such as average watch duration, quiz accuracy trends, time spent per module, or inactivity periods. These metrics allow instructors to see live performance dashboards and identify students who may need intervention.

BigQuery stores historical activity data and supports advanced analytics. Teachers and administrators can run queries such as identifying difficult modules, measuring course completion patterns, or analyzing engagement differences across demographics.

Vertex AI builds ML models to predict dropout risk or recommend personalized study paths. Student success models often combine behavioral patterns with course difficulty metrics. Vertex AI Pipelines automate training and deployment.

Looker dashboards visualize both real-time and historical data for instructors, giving them actionable insights.

Question 90:

A global ride-hailing company wants to implement a driver matching engine that assigns nearby drivers to riders based on ETA, traffic, pricing, and driver preferences. The system must return results in under 200 ms and scale to millions of users. What architecture should the Cloud Architect choose?

A) Firestore with Cloud Functions
B) Pub/Sub for ingestion, Dataflow for real-time location processing, Bigtable for fast geospatial lookups, Vertex AI for ETA models, and Cloud Run for matching APIs
C) Cloud SQL with synchronous stored procedures
D) Cloud Storage for location logs

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Ride-hailing platforms must track driver GPS updates, rider requests, surge pricing, and traffic data. Pub/Sub ingests these high-volume streams. Dataflow processes them, computes real-time driver availability, and performs geospatial indexing computations. Bigtable stores the live driver index and provides fast lookups needed for sub-200 ms matching. Vertex AI provides ETA and route scoring models. Cloud Run exposes the matching API with instant autoscaling. Other options cannot meet the latency or throughput needs.

Question 91:

A global financial services company wants to build a real-time stock market analytics platform. The system must ingest millions of trade events per second, compute per-stock metrics such as moving averages, detect anomalies in trading behavior, feed machine learning models for predictive analytics, and provide dashboards for traders. What architecture should the Cloud Architect recommend?

A) Store trade events in Cloud SQL and run batch analytics on Compute Engine every hour
B) Use Pub/Sub for trade event ingestion, Dataflow for streaming analytics, Bigtable for real-time operational metrics, BigQuery for historical analytics, Vertex AI for predictive modeling, and Looker for dashboards
C) Use Cloud Storage for trade logs and run batch ML pipelines
D) Use Firestore for event storage and Cloud Functions for analytics

Answer: B) Use Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Looker

Explanation:

Real-time stock market analytics involves processing extremely high-velocity data streams while maintaining low latency. Each trade event contains multiple fields, such as stock ticker, price, volume, timestamp, and order type, and millions of such events can arrive per second. To handle this scale, Pub/Sub is ideal for ingesting trade events. It supports global distribution, high throughput, and durable message delivery while decoupling data producers from downstream consumers. This ensures the system can absorb bursts of market activity without bottlenecks.

Once ingested, Dataflow handles real-time stream processing. It can compute per-stock metrics such as moving averages, volume-weighted average price, cumulative volume, or rolling standard deviations. Dataflow’s stateful processing allows tracking temporal windows, which is essential for financial indicators that rely on minute- or second-level aggregations. Dataflow can also enrich trade events with reference data, such as corporate actions, sector classifications, or market indices, which is vital for anomaly detection or predictive models.
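As one example of such a stateful metric, volume-weighted average price reduces to two running sums, which maps naturally onto a Beam CombineFn. This sketch assumes a keyed, windowed stream of trade dicts with price and volume fields:

```python
import apache_beam as beam


class VwapFn(beam.CombineFn):
    """Volume-weighted average price over a window: sum(p*v) / sum(v)."""

    def create_accumulator(self):
        return 0.0, 0.0  # (price*volume sum, volume sum)

    def add_input(self, acc, trade):
        pv, vol = acc
        return pv + trade["price"] * trade["volume"], vol + trade["volume"]

    def merge_accumulators(self, accumulators):
        return (sum(a[0] for a in accumulators),
                sum(a[1] for a in accumulators))

    def extract_output(self, acc):
        pv, vol = acc
        return pv / vol if vol else float("nan")


# Usage inside a streaming pipeline keyed by ticker:
#   ... | beam.WindowInto(beam.window.FixedWindows(60))
#       | beam.CombinePerKey(VwapFn())
```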

Bigtable serves as the operational store for real-time metrics. Traders and risk analysts need sub-second access to the latest indicators for each stock, including volume spikes or price anomalies. Bigtable’s low-latency read and high-throughput write capabilities allow real-time dashboards and alerts to operate at scale, handling millions of updates per second. Each stock can be represented as a row, with columns tracking different metrics and moving averages for multiple time intervals.

BigQuery complements Bigtable as the analytical data warehouse for historical analytics. Analysts may query trade histories spanning years, run risk calculations, back-test trading strategies, or measure market volatility over time. BigQuery supports large-scale aggregation queries, partitioned tables, and clustering, making it suitable for exploratory analysis, reporting, and model feature extraction.

Vertex AI enables predictive analytics, such as forecasting stock trends or detecting anomalous trading patterns. Machine learning models may use time-series analysis, deep learning, or ensemble methods. Vertex AI pipelines can automate model training, hyperparameter tuning, evaluation, and deployment to prediction endpoints. Real-time inference allows the system to score new trades immediately, providing alerts for potential market anomalies or opportunities.

Looker dashboards provide an interface for traders, analysts, and risk managers. Real-time metrics from Bigtable and historical analytics from BigQuery can be visualized with dynamic dashboards, alerts, and drill-down capabilities. Users can monitor volatility, trade volumes, or predictive model scores and make timely decisions.

Options A, C, and D are unsuitable. Cloud SQL and Compute Engine batch analytics (A) cannot scale to millions of trades per second. Cloud Storage batch processing (C) introduces unacceptable latency, and Firestore with Cloud Functions (D) cannot handle high-frequency financial streams or complex ML inference. The recommended architecture ensures real-time ingestion, processing, predictive modeling, and visualization, meeting the demanding requirements of global financial markets.

Question 92:

A large retail company wants to implement a personalized marketing platform that provides targeted offers to customers in real time based on their browsing behavior, purchase history, and loyalty profile. The system must scale to millions of customers and update recommendations within milliseconds. What architecture should the Cloud Architect recommend?

A) Use Cloud SQL to store customer events and run batch scripts for recommendations
B) Use Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for analytical storage, Vertex AI for recommendation model training and prediction, and Cloud Run for serving personalized offers
C) Use Cloud Storage to store logs and run nightly ML pipelines
D) Use Firestore to store events and Cloud Functions for model inference

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time personalized marketing requires processing massive event streams with low latency. Events such as page views, clicks, add-to-cart actions, and completed purchases generate high-velocity data. Pub/Sub is ideal for capturing these events because it is fully managed, scalable, and supports at-least-once message delivery. Pub/Sub decouples the event producers (websites, mobile apps, POS systems) from downstream consumers, ensuring reliable ingestion even during peak traffic periods like holidays or flash sales.

Dataflow processes these events in real time. Stream processing includes feature extraction (e.g., customer’s last viewed product, purchase frequency, average spend), event enrichment (combining loyalty scores or demographic data), session-level aggregation, and anomaly filtering. Dataflow pipelines can also perform windowing to track user behavior over specific intervals, which is essential for near-real-time recommendation scoring.

BigQuery stores historical customer behavior, transactional data, and aggregated metrics, enabling analytics and batch ML pipelines. Analysts can query large datasets to identify patterns, segment customers, and extract features for model training. BigQuery’s partitioning, clustering, and optimized storage ensure fast query performance over terabytes or petabytes of data.

Vertex AI trains recommendation models using data from BigQuery, including collaborative filtering, content-based filtering, or hybrid approaches. Models can be deployed to Vertex AI endpoints for real-time scoring, allowing recommendations to update instantly as customers interact with the platform. Automated pipelines ensure retraining on fresh data to maintain model accuracy.

Cloud Run exposes APIs for serving personalized offers to websites, mobile apps, and email platforms. The system can provide low-latency recommendations, scale automatically based on traffic, and integrate seamlessly with existing applications.

Other options fail to meet real-time or scale requirements. Cloud SQL with batch scripts (A) cannot handle millions of events or provide millisecond response times. Cloud Storage batch jobs (C) introduce unacceptable latency, and Firestore with Cloud Functions (D) cannot scale efficiently or support complex ML scoring for millions of users.

This architecture ensures high-throughput ingestion, real-time feature computation, robust ML training, and low-latency recommendation delivery at a global scale.

Question 93:

A healthcare provider wants to implement a remote patient monitoring system. Medical devices stream vital signs such as heart rate, blood pressure, and oxygen saturation continuously. The system must alert clinicians if anomalies are detected, store patient data securely, comply with HIPAA, and support predictive modeling for patient deterioration. Which architecture should the Cloud Architect recommend?

A) Use Firebase for device telemetry and store patient data in Firestore
B) Use Pub/Sub with CMEK for secure event ingestion, Dataflow with VPC-SC for stream processing, Cloud Storage with CMEK for raw data, BigQuery for analytics, Vertex AI for predictive modeling, and Looker for clinician dashboards
C) Use Cloud SQL for patient data and cron jobs for analysis
D) Use Compute Engine VMs to store sensor files and run batch ML

Answer: B) Pub/Sub + Dataflow + Cloud Storage + BigQuery + Vertex AI + Looker

Explanation:

Remote patient monitoring requires continuous ingestion of high-velocity medical telemetry, immediate processing, secure storage, and real-time alerting. Pub/Sub with CMEK ensures that every device event is captured securely with encryption keys managed by the healthcare provider. Pub/Sub decouples devices from downstream processing pipelines, ensuring high availability and reliability, even if some devices experience connectivity issues.

Dataflow with VPC-SC processes the streams in real time, performing anomaly detection, aggregation, and enrichment. For example, rolling averages, sudden drops in vital signs, or abnormal patterns can be computed immediately. VPC-SC ensures sensitive patient data is confined within the trusted boundary of the organization, preventing data exfiltration.

Cloud Storage with CMEK stores raw sensor data for historical analysis, audits, and compliance. BigQuery stores structured datasets for analytics, supporting queries such as trends in vital signs, correlation with interventions, and long-term health monitoring. Partitioning and clustering improve performance for large datasets.

Vertex AI hosts predictive models that analyze patterns in telemetry and identify patients at risk of deterioration. Machine learning models such as LSTM networks, regression, or classification models can predict conditions like arrhythmia, sepsis risk, or hypoxia events. Real-time scoring ensures clinicians receive alerts promptly.

Looker dashboards provide clinicians with an interactive interface to monitor patient health, track trends, and prioritize interventions. Combined with automated alerts from Dataflow pipelines, this enables proactive healthcare.

Options A, C, and D lack either security compliance, scalability, or real-time processing capability. Firestore is not designed for high-throughput telemetry; Cloud SQL cannot handle continuous streams; Compute Engine batch jobs do not provide timely alerts.

This architecture ensures HIPAA compliance, secure storage, real-time analytics, predictive modeling, and clinician visibility.

Question 94:

A logistics company wants to implement a real-time fleet tracking and route optimization system. Vehicles emit GPS, fuel, and sensor telemetry every few seconds. The system must update driver locations in real time, optimize routes based on traffic and deliveries, predict delays using ML, and provide dashboards to operations managers. What architecture should the Cloud Architect recommend?

A) Use Cloud SQL for storing telemetry and Compute Engine batch jobs for route optimization
B) Use Pub/Sub for ingestion, Dataflow for real-time telemetry processing, Bigtable for operational location queries, BigQuery for historical analytics, Vertex AI for delay prediction, and Cloud Run for route optimization APIs
C) Use Firestore to store vehicle events and Cloud Functions for routing
D) Use Cloud Storage to store GPS logs and run nightly ML jobs

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Fleet tracking requires ingestion of high-frequency GPS and telemetry streams. Pub/Sub is ideal for scalable, real-time ingestion. Dataflow processes streams to compute driver locations, detect anomalies (such as deviations from planned routes), and enrich telemetry with contextual data like weather or traffic.

Bigtable stores live vehicle locations, supporting low-latency geospatial queries for driver assignment and route calculations. This enables sub-second responses to rider or delivery requests. BigQuery stores historical vehicle telemetry for analysis, identifying patterns in delays, fuel usage, and route efficiency.

Vertex AI provides predictive modeling for delays and ETA forecasting using historical data and real-time telemetry. Models can detect traffic congestion, predict late deliveries, and optimize driver routing. Cloud Run hosts APIs for routing and driver assignment logic, allowing operations managers to view dashboards and dynamically adjust assignments.

Other options do not meet real-time, low-latency requirements. Cloud SQL and batch processing cannot handle high-frequency telemetry. Firestore and Cloud Functions may scale poorly under large fleet conditions. Cloud Storage batch processing introduces unacceptable latency.

This architecture ensures scalable, real-time ingestion, low-latency operational queries, predictive analytics, and optimized fleet routing.

Question 95:

A global e-commerce platform wants to build a fraud detection system that evaluates transactions in real time. The system must ingest transaction events, detect anomalies, integrate ML models for scoring, and notify downstream payment and fraud workflows. The solution must scale to millions of events per second. What architecture should the Cloud Architect recommend?

A) Use Cloud SQL for transactions and run batch ML scoring
B) Use Pub/Sub for ingestion, Dataflow for streaming ETL, Bigtable for low-latency fraud lookups, BigQuery for analytics, Vertex AI for predictive scoring, and Cloud Run for downstream APIs
C) Use Firestore to store transactions and Cloud Functions for scoring
D) Use Cloud Storage for logs and nightly batch processing

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Fraud detection requires rapid ingestion and evaluation of massive transaction streams. Pub/Sub handles millions of events per second, supporting bursty traffic and decoupling producers from consumers. Dataflow performs real-time ETL, feature extraction, and anomaly detection. Features may include transaction velocity, geolocation deviations, device fingerprinting, and historical patterns.

Bigtable provides low-latency access for fraud lookups, such as blacklisted accounts, suspicious IPs, or previous fraudulent patterns. BigQuery stores historical transaction data for analytics, model training, and regulatory reporting. Vertex AI trains predictive models (e.g., classification, anomaly detection) and provides real-time scoring endpoints.

Cloud Run hosts APIs for fraud workflows and downstream notifications, scaling automatically with event volume. This architecture ensures low-latency, scalable, and predictive fraud detection. Options A, C, and D cannot handle real-time high-throughput workloads or complex ML requirements.

Question 96:

A multinational bank wants to build a real-time customer transaction monitoring system to detect money laundering. The system must ingest millions of transactions per second, compute risk scores, correlate transactions across accounts and regions, and alert compliance teams. What architecture should the Cloud Architect recommend?

A) Store transactions in Cloud SQL and run batch anomaly detection
B) Use Pub/Sub for ingestion, Dataflow for stream processing, Bigtable for operational scoring lookups, BigQuery for analytics, Vertex AI for ML-based risk scoring, and Cloud Run for alerting APIs
C) Use Firestore for transactions and Cloud Functions for anomaly detection
D) Use Cloud Storage for logs and nightly batch processing

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Run

Explanation:

Anti-money laundering requires high-throughput real-time monitoring and correlation across millions of transactions. Pub/Sub provides durable, scalable ingestion capable of handling global transaction bursts. It decouples transaction producers (branch systems, ATMs, online banking) from downstream analytics.

Dataflow handles stream processing, applying transformations such as transaction enrichment with customer profiles, regional aggregation, and windowed event correlation. This allows detection of unusual patterns like structuring, rapid fund movements, or cross-account transfers indicative of money laundering. Dataflow supports stateful processing, enabling rolling aggregates and multi-step anomaly detection.

Bigtable stores operational scoring data such as blacklists, risk scores, and watchlists for fast sub-second lookups during real-time evaluation. Low-latency access ensures transactions are scored within milliseconds, critical for compliance systems to block suspicious activity before settlement.

BigQuery stores historical transaction data for deeper analysis, model training, trend analysis, and regulatory reporting. Analysts can identify long-term suspicious patterns, validate models, and conduct audits.

Vertex AI hosts ML-based risk scoring models trained on historical transactions. Models predict the likelihood of suspicious behavior using supervised and unsupervised learning. Real-time endpoints allow immediate scoring of incoming transactions.

Cloud Run exposes APIs for alerts, compliance workflows, and downstream integrations. This architecture ensures regulatory compliance, real-time scoring, and global scalability. Alternatives A, C, and D fail due to insufficient scale, latency, or analytics capability.

Question 97:

A global streaming service wants to implement a recommendation engine that adapts in real time to user interactions with content. The system must handle millions of users, track engagement events, update recommendations instantly, and support ML-driven personalization. What architecture should the Cloud Architect recommend?

A) Cloud SQL with batch ML scoring
B) Pub/Sub for event ingestion, Dataflow for stream processing, BigQuery for historical analytics, Vertex AI for model training and inference, Cloud Run for serving recommendations
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch processing

Answer: B) Pub/Sub → Dataflow → BigQuery → Vertex AI → Cloud Run

Explanation:

Real-time recommendation engines require ingestion of massive streams of user interaction events, such as video views, likes, skips, and searches. Pub/Sub is optimal for ingesting high-volume, global events reliably and asynchronously. It decouples producers (apps, web clients, devices) from the recommendation pipelines, providing durability and elasticity during traffic spikes.

Dataflow processes streams to extract features, compute aggregates (session-level engagement, content similarity, user embedding vectors), and enrich events with metadata like genre, region, and device type. Stateful processing allows calculation of rolling features necessary for live personalization.

BigQuery stores historical engagement data, user profiles, and content metadata for analytics and training ML models. Analysts can explore trends, measure content performance, and extract features for recommendation models.

Vertex AI trains models using BigQuery datasets and serves predictions via real-time endpoints. Models can use collaborative filtering, deep learning, or hybrid approaches. Continuous retraining ensures recommendations adapt to changing user preferences.

Cloud Run serves recommendations via APIs to apps or websites. Autoscaling ensures responsiveness during peak periods. Alternatives A, C, and D fail to deliver low-latency, scalable, and adaptive recommendation services.

Question 98:

A global telecommunications provider wants to implement a real-time network anomaly detection platform. Millions of network events, including device telemetry, connection logs, and throughput metrics, must be ingested, processed, and analyzed to detect outages, congestion, and potential security threats. Which architecture should the Cloud Architect recommend?

A) Use Cloud SQL and batch processing
B) Use Pub/Sub for ingestion, Dataflow for stream processing, Bigtable for low-latency operational metrics, BigQuery for historical analysis, Vertex AI for predictive detection, Cloud Monitoring for alerts
C) Use Firestore for telemetry and Cloud Functions for analysis
D) Use Cloud Storage with nightly batch jobs

Answer: B) Pub/Sub → Dataflow → Bigtable → BigQuery → Vertex AI → Cloud Monitoring

Explanation:

Network anomaly detection requires ingesting high-frequency telemetry from devices, routers, and switches. Pub/Sub ensures reliable, high-throughput ingestion while decoupling network devices from processing pipelines.

Dataflow performs real-time transformations, aggregations, and anomaly detection computations. This includes sliding-window metrics, threshold-based alerts, statistical deviation detection, and feature generation for ML models.

Bigtable stores operational metrics for low-latency dashboards, such as device status, throughput, or regional congestion metrics. Fast lookups enable immediate visibility into anomalies.

BigQuery stores historical telemetry for trend analysis, root-cause investigations, and model training. Vertex AI predicts anomalies and potential outages using historical patterns and streaming metrics.

Cloud Monitoring integrates with real-time alerts, dashboards, and automated workflows for network engineers. Other options cannot meet the scale, latency, or predictive requirements.

Question 99:

A logistics company wants to implement a predictive maintenance system for its delivery vehicles. Vehicle telemetry includes engine temperature, vibration, fuel usage, and GPS location. The system must detect anomalies, predict failures, and alert maintenance teams. What architecture should the Cloud Architect recommend?

A) Use Cloud SQL for telemetry and batch ML
B) Pub/Sub for ingestion, Dataflow for processing, Cloud Storage for raw data, BigQuery for analytics, Vertex AI for predictive modeling, Looker for dashboards
C) Firestore for telemetry and Cloud Functions for alerts
D) Cloud Storage only with offline ML

Answer: B) Pub/Sub → Dataflow → Cloud Storage → BigQuery → Vertex AI → Looker

Explanation:

Predictive maintenance relies on continuous ingestion of telemetry from vehicles. Pub/Sub ensures high-throughput event capture and decoupling of vehicle data sources.

Dataflow transforms raw telemetry, computes features (rolling averages, vibration frequency analysis, anomaly detection), and enriches events with vehicle metadata. Cloud Storage stores raw telemetry for auditing, historical analysis, and retraining ML models.

BigQuery stores structured features for historical analysis and supports analytics queries for maintenance trends, failure patterns, and vehicle utilization.

Vertex AI hosts ML models that predict failures or required maintenance. Models analyze patterns over time, correlating multiple telemetry streams. Looker dashboards provide maintenance teams with alerts, KPIs, and predictive insights for fleet management.

Question 100:

A global ride-hailing company wants to implement a dynamic pricing engine that calculates fares based on real-time demand, supply, traffic, and weather conditions. The system must scale to millions of concurrent requests and update prices within milliseconds. Which architecture should the Cloud Architect recommend?

A) Cloud SQL with batch scripts
B) Pub/Sub for event ingestion, Dataflow for real-time feature computation, Bigtable for operational metrics, Vertex AI for pricing predictions, Cloud Run for serving fare calculations
C) Firestore with Cloud Functions
D) Cloud Storage with nightly batch ML

Answer: B) Pub/Sub → Dataflow → Bigtable → Vertex AI → Cloud Run

Explanation:

Dynamic pricing relies on continuous streams of ride requests, driver availability, traffic updates, and weather information. Pub/Sub handles real-time ingestion at massive scale.

Dataflow processes streams to compute real-time features such as local demand-supply ratios, average ETA per region, and traffic congestion metrics.

Bigtable stores operational metrics for rapid lookups during pricing calculations, ensuring sub-200 millisecond responses.

Vertex AI predicts optimal fare adjustments using ML models trained on historical data and real-time features. Cloud Run serves APIs for fare calculation to apps, scaling automatically under high load.
