Microsoft PL-300 Microsoft Power BI Data Analyst Exam Dumps and Practice Test Questions Set 3 Q41-60
Question 41:
You are designing a Power BI model where multiple fact tables reference a shared Date table. You need to ensure time-intelligence functions work correctly across the entire model. What should you do to the Date table?
A) Mark the table as a date table and ensure it has a continuous date column
B) Add the Date table to each fact table as a calculated column
C) Create inactive relationships between the Date table and all fact tables
D) Use the Auto Date/Time feature instead of a separate Date table
Answer:
A) Mark the table as a date table and ensure it has a continuous date column
Explanation:
The correct answer is A) Mark the table as a date table and ensure it has a continuous date column. In Power BI, time-intelligence functions depend heavily on having a properly configured Date table. This includes having an unbroken, continuous column of dates and marking that table as the official date table within the model settings. Once this is done, Power BI recognizes this table as the standard reference for all time calculations, enabling functions such as year-to-date, month-to-date, rolling averages, same period last year comparisons, and custom time-intelligence scenarios that rely on accurate chronological sequencing.
When a Date table is marked correctly, the model becomes context-aware for any time-based filter, allowing DAX functions to understand which dates belong to which segments of time. This remains foundational in PL-300, because candidates are expected to build models that behave consistently when users apply slicers, drill through data, or evaluate multi-year datasets. For example, if a user selects a certain quarter in a slicer, all measures referencing that Date table automatically adjust based on the correct hierarchy.
Option B is incorrect because adding date values as calculated columns in fact tables does not replace the role of a proper Date table. Fact tables often contain gaps in dates, lack proper hierarchies, and do not support time-intelligence functions because they are not designed to serve as a fully structured calendar. Calculated columns also add unnecessary data to the model, increasing memory usage without offering any modeling advantages.
Option C is incorrect because relationships from the Date table to fact tables should normally be active. Inactive relationships are only used in special scenarios, typically resolved by using the USERELATIONSHIP function. By default, active one-to-many relationships between the Date table and each fact table are required for time-intelligence functions to compute accurately. Using inactive relationships as the primary structure would break the model’s ability to propagate date filters naturally.
Option D is incorrect because Auto Date/Time is a simplified feature designed for small, ad-hoc reports, not for enterprise-level modeling. It creates hidden date tables for each date column, which leads to multiple redundant date tables. This prevents the use of global time-intelligence logic and creates inconsistent behavior across the model. Large or production-level datasets should always use a well-structured, manually defined Date table.
Understanding why marking a Date table is essential requires grasping how filter context and row context (together, the evaluation context) work in Power BI. Time-intelligence functions such as DATEADD, DATESINPERIOD, SAMEPERIODLASTYEAR, and TOTALYTD operate on time intervals that depend on an unbroken sequence of dates. Without a marked Date table, these functions can return incorrect or blank results because Power BI cannot identify which column should be interpreted as the primary source of chronological order.
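As a minimal sketch of how this is typically set up, the calculated Date table and year-to-date measure below assume a Sales table with an Amount column and a date range that covers the model; the table, column, and measure names are illustrative, and the table must still be marked as a date table in the model settings afterward:

Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2026, 12, 31 ) ),  -- continuous, gap-free range
    "Year", YEAR ( [Date] ),
    "Month Number", MONTH ( [Date] ),
    "Month", FORMAT ( [Date], "MMM YYYY" )
)

Sales YTD =
TOTALYTD ( SUM ( Sales[Amount] ), 'Date'[Date] )

Once the table is marked as the date table and related to each fact table, the same Sales YTD measure responds correctly to any slicer or hierarchy built on it.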
A correctly configured Date table also allows multi-fact-table models to function smoothly. For example, a Sales table and an Inventory table may both reference the same Date table. When they filter by month, quarter, or year, the Date table ensures synchronized filtering and accurate comparisons. This is important in models where combined visuals show measures from different tables but still align on the same calendar periods, enabling unified cross-table insights.
In addition, the Date table supports hierarchies that help users drill down from years to quarters to months and days, creating interactive storytelling within reports. This helps analysts identify trends, seasonal patterns, performance cycles, and key turning points. For instance, a retail business can easily analyze holiday season sales spikes because the Date table provides structured breakdowns of those periods.
Finally, marking a Date table helps avoid unpredictable behavior in visuals. A report without a properly configured Date table may display incorrect calculations, empty visuals, or inconsistent totals. For the PL-300 exam, candidates must demonstrate the ability to avoid these issues by configuring models correctly. Understanding the relationship between the Date table and fact tables is fundamental for building robust and scalable analytical solutions.
Question 42:
You need to optimize the performance of a Power BI model that includes multiple large fact tables. You want to reduce memory consumption without significantly affecting aggregation accuracy. What is the best approach?
A) Use column-level data profiling to identify columns for removal or datatype reduction
B) Convert all columns to text so they compress better
C) Merge all fact tables into a single wide table
D) Use calculated columns to pre-aggregate numerical fields
Answer:
A) Use column-level data profiling to identify columns for removal or datatype reduction
Explanation:
The correct answer is A) Use column-level data profiling to identify columns for removal or datatype reduction. In Power BI, performance optimization begins with reducing the memory footprint of tables, because VertiPaq, the in-memory engine behind Power BI, compresses data based on column cardinality and datatypes. When you analyze each column, you can identify opportunities to reduce datatypes, eliminate unnecessary fields, and remove columns that do not contribute to analysis or relationships.
Data profiling helps determine whether numeric fields can be stored as whole numbers instead of decimals, whether date/time fields can be simplified to date-only formats, or whether highly unique columns such as GUIDs can be removed if not explicitly required. Reducing cardinality increases compression efficiency and can dramatically improve model performance. This is a critical skill tested in PL-300, where candidates must demonstrate the ability to optimize models for scalability and efficiency.
Option B is incorrect because converting columns to text does not improve compression. In fact, text columns are the least compressible in VertiPaq due to their high cardinality and lack of numeric structure. Storing data as text when not necessary leads to significant memory inefficiency and degraded performance.
Option C is incorrect because merging all fact tables into a single wide table increases memory usage, destroys granularity, and breaks the dimensional modeling approach recommended for Power BI. Wide tables increase column count, reduce compression efficiency, and make the model significantly harder to maintain. Proper star schema modeling is always preferred over unnecessarily merging large fact tables.
Option D is also incorrect because creating calculated columns increases the physical data stored within the model. Calculated columns occupy memory just like imported columns, and they do not provide the row-level performance improvements needed for large datasets. Measures, not calculated columns, should be used for aggregation because they do not occupy stored memory.
Performance optimization in Power BI involves understanding how VertiPaq processes and compresses data. Columns with lower cardinality compress better, reducing storage costs and improving query execution time. By profiling columns, analysts can make informed decisions about removing redundant fields from fact tables, such as columns used only for operational purposes that do not contribute to business reporting.
In addition, data profiling helps ensure that relationships between tables remain efficient. Reducing unnecessary columns from dimension tables improves relationship evaluation speed and helps visuals load faster. For instance, a dimension table containing dozens of descriptive columns may be trimmed down by removing fields irrelevant to reporting, reducing model bloat.
For large models, datatype optimization plays a central role. For example, changing decimal numbers to whole numbers reduces memory usage. Changing large text labels into surrogate keys further decreases cardinality and improves lookup performance. Analysts who understand these optimizations can build scalable datasets capable of supporting millions of records while maintaining performance.
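As a hedged illustration of these reductions in Power Query, the M sketch below removes operational-only fields and narrows datatypes in one pass; the source, column names, and types are hypothetical stand-ins for whatever profiling identifies in a real model:

let
    // Source stands in for the connector that returns the fact table (server, database, and query are placeholders)
    Source = Sql.Database("warehouse-server", "SalesDW", [Query = "SELECT * FROM FactSales"]),
    // Drop columns that serve no reporting or relationship purpose
    RemovedColumns = Table.RemoveColumns(Source, {"RowGuid", "InternalNote"}),
    // Narrow datatypes: whole numbers instead of decimals, date instead of datetime
    ChangedTypes = Table.TransformColumnTypes(
        RemovedColumns,
        {{"Quantity", Int64.Type}, {"OrderTimestamp", type date}}
    )
in
    ChangedTypes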
Power BI provides several tools for this purpose, including the column quality, distribution, and profile features in Power Query, VertiPaq Analyzer in DAX Studio, and memory usage metrics. The ability to interpret these metrics and make decisions about column retention reflects a deep understanding of data modeling principles.
In real-world environments, proper data profiling ensures that business-critical dashboards run smoothly without forcing users to wait for visuals to load. For example, a finance dashboard with multiple high-cardinality fields would perform poorly without proper optimization. By removing or reducing such fields, dashboard performance becomes more consistent and predictable.
Overall, using column-level profiling reflects a professional approach to model optimization, making it the best practice aligned with PL-300 expectations.
Question 43:
A business requirement states that a specific measure must always ignore all user-applied filters except the Date slicer. What DAX pattern should you use?
A) CALCULATE with ALL and KEEPFILTERS
B) REMOVEFILTERS on all columns including the Date table
C) SUMX over FILTER
D) Using only a basic SUM measure
Answer:
A) CALCULATE with ALL and KEEPFILTERS
Explanation:
The correct answer is A) CALCULATE with ALL and KEEPFILTERS. When a measure needs to ignore all filters except those applied to the Date table, the best DAX approach is to use CALCULATE to modify filter context. ALL removes filters from specific tables or columns, while KEEPFILTERS ensures that the Date filter remains intact. This pattern gives precise control over which filters are overridden and which must persist.
Option B is incorrect because removing filters from the Date table would violate the requirement to preserve the Date slicer. Using REMOVEFILTERS globally would wipe out all filter context, making the measure unusable for time-sensitive calculations.
Option C is incorrect because SUMX over FILTER does not inherently override or preserve specific filters. Without CALCULATE to modify context, the measure remains fully dependent on existing user interactions.
Option D is incorrect because a plain SUM measure simply aggregates the visible rows and cannot override or selectively preserve filters.
The ability to selectively override filters is a core concept in advanced DAX. This requirement appears frequently in real-world scenarios, such as calculating total sales regardless of product, region, or customer selections, while still respecting the selected time period. Such measures often support KPIs, executive summaries, or benchmark calculations.
When using CALCULATE with ALL, Power BI expands the filter context so that the specified table or column behaves as if no filters were applied, enabling consistent calculations across reports. The KEEPFILTERS function allows designers to preserve specific slicer inputs, such as those from the Date table, ensuring the measure aligns with user-selected time periods.
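A minimal sketch of the pattern, assuming Product and Customer dimension tables and an existing [Total Sales] measure; extend the ALL list to cover every table whose filters should be ignored:

Sales (Date Filters Only) =
CALCULATE (
    [Total Sales],
    ALL ( 'Product' ),                          -- ignore product selections
    ALL ( 'Customer' ),                         -- ignore customer selections
    KEEPFILTERS ( VALUES ( 'Date'[Date] ) )     -- preserve the dates chosen in the Date slicer
)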
This pattern is also used in year-to-date comparisons, baseline revenue calculations, or scenario analysis where only the time parameter should remain dynamic. It allows analysts to maintain alignment with time dimensions while freeing calculations from unwanted interferences by other slicers.
Mastering selective filter removal is essential for PL-300 candidates because it demonstrates the ability to produce stable, predictable, and strategic measures within complex report environments.
Question 44:
You need to allow users to switch between multiple measures, such as Sales, Profit, and Cost, using a slicer. Which Power BI feature should you use?
A) Field parameters
B) Bookmarks
C) Conditional formatting
D) Model relationships
Answer:
A) Field parameters
Explanation:
The correct answer is A) Field parameters. Field parameters allow report consumers to select which measures or dimensions they want to display in visuals, enabling dynamic switching between multiple analytical perspectives. This feature improves flexibility and user control, allowing a single visual to adapt instantly based on slicer selections.
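Under the hood, creating a field parameter generates a calculated table similar to the sketch below, assuming existing measures named [Total Sales], [Total Profit], and [Total Cost]; the first column feeds the slicer, the NAMEOF reference tells the visual which field to display, and the last column controls sort order:

Metric Selector = {
    ( "Sales", NAMEOF ( [Total Sales] ), 0 ),
    ( "Profit", NAMEOF ( [Total Profit] ), 1 ),
    ( "Cost", NAMEOF ( [Total Cost] ), 2 )
}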
Option B is incorrect because bookmarks capture static states and are not intended for dynamic data switching. Option C is unrelated because conditional formatting changes the appearance of data, not which measures are displayed. Option D does not enable measure selection.
Field parameters are essential for providing interactive, customizable reports that empower users and reduce clutter by eliminating the need for multiple visuals performing similar functions. They also support advanced storytelling by allowing users to explore insights more freely.
Question 45:
You want to prevent users from viewing detailed transactional data while still allowing them to see aggregated results. Which Power BI security feature is appropriate?
A) Row-level security with data masking or summarized views
B) Hiding columns only
C) Changing visual types
D) Disabling drill-down
Answer:
A) Row-level security with data masking or summarized views
Explanation:
The correct answer is A) Row-level security with data masking or summarized views. This approach restricts access to transactional-level data while still exposing aggregated metrics. Using RLS ensures sensitive rows are hidden, and summarized views or aggregation tables allow users to see totals without revealing granular information.
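As a hedged sketch, an RLS role filter on the transactional table can map the signed-in user to the slice of data they are allowed to see; the Transactions table, the UserSecurity mapping table, and the column names here are hypothetical:

-- DAX filter expression defined on the Transactions table in Manage roles
'Transactions'[Region] = LOOKUPVALUE (
    'UserSecurity'[Region],
    'UserSecurity'[UserEmail], USERPRINCIPALNAME ()
)

In the summarized-view approach described above, a separate aggregated table or dataset exposes only totals, so consumers never receive row-level transactions in the first place.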
Options B, C, and D cannot reliably protect sensitive data. Hiding columns does not secure data, visual changes do not prevent access via export, and disabling drill-down still leaves underlying data exposed unless proper RLS is implemented.
Securing details while preserving analytical value is essential in enterprise BI environments, aligning with PL-300 expectations for governance and responsible design.
Question 46:
You want to ensure that a Power BI report refreshes significantly faster without modifying the underlying data sources. The dataset contains several large tables, and only a few of those tables are used in the report visuals. What is the most effective approach to improve performance?
A) Disable loading of unused tables in Power Query
B) Change the storage mode of all tables to DirectQuery
C) Create multiple relationships between tables
D) Replace all DAX measures with calculated columns
Answer:
A) Disable loading of unused tables in Power Query
Explanation:
The correct answer is A) Disable loading of unused tables in Power Query. Power BI allows you to prevent specific tables from being loaded into the report model even though they remain available during transformations in Power Query. This technique reduces the size of the model, decreases memory usage, improves compression, and results in faster refreshes. Since VertiPaq compression works best with smaller and cleaner datasets, excluding unnecessary tables is a practical and effective optimization technique aligned with PL-300 best practices.
Power Query frequently contains intermediate tables used only for shaping or staging data. These tables are often generated as part of merge operations, custom transformations, or exploratory steps while building queries. If these staging tables remain loaded, they occupy storage even when not needed in analysis or visualizations. By disabling the load on such tables, the developer ensures that only relevant, clean, and optimized tables remain in the data model. This greatly reduces the dataset’s size, which directly influences refresh times.
Option B is not correct because converting all tables to DirectQuery would not automatically improve performance. In fact, DirectQuery tends to slow down report interaction and places heavy load on the underlying data source. DirectQuery is appropriate only when very large datasets cannot be imported or when real-time data is needed. It is not a performance solution for reports with already optimized import tables.
Option C is incorrect because creating multiple relationships adds complexity and can degrade performance. Power BI allows only one active relationship at a time between two tables. Additional relationships become inactive and require the USERELATIONSHIP function, which increases query complexity and processing overhead. Adding more relationships does not improve refresh performance and may introduce ambiguity.
Option D is incorrect because calculated columns are stored physically in the model, increasing dataset size and slowing refresh. Measures, by contrast, are evaluated at query time and do not increase storage. Replacing DAX measures with calculated columns worsens model efficiency and goes directly against recommended modeling practices.
Understanding why disabling unused tables boosts performance requires understanding how VertiPaq processes models during refresh. When Power BI refreshes a dataset, it first retrieves data from each query, loads it into memory, applies compression, builds dictionary encoding, and creates internal indexes. Each additional table increases processing time and memory consumption. Removing unnecessary tables streamlines all these steps, making refresh faster.
Another reason this method is effective is that refresh involves more than simply copying rows: each query is evaluated in Power Query and the results are then encoded and compressed into the model, and some queries may be evaluated more than once during this process. Eliminating unused tables means less duplicated work and fewer transformations, which ensures quicker refresh cycles.
In enterprise contexts, large models often include staging tables from initial development phases that were never removed. PL-300 candidates must demonstrate the ability to clean these tables from the final model, ensuring only analysis-relevant tables remain. This is especially important when datasets are scheduled to refresh multiple times per day or when they are used in Premium capacity environments.
Disabling load also improves model maintainability. Smaller models are easier to audit, easier to understand, and quicker to troubleshoot. When only relevant tables remain, relationship diagrams become clearer, allowing developers and analysts to understand data dependencies without confusion. This step also helps optimize DAX performance, since smaller models reduce the number of columns evaluations must scan.
Finally, dataset refresh improvements contribute directly to user experience. Faster refresh cycles prevent delays in scheduled data updates, reduce strain on gateway connections, and ensure business stakeholders receive near-real-time insights where possible. In organizations where daily or hourly refreshes are critical, disabling unnecessary tables is a vital modeling technique.
Question 47:
A dataset uses DirectQuery mode because the underlying data exceeds Power BI’s import limits. However, report users complain that visuals load slowly and time out during interaction. What is the best way to improve performance while still working with large data?
A) Implement aggregations in Import mode with DirectQuery detail tables
B) Convert all visuals to table visuals
C) Disable relationship enforcement
D) Duplicate the DirectQuery tables to improve performance
Answer:
A) Implement aggregations in Import mode with DirectQuery detail tables
Explanation:
The correct answer is A) Implement aggregations in Import mode with DirectQuery detail tables. Aggregations allow Power BI to store summarized versions of large DirectQuery tables in import mode while leaving detailed rows in DirectQuery. When visuals reference high-level summaries, Power BI retrieves data from fast, cached, in-memory imported tables instead of querying the database directly. Only when users drill to granularity that the import table cannot handle does Power BI fall back to DirectQuery mode.
This hybrid design boosts performance considerably while still allowing massive datasets to remain accessible. It also ensures that business users can interact with dashboards quickly, even when data volumes exceed Power BI’s import limits. This approach aligns with PL-300 modeling principles, which emphasize optimization, hybrid techniques, and balancing performance with accuracy.
Option B is incorrect because converting all visuals to table visuals does nothing to improve data retrieval speeds. DirectQuery delays occur at the data source level, not in the Power BI visualization layer. Table visuals often run slower because they request more rows, increasing the load on query execution.
Option C is incorrect because relationship enforcement ensures accuracy and prevents incorrect joins. Disabling this behavior does not improve performance and can result in incorrect results or broken visuals. Query optimization should not compromise data integrity.
Option D is incorrect because duplicating DirectQuery tables actually increases model complexity, worsens performance, and increases query load on the source. It also makes relationships harder to manage and does not reduce query execution time.
Implementing aggregations involves understanding business questions and identifying which aggregated level stakeholders commonly use. For instance, sales datasets may be summarized by Product, Month, and Region. This summary table can be created in Import mode and synchronized with the main DirectQuery source. The summarized table becomes the primary table for most report visuals, allowing fast in-memory analysis.
Aggregations also reduce load on data warehouses or SQL engines by decreasing the number of DirectQuery calls. Large organizations typically experience performance bottlenecks when hundreds of users interact with dashboards simultaneously. Aggregations allow most queries to be served locally in Power BI Service, improving reliability and scalability.
Implementing aggregations properly also typically involves setting the shared dimension tables to Dual storage mode, enabling Power BI to serve them from memory alongside the imported aggregation table or join them to the DirectQuery detail table, depending on which path a query takes. This dynamic switching is part of what makes aggregations a powerful optimization tool.
Additionally, aggregations can be combined with incremental refresh policies for further efficiency. While incremental refresh optimizes data refresh cycles, aggregations optimize query performance, together creating a robust model capable of handling large data at scale.
This entire concept is central to PL-300 training because modern data models often involve billions of rows, and candidates must understand how to architect solutions that maintain performance while enabling rich analytics.
Question 48:
You are building a DAX measure that calculates rolling 90-day revenue. The requirement states that the calculation must respond to filters but must always use the last date in the current filter context as the reference point. Which DAX function is essential for this?
A) LASTDATE
B) EARLIER
C) RAND
D) SUMX
Answer:
A) LASTDATE
Explanation:
The correct answer is A) LASTDATE. Rolling window calculations, such as rolling 30, 60, or 90 days, depend on identifying the final date within the active filter context. LASTDATE returns the last chronological date from the current context, allowing developers to anchor the rolling calculation correctly. Once that date is identified, functions like DATESINPERIOD can construct the preceding window, enabling dynamic rolling measures.
Option B, EARLIER, is incorrect because it is used in row-context scenarios for calculated columns, not dynamic measures. It does not help identify date boundaries for rolling calculations.
Option C, RAND, is irrelevant here; it generates random numbers and has no analytical role in time-intelligence functions.
Option D, SUMX, is a row iterator and does not identify context boundaries. It may be used in combination with rolling windows but cannot define the rolling period on its own.
Rolling window calculations are essential in analytics because they reveal short-term performance trends unaffected by calendar boundaries. Businesses often prefer rolling measures because they provide smoother insights compared to monthly or quarterly summaries. For example, rolling 90-day revenue helps identify consistent performance trends and avoids distortions caused by month-end or quarter-end anomalies.
Understanding LASTDATE involves understanding both filter context and time-intelligence behavior. For example, if a user selects only Q1 of a given year, LASTDATE returns March 31. If a user selects multiple years, it returns the final date in that selection. Rolling measures then adjust dynamically based on what users choose, making dashboards interactive and context-aware.
In combination with DATESINPERIOD, LASTDATE becomes the cornerstone of rolling window calculations. The general pattern for rolling 90-day revenue looks like:
CALCULATE (
    SUM ( Revenue[Amount] ),
    DATESINPERIOD ( 'Date'[Date], LASTDATE ( 'Date'[Date] ), -90, DAY )
)
This formula recalculates automatically whenever the filter context changes. Such flexibility is a hallmark of advanced DAX and is frequently assessed in PL-300, where candidates must demonstrate the ability to create dynamic, responsive calculations that serve real-world analytics needs.
Finally, LASTDATE ensures that reports tied to daily, weekly, or ad-hoc selections remain accurate and up-to-date, making it indispensable in modern BI scenarios.
Question 49:
A report requires grouping customers into three categories based on their annual spending: low, medium, and high. The categories must update dynamically as sales data changes. What feature should you use?
A) DAX calculated column with SWITCH and logic based on a measure
B) Manual groups created in Power BI Desktop
C) Hierarchy levels in a dimension table
D) Formatting options in the visual
Answer:
A) DAX calculated column with SWITCH and logic based on a measure
Explanation:
The correct answer is A) DAX calculated column with SWITCH and logic based on a measure. Customer segmentation based on rule-driven logic can be achieved by creating a calculated column that categorizes customers depending on their spending levels. Using SWITCH allows developers to define clear rules, such as categorizing customers into low, medium, or high tiers. Because the column references a spending measure, the classification recalculates on every dataset refresh, so the tiers stay current as new sales data arrives; if the segmentation must also react to slicer selections at query time, the same SWITCH logic can instead be expressed as a measure.
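A minimal sketch of such a column, assuming it is defined on the Customer table and that an [Annual Sales] measure already exists; the thresholds and names are illustrative only:

Spending Tier =
SWITCH (
    TRUE (),
    [Annual Sales] >= 100000, "High",    -- context transition gives this customer's total
    [Annual Sales] >= 25000, "Medium",
    "Low"
)

Because calculated columns trigger context transition, [Annual Sales] evaluates per customer row, and the resulting text column can be used directly in slicers, legends, and filters.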
Option B is incorrect because manual groups are static and do not update dynamically. They require manual maintenance and do not respond to changes in the data model.
Option C is incorrect because hierarchy levels represent structured relationships and are not intended for segmentation based on numeric thresholds.
Option D is incorrect because formatting does not categorize data or create logical groupings.
Segmentation plays a major role in performance analysis, marketing, customer retention strategies, and predictive analytics. Dynamic segmentation allows analysts to track customer movement across categories and spot emerging high-value customers. PL-300 emphasizes building semantic models that adapt to evolving business conditions, making dynamic logic essential.
Using DAX to create segmentation also enables integration into visual filters, slicers, and advanced drill-through scenarios. For example, dashboards can show how many customers fall into each segment, their revenue contributions, and how these segments evolve over time. This leads to actionable insights while maintaining minimal maintenance overhead.
Furthermore, segmentation using DAX ensures consistent logic across reports. When segmentation logic is built into the model rather than visuals, every report page reflects the same interpretation, ensuring accuracy and trust.
Finally, using SWITCH provides readability and ease of modification, supporting clean modeling practices and long-term maintainability.
Question 50:
You need to display a table visual showing only the top 20 customers by sales, but the visual must still show the correct total sales for all customers, not just the top 20. Which DAX function is most appropriate?
A) TOPN in a visual-level filter
B) SUMMARIZECOLUMNS
C) REMOVEFILTERS only
D) HAVING clause in Power Query
Answer:
A) TOPN in a visual-level filter
Explanation:
The correct answer is A) TOPN in a visual-level filter. Applying TOPN in the visual filters pane allows you to display only the top 20 customers within a table visual while ensuring the measures still calculate totals based on the full dataset. Visual-level filters do not alter the filter context used in total-level calculations inside measures unless explicitly overwritten, allowing totals to remain global while the displayed rows are restricted.
Option B is incorrect because SUMMARIZECOLUMNS generates summary tables but does not control visual-level filtering in a report.
Option C is incorrect because REMOVEFILTERS does not implement top-N filtering and would prevent any row-wise filtering.
Option D is incorrect because Power Query is not used for dynamic top-N filtering based on user interactions.
Understanding how TOPN interacts with filters is crucial for PL-300. Power BI separates filter context applied at the visual level from the calculation context inside measures. By applying TOPN only at the visual level, the displayed rows are limited, but total-level calculations still reference the entire dataset unless measures override this behavior with CALCULATE or ALL.
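If the report must guarantee an all-customer total regardless of how the visual is filtered, a companion measure can restore the full customer context explicitly; a minimal sketch assuming a Customer dimension and an existing [Total Sales] measure:

Total Sales (All Customers) =
CALCULATE ( [Total Sales], REMOVEFILTERS ( 'Customer' ) )

Placing this measure in a card or as an extra column alongside the top-20 rows keeps the benchmark visible even when the visual itself is restricted.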
This technique is commonly used in executive dashboards where users need to monitor key contributors (e.g., top 10 customers or top 5 products) while still viewing overall metrics. Maintaining accurate totals ensures correct benchmarking and prevents misleading interpretations.
TOPN also preserves interactivity. When users apply slicers or drill into specific segments, the top-N selection recalculates dynamically, updating the visual based on context. This flexibility allows meaningful comparisons across time periods, regions, or product lines.
By mastering how TOPN interacts with filter context, analysts can create dynamic, accurate, and responsive reports, demonstrating a deep understanding of core DAX behaviors expected in PL-300.
Question 51:
You are designing a Power BI dataset for an enterprise that uses multiple CRM systems. Each CRM stores customer IDs in different formats, including numeric strings, alphanumeric patterns, and GUID-like structures. You need to standardize these IDs so that they can be matched reliably across all systems during the modeling phase. Which Power Query transformation is most appropriate to perform first to ensure consistency before applying additional logic?
A) Replace values
B) Format as clean text using Trim, Clean, and Lowercase
C) Pivot columns
D) Group by
Answer:
B) Format as clean text using Trim, Clean, and Lowercase
Explanation:
The most suitable first step when trying to standardize customer IDs originating from multiple CRM systems is option B) Format as clean text using Trim, Clean, and Lowercase. Customer IDs often contain inconsistent spacing, hidden characters, casing inconsistencies, or formatting noise that can prevent accurate matching later in the modeling or reporting layer. Power Query provides simple but crucial text-cleaning functions such as Trim, Clean, Lowercase, and in some cases even Replace or Extract functions to ensure that the columns being merged or used as keys have a common, predictable structure.
Before handling any complex logic like conditional columns, fuzzy matching, or custom transformations, the data must be normalized at the most basic level. This includes removing leading or trailing spaces, eliminating non-printable characters (which Clean handles), and converting text to a consistent case (usually lowercase) for matching purposes. Different CRM systems may store IDs such as “AB-123”, “ab-123”, “AB–123 ” with a trailing space, or even “AB–123” containing a visually identical character that is not technically the same ASCII symbol. Without cleaning, Power BI might interpret these as different values, which disrupts merge operations and relationships in the data model.
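A hedged sketch of this first cleanup pass in M, assuming the key column is named CustomerID and the source is a CSV export (the file path and options are placeholders):

let
    // Source stands in for whichever connector returns the CRM extract
    Source = Csv.Document(File.Contents("C:\crm\customers.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Trim, Clean, and Lowercase applied to the key column in a single step
    CleanedIds = Table.TransformColumns(
        PromotedHeaders,
        {{"CustomerID", each Text.Lower(Text.Clean(Text.Trim(_))), type text}}
    )
in
    CleanedIds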
Using Replace values (option A) before cleaning is risky because it presupposes that the values you are replacing are clean and consistent already. If an ID contains hidden characters, accidental formatting, or inconsistent casing, Replace might miss them entirely. Clean text transformations allow Replace operations to become more reliable after normalization.
Pivot columns (option C) is unrelated to text standardization. Pivoting reshapes data but does not assist in preparing text fields for matching or joining. Pivoting should only occur after data integrity is confirmed and after important key fields like customer IDs have been standardized. Attempting to pivot early in the process may even split identical IDs into different segments because of inconsistent formatting.
Group by (option D) also does not help as a first step. Grouping is used for aggregation or restructuring, not for cleaning. If you group dirty text values, you risk generating incorrect groups or multiple groups for values that should belong to the same customer but appear different due to hidden spaces or inconsistent capitalization.
Trimming and cleaning are essential before creating relationships in Power BI. Relationships rely on exact key matches, especially in star schema structures where dimension tables must connect accurately to fact tables. If customer IDs differ by even one hidden character, the relationship will not match, resulting in blank values or incomplete filtering behavior in visualizations.
Lowercasing IDs is also important because some CRM exports preserve upper- and lowercase variants depending on user entry or system behavior. Although the Power BI import engine compares text case-insensitively, DirectQuery sources may honor their own, often case-sensitive, collation, so enforcing consistent casing avoids unpredictable mismatches across storage modes.
Another advantage of performing the Trim, Clean, Lowercase combination first is that it creates a foundation for secondary transformations. Once text is standardized, you can then reliably apply transformations like Extract, Split, Merge Queries, or Conditional Columns. If you apply these transformations before cleaning, your results may still contain irregularities.
Cleaning also leads to more efficient table merging. When merging multiple CRM extracts, consistency in text formatting ensures higher accuracy. Without this step, fuzzy matching algorithms might be required instead. Fuzzy matching is helpful but should be used only after basic cleaning because fuzzy matching on dirty data can create incorrect joins, duplicate records, or false positives.
This cleaning step also improves performance by reducing unnecessary distinct values. For example, if trailing spaces cause multiple unique IDs, Power BI will treat each as a distinct category. This increases the cardinality of the column, which harms performance in memory and compression. Cleaning data reduces cardinality and improves the VertiPaq engine’s compression efficiency.
In enterprise-scale BI systems, standardized keys are critical not only for relationships but for row-level security filters as well. If an RLS rule uses customer IDs as part of security filtering and the IDs do not match exactly, users may either be denied access to data they should see or accidentally gain access to data they should not see. Clean, consistent text formatting avoids these security risks.
Another angle is that multiple CRM systems sometimes include formatting characters like hyphens, underscores, or prefixes. Once the basic cleaning is applied, you may add additional steps such as Replace values or Extract text to remove or standardize these characters. But again, these follow-up steps become reliable only when the baseline cleaning is done first.
Power Query best practices also recommend applying text standardization early in the query steps because it prevents errors from propagating into downstream transformations. A system that tries to merge tables and only then discovers formatting issues must backtrack and redo many transformations. By doing cleanup early, you build a more stable and maintainable data preparation pipeline.
Therefore, the correct first step to ensure consistent customer ID formatting across multiple CRM systems is to apply Trim, Clean, and Lowercase transformations in Power Query. This prepares the data for high-quality matching, merging, and modeling, ensuring an efficient, reliable, and secure Power BI solution.
Question 52:
You are designing a report for the finance department. They want a matrix visual that shows monthly revenue but also requires the ability to drill into daily revenue when needed. Which configuration should you use to allow users to expand and collapse the time hierarchy directly in the matrix?
A) Add a date hierarchy to the matrix rows
B) Create a disconnected table for the date levels
C) Use bookmarks to navigate between two separate pages
D) Create separate date columns and use them in slicers
Answer:
A) Add a date hierarchy to the matrix rows
Explanation:
The correct approach is option A) Add a date hierarchy to the matrix rows because the matrix visual in Power BI is specifically designed to work seamlessly with hierarchies. When you add a date hierarchy to the rows field well, users can expand or collapse the hierarchy levels directly inside the visual. These hierarchy levels typically include Year, Quarter, Month, and Day. By placing them in the matrix, users gain a built-in capability to navigate through time at different granularity levels without changing slicers, switching pages, or applying extra calculations. For financial reporting, this is extremely important, as finance teams often want to start at an aggregated level, such as monthly results, and then drill deeper into daily or even transaction-level data when needed.
Disconnected tables, as shown in option B, serve other purposes, such as scenario analysis, what-if parameters, or selector-style slicers that influence measures. However, they are not suitable for controlling visual-level drilldown within a matrix. A disconnected date table cannot directly replace the functionality of a date hierarchy. Even if you created separate hierarchies manually, they would not behave like a true date hierarchy tied to the model’s date dimension. In general, the best practice in Power BI modeling is to maintain a single, fully populated, continuous date table marked as a date table. This ensures that visuals such as matrices can handle time intelligence properly.
Option C, which suggests using bookmarks, is too cumbersome and is not ideal for drilldown behavior. Bookmarks can take snapshots of report states or help with navigation, but they cannot replicate the native expand/collapse features built into the matrix visual. Using bookmarks to navigate between multiple pages just to simulate drilling into daily data is inefficient and creates unnecessary complexity. It also forces report consumers to switch contexts and remember where they navigated from, decreasing the report’s usability. Drilldown functionality within the visual itself is much more intuitive.
Option D suggests using separate date columns in slicers. While slicers can influence the granularity of the data that users can see, they do not provide hierarchical drill functionality. For example, if a user selects a particular month in a slicer, they might see only daily data for that month, but they cannot expand and collapse the hierarchy levels. Slicers filter data, but they do not allow users to interact with the hierarchical structure itself. Furthermore, managing multiple slicers for different date granularity levels complicates the user experience and increases the likelihood of conflicting filter selections.
Hierarchies in Power BI are fundamental elements that support drilldown capabilities. When users click the plus or down-arrow icons in the matrix visual, they can immediately expand from months to days. This ability is especially beneficial for financial reporting because finance analysts frequently need to compare performance at different time levels. For instance, they may want to compare monthly revenue trends but also investigate whether a specific low-performing month was affected by particular days or holiday periods. Having the hierarchy in the matrix provides this insight with a single click.
Another advantage of using the date hierarchy is that it enhances time intelligence calculations. When the date table is properly marked as a date table and the hierarchy is based on that table, DAX measures like total YTD, MTD, and QTD behave correctly. If you used disconnected tables or separate columns in slicers, you would lose this benefit, as measures might not calculate correctly when the time dimension is fragmented. Strong modeling practices emphasize using a single date table with consistent relationships to the fact tables. The date hierarchy should be derived from this table to maintain predictability and accuracy.
Power BI’s Auto Date/Time feature can create hierarchies automatically, but best practice is to turn off Auto Date/Time and instead rely on a robust date dimension. When you use a proper date table, the hierarchy is explicit and easier to maintain. That hierarchy, once added to the matrix, behaves predictably across all visuals, not just one. This ensures consistency across the financial reporting ecosystem.
Lastly, using the date hierarchy in the matrix supports future scalability. If additional levels such as hourly data or fiscal periods need to be added later, the hierarchy can be modified easily within the date table. Users will instantly gain access to these new levels through the matrix visual without redesigning the report. This flexibility is crucial for enterprise environments where time granularity requirements often evolve.
For all these reasons, option A is the most appropriate and most efficient solution.
Question 53:
You are preparing Power BI data for a large retail company. They want to categorize stores into performance tiers based on sales, profitability, and customer satisfaction metrics. This categorization requires multiple conditional rules and needs to be refreshable automatically. Which technique should you use in Power Query?
A) Create a conditional column
B) Add a calculated table in DAX
C) Build a hierarchy in the model
D) Use field parameters
Answer:
A) Create a conditional column
Explanation:
The most appropriate approach when needing to categorize stores based on multiple dynamic, rule-based conditions is to create a conditional column in Power Query, making option A the correct choice. In retail analytics, performance tiers such as Platinum, Gold, Silver, and Bronze often depend on a combination of key performance indicators. These KPIs can include total sales, gross profit margin, customer satisfaction scores, foot traffic, or even return rates. Power Query’s conditional column feature is specifically designed for applying multi-level logic that transforms raw data into usable categories. It allows you to define tier logic based on thresholds that can be modified, expanded, or layered.
Option B, adding a calculated table in DAX, is not appropriate here because calculated tables are computed at the model level after the data loads, duplicating rows that already exist and adding memory overhead. While they can be useful for relationship helpers, parameter tables, or other model-level constructs, they are not intended for row-level transformation logic that depends on multiple numeric and conditional rules. Categorizing stores this way would require comparatively complex DAX, would keep the classification logic out of the data preparation layer where it is easiest to audit, and would bloat the model with a second copy of the store data.
Option C, building a hierarchy, has nothing to do with categorization. Hierarchies help with navigation and drilldown within visuals but do not create classification logic. A hierarchy is useful when the data already contains levels such as Country > Region > Store, but it cannot compute categories based on performance metrics.
Option D, field parameters, is designed for switching dimensions or measures in visuals. These allow users to choose which metrics they want to display but do not compute store performance tiers. While field parameters are powerful for creating interactive reports, they cannot replace the need for deterministic, rule-based calculations in data preparation.
Power Query’s conditional column has additional advantages. It preserves a single version of truth by embedding categorizations before the data enters the model. This enhances data governance, as users consuming the dataset—via reports, dashboards, or third-party BI tools—receive consistent tier classifications. If the logic were implemented at the visual or DAX level, different reports could potentially use different definitions, creating confusion and inconsistency across the business.
Another major benefit is transparency. Power Query provides step-by-step transformations that are easy to review, audit, and modify. When defining performance tiers, business rules often evolve—perhaps new metrics get added, thresholds change seasonally, or corporate goals shift. Conditional column definitions can be updated quickly through the user interface without writing complex code, making them suitable for analysts who may not be experts in DAX or SQL.
Power Query also offers advanced options for conditional logic beyond the basic UI. You can create custom columns using the M language, allowing for nested conditions, case-insensitive text comparisons, data type conversions, and multi-criteria evaluation. For example, you could define a tier rule such as:
If Sales > 1 million AND Profit Margin > 20% AND Customer Satisfaction > 85% → Platinum.
Power Query can easily support this logic in a readable format.
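A hedged M sketch of that rule as a custom column; the inline sample rows, column names, and thresholds are illustrative stand-ins for the real store KPI query:

let
    // Inline sample rows stand in for the query that returns one row per store with its KPI columns
    StoreMetrics = #table(
        {"Store", "Sales", "Profit Margin", "Customer Satisfaction"},
        {{"Store 001", 1250000, 0.23, 88}, {"Store 002", 400000, 0.12, 79}}
    ),
    AddedTier = Table.AddColumn(
        StoreMetrics,
        "Performance Tier",
        each
            if [Sales] > 1000000 and [Profit Margin] > 0.20 and [Customer Satisfaction] > 85 then "Platinum"
            else if [Sales] > 500000 and [Profit Margin] > 0.10 then "Gold"
            else if [Sales] > 250000 then "Silver"
            else "Bronze",
        type text
    )
in
    AddedTier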
Row-level transformations belong in Power Query because the model layer should primarily be used for relationships, measures, and aggregations—not for computing static classifications. Performance tiers are classifications, not calculations dependent on user selections. Therefore, transforming them early in the data pipeline ensures that storage and calculation requirements are minimized. By computing tiers during refresh, the VertiPaq engine stores the final categorical results efficiently, improving overall model performance.
Question 54:
You need to monitor data quality in your Power BI dataset by identifying nulls, duplicates, and out-of-range values. You want the monitoring to occur automatically during refresh. Which feature should you use?
A) Data profiling in Power Query
B) Report-level filters
C) Page tooltips
D) Matrix conditional formatting
Answer:
A) Data profiling in Power Query
Explanation:
The correct option is A) Data profiling in Power Query because data profiling is the dedicated feature in Power Query responsible for assessing data quality during the preparation stage. When working with datasets that must be monitored for quality issues such as null values, duplicates, inconsistencies, or invalid ranges, it is critical to identify and resolve these issues before the data loads into the model. Power Query provides built-in profiling tools that highlight the quality, distribution, and statistics of each column, ensuring that quality assessment integrates seamlessly into the refresh pipeline.
Column profiling is re-evaluated whenever queries are refreshed in the Power Query Editor in Power BI Desktop or in dataflows in the Power BI Service, by default over a preview of the data, with an option to profile the entire dataset. Combined with cleaning steps that run on every scheduled refresh, this keeps quality checks embedded in the preparation pipeline rather than relying on manual checks or separate validation queries. As organizations increasingly rely on automated BI pipelines, embedding data quality verification in the foundational layer ensures reliability throughout reporting.
Option B, report-level filters, is entirely unrelated to data quality monitoring. Filters simply restrict what data appears in reports. If there are nulls or duplicates in the dataset, the data model will still contain them; the filter does not help identify or fix them. Using report-level filters to monitor data quality is neither scalable nor reliable and does not ensure automated refresh-based detection.
Option C, page tooltips, is a visualization enhancement that helps explain or annotate visuals. Tooltips only appear when hovering over visuals; they cannot be used to detect or monitor data quality. Tooltips do not access metadata about data integrity and should never be relied upon for monitoring.
Option D, matrix conditional formatting, is a display-focused feature. It highlights patterns or values visually but does not monitor or detect data quality issues during refresh. Conditional formatting operates only on the presented data, not the underlying structure or completeness of the dataset.
Embedding data profiling into the refresh pipeline strengthens the entire BI system. It ensures that downstream calculations, relationships, and visualizations rely on clean and accurate data. Once issues are identified through profiling, analysts can incorporate cleaning steps directly within Power Query—such as replacing nulls, removing duplicates, correcting data types, or trimming inconsistent text.
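Once profiling surfaces problems, the remediation can be expressed as ordinary query steps so it re-runs on every refresh; a hedged sketch of steps added inside an existing let expression, where Source stands for the output of earlier steps and the column names and thresholds are hypothetical:

// Remove exact-key duplicates, fill nulls, and keep only in-range prices
RemovedDuplicates = Table.Distinct(Source, {"CustomerID"}),
ReplacedNulls = Table.ReplaceValue(RemovedDuplicates, null, 0, Replacer.ReplaceValue, {"Quantity"}),
InRangeOnly = Table.SelectRows(ReplacedNulls, each [UnitPrice] >= 0 and [UnitPrice] <= 10000)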
Question 55:
Your organization’s sales report needs to allow users to switch between multiple fact tables, such as sales transactions, returns data, and inventory movements, using a single slicer. What should you implement?
A) Field parameters
B) Group by transformation
C) Synonyms in the model
D) Quick measures
Answer:
A) Field parameters
Explanation:
The correct answer is A) Field parameters because field parameters are specifically designed to allow report consumers to switch between different measures, dimensions, and even entire fact tables using a single slicer or selector. This functionality is essential when building flexible, interactive reports where users want to explore different types of data but do not want separate visuals or pages for each fact table. Field parameters allow you to combine multiple related metrics into one dynamic selection interface.
Field parameters work by creating a table behind the scenes that includes metadata about which fields should be displayed under certain conditions. Each entry corresponds to a field—such as a measure or a column—and the slicer created from the parameter table updates the visual dynamically. In this scenario, where the organization wants users to toggle between fact tables, you can add fields from separate fact tables into one parameter. When a user selects “Sales Transactions,” the report visual will display the appropriate columns or measures. Selecting “Returns” or “Inventory Movements” shifts the visual to the corresponding dataset fields.
Option B, Group by transformation, is unrelated to switching datasets. Grouping aggregates data within Power Query and is part of data preparation. It does not influence visual interactivity or offer a mechanism for users to toggle between separate data sources.
Option C, Synonyms in the model, is used for Q&A functionality, enabling natural-language search. Synonyms make it easier for users to type questions like “revenue” instead of “total sales,” but they do not assist with switching fact tables.
Option D, Quick measures, is useful for generating common calculations such as running totals or time intelligence metrics. However, quick measures do not allow users to switch between datasets and cannot provide dynamic field selection.
Field parameters also simplify report design. Before field parameters existed, analysts had to create complex DAX expressions or multiple visuals layered on top of each other, using bookmarks or selection panes to simulate field switching. This approach was difficult to maintain and error-prone. With field parameters, Power BI manages all switching logic automatically.
Field parameters help avoid clutter by consolidating multiple report views into one. For example, instead of creating separate pages for sales, returns, and inventory, you can build one universal page with visuals powered by field parameters. Users simply select the dataset they want, and the visuals update accordingly. This makes reports cleaner, more intuitive, and much easier to scale.
Question 56:
You are designing a Power BI report for executives who need to compare the performance of multiple KPIs such as revenue, profit margin, customer satisfaction, and average order size, all within a single visual. Which visual is most appropriate to display multiple KPI trends clearly without overcrowding the report?
A) Clustered column chart
B) Multi-row card
C) Line chart with multiple measures
D) Stacked area chart
Answer:
C) Line chart with multiple measures
Explanation:
The answer is C) Line chart with multiple measures because this type of visual is specifically built to compare trends of multiple key performance indicators over time without overwhelming report viewers. A line chart allows you to display several metrics simultaneously, each represented by a separate line. This makes it easy to interpret performance patterns as long as the report designer manages colors, measure formatting, and axis scaling appropriately. When dealing with executive-level analytics, clarity and trend visibility are crucial, and line visuals excel in these requirements.
Option A, the clustered column chart, is useful for comparing discrete values across categories, but it quickly becomes cluttered when more than two or three metrics are added. If you attempted to show four KPIs in a clustered column chart, the visual would become visually chaotic, with overlapping or tightly packed bars that make it difficult for executives to extract meaningful insights. Column charts are powerful for categorical comparisons but not ideal for comparing time-based KPI trends simultaneously.
Option B, the multi-row card, is good for displaying current KPI values or high-level summaries, but it does not show trends. Executives rarely rely on cards to analyze patterns because cards only tell you the value, not how the value shifted over time. Cards work well at the top of a dashboard to present at-a-glance numbers, but they do not provide historical context or comparative movement, which is essential for understanding performance changes.
Option D, the stacked area chart, can show multiple measures that contribute to a whole, but it fails when KPIs have independent scales or when one KPI shouldn’t stack on another. Stacked visuals distort comparability because the baseline changes as each measure is layered. When comparing KPIs like customer satisfaction and revenue, stacking them makes no conceptual sense because they do not sum together or represent parts of a whole. Executives reviewing stacked visuals may misinterpret the results because the shape of each layer depends on the values beneath it.
Question 57:
You are preparing a Power BI model where several tables require a shared calendar table to support time-intelligence functions. Which characteristic must be true for a valid date table in Power BI?
A) It must include fiscal year and fiscal quarter columns
B) It must contain a continuous date range with no gaps
C) It must be loaded from a SQL Server database
D) It must contain a column named DateKey
Answer:
B) It must contain a continuous date range with no gaps
Explanation:
The answer is B) It must contain a continuous date range with no gaps because Power BI’s time-intelligence functions rely on sequential, uninterrupted dates to calculate year-to-date, quarter-to-date, or same-period-last-year metrics. A valid date table must have every day accounted for, typically covering a full range that includes all dates needed for analysis. Without a continuous date range, DAX functions may produce incorrect results or fail entirely.
Time-intelligence functions such as TOTALYTD, SAMEPERIODLASTYEAR, DATESYTD, and DATEADD rely on a chronological set of dates that Power BI can traverse step-by-step. Gaps disrupt this traversal, causing misalignment in calculations. For example, if the table excludes weekends or holidays, a DAX function that moves back seven days may land on a date that does not exist in the table, yielding empty or incorrect values. Therefore, having a complete range of dates is essential for accurate analytical modeling.
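As an illustration, a continuous calendar can be generated entirely in DAX with the CALENDAR function. The sketch below is a minimal example rather than a prescribed implementation; it assumes the analysis window runs from 2020 through 2025, and the table name and derived columns are illustrative.
Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2025, 12, 31 ) ),  -- one row per day, no gaps
    "Year", YEAR ( [Date] ),
    "Month Number", MONTH ( [Date] ),
    "Month", FORMAT ( [Date], "MMM YYYY" )
)
After the table is created, it still needs to be marked as a date table (Table tools, Mark as date table) so that Power BI treats its Date column as the model's chronological reference.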
Option A, including fiscal year and quarter columns, is helpful but not mandatory. A date table can function perfectly without fiscal attributes. These fields can be added for convenience, but Power BI does not require them for time-intelligence operations. The essential requirement remains that the table contains a continuous date column marked as the official date table.
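When fiscal attributes are wanted, they can be layered on as simple calculated columns. The following sketch assumes a July-to-June fiscal year and the illustrative 'Date' table shown earlier.
Fiscal Year =
"FY " & YEAR ( EDATE ( 'Date'[Date], 6 ) )  -- shifting by six months maps July 2024 into FY 2025

Fiscal Quarter =
"FQ" & ROUNDUP ( MONTH ( EDATE ( 'Date'[Date], 6 ) ) / 3, 0 )  -- July-September becomes FQ1, and so on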
Option C is incorrect because Power BI does not care about the data source of the date table. You can generate a date table manually, using DAX, using Power Query, using a CSV file, or from any external system. What matters is structure, not origin. The date table must follow a predictable sequence, but it does not need to come from SQL Server or any specific platform.
Option D suggests a column named DateKey, which is not necessary. A date table requires only a single column of data type Date that contains unique values. Naming conventions are flexible, and you can call the column anything as long as it is properly formatted and continuous. DateKey columns are commonly used in star-schema models, especially when integrating with data warehouse systems, but Power BI does not mandate this.
By enforcing this requirement, Power BI ensures that analysts can trust calculations across multiple fact tables that rely on the same chronological structure. Therefore, the requirement for a continuous, gap-free date range is fundamental to the functionality and reliability of time-intelligence features.
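To make the dependency concrete, here is a hedged sketch of two time-intelligence measures that only return reliable results once a continuous, marked date table is in place; the Sales table and Amount column are assumptions used purely for illustration.
Sales YTD :=
TOTALYTD ( SUM ( Sales[Amount] ), 'Date'[Date] )  -- accumulates from the start of the year in filter context

Sales Same Period LY :=
CALCULATE ( SUM ( Sales[Amount] ), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )  -- shifts the current date range back one year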
Question 58:
You imported a dataset into Power BI and noticed that many columns contain nulls, whitespace, and inconsistent capitalization. Which transformation step should you perform first to ensure reliable downstream modeling?
A) Replace capitalization inconsistencies
B) Remove duplicate rows
C) Trim and clean columns
D) Create a DAX measure to handle nulls
Answer:
C) Trim and clean columns
Explanation:
The answer is C) Trim and clean columns because data quality issues such as whitespace, invisible characters, and inconsistent formatting can significantly affect joins, relationships, grouping, filtering, and calculations in Power BI. Performing a Trim and Clean transformation early ensures that all subsequent operations behave predictably.
Whitespace, leading spaces, trailing spaces, and non-printable characters often cause silent failures in data modeling. For example, if a key column contains “Product A” in one table and “Product A ” (with trailing whitespace) in another, Power BI will treat them as different values even though they look identical to users. This leads to broken relationships, unmatched lookups, blank visual rows, and misleading aggregations. Cleaning and trimming data removes these invisible obstacles and ensures columns behave as intended.
Option A, replacing capitalization inconsistencies, is useful but not the first priority. While capitalization affects grouping (for example, “London” and “LONDON” being treated separately), whitespace and hidden characters cause deeper structural issues that influence relationships, merges, and calculations. It is more important to remove invisible formatting problems first, then standardize capitalization later if needed.
Option B, removing duplicates, should not be done before ensuring textual fields are clean. If a dataset contains values that appear as duplicates but differ due to trailing spaces or hidden characters, removing duplicates prematurely may eliminate valid entries or fail to remove invalid ones. Cleaning data ensures that duplicates are identified correctly. Furthermore, many datasets require domain knowledge before removing duplicates, so this should be done after ensuring all values are formatted consistently.
Option D, creating a DAX measure to handle nulls, is not part of Power Query transformation and should never be the first step. Null handling should be done during transformation, not during DAX calculation. DAX should only be used after the model is clean, structured, and optimized. Handling nulls within DAX is inefficient and often leads to unnecessary complications.
Transformations should always follow a structured order, and cleaning should appear near the beginning. This order typically follows the pattern: remove errors, clean and standardize, filter, shape, merge, aggregate, and finally load. Performing trimming and cleaning early saves work and prevents the complications that arise from poorly formatted text fields.
Because the PL-300 exam emphasizes star-schema modeling, clean dimension keys are essential for building the relationships that star schemas depend on. Without clean keys, the model becomes unreliable, inaccurate, and difficult to troubleshoot. Therefore, choosing Trim and Clean as the first step aligns with best practices and ensures the entire model is built on a strong foundation.
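The cleanup itself belongs in Power Query, but a quick way to check whether padding problems such as the “Product A ” example above made it into a loaded table is a throwaway DAX measure; the Products table and ProductName column here are illustrative assumptions.
Rows With Padding :=
COUNTROWS (
    FILTER (
        Products,
        LEN ( Products[ProductName] ) <> LEN ( TRIM ( Products[ProductName] ) )  -- TRIM removes leading, trailing, and repeated internal spaces
    )
)
A non-zero result signals that the Trim and Clean steps are still needed upstream in Power Query.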
Question 59:
You want to restrict users from exporting underlying data from a Power BI report while still allowing them to interact with visuals. Which setting should you modify?
A) Edit the dataset permissions in the workspace
B) Disable the “Export data” option in the report settings
C) Remove the user from the workspace
D) Change the relationship cross-filter direction
Answer:
B) Disable the “Export data” option in the report settings
Explanation:
The answer is B) Disable the “Export data” option in the report settings because this setting directly controls whether users can export summarized or underlying data from visuals. Power BI allows granular control over export permissions at the report level. By disabling this option, you prevent users from downloading the data behind the visuals while still allowing them to interact with filtering, slicing, highlighting, and drilling. This satisfies scenarios where data privacy must be enforced while maintaining report interactivity.
Export permissions are often associated with sensitive data governance rules. Companies may allow users to view analytics but not to extract raw data for offline manipulation. Exporting underlying data can expose individual-level records, violating compliance requirements, especially in industries like healthcare, finance, or government. Power BI supports this requirement by enabling administrators to disable exports while keeping the report functional.
Option A, editing dataset permissions in the workspace, affects access to the dataset as a whole. If you adjust those permissions incorrectly, you may block users entirely from viewing or interacting with the report. Dataset permissions are not granular enough to restrict exports without affecting overall access. Thus, this is not the correct path for controlling export functionality.
Option C, removing the user from the workspace, is too extreme and prevents all report access, not just exports. Removing a user eliminates their ability to interact with visuals, defeating the purpose of allowing them to use the report while controlling data extraction. The question specifies restricting export while still allowing interaction, so removing users contradicts the requirement.
Option D, changing the relationship cross-filter direction, relates to modeling behavior, not export permissions. Cross-filter direction affects how filters propagate through relationships but has no connection to data export settings. Modifying model relationships will not prevent a user from exporting data, so this option is irrelevant.
Question 60:
You created a Power BI composite model using DirectQuery and Import tables. You notice slow performance when users slice the report. Which optimization step should you consider first?
A) Reduce cardinality of columns used in relationships
B) Convert all DirectQuery tables to Import mode
C) Disable automatic date/time hierarchies
D) Remove all calculated columns
Answer:
A) Reduce cardinality of columns used in relationships
Explanation:
The answer is A) Reduce cardinality of columns used in relationships because high-cardinality columns in DirectQuery mode create heavy query loads against the backend source. When slicing or filtering by such columns, Power BI must generate complex SQL queries that require significant processing time. Reducing cardinality makes relationships more efficient and reduces the computational burden involved in resolving cross-filtering logic.
DirectQuery models depend heavily on how effectively relationships can filter tables at the data source. If relationship columns contain millions of unique values, every filter operation forces the data source to process large sets of values. Reducing cardinality — for example, by using surrogate keys, removing unnecessary unique identifiers, or normalizing composite keys — greatly improves performance.
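As one concrete illustration, a relationship keyed on a full timestamp can often be replaced with a date-level key, collapsing millions of distinct values into a few thousand. The calculated column below is only a sketch that assumes a Sales table with an OrderDateTime column; in a DirectQuery table, the same derivation is usually better done at the source or in Power Query so the query can fold.
Order Date Key =
DATE ( YEAR ( Sales[OrderDateTime] ), MONTH ( Sales[OrderDateTime] ), DAY ( Sales[OrderDateTime] ) )  -- at most 366 distinct values per year instead of one per second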
Option B, converting all DirectQuery tables to Import mode, may improve performance but contradicts scenarios requiring real-time or near-real-time data access. Composite models exist specifically to blend Import and DirectQuery, allowing designers to optimize performance without sacrificing freshness. Converting everything to Import defeats the purpose and may not be allowed if data must remain in DirectQuery for compliance or latency reasons. It is not the best first step.
Option C, disabling automatic date/time hierarchies, can help with performance in Import models, but it provides minimal performance benefit in DirectQuery scenarios. Auto hierarchies primarily affect model size and visual clarity, not slicing speed. Disabling them will not resolve slow filtering caused by high-cardinality relationship columns.
Option D, removing calculated columns, reduces in-memory storage for Import tables but has limited effect on DirectQuery performance. Calculated columns do not optimize slicing operations because DirectQuery pushes logic to the source. While removing calculated columns may help optimize refresh performance or memory usage, it does not directly address the root cause of slow slicing in composite models.