ACD200 Appian Practice Test Questions and Exam Dumps

Question 1

You are facing issues when attempting to establish a SAML connection to an identity provider. You determine you need to increase the authentication-specific logging levels so that you can view trace-level statements about the connection attempt in the application server log. 

Which property file should you update to modify the log output level? (Choose the best answer.)

A. commons-logging.properties
B. appian_log4j.properties
C. logging.properties
D. custom.properties

Correct Answer: B. appian_log4j.properties

Explanation:

To capture trace-level statements about a SAML connection attempt, you need to raise the logging level of the authentication-related components. Appian's application server logging is built on Log4j, and the levels that control what appears in the application server log are defined in the appian_log4j.properties file.

Why appian_log4j.properties is the correct answer:

The appian_log4j.properties file is where Appian log levels are configured on a per-logger (package or class) basis, using levels such as INFO, DEBUG, and TRACE. Appian's SAML troubleshooting guidance is to raise the relevant authentication loggers in this file, reproduce the connection attempt, and then inspect the application server log for the trace output.
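As an illustration, here is a minimal sketch of the kind of entries you might add. The logger names and exact property syntax vary by Appian and Log4j version; com.appiancorp.security.auth.saml and org.opensaml are the packages commonly cited in Appian's SAML troubleshooting guidance, so verify both against your installation:

    # appian_log4j.properties: raise SAML-related loggers for troubleshooting
    log4j.logger.com.appiancorp.security.auth.saml=TRACE
    log4j.logger.org.opensaml=DEBUG

After editing the file, reproduce the failing login and review the application server log. Remember to lower the levels again once the issue is resolved, since TRACE output is verbose.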

Why the other options are incorrect:

  • A. commons-logging.properties: This file configures the Apache Commons Logging façade, that is, which underlying logging implementation (such as Log4j or java.util.logging) the façade delegates to. It does not set log levels for Appian components.

  • C. logging.properties: This is the standard java.util.logging configuration file used by the JDK and by Tomcat for container-level output. Appian's own loggers, including the authentication and SAML loggers, are managed through Log4j, so changes here will not produce the trace statements you need.

  • D. custom.properties: In Appian, this file holds site and server configuration overrides (ports, URLs, feature settings, and similar), not logger definitions or log levels.

To raise the log output level for SAML authentication troubleshooting, the file to update is appian_log4j.properties, the central Log4j configuration for Appian's application server logging.

Therefore, the correct answer is B. appian_log4j.properties.

Question 2

When creating a Web API, which two items are configured in the Administration Console? (Choose two.)

A. LDAP Authentication
B. API Key
C. Connected System
D. Service Account

Correct Answers: B. API Key, D. Service Account

Explanation:

When creating a Web API, the Administration Console is where the inbound authentication pieces are set up. Among the given options, the two items configured there are the API Key and the Service Account. Let's break down the correct answers:

Why B. API Key is correct:

API keys are created and managed on the Web API Authentication page of the Administration Console. A client authenticates to a Web API by presenting its key with each request, so issuing, rotating, and revoking keys from the console is how you control which clients can call the endpoint.

Why D. Service Account is correct:

Each API key is mapped to a service account, a non-interactive user whose group memberships and security settings determine what data and objects an inbound call can access. Service accounts are created and associated with their keys from the same Administration Console page, making them the second item configured there.
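For context, here is a hedged sketch of a client call to a published Web API using an API key. The host, endpoint path, and key value are placeholders; the key is passed in the Appian-API-Key request header:

    curl -H "Appian-API-Key: <your-api-key>" \
         https://example.appiancloud.com/suite/webapi/employees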

Why the other options are incorrect:

  • A. LDAP Authentication: LDAP settings in the Administration Console govern how human users sign in to Appian. They play no part in creating or securing a Web API, which authenticates clients with API keys rather than directory credentials.

  • C. Connected System: A connected system is a design object created and maintained in Appian Designer, not in the Administration Console. It also serves the opposite direction of integration, outbound calls from Appian to external services, whereas a Web API handles inbound requests.

The two items configured in the Administration Console when creating a Web API are the API Key (to authenticate inbound calls) and the Service Account (to define what those calls are allowed to see and do).

Therefore, the correct answers are B. API Key and D. Service Account.

Question 3

Using a View, you pull a report on different employee transactions. You receive the following error:
“a!queryEntity: An error occurred while retrieving the data.”
What is the most likely root cause? (Choose the best answer.)

A. The view contains a large number of rows, requiring more time to fetch the data.
B. The view doesn’t have a column mapped as a Primary Key in its corresponding CDT.
C. The required inputs were not provided.
D. The rule contains a missing syntax.

Correct Answer: B. The view doesn’t have a column mapped as a Primary Key in its corresponding CDT.

Explanation:

In Appian, the a!queryEntity function is used to retrieve data from a Data Store Entity (DSE), which typically references a Database View or Table through a Custom Data Type (CDT). This error — “a!queryEntity: An error occurred while retrieving the data.” — can be caused by various issues, but the most common and critical reason is the lack of a properly mapped Primary Key in the CDT associated with the view.
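For reference, this is the shape of a typical a!queryEntity call that would surface this error. The constant name is hypothetical:

    a!queryEntity(
      entity: cons!EMPLOYEE_TRANSACTION_ENTITY,
      query: a!query(
        pagingInfo: a!pagingInfo(startIndex: 1, batchSize: 50)
      )
    )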

Why B. The view doesn’t have a column mapped as a Primary Key in its corresponding CDT is correct:

Appian requires that every CDT used for querying a data store entity must have a primary key field mapped, even if the underlying database structure is a view. Views in databases do not always contain a natural or enforced primary key, but Appian needs one defined for data consistency, pagination, and efficient query handling.

If the view’s corresponding CDT doesn’t have a column marked with @Id, Appian’s a!queryEntity cannot perform the query reliably, resulting in this error. The system depends on that primary key to manage row uniqueness and handle data retrieval efficiently.
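In the CDT's underlying XSD, the primary key mapping is expressed as a JPA annotation. A minimal sketch follows, with an illustrative element name; the appian.jpa appinfo block is where the @Id annotation lives:

    <xsd:element name="transactionId" nillable="true" type="xsd:int">
      <xsd:annotation>
        <xsd:appinfo source="appian.jpa">@Id</xsd:appinfo>
      </xsd:annotation>
    </xsd:element>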

Why the other options are incorrect:

  • A. The view contains a large number of rows, requiring more time to fetch the data:
    While performance issues might occur due to a large dataset, they generally cause timeouts or slow queries, not the specific error mentioned here. The error message in question typically reflects a data model issue, not performance.

  • C. The required inputs were not provided:
    If required inputs were missing, Appian would more likely raise an error related to missing parameters or inputs in the expression evaluation phase, not while executing a!queryEntity.

  • D. The rule contains a missing syntax:
    Syntax errors would be caught during design-time evaluation in the rule editor and would lead to a different type of error message, such as "invalid expression" or “syntax error,” not a runtime error during data fetching.

This type of error typically signals a structural misconfiguration rather than a simple syntax or performance issue. When using views in Appian, always ensure that your CDT has a defined primary key, even if the underlying view doesn't enforce it at the database level.

Thus, the correct answer is: B. The view doesn’t have a column mapped as a Primary Key in its corresponding CDT.

Question 4

During the design review, you identified slow-operating expression rules querying a specific data store.
Which metric from the data_store_details.csv file will help you understand the “number of operations against data store”? (Choose the best answer.)

A. Transform Count
B. Query Count
C. Total Count
D. Execute Count

Correct Answer: C. Total Count

Explanation:

The data_store_details.csv file is part of Appian’s performance monitoring tools, often used during performance tuning or system analysis to identify bottlenecks related to data store entity (DSE) access. When expression rules, interfaces, or process models interact with DSEs, these interactions are tracked and logged, allowing engineers to investigate inefficiencies.

To determine how frequently a particular data store is being accessed, you need to look at the Total Count metric.

Why C. Total Count is correct:

  • Total Count in data_store_details.csv reflects the total number of operations performed on a given data store entity.

  • This includes all types of operations such as queries, inserts, updates, and deletes.

  • It provides a comprehensive view of usage, which is essential when diagnosing performance slowdowns that might result from high-frequency access or inefficient query patterns.

This metric gives you a quantitative insight into how often your design is hitting a particular data store, allowing you to correlate performance degradation with access patterns.

Why the other options are incorrect:

  • A. Transform Count:
    This refers to data transformations applied to the query result sets or objects — such as mapping or conversion activities. It doesn’t reflect the number of times the data store itself is queried.

  • B. Query Count:
    While Query Count sounds relevant, it's usually more narrowly scoped to read operations only, not capturing full transactional activity (writes, updates, etc.).
    It doesn’t provide the full picture of data store usage.

  • D. Execute Count:
    This is typically associated with specific rule or expression executions and doesn’t exclusively or directly measure data store operations.

To fully understand how heavily a data store is being used — across all operations — and identify if frequent interactions could be slowing down system performance, the most appropriate and comprehensive metric is:

C. Total Count

This enables better performance analysis and guides targeted optimizations such as indexing, CDT restructuring, or caching strategies.

Question 5

You have configured a process model to send an email to one or more recipients using the out-of-the-box Send E-Mail node.
Executing the process model results in the following error:
“Error: Email could not be sent”

Where do you go first to find more details on why the node encountered an error?
(Choose the best answer.)

A. Raise a support case within My Appian so a cloud engineer can investigate.
B. Review the system.csv log.
C. Run and review the Health Check report.
D. Investigate the application server stdout log.

Correct Answer: D. Investigate the application server stdout log

Explanation:

When the Send E-Mail smart service node in a process model fails and returns the error "Email could not be sent", the first step in troubleshooting is to access detailed runtime logs that contain error stack traces and related messaging.

Why D. Investigate the application server stdout log is correct:

  • The application server stdout log (for example, tomcat-stdout.log on Appian Cloud, or the equivalent console log of a self-managed installation) captures detailed runtime output, including exceptions and failures raised by smart services such as Send E-Mail.

  • This log includes the underlying error message, SMTP configuration problems, missing credentials, invalid email addresses, or server connection failures.

  • Reviewing this log helps developers or administrators pinpoint the technical cause of the failure without delay.

This is the most direct and immediate place to begin troubleshooting such an error.

Why the other options are incorrect:

  • A. Raise a support case within My Appian so a cloud engineer can investigate:
    This is not the first step. It should be done only after you’ve performed internal diagnostics and cannot resolve the issue. Appian support also requires details from the logs, which you’d gather first.

  • B. Review the system.csv log:
    This log provides summary-level performance data and execution statistics, not the technical error details needed to debug a failed email service.

  • C. Run and review the Health Check report:
    Health Check is used for overall system performance, design risk, and best practices. It does not analyze specific process node failures or runtime errors like email delivery issues.

When an email fails to send in an Appian process, the fastest and most informative place to start is the application server stdout log, as it contains the necessary technical diagnostics to determine what went wrong.

Correct Answer: D. Investigate the application server stdout log

Question 6

Which review format is the most efficient way to coach team members and improve code quality?
(Choose the best answer.)

A. Peer Dev Review
B. Automated Code Scanning
C. Retrospectives
D. User Acceptance Testing

Correct Answer: A. Peer Dev Review

Explanation:

Improving code quality and coaching developers effectively require a review format that enables direct feedback, promotes shared learning, and encourages collaboration. Among the listed options, Peer Development Review (often referred to as code review) is the most efficient and targeted way to achieve these goals.

Why A. Peer Dev Review is correct:

  • Direct Knowledge Transfer: Developers get real-time feedback from their peers, which helps in understanding better coding practices, design patterns, and performance considerations.

  • Code Quality Enforcement: Ensures the code adheres to team or organizational coding standards, reducing bugs and technical debt.

  • Skill Building: Acts as a learning opportunity for junior developers, as they observe and understand the reasoning behind changes suggested by more experienced team members.

  • Immediate Correction: Mistakes or inefficiencies are caught early in the development cycle, making it cheaper and faster to correct.

  • Team Alignment: Fosters a culture of collaboration and shared responsibility over code.

Peer review is an interactive process that often includes discussion, which makes it more effective for coaching than passive formats.

Why the other options are less efficient for coaching and improving code quality:

  • B. Automated Code Scanning:
    While valuable for detecting syntax errors, security vulnerabilities, or style violations, this approach is impersonal and not educational. It doesn’t offer coaching or rationale, which limits learning for developers.

  • C. Retrospectives:
    These focus on the team's development process, not specific code. They’re valuable for improving workflow or communication, but don’t directly address code quality or offer coaching at the code level.

  • D. User Acceptance Testing (UAT):
    UAT verifies that the software meets business requirements. It is end-user focused and occurs late in the process, offering little opportunity to coach developers on improving technical implementation.

Summary:

Among all review formats, Peer Development Reviews provide the most hands-on, immediate, and interactive form of coaching and quality assurance. They align the team on best practices and lead to a stronger, more maintainable codebase.

Correct Answer: A. Peer Dev Review

Question 7

A lead designer receives this requirement:
Every time a record is modified, the data changed must be stored for audit.
Which design is the most efficient and has the least impact on the Appian application?
(Choose the best answer.)

A. Create a custom plugin that can write an audit trail to a log file.
B. Create a trigger on the database table to capture the audit trail to a table.
C. Create an Appian process to capture the change history and write the audit trail to the database.
D. Create a web API call to an audit history system and write the audit trail to file.

Correct Answer: B. Create a trigger on the database table to capture the audit trail to a table

Explanation:

When implementing auditing for data changes in an Appian application, the goal is to efficiently track modifications with minimal impact on performance and without overloading the application layer.

Why B. Create a trigger on the database table to capture the audit trail to a table is correct:

  • Database triggers are executed automatically at the database level whenever INSERT, UPDATE, or DELETE operations occur.

  • They do not require changes in Appian design objects like interfaces, processes, or expressions, making the solution lightweight and non-intrusive.

  • This approach ensures all data changes are reliably captured, even if changes occur outside the Appian application (e.g., through integrations or manual DB updates).

  • It offers the best performance since operations are handled close to the data, minimizing Appian processing load.

  • Triggers can be designed to write changes to an audit table, including timestamps, user IDs (if available), and before/after values.

This is a backend-centric, scalable, and industry-standard method for auditing.
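To make this concrete, here is a minimal MySQL/MariaDB-style sketch. The EMPLOYEE table, its columns, and the EMPLOYEE_AUDIT table are assumptions for illustration; a real trigger would capture whichever columns the audit requirement covers:

    CREATE TRIGGER trg_employee_audit
    AFTER UPDATE ON EMPLOYEE
    FOR EACH ROW
        INSERT INTO EMPLOYEE_AUDIT (employee_id, old_salary, new_salary, modified_at)
        VALUES (OLD.employee_id, OLD.salary, NEW.salary, NOW());

Similar AFTER INSERT and AFTER DELETE triggers would complete the audit coverage.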

Why the other options are less efficient or more intrusive:

  • A. Create a custom plugin that can write an audit trail to a log file:
    This requires Java development, plugin maintenance, and log file management, making it complex and high-maintenance. It's also less accessible for querying historical data.

  • C. Create an Appian process to capture the change history and write the audit trail to the database:
    While feasible, this approach adds runtime overhead on every data update, involves design complexity, and increases process instance load, which could degrade performance over time.

  • D. Create a web API call to an audit history system and write the audit trail to file:
    This introduces network dependency, latency, and failure risk. It's also more suited to external system logging rather than internal audit tracking, and not ideal for tightly coupled Appian systems.

To meet the requirement of tracking every modification with minimal impact on the Appian application, using a database trigger is the most efficient and lightweight solution. It ensures consistent, reliable, and high-performance audit tracking without burdening the Appian design.

Correct Answer: B. Create a trigger on the database table to capture the audit trail to a table

Question 8

You are creating an ERD that models the data for a college and includes a Many-to-Many relationship, Student-to-Class, where a student can be enrolled in multiple classes, and a class can enroll multiple students.
How can you handle this relationship so that it can be supported in Appian and remain in at least First Normal Form?
(Choose the best answer.)

A. A joining table can be used to hold instances of Student/Class relationships.
B. The Student table should have a Class field to hold an array of Class IDs.
C. The Class table should have a Student field to hold an array of Student IDs.
D. It cannot be done, because Appian CDTs cannot handle Many-to-Many relationships.

Correct Answer: A. A joining table can be used to hold instances of Student/Class relationships

Explanation:

In relational databases and Appian's data modeling, Many-to-Many relationships are not supported directly in a single table due to normalization constraints. To maintain First Normal Form (1NF)—which requires atomic values in each cell and no repeating groups—a join table is used to resolve such relationships.

Why A. A joining table can be used to hold instances of Student/Class relationships is correct:

  • Many-to-Many relationships (like Students and Classes) cannot be directly represented in just two tables without data duplication or violating normalization rules.

  • A join (or junction) table acts as an intermediary that breaks the many-to-many relationship into two one-to-many relationships:

    • One student can be linked to many entries in the join table.

    • One class can be linked to many entries in the join table.

  • For example, a table named Enrollment could store rows with student_id and class_id, where each row represents one unique enrollment.

  • This structure is easily supported in Appian using CDTs (Custom Data Types) and Data Store Entities.

  • This method ensures data remains in 1NF and supports scalability, consistency, and referential integrity.
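As a sketch in MySQL-style DDL (the table and column names follow the Enrollment example above; adjust them to your schema):

    CREATE TABLE ENROLLMENT (
        enrollment_id INT AUTO_INCREMENT PRIMARY KEY,
        student_id    INT NOT NULL,
        class_id      INT NOT NULL,
        CONSTRAINT fk_enrollment_student FOREIGN KEY (student_id) REFERENCES STUDENT (student_id),
        CONSTRAINT fk_enrollment_class FOREIGN KEY (class_id) REFERENCES CLASS (class_id),
        CONSTRAINT uq_enrollment UNIQUE (student_id, class_id)
    );

The surrogate enrollment_id also gives the join table the single-column primary key that Appian's CDT mapping expects, and the unique constraint prevents duplicate enrollments.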

Why the other options are incorrect:

  • B. The Student table should have a Class field to hold an array of Class IDs:
    Storing arrays or multiple class IDs in a single field violates 1NF, as each field should contain only atomic values. Additionally, Appian doesn’t natively support arrays of foreign keys in database mappings.

  • C. The Class table should have a Student field to hold an array of Student IDs:
    This has the same issue as option B—violates normalization and doesn’t scale well. Managing large arrays in a single field becomes complex and inefficient for querying or updating.

  • D. It cannot be done, because Appian CDTs cannot handle Many-to-Many relationships:
    Incorrect. While Appian CDTs don’t support direct many-to-many mappings like some ORM tools (e.g., Hibernate), the relationship can be implemented effectively through a join table and associated CDTs and relationships.

To support a Many-to-Many relationship in a normalized way and maintain compatibility with Appian's data modeling practices, the most appropriate and efficient method is to use a joining table. This maintains database normalization, supports scalability, and allows for proper relational querying and reporting within Appian.

Correct Answer: A. A joining table can be used to hold instances of Student/Class relationships

Question 9

You need to show joined data from 5 tables. Each table contains a large number of rows and could generate a large result set after executing the Joins.
The business is not expecting live data, and a 2-hour refresh is acceptable. Performance is a top priority.
What should you use? (Choose the best answer.)

A. Table
B. View
C. Stored procedure
D. Materialized view

Correct Answer: D. Materialized view

Explanation:

In scenarios where multiple large tables are being joined and performance is crucial, but real-time data is not required, the best approach is to use a Materialized View.

Why D. Materialized view is correct:

  • A materialized view is a precomputed result set that stores data physically, unlike a regular view which runs the underlying SQL each time it is queried.

  • Since the business accepts a 2-hour data refresh, the materialized view can be scheduled to refresh every 2 hours. This significantly improves query performance because users access pre-joined and pre-aggregated data.

  • Especially with large datasets and complex joins across multiple tables, querying raw views or executing real-time joins would be inefficient and slow. Materialized views avoid this by reducing computation at query time.

  • Materialized views are ideal when:

    • Query performance is critical.

    • Real-time data isn't necessary.

    • Complex joins or aggregations are involved.
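As a sketch in PostgreSQL/Oracle-style SQL (MySQL and MariaDB have no native materialized views, so there you would emulate one with a summary table reloaded by a scheduled job). The table and column names are illustrative:

    CREATE MATERIALIZED VIEW txn_report_mv AS
    SELECT e.employee_id,
           e.full_name,
           d.department_name,
           r.role_name,
           l.location_name,
           t.amount
    FROM employee   e
    JOIN department d ON d.department_id = e.department_id
    JOIN job_role   r ON r.role_id = e.role_id
    JOIN location   l ON l.location_id = e.location_id
    JOIN txn        t ON t.employee_id = e.employee_id;

    -- Scheduled every 2 hours (cron, pg_cron, DBMS_SCHEDULER, or similar):
    REFRESH MATERIALIZED VIEW txn_report_mv;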

Why the other options are incorrect:

  • A. Table:
    A base table cannot be used to show joined data from multiple tables unless you manually create and manage it with insert/update logic, which would require complex maintenance. It's not practical for dynamic or joined data from several tables.

  • B. View:
    Views are virtual tables that don't store data. Every time you query a view, it performs real-time joins on the underlying tables. This can result in poor performance, especially with large tables and complex joins. While suitable for real-time needs, it’s not optimal here because the business accepts delayed updates.

  • C. Stored procedure:
    Stored procedures execute SQL logic, but they don’t persist data. You’d have to call the procedure to generate the joined data and store it temporarily, which is inefficient for repeated access. It's also not optimized for end-user querying or UI presentation unless additional steps are taken.

When joining large datasets across multiple tables where real-time accuracy is not required, and performance is the priority, a materialized view is the best option. It stores the result of a query physically, allows scheduled refresh, and provides fast data access without recalculating joins every time. It strikes the right balance between performance and data freshness for this scenario.

Correct Answer: D. Materialized view

Question 10

You are creating a table to store book information for a library. The book has a reference number (ISBN_ID), as well as a unique identifier (BOOK_ID).
For the CDT to be created, which data type should you choose for the BOOK_ID? (Choose the best answer.)

A. Number (Integer)
B. Number (Decimal)
C. Date
D. Boolean

Correct Answer: A. Number (Integer)

Explanation:

In database design, when assigning a unique identifier to each record—such as a BOOK_ID—the most efficient and conventional data type to use is a Number (Integer). This ensures that each row can be uniquely identified with a numeric value that is easy to index and auto-increment if needed.

Why A. Number (Integer) is correct:

  • Primary keys and unique identifiers are most commonly created using integer values due to their simplicity, performance, and indexing capabilities.

  • Many database systems support auto-increment features (like sequences) with integers, which makes them ideal for generating unique IDs.

  • In the context of Appian, when creating a CDT (Custom Data Type), using Integer for BOOK_ID aligns with best practices for defining a primary key or unique identifier.

  • It also optimizes joins and queries on this field, particularly in large datasets.
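A minimal MySQL-style sketch of the backing table (column sizes are illustrative):

    CREATE TABLE BOOK (
        BOOK_ID INT AUTO_INCREMENT PRIMARY KEY,  -- integer surrogate key
        ISBN_ID VARCHAR(17) NOT NULL,            -- reference number; kept as text since ISBNs may contain hyphens and a check digit of X
        TITLE   VARCHAR(255)
    );

Note the contrast: BOOK_ID is an integer because it only needs to be unique and fast to index, while the ISBN is better stored as text.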

Why the other options are incorrect:

  • B. Number (Decimal):
    Decimal values are typically used for fields involving precision, such as financial amounts or measurements. They're not appropriate for unique identifiers due to their complexity and unnecessary precision for this use case.

  • C. Date:
    Dates are used for recording time-related data like publishing dates, due dates, etc. A date cannot guarantee uniqueness across records and is not appropriate for a primary key or unique book identifier.

  • D. Boolean:
    A Boolean can only hold true or false, which means only two possible values. It cannot represent a unique value for each book in the library and is therefore unsuitable as a data type for BOOK_ID.

For creating a unique identifier in a table storing book information, especially when used as a primary key or reference ID, the best and most efficient data type is Integer. It provides simplicity, fast lookups, and compatibility with most database auto-increment features, making it the optimal choice for BOOK_ID.

Correct Answer: A. Number (Integer)

