PMI-ACP Certification Practice Test Questions, PMI-ACP Exam Dumps

Get 100% Latest PMI-ACP Practice Tests Questions, Accurate & Verified Answers!
30 Days Free Updates, Instant Download!

PMI PMI-ACP Premium Bundle
$69.97
$49.99

PMI-ACP Premium Bundle

  • Premium File: 322 Questions & Answers. Last update: Dec 8, 2024
  • Training Course: 68 Video Lectures
  • Study Guide: 587 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Download Free PMI-ACP Exam Questions in VCE Format

File Name                                                  Size       Downloads  Votes
pmi.examcollection.pmi-acp.v2024-10-05.by.albie.418q.vce   883.38 KB  105        1
pmi.test-inside.pmi-acp.v2021-09-01.by.adam.394q.vce       634.51 KB  1229       1
pmi.braindumps.pmi-acp.v2021-04-16.by.esme.324q.vce        752.3 KB   1381       2

PMI-ACP Certification Practice Test Questions, PMI-ACP Exam Dumps

ExamSnap provides PMI-ACP Certification Practice Test Questions and Answers, a Video Training Course, a Study Guide, and the latest Exam Dumps to help you pass. The PMI-ACP Certification Exam Dumps and Practice Test Questions in the VCE format are verified by IT trainers who have more than 15 years of experience in their field. Additional materials include a study guide and a video training course designed by the ExamSnap experts. So if you want trusted PMI-ACP Exam Dumps and Practice Test Questions, you have come to the right place.

PMI-ACP Exam Domain: Value Driven Delivery

4. Value Prioritization in Agile

Throughout our conversation so far, we've talked about the idea of a product backlog where we have a list of user stories or features that have different levels of priority assigned to them. We put the most important things at the top (the product owner does this with the team) and the least important things at the bottom of that backlog. Well, that's what we're going to talk about in this lecture. What do we do with that prioritization? How do we begin to estimate and predict the duration of the things that have been prioritized? Also tied to this is the idea of grooming the backlog: changes can come into play, priorities can change, and we have to address those as well in the product backlog. Let's begin our conversation by talking about customer value prioritization. Agile teams work on the items that yield the highest value to the customer first, and that's what we've talked about already. The product backlog is what the customer values. The product owner has those user stories or features, and we put the most important ones at the top of that list and the less important ones down lower.

And so we address the most important ones, the ones with the highest value, first. The product owner is the person responsible for keeping items in the backlog prioritized by business value. So the product owner has to know what the customer values; they're a customer representative. At some point, when changes are added to the backlog, the product owner has to look at those and say, "Okay, these changes are either more important or less important than what may already be at the top of the product backlog." And then the customer is the person who will declare what success looks like. It's really important for everyone involved in the project to have consensus on what success looks like. That's one of the things we've talked about: the product vision and the definition of done, with everyone in agreement on what "done" and success look like. The team will discuss with the customer at the end of each iteration the priority of the remaining work items. You may recall that before we go into a sprint or an iteration, the product owner and the team work together on priorities. So, while the product owner is responsible for it, the team is there and helps with that process. It's not done in isolation from everything else on the project. Now, there are some approaches, some prioritization schemes, for how the team and the product owner may prioritize work.

Now, whatever approach we take here, everyone needs to be in agreement on it before we begin to use one of these prioritization schemes. I say "scheme," but really it's just the methodology we will use to prioritize the different items in the product backlog. There are lots of different schemes that we're going to look at in this lecture. But which one will the product owner and the development team use? Everyone has to use the same one and agree upon the approach, and we use it throughout the duration of the project because, remember, we can come back and reprioritize the backlog. The first one is a very simple scheme: simple ranking. We look at an item, a story or a feature or what have you, and we say, okay, this is priority one, it's very high; this one is priority two, so it's medium; this one is priority three, so it's low. And we go through every item and say it's a one, two, or three. The issue with this is that your customer, or sometimes the product owner, will say, "Oh, everything is priority one, everything is top priority." Well, that's just not feasible. You have to have some things that are more important than others. Next is MoSCoW, a prioritization scheme where the M, S, C, and W are ways of prioritizing items: must have, should have, could have, and would like to have, but not at this time. Pay attention to MoSCoW for your exam. Another prioritization scheme is the idea of monopoly money or play dollars, where everybody gets a certain amount of money.

Let's say in this case they get $3,000 in play money. Then the team, as they look at each item, spends their monopoly money, their play money, on the different items to say, "Well, this one is most important, so I'm giving it $500. And this one is pretty small, it's not a big priority, so I'm giving it one dollar." Each item in the backlog gets an amount of money assigned to it, and that determines the prioritization of the requirements in the backlog. Very similar is the 100-point method. Everybody gets 100 points, and then you distribute those points among the items based on what you think is a high priority versus a low priority. So if there's something that's very important to you, you might give it 60 points, and for other things you might give just one point or five points.

But you only get 100 points to distribute across the items, the features, and the user stories that will be developed. The points are assigned to the most important requirements, and everyone gets 100. Everyone spends those 100 points, very similar to the play money or monopoly money approach. Dot voting or multi-voting is kind of the same idea. Everybody gets some stickers or little dots, and then you put the dots on the business features that you think are most important, the highest priority versus the lowest priority. So it's a way of assigning value by putting these dots on the different items. You could use dots or check marks, but usually they're the little round stickers you might get for a garage sale or to put on file folders, and everybody gets the same amount. So it's kind of the same approach as the monopoly money we looked at a moment ago. All right, a different approach to a priority scheme is Kano analysis.

Kano analysis is where we say which items are delighters and exciters, the ones people want to have and are thrilled to have; which items we are just satisfied with (okay, that's good, it's there); which items are dissatisfiers, the ones we're not so thrilled about; and which items we're indifferent about, where we don't really care one way or the other. So Kano analysis is a way of saying which items promote performance, which ones we're going to be happy about as a customer, and which ones we're not thrilled about or are indifferent about. Kano analysis always gives you this grid: if the requirement is fulfilled, how happy will I be? If it's unfulfilled, how unhappy will I be?

So it's a way of ranking: these are the things that will make me happy, and for these I'm going to be unhappy if they aren't there. The delighters and exciters versus the satisfiers and dissatisfiers, or indifferent if we don't really care. Then we have the requirements prioritization model, which uses a Likert-style scale of one to nine: every feature is rated for benefit, penalty, cost, and risk. So you say, all right, what's the benefit score, what's the penalty, what's the cost, and what's the risk? It's a way of assigning scores to each one; you then combine the scores for each user story, each feature, and that tells you how to rank them. Finally, relative prioritization ranking is just ordering the features from most important to least important, and then a determination is made to meet budget and schedule. We can only do so many user stories and requirements with the amount of time and money that we have, so if changes happen, we have to reprioritize the list. And yes, changes can bump some priorities off the list. That's backlog grooming: if changes come into play, you have to reprioritize.
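To make the scoring idea concrete, here is a minimal sketch in Python with made-up stories and ratings (none of these names or numbers come from the course). The lecture describes combining the benefit, penalty, cost, and risk ratings into a single number per story; the sketch below uses one common convention, weighing value (benefit plus penalty) against effort (cost plus risk), to produce a ranking.

# Illustrative only: hypothetical stories and 1-9 ratings, not from the course.
stories = {
    "Checkout page":   {"benefit": 9, "penalty": 8, "cost": 5, "risk": 4},
    "Wish list":       {"benefit": 4, "penalty": 2, "cost": 3, "risk": 2},
    "Dark mode theme": {"benefit": 3, "penalty": 1, "cost": 2, "risk": 1},
}

def priority_score(r):
    # One common convention: value (benefit + penalty) relative to effort (cost + risk).
    return (r["benefit"] + r["penalty"]) / (r["cost"] + r["risk"])

for name in sorted(stories, key=lambda n: priority_score(stories[n]), reverse=True):
    print(f"{name}: score {priority_score(stories[name]):.2f}")

Whatever formula the team picks, the point is the same as with the play money and dot voting schemes: everyone agrees on the method up front, and the scores simply make the ranking visible.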

5. Managing an Agile Project through Incremental Delivery

A guiding principle in Agile projects is incremental delivery. Unlike a waterfall approach or a predictive project, like building a house, where it's all at once and you have to finish everything for the house to be usable and livable, in Agile we can do incremental deliveries, little portions or chunks of delivery. Now, it doesn't always mean each increment is released so it can go out and be used, but sometimes that's what it means. Like a website: you might do incremental delivery and actually publish each increment. Or with a piece of software, the increments might be released into a test environment, or rather a production environment. Alright, let's hop in and talk about optimizing the value of delivery. As I mentioned, incremental delivery means the team regularly deploys working increments of the solution or product.

Usually it goes into a test environment so you can evaluate it, and then as the next increment is put on top of that, it grows and you can continue to evaluate it. Basically, though, it's an opportunity for an early return on investment. Here's what that means. Let's say we have a huge project: we're going to make a big website. Rather than waiting all the way to the end for the whole project to be done and then releasing the website, we could take our most important user stories or features, which is how we do it in Agile, right? We prioritize, we deliver those first, and then we release an incremental delivery of that website. It's not the whole website that's been released, but an increment that has some functionality, so our customers can begin using the website. Then in another four weeks or so, we have the next increment. It grows a little bit in functionality, but those are lower-priority items, because we've already attacked or released the high-priority items. And it continues on, delivering an increment each time, until we finally reach the end of the project. The whole project is done, the whole website is built, and we have a whole release and a full return on investment. Well, at some point.

But the idea here is that it's an opportunity for an early return on investment, because we have some level of viability, some level of usability. Now, the word I just used is viability, because that introduces our next topic: the minimum viable product, something that's just complete enough to be useful. It's small enough that it doesn't represent the entire project, like the website we were just discussing, but it's complete enough that it has some usability. It's also known as the "minimum marketable feature." The website is a great example of this, because we were able to quickly publish a website with some usability, a minimum viable product. It's the bare-bones essentials of a product. So it goes along with the idea of these incremental releases, where we don't have to deliver the whole thing like a house; we have the opportunity to release chunks or smaller portions. Another concept tied to iterations and the idea of sprints is agile tooling. Basically, this means that Agile teams, as they work through the project, should appreciate low-tech, high-touch tools over sophisticated computerized models. A great example of this is that I love Microsoft Project, and maybe you do too, or you like Basecamp or even Primavera, whatever tool you like. But if everyone doesn't have access to that piece of software, or doesn't know how to use it, or doesn't have permission to see it, you're somewhat left in the dark as to what's happening in the project. So I can make a schedule in Microsoft Project.

I can have a list of all of my items that are prioritized. I can have all these great charts, but you can't see them unless I let you. That's a high-tech tool: you have to have the software and permission to use the software. Whereas if we just had a big whiteboard, we could draw that stuff, or some variation of it, on the board, and that would be low-tech, high-touch. Everyone could interact with it and see it. So it prevents team members from being excluded and keeps them interacting. You might consider high-tech tools for scheduling if the perceived need for data accuracy increases. If we have to have very precise data, our customers may say, "Well, you can't have data precision on a whiteboard; it has to be in a piece of software." That's not really true. I'm here to tell you we can do it on a whiteboard. It may be faster in Project or Excel, but we could do it on a whiteboard and be just as accurate. Tied to that: hey, a bad estimate is a bad estimate. If you say this activity is going to take a week and it's wrong, it won't matter whether you're using a whiteboard or Excel or Microsoft Project. A bad estimate is a bad estimate.

It's garbage in, garbage out; it doesn't matter what tool you're using. Barriers to stakeholder interaction are also created when we move to high-tech tools. Hey, and sometimes I'm kind of mean here, but sometimes that's not a bad thing. It's not exactly what we want for our PMI-ACP exam, but I'll step out into the real world for a moment: sometimes it's kind of nice to keep people out of your way and not have them question everything you do. So a high-tech tool can create a barrier. For your exam? No, we don't want barriers. We want everyone to be involved and to have synergy. So disregard that comment I just made for your exam. What are some examples of low-tech, high-touch tools? Three-by-five cards, sticky notes, or charts. Those are all low-tech, high-touch. You've heard this term a couple of times: information radiator. An information radiator is just a way of radiating information. It could be a big whiteboard, or a chart that you print out and draw on, anything that communicates what's happening in the project. An information radiator will typically include items such as what's in the backlog, what's in the sprint backlog, what's our work in progress, what's our velocity, and burnup or burndown charts. It's up on the wall, everybody can see it, and it's a quick way to go and see what's happening in the project.
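As a tiny illustration of one item that often appears on an information radiator, here is a hedged sketch with hypothetical numbers (a 40-point sprint over 10 days, not from the course) of the data behind a burndown chart: remaining story points per day compared against an ideal straight-line burn.

# Hypothetical sprint: 40 story points planned over 10 working days.
total_points, sprint_days = 40, 10
actual_remaining = [40, 38, 35, 33, 30, 26, 21, 15, 8, 3, 0]  # measured at the end of day 0..10

for day, remaining in enumerate(actual_remaining):
    ideal = total_points - (total_points / sprint_days) * day  # straight-line burn to zero
    print(f"Day {day:2d}: ideal {ideal:5.1f}  actual {remaining:3d}")

Plotted on a whiteboard or flip chart, those two lines are the whole chart; the team can see at a glance whether the actual line is tracking above or below the ideal.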

Any type of tool that promotes communication and collaboration is great; you should use it. Anything that promotes learning and knowledge transfer, yes, we want to use that as well. Here's an example of an information radiator. What's wrong with this picture? It has the sprint number, what impediments are in the way, the pull requests, a burndown chart, some analysis, the build number. So what's wrong with this information radiator? It's really nice, it's really fancy, but it's a high-tech, low-touch tool. For your exam, we want just the opposite: low-tech, high-touch. You have to have the software to go and use this. You can go up and look at it, but you can't interact with it. That's the problem. It's really cool, don't get me wrong, and maybe it would be good for your project. I don't want to be critical of whoever made this, but it's the opposite of what we want for our exam. We want things like a whiteboard and sticky notes that we can move around and interact with; with this dashboard you have to have the software to interact with it, and all you can do here is read it. Now, scheduling software versus a kanban board: recall that a kanban board uses a pull system. It's also a task board. That is, when one activity is finished, it pulls the next one in line, all the way down the path. It helps the team monitor the WIP.

That's the work in progress, and we want the WIP to be low. Remember the idea of Little's Law: the more items that are in the queue, the longer the queue takes. The WIP needs to be low; that helps our velocity and our throughput. So WIP, which you've heard me mention a couple of times, is work in progress or work in process; sometimes it's called work in play. WIP represents risk, because there is no value in unfinished work, and when you have work in progress, there's a risk of failure. So the more WIP you have, the greater the risk. WIP can hide bottlenecks: you've got all this motion, but you don't see where things are queuing up, you don't see where the bottleneck is. Maybe it's testing, maybe it's compiling, maybe it's refactoring, who knows? WIP requires investment, but you don't get any return on that investment until the WIP is completed. So you want to reduce the size of your WIP in order to increase throughput. How do you go about limiting your work in progress? Agile does this with story points, by estimating how much we can do in an iteration; that's one approach to limiting WIP. With kanban boards we can also have WIP limits, where we only allow so many items to go into development at a time.
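Little's Law is easiest to see with numbers. Here is a rough sketch with hypothetical figures (five items finished per week is an assumption, not a number from the course): average cycle time is work in progress divided by throughput, so cutting the WIP directly shortens how long each item sits in the system.

# Hypothetical numbers to illustrate Little's Law: average cycle time = WIP / throughput.
throughput = 5            # items finished per week
for wip in (20, 10, 5):   # different work-in-progress limits we might try
    cycle_time = wip / throughput
    print(f"WIP of {wip:2d} items at {throughput}/week -> about {cycle_time:.1f} weeks per item")

With the same team finishing five items a week, dropping the WIP from 20 items to 5 takes the average wait from four weeks down to one.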

So kanban boards limit WIP. Now, WIP limits also keep a team from trying to take on too much. Most developers, in my experience, overestimate how much they can get done. And that's not to say anything bad about developers; they're very optimistic and they think they can get it done. However, learn about your team. Can they really, truly do that? Are they being realistic, or are they being optimistic about how many items they can get done in an iteration? The other thing we can do, early in the project, is take on what you think is correct. And despite claiming that we can complete ten features in this iteration, maybe we only complete seven.

All right, well, let's see; we do that for two or three iterations, and then the team, being self-organizing and self-leading, should begin to recognize, "Well, we've had three iterations in a row where all we've gotten done is seven." So it begins to stabilize as to what's a realistic amount of WIP. WIP, as I mentioned, will reveal bottlenecks, and the way we see this is through a chart: the cumulative flow diagram. So what is this chart? We're going to look at one in a second. What it does is help track and forecast the delivery of value, from when something comes in all the way through the end of the process. It helps reveal the total activities that are in progress, our WIP, and also what's completed. So let's look at a cumulative flow diagram. Alright, so this is it. It's layers showing what's in progress: all our features, our user stories, what have you, and where they are in the whole process.

This whole process is called lead time. Over here in the gray, that's our ready-to-start items, and all the way to the blue is what's been deployed. How long it takes to go through the whole thing is the lead time. So we have a larger backlog over here: the bigger this gray area, the more items there are in the backlog. Over each iteration, the backlog size should be reduced, and the blue area, what's been deployed, should increase. Let's look at the next one. Here we have this purple area. This is our WIP, what's really in progress here, our work in progress. The work in progress has to go through the purple, which is in progress; the red, which is in testing; the yellow, ready for approval; and the green, ready to deploy. The amount of time it takes to go from when an activity, one of our user stories, begins to be developed all the way through the end of green, ready to deploy, is the cycle time. Lead time, by contrast, includes waiting in the queue, waiting in our backlog, all the way through deployment. Cycle time is how long it takes to go through the processes: how long does it take to be created, tested, prepared for deployment, approved, and then actually deployed? That's what these different colors mean. Please take note that we have work in progress up here: our purple that's in progress, all the way through to deployment, and then we have our remaining work to be done. You can see this is getting smaller and smaller. So the cumulative flow diagram shows the accumulation of activities: as we get work done, those in-progress bands should get smaller and smaller, and the blue area, what's been deployed, will get larger and larger.
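To keep lead time and cycle time straight, here is a small sketch with hypothetical dates (the item and its dates are made up for illustration). Using the definitions from this lecture, lead time runs from when an item enters the backlog until it is deployed, while cycle time runs from when work on it actually starts until deployment.

from datetime import date

# Hypothetical user story; every date is invented for illustration.
item = {
    "entered_backlog": date(2024, 3, 1),   # waiting in the gray "ready" band
    "work_started":    date(2024, 3, 10),  # pulled into the in-progress band
    "deployed":        date(2024, 3, 24),  # reached the deployed band
}

lead_time  = (item["deployed"] - item["entered_backlog"]).days
cycle_time = (item["deployed"] - item["work_started"]).days
print(f"Lead time: {lead_time} days, cycle time: {cycle_time} days")

Teams draw the start and end points slightly differently, but the relationship holds: cycle time is always the inner slice of the lead time.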

At the beginning, we're going to have a very small blue area, because everything's in the queue, and a big gray area, because that's the amount of work that's waiting to be done. You'll probably have one or two questions about the cumulative flow diagram. Now, I mentioned bottlenecks. A bottleneck is like being on the interstate where three lanes suddenly have to come down to one lane. It's kind of a pain: everything is pushed together and everybody is trying to get over into that one lane.

Well, in a cumulative flow diagram, the bottleneck is represented too. You'll find it by looking for a thin band, like this yellow "ready for approval" band here. It's kind of a weak example of a bottleneck, but it's not the great big heavy bands behind it, it's the skinny one that's usually the bottleneck, because it's only letting a little bit through at a time and everything piles up against it, just like on the highway. It's not all the backlog that is the bottleneck; it's that one narrow lane that everything is trying to squeeze into. Bottlenecks are tied to the theory of constraints. A constraint is anything that limits your options.

Time, cost, and scope are typical constraints. Constraints, though, can also be about throughput or capacity. What I mean by that is, if you have a development team and they say they can only do five user stories per four-week sprint, that could be a bottleneck or a constraint, because they can only take on so much. Maybe they're still learning the software, who knows? But their capacity is small compared to other teams. So a thin line in that cumulative flow diagram can reveal a bottleneck. Now, if you're a PMP, you've probably heard of Goldratt's Theory of Constraints. Even if you're not a PMP, you may have heard of it. Basically, what we do here is focus on these constraints and see how we can eliminate them or use them to our advantage. So, first, identify the constraint. Second, can we exploit the constraint, take advantage of it? Third, can we subordinate all other processes to exploiting the constraint? If, after steps two and three are done, the constraint remains, maybe you just need more capacity; changing the other processes, slowing down the other work, won't really resolve it. So we need more effort, more labor; basically, we elevate the constraint, much like crashing the project. And then, if the constraint still hasn't moved even after you add more labor or effort, you go all the way back to step one. I doubt you'll see too much of this, maybe one question on your exam. So I would just be topically familiar with the idea of a constraint: the big constraints of time, cost, and scope, but it could also be the team's capacity for performance.

6. Contracting in Agile Projects

A topic that you'll definitely see on your PMI-ACP exam is contracting in Agile projects. We have to look at this from two different perspectives.

One: if you are an organization hiring a vendor, so you are the buyer and the vendor is the seller, you must consider how Agile will work in that relationship. Then we have the inverse: if your organization is coming in as the seller and you're going to go work for your customer, how will Agile and contracting work in that relationship? There are lots of things to consider from both points of view, and that's what we're going to talk about in this lecture on contracting in Agile projects. One of the first pieces to be addressed is the request for proposal. The request for proposal is usually associated with a statement of work: what do you want the vendor to do for you? So a request is made from the buyer to the seller. Now, if the seller is required to use Agile practices, you have to define that in the RFP, the request for proposal. If your company uses Agile and you want to contract out a piece of the work and require that the seller also use Agile, you have to let them know. You have to put it in the requirements, because they may look at it and say, "Oh, we use waterfall, or the cinnamon roll," or whatever approach they use. So you have to define what you want the seller to do.

Now, the buyer may need to educate the vendor about Agile practices. If you're just going to bring in some contractors, a seller for a portion of the project, you'll probably have to do a little bit of training about how Agile works in your environment. Agile projects welcome change, and we know change happens often in Agile. Well, vendors might be a little hesitant to accept that, because most vendors say: okay, how much work do you want done? This is how much time it will take, and this is how much it's going to cost you. So if there are lots of changes, which you're probably going to have in Agile, there's a little bit of confusion for the vendor: how much will this change cost, and why are you making all these changes? We have to talk about that with the vendor if they're not familiar with Agile and the anticipation of change. Now, Agile, as we know, is really flexible. Contracts, not so much. Contracts are not flexible.

Contracts are a form of constraint, because a contract says "this is the offer and the consideration." We offer to create the piece of software in consideration of the $54,000 you're going to pay for our time. So we have an offer and consideration, and within that there's a decomposition of exactly what you want the vendor to do. In Agile that creates a challenge, because Agile is just the opposite of that: the scope has lots of variability, while our time and cost are fixed, so contracts don't always mesh with the Agile mindset. Contracts are constraining, like the offer and consideration. Recall, though, one of the values in the Agile Manifesto: customer collaboration over contract negotiation. So if we're working with a vendor that uses and understands Agile, it may be a little easier to have collaboration over contracts. Now, on Agile constraints and contracts: Agile projects constrain time and cost and allow the scope to change. Remember that inverted triangle we had? The time and cost are set, but the scope varies. Contracts do just the opposite. Typically, they try to balance time, cost, and scope, the triple constraints of traditional project management: this is how much time it will take to create something within your scope, and this is how much it will cost. They try to balance those three things, and that's kind of tough to do in Agile.

So, considerations for contracts: we have to talk about scope changes, we have to talk about priorities, and then we have to talk about time and cost, which are usually fixed rather than variable. But it is possible to have contracts in Agile. We say, okay, these are our priorities, and we explain that priorities are going to change; our scope may change based on priorities changing, but we're going to have a fixed time and cost. So there may be some things that drop out of scope because they don't fit within the amount of time and money we have available. Now, on your exam you're definitely going to have some questions about contracting, the challenges of contracting in Agile, and customer collaboration over contract negotiation. Know that; embrace that. The idea is that yes, we want to do what's fair and equitable, but the way we do it is to collaborate with the customer; we don't run back and try to be litigious the way we often are in traditional contracting. Customers are also more involved with Agile than with traditional projects. So that's another thing to be aware of.

For your exam, there is a contract type we can use: the graduated fixed-price contract, where both parties share the risk and reward. If a vendor delivers on time, they get paid for their work at the agreed hourly rate. So if our goal is to have this published by August 1, and we publish, release, or whatever constitutes done by August 1, then the vendor gets paid what was promised. If the vendor delivers early, say July 15 instead of August 1, everybody's happy; but because they bill by the hour, finishing early would normally mean less pay than if they had gone all the way to August 1. So instead of being penalized for finishing early, the vendor gets a slightly higher hourly rate; it's kind of like a bonus for getting done early. The opposite is also true: if the vendor was supposed to deliver on August 1 and they come out on August 15, alright, they're still going to get paid, but now the hourly rate goes down, so they're penalized.
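The graduated fixed-price idea is easier to see with numbers. Here is a hedged sketch using entirely hypothetical rates, dates, and hours (a real contract would spell out its own tiers): the hourly rate steps up for early delivery and down for late delivery.

from datetime import date

# Hypothetical contract terms, for illustration only.
target_date = date(2024, 8, 1)
rates = {"early": 110.0, "on_time": 100.0, "late": 85.0}  # dollars per hour

def hourly_rate(delivered: date) -> float:
    if delivered < target_date:
        return rates["early"]    # bonus rate for finishing early
    if delivered == target_date:
        return rates["on_time"]  # the agreed base rate
    return rates["late"]         # reduced rate as the late penalty

hours_billed = 400  # assume the same hours in each scenario to isolate the rate effect
for delivered in (date(2024, 7, 15), date(2024, 8, 1), date(2024, 8, 15)):
    rate = hourly_rate(delivered)
    print(f"Delivered {delivered}: ${rate:.0f}/hr -> payment ${hours_billed * rate:,.0f}")

The exact tiers don't matter for the exam; what matters is that both parties agree to them up front, which is the point made next.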

So a graduated fixed-price contract can have these terms in it, but they're all defined up front. It's very important that, as we go into this, everyone understands how it works, what our milestones or dates are, and what those do to the hourly rate. That's something you want to pay attention to for your exam; it's one way of approaching the contracting challenges in Agile. Another approach we can take is the fixed-price work package. Now, in project management you probably know we have this idea of a work breakdown structure, something we really don't have in Agile. But the smallest item in the WBS, and here each of your user stories, each of your requirements, is called a work package.

So each work package, each user story, each feature, is estimated for cost. When there's a change to scope, you re-estimate the affected work packages. Each work package has a value associated with it, and if you're removing work packages and adding new ones, you re-estimate for those. So it's a way to re-estimate the changes that happen to the backlog. The price of the work remains constant; it's the value of each individual work package that adds up to the price of the work. And we can always customize contracts. That's a beautiful thing about contracts: you have an offer and a consideration, and both parties agree to it. So the buyer and seller can make any legal agreement they want. Procurement, though, is always tricky with Agile. It takes more planning, a little more creativity, and it always takes communication. The key thing, though, is that we want customer collaboration: collaboration over contracts.

7. Value: Verification and Validation

Value verification and validation: we have a lot to discuss here. What we're really talking about in this lecture is ensuring that value exists in your Agile project. For your PMI-ACP exam, there is some really important information in this lecture, so really pay attention, take some good notes, and you might want to watch this one more than once. All right, let's hop in and look at how we ensure value verification and validation in Agile projects. Let's start by defining the gulf of evaluation. It's the difference between what is said and what is understood. How many times in your project have you talked to a customer or a stakeholder and thought you understood what they were talking about? You go about doing the work with your team, and then you get back together and it's, "No, that's not at all what I was talking about." So you have this misunderstanding between what was said and what was really understood.

So the gulf of evaluation is the difference between what one party wanted and what the other party actually created. Intangible projects frequently encounter this gulf. Say I'm describing to you and, let's say, three other developers an app that I want: when I pull it up on my phone, it immediately tells me what type of tacos are in the area and how much they cost. Well, you would each have a different approach, three different ways of finding tacos in my neighborhood. So unless we're really clear, unless we have an in-depth conversation, prototypes, and lots of checking in to validate what I want, we're probably going to end up with three different designs, one from each developer. That's the gulf of evaluation: the misunderstanding. We have to take time to ensure that there is not a gulf between what was wanted and what was created or delivered. One approach to doing that is frequent verification and validation: testing checkpoints and reviews. It's a way for us to stay in touch with the customer and demonstrate that what we're making is exactly what they wanted.

We have frequent verification and validation happening throughout the whole project, so we have lots of opportunities to confirm: this is what you want, right? This is what you're expecting. This is the taco joint that you expect to show up in your app, or whatever app you're creating. The goal is to build consensus between the project team and the project stakeholders, consensus on what "done" means for the project. In XP, there are some really good examples of verification and validation. Remember pair programming, where two developers work together? One actually writes the code and the other checks it as it's being written, and occasionally they change roles. So pair programming is verification. Then there's unit testing, to confirm that what's been written will compile and integrate. And there's customer collaboration: we talk to business people every day in Agile, we collaborate. Customer collaboration is a big theme, obviously, in Agile projects.

Remember our standup meetings, our daily scrum, where we take 15 minutes and ask: What did you do yesterday? What are you going to do today? Are there any impediments in your way? Then we have acceptance testing: once we have compiled the software, we can go and show what was created. And at the end of the iteration, we have demonstrations. Remember the sprint review, where we demonstrate what we created; the same thing happens in XP with an iteration demo, and at some point we have the product release. So we have all these checkpoints confirming that we've created what was asked for, and that everyone has consensus. Now, a new term for you: exploratory testing. The tester aims to discover issues and unexpected behavior. A great example of this is my mom and dad. We got my dad a new computer, put Microsoft Office and Facebook on there, all the basic stuff, and wow, does he ever generate some interesting errors that I have no idea how he stumbled upon. They explore; they get in there and try different things out, like an end user is going to. So exploratory testing means you're just playing around with the software, trying out different things, seeing what does and doesn't work. What happens if I hit Ctrl+P instead of going to File, then Print? Does that work?

And what if I click this button and I want to send to a PDF instead of an actual printer, and so on? So the tester is in there exploring, trying to see if there are any unexpected behaviors when you do certain activities. This is done in addition to what most of us are familiar with, scripted testing, where we have scripts that we follow to do the testing. Now, usability testing. This is my favorite type of testing; I'm a little biased toward usability testing because I find it the most realistic. How will a user respond to the system under realistic conditions? It's where the individual who's going to be using the software, whether a tester or an end user, goes through and sees how easy the system is to use and what improvements need to be made for usability. You might also hear of UAT (user acceptance testing); the same general idea applies: how easy is it for the user to use the system, and should you make any improvements? Next, continuous integration in Agile projects. Continuous integration means we integrate our code frequently. We have frequent commits, so we incorporate new and changed code into the code repository frequently and in small batches.

Then, typically, it relies on automated tools to integrate the code when new code is checked in. It will run tests to make sure that what you're trying to check in will integrate successfully and doesn't create new problems with what's already been checked in. So we have a continuous integration system and a source code control system; this is all about versioning. You want version control so you don't overwrite each other's work. Build tools are the things that actually compile the code and get it ready to be used. Test tools handle unit testing: they test that the functionality operates as you expect it to. Then you might have a scheduler or a trigger that builds whatever has been checked in, so it's rebuilt on a schedule or based on certain conditions. Depending on the rules you define, your continuous integration system may send you an email or a text message to let you know what happened as a result of the build. Now, why do we do continuous integration? What's the point? Well, the first point is that it's an early warning about broken, conflicting, or incompatible code. When something breaks as we check it in and compile or integrate it, we know where the problem is: there's a conflict with whatever was recently checked in. Problems can be more isolated, because we're checking in smaller chunks of code, so we address problems when they happen. It also gives us immediate feedback each time we integrate: was it successful, or did it fail?

And then we have frequent unit testing, again finding issues quickly and making it easy to revert the code back to the last known good state. So for your exam, embrace this concept of continuous integration; I'm sure you already do this. Is there any downside to continuous integration? Well, I'm glad you asked. Yes: the setup time, unless you're doing lots and lots of software development, can be kind of a pain. The setup time is lengthy. That setup time is often part of what's called "iteration zero," because all you're doing is preparing the development environment. You have the cost of a dedicated server, which, let's be honest, if you're doing a lot of development, who cares? But you could have the cost of a dedicated server and the time to set it up, and then you also have to set up whatever your integration suite is and how it will notify you, whether by text message or email. And really, that's not that big of a deal. So the main disadvantage is just the time it takes to set all this up and configure it, but that time pays for itself because it saves time later. So there are some disadvantages here, but I wouldn't get too worried about them; the benefits far outweigh the disadvantages.
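As a rough, hedged sketch of the moving parts just described (source control, an automated test step acting as the build gate, and a notification), here is a minimal Python polling loop. It assumes you are inside a git working copy with pytest installed; a real setup would use a dedicated CI server with proper triggers rather than a script like this.

import subprocess, time

def head_commit() -> str:
    # Ask git for the commit currently checked out on this branch.
    result = subprocess.run(["git", "rev-parse", "HEAD"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

last_built = None
while True:
    commit = head_commit()
    if commit != last_built:
        # New code has arrived: run the automated unit tests as the integration gate.
        tests = subprocess.run(["pytest", "-q"])
        status = "SUCCEEDED" if tests.returncode == 0 else "FAILED"
        print(f"Build for {commit[:8]} {status}")  # a real system would email or text the team
        last_built = commit
    time.sleep(60)  # poll once a minute; real CI servers react to check-ins via triggers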

Another way we can do validation and find value is through test-driven development, also called test-first development. Here's the concept: we write the tests before we develop any code. We write the test first, then we develop a little bit of code and run it against that test, knowing it's going to fail because the code isn't complete yet. Then we go back and continue developing, run the test again, and at some point it passes, because the code now meets the goals of that test. By writing the test first, we give ourselves a goal to work toward; we know what we're trying to accomplish. So tests are written before the code is written, and we still do unit testing, with tools like NUnit or JUnit, as a way of testing in units, progressively, as we go toward the release. The code continues to be developed and edited until it passes all of the tests.
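Here is a tiny test-first example in Python (the function and test names are made up for illustration, not from the course). In a real red-green cycle, the test class below is written first and fails because story_points_total does not exist yet; then just enough code is written to make it pass.

import unittest

def story_points_total(stories):
    # Written after the tests below, and only enough code to make them pass.
    return sum(points for _, points in stories)

class TestStoryPointsTotal(unittest.TestCase):
    # These tests existed first; running them before the function was written fails (red).
    def test_totals_points_across_stories(self):
        stories = [("login page", 3), ("search", 5), ("checkout", 8)]
        self.assertEqual(story_points_total(stories), 16)

    def test_empty_backlog_is_zero(self):
        self.assertEqual(story_points_total([]), 0)

if __name__ == "__main__":
    unittest.main()

Once both tests pass (green), the refactoring step described next cleans the code up, and the tests are run again to prove nothing broke.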

And then there's that magic word again: refactoring, the final step of cleaning up the code. Recall that refactoring means: okay, you passed the test, but now we have to go and clean all this up. Any redundancy, sloppy code, or code that doesn't adhere to our standards gets cleaned up, and then we run it through the tests once more to make sure it still passes. Now we have nice, clean code that's easier to support and that other people can work with. Test-driven development with this refactoring step is also known as "red, green, refactor" or "red, green, clean." The focus is on the test first; you've got that. Early testing helps catch defects early in development, and that gives us a solid foundation to build on. You do not want the developers writing their own tests. The tests should be written by someone other than the person actually writing the code; it's too easy to write your own test when you know what it takes to pass it. It's better to have a real tester write the tests that need to pass, adhering to the requirements and the definition of "done." Now, acceptance test-driven development: this is testing again, but its focus is on business requirements.

The functionality of the software is represented by tests; it's all about desired behavior. Sometimes this is done with FitNesse, a framework for integrated testing. You might see FitNesse on your exam, but it's just a way of writing the tests first, so it's still test-driven development. More on the acceptance test-driven development cycle: you discuss the requirements, with developers and the product owner asking questions designed to gather the acceptance criteria. The definition of "done" is then distilled into tests in a framework-friendly format; basically, you get the tests ready to be entered into your acceptance testing tool. You develop the code and run the tests. Most likely the tests will fail at first, because you haven't written all the code yet; you go back, develop more code, rerun the tests, and continue until they pass. Then you're ready for the demo. So you have automated acceptance testing scripts and demonstrations of the software, very similar to the demo we saw in the sprint review. All right, for your exam, really pay attention to the topics in this lecture. I would encourage you to go back and watch it one more time and take some good notes, because remember, for the exam, anything on value, anything on improving value, is what you want to lean toward. All right, great job.

Study with ExamSnap to prepare with PMI-ACP Practice Test Questions and Answers, a Study Guide, and a comprehensive Video Training Course. Powered by the popular VCE format, the PMI-ACP Certification Exam Dumps are compiled by industry experts to make sure that you get verified answers. Our product team ensures that our exams provide PMI-ACP Practice Test Questions and Exam Dumps that are up to date.

Comments (5)


  • livian
  • Netherlands
  • Dec 06, 2024

@raila, @Uhuru, basically, the real exam questions are similar to the ones from dumps but pls remember that they are NOT the SAME. And of course, they are valid and go with robust explanations.

  • Uhuru
  • Ireland
  • Nov 19, 2024

hey who used the PMI-ACP sample questions? are they credible?

  • raila
  • New Zealand
  • Oct 29, 2024

@alikiba, Well-done! Thanks a lot for your insights! I now do PMI-ACP practice questions and answers, but they are pretty difficult. Was the final exam similar to these questions?

  • alikiba
  • Mexico
  • Oct 11, 2024

At last, I earned the PMI-ACP certificate!!! It took a while as I once retook the exam but now it all doesn’t matter! I’m here to say thx for your resources, guys!
For future partakers: I recommend that you prepare much in advance with varied materials. Also, be confident in your skills, and a pass is guaranteed!!
Hope this helps!!!

  • Cinderella
  • United Kingdom
  • Sep 23, 2024

hae guys, for you to attain and maintain good results pliz apply the latest pmi-acp test questions , they are very useful and help in training, you should not miss this,


