Video - Half Double methodology results by Associate Professor Per Svejvig

A selection of Half Double Institute Videos covering a wide range of topics including the methodology, research, literature and case studies.

Project Half Double - A Proven Concept, April 2018

Transcript - Half Double methodology results by Associate Professor Per Svejvig

[00:00:04] Thank you very much.

[00:00:06] Good morning, everybody. I'm really excited to be here. It has been a long journey, seen from a research point of view: three years of work. And I'm going to present some of the results we have achieved. I should also say that I'm from academia now; I spent 25 years in industry and have now been 10 years in academia. I think it was interesting to hear the presentation about your Ph.D., and I hope we generally do better. But I'll come back to that issue with you, Niels; I have a comment on that which will come later, so I'll wait a little bit. But before digging into my presentation, I would like to go back to June 2015, when Michael showed a slide from the kick-off we had.

[00:00:59] And at that point in time, I said that the research team would focus on three things. The first was evaluation. The second was documentation. And the third was communication and dissemination.

[00:01:14] And I would say that this is what we have been focusing on, seen from a research point of view. First of all, evaluation: we have been evaluating nine pilot projects in process, and I'll come back to how we evaluate and what kind of evaluation we do. Documentation: we have released three publicly available reports you can download, with a lot of results from the different pilot projects, and we have published seven academic papers, both conference papers and journal papers, with more in process. And then there's communication and dissemination.

[00:01:52] I don't know how many meetings I've been involved in, where I have presented and discussed Half Double, both together with Implement and together with a lot of organizations who have invited me and others from the research team.

[00:02:06] So I think we have fulfilled what was the goal back in June 2015, and now it's time to harvest the fruits: I want to share with you what came out of all this. There have been three or four researchers involved in doing the job. So this is what I'm going to tell you more about. Just to give you a brief overview of what I will try to present in the next twenty minutes: first of all, I would like to give you an overview of the results and a little bit about the research process, what went into it. Then we have tried to present five important results, which we call high-level findings. And if you're interested in reading more, we have a small booklet which has been prepared for the conference and is available at the research station. You can come and get it, and there you can get a very brief overview of the research results. So that's the second part of my presentation: first an overview, and then the five high-level findings. And if you have any questions about what we have been doing, we'll be at the station later today, after my presentation. So that's it. Let's turn to a really concentrated presentation of the results. This is a single slide presenting the overview of the results we have from the Half Double project. What you see on this slide is that we have nine organizations.

[00:03:42] We have different project types. Then we have a column saying impact from the Half Double methodology, and another column saying Fulfilling Project Success Criteria.

[00:03:56] The first column, about impact from the Half Double methodology, can show high impact, medium impact or low impact. And as you see, two projects have had low impact, while seven projects have had high or medium impact. So I think that's quite a good result. Then you can ask: how do we evaluate this? The way we do it is that we find three reference projects in the same organization against which we compare the pilot project. That's the way we come to the conclusion of whether the Half Double methodology has had high, medium or low impact. So it's in fact an internal benchmarking process which gives this result. The other column, called Fulfilling Project Success Criteria, you're probably more familiar with: many projects state some success criteria before starting the project.

[00:04:49] Here we have evaluated to what degree these success criteria are fulfilled.

[00:04:56] And as you can see, eight out of nine projects have either fulfilled or partly fulfilled the success criteria. That's quite good compared to the 33 percent figure that was presented earlier. So just looking at the figure up here, there's more green than red, which indicates at least to some extent that Half Double has been successful. But of course, there's a lot of detail behind all this.
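The comparison logic described here, in which each pilot is rated against three reference projects from the same organization, can be sketched in code. This is only a minimal illustration of the idea, not the study's actual instrument: the numeric scoring scale and the thresholds are invented assumptions.

```python
# Illustrative sketch of the internal-benchmarking evaluation described above.
# The scale and thresholds are invented for illustration; the study's actual
# assessment is qualitative and documented in the published reports.

def classify_impact(pilot_score: float, reference_scores: list[float]) -> str:
    """Compare a pilot project's score against reference projects from the
    same organization and classify the Half Double impact."""
    baseline = sum(reference_scores) / len(reference_scores)
    delta = pilot_score - baseline
    if delta >= 2.0:
        return "high"
    if delta >= 0.5:
        return "medium"
    return "low"

# A pilot scoring well above its internal baseline counts as high impact.
print(classify_impact(8.0, [5.0, 4.5, 5.5]))  # high
```

The point of the sketch is that "impact" is always relative to the organization's own baseline, which is why the talk calls it internal benchmarking rather than an absolute rating.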

[00:05:31] Another way of presenting the results from the earlier figure is to make this drawing here, where you have the success criteria on the X axis and the impact on the Y axis.

[00:05:47] And then we have mapped all the projects, and all the projects above the diagonal line are green. We call them green because they have been successful in some way: they have either partly fulfilled or fulfilled the success criteria, and they have had medium to high impact from Half Double. Again, I think that's very good, so to speak. We can also go into more detail on why the Grundfos and Siemens pilot projects did not fully achieve the success criteria, or only had low impact from Half Double.

[00:06:28] So there's a lot of reasons behind that. And if you download the report we have published, you'll be able to find a lot of explanation about that.

[00:06:39] Still, I think it's a good picture; it shows something about the degree to which this has been successful. Then I would like to give you a little bit of insight into our research process, because I think it's important to understand what we have been working with and how we have been working. First of all, the formal part of the research process started in 2015, but in fact I've been working with this topic from a research point of view since 2012, as Michael mentioned. This also means that we have a lot of research data: we have been in nine organizations, with more in process, and because we map at least four projects in each organization, we have more than 36 projects mapped in detail. So this is quite comprehensive knowledge about project data, and I don't think there's a lot out there with such detailed mapping of projects. Typically what we do is that we have four to six interviews, we have workshops, we have review meetings. We come back to the organization and discuss our findings, and we try to agree: is this the right way to specify the given results from the organization? Then we have comprehensive documentation. We have the three reports published, but for each of the organizations we have been working with, we also have internal write-up reports which compare the projects in detail.

[00:08:14] These internal reports are confidential, because there's a lot of confidential information in them, and sometimes we know something which we can't publish, because the organizations don't want us to do that.

[00:08:27] So that's a balance, but we always discuss with the organization what we can publish. Then finally, we are using what is called mixed methods analysis, where we combine quantitative and qualitative studies. That's not something I'll go deep into. But every time we work with research, there are two very important questions: generalization and limitations. Generalization is about how we can use the data from these nine organizations in other settings. How can you generalize from one place to another?

[00:09:11] And I would say that you can take the results from the nine pilot projects and use them in your organization. If you look for similar patterns, then I'm quite sure you'll also be able to do it. But remember, there are a lot of issues. It's not just about using the Half Double methodology; it's also your project governance, it's the maturity of your people. There are a lot of contextual issues as well. So yes, of course you can generalize the results from the nine pilot projects to other settings, but it's not a one-to-one generalization.

[00:09:46] The second thing is limitations. There are a lot of limitations to our research; I'm sorry to say, there normally are a lot of limitations to research.

[00:09:58] And as an overall bold statement, I can say that applying the Half Double methodology is not a guarantee of success. So that's a bold limitation, because it would be stupid to think that you can just take the wheel or the Half Double concept directly, start using it, and then say, OK, now I'll get success. That's not what we can see.

[00:10:22] But we can say that if you think carefully about using it, use it in a good manner, look for similar patterns, and try to translate the results we have from the pilot projects to your own setting in a good way, then there is a good chance of having the same kind of success as we see in the nine pilot projects.

[00:10:52] So this was the overview of the findings. Now I'll try to present five important issues or topics which we think are very important for understanding the Half Double methodology.

[00:11:09] And the first one is quite simple: the Half Double methodology works. How do I dare to say that as a researcher? Yes, I dare to say it, because we can see from the nine pilot projects that there are so many signs that it is working. And this leads to our statement up here, which says the Half Double methodology can lead to higher impact from pilot projects compared to similar reference projects in the same organization. So what we say here is that if we compare with reference projects in the same organization, you will be able to achieve better results with the methodology. At least it can do that; we don't say will, we say can. That's also an important little word. So this is about the Half Double methodology. Another thing, which I have touched upon, is that seen from a success criteria point of view, the results are also promising. Eight out of nine projects have either fulfilled or partly fulfilled the project success criteria. That's quite good; it's quite promising compared to many studies about projects. So that's another way of saying it works. This was the first important finding. The next finding is about where I can apply the Half Double methodology.

[00:12:39] Is it a universal methodology I can apply everywhere? Here I have to be a little bit cautious, because we're still working with the data, trying to find the factors which could specify whether it works or not.

[00:12:56] But at least so far, and with this caution in mind, we can say that projects within supply chain, organizational change, information technology, e-commerce, market and product development have been successful. We can also say that the two projects, at Grundfos and Siemens, doing large engineering product development, were not successful when we applied the methodology at that point in time. But it might be that in the future we will be able to apply the methodology within engineering projects as well. I think it's possible, but there are a lot of reasons why it didn't work in these two organizations, which might not only relate to the project methodology itself; a lot of contextual factors could be impacting it.

[00:13:48] But at least in the green area, I wouldn't mind saying that if you have a supply chain project, I think it's quite obvious that you will be able to utilize the findings so far. One thing is to discuss whether it works, and we have now discussed a bit in which kinds of project types it works. The second thing is: what are the practices you should use in order to get good results? That's also something we have been digging into. What we have done here is that, for each of the pilot projects, we have compared the practices used in the reference projects with the practices used in the pilot project.

[00:14:33] And then we looked at which practices made a difference. What was the difference between what the organizations were normally doing with their projects compared to what they were doing in the pilot project? That's the result you see up here.

[00:14:49] And this leads to the practices we call powerful practices, because we think they make a difference. The first ones are related to flow: short and fat projects, which is what Niels talked about, colocation, and highly allocated people. We say that people should be allocated more than 50 percent, or at least 50 percent, to the project. So we really want to put a lot of resources into the project in a short time; that's short and fat projects. And we can tell you that if you do that, it works.

[00:15:25] People will say that's just common sense. But if it is common sense, why are so few organizations doing it? You can ask yourself that.

[00:15:36] That's flow. Then we have impact, where we have three practices, and I'll take two of them together: the impact case and the impact solution design. That's the value creation perspective that was talked about: of course, we do projects in order to create impact. And our results show that this clear focus, throughout the project, on the impact case and impact solution design has an effect. That is one of the reasons why we achieve good results. The third impact practice is the pulse check. The pulse check is where you take the temperature of your stakeholders and your project. And this is a really good way to get an understanding of whether we are on the right track or need to do something. So the pulse check is a very important tool, and it's really a simple tool.

[00:16:31] So it's easy for many organizations to implement, even if you don't go for the full Half Double package. The pulse check is something you can just start using if you want to.

[00:16:41] Finally, we have a leadership practice which relates to active project ownership: the active project owner is highly engaged with the project, focusing on developing and inspiring, being together with the project group, and not just sitting at a steering committee meeting every six weeks, getting a PowerPoint presentation and taking a decision. That's not the way it works. We need much more engaged project owners.

[00:17:11] And I think about how much money is spent on projects. Niels talked about 35 percent, and these 35 percent in organizations are the most uncertain 35 percent, because I would say that production is much more routinized and much more certain.

[00:17:30] Why do managers not focus more on projects, the 35 percent where the big uncertainty is? That's something I often wonder about. Why not do that? It makes sense to spend time on projects, I would say. But those are the results on which practices make a difference. And luckily, these results converge with other results. This is another study.

[00:17:57] This study says that relevant and realistic information for making authorization decisions in the business case and target benefits is the strongest predictor of project success overall. That's exactly like our impact case and impact solution design. It also says, down here, that additionally the existence of the project owner role as a single point of accountability in the organization is a strong predictor of project success. So I would say the results we have found in Half Double converge very nicely with other studies, and from a research point of view it's very good to find other studies which come up with the same results.

[00:18:47] Then to finding number four, maybe a finding which is not directly related to the different pilot projects, but something we have observed which is maybe just as important: it's quite easy to explain the Half Double methodology.

[00:19:09] In fact, you can stand with this single piece of paper, and in 10 minutes I'll be able to give you an overview of what Half Double is about.

[00:19:17] I would never be able to do that with PRINCE2 or the PMBOK or other kinds of standards. And I think that's a good thing in itself: the methodology can be explained in such a small booklet, less than 50 pages. Think of that. I think that's a strength in itself. So that's the reason why we say simplicity is the keyword for the Half Double methodology. And that's something I haven't seen in other best practices. There is a tendency that all the best practices are becoming longer and longer. The PMBOK is now nearly 1,000 pages.

[00:19:55] That's the sixth edition with the Agile Practice Guide. Would you really like to read it? If you want to read a thousand pages, go ahead; or if you need to become certified, then of course you have to do it.

[00:20:07] Then you're forced to do it if you want to answer the questions. So that's an example of something that is growing and growing, and I think it's nice that we have a simple methodology, that the keyword is simplicity. And I think we, and you, should understand that that makes it usable in many ways. But there's always a but; there's a trade-off. Because if you want people to use the methodology, then you need project team members, project managers and project owners who are really what we call reflective practitioners. They need to have something to build upon if they are to be able to use it, so they can't do robot project management. They have to be reflective and adaptive and understand how to use it. But I think it's really nice that you can explain the methodology very briefly and very quickly. Finding five comes from our own research process, because in order to establish a language for evaluating all the pilot projects, we had to, what should I say, set up a project evaluation framework. And of course, we could use something which was already there, and then we added something.

[00:21:38] But we came up with what we call a multifaceted evaluation framework, shown here, which can be used as part of the learning process. So we also need to establish the language around how we evaluate and discuss projects.

[00:21:52] The first of the five dimensions we have in this framework is the classical triangle, and I think everyone in this room knows what the classical triangle is: on time, on budget, on scope and so forth. It's a very bad measurement, or at least it's not a sufficient measurement, I should say; maybe not bad, but not sufficient. Then we have the specific success criteria measurement. That was the one I showed on the first slide, where we saw to what degree the projects fulfilled the success criteria. The third dimension is internal benchmarking. Internal benchmarking was what we used when we took the pilot projects and compared them with three reference projects in the same organization.

[00:22:41] And I think a lot of organizations could learn a lot from doing internal benchmarking and using it for portfolio management. That's something which could be used within a Half Double setting or another setting. So we have specified how you can work with this internal benchmarking. The fourth dimension is external benchmarking, where you benchmark projects across organizations. And the first slide I showed, with the overview of results, is in fact external benchmarking.

[00:23:14] But now I come back to Niels. There's a fifth dimension, called learning, and I think that's a very overlooked factor.

[00:23:24] So it might be that Niels feels the 800-page thesis was a waste, but I'm not so sure. It might be that we would never have seen Half Double if you hadn't done this Ph.D. and been on a journey to where it is, because you build up a lot of learning when you do things, and the same happens in a project. So we could have a long discussion about that. But I think: take care when thinking about what is waste. Sometimes waste is maybe learning, and that's much more important than we understand. A failed project could be the best learning arena for an organization. That's something we have to think of, and what we see in Half Double is that the learning points from each pilot project show that the project has left a clear footprint on the organization. No matter to what degree they were successful in applying the Half Double methodology, the organizations have learned a lot. And that's also part of doing these things. I think it would be dangerous to overlook the learning factor, and that's something we have to combine with our waste thinking. I understand that, seen from a consultancy point of view, it's easy to talk about waste, and I also think there is a lot of waste in projects; I agree on that. But I just say: be a little bit cautious, because learning is also part of what we are doing. Reflection, learning, adapting: all these issues are also something we need to focus on.
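The five dimensions of the evaluation framework can be pictured as one record per project. The sketch below is only an illustration of the structure described in the talk; the field names and value formats are my own assumptions, not the framework's actual definitions.

```python
# Minimal sketch of the multifaceted evaluation framework: the five
# dimensions named in the talk, gathered into one record per project.
# Field names and value types are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProjectEvaluation:
    project: str
    iron_triangle: dict          # dimension 1: on time / on budget / on scope
    success_criteria: str        # dimension 2: fulfilled / partly / not fulfilled
    internal_benchmark: str      # dimension 3: impact vs. reference projects
    external_benchmark: str      # dimension 4: comparison across organizations
    learning_points: list = field(default_factory=list)  # dimension 5

ev = ProjectEvaluation(
    project="Pilot A",
    iron_triangle={"time": True, "budget": True, "scope": False},
    success_criteria="partly fulfilled",
    internal_benchmark="medium impact",
    external_benchmark="above diagonal",
    learning_points=["pulse check adopted organization-wide"],
)
print(ev.success_criteria)  # partly fulfilled
```

Keeping learning as an explicit field, alongside the classical triangle and the benchmarks, reflects the talk's point that even a project that scores poorly on the other four dimensions can still leave a valuable footprint.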

[00:25:05] This leads to my final slide, just summarizing the five findings we think are important. First of all, the Half Double methodology works.

[00:25:15] There are areas where we can see sweet spots, meaning the methodology seems to fit very well. There are some powerful practices which seem to make a difference. Simplicity has been an important keyword for the Half Double methodology and its presentation. And finding five: multifaceted evaluation is part of the learning process. We simply need to establish an evaluation process we can learn from. Thank you very much for listening.

See all Half Double Research information.