How to assess, communicate and manage uncertainty and risk with Agility?



The thought that disaster is impossible often leads to an unthinkable disaster.

Gerald M. Weinberg

Doubt is not a pleasant condition, but certainty is absurd.
Voltaire



Every time there’s a good business opportunity to develop a new product or evolve an existing one, executives want to know the required investment, the expected ROI and its probability, and the risks involved.

Therefore product and development teams are asked to figure out whether the new product or product evolution is technically feasible, how long it will take to implement, how much it will cost, when it will be finished, and how much risk and uncertainty is involved.
Then the Project/Release/Iteration Manager presents a release plan where time, effort and cost estimates are expressed as ranges. The degree of uncertainty and the level of risk determine the width of those ranges.

As a result, the degree of uncertainty and the level of risk will impact the investment decision, budgeting, the governance of the product portfolio, the release plan, and the external customer where one is involved.

 



How to assess and communicate risk and uncertainty

Visualising and communicating risk and uncertainty with agility means doing so in a lightweight form that is quick and simple to understand, easy to update constantly and frequently, and straightforward to act on iteratively.
This is very different from heavyweight approaches that document risks in long documents which quickly become outdated, often remain unread, and very seldom if ever lead to timely action.


The lightweight assessment is
constantly and frequently updated
as new learnings and new information emerge


The assessment of risk and uncertainty is typically done after an initial high-level identification of features and user stories and after an initial high-level effort estimation. The assessment is then updated periodically as new learnings and new information emerge.
A high-level way to assess and visualise the overall degree of uncertainty and the level of risk consists of categorising the implementation of the new product, or the evolution of the product, as an Exploration type of work or as a Production type of work.



Investing more time detailing upfront requirements,
specifications and design won’t reduce risk or uncertainty



- Production type of work is characterised by known problems and known solutions. It occurs when the implementation work is similar to previous implementations. As a result it has a very low degree of uncertainty and a very low level of risk. Estimation ranges can be narrow because low estimation errors are expected.

- Exploration type of work is characterised by a high level of uncertainty and unknowns in the problem and/or in the solution, and in the product/market fit. In addition, there can be plenty of things outside the team’s area of control or influence that can change at any time, impacting the success of the implementation and of the product. For this type of work, investing more time detailing upfront requirements, specifications and design won’t reduce risk or uncertainty.

There’s a continuum between the very low risk and uncertainty production type of work and the high risk and uncertainty exploration type of work.






Different opinions and different points of view
are invaluable sources of information,
therefore they are encouraged and explored


The members of the product and development teams who will do the actual work should collectively vote on where they think the current implementation work stands, each placing a dot between the two extremes, discussing whenever they see differences among their votes, and revoting after the discussion. In case of persistent differences of opinion among team members, it’s advisable to raise the level of risk and uncertainty. Alternatively, this evaluation can be done by the team’s Product Owner together with the Tech Lead and the Project/Release/Iteration Manager.



Actual effort or scope can be up to 4 times the initial estimate



For implementation work estimated to be at the extreme Exploration end, a pure exploration type of work, the estimation error can be up to 300% of the initial estimate. In other words, the actual effort or scope can be 4 times the initial estimate. For a pure production type of work the estimation error can be 5-10%, while for implementation work that stands in the middle the estimation error can be up to 50%. For examples see the Cone of Uncertainty, based on research in the software industry and validated by NASA’s Software Engineering Lab.
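As a rough sketch of how those error percentages translate into ranges, here is a small Python illustration. The 3-month initial estimate and the symmetric low/high bounds are hypothetical assumptions for illustration, not data from the Cone of Uncertainty itself:

```python
# Rough sketch: how an estimation-error percentage widens an initial estimate.
# The 3-month initial estimate and the symmetric low/high bounds are
# hypothetical illustrations, not Cone of Uncertainty data.

def estimate_range(initial_months, error):
    """Return (optimistic, pessimistic) bounds for a given relative error."""
    return initial_months / (1 + error), initial_months * (1 + error)

initial = 3.0  # months

for label, error in [("pure production work", 0.10),
                     ("middle of the continuum", 0.50),
                     ("pure exploration work", 3.00)]:
    low, high = estimate_range(initial, error)
    print(f"{label:24s} -> {low:4.1f} to {high:4.1f} months")

# Pure exploration: the 3-month estimate can turn into up to 12 months,
# i.e. 4 times the initial estimate (a 300% estimation error).
```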



How to detail and explain risk and uncertainty

When executives and managers want to know the reasons behind the estimated overall risk and uncertainty level, or when product and development teams want to improve the accuracy of their assessment, verify it, or present it in more detail, it’s useful to drill down into these three main risk and uncertainty components:

1) people and market
2) domain and requirements
3) technology and architecture

For each component the level of risk and uncertainty can be rated Low (green), Medium (amber) or High (red) and presented, for example, like this:
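For illustration only, a purely hypothetical assessment of the three components might read:

1) People and market: Medium (amber)
2) Domain and requirements: Low (green)
3) Technology and architecture: High (red)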






The following lists break down each of the three main risk and uncertainty components into multiple elements. While discussing the elements, new elements specific to the teams’ and the product’s context and circumstances may emerge and should be added to the lists.


The elements in the lists below are placeholders
to provoke discussions and provide basic guidance


Note that the elements in the lists are placeholders to provoke discussion and provide basic guidance. A team experienced in dealing with risk, uncertainty and complexity would probably start with empty lists and populate them through the discussions.



1) People and market





2) Domain and requirements





3) Technology and Architecture





Each element of the lists should be discussed one at a time by the product and development teams to rate its level of risk and uncertainty.

The rating of each element should be given, for example, by considering historical data (or its absence), listening to the professional judgment of team members, and weighing each element in the current context and circumstances. Whenever there are differences among team members’ estimates, there should be a discussion to learn from each other’s point of view, followed by a re-estimation. In case of persistent differences of opinion among team members, it’s advisable to raise the level of risk and uncertainty.


This approach should help rate the overall level of risk and uncertainty for each of the three components and explain the reasons behind each rating.


All new information, findings and learnings
are used to verify, validate, update and enhance
results and decisions from previous stages


From those estimations, and from the conversations that led to them, product and development teams should be able to re-vote where the current implementation work stands in the continuum between the production type of work and the exploration type of work, considering possible interactions among the components and producing a more accurate assessment.

Tip: Keep it simple! Update the assessment frequently using new info and new learnings available over time.




How to manage high risk and high uncertainty

When the work required to deliver a new product, or a product evolution, is expected to be mostly of the Exploration type, estimates of cost, effort and time are expressed as extremely wide ranges, with estimation errors up to 300% of the initial estimate (for examples, see the Cone of Uncertainty mentioned before). Therefore a linear upfront investment based on an initial estimate is hazardous for Exploration type of work.



Small experiments and prototypes
are designed, built, deployed and tried out

with real users, early adopters and customers
in hours, days, or a few weeks



In these cases it’s more convenient to start with an initial exploration phase whose goal is to identify and reduce the main risks and uncertainties by exploring the problem space, testing assumptions, validating the solution, verifying product/market fit, clarifying the scope, and learning new information.

During the exploration phase an experimental approach is adopted: the shortest possible experiments are designed and executed to gather data with minimum effort, and the smallest possible prototypes are built, deployed and tried out involving, as much as possible, real users, early adopters and customers. Each is done in the space of hours, days, or a few weeks. One of these experiments is the minimum viable product, or MVP.


Investment decision and estimates
are finalized only after the end of the
exploration phase



An exploration phase ends only when risk and uncertainty are reduced enough that a good, informed investment decision becomes possible. For an overall effort of one year, the exploration phase could last, for example, 2 months.





After the exploration phase, estimates sometimes still have a wide range, with estimation errors up to 30-50%; for example, the development time could be estimated in the range of 6 to 12 months.


Low priority requirements can be used as a safety net
to deal with residual risk and uncertainty



When this happens it’s particularly convenient to prioritise the requirements in the backlog using the MoSCoW prioritisation method, in order to use the requirements classified as Should as a safety net.

Here the best-case scenario forecast, which assumes the team’s highest velocity, includes in the scope the requirements classified as Must together with those classified as Should.
The worst-case scenario forecast, which assumes the team’s lowest velocity, includes in the scope only the requirements classified as Must.
As a result, in this example, the initial estimation range of 6 to 12 months, which is 6 months wide, is turned into a range of 6 to 8 months, only 2 months wide.
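A minimal sketch of this forecasting logic follows. The story-point sizes and velocities are hypothetical, chosen only to reproduce the 6-to-8-month example above:

```python
# A minimal sketch of the MoSCoW safety-net forecast.
# All numbers are hypothetical, chosen to reproduce the 6-to-8-month example.

must_points = 120      # total size of the Must requirements (story points)
should_points = 60     # total size of the Should requirements (the safety net)

highest_velocity = 30  # points per month, optimistic
lowest_velocity = 15   # points per month, pessimistic

# Best case: the team runs at its highest velocity and the scope
# includes both Must and Should requirements.
best_case_months = (must_points + should_points) / highest_velocity

# Worst case: the team runs at its lowest velocity and the scope
# is cut down to the Must requirements only.
worst_case_months = must_points / lowest_velocity

print(f"Forecast: {best_case_months:.0f} to {worst_case_months:.0f} months")
# -> Forecast: 6 to 8 months
```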






After a short period of time
from the beginning of the implementation work,
the investment decision and estimates
are verified against real progress



After about two months of work implementing the solution, it’s worth observing the trend of the team’s velocity (a rough sketch of this check follows the list):
  • When the velocity is stable or converging and is inside the ballpark, a new, more accurate estimation can be done and plans can be updated accordingly.
  • When the velocity is stable but outside the ballpark, this is a sign that the investment decision should be re-evaluated.
  • When the velocity is diverging outside the ballpark, this is a sign that there could still be risks and uncertainties that need to be explored, extending the exploration phase.
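A rough sketch of such a check, assuming the team records its velocity per iteration and derives a ballpark range from the release plan; the function, the stability threshold and the sample numbers are made up for illustration:

```python
# Rough sketch: classify the recent velocity trend against the planned ballpark.
# The ballpark range, the stability threshold and the sample numbers are hypothetical.

def classify_velocity(velocities, ballpark_low, ballpark_high, stability=0.15):
    """velocities: story points completed in each of the most recent iterations."""
    recent = velocities[-3:]                 # look at the last few iterations
    average = sum(recent) / len(recent)
    spread = (max(recent) - min(recent)) / average
    stable = spread <= stability             # small spread -> stable or converging
    inside = ballpark_low <= average <= ballpark_high

    if stable and inside:
        return "re-estimate and update the plans"
    if stable and not inside:
        return "re-evaluate the investment decision"
    return "extend the exploration phase: risks and uncertainties remain"

# Example: iteration velocities of 19, 21 and 20 points against a 17-23 ballpark.
print(classify_velocity([19, 21, 20], ballpark_low=17, ballpark_high=23))
# -> re-estimate and update the plans
```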

Conclusions

This lightweight, gradual, iterative approach to assessing, communicating and managing uncertainty and risk is a simple and effective way to monitor, and react in a timely way to, the variety of circumstances that impact investment decisions, budgeting, governance of the product portfolio, and release planning. It also encourages and supports conversations about risk and uncertainty between executives, managers, product and development teams, and customers.

It is based on current lean/agile literature and on personal experience.

The mechanics of this approach are simple. In addition, a few organisational cultural traits and leadership mindset characteristics are useful to make it work: transparency, trust, teamwork, and tolerance for experimentation.

If you are interested in the topics discussed in this post, these suggested readings are for you:
  • Book: Lean Enterprise: How High Performance Organizations Innovate at Scale. Jez Humble, Joanne Molesky, and Barry O’Reilly. 2015
  • Article: 4 Significant Ways to Improve Your Ability to Innovate. Joanne Molesky. 2015
  • Book: Agile Project Management: Creating Innovative Products (2nd Edition).  Jim Highsmith. 2009
  • Article: A Leader’s Framework for Decision Making. David J. Snowden, Mary E. Boone. 2007
  • Article: How to prioritize risks on your business model. Ash Maurya

 

Thanks to Maurizio Pedriale and Carlo Bottiglieri for their help in the review of the draft of this post.

Overcoming the one weakness of OOP


Abstract:
OOP does not provide built-in support for flexibility in object interrelationships, comparable to what encapsulation provides, such that relationships between objects, or the structure of objects in a relationship, can change without affecting the rest of the program.

The work of Professor Karl Lieberherr on Adaptive Object-Oriented software programming, based on the Law of Demeter, is the first to highlight this weakness of OOP, to explain its relevance and to introduce a solution based on a specific design, high-level specifications and automatic code-generation tools.


This post describes the work and the solutions introduced by Karl Lieberherr and by other well-known authors such as Steve Freeman, Nat Pryce, Tim Mackinnon and Sandi Metz. The presented solutions are based on designs that use composition, the dependency injection pattern, a visitor-like pattern, the composite pattern and duck typing.




Glossary of ambiguous terms, as used here
Object graph: the web of relationships among a group of interrelated objects.
Object relationship: an association such as composition or aggregation, or a dependency.
Object structure: the definition of the publicly accessible state and behaviour of an object, its data and functions, its fields and methods.



What is the one weakness of OOP?

A key advantage of object-oriented programming is certainly the kind of flexibility that object encapsulation provides.
Thanks to that flexibility, the representation of an object (the inside of the object, the internal implementation details, the internals) can be changed without affecting the rest of the program.
As a consequence, it is easy to locally understand an object in isolation, change it, extend it, evolve it, and reuse it.

When talking about encapsulation, the focus often goes to encapsulating the object’s state, the information, the data, the internal object structure.
What about relationships among objects?
OOP does not provide built-in support for flexibility in object interrelationships, comparable to what encapsulation provides, such that relationships can change without affecting the rest of the program.






The one weakness of OO code is that it is brittle in the face of changes to the relationship between objects and to the structure of objects that are part of an object graph.
Two common examples follow:
  1. When an object’s unstable dependency changes, it can trigger changes in the object itself, and this can start a chain reaction of changes in dependent objects.
  2. When an object graph changes, or some responsibility or state is moved from one object to another, the code that traverses the objects and makes computations on them needs to change too, to adapt to the new object graph and the new structure of the objects.

Examples with source code are also available here [1] and here [2].

Overcoming the one weakness of OOP is a common design challenge for many software developers building systems with OO languages.
What solutions have been found so far? How do they relate to encapsulation?



How to overcome the one weakness of OOP?

How to make OO code less brittle in the face of changes to the relationship between objects and to the structure of objects that are part of an object graph?


This post introduces solutions from three well-known sources.
The three solutions follow below.


1. Professor Karl Lieberherr’s work on Adaptive programming and the Law of Demeter



In 1987 Ian Holland at Northeastern University formulated a style rule for designing object-oriented systems called The Law of Demeter [3].


The Law of Demeter is best known in its formulation at the method level, which pertains to how methods are written for a set of class definitions. Typical examples of violations of this formulation include method call chains such as dog.getBody().getTail().wag(), colloquially known as a train wreck [4]. Conformance to this formulation of the Law of Demeter supports object encapsulation.
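A minimal sketch in Python of the train wreck above and of a version that conforms to this formulation (the classes and the delegating methods are illustrative, not code from [4]):

```python
# Train wreck: the caller reaches through Dog's internals, so it depends on
# Body and Tail and on how they are wired together.
class Tail:
    def wag(self):
        print("wagging")

class Body:
    def __init__(self):
        self._tail = Tail()
    def get_tail(self):          # exposing internals invites train wrecks
        return self._tail
    def wag_tail(self):          # delegating method, Demeter-friendly
        self._tail.wag()

class Dog:
    def __init__(self):
        self._body = Body()
    def get_body(self):
        return self._body
    def express_happiness(self): # callers talk only to the Dog
        self._body.wag_tail()

dog = Dog()
dog.get_body().get_tail().wag()  # violates the Law of Demeter
dog.express_happiness()          # conforms: no knowledge of Body or Tail needed
```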


The formulation of the Law of Demeter that applies to the structure of the classes is less known. This formulation makes the notion of unnecessary coupling very explicit. Conformance to it supports modularity and low coupling in object relationships, and it makes code less brittle in the face of changes in the relationships and in the related objects [4]. In other words, conformance to this formulation helps to overcome the one weakness of OO code. But how can this conformance be achieved?


Between 1991 and 1996 Professor Karl Lieberherr [5] developed Adaptive Object-Oriented software programming and the Demeter Method [0], a concept that takes encapsulation to a new level. This work clearly identifies and describes the one weakness of OOP and provides a working solution.


The solution, as shown here [1], conforms to the Law of Demeter and uses programming by composition so that each composite object shields other objects from changes in the composed objects and in their relationships. The solution also replaces the hard-coded navigation paths used to traverse the object graph with higher-level navigation specifications. Automatic tools called Demeter Tools use the navigation specifications to regenerate and adapt the code that traverses the object graph and invokes the functions, whenever the object graph or an object structure changes.
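A very rough Python sketch of the underlying idea follows: the computation names only the start object and the target of the traversal, and the path is derived from the current shape of the object graph instead of being hard-coded. This illustrates the concept only; it is not the syntax or the behaviour of the actual Demeter Tools:

```python
# Sketch of the idea behind navigation specifications: the computation names
# only the start object and the target type, and the traversal is derived from
# the current shape of the object graph instead of being hard-coded.

class Salary:
    def __init__(self, amount): self.amount = amount

class Employee:
    def __init__(self, salary): self.salary = salary

class Department:
    def __init__(self, employees): self.employees = employees

class Company:
    def __init__(self, departments): self.departments = departments

def collect(obj, target_type, found):
    """Generic traversal: walk attributes and lists looking for target_type."""
    if isinstance(obj, target_type):
        found.append(obj)
        return
    children = obj if isinstance(obj, list) else vars(obj).values()
    for child in children:
        if isinstance(child, list) or hasattr(child, "__dict__"):
            collect(child, target_type, found)

company = Company([Department([Employee(Salary(100)), Employee(Salary(200))])])

salaries = []
collect(company, Salary, salaries)       # "from Company, reach every Salary"
print(sum(s.amount for s in salaries))   # -> 300, with no hard-coded path
```

If Department is later restructured, or a new intermediate class appears between Company and Salary, the last three lines do not change.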


Here I name this solution as: “Programming by Composition + Demeter Tools”.


2. Mock Objects and Growing object-oriented software, guided by tests

Thanks to Steve Freeman, Nat Pryce and Tim Mackinnon for reviewing the draft of this post!


Between 1999 and 2010 a group of people, first from the Connextra team and then from the Extreme Tuesday Club (Connextra team members: Tim Mackinnon, Tung Mac, Matthew Cooke, Iva More, Peter Marks, John Nolan; Extreme Tuesday Club members: Steve Freeman, Philip Craig, Oli Bye, Paul Simmons; Joe Walnes from ThoughtWorks and Nat Pryce), explored, experimented with and developed a new way of writing object-oriented software that revolves around the practice of test-driven development (TDD), test automation and a new technique called Mock Objects.

The full story is documented here too [10]. The initial work between 1999 and 2004 was documented in two papers [6][7], and in 2010 a new book [8] summarised the whole experience produced by all the people involved.



The trigger for the discovery of the technique was John Nolan setting the challenge of writing code without getters and then favouring void methods over non-void ones. Later Peter Marks helped coin the name ‘Mock’.

The driver for the technique was literally a pragmatic way to practice TDD and write good tests without going against what were felt to be good design principles, for example without exposing object internals for the sake of testing, or shying away from composition. The work was also inspired and influenced by the Law of Demeter and Lieberherr’s work, and by [12][13].


Probably the most important effects on coding style were the development of Mock Objects and the favouring of composition over inheritance, which led toward programming by composition. Programming by composition decouples object behaviour from the structure of the object graph. In addition, each composite object shields, to some degree, other objects from changes in the composed objects.

Programming by composition also led toward a design pattern that nowadays is called dependency injection [9], not to be confused with dependency injection frameworks (a.k.a. IoC frameworks or IoC containers), which were never needed in the large Connextra code base. In turn, dependency injection led to minimising objects’ dependencies and to decoupling them.
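A minimal Python sketch of programming by composition with dependency injection, using a made-up domain (this is an illustration of the style, not Connextra code):

```python
# Sketch: OrderProcessor is composed from the narrow roles it needs, and the
# concrete collaborators are injected from outside rather than constructed
# inside it, so changing or swapping a collaborator does not ripple into it.
# All names are made up for illustration.

class PaymentGateway:                      # role
    def charge(self, amount): raise NotImplementedError

class Notifier:                            # role
    def order_confirmed(self, order_id): raise NotImplementedError

class OrderProcessor:
    def __init__(self, payments: PaymentGateway, notifier: Notifier):
        self._payments = payments          # injected dependencies
        self._notifier = notifier

    def process(self, order_id, amount):
        self._payments.charge(amount)
        self._notifier.order_confirmed(order_id)

# In a test the roles can be played by simple fakes or mock objects:
class FakeGateway(PaymentGateway):
    def __init__(self): self.charged = []
    def charge(self, amount): self.charged.append(amount)

class FakeNotifier(Notifier):
    def __init__(self): self.confirmed = []
    def order_confirmed(self, order_id): self.confirmed.append(order_id)

processor = OrderProcessor(FakeGateway(), FakeNotifier())
processor.process("order-42", 10.0)
```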

Unlike the solution of Lieberherr, this solution does not use automatic tools for code generation.

Here I name this solution as: “Programming by Composition + Dependency Injection”.


A secondary effect on coding style was the tendency to push behaviour towards Visitor-like objects, objects resembling the Internal Iterator pattern [11] that have similarities with the Visitor pattern. A team member from Connextra remembers using the Visitor-like pattern together with the Composite pattern, used among other things to abstract away differences between the traversed objects.

The abstraction introduced with the Composite protects the code, to some degree, from changes in the object graph and in the structure of the objects.
The Visitor-like design reverses the direction of an unstable dependency relationship (from a stable object to an unstable one), turning it into a stable relationship (from the unstable object acting as the “visitor” to the stable one acting as the “element”).
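A minimal Python sketch of this combination, with hypothetical names: the Composite and the leaves share the same accept interface, so the visitor-like object never depends on the shape of the tree:

```python
# Sketch: the composite (Group) and the leaves (Task) share the same accept
# interface, so the visitor-like object (a cost calculator here) never depends
# on how the tree is actually shaped. Names are hypothetical.

class Task:                                  # leaf
    def __init__(self, cost):
        self._cost = cost
    def accept(self, visitor):
        visitor.visit_task(self._cost)

class Group:                                 # composite
    def __init__(self, *children):
        self._children = children
    def accept(self, visitor):
        for child in self._children:         # traversal stays inside the graph
            child.accept(visitor)

class TotalCost:                             # visitor-like object carrying the behaviour
    def __init__(self):
        self.total = 0
    def visit_task(self, cost):
        self.total += cost

plan = Group(Task(3), Group(Task(5), Task(2)))
calculator = TotalCost()
plan.accept(calculator)                      # same call whatever the tree shape
print(calculator.total)                      # -> 10
```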




While the focus on this solution diminished after the first paper, it is documented here because of its similarity with the solution presented by Sandi Metz.

Here I name this solution as: “Visitor-like + Composites”.



An interesting afterthought about the technique of TDD with mocks from Tim Mackinnon: “I can’t stress enough how most people have missed, and still do, that connection with CRC cards, mocks and role play. Mocks and the technique really came from the idea of working with a partner to act out what you expected the design/objects to do - and then ‘asserting’ those interactions. That was the number one design point. For us, this was the aha moment.”

Along the same lines, Nat Pryce comments: “There was quite a change in emphasis between the Endo-Testing paper and the Mock Roles, Not Objects paper and GOOS book. For example, the former recommended using mock objects to fake third-party APIs that are difficult or slow to use for real in tests, such as JDBC. The latter recommended *not* mocking third-party APIs, but rather discovering the appropriate interfaces for mocking from the needs of client objects. Also, the Mock Roles, Not Objects and GOOS style focused more on messages and protocols between objects.”


3. Less, The path to better design



Sandi Metz, in her talk ‘Less, The path to better design’ [2], takes on the challenge of the one weakness of OOP and presents design solutions that deal with changes in unstable object dependencies, both in the object structure and in the object graph.


The approach she presents is based on the idea that designers cannot predict the future, but they can guard against it by carefully choosing object dependencies, identifying those that are less stable and surrounded by more uncertainty, and then aggressively decoupling them.

This approach is in tune with the second formulation of the Law of Demeter, the one that applies to the structure of the classes.


The solution that Sandi Metz presents employs a design similar to the Visitor pattern to decouple from unstable dependencies.

In the problem presented there are two objects that need to interact to execute a task. The first object is known, under control and more stable; the second object is less stable because it is surrounded by more uncertainty. In order to reduce the coupling of the first one to the second, the second object acts like a Visitor in the Visitor pattern while the first object plays the role of the visited element. This design reverses the direction of the dependency and, by doing so, turns the unstable dependency relationship into a stable one.

Unlike the previous solution, this one does not make use of the Composite pattern to guard the code against changes in the object graph, because the language she uses, Ruby, supports duck typing, which serves the same purpose.
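A minimal sketch of the same reversal, written here in Python rather than Ruby and with illustrative names (duck typing plays the role that the Composite played in the previous solution):

```python
# Sketch: Trip is the known, stable "element"; the preparers are the less
# stable "visitors". Trip only announces itself to whatever is passed in, and
# any object responding to prepare_trip(trip) will do: duck typing replaces
# the explicit Composite of the previous solution. Names are illustrative.

class Trip:                                   # stable element
    def __init__(self, bicycles):
        self.bicycles = bicycles
    def prepare(self, preparers):
        for preparer in preparers:
            preparer.prepare_trip(self)       # the only thing Trip knows

class Mechanic:                               # unstable visitor-like object
    def prepare_trip(self, trip):
        for bicycle in trip.bicycles:
            print(f"servicing {bicycle}")

class TripCoordinator:                        # another preparer, added later
    def prepare_trip(self, trip):
        print(f"buying food for {len(trip.bicycles)} riders")

Trip(["road bike", "mountain bike"]).prepare([Mechanic(), TripCoordinator()])
```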


Here I name this solution as: “Visitor-like + Duck Typing”.



Comparing solutions

Here are some similarities and differences among the solutions suggested by the authors:

  • One solution uses tools and higher-level navigation specifications to regenerate code whenever there are changes to the relationship between objects and to the structure of objects.

  • Two solutions include the use of Programming by Composition: to decouple object behaviour from the structure of the object graph, and to shield, to some degree, other objects from changes in the composed objects.

  • One solution includes the use of the dependency injection pattern: to minimize objects’ dependencies and to decouple them.

  • Two solutions include the use of a Visitor-like pattern, used essentially to reverse the direction of an unstable dependency and so turn the dependency relationship into a stable one.

  • Two solutions abstract away differences between objects traversed in an object graph to protect the code, to some degree, from changes in the object graph and in the structure of objects:
    • One solution for statically typed languages does this with the Composite pattern,
    • The other solution for dynamically typed languages does this with Duck Typing.

  • All the solutions explicitly address how to carefully choose and limit dependencies, with different but substantially equivalent means.

 

Beauty & Dignity

Beauty: what pleases the senses, what enlightens the mind, what is righteous, what is just, what is truthful, what is good.

Dignity: the idea that each and every human being without distinction of any kind, by the mere fact of being born into this world, has innate equal rights such as the right to freedom and to the pursuit of happiness, and is worthy of honour, esteem, consideration and respect.


 

Lean-Agile Coach self-assessment radars


These slides show 3 self-assessment radars for Lean-Agile Coaches.


The first slide is based on the Agile-Coach Competency Framework by Michael K. Spayd and Lyssa Adkins.
The second and third slide come from a blog post by Esther Derby.


Feel free to comment and to question the skills, the traits, and the levels.



“Don't bring me problems, bring me solutions.” Really?!?!







“The thought that disaster is impossible often leads to an unthinkable disaster.”
Gerald M. Weinberg










Modern leadership is servant leadership; modern managers are like hosts who receive and entertain guests.
Team members have ownership and autonomy in the way, in the ‘how’, they pursue the value they are asked to create.

When team members face difficulties, they raise obstacles to management’s attention, and managers act on the obstacles that bubble up from the team. This follows the principle of transparency and feedback.

Are managers ready to hear about all these problems?

Planning ÷ reacting: finding the balance





Life is what happens to you while you're busy making other plans - John Lennon, Beautiful Boy









There are things that can be planned and others that cannot be. The conundrum is: which is which?


What happens when someone

This is why it is important to find out in every moment the balance between the two, to know


Some say that Agile is the art of finding the balance between anticipation (e.g. planning) and adaptation (e.g. reacting).

An interesting final reflection: what values, principles and practices help to find a good balance, and how?

 


Agile Diversity, theme of XP2014 Rome


Transcript of the lightning talk at XP2014 about:
Agile Diversity, theme of XP2014 - 15th International Conference on Agile Software Development May 26-30, 2014, Rome, Italy


Who are you?
Who am I?
What defines yourself, your identity?

Each one of us is unique, a distinct individual, different from everyone else.
Without diversity there is no identity; without diversity we would be just an army of clones.

By the mere fact that we exist, we have the right to our identity, we have the right to diversity.
This is true from a personal point of view, and it's also true in the workplace.

The company I work for, ThoughtWorks, for example actively encourages a diverse range of people in all parts of the company, in terms of such things as gender, religion, race, sexual orientation, and the like.

Everybody has the right to love the person they love, without distinction of gender, religion or race.
You have that right. I have it too.


In Italy we know it very well: without diversity and dissent there is fascism.
That’s why both critical thinking and independent judgment are important; otherwise we become victims of an ideology.
When someone tells you, for example, “Do TDD because I know better than you”, that is ideology. I prefer to have the right to ask why, to try and experiment, and to make my own decisions.



Diversity also means pluralism, the idea that there can be several systems of principles and values that conflict with each other and yet are all useful and fundamentally correct.
Without pluralism we get trapped into tribal fights, as for example Scrum vs Kanban, Lean vs Agile, or TDD vs BDD.
With pluralism, instead, you can be a Lean and Agile polyglot; you can find contextual fitness for purpose instead of being limited to a one-size-fits-all solution.


So give yourself, your dear ones, and those around you whom you care about, a gift: recognize and accept diversity, give yourself the possibility to choose among a large rainbow of many different colors and options, and the right to change your mind when you feel like it.

You’ll be a better Lean & Agile professional, a better community member, and a better person too.

Embrace change. Embrace diversity.

The nature of software quality, the complexity of the intangible



In my everyday life, here and there, I perceive humour and kindness, discipline and grace, personality and competence, easily and clearly. But when my colleague Matthew asked me to measure them, I realised that’s a whole different story.


When you ask 10 experienced software engineers, users and entrepreneurs what bad quality is in software, they will provide you with plenty of examples. Then, when you ask them to define good software quality and how to measure it, you’ll easily get 10 different answers. This is because nowadays we still don't have a rigorous definition for it, and we don't even know whether one exists at all.



It is certainly not a simple problem; it took Tom De Marco about 27 years to realise the nature of this complexity.
It was 1982 when Tom De Marco wrote, in Controlling Software Projects, the famous line “You can't control what you can't measure”. In 2009 he wrote the article Software Engineering: An Idea Whose Time Has Come and Gone? for the IEEE Software journal, where he wrote: “do I still believe that metrics are a must for any successful software development effort? My answers are no, no, and no.”



So why is it so difficult just to define and measure software quality?
Maybe because
- quality perception is subjective
- quality is in part in the eye of the beholder and in part in the eyes of the engineers
- quality can be relative, for example superior or inferior to a competitor’s product
- quality is contextual, it is fitness-for-purpose; it depends on the context and on the person who judges it



In the end, the problem of defining and measuring software quality can be like the problem of predicting the future: software engineers create new and unprecedented software applications and are asked to figure out how to do it right the first time, without knowing exactly what ‘right’ will mean.


Now that you know the nature of software quality, how do you deal with it efficiently?

One suggestion from Jim Highsmith: do not focus on what is easily measurable and then ignore important characteristics that are harder to quantify!

Self-organisation without self-regulation, a recipe for chaos


This post is from the series of posts on Self-Organisation.



The first time I experienced an Agile Lego game was playing the Leadership game: we were asked to form 3 teams and were then given a goal to pursue. We worked in time-boxed iterations, and at the end of each iteration we reflected on how to become more effective, even with the freedom to move to another team if that could be more useful.


In other words, we had been asked to self-regulate: team members get early and frequent feedback in order to perceive the connection between their actions and the consequences, and based on that they react and adapt their behaviour and their actions as needed to reach the desired goal.


Self-regulation is a required element of a well-functioning self-organisation. Indeed, one of the principles of the Agile Manifesto states:
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

The Lean principle Amplify learning suggests increasing feedback via short feedback sessions to support continuous improvement, to better understand customers’ and the project’s needs, to learn how to better satisfy those needs, and to adjust efforts for future improvements accordingly. Kanban is a Lean tool that helps here in many ways.




Why is it important to reflect, as individuals and as a team, at regular intervals?
Because when our actions have consequences beyond our learning horizon, it becomes impossible to learn from direct experience.

What does it mean to become more effective?
For a team it means, for example, getting better at:
  • Completing team projects (i.e. quantity, rate and quality of the outcomes)
  • Fulfilling member needs (i.e. rewardingness from membership or from a member, commitment from members and to members)
  • Processing information and generating meaning (i.e. contribution of information from members, proportion of information held in common or uniquely by single member, proportion of information relevant to the tasks or socio-emotional)
  • Managing conflict and developing consensus (i.e. distribution between task and procedure and interpersonal conflict, expressed versus unexpressed conflict, escalation and de-escalation dynamic, level of implicit and explicit consensus)
  • Maintaining the structure and integrity of the team as a system (i.e. team social and task cohesiveness, patterns of interaction, of influence, of participation and of affect)
  • Motivating, regulating and coordinating member behavior (i.e. behavioral coordination about team norms and about errors, and the speed/delay of the feedback)

Managing without impeding self-organisation



From the series of posts on Self-Organisation.

I used to play football and basketball with friends every now and then in the afternoons after school. We formed 2 teams of the same size, with players filling the basic roles required for the game.

We were free to self-organise, guided and constrained by the team sizes and roles. Team sizes and roles defined our boundaries/barriers. In a self-organising team there are many boundaries/barriers that can be set and tweaked.


This excerpt from Joseph Pelrine’s training material describes boundaries/barriers:

They define the edges of the system, who is in and who is out. By changing the barriers of the system, who is included and who is not, you change the dynamics in the system. In a sense, a boundary is the opposite of an attractor – people will shy away from it. “Barriers” is a more appropriate term than “boundaries”


Boundaries and barriers can be rigid or elastic, and the elastic ones are more resilient. They are useful when:
1) they are good, beneficial, fit for purpose
2) the team is capable of benefiting from them


Boundaries and barriers can be set by an agile manager or a coach. They can also originate from external factors such as the company’s strategic direction, company policies, budget, technology, partners, clients, and project goals and priorities. An agile manager or a coach can set, tweak and visualize boundaries and barriers to guide and safeguard the team:


to direct and influence the emergence of behaviors in positive directions,
to amplify the emergence of beneficial behaviors and to reduce or revert the non-beneficial ones.


So managers can influence the outcomes without micro-managing and without impeding self-organisation.