Blog


Review of Learning, Part 5: Systems Thinking

posted 24 Oct 2014, 08:47 by Robert Taylor   [ updated 24 Oct 2014, 08:57 ]

Awareness of Systems Thinking | Inventory, Flow, Constraints and Throughput

Systems Thinking or The Systems Approach

A system is defined as a set of things working together as parts of a mechanism or an interconnecting network; a complex whole. The act of systems thinking is to be cognisant of the whole when considering the parts.

Mullins (2009) references Bertalanffy (1951) as the founder of Systems Theory and General System Theory (GST), and refers to the business organisation as an open system: part of a broader environment with which the organisation is constantly interacting.

Deming (1986) suggests that production should be viewed as a system, and that to improve quality the whole system, from incoming materials to the consumer, should be considered. The 5th of his 14 Points for Management states that the system of production should be constantly improved to increase quality and productivity, and thereby decrease costs.

Inventory

Ohno (1988) defines one of the seven non-value-adding wastes in a process as excess inventory, suggesting that excess raw materials, work-in-progress or unshipped goods cause longer lead times, obsolescence and delay. Inventory obsolescence refers to the deterioration in value of inventory over time.

Ladas (2009) suggests that specifications (or requirements) that have not been implemented, designs that have not been reviewed, and code that has not been tested and deployed can all be considered inventory. He goes on to suggest a chain, or system, that is self-limiting: teams should not build features which people do not yet need, or define more requirements than the team can code, or write more code than can be tested, or test more code than can be deployed into an environment where a user can use it.

Eisenmann (2013) presents Spolsky's (2012) view of software inventory, describing items in a feature backlog which will never be implemented; the time taken to write down, define, design, think about and discuss those items can therefore be considered waste.

Theory of Constraints

Goldratt (1992) introduced the Theory of Constraints (ToC), the suggestion that all systems have bottlenecks, which are neither good nor bad, but simply the reality of any system. Goldratt states that the bottlenecks within a system determine its effective capacity, and that we can use those bottlenecks (or constraints) to control the flow through the system. He suggests that bottlenecks govern both throughput and inventory, and that only by increasing flow through those constraints can the overall throughput of a system be increased.

To improve the throughput of a system, Goldratt suggests first identifying the system's constraint(s), then exploiting (making the most of) the identified constraint and using it as the control (or heartbeat) for the system. Only then should the focus shift to increasing the constraint's capacity, remaining aware that elevating one constraint may simply move the bottleneck elsewhere in the system.
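As a toy illustration of that point (my own sketch with made-up step capacities, not an example from Goldratt), a serial delivery pipeline's throughput is capped by its slowest step, and improving any other step changes nothing:

```python
def system_throughput(step_capacities):
    """The whole pipeline can deliver no more than its slowest step: the constraint."""
    return min(step_capacities)

# Hypothetical capacities in items per day.
steps = {"analyse": 12, "build": 9, "test": 5, "deploy": 20}

print(system_throughput(steps.values()))   # 5: 'test' is the current constraint
print(min(steps, key=steps.get))           # 'test'

steps["build"] = 15                        # improving a non-constraint step...
print(system_throughput(steps.values()))   # ...still 5: the system is no better off

steps["test"] = 14                         # elevating the constraint raises throughput to 12,
print(system_throughput(steps.values()))   # and the constraint has now moved to 'analyse'
```

The last two lines also show why the cycle repeats: once a constraint has been elevated, a new constraint emerges elsewhere in the system.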

Flow and Pull

To reduce waste within a system, and therefore reduce costs and increase quality, Ohno (1988) suggests that the ideal state of production is to have the least amount of inventory, along with just-in-time flow of that inventory to the right place, at the right time, in the right quantity. Understanding that it would be extremely difficult to apply zero-inventory and just-in-time to every process in a system, he proposed the use of kanban as a way to visually control production, prevent overproduction and reduce waste.

Kanban is the Japanese word for a billboard or sign, and is used more broadly in lean manufacturing to mean a signal or tag. Ohno observed that by having each step in a system signal to the previous step when its inventory needs to be replenished, a pull process is created back through the system, which reduces the amount of inventory and the overall lead time of the system: the time it takes to realise value from the system's input.
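As a rough sketch of that pull mechanism (my own illustration, not Ohno's actual kanban card system; the station count and limits are arbitrary), each station pulls from its upstream neighbour only when it has a free slot, so work-in-progress stays bounded however long the line runs:

```python
from collections import deque

def simulate(ticks, kanban_limit=2):
    supplier = deque(range(100))            # effectively unlimited raw material upstream
    stations = [deque(), deque(), deque()]  # work-in-progress queues for three stations
    delivered = []
    for _ in range(ticks):
        # Customer demand pulls one finished item from the final station each tick.
        if stations[-1]:
            delivered.append(stations[-1].popleft())
        # Working backwards, each station pulls from upstream only if it has a free slot;
        # nothing is ever pushed downstream, so inventory cannot pile up.
        for i in reversed(range(len(stations))):
            upstream = stations[i - 1] if i > 0 else supplier
            if len(stations[i]) < kanban_limit and upstream:
                stations[i].append(upstream.popleft())
    wip = sum(len(s) for s in stations)
    return len(delivered), wip

print(simulate(ticks=50))  # (47, 3): one item delivered per tick once flowing, with bounded WIP
```

Because nothing is pushed downstream, inventory is capped by the kanban limits rather than growing with the length of the run.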

Key points:
  • Consider the whole system; organisations are not closed systems
  • Be aware of the cost of inventory, and waste within a system
  • Identify, exploit and alleviate constraints within a system
  • Use flow and pull to understand the capacity and constraints of a system

References

Mullins, L. (2009). Management and Organisational Behaviour, 8th edition, Pearson Educational.

Deming, W. E. (1986). Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology. Center for Advanced Engineering Study, 6.

Ohno, T. (1988) Toyota Production System, Productivity Press

Ladas, C. (2009). Scrumban: Essays on Kanban Systems for Lean Software Development.

Eisenmann, T. (2013). Managing Startups: Best Blog Posts; includes Spolsky, J. (2012), Software Inventory.

Goldratt, E. M., Cox, J., & Whitford, D. (1992). The goal: a process of ongoing improvement (Vol. 2). Great Barrington, MA: North River Press.

Review of Learning, Part 4: Quality Feedback and Facilitation

posted 24 Oct 2014, 08:29 by Robert Taylor   [ updated 30 Nov 2016, 10:54 ]

Importance of Quality Feedback and Facilitation

Goodwin and Miller (2012) suggest that vague feedback has negative effects, resulting in uncertainty, decreased motivation, and diminished learning for individuals. In referencing Wiliam (2011) they state that the best feedback is clear, with specific guidance on how to improve. They also suggest that feedback provided weeks after completion of a long-since-forgotten unit or assignment presents little opportunity for learning.

I find this congruent with my own views on feedback, for both individuals and teams. For teams, a facilitator, or perhaps a servant leader as Greenleaf (1977) presents, can help to bring teams together, to discuss in an open and constructive manner any issues present in the work or the team.

During a recent Leadership and Management course I was introduced to the BOCA model for providing positive and constructive feedback. The model (Behaviour, Outcome, Consequences, Action) provides a way of structuring feedback using an outcomes-based analysis of the situation. I've found this technique helpful when considering what feedback is appropriate, and what I want to communicate.

Derby et al. (2006) define a retrospective as a special meeting where a team gathers, after completing an increment of work, to inspect and adapt its methods and teamwork. In this meeting the whole team has an opportunity to learn and to generate actions for improvement, frequently and in a timely manner.

More traditional project methodologies leave the lessons-learned report or postmortem step to the end of the project. The OGC (2009) places capturing the lessons resulting from a project, and completing a Lessons Learned Report, in the project closure stage. In my opinion this is too late; it does not provide timely feedback to the teams involved or to the wider organisation they are part of.

The importance of specific and timely feedback is as true for groups as it is for individuals, and well-facilitated retrospectives can help teams to improve.

Key points:
  • Feedback should be specific and timely.
  • Use outcome-based analysis to ensure clarity and purpose of feedback.
  • Hold frequent, facilitated retrospectives to help teams improve.

References

Goodwin and Miller (2012). Good Feedback Is Targeted, Specific, Timely. Educational Leadership.

Derby, E., Larsen, D., & Schwaber, K. (2006). Agile retrospectives: Making good teams great. Raleigh, NC, Dallas, TX: Pragmatic Bookshelf.

Review of Learning Part 3: Value, Iterative and Incremental Delivery

posted 24 Oct 2014, 07:54 by Robert Taylor   [ updated 24 Oct 2014, 08:52 ]

Focus on Value and use short Iterations and Incremental Delivery

Focus on Value

Ries (2011) defines value as providing benefit to the customer, but observes that in some circumstances who the customer is and what they might find valuable is unknown. This is especially true when innovating or launching a new product into the market.

Ariely (2009) suggests that people, being irrational, often don't know what they want until they see it in context. Projects that define a large batch of feature requirements up front with the users often find that, once the features are delivered and people have had the opportunity to use them, additional requirements or changes to those requirements are requested.

Learning from Failure

Argyris (1991) coined the terms single- and double-loop learning to distinguish between individuals solving externally presented problems (a project, for example) and the need to reflect critically on their own behaviour, through inspection and adaptation. He suggests that highly skilled professionals tend to be very good at single-loop learning, but are often bad at double-loop learning.

Argyris observes that when single-loop learning outcomes go wrong, individuals become defensive and shift the blame onto others. This defensiveness limits their ability to learn, and he makes the point that doubt and debate are needed to promote learning. In double-loop learning, he suggests, assumptions should be challenged and hypotheses tested.

In his commentary on Argyris' work, Tsoukas (2002) suggests that double-loop (or reflexive) learning is much more relevant in post-bureaucratic organisations, because individuals are more psychologically present in companies that are rich in information, and where employees are required to make more day-to-day decisions based on that information.

Empirical evidence to aid group judgment and decision making

Tversky and Kahneman (1973) described in their research how humans make use of heuristics which, simply put, reduce the complexity of making probabilistic judgements. They observed that while heuristics are useful, they can lead to severe and systematic biases. Systematic or cognitive biases can be a cause of decision paralysis and/or conflicts of direction within a group. When a group of people come together, each will bring their own heuristics and, therefore, cognitive biases to the table. In general this is a good thing; it helps prevent groupthink and poor decision making. But it can also be detrimental to product delivery, as differing cognitive heuristics and biases can lead to conflict within the group.

We can address this problem, and take cognitive bias out of the equation to some extent, by working in short feedback loops, using an iterative process and empirical evidence to test the success or failure of the experiments undertaken.

Iterative and Incremental

The concept of short iterations and measuring progress with empirical evidence works well to combat Parkinson's (1957) Law, that work expands to fill the time, and can be a much better way of estimating delivery timescales accurately. In my experience, timeboxed iterations are a fundamental step in improving product teams that may currently be delivering in large batch sizes, and a step on the journey towards understanding their capacity and constraints while incrementally delivering feature and process improvements.

Deming (1986) presents the Shewhart Cycle as a flow for improving a product or process. The cycle defines the steps as plan–do–check–act (later revised to plan–do–study–act): continuous improvement through an iterative process.

Key points:
  • Focus on delivering value and use experiments to find value
  • Test hypotheses and learn from failure
  • Increment features through iterative feedback loops
  • Use short iterations to enable a faster feedback loop

References

Ries, E. (2011). The lean startup: How today's entrepreneurs use continuous innovation to create radically successful businesses. Random House LLC.

Ariely, D (2009), Predictably Irrational: The Hidden Forces that Shape Our Decisions.

Argyris, C. (1991). Teaching smart people how to learn. Harvard Business Review.

Tsoukas, H. (2002).

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology.

Deming, W. E. (1986). Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology. Center for Advanced Engineering Study, 6.

Review of Learning Part 2: Product Teams

posted 16 Oct 2014, 03:09 by Robert Taylor   [ updated 24 Oct 2014, 08:51 ]


Product Teams | Small, Stable, Co-located

If organising work by product (or service) is the appropriate approach for specialist individuals involved with software delivery, how do we go about structuring that Product team?

Much of this depends upon the value a business is looking to achieve, and this influences which specialists are needed to accomplish the work required. In my opinion there are some fundamental aspects we need to be cognisant of when structuring a Product team.

Small

Brooks' (1975) law states that adding manpower to a late software project makes it later. This is based on the premise that people take time to become productive within a team, and that communication overheads increase as the number of people in a team increases. To address this I advocate starting with the smallest team possible, and only adding to it as the work emerges and becomes clearer. Although growing the team in this way will have an impact on the group's development, in my opinion it is better to start with a small core team and pull in the skills required as the work emerges and the needs of the group are identified.

Group Development

As Tuckman (1977) observed, people within teams progress through four stages of group development: Forming, Storming, Norming and Performing. When a group is first formed, individuals come together under the banner of a shared purpose. People are initially hesitant and stand-offish, and often not clear on what their role in the group is or what's required of them.

In a more typical organisational structure, where work is organised by major function, a significant part of any project's setup and life-cycle can be concerned with first identifying the relevant individuals, and then having them progress through to the Performing stage of group development. When individuals are not closely collaborating, this group development is hindered.

Co-located

Often individuals are based across different locations within the office, or perhaps geographically distributed across different offices around the country or even the world. Having individuals who are distributed across locations makes progressing through the stages of group development harder, and can take longer.

Teasley et al. (2000), in their study of six teams operating in a war-room environment (the entire project team in one room for the duration of a project), observed remarkable productivity improvements for the teams involved.

I advocate starting with as many individuals as possible co-located in a single location, sitting together and working as a team. This will help them progress faster through the stages of group development. I would also add that, once mature, such a team can still perform well when working remotely, perhaps in a more virtual team setup, but this can only happen effectively after the team has progressed past the forming and storming stages.

Reduce Waiting

Ohno (1988) observed that within the timeline from the moment a customer places an order to the point where the customer receives the order and cash is collected, there are both value-added and non-value-added steps. He suggests there are seven major types of non-value-adding waste within a business process; put simply, one can shorten the timeline by removing the non-value-added waste. One of the wastes Ohno identified is waiting, which typically occurs during the handover from one process step to the next.

Identifying and reducing the causes of waiting within a team and its processes is a continuous activity, but we can structure a product team with the aim of limiting handovers, and therefore the waiting time, between a new feature request or concept hypothesis and the point where that feature or hypothesis can be tested or used by the intended customer.

Key points:
  • Start with the smallest team possible, and grow.
  • Try to keep the team stable for the lifetime of the Product
  • Co-locate people where possible, to reduce communication overhead
  • Reduce waiting time in handover between process steps

References

Brooks Jr, F. P. (1995). The Mythical Man-Month, Anniversary Edition: Essays on Software Engineering. Pearson Education.

Tuckman, B. W., & Jensen, M. A. C. (1977). Stages of small-group development revisited. Group & Organization Management,

Teasley, S., Covi, L., Krishnan, M. S., & Olson, J. S. (2000). How does radical collocation help a team succeed?. In Proceedings of the 2000 ACM conference on Computer supported cooperative work.

Ohno, T. (1988) Toyota Production System, Productivity Press

Review of Learning, Part 1: Organisational Structure

posted 16 Oct 2014, 03:09 by Robert Taylor   [ updated 24 Oct 2014, 08:50 ]


Reflecting on my career and experience to date, my moments of learning and my continued reading and exploration of ideas have resulted in the following views. Here's an overview of what I believe are appropriate ways to approach working with teams of people involved with software delivery.

The System: Organisational Structure | By Function or By Product

As Mullins (2009) describes, within the formal structure of an organisation, work is typically grouped around a major purpose or function: in essence, departments of specialists. In larger organisations, the many people and skills involved with software delivery are typically separated across departments and divisions; for example, Product Management, Business Analysis, Front-end and Back-end Software Development, Testing, Operations and Infrastructure.

As Parkinson (1957) observed in his law, 'work expands to fill the time', and in the rising pyramid: a manager of these specialist departments will inevitably recruit more people and grow their area of the organisation as the demand for, and utilisation of, people within their department or area of specialisation increases.

This departmental separation of people involved with software delivery has resulted in the need for methods to coordinate across those departments when delivering any work that requires their specialist skills. To achieve this collaboration, organisations have made increasingly heavy use of Projects, in an attempt to deliver valuable outcomes across these departments of specialists.

According to the OGC (2009), Projects are defined as temporary organisations, created to deliver one or more business products in accordance with an agreed business case. This approach of creating temporary organisations incurs a large overhead of setup and coordination through delivery. Once the project is complete, there is also a need to hand over to the teams who will provide ongoing development and operational support for the delivered software platforms.

In my experience this approach to organisational structure, and a project mentality, are the root cause of organisations failing to realise sufficient value from their initiatives within an appropriate period. As Deming (1986) observed, no amount of care or skill in workmanship can overcome fundamental faults in the system. The system we have in many large organisations determines the performance of the people within them, something those people rarely have any control over, especially those involved in temporary project teams.

The implications for organisations looking to realise value from features involving the delivery of software are not insignificant. Conway (1968) suggested that organisations build systems which are, broadly speaking, copies of their communication structures. This can be directly observed in software systems, where teams of specialists have built software within the context boundaries of their organisational structure and specialisms. For example, a front-end software specialist who doesn't have the skills, knowledge, or close collaboration with a back-end software specialist will often work around that limitation and implement inappropriate rules or logic within the front-end software. This results in many problems, most typically the dissemination and replication of business logic across multiple software systems, which ultimately blurs lines of control and reduces the ability of a business to enact change.

Matrix management has attempted to address this problem, but this approach disregards the importance of the psychological contracts that form between people. I’ve often heard terms like “gone native” to describe (in a negative way) people from a matrix capability area, who have become more embedded in the product or service area to which they’ve been assigned.

In addition to the complex human aspects, from a more fundamental financial perspective it becomes increasingly difficult to accurately measure the cost vs. benefit for any new or existing initiatives or products, given the shared cost across multiple specialist departments.

Mullins (2009) describes an alternative to the division of work by major purpose or function: organising work by product or service. A division by product or service is an integrated, semi-autonomous unit of different specialists who share collective responsibility; in essence, a cross-functional team working together to achieve the common goal of that product or service.

A pragmatic but considered approach should be taken when constructing a cross-functional team. The individuals within these cross-functional product or service teams are responsible for what Woodward (1965) refers to as ‘task’ functions; activities that are related to the completion of their overall objectives. 

It would perhaps not be appropriate to include some specialised departments which provide more of what Woodward refers to as 'element' functions: activities that are supportive of the task functions. I suggest this is not something that can be predetermined; the inclusion of certain specialists within a cross-functional team is very dependent on the work involved. The skills and people that would be appropriate must be carefully assessed, based on the challenge, the tasks and ultimately the value required.

Maslow (1966) observed that people with specialist skills will look to apply them regardless of how appropriate they are to the task at hand. The importance of understanding this point in software delivery teams cannot be overstated. For specialist people who are more often than not working across departmental boundaries, and typically engaged together for the duration of a project, it's my opinion that an alternative organisational structure would be more appropriate. For these specialist individuals, organising work by product or service would be a much more efficient organisational structure, and would result in better outcomes and solutions for the business and their customers.

Deming’s (1986) ninth of his 14 points for Management, states that the barriers between departments should be broken down, and for those people to work together as a team. This enables the team to foresee problems that may be encountered with a product or service, during its production and use.

Key points:
  • Organise work by Product or Service not Function
  • Think Products not Projects
  • Carefully consider the skills needed to build cross-functional teams

References

Mullins, L. (2009). Management and Organisational Behaviour, 8th edition, Pearson Educational.

Parkinson, C. N. (1957). Parkinson's law, and other studies in administration.

Deming, W. E. (1986). Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology. Center for Advanced Engineering Study, 6.

The future of Television?

posted 27 Feb 2013, 05:19 by Robert Taylor   [ updated 5 Aug 2013, 00:52 ]

Here I am watching the penultimate episode of Netflix's House of Cards (in HD quality on my AppleTV). I can't help but wonder if this is a Gladwell-like tipping point in the way we'll be watching 'big-budget serials' (as the Telegraph puts it) in the future.




If it is, then what does this mean for established broadcasters? Mr Robert Colvile's take on this is an interesting one:

For many people, this splintered vision of the broadcasting future will appear rather terrifying – a world beset by a bewildering profusion of entertainment options, in which the idea of “water cooler” television, or even a national cultural conversation, is a relic of the past. But the consolation is that there will be more TV, of better quality, than there’s ever been before. True, it will be impossible to find time to watch it all – but as problems go, that’s rather a nice one to have.

Some may say that this trend will increase, exponentially reducing traditional broadcasting's influence on primetime viewing slots, and ultimately reducing their advertising revenues, share prices and so on. But will it, or will it force broadcasters to make a step change within the industry?

No wait, that's not the question... it's inevitable that the broadcast industry will change and adapt to viewing habits, it has done for years. The question is...

As a viewer, do I want to search for something to watch tonight, or do I want someone to make that decision for me?


Let's take a step back. We know that the amount of VoD content available to viewers is increasing. As it does, the amount of energy we have to exert as viewers to discover something we want to watch also increases. This is similar to why Pinterest has gained popularity for curating the web's content.

In the traditional scheduled TV world, I select a channel with the expectation that it will deliver content to me. I sit back and let the shows and the adverts wash over me. All the content that broadcaster has curated into a schedule, largely based on what will have mass market appeal. I occasionally get a sense of satisfaction from this experience, out of knowing that others will also be watching that same scheduled content too. Friends at work, family, and more prominently these days, people I follow on my twitter feed.

The linear broadcast schedule isn't going anywhere anytime soon, especially for live, mass-market-appeal content. But what about more niche, pre-recorded VoD content, like Netflix's House of Cards? Could there be an opportunity, a need, to deliver more granular curated content for viewers, as an alternative to scheduling and viewer search?

Flash forward a few years to a hypothetical world, and imagine that pre-recorded content is primarily consumed through VoD. Broadcast scheduling has become much more about live events and focused on mass-market appeal. More and more quality, premiere-style content becomes available via VoD for first-time viewing. VoD platforms like Netflix will need to do this to attract subscribers, and broadcasters will follow suit to compete and gain customer and viewer insight.

Great, but I get home from work and I'm tired... how do I choose what to put on in the background as I make dinner? There's a vast catalogue of choice, but I have no appetite to spend the next 30 minutes of my life putting effort into searching for content I'd like to watch. There's the current approach of applying clever analytical algorithms to generate recommendations, but people are far too irrational and emotional for those algorithms to guess what we want every time. Recommendations are too generic, and the content is too broad and getting broader.

Let's go back to why people enjoy linear broadcast scheduled content; my view is that it's ultimately the social aspect. In a world with a plethora of VoD content, could we benefit from focusing on narrower communities built around our friends, family, other fans, and the people we follow on Twitter?

This sounds like a problem I already have; is this just history repeating itself for a different medium? Consider your music habits: do you subscribe to Spotify? There's a vast amount of music available, and sometimes it's difficult to know what to listen to. If you want to discover new music there are services like SoundDrop which offer "rooms" for users to curate their own playlists of tracks. You join a room to listen to music curated by people with similar taste to you, or matching the mood you're in, just as you would tune into a radio station.

Perhaps initially we will see a finer-grained schedule of content, curated by broadcasters and targeted at specific viewer demographics. Then, in the future, maybe we'll move towards broadcasters providing the ability for viewers to curate their own schedules: schedules that trend like Twitter hashtags. If a schedule is trending within the community I have constructed, then perhaps I'll tune into that feed and contribute to that short-lived community of people, just as I would now alongside linear broadcast TV.

Is this a bad thing for traditional broadcasters? Perhaps, if they miss or, worse, ignore the opportunity, as ultimately this capability provides more viewer insight, data on viewing habits, and all the advertising opportunities that go along with that insight.

Fundamental Attribution Error and Confirmation Bias

posted 6 Feb 2013, 15:43 by Robert Taylor   [ updated 26 Mar 2014, 03:02 ]



Following on from my post on Cognitive Heuristics, Biases and Lean Product Delivery, I was asked by a colleague to elaborate:
"Do you have any real world examples of things like cognitive bias and ways to measure success with empirical evidence?"

Context

OK, so first of all there are many cognitive biases (over 150), and more are being discovered all the time.

According to Wikipedia, the more commonly studied biases are: 

  • Framing by using a too-narrow approach and description of the situation or issue.
  • Hindsight bias, sometimes called the "I-knew-it-all-along" effect, is the inclination to see past events as being predictable.
  • Fundamental attribution error is the tendency for people to over-emphasize personality-based explanations for behaviours observed in others while under-emphasizing the role and power of situational influences on the same behaviour.
  • Confirmation bias is the tendency to search for or interpret information in a way that confirms one's preconceptions.
  • Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
  • Belief bias is when one's evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion.

In their talk Megan Folsom and Dan North both touched on Confirmation bias, Pro-innovation bias, and Fundamental attribution error. They made the case that these are all biases that we should be aware of when attempting to build teams to deliver a product to delight users.

Going back to the original question above, I'll try to provide examples, and I think it would be good to focus on two biases: Fundamental attribution error and Confirmation bias. The reason for this is that my original post focused on two things: enabling people to work together collaboratively by being aware of biases, and using feedback loops for empirical measurement of the results. My view is that our susceptibility to fundamental attribution error and confirmation bias is a main contributing factor to poor decision making and project failure.


Fundamental attribution error

Fundamental attribution error is a really interesting bias, and one I'm sure we have all engaged in and have been the subject of. Fundamental attribution error describes the tendency to over-value dispositional or personality-based explanations for the observed behaviors of others, while under-valuing situational explanations for those behaviours.

There is a simple example of a Fundamental attribution error on Wikipedia that illustrates the bias well, but I'll offer two here which are more aligned with situations in a work context.

When developing software, if a team doesn't meet its estimates, an observer might conclude that the team is inexperienced and has not spent enough time estimating. If the observer were to look at the situational explanations, they might find that the team doesn't understand what needs to be built, or that the person requesting the feature is not clear about what they are expecting. There may be a large number of ways to solve the problem, or perhaps there's a bottleneck in the delivery process slowing the team down.

If we switch roles and try to put ourselves into that team's role, we might be able to understand the situational/contextual explanation for the observed behaviour.

On a larger scale, if a product delivery or project is overrunning, or it is not returning on the investment it has received, a business might consider the project to be badly managed, or may seek to place blame on an individual or a group of individuals for the systemic failure, in a way that is disproportionate to their actual levels of responsibility. This type of Fundamental attribution error, or Group attribution error/Ultimate attribution error, perhaps allows the business to avoid worrying about the failed system (context/situation) by attributing the problem to an individual or group of individuals (disposition).

The problem with not acknowledging these errors is that we limit our ability to learn and continuously improve, through acknowledgement of the situations that influenced the results.

Ok so I know what a Fundamental attribution error is, now what?

How can we address this bias and lack of situational thinking? Perhaps the first step is awareness: knowing that you are susceptible to Fundamental attribution errors can help to guard against them, ensuring you consider the situation, or take others on the journey of understanding the context/situation so they don't fall into this error. Having this enlightened view perhaps then leads us to attempt more root-cause analysis to address problems, rather than just attributing them to individuals or groups. This is perhaps another reason to have more open and honest communication within teams and a business, as situational context is then better appreciated by others around you.

For a more fun example watch this: Root Cause Analysis: Lean Muppets


Confirmation bias

Wikipedia states: Confirmation bias is a tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs.

This is an interesting one to address as it's such a core part of our psychological response and information processing. I imagine that we all recognise ourselves falling foul of this bias in our daily lives, whether in our devotion to a sports team, company brand, political party or religion, or even in our advocacy of one methodology over another (the irony isn't lost on me here).

Therefore awareness of this bias is really important, so I will focus on two perspectives which I think are relevant to product delivery:

  1. How to communicate effectively with others who may have different beliefs.
  2. How to create experiments and evaluate the results in ways that don't just reinforce our original belief / hypothesis.

The first perspective is a subject for another post, and one I need to do more thinking on. The second perspective is very relevant to how we structure and deliver a product/project and its resulting success or failure, so I'll try to provide an example here.

Create experiments and evaluate the results...

When we start a project of work that has some high-level objectives, and set out to plan and research all the deliverables and requirements for that project up front, we will be influenced by that analysis and that plan throughout the project lifecycle, even if there is evidence to suggest that things are not working, or that our original hypotheses are not resulting in a positive outcome. This is partly due to the high investment in the previous research (which can be interpreted as a sunk cost) and the belief that what has already been uncovered through hypothesis and prediction is correct (even in the face of empirical evidence to the contrary). In this scenario we are more likely to continue with the original plan until we deliver or run out of time or money, regardless of the resulting success or failure of the project.

Basically, we have not provided any opportunities for the team or business to pivot on the original plan and go in the direction which the evidence suggests is more advantageous. It is folly to believe that we can formulate lots of hypotheses upfront, and that all of those hypotheses will be realised as correct in one large experimental step.

This, I believe (oh the irony...), is not an efficient way of delivering features for products/projects that involve system implementations and software delivery interacting with multiple end users. Given the interconnected nature of systems and users, the delivery of one feature will influence subsequent feature delivery; therefore, any upfront planning and requirements gathering is ultimately just an exercise in (educated) hypothesis and conjecture creation.

To attempt to combat this, we can use a more scientific method, with small incremental feature delivery mitigating the potential influence confirmation bias has on the group working on a project/product delivery. For example, we can use small feature experiments combined with a focus on measuring the results, to either confirm or refute our original hypothesis.
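As a small, hypothetical illustration (the checkout-flow scenario and numbers are made up, and this is only one of many ways to measure results empirically), a feature experiment can be evaluated with a two-sided statistical test, so the data is allowed to refute the hypothesis as well as confirm it:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

# Hypothesis: the new checkout flow (B) changes the conversion rate vs. the old flow (A).
p = two_proportion_p_value(conv_a=120, n_a=2400, conv_b=156, n_b=2350)
print(f"p-value: {p:.3f}")
print("evidence of a real difference" if p < 0.05
      else "no evidence yet; keep or revise the hypothesis")
```

The important part is the two-sided framing: the experiment is set up so that either outcome is informative, rather than only looking for evidence that we were right.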

...in ways that don't just reinforce our original belief / hypothesis.

The risk is that confirmation bias means we naturally want to run experiments whose results would demonstrate that we are right, rather than risk running an experiment that might show we are wrong. But by running experiments that could show we are wrong, we are more likely to eventually find the true answer to our original hypothesis.

By being aware of this bias we can attempt to question our hypotheses and focus more on how we interpret success or failure through the experiments we conduct.

Awareness
So how does this awareness of Fundamental attribution error and Confirmation bias influence our product delivery?

In conclusion, I propose that small cross-functional product teams, who go on the journey together to deliver a product and understand the context of the whole delivery pipeline, and who focus on small incremental feature delivery by proposing hypotheses and conducting experiments, are better placed to guard against these two biases. Why? The cross-functional nature of the team reduces the potential for Fundamental attribution errors and Group attribution errors throughout the delivery pipeline, as the situational context is apparent to all involved. Small incremental feature delivery mitigates against Confirmation bias by focusing the team on a specific problem domain and iterating on features until the stated problem has been resolved or satisfied, using feedback and metrics to measure the success or failure of each feature, thereby reducing the potential for confirmation bias to influence decision making.

Cognitive Heuristics, Biases and Lean Product Delivery

posted 6 Feb 2013, 15:28 by Robert Taylor   [ updated 21 Mar 2014, 14:31 ]



I went to a very interesting talk back in September last year, where Megan Folsom (eBay, Amazon, IBM) and Dan North were presenting on the subject of lean product delivery. Although Dan's insight is always great and he's really engaging, Megan really focused on the psychology of people and groups of people (in teams), stating:

"Lean (product development/delivery) is a way for irrational people to make rational decisions and still like each other".


Context: she made the case that people are creatures of habit; our brains are really good at following the behavioural patterns (heuristics) we've built up over years of experience. She raised the subject of cognitive biases, which can make otherwise rational and logical humans become irrational and illogical when faced with a problem or something we haven't encountered before.

More context:
In the early 1970s, Tversky and Kahneman described a research orientation which has dominated the judgement and decision-making literature ever since. They argued that humans make use of cognitive heuristics which reduce the complexity of making probabilistic judgements. "In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors".

In my mind these cognitive biases (and any resulting systematic errors) are the cause of decision paralysis and/or conflicts of direction within a group. Basically, put a bunch of people together and each of them is going to have different cognitive heuristics, which in general is a good thing, but it can lead to conflict with others who have different cognitive heuristics and biases.

So what?
Well, the point is we can address this problem and take cognitive bias out of the equation (to some extent). How? Through working with short feedback loops (using an iterative process) and focusing on empirical evidence to measure the success or failure of those experiments. In other words, a lean process helps people work together by giving them a way to take the ego out of the decision-making process, and enables that by making the feedback loop as short as possible.

We do this in much the same way as the scientific method steps are defined:
  • Formulate a question: or propose the problem.
  • Hypothesis: conjecture based on the knowledge obtained while formulating the question.
  • Prediction: determine the logical consequences of the hypothesis.
  • Test: conduct experiments and get feedback.
  • Analysis: determine the results of the experiment and decide on the next action to take.
So picture doing this process (perhaps every two weeks), focusing on the highest-priority features/problems/questions and getting feedback on the original question/problem. By doing this I feel we're in a great position to move people, products and a business towards their longer-term visions and goals.
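Purely as a sketch of that loop (the question, prediction threshold and measurement below are all hypothetical), each iteration takes the highest-priority question through hypothesis, prediction, test and analysis:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Experiment:
    question: str                  # formulate a question / propose the problem
    hypothesis: str                # conjecture based on what we currently know
    prediction: float              # the measurable consequence we expect (e.g. a rate)
    measure: Callable[[], float]   # conduct the experiment and return the observed value

def analyse(exp: Experiment) -> str:
    """Analysis step: compare the observation with the prediction and decide the next action."""
    observed = exp.measure()
    verdict = "supported" if observed >= exp.prediction else "not supported"
    return f"{exp.question} -> hypothesis {verdict} (observed {observed:.2f}, predicted {exp.prediction:.2f})"

# One iteration's highest-priority experiment (all values are made up).
exp = Experiment(
    question="Do users abandon checkout because of the extra address step?",
    hypothesis="Removing the step will increase completed checkouts",
    prediction=0.60,       # we predict completion rises above 60%
    measure=lambda: 0.64,  # stand-in for the real measurement gathered during the iteration
)

print(analyse(exp))  # the result feeds the next question in the following iteration
```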

I think cognitive heuristics and biases link nicely to Structure and Agency (see Structuration) too, but I need to ponder... (or maybe just get out more!)

NB. They also recommended a few books: Predictably Irrational (Kindle version available) and A Mind of Its Own.
