
Archive for the ‘What we learned…’ Category

From Katy, Standing Team Coordinator

In early 2011 when I joined the ECB inter-agency Accountability & Impact Measurement Standing Team as the coordinator, I honestly had no idea what kind of experience I was in for. I was passionate about accountability to beneficiaries. I had learned about the original ECB Standing Team from 2007. I had served on CARE’s internal Quality & Accountability team. I knew the team had the support of the AIM Advisers, who were each organization’s champions for accountability. And the Standing Team ‘experiment’ seemed to present an incredible and rare opportunity.

But I also knew that what lay ahead of us would be hard. We would be forming a large, inter-agency team of accountability specialists from six agencies. These people would not know each other (with very few exceptions). We all spoke different languages and came from all over the world. Some of us were very experienced accountability practitioners and others were just starting out. Some were senior staff, others more junior. Most were humanitarian workers but some came from the development side of their organizations. Before the first workshop, many of us had never traveled beyond our own country.

I am happy to say that the last two years have gone well beyond my expectations. The journey has been a beautiful one. Together with our Standing Team members, our field ‘clients’ who requested our services, our AIM Adviser champions, and our sector partners, I can confidently say we have met the expectations and goals we originally set for ourselves.

We held three incredible face-to-face learning workshops. The first two, Accountability Fundamentals and Joint Evaluations, were held in 2011 to prepare the team for deployments. These focused on building a common understanding of accountability principles and frameworks, understanding AIM tools, developing deployment protocols, and committing to documenting our learning. The last workshop was held to document and learn from our experiences individually, as agencies, and as a team, to review and interrogate our model, and to consider options for the future.

We developed two amazing tools: the Standing Team toolkit and the Good Enough Guide Training of Trainers module. The toolkit houses an extensive collection of top-notch tools and resources from agencies and sector networks, including how-to guides, case studies, and practical experience. The Good Enough Guide Training of Trainers module is part of a suite of tools to help agencies implement accountable systems and create accountable practices and ways of operating.

We committed to four deployments this year, and we exceeded that goal! We facilitated eight deployments to six countries (two each to Bangladesh and Bolivia, and one each to Nepal, Niger, the Horn of Africa, and Indonesia). Some deployments helped country offices identify accountability gaps and put action plans into place; others facilitated trainings on accountability. These deployments were generally very well received, and several of our clients cited the team’s deployment as one of the most significant experiences for their consortia.

We created a fantastic deployable team of 30 accountability champions from six agencies. This is the achievement I am most proud of! We have all grown so much in our understanding of accountability thanks to these deployments, where we served our colleagues in country offices. These have been well recorded in reports (posted to the ECB website) and in our blog here.

The future of the team is unknown as of now. But at this point, I believe we have a solid base of evidence on what has worked well and what could be modified to improve the model.

I have very much enjoyed getting to know the team members from around the world, and I have learned so much about complaints and feedback systems in many different contexts.

In closing—I thank you, accountability champions, for making the Standing Team an incredible experiment!


Read Full Post »

Questionnaire results

Here are the results of the questionnaire.

Read Full Post »


We first shared some of the feelings and ideas we had when looking back at our work.


We then went through the entire timeline, looking at how activities unfolded, connected, and developed.

Next, we looked at the individual activities, focusing on the key lessons and challenges around each.

After looking at the timeline, participants discussed the following questions in small groups:

  • What was a surprise to you?
  • What didn’t you know? What do you want to know more about?

A plenary discussion on this followed, and we captured the main highlights on a flipchart.

It was amazing to see how much good work had been done in the last year!

Read Full Post »

Standing Team member Faten believes that working for the Child’s Rights program at Save the Children – helping to create more accountable programs for Palestinian children – has been one of the most interesting and useful experiences in her professional life.  Last fall she participated in the Standing Team Accountability and Impact Measurement Fundamentals workshop in Jakarta.  This was Faten’s first introduction to many of the key concepts of accountability, and she describes it as the starting point of a “breakthrough” in her work. She states that upon her return:

I was back full of energy to transfer this learning to colleagues and partner organizations. I called for several briefing sessions for staff at different levels in my organization, and I conducted a four-day training for Save the Children staff and partners in Gaza, where there is a high need for accountability in the emergency context.

Trainees from partner organizations were very impressed with this learning, and requested access to the training material in Arabic in order to conduct a similar training for their staff and volunteers.

Thank you Faten for sharing your experience!  We would love to hear from more of you.  How has the learning from past Standing Team workshops contributed to your work?  Share your thoughts as a comment on this post or email sarnason@care.org or klove@care.org if you’re interested in drafting a blog submission!

Read Full Post »

We are happy to announce that the Bangladesh and Bolivia deployment reports are now available on www.ecbproject.org.

You have read a few details from Brian, Hugh, Saji, and Shagufta in our previous blog posts; now get all of the details! Both reports discuss their key findings and recommendations. Thanks to all involved for your hard work!

More deployments are being scheduled for this fall. Please check back for updates in the coming months.

Read Full Post »

During their deployment to Bangladesh in late April and early May, Saji & Shagufta were able to capture participant feedback on video. Here they share just a few key comments from our colleagues on the usefulness of the workshop and on some of the accountability mechanisms they are currently implementing. Thank you, Saji & Shagufta, for sharing!

Iqbal Nayyar, Deputy Country Director, Save the Children Bangladesh

 

Carla Benham, Accountability Specialist, World Vision International

 

Md. Mahbubur Rahman, Program Manager (Water Logging), CARE Bangladesh

Read Full Post »

In October 2011, the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) published the Lessons Paper: Humanitarian Action in Drought-Related Emergencies by Kerren Hedlund and Paul Knox Clarke. The Lessons Paper, also available in Spanish, French and Arabic, identifies 17 key lessons for humanitarian agencies responding to droughts. Topics such as early warning, targeting, working with government, cash and vouchers, water interventions and nutrition are included. However, lesson 6 is of particular interest to the AIM Standing Team:

Humanitarians are increasingly demonstrating accountability to an ever larger set of stakeholders. These accountability approaches have the potential to improve programme effectiveness but there is still a long way to go.

So what does the paper suggest regarding accountability practice?

  • Agency accountability to donors should not come at the expense of accountability to beneficiaries or the impact of interventions.

Agency staff may have to spend a disproportionate amount of time fulfilling donor reporting requirements instead of practicing accountability to beneficiaries. Staff may also avoid trying new approaches for fear of failure and the consequent loss of funding. The paper suggests that donors can help by contributing “to joint/pooled funds to decrease reporting requirements, and by clarifying their attitude to risk and failure. In some cases, it may be more effective for donors to take a ‘portfolio approach’ and consider the combined impact of several related actions rather than expecting each action to be an individual success.”

  • Coordinated action requires agencies to develop mechanisms for collective, multi-agency accountability.

Photo: Niger (CARE USA)

Effective responses, especially responses to droughts, require extensive coordination between agencies. As mentioned in the previous blog post on collective accountability, beneficiaries do not differentiate between agencies. Thus “there is a growing need for collective accountability mechanisms, where all agencies in a group are jointly accountable to beneficiaries and also accountable to one another. While this is extremely challenging, the international community is increasingly recognising the need to structure joint accountability into consortia and other groups addressing drought response. ECHO’s Regional Drought Decision and USAID’s PLI both include accountability and learning as a cross-cutting theme, and dedicate resources to achieving it.”

  • Agencies need to make use of evidence to implement more cost-effective and impactful interventions.

For example, much evidence exists of the cost-effectiveness and impact of using cash and vouchers, yet agencies still revert to the more traditional approach of food aid.

We want to hear from you! Do you have experience with any of these lessons being put into practice?  Share your thoughts…or questions for the group!

Read Full Post »

Hopefully you are familiar with ECB’s Good Enough Guide (GEG) (see this previous blog post) and its communication materials. This week, we interviewed Lucy Heaven Taylor, an AIM Advisor from Oxfam GB, to hear the fascinating story of how these materials were developed! Lucy co-managed this project, and this is what we found out:

Soon after the publishing of the GEG, ECHO announced a call for proposals for developing inter-agency capacity. Given the popularity of the Guide, the Accountability and Impact Measurement (AIM) Advisors and Oxfam decided to propose a project to develop materials to communicate the important principles of the GEG to agency staff and beneficiaries.

First, the project co-managers, Lucy and Julian Srodecki (ex-AIM adviser for World Vision), conducted a large survey of practitioners through Survey Monkey and key informant interviews in order to find out which forms of communication were preferred. From hundreds of responses, they found that posters and leaflets were the most popular materials used to communicate key messages.

They then conducted a literature review on the practice of communication across different cultures. This uncovered useful information, such as the fact that the color red does not universally signify “stop.” They learned that in order to create materials and images to which people will respond and relate, the materials needed to be developed with the people themselves.

Five regions were chosen in which to develop the materials: Latin America, Sub-Saharan Africa, the Middle East, South Asia and Southeast Asia. The idea was to produce context-specific images in each of the regions as examples for humanitarian organizations, so that they could develop materials appropriate to their own geographic and linguistic context. ECB staff went to Bolivia, Kenya, Lebanon, Bangladesh and Myanmar to work on the materials with disaster affected people and local artists. The artist in each country created images to represent the people’s perceptions of disasters, of their rights and of themselves. The community members provided feedback on the images until the artist got it right. For example, in Bangladesh the artist created an image of somebody pointing, but the community thought he was holding a gun!  As you can see, their feedback was crucial!  Once the drawing was approved, the image was printed and tested in the same community.

It was interesting for the ECB staff to find that the community members preferred colored drawings to line drawings or photographs. They also preferred figures of people looking at them with recognizable facial features. In addition, it was discovered that people like to see images of themselves not exactly how they look, but instead represented in a more positive light.

Initially it was planned that the posters and leaflets would have no words because of a largely illiterate audience, but it proved to be too difficult to portray the messages. Thus it was decided that the materials would have words and a literate person could relay the message to those needing assistance. The specific wording for the posters and leaflets was agreed upon by the steering committee for the project. Half of the posters were designed for beneficiaries, to be displayed in public to raise awareness of people’s rights. Other posters were developed for agency staff, to raise awareness of the practice of accountability and to be posted in offices. The leaflets were designed to teach the principles of accountability and the GEG to agency staff. Both were printed in English, Arabic, Bangla, Burmese, Spanish and French.

 The videos were developed in a similar consultative fashion, with disaster affected communities in Bangladesh, Ethiopia, and Bolivia. These videos show staff and beneficiaries talking about the principles of the GEG, and they are designed to be viewed by agency staff for training purposes.

The project was truly a collaboration of member agencies of the ECB. It was co-led by Oxfam and World Vision, with a Steering Committee comprising a cross-section of members, including CARE, Mercy Corps and ECB secretariat staff. The field work was undertaken by World Vision, CARE, Oxfam and ECB staff, and drew on experience from different agencies’ programmes.

Another success of this project was that it not only promoted the practice of accountability in emergencies; accountability was also practiced while developing the materials! The collaboration of ECB agencies and consultation with the communities were key to its success.

Read Full Post »

On March 22, the Harvard School of Public Health hosted the webinar Humanitarian Assistance Webcast 7: Empowering beneficiaries: Humanitarian professionals at a crossroads?

Context

The movement towards enhancing accountability to and empowerment of beneficiaries in the humanitarian context seems to have put professionals in this field into a bind. Aid workers are mandated to follow two frameworks:

  • The legal framework adopted at the Geneva Conventions of 1949 holds organizations accountable to host states and donor states. However, this framework is inadequate, only referring to high contracting parties and non-state actors to which NGOs offer their services.
  • The human rights based framework calls for accountability to beneficiaries in humanitarian situations. The framework includes the Universal Declaration of Human Rights, International Refugee Law, International Humanitarian Law, Convention on the Rights of the Child, and the Convention on the Elimination of all forms of Discrimination Against Women.

Thus humanitarian workers must engage in the complex task of simultaneously responding to the expectations of host state authorities, maintaining accountability to donors, and responding to the needs of beneficiaries.  Unfortunately, the balance of power in this equation has not favored accountability to beneficiaries.

In addition, efforts to “professionalize” humanitarian action have led to yet another set of accountability measures to ensure the implementation of particular professional standards — from assessing humanitarian needs to implementing and evaluating humanitarian programs. These rising expectations of professionalism put further pressures on humanitarian actors.

Looking back

The webinar’s first speaker was Maria Kiani, Senior Quality and Accountability Advisor at the Humanitarian Accountability Partnership International (HAP). Maria gave a fascinating account of the historical emergence of accountability. A second speaker, Brian Kelly of the International Organization for Migration, added that the concept and promotion of accountability is not new: it can be seen in the Quran, the Torah and the Bible; in criminal and civil law; in the concept of stakeholders and shareholders; and in the tax system. It can also be seen in the human rights declarations, laws and conventions mentioned above. The modern movement for accountability to beneficiaries, however, came out of a 1996 joint evaluation of the emergency response to the 1994 Rwandan genocide. This evaluation highlighted:

  • The need to improve accountability by monitoring performance of humanitarian action
  • The growing but unregulated number of agencies
  • The lack of consideration for local capacities, culture and context, whereby negligence in some cases led to increased suffering and death
  • Evidence of misconduct and abuses by staff
  • Protection, safety and security concerns

Similar findings emerged in evaluations of the response to the 2004 Indian Ocean tsunami.

Following the analysis of what went wrong in the humanitarian response to the Rwandan genocide, a shift occurred from providing charity out of benevolence towards compliance with professional standards at the agency and multi-agency level. There has also been significant growth in agency self-regulation, and “by 2010, the database of self-regulation initiatives maintained by One World Trust identified over 350 self-regulation initiatives (most of which are at the national level).”

Collective Accountability

All speakers mentioned that the humanitarian field is facing a more complex environment with military actors, companies, for-profit organizations, and small and large NGOs, whereby recipients of aid do not know from whom the aid is coming. Andy Featherstone, an independent consultant, pointed out that due to lack of communication by agencies to the community, there is the risk that misconduct by one actor is blamed collectively on all actors because the people do not know which agency is doing what. Thus, in addition to the growth in agency level accountability initiatives, there has also been a movement toward leadership and coordination among the agencies, towards collective accountability. This can be seen in the growth of inter-agency networks, including HAP, ECB, ALNAP and CDAC (see this blog for more on CDAC).

Agencies, inter-agency networks and initiatives are not the only aspect of the movement toward greater accountability, though. There are external factors which have advanced the movement:

  • Increased media presence during emergencies (Investigative journalism/negative press has brought to light harmful practice)
  • Increased public awareness and scrutiny of performance of NGOs
  • Pressure from watchdogs and other rating agencies
  • Pressure from donors to show improved practices
  • Increase in government regulation of the sector (For example, as a result of misconduct during the response to the tsunami, the Sri Lankan government now regulates humanitarian actors)

These Quality & Accountability standards have been designed to be context-relevant and appropriate; they were developed in consultation with host governments, donors, aid workers and communities.

All three speakers mentioned the Inter-Agency Standing Committee’s (IASC) Transformative Agenda as a stepping stone towards collective accountability. The agenda was set at the end of 2010 to improve leadership, coordination, and accountability, both for performance and to beneficiaries, in humanitarian action.

The latest step in the movement towards collective accountability is the Joint Standards Initiative, which brings together the Sphere Project, the Humanitarian Accountability Partnership (HAP) and People In Aid. In 2012, this initiative will explore ways in which the three standards can be united into a single coherent framework that will work in the field (for more information on the Joint Standards Initiative, see this blog post).

Stay tuned for more on the movement towards collective accountability!

Read Full Post »

In June 2011, the ECB Project published the latest version of What we know about joint evaluations of humanitarian action: Learning from NGO Experiences. This paper aims to share the experiences and lessons of NGO staff who have conducted joint evaluations and to serve as a resource for agencies considering joint evaluations in the future.

The Guide section of the booklet can be considered a ‘how‐to’ for those closely involved in joint evaluations. It discusses the benefits and disadvantages of the process, and what to do before, during and after a joint evaluation.

The Stories section shares three case studies from the ECB Project’s experiences.

  1. Joint Independent Evaluation of the Humanitarian Response of CARE, Catholic Relief Services, Save the Children and World Vision to the 2005 Food Crisis in the Republic of Niger
  2. Multi‐Agency Evaluation of the Response to the Emergency Created By Tropical Storm Stan in Guatemala – CARE, Catholic Relief Services, Oxfam
  3. CARE, Catholic Relief Services, Save the Children and World Vision Indonesia Joint Evaluation of their Responses to the Yogyakarta Earthquake in Indonesia

The Tools section includes templates and tools that can be adapted for evaluations, including sample terms of reference, agreement documents, a joint evaluation readiness checklist, and suggested topics for discussion with prospective partner agencies.

Advantages of a Joint Evaluation

  • Like a single‐agency evaluation, a joint evaluation provides an opportunity to learn from past action so as to improve future decision‐making.
  • It allows agencies to see a bigger picture of the collective response and what gaps still exist.
  • By looking at a non-joint response of different agencies side by side, you can see where a coordinated effort would have been beneficial and can plan accordingly for the next response.

“Evaluation reports repeatedly show that better coordination would have led to a more effective response.”

  • When agencies open up to one another by sharing weaknesses and strengths, they increase transparency and make it easier for them to hold one another accountable for acting upon the recommendations.
  • Conducting the evaluation with other agencies allows sharing of perspectives and technical knowledge and builds trust for future cooperation.

Disadvantages

  • It takes more time, funding, and skill for agencies to agree on and conduct a joint evaluation.
  • Each agency’s work is covered in less depth.

So check out What we know about Joint Evaluations and tell us what you think!

Read Full Post »
