Second High Level Forum – Paris 2005

3.3 Monitoring of implementation

3.3.1 Lessons from the first baseline survey

An agreement of this nature would not have been worth the efforts leading to its endorsement without effective periodic monitoring. Monitoring progress under the PD, however, proved to be a demanding exercise with several dimensions, both nationally and internationally. It required actions under the guidance of the WP-EFF, which replaced the Task Force on Donor Practices and expanded from a small group of mostly aid providers in 2003 to more than 50 members in Paris, representing partner countries, aid providers, development institutions and civil society.

The first baseline survey in 2006, which was voluntary, drew responses from 34 partner countries (of the 60 that had participated in Paris) and 60 aid-providing countries and institutions. This sample did not necessarily give a proportional representation of partner countries, so survey findings have to be read with this in mind. Only a few fragile states took part, making it difficult to generalise about this particular group. The survey had two purposes: (a) to establish a baseline against which to measure future progress, and (b) to assess progress since Rome. Survey results were also meant to identify issues and challenges for future consideration (OECD, 2007b).

Monitoring arrangements consisted initially of survey design and fieldwork preparations, including a “Help Desk” to respond to questions.

The 2006 survey was launched in May. A number of regional workshops were held for officials in charge of survey work in partner countries; the workshops dealt with technical aspects, addressed database issues and explained the role of help desks. Following country-data analysis, country chapters were drafted and sent to survey National Coordinators for comment.

Nationally, arrangements varied from one partner country to another, though with many similarities. Having twice served as Acting National Coordinator for Egypt, I found that the most challenging task was organising and managing survey work within a fairly tight three-month time frame. National Coordinators had the authority to manage data collection from government and from country-based aid providers, yet each of the latter was an independent entity with its own set of priorities and pressures. Workshops were organised for local contributors and country-based aid providers to facilitate the work ahead and solicit cooperation.

The workshops had the added value of strengthening lines of communication among survey participants and of addressing issues of mutual concern.

One of the problems in data gathering was the use of different definitions for such items as aid disbursements, based on each department’s mandate.

A task group of seven agencies (Central Bank, Ministry of Finance, Ministry of International Cooperation, Central Agency for Statistics, Ministry of Foreign Affairs, Ministry of National Planning and the Cabinet’s Information for Decision-Support Centre) was set up to reconcile their figures and come up with one validated number.

Another issue was reconciling data received from aid providers with those recorded in government books, as these inputs were often based on different financial-year frames. Negotiations with aid providers led to the provision of flow and disbursement estimates corresponding to government time frames.

Overall, the three-month period set to complete survey work proved too short. Communication problems, technical and definitional issues, lack of data, response delays and lack of interest were the main reasons. An added pressure was organising a “validation” meeting of government and aid providers to endorse the data and the Coordinator’s country report prior to submitting them to the OECD/DAC.

In the end, most collaborators felt this was a useful learning experience; it helped spread the word about aid effectiveness and the PD goals, and stressed the importance of “getting better value for aid received”. It was also a clear signal urging aid providers’ country offices to understand what their Head Offices had signed up to. Nevertheless, the process was too time-consuming and costly to be repeated in that format, underscoring the urgency of dealing with these issues ahead of the next survey.

3.3.2 Survey results

Survey results were based on activities carried out in 2005, which provided the baseline data. Some survey findings used the World Bank’s 2005 Comprehensive Development Framework and its annual Country Policy and Institutional Assessment (CPIA). The following paragraphs underline key survey results. Although the language has been simplified, some technical jargon is inevitable.

Ownership: Strengthening ownership posed a substantial challenge: only 17 per cent of survey countries were meeting agreed quality thresholds for operational development strategies. (See the World Bank Comprehensive Development Framework for the definition of quality.) Reaching the 2012 target of 75 per cent of survey countries would require political commitment and greater technical effort.

Alignment: Use of country systems was a prerequisite for better alignment with national development priorities. Responses showed that public financial management systems were rated as ranging from moderately weak to moderately strong, with 31 per cent of countries having moderately strong systems. Although this indicated that the 2012 target of “half of the countries move up at least half a point” was feasible, other country systems, such as procurement, were not rated due to lack of data.

Alignment also addressed the gap between budget figures and actual aid disbursements (a wider gap implied less alignment). The data showed a considerable discrepancy between the two figures, with half the countries showing a gap of as much as 70 per cent. Reaching the 2010 goal of reducing the gap by half required coordinated action by partner countries and aid providers. The gap was caused by providers’ unrealistic expectations of their ability to disburse on schedule, and by partner countries’ insufficient attention to capturing disbursement intentions or making realistic estimates of shortfalls.
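The gap calculation above can be made concrete. A minimal sketch, using hypothetical figures rather than actual survey data (the function name is mine):

```python
# Illustrative sketch of the budget-vs-disbursement gap described above.
# The figures below are hypothetical, not actual survey data.

def gap_ratio(budgeted: float, disbursed: float) -> float:
    """Share of budgeted aid not matched by recorded disbursements."""
    return abs(budgeted - disbursed) / budgeted

baseline_gap = gap_ratio(budgeted=100.0, disbursed=30.0)  # 0.70, i.e. a 70 per cent gap
target_gap = baseline_gap / 2  # the 2010 goal: halve the gap, here 0.35
```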

Capacity development (CD) was another issue. Some partner countries stated that no technical cooperation programmes existed that met the coordination criteria. Aid providers argued that the survey definition was too stringent. The survey’s aggregate baseline figure must, therefore, be taken with serious reservations, in view of conceptual differences that called for a re-examination of definitions.

Data on the use of country systems (Indicator 5) suffered from differing interpretations and from ambiguity in the survey guidelines’ definitions. As a result, survey numbers tended to overstate the extent of use of country systems. The target for 2012 was to reduce by one-third the non-use of country systems. A disturbing finding was that the correlation between the quality of a country’s systems and providers’ use of them was weak, implying that factors other than quality influenced the systems’ use. If this pattern continued, it would be quite difficult to reach the 2010 target.

Use of parallel implementation units (PIUs) was another aspect affecting alignment. Varied interpretations of definitions and criteria – with many aid providers applying them flexibly while National Coordinators stuck to the narrower definitions – produced suspect results understating the use of PIUs. The 2010 target was to reduce the baseline stock of 1,832 PIUs by two-thirds. But meeting this target faced difficulties: a backlog of projects that had been set up without concern for alignment and ownership; the reluctance of local PIU staff to give up superior employment conditions, including fringe benefits; and aid providers’ unwillingness to switch away from PIUs for fear this would adversely affect implementation.
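The PIU target arithmetic can be sketched in the same spirit; the 1,832 baseline is from the survey, while the helper name and the rounding choice are mine:

```python
# The 2010 target: cut the baseline stock of 1,832 parallel
# implementation units (PIUs) by two-thirds. Rounding to the
# nearest whole unit is an illustrative choice, not the survey's.

def piu_target(baseline: int, reduction: float = 2 / 3) -> int:
    """PIUs that may remain once the stock is cut by `reduction`."""
    return round(baseline * (1 - reduction))

remaining = piu_target(1832)  # roughly 611 PIUs could remain by 2010
```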

Aid predictability and untying were two more issues affecting alignment. Predictability data (reflecting the combined ability to disburse aid on schedule and to record disbursements to the government sector) showed a gap between the 100 per cent target and average baseline figures of 70 per cent (Indicator 7). For untying, 75 per cent of aid to survey countries was untied, suggesting that more action by aid providers was needed.

Harmonisation was assessed by applying two criteria: use of common arrangements within programme-based approaches, and the undertaking of joint missions and analytic work. The 2010 target for the first criterion (Indicator 9) was to have 66 per cent of government-sector aid using programme-based approaches. A controversial baseline estimate suggested a 43 per cent compliance ratio, but – again owing to different interpretations of the criteria – this figure overstated the reality on the ground. Indicator 10 dealt with the second criterion and showed that only 18 per cent of missions were conducted jointly, versus the 2010 target of 40 per cent. A substantial contribution to joint missions came from UN agencies. For joint analytic work, the ratio was 42 per cent.

Managing for results (Indicator 11) was a new principle introduced by the PD. It assessed the extent to which a partner country had established results-based performance frameworks (as opposed to the traditional emphasis on measuring inputs and outputs), using World Bank scores based on an A to D classification system (highest to poorest). These showed that only two countries achieved a B grade, with 59 per cent and 34 per cent of survey countries receiving C and D grades, respectively. The 2010 target was to reduce by one-third the percentage of countries not achieving a B grade. This was another challenge that called for rethinking existing policies and practices to focus on results and produce credible data for monitoring.

Mutual accountability (Indicator 12) was another new principle, stressing the mutuality of commitments and responsibilities to improve the quality of aid. It called for strengthening systems, whereby governments on both sides would become more accountable to their respective parliaments and citizens, while also being accountable to each other as development partners to assess progress. Survey data indicated that 44 per cent of countries had mutual review mechanisms in place, with the remaining 56 per cent still having to establish them.

Again, these results should be interpreted carefully, as the notion of “mutual review” tended to be applied flexibly, with varying degrees of effectiveness in conducting serious reviews.

3.4 Findings and recommendations