Program Evaluation Framework
Fogarty's Division of International Science Policy, Planning and Evaluation (DISPPE) has developed the following Framework for Evaluation.
Contact Information
Evaluation Officer:
Rachel Sturke, Ph.D., M.P.H., M.I.A.
Rachel.Sturke@nih.gov
Updated September 2016
Goals & Objectives | Principles & Elements | Criteria | Tools & Approaches
Goals and Objectives
The goals of evaluation at Fogarty are:
- To identify the role of Fogarty-sponsored programs in fulfilling the Fogarty mission
- To stimulate the performance of Fogarty programs and to encourage innovative approaches to address problems and issues relating to improving global health
- To provide a transparent process for assessment of Fogarty programs and to demonstrate sound stewardship of federal funds and the programs they support
- To provide information for strategic planning, to strengthen programs, to enhance funding decisions, and to inform new directions for Fogarty programs
- To document program accomplishments and impact, including public health and economic benefits
- To document a program’s progress and accomplishments for Fogarty, NIH, HHS, funding agencies, national and international partners, and the U.S. Congress
- To identify important lessons learned and best management practices in the performance of Fogarty programs as a whole, and to make recommendations for the implementation of future programs
Evaluation Principles and Elements
Principles of Evaluation at Fogarty
- Evaluation at Fogarty is a routine, continuous quality-improvement and review process.
- Evaluation focuses on outputs, outcomes, and impacts and mechanisms to ensure that these occur. While reporting of metrics (number of trainees achieving advanced degrees, number of publications, etc.) is necessary, reviews go beyond metrics and incorporate qualitative data.
- To the extent possible, evaluations depend on the basic principle of external peer review and reflection to generate recommendations.
- Programs are assessed against their own goals and objectives, taking into account fiscal resources and granting mechanisms.
- Review and evaluation are based on measured quantitative outputs, outcomes, and impacts (metrics), as well as qualitative outputs, outcomes, and impacts.
Elements and Basis for Review and Evaluation
The review and evaluation process is a continuum that spans from strategic planning, to the initiation of a program, to a retrospective reflection on its accomplishments.
Specifically, program plans are developed with input from key stakeholders and articulated in Requests for Applications (RFA) and Program Announcements (PA). Once a program is launched, Program Officers monitor the progress of funded projects within the program. After five years of a program, a process evaluation can be conducted, yielding suggestions for improving program processes. After ten years, an outcome evaluation can be conducted to analyze program outputs, outcomes, and impact. It is important to consider the length of time the program has been in existence before conducting an evaluation (Note 1), since research and capacity-building outputs, outcomes, and impacts take years to become visible.
A key to effective program review is the degree to which the review is normalized to the resources, objectives and program planning of the individual program. Given that each program has different financial resources, utilizes different talent pools with various specialties, faces different issues in host countries, works under unique institutional policies, and uses different approaches to reducing global health disparities, the reviews are tailored to take program variability into account.
Program Development
The foundation for individual program review is a well-developed program plan that culminates in an RFA/PA. Importantly, planning a program at NIH normally requires a two-year lead time to allow for sufficient input, partnership development, and administrative review. Each program is announced and guided by an RFA/PA that acts as a strategic plan for that program. Program concepts generally stem from Fogarty and NIH strategic priorities. The program plan can be developed and informed through consultations, workshops, and meetings, and should address resource needs, management of the program to meet those needs, data needs, and data gathering, analysis, and storage.
A program plan, reflecting the input of funders/management, partners, and key stakeholders, will include:
- Articulation of the vision and focus of the program as well as the niche the program fills and articulation of the value of the scientific direction;
- Background on scientific relevance of the program area, program implementation issues and mechanisms for establishing priorities for investment of resources; and
- Goals, objectives and performance milestone targets that provide guidance for evaluating program performance.
Planning is fundamental to program evaluation. Developing the understanding, communication, and data collection processes necessary to meet the basic goals of the program is essential. A program should be reassessed, and new planning (planning workshops, planning meetings, etc.) should be implemented periodically as appropriate. Network meetings can also be used as part of the continuous review and planning process.
Program Monitoring and Self-evaluation
Programs are expected to conduct self-evaluation and monitoring on a regular basis, in between the more formal program evaluations. Annual self-evaluation can be accomplished at network meetings or through submission of progress reports from the individual projects under the program. Each program’s self-evaluation will be based on performance milestones unique to that program and should be guided by the criteria described below for all programs. The information collected during the self-evaluation process ideally feeds into a full-scale program evaluation conducted or sponsored by Fogarty.
Evaluation Criteria
Evaluations are designed to strengthen, improve, and enhance the impact of Fogarty programs. Several areas of evaluation can be used to assess the effectiveness of a Fogarty program; these are outlined below.
Program Planning
Effective programs should use the Fogarty Strategic Plan as well as the priorities identified by program partners as a guide for the development of a program RFA/PA. The RFA/PA should also be based on the needs of the U.S. scientific community, host countries, and other stakeholders such as other government agencies, foreign scientists and experts in the field.
Metrics: Program Planning
- Evidence of a planning process and a plan
- Relevance to Fogarty, NIH, or Health and Human Services (HHS) strategic plans
- Stakeholder involvement in planning
- Re-evaluation of program over time
- Integration of recommendations into planning
- Planning for sustainability of program results
Program Management
Project Selection
An effective program should incorporate a strong peer review process. The selection/review process should take into account host country needs in the program’s scientific area as well as any other criteria listed in the RFA/PA. Peer review should include reviewers with relevant developing country research experience and expertise in the subject area.
Metrics: Project Selection
- Review criteria
- Quality of feedback to PI
- Amount of time allowed for review
- Conflict of interest issues
- Involvement of the Program Officer
Recruiting Talent
Strong programs should have mechanisms in place to identify and attract the best and most appropriate talent available.
Metrics: Recruiting Talent
- Recruitment of new/young/foreign investigators
- Minority applicants
- Interdisciplinary teams
- Success rate
- Turnover of investigators
Program Components
Each program is made up of multiple grants or projects. It is the role of the Program Officer to ensure that the various projects or grantees have a chance to interact and learn from one another. Network meetings offer an opportunity for PIs to interact with one another and exchange ideas. An effective meeting should have goals and objectives that are clear to all participants from the beginning and should involve stakeholders and partners. Ideally, a report should be generated from each network meeting that documents the activities and outcomes of the meeting.
Metrics: Program Components
- Annual network meetings
- Robust alumni networks
- Communication venues for PIs and/or trainees to exchange ideas
- Program operation (award size, length of time, funding amount, reapplication restrictions)
Institutional Setting
Programs vary in their institutional setting and institutional support. The program should be well supported by both the academic institution(s) involved and the appropriate national institutions.
Metrics: Institutional Setting
- Matching funds
- Mentorship support
- Laboratory support
- Administrative support and good business practices
Fiscal Accountability
Programs should demonstrate that they have appropriate mechanisms in place to account for federal funds and are properly documenting protocol reviews for human subjects.
Metrics: Fiscal Accountability
- Presence of operational IRB
- Good accounting/documentation practices
- Assurance that all intended funding is reaching foreign collaborators and trainees
Best Practices
As a result of ongoing evaluation, strong programs will help identify best practices with regard to various program factors such as prevention of brain drain, sustainability, and mentorship.
Metrics: Best Practices
- Strategies to prevent brain drain
- Strategies to promote interdisciplinary collaborations
- Strategies to promote long-term mentoring
- Strategies for selecting trainees
- Strategies to promote long-term networking
Partnerships and Communication
Partnerships
Federal, national, and international partnerships are essential to addressing global health issues. Partnerships, in country and within the U.S. federal government, should be pursued, nurtured, and maintained.
Metrics: Partnerships
- Number of partnerships
- Different types and sectors of partnerships
- Involvement of partners in development of the program and its strategic goals
- Funds from partners
Communications
To be fully successful, scientific results must be disseminated to the stakeholder community and utilized. During the evaluation of the program, the link to the entire stakeholder community will be reviewed and implementation of the science into policy or practice will be assessed.
Metrics: Communications
- Appropriate community input into strategic planning
- Involvement of program in the community
- Community needs surveys
- Stakeholder community feedback
Program Results
Depending upon the age of a program, significant results will fall into different categories. The following should be documented, reported, analyzed, and evaluated for all programs:
Program Outputs
The program must be managed to produce program outputs: the immediate, observable products of research and training activities, such as publications, patent submissions, citations, and degrees conferred. Quantitative indices of output are tools that allow Program Officers and PIs to track changes, highlight progress, and identify potential problems.
Metrics: Outputs
- Number and list of publications (journal articles, book chapters, reports, etc.)
- List of trainees as first author
- Number and list of presentations/meetings
- Number of trainees
- Fields of training
- Number and type of degrees earned
- New curriculum developed and implemented
Program Outcomes
A program is designed to contribute to longer-term results such as strengthened research capacity within U.S. and foreign sites, effective transfer of scientific principles and methods, and success in obtaining further scientific and/or international support.
Metrics: Outcomes
- Number of laboratories started
- Scientific departments started or strengthened
- Number and types of scientific methods discovered
- Number of new grants or funding procured
- Career paths initiated or enhanced
Program Impacts
For mature programs, the long-term results, both anticipated and unanticipated, are critical in highlighting a program’s scientific, practical or political impact. An effective program will demonstrate a contribution to the progress of a scientific field as well as utility to the greater program community (e.g., practitioners, policymakers). Since many factors may influence a desired impact, measuring impacts requires more complex analysis and synthesis of evidence of both a quantitative and qualitative nature.
Metrics: Impacts
- Policies adopted or advanced
- Scientific advances developed
- Changes in health care system
- New interventions or altered health care practice
- New clinical procedures
- New career structure
- Robust scientific community in a specialized field
- Evidence that research outcomes were useful to the field/community
- Improved health of population
Evaluation Tools and Approaches
Fogarty evaluations utilize various quantitative and qualitative approaches. Given the complex nature of the programs under evaluation, a range of analytic methodologies is employed; these are described in more detail below.
Bibliometrics
The bibliometric approach addresses one of the most basic, yet important, outputs of research: publications. Bibliometrics provides quantitative measures of the productivity, influence, efficiency, and topical trends of publications. For example, citation counts can be useful in comparing researchers or institutions over time in terms of their impact on a field.
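To make the citation-count idea concrete, the following is a minimal sketch in Python, assuming a hypothetical list of publication records; the field names and figures are illustrative and do not reflect any Fogarty data source.

```python
# A minimal sketch of a bibliometric tally, using invented publication records.
from collections import defaultdict

publications = [
    {"institution": "University A", "year": 2012, "citations": 14},
    {"institution": "University A", "year": 2014, "citations": 3},
    {"institution": "University B", "year": 2013, "citations": 27},
]

# Aggregate citation counts by institution and year to compare influence over time.
citations_by_institution_year = defaultdict(int)
for pub in publications:
    citations_by_institution_year[(pub["institution"], pub["year"])] += pub["citations"]

for (institution, year), count in sorted(citations_by_institution_year.items()):
    print(f"{institution} ({year}): {count} citations")
```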
Case Studies
The case study approach represents a specific way of collecting, organizing, and analyzing data, with the purpose of gathering comprehensive, systematic, and in-depth information about each case of interest. This approach represents one potential method for illustrating the long-term impacts of investments, as it excels at building an understanding of complex issues and can extend experience or add strength to what is already known from previous research. Fogarty makes use of case studies during an evaluation to enable a detailed contextual analysis of a complex program and its impacts (e.g., policy, health systems delivery).
Interviews
An interview is a deliberate conversation in which questions are asked by an investigator (i.e., the interviewer) to obtain new or clarifying information. Interviews provide an opportunity to collect rich information on a person’s perspectives or opinions on a matter. Interviews at Fogarty are often conducted with grantees, trainees, Program Officers, and funding partners.
Logic Model
A logic model provides a visual representation of the program components and processes by which a program produces outputs, outcomes, and impacts. The model illustrates the systematic flow from resources (inputs) through activities (e.g., network meetings, grants) to the outcomes and impacts that Fogarty expects the program to produce in both the short and long term.
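As an illustration only, the stages of a logic model can be sketched as a simple data structure; the stage entries below are hypothetical examples, not drawn from an actual Fogarty program.

```python
# A minimal sketch of a logic model's input-to-impact flow, with invented entries.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)      # resources
    activities: list = field(default_factory=list)  # e.g., network meetings, grants
    outputs: list = field(default_factory=list)     # immediate products
    outcomes: list = field(default_factory=list)    # intermediate results
    impacts: list = field(default_factory=list)     # long-term results

model = LogicModel(
    inputs=["program funding", "program staff"],
    activities=["research grants", "network meetings"],
    outputs=["publications", "degrees conferred"],
    outcomes=["strengthened research capacity"],
    impacts=["policies adopted", "improved population health"],
)

# Print the systematic flow from resources to long-term results.
for stage in ("inputs", "activities", "outputs", "outcomes", "impacts"):
    print(f"{stage}: {', '.join(getattr(model, stage))}")
```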
Needs Assessment
A needs assessment is a systematic approach to collecting information that helps identify a need in a targeted field, community or system. Fogarty utilizes this approach to document current activities in a particular field and identify the needs and gaps, thereby providing an opportunity for future programmatic investments.
Network Analysis
Network analysis can be part of a bibliometric analysis or a separate analysis altogether. A network analysis provides a visual representation of a system, the players within the system, and how they interact. Given the strong collaborative nature of some Fogarty programs, it is important to identify the links between individuals or institutions within a network and to determine if they change or build over time.
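The sketch below illustrates the general idea, assuming hypothetical co-authorship pairs tagged by year and using the networkx package; it simply compares the number of links at two points in time to show whether the network is building.

```python
# A minimal sketch of a collaboration network built from invented co-authorship pairs.
import networkx as nx

collaborations = [
    ("Investigator A", "Investigator B", 2010),
    ("Investigator A", "Investigator C", 2013),
    ("Investigator B", "Investigator D", 2014),
]

def build_network(pairs, up_to_year):
    """Build a graph of all collaborations observed through the given year."""
    graph = nx.Graph()
    for a, b, year in pairs:
        if year <= up_to_year:
            graph.add_edge(a, b, year=year)
    return graph

# Compare the network at two points in time to see whether links build over time.
early, late = build_network(collaborations, 2011), build_network(collaborations, 2015)
print("links through 2011:", early.number_of_edges())
print("links through 2015:", late.number_of_edges())
```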
Secondary Data and Document Review
Existing data sets and program documents are often reviewed for an evaluation. This is typically the first step in a Fogarty outcome or impact evaluation. A review helps determine what foundational information about the program is available and identifies missing information that will need to be collected via another method (e.g., survey, interview).
In analyzing secondary data from NIH databases (e.g., IMPAC II, Medline, CareerTrac, World RePORT), algorithms are applied to search for data as well as patterns within the data. Document review is an approach for extracting information from application abstracts, meeting notes, grantee annual reports, or other existing documents relevant to the program under evaluation. Often the information needed is embedded within narratives, and a form of coding is employed to categorize the information in a meaningful way.
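As a simplified illustration of narrative coding, the sketch below assigns codes from a hypothetical code book based on keyword matches; a real document review would use a richer coding scheme and human judgment rather than keywords alone.

```python
# A minimal sketch of keyword-based coding of narrative text, with an invented code book.
code_book = {
    "capacity_building": ["training", "curriculum", "mentorship"],
    "policy_influence": ["policy", "guideline", "ministry"],
}

documents = [
    "The trainees completed a new curriculum in epidemiology.",
    "Findings informed a national policy on tuberculosis control.",
]

# Assign each document the codes whose keywords appear in its text.
for doc in documents:
    text = doc.lower()
    codes = [code for code, keywords in code_book.items()
             if any(word in text for word in keywords)]
    print(codes, "-", doc)
```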
Surveys
Survey instruments collect both qualitative and quantitative information from a group of individuals. Surveys can supplement information not available from other documents, allow for greater detail on topics of interest, or gather personal opinions. Surveys at Fogarty have been distributed at annual network meetings and during outcome evaluations to trainees and/or grantees. When creating a survey, the questions posed, answer structure (e.g., dropdown, free text), coding of answers, and response rates are all important for mitigating bias and ensuring reliable answers. As appropriate, all surveys conducted for evaluations have gone through OMB clearance prior to implementation.
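As a small illustration, the sketch below computes a response rate and tallies coded dropdown answers from a hypothetical survey; the numbers and answer categories are invented for the example.

```python
# A minimal sketch of survey tallying with invented coded responses.
from collections import Counter

invited = 120                      # number of trainees/grantees who received the survey
responses = ["very useful", "useful", "very useful", "not useful", "useful"]

response_rate = len(responses) / invited
print(f"Response rate: {response_rate:.1%}")   # a low rate can signal potential bias

# Tally the coded answers to summarize the distribution of opinions.
for answer, count in Counter(responses).most_common():
    print(f"{answer}: {count}")
```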
Note 1: An evaluation conducted too soon after the inception of a program limits the possibility of measuring longer term outputs, outcomes, and impacts. Based on experience, Fogarty estimates it takes, at minimum, ten years to start to see outcomes and impacts.