Session Thu-Incl-RPP: Invited Inclusion Session: Connecting Research and Practice through Partnerships in K-12 CS Education for Inclusion


Thu Feb 22, 2018
 1:45 PM – 3:00 PM



Leigh Ann Delyser (CSforAll Consortium), Fred Martin (CSTA), Stacey Sexton (SageFox)


This session will define and discuss Research-Practice Partnerships (RPPs), and how these partnerships between K-12 schools and research institutions can impact K-12 CS education for inclusion.

CHAIR: Leigh Ann Delyser (NYC Foundation for CS Education, United States)

Session Wed-RPP-PM: RPPforCS Community Meeting


Wed Feb 21, 2018
 1:00 PM – 5:00 PM


Rebecca Zarch (SageFox Consulting Group, United States)

Alan Peterfreund (SageFox Consulting Group, United States)

Leigh Ann Delyser (NYC Foundation for CS Education, United States)

RPPforCS Community Meeting

ABSTRACT. RPPforCS is a connected community of practice designed to support the NSF CS for All: RPP awardees by facilitating a common research agenda among its members, developing evaluator and researcher capacity, and collecting common data elements across RPP projects. This event will begin with a joint plenary session with the RESPECT conference titled “Designing for Broadening Participation.” The plenary will include panelists Richard Ladner, Chris Hoadley, and Ben Sayler, and will be moderated by Joanna Goode (all recent RPPforCS awardees focusing on reaching underrepresented populations). The RPPforCS awardees will then move into an afternoon session focused on community building, early capacity building for the Researcher-Practitioner Partnerships, and defining the shared research agenda.

Connecting Evaluation and Computing Education Research: Why is it so Important?

With the growth of computing education research in the last decade, there has been a call for strengthening empiricism within the computing education research community. Computer science education researchers are being asked to focus not only on the innovation that the research creates or the question it answers, but also on validating the claims made about the work. In this session, we will explore the relationship between evaluation and computing education research and why it is so vital to the success of the many computing education initiatives underway. The session will also help computing faculty engaged in computer science education research understand why it is essential to integrate evaluation and validation from the very first conceptual stages of their intervention programs.


2017 American Evaluation Association Conference

Evaluation 2017 brings together evaluators, evaluation scholars, students, and evaluation users to assemble, share, and learn.


About Evaluation 2017

2017 marks the American Evaluation Association’s (AEA) 31st Annual Conference. Taking place November 6-11 in Washington, D.C., Evaluation 2017 brings together evaluators, evaluation scholars, students, and evaluation users from around the world to assemble, share, and learn from the successes of the international discipline and practice of evaluation.

No matter your skill level, Evaluation 2017 will provide the opportunity to be involved in the shared experience through a variety of presentation and learning formats.

Birds of a Feather Gatherings: Also known as idea exchanges or networking tables, these are relatively small, informal, discussion-based gatherings aimed at building networks and exploring ideas.

Demonstrations: Formal 45- or 90-minute presentations that show how to use or apply an evaluation concept or tool.

Expert Lecture: A formal 45-minute presentation by a single expert in the field.

Ignite Presentations: These fast-paced presentations use 20 PowerPoint slides that automatically advance every 15 seconds for a total presentation time of just 5 minutes.

Multi-paper Sessions:  Three or more paper presentations on a common theme. Each paper presenter will have approximately 15 minutes to present and discuss the key points of their work.

Panel: This formal, thematic, 45- or 90-minute presentation focuses on an issue facing the field of evaluation.

Roundtables:  45-minute oral presentations, which typically include 15 minutes of presentation, followed by 30 minutes of discussion and feedback.

Skill-Building Workshop: Workshops teach a specific skill needed by many evaluators and include one or more exercises that let attendees practice the skill.

Think Tank:  45- or 90-minute session focusing on a single issue or question. Attendees break into small groups to explore the issue or question and reconvene to share their understanding through a discussion.

SageFox Granted NSF Award

Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All

Division Of Research On Learning
Initial Amendment Date: August 31, 2017
Latest Amendment Date: August 31, 2017
Award Number: 1745199
Award Instrument: Standard Grant
Program Manager: Karen King (DRL Division Of Research On Learning, EHR Directorate for Education and Human Resources)
Start Date: September 1, 2017
End Date: August 31, 2021 (Estimated)
Awarded Amount to Date: $1,222,855.00
Investigator(s): Alan Peterfreund (Principal Investigator), Rebecca Zarch (Co-Principal Investigator), Leigh DeLyser (Co-Principal Investigator)
Sponsor: SageFox Consulting Group, LLC, 30 Boltwood Walk, Amherst, MA 01002-2155, (413) 256-6169
NSF Program(s): ITEST
Program Reference Code(s): 023Z
Program Element Code(s): 7227

This project will adapt the Researcher-Practitioner Partnership (RPP) approach to build the capacity of evaluators and researchers to study, understand, and report on their project efforts, and to establish a participant-driven, multi-site research agenda for the Computer Science for All: Researcher-Practitioner Partnerships (CS for All: RPP) program. The project will engage the community of funded partnerships to collectively develop a shared research agenda that facilitates understanding of the efficacy of the RPP model and its impact on computer science (CS) and computational thinking (CT) education. The connected community will bring together the RPP research and evaluation teams and connect them to the larger CS for All education community through the CSforAll Consortium. This connection to the larger community will ensure bidirectional dissemination: the intellectual merit of the projects will reach the largest possible audience of researchers and practitioners, while ongoing connection to initiatives outside the funded RPPs will help share learnings and best practices. The project will develop a community consensus on a research agenda for the RPP programs, which may provide a solid foundation for future research, program evaluation, and assessment. This project has the potential to affect the relative success of the CS for All: RPP projects and the program overall. By creating a connected community, it will promote a robust culture of sharing knowledge from experts (both within and external to the community), sharing lessons learned in near real time during implementation, adopting common metrics that support both the projects and the program overall, and sharing means of dissemination to broader communities of researchers and practitioners.
This project has the broader potential of serving as a model for NSF programs for proactively developing the methods for shared learning and common metrics that have more commonly been developed and implemented at much later stages.

To both support the projects and maintain awareness of the larger initiatives, we propose a Connected Community of Practice (CCOP). A Researcher-Evaluator Working Group (R-EWG) will form to provide a process for pursuing the shared research agenda developed through the CCOP. This structure will facilitate data collection, analysis, interpretation, and dissemination at a program level. The approach will also create a mechanism for those outside the immediate RPP community to benefit from this investment, adding value by supporting those undertaking the RPP work and scaling to a broader audience. Recognizing that the funded partnerships will likely be highly diverse in size, context, research area, and strength of working relationships, the project will engage evaluators and researchers deeply at the outset of the program to collaboratively develop a shared data set to capture participation data across projects, co-define an RPP research agenda to advance the field of CT/CS educational knowledge, and use the CS for All infrastructure for collaboration, learning, and dissemination. This approach will lay the foundation for program-wide assessment and learning. The project will facilitate the community in developing a framework for answering:

1. What RPP-specific activities are high-leverage and highly effective in producing quality computer science education?
2. What common indicators are appropriate to collect across CS for All: RPP projects to demonstrate the relative and overall effectiveness of the RPP projects?
3. Using these indicators, what are the outcomes and impact of the CS for All: RPP projects for districts, teachers, and students?