Why your Broader Impacts killed your proposal

I’ve seen it all: Broader Impacts that were incredibly well thought out, and others that were obviously thrown together at the last minute. Performative inclusion, narrow focus, little impact, and no evidence of feasibility have all tanked proposals that were otherwise great science. Panelists usually comment something like, “The science was great, but I couldn’t get past how terrible the Broader Impacts were.”

NSF’s new mandate for broadening participation changes review expectations, but this is where the science community can use creativity (and subtle subversion). The mandate stipulates that any activities must be “open and available to all Americans.” BUT non-protected characteristics include socioeconomic status, geography, institutional type, and career stage. So as reviewers, we’ll be on the lookout for proposals from Minority Serving Institutions, EPSCoR states, low-income regions, or other subtle indicators. The list below still applies.

Performative inclusion. Tell-tale signs of performative inclusion include: the PI or team has no prior experience working or collaborating with the proposed groups; there is no ‘how’ described for recruiting participants; the activities are performed by a less senior person (usually a female POC) who does not clearly benefit from doing this (and probably wasn’t in a position to say no to the oblivious PI); or the plan otherwise leaves a bad ‘white-savior’ aftertaste. This usually stems from ‘we will give these poor people a workshop that doesn’t clearly benefit them in any way, but they obviously need this training because they are [insert marginalized flavor here].’ I’m a strong critic of proposals that show no evidence of need, benefit, or feasibility.

Narrow focus/benefit. If the proposal requests millions of dollars for science that is global, tackles wicked problems, and involves many co-PIs, but the Broader Impacts involve training a few graduate students, it falls into the category of ‘the scope of the Intellectual Merit activities does not match the scope of the Broader Impact activities.’ Don’t underestimate the importance of Broader Impact activities. They count just as much as Intellectual Merit, so throwing in funding for a few graduate students and counting it as ‘training’ isn’t going to cut it anymore.

Feasibility or likelihood of success. It’s one thing to propose cool Broader Impact activities, but if there’s little evidence that the PIs will be successful, that’s another red flag. Evidence of feasibility could come in the form of: we’ve piloted this activity in a course and it worked well, so we want to expand it; the on-campus center/group/student club is committed to integrating these activities, and the PI is a long-time collaborator/faculty advisor as evidenced by this…; our partner has successfully engaged students/the public in X past events, averaging XX participants.

No creativity. As reviewers, we see a lot of the same types of activities: training graduate students, workshops for stakeholders/students/the public, and it’s easy to get bored. When something totally new and unexpected comes along, we sit straight up in our chairs. These activities can get a reviewer excited and ready to go to bat for your proposal. The most creative activity I have ever seen: guiding middle schoolers through the civic process of proposing a state bird to the state’s legislature. Science and policy meeting in unexpected ways? Love it. Don’t be afraid to get creative, as long as there is evidence you can actually pull it off.

A High Priority proposal has strong Intellectual Merit AND strong Broader Impacts. Don’t make the mistake of spending all your time developing the science while overlooking the Broader Impacts.
