#180 - 5 May 2024

Fundability and Staffability. Plus: Successes and Struggles of Horizon 2020; Research Management as a profession; RSE Competencies; NASA Transform to Open Science; Infrastructure for Research on Teaching; CaRCC Engagement Guide for Smaller Institutions


For the last four issues we’ve been walking our way around the flywheel of research computing teams, looking at the external forces tugging on our teams — today I want to talk about the internal forces acting on us.

Researchers need things they highly value; our duty to research in our institution drives us to aim for high bang-for-buck impact; and continuing to do our job requires that our high impact be demonstrable to decision makers.

But first let’s look at what those external arrows have in common.

I’ve drawn the arrows as pointing in different directions, representing the different stakeholders (our individual researcher clients; research in our community as a whole; and our supporting clients, the institutional and external funders).

The fantastic news, though, is that all of those forces are actually pulling us in the same direction.

  • It’s not enough that some diffuse population of researchers kinda, sorta like our services — we need to offer services that actual individual researchers care enough about to phone VPRs or CIOs or program officers to advocate for support and/or services that they are willing to write into their grants. (#178)
  • It’s not enough that our efforts result in some vaguely positive outcome (#173) - they have to have high research impact per unit investment, as high as or higher than anything else that could have been done with the funds we use (#177). As a result,
  • It’s not enough that we can explain in the abstract to our decision-maker stakeholders what we do; we need to demonstrate to them that we are an excellent investment of scarce research funding dollars, because we have unusually high and understandable impact per unit of investment, and we need to voice this consistently and often (#178).

All of these constraints are orienting us towards one requirement: that we ensure our teams are fundable. Our teams are fundable when continuously allocating money to our teams is something that named researchers care specifically and vociferously about, and is something our funders feel confident about, because we’re consistently and demonstrably enabling disproportionately high research impact per unit money invested.

So if that’s the big external constraint on our teams, what are the internal constraints?

When I’m talking to leaders of technical research support teams, there’s one topic that always comes up very quickly after funding — and that’s hiring and retaining staff.

The people working on our teams are extremely intelligent, capable, and self-motivated experts, people who could find a job paying twice as much in industry within two or three months if they chose to. But they highly value the world of academic research; in fact, they probably had the opportunity to stay on the research track longer than they did. However, they get bored easily, prefer technical work to applying for funding and writing papers, and very much enjoy being able to go from project to project, learning new domain science as well as new technical skills as they do so. That’s how they end up on our teams.

So while the big external forces require us to make sure our team is fundable, our big internal forces require something of us if we’re going to have a staffed team to fund at all. Our team needs work that:

  • Allows people to see the direct impact of their work on their community’s research
  • Allows people to work on different kinds of projects and learn new things
  • Allows people to get better and better at something
  • Does not require heroic effort
  • Requires as little gruntwork or repetitive work as possible
  • Gives senior people a chance to have wide impact and a range of projects
  • Gives junior staff clear paths to career growth and advancement

We're driven by two constraints - fundability and staffability

So here’s where we’re just unbelievably fortunate, much more so than other expertise-based businesses like management consultancies: the overlap between these two parts of the Venn diagram is enormous. Our staff, by and large, want the same things the external constraints require of us.

  • The funders want to see demonstrable high impact, and our team members also want to see their particular work have real and important research significance.
  • Our researchers have a number of different kinds of needs, and our staff want to grow their skills and learn new things.
  • We need to deliver our offerings effectively and efficiently, and our staff want to get better at doing things; they don’t want to faff about with repetitive or unnecessary work, reinvent wheels, or have heroics required of them to get something over the finish line.

So by and large we want to structure the work our team takes on in a way that allows us to pick an area in which we’re going to become very good, to be fairly nimble about how we offer services and products in that area to researcher clients whose projects will greatly benefit from our contribution, and to have high impact in as efficient a way as possible.

I’ve talked about doing this before, especially in #157. Our work, our team, and our fundability all benefit from putting together a system where we can leverage existing skills, knowledge gained from projects done with researchers, and documented and communicated success stories into an ongoing practice. Over time, that looks like the slide below, which always generates a lot of discussion when I show it:

The spectrum of ways of bundling expertise: Open-ended cutting edge engagements on the left, productized services consulting to take advantage of broad experience in the middle, and efficient provision of best-practices driven products on the right.

That is, we develop the processes by which we can take the existing expertise of our team, systematically grow it and leverage it with standard operating procedures and automation, and bundle it into things that we know researchers value, that will more often than not lead to successful research projects and research impact, and that will let us communicate that impact to our sustaining clients in our institutions and funders. Which is to say, now we’re back at the diagram we started with 10 weeks ago:

The feedback loop that keeps our teams funded. Researchers bring us hard problems we can help with, domain experience, and possibly fees for services. They interact with us through an “API” of products and services. We apply our people, leveraged with process and equipment, to help produce successful research projects. The resulting research impact, if high enough, gets noticed by funders, who in turn fund the researchers and ourselves, and the cycle continues.

Next issue I’ll talk a little bit about how, once we’ve got this down, we can make it easier for ourselves and others to communicate what we do. Two very scary words - positioning and marketing.

And on that dire note, on to the roundup!

Managing Individuals and Teams

Over in The Other Place, Manager, Ph.D., issue #172, I talked about how individual productivity is, for our kinds of teams especially, not really what we care about.

Also covered in the roundups were articles on:

  • Coaching a team member towards deciding
  • Unlocking team performance
  • Decision transparency for stakeholders
  • Paying attention to the next larger context
  • Working in your best environment

The Research Ecosystem

In review: The successes and shortcomings of Horizon 2020 - Thomas Brent & Goda Naujokaitytė

It’s always worth reading reviews of funding programs, to see what matters to funders and to those who decide on their funding. This article in Science Business summarizes a European Commission evaluation of the Horizon 2020 program. Yes, papers and citations matter, but there were other measures Horizon 2020 was assessed against:

  • Commercial impact - turnover and total assets for participating firms, patents and trademarks
  • Building capacity by funding work in countries that don’t get as much research funding (“Closing the EU’s R&I gap”)
  • Closing a well-documented research funding gender gap
  • Within-Europe mobility of people
  • Open access publications
  • Social impact like COVID-19 research
  • Dissemination and deployment of results

Funders (institutional or national/supranational) will tell you what they care about (#75), and the more we can help them advance their goals, the more support we can start seeing for ours.


‘Very positive’ national support for research management - Nina Bo Wagner

I find it heartening that, after years of seeing nothing, there’s starting to be broad support for professionalizing management in our line of work. Here, Wagner summarizes a panel discussion of some work being done as part of the RM Roadmap effort in Europe. That effort defines Research Managers (RMs) as

…including research policy advisers, research managers, financial support staff, data stewards, research infrastructure operators, knowledge transfer officers, business developers, knowledge brokers, innovation managers, legal and research contracts managers/professionals, etc. For simplicity, we use the term research management, but this exercise covers also other terms such as research support, research management and administration, professionals at the interface of science and other terms which are used as the norm in the national landscapes across Europe.

And yes, RM isn’t a great name. In the UK, for instance, they’re looking for a better name and title.


Research Software Development

Foundational Competencies and Responsibilities of a Research Software Engineer - Goth et al., arXiv:2311.11457v2

I didn’t report on this when it first came out - there’s a somewhat reorganized v2 of the manuscript now, describing a set of competencies for RSEs at what I think is the right level of abstraction. (That’s no small praise! The hardest part of such an effort, in as diverse a field as any kind of technical research support, is considering the problem at a high enough level to apply widely while remaining grounded enough to still make meaningful distinctions. Some efforts in our line of work have struggled with this.)

  • Software/technical skills
    • Software lifecycle
    • Documented code building blocks
    • Distributable libraries
    • Use software repositories
    • Software behaviour awareness and analysis
  • Research skills
    • Curiosity
    • Understanding the research cycle
    • Software re-use
    • Software publication and citation
    • Using domain-science specific repositories/directories
  • Communication skills
    • Working in a team
    • Teaching
    • Project management
    • Interaction with users and other stakeholders

This is a really nice framework.


NASA Transform To Open Science (TOPS)

Ah, and I had missed this, too - NASA has long had a commitment to practicing open science itself, but with TOPS, it’s putting together a curriculum, tools, and resources for the practice of Open Science more broadly. I’ll be keeping an eye on this.


Research Data Management and Analysis

US agency allocates $90m to education research infrastructure - Craig Nicholson, Research Professional News
NSF invests $90M in innovative national scientific cyberinfrastructure for transforming STEM education - NSF News

This is interesting - digital research infrastructure to help researchers (and industry) study and improve things for one of the other missions of our institutions, education.

From the NSF announcement:

SafeInsights aims to serve as a central hub, facilitating research coordination and leveraging data across a range of major digital learning platforms that currently serve tens of millions of U.S. learners across education levels and science, technology, engineering and mathematics. With its controlled and intuitive framework, unique privacy-protecting approach and emphasis on the inclusion of students, educators and researchers from diverse backgrounds, SafeInsights will enable extensive, long-term research on the predictors of effective learning, which are key to academic success and persistence. […] Because progress in science, technology and innovation increasingly relies on advanced research infrastructure — including equipment, cyberinfrastructure, large-scale datasets and skilled personnel — this Mid-scale RI-2 investment [led by OpenStax at Rice University: LJD] will allow researchers to delve into deeper and broader scientific inquiries than ever before

One of the things I like about these mission-driven projects is that they inherently cut across the traditional DRI silos - there are necessarily elements of research computing, research data management, and research software development integrated into this. Here the privacy requirements make the research data management aspects primary, but the product wouldn’t work without research computing and research software expertise.


Research Computing Systems

Engagement Facilitation Guide for Smaller and Emerging RCD Programs - Daphne McCanse
CaRCC Capabilities Model Focused Tools Engagement Guide and Script - John Nicks, Forough Ghahramani, et al

Ah, this is nice - I’m a big fan of the CaRCC Capabilities Model, but it’s an awful lot for a smaller institution to even know where to start with. This is a guide for engaging with smaller institutions to help them come up with a plan for mapping out their capabilities. It could be used by someone coming in from the outside, or by the institution itself.

More broadly, it’s a nice guide to mapping out and engaging with key decision makers and stakeholders at an institution for any purpose.


Random

Fascinating look from quite some time ago on spreadsheet errors in the context of broader human error research: Thinking Is Bad.

The case against sudo.

The case for naming your handful of utility scripts starting with a comma.
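
The trick, for anyone who hasn’t seen it: no standard command starts with a comma, so comma-prefixed names can’t shadow anything, and typing “,” then hitting Tab completes against just your own utilities. A hypothetical sketch (my illustration, not code from the linked post), saved as ~/bin/,wordcount and marked executable:

```python
#!/usr/bin/env python3
# Hypothetical example of the comma-naming trick (mine, not the linked
# post's code): save as ~/bin/,wordcount, with the leading comma in the
# filename. The name can't collide with any standard command, and in a
# shell, "," followed by Tab lists all your comma-prefixed utilities.
import sys

# Count words across all files given on the command line.
total = 0
for path in sys.argv[1:]:
    with open(path) as f:
        total += len(f.read().split())
print(total)
```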

Another introduction to differentiable programming: Alice’s Adventures in a differentiable wonderland.
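
If you want a taste of the core mechanic, here’s a minimal sketch (my own illustration, not code from the book) of forward-mode automatic differentiation via dual numbers: each value carries its derivative along, and the arithmetic operators apply the sum and product rules.

```python
# Minimal forward-mode autodiff via dual numbers (an illustrative
# sketch, not from the linked book). A Dual carries a value and the
# derivative of the computation so far.
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # f(x)
    der: float  # f'(x)

    def __add__(self, other):
        # Sum rule: (f + g)' = f' + g'
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # Product rule: (f g)' = f' g + f g'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def f(x):
    return x * x + x  # f(x) = x^2 + x, so f'(x) = 2x + 1

print(f(Dual(3.0, 1.0)))  # Dual(val=12.0, der=7.0): f(3)=12, f'(3)=7
```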

MS-DOS 4.0 has been open-sourced.


That’s it…

And that’s it for another week. If any of the above was interesting or helpful, feel free to share it wherever you think it’d be useful! And let me know what you thought, or if you have anything you’d like to share about the newsletter or stewarding and leading our teams. Just email me, or reply to this newsletter if you get it in your inbox.

Have a great weekend, and good luck in the coming week with your research computing team,

Jonathan

About This Newsletter

Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.

So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though: working in research collaborations has taught us the advanced management skills, but not the basics.

This newsletter focusses on giving new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.

Jobs Leading Research Computing Teams

This week’s new-listing highlights are below in the email edition; the full listing of 135 jobs is, as ever, available on the job board.