#46 - 16 Oct 2020

Fist-of-five voting; how clear are your goals to your team; more flexible work; 500 research computing team manager job ads; the role of a maintainer

Hi, everyone:

I had a couple of good talks with fellow research computing team managers (in title or de facto) this week resulting from either the newsletter or the SORSE talk last week.

The problems each were wrestling with were tricky and stressful, and it was a reminder of how lonely management can be. I hope this newsletter - and the AMAs (Ask Managers Anything) section we had for a while, and will have again - helps cut through that a little bit. I also hope we can find or build more forums for exchanging knowledge and advice, or even just sharing the victories and defeats in research computing.

There are some good general management (and especially leadership) fora out there, such as the Rands Leadership Slack, which I quite like; but that audience mostly wouldn’t understand the particular culture of research (and especially academic research) that these managers were wrestling with.

If you do have questions that you’d like to get the readership’s input on, do hit reply and email them to me, and I can share them in the newsletter. I always end up learning something from reader emails, whether they’re questions or answers, like this one from last week when I asked whether anyone was using OpenHPC:

I learned just yesterday that the XSEDE Compatible Basic Cluster project is based on OpenHPC! https://kb.iu.edu/d/bfum So I doubt that it’s in use everywhere in XSEDE, but I would bet that a lot of the smaller sites are using it so they have a place to get started.

With that, let’s get on to the weekly roundup:

Managing Teams

Learning with Fist of Five Voting - Jake Calabrese

We’ve talked before about the benefits of asking your team for agreement not as a binary yes/no question but on a scale of 1 to 5; e.g. in #39, when mentioning the use of Zoom polls. This gives people who aren’t comfortable with a direction a way to express that without coming out and saying no. And if a number of people vote 1, 2, or 3, that will give them a bit more confidence in discussing why.

Calabrese describes a simple and fast way of doing the same without needing polling functionality (but also without anonymity): holding up a hand showing zero to five fingers to indicate varying levels of support for an idea. 0 (just a fist) is more or less adamant opposition, 5 is wild support, and anything in between is in between.


Are We Clear? - Paulo André

It can be mystifying to us, who are constantly looking at big-picture goals and nudging things in the direction of overall strategy, that the team doesn’t automatically understand the goals we have for them. But our team members, lacking telepathy, can’t see the stuff that’s rattling around inside our heads all day. The only thing to do is share those goals and vision again and again and again and again, verbally and in documents and in actions.

There is no plausible amount of talking about goals and strategy that is too much. And André reminds us that Google’s Project Aristotle (a followup to Project Oxygen) found that structure and clarity was one of the five most important factors in team success; those factors were, in order:

  • Psychological safety
  • Team member dependability
  • Structure and clarity
  • Meaning
  • Impact

André suggests a sobering test to see how aligned your team members are on the goals:

Here’s a quick test: try asking each of your team members to describe their understanding of the goals of the team, and what the plan to accomplish them is.


Do Your Employees Feel Safe Reporting Abuse and Discrimination? - Lily Zheng, HBR

If we want to support our employees, especially team members who experience sexism or racism, we need to make sure they have opportunities to report that abuse and discrimination. Although our teams are typically small, we’re often in large institutions which have mechanisms that can help, such as employee assistance plans (EAP), explicit offices for EDI or that handle sexual harassment or racial discrimination complaints, or ombudsman offices. It’s our responsibility to know what those resources are and to make it clear that they’re available to employees.

We also have to take any complaints that our team members trust us with extremely seriously, and take their needs into account when deciding how to proceed.


Embracing a flexible workplace - Kathleen Hogan - Executive Vice President and Chief People Officer, Microsoft

You may well have read that Microsoft, a pretty staid company, is leaning way into distributed work:

Work site: … However, for most roles, we view working from home part of the time (less than 50%) as now standard – assuming manager and team alignment. Work hours:… Work schedule flexibility is now considered standard for most roles. … Work location: (.. e.g. city and country): Similarly the guidance is there for managers and employees to discuss and address considerations…

It’s likely that most research computing teams will follow suit (maybe not about working from other countries, because of the requirements of some national funding agencies, but in most other ways). And I don’t see the kind of introspection about this in research computing that I’d like to see, beyond “it’ll be nice to keep working from home and go into the office sometimes”.

I’ll write about this more, but I think this is going to end up redistributing research computing teams substantially. Once universities get used to the idea that “our research computing teams don’t have to be here physically”, the next step is “do they have to be here organizationally?”

Small research computing teams will likely stay in place; there’s always some advantage to having local expertise. The large research computing teams that learn to run distributed teams well are going to have a pretty easy time picking up great talent, attracted by the breadth of opportunities that being part of a large team enables. It seems pretty clear, however, that undistinguished medium-sized teams (expensive enough to be an important line item, but with a muddy story about why they’re important) are going to be fielding increasingly skeptical questions from administrations.

Teams with clear specializations that bring obvious benefit to a research organization are going to do okay; those that don’t are going to have a year or two to make that benefit clear to decision makers.


Managing Your Own Career

Things I Learned from Looking at 500 Research Computing Manager Jobs over 10 Months - Jonathan Dursi (me)

As you know, the newsletter has always had a section on job postings for leads and managers of research computing teams, and since June I’ve also been keeping them on a job board. We’ve now hit 500 jobs that were posted at one time or another, so I thought I’d summarize the patterns I’ve seen:

  • There are a lot of jobs out there for people managing research computing teams.
  • There are certainly many more I’m missing.
  • Research computing teams are broadening, and so is the need for managers.
  • Research data management is increasingly employable.
  • “Traditional” research computing team management jobs remain, and they take forever to fill.
  • Despite the talk of RSE units, many research computing jobs within academic institutions are still lone outposts, embedded in a particular project.

Product Management and Working with Research Communities

Customizing pandoc to generate beautiful pdf and epub from markdown - Sundeep Agarwal

A lot of us write in Markdown and want to repurpose documentation, writing, or tutorial material into PDF and/or epub with pandoc; one problem is that the default templates are pretty ugly. Agarwal goes through a few basic steps showing where and how to insert formatting to improve the output so that it’s something that can be distributed without a lot of manual tweaking afterwards.
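
For context, here’s a minimal sketch, not taken from Agarwal’s post, of the kind of pandoc invocation involved, wrapped in Python; the flag values, the font name, and the file names are illustrative assumptions, and it presumes pandoc and xelatex are installed.

```python
# A minimal, illustrative sketch of driving pandoc from Python to turn
# Markdown into a somewhat nicer-looking PDF. The flag values, font, and
# file names are assumptions; pandoc and xelatex must be on your PATH.
import subprocess

def markdown_to_pdf(src: str, out: str) -> None:
    """Convert a Markdown file to PDF with a few formatting tweaks."""
    subprocess.run(
        [
            "pandoc", src,
            "--pdf-engine=xelatex",         # needed to use system fonts
            "-V", "geometry:margin=1in",    # saner page margins
            "-V", "mainfont=DejaVu Serif",  # assumes this font is installed
            "--highlight-style=tango",      # nicer code-block highlighting
            "--toc",                        # add a table of contents
            "-o", out,
        ],
        check=True,  # raise if pandoc exits with an error
    )

if __name__ == "__main__":
    markdown_to_pdf("tutorial.md", "tutorial.pdf")
```

Past a certain point you’d collect options like these into a custom template or defaults file rather than an ever-longer command line, which is the kind of customization Agarwal’s post walks through.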


Research Software Development

Research Software Engineers - Job Descriptions - Aalto Scientific Computing Group

Aalto University’s Scientific Computing group has published job-description text for a simple three-step progression (RSE1, RSE2, RSE3) for software development roles at their institution. It’s not yet a ladder formally recognized by HR, but it guides their hiring decisions. The whole thing is just a few paragraphs long, but it’s very clear, and it’s a lot more than most institutions have. The other internal documents they have on the page are models of clarity and transparency, and they should be commended for them.

Does your institution have anything like this that’s publicly viewable? If so and you’re willing to share with the newsletter community, send it along.


Digital Humanities RSE Survey Results - DH Tech

DH Tech, an online tech community for the digital humanities, published the results of their survey of 66 software developers. Some interesting results from my point of view:

  • Almost half (49%) were part of RSE groups rather than part of DH projects (33%); I think the ratio would go the other way in the sciences.
  • Almost 60% have permanent positions, which is terrific, but more than 2/3 (67%) say they have no career path, which isn’t great.
  • Most developers were “full stack” and/or spent their time doing project management, and Python and JavaScript were overwhelmingly the most common programming languages.
  • There was a pretty flat distribution of tasks they enjoyed doing most, but the areas where they wanted to build more skills were heavily weighted toward data science, AI/ML, and visualization.
  • Almost half had PhDs, and more than half had their highest degree in the humanities. It’s interesting to me that the humanities are now developing their own RSEs one way or another; it seems to me that 10-15 years ago most would have been “ex-pat” physical-sciences types.

The Role of a Maintainer - Matthew Rocklin

Matthew Rocklin, of PyData and Dask fame, talks about the role of an open-source software maintainer - a task which will look awfully familiar to managers of software development efforts - and how to prioritize tasks if you are going to spend (say) 10 hours a week on it. This would be a good piece to point someone to for setting expectations.


Calls for Proposals

Call for Posters: Minisymposterium on Software Productivity and Sustainability for Computational Science and Engineering - Due 29 Oct

SIAM’s Computational Science and Engineering (CSE) 2021 conference, 1-5 March 2021, is officially all-virtual, and contributed posters are due 29 October. There are a number of sessions of interest to readers of RCT; this one, for posters on software productivity and sustainability, may be particularly relevant.


PASC 2021 - Due 22 Nov for minisymposia proposals, 13 Dec for paper submissions

This coming year’s ACM/Swiss National Supercomputing Centre (CSCS)-sponsored Platform for Advanced Scientific Computing (PASC) Conference will be held from July 5 to 8; it is currently planned to be in person at the University of Geneva.

Some relevant topics of interest for the conference are:

  • Extreme scalable methods in numerical methods, algorithms, or large-scale simulation
  • Effective use of advanced computing systems
  • Best practices and tools for productive and sustainable scientific and engineering software development
  • Data/simulation integration
  • Reproducibility, verification, validation, and UQ
  • DSLs
  • Runtimes and middleware
  • AI/ML for computational science
  • Computational approaches for social sciences

ISC 2021 Call for Research Papers - Due 10 Dec 2020

ISC 2021, which will be held June 28-30, 2021 and is currently planned to be in person, has its CFP out. The big topic areas are: Architecture, Networks, & Storage; HPC Algorithms & Applications; Programming Environments & Systems Software; Machine Learning, AI, & Emerging Technologies; and Performance Modeling, Evaluation, & Analysis.


Random

arXiv now allows you to link code to paper submissions, at least for machine learning papers (this was done in partnership with Papers With Code, an ML effort). Hopefully it will spread more broadly across arXiv.

Bubbletea is an Elm-inspired Go framework for building highly interactive terminal programs.

Buildpacks are a CNCF standard for building container images directly from code, allowing easier enforcement of standards for how containers are built in a project. Google announced widespread support for buildpacks within Google Cloud.

Formal methods for verification are becoming more readily usable in a number of ways. Crux allows symbolic testing of C++ or Rust code using property-based testing, with counterexamples showing where a test would fail. This may be really nice for subtle kernels of scientific code.

The Fortran front end for LLVM, Flang, is now a standard part of LLVM 11.0.

Even banks are doing chaos testing.

Conducto is an interesting-looking CI/CD tool; it’s free for running on your own hardware, and you pay to use their cloud instances to run the tests.

HashiCorp released two interesting tools this week: Boundary, for zero-trust SSH access to nodes, and Waypoint, for building and deploying applications.