#56 - 9 Jan 2021

The Stop Doing Something Challenge Part 1; embedding staff question; nudges; product ownership; property-based testing; against text protocols; more smaller HPCs; real time ML

Hi!

Happy New Year, everyone. I hope you had some time over the holidays to rest and recuperate. A lot of us have discovered that it wasn’t enough - we’re still dragging - but we’re less tired than before, research needs us and our teams, and so we’re back into it.

An awful lot of early January writing always advocates for people to start something new - develop a new useful habit, start learning a new skill, make time for other important activities.

Those can be great! But we only have so much time in the day, and to add something useful to our routine we have to drop something else, especially with our own and our teams’ energy levels still low. To add a new high-priority effort we need to de-prioritize - possibly to zero - something else.

Stopping doing things is difficult, because we’re all smart people and presumably all the things that we used to do that have zero benefit have already been chucked. So we’re left with things that do have some kind of benefit to someone; they all have a justification.

But that doesn’t mean all of the things we do are equally valuable, and the low value things we do - or the things we do that could just as easily (or more easily) be done by someone else - take time away from doing other things that would be better for our researchers, our team, and ourselves in the long run.

It’s important to routinely cut these lower value tasks out of our to-dos. Even if we’re used to doing them, even if they don’t take long, even if it’s not that bad or even kind of fun to do them and cross them off the list, even if someone likes that we do them - these are time sucks that take us away from the most useful things we could be doing.

So I want to encourage you and your team to stop doing some things. This month, ideally this week - let’s find something to stop doing and get it off our desk, either by delegating it to someone else (and helping them to drop one or more lower priority things) or, ideally, stopping the activity entirely.

Tasks that aren’t aligned with priorities, or that don’t have significant leverage, can and should be ruthlessly dropped. That can be at the team scale, activities your team as a whole is doing, or at the individual scale, things you are personally doing.

Hopefully you have a pretty good understanding of your team’s or organization’s strategic priorities. What are the things your bosses and stakeholders need, that your team can provide best, and that there aren’t alternatives for? What are the things you can be doing that will enable research to happen that wouldn’t happen otherwise? Are there areas that your team needs to grow - or grow in visibility - to open up the opportunities you see ahead? These are strategic goals, and it’s difficult to make good decisions about what to take on and what to drop without them.

If you don’t have a clear set of strategic goals, there’s no shame in that – I’m working with a team that doesn’t, and am working to help them discover them – but it’s time to start thinking about what they should be. I’ll write more about that later.

As a manager you can also develop a pretty good sense of the leverage of any activity. Anything you do that helps your team execute future tasks faster, or take on new kinds of tasks, is an activity with leverage. Like a lever, it helps your team move rocks they wouldn’t have been able to move before. It helps your team develop capacity and capability, and so it is higher priority than doing any individual task.

Not everything can be top priority, and some low-leverage tasks have to get done. You’ll have a portfolio of activities - as a team, and yourself as an individual - with high and low alignment to strategic priorities, and high and low leverage.

But your team will have more impact, collectively and as individuals, if you as manager routinely drop the low-leverage or low-priority tasks in favour of those that align with priorities or have higher leverage. And your team members will have more clarity, and less uncertainty, about what they’re supposed to be accomplishing and how to prioritize their efforts if you are routinely, repetitively clear about why these tasks are being dropped and are consistent about taking on the more important efforts.

High priority tasks should be pretty clear - once you know what your priorities are as a team. What do high leverage tasks look like?

  • Work spent recruiting and hiring great new team members directly builds the capability and capacity of your team;
  • Time spent developing your team members’ skills or your own skills as a manager - doing online courses, attending online conferences, knowledge exchange within the team or across teams - also builds capacity and capability;
  • Delegating tasks to a team member - giving them more responsibility and showing them how to use it - is a form of team member development;
  • Work spent - continuously - on improving team processes and communications will speed up execution of future tasks and is almost always more valuable than getting any particular task done a little faster;
  • Time you spend talking individually with team members, keeping them on the same page and forestalling conflict or wasted work, can avoid much more lost time down the road;
  • Similarly, time spent talking with your bosses, funders, stakeholders, or researchers to communicate what the team is doing and finding out what they need can avoid even more lost time.

The frustrating thing about these high-leverage tasks is that they’re not “one-and-done” - keeping people aligned, building people’s skills, and improving team processes are all ongoing efforts that take modest amounts of time continuously.

So what does all this mean for you as a research computing team manager?

Almost all of your personal activities can be high-leverage. You’re a multiplier, not a maker, and by virtue of your role you have a lot of responsibility for keeping the team aligned to high priorities, improving team communication and performance, and going back and forth between the team and bosses/funders/researchers/stakeholders to make sure you’re working on the right things.

That doesn’t mean you can’t spend any time reviewing PRs or being on call or curating datasets, but if that same hour could instead be spent helping your 5-person team be 0.1% more effective this year, that’s one person-hour’s worth of effort expended that could have produced roughly ten person-hours’ worth of benefit (five people × ~2,000 working hours a year × 0.1%).

You’re not the only one who should be performing high-leverage tasks; team members can be encouraged to share knowledge, communicate within and outside the team, contribute to improved team practices, and the like. Supporting them doing that is another high-leverage task for you.

So what am I going to do?

Our team has been pretty ruthlessly focused on our strategic priorities - we’re all about connecting private genomic data, so we’ve avoided non-strategic efforts like model organism or microbial genomics (not private) or building single-node versions (not “connected”). So there’s not much to drop there - but within our activities I haven’t been very good at maintaining focus on the top priorities, letting “nice-to-haves” slow us down on the way to our “need-to-haves”.

But I haven’t been very good at focusing on high-leverage activity. So I’m going to stop:

  • Writing routine materials. For some time I’ve been responsible in our team for generating some kinds of documents - meeting notes, external communications - because I was the one who had the whole-team picture. That’s extended well beyond where it makes sense; it’s actually a problem that I’m the bottleneck and that others aren’t routinely speaking for the team. I’m going to start delegating those tasks, which will take more time to begin with but will save time and strengthen the team over the coming months.
  • Creating common things from scratch. Between processes and templates I’m going to semi-automate a number of things I have to do (some onboarding processes, some emails, …).
  • Attending some meetings, and attending others as often. Most of the meetings I attend are useful and well-run (we’re lucky!) but there are one or two I’m just not needed at, whose circulated notes I could scan afterwards instead; others I should be at, but I’m going to advocate for them to be less frequent.

I’m hoping I can continue stopping through the year, and will keep a to-not-do list to make sure they stay stopped. How about you, reader? Is there something you can stop doing or do less of? I’d love to hear from you and share it with the newsletter as an example. As always, just hit reply or email me at [email protected].


On other topics, a reader writes in wondering what options people provide “PIs that have funding to hire “research assistants” who are typically assigned tasks such as research software development but also, too often, hpc system administration and user support, “data science”, etc.”:

It seems to me that the options could map reasonably well to the usual options for computing resources, e.g.:

  • the usual, “dedicated” hiring option
  • a “buy-in” option where formally there’s still a dedicated hire but the staff is embedded in a central team that supports them, in return for some of the staff’s time for other tasks (a step towards the staff remaining in the central team once the project/grant is over)
  • a “service” option, where the PI buys a part of the time of staff in the central team
  • besides of course the “shared” option, supported by central funding, but with its inherent limitations

In particular, I think the “buy-in” could be a step towards reducing many of the anxieties on the part of both the PIs and the staff, and reduce redundancies across institutions…

But of course, it’s a hard problem, there are often formal obstacles, and I’m sure there’s no silver bullet, otherwise everyone would be using it :)

What other formulae have you seen or offered to researchers? I have one that I wish existed but I’ve never seen - a “recruiter” option where the research team directly hires their own person, but the local team of experts helps recruit and assess candidates. I’ve never seen that kind of research computing recruiter before, even though it’s common elsewhere in tech, and given how hard it is to hire technical people I’m kind of surprised.

Have you had any particular luck (good or bad) with any of those models? Have you seen something else used? Write back and let us know - reply or [email protected].

And now, the roundup!

Managing Teams

How to Be an Even Better Leader - Karin Hurt and David Dye
A different kind of new manager checklist: The 4 essential questions to ask yourself as a leader - Claire Lew, Know Your Team

Two short checklists on upping our manager games.

In the first, we’re encouraged to

  • Slow down and revisit the fundamentals
  • Teach new managers - those of us who went to grad school have probably had the experience of thinking we knew the undergrad material, then really learning it when we were forced to teach it as a TA. Teaching is a powerful way of learning
  • Learn new techniques yourself first
  • Let your team know what you’re working on - to model learning new things, and to be accountable for what you’re trying to learn.

In the second, there are four questions we’re encouraged to periodically ask ourselves:

  • How can I create an environment for people to do their best work?
  • How can I create as much clarity and coherence about what needs to get done, and why?
  • How can I personally model the behavior I want to see in the team?
  • How can I see things for what they are, instead of what I want them to be?

I have trouble with the last two in both lists.


Hidden Lenses: What to do when your intention is misread - Padmini Pyapali
Driving Cultural Change Through Software Choices - Camille Fournier

These two articles emphasize the importance of communicating not just the “what”: the first stresses communicating the “why”, and the second enabling the “how”, to make sure what really matters gets across.

In Pyapali’s article, we’re reminded that our intention often doesn’t come through in our communications, so it’s important to state your intent explicitly. Without this, it’s too easy for others to misinterpret you, often resulting in hurt feelings - or, more often but maybe worse, in misunderstanding the why, so that they do the wrong things in the future. If you communicate the “why”, the intent you’re trying to achieve, not only does it help with what you’re communicating at the moment, it also helps your team member stay aligned with that intention in other activities and decisions they make. This is one of the reasons it’s so important to mention impact when giving feedback, whether using the Situation-Behaviour-Impact model or the Manager-Tools model.

Fournier’s article focusses on the importance of “how”s - trying to ensure that doing the right thing is easy. If you want to make sure that developers write tests, make sure there’s a library of existing tests for them to base new tests off of. If you want to make sure that deployments happen quickly, ensure there’s automation which makes that happen. You can communicate the “what” all you like, but making the “how” for doing the right thing easy will have much more impact.


Productivity Is About Your Systems, Not Your People - Daniel Markovitz

It came up in our discussions of some “measuring developer productivity” articles last year that, especially in research, teams are productive, not individuals. And to make teams productive you have to spend (leveraged!) time making sure your team processes are working smoothly. That means ensuring good communications, making work visible (we’ve been pushing towards Jira and Confluence - it’s been a slog, but we’re starting to see the benefits), and clarifying communication expectations.


Building neuro-diverse team culture

Here’s an evolving collection of resources which may be of use to managers to support current or future team members with ADHD, Autism, or Dyslexia.


Product Management and Working with Research Communities

${var?} and &&: Two simple tips for shell commands in tech docs - Vidar Holen

A short blog post making a simple point - users will have much more luck copy-and-pasting sets of command lines from your documents and tutorials if you join consecutive commands with && (so that one failed step stops the whole pasted sequence) and use ${NAME?} instead of ad-hoc placeholders like <NAME> (so that a forgotten substitution fails loudly rather than silently doing the wrong thing). For example, mkdir -p ${PROJDIR?} && cd ${PROJDIR?} either runs with a real directory name or stops immediately with a “parameter null or not set” error.


The 10 Attitudes of Outstanding Product Owners - David Pereira
Tactfully rejecting feature requests - Andrew Quan

Because of the funding structure of research, our training has taught us to think in terms of projects; but in research computing we’re mainly managing products - long-lived things that other people use, which don’t typically have clear start or end dates.

That means thinking in terms of differentiation, strategy, speeding up the learning process, priorities, and alignment, rather than - or at least in addition to - deadlines, roadmaps/Gantt charts, and execution.

Pereira’s article is a good crash course in that line of thinking. Quan’s emphasizes one particular part of it, and is particularly relevant to thinking about strategic priorities for the year. You should say yes to new feature requests sparingly; but “no”s don’t have to be negative. You can use your “no”s to align stakeholders with your strategic goals - and to validate those strategic goals, to make sure they’re the right ones.


Why Maryam Tsegaye’s prizewinning video is so important for online learning: my 12 reasons - Tony Bates, writing for Contact North

Fort McMurray, Canada high school student Maryam Tsegaye won a global science communication video competition with this three-minute video explaining quantum tunnelling. Bates has been working on distance learning for decades, and has been very influential in online learning over the last twenty years or so.

He points out that the video is good and has circulated widely for a number of great and well-understood reasons - and, by implication, that we can do this for our own projects too. Clear explanation, enthusiasm, relatively modest requirements for editing and diagrams, are all things we can bring to bear in our efforts, though we rarely make this kind of effort.

(And it’s never too late to start! 64 years after its invention, Fortran finally has an online community, and its first year has been a big success)


Research Software Development

Hypothesis and Proptest for Property-Based Testing in Bioinformatics Software - Luiz Irber, twitter thread
An Introduction To Property-Based Testing In Rust - Luca Palmieri

Back in Issue 48 I asked whether others were using property-based testing for research software; it seemed like a natural match for some of what we do - checking that properties of the return values hold across whole ranges of generated inputs, rather than just checking “the right values” for some small number of hand-picked inputs.

Irber mentions his use of property-based testing via the hypothesis package in the Python parts of the sourmash bioinformatics tool, and via the proptest crate in its growing Rust portions. He also shares this YouTube video by Jes Ford on testing for data science using property-based testing.
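To make that concrete, here’s a minimal sketch of what such a test can look like with hypothesis; the reverse_complement function is a made-up example for illustration, not sourmash’s actual code:

```python
# A minimal property-based testing sketch with hypothesis;
# reverse_complement is a hypothetical example, not sourmash code.
from hypothesis import given, strategies as st

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

# hypothesis generates arbitrary DNA strings, not a few hand-picked cases
dna = st.text(alphabet="ACGT", min_size=1)

@given(dna)
def test_revcomp_is_its_own_inverse(seq):
    # a property that must hold for every generated input
    assert reverse_complement(reverse_complement(seq)) == seq

@given(dna)
def test_revcomp_preserves_length(seq):
    assert len(reverse_complement(seq)) == len(seq)
```

Run under pytest, hypothesis generates on the order of a hundred inputs per test and shrinks any failure it finds down to a minimal counterexample.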

Palmieri, in this section of his book, gives a deep dive into using the proptest crate in Rust for an easy-to-understand (if less obviously research-computing-relevant) case of validating emails, in a test-driven-development fashion, and then refactoring the code to make it clearer.

For a lot of research software development property-based testing seems like an excellent fit, with packages available in a number of programming languages. Are there other examples of teams starting to move to this approach?


A case against text protocols - unmdplyr

I’d add “and file formats” to the title.

We’re super bad at this in research computing. We’ll spend ages having very sophisticated thoughts and conversations about algorithms or architectures, and then we switch over to file formats and writing things out - on disk or over the wire - as text.

This is a terrible idea. We need to think about serialization of input and output of data at higher levels - in terms of APIs we want to support - and do a much better job of using existing tooling instead of bike shedding text formats and figuring out what field goes in what column like it’s 1957 and we’re writing punched cards.

The blog lays out some of the reasons why, in the form of rebutting arguments for it:

  • You can type it out! Yeah, well, you could do that storing fields in binary in a database or with a simple converter, too
  • Easier to parse/debug! Absolute nonsense. Anyone who argues that text is easy to parse gets flung into deepest darkness
  • Extensibility! Not false, but orthogonal to text vs non-text, and extensibility isn’t an unvarnished good
  • Error recovery and resilience! This goes back to easier to parse/debug and just isn’t true.

We need to use better tools than text for serializing results, and we need to think at higher levels than byte-level representation in files.
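As one sketch of what “using existing tooling” can look like - with a made-up “runs” table and columns, purely for illustration - even Python’s built-in sqlite3 beats hand-rolling a whitespace-delimited format:

```python
# A sketch of leaning on existing serialization tooling (here, the
# stdlib's sqlite3) instead of inventing a text format; the "runs"
# table and its columns are invented for illustration.
import sqlite3

conn = sqlite3.connect("results.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS runs (sample TEXT, score REAL, n_reads INTEGER)"
)
conn.executemany(
    "INSERT INTO runs VALUES (?, ?, ?)",
    [("sampleA", 0.92, 120345), ("sampleB", 0.87, 98231)],
)
conn.commit()

# Types survive the round trip, and queries replace ad-hoc awk/cut parsing:
for sample, score, n_reads in conn.execute(
    "SELECT sample, score, n_reads FROM runs WHERE score > ?", (0.9,)
):
    print(sample, score, n_reads)
conn.close()
```

You still get to “type it out” when you need to - a few lines of query dump the table as text - but the canonical representation stays structured and typed.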


Learn about ghapi, a new third-party Python client for the GitHub API - Hamel Husain, GitHub

GitHub has their own CLI, but this third-party tool, ghapi, looks to be significantly more feature-complete, including being able to configure and run GitHub Actions, and with tab completion.
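As I understand the docs, ghapi exposes GitHub’s REST operations as snake_case method calls; here’s a rough sketch (the owner, repo, and token are placeholders, and it’s worth confirming the exact operation names against the ghapi docs):

```python
# A rough sketch of the ghapi calling pattern; owner/repo/token are
# placeholders, and operation names mirror GitHub's REST API operation
# ids - confirm against the ghapi docs before relying on them.
from ghapi.all import GhApi

api = GhApi(owner="octocat", repo="hello-world", token="<a personal access token>")

# Operations are grouped the way GitHub's REST API groups them:
for issue in api.issues.list_for_repo(state="open"):
    print(issue.number, issue.title)
```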


Nine Best Practices for Research Software Registries and Repositories: A Concise Guide - Task Force On Best Practices for Software Registries, ArXiv

An interesting distillation of best practices from a committee of many different research software registries and repositories, going into some detail on their recommended practices:

  • Provide a public scope statement
  • Provide guidance for users
  • Provide guidance for software contributors
  • Establish an authorship policy
  • Share your metadata schema
  • Stipulate conditions of use
  • State a privacy policy
  • Provide a retention policy
  • Disclose your end-of-life policy

Research Computing Systems

ARM support in Linux distributions demystified - Eloy Degen

My guess is that there’s going to be increasing interest in ARM for research computing this year. This blog post gives the state of support in some common Linux distros for recent (ARMv6-v8) ARM systems.


The Case for ‘Center Class’ HPC: Think Tank Calls for $10B Fed Funding over Five Years

For those who haven’t seen the Centre for Data Innovation’s report advocating tripling NSF’s funding for university HPC centres, the report and the arguments therein may be useful for your own internal advocacy efforts.


Emerging Data & Infrastructure Tools

Maybe You Don’t Need Kubernetes - Matthias Endler
Bare-metal Kubernetes with K3s - Alex Ellis
Kubernetes is a container orchestration system, but that’s not the point - Nikhil Jha

For a lot of research computing, docker compose isn’t enough, but Kubernetes … is a lot. There are a number of lightweight kuberneti out there - k3s and microk8s are two that come to mind - but there are also projects like Nomad, which just reached 1.0.

Endler’s post describes why their team at trivago didn’t require Kubernetes and why they went with Nomad instead. In another direction, Ellis’ article describes how they’re using k3s on bare metal, which may be of interest to research computing.

Those who are surprised to hear of bare-metal Kubernetes may want to read Jha’s article. Kubernetes is most widely thought of as container orchestration, but it’s principally declarative, multi-node process management for applications - basically a declarative OS for clusters - and the use of containers is just one possible implementation of that.


Machine learning is going real-time - Chip Huyen

Research computing systems teams are increasingly being asked not just to support batch-mode training, but to support real-time or near-real-time inference based on the models so trained. Even more complicated is the “online learning” case, where the incoming events actually update the model.
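To make the distinction concrete, here’s a minimal sketch of the online-learning mode, using scikit-learn’s partial_fit as a stand-in for whatever streaming framework is actually deployed (the event data is invented):

```python
# A minimal online-learning sketch: each incoming event updates the
# model in place, instead of waiting for the next batch retraining run.
# scikit-learn's SGDClassifier stands in for a real streaming setup;
# the event data here is invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # must be declared on the first partial_fit call

events = [(np.array([[0.2, 1.4]]), 1),
          (np.array([[1.1, 0.3]]), 0)]

for features, label in events:
    model.partial_fit(features, np.array([label]), classes=classes)

# The continuously-updated model serves predictions between updates:
print(model.predict(np.array([[0.5, 1.0]])))
```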

Huyen’s post provides a detailed overview of how these “MLOps” problems are being solved in different regimes - static vs online learning, request-driven vs event-driven - and of some of the architectural approaches to supporting those modes.


Calls for Proposals

2021 Annual Modeling and Simulation Conference, 19-22 July, Hybrid event - Tutorial proposals due 22 Jan, Papers due 1 March.

A poster outlining the technical tracks is available here.


Argonne Training Program on Extreme-Scale Computing (ATPESC 2021) - 1-13 Aug, Chicago, deadline for Applications 1 March

From the website:

The core of the program will focus on programming methodologies that are effective across a variety of supercomputers and that are expected to be applicable to exascale systems. Additional topics to be covered include computer architectures, mathematical models and numerical algorithms, approaches to building community codes for HPC systems, and methodologies and tools relevant for Big Data applications.


Events: Conferences, Training

Research Software and the Modelling of COVID-19 in the UK - 13 Jan, 15h00 UTC, Free

The first SORSE event of the new year, discussing the role of RSE in COVID-19 modelling in the UK.


Know Your Team Live: A New Manager’s Primer on Leading Remote Teams - 13 Jan 10am PST, Free

Know Your Team, whose blog posts sometimes make the roundup, is running a free one-hour workshop on leading remote teams.


LeadDev Live - 21 Jan, 12:45-7:50pm EST, Free

A free, half-day, 2 track conference on software development leadership, covering both technical and people topics.


2021 Common Workflow Language Virtual Mini Conference - Repeated sessions on Feb 8, 9, 10 in USA, EMEA, and APAC friendly timezones. Free

Four-hour miniconference on the Common Workflow Language, one of several workflow languages supported by a number of workflow engines.


C++ Europe - 23 Feb, Online, €176-327

A day long series of talks on C++ software development, including topics of interest to many of us in research computing, such as maturity of a code base, refactoring legacy code, building and packaging, and cross-platform development.


Random

A post-incident review of the events of the movie Home Alone.

Every year technical leader Anil Dash (of Glitch, Stack Overflow, and many other companies in the past) does a personal digital reset - deleting apps, unfollowing all accounts, and then only adding accounts and apps as needed.

An introductory video for an interesting-looking MOOC on ODEs, based on the author’s free textbook (see this twitter thread by an expert in both performing and teaching computational physics).

The broadening remit of research computing well beyond the physical sciences is now so commonplace that it’s coming up in general tech articles, such as this Ars Technica interview on Digital Archaeology.

There’s more and more interest in programmatically generated diagrams based on diagram “languages” - potentially useful for those of us needing to keep architectural diagrams up-to-date.

Speaking of, DSLs are becoming more common as tooling improves - here’s a guide to LLVM for programming language creators.

“Statically link” your shell scripts - replace dependence on PATH variables with absolute paths using resholve.

A really deep dive into tree data structures for indexing.

Manage your (Chrome) browser tabs as a (in my case, very large) file system.

In 1993 or so, were you, like me, convinced that Gopher - and services like Archie and Veronica - were the way to go, and that HTTP and Mosaic were a fad that would pass? There’s still time for us to be right - welcome to the Melbourne home of the Gopher revival.

Love Comic Sans, but thought you spent too much time in the terminal with monospace fonts to be able to make more use of it? Great news.

Interactive C++ for Data Science, using cling and Jupyter, with the high energy physics data analysis package ROOT as an example.

It absolutely kills me to say this, but Windows is increasingly a plausible platform for research computing. Here’s Scott Hanselman’s 2021 “Ultimate Developer and Power Users Tool List For Windows”.

Writing maintainable CSS. As with so much software development advice, much of it comes down to using the most powerful aspects of your tools sparingly.