#31 - Link Roundup, 3 July 2020

Hi, all:

Half the year has passed, it’s Canada Day week here, and it’s the last day at our team’s organizational home of the past four years before we move to a neighbouring institution. It’s been a good opportunity to take stock, and our team has some suggestions for what we should do differently when starting afresh next week.

And I’d definitely welcome input from you, too! A reminder that all tracking is disabled on this newsletter — I rely on the old-fashioned approach of you emailing me to tell me what is interesting, what isn’t, what you’d like more of, and what you could do without.

So with that, on to the — somewhat shorter for the holiday week — link roundup!

Managing Teams

Making Space to Disagree - Meg Douglas Howie

I know I keep hammering on this, but it’s such an important topic, and people keep writing good articles about it.

In our line of work, our team members are generally experts or becoming experts in various areas, and if they’re not comfortable speaking up and disagreeing — with each other, or maybe more importantly, with us — not only are we losing incredibly valuable input, we’re also running the risk of eventually losing them.

There are a number of techniques in here for soliciting input in the early stages of discussions that avoid either-ors or active disagreement - ranking options, placing options on a two-dimensional spectrum, and approaches that are more like brainstorming. These let people weight options or express pros and cons differently. And as a benefit, the conversation automatically focuses on the more subtle and interesting tradeoffs rather than on the parts where there’s consensus and so no further conversation is really needed.


The Work-Centric Standup - Leeor Engel

A lot of us in research computing run daily standups of one sort or another. We use the “people-centric” status update approach - what we did since the last standup, what we plan to do before the next, and whether we’re blocked; this article advocates for the more kanban-board-focussed approach, where you run through the list of tickets/stories/stickies and talk about what progress has been made on each.

There are good arguments either way. What approach does your team use? Do you know anyone who switches back and forth between approaches?


Four tools that help researchers working in collaborations to see the big picture - Anna Nowogrodzki, Nature

The tools described here are well known to readers of this roundup — Trello, Jira, Asana, and GitHub Project boards — but it’s nice to hear how other research groups use project management tools and which are the killer features for them.


Managing Your Own Career

Avoid Burnout and Start Saying No. Here’s How - Pat Kua

We’re all still being asked to do an awful lot, and between supporting our teams during a pandemic, confronting issues around systemic discrimination, and pressures from our own management and stakeholders, things can be overwhelming. This article is just a reminder that saying “No” or “Not now” is almost always an option. Your team has priorities, and neither you nor they can or should be losing focus on those for every incoming request.


Product Management and Working with Research Communities

Engineers Code: reusable open learning modules for engineering computations - Lorena A Barba

Prof. Barba is a well-known CFD researcher and advocate for reproducible simulations and papers. Here she discusses her experience running a MOOC of modules for engineering computations, built with Jupyter notebooks and delivered on Open edX.

A lot of research computing teams run live training sessions, but I don’t see very many trying to scale up their efforts by moving to automated assignments and online self-guided materials. While the failings of MOOCs for introductory material are pretty well understood by now, they can be very successful for helping students who are already knowledgeable in an area learn specific new skills, especially around computing. (I’ve been meaning to take a look at DataQuest, for instance.) Have your teams tried approaches like this? How did it work?


Cool Research Computing Projects

A/B Street - Dustin Carlino

Not explicitly a research computing project, but it could be - a very fine-grained traffic simulation of Seattle, allowing “A/B” comparison of small street changes. While it is set up as a game, the idea is to enable citizen science, giving local neighborhoods the opportunity to virtually experiment with local changes and see their effects.


Research Software Development

Agile or Waterfall; a risk management perspective - Alfredo Motta

There’s been a lot written about agile vs waterfall, and most of us operate so much in an agile mode that we don’t really think about it any more. But I think Motta’s article is a very clear description of the what and why of the two approaches, and a good reminder that there are absolutely places in research computing where waterfall is the right choice.

The Chaotic/Complex/Complicated/Obvious distinction is useful:

The path is…     There are … practices for this work     The right approach is
Obvious          Best Practices                           Waterfall
Complicated      Good Practices                           Could go either way
Complex          Emergent Practices                       Agile
Chaotic          Only Novel Practice                      Agile

In research computing we’re often on the complex/chaotic side of things, but not always - if we’re setting up a new set of webpages, rolling out database migrations, setting up a new cluster, or designing processes for our teams to follow, there are best practices, and using those with a more waterfall approach can be perfectly appropriate. Most of us understand this instinctively, but it’s good to have these distinctions explicitly in mind.


Research Computing Systems

A Guide to Threat Modelling for Developers - Jim Gumbley

This is a great, detailed guide to running a threat-modelling session - and while the title says it’s for developers, almost all of it applies to clusters or to integrated systems; it’s very much worth reading. This definitely isn’t something you do by yourself: you get a lot of people in a room, or connected virtually, to brainstorm threats and make sure the entire system is well characterized.

I’d be interested to hear from people who have run such sessions — how did it go? What would you do differently?


Calls for Proposals

Virtual Event Awards - Code for Science and Society: Deadline 25 August 2020

If you’re thinking of putting together a training session for data science, especially on open datasets, this grant, for amounts ranging from $5k to $20k USD, is worth taking a look at.

Events must prioritize broadening diverse participation in research-driven open data science.


Events: Conferences, Training

AWS Cloud Containers Conference - 9 Jul 2020, 9am-6pm Pacific time, Free

An interesting-looking program, mainly AWS-focussed (Fargate, ECS, EKS) but also including a general Docker session and one on persistent storage.


Random

It’s always reassuring to hear about other teams’ computing and data integrity issues. Canadian Tire is a popular retail chain in Canada; in one geographic cluster of its stores, for a little while last week, every single purchase rang up at the cash as a Mr Potato Head due to a “downloading error”.

AWS has a new ML-powered tool that looks through your code for deploying on AWS and finds your most expensive lines of code - not expensive as in CPU- or memory-intensive, but expensive as in money.