Hi - I hope you’re enjoying your August!
One of the things I’d like to do, now that I’m back from a couple of weeks off from the newsletter, is to start being systematic about continually speaking with new research computing and data team managers, leads, PMs, etc. I’d like to talk to fifty or so and make sure I know the problems they have faced, the problems they are facing, and their successes. I’ll write up the aggregated, anonymized results and circulate them to participants first. I have five or six questions I’d like to ask them (and maybe you?) over a 20-30 minute zoom chat:
If that’s something you’d be up for, or you know someone else who’d be game, let me know - hit reply or email me at [email protected]!
There’s a reason I want to be more deliberate about speaking with a broader range of research computing and data managers and leads, and not just the ones I’m already working with. Whenever I do, people raise important topics that we need to talk about more as a community.
One person I spoke to last week raised the fundamental issue of internal knowledge sharing. It’s so easy, they explained, for individuals or subteams to learn things in the course of their work that would be useful more broadly, but to have no real incentive or mechanism to share them with others - or even to realize that it would be important to do so.
This isn’t a small thing; it goes to the heart of what we do and how we operate as teams of experts that support research. I can’t emphasize enough how important it is to face this problem head-on. It matters particularly for teams where individuals provide services to research groups (e.g. research facilitators, ARC support staff, informaticians, data scientists, software developers, …). If we as managers and leads don’t do anything about this, we end up less with a team than with a bunch of individuals who have each learned cool things.
We want our team to be a centre of excellence, not a temp agency.
And our institution needs us to be a centre of excellence, too.
There’s nothing wrong, of course, with temp agencies. They serve a real need, even in or around our line of work - heaven knows there’s no shortage of software development or bioinformatics agencies. But if what our institution wanted was a supply of temporary staffing for individual efforts, HR could run it, or it could be contracted out.
Key to operating as a centre of excellence is a practice of continually growing and developing a shared pool of knowledge. The skills, techniques, and knowledge we learn as practitioners in research computing and data are widely transferable, across disciplines and problem types. And they combine to be more valuable than the sum of their parts; problems that aren’t tractable with technique A or approach B individually can suddenly yield to A + B.
Sharing knowledge across our organizations is crucial to their becoming centres of excellence. It’s better for the professional development of individual team members, and it’s much better for the team as a whole. Team members’ knowledge grows more valuable when they can see how it connects elsewhere, and how it could be combined with something someone else has just learned. Team members discover areas where they can collaborate with each other, or who can help them with their efforts. As a side effect, it also mitigates the risk of institutional memory loss when a team member leaves.
Internal knowledge sharing can happen sort of organically and accidentally when there’s a very small number of team members. This is especially true when they are co-located. In larger groups or with people working from different locations, we have to actively nurture it, and put structures in place to encourage it.
There’s no one technique for promoting and enabling internal knowledge sharing. Instead, there’s a toolbox of methods from which we can assemble bespoke practices that work for our teams. Documentation can be great, particularly if we can take advantage of other processes (like reports to the research groups). Internal chat and communications can be collected. Journal-club style meetings can be held. We can routinely give short talks to each other. Each has its place, and some will be more natural fits for the culture of a team than others.
I do want to highlight some advantages of giving short (~10-15 minute) talks to each other, though. Routinely giving short talks to a friendly audience (and getting feedback on them from peers and their manager) is a terrific professional development practice, especially for junior team members. For more senior staff, talks this short aren’t too onerous to prepare. They line up well with the culture of research institutions, so many staff are pretty comfortable with the idea of both giving and attending the talks, and they can be incorporated into other meetings, be stand-alone, or combined into short mini-conferences.
Whatever materials we generate for internal knowledge sharing may be very valuable for sharing externally. After all, another key piece of becoming a centre of excellence is not just nurturing knowledge internally but disseminating it. Short talks can often form the heart of a talk at a local department’s colloquium or seminar series, or at a conference. Internal write-ups can often be polished into posters, blog posts, or conference abstracts. 10-15 minute talks are also a really nice length for sharing and distributing externally.
What approaches have you seen work (and not work!) for internal knowledge sharing? Hit reply or email me at [email protected].
And with that, on to a short, getting-back-into-it, roundup!
Understanding Factors that Influence Research Computing and Data Careers - Chaudhry et al, PEARC22
I’m still working my way through PEARC22 papers - this one reports on a survey of 225 research computing and data staff. It’s nice to see some data specifically about our community.
There is some interesting data here about what matters to RCD staff; for instance, the top three factors that respondents considered as being important ways their advancement should be recognized were:
And the top four opportunities that mattered for considering a job move were:
The importance of being recognized and of making a real impact comes through pretty clearly, but I think it’s not something that comes up enough when we talk about the difficulties of hiring and retention. We focus on the elephant in the room, which is salaries, but our ability to influence those is modest at best. On the other hand, we have an enormous ability to influence whether our team members feel recognized, can see the impact of their work, feel like they’re on a team that embraces innovation, and get as much professional skills development as possible.
There’s also a nice chart in here about hiring managers, and the factors they take into account when hiring new team members. Technical skills were still number one, but close behind were “interpersonal, communication, and related skills”. It’s good to see progress!
A survey of research quality in core facilities - Kos-Braun, Gerlach, and Pitzer, eLife
As a community, we in RCD could learn a lot from our colleagues who run core facilities of all sorts - like us, they run equipment, provide expertise, do consulting for researchers, and have as an ultimate goal accelerating and amplifying research impact for their clients. As groups with, in some ways, a longer history than our own, core facilities are a little more mature in thinking about ways to operate, sustainability, and more.
There are also a lot more of them - this paper sent surveys out to 1,000 core facilities in Europe. I found this paper when a reader shared a more recent follow-up paper from this group. Here they take a look at the practices of the 253 core facilities that responded, and how those practices supported consistently high research quality. I think some of the issues they see would be pretty familiar to us, even if the words used are a little different - for instance, not having a good sense of (or much control over) the quality of the work being done on their infrastructure, while worrying about push-back if they tried to enforce good practices. Whether that’s processing contaminated samples or running code in ways it wasn’t meant to be used, the basic tensions are the same.
They also surveyed the challenges faced by core facility leaders - areas that needed to be improved. High on the list were the need to better train, advise, and communicate with users; challenges with finding, keeping, and having the resources to hire enough qualified staff; keeping infrastructure up to date and maintained; and quality control and good scientific practice - again, these probably sound fairly similar! Management was another issue seen as a key area to improve (second place!), but, sadly, its importance was ranked much lower (second from the bottom, in fact). Obviously I think that’s a mistake - better management makes the other problems more tractable.
When asking about challenges, they dug deeper into what’s usually everyone’s knee-jerk response - “funding” - and I really appreciate that. There’s no team or organization anywhere, in any sector, that doesn’t want more funding; they could all accomplish more given more resources. But too often we don’t dig any deeper. Lack of funding is a situation; it causes specific problems. Below, they have a nice flowchart of the issues respondents saw that they attributed to the top-level cause of “funding”.
A Simple Way to Introduce Yourself - Andrea Wojnicki, HBR
For earlier-career managers who suddenly find themselves in meetings with a number of stakeholders, introducing yourself can be a little daunting. Wojnicki offers a nice, simple, professional script for introducing yourself at meetings:
e.g. me introducing myself on the calls I mention above might sound like:
“Hi, I’m Jonathan! (1) I help research computing and data managers, new and experienced, with the challenges of these complex roles. (2) I’ve worked in a number of RCD teams myself in the past, and have worked with many RCD staff and managers of all levels and disciplines. (3) I’m looking forward to hearing from you today about your experiences!”
How do I make sure my work is visible? - James Stanier
The things you have to do to manage thoughtfully and deliberately - which requires setting some intentions and keeping track of what’s happening - are also the things you need to do to make your work visible to your stakeholders and boss. As Stanier says:
What was even worse was that I was doing a bad job at making my work visible to myself.
If you yourself can’t quickly describe what you’ve been doing and accomplishing, how could your boss or stakeholders possibly know?
Stanier suggests a brag document, but anything that you’re doing to keep track of your initiatives as a manager, see how they’re succeeding or not, and decide next steps will be a perfectly good starting point.
The hugely revamped GitHub Projects is now in GA - this, along with Discussions, increasingly makes GitHub a possible all-in-one home not just for the developers on a product but for stakeholders and users as well. Has anyone been using Projects in beta? Or Discussions? I’d love to hear your experiences.
One thing that RStudio has always had that Jupyter notebooks haven’t is clear off-ramps for getting initial exploratory code cleaned up, under tests, into version control, and packaged up. So I’m really curious about fast.ai’s nbdev. Is this something anyone’s played with?
No observability without theory - Dan Slimmon
Slimmon’s point here is a great one and widely applicable, not just to large compute systems. Empirical measurements are great - as an ex-scientist, I’m a big fan - but for all but the simplest systems, they have to be interpreted through theory. This is why so many dashboarding projects fail, or why collecting metrics isn’t as useful as it sounds. Without a clear (and shared!) mental model of the system under observation, it’s hard to have any common understanding of what the data means.
Fascinating. Apparently you can ssh-tunnel UNIX sockets (!!), and Vanessa Sochat is using that to try to make running web apps on HPC clusters much easier.
Moving Networks Forward with Digital Twins - Jeffrey Burt, The Next Platform
Given I have basically no networking expertise, it may seem odd, but I’m unreasonably excited about the growing movement towards simulations of both wide-area and data-centre-scale networks. Yes, or “Digital Twins”, if you must.
This article focuses mainly on wide area networks, but to me the exciting piece is the same approach for complex data centre networks, which frequently grow in an ad-hoc way, accreting new systems in ways that cause hard-to-predict problems. Making it easier to simulate “what-if” scenarios ahead of time, or even have ongoing simulations informed by real network conditions to give a heads-up of what might be about to happen, seems like it would be extremely useful.
This is a bit long, but a lot of random stuff has happened in the last three weeks!
A story of using DALL-E 2 to generate a logo for a project - this is a great way to generate ideas for a logo or other visual even if you have a human create the finished product. For that idea generation phase, the free Craiyon tool may be enough.
Create a virtual sqlite table that calls a JSON API for the return values with sqlite-http.
Learn SQL by solving a murder mystery.
Janet Jackson’s “Rhythm Nation” could crash laptops even by being played near them, because a note in the song had a resonance with a particular model of 5400 rpm laptop hard drives.
Easier ways of crediting co-contributors on GitHub.
Since we last spoke, GitLab announced and then immediately backtracked on automatically deleting free-account repos if they hadn’t been touched in a year. It’s great that they walked that back, but… uh, who is making these decisions at GitLab?
A simple but comprehensive engine simulator that correctly generates engine sounds.
In a development that will infuriate absolutely everyone, wisp is a lisp with hardly any parentheses, using Python-like meaningful whitespace instead.
This is really cool - docker-slim will run and analyze your container, decide what doesn’t need to be there, remove unneeded files, and generate security profiles for what remains.
Valgrind is 20 years old - here’s a retrospective.
Friends don’t let friends make these visualization mistakes.
Linux micro VMs on M1/M2 Macs with krunvm.
Frameworks for games and agent-based simulation are getting closer and closer - Flecs is for games but could easily be interesting for single-node agent simulations.
21 pages(!!) of C and C++ incompatibilities.
Meta is releasing Docusaurus v2, a kind of Sphinx for Markdown that also allows directly embedding React components.
Nice set of HTML + CSS tutorials for those new to front-end work, taking modern techniques like flexbox and web fonts as a given.
The case for PubPub for online-only journals.
And that’s it for another week. Let me know what you thought, or if you have anything you’d like to share about the newsletter or management. Just email me or reply to this newsletter if you get it in your inbox.
Have a great weekend, and good luck in the coming week with your research computing team,
Research computing - the intertwined streams of software development, systems, data management and analysis - is much more than technology. It’s teams, it’s communities, it’s product management - it’s people. It’s also one of the most important ways we can be supporting science, scholarship, and R&D today.
So research computing teams are too important to research to be managed poorly. But no one teaches us how to be effective managers and leaders in academia. We have an advantage, though - working in research collaborations has taught us the advanced management skills, but not the basics.
This newsletter focusses on providing new and experienced research computing and data managers the tools they need to be good managers without the stress, and to help their teams achieve great results and grow their careers.