
Retrospectives


Retrospective protocols

27.01.2020 (@rrahn)

1. Release retrospect

  • celebrate the release 🍾
  • involve the entire team in the release process
  • keep the changelog up-to-date (make it part of the Definition of Done)
  • release more often
  • automate release process as much as possible
  • themed releases, e.g. the next release could focus on search and search-related functionality; this has the advantage that breaking changes are likely limited to a single module, which also receives many improvements at once; less scary for users.

About release notes:

  • good release notes matter; write them in essay form and add some jokes
  • readers bond with the project more strongly when they feel that the release is really about them
  • highlight the fixed issues that were reported by external people
  • in the issue ticket, write the description such that it can be reused in a release note; show some code snippets as if it were documentation (see the sketch after this list); label these features to mark them for a release note; on release, gather them and apply only some general editing
  • describe the problem in non-technical language -> a good starting point for a user story; helps other team members tackle the issue and implement a solution
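
For illustration, a documentation-style snippet for a release note might look like the following (a minimal sketch; the file name is a placeholder, and the exact SeqAn3 headers/API may differ between versions):

```cpp
#include <seqan3/core/debug_stream.hpp>
#include <seqan3/io/sequence_file/input.hpp>

int main()
{
    // Read a FASTA file; each record decomposes into sequence, ID and quality.
    seqan3::sequence_file_input fin{"example.fasta"};

    for (auto & [seq, id, qual] : fin)
        seqan3::debug_stream << id << '\t' << seq << '\n';
}
```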

2. Project Milestones

February

  • 24.: BIOSTEC Conference

    • new training material for filtering/processing bioinformatic files (BioC++ curriculum)
    • comparison to BioPython, BioJS, BioPerl, Seq, BioJulia etc.
  • 29.: modernizing apps with SeqAn3

    • Lambda almost done
    • next application to modernize?
    • setup infrastructure for future applications
    • probably have time until 15.06. (next report)

March

  • 22.-28.: Developer retreat
    • organise talks
    • plan work to do on retreat
    • integrate external projects

April

  • 3.: KNIME Spring Summit: Variant Calling Pipeline
    • prepare tutorial with KNIME

July

  • ISMB BioC++ course
    • refined material/new material for the BioC++ course
    • prepare course with participants

September

  • 1.: support for additional workflow systems
    • finish support for KNIME (CTD)
    • add support for CWL
    • maybe make the argument parser a separate project
  • 5-9.: ECCB BioC++ course
    • new material for distributed search
    • compare with solution that uses simple splitting of indices
    • what else should be supported or demonstrated:
      • updates to database (insert?, erase?, edit?)

3. Roadmap

Application/Feature overview

| application | search | alignment | range | i/o | argument parser |
| --- | --- | --- | --- | --- | --- |
| lambda | sdsl-bitvector; sdsl sentinels; epr dict; search interface; performance | vectorised (banded) aa alignments | ? | faster seq_io; fast align_io; blast out | CTD support for nested arg_parser |
| rna_aligner | $lambda; adapted search interface for rna structure profile | vectorised alignment with PSSM; vectorised (banded) Myers/Gotoh; msa | ? | fast seq_io; fast align_io; msa formats | $lambda |
| archive indexer (DREAM-index) | $lambda; IBF; nested IBF; partitioned IBF; JST search | vectorised (banded) Myers/Gotoh; wavefront alignment | journaled string; journaled string tree | fast seq_io; fast align_io; JST format | $lambda |
| rna mapper | $lambda | vectorised (banded) Myers/Gotoh; wavefront alignment | ? | fast seq_io/align_io | $lambda |
| igenvar | ? | breakpoint refinement: vectorised (banded) convex; wavefront | bam utility views? | fast bam in; bam index; tabix; annotation format (vcf/bcf) | $lambda |

Setup Application infrastructure

Motivation: User satisfaction

  • the tools should feel like one suite
  • applications have same user interface
    • similar options have same name
    • similar options are placed under same sections
    • similar sub parsers have same name
  • same repository structure
  • same project readme: badges, platforms, versions etc.
  • tutorials that explain the usage
  • possibly how-tos for moving from a competing application to this tool
  • API doc for an application

Task: Setup infrastructure [@marehr, @joergi-w]

  • template git repository to copy from
  • main with argument parser setup (see the sketch after this list)
  • setup CI with GitHub actions
  • setup nightlies
  • unit test infrastructure
  • test coverage
  • API documentation -> methods in the app should be documented as well
  • micro benchmarks
  • macro benchmarks
  • build dependency on the library -> when the library builds successfully, build the applications -> if they build through, deploy a nightly snapshot
  • GitHub pages with corporate design
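
A minimal sketch of such a main with argument parser setup (the application name and the input option are placeholders; the concrete exception type to catch depends on the SeqAn3 version):

```cpp
#include <filesystem>
#include <iostream>

#include <seqan3/argument_parser/all.hpp>

int main(int argc, char ** argv)
{
    // "app_template" and the input option are placeholders for the real application.
    seqan3::argument_parser parser{"app_template", argc, argv};
    parser.info.short_description = "Template application with the shared user interface.";

    std::filesystem::path input_path{};
    parser.add_option(input_path, 'i', "input", "Path to the input file.");

    try
    {
        parser.parse(); // Validates the arguments and handles --help/--version.
    }
    catch (std::exception const & ext) // The concrete SeqAn3 exception type varies by version.
    {
        std::cerr << "[Error] " << ext.what() << '\n';
        return -1;
    }

    // ... application logic goes here ...
    return 0;
}
```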

Task: Search module [@eseiler, @svnbgnk, @MitraDarja]

  • familiarise with SDSL and open tasks: EPR dict, sentinels, bit vector with wasting bits, ...
  • finalise the search interface (see the sketch after this list)
  • performance benchmark of search
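
For orientation, a minimal sketch of the search interface as it roughly looked at the time (the text and query are made up; header locations and the exact representation of hits shifted between SeqAn3 versions):

```cpp
#include <string>

#include <seqan3/core/debug_stream.hpp>
#include <seqan3/search/fm_index/fm_index.hpp>
#include <seqan3/search/search.hpp>

int main()
{
    // Build an FM-index over a (made-up) text.
    std::string text{"mississippi"};
    seqan3::fm_index index{text};

    // search() returns a lazy range with one hit per occurrence of the query.
    auto results = seqan3::search(std::string{"issi"}, index);
    seqan3::debug_stream << results << '\n';
}
```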

Task: I/O module [@smehringer, @rrahn, @simonsasse]

  • investigate compile time
  • investigate performance (see the sketch below)
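
One way to investigate the runtime side is a micro benchmark; below is a minimal sketch using Google Benchmark (the file name is a placeholder, and this specific benchmark is an illustration rather than an existing one):

```cpp
#include <benchmark/benchmark.h>

#include <seqan3/io/sequence_file/input.hpp>

// Measures how fast seqan3::sequence_file_input iterates over all records.
// "benchmark.fasta" is a placeholder for a representative test file.
static void bm_sequence_file_read(benchmark::State & state)
{
    for (auto _ : state)
    {
        seqan3::sequence_file_input fin{"benchmark.fasta"};

        size_t record_count{};
        for ([[maybe_unused]] auto & record : fin)
            ++record_count;

        benchmark::DoNotOptimize(record_count);
    }
}

BENCHMARK(bm_sequence_file_read);
BENCHMARK_MAIN();
```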

4. Modus operandi

  • work in sprints
  • regularly meet to plan upcoming sprints
  • feature/task description must be self-explanatory so that everyone in the team can work on them
  • in sprint each team member pulls work from the list of tasks
  • each team member should not always take the easiest path but sometimes choose a ticket from a less familiar module or field of expertise

Definition of Done (DoD)

A feature is ready when all these points have been checked:

  • fully implemented and approved
  • unit tests pass
  • test coverage > 97%
  • micro benchmark
  • api documentation
  • tutorial/teaching material added
  • macro benchmark with <= 10 % performance difference to SeqAn (if applicable)
  • tests should compile in less than 30 seconds
  • changelog entry

5. Miscellaneous

  • Openly communicate in the team if a retrospective is missed or if a topic was not dealt with
  • Hold regular retrospectives
  • Try NextCloud and Exchange as possible team calendar

18.11.2019 (@rrahn)

TOP 1: Pull requests - what did we observe?

  • Lengthy discussions on GitHub -> maybe pair programming?
    • Naming
    • Design decisions
    • A lot of different opinions
  • Non-full-time employees have fewer days on which they can review
  • Too many review cycles where we find new stuff
    • 1-2 days of waiting
  • Certain reviews take long if only one person is available
  • A rebase often makes the previous changes no longer visible in the review
  • Long periods of time between review and applying changes

Suggestions for improving:

  • Limit the number of review rounds! No more than 3.
  • Use new commits for review requests and squash them later!
  • First look into the PR and make your notes, then meet the developer one-on-one if it is a big review or if you feel a lot of things need to be clarified.

Other suggestions:

  • Script to show how many reviews a person has
  • Reassign if you feel you don't have the time
  • Communicate who is available for reviewing (maybe on Gitter)
  • Use Gitter more often for group communication
  • Mark conversations as resolved on GitHub once they have been addressed

TOP 2: Weekly todos

  • Keep them as the group feels they are beneficial
  • Track each person's todos in Trello

04.11.2019 (@rrahn)

Present: 5 members

  • Summary of the first iteration:

    • 26 planned tasks
    • 10 active tasks
    • 4 closed
  • The team agreed that @rrahn takes on the role as an agile coach for the team.

    • This means that he is responsible for the execution of the agile environment and for consultation during the team's transformation to become more agile.
  • Feedback of first "Story Gatherings":

    • In general there was a really positive effect and every member in the team had a good impression of it.
    • There was some discussion: some members who only recently joined a specific project felt that this was the first meeting in which the clear picture, vision and reasoning for the project were actually communicated. Accordingly, the meetings were quite important, though less as story-gathering meetings and more as kick-off meetings.
    • A detailed discussion with the project members of the team after the meeting led to the first real initial stories, which can now be refined further.
  • General feedback:

    • There is a big concern within the team that our PRs take quite long.
    • Looking back at the PRs closed within the last 30 days, it took 67.8 days on average to close them.
    • The following list identifies some of the reasons:
      • Too much focus on code formatting, naming issues, etc. (Is there a lack of a common coding/naming style? Is there missing domain or technical knowledge?)
      • Too big. (What is a reasonable size? What would be a good measure, e.g. LOC?)
      • Too many rounds of reviews. (Too big? Not focused enough during the review? Missing technical/domain knowledge?)
      • No response in time. (which channels to communicate? when to look into a PR?)
      • Hard to track changes. (Squashed/force-pushed commits)
    • The team decided to actively protocol how PRs are reviewed during the next iteration to get a clear understanding of what takes how long: When does it happen? Why might it happen? Who was the reviewer? (The last question is not meant to blame anyone, but to figure out whether large gaps in domain/technical knowledge result in long discussions or many changes.) How long do you need to wait between re-requesting a review and receiving the new one, etc.

Tasks for next iteration

  1. Discuss the topic for the next iteration and define tasks for it.
  2. Have another story refinement session to work out more fine grained stories (not tasks!) from the initial stories.
  3. Every team member should actively protocol why PRs take so long until they can be merged (see the general feedback above):
    • Seen from the perspective of the reviewer.
    • Seen from the perspective of the reviewed person.