Merge pull request #126 from ImogenBits/en_teaching
add english teaching concept
Benezivas authored Sep 28, 2023
2 parents cc35857 + d90bba5 commit 8b33fd4
Showing 3 changed files with 309 additions and 1 deletion.
305 changes: 305 additions & 0 deletions docs/teaching_concept/english.md
@@ -0,0 +1,305 @@
# Teaching Concept Lab Course "Algorithmic Battle"

!!! info "This page is also available in German"
    [Switch language :de:](german.md){ .md-button }

## Basics

### About this Document

This document describes a teaching concept that was developed
in university teaching. Its goal is to support teachers in using
the `algobattle`-tool as well as related software in their own
teaching. The focus here is on the teaching aspect. For
an explanation of the technical requirements as well as a manual,
please consult the [technical
documentation](https://algobattle.org/docs).

## Who is this Software Made for?

The `algobattle`-tool has been developed since 2019 by the Theory
Group of RWTH Aachen University for use in a software programming lab
course in the computer science bachelor's program. Although this is
reflected in the [collection of pre-made problems](https://github.com/Benezivas/algobattle-problems),
nothing speaks against using this tool in other study programs or in
late high-school teaching.

The project is thus targeted at teachers, for whom we want to ease the
work of establishing a competitive teaching experience. The highly
modular nature of the project allows you either to cover a wide range
of different topics or to focus on a specific one.

## What do I have to Know about Licensing?

The complete software and the associated documentation are published
under the free MIT license, unless stated otherwise. You can thus use,
modify, extend and redistribute them, even in a commercial context,
without asking for permission. We would, however, appreciate it if you
referenced us as a source when you do.

## How Many Students Can the Lab Course Accommodate?

We recommend splitting your students into groups; in our experience,
groups of six students work best. There can then be an arbitrary
number of such groups. Do keep in mind, of course, that the
supervisory effort grows with the number of students.
With two teaching assistants, we were able to manage 18 students
over the course of a semester, requiring no more than two work
days in total per week.

## Structure

### Basic Idea

The Algorithmic Battle, in its original form, had the aim of providing
students with a practical way to engage with typically theoretical
results from their lectures. Complexity theory is an important element
of computer science: it tells us that for many problems we should not
expect to find algorithms that are fast, correct and universal at the
same time.

This often neglects the fact that such problems do in fact need to be
solved, and indeed do get solved, in practice. Especially due to the
boom of so-called MIP solvers such as Gurobi, CPLEX and others,
practical optimization problems are being solved, and often very
quickly.

This observation rests on viewing a problem as the collection of all
its possible instances: for some problems, the hardness is
concentrated in a core formed by a subset of these instances. Many
other subsets, on the other hand, admit solutions that are very easy
to find, and these appear to correlate with the instances that occur
in practice.

### Teaching Goals

The focus of the lab course lies in letting the students
take both perspectives of the solving process. They should learn to
answer two questions:

- What are "hard instances" of a problem?
- How do I write algorithms that are fast and correct most of the time?

This means that students should write both the code of a `generator`
that creates hard instances (given an instance size), and that of a
`solver` that takes an instance as input and must output a good
solution within a given time limit.
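
To make this concrete, here is a minimal sketch of such a pair,
written in plain Python and independent of the framework's actual I/O
interface (the real interface is described in the [technical
documentation](https://algobattle.org/docs)). It assumes the `pairsum`
problem used as the primer later on: given a list of numbers, find two
disjoint index pairs with equal sum.

```python
# Illustrative sketch only; not the actual algobattle API.
# Problem assumed here: pairsum, i.e. given a list of numbers, find
# four distinct indices a, b, c, d with nums[a] + nums[b] == nums[c] + nums[d].
import itertools
import random
from collections import defaultdict


def generate(size: int) -> list[int]:
    # Placeholder "hard instance" strategy: random numbers from a large
    # range, so that equal pair sums are rare. A real generator would
    # encode a deliberately constructed hard instance instead.
    return [random.randrange(2**62) for _ in range(size)]


def solve(nums: list[int]) -> tuple[int, int, int, int] | None:
    # Hash all pair sums; report the first two disjoint pairs that collide.
    pairs_by_sum: dict[int, list[tuple[int, int]]] = defaultdict(list)
    for i, j in itertools.combinations(range(len(nums)), 2):
        s = nums[i] + nums[j]
        for a, b in pairs_by_sum[s]:
            if len({a, b, i, j}) == 4:
                return a, b, i, j
        pairs_by_sum[s].append((i, j))
    return None  # no solution exists in this instance
```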

Every team's `generator` is then matched against another team's
`solver`. In the classical setting, the question is up to which
instance size the `solver` of one team can still solve the instances
generated by the other team. The scoring then reflects
_how much better_ the winning team is than the other: the highest
instance size for which one team's instances could still be solved
is compared to that of the other team, and points are awarded
according to this ratio. To maximize this relative distance in scores,
it is thus important on the one hand to write a `generator` that
outputs hard instances, ensuring that the other team's `solver` fails
early. On the other hand, it is just as important to write a strong
`solver`, so as not to fail early on instances that should not be
hard to solve.
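
A sketch of how such ratio-based scoring might look; the exact formula
used by the framework may differ, this only illustrates the idea:

```python
def award_points(largest_a: int, largest_b: int, total: float = 100.0) -> tuple[float, float]:
    # Split `total` points between teams A and B in proportion to the
    # largest instance size each team's solver still managed to solve.
    # Purely illustrative; see the technical docs for the real scoring.
    if largest_a + largest_b == 0:
        return total / 2, total / 2
    share_a = total * largest_a / (largest_a + largest_b)
    return share_a, total - share_a
```

For example, if one team's solver still handles instances of size 30
while the other's fails beyond size 10, the points split 75 to 25.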

We consider it particularly encouraging that we place no limitations
on the external software used by the students (as long as it does not
result in legal issues). Since all student software is deployed in
Docker containers (a sketch of such a container follows the list
below), and since one is not artificially held back in real life
either, the students are actively encouraged to research their given
problem, related publications and software to use in their code. There
are only very few limitations that we actively enforce:

- Only use software if its license allows you to do so
- No colluding with or spying on other teams
- Do not exploit our framework software (see also `Bug Bounties`)
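
A hypothetical minimal container for a solver might look as follows;
the file names and the exact image layout the framework expects are
assumptions here, not its actual specification.

```dockerfile
# Hypothetical solver container; file names are illustrative.
FROM python:3.11-slim
WORKDIR /solver
COPY solver.py .
# External libraries are fine, as long as their licenses permit it.
RUN pip install --no-cache-dir networkx
ENTRYPOINT ["python", "solver.py"]
```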

We let the students develop their code in a version control system to
which we have full observer rights. The students should coordinate
their coding process and organize themselves, be it by simply talking,
creating issues, or creating branches and pull requests. With a full
view of the commit history, one can already see during the lab course
whether individual persons are not contributing substantially,
allowing you to intervene early.

We additionally require full documentation of the development
process. Every researched idea and every implementation, even if only
attempted, is to be documented. For this to work, we make every person
in a team responsible for coordinating the documentation once during
the semester. Every other person in the team is then responsible for
reporting to the documenting person what they have done.
The coordinating person is thus not obliged to nag their team members:
those who do not report their progress will not appear in the
documentation and are assumed not to have contributed anything until
proven otherwise.

### Sample Structure of a Semester

We next describe the structure of a typical semester-long lab
course at RWTH Aachen University. This structure is of course
not binding in any way, but it has resulted in very positive
feedback from the students over the past years.

Every semester starts the same way. Shortly before or right at the
start of the lecture period, we host a kick-off meeting in which we
inform the students about the format and structure of the lab course.
We then find a regular weekly time slot for the joint meeting
and assign the students to groups of six. We make a point of
placing at most three students who already form a peer group into
the same team, to lower the risk of exclusionary behavior.

The first task, which we use as a primer, is always the same: the
`pairsum` task, which can be found in the `algobattle-problems`
repository. The points achieved in this task do not count towards the
overall ranking; the task is meant to familiarize students with the
framework, our software and the format of battles. We do, however,
insist on the documentation described above. As in all subsequent
tasks, after a week we start scheduling daily automated battles with
the current state of the students' software, to give them feedback on
how well or poorly their code performs against the code of the other
teams.

At the end of this task, and of every following one, we hold a
concluding meeting. In this meeting, we ask two randomly chosen
students to freely present the ideas of their `solver` and `generator`
to the rest of the students. We expect every student to be informed
about their software and the development process, which we try to
enforce through this random selection.

We then present the overall results of all battles, talk about
possibly collected `Bug Bounties` and discuss in an open round
what the students have learned in the past two weeks. Finally,
we present the next task.

The cycle is then always the same: after receiving the description of
the next problem, the groups research and implement for a week.
After this week, we meet with every group separately to discuss their
ideas and implementations, as well as possible problems. After this
meeting, we schedule daily battles until the final meeting. The first
of these battles does not award points, to mitigate the consequences
of implementation oversights that result in crashes.

In a typical semester, there is time for six to seven tasks of this
format (`pairsum` included). We recommend sticking to six tasks and
using the remaining time as a buffer. This puts less pressure on the
students and matches the number of documentations to be created.

Regarding the teaching goals of individual tasks, we roughly follow
this scheme of task types:

0. `pairsum` as a warm-up task
1. A classical, NP-complete problem, to encourage research
2. A problem in P, for heavy optimization of data structures and algorithms
3. An approximation problem, with a required approximation ratio that should rule out polynomial-time algorithms
4. A non-classical, NP-complete problem, to encourage the use of MIP solvers
5. A wildcard problem, e.g. with strongly limited memory, a problem in FPT, ...

After the last task, the lab course ends, coinciding with the end of
the lecture period. Afterwards, the grading process starts.

## Grading of Students

If you would like to grade the participants of the lab course, we can
give a few pointers on what this grading could be based on. We already
check the following criteria in the middle of the semester, to spot
and help individuals who are struggling at that point.

### Overall Quality of the Software

This gives a first impression for grading the group. The central
question is: how much effort was put into the code over the course
of the semester? A group that tends to perform badly in battles, but
researched and implemented many different approaches, should in our
opinion not automatically be graded badly.

### Documentation

The person responsible for the documentation is of course reliant on
the quality and content provided by the other group members. It is
usually clearly visible how much each member contributed during
each week. We grade students who exclusively focused on one kind of
work throughout the whole semester, e.g. _only_ implementation or
_only_ research, worse than others.

### Talks

If a student was clearly badly prepared for giving a talk, this
results in a worse grade. Bad preparation commonly coincides
with little participation in the task.

### Implementation

Since this is a software lab course, we do not let anyone pass who did
not contribute to the implementation work, be it through pair
programming or individual contributions. This is easy to check via
the commit history of a group.

## Further Uses of the Software

We want to emphasize that the `algobattle`-tool was written with
modularity in mind, regarding both the nature of tasks and the inner
workings of battles. While we, as the original authors, have a strong
bias towards problems from theoretical computer science, nothing
speaks against other types of battles and problems.

### Other Types of Battles

So far, we have assumed in our description that a `generator` and a
`solver` battle on ever-increasing instance sizes until one of them
fails, and that points are awarded based on the relative distance
between the largest solved instance sizes. This is, however, only the
default `battle type`, called the `iterated` type.

Another `battle type` shipped with the software is the `averaged`
type, in which instances of a single fixed size are to be solved a
given number of times within a time limit. Points are then awarded
based on the average solution quality of a solver.

We encourage you to create your own `battle type` to model different
abstractions of `generator`s and `solver`s fighting one another; a
conceptual sketch of the two shipped types follows.
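
This sketch models only the control flow described above, not the
framework's actual battle API; the problem-specific pieces are passed
in as callables.

```python
from typing import Any, Callable


def iterated_battle(
    generate: Callable[[int], Any],
    solve: Callable[[Any], Any],
    is_valid: Callable[[Any, Any], bool],
    start_size: int = 1,
) -> int:
    # Grow the instance size until the solver fails; return the largest
    # size it still solved (the number compared between teams).
    size, largest_solved = start_size, 0
    while True:
        instance = generate(size)
        solution = solve(instance)
        if solution is None or not is_valid(instance, solution):
            return largest_solved
        largest_solved, size = size, size + 1


def averaged_battle(
    generate: Callable[[int], Any],
    solve: Callable[[Any], Any],
    quality: Callable[[Any, Any], float],
    size: int,
    rounds: int,
) -> float:
    # Solve `rounds` instances of one fixed size and average the
    # solution quality (e.g. an approximation ratio) achieved.
    total = 0.0
    for _ in range(rounds):
        instance = generate(size)
        total += quality(instance, solve(instance))
    return total / rounds
```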

### Other Types of Tasks

In most of our [sample tasks](https://github.com/Benezivas/algobattle-problems), the input
and output of `generator`s and `solver`s are simple JSON files
containing text. The I/O specification does, however, allow arbitrary
file and folder structures to be passed. Thus, other files, such as
multimedia content in the form of audio or images, can be specified
as the input and output of `generator`s and `solver`s.
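
For illustration, an instance/solution pair for a `pairsum`-like task
could be as simple as the following; the field names here are
hypothetical, as the actual schema is defined by each problem.

```json
{
  "instance": { "numbers": [1, 6, 9, 14] },
  "solution": { "indices": [0, 3, 1, 2] }
}
```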

## Miscellaneous

### Bug Bounties

We encourage our students to attack and exploit the
`algobattle`-framework as well as the code of the tasks that we
provide them. We are interested in inputs that allow students to crash
our code or the `solver`s and `generator`s of other teams (independent
of their contents), or even in exploits of the framework that grant
access to the code of other groups. As mentioned before, we demand
that these bugs not be exploited in order to achieve a better
standing. We do, however, award additional points for finding and
reporting bugs in a reproducible manner. Only the first group to find
and report a bug is awarded points for it; the number of points is
based on the severity of the bug found.

The big advantage of this approach is that it rewards the natural
curiosity to explore and exploit systems in a constructive way. A
positive side effect is that we can then implement test cases for the
bugs that were found.

### Known Issues During a Lab Course

At some point, students learn that MIP solvers exist. For many
problems this is disadvantageous, as these solvers are rather
efficient in practice and do not encourage further research. We thus
recommend creating tasks that make MIP solvers, or other kinds of
efficient general-purpose solvers, unattractive. In the case of MIP
solvers, it commonly also helps to reduce the amount of available
memory (for example via the container memory limit), as they often
require a lot of it.

### Additional Resources

- [Official Website](https://algobattle.org)
- [algobattle-Framework (Github)](https://github.com/Benezivas/algobattle)
- [Collection of problems for algobattle (Github)](https://github.com/Benezivas/algobattle-problems)
- [Web-framework for algobattle (Github)](https://github.com/Benezivas/algobattle-web)
- [Technical Documentation](https://algobattle.org/docs)
3 changes: 3 additions & 0 deletions docs/teaching_concept.md → docs/teaching_concept/german.md
@@ -1,5 +1,8 @@
# Lehrkonzept Praktikum "Algorithmic Battle"

!!! info "Diese Seite ist auch auf Englisch verfügbar"
[Sprache wechseln :gb:](english.md){ .md-button }

## Grundlegendes

### Über dieses Dokument
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -120,5 +120,5 @@ nav:
- api/battle.md
- api/problem.md
- api/util.md
- Teaching Concept 🇩🇪: teaching_concept.md
- Teaching Concept: teaching_concept/english.md
