
Welcome to the amateur contesting standard wiki!

This Wiki is currently editable by anyone with a GitHub account. It is intended to shape the discussion surrounding the introduction of a shared, open Amateur Radio contesting standard that can serve contest organisers, contest software developers and contest participants, as well as Amateur Associations and Peak Bodies.

Audience

The users of this standard are represented across the amateur radio community in many different roles. It's intended that a contest definition can be utilised by all of the following:

  • Organisers - people who validate the rules, manage, coordinate, publicise and promote an amateur contest.
  • Participants - amateurs who use the rules to operate their amateur station to make contacts in a contest.
  • Software - tools that use the rules to assist with creating valid logs ready for submission.
  • Scoring - tools that use the rules to process submitted logs to create a valid score.
  • Associations - groups of amateurs who define lists like the definitive DXCC list, or a list of power levels.

Rationale

Amateur Radio Contesting is an activity enjoyed by many members of the amateur community.

In a typical contest an organiser conceives of a contest, devises a set of rules, writes them down in human readable form and publishes them as a document or website (or both).

The application developer for contesting software reads those rules and interprets them to implement a programmatic version within their software. Once the software is updated, the developer publishes their application.

A contester downloads the latest version of a contesting application and participates in the contest, using the combination of the human readable rules and the software's interpretation of those rules to make contacts. After completion, the contester submits their log to the organiser.

The organiser processes the log to arrive at an official score. The scoring is either handled manually, by interpreting the human readable rules, or processed by software using an implementation of the rules that is arrived at separately from the one built by the contesting application developer.

This means that there are at least two software versions of the rules, but in reality there are many more, each derived from a different application by a different developer who supports a particular contest.

None of these interpretations are validated by the organiser and there is no mechanism to update the rules without involving the software developers of both the contesting application(s) and the scoring application.

Abortive and incomplete attempts at streamlining this process exist in the popular N1MM and tlf contesting tools, which support a limited form of user defined contests (UDC), but even N1MM includes this disclaimer in its documentation:

While UDC files give a user extensive options for adapting N1MM Logger+ to the rules of many contests, they cannot provide the same control that a programmer has, or that may be required to fully implement them. For example, a contest defined with a UDC file can only give a fixed number of points per QSO. Many contests (WPX is a good example) are more complicated. Also, in many contests the rules are different for contestants from one area than for those from another – a good example is QSO parties with different rules for in-state and out of state participants. In those cases, it may make sense to create two UDC contests, for use by in-area and out-of-area contestants.

The aim of the standard being developed here is to overcome these limitations by making the standard itself provide the same control that a programmer has.

By using an open, shared standard, user defined contest files could become universal, shared and documented for the entire community.

Methods

Contest definitions currently exist only in human readable (and not computer readable) form. Limited attempts have been made to create keywords to capture the infinite variety of contest rules, but this results in a perpetually incomplete list. Instead, this project aims to develop a set of requirements for defining and scoring a contest. The underlying idea is that any contest can be described using this standard in a form usable by both humans and computers. It's envisaged that this is achieved by implementing a contest definition as a programming language file that can be loaded by software and executed to return computer readable results.

Furthermore, the contest definition files are expected to be relatively simple documents that can be shared via web/email and stored on version control systems like GitHub, making it possible to exchange and update contest files for all users with significantly less effort than is currently required.

The two examples shown below are trivial. They're implemented using JavaScript and are not intended to be interpreted as the standard at this time. They show what a definition might look like, how it interacts with any software and how readable it might be for humans.

Example 1: contest_time

The time when a contest runs is part of every contest rule set. Software uses this to determine if logging can start, or, when checking scores, whether a log entry is valid from a time perspective. A contester requires the time to know when to fire up their radio to get on air and make noise.

function contest_time () {
	// Contest start and finish times, expressed in UTC (ISO 8601)
	return {
		'start': "2022-10-29T00:00:00.000Z",
		'finish': "2022-10-30T00:00:00.000Z"
	}
}

Implemented as a (for example) JavaScript function, this rule gives all software the same method to derive the contest times.
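
To illustrate how software might consume this, here is a sketch of scoring software using contest_time() to decide whether a logged QSO falls within the contest period. The helper name qso_in_contest_window is hypothetical and not part of any proposal.

function qso_in_contest_window (qsoTimestamp) {
	// qsoTimestamp is an ISO 8601 string taken from a log entry
	const { start, finish } = contest_time();
	const qso = Date.parse(qsoTimestamp);

	// Valid if the QSO falls between the contest start and finish times
	return qso >= Date.parse(start) && qso < Date.parse(finish);
}

For example, qso_in_contest_window("2022-10-29T12:34:00.000Z") returns true, while a QSO logged a day after the finish returns false.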

Example 2: valid_contest_band

Every contest defines which bands are permitted and which ones are not. Contesting software needs to alert the contester if they're operating on an incorrect band and scoring software needs to handle log entries that are on an invalid band. Here is an (again JavaScript) example that returns true if the current band (on the radio, or in the log, or on a computer display) is valid for this contest.

function valid_contest_band (currentBand) {
	const contestBand = new Set();

	// List of the bands that are permitted in this contest
	contestBand.add(1.8);
	contestBand.add(3.5);
	contestBand.add(7);
	contestBand.add(14);
	contestBand.add(21);
	contestBand.add(28);
	
	// Return true if the current band is valid
	return contestBand.has(currentBand);
}
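
To show how a complete definition file might be consumed, the sketch below has host software loading a definition and calling both functions. It assumes the definition is distributed as a Node.js style module that exports its functions; the file name my_contest_definition.js and the export mechanism are illustrative only, not part of any proposal.

// host_software.js - a minimal sketch, assuming the definition file
// exports contest_time and valid_contest_band as a Node.js module
const contest = require('./my_contest_definition.js');

// Ask the definition when the contest runs
const { start, finish } = contest.contest_time();
console.log(`Contest runs from ${start} to ${finish}`);

// Ask the definition whether the radio's current band is permitted
if (!contest.valid_contest_band(14)) {
	console.log('Warning: 14 MHz is not a valid band for this contest');
}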

How to develop this?

From a contest rules perspective, there are inputs and outputs. The idea is that a definition takes optional parameters and returns a result. The intent is that the parameters are items that are either environmental, things like date and time, radio frequency and mode, or items entered by a contester, either into a screen, or extracted from a log file. The purpose of the definition is to have a single source of truth in relation to a contest in such a way that each user has the same information, not an interpretation of that information.
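
As a sketch of that shape, a definition might combine environmental parameters (the band and mode reported by the radio or read from a log) with information entered by the contester (the received exchange) and return a single result. The function below is hypothetical; its name and the exchange format are invented for illustration.

function valid_qso (band, mode, exchange) {
	// band and mode are environmental, exchange is entered by the contester
	const validModes = new Set(['CW', 'SSB']);

	// In this imaginary contest the exchange is a signal report plus a serial number
	const exchangeOk = /^[0-9]{2,3} [0-9]+$/.test(exchange);

	// Combine the environmental checks with the entered information,
	// reusing valid_contest_band from Example 2
	return valid_contest_band(band) && validModes.has(mode) && exchangeOk;
}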

To create a meaningful standard it needs to include all of the types of information that are required to run a contest. Developing a list of this information will need to be an iterative process, driven by existing and historical contest rules. While scoring might be complex, the purpose of this process is to put that scoring complexity inside the definition of a contest, a machine readable version of the current human readable rules, in exactly the same way that software developers currently write code to interpret those rules, with one specific difference: the rules come from the organiser of a contest, not from a software developer reading the rules and writing their own code.
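
For example, a contest whose points vary by band (something a fixed points-per-QSO keyword cannot express, as the N1MM disclaimer quoted above notes) could carry that logic directly in its definition. The function below is a hypothetical sketch with invented point values, not a proposed part of the standard.

// Hypothetical scoring rule: QSOs on the low bands are worth more points
function qso_points (band) {
	if (band === 1.8) return 3;	// 160m, invented value
	if (band === 3.5) return 2;	// 80m, invented value
	return 1;			// all other permitted bands
}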

Essentially this is a software development exercise, but it has a number of considerations to take into account:

  • Not all contesters are software developers, but they have invaluable information to contribute to this effort.
  • The standard must be simple to use by both organisers and software developers.
  • Any tool requirements need to support cross platform deployment. At a minimum: Windows, macOS, Linux, Android, iOS.
  • The standard must be extensible.

Decisions to discuss and make

The list of decisions shown here is not exhaustive, merely illustrative of the implications of the standard proposal being worked on.

  • The standard must be described using some form of descriptive language. It could be written in plain English, or formalised using something like ABNF.
  • It's expected that this will be implemented using a programming language like JavaScript, Lua, or some other language.
  • What things does scoring depend on? For example, a score might require a list of external resources, like the VK Shires contest which requires a current list of Shires. Either the Shires need to be enumerated inside the rules, or they need to be read from an external resource. Is access to an external resource needed, and what security implications does this have?
  • The standard is intended to run as code inside another piece of software. How does this impact security of that software?
  • Is the language of choice stable, or does it change rapidly, potentially causing all historic rules to cease working and in doing so requiring a high level of maintenance overhead and version control?
  • As this is a programming environment, should it support modules and in what form?
  • Could modules be "overarching" standards issued by an Amateur Society? For example, the ARRL might define each of the individual states, which could be used by any contest as a definitive list of states. Similarly, the CQWW might issue a list of CQ Zones which another contest might include as a module. The definition of power levels might be defined at the IARU level and included as a standard within any contest. A sketch of what such a module might look like follows this list.
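
The sketch below shows what such a module might look like, assuming a Node.js style module mechanism. The module name, export shape and the abbreviated list of zones are invented for the example and are not part of any proposal.

// cq_zones.js - a hypothetical module issued by an association,
// providing a definitive list that any contest definition could reuse.
// Only a few zones are shown to keep the example short.
const CQ_ZONES = new Set([29, 30, 31, 32]);

module.exports = {
	valid_cq_zone: function (zone) {
		// Return true if the given zone appears in the definitive list
		return CQ_ZONES.has(zone);
	}
};

// A contest definition might then use it like this:
// const zones = require('./cq_zones.js');
// zones.valid_cq_zone(30) returns true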

Contributors:

  • Onno VK6FLAB