diff --git a/examples/index.html b/examples/index.html index 679fd926..c6374432 100644 --- a/examples/index.html +++ b/examples/index.html @@ -1222,8 +1222,8 @@
For various code examples using SignalFlow, see examples/python
in GitHub:
https://github.com/ideoforms/signalflow/tree/master/examples/python
+For various code examples using SignalFlow, see examples
in GitHub:
https://github.com/ideoforms/signalflow/tree/master/examples
diff --git a/getting-started/index.html b/getting-started/index.html index 7ec86ba2..2219e24c 100644 --- a/getting-started/index.html +++ b/getting-started/index.html @@ -1337,7 +1337,7 @@If you're new to Python or getting started from scratch:
macOS: Install SignalFlow with Visual Studio Code
Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
diff --git a/index.html b/index.html index f89f9ac1..63397bba 100644 --- a/index.html +++ b/index.html @@ -1348,7 +1348,7 @@For more detailed installation information, including Windows install and compilation from source, see the README.
Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
diff --git a/installation/macos/command-line/index.html b/installation/macos/command-line/index.html index f7beafe4..9fa03d5b 100644 --- a/installation/macos/command-line/index.html +++ b/installation/macos/command-line/index.html @@ -1338,7 +1338,7 @@signalflow test
Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
diff --git a/installation/macos/easy/index.html b/installation/macos/easy/index.html index 4066cf7b..2cee3ba1 100644 --- a/installation/macos/easy/index.html +++ b/installation/macos/easy/index.html @@ -1377,17 +1377,31 @@The simplest way to start exploring SignalFlow is with the free Visual Studio Code editor. Visual Studio Code can edit interactive "Jupyter" notebooks, which allow you to run and modify blocks of Python code in real-time, which is a great way to experiment live with audio synthesis.
+You'll only need to do this installation process once. Once set up, experimenting with SignalFlow is as simple as opening Visual Studio Code.
Download and install the latest version of Python (currently 3.12).
Download and install the latest version of Visual Studio Code.
+Once installed, open Applications
and run Visual Studio Code
.
Open Visual Studio Code, select File → Open Folder...
, select New Folder
, and create a new folder that will contain your new SignalFlow project.
In Visual Studio Code, create a new folder to contain your new SignalFlow project:
+File → Open Folder...
New Folder
, and pick a name for your new project folder.
Where to put your workspace
+You can store your project workspace anywhere on your drive. The workspace can hold multiple notebooks, audio files, etc.
+Trusted workspaces
+If Visual Studio Code asks "Do you trust the authors of the files in this folder?", select "Yes, I trust the authors". This is a security mechanism to protect you against untrusted third-party code.
+Visual Studio Code requires some extensions to be installed to handle Python and Jupyter files.
-In Visual Studio Code, select the Extensions
icon in the far-left column (or press ⇧⌘X
), and install the Python
and Jupyter
extensions by searching for their names. These are needed to modify Jupyter notebooks in real-time.
Visual Studio Code requires extensions to be installed to handle Python and Jupyter files.
+In Visual Studio Code, select the Extensions
icon in the far-left column (or press ⇧⌘X
), and install the Python
and Jupyter
extensions by searching for their names and clicking "Install" on each.
Once installation has finished, close the Extensions
tab.
Select File → New File...
(^⌥⌘N
), and select Jupyter Notebook
. You should see the screen layout change to display an empty black text block (in Jupyter parlance, a "cell").
This will create a sine oscillator, attenuate it, and play it from the system. You should now hear a tone playing from your speaker or headphones.
+To stop the playback, create a new cell and run:
+sine.stop()
+
Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
SignalFlow is an audio synthesis framework whose goal is to make it quick and intuitive to explore complex sonic ideas. It has a simple and consistent Python API, allowing for rapid prototyping in Jupyter, PyCharm, or on the command line. It comes with more than 100 built-in node classes for creative exploration.
Its core is implemented in C++11, with cross-platform hardware acceleration.
SignalFlow has robust support for macOS and Linux (including Raspberry Pi), and has work-in-progress support for Windows. The overall project is currently in alpha status, and interfaces may change without warning.
This documentation currently focuses specifically on Python interfaces and examples.
"},{"location":"#overview","title":"Overview","text":"At its core, SignalFlow has a handful of key concepts.
Let's take a look at a minimal SignalFlow example. Here, we create and immediately start the AudioGraph
, construct a stereo sine oscillator, shape it with an amplitude envelope, connect the result to the graph's output, and run the graph indefinitely.
from signalflow import *

graph = AudioGraph()
sine = SineOscillator([440, 880])
envelope = ASREnvelope(0.1, 0.1, 0.5)
output = sine * envelope
output.play()
graph.wait()
This demo shows a few syntactical benefits that SignalFlow provides to make it easy to work with audio:
SineOscillator
is expanded to create a stereo, 2-channel output. If you passed a 10-item array, the output would have 10 channels. (Read more: Multichannel nodes)
* can be used to multiply, add, subtract or divide the output of nodes, and creates a new output Node that corresponds to the output of the operation. This example uses an envelope to modulate the amplitude of an oscillator. (Read more: Node operators)
In subsequent examples, we will skip the import
line and assume you have already imported everything from the signalflow
namespace.
Info
If you want to keep your namespaces better separated, you might want to do something like the following:
import signalflow as sf

graph = sf.AudioGraph()
sine = sf.SineOscillator(440)
...
"},{"location":"#documentation","title":"Documentation","text":"For various code examples using SignalFlow, see examples/python
in GitHub:
https://github.com/ideoforms/signalflow/tree/master/examples/python
"},{"location":"getting-started/","title":"Getting started","text":""},{"location":"getting-started/#requirements","title":"Requirements","text":"SignalFlow supports macOS, Linux (including Raspberry Pi), and has alpha support for Windows.
"},{"location":"getting-started/#installation","title":"Installation","text":""},{"location":"getting-started/#macos","title":"macOS","text":"If you are an existing Python user and confident with the command line:
macOS: Install SignalFlow from the command line
If you're new to Python or getting started from scratch:
macOS: Install SignalFlow with Visual Studio Code
"},{"location":"getting-started/#examples","title":"Examples","text":"Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
"},{"location":"license/","title":"License","text":"SignalFlow is under the MIT license.
This means that you are welcome to use it for any purpose, including commercial usage, but must include the copyright notice above in any copies or derivative works.
Please do let me know what you use it for!
"},{"location":"buffer/","title":"Buffer","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
A Buffer
is an allocated area of memory that can be used to store single-channel or multi-channel data, which may represent an audio waveform or any other type of signal.
AudioGraph
is the global audio processing system that schedules and performs audio processing. It is composed of an interconnected network of Node and Patch objects, through which audio flows.
Each time a new block of audio is requested by the system audio I/O layer, the AudioGraph
object is responsible for traversing the tree of nodes and generating new samples by calling each Node
's process
method.
Why 'Graph'?
You may be more familiar with "graph" being used to mean a data visualisation. In signal processing and discrete mathematics, the term "graph" is also used to denote a system of nodes ("vertices") related by connections ("edges"). Read more: Graph Theory Basics (Lumen Learning).
→ Next: Creating the graph
"},{"location":"graph/config/","title":"The AudioGraph","text":""},{"location":"graph/config/#graph-configuration","title":"Graph configuration","text":"There are a number of graph configuration parameters that can be used to change the global behaviour of the audio system. This can be done programmatically, via a config file, or via environment variables.
Parameter: Description
output_backend_name: The name of the audio output backend to use, which can be one of: jack, alsa, pulseaudio, coreaudio, wasapi, dummy. Defaults to the first of these found on the system. Typically only required for Linux.
output_device_name: The name of the audio output device to use. This must precisely match the device's name in your system. If not found, DeviceNotFoundException is thrown when instantiating the graph.
output_buffer_size: The size of the hardware output audio buffer, in samples. A larger buffer reduces the chance of buffer overflows and glitches, but at the cost of higher latency. Note that this config option merely specifies the preferred output buffer size, which may not be available in the system hardware. To check the actual buffer size used by the AudioGraph, query graph.output_buffer_size after instantiation.
input_device_name: The name of the input device to use.
input_buffer_size: The size of the hardware input audio buffer.
sample_rate: The audio sample rate to use.
cpu_usage_limit: Imposes a hard limit on the CPU usage permitted by SignalFlow. If the estimated (single-core) CPU usage exceeds this value, no more nodes or patches can be created until it returns to below the limit. Floating-point value between 0 and 1, where 0.5 means 50% CPU.
"},{"location":"graph/config/#configuring-the-graph-programmatically","title":"Configuring the graph programmatically","text":"To specify an alternative config, create and populate an AudioGraphConfig
object before the graph is started:
config = AudioGraphConfig()
config.output_device_name = "MacBook Pro Speakers"
config.sample_rate = 44100
config.output_buffer_size = 2048

graph = AudioGraph(config)
"},{"location":"graph/config/#configuring-the-graph-via-signalflowconfig","title":"Configuring the graph via ~/.signalflow/config","text":"To specify a configuration that is used by all future SignalFlow sessions, create a file ~/.signalflow/config
, containing one or more of the "Graph configuration" fields listed above.
For example:
[audio]
sample_rate = 48000
output_buffer_size = 256
input_buffer_size = 256
output_device_name = "MacBook Pro Speakers"
input_device_name = "MacBook Pro Microphone"
All fields are optional.
A quick and easy way to edit your config, or create a new config file, is by using the signalflow
command-line utility:
signalflow configure
This will use your default $EDITOR
to open the configuration, or pico
if no editor is specified.
SignalFlow config can also be set via environment variables in your shell. Variable names are identical to the upper-case version of the config string, prefixed with SIGNALFLOW_
. For example:
export SIGNALFLOW_OUTPUT_DEVICE_NAME="MacBook Pro Speakers"
export SIGNALFLOW_OUTPUT_BUFFER_SIZE=1024
"},{"location":"graph/config/#printing-the-current-config","title":"Printing the current config","text":"To print the current configuration to stdout:
graph.config.print()
→ Next: Graph status and properties
"},{"location":"graph/creating/","title":"The AudioGraph","text":""},{"location":"graph/creating/#creating-the-graph","title":"Creating the graph","text":"Creating the graph is simple: graph = AudioGraph()
By default, a new AudioGraph
immediately connects to the system's default audio hardware device (via the integrated libsoundio
library), using the system's default sample rate and buffer size.
Info
Note that the AudioGraph is a singleton object: only one AudioGraph can be created, which is shared globally.
To prevent the graph from starting instantly (for example, if you want to use the graph in offline mode), pass start=False
to the constructor.
To configure graph playback or recording parameters, see AudioGraph: Configuration.
→ Next: Graph configuration
"},{"location":"graph/properties/","title":"The AudioGraph","text":""},{"location":"graph/properties/#status-and-properties","title":"Status and properties","text":"A number of methods are provided to query the graph's current status and properties.
"},{"location":"graph/properties/#status","title":"Status","text":"Querying graph.status
returns a one-line description of the number of nodes and patches in the graph, and the estimated CPU and RAM usage:
>>> graph.status
AudioGraph: 235 active nodes, 6 patches, 13.95% CPU usage, 34.91MB memory usage
To automatically poll and print the graph's status periodically, call graph.poll(interval)
, where interval
is in seconds:
>>> graph.poll(1)
AudioGraph: 118 active nodes, 3 patches, 7.09% CPU usage, 34.91MB memory usage
AudioGraph: 118 active nodes, 3 patches, 7.16% CPU usage, 34.91MB memory usage
AudioGraph: 40 active nodes, 1 patch, 2.60% CPU usage, 34.91MB memory usage
To stop polling, call graph.poll(0)
.
Querying graph.structure
returns a multi-line string describing every Node in the graph, their parameter values, and their connectivity structure.
>>> graph.structure
 * audioout-soundio
   input0:
    * linear-panner
      pan: 0.000000
      input:
       * multiply
         input1: 0.251189
         input0:
          * sine
            frequency: 440.000000
"},{"location":"graph/properties/#other-graph-properties","title":"Other graph properties","text":"graph.node_count
(int): Returns the current number of Nodes in the graph (including within patches)
graph.patch_count (int): Returns the current number of Patches in the graph
graph.cpu_usage (float): Returns the current CPU usage, between 0.0 (0%) and 1.0 (100%). CPU usage can be lowered by increasing the output buffer size.
graph.memory_usage (int): Returns the current RAM usage, in bytes. This is typically mostly used by waveform data in Buffers.
graph.num_output_channels (int): Returns the graph's current output channel count, which is typically identical to the number of channels supported by the audio output device.
graph.output_buffer_size (int): Returns the current hardware output buffer size, in samples.
→ Next: Recording graph output
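To illustrate the units involved, here is a pure-Python sketch that formats these properties the way `graph.status` reports them; the two values are placeholders standing in for a live graph's `cpu_usage` and `memory_usage`:

```python
# Placeholder values standing in for a running graph's properties
cpu_usage = 0.0709          # fraction of a single core, range 0.0-1.0
memory_usage = 36_608_000   # bytes, mostly waveform data held in Buffers

# Format them the way graph.status reports them
status = f"{cpu_usage * 100:.2f}% CPU usage, {memory_usage / 1024 / 1024:.2f}MB memory usage"
print(status)  # 7.09% CPU usage, 34.91MB memory usage
```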
"},{"location":"graph/recording/","title":"The AudioGraph","text":""},{"location":"graph/recording/#recording-the-audio-output-of-the-graph","title":"Recording the audio output of the graph","text":"Convenience methods are provided to make it easy to record the global audio output when rendering audio in real-time:
graph.start_recording("filename.wav")
...
graph.stop_recording()
To record output with a channel count other than the default stereo, start_recording
takes a num_channels
argument that can be used to specify an alternative channel count.
Note
At present, only .wav is supported as an output format for global audio recordings.
"},{"location":"graph/recording/#offline-non-real-time-rendering","title":"Offline (non-real-time) rendering","text":"It is also possible to perform non-real-time rendering of a synthesis graph, by synthesizing audio output to a Buffer
which can then be saved to disk:
# Create an AudioGraph with a dummy output device
graph = AudioGraph(output_device=AudioOut_Dummy(2))

# Create a buffer that will be used to store the audio output
buffer = Buffer(2, graph.sample_rate * 4)

# Create a synthesis graph to render
freq = SawLFO(1, 200, 400)
sine = SineOscillator([freq, freq+10])
graph.play(sine)

# Render to the buffer. Non-real-time, so happens instantaneously.
# Note that the graph renders as many samples as needed to fill the buffer.
graph.render_to_buffer(buffer)

# Write the buffer contents to a file
buffer.save("output.wav")

# Finally, tear down the graph
graph.destroy()
→ Next: Clearing and stopping the graph
"},{"location":"graph/stopping/","title":"The AudioGraph","text":""},{"location":"graph/stopping/#clearing-and-stopping-the-graph","title":"Clearing and stopping the graph","text":"To clear all nodes and patches from the graph but leave it running for further audio synthesis:
>>> graph.clear()
To stop the graph and pause audio I/O:
>>> graph.stop()
To permanently destroy the graph:
>>> graph.destroy()
"},{"location":"howto/","title":"Howto","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
Tutorials on common tasks with SignalFlow.
"},{"location":"howto/midi/","title":"Howto: MIDI control","text":""},{"location":"installation/linux/","title":"Getting started","text":""},{"location":"installation/linux/#requirements","title":"Requirements","text":"SignalFlow supports macOS, Linux (including Raspberry Pi), and has alpha support for Windows.
Python 3.8 or above is required. On macOS, we recommend installing an up-to-date version of Python3 using Homebrew: brew install python3
.
On macOS and Linux x86_64, SignalFlow can be installed using pip
:
pip3 install signalflow
Verify that the installation has worked correctly by using the signalflow
command-line tool to play a test tone through your default system audio output:
signalflow test
For more detailed installation information, including Windows install and compilation from source, see the README.
"},{"location":"installation/linux/#examples","title":"Examples","text":"Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
"},{"location":"installation/macos/command-line/","title":"SignalFlow: Command-line installation for macOS","text":"These instructions assume you have a working version of Python 3.8+, installed either via Homebrew or from Python.org.
"},{"location":"installation/macos/command-line/#1-set-up-a-virtual-environment","title":"1. Set up a virtual environment","text":"We strongly recommend setting up a dedicated Python \"virtual environment\" for SignalFlow
python3 -m venv signalflow-env
source signalflow-env/bin/activate
"},{"location":"installation/macos/command-line/#2-install-signalflow","title":"2. Install SignalFlow","text":"Installing SignalFlow with pip
:
pip3 install signalflow jupyter
python3 -m ipykernel install --name signalflow-env
If the installation succeeds, you should see Successfully installed signalflow
.
The installation of SignalFlow includes a command-line tool, signalflow
, that can be used to test and configure the framework. Check that the installation has succeeded by playing a test tone through your default system audio output.
This may take a few seconds to run for the first time. To exit the test, press ctrl-C (^C
).
signalflow test
"},{"location":"installation/macos/command-line/#examples","title":"Examples","text":"Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
"},{"location":"installation/macos/easy/","title":"SignalFlow: Easy install for macOS","text":"The simplest way to start exploring SignalFlow is with the free Visual Studio Code editor. Visual Studio Code can edit interactive \"Jupyter\" notebooks, which allow you to run and modify blocks of Python code in real-time, which is a great way to experiment live with audio synthesis.
"},{"location":"installation/macos/easy/#1-install-python","title":"1. Install Python","text":"Download and install the latest version of Python (currently 3.12).
Download Python
"},{"location":"installation/macos/easy/#2-download-and-install-visual-studio-code","title":"2. Download and install Visual Studio Code","text":"Download and install the latest version of Visual Studio Code.
Download Visual Studio Code
"},{"location":"installation/macos/easy/#3-create-a-new-visual-studio-code-workspace","title":"3. Create a new Visual Studio Code workspace","text":"Open Visual Studio Code, select File \u2192 Open Folder...
, select New Folder
, and create a new folder that will contain all your new SignalFlow project.
Visual Studio Code requires some extensions to be installed to handle Python and Jupyter files.
In Visual Studio Code, select the Extensions
icon in the far-left column (or press ⇧⌘X
), and install the Python
and Jupyter
extensions by searching for their names. These are needed to modify Jupyter notebooks in real-time.
Once installation has finished, close the Extensions
tab.
Select File \u2192 New File...
(^⌥⌘N
), and select Jupyter Notebook
. You should see the screen layout change to display an empty black text block (in Jupyter parlance, a "cell").
Click the button marked Select Kernel
in the top right.
Python Environments...
Create Python Environment
Venv
3.12.x
).
Visual Studio Code will launch into some activity, installing the necessary libraries and creating a Python "virtual environment", which is an isolated area of the filesystem containing all of the packages needed for this working space. Working in different virtual environments for different projects is good practice to minimise the likelihood of conflicts and disruptions.
When the setup is complete, the button in the top right should change to say .venv (Python 3.12.x)
.
You're now all set to start writing code!
"},{"location":"installation/macos/easy/#7-start-writing-code","title":"7. Start writing code","text":"In the first block, type:
print("Hello, world")
To run the cell, press ^↵
(control-enter). You should see "Hello, world" appear below the cell. You're now able to edit, change and run Python code in real-time!
Keyboard shortcuts
enter
to begin editing a cell, and escape
to end editing and move to select mode
b
to add a cell after the current cell, and a
to add a cell before it
⇧↵
(shift-enter)
Clear the first cell, and replace it with:
from signalflow import *
Run the cell with ^↵
. This imports all of the SignalFlow commands and classes.
Create a new cell (b
), and in the new cell, run:
graph = AudioGraph()
This will create and start a new global audio processing system, using the system's default audio output. You should see the name of the audio device printed to the notebook.
In a new cell, run:
sine = SineOscillator(440) * 0.1
sine.play()
This will create a sine oscillator, attenuate it, and play it from the system. You should now hear a tone playing from your speaker or headphones.
"},{"location":"node/","title":"Nodes","text":"A Node
object is an audio processing unit that performs one single function. For example, a Node's role may be to synthesize a waveform, read from a buffer, or take two input Nodes and sum their values.
+, -, *, %, etc)
→ Next: Node playback
"},{"location":"node/developing/","title":"Nodes","text":""},{"location":"node/developing/#developing-new-node-classes","title":"Developing new Node classes","text":"See CONTRIBUTING.md
"},{"location":"node/inputs/","title":"Nodes","text":""},{"location":"node/inputs/#node-inputs","title":"Node inputs","text":"A node has three different classes of input:
Virtually every node has one or more audio-rate inputs. Put simply, an audio-rate input is the output of another node. Let's look at a short example:
lfo = SineLFO()
signal = SquareOscillator(frequency=200, width=lfo)
In this case, we are passing the output of a SineLFO
as the pulse width of a SquareOscillator
. This is an audio-rate input.
Although it's not obvious, the frequency
parameter is also an audio-rate input. Any constant value (such as the 200
here) is behind the scenes implemented as a Constant
node, which continuously outputs the value at an audio rate.
All audio-rate inputs can be modified just like a normal Python property. For example:
signal.frequency = TriangleOscillator(0.5, 100, 1000)
"},{"location":"node/inputs/#variable-input-nodes","title":"Variable input nodes","text":"Some nodes have a variable number of inputs, which can change over the Node's lifetime. For example, Sum()
takes an arbitrary number of input Nodes, and generates an output which is the sum of all of its inputs.
For variable-input nodes such as this, audio-rate inputs are added with add_input()
, and can be removed with remove_input()
.
a = Constant(1)
b = Constant(2)
c = Constant(3)
sum = Sum()
sum.add_input(a)
sum.add_input(b)
sum.add_input(c)
# sum will now generate an output of 6.0
It is possible to check whether a Node object takes variable inputs by querying node.has_variable_inputs
.
. When working with sequencing and timing, it is often useful to be able to trigger discrete events within a node. This is where trigger inputs come in handy.
There are two different ways to handle trigger inputs:
trigger()
method on a Node
To generate trigger events at arbitrary times, call node.trigger()
. For example:
freq_env = Line(10000, 100, 0.5)
sine = SineOscillator(freq_env)
sine.play()
while True:
    freq_env.trigger()
    graph.wait(1)
This is useful because it can be done outside the audio thread. For example, trigger()
could be called each time a MIDI note event is received.
The trigger()
method takes an optional name
parameter, which is used by Node
classes containing more than one type of trigger. This example uses the set_position
trigger of BufferPlayer
to seek to a new location in the sample every second.
buffer = Buffer("../audio/stereo-count.wav")
player = BufferPlayer(buffer, loop=True)
player.play()
while True:
    player.trigger("set_position", random_uniform(0, buffer.duration))
    graph.wait(1)
Note
Because the trigger
method happens outside the audio thread, it will take effect at the start of the next audio block. This means that, if you are running at 44.1kHz with an audio buffer size of 1024 samples, this could introduce a latency of up to 1024/44100 = 0.023s
. For time-critical events like drum triggers, this can be minimised by reducing the hardware output buffer size.
This constraint also means that only one event can be triggered per audio block. To trigger events at a faster rate than the hardware buffer size allows, see Audio-rate triggers below.
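The worst-case figure quoted above is simply the duration of one hardware output buffer; checking the arithmetic:

```python
# Worst-case latency for a trigger() call: one full output buffer
buffer_size = 1024   # samples
sample_rate = 44100  # Hz

latency = buffer_size / sample_rate
print(f"Up to {latency:.3f}s")  # roughly 0.023 seconds
```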
"},{"location":"node/inputs/#audio-rate-triggers","title":"Audio-rate triggers","text":"It is often desirable to trigger events using the audio-rate output of another Node object as a source of trigger events, to give sample-level precision in timing. Most nodes that support trigger
inputs can also be triggered by a corresponding audio-rate input.
Triggers happen at zero-crossings — that is, when the output of the node passes above zero (i.e., from <= 0
to >0
). For example, to create a clock with an oscillating tempo to re-trigger buffer playback:
clock = Impulse(SineLFO(0.2, 1, 10))
buffer = Buffer("../audio/stereo-count.wav")
player = BufferPlayer(buffer, loop=True, clock=clock)
player.play()
This can be used to your advantage with the boolean operator nodes.
on_the_right = MouseX() > 0.5
envelope = ASREnvelope(0, 0, 0.5, clock=on_the_right)
square = SquareOscillator(100)
output = envelope * square * 0.1
output.play()
TODO: Should the name of the trigger() event always be identical to the trigger input name? So clock
for envelopes, buffer player, etc...?
The third type of input supported by nodes is the buffer. Nodes often take buffer inputs as sources of audio samples. They are also useful as sources of envelope shape data (for example, to shape the grains of a Granulator), or general control data (for example, recording motion patterns from a MouseX
input).
buffer = Buffer("../audio/stereo-count.wav")
player = BufferPlayer(buffer, loop=True)
→ Next: Operators
"},{"location":"node/library/","title":"Node reference library","text":""},{"location":"node/library/#analysis","title":"analysis","text":"(input=nullptr, buffer=nullptr, hop_size=0)
(input=0.0, threshold=2.0, min_interval=0.1)
(input=0.0, plugin_id=\"vamp-example-plugins:spectralcentroid:linearcentroid\")
(buffer=nullptr, segment_count=8, stutter_probability=0.0, stutter_count=1, jump_probability=0.0, duty_cycle=1.0, rate=1.0, segment_rate=1.0)
(buffer=nullptr, input=0.0, feedback=0.0, loop_playback=false, loop_record=false)
(buffer=nullptr, rate=1.0, loop=0, start_time=nullptr, end_time=nullptr, clock=nullptr)
(buffer=nullptr, input=0.0, feedback=0.0, loop=false)
(buffer=nullptr)
(buffer=nullptr, input=0.0, delay_time=0.1)
(buffer=nullptr, clock=0, target=0, offsets={}, values={}, durations={})
(buffer=nullptr, clock=0, pos=0, duration=0.1, pan=0.0, rate=1.0, max_grains=2048)
(buffer=nullptr, onsets={})
()
()
(button_index=0)
(attack=0.1, decay=0.1, sustain=0.5, release=0.1, gate=0)
(attack=0.1, sustain=0.5, release=0.1, curve=1.0, clock=nullptr)
(input=nullptr, threshold=0.00001)
(levels=std::vector<NodeRef> ( ), times=std::vector<NodeRef> ( ), curves=std::vector<NodeRef> ( ), clock=nullptr, loop=false)
(from=0.0, to=1.0, time=1.0, loop=0, clock=nullptr)
(sustain=1.0, clock=nullptr)
(input=nullptr, rate=1.0)
(input=nullptr, buffer=nullptr)
(input=0.0, fft_size=SIGNALFLOW_DEFAULT_FFT_SIZE, hop_size=SIGNALFLOW_DEFAULT_FFT_HOP_SIZE, window_size=0, do_window=true)
(fft_size=None, hop_size=None, window_size=None, do_window=None)
(input=nullptr)
(input=0, prominence=1, threshold=0.000001, count=SIGNALFLOW_MAX_CHANNELS, interpolate=true)
(input=nullptr, do_window=false)
(input=0, frequency=2000)
(input=0, threshold=0.5)
(input=nullptr)
(input=0, level=0.5, smoothing=0.9)
(input=0)
(a=0, b=0)
(a=0)
(a=0)
()
(num_channels=1, input=0, amplitude_compensation=true)
(input=nullptr, offset=0, maximum=0, step=1)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0)
(a=0, value_if_true=0, value_if_false=0)
(a=1, b=1)
(a=0)
(a=0)
(a=1.0, b=1.0)
(a=0, b=0)
(a=0)
(a=0)
(input=0, a=0, b=1, c=1, d=10)
(input=0, a=0, b=1, c=1, d=10)
(a=0, b=0)
()
(a=0)
(a=0)
(a=0)
(a=0)
(value=0)
(frequency=1.0)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=440, phase=nullptr)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=440)
(frequency=1.0, min=0.0, max=1.0, width=0.5, phase=0.0)
(frequency=440, width=0.5)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=440)
(buffer=nullptr, frequency=440, phase=0, sync=0, phase_map=nullptr)
(buffer=nullptr, frequency=440, crossfade=0.0, phase=0.0, sync=0)
(input=nullptr, min=-1.0, max=1.0)
(input=nullptr, min=-1.0, max=1.0)
(input=nullptr, smooth=0.99)
(dry_input=nullptr, wet_input=nullptr, wetness=0.0)
(input=nullptr, min=-1.0, max=1.0)
(input=0.0, delay_time=0.1, feedback=0.5, max_delay_time=0.5)
(input=0.0, delay_time=0.1, feedback=0.5, max_delay_time=0.5)
(input=0.0, delay_time=0.1, max_delay_time=0.5)
(input=0.0, stutter_time=0.1, stutter_count=1, clock=nullptr, max_stutter_time=1.0)
(input=0, sample_rate=44100, bit_rate=16)
(input=nullptr, clock=nullptr)
(input=0.0, rate=2.0, chunk_size=1)
(input=0.0, buffer=nullptr)
(input=0.0, threshold=0.1, ratio=2, attack_time=0.01, release_time=0.1, sidechain=nullptr)
(input=0.0, threshold=0.1)
(input=0.0, ceiling=0.5, attack_time=1.0, release_time=1.0)
(input=0.0)
(input=0.0, filter_type=SIGNALFLOW_FILTER_TYPE_LOW_PASS, cutoff=440, resonance=0.0, peak_gain=0.0)
(input=0.0, low_gain=1.0, mid_gain=1.0, high_gain=1.0, low_freq=500, high_freq=5000)
(input=0.0, cutoff=200.0, resonance=0.0)
(input=0.0, filter_type=SIGNALFLOW_FILTER_TYPE_LOW_PASS, cutoff=440, resonance=0.0)
(num_channels=2, input=0, pan=0.0, width=1.0)
(num_channels=2, input=0, pan=0.0, width=1.0)
(env=nullptr, input=0.0, x=0.0, y=0.0, z=0.0, radius=1.0, algorithm=\"dbap\")
(input=0, balance=0)
(input=0, pan=0.0)
(input=0, width=1)
(clock=0, factor=1)
(clock=0, min=0, max=2147483647)
(clock=0, sequence_length=0, num_events=0)
(clock=0)
(sequence=std::vector<int> ( ), clock=nullptr)
(list={}, index=0)
(set=0, reset=0)
(sequence=std::vector<float> ( ), clock=nullptr)
(chaos=3.7, frequency=0.0)
(low_cutoff=20.0, high_cutoff=20000.0, reset=nullptr)
(min=-1.0, max=1.0, delta=0.01, clock=nullptr, reset=nullptr)
(values=std::vector<float> ( ), clock=nullptr, reset=nullptr)
(probability=0.5, clock=nullptr, reset=nullptr)
(scale=0.0, clock=nullptr, reset=nullptr)
(min=0.001, max=1.0, clock=nullptr, reset=nullptr)
(mean=0.0, sigma=0.0, clock=nullptr, reset=nullptr)
(probability=0.5, length=8, clock=nullptr, explore=nullptr, generate=nullptr, reset=nullptr)
(frequency=1.0, distribution=SIGNALFLOW_EVENT_DISTRIBUTION_UNIFORM, reset=nullptr)
(min=0.0, max=1.0, clock=nullptr, reset=nullptr)
(reset=nullptr)
(frequency=0.0, min=-1.0, max=1.0, interpolate=true, random_interval=true, reset=nullptr)
When passing a value to an audio-rate input of a Node, the signal is by default monophonic (single-channel). For example, SquareOscillator(440)
generates a 1-channel output.
It is possible to generate multi-channel output by passing an array of values in the place of a constant. For example, SquareOscillator([440, 880])
generates stereo output with a different frequency in the L and R channels.
There is no limit to the number of channels that can be generated by a node. For example, SquareOscillator(list(100 + 50 * n for n in range(100)))
will create a node with 100-channel output, each with its own frequency.
>>> sq = SquareOscillator([100 + 50 * n for n in range(100)])\n>>> print(sq.num_output_channels)\n100\n
"},{"location":"node/multichannel/#automatic-upmixing","title":"Automatic upmixing","text":"A node generally has multiple connected inputs, which may themselves have differing numbers of channels. For example, SquareOscillator(frequency=[100, 200, 300, 400, 500], width=0.7)
has a 5-channel input and a 1-channel input. In cases like this, the output of the node with fewer channels is upmixed to match the input with the highest channel count.
Upmixing here means simply duplicating the output until it reaches the desired number of channels. In the above case, the width
input will be upmixed to generate 5 channels, all containing 0.7
.
If width
were a stereo input with L and R channels, the output would be tiled, alternating between the channels. Each frame of stereo input would then be upmixed to contain [L, R, L, R, L]
, where L
and R
are the samples corresponding to the L and R channels.
The key rule is that, for nodes that support upmixing, the output signal has as many channels as the input signal with the highest channel count.
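The tiling rule can be sketched in a few lines of plain Python (this is an illustration of the behaviour, not SignalFlow code; the helper name `upmix` is hypothetical):

```python
def upmix(channels, num_channels):
    """Tile one frame's channel values until the desired channel count is reached."""
    return [channels[i % len(channels)] for i in range(num_channels)]

# A 1-channel width input of 0.7 fills all 5 channels:
upmix([0.7], 5)          # [0.7, 0.7, 0.7, 0.7, 0.7]

# A stereo input is tiled, alternating between its channels:
upmix(["L", "R"], 5)     # ["L", "R", "L", "R", "L"]
```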
This process percolates through the signal chain. For example:
SquareOscillator(frequency=SineLFO([1, 3, 5], min=440, max=880),\n width=SawLFO([0.5, 0.6], min=0.25, max=0.75))\n
In this example, the min and max inputs of the frequency LFO would be upmixed to 3 channels each; the min and max inputs of the width LFO would be upmixed to 2 channels each; and the output of the width node would then be upmixed from 2 to 3 channels.
Some nodes have fixed numbers of input/output channels. For example:
StereoPanner
has 1 input channel and 2 output channels; StereoBalance
has 2 input channels and 2 output channels; ChannelMixer
has an arbitrary number of input channels, but a fixed, user-specified number of output channels.
Even Nodes that do not have an obvious audio input (e.g. BufferPlayer) have input channels, which are used for modulation (for example, modulating the playback rate of the buffer).
When two nodes are connected together with incompatible channel counts (for example, connecting a StereoBalance
into a StereoMixer
), an InvalidChannelCountException
will be raised.
There are a number of Node subclasses dedicated to channel handling.
ChannelMix
with nodes of N
and M
channels will produce an output of N + M
channels. Single channels of a multi-channel node can be accessed using the index []
operator. For example:
square = SquareOscillator([440, 441, 442, 443])\noutput = square[0]\n# output now contains a mono output, with a frequency of 440Hz.\n
Slice syntax can be used to query multiple subchannels:
square = SquareOscillator([440, 441, 442, 880])\noutput = square[0:2]\n# now contains a two-channel square wave\n
\u2192 Next: Status and properties
"},{"location":"node/operators/","title":"Nodes","text":""},{"location":"node/operators/#node-operators","title":"Node operators","text":""},{"location":"node/operators/#arithmetic","title":"Arithmetic","text":"The output of multiple nodes can be combined using Python's mathematical operators. For example, to sum two sine waves together to create harmonics, use the +
operator:
output = SineOscillator(440) + SineOscillator(880)\noutput.play()\n
To modulate the amplitude of one node with another, use the *
operator:
sine = SineOscillator(440)\nenvelope = ASREnvelope(0.1, 1, 0.1)\noutput = sine * envelope\n
You can use constant values in place of Node
objects:
sine = SineOscillator(440)\nattenuated = sine * 0.5\n
Operators can be chained together in the normal way:
# Create an envelope that rises from 0.5 to 1.0 and back to 0.5\nenv = (ASREnvelope(0.1, 1, 0.1) * 0.5) + 0.5\n
Behind the scenes, these operators are actually creating composites of Node
subclasses. The last example could alternatively be written as:
Add(Multiply(ASREnvelope(0.1, 1, 0.1), 0.5), 0.5)\n
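This composition can be sketched with plain-Python operator overloading (an illustration only; the real Node classes are implemented in C++, and `Envelope` here is a stand-in name):

```python
class Node:
    # Arithmetic on nodes returns a new composite node rather than a number.
    def __add__(self, other):
        return Add(self, other)
    def __mul__(self, other):
        return Multiply(self, other)
    __radd__ = __add__
    __rmul__ = __mul__

class Add(Node):
    def __init__(self, a, b):
        self.a, self.b = a, b

class Multiply(Node):
    def __init__(self, a, b):
        self.a, self.b = a, b

class Envelope(Node):
    pass

# (envelope * 0.5) + 0.5 builds Add(Multiply(envelope, 0.5), 0.5)
result = Envelope() * 0.5 + 0.5
```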
"},{"location":"node/operators/#comparison","title":"Comparison","text":"Comparison operators can also be used to compare two Node output values, generating a binary (1/0) output. For example:
# Generates an output of 1 when the sinusoid is above 0, and 0 otherwise \nSineOscillator(440) > 0\n
This can then be used as an input to other nodes. The example below generates a half-wave-rectified sine signal (that is, a sine wave with all negative values set to zero).
sine = SineOscillator(440)\nrectified = sine * (sine > 0)\n
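Sample by sample, the comparison yields 1 or 0, which gates the signal. The same arithmetic can be sketched in plain Python (an illustration of the behaviour, not SignalFlow code):

```python
samples = [0.5, -0.3, 0.8, -0.9]

# (s > 0) is True/False, which Python coerces to 1/0 when multiplied
rectified = [s * (s > 0) for s in samples]
# negative samples are zeroed: [0.5, 0.0, 0.8, 0.0]
```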
"},{"location":"node/operators/#index-of-operators","title":"Index of operators","text":"Below is a full list of operators supported by SignalFlow.
"},{"location":"node/operators/#arithmetic-operators","title":"Arithmetic operators","text":"Operator Node class+
Add -
Subtract *
Multiply /
Divide **
Power %
Modulo"},{"location":"node/operators/#comparison-operators","title":"Comparison operators","text":"Operator Node class ==
Equal !=
NotEqual <
LessThan <=
LessThanOrEqual >
GreaterThan >=
GreaterThanOrEqual \u2192 Next: Multichannel
"},{"location":"node/playback/","title":"Nodes","text":""},{"location":"node/playback/#playing-and-stopping-a-node","title":"Playing and stopping a node","text":""},{"location":"node/playback/#starting-playback","title":"Starting playback","text":"To start a node playing, simply call the play()
method:
graph = AudioGraph()\nnode = SineOscillator(440)\nnode.play()\n
This connects the node to the output
endpoint of the current global AudioGraph
. The next time the graph processes a block of samples, the graph's output
node then calls upon the sine oscillator to generate a block.
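Conceptually, each block is generated by a pull: the graph asks its output node for samples, and that node recursively asks its own inputs. A minimal plain-Python sketch of this pull model (not the real C++ implementation; the class and method names here are illustrative):

```python
class Node:
    def __init__(self, *inputs):
        self.inputs = inputs

    def process(self, num_frames):
        # Pull a block from every input, then render our own block from them.
        input_blocks = [node.process(num_frames) for node in self.inputs]
        return self.render(input_blocks, num_frames)

class Constant(Node):
    def __init__(self, value):
        super().__init__()
        self.value = value

    def render(self, input_blocks, num_frames):
        return [self.value] * num_frames

class Add(Node):
    def render(self, input_blocks, num_frames):
        return [sum(frames) for frames in zip(*input_blocks)]

# The "output node" pulls samples through the whole chain:
output = Add(Constant(0.25), Constant(0.5))
block = output.process(4)
```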
It is important to remember that playing a node means \"connecting it to the graph\". For this reason, it is not possible to play the same node more than once, as it is already connected to the graph. To play multiples of a particular Node type, simply create and play multiple instances.
"},{"location":"node/playback/#connecting-a-node-to-another-nodes-input","title":"Connecting a Node to another Node's input","text":"It is often the case that you want to connect a Node to the input of another Node for playback, rather than simply wiring it to the output of a graph -- for example, to pass an oscillator through a processor. In this case, you do not need to call play()
(which means \"connect this node to the graph\"). Instead, it is sufficient to simply connect the Node to the input of another Node that is already playing.
For example:
# create and begin playback of a variable input summer, passed through a filter\nsum = Sum()\nflt = SVFilter(sum, \"low_pass\", 200)\nflt.play()\n
Now, let's create an oscillator. Observe that connecting the oscillator to the filter's input begins playback immediately.
square = SquareOscillator(100)\nsum.add_input(square)\n
"},{"location":"node/playback/#stopping-playback","title":"Stopping playback","text":"To stop a node playing:
node.stop()\n
This disconnects the node from the output device that it is connected to.
\u2192 Next: Inputs
"},{"location":"node/properties/","title":"Nodes","text":""},{"location":"node/properties/#node-properties","title":"Node properties","text":"A Node
has a number of read-only properties which can be used to query its status at a given moment in time.
asr-envelope
) num_output_channels int The number of output channels that the node generates. num_input_channels int The number of input channels that the node takes. Note that most nodes have matches_input_channels
set, meaning that their num_input_channels
will be automatically increased according to their inputs. To learn more, see Nodes: Multichannel. matches_input_channels bool Whether the node automatically increases its num_input_channels
based on its inputs. To learn more, see Nodes: Multichannel. has_variable_inputs bool Whether the node supports an arbitrary number of audio-rate inputs output_buffer numpy.ndarray Contains the Node's most recent audio output, in float32
samples. The buffer is indexed by channel
x frame
, so to obtain the 32nd sample in the first channel, query: node.output_buffer[0][31]
. inputs dict A dict containing all of the Node
's audio-rate inputs. Note that buffer inputs are not currently included within this dict. state int The Node's current playback state, which can be one of SIGNALFLOW_NODE_STATE_ACTIVE
or SIGNALFLOW_NODE_STATE_STOPPED
. The STOPPED
state only applies to those nodes which have a finite duration (e.g. ASREnvelope
, or BufferPlayer
with looping disabled) and have reached the end of playback. Nodes continue to have a state of ACTIVE
whether or not they are connected to the graph. patch Patch Indicates the Patch that the node is part of, or None if the Node does not belong to a Patch."},{"location":"node/properties/#monitoring-a-nodes-output","title":"Monitoring a node's output","text":"To monitor the output of a node, call node.poll(num_seconds)
, where num_seconds
is the interval between messages. This will print the last sample generated by the node to stdout
. In the case of multichannel nodes, only the first channel's value is printed.
>>> a = Counter(Impulse(1))\n>>> a.poll(1)\n>>> a.play()\ncounter: 0.00000\ncounter: 1.00000\ncounter: 2.00000\n
To stop polling a node, call node.poll(0)
.
Some Node
classes have additional properties, containing information on implementation-specific states. These can be accessed via the get_property
method.
For example, the BufferPlayer
node exposes a position
property, which returns the playhead's current position, in seconds.
>>> buffer = Buffer(\"audio.wav\")\n>>> player = BufferPlayer(buffer)\n>>> player.play()\n...\n>>> player.get_property(\"position\")\n5.984000205993652\n
\u2192 Next: Stochastic nodes
"},{"location":"node/stochastic/","title":"Nodes","text":""},{"location":"node/stochastic/#chance-and-stochastic-nodes","title":"Chance and stochastic nodes","text":"SignalFlow has a number of stochastic nodes, which make use of a pseudo-random number generator (RNG) to produce unpredictable output values.
Each object of these StochasticNode
subclasses stores its own RNG. By default, the RNG is seeded with a random value, so that each run will generate a different set of outputs. However, to create a repeatable pseudo-random output, the seed
of the node's RNG can be set to a known value:
>>> r = RandomUniform(0, 1)\n>>> r.process(1024)\n>>> r.output_buffer[0][:4]\narray([0.48836085, 0.64326525, 0.79819506, 0.8489549 ], dtype=float32)\n>>> r.set_seed(123)\n>>> r.process(1024)\n>>> r.output_buffer[0][:4]\narray([0.7129553 , 0.42847094, 0.6908848 , 0.7191503 ], dtype=float32)\n>>> r.set_seed(123)\n>>> r.process(1024)\n>>> r.output_buffer[0][:4]\narray([0.7129553 , 0.42847094, 0.6908848 , 0.7191503 ], dtype=float32)\n
Note the identical sequences generated after repeatedly setting the seed to a known value.
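The per-object RNG behaviour mirrors Python's own random.Random: each instance holds independent state, and re-seeding reproduces the same sequence. A stdlib sketch of the same idea:

```python
import random

rng = random.Random()          # each object owns independent RNG state
rng.seed(123)
first = [rng.random() for _ in range(4)]

rng.seed(123)                  # re-seeding rewinds the stream
second = [rng.random() for _ in range(4)]
assert first == second         # identical sequences from the same seed
```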
Warning
Calling node.process()
is generally not good practice, as it does not recursively process all of the node's inputs (unlike when a node is embedded within an AudioGraph, which correctly handles recursion and cyclic connections). Use it at your peril!
\u2192 Next: Node reference library
"},{"location":"patch/","title":"Patch","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
A Patch
represents a connected group of Nodes
, analogous to a synthesizer. Defining patches makes it easy to create higher-level structures, which can then be reused and instantiated with a single line of code, in much the same way as a Node.
Behind the scenes, the structure of a Patch
is encapsulated by a PatchSpec
, a template which can be instantiated or serialised to a JSON file for later use.
\u2192 Next: Defining a Patch
"},{"location":"patch/auto-free/","title":"Patch","text":""},{"location":"patch/auto-free/#auto-free-and-memory-management","title":"Auto-free and memory management","text":"Auto-free.
"},{"location":"patch/defining/","title":"Patch","text":""},{"location":"patch/defining/#defining-a-patch","title":"Defining a Patch","text":"A Patch is made up of a connected network of Nodes, together with a set of properties that determine how the Patch can be controlled.
There are two general ways to define the structure of a Patch:
By subclassing Patch. In general, this is the recommended approach for defining new Patches.
By constructing a PatchSpec, which describes the structure of a patch.
The quickest and most intuitive way to define a Patch
is by subclassing the Patch
class itself. Let's look at an example.
class Bleep (Patch):\n def __init__(self, frequency=880, duration=0.1):\n super().__init__()\n frequency = self.add_input(\"frequency\", frequency)\n duration = self.add_input(\"duration\", duration)\n sine = SineOscillator(frequency)\n env = ASREnvelope(0.001, duration, 0.001)\n output = sine * env\n self.set_output(output)\n self.set_auto_free(True)\n
In the above example:
__init__
function, super().__init__()
must be called to initialise the Patch and its storage. This is vital! Without it, your program will crash. add_input()
method is used to define them as inputs of the Patch
, which can then be subsequently modulated. Note that the add_input()
method returns a reference to the frequency node, which then acts as a pointer to the input node. self.set_output()
is used to define the Patch's output. A Patch can have only one output. self.set_auto_free()
is used to automatically stop and free the Patch after playback of the envelope is completed. More about auto-free... You can now instantiate a Bleep
object in just the same way as you would instantiate and play a Node:
b = Bleep(frequency=440, duration=0.2)\nb.play()\n
If you query graph.status
after playback has finished, you should see that the Patch
is automatically freed and the number of nodes returns to 0.
The structure of a Patch
is described by a PatchSpec
, which can in turn be imported/exported in the JSON text-based data interchange format.
For information on loading or saving PatchSpecs as JSON, see Exporting and importing patches.
\u2192 Next: Playing and stopping a Patch
"},{"location":"patch/exporting/","title":"Patch","text":""},{"location":"patch/exporting/#exporting-and-importing-patches","title":"Exporting and importing patches","text":"A Patch can be exported or imported.
\u2192 Next: Auto-free and memory management
"},{"location":"patch/inputs/","title":"Patch","text":""},{"location":"patch/inputs/#patch-inputs","title":"Patch inputs","text":"Just like a Node, a Patch supports three different classes of input:
A Patch supports any number of user-defined named inputs, which can be used to modulate the nodes within the patch.
Each input must be defined by calling add_input()
when the Patch is first defined, with an optional default value.
Info
Note that Patches do not yet support variable inputs.
When a Patch is playing, the value of its inputs can be set using patch.set_input()
:
class Bloop (Patch):\n def __init__(self, frequency=880, duration=0.1):\n super().__init__()\n frequency = self.add_input(\"frequency\", frequency)\n sine = SineOscillator(frequency)\n self.set_output(sine)\n self.set_auto_free(True)\n\nbloop = Bloop()\nbloop.play()\n...\nbloop.set_input(\"frequency\", 100)\n
Info
Note that Patches do not yet support setting inputs with Python properties (e.g. patch.prop_name = 123
), as is possible with node inputs.
When defining a Patch
, it is possible to define which Node should receive trigger()
events sent to the Patch. This is done with patch.set_trigger_node()
:
class Hat (Patch):\n def __init__(self, duration=0.1):\n super().__init__()\n duration = self.add_input(\"duration\", duration)\n noise = WhiteNoise()\n env = ASREnvelope(0.0001, 0.0, duration, curve=2)\n output = noise * env\n self.set_trigger_node(env)\n self.set_output(output)\n\nh = Hat()\nh.play()\n...\nh.trigger() # triggers a hit, resetting the ASREnvelope to its start point\n
This can be used to create a Patch
that stays connected to the AudioGraph and can be retriggered to play a hit.
Info
Note that Patches only presently support trigger events directed to a single node within the patch, and cannot route triggers to multiple different nodes.
"},{"location":"patch/inputs/#buffer-inputs","title":"Buffer inputs","text":"Buffer inputs can be declared at define time by calling self.add_buffer_input()
. Similar to add_input
, the return value is a placeholder Buffer
that can be used wherever you would normally pass a Buffer
:
class WobblyPlayer (Patch):\n def __init__(self, buffer):\n super().__init__()\n buffer = self.add_buffer_input(\"buffer\", buffer)\n rate = SineLFO(0.2, 0.5, 1.5)\n player = BufferPlayer(buffer, rate=rate, loop=True)\n self.set_output(player)\n\nbuffer = Buffer(\"examples/audio/stereo-count.wav\")\nplayer = WobblyPlayer(buffer)\nplayer.play()\n
The buffer can then be replaced at runtime by calling set_input()
:
player.set_input(\"buffer\", another_buffer)\n
\u2192 Next: Operators
"},{"location":"patch/operators/","title":"Patch","text":""},{"location":"patch/operators/#operators","title":"Operators","text":"The output of a Patch can be amplified, attenuated, combined, modulated and compared using Python operators, in much the same way as Node:
patch = Patch(patch_spec)\noutput = patch * 0.5\n
For a full list of the operators that can be applied to a Patch
, see Node operators.
\u2192 Next: Patch properties
"},{"location":"patch/playback/","title":"Patch","text":""},{"location":"patch/playback/#playing-a-patch","title":"Playing a Patch","text":"Once a Patch
has been defined or imported, it can be instantiated in two different ways depending on how it was defined:
The simplest way to instantiate a Patch is by defining it as a Patch subclass, and then instantiating it in the same way as a Node.
class Hat (Patch):\n def __init__(self, duration=0.1):\n super().__init__()\n duration = self.add_input(\"duration\", duration)\n noise = WhiteNoise()\n env = ASREnvelope(0.0001, 0.0, duration, curve=2)\n output = noise * env\n self.set_output(output)\n self.set_auto_free(True)\n\nhat = Hat()\nhat.play()\n
Once a Patch has finished, its state changes to SIGNALFLOW_PATCH_STATE_STOPPED
.
Just as with nodes, it is important to remember that playing a patch means \"connecting it to the graph\". For this reason, it is not possible to play the same patch more than once, as it is already connected to the graph.
To play multiples of a particular Patch
type, simply create and play multiple instances.
Once a PatchSpec
has been created or imported, it can be played by instantiating a Patch
with the PatchSpec
as an argument:
patch = Patch(patch_spec)\npatch.play()\n
"},{"location":"patch/playback/#connecting-a-patch-to-another-patchs-input","title":"Connecting a Patch to another Patch's input","text":"A Patch
can be connected to the input of another Patch
(or Node), in exactly the same way described in Connecting a Node to another Node's input.
Once you have got to grips with this paradigm, it becomes simple to build up sophisticated processing graphs by abstracting complex functionality within individual Patch
objects, and connecting them to one another.
As in Node playback, stopping a Patch disconnects it from the AudioGraph. Patches with auto-free enabled are stopped automatically when their lifetime ends. Patches with an unlimited lifespan must be stopped manually, with:
patch.stop()\n
This disconnects the Patch from its output.
\u2192 Next: Patch inputs
"},{"location":"patch/properties/","title":"Patch","text":""},{"location":"patch/properties/#patch-properties","title":"Patch properties","text":"Property Type Description nodes list A list of all of the Node objects that make up this Patch inputs dict A dict of key-value pairs corresponding to all of the (audio rate) inputs within the Patch state int The Patch's current playback state, which can beSIGNALFLOW_PATCH_STATE_ACTIVE
or SIGNALFLOW_PATCH_STATE_STOPPED
. graph AudioGraph A reference to the AudioGraph that the Patch is part of \u2192 Next: Exporting and importing patches
"},{"location":"planning/NAMING/","title":"NAMING","text":""},{"location":"planning/NAMING/#nodes","title":"NODES","text":"Generators - Oscillators - Wavetable - Waveforms (all wrappers around Wavetable with band-limiting) - SineOscillator - SquareOscillator - TriangleOscillator - SawOscillator - LFO (all wrappers around Wavetable) - SineLFO - SquareLFO - TriangleLFO - SawLFO - Buffer - BufferPlayer - BufferRecorder - Stochastic - Processors - Panners - ChannelMixer - LinearPanner - AzimuthPanner - ObjectPanner - Delay - AllpassDelay - Delay - Effects - EQ - Gate - Resampler - Waveshaper
Stochastic - Random signal generators - WhiteNoise - PinkNoise - BrownNoise - PerlinNoise - Random number generators (with clocked inputs) - RandomUniform - RandomLinear - RandomBrownian - RandomExponentialDist - RandomGaussian - RandomBeta - Random event generators - RandomImpulse
-- PATCHES - Patch - PatchDef
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"SignalFlow","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
SignalFlow is an audio synthesis framework whose goal is to make it quick and intuitive to explore complex sonic ideas. It has a simple and consistent Python API, allowing for rapid prototyping in Jupyter, PyCharm, or on the command-line. It comes with over 100 of built-in node classes for creative exploration.
Its core is implemented in C++11, with cross-platform hardware acceleration.
SignalFlow has robust support for macOS and Linux (including Raspberry Pi), and has work-in-progress support for Windows. The overall project is currently in alpha status, and interfaces may change without warning.
This documentation currently focuses specifically on Python interfaces and examples.
"},{"location":"#overview","title":"Overview","text":"At its core, SignalFlow has a handful of key concepts.
Let's take a look at a minimal SignalFlow example. Here, we create and immediately start the AudioGraph
, construct a stereo sine oscillator, connect the oscillator to the graph's output, and run the graph indefinitely.
from signalflow import *\n\ngraph = AudioGraph()\nsine = SineOscillator([440, 880])\nenvelope = ASREnvelope(0.1, 0.1, 0.5)\noutput = sine * envelope\noutput.play()\ngraph.wait()\n
This demo shows a few syntactical benefits that SignalFlow provides to make it easy to work with audio:
SineOscillator
is expanded to create a stereo, 2-channel output. If you passed a 10-item array, the output would have 10 channels. (Read more: Multichannel nodes)*
can be used to multiply, add, subtract or divide the output of nodes, and creates a new output Node that corresponds to the output of the operation. This example uses an envelope to modulate the amplitude of an oscillator. (Read more: Node operators)In subsequent examples, we will skip the import
line and assume you have already imported everything from the signalflow
namespace.
Info
If you want to keep your namespaces better separated, you might want to do something like the below.
import signalflow as sf\n\ngraph = sf.AudioGraph()\nsine = sf.SineOscillator(440)\n...\n
"},{"location":"#documentation","title":"Documentation","text":"For various code examples using SignalFlow, see examples
in GitHub:
https://github.com/ideoforms/signalflow/tree/master/examples
"},{"location":"getting-started/","title":"Getting started","text":""},{"location":"getting-started/#requirements","title":"Requirements","text":"SignalFlow supports macOS, Linux (including Raspberry Pi), and has alpha support for Windows.
"},{"location":"getting-started/#installation","title":"Installation","text":""},{"location":"getting-started/#macos","title":"macOS","text":"If you are an existing Python user and confident with the command line:
macOS: Install SignalFlow from the command line
If you're new to Python or getting started from scratch:
macOS: Install SignalFlow with Visual Studio Code
"},{"location":"getting-started/#examples","title":"Examples","text":"Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
"},{"location":"license/","title":"License","text":"SignalFlow is under the MIT license.
This means that you are welcome to use it for any purpose, including commercial usage, but must include the copyright notice above in any copies or derivative works.
Please do let me know what you use it for!
"},{"location":"buffer/","title":"Buffer","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
A Buffer
is an allocated area of memory that can be used to store single-channel or multi-channel data, which may represent an audio waveform or any other type of signal.
AudioGraph
is the global audio processing system that schedules and performs audio processing. It is comprised of an interconnected network of Node and Patch objects, which audio flows through.
Each time a new block of audio is requested by the system audio I/O layer, the AudioGraph
object is responsible for traversing the tree of nodes and generating new samples by calling each Node
's process
method.
Why 'Graph'?
You may be more familiar with \"graph\" being used to mean a data visualisation. In signal processing and discrete mathematics, the term \"graph\" is also used to denote a system of nodes (\"vertices\") related by connections (\"edges\"). Read more: Graph Theory Basics (Lumen Learning).
\u2192 Next: Creating the graph
"},{"location":"graph/config/","title":"The AudioGraph","text":""},{"location":"graph/config/#graph-configuration","title":"Graph configuration","text":"There are a number of graph configuration parameters that can be used to change the global behaviour of the audio system. This can be done programmatically, via a config file, or via environmental variables.
Parameter Description output_backend_name The name of the audio output backend to use, which can be one of:jack
, alsa
, pulseaudio
, coreaudio
, wasapi
, dummy
. Defaults to the first of these found on the system. Typically only required for Linux. output_device_name The name of the audio output device to use. This must precisely match the device's name in your system. If not found, DeviceNotFoundException
is thrown when instantiating the graph. output_buffer_size The size of the hardware output audio buffer, in samples. A larger buffer reduces the chance of buffer overflows and glitches, but at the cost of higher latency. Note that this config option merely specifies the preferred output buffer size, which may not be available in the system hardware. To check the actual buffer size used by the AudioGraph, query graph.output_buffer_size
after instantiation. input_device_name The name of the input device to use. input_buffer_size The size of the hardware input audio buffer. sample_rate The audio sample rate to use. cpu_usage_limit Imposes a hard limit on the CPU usage permitted by SignalFlow. If the estimated (single-core) CPU usage exceeds this value, no more nodes or patches can be created until it returns to below the limit. Floating-point value between 0..1, where 0.5 means 50% CPU."},{"location":"graph/config/#configuring-the-graph-programmatically","title":"Configuring the graph programmatically","text":"To specify an alternative config, create and populate an AudioGraphConfig
object before the graph is started:
config = AudioGraphConfig()\nconfig.output_device_name = \"MacBook Pro Speakers\"\nconfig.sample_rate = 44100\nconfig.output_buffer_size = 2048\n\ngraph = AudioGraph(config)\n
"},{"location":"graph/config/#configuring-the-graph-via-signalflowconfig","title":"Configuring the graph via ~/.signalflow/config","text":"To specify a configuration that is used by all future SignalFlow sessions, create a file ~/.signalflow/config
, containing one or more of the \"Graph configuration\" fields listed above.
For example:
[audio]\nsample_rate = 48000\noutput_buffer_size = 256\ninput_buffer_size = 256\noutput_device_name = \"MacBook Pro Speakers\"\ninput_device_name = \"MacBook Pro Microphone\"\n
All fields are optional.
A quick and easy way to edit your config, or create a new config file, is by using the signalflow
command-line utility:
signalflow configure\n
This will use your default $EDITOR
to open the configuration, or pico
if no editor is specified.
SignalFlow config can also be set by setting an environmental variable in your shell. Variable names are identical to the upper-case version of the config string, prefixed with SIGNALFLOW_
. For example:
export SIGNALFLOW_OUTPUT_DEVICE_NAME=\"MacBook Pro Speakers\"\nexport SIGNALFLOW_OUTPUT_BUFFER_SIZE=1024\n
"},{"location":"graph/config/#printing-the-current-config","title":"Printing the current config","text":"To print the current configuration to stdout:
graph.config.print()\n
\u2192 Next: Graph status and properties
"},{"location":"graph/creating/","title":"The AudioGraph","text":""},{"location":"graph/creating/#creating-the-graph","title":"Creating the graph","text":"Creating the graph is simple: graph = AudioGraph()
By default, a new AudioGraph
immediately connects to the system's default audio hardware device (via the integrated libsoundio
library), using the system's default sample rate and buffer size.
Info
Note that the AudioGraph is a singleton object: only one AudioGraph can be created, which is shared globally.
To prevent the graph from starting instantly (for example, if you want to use the graph in offline mode), pass start=False
to the constructor.
To configure graph playback or recording parameters, see AudioGraph: Configuration.
\u2192 Next: Graph configuration
"},{"location":"graph/properties/","title":"The AudioGraph","text":""},{"location":"graph/properties/#status-and-properties","title":"Status and properties","text":"A number of methods are provided to query the graph's current status and properties.
"},{"location":"graph/properties/#status","title":"Status","text":"Querying graph.status
returns a one-line description of the number of nodes and patches in the graph, and the estimated CPU and RAM usage:
>>> graph.status\nAudioGraph: 235 active nodes, 6 patches, 13.95% CPU usage, 34.91MB memory usage\n
To automatically poll and print the graph's status periodically, call graph.poll(interval)
, where interval
is in seconds:
>>> graph.poll(1)\nAudioGraph: 118 active nodes, 3 patches, 7.09% CPU usage, 34.91MB memory usage\nAudioGraph: 118 active nodes, 3 patches, 7.16% CPU usage, 34.91MB memory usage\nAudioGraph: 40 active nodes, 1 patch, 2.60% CPU usage, 34.91MB memory usage\n
To stop polling, call graph.poll(0)
.
Querying graph.structure
returns a multi-line string describing all of the Nodes in the graph, their parameter values, and their connectivity structure.
>>> graph.structure\n * audioout-soundio\n input0:\n * linear-panner\n pan: 0.000000\n input:\n * multiply\n input1: 0.251189\n input0:\n * sine\n frequency: 440.000000\n
"},{"location":"graph/properties/#other-graph-properties","title":"Other graph properties","text":"graph.node_count
(int): Returns the current number of Nodes in the graph (including within patches)graph.patch_count
(int): Returns the current number of Patches in the graphcpu_usage
(float): Returns the current CPU usage, between 0.0 (0%) and 1.0 (100%). CPU usage can be lowered by increasing the output buffer size.memory_usage
(int): Returns the current RAM usage, in bytes. This is typically mostly used by waveform data in Buffers.num_output_channels
(int): Returns the graph's current output channel count, which is typically identical to the number of channels supported by the audio output device.output_buffer_size
(int): Returns the current hardware output buffer size, in frames.\u2192 Next: Recording graph output
"},{"location":"graph/recording/","title":"The AudioGraph","text":""},{"location":"graph/recording/#recording-the-audio-output-of-the-graph","title":"Recording the audio output of the graph","text":"Convenience methods are provided to make it easy to record the global audio output when rendering audio in real-time:
graph.start_recording(\"filename.wav\")\n...\ngraph.stop_recording()\n
To record output in formats other than the default stereo, start_recording
takes a num_channels
argument that can be used to specify an alternative channel count.
Note
At present, only .wav is supported as an output format for global audio recordings.
"},{"location":"graph/recording/#offline-non-real-time-rendering","title":"Offline (non-real-time) rendering","text":"It is also possible to perform non-real-time rendering of a synthesis graph, by synthesizing audio output to a Buffer
which can then be saved to disk:
# Create an AudioGraph with a dummy output device\ngraph = AudioGraph(output_device=AudioOut_Dummy(2))\n\n# Create a buffer that will be used to store the audio output\nbuffer = Buffer(2, graph.sample_rate * 4)\n\n# Create a synthesis graph to render\nfreq = SawLFO(1, 200, 400)\nsine = SineOscillator([freq, freq+10])\ngraph.play(sine)\n\n# Render to the buffer. Non-real-time, so happens instantaneously.\n# Note that the graph renders as many samples as needed to fill the buffer.\ngraph.render_to_buffer(buffer)\n\n# Write the buffer contents to a file\nbuffer.save(\"output.wav\")\n\n# Finally, tear down the graph\ngraph.destroy()\n
\u2192 Next: Clearing and stopping the graph
"},{"location":"graph/stopping/","title":"The AudioGraph","text":""},{"location":"graph/stopping/#clearing-and-stopping-the-graph","title":"Clearing and stopping the graph","text":"To clear all nodes and patches from the graph but leave it running for further audio synthesis:
>>> graph.clear()\n
To stop the graph and pause audio I/O:
>>> graph.stop()\n
To permanently destroy the graph:
>>> graph.destroy()\n
"},{"location":"howto/","title":"Howto","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
Tutorials on common tasks with SignalFlow.
"},{"location":"howto/midi/","title":"Howto: MIDI control","text":""},{"location":"installation/linux/","title":"Getting started","text":""},{"location":"installation/linux/#requirements","title":"Requirements","text":"SignalFlow supports macOS, Linux (including Raspberry Pi), and has alpha support for Windows.
Python 3.8 or above is required. On macOS, we recommend installing an up-to-date version of Python 3 using Homebrew: brew install python3
.
On macOS and Linux x86_64, SignalFlow can be installed using pip
:
pip3 install signalflow \n
Verify that the installation has worked correctly by using the signalflow
command-line tool to play a test tone through your default system audio output:
signalflow test\n
For more detailed installation information, including Windows install and compilation from source, see the README.
"},{"location":"installation/linux/#examples","title":"Examples","text":"Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
"},{"location":"installation/macos/command-line/","title":"SignalFlow: Command-line installation for macOS","text":"These instructions assume you have a working version of Python 3.8+, installed either via Homebrew or from Python.org.
"},{"location":"installation/macos/command-line/#1-set-up-a-virtual-environment","title":"1. Set up a virtual environment","text":"We strongly recommend setting up a dedicated Python \"virtual environment\" for SignalFlow
python3 -m venv signalflow-env\nsource signalflow-env/bin/activate\n
"},{"location":"installation/macos/command-line/#2-install-signalflow","title":"2. Install SignalFlow","text":"Installing SignalFlow with pip
:
pip3 install signalflow jupyter\npython3 -m ipykernel install --name signalflow-env\n
If the installation succeeds, you should see Successfully installed signalflow
.
The installation of SignalFlow includes a command-line tool, signalflow
, that can be used to test and configure the framework. Check that the installation has succeeded by playing a test tone through your default system audio output.
This may take a few seconds to run for the first time. To exit the test, press ctrl-C (^C
).
signalflow test\n
"},{"location":"installation/macos/command-line/#examples","title":"Examples","text":"Several example scripts are included within the repo, covering simple control and modulation, FM synthesis, sample granulation, MIDI control, chaotic functions, etc.
"},{"location":"installation/macos/easy/","title":"SignalFlow: Easy install for macOS","text":"The simplest way to start exploring SignalFlow is with the free Visual Studio Code editor. Visual Studio Code can edit interactive \"Jupyter\" notebooks, which allow you to run and modify blocks of Python code in real-time, which is a great way to experiment live with audio synthesis.
You'll only need to do this installation process once. Once set up, experimenting with SignalFlow is as simple as opening Visual Studio Code.
"},{"location":"installation/macos/easy/#1-install-python","title":"1. Install Python","text":"Download and install the latest version of Python (currently 3.12).
Download Python
"},{"location":"installation/macos/easy/#2-download-and-install-visual-studio-code","title":"2. Download and install Visual Studio Code","text":"Download and install the latest version of Visual Studio Code.
Download Visual Studio Code
Once installed, open Applications
and run Visual Studio Code
.
In Visual Studio Code, create a new folder to contain your new SignalFlow project:
File \u2192 Open Folder...
New Folder
, and pick a name for your new project folderWhere to put your workspace
You can store your project workspace anywhere on your drive. The workspace can hold multiple notebooks, audio files, etc.
Trusted workspaces
If Visual Studio asks \"Do you trust the authors of the files in this folder?\", select \"Yes, I trust the authors\". This is a security mechanism to protect you against untrusted third-party code.
"},{"location":"installation/macos/easy/#4-install-the-python-and-jupyter-extensions","title":"4. Install the Python and Jupyter extensions","text":"Visual Studio Code requires extensions to be installed to handle Python and Jupyter files.
In Visual Studio Code, select the Extensions
icon from in the far-left column (or press \u21e7\u2318X
), and install the Python
and Jupyter
extensions by searching for their names and clicking \"Install\" on each.
Once installation has finished, close the Extensions
tab.
Select File \u2192 New File...
(^\u2325\u2318N
), and select Jupyter Notebook
. You should see the screen layout change to display an empty black text block (in Jupyter parlance, a \"cell\").
Click the button marked Select Kernel
in the top right.
Python Environments...
Create Python Environment
Venv
3.12.x
Visual Studio Code will then begin installing the necessary libraries and creating a Python \"virtual environment\": an isolated area of the filesystem containing all of the packages needed for this workspace. Working in a separate virtual environment for each project is good practice, as it minimises the likelihood of conflicts and disruptions.
When the setup is complete, the button in the top right should change to say .venv (Python 3.12.x)
.
You're now all set to start writing code!
"},{"location":"installation/macos/easy/#7-start-writing-code","title":"7. Start writing code","text":"In the first block, type:
print(\"Hello, world\")\n
To run the cell, press ^\u21b5
(control-enter). You should see \"Hello, world\" appear below the cell. You're now able to edit, change and run Python code in real-time!
Keyboard shortcuts
enter
to begin editing a cell, and escape
to end editing and move to select modeb
to add a cell after the current cell, and a
to add a cell before it \u21e7\u21b5
(shift-enter)Clear the first cell, and replace it with:
from signalflow import *\n
Run the cell with ^\u21b5
. This imports all of the SignalFlow commands and classes.
Create a new cell (b
), and in the new cell, run:
graph = AudioGraph()\n
This will create and start a new global audio processing system, using the system's default audio output. You should see the name of the audio device printed to the notebook.
In a new cell, run:
sine = SineOscillator(440) * 0.1\nsine.play()\n
This will create a sine oscillator, attenuate it, and play it through the system output. You should now hear a tone playing from your speakers or headphones.
To stop the playback, create a new cell and run:
sine.stop()\n
"},{"location":"node/","title":"Nodes","text":"A Node
object is an audio processing unit that performs one single function. For example, a Node's role may be to synthesize a waveform, read from a buffer, or take two input Nodes and sum their values.
+
, -
, *
, %
, etc)\u2192 Next: Node playback
"},{"location":"node/developing/","title":"Nodes","text":""},{"location":"node/developing/#developing-new-node-classes","title":"Developing new Node classes","text":"See CONTRIBUTING.md
"},{"location":"node/inputs/","title":"Nodes","text":""},{"location":"node/inputs/#node-inputs","title":"Node inputs","text":"A node has three different classes of input:
Virtually every node has one or more audio-rate inputs. Put simply, an audio-rate input is the output of another node. Let's look at a short example:
lfo = SineLFO()\nsignal = SquareOscillator(frequency=200, width=lfo)\n
In this case, we are passing the output of a SineLFO
as the pulse width of a SquareOscillator
. This is an audio-rate input.
Although it's not obvious, the frequency
parameter is also an audio-rate input. Any constant value (such as the 200
here) is behind the scenes implemented as a Constant
node, which continuously outputs the value at an audio rate.
All audio-rate inputs can be modified just like a normal Python property. For example:
signal.frequency = TriangleOscillator(0.5, 100, 1000)\n
"},{"location":"node/inputs/#variable-input-nodes","title":"Variable input nodes","text":"Some nodes have a variable number of inputs, which can change over the Node's lifetime. For example, Sum()
takes an arbitrary number of input Nodes, and generates an output which is the sum of all of its inputs.
For variable-input nodes such as this, audio-rate inputs are added with add_input()
, and can be removed with remove_input()
.
a = Constant(1)\nb = Constant(2)\nc = Constant(3)\nsum = Sum()\nsum.add_input(a)\nsum.add_input(b)\nsum.add_input(c)\n# sum will now generate an output of 6.0\n
It is possible to check whether a Node object takes variable inputs by querying node.has_variable_inputs
.
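The semantics of a variable-input node can be sketched in plain Python. This toy class is an illustration of the add_input/remove_input behaviour only, not the real Sum implementation (which sums blocks of audio samples rather than plain numbers):

```python
class SumSketch:
    """Toy sketch of a variable-input summing node (illustration only,
    not the SignalFlow implementation)."""
    def __init__(self):
        self.inputs = []

    def add_input(self, node):
        self.inputs.append(node)

    def remove_input(self, node):
        self.inputs.remove(node)

    def output(self):
        # A real Sum node would sum per-sample across audio blocks
        return sum(self.inputs)

s = SumSketch()
for value in (1, 2, 3):
    s.add_input(value)
print(s.output())  # 6
s.remove_input(2)
print(s.output())  # 4
```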
When working with sequencing and timing, it is often useful to be able to trigger discrete events within a node. This is where trigger inputs come in handy.
There are two different ways to handle trigger inputs:
trigger()
method on a Node
To generate trigger events at arbitrary times, call node.trigger()
. For example:
freq_env = Line(10000, 100, 0.5)\nsine = SineOscillator(freq_env)\nsine.play()\nwhile True:\n freq_env.trigger()\n graph.wait(1)\n
This is useful because it can be done outside the audio thread. For example, trigger()
could be called each time a MIDI note event is received.
The trigger()
method takes an optional name
parameter, which is used by Node
classes containing more than one type of trigger. This example uses the set_position
trigger of BufferPlayer
to seek to a new location in the sample every second.
buffer = Buffer(\"../audio/stereo-count.wav\")\nplayer = BufferPlayer(buffer, loop=True)\nplayer.play()\nwhile True:\n player.trigger(\"set_position\", random_uniform(0, buffer.duration))\n graph.wait(1)\n
Note
Because the trigger
method happens outside the audio thread, it will take effect at the start of the next audio block. This means that, if you are running at 44.1kHz with an audio buffer size of 1024 samples, this could introduce a latency of up to 1024/44100 = 0.023s
. For time-critical events like drum triggers, this can be minimised by reducing the hardware output buffer size.
This constraint also means that only one event can be triggered per audio block. To trigger events at a faster rate than the hardware buffer size allows, see Audio-rate triggers below.
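The worst-case latency figure quoted above is simply the duration of one audio block, which can be checked directly:

```python
# Worst-case delay before an out-of-thread trigger() takes effect:
# one full audio block, i.e. buffer_size / sample_rate seconds.
def worst_case_trigger_latency(output_buffer_size, sample_rate):
    return output_buffer_size / sample_rate

print(round(worst_case_trigger_latency(1024, 44100), 3))  # 0.023
print(round(worst_case_trigger_latency(256, 44100), 4))   # 0.0058
```

Reducing the hardware buffer size from 1024 to 256 frames cuts the worst-case trigger latency by a factor of four, at the cost of higher CPU load.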
"},{"location":"node/inputs/#audio-rate-triggers","title":"Audio-rate triggers","text":"It is often desirable to trigger events using the audio-rate output of another Node object as a source of trigger events, to give sample-level precision in timing. Most nodes that support trigger
inputs can also be triggered by a corresponding audio-rate input.
Triggers happen at zero-crossings \u2014 that is, when the output of the node passes above zero (i.e., from <= 0
to >0
). For example, to create a clock with an oscillating tempo to re-trigger buffer playback:
clock = Impulse(SineLFO(0.2, 1, 10))\nbuffer = Buffer(\"../audio/stereo-count.wav\")\nplayer = BufferPlayer(buffer, loop=True, clock=clock)\nplayer.play()\n
This can be used to your advantage with the boolean operator nodes.
on_the_right = MouseX() > 0.5\nenvelope = ASREnvelope(0, 0, 0.5, clock=on_the_right)\nsquare = SquareOscillator(100)\noutput = envelope * square * 0.1\noutput.play()\n
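The zero-crossing rule can be sketched in plain Python. This is an offline illustration of when an audio-rate trigger input would fire (passing from <= 0 to > 0), not SignalFlow's internal implementation:

```python
def rising_edges(samples):
    """Return the indices at which a signal passes from <= 0 to > 0:
    the points where an audio-rate trigger input would fire."""
    edges = []
    previous = 0.0
    for i, sample in enumerate(samples):
        if previous <= 0 and sample > 0:
            edges.append(i)
        previous = sample
    return edges

print(rising_edges([-1.0, -0.2, 0.5, 0.9, -0.3, 0.0, 0.7]))  # [2, 6]
```

Note that a sustained positive value fires only once; the signal must return to or below zero before it can trigger again.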
TODO: Should the name of the trigger() event always be identical to the trigger input name? So clock
for envelopes, buffer player, etc...?
The third type of input supported by nodes is the buffer. Nodes often take buffer inputs as sources of audio samples. They are also useful as sources of envelope shape data (for example, to shape the grains of a Granulator), or general control data (for example, recording motion patterns from a MouseX
input).
buffer = Buffer(\"../audio/stereo-count.wav\")\nplayer = BufferPlayer(buffer, loop=True)\n
\u2192 Next: Operators
"},{"location":"node/library/","title":"Node reference library","text":""},{"location":"node/library/#analysis","title":"analysis","text":"(input=nullptr, buffer=nullptr, hop_size=0)
(input=0.0, threshold=2.0, min_interval=0.1)
(input=0.0, plugin_id=\"vamp-example-plugins:spectralcentroid:linearcentroid\")
(buffer=nullptr, segment_count=8, stutter_probability=0.0, stutter_count=1, jump_probability=0.0, duty_cycle=1.0, rate=1.0, segment_rate=1.0)
(buffer=nullptr, input=0.0, feedback=0.0, loop_playback=false, loop_record=false)
(buffer=nullptr, rate=1.0, loop=0, start_time=nullptr, end_time=nullptr, clock=nullptr)
(buffer=nullptr, input=0.0, feedback=0.0, loop=false)
(buffer=nullptr)
(buffer=nullptr, input=0.0, delay_time=0.1)
(buffer=nullptr, clock=0, target=0, offsets={}, values={}, durations={})
(buffer=nullptr, clock=0, pos=0, duration=0.1, pan=0.0, rate=1.0, max_grains=2048)
(buffer=nullptr, onsets={})
()
()
(button_index=0)
(attack=0.1, decay=0.1, sustain=0.5, release=0.1, gate=0)
(attack=0.1, sustain=0.5, release=0.1, curve=1.0, clock=nullptr)
(input=nullptr, threshold=0.00001)
(levels=std::vector<NodeRef> ( ), times=std::vector<NodeRef> ( ), curves=std::vector<NodeRef> ( ), clock=nullptr, loop=false)
(from=0.0, to=1.0, time=1.0, loop=0, clock=nullptr)
(sustain=1.0, clock=nullptr)
(input=nullptr, rate=1.0)
(input=nullptr, buffer=nullptr)
(input=0.0, fft_size=SIGNALFLOW_DEFAULT_FFT_SIZE, hop_size=SIGNALFLOW_DEFAULT_FFT_HOP_SIZE, window_size=0, do_window=true)
(fft_size=None, hop_size=None, window_size=None, do_window=None)
(input=nullptr)
(input=0, prominence=1, threshold=0.000001, count=SIGNALFLOW_MAX_CHANNELS, interpolate=true)
(input=nullptr, do_window=false)
(input=0, frequency=2000)
(input=0, threshold=0.5)
(input=nullptr)
(input=0, level=0.5, smoothing=0.9)
(input=0)
(a=0, b=0)
(a=0)
(a=0)
()
(num_channels=1, input=0, amplitude_compensation=true)
(input=nullptr, offset=0, maximum=0, step=1)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0, b=0)
(a=0)
(a=0, value_if_true=0, value_if_false=0)
(a=1, b=1)
(a=0)
(a=0)
(a=1.0, b=1.0)
(a=0, b=0)
(a=0)
(a=0)
(input=0, a=0, b=1, c=1, d=10)
(input=0, a=0, b=1, c=1, d=10)
(a=0, b=0)
()
(a=0)
(a=0)
(a=0)
(a=0)
(value=0)
(frequency=1.0)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=440, phase=nullptr)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=440)
(frequency=1.0, min=0.0, max=1.0, width=0.5, phase=0.0)
(frequency=440, width=0.5)
(frequency=1.0, min=0.0, max=1.0, phase=0.0)
(frequency=440)
(buffer=nullptr, frequency=440, phase=0, sync=0, phase_map=nullptr)
(buffer=nullptr, frequency=440, crossfade=0.0, phase=0.0, sync=0)
(input=nullptr, min=-1.0, max=1.0)
(input=nullptr, min=-1.0, max=1.0)
(input=nullptr, smooth=0.99)
(dry_input=nullptr, wet_input=nullptr, wetness=0.0)
(input=nullptr, min=-1.0, max=1.0)
(input=0.0, delay_time=0.1, feedback=0.5, max_delay_time=0.5)
(input=0.0, delay_time=0.1, feedback=0.5, max_delay_time=0.5)
(input=0.0, delay_time=0.1, max_delay_time=0.5)
(input=0.0, stutter_time=0.1, stutter_count=1, clock=nullptr, max_stutter_time=1.0)
(input=0, sample_rate=44100, bit_rate=16)
(input=nullptr, clock=nullptr)
(input=0.0, rate=2.0, chunk_size=1)
(input=0.0, buffer=nullptr)
(input=0.0, threshold=0.1, ratio=2, attack_time=0.01, release_time=0.1, sidechain=nullptr)
(input=0.0, threshold=0.1)
(input=0.0, ceiling=0.5, attack_time=1.0, release_time=1.0)
(input=0.0)
(input=0.0, filter_type=SIGNALFLOW_FILTER_TYPE_LOW_PASS, cutoff=440, resonance=0.0, peak_gain=0.0)
(input=0.0, low_gain=1.0, mid_gain=1.0, high_gain=1.0, low_freq=500, high_freq=5000)
(input=0.0, cutoff=200.0, resonance=0.0)
(input=0.0, filter_type=SIGNALFLOW_FILTER_TYPE_LOW_PASS, cutoff=440, resonance=0.0)
(num_channels=2, input=0, pan=0.0, width=1.0)
(num_channels=2, input=0, pan=0.0, width=1.0)
(env=nullptr, input=0.0, x=0.0, y=0.0, z=0.0, radius=1.0, algorithm=\"dbap\")
(input=0, balance=0)
(input=0, pan=0.0)
(input=0, width=1)
(clock=0, factor=1)
(clock=0, min=0, max=2147483647)
(clock=0, sequence_length=0, num_events=0)
(clock=0)
(sequence=std::vector<int> ( ), clock=nullptr)
(list={}, index=0)
(set=0, reset=0)
(sequence=std::vector<float> ( ), clock=nullptr)
(chaos=3.7, frequency=0.0)
(low_cutoff=20.0, high_cutoff=20000.0, reset=nullptr)
(min=-1.0, max=1.0, delta=0.01, clock=nullptr, reset=nullptr)
(values=std::vector<float> ( ), clock=nullptr, reset=nullptr)
(probability=0.5, clock=nullptr, reset=nullptr)
(scale=0.0, clock=nullptr, reset=nullptr)
(min=0.001, max=1.0, clock=nullptr, reset=nullptr)
(mean=0.0, sigma=0.0, clock=nullptr, reset=nullptr)
(probability=0.5, length=8, clock=nullptr, explore=nullptr, generate=nullptr, reset=nullptr)
(frequency=1.0, distribution=SIGNALFLOW_EVENT_DISTRIBUTION_UNIFORM, reset=nullptr)
(min=0.0, max=1.0, clock=nullptr, reset=nullptr)
(reset=nullptr)
(frequency=0.0, min=-1.0, max=1.0, interpolate=true, random_interval=true, reset=nullptr)
When passing a value to an audio-rate input of a Node, the signal is by default monophonic (single-channel). For example, SquareOscillator(440)
generates a 1-channel output.
It is possible to generate multi-channel output by passing an array of values in the place of a constant. For example, SquareOscillator([440, 880])
generates stereo output with a different frequency in the L and R channels.
There is no limit to the number of channels that can be generated by a node. For example, SquareOscillator(list(100 + 50 * n for n in range(100)))
will create a node with 100-channel output, each with its own frequency.
>>> sq = SquareOscillator([100 + 50 * n for n in range(100)])\n>>> print(sq.num_output_channels)\n100\n
"},{"location":"node/multichannel/#automatic-upmixing","title":"Automatic upmixing","text":"There are generally multiple inputs connected to a node, which may themselves have differing number of channels. For example, SquareOscillator(frequency=[100, 200, 300, 400, 500], width=0.7)
has a 5-channel input and a 1-channel input. In cases like this, the output of the nodes with fewer channels is upmixed to match the higher-channel inputs.
Upmixing here means simply duplicating the output until it reaches the desired number of channels. In the above case, the width
input will be upmixed to generate 5 channels, all containing 0.7
.
If width
were a stereo input with L and R channels, the output would be tiled, alternating between the channels. Each frame of stereo input would then be upmixed to contain [L, R, L, R, L]
, where L
and R
are the samples corresponding to the L and R channels.
The key rule is that, for nodes that support upmixing, the output signal has as many channels as the input signal with the highest channel count.
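The tiling rule can be sketched in plain Python, operating on a single frame of samples. This is an illustration of the upmixing behaviour described above, not the library's implementation:

```python
def upmix(frame, num_channels):
    """Sketch of SignalFlow-style upmixing: tile a frame's channels,
    repeating from the start, until the target channel count is reached."""
    return [frame[i % len(frame)] for i in range(num_channels)]

# A 1-channel width of 0.7 upmixed to match a 5-channel frequency input
print(upmix([0.7], 5))       # [0.7, 0.7, 0.7, 0.7, 0.7]

# A stereo (L, R) input upmixed to 5 channels alternates between channels
print(upmix(["L", "R"], 5))  # ['L', 'R', 'L', 'R', 'L']
```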
This process percolates through the signal chain. For example:
SquareOscillator(frequency=SineLFO([1, 3, 5], min=440, max=880),\n width=SawLFO([0.5, 0.6], min=0.25, max=0.75))\n
min
and max
inputs of the frequency
LFO would be upmixed to 3 channels eachmin
and max
inputs of the width
LFO would be upmixed to 2 channels eachwidth
node would be upmixed from 2 to 3 channelsSome nodes have immutable numbers of input/output channels. For example:
StereoPanner
has 1 input channel and 2 output channelsStereoBalance
has 2 input channels and 2 output channelsChannelMixer
has an arbitrary number of input channels, but a fixed, user-specified number of output channelsEven Nodes that do not have an obvious input (e.g. BufferPlayer
) have input channels, for modulation inputs (for example, modulating the rate of the buffer).
When two nodes are connected together with incompatible channel counts (for example, connecting a StereoBalance
into a StereoMixer
), an InvalidChannelCountException
will be raised.
There are a number of Node subclasses dedicated to channel handling.
ChannelMix
with nodes of N
and M
channels will produce an output of N + M
channels.Single channels of a multi-channel node can be accessed using the index []
operator. For example:
square = SquareOscillator([440, 441, 442, 443])\noutput = square[0]\n# output now contains a mono output, with a frequency of 440Hz.\n
Slice syntax can be used to query multiple subchannels:
square = SquareOscillator([440, 441, 442, 880])\noutput = square[0:2]\n# now contains a two-channel square wave\n
\u2192 Next: Status and properties
"},{"location":"node/operators/","title":"Nodes","text":""},{"location":"node/operators/#node-operators","title":"Node operators","text":""},{"location":"node/operators/#arithmetic","title":"Arithmetic","text":"The output of multiple nodes can be combined using Python's mathematical operators. For example, to sum two sine waves together to create harmonics, use the +
operator:
output = SineOscillator(440) + SineOscillator(880)\noutput.play()\n
To modulate the amplitude of one node with another, use the *
operator:
sine = SineOscillator(440)\nenvelope = ASREnvelope(0.1, 1, 0.1)\noutput = sine * envelope\n
You can use constant values in place of Node
objects:
sine = SineOscillator(440)\nattenuated = sine * 0.5\n
Operators can be chained together in the normal way:
# Create an envelope that rises from 0.5 to 1.0 and back to 0.5\nenv = (ASREnvelope(0.1, 1, 0.1) * 0.5) + 0.5\n
Behind the scenes, these operators are actually creating composites of Node
subclasses. The last example could alternatively be written as:
Add(Multiply(ASREnvelope(0.1, 1, 0.1), 0.5), 0.5)\n
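How the operators build composite nodes can be sketched with plain Python operator overloading. The classes below are toy stand-ins (a real Node renders blocks of audio samples rather than single values):

```python
class NodeSketch:
    """Toy node to illustrate how Python operators compose nodes
    (illustration only; real SignalFlow nodes render audio blocks)."""
    def __init__(self, value=0.0):
        self.value = value

    def render(self):
        return self.value

    def __add__(self, other):
        # node + x  becomes  Add(node, x)
        return AddSketch(self, _wrap(other))

    def __mul__(self, other):
        # node * x  becomes  Multiply(node, x)
        return MulSketch(self, _wrap(other))

def _wrap(x):
    # Constants are wrapped in a node, mirroring SignalFlow's Constant
    return x if isinstance(x, NodeSketch) else NodeSketch(x)

class AddSketch(NodeSketch):
    def __init__(self, a, b):
        self.a, self.b = a, b

    def render(self):
        return self.a.render() + self.b.render()

class MulSketch(NodeSketch):
    def __init__(self, a, b):
        self.a, self.b = a, b

    def render(self):
        return self.a.render() * self.b.render()

# (env * 0.5) + 0.5, with a constant standing in for the envelope's peak
env = NodeSketch(1.0)
output = (env * 0.5) + 0.5
print(output.render())  # 1.0
```

Evaluating `output` pulls values through the composite, just as the real Add and Multiply nodes pull audio blocks from their inputs.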
"},{"location":"node/operators/#comparison","title":"Comparison","text":"Comparison operators can also be used to compare two Node output values, generating a binary (1/0) output. For example:
# Generates an output of 1 when the sinusoid is above 0, and 0 otherwise \nSineOscillator(440) > 0\n
This can then be used as an input to other nodes. The example below generates a half-wave-rectified sine signal (that is, a sine wave with all negative values set to zero).
sine = SineOscillator(440)\nrectified = sine * (sine > 0)\n
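The effect can be checked offline in plain Python, using a list of samples in place of a node's output:

```python
import math

# One cycle of a sine wave, 8 samples
sine = [math.sin(2 * math.pi * n / 8) for n in range(8)]

# The comparison produces a binary (1/0) gate signal;
# multiplying by it zeroes every non-positive sample
gate = [1.0 if s > 0 else 0.0 for s in sine]
rectified = [s * g for s, g in zip(sine, gate)]

print(all(r >= 0 for r in rectified))  # True
```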
"},{"location":"node/operators/#index-of-operators","title":"Index of operators","text":"Below is a full list of operators supported by SignalFlow.
"},{"location":"node/operators/#arithmetic-operators","title":"Arithmetic operators","text":"Operator Node class+
Add -
Subtract *
Multiply /
Divide **
Power %
Modulo"},{"location":"node/operators/#comparison-operators","title":"Comparison operators","text":"Operator Node class ==
Equal !=
NotEqual <
LessThan <=
LessThanOrEqual >
GreaterThan >=
GreaterThanOrEqual \u2192 Next: Multichannel
"},{"location":"node/playback/","title":"Nodes","text":""},{"location":"node/playback/#playing-and-stopping-a-node","title":"Playing and stopping a node","text":""},{"location":"node/playback/#starting-playback","title":"Starting playback","text":"To start a node playing, simply call the play()
method:
graph = AudioGraph()\nnode = SineOscillator(440)\nnode.play()\n
This connects the node to the output
endpoint of the current global AudioGraph
. The next time the graph processes a block of samples, the graph's output
node then calls upon the sine oscillator to generate a block.
It is important to remember that playing a node means \"connecting it to the graph\". For this reason, it is not possible to play the same node more than once, as it is already connected to the graph. To play multiples of a particular Node type, simply create and play multiple instances.
"},{"location":"node/playback/#connecting-a-node-to-another-nodes-input","title":"Connecting a Node to another Node's input","text":"It is often the case that you want to connect a Node to the input of another Node for playback, rather than simply wiring it to the output of a graph -- for example, to pass an oscillator through a processor. In this case, you do not need to call play()
(which means \"connect this node to the graph\"). Instead, it is sufficient to simply connect the Node to the input of another Node that is already playing.
For example:
# create and begin playback of a variable input summer, passed through a filter\nsum = Sum()\nflt = SVFilter(sum, \"low_pass\", 200)\nflt.play()\n
Now, let's create an oscillator. Observe that connecting the oscillator to the summer's input begins playback immediately, since the summer already feeds the playing filter.
square = SquareOscillator(100)\nsum.add_input(square)\n
"},{"location":"node/playback/#stopping-playback","title":"Stopping playback","text":"To stop a node playing:
node.stop()\n
This disconnects the node from the output device that it is connected to.
\u2192 Next: Inputs
"},{"location":"node/properties/","title":"Nodes","text":""},{"location":"node/properties/#node-properties","title":"Node properties","text":"A Node
has a number of read-only properties which can be used to query its status at a given moment in time.
asr-envelope
) num_output_channels int The number of output channels that the node generates. num_input_channels int The number of input channels that the node takes. Note that most nodes have matches_input_channels
set, meaning that their num_input_channels
will be automatically increased according to their inputs. To learn more, see Nodes: Multichannel. matches_input_channels bool Whether the node automatically increases its num_input_channels
based on its inputs. To learn more, see Nodes: Multichannel. has_variable_inputs bool Whether the node supports an arbitrary number of audio-rate inputs output_buffer numpy.ndarray Contains the Node's most recent audio output, in float32
samples. The buffer is indexed by channel
x frame
, so to obtain the 32nd sample in the first channel, query: node.output_buffer[0][31]
. inputs dict A dict containing all of the Node
's audio-rate inputs. Note that buffer inputs are not currently included within this dict. state int The Node's current playback state, which can be one of SIGNALFLOW_NODE_STATE_ACTIVE
and SIGNALFLOW_NODE_STATE_STOPPED
. The STOPPED
state only applies to those nodes which have a finite duration (e.g. ASREnvelope
, or BufferPlayer
with looping disabled) and have reached the end of playback. Nodes continue to have a state of ACTIVE
whether or not they are connected to the graph. patch Patch Indicates the Patch that the node is part of, or None if the Node does not belong to a Patch."},{"location":"node/properties/#monitoring-a-nodes-output","title":"Monitoring a node's output","text":"To monitor the output of a node, call node.poll(num_seconds)
, where num_seconds
is the interval between messages. This will print the last sample generated by the node to stdout
. In the case of multichannel nodes, only the first channel's value is printed.
>>> a = Counter(Impulse(1))\n>>> a.poll(1)\n>>> a.play()\ncounter: 0.00000\ncounter: 1.00000\ncounter: 2.00000\n
To stop polling a node, call node.poll(0)
.
Some Node
classes have additional properties, containing information on implementation-specific states. These can be accessed via the get_property
method.
For example, the BufferPlayer
node exposes a position
property, which returns the playhead's current position, in seconds.
>>> buffer = Buffer(\"audio.wav\")\n>>> player = BufferPlayer(buffer)\n>>> player.play()\n...\n>>> player.get_property(\"position\")\n5.984000205993652\n
\u2192 Next: Stochastic nodes
"},{"location":"node/stochastic/","title":"Nodes","text":""},{"location":"node/stochastic/#chance-and-stochastic-nodes","title":"Chance and stochastic nodes","text":"SignalFlow has a number of stochastic nodes, which make use of a pseudo-random number generator (RNG) to produce unpredictable output values.
Each object of these StochasticNode
subclasses stores its own RNG. By default, the RNG is seeded with a random value, so that each run will generate a different set of outputs. However, to create a repeatable pseudo-random output, the seed
of the node's RNG can be set to a known value:
>>> r = RandomUniform(0, 1)\n>>> r.process(1024)\n>>> r.output_buffer[0][:4]\narray([0.48836085, 0.64326525, 0.79819506, 0.8489549 ], dtype=float32)\n>>> r.set_seed(123)\n>>> r.process(1024)\n>>> r.output_buffer[0][:4]\narray([0.7129553 , 0.42847094, 0.6908848 , 0.7191503 ], dtype=float32)\n>>> r.set_seed(123)\n>>> r.process(1024)\n>>> r.output_buffer[0][:4]\narray([0.7129553 , 0.42847094, 0.6908848 , 0.7191503 ], dtype=float32)\n
Note the identical sequences generated after repeatedly setting the seed to a known value.
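The same principle of seeded reproducibility can be illustrated with Python's standard-library random module (a generic sketch of the concept, not SignalFlow code):

```python
import random

# Seeding an RNG with a known value makes its output sequence repeatable.
rng = random.Random()
rng.seed(123)
first_run = [rng.uniform(0, 1) for _ in range(4)]

rng.seed(123)  # re-seed with the same value...
second_run = [rng.uniform(0, 1) for _ in range(4)]

# ...and the generated sequence is identical, just as with set_seed() above.
assert first_run == second_run
```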
Warning
Calling node.process()
is generally not good practice, as it does not recursively process all of the node's inputs (unlike a node embedded within an AudioGraph, which correctly handles recursion and cyclical connections). Use it at your own peril!
\u2192 Next: Node reference library
"},{"location":"patch/","title":"Patch","text":"Warning
This documentation is a work-in-progress and may have sections that are missing or incomplete.
A Patch
represents a connected group of Nodes
, analogous to a synthesizer. Defining patches makes it easy to create higher-level structures, which can then be reused and instantiated with a single line of code, in much the same way as a Node.
Behind the scenes, the structure of a Patch
is encapsulated by a PatchSpec
, a template which can be instantiated or serialised to a JSON file for later use.
\u2192 Next: Defining a Patch
"},{"location":"patch/auto-free/","title":"Patch","text":""},{"location":"patch/auto-free/#auto-free-and-memory-management","title":"Auto-free and memory management","text":"Auto-free.
"},{"location":"patch/defining/","title":"Patch","text":""},{"location":"patch/defining/#defining-a-patch","title":"Defining a Patch","text":"A Patch is made up of a connected network of Nodes, together with a set of properties that determine how the Patch can be controlled.
There are two general ways to define the structure of a Patch: by subclassing Patch
, which is in general the recommended approach for defining new Patches; or by programmatically creating a PatchSpec
, which describes the structure of a patch. The quickest and most intuitive way to define a Patch
is by subclassing the Patch
class itself. Let's look at an example.
class Bleep (Patch):\n def __init__(self, frequency=880, duration=0.1):\n super().__init__()\n frequency = self.add_input(\"frequency\", frequency)\n duration = self.add_input(\"duration\", duration)\n sine = SineOscillator(frequency)\n env = ASREnvelope(0.001, duration, 0.001)\n output = sine * env\n self.set_output(output)\n self.set_auto_free(True)\n
In the above example: In the __init__
function, super().__init__()
must be called to initialise the Patch and its storage. This is vital! Without it, your program will crash. The add_input()
method is used to define frequency and duration as inputs of the Patch
, which can then subsequently be modulated. Note that add_input()
returns a reference to a new input node, which should then be used within the Patch in place of the original value. self.set_output()
is used to define the Patch's output. A Patch can have only a single output. self.set_auto_free()
is used to automatically stop and free the Patch after playback of the envelope has completed. More about auto-free... You can now instantiate a Bleep
object in just the same way as you would instantiate and play a Node:
b = Bleep(frequency=440, duration=0.2)\nb.play()\n
If you query graph.status
after playback has finished, you should see that the Patch
is automatically freed and the number of nodes returns to 0.
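The importance of calling super().__init__() can be illustrated with a minimal pure-Python analogy (hypothetical classes, not the SignalFlow implementation): the base class allocates internal storage in its initialiser, and every other method depends on that storage existing.

```python
class PatchLike:
    """Hypothetical stand-in for a base class that allocates storage in __init__."""
    def __init__(self):
        self._inputs = {}  # storage that add_input() relies on

    def add_input(self, name, value):
        self._inputs[name] = value
        return value

class Good(PatchLike):
    def __init__(self):
        super().__init__()  # storage is initialised before use
        self.add_input("frequency", 880)

class Bad(PatchLike):
    def __init__(self):
        # super().__init__() was never called, so self._inputs does not exist
        self.add_input("frequency", 880)

Good()       # works fine
try:
    Bad()    # raises AttributeError: no _inputs dict was ever created
except AttributeError as e:
    print("crashed:", e)
```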
The structure of a Patch
is described by a PatchSpec
, which can in turn be exported to and imported from JSON, a text-based data interchange format.
For information on loading or saving PatchSpecs as JSON, see Exporting and importing patches.
\u2192 Next: Playing and stopping a Patch
"},{"location":"patch/exporting/","title":"Patch","text":""},{"location":"patch/exporting/#exporting-and-importing-patches","title":"Exporting and importing patches","text":"A Patch can be exported or imported.
\u2192 Next: Auto-free and memory management
"},{"location":"patch/inputs/","title":"Patch","text":""},{"location":"patch/inputs/#patch-inputs","title":"Patch inputs","text":"Just like a Node, a Patch supports three different classes of input:
A Patch supports any number of user-defined named inputs, which can be used to modulate the nodes within the patch.
Each input must be defined by calling add_input()
when the Patch is first defined, with an optional default value.
Info
Note that Patches do not yet support variable inputs.
When a Patch is playing, the value of its inputs can be set using patch.set_input()
:
class Bloop (Patch):\n def __init__(self, frequency=880, duration=0.1):\n super().__init__()\n frequency = self.add_input(\"frequency\", frequency)\n sine = SineOscillator(frequency)\n self.set_output(sine)\n self.set_auto_free(True)\n\nbloop = Bloop()\nbloop.play()\n...\nbloop.set_input(\"frequency\", 100)\n
Info
Note that Patches do not yet support setting inputs with Python properties (e.g. patch.prop_name = 123
), as is possible with node inputs.
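The named-input mechanism (add_input() at definition time, set_input() at runtime) can be sketched generically in pure Python (a hypothetical registry class, not SignalFlow internals):

```python
class InputRegistry:
    """Hypothetical sketch of a patch-style named-input registry."""
    def __init__(self):
        self._inputs = {}

    def add_input(self, name, default):
        # Register the input at definition time, with a default value.
        self._inputs[name] = default
        return default

    def set_input(self, name, value):
        # Only inputs declared with add_input() can be set at runtime.
        if name not in self._inputs:
            raise KeyError("no such input: %s" % name)
        self._inputs[name] = value

patch = InputRegistry()
patch.add_input("frequency", 880)
patch.set_input("frequency", 100)
```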
When defining a Patch
, it is possible to define which Node should receive trigger()
events sent to the Patch. This is done with patch.set_trigger_node()
:
class Hat (Patch):\n def __init__(self, duration=0.1):\n super().__init__()\n duration = self.add_input(\"duration\", duration)\n noise = WhiteNoise()\n env = ASREnvelope(0.0001, 0.0, duration, curve=2)\n output = noise * env\n self.set_trigger_node(env)\n self.set_output(output)\n\nh = Hat()\nh.play()\n...\nh.trigger() # triggers a hit, resetting the ASREnvelope to its start point\n
This can be used to create a Patch
that stays connected to the AudioGraph and can be retriggered to play a hit.
Info
Note that Patches only presently support trigger events directed to a single node within the patch, and cannot route triggers to multiple different nodes.
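This single-target trigger routing can be sketched as a pure-Python analogy (hypothetical classes, not the SignalFlow implementation): the patch stores one designated node, and forwards any trigger() call to it.

```python
class EnvelopeLike:
    """Hypothetical node that restarts whenever it receives a trigger."""
    def __init__(self):
        self.trigger_count = 0

    def trigger(self):
        self.trigger_count += 1  # e.g. reset the envelope to its start point

class PatchLike:
    """Hypothetical patch that forwards triggers to a single designated node."""
    def __init__(self):
        self._trigger_node = None

    def set_trigger_node(self, node):
        self._trigger_node = node  # only one node can receive triggers

    def trigger(self):
        if self._trigger_node is not None:
            self._trigger_node.trigger()

env = EnvelopeLike()
patch = PatchLike()
patch.set_trigger_node(env)
patch.trigger()  # forwarded to env
```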
"},{"location":"patch/inputs/#buffer-inputs","title":"Buffer inputs","text":"Buffer inputs can be declared at define time by calling self.add_buffer_input()
. Similar to add_input
, the return value is a placeholder Buffer
that can be used wherever you would normally pass a Buffer
:
class WobblyPlayer (Patch):\n def __init__(self, buffer):\n super().__init__()\n buffer = self.add_buffer_input(\"buffer\", buffer)\n rate = SineLFO(0.2, 0.5, 1.5)\n player = BufferPlayer(buffer, rate=rate, loop=True)\n self.set_output(player)\n\nbuffer = Buffer(\"examples/audio/stereo-count.wav\")\nplayer = WobblyPlayer(buffer)\nplayer.play()\n
The buffer can then be replaced at runtime by calling set_input()
:
player.set_input(\"buffer\", another_buffer)\n
\u2192 Next: Operators
"},{"location":"patch/operators/","title":"Patch","text":""},{"location":"patch/operators/#operators","title":"Operators","text":"The output of a Patch can be amplified, attenuated, combined, modulated and compared using Python operators, in much the same way as Node:
patch = Patch(patch_spec)\noutput = patch * 0.5\n
For a full list of the operators that can be applied to a Patch
, see Node operators.
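Operator support of this kind relies on Python's operator-overloading protocol. A minimal sketch of the general mechanism (not SignalFlow's actual implementation) might look like:

```python
class Signal:
    """Hypothetical signal-like object supporting arithmetic operators."""
    def __init__(self, value):
        self.value = value

    def __mul__(self, other):
        # e.g. patch * 0.5 attenuates the output by half
        other = other.value if isinstance(other, Signal) else other
        return Signal(self.value * other)

    def __add__(self, other):
        # e.g. patch + patch mixes two outputs together
        other = other.value if isinstance(other, Signal) else other
        return Signal(self.value + other)

output = Signal(1.0) * 0.5
assert output.value == 0.5
mix = Signal(0.25) + Signal(0.25)
assert mix.value == 0.5
```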
\u2192 Next: Patch properties
"},{"location":"patch/playback/","title":"Patch","text":""},{"location":"patch/playback/#playing-a-patch","title":"Playing a Patch","text":"Once a Patch
has been defined or imported, it can be instantiated in two different ways depending on how it was defined:
The simplest way to instantiate a Patch is by defining it as a Patch subclass, and then instantiating it in the same way as a Node.
class Hat (Patch):\n def __init__(self, duration=0.1):\n super().__init__()\n duration = self.add_input(\"duration\", duration)\n noise = WhiteNoise()\n env = ASREnvelope(0.0001, 0.0, duration, curve=2)\n output = noise * env\n self.set_output(output)\n self.set_auto_free(True)\n\nhat = Hat()\nhat.play()\n
Once a Patch has finished, its state changes to SIGNALFLOW_PATCH_STATE_STOPPED
.
Just as with nodes, it is important to remember that playing a patch means \"connecting it to the graph\". For this reason, it is not possible to play the same patch more than once, as it is already connected to the graph.
To play multiples of a particular Patch
type, simply create and play multiple instances.
Once a PatchSpec
has been created or imported, it can be played by instantiating a Patch
with the PatchSpec
as an argument:
patch = Patch(patch_spec)\npatch.play()\n
"},{"location":"patch/playback/#connecting-a-patch-to-another-patchs-input","title":"Connecting a Patch to another Patch's input","text":"A Patch
can be connected to the input of another Patch
(or Node), in exactly the same way described in Connecting a Node to another Node's input.
Once you have got to grips with this paradigm, it becomes simple to build up sophisticated processing graphs by abstracting complex functionality within individual Patch
objects, and connecting them to one another.
As in Node playback, stopping a Patch disconnects it from the AudioGraph. Patches with auto-free enabled are stopped automatically when their lifetime ends. Patches with an unlimited lifespan must be stopped manually, with:
patch.stop()\n
This disconnects the Patch from its output.
\u2192 Next: Patch inputs
"},{"location":"patch/properties/","title":"Patch","text":""},{"location":"patch/properties/#patch-properties","title":"Patch properties","text":"Property Type Description nodes list A list of all of the Node objects that make up this Patch inputs dict A dict of key-value pairs corresponding to all of the (audio rate) inputs within the Patch state int The Patch's current playback state, which can beSIGNALFLOW_PATCH_STATE_ACTIVE
or SIGNALFLOW_PATCH_STATE_STOPPED
. graph AudioGraph A reference to the AudioGraph that the Patch is part of \u2192 Next: Exporting and importing patches
"},{"location":"planning/NAMING/","title":"NAMING","text":""},{"location":"planning/NAMING/#nodes","title":"NODES","text":"Generators - Oscillators - Wavetable - Waveforms (all wrappers around Wavetable with band-limiting) - SineOscillator - SquareOscillator - TriangleOscillator - SawOscillator - LFO (all wrappers around Wavetable) - SineLFO - SquareLFO - TriangleLFO - SawLFO - Buffer - BufferPlayer - BufferRecorder - Stochastic - Processors - Panners - ChannelMixer - LinearPanner - AzimuthPanner - ObjectPanner - Delay - AllpassDelay - Delay - Effects - EQ - Gate - Resampler - Waveshaper
Stochastic - Random signal generators - WhiteNoise - PinkNoise - BrownNoise - PerlinNoise - Random number generators (with clocked inputs) - RandomUniform - RandomLinear - RandomBrownian - RandomExponentialDist - RandomGaussian - RandomBeta - Random event generators - RandomImpulse
-- PATCHES - Patch - PatchDef
"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 8bab7426..eeca54cd 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ