Chromatone
<?xml version="1.0" encoding="utf-8"?> <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"> <channel> <title>Chromatone</title> <link>https://chromatone.center</link> <description>Visual Music Language development updates and more</description> <lastBuildDate>Tue, 04 Mar 2025 13:32:09 GMT</lastBuildDate> <docs>https://validator.w3.org/feed/docs/rss2.html</docs> <generator>https://github.com/jpmonette/feed</generator> <language>en-EN</language> <copyright>Copyright (c) 2017-present, Denis Starov</copyright> <item> <title><![CDATA[Words sequencing]]></title> <link>https://chromatone.center/practice/generative/words/</link> <guid>https://chromatone.center/practice/generative/words/</guid> <pubDate>Tue, 04 Mar 2025 13:30:19 GMT</pubDate> <description><![CDATA[<script setup import { defineClientComponent } from 'vitepress' const Words = defineClientCompone]]></description> <content:encoded><![CDATA[<Words/>]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Words sequencing]]></title> <link>https://chromatone.center/practice/sequencing/words/</link> <guid>https://chromatone.center/practice/sequencing/words/</guid> <pubDate>Tue, 04 Mar 2025 13:30:19 GMT</pubDate> <description><![CDATA[This page is moved to https://chromatone.center/practice/generative/words/]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/generative/words/" target="_blank" rel="noreferrer">https://chromatone.center/practice/generative/words/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[ZZFXM]]></title> <link>https://chromatone.center/practice/sequencing/zzfxm/</link> <guid>https://chromatone.center/practice/sequencing/zzfxm/</guid> <pubDate>Tue, 04 Mar 2025 13:30:19 GMT</pubDate> <description><![CDATA[<script setup import { defineClientComponent } from 'vitepress' const ZzFxm = defineClientCompone]]></description> <content:encoded><![CDATA[<ZzFxm /><p>Work in progress. You can edit only the bpm value for now. Stay tuned!</p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[AMY - A high-performance fixed-point Music synthesizer librarY for microcontrollers]]></title> <link>https://chromatone.center/practice/synth/amy/README.html</link> <guid>https://chromatone.center/practice/synth/amy/README.html</guid> <pubDate>Tue, 04 Mar 2025 13:30:19 GMT</pubDate> <description><, Mac, Linux, ESP32, ESP32S3 and ESP32P4, Teensy 3.6, Teensy 4.1, the Raspberry Pi, the Playdate, Pi Pico RP2040, the Pi Pico 2 RP2530, iOS devices, the Electro-Smith Daisy (ARM Cortex M7), and more to come. It is highly optimized for polyphony and poly-timbral operation on even the lowest power and constrained RAM microcontroller but can scale to as many cores as you want. It can be used as a very good analog-type synthesizer (Juno-6 style) a FM synthesizer (DX7 style), a partial breakpoint synthesizer (Alles machine or Atari AMY), a [very good synthesized piano](https://shorepine.github.io/amy/piano.html), a sampler (where you load in your own PCM data), a drum machine (808-style PCM samples are included), or as a lower level toolkit to make your own combinations of oscillators, filters, LFOs and effects. 
# AMY - A high-performance fixed-point Music synthesizer librarY for microcontrollers

AMY is a fast, small and accurate music synthesizer library written in C, with Python and Arduino bindings, that deals with combinations of many oscillators very well. It can easily be embedded into almost any program, architecture or microcontroller. We've run AMY on [the web](https://shorepine.github.io/amy/), Mac, Linux, ESP32, ESP32S3 and ESP32P4, Teensy 3.6, Teensy 4.1, the Raspberry Pi, the Playdate, Pi Pico RP2040, the Pi Pico 2 RP2350, iOS devices, the Electro-Smith Daisy (ARM Cortex M7), and more to come. It is highly optimized for polyphonic and poly-timbral operation on even the lowest-power, RAM-constrained microcontrollers, but can scale to as many cores as you want.

It can be used as a very good analog-type synthesizer (Juno-6 style), an FM synthesizer (DX7 style), a partial breakpoint synthesizer (Alles machine or Atari AMY), a [very good synthesized piano](https://shorepine.github.io/amy/piano.html), a sampler (where you load in your own PCM data), a drum machine (808-style PCM samples are included), or as a lower-level toolkit to make your own combinations of oscillators, filters, LFOs and effects.

AMY powers the multi-speaker mesh synthesizer [Alles](https://github.com/shorepine/alles), as well as the [Tulip Creative Computer](https://github.com/shorepine/tulipcc). Let us know if you use AMY for your own projects and we'll add it here!

AMY was built by [DAn Ellis](https://research.google/people/DanEllis/) and [Brian Whitman](https://notes.variogram.com), who would love your contributions.

[![shore pine sound systems discord](https://raw.githubusercontent.com/shorepine/tulipcc/main/docs/pics/shorepine100.png) **Chat about AMY on our Discord!**](https://discord.gg/TzBFkUb8pG)

It supports:

* An arbitrary number (compile-time option) of band-limited oscillators, each with adjustable frequency and amplitude:
  * pulse (+ adjustable duty cycle)
  * sine
  * saw (up and down)
  * triangle
  * noise
  * PCM, reading from a baked-in buffer of percussive and misc samples, or by loading samples with looping and a base midi note
  * karplus-strong string with adjustable feedback
  * Stereo audio input can be used as an oscillator for real-time audio effects
  * An operator / algorithm-based frequency modulation (FM) synth
* Biquad low-pass, bandpass or hi-pass filters with cutoff and resonance, which can be assigned to any oscillator
* Reverb, echo and chorus effects, set globally
* Stereo pan or mono operation
* An additive partial synthesizer with an analysis front end to play back long strings of breakpoint-based sine waves
* Oscillators can be specified by frequency in floating point or by midi note
* Each oscillator has 2 envelope generators, which can modify any combination of amplitude, frequency, PWM duty, filter cutoff, or pan over time
* Each oscillator can also act as a modulator to modify any combination of parameters of another oscillator; for example, a bass drum can be indicated via a half-phase sine wave at 0.25 Hz modulating the frequency of another sine wave
* Control of overall gain and 3-band EQ
* Built-in patches for PCM, DX7, piano, Juno and partials
* A front end for Juno-6 patches and conversion setup commands
* Built-in clock and pattern sequencer
* Can use multi-core (including on microcontrollers) for rendering if available

The FM synth provides a Python library, [`fm.py`](https://github.com/shorepine/amy/blob/main/fm.py), that can convert any DX7 patch into AMY setup commands, and also a pure-Python implementation of the AMY FM synthesizer in [`dx7_simulator.py`](https://github.com/shorepine/amy/blob/main/experiments/dx7_simulator.py).

The partial tone synthesizer provides [`partials.py`](https://github.com/shorepine/amy/blob/main/partials.py), where you can model the partials of any arbitrary audio into AMY setup commands for live partial playback across hundreds of oscillators.

The Juno-6 emulation is in [`juno.py`](https://github.com/shorepine/amy/blob/main/juno.py); it can read in Juno-6 SYSEX patches, convert them into AMY commands and generate patches.

[The piano voice and the code to generate the partials are described here](https://shorepine.github.io/amy/piano.html).

## Using AMY in Arduino

Copy this repository to your `Arduino/libraries` folder as `Arduino/libraries/amy`, and `#include <AMY-Arduino.h>`. There are examples for the Pi Pico, ESP32 (and variants), and Teensy (works on 4.X and 3.6). Use the File->Examples->AMY Synthesizer menu to find them.

The examples rely on the following board packages and libraries:

* RP2040 / Pi Pico: [`arduino-pico`](https://arduino-pico.readthedocs.io/en/latest/install.html#installing-via-arduino-boards-manager)
* Teensy: [`teensyduino`](https://www.pjrc.com/teensy/td_download.html)
* ESP32/ESP32-S3/etc: [`arduino-esp32`](https://espressif-docs.readthedocs-hosted.com/projects/arduino-esp32/en/latest/installing.html) - use a 2.0.14+ version when installing
* The USB MIDI example requires the [MIDI Library](https://www.arduino.cc/reference/en/libraries/midi-library/)

You can use both cores of supported chips (RP2040 or ESP32) for more oscillators and voices. We provide Arduino examples for the Arduino ESP32 that render in multicore, and a `pico-sdk` example for the RP2040 that renders in multicore. If you really want to push the chips to the limit, we recommend native C code using the `pico-sdk` or `ESP-IDF`. We have a simple [dual-core ESP-IDF example available](https://github.com/shorepine/amy_dual_core_esp32), or you can see [Alles](https://github.com/shorepine/alles).

## Using AMY in Python on any platform

You can `import amy` in Python and have it render either out to your speakers or to a buffer of samples you can process on your own. To install the `libamy` library, run `cd src; pip install .`. You can also run `make test` to install the library and run a series of tests.
## Using AMY on the web

We provide an `emscripten` port of AMY that runs in javascript. [See the AMY web demo](https://shorepine.github.io/amy/). To build for the web, use `make docs/amy.js`; it will generate `amy.js` in `docs/`. You can also do `make docs/amy-audioin.js` to build a version of AMY for the web with audio input support -- but be warned this will ask your users for microphone access.

## Using AMY in any other software

To use AMY in your own software, simply copy the .c and .h files in `src` to your program and compile them. No other libraries should be required to synthesize audio in AMY. You'll want to make sure the configuration in `amy_config.h` is set up for your application / hardware.

To run a simple C example on many platforms:

```
make
./amy-example # you should hear tones out your default speaker, use ./amy-example -h for options
```

# Using AMY

> This section introduces AMY starting from the primitive oscillators. If your interest is mainly in using the preset patches to emulate a full synthesizer, you might skip to the [Voices and patches](#voices_and_patches) section.

AMY can be controlled using its wire protocol or by filling its data structures directly. Use whichever is easier for you and your application.

In Python, rendering to a buffer of samples, using the high level API:

```python
>>> import amy
>>> m = amy.message(voices='0', load_patch=130, note=50, vel=1)
>>> print(m) # Show the wire protocol message
v0n50l1K130r0Z
>>> amy.send_raw(m)
>>> # This plays immediately on Tulip, but if you're running AMY in regular Python, you can get the waveform from render:
>>> audio = amy.render(5.0)
```

You can also start a thread playing live audio:
tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">>>></span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> import</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amy</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">>>></span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amy.live() </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># can optinally pass in playback and capture audio device IDs, amy.live(2, 1) </span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">>>></span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">voices</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">load_patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">130</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">50</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">>>></span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amy.stop()</span></span></code></pre> </div><p>In C, using the high level structures directly;</p> <div class="language-c vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">c</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">#include</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF"> "amy.h"</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">void</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> bleep</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">() {</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> struct</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> event e </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_default_event</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">();</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> int32_t</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> start </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_sysclock</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">();</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // Right now..</span></span> <span class="line"><span 
style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.time </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> start;</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.osc </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.wave </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> SINE;</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.freq_coefs[COEF_CONST] </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 220</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.velocity </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // start a 220 Hz sine.</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_add_event</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(e);</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.time </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> start </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">+</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 150</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // in 150 ms..</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.freq_coefs[COEF_CONST] </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 440</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // change to 440 Hz.</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_add_event</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(e);</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.time </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> sysclock </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">+</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 300</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // in 300 ms..</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> e.velocity </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">;</span><span 
style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // note off.</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_add_event</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(e);</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">void</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> main</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">() {</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_start</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D">/* cores= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* reverb= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* chorus= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* echo */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // initializes amy </span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_live_start</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // render live audio</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> bleep</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">();</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}</span></span></code></pre> </div><p>Or in C, sending the wire protocol directly:</p> <div class="language-c vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">c</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">#include</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF"> "amy.h"</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">void</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> main</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">() {</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_start</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D">/* cores= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* reverb= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span 
style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* chorus= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* echo */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_live_start</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_play_message</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">"v0n50l1K130r0Z"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}</span></span></code></pre> </div><p>If you want to receive buffers of samples, or have more control over the rendering pipeline to support multi-core, instead of using <code>amy_live_start()</code>:</p> <div class="language-c vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">c</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">#include</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF"> "amy.h"</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">...</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">amy_start</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D">/* cores= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* reverb= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* chorus= */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> /* echo */</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">...</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">... 
{</span></span> <span class="line"><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // For each sample block:</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_prepare_buffer</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">();</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // prepare to render this block</span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_render</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, OSCS</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">/</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // render oscillators 0 - OSCS/2 on core 0</span></span> <span class="line"><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // on the other core... </span></span> <span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_render</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(OSCS</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">/</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, OSCS, </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">);</span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // render oscillators OSCS/2-OSCS on core 1</span></span> <span class="line"><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // when they are both done..</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> int16_t</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> *</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> samples </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> amy_fill_buffer</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">();</span></span> <span class="line"><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"> // do what you want with samples</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">... }</span></span></code></pre> </div><p>On storage constrained devices, you may want to limit the amount of PCM samples we ship with AMY. 
On storage-constrained devices, you may want to limit the amount of PCM sample data we ship with AMY. To do this, include a smaller set after including `amy.h`, like:

```c
#include "amy.h"
#include "pcm_tiny.h"
// or, #include "pcm_small.h"
```

# Wire protocol

AMY's wire protocol is a series of numbers delimited by ascii characters that define all possible parameters of an oscillator. This is a design decision intended to make using AMY from any sort of environment as easy as possible, with no data structure or parsing overhead on the client. It's also readable and compact, far more expressive than MIDI, and can be sent over network links, UARTs, or as arguments to functions or commands. We've used AMY over multicast UDP, over Javascript, in Max/MSP, in Python, C, Micropython and many more!

AMY accepts commands in ASCII, like so:

```
v0w4f440.0l1.0Z
```

This example controls osc 0 (`v0`), sets its waveform to triangle (`w4`), sets its frequency to 440.0 Hz (`f440.0`), and its velocity (i.e. amplitude) to 1 (`l1.0`). The final `Z` is a terminator indicating the message is complete.
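You rarely need to type these strings by hand: the `amy.message()` call shown earlier builds them from the keyword arguments listed in the table below. A rough sketch, assuming the `amy` module exposes the waveform constants named in the `w` row (e.g. `amy.TRIANGLE` for code 4); the plain integer should work if it does not:

```python
# Sketch: build the same triangle-wave command from Python rather than writing
# the wire string by hand. amy.TRIANGLE (waveform code 4) is assumed here;
# wave=4 can be used directly if the constant is not exposed.
import amy

m = amy.message(osc=0, wave=amy.TRIANGLE, freq=440.0, vel=1.0)
print(m)         # expected to be equivalent to "v0w4f440.0l1.0Z"
amy.send_raw(m)  # or simply amy.send(osc=0, wave=amy.TRIANGLE, freq=440.0, vel=1.0)
```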
Here's the full list:

| Code | Python | Type-range | Notes |
| --- | --- | --- | --- |
| `a` | `amp` | float[,float...] | Control the amplitude of a note; a set of ControlCoefficients. Default is 0,0,1,1 (i.e. the amplitude comes from the note velocity multiplied by Envelope Generator 0.) |
| `A` | `bp0` | string | Envelope Generator 0's comma-separated breakpoint pairs of time(ms) and level, e.g. `100,0.5,50,0.25,200,0`. The last pair triggers on note off (release) |
| `b` | `feedback` | float 0-1 | Use for the ALGO synthesis type in FM or for karplus-strong, or to indicate PCM looping (0 off, >0 on) |
| `B` | `bp1` | string | Breakpoints for Envelope Generator 1. See bp0 |
| `c` | `chained_osc` | uint 0 to OSCS-1 | Chained oscillator. Note/velocity events to this oscillator will propagate to chained oscillators. The VCF is run only for the first osc in a chain, but applies to all oscs in the chain. |
| `d` | `duty` | float[,float...] | Duty cycle for pulse wave, ControlCoefficients, defaults to 0.5 |
| `D` | `debug` | uint, 2-4 | 2 shows queue sample, 3 shows oscillator data, 4 shows modified oscillator. Will interrupt audio! |
| `f` | `freq` | float[,float...] | Frequency of oscillator, set of ControlCoefficients. Default is 0,1,0,0,0,0,1 (from `note` pitch plus `pitch_bend`) |
| `F` | `filter_freq` | float[,float...] | Center/break frequency for variable filter, set of ControlCoefficients |
| `G` | `filter_type` | 0-4 | Filter type: 0 = none (default), 1 = lowpass, 2 = bandpass, 3 = highpass, 4 = double-order lowpass |
| `H` | `sequence` | int,int,int | Tick offset, period, tag for sequencing |
| `h` | `reverb` | float[,float,float,float] | Reverb parameters -- level, liveness, damping, xover: level is for output mix; liveness controls decay time, 1 = longest, default 0.85; damping is extra decay of high frequencies, default 0.5; xover is damping crossover frequency, default 3000 Hz |
| `I` | `ratio` | float | For ALGO types, ratio of modulator frequency to base note frequency / for the PARTIALS base note, ratio controls the speed of the playback |
| `j` | `tempo` | float | The tempo (BPM, quarter notes) of the sequencer. Defaults to 108.0 |
| `k` | `chorus` | float[,float,float,float] | Chorus parameters -- level, delay, freq, depth: level is for output mix (0 to turn off); delay is max in samples (320); freq is LFO rate in Hz (0.5); depth is proportion of max delay (0.5) |
| `K` | `load_patch` | uint 0-X | Apply a saved patch (e.g. DX7 or Juno) to a specified voice (or starting at the selected oscillator) |
| `l` | `vel` | float 0-1+ | Velocity: > 0 to trigger note on, 0 to trigger note off |
| `L` | `mod_source` | 0 to OSCS-1 | Which oscillator is used as a modulation/LFO source for this oscillator. The source oscillator will be silent. |
| `m` | `portamento` | uint | Time constant (in ms) for pitch changes when the note is changed without an intervening note-off. Default 0 (immediate); 100 is good |
| `M` | `echo` | float[,int,int,float,float] | Echo parameters -- level, delay_ms, max_delay_ms, feedback, filter_coef (-1 is HPF, 0 is flat, +1 is LPF) |
| `n` | `note` | float, but typ. uint 0-127 | Midi note, sets frequency. Fractional midi notes are allowed |
| `N` | `latency_ms` | uint | Sets latency in ms. Default 0 (see LATENCY) |
| `o` | `algorithm` | uint 1-32 | DX7 FM algorithm to use for ALGO type |
| `O` | `algo_source` | string | Which oscillators to use for the FM algorithm. List of six (starting with op 6); leave an entry empty for not used, e.g. 0,1,2 or 0,1,2,,, |
| `p` | `patch` | int | Which predefined PCM or Partials patch to use, or number of partials if < 0. (Juno/DX7 patches are different - see `load_patch`) |
| `P` | `phase` | float 0-1 | Where in the oscillator's cycle to begin the waveform (also works on the PCM buffer). Default 0 |
| `Q` | `pan` | float[,float...] | Panning index ControlCoefficients (for stereo output), 0.0=left, 1.0=right. Default 0.5 |
| `r` | `voices` | int[,int] | Comma-separated list of voices to send the message to, or to load a patch into |
| `R` | `resonance` | float | Q factor of variable filter, 0.5-16.0. Default 0.7 |
| `s` | `pitch_bend` | float | Sets the global pitch bend, by default modifying all note frequencies by (fractional) octaves up or down |
| `S` | `reset` | uint | Resets the given oscillator. Set to RESET_ALL_OSCS to reset all oscillators, gain and EQ. RESET_TIMEBASE resets the clock (immediately, ignoring `time`). RESET_AMY restarts AMY. RESET_SEQUENCER clears the sequencer. |
| `t` | `time` | uint | Request playback time relative to some fixed start point on your host, in ms. Allows precise future scheduling. |
| `T` | `eg0_type` | uint 0-3 | Type for Envelope Generator 0 - 0: Normal (RC-like) / 1: Linear / 2: DX7-style / 3: True exponential |
| `u` | `store_patch` | number,string | Store up to 32 patches in RAM with an ID number (1024-1055) and an AMY message after a comma. Must be sent alone |
| `v` | `osc` | uint 0 to OSCS-1 | Which oscillator to control |
| `V` | `volume` | float 0-10 | Volume knob for the entire synth, default 1.0 |
| `w` | `wave` | uint 0-16 | Waveform: [0=SINE, PULSE, SAW_DOWN, SAW_UP, TRIANGLE, NOISE, KS, PCM, ALGO, PARTIAL, PARTIALS, BYO_PARTIALS, INTERP_PARTIALS, AUDIO_IN0, AUDIO_IN1, CUSTOM, OFF]. Default: 0/SINE |
| `x` | `eq` | float,float,float | Equalization in dB low (~800Hz) / med (~2500Hz) / high (~7500Hz), -15 to 15. 0 is off. Default 0 |
| `X` | `eg1_type` | uint 0-3 | Type for Envelope Generator 1 - 0: Normal (RC-like) / 1: Linear / 2: DX7-style / 3: True exponential |
| `z` | `load_sample` | uint x 6 | Signal to start loading a sample: patch, length(samples), samplerate, midinote, loopstart, loopend. All subsequent messages are base64-encoded WAVE-style frames of audio until `length` is reached. Set `patch` and `length=0` to unload a sample from RAM |
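As an illustration of how a few of these codes combine, the sketch below plays a low-pass-filtered saw note with an amplitude envelope, using the Python names from the table. `amy.SAW_DOWN` (waveform code 2 above) is assumed to be exposed by the module; substitute `wave=2` otherwise.

```python
# Illustrative sketch combining several of the parameters above, via their
# Python names. Assumes amy.SAW_DOWN is exposed (waveform code 2 in the table);
# substitute wave=2 if it is not.
import time
import amy

amy.live()
amy.send(osc=0,
         wave=amy.SAW_DOWN,         # w: waveform
         filter_type=1,             # G: 1 = lowpass
         filter_freq=800,           # F: fixed cutoff, the constant ControlCoefficient
         resonance=2.0,             # R: filter Q
         bp0='50,1,400,0.2,200,0',  # A: amplitude envelope (time-ms,level pairs)
         note=48, vel=1)            # n, l: note on
time.sleep(1)
amy.send(osc=0, vel=0)              # l=0: note off, triggers the release segment
amy.stop()
```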
All subsequent messages are base64 encoded WAVE-style frames of audio until <code>length</code> is reached. Set <code>patch</code> and <code>length=0</code> to unload a sample from RAM.</td> </tr> </tbody> </table> <h1 id="synthesizer-details" tabindex="-1">Synthesizer details <a class="header-anchor" href="#synthesizer-details" aria-label="Permalink to "Synthesizer details""></a></h1> <p>We'll use Python for showing examples of AMY. Maybe you're running under <a href="https://github.com/shorepine/tulipcc" target="_blank" rel="noreferrer">Tulip</a>, in which case AMY is already loaded, but if you're running under standard Python, make sure you've installed <code>libamy</code> and are running a live AMY first by running <code>make test</code> and then:</p> <div class="language-bash vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">bash</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">python</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">>>> </span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">import</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF"> amy</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">>>> </span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">amy.live</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">()</span></span></code></pre> </div><h2 id="amy-s-sequencer-and-timestamps" tabindex="-1">AMY's sequencer and timestamps <a class="header-anchor" href="#amy-s-sequencer-and-timestamps" aria-label="Permalink to "AMY's sequencer and timestamps""></a></h2> <p>AMY is meant to either receive messages in real time or scheduled events in the future. It can be used as a sequencer where you can schedule notes to play in the future or on a divider of the clock.</p> <p>The scheduled events are very helpful in cases where you can't rely on an accurate clock from the client, or don't have one. The clock used internally by AMY is based on the audio samples being generated out the speakers, which should run at an accurate 44,100 times a second. 
This lets you do things like schedule fast moving parameter changes over short windows of time.</p> <p>For example, to play two notes, one a second after the first, you could do:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">50</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">time.sleep(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">52</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>But you'd be at the mercy of Python's internal timing, or your OS. 
A more precise way is to send the messages at the same time, but to indicate the intended time of the playback:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">start </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amy.millis() </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># arbitrary start timestamp</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">50</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">time</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">start)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">52</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">time</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">start </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">+</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1000</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>Both <code>amy.send()</code>s will return immediately, but you'll hear the second note play precisely a second after the first. 
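</p> <p>The same timestamped sends work for fast-moving parameter changes: queue a burst of messages up front and AMY's clock plays them back on schedule. A minimal sketch (the 25 ms spacing and the pitch step size are arbitrary illustration values):</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>amy.send(osc=0, wave=amy.SINE, freq=220, vel=1)</span></span> <span class="line"><span>start = amy.millis()</span></span> <span class="line"><span>for i in range(20):</span></span> <span class="line"><span>    # one small upward pitch step every 25 ms, half a second in total</span></span> <span class="line"><span>    amy.send(osc=0, freq=220 + 11 * i, time=start + 25 * i)</span></span></code></pre> </div><p>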
AMY uses this internal clock to schedule step changes in breakpoints as well.</p> <h3 id="the-sequencer" tabindex="-1">The sequencer <a class="header-anchor" href="#the-sequencer" aria-label="Permalink to "The sequencer""></a></h3> <p>On supported platforms (right now any unix device with pthreads, and the ESP32 or related chip), AMY starts a sequencer that works on <code>ticks</code> from startup. You can reset the <code>ticks</code> to 0 with an <code>amy.send(reset=amy.RESET_TIMEBASE)</code>. Note this will happen immediately, ignoring any <code>time</code> or <code>sequence</code>.</p> <p>Ticks run at 48 PPQ at the set tempo. The tempo defaults to 108 BPM. This means there are 108 quarter notes a minute, and <code>48 * 108 = 5184</code> ticks a minute, or 86.4 ticks a second. The tempo can be changed with <code>amy.send(tempo=120)</code>.</p> <p>You can schedule an event to happen at a precise tick with <code>amy.send(... ,sequence="tick,period,tag")</code>. <code>tick</code> can be an absolute or offset tick number. If <code>period</code> is omitted or 0, <code>tick</code> is assumed to be absolute and once AMY reaches <code>tick</code>, the rest of your event will play and the saved event will be removed from memory. If an absolute <code>tick</code> is in the past, AMY will ignore it.</p> <p>You can schedule repeating events (like a step sequencer or drum machine) with <code>period</code>, which is the length of the sequence in ticks. For example, a <code>period</code> of 48 with <code>tick</code> omitted or 0 will trigger once every quarter note. A <code>period</code> of 24 will happen twice every quarter note. A <code>period</code> of 96 will happen every two quarter notes. <code>period</code> can be any whole number to allow for complex rhythms.</p> <p>For pattern sequencers like drum machines, you will also want to use <code>tick</code> alongside <code>period</code>. If both are given and nonzero, <code>tick</code> is assumed to be an offset on the <code>period</code>. For example, for a 16-step drum machine pattern running on eighth notes (PPQ/2), you would use a <code>period</code> of <code>16 * 24 = 384</code>. The first slot of the drum machine would have a <code>tick</code> of 0, the 2nd would have a <code>tick</code> offset of 24, and so on.</p> <p><code>tag</code> should be given, and will be <code>0</code> if not. You should set <code>tag</code> to a random or incrementing number in your code that you can refer to later. <code>tag</code> allows you to replace or delete the event once scheduled.</p> <p>If you are including AMY in a program, you can set the hook <code>void (*amy_external_sequencer_hook)(uint32_t)</code> to any function.
This will be called at every tick with the current tick number as an argument.</p> <p>Sequencer examples:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">"1000,,3"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># play a PCM drum at absolute tick 1000 </span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">",24,1"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># play a PCM drum every eighth 
note.</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">",48,2"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># play a PCM drum every quarter note.</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">",,1"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># remove the eighth note sequence</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">70</span><span 
style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">",48,2"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># change the quarter note event</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">reset</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">RESET_SEQUENCER</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># clears the sequence</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">"0,384,1"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># first slot of a 16 1/8th note drum machine</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span 
style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">sequence</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">"216,384,2"</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># ninth slot of a 16 1/8th note drum machine</span></span></code></pre> </div><h2 id="examples" tabindex="-1">Examples <a class="header-anchor" href="#examples" aria-label="Permalink to "Examples""></a></h2> <p><code>amy.drums()</code> should play a test pattern.</p> <p>Try to set the volume of the synth with <code>amy.send(volume=2)</code> -- that can be up to 10 or so. The default is 1.</p> <p><code>amy.reset()</code> resets all oscillators to default. You can also do <code>amy.reset(osc=5)</code> to do just one oscillator.</p> <p>Let's set a simple sine wave first</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SINE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">220</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>We are simply telling oscillator 0 to be a sine wave at 220Hz and amplitude (specified as a note-on velocity) of 1. 
You can also try <code>amy.PULSE</code>, or <code>amy.SAW_DOWN</code>, etc.</p> <p>To turn off the note, send a note off (velocity zero):</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Note off.</span></span></code></pre> </div><p>You can also make oscillators louder with <code>vel</code> larger than 1. By default, the total amplitude comes from multiplying together the oscillator amplitude (i.e., the natural level of the oscillator, which is 1 by default) and the velocity (the particular level of this note event) -- however, this can be changed by changing the default values of the <code>amp</code> <strong>ControlCoefficients</strong> (see below).</p> <p>You can also use <code>note</code> (MIDI note value) instead of <code>freq</code> to control the oscillator frequency for each note event:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.reset()</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SINE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">57</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>This won't work as intended without the <code>amy.reset()</code>, because once you've set the oscillator to a non-default frequency with <code>freq=220</code>, it will act as an offset to the frequency specified by <code>note</code>. 
(See <strong>ControlCoefficients</strong> below to see how to control this behavior).</p> <p>Now let's make a lot of sine waves!</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">import</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> time</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.reset()</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">for</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> i </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">in</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> range</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">16</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">):</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">i, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SINE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">110</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">+</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(i</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">*</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">80</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">), </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">((</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">16</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">-</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">i)</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">/</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">32.0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">))</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> time.sleep(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Sleep for 0.5 seconds</span></span></code></pre> </div><p>Neat! You can see how simple / powerful it is to have control over lots of oscillators. You have up to 64 (or more, depending on your platform). Let's make it more interesting. A classic analog tone is the filtered saw wave. 
Let's make one.</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SAW_DOWN</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">filter_freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">400</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">resonance</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">filter_type</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">FILTER_LPF</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">40</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>You want to be able to stop the note too by sending a note off:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span 
style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>Sounds nice. But we want that filter freq to go down over time, to make that classic filter sweep tone. Let's use an Envelope Generator! An Envelope Generator (EG) creates a smooth time envelope based on a breakpoint set, which is a simple list of (time-delta, target-value) pairs - you can have up to 8 of these per EG, and 2 different EGs to control different things. They're just like ADSRs, but more powerful. You can use an EG to control amplitude, oscillator frequency, filter cutoff frequency, PWM duty cycle, or stereo pan. The EG gets triggered when the note begins. So let's make an EG that turns the filter frequency down from its start at 3200 Hz to 400 Hz over 1000 milliseconds. And when the note goes off, it tapers the frequency to 50 Hz over 200 milliseconds.</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SAW_DOWN</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">resonance</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">filter_type</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">FILTER_LPF</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">filter_freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'50,0,0,0,1'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">bp1</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'0,6.0,1000,3.0,200,0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span 
style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">40</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>There are two things to note here: Firstly, the envelope is defined by the set of breakpoints in <code>bp1</code> (defining the second EG; the first is controlled by <code>bp0</code>). The <code>bp</code> strings alternate time intervals in milliseconds with target values. So <code>0,6.0,1000,3.0,200,0</code> means to move to 6.0 after 0 ms (i.e., the initial value is 6), then to decay to 3.0 over the next 1000 ms (1 second). The final pair is always taken as the "release", and does not begin until the note-off event is received. In this case, the EG decays to 0 in the 200 ms after the note-off.</p> <p>Secondly, EG1 is coupled to the filter frequency with <code>filter_freq='50,0,0,0,1'</code>. <code>filter_freq</code> is an example of a set of <strong>ControlCoefficients</strong> where the control value is calculated on-the-fly by combining a set of inputs scaled by the coefficients. This is explained fully below, but for now the first coefficient (here 50) is taken as a constant, and the 5th coefficient (here 1) is applied to the output of EG1. To get good "musical" behavior, the filter frequency is controlled using a "unit per octave" rule. So if the envelope is zero, the filter is at its base frequency of 50 Hz. But the envelope starts at 6.0, which, after scaling by the control coefficient of 1, drives the filter frequency 6 octaves higher, or 2^6 = 64x the frequency -- 3200 Hz. As the envelope decays to 3.0 over the first 1000 ms, the filter moves to 2^3 = 8x the default frequency, giving 400 Hz. It's only during the final release of 200 ms that it falls back to 0, giving a final filter frequency of (2^0 = 1x) 50 Hz.</p> <h3 id="controlcoefficients" tabindex="-1">ControlCoefficients <a class="header-anchor" href="#controlcoefficients" aria-label="Permalink to "ControlCoefficients""></a></h3> <p>The full set of parameters accepting <strong>ControlCoefficients</strong> is <code>amp</code>, <code>freq</code>, <code>filter_freq</code>, <code>duty</code>, and <code>pan</code>. ControlCoefficients are a list of up to 7 floats that are multiplied by a range of control signals, then summed up to give the final result (in this case, the filter frequency). 
The control signals are:</p> <ul> <li><code>const</code>: A constant value of 1 - so the first number in the control coefficient list is the default value if all the others are zero.</li> <li><code>note</code>: The frequency corresponding to the <code>note</code> parameter to the note-on event (converted to unit-per-octave relative to middle C).</li> <li><code>vel</code>: The velocity, from the note-on event.</li> <li><code>eg0</code>: The output of Envelope Generator 0.</li> <li><code>eg1</code>: The output of Envelope Generator 1.</li> <li><code>mod</code>: The output of the modulating oscillator, specified by the <code>mod_source</code> parameter.</li> <li><code>bend</code>: The current pitch bend value (from <code>amy.send(pitch_bend=0.5)</code> etc.).</li> </ul> <p>The set <code>50,0,0,0,1</code> means that we have a base frequency of 50 Hz; we ignore the note frequency, velocity and EG0, but we add the output of EG1. Any coefficients that you do not specify, for instance by providing fewer than 7 values, are not modified. You can also use empty strings to skip positional values, so <code>filter_freq=',,,,1'</code> couples EG1 to the filter frequency without changing any of the other coefficients. (Note that when we passed <code>freq=220</code> in the first example, that was interpreted as setting the <code>const</code> coefficient to 220, but leaving all the remaining coefficients untouched.)</p> <p>Because entering lists of commas is error-prone, you can also specify control coefficients as Python dicts whose keys come from the list above, i.e. <code>filter_freq={'const': 50, 'eg1': 1}</code> is equivalent to <code>filter_freq='50,,,,1'</code>.</p> <p>You can use the same EG to control several things at once. For example, we could include <code>freq=',,,,0.333'</code>, which says to modify the note frequency from the same EG1 as is controlling the filter frequency, but scaled down by 1/3rd so the initial decay is over 1 octave, not 3. Give it a go!</p> <p>The note frequency is scaled relative to a zero-point of middle C (MIDI note 60, 261.63 Hz), so to make the oscillator faithfully track the <code>note</code> parameter to the note-on event, you would use something like <code>freq='261.63,1'</code>. Setting it to <code>freq='523.26,1'</code> would make the oscillator always be one octave higher than the <code>note</code> MIDI number. Setting <code>freq='261.63,0.5'</code> would make the oscillator track the <code>note</code> parameter at half an octave per unit, so while <code>note=60</code> would still give middle C, <code>note=72</code> (C5) would make the oscillator run at F#4, and <code>note=84</code> (C6) would be required to get C5 from the oscillator.</p> <p>The default set of ControlCoefficients for <code>freq</code> is <code>'261.63,1,0,0,0,0,1'</code>, i.e. a base of middle C, tracking the MIDI note, plus pitch bend (at unit-per-octave). Because 261.63 is such an important value, as a special case, setting the first <code>freq</code> value to zero is magically rewritten as 261.63, so <code>freq='0,1,0,0,0,0,1'</code> also yields the default behavior. <code>amp</code> also has a set of defaults: <code>amp='0,0,1,1,0,0,0'</code>, i.e. tracking note-on velocity plus modulation by EG0 (which just tracks the note-on status if it has not been set up). <code>amp</code> is a little special because the individual components are <em>multiplied</em> together, instead of added together, for any control inputs with nonzero coefficients.
Finally, an offset of 1.0 is added to the coefficient-scaled LFO modulator and pitch bend inputs before multiplying them into the amplitude, to allow small variations around unity e.g. for tremolo. These defaults are set up in <a href="https://github.com/shorepine/amy/blob/b1ed189b01e6b908bc19f18a4e0a85761d739807/src/amy.c#L551" target="_blank" rel="noreferrer"><code>src/amy.c:reset_osc()</code></a>.</p> <p>We also have LFOs, which are implemented as one oscillator modulating another (instead of sending its waveform to the output). You set up the low-frequency oscillator, then have it control a parameter of another audible oscillator. Let's make the classic 8-bit duty cycle pulse wave modulation, a favorite:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.reset() </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Clear the state.</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SINE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">amp</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># We set the amp but not the vel, so it doesn't sound.</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PULSE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">duty</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">{</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'const'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span 
style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'mod'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.4</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">mod_source</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">60</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>You see we first set up the modulation oscillator (a sine wave at 0.5Hz, with amplitude of 1). We do <em>not</em> send it a velocity, because that would make it start sending a 0.5 Hz sinewave to the audio output; we want its output only to be used internally. Then we set up the oscillator to be modulated, a pulse wave with a modulation source of oscillator 1 and the duty <strong>ControlCoefficients</strong> set to have a constant value of 0.5 plus 0.4 times the modulating input (i.e., the depth of the pulse width modulation, where 0.4 modulates between 0.1 and 0.9, almost the maximum depth). The initial duty cycle will start at 0.5 and be offset by the state of oscillator 1 every tick, to make that classic thick saw line from the C64 et al. The modulation will re-trigger every note on. Just like with envelope generators, the modulation oscillator has a 'slot' in the ControlCoefficients - the 6th coefficient, <code>mod</code> - so it can modulate PWM duty cycle, amplitude, frequency, filter frequency, or pan! 
And if you want to modulate more than one thing, like frequency and duty, just specify multiple ControlCoefficients:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">TRIANGLE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">amp</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PULSE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">duty</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">{</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'const'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'mod'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.25</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">{</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'mod'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">mod_source</span><span 
style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">60</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p><code>amy.py</code> has some helpful presets, if you want to use them, or add to them. To make that filter bass, just do <code>amy.preset(1, osc=0)</code> and then <code>amy.send(osc=0, vel=1, note=40)</code> to hear it. Here's another one:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.preset(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># will set a simple sine wave tone on oscillator 2</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">50</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># will play the note at velocity 1.5</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span 
style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># will send a "note off" -- you'll hear the note release</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">freq</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">220.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1.5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># same but specifying the frequency</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.reset()</span></span></code></pre> </div><h2 id="core-oscillators" tabindex="-1">Core oscillators <a class="header-anchor" href="#core-oscillators" aria-label="Permalink to "Core oscillators""></a></h2> <p>We support bandlimited saw, pulse/square and triangle waves, alongside sine and noise. Use the wave parameter: 0=SINE, PULSE, SAW_DOWN, SAW_UP, TRIANGLE, NOISE. Each oscillator can have a frequency (or set by midi note), amplitude and phase (set in 0-1.). You can also set <code>duty</code> for the pulse type. We also have a karplus-strong type (KS=6).</p> <p>Oscillators will not become audible until a <code>velocity</code> over 0 is set for the oscillator. This is a "note on" and will trigger any modulators or envelope generators set for that oscillator. Setting <code>velocity</code> to 0 sets a note off, which will stop modulators and also finish the envelopes at their release pair. <code>velocity</code> also internally sets <code>amplitude</code>, but you can manually set <code>amplitude</code> after <code>velocity</code> starts a note on.</p> <h2 id="lfos-modulators" tabindex="-1">LFOs & modulators <a class="header-anchor" href="#lfos-modulators" aria-label="Permalink to "LFOs & modulators""></a></h2> <p>Any oscillator can modulate any other oscillator. For example, a LFO can be specified by setting oscillator 0 to 0.25Hz sine, with oscillator 1 being a 440Hz sine. Using the 6th parameter of <strong>ControlCoefficient</strong> lists, you can have oscillator 0 modulate frequency, amplitude, filter frequency, or pan of oscillator 1. You can also add targets together, for example amplitude+frequency. Set the <code>mod_target</code> and <code>mod_source</code> on the audible oscillator (in this case, oscillator 1.) The source mod oscillator will not be audible once it is referred to as a <code>mod_source</code> by another oscillator. 
The amplitude of the modulating oscillator indicates how strong the modulation is (aka "LFO depth").</p> <h2 id="filters" tabindex="-1">Filters <a class="header-anchor" href="#filters" aria-label="Permalink to "Filters""></a></h2> <p>We support lowpass, bandpass and hipass filters in AMY. You can set <code>resonance</code> and <code>filter_freq</code> per oscillator.</p> <h2 id="eq-volume" tabindex="-1">EQ & Volume <a class="header-anchor" href="#eq-volume" aria-label="Permalink to "EQ & Volume""></a></h2> <p>You can set a synth-wide volume (in practice, 0-10), or set the EQ of the entire synth's output.</p> <h2 id="envelope-generators" tabindex="-1">Envelope Generators <a class="header-anchor" href="#envelope-generators" aria-label="Permalink to "Envelope Generators""></a></h2> <p>AMY allows you to set 2 Envelope Generators (EGs) per oscillator. You can see these as ADSR / envelopes (and they can perform the same task), but they are slightly more capable. Breakpoints are defined as pairs of time deltas (specified in milliseconds) and target values. You can specify up to 8 pairs, but the last pair you specify will always be seen as the "release" pair, which doesn't trigger until note off. All preceding pairs have time deltas relative to the previous segment, so <code>100,1,100,0,0,0</code> goes up to 1 over 100 ms, then back down to zero over the next 100 ms. The last "release" pair counts in ms from the note-off.</p> <p>An EG can control amplitude, frequency, filter frequency, duty or pan of an oscillator via the 4th (EG0) and 5th (EG1) entries in the corresponding ControlCoefficients.</p> <p>For example, to define a common ADSR curve where a sound sweeps up in volume from note on over 50ms, then has a 100ms decay stage to 50% of the volume, then is held until note off, at which point it takes 250ms to trail off to 0, you'd set time to be 50ms and target to be 1.0, then 100ms with target .5, then a 250ms release with target 0. By default, amplitude is set up to be controlled by EG0. At every synthesizer tick, the given amplitude (default of 1.0) will be multiplied by the EG0 value. In AMY wire parlance, this would look like <code>v0f220w0A50,1.0,100,0.5,250,0</code> to specify a sine wave at 220Hz with this envelope.</p> <p>When using <code>amy.py</code>, use the string form of the breakpoint: <code>amy.send(osc=0, bp0='50,1.0,100,0.5,250,0')</code>.</p> <p>Every note on (specified by setting <code>vel</code> / <code>l</code> to anything > 0) will trigger this envelope, and setting velocity to 0 will trigger the note off / release section.</p> <p>You can set a completely separate envelope using the second envelope generator, for example, to change pitch and amplitude at different rates.</p> <p>As with ControlCoefficients, missing values in the comma-separated parameter strings mean to leave the existing value unchanged. However, unlike ControlCoefficients, it's important to explicitly indicate every value you want to leave unchanged, since the number of parameters provided determines the number of breakpoints in the set. So in the following sequence:</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>amy.send(osc=0, bp0='0,1,1000,0.1,200,0')</span></span> <span class="line"><span>amy.send(osc=0, bp0=',,,0.9,,')</span></span></code></pre> </div><p>..
<h2 id="filters">Filters</h2> <p>We support lowpass, bandpass and highpass filters in AMY. You can set <code>resonance</code> and <code>filter_freq</code> per oscillator.</p>
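<p>For instance, a resonant lowpass on a saw wave (a sketch; the <code>filter</code> keyword follows the <code>FILTER_HPF</code> example later on this page, and <code>FILTER_LPF</code> is assumed by analogy with that constant):</p> <pre><code class="language-python">amy.send(osc=0, wave=amy.SAW_DOWN, filter=amy.FILTER_LPF,
         filter_freq=800, resonance=2)   # darken the saw, with a resonant peak
amy.send(osc=0, note=40, vel=1)
</code></pre>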
<h2 id="eq-volume">EQ & Volume</h2> <p>You can set a synth-wide volume (in practice, 0-10), or set the EQ of the entire synth's output.</p>
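<p>For example, assuming the global gain is exposed as the <code>volume</code> parameter in <code>amy.py</code>:</p> <pre><code class="language-python">amy.send(volume=5)    # mid-scale on the practical 0-10 range
amy.send(volume=0.5)  # much quieter overall output
</code></pre>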
<h2 id="envelope-generators">Envelope Generators</h2> <p>AMY allows you to set 2 Envelope Generators (EGs) per oscillator. You can think of these as ADSR envelopes (and they can perform the same task), but they are slightly more capable. Breakpoints are defined as pairs of time deltas (specified in milliseconds) and target values. You can specify up to 8 pairs, but the last pair you specify will always be seen as the "release" pair, which doesn't trigger until note off. All preceding pairs have time deltas relative to the previous segment, so <code>100,1,100,0,0,0</code> goes up to 1 over 100ms, then back down to zero over the next 100ms. The last "release" pair counts its ms from the note-off.</p> <p>An EG can control amplitude, frequency, filter frequency, duty or pan of an oscillator via the 4th (EG0) and 5th (EG1) entries in the corresponding ControlCoefficients.</p> <p>For example, to define a common ADSR curve where a sound sweeps up in volume from note on over 50ms, then has a 100ms decay stage to 50% of the volume, then is held until note off, at which point it takes 250ms to trail off to 0, you'd set time to be 50ms and target to be 1.0, then 100ms with target 0.5, then a 250ms release with target 0. By default, amplitude is set up to be controlled by EG0. At every synthesizer tick, the given amplitude (default of 1.0) will be multiplied by the EG0 value. In AMY wire parlance, this would look like <code>v0f220w0A50,1.0,100,0.5,250,0</code> to specify a sine wave at 220Hz with this envelope.</p> <p>When using <code>amy.py</code>, use the string form of the breakpoint: <code>amy.send(osc=0, bp0='50,1.0,100,0.5,250,0')</code>.</p> <p>Every note on (specified by setting <code>vel</code> / <code>l</code> to anything > 0) will trigger this envelope, and setting velocity to 0 will trigger the note off / release section.</p> <p>You can set a completely separate envelope using the second envelope generator, for example, to change pitch and amplitude at different rates.</p>
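<p>For instance (a sketch reusing the lowpass setup from the Filters section above, and mirroring the <code>filter_freq</code>/<code>bp1</code> pattern in the Build-your-own Partials section below), EG0 can shape amplitude quickly while EG1 sweeps the filter cutoff much more slowly:</p> <pre><code class="language-python">amy.send(osc=0, wave=amy.SAW_DOWN, filter=amy.FILTER_LPF, resonance=2,
         filter_freq={'const': 200, 'eg1': 3},  # EG1 scales the cutoff
         bp0='10,1,250,0.7,500,0',              # EG0: fast attack, 500ms release (amplitude)
         bp1='0,0,2000,1,0,0')                  # EG1: slow 2-second sweep up
amy.send(osc=0, note=48, vel=1)   # note on
amy.send(osc=0, vel=0)            # note off releases both EGs
</code></pre>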
style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">{</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'const'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'vel'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'eg0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">bp0</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'0,1,1000,0,0,0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">ALGO</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">algorithm</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">algo_source</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">',,,,2,1'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>Let's unpack that last line: we're setting up a ALGO "oscillator" that controls up to 6 other oscillators. We only need two, so we set the <code>algo_source</code> to mostly not used and have oscillator 2 modulate oscillator 1. You can have the operators work with each other in all sorts of crazy ways. For this simple example, we just use the DX7 algorithm #1. And we'll use only operators 2 and 1. Therefore our <code>algo_source</code> lists the oscillators involved, counting backwards from 6. We're saying only have operator 2 (osc 2 in this case) and operator 1 (osc 1). From the picture, we see DX7 algorithm 1 has operator 2 feeding operator 1, so we have osc 2 providing the frequency-modulation input to osc 1.</p> <p>What's going on with <code>ratio</code>? And <code>amp</code>? Ratio, for FM synthesis operators, means the ratio of the frequency for that operator relative to the base note. So oscillator 1 will be played at 20% of the base note frequency, and oscillator 2 will take the frequency of the base note. 
In FM synthesis, the <code>amp</code> of a modulator input is called "beta", which describes the strength of the modulation. Here, osc 2 is providing the modulation with a constant beta of 1, which will result in a range of sinusoids with frequencies around the carrier at multiples of the modulator. We set osc 2's amp ControlCoefficients for velocity and envelope generator 0 to 0 because they default to 1, but we don't want them for this example (FM sines don't receive the parent note's velocity, so we need to disable its influence). Osc 1 has <code>bp0</code> decaying its amplitude to 0 over 1000 ms, but because beta is fixed there's no other change to the sound over that time.</p> <p>Ok, we've set up the oscillators. Now, let's hear it!</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">60</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>You should hear a bell-like tone. Nice. (This example is also implemented using the C API in <a href="https://github.com/shorepine/amy/blob/b1ed189b01e6b908bc19f18a4e0a85761d739807/src/examples.c#L281" target="_blank" rel="noreferrer"><code>src/examples.c:example_fm()</code></a>.)</p> <p>FM gets much more exciting when we vary beta, which just means varying the amplitide envelope of the modulator. The spectral effects of the frequency modulation depend on beta in a rich, nonlinear way, leading to the glistening FM sounds. 
Let's try fading in the modulator over 5 seconds:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.reset()</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SINE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">ratio</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">amp</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">{</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'const'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'vel'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'eg0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">bp0</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'0,0,5000,1,0,0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Op 2, modulator</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">SINE</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">ratio</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span 
style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">amp</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">{</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'const'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'vel'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'eg0'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Op 1, carrier</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">ALGO</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">algorithm</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">algo_source</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">',,,,2,1'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>Just a refresher on envelope generators; here we are saying to set the beta parameter (amplitude of the modulating tone) to 2x envelope generator 0's output, which starts at 0 at time 0 (actually, this is the default), then grows to 1.0 at time 5000ms - so beta grows to 2.0. At the release of the note, beta immediately drops back to 0. 
We can play it with:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">60</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><p>and stop it with</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span></code></pre> </div><h2 id="partials" tabindex="-1">Partials <a class="header-anchor" href="#partials" aria-label="Permalink to "Partials""></a></h2> <p>Additive synthesis is simply adding together oscillators to make more complex tones. You can modulate the breakpoints of these oscillators over time, for example, changing their pitch or time without artifacts, as the synthesis is simply playing sine waves back at certain amplitudes and frequencies (and phases). It's well suited to certain types of instruments.</p> <p><img src="https://raw.githubusercontent.com/shorepine/alles/main/pics/partials.png" alt="Partials"></p> <p>We have analyzed the partials of a group of instruments and stored them as presets baked into the synth. Each of these patches are comprised of multiple sine wave oscillators, changing over time. 
The <code>PARTIALS</code> type has the presets:</p> <div class="language-python vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">python</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">50</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PARTIALS</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># a nice organ tone</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">55</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PARTIALS</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">5</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># change the frequency</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span 
style="--shiki-light:#E36209;--shiki-dark:#FFAB70">osc</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">note</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">50</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PARTIALS</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">6</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">ratio</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0.2</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># ratio slows down the partial playback</span></span></code></pre> </div><p>The presets are just the start of what you can do with partials in AMY. You can analyze any piece of audio and decompose it into sine waves and play it back on the synthesizer in real time. 
<pre><code class="language-python">amy.reset()
amy.send(osc=2, wave=amy.SINE, ratio=0.2, amp={'const': 1, 'vel': 0, 'eg0': 2}, bp0='0,0,5000,1,0,0') # Op 2, modulator
amy.send(osc=1, wave=amy.SINE, ratio=1, amp={'const': 1, 'vel': 0, 'eg0': 0}) # Op 1, carrier
amy.send(osc=0, wave=amy.ALGO, algorithm=1, algo_source=',,,,2,1')
</code></pre> <p>Just a refresher on envelope generators; here we are saying to set the beta parameter (amplitude of the modulating tone) to 2x envelope generator 0's output, which starts at 0 at time 0 (actually, this is the default), then grows to 1.0 at time 5000ms - so beta grows to 2.0. At the release of the note, beta immediately drops back to 0. We can play it with:</p> <pre><code class="language-python">amy.send(osc=0, note=60, vel=1)
</code></pre> <p>and stop it with</p> <pre><code class="language-python">amy.send(osc=0, vel=0)
</code></pre>
github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">import</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> partials, amy</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(m, s) </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> partials.sequence(</span><span style="--shiki-light:#032F62;--shiki-dark:#9ECBFF">'sounds/sleepwalk_original_45s.mp3'</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Any audio file</span></span> <span class="line"><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">153</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> partials </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">and</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 977</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> breakpoints, </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">max</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> oscs used at once was </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">8</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.live() </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Start AMY playing audio</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">partials.play(s)</span></span></code></pre> </div><p><a href="https://user-images.githubusercontent.com/76612/131150119-6fa69e3c-3244-476b-a209-1bd5760bc979.mp4" target="_blank" rel="noreferrer">https://user-images.githubusercontent.com/76612/131150119-6fa69e3c-3244-476b-a209-1bd5760bc979.mp4</a></p> <p>You can see, given any audio file, you can hear a sine wave decomposition version of in AMY. This particular sound emitted 109 partials, with a total of 1029 breakpoints among them to play back to the mesh. Of those 109 partials, only 8 are active at once. <code>partials.sequence()</code> performs voice stealing to ensure we use as few oscillators as necessary to play back a set.</p> <p>There's a lot of parameters you can (and should!) play with in Loris. 
<pre><code class="language-bash">brew install python3 swig ffmpeg
python3 -m pip install pydub numpy --user
tar xvf loris-1.8.tar
cd loris-1.8
CPPFLAGS=`python3-config --includes` PYTHON=`which python3` ./configure --with-python --prefix=`python3-config --prefix`
make
make install
cd ..
</code></pre> <p>And then in python (run <code>python3</code>):</p> <pre><code class="language-python">import partials, amy
(m, s) = partials.sequence('sounds/sleepwalk_original_45s.mp3') # Any audio file
# 153 partials and 977 breakpoints, max oscs used at once was 8

amy.live() # Start AMY playing audio
partials.play(s)
</code></pre> <p><a href="https://user-images.githubusercontent.com/76612/131150119-6fa69e3c-3244-476b-a209-1bd5760bc979.mp4" target="_blank" rel="noreferrer">https://user-images.githubusercontent.com/76612/131150119-6fa69e3c-3244-476b-a209-1bd5760bc979.mp4</a></p> <p>You can see that, given any audio file, you can hear a sine wave decomposition of it in AMY. This particular sound emitted 153 partials, with a total of 977 breakpoints among them to play back, and of those partials only 8 are active at once. <code>partials.sequence()</code> performs voice stealing to ensure we use as few oscillators as necessary to play back a set.</p> <p>There are a lot of parameters you can (and should!) play with in Loris. <code>partials.sequence</code> and <code>partials.play</code> take the following arguments, with their defaults:</p>
class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> ) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># returns (metadata, sequence)</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">def</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> play</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(sequence, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># from partials.sequence</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> osc_offset</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># start at this oscillator #</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> sustain_ms </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> -</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># if the instrument should sustain, here's where (in ms)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> sustain_len_ms </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># how long to sustain for</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> time_ratio </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># playback speed -- 0.5 , half speed</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> pitch_ratio </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># frequency scale, 0.5 , half freq</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> amp_ratio </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># amplitude scale</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> )</span></span></code></pre> </div><h2 id="build-your-own-partials" tabindex="-1">Build-your-own Partials <a class="header-anchor" href="#build-your-own-partials" aria-label="Permalink to "Build-your-own Partials""></a></h2> <p>You can also explicitly control partials in "build-your-own partials" mode, accessed via <code>wave=amy.BYO_PARTIALS</code>. 
<h2 id="build-your-own-partials">Build-your-own Partials</h2> <p>You can also explicitly control partials in "build-your-own partials" mode, accessed via <code>wave=amy.BYO_PARTIALS</code>. This sets up a string of oscs as individual sinusoids, just like <code>PARTIALS</code> mode, but it's up to you to control the details of each partial via its parameters, envelopes, etc. You just have to say how many partials you want with <code>num_partials</code>. You can then individually set up the amplitude <code>bp0</code> envelopes of the next <code>num_partials</code> oscs for arbitrary control, subject to the limit of 7 breakpoints plus release for each envelope. For instance, to get an 8-harmonic pluck tone with a 50 ms attack, and harmonic weights and decay times inversely proportional to the harmonic number:</p> <pre><code class="language-python">num_partials = 8
amy.send(osc=0, wave=amy.BYO_PARTIALS, num_partials=num_partials)
for i in range(1, num_partials + 1):
    # Set up each partial as the corresponding harmonic of 261.63
    # with an amplitude of 1/N, 50ms attack, and a decay of 1 sec / N.
    amy.send(osc=i, wave=amy.PARTIAL, freq=261.63 * i,
             bp0='50,%.2f,%d,0,0,0' % ((1.0 / i), 1000 // i))
amy.send(osc=0, note=60, vel=1)
</code></pre>
<p>You can add a filter (or an envelope etc.) to the sum of all the <code>PARTIAL</code> oscs by configuring it on the parent <code>PARTIALS</code> osc:</p> <pre><code>amy.send(osc=0, filter=amy.FILTER_HPF, resonance=4, filter_freq={'const': 200, 'eg1': 4}, bp1='0,0,1000,1,0,0')
amy.send(osc=0, note=60, vel=1)
# etc.
</code></pre> <p>Note that the default <code>bp0</code> amplitude envelope of the <code>PARTIALS</code> osc is a gate, so if you want to have a nonzero release on your partials, you'll need to add a slower release to the <code>PARTIALS</code> osc to avoid it cutting them off.</p> <h2 id="interpolated-partials">Interpolated partials</h2> <p>Please see our <a href="https://shorepine.github.io/amy/piano.html" target="_blank" rel="noreferrer">piano voice documentation</a> for more on the <code>INTERP_PARTIALS</code> type.</p> <h2 id="pcm">PCM</h2> <p>AMY comes with a set of 67 drum-like and instrument PCM samples to use as well, as they are normally hard to render with additive, subtractive or FM synthesis. You can use the type <code>PCM</code> and patch numbers 0-66 to explore them. Their native pitch is used if you don't give a frequency or note parameter. You can update the baked-in PCM sample bank using <code>amy_headers.py</code>.</p> <pre><code class="language-python">amy.send(osc=0, wave=amy.PCM, vel=1, patch=10) # cowbell
amy.send(osc=0, wave=amy.PCM, vel=1, patch=10, note=70) # higher cowbell!
</code></pre>
style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">21</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">feedback</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># loops forever until note off</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># note off</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.send(</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">wave</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">amy.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">PCM</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">vel</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">patch</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">35</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">,</span><span style="--shiki-light:#E36209;--shiki-dark:#FFAB70">feedback</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># nice violin</span></span></code></pre> </div><h2 id="sampler-aka-memory-pcm" tabindex="-1">Sampler (aka Memory PCM) <a class="header-anchor" href="#sampler-aka-memory-pcm" aria-label="Permalink to "Sampler (aka Memory PCM)""></a></h2> <p>You can also load your own samples into AMY at runtime. We support sending PCM data over the wire protocol. 
Use <code>load_sample</code> in <code>amy.py</code> as an example:</p> <div class="language-python"><pre><code>amy.load_sample("G1.wav", patch=3)
amy.send(osc=0, wave=amy.PCM, patch=3, vel=1)  # plays the sample
</code></pre></div><p>You can use any patch number. If it overlaps with an existing baked-in PCM patch number, it will play the memory sample instead of the baked-in sample until you <code>unload_sample</code> the patch.</p> <p>If the WAV file has sampler metadata like loop points or a base MIDI note, we use that in AMY. You can also set it directly using <code>loopstart</code>, <code>loopend</code>, <code>midinote</code> or <code>length</code> in the <code>load_sample</code> call. To unload a sample:</p> <div class="language-python"><pre><code>amy.unload_sample(3)  # unloads the RAM for patch 3
</code></pre></div><p>Under the hood, if AMY receives a <code>load_sample</code> message (with a patch number and nonzero length), it will pause all other message parsing until it has received <code>length</code> amount of base64-encoded bytes over the wire protocol. Each individual message must be base64 encoded. Since AMY's maximum message length is 255 bytes, there is logic in <code>load_sample</code> in <code>amy.py</code> to split the sample data into 188-byte chunks, each of which generates 252 bytes of base64 text. Please see <code>amy.load_sample</code> if you wish to load samples on other platforms.</p>
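<p>For illustration only, here is a minimal sketch of that chunking arithmetic. It is not AMY's actual implementation (that lives in <code>load_sample</code> in <code>amy.py</code>), and <code>send_wire</code> is a hypothetical stand-in for however your platform transmits a single wire-protocol message:</p> <div class="language-python"><pre><code>import base64

CHUNK_BYTES = 188  # 188 raw bytes encode to 252 base64 characters, under the 255-byte message limit

def send_sample_bytes(pcm_bytes, send_wire):
    """Split raw sample bytes into 188-byte chunks and base64-encode each chunk."""
    for start in range(0, len(pcm_bytes), CHUNK_BYTES):
        chunk = pcm_bytes[start:start + CHUNK_BYTES]
        encoded = base64.b64encode(chunk).decode("ascii")  # at most 252 characters
        send_wire(encoded)  # hypothetical transport for one wire-protocol message

# Example: a 1000-byte buffer yields five 252-character chunks and one shorter final chunk
send_sample_bytes(bytes(1000), lambda msg: print(len(msg)))
</code></pre></div>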
<h2 id="voices-and-patches-dx7-piano-juno-6-custom-support"><a name="voices_and_patches"></a>Voices and patches (DX7, Piano, Juno-6, custom) support</h2> <p>Up until now, we have been directly controlling the AMY oscillators, which are the fundamental building blocks for sound production. However, as we've seen, most interesting tones involve multiple oscillators. AMY provides a second layer of organization, <strong>voices</strong>, to make it easier to configure and use groups of oscillators in coordination. You configure a voice by using a <strong>patch</strong>, which is simply a stored list of AMY commands that set up one or more oscillators.</p> <p>A voice in AMY is a collection of oscillators. You can assign any patch to any voice number, or set up multiple voices to have the same patch (for example, a polyphonic synth), and AMY will allocate the oscillators it needs under the hood. (Note that when you use voices, you'll need to include the <code>voices</code> arg when addressing oscillators, and AMY will automatically route your command to the relevant oscillator in each voice set -- there's no other way to tell which oscillators are being used by which voices.)</p> <p>To play a patch, for instance the built-in patches emulating Juno and DX7 synthesizers and a piano, you allocate it to one or more voices, then send note events, or parameter modifications, to those voices. We ship patches 0-127 for Juno, 128-255 for DX7, and 256 for our <a href="https://shorepine.github.io/amy/piano.html" target="_blank" rel="noreferrer">built-in piano</a>.
For example, a multitimbral Juno/DX7 synth can be set up like this:</p> <div class="language-python"><pre><code>amy.send(voices='0,1,2,3', load_patch=1)    # Juno patch #1 on voices 0-3
amy.send(voices='4,5,6,7', load_patch=129)  # DX7 patch #2 on voices 4-7
amy.send(voices=0, note=60, vel=1)          # Play note 60 on voice 0
amy.send(voices=0, osc=0, filter_freq=8000) # Open up the filter on the Juno voice (using its bottom oscillator)
</code></pre></div>
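<p>As a further, unofficial sketch of using those same voices polyphonically (this example is not from the AMY docs; it assumes per-voice <code>note</code>/<code>vel</code> messages route exactly as shown above):</p> <div class="language-python"><pre><code># Continuing from the setup above: play a C major chord across the four Juno voices, then release it.
for voice, note in zip((0, 1, 2, 3), (60, 64, 67, 72)):
    amy.send(voices=voice, note=note, vel=1)  # one note per voice
for voice in (0, 1, 2, 3):
    amy.send(voices=voice, vel=0)             # note off, one voice at a time
</code></pre></div>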
style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">8000</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#6A737D;--shiki-dark:#6A737D"># Open up the filter on the Juno voice (using its bottom oscillator)</span></span></code></pre> </div><p>The code in <code>amy_headers.py</code> generates these patches and bakes them into AMY so they're ready for playback on any device. You can add your own patches by storing alternative wire-protocol setup strings in <code>patches.h</code>.</p> <p>You can also create your own patches at runtime and use them for voices with <code>store_patch='PATCH_NUMBER,AMY_PATCH_STRING'</code> where <code>PATCH_NUMBER</code> is a number in the range 1024-1055. This message must be on its own in the <code>amy.send()</code> command, not combined with any other parameters, because AMY will treat the rest of the message as a patch rather than interpreting the remaining arguments as ususal.</p> <p>So you can do:</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>>>> import amy; amy.live() # Not needed on Tulip.</span></span> <span class="line"><span>>>> amy.send(store_patch='1024,v0S0Zv0S1Zv1w0f0.25P0.5a0.5Zv0w0f261.63,1,0,0,0,1A0,1,500,0,0,0L1Z')</span></span> <span class="line"><span>>>> amy.send(voices=0, load_patch=1024)</span></span> <span class="line"><span>>>> amy.send(voices=0, vel=2, note=50)</span></span></code></pre> </div><p>AMY infers the number of oscs needed for the patch at <code>store_patch</code> time. If you store a new patch over an old one, that old memory is freed and re-allocated. (We rely on <code>malloc</code> for all of this.)</p> <p>You can "record" patches in a sequence of commands like this:</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>>>> amy.log_patch()</span></span> <span class="line"><span>>>> # Execute any commands to set up the oscillators.</span></span> <span class="line"><span>>>> amy.preset(5)</span></span> <span class="line"><span>>>> bass_drum = amy.retrieve_patch()</span></span> <span class="line"><span>>>> bass_drum</span></span> <span class="line"><span>'v0S0Zv0S1Zv1w0f0.25P0.5a0.5Zv0w0f261.63,1,0,0,0,1A0,1,500,0,0,0L1Z'</span></span> <span class="line"><span>>>> amy.send(store_patch='1024,' + bass_drum)</span></span></code></pre> </div><p><strong>Note on patches and AMY timing</strong>: If you're using AMY's time scheduler (see below) note that unlike all other AMY commands, allocating new voices from patches (using <code>load_patch</code>) will happen once AMY receives the message, not using any advance clock (<code>time</code>) you may have set. 
]]></content:encoded> <enclosure url="https://raw.githubusercontent.com/shorepine/tulipcc/main/docs/pics/shorepine100.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Synth snippets]]></title> <link>https://chromatone.center/practice/synth/snippets.html</link> <guid>https://chromatone.center/practice/synth/snippets.html</guid> <pubDate>Tue, 04 Mar 2025 13:30:19 GMT</pubDate> <description><![CDATA[Code pieces]]></description> <content:encoded><![CDATA[<p>Sharing a useful snippet: JUCE's logskew function in TypeScript, if anyone wants to scale their parameter values (for example, convert a 0-1 range to 20-20000 Hz in a non-linear way):</p> <div class="language-js"><pre><code>const clamp = (min: number, max: number, num: number) => {
  return Math.min(Math.max(num, min), max);
}

const skew = (min: number, max: number, centerPoint: number) => {
  return Math.log(0.5) / Math.log((centerPoint - min) / (max - min));
}

const logskew = (a: number, b: number, time: number, centerPoint: number) => {
  const proportion = clamp(0, 1, time);
  const sk = skew(a, b, centerPoint);
  const v = Math.exp(Math.log(proportion) / sk);
  return a + (b - a) * v;
}

console.log(logskew(20, 20000, 0.5, 1000)) // will output 1000
</code></pre></div><p>(By construction, a <code>time</code> value of 0.5 maps exactly to <code>centerPoint</code>, which is why the call above returns 1000.)</p> ]]></content:encoded> </item> <item> <title><![CDATA[Reharmonization]]></title> <link>https://chromatone.center/theory/harmony/reharmonization/</link> <guid>https://chromatone.center/theory/harmony/reharmonization/</guid>
<pubDate>Tue, 04 Mar 2025 13:30:19 GMT</pubDate> <description><![CDATA[How to reharmonize a song]]></description> <content:encoded><![CDATA[<youtube-embed video="lz3WR-F_pnM" /><h2 id="reharmonization-chord-substitution-but-for-the-whole-chord-progression" tabindex="-1">Reharmonization = Chord substitution but for the whole chord progression <a class="header-anchor" href="#reharmonization-chord-substitution-but-for-the-whole-chord-progression" aria-label="Permalink to "Reharmonization = Chord substitution but for the whole chord progression""></a></h2> <p>Reharmonization is used to:</p> <ul> <li>Make a song more ‘jazzy’ – that is, more harmonically and/or structurally complex;</li> <li>Personalise a song and make it your own (it’s almost like composing a brand new song, or a variation of an existing song).</li> </ul> <p>So, generally, a song consists of a:</p> <ul> <li>Melody; and</li> <li>Chord progression</li> </ul> <p>Reharmonization involves <strong>changing the melody or chords</strong> or both – but changing them with regard to certain rules or ideas.</p> <p>To reharmonize a song you need to take two things into account:</p> <ul> <li><strong>Harmony</strong> <ul> <li>This depends on the interaction between the melody and the chords</li> <li>It depends on the quality of the chord (Maj, min, V7, etc.)</li> <li><strong>Goal</strong>: To ensure the <strong>melody note is an ‘acceptable harmony’ over the chord</strong></li> </ul> </li> <li><strong>Structure</strong> <ul> <li>This refers to the movement of chords & bass-line</li> <li>It is NOT affected by the quality of the chord</li> <li><strong>Goal</strong>: The structure must be logical.</li> <li>In practice this means that chords and bass-lines should move: <ul> <li>by Fixed Intervals;</li> <li>by Diatonically;</li> <li>be Melody based.</li> </ul> </li> </ul> </li> </ul> <p><strong>The Overarching Goal of Reharmonization is</strong>:</p> <ul> <li>Change the chords and/or melody to ensure the melody is an ‘acceptable harmony’ over the chord.</li> <li>Change chords and bass-line so they move in a ‘structured way’.</li> </ul> <youtube-embed video="XPFo_LmqnJg" /><h2 id="key-melody-note" tabindex="-1">Key Melody Note <a class="header-anchor" href="#key-melody-note" aria-label="Permalink to "Key Melody Note""></a></h2> <p>The first thing you have to do when reharmonizing a song is to identify the <strong>Key Melody Note</strong> (KMN) of each bar:</p> <ul> <li>The KMN is harmonically the most important note in a bar – with all the other notes treated as passing notes (so NOT harmonically important)</li> <li>This is largely subjective, but the KMN tends to be the: <ul> <li>Longest note in the bar</li> <li>First note in the bar</li> <li>Most repeated note in the bar</li> <li>A note played on-the-beat</li> </ul> </li> <li>There may be more than one KMN in a bar, in which case: <ul> <li>Account for both; or</li> <li>Look for the KMN in each half-bar (and so on)</li> </ul> </li> <li>There may be no obvious KMN – e.g. 
a fast run of notes, in this case: <ul> <li>Look at ALL or MOST of the notes in the bar</li> <li>Structure > Harmony – more on this later</li> </ul> </li> </ul> <h2 id="harmony" tabindex="-1">Harmony <a class="header-anchor" href="#harmony" aria-label="Permalink to "Harmony""></a></h2> <p>First let’s deal with the <strong>Harmony</strong>:</p> <ul> <li>Change the chord (tonality and quality) or KMN to ensure the KMN is an ‘acceptable harmony’ over the chord</li> <li>We have already learned in previous lessons that not all notes are equal. Some notes sound strong when played over a chord (<a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-improvisation/guide-tones/" target="_blank" rel="noreferrer"><strong>Guide Tones</strong></a>), some notes sound pleasant and complement the sound of a chord (<a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-chords/available-tensions/" target="_blank" rel="noreferrer"><strong>Available Tensions</strong></a>), while some notes sound weak and dissonant (<a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-improvisation/avoid-notes/" target="_blank" rel="noreferrer"><strong>Avoid Notes</strong></a>). Well, I’ve defined ‘acceptable harmony’ as ‘guide tones’ (3rd & 7th) plus ‘available tensions’. These are the notes that sound good over a particular chord. I have allocated every note into the following categories: <ul> <li><strong>Strong Harmony</strong> = Guide Tones</li> <li><strong>Weak Harmony</strong> = Root & 5th</li> <li><strong>Jazzy Harmony</strong> = Available Tensions</li> <li><strong>Unacceptable Harmony</strong> = Unavailable Tensions/Avoid Notes</li> </ul> </li> </ul> <p>Acceptable Harmony = Guide Tones + Available Tensions</p> <p>Below are the Acceptable Harmonies for a number of chord types.</p> <h3 id="acceptable-harmonies" tabindex="-1">Acceptable Harmonies <a class="header-anchor" href="#acceptable-harmonies" aria-label="Permalink to "Acceptable Harmonies""></a></h3> <p><img src="https://www.thejazzpianosite.com/wp-content/uploads/2016/12/How-to-Reharmonize-a-Song.png" alt="How to Reharmonize a Song"></p> <p>Ok, so now let’s do the same exercise but in reverse. Instead of taking a chord and finding all its acceptable harmonies, let’s take a note and find all the chords where that note is an acceptable harmony. 
Let’s use the note ‘C’.</p> <ul> <li>Below, all chords that create an ‘Acceptable Harmony’ with the Key Melody Note (C) are highlighted <span style="color: #008000;"><strong>Green</strong></span>.</li> <li>The tonality/root of the chord is given in the row labelled ‘Root’</li> <li>The quality of the chord is given in the rows labelled ‘Chord Quality’</li> <li>The row labelled ‘Degree’ will tell you what degree the Key Melody Note is in relation to the chord.</li> </ul> <p><img src="https://www.thejazzpianosite.com/wp-content/uploads/2016/12/Reharmonization-Chords.png" alt="Reharmonization Chords"></p> <p>You can use any of the chords coloured <span style="color: #008000;"><strong>Green</strong></span> above to substitute for a chord when the key melody note of a bar is ‘C’.</p> <h2 id="structure" tabindex="-1">Structure <a class="header-anchor" href="#structure" aria-label="Permalink to "Structure""></a></h2> <p>Next let’s deal with the Structure:</p> <ul> <li>‘Structure’ involves moving the <strong>chord</strong> and the <strong>bass-line</strong> in a ‘structured’ or ‘logical’ way, ignoring the quality of the chord.</li> <li>In reality this means moving the chords and bass-line in: <ul> <li>Fixed intervals;</li> <li>Diatonically;</li> <li>Melody based; or</li> <li>Some other logical structure</li> </ul> </li> <li>You can change chord inversions to create a smoother bass-line that doesn’t jump around too much</li> <li>The bass-line and chords can move in different ways as long as they are both ‘structured’</li> </ul> <table id="tablepress-161" class="tablepress tablepress-id-161"> <tbody class="row-hover"> <tr class="row-1 odd"> <td class="column-1">**Chromatic (Fixed Intervals)**</td> <td class="column-2">E♭7</td> <td class="column-3">D7</td> <td class="column-4">D♭7</td> <td class="column-5">CMaj7</td> </tr> <tr class="row-2 even"> <td class="column-1">**Coltrane (Fixed Intervals)**</td> <td class="column-2">CMaj7</td> <td class="column-3">G#Maj7</td> <td class="column-4">EMaj7</td> <td class="column-5">CMaj7</td> </tr> <tr class="row-3 odd"> <td class="column-1">**Diatonic**</td> <td class="column-2">F7</td> <td class="column-3">E7</td> <td class="column-4">D7</td> <td class="column-5">CMaj7</td> </tr> <tr class="row-4 even"> <td class="column-1">**Melody Based**</td> <td class="column-2">**Melody:** C **Chord:** Am7</td> <td class="column-3">**Melody:** E **Chord:** CMaj7</td> <td class="column-4">**Melody:** F **Chord:** D♭Maj7</td> <td class="column-5">**Melody:** F **Chord:** Dm7</td> </tr> <tr class="row-5 odd"> <td class="column-1">**Bassline (Pedal Point)**</td> <td class="column-2">**Chord:** E♭6 **Bass:** C</td> <td class="column-3">**Chord:** D7 **Bass:** C</td> <td class="column-4">**Chord:** D♭Maj7 **Bass:** C</td> <td class="column-5">**Chord:** CMaj7 **Bass:** C</td> </tr> <tr class="row-6 even"> <td class="column-1">**Bassline (Stepwise)**</td> <td class="column-2">**Chord:** Em7 **Bass:** E</td> <td class="column-3">**Chord:** A7 **Bass:** E</td> <td class="column-4">**Chord:** Dm7 - G7 **Bass:** D - D</td> <td class="column-5">**Chord:** CMaj7 **Bass:** C</td> </tr> </tbody> </table> <p><em>Remember: The chords and bass-line can move independently and the chord quality doesn’t matter</em></p> <h2 id="some-rules-and-tips" tabindex="-1">Some Rules and Tips <a class="header-anchor" href="#some-rules-and-tips" aria-label="Permalink to "Some Rules and Tips""></a></h2> <ul> <li>Choose chords & melody that increase and then decrease tension <ul> <li>To increase tension – use 
higher extensions and alterations (e.g.♭13)</li> <li>To decrease tension – use Guide Tones (3rd & 7th) & lower extensions (e.g. 9)</li> </ul> </li> <li>On a long melody note or repeated melody note, move through a number of chords (this makes the melody note a kind of pedal point, which sounds great)</li> <li>While ideally we should have an acceptable harmony AND a logical structure, it is possible to have one without the other: <ul> <li>You can have a weak harmony (root or 5th) if you have a strong structure (II-V); OR</li> <li>You can have a weak structure (no pattern) if you have a strong harmony (3rd or 7th)</li> </ul> </li> <li>If the melody is a fast run of notes with no obvious ‘key melody note’ <ul> <li>Structure becomes more important than Harmony <ul> <li>Bebop songs often use fast melody runs with strong II-V structured chord progressions</li> </ul> </li> <li>Look at and consider all the notes in the bar</li> </ul> </li> </ul> <h2 id="further-study" tabindex="-1">Further study <a class="header-anchor" href="#further-study" aria-label="Permalink to "Further study""></a></h2> <p>To make your reharmonization sound even more professional and smooth:</p> <ul> <li>Add <a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-improvisation/passing-notes/" target="_blank" rel="noreferrer"><strong>Passing Chords</strong></a></li> <li>Add <a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-chord-voicings/" target="_blank" rel="noreferrer"><strong>Jazz Chord Voicings</strong></a></li> <li><a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-improvisation/embellishing-the-melody/" target="_blank" rel="noreferrer"><strong>Embellish the Melody</strong></a></li> </ul> <h2 id="google-doc" tabindex="-1">Google Doc <a class="header-anchor" href="#google-doc" aria-label="Permalink to "Google Doc""></a></h2> <p>Below is an embedded Google Doc which outlines all the possible reharmonization (substitute) chords for all chord types and all 12 note. You’re welcome to export it to Excel. 
To do this:</p> <ul> <li>Go to the Google Doc by <a href="https://docs.google.com/spreadsheets/d/1SuTuEAg8Lk8S09YJnBpy5UIP-MJZiKXycD7kmsaZW48" target="_blank" rel="noreferrer">clicking here</a></li> <li>Press ‘File’</li> <li>Select ‘Download as’</li> <li>Select ‘Microsoft Excel’</li> </ul> <iframe src="https://docs.google.com/spreadsheets/d/1SuTuEAg8Lk8S09YJnBpy5UIP-MJZiKXycD7kmsaZW48/pubhtml?widget=true&headers=false" width="640" height="500"></iframe> <p>To see the above theory in action, please watch the below video.</p> <youtube-embed video="hhMCNhfZ8Iw" /><ul> <li>Quoted from the source at <a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-reharmonization/how-to-reharmonize-a-song/" target="_blank" rel="noreferrer">https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-reharmonization/how-to-reharmonize-a-song/</a></li> </ul> ]]></content:encoded> <enclosure url="https://www.thejazzpianosite.com/wp-content/uploads/2016/12/How-to-Reharmonize-a-Song.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Traditional sub-Saharan African harmony]]></title> <link>https://chromatone.center/theory/harmony/sub-saharan/</link> <guid>https://chromatone.center/theory/harmony/sub-saharan/</guid> <pubDate>Mon, 27 Jan 2025 00:00:00 GMT</pubDate> <description><![CDATA[A music theory of harmony in sub-Saharan African music based on the principles of homophonic parallelism]]></description> <content:encoded><![CDATA[<p><strong>Traditional sub-Saharan African harmony</strong> is a music theory of harmony in sub-Saharan African music based on the principles of homophonic parallelism (chords based around a leading melody that follow its rhythm and contour), homophonic polyphony (independent parts moving together), counter-melody (secondary melody) and ostinato-variation (variations based on a repeated theme). Polyphony (contrapuntal and ostinato variation) is common in African music and heterophony (the voices move at different times) is a common technique as well. Although these principles of traditional (precolonial and pre-Arab) African music are of Pan-African validity, the degree to which they are used in one area over another (or in the same community) varies. Specific techniques that used to generate harmony in Africa are the "span process", "pedal notes" (a held note, typically in the bass, around which other parts move), "rhythmic harmony", "harmony by imitation", and "scalar clusters".</p> <h2 id="general-overview" tabindex="-1">General overview <a class="header-anchor" href="#general-overview" aria-label="Permalink to "General overview""></a></h2> <blockquote> <p>"By Western standards, African music is characteristically complex...Two or more events tend to occur simultaneously within a musical context. Even players of simple solo instruments (such as the musical bow or the flute) manage to manipulate the instrument in such a way to produce simultaneous sounds by playing overtones with the bow, by humming while bowing, and the like... Overlapping choral antiphony and responsorial singing are principal types of African polyphony. Various combinations of ostinato and drone-ostinato, polymelody (mainly two-part), and parallel intervals are additional polyphonic techniques frequently employed. Several types may intermingle within one vocal or instrumental piece, with the resulting choral or orchestral tendency being the stacking of parts or voices. Consequently three- or four-part density is not an uncommon African musical feature. 
Such densities are constantly fluctuating so that continuous triads throughout an entire piece are uncommon. Canonic imitation may occur in responsorial or antiphonal sections of African music as a result of the repetition of the first phrase or the introduction of new melodic material in the form of a refrain. The latter may involve a contrasting section or a completion of the original melody." —Karlton E. Hester</p> </blockquote> <p>Chordal relationships that occur as a result of the polyphony, homophonic parallelism and homophonic polyphony found in African music are not always 'functional' in the western musical sense. However, they accomplish the balance of tension-release and dissonance-consonance. In addition, they form varieties of chord combinations and clusters, as well as varying levels of harmonic patterning.</p> <h2 id="scales">Scales</h2> <p>Chords are constructed from scales. Pentatonic and hexatonic scales are very common scales across Africa. Nonetheless, heptatonic scales can be found in abundance. Anhemitonic scales, equal heptatonic scales, and scales based on the selected use of partials are used in Africa as well. The same community that may use one set of instruments tuned to a certain scale (i.e. pentatonic), can use a different scale for a different set of instruments, or song type (i.e. heptatonic).</p> <p>In traditional African music, scales are practised and thought of as descending from top to bottom. African harmony is based on the scales being employed in a particular musical setting. Scales have a profound impact on the harmony because Africans modalize their music. Modalization is the process of applying modal concepts in a non-modal setting. African music uses recurring harmonic reference points as a means of musical organization. Therefore, African music is not modal or purely based on one mode. Nonetheless, modal concepts are employed in African music. This predates exposure to Western and Arab musics.</p> <h2 id="principles">Principles</h2> <h3 id="homophonic-parallelism-and-homophonic-polyphony">Homophonic parallelism and homophonic polyphony</h3> <p>Homophonic parallelism is the harmonizing of a single melody, or a subordinate melody, by moving with it in parallel. This means the notes that harmonize the melody follow its characteristic shape and rhythm. This type of parallelism is common to all African peoples, but the degree to which it is employed varies. It is important to note that parallelism in thirds (inversely tenths), fourths, fifths, and octaves (inversely unison) are Pan-African methods of homophonic parallel harmonization. These intervals are interchanged depending on the melody they are accompanying and the scale source of the harmonization.</p> <p>Homophonic polyphony occurs when two different melodies are harmonized in the style of homophonic parallelism, and either (1) occur simultaneously by means of overlapping antiphony or (2) overlap simultaneously as a result of melodic counterpoint.</p> <p>This parallelism is not to be confused with strict parallelism.
Gerhard Kubik states that much variation and freedom is permitted in parallel parts, with the stipulation that words remain intelligible (or in the case of instruments the melody remains recognizable), and the scalar source is observed. The harmonic line harmonized normally moves by step rarely jumping beyond a fourth.</p> <blockquote> <p>"A.M. Jones states that 'generally speaking all over the continent south of the Sahara, African harmony is in organum and is sung either in parallel fourths, parallel fifths, parallel octaves or parallel thirds.' Parallelism, however, is not without limitations. Melodic and scale considerations, as has been shown, are of primary importance in deciding what notes are employed in harmonizing tunes and, consequently, what intervals are formed. The adaptation of parallelism to fit melodic requirements is much more apparent in the music of those areas of Africa where the pentatonic scale is the norm. Kirby has shown how the demands of a pentatonic scale result in the employment of sixths in Bantu polyphony, where parallelism in fifths is the principle. He points out that the limitations of the pentatonic scale make for the awareness of other intervals instead of what apparently was the strict duplication of the melody at the same interval employed by early European musicians." —Lazarus Ekweme</p> </blockquote> <h2 id="secondary-melody" tabindex="-1">Secondary melody <a class="header-anchor" href="#secondary-melody" aria-label="Permalink to "Secondary melody""></a></h2> <p>The harmonization of a subordinate melody – be it responsorial or with regular repetitions within the cycle – is often based on a counter melody or secondary melody. From this melody the span process, pedal notes and other techniques can be used to for the harmony supporting the main melody.</p> <p>Gerhard Kubik notes "In the Ijesha multipart singing style the basic chorus phrase, to which harmonically parallel lines may be added above and below, is the one in the middle, standing at the same pitch level with the leader's phrase...The basic chorus line is the one with which the chorus member singing alone would invariably link 'in unison' with the leader's phrase. as other chorus members join in, more voices are then added above and below in intervals perceived as consonant. These additional voices are essentially euphoric in concept; they are equivalent to a basic one, but are only collaterally dependent on the voice of the leader".</p> <p>Lazarus Ekweme quotes J. H. Kwabena Nketia saying "In chorus response, there is primacy in the sense that one line is regarded as the basic melody. But the supporting line, by virtue of its running parallel to it, shares its characteristic progressions and is accordingly treated as a secondary melody. Indeed, when a cantor has to sing the chorus response, he may have the freedom of singing either of the two or of moving from one section to the other."</p> <p>Secondary melody in this case refers to the voice harmonizing the chorus response. However, the chorus response is the secondary melody, which is harmonized. The harmonizing parts can vary just as the chorus response (or secondary melody) may vary. The added harmony part embellishes its own line as an independent melody, instead of following rigidly the intervocalic distance from the main chorus line in parallel movement.</p> <p>The underlying concept is to create a melody and then a responsoral secondary melody. This secondary melodic line or phrase is then harmonized in parallel motion. 
The harmonic line harmonized normally moves by step, rarely jumping beyond a fourth.</p> <h2 id="ostinato-variation">Ostinato-variation</h2> <p>Musical instruments in traditional African music often serve as a modal and/or rhythmic support for vocal music. Instrumental music can also frequently be heard without vocal music and, to a lesser extent, solo. Harmony produced through ostinati played on instruments is commonplace. These ostinati can be varied, or embellished, but otherwise provide modal support. Ostinato used in African music is a principal means of polyphony, although other procedures for producing polyphony exist. Simha Arom states "music in the Central African Republic, regardless of the kind of polyphony or polyrhythm that is practiced, always involves the principle of ostinato with variations."</p> <p>The principle of ostinato with variations is significant to African music and its polyphonic nature, as most forms of traditional African polyphony are based on this principle. He continues "If one had to describe in a formula all the polyphonic and polyrhythmic procedures used in the Central African Republic, one might define them as ostinato (ostinati) with variations." The ostinato is normally used to create a modal pattern or background.</p> <p>Arom continues "This definition does not conflict with Western musicological definitions of the term. Thus Riemann defines ostinato as 'a technical term that describes the continual return of a theme surrounded by ever changing counterpoint [...] The great masters of the age of polyphony loved to write a whole mass or long motets on a single phrase constantly repeated by the tenor. But the repetitions are not always identical, and the little theme would appear in all sorts of modified forms' (Riemann 1931:953)."</p> <p>Many African musics correspond exactly to this definition and are musical pieces based on a phrase which reappears in varied and modified forms. These ostinati can be continuous or intermittent, vocal or instrumental, and may appear above or below the main line. Frequently in African music two or more ostinati moving contrapuntally are employed, with or without a longer melodic line, to create an orchestral texture (dense textures are desired and aimed for by both composers and performers alike). This type of polyphony is of the contrapuntal or horizontal type. In practice each ostinato moves in independent melodic and rhythmic patterns.</p> <h2 id="cadences-and-chord-structures">Cadences and chord structures</h2> <p>Chords are normally formed using one of two techniques: the span process or scalar clusters. These chords can be embellished as a result of variation, in which any combination of notes permitted by the scale can be used in a chord. However, in common practice, chords are formed by harmonizing in 3rds, 4ths, 5ths, 6ths, etc. The type of chord formed depends on the scale system being used.</p> <p>Recent research has shown that African music has chord progressions. Gerhard Kubik states "until recently, little attention has been paid to a further structuring element, namely, the tonal-harmonic segmentation of a cycle. In most African music, cycles are sub-divided into two, four or eight tonal-harmonic segments."
(A theory of African music, Volume II, page 44, paragraph 5).</p> <p>In addition, the use of the tritone interval for tension is common in Africa. Oluwaseyi Kehinde notes "it is interesting that the interval of the tritone (augmented fourth or diminished fifth) is a salient feature in both vocal and instrumental music throughout Africa" (Karlton E. Hester and Francis Tovey use the same phrase to describe it). Gerhard Kubik, in his article "Bebop, a case in point: the African matrix in Jazz harmonic practices" and his book "Africa and the Blues", echoes this point. This is significant to chords used as reference points or chord progressions in African musical structures.</p> <p>Through the use of parallelism, cadential patterns are inevitable. O.O. Bateye clarifies: "The subdominant (plagal) cadence is (resulting from the frequent tendency toward parallelism in African music) the favored cadence and not the perfect cadence, which is the norm in classical western music... Cadential patterns are frequent in African music and invariably result as a consequence of melodic movement either by thirds, fourths, or fifths – that is as a consequence of what may be referred to as shadow harmony ... A cadential descending minor third is frequently noted between the minor third step and the tonic (Reiser, 1982:122) in African music." These cadential movements are made using the melody and the scale as the guiding factor.</p> <p>He continues "T.K. Philips objects to the te-doh and fah-me cadences as being authentic for African music, but nevertheless, as has been pointed out, are a frequent occurrence in African music utilizing scales other than the pentatonic. The presence of drones is a common feature of African music."</p> <h2 id="target-chords">"Target chords"</h2> <p>In African music whose harmony is drawn from anhemitonic (every note is consonant with every other note) pentatonic and hexatonic scalar sources, targeting specific vertical structures in relation to the secondary melodic phrase being harmonized is not a concern, although it does happen.</p> <p>For scalar systems that are not anhemitonic, target chords, or vertical structures that are targeted for resolution, are commonplace. Although the arrangement of the notes may be altered and/or embellished, notes traditionally viewed as dissonant will be omitted from that structure. In harp music, and in xylophone music played with two beaters, these structures are dyads and are targeted for resolution by means of suspensions, anticipations, and other techniques of variation.</p> <p>The "target chord" concept is applied equally to homophonic parallelism and its various iterations as it is to polyphony. These vertical combinations, by means of their strict repetition, serve as an organizing structure for the improvised nature of the harmonic motions.</p> <h2 id="polyphony">Polyphony</h2> <p>Polyphonic techniques used in African music include:</p> <ul> <li>Melodic counterpoint – related to homophony, however there is no predominant melodic line or hierarchy among the parts.
Although it is not a general rule, all the parts frequently observe the same rhythmic values.</li> <li>Polymelody – two different melodies with different start and end points occurring simultaneously.</li> <li>Ostinato-variation – variations on a theme with an ostinato or ostinati above or below the melody line.</li> <li>Hocket – interlocking, interweaving and overlapping rhythmic figures which are tiered on different pitches in a scalar system.</li> <li>Polyphony by polyrhythmics – polyphony normally does not occur unless the melodies are rhythmically independent. When two African melodies occur at the same time and are rhythmically independent, it is polyphony.</li> <li>Polyphony by inherent patterns – using auxiliary and passing note groups separated by disjunct intervals gives the facade of two melodies occurring back to back. This technique is often used by solo instrumentalists to create a pseudo-polyphony.</li> </ul> <h2 id="techniques">Techniques</h2> <p>Traditional African music often employs the following techniques to create harmony:</p> <h3 id="span-process">Span process</h3> <p>Gerhard Kubik succinctly describes a process to which he attributes the formation of chords used in parallelism throughout Africa. This process he calls the "span process". He states "The span process or skipping process (is) a structural principle implying that usually one note of a given scale is skipped by a second singer (or instrumental line) to obtain harmonic simultaneous sound in relation to the melodic line of a first vocalist (or instrumental line)". The harmonic line harmonized normally moves by step, rarely jumping beyond a fourth.</p> <h3 id="pedal-notes">Pedal notes</h3> <p>A frequent technique employed in African music (either as a means of variation or as part of a harmonic reference point) in which notes are repeated (on a monotone) in a part while others move in parallel motion above it. When there are at least 3 singers, the two or more upper parts follow the shape of the tune, while maintaining the intonation of the words, in parallel or similar motion. The lowest part repeats a basic drone (pedal notes can equally be found in higher voices as well). The repetitions may be temporary or extended depending on the performers and the particular musical piece. The employment of pedal notes is often the source of the oblique motion and contrary motion found in African choral music. This technique is also applied to instrumental music. —Lazarus Ekwueme</p> <h3 id="rhythmic-harmony">Rhythmic harmony</h3> <p>The use of harmony to enhance a rhythmic accent or to emphasize a note in the melody.
These normally occur at the end of melodic phrases, but may take place anywhere it is desirable to accentuate a note or text in the melody.</p> <h3 id="harmony-by-imitation" tabindex="-1">Harmony by imitation <a class="header-anchor" href="#harmony-by-imitation" aria-label="Permalink to "Harmony by imitation""></a></h3> <p>This occurs when an added part imitates the shape of a portion of the melody (or other portion of the song) at a higher or lower pitch and after the initial musical phrase but overlapping with it. Due to tonal inflections (in the regions using tonal languages), the shape of the new musical phrase is similar to, but not necessarily identical with, that of the previous one. The use of imitation accounts for a wide variety of interval combinations within the scale system being used in African musics.</p> <h3 id="scalar-clusters" tabindex="-1">Scalar clusters <a class="header-anchor" href="#scalar-clusters" aria-label="Permalink to "Scalar clusters""></a></h3> <p>In parallel motion, rhythmic harmony or in harmonic patterns varying interval combinations can be found. However, all these intervals are limited to those permitted by the scale. The intervals of the second, third, fourth, fifth, sixth, seventh, octave, ninth and tenth can all be found. As African music and harmony is based on a cyclical structure with recurring reference points and harmonic reference points (or chords) some of these intervals are seen as color tones while others have structural significance. Generally the tones and intervals of structural significance are based on thirds, fourths, fifths, and octaves.</p> <p>Simha states "In Central African polyphony, one can in fact find clusters of all the combinations of intervals allowed by the scale. The number of sounds included in vertical combinations varies with the number and type of performing instruments: while there are no more than two in Sanza music, it is not unusual to find four in xylophone music. In the limiting case, it can happen that the 5 sounds of the scale are simultaneously emitted as a cluster. This particular type of verticality can easily be explained by referring each sound comprising a 'chord' or sound cluster to its own melodic axis. It then becomes clear that the vertical configurations are the (partly fortuitous) consequence of the horizontal conception of melodic counterpoint."</p> <p>This not only occurs when using pentatonic scales. Gerhard Kubik notes that the use of a partials derived system, or a Bordon system can also lead to the use of scalar clusters as consonance. In addition communities, and ethnic groups that use pentatonic systems many times also employ hexatonic and heptatonic scalar systems.</p> <h3 id="variation-principle" tabindex="-1">Variation principle <a class="header-anchor" href="#variation-principle" aria-label="Permalink to "Variation principle""></a></h3> <p>The variation principle describes the process of altering, embellishing, and modifying of melodic, rhythmic, harmonic, and/or other parts of a musical structure. These variations are made within and/or around the role of the part being varied. These variations rarely break the function of the part to be varied. In African music these variations are often improvised. Variation in African music are abundant and the musicians view them as necessary. 
Simha Arom states, "All musical pieces are characterized by cyclic structure that generates numerous improvised variations: repetition and variation is one of the most fundamental principles of all Central African musics, as indeed of many other musics in Black Africa." He continues, "Finally, improvisation, which I have described as the driving force behind melodic and rhythmic variations, plays an important part in every group. But there is no such thing as free improvisation, that is, improvisation that does not refer back to some precise and identifiable piece of music. It is always subordinate to the musical structure in which it appears..."</p> <p>Variation is a very important aspect in African music (and the musics of the African diaspora). At every level of the music, variation is expected, with the stipulation that the structure of the part being played is not compromised. Harmony is no exception. Arom explains: "Melodic and rhythmic variations can, however, affect the instrumental formula, just as they can appear in the song the formula supports and summarizes. These variations engender a large variety of vertical combinations, or consonances." Vertical combinations in African music have two different yet complementary functions. One function is that of being a structural reference point. The other is that of being an embellishment, or "color tone".</p> <p>Arom proceeds to note: "We have already remarked that specific vertical combinations in each formula act as temporal reference points by virtue of their regular repetition at a given position in the periodic cycle. These combinations are the points at which several superposed melodic lines meet. They are usually based on octaves, fifths, and fourths, precisely the intervals which make up sections 1–4 of Chailley's resonance table (1960: 35), and this is certainly no accident. We may therefore assume that they take on a structural function." These vertical combinations that constitute reference points are chords that, together, form a chord progression. This is similar to the concepts of chords and progressions in cyclical forms in Jazz, Blues and other musics of the African diaspora.</p> <p>Arom concludes: "All other consonances can be viewed in the same way as the result of conjunctions of different melodies, but unlike the regularly repeated ones, their content (and at times even their position) is an arbitrary consequence of the numerous melodic and, particularly, rhythmic variations allowed in the various realizations of the formula. Consonances of this type seem intended to provide color, over and above the melodic nature of their constituent elements. This is a natural consequence of the fact that musicians tend to make full use of their available resources to enrich and variegate the texture of sound when performing cyclic music." These harmonic variations combined with rhythmic variations explain (in addition to the implementation of pedal notes) both "oblique" phenomena (anticipations and suspensions) and horizontal phenomena (drones and broken or ornamented pedal points).</p> <p>David Locke, in an article entitled "Improvisation in West African Music", states "...African musicians do improvise on various aspects of music, including melody, text, form, polyphony, rhythm, and timbre." 
These improvisations are based on preexisting musical structures and as such are variations and embellishments.</p> <p>The principles and techniques outlined above are all subject to variation not only by region and people, but also by spontaneously improvised variations during performance. This creates complex harmonies. This is similar to the way Jazz musicians during a performance will alter a chord and embellish it with different "color tones", while still emphasizing principal chord tones so as not to disrupt the chord progression of the song.</p> <p>Homophonic parallelism is also affected by this variation principle. With regard to improvised vocal parts within homophonic parallelism, Gerhard Kubik, in volume I of his book "Theory of African Music", says, "Another implicit concept of this multi-part musical system is linearity, i.e. each voice exists in its own right, though at the same time there remains the perspective of simultaneous vertical sound. All participants sing the same text, but their melodic lines are not parallel throughout, as might be expected from the tonal inflections of the language. on the contrary, oblique and counter-movement is consciously employed in order to emphasize the individuality of each participating voice. contrary motion is not always perceptible in recordings because the voices merge with one another. In practice an individual singer in the group can change the direction of his melodic line whenever he likes...An individual singer can also string up several variants of his voice part successively along the time-line. In 'chiyongoyongo' for instance, there are dozens of simultaneous variants possible and each are perceived as correct. this leads to a very lively style of variation, in which each individual voice is conceived to be linear and independent while contributing to the euphoric whole."</p> <p>He continues: "Where the precepts of tonal languages permit it (and this is the case in eastern Angola) we can thus find a kind of multi-part singing which transcends the "parallel harmony," so often described by authors as typical of one or the other African style. The multi-part singing style of the peoples of eastern Angola, including the Mbwela, Luchazi, Chokwe, Luvale and others is only parallel in theory. the creation of harmonic sound is accomplished within a relatively loose combination of individual voices, fluctuating between triads, bichords and more or less dense accumulations of notes. the exact shape of the chords, the duplication and omissions of individual notes in the total pattern may change with every repetition."</p> <h2 id="traditional-african-harmony-as-the-basis-for-jazz-and-blues-harmony" tabindex="-1">Traditional African harmony as the basis for jazz and blues harmony <a class="header-anchor" href="#traditional-african-harmony-as-the-basis-for-jazz-and-blues-harmony" aria-label="Permalink to "Traditional African harmony as the basis for jazz and blues harmony""></a></h2> <p>See:</p> <ul> <li>Gerhard Kubik's A case in point: Bebop: the African matrix in Jazz harmonic practices</li> <li>Gerhard Kubik's Africa and the Blues</li> <li>Gerhard Kubik's Theory of African Music volumes I and II</li> <li>David Locke's Improvisation in West Africa</li> <li>Karlton E. 
Hester's Bigotry and the Afrocentric “Jazz” Evolution</li> <li>Gunther Schuller's Early Jazz: Its Roots and Musical Development</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/Orchestra_impa-abmpix-17288.jpeg" length="0" type="image/jpeg"/> </item> <item> <title><![CDATA[Gnawa music]]></title> <link>https://chromatone.center/theory/rhythm/system/crossbeat/gnawa/</link> <guid>https://chromatone.center/theory/rhythm/system/crossbeat/gnawa/</guid> <pubDate>Sun, 27 Oct 2024 00:00:00 GMT</pubDate> <description><![CDATA[Moroccan religious songs and rhythms]]></description> <content:encoded><![CDATA[<p>Gnawa music (Ar. ڭْناوة or كْناوة) is a body of Moroccan religious songs and rhythms. Emerging in the 16th and 17th centuries, Gnawa music developed through the cultural fusion of West Africans brought to Morocco, notably the Hausa, Fulani, and Bambara peoples, whose presence and heritage are reflected in the songs and rituals. Its well-preserved heritage combines ritual poetry with traditional music and dancing. The music is performed at lila, communal nights of celebration dedicated to prayer and healing guided by the Gnawa maalem, or master musician, and their group of musicians and dancers. Though many of the influences that formed this music can be traced to West African kingdoms, its traditional practice is concentrated in Morocco. Gnawa music has spread to many other countries in Africa and Europe, such as France.</p> <p>The origins of Gnawa music are intricately associated with those of the famed royal "Black Guard" of Morocco.</p> <h2 id="etymology" tabindex="-1">Etymology <a class="header-anchor" href="#etymology" aria-label="Permalink to "Etymology""></a></h2> <p>The word "Gnawa", plural of "Gnawi", is taken to be derived from the Hausa demonym "Kanawa" for the residents of Kano, the capital of the Hausa-Fulani Emirate, which was under Moroccan influence (Opinion of Essaouira Gnawa Maalems, Maalem Sadiq, Abdallah Guinia, and many others). The Moroccan language often replaces "K" with "G", which is how the Kanawa, or Hausa people, came to be called Gnawa in Morocco.</p> <h2 id="music" tabindex="-1">Music <a class="header-anchor" href="#music" aria-label="Permalink to "Music""></a></h2> <p>In a Gnawa song, one phrase or a few lines are repeated over and over, so the song may last a long time. In fact, a song may last several hours non-stop. However, what seems to the uninitiated to be one long song is actually a series of chants describing the various spirits (in Arabic mlouk (sing. melk)), so what seems to be a 20-minute piece may be a whole series of pieces – a suite for Sidi Moussa, Sidi Hamou, Sidi Mimoun or others. Because they are suited for adepts in a state of trance, they go on and on, and have the effect of provoking a trance from different angles.</p> <p>The melodic language of the stringed instrument is closely related to their vocal music and to their speech patterns, as is the case in much African music. It is a language that emphasizes the tonic and fifth, with quavering pitch-play, especially pitch-flattening, around the third, the fifth, and sometimes the seventh.</p> <p>Gnawa music is characterized by its instrumentation. The large, heavy iron castanets known as qraqab or krakeb and a three-string lute known as a hajhuj, guembri or gimbri, or sentir, are central to Gnawa music. 
The hajhuj has strong historical and musical links to West African lutes like the Hausa halam, a direct ancestor of the banjo.</p> <p>The rhythms of the Gnawa, like their instruments, are distinctive. Gnawa is particularly characterized by interplay between triple and duple meters. The "big bass drums" mentioned by Schuyler are not typically featured in a more traditional setting.</p> <p>Gnawa have venerable stringed-instrument traditions involving both bowed lutes like the gogo and plucked lutes like the hajhuj. The Gnawa also use large drums called tbel in their ritual music.</p> <p>Gnawa hajhuj players use a technique which 19th century American minstrel banjo instruction manuals identify as "brushless drop-thumb frailing". The "brushless" part means the fingers do not brush several strings at once to make chords. Instead, the thumb drops repeatedly in a rhythmic pattern against the freely vibrating bass string, producing a throbbing drone, while the first two or three fingers of the same (right) hand pick out percussive patterns in a drum-like, almost telegraphic, manner.</p> <h2 id="rituals" tabindex="-1">Rituals <a class="header-anchor" href="#rituals" aria-label="Permalink to "Rituals""></a></h2> <p>Gnawas perform a complex liturgy, called lila or derdeba. The ceremony recreates the first sacrifice and the genesis of the universe by the evocation of the seven main manifestations of the divine demiurgic activity. It calls the seven saints and mluk, represented by seven colors, as a prismatic decomposition of the original light/energy. The derdeba is jointly animated by a maâlem (master musician) at the head of his troop and by a moqadma or shuwafa (clairvoyant) who is in charge of the accessories and clothing necessary to the ritual.</p> <p>During the ceremony, the clairvoyant determines the accessories and clothing as it becomes ritually necessary. Meanwhile, the maâlem, using the guembri and by burning incense, calls the saints and the supernatural entities to present themselves in order to take possession of the followers, who devote themselves to ecstatic dancing.</p> <p>Inside the brotherhood, each group (zriba; Arabic: زريبة) gets together with an initiatory moqadma (Arabic: مقدمة), the priestess that leads the ecstatic dance called the jedba (Arabic: جذبة), and with the maâlem, who is accompanied by several players of krakeb.</p> <p>Preceded by an animal sacrifice that assures the presence of the spirits, the all-night ritual begins with an opening that consecrates the space, the aâda ("habit" or traditional norm; Arabic: عادة), during which the musicians perform a swirling acrobatic dance while playing the krakeb.</p> <p>The mluk are abstract entities that gather a number of similar jinn (genie spirits). The participants enter a trance state (jedba) in which they may perform spectacular dances. By means of these dances, participants negotiate their relationships with the mluk either placating them if they have been offended or strengthening an existing relationship. The mluk are evoked by seven musical patterns, seven melodic and rhythmic cells, who set up the seven suites that form the repertoire of dance and music of the Gnawa ritual. During these seven suites, seven different types of incense are burned and the dancers are covered by veils of seven different colours.</p> <p>Each of the seven families of mluk is populated by many characters identifiable by the music and by the footsteps of the dance. 
Each mluk is accompanied by its specific colour, incense, rhythm and dance. These entities, treated like "presences" (called hadra, Arabic: حضرة) that the consciousness meets in ecstatic space and time, are related to mental complexes, human characters, and behaviors. The aim of the ritual is to reintegrate and to balance the main powers of the human body, made by the same energy that supports the perceptible phenomena and divine creative activity.</p> <p>Later, the guembri opens the treq ("path," Arabic: طريق), the strictly encoded sequence of the ritual repertoire of music, dances, colors and incenses, that guides in the ecstatic trip across the realms of the seven mluk, until the renaissance in the common world, at the first lights of dawn.</p> <p>Almost all Moroccan brotherhoods, such as the Issawa or the Hamadsha, relate their spiritual authority to a saint. The ceremonies begin by reciting that saint's written works or spiritual prescriptions (hizb, Arabic: حزب) in Arabic. In this way, they assert their role as spiritual descendants of the founder, giving themselves the authority to perform the ritual. Gnawa, whose ancestors were neither literate nor native speakers of Arabic, begin the lila by recalling through song and dance their origins, the experiences of their slave ancestors, and ultimately redemption.</p> <h2 id="gnawa-music-today" tabindex="-1">Gnawa music today <a class="header-anchor" href="#gnawa-music-today" aria-label="Permalink to "Gnawa music today""></a></h2> <p>During the last few decades, Gnawa music has been modernizing and thus become more profane. However, there are still many privately organized lilas that conserve the music's sacred, spiritual status.</p> <p>Within the framework of the Gnaoua World Music Festival of Essaouira ("Gnaoua and Musics of the World"), the Gnawa play in a profane context with slight religious or therapeutic dimensions. Instead, in this musical expression of their cultural art, they share stages with other musicians from around the world. As a result, Gnawa music has taken a new direction by fusing its core spiritual music with genres like jazz, blues, reggae, and hip-hop. For four days every June, the festival welcomes musicians that come to participate, exchange and mix their own music with Gnawa music, creating one of the largest public festivals in Morocco. Since its debut in 1998, the free concerts have drawn an audience that has grown from 20,000 to over 200,000 in 2006, including 10,000 visitors from around the world.</p> <p>Past participants have included Randy Weston, Adam Rudolph, The Wailers, Pharoah Sanders, Keziah Jones, Byron Wallen, Omar Sosa, Doudou N'Diaye Rose, and the Italian trumpet player Paolo Fresu.</p> <p>There are also projects such as "The Sudani Project", a jazz/gnawa dialogue between saxophonist/composer Patrick Brennan, Gnawi maâlem Najib Sudani, and drummer/percussionist/vocalist Nirankar Khalsa. Brennan has pointed out that the metal qraqeb and gut bass strings of the guembri parallel the cymbal and bass in jazz sound.</p> <p>In the 1990s, young musicians from various backgrounds and nationalities started to form modern Gnawa bands. Gnawa Impulse from Germany, Mehdi Qamoum aka Medicament (The cure) from Morocco, Bab L' Bluz with members from France and Morocco, and Gnawa Diffusion from Algeria are some examples. These groups offer a rich mix of musical and cultural backgrounds, fusing their individual influences into a collective sound. 
They have woven elements of rap, reggae, jazz, blues, rock and rai into a vibrant musical patchwork.</p> <p>These projects incorporating Gnawa and Western musicians are essentially Gnawa fusions.</p> ]]></content:encoded> </item> <item> <title><![CDATA[MIDI Grid]]></title> <link>https://chromatone.center/practice/midi/grid/</link> <guid>https://chromatone.center/practice/midi/grid/</guid> <pubDate>Wed, 23 Oct 2024 00:00:00 GMT</pubDate> <description><![CDATA[Reactive colorful notes grid as a performance instrument]]></description> <content:encoded><![CDATA[<MidiGrid :width :height />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Interval reference songs]]></title> <link>https://chromatone.center/theory/notes/ear-traning/reference-songs/</link> <guid>https://chromatone.center/theory/notes/ear-traning/reference-songs/</guid> <pubDate>Wed, 11 Sep 2024 00:00:00 GMT</pubDate> <description><![CDATA[Popular songs that start or notably use intervals to be memorized and easily recognized]]></description> <content:encoded><![CDATA[<p>Interval recognition, the ability to name and reproduce musical intervals, is an important part of ear training, music transcription, musical intonation and sight-reading.</p> <h2 id="reference-songs" tabindex="-1">Reference songs <a class="header-anchor" href="#reference-songs" aria-label="Permalink to "Reference songs""></a></h2> <p>Some music teachers teach their students relative pitch by having them associate each possible interval with the first interval of a popular song. Such songs are known as "reference songs". However, others have shown that such familiar-melody associations are quite limited in scope, applicable only to the specific scale-degrees found in each melody.</p> <table tabindex="0"> <thead> <tr> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>0/unison</td> <td>"America the Beautiful" (on "Oh beautiful")' "God Save the King"/"My Country, 'Tis of Thee" "Hava Nagila" "Jingle Bells" "La Marseillaise""One Note Samba" "Twinkle, Twinkle, Little Star" (on "twin-kle")</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Steps/interval</td> <td>Ascending</td> <td></td> <td>Descending</td> <td></td> </tr> <tr> <td>1/minor second</td> <td>Theme of the One Ring from The Lord of the Rings Theme from Jaws "Nice Work If You Can Get It" "Isn't She Lovely" Ode to Joy (10)</td> <td>C↑C♯</td> <td>"Stella by Starlight" "Joy to the World" "Für Elise" Theme from Jurassic Park Wedding March Mendelssohn</td> <td>C↓B</td> </tr> <tr> <td>2/major second</td> <td>"Frère Jacques" "Silent Night" "Never Gonna Give You Up" "Strangers in the Night" "Do-Re-Mi"</td> <td>C↑D</td> <td>"Mary Had a Little Lamb" "Three Blind Mice" "Satin Doll" "The First Noel"</td> <td>C↓B♭</td> </tr> <tr> <td>3/minor third</td> <td>"A Cruel Angel's Thesis" (theme from Neon Genesis Evangelion) "Axel F" the Beverly Hills Cop theme song "Greensleeves" "Smoke on the Water" "O Canada" "The Impossible Dream" "So Long, Farewell" "Oh where, oh where has my little dog gone" "Iron Man" by Black Sabbath "Bad" "Brahms's Lullaby" "Hallelujah"</td> <td>C↑E♭</td> <td>"Hey Jude" "The Star-Spangled Banner" "Frosty the Snowman" "This Old Man" "Can We Fix It?" 
from Bob the Builder</td> <td>C↓A</td> </tr> <tr> <td>4/major third</td> <td>"When the Saints Go Marching In" "Spring" from Vivaldi's "Four Seasons" "Kumbaya" "And did those feet in ancient time" "O mio babbino caro"</td> <td>C↑E</td> <td>"Summertime" "Swing Low, Sweet Chariot" "Goodnight, Ladies" Beethoven's Symphony No. 5</td> <td>C↓A♭</td> </tr> <tr> <td>5/perfect fourth</td> <td>"Auld Lang Syne" "O Tannenbaum/Oh Christmas Tree" Here Comes the Bride "Amazing Grace"</td> <td>C↑F</td> <td>Eine kleine Nachtmusik "O Come, All Ye Faithful" Theme from Dynasty</td> <td>C↓G</td> </tr> <tr> <td>6/tritone</td> <td>"Maria" from West Side Story "The Simpsons Theme"</td> <td>C↑F♯</td> <td>"YYZ" "Black Sabbath" "Even Flow"</td> <td>C↓F♯</td> </tr> <tr> <td>7/perfect fifth</td> <td>Also sprach Zarathustra "Can't Help Falling in Love" (on Wise Men) "My Favorite Things" "Scarborough Fair" Theme from Star Wars "Twinkle, Twinkle, Little Star" (from 1st to 2nd "twinkle") "Yeah!"</td> <td>C↑G</td> <td>Back to the Future Theme "Don't You (Forget About Me)" chorus The Flintstones Theme "Seven Steps to Heaven" "What Do You Do With A Drunken Sailor?"</td> <td>C↓F</td> </tr> <tr> <td>8/minor sixth</td> <td>"Go Down Moses" on "When Is(-rael)" Theme of 1492: Conquest of Paradise "The Entertainer" (big interval after pick-up) "Close Every Door" "Nothing Compares 2 U"</td> <td>C↑A♭</td> <td>"You're Everything" from Light as a Feather "Where Do I Begin?" from the film Love Story</td> <td>C↓E</td> </tr> <tr> <td>9/major sixth</td> <td>"For He's a Jolly Good Fellow" "It Came Upon the Midnight Clear" "Jingle Bells" (on "dash-ing through the snow") "Leia's Theme" (from Star Wars) "Libiamo ne' lieti calici", brindisi from Verdi's 1853 opera La traviata"My Bonnie Lies over the Ocean" "My Way" NBC Theme Song</td> <td>C↑A</td> <td>"Man in the Mirror" (chorus) "The Music of the Night" "Nobody Knows the Trouble I've Seen" "Over There" "Sweet Caroline"</td> <td>C↓E♭</td> </tr> <tr> <td>10/minor seventh</td> <td>Theme from Star Trek "Somewhere" from West Side Story "The Winner Takes It All"</td> <td>C↑B♭</td> <td>"Watermelon Man" "An American in Paris" "Lady Jane" (refrain)(9)</td> <td>C↓D</td> </tr> <tr> <td>11/major seventh</td> <td>"Take On Me" Theme from Fantasy Island</td> <td>C↑B</td> <td>"I Love You" “Have Yourself a Merry Little Christmas (Nat King Cole)” "Maybe"</td> <td>C↓C♯</td> </tr> <tr> <td>12/octave</td> <td>"Over the Rainbow" "Blue Bossa" "The Christmas Song" "Starman" "Let It Snow! Let It Snow! Let It Snow!" "When You Wish Upon a Star"</td> <td>C↑C</td> <td>"Willow Weep for Me"</td> <td>C↓C</td> </tr> </tbody> </table> <p>In addition, there are various solmization systems (including solfeggio, sargam, and numerical sight-singing) that assign specific syllables to different notes of the scale. 
Among other things, this makes it easier to hear how intervals sound in different contexts, such as starting on different notes of the same scale.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/indra-projects.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Eguchi method]]></title> <link>https://chromatone.center/theory/notes/ear-traning/eguchi/</link> <guid>https://chromatone.center/theory/notes/ear-traning/eguchi/</guid> <pubDate>Tue, 10 Sep 2024 00:00:00 GMT</pubDate> <description><![CDATA[Effective use of colors to train perfect pitch with kids]]></description> <content:encoded><![CDATA[<p><a href="https://pganssle.github.io/cim/" target="_blank" rel="noreferrer">https://pganssle.github.io/cim/</a></p> <youtube-embed video="l2Z6uEsx9lE"/><blockquote> <p><a href="https://www.washingtonpost.com/wp-dyn/content/article/2009/07/26/AR2009072602350.html?sid=ST2009072602370" target="_blank" rel="noreferrer">https://www.washingtonpost.com/wp-dyn/content/article/2009/07/26/AR2009072602350.html?sid=ST2009072602370</a></p> </blockquote> <h1 id="an-elusive-musical-gift-could-be-at-children-s-fingertips" tabindex="-1">An Elusive Musical Gift Could Be at Children's Fingertips <a class="header-anchor" href="#an-elusive-musical-gift-could-be-at-children-s-fingertips" aria-label="Permalink to "An Elusive Musical Gift Could Be at Children's Fingertips""></a></h1> <h2 id="a-perfect-way-to-teach-music" tabindex="-1">A 'Perfect' Way to Teach Music <a class="header-anchor" href="#a-perfect-way-to-teach-music" aria-label="Permalink to "A 'Perfect' Way to Teach Music""></a></h2> <blockquote> <p>A Japanese method of training may help young music students learn "perfect pitch," a skill that many thought was only innate. The method, as shown in this video, teaches children to raise a certain color flag when they hear a specific chord, starting with the red flag and the C chord. Throughout the training, more flags and different chords are introduced.Video courtesy of Ichionkai Music School, Edited by Ashley Barnas/The Washington Post</p> <p>By Kathryn Tolbert Washington Post Staff Writer Monday, July 27, 2009 TOKYO</p> </blockquote> <p>If you could give your child the gift of perfect pitch - the ability to identify a note simply by hearing it - would you? The few who are born with perfect pitch say notes have a concrete identity and presence, almost like colors, and being able to intuitively recognize them gives music an almost three-dimensional quality.</p> <p>To put it simply, "if you taste a dish and you can name every ingredient - that is like having perfect pitch," said pianist and music teacher Chizuko Ozawa.</p> <p>It is widely accepted that you cannot learn perfect pitch as an adult. But your child, it appears, can.</p> <p>Kazuko Eguchi started developing a method 40 years ago, when she was a young college music instructor frustrated both by her own lack of perfect pitch and the weaknesses she saw in her students. She attributed the problem to poor early training.</p> <p>U.S. piano teachers will get a glimpse of her eventual solution this week. Tomoko Kanamaru, pianist and assistant professor of music at The College of New Jersey, will give a presentation titled "Can Perfect Pitch be Taught? 
Introduction to the Eguchi Method" at the National Conference on Keyboard Pedagogy in Lombard, Ill.</p> <p>The Eguchi Method is used by more than 800 teachers around Japan to teach perfect pitch to very young children, claiming a success rate of almost 100 percent for those who start before they are 4 years old. At the end of the training, which starts by matching chords with colored flags, a teacher will play random notes on the piano and the child, without looking, can identify them.</p> <p>Once learned, perfect pitch stays with a child for life, the teachers believe. But attaining it is not quick or easy, even for a 3-year-old. It also requires a committed and patient parent willing to invest a few minutes several times a day for up to two years. And it is all separate from learning to play the piano or any other instrument. In fact, the child never touches the keyboard during these ear training sessions.</p> <p>The teacher starts by playing the three-note C major chord on a piano, and the child is instructed to raise a red flag. (It doesn't have to be red, or even a flag; any simple symbol will do.) At home, the parent carries on the instruction by playing the C chord and the child, sitting where he cannot see the keyboard, raises the red flag. They do this a few times every day.</p> <p>After a couple of weeks, a second chord and flag are added. Now the child has to raise a yellow flag for an F major chord, and the red for a C. Then a third and fourth chord join the mix. Eventually all the white-key chords are associated with a colored flag, then all those with black keys. The child names the chord only by its color.</p> <p>Training sessions are meant to be quite short, a few minutes each, but repeated frequently. Chords are played in random sequence, never in the same order, to prevent the child from identifying any chord by its relation to another.</p> <p>Later, the child calls out the individual notes that make up the chord. For C major, which is C-E-G, or do-mi-so, the child raises the red flag and says, "red, do-mi-so," for example.</p> <p>Eventually, the parent or teacher, after playing the chord, takes the highest note and plays it separately. The child names the chord, the individual notes, and then upon hearing the single note, identifies that one.</p> <p>The Eguchi Method - a course for children that focuses on the piano and includes perfect pitch development as part of ear training - differs from other pitch training by focusing initially on chords instead of single notes. Eguchi says that starting with notes instead of chords leads some children to identify the note by its relative position to another note. Ozawa says remembering chords is easier for children; she compares it to remembering a face rather than just the eyes.</p> <p>But other than the perceptual satisfaction of having perfect pitch, what are the benefits?</p> <p>Eugene Pridonoff, a pianist and artist in residence at the University of Cincinnati who visits Eguchi's Ichionkai Music School in Tokyo twice a year as a guest piano teacher, says he can see the results from early ear training.</p> <p>"I'll explain and demonstrate: They understand, are able to absorb, hear it and immediately reproduce it on their piano," he said. "These kids were remarkably consistent and remarkably quick in that respect - in how sensitive their ears are to nuance.</p> <p>"It's clear that the musical ears of these kids are so well developed. Is it the Eguchi Method? To a great degree, it's because they start so young," he said. 
"That sort of discerning differences in groups of chords has to be activating brain connections beyond what most kids are getting."</p> <p>The music school has about 1,500 students, from toddlers to high-school teenagers. The ideal starting age is 2 1/2 to 3, and training is not effective after 8, said Kanamaru, the College of New Jersey professor.</p> <p>A recent study by Ken'ichi Miyazawa and Yoko Ogawa published in the journal Music Perception looks at the incidence of perfect pitch among Japanese children who started music lessons at age 4 at an unnamed music school "run by the largest music corporation in Japan." They reported that the accuracy of pitch improved from near-chance at age 4 to around 80 percent at age 7 and did not improve much after that.</p> <p>Eguchi, 67, who is confined to a wheelchair because she suffers from acute articular rheumatism and can no longer play the piano or teach, said in a telephone interview that because of her illness, she was only able to give her daughter, Ayako, partial training.</p> <p>But her grandchildren have learned at her music school.</p> <p>"Ayako's daughter can harmonize to the melody very smoothly. It's almost like when she sees the music, she hears it," Eguchi said. "She has a good ability to make judgments in sound. She can come up with different harmonization than she sees. When the key changes, it is very natural to her."</p> <p>Pridonoff says he is impressed with Eguchi's success in Japan. But he is not sure how adaptable the method is to American society. "Very few children in the U.S. are able to start so young with such intensive guidance in the combination of teacher and parent," he said. "What we have seen in Japan is that the mother is working with those children on a regular basis every day in addition to their having lessons."</p> <p>He added: "In our culture, in many families both parents are working, and we seem to want our children to be well-rounded, involved in a multitude of activities. It is rare in our country for children to start to specialize at such a young age."</p> <p>See video of flag training at <a href="http://www.washington" target="_blank" rel="noreferrer">http://www.washington</a> post.com/science.</p> <p>To see if you have perfect pitch, Diana Deutsch of the University of California, San Diego, has a test at <a href="http://deutsch.ucsd.edu" target="_blank" rel="noreferrer">http://deutsch.ucsd.edu</a>, clicking on the article "Perfect Pitch: Language Wins Out Over Genetics" and then going to the bottom of the page. Or go directly to <a href="http://www.acoustics.org/press/157th/deutsch3.htm" target="_blank" rel="noreferrer">http://www.acoustics.org/press/157th/deutsch3.htm</a>.</p> <hr> <blockquote> <p><a href="https://ichionkai.co.jp/english4.html" target="_blank" rel="noreferrer">https://ichionkai.co.jp/english4.html</a></p> </blockquote> <h2 id="perfect-pitch-program" tabindex="-1">Perfect Pitch Program <a class="header-anchor" href="#perfect-pitch-program" aria-label="Permalink to "Perfect Pitch Program""></a></h2> <p>Developed by Japanese master pedagogue, Kazuko Eguchi, the Eguchi Method has proven that anyone under the age of six can acquire perfect pitch. To this date, more than 20000 children have acquired the perfect pitch through the method.</p> <p>This method was first introduced in the U.S. 
at the National Conference on Keyboard Pedagogy, held in Chicago in July and August 2009, by faculty of Ichionkai: Ayako Eguchi, Ph.D (Ochanomizu University, Tokyo, Japan) and Tomoko Kanamaru, DMA (who currently also teaches at The College of New Jersey, Ewing, NJ). This workshop offered an opportunity to become familiar with this unique training, and was received with great interest by the audience.</p> <h2 id="what-is-perfect-pitch" tabindex="-1">What is Perfect Pitch? <a class="header-anchor" href="#what-is-perfect-pitch" aria-label="Permalink to "What is Perfect Pitch?""></a></h2> <p>Perfect pitch is the ability to identify tones by pitch name without any reference or external standard. In the Eguchi Method, we use the single-note labeling test (presented below) to determine whether a person has perfect pitch. We consider a person to have perfect pitch if they answer all of the questions correctly. In the test, the person names (labels) several single pitches without being given any standard tone, and is not told whether each answer is right or wrong until all the questions are finished.</p> <h2 id="what-advantages-come-with-perfect-pitch" tabindex="-1">What advantages come with Perfect Pitch? <a class="header-anchor" href="#what-advantages-come-with-perfect-pitch" aria-label="Permalink to "What advantages come with Perfect Pitch?""></a></h2> <p>You can identify a tone as a pitch just by hearing it, without any reference. In other words, perfect pitch gives you the ability to imagine the right pitch without any instruments or tuners. This makes it easier to play music by ear and to write music down in notation. In addition, you can memorize music by its pitches and form a stronger mental image of tones, which keeps that memory fresh for a long time. All musicians will appreciate how useful perfect pitch is for many kinds of musical activity, such as singing and composing.</p> <h2 id="so-everything-depends-on-perfect-pitch" tabindex="-1">So everything depends on perfect pitch? <a class="header-anchor" href="#so-everything-depends-on-perfect-pitch" aria-label="Permalink to "So everything depends on perfect pitch?""></a></h2> <p>Obtaining perfect pitch is not the goal of overall musical training. It is rather a tool that develops musical sense and makes further musical development smoother. In the Eguchi Method, we do not consider perfect pitch a goal in itself for children, but recognize it as one of the general musical abilities.</p> <p>As discussed in "What advantages come with Perfect Pitch?", a student with perfect pitch can engage with music by "ear" alone - their internal perception. This makes playing, singing and composing quicker and easier. In many cases, these students also become more attentive and sensitive to sound. We believe that this skill encourages their musical independence in their further musical studies.</p> <p>Perfect pitch is not a special ability given naturally to only a few people. It can be achieved by anybody in the proper training environment, at a certain stage of a child's development. Not only for professionals, but for anybody who loves music, perfect pitch is a useful skill for enjoying music.</p> <h2 id="what-kind-of-training-process" tabindex="-1">What kind of training process? 
<a class="header-anchor" href="#what-kind-of-training-process" aria-label="Permalink to "What kind of training process?""></a></h2> <p>The main procedure of Eguchi Method Perfect Pitch Training System is listening to the chords and labeling them in "color" but not an actual pitch name. Each chord is applied different colors. Children do not perceive the chords as pitch names at first, but identify as a "color" image.</p> <p>The internet lessons are easily accessible and will cause no difficulty for children. Access to our Perfect Pitch Training System website, and type password. Then start training! It requires no complicated process, but simply clicking the answers. The result needs to be sent to us, and we will give a feedback for each lesson.</p> <h2 id="under-what-condition-can-children-have-the-training" tabindex="-1">Under what condition, can children have the training? <a class="header-anchor" href="#under-what-condition-can-children-have-the-training" aria-label="Permalink to "Under what condition, can children have the training?""></a></h2> <p>Since 1960s, one of the leading Japanese music pedagogues, Kazuko Eguchi, has started employing the perfect pitch training in her piano teaching.</p> <p>The procedure was presented in public at the International Music Psychology Seminar in Czech Republic in 1981, for the first time. In the following year, Japanese academic journal "Music Pedagogy Studying" had featured about Eguchi's perfect pitch training. In 1991, a book about know-how of perfect pitch training had been published: "Tones flying to you like a rocket" by Niki-Publishing in Japan (current version: "New Perfect Pitch Program" by Zen-on Publishing). Later, Ayako Eguchi, psychologist, had started to systematize the perfect pitch training through a point of view as a psychologist.</p> <p>She introduced and discussed about the method three times in the academic journal "educational psychology study", and five times at international conference (ICMPC). In 1999, the effectivity of Perfect Pitch Program is proved and accepted as a result of her Ph.D dissertation at Tokyo University. By 2002, Kazuko and Ayako Eguchi had published four books about Eguchi method Perfect Pitch Training System (all in Japanese).</p> <p>The training method has been frequently introduced in the newspaper and media, and widely accepted in Japan.</p> <p>Also in 2009 July, it has been introduced in Washington Post for the first time in United States.</p> <p>At Ichionkai Music School, we are currently teaching about 1500 students.</p> <p>All of them either acquired the perfect pitch by this training system or continuing training currently. Also other 500 students participate the training through the Internet perfect pitch program. 
So far, a total of 20000 students have acquired perfect pitch through the Eguchi Method Perfect Pitch Training System.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/book.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Ear training]]></title> <link>https://chromatone.center/theory/notes/ear-traning/</link> <guid>https://chromatone.center/theory/notes/ear-traning/</guid> <pubDate>Tue, 10 Sep 2024 00:00:00 GMT</pubDate> <description><![CDATA[Aural skills development methods]]></description> <content:encoded><![CDATA[<p>In music, ear training is the study and practice in which musicians learn various aural skills to detect and identify pitches, intervals, melody, chords, rhythms, solfège, and other basic elements of music, solely by hearing. Someone who can identify pitch accurately without any context is said to have perfect pitch, while someone who can only identify pitch provided a reference tone or other musical context is said to have relative pitch. Someone who can't perceive these qualities at all is said to be tone deaf. The application of this skill is somewhat analogous to taking dictation in written/spoken language. As a process, ear training is in essence the inverse of reading music, which is the ability to decipher a musical piece by reading musical notation. Ear training is typically a component of formal musical training and is a fundamental, essential skill required in music schools and for the mastery of music.</p> <h2 id="functional-pitch-recognition" tabindex="-1">Functional pitch recognition <a class="header-anchor" href="#functional-pitch-recognition" aria-label="Permalink to "Functional pitch recognition""></a></h2> <p>Functional pitch recognition involves identifying the function or role of a single pitch in the context of an established tonic. Once a tonic has been established, each subsequent pitch may be classified without direct reference to accompanying pitches. For example, once the tonic G has been established, listeners may recognize that the pitch D plays the role of the dominant in the key of G. No reference to any other pitch is required to establish this fact.</p> <p>Many musicians use functional pitch recognition in order to identify, understand, and appreciate the roles and meanings of pitches within a key. To this end, scale-degree numbers or movable-do solmization (do, re, mi, etc.) can be quite helpful. Using such systems, pitches with identical functions (the key note or tonic, for example) are associated with identical labels (1 or do, for example).</p> <p>Functional pitch recognition is not the same as fixed-do solfège, e.g. do, re, mi, etc. Functional pitch recognition emphasizes the role of a pitch with respect to the tonic, while fixed-do solfège symbols are labels for absolute pitch values (do=C, re=D, etc., in any key). In the fixed-do system (used in the conservatories of the Romance language nations, e.g. Paris, Madrid, Rome, as well as the Juilliard School and the Curtis Institute in the USA), solfège symbols do not describe the role of pitches relative to a tonic, but rather actual pitches. In the movable-do system, there happens to be a correspondence between the solfège symbol and a pitch's role. However, there is no requirement that musicians associate the solfège symbols with the scale degrees. 
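</p> <p>Concretely, this kind of functional labeling reduces to the pitch-class distance between a heard note and the established tonic. Here is a minimal JavaScript sketch of the idea, assuming MIDI note numbers and one common set of chromatic movable-do syllables (the function and variable names are illustrative only, not taken from any particular library):</p> <pre><code>// Label a pitch by its function relative to an established tonic.
// MIDI note numbers are assumed (60 = middle C).
const MOVABLE_DO = ['do', 'ra', 're', 'me', 'mi', 'fa', 'fi', 'sol', 'le', 'la', 'te', 'ti']

function functionalLabel(midiNote, tonicMidi) {
  // pitch-class distance from the tonic, independent of octave
  const degree = ((midiNote - tonicMidi) % 12 + 12) % 12
  return MOVABLE_DO[degree]
}

// With G (55) established as tonic, D (62) plays the role of the dominant: 'sol'
console.log(functionalLabel(62, 55))
</code></pre> <p>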
In fact, musicians may utilize the movable-do system to label pitches while mentally tracking intervals to determine the sequence of solfège symbols.</p> <p>Functional pitch recognition has several strengths. Since a large body of music is tonal, the technique is widely applicable. Since reference pitches are not required, music may be broken up by complex and difficult to analyze pitch clusters, for example, a percussion sequence, and pitch analysis may resume immediately once an easier to identify pitch is played, for example, by a trumpet—no need to keep track of the last note of the previous line or solo nor any need to keep track of a series of intervals going back all the way to the start of a piece. Since the function of pitch classes is a key element, the problem of compound intervals with interval recognition is not an issue—whether the notes in a melody are played within a single octave or over many octaves is irrelevant.</p> <p>Functional pitch recognition has some weaknesses. Music with no tonic or ambiguous tonality does not provide the frame of reference necessary for this type of analysis. When dealing with key changes, a student must know how to account for pitch function recognition after the key changes: retain the original tonic or change the frame of reference to the new tonic. This last aspect in particular, requires an ongoing real-time (even anticipatory) analysis of the music that is complicated by modulations and is the chief detriment to the movable-do system.</p> <h2 id="interval-recognition" tabindex="-1">Interval recognition <a class="header-anchor" href="#interval-recognition" aria-label="Permalink to "Interval recognition""></a></h2> <p>Interval recognition is also a useful skill for musicians: in order to determine the notes in a melody, a musician must have some ability to recognize intervals. Some music teachers teach their students relative pitch by having them associate each possible interval with the first two notes of a popular song. However, others have shown that such familiar-melody associations are quite limited in scope, applicable only to the specific scale-degrees found in each melody.</p> <p>In addition, there are various systems (including solfeggio, sargam, and numerical sight-singing) that assign specific syllables to different notes of the scale. Among other things, this makes it easier to hear how intervals sound in different contexts, such as starting on different notes of the same scale.</p> <h2 id="chord-recognition" tabindex="-1">Chord recognition <a class="header-anchor" href="#chord-recognition" aria-label="Permalink to "Chord recognition""></a></h2> <p>Complementary to recognizing the melody of a song is hearing the harmonic structures that support it. Musicians often practice hearing different types of chords and their inversions out of context, just to hear the characteristic sound of the chord. They also learn chord progressions to hear how chords relate to one another in the context of a piece of music.</p> <h2 id="microtonal-chord-and-interval-recognition" tabindex="-1">Microtonal chord and interval recognition <a class="header-anchor" href="#microtonal-chord-and-interval-recognition" aria-label="Permalink to "Microtonal chord and interval recognition""></a></h2> <p>The process is similar to twelve-tone ear training, but with many more intervals to distinguish. Aspects of microtonal ear training are covered in Harmonic Experience, by W. A. 
Mathieu, with sight-singing exercises, such as singing over a drone, to learn to recognize just intonation intervals. There are also software projects underway or completed geared to ear training or to assist in microtonal performance.</p> <p>Gro Shetelig at The Norwegian Academy of Music is working on the development of a Microtonal Ear Training method for singers[4] and has developed the software Micropalette,[5] a tool for listening to microtonal tones, chords and intervals. Aaron Hunt at Hi Pi instruments has developed Xentone,[6] another tool for microtonal ear training. Furthermore, Reel Ear Web Apps[7] have released a Melodic Microtone Ear Training App based on call and response dictations.</p> <h2 id="rhythm-recognition" tabindex="-1">Rhythm recognition <a class="header-anchor" href="#rhythm-recognition" aria-label="Permalink to "Rhythm recognition""></a></h2> <p>One way musicians practise rhythms is by breaking them up into smaller, more easily identifiable sub-patterns. For example, one might start by learning the sound of all the combinations of four eighth notes and eighth rests, and then proceed to string different four-note patterns together.</p> <p>Another way to practise rhythms is by muscle memory, or teaching rhythm to different muscles in the body. One may start by tapping a rhythm with the hands and feet individually, or singing a rhythm on a syllable (e.g. "ta"). Later stages may combine keeping time with the hand, foot, or voice and simultaneously tapping out the rhythm, and beating out multiple overlapping rhythms.</p> <p>A metronome may be used to assist in maintaining accurate tempo.</p> <h2 id="timbre-recognition" tabindex="-1">Timbre recognition <a class="header-anchor" href="#timbre-recognition" aria-label="Permalink to "Timbre recognition""></a></h2> <p>Each type of musical instrument has a characteristic sound quality that is largely independent of pitch or loudness. Some instruments have more than one timbre, e.g. the sound of a plucked violin is different from the sound of a bowed violin. Some instruments employ multiple manual or embouchure techniques to achieve the same pitch through a variety of timbres. If these timbres are essential to the melody or function, as in shakuhachi music, then pitch training alone will not be enough to fully recognize the music. Learning to identify and differentiate various timbres is an important musical skill that can be acquired and improved by training.</p> <h2 id="transcription" tabindex="-1">Transcription <a class="header-anchor" href="#transcription" aria-label="Permalink to "Transcription""></a></h2> <p>Music teachers often recommend transcribing recorded music as a way to practise all of the above, including recognizing rhythm, melody and harmony. The teacher may also perform ('dictate') short compositions, with the pupil listening and transcribing them on to paper.</p> <h2 id="modern-training-methods" tabindex="-1">Modern training methods <a class="header-anchor" href="#modern-training-methods" aria-label="Permalink to "Modern training methods""></a></h2> <p>For accurate identification and reproduction of musical intervals, scales, chords, rhythms, and other audible parameters a great deal of practice is often necessary. Exercises involving identification often require a knowledgeable partner to play the passages in question and to assess the answers given. Specialised music theory software can remove the need for a partner, customise the training to the user's needs and accurately track progress. 
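</p> <p>As a rough illustration of what such software automates, the core of an interval drill can be sketched in a few lines of JavaScript (the names and the exact question format here are hypothetical, not taken from any particular product):</p> <pre><code>// Minimal sketch of an interval-drill generator for ear training practice.
const INTERVALS = ['unison', 'minor 2nd', 'major 2nd', 'minor 3rd', 'major 3rd',
  'perfect 4th', 'tritone', 'perfect 5th', 'minor 6th', 'major 6th',
  'minor 7th', 'major 7th', 'octave']

function makeQuestion() {
  const root = 48 + Math.floor(Math.random() * 24)  // random MIDI root between C3 and B4
  const semitones = Math.floor(Math.random() * 13)  // 0..12 semitones
  const ascending = Math.random() > 0.5
  const second = ascending ? root + semitones : root - semitones
  return { notes: [root, second], answer: INTERVALS[semitones] }
}

function checkAnswer(question, guess) {
  // a fuller trainer would also log results over time to track progress
  return guess === question.answer
}
</code></pre> <p>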
Conservatories and university music departments often license commercial software for their students, such as Meludia, EarMaster, Auralia, and MacGAMUT, so that they can track and manage student scores on a computer network. Similar data tracking software such as MyMusicianship and SonicFit focus on ear training for singers and are licensed by schools and community choirs. A variety of free software also exists, either as browser-based applications or as downloadable executables. For example, free and open source software under the GPL, such as GNU Solfege, often provides many features comparable with those of popular proprietary products.[citation needed] Most ear-training software is MIDI-based, permitting the user to customise the instruments used and even to receive input from MIDI-compatible devices such as electronic keyboards. Interactive ear-training applications are also available for smartphones.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/robin-gislain-gessy.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Absolute (Perfect) Pitch]]></title> <link>https://chromatone.center/theory/notes/ear-traning/absolute-pitch/</link> <guid>https://chromatone.center/theory/notes/ear-traning/absolute-pitch/</guid> <pubDate>Mon, 09 Sep 2024 00:00:00 GMT</pubDate> <description><![CDATA[Basic research on the ability to discern note frequencies without reference tone]]></description> <content:encoded><![CDATA[<blockquote> <p><a href="https://en.wikipedia.org/wiki/Absolute_pitch" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Absolute_pitch</a></p> </blockquote> <p>Absolute pitch (AP), often called perfect pitch, is the ability to identify or re-create a given musical note without the benefit of a reference tone. AP may be demonstrated using linguistic labelling ("naming" a note), associating mental imagery with the note, or sensorimotor responses. For example, an AP possessor can accurately reproduce a heard tone on a musical instrument without "hunting" for the correct pitch.</p> <h2 id="about" tabindex="-1">About <a class="header-anchor" href="#about" aria-label="Permalink to "About""></a></h2> <p>The frequency of AP in the general population is not known. A proportion of 1 in 10,000 is widely reported, but not supported by evidence; a 2019 review indicated a prevalence of at least 4% amongst music students.</p> <p>Generally, absolute pitch implies some or all of these abilities, achieved without a reference tone:</p> <ul> <li>Identify by name individual pitches played on various instruments.</li> <li>Name the key of a given piece of tonal music.</li> <li>Identify and name all the tones of a given chord or other tonal mass.</li> <li>Name the pitches of common everyday sounds such as car horns and alarms.</li> </ul> <p>Absolute pitch is distinct from relative pitch. While the ability to name specific pitches can be used to infer intervals, relative pitch identifies an interval directly by its sound. Absolute pitch may complement relative pitch in musical listening and practice, but it may also influence its development.</p> <p>Adults who possess relative pitch but do not already have absolute pitch can learn "pseudo-absolute pitch" and become able to identify notes in a way that superficially resembles absolute pitch. 
Some people have been able to develop accurate pitch identification in adulthood through training.</p> <h2 id="scientific-studies" tabindex="-1">Scientific studies <a class="header-anchor" href="#scientific-studies" aria-label="Permalink to "Scientific studies""></a></h2> <h3 id="history-of-study-and-terminologies" tabindex="-1">History of study and terminologies <a class="header-anchor" href="#history-of-study-and-terminologies" aria-label="Permalink to "History of study and terminologies""></a></h3> <p>Scientific studies of absolute pitch commenced in the 19th century, focusing on the phenomenon of musical pitch and methods of measuring it. It would have been difficult for the notion of absolute pitch to have formed earlier because pitch references were not consistent. For example, the note known as 'A' varied in different local or national musical traditions between what is considered as G sharp and B flat before the standardisation of the late 19th century. While the term absolute pitch, or absolute ear, was in use by the late 19th century by both British and German researchers, its application was not universal; other terms such as musical ear, absolute tone consciousness, or positive pitch referred to the same ability. The skill is not exclusively musical.</p> <h2 id="difference-in-cognition-not-elementary-sensation" tabindex="-1">Difference in cognition, not elementary sensation <a class="header-anchor" href="#difference-in-cognition-not-elementary-sensation" aria-label="Permalink to "Difference in cognition, not elementary sensation""></a></h2> <p>Physically and functionally, the auditory system of an absolute listener evidently does not differ from that of a non-absolute listener. Rather, "it reflects a particular ability to analyze frequency information, presumably involving high-level cortical processing." Absolute pitch is an act of cognition, needing memory of the frequency, a label for the frequency (such as "B-flat"), and exposure to the range of sound encompassed by that categorical label. Absolute pitch may be directly analogous to recognizing colors, phonemes (speech sounds), or other categorical perception of sensory stimuli. For example, most people have learned to recognize and name the color blue by the range of frequencies of the electromagnetic radiation that are perceived as light; those who have been exposed to musical notes together with their names early in life may be more likely to identify the note C. Although it was once thought that it "might be nothing more than a general human capacity whose expression is strongly biased by the level and type of exposure to music that people experience in a given culture", absolute pitch may be influenced by genetic variation, possibly an autosomal dominant genetic trait.</p> <h2 id="influence-by-music-experience" tabindex="-1">Influence by music experience <a class="header-anchor" href="#influence-by-music-experience" aria-label="Permalink to "Influence by music experience""></a></h2> <p>Evidence suggests that absolute pitch sense is influenced by cultural exposure to music, especially in the familiarization of the equal-tempered C-major scale. Most of the absolute listeners that were tested in this respect identified the C-major tones more reliably and, except for B, more quickly than the five "black key" tones, which corresponds to the higher prevalence of these tones in ordinary musical experiences. 
One study of Dutch non-musicians also demonstrated a bias toward using C-major tones in ordinary speech, especially on syllables related to emphasis.</p> <h2 id="linguistics" tabindex="-1">Linguistics <a class="header-anchor" href="#linguistics" aria-label="Permalink to "Linguistics""></a></h2> <p>Absolute pitch is more common among speakers of tonal languages, such as most dialects of Chinese or Vietnamese, which depend on pitch variation to distinguish words that otherwise sound the same—e.g., Mandarin with four possible tonal variations, Cantonese with nine, Southern Min with seven or eight (depending on dialect), and Vietnamese with six. Speakers of Sino-Tibetan languages have been reported to speak a word in the same absolute pitch (within a quarter-tone) on different days; it has therefore been suggested that absolute pitch may be acquired by infants when they learn to speak a tonal language (and possibly also by infants when they learn to speak a pitch-accent language). However, the brains of tonal-language speakers do not naturally process musical sound as language; such speakers may be more likely to acquire absolute pitch for musical tones when they later receive musical training. Many native speakers of a tone language, even those with little musical training, are observed to sing a given song with consistent pitch. Among music students of East Asian ethnic heritage, those who speak a tone language fluently have a higher prevalence of absolute pitch than those who do not speak a tone language.</p> <p>African level-tone languages—such as Yoruba, with three pitch levels, and Mambila, with four—may be better suited to study the role of absolute pitch in speech than the pitch and contour tone languages of East Asia.</p> <p>Speakers of European languages make subconscious use of an absolute pitch memory when speaking.</p> <h2 id="perception" tabindex="-1">Perception <a class="header-anchor" href="#perception" aria-label="Permalink to "Perception""></a></h2> <p>Absolute pitch is the ability to perceive pitch class and to mentally categorize sounds according to perceived pitch class. A pitch class is the set of all pitches that are a whole number of octaves apart. While the boundaries of musical pitch categories vary among human cultures, the recognition of octave relationships is a natural characteristic of the mammalian auditory system. Accordingly, absolute pitch is not the ability to estimate a pitch value from the dimension of pitch-evoking frequency (30–5000 Hz), but to identify a pitch class category within the dimension of pitch class (e.g., C-C♯-D ... B-C).</p> <p>An absolute listener's sense of hearing is typically no keener than that of a non-absolute ("normal") listener. Absolute pitch does not depend upon a refined ability to perceive and discriminate gradations of sound frequencies, but upon detecting and categorizing a subjective perceptual quality typically referred to as "chroma". The two tasks— of identification (recognizing and naming a pitch) and discrimination (detecting changes or differences in rate of vibration)— are accomplished with different brain mechanisms.</p> <h2 id="special-populations" tabindex="-1">Special populations <a class="header-anchor" href="#special-populations" aria-label="Permalink to "Special populations""></a></h2> <p>The prevalence of absolute pitch is higher among those who are blind from birth as a result of optic nerve hypoplasia.</p> <p>Absolute pitch is considerably more common among those whose early childhood was spent in East Asia. 
This might seem to be a genetic difference; however, people of East Asian ancestry who are reared in North America are significantly less likely to develop absolute pitch than those raised in East Asia, so the difference is more probably explained by experience. The language that is spoken may be an important factor; many East Asians speak tonal languages such as Mandarin, Cantonese, and Thai, while others (such as those in Japan and certain provinces of Korea) speak pitch-accent languages, and the prevalence of absolute pitch may be partly explained by exposure to pitches together with meaningful musical labels very early in life.</p> <p>Absolute pitch ability has higher prevalence among those with Williams syndrome and those with an autism spectrum disorder, with claims estimating that up to 30% of autistic people have absolute pitch. A non-verbal piano-matching method resulted in a correlation of 97% between autism and absolute pitch, with a 53% correlation in non-autistic observers. However, the converse is not indicated by research which found no difference between those with absolute pitch and those without on measures of social and communication skills, which are core deficits in autistic spectrum disorders. Additionally, the absolute pitch group's autism-spectrum quotient was "way below clinical thresholds".</p> <h2 id="nature-vs-nurture" tabindex="-1">Nature vs. nurture <a class="header-anchor" href="#nature-vs-nurture" aria-label="Permalink to "Nature vs. nurture""></a></h2> <p>Absolute pitch might be achievable by any human being during a critical period of auditory development, after which period cognitive strategies favor global and relational processing. Proponents of the critical-period theory agree that the presence of absolute pitch ability is dependent on learning, but there is disagreement about whether training causes absolute skills to occur or lack of training causes absolute perception to be overwhelmed and obliterated by relative perception of musical intervals.</p> <p>One or more genetic loci could affect absolute pitch ability, a predisposition for learning the ability or signal the likelihood of its spontaneous occurrence.</p> <p>Researchers have been trying to teach absolute pitch ability in laboratory settings for more than a century, and various commercial absolute-pitch training courses have been offered to the public since the early 1900s. In 2013, experimenters reported that adult men who took the antiseizure drug valproate (VPA) "learned to identify pitch significantly better than those taking placebo—evidence that VPA facilitated critical-period learning in the adult human brain". However, no adult has ever been documented to have acquired absolute listening ability, because all adults who have been formally tested after AP training have failed to demonstrate "an unqualified level of accuracy... comparable to that of AP possessors".</p> <h2 id="pitch-memory-related-to-musical-context" tabindex="-1">Pitch memory related to musical context <a class="header-anchor" href="#pitch-memory-related-to-musical-context" aria-label="Permalink to "Pitch memory related to musical context""></a></h2> <p>While very few people have the ability to name a pitch with no external reference, pitch memory can be activated by repeated exposure. People who are not skilled singers will often sing popular songs in the correct key, and can usually recognize when TV themes have been shifted into the wrong key. 
Members of the Venda culture in South Africa also sing familiar children's songs in the key in which the songs were learned.</p> <p>This phenomenon is apparently unrelated to musical training. The skill may be associated more closely with vocal production. Violin students learning the Suzuki method are required to memorize each composition in a fixed key and play it from memory on their instrument, but they are not required to sing. When tested, these students did not succeed in singing the memorized Suzuki songs in the original, fixed key.</p> <h2 id="possible-problems" tabindex="-1">Possible problems <a class="header-anchor" href="#possible-problems" aria-label="Permalink to "Possible problems""></a></h2> <youtube-embed video="QRaACa1Mrd4" /><p>Musicians with absolute perception may experience difficulties which do not exist for other musicians. Because absolute listeners are capable of recognizing that a musical composition has been transposed from its original key, or that a pitch is being produced at a nonstandard frequency (either sharp or flat), a musician with absolute pitch may become confused upon perceiving tones believed to be "wrong" or hearing a piece of music "in the wrong key". The relative pitch of the notes may be in tune to each other, but out of tune to the standard pitch or pitches the musician is familiar with or perceives as correct. This can especially apply to Baroque music, as many Baroque orchestras tune to A = 415 Hz as opposed to 440 Hz (i.e. roughly one standard semitone lower than the ISO standard for concert A), while other recordings of Baroque pieces (especially those of French Baroque music) are performed at 392 Hz. Historically, tuning forks for concert A used on keyboard instruments (which ensembles tune to when present), have varied widely in frequency, often between 415 Hz to 456.7 Hz.</p> <p>Variances in the sizes of intervals for different keys and the method of tuning instruments also can affect musicians in their perception of correct pitch, especially with music synthesized digitally using alternative tunings (e.g. unequal well temperaments and alternative meantone tunings such as 19-tone equal temperament and 31-tone equal temperament) as opposed to 12-tone equal temperament. An absolute listener may also use absolute strategies for tasks which are more efficiently accomplished with relative strategies, such as transposition or producing harmony that is microtonal or whose frequencies do not match standard 12-tone equal temperament. It is also possible for some musicians to have displaced absolute pitch, where all notes are slightly flat or slightly sharp of their respective pitch as defined by a given convention. This may arise from learning the pitch names from an instrument that was tuned to a concert pitch convention other than the one in use (e.g. A = 435 Hz, the Paris Opera convention of the late 19th and early 20th centuries, as opposed to the modern Euro-American convention for concert A = 442 Hz). Concert pitches have shifted higher for a brighter sound. 
When playing in groups with other musicians, this may lead to playing in a tonality that is slightly different from that of the rest of the group, such as when soloists tune slightly sharp of the rest of the ensemble to stand out or to compensate for loosening strings during longer performances.</p> <h2 id="synesthesia" tabindex="-1">Synesthesia <a class="header-anchor" href="#synesthesia" aria-label="Permalink to "Synesthesia""></a></h2> <p>Absolute pitch shows a genetic overlap with music-related and non-music-related synesthesia/ideasthesia. They may associate certain notes or keys with different colors, enabling them to tell what any note or key is. In this study, about 20% of people with absolute pitch are also synesthetes.</p> <h2 id="correlations" tabindex="-1">Correlations <a class="header-anchor" href="#correlations" aria-label="Permalink to "Correlations""></a></h2> <p>There is evidence of a higher rate of absolute pitch in the autistic population. Many studies have examined pitch abilities in autism, but not rigidly perfect pitch, which makes them controversial. It is unclear just how many people with autism have perfect pitch because of this. In a 2009 study, researchers studied 72 teenagers with autism and found that 20 percent of the teenagers had a significant ability to detect pitches. Children with autism are especially sensitive to changes in pitch.</p> <h2 id="correlation-with-musical-talent" tabindex="-1">Correlation with musical talent <a class="header-anchor" href="#correlation-with-musical-talent" aria-label="Permalink to "Correlation with musical talent""></a></h2> <p>Absolute pitch is not a prerequisite for skilled musical performance or composition. However, there is evidence that musicians with absolute pitch tend to perform better on musical transcription tasks (controlling for age of onset and amount of musical training) compared to those without absolute pitch. It was previously argued that musicians with absolute pitch perform worse than those without absolute pitch on recognition of musical intervals; however, experiments on which this conclusion was based contained an artifact and, when this artifact was removed, absolute pitch possessors were found to perform better than nonpossessors on recognition of musical intervals.</p> <blockquote> <p><code>aangeles</code> @ <a href="https://forum.littlelearner.kids/t/eguchi-method-of-teaching-perfect-pitch/223376/12" target="_blank" rel="noreferrer">https://forum.littlelearner.kids/t/eguchi-method-of-teaching-perfect-pitch/223376/12</a></p> <p>The way I understood HH replies was that she was not so much criticizing the Eguchi method per se as she was saying that having perfect pitch alone may not be all that great. Perhaps I inadvertently contributed to this “misunderstanding” by the way I posted my very first message. (I started this thread just as I was leaving work the other day and I was in a hurry so I failed to elaborate.)</p> <p>Even though I have been doing perfect pitch training with Ella using a modified Doman method for some time now, I have been kinda conflicted whether I should be doing this at all. In the back of my mind, I was always wondering whether I am teaching her an ability that she would later wish she did not have at all. (Apparently, once you develop perfect pitch, you will not be able to lose it even if you wanted to.) 
While there is a lot of contradictory information on the topic (and sometimes even on the DEFINITION of perfect pitch), most of the articles and posts in music/piano/violin forums I have read by people WITH perfect pitch seem to say that:</p> <ol> <li>It is no guarantee of musical success or, even, musical talent.</li> <li>Having this ability can be inconvenient, annoying, and can sometimes even get in the way of singing/playing an instrument because they get distracted/annoyed when they hear other people sing/play off-key, they have a problem transposing to a higher or lower key, they get too focused on pitch, pitch, pitch and fail to just enjoy the music, it is a hindrance to improvisation and creativity, etc. Granted, most of these people are not professional musicians so maybe that is another variable that has to be taken into consideration. I do remember reading somewhere in this forum that DrPrimo, one of our brillkids member, said that she has perfect pitch and she is very musical and that it is a nice skill to have!</li> </ol> <p>So, anyway, I always had this nagging concern that teaching Ella to recognize individual notes/pitch (by using the glockenspiel, keyboard, or tuning forks) out of context of a holistic music education would not be a good thing at all. As of now, I don’t have enough information on the Eguchi method or on the Chordhopper ear training software to decide whether they are effective or not. They may be 100% effective for all I know (I don’t think HH ever said that these methods would not be effective). It’s just that I agree with HH that probably the best goal to aim for for Ella would not be having perfect pitch per se but having a combination of perfect pitch and what she calls “harmonical ear.”</p> <p>For now, I will carry on with what I have been doing so far with Ella with regard to perfect pitch (modified Doman), until I receive my SM package and learn about the program firsthand. 
I had already ordered SM even before starting this thread so I guess I don’t have a choice anyway… lol</p> <p>I do hope that Ella will like SM and that I will have encouraging updates to post soon about our experience with using this program.</p> </blockquote> ]]></content:encoded> <enclosure url="https://chromatone.center/andrik-langfield.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Pianolizer]]></title> <link>https://chromatone.center/practice/experiments/pianolizer/</link> <guid>https://chromatone.center/practice/experiments/pianolizer/</guid> <pubDate>Wed, 04 Sep 2024 00:00:00 GMT</pubDate> <description><![CDATA[More musical Sliding DTF]]></description> <content:encoded><![CDATA[<p><a href="https://sysd.org/pianolizer/" target="_blank" rel="noreferrer">https://sysd.org/pianolizer/</a> <a href="https://github.com/creaktive/pianolizer/tree/master" target="_blank" rel="noreferrer">https://github.com/creaktive/pianolizer/tree/master</a></p> <!-- <script setup> import { defineClientComponent } from 'vitepress' const Pianolizer = defineClientComponent(() => { return import('./Pianolizer.vue') }) </script> <Pianolizer/> --> ]]></content:encoded> </item> <item> <title><![CDATA[Draw on a canvas]]></title> <link>https://chromatone.center/practice/experiments/draw/</link> <guid>https://chromatone.center/practice/experiments/draw/</guid> <pubDate>Wed, 21 Aug 2024 00:00:00 GMT</pubDate> <description><![CDATA[Atrament experiment]]></description> <content:encoded><![CDATA[<Atrament/>]]></content:encoded> <enclosure url="https://chromatone.center/drawing.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[All scales list]]></title> <link>https://chromatone.center/practice/scale/list/</link> <guid>https://chromatone.center/practice/scale/list/</guid> <pubDate>Sat, 13 Jul 2024 00:00:00 GMT</pubDate> <description><![CDATA[The most full list of subsets of 12 chromatic notes]]></description> <content:encoded><![CDATA[<ScaleList />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Elementary synth]]></title> <link>https://chromatone.center/practice/synth/elementary/</link> <guid>https://chromatone.center/practice/synth/elementary/</guid> <pubDate>Sun, 23 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[MIDI enabled synthesizer built with Elementary audio library]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/synth/elements/" target="_blank" rel="noreferrer">https://chromatone.center/practice/synth/elements/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/lockup.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Elements synth]]></title> <link>https://chromatone.center/practice/synth/elements/</link> <guid>https://chromatone.center/practice/synth/elements/</guid> <pubDate>Sun, 23 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Performant multilayered polyphonic web-synth]]></description> <enclosure url="https://chromatone.center/lockup.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Visual Music & the Poetics of Synaesthesia]]></title> <link>https://chromatone.center/theory/interplay/visual-music/</link> <guid>https://chromatone.center/theory/interplay/visual-music/</guid> <pubDate>Thu, 20 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[by Michael Filimowicz, PhD]]></description> <content:encoded><![CDATA[<h1 id="visual-music-and-audiovisual-aesthetics" 
tabindex="-1">Visual Music and Audiovisual Aesthetics <a class="header-anchor" href="#visual-music-and-audiovisual-aesthetics" aria-label="Permalink to "Visual Music and Audiovisual Aesthetics""></a></h1> <youtube-embed video="jQCEXWjitsA" /><h1 id="visual-music-the-poetics-of-synaesthesia" tabindex="-1">Visual Music & the Poetics of Synaesthesia <a class="header-anchor" href="#visual-music-the-poetics-of-synaesthesia" aria-label="Permalink to "Visual Music & the Poetics of Synaesthesia""></a></h1> <p><strong>Poetics</strong> in the context of this essay refers broadly to ‘principles of making’ and by extension, general design principles. The origins of poetics was with Aristotle’s <em>Poetics</em> which analyzed literary-theatrical narratives, but today the scope of poetics is much wider and can cover any media- and meaning-making terrain where patterns of making can be formally analyzed.</p> <blockquote> <p>For most of its long history, the term <strong>poetics</strong> subsumed attempts to reveal the inner <a href="http://csmt.uchicago.edu/glossary2004/logic.htm" target="_blank" rel="noreferrer">logic</a> of a work of art in an examination of its formal and constituent features while inevitably raising problems of intention, meaning, and interpretation.</p> <p>With the advent of new technologies and an increasing differentiation of media, the medium of <a href="http://csmt.uchicago.edu/glossary2004/typeprint.htm" target="_blank" rel="noreferrer">print</a> has lost some of its status while other technologies vie for acceptance alongside it. Accordingly, in critical <a href="http://csmt.uchicago.edu/glossary2004/discourse.htm" target="_blank" rel="noreferrer">discourse</a>, new media studies have gained ascendancy over poetics. Poetics, broadly understood, takes as its subject matter a hermeneutic process productive of meaning and responsive to <a href="http://csmt.uchicago.edu/glossary2004/communication.htm" target="_blank" rel="noreferrer">communication</a>, even where this process is intentionally made difficult for artistic purposes, a view that has been hotly contested as a result of the emergence of new technologies (<a href="https://lucian.uchicago.edu/blogs/mediatheory/keywords/poetics/" target="_blank" rel="noreferrer">source</a>).</p> </blockquote> <p>Visual music predates the contemporary concept of the music video by at least half a millennia, depending on how one qualifies it. Most historical commentary on visual music finds an origin point in the Medieval color organ, which is the earliest known technological precursor to today’s iTunes Visualizer. Many commentators, however, do not go so far back, and relate visual music to early 20th Century trends in Modernism. In today’s maker culture, there is often a lack of deep historical sense and context. At the start of the first video embed below, an Arduino-tinkerer claims that the color organ originated in the 1970s (!) when there was a genre of music-light interactive consumer novelties that were also called color organs. Here is what Wikipedia has to say about color organs (all Wikipedia links will be left in place in case you want to explore more):</p> <blockquote> <p>The term <strong>color organ</strong> refers to a tradition of mechanical devices built to represent sound and accompany music in a visual medium. The earliest created color organs were manual instruments based on the harpsichord design. By the 1900s they were electromechanical. In the early 20th century, a silent color organ tradition (Lumia) developed. 
In the 1960s and ’70s, the term “color organ” became popularly associated with electronic devices that responded to their music inputs with <a href="https://en.wikipedia.org/wiki/Liquid_light_show" target="_blank" rel="noreferrer">light shows</a>. The term “<a href="https://en.wikipedia.org/wiki/Light_organ" target="_blank" rel="noreferrer">light organ</a>” is increasingly being used for these devices; allowing “color organ” to reassume its original meaning.</p> <p>The dream of creating a visual music comparable to auditory music found its fulfillment in animated abstract films by artists such as <a href="https://en.wikipedia.org/wiki/Oskar_Fischinger" target="_blank" rel="noreferrer">Oskar Fischinger</a>, <a href="https://en.wikipedia.org/wiki/Len_Lye" target="_blank" rel="noreferrer">Len Lye</a> and <a href="https://en.wikipedia.org/wiki/Norman_McLaren" target="_blank" rel="noreferrer">Norman McLaren</a>; but long before them, many people built instruments, usually called “color organs,” that would display modulated colored light in some kind of fluid fashion comparable to music.</p> <p>— <a href="https://en.wikipedia.org/wiki/William_Moritz" target="_blank" rel="noreferrer">William Moritz</a></p> <p>In 1590, Gregorio Comanini described an invention by the <a href="https://en.wikipedia.org/wiki/Mannerist" target="_blank" rel="noreferrer">Mannerist</a> painter <a href="https://en.wikipedia.org/wiki/Arcimboldo" target="_blank" rel="noreferrer">Arcimboldo</a> of a system for creating color-music, based on apparent luminosity (light-dark contrast) instead of hue.</p> <p>In 1725, French Jesuit monk <a href="https://en.wikipedia.org/wiki/Louis_Bertrand_Castel" target="_blank" rel="noreferrer">Louis Bertrand Castel</a> proposed the idea of <em>Clavecin pour les yeux</em> (<em>Ocular Harpsichord</em>). In the 1740s, German composer <a href="https://en.wikipedia.org/wiki/Georg_Philipp_Telemann" target="_blank" rel="noreferrer">Telemann</a> went to <a href="https://en.wikipedia.org/wiki/France" target="_blank" rel="noreferrer">France</a> to see it, composed some pieces for it and wrote a book about it. It had 60 small colored glass panes, each with a curtain that opened when a key was struck. In about 1742, Castel proposed the <em>clavecin oculaire</em> (a light organ) as an instrument to produce both sound and the ‘proper’ light colors.</p> </blockquote> <p><img src="./organ.webp" alt="organ"></p> <p>Castel’s Ocular Organ, <a href="https://en.wikipedia.org/wiki/Color_organ#/media/File:A_caricature_of_Louis-Bertrand_Castel's_%22ocular_organ%22.jpg" target="_blank" rel="noreferrer">Source</a>. 
A caricature of Louis-Bertrand Castel’s “ocular organ” by <a href="https://en.wikipedia.org/wiki/Charles_Germain_de_Saint_Aubin" target="_blank" rel="noreferrer">Charles Germain de Saint Aubin</a></p> <blockquote> <p>In 1743, Johann Gottlob Krüger, a professor at the University of Hall, proposed his own version of the ocular harpsichord.</p> <p>In 1816, <a href="https://en.wikipedia.org/wiki/Sir_David_Brewster" target="_blank" rel="noreferrer">Sir David Brewster</a> proposed the <a href="https://en.wikipedia.org/wiki/Kaleidoscope" target="_blank" rel="noreferrer">Kaleidoscope</a> as a form of visual-music that became immediately popular.</p> <p>In 1877, US artist, inventor <a href="https://en.wikipedia.org/w/index.php?title=Bainbridge_Bishop&action=edit&redlink=1" target="_blank" rel="noreferrer">Bainbridge Bishop</a> gets a patent for his first Color Organ.The instruments were lighted attachments designed for pipe organs that could project colored lights onto a screen in synchronization with musical performance. Bishop built three of the instruments; each was destroyed in a fire, including one in the home of <a href="https://en.wikipedia.org/wiki/P._T._Barnum" target="_blank" rel="noreferrer">P. T. Barnum</a>.</p> <p>In 1893, British painter <a href="https://en.wikipedia.org/wiki/Alexander_Wallace_Rimington" target="_blank" rel="noreferrer">Alexander Wallace Rimington</a> invented the <a href="https://en.wikipedia.org/wiki/Clavier_%C3%A0_lumi%C3%A8res" target="_blank" rel="noreferrer">Clavier à lumières</a>. Rimington’s <em>Colour Organ</em> attracted much attention, including that of <a href="https://en.wikipedia.org/wiki/Richard_Wagner" target="_blank" rel="noreferrer">Richard Wagner</a> and Sir <a href="https://en.wikipedia.org/wiki/George_Grove" target="_blank" rel="noreferrer">George Grove</a>. It has been incorrectly claimed that his device formed the basis of the moving lights that accompanied the <a href="https://en.wikipedia.org/wiki/New_York_City" target="_blank" rel="noreferrer">New York City</a> premiere of <a href="https://en.wikipedia.org/wiki/Alexander_Scriabin" target="_blank" rel="noreferrer">Alexander Scriabin</a>’s <a href="https://en.wikipedia.org/wiki/Synesthesia" target="_blank" rel="noreferrer">synaesthetic</a> symphony <a href="https://en.wikipedia.org/wiki/Prometheus:_Poem_of_Fire" target="_blank" rel="noreferrer"><em>Prometheus: The Poem of Fire</em></a> in 1915. The instrument that accompanied that premiere was lighting engineer Preston S. Millar’s chromola, which was similar to Rimington’s instrument.</p> <p>In a 1916 <a href="https://en.wikipedia.org/wiki/Art_manifesto" target="_blank" rel="noreferrer">art manifesto</a>, the Italian Futurists <a href="https://en.wikipedia.org/wiki/Arnaldo_Ginna" target="_blank" rel="noreferrer">Arnaldo Ginna</a> and <a href="https://en.wikipedia.org/wiki/Bruno_Corra" target="_blank" rel="noreferrer">Bruno Corra</a> described their experiments with “color organ” projection in 1909. They also painted nine abstract films, now lost….</p> <p>In 1918, American concert pianist <a href="https://en.wikipedia.org/wiki/Mary_Hallock-Greenewalt" target="_blank" rel="noreferrer">Mary Hallock-Greenewalt</a> created an instrument she called the <a href="https://en.wikipedia.org/wiki/Sarabet" target="_blank" rel="noreferrer"><em>Sarabet</em></a>. 
Also an inventor, she patented nine inventions related to her instrument, including the <a href="https://en.wikipedia.org/wiki/Rheostat" target="_blank" rel="noreferrer">rheostat</a>.</p> <p>In 1921, Arthur C. Vinageras proposed the <em>Chromopiano,</em> an instrument resembling and played like a grand piano, but designed to project “chords” composed from colored lights.</p> <p>In the 1920s, Danish-born <a href="https://en.wikipedia.org/wiki/Thomas_Wilfred" target="_blank" rel="noreferrer">Thomas Wilfred</a> created the <em>Clavilux,</em> a color organ, ultimately patenting seven versions. By 1930, he had produced 16 “Home Clavilux” units. Glass disks bearing art were sold with these “Clavilux Juniors.” Wilfred coined the word <a href="https://en.wikipedia.org/wiki/Lumia_(art)" target="_blank" rel="noreferrer"><em>lumia</em></a> to describe the art. Significantly, Wilfred’s instruments were designed to project colored imagery, not just fields of colored light as with earlier instruments.</p> <p>In 1925, Hungarian composer <a href="https://en.wikipedia.org/wiki/Alexander_Laszlo_(composer)" target="_blank" rel="noreferrer">Alexander Laszlo</a> wrote a text called <em>Color-Light-Music</em> ; Laszlo toured Europe with a color organ.</p> <p>In <a href="https://en.wikipedia.org/wiki/Hamburg" target="_blank" rel="noreferrer">Hamburg</a>, Germany from the late 1920s–early 1930s, several color organs were demonstrated at a series of Colour-Sound Congresses (German:<em>Kongreß für Farbe-Ton-Forschung</em>).<a href="https://en.wikipedia.org/wiki/Ludwig_Hirschfeld_Mack" target="_blank" rel="noreferrer">Ludwig Hirschfeld Mack</a> performed his Farbenlichtspiel colour organ at these congresses and at several other festivals and events in Germany. He had developed this color organ at the <a href="https://en.wikipedia.org/wiki/Bauhaus" target="_blank" rel="noreferrer">Bauhaus</a> school in Weimar, with Kurt Schwerdtfeger.</p> <p>The 1939 London Daily Mail Ideal Home Exhibition featured a “72-way Light Console and Compton Organ for Colour Music”, as well as a 70 feet, 230 kW “Kaleidakon” tower.</p> <p>From 1935–77, Charles Dockum built a series of Mobilcolor Projectors, his versions of silent color organs.</p> <p>In the late 1940s, <a href="https://en.wikipedia.org/wiki/Oskar_Fischinger" target="_blank" rel="noreferrer">Oskar Fischinger</a> created the <a href="https://en.wikipedia.org/w/index.php?title=Lumigraph&action=edit&redlink=1" target="_blank" rel="noreferrer">Lumigraph</a> that produced imagery by pressing objects/hands into a rubberized screen that would protrude into colored light. The imagery of this device was manually generated, and was performed with various accompanying music. It required two people to operate: one to make changes to colors, the other to manipulate the screen. Fischinger performed the Lumigraph in Los Angeles and San Francisco in the late 1940s through early 1950s. The Lumigraph was licensed by the producers of the 1964 sci-fi film, <a href="https://en.wikipedia.org/wiki/The_Time_Travelers_(1964_film)" target="_blank" rel="noreferrer"><em>The Time Travelers</em></a>. The Lumigraph does not have a keyboard, and does not generate music.</p> <p>In 2000, <a href="https://en.wikipedia.org/wiki/Jack_Ox" target="_blank" rel="noreferrer">Jack Ox</a> and David Britton created “The Virtual Color Organ.” The 21st Century Virtual Reality Color Organ is a computational system for translating musical compositions into visual performance. 
It uses supercomputing power to produce 3D visual images and sound from Musical Instrument Digital Interface (MIDI) files and can play a variety of compositions. Performances take place in interactive, immersive, virtual reality environments such as the Cave Automatic Virtual Environment (CAVE), VisionDome, or Immersadesk. Because it’s a 3D immersive world, the Color Organ is also a place — that is, a performance space.<em>(</em><a href="https://en.wikipedia.org/wiki/Color_organ#:~:text=In%20the%20late%201940s%2C%20Oskar,performed%20with%20various%20accompanying%20music." target="_blank" rel="noreferrer"><em>source</em></a><em>)</em></p> </blockquote> <p><img src="./12v.webp" alt="12v LED"></p> <p>Contrary to the Young Maker at the start of the video below, the Color Organ did NOT originate in the 1970s. Image <a href="https://www.richardmudhar.com/blog/2015/12/12v-led-sound-to-light-or-color-organ/" target="_blank" rel="noreferrer">Source</a></p> <youtube-embed video="HUw1-Kxq9_U" /><p>There aren’t any videos on YouTube of the Medieval color organ, but in the 20th Century many artists were inspired by the concept, and with new electronic and audiovisual technologies, what is sometimes called sound-image or music-visual ‘synaesthesia’ was often pursued in a range of creative works in different media and performance contexts. Below is an excerpt from a performance of Scriabin’s <em>Prometheus: Poem of Fire</em> painstakingly recreated at Yale University in 2010.</p> <youtube-embed video="V3B7uQ5K0IU" /><p>Prometheus: Poem of Fire (Scriabin)</p> <blockquote> <p>Scriabin suffered from the natural condition of synesthesia, which made him associate musical notes and keys with colors. For example, the pitch “D” represented bright yellow, while “A” looked like dark green, and “D flat” felt like deep purple. In addition, in his late works traditional tonality is replaced by a set of unique harmonic spaces that inhabit a world of polyrhythmic uncertainty. In his quest to transfigure the word, Scriabin thought it necessary to confront the forces of evil. His Ninth Sonata is aptly nicknamed “The Black Mass,” and Scriabin tellingly regarded the performance of this work as “practicing sorcery.” Whatever the case may be, it is certainly a work of great musical concentration and extreme emotional intensity. 
(<a href="https://interlude.hk/taste-color-musicalexander-scriabin/" target="_blank" rel="noreferrer">source</a>)</p> </blockquote> <p>Many experimental filmmakers made use of optical representations of sound to manipulate the audio track that is played by analog projectors, reversing the usual process of transcribing optical audio by drawing sounds directly onto film to create the soundtrack.</p> <youtube-embed video="Q0vgZv_JWfM" /><p>Optical Sound</p> <youtube-embed video="E3-vsKwQ0Cg" /><p>McLaren’s Dots</p> <p>Visual music approaches are particularly popular in creative practices where electroacoustic composition and animation intersect, and the synaesthetic explorations that we saw above in the realm of analog film continue today in the use of integrating the output 3D modeling and animation software with sound synthesis.</p> <iframe title="vimeo-player" src="https://player.vimeo.com/video/14112798?h=62e7053ce4" width="640" height="360" frameborder="0" allowfullscreen></iframe> <p>Here is a very recent example of some new trends emerging in computational visual music (to use that term in a very broad sense, since ‘visual music’ can encompass quite a range of creative practices) related to an increasing interest in data and algorithms. Michele Zaccagnini is developing a process he calls Deep Mapping, which is</p> <blockquote> <p>an approach that allows the composer to store and render musical data into visuals by “catching” the data at its source, at a compositional stage. The advantages of this approach are: accuracy and discreteness in the representation of musical features; computational efficiency; and, more abstractly, the stimulation of a practice of audiovisual composition that encourages composers to envision their multimedia output from the early stages of their work.The drawbacks are: prerecorded sounds cannot be deep-mapped and deep mapping presupposes an algorithmic compositional approach.</p> </blockquote> <youtube-embed video="yGXZBfgmVBo" /><p>Deep Map #1</p> <h2 id="visual-music-in-film" tabindex="-1">Visual Music in Film <a class="header-anchor" href="#visual-music-in-film" aria-label="Permalink to "Visual Music in Film""></a></h2> <p>Visual Music is a concept also sometimes employed in the purely visual arts, where some painters have found inspiration in the concept of music to describe their abstractions.</p> <p><img src="./sq.webp" alt="New Harmony"></p> <p>Paul Klee, New Harmony, 1936</p> <p><img src="./kand.webp" alt="New Harmony"></p> <p>Wassily Kandinsky, Improvisation (Dreamy), 1913</p> <p>In Visual Music of the moving image variety (e.g. film and video, rather than music as the “referent” or metaphor for abstractions in painting, such as those of Kandinsky or Klee), the image track is often in a fantasia mode vis-à-vis the soundtrack. While as is usually the case with sound design, the script, footage and first edits usually precede the production of sound (though there are exceptions, as with Ben Burtt’s collection of sounds for Star Wars where he often recorded interesting sounds before knowing what to do with them), with visual music there is typically a pre-existent work of music, which the moving image takes as an inspiration or motivation for free form and abstract play. Visual Music as a form can range from abstraction (e.g. a play on geometric shapes, light or color), to specific references to technologies of mediation (e.g. 
bright flashes of overexposed film, video footage modulated by rhythms in a techno beat, or even the generative imagery produced by iTunes Visualizer), to highly stylized and very abstract characterizations of personae with degrees of recognizable action and even plot (for instance, the struggle of an orange triangle to escape from thick lines and grids of black, which metaphorically morph into the bars of a prison or cage, as below with Synchromy No 4 Escape).</p> <youtube-embed video="YRmu-GcClls" /><p>Mary Ellen Bute’s Synchromy №4 Escape</p> <youtube-embed video="pzErTRNVj3Y" /><p>Fan video, Black Eyed Peas “I Gotta Feeling” as motion-visual geometry.</p> <youtube-embed video="T96K2inyQok" /><p>Powercord vs Philter Phreak</p> <p>Photo-collage and montage editing have also been a feature of Visual Music, and highly stylized music video production (especially for electronica genres) can often blur the usual stylistic boundaries between what one might typically refer to as a ‘music video’ versus a work of Visual Music.</p> <youtube-embed video="JY_gQ9TNIUw" /><p>One Dot Zero</p> <blockquote> <p>The ident is based around the idea of the roots of computer technology in the pre digital world — a world of music boxes, jacquard looms, punch cards and relay switches. Music box mechanisms were the precursors to punch cards as ways of communicating binary information, the Jacquard loom used punch-cards to essentially program the loom to create complex textile patterns (looked at now they resemble 8-bit computer drawings). The first real computer circuit was created using telephone relay switches. Our contemporary digital world is linked to pre-electric era of automated crafts and musical automata. This has a resonance with the tenth anniversary of onedotzero — both in the way that it references history but also it is mirrored in a lot of the work that’s being produced now. Computers have become almost invisible, powerful tools which are being used to facilitate craft. (<a href="https://www.youtube.com/watch?v=JY_gQ9TNIUw" target="_blank" rel="noreferrer">source</a>)</p> </blockquote> <p>In its ‘classic’ mode, visual music is often linked to the tradition of seeking an experience of synaesthesia as a spiritually heightened fusion of the senses. Its antecedents are in Wagner’s Gesamtkunstwerk (indeed, Richard Wagner’s “Evening Star” is the music of Mary Ellen Bute’s Synchrony №2) or total artwork (a fusion of all the arts and, consequently, of the senses), but it also has roots in the Symbolist and Spiritualist movements of the 19th century — all of this has its origins in Schopenhauer’s philosophy of music, in which music was framed as the highest art due to its ability to directly represent the Will (for Schopenhauer, the Will as a category included magnetism, love, rage, and electricity — in other words, forces in general, whether internal to one’s self and unconscious, or external in the workings of the world).</p> <blockquote> <p>As an outgrowth of the Romantic and Symbolist movements, music was elevated to a status of supremacy over all the other forms of creative expression. The other arts, notably poetry and painting, were said to aspire to the “condition of music.” Artists came to believe that painting should be analogous to music.</p> <p>Proponents of musical analogy based their aesthetic theories on an abstraction of the idea of music, rather than on a clear understanding of musicology. For them music represented a non-narrative, non-discursive mode of expression. 
They reasoned that music, in its direct appeal to emotions and senses, transcended language. Just as music was a universal form of expression, so should the visual arts attain universality by evoking sensual pleasure or an emotional response in the viewer.</p> <p>Advocates of musical analogy and color music also depended upon the related notion of synaesthesia; that is, they believed in the subjective interaction of all sensory perceptions. This common acceptance of synaesthesia resulted from two divergent philosophical positions. According to the more romantically inclined artists and writers, the interchangeability of the senses was evidence of mystical correspondence to a higher reality. On the other hand, some artists joined forces with scientific researchers to study synaesthesia as a phenomenon of human perception. (<a href="https://www.jstor.org/stable/1483303?seq=1" target="_blank" rel="noreferrer">Source</a>)</p> </blockquote> <p>Many works in the filmic synaesthetic tradition can be read as a counter-modernist inclination, a vestige of romantic impulse in the development of 20th century mediation.</p> <h2 id="sound-design" tabindex="-1"><strong>Sound Design</strong> <a class="header-anchor" href="#sound-design" aria-label="Permalink to "**Sound Design**""></a></h2> <p>The term Sound Design came into vogue in the 1970s to describe a new role in the creation of the sound film, analogous to a “director” or “cinematographer” of the soundtrack. In the early ’70s Dolby noise reduction, which had already established itself widely in music production and distribution during the ’60s, expanded its application to the area of film sound. The specific properties of Dolby — increased dynamic range, improved spatialization, better frequency response, and reduction of the noise floor — combined with Dolby’s strategy of providing relatively affordable licensing to theater owners so that its noise reduction technology could be widely adopted, provided for the first time a universal standard in cinematic sound reproduction, allowing the sound mix heard in the theaters to be closer to that heard on the mix stage than at any time previously. (<a href="https://www.amazon.com/Dolby-era-contemporary-Hollywood-Popular/dp/0719070678" target="_blank" rel="noreferrer">source</a>)</p> <p>The aesthetic possibilities opened up by this technological change were exploited by filmmakers, particularly those based in San Francisco’s “Hollywood North.” For instance, the two artist-technicians usually credited with being the first “sound designers” (the first to receive this designation), Ben Burtt and Walter Murch, were each given unprecedented periods of time to explore sound as a dimension of film. Burtt was given more than a year to build a sound effects library for George Lucas’s <em>Star Wars</em> film soundtrack, famously (in widely circulated images) “wandering the desert” with field recording gear, tapping on phone wires and recording sounds that would eventually support such futuristic technologies as X-Wing Fighters and light sabers. 
Murch spent a year mixing and remixing Francis Ford Coppola’s <em>Apocalypse Now</em>, trying out multiple edits and approaches to what is likewise (as in the case of Star Wars) regarded as a paradigm shift in film mixing.</p> <p>In entertainment industry contexts, sound’s role is often described in the relevant literature as a form of “<a href="https://www.amazon.com/Music-Imagination-Culture-Clarendon-Paperbacks/dp/0198163037/ref=sr_1_1?keywords=Music%2C+Imagination+and+Culture+by+Nicholas+Cook&qid=1579364384&s=books&sr=1-1" target="_blank" rel="noreferrer">subordination</a>” to the film’s imagery. In other words, the work of the soundtrack is to reinforce, through its specific effects (heightened emotion, spatial depth, representing sound sources, clarity of speech, rhythmic pacing and the like), the narratological and often “realist” motivations of the image track (and, reciprocally, reserving “weird sound,” often simply taken from the realm of avant-garde music, for dream sequences, aliens, monsters and the like) (<a href="https://www.jstor.org/stable/j.ctt2005s0z" target="_blank" rel="noreferrer">source</a>). Such an approach is typical of more narrative, mainstream or commercial projects.</p> <p>In contrast, experimental approaches to sound design tend to assert a relative autonomy for the soundtrack in relation to the moving image. But at the same time there is often an “associational intent” at work, in that there is an attempt to create poetic or connotational relationships between sound and image.</p> <h2 id="experimental-music" tabindex="-1"><strong>Experimental Music</strong> <a class="header-anchor" href="#experimental-music" aria-label="Permalink to "**Experimental Music**""></a></h2> <p>In her paper “Experimental Music Semiotics,” Morag Josephine Grant elaborates an intriguing <a href="https://vanseodesign.com/web-design/icon-index-symbol/" target="_blank" rel="noreferrer">Peircean semiotic approach</a> to understanding the distinction between experimental music and other forms of avant-garde, classical or “new” music. In her analysis, experimental music has a heightened interest in the indexical relationship to sound, whereas other forms may be better described as having a stronger affinity for the symbolic.</p> <blockquote> <p>Definitive for the icon is similarity with the object referred to, definitive for the index is contiguity with the object referred to, definitive for the symbol is its dependence on a standard rule of interpretation. (<a href="https://www.jstor.org/stable/i30032123" target="_blank" rel="noreferrer">source</a>)</p> </blockquote> <p>It is in this notion of “a standard rule of interpretation” that one can find a slew of correspondences to non-experimental (including avant-garde) music practices. For instance, an interval can be read as also referring to a moment in the score, a minor third, and all the rules of harmony with its allowances and strictures of what one is to do with a minor third (or not do with it). In the recapitulation of a theme, the work itself can be understood as the interpretant which contextualizes its significance. One can expand this to other fields of music as well. Jazz, for instance, can be understood as a metaphor or symbol for communication (call and response, dialogue). To draw her distinction sharply, she offers the striking example of the sound of a telephone:</p> <blockquote> <p>So why don’t telephones ring in music? They ring all over the place in literature. They appear in pictures more often than they do in music. 
They are the hinge of countless film plots.</p> <p>The case of experimental music is immediately different because it can deal with the telephone as a telephone.</p> </blockquote> <p>Grant cites <a href="https://www.springer.com/de/book/9783476012265" target="_blank" rel="noreferrer">Winfried Nöth</a>, noting that “the index makes no assertions regarding its object, but merely shows us the object or draws our attention to it,” the index is “of fact, of reality, and of experience in time and space.”</p> <p>Experimental music does not draw a distinction between the realms of “music” and that of “sound and noise.” It forces our attention to the causal dimensions of sonic experience, the productions of sonorous bodies, rather than to the systemic embeddedness of a sound in a formal logic or system (such as score, or the rules of harmony). Indeed, to further bolster her argument that non-experimental (but still scored) music has more of a symbolic character, we need only note the aspects of rhetoric that accompany such compositions: themes, argument, development, recapitulation, verse, refrain, and the like.</p> <p>Grant does not assert, however, that non-experimental music is only symbolic, or that experimental music is only indexical. Indeed, her essay devotes much time to exploring devil’s advocate, borderline, and seemingly contradictory examples to her schema. She notes that “signification generally involves complex hybrids of these categories (icon, index, symbol).” But as a general description of what may make some music “experimental” and others not, it does have a subtle cogency. For instance, John Cage’s silent work <em>4’33”</em> can be understood in relation to Peirce’s example of the indexical weather vane signifying even the absence of wind.</p> <blockquote> <p>Even if there is no wind at a particular moment, the weather vane still fulfills its purpose, confirming that there is no wind… It is specifically created to draw our attention to something by contiguous relationship with it. Even if there is never wind again, a weather vane will not stop being a weather vane…</p> </blockquote> <p>The silence evoked by Cage in this work is analogous to the absence of wind — it is still significant (hence a work of experimental music) even though it does not sound.</p> <p>What is interesting to note about visualized sound in a Peircean semiotic context is that the sound which results from such processes has its “origin” in an indexical (directly causal) relationship. The nature of the index-causality is different in these cases: pen or scratches on film emulsion in McLaren’s work, or digital drawing tablet in the case of Xenakis‘ <a href="https://en.wikipedia.org/wiki/UPIC" target="_blank" rel="noreferrer">UPIC system</a>. However, once we as listener-viewers are experiencing the work, the synaesthetic play of visual-sonic percepts takes on an iconic dimension as well, as we start to see that the sounds high up in the visual frame also sound high-pitched, or thick bands may produce clusters, while thin bands produce purer tones, or as we notice the way in which rhythmic syncretism in sound and image reinforce each other. 
At times one feels that one is really ‘seeing the sound’ but simultaneously sounds and images have their own autonomy — in fact, in an iconic sense one is only seeing certain aspects of the sound — both the visual and sonic imagination have their own resonances which can’t be entirely merged.</p> <h2 id="new-visualizations-in-music" tabindex="-1">New Visualizations in Music <a class="header-anchor" href="#new-visualizations-in-music" aria-label="Permalink to "New Visualizations in Music""></a></h2> <p>In the context of visual music, we would be remiss if we didn’t touch on new practices of visualizing musical form that have gone far beyond traditional music notation using bass and treble clefs, note and rest values, bars, measures, time signatures, and all of the other inscription apparatus of classical music common practice. As musical form exploded in its sheer variety of approaches in the 20th Century, so too did ways of representing the new kinds of sounds and their composition. The images below are visualizations of musical compositions, and are taken from Sylvia Smith’s article <a href="https://www.jstor.org/stable/i238916" target="_blank" rel="noreferrer">Visual Music</a>.</p> <p><img src="./bl.webp" alt="bl"> <img src="./kk.webp" alt="kk"> <img src="./12.webp" alt="12"> <img src="./qq.webp" alt="qq"> <img src="./ww.webp" alt="ww"> <img src="./yy.webp" alt="yy"></p> <p>Note: some text above has been excerpted from <a href="http://piim.newschool.edu/journal/issues/2010/02/index.php" target="_blank" rel="noreferrer">a previous article</a> in the <em>Parsons Journal for Information Mapping</em>.</p> <h2 id="written-by-michael-filimowicz-phd" tabindex="-1">Written by Michael Filimowicz, PhD <a class="header-anchor" href="#written-by-michael-filimowicz-phd" aria-label="Permalink to "Written by Michael Filimowicz, PhD""></a></h2> <p><a href="https://soundand.design/audiovisual-aesthetics-5-b29c9471020" target="_blank" rel="noreferrer">Original article at soundand.design</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.webp" length="0" type="image/webp"/> </item> <item> <title><![CDATA[Bouncing sinusoids]]></title> <link>https://chromatone.center/practice/generative/bounce/</link> <guid>https://chromatone.center/practice/generative/bounce/</guid> <pubDate>Tue, 18 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Polyrhythmic explorations]]></description> <content:encoded><![CDATA[<Bounce/>]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Bouncing sinusoids]]></title> <link>https://chromatone.center/practice/sequencing/bounce/</link> <guid>https://chromatone.center/practice/sequencing/bounce/</guid> <pubDate>Tue, 18 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Polyrhythmic explorations]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/generative/bounce/" target="_blank" rel="noreferrer">https://chromatone.center/practice/generative/bounce/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Pendulums]]></title> <link>https://chromatone.center/practice/generative/pendulums/</link> <guid>https://chromatone.center/practice/generative/pendulums/</guid> <pubDate>Mon, 17 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Polyrhythmic explorations]]></description> 
<content:encoded><![CDATA[<MultiPendulums/>]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Pendulums]]></title> <link>https://chromatone.center/practice/sequencing/pendulums/</link> <guid>https://chromatone.center/practice/sequencing/pendulums/</guid> <pubDate>Mon, 17 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Polyrhythmic explorations]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/generative/pendulums/" target="_blank" rel="noreferrer">https://chromatone.center/practice/generative/pendulums/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Soundfont sampler synth]]></title> <link>https://chromatone.center/practice/synth/soundfont/</link> <guid>https://chromatone.center/practice/synth/soundfont/</guid> <pubDate>Mon, 10 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Open source sample-based online synthesizer]]></description> <content:encoded><![CDATA[<client-only> <Synth-font class="m-2 max-h-40" /> <MidiKeys > </MidiKeys> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Karplus–Strong synthesis]]></title> <link>https://chromatone.center/practice/synth/karplus-strong/</link> <guid>https://chromatone.center/practice/synth/karplus-strong/</guid> <pubDate>Sat, 01 Jun 2024 00:00:00 GMT</pubDate> <description><![CDATA[Pratical KS synth]]></description> <content:encoded><![CDATA[<StringSynth />]]></content:encoded> <enclosure url="https://chromatone.center/ksa.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Matter physics]]></title> <link>https://chromatone.center/practice/generative/matter/</link> <guid>https://chromatone.center/practice/generative/matter/</guid> <pubDate>Fri, 24 May 2024 00:00:00 GMT</pubDate> <description><![CDATA[2D rigid body physics simulator]]></description> <content:encoded><![CDATA[<Matter />]]></content:encoded> <enclosure url="https://chromatone.center/matter.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Matter physics]]></title> <link>https://chromatone.center/practice/sequencing/matter/</link> <guid>https://chromatone.center/practice/sequencing/matter/</guid> <pubDate>Fri, 24 May 2024 00:00:00 GMT</pubDate> <description><![CDATA[2D rigid body physics simulator]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/generative/matter/" target="_blank" rel="noreferrer">https://chromatone.center/practice/generative/matter/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/matter.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Random Jam]]></title> <link>https://chromatone.center/practice/jam/random/</link> <guid>https://chromatone.center/practice/jam/random/</guid> <pubDate>Tue, 30 Apr 2024 00:00:00 GMT</pubDate> <description><![CDATA[A simple randomizer for basic jam parameters - BPM, Tonic and Scale.]]></description> <enclosure url="https://chromatone.center/jam.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Number Sequences]]></title> <link>https://chromatone.center/practice/generative/numbers/</link> <guid>https://chromatone.center/practice/generative/numbers/</guid> <pubDate>Mon, 08 Apr 2024 00:00:00 GMT</pubDate> <description><![CDATA[Playing with 
mathematical wonders]]></description> <content:encoded><![CDATA[<Numbers/>]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Number Sequences]]></title> <link>https://chromatone.center/practice/sequencing/numbers/</link> <guid>https://chromatone.center/practice/sequencing/numbers/</guid> <pubDate>Mon, 08 Apr 2024 00:00:00 GMT</pubDate> <description><![CDATA[Playing with mathematical wonders]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/generative/numbers/" target="_blank" rel="noreferrer">https://chromatone.center/practice/generative/numbers/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Karplus–Strong string]]></title> <link>https://chromatone.center/theory/synthesis/karplus-strong/</link> <guid>https://chromatone.center/theory/synthesis/karplus-strong/</guid> <pubDate>Tue, 14 Nov 2023 00:00:00 GMT</pubDate> <description><![CDATA[A method of physical modelling synthesis]]></description> <content:encoded><![CDATA[<p>Karplus–Strong string synthesis is a method of physical modelling synthesis that loops a short waveform through a filtered delay line to simulate the sound of a hammered or plucked string or some types of percussion.</p> <p>At first glance, this technique can be viewed as subtractive synthesis based on a feedback loop similar to that of a comb filter for z-transform analysis. However, it can also be viewed as the simplest class of wavetable-modification algorithms now known as digital waveguide synthesis, because the delay line acts to store one period of the signal.</p> <p>Alexander Strong invented the algorithm, and Kevin Karplus did the first analysis of how it worked. Together they developed software and hardware implementations of the algorithm, including a custom VLSI chip. They named the algorithm "Digitar" synthesis, as a portmanteau for "digital guitar".</p> <h2 id="how-it-works" tabindex="-1">How it works <a class="header-anchor" href="#how-it-works" aria-label="Permalink to "How it works""></a></h2> <p><img src="./ksa.png" alt="schema"></p> <ul> <li>A short excitation waveform (of length L samples) is generated. In the original algorithm, this was a burst of white noise, but it can also include any wideband signal, such as a rapid sine wave chirp or frequency sweep, or a single cycle of a sawtooth wave or square wave.</li> <li>This excitation is output and simultaneously fed back into a delay line L samples long.</li> <li>The output of the delay line is fed through a filter. The gain of the filter must be less than 1 at all frequencies, to maintain a stable positive feedback loop. The filter can be a first-order lowpass filter (as pictured). In the original algorithm, the filter consisted of averaging two adjacent samples, a particularly simple filter that can be implemented without a multiplier, requiring only shift and add operations. 
The filter characteristics are crucial in determining the harmonic structure of the decaying tone.</li> <li>The filtered output is simultaneously mixed into the output and fed back into the delay line.</li> </ul> <h2 id="tuning-the-string" tabindex="-1">Tuning the string <a class="header-anchor" href="#tuning-the-string" aria-label="Permalink to "Tuning the string""></a></h2> <p>The fundamental frequency (specifically, the lowest nonzero resonant frequency) of the resulting signal is the lowest frequency at which the unwrapped phase response of the delay and filter in cascade is −2π. The required phase delay D for a given fundamental frequency F<sub>0</sub> is therefore D = F<sub>s</sub>/F<sub>0</sub>, where F<sub>s</sub> is the sampling frequency.</p> <p>The length of any digital delay line is a whole-number multiple of the sampling period. In order to obtain the fractional delay often needed for fine-tuning the string below the JND (Just Noticeable Difference), interpolating filters are used with parameters selected to obtain an appropriate phase delay at the fundamental frequency. Either IIR or FIR filters may be used, but FIR filters have the advantage that transients are suppressed if the fractional delay is changed over time. The most elementary fractional delay is the linear interpolation between two samples (e.g., s(4.2) = 0.8s(4) + 0.2s(5)). If the phase delay varies with frequency, harmonics may be sharpened or flattened relative to the fundamental frequency. The original algorithm used equal weighting on two adjacent samples, as this can be achieved without multiplication hardware, allowing extremely cheap implementations.</p> <p>Z-transform analysis can be used to get the pitches and decay times of the harmonics more precisely, as explained in the 1983 paper that introduced the algorithm.</p> <p>Holding the period (= length of the delay line) constant produces vibrations similar to those of a string or bell. Increasing the period sharply after the transient input produces drum-like sounds.</p> <h2 id="refinements-to-the-algorithm" tabindex="-1">Refinements to the algorithm <a class="header-anchor" href="#refinements-to-the-algorithm" aria-label="Permalink to "Refinements to the algorithm""></a></h2> <p>Due to its plucked-string sound in certain modes, Alex Strong and Kevin Karplus conjectured that the Karplus-Strong (KS) algorithm was in some sense a vibrating string simulation, and they worked on showing that it solved the wave equation for the vibrating string, but this was not completed. Julius O. Smith III recognized that the transfer function of the KS algorithm, viewed as a digital filter, coincided with that of a vibrating string, with the filter in the feedback loop representing the total string losses over one period. He later derived the KS algorithm as a special case of digital waveguide synthesis, which was used to model acoustic waves in strings, tubes, and membranes. The first set of extensions and generalizations of the Karplus-Strong algorithm, typically known as the Extended Karplus-Strong (EKS) algorithm, was presented in a paper in 1982 at the International Computer Music Conference in Venice, Italy, and published in more detail in 1983 in Computer Music Journal in an article entitled "Extensions of the Karplus Strong Plucked String Algorithm," by David A. Jaffe and Julius O.
Smith, and in Smith's PhD/EE dissertation.</p> <p>Alex Strong developed a superior wavetable-modification method for plucked-string synthesis, but only published it as a patent.</p> <h2 id="musical-applications" tabindex="-1">Musical applications <a class="header-anchor" href="#musical-applications" aria-label="Permalink to "Musical applications""></a></h2> <p>The first musical use of the algorithm was in the work May All Your Children Be Acrobats written in 1981 by David A. Jaffe, and scored for eight guitars, mezzo-soprano and computer-generated stereo tape, with a text based on Carl Sandburg's The People, Yes. Jaffe continued to explore the musical and technical possibilities of the algorithm in Silicon Valley Breakdown, for computer-generated plucked strings (1982), as well as in later works such as Telegram to the President, 1984 for string quartet and tape, and Grass for female chorus and tape (1987).</p> <p>The patent was licensed first to Mattel Electronics, which failed as a company before any product using the algorithm was developed, then to a startup company founded by some of the laid-off Mattel executives. They never got sufficient funding to finish development, and so never brought a product to market either. Eventually Yamaha licensed the patent, as part of the Sondius package of patents from Stanford. It is unknown whether any hardware using the algorithm was ever sold, though many software implementations (which did not pay any license fees to the inventors) have been released.</p> <p>While they may not adhere strictly to the algorithm, many hardware components for modular systems have been commercially produced that invoke the basic principles of Karplus-Strong Synthesis: using an inverted, scaled control system for very small time values in a filtered delay line to create playable notes in the Western Tempered tuning system, controlled with volt per octave tracking or MIDI data. The Inventors were not specifically credited, though the term "Karplus-Strong Synthesis" is referenced in some of the manuals.</p> <p>Hardware components capable of Karplus-Strong style synthesis include the Moog Clusterflux 108M, Mutable Instruments Elements and Rings, 4ms Company Dual Looping Delay, 2HP Pluck, Make Noise Mimeophon, Arturia MicroFreak and the Strymon Starlab.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/ksa.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Modulation techniques]]></title> <link>https://chromatone.center/theory/synthesis/modulation/</link> <guid>https://chromatone.center/theory/synthesis/modulation/</guid> <pubDate>Wed, 18 Oct 2023 00:00:00 GMT</pubDate> <description><![CDATA[Signal cross relation]]></description> <content:encoded><![CDATA[<p><img src="./Amfm3-en-de.gif" alt="animated AM FM display"></p> <p>In electronics and telecommunications, modulation is the process of varying one or more properties of a periodic waveform, called the carrier signal, with a separate signal called the modulation signal that typically contains information to be transmitted. For example, the modulation signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits, a bitstream from a computer.</p> <youtube-embed video="XnoHXyb7dkY" /><p>This carrier wave usually has a much higher frequency than the message signal does. This is because it is impractical to transmit signals with low frequencies. 
Generally, to receive a radio wave one needs a radio antenna with length that is one-fourth of wavelength. For low frequency radio waves, wavelength is on the scale of kilometers and building such a large antenna is not practical. In radio communication, the modulated carrier is transmitted through space as a radio wave to a radio receiver.</p> <p>Another purpose of modulation is to transmit multiple channels of information through a single communication medium, using frequency-division multiplexing (FDM). For example, in cable television (which uses FDM), many carrier signals, each modulated with a different television channel, are transported through a single cable to customers. Since each carrier occupies a different frequency, the channels do not interfere with each other. At the destination end, the carrier signal is demodulated to extract the information bearing modulation signal.</p> <p>A modulator is a device or circuit that performs modulation. A demodulator (sometimes detector) is a circuit that performs demodulation, the inverse of modulation. A modem (from modulator–demodulator), used in bidirectional communication, can perform both operations. The lower frequency band occupied by the modulation signal is called the baseband, while the higher frequency band occupied by the modulated carrier is called the passband.[citation needed]</p> <p>In analog modulation, an analog modulation signal is "impressed" on the carrier. Examples are amplitude modulation (AM) in which the amplitude (strength) of the carrier wave is varied by the modulation signal, and frequency modulation (FM) in which the frequency of the carrier wave is varied by the modulation signal. These were the earliest types of modulation[citation needed], and are used to transmit an audio signal representing sound in AM and FM radio broadcasting. More recent systems use digital modulation, which impresses a digital signal consisting of a sequence of binary digits (bits), a bitstream, on the carrier, by means of mapping bits to elements from a discrete alphabet to be transmitted. This alphabet can consist of a set of real or complex numbers, or sequences, like oscillations of different frequencies, so-called frequency-shift keying (FSK) modulation. A more complicated digital modulation method that employs multiple carriers, orthogonal frequency-division multiplexing (OFDM), is used in WiFi networks, digital radio stations and digital cable television transmission.</p> <h2 id="analog-modulation-methods" tabindex="-1">Analog modulation methods <a class="header-anchor" href="#analog-modulation-methods" aria-label="Permalink to "Analog modulation methods""></a></h2> <p>In analog modulation, the modulation is applied continuously in response to the analog information signal. 
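</p> <p>To make those two definitions concrete, here is a minimal TypeScript sketch, assuming a sine message and a sine carrier, that computes one output sample of an amplitude-modulated and of a frequency-modulated carrier. The names <code>carrierHz</code>, <code>messageHz</code>, <code>depth</code> and <code>modIndex</code> are illustrative, not terms used elsewhere in this article.</p> <pre><code>// Minimal AM / FM sample generators (illustrative names and values only).
const TWO_PI = 2 * Math.PI;

// Amplitude modulation: the carrier amplitude follows the instantaneous message value.
function amSample(t: number, carrierHz: number, messageHz: number, depth: number): number {
  const message = Math.sin(TWO_PI * messageHz * t);
  return (1 + depth * message) * Math.sin(TWO_PI * carrierHz * t);
}

// Frequency modulation: the carrier phase is advanced by the integral of the
// instantaneous frequency; for a sine message that integral has a closed form.
function fmSample(t: number, carrierHz: number, messageHz: number, modIndex: number): number {
  const phase = TWO_PI * carrierHz * t + modIndex * Math.sin(TWO_PI * messageHz * t);
  return Math.sin(phase);
}
</code></pre> <p>Sweeping <code>depth</code> between 0 and 1 varies the AM envelope, while <code>modIndex</code> sets how far the instantaneous frequency deviates from the carrier.</p> <p>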
Common analog modulation techniques include:</p> <ul> <li>Amplitude modulation (AM) (here the amplitude of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal) <ul> <li>Double-sideband modulation (DSB) <ul> <li>Double-sideband modulation with carrier (DSB-WC) (used on the AM radio broadcasting band)</li> <li>Double-sideband suppressed-carrier transmission (DSB-SC)</li> <li>Double-sideband reduced carrier transmission (DSB-RC)</li> </ul> </li> <li>Single-sideband modulation (SSB, or SSB-AM) <ul> <li>Single-sideband modulation with carrier (SSB-WC)</li> <li>Single-sideband modulation suppressed carrier modulation (SSB-SC)</li> </ul> </li> <li>Vestigial sideband modulation (VSB, or VSB-AM)</li> <li>Quadrature amplitude modulation (QAM)</li> </ul> </li> <li>Angle modulation, which is approximately constant envelope <ul> <li>Frequency modulation (FM) (here the frequency of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal)</li> <li>Phase modulation (PM) (here the phase shift of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal)</li> <li>Transpositional Modulation (TM), in which the waveform inflection is modified resulting in a signal where each quarter cycle is transposed in the modulation process. TM is a pseudo-analog modulation (AM). Where an AM carrier also carries a phase variable phase f(ǿ). TM is f(AM,ǿ)</li> </ul> </li> </ul> <h2 id="am-amplitude-modulation" tabindex="-1">AM - Amplitude modulation <a class="header-anchor" href="#am-amplitude-modulation" aria-label="Permalink to "AM - Amplitude modulation""></a></h2> <p>...</p> <h2 id="fm-frequency-modulation" tabindex="-1">FM - Frequency modulation <a class="header-anchor" href="#fm-frequency-modulation" aria-label="Permalink to "FM - Frequency modulation""></a></h2> <p>Frequency modulation synthesis (or FM synthesis) is a form of sound synthesis whereby the frequency of a waveform is changed by modulating its frequency with a modulator. The (instantaneous) frequency of an oscillator is altered in accordance with the amplitude of a modulating signal.</p> <p>FM synthesis can create both harmonic and inharmonic sounds. To synthesize harmonic sounds, the modulating signal must have a harmonic relationship to the original carrier signal. As the amount of frequency modulation increases, the sound grows progressively complex. Through the use of modulators with frequencies that are non-integer multiples of the carrier signal (i.e. inharmonic), inharmonic bell-like and percussive spectra can be created.</p> <youtube-embed video="AzvxefRDT84" /><h3 id="applications" tabindex="-1">Applications <a class="header-anchor" href="#applications" aria-label="Permalink to "Applications""></a></h3> <p>FM synthesis using analog oscillators may result in pitch instability. However, FM synthesis can also be implemented digitally, which is more stable and became standard practice. Digital FM synthesis (equivalent to the phase modulation using the time integration of instantaneous frequency) was the basis of several musical instruments beginning as early as 1974. Yamaha built the first prototype digital synthesizer in 1974, based on FM synthesis, before commercially releasing the Yamaha GS-1 in 1980. The Synclavier I, manufactured by New England Digital Corporation beginning in 1978, included a digital FM synthesizer, using an FM synthesis algorithm licensed from Yamaha. 
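</p> <p>A two-operator FM voice of the kind described above can be sketched directly with the Web Audio API: a modulator oscillator is routed through a gain node into the <code>frequency</code> parameter of a carrier oscillator. The frequencies and depth below are arbitrary illustrative values, not settings of any instrument mentioned here.</p> <pre><code>// Two-operator FM sketch with the Web Audio API (illustrative values only).
const ctx = new AudioContext();

const carrier = new OscillatorNode(ctx, { type: 'sine', frequency: 220 });
// A 2:1 modulator-to-carrier ratio keeps the sidebands harmonically related.
const modulator = new OscillatorNode(ctx, { type: 'sine', frequency: 440 });

// The gain value is the peak frequency deviation in Hz; raising it
// increases the modulation index and adds higher sidebands.
const modDepth = new GainNode(ctx, { gain: 150 });

modulator.connect(modDepth).connect(carrier.frequency);
carrier.connect(ctx.destination);

modulator.start();
carrier.start();
</code></pre> <p>Keeping the modulator at an integer ratio of the carrier yields harmonic spectra; detuning it toward a non-integer ratio produces the inharmonic, bell-like tones described above.</p> <p>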
Yamaha's groundbreaking Yamaha DX7 synthesizer, released in 1983, brought FM to the forefront of synthesis in the mid-1980s.</p> <h4 id="amusement-use-fm-sound-chips-on-pcs-arcades-game-consoles-and-mobile-phones" tabindex="-1">Amusement use: FM sound chips on PCs, arcades, game consoles, and mobile phones <a class="header-anchor" href="#amusement-use-fm-sound-chips-on-pcs-arcades-game-consoles-and-mobile-phones" aria-label="Permalink to "Amusement use: FM sound chips on PCs, arcades, game consoles, and mobile phones""></a></h4> <p>FM synthesis also became the usual setting for games and software up until the mid-nineties. For IBM PC compatible systems, sound cards like the AdLib and Sound Blaster popularized Yamaha chips like the OPL2 and OPL3. Other computers such as the Sharp X68000 and MSX (Yamaha CX5M computer unit) use the OPM sound chip (which was also commonly used for arcade machines up to the mid-nineties) with later CX5M units using the OPP sound chip, and the NEC PC-88 and PC-98 computers use the OPN and OPNA. For arcade systems and game consoles, OPNB was used as main basic sound generator board in Taito's arcade boards (with a variant of the OPNB being used in the Taito Z System) and notably used in SNK's Neo Geo arcade (MVS) and home console (AES) machines. The related OPN2 was used in the Sega's Mega Drive (Genesis) and Fujitsu's FM Towns Marty as one of its sound generator chips. Throughout the 2000s, FM synthesis was also used on a wide range of phones to play ringtones and other sounds, typically in the Yamaha SMAF format.</p> <youtube-embed video="wn71QBApCRg" /><h2 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h2> <h3 id="don-buchla-mid-1960s" tabindex="-1">Don Buchla (mid-1960s) <a class="header-anchor" href="#don-buchla-mid-1960s" aria-label="Permalink to "Don Buchla (mid-1960s)""></a></h3> <p>Don Buchla implemented FM on his instruments in the mid-1960s, prior to Chowning's patent. His 158, 258 and 259 dual oscillator modules had a specific FM control voltage input,[7] and the model 208 (Music Easel) had a modulation oscillator hard-wired to allow FM as well as AM of the primary oscillator.[8] These early applications used analog oscillators, and this capability was also followed by other modular synthesizers and portable synthesizers including Minimoog and ARP Odyssey.</p> <h3 id="john-chowning-late-1960s–1970s" tabindex="-1">John Chowning (late-1960s–1970s) <a class="header-anchor" href="#john-chowning-late-1960s–1970s" aria-label="Permalink to "John Chowning (late-1960s–1970s)""></a></h3> <p>By the mid-20th century, frequency modulation (FM), a means of carrying sound, had been understood for decades and was being used to broadcast radio transmissions. FM synthesis was developed since 1967 at Stanford University, California, by John Chowning, who was trying to create sounds different from analog synthesis[citation needed]. His algorithm[citation needed] was licensed to Japanese company Yamaha in 1973. The implementation commercialized by Yamaha (US Patent 4018121 Apr 1977 or U.S. 
Patent 4,018,121) is actually based on phase modulation[citation needed], but the results end up being equivalent mathematically as both are essentially a special case of quadrature amplitude modulation[citation needed].</p> <h3 id="_1970s–1980s" tabindex="-1">1970s–1980s <a class="header-anchor" href="#_1970s–1980s" aria-label="Permalink to "1970s–1980s""></a></h3> <h4 id="expansions-by-yamaha" tabindex="-1">Expansions by Yamaha <a class="header-anchor" href="#expansions-by-yamaha" aria-label="Permalink to "Expansions by Yamaha""></a></h4> <p>Yamaha's engineers began adapting Chowning's algorithm for use in a commercial digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation[citation needed], though it would take several years before Yamaha released their FM digital synthesizers. In the 1970s, Yamaha were granted a number of patents, under the company's former name "Nippon Gakki Seizo Kabushiki Kaisha", evolving Chowning's work. Yamaha built the first prototype FM digital synthesizer in 1974. Yamaha eventually commercialized FM synthesis technology with the Yamaha GS-1, the first FM digital synthesizer, released in 1980. FM digital synthesizer Yamaha DX7 (1983)</p> <p>FM synthesis was the basis of some of the early generations of digital synthesizers, most notably those from Yamaha, as well as New England Digital Corporation under license from Yamaha. Yamaha's DX7 synthesizer, released in 1983, was ubiquitous throughout the 1980s. Several other models by Yamaha provided variations and evolutions of FM synthesis during that decade.</p> <p>Yamaha had patented its hardware implementation of FM in the 1970s, allowing it to nearly monopolize the market for FM technology until the mid-1990s.</p> <h4 id="related-development-by-casio" tabindex="-1">Related development by Casio <a class="header-anchor" href="#related-development-by-casio" aria-label="Permalink to "Related development by Casio""></a></h4> <p>Casio developed a related form of synthesis called phase distortion synthesis, used in its CZ range of synthesizers. It had a similar (but slightly differently derived) sound quality to the DX series.</p> <h3 id="_1990s" tabindex="-1">1990s <a class="header-anchor" href="#_1990s" aria-label="Permalink to "1990s""></a></h3> <h4 id="popularization-after-the-expiration-of-patent" tabindex="-1">Popularization after the expiration of patent <a class="header-anchor" href="#popularization-after-the-expiration-of-patent" aria-label="Permalink to "Popularization after the expiration of patent""></a></h4> <p>With the expiration of the Stanford University FM patent in 1995, digital FM synthesis can now be implemented freely by other manufacturers. The FM synthesis patent brought Stanford $20 million before it expired, making it (in 1994) "the second most lucrative licensing agreement in Stanford's history". FM today is mostly found in software-based synths such as FM8 by Native Instruments or Sytrus by Image-Line, but it has also been incorporated into the synthesis repertoire of some modern digital synthesizers, usually coexisting as an option alongside other methods of synthesis such as subtractive, sample-based synthesis, additive synthesis, and other techniques. 
The degree of complexity of the FM in such hardware synths may vary from simple 2-operator FM, to the highly flexible 6-operator engines of the Korg Kronos and Alesis Fusion, to creation of FM in extensively modular engines such as those in the latest synthesisers by Kurzweil Music Systems.[citation needed]</p> <h4 id="realtime-convolution-modulation-afm-sample-and-formant-shaping-synthesis" tabindex="-1">Realtime Convolution & Modulation (AFM + Sample) and Formant Shaping Synthesis <a class="header-anchor" href="#realtime-convolution-modulation-afm-sample-and-formant-shaping-synthesis" aria-label="Permalink to "Realtime Convolution & Modulation (AFM + Sample) and Formant Shaping Synthesis""></a></h4> <p>New hardware synths specifically marketed for their FM capabilities disappeared from the market after the release of the Yamaha SY99 and FS1R, and even those marketed their highly powerful FM abilities as counterparts to sample-based synthesis and formant synthesis respectively. However, well-developed FM synthesis options are a feature of Nord Lead synths manufactured by Clavia, the Alesis Fusion range, the Korg Oasys and Kronos and the Modor NF-1. Various other synthesizers offer limited FM abilities to supplement their main engines.</p> <p>Yamaha began combining sets of 8 FM operators with multi-spectral wave forms in 1999 with the FS1R. The FS1R had 16 operators: 8 standard FM operators and 8 additional operators that used a noise source rather than an oscillator as their sound source. By adding in tuneable noise sources the FS1R could model the sounds produced by the human voice and by wind instruments, along with making percussion instrument sounds. The FS1R also contained an additional wave form called the Formant wave form. Formants can be used to model resonating body instrument sounds like the cello, violin, acoustic guitar, bassoon, English horn, or human voice. Formants can even be found in the harmonic spectrum of several brass instruments.</p> <h3 id="_2000s–present" tabindex="-1">2000s–present <a class="header-anchor" href="#_2000s–present" aria-label="Permalink to "2000s–present""></a></h3> <h4 id="variable-phase-modulation-fm-x-synthesis-altered-fm-etc" tabindex="-1">Variable Phase Modulation, FM-X Synthesis, Altered FM, etc <a class="header-anchor" href="#variable-phase-modulation-fm-x-synthesis-altered-fm-etc" aria-label="Permalink to "Variable Phase Modulation, FM-X Synthesis, Altered FM, etc""></a></h4> <p>In 2016, Korg released the Korg Volca FM, a 3-voice, 6-operator FM iteration of the Korg Volca series of compact, affordable desktop modules. More recently Korg released the opsix (2020) and opsix SE (2023), integrating 6-operator FM synthesis with subtractive, analogue-modeling, additive, semi-modular and waveshaping synthesis. Yamaha released the Montage, which combines a 128-voice sample-based engine with a 128-voice FM engine. This iteration of FM is called FM-X, and features 8 operators; each operator has a choice of several basic wave forms, but each wave form has several parameters to adjust its spectrum. The Yamaha Montage was followed by the more affordable Yamaha MODX in 2018, with a 64-voice, 8-operator FM-X architecture in addition to a 128-voice sample-based engine. Elektron in 2018 launched the Digitone, an 8-voice, 4-operator FM synth featuring Elektron's renowned sequencer engine.</p> <p>FM-X synthesis was introduced with the Yamaha Montage synthesizers in 2016. FM-X uses 8 operators.
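</p> <p>Engines like these chain several simple operators together. The toy sketch below is a purely illustrative serial stack of sine operators, not the actual architecture of any synthesizer named here; it shows how each operator can phase-modulate the next one down to the carrier.</p> <pre><code>// Toy serial FM/PM "algorithm": each operator phase-modulates the next one.
// Ratios and indices are arbitrary illustrations, not patch data from any synth.
interface Operator {
  ratio: number;  // frequency as a multiple of the note's base frequency
  index: number;  // how strongly this operator bends the next one's phase
}

function renderStack(ops: Operator[], baseHz: number, t: number): number {
  // Start from the top modulator and fold down towards the carrier.
  let modulation = 0;
  for (const op of [...ops].reverse()) {
    modulation = op.index * Math.sin(2 * Math.PI * op.ratio * baseHz * t + modulation);
  }
  return modulation;
}

// Three stacked operators: two modulators feeding one carrier (index 1 = output level).
const voice: Operator[] = [
  { ratio: 1, index: 1 },   // carrier
  { ratio: 2, index: 1.5 }, // modulator 1
  { ratio: 4, index: 0.7 }, // modulator 2
];
const sample = renderStack(voice, 110, 0.001);
</code></pre> <p>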
Each FM-X operator has a set of multi-spectral wave forms to choose from, which means each FM-X operator can be equivalent to a stack of 3 or 4 DX7 FM operators. The list of selectable wave forms includes sine waves, the All1 and All2 wave forms, the Odd1 and Odd2 wave forms, and the Res1 and Res2 wave forms. The sine wave selection works the same as the DX7 wave forms. The All1 and All2 wave forms are a saw-tooth wave form. The Odd1 and Odd2 wave forms are pulse or square waves. These two types of wave forms can be used to model the basic harmonic peaks in the bottom of the harmonic spectrum of most instruments. The Res1 and Res2 wave forms move the spectral peak to a specific harmonic and can be used to model either triangular or rounded groups of harmonics further up in the spectrum of an instrument. Combining an All1 or Odd1 wave form with multiple Res1 (or Res2) wave forms (and adjusting their amplitudes) can model the harmonic spectrum of an instrument or sound.</p> <h2 id="spectral-analysis" tabindex="-1">Spectral analysis <a class="header-anchor" href="#spectral-analysis" aria-label="Permalink to "Spectral analysis""></a></h2> <p>There are multiple variations of FM synthesis, including:</p> <ul> <li>Various operator arrangements (known as "FM Algorithms" in Yamaha terminology) <ul> <li>2 operators</li> <li>Serial FM (multiple stages)</li> <li>Parallel FM (multiple modulators, multiple-carriers),</li> <li>Mix of them</li> </ul> </li> <li>Various waveform of operators <ul> <li>Sinusoidal waveform</li> <li>Other waveforms</li> </ul> </li> <li>Additional modulation <ul> <li>Linear FM</li> <li>Exponential FM (preceded by the anti-logarithm conversion for CV/oct. interface of analog synthesizers)</li> <li>Oscillator sync with FM</li> </ul> </li> </ul> <p>etc.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/FM-Synthesis-Yoast-1_2048x.webp" length="0" type="image/webp"/> </item> <item> <title><![CDATA[Distortion]]></title> <link>https://chromatone.center/theory/synthesis/distortion/</link> <guid>https://chromatone.center/theory/synthesis/distortion/</guid> <pubDate>Thu, 12 Oct 2023 00:00:00 GMT</pubDate> <description><![CDATA[Linear and non-linear harmonic distortion]]></description> <content:encoded><![CDATA[<youtube-embed video="4QeqSYIXDr4" /><p>Distortion and overdrive are forms of audio signal processing used to alter the sound of amplified electric musical instruments, usually by increasing their gain, producing a "fuzzy", "growling", or "gritty" tone. Distortion is most commonly used with the electric guitar, but may also be used with other electric instruments such as electric bass, electric piano, synthesizer and Hammond organ. Guitarists playing electric blues originally obtained an overdriven sound by turning up their vacuum tube-powered guitar amplifiers to high volumes, which caused the signal to distort. While overdriven tube amps are still used to obtain overdrive, especially in genres like blues and rockabilly, a number of other ways to produce distortion have been developed since the 1960s, such as distortion effect pedals. 
The growling tone of a distorted electric guitar is a key part of many genres, including blues and many rock music genres, notably hard rock, punk rock, hardcore punk, acid rock, and heavy metal music, while the use of distorted bass has been essential in a genre of hip hop music and alternative hip hop known as "SoundCloud rap".</p> <youtube-embed video="7dLArMd-y64" /><p>The effects alter the instrument sound by clipping the signal (pushing it past its maximum, which shears off the peaks and troughs of the signal waves), adding sustain and harmonic and inharmonic overtones and leading to a compressed sound that is often described as "warm" and "dirty", depending on the type and intensity of distortion used. The terms distortion and overdrive are often used interchangeably; where a distinction is made, distortion is a more extreme version of the effect than overdrive. Fuzz is a particular form of extreme distortion originally created by guitarists using faulty equipment (such as a misaligned valve (tube); see below), which has been emulated since the 1960s by a number of "fuzzbox" effects pedals.</p> <p>Distortion, overdrive, and fuzz can be produced by effects pedals, rackmounts, pre-amplifiers, power amplifiers (a potentially speaker-blowing approach), speakers and (since the 2000s) by digital amplifier modeling devices and audio software. These effects are used with electric guitars, electric basses (fuzz bass), electronic keyboards, and more rarely as a special effect with vocals. While distortion is often created intentionally as a musical effect, musicians and sound engineers sometimes take steps to avoid distortion, particularly when using PA systems to amplify vocals or when playing back prerecorded music.</p> <h2 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h2> <p>The guitar solo on Chuck Berry's 1955 single "Maybellene" features "warm" overtone distortion produced by an inexpensive valve (tube) amplifier.</p> <h3 id="early-uses-of-amplified-distortion" tabindex="-1">Early uses of amplified distortion <a class="header-anchor" href="#early-uses-of-amplified-distortion" aria-label="Permalink to "Early uses of amplified distortion""></a></h3> <p>The first guitar amplifiers were relatively low-fidelity, and would often produce distortion when their volume (gain) was increased beyond their design limit or if they sustained minor damage. Around 1945, Western swing guitarist Junior Barnard began experimenting with a rudimentary humbucker pick-up and a small amplifier to obtain his signature "low-down and dirty" bluesy sound. Many electric blues guitarists, including Chicago bluesmen such as Elmore James and Buddy Guy, experimented in order to get a guitar sound that paralleled the rawness of blues singers such as Muddy Waters and Howlin' Wolf, replacing often their originals with the powerful Valco "Chicagoan" pick-ups, originally created for lap-steel, to obtain a louder and fatter tone. 
In early rock music, Goree Carter's "Rock Awhile" (1949) featured an over-driven electric guitar style similar to that of Chuck Berry several years later, as well as Joe Hill Louis' "Boogie in the Park" (1950).</p> <p>In the early 1950s, guitar distortion sounds started to evolve based on sounds created earlier in the decade by accidental damage to amps, such as in the popular early recording of the 1951 Ike Turner and the Kings of Rhythm song "Rocket 88", where guitarist Willie Kizart used a vacuum tube amplifier that had a speaker cone slightly damaged in transport. Electric guitarists began intentionally "doctoring" amplifiers and speakers in order to emulate this form of distortion.</p> <p>Electric blues guitarist Willie Johnson of Howlin' Wolf′s band began deliberately increasing gain beyond its intended levels to produce "warm" distorted sounds. Guitar Slim also experimented with distorted overtones, which can be heard in his hit electric blues song "The Things That I Used to Do" (1953). Chuck Berry's 1955 classic "Maybellene" features a guitar solo with warm overtones created by his small valve amplifier. Pat Hare produced heavily distorted power chords on his electric guitar for records such as James Cotton's "Cotton Crop Blues" (1954) as well as his own "I'm Gonna Murder My Baby" (1954), creating "a grittier, nastier, more ferocious electric guitar sound," accomplished by turning the volume knob on his amplifier "all the way to the right until the speaker was screaming."</p> <p>In 1956, guitarist Paul Burlison of the Johnny Burnette Trio deliberately dislodged a vacuum tube in his amplifier to record "The Train Kept A-Rollin" after a reviewer raved about the sound Burlison's damaged amplifier produced during a live performance. According to other sources Burlison's amp had a partially broken loudspeaker cone. Pop-oriented producers were horrified by that eerie "two-tone" sound, quite clean on trebles but strongly distorted on basses, but Burnette insisted on releasing the sessions, arguing that "that guitar sounds like a nice horn section".</p> <p>In the late 1950s, Guitarist Link Wray began intentionally manipulating his amplifiers' vacuum tubes to create a "noisy" and "dirty" sound for his solos after a similarly accidental discovery. Wray also poked holes in his speaker cones with pencils to further distort his tone, used electronic echo chambers (then usually employed by singers), the recent powerful and "fat" Gibson humbucker pickups, and controlled "feedback" (Larsen effect). The resultant sound can be heard on his highly influential 1958 instrumental, "Rumble" and Rawhide.</p> <h3 id="_1960s-fuzz-distortion-and-introduction-of-commercial-devices" tabindex="-1">1960s: fuzz, distortion, and introduction of commercial devices <a class="header-anchor" href="#_1960s-fuzz-distortion-and-introduction-of-commercial-devices" aria-label="Permalink to "1960s: fuzz, distortion, and introduction of commercial devices""></a></h3> <p>In 1961, Grady Martin scored a hit with a fuzzy tone caused by a faulty preamplifier that distorted his guitar playing on the Marty Robbins song "Don't Worry". Later that year Martin recorded an instrumental tune under his own name, using the same faulty preamp. The song, on the Decca label, was called "The Fuzz." Martin is generally credited as the discoverer of the "fuzz effect." The recording engineer from Martin's sessions, Glenn Snoddy, partnered with fellow WSM radio engineer Revis V. 
Hobbs to design and build a stand-alone device that would intentionally create the fuzzy effect. The two engineers sold their circuit to Gibson, who introduced it as the Maestro FZ-1 Fuzz-Tone in 1962, one of the first commercially-successful mass-produced guitar pedals.</p> <p>Shortly thereafter, the American instrumental rock band The Ventures asked their friend, session musician and electronics enthusiast Orville "Red" Rhodes for help recreating the Grady Martin "fuzz" sound. Rhodes offered The Ventures a fuzzbox he had made, which they used to record "2000 Pound Bee" in 1962.</p> <p>In 1964, a fuzzy and somewhat distorted sound gained widespread popularity after guitarist Dave Davies of The Kinks used a razor blade to slash his speaker cones for the band's single "You Really Got Me".</p> <p>In May 1965 Keith Richards used a Maestro FZ-1 Fuzz-Tone to record "(I Can't Get No) Satisfaction". The song's success greatly boosted sales of the device, and all available stock sold out by the end of 1965. Other early fuzzboxes include the Mosrite FuzzRITE and Arbiter Group Fuzz Face used by Jimi Hendrix, the Electro-Harmonix Big Muff Pi used by Hendrix and Carlos Santana, and the Vox Tone Bender used by Paul McCartney to play fuzz bass on "Think for Yourself" and other Beatles recordings.</p> <p>In 1966, Jim Marshall of the British company Marshall Amplification began modifying the electronic circuitry of his amplifiers so as to achieve a "brighter, louder" sound and fuller distortion capabilities. Also in 1966, Syd Barrett of Pink Floyd created the song Interstellar Overdrive, a song made entirely in electric distortion. It was released a year later in modified form on their debut album The Piper at the Gates of Dawn.</p> <p>In the late 1960s and early 1970s hard rock bands such as Deep Purple, Led Zeppelin and Black Sabbath forged what would eventually become the heavy metal sound through a combined use of high volumes and heavy distortion.</p> <youtube-embed video="DZ4IGAR7iio" /><h2 id="theory-and-circuits" tabindex="-1">Theory and circuits <a class="header-anchor" href="#theory-and-circuits" aria-label="Permalink to "Theory and circuits""></a></h2> <p>Waveform plot showing the different types of clipping. Valve overdrive is a form of soft limiting, while transistor clipping or extremely overdriven valves resemble hard clipping.</p> <p>The word distortion refers to any modification of wave form of a signal, but in music it is used to refer to nonlinear distortion (excluding filters) and particularly to the introduction of new frequencies by memoryless nonlinearities. In music the different forms of linear distortion have specific names describing them. The simplest of these is a distortion process known as "volume adjustment", which involves distorting the amplitude of a sound wave in a proportional (or 'linear') way in order to increase or decrease the volume of the sound without affecting the tone quality. In the context of music, the most common source of (nonlinear) distortion is clipping in amplifier circuits and is most commonly known as overdrive.</p> <p>Clipping is a non-linear process that produces frequencies not originally present in the audio signal. These frequencies can be harmonic overtones, meaning they are whole number multiples of one of the signal's original frequencies, or "inharmonic", resulting from general intermodulation distortion. The same nonlinear device will produce both types of distortion, depending on the input signal. 
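</p> <p>Clipping of the kind just described is easy to express in code. The sketch below uses illustrative threshold and drive values; it contrasts abrupt hard clipping with a smooth tanh curve, which is one common stand-in for the gentler, valve-like soft clipping discussed in the following paragraphs.</p> <pre><code>// Hard clipping: anything beyond the threshold is simply flattened,
// which pushes energy into high-order (harsh-sounding) harmonics.
function hardClip(x: number, threshold = 0.5): number {
  return Math.max(-threshold, Math.min(threshold, x));
}

// Soft clipping: the transfer curve bends gradually, so low-order
// harmonics dominate and the result is heard as "warmer" overdrive.
function softClip(x: number, drive = 3): number {
  return Math.tanh(drive * x);
}

// Applying either curve sample-by-sample to a sine produces the
// progressively square-like waveforms described in this section.
const input = Float32Array.from({ length: 64 }, (_, n) => Math.sin((2 * Math.PI * n) / 64));
const driven = input.map((x) => softClip(x));
</code></pre> <p>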
Intermodulation occurs whenever the input frequencies are not already harmonically related. For instance, playing a power chord through distortion results in intermodulation that produces new subharmonics.</p> <p>"Soft clipping" gradually flattens the peaks of a signal which creates a number of higher harmonics which share a harmonic relationship with the original tone. "Hard clipping" flattens peaks abruptly, resulting in higher power in higher harmonics. As clipping increases, a tone input progressively begins to resemble a square wave which has odd number harmonics. This is generally described as sounding "harsh".</p> <p>Distortion and overdrive circuits each 'clip' the signal before it reaches the main amplifier (clean boost circuits do not necessarily create 'clipping') as well as boost signals to levels that cause distortion to occur at the main amplifier's front end stage (by exceeding the ordinary input signal amplitude, thus overdriving the amplifier) Note : product names may not accurately reflect type of circuit involved - see above.</p> <p>A fuzz box alters an audio signal until it is nearly a square wave and adds complex overtones by way of a frequency multiplier.</p> <youtube-embed video="YuojAtE8YCY" /><h3 id="valve-overdrive" tabindex="-1">Valve overdrive <a class="header-anchor" href="#valve-overdrive" aria-label="Permalink to "Valve overdrive""></a></h3> <p>Vacuum tube or "valve" distortion is achieved by "overdriving" the valves in an amplifier. In layman's terms, overdriving is pushing the tubes beyond their normal rated maximum. Valve amplifiers—particularly those using class-A triodes—tend to produce asymmetric soft clipping that creates both even and odd harmonics. The increase in even harmonics is considered to create "warm"-sounding overdrive effects.</p> <p>A basic triode valve (tube) contains a cathode, a plate and a grid. When a positive voltage is applied to the plate, a current of negatively charged electrons flows to it from the heated cathode through the grid. This increases the voltage of the audio signal, amplifying its volume. The grid regulates the extent to which plate voltage is increased. A small negative voltage applied to the grid causes a large decrease in plate voltage.</p> <p>Valve amplification is more or less linear—meaning the parameters (amplitude, frequency, phase) of the amplified signal are proportional to the input signal—so long as the voltage of the input signal does not exceed the valve's "linear region of operation". The linear region falls between</p> <p>The saturation region: the voltages at which plate current stops responding to positive increases in grid voltage and The cutoff region: the voltages at which the charge of the grid is too negative for electrons to flow to the plate. If a valve is biased within the linear region and the input signal's voltage exceeds this region, overdrive and non-linear clipping will occur. Multiple stages of valve gain/clipping can be "cascaded" to produce a thicker and more complex distortion sound. In layperson's terms, a musician will plug a fuzz pedal into a tube amp that is being "cranked" to a clipping "overdriven" condition; as such, the musician will get the distortion from the fuzz which is then distorted further by the amp. 
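</p> <p>The cascading idea reads naturally as a chain of nonlinear stages. In the hedged sketch below, each stage applies a slightly asymmetric soft clip (a small bias before a tanh curve), which is one simple way to obtain the even-harmonic content associated with "warm" valve overdrive; the bias, drive and stage-count numbers are illustrative assumptions.</p> <pre><code>// One asymmetric soft-clipping stage: a small bias before the tanh curve makes the
// transfer function asymmetric, which adds even as well as odd harmonics.
function valveStage(x: number, drive = 2.5, bias = 0.2): number {
  return Math.tanh(drive * (x + bias)) - Math.tanh(drive * bias);
}

// "Cascading" several stages, as described above, thickens the distortion:
// each stage re-clips the already clipped output of the previous one.
function cascade(x: number, stages = 3): number {
  let y = x;
  for (let i = 0; i !== stages; i++) {
    y = valveStage(y);
  }
  return y;
}
</code></pre> <p>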
During the 1990s, some Seattle grunge guitarists chained together as many as four fuzz pedals to create a thick "wall of sound" of distortion.</p> <p>In some modern valve effects, the "dirty" or "gritty" tone is actually achieved not by high voltage, but by running the circuit at voltages that are too low for the circuit components, resulting in greater non-linearity and distortion. These designs are referred to as "starved plate" configurations, and result in an "amp death" sound.[citation needed]</p> <h3 id="solid-state-distortion" tabindex="-1">Solid-state distortion <a class="header-anchor" href="#solid-state-distortion" aria-label="Permalink to "Solid-state distortion""></a></h3> <p>Solid-state amplifiers incorporating transistors and/or op amps can be made to produce hard clipping. When symmetrical, this adds additional high-amplitude odd harmonics, creating a "dirty" or "gritty" tone. When asymmetrical, it produces both even and odd harmonics. Electronically, this is usually achieved by either amplifying the signal to a point where it is clipped by the DC voltage limitation of the power supply rail, or by clipping the signal with diodes.[citation needed] Many solid-state distortion devices attempt to emulate the sound of overdriven vacuum valves using additional solid-state circuitry. Some amplifiers (notably the Marshall JCM 900) utilize hybrid designs that employ both valve and solid-state components.[citation needed]</p> <h2 id="approaches" tabindex="-1">Approaches <a class="header-anchor" href="#approaches" aria-label="Permalink to "Approaches""></a></h2> <p>Guitar distortion can be produced by many components of the guitar's signal path, including effects pedals, the pre-amplifier, power amplifier, and speakers. Many players use a combination of these to obtain their "signature" tone.</p> <h3 id="pre-amplifier-distortion" tabindex="-1">Pre-amplifier distortion <a class="header-anchor" href="#pre-amplifier-distortion" aria-label="Permalink to "Pre-amplifier distortion""></a></h3> <p>The pre-amplifier section of a guitar amplifier serves to amplify a weak instrument signal to a level that can drive the power amplifier. It often also contains circuitry to shape the tone of the instrument, including equalization and gain controls. Often multiple cascading gain/clipping stages are employed to generate distortion. Because the first component in a valve amplifier is a valve gain stage, the output level of the preceding elements of the signal chain has a strong influence on the distortion created by that stage. The output level of the guitar's pickups, the setting of the guitar's volume knob, how hard the strings are plucked, and the use of volume-boosting effects pedals can drive this stage harder and create more distortion.</p> <p>During the 1980s and 1990s, most valve amps featured a "master volume" control, an adjustable attenuator between the preamp section and the power amp. When the preamp volume is set high to generate high distortion levels, the master volume lowered, keeping the output volume at manageable levels.</p> <h3 id="overdrive-distortion-pedals" tabindex="-1">Overdrive/distortion pedals <a class="header-anchor" href="#overdrive-distortion-pedals" aria-label="Permalink to "Overdrive/distortion pedals""></a></h3> <p>Analog overdrive/distortion pedals work on similar principles to preamplifier distortion. 
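</p> <p>The same pedal topology (input gain, a clipping stage, then output level) can be sketched digitally with the Web Audio API's WaveShaperNode. This is a generic illustration rather than the circuit of any pedal named here; the curve resolution and gain values are assumptions.</p> <pre><code>// Generic overdrive sketch: pre-gain -> waveshaper (soft clip) -> post-gain.
const ctx = new AudioContext();

function makeSoftClipCurve(samples = 1024, drive = 4): Float32Array {
  // Map the range [-1, 1] through tanh to get a smooth clipping curve.
  return Float32Array.from({ length: samples }, (_, i) => {
    const x = (i / (samples - 1)) * 2 - 1;
    return Math.tanh(drive * x);
  });
}

const preGain = new GainNode(ctx, { gain: 3 });   // how hard the clipping stage is driven
const shaper = new WaveShaperNode(ctx, {
  curve: makeSoftClipCurve(),
  oversample: '4x',                               // reduces aliasing introduced by clipping
});
const postGain = new GainNode(ctx, { gain: 0.4 }); // level compensation after clipping

// source.connect(preGain); // e.g. a MediaStreamAudioSourceNode from a guitar interface
preGain.connect(shaper).connect(postGain).connect(ctx.destination);
</code></pre> <p>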
Because most effects pedals are designed to operate from battery voltages, using vacuum tubes to generate distortion and overdrive is impractical; instead, most pedals use solid-state transistors, op-amps and diodes. Classic examples of overdrive/distortion pedals include the Boss OD series (overdrives), the Ibanez Tube Screamer (an overdrive), the Electro-Harmonix Big Muff Pi (a fuzz box) and the Pro Co RAT (a distortion). Typically, "overdrive" pedals are designed to produce sounds associated with classic rock or blues, with "distortion" pedals producing the "high gain, scooped mids" sounds associated with heavy metal; fuzz boxes are designed to emulate the distinctive sound of the earliest overdrive pedals such as the Big Muff and the Fuzz Face.[citation needed]</p> <p>Most overdrive/distortion pedals can be used in two ways: a pedal can be used as a "boost" with an already overdriven amplifier to drive it further into saturation and "color" the tone, or it can be used with a completely clean amplifier to generate the whole overdrive/distortion effect. With care—and with appropriately chosen pedals—it is possible to "stack" multiple overdrive/distortion pedals together, allowing one pedal to act as a 'boost' for another.</p> <p>Fuzz boxes and other heavy distortions can produce unwanted dissonances when playing chords. To get around this, guitar players (and keyboard players) using these effects may restrict their playing to single notes and simple "power chords" (root, fifth, and octave). Indeed, with the most extreme fuzz pedals, players may choose to play mostly single notes, because the fuzz can make even single notes sound very thick and heavy. Heavy distortion also tends to limit the player's control of dynamics (loudness and softness)—similar to the limitations imposed on a Hammond organ player (Hammond organ does not produce louder or softer sounds depending on how hard or soft the performer plays the keys; however, the performer can still control the volume with drawbars and the expression pedal). Heavy metal music has evolved around these restrictions, using complex rhythms and timing for expression and excitement. Lighter distortions and overdrives can be used with triadic chords and seventh chords; as well, lighter overdrive allows more control of dynamics.[citation needed]</p> <h3 id="power-amplifier-distortion" tabindex="-1">Power amplifier distortion <a class="header-anchor" href="#power-amplifier-distortion" aria-label="Permalink to "Power amplifier distortion""></a></h3> <p>Power valves (tubes) can be overdriven in the same way that pre-amplifier valves can, but because these valves are designed to output more power, the distortion and character they add to the guitar's tone is unique. During the 1960s to early 1970s, distortion was primarily created by overdriving the power valves. Because they have become accustomed to this sound[dubious – discuss], many guitar players[who?] favour this type of distortion, and thus set their amps to maximum levels in order to drive the power section hard. Many valve-based amplifiers in common use have a push-pull output configuration in their power section, with matched pairs of tubes driving the output transformer. 
Power amplifier distortion is normally entirely symmetric, generating predominantly odd-order harmonics.</p> <p>Because driving the power valves this hard also means maximum volume, which can be difficult to manage in a small recording or rehearsal space, many solutions have emerged that in some way divert some of this power valve output from the speakers, and allow the player to generate power valve distortion without excessive volume. These include built-in or separate power attenuators and power-supply-based power attenuation, such as a VVR, or Variable Voltage Regulator to drop the voltage on the valves' plates, to increase distortion whilst lowering volume. Guitarists such as Eddie Van Halen have been known to use variacs before VVR technology was invented.[specify] Lower-power valve amps [such as a quarter-watt or less](citation needed), speaker isolation cabinets, and low-efficiency guitar speakers are also used to tame the volume.</p> <p>Power-valve distortion can also be produced in a dedicated rackmount valve power amp. A modular rackmount setup often involves a rackmount preamp, a rackmount valve power amp, and a rackmount dummy load to attenuate the output to desired volume levels. Some effects pedals internally produce power-valve distortion, including an optional dummy load for use as a power-valve distortion pedal. Such effects units can use a preamp valve such as the 12AX7 in a power-valve circuit configuration (as in the Stephenson's Stage Hog), or use a conventional power valve, such as the EL84 (as in the H&K Crunch Master compact tabletop unit). However, because these are usually placed before the pre-amplifier in the signal chain, they contribute to the overall tone in a different way. Power amplifier distortion may damage speakers.</p> <p>A Direct Inject signal can capture the power-tube distortion sound without the direct coloration of a guitar speaker and microphone. This DI signal can be blended with a miked guitar speaker, with the DI providing a more present, immediate, bright sound, and the miked guitar speaker providing a colored, remote, darker sound. The DI signal can be obtained from a DI jack on the guitar amp, or from the Line Out jack of a power attenuator.</p> <h3 id="output-transformer-distortion" tabindex="-1">Output transformer distortion <a class="header-anchor" href="#output-transformer-distortion" aria-label="Permalink to "Output transformer distortion""></a></h3> <p>The output transformer sits between the power valves and the speaker, serving to match impedance. When a transformer's ferromagnetic core becomes electromagnetically saturated a loss of inductance takes place, since the back E.M.F. is reliant on a change in flux in the core. As the core reaches saturation, the flux levels off and cannot increase any further. With no change in flux there is no back E.M.F. and hence no reflected impedance. The transformer and valve combination then generate large 3rd order harmonics. So long as the core does not go into saturation, the valves will clip naturally as they drop the available voltage across them. In single ended systems the output harmonics will be largely even ordered due to the valve's relatively non linear characteristics at large signal swings. This is only true however if the magnetic core does NOT saturate.</p> <h3 id="power-supply-sag" tabindex="-1">Power supply "sag" <a class="header-anchor" href="#power-supply-sag" aria-label="Permalink to "Power supply "sag"""></a></h3> <p>Early valve amplifiers used unregulated power supplies. 
This was due to the high cost associated with high-quality high-voltage power supplies. The typical anode (plate) supply was simply a rectifier, an inductor and a capacitor. When the valve amplifier was operated at high volume, the power supply voltage would dip, reducing power output and causing signal attenuation and compression. This dipping effect is known as "sag", and is sought-after by some electric guitarists. Sag only occurs in class-AB amplifiers. This is because, technically, sag results from more current being drawn from the power supply, causing a greater voltage drop over the rectifier valve. Class AB amplifiers draw the most power at both the maximum and minimum point of the signal, putting more stress on the power supply than class A, which only draws maximum power at the peak of the signal.</p> <p>As this effect is more pronounced with higher input signals, the harder "attack" of a note will be compressed more heavily than the lower-voltage "decay", making the latter seem louder and thereby improving sustain. Additionally, because the level of compression is affected by input volume, the player can control it via their playing intensity: playing harder results in more compression or "sag". In contrast, modern amplifiers often use high-quality, well-regulated power supplies.</p> <h3 id="speaker-distortion" tabindex="-1">Speaker distortion <a class="header-anchor" href="#speaker-distortion" aria-label="Permalink to "Speaker distortion""></a></h3> <p>Guitar loudspeakers are designed differently from high fidelity stereo speakers or public address system speakers. While hi-fi and public address speakers are designed to reproduce the sound with as little distortion as possible, guitar speakers are usually designed so that they will shape or color the tone of the guitar, either by enhancing some frequencies or attenuating unwanted frequencies.</p> <p>When the power delivered to a guitar speaker approaches its maximum rated power, the speaker's performance degrades, causing the speaker to "break up", adding further distortion and colouration to the signal. Some speakers are designed to have much clean headroom, while others are designed to break up early to deliver grit and growl.</p> <h3 id="amp-modeling-for-distortion-emulation" tabindex="-1">Amp modeling for distortion emulation <a class="header-anchor" href="#amp-modeling-for-distortion-emulation" aria-label="Permalink to "Amp modeling for distortion emulation""></a></h3> <p>Guitar amp modeling devices and software can reproduce various guitar-specific distortion qualities that are associated with a range of popular "stomp box" pedals and amplifiers. Amp modeling devices typically use digital signal processing to recreate the sound of plugging into analogue pedals and overdriven valve amplifiers. The most sophisticated devices allow the user to customize the simulated results of using different preamp, power-tube, speaker distortion, speaker cabinet, and microphone placement combinations. 
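</p> <p>One common digital shortcut for the speaker-and-microphone part of that chain is convolution with a short impulse response of a real cabinet. The sketch below is illustrative only: <code>cab-ir.wav</code> is a hypothetical file name, and the ConvolverNode simply colours an already-distorted signal the way a miked guitar speaker would.</p> <pre><code>// Minimal "amp model" tail: clipped signal -> cabinet impulse response.
const ctx = new AudioContext();

async function loadCabinetIR(url: string) {
  // Fetch and decode a short impulse response recorded from a speaker cabinet.
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  return ctx.decodeAudioData(data);
}

async function buildCabStage() {
  // 'cab-ir.wav' is a placeholder name, not a file shipped with this article.
  const cab = new ConvolverNode(ctx, { disableNormalization: false });
  cab.buffer = await loadCabinetIR('cab-ir.wav');
  return cab;
}

// Usage idea: distortionStage.connect(cabStage).connect(ctx.destination);
</code></pre> <p>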
For example, a guitarist using a small amp modeling pedal could simulate the sound of plugging their electric guitar into a heavy vintage valve amplifier and a stack of 8 X 10" speaker cabinets.</p> <h2 id="voicing-with-equalization" tabindex="-1">Voicing with equalization <a class="header-anchor" href="#voicing-with-equalization" aria-label="Permalink to "Voicing with equalization""></a></h2> <p>Guitar distortion is obtained and shaped at various points in the signal processing chain, including multiple stages of preamp distortion, power valve distortion, output and power transformer distortion, and guitar speaker distortion. Much of the distortion character or voicing is controlled by the frequency response before and after each distortion stage. This dependency of distortion voicing on frequency response can be heard in the effect that a wah pedal has on the subsequent distortion stage, or by using tone controls built into the guitar, the preamp or an EQ pedal to favor the bass or treble components of the guitar pickup signal prior to the first distortion stage. Some guitarists place an equalizer pedal after the distortion effect, to emphasize or de-emphasize different frequencies in the distorted signal.</p> <p>Increasing the bass and treble while reducing or eliminating the centre midrange (750 Hz) results in what is popularly known as a "scooped" sound (since the midrange frequencies are "scooped" out). Conversely, decreasing the bass while increasing the midrange and treble creates a punchy, harsher sound. Rolling off all of the treble produces a dark, heavy sound.</p> <h2 id="avoiding-distortion" tabindex="-1">Avoiding distortion <a class="header-anchor" href="#avoiding-distortion" aria-label="Permalink to "Avoiding distortion""></a></h2> <p>Electronic audio compression devices, such as this DBX 566, are used by audio engineers to prevent signal peaks from causing unwanted distortion.</p> <p>While musicians intentionally create or add distortion to electric instrument signals or vocals to create a musical effect, there are some musical styles and musical applications where as little distortion as possible is sought. When DJs are playing recorded music in a nightclub, they typically seek to reproduce the recordings with little or no distortion. In many musical styles, including pop music, country music and even genres where the electric guitars are almost always distorted, such as heavy metal, punk and hard rock, sound engineers usually take a number of steps to ensure that the vocals sounding through the sound reinforcement system are undistorted (the exception is the rare cases where distortion is purposely added to vocals in a song as a special effect).</p> <p>Sound engineers prevent unwanted, unintended distortion and clipping using a number of methods. They may reduce the gain on microphone preamplifiers on the audio console; use attenuation "pads" (a button on audio console channel strips, DI unit and some bass amplifiers); and use electronic audio compressor effects and limiters to prevent sudden volume peaks from vocal mics from causing unwanted distortion.</p> <p>Though some bass guitar players in metal and punk bands intentionally use fuzz bass to distort their bass sound, in other genres of music, such as pop, big band jazz and traditional country music, bass players typically seek an undistorted bass sound. 
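</p> <p>The clip-prevention tools mentioned above map directly onto the Web Audio DynamicsCompressorNode; configured with a high ratio and a fast attack it behaves as a rough peak limiter. The threshold and ratio below are illustrative, not recommended settings.</p> <pre><code>// A DynamicsCompressorNode configured as a rough peak limiter, to keep
// sudden vocal or instrument peaks from clipping the output stage.
const ctx = new AudioContext();

const limiter = new DynamicsCompressorNode(ctx, {
  threshold: -6,  // dB: start reducing gain above this level
  knee: 0,        // hard knee for limiter-like behaviour
  ratio: 20,      // very high ratio: almost no level increase past the threshold
  attack: 0.003,  // seconds: react quickly to transients
  release: 0.1,   // seconds: recover smoothly
});

// microphone.connect(limiter); // e.g. a MediaStreamAudioSourceNode
limiter.connect(ctx.destination);
</code></pre> <p>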
To obtain a clear, undistorted bass sound, professional bass players in these genres use high-powered amplifiers with a lot of "headroom" and they may also use audio compressors to prevent sudden volume peaks from causing distortion. In many cases, musicians playing stage pianos or synthesizers use keyboard amplifiers that are designed to reproduce the audio signal with as little distortion as possible. The exceptions with keyboards are the Hammond organ as used in blues and the Fender Rhodes as used in rock music; with these instruments and genres, keyboardists often purposely overdrive a tube amplifier to get a natural overdrive sound. Another example of instrument amplification where as little distortion as possible is sought is with acoustic instrument amplifiers, designed for musicians playing instruments such as the mandolin or fiddle in a folk or bluegrass style.</p> <h2 id="links" tabindex="-1">Links <a class="header-anchor" href="#links" aria-label="Permalink to "Links""></a></h2> <ul> <li><a href="https://www.izotope.com/en/learn/what-is-distortion-in-music-when-and-how-to-use-it.html" target="_blank" rel="noreferrer">https://www.izotope.com/en/learn/what-is-distortion-in-music-when-and-how-to-use-it.html</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/bernie-almanzar.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[MIDI Keys]]></title> <link>https://chromatone.center/practice/midi/keys/</link> <guid>https://chromatone.center/practice/midi/keys/</guid> <pubDate>Fri, 06 Oct 2023 00:00:00 GMT</pubDate> <description><![CDATA[Reactive colorful virtual piano keyboard]]></description> <content:encoded><![CDATA[<MidiKeys />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Sound painting]]></title> <link>https://chromatone.center/theory/composition/sound-painting/</link> <guid>https://chromatone.center/theory/composition/sound-painting/</guid> <pubDate>Tue, 11 Jul 2023 00:00:00 GMT</pubDate> <description><![CDATA[Sign language for live music composition]]></description> <content:encoded><![CDATA[<youtube-embed video="YJQf0MDsNaA" /><h2 id="introduction" tabindex="-1">Introduction <a class="header-anchor" href="#introduction" aria-label="Permalink to "Introduction""></a></h2> <p><a href="http://www.soundpainting.com" target="_blank" rel="noreferrer">Soundpainting</a> is the universal multidisciplinary live composing sign language for musicians, actors, dancers, and visual Artists. Presently (2023) the language comprises more than 1500 gestures that are signed by the Soundpainter (composer) to indicate the type of material desired of the performers. The creation of the composition is realized, by the Soundpainter, through the parameters of each set of signed gestures. The Soundpainting language was created by Walter Thompson in Woodstock, New York in 1974.</p> <h2 id="developing-the-creative-mind" tabindex="-1">Developing the creative mind <a class="header-anchor" href="#developing-the-creative-mind" aria-label="Permalink to "Developing the creative mind""></a></h2> <p>Soundpainting is an essential method for engaging students of all ages, ability levels, and art forms in the creative process. Unlike learning to create within a single style, Soundpainting develops the creative voices of students through an array of structural parameters allowing individual choice and stylistic parameters. 
Using the composer, or “Soundpainter,” as teacher, the innate creativity of students is drawn out and developed constructively by way of the gestural choices of the Soundpainter, enabling each individual, each group, to express their own character in an experiential learning format.</p> <youtube-embed video="hp_AxCgtD1M" /><h2 id="analysis" tabindex="-1">Analysis <a class="header-anchor" href="#analysis" aria-label="Permalink to "Analysis""></a></h2> <p>The Soundpainter (the composer) standing in front (usually) of the group communicates a series of signs using hand and body gestures indicating specific and/or aleatoric material to be performed by the group. The Soundpainter develops the responses of the performers, molding and shaping them into the composition then signs another series of gestures, a phrase, and continues in this process of composing the piece.</p> <p>The Soundpainter composes in real time utilizing the gestures to create the composition in any way they desire. The Soundpainter sometimes knows what he/she will receive from the performers and sometimes does not know what he/she will receive – the elements of specificity and chance. The Soundpainter composes with what happens in the moment, whether expected or not. The ability to compose with what happens in the moment, in real time, is what is required in order to attain a high level of fluency with the Soundpainting language.</p> <p>The gestures of the Soundpainting language are signed using the syntax of Who, What, How and When. There are many types of gestures, some indicating specific material to be performed as well as others indicating specific styles, genres, aleatoric concepts, improvisation, disciplines, stage positions, costumes, props, and many others.</p> <p><img src="./12.jpg" alt="on stage"></p> <h2 id="the-structure-of-soundpainting" tabindex="-1">The Structure of Soundpainting <a class="header-anchor" href="#the-structure-of-soundpainting" aria-label="Permalink to "The Structure of Soundpainting""></a></h2> <p>The Soundpainting gestures are grouped in two basic categories: Sculpting gestures and Function signals.</p> <p>Sculpting gestures indicate What type of material and How it is to be performed and Function signals indicate Who performs and When to begin performing. Who, What, How, and When comprise the Soundpainting syntax. Note: The How gestures are not always employed. The Soundpainter often signs a phrase leaving out a How gesture. For example: Whole Group, Long Tone, Play. 
If you sign your phrase without a How gesture, then it is the performers choice in deciding the dynamics and quality of the material.</p> <p>The Soundpainting syntax Who, What, How, When and the two basic categories Sculpting Gestures and Function Signals are further broken down into six subcategories: Identifiers, Content, Modifiers, Go gestures, Modes, and Palettes.</p> <ol> <li> <p><strong>Identifiers</strong> are in the Function category and are Who gestures such as Whole Group, Woodwinds, Brass, Group 1, Rest of the Group, etc.</p> </li> <li> <p><strong>Content gestures</strong> are in the Sculpting category and identify What type of material is to be performed such as Pointillism, Minimalism, Long Tone, Play Can’t Play etc.</p> </li> <li> <p><strong>Modifiers</strong> are in the Sculpting category and are How gestures such as Volume Fader and Tempo Fader.</p> </li> <li> <p><strong>Go gestures</strong> are in the Function category and indicate When to enter or exit the composition and in some cases when to exit Content such as Snapshot or Launch Mode.</p> </li> <li> <p><strong>Modes</strong> are in the Sculpting category and are Content gestures embodying specific performance parameters. Scanning, Point to Point, and Launch Mode are several examples of Modes.</p> </li> <li> <p><strong>Palettes</strong> are in the Sculpting category and are primarily Content gestures identifying composed and/or rehearsed material</p> </li> </ol> <p><img src="./use.jpg" alt="use of sound painting"></p> <h2 id="three-rates-of-development" tabindex="-1">Three Rates of Development <a class="header-anchor" href="#three-rates-of-development" aria-label="Permalink to "Three Rates of Development""></a></h2> <p>Certain gestures such as Point to Point, Scanning, Play Can’t Play, Relate To and Improvise are a few of the Content gestures that include parameters requiring specific rates of material development.</p> <p><strong>Rate 1:</strong> The performer develops their material in such a way that one minute later there would still be a relationship to their original idea.</p> <p><strong>Rate 2:</strong> The rate of development of material is about twice as fast as that of Rate 1. A minute later there would only be a vague relationship to the original idea.</p> <p><strong>Rate 3:</strong> Open rate of development. The performer may develop their material at any rate of their choosing.</p> <p><img src="./draw.jpg" alt="drawing of sound painting"></p> <h2 id="imaginary-regions" tabindex="-1">Imaginary Regions <a class="header-anchor" href="#imaginary-regions" aria-label="Permalink to "Imaginary Regions""></a></h2> <p>There are four Imaginary Regions the Soundpainter utilizes when signing gestures.</p> <ol> <li> <p><strong>The Neutral position:</strong> The Neutral position is the place on stage where the Soundpainter’s body indicates silence and/or stillness. It is where the Soundpainter prepares the phrase for initiation.</p> </li> <li> <p><strong>The Box:</strong> An imaginary space just in front of the Soundpainters Neutral position where phrases are initiated – the place of action. The Box is approximately 2 meters (6 feet) long and one meter wide. Important: Who, What, and How (sometimes) gestures are prepared out of the Box and then the Soundpainter steps into the Box initiating the phrase with a Go gesture. For example: Whole Group (out of the Box), Long Tone (out of the Box), Volume Fader (medium) (out of the Box), Play (in the Box). 
Important Note: Modifying Gestures such as Volume Fader and Tempo Fader can either be prepared out of the Box, then initiated with a Go gesture or, may be used in real time by stepping into the Box and signing the gesture for an immediate response from the performers.</p> </li> <li> <p><strong>The Imaginary Staff:</strong> An imaginary vertical field 1 and ½ meters (3 ½ feet) just in front of the Soundpainter that indicates low to high pitch range with sound and slow to fast movement with certain gestures such as a Long Tone. Note: The name Imaginary Staff is derived from music language. It is related to the music staff, which is a set of five parallel lines with spaces between them, on which notes are written to indicate their pitch.</p> </li> <li> <p><strong>The Imaginary Stage:</strong> A horizontal field (like a small square table top) approximately ¾ of a meter for each side (3/4 of a yard squared) at waist height positioned just in front of the Soundpainter. The Imaginary Stage is the region in which the Soundpainter indicates movement directions on the stage – where the movement will travel to and from. Such gestures as Directions and Space Fader are both signed on the Imaginary Stage.</p> </li> </ol> <youtube-embed video="tKFjZEUbYdU" /><p><img src="./walter.jpg" alt="Walter Thompson"></p> <h2 id="walter-thompson" tabindex="-1">Walter Thompson <a class="header-anchor" href="#walter-thompson" aria-label="Permalink to "Walter Thompson""></a></h2> <h3 id="soundpainter-composer-woodwinds-piano-percussion-educator" tabindex="-1">Soundpainter, Composer, Woodwinds, Piano, Percussion, Educator <a class="header-anchor" href="#soundpainter-composer-woodwinds-piano-percussion-educator" aria-label="Permalink to "Soundpainter, Composer, Woodwinds, Piano, Percussion, Educator""></a></h3> <p>Walter Thompson has achieved international recognition as a composer and for the creation of Soundpainting, the universal multidisciplinary live composing sign language. Thompson has composed Soundpaintings with contemporary orchestras, dance companies, theatre ensembles and multidisciplinary groups in United States, Europe and South America.</p> <p>In 1974, after attending Berklee School of Music, Walter Thompson moved to Woodstock and began an association with the Creative Music Studio. While there, he studied composition and woodwinds with Anthony Braxton and began to develop his interest in using hand and body gestures as a way to create real-time compositions. Beginning as a tool to help shape the direction of a performance, it has evolved to become a universal composing language for composers and artists off all disciplines and abilities.</p> <p>The language continues to be developed through Thompson’s performances, international think tanks, and the contributions of a wide range of artists and educators. 
Soundpainting is now being used both professionally and in education in more than 35 countries around the world including; the United States, France, Canada, Australia, Czech Republic, China, Germany, Spain, Norway, Denmark, Sweden, Finland, Italy, Japan, South Africa, Brazil, Uruguay, Montenegro, Guadeloupe, Argentina, Kazakhstan, Mexico, Nigeria, Switzerland, Turkey, and the Netherlands.</p> <p><img src="./perf.jpg" alt="WT on stage"></p> <p>Thompson has composed Soundpaintings with contemporary orchestras, dance companies, theatre ensembles and multidisciplinary groups in many cities, including Barcelona, Paris, New York, Chicago, Los Angeles, Boston, Oslo, Berlin, Bergen, Lucerne, Copenhagen, and Reykjavik, among others, and has taught Soundpainting at the Paris Conservatoire; Grieg Academy, Bergen, Norway; Iceland Academy of the Arts; Eastman School of Music; University of California San Diego; University of Michigan; University of Iowa; Oberlin College-Conservatory of Music; and New York University, among many others. Thompson is founder of and Soundpainter for The Walter Thompson Orchestra founded in 1984 and based in New York City.</p> <p>In 2002, Premis FAD Sebastià Gasch d’Arts Parateatrals awarded Thompson the prestigious “Aplaudiment” for his work with Soundpainting in Barcelona, Spain. He has also received awards from the National Endowment for the Arts, Meet the Composer, the Mary Flagler Cary Charitable Trust, ASCAP, Rockefeller Foundation, Mid Atlantic Arts Foundation, New York State Council on the Arts, and the Jerome Foundation.</p> <h2 id="a-new-approach-in-music-education-improving-creativity-soundpainting" tabindex="-1">A New Approach In Music Education Improving Creativity: Soundpainting <a class="header-anchor" href="#a-new-approach-in-music-education-improving-creativity-soundpainting" aria-label="Permalink to "A New Approach In Music Education Improving Creativity: Soundpainting""></a></h2> <p>Common characteristics of Orff, Kodaly and Dalcrose music education approaches are not only that they are the education methods improving the creativity but it is also that they gain aimed behaviours in dramatization, improve musical skills and ensure adaption into social environment thanks to music, acquire mental skills such as motivation, attention and self-confidence. Soundpainting is the universal live composing language created for musicians, dancers, actors, poets and visual artists working in their improvised environment. Soundpainting, a performance art based on improvisation, has started to be used in education in recent years. Just like other music education methods, education of Soundpainting is quite an important music education approach in terms of development in the creativity of individuals, their musical and mental skills. With this study, it has been tried to emphasize the Soundpainting's common points with other music education approaches by providing information about Soundpainting. Within this study, it is aimed at contributing to music education and teaching as a new approach. This research is one of the very few studies, which carried out in the fields of Soundpainting. Therefore, it is important in terms of its contribution to the both education and Soundpainting fields. This research is descriptive research and data were obtained by literature review. At the end of the research, it was emerged that Soundpainting has common ground with the other music education approaches and it could be used in music education. 
<a href="https://www.academia.edu/30315900/A_New_Approach_In_Music_Education_Improving_Creativity_Soundpainting" target="_blank" rel="noreferrer">Read more</a></p> <ul> <li>A New Approach In Music Education Improving Creativity: Soundpainting <a href="/books/A_New_Approach_In_Music_Education_Improv-1.pdf">Download PDF</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/stage.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Modality]]></title> <link>https://chromatone.center/theory/harmony/modal/</link> <guid>https://chromatone.center/theory/harmony/modal/</guid> <pubDate>Tue, 10 Jan 2023 00:00:00 GMT</pubDate> <description><![CDATA[Modal harmony study]]></description> <content:encoded><![CDATA[<h2 id="introduction" tabindex="-1">Introduction <a class="header-anchor" href="#introduction" aria-label="Permalink to "Introduction""></a></h2> <p>The vast majority of music written in the last few centuries has been ‘tonal’. This is the type of music we are all used to hearing day to day. However, in the 1950’s Jazz musicians began feeling restricted by ‘tonality’ and started experimenting with other ways of structuring harmony (i.e. chords).</p> <p>From Tonality (which encompasses your more traditional Jazz all the way through to Bebop, Hard-bop and Cool Jazz) Jazz musicians moved to Modality (<a href="https://www.thejazzpianosite.com/jazz-piano-lessons/modern-jazz-theory/modal-jazz/" target="_blank" rel="noreferrer">Modal Jazz</a>) and Atonality (<a href="https://www.thejazzpianosite.com/jazz-piano-lessons/modern-jazz-theory/free-jazz/" target="_blank" rel="noreferrer">Free Jazz</a> – though Free Jazz is NOT necessarily atonal).</p> <p>In this lesson we’re going to start with the difference between Tonal Harmony vs Modal Harmony.</p> <p>Just as a semantic aside, when I say ‘modal harmony’ I am referring to the modern meaning of the term – that is Miles Davis/Kind of Blue/Modal Jazz Modal Harmony – and <strong>NOT</strong> Medieval music or Gregorian Modes.</p> <h2 id="modality" tabindex="-1">Modality <a class="header-anchor" href="#modality" aria-label="Permalink to "Modality""></a></h2> <p>Modality has the following features:</p> <ul> <li>It uses all <a href="https://www.thejazzpianosite.com/jazz-piano-lessons/the-basics/modes/" target="_blank" rel="noreferrer">modes</a> (Ionian, Dorian, Phrygian, etc.)</li> <li>It does NOT use a Functional Harmony</li> <li>It has a Tonal Centre (i.e. root note)</li> </ul> <p>Modal songs can be written in any mode (not just major and minor), so for example it can be in the key of D Dorian.</p> <p>In Modal Harmony, chords DO NOT have a function, so in a sense: all chords are equal. A chord DOES NOT need to resolve to any other chord. But there is still a Tonal Centre – for example the note D in the key of D Dorian (i.e. the root note).</p> <p>But because there is no ‘functional harmony’ the chords DO NOT feel like they need to resolve to the tonic or Dm7 chord. Each chord just floats there by itself as a standalone entity.</p> <p>In order to achieve this you have to avoid playing the diatonic tritone – because this tritone interval creates a dissonance which sounds like a Dominant Chord and feels like it wants to resolve to the Tonic Chord, thus turning the music tonal.</p> <p>So it’s a delicate balance. You have to make D sound like the ‘tonal centre’ but you can’t do it by using the function of the diatonic chords. So you:</p> <ul> <li>CAN’T use a dominant chord to establish the tonic (i.e. 
A7 to Dm)</li> <li>CAN use Pedal point (Repeat Root Note)</li> <li>CAN use Ostinato (Repetitive pattern)</li> </ul> <p>Because the majority of music we hear (pop, rock, etc.) is tonal and chords are usually built out of stacked 3rds, we’ve learned to associate chords built in thirds with tonal harmony. The way to get around this problem is to build chords with 4ths – that is, use <a href="https://www.thejazzpianosite.com/jazz-piano-lessons/jazz-chord-voicings/quartal-voicings/" target="_blank" rel="noreferrer"><strong>Quartal Chords</strong></a>. By building chords in 4ths, you break that tonal anticipation of the Dominant chord wanting to move to the tonic and you create a more ambiguous, vague and modal sound.</p> <p>Because modal chords don’t have ‘functions’, they don’t have to go anywhere (i.e. they don’t have to resolve to the tonic). They just float around. So modal songs usually don’t have chord progressions. They just state the key/scale/mode the song is in and it’s your job to play any diatonic chords (i.e. Dm7, Em7, FMaj7, G7, Am7, Bø7, & CMaj7 in the key of D Dorian) and make your own ‘chord progression’.</p> <p>So then when playing a Modal Song you have to:</p> <ul> <li>Emphasise the root note in the bass (to reinforce the tonal centre); and</li> <li>Avoid playing the diatonic tritone (to avoid tonal sound) <ul> <li>DO NOT play Bø7</li> <li>Play G Triad (instead of G7) to avoid the tritone interval</li> </ul> </li> <li>Move around the diatonic chords at random but smoothly (generally stepwise)</li> <li>Keep the chord movements sparse and simple – not too busy, not too many chords, nice and boring. The chords are there just to create a harmonic underlay.</li> <li>Use Quartal Chords (to avoid tonal sound)</li> </ul> <p>Modal harmony creates a more ambiguous and vague sound and is now considered much more ‘modern’ than traditional tonal harmony. Modal Harmony completely changed the way people think about Jazz and improvisation. It gave the soloist greater freedom and choice in his or her solo (I’ll have much more to say about this in the next lesson).</p> <h2 id="tonal-harmony-vs-modal-harmony-summary" tabindex="-1">Tonal Harmony vs Modal Harmony Summary <a class="header-anchor" href="#tonal-harmony-vs-modal-harmony-summary" aria-label="Permalink to "Tonal Harmony vs Modal Harmony Summary""></a></h2> <p>And that, in a nutshell, is the difference between Tonal Harmony vs Modal Harmony.
So, in summary:</p> <table id="tablepress-183" class="tablepress tablepress-id-183"> <thead> <tr class="row-1 odd"> <th class="column-1">Tonality</th> <th class="column-2">Modality</th> </tr> </thead> <tbody class="row-hover"> <tr class="row-2 even"> <td class="column-1">Uses Major & minor keys</td> <td class="column-2">Uses all modes</td> </tr> <tr class="row-3 odd"> <td class="column-1">Functional Harmony</td> <td class="column-2">No Functional Harmony</td> </tr> <tr class="row-4 even"> <td class="column-1">Tonal Centre (root note)</td> <td class="column-2">Tonal Centre</td> </tr> </tbody> </table> <p><img src="https://www.thejazzpianosite.com/wp-content/uploads/2016/12/Tonal-Harmony-vs-Modal-Harmony.png" alt="Tonal Harmony vs Modal Harmony"></p> <h2 id="have-a-listen-to" tabindex="-1">Have a Listen to <a class="header-anchor" href="#have-a-listen-to" aria-label="Permalink to "Have a Listen to""></a></h2> <ul> <li>So What ~ Miles Davis</li> <li>Milestones ~ Miles Davis</li> <li>My Favorite Things ~ John Coltrane</li> <li>Impressions ~ John Coltrane</li> <li>Little Sunflower ~ Freddie Hubbard</li> <li>Footprints ~ Wayne Shorter</li> </ul> <youtube-embed video="OCkCn0dEgpw" />]]></content:encoded> <enclosure url="https://www.thejazzpianosite.com/wp-content/uploads/2016/12/Tonal-Harmony-vs-Modal-Harmony.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Jam table]]></title> <link>https://chromatone.center/practice/jam/table/</link> <guid>https://chromatone.center/practice/jam/table/</guid> <pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate> <description><![CDATA[A visual guide for jamming to be placed on the table and accessible from both sides]]></description> <content:encoded><![CDATA[<JamTable />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Indian ragas]]></title> <link>https://chromatone.center/practice/scale/raga/</link> <guid>https://chromatone.center/practice/scale/raga/</guid> <pubDate>Mon, 11 Jul 2022 00:00:00 GMT</pubDate> <description><![CDATA[Interactive scaleset used in Melakarta raga]]></description> <content:encoded><![CDATA[<ScaleRaga />]]></content:encoded> <enclosure url="https://chromatone.center/saubhagya-gandharv.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Tuner]]></title> <link>https://chromatone.center/practice/pitch/tuner/</link> <guid>https://chromatone.center/practice/pitch/tuner/</guid> <pubDate>Sat, 18 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Fast and precise instrument tuner web-app]]></description> <content:encoded><![CDATA[<PitchTuner style="position: sticky; top: 0;" /><div class="info custom-block"><p class="custom-block-title">INFO</p> <p>Start the tuner with the button at the center and produce the sound you want to find the pitch for. It may be your guitar, ukulele or any other string instrument. It may be your voice. The app will show the base frequency of the sound as well as the cents difference with the closes 12-TET note. 
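</p> <p>For those curious about the arithmetic behind that readout, here is a minimal sketch (my own illustration, not the app's actual code) of how a detected frequency can be mapped to the nearest 12-TET note and a cents offset, using A4 = 440 Hz as the reference:</p> <pre class="language-swift"><code>import Foundation

// A minimal sketch, not the app's implementation: map a frequency to the
// nearest 12-TET note name and the deviation from it in cents (A4 = 440 Hz).
func nearestNote(frequency: Double) -> (name: String, cents: Double) {
    let names = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
    let semitonesFromA4 = 12.0 * log2(frequency / 440.0)
    let nearest = semitonesFromA4.rounded()
    let cents = (semitonesFromA4 - nearest) * 100.0   // offset from that note, in cents
    let index = ((Int(nearest) % 12) + 12) % 12       // wrap into a pitch class
    return (names[index], cents)
}

print(nearestNote(frequency: 442.0))   // ("A", roughly +7.9 cents sharp)
</code></pre> <p>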
The color of the background gradient corresponds to the exact frequency the app has detected.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/tuner.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[See Chroma]]></title> <link>https://chromatone.center/practice/chroma/see/</link> <guid>https://chromatone.center/practice/chroma/see/</guid> <pubDate>Thu, 16 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Let's look at the relative amounts of all pitch class frequencies in any audio signal in real time.]]></description> <content:encoded><![CDATA[<ChromaSee /><hr> <p>Features:</p> <ul> <li>Drag over the scene horizontally to adjust circles blur</li> <li>Drag over the scene vertically to adjust circles size</li> </ul> <p>Standalone visualizer at <a href="https://see.chromatone.center/" target="_blank" rel="noreferrer">see.chromatone.center</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/chroma.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI Roll]]></title> <link>https://chromatone.center/practice/midi/roll/</link> <guid>https://chromatone.center/practice/midi/roll/</guid> <pubDate>Thu, 16 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Record all MIDI notes on an infinite roll]]></description> <content:encoded><![CDATA[<MidiRoll style="position: sticky; top: 0;" /><div class="info custom-block"><p class="custom-block-title">INFO</p> <ol> <li>Play some notes on your MIDI controller or computer keyboard and watch them appear on the endless roll.</li> <li>Drag or scroll over the canvas to change the roll speed.</li> <li>Press the <i class="p-3 mr-1 i-la-arrow-up"></i> (<i class="p-3 mr-1 i-la-arrow-left"></i> ) button to change the plot direction</li> <li>Press the <i class="p-3 mr-1 i-la-expand"></i> button to expand the app to full screen</li> <li>Press the <i class="p-3 mr-1 i-la-play"></i>/<i class="p-3 mr-1 i-la-pause"></i> icon or anywhere on the canvas to send MIDI play/pause signal to all connected MIDI devices</li> <li>Press the <i class="p-3 mr-1 i-la-stop"></i> icon or double click on the canvas to send MIDI play/pause signal to all connected MIDI devices</li> <li>Use MIDI channel filter to show only the desired channels.</li> </ol> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/midi-roll.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Spectrogram]]></title> <link>https://chromatone.center/practice/pitch/spectrogram/</link> <guid>https://chromatone.center/practice/pitch/spectrogram/</guid> <pubDate>Thu, 16 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Time-frequency audiovisual analysis tool]]></description> <content:encoded><![CDATA[<p>The colorful spectrogram is a powerful tool for visual audio analysis. Each particular frequency in the spectrum gets its own position on the vertical axis along with the corresponding Chromatone color. The pitch spectrum is continous and the graph shows all the partials in a rather high resolution. The colors of the lines help differentiate pitches and overtones in any incoming audio signal. 
The quality of analysis is based primarily on the quality of the signal – thus a good microphone is recommended for the best experience.</p> <h2 id="how-to-use-the-spectrogram" tabindex="-1">How to use the spectrogram <a class="header-anchor" href="#how-to-use-the-spectrogram" aria-label="Permalink to "How to use the spectrogram""></a></h2> <ol> <li>Drag across <i class="p-3 mr-1 i-la-hand-rock"></i> the spectrogram to change the roll speed. The actual setting is at the top-left corner.</li> <li>Press the <i class="p-3 mr-1 i-la-expand"></i> icon at the bottom-right corner to make the spectrogram go full screen. A very useful mode for deep explorations and teaching.</li> <li>You can pause <i class="p-3 mr-1 i-la-pause"></i> and resume <i class="p-3 mr-1 i-la-play"></i> the roll either by clicking anywhere on the spectrogram or by pressing the <code>Spacebar</code> button on your keyboard. Useful when comparing two or more sound spectrums – record a sound spectrum on the roll, then pause it, take another instrument and record another. The roll can fit up to 10 segments or even more.</li> <li>Clear the canvas with the <i class="p-3 mr-1 i-la-trash-alt"></i> button at the top-right corner or by pressing the <code>Enter</code> button on your keyboard.</li> <li>The CONTRAST control changes the power to which we raise the value of each band. It accentuates louder frequencies and dampens noise.</li> <li>The MIDPOINT control tunes the values before they are raised to that power. It can help adjust the brightness and sensitivity of the spectroscope.</li> <li>Right-click the small video to open the spectrogram in Picture-In-Picture mode. You may need to press play in the floating window to see the spectrogram running.</li> </ol> ]]></content:encoded> <enclosure url="https://chromatone.center/spectrogram.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[ZZFX]]></title> <link>https://chromatone.center/practice/synth/zzfx/</link> <guid>https://chromatone.center/practice/synth/zzfx/</guid> <pubDate>Thu, 16 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Zuper Zmall Zound Zynth]]></description> <content:encoded><![CDATA[<SynthZzfx/><h2 id="zzfx" tabindex="-1">ZzFX <a class="header-anchor" href="#zzfx" aria-label="Permalink to "ZzFX""></a></h2> <h3 id="a-tiny-javascript-sound-fx-system" tabindex="-1">A Tiny JavaScript Sound FX System <a class="header-anchor" href="#a-tiny-javascript-sound-fx-system" aria-label="Permalink to "A Tiny JavaScript Sound FX System""></a></h3> <p>ZzFX is a tiny sound generator designed to produce a wide variety of sound effects with minimal code overhead.
It's perfect for games, prototypes, and any web application that needs sound without the bulk of traditional sound files.</p> <p><a href="https://zzfx.3d2k.com" target="_blank" rel="noreferrer">https://zzfx.3d2k.com</a></p> <p><a href="https://github.com/KilledByAPixel/ZzFX" target="_blank" rel="noreferrer">https://github.com/KilledByAPixel/ZzFX</a></p> <h3 id="features" tabindex="-1">Features <a class="header-anchor" href="#features" aria-label="Permalink to "Features""></a></h3> <ul> <li>Compact: Less than 1 kilobyte when compressed!</li> <li>Versatile: 20 controllable parameters for diverse sound effects.</li> <li>No Dependencies: Standalone with no external libraries.</li> <li>Cross-Browser: Compatible with nearly all web browsers.</li> <li>Open Source: MIT licensed, use it anywhere!</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Chromagram]]></title> <link>https://chromatone.center/practice/chroma/gram/</link> <guid>https://chromatone.center/practice/chroma/gram/</guid> <pubDate>Sun, 12 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Visual representation of any audio chroma content]]></description> <content:encoded><![CDATA[<ChromaGram style="position: sticky; top: 0;" /><div class="info custom-block"><p class="custom-block-title">INFO</p> <p>This app shows the <a href="https://en.wikipedia.org/wiki/Chroma_feature" target="_blank" rel="noreferrer">chromagram</a> or the <a href="https://en.wikipedia.org/wiki/Harmonic_pitch_class_profiles" target="_blank" rel="noreferrer">12 tone harmonic pitch class profile</a> of the incoming signal. It means it analyzes the frequency spectrum and sums the spectral bins into the 12 pitch classes. The relative power of every pitch class is plotted, showing the primary tones of the audio signal. A pure sine tone will fill a single band, while noise will make all the bands glow equally bright.</p> <ol> <li><i class="p-3 mr-1 i-fluent-drag-24-regular"></i> Drag the canvas graph to change the roll speed.</li> <li>Press the <i class="p-3 i-la-arrow-up"></i> (<i class="p-3 i-la-arrow-left"></i>) button to change the plot direction</li> <li>Press the <i class="p-3 mr-1 i-la-expand"></i> button to expand the app to full screen</li> </ol> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/chromagram.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI Monitor]]></title> <link>https://chromatone.center/practice/midi/monitor/</link> <guid>https://chromatone.center/practice/midi/monitor/</guid> <pubDate>Sun, 12 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[See everything that's happening on your MIDI-bus right in the browser]]></description> <content:encoded><![CDATA[<client-only> <midi-monitor /> <midi-panel :to-channel="false" /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/monitor.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Pitch roll]]></title> <link>https://chromatone.center/practice/pitch/roll/</link> <guid>https://chromatone.center/practice/pitch/roll/</guid> <pubDate>Sun, 12 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Plot main pitch of any incoming audio on an endless roll]]></description> <content:encoded><![CDATA[<PitchRoll style="position: sticky; top: 0;" /><div class="info custom-block"><p class="custom-block-title">INFO</p> <p>This app listens to the incoming audio and analyzes it for the base pitch and tempo.
The note with the cents difference is show at the top left. Tempo is shown at the top right. The pitch is plotted on the vertical axis with colored circles while beats are drawn as vertical lines.</p> <ol> <li>Press <i class="p-3 mr-1 i-la-play"></i> to start plotting the audio parameters. Press <i class="p-3 mr-1 i-la-pause"></i> to pause the process. You can also do this by clicking the graph itself or by pressing the <strong>Spacebar</strong> key on your keyboard.</li> <li>Press <i class="p-3 mr-1 i-la-times"></i> to clear the graph. Double clicking the graph and pressing the <strong>Enter</strong> key will have the same effect</li> <li>Drag across the graph plain left or right to increase or decrease the speed of the plot head.</li> <li>If you see the <i class="p-3 mr-1 i-la-expand"></i> button at the bottom right, you browser is capable of stretching the graph to the full screen. Have fun!</li> </ol> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/roll.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI Router]]></title> <link>https://chromatone.center/practice/midi/router/</link> <guid>https://chromatone.center/practice/midi/router/</guid> <pubDate>Wed, 08 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Forward all MIDI messages from one device to another]]></description> <content:encoded><![CDATA[<client-only> <div id="screen"> <midi-router class="mb-20" /><midi-panel class="mb-4" /></div> </client-only> <p>Click on the desired output under the input you want to send signals from.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/midi-router.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Drone]]></title> <link>https://chromatone.center/practice/pitch/drone/</link> <guid>https://chromatone.center/practice/pitch/drone/</guid> <pubDate>Mon, 06 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Shruti box synth / tanpura online]]></description> <enclosure url="https://chromatone.center/drone.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Circle of fifths]]></title> <link>https://chromatone.center/practice/chord/fifths/</link> <guid>https://chromatone.center/practice/chord/fifths/</guid> <pubDate>Thu, 02 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[12 major chords organized in a sequence of perfect fifths along with their relative minors]]></description> <content:encoded><![CDATA[<div class="info custom-block"><p class="custom-block-title">INFO</p> <h2 id="the-double-circle-of-fifths-a-tool-to-explore-chords-in-tonal-space" tabindex="-1">The double circle of fifths. A tool to explore chords in tonal space <a class="header-anchor" href="#the-double-circle-of-fifths-a-tool-to-explore-chords-in-tonal-space" aria-label="Permalink to "The double circle of fifths. A tool to explore chords in tonal space""></a></h2> <p>The circle of fifths organizes pitches in a sequence of perfect fifths, generally shown as a circle with the pitches (and their corresponding keys) in a clockwise progression. Musicians and composers often use the circle of fifths to describe the musical relationships between pitches. Its design is helpful in composing and harmonizing melodies, building chords, and modulating to different keys within a composition.</p> <p>Moving counterclockwise, the pitches descend by a fifth, but ascending by a perfect fourth will lead to the same note an octave higher (therefore in the same pitch class). 
Moving counter-clockwise from C could be thought of as descending by a fifth to F, or ascending by a fourth to F.</p> <p>Here we have two circles of fifths rotated by a minor third interval. With this placement we get quite a useful tool. Considering each position of the circle as a scale, we instantly get two relative major and minor keys. They share the same notes, but start from different tonics and have opposite tonal qualities. Take C major and get A minor. Take C# minor and get E major at a glance.</p> <p>The highlighted sector of 90 degrees includes 6 points of the circle that represent 6 main degrees of the scale starting from the notes in the middle of the sector. You can choose either the major or the minor basic scale by pressing on the little circles outside or inside the rings. Choose the one on the outside and you'll get the scale degree numbers for the major scale. On the outer circle to the left of it you find the IV degree - the subdominant – the major chord starting from the note a fourth apart from the tonic. To the right lies the dominant V major chord. The inner circle shows the minor chords of the major scale – the ii, the vi and the iii degrees, shown in clockwise succession. The vii diminished chord of a major scale isn't shown in the main sector, but you can find a reminder of it one step to the right in the inner circle.</p> <p>So for the C major scale you'll get the C major chord as the tonic I degree, F major to the left as the IV subdominant degree, and the G major chord as the dominant V degree to the right of the tonic. The inner circle will show you the Dm chord for the ii degree, Am as the vi degree and the tonic of the relative minor scale, and Em as the iii degree of the C major scale. The next step of the inner circle shows Bm, and it can represent the vii degree – the leading tone and its diminished chord Bdim (Bb5).</p> <p>This scheme helps find interesting chord progressions either in one predefined key or traversing different keys in complex modulation movements. Neighbouring sectors include scales that are easy to borrow chords from and to travel to with simple moves like common-chord modulation. Once you find any other path from one key to another you can trace its form on the circle and transpose it to any key you're in or you want to move to.
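</p> <p>To make the geometry concrete, here is a minimal sketch (my own illustration, not the app's code) that builds the outer ring of major keys by repeatedly moving up a perfect fifth (7 semitones) and pairs each one with its relative minor a minor third (3 semitones) below:</p> <pre class="language-swift"><code>let pitchClasses = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

// Outer ring: major keys in ascending perfect fifths (7 semitones per step).
let majors = (0...11).map { pitchClasses[($0 * 7) % 12] }

// Inner ring: the relative minor of each major key lies a minor third below it.
let minors = majors.map { (major: String) -> String in
    let i = pitchClasses.firstIndex(of: major)!
    return pitchClasses[(i + 9) % 12] + "m"   // down 3 semitones == up 9 semitones (mod 12)
}

for (major, minor) in zip(majors, minors) {
    print("\(major) / \(minor)")   // C / Am, G / Em, D / Bm, A / F#m, ...
}
</code></pre> <p>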
The article about <a href="https://en.wikipedia.org/wiki/Secondary_chord" target="_blank" rel="noreferrer">secondary chords</a> is a good starting point in this exploration.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/screen.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Table]]></title> <link>https://chromatone.center/practice/pitch/table/</link> <guid>https://chromatone.center/practice/pitch/table/</guid> <pubDate>Thu, 02 Jun 2022 00:00:00 GMT</pubDate> <description><![CDATA[Every possible note in a huge expandable table]]></description> <content:encoded><![CDATA[<PitchTable />]]></content:encoded> <enclosure url="https://chromatone.center/table.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Euclidean rhythms]]></title> <link>https://chromatone.center/theory/rhythm/system/euclidean/</link> <guid>https://chromatone.center/theory/rhythm/system/euclidean/</guid> <pubDate>Wed, 11 May 2022 00:00:00 GMT</pubDate> <description><![CDATA[Mathematical algorithm to create well-formed rhythm patterns]]></description> <content:encoded><![CDATA[<p>‘Euclidean’ rhythms are one of the few ideas for algorithmically generating musical material that have gained relative popularity over the last few years. We say ‘relative’ because we really don’t know how many composers/electronic producers really have heard of them in the grand scheme of things, but if you do a Google search on this you’ll get far more hits than searching for, for example, music from L-systems. Many DAWs, iOS apps, MAX/MSP patches, etc., allow music makers to generate Euclidean rhythms.</p> <h2 id="what-are-euclidean-rhythms" tabindex="-1">What are Euclidean rhythms? <a class="header-anchor" href="#what-are-euclidean-rhythms" aria-label="Permalink to "What are Euclidean rhythms?""></a></h2> <p>So what does it mean, and why has it become popular? What does Euclid have to do with this? Euclidean rhythms are essentially a way of spacing out <strong>n</strong> events (let's call them ‘onsets’) across <strong>m</strong> positions (essentially, pulses or beats) as evenly as possible. If you space out, for example, 4 onsets across 16 positions, the result is just 4 evenly spaced onsets. But if the number of onsets is relatively prime with respect to the number of pulses, the resulting pattern is more interesting. The term ‘Euclidean rhythm’ comes from a paper by McGill University computer scientist Godfried T. Toussaint, and fortunately his paper is <a href="/media/pdf/banff.pdf">available online</a>.</p> <p>In the paper, Toussaint starts by explaining an algorithm by E. Bjorklund used in certain components of spallation neutron source (SNS) accelerators that require spacing out pulses across a certain number of time events as equidistantly as possible. After giving an example of how Bjorklund’s algorithm works, he shows that it has a parallel structure to Euclid’s algorithm from the Elements, which uses repeated subtraction to establish the <strong>greatest common divisor</strong> (GCD) of two numbers.</p> <h3 id="here-is-a-simple-example-of-bjorklund-s-algorithm-spacing-5-onsets-across-12-intervals-pulses" tabindex="-1">Here is a simple example of Bjorklund’s algorithm spacing 5 onsets across 12 intervals (pulses).
<a class="header-anchor" href="#here-is-a-simple-example-of-bjorklund-s-algorithm-spacing-5-onsets-across-12-intervals-pulses" aria-label="Permalink to "Here is a simple example of Bjorklund’s algorithm spacing 5 onsets across 12 intervals (pulses).""></a></h3> <p>We start with <strong>five onsets</strong> on the left (the ‘front’ group) and the <strong>seven non-onsets</strong> on the right (the ‘back’ group). We pair up one element from the front with one from the back until either the front or back group is exhausted. When we’re finished, the newly combined elements will form the front group at the next iteration, and whatever is left over will form the new back group (in this case the two non-onsets):</p> <img src="./images/1.gif" /> <p>We then repeat the process. Note that the combined elements from stage 1 are treated as single units here. At this stage, we start with five elements in the front and only two in the back. We combine them as before, but this time we will exhaust the back group first, with three elements left at the front — these will form the new back group, and the combined elements again are the new front group:</p> <img src="./images/2.gif" /> <p>We just recursively repeat these steps until the back group has one (or zero) elements. At the start of the third iteration, we have two elements in the front and three at the back, and after combining them, the back group has a single element and so we are finished.</p> <img src="./images/3.gif" /> <p>In conventional notation, the rhythm would look like this:</p> <img src="./images/4.png" /> <p>Euclidean rhythms are often understood as ‘rhythm necklaces’, that is, you could rotate this pattern so that each of the five onsets could be in the first position, and those five patterns would share the maximally equidistant property that this procedure generates. We could say that this algorithm generates five patterns (in this case) that belong to the same necklace.</p> <p>By the way, if you want to get a better sense of what these patterns sound like, I’ve included a CodePen below that will sound out these patterns for you (and generate all of their rotations).</p> <p>Here is an animation of 7 onsets in 16 pulses:</p> <img src="./images/5.gif" /> <p>And again the resultant rhythm in standard notation:</p> <img src="./images/6.png" /> <p>Toussaint could have called them Bjorklund rhythms, but Euclidean rhythm has a more timeless feel. As I will discuss later, there are other ways of deriving the same set of patterns.</p> <h2 id="euclidean-rhythm-in-code" tabindex="-1">Euclidean rhythm in code <a class="header-anchor" href="#euclidean-rhythm-in-code" aria-label="Permalink to "Euclidean rhythm in code""></a></h2> <p>This is fairly basic to code. As an input, we’ll use the number of onsets and the total number of pulses. To keep things simple we’ll have our end result be an array of 1s (onset) and 0s (no onset). Conceptually, we need to keep track of two groups: a front group and a back group. To start, the front group will be an array consisting of the onsets (each in their own array) and the back array will consist of one array for each non-onset pulse. Then recursively we create a new front array by concatenating pairs from the old front and back until we run out of elements in either the front or the back, and the new back array will be whatever elements are left over. We keep going until the back array has one (or zero) elements, then we flatten the resulting 2-D array.
Our Swift code looks something like this:</p> <div class="language-swift vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang">swift</span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">func</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> generateEuclidean</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">onsets</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">, </span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">pulses</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">) </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">-></span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> [</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">] {</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> let</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> front:[[</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">]] </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> Array</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">repeating</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: [</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">1</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">], </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">count</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: onsets)</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> let</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back:[[</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">]] </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> Array</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">repeating</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: [</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">], </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">count</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: pulses </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">-</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> onsets)</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> return</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> euclidRecursive</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span 
style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">front</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: front, </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">back</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: back).</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">flatMap</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> {</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">$0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}</span></span> <span class="line"></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">private</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> func</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0"> euclidRecursive</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> (</span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">front</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: [[</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">]], </span><span style="--shiki-light:#6F42C1;--shiki-dark:#B392F0">back</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: [[</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">]]) </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">-></span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> [[</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">]] {</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> var</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> var</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> front </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> front</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> </span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> guard</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">count</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> ></span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 1</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> else</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> { </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">return</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> front </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">+</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back }</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> </span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> var</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> newFront </span><span 
style="--shiki-light:#D73A49;--shiki-dark:#F97583">=</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> [[</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">Int</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">]]()</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> while</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> front.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">count</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> ></span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> &&</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">count</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> ></span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> 0</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> {</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> newFront.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">append</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(front.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">popLast</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">()</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">!</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> +</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back.</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">popLast</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">()</span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">!</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> }</span></span> <span class="line"><span style="--shiki-light:#D73A49;--shiki-dark:#F97583"> return</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF"> euclidRecursive</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">(</span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">front</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: newFront, </span><span style="--shiki-light:#005CC5;--shiki-dark:#79B8FF">back</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">: front </span><span style="--shiki-light:#D73A49;--shiki-dark:#F97583">+</span><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8"> back)</span></span> <span class="line"><span style="--shiki-light:#24292E;--shiki-dark:#E1E4E8">}</span></span></code></pre> </div><h2 id="an-alternate-approach-bresenham-s-line-algorithm" tabindex="-1">An alternate approach: Bresenham’s line algorithm <a class="header-anchor" href="#an-alternate-approach-bresenham-s-line-algorithm" aria-label="Permalink to "An alternate approach: Bresenham’s line algorithm""></a></h2> <p>As we mentioned earlier, there are other ways to get at this set of patterns. In rather quirky little book called <a href="https://www.amazon.co.jp/Creating-Rhythms-English-Stefan-Hollos-ebook/dp/B00IQWM8EA/ref=sr_1_1?ie=UTF8&qid=1505108854&sr=8-1&keywords=creating+rhythm" target="_blank" rel="noreferrer">Creating Rhythms by Stephan Hollis and J. 
Richard Hollis</a>, it is pointed out that Toussaint’s rhythms can also be derived using <a href="https://en.wikipedia.org/wiki/Christoffel_word" target="_blank" rel="noreferrer">Christoffel words</a>. A related but more direct approach is to use <a href="https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm" target="_blank" rel="noreferrer">Bresenham’s line algorithm</a>, which I will explain here.</p> <p>Basically Bresenham’s algorithm is used for drawing a line in a raster graphics environment. In raster graphics we have essentially a bitmap, i.e., a two-dimensional grid of discrete points or pixels. If we want to draw a straight diagonal line with a slope of 1 in a bitmap, we colour a pixel, then colour the pixel that is up and to its right, then the next pixel that is up and to the right of that one, and so on.</p> <img src="./images/7.png" /> <p>While this is straightforward when the rise evenly divides into the run, when they are relatively prime the situation is more interesting. Some of the pixels will need to be directly adjacent to each other, while some will need to move up and to the right. For example, if our line has a slope of 7/16, it will need to go up 7 times per 16 squares. What principle can we use to determine when we need to go up? To make the line appear straight, we will need to space out those rises as evenly as possible. It becomes essentially the same problem that Bjorklund solved. And we don’t even need a fancy algorithm for it, we can get it directly from the slope of the line. We could just take the floor (or the ceiling) of the y-value at each integer x-point and use that as our y-coordinate.</p> <img src="./8.png" /> <p>And to get our rhythm, we can just say that if the floor of the y-value at each x is the same as the previous value, then it is a continuation of the previous onset, and if it has a new y-value, it is the beginning of a new onset.</p> <img src="./images/9.png" /> <p>Here is the algorithm in Swift.
Not that the Bjorklund approach was difficult, but the Bresenham approach is incredibly simple to code.</p> <div class="language-swift vp-adaptive-theme"><span class="lang">swift</span><pre tabindex="0"><code>// Generate a Euclidean rhythm by sampling a line of slope onsets/pulses:
// a new integer floor value marks the start of a new onset (1),
// a repeated value marks a continuation of the previous onset (0).
func bresenhamEuclidean(onsets: Int, pulses: Int) -&gt; [Int] {
  let slope = Double(onsets) / Double(pulses)
  var result = [Int]()
  var previous: Int? = nil
  for i in 0..&lt;pulses {
    let current = Int(floor(Double(i) * slope))
    result.append(current != previous ? 1 : 0)
    previous = current
  }
  return result
}</code></pre></div> <h2 id="and-what-does-it-sound-like" tabindex="-1">And what does it sound like? <a class="header-anchor" href="#and-what-does-it-sound-like" aria-label="Permalink to "And what does it sound like?""></a></h2> <p>You can listen to any Euclidean pattern in the <a href="./../../../../practice/rhythm/circle/">Circular metronome app</a>, where the algorithm is used to set the default positions of the mutes along the beat cycle. Just mute some notes in a cycle and press the reset button. It will create the rhythm for you. There are also ‘rotate buttons’ that will rotate the pattern by taking the final onset and moving it to the start of the pattern (recall that although the algorithms discussed here generate a single pattern, they stand for a ‘rhythm necklace’ that includes all the rotations of that pattern that start with an onset).</p> <h2 id="a-musical-analysis" tabindex="-1">A musical analysis <a class="header-anchor" href="#a-musical-analysis" aria-label="Permalink to "A musical analysis""></a></h2> <p>What are some of the properties of these rhythms? Well, most importantly, the locations of the onsets are spaced out as evenly as possible. If the number of pulses is an integer multiple of the number of onsets, then the onsets will be completely even (e.g., four groups of four), and the result is entirely trivial. But when the numbers of onsets and pulses are relatively prime, the results will have some interesting characteristics. 
There are two properties that we can observe from this maximal spacing.</p> <p>First, if we look at the distance (in pulses) between adjacent onsets, we can see that each rhythm generated by these algorithms will have at most two possible distances between adjacent onsets, and that these distances differ by at most one pulse. In the first example (5 of 12), some of the onsets are three pulses apart and others are two pulses apart. In the 9 of 16 example, the distances are two and one.</p> <p>Whenever we have these two distances (i.e., when the two numbers are relatively prime) we will usually have a syncopated rhythm, because the two duration lengths in use will be interleaved as much as possible. That is, if we have some groups of two and some groups of three, they will alternate whenever they can, creating larger chunks, often with an odd number of pulses. So when the repetition of some rhythm pattern occurs (for example, the two instances of [x o o x o] in the 5 in 12 example), the first instance will start on the beat, and the second will start off the beat. This gives this set of rhythms a characteristic sound.</p> <p>So what is the big deal about these rhythms? Are they special? I would say that from a composer’s point of view, they aren’t anything to write home about. A composer/songwriter/improvising musician can easily come up with many rhythms that will resonate with a listener, many of which will not be Euclidean.</p> <p>It is true that these rhythms do occur frequently in world music (from Toussaint’s point of view, notably Afro-Cuban and sub-Saharan music). The maximally distributed property of these rhythms might somehow have some appeal to the human perceptual system. And rhythms where there is some repeating odd-length fragment are interesting, but there is nothing magical about them. They aren’t necessary, and I don’t think the average composer, discovering these rhythms, would think them worth adding to their bag of tricks.</p> <p>But from the point of view of someone generating a rhythm algorithmically, they are more significant. There are many, many ways to arbitrarily generate a rhythm of some length — and most of them sound, well, arbitrary.</p> <p>The point here is that patterns that form the set of Euclidean rhythms are highly reliable. Many Euclidean rhythms are not at all interesting (e.g., all of them where the onset number is a factor of the total pulse number), but they will almost never sound bad. 
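</p> <p>As a quick check of the Swift function listed above, here is a minimal usage sketch for the 5-in-12 case discussed in this analysis; the expected output is worked out by hand from the floor computation, not taken from the original article:</p> <div class="language-swift vp-adaptive-theme"><span class="lang">swift</span><pre tabindex="0"><code>// Minimal usage sketch of bresenhamEuclidean as defined above.
// E(5,12): five onsets distributed as evenly as possible over twelve pulses.
let pattern = bresenhamEuclidean(onsets: 5, pulses: 12)
print(pattern) // [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
// Read as [x o o x o][x o o x o][x o]: two "x o o x o" chunks,
// the first starting on the beat and the second off the beat.</code></pre></div> <p>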
For someone trying to quickly make something out of nothing on a music creation app, they clearly have some value.</p> <h2 id="relationship-to-mos-well-formed-rhythms" tabindex="-1">Relationship to MOS (Well-Formed) Rhythms <a class="header-anchor" href="#relationship-to-mos-well-formed-rhythms" aria-label="Permalink to "Relationship to MOS (Well-Formed) Rhythms""></a></h2> <p>Euclidean rhythms are a special case of MOS/well-formed rhythms, in which the generator is an integer division of the period, and the relationship between the large and small step sizes is a <a href="https://en.xen.wiki/w/Superparticular" target="_blank" rel="noreferrer">superparticular number</a>.</p> <p>Thus, Euclidean rhythms are the direct analogue of <a href="https://en.xen.wiki/w/Maximal_evenness" target="_blank" rel="noreferrer">maximally even MOS scales</a> in equal temperaments.</p> <h2 id="relationship-to-aksak" tabindex="-1">Relationship to aksak <a class="header-anchor" href="#relationship-to-aksak" aria-label="Permalink to "Relationship to aksak""></a></h2> <p>Aksak, a Turkic/Balkan rhythmic concept in which a meter (usually uneven) is divided into cells of two and three steps, can be represented by Euclidean rhythms of any metric length, given a suitable onset density. So not only are the majority of traditional aksak rhythms represented; Euclidean rhythms also offer a way to extend the concept even further.</p> <p>Euclidean rhythms consisting only of groups of two and three can be labeled aksak-compatible.</p> <ul> <li><a href="https://medium.com/code-music-noise/euclidean-rhythms-391d879494df" target="_blank" rel="noreferrer">Original article by Jeff Holtzkener</a></li> <li><a href="/media/pdf/banff.pdf">Euclidean rhythm by Godfried T. Toussaint</a> + <a href="https://habr.com/ru/post/278265/" target="_blank" rel="noreferrer">Russian translation</a></li> <li><a href="https://en.wikipedia.org/wiki/The_Geometry_of_Musical_Rhythm" target="_blank" rel="noreferrer">Geometry of Musical Rhythm</a></li> <li><a href="https://plus.maths.org/content/os/issue40/features/wardhaugh/index" target="_blank" rel="noreferrer">Music and Euclid's algorithm</a></li> <li><a href="https://www.lawtonhall.com/blog/euclidean-rhythms-pt1" target="_blank" rel="noreferrer">Maximum Evenness, Maximum Groove</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/8.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Ambient drone box]]></title> <link>https://chromatone.center/practice/generative/ambience/</link> <guid>https://chromatone.center/practice/generative/ambience/</guid> <pubDate>Thu, 28 Apr 2022 00:00:00 GMT</pubDate> <description><![CDATA[A generative instrument for creating meditative sound landscapes]]></description> <content:encoded><![CDATA[<client-only> <ambient-drone /> </client-only> <h3 id="work-in-progress" tabindex="-1">Work in progress <a class="header-anchor" href="#work-in-progress" aria-label="Permalink to "Work in progress""></a></h3> ]]></content:encoded> <enclosure url="https://chromatone.center/ambience.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Ambient drone box]]></title> <link>https://chromatone.center/practice/sequencing/ambience/</link> <guid>https://chromatone.center/practice/sequencing/ambience/</guid> <pubDate>Thu, 28 Apr 2022 00:00:00 GMT</pubDate> <description><![CDATA[A generative instrument for creating meditative sound landscapes]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/generative/ambience/" 
target="_blank" rel="noreferrer">https://chromatone.center/practice/generative/ambience/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/ambience.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI Log]]></title> <link>https://chromatone.center/practice/midi/log/</link> <guid>https://chromatone.center/practice/midi/log/</guid> <pubDate>Tue, 05 Apr 2022 00:00:00 GMT</pubDate> <description><![CDATA[Inspect all the messages going through the MIDI bus, online in the browser]]></description> <content:encoded><![CDATA[<client-only> <midi-log /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/log.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Tonal array]]></title> <link>https://chromatone.center/practice/chord/array/</link> <guid>https://chromatone.center/practice/chord/array/</guid> <pubDate>Tue, 01 Feb 2022 00:00:00 GMT</pubDate> <description><![CDATA[Harmonic table note layout - symmetrical hexagonal pattern of interval sequences]]></description> <content:encoded><![CDATA[<TonalArray/><p><a href="https://en.wikipedia.org/wiki/Harmonic_table_note_layout" target="_blank" rel="noreferrer">wiki</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/array.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Color]]></title> <link>https://chromatone.center/practice/color/</link> <guid>https://chromatone.center/practice/color/</guid> <pubDate>Tue, 28 Dec 2021 00:00:00 GMT</pubDate> <description><![CDATA[Tools to work with color models]]></description> <content:encoded><![CDATA[<p>Here anyone can explore the differenced between different color models with their own eyes. Starting from basic <a href="./rgb/">RGB</a> and <a href="./cmyk/">CMYK</a> we come to the more humane and pleasing cicular <a href="./hsl/">HSL</a> and 3D <a href="./lab/">LAB</a> color models.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/daniel-apodaca.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Sound]]></title> <link>https://chromatone.center/practice/sound/</link> <guid>https://chromatone.center/practice/sound/</guid> <pubDate>Sat, 18 Dec 2021 00:00:00 GMT</pubDate> <description><![CDATA[Exploration of sound and hearing]]></description> <content:encoded><![CDATA[<p>Experience the basic observations about sound physics and human sound perception phenomena yourself. You can build your own <a href="./loudness/">Equal loudness contour</a> with the tool to match any pitch and volume.</p> <p>You can imagine youself exploring basic music maths with Pythagoras and his <a href="./monochord/">Monochord</a> without going any where in space and time.</p> <p>Visualized <a href="./overtones/">String overtones</a> can teach about the shapes of an oscillating string or column of air gets and how it affects the sound. 
And then by superimposing the overtones of two complex sounds we can establish the <a href="./dissonance/">Sensory dissonance curve</a> that lies at the basis of all modern music harmony.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/dylann-hendricks.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Melody study]]></title> <link>https://chromatone.center/theory/melody/study/</link> <guid>https://chromatone.center/theory/melody/study/</guid> <pubDate>Wed, 01 Dec 2021 00:00:00 GMT</pubDate> <description><![CDATA[What is melody and how can we understand it]]></description> <content:encoded><![CDATA[<p>A <a href="https://en.wikipedia.org/wiki/Melody" target="_blank" rel="noreferrer">melody</a> (from Greek μελῳδία, melōidía, "singing, chanting"), also tune, voice or line, is a linear succession of musical tones that the listener perceives as a single entity. In its most literal sense, a melody is a combination of pitch and rhythm, while more figuratively, the term can include successions of other musical elements such as tonal color. It is the foreground to the background accompaniment. A line or part need not be a foreground melody.</p> <p>Melodies often consist of one or more musical phrases or motifs, and are usually repeated throughout a composition in various forms. Melodies may also be described by their melodic motion or the pitches or the intervals between pitches (predominantly conjunct or disjunct or with further restrictions), pitch range, tension and release, continuity and coherence, cadence, and shape.</p> <youtube-embed video="QpxN2VXPMLc" /><h2 id="function-and-elements" tabindex="-1">Function and elements <a class="header-anchor" href="#function-and-elements" aria-label="Permalink to "Function and elements""></a></h2> <p>Johann Philipp Kirnberger argued:</p> <blockquote> <p>The true goal of music—its proper enterprise—is melody. All the parts of harmony have as their ultimate purpose only beautiful melody. Therefore, the question of which is the more significant, melody or harmony, is futile. Beyond doubt, the means is subordinate to the end.<br> — Johann Philipp Kirnberger (1771)</p> </blockquote> <p>The Norwegian composer Marcus Paus has argued:</p> <blockquote> <p>Melody is to music what a scent is to the senses: it jogs our memory. It gives face to form, and identity and character to the process and proceedings. It is not only a musical subject, but a manifestation of the musically subjective. It carries and radiates personality with as much clarity and poignancy as harmony and rhythm combined. As such a powerful tool of communication, melody serves not only as protagonist in its own drama, but as messenger from the author to the audience.<br> — Marcus Paus (2017)</p> </blockquote> <p>Given the many and varied elements and styles of melody, "many extant explanations [of melody] confine us to specific stylistic models, and they are too exclusive." Paul Narveson claimed in 1984 that more than three-quarters of melodic topics had not been explored thoroughly.</p> <p>The melodies existing in most European music written before the 20th century, and popular music throughout the 20th century, featured "fixed and easily discernible frequency patterns", recurring "events, often periodic, at all structural levels" and "recurrence of durations and patterns of durations".</p> <p>Melodies in the 20th century "utilized a greater variety of pitch resources than ha[d] been the custom in any other historical period of Western music." 
While the diatonic scale was still used, the chromatic scale became "widely employed." Composers also allotted a structural role to "the qualitative dimensions" that previously had been "almost exclusively reserved for pitch and rhythm". Kliewer states, "The essential elements of any melody are duration, pitch, and quality (timbre), texture, and loudness. Though the same melody may be recognizable when played with a wide variety of timbres and dynamics, the latter may still be an 'element of linear ordering.'"</p> <youtube-embed video="WEnUuYKL3c8" /><h2 id="examples" tabindex="-1">Examples <a class="header-anchor" href="#examples" aria-label="Permalink to "Examples""></a></h2> <p>Different musical styles use melody in different ways. For example:</p> <ul> <li>Jazz musicians use the term "lead" or "head" to refer to the main melody, which is used as a starting point for improvisation.</li> <li>Rock music and other forms of popular music and folk music tend to pick one or two melodies (verse and chorus, sometimes with a third, contrasting melody known as a bridge or middle eight) and stick with them; much variety may occur in the phrasing and lyrics.</li> <li>Indian classical music relies heavily on melody and rhythm, and not so much on harmony, as the music contains no chord changes.</li> <li>Balinese gamelan music often uses complicated variations and alterations of a single melody played simultaneously, called heterophony.</li> <li>In Western classical music, composers often introduce an initial melody, or theme, and then create variations. Classical music often has several melodic layers, called polyphony, such as those in a fugue, a type of counterpoint. Often, melodies are constructed from motifs or short melodic fragments, such as the opening of Beethoven's Fifth Symphony. Richard Wagner popularized the concept of a leitmotif: a motif or melody associated with a certain idea, person or place.</li> <li>While in both most popular music and classical music of the common practice period pitch and duration are of primary importance in melodies, in the contemporary music of the 20th and 21st centuries pitch and duration have lessened in importance and quality has gained importance, often becoming primary. Examples include musique concrète, klangfarbenmelodie, Elliott Carter's Eight Etudes and a Fantasy (which contains a movement with only one note), the third movement of Ruth Crawford-Seeger's String Quartet 1931 (later re-orchestrated as Andante for string orchestra), which creates the melody from an unchanging set of pitches through "dissonant dynamics" alone, and György Ligeti's Aventures, in which recurring phonetics create the linear form.</li> </ul> <youtube-embed video="XLIrjjklq_s" /><h3 id="counterpoint" tabindex="-1">Counterpoint <a class="header-anchor" href="#counterpoint" aria-label="Permalink to "Counterpoint""></a></h3> <youtube-embed video="b5PoTBOj7Xc" /><h2 id="voice-leading" tabindex="-1">Voice leading <a class="header-anchor" href="#voice-leading" aria-label="Permalink to "Voice leading""></a></h2> <p>Voice leading (or part writing) is the linear progression of individual melodic lines (voices or parts) and their interaction with one another to create harmonies, typically in accordance with the principles of common-practice harmony and counterpoint.</p> <p>Rigorous concern for voice leading is of greatest importance in common-practice music, although jazz and pop music also demonstrate attention to voice leading to varying degrees. 
In Jazz Theory, Gabriel Sakuma writes that "[a]t the surface level, jazz voice-leading conventions seem more relaxed than they are in common-practice music." Marc Schonbrun also states that while it is untrue that "popular music has no voice leading in it, [...] the largest amount of popular music is simply conceived with chords as blocks of information, and melodies are layered on top of the chords."</p> <h3 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h3> <p>Voice leading developed as an independent concept when Heinrich Schenker stressed its importance in "free counterpoint", as opposed to strict counterpoint. He wrote:</p> <blockquote> <p>All musical technique is derived from two basic ingredients: voice leading and the progression of scale degrees [i.e. of harmonic roots]. Of the two, voice leading is the earlier and the more original element. The theory of voice leading is to be presented here as a discipline unified in itself; that is, I shall show how […] it everywhere maintains its inner unity.</p> </blockquote> <p>Schenker indeed did not present the rules of voice leading merely as contrapuntal rules, but showed how they are inseparable from the rules of harmony and how they form one of the most essential aspects of musical composition. (See Schenkerian analysis: voice leading.)</p> <h3 id="common-practice-conventions-and-pedagogy" tabindex="-1">Common-practice conventions and pedagogy <a class="header-anchor" href="#common-practice-conventions-and-pedagogy" aria-label="Permalink to "Common-practice conventions and pedagogy""></a></h3> <p>Western musicians have tended to teach voice leading by focusing on connecting adjacent harmonies because that skill is foundational to meeting larger, structural objectives. Common-practice conventions dictate that melodic lines should be smooth and independent. To be smooth, they should be primarily conjunct (stepwise), avoid leaps that are difficult to sing, approach and follow leaps with movement in the opposite direction, and correctly handle tendency tones (primarily the leading tone, but also scale degree 4, which often moves down to scale degree 3). To be independent, they should avoid parallel fifths and octaves.</p> <p>Contrapuntal conventions likewise consider permitted or forbidden melodic intervals in individual parts, intervals between parts, the direction of the movement of the voices with respect to each other, etc. Whether dealing with counterpoint or harmony, these conventions emerge not only from a desire to create easy-to-sing parts but also from the constraints of tonal materials and from the objectives behind writing certain textures.</p> <h3 id="these-conventions-are-discussed-in-more-detail-below" tabindex="-1">These conventions are discussed in more detail below. <a class="header-anchor" href="#these-conventions-are-discussed-in-more-detail-below" aria-label="Permalink to "These conventions are discussed in more detail below.""></a></h3> <ol> <li>Move each voice the shortest distance possible. One of the main conventions of common-practice part-writing is that, between successive harmonies, voices should avoid leaps and retain common tones as much as possible. This principle was commonly discussed among 17th- and 18th-century musicians as a rule of thumb. For example, Rameau taught "one cannot pass from one note to another but by that which is closest." 
In the 19th century, as music pedagogy became a more theoretical discipline in some parts of Europe, the 18th-century rule of thumb became codified into a stricter definition. Organist Johann August Dürrnberger coined the term "rule of the shortest way" for it and delineated that:</li> </ol> <ul> <li>When a chord contains one or more notes that will be reused in the chords immediately following, then these notes should remain, that is, be retained in the respective parts.</li> <li>The parts which do not remain follow the law of the shortest way (Gesetze des nächsten Weges), that is, that each such part names the note of the following chord closest to itself if no forbidden succession arises from this.</li> <li>If no note at all is present in a chord which can be reused in the chord immediately following, one must apply contrary motion according to the law of the shortest way, that is, if the root progresses upwards, the accompanying parts must move downwards, or inversely, if the root progresses downwards, the other parts move upwards and, in both cases, to the note of the following chord closest to them. This rule was taught by Bruckner to Schoenberg and Schenker, who both had followed his classes in Vienna. Schenker re-conceived the principle as the "rule of melodic fluency":</li> </ul> <blockquote> <p>If one wants to avoid the dangers produced by larger intervals [...], the best remedy is simply to interrupt the series of leaps – that is, to prevent a second leap from occurring by continuing with a second or an only slightly larger interval after the first leap; or one may change the direction of the second interval altogether; finally both means can be used in combination. Such procedures yield a kind of wave-like melodic line which as a whole represents an animated entity, and which, with its ascending and descending curves, appears balanced in all its individual component parts. This kind of line manifests what is called melodic fluency [Fließender Gesang].</p> </blockquote> <p>Schenker attributed the rule to Cherubini, but this is the result of a somewhat inexact German translation. Cherubini only said that conjunct movement should be preferred. Franz Stoepel, the German translator, used the expression Fließender Gesang to translate mouvement conjoint. The concept of Fließender Gesang is a common concept of German counterpoint theory. Modern Schenkerians made the concept of "melodic fluency" an important one in their teaching of voice leading.</p> <ol start="2"> <li> <p>Voice crossing should be avoided except to create melodic interest.</p> </li> <li> <p>Avoid parallel fifths and octaves. To promote voice independence, melodic lines should avoid parallel unisons, parallel fifths, and parallel octaves between any two voices. They should also avoid hidden consecutives, perfect intervals reached by any two voices moving in the same direction, even if not by the same interval, particularly if the higher of the two voices makes a disjunct motion. In organ registers, certain interval combinations and chords are activated by a single key so that playing a melody results in parallel voice leading. These voices, losing independence, are fused into one and the parallel chords are perceived as single tones with a new timbre. This effect is also used in orchestral arrangements; for instance, in Ravel's Boléro #5 the parallel parts of flutes, horn and celesta resemble the sound of an electric organ. 
In counterpoint, parallel voices are prohibited because they violate the homogeneity of musical texture when independent voices occasionally disappear, turning into a new timbre quality, and vice versa.</p> </li> </ol> <h3 id="harmonic-roles" tabindex="-1">Harmonic roles <a class="header-anchor" href="#harmonic-roles" aria-label="Permalink to "Harmonic roles""></a></h3> <p>As the Renaissance gave way to the Baroque era in the 1600s, part writing reflected the increasing stratification of harmonic roles. This differentiation between outer and inner voices was an outgrowth of both tonality and homophony. In this new Baroque style, the outer voices took a commanding role in determining the flow of the music and tended to move more often by leaps. Inner voices tended to move stepwise or repeat common tones.</p> <p>A Schenkerian analysis perspective on these roles shifts the discussion somewhat from "outer and inner voices" to "upper and bass voices." Although the outer voices still play the dominant, form-defining role in this view, the leading soprano voice is often seen as a composite line that draws on the voice leadings in each of the upper voices of the imaginary continuo. Approaching harmony from a non-Schenkerian perspective, Dmitri Tymoczko nonetheless also demonstrates such "3+1" voice leading, where "three voices articulate a strongly crossing-free voice leading between complete triads [...], while a fourth voice adds doublings," as a feature of tonal writing.</p> <p>Neo-Riemannian theory examines another facet of this principle. That theory decomposes movements from one chord to another into one or several "parsimonious movements" between pitch classes instead of actual pitches (i.e., neglecting octave shifts). Such analysis shows the deeper continuity underneath surface disjunctions, as in the Bach example from BWV 941.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/chandler-cruttenden.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Motif]]></title> <link>https://chromatone.center/theory/melody/motif/</link> <guid>https://chromatone.center/theory/melody/motif/</guid> <pubDate>Sat, 27 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Constructing a music piece through repetition and change of ideas]]></description> <content:encoded><![CDATA[<p>In music, a motif is a short musical idea, a salient recurring figure, musical fragment or succession of notes that has some special importance in or is characteristic of a composition. The motif is the smallest structural unit possessing thematic identity.</p> <iframe title="vimeo-player" src="https://player.vimeo.com/video/112208320?h=c4e78c0fa8" width="640" height="360" frameborder="0" allowfullscreen></iframe> <h2 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h2> <p>The Encyclopédie de la Pléiade regards it as a "melodic, rhythmic, or harmonic cell", whereas the 1958 Encyclopédie Fasquelle maintains that it may contain one or more cells, though it remains the smallest analyzable element or phrase within a subject. It is commonly regarded as the shortest subdivision of a theme or phrase that still maintains its identity as a musical idea. "The smallest structural unit possessing thematic identity". 
Grove and Larousse also agree that the motif may have harmonic, melodic and/or rhythmic aspects, Grove adding that it "is most often thought of in melodic terms, and it is this aspect of the motif that is connoted by the term 'figure'."</p> <p>A harmonic motif is a series of chords defined in the abstract, that is, without reference to melody or rhythm. A melodic motif is a melodic formula, established without reference to intervals. A rhythmic motif is the term designating a characteristic rhythmic formula, an abstraction drawn from the rhythmic values of a melody.</p> <p>A motif thematically associated with a person, place, or idea is called a leitmotif. Occasionally such a motif is a musical cryptogram of the name involved. A head-motif (German: Kopfmotiv) is a musical idea at the opening of a set of movements which serves to unite those movements.</p> <p>Scruton, however, suggests that a motif is distinguished from a figure in that a motif is foreground while a figure is background: "A figure resembles a moulding in architecture: it is 'open at both ends', so as to be endlessly repeatable. In hearing a phrase as a figure, rather than a motif, we are at the same time placing it in the background, even if it is...strong and melodious".</p> <p>Any motif may be used to construct complete melodies, themes and pieces. Musical development uses a distinct musical figure that is subsequently altered, repeated, or sequenced throughout a piece or section of a piece of music, guaranteeing its unity.</p> <youtube-embed video="J0ib2EKHofc" /><h2 id="examples" tabindex="-1">Examples <a class="header-anchor" href="#examples" aria-label="Permalink to "Examples""></a></h2> <p>Such motivic development has its roots in the keyboard sonatas of Domenico Scarlatti and the sonata form of Haydn and Mozart's age. Arguably Beethoven achieved the highest elaboration of this technique; the famous "fate motif" —the pattern of three short notes followed by one long one—that opens his Fifth Symphony and reappears throughout the work in surprising and refreshing permutations is a classic example.</p> <p>Motivic saturation is the "immersion of a musical motif in a composition", i.e., keeping motifs and themes below the surface or playing with their identity, and has been used by composers including Miriam Gideon, as in "Night is my Sister" (1952) and "Fantasy on a Javanese Motif" (1958), and Donald Erb. The use of motifs is discussed in Adolph Weiss' "The Lyceum of Schönberg".</p> <h2 id="definitions" tabindex="-1">Definitions <a class="header-anchor" href="#definitions" aria-label="Permalink to "Definitions""></a></h2> <p>Hugo Riemann defines a motif as, "the concrete content of a rhythmically basic time-unit."</p> <p>Anton Webern defines a motif as, "the smallest independent particle in a musical idea", which are recognizable through their repetition.</p> <p>Arnold Schoenberg defines a motif as, "a unit which contains one or more features of interval and rhythm [whose] presence is maintained in constant use throughout a piece".</p> <h2 id="head-motif" tabindex="-1">Head-motif <a class="header-anchor" href="#head-motif" aria-label="Permalink to "Head-motif""></a></h2> <p>Head-motif (German: Kopfmotiv) refers to an opening musical idea of a set of movements which serves to unite those movements. 
It may also be called a motto, and is a frequent device in cyclic masses.</p> <youtube-embed video="s5SkqX8vo2s" />]]></content:encoded> </item> <item> <title><![CDATA[Motion]]></title> <link>https://chromatone.center/theory/melody/motion/</link> <guid>https://chromatone.center/theory/melody/motion/</guid> <pubDate>Wed, 24 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Melodic motion of the voice]]></description> <content:encoded><![CDATA[<youtube-embed video="DU0ZuBccJ2o" /><youtube-embed video="Vuk7WQ4xvQs" /><p><a href="https://en.wikipedia.org/wiki/Melodic_motion" target="_blank" rel="noreferrer">Melodic motion</a> is the quality of movement of a melody, including nearness or farness of successive pitches or notes in a melody. This may be described as conjunct or disjunct, stepwise, skipwise or no movement, respectively. See also contrapuntal motion. In a conjunct melodic motion, the melodic phrase moves in a stepwise fashion; that is the subsequent notes move up or down a semitone or tone, but no greater. In a disjunct melodic motion, the melodic phrase leaps upwards or downwards; this movement is greater than a whole tone. In popular Western music, a melodic leap of disjunct motion is often present in the chorus of a song, to distinguish it from the verses and captivate the audience.</p> <p>Bruno Nettl describes various types of melodic movement or contour (Nettl 1956, 51–53):</p> <ul> <li>Ascending: Upwards melodic movement</li> <li>Descending: Downwards melodic movement (prevalent in the New World and Australian music)</li> <li>Undulating: Equal movement in both directions, using approximately the same intervals for ascent and descent (prevalent in Old World culture music)</li> <li>Pendulum: Extreme undulation that covers a large range and uses large intervals is called pendulum-type melodic movement</li> <li>Tile, terrace, or cascading: a number of descending phrases in which each phrase begins on a higher pitch than the last ended (prevalent in the North American Plain Indians music)</li> <li>Arc: The melody rises and falls in roughly equal amounts, the curve ascending gradually to a climax and then dropping off (prevalent among Navaho Indians and North American Indian music)</li> <li>Rise: may be considered a musical form, a contrasting section of higher pitch, a "musical plateau".</li> </ul> <p>Other examples include:</p> <ul> <li>Double tonic: smaller pendular motion in one direction</li> </ul> <p>These all may be modal frames or parts of modal frames.</p> <h2 id="modal-frame" tabindex="-1">Modal frame <a class="header-anchor" href="#modal-frame" aria-label="Permalink to "Modal frame""></a></h2> <p>A <a href="https://en.wikipedia.org/wiki/Modal_frame" target="_blank" rel="noreferrer">modal frame</a> in music is "a number of types permeating and unifying African, European, and American song" and melody. It may also be called a melodic mode. "Mode" and "frame" are used interchangeably in this context without reference to scalar or rhythmic modes. Melodic modes define and generate melodies that are not determined by harmony, but purely by melody. 
A note frame is a melodic mode that is atonic (without a tonic), or has an unstable tonic.</p> <p>Modal frames may be defined by their:</p> <ul> <li>floor note: the bottom of the frame, felt to be the lowest note, though isolated notes may go lower,</li> <li>ceiling note: the top of the frame,</li> <li>central note: the center around which other notes cluster or gravitate,</li> <li>upper or lower focus: portion of the mode on which the melody temporarily dwells, and can also be defined by melody types, such as: <ul> <li>chant tunes: (Bob Dylan's "Subterranean Homesick Blues")</li> <li>axial tunes: ("A Hard Day's Night", "Peggy Sue", Marvin Gaye's "Can I Get A Witness", and Roy Milton's "Do the Hucklebuck")</li> <li>oscillating: (Rolling Stones' "Jumpin' Jack Flash")</li> <li>open/closed: (Bo Diddley's "Hey Bo Diddley")</li> <li>terrace</li> <li>shout-and-fall</li> <li>ladder of thirds</li> </ul> </li> </ul> <p>Further defined features include:</p> <ul> <li>melodic dissonance: the quality of a note that is modally unstable and attracted to other more important tones in a non-harmonic way</li> <li>melodic triad: arpeggiated triads in a melody. A non-harmonic arpeggio is most commonly a melodic triad; it is an arpeggio whose notes do not appear in the harmony of the accompaniment.</li> <li>level: a temporary modal frame contrasted with another built on a different foundation note. A change in levels is called a shift.</li> <li>co-tonic: a melodic tonic different from and as important as the harmonic tonic</li> <li>secondary tonic: a melodic tonic different from but subordinate to the harmonic tonic</li> <li>pendular third: alternating notes a third apart, most often a neutral third; see double tonic</li> </ul> <h3 id="shout-and-fall" tabindex="-1">Shout-and-fall <a class="header-anchor" href="#shout-and-fall" aria-label="Permalink to "Shout-and-fall""></a></h3> <p>Shout-and-fall or tumbling strain is a modal frame, "very common in Afro-American-derived styles" and featured in songs such as "Shake, Rattle and Roll" and "My Generation".</p> <p>"Gesturally, it suggests 'affective outpouring', 'self-offering of the body', 'emptying and relaxation'." The frame may be thought of as a deep structure common to the varied surface structures of songs in which it occurs.</p> <h3 id="ladder-of-thirds" tabindex="-1">Ladder of thirds <a class="header-anchor" href="#ladder-of-thirds" aria-label="Permalink to "Ladder of thirds""></a></h3> <p>A ladder of thirds (coined by van der Merwe 1989, adapted from Curt Sachs) is similar to the circle of fifths, though a ladder of thirds differs in being composed of thirds, major or minor, and may or may not circle back to its starting note and thus may or may not be an interval cycle.</p> <p>Triadic chords may be considered as part of a ladder of thirds.</p> <p>It is a modal frame found in Blues and British folk music. Though a pentatonic scale is often analyzed as a portion of the circle of fifths, the blues scale and melodies in that scale come "into being through piling up thirds below and/or above a tonic or central note."</p> <p>They are "commonplace in post-rock 'n' roll popular music – and also appear in earlier tunes". 
Examples include The Beatles' "A Hard Day's Night", Buddy Holly's "Peggy Sue" and The Who's "My Generation", Ben Harney's "You've Been A Good Old Wagon" (1895) and Ben Bernie et al.'s "Sweet Georgia Brown" (1925).</p> <h2 id="melodic-expectation" tabindex="-1">Melodic expectation <a class="header-anchor" href="#melodic-expectation" aria-label="Permalink to "Melodic expectation""></a></h2> <p>In music cognition and musical analysis, the study of melodic expectation considers the engagement of the brain's predictive mechanisms in response to music. For example, if the ascending musical partial octave "do-re-mi-fa-sol-la-ti-..." is heard, listeners familiar with Western music will have a strong expectation to hear or provide one more note, "do", to complete the octave.</p> <p>The notion of melodic expectation has prompted the existence of a corpus of studies in which authors often choose to provide their own terminology in place of using the literature's. This results in a large number of different terms that all point towards the phenomenon of musical expectation:</p> <ul> <li>Anticipation</li> <li>Arousal</li> <li>Deduction</li> <li>Directionality</li> <li>Expectancy, expectation, expectedness, and in French attente</li> <li>Facilitation</li> <li>Implication / realization</li> <li>Implication (independent from realization)</li> <li>Induction</li> <li>Inertia</li> <li>Musical force(s)</li> <li>Previsibility, predictability and prediction</li> <li>Resolution</li> <li>Tension / release, tension / relaxation</li> <li>Closure, which may be used as the ending of the expectation process, as a group boundary, or as both simultaneously</li> </ul> <p>Leonard Meyer's <strong>Emotion and Meaning in Music</strong> is the classic text in music expectation. Meyer's starting point is the belief that the experience of music (as a listener) is derived from one's emotions and feelings about the music, which themselves are a function of relationships within the music itself. Meyer writes that listeners bring with them a vast body of musical experience that, as one listens to a piece, conditions one's response to that piece as it unfolds. 
Meyer argued that music's evocative power derives from its capacity to generate, suspend, prolong, or violate these expectations.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Singing]]></title> <link>https://chromatone.center/theory/melody/singing/</link> <guid>https://chromatone.center/theory/melody/singing/</guid> <pubDate>Sat, 20 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Connecting melody and text through a human voice]]></description> <content:encoded><![CDATA[<h2 id="gregorian-chant" tabindex="-1">Gregorian chant <a class="header-anchor" href="#gregorian-chant" aria-label="Permalink to "Gregorian chant""></a></h2> <youtube-embed video="H3v9unphfi0" /><h2 id="melismatic-singing" tabindex="-1">Melismatic singing <a class="header-anchor" href="#melismatic-singing" aria-label="Permalink to "Melismatic singing""></a></h2> <youtube-embed video="PRS2grauL4I" /><youtube-embed video="U8iJ6SCH6rU" /><h2 id="riffs-and-runs" tabindex="-1">Riffs and runs <a class="header-anchor" href="#riffs-and-runs" aria-label="Permalink to "Riffs and runs""></a></h2> <youtube-embed video="EpLdMIA9QzQ" /><youtube-embed video="1V25bEVuulk" /><youtube-embed video="kkKuecXa5RQ" /><h3 id="vocal-techniques" tabindex="-1">Vocal techniques <a class="header-anchor" href="#vocal-techniques" aria-label="Permalink to "Vocal techniques""></a></h3> <youtube-embed video="GC-tQl9HWp4" /><youtube-embed video="3-UgkKoOcAI" /><youtube-embed video="Vc54taQsLxA" />]]></content:encoded> </item> <item> <title><![CDATA[Rhythm]]></title> <link>https://chromatone.center/practice/rhythm/</link> <guid>https://chromatone.center/practice/rhythm/</guid> <pubDate>Mon, 15 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Delicate patterns of beat and groove]]></description> <content:encoded><![CDATA[<p>Explore beat loops with the <a href="./bars/">Horizontal bars</a> or <a href="./circle/">Circular</a> metronomes.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/abimael-ahumada.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Generative theory of tonal music]]></title> <link>https://chromatone.center/theory/composition/generative/</link> <guid>https://chromatone.center/theory/composition/generative/</guid> <pubDate>Sat, 13 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Formal description of the musical intuitions of a listener who is experienced in a musical idiom]]></description> <content:encoded><![CDATA[<p>A <a href="https://en.wikipedia.org/wiki/Generative_theory_of_tonal_music" target="_blank" rel="noreferrer">generative theory of tonal music</a> (GTTM) is a theory of music conceived by American composer and music theorist Fred Lerdahl and American linguist Ray Jackendoff and presented in the 1983 book of the same title. 
It constitutes a "formal description of the musical intuitions of a listener who is experienced in a musical idiom" with the aim of illuminating the unique human capacity for musical understanding.</p> <p>The musical collaboration between Lerdahl and Jackendoff was inspired by Leonard Bernstein's 1973 Charles Eliot Norton Lectures at Harvard University, wherein he called for researchers to uncover a musical grammar that could explain the human musical mind in a scientific manner comparable to Noam Chomsky's revolutionary transformational or generative grammar.</p> <p>Unlike the major methodologies of music analysis that preceded it, GTTM construes the mental procedures under which the listener constructs an unconscious understanding of music, and uses these tools to illuminate the structure of individual compositions. The theory has been influential, spurring further work by its authors and other researchers in the fields of music theory, music cognition and cognitive musicology.</p> <youtube-embed video="ra8TGtzZYo8" /><h2 id="theory" tabindex="-1">Theory <a class="header-anchor" href="#theory" aria-label="Permalink to "Theory""></a></h2> <p>GTTM focuses on four hierarchical systems that shape our musical intuitions. Each of these systems is expressed in a strict hierarchical structure where dominant regions contain smaller subordinate elements and equal elements exist contiguously within a particular and explicit hierarchical level. In GTTM any level can be small-scale or large-scale depending on the size of its elements.</p> <h3 id="structures" tabindex="-1">Structures <a class="header-anchor" href="#structures" aria-label="Permalink to "Structures""></a></h3> <h4 id="i-grouping-structure" tabindex="-1">I. Grouping structure <a class="header-anchor" href="#i-grouping-structure" aria-label="Permalink to "I. Grouping structure""></a></h4> <p>GTTM considers grouping analysis to be the most basic component of musical understanding. It expresses a hierarchical segmentation of a piece into motives, phrases, periods, and still larger sections.</p> <h4 id="ii-metrical-structure" tabindex="-1">II. Metrical structure <a class="header-anchor" href="#ii-metrical-structure" aria-label="Permalink to "II. Metrical structure""></a></h4> <p>Metrical structure expresses the intuition that the events of a piece are related to a regular alternation of strong and weak beats at a number of hierarchical levels. It is a crucial basis for all the structures and reductions of GTTM.</p> <h3 id="iii-time-span-reduction" tabindex="-1">III. Time-span reduction <a class="header-anchor" href="#iii-time-span-reduction" aria-label="Permalink to "III. Time-span reduction""></a></h3> <p>Time-span reductions (TSRs) are based on information gleaned from metrical and grouping structures. They establish tree structure-style hierarchical organizations uniting time-spans at all temporal levels of a work. The TSR analysis begins at the smallest levels, where metrical structure marks off the music into beats of equal length (or more precisely into attack points separated by uniform time-spans) and moves through all larger levels where grouping structure divides the music into motives, phrases, periods, theme groups, and still greater divisions. It further specifies a “head” (or most structurally important event) for each time-span at all hierarchical levels of the analysis. A completed TSR analysis is often called a time-span tree.</p> <h3 id="iv-prolongational-reduction" tabindex="-1">IV. 
Prolongational reduction <a class="header-anchor" href="#iv-prolongational-reduction" aria-label="Permalink to "IV. Prolongational reduction""></a></h3> <p>Prolongational reduction (PR) provides our "psychological" awareness of tensing and relaxing patterns in a given piece with precise structural terms. In time-span reduction, the hierarchy of less and more important events is established according to rhythmic stability. In prolongational reduction, hierarchy is concerned with relative stability expressed in terms of continuity and progression, the movement toward tension or relaxation, and the degree of closure or non-closure. A PR analysis also produces a tree-structure style hierarchical analysis, but this information is often conveyed in a visually condensed modified "slur" notation.</p> <p>The need for prolongational reduction mainly arises from two limitations of time-span reductions. The first is that time-span reduction fails to express the sense of continuity produced by harmonic rhythm. The second is that time-span reduction—even though it establishes that particular pitch-events are heard in relation to a particular beat, within a particular group—fails to say anything about how music flows across these segments.</p> <h3 id="more-on-tsr-vs-pr" tabindex="-1">More on TSR vs PR <a class="header-anchor" href="#more-on-tsr-vs-pr" aria-label="Permalink to "More on TSR vs PR""></a></h3> <p>It is helpful to note some basic differences between a time-span tree produced by TSR and a prolongational tree produced by PR. First, though the basic branching divisions produced by the two trees are often the same or similar at high structural levels, branching variations between the two trees often occur as one travels further down towards the musical surface.</p> <p>A second and equally important differentiation is that a prolongational tree carries three types of branching: strong prolongation (represented by an open node at the branching point), weak prolongation (a filled node at the branching point) and progression (simple branching, with no node). Time-span trees do not make this distinction. All time-span tree branches are simple branches without nodes (though time-span tree branches are often annotated with other helpful comments).</p> <youtube-embed video="qWreUHbws9g" /><h2 id="rules" tabindex="-1">Rules <a class="header-anchor" href="#rules" aria-label="Permalink to "Rules""></a></h2> <p>Each of the four major hierarchical organizations (grouping structure, metrical structure, time-span reduction and prolongational reduction) is established through rules, which are in three categories:</p> <ol> <li>The well-formedness rules, which specify possible structural descriptions.</li> <li>The preference rules, which draw on possible structural descriptions eliciting those descriptions that correspond to experienced listeners’ hearings of any particular piece.</li> <li>The transformational rules, which provide a means of associating distorted structures with well-formed descriptions.</li> </ol> <h3 id="i-grouping-structure-rules" tabindex="-1">I. Grouping structure rules <a class="header-anchor" href="#i-grouping-structure-rules" aria-label="Permalink to "I. 
Grouping structure rules""></a></h3> <h4 id="grouping-well-formedness-rules-g-wfrs" tabindex="-1">Grouping well-formedness rules (G~WFRs) <a class="header-anchor" href="#grouping-well-formedness-rules-g-wfrs" aria-label="Permalink to "Grouping well-formedness rules (G~WFRs)""></a></h4> <ol> <li>"Any contiguous sequence of pitch-events, drum beats, or the like can constitute a group, and only contiguous sequences can constitute a group."</li> <li>"A piece constitutes a group."</li> <li>"A group may contain smaller groups."</li> <li>"If a group G1 contains part of a group G2, it must contain all of G2."</li> <li>'If a group G1 contains a smaller group G2, then G1 must be exhaustively partitioned into smaller groups."</li> </ol> <h4 id="grouping-preference-rules-g-prs" tabindex="-1">Grouping preference rules (G~PRs) <a class="header-anchor" href="#grouping-preference-rules-g-prs" aria-label="Permalink to "Grouping preference rules (G~PRs)""></a></h4> <ol> <li>"Avoid analyses with very small groups – the smaller, the less preferable."</li> <li><strong>Proximity:</strong> Consider a sequence of four notes, n1–n4, the transition n2–n3 may be heard as a group boundary if: a.(slur/rest) the interval of time from the end of n2 is greater than that from the end of n1 to the beginning of n2 and that from the end of n3 to the beginning of n4 or if b.(attack/point) the interval of time between the attack points of n2 and n3 is greater than between those of n1 and n2 and between those of n3 and n4.</li> <li><strong>Change:</strong> Consider a sequence of four notes, n1–n4. The transition n2–n3 may be heard as a group boundary if marked by a. register, b. dynamics, c. articulation, or d. length.</li> <li><strong>Intensification:</strong> A larger-level group may be placed where the effects picked out by GPRs 2 and 3 are more pronounced.</li> <li><strong>Symmetry:</strong> "Prefer grouping analyses that most closely approach the ideal subdivision of groups into two parts of equal length."</li> <li><strong>Parallelism:</strong> "Where two or more segments of music can be construed as parallel, they preferably form parallel parts of groups."</li> <li><strong>Time-span and prolongational stability:</strong> "Prefer a grouping structure that results in more stable time-span and/or prolongational reductions."</li> </ol> <h4 id="transformational-grouping-rules" tabindex="-1">Transformational grouping rules <a class="header-anchor" href="#transformational-grouping-rules" aria-label="Permalink to "Transformational grouping rules""></a></h4> <ol> <li>Grouping overlap (p. 60).</li> <li>Grouping elision (p. 61).</li> </ol> <h3 id="ii-metrical-structure-rules" tabindex="-1">II. Metrical structure rules <a class="header-anchor" href="#ii-metrical-structure-rules" aria-label="Permalink to "II. 
Metrical structure rules""></a></h3> <h4 id="metrical-well-formedness-rules-m-wfrs" tabindex="-1">Metrical well-formedness rules (M~WFRs) <a class="header-anchor" href="#metrical-well-formedness-rules-m-wfrs" aria-label="Permalink to "Metrical well-formedness rules (M~WFRs)""></a></h4> <ol> <li>"Every attack point must be associated with a beat at the smallest metrical level present at that point in the piece."</li> <li>"Every beat at a given level must also be a beat at all smaller levels present at that point in that piece."</li> <li>"At each metrical level, strong beats are spaced either two or three beats apart."</li> <li>"The tactus and immediately larger metrical levels must consist of beats equally spaced throughout the piece. At subtactus metrical levels, weak beats must be equally spaced between the surrounding strong beats."</li> </ol> <h4 id="metrical-preference-rules-m-prs" tabindex="-1">Metrical preference rules (M~PRs) <a class="header-anchor" href="#metrical-preference-rules-m-prs" aria-label="Permalink to "Metrical preference rules (M~PRs)""></a></h4> <ol> <li><strong>Parallelism:</strong> "Where two or more groups or parts of groups can be construed as parallel, they preferably receive parallel metrical structure."</li> <li><strong>Strong beat early:</strong> "Weakly prefer a metrical structure in which the strongest beat in a group appears relatively early in the group."</li> <li><strong>Event:</strong> "Prefer a metrical structure in which beats of level Li that coincide with the inception of pitch-events are strong beats of Li."</li> <li><strong>Stress:</strong> "Prefer a metrical structure in which beats of level Li that are stressed are strong beats of Li."</li> <li><strong>Length:</strong> Prefer a metrical structure in which a relatively strong beat occurs at the inception of either relatively long: a. pitch-event; b. duration of a dynamic; c. slur; d. pattern of articulation; e. duration of a pitch in the relevant levels of the time-span reduction; f. duration of a harmony in the relevant levels of the time-span reduction (harmonic rhythm).</li> <li><strong>Bass:</strong> "Prefer a metrically stable bass."</li> <li><strong>Cadence:</strong> "Strongly prefer a metrical structure in which cadences are metrically stable; that is, strongly avoid violations of local preference rules within cadences."</li> <li><strong>Suspension:</strong> "Strongly prefer a metrical structure in which a suspension is on a stronger beat than its resolution."</li> <li><strong>Time-span interaction:</strong> "Prefer a metrical analysis that minimizes conflict in the time-span reduction."</li> <li><strong>Binary regularity:</strong> "Prefer metrical structures in which at each level every other beat is strong."</li> </ol> <h3 id="transformational-metrical-rule" tabindex="-1">Transformational metrical rule <a class="header-anchor" href="#transformational-metrical-rule" aria-label="Permalink to "Transformational metrical rule""></a></h3> <ol> <li>Metrical deletion (p. 101).</li> </ol> <h3 id="iii-time-span-reduction-rules" tabindex="-1">III. Time-span reduction rules <a class="header-anchor" href="#iii-time-span-reduction-rules" aria-label="Permalink to "III. 
Time-span reduction rules""></a></h3> <p>Time-span reduction rules begin with two segmentation rules and proceed to the standard WFRs, PRs and TRs.</p> <h4 id="time-span-segmentation-rules" tabindex="-1">Time-span segmentation rules <a class="header-anchor" href="#time-span-segmentation-rules" aria-label="Permalink to "Time-span segmentation rules""></a></h4> <ol> <li>"Every group in a piece is a time-span in the time-span segmentation of the piece."</li> <li>"In underlying grouping structure: a. each beat B of the smallest metrical level determines a time-span TB extending from B up to but not including the next beat of the smallest level; b. each beat B of metrical level Li determines a regular time-span of all beats of level Li-1 from B up to but not including (i) the next beat B’ of level Li or (ii) a group boundary, whichever comes sooner; and c. if a group boundary G intervenes between B and the preceding beat of the same level, B determines an augmented time-span T’B, which is the interval from G to the end of the regular time-span TB."</li> </ol> <h4 id="time-span-reduction-well-formedness-rules-tsr-wfrs" tabindex="-1">Time-span reduction well-formedness rules (TSR~WFRs) <a class="header-anchor" href="#time-span-reduction-well-formedness-rules-tsr-wfrs" aria-label="Permalink to "Time-span reduction well-formedness rules (TSR~WFRs)""></a></h4> <ol> <li>"For every time-span T there is an event e (or a sequence of events e1 – e2) that is the head of T."</li> <li>"If T does not contain any other time-span (that is, if T is the smallest level of time-spans), there e is whatever event occurs in T."</li> <li>If T contains other time-spans, let T1,...,Tn be the (regular or augmented) time-spans immediately contained in T and let e1,...,en be their respective heads. Then the head is defined depending on: a. ordinary reduction; b. fusion; c. transformation; d. cadential retention (p. 159).</li> <li>"If a two-element cadence is directly subordinate to the head e of a time-span T, the final is directly subordinate to e and the penult is directly subordinate to the final."</li> </ol> <h4 id="time-span-reduction-preference-rules-tsr-prs" tabindex="-1">Time-span reduction preference rules (TSR~PRs) <a class="header-anchor" href="#time-span-reduction-preference-rules-tsr-prs" aria-label="Permalink to "Time-span reduction preference rules (TSR~PRs)""></a></h4> <ol> <li>(Metrical position) "Of the possible choices for head of time-span T, prefer that is in a relatively strong metrical position."</li> <li>(Local harmony) "Of the possible choices for head of time-span T, prefer that is: a. relatively intrinsically consonant, b. relatively closely related to the local tonic."</li> <li>(Registral extremes) "Of the possible choices for head of time-span T, weakly prefer a choice that has: a. a higher melodic pitch; b. a lower bass pitch."</li> <li>(Parallelism) "If two or more time-spans can be construed as motivically and/or rhythmically parallel, preferably assign them parallel heads."</li> <li>(Metrical stability) "In choosing the head of a time-span T, prefer a choice that results in more stable choice of metrical structure."</li> <li>(Prolongational stability) "In choosing the head of a time-span T, prefer a choice that results in more stable choice of prolongational structure."</li> <li>(Cadential retention) (p. 
170).</li> <li>(Structural beginning) "If for a time-span T there is a larger group G containing T for which the head of T can function as the structural beginning, then prefer as head of T an event relatively close to the beginning of T (and hence to the beginning of G as well)."</li> <li>"In choosing the head of a piece, prefer the structural ending to the structural beginning."</li> </ol> <h3 id="iv-prolongational-reduction-rules" tabindex="-1">IV. Prolongational reduction rules <a class="header-anchor" href="#iv-prolongational-reduction-rules" aria-label="Permalink to "IV. Prolongational reduction rules""></a></h3> <h4 id="prolongational-reduction-well-formedness-rules-pr-wfrs" tabindex="-1">Prolongational reduction well-formedness rules (PR~WFRs) <a class="header-anchor" href="#prolongational-reduction-well-formedness-rules-pr-wfrs" aria-label="Permalink to "Prolongational reduction well-formedness rules (PR~WFRs)""></a></h4> <ol> <li>"There is a single event in the underlying grouping structure of every piece that functions as prolongational head."</li> <li>"An event ei can be a direct elaboration of another pitch ej in any of the following ways: a. ei is a strong prolongation of ej if the roots, bass notes, and melodic notes of the two events are identical; b. ei is a weak prolongation of ej if the roots of the two events are identical but the bass and/or melodic notes differ; c. ei is a progression to or from ej if the harmonic roots of the two events are different."</li> <li>"Every event in the underlying grouping structure is either the prolongational head or a recursive elaboration of the prolongational head."</li> <li>(No crossing branches) "If an event ei is a direct elaboration of an event ej, every event between ei and ej must be a direct elaboration of either ei, ej, or some event between them."</li> </ol> <h4 id="prolongational-reduction-preference-rules-pr-prs" tabindex="-1">Prolongational reduction preference rules (PR~PRs) <a class="header-anchor" href="#prolongational-reduction-preference-rules-pr-prs" aria-label="Permalink to "Prolongational reduction preference rules (PR~PRs)""></a></h4> <ol> <li>(Time-span importance) "In choosing the prolongationally most important event ek of a prolongational region (ei – ej), strongly prefer a choice in which ek is relatively time-span important."</li> <li>(Time-span segmentation) "Let ek be the prolongationally most important event of a prolongational region (ei – ej). If there is a time-span that contains ei and ek but not ej, prefer a prolongational reduction in which ek is an elaboration of ei; similarly with the roles of ei and ej reversed."</li> <li>(Prolongational connection) "In choosing the prolongationally most important event ek of a prolongational region (ei – ej), prefer an ek that attaches so as to form a maximally stable prolongational connection with one of the endpoints of the region."</li> <li>(Prolongational importance) "Let ek be the prolongationally most important event of a prolongational region (ei – ej). Prefer a prolongational reduction in which ek is an elaboration of the prolongationally more important of the endpoints."</li> <li>(Parallelism) "Prefer a prolongational reduction in which parallel passages receive parallel analyses."</li> <li>(Normative prolongational structure) "A cadenced group preferably contains four (five) elements in its prolongational structure: a. a prolongational beginning; b. a prolongational ending consisting of one element of the cadence; (c.
a right-branching prolongation as the most important direct elaboration of the prolongational beginning); d. a right-branching progression as the (next) most important direct elaboration of the prolongational beginning; e. a left-branching ‘subdominant’ progression as the most important elaboration of the first element of the cadence."</li> </ol> <h4 id="prolongational-reduction-transformational-rules" tabindex="-1">Prolongational reduction transformational rules <a class="header-anchor" href="#prolongational-reduction-transformational-rules" aria-label="Permalink to "Prolongational reduction transformational rules""></a></h4> <ol> <li>Stability conditions for prolongational connection (p. 224): a. Branching condition; b. Pitch-collection condition; c. Melodic condition; d. Harmonic condition.</li> <li>Interaction principle: "to make a sufficiently stable prolongational connection ek must be chosen from the events in the two most important levels of time-span reduction represented in (ei – ej)."</li> </ol> ]]></content:encoded> <enclosure url="https://chromatone.center/Analysis-of-the-beginning-of-Bachs-chorale-Ermuntre-Dich-mein-schwacher-Geist_Q320.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Equal loudness contour]]></title> <link>https://chromatone.center/practice/sound/loudness/</link> <guid>https://chromatone.center/practice/sound/loudness/</guid> <pubDate>Fri, 12 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Build your own loudness contours]]></description> <content:encoded><![CDATA[<SoundLoudness /><blockquote> <p><strong>Find your own equal loudness contour</strong><br> Place sine oscillators on the 2D plane, where the vertical axis is the volume and the horizontal axis is the frequency of the sounds being played. You can build up a curve for your absolute threshold of hearing, or explore your own feeling of really loud sounds. <strong>Be careful clicking at the top of the graph!</strong></p> </blockquote> ]]></content:encoded> <enclosure url="https://chromatone.center/loudness.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Atonality and serialism]]></title> <link>https://chromatone.center/theory/composition/serialism/</link> <guid>https://chromatone.center/theory/composition/serialism/</guid> <pubDate>Fri, 12 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Arnold Schoenberg and his explorations of 12 tone composition techniques]]></description> <content:encoded><![CDATA[<youtube-embed video="2ucLa-xLElo" /><h2 id="atonality" tabindex="-1">Atonality <a class="header-anchor" href="#atonality" aria-label="Permalink to "Atonality""></a></h2> <p>Atonality in its broadest sense is music that lacks a tonal center, or key. Atonality, in this sense, usually describes compositions written from about the early 20th century to the present day, where a hierarchy of harmonies focusing on a single, central triad is not used, and the notes of the chromatic scale function independently of one another. More narrowly, the term atonality describes music that does not conform to the system of tonal hierarchies that characterized European classical music between the seventeenth and nineteenth centuries.
"The repertory of atonal music is characterized by the occurrence of pitches in novel combinations, as well as by the occurrence of familiar pitch combinations in unfamiliar environments".</p> <youtube-embed video="DhdrKpw_VFc" /><p>The term is also occasionally used to describe music that is neither tonal nor serial, especially the pre-twelve-tone music of the Second Viennese School, principally Alban Berg, Arnold Schoenberg, and Anton Webern. However, "as a categorical label, 'atonal' generally means only that the piece is in the Western tradition and is not 'tonal'", although there are longer periods, e.g., medieval, renaissance, and modern modal music to which this definition does not apply. "Serialism arose partly as a means of organizing more coherently the relations used in the pre-serial 'free atonal' music. ... Thus, many useful and crucial insights about even strictly serial music depend only on such basic atonal theory".</p> <p>Late 19th- and early 20th-century composers such as Alexander Scriabin, Claude Debussy, Béla Bartók, Paul Hindemith, Sergei Prokofiev, Igor Stravinsky, and Edgard Varèse have written music that has been described, in full or in part, as atonal.</p> <youtube-embed video="1k3yb0o2uU0" /><youtube-embed video="VCODCJ3dERs" /><h3 id="free-atonality" tabindex="-1">Free atonality <a class="header-anchor" href="#free-atonality" aria-label="Permalink to "Free atonality""></a></h3> <p>The twelve-tone technique was preceded by Schoenberg's freely atonal pieces of 1908 to 1923, which, though free, often have as an "integrative element...a minute intervallic cell" that in addition to expansion may be transformed as with a tone row, and in which individual notes may "function as pivotal elements, to permit overlapping statements of a basic cell or the linking of two or more basic cells".</p> <p>The twelve-tone technique was also preceded by nondodecaphonic serial composition used independently in the works of Alexander Scriabin, Igor Stravinsky, Béla Bartók, Carl Ruggles, and others. "Essentially, Schoenberg and Hauer systematized and defined for their own dodecaphonic purposes a pervasive technical feature of 'modern' musical practice, the ostinato."</p> <h3 id="composing-atonal-music" tabindex="-1">Composing atonal music <a class="header-anchor" href="#composing-atonal-music" aria-label="Permalink to "Composing atonal music""></a></h3> <p>Setting out to compose atonal music may seem complicated because of both the vagueness and generality of the term. Additionally George Perle explains that, "the 'free' atonality that preceded dodecaphony precludes by definition the possibility of self-consistent, generally applicable compositional procedures". However, he provides one example as a way to compose atonal pieces, a pre-twelve-tone technique piece by Anton Webern, which rigorously avoids anything that suggests tonality, to choose pitches that do not imply tonality. In other words, reverse the rules of the common practice period so that what was not allowed is required and what was required is not allowed. This is what was done by Charles Seeger in his explanation of dissonant counterpoint, which is a way to write atonal counterpoint.</p> <p>Kostka and Payne list four procedures as operational in the atonal music of Schoenberg, all of which may be taken as negative rules. 
These are the avoidance of melodic or harmonic octaves, the avoidance of traditional pitch collections such as major or minor triads, the avoidance of more than three successive pitches from the same diatonic scale, and the use of disjunct melodies (avoidance of conjunct melodies).</p> <p>Further, Perle agrees with Oster and Katz that "the abandonment of the concept of a root-generator of the individual chord is a radical development that renders futile any attempt at a systematic formulation of chord structure and progression in atonal music along the lines of traditional harmonic theory". Atonal compositional techniques and results "are not reducible to a set of foundational assumptions in terms of which the compositions that are collectively designated by the expression 'atonal music' can be said to represent 'a system' of composition". Equal-interval chords are often of indeterminate root, while mixed-interval chords are often best characterized by their interval content; both lend themselves to atonal contexts.</p> <p>Perle also points out that structural coherence is most often achieved through operations on intervallic cells. A cell "may operate as a kind of microcosmic set of fixed intervallic content, statable either as a chord or as a melodic figure or as a combination of both. Its components may be fixed with regard to order, in which event it may be employed, like the twelve-tone set, in its literal transformations. … Individual tones may function as pivotal elements, to permit overlapping statements of a basic cell or the linking of two or more basic cells".</p> <p>Regarding the post-tonal music of Perle, one theorist wrote: "While ... montages of discrete-seeming elements tend to accumulate global rhythms other than those of tonal progressions and their rhythms, there is a similarity between the two sorts of accumulates spatial and temporal relationships: a similarity consisting of generalized arching tone-centers linked together by shared background referential materials".</p> <p>Another approach to composing atonal music is given by Allen Forte, who developed the theory behind atonal music. Forte describes two main operations: transposition and inversion. Transposition by t can be seen as a rotation by t steps, either clockwise or anti-clockwise, on the pitch-class circle, where each note of the chord is rotated equally. For example, if t = 2 and the chord is [0 3 6], transposition (clockwise) will be [2 5 8]. Inversion can be seen as a symmetry with respect to the axis formed by 0 and 6. Carrying on with our example, [0 3 6] becomes [0 9 6].</p> <p>An important characteristic is the set of invariants, the notes that stay identical after a transformation. No distinction is made between the octaves in which a note is played, so that, for example, all C♯s are equivalent, no matter the octave in which they actually occur. This is why the 12-note scale is represented by a circle. This leads to a definition of the similarity between two chords that considers the subsets and the interval content of each chord.</p>
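<p>These mod-12 operations are easy to check in a few lines of code. The following is only an illustrative TypeScript sketch, not part of this page or of any particular library; the helper names are hypothetical, but the arithmetic is the transposition and inversion just described.</p> <pre><code class="language-ts">
// Pitch classes are integers 0..11; a chord is an unordered list of them.
type PcSet = number[]

const mod12 = (n: number) => ((n % 12) + 12) % 12

// Transposition Tn: rotate every pitch class by n steps around the circle.
const transpose = (set: PcSet, n: number): PcSet => set.map(pc => mod12(pc + n))

// Inversion: reflect about the axis through 0 and 6 (pc maps to 12 - pc, mod 12).
const invert = (set: PcSet): PcSet => set.map(pc => mod12(12 - pc))

// Invariants: pitch classes that survive a transformation unchanged.
const invariants = (a: PcSet, b: PcSet) => a.filter(pc => b.includes(pc))

console.log(transpose([0, 3, 6], 2)) // [2, 5, 8]
console.log(invert([0, 3, 6]))       // [0, 9, 6]
console.log(invariants([0, 3, 6], invert([0, 3, 6]))) // [0, 6]
</code></pre>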
<h2 id="serialism" tabindex="-1">Serialism <a class="header-anchor" href="#serialism" aria-label="Permalink to "Serialism""></a></h2> <p>In music, <a href="https://en.wikipedia.org/wiki/Serialism" target="_blank" rel="noreferrer">serialism</a> is a method of composition using series of pitches, rhythms, dynamics, timbres or other musical elements. Serialism began primarily with Arnold Schoenberg's twelve-tone technique, though some of his contemporaries were also working to establish serialism as a form of post-tonal thinking. Twelve-tone technique orders the twelve notes of the chromatic scale, forming a row or series and providing a unifying basis for a composition's melody, harmony, structural progressions, and variations. Other types of serialism also work with sets, collections of objects, but not necessarily with fixed-order series, and extend the technique to other musical dimensions (often called "parameters"), such as duration, dynamics, and timbre.</p> <youtube-embed video="9jqyU5oCZuQ" /><p>The idea of serialism is also applied in various ways in the visual arts, design, and architecture, and the musical concept has also been adapted in literature.</p> <p>Integral serialism or total serialism is the use of series for aspects such as duration, dynamics, and register as well as pitch. Other terms, used especially in Europe to distinguish post–World War II serial music from twelve-tone music and its American extensions, are general serialism and multiple serialism.</p> <p>Composers such as Arnold Schoenberg, Anton Webern, Alban Berg, Karlheinz Stockhausen, Pierre Boulez, Luigi Nono, Milton Babbitt, Elisabeth Lutyens, Henri Pousseur, Charles Wuorinen and Jean Barraqué used serial techniques of one sort or another in most of their music. Other composers such as Béla Bartók, Luciano Berio, Benjamin Britten, John Cage, Aaron Copland, Ernst Krenek, György Ligeti, Olivier Messiaen, Arvo Pärt, Walter Piston, Ned Rorem, Alfred Schnittke, Ruth Crawford Seeger, Dmitri Shostakovich, and Igor Stravinsky used serialism only in some of their compositions or only in some sections of pieces, as did some jazz composers, such as Bill Evans, Yusef Lateef, and Bill Smith.</p> <youtube-embed video="Rr9zUyoBHBY" /><h2 id="basic-definitions" tabindex="-1">Basic definitions <a class="header-anchor" href="#basic-definitions" aria-label="Permalink to "Basic definitions""></a></h2> <p>Serialism is a method, "highly specialized technique", or "way" of composition. It may also be considered "a philosophy of life (Weltanschauung), a way of relating the human mind to the world and creating a completeness when dealing with a subject".</p> <p>Serialism is not by itself a system of composition or a style. Neither is pitch serialism necessarily incompatible with tonality, though it is most often used as a means of composing atonal music.</p> <p>"Serial music" is a problematic term because it is used differently in different languages and especially because, shortly after its coinage in French, it underwent essential alterations during its transmission to German. The term's use in connection with music was first introduced in French by René Leibowitz in 1947, and immediately afterward by Humphrey Searle in English, as an alternative translation of the German Zwölftontechnik (twelve-tone technique) or Reihenmusik (row music); it was independently introduced by Stockhausen and Herbert Eimert into German in 1955 as serielle Musik, with a different meaning, but also translated as "serial music".</p> <h3 id="twelve-tone-serialism" tabindex="-1">Twelve-tone serialism <a class="header-anchor" href="#twelve-tone-serialism" aria-label="Permalink to "Twelve-tone serialism""></a></h3> <p>Serialism of the first type is most specifically defined as a structural principle according to which a recurring series of ordered elements (normally a set—or row—of pitches or pitch classes) is used in order or manipulated in particular ways to give a piece unity.
"Serial" is often broadly used to describe all music written in what Schoenberg called "The Method of Composing with Twelve Notes related only to one another", or dodecaphony, and methods that evolved from his methods. It is sometimes used more specifically to apply only to music in which at least one element other than pitch is treated as a row or series. Such methods are often called post-Webernian serialism. Other terms used to make the distinction are twelve-note serialism for the former and integral serialism for the latter.</p> <p>A row may be assembled pre-compositionally (perhaps to embody particular intervallic or symmetrical properties), or derived from a spontaneously invented thematic or motivic idea. The row's structure does not in itself define the structure of a composition, which requires development of a comprehensive strategy. The choice of strategy often depends on the relationships contained in a row class, and rows may be constructed with an eye to producing the relationships needed to form desired strategies.</p> <p>The basic set may have additional restrictions, such as the requirement that it use each interval only once.</p> <h3 id="non-twelve-tone-serialism" tabindex="-1">Non-twelve-tone serialism <a class="header-anchor" href="#non-twelve-tone-serialism" aria-label="Permalink to "Non-twelve-tone serialism""></a></h3> <p>"The series is not an order of succession, but indeed a hierarchy—which may be independent of this order of succession".</p> <p>Rules of analysis derived from twelve-tone theory do not apply to serialism of the second type: "in particular the ideas, one, that the series is an intervallic sequence, and two, that the rules are consistent". For example, Stockhausen's early serial works, such as Kreuzspiel and Formel, "advance in unit sections within which a preordained set of pitches is repeatedly reconfigured ... The composer's model for the distributive serial process corresponds to a development of the Zwölftonspiel of Josef Matthias Hauer". Goeyvaerts's Nummer 4</p> <blockquote> <p>provides a classic illustration of the distributive function of seriality: 4 times an equal number of elements of equal duration within an equal global time is distributed in the most equable way, unequally with regard to one another, over the temporal space: from the greatest possible coïncidence to the greatest possible dispersion. This provides an exemplary demonstration of that logical principle of seriality: every situation must occur once and only once.</p> </blockquote> <p>Henri Pousseur, after initially working with twelve-tone technique in works like Sept Versets (1950) and Trois Chants sacrés (1951),</p> <blockquote> <p>evolved away from this bond in Symphonies pour quinze Solistes [1954–55] and in the Quintette [à la mémoire d’Anton Webern, 1955], and from around the time of Impromptu [1955] encounters whole new dimensions of application and new functions.</p> </blockquote> <blockquote> <p>The twelve-tone series loses its imperative function as a prohibiting, regulating, and patterning authority; its working-out is abandoned through its own constant-frequent presence: all 66 intervallic relations among the 12 pitches being virtually present. Prohibited intervals, like the octave, and prohibited successional relations, such as premature note repetitions, frequently occur, although obscured in the dense contexture. 
The number twelve no longer plays any governing, defining rôle; the pitch constellations no longer hold to the limitation determined by their formation. The dodecaphonic series loses its significance as a concrete model of shape (or a well-defined collection of concrete shapes) is played out. And the chromatic total remains active only, and provisionally, as a general reference.</p> </blockquote> <p>In the 1960s Pousseur took this a step further, applying a consistent set of predefined transformations to preexisting music. One example is the large orchestral work Couleurs croisées (Crossed Colours, 1967), which performs these transformations on the protest song "We Shall Overcome", creating a succession of different situations that are sometimes chromatic and dissonant and sometimes diatonic and consonant. In his opera Votre Faust (Your Faust, 1960–68) Pousseur used many quotations, themselves arranged into a "scale" for serial treatment. This "generalised" serialism (in the strongest possible sense) aims not to exclude any musical phenomena, no matter how heterogeneous, in order "to control the effects of tonal determinism, dialectize its causal functions, and overcome any academic prohibitions, especially the fixing of an anti-grammar meant to replace some previous one".</p> <p>At about the same time, Stockhausen began using serial methods to integrate a variety of musical sources from recorded examples of folk and traditional music from around the world in his electronic composition Telemusik (1966), and from national anthems in Hymnen (1966–67). He extended this serial "polyphony of styles" in a series of "process-plan" works in the late 1960s, as well as later in portions of Licht, the cycle of seven operas he composed between 1977 and 2003.</p> <h2 id="history-of-serial-music" tabindex="-1">History of serial music <a class="header-anchor" href="#history-of-serial-music" aria-label="Permalink to "History of serial music""></a></h2> <h3 id="before-world-war-ii" tabindex="-1">Before World War II <a class="header-anchor" href="#before-world-war-ii" aria-label="Permalink to "Before World War II""></a></h3> <p>In the late 19th and early 20th century, composers began to struggle against the ordered system of chords and intervals known as "functional tonality". Composers such as Debussy and Strauss found ways to stretch the limits of the tonal system to accommodate their ideas. After a brief period of free atonality, Schoenberg and others began exploring tone rows, in which an ordering of the 12 pitches of the equal-tempered chromatic scale is used as the source material of a composition. This ordered set, often called a row, allowed for new forms of expression and (unlike free atonality) the expansion of underlying structural organizing principles without recourse to common practice harmony.</p> <p>Twelve-tone serialism first appeared in the 1920s, with antecedents predating that decade (instances of 12-note passages occur in Liszt's Faust Symphony and in Bach. Schoenberg was the composer most decisively involved in devising and demonstrating the fundamentals of twelve-tone serialism, though it is clear it is not the work of just one musician. In Schoenberg’s own words, his goal of l'invention contrariée was to show constraint in composition. 
Consequently, some reviewers have jumped to the conclusion that serialism acted as a predetermined method of composing to avoid the subjectivity and ego of a composer in favour of calculated measure and proportion.</p> <h3 id="after-world-war-ii" tabindex="-1">After World War II <a class="header-anchor" href="#after-world-war-ii" aria-label="Permalink to "After World War II""></a></h3> <p>Along with John Cage's indeterminate music (music composed with the use of chance operations) and Werner Meyer-Eppler's aleatoricism, serialism was enormously influential in postwar music. Theorists such as Milton Babbitt and George Perle codified serial systems, leading to a mode of composition called "total serialism", in which every aspect of a piece, not just pitch, is serially constructed. Perle's 1962 text Serial Composition and Atonality became a standard work on the origins of serial composition in the music of Schoenberg, Berg, and Webern.[citation needed]</p> <p>The serialization of rhythm, dynamics, and other elements of music was partly fostered by the work of Olivier Messiaen and his analysis students, including Karel Goeyvaerts and Boulez, in postwar Paris. Messiaen first used a chromatic rhythm scale in his Vingt Regards sur l'enfant-Jésus (1944), but he did not employ a rhythmic series until 1946–48, in the seventh movement, "Turangalîla II", of his Turangalîla-Symphonie. The first examples of such integral serialism are Babbitt's Three Compositions for Piano (1947), Composition for Four Instruments (1948), and Composition for Twelve Instruments (1948). He worked independently of the Europeans.[citation needed]</p> <p>Olivier Messiaen's unordered series for pitch, duration, dynamics, and articulation from the pre-serial Mode de valeurs et d'intensités (upper division only) was adapted by Pierre Boulez as an ordered row for his Structures I.</p> <p>Several of the composers associated with Darmstadt, notably Stockhausen, Goeyvaerts, and Pousseur, developed a form of serialism that initially rejected the recurring rows characteristic of twelve-tone technique in order to eradicate any lingering traces of thematicism. Instead of a recurring, referential row, "each musical component is subjected to control by a series of numerical proportions". In Europe, some serial and non-serial music of the early 1950s emphasized the determination of all parameters for each note independently, often resulting in widely spaced, isolated "points" of sound, an effect called first in German "punktuelle Musik" ("pointist" or "punctual music"), then in French "musique ponctuelle", but quickly confused with "pointillistic" (German "pointillistische", French "pointilliste"), the term associated with the densely packed dots in Seurat's paintings, even though the concept was unrelated.</p> <p>Pieces were structured by closed sets of proportions, a method closely related to certain works from the de Stijl and Bauhaus movements in design and architecture that some writers called "serial art", specifically the paintings of Piet Mondrian, Theo van Doesburg, Bart van der Leck, Georges Vantongerloo, Richard Paul Lohse, and Burgoyne Diller, who had sought to "avoid repetition and symmetry on all structural levels and working with a limited number of elements".</p> <p>Stockhausen described the final synthesis in this manner:</p> <blockquote> <p>So serial thinking is something that's come into our consciousness and will be there forever: it's relativity and nothing else.
It just says: Use all the components of any given number of elements, don't leave out individual elements, use them all with equal importance and try to find an equidistant scale so that certain steps are no larger than others. It's a spiritual and democratic attitude toward the world. The stars are organized in a serial way. Whenever you look at a certain star sign you find a limited number of elements with different intervals. If we more thoroughly studied the distances and proportions of the stars we'd probably find certain relationships of multiples based on some logarithmic scale or whatever the scale may be.</p> </blockquote> <p>Stravinsky's adoption of twelve-tone serial techniques shows the level of influence serialism had after the Second World War. Previously Stravinsky had used series of notes without rhythmic or harmonic implications. Because many of the basic techniques of serial composition have analogs in traditional counterpoint, uses of inversion, retrograde, and retrograde inversion from before the war do not necessarily indicate Stravinsky was adopting Schoenbergian techniques. But after meeting Robert Craft and other younger composers, Stravinsky began to study Schoenberg's music, as well as that of Webern and later composers, and to adapt their techniques in his work, using, for example, serial techniques applied to fewer than twelve notes. During the 1950s he used procedures related to those of Messiaen, Webern and Berg. While it is inaccurate to call them all "serial" in the strict sense, all his major works of the period have clear serialist elements.[citation needed]</p> <p>During this period, the concept of serialism influenced not only new compositions but also scholarly analysis of the classical masters. Adding to their professional tools of sonata form and tonality, scholars began to analyze previous works in the light of serial techniques; for example, they found the use of row technique in previous composers going back to Mozart and Beethoven. In particular, the orchestral outburst that introduces the development section halfway through the last movement of Mozart's Symphony No. 40 is a tone row that Mozart punctuates in a very modern and violent way that Michael Steinberg called "rude octaves and frozen silences".</p> <p>Ruth Crawford Seeger extended serial control to parameters other than pitch and to formal planning as early as 1930–33 in a fashion that goes beyond Webern but was less thoroughgoing than the later practices of Babbitt and European postwar composers.[citation needed] Charles Ives's 1906 song "The Cage" begins with piano chords presented in incrementally decreasing durations, an early example of an overtly arithmetic duration series independent of meter, and in that sense a precursor to Messiaen’s style of integral serialism.
The idea of organizing pitch and rhythm according to similar or related principles is also suggested by both Henry Cowell's New Musical Resources (1930) and the work of Joseph Schillinger.[citation needed]</p> <h2 id="reactions-to-serialism" tabindex="-1">Reactions to serialism <a class="header-anchor" href="#reactions-to-serialism" aria-label="Permalink to "Reactions to serialism""></a></h2> <blockquote> <p>the first time I ever heard Webern in a concert performance …[t]he impression it made on me was the same as I was to experience a few years later when … I first laid eyes on a Mondriaan canvas...: those things, of which I had acquired an extremely intimate knowledge, came across as crude and unfinished when seen in reality.</p> </blockquote> <p>Karel Goeyvaerts on Anton Webern's music.</p> <p>Some music theorists have criticized serialism on the basis that its compositional strategies are often incompatible with the way the human mind processes a piece of music. Nicolas Ruwet (1959) was one of the first to criticise serialism by a comparison with linguistic structures, citing theoretical claims by Boulez and Pousseur, taking as specific examples bars from Stockhausen's Klavierstücke I & II, and calling for a general reexamination of Webern's music. Ruwet specifically names three works as exempt from his criticism: Stockhausen's Zeitmaße and Gruppen, and Boulez's Le marteau sans maître.</p> <p>In response, Pousseur questioned Ruwet's equivalence between phonemes and notes. He also suggested that, if analysis of Le marteau sans maître and Zeitmaße, "performed with sufficient insight", were to be made from the point of view of wave theory—taking into account the dynamic interaction of the different component phenomena, which creates "waves" that interact in a sort of frequency modulation—the analysis "would accurately reflect the realities of perception". This was because these composers had long since acknowledged the lack of differentiation found in punctual music and, becoming increasingly aware of the laws of perception and complying better with them, "paved the way to a more effective kind of musical communication, without in the least abandoning the emancipation that they had been allowed to achieve by this 'zero state' that was punctual music". One way this was achieved was by developing the concept of "groups", which allows structural relationships to be defined not only between individual notes but also at higher levels, up to the overall form of a piece. This is "a structural method par excellence", and a sufficiently simple conception that it remains easily perceptible. Pousseur also points out that serial composers were the first to recognize and attempt to move beyond the lack of differentiation within certain pointillist works. Pousseur later followed up on his own suggestion by developing his idea of "wave" analysis and applying it to Stockhausen's Zeitmaße in two essays.</p> <p>Later writers have continued both lines of reasoning. Fred Lerdahl, for example, in his essay "Cognitive Constraints on Compositional Systems", argues that serialism's perceptual opacity ensures its aesthetic inferiority. 
Lerdahl has in turn been criticized for excluding "the possibility of other, non-hierarchical methods of achieving musical coherence," and for concentrating on the audibility of tone rows, and the portion of his essay focusing on Boulez's "multiplication" technique (exemplified in three movements of Le Marteau sans maître) has been challenged on perceptual grounds by Stephen Heinemann and Ulrich Mosch. Ruwet's critique has also been criticised for making "the fatal mistake of equating visual presentation (a score) with auditive presentation (the music as heard)".</p> <p>In all these reactions discussed above, the "information extracted", "perceptual opacity", "auditive presentation" (and constraints thereof) pertain to what defines serialism, namely use of a series. And since Schoenberg remarked, "in the later part of a work, when the set [series] had already become familiar to the ear", it has been assumed that serial composers expect their series to be aurally perceived. This principle even became the premise of empirical investigation in the guise of "probe-tone" experiments testing listeners' familiarity with a row after exposure to its various forms (as would occur in a 12-tone work). In other words the supposition in critiques of serialism has been that, if a composition is so intricately structured by and around a series, that series should ultimately be clearly perceived or that a listener ought to become aware of its presence or importance. Babbitt denied this:</p> <blockquote> <p>That's not the way I conceive of a set [series]. This is not a matter of finding the lost [series]. This is not a matter of cryptoanalysis (where's the hidden [series]?). What I'm interested in is the effect it might have, the way it might assert itself not necessarily explicitly.</p> </blockquote> <p>Seemingly in accord with Babbitt's statement, but ranging over such issues as perception, aesthetic value, and the "poietic fallacy", Walter Horn offers a more extensive explanation of the serialism (and atonality) controversy.</p> <p>Within the community of modern music, exactly what constituted serialism was also a matter of debate. The conventional English usage is that the word "serial" applies to all twelve-tone music, which is a subset of serial music, and it is this usage that is generally intended in reference works. Nevertheless, a large body of music exists that is called "serial" but does not employ note-rows at all, let alone twelve-tone technique, e.g., Stockhausen's Klavierstücke I–IV (which use permuted sets), his Stimmung (with pitches from the overtone series, which is also used as the model for the rhythms), and Pousseur's Scambi (where the permuted sounds are made exclusively from filtered white noise).[citation needed]</p> <p>When serialism is not limited to twelve-tone techniques, a contributing problem is that the word "serial" is seldom if ever defined. In many published analyses of individual pieces the term is used while actual meaning is skated around.</p> <h2 id="theory-of-twelve-tone-serial-music" tabindex="-1">Theory of twelve-tone serial music <a class="header-anchor" href="#theory-of-twelve-tone-serial-music" aria-label="Permalink to "Theory of twelve-tone serial music""></a></h2> <p>Due to Babbitt's work, in the mid-20th century serialist thought became rooted in set theory and began to use a quasi-mathematical vocabulary for the manipulation of the basic sets. 
Musical set theory is often used to analyze and compose serial music, and is also sometimes used in tonal and nonserial atonal analysis.[citation needed]</p> <p>The basis for serial composition is Schoenberg's twelve-tone technique, where the 12 notes of the chromatic scale are organized into a row. This "basic" row is then used to create permutations, that is, rows derived from the basic set by reordering its elements. The row may be used to produce a set of intervals, or a composer may derive the row from a particular succession of intervals. A row that uses all of the intervals in their ascending form once is an all-interval row. In addition to permutations, the basic row may have some set of notes derived from it, which is used to create a new row. These are derived sets.[citation needed]</p> <p>Because there are tonal chord progressions that use all twelve notes, it is possible to create pitch rows with very strong tonal implications, and even to write tonal music using twelve-tone technique. Most tone rows contain subsets that can imply a pitch center; a composer can create music centered on one or more of the row's constituent pitches by emphasizing or avoiding these subsets, respectively, as well as through other, more complex compositional devices.</p> <p>To serialize other elements of music, a system quantifying an identifiable element must be created or defined (this is called "parametrization", after the term in mathematics). For example, if duration is serialized, a set of durations must be specified; if tone colour (timbre) is serialized, a set of separate tone colours must be identified; and so on.[citation needed]</p> <p>The selected set or sets, their permutations and derived sets form the composer's basic material.[citation needed]</p> <p>Composition using twelve-tone serial methods focuses on each appearance of the collection of twelve chromatic notes, called an aggregate. (Sets of more or fewer pitches, or of elements other than pitch, may be treated analogously.) One principle operative in some serial compositions is that no element of the aggregate should be reused in the same contrapuntal strand (statement of a series) until all the other members have been used, and each member must appear only in its place in the series. Yet, since most serial compositions have multiple (at least two, sometimes as many as a few dozen) series statements occurring concurrently, interwoven with each other in time, and feature repetitions of some of their pitches, this principle as stated is more a referential abstraction than a description of the concrete reality of a musical work that is termed "serial".[citation needed]</p> <p>A series may be divided into subsets, and the members of the aggregate not part of a subset are said to be its complement. A subset is self-complementing if it contains half of the set and its complement is also a permutation of the original subset. This is most commonly seen with hexachords, six-note segments of a tone row. A hexachord that is self-complementing for a particular permutation is called prime combinatorial. 
A hexachord that is self-complementing for all the canonic operations—inversion, retrograde, and retrograde inversion—is called all-combinatorial.</p> <h3 id="twelve-tone-technique" tabindex="-1">Twelve-tone technique <a class="header-anchor" href="#twelve-tone-technique" aria-label="Permalink to "Twelve-tone technique""></a></h3> <p><a href="https://en.wikipedia.org/wiki/Twelve-tone_technique" target="_blank" rel="noreferrer">The twelve-tone technique</a> — also known as dodecaphony, twelve-tone serialism, and (in British usage) twelve-note composition — is a method of musical composition first devised by Austrian composer Josef Matthias Hauer, who published his "law of the twelve tones" in 1919. In 1923, Arnold Schoenberg (1874–1951) developed his own, better-known version of 12-tone technique, which became associated with the "Second Viennese School" composers, who were the primary users of the technique in the first decades of its existence. The technique is a means of ensuring that all 12 notes of the chromatic scale are sounded as often as one another in a piece of music while preventing the emphasis of any one note through the use of tone rows, orderings of the 12 pitch classes. All 12 notes are thus given more or less equal importance, and the music avoids being in a key. Over time, the technique increased greatly in popularity and eventually became widely influential on 20th-century composers. Many important composers who had originally not subscribed to or actively opposed the technique, such as Aaron Copland and Igor Stravinsky,[clarification needed] eventually adopted it in their music.</p> <p>Schoenberg himself described the system as a "Method of composing with twelve tones which are related only with one another". It is commonly considered a form of serialism.</p> <p>Schoenberg's fellow countryman and contemporary Hauer also developed a similar system using unordered hexachords or tropes—but with no connection to Schoenberg's twelve-tone technique. Other composers have created systematic use of the chromatic scale, but Schoenberg's method is considered to be historically and aesthetically most significant.</p> <p><a href="https://www.musictheory.net/calculators/matrix" target="_blank" rel="noreferrer">https://www.musictheory.net/calculators/matrix</a></p> <h3 id="tone-row" tabindex="-1">Tone row <a class="header-anchor" href="#tone-row" aria-label="Permalink to "Tone row""></a></h3> <p>The basis of the twelve-tone technique is the <a href="https://en.wikipedia.org/wiki/Tone_row" target="_blank" rel="noreferrer">tone row</a>, an ordered arrangement of the twelve notes of the chromatic scale (the twelve equal tempered pitch classes). There are four postulates or preconditions to the technique which apply to the row (also called a set or series), on which a work or section is based:</p> <ol> <li>The row is a specific ordering of all twelve notes of the chromatic scale (without regard to octave placement).</li> <li>No note is repeated within the row.</li> <li>The row may be subjected to interval-preserving transformations—that is, it may appear in inversion (denoted I), retrograde (R), or retrograde-inversion (RI), in addition to its "original" or prime form (P).</li> <li>The row in any of its four transformations may begin on any degree of the chromatic scale; in other words it may be freely transposed. (Transposition being an interval-preserving transformation, this is technically covered already by 3.) 
Transpositions are indicated by an integer between 0 and 11 denoting the number of semitones: thus, if the original form of the row is denoted P0, then P1 denotes its transposition upward by one semitone (similarly I1 is an upward transposition of the inverted form, R1 of the retrograde form, and RI1 of the retrograde-inverted form).</li> </ol> <p>(In Hauer's system postulate 3 does not apply.)</p> <p>A particular transformation (prime, inversion, retrograde, retrograde-inversion) together with a choice of transpositional level is referred to as a set form or row form. Every row thus has up to 48 different row forms. (Some rows have fewer due to symmetry.)</p> <p><a href="https://en.wikipedia.org/wiki/List_of_tone_rows_and_series" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/List_of_tone_rows_and_series</a></p>
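<p>The 48 row forms follow mechanically from the postulates above: four transformations, each at twelve transposition levels. The sketch below is an illustrative TypeScript fragment, not code from this page or from any particular library; the function names are hypothetical, and the prime form used as input is the row of Webern's Concerto, Op. 24, chosen only as a familiar example.</p> <pre><code class="language-ts">
// A row is an ordering of the twelve pitch classes 0..11.
type Row = number[]

const mod12 = (n: number) => ((n % 12) + 12) % 12

const transpose = (row: Row, t: number): Row => row.map(pc => mod12(pc + t))
// One common convention: inversion mirrors each interval around the opening note.
const invert = (row: Row): Row => row.map(pc => mod12(2 * row[0] - pc))
const retrograde = (row: Row): Row => [...row].reverse()

// P, I, R and RI at the twelve transposition levels: up to 48 distinct row forms.
const rowForms = (p0: Row) => {
  const base = { P: p0, I: invert(p0), R: retrograde(p0), RI: retrograde(invert(p0)) }
  const forms: { [name: string]: Row } = {}
  for (const [name, row] of Object.entries(base)) {
    Array.from({ length: 12 }, (_, t) => t).forEach(t => { forms[name + t] = transpose(row, t) })
  }
  return forms
}

// Webern, Concerto Op. 24 (illustration only): B Bb D / Eb G F# / G# E F / C C# A
const forms = rowForms([11, 10, 2, 3, 7, 6, 8, 4, 5, 0, 1, 9])
console.log(Object.keys(forms).length) // 48
console.log(forms['P1']) // the prime form transposed up one semitone
</code></pre>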
<h3 id="schoenberg-s-mature-practice" tabindex="-1">Schoenberg's mature practice <a class="header-anchor" href="#schoenberg-s-mature-practice" aria-label="Permalink to "Schoenberg's mature practice""></a></h3> <p>Ten features of Schoenberg's mature twelve-tone practice are characteristic, interdependent, and interactive:</p> <ol> <li>Hexachordal inversional combinatoriality</li> <li>Aggregates</li> <li>Linear set presentation</li> <li>Partitioning</li> <li>Isomorphic partitioning</li> <li>Invariants</li> <li>Hexachordal levels</li> <li>Harmony, "consistent with and derived from the properties of the referential set"</li> <li>Metre, established through "pitch-relational characteristics"</li> <li>Multidimensional set presentations.</li> </ol> <h3 id="unified-field" tabindex="-1">Unified field <a class="header-anchor" href="#unified-field" aria-label="Permalink to "Unified field""></a></h3> <p>In music, <a href="https://en.wikipedia.org/wiki/Unified_field" target="_blank" rel="noreferrer">unified field</a> is the 'unity of musical space' created by the free use of melodic material as harmonic material and vice versa.</p> <p>The concept is most associated with the twelve-tone technique, created by its 'total thematicism' where a tone-row (melody) generates all (harmonic) material. It was also used by Alexander Scriabin, though from a diametrically opposed direction, created by his use of extremely slow harmonic rhythm which eventually led to his use of unordered pitch-class sets, usually hexachords (of six pitches) as harmony from which melody may also be created. (Samson 1977)</p> <p>It may also be observed in Igor Stravinsky's Russian period, such as in Les Noces, derived from his use of folk melodies as generating material and influenced by shorter pieces by Claude Debussy, such as Voiles, and Modest Mussorgsky. In Béla Bartók's Bagatelles and several of Alfredo Casella's Nine Piano Pieces, such as No. 4 'In Modo Burlesco', the close intervallic relationship between motive and chord creates or justifies the great harmonic dissonance.</p> <blockquote> <p>Webern was the only one...who was conscious of a new sound-dimension, of the abolition of horizontal-vertical opposition, so that he saw in the series only a way of giving structure to the sound-space....That functional redistribution of intervals toward which he tended marks an extremely important moment in the history of language. — Pierre Boulez, Notes of an Apprenticeship, p. 149</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Who_Cares_if_You_Listen" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Who_Cares_if_You_Listen</a></p> <p><a href="/media/pdf/who-cares-if-you-listen.pdf" >Who cares if you listen?</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/Richard-Paul-Lohse.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Harmony study]]></title> <link>https://chromatone.center/theory/harmony/study/</link> <guid>https://chromatone.center/theory/harmony/study/</guid> <pubDate>Wed, 10 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Different approaches to harmony in music and everything else]]></description> <content:encoded><![CDATA[<h2 id="ancient-greece" tabindex="-1">Ancient Greece <a class="header-anchor" href="#ancient-greece" aria-label="Permalink to "Ancient Greece""></a></h2> <p>In Greek mythology, Harmonia (/hɑːrˈmoʊniə/; Ancient Greek: Ἁρμονία) is the immortal goddess of harmony and concord. Her Roman counterpart is Concordia. Her Greek opposite is Eris, whose Roman counterpart is Discordia.</p> <p>According to one account, she is the daughter of Ares and Aphrodite. By another account, Harmonia was from Samothrace and was the daughter of Zeus and Electra, her brother Iasion being the founder of the mystic rites celebrated on the island.</p> <p>Almost always, Harmonia is the wife of Cadmus. With Cadmus, she was the mother of Ino, Polydorus, Autonoë, Agave, and Semele. Their youngest son was Illyrius.</p> <p>Cadmus was the first Greek hero and, alongside Perseus and Bellerophon, the greatest hero and slayer of monsters before the days of Heracles.</p> <p>Cadmus was credited by the ancient Greeks (such as Herodotus c. 484 – c. 425 BC, one of the first Greek historians, but one who also wove standard myths and legends through his work) with introducing the original Phoenician alphabet to the Greeks, who adapted it to form their Greek alphabet. Herodotus estimates that Cadmus lived sixteen hundred years before his time, which would be around 2000 BC.</p> <h3 id="pythagoreans" tabindex="-1">Pythagoreans <a class="header-anchor" href="#pythagoreans" aria-label="Permalink to "Pythagoreans""></a></h3> <p>Pythagoras pioneered the mathematical and experimental study of music. He objectively measured physical quantities, such as the length of a string, and discovered quantitative mathematical relationships of music through arithmetic ratios. Pythagoras attempted to explain subjective psychological and aesthetic feelings, such as the enjoyment of musical harmony. Pythagoras and his students experimented systematically with strings of varying length and tension, with wind instruments, with brass discs of the same diameter but different thickness, and with identical vases filled with different levels of water. Early Pythagoreans established quantitative ratios between the length of a string or pipe and the pitch of notes and the frequency of string vibration.</p> <p>Pythagoras is credited with discovering that the most harmonious musical intervals are created by the simple numerical ratios of the first four natural numbers which derive respectively from the relations of string length: the octave (1/2), the fifth (2/3) and the fourth (3/4). The sum of those numbers 1 + 2 + 3 + 4 = 10 was for Pythagoreans the perfect number, because it contained in itself "the whole essential nature of numbers". Werner Heisenberg called this formulation of musical arithmetic "among the most powerful advances of human science" because it enables the measurement of sound in space.</p>
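<p>As a quick numeric check of these ratios (and of the stacked pure fifths described in the next paragraph), here is a small TypeScript sketch. It is not from this page; the 264 Hz base frequency and the function name are arbitrary, illustrative choices.</p> <pre><code class="language-ts">
// String-length ratios 1/2, 2/3 and 3/4 invert to frequency ratios 2/1, 3/2 and 4/3.
const base = 264 // Hz, an arbitrary reference pitch

const octave = base * 2 / 1 // 528 Hz
const fifth = base * 3 / 2  // 396 Hz
const fourth = base * 4 / 3 // 352 Hz
console.log(octave, fifth, fourth)

// Pythagorean tuning sketch: stack pure 3:2 fifths and fold each pitch back into one octave.
const stackFifths = (count: number) =>
  Array.from({ length: count }, (_, i) => Math.pow(3 / 2, i))
    .map(r => { let x = r; while (x >= 2) x = x / 2; return x })
    .sort((a, b) => a - b)

console.log(stackFifths(7))
// [1, 1.125, 1.2656.., 1.4238.., 1.5, 1.6875, 1.8984..]
// i.e. 1, 9/8, 81/64, 729/512, 3/2, 27/16 and 243/128: a seven-note Pythagorean scale
</code></pre>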
<p>Pythagorean tuning is a system of musical tuning in which the frequency ratios of all intervals are based on the ratio 3:2. This ratio, also known as the "pure" perfect fifth, is chosen because it is one of the most consonant and easiest to tune by ear and because of the importance attributed to the integer 3. As Novalis put it, "The musical proportions seem to me to be particularly correct natural proportions."</p> <p>The fact that mathematics could explain the human sentimental world had a profound impact on the Pythagorean philosophy. Pythagoreanism became the quest for establishing the fundamental essences of reality. Pythagorean philosophers advanced the unshakable belief that the essence of all things is number and that the universe was sustained by harmony. According to ancient sources, music was central to the lives of those practicing Pythagoreanism. They used medicines for the purification (katharsis) of the body and, according to Aristoxenus, music for the purification of the soul. Pythagoreans used different types of music to arouse or calm their souls.</p> <blockquote> <p><img src="./Pythagoras_and_Philolaus.png" alt=""></p> <p>Medieval woodcut by Franchino Gaffurio, depicting Pythagoras and Philolaus conducting musical investigations.</p> </blockquote> <p>For Pythagoreans, harmony signified the "unification of a multifarious composition and the agreement of unlike spirits". In Pythagoreanism, numeric harmony was applied in mathematical, medical, psychological, aesthetic, metaphysical and cosmological problems. For Pythagorean philosophers, the basic property of numbers was expressed in the harmonious interplay of opposite pairs. Harmony assured the balance of opposite forces. Pythagoras had in his teachings named numbers and the symmetries of them as the first principle, and called these numeric symmetries harmony. This numeric harmony could be discovered in rules throughout nature. Numbers governed the properties and conditions of all beings and were regarded as the causes of being in everything else. Pythagorean philosophers believed that numbers were the elements of all beings and that the universe as a whole was composed of harmony and numbers.</p> <h2 id="ancient-rome" tabindex="-1">Ancient Rome <a class="header-anchor" href="#ancient-rome" aria-label="Permalink to "Ancient Rome""></a></h2> <h3 id="concordia-discors" tabindex="-1">Concordia discors <a class="header-anchor" href="#concordia-discors" aria-label="Permalink to "Concordia discors""></a></h3> <blockquote> <p>Cum tu inter scabiem tantam et contagia lucri Nil parvum sapias et adhuc sublimia cures: Quae mare conpescant causae, quid temperet annum, Stellae sponte sua iussaene vagentur et errent, Quid premat obscurum lunae, quid proferat orbem, Quid velit et possit rerum concordia discors, Empedocles an Stertinium deliret acumen. — Horat. Epist. I,12 (23-20 BCE)</p> </blockquote> <blockquote> <p>Temporis angusti mansit concordia discors Paxque fuit non sponte ducum … — Lucan. Bell. civ.
I, vers.98-99 (48-65 AD)</p> </blockquote> <h2 id="discordia-concors" tabindex="-1">Discordia concors <a class="header-anchor" href="#discordia-concors" aria-label="Permalink to "Discordia concors""></a></h2> <blockquote> <p>…faciuntque deum per quattuor artus Et mundi struxere globum prohibentque requiri Ultra se quicquam, cum per se cuncta crearint: Frigida nec calidis desint aut umida siccis, Spiritus aut solidis, sitque haec discordia concors Quae nexus habilis et opus generabile fingit Atque omnis partus elementa capacia reddit: Semper erit pugna ingeniis, dubiumque manebit Quod latet et tantum supra est hominemque deumque. — Manilius. Astronomica, I.137-146</p> </blockquote> <h2 id="renaissance" tabindex="-1">Renaissance <a class="header-anchor" href="#renaissance" aria-label="Permalink to "Renaissance""></a></h2> <blockquote> <p><img src="./Banchieri1.jpg" alt=""> Adriano Banchieri (1628)</p> </blockquote> <youtube-embed video="eRkgK4jfi6M" /><h2 id="balkan-vocal-harmony" tabindex="-1">Balkan Vocal Harmony <a class="header-anchor" href="#balkan-vocal-harmony" aria-label="Permalink to "Balkan Vocal Harmony""></a></h2> <youtube-embed video="AFgzzWT3zX4" /><h2 id="greece" tabindex="-1">Greece <a class="header-anchor" href="#greece" aria-label="Permalink to "Greece""></a></h2> <youtube-embed video="dm1MWr0ZNdI" /><h2 id="georgia" tabindex="-1">Georgia <a class="header-anchor" href="#georgia" aria-label="Permalink to "Georgia""></a></h2> <youtube-embed video="9HtdXXOTWlQ"/> <p>Georgian polyphonic singing encompasses several distinct regional styles, each with its own unique characteristics. The most prominent styles include Eastern Georgian and Western Georgian polyphony. Eastern Georgian polyphony, found in regions like Kartli and Kakheti, typically employs pedal drone polyphony. This style features two highly embellished melodic lines developing rhythmically free against a background of sustained pitches. In contrast, Western Georgian polyphony, prevalent in areas such as Achara, Imereti, Samegrelo, and Guria, utilizes contrapuntal techniques, often resulting in three and four-part harmonies with highly individualized melodic lines in each part.</p> <p>A key characteristic of Georgian polyphonic singing is its extensive use of sharp dissonant harmonies, including seconds, fourths, sevenths, and ninths. The so-called "Georgian Triad" (C-F-G), consisting of a fourth and a second above the bass note, is particularly common. Additionally, Georgian music is known for colorful modulations and unusual key changes. The vocal technique often involves male falsetto singers, known as "krimanchuli," performing high-pitched melodies above the main melody. This creates a unique and ethereal quality to the music.</p> <p>Georgian polyphonic singing typically involves three main vocal parts: mtkmeli (second tenor), modzakhili (first tenor), and bani (baritone). Other specialized parts include krini (falsetto), dvrini (bass), gamqivani, shemkhmobari, and krimanchuli. While most songs are sung a cappella, some may feature instrumental accompaniment on traditional instruments like the chonguri or panduri. 
The polyphonic structure allows for partial improvisation, with singers often creating their own melodies and harmonies within the established framework, resulting in a dynamic and engaging musical dialogue between voices.</p> <h2 id="pygmy-music" tabindex="-1">Pygmy music <a class="header-anchor" href="#pygmy-music" aria-label="Permalink to "Pygmy music""></a></h2> <youtube-embed video="fHNkSYXbVMI"/> <p>Pygmy music refers to the sub-Saharan African music traditions of the Central African foragers (or "Pygmies"), predominantly in the Congo, the Central African Republic and Cameroon.</p> <p>Pygmy groups include the Bayaka, the Mbuti, and the Batwa.</p> <p>Music is an important part of Pygmy life, and casual performances take place during many of the day's events. Music comes in many forms, including the spiritual likanos stories, vocable singing and music played from a variety of instruments including the bow harp (ieta), ngombi (harp zither) and limbindi (a string bow).</p> <p>Researchers who have studied Pygmy music include Simha Arom, Louis Sarno, Colin Turnbull and Jean-Pierre Hallet.</p> <h3 id="polyphonic-song" tabindex="-1">Polyphonic song <a class="header-anchor" href="#polyphonic-song" aria-label="Permalink to "Polyphonic song""></a></h3> <p>The Mbenga (Aka/Benzele) and Baka peoples in the west and the Mbuti (Efé) in the east are particularly known for their dense contrapuntal communal improvisation. Simha Arom says that the level of polyphonic complexity of Mbenga–Mbuti music was reached in Europe only in the 14th century. The polyphonic singing of the Aka Pygmies was relisted on the Representative List of the Intangible Cultural Heritage of Humanity in 2008.</p> <p>Mbenga–Mbuti Pygmy music consists of up to four parts and can be described as an "ostinato with variations" similar to a passacaglia in that it is cyclical. It is based on repetition of periods of equal length that each singer divides using different rhythmic figures specific to different repertoires and songs. This creates a detailed surface and endless variations not only of the same period repeated but of various performances of the same piece of music. As in some Balinese gamelan music, these patterns are based on a super-pattern which is never heard. The Pygmies themselves do not learn or think of their music in this theoretical framework, but learn the music growing up.</p> <p>Polyphonic music is only characteristic of the Mbenga and Mbuti. The Gyele/Kola, Great Lakes Twa and Southern Twa have very different musical styles.</p> <blockquote> <youtube-embed video="zb0z0yOdY5E"/> <p>Most of the literature on Baka music concentrates on their “spirit dances”. What is actually meant by “spirit” is rarely, if ever, mentioned in the literature. In English the language for such things as “spirit” and “enchantment” have been heavily influenced by 1000 years of Christianity and given negative connotations, so the true meanings of words get lost in translation.There seems to be an underlying inference that belief in “spirits” is a primitive animistic practice, rather than a rational interpretation of a real phenomenon that is indirectly revered in Western society in art, performance and sport, but which is never spoken about. The Baka call this mé. With their combination of polyphonic singing, polyrhythmic percussion and masked dancers, the Baka are experts at manifesting mé. Each “Spirit Dance” creates its own unique emotions personified in the mé. 
By being present at these “spirit dances” from before they are born, aware of sounds and movements while still in their mother’s womb, Baka children grow up learning that the purpose of “musicking” is not about performing songs, but about manifesting mé for the good of all.</p> </blockquote> <h3 id="liquindi" tabindex="-1">Liquindi <a class="header-anchor" href="#liquindi" aria-label="Permalink to "Liquindi""></a></h3> <youtube-embed video="ZNzX5t5S4Ls"/> <p>Liquindi is water drumming, typically practiced by Pygmy women and girls. The sound is produced by persons standing in water, and hitting the surface of the water with their hands, such as to trap air in the hands and produce a percussive effect that arises by sudden change in air pressure of the trapped air. The sound cannot exist entirely in water, since it requires the air-water boundary as a surface to be struck, so the sound is not hydraulophonic.</p> <youtube-embed video="8QyK0q1Kr-A"/> <h3 id="hindewhu" tabindex="-1">Hindewhu <a class="header-anchor" href="#hindewhu" aria-label="Permalink to "Hindewhu""></a></h3> <p>Hindewhu is a style of singing/whistle-playing of the BaBenzélé pygmies of the Central African Republic. The term hindewhu is an onomatopoeia of the sound of a performer alternately singing pitched syllables and blowing into a single-pitch papaya-stem whistle, in an interlocked rhythm similar to the gutera-kwakira structure of the Burundian akazehe. Hindewhu announces the return from a hunt and is performed solo, duo or in groups.</p> <h3 id="western-popularization" tabindex="-1">Western popularization <a class="header-anchor" href="#western-popularization" aria-label="Permalink to "Western popularization""></a></h3> <p>Colin M. Turnbull, an American anthropologist, wrote a book about the Efé Pygmies, The Forest People, in 1965. This introduced Mbuti culture to Western countries. Turnbull claimed that the Mbuti viewed the forest as a parental spirit with which they could communicate via song.</p> <p>Some of Turnbull's recordings of Efé music were commercially released and inspired more ethnomusicological study such as by Simha Arom, a French-Israeli who recorded hindewhu, and Luis Devin, an Italian ethnomusicologist who studied in depth the musical rituals and instruments of Baka Pygmies.</p> <p>Some popular musicians have used hindewhu in their music:</p> <ul> <li>"Hunting", a song by Deep Forest from their album Made in Japan.</li> <li>"Ba-Benzélé", a song by Jon Hassell and Brian Eno from the album Fourth World, Vol. 1: Possible Musics (1980).</li> <li>"Fabulous" (1983), a tune by John Oswald and Dick Hyman from the album plunderphonics (1989).</li> <li>Percussionist Bill Summers imitates hindewhu in the track "Watermelon Man" by Herbie Hancock from the 1973 album Head Hunters (see hocket).</li> <li>"Sanctuary", a song by Madonna from the album Bedtime Stories (1994) samples the Herbie Hancock recording.</li> <li>In 1992 the popularization of Pygmy music spread with the release of Eric Mouquet and Michel Sanchez's Deep Forest. A percentage of the proceeds from each album were donated to the Pygmy Fund set up to aid Zaire's Pygmies. 
The album was nevertheless subject to controversy, as the project used samples recorded by ethnomusicologist Hugo Zemp without permission; further controversy was stirred by the lack of consideration given to the original performer - a Northern Malaitian woman named Afunakwa - by either party during the resultant legal battle.</li> <li>Also in 1992 Martin Cradick and Su Hart spent three months living with and recording the Baka in Cameroon. The result was the creation of the band Baka Beyond, the release of their collaboration with the Baka musicians, "Spirit of the Forest", alongside the album "Heart of the Forest", and a musical relationship that has lasted over twenty years. Proceeds from both these albums have returned to the Baka musicians through the charity Global Music Exchange, which continues to work with the Baka, helping them in their rapidly changing environment.</li> <li>Pianist Pierre-Laurent Aimard programmed recordings of Pygmy songs (performed by the Aka Pygmies) with works of contemporary composers György Ligeti and Steve Reich on his album African Rhythms (2003).</li> </ul> <h2 id="other-african-traditions" tabindex="-1">Other African traditions <a class="header-anchor" href="#other-african-traditions" aria-label="Permalink to "Other African traditions""></a></h2> <h3 id="mbube-and-isicathamiya-south-africa" tabindex="-1">Mbube and Isicathamiya (South Africa) <a class="header-anchor" href="#mbube-and-isicathamiya-south-africa" aria-label="Permalink to "Mbube and Isicathamiya (South Africa)""></a></h3> <p>Zulu a cappella singing styles featuring complex harmonies.</p> <youtube-embed video="FclwRECHoWc"/> <h3 id="khoikhoi-and-san-music-southern-africa" tabindex="-1">Khoikhoi and San Music (Southern Africa) <a class="header-anchor" href="#khoikhoi-and-san-music-southern-africa" aria-label="Permalink to "Khoikhoi and San Music (Southern Africa)""></a></h3> <p>Features overlapping vocal parts and complex rhythms.</p> <youtube-embed video="dTL_TdONVBs"/> <h3 id="west-african-griot" tabindex="-1">West African Griot <a class="header-anchor" href="#west-african-griot" aria-label="Permalink to "West African Griot""></a></h3> <p>While primarily melodic, some griot performances include harmonized choruses.</p> <blockquote> <youtube-embed video="Ig91Z0-rBfo"/> <p>Sona Jobarteh is the first female Kora virtuoso to come from a West African Griot family. The Kora is one of the most important instruments belonging to the Manding peoples of West Africa (Gambia, Senegal, Mali, Guinea and Guinea-Bissau). It belongs exclusively to griot families, and usually only men who are born into these families have the right to take up the instrument professionally.
Sona Jobarteh combines various genres of African music and Western musical elements.</p> </blockquote> <youtube-embed video="--q1j2PExpE"/> ]]></content:encoded> <enclosure url="https://chromatone.center/Pythagoras_and_Philolaus.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Audio illusions]]></title> <link>https://chromatone.center/theory/sound/psychoacoustics/illusions/</link> <guid>https://chromatone.center/theory/sound/psychoacoustics/illusions/</guid> <pubDate>Wed, 10 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Deeper explorations of subtle psychoacoustic effects]]></description> <content:encoded><![CDATA[<youtube-embed video="Sn07AMCfaAI" /><youtube-embed video="OiW8gzBGz1A" /><youtube-embed video="fBMli2YAR8k" /><youtube-embed video="TVsMiSrlSSc" /><youtube-embed video="WMHyYCk7OqE" /><youtube-embed video="YQNsCg4z6L8" />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI Radar]]></title> <link>https://chromatone.center/practice/midi/radar/</link> <guid>https://chromatone.center/practice/midi/radar/</guid> <pubDate>Tue, 09 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Circular MIDI visualisation experiment]]></description> <content:encoded><![CDATA[<client-only > <midi-radar style="position: sticky; top: 0;" /><midi-panel class=" mx-2 max-w-55ch" /></client-only> <div class="info custom-block"><p class="custom-block-title">INFO</p> <h3 id="see-all-the-midi-signals-on-the-clock" tabindex="-1">See all the MIDI signals on the clock <a class="header-anchor" href="#see-all-the-midi-signals-on-the-clock" aria-label="Permalink to "See all the MIDI signals on the clock""></a></h3> <p>Press play on your sequencer to run the radar from the incoming MIDI clock signal, or just press <code>spacebar</code> to start the internal metronome that will drive the radar clocks.</p> <p>Drag up and down across the circle to adjust the temporal zoom - the higher the zoom, the longer the loop (from one to 8 measures).</p> <p>Use the MIDI channel filter section to single out and visualize the exact voices of a multichannel MIDI signal.</p> <p>You can toggle the internal synth on and off for use with your MIDI controller, or enable audio input monitoring for use with your synths and sequencers.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/geometry.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Chroma Grid]]></title> <link>https://chromatone.center/practice/chroma/grid/</link> <guid>https://chromatone.center/practice/chroma/grid/</guid> <pubDate>Tue, 02 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Write note sequences in flexible grids]]></description> <content:encoded><![CDATA[<p>This page has moved to <a href="https://chromatone.center/practice/sequencing/grid/" target="_blank" rel="noreferrer">https://chromatone.center/practice/sequencing/grid/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/grid.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Chroma Grid]]></title> <link>https://chromatone.center/practice/sequencing/grid/</link> <guid>https://chromatone.center/practice/sequencing/grid/</guid> <pubDate>Tue, 02 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[Compose phrases and motifs in flexible grids]]></description> <content:encoded><![CDATA[<client-only > <ChromaGrids/> <div class="flex flex-wrap"> <control-scale style="flex: 1 1 20px" /> <state-transport style="flex: 1 1 20px" /> </div> </client-only> 
]]></content:encoded> <enclosure url="https://chromatone.center/grid.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Musical form]]></title> <link>https://chromatone.center/theory/composition/form/</link> <guid>https://chromatone.center/theory/composition/form/</guid> <pubDate>Tue, 02 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[The structure of a musical composition or performance]]></description> <content:encoded><![CDATA[<p>In music, <a href="https://en.wikipedia.org/wiki/Musical_form" target="_blank" rel="noreferrer">form</a> refers to the structure of a musical composition or performance. In his book, Worlds of Music, Jeff Todd Titon suggests that a number of organizational elements may determine the formal structure of a piece of music, such as "the arrangement of musical units of rhythm, melody, and/or harmony that show repetition or variation, the arrangement of the instruments (as in the order of solos in a jazz or bluegrass performance), or the way a symphonic piece is orchestrated", among other factors.</p> <p>These organizational elements may be broken into smaller units called phrases, which express a musical idea but lack sufficient weight to stand alone. Musical form unfolds over time through the expansion and development of these ideas.</p> <p>Compositions that do not follow a fixed structure and rely more on improvisation are considered free-form. A fantasia is an example of this.</p> <h2 id="labeling-procedures" tabindex="-1">Labeling procedures <a class="header-anchor" href="#labeling-procedures" aria-label="Permalink to "Labeling procedures""></a></h2> <p>To aid in the process of describing form, musicians have developed a simple system of labeling musical units with letters. In his textbook "Listening to Music", professor Craig Wright writes,</p> <blockquote> <p>The first statement of a musical idea is designated A. Subsequent contrasting sections are labeled B, C, D, and so on. If the first or any other musical unit returns in varied form, then that variation is indicated by a superscript number-- A1 and B2, for example. Subdivisions of each large musical unit are shown by lowercase letters (a, b, and so on).</p> </blockquote> <p>Some writers also use a prime label (such as B', pronounced "B prime", or B'', pronounced "B double prime") to denote sections that are closely related, but vary slightly.</p> <h2 id="levels-of-organization" tabindex="-1">Levels of organization <a class="header-anchor" href="#levels-of-organization" aria-label="Permalink to "Levels of organization""></a></h2> <p>The founding level of musical form can be divided into two parts:</p> <ul> <li>The arrangement of the pulse into unaccented and accented beats, the cells of a measure that, when harmonized, may give rise to a motif or figure.</li> <li>The further organization of such a measure, by repetition and variation, into a true musical phrase having a definite rhythm and duration that may be implied in melody and harmony, defined, for example, by a long final note and a breathing space. This "phrase" may be regarded as the fundamental unit of musical form: it may be broken down into measures of two or three beats, but its distinctive nature will then be lost. Even at this level, the importance of the principles of repetition and contrast, weak and strong, climax and repose, can be seen. Thus, form may be understood on three levels of organization. 
For the purpose of this exposition, these levels can be roughly designated as passage, piece, and cycle.</li> </ul> <h3 id="passage" tabindex="-1">Passage <a class="header-anchor" href="#passage" aria-label="Permalink to "Passage""></a></h3> <p>The smallest level of construction concerns the way musical phrases are organized into musical sentences and "paragraphs" such as the verse of a song. This may be compared to, and is often decided by, the verse form or meter of the words or the steps of a dance.</p> <p>For example, the twelve bar blues is a specific verse form, while common meter is found in many hymns and ballads and, again, the Elizabethan galliard, like many dances, requires a certain rhythm, pace and length of melody to fit its repeating pattern of steps. Simpler styles of music may be more or less wholly defined at this level of form, which therefore does not differ greatly from the loose sense first mentioned and which may carry with it rhythmic, harmonic, timbral, occasional and melodic conventions.</p> <h3 id="piece-or-movement" tabindex="-1">Piece (or movement) <a class="header-anchor" href="#piece-or-movement" aria-label="Permalink to "Piece (or movement)""></a></h3> <p>The next level concerns the entire structure of any single self-contained musical piece or movement. If the hymn, ballad, blues or dance alluded to above simply repeats the same musical material indefinitely then the piece is said to be in strophic form overall. If it repeats with distinct, sustained changes each time, for instance in setting, ornamentation or instrumentation, then the piece is a theme and variations. If two distinctly different themes are alternated indefinitely, as in a song alternating verse and chorus or in the alternating slow and fast sections of the Hungarian czardas, then this gives rise to a simple binary form. If the theme is played (perhaps twice), then a new theme is introduced, the piece then closing with a return to the first theme, we have a simple ternary form.</p> <p>Great arguments and misunderstanding can be generated by such terms as 'ternary' and 'binary', as a complex piece may have elements of both at different organizational levels.[citation needed] A minuet, like any Baroque dance, generally had simple binary structure (AABB), however, this was frequently extended by the introduction of another minuet arranged for solo instruments (called the trio), after which the first was repeated again and the piece ended—this is a ternary form—ABA: the piece is binary on the lower compositional level but ternary on the higher. Organisational levels are not clearly and universally defined in western musicology, while words like "section" and "passage" are used at different levels by different scholars whose definitions, as Schlanker[full citation needed] points out, cannot keep pace with the myriad innovations and variations devised by musicians.</p> <h3 id="cycle" tabindex="-1">Cycle <a class="header-anchor" href="#cycle" aria-label="Permalink to "Cycle""></a></h3> <p>The grandest level of organization may be referred to as "cyclical form".[citation needed] It concerns the arrangement of several self-contained pieces into a large-scale composition. For example, a set of songs with a related theme may be presented as a song-cycle, whereas a set of Baroque dances were presented as a suite. The opera and ballet may organize song and dance into even larger forms. 
The symphony, generally considered to be one piece, nevertheless divides into multiple movements (which can usually work as a self-contained piece if played alone). This level of musical form, though it again applies and gives rise to different genres, takes more account of the methods of musical organisation used. For example: a symphony, a concerto and a sonata differ in scale and aim, yet generally resemble one another in the manner of their organization. The individual pieces which make up the larger form may be called movements.</p> <h2 id="common-forms-in-western-music" tabindex="-1">Common forms in Western music <a class="header-anchor" href="#common-forms-in-western-music" aria-label="Permalink to "Common forms in Western music""></a></h2> <p>Scholes suggested that European classical music had only six stand-alone forms: simple binary, simple ternary, compound binary, rondo, air with variations, and fugue (although musicologist Alfred Mann emphasized that the fugue is primarily a method of composition that has sometimes taken on certain structural conventions).</p> <p>Charles Keil classified forms and formal detail as "sectional, developmental, or variational."</p> <h3 id="sectional-form" tabindex="-1">Sectional form <a class="header-anchor" href="#sectional-form" aria-label="Permalink to "Sectional form""></a></h3> <p>This form is built from a sequence of clear-cut units that may be referred to by letters but also often have generic names such as introduction and coda, exposition, development and recapitulation, verse, chorus or refrain, and bridge. Sectional forms include:</p> <h4 id="strophic-form" tabindex="-1">Strophic form <a class="header-anchor" href="#strophic-form" aria-label="Permalink to "Strophic form""></a></h4> <p><a href="https://en.wikipedia.org/wiki/Strophic_form" target="_blank" rel="noreferrer">Strophic form</a> – also called verse-repeating form, chorus form, AAA song form, or one-part song form – is a song structure in which all verses or stanzas of the text are sung to the same music. Contrasting song forms include through-composed, with new music written for every stanza, and ternary form, with a contrasting central section.</p> <p>The term is derived from the Greek word στροφή, strophē, meaning "turn". It is the simplest and most durable of musical forms, extending a piece of music by repetition of a single formal section. This may be analyzed as "A A A...". This additive method is the musical analogue of repeated stanzas in poetry or lyrics and, in fact, where the text repeats the same rhyme scheme from one stanza to the next, the song's structure also often uses either the same or very similar material from one stanza to the next.</p> <p>A modified strophic form varies the pattern in some stanzas (A A' A"...) somewhat like a rudimentary theme and variations. Contrasting verse-chorus form is a binary form that alternates between two sections of music (ABAB), although this may also be interpreted as constituting a larger strophic verse-refrain form. While the terms 'refrain' and 'chorus' are often used interchangeably, 'refrain' may indicate a recurring line of identical melody and lyrics as a part of the verse (as in "Blowin' in the Wind": "...the answer my friend..."), while 'chorus' means an independent form section (as in "Yellow Submarine": "We all live in...").</p> <p>Many folk and popular songs are strophic in form, including the twelve-bar blues, ballads, hymns and chants. 
Examples include "Barbara Allen", "Erie Canal", and "Michael, Row the Boat Ashore". Also "Oh! Susanna" (A = verse & chorus).</p> <p>Many classical art songs are also composed in strophic form, from the 17th century French air de cour to 19th century German lieder and beyond. Haydn used the strophic variation form in many of his string quartets and a few of his symphonies, employed almost always in the slow second movement. Franz Schubert composed many important strophic lieder, including settings of both narrative poems and simpler, folk-like texts, such as his "Heidenröslein" and "Der Fischer". Several of the songs in his song cycle Die schöne Müllerin use strophic form.</p> <h4 id="medley-or-chain-form" tabindex="-1">Medley or "chain" form <a class="header-anchor" href="#medley-or-chain-form" aria-label="Permalink to "Medley or "chain" form""></a></h4> <p>Medley, potpourri or chain form is the extreme opposite, that of "unrelieved variation": it is simply an indefinite sequence of self-contained sections (ABCD...), sometimes with repeats (AABBCCDD...).</p> <p><a href="https://en.wikipedia.org/wiki/Potpourri_(music)" target="_blank" rel="noreferrer">Potpourri or Pot-Pourri</a> (/ˌpoʊpʊˈriː/; French, literally "putrid pot") is a kind of musical form structured as ABCDEF..., the same as medley or, sometimes, fantasia. It is often used in light, easy-going and popular types of music.</p> <p>This is a form of arrangement where the individual sections are simply juxtaposed with no strong connection or relationship. This type of form is organized by the principle of non-repetition. This is usually to be applied to a composition that consists of a string of favourite tunes, like a potpourri based on either some popular opera, operetta, or a collection of songs, dances, etc.</p> <p>The term has been in use since the beginning of the 18th century, or to be more specific, since it was used by the French music publisher Christophe Ballard (1641–1715) for the edition of a collection of pieces in 1711. In the 18th century the term was used in France for collections of songs which, with a thematic link, were sometimes given stage presentation. Later the term was used also for instrumental collections, like the "Potpourry français", a collection of originally unconnected dance pieces issued by the publisher Bouïn.</p> <p>Potpourris became especially popular in the 19th century. The opera overtures of French composers, such as François-Adrien Boïeldieu (1775–1834), Daniel Auber (1782–1871) and Ferdinand Hérold (1791–1833), or the Englishman Arthur Sullivan (1842–1900) belong to this type. Richard Strauss called the overture to his Die schweigsame Frau a "pot-pourri".</p> <p>The "overtures" to light modern stage works (e.g. operettas or musicals) are almost always written in potpourri form, using airs from the work in question. There is usually some structure to the order presented though. The opening is usually a fanfare or majestic theme (presumably the supposed hoped-for most popular song number), followed by a romantic number, then a comical number; and finally a return to the opening theme or a variation thereof.</p> <h4 id="binary-form" tabindex="-1">Binary form <a class="header-anchor" href="#binary-form" aria-label="Permalink to "Binary form""></a></h4> <p>The term <a href="https://en.wikipedia.org/wiki/Binary_form" target="_blank" rel="noreferrer">"Binary Form"</a> is used to describe a musical piece with two sections that are about equal in length. Binary Form can be written as AB or AABB. 
Using Greensleeves as an example, the first system is almost identical to the second system. We call the first system A and the second system A' (A prime) because of the slight difference in the last measure and a half. The next two systems (3rd and 4th) are almost identical as well, but present an entirely new musical idea compared to the first two systems. We call the third system B and the fourth system B' (B prime) because of the slight difference in the last measure and a half. As a whole, this piece of music is in Binary Form: AA'BB'.</p> <p>Binary form is a musical form in 2 related sections, both of which are usually repeated. Binary is also a structure used to choreograph dance. In music this is usually performed as A-A-B-B.</p> <p>Binary form was popular during the Baroque period, often used to structure movements of keyboard sonatas. It was also used for short, one-movement works. Around the middle of the 18th century, the form largely fell from use as the principal design of entire movements as sonata form and organic development gained prominence. When it is found in later works, it usually takes the form of the theme in a set of variations, or the Minuet, Scherzo, or Trio sections of a "minuet and trio" or "scherzo and trio" movement in a sonata, symphony, etc. Many larger forms incorporate binary structures, and many more complicated forms (such as the 18th-century sonata form) share certain characteristics with binary form.</p> <h5 id="structure" tabindex="-1">Structure <a class="header-anchor" href="#structure" aria-label="Permalink to "Structure""></a></h5> <p>A typical example of a piece in binary form has two large sections of roughly equal duration. The first will begin in a certain key, which will often (but not always) modulate to a closely related key. Pieces in a major key will usually modulate to the dominant (the fifth scale degree above the tonic). Pieces in a minor key will generally modulate to the relative major key (the key of the third scale degree above the minor tonic), or to the dominant minor. A piece in minor may also stay in the original key at the end of the first section, closing with an imperfect cadence.</p> <p>The second section of the piece begins in the newly established key, where it remains for an indefinite period of time. After some harmonic activity, the piece will eventually modulate back to its original key before ending.</p> <p>More often than not, especially in 18th-century compositions, the A and B sections are separated by double bars with repeat signs, meaning both sections were to be repeated.</p> <p>Binary form is usually characterized as having the form AB, though since both sections repeat, a more accurate description would be AABB. Others, however, prefer to use the label AA′. This second designation points to the fact that there is no great change in character between the two sections. The rhythms and melodic material used will generally be closely related in each section, and if the piece is written for a musical ensemble, the instrumentation will generally be the same.
This is in contrast to the use of verse-chorus form in popular music—the contrast between the two sections is primarily one of the keys used.</p> <h4 id="further-distinctions" tabindex="-1">Further distinctions <a class="header-anchor" href="#further-distinctions" aria-label="Permalink to "Further distinctions""></a></h4> <p>A piece in binary form can be further classified according to a number of characteristics:</p> <h5 id="simple-vs-rounded" tabindex="-1">Simple vs. rounded <a class="header-anchor" href="#simple-vs-rounded" aria-label="Permalink to "Simple vs. rounded""></a></h5> <p>Occasionally, the B section will end with a "return" of the opening material from the A section. This is referred to as rounded binary, and is labeled as ABA′. In rounded binary, the beginning of the B section is sometimes referred to as the "bridge", and will usually conclude with a half cadence in the original key. Rounded binary is not to be confused with ternary form, also labeled ABA—the difference being that, in ternary form, the B section contrasts completely with the A material as in, for example, a minuet and trio. Another important difference between the rounded and ternary form is that in rounded binary, when the "A" section returns, it will typically contain only half of the full "A" section, whereas ternary form will end with the full "A" section.</p> <p>Sometimes, as in the keyboard sonatas of Domenico Scarlatti, the return of the A theme may include much of the original A section in the tonic key, so much so that some of his sonatas can be regarded as precursors of sonata form.</p> <p>Rounded binary form is sometimes referred to as small ternary form.</p> <p>Rounded binary or minuet form:</p> <blockquote> <p>A :||: B A or A'<br> I(->V) :||: V(or other closely related) I</p> </blockquote> <p>If the B section lacks such a return of the opening A material, the piece is said to be in simple binary.</p> <p>Simple:</p> <blockquote> <p>A->B :||: A->B<br> I->V :||: V->I</p> </blockquote> <p>Slow-movement form:</p> <blockquote> <p>A' A"<br> I->V I->I</p> </blockquote> <p>Many examples of rounded binary are found among the church sonatas of Vivaldi including his Sonata No. 1 for Cello and Continuo, First Movement, while certain Baroque composers such as Bach and Handel used the form rarely.</p> <h5 id="sectional-vs-continuous" tabindex="-1">Sectional vs. continuous <a class="header-anchor" href="#sectional-vs-continuous" aria-label="Permalink to "Sectional vs. continuous""></a></h5> <p>If the A section ends with an Authentic (or Perfect) cadence in the original tonic key of the piece, the design is referred to as a sectional binary. This refers to the fact that the piece is in different tonal sections, each beginning in their own respective keys.</p> <p>If the A section ends with any other kind of cadence, the design is referred to as a continuous binary. This refers to the fact that the B section will "continue on" with the new key established by the cadence at the end of A.</p> <h5 id="symmetrical-vs-asymmetrical" tabindex="-1">Symmetrical vs. asymmetrical <a class="header-anchor" href="#symmetrical-vs-asymmetrical" aria-label="Permalink to "Symmetrical vs. asymmetrical""></a></h5> <p>If the A and B sections are roughly equal in length, the design is referred to as symmetrical.</p> <p>If the A and B sections are of unequal length, the design is referred to as asymmetrical. 
In such cases, the B section is usually substantially longer than the A section.</p> <p>The asymmetrical binary form becomes more common than the symmetrical type from about the time of Beethoven, and is almost routine in the main sections of Minuet and Trio or Scherzo and Trio movements in works from this period. In such cases, occasionally only the first section of the binary structure is marked to be repeated.</p> <p>Although most of Chopin's nocturnes are in an overall ternary form, quite often the individual sections (either the A, the B, or both) are in binary form, most often of the asymmetrical variety. If a section of this binary structure is repeated, it is in this case written out again in full, usually considerably varied, rather than enclosed between repeat signs.</p> <h5 id="balanced-binary" tabindex="-1">Balanced binary <a class="header-anchor" href="#balanced-binary" aria-label="Permalink to "Balanced binary""></a></h5> <p>Balanced binary is when the end of the first section and the end of the second section have analogous material and are organized in a parallel way.</p> <h4 id="ternary-form" tabindex="-1">Ternary form <a class="header-anchor" href="#ternary-form" aria-label="Permalink to "Ternary form""></a></h4> <p><a href="https://en.wikipedia.org/wiki/Ternary_form" target="_blank" rel="noreferrer">Ternary form</a> is a three-part musical form in which the third part repeats or at least contains the principal idea of the first part, represented as A B A. There are both simple and compound ternary forms. Da capo arias ("da capo" means "from the head") are usually in simple ternary form. A compound ternary form (or trio form) similarly involves an ABA pattern, but each section is itself either in binary (two sub-sections which may be repeated) or (simple) ternary form.</p> <p>Ternary form, sometimes called song form, is a three-part musical form consisting of an opening section (A), a following section (B) and then a repetition of the first section (A). It is usually schematized as A–B–A. Prominent examples include the da capo aria "The trumpet shall sound" from Handel's Messiah, Chopin's Prelude in D-Flat Major "Raindrop" (Op. 28), and the opening chorus of Bach's St John Passion.</p> <h5 id="simple-ternary-form" tabindex="-1">Simple ternary form <a class="header-anchor" href="#simple-ternary-form" aria-label="Permalink to "Simple ternary form""></a></h5> <p>In ternary form each section is self-contained both thematically and tonally (that is, each section contains distinct and complete themes), and ends with an authentic cadence. The B section is generally in a contrasting but closely related key, usually a perfect fifth above or the parallel minor of the home key of the A section (V or i); however, in many works of the Classical period, the B section stays in tonic but has contrasting thematic material. It usually also has a contrasting character; for example section A might be stiff and formal while the contrasting B section would be melodious and flowing.</p> <p>Baroque opera arias and a considerable number of Baroque sacred music arias were dominated by the da capo aria, which is in ABA form. A frequent model of the form began with a long A section in a major key, a short B section in a relative minor key mildly developing the thematic material of the A section and then a repetition of the A section. By convention in the third section (the repeat of section A after section B) soloists may add some ornamentation or short improvised variations.
In later classical music such changes may have been written into the score. In these cases the last section is sometimes labeled A’ or A1 to indicate that it is slightly different from the first A section.</p> <h5 id="compound-ternary-or-trio-form" tabindex="-1">Compound ternary or trio form <a class="header-anchor" href="#compound-ternary-or-trio-form" aria-label="Permalink to "Compound ternary or trio form""></a></h5> <p>In a trio form each section is a dance movement in binary form (two sub-sections which are each repeated) and a contrasting trio movement also in binary form with repeats. An example is the minuet and trio from Haydn's Surprise Symphony. The minuet consists of one section (1A) which is repeated and a second section (1B) which is also repeated. The trio section follows the same format (2A repeated and 2B repeated). The complete minuet is then played again at the end of the trio represented as: [(1A–1A–1B–1B) (2A–2A–2B–2B) (1A–1A–1B–1B)]. By convention, in the second rendition of the minuet the sections are not repeated, giving the scheme [(1A–1A–1B–1B) (2A–2A–2B–2B) (1A–1B)]. The trio may also be referred to as a double or as I/II, such as in Bach's polonaise and double (or Polonaise I/II) from his second orchestral suite and his bouree and double (or Bouree I/II) from his second English Suite for harpsichord.</p> <p>The scherzo and trio, which is identical in structure to other trio forms, developed in the late Classical and early Romantic periods. Examples include the scherzo and trio (second movement) from Beethoven's Symphony No. 9 and the scherzo and trio in Schubert's String Quintet. Another name for the latter is "composite ternary form".[citation needed]</p> <p>Trio form movements (especially scherzos) written from the early Romantic era sometimes include a short coda (a unique ending to complete the entire movement) and possibly a short introduction. The second movement of Beethoven's Symphony No. 9 is written in this style which can be diagrammed as [(INTRO) (1A–1A–1B–1B) (2A–2A–2B–2B) (1A–1B) (CODA)].</p> <p>Marches by John Philip Sousa and others follow this form, and the middle section is called the "trio". Polkas are also often in compound-ternary form.</p> <h5 id="quasi-compound-form" tabindex="-1">Quasi compound form <a class="header-anchor" href="#quasi-compound-form" aria-label="Permalink to "Quasi compound form""></a></h5> <p>Occasionally the A section or B section of a dance-like movement is not divided into two repeating parts. For example, in Haydn's String Quartet op. 76 no. 6, the minuet is in standard binary form (sections A and B) while the trio is in free form and not in two repeated sections. Haydn labeled the B section "Alternative", a label used in some Baroque pieces (though most such pieces were in proper compound ternary form).</p> <h4 id="ternary-form-within-a-ternary-form" tabindex="-1">Ternary form within a ternary form <a class="header-anchor" href="#ternary-form-within-a-ternary-form" aria-label="Permalink to "Ternary form within a ternary form""></a></h4> <p>In a complex ternary form each section is itself in ternary form in the scheme of [(A–B–A)(C–D–C)(A–B–A)]. By convention each part is repeated, and only on its first rendition: [(A–A–B–B–A)(C–C–D–D–C)(A–B–A)]. An example is the Impromptus (Op. 7) by Jan Voříšek.</p> <p>Expanded ternary forms are especially common among Romantic-era composers; for example, Chopin's "Military" Polonaise (Op. 40, No.
1) is in the form [(A–A–B–A–B–A)(C–C–D–C–D–C)(A–B–A)], where the A and B sections and C and D sections are repeated as a group, with the original theme returning at the end without repeats.</p> <h4 id="rondo-form" tabindex="-1">Rondo form <a class="header-anchor" href="#rondo-form" aria-label="Permalink to "Rondo form""></a></h4> <p><a href="https://en.wikipedia.org/wiki/Rondo" target="_blank" rel="noreferrer">Rondo form</a> has a recurring theme alternating with different (usually contrasting) sections called "episodes". It may be asymmetrical (ABACADAEA) or symmetrical (ABACABA). A recurring section, especially the main theme, is sometimes more thoroughly varied, or else one episode may be a "development" of it. A similar arrangement is the ritornello form of the Baroque concerto grosso. Arch form (ABCBA) resembles a symmetrical rondo without intermediate repetitions of the main theme.</p> <h4 id="variational-form" tabindex="-1">Variational form <a class="header-anchor" href="#variational-form" aria-label="Permalink to "Variational form""></a></h4> <p><a href="https://en.wikipedia.org/wiki/Variation_(music)" target="_blank" rel="noreferrer">Variational</a> forms are those in which variation is an important formative element.</p> <p><a href="https://en.wikipedia.org/wiki/Theme_and_Variations" target="_blank" rel="noreferrer">Theme and Variations</a>: a theme, which in itself can be of any shorter form (binary, ternary, etc.), forms the only "section" and is repeated indefinitely (as in strophic form) but is varied each time (A,B,A,F,Z,A), so as to make a sort of sectional chain form. An important variant of this, much used in 17th-century British music and in the Passacaglia and Chaconne, was that of the ground bass—a repeating bass theme or basso ostinato over and around which the rest of the structure unfolds, often, but not always, spinning polyphonic or contrapuntal threads, or improvising divisions and descants. This is said by Scholes (1977) to be the form par excellence of unaccompanied or accompanied solo instrumental music. The Rondo is often found with sections varied (AA1BA2CA3BA4) or (ABA1CA2B1A).</p> <h4 id="sonata-allegro-form" tabindex="-1">Sonata-allegro form <a class="header-anchor" href="#sonata-allegro-form" aria-label="Permalink to "Sonata-allegro form""></a></h4> <p>Sonata-allegro form (also <a href="https://en.wikipedia.org/wiki/Sonata_form" target="_blank" rel="noreferrer">sonata form</a> or first movement form) is typically cast in a greater ternary form, having the nominal subdivisions of Exposition, Development and Recapitulation. Usually, but not always, the "A" parts (Exposition and Recapitulation, respectively) may be subdivided into two or three themes or theme groups which are taken asunder and recombined to form the "B" part (the development)—thus, e.g. (AabB[dev. of a and/or b]A1ab1+coda).</p> <youtube-embed video="14dwegqniNg" /><p>The sonata form is "the most important principle of musical form, or formal type from the classical period well into the twentieth century." It is usually used as the form of the first movement in multi-movement works. So it is also called "first-movement form" or "sonata-allegro form" (because first movements are most commonly in allegro tempo).</p> <p>Each section of a sonata form movement has its own function:</p> <ul> <li>It may have an introduction at the beginning.</li> <li>Following the introduction, the exposition is the first required section. It lays out the thematic material in its basic version.
There are usually two themes or theme groups in the exposition, and they are often in contrasting styles and keys and connected by a transition. At the end of the exposition, there is a closing theme which concludes the section.</li> <li>The exposition is followed by the development section in which the material in the exposition is developed.</li> <li>After the development section, there is a returning section called recapitulation where the thematic material returns in the tonic key.</li> <li>At the end of the movement, there may be a coda, after the recapitulation.</li> </ul> <youtube-embed video="HzHS7QL-B-c" /><h2 id="forms-used-in-western-popular-music" tabindex="-1">Forms used in Western popular music <a class="header-anchor" href="#forms-used-in-western-popular-music" aria-label="Permalink to "Forms used in Western popular music""></a></h2> <p>Some forms are used predominantly within popular music, including genre-specific forms. Popular music forms are often derived from strophic form (AAA song form), 32-bar form (AABA song form), verse-chorus form (AB song form) and 12-bar blues form (AAB song form).</p> <h3 id="sectional-forms" tabindex="-1">Sectional forms <a class="header-anchor" href="#sectional-forms" aria-label="Permalink to "Sectional forms""></a></h3> <ul> <li>AABA a.k.a. American Popular</li> <li>AB a.k.a. Verse/Chorus <ul> <li>ABC a.k.a. Verse/Chorus/Bridge</li> </ul> </li> <li>ABAB</li> <li>ABAC a.k.a. Verse/Chorus/Verse/Bridge</li> <li>ABCD a.k.a. Through-composed</li> <li>Blues Song forms <ul> <li>AAB a.k.a. Twelve-bar blues</li> <li>8-Bar Blues</li> <li>16-Bar Blues</li> </ul> </li> </ul> <h3 id="extended-forms" tabindex="-1">Extended forms <a class="header-anchor" href="#extended-forms" aria-label="Permalink to "Extended forms""></a></h3> <p>Extended forms are forms that have their roots in one of the forms above; however, they have been extended with additional sections. For example:</p> <ul> <li>AAAAA</li> <li>AABABA</li> </ul> <h3 id="compound-forms" tabindex="-1">Compound forms <a class="header-anchor" href="#compound-forms" aria-label="Permalink to "Compound forms""></a></h3> <p>Also called Hybrid song forms. Compound song forms blend together two or more song forms.</p> <h3 id="section-names-in-popular-music" tabindex="-1">Section names in popular music <a class="header-anchor" href="#section-names-in-popular-music" aria-label="Permalink to "Section names in popular music""></a></h3> <ul> <li>Introduction a.k.a. Intro</li> <li>Verse</li> <li>Refrain</li> <li>Pre-chorus / Rise / Climb</li> <li>Chorus</li> <li>Post-chorus</li> <li>Bridge</li> <li>Middle-Eight</li> <li>Solo / Instrumental Break</li> <li>Collision</li> <li>CODA / Outro</li> <li>Ad Lib (Often in CODA / Outro)</li> </ul> <h3 id="cyclical-forms" tabindex="-1">Cyclical forms <a class="header-anchor" href="#cyclical-forms" aria-label="Permalink to "Cyclical forms""></a></h3> <p>In the 13th century the song cycle emerged, which is a set of related songs (as the suite is a set of related dances). The oratorio took shape in the second half of the 16th century as a narrative recounted—rather than acted—by the singers.</p> <h3 id="matrix" tabindex="-1">Matrix <a class="header-anchor" href="#matrix" aria-label="Permalink to "Matrix""></a></h3> <p>In music, especially folk and popular music, a matrix is an element of variations which does not change.
The term was derived from its use in musical writings and from Arthur Koestler's The Act of Creation, in which creativity is defined as the bisociation of two sets of ideas or matrices. Musical matrices may be combined in any number, usually more than two, and may be — and must be for analysis — broken down into smaller ones. They may be intended by the composer and perceived by the listener, or they may not, and they may be purposefully ambiguous.</p> <p>The simplest examples given by van der Merwe are fixed notes, definite intervals, and regular beats, while the most complex given are the Baroque fugue, Classical tonality, and Romantic chromaticism. The following examples are some matrices which are part of "Pop Goes the Weasel":</p> <ul> <li>major mode</li> <li>6/8 time</li> <li>four-bar phrasing</li> <li>regular beat</li> <li>rhyming tune structure</li> <li>ending both halves of the tune with the same figure</li> <li>melodic climax</li> <li>perfect cadence</li> <li>three primary triads implied</li> </ul> <p>Co-ordinated matrices may possess "bound-upness" or "at-oddness", depending on the degree to which they are connected to each other or go their separate ways, respectively, and are more or less easy to reconcile. The matrices of the larger matrix known as sonata rondo form are more bound up than the matrices of rondo form, while African and Indian music feature more rhythmic at-oddness than European music's coinciding beats, and European harmony features more at-oddness (between the melody and bass) than the preceding organum. At-oddness is a matter of degree, and almost all at-odds matrices are partially bound up.</p> <h2 id="andalusian-cadence" tabindex="-1">Andalusian Cadence <a class="header-anchor" href="#andalusian-cadence" aria-label="Permalink to "Andalusian Cadence""></a></h2> <youtube-embed video="_g9r-zRcxI0" />]]></content:encoded> <enclosure url="https://chromatone.center/Lohse_Zwei_Themen_web.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Color]]></title> <link>https://chromatone.center/theory/color/</link> <guid>https://chromatone.center/theory/color/</guid> <pubDate>Mon, 01 Nov 2021 00:00:00 GMT</pubDate> <description><![CDATA[The features of human light perception and modern color theory]]></description> <content:encoded><![CDATA[<YoutubeEmbed video="srRI7yMjGz0" /><p>Here's the very start of our journey into visual music theory.
First we learn about the phenomenon of <a href="./light/">Light</a>, its sources and its main properties as an electromagnetic emission.</p> <p>Then we dive deep into the physiology of <a href="./perception/">Human color perception</a> that defines the way we interpret the colors of nature.</p> <p>The <a href="./models/">Color models</a> study shows the research and the maths behind modern methods of reproducing any given color on the screen or on printed media.</p> <p>We list the 12 <a href="./names/">Main color names</a> to make sure we can use them later as correspondents for the 12 notes.</p> <YoutubeEmbed video="1i8s8knHFTs" />]]></content:encoded> <enclosure url="https://chromatone.center/evie-s.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Theory]]></title> <link>https://chromatone.center/theory/</link> <guid>https://chromatone.center/theory/</guid> <pubDate>Sat, 30 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[All the knowledge of music becoming visible with the simple color coding system]]></description> <content:encoded><![CDATA[<blockquote> <p><strong>Chroma</strong> - from Greek <strong>khrōma</strong> - "surface of the body, skin, color of the skin," also used generically for "color"</p> </blockquote> <p> +</p> <blockquote> <p><strong>Tone</strong> - from Greek <strong>tonos</strong> "vocal pitch, raising of voice, accent, key in music," originally "a stretching, tightening, taut string".</p> </blockquote> <p>Welcome to the main research hub for establishing the Chromatone interpretation of basic and profound music theory concepts.</p> <p>We start from the very beginning - the physical world around us and the ways we can perceive and interpret it. What is <a href="./color/">Light and Color</a> and how do we see it? What is <a href="./sound/">Sound</a> and how do we hear it? This gives us the firm foundation for building more and more intricate structures on top of it.</p> <p>Some sounds are more musical than others and we soon find the importance of their frequency profile. <a href="./notes/">Notes</a> are born from distinguishing particular sound pitches. And then all the combinations arise.</p> <p>Two notes form <a href="./intervals/">Intervals</a> and three or more of them form <a href="./chords/">Chords</a>.
If there are too many sounds to be played simultaneously, but they still have pleasant relations to one another, we talk about <a href="./scales/">Scales</a>.</p> <p>Then we start organizing all these sound structures in time based on <a href="./rhythm/">Rhythm</a> and dive deep into <a href="./harmony/">Harmony</a> and <a href="./melody/">Melody</a> that evolve naturally into the whole <a href="./composition/">Composition</a> level.</p> <p>There's quite a lot of deep research here, and there are more <a href="./resources/">External resources</a> to explore for those who need to learn even more about music theory.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/manuel-nageli.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[RGB]]></title> <link>https://chromatone.center/practice/color/rgb/</link> <guid>https://chromatone.center/practice/color/rgb/</guid> <pubDate>Fri, 22 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Additive color mixer]]></description> <content:encoded><![CDATA[<client-only> <color-rgb class="max-h-100svh" style="position: sticky; top: 0;" /> </client-only> <div class="info custom-block"><p class="custom-block-title">INFO</p> <p>Mix Red, Green and Blue lights in the dark to get any given color accessible within this color space</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/rgb.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Afro-Cuban clave]]></title> <link>https://chromatone.center/theory/rhythm/system/clave/</link> <guid>https://chromatone.center/theory/rhythm/system/clave/</guid> <pubDate>Thu, 21 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[African, Cuban and Latin American rhythm patterns]]></description> <content:encoded><![CDATA[<beat-bars v-bind="afrocuban" /><h2 id="clave" tabindex="-1">Clave <a class="header-anchor" href="#clave" aria-label="Permalink to "Clave""></a></h2> <p>The clave (/ˈklɑːveɪ, kleɪv/; Spanish: [ˈklaβe]) is a rhythmic pattern used as a tool for temporal organization in Afro-Cuban music. In Spanish, clave literally means key, clef, code, or keystone. It is present in a variety of genres such as Abakuá music, rumba, conga, son, mambo, salsa, songo, timba and Afro-Cuban jazz. The five-stroke clave pattern represents the structural core of many Afro-Cuban rhythms.</p> <p>The clave pattern originated in sub-Saharan African music traditions, where it serves essentially the same function as it does in Cuba. In ethnomusicology, clave is also known as a key pattern, guide pattern, phrasing referent, timeline, or asymmetrical timeline. The clave pattern is also found in the African diaspora music of Haitian Vodou drumming, Afro-Brazilian music, African-American music, Louisiana Voodoo drumming, and Afro-Uruguayan music (candombe). The clave pattern (or hambone, as it is known in the United States) is used in North American popular music as a rhythmic motif or simply a form of rhythmic decoration.</p> <p>The historical roots of the clave are linked to transnational musical exchanges within the African diaspora. For instance, influences of the African “bomba” rhythm are reflected in the clave. In addition to this, the emphasis and role of the drum within the rhythmic patterns speaks further to these diasporic roots.</p> <p>The clave is the foundation of reggae, reggaeton, and dancehall. In this sense, it is the “heartbeat” that underlies the essence of these genres.
The rhythms and vibrations are universalized in that they demonstrate a shared cultural experience and knowledge of these roots. Ultimately, this embodies the diasporic transnational exchange.</p> <p>In considering the clave as this basis of cultural understanding, relation, and exchange, this speaks to the transnational influence and interconnectedness of various communities. This musical fusion is essentially what constitutes the flow and foundational “heartbeat” of a variety of genres.</p> <youtube-embed video="d1tiv0Ep0kA" /><h2 id="etymology" tabindex="-1">Etymology <a class="header-anchor" href="#etymology" aria-label="Permalink to "Etymology""></a></h2> <p>Clave is a Spanish word meaning 'code,' 'key,' as in key to a mystery or puzzle, or 'keystone,' the wedge-shaped stone in the center of an arch that ties the other stones together. Clave is also the name of the patterns played on claves: two hardwood sticks used in Afro-Cuban music ensembles.</p> <h2 id="the-key-to-afro-cuban-rhythm" tabindex="-1">The key to Afro-Cuban rhythm <a class="header-anchor" href="#the-key-to-afro-cuban-rhythm" aria-label="Permalink to "The key to Afro-Cuban rhythm""></a></h2> <p>The clave pattern holds the rhythm together in Afro-Cuban music. The two main clave patterns used in Afro-Cuban music are known in North America as son clave and the rumba clave. Both are used as <a href="https://en.wikipedia.org/wiki/Bell_pattern" target="_blank" rel="noreferrer">bell patterns</a> across much of Africa. Son and rumba clave can be played in either a triple-pulse (12/8 or 6/8) or duple-pulse (4/4, 2/4 or 2/2) structure. The contemporary Cuban practice is to write the duple-pulse clave in a single measure of 4/4. It is also written in a single measure in ethnomusicological writings about African music.</p> <p>Although they subdivide the beats differently, the 12/8 and 4/4 versions of each clave share the same pulse names. The correlation between the triple-pulse and duple-pulse forms of clave, as well as other patterns, is an important dynamic of sub-Saharan-based rhythm. Every triple-pulse pattern has its duple-pulse correlative.</p> <p>Both clave patterns are used in rumba. What we now call son clave (also known as Havana clave) used to be the key pattern played in Havana-style yambú and guaguancó. Some Havana-based rumba groups still use son clave for yambú. The musical genre known as son probably adopted the clave pattern from rumba when it migrated from eastern Cuba to Havana at the beginning of the 20th century.</p> <blockquote> <p>During the nineteenth century, African music and European music sensibilities were blended in original Cuban hybrids. Cuban popular music became the conduit through which sub-Saharan rhythmic elements were first codified within the context of European ('Western') music theory. The first written music rhythmically based on clave was the Cuban danzón, which premiered in 1879. The contemporary concept of clave with its accompanying terminology reached its full development in Cuban popular music during the 1940s. Its application has since spread to folkloric music as well. In a sense, the Cubans standardized their myriad rhythms, both folkloric and popular, by relating nearly all of them to the clave pattern. The veiled code of African rhythm was brought to light due to the clave’s omnipresence. Consequently, the term clave has come to mean both the five-stroke pattern and the total matrix it exemplifies. In other words, the rhythmic matrix is the clave matrix.
Clave is the key that unlocks the enigma; it de-codes the rhythmic puzzle. It is commonly understood that the actual clave pattern does not need to be played for the music to be 'in clave'.<br> — Peñalosa (2009)</p> </blockquote> <blockquote> <p>One of the most difficult applications of the clave is in the realm of composition and arrangement of Cuban and Cuban-based dance music. Regardless of the instrumentation, the music for all of the instruments of the ensemble must be written with a very keen and conscious rhythmic relationship to the clave . . . Any ‘breaks’ and/or ‘stops’ in the arrangements must also be ‘in clave’. If these procedures are not properly taken into consideration, then the music is 'out of clave' which, if not done intentionally, is considered an error. When the rhythm and music are ‘in clave,’ a great natural ‘swing’ is produced, regardless of the tempo. All musicians who write and/or interpret Cuban-based music must be ‘clave conscious,’ not just the percussionists.<br> — Santos (1986)</p> </blockquote> <youtube-embed video="LlDgRYKyQAE" /><h2 id="clave-theory" tabindex="-1">Clave theory <a class="header-anchor" href="#clave-theory" aria-label="Permalink to "Clave theory""></a></h2> <p>There are three main branches of what could be called clave theory.</p> <h3 id="cuban-popular-music" tabindex="-1">Cuban popular music <a class="header-anchor" href="#cuban-popular-music" aria-label="Permalink to "Cuban popular music""></a></h3> <p>First is the set of concepts and related terminology, which were created and developed in Cuban popular music from the mid-19th to the mid-20th centuries. In Popular Cuban Music, Emilio Grenet defines in general terms how the duple-pulse clave pattern guides all members of the music ensemble. An important Cuban contribution to this branch of music theory is the concept of the clave as a musical period, which has two rhythmically opposing halves. The first half is antecedent and moving, and the second half is consequent and grounded.</p> <h3 id="ethnomusicological-studies-of-african-rhythm" tabindex="-1">Ethnomusicological studies of African rhythm <a class="header-anchor" href="#ethnomusicological-studies-of-african-rhythm" aria-label="Permalink to "Ethnomusicological studies of African rhythm""></a></h3> <p>The second branch comes from the ethnomusicological studies of sub-Saharan African rhythm. In 1959, Arthur Morris Jones published his landmark work Studies in African Music, in which he identified the triple-pulse clave as the guide pattern for many pieces of music from ethnic groups across Africa. An important contribution of ethnomusicology to clave theory is the understanding that the clave matrix is generated by cross-rhythm.</p> <h3 id="the-3–2-2–3-clave-concept-and-terminology" tabindex="-1">The 3–2/2–3 clave concept and terminology <a class="header-anchor" href="#the-3–2-2–3-clave-concept-and-terminology" aria-label="Permalink to "The 3–2/2–3 clave concept and terminology""></a></h3> <p>The third branch comes from the United States. An important North American contribution to clave theory is the worldwide propagation of the 3–2/2–3 concept and terminology, which arose from the fusion of Cuban rhythms with jazz in New York City.</p> <p>Only in the last couple of decades have the three branches of clave theory begun to reconcile their shared and conflicting concepts. 
Thanks to the popularity of Cuban-based music and the vast amount of educational material available on the subject, many musicians today have a basic understanding of clave. Contemporary books that deal with clave, share a certain fundamental understanding of what clave means.</p> <blockquote> <p>Chris Washburne considers the term to refer to the rules that govern the rhythms played with the claves. Bertram Lehman regards the clave as a concept with wide-ranging theoretical syntactic implications for African music in general, and for David Peñalosa, the clave matrix is a comprehensive system for organizing music.<br> —Toussaint (2013)</p> </blockquote> <h2 id="mathematical-analysis" tabindex="-1">Mathematical analysis <a class="header-anchor" href="#mathematical-analysis" aria-label="Permalink to "Mathematical analysis""></a></h2> <p>In addition to these three branches of theory, clave has in recent years been thoroughly analyzed mathematically. The structure of clave can be understood in terms of cross-rhythmic ratios, above all, three-against-two (3:2). Godfried Toussaint, a Research Professor of Computer Science, has published a book and several papers on the mathematical analysis of clave and related African bell patterns. Toussaint uses geometry and the Euclidean algorithm as a means of exploring the significance of clave.</p> <h2 id="types" tabindex="-1">Types <a class="header-anchor" href="#types" aria-label="Permalink to "Types""></a></h2> <h3 id="son-clave" tabindex="-1">Son clave <a class="header-anchor" href="#son-clave" aria-label="Permalink to "Son clave""></a></h3> <p>Son clave has strokes on 1, 1a, 2&, 3&, 4.</p> <p>4/4:</p> <pre><code>1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . X . . . X . X . . . || </code></pre> <p>12/8:</p> <pre><code>1 & a 2 & a 3 & a 4 & a || X . X . X . . X . X . . || </code></pre> <p>The most common clave pattern used in Cuban popular music is called the son clave, named after the Cuban musical genre of the same name. Clave is the basic period, composed of two rhythmically opposed cells, one antecedent and the other consequent. Clave was initially written in two measures of 2/4 in Cuban music. When written this way, each cell or clave half is represented within a single measure.</p> <h3 id="three-side-two-side" tabindex="-1">Three-side / two-side <a class="header-anchor" href="#three-side-two-side" aria-label="Permalink to "Three-side / two-side""></a></h3> <p>The antecedent half has three strokes and is called the three-side of the clave. The consequent half (second measure above) of clave has two strokes and is called the two-side.</p> <blockquote> <p>Going only slightly into the rhythmic structure of our music we find that all its melodic design is constructed on a rhythmic pattern of two measures, as though both were only one, the first is antecedent, strong, and the second is consequent, weak.<br> — Grenet (1939)</p> </blockquote> <blockquote> <p>[With] clave... the two measures are not at odds, but rather, they are balanced opposites like positive and negative, expansive and contractive or the poles of a magnet. As the pattern is repeated, an alternation from one polarity to the other takes place creating the pulse and rhythmic drive. Were the pattern to be suddenly reversed, the rhythm would be destroyed as in a reversing of one magnet within a series... the patterns are held in place according to both the internal relationships between the drums and their relationship with clave... 
Should the drums fall out of clave (and in contemporary practice they sometimes do) the internal momentum of the rhythm will be dissipated and perhaps even broken.<br> — Amira and Cornelius (1992)</p> </blockquote> <h3 id="tresillo" tabindex="-1">Tresillo <a class="header-anchor" href="#tresillo" aria-label="Permalink to "Tresillo""></a></h3> <p>In Cuban popular music, the first three strokes of son clave are also known collectively as <a href="https://en.wikipedia.org/wiki/Tresillo_(rhythm)" target="_blank" rel="noreferrer">tresillo</a>, a Spanish word meaning triplet i.e. three almost equal beats in the same time as two main beats. However, in the vernacular of Cuban popular music, the term refers to the figure shown here. <a href="https://en.wikipedia.org/wiki/Contradanza" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Contradanza</a></p> <youtube-embed video="1QFU5tJc9ok" /><p><a href="https://www.thefader.com/2015/06/10/tresillo-club-music" target="_blank" rel="noreferrer">Tresillo club music @ The Fader</a></p> <h4 id="cinquillo" tabindex="-1">Cinquillo <a class="header-anchor" href="#cinquillo" aria-label="Permalink to "Cinquillo""></a></h4> <p>The cinquillo pattern is another common embellishment of tresillo. Cinquillo is used frequently in the Cuban contradanza (the "habanera") and the danzón. The figure is also a common bell pattern found throughout sub-Saharan Africa.</p> <h2 id="rumba-clave" tabindex="-1">Rumba clave <a class="header-anchor" href="#rumba-clave" aria-label="Permalink to "Rumba clave""></a></h2> <h3 id="the-rumba-clave-rhythm" tabindex="-1">The rumba clave rhythm <a class="header-anchor" href="#the-rumba-clave-rhythm" aria-label="Permalink to "The rumba clave rhythm""></a></h3> <p>Rumba clave has strokes on 1, 1a, 2a, 3&, 4.</p> <h4 id="_4-4" tabindex="-1">4/4 <a class="header-anchor" href="#_4-4" aria-label="Permalink to "4/4""></a></h4> <pre><code>1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . . X . . X . X . . . || </code></pre> <h4 id="_12-8" tabindex="-1">12/8 <a class="header-anchor" href="#_12-8" aria-label="Permalink to "12/8""></a></h4> <pre><code>1 & a 2 & a 3 & a 4 & a || X . X . . X . X . X . . || </code></pre> <p>The other main clave pattern is the rumba clave. Rumba clave is the key pattern used in Cuban rumba. The use of the triple-pulse form of the rumba clave in Cuba can be traced back to the iron bell (ekón) part in abakuá music. The form of rumba known as columbia is culturally and musically connected with abakuá which is an Afro Cuban cabildo that descends from the Kalabari of Cameroon. Columbia also uses this pattern. Sometimes 12/8 rumba clave is clapped in the accompaniment of Cuban batá drums. The 4/4 form of rumba clave is used in yambú, guaguancó and popular music.</p> <p>There is some debate as to how the 4/4 rumba clave should be notated for guaguancó and yambú. In actual practice, the third stroke on the three-side and the first stroke on the two-side often fall in rhythmic positions that do not fit neatly into music notation. Triple-pulse strokes can be substituted for duple-pulse strokes. Also, the clave strokes are sometimes displaced in such a way that they don't fall within either a triple-pulse or duple-pulse "grid". Therefore, many variations are possible.</p> <p>The first regular use of the rumba clave in Cuban popular music began with the mozambique, created by Pello el Afrikan in the early 1960s. 
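</p> <p>As a concrete illustration of the grids above (a minimal TypeScript sketch, not part of any source quoted here; the names and data shapes are ad-hoc), the duple-pulse son and rumba clave can be written as 16-pulse arrays, a cycle can be rotated into its 2–3 sequence, and the simple "spread k onsets as evenly as possible over n pulses" rule behind Toussaint's Euclidean analysis recovers the tresillo as E(3, 8):</p> <pre><code>// Pulse names: 1 e & a 2 e & a 3 e & a 4 e & a  ->  indices 0..15
type Grid = number[] // 1 = stroke, 0 = silent pulse

const sonClave: Grid   = [1,0,0,1,0,0,1,0, 0,0,1,0,1,0,0,0] // 1, 1a, 2&, 3&, 4
const rumbaClave: Grid = [1,0,0,1,0,0,0,1, 0,0,1,0,1,0,0,0] // 1, 1a, 2a, 3&, 4

// Render a grid in the same "X ." notation used above.
const render = (g: Grid) => g.map(v => (v ? 'X' : '.')).join(' ')

// 2–3 ("reversed") sequence: start the same cycle from its two-side.
const twoThree = (g: Grid): Grid => g.slice(8).concat(g.slice(0, 8))

// Euclidean-style rhythm: k onsets spread as evenly as possible over n pulses.
function euclid(k: number, n: number): Grid {
  return Array.from({ length: n }, (_, i) =>
    Math.floor((i * k) / n) !== Math.floor(((i - 1) * k) / n) ? 1 : 0)
}

console.log(render(sonClave))           // X . . X . . X . . . X . X . . .
console.log(render(twoThree(sonClave))) // the same cycle heard 2–3
console.log(render(euclid(3, 8)))       // X . . X . . X .  (tresillo, the three-side)
</code></pre> <p>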
When used in popular music (such as songo, timba or Latin jazz) rumba clave can be perceived in either a 3–2 or 2–3 sequence.</p> <h3 id="standard-bell-pattern" tabindex="-1">Standard bell pattern <a class="header-anchor" href="#standard-bell-pattern" aria-label="Permalink to "Standard bell pattern""></a></h3> <p>The seven-stroke standard bell pattern contains the strokes of both clave patterns. Some North American musicians call this pattern clave. Other North American musicians refer to the triple-pulse form as the 6/8 bell because they write the pattern in two measures of 6/8.</p> <p>Like clave, the standard pattern is expressed in both triple and duple-pulse. The standard pattern has strokes on: <strong>1, 1a, 2&, 2a, 3&, 4, 4a.</strong></p> <h4 id="_12-8-1" tabindex="-1">12/8 <a class="header-anchor" href="#_12-8-1" aria-label="Permalink to "12/8""></a></h4> <pre><code>1 & a 2 & a 3 & a 4 & a || X . X . X X . X . X . X || </code></pre> <h4 id="_4-4-1" tabindex="-1">4/4 <a class="header-anchor" href="#_4-4-1" aria-label="Permalink to "4/4""></a></h4> <pre><code>1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . X X . . X . X . . X || </code></pre> <p>The ethnomusicologist A.M. Jones observes that what we call son clave, rumba clave, and the standard pattern are the most commonly used key patterns (also called bell patterns, timeline patterns and guide patterns) in Sub-Saharan African music traditions and he considers all three to be basically the same pattern. Clearly, they are all expressions of the same rhythmic principles. The three key patterns are found within a large geographic belt extending from Mali in northwest Africa to Mozambique in southeast Africa.</p> <h3 id="_6-8-clave-as-used-by-north-american-musicians" tabindex="-1">"6/8 clave" as used by North American musicians <a class="header-anchor" href="#_6-8-clave-as-used-by-north-american-musicians" aria-label="Permalink to ""6/8 clave" as used by North American musicians""></a></h3> <p>In Afro-Cuban folkloric genres, the triple-pulse (12/8 or 6/8) rumba clave is the archetypal form of the guide pattern. Even when the drums are playing in duple-pulse (4/4), as in guaguancó, the clave is often played with displaced strokes that are closer to triple-pulse than duple-pulse. John Santos states: "The proper feel of this [rumba clave] rhythm, is closer to triple [pulse]."</p> <p>Conversely, in salsa and Latin jazz, especially as played in North America, 4/4 is the basic framework and 6/8 is considered something of a novelty and in some cases, an enigma. The cross-rhythmic structure (multiple beat schemes) is frequently misunderstood to be metrically ambiguous. North American musicians often refer to Afro-Cuban 6/8 rhythm as a feel, a term usually reserved for those aspects of musical nuance not practically suited for analysis. As used by North American musicians, "6/8 clave" can refer to one of three types of triple-pulse key patterns.</p> <h4 id="triple-pulse-standard-pattern" tabindex="-1">Triple-pulse standard pattern <a class="header-anchor" href="#triple-pulse-standard-pattern" aria-label="Permalink to "Triple-pulse standard pattern""></a></h4> <p>When one hears triple-pulse rhythms in Latin jazz the percussion is most often replicating the Afro-Cuban rhythm bembé. The standard bell is the key pattern used in bembé, and so with compositions based on triple-pulse rhythms it is the seven-stroke bell, rather than the five-stroke clave, that is the most familiar to jazz musicians.
Consequently, some North American musicians refer to the triple-pulse standard pattern as "6/8 clave".</p> <h4 id="triple-pulse-rumba-clave" tabindex="-1">Triple-pulse rumba clave <a class="header-anchor" href="#triple-pulse-rumba-clave" aria-label="Permalink to "Triple-pulse rumba clave""></a></h4> <p>Some refer to the triple-pulse form of rumba clave as "6/8 clave". When rumba clave is written in 6/8 the four underlying main beats are counted: 1, 2, 1, 2.</p> <pre><code>1 & a 2 & a |1 & a 2 & a || X . X . . X |. X . X . . || </code></pre> <blockquote> <p>Claves... are not usually played in Afro-Cuban 6/8 feels... [and] the clave [pattern] is not traditionally played in 6/8 though it may be helpful to do so to relate the clave to the 6/8 bell pattern.<br> —Thress (1994)</p> </blockquote> <p>The main exceptions are: the form of rumba known as Columbia, and some performances of abakuá by rumba groups, where the 6/8 rumba clave pattern is played on claves.</p> <h4 id="triple-pulse-son-clave" tabindex="-1">Triple-pulse son clave <a class="header-anchor" href="#triple-pulse-son-clave" aria-label="Permalink to "Triple-pulse son clave""></a></h4> <p>Triple-pulse son clave is the least common form of clave used in Cuban music. It is, however, found across an enormously vast area of sub-Saharan Africa. The first published example (1920) of this pattern identified it as a hand-clap part accompanying a song from Mozambique.</p> <h2 id="cross-rhythm-and-the-correct-metric-structure" tabindex="-1">Cross-rhythm and the correct metric structure <a class="header-anchor" href="#cross-rhythm-and-the-correct-metric-structure" aria-label="Permalink to "Cross-rhythm and the correct metric structure""></a></h2> <p>Because 6/8 clave-based music is generated from cross-rhythm, it is possible to count or feel the 6/8 clave in several different ways. The ethnomusicologist Arthur Morris Jones correctly identified the importance of this key pattern, but he mistook its accents as indicators of meter rather than the counter-metric phenomena they are. Similarly, while Anthony King identified the triple-pulse "son clave" as the ‘standard pattern’ in its simplest and most basic form, he did not correctly identify its metric structure. King represented the pattern in a polymetric 7+5/8 time signature.</p> <p>It wasn't until African musicologists like C.K. Ladzekpo entered into the discussion in the 1970s and 80s that the metric structure of sub-Saharan rhythm was unambiguously defined. The writings of Victor Kofi Agawu and David Locke must also be mentioned in this regard.</p> <p>Observing the dancer's steps almost always reveals the main beats of the music. Because the main beats are usually emphasized in the steps and not the music, it is often difficult for an "outsider" to feel the proper metric structure without seeing the dance component. Kubik states: "To understand the emotional structure of any music in Africa, one has to look at the dancers as well and see how they relate to the instrumental background" (2010: 78).</p> <blockquote> <p>For cultural insiders, identifying the... ‘dance feet’ occurs instinctively and spontaneously. Those not familiar with the choreographic supplement, however, sometimes have trouble locating the main beats and expressing them in movement. Hearing African music on recordings alone without prior grounding in its dance-based rhythms may not convey the choreographic supplement. 
Not surprisingly, many misinterpretations of African rhythm and meter stem from a failure to observe the dance.<br> — Agawu, (2003)[59]</p> </blockquote> <h2 id="controversy-over-use-and-origins" tabindex="-1">Controversy over use and origins <a class="header-anchor" href="#controversy-over-use-and-origins" aria-label="Permalink to "Controversy over use and origins""></a></h2> <p>Perhaps the greatest testament to the musical vitality of the clave is the spirited debate it engenders, both in terms of musical usage and historical origins. This section presents examples from non-Cuban music, which some musicians (not all) hold to be representative of the clave. The most common claims, those of Brazilian and subsets of American popular music, are described below.</p> <h3 id="in-africa" tabindex="-1">In Africa <a class="header-anchor" href="#in-africa" aria-label="Permalink to "In Africa""></a></h3> <h4 id="a-widely-used-bell-pattern" tabindex="-1">A widely used bell pattern <a class="header-anchor" href="#a-widely-used-bell-pattern" aria-label="Permalink to "A widely used bell pattern""></a></h4> <p>Clave is a Spanish word and its musical usage as a pattern played on claves was developed in the western part of Cuba, particularly the cities of Matanzas and Havana. Some writings have claimed that the clave patterns originated in Cuba. One frequently repeated theory is that the triple-pulse African bell patterns morphed into duple-pulse forms as a result of the influence of European musical sensibilities. "The duple meter feel [of 4/4 rumba clave] may have been the result of the influence of marching bands and other Spanish styles..."— Washburne (1995).</p> <p>However, the duple-pulse forms have existed in sub-Saharan Africa for centuries. The patterns the Cubans call clave are two of the most common bell parts used in Sub-Saharan African music traditions. Natalie Curtis, A.M. Jones, Anthony King and John Collins document the triple-pulse forms of what we call “son clave” and “rumba clave” in West, Central, and East Africa. Francis Kofi and C.K. Ladzekpo document several Ghanaian rhythms that use the triple or duple-pulse forms of "son clave". Percussion scholar royal hartigan identifies the duple-pulse form of "rumba clave" as a timeline pattern used by the Yoruba and Ibo of Nigeria, West Africa. He states that this pattern is also found in the high-pitched boat-shaped iron bell known as atoke played in the Akpese music of the Eve people of Ghana. There are many recordings of traditional African music where one can hear the five-stroke "clave" used as a bell pattern.</p> <h4 id="popular-dance-music" tabindex="-1">Popular dance music <a class="header-anchor" href="#popular-dance-music" aria-label="Permalink to "Popular dance music""></a></h4> <p>Cuban music has been popular in sub-Saharan Africa since the mid-twentieth century. To the Africans, clave-based Cuban popular music sounded both familiar and exotic. Congolese bands started doing Cuban covers and singing the lyrics phonetically. Soon, they were creating their original Cuban-like compositions, with lyrics sung in French or Lingala, a lingua franca of the western Congo region. The Congolese called this new music rumba, although it was based on the son. The Africans adapted guajeos to electric guitars and gave them their regional flavor. The guitar-based music gradually spread out from the Congo, increasingly taking on local sensibilities. 
This process eventually resulted in the establishment of several different distinct regional genres, such as soukous.</p> <h4 id="highlife" tabindex="-1">Highlife <a class="header-anchor" href="#highlife" aria-label="Permalink to "Highlife""></a></h4> <p>Highlife was the most popular genre in Ghana and Nigeria during the 1960s. This arpeggiated highlife guitar part is essentially a <a href="https://en.wikipedia.org/wiki/Guajeo" target="_blank" rel="noreferrer">guajeo</a>. The rhythmic pattern is known in Cuba as baqueteo. The pattern of attack-points is nearly identical to the 3–2 clave motif guajeo shown earlier in this article. The bell pattern known in Cuba as clave, is indigenous to Ghana and Nigeria, and is used in highlife.</p> <h4 id="afrobeat" tabindex="-1">Afrobeat <a class="header-anchor" href="#afrobeat" aria-label="Permalink to "Afrobeat""></a></h4> <p>Afrobeat guitar part is a variant of the 2–3 onbeat/offbeat motif. Even the melodic contour is guajeo-based. 2–3 claves are shown above the guitar for reference only. The clave pattern is not ordinarily played in afrobeat.</p> <h3 id="guide-patterns-in-cuban-versus-non-cuban-music" tabindex="-1">Guide-patterns in Cuban versus non-Cuban music <a class="header-anchor" href="#guide-patterns-in-cuban-versus-non-cuban-music" aria-label="Permalink to "Guide-patterns in Cuban versus non-Cuban music""></a></h3> <p>There is some debate as to whether or not clave, as it appears in Cuban music, functions in the same way as its sister rhythms in other forms of music (Brazilian, North American and African). Certain forms of Cuban music demand a strict relationship between the clave and other musical parts, even across genres. This same structural relationship between the guide-pattern and the rest of the ensemble is easily observed in many sub-Saharan rhythms, as well as rhythms from Haiti and Brazil. However, the 3–2/2–3 concept and terminology are limited to certain types of Cuban-based popular music and are not used in the music of Africa, Haiti, Brazil or in Afro-Cuban folkloric music. In American pop music, the clave pattern tends to be used as an element of rhythmic color, rather than a guide-pattern and as such is superimposed over many types of rhythms.</p> <h4 id="in-brazilian-music" tabindex="-1">In Brazilian music <a class="header-anchor" href="#in-brazilian-music" aria-label="Permalink to "In Brazilian music""></a></h4> <p>Both Cuba and Brazil imported Yoruba, Fon and Congolese slaves. Therefore, it is not surprising that we find the bell pattern the Cubans call clave in the Afro-Brazilian music of Macumba and Maculelê (dance). "Son clave" and "rumba clave" are also used as a tamborim part in some batucada arrangements. The structure of Afro-Brazilian bell patterns can be understood in terms of the clave concept (see below).</p> <p>Bell pattern 1 is used in maculelê (dance) and some Candomblé and Macumba rhythms. Pattern 1 is known in Cuba as son clave. Bell 2 is used in afoxê and can be thought of as pattern 1 embellished with four additional strokes. Bell 3 is used in batucada. 
Pattern 4 is the maracatu bell and can be thought of as pattern 1 embellished with four additional strokes.</p> <h4 id="bossa-nova-pattern" tabindex="-1">Bossa nova pattern <a class="header-anchor" href="#bossa-nova-pattern" aria-label="Permalink to "Bossa nova pattern""></a></h4> <p>The so-called "bossa nova clave" (or "Brazilian clave") has a similar rhythm to that of the son clave, but the second note on the two-side is delayed by one pulse (subdivision). The rhythm is typically played as a snare rim pattern in bossa nova music. The pattern is shown below in 2 4, as it is written in Brazil. In North American charts it is more likely to be written in cut-time.</p> <p>According to drummer Bobby Sanabria the Brazilian composer Antonio Carlos Jobim, who developed the pattern, considers it to be merely a rhythmic motif and not a clave (guide pattern). Jobim later regretted that Latino musicians misunderstood the role of this bossa nova pattern.</p> <h4 id="other-brazilian-examples" tabindex="-1">Other Brazilian examples <a class="header-anchor" href="#other-brazilian-examples" aria-label="Permalink to "Other Brazilian examples""></a></h4> <p>The examples below are transcriptions of several patterns resembling the Cuban clave that is found in various styles of Brazilian music, on the ago-gô and surdo instruments.</p> <p>Legend: Time signature: 2/4; L=low bell, H=high bell, O = open surdo hit, X = muffled surdo hit, and | divides the measure:</p> <ul> <li>Style: Samba 3:2; LL.L.H.H|L.L.L.H. (More common 3:2: .L.L.H.H|L.L.L.H.)</li> <li>Style: Maracatu 3:2; LH.HL.H.|L.H.LH.H</li> <li>Style: Samba 3:2; L|.L.L..L.|..L..L.L|</li> <li>Instrument: 3rd Surdo 2:3; X...O.O.|X...OO.O</li> <li>Variation of samba style: Partido Alto 2:3; L.H..L.L|.H..L.L.</li> <li>Style: Maracatu 2:3; L.H.L.H.|LH.HL.H.</li> <li>Style: Samba-Reggae or Bossanova 3:2; O..O..O.|..O..O..</li> <li>Style: Ijexa 3:2; LL.L.LL.|L.L.L.L. (HH.L.LL.|H.H.L.L.)</li> </ul> <p>For 3rd example above, the clave pattern is based on a common accompaniment pattern played by the guitarist. B=bass note played by guitarist's thumb, C=chord played by fingers.</p> <pre><code>&|1 & 2 & 3 & 4 &|1 & 2 & 3 & 4 &|| C|B C . C B . C .|B . C . B C . C|| </code></pre> <p>The singer enters on the wrong side of the clave and the ago-gô player adjusts accordingly. This recording cuts off the first bar so that it sounds like the bell comes in on the third beat of the second bar. This is suggestive of a pre-determined rhythmic relationship between the vocal part and the percussion and supports the idea of a clave-like structure in Brazilian music.</p> <youtube-embed video="DhOLGoBCDjw" /><h3 id="in-jamaican-and-french-caribbean-music" tabindex="-1">In Jamaican and French Caribbean music <a class="header-anchor" href="#in-jamaican-and-french-caribbean-music" aria-label="Permalink to "In Jamaican and French Caribbean music""></a></h3> <p>The son clave rhythm is present in Jamaican mento music, and can be heard on 1950s-era recordings such as "Don’t Fence Her In", "Green Guava" or "Limbo" by Lord Tickler, "Mango Time" by Count Lasher, "Linstead Market/Day O" by The Wigglers, "Bargie" by The Tower Islanders, "Nebuchanezer" by Laurel Aitken and others. The Jamaican population is part of the same origin (Congo) as many Cubans, which perhaps explains the shared rhythm. It is also heard frequently in Martinique's biguine and Dominica's Jing ping. 
Just as likely however is the possibility that claves and the clave rhythm spread to Jamaica, Trinidad and the other small islands of the Caribbean through the popularity of Cuban son recordings from the 1920s onward.</p> <h3 id="experimental-clave-music" tabindex="-1">Experimental clave music <a class="header-anchor" href="#experimental-clave-music" aria-label="Permalink to "Experimental clave music""></a></h3> <h4 id="art-music" tabindex="-1">Art music <a class="header-anchor" href="#art-music" aria-label="Permalink to "Art music""></a></h4> <p>The clave rhythm and clave concept have been used in some modern art music ("classical") compositions. "Rumba Clave" by Cuban percussion virtuoso Roberto Vizcaiño has been performed in recital halls around the world. Another clave-based composition that has "gone global" is the snare drum suite "Cross" by Eugene D. Novotney.</p> <h4 id="odd-meter-clave" tabindex="-1">Odd meter "clave" <a class="header-anchor" href="#odd-meter-clave" aria-label="Permalink to "Odd meter "clave"""></a></h4> <p>Technically speaking, the term odd meter clave is an oxymoron. Clave consists of two even halves, in a divisive structure of four main beats. However, in recent years jazz musicians from Cuba and outside of Cuba have been experimenting with creating new "claves" and related patterns in various odd meters. Clave which is traditionally used in a divisive rhythm structure, has inspired many new creative inventions in an additive rhythm context.</p> <blockquote> <p>. . . I developed the concept of adjusting claves to other time signatures, with varying degrees of success. What became obvious to me quite quickly was that the closer I stuck to the general rules of clave the more natural the pattern sounded. Clave has a natural flow with a certain tension and resolves points. I found if I kept these points in the new meters they could still flow seamlessly, allowing me to play longer phrases. 
It also gave me many reference points and reduced my reliance on "one".<br> — Guilfoyle (2006: 10)</p> </blockquote> <h5 id="recommended-listening-for-odd-meter-clave" tabindex="-1">Recommended listening for odd-meter "clave" <a class="header-anchor" href="#recommended-listening-for-odd-meter-clave" aria-label="Permalink to "Recommended listening for odd-meter "clave"""></a></h5> <p>Here are some examples of recordings that use odd meter clave concepts.</p> <ul> <li>Dafnis Prieto About the Monks (Zoho).</li> <li>Sebastian Schunke Symbiosis (Pimienta Records).</li> <li>Paoli Mejias Mi Tambor (JMCD).</li> <li>John Benitez Descarga in New York (Khaeon).</li> <li>Deep Rumba A Calm in the Fire of Dances (American Clave).</li> <li>Nachito Herrera Bembe en mi casa (FS Music).</li> <li>Bobby Sanabria Quarteto Aché (Zoho).</li> <li>Julio Barretto Iyabo (3d).</li> <li>Michel Camilo Triangulo (Telarc).</li> <li>Samuel Torres Skin Tones (www.samueltorres.com).</li> <li>Horacio "el Negro" Hernandez Italuba (Universal Latino).</li> <li>Tony Lujan Tribute (Bella Records).</li> <li>Edward Simon La bikina (Mythology).</li> <li>Jorge Sylvester In the Ear of the Beholder (Jazz Magnet).</li> <li>Uli Geissendoerfer "The Extension" (CMO)</li> <li>Manuel Valera In Motion (Criss Cross Jazz).</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/yuting-gao.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[CMYK]]></title> <link>https://chromatone.center/practice/color/cmyk/</link> <guid>https://chromatone.center/practice/color/cmyk/</guid> <pubDate>Wed, 20 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Subtractive color mixer]]></description> <content:encoded><![CDATA[<client-only> <color-cmyk style="position: sticky; top: 0;" /></client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/cmyk.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Sound]]></title> <link>https://chromatone.center/theory/sound/</link> <guid>https://chromatone.center/theory/sound/</guid> <pubDate>Wed, 20 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[The ingenious hearing system and its medium]]></description> <content:encoded><![CDATA[<p>What's the ultimate <a href="./nature/">Nature of sound</a> and how do humans <a href="./hearing/">Percieve it aurally</a>? There's a whole science of <a href="./psychoacoustics/">Psychoacoustics</a> to answer these and even deeper questions about the sound and hearing phenomena.</p> <p>The main parameter that makes a sound musical is its <a href="./pitch/">Pitch</a> and its <a href="./timbre/">Overtones</a> composition.</p> <youtube-embed video="cD7YFUYLpDc" />]]></content:encoded> <enclosure url="https://chromatone.center/waves.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Modal interplay]]></title> <link>https://chromatone.center/theory/interplay/</link> <guid>https://chromatone.center/theory/interplay/</guid> <pubDate>Tue, 19 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Exploring the physical, physiological, neurological and psychological links between sight and hearing – the two main modalities of human perception.]]></description> <content:encoded><![CDATA[<p>Chromatone is a term composed of two parts. "Chroma" is an Ancient Greek word χρῶμα (khrôma) and stands for "color". "Tone" comes from Latin word tonus ("sound") derived from Ancient Greek τόνος (tónos, “strain, tension, pitch”). Together they form Chromatone – the colorful notation system. 
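</p> <p>A minimal TypeScript sketch of this mapping (an illustration only: it assumes an even 30° division of the HSL hue wheel starting from A = red, which is a simplification of the spectrum-based correspondence described in the next paragraph):</p> <pre><code>const NOTES = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#']

// Pitch class 0..11 (A = 0) to a color on the 12-part hue circle.
const noteColor = (pitchClass: number) =>
  `hsl(${(pitchClass % 12) * 30}, 100%, 50%)`

// Any frequency to its nearest chromatic pitch class, relative to A4 = 440 Hz,
// so the same circle of colors extends to arbitrary acoustic frequencies.
function pitchClassOf(freq: number): number {
  const semitonesFromA4 = Math.round(12 * Math.log2(freq / 440))
  return ((semitonesFromA4 % 12) + 12) % 12
}

console.log(noteColor(0))                    // A -> hsl(0, 100%, 50%), red
console.log(NOTES[pitchClassOf(261.63)])     // 'C'
console.log(noteColor(pitchClassOf(329.63))) // color assigned to E
</code></pre> <p>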
It's based on combining the two circular models – the octave equivalence in music and the color circle of visual arts. This gives not only the colors for each of 12 chromatic notes of modern music, but can be extended to derive a certain color for any acoustic frequency.</p> <h1 id="circle-of-colors-and-notes" tabindex="-1">Circle of colors and notes <a class="header-anchor" href="#circle-of-colors-and-notes" aria-label="Permalink to "Circle of colors and notes""></a></h1> <img src="./logo.svg"> <p>A is the lowest frequency note and red is the lowest frequency color. It’s the starting point. Then we divide the <a href="./spectrum/">Light spectrum</a> into 12 parts and get scientifically correspondent colors for every note in an octave. Now we can see the circle of musical intervals with our eyes and use it to remember all the musical semitones. It may be like an artificial <a href="./synesthesia/">Synesthesia</a> to be developed to improve music learning and performing skills. Learn more about history of <a href="./visual-music/">Visual Music with Michael Filimowicz</a>.</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Colored_music_notation" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Colored_music_notation</a></li> <li><a href="https://en.wikipedia.org/wiki/Visual_music" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Visual_music</a></li> <li><a href="https://www.researchgate.net/publication/334643020_Interactive_Visual_Music" target="_blank" rel="noreferrer">https://www.researchgate.net/publication/334643020_Interactive_Visual_Music</a></li> <li><a href="https://github.com/GeWu-Lab/awesome-audiovisual-learning" target="_blank" rel="noreferrer">https://github.com/GeWu-Lab/awesome-audiovisual-learning</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/logo.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Indian tala]]></title> <link>https://chromatone.center/theory/rhythm/system/tala/</link> <guid>https://chromatone.center/theory/rhythm/system/tala/</guid> <pubDate>Tue, 19 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[The Indian rhythmic language and art of konnakkol]]></description> <content:encoded><![CDATA[<beat-bars v-bind="tala" /><h2 id="tala" tabindex="-1">Tala <a class="header-anchor" href="#tala" aria-label="Permalink to "Tala""></a></h2> <p>A <a href="https://en.wikipedia.org/wiki/Tala_(music)" target="_blank" rel="noreferrer">Tala</a> (IAST tāla), sometimes spelled Titi or Pipi, literally means a "clap, tapping one's hand on one's arm, a musical measure". It is the term used in Indian classical music to refer to musical meter, that is any rhythmic beat or strike that measures musical time. The measure is typically established by hand clapping, waving, touching fingers on thigh or the other hand, verbally, striking of small cymbals, or a percussion instrument in the Indian subcontinental traditions. Along with raga which forms the fabric of a melodic structure, the tala forms the life cycle and thereby constitutes one of the two foundational elements of Indian music.</p> <p>Tala is an ancient music concept traceable to Vedic era texts of Hinduism, such as the Samaveda and methods for singing the Vedic hymns. The music traditions of the North and South India, particularly the raga and tala systems, were not considered as distinct till about the 16th century. 
There on, during the tumultuous period of Islamic rule of the Indian subcontinent, the traditions separated and evolved into distinct forms. The tala system of the north is called Hindustaani, while the south is called Carnaatic. However, the tala system between them continues to have more common features than differences.</p> <p>Tala in the Indian tradition embraces the time dimension of music, the means by which musical rhythm and form were guided and expressed. While a tala carries the musical meter, it does not necessarily imply a regularly recurring pattern. In the major classical Indian music traditions, the beats are hierarchically arranged based on how the music piece is to be performed. The most widely used tala in the South Indian system is Adi tala. In the North Indian system, the most common tala is teental.</p> <h2 id="etymology" tabindex="-1">Etymology <a class="header-anchor" href="#etymology" aria-label="Permalink to "Etymology""></a></h2> <p>Tāļa (ताळ) is a Sanskrit word, which means "being established". Adi tala is one of the most used talas in Carnatic music.</p> <h2 id="terminology-and-definitions" tabindex="-1">Terminology and definitions <a class="header-anchor" href="#terminology-and-definitions" aria-label="Permalink to "Terminology and definitions""></a></h2> <p>According to David Nelson – an Ethnomusicology scholar specializing in Carnatic music, a tala in Indian music covers "the whole subject of musical meter". Indian music is composed and performed in a metrical framework, a structure of beats that is a tala. The tala forms the metrical structure that repeats, in a cyclical harmony, from the start to end of any particular song or dance segment, making it conceptually analogous to meters in Western music. However, talas have certain qualitative features that classical European musical meters do not. For example, some talas are much longer than any classical Western meter, such as a framework based on 29 beats whose cycle takes about 45 seconds to complete when performed. Another sophistication in talas is the lack of "strong, weak" beat composition typical of the traditional European meter. In classical Indian traditions, the tala is not restricted to permutations of strong and weak beats, but its flexibility permits the accent of a beat to be decided by the shape of musical phrase.</p> <blockquote> <p><img src="./Narada.jpg" alt=""> A painting depicting the Vedic sage-musician Narada, with a tala instrument in his left hand.</p> </blockquote> <p>A tala measures musical time in Indian music. However, it does not imply a regular repeating accent pattern, instead its hierarchical arrangement depends on how the musical piece is supposed to be performed. A metric cycle of a tala contains a specific number of beats, which can be as short as 3 beats or as long as 128 beats. The pattern repeats, but the play of accent and empty beats are an integral part of Indian music architecture. Each tala has subunits. In other words, the larger cyclic tala pattern has embedded smaller cyclic patterns, and both of these rhythmic patterns provide the musician and the audience to experience the play of harmonious and discordant patterns at two planes. 
A musician can choose to intentionally challenge a pattern at the subunit level by contradicting the tala, explore the pattern in exciting ways, then bring the music and audience experience back to the fundamental pattern of cyclical beats.</p> <p>The tala as the time cycle, and the raga as the melodic framework, are the two foundational elements of classical Indian music. The raga gives an artist the ingredients palette to build the melody from sounds, while the tala provides her with a creative framework for rhythmic improvisation using time.</p> <p>The basic rhythmic phrase of a tala when rendered on a percussive instrument such as tabla is called a <strong>theka</strong>. The beats within each rhythmic cycle are called <strong>matras</strong>, and the first beat of any rhythmic cycle is called the <strong>sam</strong>. An empty beat is called <strong>khali</strong>. The subdivisions of a tala are called <strong>vibhagas</strong> or <strong>khands</strong>. In the two major systems of classical Indian music, the first count of any tala is called sam. The cyclic nature of a tala is a major feature of the Indian tradition, and this is termed as <strong>avartan</strong>. Both raga and tala are open frameworks for creativity and allow theoretically infinite number of possibilities, however, the tradition considers 108 talas as basic.</p> <youtube-embed video="xcPUnpOLDYM"/><h2 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h2> <p>The roots of tala and music in ancient India are found in the Vedic literature of Hinduism. The earliest Indian thought combined three arts, instrumental music (vadya), vocal music (gita) and dance (nrtta). As these fields developed, sangita became a distinct genre of art, in a form equivalent to contemporary music. This likely occurred before the time of Yāska (~500 BCE), since he includes these terms in his nirukta studies, one of the six Vedanga of ancient Indian tradition. Some of the ancient texts of Hinduism such as the Samaveda (~1000 BCE) are structured entirely to melodic themes, it is sections of Rigveda set to music.</p> <p>The Samaveda is organized into two formats. One part is based on the musical meter, another by the aim of the rituals. The text is written with embedded coding, where svaras (octave note) is either shown above or within the text, or the verse is written into parvans (knot or member). These markings identify which units are to be sung in a single breath, each unit based on multiples of one eighth. The hymns of Samaveda contain melodic content, form, rhythm and metric organization. This structure is, however, not unique or limited to Samaveda. The Rigveda embeds the musical meter too, without the kind of elaboration found in the Samaveda. For example, the <strong>Gayatri mantra</strong> contains three metric lines of exactly eight syllables, with an embedded ternary rhythm.</p> <p>According to Lewis Rowell – a professor of Music specializing on classical Indian music, the need and impulse to develop mathematically precise musical meters in the Vedic era may have been driven by the Indian use of oral tradition for transmitting vast amounts of Vedic literature. Deeply and systematically embedded structure and meters may have enabled the ancient Indians a means to detect and correct any errors of memory or oral transmission from one person or generation to the next. 
According to Michael Witzel,</p> <blockquote> <p>The Vedic texts were orally composed and transmitted, without the use of script, in an unbroken line of transmission from teacher to student that was formalized early on. This ensured an impeccable textual transmission superior to the classical texts of other cultures; it is, in fact, something like a tape-recording.... Not just the actual words, but even the long-lost musical (tonal) accent (as in old Greek or in Japanese) has been preserved up to the present.<br> — Michael Witzel</p> </blockquote> <p>The Samaveda also included a system of chironomy, or hand signals to set the recital speed. These were mudras (finger and palm postures) and jatis (finger counts of the beat), a system at the foundation of talas. The chants in the Vedic recital text, associated with rituals, are presented to be measured in matras and its multiples in the invariant ratio of 1:2:3. This system is also the basis of every tala.</p> <blockquote> <p><img src="./five-gandharva.jpg" alt=""> Five Gandharvas (celestial musicians) from 4th-5th century CE, northwest Indian subcontinent, carrying the four types of musical instruments. Gandharvas are discussed in Vedic era literature.</p> </blockquote> <p>In the ancient traditions of Hinduism, two musical genre appeared, namely Gandharva (formal, composed, ceremonial music) and Gana (informal, improvised, entertainment music). The Gandharva music also implied celestial, divine associations, while the Gana also implied singing. The Vedic Sanskrit musical tradition had spread widely in the Indian subcontinent, and according to Rowell, the ancient Tamil classics make it "abundantly clear that a cultivated musical tradition existed in South India as early as the last few pre-Christian centuries".</p> <p>The classic Sanskrit text Natya Shastra is at the foundation of the numerous classical music and dance of India. Before Natyashastra was finalized, the ancient Indian traditions had classified musical instruments into four groups based on their acoustic principle (how they work, rather than the material they are made of). These four categories are accepted as given and are four separate chapters in the Natyashastra, one each on <strong>stringed</strong> instruments (chordophones), <strong>hollow</strong> instruments (aerophones), <strong>solid</strong> instruments (idiophones), and <strong>covered</strong> instruments (membranophones). Of these, states Rowell, the idiophone in the form of "small bronze cymbals" were used for tala. Almost the entire chapter of Natyashastra on idiophones, by Bharata, is a theoretical treatise on the system of tala. Time keeping with idiophones was considered a separate function than that of percussion (membranophones), in the early Indian thought on music theory.</p> <p>The early 13th century Sanskrit text Sangitaratnakara (literally, "Ocean of Music and Dance"), by Śārṅgadeva patronized by King Sighana of the Yadava dynasty in Maharashtra, mentions and discusses ragas and talas. He identifies seven tala families, then subdivides them into rhythmic ratios, presenting a methodology for improvisation and composition that continues to inspire modern era Indian musicians. 
Sangitaratnakara is one of the most complete historic medieval era Hindu treatises on this subject that has survived into the modern era, that relates to the structure, technique and reasoning behind ragas and talas.</p> <p>The centrality and significance of Tala to music in ancient and early medieval India is also expressed in numerous temple reliefs, in both Hinduism and Jainism, such as through the carving of musicians with cymbals at the fifth century Pavaya temple sculpture near Gwalior, and the Ellora Caves.</p> <youtube-embed video="XyUxY9huI_s"/><h2 id="description" tabindex="-1">Description <a class="header-anchor" href="#description" aria-label="Permalink to "Description""></a></h2> <p>In the South Indian system (Carnatic), a full tala is a group of seven suladi talas. These are cyclic (avartana), with three parts (anga) traditionally written down with laghu, drutam and anudrutam symbols. Each tala is divided in two ways to perfect the musical performance, one is called kala (kind) and the other gati (pulse).</p> <p>Each repeated cycle of a tala is called an avartan. This is counted additively in sections (vibhag or anga) which roughly correspond to bars or measures but may not have the same number of beats (matra, akshara) and may be marked by accents or rests. So the Hindustani <strong>Jhoomra tal</strong> has 14 beats, counted 3+4+3+4, which differs from <strong>Dhamar tal</strong>, also of 14 beats but counted 5+2+3+4. The spacing of the vibhag accents makes them distinct, otherwise, again, since <strong>Rupak tal</strong> consists of 7 beats, two cycles of it of would be indistinguishable from one cycle of the related Dhamar tal. However the most common Hindustani tala, Teental, is a regularly-divisible cycle of four measures of four beats each.</p> <p>The first beat of any tala, called <strong>sam</strong> (pronounced as the English word 'sum' and meaning even or equal) is always the most important and heavily emphasised. It is the point of resolution in the rhythm where the percussionist's and soloist's phrases culminate: a soloist has to sound an important note of the raga there, and a North Indian classical dance composition must end there. However, melodies do not always begin on the first beat of the tala but may be offset, for example to suit the words of a composition so that the most accented word falls upon the sam. The term talli, literally "shift", is used to describe this offset in Tamil. A composition may also start with an anacrusis on one of the last beats of the previous cycle of the tala, called ateeta eduppu in Tamil.</p> <p>The tāla is indicated visually by using a series of rhythmic hand gestures called <strong>kriyas</strong> that correspond to the angas or "limbs", or vibhag of the tāla. These movements define the tala in Carnatic music, and in the Hindustani tradition too, when learning and reciting the tala, the first beat of any vibhag is known as <strong>tali</strong> ("clap") and is accompanied by a clap of the hands, while an "empty" (khali) vibhag is indicated with a sideways wave of the dominant clapping hand (usually the right) or the placing of the back of the hand upon the base hand's palm instead. But northern definitions of tala rely far more upon specific drum-strokes, known as bols, each with its own name that can be vocalized as well as written. 
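</p> <p>As a small TypeScript sketch of this additive structure (the data shapes and names are assumptions for illustration, not a standard format), a tala can be stored as its vibhag lengths plus one mark per vibhag (the sam, a clapped tali or the waved khali), using the X / numeral / 0 convention spelled out in the next sentence and in the table of common talas below:</p> <pre><code>interface Tala {
  name: string
  vibhags: number[] // beats per vibhag, counted additively
  marks: string[]   // one mark per vibhag: 'X' (sam), '2', '3', ... (tali), '0' (khali)
}

const jhoomra: Tala = { name: 'Jhoomra', vibhags: [3, 4, 3, 4], marks: ['X', '2', '0', '3'] }
const dhamar: Tala  = { name: 'Dhamar',  vibhags: [5, 2, 3, 4], marks: ['X', '2', '0', '3'] }

// Lay one avartan out beat by beat: the vibhag mark on its first matra, a dot elsewhere.
function avartan(t: Tala): string[] {
  return t.vibhags.flatMap((len, v) =>
    Array.from({ length: len }, (_, m) => (m === 0 ? t.marks[v] : '.')))
}

console.log(avartan(jhoomra).join(' ')) // X . . 2 . . . 0 . . 3 . . .
console.log(avartan(dhamar).join(' '))  // X . . . . 2 . 0 . . 3 . . .
// Both cycles have 14 beats; only the spacing of the marks keeps them distinct.
</code></pre> <p>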
In one common notation the sam is denoted by an 'X' and the khali, which is always the first beat of a particular vibhag, denoted by '0' (zero).</p> <p>A tala does not have a fixed tempo (laya) and can be played at different speeds. In Hindustani classical music a typical recital of a raga falls into two or three parts categorized by the quickening tempo of the music; <strong>Vilambit</strong> (delayed, i.e., slow), <strong>Madhya</strong> (medium tempo) and <strong>Drut</strong> (fast). Carnatic music adds an extra slow and fast category, categorised by divisions of the pulse; <strong>Chauka</strong> (1 stroke per beat), <strong>Vilamba</strong> (2 strokes per beat), <strong>Madhyama</strong> (4 strokes per beat), <strong>Drut</strong>(8 strokes per beat) and lastly <strong>Adi-drut</strong>(16 strokes per beat).</p> <p>Indian classical music, both northern and southern, have theoretically developed since ancient times numerous tala, though in practice some talas are very common, and some are rare.</p> <h2 id="in-carnatic-music" tabindex="-1">In Carnatic music <a class="header-anchor" href="#in-carnatic-music" aria-label="Permalink to "In Carnatic music""></a></h2> <p>Tala was introduced to Karnataka music by its founder Purandara Dasa. Carnatic music uses various classification systems of tālas such as the <strong>Chapu</strong> (4 talas), <strong>Chanda</strong> (108 talas) and <strong>Melakarta</strong> (72 talas). The <strong>Suladi Sapta Tāla</strong> system (35 talas) is used here, according to which there are seven families of tāla. A tāla cannot exist without reference to one of five jatis, differentiated by the length in beats of the laghu, thus allowing thirty-five possible tālas. With all possible combinations of tala types and laghu lengths, there are 5 x 7 = 35 talas having lengths ranging from 3 (Tisra-jati Eka tala) to 29 (sankeerna jati dhruva tala) aksharas. The seven tala families and the number of aksharas for each of the 35 talas are;</p> <table tabindex="0"> <thead> <tr> <th>Tala</th> <th>Anga Notation</th> <th>Tisra (3)</th> <th>Chatusra (4)</th> <th>Khanda (5)</th> <th>Misra (7)</th> <th>Sankeerna (9)</th> </tr> </thead> <tbody> <tr> <td>Dhruva</td> <td>lOll</td> <td>11</td> <td><strong>14</strong></td> <td>17</td> <td>23</td> <td>29</td> </tr> <tr> <td>Matya</td> <td>lOl</td> <td>8</td> <td><strong>10</strong></td> <td>12</td> <td>16</td> <td>20</td> </tr> <tr> <td>Rupaka</td> <td>Ol</td> <td>5</td> <td><strong>6</strong></td> <td>7</td> <td>9</td> <td>11</td> </tr> <tr> <td>Jhampa</td> <td>lUO</td> <td><strong>6</strong></td> <td>7</td> <td>8</td> <td>10</td> <td>12</td> </tr> <tr> <td>Triputa</td> <td>lOO</td> <td>7</td> <td><strong>8</strong></td> <td>9</td> <td>11</td> <td>13</td> </tr> <tr> <td>Ata</td> <td>llOO</td> <td>10</td> <td><strong>12</strong></td> <td>14</td> <td>18</td> <td>22</td> </tr> <tr> <td>Eka</td> <td>l</td> <td>3</td> <td><strong>4</strong></td> <td>5</td> <td>7</td> <td>9</td> </tr> </tbody> </table> <p>In practice, only a few talas have compositions set to them. The most common tala is <strong>Chaturasra-nadai Chaturasra-jaati Triputa tala</strong>, also called Adi tala (Adi meaning primordial in Sanskrit). Nadai is a term which means subdivision of beats. Many kritis and around half of the varnams are set to this tala. Other common talas include:</p> <ul> <li>Chaturasra-nadai Chaturasra-jaati Rupaka tala (or simply Rupaka tala). 
A large body of krtis is set to this tala.</li> <li>Khanda Chapu (a 10-count) and Misra Chapu (a 14-count), both of which do not fit very well into the suladi sapta tala scheme. Many padams are set to Misra Chapu, while there are also krtis set to both the above talas.</li> <li>Chatusra-nadai Khanda-jati Ata tala (or simply Ata tala). Around half of the varnams are set to this tala.</li> <li>Tisra-nadai Chatusra-jati Triputa tala (Adi Tala Tisra-Nadai). A few fast-paced kritis are set to this tala. As this tala is a twenty-four beat cycle, compositions in it can be and sometimes are sung in Rupaka talam.</li> </ul> <youtube-embed video="NvcILiwkaDc"/><h3 id="strokes" tabindex="-1">Strokes <a class="header-anchor" href="#strokes" aria-label="Permalink to "Strokes""></a></h3> <p>There are 6 main angas/strokes in talas;</p> <ul> <li><strong>Anudhrutam</strong>, a single beat, notated 'U', a downward clap of the open hand with the palm facing down.</li> <li><strong>Dhrutam</strong>, a pattern of 2 beats, notated 'O', a downward clap with the palm facing down followed by a second downward clap with the palm facing up.</li> <li><strong>Laghu</strong>, a pattern with a variable number of beats, 3, 4, 5, 7 or 9, depending on the jati. It is notated 'l' and consists of a downward clap with the palm facing down followed by counting from little finger to thumb and back, depending on the jati.</li> <li><strong>Guru</strong>, a pattern represented by a 8 beats . It is notated ‘8’ and consists of a downward clap with the palm facing down followed by circling movement of the right hand with closed fingers in the clockwise direction.</li> <li><strong>Plutham</strong>, a pattern of 12 beats notated ‘3’, it consists of a downward clap with the palm facing down followed by counting from little finger to the middle finger, a krishya (waving the hand towards the left hand side 4 times) and a sarpini (waving the hand towards the right 4 times)</li> <li><strong>Kakapadam</strong>, a pattern of 16 beats notated ’x’, it consists of a downward clap with the palm facing down followed by counting from little finger to the middle finger, a pathakam (waving the hand upwards 4 times),a krishya and a sarpini</li> </ul> <h3 id="jatis" tabindex="-1">Jatis <a class="header-anchor" href="#jatis" aria-label="Permalink to "Jatis""></a></h3> <p>Each tala can incorporate one of the five following jatis.</p> <table tabindex="0"> <thead> <tr> <th>Jati</th> <th>Number of Aksharas</th> </tr> </thead> <tbody> <tr> <td>Chaturasra</td> <td>4</td> </tr> <tr> <td>Thisra</td> <td>3</td> </tr> <tr> <td>Khanda</td> <td>5</td> </tr> <tr> <td>Misra</td> <td>7</td> </tr> <tr> <td>Sankeerna</td> <td>9</td> </tr> </tbody> </table> <p>Each tala family has a default jati associated with it; the tala name mentioned without qualification refers to the default jati.</p> <ul> <li><strong>Dhruva tala</strong> is by default chaturasra jati</li> <li><strong>Matya</strong> tala is chaturasra jati</li> <li><strong>Rupaka</strong> tala is chaturasra jati</li> <li><strong>Jhampa</strong> tala is misra jati</li> <li><strong>Triputa</strong> tala is tisra jati (chaturasra jati type is also known as Adi tala)</li> <li><strong>Ata</strong> tala is kanda jati</li> <li><strong>Eka</strong> tala is chaturasra jati</li> <li>For all the 72 melakarta talas and the 108 talas the jathi is mostly chatusram</li> </ul> <p>For example, one cycle of khanda-jati rupaka tala comprises a 2-beat dhrutam followed by a 5-beat laghu. The cycle is, thus, 7 aksharas long. 
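</p> <p>As a quick check of this arithmetic, here is a minimal TypeScript sketch (the helper and its names are illustrative assumptions, not an established API) that turns an anga pattern and a jati into the akshara count of one cycle, reproducing values from the table above:</p> <pre><code>// Beats per anga symbol: U = anudhrutam (1), O = dhrutam (2), l = laghu (jati-dependent).
const JATI_LAGHU: { [jati: string]: number } = {
  Tisra: 3, Chatusra: 4, Khanda: 5, Misra: 7, Sankeerna: 9,
}

function aksharas(angas: string, jati: string): number {
  const laghu = JATI_LAGHU[jati]
  return [...angas].reduce((sum, a) =>
    sum + (a === 'U' ? 1 : a === 'O' ? 2 : laghu), 0)
}

console.log(aksharas('Ol', 'Khanda'))      // 7  -> khanda-jati Rupaka tala
console.log(aksharas('lOO', 'Chatusra'))   // 8  -> Adi tala
console.log(aksharas('lOll', 'Sankeerna')) // 29 -> sankeerna-jati Dhruva tala

// Multiplying by the nadai (pulses per akshara) gives matras per avartana,
// the calculation carried out for chatusra nadai in the next paragraph:
console.log(aksharas('Ol', 'Khanda') * 4)  // 28 matras
</code></pre> <p>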
Chaturasra nadai khanda-jati Rupaka tala has 7 aksharam, each of which is 4 matras long; each avartana of the tala is 4 x 7 = 28 matras long. For Misra nadai Khanda-jati Rupaka tala, it would be 7 x 7 = 49 matra.</p> <h3 id="jati-nadai-in-tamil-nadaka-in-telugu-nade-in-kannada" tabindex="-1">Jati (nadai in Tamil, nadaka in Telugu, nade in Kannada) <a class="header-anchor" href="#jati-nadai-in-tamil-nadaka-in-telugu-nade-in-kannada" aria-label="Permalink to "Jati (nadai in Tamil, nadaka in Telugu, nade in Kannada)""></a></h3> <p>The number of maatras in an akshara is called the nadai. This number can be 3, 4, 5, 7 or 9, and take the same name as the jatis. The default nadai is Chatusram:</p> <table tabindex="0"> <thead> <tr> <th>Jati</th> <th>Maatras</th> <th>Phonetic representation of beats</th> </tr> </thead> <tbody> <tr> <td>Tisra</td> <td>3</td> <td>Tha Ki Ta</td> </tr> <tr> <td>Chatusra</td> <td>4</td> <td>Tha Ka Dhi Mi</td> </tr> <tr> <td>Khanda</td> <td>5</td> <td>Tha Ka Tha Ki Ta</td> </tr> <tr> <td>Misra</td> <td>7</td> <td>Tha Ki Ta Tha Ka Dhi Mi</td> </tr> <tr> <td>Sankeerna</td> <td>9</td> <td>Tha Ka Dhi Mi Tha Ka Tha Ki Ta</td> </tr> </tbody> </table> <p>Sometimes, pallavis are sung as part of a Ragam Thanam Pallavi exposition in some of the rarer, more complicated talas; such pallavis, if sung in a non-Chatusra-nadai tala, are called nadai pallavis. In addition, pallavis are often sung in chauka kale (slowing the tala cycle by a magnitude of four times), although this trend seems to be slowing.</p> <h3 id="kala" tabindex="-1">Kāla <a class="header-anchor" href="#kala" aria-label="Permalink to "Kāla""></a></h3> <p>Kāla refers to the change of tempo during a rendition of song, typically doubling up the speed. Onnaam kaalam is 1st speed, Erandaam kaalam is 2nd speed and so on. Erandaam kaalam fits in twice the number of aksharaas (notes) into the same beat, thus doubling the tempo. Sometimes, Kāla is also used similar to Layā, for example Madhyama Kālam or Chowka Kālam.</p> <youtube-embed video="MBvAYPvfmEk" /><h2 id="in-hindustani-music" tabindex="-1">In Hindustani music <a class="header-anchor" href="#in-hindustani-music" aria-label="Permalink to "In Hindustani music""></a></h2> <p>Talas have a vocalised and therefore recordable form wherein individual beats are expressed as phonetic representations of various strokes played upon the tabla. Various Gharanas (literally "Houses" which can be inferred to be "styles" – basically styles of the same art with cultivated traditional variances) also have their own preferences. For example, the Kirana Gharana uses Ektaal more frequently for Vilambit Khayal while the Jaipur Gharana uses Trital. Jaipur Gharana is also known to use Ada Trital, a variation of Trital for transitioning from Vilambit to Drut laya.</p> <p>The Khyal vibhag has no beats on the bayan, i.e. no bass beats this can be seen as a way to enforce the balance between the usage of heavy (bass dominated) and fine (treble) beats or more simply it can be thought of another mnemonic to keep track of the rhythmic cycle (in addition to Sam). The khali is played with a stressed syllable that can easily be picked out from the surrounding beats.</p> <p>Some rare talas even contain a "half-beat". For example, Dharami is an 11 1/2 beat cycle where the final "Ka" only occupies half the time of the other beats. 
This tala's 6th beat does not have a played syllable – in western terms it is a "rest".</p> <h3 id="common-hindustani-talas" tabindex="-1">Common Hindustani talas <a class="header-anchor" href="#common-hindustani-talas" aria-label="Permalink to "Common Hindustani talas""></a></h3> <p>Some talas, for example Dhamaar, Ek, Jhoomra and Chau talas, lend themselves better to slow and medium tempos. Others flourish at faster speeds, like Jhap or Rupak talas. Trital or Teental is one of the most popular, since it is as aesthetic at slower tempos as it is at faster speeds.</p> <p>There are many talas in Hindustani music, some of the more popular ones are:</p> <table tabindex="0"> <thead> <tr> <th>Name</th> <th>Beats</th> <th>Division</th> <th>Vibhaga</th> </tr> </thead> <tbody> <tr> <td>Tintal (or Trital or Teental)</td> <td>16</td> <td>4+4+4+4</td> <td>X 2 0 3</td> </tr> <tr> <td>Jhoomra</td> <td>14</td> <td>3+4+3+4</td> <td>X 2 0 3</td> </tr> <tr> <td>Tilwada</td> <td>16</td> <td>4+4+4+4</td> <td>X 2 0 3</td> </tr> <tr> <td>Dhamar</td> <td>14</td> <td>5+2+3+4</td> <td>X 2 0 3</td> </tr> <tr> <td>Ektal and Chautal</td> <td>12</td> <td>2+2+2+2+2+2</td> <td>X 0 2 0 3 4</td> </tr> <tr> <td>Jhaptal</td> <td>10</td> <td>2+3+2+3</td> <td>X 2 0 3</td> </tr> <tr> <td>Keherwa</td> <td>8</td> <td>4+4</td> <td>X 0</td> </tr> <tr> <td>Rupak (Mughlai/Roopak)</td> <td>7</td> <td>3+2+2</td> <td>X 2 3</td> </tr> <tr> <td>Dadra</td> <td>6</td> <td>3+3</td> <td>X 0</td> </tr> </tbody> </table> <h3 id="_72-melakarta-talas" tabindex="-1">72 melakarta talas <a class="header-anchor" href="#_72-melakarta-talas" aria-label="Permalink to "72 melakarta talas""></a></h3> <table class="wikitable"> <tbody><tr> <td><b>S.No</b> </td> <td><b>Name of Raga</b> </td> <td><b>Pattern of the symbols of angas</b> </td> <td><b>Aksharas</b> </td></tr> <tr> <td>1 </td> <td>Kanakaangi </td> <td>1 Anudhrutha, 1 Dhrutha, 1 Guru, 1 Laghu </td> <td>15 </td></tr> <tr> <td>2 </td> <td>Rathnaangi </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu </td> <td>20 </td></tr> <tr> <td>3 </td> <td>Ganamurthi </td> <td>1 Laghu, 2 Anudhruthas, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Guru, 1 Anudhrutha </td> <td>22 </td></tr> <tr> <td>4 </td> <td>Vanaspathi </td> <td>1 Laghu, 2 Anudhruthas, 1 Guru, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>22 </td></tr> <tr> <td>5 </td> <td>Maanavathi </td> <td>1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>20 </td></tr> <tr> <td>6 </td> <td>Dhanarupi </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Dhritha </td> <td>15 </td></tr> <tr> <td>7 </td> <td>Senaavathi </td> <td>1 Gurus, 1 Dhrutha Sekara Viraamam, 1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>25 </td></tr> <tr> <td>8 </td> <td>Hanumathodi </td> <td>1 Guru, 2 Anudhruthas, 1 Laghu, 1 Dhrutha, 1 Pluta, 1 Dhrutha, 1 Laghu </td> <td>34 </td></tr> <tr> <td>9 </td> <td>Dhenuka </td> <td>1 Pluta, 2 Anudhruthas, 1 Dhrutha </td> <td>16 </td></tr> <tr> <td>10 </td> <td>Natakapriya </td> <td>3 Dhruthas, 1 Laghu, 1 Dhrutha </td> <td>12 </td></tr> <tr> <td>11 </td> <td>Kokilapriya </td> <td>1 Guru, 1 Anudhrutha, 1 Dhrutha, 2 Laghus, 1 Dhrutha </td> <td>21 </td></tr> <tr> <td>12 </td> <td>Rupaavathi </td> <td>1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>19 </td></tr> <tr> <td>13 </td> <td>Gayakapriya </td> <td>1 Laghu, 1 Anudhrutha, 2 Dhruthas, 1 Laghu, 1 Dhrutha </td> <td>15 
</td></tr> <tr> <td>14 </td> <td>Vagula bharanam </td> <td>1 Laghu, 1 Anudhrutha, 2 Dhruthas, 1 Laghu, 1 Anudhrutha, 1 Dhrutha Sekara Viraamam, 1 Guru, 1 Dhrutha Sekara Viraamam </td> <td>28 </td></tr> <tr> <td>15 </td> <td>Maya malava goulam </td> <td>1 Laghu, 2 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Anudhrutha </td> <td>31 </td></tr> <tr> <td>16 </td> <td>Chakravaham </td> <td>1 Laghu, 1 Dhrutha Sekara Viraamam, 2 Laghus, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>24 </td></tr> <tr> <td>17 </td> <td>Suryakantham </td> <td>1 Guru, 1 Dhrutha Sekara Viraamam, 1 Dhrutha, 1 Guru, 1 Pluta </td> <td>33 </td></tr> <tr> <td>18 </td> <td>Haata kambari </td> <td>1 Guru, 2 Dhruthas, 1 Guru, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>27 </td></tr> <tr> <td>19 </td> <td>Jankaradh wani </td> <td>1 Pluta, 3 Dhrutha Sekara Viraamams, 1 Pluta, 1 Dhrutha, 1 Anudhrutha </td> <td>36 </td></tr> <tr> <td>20 </td> <td>Nata bhairavi </td> <td>1 Anudhrutha, 1 Dhrutha Sekara Viraamam, 1 Laghu, 2 Dhrutha Sekara Viraamams, 1 Laghu, 1 Anudhrutha </td> <td>19 </td></tr> <tr> <td>21 </td> <td>Keeravani </td> <td>2 Dhrutha Sekara Viraamams, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>18 </td></tr> <tr> <td>22 </td> <td>Karahara priya </td> <td>2 Dhrutha Sekara Viraamams, 1 Guru, 1 Anudhrutha, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha </td> <td>24 </td></tr> <tr> <td>23 </td> <td>Gowri manohari </td> <td>1 Laghu, 1 Dhrutha Sekara Viraamam, 2 Laghus, 1 Dhrutha, 2 Gurus, 1 Anudhrutha, 1 Dhrutha Sekara Viraamam </td> <td>37 </td></tr> <tr> <td>24 </td> <td>Varuna priya </td> <td>1 Laghu, 1 Anudhrutha, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>20 </td></tr> <tr> <td>25 </td> <td>Maara ranjani </td> <td>1 Laghu, 2 Dhrutha Sekara Viraamams, 2 Gurus, 2 Anudhruthas </td> <td>28 </td></tr> <tr> <td>26 </td> <td>Charukesi </td> <td>1 Guru, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha </td> <td>22 </td></tr> <tr> <td>27 </td> <td>Sarasaangi </td> <td>1 Guru, 1 Dhrutha Sekara Viraamam, 1 Pluta, 1 Dhrutha, 1 Laghu </td> <td>29 </td></tr> <tr> <td>28 </td> <td>Harikamboji </td> <td>1 Guru, 1 Anudhrutha, 1 Dhrutha Sekara Viraamam, 1 Guru, 1 Pluta, 1 Guru, 1 Anudhrutha </td> <td>41 </td></tr> <tr> <td>29 </td> <td>Dheera sankara bharanam </td> <td>1 Guru, 2 Dhrutha Sekara Viraamams, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Dhrutha, 2 Laghus, 1 Anudhrutha, 1 Dhrutha Sekara Viraamam, 1 Guru, 1 Dhrutha Sekara Viraamam </td> <td>50 </td></tr> <tr> <td>30 </td> <td>Nagaa nandhini </td> <td>1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha, 1 Guru, 2 Anudhruthas </td> <td>23 </td></tr> <tr> <td>31 </td> <td>Yagapriya </td> <td>1 Dhrutha Sekara Viraamam, 2 Laghus, 1 Dhrutha </td> <td>13 </td></tr> <tr> <td>32 </td> <td>Raga vardhini </td> <td>3 Laghus, 1 Anudhrutha, 1 Guru, 1 Dhrutha, 1 Anudhrutha </td> <td>24 </td></tr> <tr> <td>33 </td> <td>Gangeya bhushani </td> <td>1 Guru, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Dhrutha, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha </td> <td>38 </td></tr> <tr> <td>34 </td> <td>Vaga dheeshwari </td> <td>1 Laghu, 1 Dhrutha, 1 Laghu, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Guru, Dhrutha, 1 Dhrutha Sekara Viraamam </td> <td>34 </td></tr> <tr> <td>35 </td> <td>Soolini </td> <td>1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Anudhrutha </td> <td>12 </td></tr> <tr> <td>36 </td> <td>Chala Naata </td> 
<td>1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 2 Dhruthas </td> <td>15 </td></tr> <tr> <td>37 </td> <td>Chalagam </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Guru, 1 Anudhrutha </td> <td>22 </td></tr> <tr> <td>38 </td> <td>Jalaarnavam </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha, 2 Gurus, 1 Dhrutha </td> <td>32 </td></tr> <tr> <td>39 </td> <td>Jaalavarali </td> <td>1 Guru, 1 Dhrutha Sekara Viraamam, 2 Laghus, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha </td> <td>25 </td></tr> <tr> <td>40 </td> <td>Navaneetham </td> <td>1 Anudhrutha, 1 Laghu, 1 Anudhrutha, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>15 </td></tr> <tr> <td>41 </td> <td>Paavani </td> <td>1 Dhrutha Sekara Viraamam, 1 Laghu, 2 Anudhruthas </td> <td>9 </td></tr> <tr> <td>42 </td> <td>Raghupriya </td> <td>1 Dhrutha Sekara Viraamam, 1 Laghu, Anudhrutha, 1 Laghu, 1 Dhrutha </td> <td>14 </td></tr> <tr> <td>43 </td> <td>Kavaambothi </td> <td>1 Laghu, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Pluta, 1 Guru, 1 Anudhrutha </td> <td>36 </td></tr> <tr> <td>44 </td> <td>Bhavapriya </td> <td>1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha </td> <td>16 </td></tr> <tr> <td>45 </td> <td>Subha panthuvarali </td> <td>1 Laghu, 1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha </td> <td>35 </td></tr> <tr> <td>46 </td> <td>Shadvitha maargini </td> <td>1 Guru, 1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Guru, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>44 </td></tr> <tr> <td>47 </td> <td>Swarnaangi </td> <td>1 Guru, 1 Laghu, 1 Dhrutha, 1 Pluta, 1 Dhrutha, 1 Laghu </td> <td>32 </td></tr> <tr> <td>48 </td> <td>Divyamani </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>27 </td></tr> <tr> <td>49 </td> <td>Davalaambari </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>28 </td></tr> <tr> <td>50 </td> <td>Naama narayani </td> <td>1 Dhrutha, 1 Laghu, 2 Dhruthas, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>22 </td></tr> <tr> <td>51 </td> <td>Kaamavartha </td> <td>1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Pluta, 1 Anudhrutha </td> <td>27 </td></tr> <tr> <td>52 </td> <td>Raamapriya </td> <td>2 Laghus, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>16 </td></tr> <tr> <td>53 </td> <td>Gamanashrama </td> <td>2 Laghus, 1 Dhrutha, 1 Anudhrutha, 1 Laghu , 1 Dhrutha </td> <td>17 </td></tr> <tr> <td>54 </td> <td>Viswambari </td> <td>1 Laghu, 1 Anudhrutha, 1 Pluta, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>27 </td></tr> <tr> <td>55 </td> <td>Syamalangi </td> <td>1 Guru, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu </td> <td>25 </td></tr> <tr> <td>56 </td> <td>Shanmukha priya </td> <td>1 Pluta, 1 Laghu, 1 Dhrutha, 1Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha </td> <td>27 </td></tr> <tr> <td>57 </td> <td>Simhendra madhyamam </td> <td>1 Guru, 1 Kakapada, 1 Laghu, 1 Dhrutha, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Guru, 1 Dhrutha Sekara Viraamam </td> <td>69 </td></tr> <tr> <td>58 </td> <td>Hemaavathi </td> <td>1 Pluta, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>30 </td></tr> <tr> <td>59 </td> <td>Dharmavathi </td> <td>1 Pluta, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara 
Viraamam </td> <td>30 </td></tr> <tr> <td>60 </td> <td>Neethimathi </td> <td>1Dhrutha, 1Laghu, 1Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>22 </td></tr> <tr> <td>61 </td> <td>Kaanthamani </td> <td>2 Gurus, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>28 </td></tr> <tr> <td>62 </td> <td>Rishabhapriya </td> <td>1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Anudhrutha, 1 Laghu, 1 Dhrutha </td> <td>21 </td></tr> <tr> <td>63 </td> <td>Lathaangi </td> <td>1 Laghu, 1 Pluta, 1 Anudhrutha, 1 Laghu </td> <td>21 </td></tr> <tr> <td>64 </td> <td>Vachaspathi </td> <td>1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Guru, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>29 </td></tr> <tr> <td>65 </td> <td>Mecha Kalyani </td> <td>1 Guru, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Dhrutha, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>30 </td></tr> <tr> <td>66 </td> <td>Chithraambari </td> <td>1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Pluta, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam </td> <td>29 </td></tr> <tr> <td>67 </td> <td>Sucharithra </td> <td>1 Guru, 1 Laghu, 2 Dhrutha Sekara Viraamams, 1 Guru, 1 Anudhrutha </td> <td>27 </td></tr> <tr> <td>68 </td> <td>Jyothi swarupini </td> <td>1 Kakapada, 1 Anudhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Pluta, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha </td> <td>48 </td></tr> <tr> <td>69 </td> <td>Dathuvardhani </td> <td>1 Guru, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Anudhrutha, 1 Pluta, 1 Anudhrutha </td> <td>36 </td></tr> <tr> <td>70 </td> <td>Naasikha bhushani </td> <td>1 Dhrutha, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha, 1 Laghu, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha </td> <td>32 </td></tr> <tr> <td>71 </td> <td>Kosalam </td> <td>1 Guru, 1 Anudhrutha, 2 Gurus, 1 Anudhruthas </td> <td>26 </td></tr> <tr> <td>72 </td> <td>Rasikapriya </td> <td>1 Dhrutha Sekara Viraamam, 1 Guru, 1 Dhrutha Sekara Viraamam, 1 Laghu, 1 Dhrutha </td> <td>20 </td></tr></tbody></table> <youtube-embed video="2IBah0k836A" /><h3 id="_7-saptangachakram-7-angas" tabindex="-1">7 Saptangachakram (7 angas) <a class="header-anchor" href="#_7-saptangachakram-7-angas" aria-label="Permalink to "7 Saptangachakram (7 angas)""></a></h3> <table tabindex="0"> <thead> <tr> <th>Anga</th> <th>Symbol</th> <th>Aksharakala</th> </tr> </thead> <tbody> <tr> <td>Anudrutam</td> <td>U</td> <td>1</td> </tr> <tr> <td>Druta</td> <td>O</td> <td>2</td> </tr> <tr> <td>Druta-virama</td> <td>UO</td> <td>3</td> </tr> <tr> <td>Laghu (Chatusra-jati)</td> <td>l</td> <td>4</td> </tr> <tr> <td>Guru</td> <td>8</td> <td>8</td> </tr> <tr> <td>Plutam</td> <td>3</td> <td>12</td> </tr> <tr> <td>Kakapadam</td> <td>x</td> <td>16</td> </tr> </tbody> </table> <p><a href="https://www.mridangams.com/2007/09/tala-dhasa-pranas.html" target="_blank" rel="noreferrer">https://www.mridangams.com/2007/09/tala-dhasa-pranas.html</a></p> <hr> <h2 id="konnakkol" tabindex="-1">Konnakkol <a class="header-anchor" href="#konnakkol" aria-label="Permalink to "Konnakkol""></a></h2> <p>Konnakol (also spelled Konokol, Konakkol, Konnakkol) (Tamil: கொன்னக்கோல் koṉṉakkōl) (Malayalam: വായ്ത്താരി) is the art of performing percussion syllables vocally in South Indian Carnatic music. Konnakol is the spoken component of solkattu, which refers to a combination of konnakol syllables spoken while simultaneously counting the tala (meter) with the hand. 
It is comparable in some respects to bol in Hindustani music, but allows the composition, performance or communication of rhythms. A similar concept in Hindustani classical music is called padhant.</p> <youtube-embed video="DYEh5uXrL4w"/><h3 id="usage" tabindex="-1">Usage <a class="header-anchor" href="#usage" aria-label="Permalink to "Usage""></a></h3> <p>Musicians from a variety of traditions have found konnakol useful in their practice. Prominent among these is John McLaughlin, who led the Mahavishnu Orchestra and has long used konnakol as a compositional aid. V. Selvaganesh, who plays alongside McLaughlin in the group Remember Shakti, and Ranjit Barot, who plays with McLaughlin in the group 4th Dimension, are other noted konnakol virtuosos. Few of the prominent names performing konnakol are B K Chandramouli, Dr T K Murthy, B C Manjunath, Somashekhar Jois.</p> <p>Danish musician Henrik Andersen wrote the book Shortcut To Nirvana (2005) and the DVD Learn Konnakol (2014). Andersen was a student of Trilok Gurtu (India) and Pete Lockett (U.K.).</p> <p>Subash Chandran's disciple Dr Joel, who teaches konnakol in the U.K., is noted for incorporating it into rock and Western classical music, notably in a concerto commissioned in 2007 by the viola soloist Rivka Golani. The trio J G Laya (Chandran, Sri Thetakudi Harihara Vinayakram, and Dr Joel) showcased the konnakol of Chandran and helped the previously fading art form return to prominence in the 1980s. Chandran released an instructional DVD on konnakol in 2007. McLaughlin and Selvaganesh also released an instructional DVD on konnakol in 2007.</p> <youtube-embed video="mOMLRMfIYf0"/><p>Jazz saxophonist, konnakol artist, and composer Arun Luthra incorporates konnakol and Carnatic music rhythms (as well as Hindustani classical music rhythms) in his work. More recently, drummer Steve Smith has also incorporated Konnakol in his performances with Vital Information and his clinics.</p> <p>Konnakol should not be confused with the practice in Hindustani music (the classical music of northern India) of speaking tabla "bols", which indicate the finger placement to be used by a percussionist. By contrast, konnakol syllables are aimed at optimising vocal performance, and vastly outnumber any commonly used finger placements on mridangam or any other hand percussion instrument. Further, all the differences between Carnatic and north Indian rhythms apply equally to konnakol and tabla bols.</p> <p>The artist improvises within a structure that interrelates with the raga being played and within the talam preferred in the compositions. In mridangam, kanjira, or ghatam, the percussion is limited to physical characteristics of their structure and construction: the resonance of skin over jackfruit wood, clay shells, or clay pots. The human voice has a direct and dramatic way of expressing the percussive aspects in music directly.</p> <p>Trichy Shri R Thayumanavar gave a rebirth to konnakol. His disciple Andankoil AVS Sundararajan, a vocal and miruthangam Vidwan, is a konnakol expert, as is Mridangam Vidwan Shri T S Nandakumar.</p> <h3 id="solkattu" tabindex="-1">Solkattu <a class="header-anchor" href="#solkattu" aria-label="Permalink to "Solkattu""></a></h3> <table tabindex="0"> <thead> <tr> <th>Sol</th> <th>Sollu-Solkattus</th> <th>Jatti</th> </tr> </thead> <tbody> <tr> <td>Letter</td> <td>Word</td> <td>Sentence</td> </tr> </tbody> </table> <p>Konnakol uses rhythmic solfege for different subdivisions of the beat called "Solkattu." 
Common ones are:</p> <table tabindex="0"> <thead> <tr> <th>N</th> <th>Name</th> <th>Syllables</th> </tr> </thead> <tbody> <tr> <td>2</td> <td>Chatusra 1/2 Speed</td> <td>Tha Ka</td> </tr> <tr> <td>3</td> <td>Tisra</td> <td>Tha Ki Ta</td> </tr> <tr> <td>4</td> <td>Chatusra</td> <td>Tha Ka Dhi Mi</td> </tr> <tr> <td>5</td> <td>Khanda</td> <td>Tha Dhi Gi Na Thom</td> </tr> <tr> <td>6</td> <td>Tisra Double Speed</td> <td>Tha Ka Dhi Mi Tha Ka</td> </tr> <tr> <td>7</td> <td>Misra</td> <td>Tha Ka Di Mi Tha Ki Ta</td> </tr> <tr> <td>8</td> <td>Chatusra Double Speed</td> <td>Tha Ka Dhi Mi Tha Ka Jhu No</td> </tr> <tr> <td>9</td> <td>Sankirna</td> <td>Tha Ka Dhi Mi Ta Dhi Gi Na Thom</td> </tr> <tr> <td>10</td> <td>Khanda Double Speed</td> <td>Tha Ka Tha Ki Ta Tha Dhi Gi Na Thom, or Tha Ki Ta Dhim†2 Tha Dhi Gi Na Thom</td> </tr> </tbody> </table> <p>†'2' suffix signifies solfege syllable is held twice as long.</p> <youtube-embed video="ZuZF8BaOt58"/> <youtube-embed video="iurhjlBum0o"/> <h2 id="tihai-the-rhythmic-cadence" tabindex="-1">Tihai - the rhythmic cadence <a class="header-anchor" href="#tihai-the-rhythmic-cadence" aria-label="Permalink to "Tihai - the rhythmic cadence""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Tihai" target="_blank" rel="noreferrer">Tihai</a> (pronounced ti-'ha-yi) is a polyrhythmic technique found in Indian classical music, and often used to conclude a piece. Tihais can be either sung or played on an instrument. Tihais are sometimes used to distort the listeners’ perception of time, only to reveal the consistent underlying cycle at the sam.</p> <h3 id="definition" tabindex="-1">Definition <a class="header-anchor" href="#definition" aria-label="Permalink to "Definition""></a></h3> <p>Tihai is the repetition of specific group of BOL or BEATS by three times.</p> <h3 id="usage-1" tabindex="-1">Usage <a class="header-anchor" href="#usage-1" aria-label="Permalink to "Usage""></a></h3> <p>Typically, a tihai is used as a rhythmic cadence, i.e., a rhythmic variation that marks the end of a melody or rhythmic composition, creating a transition to another section of the music.</p> <youtube-embed video="0kJ4PA2yOSU" /><h3 id="structure" tabindex="-1">Structure <a class="header-anchor" href="#structure" aria-label="Permalink to "Structure""></a></h3> <p>The basic internal format of the tihai is three equal repetitions of a rhythmic pattern (or rhythmo-melodic pattern), interspersed with 2 (usually) equal rests.</p> <p>The ending point of the tihai is calculated to fall on a significant point in the rhythmic cycle (called tala), most often the first beat (called sum and pronounced "some"). The other most common ending point of a tihai is the beginning of the gat or bandish, which is often found several beats before the sum.</p> <p>If the three groupings are played with two groupings of rests, which are equally long, then the tihai is called Dumdaar.</p> <p>Otherwise, if there are no rests between the three groupings, then the tihai is called Bedumdaar (or for short, Bedum).</p> <p>Sometimes, a pattern is played on the tabla that is almost identical to a tihai, except for the fact that it ends on the beat just before the sum. 
Such patterns are known as anagat.</p> <youtube-embed video="HXLGO-yTgzo" />]]></content:encoded> <enclosure url="https://chromatone.center/ricky-singh.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[HSL, Lch, HWB]]></title> <link>https://chromatone.center/practice/color/hsl/</link> <guid>https://chromatone.center/practice/color/hsl/</guid> <pubDate>Mon, 18 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Circular color mixer]]></description> <content:encoded><![CDATA[<client-only> <color-hsl style="position: sticky; top: 0;" /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/lch.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Extract colors]]></title> <link>https://chromatone.center/practice/experiments/extract/</link> <guid>https://chromatone.center/practice/experiments/extract/</guid> <pubDate>Mon, 18 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Get prominent colors of any given picture. Locally.]]></description> <content:encoded><![CDATA[<ExtractColors/>]]></content:encoded> <enclosure url="https://chromatone.center/extractcolors.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Simplex]]></title> <link>https://chromatone.center/practice/experiments/simplex/</link> <guid>https://chromatone.center/practice/experiments/simplex/</guid> <pubDate>Mon, 18 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Simplex noise]]></description> <content:encoded><![CDATA[<Simplex/>]]></content:encoded> </item> <item> <title><![CDATA[Lab]]></title> <link>https://chromatone.center/practice/color/lab/</link> <guid>https://chromatone.center/practice/color/lab/</guid> <pubDate>Sat, 16 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Lightness + A and B components color mixer]]></description> <content:encoded><![CDATA[<client-only> <color-lab class="max-h-90svh" style="position: sticky; top: 0;" /></client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/lab.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Pulse, beat and tempo]]></title> <link>https://chromatone.center/theory/rhythm/pulse/</link> <guid>https://chromatone.center/theory/rhythm/pulse/</guid> <pubDate>Sat, 16 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Time feeling of humans]]></description> <content:encoded><![CDATA[<h2 id="pulse" tabindex="-1">Pulse <a class="header-anchor" href="#pulse" aria-label="Permalink to "Pulse""></a></h2> <p>In music theory, the pulse is a musical piece's either audible or implied series of uniformly spaced beats—in other words, uniformly timed instants of punctuating sound—and thus is the monotonous "tapping" that sets the tempo and that underlies or anchors the rhythm. Whereas the rhythm, being a musical creation that at times can intricately depart from the pulse, may become too difficult for an untrained listener to fully match, nearly any listener instinctively matches the pulse by simply tapping uniformly, despite rhythmic variations in timing of sounds atop the pulse. A performance may leave certain beats silent, not literally sounded, but the pulse remains as an abstraction. For example, even after a silent passage in a piece, the piece typically resumes on beat, as it were, by referencing the implied pulse, established before the silence.</p> <p>The pulse may be audible or implied. The <strong>tempo</strong> of the piece is the speed of the pulse. 
If a pulse becomes too fast it would become a drone; one that is too slow would be perceived as unconnected sounds. When the period of any continuous beat is faster than 8–10 per second or slower than 1 per 1.5–2 seconds, it cannot be perceived as such. "Musical" pulses are generally specified in the range 40 to 240 beats per minute. The pulse is not necessarily the fastest or the slowest component of the rhythm but the one that is perceived as basic. This is currently most often designated as a crotchet or quarter note when written.</p> <p>Pulse is related to and distinguished from rhythm (grouping), beats, and meter:</p> <blockquote> <p>A pulse is one of a series of regularly recurring, precisely equivalent ["undifferentiated"] stimuli. Like the tick of a metronome or a watch, pulses mark off equal units in the temporal continuum.... A sense of regular pulses, once established, tends to be continued in the mind and musculature of the listener, even though the sound has stopped.... The human mind tends to impose some sort of organization upon such equal pulses. ...<br> [Pulse is] an important part of musical experience. Not only is pulse necessary for the existence of meter ["there can be no meter without an underlying pulse to establish the units of measurement"], but it generally, though not always, underlies and reinforces rhythmic experience.<br> Meter is the measurement of the number of pulses between more or less regularly recurring accents. Therefore, in order for meter to exist, some of the pulses in a series must be accented—marked for consciousness—relative to others. When pulses are thus counted within a metric context, they are referred to as beats. — Leonard B. Meyer and Cooper (1960)</p> </blockquote> <youtube-embed video="2UphAzryVpY" /><h2 id="pulse-groups" tabindex="-1">Pulse groups <a class="header-anchor" href="#pulse-groups" aria-label="Permalink to "Pulse groups""></a></h2> <p>While ideal pulses are identical, when pulses are variously accented, this produces two- or three-pulse pulse groups such as <strong>strong-weak</strong> and <strong>strong-weak-weak</strong> and any longer group may be broken into such groups of two and three. In fact there is a natural tendency to perceptually group or differentiate an ideal pulse in this way. A repetitive, regularly accented pulse-group is called a <strong>metre</strong>.</p> <p>Pulses can occur at multiple metric levels. Pulse groups may be distinguished as synchronous, if all pulses on slower levels coincide with those on faster levels, and nonsynchronous, if not.</p> <p>An isochronal or equally spaced pulse on one level that uses varied pulse groups (rather than just one pulse group the whole piece) create a pulse on the (slower) multiple level that is non-isochronal (a stream of 2+3... at the eighth note level would create a pulse of a quarter note+dotted quarter note as its multiple level).</p> <h2 id="tempo" tabindex="-1">Tempo <a class="header-anchor" href="#tempo" aria-label="Permalink to "Tempo""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Tempo" target="_blank" rel="noreferrer">Tempo</a> (Italian for "time"; plural tempos, or tempi from the Italian plural) is the speed or pace of a given piece. In classical music, tempo is typically indicated with an instruction at the start of a piece (often using conventional Italian terms) and is usually measured in beats per minute (or BPM). 
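</p> <p>To make the beats-per-minute arithmetic concrete, here is a minimal JavaScript sketch; the helper names are only illustrative and not part of any particular library:</p> <pre><code class="language-js">// Minimal sketch: convert between BPM and the duration of one beat.
function secondsPerBeat(bpm) {
  return 60 / bpm;
}

function bpmFromBeatDuration(seconds) {
  return 60 / seconds;
}

secondsPerBeat(60);        // 1 second per beat
secondsPerBeat(120);       // 0.5 seconds per beat
bpmFromBeatDuration(0.5);  // 120 BPM
</code></pre> <p>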
In modern classical compositions, a "metronome mark" in beats per minute may supplement or replace the normal tempo marking, while in modern genres like electronic dance music, tempo will typically simply be stated in BPM.</p> <p>Tempo may be separated from articulation and meter, or these aspects may be indicated along with tempo, all contributing to the overall texture. While the ability to hold a steady tempo is a vital skill for a musical performer, tempo is changeable. Depending on the genre of a piece of music and the performers' interpretation, a piece may be played with slight tempo rubato or drastic variances. In ensembles, the tempo is often indicated by a conductor or by one of the instrumentalists, for instance the drummer.</p> <h3 id="measurement-of-the-tempo" tabindex="-1">Measurement of the tempo <a class="header-anchor" href="#measurement-of-the-tempo" aria-label="Permalink to "Measurement of the tempo""></a></h3> <p>While tempo is described or indicated in many different ways, including with a range of words (e.g., "Slowly", "Adagio" and so on), it is typically measured in beats per minute (BPM or BPM). For example, a tempo of 60 beats per minute signifies one beat per second, while a tempo of 120 beats per minute is twice as rapid, signifying one beat every 0.5 seconds. The note value of a beat will typically be that indicated by the denominator of the time signature. For instance, in 4/4 the beat will be a crotchet, or quarter note.</p> <p>This measurement and indication of tempo became increasingly popular during the first half of the 19th century, after Johann Nepomuk Maelzel invented the metronome. Beethoven was one of the first composers to use the metronome; in the 1810s he published metronomic indications for the eight symphonies he had composed up to that time.</p> <p>Instead of beats per minute, some 20th-century classical composers (e.g., Béla Bartók, Alberto Ginastera, and John Cage) specify the total playing time for a piece, from which the performer can derive tempo.</p> <p>With the advent of modern electronics, BPM became an extremely precise measure. Music sequencers use the BPM system to denote tempo. In popular music genres such as electronic dance music, accurate knowledge of a tune's BPM is important to DJs for the purposes of beatmatching.</p> <p>The speed of a piece of music can also be gauged according to measures per minute (mpm) or bars per minute (BPM), the number of measures of the piece performed in one minute. This measure is commonly used in ballroom dance music.</p> <h2 id="choosing-speed" tabindex="-1">Choosing speed <a class="header-anchor" href="#choosing-speed" aria-label="Permalink to "Choosing speed""></a></h2> <p>In different musical contexts, different instrumental musicians, singers, conductors, bandleaders, music directors or other individuals will select the tempo of a song or piece. In a popular music or traditional music group or band, the bandleader or drummer may select the tempo. In popular and traditional music, whoever is setting the tempo often counts out one or two bars in tempo. In some songs or pieces in which a singer or solo instrumentalist begins the work with a solo introduction (prior to the start of the full group), the tempo they set will provide the tempo for the group. In an orchestra or concert band, the conductor normally sets the tempo. In a marching band, the drum major may set the tempo. 
In a sound recording, in some cases a record producer may set the tempo for a song (although this would be less likely with an experienced bandleader).</p> <h2 id="basic-tempo-markings" tabindex="-1">Basic tempo markings <a class="header-anchor" href="#basic-tempo-markings" aria-label="Permalink to "Basic tempo markings""></a></h2> <p>Here follows a list of common tempo markings. The beats per minute (BPM) values are very rough approximations for 4/4 time.</p> <p>These terms have also been used inconsistently through time and in different geographical areas. One striking example is that Allegretto hastened as a tempo from the 18th to the 19th century: originally it was just above Andante, instead of just below Allegro as it is now. As another example, a modern largo is slower than an adagio, but in the Baroque period it was faster.</p> <p>From slowest to fastest:</p> <ul> <li><strong>Larghissimo</strong> – very, very slow (24 BPM and under)</li> <li><strong>Adagissimo</strong> – very slow (24–40 BPM)</li> <li><strong>Grave</strong> – very slow (25–45 BPM)</li> <li><strong>Largo</strong> – slow and broad (40–60 BPM)</li> <li><strong>Lento</strong> – slow (45–60 BPM)</li> <li><strong>Larghetto</strong> – rather slow and broad (60–66 BPM)</li> <li><strong>Adagio</strong> – slow with great expression (66–76 BPM)</li> <li><strong>Adagietto</strong> – slower than andante (72–76 BPM) or slightly faster than adagio (70–80 BPM)</li> <li><strong>Andante</strong> – at a walking pace (76–108 BPM)</li> <li><strong>Andantino</strong> – slightly faster than andante (although, in some cases, it can be taken to mean slightly slower than andante) (80–108 BPM)</li> <li><strong>Marcia moderato</strong> – moderately, in the manner of a march (83–85 BPM)</li> <li><strong>Andante moderato</strong> – between andante and moderato (thus the name) (92–112 BPM)</li> <li><strong>Moderato</strong> – at a moderate speed (108–120 BPM)</li> <li><strong>Allegretto</strong> – by the mid-19th century, moderately fast (112–120 BPM); see paragraph above for earlier usage</li> <li><strong>Allegro moderato</strong> – close to, but not quite allegro (116–120 BPM)</li> <li><strong>Allegro</strong> – fast, quick, and bright (120–156 BPM) (molto allegro is slightly faster than allegro, but always in its range; 124–156 BPM)</li> <li><strong>Vivace</strong> – lively and fast (156–176 BPM)</li> <li><strong>Vivacissimo</strong> – very fast and lively (172–176 BPM)</li> <li><strong>Allegrissimo</strong> or Allegro vivace – very fast (172–176 BPM)</li> <li><strong>Presto</strong> – very, very fast (168–200 BPM)</li> <li><strong>Prestissimo</strong> – even faster than presto (200 BPM and over)</li> </ul> <p>Additional terms:</p> <ul> <li><strong>A piacere</strong> – the performer may use their own discretion with regard to tempo and rhythm; literally "at pleasure"</li> <li><strong>Assai</strong> – (very) much</li> <li><strong>A tempo</strong> – resume previous tempo</li> <li><strong>Con grazia</strong> – with grace, or gracefully</li> <li><strong>Con moto</strong> – Italian for "with movement"; can be combined with a tempo indication, e.g., Andante con moto</li> <li><strong>Lamentoso</strong> – sadly, plaintively</li> <li><strong>L'istesso</strong>, L'istesso tempo, or Lo stesso tempo – at the same speed; L'istesso is used when the actual speed of the music has not changed, despite apparent signals to the contrary, such as changes in time signature or note length (half notes in 4/4 could change to whole notes in 2/2, and 
they would all have the same duration)</li> <li><strong>Ma non tanto</strong> – but not so much; used in the same way and has the same effect as Ma non troppo (see immediately below) but to a lesser degree</li> <li><strong>Ma non troppo</strong> – but not too much; used to modify a basic tempo to indicate that the basic tempo should be reined in to a degree; for example, Adagio ma non troppo to mean ″Slow, but not too much″, Allegro ma non troppo to mean ″Fast, but not too much″</li> <li><strong>Maestoso</strong> – majestically, stately</li> <li><strong>Molto</strong> – very</li> <li><strong>meno</strong> - Less</li> <li><strong>Più</strong> - more</li> <li><strong>Poco</strong> – a little</li> <li><strong>Subito</strong> – suddenly</li> <li><strong>Tempo comodo</strong> – at a comfortable speed</li> <li><strong>Tempo di...</strong> – the speed of a ... (such as Tempo di valzer (speed of a waltz, dotted quarter note. ≈ 60 BPM or quarter note≈ 126 BPM), Tempo di marcia (speed of a march, quarter note ≈ 120 BPM))</li> <li><strong>Tempo giusto</strong> – at a consistent speed, at the 'right' speed, in strict tempo</li> <li><strong>Tempo primo</strong> – resume the original (first) tempo</li> <li><strong>Tempo semplice</strong> – simple, regular speed, plainly</li> </ul> <h3 id="english-tempo-markings" tabindex="-1">English tempo markings <a class="header-anchor" href="#english-tempo-markings" aria-label="Permalink to "English tempo markings""></a></h3> <p>English indications, for example quickly, have also been used, by Benjamin Britten and Percy Grainger, among many others. In jazz and popular music lead sheets and fake book charts, terms like "fast", "laid back", "steady rock", "medium", "medium-up", "ballad", "brisk", "brightly" "up", "slowly", and similar style indications may appear. In some lead sheets and fake books, both tempo and genre are indicated, e.g., "slow blues", "fast swing", or "medium Latin". The genre indications help rhythm section instrumentalists use the correct style. For example, if a song says "medium shuffle", the drummer plays a shuffle drum pattern; if it says "fast boogie-woogie", the piano player plays a boogie-woogie bassline.</p> <p>"Show tempo", a term used since the early days of Vaudeville, describes the traditionally brisk tempo (usually 160–170 BPM) of opening songs in stage revues and musicals.</p> <p>Humourist Tom Lehrer uses facetious English tempo markings in his anthology Too Many Songs by Tom Lehrer. For example, "National Brotherhood Week" is to be played "fraternally"; "We Will All Go Together" is marked "eschatologically"; and "Masochism Tango" has the tempo "painstakingly". 
His English contemporaries Flanders and Swann have similarly marked scores, with the music for their song "The Whale (Moby Dick)" shown as "oceanlike and vast".</p> <h2 id="electronic-music" tabindex="-1">Electronic music <a class="header-anchor" href="#electronic-music" aria-label="Permalink to "Electronic music""></a></h2> <p>Typical tempo ranges for the most common electronic music genres:</p> <ul> <li><strong>Dub:</strong> 60-90 BPM</li> <li><strong>Hip-hop:</strong> 60-100 BPM</li> <li><strong>Triphop/Downtempo:</strong> 90-110 BPM</li> <li><strong>House:</strong> 115-130 BPM</li> <li><strong>UK garage/2-step:</strong> 130-135 BPM</li> <li><strong>Grime:</strong> 140 BPM</li> <li><strong>Techno/Trance:</strong> 120-140 BPM</li> <li><strong>Acid Techno:</strong> 135-150 BPM</li> <li><strong>Schranz:</strong> 150 BPM</li> <li><strong>Dubstep:</strong> 135-145 BPM (70-75 BPM in half-time)</li> <li><strong>Hardstyle:</strong> 150 BPM</li> <li><strong>Juke/Footwork:</strong> 160 BPM</li> <li><strong>Drum and bass:</strong> 160-180 BPM</li> </ul> <h3 id="extreme-tempo" tabindex="-1">Extreme tempo <a class="header-anchor" href="#extreme-tempo" aria-label="Permalink to "Extreme tempo""></a></h3> <p>More extreme tempos are achievable at the same underlying tempo with very fast drum patterns, often expressed as drum rolls. Such compositions often exhibit a much slower underlying tempo, but may increase the perceived tempo by adding additional percussive beats. Extreme metal subgenres such as speedcore and grindcore often strive to reach unusually fast tempos. The use of extreme tempos was very common in fast bebop jazz of the 1940s and 1950s. A common jazz tune such as "Cherokee" was often performed with the quarter note at, or sometimes exceeding, 368 BPM. Some of Charlie Parker's famous tunes ("Bebop", "Shaw Nuff") have been performed at 380 BPM and above.</p> <youtube-embed video="GH2k8GccrqI" /><p>There is also a subgenre of speedcore known as Extratone, which is defined as music with a BPM over 3,600 (or, by some definitions, 1,000 BPM and over).</p> <h3 id="beatmatching" tabindex="-1">Beatmatching <a class="header-anchor" href="#beatmatching" aria-label="Permalink to "Beatmatching""></a></h3> <p>In popular music genres such as disco, house music and electronic dance music, beatmatching is a technique DJs use to speed up or slow down a record (or a CDJ player, a speed-adjustable CD player for DJ use) to match the tempo of a previous or subsequent track, so both can be seamlessly mixed. Having beatmatched two songs, the DJ can either seamlessly crossfade from one song to another, or play both tracks simultaneously, creating a layered effect.</p> <p>DJs often beatmatch the underlying tempos of recordings, rather than their strict BPM value suggested by the kick drum, particularly when dealing with high tempo tracks. A 240 BPM track, for example, matches the beat of a 120 BPM track without slowing down or speeding up, because both have an underlying tempo of 120 quarter notes per minute. Thus, some soul music (around 75–90 BPM) mixes well with a drum and bass beat (from 150–185 BPM). When speeding up or slowing down a record on a turntable, the pitch and tempo of a track are linked: spinning a disc 10% faster makes both pitch and tempo 10% higher. Software processing to change the pitch without changing the tempo is called pitch-shifting. The opposite operation, changing the tempo without changing the pitch, is called time-stretching.</p> 
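<p>As a rough sketch of this beatmatching arithmetic, here is a minimal JavaScript example; it is only an illustration under the assumptions above (half-time and double-time relationships count as a match), not any DJ software's actual API:</p> <pre><code class="language-js">// Minimal sketch: playback rate needed so trackBpm lines up with targetBpm.
// Allowing half-time and double-time covers cases like a 240 BPM track
// matching a 120 BPM track, or soul (~85 BPM) over drum and bass (~170 BPM).
function matchRate(trackBpm, targetBpm) {
  const candidates = [targetBpm, targetBpm / 2, targetBpm * 2];
  // choose the interpretation of the target closest to the track's own tempo
  const best = candidates.sort(
    (a, b) => Math.abs(a - trackBpm) - Math.abs(b - trackBpm)
  )[0];
  return best / trackBpm;
}

// On a turntable, pitch moves with speed: +10% speed means +10% pitch.
matchRate(240, 120);                     // 1.0  (no speed change needed)
const rate = matchRate(110, 120);        // ~1.09 (play about 9% faster)
const pitchShiftPct = (rate - 1) * 100;  // ~ +9% pitch, unless pitch-shifting compensates
</code></pre>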
]]></content:encoded> <enclosure url="https://chromatone.center/austin-ban.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Rhythm, timeline, ostinato]]></title> <link>https://chromatone.center/theory/rhythm/study/</link> <guid>https://chromatone.center/theory/rhythm/study/</guid> <pubDate>Sat, 16 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Definitions of the temporal dimension in music]]></description> <content:encoded><![CDATA[<youtube-embed video="YPU5XrmORCQ" /><p>A few examples of definitions and characterizations of rhythm, both ancient and modern, from the book <a href="https://en.wikipedia.org/wiki/The_Geometry_of_Musical_Rhythm" target="_blank" rel="noreferrer">"The geometry of musical rhythm: What makes a 'Good' Rhythm good?"</a> by <a href="http://cgm.cs.mcgill.ca/~godfried/" target="_blank" rel="noreferrer">Godfried T. Toussaint</a> <a href="http://cgm.cs.mcgill.ca/~godfried/publications/geometry-of-rhythm.pdf" target="_blank" rel="noreferrer">(abstract)</a>.</p> <h2 id="definitions" tabindex="-1">Definitions <a class="header-anchor" href="#definitions" aria-label="Permalink to "Definitions""></a></h2> <ul> <li><strong>Plato:</strong> "An order of movement."</li> <li><strong>Baccheios the Elder:</strong> "A measuring of time by means of some kind of movement."</li> <li><strong>Phaedrus:</strong> "Some measured thesis of syllables, placed together in certain ways."</li> <li><strong>Aristoxenus:</strong> "Time, divided by any of those things that are capable of being rhythmed."</li> <li><strong>Nichomacus:</strong> "Well marked movement of 'times'."</li> <li><strong>Leophantus:</strong> "Putting together of 'times' in due proportion, considered with regard to symmetry amongst them."</li> <li><strong>Didymus:</strong> "A schematic arrangement of sounds."</li> <li><strong>D. Wright:</strong> "Rhythm is the way in which time is organized within measures."</li> <li><strong>A. C. Lewis:</strong> "Rhythm is the language of time."</li> <li><strong>J. Martineau:</strong> "Rhythm is the component of music that punctuates time, carrying us from one beat to the next, and it subdivides into simple ratios."</li> <li><strong>A. C. Hall:</strong> "Rhythm is made by durations of sound and silence and by accent."</li> <li><strong>T. H. Garland and C. V. Kahn:</strong> "Rhythm is created whenever the time continuum is split up into pieces by some sound or movement."</li> <li><strong>J. Bamberger:</strong> "The many different ways in which time is organized in music."</li> <li><strong>J. Clough, J. Conley, and C. Boge:</strong> "Patterns of duration and accent of musical sounds moving through time."</li> <li><strong>G. Cooper and L. B. Meyer:</strong> "Rhythm may be defined as the way in which one or more unaccented beats are grouped in relation to an accented one."</li> <li><strong>D. J. Levitin:</strong> "Rhythm refers to the durations of a series of notes, and to the way that they group together into units."</li> <li><strong>A. D. Patel:</strong> "The systematic patterning of sound in terms of timing, accent, and grouping."</li> <li><strong>R. Parncutt:</strong> "A musical rhythm is an acoustic sequence evoking a sensation of pulse."</li> <li><strong>C. B. Monahan and E. C. Carterette:</strong> "Rhythm is the perception of both regular and irregular accent patterns and their interaction."</li> 
<li><strong>M. Clayton:</strong> "Rhythm, then, may be interpreted either as an alternation of stresses or as a succession of durations."</li> <li><strong>B. C. Wade:</strong> "A rhythm is a specific succession of durations."</li> <li><strong>S. Arom:</strong> "For there to be rhythm, sequences of audible events must be characterized by contrasting features." Arom goes on to specify that there are three types of contrasting features that may operate in combination: duration, accent, and tone color (timbre). Contrast in each of these may be present or absent, and when accentuation or tone contrasts are present they may be regular or irregular. With these marking parameters, Arom generates a combinatorial classification of rhythms.</li> <li><strong>C. Egerton Lowe writes:</strong> "There is, I think, no other term used in music over which more ambiguity is shown." Then he provides a discussion of a dozen definitions found in the literature.</li> </ul> <youtube-embed video="gy2kyRrXm2g" /><h3 id="timelines-ostinatos-and-meter" tabindex="-1">Timelines, Ostinatos and Meter <a class="header-anchor" href="#timelines-ostinatos-and-meter" aria-label="Permalink to "Timelines, Ostinatos and Meter""></a></h3> <p>In much traditional, classical, and contemporary music around the world, one hears a distinctive and characteristic rhythm that appears to be an essential feature of the music, that stands out above the other rhythms, and that repeats throughout most, if not all, of the piece. Sometimes this essential feature will be merely an isochronous pulsation without any recognizable periodicity. At other times, the music will be characterized by unique periodic patterns. These special rhythms are generally called <strong>timelines</strong>. Timelines should be distinguished from the more general term rhythmic ostinatos. A rhythmic <strong>ostinato</strong> (from the word obstinate) refers to a rhythm or phrase that is continually repeated during a musical piece. Timelines, on the other hand, are more particular ostinatos that are easily recognized and remembered, play a distinguished role in the music, and also serve the functions of conductor and regulator, by signaling to other musicians the fundamental cyclic structure of the piece. Thus, timelines act as an orienting device that helps musicians stay together and helps soloists navigate the rhythmic landscape offered by the other instruments.</p> <p>In ethnomusicology, the use of the word timeline is generally limited to asymmetric durational patterns of sub-Saharan origin such as the <strong>tresillo</strong>. In this book, however, the term is expanded to cover similar notions used in other cultures such as <strong>compás</strong> in the flamenco music of Southern Spain, <strong>tala</strong> in India, <strong>loop</strong> in electronic dance music (EDM), and just plain rhythmic ostinatos in any type of music. A word is in order concerning the ubiquitous related concept referred to as meter in Western music. There is slightly less vagueness in the published definitions of <strong>meter</strong> than in the definitions of rhythm listed above. There is also much discussion about the differences between meter and rhythm. Meter is usually defined in terms of a hierarchy of accent patterns, and considered to be more regular than rhythm. Some music, such as sub-Saharan African music, is claimed to have only pulsation as a temporal reference, and no meter in the strict sense of the word. 
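</p> <p>As a concrete illustration of a timeline written as a cyclic onset pattern, here is a minimal JavaScript sketch of the tresillo mentioned above (onsets grouped 3+3+2 over 8 pulses); the helper name is only illustrative:</p> <pre><code class="language-js">// Minimal sketch: a timeline as a binary sequence of pulses (1 = onset, 0 = silent pulse).
// The tresillo spans 8 pulses with onsets grouped 3 + 3 + 2.
const tresillo = [1, 0, 0, 1, 0, 0, 1, 0];

// Inter-onset intervals of a cyclic pattern.
function interOnsetIntervals(pattern) {
  const onsets = pattern
    .map((value, index) => (value ? index : -1))
    .filter((index) => index !== -1);
  return onsets.map((start, k) => {
    const next = onsets[(k + 1) % onsets.length];
    return (next - start + pattern.length) % pattern.length;
  });
}

interOnsetIntervals(tresillo); // [3, 3, 2]
</code></pre> <p>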
Christopher Hasty’s book titled Meter as Rhythm considers meter to be a special case of rhythm. In this book, the word “timeline” is expanded to include all those meters used in music around the world, that function as time-keepers, or ostinatos, and determine the predominant underlying rhythmic structure of a piece. Here, meter is viewed as just another rhythm that may be sounded or merely felt by the performer or listener, and it is also represented as a binary sequence. While consideration of a metric context is indispensable for a complete understanding of rhythm, the underlying assumption in this book is that it can also be profitable to focus on purely inter-onset durational issues.</p> <youtube-embed video="uDhwFTw4VnI" /><p><a href="https://en.wikipedia.org/wiki/Additive_rhythm_and_divisive_rhythm" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Additive_rhythm_and_divisive_rhythm</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/roger-hoover.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Flamenco compás]]></title> <link>https://chromatone.center/theory/rhythm/system/flamenco/</link> <guid>https://chromatone.center/theory/rhythm/system/flamenco/</guid> <pubDate>Sat, 16 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[A variety of both contemporary and traditional musical styles typical of southern Spain]]></description> <content:encoded><![CDATA[<beat-bars v-bind="flamenco" /><p>Flamenco (Spanish pronunciation: [flaˈmeŋko]), in its strictest sense, is an art form based on the various folkloric music traditions of southern Spain, developed within the gitano subculture of the region of Andalusia, but also having a historical presence in Extremadura and Murcia. In a wider sense, it is a portmanteau term used to refer to a variety of both contemporary and traditional musical styles typical of southern Spain. Flamenco is closely associated to the gitanos of the Romani ethnicity who have contributed significantly to its origination and professionalization. However, its style is uniquely Andalusian and flamenco artists have historically included Spaniards of both gitano and non-gitano heritage.</p> <youtube-embed video="z0dtTRhAGVE" /><p>The oldest record of flamenco music dates to 1774 in the book Las Cartas Marruecas by José Cadalso. The development of flamenco over the past two centuries is well documented: "the theatre movement of sainetes (one-act plays) and tonadillas, popular song books and song sheets, customs, studies of dances, and toques, perfection, newspapers, graphic documents in paintings and engravings. ... in continuous evolution together with rhythm, the poetic stanzas, and the ambiance.”</p> <youtube-embed video="sCpjPWWQB3s" /><p>Flamenco is rooted in various Andalusian popular musical styles, although its origins and influences are the subject of many hypotheses which may or may not have ideological implications and none of which are necessarily mutually exclusive. 
The most widespread is that flamenco was developed through the cross-cultural interchange among the various groups which coexisted in close proximity in Andalusia's lower classes, combining southern Spain's indigenous, Byzantine, Moorish and Romani musical traditions.</p> <p>On 16 November 2010, UNESCO declared flamenco one of the Masterpieces of the Oral and Intangible Heritage of Humanity.</p> <youtube-embed video="zZ1456V7WlQ" /><h1 id="list-of-important-flamenco-forms-for-guitar-and-their-compas" tabindex="-1">List of Important Flamenco Forms for Guitar and their Compás: <a class="header-anchor" href="#list-of-important-flamenco-forms-for-guitar-and-their-compas" aria-label="Permalink to "List of Important Flamenco Forms for Guitar and their Compás:""></a></h1> <p>Learning the difference between flamenco forms can be challenging. However, once you become familiar with the forms you’ll have a much deeper appreciation for the music and culture of flamenco!</p> <p><strong>Here’s a list of most Important flamenco guitar Toques (Palos) and their Compás:</strong></p> <p><a href="#soleares"><strong>Soleares</strong></a><br> <a href="#alegrias">Alegrías</a><br> <a href="#bulerias">Bulerías</a><br> <a href="#solea">Soleá</a></p> <p><a href="#fandangos"><strong>Fandangos</strong></a><br> <a href="#fandango-de-huelva">Fandango de Huelva</a><br> <a href="#granaina-granadinas">Granaína/Granadinas</a><br> <a href="#malaguena">Malagueña</a></p> <p><a href="#siguiriyas"><strong>Siguiriyas</strong></a><br> <a href="#serranas">Serranas</a></p> <p><a href="#tangos"><strong>Tangos</strong></a><br> <a href="#farruca">Farruca</a><br> <a href="#garrotin">Garrotín</a><br> <a href="#tarrantas-tarrantos">Tarantas/Tarantos</a><br> <a href="#tientos">Tientos</a></p> <p><a href="#ida-y-vuelta"><strong>Ida y Vuelta</strong></a><br> <a href="#colombianas">Colombianas</a><br> <a href="#guajiras">Guajiras</a><br> <a href="#rumba">Rumba</a></p> <p><a href="#other-toques"><strong>Other Toques</strong></a><br> <a href="#sevillanas">Sevillanas</a><br> <a href="#zambra">Zambra</a><br> <a href="#zapateado">Zapateado</a></p> <p>Learning Flamenco guitar is a riveting adventure, full of historical intrigue and technical challenges. For those of us who haven’t lived extensively in Spain or other places where flamenco music is an integral part of life, it can be difficult to know where to begin.</p> <p><strong>Perhaps the best introductory approach is to become familiar with the most common flamenco styles, known as _palo_s or <em>toques</em></strong> (from the guitarist perspective).</p> <p><img src="./el_jaleo-painting.webp" alt="El Jaleo, a painting of flamenco dancer and guitarists by John Singer Sargent"></p> <blockquote> <p><em>El Jaleo,</em> painting by John Singer Sargent</p> </blockquote> <h3 id="what-does-flamenco-toque-or-palo-mean" tabindex="-1">What does flamenco Toque or Palo mean? <a class="header-anchor" href="#what-does-flamenco-toque-or-palo-mean" aria-label="Permalink to "What does flamenco Toque or Palo mean?""></a></h3> <p>You can consider a flamenco <em>toque</em> or <em>palo</em> as a musical form (compositional structure).</p> <p>Each form signifies the following:</p> <ul> <li>Rhythmic structure (compás)</li> <li>Tempo</li> <li>Key or Mode</li> <li>Harmonic patterns (such as chord progressions)</li> <li>Melodic phrasing elements that have been established, developed, and transmitted for hundreds of years</li> </ul> <p>So how is this helpful? 
Simply put, saying the name of a form conveys a lot of meaning that otherwise takes a long time to describe.</p> <p>For instance, without the name of a form, I would have to say “play this piece Andante in 3/4 time, E phrygian mode. You should place accents on beat 12, 3, 6, 8, and 10. The chord progression is…” blah blah blah.</p> <p>Instead, one can simply say this is a “Soleá” <em>toque</em>, and know generally what to expect!</p> <p>While flamenco guitar is usually improvisational based on the norms of a given toque, there are well-known compositions, familiar ‘licks’, and standard phrases that players will incorporate on a regular basis.¹</p> <h3 id="what-s-the-difference-between-palo-and-toque" tabindex="-1">What’s the difference between Palo and Toque? <a class="header-anchor" href="#what-s-the-difference-between-palo-and-toque" aria-label="Permalink to "What’s the difference between Palo and Toque?""></a></h3> <p>First of all, <strong>what’s the difference between a <em>palo</em> and <em>toque</em></strong> in flamenco? The short answer is there is no practical difference other than <em>palo</em> is a more general term to describe the classification system whereas <strong><em>toque</em> is guitar-specific</strong>.</p> <p>The word <em>palo</em> in Spanish has several definitions, but in this context a “branch” or “suit of cards” is the best translation as it refers to a categorization or classification system.</p> <p>The word <em>toque</em>, meaning “to touch”, refers to the same exact system but from the guitarist perspective. Secondly, the term ‘<em>tocoar</em>‘ refers to a particular guitarist, and their particular repertoire and style of playing.</p> <p><img src="./paco-de-lucia-flamenco.jpg" alt="Paco de Lucía holding his flamenco guitar"></p> <blockquote> <p>Paco de Lucía, flamenco guitarist and composer</p> </blockquote> <p>The Flamenco <em>palos</em> system is extremely robust. In fact, there’s dozens of regional, historical, and of course musical distinctions to consider. Each of these forms has special nuances that inform a guitarist as to which form is which.</p> <h3 id="what-are-the-three-top-level-flamenco-form-categories" tabindex="-1">What are the three top-level flamenco form categories? <a class="header-anchor" href="#what-are-the-three-top-level-flamenco-form-categories" aria-label="Permalink to "What are the three top-level flamenco form categories?""></a></h3> <p>The codified Flamenco <em>palo</em> system has been classified in many different ways.² However, the forms generally fall into three main categories:</p> <ul> <li><em><strong>Cante</strong></em> (singing)</li> <li><em><strong>Toque</strong></em> (guitar playing)</li> <li><em><strong>Baile</strong></em> (dance)</li> </ul> <p>Since our focus is on flamenco guitar (<em>toque</em>), the remainder of this article will refer to the Flamenco style as <em>toque</em>.</p> <p>To get a better sense of what a <em>toque</em> is, it’s worth referring to an excerpt by Juan Martín’s <em>El Arte Flamenco de la Guitarra</em>:</p> <blockquote> <p><em>“There is no exact English equivalent for ‘toque’…. but as you become more familiar with Flamenco you will soon find that the different toques are easily distinguished. Each has a characteristic and recurring pattern of beats and accents (i.e. its compás) and it also has its own kinds of key and harmonic structure. 
As a result, it has not only a particular rhythmic form but also a characteristic sound and range of expression.”</em></p> <p>– Juan Martín, <em>El Arte Flamenco de la Guitarra</em>: Volume 1</p> </blockquote> <p>As Martín so eloquently describes, once you become familiar with the various <em>toques</em>, their rhythmic structures (<em>compás</em>), harmonic and even melodic nuances, you’ll be able to quickly identify which style is being played–ultimately allowing you to understand and enjoy flamenco music in a more intimate and purposeful way!</p> <h2 id="list-of-common-popular-toques" tabindex="-1">List of Common & Popular Toques <a class="header-anchor" href="#list-of-common-popular-toques" aria-label="Permalink to "List of Common & Popular Toques""></a></h2> <p>Here’s the list and basic description of the most common popular flamenco guitar toques and subcategories, classified by origin and <em>compás</em>:</p> <h2 id="soleares" tabindex="-1">Soleares <a class="header-anchor" href="#soleares" aria-label="Permalink to "Soleares""></a></h2> <p>Many people consider the Soleares (plural of <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a>) the most important and fundamental toque in flamenco guitar pedagogy.</p> <p>As Juan Martín states, “<em>the rhythm of Soleares takes you deep into the heart of Flamenco, for it is a toque which embodies many of Flamenco’s most vital elements of rhythm and harmony and from which many other toques are derived. In Andalucía, every student of the flamenco guitar will start with it</em>.”</p> <p>Like many aspects of Flamenco, historical origins of the Soleares are uncertain and highly controversial. However, the Soleares became most well-known in the regions of Seville and Cadíz.</p> <p><strong>The basic compás for Soleares (<a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a>), the most widely used in all Flamenco music is:</strong></p> <p><img src="./solea-compas-accents-01.png" alt="The basic Solea (Soleares) Compás rhythm cycle with accent marks on the 3, 6, 8, 10, and 12 beat. "></p> <blockquote> <p>Basic Soleares Compás, accents on beats 3, 6, 8, 10, and 12</p> </blockquote> <h3 id="alegrias" tabindex="-1">Alegrías <a class="header-anchor" href="#alegrias" aria-label="Permalink to "Alegrías""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/alegrias/" target="_blank" rel="noreferrer">Alegrías</a> (meaning joy) are a lively branch of Soleares (usually 100-180 BPM) that originated in Cadíz. 
<a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/alegrias/" target="_blank" rel="noreferrer">Alegrías</a> are in a major key (typically E major or A major), and are popular to perform with a dancer or as a solo.</p> <p>The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/alegrias/" target="_blank" rel="noreferrer">Alegría</a> includes two sections for the dancer: the <em>silencio</em>, a minor key section; and the <em>escobilla</em>, which includes a virtuosic guitar solo and gradual increase in rhythm.</p> <h3 id="bulerias" tabindex="-1">Bulerías <a class="header-anchor" href="#bulerias" aria-label="Permalink to "Bulerías""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/bulerias/" target="_blank" rel="noreferrer">Bulerías</a> is the fastest branch of Soleares, with a lively, intense dissonance that compliments the advanced rhythmic structure of the <em>compás</em>. <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/bulerias/" target="_blank" rel="noreferrer">Bulerías</a> are possibly the most popular, yet also the most virtuosic and demanding for flamenco guitarists. There are many variations of the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/bulerias/" target="_blank" rel="noreferrer">Bulería</a> <em>compás</em> and accent patterns, which you can learn about <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/bulerias/" target="_blank" rel="noreferrer">here</a>.</p> <h2 id="solea" tabindex="-1">Soleá <a class="header-anchor" href="#solea" aria-label="Permalink to "Soleá""></a></h2> <p>Synonymous with the name Soleares, <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a> is a slow, solemn, and majestic form that likely comes from the Spanish word <em>soledad</em>, meaning solitude or loneliness. <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a> is known as the “Mother of Flamenco”.</p> <p>Tragedy, death, and desperation are the common subject matter for the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a> <em>cante</em> (singers)– passion that you can also hear in the guitar playing. After a long night of dancing and singing lively toques, a guitarist may play a <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a> as a melancholy conclusion. When the shot glasses are dry and there’s a stillness in the air when sun is on the verge of rising, play the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Soleá</a>.</p> <h2 id="fandangos" tabindex="-1">Fandangos <a class="header-anchor" href="#fandangos" aria-label="Permalink to "Fandangos""></a></h2> <p>Compared to other toques, <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandangos</a> have a shorter rhythmic cycle that may feel more familiar to musicians trained in classical or other Western music styles. 
<a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandangos</a> were influenced from Arab-Moorish music, and Portuguese music.</p> <p>Flamenco <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandangos</a> have a 3/4 rhythm (previously 6/8, now 3/4 or 3/8), with an accent on the first beat. Some <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandangos</a> are very metric and appropriate for dance, whereas others have more of a free-form atmosphere (known as <em>en toque libre</em> or “very freely”).</p> <h2 id="fandango-de-huelva" tabindex="-1">Fandango de Huelva <a class="header-anchor" href="#fandango-de-huelva" aria-label="Permalink to "Fandango de Huelva""></a></h2> <p>Within the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandango</a> <em>toque</em> are regional styles known as ‘<em>fandangos locales’</em>. Fandangos de Huelva is one such example.</p> <p>Unlike the <em>toque libre</em> fandango styles, Fandangos de Huelva follows a strict <em>compás</em> structure that can be danced to.</p> <p>You can count the 12 beat <em>compás</em> of Fandango de Huelva into beats of 3 as seen here:</p> <p><img src="./fandangos-de-huelva-compas-01.png" alt="Fandangos de Huelva 12 beat compás, silent beats are indicated with quarter rests."></p> <blockquote> <p>Fandangos de Huelva Compás, silent beats indicated with quarter rests</p> </blockquote> <h3 id="granaina-granadinas" tabindex="-1">Granaína/Granadinas <a class="header-anchor" href="#granaina-granadinas" aria-label="Permalink to "Granaína/Granadinas""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/granadinas/" target="_blank" rel="noreferrer">Granadinas</a> or Granaína is a variant of the Granada fandangos. <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/granadinas/" target="_blank" rel="noreferrer">Granaína</a> is relatively slow, with a freeing rhythm (<em>toque libre</em>) and rich embellishments that convey both a dreamlike and flowing quality. The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/granadinas/" target="_blank" rel="noreferrer">Granaína</a> is unique in that it’s in Phrygian mode based on the B note.</p> <h3 id="•-malaguena" tabindex="-1">• Malagueña <a class="header-anchor" href="#•-malaguena" aria-label="Permalink to "• Malagueña""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/malaguenas/" target="_blank" rel="noreferrer">Malagueñas</a> are another <em>toque libre</em> fandango from the area of Málaga. 
The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/malaguenas/" target="_blank" rel="noreferrer">Malagueña</a> began as a relatively fast metric form in 6/8 time to accompany dance, then slowed the tempo down and added more embellishments in the 19th century.</p> <p>Later, guitarists like <a href="https://en.wikipedia.org/wiki/Ram%C3%B3n_Montoya" target="_blank" rel="noreferrer">Ramón Montoya (1879-1949)</a> began playing <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/malaguenas/" target="_blank" rel="noreferrer">Malagueñas</a> freely, while still incorporating the distinctive melodic phrases that gave rise to the form’s popularity.</p> <h2 id="siguiriyas" tabindex="-1">Siguiriyas <a class="header-anchor" href="#siguiriyas" aria-label="Permalink to "Siguiriyas""></a></h2> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/seguiriyas/" target="_blank" rel="noreferrer">Siguiriyas</a>, also spelled <em>seguiriyas</em>, <em>siguerillas</em>, or <em>siguirillas</em>, is a deep, expressive style evoking a tragic feeling similar to the Soleá. Slow, somber, and sentimental, the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/seguiriyas/" target="_blank" rel="noreferrer">Siguiriyas</a> <em>compás</em> follows the 12 beat cycle but with a different accent pattern than the Soleares as seen here:</p> <p><img src="./seguiriya-compas-01.png" alt="Siguiriya Compás with accent marks and simpler counting system"></p> <blockquote> <p>Siguiriya Compás with accent marks and simpler counting system</p> </blockquote> <h3 id="serranas" tabindex="-1">Serranas <a class="header-anchor" href="#serranas" aria-label="Permalink to "Serranas""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/serranas/" target="_blank" rel="noreferrer">Serranas</a> originated as a melodious <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/seguiriyas/" target="_blank" rel="noreferrer">Siguiriya</a> style in the rural Ronda (Málaga) area. The guitarist <a href="https://en.wikipedia.org/wiki/Silverio_Franconetti" target="_blank" rel="noreferrer">Silverio Franconetti (1831-1889)</a> developed the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/serranas/" target="_blank" rel="noreferrer">Serrana</a> form in his performance interpretations.</p> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/serranas/" target="_blank" rel="noreferrer">Serranas</a> have a characteristic emphasis on melodic “third” intervals, whereas most flamenco melodies are only minor second or major second apart.</p> <h2 id="tangos" tabindex="-1">Tangos <a class="header-anchor" href="#tangos" aria-label="Permalink to "Tangos""></a></h2> <p>First, it’s important to note that Flamenco Tangos are unrelated to the Latin-American form by the same name found in Argentina. 
Tangos are joyous, upbeat, and follow a relatively straightforward four-beat rhythm structure notated in 4/4 time.</p> <p>Tangos came from the regions of Cadíz (most prominent), Jerez, Málaga, and Seville.</p> <h3 id="farruca" tabindex="-1">Farruca <a class="header-anchor" href="#farruca" aria-label="Permalink to "Farruca""></a></h3> <p>People believe that the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/farruca/" target="_blank" rel="noreferrer">Farruca</a> originated in Galicia, Spain. Men traditionally dance to this form with no singing component. The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/farruca/" target="_blank" rel="noreferrer">Farruca</a> is played in A minor, with a <em>compás</em> of two measures of 4/4 and accents on beats 1, 3, 5, and 7. You can see the accent pattern example below:</p> <p><img src="./farruca-compas-01.png" alt="Farruca Compás of 8 beat cycle with accented beats on 1, 3, 5, and 7."></p> <blockquote> <p>Farruca Compás and accented beats</p> </blockquote> <h3 id="garrotin" tabindex="-1">Garrotín <a class="header-anchor" href="#garrotin" aria-label="Permalink to "Garrotín""></a></h3> <p>The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/garrotin/" target="_blank" rel="noreferrer">Garrotín</a> is a festive and cheerful style in a major mode and 2/4 time. The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/garrotin/" target="_blank" rel="noreferrer">Garrotín</a> originated in northern Spain near Asturias, and can include singing or dancing as well as guitar accompaniment. The renowned flamenco singer <a href="https://en.wikipedia.org/wiki/La_Ni%C3%B1a_de_los_Peines" target="_blank" rel="noreferrer">La Niña de los Peines</a> helped popularize the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/garrotin/" target="_blank" rel="noreferrer">Garrotín</a>.</p> <h3 id="tarantas-tarantos" tabindex="-1">Tarantas/Tarantos <a class="header-anchor" href="#tarantas-tarantos" aria-label="Permalink to "Tarantas/Tarantos""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantas/" target="_blank" rel="noreferrer">Tarantas</a> is a quintessential <em>toque libre</em>, with very lofty, repetitive <em>ligado</em> phrases (hammer-ons/pull-offs). <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantas/" target="_blank" rel="noreferrer">Tarantas</a> have a characteristic technique called <em>arrastre</em>, in which the right-hand ring finger (a) drags from the high to the low strings in quick succession (similar to an upstroke but a bit slower and disconnected). <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantas/" target="_blank" rel="noreferrer">Tarantas</a> are commonly notated with no bar lines, to indicate the free manner in which they should be played.</p> <p>The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantos/" target="_blank" rel="noreferrer"><strong>Tarantos</strong></a> contrasts with the <em>toque libre</em> <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantas/" target="_blank" rel="noreferrer">Tarantas</a>, with a strong rhythmic feel structured in a <em>compás</em> of 4s (2/4 or 4/4 time). 
The key signature and basic chord structure of <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantos/" target="_blank" rel="noreferrer">Tarantos</a> is the same as <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tarantas/" target="_blank" rel="noreferrer">Tarantas</a> (two sharps), so their harmonic relationship is clear and distinct.</p> <h3 id="tientos" tabindex="-1">Tientos <a class="header-anchor" href="#tientos" aria-label="Permalink to "Tientos""></a></h3> <p>The Tientos is often danced, and is a bit slower than other tangos. However every <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tientos/" target="_blank" rel="noreferrer">Tientos</a> does speed up and become a tango by the end following the dance <em>escobilla</em> sections. You can count the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/tientos/" target="_blank" rel="noreferrer">Tientos</a> four beat <em>compás</em> with accents on beats 2 and 4.</p> <p>There is a distinctive syncopation in the rhythm of the Tientos. You can see one of the simplest variations below:</p> <p><img src="./tientos-compas-01.png" alt="Tientos Compás simple variation with syncopation."></p> <blockquote> <p>Basic Tientos Compás</p> </blockquote> <h2 id="ida-y-vuelta" tabindex="-1">Ida y Vuelta <a class="header-anchor" href="#ida-y-vuelta" aria-label="Permalink to "Ida y Vuelta""></a></h2> <p>The Spanish expression Ida y Vuelta (“departure and return” or “round trip”) refers to <em>palos</em> that were exported from Spain to the New World, particularly Cuba, where they evolved with African and Native American music influences. Immigrants later imported these forms back to Spain with a new flair.</p> <h3 id="colombianas" tabindex="-1">Colombianas <a class="header-anchor" href="#colombianas" aria-label="Permalink to "Colombianas""></a></h3> <p>People credit the famous flamenco singer <a href="https://en.wikipedia.org/wiki/Pepe_Marchena" target="_blank" rel="noreferrer">Pepe Marchena</a> for creating the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/colombianas/" target="_blank" rel="noreferrer">Colombiana</a> (or <em>Colombina</em>) in 1931. The guitarist Ramon Montoya accompanied Marchena in recordings of the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/colombianas/" target="_blank" rel="noreferrer">Colombiana</a> the following year. <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/colombianas/" target="_blank" rel="noreferrer">Colombianas</a> are in a major mode and follow a 4-beat <em>compás</em> similar to the Rumba:</p> <p><img src="./colombiana-compas-01.png" alt="Flamenco Colombiana basic compás rhythm with accents. "></p> <blockquote> <p>Basic Colombiana Compás</p> </blockquote> <h3 id="guajiras" tabindex="-1">Guajiras <a class="header-anchor" href="#guajiras" aria-label="Permalink to "Guajiras""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/guajiras/" target="_blank" rel="noreferrer">Guajiras</a> is a <em>toque</em> based on a Cuban rural genre known as Punto Guajira Cubana. 
The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/guajiras/" target="_blank" rel="noreferrer">Guajira</a> is in a major mode, and follows a 12 beat <em>compás</em> similar to the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/solea/" target="_blank" rel="noreferrer">Solea</a>, with accents on beats 3, 6, 8, 10, and 12. <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/guajiras/" target="_blank" rel="noreferrer">Guajiras</a> is in a major mode, with a characteristic descending melodic phrase in the bass from F# to F to E. For more info on the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/guajiras/" target="_blank" rel="noreferrer">Guajiras</a> <em>compás</em> and examples, check out <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/guajiras/" target="_blank" rel="noreferrer">this page</a>.</p> <h3 id="rumba" tabindex="-1">Rumba <a class="header-anchor" href="#rumba" aria-label="Permalink to "Rumba""></a></h3> <p>The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/rumba/" target="_blank" rel="noreferrer">Rumba</a>, meaning “party”, originated in Havana, Cuba. The flamenco <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/rumba/" target="_blank" rel="noreferrer">Rumba</a> became popular in the late 20th century by artists such as <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitarists-and-albums/" target="_blank" rel="noreferrer">Paco de Lucia</a>, and Rodrigo Y Gabriella and Gipsy Kings, among others. The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/rumba/" target="_blank" rel="noreferrer">Rumba</a> is in a minor mode, and notated in 4/4 time.</p> <p>However, you can also count the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/rumba/" target="_blank" rel="noreferrer">Rumba’s</a> rhythm as 8 beats (3+3+2) in a single measure. The <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/rumba/" target="_blank" rel="noreferrer">Rumba’s</a> characteristic accents on beats 1, 4, and 7, give the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/rumba/" target="_blank" rel="noreferrer">Rumba</a> an exciting drive:</p> <blockquote></blockquote> <p><img src="./rumba-rhythm-compas-01.png" alt="Rumba 4/4 Rhythm with a cycle of 3+3+2 structure, including accents on 1, 4, and 7."></p> <blockquote> <p>Basic Rumba Rhythmic Cycle</p> </blockquote> <h2 id="other-toques" tabindex="-1">Other Toques <a class="header-anchor" href="#other-toques" aria-label="Permalink to "Other Toques""></a></h2> <h3 id="•-sevillanas" tabindex="-1">• Sevillanas <a class="header-anchor" href="#•-sevillanas" aria-label="Permalink to "• Sevillanas""></a></h3> <p>Sevillanas are a lively, popular toque with dance accompaniment, usually notated in 3/4 time similar to the <em><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandango</a></em> rhythm.</p> <p>People usually play the Sevillianas in a set of four short sequential pieces. In other words, a “Sevillianas performance” will have four or more short Sevillianas that immediately follow one another. 
<h2 id="other-toques" tabindex="-1">Other Toques <a class="header-anchor" href="#other-toques" aria-label="Permalink to "Other Toques""></a></h2> <h3 id="•-sevillanas" tabindex="-1">• Sevillanas <a class="header-anchor" href="#•-sevillanas" aria-label="Permalink to "• Sevillanas""></a></h3> <p>Sevillanas are a lively, popular toque with dance accompaniment, usually notated in 3/4 time similar to the <em><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/fandangos/" target="_blank" rel="noreferrer">Fandango</a></em> rhythm.</p> <p>People usually play the Sevillanas in a set of four short sequential pieces. In other words, a "Sevillanas performance" will have four or more short Sevillanas that immediately follow one another. Each one is in a different key or mode.</p> <p>Sevillanas begin with a series of rhythmic <em>rasgueado</em>, followed by a brief melodic sequence (<em>salida</em>), followed by more <em>rasgueado</em>, then the full melody (<em>copla</em>). The third <em>copla</em> is usually a slight variation from the previous two.</p> <h3 id="zambra" tabindex="-1">Zambra <a class="header-anchor" href="#zambra" aria-label="Permalink to "Zambra""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/zambra/" target="_blank" rel="noreferrer">Zambra</a> is a flamenco style that originated in Granada and Almería. People believe that the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/zambra/" target="_blank" rel="noreferrer">Zambra</a> derives from earlier Moorish dance styles, and thus the melodies usually have an arabesque quality.</p> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/zambra/" target="_blank" rel="noreferrer">Zambras</a> are very metric and typically have a bouncy, alternating bass on strong beats. To play a <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/zambra/" target="_blank" rel="noreferrer">Zambra</a>, guitarists will usually tune their lowest string down to D (drop D).</p> <h3 id="zapateado" tabindex="-1">Zapateado <a class="header-anchor" href="#zapateado" aria-label="Permalink to "Zapateado""></a></h3> <p><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/zapateado/" target="_blank" rel="noreferrer">Zapateado</a> is a lively and popular dance form in 6/8 time (in Spanish, <em>zapato</em> means shoe). The form originally appeared in Cadíz. However, one can find similar forms in Mexico (the <em>Zapateo</em>). Guitarists usually play the <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/zapateado/" target="_blank" rel="noreferrer">Zapateado</a> in C major, although <a href="https://richterguitar.com/flamenco-guitar/flamenco-guitarists-and-albums/" target="_blank" rel="noreferrer">Sabicas</a> has a well-known composition "<a href="https://www.youtube.com/watch?v=DyNwpxPjfYg" target="_blank" rel="noreferrer">Zapateado in Re</a>" (D major).</p> <h5 id="footnotes" tabindex="-1">Footnotes <a class="header-anchor" href="#footnotes" aria-label="Permalink to "Footnotes""></a></h5> <p>¹ Parallels can be drawn to jazz guitarists learning licks and solos of famous jazz predecessors. 
Contemporary players will often incorporate and/or elaborate on these phrases on the spot during a performance as an homage.</p> <p>² Another way to classify these styles is by mood: <em>cante jondo</em> (serious and solemn), <em>cante chico</em> (lighter and festive), and <em>cante intermedio</em> (a style that doesn’t fit into either category).</p> <h1 id="sources" tabindex="-1">Sources <a class="header-anchor" href="#sources" aria-label="Permalink to "Sources""></a></h1> <ul> <li><a href="https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/" target="_blank" rel="noreferrer">https://richterguitar.com/flamenco-guitar/flamenco-guitar-toques-and-palos/</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/flam.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Rhythm]]></title> <link>https://chromatone.center/theory/rhythm/</link> <guid>https://chromatone.center/theory/rhythm/</guid> <pubDate>Fri, 15 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Musical exploration of time]]></description> <content:encoded><![CDATA[<p><a href="./study/">What is Rhythm</a> and how does the temporal dimension of music evolve? How we can conceptualize the human <a href="./pulse/">Musical time feeling</a>? How can we <a href="./counting/">Count fast enough</a> to understand the inner relations of different rhythmic patterns?</p> <p>The main concept in modern Western rhythmic tradition is the <a href="./meter/">Meters</a> differentiation. While there's a plenty of other <a href="./system/">national musical time organisation systems</a>, that may emphasize some other aspects of the temporal music feel, such as <a href="./groove/">Syncopation, swing and groove</a>.</p> <p>And there's so much to explore in practice while learning basic <a href="./rudiments/">Drum rudiments</a> on your own.</p> <h2 id="embodiment-of-music" tabindex="-1">Embodiment of music <a class="header-anchor" href="#embodiment-of-music" aria-label="Permalink to "Embodiment of music""></a></h2> <table tabindex="0"> <thead> <tr> <th>Timescale</th> <th>Movement</th> <th>Time</th> </tr> </thead> <tbody> <tr> <td>Phrase</td> <td>Breath</td> <td>2-8 sec</td> </tr> <tr> <td>Pulse</td> <td>Walking</td> <td>250-2000 ms</td> </tr> <tr> <td>Note</td> <td>Fingers</td> <td>100-1000 ms</td> </tr> <tr> <td>Microrhythm</td> <td>Talk</td> <td>5-50 ms</td> </tr> </tbody> </table> <youtube-embed video="UMFazztdbAI"/> ]]></content:encoded> <enclosure url="https://chromatone.center/brent-ninaber.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Drone]]></title> <link>https://chromatone.center/theory/melody/drone/</link> <guid>https://chromatone.center/theory/melody/drone/</guid> <pubDate>Thu, 14 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[A harmonic or monophonic effect or accompaniment where a note or chord is continuously sounded throughout most or all of a piece]]></description> <content:encoded><![CDATA[<p>A drone is a harmonic or monophonic effect or accompaniment where a note or chord is continuously sounded throughout most or all of a piece. A drone may also be any part of a musical instrument used to produce this effect; an archaic term for this is burden (bourdon or burdon) such as a "drone [pipe] of a bagpipe", the pedal point in an organ, or the lowest course of a lute. 
A burden is also part of a song that is repeated at the end of each stanza, such as the chorus or refrain.</p> <h2 id="musical-effect" tabindex="-1">Musical effect <a class="header-anchor" href="#musical-effect" aria-label="Permalink to "Musical effect""></a></h2> <blockquote> <p>"Of all harmonic devices, it [a drone] is not only the simplest, but probably also the most fertile."</p> </blockquote> <p>A drone effect can be achieved through a sustained sound or through repetition of a note. It most often establishes a tonality upon which the rest of the piece is built. A drone can be instrumental, vocal or both. Drone (both instrumental and vocal) can be placed in different ranges of the polyphonic texture: in the lowest part, in the highest part, or in the middle. The drone is most often placed upon the tonic or dominant. A drone on the same pitch as a melodic note tends to both hide that note and to bring attention to it by increasing its importance.</p> <p>A drone differs from a pedal tone or point in degree or quality. A pedal point may be a form of nonchord tone and thus required to resolve, unlike a drone; or a pedal point may simply be considered a shorter drone, a drone being a longer pedal point.</p> <h2 id="history-and-distribution" tabindex="-1">History and distribution <a class="header-anchor" href="#history-and-distribution" aria-label="Permalink to "History and distribution""></a></h2> <blockquote> <p><img src="./images/A_Lady_Playing_the_Tanpura,_ca._1735.jpg" alt=""> A Lady Playing the Tanpura, ca. 1735.</p> </blockquote> <p>The systematic use of drones originated in instrumental music of ancient Southwest Asia, and spread north and west to Europe, east to India, and south to Africa. It is used in Indian music and is played with the tanpura (or tambura) and other Indian drone instruments like the ottu, the ektar, the dotara (or dotar; dutar in Persian Central Asia), the surpeti, the surmandal (or swarmandal) and the shankh (conch shell). Most of the types of bagpipes that exist worldwide have up to three drones, making this one of the first instruments that comes to mind when speaking of drone music. In America, most forms of the African-influenced banjo contain a drone string. Since the 1960s, the drone has become a prominent feature in drone music and other forms of avant-garde music.</p> <p>In vocal music, drone is particularly widespread in traditional musical cultures, especially in Europe, Polynesia and Melanesia. It is also present in some isolated regions of Asia (like among pearl divers in the Persian Gulf, some national minorities of South-West China, Taiwan, Vietnam, and Afghanistan).</p> <h2 id="part-s-of-a-musical-instrument" tabindex="-1">Part(s) of a musical instrument <a class="header-anchor" href="#part-s-of-a-musical-instrument" aria-label="Permalink to "Part(s) of a musical instrument""></a></h2> <blockquote> <p><img src="./images/Golowan_Festival_Penzance_June_2005_Mid-Argyl_band2.jpg" alt=""> Highland bagpipes, with drone pipes over the pipers' left shoulders</p> </blockquote> <p>Drone is also the term for the part of a musical instrument intended to produce the drone effect's sustained pitch, generally without the ongoing attention of the player. Different melodic Indian instruments (e.g. the sitar, the sarod, the sarangi and the rudra veena) contain a drone. For example, the sitar features three or four resonating drone strings, and Indian notes (sargam) are practiced to a drone. 
Bagpipes (like the Great Highland Bagpipe and the Zampogna) feature a number of drone pipes, giving the instruments their characteristic sounds. A hurdy-gurdy has one or more drone strings. The fifth string on a five-string banjo is a drone string with a separate tuning peg that places the end of the string five frets down the neck of the instrument; this string is usually tuned to the same note as that which the first string produces when played at the fifth fret, and the drone string is seldom fretted. The bass strings of the Slovenian drone zither also freely resonate as a drone. The Welsh Crwth also features two drone strings.</p> <h2 id="use-in-musical-compositions" tabindex="-1">Use in musical compositions <a class="header-anchor" href="#use-in-musical-compositions" aria-label="Permalink to "Use in musical compositions""></a></h2> <p>Composers of Western classical music occasionally used a drone (especially one on open fifths) to evoke a rustic or archaic atmosphere, perhaps echoing that of Scottish or other early or folk music. Examples include the following:</p> <ul> <li>Haydn, Symphony No. 104, "London", opening of finale, accompanying a folk melody</li> <li>Beethoven, Symphony No. 6, "Pastoral", opening and trio section of scherzo</li> <li>Mendelssohn, Symphony No. 3 in A minor, opus 56, 'Scottish', especially the finale.</li> <li>Chopin, Mazurkas, Op. 7: all five contain a drone.</li> <li>Berlioz, Harold in Italy, accompanying oboes as they imitate the piffero of Italian peasants</li> <li>Richard Strauss, Also sprach Zarathustra, Introduction: the opening grows out of a drone effect in the orchestra.</li> <li>Mahler, Symphony No. 1, introduction; a seven-octave drone on A evokes "the awakening of nature at the earliest dawn"</li> <li>Bartók, in his adaptations for piano of Hungarian and other folk music</li> </ul> <p>The best-known drone piece in the concert repertory is the Prelude to Wagner's Das Rheingold (1854) wherein low horns and bass instruments sustain an E♭ throughout the entire movement. The atmospheric ostinato effect that opens Beethoven's Ninth Symphony, which inspired similar gestures in the opening of all the symphonies of Anton Bruckner, represents a gesture derivative of drones.</p> <p>One consideration for composers of common practice keyboard music was equal temperament. The adjustments lead to slight mistunings as heard against a sustained drone. Even so, drones have often been used to spotlight dissonance purposefully.</p> <p>Modern concert musicians make frequent use of drones, often with just or other non-equal tempered tunings. Drones are a regular feature in the music of composers indebted to the chant tradition, such as Arvo Pärt, Sofia Gubaidulina, and John Tavener. The single-tones that provided the impetus for minimalism through the music of La Monte Young and many of his students qualify as drones. David First, the band Coil, the early experimental compilations of John Cale (Sun Blindness Music, Dream Interpretation, and Stainless Gamelan), Pauline Oliveros and Stuart Dempster, Alvin Lucier (Music On A Long Thin Wire), Ellen Fullman, Lawrence Chandler and Arnold Dreyblatt all make notable use of drones. The music of Italian composer Giacinto Scelsi is essentially drone-based. Shorter drones or the general concept of a continuous element are often used by many other composers. Other composers whose music is entirely based on drones include Charlemagne Palestine and Phill Niblock. 
The Immovable Do by Percy Grainger contains a sustained high C (heard in the upper woodwinds) that lasts for the entirety of the piece. Drone pieces also include Loren Rush's Hard Music (1970) and Folke Rabe's Was?? (1968), as well as Robert Erickson's Down at Piraeus. The avant-garde guitarist Glenn Branca also used drones extensively. French singer Camille uses a continuous B throughout her album Le Fil.</p> <p>Drones continue to be characteristic of folk music. Early songs by Bob Dylan employ the effect with a retuned guitar in "Masters of War" and "Mr. Tambourine Man". The song "You Will Be My Ain True Love", written by Sting for the 2003 movie Cold Mountain and performed by Alison Krauss and Sting, uses drone bass.</p> <p>Drones are used widely in the blues and blues-derived genres. Jerry Lee Lewis featured drones in solos and fills. Drones were virtually absent in original rock and roll music, but gained popularity after the Beatles used drones in a few popular compositions (for example, "Blackbird" has a drone in the middle of a texture throughout the whole song, "Tomorrow Never Knows" makes use of tambura). They also used a high drone for dramatic effect in some sections of several of their compositions (like the last verses of "Yesterday" and "Eleanor Rigby"). The rock band U2 uses drones in their compositions particularly widely. In the Led Zeppelin song "In The Light", a keyboard drone is used throughout the song, mostly in the intro.</p> <h2 id="use-for-musical-training" tabindex="-1">Use for musical training <a class="header-anchor" href="#use-for-musical-training" aria-label="Permalink to "Use for musical training""></a></h2> <p>Drones are used by a number of music education programs for ear training and pitch awareness, as well as a way to improvise ensemble music. A shruti box is often used by vocalists in this style of musical training. Drones, owing to their acoustic properties and following their longstanding use in ritual and chant, can be useful in constructing aural structures outside common practice expectations of harmony and melody.</p> <h2 id="shruti-box" tabindex="-1">Shruti box <a class="header-anchor" href="#shruti-box" aria-label="Permalink to "Shruti box""></a></h2> <p>A shruti box (sruti box or surpeti) is an instrument, originating from the Indian subcontinent, that traditionally works on a system of bellows. It is similar to a harmonium and is used to provide a drone in a practice session or concert of Indian classical music. It is used as an accompaniment to other instruments, notably the flute. In classical singing, the shruti box is used to help tune the voice. The use of the shruti box has widened with the cross-cultural influences of world music and new-age music to provide a drone for many other instruments as well as vocalists.</p> <p><img src="./s-l1600.jpg" alt=""></p> <p>Adjustable buttons allow tuning. Nowadays, electronic shruti boxes are commonly used, which are called shruthi pettige in Kannada, shruti petti in Tamil and Telugu and sur peti in Hindi. 
Recent versions also allow for changes to be made in the tempo, and the notes such as Madhyamam, Nishadam to be played in place of the usual three notes (i.e., Lower shadjam, panchamam, and the upper shadjam)</p> <h3 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h3> <p>Before the arrival of the harmonium in the Indian subcontinent, musicians used either a tambura or a specific pitch reference instrument, such as the nadaswaram, to produce the drone. Some forms of music such as Yakshagana used the pungi reedpipe as drone. After the Western small pump harmonium became popular, musicians would modify the harmonium to automatically produce the reference pitch. Typically, one would open up the cover and adjust the stop of the harmonium to produce a drone.</p> <p>Later, a keyless version of the harmonium was invented for the specific purpose of producing the drone sound. It was given the name shruti box or sruti box. These types of instruments had controls on the top or on the side of the box for controlling the pitch.</p> <p><img src="./images/kyw-professionalconcert-shruti-box-teak-wood.jpg" alt=""></p> <p>The shruti box is enjoying a renaissance in the West amongst traditional and contemporary musicians, who are using it for a range of different styles. In the early nineties, traditional Irish singer Nóirín Ní Riain brought the shruti box to Ireland, giving it a minor place in traditional Irish music. More recently Scottish folk artist Karine Polwart and Julie Fowlis use the instrument, using it on some of their songs. Singers find it very useful as an accompaniment and instrumentalists enjoy the drone reference it gives to play along with.</p> <h3 id="jivari" tabindex="-1">Jivari <a class="header-anchor" href="#jivari" aria-label="Permalink to "Jivari""></a></h3> <blockquote> <p><img src="./images/Sitar_jawari.jpg" alt=""> The Javari of a sitar, made from ebony, showing graphite marks from the first two strings</p> </blockquote> <p>Javārī, (also: 'joārī', 'juvārī', 'jvārī' (alternately transcribed 'jawārī', 'jowārī', 'joyārī', 'juwārī', and 'jwārī')) in Indian classical music refers to the overtone-rich "buzzing" sound characteristic of classical Indian string instruments such as the tanpura, sitar, surbahar, rudra veena and Sarasvati veena. Javari can refer to the acoustic phenomenon itself and to the meticulously carved bone, ivory or wooden bridges that support the strings on the sounding board and produce this particular effect. A similar sort of bridge is used on traditional Ethiopian lyres, as well as on the ancient Greek kithara, and the "bray pins" of some early European harps operated on the same principle. A similar sound effect, called in Japanese sawari, is used on some traditional Japanese instruments as well.</p> <p>Under the strings of tanpuras, which are unfretted (unstopped), and occasionally under those bass drone strings of sitars and surbahars which are seldom fretted, cotton threads are placed on the javari bridge to control the exact position of the node and its height above the curved surface, in order to more precisely refine the sound of javari. These cotton threads are known in Hindi as 'jīvā', meaning "life" and referring to the brighter tone heard from the plucked string once the thread has been slid into the correct position. This process is called "adjusting the javari". 
After a substantial time of playing, the surface directly under the string will wear out through the erosive impact of the strings. The sound will become thin and sharp and tuning also becomes a problem. Then a skilled, experienced craftsman needs to redress and polish the surface, which is called "doing the javari" ("'Javārī Sāf Karnā' or "Cleaning the Javārī'").</p> <blockquote> <p><img src="./images/Topvieuw_of_a_tambura_bridge.jpg" alt=""> Top view of a rosewood tambura bridge. Notice the marks left by the strings as the javari-maker assures that the contact-lines on the surface of the bridge are continuous and even. As a further test strings are pulled sideways and lengthwise in order to rub the bridge with the string, to better judge the quality of the surface, as unevenness in the surface shows clearly as a gap.</p> </blockquote> <p>The rich and very much 'alive' resonant sound requires great sensitivity and experience in the tuning process. In the actual tuning, the fundamentals are of lesser interest as attention is drawn to the sustained harmonics that should be clearly audible, particularly the octaves, fifths, major thirds and minor sevenths of the (fundamental) tone of the string. The actual tuning is done on three levels: firstly by means of the large pegs, secondly, by carefully shifting tuning-beads for micro-tuning and thirdly, on a tanpura, by even more careful shifting of the cotton threads that pass between the strings and the bridge, somewhat before the zenith of its curve.</p> <h4 id="effect" tabindex="-1">Effect <a class="header-anchor" href="#effect" aria-label="Permalink to "Effect""></a></h4> <p>Typical of javari on an instrument with preferably long strings, is that on the soundboard the strings run over a wide bridge with a very flat parabolic curve. The curvature of the bridge has been made in a precise relation to the optimum level of playing, or more exact, a precise amplitude of each string. Any string, given length, density, pitch and tension, wants to be plucked within the limits of its elasticity, and so vibrate harmoniously with a steady pitch. When a string of a tanpura is plucked properly, it produces a tone with a certain amplitude that will slowly decrease as the tone fades out. In this gradual process, the string, moving up and down according to its frequency, will make a periodic grazing contact with the curved surface of the bridge. The exact grazing-spot will gradually shift up the sloping surface, as a function of the decreasing amplitude, finally dissolving into the rest-position of the open string. In this complex dynamic sonation process, the shifting grazing will touch upon micro-nodes on the string, exciting a wide range of harmonics in a sweeping mode. The desired effect is that of a cascading row of harmonics in a rainbow of sound. As an analogy, a properly shaped and adjusted javari is similar to the refraction of white light through a prism. When the prism is of good proportions and quality and used properly, the phenomenon should produce itself. "The voice of an artist which is marked by a rich sound resembling that produced by two consonants played together, is often loosely known to have Javārī in it, although such use is arbitrary."</p> <h4 id="construction" tabindex="-1">Construction <a class="header-anchor" href="#construction" aria-label="Permalink to "Construction""></a></h4> <p>The javari of a tanpura is in a way fine-tuned with a cotton thread under the string. Both the thread itself and its function is called 'jiva'. 
The jiva lifts the string by its diameter off the bridge and gives the necessary clearance and adjustability. By carefully shifting the jiva the sequence of the shifting grazing on the parabolic surface of the bridge becomes 'tuneable' within limits. For each string there should be a spot relative to the curve of the bridge where optimum sound quality is found. Within the area of optimum resonance and sustain, a little play should be available for further fine-tuning, in which the jiva can hardly be seen to move. Staying with optics, shifting the jiva would be similar to using the manual fine focus on a camera. Experienced 'javari-makers' will agree that the 'javari' has to be made specific to certain string lengths, gauges and pitches and certain amplitudes.</p> <blockquote> <p><img src="./images/Side_view_of_Tanjore-style_rosewood_tanpura_bridge_with_cotton_threads_adjusted_for_full_resonance.jpg" alt=""> Side view of a Tanjore-style rosewood tanpura bridge with cotton threads adjusted for full resonance.</p> </blockquote> <p>The curvature of the bridge of the main strings of a sitar will be different from that of the smaller and lower bridge in front of the main bridge, which carries the sympathetic resonance-strings (tarafs). As this choir of thinner and shorter strings is excited solely by the sympathetic resonance with the tones played on the main strings, the general amplitude is smaller, so accordingly the curvature will be flatter. The making of a perfectly sounding javari for any instrument requires a very high degree of skill and expertise. Tanpuras are the only instruments that are always used with jiva-threads, except the octave-tamburis. Sitar, Rudra Veena, Sarasvati Veena, all have parabolic wide javari bridges for the main playing strings. Sarod and Sarangi have some of their sympathetic resonance strings (tarafs) on small, flat javari-bridges similar to that of the sitar. The javari of a sitar will be made according to the wishes of the player, either 'open',('khula') with a bright sounding javari-effect, or 'closed' ('band') with a relatively more plain tone, or something in between ('ghol'). The choice depends on the preference of the sitar-player and on the adapted playing style.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/s-l1600.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Counting]]></title> <link>https://chromatone.center/theory/rhythm/counting/</link> <guid>https://chromatone.center/theory/rhythm/counting/</guid> <pubDate>Wed, 13 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[A system of regularly occurring sounds that serve to assist with the performance or audition of music by allowing the easy identification of the beat.]]></description> <content:encoded><![CDATA[<p>Counting is a system of regularly occurring sounds that serve to assist with the performance or audition of music by allowing the easy identification of the beat. Commonly, this involves verbally counting the beats in each measure as they occur, whether there be 2 beats, 3 beats, 4 beats, or even 5 beats. In addition to helping to normalize the time taken up by each beat, counting allows easier identification of the beats that are stressed. 
Counting is most commonly used with rhythm (often to decipher a difficult rhythm) and form, and often involves subdivision.</p> <h2 id="introduction-to-systems-numbers-and-syllables" tabindex="-1">Introduction to Systems - Numbers and Syllables <a class="header-anchor" href="#introduction-to-systems-numbers-and-syllables" aria-label="Permalink to "Introduction to Systems - Numbers and Syllables""></a></h2> <p>The method involving numbers may be termed count chant, "to identify it as a unique instructional process."</p> <p>In lieu of simply counting the beats of a measure, other systems can be used which may be more appropriate to the particular piece of music. Depending on the tempo, the divisions of a beat may be vocalized as well (for slower tempos), or numbers may be skipped altogether (for faster tempos). As an alternative to counting, a metronome can be used to accomplish the same function.</p> <p>Triple meter, such as 3/4, is often counted <strong>1 2 3</strong>, while compound meter, such as 6/8, is often counted in two and subdivided <strong>One-and-ah-Two-and-ah</strong> but may be articulated as <strong>One-la-lee-Two-la-lee</strong>.</p> <p>For each subdivision employed, a new syllable is used. For example, sixteenth notes in 4/4 are counted <strong>1 e & a 2 e & a 3 e & a 4 e & a</strong>, using numbers for the quarter note, "&" for the eighth note, and "e" and "a" for the sixteenth note level.</p>
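<p>As a rough illustration of that scheme, the spoken count can be generated from the beat numbers plus the fixed subdivision syllables "e", "&" and "a". The following is a minimal JavaScript sketch (the function name is only illustrative):</p> <pre><code class="language-js">// Build the spoken sixteenth-note count for one measure:
// the beat number on each beat, then "e", "&", "a" for its subdivisions.
const subdivisions = ['e', '&', 'a'];
function countSixteenths(beatsPerMeasure = 4) {
  const beats = Array.from({ length: beatsPerMeasure }, (_, i) => i + 1);
  return beats
    .map((beat) => [beat, ...subdivisions].join(' '))
    .join(' ');
}
console.log(countSixteenths(4)); // 1 e & a 2 e & a 3 e & a 4 e & a
</code></pre>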
<p>Triplets may be counted <strong>1 tri ple 2 tri ple 3 tri ple 4 tri ple</strong> and sixteenth note triplets <strong>1 la li + la li 2 la li + la li</strong>. Quarter note triplets, due to their different rhythmic feel, may be articulated differently as <strong>1 dra git 3 dra git</strong>.</p> <p>Rather than numbers or nonsense syllables, a random word may be assigned to a rhythm to clearly count each beat. For example, a triplet subdivision is often counted <strong>"tri-po-let"</strong>. The Kodály Method uses <strong>"Ta"</strong> for quarter notes and <strong>"Ti-Ti"</strong> for eighth notes. For sextuplets simply say triplet twice, while quintuplets may be articulated as <strong>"un-i-vers-i-ty"</strong>. In some approaches, "rote-before-note", the fractional definitions of notes are not taught to children until after they are able to perform syllable or phrase-based versions of these rhythms.</p> <blockquote> <p>"However the counting may be syllabized, the important skill is to keep the pulse steady and the division exact." – Alfred Blatter 2007</p> </blockquote> <h2 id="numbers-systems" tabindex="-1">Numbers Systems <a class="header-anchor" href="#numbers-systems" aria-label="Permalink to "Numbers Systems""></a></h2> <h3 id="numbers" tabindex="-1">Numbers <a class="header-anchor" href="#numbers" aria-label="Permalink to "Numbers""></a></h3> <p>Ultimately, musicians count using <strong>numbers</strong>, <strong>"ands"</strong> and vowel sounds. Downbeats within a measure are called <strong>1, 2, 3…</strong> Upbeats are represented with a plus sign and are called "and" (i.e. <strong>1 + 2 +</strong>), and further subdivisions receive the sounds "ee" and "uh" (i.e. <strong>1 e + a 2 e + a</strong>). Musicians do not agree on what to call triplets: some simply say the word triplet ("<strong>trip-a-let</strong>"), or another three-syllable word (like pineapple or elephant) with an antepenultimate accent. Some use numbers along with the word triplet (i.e. "<strong>1-trip-let</strong>"). Still others have devised sounds like "ah-lee" or "la-li" added after the number (i.e. <strong>1-la-li, 2-la-li</strong> or <strong>1-tee-duh, 2-tee-duh</strong>).</p> <h3 id="traditional-american-system" tabindex="-1">Traditional American system <a class="header-anchor" href="#traditional-american-system" aria-label="Permalink to "Traditional American system""></a></h3> <p>This system counts the beat <strong>number</strong> on the tactus, <strong>&</strong> on the half beat, and <strong>n-e-&-a</strong> for four sixteenth notes, <strong>n-&-a</strong> for a triplet or three eighth notes in compound meter, where n is the beat number.</p> <h3 id="eastman-system" tabindex="-1">Eastman System <a class="header-anchor" href="#eastman-system" aria-label="Permalink to "Eastman System""></a></h3> <p>The beat <strong>numbers</strong> are used for the tactus, <strong>te</strong> for the half beat, and <strong>n-ti-te-ta</strong> for four sixteenths. Triplets or three eighth notes in compound meter are <strong>n-la-li</strong> and six sixteenth notes in compound meter are <strong>n-ta-la-ta-li-ta</strong>.</p> <h3 id="froseth-system" tabindex="-1">Froseth System <a class="header-anchor" href="#froseth-system" aria-label="Permalink to "Froseth System""></a></h3> <p>A counting system using <strong>n-ne</strong>, <strong>n-ta-ne-ta</strong>, <strong>n-na-ni</strong>, and <strong>n-ta-na-ta-ni-ta</strong>. All three systems have internal consistency for all divisions of the beat except the tactus, which changes according to the beat number.</p> <h2 id="syllables-systems" tabindex="-1">Syllables Systems <a class="header-anchor" href="#syllables-systems" aria-label="Permalink to "Syllables Systems""></a></h2> <p>Syllable systems are categorized as "Beat Function Systems": the tactus (pulse) always gets a certain syllable A, and the half-beat always gets a certain syllable B, regardless of how the rest of the measure is filled out.</p> <h3 id="french-system" tabindex="-1">French System <a class="header-anchor" href="#french-system" aria-label="Permalink to "French System""></a></h3> <p>The French "Time-Names system", also called the "Galin-Paris-Chevé system", originally used French words. 
Toward the middle of the nineteenth century the American musician Lowell Mason (affectionately named the "Father of Music Education") adapted the French Time-Names system for use in the United States, and instead of using the French names of the notes, he replaced these with a system that identified the value of each note within a meter and the measure.</p> <ul> <li>Whole Note: <strong>Ta-a-a-a</strong></li> <li>Half Note: <strong>Ta-a</strong></li> <li>Quarter Note: <strong>Ta</strong></li> <li>2 Eighth Note: <strong>Ta Te</strong></li> <li>4 Sixteenth Notes: <strong>Tafa Tefe</strong></li> </ul> <h3 id="kodaly-method" tabindex="-1">Kodály Method <a class="header-anchor" href="#kodaly-method" aria-label="Permalink to "Kodály Method""></a></h3> <ul> <li>Whole Note: <strong>Ta-a-a-a</strong> or <strong>ta-o-o-o</strong></li> <li>Half Note: <strong>Ta-a</strong> or <strong>ta-o</strong></li> <li>Quarter Note: <strong>Ta</strong></li> <li>1 Eighth Note: <strong>Ti</strong></li> <li>2 Eighth Notes: <strong>Ti-Ti</strong></li> <li>4 Sixteenth Notes: <strong>Ti-ri-ti-ri</strong></li> <li>Eighth Note Triplet: <strong>Tri-o-la</strong></li> <li>Eighth Note followed by a Quarter Note and another Eighth Note: <strong>Syn-co-pa</strong></li> </ul> <h3 id="ward-method" tabindex="-1">Ward Method <a class="header-anchor" href="#ward-method" aria-label="Permalink to "Ward Method""></a></h3> <ul> <li>Whole Note: <strong>Lang-ng-ng-ng</strong></li> <li>Half Note: <strong>Lang-ng</strong></li> <li>Quarter Note: <strong>La</strong></li> <li>2 Eighth Notes: <strong>Lira</strong></li> <li>Dotted Quarter followed by Eighth: <strong>La-ira</strong></li> </ul> <h3 id="edwin-gordon-system" tabindex="-1">Edwin Gordon System <a class="header-anchor" href="#edwin-gordon-system" aria-label="Permalink to "Edwin Gordon System""></a></h3> <h4 id="usual-duple-meter" tabindex="-1">Usual Duple Meter <a class="header-anchor" href="#usual-duple-meter" aria-label="Permalink to "Usual Duple Meter""></a></h4> <ul> <li>Whole Note: <strong>Du-u-u-u</strong></li> <li>Half Note: <strong>Du-u</strong></li> <li>Quarter Note: <strong>Du</strong></li> <li>2 Eighth Notes: <strong>Du-De</strong></li> <li>4 Sixteenth Notes: <strong>Du-Ta-De-Ta</strong></li> </ul> <h4 id="usual-triple-meter" tabindex="-1">Usual Triple Meter <a class="header-anchor" href="#usual-triple-meter" aria-label="Permalink to "Usual Triple Meter""></a></h4> <ul> <li>Dotted Quarter Note: <strong>Du</strong></li> <li>3 Eighth Notes: <strong>Du-Da-Di</strong></li> <li>6 Sixteenth Notes: <strong>Du-Ta-Da-Ta-Di-Ta</strong></li> </ul> <p>Unusual meters pair the duple and triple meter syllables, and employ the "b" consonant.</p> <h3 id="takadimi" tabindex="-1">Takadimi <a class="header-anchor" href="#takadimi" aria-label="Permalink to "Takadimi""></a></h3> <p>The beat is always called ta. In simple meters, the division and subdivision are always <strong>ta-di</strong> and <strong>ta-ka-di-mi</strong>. Any note value can be the beat, depending on the time signature. In compound meters (wherein the beat is generally notated with dotted notes), the division and subdivision are always <strong>ta-ki-da</strong> and <strong>ta-va-ki-di-da-ma</strong>.</p> <p>The note value does not receive a particular name; the note’s position within the beat gets the name. 
This system allows children to internalize a steady beat and to naturally discover the subdivisions of beat, similar to the <strong>down-ee-up-ee</strong> system.</p> <h4 id="examples-of-simple-meter-rhythms-takadimi" tabindex="-1">Examples of Simple Meter Rhythms (Takadimi) <a class="header-anchor" href="#examples-of-simple-meter-rhythms-takadimi" aria-label="Permalink to "Examples of Simple Meter Rhythms (Takadimi)""></a></h4> <ul> <li>Whole Note = <strong>ta-a-a-a</strong></li> <li>Half Note = <strong>Ta-a</strong></li> <li>Quarter Note = <strong>Ta</strong></li> <li>Two Eighth Notes = <strong>Ta-Di</strong></li> <li>Four Sixteenth Notes = <strong>Ta-Ka-Di-Mi</strong></li> <li>Eighth Rest + Eighth Note = <strong>X-Di</strong></li> <li>Eighth Note + Two Sixteenth Notes = <strong>Taaa-Di-Mi</strong></li> <li>Two Sixteenth Notes + Eighth Note = <strong>Ta-Ka-Diii</strong></li> </ul> <h4 id="examples-of-compound-meter-rhythms-takadimi" tabindex="-1">Examples of Compound Meter Rhythms (Takadimi) <a class="header-anchor" href="#examples-of-compound-meter-rhythms-takadimi" aria-label="Permalink to "Examples of Compound Meter Rhythms (Takadimi)""></a></h4> <ul> <li>Dotted Whole Note = <strong>Ta-a-a-a</strong></li> <li>Dotted Half Note = <strong>Ta-a</strong></li> <li>Dotted Quarter Note = <strong>Ta</strong></li> <li>Three Eighth Notes Beamed Together = <strong>Ta-Ki-Da</strong></li> <li>Eighth Note + Eighth Rest + Eighth Note = <strong>Ta-X-Da</strong></li> <li>Six Sixteenth Notes = <strong>Ta-Va-Ki-Di-Da-Ma</strong></li> <li>Eighth Note + Four Sixteenth Notes = <strong>Ta-aa-Ki-Di-Da-Ma</strong></li> <li>Four Sixteenth Notes + Eighth Note = <strong>Ta-Va-Ki-Di-Da-aa</strong></li> <li>Two Sixteenth Notes + Eighth Note + Two Sixteenth Notes = <strong>Ta-Va-Ki-ii-Da-Ma</strong></li> </ul> <youtube-embed video="8gNM51Q55XY" /><h3 id="takatiki" tabindex="-1">Takatiki <a class="header-anchor" href="#takatiki" aria-label="Permalink to "Takatiki""></a></h3> <p>This is a beat-function system used by some Kodály teachers that was developed by Laurdella Foulkes-Levy, and was designed to be easier to say than Gordon's system or the Takadimi system while still honoring the beat-function. The beat is said as "Ta" in both duple and triple meters, but the beat divisions are performed differently between the two meters. The "t" consonant always falls on the main beat and beat division, and the "k" consonant is always when the beat divides again. Alternating "t" and "k" in quick succession is easy to say, as they fall on two different parts of the tongue, making it very easy to say these syllables at a fast tempo (much like tonguing on recorder or flute). 
It is also a logical system since it always alternates between the same two consonants.</p> <h4 id="duple-meter" tabindex="-1">Duple Meter <a class="header-anchor" href="#duple-meter" aria-label="Permalink to "Duple Meter""></a></h4> <ul> <li>Whole Note: <strong>Ta-a-a-a</strong> (no added accent on each beat)</li> <li>Half Note: <strong>Ta-a</strong> (no added accent on each beat)</li> <li>Quarter Note: <strong>Ta</strong></li> <li>2 Eighth Notes: <strong>Ta-Ti</strong></li> <li>4 Sixteenth Notes: <strong>Ta-Ka-Ti-Ki</strong></li> <li>Sixteenth Note Combinations: <strong>Ta---Ti-Ki</strong>, <strong>Ta-Ka-Ti---</strong>, <strong>Ta-Ka---Ki</strong></li> <li>Eighth Note followed by a Quarter Note and another Eighth Note: <strong>Ta-Ti---Ti</strong></li> <li>Eighth Note Triplet: <strong>Ta-Tu-Te</strong></li> <li>Rests: (silent)</li> </ul> <h4 id="triple-meter" tabindex="-1">Triple Meter <a class="header-anchor" href="#triple-meter" aria-label="Permalink to "Triple Meter""></a></h4> <ul> <li>Dotted Half Note: <strong>Ta-a-a-</strong> (no added accent on each beat)</li> <li>Dotted Quarter Note: <strong>Ta-</strong></li> <li>3 Eighth Notes: <strong>Ta-Tu-Te</strong></li> <li>Eighth Note Combinations: <strong>Ta----Te</strong>, <strong>Ta-Tu-----</strong></li> <li>6 Sixteenth Notes: <strong>Ta-Ka-Tu-Ku-Te-Ke</strong></li> <li>Sixteenth Note Combinations: <strong>Ta--Tu-Ku-Te</strong>, <strong>Ta-Ka-Tu---Te</strong>, <strong>Ta--Tu--Te-Ke</strong></li> <li>Rests: (silent)</li> </ul> <h3 id="ta-titi" tabindex="-1">Ta Titi <a class="header-anchor" href="#ta-titi" aria-label="Permalink to "Ta Titi""></a></h3> <ul> <li>Whole Note: <strong>Toe / ta-ah-ah-ah</strong></li> <li>Dotted Half Note: <strong>Toom / ta-ah-ah</strong></li> <li>Half Note: <strong>Too / ta-ah</strong></li> <li>Dotted Quarter Note: <strong>Tom / ta-a</strong></li> <li>Quarter Note: <strong>Ta</strong></li> <li>1 Eighth Note: <strong>Ti</strong></li> <li>2 Eighth Notes: <strong>Ti-Ti</strong></li> <li>Eighth Note Triplet: <strong>Tri-o-la</strong></li> <li>2 Sixteenth Notes: <strong>Tika / Tiri</strong></li> <li>4 Sixteenth Notes: <strong>TikaTika / Tiritiri</strong></li> <li>2 Sixteenth Notes and 1 Eighth Note: <strong>Tika-Ti / Tiri-Ti</strong></li> <li>1 Eighth Note and 2 Sixteenths: <strong>Ti-Tika / Ti-Tiri</strong></li> </ul> <p>This system allows the value of each note to be clearly represented no matter its placement within the beat/measure.</p> <youtube-embed video="1SMmc9gQmHQ" />]]></content:encoded> <enclosure url="https://chromatone.center/claire-brear.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[African cross-beats]]></title> <link>https://chromatone.center/theory/rhythm/system/crossbeat/</link> <guid>https://chromatone.center/theory/rhythm/system/crossbeat/</guid> <pubDate>Wed, 13 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Combinations of two or more rhythms as a basis of music]]></description> <content:encoded><![CDATA[<h2 id="cross-beat" tabindex="-1">Cross-beat <a class="header-anchor" href="#cross-beat" aria-label="Permalink to "Cross-beat""></a></h2> <p>In music, a <a href="https://en.wikipedia.org/wiki/Cross-beat" target="_blank" rel="noreferrer">cross-beat</a> or cross-rhythm is a specific form of <a href="https://en.wikipedia.org/wiki/Polyrhythm" target="_blank" rel="noreferrer">polyrhythm</a>. The term cross rhythm was introduced in 1934 by the musicologist Arthur Morris Jones (1889–1980). 
It refers to when the rhythmic conflict found in polyrhythms is the basis of an entire musical piece.</p> <h2 id="etymology" tabindex="-1">Etymology <a class="header-anchor" href="#etymology" aria-label="Permalink to "Etymology""></a></h2> <p>The term "cross rhythm" was introduced in 1934 by the musicologist Arthur Morris Jones (1889–1980), who, with Klaus Wachsmann, took-up extended residence in Zambia and Uganda, respectively, as missionaries, educators, musicologists, and museologists.</p> <blockquote> <p>Cross-rhythm. A rhythm in which the regular pattern of accents of the prevailing meter is contradicted by a conflicting pattern and not merely a momentary displacement that leaves the prevailing meter fundamentally unchallenged.<br> — The New Harvard Dictionary of Music (1986: 216)</p> </blockquote> <h2 id="african-music" tabindex="-1">African music <a class="header-anchor" href="#african-music" aria-label="Permalink to "African music""></a></h2> <h3 id="one-main-system" tabindex="-1">One main system <a class="header-anchor" href="#one-main-system" aria-label="Permalink to "One main system""></a></h3> <p>African cross-rhythm is most prevalent within the greater Niger-Congo linguistic group, which dominates the continent south of the Sahara Desert. (Kubik, p. 58) Cross-rhythm was first identified as the basis of sub-Saharan rhythm by A.M. Jones. Later, the concept was more fully explained in the lectures of Ewe master drummer and scholar C.K. Ladzekpo, and in the writings of David Locke. Jones observes that the shared rhythmic principles of Sub-Saharan African music traditions constitute one main system. Similarly, Ladzekpo affirms the profound homogeneity of sub-Saharan African rhythmic principles. In Sub-Saharan African music traditions (and many of the diaspora musics) cross-rhythm is the generating principle; the meter is in a permanent state of contradiction.</p> <h3 id="an-embodiment-of-the-people" tabindex="-1">An embodiment of the people <a class="header-anchor" href="#an-embodiment-of-the-people" aria-label="Permalink to "An embodiment of the people""></a></h3> <blockquote> <p>At the center of a core of rhythmic traditions and composition is the technique of cross-rhythm. The technique of cross-rhythm is a simultaneous use of contrasting rhythmic patterns within the same scheme of accents or meter … By the very nature of the desired resultant rhythm, the main beat scheme cannot be separated from the secondary beat scheme. It is the interplay of the two elements that produces the cross-rhythmic texture.<br> — Ladzekpo, a: "Myth"</p> </blockquote> <blockquote> <p>From the philosophical perspective of the African musician, cross-beats can symbolize the challenging moments or emotional stress we all encounter. Playing cross-beats while fully grounded in the main beats, prepares one for maintaining a life-purpose while dealing with life’s challenges. Many sub-Saharan languages do not have a word for rhythm, or even music. From the African viewpoint, the rhythms represent the very fabric of life itself; they are an embodiment of the people, symbolizing interdependence in human relationships.<br> — Clave Matrix, p. 
21</p> </blockquote> <youtube-embed video="DHPDbkXQV0M"/> <h2 id="cross-rhythmic-ratios" tabindex="-1">Cross-rhythmic ratios <a class="header-anchor" href="#cross-rhythmic-ratios" aria-label="Permalink to "Cross-rhythmic ratios""></a></h2> <h3 id="_3-2" tabindex="-1">3:2 <a class="header-anchor" href="#_3-2" aria-label="Permalink to "3:2""></a></h3> <p>The cross-rhythmic ratio three-over-two (3:2) or vertical <a href="https://en.wikipedia.org/wiki/Hemiola" target="_blank" rel="noreferrer">hemiola</a>, is the most significant rhythmic cell found in sub-Saharan rhythms. The following measure is evenly divided by three beats and two beats. The two cycles do not share equal status though. The two bottom notes are the primary beats, the ground, the main temporal referent. The three notes above are the secondary beats. Typically, the dancer's feet mark the primary beats, while the secondary beats are accented musically.</p> <blockquote> <p>We have to grasp the fact that if from childhood you are brought up to regard beating 3 against 2 as being just as normal as beating in synchrony, then you develop a two dimensional attitude to rhythm... This bi-podal conception is... part of the African's nature<br> — Jones (1959: 102)</p> </blockquote> <p>Novotney observes: "The 3:2 relationship (and [its] permutations) is the foundation of most typical polyrhythmic textures found in West African musics." 3:2 is the generative or theoretic form of sub-Saharan rhythmic principles. Agawu succinctly states: "[The] resultant (3:2) rhythm holds the key to understanding … there is no independence here, because 2 and 3 belong to a single Gestalt."</p> <p>African Xylophones such as the balafon and gyil play cross-rhythms, which are often the basis of ostinato melodies. In the following example, a Ghanaian gyil sounds the three-against-two cross-rhythm. The left hand (lower notes) sounds the two main beats, while the right hand (upper notes) sounds the three cross-beats. (Clave Matrix p. 22) Ghanaian gyil Ghanaian gyil sounds 3:2 cross-rhythm. About this soundPlay (help·info)</p> <h3 id="_6-4" tabindex="-1">6:4 <a class="header-anchor" href="#_6-4" aria-label="Permalink to "6:4""></a></h3> <h4 id="the-primary-cycle-of-four-beats" tabindex="-1">The primary cycle of four beats <a class="header-anchor" href="#the-primary-cycle-of-four-beats" aria-label="Permalink to "The primary cycle of four beats""></a></h4> <p>A great deal of African music is built upon a cycle of four main beats. This basic musical period has a bipartite structure; it is made up of two cells, consisting of two beats each. Ladzekpo states: "The first most useful measure scheme consists of four main beats with each main beat measuring off three equal pulsations [12/8] as its distinctive feature … The next most useful measure scheme consists of four main beats with each main beat flavored by measuring off four equal pulsations [4/4]." (b: "Main Beat Schemes") The four-beat cycle is a shorter period than what is normally heard in European music. This accounts for the stereotype of African music as "repetitive." (Kubik, p. 41) A cycle of only two main beats, as in the case of 3:2, does not constitute a complete primary cycle. (Kubik, Vol. 2, p. 63) Within the primary cycle there are two cells of 3:2, or, a single cycle of six-against-four (6:4). The six cross-beats are represented below as quarter-notes for visual emphasis. 
Six-against-four cross-rhythm (note that this is identical to the three-over-two cross-rhythm above, played twice).</p> <blockquote> <p>Interacting the four recurrent triple structure main beat schemes (four beat scheme) simultaneously with the six recurrent two pulse beat schemes (six beat scheme) produces the first most useful cross rhythmic texture in the development of Anlo-Ewe dance-drumming.<br> — Ladzekpo (c: "Six Against Four")</p> </blockquote> <p>The following notated example is from the kushaura part of the traditional mbira piece "Nhema Mussasa." The left hand plays the ostinato "bass line," built upon the four main beats, while the right hand plays the upper melody, consisting of six cross-beats. The composite melody is an embellishment of the 6:4 cross-rhythm. (Clave Matrix p. 35)</p> <h3 id="_3-4" tabindex="-1">3:4 <a class="header-anchor" href="#_3-4" aria-label="Permalink to "3:4""></a></h3> <p>If every other cross-beat is sounded, the three-against-four (3:4) cross-rhythm is generated. The "slow" cycle of three beats is more metrically destabilizing and dynamic than the six beats. The Afro-Cuban rhythm abakuá (Havana-style) is based on the 3:4 cross-rhythm. The three-beat cycle is represented as half-notes in the following example for visual emphasis.</p> <blockquote> <p>In contrast to the four main beat scheme, the rhythmic motion of the three beat scheme is slower. A simultaneous interaction of these two beat schemes with contrasting rhythmic motions produces the next most useful cross rhythmic texture in the development of sub-Saharan dance-drumming. The composite texture of the three-against-four cross rhythm produces a motif covering a length of the musical period. The motif begins with the component beat schemes coinciding and continues with the beat schemes in alternate motions thus showing a progression from a "static" beginning to a "dynamic" continuation<br> — Ladzekpo ("Three Against Four")</p> </blockquote> <p>The following pattern is an embellishment of the three-beat cycle, commonly heard in African music. It consists of three sets of three strokes each.</p> <h3 id="_1-5-4-or-3-8" tabindex="-1">1.5:4 (or 3:8) <a class="header-anchor" href="#_1-5-4-or-3-8" aria-label="Permalink to "1.5:4 (or 3:8)""></a></h3> <p>Even more metrically destabilizing and dynamic than 3:4, is the one and a half beat-against-four (1.5:4) cross-rhythm. Another way to think of it is as three "very slow" cross-beats spanning two main beat cycles (of four beats each), or three beats over two periods (measures), a type of macro "hemiola." In terms of the beat scheme comprising the complete 24-pulse cross-rhythm, the ratio is 3:8. The three cross-beats are shown as whole notes below for visual emphasis.</p> <p>The 1.5:4 cross-rhythm is the basis for the open tone pattern of the enú (large batá drum head) for the Afro-Cuban rhythm changó (Shango). It is the same pattern as the previous figure, but the strokes occur at half the rate.</p> <p>The following bell pattern is used in the Ewe rhythm kadodo. The pattern consists of three modules—two pairs of strokes, and a single stroke. The three single stroke are muted. The pattern is another embellishment of the 1.5:4 cross-rhythm.</p> <h3 id="_4-3" tabindex="-1">4:3 <a class="header-anchor" href="#_4-3" aria-label="Permalink to "4:3""></a></h3> <p>When duple pulses (4/1) are grouped in sets of three, the four-against-three (4:3) cross-rhythm is generated. The four cross-beats cycle every three main beats. 
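</p> <p>These ratios reduce to simple pulse arithmetic: the main beats group a shared pulse grid one way and the cross-beats group it another. The following TypeScript sketch (illustrative only; the function names are hypothetical) prints both groupings over a common cycle, and its last line anticipates the tresillo figure discussed further below:</p> <pre><code>// "x" marks an onset, "." marks a pulse with no onset.
function grid(cyclePulses: number, groupSize: number): string {
  return Array.from({ length: cyclePulses }, (_, pulse) =>
    pulse % groupSize === 0 ? 'x' : '.'
  ).join('');
}

function crossRhythm(cyclePulses: number, mainGroup: number, crossGroup: number): void {
  console.log('main  ' + grid(cyclePulses, mainGroup));
  console.log('cross ' + grid(cyclePulses, crossGroup));
}

crossRhythm(6, 3, 2);  // 3:2 - two main beats against three cross-beats
crossRhythm(12, 3, 2); // 6:4 - the same cell twice over a four-beat cycle
crossRhythm(12, 4, 3); // 4:3 - four cross-beats cycling every three main beats

console.log('tresillo ' + grid(8, 3)); // x..x..x. - duple-pulse fragment of 4:3
</code></pre> <p>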
In terms of cross-rhythm only, this 4:3 is the same as having duple cross-beats in a triple beat scheme, such as 3/4 or 6/4. The pulses on the top line are grouped in threes for visual emphasis.</p> <p>However, this 4:3 is within a duple beat scheme, with duple (quadruple) subdivisions of the beats. Since the musical period is a cycle of four main beats, the 4:3 cross-rhythm significantly contradicts the period by cycling every three main beats. The complete cross-beat cycle is shown below in relation to the key pattern known in Afro-Cuban music as clave. (Rumba, p. xxxi) The subdivisions are grouped (beamed) in sets of four to reflect the proper metric structure. The complete cross-beat cycle is three claves in length. Within the context of the complete cross-rhythm, there is a macro 4:3—four 4:3 modules-against-three claves. Continuous duple-pulse cross-beats are often sounded by the quinto, the lead drum in the Cuban genres rumba and conga. (Rumba, pp. 69–86)</p> <blockquote> <p>While 3:2 pervades ternary music, quaternary music seldom uses tuplets; instead, a set of dotted notes may temporarily make 2:3 and 4:3 temporal structures.<br> — Locke ("Metric Matrix")</p> </blockquote> <h3 id="duple-pulse-correlative-of-3-2" tabindex="-1">Duple-pulse correlative of 3:2 <a class="header-anchor" href="#duple-pulse-correlative-of-3-2" aria-label="Permalink to "Duple-pulse correlative of 3:2""></a></h3> <p>In sub-Saharan rhythm the four main beats are typically divided into three or four pulses, creating a 12-pulse (12/8) or 16-pulse (4/4) cycle. (Ladzekpo, b: "Main Beat Scheme") Every triple-pulse pattern has its duple-pulse correlative; the two pulse structures are two sides of the same coin. Cross-beats are generated by grouping pulses contrary to their given structure, for example: groups of two or four in 12/8, or groups of three or six in 4/4. (Rumba, p. 180) The duple-pulse correlative of the three cross-beats of the hemiola is a figure known in Afro-Cuban music as tresillo. Tresillo is a Spanish word meaning ‘triplet’—three equal notes within the same time span normally occupied by two notes. As used in Cuban popular music, tresillo refers to the most basic duple-pulse rhythmic cell. The pulse names of tresillo and the three cross-beats of the hemiola are identical: one, one-ah, two-and.</p> <p>The composite pattern of tresillo and the main beats is commonly known as the habanera, congo, tango-congo, or tango. The habanera rhythm is the duple-pulse correlative of the vertical hemiola (above). The three cross-beats of the hemiola are generated by grouping triple pulses in twos: 6 pulses ÷ 2 = 3 cross-beats. Tresillo is generated by grouping duple pulses in threes: 8 pulses ÷ 3 = 2 cross-beats (consisting of three pulses each), with a remainder of a partial cross-beat (spanning two pulses). In other words, 8 ÷ 3 = 2, r2. Tresillo is a cross-rhythmic fragment: it contains the first three cross-beats of 4:3. (Rumba, p. xxx)</p> <h2 id="cross-rhythm-not-polymeter" tabindex="-1">Cross-rhythm, not polymeter <a class="header-anchor" href="#cross-rhythm-not-polymeter" aria-label="Permalink to "Cross-rhythm, not polymeter""></a></h2> <p>Early ethnomusicological analysis often perceived African music as polymetric. Pioneers such as A.M. Jones and Anthony King interpreted the prevailing rhythmic emphasis as metrical accents (main beats), rather than the contrametrical accents (cross-beats) they in fact are. 
Some of their music examples are polymetric, with multiple and conflicting main beat cycles, each requiring its own separate time signature. King shows two Yoruba dundun pressure drum ("talking drum") phrases in relation to the five-stroke standard pattern, or "clave," played on the kagano dundun (top line). The standard pattern is written in a polymetric 7/8 + 5/8 time signature. One dundun phrase is based on a grouping of three pulses written in 3/8, and the other, a grouping of four pulses written in 4/1. Complicating the transcription further, one polymetric measure is offset from the other two.</p> <blockquote> <p>African music is often characterized as polymetric, because, in contrast to most Western music, African music cannot be notated without assigning different meters to the different instruments of an ensemble<br> — Chernoff (1979: 45).</p> </blockquote> <p>More recent writings represent African music as cross-rhythmic, within a single meter.</p> <blockquote> <p>Of the many reasons why the notion of polymeter must be rejected, I will mention three. First, if polymeter were a genuine feature of African music, we would expect to find some indication of its pertinence in the discourses and pedagogical schemes of African musicians, carriers of the tradition. As far as I know, no such data is available...Second, because practically all the ensemble music in which polymeter is said to be operative in dance music, and given the grounding demanded by choreography, it is more likely that these musics unfold within polyrhythmic matrices in single meters rather than in ... "mixed" meters ... Third, decisions about how to represent drum ensemble music founder on the assumption, made most dramatically by Jones, that accents are metrical rather than phenomenal...phenomenal accents play a more important role in African music than metrical accents. Because meter and grouping are distinct, postulating a single meter in accordance with the dance allows phenomenal or contrametric accents to emerge against a steady background. Polymeter fails to convey the true accentual structure of African music insofar as it creates the essential tension between a firm and stable background and a fluid foreground<br> — Agawu (2003: 84, 85)</p> </blockquote> <blockquote> <p>[The] term ‘polymetric’ is only applicable to a very special kind of phenomenon. If we take "metre" in its primary sense of metrum (the metre being the temporal reference unit), ‘polymetric’ would describe the simultaneous unfolding of several parts in a single work at different tempos so as not to be reducible to a single metrum. This happens in some modern music, such as some of Charles Ives' works, Elliott Carter’s Symphony, B.A. Zimmermann’s opera "Die Soldaten," and Pierre Boulez’s "Rituel." Being polymetric in the strict sense, these works can only be performed with several simultaneous conductors — Arom (1991: 205)</p> </blockquote> <p>When written within a single meter, we see that the dundun in the second line sounds the main beats, and the subdivision immediately preceding it. The first cell (half measure) of the top line is a hemiola. The two dunduns shown in the second and third lines sound an embellishment of the three-over-four (3:4) cross-rhythm—expressed as three pairs of strokes against four pairs of strokes. (Clave Matrix p. 
216)</p> <h3 id="adaptive-instruments" tabindex="-1">Adaptive instruments <a class="header-anchor" href="#adaptive-instruments" aria-label="Permalink to "Adaptive instruments""></a></h3> <p>Sub-Saharan instruments are constructed in a variety of ways to generate cross-rhythmic melodies. Some instruments organize the pitches in a uniquely divided alternate array – not in the straight linear bass to treble structure that is so common to many western instruments such as the piano, harp, and marimba.</p> <p>Lamellophones including mbira, mbila, mbira huru, mbira njari, mbira nyunga, marimba, karimba, kalimba, likembe, and okeme. These instruments are found in several forms indigenous to different regions of Africa and most often have equal tonal ranges for right and left hands. The kalimba is a modern version of these instruments originated by the pioneer ethnomusicologist Hugh Tracey in the early 20th century which has over the years gained world-wide popularity.</p> <p>Chordophones, such as the West African kora, and Doussn'gouni, part of the harp-lute family of instruments, also have this African separated double tonal array structure. Another instrument, the Marovany from Madagascar is a double sided box zither which also employs this divided tonal structure. The Gravikord is a new American instrument closely related to both the African kora and the kalimba. It was created to exploit this adaptive principle in a modern electro-acoustic instrument.</p> <p>On these instruments one hand of the musician is not primarily in the bass nor the other primarily in the treble, but both hands can play freely across the entire tonal range of the instrument. Also the fingers of each hand can play separate independent rhythmic patterns and these can easily cross over each other from treble to bass and back, either smoothly or with varying amounts of syncopation. This can all be done within the same tight tonal range, without the left and right hand fingers ever physically encountering each other. These simple rhythms will interact musically to produce complex cross rhythms including repeating on beat/off beat pattern shifts that would be very difficult to create by any other means. This characteristically African structure allows often simple playing techniques to combine with each other and produce cross-rhythmic music of great beauty and complexity.</p> <h2 id="jazz" tabindex="-1">Jazz <a class="header-anchor" href="#jazz" aria-label="Permalink to "Jazz""></a></h2> <p>The New Harvard Dictionary of Music calls swing "an intangible rhythmic momentum in jazz," adding that "swing defies analysis; claims to its presence may inspire arguments." The only specific description offered is the statement that "triplet subdivisions contrast with duple subdivisions." The argument could be made that by nature of its simultaneous triple and duple subdivisions, swing is fundamentally a form of polyrhythm. However, the use of true systematic cross-rhythm in jazz did not occur until the second half of the twentieth century.</p> <h3 id="_3-2-or-6-4" tabindex="-1">3:2 (or 6:4) <a class="header-anchor" href="#_3-2-or-6-4" aria-label="Permalink to "3:2 (or 6:4)""></a></h3> <p>In 1959 Mongo Santamaria recorded "Afro Blue," the first jazz standard built upon a typical African 3:2 cross-rhythm. The song begins with the bass repeatedly playing 3 cross-beats per each measure of 6/8 (3:2), or 6 cross-beats per 12/8 measure (6:4). The following example shows the original ostinato "Afro Blue" bass line. 
The slashed noteheads are not bass notes, but are shown to indicate the main beats, where you would normally tap your foot to "keep time."</p> <h3 id="_3-4-1" tabindex="-1">3:4 <a class="header-anchor" href="#_3-4-1" aria-label="Permalink to "3:4""></a></h3> <p>On the original "Afro Blue," drummer Willie Bobo played an abakuá bell pattern on a snare drum, using brushes. This cross-rhythmic figure divides the twelve-pulse cycle into three sets of four pulses. Since the main beats (four sets of three pulses) are present whether sounded or not, this bell pattern can be considered an embellishment of the three-against-four (3:4) cross-rhythm. Bobo used this same pattern and instrumentation on the Herbie Hancock jazz-descarga "Succotash."</p> <h3 id="_2-3" tabindex="-1">2:3 <a class="header-anchor" href="#_2-3" aria-label="Permalink to "2:3""></a></h3> <p>In 1963 John Coltrane recorded "Afro Blue" with the great jazz drummer Elvin Jones. Jones inverted the metric hierarchy of Santamaria's composition, performing it instead as duple cross-beats over a 3/4 "jazz waltz" (2:3). This 2:3 in a swung 3/4 is perhaps the most common example of overt cross-rhythm in jazz.</p> <h2 id="duple-pulse-correlative-of-3-2-1" tabindex="-1">Duple-pulse correlative of 3:2 <a class="header-anchor" href="#duple-pulse-correlative-of-3-2-1" aria-label="Permalink to "Duple-pulse correlative of 3:2""></a></h2> <p>The Wayne Shorter composition "Footprints" may have been the first overt expression of the 6:4 cross-rhythm (two cycles of 3:2) used by a straight ahead jazz group. On the version recorded on Miles Smiles by Miles Davis, the bass switches to 4/4 at 2:20. The 4/4 figure is known as tresillo in Latin music and is the duple-pulse correlative of the cross-beats in triple-pulse. Throughout the piece, the four main beats are maintained.</p> <p>In recent decades, jazz has incorporated many different types of complex cross-rhythms, as well as other types of polyrhythms.</p> <youtube-embed video="lVPLIuBy9CY" />]]></content:encoded> <enclosure url="https://chromatone.center/african-drums.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chord progressions]]></title> <link>https://chromatone.center/practice/chord/progressions/</link> <guid>https://chromatone.center/practice/chord/progressions/</guid> <pubDate>Tue, 12 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Successive chord loops as the foundation of modern music]]></description> <content:encoded><![CDATA[<chord-progressions :list="named" />]]></content:encoded> <enclosure url="https://chromatone.center/progressions.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Song structure]]></title> <link>https://chromatone.center/theory/composition/song/</link> <guid>https://chromatone.center/theory/composition/song/</guid> <pubDate>Tue, 12 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[The form variations of a songwriting process]]></description> <content:encoded><![CDATA[<p><a href="https://en.wikipedia.org/wiki/Song_structure" target="_blank" rel="noreferrer">Song structure</a> is the arrangement of a song, and is a part of the songwriting process. It is typically sectional, which uses repeating forms in songs. Common forms include bar form, 32-bar form, verse–chorus form, ternary form, strophic form, and the 12-bar blues. Popular music songs traditionally use the same music for each verse or stanza of lyrics (as opposed to songs that are "through-composed"—an approach used in classical music art songs). 
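</p> <p>Because a form is just an ordered list of sections, it can be written down as plain data. Here is a small TypeScript sketch of the common verse–chorus layout described below (the type and the layout are illustrative, not taken from any cited source):</p> <pre><code>// A song form as an ordered list of named sections.
type Section =
  | 'intro' | 'verse' | 'pre-chorus' | 'chorus'
  | 'bridge' | 'solo' | 'outro';

// A typical modern pop arrangement.
const popForm: Section[] = [
  'intro',
  'verse', 'pre-chorus', 'chorus',
  'verse', 'pre-chorus', 'chorus',
  'bridge',
  'verse', 'chorus',
  'outro',
];

console.log(popForm.join(' - '));
</code></pre> <p>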
Pop and traditional forms can be used even with songs that have structural differences in melodies. The most common format in modern popular music is introduction (intro), verse, pre-chorus, chorus (or refrain), verse, pre-chorus, chorus, bridge ("middle eight"), verse, chorus and outro. In rock music styles, notably heavy metal music, there are usually one or more guitar solos in the song, often found after the middle chorus part. In pop music, there may be a guitar solo, or a solo may be performed by a synthesizer player or sax player.</p> <p>The foundation of popular music is the "verse" and "chorus" structure. Some writers use a simple "verse, hook, verse, hook, bridge, hook" method. "Pop and rock songs nearly always have both a verse and a chorus. The primary difference between the two is that when the music of the verse returns, it is almost always given a new set of lyrics, whereas the chorus usually retains the same set of lyrics every time its music appears." Both are essential elements, with the verse usually played first (exceptions abound, of course, with "She Loves You" by The Beatles being an early example in the rock music genre). Each verse usually employs the same melody (possibly with some slight modifications), while the lyrics usually change for each verse. The chorus (or "refrain") usually consists of a melodic and lyrical phrase that repeats. Pop songs may have an introduction and coda ("tag"), but these elements are not essential to the identity of most songs. Pop songs often connect the verse and chorus via a pre-chorus, with a bridge section usually appearing after the second chorus.</p> <p>The verse and chorus are usually repeated throughout a song, while the intro, bridge, and coda (also called an "outro") are usually only used once. Some pop songs may have a solo section, particularly in rock or blues-influenced pop. During the solo section, one or more instruments play a melodic line which may be the melody used by the singer, or, in blues or jazz, improvised.</p> <p><img src="./structure.jpg" alt="Anatomy of songs"></p> <h2 id="elements" tabindex="-1">Elements <a class="header-anchor" href="#elements" aria-label="Permalink to "Elements""></a></h2> <h3 id="introduction" tabindex="-1">Introduction <a class="header-anchor" href="#introduction" aria-label="Permalink to "Introduction""></a></h3> <p>The <a href="https://en.wikipedia.org/wiki/Introduction_(music)" target="_blank" rel="noreferrer">introduction</a> is a unique section that comes at the beginning of the piece. Generally speaking, an introduction contains just music and no words. It usually builds up suspense for the listener, so when the downbeat drops in, it creates a pleasing sense of release. The intro also creates the atmosphere of the song. As such, the rhythm section typically plays in the "feel" of the song that follows. For example, for a blues shuffle, a band starts playing a shuffle rhythm. In some songs, the intro is one or more bars of the tonic chord (the "home" key of the song). Another role of the intro is to give the singer the key of the song. For this reason, even if an intro includes chords other than the tonic, it generally ends with a cadence, either on the tonic or dominant chord.</p> <p>The introduction may also be based around the chords used in the verse, chorus, or bridge, or a stock "turnaround" progression may be played, such as the I–vi–ii–V progression (particularly in jazz-influenced pop songs). 
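</p> <p>Roman-numeral shorthand such as I–vi–ii–V only becomes concrete once a key is chosen. A quick TypeScript sketch of that lookup (simplified to triads in major keys, with sharps-only spelling; purely illustrative):</p> <pre><code>const NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];
const MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11];         // semitone offsets of the major scale
const QUALITY = ['', 'm', 'm', '', '', 'm', 'dim']; // I ii iii IV V vi vii°

/** Spell the diatonic triad on a given scale degree (1-7) of a major key. */
function chord(key: string, degree: number): string {
  const tonic = NOTES.indexOf(key);
  const root = NOTES[(tonic + MAJOR_STEPS[degree - 1]) % 12];
  return root + QUALITY[degree - 1];
}

const turnaround = [1, 6, 2, 5].map(d => chord('C', d));
console.log(turnaround.join(' - ')); // C - Am - Dm - G
</code></pre> <p>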
More rarely, the introduction may begin by suggesting or implying another key. For example, a song in C Major might begin with an introduction in G Major, which makes the listener think that the song will eventually be in G Major. A cliche used to indicate to the listener that this G Major section is in fact the dominant chord of another key area is to add the dominant seventh, which in this case would shift the harmony to a G7 chord. In some cases, an introduction contains only drums or percussion parts that set the rhythm and "groove" for the song. Alternately the introduction may consist of a solo section sung by the lead singer (or a group of backup singers), or a riff played by an instrumentalist.</p> <p>The most straightforward, and least risky way to write an introduction is to use a section from the song. This contains melodic themes from the song, chords from one of the song's sections, and the beat and style of the song. However, not all songs have an intro of this type. Some songs have an intro that does not use any of the material from the song that is to follow. With this type of intro, the goal is to create interest in the listener and make them unsure of what will happen. This type of intro could consist of a series of loud, accented chords, punctuated by cymbal, with a bassline beginning near the end, to act as a pitch reference point for the singer.</p> <h3 id="verse" tabindex="-1">Verse <a class="header-anchor" href="#verse" aria-label="Permalink to "Verse""></a></h3> <p>In popular music, a <a href="https://en.wikipedia.org/wiki/Verse%E2%80%93chorus_form" target="_blank" rel="noreferrer">verse</a> roughly corresponds to a poetic stanza because it consists of rhyming lyrics most often with an AABB or ABAB rhyme scheme. When two or more sections of the song have almost identical music but different lyrics, each section is considered one verse.</p> <p>Musically, "the verse is to be understood as a unit that prolongs the tonic... The musical structure of the verse nearly always recurs at least once with a different set of lyrics." The tonic or "home key" chord of a song can be prolonged in a number of ways. Pop and rock songs often use chords closely related to the tonic, such as iii or vi, to prolong the tonic. In the key of C Major, the iii chord would be E Minor and the vi chord would be A Minor. These chords are considered closely related to the tonic because they share chord tones. For example, the chord E Minor includes the notes E and G, both of which are part of the C Major triad. Similarly, the chord A Minor includes the notes C and E, both part of the C Major triad.</p> <p>Lyrically, "the verse contains the details of the song: the story, the events, images and emotions that the writer wishes to express....Each verse will have different lyrics from the others." "A verse exists primarily to support the chorus or refrain...both musically and lyrically." A verse of a song, is a repeated sung melody where the words change from use to use (though not necessarily a great deal).</p> <h3 id="pre-chorus" tabindex="-1">Pre-chorus <a class="header-anchor" href="#pre-chorus" aria-label="Permalink to "Pre-chorus""></a></h3> <p>An optional section that may occur after the verse is the pre-chorus. 
Also known as a "build", "channel", or "transitional bridge", the pre-chorus functions to connect the verse to the chorus with intermediary material, typically using subdominant (usually built on the IV chord or ii chord, which in the key of C Major would be an F Major or D minor chord) or similar transitional harmonies. "Often, a two-phrase verse containing basic chords is followed by a passage, often harmonically probing, that leads to the full chorus." Often, when verse and chorus use the same harmonic structure, the pre-chorus introduces a new harmonic pattern or harmony that prepares the verse chords to transition into the chorus.</p> <p>For example, if a song is set in C Major, and the songwriter aims to get to a chorus that focuses on the dominant chord (G Major) being tonicized (treated like a "home key" for a short period), a chord progression could be used for the pre-chorus that gets the listener ready to hear the chorus' chord (G Major) as an arrival key. One widely used way to accomplish this is to precede the G Major chord with its own ii–V7 chords. In the key given, ii of G Major would be an A minor chord. V7 of G Major would be D7. As such, with the example song, this could be done by having a pre-chorus that consists of one bar of A minor and one bar of D7. This would allow the listener to expect a resolution from ii–V to I, which in this case is the temporary tonic of G Major. The chord A minor would not be unusual to the listener, as it is a shared chord that exists in both G Major and C Major. A minor is the ii chord in G Major, and it is the vi chord in C Major. The chord that would alert the listener that a change was taking place is the D7 chord. There is no D7 chord in C Major. A listener experienced with popular and traditional music would hear this as a secondary dominant. Harmonic theorists and arrangers would call it V7/V or five of five, as the D7 chord is the dominant (or fifth) chord of G Major.</p> <h3 id="chorus-or-refrain" tabindex="-1">Chorus or refrain <a class="header-anchor" href="#chorus-or-refrain" aria-label="Permalink to "Chorus or refrain""></a></h3> <p>The terms <a href="https://en.wikipedia.org/wiki/Refrain" target="_blank" rel="noreferrer">chorus and refrain</a> are often used interchangeably, both referring to a recurring part of a song. When a distinction is made, the chorus is the part that contains the hook or the "main idea" of a song's lyrics and music, and there is rarely variation from one repetition of the chorus to the next. A refrain is a repetitive phrase or phrases that serve the function of a chorus lyrically, but are not in a separate section or long enough to be a chorus. For example, refrains are found in the Beatles' "She Loves You" ("yeah, yeah, yeah"), AC/DC's "You Shook Me All Night Long", Paul Simon's "The Sound of Silence", and "Deck the Halls" ("fa la la la la").</p> <p>The chorus or refrain is the element of the song that repeats at least once both musically and lyrically. It is always of greater musical and emotional intensity than the verse. "The chorus, which gets its name from a usual thickening of texture from the addition of backing vocals, is always a discrete section that nearly always prolongs the tonic and carries an unvaried poetic text." In terms of narrative, the chorus conveys the main message or theme of the song. 
Normally the most memorable element of the song for listeners, the chorus usually contains the hook.</p> <h3 id="post-chorus" tabindex="-1">Post-chorus <a class="header-anchor" href="#post-chorus" aria-label="Permalink to "Post-chorus""></a></h3> <p>An optional section that may occur after the chorus is the <a href="https://en.wikipedia.org/wiki/Post-chorus" target="_blank" rel="noreferrer">post-chorus</a> (or postchorus). The term can be used generically for any section that comes after a chorus, but more often refers to a section that has similar character to the chorus, but is distinguishable in close analysis. The concept of a post-chorus has been particularly popularized and analyzed by music theorist Asaf Peres, who is followed in this section.</p> <p>Characterizations of post-chorus vary, but are broadly classed into simply a second chorus (in Peres's terms, a detached postchorus) or an extension of the chorus (in Peres's terms, an attached postchorus). Some restrict "post-chorus" to only cases where it is an extension of a chorus (attached postchorus), and do not consider the second part of two-part choruses (detached postchorus) as being a "post"-chorus.</p> <p>As with distinguishing the pre-chorus from a verse, it can be difficult to distinguish the post-chorus from the chorus. In some cases they appear separately – for example, the post-chorus only appears after the second and third chorus, but not the first – and thus are clearly distinguishable. In other cases they always appear together, and thus a "chorus + post-chorus" can be considered a subdivision of the overall chorus, rather than an independent section.</p> <p>Characterization of a post-chorus varies, beyond "comes immediately after the chorus"; Peres characterizes it by two conditions: it maintains or increases sonic energy, otherwise it's a bridge or verse; and contains a melodic hook (vocal or instrumental), otherwise it's a transition.</p> <p>Detached post-choruses typically have distinct melody and lyrics from the chorus:</p> <ul> <li>Chandelier (Sia, 2014): the chorus begins and ends with "I'm gonna swing from the chandelier / From the chandelier", while the post-chorus repeats instead "holding on", in "I'm holding on for dear life" and "I'm just holding on for tonight", and has a new melody, but the same chord progression as the chorus.</li> </ul> <p>Lyrics of attached post-choruses typically repeat the hook/refrain from the chorus, with little additional content, often using vocables like "ah" or "oh". Examples include:</p> <ul> <li>"Umbrella" (Rihanna, 2007): the chorus begins "When the sun shine, we shine together" and run through "You can stand under my umbrella / You can stand under my umbrella, ella, ella, eh, eh, eh", which is followed by three more repetitions of "Under my umbrella, ella, ella, eh, eh, eh", the last one adding another "eh, eh-eh". Here the division between chorus and post-chorus is blurred, as the "ella, ella" begins in the chorus, and was a play on the reverb effect.</li> <li>"Shape of You" (Ed Sheeran, 2017): the chorus runs "I'm in love with the shape of you ... Every day discovering something brand new / I'm in love with your body", and the post-chorus repeats vocables and the hook "Oh—I—oh—I—oh—I—oh—I / I'm in love with your body", then repeats the end of the chorus, switching "your body" to "the shape of you": "Every day discovering something brand new / I'm in love with the shape of you"</li> <li>"Girls Like You" (Maroon 5, 2018): the chorus runs "'Cause girls like you ... 
I need a girl like you, yeah, yeah ... I need a girl like you, yeah, yeah", and the post-chorus repeats the hook with added "yeah"s: "Yeah, yeah, yeah, yeah, yeah, yeah / I need a girl like you, yeah, yeah / Yeah yeah yeah, yeah, yeah, yeah / I need a girl like you".</li> </ul> <p>Hybrids are also common (Peres: hybrid postchorus), where the post-chorus keeps the hook from the chorus (like an attached postchorus), but introduces some additional content (hook or melody, like a detached postchorus.</p> <h3 id="bridge" tabindex="-1">Bridge <a class="header-anchor" href="#bridge" aria-label="Permalink to "Bridge""></a></h3> <p>A <a href="https://en.wikipedia.org/wiki/Bridge_(music)" target="_blank" rel="noreferrer">bridge</a> may be a transition, but in popular music, it more often is "...a section that contrasts with the verse...[,] usually ends on the dominant...[,] [and] often culminates in a strong re-transitional." "The bridge is a device that is used to break up the repetitive pattern of the song and keep the listener's attention....In a bridge, the pattern of the words and music change." For example, John Denver's "Country Roads" is a song with a bridge while Stevie Wonder's "You Are the Sunshine of My Life" is a song without one.</p> <p>In music theory, "middle eight" (a common type of bridge) refers to a section of a song with a significantly different melody and lyrics, which helps the song develop itself in a natural way by creating a contrast to the previously played, usually placed after the second chorus in a song.</p> <p>A song employing a middle eight might look like:</p> <pre><code> .... .... .... .... ........ .... .... Intro-{Verse-Chorus}{Verse-Chorus}-Middle 8-{Chorus}-{Chorus}-(Outro) </code></pre> <p>By adding a powerful upbeat middle eight, musicians can then end the song with a hook in the end chorus and finale.</p> <h3 id="conclusion-or-outro" tabindex="-1">Conclusion or outro <a class="header-anchor" href="#conclusion-or-outro" aria-label="Permalink to "Conclusion or outro""></a></h3> <p>The conclusion or (in popular-music terminology) <a href="https://en.wikipedia.org/wiki/Conclusion_(music)#Outro" target="_blank" rel="noreferrer">outro</a> of a song is a way of finishing or completing the song. It signals to the listeners that the song is nearing its close. The reason for having an outro is that if a song just ended at the last bar of a section, such as on the last verse or the last chorus, this might feel too abrupt for listeners. By using an outro, the songwriter signals that the song is, in fact, nearing its end. This gives the listeners a good sense of closure. For DJs, the outro is a signal that they need to be ready to mix in their next song.</p> <p>In general, songwriters and arrangers do not introduce any new melodies or riffs in the outro. However, a melody or riff used throughout the song may be re-used as part of an outro. Generally, the outro is a section where the energy of the song, broadly defined, dissipates. For example, many songs end with a fade-out, in which the song gets quieter and quieter. In many songs, the band does a ritardando during the outro, a process of gradually slowing down the tempo. Both the fade-out and the ritardando are ways of decreasing the intensity of a song and signalling that it is nearing its conclusion.</p> <p>For an outro that fades out, the arranger or songwriter typically repeats a short section of the music over and over. This can be the chorus, for example. 
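</p> <p>In a browser-based arrangement the same fade can be automated with the Web Audio API. A minimal sketch (illustrative only; it assumes the song's nodes are already routed into the master gain node):</p> <pre><code>// Fade the master gain to silence over the last few seconds of the repeated section.
const ctx = new AudioContext();
const master = ctx.createGain();
master.connect(ctx.destination);
// ...oscillators, samples or media elements are assumed to be connected to `master`.

function fadeOut(seconds: number): void {
  const now = ctx.currentTime;
  master.gain.setValueAtTime(master.gain.value, now);
  // Exponential ramps sound more natural than linear ones; the target cannot be 0, so aim near zero.
  master.gain.exponentialRampToValueAtTime(0.001, now + seconds);
}

fadeOut(8); // an eight-second fade over the repeated chorus
</code></pre> <p>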
In the studio, an audio engineer uses the fader on the mixing board to gradually decrease the volume of the recording. When a tribute band plays a cover song that, in the recorded version, ends with a fade-out, the live band may imitate it by playing progressively quieter.</p> <p>Another way many pop and rock songs end is with a tag. There are two types of tags: the instrumental tag and the instrumental/vocal tag. With an instrumental tag, the vocalist no longer sings, and the band's rhythm section takes over the music to finish off the song. A tag is often a vamp of a few chords that the band repeats. In a jazz song, this could be a standard turnaround, such as I–vi–ii–V7, or a stock progression, such as ii–V7. If the tag includes the tonic chord, such as a vamp on I–IV, the bandleader typically cues the last time that the penultimate chord (a IV chord in this case) is played, leading to an ending on the I chord. If the tag does not include the tonic chord, such as with a ii–V7 tag, the bandleader cues the band to do a cadence that resolves onto the tonic (I) chord. With an instrumental and vocal tag, the band and vocalist typically repeat a section of the song, such as the chorus, to give emphasis to its message. In some cases, the vocalist may use only a few words from the chorus or even one word. Some bands have the guitar player do a guitar solo during the outro, but it is not the focus of the section; instead, it is more to add interesting improvisation. A guitar solo during an outro is typically mixed lower than a mid-song guitar solo.</p> <h3 id="elision" tabindex="-1">Elision <a class="header-anchor" href="#elision" aria-label="Permalink to "Elision""></a></h3> <p>An elision is a section of music where different sections overlap one another, usually for a short period. It is mostly used in fast-paced music, and it is designed to create tension and drama. Songwriters use elision to keep the song from losing its energy during cadences, the points at which the music comes to rest, typically on a tonic or dominant chord. If a section ends with a cadence on the tonic and the songwriter gives that cadence a full bar, with the chord held as a whole note, the listener may feel that the music is stopping. However, if songwriters use an elided cadence, they can bring the section to a cadence on the tonic and then, immediately after this cadence, begin a new section of music which overlaps with the cadence. Another form of elision is to interject, in a chorus later in the song, musical elements from the bridge.</p> <h3 id="instrumental-solo" tabindex="-1">Instrumental solo <a class="header-anchor" href="#instrumental-solo" aria-label="Permalink to "Instrumental solo""></a></h3> <p>A <a href="https://en.wikipedia.org/wiki/Solo_(music)" target="_blank" rel="noreferrer">solo</a> is a section designed to showcase an instrumentalist (e.g. a guitarist or a harmonica player) or, less commonly, more than one instrumentalist (e.g., a trumpeter and a sax player). Guitar solos are common in rock music, particularly heavy metal and the blues. The solo section may take place over the chords from the verse, chorus, or bridge, or over a standard solo backing progression, such as the 12-bar blues progression. 
In some pop songs, the solo performer plays the same melodies that were performed by the lead singer, often with flourishes and embellishments, such as riffs, scale runs, and arpeggios. In blues- or jazz-influenced pop songs, the solo performers may improvise a solo.</p> <h3 id="ad-lib" tabindex="-1">Ad lib <a class="header-anchor" href="#ad-lib" aria-label="Permalink to "Ad lib""></a></h3> <p>An <a href="https://en.wikipedia.org/wiki/Ad_libitum" target="_blank" rel="noreferrer">ad lib</a> section of a song (usually in the coda or outro) occurs when the main lead vocal or a second lead vocal breaks away from the already established lyric and/or melody to add melodic interest and intensity to the end of the song. Often, the ad lib repeats the previously sung line using variations on phrasing, melodic shape, and/or lyric, but the vocalist may also use entirely new lyrics or a lyric from an earlier section of the song. During an ad lib section, the rhythm may become freer (with the rhythm section following the vocalist), or the rhythm section may stop entirely, giving the vocalist the freedom to use whichever tempo sounds right. During live performances, singers sometimes include ad libs not originally in the song, such as making a reference to the town of the audience or customizing the lyrics to the current events of the era.</p> <p>There is a distinction between ad lib as a song section and ad lib as a general term. Ad lib as a general term can be applied to any free interpretation of the musical material.</p> <h2 id="aaba-form" tabindex="-1">AABA form <a class="header-anchor" href="#aaba-form" aria-label="Permalink to "AABA form""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Thirty-two-bar_form" target="_blank" rel="noreferrer">Thirty-two-bar form</a> uses four sections, most often eight measures long each (4×8=32), two verses or A sections, a contrasting B section (the bridge or "middle-eight") and a return of the verse in one last A section (AABA). The B section is often intended as a contrast to the A sections that precede and follow it. The B section may be made to contrast by putting it in a new harmony. For example, with the jazz standard "I've Got Rhythm", the A sections are all tonic prolongations based around the I–vi–ii–V chord progression (B♭ in the standard key); however, the B section changes key and moves to V/vi, or D7 in the standard key, which then does a circle of fifths movement to G7, C7 and finally F7, setting the listener up for a return to the tonic Bb in the final A section.</p> <p>The "I've Got Rhythm" example also provides contrast because the harmonic rhythm changes in the B section. Whereas the A sections contain a vibrant, exciting feel of two chord changes per bar (e.g., the first two bars are often B♭–g minor/c minor–F7), the B section consists of two bars of D7, two bars of G7, two bars of C7 and two bars of F7. In some songs, the "feel" also changes in the B section. For example, the A sections may be in swing feel, and the B section may be in Latin or Afro-Cuban feel.</p> <p>While the form is often described as AABA, this does not mean that the A sections are all exactly the same. The first A section ends by going back to the next A section, and the second A section ends and transitions into the B section. As such, at the minimum, the composer or arranger often modifies the harmony of the end of the different A sections to guide the listener through the key changes. 
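</p> <p>The B-section motion described above is mechanical enough to compute: each dominant resolves down a fifth until the home key returns. A small TypeScript sketch of that circle-of-fifths chain (flat spelling, illustrative only):</p> <pre><code>const NOTES = ['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B'];

/** Follow dominants down the circle of fifths from a starting root until the tonic is reached. */
function dominantChain(start: string, tonic: string): string[] {
  const chain: string[] = [];
  let note = start;
  while (note !== tonic) {
    chain.push(note + '7');
    note = NOTES[(NOTES.indexOf(note) + 5) % 12]; // down a fifth = up a fourth
  }
  return chain.concat(tonic);
}

console.log(dominantChain('D', 'Bb').join(' - ')); // D7 - G7 - C7 - F7 - Bb
</code></pre> <p>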
As well, the composer or arranger may re-harmonize the melody on one or more of the A sections, to provide variety. Note that with a reharmonization, the melody does not usually change; only the chords played by the accompaniment musicians change.</p> <p>Examples include "Deck the Halls":</p> <blockquote> <p>A: Deck the hall with boughs of holly, A: 'Tis the season to be jolly. B: Don we now our gay apparel, A: Troll the ancient Yuletide carol.</p> </blockquote> <h2 id="variation-on-the-basic-structure" tabindex="-1">Variation on the basic structure <a class="header-anchor" href="#variation-on-the-basic-structure" aria-label="Permalink to "Variation on the basic structure""></a></h2> <p>Verse-chorus form or ABA form may be combined with AABA form, in compound AABA forms. That means that every A section or B section can consist of more then one section (for example Verse-Chorus). In that way the modern popular song structure can be viewed as a AABA form, where the B is the bridge.</p> <p>AAA format may be found in Bob Dylan's "The Times They Are a-Changin'", and songs like "The House of the Rising Sun", and "Clementine". Also "Old MacDonald", "Amazing Grace", "The Thrill Is Gone", and Gordon Lightfoot's "The Wreck of the Edmund Fitzgerald".</p> <p>AABA may be found in Crystal Gayle's "Don't It Make My Brown Eyes Blue", Billy Joel's "Just the Way You Are", and The Beatles' "Yesterday".</p> <p>ABA (verse/chorus or chorus/verse) format may be found in Pete Seeger's "Turn! Turn! Turn!" (chorus first) and The Rolling Stones's "Honky Tonk Woman" (verse first).</p> <p>ABAB may be found in AC/DC's "Back in Black", Jimmy Buffett's "Margaritaville", The Archies's "Sugar, Sugar", and The Eagles's "Hotel California".</p> <p>ABABCB format may be found in John Cougar Mellencamp's "Hurts So Good", Tina Turner's "What's Love Got to Do with It?", and ZZ Top's "Sharp Dressed Man". Variations include Smokey Robinson's "My Guy", The Beatles's "Ticket to Ride",[18] The Pretenders' "Back on the Chain Gang" (ABABCAB), Poison's "Every Rose Has Its Thorn" (ABABCBAB), and Billy Joel's "It's Still Rock and Roll to Me" (ABABCABCAB).</p> ]]></content:encoded> <enclosure url="https://chromatone.center/structure.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Meters]]></title> <link>https://chromatone.center/theory/rhythm/meter/</link> <guid>https://chromatone.center/theory/rhythm/meter/</guid> <pubDate>Tue, 12 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Recurring patterns of accents]]></description> <content:encoded><![CDATA[<p>Basic meters are:</p> <ul> <li><a href="./simple/">Simple</a></li> <li><a href="./complex/">Complex</a></li> <li><a href="./compound/">Compound</a></li> </ul> <p>And <a href="./time/">Double time</a> makes it even more fun.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/rachel-loughman.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Simple meters]]></title> <link>https://chromatone.center/theory/rhythm/meter/simple/</link> <guid>https://chromatone.center/theory/rhythm/meter/simple/</guid> <pubDate>Tue, 12 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Rhythmic meters]]></description> <content:encoded><![CDATA[<beat-bars v-bind="rhythm.simple" /><p>Simple metre (or simple time) is a metre in which each beat of the bar divides naturally into two (as opposed to three) equal parts. 
The top number in the time signature will be 2, 3, 4, 5, etc.</p> <p>Simple time signatures consist of two numerals, one stacked above the other:</p> <ul> <li>The lower numeral indicates the note value that represents one beat (the beat unit). This number is typically a power of 2.</li> <li>The upper numeral indicates how many such beats constitute a bar.</li> </ul> <p>For instance, 2/4 means two quarter-note (crotchet) beats per bar, while 3/8 means three eighth-notes (quavers) per bar, which are beats at slower tempos (but at faster tempos, 3/8 becomes compound time, with one beat per bar). The most common simple time signatures are 2/4, 3/4, and 4/4.</p> <h1 id="simple-time" tabindex="-1">Simple time <a class="header-anchor" href="#simple-time" aria-label="Permalink to "Simple time""></a></h1> <h2 id="simple-duple-time" tabindex="-1">Simple duple time <a class="header-anchor" href="#simple-duple-time" aria-label="Permalink to "Simple duple time""></a></h2> <p>Two or four beats to a bar, each divided by two, the top number being "2" or "4" (2/4, 2/8, 2/2 ... 4/4, 4/8, 4/2 ...). When there are four beats to a bar, it is alternatively referred to as "quadruple" time.</p> <h3 id="_2-2" tabindex="-1">2/2 <a class="header-anchor" href="#_2-2" aria-label="Permalink to "2/2""></a></h3> <p>Alla breve, cut time: Used for marches and fast orchestral music.</p> <h3 id="_2-4" tabindex="-1">2/4 <a class="header-anchor" href="#_2-4" aria-label="Permalink to "2/4""></a></h3> <p>Used for polkas, galops, and marches.</p> <h2 id="simple-triple-time" tabindex="-1">Simple triple time <a class="header-anchor" href="#simple-triple-time" aria-label="Permalink to "Simple triple time""></a></h2> <p>Three beats to a bar, each divided by two, the top number being "3" (3/4, 3/8, 3/2 ...)</p> <h3 id="_3-4" tabindex="-1">3/4 <a class="header-anchor" href="#_3-4" aria-label="Permalink to "3/4""></a></h3> <p>In the time signature 3/4, each bar contains three quarter-note beats, and each of those beats divides into two eighth notes, making it a simple metre. More specifically, it is a simple triple metre because there are three beats in each measure; simple duple (two beats) or simple quadruple (four) are also common metres.</p> <p>Used for waltzes, minuets, scherzi, polonaises, mazurkas, country & western ballads, R&B, sometimes used in pop. Is widely used in African music.</p> <h3 id="_3-8" tabindex="-1">3/8 <a class="header-anchor" href="#_3-8" aria-label="Permalink to "3/8""></a></h3> <p>Also used for the above but usually suggests higher tempo or shorter hypermeter</p> <h2 id="simple-quadruple-time" tabindex="-1">Simple quadruple time <a class="header-anchor" href="#simple-quadruple-time" aria-label="Permalink to "Simple quadruple time""></a></h2> <h3 id="_4-4" tabindex="-1">4/4 <a class="header-anchor" href="#_4-4" aria-label="Permalink to "4/4""></a></h3> <p>Common time: Widely used in most forms of Western popular music. Most common time signature in rock, blues, country, funk, and pop.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Compound meters]]></title> <link>https://chromatone.center/theory/rhythm/meter/compound/</link> <guid>https://chromatone.center/theory/rhythm/meter/compound/</guid> <pubDate>Sun, 10 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Rhythmic meters]]></description> <content:encoded><![CDATA[<beat-bars v-bind="rhythm.compound" /><p>Compound metre (or compound time), is a metre in which each beat of the bar divides naturally into three equal parts. 
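</p> <p>The simple/compound distinction can be read straight off the numerator. A tiny TypeScript sketch of that rule of thumb (a hypothetical helper; irregular signatures such as 5/8 or 7/8 are a separate topic, covered further below):</p> <pre><code>// Numerators of 6, 9, 12, ... indicate compound time: each beat divides into three.
// Other common numerators (2, 3, 4) indicate simple time: each beat divides into two.
function classify(numerator: number): { feel: string; beats: number } {
  if (numerator >= 6 && numerator % 3 === 0) {
    return { feel: 'compound', beats: numerator / 3 };
  }
  return { feel: 'simple', beats: numerator };
}

console.log(classify(4)); // { feel: 'simple', beats: 4 }   e.g. 4/4
console.log(classify(6)); // { feel: 'compound', beats: 2 } e.g. 6/8
console.log(classify(9)); // { feel: 'compound', beats: 3 } e.g. 9/8
</code></pre> <p>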
Each beat, in other words, contains a triple pulse. The top number in the time signature will be 6, 9, 12, 15, 18, 24, etc.</p> <p>Compound metres are written with a time signature that shows the number of divisions of beats in each bar as opposed to the number of beats. For example, compound duple (two beats, each divided into three) is written as a time signature with a numerator of six, for example, 6/8. Contrast this with the time signature 3/4, which also assigns six eighth notes to each measure, but by convention connotes a simple triple time: 3 quarter-note beats.</p> <h3 id="_6-8" tabindex="-1">6/8 <a class="header-anchor" href="#_6-8" aria-label="Permalink to "6/8""></a></h3> <p>Compound duple time - 2 strong beats in a bar, each dividing into 3 eighth notes</p> <p>Although 3/4 and 6/8 are not to be confused, they use bars of the same length, so it is easy to "slip" between them just by shifting the location of the accents.</p> <h3 id="_9-8" tabindex="-1">9/8 <a class="header-anchor" href="#_9-8" aria-label="Permalink to "9/8""></a></h3> <p>Compound triple time - 3 strong beats in a bar, each dividing into 3 eighth notes</p> <h3 id="_12-8" tabindex="-1">12/8 <a class="header-anchor" href="#_12-8" aria-label="Permalink to "12/8""></a></h3> <p>Compound quadruple time - 4 strong beats in a bar, each dividing into 3 eighth notes</p> <p>Compound metre divided into three parts could theoretically be transcribed into musically equivalent simple metre using triplets. Likewise, simple metre can be shown in compound through duples. In practice, however, this is rarely done because it disrupts conducting patterns when the tempo changes. When conducting in 6/8, conductors typically provide two beats per bar; however, all six beats may be performed when the tempo is very slow.</p> <p>Compound time is associated with "lilting" and dancelike qualities. Folk dances often use compound time. Many Baroque dances are in compound time: some gigues, the courante, and sometimes the passepied and the siciliana.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Complex meters]]></title> <link>https://chromatone.center/theory/rhythm/meter/complex/</link> <guid>https://chromatone.center/theory/rhythm/meter/complex/</guid> <pubDate>Fri, 08 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Asymmetric, irregular, unusual, or odd meters]]></description> <content:encoded><![CDATA[<beat-bars v-bind="rhythm.complex" /><p>Signatures that do not fit the usual duple or triple categories are called complex, asymmetric, irregular, unusual, or odd. The term odd meter, however, sometimes describes time signatures in which the upper number is simply odd rather than even, including 3/4 and 9/8.</p> <p>The irregular meters (not fitting duple or triple categories) are common in some non-Western music, but rarely appeared in formal written Western music until the 19th century. Early anomalous examples appeared in Spain between 1516 and 1520, but the Delphic Hymns to Apollo (one by Athenaeus is entirely in quintuple meter, the other by Limenius predominantly so), carved on the exterior walls of the Athenian Treasury at Delphi in 128 BC, are in the relatively common cretic meter, with five beats to a foot.</p> <p>The third movement of Frédéric Chopin's Piano Sonata No. 1 (1828) is an early, but by no means the earliest, example of 5/4 time in solo piano music. Anton Reicha's Fugue No. 20 from his Thirty-six Fugues, published in 1803, is also for piano and is in 5/8. 
The waltz-like second movement of Tchaikovsky's Pathétique Symphony (shown below), often described as a "limping waltz", is a notable example of 5/4 time in orchestral music.</p> <p>Examples from 20th-century classical music include:</p> <ul> <li>Gustav Holst's "Mars, the Bringer of War" and "Neptune, the Mystic" from The Planets (both in 5/4)</li> <li>Paul Hindemith's "Fuga secunda" in G from Ludus Tonalis (5/8)</li> <li>the ending of Stravinsky's The Firebird (7/4)</li> <li>the fugue from Heitor Villa-Lobos's Bachianas Brasileiras No. 9 (11/8)</li> <li>the themes for the Mission: Impossible television series by Lalo Schifrin (in 5/4) and for Room 222 by Jerry Goldsmith (in 7/4)</li> </ul> <p>In the Western popular music tradition, unusual time signatures occur as well, with progressive rock in particular making frequent use of them. The use of shifting meters in The Beatles' "Strawberry Fields Forever" and the use of quintuple meter in their "Within You Without You" are well-known examples, as is Radiohead's "Paranoid Android" (includes 7/8).</p> <p>Paul Desmond's jazz composition "Take Five", in 5/4 time, was one of a number of irregular-meter compositions that The Dave Brubeck Quartet played. They played other compositions in 11/4 ("Eleven Four"), 7/4 ("Unsquare Dance"), and 9/8 ("Blue Rondo à la Turk"), expressed as 2+2+2+3/8. This last is an example of a work in a signature that, despite appearing merely compound triple, is actually more complex. Brubeck's title refers to the characteristic aksak meter of the Turkish karşılama dance.</p> <p>However, such time signatures are only unusual in most Western music. Traditional music of the Balkans uses such meters extensively. Bulgarian dances, for example, include forms with 5, 7, 9, 11, 13, 15, 22, 25 and other numbers of beats per measure. These rhythms are notated as additive rhythms based on simple units, usually 2, 3 and 4 beats, though the notation fails to describe the metric "time bending" taking place, or compound meters.</p> <h2 id="quintuple-meter" tabindex="-1">Quintuple meter <a class="header-anchor" href="#quintuple-meter" aria-label="Permalink to "Quintuple meter""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Quintuple_meter" target="_blank" rel="noreferrer">Quintuple meter</a> or quintuple time is a musical meter characterized by five beats in a measure.</p> <p>They may consist of any combination of variably stressed or equally stressed beats.</p> <p>Like the more common duple, triple, and quadruple meters, it may be simple, with each beat divided in half, or compound, with each beat divided into thirds. The most common time signatures for simple quintuple meter are 5/4 and 5/8, and compound quintuple meter is most often written in 15/8.</p> <youtube-embed video="kXl0F9oF92E" /><h3 id="notation" tabindex="-1">Notation <a class="header-anchor" href="#notation" aria-label="Permalink to "Notation""></a></h3> <p>Simple quintuple meter can be written in 5/4 or 5/8 time, but may also be notated by using regularly alternating bars of triple and duple meters, for example 2/4 + 3/4. Compound quintuple meter, with each of its five beats divided into three parts, can similarly be notated using a time signature of 15/8, by writing triplets on each beat of a simple quintuple signature, or by regularly alternating meters such as 6/8 + 9/8.</p> <p>Another notational variant involves compound meters, in which two or three numerals take the place of the expected numerator.
In simple quintuple meter, the 5 may be replaced as 2+3/8 or 2+1+2/8 for example. A time signature of 15/8, however, does not necessarily mean the music is in a compound quintuple meter. It may, for example, indicate a bar of triple meter in which each beat is subdivided into five parts. In this case, the meter is sometimes characterized as "triple quintuple time".</p> <p>It is also possible for a 15/8 time signature to be used for an irregular, or additive, metrical pattern, such as groupings of 3+3+3+2+2+2 eighth notes or, for example in the Hymn to the Sun and Hymn to Nemesis by Mesomedes of Crete, 2+2+2+2+2+3+2, which may alternatively be given the composite signature 8+7/8.</p> <p>Similarly, the presence of some bars with a 5/4 or 5/8 meter signature does not necessarily mean that the music is in quintuple meter overall. The regular alternation of 5/4 and 4/4 in Bruce Hornsby's "The Tango King" (from the album Hot House), for example, results in an overall nonuple meter (5+4 = 9).</p> <youtube-embed video="PHdU5sHigYQ" /><p><a href="https://en.wikipedia.org/wiki/Sextuple_metre" target="_blank" rel="noreferrer">Sextuple meters</a> are rather rare.</p> <h2 id="septuple-meter" tabindex="-1">Septuple meter <a class="header-anchor" href="#septuple-meter" aria-label="Permalink to "Septuple meter""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Septuple_meter" target="_blank" rel="noreferrer">Septuple meter</a> is a meter with each bar divided into 7 notes of equal duration, usually 7/4 or 7/8 (or in compound meter, 21/8 time). The stress pattern can be 2+2+3, 3+2+2, or occasionally 2+3+2, although a survey of certain forms of mostly American popular music suggests that 2+2+3 is the most common among these three in these styles.</p> <p>A time signature of 21/8, however, does not necessarily mean that the bar is a compound septuple meter with seven beats, each divided into three. This signature may, for example, be used to indicate a bar of triple meter in which each beat is subdivided into seven parts. In this case, the meter is sometimes characterized as "triple septuple time". It is also possible for a 21/8 time signature to be used for an irregular, or "additive" metrical pattern, such as groupings of 3 + 3 + 3 + 2 + 3 + 2 + 3 + 2 eighth notes.</p> <p>Septuple meter can also be notated by using regularly alternating bars of triple and duple or quadruple meters, for example 4/4 + 3/4, or 6/8 + 6/8 + 9/8, or through the use of compound meters, in which two or three numerals take the place of the expected numerator 7, for example, 2+2+3/8, or 5+2/8.</p> <youtube-embed video="_yExwkQYcp0" /><h3 id="balkan-folk-music" tabindex="-1">Balkan folk music <a class="header-anchor" href="#balkan-folk-music" aria-label="Permalink to "Balkan folk music""></a></h3> <p>Septuple rhythms are characteristic of some European folk idioms, particularly in the Balkan countries. An example from Macedonia is the traditional tune "Jovano Jovanke", which can be transcribed in 7/8. Bulgarian dances are particularly noted for the use of a variety of irregular, or heterometric rhythms. The most popular of these is the rachenitsa, a type of khoro in a rapid septuple meter divided 2+2+3. In the Pirin area, the khoro has a rhythm subdivided 3+2+2, and two varieties of it are the pravo makedonsko ("straight Macedonian") and the mazhka rachenitsa ("men's rachenitsa"). 
Septuple rhythms are also found in Bulgarian vocal music, such as the koleda ritual songs sung by young men on Christmas Eve and Christmas to bless livestock, households, or specific family members.</p> <youtube-embed video="xVgUKgORUnc" /><h2 id="additive-meters" tabindex="-1">Additive meters <a class="header-anchor" href="#additive-meters" aria-label="Permalink to "Additive meters""></a></h2> <p>To indicate more complex patterns of stresses, such as additive rhythms, more complex time signatures can be used. Additive meters have a pattern of beats that subdivide into smaller, irregular groups. Such meters are sometimes called imperfect, in contrast to perfect meters, in which the bar is first divided into equal units.</p> <p>For example, the time signature 3+2+3/8 means that there are 8 quaver beats in the bar, divided as the first of a group of three eighth notes (quavers) that are stressed, then the first of a group of two, then first of a group of three again. The stress pattern is usually counted as 3+2+3/8: <strong>one</strong> two three <em>one</em> two <em>one</em> two three ...</p> <p>This kind of time signature is commonly used to notate folk and non-Western types of music. In classical music, Béla Bartók and Olivier Messiaen have used such time signatures in their works. The first movement of Maurice Ravel's Piano Trio in A Minor is written in 8/8, in which the beats are likewise subdivided into 3+2+3 to reflect Basque dance rhythms.</p> <p>Romanian musicologist Constantin Brăiloiu had a special interest in compound time signatures, developed while studying the traditional music of certain regions in his country. While investigating the origins of such unusual meters, he learned that they were even more characteristic of the traditional music of neighboring peoples (e.g., the Bulgarians). He suggested that such timings can be regarded as compounds of simple two-beat and three-beat meters, where an accent falls on every first beat, even though, for example in Bulgarian music, beat lengths of 1, 2, 3, 4 are used in the metric description. In addition, when focused only on stressed beats, simple time signatures can count as beats in a slower, compound time. However, there are two different-length beats in this resulting compound time, one half again as long as the short beat (or conversely, the short beat is 2⁄3 the value of the long). This type of meter is called aksak (the Turkish word for "limping"), impeded, jolting, or shaking, and is described as an irregular bichronic rhythm. A certain amount of confusion for Western musicians is inevitable, since a measure they would likely regard as 7/16, for example, is a three-beat measure in aksak, with one long and two short beats (with subdivisions of 2+2+3, 2+3+2, or 3+2+2).</p> <p>Folk music may make use of metric time bends, so that the proportions of the performed metric beat time lengths differ from the exact proportions indicated by the metric. Depending on the playing style of the same meter, the time bend can vary from non-existent to considerable; in the latter case, some musicologists may want to assign a different meter. For example, the Bulgarian tune "Eleno Mome" is written in one of three forms: (1) 7/8 = 2+2+1+2, (2) 13/16 = 4+4+2+3, or (3) 12/16 = 3+4+2+3, but an actual performance (e.g., "Eleno Mome") may be closer to 4+4+2+3½. The Macedonian 3+2+2+3+2 meter is even more complicated, with heavier time bends, and use of quadruples on the threes. The metric beat time proportions may vary with the speed that the tune is played.
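</p> <p>Setting the performed time bends aside for a moment, the nominal accent grid implied by an additive grouping is easy to compute. Here is a minimal sketch in plain JavaScript (the function name and the text output format are invented for this illustration), expanding the 3+2+3/8 pattern described above and a 2+2+3 aksak grouping:</p> <pre><code>// Expand an additive grouping into a per-unit accent grid.
// 'X' marks the stressed first unit of each group, '.' the rest.
function additiveAccents(groups) {
  return groups.map(len => 'X' + '.'.repeat(len - 1)).join('')
}

console.log(additiveAccents([3, 2, 3])) // 'X..X.X..' -> 3+2+3/8: one two three, one two, one two three
console.log(additiveAccents([2, 2, 3])) // 'X.X.X..'  -> 2+2+3: two short beats and one long
</code></pre> <p>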
The Swedish Boda Polska (Polska from the parish Boda) has a typical elongated second beat.</p> <p>In Western classical music, metric time bend is used in the performance of the Viennese waltz. Most Western music uses metric ratios of 2:1, 3:1, or 4:1 (two-, three- or four-beat time signatures)—in other words, integer ratios that make all beats equal in time length. So, relative to that, 3:2 and 4:3 ratios correspond to very distinctive metric rhythm profiles. Complex accentuation occurs in Western music, but as syncopation rather than as part of the metric accentuation.</p> <p>Brăiloiu borrowed a term from Turkish medieval music theory: aksak. Such compound time signatures fall under the "aksak rhythm" category that he introduced along with a couple more that should describe the rhythm figures in traditional music. The term Brăiloiu revived had moderate success worldwide, but in Eastern Europe it is still frequently used. However, aksak rhythm figures occur not only in a few European countries, but on all continents, featuring various combinations of the two and three sequences. The longest are in Bulgaria. The shortest aksak rhythm figures follow the five-beat timing, comprising a two and a three (or three and two).</p> <youtube-embed video="j9GgmGLPbWU" /><p>The term additive rhythm is also often used to refer to what are also incorrectly called asymmetric rhythms and even irregular rhythms – that is, meters which have a regular pattern of beats of uneven length. For example, the time signature 4/4 indicates each bar is eight quavers long, and has four beats, each a crotchet (that is, two quavers) long. The asymmetric time signature 3+3+2/8, on the other hand, while also having eight quavers in a bar, divides them into three beats, the first three quavers long, the second three quavers long, and the last just two quavers long.</p> <p>These kinds of rhythms are used, for example, by Béla Bartók, who was influenced by similar rhythms in Bulgarian folk music. The third movement of Bartók's String Quartet No. 5, a scherzo marked alla bulgarese features a "9/8 rhythm (4+2+3)". Stravinsky's Octet for Wind Instruments "ends with a jazzy 3+3+2 = 8 swung coda". Additive patterns also occur in some music of Philip Glass, and other minimalists, most noticeably the "one-two-one-two-three" chorus parts in Einstein on the Beach. They may also occur in passing in pieces which are on the whole in conventional meters. In jazz, Dave Brubeck's song "Blue Rondo à la Turk" features bars of nine quavers grouped into patterns of 2+2+2+3 at the start. George Harrison's song "Here Comes the Sun" on the Beatles' album Abbey Road features a rhythm "which switches between 11/8, 4/4 and 7/8 on the bridge". "The special effect of running even eighth notes accented as if triplets against the grain of the underlying backbeat is carried to a point more reminiscent of Stravinsky than of the Beatles".</p> <p>Olivier Messiaen made extensive use of additive rhythmic patterns, much of it stemming from his close study of the rhythms of Indian music. His "Danse de la fureur, pour les sept trompettes" from The Quartet for the End of Time is a bracing example. A gentler exploration of additive patterns can be found in "Le Regard de la Vierge" from the same composer's piano cycle Vingt Regards sur l'enfant-Jésus.</p> <p>György Ligeti's Étude No. 13, "L'escalier du diable" features patterns involving quavers grouped in twos and threes. The rhythm at the start of the study follows the pattern 2+2+3, then 2+2+2+3. 
According to the composer's note, the 12/8 time signature "serves only as a guideline, the actual meter consists of 36 quavers (three 'bars'), divided asymmetrically".</p> <p><a href="https://en.wikipedia.org/wiki/List_of_musical_works_in_unusual_time_signatures" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/List_of_musical_works_in_unusual_time_signatures</a></p> <youtube-embed video="_1d-Axi4mhY" />]]></content:encoded> </item> <item> <title><![CDATA[Common time]]></title> <link>https://chromatone.center/theory/rhythm/meter/time/</link> <guid>https://chromatone.center/theory/rhythm/meter/time/</guid> <pubDate>Wed, 06 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Common time and half- and double-time changes]]></description> <content:encoded><![CDATA[<beat-bars v-bind="rhythm.times" /><h2 id="common-time" tabindex="-1">Common-time <a class="header-anchor" href="#common-time" aria-label="Permalink to "Common-time""></a></h2> <p>Rhythm pattern characteristic of much popular music, including rock, quarter note (crotchet) or "regular" time: "bass drum on beats 1 and 3 and snare drum on beats 2 and 4 of the measure [bar]...add eighth notes [quavers] on the hi-hat".</p> <p>Time signatures are defined by how they divide the measure (in 9/8, compound triple time, each measure is divided in three, each of which is divided into three eighth notes: 3×3=9). In "common" time, often considered 4/4, each level is divided in two (simple duple time: 2×2=4). In a common-time rock drum pattern each measure (a whole note) is divided in two by the bass drum (half note), each half is divided in two by the snare drum (quarter note, collectively the bass and snare divide the measure into four), and each quarter note is divided in two by a ride pattern (eighth note). "Half"-time refers to halving this division (divide each measure into quarter notes with the ride pattern), while "double"-time refers to doubling this division (divide each measure into sixteenth notes with the ride pattern).</p> <h2 id="half-time" tabindex="-1">Half-time <a class="header-anchor" href="#half-time" aria-label="Permalink to "Half-time""></a></h2> <p>In popular music, half-time is a type of meter and tempo that alters the rhythmic feel by essentially doubling the tempo resolution or metric division/level in comparison to common-time. Thus, two measures of 4/4 approximate a single measure of 8/8, while a single measure of 4/4 emulates 2/2. Half-time is not to be confused with alla breve or odd time. Though notes usually get the same value relative to the tempo, the way the beats are divided is altered. While much music typically has a backbeat on quarter note (crotchet) beats two and four, half time would increase the interval between backbeats to double, thus making it hit on beats three and seven, or the third beat of each measure (count out of an 8 beat measure [bar], common practice in half time):</p> <pre><code>1 2 3 4 1 2 3 4
1 2 3 4 5 6 7 8
1   2   3   4
</code></pre> <p>Essentially, a half time 'groove' is one that expands one measure over the course of two. The length of each note is doubled while its frequency is halved.</p> <p>A classic example is the half-time shuffle, a variation of a shuffle rhythm, which is used extensively in hip-hop and some blues music. Some of the variations of the basic groove are notoriously difficult to play on a drum set. It is also a favorite in some pop and rock tunes.
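</p> <p>In terms of that eight-count spread over two bars, the shift of the backbeat can be restated in a few lines of plain JavaScript. This is an illustrative sketch only, not a transcription of any particular groove:</p> <pre><code>// Counting two bars of 4/4 as eight counts, list where the snare backbeat lands.
const counts = [1, 2, 3, 4, 5, 6, 7, 8]

const commonTimeSnare = counts.filter(c => c % 2 === 0) // [2, 4, 6, 8] – beats 2 and 4 of each bar
const halfTimeSnare   = counts.filter(c => c % 4 === 3) // [3, 7]       – the backbeat interval doubled
</code></pre> <p>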
Some classic examples are the Purdie Shuffle by Bernard Purdie, which appears in "Home At Last" and "Babylon Sisters", both of which are Steely Dan songs. "Fool in the Rain" by Led Zeppelin uses a derivation of the Purdie Shuffle, and Jeff Porcaro of Toto created a hybridization of the Zeppelin and Purdie shuffles called the Rosanna shuffle for the track "Rosanna".</p> <youtube-embed video="g41Ab8iDaD0" /><p>In half time, the feel of the notes is chopped in half, but the actual time value remains the same. For example, at the same tempo, 8th notes (quavers) would sound like 16ths (semiquavers). In the case of the half time shuffle, triplets sound like 16th note (semiquaver) triplets, etc. By preserving the tempo, the beat is stretched by a factor of 2.</p> <blockquote> <p><img src="./Double,_common,_and_half_times_same_tempo.png" alt=""> Double-, common, and half- time offbeats at the same tempo.</p> </blockquote> <blockquote> <p><img src="./Double,_common,_and_half_times_equivalent_tempo.png" alt=""> Double-, common, and half- time offbeats at equivalent tempos.</p> </blockquote> <h2 id="double-time" tabindex="-1">Double time <a class="header-anchor" href="#double-time" aria-label="Permalink to "Double time""></a></h2> <p>In music and dance, double-time is a type of meter and tempo or rhythmic feel created by essentially halving the tempo resolution or metric division/level. It is also associated with specific time signatures such as 2/2. Contrast with half time.</p> <p>In jazz the term means using note values twice as fast as previously but without changing the pace of the chord progressions. It is often used during improvised solos.</p> <p>"Double time [is] doubling a rhythm pattern within its original bar structure.":</p> <pre><code>1   2   3   4
1 2 3 4 1 2 3 4
</code></pre> <p>It may help to picture the way musicians count each metric level in 4/4:</p> <pre><code>quarter:   1       2       3       4
eighth:    1   &   2   &   3   &   4   &
sixteenth: 1 e & a 2 e & a 3 e & a 4 e & a
</code></pre> ]]></content:encoded> <enclosure url="https://chromatone.center/Double,_common,_and_half_times_same_tempo.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Circular metronome]]></title> <link>https://chromatone.center/practice/rhythm/circle/</link> <guid>https://chromatone.center/practice/rhythm/circle/</guid> <pubDate>Sat, 02 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Looped rhythm and polyrhythm exploration tool]]></description> <content:encoded><![CDATA[<h2 id="welcome-to-the-all-mighty-chromatone-circular-metronome" tabindex="-1">Welcome to the all-mighty Chromatone circular metronome <a class="header-anchor" href="#welcome-to-the-all-mighty-chromatone-circular-metronome" aria-label="Permalink to "Welcome to the all-mighty Chromatone circular metronome""></a></h2> <p>It's the center for deep exploration of any kind of rhythmic patterns. We have two independent wheels ready to hold any variety of looping beats. Here's how you operate it:</p> <ol> <li>Set the <strong>number of steps</strong> with the top left slider. The external loop takes up to 48 steps for long sequences. The internal loop is smaller and can hold up to 16 steps.</li> <li>Set the <strong>number of subdivisions</strong> of a measure with the top right slider. If the left and right numbers are equal you get a full loop cycle equal to one measure. But many more combinations are possible!</li> <li>Turn steps <strong>on and off</strong> by clicking the colored wheel segments.
The colorful lines connect all active steps of a sequence.</li> <li>Use the <i class="p-3 mr-1 i-ic-baseline-refresh"></i> button to reset the mutes distribution to the <a href="./../../../theory/rhythm/system/euclidean/">Euclidean rhythm</a> given the number of mutes and the loop step count.</li> <li>You can <strong>set accents</strong> on certain steps by clicking the step number. Filled circles produce an accented sound, hollow ones are regular beats.</li> <li>Of course you can choose one of <strong>5 sound kits</strong> for the wheels independently. Just drag or click the bottom left slider (that one with the letters A-E).</li> <li>If you want your own sounds to be played by the metronome – choose the last letter <strong>F</strong>. It'll open the two recorders and loaders for that. <strong>Record any sound</strong> from your microphone or <strong>upload any sound file</strong> for both regular and accented beats.</li> <li>Adjust the <strong>panning</strong> <i class="p-3 mr-1 i-mdi-pan-horizontal"></i> and the <strong>volume</strong> <i class="p-3 mr-1 i-la-volume-up"></i> of each loop with the sliders on the bottom right. You can even set two independent rhythms to play in two separate channels (left and right).</li> <li>Press the <strong>play</strong> button <i class="p-3 mr-1 i-la-play"></i> in the top right corner and listen to the patterns you've created. The playback may be paused <i class="p-3 mr-1 i-la-pause"></i> and resumed from that place or stopped and reset with the stop button <i class="p-3 mr-1 i-la-stop"></i>. If you don't hear any sound on your mobile – just turn off the silent mode. Tested on iOS, it works!</li> <li>There are two arrows <i class="p-3 mr-1 i-la-angle-left"></i> and <i class="p-3 mr-1 i-la-angle-right"></i> near the meter numbers at the top of the loops. With them you can <strong>rotate the pattern</strong> left or right at any moment, producing even more complex and evolving sequences.</li> <li>The center circle of the metronome is very interactive. Use the <i class="p-3 mr-1 i-la-minus"></i> and <i class="p-3 mr-1 i-la-plus"></i> buttons to incrementally <strong>change the tempo</strong> by one BPM. Or press <i class="p-3 mr-1 i-la-slash"></i> and <i class="p-3 mr-1 i-la-times"></i> sectors to divide/multiply the tempo by two. It's like traversing the octaves of sound pitches. Notice that you get the same 'note' and starting color for these tempos.</li> <li>You can also just tap and drag the center circle to change the tempo gradually, either with touch on mobile or mouse pointer on desktop.</li> <li>The top left corner with the tempo information works the same – <strong>drag it</strong> to set the tempo you want.</li> <li>At the bottom of the screen you can find buttons to get the tempo from the world around you to the app. The bottom left ear button <i class="p-3 mr-1 i-tabler-ear"></i> activates the sophisticated <strong>analyser that can determine the tempo</strong> from any incoming audio signal. Turn it on and let the microphone hear your rhythm: clap it, stomp it, sing it or just turn the music on. The app will listen to the audio and show its guess in the box near the 'ear' button. If you like what you get just press the number and the tempo will be set to the metronome itself.</li> <li>The bottom right hand button <i class="p-3 mr-1 i-fluent-tap-double-20-regular"></i> is just good old <strong>tap tempo</strong>. Tap it three times or more to see the tempo you've imagined or hear playing. The more you tap – the more precise the result gets.
Then tap the number to set the main tempo to the new value.</li> <li>If you see a square icon <i class="p-3 mr-1 i-la-expand"></i> at the bottom left corner, you are able to open the circle metronome to the <strong>full screen</strong>. What an immense experience!</li> <li>Once you build some interesting patterns you can <strong>export them as a MIDI file</strong> to use in any DAW or other MIDI-compatible tool. Just press the <i class="p-3 mr-1 i-la-file-download"></i> button at the bottom right corner and choose a place to save it to your system. Then you can drag and drop it to your DAW timeline, choose an instrument for the tracks and transpose the notes as desired.</li> <li>Explore our <a href="./../../../theory/rhythm/">rhythm theory section</a> for inspiration about what to dial into the loops. This app can act as a simple visual and audial cue for your music practice or become a tool to explore the enormous space of possible rhythmic combinations. Polyrhythms have never been so easy to see and internalise. The colors and the form of the metronome can help you stick to the tempo even in silence. Be creative and feel the power of this rhythm visualisation tool.</li> <li>You can also control all the parameters of the metronome with your MIDI-controller. Use the knobs to send <strong>CC messages 1-16</strong> on <strong>channel 1</strong> for it. The two loops are independently controlled by these MIDI commands. You can easily change the CC numbers here.</li> </ol> ]]></content:encoded> <enclosure url="https://chromatone.center/tempo.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Composers]]></title> <link>https://chromatone.center/theory/composition/composers/</link> <guid>https://chromatone.center/theory/composition/composers/</guid> <pubDate>Sat, 02 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Interviews and retrospectives]]></description> <content:encoded><![CDATA[<h2 id="steve-reich" tabindex="-1">Steve Reich <a class="header-anchor" href="#steve-reich" aria-label="Permalink to "Steve Reich""></a></h2> <youtube-embed video="4guApFvA3nk" /><youtube-embed video="_2EZ4ZBK4pQ" /><youtube-embed video="muH9JZZ3tG8" /><h2 id="naithan-bosse" tabindex="-1">Naithan Bosse <a class="header-anchor" href="#naithan-bosse" aria-label="Permalink to "Naithan Bosse""></a></h2> <p><a href="https://www.naithan.com/networked-music-performance/" target="_blank" rel="noreferrer">Network compositions</a></p> <ul> <li><a href="/public/media/pdf/Paresthesia_Score_July25_2019.pdf">Paresthesia - for distributed woodwinds and percussion (2016). PDF score</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/brian-eno-A-Year-with-Swollen-Appendices-faber-book.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Rhythmic systems]]></title> <link>https://chromatone.center/theory/rhythm/system/</link> <guid>https://chromatone.center/theory/rhythm/system/</guid> <pubDate>Sat, 02 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[Different systems of rhythmic organisation]]></description> <content:encoded><![CDATA[<p>Different cultures have different rhythmic systems that evolved through the ages.
India has the <a href="./tala/">Tala</a> system, Spain has grown its <a href="./flamenco/">Flamenco</a> tradition, Latin America has the <a href="./clave/">Clave</a> sound, and many approaches have evolved from <a href="./crossbeat/">African cross-beats</a>.</p> <youtube-embed video="ZROR_E5bFEI" />]]></content:encoded> <enclosure url="https://chromatone.center/korean-drums.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Rhythm bars]]></title> <link>https://chromatone.center/practice/rhythm/bars/</link> <guid>https://chromatone.center/practice/rhythm/bars/</guid> <pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[A linear metronome and polyrhythm exploration tool]]></description> <content:encoded><![CDATA[<client-only > <beat-bars /> </client-only > <h2 id="flexible-metronome-bars-to-construct-any-possible-rhythm" tabindex="-1">Flexible metronome bars to construct any possible rhythm <a class="header-anchor" href="#flexible-metronome-bars-to-construct-any-possible-rhythm" aria-label="Permalink to "Flexible metronome bars to construct any possible rhythm""></a></h2> <p>Here you have a linear version of the colorful metronome to play with.</p> <ol> <li>You can add any number of tracks to explore different rhythm combinations.</li> <li>For each bar you can set a number of beats (1-16) with the top left slider. It corresponds to the top number of a meter.</li> <li>Set the number of subdivisions (1-16) of a measure with the top right slider. If the left and right numbers are equal you get a full loop cycle equal to one measure. But many more combinations are possible!</li> <li>Turn steps on and off by clicking the colored segments. The colorful lines connect all active steps of the sequence.</li> <li>You can set accents on certain steps by clicking the step number. Filled circles produce an accented sound, hollow ones are regular beats.</li> <li>The most powerful thing is the <strong>subdivisions</strong> of any step – just click and <strong>drag the bottom of any beat</strong> to subdivide it into any number of steps. Compose very complex patterns with triplets, quadruplets, quintuplets and more. You can even mute any of the subdivisions – pure rhythmic freedom to play and to see. Change the BPM by dragging across the tempo information at the top right, or by clicking the add/multiply buttons.</li> <li>You can choose one of 5 sound kits for the bar independently. Just drag or click the bottom right slider (that one with the letters A-D).</li> <li>Adjust the panning <i class="p-3 mr-1 i-mdi-pan-horizontal"></i> and the volume <i class="p-3 mr-1 i-la-volume-up"></i> of each loop with the sliders on the bottom left. You can even set two independent rhythms to play in two separate channels (left and right).</li> <li>Press the play button <i class="p-3 mr-1 i-la-play"></i> at the top left of the control bar and listen to the patterns you've created. The playback may be paused <i class="p-3 mr-1 i-la-pause"></i> and resumed from that place or stopped and reset with the stop button <i class="p-3 mr-1 i-la-stop"></i>. If you don't hear any sound on your mobile – just turn off the silent mode. Tested on iOS, it works!</li> <li>There are two arrows <i class="p-3 mr-1 i-la-angle-left"></i> and <i class="p-3 mr-1 i-la-angle-right"></i> near the meter numbers at the top of the loops.
With them you can rotate the pattern left or right at any moment, producing even more complex and evolving sequences.</li> <li>Use the <i class="p-3 mr-1 i-la-minus"></i> and <i class="p-3 mr-1 i-la-plus"></i> buttons of the control bar at the top to incrementally change the tempo by one BPM. Or press the <i class="p-3 mr-1 i-la-slash"></i> and <i class="p-3 mr-1 i-la-times"></i> buttons to divide/multiply the tempo by two. It's like traversing the octaves of sound pitches. Notice that you get the same 'note' and starting color for these tempos.</li> <li>You can also just tap and drag the section with the BPM numbers to change the tempo gradually, either with touch on mobile or mouse pointer on desktop.</li> <li>On the control bar you can find buttons to get the tempo from the world around you. The ear button <i class="p-3 mr-1 i-tabler-ear"></i> activates the sophisticated analyser that can determine the tempo from any incoming audio signal. Turn it on and let the microphone hear your rhythm: clap it, stomp it, sing it or just turn the music on. The app will listen to the audio and show its guess in the box near the <i class="p-3 mr-1 i-tabler-ear"></i> button. If you like what you get just press the number and the tempo will be set to the metronome itself.</li> <li>The hand button <i class="p-3 mr-1 i-fluent-tap-double-20-regular"></i> is just good old tap tempo. Tap it three times or more to see the tempo you've imagined or hear playing. The more you tap – the more precise the result gets. Then tap the number to set the main tempo to the new value.</li> <li>If you see a square icon <i class="p-3 mr-1 i-la-expand"></i> at the bottom left corner, you are able to open the circle metronome to the full screen. What an immense experience!</li> <li>Once you build some interesting patterns you can export them as a MIDI file to use in any DAW or other MIDI-compatible tool. Just press the <i class="p-3 mr-1 i-la-file-download"></i> button at the bottom and choose a place to save it to your system. Then you can drag and drop it to your DAW timeline, choose an instrument for the tracks and transpose the notes as desired.</li> <li>Explore our <a href="./../../../theory/rhythm/">rhythm theory section</a> for inspiration about what to dial into the loops. This app can act as a simple visual and audial cue for your music practice or become a tool to explore the enormous space of possible rhythmic combinations. Polyrhythms have never been so easy to see and internalise. The colors and the form of the metronome can help you stick to the tempo even in silence.
Be creative and feel the power of this rhythm visualisation tool.</li> </ol> ]]></content:encoded> <enclosure url="https://chromatone.center/metro-bars.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Light]]></title> <link>https://chromatone.center/theory/color/light/</link> <guid>https://chromatone.center/theory/color/light/</guid> <pubDate>Fri, 01 Oct 2021 00:00:00 GMT</pubDate> <description><![CDATA[The nature and properties of electromagnetic radiation]]></description> <content:encoded><![CDATA[<p>What is the <a href="./sun/">Sun</a> and how do its <a href="./em-waves/">EM-waves</a> split into the whole <a href="./spectrum/">Waves spectrum</a>.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/prism-light.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Pitch]]></title> <link>https://chromatone.center/practice/pitch/</link> <guid>https://chromatone.center/practice/pitch/</guid> <pubDate>Thu, 30 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Explorations of the acoustic frequency domain]]></description> <content:encoded><![CDATA[<p>Find any given pitch on any given instrument with the colorful <a href="./tuner/">Tuner</a> that analyzes incoming audio and exactly shows you the main pitch.</p> <p>Or you may start plotting the detected pitch on an endless <a href="./roll/">Pitch Roll</a> and have fun drawing ups and downs with sound. A much more complex and information-rich plot is produced by the <a href="./spectrogram/">Spectrograph</a> that draws FFT frequency content of any incoming audio.</p> <p>We can not only analyze, but produce different pitches with this web-site. There's a traditional <a href="./drone/">Drone machine</a> that will generate a sound rich in upper harmonics at any desired pitch and a fifth above it to generate a pleasant background for vocal practice and instrument tuning.</p> <p>If you want to combine more simultaneous drone pitches - feel free to browse the <a href="./table/">Pitch table</a> of all audible audio frequencies and beyond!</p> <p>And there's the universal <a href="./fretboard/">Fretboard calculator</a> that may help you build your own custom fretted string instrument with any given dimensions.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/paulette-wooten.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Articulation and ornamentation]]></title> <link>https://chromatone.center/theory/melody/articulation/</link> <guid>https://chromatone.center/theory/melody/articulation/</guid> <pubDate>Thu, 30 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[String techniques and ways to play]]></description> <content:encoded><![CDATA[<h2 id="articulation-elements" tabindex="-1">Articulation elements <a class="header-anchor" href="#articulation-elements" aria-label="Permalink to "Articulation elements""></a></h2> <youtube-embed video="sFBz_VDpTVY" /><youtube-embed video="b_mAdJfcEZg" /><h2 id="string-technique-instructions" tabindex="-1">String technique instructions <a class="header-anchor" href="#string-technique-instructions" aria-label="Permalink to "String technique instructions""></a></h2> <youtube-embed video="ux3Z3yAK-UE" /><h2 id="ornamentation" tabindex="-1">Ornamentation <a class="header-anchor" href="#ornamentation" aria-label="Permalink to "Ornamentation""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Ornament_(music)" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Ornament_(music)</a></p> <youtube-embed video="64lyO-tlSZI" /><youtube-embed
video="Hx_-ZWk0sy0" />]]></content:encoded> </item> <item> <title><![CDATA[Study of scales]]></title> <link>https://chromatone.center/theory/scales/study/</link> <guid>https://chromatone.center/theory/scales/study/</guid> <pubDate>Thu, 30 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The principles for analyzing different combinations of notes]]></description> <content:encoded><![CDATA[<blockquote> <p>This original text is from <a href="https://ianring.com/musictheory/scales/" target="_blank" rel="noreferrer">The Exciting Universe Of Music Theory by Ian Ring</a></p> </blockquote> <p>There is no rule stating how many notes a scale must include. The most common scales in Western music contain seven pitches and are thus called “heptatonic” (meaning “seven tones”). Other scales have fewer notes—five-note “pentatonic” scales are quite common in popular music. There’s even a scale that uses all 12 pitches: it’s called the “chromatic” scale.</p> <p>What we have in the 12-tone system is a binary "word" made of 12 bits. We can assign one bit to each degree of the chromatic scale, and use the power of binary arithmetic and logic to do some pretty awesome analysis with them. When represented as bits it reads from right to left - the lowest bit is the root, and each bit going from right to left ascends by one semitone.</p> <p>The total number of possible combinations of on and off bits is called the "power set". The number of sets in a power set of size n is (2^n). Using a word of 12 bits, the power set (2^12) is equal to 4096. The fun thing about binary power sets is that we can produce every possible combination, by merely invoking the integers from 0 (no tones) to 4095 (all 12 tones).</p> <p>This means that every possible combination of tones in the 12-tone set can be represented by a number between 0 and 4095. We don't need to remember the fancy names like "phrygian", we can just call it scale number 1451. Convenient!</p> <blockquote> <p>An important concept here is that any set of tones can be represented by a number. This number is not "ordinal" - it's not merely describing the position of the set in an indexed scheme; it's also not "cardinal" because it's not describing an amount of something. This is a nominal number because the number <em>is</em> the scale. You can do binary arithmetic with it, and you are adding and subtracting scales with no need to convert the scale into some other representation.</p> </blockquote> <youtube-embed video="Vq2xt2D3e3E" /><h2 id="interval-pattern" tabindex="-1">Interval Pattern <a class="header-anchor" href="#interval-pattern" aria-label="Permalink to "Interval Pattern""></a></h2> <p>Another popular way of representing a scale is by its interval pattern. When I was learning the major scale, I was taught to say aloud: "tone, tone, semitone, tone, tone, tone, semitone". Many music theorists like to represent a scale this way because it's accurate and easy to understand: "TTSTTTS". Having a scale's interval pattern has merit as an intermediary step can make some kinds of analysis simpler. 
Expressed numerically - which is more convenient for computation - the major scale is [2,2,1,2,2,2,1].</p> <h2 id="pitch-class-sets" tabindex="-1">Pitch Class Sets <a class="header-anchor" href="#pitch-class-sets" aria-label="Permalink to "Pitch Class Sets""></a></h2> <p>Yet another way to represent a scale is as a "pitch class set", where the tones are assigned numbers 0 to 11 (sometimes using "T" and "E" for 10 and 11), and the set enumerates the ones present in the scale. A pitch class set for the major scale is notated like this: {0,2,4,5,7,9,11}. The "scales" we'll study here are a subset of Pitch Classes (ie those that have a root, and obey Zeitler's Rules) and we can use many of the same mathematical tricks to manipulate them.</p> <h2 id="what-is-a-scale" tabindex="-1">What is a scale? <a class="header-anchor" href="#what-is-a-scale" aria-label="Permalink to "What is a scale?""></a></h2> <p>Or more importantly, what is <em>not</em> a scale?</p> <p>Now that we have the superset of all possible sets of tones, we can whittle it down to exclude ones that we don't consider to be a legitimate "scale". We can do this with just two rules.</p> <h3 id="a-scale-starts-on-the-root-tone" tabindex="-1">A scale starts on the root tone. <a class="header-anchor" href="#a-scale-starts-on-the-root-tone" aria-label="Permalink to "A scale starts on the root tone.""></a></h3> <p>This means any set of notes that doesn't have that first bit turned on is not eligible to be called a scale. This cuts our power set in exactly half, leaving 2048 sets.</p> <p>In binary, it's easy to see that the first bit is on or off. In decimal, you can easily tell this by whether the number is odd or even. All even numbers have the first bit off; therefore all scales are represented by an odd number.</p> <p>We could have cut some corners in our discussion of scales by omitting the root tone (always assumed to be on) to work with 11 bits instead of 12, but there are compelling reasons for keeping the 12-bit number for our scales, such as simplifying the analysis of symmetry, simplifying the calculation of modes, and performing analysis of sonorities that do not include the root, where an even number is appropriate.</p> <p>Scales remaining: <strong>2048</strong></p> <h3 id="a-scale-does-not-have-any-leaps-greater-than-n-semitones" tabindex="-1">A scale does not have any leaps greater than n semitones. <a class="header-anchor" href="#a-scale-does-not-have-any-leaps-greater-than-n-semitones" aria-label="Permalink to "A scale does not have any leaps greater than n semitones.""></a></h3> <p>For the purposes of this exercise we are saying n = 4, a.k.a. a major third. Any collection of tones that has an interval greater than a major third is not considered a "scale". 
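</p> <p>Both rules are mechanical to check against the 12-bit words. A small sketch in plain JavaScript (the helper name is invented); filtering the whole power set this way leaves exactly the total reported below:</p> <pre><code>// Rule 1: the root bit must be on (only odd numbers qualify).
// Rule 2: no gap between consecutive tones (wrapping at the octave) larger than 4 semitones.
function isScale(word, maxLeap = 4) {
  if ((word & 1) === 0) return false
  const t = Array.from({ length: 12 }, (_, i) => i).filter(i => (word >> i) & 1)
  return t.every((note, i) => maxLeap >= (t[i + 1] ?? t[0] + 12) - note)
}

const scales = Array.from({ length: 4096 }, (_, n) => n).filter(n => isScale(n))
console.log(scales.length) // 1490
</code></pre> <p>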
This configuration is consistent with Zeitler's constant used to generate his comprehensive list of scales.</p> <p>Scales remaining: <strong>1490</strong></p> <p>Now that we've whittled our set of tones to only the ones we'd call a "scale", let's count how many there are with each number of tones.</p> <table tabindex="0"> <thead> <tr> <th style="text-align:left">number of tones</th> <th style="text-align:left">how many scales</th> </tr> </thead> <tbody> <tr> <td style="text-align:left">1</td> <td style="text-align:left">0</td> </tr> <tr> <td style="text-align:left">2</td> <td style="text-align:left">0</td> </tr> <tr> <td style="text-align:left">3</td> <td style="text-align:left">1</td> </tr> <tr> <td style="text-align:left">4</td> <td style="text-align:left">31</td> </tr> <tr> <td style="text-align:left">5</td> <td style="text-align:left">155</td> </tr> <tr> <td style="text-align:left">6</td> <td style="text-align:left">336</td> </tr> <tr> <td style="text-align:left">7</td> <td style="text-align:left">413</td> </tr> <tr> <td style="text-align:left">8</td> <td style="text-align:left">322</td> </tr> <tr> <td style="text-align:left">9</td> <td style="text-align:left">165</td> </tr> <tr> <td style="text-align:left">10</td> <td style="text-align:left">55</td> </tr> <tr> <td style="text-align:left">11</td> <td style="text-align:left">11</td> </tr> <tr> <td style="text-align:left">12</td> <td style="text-align:left">1</td> </tr> </tbody> </table> <h2 id="modes" tabindex="-1">Modes <a class="header-anchor" href="#modes" aria-label="Permalink to "Modes""></a></h2> <p>There is a lot of confusion about what is a "mode", chiefly because the word is used slightly differently in various contexts.</p> <p>When we say "C major", the word "major" refers to a specific pattern of whole- and half-steps. The "C" tells us to begin that pattern on the root tone of "C".</p> <p>Modes are created when you use the same patterns of whole- and half-steps, but you begin on a different step. For instance, the "D Dorian" mode uses all the same notes as C major (the white keys on a piano), but it begins with D. Compared with the Major (also known as "Ionian" mode), the Dorian sounds different, because relative to the root note D, it has a minor third and a minor seventh.</p> <p>The best way to understand modes is to think of a toy piano where the black keys are just painted on - all you have are the white keys: C D E F G A B. Can you play a song that sounds like it's in a minor key? You can't play a song in C minor, because that would require three flats. So instead you play the song with A as the root (A B C D E F G). That scale is a mode of the major scale, called the Aeolian Mode.</p> <p>When you play that minor scale, you're not playing "C minor", you're playing the relative minor of C, which is "A minor". Modes are relatives of each other if they have the same pattern of steps, starting on different steps.</p> <p>To compute a mode of the current scale, we "rotate" all the notes down one semitone. Then if the rotated notes have an on bit in the root, then it is a mode of the original scale. 
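</p> <p>That rotation is easy to express directly on the 12-bit words. A small sketch in plain JavaScript (the function names are invented), which reproduces the rotations of the major scale listed below:</p> <pre><code>// Rotate the word down one semitone: bit 1 becomes the new root,
// and the old root wraps around to the top bit.
function rotateDown(word) {
  return (word >> 1) | ((word & 1) ? 2048 : 0)
}

// Collect the rotations whose root bit is on – those are the modes.
function modesOf(scale) {
  const modes = []
  let w = scale
  Array.from({ length: 11 }).forEach(() => {
    w = rotateDown(w)
    if (w & 1) modes.push(w)
  })
  return modes
}

console.log(modesOf(2741)) // [1709, 1451, 2773, 1717, 1453, 1387]
</code></pre> <p>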
It's as if you take the bracelet diagram that we've been using throughout this study, and twist it like a dial so that a different note is at the top, in the root position.</p> <ul> <li>101010110101 = 2741 - major scale, "ionian" mode</li> <li>110101011010 = 3418 - rotated down 1 semitone - not a scale</li> <li>011010101101 = 1709 - rotated down 2 semitones - "dorian"</li> <li>101101010110 = 2902 - rotated down 3 semitones - not a scale</li> <li>010110101011 = 1451 - rotated down 4 semitones - "phrygian"</li> <li>101011010101 = 2773 - rotated down 5 semitones - "lydian"</li> <li>110101101010 = 3434 - rotated down 6 semitones - not a scale</li> <li>011010110101 = 1717 - rotated down 7 semitones - "mixolydian"</li> <li>101101011010 = 2906 - rotated down 8 semitones - not a scale</li> <li>010110101101 = 1453 - rotated down 9 semitones - "aeolian"</li> <li>101011010110 = 2774 - rotated down 10 semitones - not a scale</li> <li>010101101011 = 1387 - rotated down 11 semitones - "locrian"</li> </ul> <p>When we do this to every scale, we see modal relationships between scales, and we also discover symmetries when a scale is a mode of itself on another degree.</p> <h2 id="imperfection" tabindex="-1">Imperfection <a class="header-anchor" href="#imperfection" aria-label="Permalink to "Imperfection""></a></h2> <p>Imperfection is a concept invented (so far as I can tell) by William Zeitler, to describe the presence or absence of perfect fifths in the scale tones. Any tone in the scale that does not have the perfect fifth above it represented in the scale is an "imperfect" tone. The number of imperfections is a metric that plausibly correlates with the perception of dissonance in a sonority.</p> <p>The only scale that has no imperfections is the 12-tone chromatic scale.</p> <p><a href="https://ianring.com/musictheory/scales/" target="_blank" rel="noreferrer">https://ianring.com/musictheory/scales/</a></p> ]]></content:encoded> </item> <item> <title><![CDATA[Numbered notation]]></title> <link>https://chromatone.center/theory/notes/alternative/numbered/</link> <guid>https://chromatone.center/theory/notes/alternative/numbered/</guid> <pubDate>Wed, 29 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[A simplified notation using numbers to show scale degrees]]></description> <content:encoded><![CDATA[<h3 id="jian-pu-simplified-notation" tabindex="-1">Jian pu - 'simplified notation' <a class="header-anchor" href="#jian-pu-simplified-notation" aria-label="Permalink to "Jian pu - 'simplified notation'""></a></h3> <p>A similar invention was presented by Jean-Jacques Rousseau in a work presented to the French Academy of Sciences in 1742. Due to its straightforward correspondence to the standard notation, it is possible that many other claims of independent invention are also true. Grove's credits Emile J.M. Chevé.</p> <p>Although the system is used to some extent in Germany, France, and the Netherlands, and more by the Mennonites in Russia, it has never become popular in the Western world.
Number notation was used extensively in the 1920s and 30s by Columbia University Teachers College music educator Satis Coleman, who felt it "proved to be very effective for speed with adults, and also as a means simple enough for young children to use in writing and reading tunes which they sing, and which they play on simple instruments."</p> <p>The system is very popular among some Asian people, making conventions to encode and decode music more accessible than in the West, as more Chinese can sight read jianpu than standard notation. Most Chinese traditional music scores and popular song books are published in jianpu, and jianpu notation is often included in vocal music with staff notation.</p> <p><img src="./china.jpg" alt=""></p> <p><img src="./jianpu.jpg" alt=""></p> <youtube-embed video="TyB1efr8nGY" /><p><img src="./AmazingGraceNumberedMusicalNotation.png" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/china.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chords]]></title> <link>https://chromatone.center/practice/chord/</link> <guid>https://chromatone.center/practice/chord/</guid> <pubDate>Sat, 25 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Three and more note collections]]></description> <content:encoded><![CDATA[<p>There are thousands of possible 3-5 note combinations that are used as chords in modern music. If you need to get the shape of chords on a string instrument - try the <a href="./tabs/">Chord Tabs app</a>.</p> <p>To navigate the multitude of chord combinations and sequences you can explore their <a href="./modes/">Modal functions</a> in different scales. The relations between chords really shine in multiaxis interpretations like the Tonnetz <a href="./array/">Tonal array</a> or the interactive <a href="./fifths/">Circle of fifths</a> chords wheel.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/roberta-sorge.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Bilinear notation]]></title> <link>https://chromatone.center/theory/notes/alternative/bilinear/</link> <guid>https://chromatone.center/theory/notes/alternative/bilinear/</guid> <pubDate>Sat, 25 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Upgraded staff notation by Jose A. Sotorrio]]></description> <content:encoded><![CDATA[<p>Bilinear is quite similar to <a href="http://musicnotation.org/system/twinline-notation-by-thomas-reed/" target="_blank" rel="noreferrer">Reed’s Twinline</a>. Sotorrio maintains he had no prior knowledge of Twinline when designing Bilinear, since he relied primarily on Gardner Read’s Source Book of Proposed Music Notation Reforms which does not include Twinline. (It was published in 1987, just after Twinline was introduced in 1986.) Since Twinline is the earlier system (Bilinear was introduced in 1997), Sotorrio now offers Bilinear as a variant of Twinline.</p> <p><img src="./bilinear-jose-sotorrio.png" alt=""></p> <p>The two systems share the same line pattern and the same alternating oval and triangle shaped noteheads, but there are differences in their details. Twinline’s triangles are right triangles with the 90 degree angle at their tip, while Bilinear’s triangles have a sharper angle at their tip.
Also, the shape, color, and size of noteheads in Bilinear may be different depending on a note’s duration, as illustrated in the following image (courtesy of the Bilinear website):</p> <p><img src="./blinearcomparison2.jpg" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/bilinear-jose-sotorrio.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Scales]]></title> <link>https://chromatone.center/practice/scale/</link> <guid>https://chromatone.center/practice/scale/</guid> <pubDate>Thu, 23 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Sets of 5 or more notes to choose from]]></description> <enclosure url="https://chromatone.center/kimberleigh-crowie.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Triads]]></title> <link>https://chromatone.center/theory/chords/triads/</link> <guid>https://chromatone.center/theory/chords/triads/</guid> <pubDate>Wed, 22 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Chords consisting of three notes]]></description> <content:encoded><![CDATA[<h2 id="major-and-minor" tabindex="-1">Major and minor <a class="header-anchor" href="#major-and-minor" aria-label="Permalink to "Major and minor""></a></h2> <p>A major triad can also be described by its intervals: the interval between the bottom and middle notes is a major third and the interval between the middle and top notes is a minor third.</p> <p>In Western classical music from 1600 to 1820 and in Western pop, folk and rock music, a major chord is usually played as a triad. Along with the minor triad, the major triad is one of the basic building blocks of tonal music in the Western common practice period and Western pop, folk and rock music. It is considered consonant, stable, or not requiring resolution. In Western music, a minor chord "sounds darker than a major chord", giving off a sense of sadness or somber feeling.</p> <chroma-profile-collection :collection="triads.majmin" /><p>A unique particularity of the minor chord is that this is the only three-note chord in which the three notes have one harmonic in common that is audible and not too high in the harmonic series (more or less exactly, depending on the tuning system used). This harmonic, common to the three notes, is situated 2 octaves above the high note of the chord. This is the sixth harmonic of the root of the chord, the fifth of the middle note, and the fourth of the high note:</p> <blockquote> <p>In the example C, E♭, G, the common harmonic is a G 2 octaves above.</p> </blockquote> <p>Demonstration:</p> <ul> <li>Minor third = 6:5 = 12:10</li> <li>Major third = 5:4 = 15:12</li> <li>So the ratios of the minor chord are 10:12:15</li> <li>And the unique harmonic in common between the three notes is verified by: 10 × 6 = 12 × 5 = 15 × 4 (= 60)</li> </ul> <hr> <h2 id="suspended-chords" tabindex="-1">Suspended chords <a class="header-anchor" href="#suspended-chords" aria-label="Permalink to "Suspended chords""></a></h2> <p>A suspended chord (or sus chord) is a musical chord in which the (major or minor) third is omitted and replaced with a perfect fourth or, less commonly, a major second. The lack of a minor or a major third in the chord creates an open sound, while the dissonance between the fourth and fifth or second and root creates tension.
When using popular-music symbols, they are indicated by the symbols "sus4" and "sus2".</p> <p>The term is borrowed from the contrapuntal technique of suspension, where a note from a previous chord is carried over to the next chord, and then resolved down to the third or tonic, suspending a note from the previous chord. However, in modern usage, the term concerns only the notes played at a given time; in a suspended chord, the added tone does not necessarily resolve and is not necessarily "prepared" (i.e., held over) from the prior chord. As such, in C–F–G, F would resolve to E (or E♭, in the case of C minor), but in rock and popular music, "the term is used to indicate only the harmonic structure, with no implications about what comes before or after," though preparation of the fourth occurs about half the time and traditional resolution of the fourth occurs usually. In modern jazz, a third can be added to the chord voicing, as long as it is above the fourth.</p> <p>Each suspended chord has two inversions. Suspended second chords are inversions of suspended fourth chords, and vice versa. For example, Gsus2 (G–A–D) is the first inversion of Dsus4 (D–G–A) which is the second inversion of Gsus2 (G–A–D). The sus2 and sus4 chords both have an inversion that creates a quartal chord (A–D–G) with two stacked perfect fourths.</p> <p>Sevenths on suspended chords are "virtually always minor sevenths", while the 9sus4 chord is similar to an eleventh chord and may be notated as such. For example, C9sus4 (C–F–G–B♭–D) may be notated C11 (C–G–B♭–D–F).</p> <chroma-profile-collection :collection="triads.sus" /><hr> <h2 id="augmented-and-diminished" tabindex="-1">Augmented and diminished <a class="header-anchor" href="#augmented-and-diminished" aria-label="Permalink to "Augmented and diminished""></a></h2> <p>A diminished triad (also known as the minor flatted fifth) is a triad consisting of two minor thirds above the root. It is a minor triad with a lowered (flattened) fifth. When using chord symbols, it may be indicated by the symbols "dim", "o", "m♭5", or "MI(♭5)". However, in most popular-music chord books, the symbol "dim" and "o" represents a diminished seventh chord (a four-tone chord), which in some modern jazz books and music theory books is represented by the "dim7" or "o7" symbols.</p> <p>In major scales, a diminished triad occurs only on the seventh scale degree. For instance, in the key of C, this is a B diminished triad (B, D, F). Since the triad is built on the seventh scale degree, it is also called the leading-tone triad. This chord has a dominant function. Unlike the dominant triad or dominant seventh, the leading-tone triad functions as a prolongational chord rather than a structural chord since the strong root motion by fifth is absent.</p> <p>On the other hand, in natural minor scales, the diminished triad occurs on the second scale degree; in the key of C minor, this is the D diminished triad (D, F, A♭). This triad is consequently called the supertonic diminished triad. Like the supertonic triad found in a major key, the supertonic diminished triad has a predominant function, almost always resolving to a dominant functioning chord.</p> <p>If the music is in a minor key, diminished triads can also be found on the raised seventh note, ♯viio. This is because the ascending melodic minor scale has a raised sixth and seventh degree. 
For example, the chord progression ♯viio–i is common.</p> <p>The leading-tone diminished triad and supertonic diminished triad are usually found in first inversion (viio6 and iio6, respectively) since the spelling of the chord forms a diminished fifth with the bass. This differs from the fully diminished seventh chord, which commonly occurs in root position. In both cases, the bass resolves up and the upper voices move downwards in contrary motion.</p> <chroma-profile-collection :collection="triads.mod" /><p>The term augmented triad arises from an augmented triad being considered a major chord whose top note (fifth) is raised. When using popular-music symbols, it is indicated by the symbol "+" or "aug". For example, the augmented triad built on C, written as C+, has pitches C–E–G♯:</p> <p>Whereas a major triad, such as C–E–G, contains a major third (C–E) and a minor third (E–G), with the interval of the fifth (C–G) being perfect, the augmented triad has an augmented fifth, becoming C–E–G♯. In other words, the top note is raised a semitone.</p> <p>The augmented triad on the V may be used as a substitute dominant, and may also be considered as ♭III+. The example below shows ♭III+ as a substitute dominant in a ii-V-I turnaround in C major.</p> <p>Though rare, the augmented chord occurs in rock music, "almost always as a linear embellishment linking an opening tonic chord with the next chord," for example John Lennon's "(Just Like) Starting Over" and The Beatles' "All My Loving". Thus, with an opening tonic chord, an augmented chord results from ascending or descending movement between the fifth and sixth degrees, such as in the chord progression I – I+ – vi. This progression forms the verse for Oasis's 2005 single "Let There Be Love" (I – I+ – vi – IV)</p> <h2 id="synthetic-triads" tabindex="-1">Synthetic triads <a class="header-anchor" href="#synthetic-triads" aria-label="Permalink to "Synthetic triads""></a></h2> <p>A synthetic chord is a made-up or non-traditional (synthetic) chord (collection of pitches) which cannot be analyzed in terms of traditional harmonic structures, such as the triad or seventh chord.</p> <blockquote> <p>This title is applied to a group of notes, usually a scale-like succession of pitches, with a fixed progression of tones and semitones. This scale can obviously be transposed to any pitch, and depending on its intervallic makeup, will have a fixed number of possible transpositions. 
Furthermore, the sintetakkord can be used either vertically or horizontally; Roslavets' music is not concerned with the order of the pitches, but rather with the whole 'field' thus created, so that the system is less oriented toward themes and more toward harmonic fields.</p> <p>— Sitsky (1994)</p> </blockquote> <chroma-profile-collection :collection="triads.synthetic" />]]></content:encoded> <enclosure url="https://chromatone.center/austin-prock.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chromatic staff]]></title> <link>https://chromatone.center/theory/notes/alternative/chromatic-staff/</link> <guid>https://chromatone.center/theory/notes/alternative/chromatic-staff/</guid> <pubDate>Wed, 22 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Extension of the regular staff to have room for all the 12 notes]]></description> <content:encoded><![CDATA[<blockquote> <p>“The need for a new notation, or a radical improvement of the old, is greater than it seems, and the number of ingenious minds that have tackled the problem is greater than one might think.” — Arnold Schoenberg</p> </blockquote> <p>Here is a chromatic scale on a traditional diatonic staff (above) and the same chromatic scale on a chromatic staff with five lines (below). This is just one of many versions of chromatic staff.</p> <p><img src="./chromatic-staff.png" alt=""></p> <p>On a chromatic staff each note has its own line or space on the staff. On the traditional staff only seven notes have their own line or space, the notes from just one key (C major/A minor, the white keys on the piano). The remaining notes (the black keys) have to be represented by altering these seven notes with sharp signs (#) or flat signs (b), either in the key signature or as an accidental.</p> <p><img src="./octaves-chromatic-5-line.png" alt=""></p> <p>All of these features of traditional music notation combine to make reading music much more difficult than it might be with a better notation system. For an analogy, imagine trying to do arithmetic with Roman numerals. It can be done, but the notation system makes a big difference. Of course it is important to view traditional notation in its broader historical context and to keep in mind the innovations and reforms that it has undergone over time.</p> <p><a href="http://musicnotation.org/" target="_blank" rel="noreferrer">Music notation project</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/chromatic-staff.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Chroma]]></title> <link>https://chromatone.center/practice/chroma/</link> <guid>https://chromatone.center/practice/chroma/</guid> <pubDate>Mon, 20 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Explore all possible combinations of all the 12 chromatic notes]]></description> <content:encoded><![CDATA[<p>Navigating the multifarious space of pitch class combinations is a wonderful journey that may be accompanied with different tools we've made. Plot a mathematically perfect <a href="./waveform/">Chroma Waveform</a> of a sine wave pitch oscillations collections.</p> <p>Or just explore the collections of all possible note combinations of chroma with the <a href="./profile/">Chroma profile</a>. Or construct any note collection with the <a href="./compass/">Chroma Compass</a>. 
And in case you'll need to organize all those notes in loops in time - that's what the <a href="./grid/">Chroma grid</a> is designed for.</p> <p>There's another endless <a href="./gram/">Chroma Roll</a> that will plot the relative pitch class power for any incoming audio. The same data may help you see the pitch class content of any sound with the <a href="./see/">Chroma See</a>.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/compass.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Polytonality]]></title> <link>https://chromatone.center/theory/harmony/polytonality/</link> <guid>https://chromatone.center/theory/harmony/polytonality/</guid> <pubDate>Mon, 20 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The musical use of more than one key simultaneously.]]></description> <content:encoded><![CDATA[<p>Polytonality (also polyharmony) is the musical use of more than one key simultaneously. Bitonality is the use of only two different keys at the same time. Polyvalence is the use of more than one harmonic function, from the same key, at the same time.</p> <youtube-embed video="lxPvWRXkEbE" /><h2 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h2> <h3 id="in-traditional-music" tabindex="-1">In traditional music <a class="header-anchor" href="#in-traditional-music" aria-label="Permalink to "In traditional music""></a></h3> <youtube-embed video="iGPO1kcTTIc" /><p>Lithuanian traditional singing style sutartines is based on polytonality. A typical sutartines song is based on a six-bar melody, where the first three bars contains melody based on the notes of the triad of a major key (for example, in G major), and the next three bars is based on another key, always a major second higher or lower (for example, in A major). This six-bar melody is performed as a canon, and repetition starts from the fourth bar. As a result, parts are constantly singing in different tonality (key) simultaneously (in G and in A). As a traditional style, sutartines disappeared in Lithuanian villages by the first decades of the 20th century, but later became a national musical symbol of Lithuanian music.</p> <youtube-embed video="Wij_cgVGOxw" /><youtube-embed video="zeEnhlRteiA" /><hr> <p>Tribes throughout India—including the Kuravan of Kerala, the Jaunsari of Uttar Pradesh, the Gond, the Santal, and the Munda—also use bitonality, in responsorial song.</p> <h3 id="in-classical-music" tabindex="-1">In classical music <a class="header-anchor" href="#in-classical-music" aria-label="Permalink to "In classical music""></a></h3> <p>In J. S. Bach's Clavier-Übung III, there is a two-part passage where, according to Scholes: "It will be seen that this is a canon at the fourth below; as it is a strict canon, all the intervals of the leading 'voice' are exactly imitated by the following 'voice', and since the key of the leading part is D minor modulating to G minor, that of the following part is necessarily A minor modulating to D minor. Here, then, we have a case of polytonality, but Bach has so adjusted his progressions (by the choice at the critical moment of notes common to two keys) that while the right hand is doubtless quite under the impression that the piece is in D minor, etc., and the left hand that it is in A minor, etc., the listener feels that the whole thing is homogeneous in key, though rather fluctuating from moment to moment. 
In other words, Bach is trying to make the best of both worlds—the homotonal one of his own day and (prophetically) the polytonal one of a couple of centuries later."</p> <youtube-embed video="zJAcqI2HE8c" /><p>Another early use of polytonality occurs in the classical period in the finale of Wolfgang Amadeus Mozart's composition A Musical Joke, which he deliberately ends with the violins, violas and horns playing in four discordant keys simultaneously. However, it was not featured prominently in non-programmatic contexts until the twentieth century, particularly in the work of Charles Ives (Psalm 67, c. 1898–1902), Béla Bartók (Fourteen Bagatelles, Op. 6, 1908), and Stravinsky (Petrushka, 1911). Ives claimed that he learned the technique of polytonality from his father, who taught him to sing popular songs in one key while harmonizing them in another.</p> <p>Although it is only used in one section and intended to represent drunken soldiers, there is an early example of polytonality in Heinrich Ignaz Franz Biber's short composition Battalia, written in 1673.</p> <youtube-embed video="EkwqPJZe8ms" /><p>Stravinsky's The Rite of Spring is widely credited with popularizing bitonality, and contemporary writers such as Casella (1924) describe him as the progenitor of the technique: "the first work presenting polytonality in typical completeness—not merely in the guise of a more or less happy 'experiment', but responding throughout to the demands of expression—is beyond all question the grandiose Le Sacre du Printemps of Stravinsky (1913)".</p> <youtube-embed video="ISdxPv2u-ns" /><p>Bartók's "Playsong" demonstrates easily perceivable bitonality through "the harmonic motion of each key ... [being] relatively uncomplicated and very diatonic". Here, the "duality of key" featured is A minor and C♯ minor.</p> <blockquote> <youtube-embed video="zZanU1ZaN6k" /><p>Example of polytonality or extended tonality from Milhaud's Saudades do Brasil (1920), right hand in B major and left hand in G major, or both hands in extended G major.</p> </blockquote> <p>Other polytonal composers influenced by Stravinsky include those in the French group, Les Six, particularly Darius Milhaud, as well as Americans such as Aaron Copland.</p> <youtube-embed video="vC3qQpyp4rI" /><p>Benjamin Britten used bi- and polytonality in his operas, as well as enharmonic relationships, for example to signify the conflict between Claggart (F minor) and Billy (E major) in Billy Budd (note the shared enharmonically equivalent G♯/A♭) or to express the main character's "maladjustment" in Peter Grimes.</p> <h3 id="polytonality-and-polychords" tabindex="-1">Polytonality and polychords <a class="header-anchor" href="#polytonality-and-polychords" aria-label="Permalink to "Polytonality and polychords""></a></h3> <p>Polytonality requires the presentation of simultaneous key-centers. The term "polychord" describes chords that can be constructed by superimposing multiple familiar tonal sonorities. For example, familiar ninth, eleventh, and thirteenth chords can be built from or decomposed into separate chords:</p> <p>Thus polychords do not necessarily suggest polytonality, but they may not be explained as a single tertian chord. The Petrushka chord is an example of a polychord.
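</p> <p>As a rough illustration of the "built from or decomposed into separate chords" idea, here is a minimal pitch-class sketch. The particular reading of a C9 chord as a C major triad plus a G minor triad is a common textbook-style example chosen here for illustration, not one quoted from the sources above:</p> <pre><code># Python sketch: an extended chord read as two superimposed triads
NAMES = ["C", "Db", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

c_major_triad = {0, 4, 7}         # C  E  G
g_minor_triad = {7, 10, 2}        # G  Bb D
c_ninth_chord = {0, 4, 7, 10, 2}  # C  E  G  Bb D (root, 3rd, 5th, b7th, 9th)

# the two stacked triads together account for every note of the ninth chord
assert c_major_triad | g_minor_triad == c_ninth_chord
print([NAMES[pc] for pc in sorted(c_ninth_chord)])  # ['C', 'D', 'E', 'G', 'Bb']
</code></pre> <p>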
This is the norm in jazz, for example, which makes frequent use of "extended" and polychordal harmonies without any intended suggestion of "multiple keys."</p> <h2 id="polyvalency" tabindex="-1">Polyvalency <a class="header-anchor" href="#polyvalency" aria-label="Permalink to "Polyvalency""></a></h2> <p>The following passage, taken from Beethoven's Piano Sonata in E♭, Op. 81a (Les Adieux), suggests clashes between tonic and dominant harmonies in the same key.</p> <p>Leeuw points to Beethoven's use of the clash between tonic and dominant, such as in his Third Symphony, as polyvalency rather than bitonality, with polyvalency being, "the telescoping of diverse functions that should really occur in succession to one another".</p> <h2 id="polymodality" tabindex="-1">Polymodality <a class="header-anchor" href="#polymodality" aria-label="Permalink to "Polymodality""></a></h2> <p>Passages of music, such as Poulenc's Trois mouvements perpétuels, I., may be misinterpreted as polytonal rather than polymodal. In this case, two scales[clarification needed] are recognizable but are assimilated through the common tonic (B♭).</p> <h2 id="polyscalarity" tabindex="-1">Polyscalarity <a class="header-anchor" href="#polyscalarity" aria-label="Permalink to "Polyscalarity""></a></h2> <p>Polyscalarity is defined as "the simultaneous use of musical objects which clearly suggest different source-collections. Specifically in reference to Stravinsky's music, Tymoczko uses the term polyscalarity out of deference to terminological sensibilities. In other words, the term is meant to avoid any implication that the listener can perceive two keys at once. Though Tymoczko believes that polytonality is perceivable, he believes polyscalarity is better suited to describe Stravinsky's music. This term is also used as a response to Van den Toorn's analysis against polytonality. Van den Toorn, in an attempt to dismiss polytonal analysis used a monoscalar approach to analyze the music with the octatonic scale. However, Tymoczko states that this was problematic in that it does not resolve all instances of multiple interactions between scales and chords. Moreover, Tymoczko quotes Stravinsky's claim that the music of Petrouchka's second tableau was conceived "in two keys". Polyscalarity is then a term encompassing multiscalar superimpositions and cases which give a different explanation than the octatonic scale.</p> <h2 id="challenges" tabindex="-1">Challenges <a class="header-anchor" href="#challenges" aria-label="Permalink to "Challenges""></a></h2> <p>Some music theorists, including Milton Babbitt and Paul Hindemith have questioned whether polytonality is a useful or meaningful notion or "viable auditory possibility". Babbitt called polytonality a "self-contradictory expression which, if it is to possess any meaning at all, can only be used as a label to designate a certain degree of expansion of the individual elements of a well-defined harmonic or voice-leading unit". Other theorists to question or reject polytonality include Allen Forte and Benjamin Boretz, who hold that the notion involves logical incoherence.</p> <p>Other theorists, such as Dmitri Tymoczko, respond that the notion of "tonality" is a psychological, not a logical notion. 
Furthermore, Tymoczko argues that two separate key-areas can, at least at a rudimentary level, be heard at one and the same time: for example, when listening to two different pieces played by two different instruments in two areas of a room.</p> <h2 id="octatonicism" tabindex="-1">Octatonicism <a class="header-anchor" href="#octatonicism" aria-label="Permalink to "Octatonicism""></a></h2> <youtube-embed video="esD90diWZds" /><p>Some critics of the notion of polytonality, such as Pieter van den Toorn, argue that the octatonic scale accounts in concrete pitch-relational terms for the qualities of "clashing", "opposition", "stasis", "polarity", and "superimposition" found in Stravinsky's music and, far from negating them, explains these qualities on a deeper level. For example, the passage from Petrushka, cited above, uses only notes drawn from the C octatonic collection C–C♯–D♯–E–F♯–G–A–A♯.</p> <h2 id="polymodal-chromaticism" tabindex="-1">Polymodal chromaticism <a class="header-anchor" href="#polymodal-chromaticism" aria-label="Permalink to "Polymodal chromaticism""></a></h2> <p>In music, polymodal chromaticism is the use of any and all musical modes sharing the same tonic simultaneously or in succession and thus creating a texture involving all twelve notes of the chromatic scale (total chromatic). Alternately it is the free alteration of the other notes in a mode once its tonic has been established.</p> <p>The term was coined by composer, ethnomusicologist, and pianist Béla Bartók. The technique became a means in Bartók's composition to avoid, expand, or develop major-minor tonality (i.e. common practice harmony). This approach differed from that used by Arnold Schoenberg and his followers in the Second Viennese School and later serialists.</p> <p>The concept was indicated by Bartók's folk-music-derived view of each note of the chromatic scale as being "of equal value" and thus to be used "freely and independently" (autobiography) and supported by references to the conception below in his Harvard Lectures (1943). The concept may be extended to the construction of non-diatonic modes from the pitches of more than one diatonic mode such as distance models including 1:3, the alternation of semitones and minor thirds, for example C–E♭–E–G–A♭–B–C which includes both the tonic and dominant as well as "'two of the most typical degrees from both major and minor' (E and B, E♭ and A♭, respectively) Kárpáti 1975 p. 132)".</p> <p>Bartók had realised that both melodic minor scales gave rise to four chromatic steps between the two scales' fifths and the rising melodic minor scale's seventh degrees when superimposed. Consequently, he started investigating if the same pattern could be established in some way in the beginning of any scales and came to realise that superimposing a Phrygian and a Lydian scale with the same tonic resulted in what looked like a chromatic scale. Bartók's twelve-tone Phrygian/Lydian polymode, however, differed from the chromatic scale as used by, for example, late-Romantic composers like Richard Strauss and Richard Wagner. During the late 19th century the chromatic altering of a chord or melody was a change in strict relation to its functional non-altered version. 
Alterations in the twelve-tone Phrygian/Lydian polymode, on the other hand, were "diatonic ingredients of a diatonic modal scale."</p> <blockquote> <p>Phrygian mode (C) C–D♭–E♭–F–G–A♭–B♭–C</p> <p>Lydian mode (C) C–D–E–F♯–G–A–B–C</p> <p>Twelve-tone Phrygian/Lydian polymode (C) C–D♭–D–E♭–E–F–F♯–G–A♭–A–B♭–B–C</p> <p>Twelve-tone Phrygian-Lydian polymode</p> </blockquote> <p>Melodies could be developed and transformed in novel ways through diatonic extension and chromatic compression, while still having coherent links to their original forms. Bartók described this as a new means to develop a melody.</p> <p>Bartók started to superimpose all possible diatonic modes on each other in order to extend and compress melodies in ways that suited him, unrestricted by Baroque-Romantic tonality as well as strict serial methods such as the twelve-tone technique.</p> <p>In 1941, Bartók's ethnomusicological studies brought him into contact with the music of Dalmatia and he realised that the Dalmatian folk-music used techniques that resembled polymodal chromaticism. Bartók had defined and used polymodal chromaticism in his own music before this. The discovery inspired him to continue to develop the technique.</p> <p>Examples of Bartók's use of the technique include No. 80 ("Hommage à R. Sch.") from Mikrokosmos featuring C Phrygian/Lydian (C–D♭–E♭–F–G–A♭–B♭–C/C–D–E–F♯–G–A–B–C). Lendvai identifies the technique in the late works of Modest Mussorgsky, Richard Wagner, Franz Liszt, and Giuseppe Verdi.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Klavarskribo]]></title> <link>https://chromatone.center/theory/notes/alternative/klavar/</link> <guid>https://chromatone.center/theory/notes/alternative/klavar/</guid> <pubDate>Mon, 20 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Vertical chromatic staff notation by Cornelis Pot]]></description> <content:encoded><![CDATA[<p>Klavarskribo (sometimes shortened to klavar) is a music notation system that was introduced in 1931 by the Dutchman Cornelis Pot (1885–1977). The name means "keyboard writing" in Esperanto. It differs from conventional music notation in a number of ways and is intended to be easily readable.</p> <p><img src="./Klavar.png" alt=""></p> <p>The stave on which the notes are written is vertical so the music is read from top to bottom. Each note has its own individual position, low notes on the left and high notes on the right as on the piano. This stave consists of groups of two and three vertical lines corresponding to the black keys (notes) of the piano. White notes are written in the seven white spaces between the lines. Therefore sharps and flats are no longer needed, as each note has its own place in the octave.
The evident correspondence between the stave and a piano induced Pot to use the name Klavarskribo.</p> <p><img src="./KlavarExplain_3-E.png" alt=""></p> <p>All notes are provided with stems—stems to the right: play with the right hand, stems to the left: left hand.</p> <youtube-embed video="efTv05nWNhk" /><youtube-embed video="5mTRUF6q5-I" /><p><img src="./Klavar-debussy.png" alt=""></p> <p><a href="https://www.klavarskribo.eu/en/" target="_blank" rel="noreferrer">klavarskribo.eu</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/Klavar.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Ring Tone Text Transfer Language]]></title> <link>https://chromatone.center/theory/notes/computer/ring-tone/</link> <guid>https://chromatone.center/theory/notes/computer/ring-tone/</guid> <pubDate>Mon, 20 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Note encoding format for ringtones]]></description> <content:encoded><![CDATA[<p><a href="https://en.wikipedia.org/wiki/Ring_Tone_Text_Transfer_Language" target="_blank" rel="noreferrer">Ring Tone Text Transfer Language</a> (RTTTL) was developed by Nokia to transfer ringtones to its cellphones.</p> <p>The RTTTL format is a string divided into three sections: name, default value, and data.</p> <p>The name section consists of a string describing the name of the ringtone. It can be no longer than 10 characters, and cannot contain a colon ":" character. (However, since the Smart Messaging specification allows names up to 15 characters in length, some applications processing RTTTL also do so.)</p> <p>The default value section is a set of values separated by commas, where each value contains a key and a value separated by an = character, which describes certain defaults which should be adhered to during the execution of the ringtone. Possible names are</p> <ul> <li>d - duration</li> <li>o - octave</li> <li>b - beat, tempo</li> </ul> <p>The data section consists of a set of character strings separated by commas, where each string contains a duration, pitch, octave and optional dotting (which increases the duration of the note by one half).</p> <p>The format of RTTTL notation is similar to the Music Macro Language found in BASIC implementations present on many early microcomputers.</p> <h2 id="technical-specification" tabindex="-1">Technical specification <a class="header-anchor" href="#technical-specification" aria-label="Permalink to "Technical specification""></a></h2> <p>To be recognized by ringtone programs, an RTTTL/Nokring format ringtone must contain three specific elements: name, settings, and notes.</p> <p>For example, here is the RTTTL ringtone for Haunted House:</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>HauntHouse: d=4,o=5,b=108: 2a4, 2e, 2d#, 2b4, 2a4, 2c, 2d, 2a#4, 2e., e, 1f4, 1a4, 1d#, 2e., d, 2c., b4, 1a4, 1p, 2a4, 2e, 2d#, 2b4, 2a4, 2c, 2d, 2a#4, 2e., e, 1f4, 1a4, 1d#, 2e., d, 2c., b4, 1a4</span></span></code></pre> </div><p>The three parts are separated by a colon.</p> <ul> <li> <p>Part 1: name of the ringtone (here: "HauntHouse"), a string of characters representing the name of the ringtone</p> </li> <li> <p>Part 2: settings (here: d=4,o=5,b=108), where "d=" is the default duration of a note.
In this case, the "4" means that each note with no duration specifier (see below) is by default considered a quarter note. "8" would mean an eighth note, and so on. Accordingly, "o=" is the default octave. There are four octaves in the Nokring/RTTTL format. And "b=" is the tempo, in "beats per minute".</p> </li> <li> <p>Part 3: the notes. Each note is separated by a comma and includes, in sequence: a duration specifier, a standard music note, either a, b, c, d, e, f or g, and an octave specifier, as in scientific pitch notation. If no duration or octave specifier are present, the default applies.</p> </li> </ul> <h3 id="durations" tabindex="-1">Durations <a class="header-anchor" href="#durations" aria-label="Permalink to "Durations""></a></h3> <p>Standard musical durations are denoted by the following notations:</p> <ul> <li>1 - whole note</li> <li>2 - half note</li> <li>4 - quarter note</li> <li>8 - eighth note</li> <li>16 - sixteenth note</li> <li>32 - thirty-second note</li> </ul> <p>Dotted rhythm patterns can be formed by appending a period (".") character to the end of a duration/beat/octave element.</p> <h3 id="pitch" tabindex="-1">Pitch <a class="header-anchor" href="#pitch" aria-label="Permalink to "Pitch""></a></h3> <pre><code>P - rest or pause A - A A# - A♯ / B♭ B - B / C♭ C - C C# - C♯ / D♭ D - D D# - D♯ / E♭ E - E / F♭ F - F / E♯ F# - F♯ / G♭ G - G G# - G♯ / A♭ </code></pre> <h3 id="octave" tabindex="-1">Octave <a class="header-anchor" href="#octave" aria-label="Permalink to "Octave""></a></h3> <p>The RTTTL format allows octaves starting from the A below middle C and going up four octaves. This corresponds with the inability of cellphones to reproduce certain tones audibly. These octaves are numbered from lowest pitch to highest pitch from 4 to 7.</p> <p>The octave should be left out of the notation in the case of a rest or pause in the pattern. Example</p> <p>An example of the RTTTL format would be</p> <p><code>fifth:d=4,o=5,b=63:8P,8G5,8G5,8G5,2D#5</code></p> ]]></content:encoded> </item> <item> <title><![CDATA[Temperaments]]></title> <link>https://chromatone.center/theory/notes/temperaments/</link> <guid>https://chromatone.center/theory/notes/temperaments/</guid> <pubDate>Mon, 20 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Tuning systems]]></description> <content:encoded><![CDATA[<p>There's a plenty of ways to get all the notes to play with. The natural <a href="./just/">Just intonation</a> is based on simple ratios. If we take only one ratio of 2:3 we can construct the <a href="./pythagorean/">Pythagorean</a> tuning.</p> <p>Logarithms are in the base of the modern <a href="./equal/">12-TET</a> and anyone can <a href="./tunings/">compare it</a> to other systems easily.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/derek-story.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Dodeka]]></title> <link>https://chromatone.center/theory/notes/alternative/dodeka/</link> <guid>https://chromatone.center/theory/notes/alternative/dodeka/</guid> <pubDate>Sat, 18 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Keyboard and notation redesigned for consistence and ease of use]]></description> <content:encoded><![CDATA[<p>The Dodeka Keyboard Design is an isomorphic keyboard invented and designed by Jacques-Daniel Rochat. It is similar to a piano keyboard but with only a single row of keys containing each chromatic note.The keys corresponding to C, E and A flat are highlighted to provide visual landmarks. 
The creators aimed to create a rational and chromatic approach to music and performance. As an isomorphic keyboard, any musical sequence or interval has the same shape in each of the 12 keys.</p> <p><img src="./DODEKA_Keyboard-comparison.png" alt=""></p> <p>The Dodeka Music Notation is an alternative music notation or musical notation system invented and designed in 1980s by inventor and musician Jacques-Daniel Rochat in an attempt to improve upon traditional music notation.</p> <p><img src="./Dodeka-music-notation-staff-pitch-web.jpg" alt=""></p> <p>Unlike conventional musical notation, the Dodeka music notation system uses a chromatic scale of 12 pitches and follows an equal pitch intervals configuration, with 4 lines per octave. In this configuration, the 12 notes of an octave appear in four positions vis-à-vis the staff lines, that is, either on, between, above and below the lines.</p> <p>Each pitch has its own unique place on the staff. And while conventional music notation may alter notes using accidental signs or key signatures, notes in the Dodeka notation appear as they are. There are no more key signatures or accidental signs in this musical system.</p> <blockquote> <p><img src="./dodeka-alternative-music-notation-moonlight-D.jpg" alt=""> (Excerpt of Beethoven Für Elise written in Dodeka Notation)</p> </blockquote> <p>The Dodeka notation system represents note duration in a visual manner. Note lengths are represented through the notes graphical shapes, similar to what can be found in sequencer programmes. The reference time unit or time value being the quarter note (or crotchet), all durations are expressed as visual ratios from this reference point. For example, a whole note is the representation of four quarter note lengths. At the opposite, an eighth note (or quaver) is twice as short as a quarter note. All the other symbols and articulation marks are also <a href="https://www.dodekamusic.com/learn/alternative-music-notation/dodeka-musical-symbols-list-meaning/" target="_blank" rel="noreferrer">reimagined in Dodeka</a>.</p> <p><a href="https://apps.apple.com/us/app/dodeka-music/id1260932281?ls=1" target="_blank" rel="noreferrer"><img src="./dodeka-app.png" alt=""></a></p> <h3 id="dodeka-note-names" tabindex="-1">Dodeka note names <a class="header-anchor" href="#dodeka-note-names" aria-label="Permalink to "Dodeka note names""></a></h3> <p>The objective was to create 2-letter names that convey a relationship between the names of the notes and their position on the staff. We did that using letters that are not present in the English (anglo-saxon) designation. For example, the note Do# (C#) is called Ka (K) because it shares the same position as La (A) (ie. 
both notes are above a line).</p> <p>Following this logic, the 12 notes can be written as: Do / Ka / Ré / To(l) / Mi / Fa / Hu / So(l) / Pi / La / Vé / Si.</p> <p>In English, we only use the first letters, which gives us the following sequence: C / K / D / T / E / F / H / G / P / A / V / B.</p> <p><a href="https://www.dodekamusic.com" target="_blank" rel="noreferrer">Dodeka music</a></p> <p><img src="./dodeka-keys.jpg" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/dodeka-keys.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Notes]]></title> <link>https://chromatone.center/theory/notes/</link> <guid>https://chromatone.center/theory/notes/</guid> <pubDate>Sat, 18 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Historical and culturological research on different notation systems]]></description> <content:encoded><![CDATA[<YoutubeEmbed video="Eq3bUFgEcb4" /><p>Let's first learn about <a href="./temperaments/">Temperaments</a> as ways to divide an octave into distinct notes to play. Then let's find out how these notes got their names with <a href="./solmization/">Solmization</a> and how they were written down in <a href="./national/">National notation systems</a>. The West-European tradition and its <a href="./staff/">Classic staff notation</a> is one of the most influential worldwide, but still there are plenty of ancient and modern <a href="./alternative/">Alternative notation systems</a> to explore and experiment with. There's a whole world of <a href="./computer/">Computer notation</a> in the digital realm.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/Grand_staff.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Parsons code]]></title> <link>https://chromatone.center/theory/notes/alternative/parsons/</link> <guid>https://chromatone.center/theory/notes/alternative/parsons/</guid> <pubDate>Fri, 17 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[A simple notation used to identify a piece of music through melodic motion]]></description> <content:encoded><![CDATA[<p>The Parsons code, formally named the Parsons code for melodic contours, is a simple notation used to identify a piece of music through melodic motion — movements of the pitch up and down. Denys Parsons (father of Alan Parsons) developed this system for his 1975 book The Directory of Tunes and Musical Themes. Representing a melody in this manner makes it easier to index or search for pieces, particularly when the notes' values are unknown. Parsons covered around 15,000 classical, popular and folk pieces in his dictionary.
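</p> <p>Because the code only records whether each successive note moves up, moves down, or repeats (the *, u, d and r symbols explained below), it can be derived mechanically from a list of pitches. A minimal sketch, using an illustrative rendering of the "Ode to Joy" opening as MIDI note numbers:</p> <pre><code># Python sketch: derive a Parsons code from a list of MIDI note numbers
def parsons(notes):
    code = "*"  # the first note only serves as a reference
    for prev, curr in zip(notes, notes[1:]):
        if curr > prev:
            code += "u"   # up
        elif curr == prev:
            code += "r"   # repeat
        else:
            code += "d"   # down
    return code

# E E F G G F E D C C D E E D D (opening phrase of "Ode to Joy")
print(parsons([64, 64, 65, 67, 67, 65, 64, 62, 60, 60, 62, 64, 64, 62, 62]))
# -> *ruurddddruurdr, matching the contour *RUURDDDDRUURDR shown below
</code></pre> <p>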
In the process he found out that *UU is the most popular opening contour, used in 23% of all the themes, a pattern that holds across all the genres.</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>Parsons Code of Ode to Joy</span></span> <span class="line"><span></span></span> <span class="line"><span> Parsons$ ./contour *RUURDDDDRUURDR</span></span> <span class="line"><span> *-*</span></span> <span class="line"><span> / \</span></span> <span class="line"><span> * *</span></span> <span class="line"><span> / \</span></span> <span class="line"><span> *-* * *-*</span></span> <span class="line"><span> \ / \</span></span> <span class="line"><span> * * *-*</span></span> <span class="line"><span> \ /</span></span> <span class="line"><span> *-*</span></span></code></pre> </div><p>The first note of a melody is denoted with an asterisk (*), although some Parsons code users omit the first note. All succeeding notes are denoted with one of three letters to indicate the relationship of their pitch to the previous note:</p> <ul> <li>* = first tone as reference,</li> <li>u = "up", for when the note is higher than the previous note,</li> <li>d = "down", for when the note is lower than the previous note,</li> <li>r = "repeat", for when the note has the same pitch as the previous note.</li> </ul> <p><a href="https://www.musipedia.org/melodic_contour.html" target="_blank" rel="noreferrer">Search a melody by its Parsons code at Musipedia</a></p> <h3 id="some-examples" tabindex="-1">Some examples <a class="header-anchor" href="#some-examples" aria-label="Permalink to "Some examples""></a></h3> <ul> <li>Ode to Joy: *RUURDDDDRUURDR</li> <li>"Twinkle Twinkle Little Star": *rururddrdrdrdurdrdrdurdrdrddrururddrdrdrd</li> <li>"Silent Night": *udduuddurdurdurudddudduruddduddurudduuddduddd</li> <li>"Aura Lea" ("Love Me Tender"): *uduududdduu</li> <li>"White Christmas": *udduuuu</li> <li>First verse in Madonna's "Like a Virgin": *rrurddrdrrurdudurrrrddrduuddrdu</li> </ul> <p>There are <a href="http://ismir2003.ismir.net/papers/Uitdenbogerd.pdf" target="_blank" rel="noreferrer">studies</a> showing that despite its simplicity, Parsons code is still too hard for non-musicians to formulate and interpret for melody search. Yet it may be useful for more skilled musicians, though audio-based search is becoming more widely adopted.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Pentatonic scales]]></title> <link>https://chromatone.center/theory/scales/pentatonic/</link> <guid>https://chromatone.center/theory/scales/pentatonic/</guid> <pubDate>Thu, 16 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[5 very consonant notes to play easily together]]></description> <content:encoded><![CDATA[<p>Musicology commonly classifies <a href="https://en.wikipedia.org/wiki/Pentatonic_scale" target="_blank" rel="noreferrer">pentatonic scales</a> as either hemitonic or anhemitonic. Hemitonic scales contain one or more semitones and anhemitonic scales do not contain semitones.</p> <h2 id="major-pentatonic-scale" tabindex="-1">Major pentatonic scale <a class="header-anchor" href="#major-pentatonic-scale" aria-label="Permalink to "Major pentatonic scale""></a></h2> <p>Anhemitonic pentatonic scales can be constructed in many ways. The major pentatonic scale may be thought of as a gapped or incomplete major scale.
However, the pentatonic scale has a unique character and is complete in terms of tonality. One construction takes five consecutive pitches from the circle of fifths; starting on C, these are C, G, D, A, and E. Transposing the pitches to fit into one octave rearranges the pitches into the major pentatonic scale: C, D, E, G, A.</p> <p>Another construction works backward: It omits two pitches from a diatonic scale. If one were to begin with a C major scale, for example, one might omit the fourth and the seventh scale degrees, F and B. The remaining notes then make up the major pentatonic scale: C, D, E, G, and A.</p> <p>Omitting the third and seventh degrees of the C major scale obtains the notes for another transpositionally equivalent anhemitonic pentatonic scale: F, G, A, C, D. Omitting the first and fourth degrees of the C major scale gives a third anhemitonic pentatonic scale: G, A, B, D, E.</p> <p>The black keys on a piano keyboard comprise a G-flat major (or equivalently, F-sharp major) pentatonic scale: G-flat, A-flat, B-flat, D-flat, and E-flat, which is exploited in Chopin's black key étude.</p> <audio class="my-4" controls> <source src="/audio/Frederic_Chopin_-_Opus_10_-_Twelve_Grand_Etudes_-_G_Flat_Major.mp3" type="audio/mpeg"> </audio> <h2 id="minor-pentatonic-scale" tabindex="-1">Minor pentatonic scale <a class="header-anchor" href="#minor-pentatonic-scale" aria-label="Permalink to "Minor pentatonic scale""></a></h2> <p>Although various hemitonic pentatonic scales might be called minor, the term is most commonly applied to the relative minor pentatonic derived from the major pentatonic, using scale tones 1, 3, 4, 5, and 7 of the natural minor scale. (It may also be considered a gapped blues scale.) The C minor pentatonic scale, the relative of the E-flat pentatonic scale is C, E-flat, F, G, B-flat. The A minor pentatonic, the relative minor of C pentatonic, comprises the same tones as the C major pentatonic, starting on A, giving A, C, D, E, G. This minor pentatonic contains all three tones of an A minor triad.</p> <p>The standard tuning of a guitar uses the notes of an E minor pentatonic scale: E-A-D-G-B-E, contributing to its frequency in popular music.</p> <chroma-profile-collection :collection="$frontmatter.pentatonics" /><h2 id="japanese-scales" tabindex="-1">Japanese scales <a class="header-anchor" href="#japanese-scales" aria-label="Permalink to "Japanese scales""></a></h2> <p>The organization of notes to create a musical scale has many different applications in different cultures and types of music. One of the most common approaches to organizing musical structures is known as the Mode or Mode(s). Since the Heian Period, there has been disagreement and contention between musical scholars regarding Japanese music and modal theory. There has long been a debate about Japanese modes and what defines them, to this day there is not a single modal theory that can completely explain Japanese music. Music scales are critical in clarifying and identifying musical pieces, however, there has been no single scale model that can identify all Japanese music into one classification or category of music. In order to be understood by western scholars, The different variations of Japanese modal scales are often compared to the western Major Scale. Various modal theories from around the world have been imported to attempt and analyze Japanese music structure, but often the modal theories suggested do not reflect what is actually present in the music it is being applied to. 
The classical structures of most Japanese music originates in China and was not concerned with developing a universal scale or mode until Western music had been imported. After the Heian period began was when Western modal theories became widely acknowledged by Japanese society, though it often stayed in its own category as it could not entirely explain Japanese music across all its different iterations.</p> <p>The most common version of the Japanese mode is a somewhat inaccurate term for a pentatonic musical scale which is used commonly in traditional Japanese music. The intervals of the scale are major second, minor third, perfect fifth, and minor sixth (for example, the notes A, B, C, E, F and up to A.) - which is essentially a natural minor scale in Western music theory without the subdominant and subtonic, which is the same operation performed on the major scale to produce the pentatonic major scale. The more correct term would be kumoijoshi, as given by William P. Malm for one of the three tuning scales of the koto adapted from shamisen music.</p> <p>In addition to being used almost exclusively in traditional Japanese compositions, it is found frequently in video game music and the pieces of contemporary composers such as Anne Boyd.</p> <chroma-profile-collection :collection="$frontmatter.japanese" />]]></content:encoded> </item> <item> <title><![CDATA[Intervals]]></title> <link>https://chromatone.center/theory/intervals/</link> <guid>https://chromatone.center/theory/intervals/</guid> <pubDate>Wed, 15 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Different kinds of relations between two notes]]></description> <content:encoded><![CDATA[<p><img src="./chromatic.svg" alt="svg"></p> <p>Two notes sounding simultaneously or sequentially form the basic building block for all the emotional expressiveness of music. They form some kind of relationships that bring up some distinct feeling.</p> <p>The most basic is the 1:2 ratio of an <a href="./unison-octave/">Unison and octave</a>, that are foundational for cyclic nature of pitch class space. <a href="./fifth-fourth/">Perfect Fifth and Fourth</a> are the foundational consonances that bring joy of the simple 2:3 and 3:4 ratios. This is the root of the 12 note system as a whole.</p> <p><a href="./third-sixth/">Thirds and Sixth</a> are the imperfect consonances to evoke deeper feelings while the <a href="./second-seventh/">Seconds and Sevenths</a> are the sharper dissonances to spice everything up. We'll explore the process of the <a href="./emancipation/">Emancipation of dissonance</a> to find out how did we get at this point in our pitch pairs interpretation. And how science finally got <a href="./dissonance/">a measure for sound consonance</a>. 
And take a brief look at some mathematical implications of building <a href="./cycles/">interval chains</a>.</p> <youtube-embed video="3sUpoSTy8zw" /><youtube-embed video="cyW5z-M2yzw" />]]></content:encoded> <enclosure url="https://chromatone.center/guang-yang.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[ABC notation]]></title> <link>https://chromatone.center/theory/notes/computer/abc/</link> <guid>https://chromatone.center/theory/notes/computer/abc/</guid> <pubDate>Wed, 15 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[A shorthand form of musical notation for computers]]></description> <content:encoded><![CDATA[<p><a href="https://abcnotation.com/wiki/abc:standard:v2.1" target="_blank" rel="noreferrer">ABC notation</a> is a shorthand form of musical notation for computers. In basic form it uses the letter notation with a–g, A–G, and z, to represent the corresponding notes and rests, with other elements used to place added value on these – sharp, flat, raised or lowered octave, the note length, key, and ornamentation. This form of notation began from a combination of Helmholtz pitch notation and using ASCII characters to imitate standard musical notation (bar lines, tempo marks, etc.) that could facilitate the sharing of music online, and also added a new and simple language for software developers, not unlike other notations designed for ease, such as tablature and solfège.</p> <client-only > <abc-editor /> </client-only> <p><a href="https://abcnotation.com/browseTunes" target="_blank" rel="noreferrer">Browse tunes</a></p> <p>The earlier ABC notation was built on, standardized, and changed by Chris Walshaw to better fit the keyboard and an ASCII character set, with the help and input of others. Originally designed to encode folk and traditional Western European tunes (e.g., from England, Ireland, and Scotland) which are typically single-voice melodies that can be written in standard notation on a single staff line, the extensions by Walshaw and others have opened this up with an increased list of characters and headers in a syntax that can also support metadata for each tune:</p> <ul> <li>The index, when there is more than one tune in a file (X:),</li> <li>the title (T:),</li> <li>the time signature (M:),</li> <li>the default note length (L:),</li> <li>the type of tune (R:),</li> <li>the key (K:) <ul> <li>with the clef (K: clef=[treble|alto|tenor|bass|perc])</li> </ul> </li> </ul> <p>Lines following the key designation represent the tune.</p> <p>After a surge of renewed interest in clarifying some ambiguities in the 2.0 draft and suggestions for new features, serious discussion of a new (and official) standard resumed in 2011, culminating in the release of ABC 2.1 as a new standard in late December 2011.
Chris Walshaw has become involved again and is coordinating the effort to further improve and clarify the language, with plans for topics to be addressed in future versions to be known as ABC 2.2 and ABC 2.3.</p> <youtube-embed video="H8hWKP5cEXE" /><h2 id="more-links" tabindex="-1">More links <a class="header-anchor" href="#more-links" aria-label="Permalink to "More links""></a></h2> <ul> <li><a href="https://abcnotation.com/learn" target="_blank" rel="noreferrer">https://abcnotation.com/learn</a></li> <li><a href="https://www.abcjs.net/" target="_blank" rel="noreferrer">https://www.abcjs.net/</a></li> </ul> ]]></content:encoded> </item> <item> <title><![CDATA[Diatonic scales]]></title> <link>https://chromatone.center/theory/scales/diatonic/</link> <guid>https://chromatone.center/theory/scales/diatonic/</guid> <pubDate>Wed, 15 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The seven 7-notes set rotations]]></description> <content:encoded><![CDATA[<h1 id="all-diatonic-scales-and-modes" tabindex="-1">All diatonic scales and modes <a class="header-anchor" href="#all-diatonic-scales-and-modes" aria-label="Permalink to "All diatonic scales and modes""></a></h1> <youtube-embed video="YJO-Fm7uRX4"></youtube-embed><h2 id="what-is-a-diatonic-scale" tabindex="-1">What Is a Diatonic Scale? <a class="header-anchor" href="#what-is-a-diatonic-scale" aria-label="Permalink to "What Is a Diatonic Scale?""></a></h2> <p>A diatonic scale is a type of musical scale that contains seven notes per octave (the octave being the distance between one note and the following note that also bears its name).</p> <h3 id="tones" tabindex="-1">Tones <a class="header-anchor" href="#tones" aria-label="Permalink to "Tones""></a></h3> <p>Diatonic scales consist of five whole tones, also known as whole steps or the major second, and two half steps (semitones), which are the shortest musical intervals (the distance between tones) in Western music, separated by either two or three tones. A whole step on a piano keyboard represents two keys, while a half step is a single key.</p> <h3 id="letter-names" tabindex="-1">Letter names <a class="header-anchor" href="#letter-names" aria-label="Permalink to "Letter names""></a></h3> <p>Also known as a heptatonic scale in music theory, diatonic scales use all seven letter names, or notes in a sequence. Chords built from the seven notes in each key are called diatonic chords. Tonality, or the system of organizing keys and chords in Western music, has been based on the diatonic system from the Middle Ages to the present day.</p> <h3 id="scales" tabindex="-1">Scales <a class="header-anchor" href="#scales" aria-label="Permalink to "Scales""></a></h3> <p>Diatonic scales include both the major scale, or Ionian mode, which is the most frequently used musical scale, and the natural minor scale, or Aeolian mode, which uses the same notes as the major scale but starts on a different pitch. Both scales are part of the six <a href="https://en.wikipedia.org/wiki/Mode_(music)" target="_blank" rel="noreferrer">“church mode”</a> scales established for religious music during the medieval period, which continue to form the basis for contemporary diatonic scales.</p> <h2 id="tonic-the-root-note" tabindex="-1">Tonic - the root note <a class="header-anchor" href="#tonic-the-root-note" aria-label="Permalink to "Tonic - the root note""></a></h2> <p>These scales always have a tonal center to which all the notes relate and lead to in different ways.
It is called the root note of the scale or tonic.</p> <h2 id="the-7-modes-of-the-diatonic-scale" tabindex="-1">The 7 Modes of the Diatonic Scale <a class="header-anchor" href="#the-7-modes-of-the-diatonic-scale" aria-label="Permalink to "The 7 Modes of the Diatonic Scale""></a></h2> <chroma-profile-collection :collection="diatonic" />]]></content:encoded> </item> <item> <title><![CDATA[Integer notation]]></title> <link>https://chromatone.center/theory/notes/alternative/integer/</link> <guid>https://chromatone.center/theory/notes/alternative/integer/</guid> <pubDate>Tue, 14 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[System that uses numbers to show notes]]></description> <content:encoded><![CDATA[<p>In music, integer notation is the translation of pitch classes and/or interval classes into whole numbers. Thus if C = 0, then C♯ = 1 ... A♯ = 10, B = 11, with "10" and "11" substituted by "t" and "e" in some sources, A and B in others (like the duodecimal numeral system, which also uses "t" and "e", or A and B, for "10" and "11"). This allows the most economical presentation of information regarding post-tonal materials.</p> <p>To avoid the problem of enharmonic spellings, theorists typically represent pitch classes using numbers beginning from zero, with each successively larger integer representing a pitch class that would be one semitone higher than the preceding one, if they were all realised as actual pitches in the same octave. Because octave-related pitches belong to the same class, when an octave is reached, the numbers begin again at zero. This cyclical system is referred to as modular arithmetic and, in the usual case of chromatic twelve-tone scales, pitch-class numbering is regarded as "modulo 12" (customarily abbreviated "mod 12" in the music-theory literature) — that is, every twelfth member is identical.</p> <p>One can map a pitch's fundamental frequency f (measured in hertz) to a real number p using the equation</p> <pre><code>p = 9 + 12 log 2 ( f / 440 Hz ). </code></pre> <p>This creates a linear pitch space in which octaves have size 12, semitones (the distance between adjacent keys on the piano keyboard) have size 1, and middle C (C4) is assigned the number 0 (thus, the pitches on piano are −39 to +48). Indeed, the mapping from pitch to real numbers defined in this manner forms the basis of the MIDI Tuning Standard, which uses the real numbers from 0 to 127 to represent the pitches C−1 to G9 (thus, middle C is 60).</p> <p>To represent pitch classes, we need to identify or "glue together" all pitches belonging to the same pitch class. The result is a cyclical quotient group that musicians call pitch class space and mathematicians call R/12Z. Points in this space can be labelled using real numbers in the range 0 ≤ x < 12. These numbers provide numerical alternatives to the letter names of elementary music theory:</p> <ul> <li>0 = C,</li> <li>1 = C♯/D♭,</li> <li>2 = D,</li> <li>2.5 = Dhalf sharp (quarter tone sharp),</li> <li>3 = D♯/E♭,</li> </ul> <p>and so on. 
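</p> <p>The frequency-to-number mapping given above is easy to check; here is a minimal sketch (the helper names are mine, for illustration only):</p> <pre><code># Python sketch: frequency to pitch number and pitch class, with middle C = 0
import math

def pitch_number(freq_hz):
    # p = 9 + 12 * log2(f / 440 Hz), so middle C (about 261.63 Hz) maps to 0
    return 9 + 12 * math.log2(freq_hz / 440.0)

def pitch_class(freq_hz):
    # fold octaves together: octave-related pitches share one class (mod 12)
    return round(pitch_number(freq_hz)) % 12

print(round(pitch_number(261.626), 2))  # ~0.0 -> middle C (C4)
print(pitch_class(440.0))               # 9    -> A
print(pitch_class(880.0))               # 9    -> A an octave up, same pitch class
</code></pre> <p>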
In this system, pitch classes represented by integers are classes of twelve-tone equal temperament (assuming standard concert A).</p> ]]></content:encoded> </item> <item> <title><![CDATA[Standard pitch notation]]></title> <link>https://chromatone.center/theory/notes/alternative/scientific/</link> <guid>https://chromatone.center/theory/notes/alternative/scientific/</guid> <pubDate>Tue, 14 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[American SPN]]></description> <content:encoded><![CDATA[<p><img src="./Scientific_pitch_notation_octaves_of_C.png" alt=""></p> <p>Scientific pitch notation (SPN), also known as American standard pitch notation (ASPN) and international pitch notation (IPN),[1] is a method of specifying musical pitch by combining a musical note name (with accidental if needed) and a number identifying the pitch's octave.</p> <p>Although scientific pitch notation was originally designed as a companion to scientific pitch (see below), the two are not synonymous. Scientific pitch is a pitch standard—a system that defines the specific frequencies of particular pitches (see below). Scientific pitch notation concerns only how pitch names are notated, that is, how they are designated in printed and written text, and does not inherently specify actual frequencies. Thus, the use of scientific pitch notation to distinguish octaves does not depend on the pitch standard used.</p> <p>The notation makes use of the traditional tone names (A to G) which are followed by numbers showing which octave they are part of.</p> <p>For standard A440 pitch equal temperament, the system begins at a frequency of 16.35160 Hz, which is assigned the value C0.</p> <p>The octave 0 of the scientific pitch notation is traditionally called the sub-contra octave, and the tone marked C0 in SPN is written as ,,C or C,, or CCC in traditional systems, such as Helmholtz notation. Octave 0 of SPN marks the low end of what humans can actually perceive, with the average person being able to hear frequencies no lower than 20 Hz as pitches.</p> <h3 id="use" tabindex="-1">Use <a class="header-anchor" href="#use" aria-label="Permalink to "Use""></a></h3> <p>Scientific pitch notation is often used to specify the range of an instrument. It provides an unambiguous means of identifying a note in terms of textual notation rather than frequency, while at the same time avoiding the transposition conventions that are used in writing the music for instruments such as the clarinet and guitar. It is also easily translated into staff notation, as needed. In describing musical pitches, nominally enharmonic spellings can give rise to anomalies where, for example in meantone temperaments C♭ 4 is a lower frequency than B3; but such paradoxes usually do not arise in a scientific context.</p> <p>Scientific pitch notation avoids possible confusion between various derivatives of Helmholtz notation which use similar symbols to refer to different notes. For example, "c" in Helmholtz's original notation refers to the C below middle C, whereas "C" in ABC Notation refers to middle C itself. With scientific pitch notation, middle C is always C4, and C4 is never any note but middle C. This notation system also avoids the "fussiness" of having to visually distinguish between four and five primes, as well as the typographic issues involved in producing acceptable subscripts or substitutes for them. C7 is much easier to quickly distinguish visually from C8, than is, for example, c′′′′ from c′′′′′, and the use of simple integers (e.g. 
C7 and C8) makes subscripts unnecessary altogether.</p> <p>Although pitch notation is intended to describe sounds audibly perceptible as pitches, it can also be used to specify the frequency of non-pitch phenomena. Notes below E0 or higher than E♭ 10 are outside most humans' hearing range, although notes slightly outside the hearing range on the low end may still be indirectly perceptible as pitches due to their overtones falling within the hearing range.</p> <h2 id="helmholtz-pitch-notation" tabindex="-1">Helmholtz pitch notation <a class="header-anchor" href="#helmholtz-pitch-notation" aria-label="Permalink to "Helmholtz pitch notation""></a></h2> <p>Helmholtz pitch notation is a system for naming musical notes of the Western chromatic scale. Fully described and normalized by the German scientist Hermann von Helmholtz, it uses a combination of upper and lower case letters (A to G), and the sub- and super-prime symbols ( ͵ ′ or ⸜ ⸝) to denote each individual note of the scale. It is one of two formal systems for naming notes in a particular octave, the other being scientific pitch notation.</p> <h3 id="use-1" tabindex="-1">Use <a class="header-anchor" href="#use-1" aria-label="Permalink to "Use""></a></h3> <p>The accenting of the scale in Helmholtz notation always starts on the note C and ends at B (e.g. C D E F G A B). The note C is shown in different octaves by using upper-case letters for low notes, and lower-case letters for high notes, and adding sub-primes and primes in the following sequence: C͵͵ C͵ C c c′ c″ c‴ (or ,,C ,C C c c′ c″ c‴ or C⸜⸜ C⸜ C c c⸝ c⸝⸝ c⸝⸝⸝) and so on.</p> <p>Middle C is designated c′, therefore the octave from middle C upwards is c′–b′.</p> <p>Whole octaves may also be given a name based on "English strokes notation". For example, the octave from c′–b′ is called the one-line octave or (less common) once-accented octave. Correspondingly, the notes in the octave may be called one-lined C (for c′), etc.</p> <p>This diagram gives examples of the lowest and highest note in each octave, giving their name in the Helmholtz system, and the "German method" of octave nomenclature. (The octave below the contra octave is known as the sub-contra octave).</p> <img src="./Helmholtz-pitch-notation.svg"> ]]></content:encoded> <enclosure url="https://chromatone.center/Scientific_pitch_notation_octaves_of_C.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Melodic minor]]></title> <link>https://chromatone.center/theory/scales/melodic/</link> <guid>https://chromatone.center/theory/scales/melodic/</guid> <pubDate>Tue, 14 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The melodic minor scale is essentially a natural minor scale with raised sixth and seventh scale degrees.]]></description> <content:encoded><![CDATA[<p>A great deal of modern jazz harmony arises from the modes of the ascending form of the melodic minor scale, also known as the jazz melodic minor scale. This scale is essentially a diatonic major scale with a lowered third, for example C–D–E♭–F–G–A–B–C. 
As with any other scale, the modes are derived from playing the scale from different root notes, causing a series of jazz scales to emerge.</p> <h2 id="the-7-modes-of-the-melodic-minor-scale" tabindex="-1">The 7 modes of the Melodic Minor Scale <a class="header-anchor" href="#the-7-modes-of-the-melodic-minor-scale" aria-label="Permalink to "The 7 modes of the Melodic Minor Scale""></a></h2> <chroma-profile-collection :collection="melodic" />]]></content:encoded> </item> <item> <title><![CDATA[Piano roll]]></title> <link>https://chromatone.center/theory/notes/computer/piano-roll/</link> <guid>https://chromatone.center/theory/notes/computer/piano-roll/</guid> <pubDate>Mon, 13 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[A common form of music in modern DAWs]]></description> <content:encoded><![CDATA[<p>The Buffalo Convention of December 10, 1908 established two future roll formats for the US-producers of piano rolls for self-playing pianos. The two formats had different punchings of 65 and 88 notes, but the same width (11+1⁄4 inches or 286 millimetres); thus 65-note rolls would be perforated at 6 holes to the inch, and 88-note rolls at 9 holes to the inch, leaving margins at both ends for future developments. This made it possible to play the piano rolls on any self-playing instrument built according to the convention, albeit sometimes with a loss of special functionality. This format became a loose world standard.</p> <p><img src="./PlayerPianoRoll.jpg" alt=""></p> <p><img src="./FL.png" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/PlayerPianoRoll.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Double harmonic]]></title> <link>https://chromatone.center/theory/scales/double/</link> <guid>https://chromatone.center/theory/scales/double/</guid> <pubDate>Mon, 13 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Balanced heptatonic scales with two augmented second intervals]]></description> <content:encoded><![CDATA[<p>The double harmonic major scale is a musical scale with a flattened second and sixth degree. This is also known as Mayamalavagowla, Bhairav Raga, Byzantine scale, Arabic (Hijaz Kar), and Gypsy major. It can be likened to a gypsy scale because of the diminished step between the 1st and 2nd degrees. Arabic scale may also refer to any Arabic mode, the simplest of which, however, to Westerners, resembles the double harmonic major scale.</p> <h2 id="the-7-modes-of-the-double-harmonic-major" tabindex="-1">The 7 modes of the Double harmonic major <a class="header-anchor" href="#the-7-modes-of-the-double-harmonic-major" aria-label="Permalink to "The 7 modes of the Double harmonic major""></a></h2> <chroma-profile-collection :collection="double" /><youtube-embed video="jiAo-ZA7Ijg" /><p>It is referred to as the "double harmonic" scale because it contains two harmonic tetrads featuring augmented seconds. By contrast, both the harmonic major and harmonic minor scales contain only one augmented second, located between their sixth and seventh degrees.</p> <p>The scale contains a built-in tritone substitution, a dominant seventh chord a half step above the root, with strong harmonic movement towards the tonic chord.</p> <p>The double harmonic scale is not commonly used in classical music from Western culture, as it does not closely follow any of the basic musical modes, nor is it easily derived from them. It also does not easily fit into common Western chord progressions such as the authentic cadence. 
This is because it is mostly used as a modal scale, not intended for much movement through chord progressions.</p> <p>The Arabic scale (in the key of E) was used in Nikolas Roubanis's "Misirlou", and in the Bacchanale from the opera Samson and Delilah by Saint-Saëns. Claude Debussy used the scale in "Soirée dans Grenade", "La Puerta del Vino", and "Sérénade interrompue" to evoke Spanish flamenco music or Moorish heritage. In popular music, Ritchie Blackmore of Deep Purple and Rainbow used the scale in pieces such as "Gates of Babylon" and "Stargazer". The Miles Davis jazz standard "Nardis" also makes use of the double harmonic. Opeth used this scale in their song "Bleak" from the album Blackwater Park. Megadeth use the scale in a guitar solo from their song "The Threat Is Real" from their 2015 album Dystopia. It is also used by Hans Zimmer in his score for Dune.</p> <youtube-embed video="n7hSabSnCEg" /><h2 id="symmetry-and-balance" tabindex="-1">Symmetry and balance <a class="header-anchor" href="#symmetry-and-balance" aria-label="Permalink to "Symmetry and balance""></a></h2> <p>The double harmonic scale features radial symmetry, or symmetry around its root, or center note. Breaking up the three note chromaticism and removing this symmetry by sharpening the 2nd or flattening the 7th note respectively by one semitone yields the harmonic major and Phrygian Dominant mode of the harmonic minor scales respectively, each of which, unlike the double harmonic minor scale, has a full diminished chord backbone.</p> <p>This scale (and its modes like the Hungarian minor scale) is the only seven-note scale (in 12-tone equal temperament) that is perfectly balanced; this means that when its pitches are represented as points on a circle (whose full circumference represents an octave), their average position (or "centre of mass") is the centre of the circle.</p> <youtube-embed video="hNHAdf8XOik" /><h2 id="tetrads" tabindex="-1">Tetrads <a class="header-anchor" href="#tetrads" aria-label="Permalink to "Tetrads""></a></h2> <p>The main chords of the double harmonic major are:</p> <p>I7M bII7M iii6 iv7M V7(b5) bVI7M(#5) viisus2add13(b5)</p> <p>There are other possibilities of tetrad:</p> <p>I7M(#5) bII7 bii7M bii7 bii7(b5) III6 iv° V6(b5) bvi°</p> <h2 id="modes" tabindex="-1">Modes <a class="header-anchor" href="#modes" aria-label="Permalink to "Modes""></a></h2> <p>Like all heptatonic (seven-pitch) scales, the double harmonic scale has a mode for each of its individual scale degrees. The most commonly known of these modes is the 4th mode, the Hungarian minor scale, most similar to the harmonic minor scale with a raised 4th degree. 
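</p> <p>As a rough added illustration, the seven modes can be derived mechanically by rotating the semitone step pattern of the double harmonic major scale (assumed here as 1–3–1–2–1–3–1, e.g. C–D♭–E–F–G–A♭–B–C):</p> <pre><code># Rotating the step pattern of the double harmonic major yields its 7 modes.
STEPS = [1, 3, 1, 2, 1, 3, 1]             # semitones between successive degrees

for mode in range(7):
    rotation = STEPS[mode:] + STEPS[:mode]
    print(mode + 1, rotation)             # mode 4 -> [2, 1, 3, 1, 1, 3, 1], the Hungarian minor
</code></pre> <p>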
The modes are as follows:</p> <table tabindex="0"> <thead> <tr> <th>Mode</th> <th>Name of scale</th> <th>Degrees</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Double harmonic major</td> <td>1 ♭2 3 4 5 ♭6 7</td> </tr> <tr> <td>2</td> <td>Lydian ♯2 ♯6</td> <td>1 ♯2 3 ♯4 5 ♯6 7</td> </tr> <tr> <td>3</td> <td>Ultraphrygian</td> <td>1 ♭2 ♭3 ♭4 5 ♭6 ♭♭7</td> </tr> <tr> <td>4</td> <td><a href="https://en.wikipedia.org/wiki/Hungarian_minor_scale" title="Hungarian minor scale" target="_blank" rel="noreferrer">Hungarian/Gypsy minor</a></td> <td>1 2 ♭3 ♯4 5 ♭6 7</td> </tr> <tr> <td>5</td> <td><a href="https://en.wikipedia.org/w/index.php?title=Oriental_mode&action=edit&redlink=1" title="Oriental mode (page does not exist)" target="_blank" rel="noreferrer">Oriental</a></td> <td>1 ♭2 3 4 ♭5 6 ♭7</td> </tr> <tr> <td>6</td> <td>Ionian ♯2 ♯5</td> <td>1 ♯2 3 4 ♯5 6 7</td> </tr> <tr> <td>7</td> <td>Locrian ♭♭3 ♭♭7</td> <td>1 ♭2 ♭♭3 4 ♭5 ♭6 ♭♭7</td> </tr> </tbody> </table> <youtube-embed video="LW6qGy3RtwY" /><h2 id="related-scales" tabindex="-1">Related scales <a class="header-anchor" href="#related-scales" aria-label="Permalink to "Related scales""></a></h2> <p>One of the closest existing scales to the double harmonic major scale is the Phrygian dominant scale, the fifth mode of the harmonic minor scale, as the two are alike save for the Phrygian dominant's flattened seventh degree. The harmonic major scale (also known as major flat 6 and Ionian flat 6) is identical to the standard major scale aside from the sixth scale degree being flattened by a semitone, differing from the double harmonic major in having a natural second degree.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Monochord]]></title> <link>https://chromatone.center/practice/sound/monochord/</link> <guid>https://chromatone.center/practice/sound/monochord/</guid> <pubDate>Sun, 12 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Virtual string for frequency and length ratio explorations]]></description> <content:encoded><![CDATA[<MonoChord style="position: sticky; top: 0;" /><div class="info custom-block"><p class="custom-block-title">INFO</p> <p>Drag the divider to explore the string divisions of a monochord, just like Pythagoras did.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Modulation]]></title> <link>https://chromatone.center/theory/harmony/modulation/</link> <guid>https://chromatone.center/theory/harmony/modulation/</guid> <pubDate>Sun, 12 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Changing keys during the composition]]></description> <content:encoded><![CDATA[<h2 id="tonicization" tabindex="-1">Tonicization <a class="header-anchor" href="#tonicization" aria-label="Permalink to "Tonicization""></a></h2> <p>In music, <a href="https://en.wikipedia.org/wiki/Tonicization" target="_blank" rel="noreferrer">tonicization</a> is the treatment of a pitch other than the overall tonic (the "home note" of a piece) as a temporary tonic in a composition. In Western music that is tonal, the piece is heard by the listener as being in a certain key. A tonic chord has a dominant chord; in the key of C major, the tonic chord is C major and the dominant chord is G major or G dominant seventh. The dominant chord, especially if it is a dominant seventh, is heard by Western composers and listeners familiar with music as resolving (or "leading") to the tonic, due to the use of the leading note in the dominant chord. 
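</p> <p>To make the role of the dominant seventh concrete, here is a small added sketch (assuming pitch classes 0–11 with C = 0 and sharp spellings) that builds the dominant seventh chord of any key:</p> <pre><code># The dominant seventh: a major triad on the fifth degree plus a minor seventh.
NAMES = ['C','C#','D','D#','E','F','F#','G','G#','A','A#','B']

def dominant_seventh(tonic_pc):
    fifth = (tonic_pc + 7) % 12
    return [NAMES[(fifth + i) % 12] for i in (0, 4, 7, 10)]

print(dominant_seventh(0))                # ['G', 'B', 'D', 'F']  -> V7 of C major
print(dominant_seventh(2))                # ['A', 'C#', 'E', 'G'] -> V7 of D, the A7 discussed below
</code></pre> <p>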
A tonicized chord is a chord other than the tonic chord to which a dominant or dominant seventh chord progresses. When a dominant chord or dominant seventh chord is used before a chord other than the tonic, this dominant or dominant seventh chord is called a secondary dominant. When a chord is tonicized, this makes this non-tonic chord sound temporarily like a tonic chord.</p> <h3 id="examples" tabindex="-1">Examples <a class="header-anchor" href="#examples" aria-label="Permalink to "Examples""></a></h3> <p>Using Roman numeral chord analysis, a chord labeled "V/ii" (colloquially referred to as "five of two") would refer to the V chord of a different key; specifically, a key named after the ii chord of the original tonic. This would usually resolve to the ii chord (of the original key). In this situation, the ii has been tonicized.</p> <p>For example, in a piece in the key of C major, the ii chord is D minor, because D is the second scale degree in a C major scale. The D is minor because to construct a triad over D using only the pitches available in the key of C major—i.e. no sharps, no flats—the triad must be minor—the individual notes D, F and A. The V/ii chord is composed of the pitches in a V chord in the key of ii (key of D minor). The pitches used in a V/ii in this example include the notes A, C# and E (creating an A major chord). In the key of D minor, an A major chord is the dominant chord. In the key of C major, C sharp is an accidental. One can often find examples of tonicization by looking for accidentals, as there are always accidentals involved in tonicization. However, it is important to note that the opposite is not true—just because there is an accidental does not mean that it is definitely a case of tonicization.</p> <p>Only major and minor chords may be tonicized. Diminished chords and augmented chords cannot be tonicized because they do not represent stable key areas in Western music. For example, a B minor chord (B, D, F#) occurring in any of its closely related keys may be tonicized with an F# major chord (V/V) because B minor also represents a key area—the key of B minor. However, a B diminished chord (B, D, F) may not be tonicized because "B diminished" could not be a stable key area; there is no key area in Western classical music that has B, D, & F—the pitches that make up the B diminished chord—as the first, third and fifth scale degrees, respectively. This holds true of all diminished and augmented chords.</p> <p>Tonicizations may last for multiple chords. Taking the example given above with the chord progression V/ii → ii, it is possible to extend this sequence backwards. Instead of just V/ii → ii, there could be iv/ii → V/ii → ii (additionally, thinking about the last chord in the sequence: ii, as i/ii, it becomes clear why the phrase "temporary tonic"—see above—is often used in relation to tonicization). Though perceptions vary as a general rule if a chord is treated as the tonic for longer than a phrase before returning to the previous key area, then the treatment is considered a modulation to a new key.</p> <h2 id="modulation" tabindex="-1">Modulation <a class="header-anchor" href="#modulation" aria-label="Permalink to "Modulation""></a></h2> <p>In a song in C major, if a composer treats another key as the tonic (for example, the ii chord, D minor) for a short period by alternating between A7 (the notes A, C#, E and G) and D minor, and then returns to the tonic (C Major), this is a tonicization of the key of D minor. 
However, if a song in C major shifts to the key of D minor and stays in this second, new key for a significant period, then this is usually considered to be a modulation to the new key (in this case, from C major to D minor). In effect, D minor has become the new key of the song.</p> <p>"A secondary dominant is like a miniature modulation; for just an instant, the harmony moves out of the diatonic chords of the key."</p> <p>In music, <a href="https://en.wikipedia.org/wiki/Modulation_(music)" target="_blank" rel="noreferrer">modulation</a> is the change from one tonality (tonic, or tonal center) to another. This may or may not be accompanied by a change in key signature. Modulations articulate or create the structure or form of many pieces, as well as add interest. Treatment of a chord as the tonic for less than a phrase is considered tonicization.</p> <blockquote> <p>Modulation is the essential part of the art. Without it there is little music, for a piece derives its true beauty not from the large number of fixed modes which it embraces but rather from the subtle fabric of its modulation.<br> — Charles-Henri Blainville (1767)</p> </blockquote> <h2 id="requirements" tabindex="-1">Requirements <a class="header-anchor" href="#requirements" aria-label="Permalink to "Requirements""></a></h2> <ul> <li>Harmonic: quasi-tonic, modulating dominant, pivot chord</li> <li>Melodic: recognizable segment of the scale of the quasi-tonic or strategically placed leading-tone</li> <li>Metric & rhythmic: quasi-tonic and modulating dominant on metrically accented beats, prominent pivot chord</li> </ul> <p>The <strong>quasi-tonic</strong> is the tonic of the new key established by the modulation. The modulating dominant is the dominant of the quasi-tonic. The pivot chord is a predominant to the modulating dominant and a chord common to both the keys of the tonic and the quasi-tonic. For example, in a modulation to the dominant, ii/V–V/V–V could be a pivot chord, modulating dominant, and quasi-tonic.</p> <h2 id="types" tabindex="-1">Types <a class="header-anchor" href="#types" aria-label="Permalink to "Types""></a></h2> <h3 id="common-chord-modulation" tabindex="-1">Common-chord modulation <a class="header-anchor" href="#common-chord-modulation" aria-label="Permalink to "Common-chord modulation""></a></h3> <p>Common-chord modulation (also known as diatonic-pivot-chord modulation) moves from the original key to the destination key (usually a closely related key) by way of a chord both keys share: "Most modulations are made smoother by using one or more chords that are common to both keys." For example, G major and D major have four triad chords in common: G major, B minor, D major and E minor.</p> <p>Any chord with the same root note and chord quality (major, minor, diminished) can be used as the pivot chord. Therefore, chords that are not generally found in the style of the piece (for example, major VII chords in a J. S. Bach-style chorale) are also not likely to be chosen as the pivot chord. The most common pivot chords are the predominant chords (ii and IV) in the new key. 
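</p> <p>The shared triads mentioned above can be enumerated directly; an added sketch (assuming diatonic triads stacked in thirds over each degree of a major key, pitch classes 0–11):</p> <pre><code># Candidate pivot chords: triads that two major keys have in common.
MAJOR = [0, 2, 4, 5, 7, 9, 11]

def diatonic_triads(tonic_pc):
    scale = [(tonic_pc + s) % 12 for s in MAJOR]
    return {frozenset(scale[(i + k) % 7] for k in (0, 2, 4)) for i in range(7)}

G, D = 7, 2
shared = diatonic_triads(G) & diatonic_triads(D)
print(len(shared))                        # 4 -> G major, B minor, D major, E minor
</code></pre> <p>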
In analysis of a piece that uses this style of modulation, the common chord is labeled with its function in both the original and the destination keys, as it can be heard either way.</p> <p>Where an altered chord is used as a pivot chord in either the old or new key (or both), this would be referred to as altered common chord modulation, in order to distinguish the chromaticism that would be introduced from the otherwise, diatonic method.</p> <h3 id="enharmonic-modulation" tabindex="-1">Enharmonic modulation <a class="header-anchor" href="#enharmonic-modulation" aria-label="Permalink to "Enharmonic modulation""></a></h3> <p>Modulation from D major to D♭ major in Schubert's Op. 9, No. 14, D. 365, mm. 17–24, using the German sixth, in the new key, that is enharmonic to the dominant seventh in the old key.</p> <p>An enharmonic modulation takes place when one treats a chord as if it were spelled enharmonically as a functional chord in the destination key, and then proceeds in the destination key. There are two main types of enharmonic modulations: dominant seventh/augmented sixth, and (fully) diminished seventh. Any dominant seventh or German sixth can be reinterpreted as the other by respelling the m7 or A6 chord tone (respectively) in order to modulate to a key a half-step away (descending or ascending); if the fifth-from-root chord tone of a German sixth is omitted, the result is an Italian sixth. A diminished seventh chord meanwhile, can be respelled in multiple other ways to form a diminished seventh chord in a key a minor third (m3 as root), tritone (d5 as root) or major sixth (d7 as root) away. Where the dominant seventh is found in all diatonic scales, the diminished seventh is found only in the harmonic scale naturally; an augmented sixth is itself an altered chord, relying on the raised fourth scale degree.</p> <p>By combining the diminished seventh with a dominant seventh and/or augmented sixth, altering only one pivot note (by a half tone), it is possible to modulate quite smoothly from any key to any other in at most three chords, no matter how distant the starting and ending keys (be aware that, only when modulating between key signatures featuring double-sharps/flats, may the need to respell natural notes enharmonically arise); however, this may or may not require the use of altered chords (operating in the harmonic minor without augmented sixth would not) where the effect can be less subtle than other modulations. The following are examples used to describe this in chord progressions starting from the key of D minor (these chords may instead be used in other keys as borrowed chords, such as the parallel major, or other forms of the minor):</p> <ul> <li>C♯–E–G–B♭ (dim. 7th), C–E–G–B♭ (lowering the root a semitone to a modulating dom. 7th), F–A–C (quasi-tonic) takes us to F major—a relative major modulation (though not enharmonic); but exactly the same progression enharmonically C♯–E–G–B♭, C–E–G–A♯ (Ger. aug. 6th), E–G–B–E (quasi-tonic) takes us somewhat unexpectedly to E natural/harmonic minor—a half-step modulation (ascending).</li> <li>C♯–E–G–B♭ (dim. 7th), A–C♯–E–G (lowering the 7th a semitone and respelling as a modulating dom. 7th), D–F♯–A (quasi-tonic) takes us to the key of D major—a parallel modulation (though not enharmonic). Enharmonically: C♯–E–G–B♭, A–C♯–E–Fdouble sharp (Ger. aug. 6th), C♯–E–G♯ (quasi-tonic) modulates to C♯ minor—a major seventh modulation/half-step descending.</li> <li>C♯–E–G–B♭ (dim. 
7th), C♯–E♭–G–B♭ ≡ E♭–G–B♭–D♭ (lowering the major third a half tone and respelling as a modulating dom. 7th), A♭–C–E♭ (quasi-tonic) leads to A♭ major—a minor third and relative modulation (or tritone modulation if starting in D Major).</li> </ul> <p>Note that in standard voice leading practice, any type of augmented sixth chord favors a resolution to the dominant chord (see: augmented sixth chord), with the exception of the German sixth, where it is difficult to avoid incurring parallel fifths; to prevent this, a cadential six four is commonly introduced before the dominant chord (which would then typically resolve to the tonic to establish tonality in the new key), or an Italian/French sixth is used instead.</p> <p>In short, lowering any note of a diminished seventh chord a half tone leads to a dominant seventh chord (or German sixth enharmonically), the lowered note being the root of the new chord. Raising any note of a diminished seventh chord a half tone leads to a half-diminished seventh chord, the root of which is a whole step above the raised note. This means that any diminished chord can be modulated to eight different chords by simply lowering or raising any of its notes. If also employing enharmonic respelling of the diminished seventh chord, such as that beginning the modulation in the above examples (allowing for three other possible diminished seventh chords in other keys), it quickly becomes apparent the versatility of this combination technique and the wide range of available options in key modulation.</p> <p>This type of modulation is particularly common in Romantic music, in which chromaticism rose to prominence.</p> <p>Other types of enharmonic modulation include the augmented triad (III+) and French sixth (Fr+6). Augmented triad modulation occurs in the same fashion as the diminished seventh, that is, to modulate to another augmented triad in a key: a major third (M3 as root) or minor sixth (A5 as root) away. French augmented sixth (Fr+6) modulation is achieved similarly but by respelling both notes of either the top or bottom major third (i.e. root and major third or diminished fifth and augmented sixth) enharmonically and inverting with the other major third (i.e. diminished fifth and augmented sixth becomes root and major third of the new Fr+6); either choice results in the same chord and key modulation (a tritone away), as the diminished fifth always becomes the new root.</p> <h3 id="common-tone-modulation" tabindex="-1">Common-tone modulation <a class="header-anchor" href="#common-tone-modulation" aria-label="Permalink to "Common-tone modulation""></a></h3> <p>Common-tone modulation uses a sustained or repeated pitch from the old key as a bridge between it and the new key (common tone). Usually, this pitch will be held alone before the music continues in the new key. For example, a held F from a section in B♭ major could be used to transition to F major. This is used, for example, in Schubert's Unfinished Symphony. "If all of the notes in the chord are common to both scales (major or minor), then we call it a common chord modulation. If only one or two of the notes are common, then we call it common tone modulation."</p> <p>Starting from a major chord, for example G major (G–B–D), there are twelve potential goals using a common-tone modulation: G minor, G♯ minor, B♭ major, B major, B minor, C major, C minor, D minor, D major, E♭ major, E major, E minor. 
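</p> <p>That list of goals can be checked by brute force; an added sketch (assuming only major and minor triads are counted, pitch classes 0–11):</p> <pre><code># Common-tone modulation goals from a G major triad: every other major or
# minor triad that shares at least one pitch class with G-B-D.
NAMES = ['C','C#','D','Eb','E','F','F#','G','Ab','A','Bb','B']
source = {7, 11, 2}                       # G B D

goals = []
for root in range(12):
    for quality, third in (('major', 4), ('minor', 3)):
        triad = {root, (root + third) % 12, (root + 7) % 12}
        if triad != source and triad & source:
            goals.append(NAMES[root] + ' ' + quality)

print(len(goals))                         # 12, matching the list above
</code></pre> <p>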
Thus common-tone modulations are convenient for modulation by diatonic or chromatic third.</p> <h3 id="chromatic-modulation" tabindex="-1">Chromatic modulation <a class="header-anchor" href="#chromatic-modulation" aria-label="Permalink to "Chromatic modulation""></a></h3> <p>A chromatic modulation is so named because it occurs at the point of a chromatic progression, one which involves the chromatic inflection of one or more notes whose letter name, thus, remains the same though altered through an accidental. Chromatic modulations are often between keys which are not closely related. A secondary dominant or other chromatically altered chord may be used to lead one voice chromatically up or down on the way to the new key. (In standard four-part chorale-style writing, this chromatic line will most often be in one voice.) For example, a chromatic modulation from C major to D minor.</p> <p>In this case, the IV chord in C major (F major) would be spelled F–A–C, the V/ii chord in C major (A major) spelled A–C♯–E, and the ii chord in C major (D minor), D–F–A. Thus the chromaticism, C–C♯–D, along the three chords; this could easily be part-written so those notes all occurred in one voice. Despite the common chord (ii in C major or i in D minor), this modulation is chromatic due to this inflection.</p> <p>The consonant triads for chromatic modulation are ♭III, ♭VI, ♭II, ♯iv, vii, and ♭VII in major, and ♮iii, ♮vi, ♭II, ♯iv, ii, and ♮vii in minor.</p> <p>In the example pictured, a chromatic modulation from F major to D minor.</p> <p>In this case, the V chord in F major (C major) would be spelled C–E–G, the V in D minor (A major) would be spelled A–C♯–E. Thus the chromaticism, C–C♯–D, which is here split between voices but may often easily be part-written so that all three notes occur in one voice.</p> <p>The combination of chromatic modulation with enharmonic modulation in late Romantic music led to extremely complex progressions in the music of such composers as César Franck, in which two or three key shifts may occur in the space of a single bar, each phrase ends in a key harmonically remote from its beginning, and great dramatic tension is built while all sense of underlying tonality is temporarily in abeyance. Good examples are to be found in the opening of his Symphony in D minor, of which he himself said (see Wikiquote) "I dared much, but the next time, you will see, I will dare even more..."; and his Trois Chorals for organ, especially the first and third of these, indeed fulfill that promise.</p> <h3 id="phrase-modulation" tabindex="-1">Phrase modulation <a class="header-anchor" href="#phrase-modulation" aria-label="Permalink to "Phrase modulation""></a></h3> <p>Phrase (also called direct, static, or abrupt) modulation is a modulation in which one phrase ends with a cadence in the original key, and the next phrase begins in the destination key without any transition material linking the two keys. 
This type of modulation is frequently done to a closely related key—particularly the dominant or the relative major/minor key.</p> <p>An unprepared modulation is a modulation "without any harmonic bridge", characteristic of impressionism.</p> <p>For example:</p> <pre><code>         A   E   A   F   B♭  F
A major  I   V   I
F major              I   IV  I
</code></pre> <h3 id="sequential-modulation" tabindex="-1">Sequential modulation <a class="header-anchor" href="#sequential-modulation" aria-label="Permalink to "Sequential modulation""></a></h3> <p>"A passage in a given key ending in a cadence might be followed by the same passage transposed (up or down) to another key," this being known as sequential modulation. Although a sequence does not have to modulate, it is also possible to modulate by way of a sequence. A sequential modulation is also called rosalia. The sequential passage will begin in the home key, and may move either diatonically or chromatically. Harmonic function is generally disregarded in a sequence, or, at least, it is far less important than the sequential motion. For this reason, a sequence may end at a point that suggests a different tonality than the home key, and the composition may continue naturally in that key.</p> <h3 id="chain-modulation" tabindex="-1">Chain modulation <a class="header-anchor" href="#chain-modulation" aria-label="Permalink to "Chain modulation""></a></h3> <p>Distant keys may be reached sequentially through closely related keys by chain modulation, for example C to G to D or C to C minor to E♭ major. A common technique is the addition of the minor seventh after each tonic is reached, thus turning it into a dominant seventh chord:</p> <pre><code>D → D7 G → G7 C → C7 F
I → V7 I → V7 I → V7 I
</code></pre> <h3 id="changes-between-parallel-keys" tabindex="-1">Changes between parallel keys <a class="header-anchor" href="#changes-between-parallel-keys" aria-label="Permalink to "Changes between parallel keys""></a></h3> <p>Since modulation is defined as a change of tonic (tonality or tonal center), the change between minor and its parallel major or the reverse is technically not a modulation but a change in mode. Major tonic harmony that concludes music in minor contains what is known as a Picardy third. Any harmony associated with the minor mode in the context of major musical passages is often referred to as a borrowed chord, which creates mode mixture.</p> <h3 id="common-modulations" tabindex="-1">Common modulations <a class="header-anchor" href="#common-modulations" aria-label="Permalink to "Common modulations""></a></h3> <p>The most common modulations are to closely related keys (I, V, IV, vi, iii, ii). V (dominant) is the most frequent goal and, in minor, III (relative key) is also a common goal. Modulation to the dominant or the subdominant is relatively simple as they are adjacent steps on the circle of fifths. Modulations to the relative major or minor are also simple, as these keys share all pitches in common. 
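</p> <p>An added sketch of the circle of fifths itself (assuming 12-tone equal temperament, pitch classes 0–11 with C = 0), which shows why the dominant and subdominant are the adjacent steps:</p> <pre><code># Walking the circle of fifths: adding 7 semitones per step visits all 12 keys.
NAMES = ['C','Db','D','Eb','E','F','Gb','G','Ab','A','Bb','B']

def circle_of_fifths(start_pc=0):
    return [NAMES[(start_pc + 7 * i) % 12] for i in range(13)]

print(circle_of_fifths(2))   # ['D','A','E','B','Gb','Db','Ab','Eb','Bb','F','C','G','D']
</code></pre> <p>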
Modulation to distantly related keys is often done smoothly through using chords in successive related keys, such as through the circle of fifths, the entirety of which may be used in either direction:</p> <pre><code>D – A – E – B/C♭ – F♯/G♭ – C♯/D♭ – G♯/A♭ – D♯/E♭ – A♯/B♭ – F – C – G – D
</code></pre> <p>If a given key were G major, the following chart could be used:</p> <pre><code>C — G — D
</code></pre> <p>From G (which is the given key), a musician would go P5 (a perfect fifth) above G (which is D) and also P5 below G (which is C).</p> <p>From this, the musician would go to G major's relative minor which is E minor, and potentially to C major and D major's related minor as well (a musician who does not know the related minor for C and D major may also go P5 below or above E minor).</p> <pre><code>C  — G  — D
|    |    |
Am   Em   Bm
</code></pre> <p>By using the relative minor keys one can find the specific keys that the given key can modulate into.</p> <p>Many musicians use the circle of fifths to find these keys and make similar charts to help with the modulation.</p> <youtube-embed video="epqYft12nV4" /><h2 id="significance" tabindex="-1">Significance <a class="header-anchor" href="#significance" aria-label="Permalink to "Significance""></a></h2> <p>In certain classical music forms, a modulation can have structural significance. In sonata form, for example, a modulation separates the first subject from the second subject. Frequent changes of key characterize the development section of sonatas. Moving to the subdominant is a standard practice in the trio section of a march in a major key, while a minor march will typically move to the relative major.</p> <p>Changes of key may also represent changes in mood. In many genres of music, moving from a lower key to a higher one often indicates an increase in energy.</p> <p>Change of key is not possible in the full chromatic or the twelve tone technique, as the modulatory space is completely filled; i.e., if every pitch is equal and ubiquitous there is nowhere else to go. Thus other differentiating methods are used, most importantly ordering and permutation. However, certain pitch formations may be used as a "tonic" or home area.</p> <h2 id="other-types" tabindex="-1">Other types <a class="header-anchor" href="#other-types" aria-label="Permalink to "Other types""></a></h2> <p>Though modulation generally refers to changes of key, any parameter may be modulated, particularly in music of the 20th and 21st century. 
Metric modulation (known also as tempo modulation) is the most common, while timbral modulation (gradual changes in tone color), and spatial modulation (changing the location from which sound occurs) are also used.</p> <p>Modulation may also occur from a single tonality to a polytonality, often by beginning with a duplicated tonic chord and modulating the chords in contrary motion until the desired polytonality is reached.</p> <h2 id="chromatic-mediant-changes" tabindex="-1">Chromatic mediant changes <a class="header-anchor" href="#chromatic-mediant-changes" aria-label="Permalink to "Chromatic mediant changes""></a></h2> <p><a href="https://youtu.be/mjFACmJ5ON8" target="_blank" rel="noreferrer">https://youtu.be/mjFACmJ5ON8</a></p> ]]></content:encoded> </item> <item> <title><![CDATA[MIDI]]></title> <link>https://chromatone.center/theory/notes/computer/midi/</link> <guid>https://chromatone.center/theory/notes/computer/midi/</guid> <pubDate>Sun, 12 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Standard for digital music communication]]></description> <content:encoded><![CDATA[<p>At the 1983 Winter NAMM Show, Smith demonstrated a MIDI connection between Prophet 600 and Roland JP-6 synthesizers. The MIDI specification was published in August 1983. The MIDI standard was unveiled by Kakehashi and Smith, who received Technical Grammy Awards in 2013 for their work. In 1982, the first instruments were released with MIDI, the Roland Jupiter-6 and the Prophet 600. In 1983, the first MIDI drum machine, the Roland TR-909, and the first MIDI sequencer, the Roland MSQ-700 were released. The first computer to support MIDI, the NEC PC-88 and PC-98, was released in 1982. The MIDI standard connected ground-breaking hardware like Yamaha’s DX7 synthesiser and Roland’s TR-909 drum machine.</p> <p><img src="./midi-notes.jpg" alt=""></p> <p>A MIDI message is an instruction that controls some aspect of the receiving device. A MIDI message consists of a status byte, which indicates the type of the message, followed by up to two data bytes that contain the parameters. MIDI messages can be channel messages sent on only one of the 16 channels and monitored only by devices on that channel, or system messages that all devices receive. Each receiving device ignores data not relevant to its function.There are five types of message: Channel Voice, Channel Mode, System Common, System Real-Time, and System Exclusive.</p> <p><img src="./midi_data.gif" alt=""></p> <p>Channel Voice messages transmit real-time performance data over a single channel. Examples include "note-on" messages which contain a MIDI note number that specifies the note's pitch, a velocity value that indicates how forcefully the note was played, and the channel number; "note-off" messages that end a note; program change messages that change a device's patch; and control changes that allow adjustment of an instrument's parameters. MIDI notes are numbered from 0 to 127 assigned to C−1 to G9. This corresponds to a range of 8.175799 to 12543.85 Hz (assuming equal temperament and 440 Hz A4) and extends beyond the 88 note piano range from A0 to C8. Middle C has the number 60. 
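</p> <p>The quoted numbers follow from the standard equal-temperament conversion between MIDI note numbers and frequency; an added sketch (assuming A4 = 440 Hz at note 69):</p> <pre><code># MIDI note number to frequency under 12-TET with A4 = 440 Hz (note 69).
def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_hz(0), 6))            # 8.175799   (note 0, C-1)
print(round(midi_to_hz(60), 3))           # 261.626    (note 60, middle C)
print(round(midi_to_hz(127), 2))          # 12543.85   (note 127, G9)
</code></pre> <p>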
A4 (A440) – 69.</p> <p><img src="./GM_Standard_Drum_Map_on_the_keyboard.svg" alt="svg"></p> <h2 id="midi-clock" tabindex="-1">MIDI clock <a class="header-anchor" href="#midi-clock" aria-label="Permalink to "MIDI clock""></a></h2> <p>MIDI beat clock, or simply MIDI clock, is a clock signal that is broadcast via MIDI to ensure that several MIDI-enabled devices such as a synthesizer or music sequencer stay in synchronization. Clock events are sent at a rate of 24 pulses per quarter note. Those pulses are used to maintain a synchronized tempo for synthesizers that have BPM-dependent voices and also for arpeggiator synchronization.</p> <p>MIDI beat clock differs from MIDI timecode in that MIDI beat clock is tempo-dependent.</p> <p>Location information can be specified using MIDI Song Position Pointer (SPP, see below), although many simple MIDI devices ignore this message. Messages</p> <p>MIDI beat clock defines the following real-time messages:</p> <ul> <li>clock (decimal 248, hex 0xF8)</li> <li>start (decimal 250, hex 0xFA)</li> <li>continue (decimal 251, hex 0xFB)</li> <li>stop (decimal 252, hex 0xFC)</li> </ul> <p>MIDI also specifies a System Common message called Song Position Pointer (SPP). SPP can be used in conjunction with the above realtime messages for complete sync. This message consists of 3 bytes; a status byte (decimal 242, hex 0xF2), followed by two 7-bit data bytes (least significant byte first) forming a 14-bit value which specifies the number of "MIDI beats" (1 MIDI beat = a 16th note = 6 clock pulses) since the start of the song. This message only needs to be sent once if a jump to a different position in the song is needed. Thereafter only realtime clock messages need to be sent to advance the song position one tick at a time.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/GM_Standard_Drum_Map_on_the_keyboard.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Symmetrical scales]]></title> <link>https://chromatone.center/theory/scales/symmetrical/</link> <guid>https://chromatone.center/theory/scales/symmetrical/</guid> <pubDate>Sun, 12 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Modes of limited transpostions and interval cycles]]></description> <content:encoded><![CDATA[<p>Asymmetric scales are "far more common" than symmetric scales and this may be accounted for by the inability of symmetric scales to possess the property of uniqueness (containing each interval class a unique number of times) which assists with determining the location of notes in relation to the first note of the scale.</p> <p>Modes of limited transposition are musical modes or scales that fulfill specific criteria relating to their symmetry and the repetition of their interval groups. These scales may be transposed to all twelve notes of the chromatic scale, but at least two of these transpositions must result in the same pitch classes, thus their transpositions are "limited". They were compiled by the French composer Olivier Messiaen, and published in his book La technique de mon langage musical ("The Technique of my Musical Language").</p> <blockquote> <p>Based on our present chromatic system, a tempered system of 12 sounds, these modes are formed of several symmetrical groups, the last note of each group always being common with the first of the following group. 
At the end of a certain number of chromatic transpositions which varies with each mode, they are no longer transposable, giving exactly the same notes as the first.</p> </blockquote> <p>Messiaen found ways of employing all of the modes of limited transposition harmonically, melodically, and sometimes polyphonically. The whole-tone and octatonic scales have enjoyed quite widespread use since the turn of the 20th century, particularly by Debussy (the whole-tone scale) and Stravinsky (the octatonic scale).</p> <p>The symmetry inherent in these modes (which means no note can be perceived as the tonic), together with certain rhythmic devices, Messiaen described as containing "the charm of impossibilities".</p> <p>The composer Tōru Takemitsu made frequent use of Messiaen's modes, particularly the third mode.</p> <h2 id="definition-by-chromatic-transposition" tabindex="-1">Definition by chromatic transposition <a class="header-anchor" href="#definition-by-chromatic-transposition" aria-label="Permalink to "Definition by chromatic transposition""></a></h2> <p>Transposing the diatonic major scale up in semitones results in a different set of notes being used each time. For example, C major consists of C, D, E, F, G, A, B, and the scale a semitone higher (D♭ major) consists of D♭, E♭, F, G♭, A♭, B♭, C. By transposing D♭ major up another semitone, another new set of notes (D major) is produced, and so on, giving 12 different diatonic scales in total. When transposing a mode of limited transposition this is not the case. For example, the mode of limited transposition that Messiaen labelled "Mode 1", which is the whole tone scale, contains the notes C, D, E, F♯, G♯, A♯; transposing this mode up a semitone produces C♯, D♯, F, G, A, B. Transposing this up another semitone produces D, E, F♯, G♯, A♯, C, which is the same set of notes as the original scale. Since transposing the mode up a whole tone produces the same set of notes, mode 1 has only 2 transpositions.</p> <p>Any scale having 12 different transpositions is not a mode of limited transposition.</p> <h2 id="definition-by-shifting-modal-degrees" tabindex="-1">Definition by shifting modal degrees <a class="header-anchor" href="#definition-by-shifting-modal-degrees" aria-label="Permalink to "Definition by shifting modal degrees""></a></h2> <p>Consider the intervals of the major scale: tone, tone, semitone, tone, tone, tone, semitone. Starting the scale on a different degree will always create a new mode with individual interval layouts—for example starting on the second degree of a major scale gives the "Dorian mode"—tone, semitone, tone, tone, tone, semitone, tone. This is not so of the modes of limited transposition, which can be modally shifted only a limited number of times. For example, mode 1, the whole tone scale, contains the intervals tone, tone, tone, tone, tone, tone. Starting on any degree of the mode gives the same sequence of intervals, and therefore the whole tone scale has only 1 mode. Messiaen's mode 2, or the diminished scale, consists of semitone, tone, semitone, tone, semitone, tone, semitone, tone, which can be arranged only 2 ways, starting with either a tone or a semitone. 
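</p> <p>Both definitions can be verified mechanically; an added sketch (assuming pitch classes 0–11 and the octatonic form of mode 2) counts the distinct transpositions and distinct modes:</p> <pre><code># Counting distinct transpositions of a pitch-class set and distinct modes
# (rotations) of its step pattern.
def transpositions(pcs):
    return len({frozenset((p + t) % 12 for p in pcs) for t in range(12)})

def modes(steps):
    return len({tuple(steps[i:] + steps[:i]) for i in range(len(steps))})

octatonic = [0, 1, 3, 4, 6, 7, 9, 10]     # Messiaen's mode 2
print(transpositions(octatonic))          # 3
print(modes([1, 2, 1, 2, 1, 2, 1, 2]))    # 2
</code></pre> <p>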
Therefore mode 2 has two modes.</p> <p>Any scale having the same number of modes as notes is not a mode of limited transposition.</p> <h3 id="whole-tone-scale" tabindex="-1">Whole tone scale <a class="header-anchor" href="#whole-tone-scale" aria-label="Permalink to "Whole tone scale""></a></h3> <p>Interval cycle 2 (C2).</p> <p>A whole-tone scale is a scale in which each note is separated from its neighbors by the interval of a whole tone. In twelve-tone equal temperament, there are only two complementary whole-tone scales, both six-note or hexatonic scales. A single whole tone scale can also be thought of as a "six-tone equal temperament".</p> <p>The whole-tone scale has no leading tone and because all tones are the same distance apart, "no single tone stands out, and the scale creates a blurred, indistinct effect". This effect is especially emphasised by the fact that triads built on such scale tones are all augmented triads. Indeed, all six tones of a whole-tone scale can be played simply with two augmented triads whose roots are a major second apart. Since they are symmetrical, whole-tone scales do not give a strong impression of the tonic or tonality.</p> <p>Whole tone scale was used notably by Bach and Mozart, Glinka and Rimsky-Korsakov. H. C. Colles names as the "childhood of the whole-tone scale" the music of Berlioz and Schubert in France and Austria and then Russians Glinka and Dargomyzhsky. Claude Debussy, who had been influenced by Russians, along with other impressionist composers made extensive use of whole tone scales. Voiles, the second piece in Debussy's first book of Préludes, is almost entirely within one whole-tone scale.</p> <p>The rāga Sahera in Hindustani classical music uses the same intervals as the whole-tone scale.</p> <h3 id="from-second-to-the-seventh-modes" tabindex="-1">From second to the seventh modes <a class="header-anchor" href="#from-second-to-the-seventh-modes" aria-label="Permalink to "From second to the seventh modes""></a></h3> <chroma-profile-collection :collection="limited" />]]></content:encoded> </item> <item> <title><![CDATA[MIDI]]></title> <link>https://chromatone.center/practice/midi/</link> <guid>https://chromatone.center/practice/midi/</guid> <pubDate>Fri, 10 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Seeing digital music information streams]]></description> <content:encoded><![CDATA[<p>MIDI is the global standard for transfering music information. It's a protocol, that consists of small realtime commands, that you can observe with any MIDI-enabled browser and MIDI conroller connected to your device with the <a href="./log/">MIDI Log app</a>.</p> <p><a href="./router/">MIDI Router</a> may help you transfer messages from certain inputs to certain outputs for building even quite complex setups. All the notes played and all the knobs changed are nicely visualised in the <a href="./monitor/">MIDI monitor</a> or drawn on an endless <a href="./roll/">MIDI Roll</a>. And if we loop the roll around a central axis, we get the <a href="./radar/">MIDI radar</a> that spins at the speed of current clock signal and marks all events on the circular timeline.</p> <p>If you've found some nice melodic or harmonic moves and want to save it for later - try the <a href="./recorder/">MIDI Recorder</a>. 
Save the .mid files to your device and analyze them later with the experimental <a href="./visualizer/">MIDI File Visualizer</a>.</p> <h2 id="how-to-output-midi-from-chromatone-web-apps-to-your-daw" tabindex="-1">How to output MIDI from Chromatone web-apps to your DAW <a class="header-anchor" href="#how-to-output-midi-from-chromatone-web-apps-to-your-daw" aria-label="Permalink to "How to output MIDI from Chromatone web-apps to your DAW""></a></h2> <h3 id="_1-set-up-the-midi-driver" tabindex="-1">1. Set up the MIDI Driver <a class="header-anchor" href="#_1-set-up-the-midi-driver" aria-label="Permalink to "1. Set up the MIDI Driver""></a></h3> <ul> <li>On a Mac: <ul> <li>Open the "Audio-Midi Setup" app.</li> <li>In the menu, select Window -> Midi-Studio.</li> <li>Double Click "IAC Driver".</li> <li>Make sure "Device is ready" is checked.</li> </ul> </li> <li>On a Windows PC: Install <a href="http://www.tobias-erichsen.de/software/loopmidi.html" target="_blank" rel="noreferrer">loopMidi</a></li> </ul> <p>Check out <a href="https://help.ableton.com/hc/en-us/articles/209774225-Setting-up-a-virtual-MIDI-bus" target="_blank" rel="noreferrer">Ableton's guide</a> for more information and screenshots (applies to all DAWs).</p> <h3 id="_2-enable-the-midi-device-in-your-daw" tabindex="-1">2. Enable the Midi device in your DAW <a class="header-anchor" href="#_2-enable-the-midi-device-in-your-daw" aria-label="Permalink to "2. Enable the Midi device in your DAW""></a></h3> <p>Depending on the DAW you use, there might be additional steps you have to take.</p> <p>In Ableton Live, open up Preferences -> Link | Tempo | Midi. Under "Midi Ports" there is an entry called "In: IAC Driver". Make sure "Track" and "Remote" is checked.</p> <h3 id="_3-make-sure-midi-out-is-enabled-in-chromatone" tabindex="-1">3. Make sure MIDI OUT is enabled in Chromatone <a class="header-anchor" href="#_3-make-sure-midi-out-is-enabled-in-chromatone" aria-label="Permalink to "3. Make sure MIDI OUT is enabled in Chromatone""></a></h3> ]]></content:encoded> <enclosure url="https://chromatone.center/puk-khantho.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chords]]></title> <link>https://chromatone.center/theory/chords/</link> <guid>https://chromatone.center/theory/chords/</guid> <pubDate>Fri, 10 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Harmonic sets of pitches/frequencies consisting of multiple notes that are heard as if sounding simultaneously]]></description> <content:encoded><![CDATA[<p>For many practical and theoretical purposes, arpeggios and broken chords (in which the notes of the chord are sounded one after the other, rather than simultaneously), or sequences of chord tones, may also be considered as chords in the right musical context.</p> <p>Chords and sequences of chords are frequently used in modern West African and Oceanic music, Western classical music, and Western popular music; yet, they are absent from the music of many other parts of the world.</p> <p>In tonal Western classical music (music with a tonic key or "home key"), the most frequently encountered chords are <a href="./triads/">triads</a>, so called because they consist of three distinct notes: the root note, and intervals of a third and a fifth above the root note. 
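</p> <p>As an added illustration (assuming pitch classes 0–11 with C = 0), a triad is simply a root plus a third and a fifth stacked above it:</p> <pre><code># A triad: root, third (major = 4 semitones, minor = 3) and perfect fifth (7).
def triad(root_pc, quality='major'):
    third = 4 if quality == 'major' else 3
    return [root_pc % 12, (root_pc + third) % 12, (root_pc + 7) % 12]

print(triad(0))                           # [0, 4, 7]  C major: C E G
print(triad(9, 'minor'))                  # [9, 0, 4]  A minor: A C E
</code></pre> <p>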
Chords with more than three notes include <a href="./tetrads/">added tone chords</a>, <a href="./pentads/">extended chords</a>, <a href="./more/">even more extended chords</a> and <a href="./clusters/">tone clusters</a>, which are used in contemporary classical music, jazz and almost any other genre.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/am7-res.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Tetrads]]></title> <link>https://chromatone.center/theory/chords/tetrads/</link> <guid>https://chromatone.center/theory/chords/tetrads/</guid> <pubDate>Fri, 10 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[All musically meaningful combinations of 4 notes]]></description> <content:encoded><![CDATA[<h2 id="seventh-chords" tabindex="-1">Seventh chords <a class="header-anchor" href="#seventh-chords" aria-label="Permalink to "Seventh chords""></a></h2> <p>A seventh chord is a chord consisting of a triad plus a note forming an interval of a seventh above the chord's root. When not otherwise specified, a "seventh chord" usually means a dominant seventh chord: a major triad together with a minor seventh. However, a variety of sevenths may be added to a variety of triads, resulting in many different types of seventh chords.</p> <p>In its earliest usage, the seventh was introduced solely as an embellishing or nonchord tone. The seventh destabilized the triad, and allowed the composer to emphasize movement in a given direction. As time passed and the collective ear of the western world became more accustomed to dissonance, the seventh was allowed to become a part of the chord itself, and in some modern music, jazz in particular, nearly every chord is a seventh chord. Additionally, the general acceptance of equal temperament during the 19th century reduced the dissonance of some earlier forms of sevenths.</p> <p>Most textbooks name these chords formally by the type of triad and type of seventh; hence, a chord consisting of a major triad and a minor seventh above the root is referred to as a major/minor seventh chord. When the triad type and seventh type are identical (i.e. they are both major, minor, or diminished), the name is shortened. For instance, a major/major seventh is generally referred to as a major seventh. This rule is not valid for augmented chords: since the augmented/augmented chord is not commonly used, the abbreviation augmented is used for augmented/minor, rather than augmented/augmented. Additionally, half-diminished stands for diminished/minor, and dominant stands for major/minor. When the type is not specified at all, the triad is assumed to be major, and the seventh is understood as a minor seventh (e.g. a "C" chord is a "C major triad", and a "C7" chord is a "C major/minor seventh chord", also known as a "C dominant seventh chord").</p> <h2 id="tertian" tabindex="-1">Tertian <a class="header-anchor" href="#tertian" aria-label="Permalink to "Tertian""></a></h2> <p>The most common chords are tertian, constructed using a sequence of major thirds (spanning 4 semitones) and/or minor thirds (3 semitones). Since there are 3 third intervals in a seventh chord (4 notes) and each can be major or minor, there are 8 possible combinations, however, only seven of them are commonly found in western music. The augmented augmented seventh chord, defined by a root, a major third, an augmented fifth, and an augmented seventh (i.e., a sequence of 3 major thirds, such as C–E–G♯–B♯), is a rarely used tertian seventh chord. 
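</p> <p>The eight combinations can be listed by stacking three thirds above a root; an added sketch (major third = 4 semitones, minor third = 3):</p> <pre><code># All 2^3 = 8 stacks of three thirds: the candidate tertian seventh chords.
from itertools import product

for thirds in product((4, 3), repeat=3):
    chord, step = [0], 0
    for t in thirds:
        step += t
        chord.append(step)
    print(thirds, chord)   # e.g. (4, 3, 3) -> [0, 4, 7, 10], the dominant seventh
</code></pre> <p>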
The reason is that the augmented seventh interval is enharmonically equivalent to one entire octave (in equal temperament, 3 major thirds = 12 semitones = 1 octave) and is hence perfectly consonant with the chord root.</p> <chroma-profile-collection :collection="tetrads.tertian" /><hr> <h2 id="non-tertian" tabindex="-1">Non-tertian <a class="header-anchor" href="#non-tertian" aria-label="Permalink to "Non-tertian""></a></h2> <p>Seventh chords can also be constructed using augmented or diminished thirds. These chords are not tertian and can be used in non-tertian harmony. There are many (mathematically, 64) chords that can be built; however, only a few of them are used.</p> <chroma-profile-collection :collection="tetrads.nontertian" />]]></content:encoded> <enclosure url="https://chromatone.center/kelly-sikkema.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Unison and octave]]></title> <link>https://chromatone.center/theory/intervals/unison-octave/</link> <guid>https://chromatone.center/theory/intervals/unison-octave/</guid> <pubDate>Fri, 10 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Intervals inside the same pitch class]]></description> <content:encoded><![CDATA[<h2 id="unison-p1" tabindex="-1">Unison P1 <a class="header-anchor" href="#unison-p1" aria-label="Permalink to "Unison P1""></a></h2> <abc-render abc="[A4A4] AA" /><chroma-profile chroma="100000000000" /><p>Unison is two or more musical parts that sound either the same pitch or pitches separated by intervals of one or more octaves, usually at the same time.</p> <p>Unison or perfect unison (also called a prime, or perfect prime) may refer to the (pseudo-)interval formed by a tone and its duplication (in German, Unisono, Einklang, or Prime), for example C–C, as differentiated from the second, C–D, etc. In the unison the two pitches have the ratio of 1:1 or 0 half steps and zero cents. Although two tones in unison are considered to be the same pitch, they are still perceivable as coming from separate sources, whether played on instruments of a different type or of the same type. This is because a pair of tones in unison come from different locations or can have different "colors" (timbres), i.e. come from different musical instruments or human voices. Voices with different colors have, as sound waves, different waveforms. These waveforms have the same fundamental frequency but differ in the amplitudes of their higher harmonics. The unison is considered the most consonant interval while the near unison is considered the most dissonant. The unison is also the easiest interval to tune. The unison is abbreviated as "P1".</p> <h2 id="octave-p8" tabindex="-1">Octave P8 <a class="header-anchor" href="#octave-p8" aria-label="Permalink to "Octave P8""></a></h2> <abc-render abc="[A4a] Aa" /><p>An octave (Latin: octavus: eighth) or perfect octave (sometimes called the diapason) is the interval between one musical pitch and another with double its frequency. The octave relationship is a natural phenomenon that has been referred to as the "basic miracle of music," the use of which is "common in most musical systems." The interval between the first and second harmonics of the harmonic series is an octave.</p> <p>Any two musical notes with fundamental frequencies in a ratio equal to 2<sup>n</sup> (where n is any integer) are perceived as very similar and represent the simplest interval in music – an octave. 
Human pitch perception is periodic so that “color” or chroma of all the notes that are an octave apart seem circularly equivalent and brings them together into one pitch class.</p> <img src="./key-intervals.svg"> <p>To emphasize that it is one of the perfect intervals (including unison, perfect fourth, and perfect fifth), the octave is designated P8.</p> <h2 id="notation" tabindex="-1">Notation <a class="header-anchor" href="#notation" aria-label="Permalink to "Notation""></a></h2> <p>Octaves are identified with various naming systems. Among the most common are the scientific, Helmholtz, organ pipe, and MIDI note systems. In scientific pitch notation, a specific octave is indicated by a numerical subscript number after note name. In this notation, middle C is C4, because of the note's position as the fourth C key on a standard 88-key piano keyboard, while the C an octave higher is C5.</p> <h3 id="octave-equivalence" tabindex="-1">Octave equivalence <a class="header-anchor" href="#octave-equivalence" aria-label="Permalink to "Octave equivalence""></a></h3> <p>After the unison, the octave is the simplest interval in music. The human ear tends to hear both notes as being essentially "the same", due to closely related harmonics. Notes separated by an octave "ring" together, adding a pleasing sound to music. The interval is so natural to humans that when men and women are asked to sing in unison, they typically sing in octave.</p> <img src="./octaves.svg"> <p>For this reason, notes an octave apart are given the same note name in the Western system of music notation—the name of a note an octave above A is also A. This is called octave equivalence, the assumption that pitches one or more octaves apart are musically equivalent in many ways, leading to the convention "that scales are uniquely defined by specifying the intervals within an octave". The conceptualization of pitch as having two dimensions, pitch height (absolute frequency) and pitch class (relative position within the octave), inherently include octave circularity. Thus all C♯s, or all 1s (if C = 0), in any octave are part of the same pitch class.</p> <p>A <a href="https://en.wikipedia.org/wiki/Pitch_class" target="_blank" rel="noreferrer">pitch class</a> (p.c. or pc) is a set of all pitches that are a whole number of octaves apart, e.g., the pitch class A consists of the As in all octaves. "The pitch class A stands for all possible As, in whatever octave position."</p> <blockquote> <p>Although there is no formal upper or lower limit to this sequence, only a few of these pitches are audible to the human ear. Yet we can imagine seeing the 40th octave as the frequency gets to the visual spectrum.</p> </blockquote> <p>Pitch class is important because human pitch-perception is periodic: pitches belonging to the same pitch class are perceived as having a similar quality or color, a property called <strong>"octave equivalence"</strong>.</p> <p>Psychologists refer to the quality of a pitch as its <strong>"chroma"</strong>. A chroma is an attribute of pitches (as opposed to tone height), just like hue is an attribute of color. 
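</p> <p>An added sketch of the same idea in numbers (assuming MIDI note numbering, where notes an octave apart differ by 12 and have a 2:1 frequency ratio): the pitch class, or chroma, is just the note number modulo 12.</p> <pre><code># Octave equivalence: every C, whatever the octave, maps to pitch class 0.
def pitch_class(midi_note):
    return midi_note % 12

print([pitch_class(n) for n in (24, 36, 48, 60, 72)])   # [0, 0, 0, 0, 0]
</code></pre> <p>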
A pitch class is a set of all pitches that share the same chroma, just like "the set of all yellow things" is the collection of all yellow objects.</p> <h2 id="pitch-class-space" tabindex="-1">Pitch class space <a class="header-anchor" href="#pitch-class-space" aria-label="Permalink to "Pitch class space""></a></h2> <p>In music theory, pitch-class space is the circular space representing all the notes (pitch classes) in a musical octave. In this space, there is no distinction between tones that are separated by an integral number of octaves. For example, C4, C5, and C6, though different pitches, are represented by the same point in pitch class space.</p> <p>Since pitch-class space is a circle, we return to our starting point by taking a series of steps in the same direction: beginning with C, we can move "upward" in pitch-class space, through the pitch classes C♯, D, D♯, E, F, F♯, G, G♯, A, A♯, and B, returning finally to C. By contrast, pitch space is a linear space: the more steps we take in a single direction, the further we get from our starting point.</p> <p>Deutsch and Feroe (1981), and Lerdahl and Jackendoff (1983) use a "reductional format" to represent the perception of pitch-class relations in tonal contexts. These two-dimensional models resemble bar graphs, using height to represent a pitch class's degree of importance or centricity. Lerdahl's version uses five levels: the first (highest) contains only the tonic, the second contains tonic and dominant, the third contains tonic, mediant, and dominant, the fourth contains all the notes of the diatonic scale, and the fifth contains the chromatic scale. In addition to representing centricity or importance, the individual levels are also supposed to represent "alphabets" that describe the melodic possibilities in tonal music (Lerdahl 2001, 44–46). The model asserts that tonal melodies will be cognized in terms of one of the five levels a-e:</p> <blockquote> <p>Level a: C C Level b: C G C Level c: C E G C Level d: C D E F G A B C Level e: C D♭ D E♭ E F F♯ G A♭ A B♭ B C</p> <p>(Lerdahl 1992, 113)</p> </blockquote> <p>Note that Lerdahl's model is meant to be cyclical, with its right edge identical to its left. One could therefore display Lerdahl's graph as a series of five concentric circles representing the five melodic "alphabets." In this way one could unite the circular representation depicted at the beginning of this article with Lerdahl's flat two-dimensional representation depicted above.</p> <p>According to David Kopp (2002, 1), "Harmonic space, or tonal space as defined by Fred Lerdahl, is the abstract nexus of possible normative harmonic connections in a system, as opposed to the actual series of temporal connections in a realized work, linear or otherwise."</p> ]]></content:encoded> </item> <item> <title><![CDATA[Scale degrees]]></title> <link>https://chromatone.center/theory/scales/degrees/</link> <guid>https://chromatone.center/theory/scales/degrees/</guid> <pubDate>Fri, 10 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Positions of notes on a scale]]></description> <content:encoded><![CDATA[<p>The term scale degree refers to the position of a particular note on a scale relative to the tonic, the first and main note of the scale from which each octave is assumed to begin. Degrees are useful for indicating the size of intervals and chords and whether they are major or minor.</p> <p>In the most general sense, the scale degree is the number given to each step of the scale, usually starting with 1 for tonic. 
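<p>As a small illustration (a hypothetical Python sketch assuming 12-tone equal temperament and pitch class C = 0), the degrees of a major scale can be numbered up from the tonic like this:</p> <pre><code class="language-python">
from itertools import accumulate

MAJOR_STEPS = [2, 2, 1, 2, 2, 2]  # whole and half steps up to the 7th degree
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale_degrees(tonic_pc):
    """Map scale degrees 1..7 to the note names of the major scale on a tonic."""
    pcs = [tonic_pc] + [(tonic_pc + s) % 12 for s in accumulate(MAJOR_STEPS)]
    return {degree: NOTE_NAMES[pc] for degree, pc in enumerate(pcs, start=1)}

print(major_scale_degrees(0))
# {1: 'C', 2: 'D', 3: 'E', 4: 'F', 5: 'G', 6: 'A', 7: 'B'}
</code></pre>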
In a more specific sense, scale degrees are given names that indicate their particular function within the scale. This definition implies a functional scale, as is the case in tonal music.</p> <scale-degrees /><p>The degrees of the traditional major and minor scales may be identified in several ways:</p> <ul> <li>by their ordinal numbers, as the first, second, third, fourth, fifth, sixth, or seventh degrees of the scale, sometimes raised or lowered;</li> <li>by Arabic numerals (1, 2, 3, 4 …), as in the Nashville Number System, sometimes with carets (scale degree 1, scale degree 2, scale degree 3, scale degree 4 …);</li> <li>by their name according to the movable do solfège system: do, re, mi, fa, so(l), la, and si (or ti);</li> <li>by Roman numerals (I, II, III, IV …);</li> <li>by the English name for their function: <ul> <li>tonic,</li> <li>supertonic,</li> <li>mediant,</li> <li>subdominant,</li> <li>dominant,</li> <li>submediant,</li> <li>subtonic or leading note (leading tone in the United States),</li> <li>and tonic again.</li> </ul> </li> </ul> <p>These names are derived from a scheme where the tonic note is the 'centre'. Then the supertonic and subtonic are, respectively, a second above and below the tonic; the mediant and submediant are a third above and below it; and the dominant and subdominant are a fifth above and below the tonic.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Fifth and fourth]]></title> <link>https://chromatone.center/theory/intervals/fifth-fourth/</link> <guid>https://chromatone.center/theory/intervals/fifth-fourth/</guid> <pubDate>Wed, 08 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Perfect, but not equivalent intervals]]></description> <content:encoded><![CDATA[<h2 id="fifth-p5" tabindex="-1">Fifth P5 <a class="header-anchor" href="#fifth-p5" aria-label="Permalink to "Fifth P5""></a></h2> <abc-render abc="[A4e] Ae" /><chroma-profile :chroma="'100000010000'" /><p>The second most consonant interval is the fifth – 3/2 of any given frequency. Pythagoras is said to have been the first to use this ratio to construct pleasing combinations of musical notes. This principle is foundational for the modern 12-TET equal temperament. Take the lowest starting frequency and go up in two ways:</p> <ul> <li>multiplying it by two – stepping up an octave,</li> <li>and also multiplying by 1.5 – stepping up a fifth at a time.</li> </ul> <img src="./images/circle-of-fifths-exp.svg"> <p>After 7 octaves and 12 fifths you’ll end up on the same starting tone, and along the way each successive fifth will have landed on a new tone. Take those frequencies and divide each by two until it falls into the same octave as the starting frequency, and there you have it – 12 notes in any given octave.</p> <img src="./images/oct-equation.svg"> <p>This equation shows the approximate equality of 12 perfect fifths and 7 octaves. If we use the just 3/2 interval, a small difference remains – the Pythagorean comma.</p> <p>In musical tuning, the Pythagorean comma (or ditonic comma), named after the ancient mathematician and philosopher Pythagoras, is the small interval (or comma) existing in Pythagorean tuning between two enharmonically equivalent notes such as C and B♯, or D♭ and C♯. It is equal to the frequency ratio (3/2)^12 / 2^7 = 531441⁄524288 ≈ 1.01364, or about 23.46 cents, roughly a quarter of a semitone (in between 75:74 and 74:73).</p> <img src="./images/key-intervals.svg"> <h3 id="_12-tone-equal-temperament" tabindex="-1">12-Tone Equal Temperament <a class="header-anchor" href="#_12-tone-equal-temperament" aria-label="Permalink to "12-Tone Equal Temperament""></a></h3> <p>Twelve-tone equal temperament is the musical system that divides the octave into 12 parts, all of which are equally tempered (equally spaced) on a logarithmic scale, with a ratio equal to the 12th root of 2 (¹²√2 ≈ 1.05946). That resulting smallest interval, 1⁄12 the width of an octave, is called a semitone or half step.</p> <img src="./images/tet-fifth-equation.svg" /> <h2 id="perfect-fourth-p4" tabindex="-1">Perfect fourth P4 <a class="header-anchor" href="#perfect-fourth-p4" aria-label="Permalink to "Perfect fourth P4""></a></h2> <abc-render abc="[A4d] Ad" /><chroma-profile :chroma="'100001000000'" /><p>Perfect fourth is the inverse of the perfect fifth.</p> <p>The perfect fourth may be derived from the harmonic series as the interval between the third and fourth harmonics. The term perfect identifies this interval as belonging to the group of perfect intervals, so called because they are neither major nor minor.</p> <p>A perfect fourth in just intonation corresponds to a pitch ratio of 4:3, or about 498 cents, while in equal temperament a perfect fourth is equal to five semitones, or 500 cents.</p> <p>The perfect fourth is a perfect interval like the unison, octave, and perfect fifth, and it is a sensory consonance. In common practice harmony, however, it is considered a stylistic dissonance in certain contexts, namely in two-voice textures and whenever it occurs "above the bass in chords with three or more notes". If the bass note also happens to be the chord's root, the interval's upper note almost always temporarily displaces the third of any chord, and, in the terminology used in popular music, is then called a suspended fourth.</p> <h2 id="quartal-and-quintal-harmony" tabindex="-1">Quartal and quintal harmony <a class="header-anchor" href="#quartal-and-quintal-harmony" aria-label="Permalink to "Quartal and quintal harmony""></a></h2> <p>In music, quartal harmony is the building of harmonic structures from the intervals of the perfect fourth, the augmented fourth and the diminished fourth. For instance, a three-note quartal chord on C can be built by stacking perfect fourths, C–F–B♭.</p> <p>Quintal harmony is harmonic structure that prefers the perfect fifth, the augmented fifth and the diminished fifth. For instance, a three-note quintal chord on C can be built by stacking perfect fifths, C–G–D.</p> <p>Regarding chords built from perfect fourths alone, composer Vincent Persichetti writes that:</p> <blockquote> <p>Chords by perfect fourth are ambiguous in that, like all chords built by equidistant intervals (diminished seventh chords or augmented triads), any member can function as the root. The indifference of this rootless harmony to tonality places the burden of key verification upon the voice with the most active melodic line.</p> </blockquote>
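<p>Coming back to the construction by stacked fifths described at the top of this article, here is a minimal illustrative Python sketch (ours, not part of the site) that stacks twelve just 3/2 fifths, folds each result back into one octave, and exposes the Pythagorean comma:</p> <pre><code class="language-python">
import math

def pythagorean_scale(base=1.0):
    """Stack twelve just fifths (3/2) and octave-reduce each result."""
    ratios = []
    r = base
    for _ in range(12):
        ratios.append(r)
        r *= 3 / 2
        while r >= 2 * base:  # divide by two until back inside the octave
            r /= 2
    return sorted(ratios)

print([round(x, 3) for x in pythagorean_scale()])
# [1.0, 1.068, 1.125, 1.201, 1.266, 1.352, 1.424, 1.5, 1.602, 1.688, 1.802, 1.898]
# twelve distinct ratios – the chromatic scale generated by fifths

# Twelve fifths overshoot seven octaves by the Pythagorean comma:
comma = (3 / 2) ** 12 / 2 ** 7
print(round(comma, 6), round(1200 * math.log2(comma), 2))  # 1.013643 23.46
</code></pre> <p>The 12-TET fifth simply tempers each 3/2 down by one twelfth of this comma, so that the cycle of twelve fifths closes exactly on seven octaves.</p>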
]]></content:encoded> </item> <item> <title><![CDATA[Third and sixth]]></title> <link>https://chromatone.center/theory/intervals/third-sixth/</link> <guid>https://chromatone.center/theory/intervals/third-sixth/</guid> <pubDate>Mon, 06 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The imperfect consonant intervals]]></description> <content:encoded><![CDATA[<h2 id="major-third-m3" tabindex="-1">Major third M3 <a class="header-anchor" href="#major-third-m3" aria-label="Permalink to "Major third M3""></a></h2> <abc-render abc="[A^c] A^c" /><chroma-profile :chroma="'100010000000'" /><p>The major third may be derived from the harmonic series as the interval between the fourth and fifth harmonics. The major third is classed as an imperfect consonance and is considered one of the most consonant intervals after the unison, octave, perfect fifth, and perfect fourth.</p> <p>A major third is slightly different in different musical tunings:</p> <ul> <li>in just intonation it corresponds to a pitch ratio of 5:4 (fifth harmonic in relation to the fourth) or 386.31 cents;</li> <li>in equal temperament, a major third is equal to four semitones, a ratio of 2^1/3:1 (about 1.2599) or 400 cents, 13.69 cents wider than the 5:4 ratio.</li> <li>The older concept of a ditone (two 9:8 major seconds) made a dissonantly wide major third with the ratio 81:64 (408 cents).</li> <li>The septimal major third is 9:7 (435 cents), the undecimal major third is 14:11 (418 cents), and the tridecimal major third is 13:10 (452 cents).</li> </ul> <p>In equal temperament three major thirds in a row are equal to an octave (for example, A♭ to C, C to E, and E to G♯; G♯ and A♭ represent the same note). This is sometimes called the "circle of thirds". In just intonation, however, three 5:4 major thirds stacked together (125:64) fall short of an octave. For example, three 5:4 major thirds up from C give B♯ (C to E to G♯ to B♯), where B♯ = 5^3 / 2^6 = 125 / 64. The difference between this just-tuned B♯ and C, like that between G♯ and A♭, is called the "enharmonic diesis", about 41 cents (the inversion of the 125/64 interval: 128 / 125 = 2^7 / 5^3).</p> <p><img src="./Comparison_of_major_thirds.png" alt=""></p> <h2 id="minor-third-m3" tabindex="-1">Minor third m3 <a class="header-anchor" href="#minor-third-m3" aria-label="Permalink to "Minor third m3""></a></h2> <abc-render abc="[A4c] Ac" /><chroma-profile :chroma="'100100000000'" /><p>A minor third is a musical interval that encompasses three half steps, or semitones. It is called minor because it is the smaller of the two: the major third spans an additional semitone.</p> <p>The minor third may be derived from the harmonic series as the interval between the fifth and sixth harmonics, or from the 19th harmonic. The minor third is also obtainable in reference to a fundamental note from the undertone series, while the major third is obtainable as such from the overtone series.
The 12-TET minor third (300 cents) more closely approximates the nineteenth harmonic with only 2.49 cents error.</p> <ul> <li>A minor third, in just intonation, corresponds to a pitch ratio of 6:5 or 315.64 cents.</li> <li>In an equal tempered tuning, a minor third is equal to three semitones, a ratio of 2^1/4:1 (about 1.189), or 300 cents, 15.64 cents narrower than the 6:5 ratio.</li> </ul> <p>The minor third is commonly used to express sadness in music, and research shows that this mirrors its use in speech, as a tone similar to a minor third is produced during sad speech.</p> <p><img src="./Comparison_of_minor_thirds.png" alt=""></p> <h2 id="minor-sixth-m6" tabindex="-1">Minor sixth m6 <a class="header-anchor" href="#minor-sixth-m6" aria-label="Permalink to "Minor sixth m6""></a></h2> <abc-render abc="[A4f] Af" /><chroma-profile :chroma="'100000001000'" /><p>Minor sixth is the inverse the major third.</p> <p>In the common practice period, sixths were considered interesting and dynamic consonances along with their inverses the thirds.</p> <p>In just intonation multiple definitions of a minor sixth can exist:</p> <ul> <li> <p>In 3-limit tuning, i.e. Pythagorean tuning, the minor sixth is the ratio 128:81, or 792.18 cents, i.e. 7.82 cents flatter than the 12-ET-minor sixth. This is denoted with a "-" (minus) sign (see figure).</p> </li> <li> <p>In 5-limit tuning, a minor sixth most often corresponds to a pitch ratio of 8:5 or 814 cents; i.e. 13.7 cents sharper than the 12-ET-minor sixth.</p> </li> <li> <p>In 11-limit tuning, the 11:7 undecimal minor sixth is 782.49 cents.</p> </li> </ul> <p>In the common practice period, sixths were considered interesting and dynamic consonances along with their inverses the thirds, but in medieval times they were considered dissonances unusable in a stable final sonority. In that period they were tuned to the flatter Pythagorean minor sixth of 128:81. In 5-limit just intonation, the minor sixth of 8:5 is classed as a consonance.</p> <h2 id="major-sixth-m6" tabindex="-1">Major sixth M6 <a class="header-anchor" href="#major-sixth-m6" aria-label="Permalink to "Major sixth M6""></a></h2> <abc-render abc="[A4^f] A^f" /><chroma-profile :chroma="'100000000100'" /><p>Major sixth is the inverse the minor third.</p> <p>The major sixth spans nine semitones. It is a sixth because it encompasses six note letter names (C, D, E, F, G, A) and six staff positions.</p> <p>The major sixth is one of the consonances of common practice music, along with the unison, octave, perfect fifth, major and minor thirds, minor sixth, and (sometimes) the perfect fourth. In the common practice period, sixths were considered interesting and dynamic consonances along with their inverses the thirds. In medieval times theorists always described them as Pythagorean major sixths of 27/16 and therefore considered them dissonances unusable in a stable final sonority. We cannot know how major sixths actually were sung in the Middle Ages. 
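<p>All of the cent values quoted for these thirds and sixths follow from the definition cents = 1200 · log2(ratio); a quick, purely illustrative check in Python:</p> <pre><code class="language-python">
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

print(round(cents(5 / 4), 2))           # 386.31  just major third
print(round(cents(2 ** (4 / 12)), 2))   # 400.0   equal-tempered major third
print(round(cents(6 / 5), 2))           # 315.64  just minor third
print(round(cents(19 / 16), 2))         # 297.51  19th-harmonic minor third
print(round(cents(5 / 3), 2))           # 884.36  just major sixth
print(round(cents(27 / 16), 2))         # 905.87  Pythagorean major sixth
</code></pre>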
In just intonation, the (5/3) major sixth is classed as a consonance of the 5-limit.</p> <p>The nineteenth subharmonic is a major sixth, 32/19 = 902.49 cents.</p> <ul> <li>In just intonation, the most common major sixth is the pitch ratio of 5:3 , approximately 884 cents.</li> <li>In 12-tone equal temperament, a major sixth is equal to nine semitones, exactly 900 cents, with a frequency ratio of the (9/12) root of 2 over 1.</li> <li>Another major sixth is the Pythagorean major sixth with a ratio of 27:16, approximately 906 cents,[4] called "Pythagorean" because it can be constructed from three just perfect fifths (C-A = C-G-D-A = 702+702+702-1200=906). It corresponds to the interval between the 27th and the 16th harmonics.</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/Comparison_of_major_thirds.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Syncopation, swing and groove]]></title> <link>https://chromatone.center/theory/rhythm/groove/</link> <guid>https://chromatone.center/theory/rhythm/groove/</guid> <pubDate>Sun, 05 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The propulsive quality or "feel" of a rhythm and swung notes to serve it]]></description> <content:encoded><![CDATA[<h2 id="syncopation" tabindex="-1">Syncopation <a class="header-anchor" href="#syncopation" aria-label="Permalink to "Syncopation""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Syncopation" target="_blank" rel="noreferrer">Syncopation</a> is a musical term meaning a variety of rhythms played together to make a piece of music, making part or all of a tune or piece of music off-beat. More simply, syncopation is "a disturbance or interruption of the regular flow of rhythm": a "placement of rhythmic stresses or accents where they wouldn't normally occur". It is the correlation of at least two sets of time intervals.</p> <p>Syncopation is used in many musical styles, especially dance music. According to music producer Rick Snoman, "All dance music makes use of syncopation, and it's often a vital element that helps tie the whole track together". In the form of a back beat, syncopation is used in virtually all contemporary popular music.</p> <p>Syncopation can also occur when a strong harmony is simultaneous with a weak beat, for instance, when a 7th-chord is played on the second beat of 3/4 measure or a dominant chord is played at the fourth beat of a 4/4 measure. The latter occurs frequently in tonal cadences for 18th- and early-19th-century music and is the usual conclusion of any section.</p> <youtube-embed video="RuvA4b_2pk0" /><h3 id="types-of-syncopation" tabindex="-1">Types of syncopation <a class="header-anchor" href="#types-of-syncopation" aria-label="Permalink to "Types of syncopation""></a></h3> <p>Technically, "syncopation occurs when a temporary displacement of the regular metrical accent occurs, causing the emphasis to shift from a strong accent to a weak accent". "Syncopation is", however, "very simply, a deliberate disruption of the two- or three-beat stress pattern, most often by stressing an off-beat, or a note that is not on the beat."</p> <youtube-embed video="_uqu-aD9HpU" /><h4 id="suspension" tabindex="-1">Suspension <a class="header-anchor" href="#suspension" aria-label="Permalink to "Suspension""></a></h4> <p>For the following example, there are two points of syncopation where the third beats are sustained from the second beats. 
In the same way, the first beat of the second bar is sustained from the fourth beat of the first bar.</p> <p>Though syncopation may be very complex, dense or complex-looking rhythms often contain no syncopation. However, whether it is a placed rest or an accented note, any point in a piece of music that changes the listener's sense of the downbeat is a point of syncopation because it shifts where the strong and weak accents are built.</p> <h4 id="off-beat-syncopation" tabindex="-1">Off-beat syncopation <a class="header-anchor" href="#off-beat-syncopation" aria-label="Permalink to "Off-beat syncopation""></a></h4> <p>The stress can shift by less than a whole beat, so it occurs on an offbeat, whereas the notes are expected to occur on the beat. Playing a note ever so slightly before, or after, a beat is another form of syncopation because this produces an unexpected accent:</p> <p>It can be helpful to think of a 4/4 rhythm in eighth notes and count it as "1-and-2-and-3-and-4-and". In general, emphasizing the "and" would be considered the off-beat.</p> <h4 id="anticipated-bass" tabindex="-1">Anticipated bass <a class="header-anchor" href="#anticipated-bass" aria-label="Permalink to "Anticipated bass""></a></h4> <p>Anticipated bass is a bass tone that comes syncopated shortly before the downbeat, which is used in Son montuno Cuban dance music. Timing can vary, but it usually occurs on the 2+ and the 4 of the 4/4 time, thus anticipating the third and first beats. This pattern is known commonly as the Afro-Cuban bass tumbao.</p> <h4 id="transformation" tabindex="-1">Transformation <a class="header-anchor" href="#transformation" aria-label="Permalink to "Transformation""></a></h4> <p>Richard Middleton suggests adding the concept of transformation to Narmour's prosodic rules which create rhythmic successions in order to explain or generate syncopations. "The syncopated pattern is heard 'with reference to', 'in light of', as a remapping of, its partner." He gives examples of various types of syncopation: Latin, backbeat, and before-the-beat.</p> <p><strong>Latin equivalent of simple 4/4:</strong> a syncopated rhythm in which the first and fourth beat are provided as expected, but the accent occurs unexpectedly in between the second and third beats, creating a familiar "Latin rhythm" known as tresillo.</p> <p><strong>Backbeat transformation of simple 4/4</strong>: the accent may be shifted from the first to the second beat in duple meter (and the third to fourth in quadruple), creating the backbeat rhythm. Different crowds will "clap along" at concerts either on 1 and 3 or on 2 and 4.</p> <p>This demonstrates how each syncopated pattern may be heard as a remapping, "with reference to" or "in light of", an unsyncopated pattern.</p> <h2 id="swing" tabindex="-1">Swing <a class="header-anchor" href="#swing" aria-label="Permalink to "Swing""></a></h2> <p>The term <a href="https://en.wikipedia.org/wiki/Swing_(jazz_performance_style)" target="_blank" rel="noreferrer">swing</a> has two main uses. Colloquially, it is used to describe the propulsive quality or "feel" of a rhythm, especially when the music prompts a visceral response such as foot-tapping or head-nodding. 
This sense can also be called "groove".</p> <p>The term swing, as well as swung note(s) and swung rhythm, is also used more specifically to refer to a technique (most commonly associated with jazz but also used in other genres) that involves alternately lengthening and shortening the first and second consecutive notes in the two part pulse-divisions in a beat.</p> <h3 id="overview" tabindex="-1">Overview <a class="header-anchor" href="#overview" aria-label="Permalink to "Overview""></a></h3> <p>Like the term "groove", which is used to describe a cohesive rhythmic "feel" in a funk or rock context, the concept of "swing" can be hard to define. Indeed, some dictionaries use the terms as synonyms: "Groovy ... denotes music that really swings." The Jazz in America glossary defines swing as, "when an individual player or ensemble performs in such a rhythmically coordinated way as to command a visceral response from the listener (to cause feet to tap and heads to nod); an irresistible gravitational buoyancy that defies mere verbal definition."</p> <p>When jazz performer Cootie Williams was asked to define it, he joked, "Define it? I'd rather tackle Einstein's theory!" When Louis Armstrong was asked on the Bing Crosby radio show what swing was, he said, "Ah, swing, well, we used to call it syncopation—then they called it ragtime, then blues—then jazz. Now, it's swing. Ha! Ha! White folks, yo'all sho is a mess."</p> <p>Benny Goodman, the 1930s-era bandleader nicknamed the "King of Swing", called swing "free speech in music", whose most important element is "the liberty a soloist has to stand and play a chorus in the way he feels it". His contemporary Tommy Dorsey gave a more ambiguous definition when he proposed that "Swing is sweet and hot at the same time and broad enough in its creative conception to meet every challenge tomorrow may present." Boogie-woogie pianist Maurice Rocco argues that the definition of swing "is just a matter of personal opinion". When asked for a definition of swing, Fats Waller replied, "Lady, if you gotta ask, you'll never know."</p> <blockquote> <p>What is Swing? Perhaps the best answer, after all, was supplied by the hep-cat who rolled her eyes, stared into the far-off and sighed, "You can feel it, but you just can't explain it. Do you dig me?" — Treadwell (1946), p.10</p> </blockquote> <p>Stanley Dance, in The World of Swing, devoted the two first chapters of his work to discussions of the concept of swing with a collection of the musicians who played it. They described a kinetic quality to the music. It was compared to flying; "take off" was a signal to start a solo. The rhythmic pulse continued between the beats, expressed in dynamics, articulation, and inflection. Swing was as much in the music anticipating the beat, like the swing of a jumprope anticipating the jump, as in the beat itself. Swing has been defined in terms of formal rhythmic devices, but according to the Jimmie Lunceford tune, "T'aint whatcha do, it's the way thatcha do it" (say it so it swings).</p> <h3 id="swing-as-a-rhythmic-style" tabindex="-1">Swing as a rhythmic style <a class="header-anchor" href="#swing-as-a-rhythmic-style" aria-label="Permalink to "Swing as a rhythmic style""></a></h3> <p>In swing rhythm, the pulse is divided unequally, such that certain subdivisions (typically either eighth note or sixteenth note subdivisions) alternate between long and short durations. Certain music of the Baroque and Classical era is played using notes inégales, which is analogous to swing. 
In shuffle rhythm, the first note in a pair may be twice (or more) the duration of the second note. In swing rhythm, the ratio of the first note's duration to the second note's duration can take on a range of magnitudes. The first note of each pair is often understood to be twice as long as the second, implying a triplet feel, but in practice the ratio is less definitive and is often much more subtle. In traditional jazz, swing is typically applied to eighth notes. In other genres, such as funk and jazz-rock, swing is often applied to sixteenth notes.</p> <p>In most jazz music, especially of the big band era and later, the second and fourth beats of a 4/4 measure are emphasized over the first and third, and the beats are lead-in—main-beat couplets (dah-DUM, dah-DUM....). The "dah" anticipates, or leads into, the "DUM." The "dah" lead-in may or may not be audible. It may be occasionally accented for phrasing or dynamic purposes.</p> <p>The instruments of a swing rhythm section express swing in different ways from each other, and the devices evolved as the music developed. During the early development of swing music, the bass was often played with lead-in—main-note couplets, often with a percussive sound. Later, the lead-in note was dropped but incorporated into the physical rhythm of the bass player to help keep the beat "solid."</p> <p>Similarly, the rhythm guitar was played with the lead-in beat in the player's physical rhythm but inaudible. The piano was played with a variety of devices for swing. Chord patterns played in the rhythm of a dotted-eight—sixteenth couplet were characteristic of boogie-woogie playing (sometimes also used in boogie-woogie horn section playing). The "swing bass" left hand, used by James P. Johnson, Fats Waller, and Earl Hines, used a bass note on the first and third beats, followed by a mid-range chord to emphasize the second and fourth beats. The lead-in beats were not audible, but expressed in the motion of the left arm.</p> <p>Swing piano also put the first and third beats a role anticipatory to the emphasized second and fourth beats in two-beat bass figures.</p> <p>As swing music developed, the role of the piano in the ensemble changed to emphasize accents and fills; these were often played on the lead-in to the main beat, adding a punch to the rhythm. Count Basie's style was sparse, played as accompaniment to the horn sections and soloists.</p> <p>The bass and snare drums started the swing era as the main timekeepers, with the snare usually used for either lead-ins or emphasis on the second and fourth beats. It was soon found that the high-hat cymbal could add a new dimension to the swing expressed by the drum kit when played in a two-beat "ti-tshhh-SH" figure, with the "ti" the lead-in to the "tshhh" on the first and third beats, and the "SH" the emphasized second and fourth beats.</p> <p>With that high-hat figure, the drummer expressed three elements of swing: the lead-in with the "ti," the continuity of the rhythmic pulse between the beats with the "tshhh," and the emphasis on the second and fourth beats with the "SH". Early examples of that high-hat figure were recorded by the drummer Chick Webb. Jo Jones carried the high-hat style a step further, with a more continuous-sounding "t'shahhh-uhh" two beat figure while reserving the bass and snare drums for accents. 
The changed role of the drum kit away from the heavier style of the earlier drumming placed more emphasis on the role of the bass in holding the rhythm.</p> <p>Horn sections and soloists added inflection and dynamics to the rhythmic toolbox, "swinging" notes and phrases. One of the characteristic horn section sounds of swing jazz was a section chord played with a strong attack, a slight fade, and a quick accent at the end, expressing the rhythmic pulse between beats. That device was used interchangeably or in combination with a slight downward slur between the beginning and the end of the note.</p> <p>Similarly, section arrangements sometimes used a series of triplets, either accented on the first and third notes or with every other note accented to make a 3/2 pattern. Straight eighth notes were commonly used in solos, with dynamics and articulation used to express phrasing and swing. Phrasing dynamics built swing across two or four measures or, in the innovative style of tenor saxophonist Lester Young, across odd sequences of measures, sometimes starting or stopping without regard to place in the measure.</p> <p>The rhythmic devices of the swing era became subtler with bebop. Bud Powell and other piano players influenced by him mostly did away with left-hand rhythmic figures, replacing them with chords. The ride cymbal played in a "ting-ti-ting" pattern took the role of the high-hat, the snare drum was mainly used for lead-in accents, and the bass drum was mainly used for occasional "bombs." But the importance of the lead-in as a rhythmic device was still respected. Drummer Max Roach emphasized the importance of the lead-in, audible or not, in "protecting the beat." Bebop soloists rose to the challenge of keeping a swinging feel in highly sophisticated music often played at a breakneck pace. The groundbreakers of bebop had come of age as musicians with swing and, while breaking the barriers of the swing era, still reflected their swing heritage.</p> <p>Various rhythmic swing approximations:</p> <ul> <li>≈1:1 = eighth note + eighth note, "straight eighths."</li> <li>≈3:2 = long eighth + short eighth.</li> <li>≈2:1 = triplet quarter note + triplet eighth, triple meter;</li> <li>≈3:1 = dotted eighth note + sixteenth note.</li> </ul> <p>The subtler end of the range involves treating written pairs of adjacent eighth notes (or sixteenth notes, depending on the level of swing) as slightly asymmetrical pairs of similar values. On the other end of the spectrum, the "dotted eighth – sixteenth" rhythm, consists of a long note three times as long as the short. Prevalent "dotted rhythms" such as these in the rhythm section of dance bands in the mid-20th century are more accurately described as a "shuffle"; they are also an important feature of baroque dance and many other styles.</p> <p>In jazz, the swing ratio typically lies somewhere between 1:1 and 3:1, and can vary considerably. Swing ratios in jazz tend to be wider at slower tempos and narrower at faster tempos. In jazz scores, swing is often assumed, but is sometimes explicitly indicated. 
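<p>These ratios translate directly into note durations; here is a minimal illustrative sketch (the function name is ours) that splits one beat into a swung pair of eighth notes for a given long-to-short ratio:</p> <pre><code class="language-python">
def swing_pair(beat=1.0, ratio=2.0):
    """Split one beat into a long/short pair of 'eighth notes'.
    ratio 1 gives straight eighths, 2 a triplet feel,
    3 a dotted-eighth/sixteenth shuffle."""
    long_note = beat * ratio / (ratio + 1)
    return long_note, beat - long_note

for r in (1.0, 1.5, 2.0, 3.0):
    print(r, swing_pair(1.0, r))
# prints approximately:
# 1.0 (0.5, 0.5)    straight eighths
# 1.5 (0.6, 0.4)    light swing
# 2.0 (0.67, 0.33)  triplet swing
# 3.0 (0.75, 0.25)  hard shuffle
</code></pre> <p>In written charts this numeric ratio is rarely spelled out; the swing is implied by the style or by a verbal direction.</p>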
For example, "Satin Doll", a swing era jazz standard, was notated in 4/4 time and in some versions includes the direction, medium swing.</p> <h3 id="genres-using-swing-rhythm" tabindex="-1">Genres using swing rhythm <a class="header-anchor" href="#genres-using-swing-rhythm" aria-label="Permalink to "Genres using swing rhythm""></a></h3> <p>Swing is commonly used in swing jazz, ragtime, blues, jazz, western swing, new jack swing, big band jazz, swing revival, funk, funk blues, R&B, soul music, rockabilly, neo rockabilly, rock and hip-hop. Much written music in jazz is assumed to be performed with a swing rhythm. Styles that always use traditional (triplet) rhythms, resembling "hard swing", include foxtrot, quickstep and some other ballroom dances, Stride piano, and 1920s-era Novelty piano (the successor to Ragtime style).</p> <h2 id="groove" tabindex="-1">Groove <a class="header-anchor" href="#groove" aria-label="Permalink to "Groove""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Groove_(music)" target="_blank" rel="noreferrer">Groove</a> is the sense of an effect ("feel") of changing pattern in a propulsive rhythm or sense of "swing". In jazz, it can be felt as a quality of persistently repeated rhythmic units, created by the interaction of the music played by a band's rhythm section (e.g. drums, electric bass or double bass, guitar, and keyboards). Groove is a significant feature of popular music, and can be found in many genres, including salsa, rock, soul, funk, and fusion.</p> <p>Characteristic rock groove: "bass drum on beats 1 and 3 and snare drum on beats 2 and 4 of the measure...add eighth notes on the hi-hat".</p> <p>From a broader ethnomusicological perspective, groove has been described as "an unspecifiable but ordered sense of something that is sustained in a distinctive, regular and attractive way, working to draw the listener in." Musicologists and other scholars have analyzed the concept of "groove" since around the 1990s. They have argued that a "groove" is an "understanding of rhythmic patterning" or "feel" and "an intuitive sense" of "a cycle in motion" that emerges from "carefully aligned concurrent rhythmic patterns" that stimulates dancing or foot-tapping on the part of listeners. The concept can be linked to the sorts of ostinatos that generally accompany fusions and dance musics of African derivation (e.g. African-American, Afro-Cuban, Afro-Brazilian, etc.).</p> <p>The term is often applied to musical performances that make one want to move or dance, and enjoyably "groove" (a word that also has sexual connotations). The expression "in the groove" (as in the jazz standard) was widely used from around 1936 to 1945, at the height of the swing era, to describe top-notch jazz performances. In the 1940s and 1950s, groove commonly came to denote musical "routine, preference, style, [or] source of pleasure."</p> <h3 id="description" tabindex="-1">Description <a class="header-anchor" href="#description" aria-label="Permalink to "Description""></a></h3> <h4 id="musicians-perspectives" tabindex="-1">Musicians' perspectives <a class="header-anchor" href="#musicians-perspectives" aria-label="Permalink to "Musicians' perspectives""></a></h4> <p>Like the term "swing", which is used to describe a cohesive rhythmic "feel" in a jazz context, the concept of "groove" can be hard to define. Marc Sabatella's article Establishing The Groove argues that "groove is a completely subjective thing." 
He claims that "one person may think a given drummer has a great feel, while another person may think the same drummer sounds too stiff, and another may think he is too loose." Similarly, a bass educator states that while "groove is an elusive thing" it can be defined as "what makes the music breathe" and the "sense of motion in the context of a song".</p> <p>In a musical context, general dictionaries define a groove as "a pronounced, enjoyable rhythm" or the act of "creat[ing], danc[ing] to, or enjoy[ing] rhythmic music". Steve Van Telejuice explains the "groove" as the point in this sense when he defines it as a point in a song or performance when "even the people who can't dance wanna feel like dancing..." due to the effect of the music.</p> <p>Bernard Coquelet argues that the "groove is the way an experienced musician will play a rhythm compared with the way it is written (or would be written)" by playing slightly "before or after the beat". Coquelet claims that the "notion of groove actually has to do with aesthetics and style"; "groove is an artistic element, that is to say human,...and "it will evolve depending on the harmonic context, the place in the song, the sound of the musician's instrument, and, in interaction with the groove of the other musicians", which he calls "collective" groove". Minute rhythmic variations by the rhythm section members such as the bass player can dramatically change the feel as a band plays a song, even for a simple singer-songwriter groove.</p> <h4 id="theoretical-analysis" tabindex="-1">Theoretical analysis <a class="header-anchor" href="#theoretical-analysis" aria-label="Permalink to "Theoretical analysis""></a></h4> <p>UK musicologist Richard Middleton (1999) notes that while "the concept of groove" has "long [been] familiar in musicians' own usage", musicologists and theorists have only more recently begun to analyze this concept. Middleton states that a groove "... marks an understanding of rhythmic patterning that underlies its role in producing the characteristic rhythmic 'feel' of a piece". He notes that the "feel created by a repeating framework" is also modified with variations. "Groove", in terms of pattern-sequencing, is also known as "shuffle note"—where there is deviation from exact step positions.</p> <p>When the musical slang phrase "Being in the groove" is applied to a group of improvisers, this has been called "an advanced level of development for any improvisational music group", which is "equivalent to Bohm and Jaworski's descriptions of an evoked field", which systems dynamics scholars claim are "forces of unseen connection that directly influence our experience and behaviour". Peter Forrester and John Bailey argue that the "chances of achieving this higher level of playing" (i.e., attain a "groove") are improved when the musicians are "open to other's musical ideas", "finding ways of complementing other participant's [sic] musical ideas", and "taking risks with the music".</p> <p>Turry and Aigen cite Feld's definition of groove as "an intuitive sense of style as process, a perception of a cycle in motion, a form or organizing pattern being revealed, a recurrent clustering of elements through time". Aigen states that "when [a] groove is established among players, the musical whole becomes greater than the sum of its parts, enabling a person [...] 
to experience something beyond himself which he/she cannot create alone (Aigen 2002, p.34)".</p> <p>Jeff Pressing's 2002 article claimed that a "groove or feel" is "a cognitive temporal phenomenon emerging from one or more carefully aligned concurrent rhythmic patterns, characterized by...perception of recurring pulses, and subdivision of structure in such pulses,...perception of a cycle of time, of length 2 or more pulses, enabling identification of cycle locations, and...effectiveness of engaging synchronizing body responses (e.g. dance, foot-tapping)".</p> <h4 id="neuroscientific-perspectives" tabindex="-1">Neuroscientific perspectives <a class="header-anchor" href="#neuroscientific-perspectives" aria-label="Permalink to "Neuroscientific perspectives""></a></h4> <p>The "groove" has been cited as an example of sensory-motor coupling between neural systems. Sensory-motor coupling is the coupling or integration of the sensory system and motor system. Sensorimotor integration is not a static process. For a given stimulus, there is no one single motor command. "Neural responses at almost every stage of a sensorimotor pathway are modified at short and long timescales by biophysical and synaptic processes, recurrent and feedback connections, and learning, as well as many other internal and external variables". Recent research has shown that at least some styles of modern groove-oriented rock music are characterized by an "aesthetics of exactitude" and the strongest groove stimulation could be observed for drum patterns without microtiming deviations.</p> <h3 id="use-in-different-genres" tabindex="-1">Use in different genres <a class="header-anchor" href="#use-in-different-genres" aria-label="Permalink to "Use in different genres""></a></h3> <h4 id="jazz" tabindex="-1">Jazz <a class="header-anchor" href="#jazz" aria-label="Permalink to "Jazz""></a></h4> <p>In some more traditional styles of jazz, the musicians often use the word "swing" to describe the sense of rhythmic cohesion of a skilled group. However, since the 1950s, musicians from the organ trio and latin jazz subgenres have also used the term "groove". Jazz flute player Herbie Mann talks a lot about "the groove." In the 1950s, Mann "locked into a Brazilian groove in the early '60s, then moved into a funky, soulful groove in the late '60s and early '70s. By the mid-'70s he was making hit disco records, still cooking in a rhythmic groove." He describes his approach to finding the groove as follows: "All you have to do is find the waves that are comfortable to float on top of." Mann argues that the "epitome of a groove record" is "Memphis Underground or Push Push", because the "rhythm section [is] locked all in one perception."</p> <h4 id="reggae" tabindex="-1">Reggae <a class="header-anchor" href="#reggae" aria-label="Permalink to "Reggae""></a></h4> <p>In Jamaican reggae, dancehall, and dub music, the creole term "riddim" is used to describe the rhythm patterns created by the drum pattern or a prominent bassline. In other musical contexts a "riddim" would be called a "groove" or beat. One of the widely copied "riddims", Real Rock, was recorded in 1967 by Sound Dimension. "It was built around a single, emphatic bass note followed by a rapid succession of lighter notes. The pattern repeated over and over hypnotically. 
The sound was so powerful that it gave birth to an entire style of reggae meant for slow dancing called rub a dub."</p> <youtube-embed video="ne1oIaPIyIw" /><h4 id="r-b" tabindex="-1">R&B <a class="header-anchor" href="#r-b" aria-label="Permalink to "R&B""></a></h4> <p>The "groove" is also associated with funk performers, such as James Brown's drummers Clyde Stubblefield and Jabo Starks, and with soul music. "In the 1950s, when 'funk' and 'funky' were used increasingly as adjectives in the context of soul music—the meaning being transformed from the original one of a pungent odor to a re-defined meaning of a strong, distinctive groove." As "[t]he soul dance music of its day, the basic idea of funk was to create as intense a groove as possible." When a drummer plays a groove that "is very solid and with a great feel...", this is referred to informally as being "in the pocket"; when a drummer "maintains this feel for an extended period of time, never wavering, this is often referred to as a deep pocket."</p> <youtube-embed video="jczjHqV2IUg" /><h4 id="hip-hop" tabindex="-1">Hip hop <a class="header-anchor" href="#hip-hop" aria-label="Permalink to "Hip hop""></a></h4> <p>A concept similar to "groove" or "swing" is also used in other African-American genres such as hip hop. The rhythmic groove that jazz artists call a sense of “swing” is sometimes referred to as having "flow" in the hip hop scene. "Flow is as elemental to hip hop as the concept of swing is to jazz". Just as the jazz concept of "swing" involves performers deliberately playing behind or ahead of the beat, the hip-hop concept of flow is about "funking with one's expectations of time"—that is, the rhythm and pulse of the music. "Flow is not about what is being said so much as how one is saying it".</p> ]]></content:encoded> <enclosure url="https://chromatone.center/jadson-thomas.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Seconds and sevenths]]></title> <link>https://chromatone.center/theory/intervals/second-seventh/</link> <guid>https://chromatone.center/theory/intervals/second-seventh/</guid> <pubDate>Sat, 04 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The rather dissonant shortest intervals]]></description> <content:encoded><![CDATA[<h2 id="second" tabindex="-1">Second <a class="header-anchor" href="#second" aria-label="Permalink to "Second""></a></h2> <h3 id="major-second-m2" tabindex="-1">Major second M2 <a class="header-anchor" href="#major-second-m2" aria-label="Permalink to "Major second M2""></a></h3> <abc-render abc="[A4B] AB" /><chroma-profile :chroma="'101000000000'" /><p>A major second (sometimes also called <strong>whole tone</strong> or a <strong>whole step</strong>) is a second spanning two semitones. The major second is the interval that occurs between the first and second degrees of a major scale, the tonic and the supertonic.</p> <p>On a musical keyboard, a major second is the interval between two keys separated by one key, counting white and black keys alike. On a guitar string, it is the interval separated by two frets. In moveable-do solfège, it is the interval between do and re. It is considered a melodic step, as opposed to larger intervals called skips.</p> <p>In just intonation, major seconds can occur in at least two different frequency ratios: 9:8 (about 203.9 cents) major tone and 10:9 minor tone (about 182.4 cents). The largest (9:8) ones are called major tones or greater tones, the smallest (10:9) are called minor tones or lesser tones. 
Their size differs by exactly one syntonic comma (81:80, or about 21.5 cents). Some equal temperaments, such as 15-ET and 22-ET, also distinguish between a greater and a lesser tone.</p> <p><img src="./images/Comparison_of_major_seconds.png" alt=""></p> <p>The major tone may be derived from the harmonic series as the interval between the eighth and ninth harmonics. The minor tone may be derived from the harmonic series as the interval between the ninth and tenth harmonics.</p> <p>In Pythagorean music theory, the epogdoon (Ancient Greek: ἐπόγδοον) is the interval with the ratio 9 to 8. The word is composed of the prefix epi- meaning "on top of" and ogdoon meaning "one eighth"; so it means "one eighth in addition".</p> <blockquote> <p><img src="./Epogdoon.jpg" alt=""></p> <p>Diagram showing relations between epogdoon, diatessaron, diapente, and diapason</p> </blockquote> <h3 id="minor-second-m2" tabindex="-1">Minor second m2 <a class="header-anchor" href="#minor-second-m2" aria-label="Permalink to "Minor second m2""></a></h3> <abc-render abc="[A4_B] A_B" /><chroma-profile :chroma="'110000000000'" /><p>A semitone, also called a half step or a half tone, is the smallest musical interval commonly used in Western tonal music, and it is considered the most dissonant when sounded harmonically. It is defined as the interval between two adjacent notes in a 12-tone scale.</p> <p>In twelve-tone equal temperament all semitones are equal in size (100 cents). In other tuning systems, "semitone" refers to a family of intervals that may vary both in size and name. In Pythagorean tuning, seven semitones out of twelve are diatonic, with ratio 256:243 or 90.2 cents (Pythagorean limma), and the other five are chromatic, with ratio 2187:2048 or 113.7 cents (Pythagorean apotome); they differ by the Pythagorean comma of ratio 531441:524288 or 23.5 cents. In quarter-comma meantone, seven of them are diatonic, and 117.1 cents wide, while the other five are chromatic, and 76.0 cents wide; they differ by the lesser diesis of ratio 128:125 or 41.1 cents. 12-tone scales tuned in just intonation typically define three or four kinds of semitones. For instance, Asymmetric five-limit tuning yields chromatic semitones with ratios 25:24 (70.7 cents) and 135:128 (92.2 cents), and diatonic semitones with ratios 16:15 (111.7 cents) and 27:25 (133.2 cents).</p> <p><img src="./images/Comparison_of_minor_seconds.png" alt=""></p> <p>Melodically, this interval is very frequently used, and is of particular importance in cadences. In the perfect and deceptive cadences it appears as a resolution of the leading-tone to the tonic. In the plagal cadence, it appears as the falling of the subdominant to the mediant. It also occurs in many forms of the imperfect cadence, wherever the tonic falls to the leading-tone.</p> <p>Harmonically, the interval usually occurs as some form of dissonance or a nonchord tone that is not part of the functional harmony. 
It may also appear in inversions of a major seventh chord, and in many added tone chords.</p> <h3 id="minor-seventh-m7" tabindex="-1">Minor seventh m7 <a class="header-anchor" href="#minor-seventh-m7" aria-label="Permalink to "Minor seventh m7""></a></h3> <abc-render abc="[A4g] Ag" /><chroma-profile :chroma="'100000000010'" /><p>The minor seventh spans ten semitones.</p> <p>Minor seventh intervals rarely feature in melodies (and especially in their openings) but occur more often than major sevenths.</p> <p>Consonance and dissonance are relative, depending on context, the minor seventh being defined as a dissonance requiring resolution to a consonance.</p> <p>In just intonation there is both a 16:9 "small just minor seventh", also called the "Pythagorean small minor seventh", equivalent to two perfect fourths stacked on top of each other, and a 9:5 "large just minor seventh" equivalent to a perfect fifth and a minor third on top of each other.</p> <h3 id="major-seventh-m7" tabindex="-1">Major seventh M7 <a class="header-anchor" href="#major-seventh-m7" aria-label="Permalink to "Major seventh M7""></a></h3> <abc-render abc="[A4^g] A^g" /><chroma-profile :chroma="'100000000001'" /><p>The major seventh spans eleven semitones.</p> <p>A major seventh in just intonation most often corresponds to a pitch ratio of 15:8 (About this soundplay (help·info)); in 12-tone equal temperament, a major seventh is equal to eleven semitones, or 1100 cents, about 12 cents wider than the 15:8 major seventh.</p> <p>The major seventh interval is considered one of the most dissonant intervals after its inversion the minor second. For this reason, its melodic use is infrequent in classical music.</p> <p>The major seventh chord is however very common in jazz, especially 'cool' jazz, and has a characteristically soft and sweet sound.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/Epogdoon.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Indian Raga]]></title> <link>https://chromatone.center/theory/scales/raga/</link> <guid>https://chromatone.center/theory/scales/raga/</guid> <pubDate>Fri, 03 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Improvisational music framework]]></description> <content:encoded><![CDATA[<p>A raga or raag (IAST: rāga; also raaga or ragam; literally "coloring, tingeing, dyeing") is a melodic framework for improvisation akin to a melodic mode in Indian classical music. The rāga is a unique and central feature of the classical Indian music tradition, and as a result has no direct translation to concepts in classical European music. Each rāga is an array of melodic structures with musical motifs, considered in the Indian tradition to have the ability to "colour the mind" and affect the emotions of the audience.</p> <p>Each rāga provides the musician with a musical framework within which to improvise. Improvisation by the musician involves creating sequences of notes allowed by the rāga in keeping with rules specific to the rāga. Rāgas range from small rāgas like Bahar and Shahana that are not much more than songs to big rāgas like Malkauns, Darbari and Yaman, which have great scope for improvisation and for which performances can last over an hour. Rāgas may change over time, with an example being Marwa, the primary development of which has been going down into the lower octave, in contrast with the traditional middle octave. Each rāga traditionally has an emotional significance and symbolic associations such as with season, time and mood. 
The rāga is considered a means in the Indian musical tradition to evoking specific feelings in an audience. Hundreds of rāga are recognized in the classical tradition, of which about 30 are common, and each rāga has its "own unique melodic personality".</p> <p>Every raga has a swara (a note or named pitch) called shadja, or adhara sadja, whose pitch may be chosen arbitrarily by the performer. This is taken to mark the beginning and end of the saptak (loosely, octave). The raga also contains an adhista, which is either the swara Ma or the swara Pa. The adhista divides the octave into two parts or anga - the purvanga, which contains lower notes, and the uttaranga, which contains higher notes. Every raga has a vadi and a samvadi. The vadi is the most prominent swara, which means that an improvising musician emphasizes or pays more attention to the vadi than to other notes. The samvadi is consonant with the vadi (always from the anga that does not contain the vadi) and is the second most prominent swara in the raga.</p> <img src="./sarigama.svg" /> <p>According to Monier Monier-Williams, the term comes from a Sanskrit word for "the act of colouring or dyeing", or simply a "colour, hue, tint, dye". The term also connotes an emotional state referring to a "feeling, affection, desire, interest, joy or delight", particularly related to passion, love, or sympathy for a subject or something. In the context of ancient Indian music, the term refers to a harmonious note, melody, formula, building block of music available to a musician to construct a state of experience in the audience.</p> <p>The word appears in the ancient Principal Upanishads of Hinduism, as well as the Bhagavad Gita.[ For example, verse 3.5 of the Maitri Upanishad and verse 2.2.9 of the Mundaka Upanishad contain the word rāga. The Mundaka Upanishad uses it in its discussion of soul (Atman-Brahman) and matter (Prakriti), with the sense that the soul does not "color, dye, stain, tint" the matter. The Maitri Upanishad uses the term in the sense of "passion, inner quality, psychological state". The term rāga is also found in ancient texts of Buddhism where it connotes "passion, sensuality, lust, desire" for pleasurable experiences as one of three impurities of a character. Alternatively, rāga is used in Buddhist texts in the sense of "color, dye, hue".</p> <youtube-embed video="J8QgzZQ3hyc" /><p>In 1933, states José Luiz Martinez – a professor of music, Stern refined this explanation to "the rāga is more fixed than mode, less fixed than the melody, beyond the mode and short of melody, and richer both than a given mode or a given melody; it is mode with added multiple specialities".</p> <p>According to Walter Kaufmann, though a remarkable and prominent feature of Indian music, a definition of rāga cannot be offered in one or two sentences. rāga is a fusion of technical and ideational ideas found in music, and may be roughly described as a musical entity that includes note intonation, relative duration and order, in a manner similar to how words flexibly form phrases to create an atmosphere of expression. In some cases, certain rules are considered obligatory, in others optional. The rāga allows flexibility, where the artist may rely on simple expression, or may add ornamentations yet express the same essential message but evoke a different intensity of mood.</p> <p>A rāga has a given set of notes, on a scale, ordered in melodies with musical motifs. 
A musician playing a rāga, states Bruno Nettl, may traditionally use just these notes, but is free to emphasize or improvise certain degrees of the scale. The Indian tradition suggests a certain sequencing of how the musician moves from note to note for each rāga, in order for the performance to create a rasa (mood, atmosphere, essence, inner feeling) that is unique to each rāga. A rāga can be written on a scale. Theoretically, thousands of rāga are possible given 5 or more notes, but in practical use, the classical tradition has refined and typically relies on several hundred. For most artists, their basic perfected repertoire has some forty to fifty rāgas. Rāga in Indian classic music is intimately related to tala or guidance about "division of time", with each unit called a matra (beat, and duration between beats).</p> <h2 id="_72-melakarta-ragas" tabindex="-1">72 Melakarta ragas <a class="header-anchor" href="#_72-melakarta-ragas" aria-label="Permalink to "72 Melakarta ragas""></a></h2> <p>Here’s a list of an aesthetic and scientifically designed chart of the 72 parent ragas which have been assigned to 12 chakras/wheel each comprising six ragas.</p> <img src="./Melakarta.katapayadi.sankhya.png" /> <p>Continuing the extremely complex system from which our Carnatic classical music was derived we here arrive at a very aesthetic and scientifically designed chart of the 72 parent ragas. These are called melakarta ragas which numbering 72 have been re-scheduled into 36 each taking the ‘shuddha madhyama’ and ‘prati (sharp) madhyama’, namely the note ‘Ma’ respectively. Again, these 72 ragas have been assigned to 12 chakras/wheel each comprising six ragas. Therefore, we have six chakras in the Ma1 (shuddha madhyama) scale and six in the Ma2 (prati madhyama).</p> <p>The nomenclature given to the chakras carries great import. For instance, the first chakra being numero uno is called ‘Indu’ after the moon of which we have just one in our universe; chakra two is named after the eyes: netra; the third after agni (fire) which exists in three forms (treat agni); the fourth after the four Vedas, the fifth is the bana chakra after the pancha bana (five arrows of Manmatha/Cupid), the sixth Rithu after the seasons (shat rithu); the seventh is Rishi after the sapta (7)rishis; the eighth is called ‘Vasu chakra’ (our mythology speaks of 8 Vasus/superhumans); the ninth is ‘Brahma’ of which we are told there are 9 (nava brahma/creators); the tenth is ‘Disi’ or direction and in Indian system we count 10 directions; the eleventh is Rudra chakra named after 11 rudras/deities and finally the 12th chakra is the ‘Aditya’ or sun of which there are 12 in the galaxy apart from ours.</p> <youtube-embed video="mrFJ9MLR7do" /><p>This apart, the entire chart is made user-friendly by assigning the syllabic notes in an order that facilitates easy memorising for a student of music. For instance, all the ragas in the Indu chakra have a common note in rishabha and gandhara, viz,. ‘ra-ga’ (shuddha rishabha and shuddha gandhara-ri1 and ga1). The only notes that change their position in numerical order are the daivatha and nishadha (dha & ni). Similarly, in Netra chakra, it is ‘ra and gi’. The rishabha remains the same (ri1) while the gandhara changes to ‘sadharana gandharam (ga2). The third Agni chakra is identified with ‘ra-gu’ which means no change in the placement of rishabha but then the gandhara changes to antara gandhara (ga3). 
When it comes to the fourth Veda chakra, it is ‘ri-gi’ where the rishabha undergoes a change in position to chatursruti rishabha (ri2) with the gandhara remaining constant at sadharana gandhara (ga2). In the Bana chakra (ri-gu) the chatusruti rishabha is constant ‘ri2’ while the gandhara turns ‘ga3’. Finally, in the sixth Rithu chakra which completes the shuddha madhyama ragas, it is ‘ru-gu’ where the rishabha is placed in shatsruti (ri3) and the gandhara remains in ‘ga3’. The same is repeated in the next six chakras (37-72) which come under the prati madhayama melakarta ragas corresponding to the 36 shuddha madhyama ragas.</p> <youtube-embed video="G5xfoEVyJRg" /><p>Now let’s get the list of these 72 ragas: Kanakangi, Ratnangi, Ganamurti, Vanaspati, Manavati, Tanarupi (Indu chakra); Senavati, Hanumathodi (todi), Dhenuka, Natakapriya, Kokilapriya, Rupavathi (Netra chakra); Gayakapriya, Vakulapriya, Mayamalavagowla, Chakravakam, Suryakantam, Hatakambari (Agni Chakra); Jhankaradwani, Natabhairavi, Keeravani, Kharaharapriya, Gourimanohari, Varunapriya (Veda chakra); Mara ranjani, Charukesi, Sarasangi, Harikambhoji, Dheera Sankarabharanam (Sankarabharanam), Naganandini (Bana chakra); Yagapriya, Ragavardhini, Gaangeyabhushani, Vagadeeshwari, Shulini, Chalanata (Naata) (Ritu chakra); Salagam, Jalarnavam, Jhalavarali, Navaneetam, Pavani, Raghupriya (Rishi chakra); Shadvidhamargini, Shuba Panthuvarali, Gavambhodi, Suvarnangi, Divyamani, Bhavapriya (Vasu chakra); Dhavalambari, Naamanarayani, Kamavardhini, Ramapriya, Gamanashrama, Vishwambari (Brahma chakra); Shyamalangi, Shanmukhapriya, Simhendra Madhyamam, Hemavathi, Dharmavathi, Neethimathi (Disi chakra); Kantamani, Rishabapriya, Lataangi, Vaachaspati, Meccha Kalyani (Kalyani), Chitrambari (Rudra chakra); Sucharita, Jyothiswaroopini, Dhatuvardhini, Kosalam, Nasika bhushani, Rasikapriya (Aditya chakra).</p> <h2 id="katapayadi-system" tabindex="-1">Katapayadi system <a class="header-anchor" href="#katapayadi-system" aria-label="Permalink to "Katapayadi system""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Katapayadi_system" target="_blank" rel="noreferrer">ka·ṭa·pa·yā·di</a> (Devanagari: कटपयादि) system (also known as Paralppēru, Malayalam: പരല്പ്പേര്) of numerical notation is an ancient Indian alphasyllabic numeral system to depict letters to numerals for easy remembrance of numbers as words or verses. Assigning more than one letter to one numeral and nullifying certain other letters as valueless, this system provides the flexibility in forming meaningful words out of numbers which can be easily remembered.</p> <p>Following verse found in Śaṅkaravarman's Sadratnamāla explains the mechanism of the system.</p> <blockquote> <p>नञावचश्च शून्यानि संख्या: कटपयादय:।<br> मिश्रे तूपान्त्यहल् संख्या न च चिन्त्यो हलस्वर:॥</p> </blockquote> <p>Transiliteration:</p> <blockquote> <p>nanyāvacaśca śūnyāni saṃkhyāḥ kaṭapayādayaḥ<br> miśre tūpāntyahal saṃkhyā na ca cintyo halasvaraḥ</p> </blockquote> <p>Translation: na (न), nya (ञ) and a (अ)-s, i.e., vowels represent zero. The nine integers are represented by consonant group beginning with ka, ṭa, pa, ya. In a conjunct consonant, the last of the consonants alone will count. 
A consonant without a vowel is to be ignored.</p> <p>Explanation: The assignment of letters to the numerals is as per the following arrangement (in Devanagari, Kannada, Telugu & Malayalam respectively).</p> <table tabindex="0"> <thead> <tr> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>0</th> </tr> </thead> <tbody> <tr> <td>ka क ಕ క ക</td> <td>kha ख ಖ ఖ ഖ</td> <td>ga ग ಗ గ ഗ</td> <td>gha घ ಘ ఘ ഘ</td> <td>nga ङ ಙ జ్ఞ ങ</td> <td>ca च ಚ చ ച</td> <td>cha छ ಛ ఛ ഛ</td> <td>ja ज ಜ జ ജ</td> <td>jha झ ಝ ఝ ഝ</td> <td>nya ञ ಞ ఞ ഞ</td> </tr> <tr> <td>ṭa ट ಟ ట ട</td> <td>ṭha ठ ಠ ఠ ഠ</td> <td>ḍa ड ಡ డ ഡ</td> <td>ḍha ढ ಢ ఢ ഢ</td> <td>ṇa ण ಣ ణ ണ</td> <td>ta त ತ త ത</td> <td>tha थ ಥ థ ഥ</td> <td>da द ದ ద ദ</td> <td>dha ध ಧ ధ ധ</td> <td>na न ನ న ന</td> </tr> <tr> <td>pa प ಪ ప പ</td> <td>pha फ ಫ ఫ ഫ</td> <td>ba ब బ ബ</td> <td>bha भ ಭ భ ഭ</td> <td>ma म ಮ మ മ</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>ya य ಯ య യ</td> <td>ra र ರ ర ര</td> <td>la ल ల ల ല</td> <td>va व ವ వ വ</td> <td>śha श ಶ శ ശ</td> <td>sha ष ಷ ష ഷ</td> <td>sa स ಸ స സ</td> <td>ha ह ಹ హ ഹ</td> <td>–</td> <td>–</td> </tr> </tbody> </table> <ul> <li>Consonants have numerals assigned as per the above table. For example, ba (ब) is always 3, whereas 5 can be represented by either nga (ङ) or ṇa (ण) or ma (म) or śha (श).</li> <li>All stand-alone vowels like a (अ) and ṛ (ऋ) are assigned to zero.</li> <li>In case of a conjunct, consonants attached to a non-vowel will be valueless. For example, kya (क्या) is formed by k (क्) + ya (य) + a (अ). The only consonant standing with a vowel is ya (य). So the corresponding numeral for kya (क्या) will be 1.</li> <li>There is no way of representing the decimal separator in the system.</li> <li>Indians used the Hindu–Arabic numeral system for numbering, traditionally written in increasing place values from left to right. This is as per the rule "अङ्कानां वामतो गतिः" which means numbers go from right to left.</li> </ul> <p>The melakarta ragas of Carnatic music are named so that the first two syllables of the name give the raga's number. This system is sometimes called the Ka-ta-pa-ya-di sankhya. The swaras 'Sa' and 'Pa' are fixed, and here is how to get the other swaras from the melakarta number.</p> <p>Melakartas 1 through 36 have Ma1 and those from 37 through 72 have Ma2. The other notes are derived by noting the (integral part of the) quotient and the remainder when one less than the melakarta number is divided by 6. If the melakarta number is greater than 36, subtract 36 from it before performing this step.</p> <p>'Ri' and 'Ga' positions: the raga will have</p> <ul> <li>Ri1 and Ga1 if the quotient is 0</li> <li>Ri1 and Ga2 if the quotient is 1</li> <li>Ri1 and Ga3 if the quotient is 2</li> <li>Ri2 and Ga2 if the quotient is 3</li> <li>Ri2 and Ga3 if the quotient is 4</li> <li>Ri3 and Ga3 if the quotient is 5</li> </ul> <p>'Da' and 'Ni' positions: the raga will have</p> <ul> <li>Da1 and Ni1 if the remainder is 0</li> <li>Da1 and Ni2 if the remainder is 1</li> <li>Da1 and Ni3 if the remainder is 2</li> <li>Da2 and Ni2 if the remainder is 3</li> <li>Da2 and Ni3 if the remainder is 4</li> <li>Da3 and Ni3 if the remainder is 5</li> </ul> <h3 id="raga-dheerasankarabharanam" tabindex="-1">Raga Dheerasankarabharanam <a class="header-anchor" href="#raga-dheerasankarabharanam" aria-label="Permalink to "Raga Dheerasankarabharanam""></a></h3> <p>The katapayadi scheme associates dha ↔ 9 and ra ↔ 2, hence the raga's melakarta number is 29 (92 reversed). Now 29 ≤ 36, hence Dheerasankarabharanam has Ma1.</p>
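<p>This procedure is mechanical enough to capture in a few lines of code. The sketch below is only an illustration of the rule just stated (the Python function and table names are invented for this page, not taken from any library); the two worked examples that follow in the text can be checked against it.</p> <pre><code class="language-python"># Sketch of the melakarta rule described above: Sa and Pa are fixed,
# Ma1 applies to numbers 1-36 and Ma2 to 37-72, Ri/Ga come from the
# quotient and Da/Ni from the remainder of (number - 1) divided by 6.

RI_GA = [("Ri1", "Ga1"), ("Ri1", "Ga2"), ("Ri1", "Ga3"),
         ("Ri2", "Ga2"), ("Ri2", "Ga3"), ("Ri3", "Ga3")]

DA_NI = [("Da1", "Ni1"), ("Da1", "Ni2"), ("Da1", "Ni3"),
         ("Da2", "Ni2"), ("Da2", "Ni3"), ("Da3", "Ni3")]

def melakarta_scale(number):
    """Return the swaras of the melakarta raga with this number (1 to 72)."""
    if number not in range(1, 73):
        raise ValueError("melakarta number must be between 1 and 72")
    ma = "Ma2" if number >= 37 else "Ma1"
    n = number - 36 if number >= 37 else number
    quotient, remainder = divmod(n - 1, 6)
    ri, ga = RI_GA[quotient]
    da, ni = DA_NI[remainder]
    return ["Sa", ri, ga, ma, "Pa", da, ni, "SA"]

# melakarta_scale(29) -> ['Sa', 'Ri2', 'Ga3', 'Ma1', 'Pa', 'Da2', 'Ni3', 'SA']  (Dheerasankarabharanam)
# melakarta_scale(65) -> ['Sa', 'Ri2', 'Ga3', 'Ma2', 'Pa', 'Da2', 'Ni3', 'SA']  (MechaKalyani)
</code></pre>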
<p>Divide 28 (1 less than 29) by 6: the quotient is 4 and the remainder is 4. Therefore, this raga has Ri2, Ga3 (quotient is 4) and Da2, Ni3 (remainder is 4), and its scale is Sa Ri2 Ga3 Ma1 Pa Da2 Ni3 SA.</p> <h3 id="raga-mechakalyani" tabindex="-1">Raga MechaKalyani <a class="header-anchor" href="#raga-mechakalyani" aria-label="Permalink to "Raga MechaKalyani""></a></h3> <p>From the coding scheme, Ma ↔ 5 and Cha ↔ 6, hence the raga's melakarta number is 65 (56 reversed). 65 is greater than 36, so MechaKalyani has Ma2. Since the raga's number is greater than 36, subtract 36 from it: 65–36=29. 28 (1 less than 29) divided by 6 gives quotient 4 and remainder 4, so Ri2 Ga3 and Da2 Ni3 occur. So MechaKalyani has the notes Sa Ri2 Ga3 Ma2 Pa Da2 Ni3 SA.</p> <h3 id="exception-for-simhendramadhyamam" tabindex="-1">Exception for Simhendramadhyamam <a class="header-anchor" href="#exception-for-simhendramadhyamam" aria-label="Permalink to "Exception for Simhendramadhyamam""></a></h3> <p>As per the above calculation, we should get Sa ↔ 7 and Ha ↔ 8, giving the number 87 instead of 57 for Simhendramadhyamam. Ideally this should be Sa ↔ 7 and Ma ↔ 5, giving the number 57. So it is believed that the name should be written as Sihmendramadhyamam (as in the case of Brahmana in Sanskrit).</p> <h2 id="chakras" tabindex="-1">Chakras <a class="header-anchor" href="#chakras" aria-label="Permalink to "Chakras""></a></h2> <p>The 72 melakarta ragas are split into 12 groups called chakras, each containing 6 ragas. The ragas within a chakra differ only in the dhaivatham and nishadham notes (D and N), as illustrated below. The name of each of the 12 chakras suggests its ordinal number as well.[2][4]</p> <ul> <li><strong>Indu</strong> stands for the moon, of which we have only one - hence it is the first chakra.</li> <li><strong>Netra</strong> means eyes, of which we have two - hence it is the second.</li> <li><strong>Agni</strong> is the third chakra as it denotes the three agnis or the holy fires (laukikaagni - earthly fire, daavaagni - lightning, and divyaagni - the Sun).</li> <li><strong>Veda</strong>, denoting the four Vedas or scriptures, namely Rigveda, Samaveda, Yajurveda and Atharvaveda, is the name of the fourth chakra.</li> <li><strong>Bana</strong> comes fifth as it stands for the five banas of Manmatha.</li> <li><strong>Rutu</strong> is the sixth chakra, standing for the 6 seasons of the Hindu calendar.</li> <li><strong>Rishi</strong>, meaning sage, is the seventh chakra, representing the seven sages.</li> <li><strong>Vasu</strong> stands for the eight vasus of Hinduism.</li> <li><strong>Brahma</strong> comes next, of which there are 9.</li> <li>The 10 directions, including akash (sky) and patal (nether region), are represented by the tenth chakra, <strong>Disi</strong>.</li> <li>Eleventh comes <strong>Rudra</strong>, which represents the eleven names of Lord Shiva.</li> <li>Twelfth comes <strong>Aditya</strong>, which stands for the twelve names of Lord Surya or the Sun God.</li> </ul> <h2 id="asampurna-melakarta" tabindex="-1">Asampurna Melakarta <a class="header-anchor" href="#asampurna-melakarta" aria-label="Permalink to "Asampurna Melakarta""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Asampurna_Melakarta" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Asampurna_Melakarta</a></p> <p>The Asampurna Melakarta (transliterated as Asaṃpūrṇa Mēḷakarta) scheme is the system of 72 ragas (musical scales) originally proposed in the 17th century by
Venkatamakhin in his Chaturdanda Prakasikha. This proposal used scales with notes that do not conform to the sampurna raga system. Skipped notes or repeated notes, etc., were used in some of the ragas. Some of the ragas of any Melakarta system will use Vivadi swaras (discordant notes). The original system is supposed to avoid such ill-effects and was followed by the Muthuswami Dikshitar school. The naming of the original system followed the Katapayadi system. Muthuswami Dikshitar's compositions use the names of these ragas in the lyrics of the songs, and the ragas are still referred to by those names even in radio / TV announcements of these songs.</p> <p>Later Govindacharya came up with a more mathematical and regular system of 72 ragas, which are currently considered the fundamental ragas (musical scales) in Carnatic music (South Indian classical music). These melakarta ragas were sampurna ragas. Some of the names of the ragas had to be modified to fit into the Katapayadi system.</p> <p>In the Asampurna Melakarta system, there is no set rule for the ragas, in contrast to the currently used system of Melakarta ragas. Some ragas, though, are the same in both systems (like 15 - Mayamalavagowla and 29 - Dheerasankarabharanam), and in some cases the scales are the same while the names are different (like 8 - Janatodi and Hanumatodi, 56 - Chamaram and Shanmukhapriya).</p> <p><a href="http://music-raagaa.blogspot.com/p/72-melakartha-raagas_17.html" target="_blank" rel="noreferrer">http://music-raagaa.blogspot.com/p/72-melakartha-raagas_17.html</a></p> <p><a href="https://www.ragasurabhi.com/index.html" target="_blank" rel="noreferrer">https://www.ragasurabhi.com/index.html</a></p> <p><a href="http://carnatica.net/origin.htm" target="_blank" rel="noreferrer">http://carnatica.net/origin.htm</a></p> <p><a href="http://www.melakarta.com/index.html" target="_blank" rel="noreferrer">http://www.melakarta.com/index.html</a></p> <p><a href="http://www.carnaticcorner.com/articles/22_srutis.htm" target="_blank" rel="noreferrer">http://www.carnaticcorner.com/articles/22_srutis.htm</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/five-gandharva.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Harmonic movements]]></title> <link>https://chromatone.center/theory/harmony/movement/</link> <guid>https://chromatone.center/theory/harmony/movement/</guid> <pubDate>Thu, 02 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Ways to move chords]]></description> <content:encoded><![CDATA[<h2 id="descending-fifths-progressions" tabindex="-1">Descending fifths progressions <a class="header-anchor" href="#descending-fifths-progressions" aria-label="Permalink to "Descending fifths progressions""></a></h2> <youtube-embed video="fJvxaIfX6V4"></youtube-embed><p>Discusses chord progressions whose roots move by descending fifth, especially as they appear in Jazz standards.</p> <h2 id="chord-substitutions" tabindex="-1">Chord substitutions <a class="header-anchor" href="#chord-substitutions" aria-label="Permalink to "Chord substitutions""></a></h2> <p>In music theory, <a href="https://en.wikipedia.org/wiki/Chord_substitution" target="_blank" rel="noreferrer">chord substitution</a> is the technique of using a chord in place of another in a progression of chords, or a chord progression. Much of the European classical repertoire and the vast majority of blues, jazz and rock music songs are based on chord progressions.
"A chord substitution occurs when a chord is replaced by another that is made to function like the original. Usually substituted chords possess two pitches in common with the triad that they are replacing."</p> <p>A chord progression may be repeated to form a song or tune. Composers, songwriters and arrangers have developed a number of ways to add variety to a repeated chord progression. There are many ways to add variety to music, including changing the dynamics (loudness and softness).</p> <p>The diminished triad can be used to substitute for the dominant seventh chord. In major scales, a diminished triad occurs only on the seventh scale degree. For instance, in the key of C, this is a B diminished triad (B, D, F). Since the triad is built on the seventh scale degree, it is also called the leading-tone triad. This chord has a dominant function. Unlike the dominant triad or dominant seventh, the leading-tone triad functions as a prolongational chord rather than a structural chord since the strong root motion by fifth is absent.</p> <h3 id="use-in-blues-jazz-and-rock-music" tabindex="-1">Use in blues, jazz and rock music <a class="header-anchor" href="#use-in-blues-jazz-and-rock-music" aria-label="Permalink to "Use in blues, jazz and rock music""></a></h3> <p>Jazz musicians often substitute chords in the original progression to create variety and add interest to a piece.</p> <p>The substitute chord must have some harmonic quality and degree of function in common with the original chord, and often only differs by one or two notes. Scott DeVeaux describes a "penchant in modern jazz for harmonic substitution."</p> <p>One simple type of chord substitution is to replace a given chord with a chord that has the same function. Thus, in the simple chord progression I–ii–V–I, which in the key of C major would be the chords C–DM–G–C, a musician could replace the I chords with "tonic substitutes". The most widely used substitutes are iii and vi (in a Major key), which in this case would be the chords "e minor" and "a minor".</p> <p>This simple chord progression with tonic substitutes could become iii–ii–V–vi or, with chord names, "e minor–d minor–G Major–a minor". Given the overlap in notes between the original tonic chords and the chord substitutes (for example, C major is the notes "C, E, and G", and "e minor" is the notes "E, G and B"), the melody is likely to be supported by the new chords. The musician typically uses her/his "ear" (sense of the musical style and harmonic suitability) to determine if the chord substitution works with the melody.</p> <p>There are also subdominant substitutes and dominant substitutes. For subdominant chords, in the key of C major, in the chord progression C major/F major/G7/C major (a simple I /IV/V7/I progression), the notes of the subdominant chord, F major, are "F, A, and C". As such, a performer or arranger who wished to add variety to the song could try using a chord substitution for a repetition of this progression. One simple chord substitute for IV is the "ii" chord, a minor chord built on the second scale degree. In the key of C major, the "ii" chord is "d minor", which is the notes "D, F, and A". 
As there are two shared notes between the IV and "ii" chords, a melody that works well over IV is likely to be supported by the "ii" chord.</p> <h3 id="types" tabindex="-1">Types <a class="header-anchor" href="#types" aria-label="Permalink to "Types""></a></h3> <p>The ii–V substitution is when a chord or each chord in a progression is preceded by its supertonic (ii7) and dominant (V7), or simply its dominant. For example, a C major chord would be preceded by Dm7 and G7. Since secondary dominant chords are often inserted between the chords of a progression rather than replacing one, this may be considered as 'addition' rather than 'substitution'.</p> <p>Chord quality alteration is when the quality of a chord is changed, and the new chord of similar root and construction, but with one pitch different, is substituted for the original chord, for example the minor sixth for the major seventh, or the major seventh for the minor.</p> <p>The diminished seventh chord is often used in place of a dominant 7th chord. In the key of A Major the V chord, E dominant 7th (which is made up the notes E, G♯, B, and D) can be replaced with a G♯ diminished seventh chord (G♯, B, D, F). If the diminished seventh chord (G♯) is followed by the I chord (A), this creates chromatic (stepwise semitonal) root movement, which can add musical interest in a song mainly constructed around the interval of the fourth or fifth. The diminished seventh chord on the sharpened second scale degree, ♯IIo7, may be used as a substitute dominant, for example in C: ♯IIo7 = D♯–F♯–A–C♮ ↔ B–D♯–F♯–A = VII7, which creates the chromatic root movement D – D♭ – C. Contrast with the original ii–V–I progression in C, which creates the leading-tone B – C.</p> <p>In a tritone substitution, the substitute chord only differs slightly from the original chord. If the original chord in a song is G7 (G, B, D, F), the tritone substitution would be D♭7 (D♭, F, A♭, C♭). Note that the 3rd and 7th notes of the G7 chord are found in the D♭7 chord (albeit with a change of role). The tritone substitution is widely used for V7 chords in the popular jazz chord progression "ii-V-I". In the key of C, this progression is "d minor, G7, C Major". With tritone substitution, this progression would become "d minor, D♭7, C Major," which contains chromatic root movement. When performed by the bass player, this chromatic root movement creates a smooth-sounding progression. "Tritone substitutions and altered dominants are nearly identical...Good improvisers will liberally sprinkle their solos with both devices. A simple comparison of the notes generally used with the given chord [notation] and the notes used in tri-tone substitution or altered dominants will reveal a rather stunning contrast, and could cause the unknowledgeable analyzer to suspect errors. ...(the distinction between the two [tri-tone substitution and altered dominant] is usually a moot point)."</p> <p>Tonic substitution is the use of chords that sound similar to the tonic chord (or I chord) in place of the tonic. In major keys, the chords iii and vi are often substituted for the I chord, to add interest. In the key of C major, the I major 7 chord is "C, E, G, B," the iii chord ("III–7") is E minor 7 ("E, G, B, D") and the vi minor 7 chord is A minor 7 ("A, C, E, G"). 
Both of the tonic substitute chords use notes from the tonic chord, which means that they usually support a melody originally designed for the tonic (I) chord.</p> <p>The relative major/minor substitution shares two common tones and is so called because it involves the relation between major and minor keys with the same key signatures, such as C major and A minor.</p> <p>The augmented triad on the fifth scale degree may be used as a substitute dominant, and may also be considered as ♭III+, for example in C: V+ = G–B–D♯, ♭III+ = E♭–G–B♮, and since in every key: D♯ = E♭. "Backdoor ii–V" in C: IV7–♭VII7–I. Chord symbols for the conventional ii–V progression are above the staff, with the chord symbols for the substitution in parentheses.</p> <p>The chord a minor third above, ♭VII7, may be substituted for the dominant, and may be preceded by its ii: iv7. Due to common use the two chords of the backdoor progression (IV7-♭VII7) may be substituted for the dominant chord. In C major the dominant would be G7: GBDF, sharing two common tones with B♭7: B♭DFA♭. A♭ and F serve as upper leading-tones back to G and E, respectively, rather than B♮ and F serving as the lower and upper leading-tones to C and E.</p> <h3 id="application" tabindex="-1">Application <a class="header-anchor" href="#application" aria-label="Permalink to "Application""></a></h3> <p>In jazz, chord substitutions can be applied by composers, arrangers, or performers. Composers may use chord substitutions when they are basing a new jazz tune on an existing chord progression from an old jazz standard or a song from a musical; arrangers for a big band or jazz orchestra may use chord substitutions in their arrangement of a tune, to add harmonic interest or give a different "feel" to a song; and instrumentalists may use chord substitutions in their performance of a song. Given that many jazz songs have repetition of internal sections, such as with a 32-bar AABA song form, performers or arrangers may use chord substitution within the A sections to add variety to the song.</p> <p>Jazz "comping" instruments (piano, guitar, organ, vibes) often use chord substitution to add harmonic interest to a jazz tune with slow harmonic change. For example, the jazz standard chord progression of "rhythm changes" uses a simple eight bar chord progression in the bridge with the chords III7, VI7, II7, V7; in the key of B♭, these chords are D7, G7, C7, and F7 (each for two bars). A jazz guitarist might add a "ii–V7" aspect to each chord, which would make the progression: "a minor, D7, d minor, G7, g minor, C7, c minor, F7. Alternatively, tritone substitutions could be applied to the progression.</p> <blockquote> <p>Note that both the back door progression and ♯IIo7, when substituted for V7, introduces notes that seem wrong or anachronistic to the V7 chord (such as the fourth and the major seventh). They work only because the given instances of those chords are familiar to the ear; hence when an improviser uses them against the V7, the listener's ear hears the given precedents for the event, instead of the conflict with the V7. — Coker (1997), p. 82</p> </blockquote> <p>Theoretically, any chord can substitute for any other chord, as long as the new chord supports the melody. In practice, though, only a few options sound musically and stylistically appropriate to a given melody. 
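<p>Among those few options, the tritone substitution described earlier can be stated almost mechanically: replace a dominant seventh chord with the dominant seventh whose root lies a tritone (six semitones) away, so that the two chords exchange their 3rd and 7th. The following is only a rough sketch (Python; spellings are simplified to one name per pitch class, so C♭ appears as B):</p> <pre><code class="language-python">NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def dominant7(root):
    """Notes of a dominant seventh chord: root, major 3rd, perfect 5th, minor 7th."""
    i = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(i + step) % 12] for step in (0, 4, 7, 10)]

def tritone_sub(root):
    """Root of the substitute dominant, six semitones away from the original root."""
    return NOTE_NAMES[(NOTE_NAMES.index(root) + 6) % 12]

# dominant7("G")               -> ['G', 'B', 'D', 'F']
# tritone_sub("G")             -> 'Db'
# dominant7(tritone_sub("G"))  -> ['Db', 'F', 'Ab', 'B']   (B standing in for C-flat)
# G7 and Db7 share the tritone B/Cb and F, so in the progression Dm7 - G7 - Cmaj7
# the G7 can become Db7, producing the chromatic bass line D - Db - C noted above.
</code></pre>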
This technique is used in music such as bebop or fusion to provide more sophisticated harmony, or to create a new-sounding re-harmonization of an old jazz standard.</p> <p>Jazz soloists and improvisers also use chord substitutions to help them add interest to their improvised solos. Jazz soloing instruments that can play chords, such as jazz guitar, piano, and organ players may use substitute chords to develop a chord solo over an existing jazz tune with slow-moving harmonies. Also, jazz improvisers may use chord substitution as a mental framework to help them create more interesting-sounding solos. For example, a saxophonist playing an improvised solo over the basic "rhythm changes" bridge (in B♭, this is "D7, G7, C7, and F7", each for two bars) might think of a more complex progression that uses substitute chords (e.g., "a minor, D7, d minor, G7, g minor, C7, c minor, F7). In doing so, this implies the substitute chords over the original progression, which adds interest for listeners.</p> <h2 id="tritone-substitution" tabindex="-1">Tritone substitution <a class="header-anchor" href="#tritone-substitution" aria-label="Permalink to "Tritone substitution""></a></h2> <p>The <a href="https://en.wikipedia.org/wiki/Tritone_substitution" target="_blank" rel="noreferrer">tritone substitution</a> is a common chord substitution found in both jazz and classical music. Where jazz is concerned, it was the precursor to more complex substitution patterns like Coltrane changes. Tritone substitutions are sometimes used in improvisation—often to create tension during a solo. Though examples of the tritone substitution, known in the classical world as an augmented sixth chord, can be found extensively in classical music since the Renaissance period, they were not heard until much later in jazz by musicians such as Dizzy Gillespie and Charlie Parker in the 1940s, as well as Duke Ellington, Art Tatum, Coleman Hawkins, Roy Eldridge and Benny Goodman.</p> <p>The tritone substitution can be performed by exchanging a dominant seventh chord for another dominant seven chord which is a tritone away from it. For example, in the key of C major one can use D♭7 instead of G7. (D♭ is a tritone away from G).</p> <p>In tonal music, a conventional perfect cadence consists of a dominant seventh chord followed by a tonic chord. For example, in the key of C major, the chord of G7 is followed by a chord of C. In order to execute a tritone substitution, common variant of this progression, one would replace the dominant seventh chord with a dominant chord that has its root a tritone away from the original.</p> <p>Franz Schubert's String Quintet in C major concludes with a dramatic final cadence that uses the third of the above progressions. The conventional G7 chord is replaced in bars 3 and 4 of the following example with a D♭7 chord, with a diminished fifth (G♮ as the enharmonic equivalent of Adouble flat); a chord otherwise known as a 'French sixth'.</p> <h2 id="coltrane-changes" tabindex="-1">Coltrane changes <a class="header-anchor" href="#coltrane-changes" aria-label="Permalink to "Coltrane changes""></a></h2> <p><a href="https://en.wikipedia.org/wiki/Coltrane_changes" target="_blank" rel="noreferrer">Coltrane changes</a> (Coltrane Matrix or cycle, also known as chromatic third relations and multi-tonic changes) are a harmonic progression variation using substitute chords over common jazz chord progressions. 
These substitution patterns were first demonstrated by jazz musician John Coltrane on the albums Bags & Trane (on the track "Three Little Words") and Cannonball Adderley Quintet in Chicago (on "Limehouse Blues"). Coltrane continued his explorations on the 1959 album Giant Steps and expanded on the substitution cycle in his compositions "Giant Steps" and "Countdown", the latter of which is a reharmonized version of Eddie Vinson's "Tune Up". The Coltrane changes are a standard advanced harmonic substitution used in jazz improvisation.</p> <youtube-embed video="5PVPM-KoILE" /><h3 id="function" tabindex="-1">Function <a class="header-anchor" href="#function" aria-label="Permalink to "Function""></a></h3> <p>The changes serve as a pattern of chord substitutions for the ii–V–I progression (supertonic–dominant–tonic) and are noted for the tonally unusual root movement by major thirds (either up or down by a major third interval), creating an augmented triad. Root movement by thirds is unusual in jazz, as the norm is circle of fifths root movement, such as ii-V-I, which in the key of C is D dorian, G7 and C major.</p> <h3 id="influences" tabindex="-1">Influences <a class="header-anchor" href="#influences" aria-label="Permalink to "Influences""></a></h3> <p>David Demsey, saxophonist and coordinator of jazz studies at William Paterson University, cites a number of influences leading to Coltrane's development of these changes. After Coltrane's death it was proposed that his "preoccupation with... chromatic third-relations" was inspired by religion or spirituality, with three equal key areas having numerological significance representing a "magic triangle", or, "the trinity, God, or unity." However, Demsey shows that though this meaning was of some importance, third relationships were much more "earthly", or rather historical, in origin. Mention should be made of his interests in Indian ragas during the early 1960s, the Trimurti of Vishnu, Brahma and Shiva may well have been an inherent reference in his chromatic third relations, tritone substitutes, and so on. In playing that style, Coltrane found it "easy to apply the harmonic ideas I had... I started experimenting because I was striving for more individual development." He developed his sheets of sound style while playing with Miles Davis and with pianist Thelonious Monk during this period. In terms of the origin of this “sheets of sound” technique, saxophonist Odean Pope considers pianist Hasaan Ibn Ali a major influence on Coltrane and his development of this signature style.</p> <p>Coltrane studied harmony with Dennis Sandole and at the Granoff School of Music in Philadelphia. He explored contemporary techniques and theory. He also studied the Thesaurus of Scales and Melodic Patterns by Nicolas Slonimsky (1947).</p> <p>The first appearance of the "Coltrane changes" appear in the verse to the standard "Till the Clouds Roll By" by Jerome Kern. The bridge of the Richard Rodgers song and jazz standard "Have You Met Miss Jones?" (1937) predated Tadd Dameron's "Lady Bird", after which Coltrane named his "Lazy Bird", by incorporating modulation by major third(s). (highlighted yellow below) "Giant Steps" and "Countdown" may both have taken the inspiration for their augmented tonal cycles from "Have You Met Miss Jones?".</p> <blockquote> <p>"Have You Met Miss Jones?" 
B section chord progression (bridge):</p> <p>│ B♭Maj7 │ A♭m7 D♭7 │ G♭Maj7 │ Em7 A7 │<br> │ DMaj7 │ A♭m7 D♭7 │ G♭Maj7 │ Gm7 C7 │</p> </blockquote> <youtube-embed video="62tIvfP9A2w" /><h3 id="major-thirds-cycle" tabindex="-1">Major thirds cycle <a class="header-anchor" href="#major-thirds-cycle" aria-label="Permalink to "Major thirds cycle""></a></h3> <p>The harmonic use of the chromatic third relation originated in the Romantic era and may occur on any structural level, for example in chord progressions or through key changes. The standard Western chromatic scale has twelve equidistant semitones. When arranged according to the circle of fifths, it looks like this.</p> <blockquote> <p>Precisely because of this equidistancy, the roots of these three chords can produce a destabilizing effect; if C, A♭ and E appear as the tonic pitches of three key areas on a larger level, the identity of the composition's tonal center can only be determined by the closure of the composition. — Demsey (1991)</p> </blockquote> <p>Looking above at the marked chords from "Have You Met Miss Jones?", B♭, G♭ and D are spaced a major third apart. On the circle of fifths this appears as an equilateral triangle.</p> <p>By rotating the triangle, all of the thirds cycles can be shown. Note that there are only four unique thirds cycles. This approach can be generalized; different interval cycles will appear as different polygons on the diagram.</p> <youtube-embed video="CdIstkTHqO8" /><h3 id="standard-substitution" tabindex="-1">Standard substitution <a class="header-anchor" href="#standard-substitution" aria-label="Permalink to "Standard substitution""></a></h3> <p>Although "Giant Steps" and "Countdown" are perhaps the most famous examples, both of these compositions use slight variants of the standard Coltrane changes. (The first eight bars of "Giant Steps" use a shortened version that does not return to the I chord, and in "Countdown" the progression begins on ii7 each time.) The standard substitution can be found in several Coltrane compositions and arrangements recorded around this time. These include: "26-2" (a reharmonization of Charlie Parker's "Confirmation"), "Satellite" (based on the standard "How High the Moon"), "Exotica" (loosely based on the harmonic form of "I Can't Get Started"), Coltrane's arrangement of "But Not for Me", and the bridge of his arrangement of "Body and Soul".</p> <p>In "Fifth House" (based on "Hot House", i.e. "What Is This Thing Called Love") the standard substitution is implied over an ostinato bass pattern with no chordal instrument instructed to play the chord changes. When Coltrane's improvisation superimposes this progression over the ostinato bass, it is easy to hear how he used this concept for his more free playing in later years.</p> <youtube-embed video="L_XJ_s5IsQc" />]]></content:encoded> </item> <item> <title><![CDATA[Sight reading]]></title> <link>https://chromatone.center/theory/notes/staff/sight-reading/</link> <guid>https://chromatone.center/theory/notes/staff/sight-reading/</guid> <pubDate>Thu, 02 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[The complexity of perfecting staff notation reading]]></description> <content:encoded><![CDATA[<p>Sight-singing is used to describe a singer who is sight-reading.
Both activities require the musician to play or sing the notated rhythms and pitches.</p> <h2 id="psychology" tabindex="-1">Psychology <a class="header-anchor" href="#psychology" aria-label="Permalink to "Psychology""></a></h2> <p>The ability to sight-read partly depends on a strong short-term musical memory. An experiment on sight reading using an eye tracker indicates that highly skilled musicians tend to look ahead further in the music, storing and processing the notes until they are played; this is referred to as the eye–hand span.</p> <p>Storage of notational information in working memory can be expressed in terms of the amount of information (load) and the time for which it must be held before being played (latency). The relationship between load and latency changes according to tempo, such that t = x/y, where t is the change in tempo, x is the change in load, and y is the change in latency. Some teachers and researchers have proposed that the eye–hand span can be trained to be larger than it would otherwise be under normal conditions, leading to more robust sight-reading ability.</p> <p>Human memory can be divided into three broad categories: long-term memory, sensory memory, and short-term (working) memory. According to the formal definition, working memory is "a system for temporarily storing and managing the information required to carry out complex cognitive tasks such as learning, reasoning, and comprehension". The paramount feature that distinguishes the working memory from both the long-term and sensory memory is this system's ability to simultaneously process and store information. The knowledge has what is called a "limited capacity", so there is only a certain amount of information that can be stored and it is easily accessible for only a small window of time after it has been processed, with a recall time block of roughly fifteen seconds to one minute.</p> <p>Experiments dealing with memory span have been conducted by George Miller in 1956 that indicated, "Most common number of items that can be stored in the working memory is five plus or minus two.” However, if this information is not retained and stored (“consolidated”) in one's long-term memory, it will fade quickly.</p> <p>Research indicates that the main area of the brain associated with the working memory is the prefrontal cortex. The prefrontal cortex is located in the frontal lobe of the brain. This area deals with cognition and contains two major neural loops or pathways that are central to processing tasks via the working memory: the visual loop, which is necessary for the visual component of the task, and the phonological loop, which deals with the linguistic aspects of the task (i.e. repeating the word or phrase). Although the hippocampus, in the temporal lobe, is the brain structure most frequently paired with memories, studies have indicated that its role is more vital for consolidation of the short-term memories into long-term ones than the ability to process, carry out, and briefly recall certain tasks.</p> <p>This type of memory has specifically come into focus when discussing sight reading, since the process of looking at musical notes for the first time and deciphering them while playing an instrument can be considered a complex task of comprehension. The main conclusion in terms of this idea is that working memory, short-term memory capacity and mental speed are three important predictors for sight reading achievement. 
Although none of the studies discredits the correlation between the amount of time one spends practicing and musical ability, specifically sight-reading proficiency, more studies are pointing to the level at which one’s working memory functions as the key factor in sight-reading abilities. As stated in one such study, "Working memory capacity made a statistically significant contribution as well (about 7 percent, a medium-size effect). In other words, if you took two pianists with the same amount of practice, but different levels of working memory capacity, it's likely that the one higher in working memory capacity would have performed considerably better on the sight-reading task."</p> <p>Based on the research and opinions of multiple musicians and scientists, the take home message about one's sight-reading ability and working memory capacity seems to be that “The best sight-readers combined strong working memories with tens of thousands of hours of practice.”</p> <p>Sight-reading also depends on familiarity with the musical idiom being performed; this permits the reader to recognize and process frequently occurring patterns of notes as a single unit, rather than individual notes, thus achieving greater efficiency. This phenomenon, which also applies to the reading of language, is referred to as chunking. Errors in sight-reading tend to occur in places where the music contains unexpected or unusual sequences; these defeat the strategy of "reading by expectation" that sight-readers typically employ.</p> <h2 id="professional-use" tabindex="-1">Professional use <a class="header-anchor" href="#professional-use" aria-label="Permalink to "Professional use""></a></h2> <p>Studio musicians (e.g., musicians employed to record pieces for commercials, etc.) often record pieces on the first take without having seen them before. Often, the music played on television is played by musicians who are sight-reading. This practice has developed through intense commercial competition in these industries.</p> <p>Kevin McNerney, jazz musician, professor, and private instructor, describes auditions for University of North Texas Jazz Lab Bands as being almost completely based on sight-reading: "you walk into a room and see three or four music stands in front of you, each with a piece of music on it (in different styles ...). You are then asked to read each piece in succession."</p> <p>This emphasis on sight-reading, according to McNerney, prepares musicians for studio work "playing backing tracks for pop performers or recording [commercials]". The expense of the studio, musicians, and techs makes sight-reading skills essential. Typically, a studio performance is "rehearsed" only once to check for copying errors before recording the final track. Many professional big bands also sight-read every live performance. 
They are known as "rehearsal bands", even though their performance is the rehearsal.</p> <p>According to Frazier, score reading is an important skill for those interested in the conducting profession and "Conductors such as the late Robert Shaw and Yoel Levi have incredibly strong piano skills and can read at sight full orchestral scores at the piano" (a process which requires the pianist to make an instant piano reduction of the key parts of the score).</p> <h2 id="pedagogy" tabindex="-1">Pedagogy <a class="header-anchor" href="#pedagogy" aria-label="Permalink to "Pedagogy""></a></h2> <p>Although 86% of piano teachers polled rated sight-reading as the most important or a highly important skill, only 7% of them said they address it systematically. Reasons cited were a lack of knowledge of how to teach it, inadequacy of the training materials they use, and deficiency in their own sight-reading skills. Teachers also often emphasize rehearsed reading and repertoire building for successful recitals and auditions to the detriment of sight-reading and other functional skills.</p> <p>Hardy reviewed research on piano sight-reading pedagogy and identified a number of specific skills essential to sight-reading proficiency:</p> <ul> <li>Technical fundamentals in reading and fingering</li> <li>Visualization of keyboard topography</li> <li>Tactile facility (psychomotor skills) and memory</li> <li>Ability to read, recognize, and remember groups of notes (directions, patterns, phrases, chords, rhythmic groupings, themes, inversions, intervals, etc.)</li> <li>Ability to read and remember ahead of playing with more and wider progressive fixations</li> <li>Aural imagery (ear-playing and sight-singing improves sight-reading)</li> <li>Ability to keep the basic pulse, read, and remember rhythm</li> <li>Awareness and knowledge of the music's structure and theory</li> </ul> <p>Beauchamp identifies five building blocks in the development of piano sight-reading skills:</p> <ul> <li>Grand-staff knowledge</li> <li>Security within the five finger positions</li> <li>Security with keyboard topography</li> <li>Security with basic accompaniment patterns</li> <li>Understanding of basic fingering principles</li> </ul> <p>Grand-staff knowledge consists of fluency in both clefs such that reading a note evokes an automatic and immediate physical response to the appropriate position on the keyboard. Beauchamp asserts it is better to sense and know where the note is than what the note is. The performer does not have time to think of the note name and translate it to a position, and the non-scientific note name does not indicate the octave to be played. Beauchamp reports success using a Key/Note Visualizer, note-reading flashcards, and computer programs in group and individual practice to develop grand-staff fluency.</p> <p>Udtaisuk also reports that a sense of keyboard geography and an ability to quickly and efficiently match notes to keyboard keys is important for sight-reading. He found that "computer programs and flash cards are effective ways to teach students to identify notes [and] enhance a sense of keyboard geography by highlighting the relationships between the keyboard and the printed notation".</p> <p>Most students do not sight-read well because it requires specific instruction, which is seldom given. A major challenge in sight-reading instruction, according to Hardy, is obtaining enough practice material. 
Since practicing rehearsed reading does not help improve sight-reading, a student can only use a practice piece once. Moreover, the material must be at just the right level of difficulty for each student, and a variety of styles is preferred. Hardy suggests music teachers cooperate to build a large lending library of music and purchase inexpensive music from garage sales and store sales.</p> <p><a href="https://en.wikipedia.org/wiki/Sight-reading" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Sight-reading</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/Michelangelo_Caravaggio_026.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Hexads and more]]></title> <link>https://chromatone.center/theory/chords/more/</link> <guid>https://chromatone.center/theory/chords/more/</guid> <pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Extended chords, added tone chords and other complex interval combinations]]></description> <content:encoded><![CDATA[<h2 id="chords-with-6-and-more-notes" tabindex="-1">Chords with 6 and more notes <a class="header-anchor" href="#chords-with-6-and-more-notes" aria-label="Permalink to "Chords with 6 and more notes""></a></h2> <chroma-profile-collection :collection="more" />]]></content:encoded> <enclosure url="https://chromatone.center/jacek-dylag.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Pentads]]></title> <link>https://chromatone.center/theory/chords/pentads/</link> <guid>https://chromatone.center/theory/chords/pentads/</guid> <pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[11th diatonic chord extensions and other 5-note chords]]></description> <content:encoded><![CDATA[<h2 id="pentads-–-the-five-note-chords" tabindex="-1">Pentads – the five note chords <a class="header-anchor" href="#pentads-–-the-five-note-chords" aria-label="Permalink to "Pentads – the five note chords""></a></h2> <chroma-profile-collection :collection="pentads" /><youtube-embed video="RFH1LD4KdWs" />]]></content:encoded> <enclosure url="https://chromatone.center/gabriel-gurrola.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Functional tonal harmony]]></title> <link>https://chromatone.center/theory/harmony/tonal/</link> <guid>https://chromatone.center/theory/harmony/tonal/</guid> <pubDate>Wed, 01 Sep 2021 00:00:00 GMT</pubDate> <description><![CDATA[Relating chords by their interval content]]></description> <content:encoded><![CDATA[<p><strong>Functional</strong> means based on <a href="https://en.wikipedia.org/wiki/Function_(music)" target="_blank" rel="noreferrer">functions</a>, <strong>tonal</strong> means based on <a href="https://en.wikipedia.org/wiki/Tonality" target="_blank" rel="noreferrer">tonality</a>, harmony means lively movement through all that emotional space.</p> <h2 id="tonality" tabindex="-1">Tonality <a class="header-anchor" href="#tonality" aria-label="Permalink to "Tonality""></a></h2> <p>So let’s begin with tonality. 99% of the songs you hear day to day are Tonal. Tonality is a system of harmony created & used in the Common-Practice Period (that is, in the Baroque, Classical and Romantic Eras of classical music), so from about 1700 to 1900. Tonal harmony is the ‘standard’ music theory that you learn through your Classical music studies. 
And, in fact, most of my previous lessons presuppose or function within ‘tonal harmony’.</p> <p>Tonality has the following features:</p> <ul> <li>It uses Major and minor keys</li> <li>It uses a Functional Harmony</li> <li>It has a Tonal Centre (i.e. root note)</li> </ul> <p>So point one is self-explanatory. Points two and three are more interesting. Tonality uses a Functional Harmony and has a Tonal Centre (that is, a Tonic). In tonal harmony every chord has a function, it can be categorised as either:</p> <ul> <li>Pre-Dominant;</li> <li>Dominant; or</li> <li>Tonic</li> </ul> <p>The function of a Pre-Dominant chord is to get you to the Dominant chord. The function of a Dominant chord is to get you to the Tonic chord, thus the harmony (that is, the chords) are ‘functional’.</p> <p>And the Tonic chord is the ‘Tonal Centre’. This can be thought of as a ‘Centre of Gravity’ to which all other chords gravitate and resolved into.</p> <p>Thus, a tonal chord progression sounds like it is moving towards the tonic. For example, take the below chord progression:</p> <p>Em7 A7 Dm7 G7 ???</p> <h2 id="what-chord-comes-next" tabindex="-1">What chord comes next? <a class="header-anchor" href="#what-chord-comes-next" aria-label="Permalink to "What chord comes next?""></a></h2> <p>We all know instinctively that it should be a CMaj7. It just SOUNDS like it needs to go back home and resolve to the tonic; thus there is a ‘tonal centre’. (Notice also that tonal chord progressions tend to move via the Circle of Fifths).</p> <p>The G7 gravitates and wants to resolve to CMaj7. This is because all Dominant chords have a tritone interval between their 3rd and 7th (in the case of G7 – B & F). This is known as a ‘diatonic tritone‘. The tritone is a very unstable and dissonant interval that wants to resolve. And it does so either:</p> <ul> <li>Inwards to C & E (1 & 3 of CMaj7 – creating a G7 to CMaj7 progression)</li> <li>Outwards to B♭ & G♭ (3 & 1 of G♭Maj7 – creating a D♭7 to G♭Maj7 progression)</li> </ul> <p>This ‘diatonic tritone’ is the basis of all tonal music. It is what makes the Dominant chord feel like it wants to resolve to the tonic (thus making the music ‘tonal’).</p> <h2 id="functional-tonal-harmony-1" tabindex="-1">Functional Tonal Harmony 1 <a class="header-anchor" href="#functional-tonal-harmony-1" aria-label="Permalink to "Functional Tonal Harmony 1""></a></h2> <p>Part one of three. Discusses consonance and dissonance, construction of the major scale, tendency tones, and harmonic function.</p> <youtube-embed video="qzzLj1tbVnA" /><h2 id="functional-tonal-harmony-2-minor-mode" tabindex="-1">Functional Tonal Harmony 2: Minor mode <a class="header-anchor" href="#functional-tonal-harmony-2-minor-mode" aria-label="Permalink to "Functional Tonal Harmony 2: Minor mode""></a></h2> <p>Discusses harmony in the minor mode. The three versions of the scale: Natural minor, harmonic minor and Melodic minor, and why we use them.</p> <youtube-embed video="d5jdbqU-DLw" /><h2 id="functional-tonal-harmony-3-secondary-dominants" tabindex="-1">Functional tonal harmony 3: Secondary dominants <a class="header-anchor" href="#functional-tonal-harmony-3-secondary-dominants" aria-label="Permalink to "Functional tonal harmony 3: Secondary dominants""></a></h2> <p>In music theory, a secondary dominant chord is a type of chord that functions as the dominant (V) of a chord other than the tonic (I) chord. 
This means that it creates a sense of tension that resolves to a chord other than the tonic.</p> <p>To create a secondary dominant chord, you use the dominant (V) chord of the chord you want to resolve to. For example, let's say we're in the key of C major, and we want to create a secondary dominant chord that resolves to the IV chord (F major). The V chord of F major is C7 (C dominant 7th), so we would play a C7 chord before the F major chord to create a sense of tension and resolution.</p> <p>The use of secondary dominant chords is a common technique in many musical styles, including jazz, pop, and classical music. They can add interest and complexity to a chord progression by introducing unexpected harmonic changes and creating new tonal centers.</p> <p>It's important to note that secondary dominant chords should be used tastefully and in moderation, as using them too frequently can create a sense of harmonic instability and disrupt the overall flow of a musical composition.</p> <youtube-embed video="6a5HvGQfDgg" /><youtube-embed video="py4HaueW50Q" /><h2 id="harmony-is-in-changes" tabindex="-1">Harmony is in changes <a class="header-anchor" href="#harmony-is-in-changes" aria-label="Permalink to "Harmony is in changes""></a></h2> <p>We build harmony out of different chords as steps of the emotional ladder, but what really counts is the movement itself. We can go down and up, slow down or speed up, jump and even fly above if we want to. This is the story the composer tells to the listeners, and it's woven with movements.</p> <p>The functional relations between chords in a scale can be expanded quite drastically with deeper analysis of the underlying intervals that create the desired change in the mood of the music. Many new relations appear, building up into a huge landscape of the tonal space.</p> <h2 id="tonal-harmony-in-3d" tabindex="-1">Tonal Harmony in 3D <a class="header-anchor" href="#tonal-harmony-in-3d" aria-label="Permalink to "Tonal Harmony in 3D""></a></h2> <youtube-embed video="RcUXObvRLb4" /><h2 id="functional-chords-on-scale-degrees" tabindex="-1">Functional chords on scale degrees <a class="header-anchor" href="#functional-chords-on-scale-degrees" aria-label="Permalink to "Functional chords on scale degrees""></a></h2> <p>Here are some common chords that can be constructed on each of the scale degrees in tonal harmony:</p> <h3 id="tonic-scale-degrees" tabindex="-1">Tonic Scale Degrees <a class="header-anchor" href="#tonic-scale-degrees" aria-label="Permalink to "Tonic Scale Degrees""></a></h3> <ul> <li>First scale degree (I): major triad (I), major seventh chord (IMaj7)</li> <li>Third scale degree (iii): minor triad (iii), minor seventh chord (iiim7)</li> <li>Sixth scale degree (vi): minor triad (vi), minor seventh chord (vim7), dominant seventh chord (V7/vi)</li> </ul> <h3 id="subdominant-scale-degrees" tabindex="-1">Subdominant Scale Degrees <a class="header-anchor" href="#subdominant-scale-degrees" aria-label="Permalink to "Subdominant Scale Degrees""></a></h3> <ul> <li>Second scale degree (ii): minor triad (ii), minor seventh chord (iim7), dominant seventh chord (V7/ii)</li> <li>Fourth scale degree (IV): major triad (IV), major seventh chord (IVMaj7), dominant seventh chord (V7/IV)</li> </ul> <h3 id="dominant-scale-degrees" tabindex="-1">Dominant Scale Degrees <a class="header-anchor" href="#dominant-scale-degrees" aria-label="Permalink to "Dominant Scale Degrees""></a></h3> <ul> <li>Fifth scale degree (V): dominant seventh chord (V7), dominant ninth chord (V9), dominant
thirteenth chord (V13), altered dominant chord (V7alt)</li> <li>Seventh scale degree (vii°): diminished triad (vii°), half-diminished seventh chord (viiø7), dominant seventh flat nine chord (V7b9/vii°), fully diminished seventh chord (vii°7)</li> </ul> ]]></content:encoded> </item> <item> <title><![CDATA[Solmization]]></title> <link>https://chromatone.center/theory/notes/solmization/</link> <guid>https://chromatone.center/theory/notes/solmization/</guid> <pubDate>Tue, 31 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Systems of attributing a distinct syllable to each note of a musical scale.]]></description> <content:encoded><![CDATA[<p>Solmization is a system of attributing a distinct syllable to each note of a musical scale. Various forms of solmization are in use and have been used throughout the world, but solfège is the most common convention in countries of Western culture.</p> <h2 id="solfege" tabindex="-1">Solfège <a class="header-anchor" href="#solfege" aria-label="Permalink to "Solfège""></a></h2> <p>Syllables are assigned to the notes of the scale and enable the musician to audiate, or mentally hear, the pitches of a piece of music being seen for the first time and then to sing them aloud. Through the Renaissance (and much later in some shapenote publications) various interlocking 4, 5 and 6-note systems were employed to cover the octave. The tonic sol-fa method popularized the seven syllables commonly used in English-speaking countries: do (or doh in tonic sol-fa), re, mi, fa, so(l), la, and ti (or si).</p> <p>The seven syllables normally used for this practice in English-speaking countries are: do, re, mi, fa, sol, la, and ti (with sharpened notes of di, ri, fi, si, li and flattened notes of te, le, se, me, ra). The system for other Western countries is similar, though si is often used as the final syllable rather than ti.</p> <p>There are two current ways of applying solfège:</p> <ol> <li><strong>fixed do</strong>, where the syllables are always tied to specific pitches (e.g. "do" is always "C-natural")</li> <li><strong>movable do</strong>, where the syllables are assigned to scale degrees, with "do" always the first degree of the major scale.</li> </ol> <p><img src="./The_Hand_of_Guido.jpg" alt=""></p> <p>In eleventh-century Italy, the music theorist Guido of Arezzo in his work "Micrologus" invented a notational system that named the six notes of the hexachord after the first syllable of each line of the Latin hymn Ut queant laxis, the "Hymn to St. John the Baptist", yielding ut, re, mi, fa, sol, la. Each successive line of this hymn begins on the next scale degree, so each note's name was the syllable sung at that pitch in this hymn.</p> <blockquote> <p><strong>Ut</strong> queant laxīs <strong>re</strong>sonāre fībrīs <strong>Mī</strong>ra gestōrum <strong>fa</strong>mulī tuōrum, <strong>Sol</strong>ve pollūtī <strong>la</strong>biī reātum, <strong>S</strong>ancte <strong>I</strong>ōhannēs.</p> </blockquote> <p>The words were written by Paulus Diaconus in the 8th century. They translate as:</p> <blockquote> <p>So that your servants may, with loosened voices, Resound the wonders of your deeds, Clean the guilt from our stained lips, O St. John.</p> </blockquote> <p>"Ut" was changed in the 1600s in Italy to the open syllable Do, at the suggestion of the musicologist Giovanni Battista Doni (based on the first syllable of his surname), and Si (from the initials for "Sancte Iohannes") was added to complete the diatonic scale. 
In Anglophone countries, "si" was changed to "ti" by Sarah Glover in the nineteenth century so that every syllable might begin with a different letter. "Ti" is used in tonic sol-fa (and in the famed American show tune "Do-Re-Mi").</p> <h2 id="movable-do" tabindex="-1">Movable do <a class="header-anchor" href="#movable-do" aria-label="Permalink to "Movable do""></a></h2> <p>In Movable do or tonic sol-fa, each syllable corresponds to a scale degree. This is analogous to the Guidonian practice of giving each degree of the hexachord a solfège name.</p> <p>Movable do is frequently employed in Australia, China, Japan (with 5th being so, and 7th being si), Ireland, the United Kingdom, the United States, Hong Kong, and English-speaking Canada. The movable do system is a fundamental element of the Kodály method used primarily in Hungary, but with a dedicated following worldwide. In the movable do system, each solfège syllable corresponds not to a pitch, but to a scale degree: The first degree of a major scale is always sung as "do", the second as "re", etc. (For minor keys, see below.) In movable do, a given tune is therefore always sol-faed on the same syllables, no matter what key it is in.</p> <p>Passages in a minor key may be sol-faed in one of two ways in movable do: either starting on do (using "me", "le", and "te" for the lowered third, sixth, and seventh degrees, and "la" and "ti" for the raised sixth and seventh degrees), which is referred to as "do-based minor", or starting on la (using "fi" and "si" for the raised sixth and seventh degrees). The latter (referred to as "la-based minor") is sometimes preferred in choral singing, especially with children.</p> <h3 id="tonic-sol-fa" tabindex="-1">Tonic sol-fa <a class="header-anchor" href="#tonic-sol-fa" aria-label="Permalink to "Tonic sol-fa""></a></h3> <p>Tonic sol-fa (or tonic sol-fah) is a pedagogical technique for teaching sight-singing, invented by Sarah Ann Glover (1785–1867) of Norwich, England and popularised by John Curwen, who adapted it from a number of earlier musical systems. It uses a system of musical notation based on movable do solfège, whereby every note is given a name according to its relationship with other notes in the key: the usual staff notation is replaced with anglicized solfège syllables (e.g. do, re, mi, fa, sol, la, ti, do) or their abbreviations (d, r, m, f, s, l, t, d). "Do" is chosen to be the tonic of whatever key is being used (thus the terminology moveable Do in contrast to the fixed Do system used by John Pyke Hullah). The original solfège sequence started with "Ut" which later became "Do".</p> <p><img src="./Solfege_Ireland.jpg" alt=""></p> <p>Solmization that represents the functions of pitches (such as tonic sol-fa) is called "functional" solmization. All musicians that use functional solmization use "do" to represent the tonic (also known as the "keynote") in the major mode. However, approaches to the minor mode fall into two camps. 
Some musicians use "do" to represent the tonic in minor (a parallel approach), whereas others prefer to label the tonic in minor as "la" (a relative approach). Both systems have their advantages: The former system more directly represents the scale-degree functions of the pitches in a key; the latter more directly represents the intervals between pitches in any given key signature.</p> <p><img src="./Curwen_Hand_Signs_MT.jpg" alt=""></p> <h2 id="fixed-do" tabindex="-1">Fixed do <a class="header-anchor" href="#fixed-do" aria-label="Permalink to "Fixed do""></a></h2> <p>In Fixed do, each syllable corresponds to the name of a note. This is analogous to the Romance system naming pitches after the solfège syllables, and is used in Romance and Slavic countries, among others, including Spanish-speaking countries.</p> <p>In the major Romance and Slavic languages, the syllables Do, Re, Mi, Fa, Sol, La, and Si are used to name notes the same way that the letters C, D, E, F, G, A, and B are used to name notes in English. For native speakers of these languages, solfège is simply singing the names of the notes, omitting any modifiers such as "sharp" or "flat" to preserve the rhythm.</p> <h3 id="chromatic-variants" tabindex="-1">Chromatic variants <a class="header-anchor" href="#chromatic-variants" aria-label="Permalink to "Chromatic variants""></a></h3> <p>Several chromatic fixed-do systems have also been devised to account for chromatic notes, and even for double-sharp and double-flat variants. The Yehnian system, the first 24-EDO solfège, proposes syllables even for quarter tones, has no exceptions to its rules, and works for both Si and Ti users.</p> <table class="m-auto text-center" dir="ltr" cellspacing="0" cellpadding="0"> <tbody> <tr > <td colspan="2" rowspan="1" >Note name</td> <td colspan="5" rowspan="1" >Syllable</td> <td colspan="1" rowspan="2" > Pitch class </td> </tr> <tr > <td >English</td> <td >Romance</td> <td >Traditional</td> <td >Shearer</td> <td >Siler</td> <td >Sotorrio</td> <td >Yehnian (chromatic)</td> </tr> <tr > <td >C♭</td> <td >Do♭</td> <td> </td> <td >de</td> <td >do</td> <td >(Tsi)</td> <td >Də</td> <td >11</td> </tr> <tr > <td >C</td> <td >Do</td> <td >do</td> <td >do</td> <td >da</td> <td >Do</td> <td >Do</td> <td >0</td> </tr> <tr > <td >C♯</td> <td >Do♯</td> <td> </td> <td >di</td> <td >de</td> <td >Ga</td> <td >Du</td> <td >1</td> </tr> <tr > <td >D♭</td> <td >Re♭</td> <td> </td> <td >ra</td> <td >ro</td> <td >Ga</td> <td >Rə</td> <td >1</td> </tr> <tr > <td >D</td> <td >Re</td> <td >re</td> <td >re</td> <td >ra</td> <td >Ray</td> <td >Re</td> <td >2</td> </tr> <tr > <td >D♯</td> <td >Re♯</td> <td> </td> <td >ri</td> <td >re</td> <td >Nu</td> <td >Ru</td> <td >3</td> </tr> <tr > <td >E♭</td> <td >Mi♭</td> <td> </td> <td >me</td> <td >mo</td> <td >Nu</td> <td >Mə</td> <td >3</td> </tr> <tr > <td >E</td> <td >Mi</td> <td >mi</td> <td >mi</td> <td >ma</td> <td >Mi</td> <td >Mi</td> <td >4</td> </tr> <tr > <td >E♯</td> <td >Mi♯</td> <td> </td> <td >mai</td> <td >me</td> <td >(Fa)</td> <td >Mu</td> <td >5</td> </tr> <tr > <td >F♭</td> <td >Fa♭</td> <td> </td> <td >fe</td> <td >fo</td> <td >(Mi)</td> <td >Fə</td> <td >4</td> </tr> <tr > <td >F</td> <td >Fa</td> <td >fa</td> <td >fa</td> <td >fa</td> <td >Fa</td> <td >Fa</td> <td >5</td> </tr> <tr > <td >F♯</td> <td >Fa♯</td> <td> </td> <td >fi</td> <td >fe</td> <td >Jur</td> <td >Fu</td> <td >6</td> </tr> <tr > <td >G♭</td> <td >Sol♭</td> <td> </td> <td >se</td> <td >so</td> <td >Jur</td> <td >Səl / Sə</td> <td 
>6</td> </tr> <tr > <td >G</td> <td >Sol</td> <td >sol</td> <td >so</td> <td >sa</td> <td >Sol</td> <td >Sol</td> <td >7</td> </tr> <tr > <td >G♯</td> <td >Sol♯</td> <td> </td> <td >si</td> <td >se</td> <td >Ki</td> <td >Sul / Su</td> <td >8</td> </tr> <tr > <td >A♭</td> <td >La♭</td> <td> </td> <td >le</td> <td >lo</td> <td >Ki</td> <td >Lə</td> <td >8</td> </tr> <tr > <td >A</td> <td >La</td> <td >la</td> <td >la</td> <td >la</td> <td >La</td> <td >La</td> <td >9</td> </tr> <tr > <td >A♯</td> <td >La♯</td> <td> </td> <td >li</td> <td >le</td> <td >Pe</td> <td >Lu</td> <td >10</td> </tr> <tr > <td >B♭</td> <td >Si♭</td> <td> </td> <td >te</td> <td >to</td> <td >Pe</td> <td >Sə / Tə</td> <td >10</td> </tr> <tr > <td >B</td> <td >Si</td> <td >si</td> <td >ti</td> <td >ta</td> <td >Tsi</td> <td >Si / Ti</td> <td >11</td> </tr> <tr > <td >B♯</td> <td >Si♯</td> <td> </td> <td >tai</td> <td >te</td> <td >(Do)</td> <td >Su / Tu</td> <td >0</td> </tr> </tbody> </table> <h3 id="comparison-of-the-two-systems" tabindex="-1">Comparison of the two systems <a class="header-anchor" href="#comparison-of-the-two-systems" aria-label="Permalink to "Comparison of the two systems""></a></h3> <p>Movable Do corresponds to our psychological experience of normal tunes. If the song is sung a tone higher it is still perceived to be the same song, and the notes have the same relationship to each other, but in a fixed Do all the note names would be different. A movable Do emphasizes the musicality of the tune as the psychological perception of the notes is always relative to a key for the vast majority of people that do not have absolute pitch.</p> <p>Jose Sotorrio argues that fixed-do is preferable for serious musicians, as music involving complex modulations and vague tonality is often too ambiguous with regard to key for any movable system. That is, without a prior analysis of the music, any movable-do system would inevitably need to be used like a fixed-do system anyway, thus causing confusion. With fixed-do, the musician learns to regard any syllable as the tonic, which does not force them to make an analysis as to which note is the tonic when ambiguity occurs. Instead, with fixed-do the musician will already be practiced in thinking in multiple/undetermined tonalities using the corresponding syllables.</p> <p>In comparison to the movable do system, which draws on short-term relative pitch skills involving comparison to a pitch identified as the tonic of the particular piece being performed, fixed do develops long-term relative pitch skills involving comparison to a pitch defined independently of its role in the piece, a practice closer to the definition of each note in absolute terms as found in absolute pitch.</p> <p>Instrumentalists who begin sight-singing for the first time in college as music majors find movable do to be the system more consistent with the way they learned to read music.</p> <p>For choirs, sight-singing fixed do using chromatic movable do syllables is more suitable than sight-singing movable do for reading atonal music, polytonal music, pandiatonic music, music that modulates or changes key often, or music in which the composer simply did not write a key signature. 
It is not uncommon for this to be the case in modern or contemporary choral works.</p> <h2 id="tonic-sol-fa-notation" tabindex="-1">Tonic sol-fa notation <a class="header-anchor" href="#tonic-sol-fa-notation" aria-label="Permalink to "Tonic sol-fa notation""></a></h2> <p>In Curwen's system, the notes of the major scale (of any key) are notated with the single letters d, r, m, f, s, l, and t. For notes above the principal octave, an apostrophe follows the letter; notes below the principal octave have a subscript mark. Chromatic alterations are marked by the following vowel, "e" for sharp (pronounced "ee") and "a" for flat (pronounced "aw"). Thus, the ascending and descending chromatic scale is notated:</p> <blockquote> <p>d de r re m f fe s se l le t d'</p> <p>d' t ta l la s sa f m ma r ra d</p> </blockquote> <p>Such chromatic notes appear only as ornaments or as preparation for a modulation; once the music has modulated, then the names for the new key are used. The modulation itself is marked by superscript of the old note name preceding its new name; for example, in modulation to the dominant, the new tonic is notated as sd. The music then proceeds in the new key until another modulation is notated.</p> <h2 id="arabic-system" tabindex="-1">Arabic system <a class="header-anchor" href="#arabic-system" aria-label="Permalink to "Arabic system""></a></h2> <p>An alternative theory argues that the solfège syllables (do, re, mi, fa, sol, la, ti) derive from the syllables of an Arabic solmization system درر مفصّلات Durar Mufaṣṣalāt ("Detailed Pearls") (<strong>dāl, rā', mīm, fā', ṣād, lām, tā</strong>), mentioned in the works of Francisci a Mesgnien Meninski in 1680 and later discussed by Jean-Benjamin de La Borde in 1780. However, there is no documentary evidence for this theory.</p> <h2 id="indian-sargam" tabindex="-1">Indian sargam <a class="header-anchor" href="#indian-sargam" aria-label="Permalink to "Indian sargam""></a></h2> <p>The Svara solmization of India has origins in Vedic texts like the Upanishads, which discuss a musical system of seven notes, realized ultimately in what is known as sargam. In Indian classical music, the notes in order are: <strong>sa, re, ga, ma, pa, dha, and ni</strong>, which correspond to the Western solfege system.</p> <p>These seven degrees are shared by both major rāga system, that is the North Indian (Hindustani) and South Indian (Carnatic). The solfege (sargam) is learnt in abbreviated form: sa, ri (Carnatic) or re (Hindustani), ga, ma, pa, dha, ni, sa. Of these, the first that is "sa", and the fifth that is "pa", are considered anchors that are unalterable, while the remaining have flavors that differs between the two major systems.</p> <h2 id="byzantine-system" tabindex="-1">Byzantine system <a class="header-anchor" href="#byzantine-system" aria-label="Permalink to "Byzantine system""></a></h2> <p>Byzantine music uses syllables derived from the Greek alphabet to name notes: starting with A, the notes are <strong>pa (alpha), vu (beta, pronounced v in modern greek), ga (gamma), di (delta), ke (epsilon), zo (zeta), ni (eta)</strong>.</p> <h2 id="asian-systems" tabindex="-1">Asian systems <a class="header-anchor" href="#asian-systems" aria-label="Permalink to "Asian systems""></a></h2> <p>For Han people's music in China, the words used to name notes are (from fa to mi): <strong>上 (siong or shàng), 尺 (cei or chǐ), 工 (gōng), 凡 (huan or fán), 六 (liuo or liù), 五 (ngou or wǔ), 乙 (yik or yǐ)</strong>. 
The system is used for teaching sight-singing.</p> <p>For Japanese music, the first line of Iroha, an ancient poem used as a tutorial of traditional kana, is used for solmization. The syllables representing the notes A, B, C, D, E, F, G are <strong>i, ro, ha, ni, ho, he, to</strong> respectively. Shakuhachi musical notation uses another solmization system beginning "Fu Ho U".</p> <p>Javanese musicians derive syllables from numbers: <strong>ji-ro-lu-pat-ma-nem-pi</strong>. These names derive from one-syllable simplification of the Javanese numerals siji, loro, telu, papat, lima, enem, pitu. ([Pa]pat and pi[tu], corresponding to 4 and 7, are skipped in the pentatonic slendro scale.)</p> <h2 id="dodeka-system" tabindex="-1">DODEKA system <a class="header-anchor" href="#dodeka-system" aria-label="Permalink to "DODEKA system""></a></h2> <p>The objective was to create 2-letter names that convey a relationship between the names of the notes and their position on the staff.</p> <p>We did that using letters that are not present in the English (anglo-saxon) designation.</p> <p>For example, the note Do# (C#) is called Ka (K) because it shares the same position as La (A) (ie. both notes are above a line).</p> <p>Following this logic, the 12 notes can be written as: Do / Ka / Ré / To(l) / Mi / Fa / Hu / So(l) / Pi / La / Vé / Si.</p> <p>In English, we only use the first letters, which gives us the following sequence: C / K / D / T / E / F / H / G / P / A / V / B.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/The_Hand_of_Guido.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Nature of sound]]></title> <link>https://chromatone.center/theory/sound/nature/</link> <guid>https://chromatone.center/theory/sound/nature/</guid> <pubDate>Tue, 31 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The acoustic waves - their sources and mediums.]]></description> <content:encoded><![CDATA[<h2 id="what-is-sound" tabindex="-1">What is sound? <a class="header-anchor" href="#what-is-sound" aria-label="Permalink to "What is sound?""></a></h2> <youtube-embed video="24yESm63tSY" /><p>Acoustic vibrations propagate as mechanical waves of pressure in a transmission medium such as gas, liquid or solid. The speed of sound in air at 20 ºC is about 343 m/s (1,235 km/h) and complexly depends on density and pressure/stiffness of the medium. 
Audio range falls between infrasonic (<20 Hz) and ultrasonic (>20 kHz) frequencies.</p> <youtube-embed video="XLfQpv2ZRPU" /><p><img src="./Spherical_pressure_waves.gif" alt=""></p> <youtube-embed video="px3oVGXr4mo" /><h2 id="acoustic-vibrations" tabindex="-1">Acoustic vibrations <a class="header-anchor" href="#acoustic-vibrations" aria-label="Permalink to "Acoustic vibrations""></a></h2> <SoundVibrations class="my-16" id="sound-vibrations" /><h2 id="lindsay-s-wheel-of-acoustics" tabindex="-1">Lindsay's Wheel of Acoustics <a class="header-anchor" href="#lindsay-s-wheel-of-acoustics" aria-label="Permalink to "Lindsay's Wheel of Acoustics""></a></h2> <p><img src="./Lindsays_Wheel_of_Acoustics.svg" alt=""></p> <p><img src="./atmosphere-speed-of-sound.png" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/sound-waves.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[The Sun]]></title> <link>https://chromatone.center/theory/color/light/sun/</link> <guid>https://chromatone.center/theory/color/light/sun/</guid> <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The Sun is the center of our Solar system and is the main source of all light on Earth.]]></description> <content:encoded><![CDATA[<p>The Sun, a 4.5 billion-year-old yellow dwarf star, is the center of our Solar system and is the main source of light on Earth.</p> <p>The Sun has an absolute magnitude of +4.83, estimated to be brighter than about 85% of the stars in the Milky Way, most of which are red dwarfs.</p> <p>The Sun is a G-type main-sequence star that comprises about 99.86% of the mass of the Solar System. Its mass is estimated at 2 octillion tons, while it's losing 5 million tons of material each second in form of radiation and ionized corona flares that cool down in space and propagate as solar wind.</p> <p><img src="./images/sun.svg" alt="svg"></p> <p>The Sun is by far the brightest object in the Earth's sky, with an apparent magnitude of −26.74. This is about 13 billion times brighter than the next brightest star, Sirius, which has an apparent magnitude of −1.46.</p> <p><img src="./wind.gif" alt="wind"></p> <p>The Sun is about 149.6 million kilometers away from Earth. Light travels at a speed of about 299,792 kilometers per second. 
So, it takes about 8 minutes and 20 seconds for light to travel from the Sun to Earth.</p> <p><img src="./images/sun-granules.jpg" alt=""></p> <p>Thermonuclear reactions at temperatures about 14 million Kelvin in its core produce high energy gamma-rays that are absorbed and converted into lower energy radiation by ionized atoms in its relatively thin and much cooler (4000 - 6000 K) photosphere and chromosphere layers.</p> <p><img src="./images/Sunspot.jpg" alt=""></p> <h2 id="uv-radiation-of-the-sun" tabindex="-1">UV radiation of the Sun <a class="header-anchor" href="#uv-radiation-of-the-sun" aria-label="Permalink to "UV radiation of the Sun""></a></h2> <p><img src="./images/extreme_ultraviolet_sun.jpg" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/sun.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Human color perception]]></title> <link>https://chromatone.center/theory/color/perception/</link> <guid>https://chromatone.center/theory/color/perception/</guid> <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Physiology and features of the eyes]]></description> <content:encoded><![CDATA[<h2 id="human-eye" tabindex="-1">Human eye <a class="header-anchor" href="#human-eye" aria-label="Permalink to "Human eye""></a></h2> <youtube-embed video="eySkNWTI03Q" /><p>Perception of color derives from the stimulation of cone cells in the human eye by visible light. Light, containing all spectral colors is perceived white. Color of an object depends on the range of wavelengths of light that are absorbed or reflected by its surface. The sense of a particular color is produced in nervous system by combining signal from three types of cones, sensitive to red, green and blue ranges of the spectrum.</p> <p><img src="./images/Eyesection.svg" alt="Eye section"></p> <h2 id="retina-cells" tabindex="-1">Retina cells <a class="header-anchor" href="#retina-cells" aria-label="Permalink to "Retina cells""></a></h2> <ul> <li>Cones</li> <li>Rods</li> </ul> <youtube-embed video="_xKbjYBnHhc" /><p><img src="./images/retina.jpg" alt="Retina structure"></p> <p><img src="./images/Distribution_of_Cones_and_Rods_on_Human_Retina.png" alt="Distribution of cones and rods"></p> <h2 id="visual-pathway" tabindex="-1">Visual pathway <a class="header-anchor" href="#visual-pathway" aria-label="Permalink to "Visual pathway""></a></h2> <youtube-embed video="ai7QnHS7C7g" /><youtube-embed video="dKcCscadzkg" /><p><img src="./images/Human-visual-pathway.svg" alt="Visual pathway"></p> <p>Human eye with normal vision has three kinds of cone cells that sense light, having peaks of spectral sensitivity in short ("S", 420 nm – 440 nm), middle ("M", 530 nm – 540 nm), and long ("L", 560 nm – 580 nm) wavelengths. These cone cells underlie human color perception in conditions of medium and high brightness; in very dim light color vision diminishes, and the low-brightness, monochromatic "night vision" receptors, denominated "rod cells", become effective. Thus, three parameters corresponding to levels of stimulus of the three kinds of cone cells, in principle describe any human color sensation. Weighting a total light power spectrum by the individual spectral sensitivities of the three kinds of cone cells renders three effective values of stimulus; these three values compose a tristimulus specification of the objective color of the light spectrum. 
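</p> <p>As a rough sketch of that weighting step (plain JavaScript; the function and parameter names are illustrative placeholders, and the cone curves would come from measured cone-fundamental data, not from this snippet), each tristimulus value is just the light spectrum multiplied point by point by one cone sensitivity curve and summed:</p> <pre><code class="language-js">// Minimal sketch: LMS tristimulus values as weighted sums of a light power spectrum.
// `spectrum` and each sensitivity curve are sampled on the same wavelength grid;
// the step size turns the sum into an approximate integral over wavelength.
function tristimulus(spectrum, cones, stepNm = 5) {
  const weigh = (curve) =>
    spectrum.reduce((sum, power, i) => sum + power * curve[i] * stepNm, 0);
  return {
    L: weigh(cones.L), // long-wavelength cones (peak around 560–580 nm)
    M: weigh(cones.M), // middle-wavelength cones (peak around 530–540 nm)
    S: weigh(cones.S), // short-wavelength cones (peak around 420–440 nm)
  };
}
</code></pre> <p>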
The three parameters, denoted "S", "M", and "L", are indicated using a 3-dimensional space denominated the "LMS color space", which is one of many color spaces devised to quantify human color vision.</p> <youtube-embed video="kN9lN9L8eKE" /><p><img src="./color-sensitivity.jpg" alt="Color sensitivity"></p> <p><img src="./images/Cone-fundamentals-with-srgb-spectrum.svg" alt="Normalized cone response"></p> <blockquote> <p><img src="./images/cie-1931.svg" alt="The CIE XYZ standard observer color matching functions"></p> <p>The CIE XYZ standard observer color matching functions</p> </blockquote> <p><img src="./images/Eyesensitivity.svg" alt=""></p> <p><img src="./images/Line_of_purples.png" alt=""></p> <h2 id="opposite-color-vision-theory" tabindex="-1">Opposite color vision theory <a class="header-anchor" href="#opposite-color-vision-theory" aria-label="Permalink to "Opposite color vision theory""></a></h2> <youtube-embed video="l8_fZPHasdo" /><youtube-embed video="FjIcwZrPB78" /><p><img src="./images/Diagram_of_the_opponent_process.png" alt="Opponent process diagram"></p> <h2 id="just-noticable-difference" tabindex="-1">Just noticable difference <a class="header-anchor" href="#just-noticable-difference" aria-label="Permalink to "Just noticable difference""></a></h2> <youtube-embed video="IZsAD_nm-q4" /><p>Tolerancing concerns the question "What is a set of colors that are imperceptibly/acceptably close to a given reference?" If the distance measure is perceptually uniform, then the answer is simply "the set of points whose distance to the reference is less than the just-noticeable-difference (JND) threshold." This requires a perceptually uniform metric in order for the threshold to be constant throughout the gamut (range of colors). Otherwise, the threshold will be a function of the reference color—cumbersome as a practical guide.</p> <p>In the CIE 1931 color space, for example, the tolerance contours are defined by the MacAdam ellipse, which holds L* (lightness) fixed. As can be observed on the adjacent diagram, the ellipses denoting the tolerance contours vary in size. It is partly this non-uniformity that led to the creation of CIELUV and CIELAB.</p> <blockquote> <p><img src="./images/CIExy1931_MacAdam.png" alt=""></p> <p>A MacAdam diagram in the CIE 1931 color space. The ellipses are shown ten times their actual size.</p> </blockquote> ]]></content:encoded> <enclosure url="https://chromatone.center/color-sensitivity.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[National notation systems of ancient and modern cultures]]></title> <link>https://chromatone.center/theory/notes/national/</link> <guid>https://chromatone.center/theory/notes/national/</guid> <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Greece, Korea and others]]></description> <content:encoded><![CDATA[<h2 id="indian-notation" tabindex="-1">Indian notation <a class="header-anchor" href="#indian-notation" aria-label="Permalink to "Indian notation""></a></h2> <h3 id="the-seven-varnas-of-a-saptak" tabindex="-1">The seven varnas of a saptak. 
<a class="header-anchor" href="#the-seven-varnas-of-a-saptak" aria-label="Permalink to "The seven varnas of a saptak.""></a></h3> <p>The Samaveda text (1200 BC – 1000 BC) contains notated melodies, and these are probably the world's oldest surviving ones.</p> <p><a href="https://en.wikipedia.org/wiki/Vedic_accent" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Vedic_accent</a></p> <p>The musical notation is written usually immediately above, sometimes within, the line of Samaveda text, either in syllabic or a numerical form depending on the Samavedic Sakha (school). The Indian scholar and musical theorist Pingala (c. 200 BC), in his Chanda Sutra, used marks indicating long and short syllables.</p> <p><img src="./Bhat_notation1.jpg" alt=""></p> <h2 id="saptak-सप्तक-7-svaras-octave" tabindex="-1">Saptak - सप्तक - 7 svaras - Octave <a class="header-anchor" href="#saptak-सप्तक-7-svaras-octave" aria-label="Permalink to "Saptak - सप्तक - 7 svaras - Octave""></a></h2> <table class="text-center text-sm"> <thead> <tr> <th>Solfege</th> <th>Syllable</th> <th>Name</th> <th>Meaning</th> <th>Variants</th> <th>Color</th> <th>Planet</th> <th>Chakra</th> </tr> </thead> <tbody> <tr v-for="svara in $frontmatter.svaras" :key="svara.name"> <td> {{svara.solfege}}</td> <td class="font-bold"> {{svara.mnem}}</td> <td> {{svara.name}}</td> <td> {{svara.trans}}</td> <td> {{svara.variants}}</td> <td :style="{backgroundColor: svara.color}"> {{svara.color}}</td> <td> {{svara.planet}}</td> <td> {{svara.chakra}}</td> </tr> </tbody> </table> <p><a href="https://en.wikipedia.org/wiki/Svara" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Svara</a></p> <p><a href="http://sanskritdictionary.com/%E1%B9%A3a%E1%B8%8Dja/242242/1" target="_blank" rel="noreferrer">http://sanskritdictionary.com/ṣaḍja/242242/1</a></p> <h2 id="hurrian-songs-the-worlds-oldest-music-notation-artefact" tabindex="-1">Hurrian songs - the worlds oldest music notation artefact <a class="header-anchor" href="#hurrian-songs-the-worlds-oldest-music-notation-artefact" aria-label="Permalink to "Hurrian songs - the worlds oldest music notation artefact""></a></h2> <p><img src="./images/Hurritische_hymne.gif" alt=""></p> <p>The Hurrian songs are a collection of music inscribed in cuneiform on clay tablets excavated from the ancient Amorite-Canaanite city of Ugarit, a headland in northern Syria, which date to approximately 1400 BCE. 
One of these tablets, which is nearly complete, contains the Hurrian Hymn to Nikkal (also known as the Hurrian cult hymn or A Zaluzi to the Gods, or simply h.6), making it the oldest surviving substantially complete work of notated music in the world.</p> <p>The complete song is one of about 36 such hymns in cuneiform writing, found on fragments of clay tablets excavated in the 1950s from the Royal Palace at Ugarit (present-day Ras Shamra, Syria), in a stratum dating from the fourteenth century BC, but is the only one surviving in substantially complete form.</p> <h2 id="ancient-greece" tabindex="-1">Ancient Greece <a class="header-anchor" href="#ancient-greece" aria-label="Permalink to "Ancient Greece""></a></h2> <p>Hymn to Apollo at Delphi</p> <p><img src="./images/Delphichymn.jpg" alt=""></p> <h2 id="china" tabindex="-1">China <a class="header-anchor" href="#china" aria-label="Permalink to "China""></a></h2> <h3 id="gongche-mi-re" tabindex="-1">Gongche - 'mi re' <a class="header-anchor" href="#gongche-mi-re" aria-label="Permalink to "Gongche - 'mi re'""></a></h3> <ol> <li>上 - [sɑ̄ːŋ] - shàng - do</li> <li>尺 - [tsʰɛ́ː] - chě - re</li> <li>工 - [kʊ́ŋ] - gōng - mi</li> <li>凡 - [fɑ́ːn] - fán - fa</li> <li>六 - [líːu] - liù - sol</li> <li>五 - [wúː] - wǔ - la</li> <li>乙 - [jìː] - yǐ - si</li> </ol> <p><img src="./images/gongche.jpg" alt=""></p> <p><img src="./images/Kam_Hok_Yap_Mun-Yeung_Kwan_Sam_Tip.jpg" alt=""></p> <h2 id="korea" tabindex="-1">Korea <a class="header-anchor" href="#korea" aria-label="Permalink to "Korea""></a></h2> <p><strong>Jeongganbo</strong> musical notation system</p> <p><img src="./images/Jeongganbo.jpg" alt=""></p> <h2 id="japan" tabindex="-1">Japan <a class="header-anchor" href="#japan" aria-label="Permalink to "Japan""></a></h2> <p><strong>Kunkunshi</strong> (工工四 (Okinawan) pronounced [kuŋkunɕiː]) is the traditional notation system by which music is recorded in the Ryukyu Islands. The term kunkunshi originally referred to the first three notes of a widely known Chinese melody, although today it is used almost exclusively in reference to the sheet music.</p> <p>Kunkunshi is believed to have been first developed by Mongaku Terukina or by his student Choki Yakabi in the early to mid-1700s. However, it was not until the end of the 19th century that the form became standardized for writing sanshin music.</p> <p><img src="./images/Kunkunshi.jpg" alt=""></p> <p><img src="./images/Kunkunshi_for_Tinsagu_nu_Hana.png" alt=""></p> <youtube-embed video="O7DR4kjWG_c" />]]></content:encoded> <enclosure url="https://chromatone.center/Bhat_notation1.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Human auditory system]]></title> <link>https://chromatone.center/theory/sound/hearing/</link> <guid>https://chromatone.center/theory/sound/hearing/</guid> <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Explore the intricate and incredibly complicated mechanism of converting acoustic vibrations to electrical nerve signals.]]></description> <content:encoded><![CDATA[<youtube-embed video="eQEaiZ2j9oc" /><h2 id="tonal-perception" tabindex="-1">Tonal perception <a class="header-anchor" href="#tonal-perception" aria-label="Permalink to "Tonal perception""></a></h2> <p>Sound is amplified and transformed into nerve signals by mechanically activated hair cells emitting glutamate neurotransmitter on the basilar membrane in the cochlea of the human inner ear. 
It happens in a spiral organ with 2.5 coils of tonotopically organized bone tissue resonating with different frequencies in its different locations.</p> <youtube-embed video="WeQluId1hnQ" /><p><img src="./auditory-system.png" alt="Auditory system anatomy"></p> <p><img src="./auditory-system-2.jpg" alt="Auditory system anatomy"></p> <p><img src="./basilar-membrane.jpg" alt="Basilar membrane"></p> <youtube-embed video="XsXIOBx6cwI" /><p>Hearing is a crucial aspect of human communication, as the ear transforms sound vibrations from the environment into nerve impulses that are interpreted as sounds by the brain. The cochlea, a part of the inner ear, is responsible for frequency analysis, with different sections resonating at different frequencies. The place theory of hearing suggests that each place in the cochlea corresponds to the perception of a given frequency.</p> <youtube-embed video="XPHuiYInOsg" /><p>The just noticeable difference in frequency is about 1 Hz for frequencies lower than 1000 Hz for most people. However, the resonance curves of the cochlea are broad and overlap, making it difficult for the ear to pick out frequencies that are close together. The place theory of hearing cannot fully explain how we perceive different frequencies, as it does not account for the ability to hear sudden changes in frequency.</p> <p>The place theory of hearing is one of the two opposing theories that attempt to explain the perceptual processing of sound sensation, alongside the frequency theory. The place theory suggests that each part of the cochlea resonates at a different frequency, with the stapes end resonating at high frequencies and the end furthest from the ossicles resonating at low frequencies. When a given frequency is presented to the cochlea, it causes motion in only one part, which sends a nerve impulse to the brain, enabling the perception of that frequency. However, the place theory has limitations, as the resonance curves are broad and overlap, making it difficult for the ear to distinguish frequencies that are close together.</p> <p>The temporal theory of hearing is another overlapping theory that aims to explain the richness of auditory phenomena experienced by humans. Unlike the place theory, the temporal theory focuses on the timing of neural activity, suggesting that the brain processes temporal information to perceive sound. This theory posits that the brain uses the timing of neural activity to distinguish between different frequencies, even when the resonance curves of the cochlea overlap. 
Combining the place theory and the temporal theory can help explain the complexity of human auditory perception and provide a more comprehensive understanding of how we perceive sound.</p> <youtube-embed video="geSDcollRos" />]]></content:encoded> <enclosure url="https://chromatone.center/auditory-system.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Light spectrum]]></title> <link>https://chromatone.center/theory/color/light/spectrum/</link> <guid>https://chromatone.center/theory/color/light/spectrum/</guid> <pubDate>Sun, 29 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Spectrum of solar radiation]]></description> <content:encoded><![CDATA[<p><img src="./images/em-spectrum.svg" alt="svg"></p> <h2 id="black-body-emission" tabindex="-1">Black body emission <a class="header-anchor" href="#black-body-emission" aria-label="Permalink to "Black body emission""></a></h2> <p>Black-body radiation is the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, emitted by a black body (an idealized opaque, non-reflective body). It has a specific spectrum of wavelengths, inversely related to intensity that depend only on the body's temperature, which is assumed for the sake of calculations and theory to be uniform and constant.</p> <p><img src="./images/Color_temperature_black_body_800-12200K.svg" alt="svg"></p> <p>Black-body radiation has a characteristic, continuous frequency spectrum that depends only on the body's temperature, called the Planck spectrum or Planck's law. The spectrum is peaked at a characteristic frequency that shifts to higher frequencies with increasing temperature, and at room temperature most of the emission is in the infrared region of the electromagnetic spectrum. As the temperature increases past about 500 degrees Celsius, black bodies start to emit significant amounts of visible light. Viewed in the dark by the human eye, the first faint glow appears as a "ghostly" grey (the visible light is actually red, but low intensity light activates only the eye's grey-level sensors). With rising temperature, the glow becomes visible even when there is some background surrounding light: first as a dull red, then yellow, and eventually a "dazzling bluish-white" as the temperature rises. When the body appears white, it is emitting a substantial fraction of its energy as ultraviolet radiation. The Sun, with an effective temperature of approximately 5800 K, is an approximate black body with an emission spectrum peaked in the central, yellow-green part of the visible spectrum, but with significant power in the ultraviolet as well.</p> <p><img src="./images/Wiens_law.svg" alt="svg"></p> <p><img src="./images/PlanckianLocus.png" alt=""></p> <p>In the longer wavelengths this deviation is not so noticeable, as hv are very small. In the shorter wavelengths of the ultraviolet range, however, classical theory predicts the energy emitted tends to infinity, hence the ultraviolet catastrophe. The theory even predicted that all bodies would emit most of their energy in the ultraviolet range, clearly contradicted by the experimental data which showed a different peak wavelength at different temperatures (see also Wien's law).</p> <p><img src="./images/Black_body.svg" alt="svg"></p> <p>As the temperature increases, the peak of the emitted black-body radiation curve moves to higher intensities and shorter wavelengths. 
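</p> <p>That shift of the peak is what Wien's displacement law expresses: the peak wavelength is inversely proportional to temperature, λ<sub>max</sub> ≈ 2.898×10<sup>−3</sup> m·K / T. A quick JavaScript sketch (constants rounded, function name illustrative):</p> <pre><code class="language-js">// Wien's displacement law: peak emission wavelength falls as temperature rises.
const WIEN_B = 2.898e-3; // Wien's displacement constant, m·K (rounded)

const peakWavelengthNm = (kelvin) => (WIEN_B / kelvin) * 1e9;

peakWavelengthNm(5800); // ≈ 500 nm — the yellow-green peak of sunlight at ~5800 K
peakWavelengthNm(2700); // ≈ 1073 nm — a body at ~2700 K peaks in the infrared
</code></pre> <p>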
The black-body radiation graph is also compared with the classical model of Rayleigh and Jeans.</p> <p>Instead, in the quantum treatment of this problem, the numbers of the energy modes are quantized, attenuating the spectrum at high frequency in agreement with experimental observation and resolving the catastrophe. The modes that had more energy than the thermal energy of the substance itself were not considered, and because of quantization modes having infinitesimally little energy were excluded.</p> <p>Thus for shorter wavelengths very few modes (having energy more than hν) were allowed, supporting the data that the energy emitted is reduced for wavelengths less than the wavelength of the observed peak of emission.</p> <p>Notice that there are two factors responsible for the shape of the graph. Firstly, longer wavelengths have a larger number of modes associated with them. Secondly, shorter wavelengths have more energy associated per mode. The two factors combined give the characteristic maximum wavelength.</p> <p>Calculating the black-body curve was a major challenge in theoretical physics during the late nineteenth century. The problem was solved in 1901 by Max Planck in the formalism now known as Planck's law of black-body radiation. By making changes to Wien's radiation law (not to be confused with Wien's displacement law) consistent with thermodynamics and electromagnetism, he found a mathematical expression fitting the experimental data satisfactorily. Planck had to assume that the energy of the oscillators in the cavity was quantized, i.e., it existed in integer multiples of some quantity. Einstein built on this idea and proposed the quantization of electromagnetic radiation itself in 1905 to explain the photoelectric effect. These theoretical advances eventually resulted in the superseding of classical electromagnetism by quantum electrodynamics. These quanta were called photons and the black-body cavity was thought of as containing a gas of photons. In addition, it led to the development of quantum probability distributions, called Fermi–Dirac statistics and Bose–Einstein statistics, each applicable to a different class of particles, fermions and bosons.</p> <h2 id="earth-atmosphere-em-radiation-absorption" tabindex="-1">Earth atmosphere EM radiation absorption <a class="header-anchor" href="#earth-atmosphere-em-radiation-absorption" aria-label="Permalink to "Earth atmosphere EM radiation absorption""></a></h2> <img src="./sun-spectrum.svg"> <img src="./images/spectral-lines.svg"> <youtube-embed video="-Xx7sPPTu3Y" />]]></content:encoded> <enclosure url="https://chromatone.center/sun-spectrum.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Psychoacoustics]]></title> <link>https://chromatone.center/theory/sound/psychoacoustics/</link> <guid>https://chromatone.center/theory/sound/psychoacoustics/</guid> <pubDate>Sun, 29 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The science of percieved sound]]></description> <content:encoded><![CDATA[<youtube-embed video="awiunFeiDuQ" /><h2 id="equal-loudness-contour" tabindex="-1">Equal-loudness contour <a class="header-anchor" href="#equal-loudness-contour" aria-label="Permalink to "Equal-loudness contour""></a></h2> <p>The human auditory system is sensitive to frequencies from about 20 Hz to a maximum of around 20,000 Hz, although the upper hearing limit decreases with age. 
Within this range, the human ear is most sensitive between 2 and 5 kHz, largely due to the resonance of the ear canal and the transfer function of the ossicles of the middle ear.</p> <p><img src="./equal-loudness.svg" alt=""></p> <p>Fletcher and Munson first measured equal-loudness contours using headphones (1933). In their study, test subjects listened to pure tones at various frequencies and over 10 dB increments in stimulus intensity. For each frequency and intensity, the listener also listened to a reference tone at 1000 Hz. Fletcher and Munson adjusted the reference tone until the listener perceived that it was the same loudness as the test tone. Loudness, being a psychological quantity, is difficult to measure, so Fletcher and Munson averaged their results over many test subjects to derive reasonable averages. The lowest equal-loudness contour represents the quietest audible tone—the absolute threshold of hearing. The highest contour is the threshold of pain.</p> <p><img src="./Audible.jpg" alt=""></p> <p>In 1956 Robinson and Dadson produced a new experimental determination that they believed was more accurate. It became the basis for a standard (ISO 226) that was considered definitive until 2003 when ISO revised the standard on the basis of recent assessments by research groups worldwide.</p> <p><a href="./illusions/">Audial illusions</a></p> <p><a href="https://en.wikipedia.org/wiki/Interaural_time_difference" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Interaural_time_difference</a></p> <youtube-embed video="Gc5eICzHkFU" />]]></content:encoded> <enclosure url="https://chromatone.center/hearing.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Scales]]></title> <link>https://chromatone.center/theory/scales/</link> <guid>https://chromatone.center/theory/scales/</guid> <pubDate>Sat, 28 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[A scale is a subset of 12 chromatic pitches]]></description> <content:encoded><![CDATA[<p>Scales are collections of pitch classes to play together. There's a plenty of such known and used <a href="./study/">Note combinations</a>.</p> <p>Five notes is enough to construct the most simple and pleasant sounding <a href="./pentatonic/">Pentatonic scales</a>. The most common scales nowadays are the 2 sets of 7 notes patterns: the <a href="./diatonic/">diatonic</a> and the <a href="./melodic/">jazz/melodic</a> families of scales. Every note in such a scale has its own distinct role as a <a href="./degrees/">Scale degree</a>.</p> <p>Breaking the diatonic rule the modernism era has discovered all the strange <a href="./symmetrical/">Symmetrical scales</a> that bring us more complex emotional effects.</p> <p>Let's not focus only on Western music theory, especially when it comes to scales. 
<a href="./raga/">Indian Raga</a> school has a much wider scope on note collections to be played and improvised over.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/gray-notes.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Frequency and pitch]]></title> <link>https://chromatone.center/theory/sound/pitch/</link> <guid>https://chromatone.center/theory/sound/pitch/</guid> <pubDate>Sat, 28 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The human perception of sound frequency as a place of it at a scale]]></description> <content:encoded><![CDATA[<p>Pitch is a perceptual property of sounds that allows their ordering on a frequency-related scale, or more commonly, pitch is the quality that makes it possible to judge sounds as "higher" and "lower" in the sense associated with musical melodies. Pitch is a major auditory attribute of musical tones, along with duration, loudness, and timbre.</p> <p>Pitch may be quantified as a frequency, but pitch is not a purely objective physical property; it is a subjective psychoacoustical attribute of sound. Historically, the study of pitch and pitch perception has been a central problem in psychoacoustics, and has been instrumental in forming and testing theories of sound representation, processing, and perception in the auditory system.</p> <h2 id="perception" tabindex="-1">Perception <a class="header-anchor" href="#perception" aria-label="Permalink to "Perception""></a></h2> <h3 id="pitch-and-frequency" tabindex="-1">Pitch and frequency <a class="header-anchor" href="#pitch-and-frequency" aria-label="Permalink to "Pitch and frequency""></a></h3> <p>Pitch is an auditory sensation in which a listener assigns musical tones to relative positions on a musical scale based primarily on their perception of the frequency of vibration. Pitch is closely related to frequency, but the two are not equivalent. Frequency is an objective, scientific attribute that can be measured. Pitch is each person's subjective perception of a sound wave, which cannot be directly measured. However, this does not necessarily mean that most people won't agree on which notes are higher and lower.</p> <p>The oscillations of sound waves can often be characterized in terms of frequency. Pitches are usually associated with, and thus quantified as, frequencies (in cycles per second, or hertz), by comparing the sounds being assessed against sounds with pure tones (ones with periodic, sinusoidal waveforms). Complex and aperiodic sound waves can often be assigned a pitch by this method.</p> <p>According to the American National Standards Institute, pitch is the auditory attribute of sound according to which sounds can be ordered on a scale from low to high. Since pitch is such a close proxy for frequency, it is almost entirely determined by how quickly the sound wave is making the air vibrate and has almost nothing to do with the intensity, or amplitude, of the wave. That is, "high" pitch means very rapid oscillation, and "low" pitch corresponds to slower oscillation. Despite that, the idiom relating vertical height to sound pitch is shared by most languages. At least in English, it is just one of many deep conceptual metaphors that involve up/down. The exact etymological history of the musical sense of high and low pitch is still unclear. 
There is evidence that humans do actually perceive that the source of a sound is slightly higher or lower in vertical space when the sound frequency is increased or reduced.</p> <p>In most cases, the pitch of complex sounds such as speech and musical notes corresponds very nearly to the repetition rate of periodic or nearly-periodic sounds, or to the reciprocal of the time interval between repeating similar events in the sound waveform.</p> <p>The pitch of complex tones can be ambiguous, meaning that two or more different pitches can be perceived, depending upon the observer. When the actual fundamental frequency can be precisely determined through physical measurement, it may differ from the perceived pitch because of overtones, also known as upper partials, harmonic or otherwise. A complex tone composed of two sine waves of 1000 and 1200 Hz may sometimes be heard as up to three pitches: two spectral pitches at 1000 and 1200 Hz, derived from the physical frequencies of the pure tones, and the combination tone at 200 Hz, corresponding to the repetition rate of the waveform. In a situation like this, the percept at 200 Hz is commonly referred to as the missing fundamental, which is often the greatest common divisor of the frequencies present.</p> <p>Pitch depends to a lesser degree on the sound pressure level (loudness, volume) of the tone, especially at frequencies below 1,000 Hz and above 2,000 Hz. The pitch of lower tones gets lower as sound pressure increases. For instance, a tone of 200 Hz that is very loud seems one semitone lower in pitch than if it is just barely audible. Above 2,000 Hz, the pitch gets higher as the sound gets louder. These results were obtained in the pioneering works by S. Stevens and W. Snow. Later investigations, i.e. by A. Cohen, had shown that in most cases the apparent pitch shifts were not significantly different from pitch‐matching errors. When averaged, the remaining shifts followed the directions of Stevens' curves but were small (2% or less by frequency, i.e. not more than a semitone).</p> <h3 id="theories-of-pitch-perception" tabindex="-1">Theories of pitch perception <a class="header-anchor" href="#theories-of-pitch-perception" aria-label="Permalink to "Theories of pitch perception""></a></h3> <p>Theories of pitch perception try to explain how the physical sound and specific physiology of the auditory system work together to yield the experience of pitch. In general, pitch perception theories can be divided into place coding and temporal coding. Place theory holds that the perception of pitch is determined by the place of maximum excitation on the basilar membrane.</p> <p>A place code, taking advantage of the tonotopy in the auditory system, must be in effect for the perception of high frequencies, since neurons have an upper limit on how fast they can phase-lock their action potentials. However, a purely place-based theory cannot account for the accuracy of pitch perception in the low and middle frequency ranges. Moreover, there is some evidence that some non-human primates lack auditory cortex responses to pitch despite having clear tonotopic maps in auditory cortex, showing that tonotopic place codes are not sufficient for pitch responses.</p> <p>Temporal theories offer an alternative that appeals to the temporal structure of action potentials, mostly the phase-locking and mode-locking of action potentials to frequencies in a stimulus. 
The precise way this temporal structure helps code for pitch at higher levels is still debated, but the processing seems to be based on an autocorrelation of action potentials in the auditory nerve. However, it has long been noted that a neural mechanism that may accomplish a delay—a necessary operation of a true autocorrelation—has not been found. At least one model shows that a temporal delay is unnecessary to produce an autocorrelation model of pitch perception, appealing to phase shifts between cochlear filters; however, earlier work has shown that certain sounds with a prominent peak in their autocorrelation function do not elicit a corresponding pitch percept, and that certain sounds without a peak in their autocorrelation function nevertheless elicit a pitch. To be a more complete model, autocorrelation must therefore apply to signals that represent the output of the cochlea, as via auditory-nerve interspike-interval histograms. Some theories of pitch perception hold that pitch has inherent octave ambiguities, and therefore is best decomposed into a pitch chroma, a periodic value around the octave, like the note names in western music—and a pitch height, which may be ambiguous, that indicates the octave the pitch is in.</p> <h3 id="just-noticeable-difference" tabindex="-1">Just-noticeable difference <a class="header-anchor" href="#just-noticeable-difference" aria-label="Permalink to "Just-noticeable difference""></a></h3> <p>The just-noticeable difference (jnd) (the threshold at which a change is perceived) depends on the tone's frequency content. Below 500 Hz, the jnd is about 3 Hz for sine waves, and 1 Hz for complex tones; above 1000 Hz, the jnd for sine waves is about 0.6% (about 10 cents). The jnd is typically tested by playing two tones in quick succession with the listener asked if there was a difference in their pitches. The jnd becomes smaller if the two tones are played simultaneously as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120.</p> <h3 id="aural-illusions" tabindex="-1">Aural illusions <a class="header-anchor" href="#aural-illusions" aria-label="Permalink to "Aural illusions""></a></h3> <p>The relative perception of pitch can be fooled, resulting in aural illusions. There are several of these, such as the tritone paradox, but most notably the Shepard scale, where a continuous or discrete sequence of specially formed tones can be made to sound as if the sequence continues ascending or descending forever.</p> <h2 id="definite-and-indefinite-pitch" tabindex="-1">Definite and indefinite pitch <a class="header-anchor" href="#definite-and-indefinite-pitch" aria-label="Permalink to "Definite and indefinite pitch""></a></h2> <p>Not all musical instruments make notes with a clear pitch. The unpitched percussion instrument (a class of percussion instrument) does not produce particular pitches. A sound or note of definite pitch is one where a listener can possibly (or relatively easily) discern the pitch. Sounds with definite pitch have harmonic frequency spectra or close to harmonic spectra.</p> <p>A sound generated on any instrument produces many modes of vibration that occur simultaneously. A listener hears numerous frequencies at once. The vibration with the lowest frequency is called the fundamental frequency; the other frequencies are overtones. 
Harmonics are an important class of overtones with frequencies that are integer multiples of the fundamental. Whether or not the higher frequencies are integer multiples, they are collectively called the partials, referring to the different parts that make up the total spectrum.</p> <p>A sound or note of indefinite pitch is one that a listener finds impossible or relatively difficult to identify as to pitch. Sounds with indefinite pitch do not have harmonic spectra or have altered harmonic spectra—a characteristic known as inharmonicity.</p> <p>It is still possible for two sounds of indefinite pitch to clearly be higher or lower than one another. For instance, a snare drum sounds higher pitched than a bass drum though both have indefinite pitch, because its sound contains higher frequencies. In other words, it is possible and often easy to roughly discern the relative pitches of two sounds of indefinite pitch, but sounds of indefinite pitch do not neatly correspond to any specific pitch.</p> <h2 id="labeling-pitches" tabindex="-1">Labeling pitches <a class="header-anchor" href="#labeling-pitches" aria-label="Permalink to "Labeling pitches""></a></h2> <p>Pitches are labeled using:</p> <ul> <li>Letters, as in Helmholtz pitch notation</li> <li>A combination of letters and numbers—as in scientific pitch notation, where notes are labelled upwards from C0, the 16 Hz C</li> <li>Numbers that represent the frequency in hertz (Hz), the number of cycles per second</li> </ul> <p>For example, one might refer to the A above middle C as a′, A4, or 440 Hz. In standard Western equal temperament, the notion of pitch is insensitive to "spelling": the description "G4 double sharp" refers to the same pitch as A4; in other temperaments, these may be distinct pitches. Human perception of musical intervals is approximately logarithmic with respect to fundamental frequency: the perceived interval between the pitches "A220" and "A440" is the same as the perceived interval between the pitches A440 and A880. Motivated by this logarithmic perception, music theorists sometimes represent pitches using a numerical scale based on the logarithm of fundamental frequency. For example, one can adopt the widely used MIDI standard to map fundamental frequency, f, to a real number, p, as follows</p> <blockquote> <p>p = 69 + 12 × log<sub>2</sub>(f / 440 Hz)</p> </blockquote> <p>This creates a linear pitch space in which octaves have size 12, semitones (the distance between adjacent keys on the piano keyboard) have size 1, and A440 is assigned the number 69. Distance in this space corresponds to musical intervals as understood by musicians. An equal-tempered semitone is subdivided into 100 cents. The system is flexible enough to include "microtones" not found on standard piano keyboards. For example, the pitch halfway between C (60) and C♯ (61) can be labeled 60.5.</p> <youtube-embed video="Y7TesKMSE74" /><h2 id="pitch-standards-and-standard-pitch" tabindex="-1">Pitch standards and standard pitch <a class="header-anchor" href="#pitch-standards-and-standard-pitch" aria-label="Permalink to "Pitch standards and standard pitch""></a></h2> <p>A pitch standard (also concert pitch) is the conventional pitch reference a group of musical instruments are tuned to for a performance. Concert pitch may vary from ensemble to ensemble, and has varied widely over musical history.</p> <p>Standard pitch is a more widely accepted convention. 
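</p> <p>Because a pitch standard ties one note name to one frequency, the logarithmic MIDI mapping shown above translates directly between the two. A minimal JavaScript sketch (function names are illustrative, not taken from any particular library):</p> <pre><code class="language-js">// p = 69 + 12 × log2(f / 440 Hz) and its inverse, with A4 = 440 Hz mapped to 69.
const freqToMidi = (f) => 69 + 12 * Math.log2(f / 440);
const midiToFreq = (p) => 440 * 2 ** ((p - 69) / 12);

freqToMidi(440); // 69 — A above middle C at standard pitch
midiToFreq(60);  // ≈ 261.63 Hz — middle C
freqToMidi(261.63 * 2 ** (0.5 / 12)); // ≈ 60.5 — a quarter tone above middle C
</code></pre> <p>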
The A above middle C is usually set at 440 Hz (often written as "A = 440 Hz" or sometimes "A440"), although other frequencies, such as 442 Hz, are also often used as variants. Another standard pitch, the so-called Baroque pitch, has been set in the 20th century as A = 415 Hz—approximately an equal-tempered semitone lower than A440 to facilitate transposition. The Classical pitch can be set to either 427 Hz (about halfway between A415 and A440) or 430 Hz (also between A415 and A440 but slightly sharper than the quarter tone). And ensembles specializing in authentic performance set the A above middle C to 432 Hz or 435 Hz when performing repertoire from the Romantic era.</p> <p>Transposing instruments have their origin in the variety of pitch standards. In modern times, they conventionally have their parts transposed into different keys from voices and other instruments (and even from each other). As a result, musicians need a way to refer to a particular pitch in an unambiguous manner when talking to each other.</p> <p>For example, the most common type of clarinet or trumpet, when playing a note written in their part as C, sounds a pitch that is called B♭ on a non-transposing instrument like a violin (which indicates that at one time these wind instruments played at a standard pitch a tone lower than violin pitch). To refer to that pitch unambiguously, a musician calls it concert B♭, meaning, "...the pitch that someone playing a non-transposing instrument like a violin calls B♭."</p> <h2 id="a440-pitch-standard" tabindex="-1">A440 (pitch standard) <a class="header-anchor" href="#a440-pitch-standard" aria-label="Permalink to "A440 (pitch standard)""></a></h2> <p>A440 (also known as Stuttgart pitch) is the musical pitch corresponding to an audio frequency of 440 Hz, which serves as a tuning standard for the musical note of A above middle C, or A4 in scientific pitch notation. It is standardized by the International Organization for Standardization as ISO 16. While other frequencies have been (and occasionally still are) used to tune the first A above middle C, A440 is now commonly used as a reference frequency to calibrate acoustic equipment and to tune pianos, violins, and other musical instruments.</p> <h3 id="history-and-use" tabindex="-1">History and use <a class="header-anchor" href="#history-and-use" aria-label="Permalink to "History and use""></a></h3> <p>Before standardization on 440 Hz, many countries and organizations followed the French standard since the 1860s of 435 Hz, which had also been the Austrian government's 1885 recommendation. Johann Heinrich Scheibler recommended A440 as a standard in 1834 after inventing the "tonometer" to measure pitch, and it was approved by the Society of German Natural Scientists and Physicians the same year.</p> <p>The American music industry reached an informal standard of 440 Hz in 1926, and some began using it in instrument manufacturing.</p> <p>In 1936, the American Standards Association recommended that the A above middle C be tuned to 440 Hz. This standard was taken up by the International Organization for Standardization in 1955 (reaffirmed by them in 1975) as ISO 16.</p> <p>It is designated A4 in scientific pitch notation because it occurs in the octave that starts with the fourth C key on a standard 88-key piano keyboard. 
On MIDI, A440 is note 69 (0x45 hexadecimal).</p> <h3 id="modern-practices" tabindex="-1">Modern practices <a class="header-anchor" href="#modern-practices" aria-label="Permalink to "Modern practices""></a></h3> <p>A440 is widely used as concert pitch in the United Kingdom and the United States. In continental Europe the frequency of A4 commonly varies between 440 Hz and 444 Hz. In the period instrument movement, a consensus has arisen around a modern baroque pitch of 415 Hz (with 440 Hz corresponding to A♯), a 'baroque' pitch for some special church music (in particular, some German church music, e.g. the pre-Leipzig period cantatas of Bach) known as Chorton pitch at 466 Hz (with 440 Hz corresponding to A♭), and classical pitch at 427–430 Hz.</p> <p>The US time and frequency station WWV broadcasts a 440 Hz signal at two minutes past every hour, with WWVH broadcasting the same tone at the first minute past every hour. This was added in 1936 to aid orchestras in tuning their instruments.</p> <h2 id="history-of-pitch-standards-in-western-music" tabindex="-1">History of pitch standards in Western music <a class="header-anchor" href="#history-of-pitch-standards-in-western-music" aria-label="Permalink to "History of pitch standards in Western music""></a></h2> <p>Historically, various standards have been used to fix the pitch of notes at certain frequencies. Various systems of musical tuning have also been used to determine the relative frequency of notes in a scale.</p> <youtube-embed video="si6QNVn40GM" /><h3 id="pre-19th-century" tabindex="-1">Pre-19th century <a class="header-anchor" href="#pre-19th-century" aria-label="Permalink to "Pre-19th century""></a></h3> <p>Until the 19th century, there was no coordinated effort to standardize musical pitch, and the levels across Europe varied widely. Pitches did not just vary from place to place, or over time—pitch levels could vary even within the same city. The pitch used for an English cathedral organ in the 17th century, for example, could be as much as five semitones lower than that used for a domestic keyboard instrument in the same city.</p> <p>Even within one church, the pitch used could vary over time because of the way organs were tuned. Generally, the end of an organ pipe would be hammered inwards to a cone, or flared outwards, to raise or lower the pitch. When the pipe ends became frayed by this constant process they were all trimmed down, thus raising the overall pitch of the organ.</p> <p>From the early 18th century, pitch could also be controlled with the use of tuning forks (invented in 1711), although again there was variation. For example, a tuning fork associated with Handel, dating from 1740, is pitched at A = 422.5 Hz, while a later one from 1780 is pitched at A = 409 Hz, about a quarter-tone lower. A tuning fork that belonged to Ludwig van Beethoven around 1800, now in the British Library, is pitched at A = 455.4 Hz, well over a half-tone higher.</p> <p>Overall, there was a tendency towards the end of the 18th century for the frequency of the A above middle C to be in the range of 400 to 450 Hz.</p> <p>The frequencies quoted here are based on modern measurements and would not have been precisely known to musicians of the day. Although Mersenne had made a rough determination of sound frequencies as early as the 17th century, such measurements did not become scientifically accurate until the 19th century, beginning with the work of German physicist Johann Scheibler in the 1830s. 
The term formerly used for the unit of pitch, cycle per second (CPS), was renamed the hertz (Hz) in the 20th century in honor of Heinrich Hertz.</p> <h3 id="pitch-inflation" tabindex="-1">Pitch inflation <a class="header-anchor" href="#pitch-inflation" aria-label="Permalink to "Pitch inflation""></a></h3> <p>During historical periods when instrumental music rose in prominence (relative to the voice), there was a continuous tendency for pitch levels to rise. This "pitch inflation" seemed largely a product of instrumentalists competing with each other, each attempting to produce a brighter, more "brilliant", sound than that of their rivals. On at least two occasions, pitch inflation had become so severe that reform became needed. At the beginning of the 17th century, Michael Praetorius reported in his encyclopedic Syntagma musicum that pitch levels had become so high that singers were experiencing severe throat strain and lutenists and viol players were complaining of snapped strings. The standard voice ranges he cites show that the pitch level of his time, at least in the part of Germany where he lived, was at least a minor third higher than today's. Solutions to this problem were sporadic and local, but generally involved the establishment of separate standards for voice and organ (German: Chorton, lit. 'choir tone') and for chamber ensembles (German: Kammerton, lit. 'chamber tone'). Where the two were combined, as for example in a cantata, the singers and instrumentalists might perform from music written in different keys. This system kept pitch inflation at bay for some two centuries.</p> <p>Concert pitch rose further in the 19th century, as may be seen in the tuning forks of France. The pipe organ tuning fork in Versailles Chapel in 1795 is 390 Hz, but in the Paris Opera an 1810 tuning fork gives A = 423 Hz, an 1822 fork gives A = 432 Hz, and an 1855 fork gives A = 449 Hz. At La Scala in Milan, the A above middle C rose as high as 451 Hz.</p> <h3 id="_19th-and-20th-century-standards" tabindex="-1">19th- and 20th-century standards <a class="header-anchor" href="#_19th-and-20th-century-standards" aria-label="Permalink to "19th- and 20th-century standards""></a></h3> <p>The strongest opponents of the upward tendency in pitch were singers, who complained that it was putting a strain on their voices. Largely due to their protests, the French government passed a law on February 16, 1859, which set the A above middle C at 435 Hz. This was the first attempt to standardize pitch on such a scale, and was known as the diapason normal. It became quite a popular pitch standard outside France as well, and has also been known at various times as French pitch, continental pitch or international pitch (the last of these not to be confused with the 1939 "international standard pitch" described below). An 1885 conference in Vienna established this value among Italy, Austria, Hungary, Russia, Prussia, Saxony, Sweden and Württemberg. This was included as "Convention of 16 and 19 November 1885 regarding the establishment of a concert pitch" in the Treaty of Versailles in 1919 which formally ended World War I. The diapason normal resulted in middle C being tuned at about 258.65 Hz.</p> <p>An alternative pitch standard known as philosophical or scientific pitch fixes middle C at 256 Hz (that is, 2⁸ Hz), which results in the A above it being approximately 430.54 Hz in equal temperament tuning. The appeal of this system is its mathematical idealism (the frequencies of all the Cs being powers of two). 
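</p> <p>The arithmetic behind these figures is easy to verify; a quick check (an illustrative Python sketch, not part of the original text):</p> <pre><code class="language-python">c4 = 2 ** 8                    # "philosophical" middle C: 256 Hz, a power of two
a4_equal = c4 * 2 ** (9 / 12)  # nine equal-tempered semitones above middle C
a4_pythag = c4 * 27 / 16       # the Pythagorean 27:16 sixth mentioned below
print(round(a4_equal, 2), a4_pythag)  # 430.54 432.0
</code></pre> <p>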
This system never received the same official recognition as the French A = 435 Hz and has not been widely used. This tuning has been promoted unsuccessfully by the LaRouche movement's Schiller Institute under the name Verdi tuning since Italian composer Giuseppe Verdi had proposed a slight lowering of the French tuning system. However, the Schiller Institute's recommended tuning for A of 432 Hz is for the Pythagorean ratio of 27:16, rather than the logarithmic ratio of equal temperament tuning.</p> <p>British attempts at standardisation in the 19th century gave rise to the old philharmonic pitch standard of about A = 452 Hz (different sources quote slightly different values), replaced in 1896 by the considerably "deflated" new philharmonic pitch at A = 439 Hz. The high pitch was maintained by Sir Michael Costa for the Crystal Palace Handel Festivals, causing the withdrawal of the principal tenor Sims Reeves in 1877, though at singers' insistence the Birmingham Festival pitch was lowered (and the organ retuned) at that time. At the Queen's Hall in London, the establishment of the diapason normal for the Promenade Concerts in 1895 (and retuning of the organ to A = 435.5 at 15 °C (59 °F), to be in tune with A = 439 in a heated hall) caused the Royal Philharmonic Society and others (including the Bach Choir, and the Felix Mottl and Arthur Nikisch concerts) to adopt the continental pitch thereafter.</p> <p>In England the term low pitch was used from 1896 onward to refer to the new Philharmonic Society tuning standard of A = 439 Hz at 68 °F, while "high pitch" was used for the older tuning of A = 452.4 Hz at 60 °F. Although the larger London orchestras were quick to conform to the new, low pitch, provincial orchestras continued using the high pitch until at least the 1920s, and most brass bands were still using the high pitch in the mid-1960s. Highland pipe bands continue to use an even sharper tuning, around A = 470–480 Hz, over a semitone higher than A440. As a result, bagpipes are often perceived as playing in B♭ despite being notated in A (as if they were transposing instruments in D-flat), and are often tuned to match B♭ brass instruments when the two are required to play together.</p> <p>The Stuttgart Conference of 1834 recommended C264 (A440) as the standard pitch based on Scheibler's studies with his Tonometer. For this reason A440 has been referred to as Stuttgart pitch or Scheibler pitch.</p> <p>In 1939, an international conference recommended that the A above middle C be tuned to 440 Hz, now known as concert pitch. As a technical standard this was taken up by the International Organization for Standardization in 1955 and reaffirmed by them in 1975 as ISO 16. The difference between this and the diapason normal is due to confusion over the temperature at which the French standard should be measured. 
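</p> <p>For scale, the gap between the diapason normal and A440 is small in musical terms; expressed in cents (a quick illustrative Python calculation, using the cent defined earlier as one hundredth of an equal-tempered semitone):</p> <pre><code class="language-python">import math

def cents(f_low, f_high):
    """Interval between two frequencies in cents (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(f_high / f_low)

print(round(cents(435.0, 440.0), 1))  # 19.8 cents: roughly a fifth of a semitone
</code></pre> <p>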
The initial standard was A = 439 Hz, but this was superseded by A = 440 Hz, possibly because 439 Hz was difficult to reproduce in a laboratory since 439 is a prime number.</p> <youtube-embed video="EKTZ151yLnk" />]]></content:encoded> <enclosure url="https://chromatone.center/jacek-ulinski.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Timbre and overtones]]></title> <link>https://chromatone.center/theory/sound/timbre/</link> <guid>https://chromatone.center/theory/sound/timbre/</guid> <pubDate>Thu, 26 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The character of sound]]></description> <content:encoded><![CDATA[<h2 id="timbre" tabindex="-1">Timbre <a class="header-anchor" href="#timbre" aria-label="Permalink to "Timbre""></a></h2> <p>In music, timbre, also known as tone color or tone quality (from psychoacoustics), is the perceived sound quality of a musical note, sound or tone. Timbre distinguishes different types of sound production, such as choir voices and musical instruments. It also enables listeners to distinguish different instruments in the same category (e.g., an oboe and a clarinet, both woodwind instruments).</p> <p>The physical characteristics of sound that determine the perception of timbre include frequency spectrum and envelope. Singers and instrumental musicians can change the timbre of the music they are singing/playing by using different singing or playing techniques. For example, a violinist can use different bowing styles or play on different parts of the string to obtain different timbres (e.g., playing sul tasto produces a light, airy timbre, whereas playing sul ponticello produces a harsh, even aggressive tone).</p> <h2 id="attributes-of-timbre" tabindex="-1">Attributes of timbre <a class="header-anchor" href="#attributes-of-timbre" aria-label="Permalink to "Attributes of timbre""></a></h2> <p>Many commentators have attempted to decompose timbre into component attributes. For example, J. F. Schouten (1968, 42) describes the "elusive attributes of timbre" as "determined by at least five major acoustic parameters", which Robert Erickson finds "scaled to the concerns of much contemporary music":</p> <ul> <li>Range between tonal and noiselike character</li> <li>Spectral envelope</li> <li>Time envelope in terms of rise, duration, and decay (ADSR, which stands for "attack, decay, sustain, release")</li> <li>Changes both of spectral envelope (formant-glide) and fundamental frequency (micro-intonation)</li> <li>Prefix, or onset of a sound, quite dissimilar to the ensuing lasting vibration</li> </ul> <h2 id="amplitude-envelope" tabindex="-1">Amplitude envelope <a class="header-anchor" href="#amplitude-envelope" aria-label="Permalink to "Amplitude envelope""></a></h2> <p><img src="./adsr.svg" alt=""></p> <ul> <li><strong>Attack</strong>: time from silence to the loudest level</li> <li><strong>Decay</strong>: time from the loudest level to the sustain level</li> <li><strong>Sustain</strong>: level of volume while the sound is held</li> <li><strong>Release</strong>: time to return to silence after releasing the hold</li> </ul> <h2 id="helmholz-resonance" tabindex="-1">Helmholtz resonance <a class="header-anchor" href="#helmholz-resonance" aria-label="Permalink to "Helmholtz resonance""></a></h2> <p>Helmholtz resonance or wind throb is the phenomenon of air resonance in a cavity, such as when one blows across the top of an empty bottle. 
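</p> <p>The resonant frequency of such a cavity is set by its volume and the geometry of its neck. A rough sketch using the standard lumped Helmholtz estimate (the formula is textbook acoustics rather than anything stated on this page, and the bottle dimensions below are made-up example values):</p> <pre><code class="language-python">import math

def helmholtz_hz(neck_area, neck_length, volume, c=343.0):
    """Rough Helmholtz resonance estimate: f = (c / (2*pi)) * sqrt(A / (V * L)), SI units."""
    return c / (2 * math.pi) * math.sqrt(neck_area / (volume * neck_length))

# a made-up bottle-like cavity: 0.75 l volume, 2 cm diameter neck, 5 cm long
neck_area = math.pi * 0.01 ** 2
print(round(helmholtz_hz(neck_area, 0.05, 0.00075)))  # ~158 Hz
</code></pre> <p>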
The name comes from a device created in the 1850s by Hermann von Helmholtz, the Helmholtz resonator, which he used to identify the various frequencies or musical pitches present in music and other complex sounds.</p> <p>When air is forced into a cavity, the pressure inside increases. When the external force pushing the air into the cavity is removed, the higher-pressure air inside will flow out. Due to the inertia of the moving air, the cavity will be left at a pressure slightly lower than the outside, causing air to be drawn back in. This process repeats, with the magnitude of the pressure oscillations increasing and decreasing asymptotically after the sound starts and stops.</p> <p><img src="./Helmholtz_resonator.jpg" alt=""></p> <p>When the resonator's 'nipple' is placed inside one's ear, a specific frequency of the complex sound can be picked out and heard clearly. In his book Helmholtz explains: when we "apply a resonator to the ear, most of the tones produced in the surrounding air will be considerably damped; but if the proper tone of the resonator is sounded, it brays into the ear most powerfully…. The proper tone of the resonator may even be sometimes heard cropping up in the whistling of the wind, the rattling of carriage wheels, the splashing of water."</p> <h2 id="harmonics-–-the-natural-resonances" tabindex="-1">Harmonics – the natural resonances <a class="header-anchor" href="#harmonics-–-the-natural-resonances" aria-label="Permalink to "Harmonics – the natural resonances""></a></h2> <img src="./Bowed_violin_string_helholz_corner.gif" > <youtube-embed video="9O3VEXzuOKI" /><p>A note isn't just a wave, it's a mix of resonating modes of oscillation.</p> <h3 id="tristimulus-timbre-model" tabindex="-1">Tristimulus timbre model <a class="header-anchor" href="#tristimulus-timbre-model" aria-label="Permalink to "Tristimulus timbre model""></a></h3> <p>The concept of tristimulus originates in the world of color, describing the way three primary colors can be mixed together to create a given color. By analogy, the musical tristimulus measures the mixture of harmonics in a given sound, grouped into three sections. It is basically a proposal of reducing a huge number of sound partials, which can amount to dozens or hundreds in some cases, down to only three values. 
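</p> <p>As an illustration of how such a reduction can be computed from measured harmonic amplitudes, a hypothetical Python sketch (the grouping follows the definition spelled out in the next sentence; weighting by amplitude is one common choice):</p> <pre><code class="language-python">def tristimulus(amps):
    """Reduce a list of harmonic amplitudes [a1, a2, a3, ...] to three relative weights."""
    total = sum(amps)
    t1 = amps[0] / total         # the fundamental alone
    t2 = sum(amps[1:4]) / total  # harmonics 2-4 taken together
    t3 = sum(amps[4:]) / total   # everything above the fourth harmonic
    return t1, t2, t3

print(tristimulus([1.0, 0.5, 0.33, 0.25, 0.2, 0.17]))  # ~ (0.41, 0.44, 0.15)
</code></pre> <p>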
The first tristimulus measures the relative weight of the first harmonic; the second tristimulus measures the relative weight of the second, third, and fourth harmonics taken together; and the third tristimulus measures the relative weight of all the remaining harmonics.</p> <youtube-embed video="Wpt3lmSFW3k" /><iframe class="m-auto" title="vimeo-player" src="https://player.vimeo.com/video/164848028?h=68deef5350" width="640" height="360" frameborder="0" allowfullscreen></iframe>]]></content:encoded> <enclosure url="https://chromatone.center/overtones.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Synesthesia]]></title> <link>https://chromatone.center/theory/interplay/synesthesia/</link> <guid>https://chromatone.center/theory/interplay/synesthesia/</guid> <pubDate>Wed, 25 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Neurological phenomenon of interlinked sensory signals]]></description> <content:encoded><![CDATA[<p>The term Synesthesia comes from the Ancient Greek σύν syn, 'together', and αἴσθησις aisthēsis, 'sensation'.</p> <p>Synesthesia (American English) or synaesthesia (British English) is a perceptual phenomenon in which stimulation of one sensory or cognitive pathway leads to involuntary experiences in a second sensory or cognitive pathway. People who report a lifelong history of such experiences are known as synesthetes. Awareness of synesthetic perceptions varies from person to person. In one common form of synesthesia, known as grapheme–color synesthesia or color–graphemic synesthesia, letters or numbers are perceived as inherently colored. In spatial-sequence, or number form synesthesia, numbers, months of the year, or days of the week elicit precise locations in space (for example, 1980 may be "farther away" than 1990), or may appear as a three-dimensional map (clockwise or counterclockwise). Synesthetic associations can occur in any combination and any number of senses or cognitive pathways.</p> <p>Little is known about how synesthesia develops. It has been suggested that synesthesia develops during childhood when children are intensively engaged with abstract concepts for the first time. This hypothesis – referred to as the semantic vacuum hypothesis – explains why the most common forms of synesthesia are grapheme–color, spatial sequence, and number form. 
These are usually the first abstract concepts that educational systems require children to learn.</p> <h2 id="types" tabindex="-1">Types <a class="header-anchor" href="#types" aria-label="Permalink to "Types""></a></h2> <p>There are two overall forms of synesthesia:</p> <ul> <li>projective synesthesia: people who see colors, forms, or shapes when stimulated (the widely understood version of synesthesia).</li> <li>associative synesthesia: people who feel a very strong and involuntary connection between the stimulus and the sense that it triggers.</li> </ul> <p>For example, in chromesthesia (sound to color), a projector may hear a trumpet, and see an orange triangle in space, while an associator might hear a trumpet, and think very strongly that it sounds "orange".</p> <p>Synesthesia can occur between nearly any two senses or perceptual modes, and at least one synesthete, Solomon Shereshevsky, experienced synesthesia that linked all five senses.</p> <p>While nearly every logically possible combination of experiences can occur, several types are more common than others.</p> <h2 id="grapheme–color-synesthesia" tabindex="-1">Grapheme–color synesthesia <a class="header-anchor" href="#grapheme–color-synesthesia" aria-label="Permalink to "Grapheme–color synesthesia""></a></h2> <p><img src="./Number_Form-synesthesia.jpg" alt=""> A picture from the 2009 non-fiction book Wednesday Is Indigo Blue. Note the numbers 1-12 form an upside-down clock face.</p> <p>In one of the most common forms of synesthesia, individual letters of the alphabet and numbers (collectively referred to as graphemes) are "shaded" or "tinged" with a color. While different individuals usually do not report the same colors for all letters and numbers, studies with large numbers of synesthetes find some commonalities across letters (e.g., A is likely to be red).</p> <h2 id="chromesthesia" tabindex="-1">Chromesthesia <a class="header-anchor" href="#chromesthesia" aria-label="Permalink to "Chromesthesia""></a></h2> <p>Another common form of synesthesia is the association of sounds with colors. For some, everyday sounds such as doors opening, cars honking, or people talking can trigger seeing colors. For others, colors are triggered when musical notes or keys are being played. People with synesthesia related to music may also have perfect pitch because their ability to see/hear colors aids them in identifying notes or keys.</p> <p>The colors triggered by certain sounds, and any other synesthetic visual experiences, are referred to as photisms.</p> <p>Individuals rarely agree on what color a given sound is. B flat might be orange for one person and blue for another. Composers Franz Liszt and Nikolai Rimsky-Korsakov famously disagreed on the colors of musical keys.</p> <h2 id="composers-with-synesthesia" tabindex="-1">Composers with synesthesia <a class="header-anchor" href="#composers-with-synesthesia" aria-label="Permalink to "Composers with synesthesia""></a></h2> <p><strong>Franz Liszt</strong> is a composer who was known for asking performers to play with color. He was noted telling his orchestra to play the music in a "Bluer Fashion," since that is what the tone required. Synesthesia was not a common term in Liszt's time; people thought he was playing a trick on them when he referred to a color instead of a musical term.</p> <p><strong>Leonard Bernstein</strong> openly discussed his chromesthesia, which he described as a "timbre to color." 
Although he does not reference specific songs as being a certain color, he does explain the way it should sound to the artist performing. There are recordings of him stopping orchestras and singers when they are changing the "timbre." If someone changes the “timbre” or tone in a piece, it does not necessarily change the sound to the listener, but the composer with Chromesthesia will automatically know.</p> <p><strong>Amy Beach</strong> was another composer who had synesthesia. According to her perspective, each key signature was associated with a particular color. If an artist changed the key to suit their voice, then she would become upset because it would change the intended sound, portrayal, and emotion of the piece.</p> <p><strong>Olivier Messiaen</strong> was influenced by the color of musical keys for his compositions.</p> <p><strong>Alexander Scriabin</strong> was a Russian composer and pianist. He is famously regarded as a synesthete, but there is a lot of controversy surrounding whether he had chromesthesia or not. Scriabin was a major proponent of Theosophy, which had a system associating colors to feelings and emotions. This influenced the musician, who distinguished "spiritual" tonalities (like F-sharp major) from "earthly, material" ones (C major, F major). Furthermore, Alexander Scriabin developed a "keyboard with lights" or clavier à lumières, which directly matched musical notes with colors.</p> <p><img src="./Scriabin-Circle.svg" alt="svg"></p> <p>Scriabin's sound-to-color associations arranged into a circle of fifths, demonstrating its spectral quality.</p> <youtube-embed video="V3B7uQ5K0IU" /><p>Scriabin was friends with composer <strong>Nikolai Rimsky-Korsakov</strong>, who was a synesthete, and their sound-to-color associations were not the same. Specifically, Rimsky-Korsakov made a distinction between major and minor scales and his associations had a "more neutral, spontaneous character".</p> <h2 id="other-types" tabindex="-1">Other types <a class="header-anchor" href="#other-types" aria-label="Permalink to "Other types""></a></h2> <h3 id="spatial-sequence-synesthesia" tabindex="-1">Spatial sequence synesthesia <a class="header-anchor" href="#spatial-sequence-synesthesia" aria-label="Permalink to "Spatial sequence synesthesia""></a></h3> <p>Those with spatial sequence synesthesia (SSS) tend to see ordinal sequences as points in space. For example some people see months as a spiral or a column (this also happens with letters, numbers and any other sequence). People with SSS may have superior memories; in one study, they were able to recall past events and memories far better and in far greater detail than those without the condition. They can also see months or dates in the space around them, but most synesthetes "see" these sequences in their mind's eye. Some people see time like a clock above and around them.</p> <h3 id="number-form" tabindex="-1">Number form <a class="header-anchor" href="#number-form" aria-label="Permalink to "Number form""></a></h3> <p>A number form is a mental map of numbers that automatically and involuntarily appear whenever someone who experiences number-forms synesthesia thinks of numbers. These numbers might appear in different locations and the mapping changes and varies between individuals. Number forms were first documented and named in 1881 by Francis Galton in "The Visions of Sane Persons". It is suggested that this might be caused by "cross-activation" of the neural pathway that connects the parietal lobes and angular gyrus. 
Both of these areas are involved in numerical cognition and spatial cognition, respectively.</p> <h3 id="auditory–tactile-synesthesia" tabindex="-1">Auditory–tactile synesthesia <a class="header-anchor" href="#auditory–tactile-synesthesia" aria-label="Permalink to "Auditory–tactile synesthesia""></a></h3> <p>In auditory–tactile synesthesia, certain sounds can induce sensations in parts of the body. For example, someone with auditory–tactile synesthesia may experience that hearing a specific word or sound feels like touch in one specific part of the body or may experience that certain sounds can create a sensation in the skin without being touched (not to be confused with the milder general reaction known as frisson, which affects approximately 50% of the population). It is one of the least common forms of synesthesia.</p> <h3 id="list-of-people-with-synesthesia" tabindex="-1">List of people with synesthesia <a class="header-anchor" href="#list-of-people-with-synesthesia" aria-label="Permalink to "List of people with synesthesia""></a></h3> <p><a href="https://en.wikipedia.org/wiki/List_of_people_with_synesthesia" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/List_of_people_with_synesthesia</a></p> <h2 id="frisson" tabindex="-1">Frisson <a class="header-anchor" href="#frisson" aria-label="Permalink to "Frisson""></a></h2> <p>Frisson (UK: /ˈfriːsɒn/ FREE-son, US: /friːˈsoʊn/ free-SOHN, French: [fʁisɔ̃]; French for "shiver"), also known as aesthetic chills or musical chills, is a psychophysiological response to rewarding auditory and/or visual stimuli that often induces a pleasurable or otherwise positively-valenced affective state and transient paresthesia (skin tingling or chills), sometimes along with piloerection (goose bumps) and mydriasis (pupil dilation).</p> <h3 id="frisson-is-caused-by-violations-of-musical-expectancy" tabindex="-1">Frisson is caused by violations of musical expectancy <a class="header-anchor" href="#frisson-is-caused-by-violations-of-musical-expectancy" aria-label="Permalink to "Frisson is caused by violations of musical expectancy""></a></h3> <p>Rhythmic, dynamic, harmonic, and/or melodic violations of a person’s explicit or implicit expectations appear to be a prerequisite for musical frisson. Loud, very high or low frequency, or quickly varying sounds (unexpected harmonies, moments of modulation, melodic appoggiaturas) have been shown to arouse the autonomic nervous system (ANS). Activation of the ANS has a consistent strong correlation with frisson, as one study showed that an opioid antagonist could block frisson from music. Leonard Meyer, a prominent philosopher of music, wrote in his text “Emotion and Meaning in Music” that music’s ability to evoke emotion in the listener stems from its ability to meet and break expectations.</p> <h2 id="ideasthesia" tabindex="-1">Ideasthesia <a class="header-anchor" href="#ideasthesia" aria-label="Permalink to "Ideasthesia""></a></h2> <p>Ideasthesia (alternative spelling ideaesthesia) is a neuroscientific phenomenon in which activations of concepts (inducers) evoke perception-like sensory experiences (concurrents). The name comes from the Ancient Greek ἰδέα (idéa) and αἴσθησις (aísthēsis), meaning "sensing concepts" or "sensing ideas". 
The notion was introduced by neuroscientist Danko Nikolić as an alternative explanation for a set of phenomena traditionally covered by synesthesia.</p> <p>While "synesthesia" meaning "union of senses" implies the association of two sensory elements with little connection to the cognitive level, empirical evidence indicated that most phenomena linked to synesthesia are in fact induced by semantic representations. That is, the linguistic meaning of the stimulus is what is important rather than its sensory properties. In other words, while synesthesia presumes that both the trigger (inducer) and the resulting experience (concurrent) are of sensory nature, ideasthesia presumes that only the resulting experience is of sensory nature while the trigger is semantic.</p> <p><img src="./Booba-Kiki.svg" alt="svg"></p> <p>Over the past decade, it has been suggested that the Bouba/Kiki phenomenon is a case of ideasthesia. Most people will agree that the star-shaped object on the left is named Kiki and the round one on the right Bouba. It has been assumed that these associations come from direct connections between visual and auditory cortices. However, Gomez et al. have shown that Kiki/Bouba associations are much richer as either word and either image is associated semantically to a number of concepts such as white or black color, feminine vs. masculine, cold vs. hot, and others. These sound-shape associations seem to be related through a large overlap between semantic networks of Kiki and star-shape on the one hand, and Bouba and round-shape on the other hand. For example, both Kiki and star-shape are clever, small, thin and nervous. This indicates that behind Kiki-Bouba effect lies a rich semantic network. In other words, our sensory experience is largely determined by the meaning that we assign to stimuli.</p> <h2 id="implications-for-art-theory" tabindex="-1">Implications for art theory <a class="header-anchor" href="#implications-for-art-theory" aria-label="Permalink to "Implications for art theory""></a></h2> <p>The concept of ideasthesia has been often discussed in relation to art, and also used to formulate a psychological theory of art. According to the theory, we consider something to be a piece of art when experiences induced by the piece are accurately balanced with semantics induced by the same piece. Thus, a piece of art makes us both strongly think and strongly experience. Moreover, the two must be perfectly balanced such that the most salient stimulus or event is both the one that evokes strongest experiences (fear, joy, ... ) and strongest cognition (recall, memory, ...) — in other words, idea is well balanced with aesthesia.</p> <p>Ideasthesia theory of art may be used for psychological studies of aesthetics. It may also help explain classificatory disputes about art as its main tenet is that experience of art can only be individual, depending on person's unique knowledge, experiences and history. 
There could exist no general classification of art satisfactorily applicable to each and all individuals.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/Number_Form-synesthesia.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Electromagnetic fields]]></title> <link>https://chromatone.center/theory/color/light/em-field/</link> <guid>https://chromatone.center/theory/color/light/em-field/</guid> <pubDate>Tue, 24 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[EM-waves propagating through space]]></description> <content:encoded><![CDATA[<p>An electromagnetic field (also EM field) is a physical field, mathematical functions of position and time, representing the influences on and due to electric charges. The field at any point in space and time can be regarded as a combination of an electric field and a magnetic field. Because of the interrelationship between the fields, a disturbance in the electric field can create a disturbance in the magnetic field which in turn affects the electric field, leading to an oscillation that propagates through space, known as an electromagnetic wave.</p> <p>The way in which charges and currents (i.e. streams of charges) interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. Maxwell's equations detail how the electric field converges towards or diverges away from electric charges, how the magnetic field curls around electrical currents, and how changes in the electric and magnetic fields influence each other. The Lorentz force law states that a charge subject to an electric field feels a force along the direction of the field, and a charge moving through a magnetic field feels a force that is perpendicular both to the magnetic field and to its direction of motion.</p> <p>The electromagnetic field is described by classical electrodynamics, an example of a classical field theory. This theory describes many macroscopic physical phenomena accurately. However, it was unable to explain the photoelectric effect and atomic absorption spectroscopy, experiments at the atomic scale. That required the use of quantum mechanics, specifically the quantization of the electromagnetic field and the development of quantum electrodynamics.</p> <p>Strongly magnetic materials (i.e., ferromagnetic, ferrimagnetic or paramagnetic) have a magnetization that is primarily due to electron spin.</p> <h2 id="transformations-of-electromagnetic-fields" tabindex="-1">Transformations of electromagnetic fields <a class="header-anchor" href="#transformations-of-electromagnetic-fields" aria-label="Permalink to "Transformations of electromagnetic fields""></a></h2> <p>Whether a physical effect is attributable to an electric field or to a magnetic field is dependent upon the observer, in a way that special relativity makes mathematically precise. For example, suppose that a laboratory contains a long straight wire that carries an electrical current. In the frame of reference where the laboratory is at rest, the wire is motionless and electrically neutral: the current, composed of negatively charged electrons, moves against a background of positively charged ions, and the densities of positive and negative charges cancel each other out. A test charge near the wire would feel no electrical force from the wire. However, if the test charge is in motion parallel to the current, the situation changes. 
In the rest frame of the test charge, the positive and negative charges in the wire are moving at different speeds, and so the positive and negative charge distributions are Lorentz-contracted by different amounts. Consequently, the wire has a nonzero net charge density, and the test charge must experience a nonzero electric field and thus a nonzero force. In the rest frame of the laboratory, there is no electric field to explain the test charge being pulled towards or pushed away from the wire. So, an observer in the laboratory rest frame concludes that a magnetic field must be present.</p> <p>In general, a situation that one observer describes using only an electric field will be described by an observer in a different inertial frame using a combination of electric and magnetic fields. Analogously, a phenomenon that one observer describes using only a magnetic field will be, in a relatively moving reference frame, described by a combination of fields. The rules for relating the fields required in different reference frames are the Lorentz transformations of the fields.</p> <p>Thus, electrostatics and magnetostatics are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field, and since an EM field with both electric and magnetic will appear in any other frame, these "simpler" effects are merely a consequence of different frames of measurement. The fact that the two field variations can be reproduced just by changing the motion of the observer is further evidence that there is only a single actual field involved which is simply being observed differently.</p> <h2 id="reciprocal-behavior-of-electric-and-magnetic-fields" tabindex="-1">Reciprocal behavior of electric and magnetic fields <a class="header-anchor" href="#reciprocal-behavior-of-electric-and-magnetic-fields" aria-label="Permalink to "Reciprocal behavior of electric and magnetic fields""></a></h2> <p>The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as "a changing magnetic field inside a loop creates an electric voltage around the loop". This is the principle behind the electric generator.</p> <p>Ampere's Law roughly states that "an electrical current around a loop creates a magnetic field through the loop". Thus, this law can be applied to generate a magnetic field and run an electric motor.</p> <h2 id="behavior-of-the-fields-in-the-absence-of-charges-or-currents" tabindex="-1">Behavior of the fields in the absence of charges or currents <a class="header-anchor" href="#behavior-of-the-fields-in-the-absence-of-charges-or-currents" aria-label="Permalink to "Behavior of the fields in the absence of charges or currents""></a></h2> <p>A linearly polarized electromagnetic plane wave propagating parallel to the z-axis is a possible solution for the electromagnetic wave equations in free space. The electric field, E, and the magnetic field, B, are perpendicular to each other and the direction of propagation.</p> <p>Maxwell's equations can be combined to derive wave equations. The solutions of these equations take the form of an electromagnetic wave. 
In a volume of space not containing charges or currents (free space) – that is, where ρ and J are zero – the electric and magnetic fields satisfy these electromagnetic wave equations:</p> <blockquote> <p>∇²E = μ₀ε₀ ∂²E/∂t²  and  ∇²B = μ₀ε₀ ∂²B/∂t², describing waves that travel at the speed c = 1/√(μ₀ε₀)</p> </blockquote> <p>James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law. This unified the physical understanding of electricity, magnetism, and light: visible light is but one portion of the full range of electromagnetic waves, the electromagnetic spectrum.</p> <h2 id="time-varying-em-fields-in-maxwell-s-equations" tabindex="-1">Time-varying EM fields in Maxwell's equations <a class="header-anchor" href="#time-varying-em-fields-in-maxwell-s-equations" aria-label="Permalink to "Time-varying EM fields in Maxwell's equations""></a></h2> <p>An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR) since it radiates from the charges and currents in the source. Such radiation can occur across a wide range of frequencies called the electromagnetic spectrum, including radio waves, microwave, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.</p> <p>A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.</p> <p>A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of "close") will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.</p> <p>Changing electric dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.</p> <p>Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as RFID tags, metal detectors, and MRI scanner coils at higher frequencies.</p> <h2 id="health-and-safety" tabindex="-1">Health and safety <a class="header-anchor" href="#health-and-safety" aria-label="Permalink to "Health and safety""></a></h2> <p>The potential effects of electromagnetic fields on human health vary widely depending on the frequency, intensity of the fields, and the length of the exposure. Low frequency, low intensity, and short duration exposure to electromagnetic radiation is generally considered safe. 
On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, is known to cause significant harm in some circumstances.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/Results_of_Michael_Faraday's_iron_filings_experiment._Wellcome_M0000164.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Electromagnetic radiation]]></title> <link>https://chromatone.center/theory/color/light/em-waves/</link> <guid>https://chromatone.center/theory/color/light/em-waves/</guid> <pubDate>Tue, 24 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[EM-waves propagating through space]]></description> <content:encoded><![CDATA[<youtube-embed video="FWCN_uI5ygY" /><p>Synchronised oscillations (or their quanta, photons) of the electric and magnetic fields, propagating through space at the speed of ~300,000 km/s.</p> <p><img src="./emwavepropagation.jpg" alt=""></p> <p>Visible light is a certain portion of the electromagnetic spectrum between infrared (too weak to excite electrons in molecules) and ultraviolet (powerful enough to cause irreversible chemical reactions in organic matter).</p> <img src="./em-acoustic.svg" > <youtube-embed video="V_jYXQFjCmA" />]]></content:encoded> <enclosure url="https://chromatone.center/emwavepropagation.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chromatic spectrum]]></title> <link>https://chromatone.center/theory/interplay/spectrum/</link> <guid>https://chromatone.center/theory/interplay/spectrum/</guid> <pubDate>Tue, 24 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The model in which musical octave meets color spectrum.]]></description> <content:encoded><![CDATA[<p>Sonic and electromagnetic waves have very different media as their carrier fields. Sound is carried by air molecules, while light travels through vacuum thanks to the ever-present electromagnetic field.</p> <p>But they're still oscillations, so we can compare their frequencies and wavelengths. But what to choose? Let's try both!</p> <h2 id="frequency" tabindex="-1">Frequency <a class="header-anchor" href="#frequency" aria-label="Permalink to "Frequency""></a></h2> <p><img src="./acoustic-spectrum.svg" alt="svg"></p> <p>Let's start with frequencies. What is <strong>1 Hz</strong>? It's one oscillation per second. For EM it corresponds to radiation far in the <strong>long radio spectrum</strong>. We can't hear such slow air oscillations, and 1 Hz is in the <strong>infrasonic</strong> range.</p> <img src="./em-acoustic.svg" > <p>This juxtaposition shows that electromagnetic and acoustic oscillations are of entirely different nature and can’t be matched just as they are. Audible frequencies of oscillating air correspond to the long radio range of the EM spectrum. If compared by wavelength, our notes are situated somewhere around the FM radio range. In turn, the visible light oscillations are so fast that they can be matched only with hypersonic waves in some rigid bodies.</p> <p>These oscillations are so short that they are comparable with the size of atoms in a crystal lattice. The faster atoms move – the more heat they carry. A heated body starts to emit electromagnetic waves, starting from infrared and coming to the visible light range after about 1000 K. So we can say that sound and light are two main forms of oscillating energy propagation mechanics. 
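</p> <p>To get a feeling for the size of that gap, one can count how many octave doublings separate a concert A from the visible band (a rough Python sketch; the light-frequency figures are approximate):</p> <pre><code class="language-python">import math

A4 = 440.0                                 # Hz
red_edge = 4e14                            # ~400 THz, the low-frequency edge of visible light
print(round(math.log2(red_edge / A4), 1))  # ~39.7 octaves up to the visible band

a_as_light = A4 * 2 ** 40                  # A raised by 40 octaves
print(round(a_as_light / 1e12))            # ~484 THz
print(round(3e8 / a_as_light * 1e9))       # ~620 nm: orange-red, as the next section notes
</code></pre> <p>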
And the similarities between them can be better justified not by their physical nature, but by the nature of human perception of them.</p> <h1 id="_40th-octave-imaginary-sound" tabindex="-1">40th octave imaginary sound <a class="header-anchor" href="#_40th-octave-imaginary-sound" aria-label="Permalink to "40th octave imaginary sound""></a></h1> <p>Let's take it mathematically. Acoustic oscillation frequency doubles with every octave. This means we can find an imaginary pitch for any given frequency: we can find notes for rhythms, and it's also another way to bring light and sound together.</p> <p>Let's keep multiplying our A = 440 Hz by two until we reach the visible light spectrum – about 0.4–0.8 PHz. We can calculate all the note frequencies and place them on the spectrum diagram. What we get is that A is near orange-red, C is green and E is blue. Roughly. But if we choose a slightly lower base A frequency, we can see a pretty nice correlation.</p> <img src="./spectrum.svg" /> <p>We can conclude that it's a fundamental property of our perception to close perceived parts of any spectrum into a seamless circle. And these circles are not just illusions, as the resonances and periodicities are based on fundamental principles of physics.</p> <ColorSpectrum class="m-4" />]]></content:encoded> <enclosure url="https://chromatone.center/spectrum.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Sensory dissonance curve]]></title> <link>https://chromatone.center/practice/sound/dissonance/</link> <guid>https://chromatone.center/practice/sound/dissonance/</guid> <pubDate>Sun, 22 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The harmonic relations of notes]]></description> <content:encoded><![CDATA[<client-only > <SoundDissonance style="position: sticky; top: 0;" /></client-only > <div class="info custom-block"><p class="custom-block-title">INFO</p> <p>A simple curve for two sine waves is readily established and then we can calculate and explore sensory dissonance curves for complex sounds as the sum of interactions between their partials.</p> <p>Try dragging the note to hear the exact interval. Toggle the plain sine and rich sawtooth waveforms. Compare the feeling of consonance and the dips in the curve yourself.</p> <p><a href="./../../../theory/intervals/dissonance/">More info in the Theory research</a>.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/dissonance.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Tone clusters]]></title> <link>https://chromatone.center/theory/chords/clusters/</link> <guid>https://chromatone.center/theory/chords/clusters/</guid> <pubDate>Fri, 20 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Multiple adjacent tones played simultaneously]]></description> <content:encoded><![CDATA[<p>A tone cluster is a musical chord comprising at least three adjacent tones in a scale. Prototypical tone clusters are based on the chromatic scale and are separated by semitones. For instance, three adjacent piano keys (such as C, C♯, and D) struck simultaneously produce a tone cluster. Variants of the tone cluster include chords comprising adjacent tones separated diatonically, pentatonically, or microtonally. On the piano, such clusters often involve the simultaneous striking of neighboring white or black keys.</p> <p>The early years of the twentieth century saw tone clusters elevated to central roles in pioneering works by ragtime artists Jelly Roll Morton and Scott Joplin. 
In the 1910s, two classical avant-gardists, composer-pianists Leo Ornstein and Henry Cowell, were recognized as making the first extensive explorations of the tone cluster. During the same period, Charles Ives employed them in several compositions that were not publicly performed until the late 1920s or 1930s. Composers such as Béla Bartók and, later, Lou Harrison and Karlheinz Stockhausen became proponents of the tone cluster, which feature in the work of many 20th- and 21st-century classical composers. Tone clusters also play a significant role in the work of free jazz musicians such as Cecil Taylor and Matthew Shipp.</p> <p>In most Western music, tone clusters tend to be heard as dissonant. Clusters may be performed with almost any individual instrument on which three or more notes can be played simultaneously, as well as by most groups of instruments or voices. Keyboard instruments are particularly suited to the performance of tone clusters because it is relatively easy to play multiple notes in unison on them.</p> <p>In standard Western classical music practice, all tone clusters are classifiable as secundal chords—that is, they are constructed from minor seconds (intervals of one semitone), major seconds (intervals of two semitones), or, in the case of certain pentatonic clusters, augmented seconds (intervals of three semitones). Stacks of adjacent microtonal pitches also constitute tone clusters.</p> <p>In tone clusters, the notes are sounded fully and in unison, distinguishing them from ornamented figures involving acciaccaturas and the like. Their effect also tends to be different: where ornamentation is used to draw attention to the harmony or the relationship between harmony and melody, tone clusters are for the most part employed as independent sounds. While, by definition, the notes that form a cluster must sound at the same time, there is no requirement that they must all begin sounding at the same moment. For example, in R. Murray Schafer's choral Epitaph for Moonlight (1968), a tone cluster is constructed by dividing each choir section (soprano/alto/tenor/bass) into four parts. Each of the sixteen parts enters separately, humming a note one semitone lower than the note hummed by the previous part, until all sixteen are contributing to the cluster.</p> <p>Tone clusters have generally been thought of as dissonant musical textures, and even defined as such. As noted by Alan Belkin, however, instrumental timbre can have a significant impact on their effect: "Clusters are quite aggressive on the organ, but soften enormously when played by strings (possibly because slight, continuous fluctuations of pitch in the latter provide some inner mobility)." In his first published work on the topic, Henry Cowell observed that a tone cluster is "more pleasing" and "acceptable to the ear if its outer limits form a consonant interval." Cowell explains, "the natural spacing of so-called dissonances is as seconds, as in the overtone series, rather than sevenths and ninths....Groups spaced in seconds may be made to sound euphonious, particularly if played in conjunction with fundamental chord notes taken from lower in the same overtone series. Blends them together and explains them to the ear." Tone clusters have also been considered noise. As Mauricio Kagel says, "clusters have generally been used as a kind of anti-harmony, as a transition between sound and noise." Tone clusters thus also lend themselves to use in a percussive manner. 
Historically, they were sometimes discussed with a hint of disdain. One 1969 textbook defines the tone cluster as "an extra-harmonic clump of notes.</p> <h2 id="notation-and-execution" tabindex="-1">Notation and execution <a class="header-anchor" href="#notation-and-execution" aria-label="Permalink to "Notation and execution""></a></h2> <p>In his 1917 piece The Tides of Manaunaun, Cowell introduced a new notation for tone clusters on the piano and other keyboard instruments. In this notation, only the top and bottom notes of a cluster, connected by a single line or a pair of lines, are represented. This developed into the solid-bar style seen in the image on the right. Here, the first chord—stretching two octaves from D2 to D4—is a diatonic (so-called white-note) cluster, indicated by the natural sign below the staff. The second is a pentatonic (so-called black-note) cluster, indicated by the flat sign; a sharp sign would be required if the notes showing the limit of the cluster were spelled as sharps. A chromatic cluster—black and white keys together—is shown in this method by a solid bar with no sign at all. In scoring the large, dense clusters of the solo organ work Volumina in the early 1960s, György Ligeti, using graphical notation, blocked in whole sections of the keyboard.</p> <p><img src="./Cowell_tone_clusters.png" alt=""></p> <p>The performance of keyboard tone clusters is widely considered an "extended technique"—large clusters require unusual playing methods often involving the fist, the flat of the hand, or the forearm. Thelonious Monk and Karlheinz Stockhausen each performed clusters with their elbows; Stockhausen developed a method for playing cluster glissandi with special gloves. Don Pullen would play moving clusters by rolling the backs of his hands over the keyboard. Boards of various dimension are sometimes employed, as in the Concord Sonata (c. 1904–19) of Charles Ives; they can be weighted down to execute clusters of long duration. Several of Lou Harrison's scores call for the use of an "octave bar", crafted to facilitate high-speed keyboard cluster performance. Designed by Harrison with his partner William Colvig, the octave bar is</p> <blockquote> <p>a flat wooden device approximately two inches high with a grip on top and sponge rubber on the bottom, with which the player strikes the keys. Its length spans an octave on a grand piano. The sponge rubber bottom is sculpted so that its ends are slightly lower than its center, making the outer tones of the octave sound with greater force than the intermediary pitches. The pianist can thus rush headlong through fearfully rapid passages, precisely spanning an octave at each blow.</p> </blockquote> <h2 id="in-jazz" tabindex="-1">In jazz <a class="header-anchor" href="#in-jazz" aria-label="Permalink to "In jazz""></a></h2> <p>Scott Joplin wrote the first known published composition to include a musical sequence built around specifically notated tone clusters.</p> <p>Tone clusters have been employed by jazz artists in a variety of styles, since the very beginning of the form. Around the turn of the twentieth century, Storyville pianist Jelly Roll Morton began performing a ragtime adaptation of a French quadrille, introducing large chromatic tone clusters played by his left forearm. The growling effect led Morton to dub the piece his "Tiger Rag". 
In 1909, Scott Joplin's deliberately experimental "Wall Street Rag" included a section prominently featuring notated tone clusters.</p> <p>The fourth of Artie Matthews's Pastime Rags (1913–20) features dissonant right-hand clusters. Thelonious Monk, in pieces such as "Bright Mississippi" (1962), "Introspection" (1946) and "Off Minor" (1947), uses clusters as dramatic figures within the central improvisation and to accent the tension at its conclusion. They are heard on Art Tatum's "Mr. Freddy Blues" (1950), undergirding the cross-rhythms. By 1953, Dave Brubeck was employing piano tone clusters and dissonance in a manner anticipating the style free jazz pioneer Cecil Taylor would soon develop. The approach of hard bop pianist Horace Silver is an even clearer antecedent to Taylor's use of clusters. During the same era, clusters appear as punctuation marks in the lead lines of Herbie Nichols. In "The Gig" (1955), described by Francis Davis as Nichols's masterpiece, "clashing notes and tone clusters depic[t] a pickup band at odds with itself about what to play." Recorded examples of Duke Ellington's piano cluster work include "Summertime" (1961) and ...And His Mother Called Him Bill (1967) and This One's for Blanton!, his tribute to a former bass player, recorded in 1972 with bassist Ray Brown. Bill Evans' interpretation of “Come Rain or Come Shine” from the album Portrait in Jazz (1960), opens with a striking 5-tone cluster.</p> <p>In jazz, as in classical music, tone clusters have not been restricted to the keyboard. In the 1930s, the Jimmie Lunceford Orchestra's "Stratosphere" included ensemble clusters among an array of progressive elements. The Stan Kenton Orchestra's April 1947 recording of "If I Could Be With You One Hour Tonight," arranged by Pete Rugolo, features a dramatic four-note trombone cluster at the end of the second chorus. As described by critic Fred Kaplan, a 1950 performance by the Duke Ellington Orchestra features arrangements with the collective "blowing rich, dark, tone clusters that evoke Ravel." Chord clusters also feature in the scores of arranger Gil Evans. In his characteristically imaginative arrangement of George Gershwin's "There's a boat that's leaving soon for New York" from the album Porgy and Bess, Evans contributes chord clusters orchestrated on flutes, alto saxophone and muted trumpets as a background to accompany Miles Davis' solo improvisation. In the early 1960s, arrangements by Bob Brookmeyer and Gerry Mulligan for Mulligan's Concert Jazz Band employed tone clusters in a dense style bringing to mind both Ellington and Ravel. Eric Dolphy's bass clarinet solos would often feature "microtonal clusters summoned by frantic overblowing." Critic Robert Palmer called the "tart tone cluster" that "pierces a song's surfaces and penetrates to its heart" a specialty of guitarist Jim Hall's.</p> <p>Clusters are especially prevalent in the realm of free jazz. Cecil Taylor has used them extensively as part of his improvisational method since the mid-1950s. Like much of his musical vocabulary, his clusters operate "on a continuum somewhere between melody and percussion." One of Taylor's primary purposes in adopting clusters was to avoid the dominance of any specific pitch. Leading free jazz composer, bandleader, and pianist Sun Ra often used them to rearrange the musical furniture, as described by scholar John F. 
Szwed:</p> <blockquote> <p>When he sensed that [a] piece needed an introduction or an ending, a new direction or fresh material, he would call for a space chord, a collectively improvised tone cluster at high volume which "would suggest a new melody, maybe a rhythm." It was a pianistically conceived device which created another context for the music, a new mood, opening up fresh tonal areas.[105]</p> </blockquote> <p>As free jazz spread in the 1960s, so did the use of tone clusters. In comparison with what John Litweiler describes as Taylor's "endless forms and contrasts," the solos of Muhal Richard Abrams employ tone clusters in a similarly free, but more lyrical, flowing context. Guitarist Sonny Sharrock made them a central part of his improvisations; in Palmer's description, he executed "glass-shattering tone clusters that sounded like someone was ripping the pickups out of the guitar without having bothered to unplug it from its overdriven amplifier." Pianist Marilyn Crispell has been another major free jazz proponent of the tone cluster, frequently in collaboration with Anthony Braxton, who played with Abrams early in his career. Since the 1990s, Matthew Shipp has built on Taylor's innovations with the form. European free jazz pianists who have contributed to the development of the tone cluster palette include Gunter Hampel and Alexander von Schlippenbach.</p> <p>Don Pullen, who bridged free and mainstream jazz, "had a technique of rolling his wrists as he improvised—the outside edges of his hands became scarred from it—to create moving tone clusters," writes critic Ben Ratliff. "Building up from arpeggios, he could create eddies of noise on the keyboard...like concise Cecil Taylor outbursts." In the description of Joachim Berendt, Pullen "uniquely melodized cluster playing and made it tonal. He phrases impulsively raw clusters with his right hand and yet embeds them in clear, harmonically functional tonal chords simultaneously played with the left hand." John Medeski employs tone clusters as keyboardist for Medeski, Martin, and Wood, which mixes free jazz elements into its soul jazz/jam band style.</p> <h2 id="in-popular-music" tabindex="-1">In popular music <a class="header-anchor" href="#in-popular-music" aria-label="Permalink to "In popular music""></a></h2> <p>Like jazz, rock and roll has made use of tone clusters since its birth, if characteristically in a less deliberate manner—most famously, Jerry Lee Lewis's live-performance piano technique of the 1950s, involving fists, feet, and derrière. Since the 1960s, much drone music, which crosses the lines between rock, electronic, and experimental music, has been based on tone clusters. On The Velvet Underground's "Sister Ray," recorded in September 1967, organist John Cale uses tone clusters within the context of a drone; the song is apparently the closest approximation on record of the band's early live sound. Around the same time, Doors keyboardist Ray Manzarek began introducing clusters into his solos during live performances of the band's hit "Light My Fire."</p> <p>Kraftwerk's self-titled 1970 debut album employs organ clusters to add variety to its repeated tape sequences. In 1971, critic Ed Ward lauded the "tone-cluster vocal harmonies" created by Jefferson Airplane's three lead singers, Grace Slick, Marty Balin, and Paul Kantner. Tangerine Dream's 1972 double album Zeit is replete with clusters performed on synthesizer. 
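<p>To get a feel for what a sustained synthesizer cluster of this kind sounds like, here is a minimal Web Audio sketch. It is purely illustrative (the <code>playCluster</code> helper, the C3–G3 range and the four-second duration are arbitrary choices, not taken from any recording mentioned here): every semitone between two MIDI notes is started at once and left to ring as a drone.</p>

```js
// Illustrative sketch: a sustained chromatic tone cluster with the Web Audio API.
// The range (MIDI 48–55, i.e. C3–G3) and the duration are arbitrary.
const midiToHz = midi => 440 * Math.pow(2, (midi - 69) / 12)

function playCluster (ctx, lowMidi = 48, highMidi = 55, seconds = 4) {
  const gain = ctx.createGain()
  gain.gain.value = 0.05               // keep the summed oscillators from clipping
  gain.connect(ctx.destination)
  for (let midi = lowMidi; midi <= highMidi; midi++) {
    const osc = ctx.createOscillator()
    osc.type = 'sawtooth'              // a bright, organ-like sustained timbre
    osc.frequency.value = midiToHz(midi)
    osc.connect(gain)
    osc.start()
    osc.stop(ctx.currentTime + seconds)
  }
}

// Usage (browsers require a user gesture before audio can start):
// playCluster(new AudioContext())
```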
In later rock practice, the D add9 chord characteristic of jangle pop involves a three-note set separated by major seconds (D, E, F♯), the sort of guitar cluster that may be characterized as a harp effect. The Beatles' 1965 song "We Can Work It Out" features a momentarily grating tone cluster with voices singing A sharp and C sharp against the accompanying keyboard playing a sustained chord on B to the word "time." The Band's 1968 song "The Weight" from their debut album Music from Big Pink features a dissonant vocal refrain with suspensions culminating in a 3-note cluster to the words "you put the load right on me."</p> <p>The sound of tone clusters played on the organ became a convention in radio drama for dreams. Clusters are often used in the scoring of horror and science-fiction films. For a 2004 production of the play Tone Clusters by Joyce Carol Oates, composer Jay Clarke—a member of the indie rock bands Dolorean and The Standard—employed clusters to "subtly build the tension", in contrast to what he perceived in the cluster pieces by Cowell and Ives suggested by Oates: “Some of it was like music to murder somebody to; it was like horror-movie music”.</p> <h2 id="use-in-other-music" tabindex="-1">Use in other music <a class="header-anchor" href="#use-in-other-music" aria-label="Permalink to "Use in other music""></a></h2> <youtube-embed video="0T1pyZZiBO0" /><p>In traditional Japanese gagaku, the imperial court music, a tone cluster performed on shō (a type of mouth organ) is generally employed as a harmonic matrix. Yoritsune Matsudaira, active from the late 1920s to the early 2000s, merged gagaku's harmonies and tonalities with avant-garde Western techniques. Much of his work is built on the shō's ten traditional cluster formations. Lou Harrison's Pacifika Rondo, which mixes Eastern and Western instrumentation and styles, mirrors the gagaku approach—sustained organ clusters emulate the sound and function of the shō. The shō also inspired Benjamin Britten in creating the instrumental texture of his 1964 dramatic church parable Curlew River. Its sound pervades the characteristically sustained cluster chords played on a chamber organ. Traditional Korean court and aristocratic music employs passages of simultaneous ornamentation on multiple instruments, creating dissonant clusters; this technique is reflected in the work of twentieth-century Korean German composer Isang Yun.</p> <p>Several East Asian free reed instruments, including the shō, were modeled on the sheng, an ancient Chinese folk instrument later incorporated into more formal musical contexts. Wubaduhesheng, one of the traditional chord formations played on the sheng, involves a three-pitch cluster. Malayan folk musicians employ an indigenous mouth organ that, like the shō and sheng, produces tone clusters. The characteristic musical form played on the bin-baja, a strummed harp of central India's Pardhan people, has been described as a "rhythmic ostinato on a tone cluster."</p> <p>Among the Asante, in the region that is today encompassed by Ghana, tone clusters are used in traditional trumpet music. A distinctive "tongue-rattling technique gives a greater vibrancy to...already dissonant tonal cluster[s].... 
[I]ntentional dissonance dispels evil spirits, and the greater the clangor, the greater the sound barrage.</p> <youtube-embed video="L9Z8Ty2-3xk?t=213" /><p>Wendy Carlos used essentially the exact reverse of this methodology to derive her Alpha scale, Beta scale and Gamma scale; they are the most consonant scales one can derive by treating tone clusters as the only type of triad that really exists, which is paradoxically an anti-harmonic monistic method.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/matthew-ball.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Subtractive color models]]></title> <link>https://chromatone.center/theory/color/models/subtractive/</link> <guid>https://chromatone.center/theory/color/models/subtractive/</guid> <pubDate>Fri, 20 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The colors produced by materials absorbing certain light frequencies. RYB and CMYK]]></description> <content:encoded><![CDATA[<p>Subtractive means that color is produced by absorbing some parts of white light spectrum by the material. Subtractive models are used in painting and printing, where different pigment mixtures make up different colors.</p> <p>A color model is subtractive in the sense that mixtures of dyes subtract specific wavelengths from the spectral power distribution of the illuminating light which is scattered back into the viewer's eye and is perceived as colored. Mixing of dyes is used to reproduce a gamut of colors, the resultant color from this layer is predicted by multiplying (not subtracting) the absorbance profiles of the dyes.</p> <h3 id="ryb" tabindex="-1">RYB <a class="header-anchor" href="#ryb" aria-label="Permalink to "RYB""></a></h3> <img src="./chromatography_1841.png"> <blockquote> <p>An RYB color chart from George Field's 1841 Chromatography; or, A treatise on colours and pigments: and of their powers in painting.</p> </blockquote> <p><strong>RYB</strong> (an abbreviation of red–yellow–blue) is a subtractive color model used in art and applied design in which red, yellow, and blue pigments are considered primary colors.</p> <p>In this context, the term primary color refers to three exemplar colors (red, yellow, and blue) as opposed to specific pigments. As illustrated, in the RYB color model, red, yellow, and blue are intermixed to create secondary color segments of orange, green, and purple. This set of primary colors emerged at a time when access to a large range of pigments was limited by availability and cost, and it encouraged artists and designers to explore the many nuances of color through mixing and intermixing a limited range of pigment colors. In art and design education, red, yellow, and blue pigments were usually augmented with white and black pigments, enabling the creation of a larger gamut of color nuances including tints and shades.</p> <p><img src="./tint-tone-shade.svg" alt=""></p> <blockquote> <p><strong>Jacob Christoph Le Blon</strong> was the first to apply the RYB color model to printing, specifically mezzotint printing, and he used separate plates for each color: yellow, red and blue plus black to add shades and contrast. In 'Coloritto', Le Blon asserted that “the art of mixing colours…(in) painting can represent all visible objects with three colours: yellow, red and blue; for all colours can be composed of these three, which I call Primitive”. 
Le Blon added that red and yellow make orange; red and blue, make purple; and blue and yellow make green (Le Blon, 1725, p6).</p> </blockquote> <h3 id="cmy-and-cmyk" tabindex="-1">CMY and CMYK <a class="header-anchor" href="#cmy-and-cmyk" aria-label="Permalink to "CMY and CMYK""></a></h3> <p>The <strong>CMY</strong> color model is a subtractive color model in which cyan, magenta and yellow pigments or dyes are added together in various ways to reproduce a broad array of colors.</p> <p>When the intensities for all the components are the same, the result is a shade of gray, lighter, or darker depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference of the strongest and weakest of the intensities of the primary colors employed.</p> <h3 id="interactive-cmyk-mixer" tabindex="-1">Interactive CMYK mixer <a class="header-anchor" href="#interactive-cmyk-mixer" aria-label="Permalink to "Interactive CMYK mixer""></a></h3> <p>Drag any of the color components up or right to increase its value.</p> <color-cmyk /><p><strong>CMYK</strong> color model is a subtractive color model, based on the CMY color model, used in color printing, and is also used to describe the printing process itself, that is used in the layering technique by printers to create different colors on a white paper. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow, and key. It uses K, black ink, since C, M, and Y inks are translucent and will only produce a gray color when laid on top of each other.</p> <p>With CMYK printing, halftoning (also called screening) allows for less than full saturation of the primary colors; tiny dots of each primary color are printed in a pattern small enough that humans perceive a solid color.</p> <p>Light, saturated colors often cannot be created with CMYK, and light colors in general may make visible the halftone pattern. Using a CcMmYK process, with the addition of light cyan and magenta inks to CMYK, can solve these problems.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/cmyk.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Just intonation]]></title> <link>https://chromatone.center/theory/notes/temperaments/just/</link> <guid>https://chromatone.center/theory/notes/temperaments/just/</guid> <pubDate>Fri, 20 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[5-limit and other rational fraction based tunings]]></description> <content:encoded><![CDATA[<p><strong>Just intonation</strong> or pure intonation is the tuning of musical intervals as whole number ratios (such as 3:2 or 4:3) of frequencies. An interval tuned in this way is said to be pure, and is called a just interval. Just intervals (and chords created by combining them) consist of tones from a single harmonic series of an implied fundamental. For example, if the notes G3 and C4 are tuned as members of the harmonic series of the lowest C, their frequencies will be 3 and 4 times the fundamental frequency. The interval ratio between C4 and G3 is therefore 4:3, a just fourth.</p> <p>In Western musical practice, instruments are rarely tuned using only pure intervals — the desire for different keys to have identical intervals in Western music makes this impractical. Some instruments of fixed pitch, such as electric pianos, are commonly tuned using equal temperament, in which all intervals other than octaves consist of irrational-number frequency ratios. 
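<p>One way to make the contrast concrete is to measure both kinds of interval in cents (hundredths of an equal-tempered semitone), using the relation cents = 1200 · log₂(ratio). The sketch below is only an illustration; it compares the just ratios mentioned on this page with the nearest 12-tone equal temperament intervals.</p>

```js
// Illustrative sketch: just (whole-number ratio) intervals vs. their nearest
// 12-tone equal temperament equivalents, measured in cents.
const cents = ratio => 1200 * Math.log2(ratio)

const justIntervals = {
  'perfect fourth (4:3)': 4 / 3,
  'perfect fifth (3:2)': 3 / 2,
  'major third (5:4)': 5 / 4,
  'major seventh (15:8)': 15 / 8
}

for (const [name, ratio] of Object.entries(justIntervals)) {
  const just = cents(ratio)
  const tempered = Math.round(just / 100) * 100     // nearest equal-tempered interval
  console.log(`${name}: just ${just.toFixed(2)}¢, 12-TET ${tempered}¢, ` +
    `difference ${(just - tempered).toFixed(2)}¢`)
}
// e.g. perfect fifth (3:2): just 701.96¢, 12-TET 700¢, difference 1.96¢
//      major third (5:4):   just 386.31¢, 12-TET 400¢, difference -13.69¢
```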
Acoustic pianos are usually tuned with the octaves slightly widened, and thus with no pure intervals at all.</p> <h3 id="terminology" tabindex="-1">Terminology <a class="header-anchor" href="#terminology" aria-label="Permalink to "Terminology""></a></h3> <p><a href="./../pythagorean/">Pythagorean tuning</a>, or 3-limit tuning, allows ratios including the numbers 2 and 3 and their powers, such as 3:2, a perfect fifth, and 9:4, a major ninth. Although the interval from C to G is called a perfect fifth for purposes of music analysis regardless of its tuning method, for purposes of discussing tuning systems musicologists may distinguish between a perfect fifth created using the 3:2 ratio and a tempered fifth using some other system, such as meantone or equal temperament.</p> <p><strong>5-limit tuning</strong> encompasses ratios additionally using the number 5 and its powers, such as 5:4, a major third, and 15:8, a major seventh. The specialized term perfect third is occasionally used to distinguish the 5:4 ratio from major thirds created using other tuning methods. <strong>7-limit</strong> and higher systems use higher partials in the overtone series.</p> <p><strong>Commas</strong> are very small intervals that result from minute differences between pairs of just intervals. For example, the 5:4 ratio is different from the Pythagorean (3-limit) major third (81:64) by a difference of 81:80, called the syntonic comma.</p> <p><img src="./intervals.svg" alt="svg"></p> <p>A twelve-tone scale can also be created by compounding harmonics up to the fifth: namely, by multiplying the frequency of a given reference note (the base note) by powers of 2, 3, or 5, or a combination of them. This method is called five-limit tuning.</p> <p><img src="./pentactys.svg" alt="svg"></p> <p>5-limit tuning encompasses ratios additionally using the number 5 and its powers, such as 5:4, a major third, and 15:8, a major seventh. 7-limit and higher systems use higher partials in the overtone series.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/intervals.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Pythagorean tuning]]></title> <link>https://chromatone.center/theory/notes/temperaments/pythagorean/</link> <guid>https://chromatone.center/theory/notes/temperaments/pythagorean/</guid> <pubDate>Fri, 20 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[3-limit tuning based on the 3:2 ratio]]></description> <content:encoded><![CDATA[<h2 id="pythagorean-tuning" tabindex="-1">Pythagorean tuning <a class="header-anchor" href="#pythagorean-tuning" aria-label="Permalink to "Pythagorean tuning""></a></h2> <p>Pythagorean tuning, or 3-limit tuning, also allows ratios including the number 3 and its powers, such as 3:2, a perfect fifth, and 9:4, a major ninth. 12-tone Pythagorean temperament is based on a stack of intervals called perfect fifths, each tuned in the ratio 3:2, the next simplest ratio after 2:1. Starting from D for example (D-based tuning), six other notes are produced by moving six times a ratio 3:2 up, and the remaining ones by moving the same ratio down:</p> <blockquote> <p>E♭–B♭–F–C–G–D–A–E–B–F♯–C♯–G♯</p> </blockquote> <p>This succession of eleven 3:2 intervals spans across a wide range of frequency (on a piano keyboard, it encompasses 77 keys). Since notes differing in frequency by a factor of 2 are given the same name, it is customary to divide or multiply the frequencies of some of these notes by 2 or by a power of 2. 
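<p>As a small illustration of this folding step (a sketch only; the helper name and the choice of six fifths up and five down follow the D-based example above), each stacked 3:2 ratio is halved or doubled until it lies between the base note and its octave:</p>

```js
// Illustrative sketch: a 12-note Pythagorean scale built from pure 3:2 fifths,
// with every ratio folded back into the basic octave [1, 2).
function pythagorean (fifthsUp = 6, fifthsDown = 5) {
  const ratios = []
  for (let i = -fifthsDown; i <= fifthsUp; i++) {
    let r = Math.pow(3 / 2, i)   // i fifths above (negative: below) the base note
    while (r >= 2) r /= 2        // fold down into the octave
    while (r < 1) r *= 2         // fold up into the octave
    ratios.push(r)
  }
  return ratios.sort((a, b) => a - b)
}

// Ratios relative to the base note D, low to high within one octave:
console.log(pythagorean().map(r => r.toFixed(4)).join(' '))
// 1.0000 1.0535 1.1250 1.1852 1.2656 1.3333 1.4238 1.5000 1.5802 1.6875 1.7778 1.8984
```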
The purpose of this adjustment is to move the 12 notes within a smaller range of frequency, namely within the interval between the base note D and the D above it (a note with twice its frequency). This interval is typically called the basic octave (on a piano keyboard, an octave has only 12 keys).</p> <p><img src="./Tetractys.svg" alt="svg"></p> <p>The tetractys (Greek: τετρακτύς) is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row, which is the geometrical representation of the fourth triangular number. As a mystical symbol, it was very important to the secret worship of Pythagoreanism.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/Tetractys.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Step sequencer]]></title> <link>https://chromatone.center/practice/chord/sequencer/</link> <guid>https://chromatone.center/practice/chord/sequencer/</guid> <pubDate>Thu, 19 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[A simple tool to build up melodies and chord progressions]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/sequencing/sequencer/" target="_blank" rel="noreferrer">https://chromatone.center/practice/sequencing/sequencer/</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/sequencer.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Step sequencer]]></title> <link>https://chromatone.center/practice/sequencing/sequencer/</link> <guid>https://chromatone.center/practice/sequencing/sequencer/</guid> <pubDate>Thu, 19 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[A simple tool to build up melodies and chord progressions]]></description> <content:encoded><![CDATA[<StepSequencer style="position: sticky; top: 1em;" />]]></content:encoded> <enclosure url="https://chromatone.center/sequencer.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Evolution of european notation systems]]></title> <link>https://chromatone.center/theory/notes/staff/evolution/</link> <guid>https://chromatone.center/theory/notes/staff/evolution/</guid> <pubDate>Thu, 19 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[From neumes to staves]]></description> <content:encoded><![CDATA[<h2 id="medieval-neumes-c-500–1400" tabindex="-1">Medieval neumes c. 500–1400 <a class="header-anchor" href="#medieval-neumes-c-500–1400" aria-label="Permalink to "Medieval neumes c. 500–1400""></a></h2> <youtube-embed video="2OBB5-bP6qs" /><p><img src="./images/Dies_irae_ms_fragm.jpg" alt=""> <img src="./images/Dies_irae.gif" alt=""></p> <p>Early Western medieval notation was written with neumes, which did not specify exact pitches but only the shape of the melodies, i.e. indicating when the musical line went up or down; presumably these were intended as mnemonics for melodies which had been taught by rote.</p> <p><img src="./images/adiastemic.jpg" alt=""></p> <p>During the 9th through 11th centuries a number of systems were developed to specify pitch more precisely, including diastematic neumes whose height on the page corresponded with their absolute pitch level (Longobardian and Beneventan manuscripts from Italy show this technique around AD 1000). 
Digraphic notation, using letter names similar to modern note names in conjunction with the neumes, made a brief appearance in a few manuscripts, but a number of manuscripts used one or more horizontal lines to indicate particular pitches.</p> <p><img src="./images/Digraphic_neumes.png" alt=""></p> <h3 id="diastemic-neumes" tabindex="-1">Diastemic neumes <a class="header-anchor" href="#diastemic-neumes" aria-label="Permalink to "Diastemic neumes""></a></h3> <p><img src="./images/Beneventan_music_manuscript.jpg" alt=""></p> <p><img src="./images/diastemic.jpg" alt=""></p> <p>In the early 11th century, Beneventan neumes (from the churches of Benevento in southern Italy) were written at varying distances from the text to indicate the overall shape of the melody; such neumes are called "heightened" or "diastematic" neumes, which showed the relative pitches between neumes. A few manuscripts from the same period use "digraphic" notation in which note names are included below the neumes. Shortly after this, one to four staff lines—an innovation traditionally ascribed to Guido d'Arezzo—clarified the exact relationship between pitches. One line was marked as representing a particular pitch, usually C or F. These neumes resembled the same thin, scripty style of the chironomic notation. By the 11th century, chironomic neumes had evolved into square notation; in Germany, a variant called Gothic neumes continued to be used until the 16th century. This variant is also known as Hufnagel notation, as the used neumes resemble the nails (hufnagels) one uses to attach horseshoes.</p> <p><img src="./images/Goth002-web.jpg" alt=""></p> <h2 id="russian-neumes" tabindex="-1">Russian neumes <a class="header-anchor" href="#russian-neumes" aria-label="Permalink to "Russian neumes""></a></h2> <p>Znamenny Chant is a singing tradition used in the Russian Orthodox Church which uses a "hook and banner" notation. Znamenny Chant is unison, melismatic liturgical singing that has its own specific notation, called the stolp notation. The symbols used in the stolp notation are called kryuki (Russian: крюки, 'hooks') or znamena (Russian: знамёна, 'signs'). Often the names of the signs are used to refer to the stolp notation. Znamenny melodies are part of a system, consisting of Eight Modes (intonation structures; called glasy); the melodies are characterized by fluency and well-balancedness (Kholopov 2003, 192). There exist several types of Znamenny Chant: the so-called Stolpovoy, Malyj (Little) and Bolshoy (Great) Znamenny Chant. Ruthenian Chant (Prostopinije) is sometimes considered a sub-division of the Znamenny Chant tradition, with the Muscovite Chant (Znamenny Chant proper) being the second branch of the same musical continuum.</p> <p><img src="./hooks_and_banners.png" alt=""></p> <p>Znamenny Chants are not written with notes (the so-called linear notation), but with special signs, called Znamëna (Russian for "marks", "banners") or Kryuki ("hooks"), as some shapes of these signs resemble hooks. Each sign may include the following components: a large black hook or a black stroke, several smaller black 'points' and 'commas' and lines near the hook or crossing the hook. Some signs may mean only one note, some 2 to 4 notes, and some a whole melody of more than 10 notes with a complicated rhythmic structure. 
The stolp notation was developed in Kievan Rus' as an East Slavic refinement of the Byzantine neumatic musical notation.</p> <p><img src="./images/Old_Believers_Octoechos_2.jpg" alt=""></p> <p>The most notable feature of this notation system is that it records transitions of the melody, rather than notes. The signs also represent a mood and a gradation of how this part of melody is to be sung (tempo, strength, devotion, meekness, etc.) Every sign has its own name and also features as a spiritual symbol. For example, there is a specific sign, called "little dove" (Russian: голубчик (golubchik)), which represents two rising sounds, but which is also a symbol of the Holy Ghost. Gradually the system became more and more complicated. This system was also ambiguous, so that almost no one, except the most trained and educated singers, could sing an unknown melody at sight. The signs only helped to reproduce the melody, not coding it in an unambiguous way.</p> <p><img src="./images/Kryuki.jpg" alt=""></p> <blockquote> <p><img src="./images/Musical_manuscript.jpg" alt="A musical manuscript of 1433 (Pantokratoros monastery, code 214)"> A musical manuscript of 1433 (Pantokratoros monastery, code 214)</p> </blockquote> <blockquote> <p><img src="./images/Easter_koinonikon_of_the_Kievan_Rus_with_Kondakarian_notation.jpg" alt=""> Easter koinonikon тҍло христово / σῶμα χριστοῦ ("The body of Christ") in echos plagios protos notated with kondakarian notation in 2 rows: great (red names) and small signs (blue names)</p> </blockquote> <hr> <h2 id="guido-d-arezzo" tabindex="-1">Guido d'Arezzo <a class="header-anchor" href="#guido-d-arezzo" aria-label="Permalink to "Guido d'Arezzo""></a></h2> <p>Guido of Arezzo or Guido d'Arezzo (c. 991–992 – after 1033) was an Italian music theorist and pedagogue of High medieval music. A Benedictine monk, he is regarded as the inventor—or by some, developer—of the modern staff notation that replaced the predominant neumatic notation and was thus massively influential to the development of Western musical notation and practice.</p> <blockquote> <p><img src="./images/Guido_e_Teodaldo.jpg" alt=""> Guido (left) showing Tedald the monochord, depicted in an 11th-century medieval manuscript</p> </blockquote> <p>By around 1013 he began teaching at Pomposa Abbey, but his antiphonary Prologus in antiphonarium and novel teaching methods based on staff notation brought considerable resentment from his colleagues. He thus moved to Arezzo in 1025 and under the patronage of Bishop Tedald of Arezzo he taught singers at the Arezzo Cathedral. Using staff notation, he was able to teach large amounts of music quickly and he wrote the multifaceted Micrologus, attracting attention from around Italy. Interested in his innovations, Pope John XIX called him to Rome. After arriving and beginning to explain his methods to the clergy, sickness sent him away in the summer. The rest of his life is largely unclear, but he settled in a monastery near Arezzo, probably one of the Avellana of the Camaldolese order.</p> <p><img src="./images/UtQueantLaxis-Arezzo.png" alt=""></p> <p>Guido developed new techniques for teaching, such as staff notation and the use of the "ut–re–mi–fa–sol–la" (do–re–mi–fa–so–la) mnemonic (solmization). 
The ut–re–mi-fa-sol-la syllables are taken from the initial syllables of each of the first six half-lines of the first stanza of the hymn Ut queant laxis, whose text is attributed to the Italian monk and scholar Paulus Diaconus (though the musical line either shares a common ancestor with the earlier setting of Horace's "Ode to Phyllis" (Odes 4.11), recorded in the Montpellier manuscript H425, or may have been taken from it).[23] Giovanni Battista Doni is known for having changed the name of note "Ut" (C), renaming it "Do" (in the "Do Re Mi ..." sequence known as solfège).[24] A seventh note, "Si" (from the initials for "Sancte Iohannes," Latin for Saint John the Baptist) was added shortly after to complete the diatonic scale.[25] In anglophone countries, "Si" was changed to "Ti" by Sarah Glover in the nineteenth century so that every syllable might begin with a different letter (this also freed up Si for later use as Sol-sharp). "Ti" is used in tonic sol-fa and in the song "Do-Re-Mi".</p> <p><img src="./images/Guidonian_scale2.png" alt=""></p> <h3 id="guidonian-hand" tabindex="-1">Guidonian hand <a class="header-anchor" href="#guidonian-hand" aria-label="Permalink to "Guidonian hand""></a></h3> <p>Guido is somewhat erroneously credited with the invention of the Guidonian hand, a widely used mnemonic system where note names are mapped to parts of the human hand. Only a rudimentary form of the Guidonian hand is actually described by Guido, and the fully elaborated system of natural, hard, and soft hexachords cannot be securely attributed to him.</p> <p><img src="./images/Guidonian_hand.jpg" alt=""></p> <p>In the 12th century, a development in teaching and learning music in a more efficient manner had arisen. Guido of Arezzo's alleged development of the Guidonian hand, more than a hundred years after his death, allowed for musicians to label a specific joint or fingertip with the gamut (also referred to as the hexachord in the modern era).[citation needed] Using specific joints of the hand and fingertips transformed the way one would learn and memorize solmization syllables. Not only did the Guidonian hand become a standard use in preparing music in the 12th century, its popularity grew more widespread well into the 17th and 18th century. The knowledge and use of the Guidonian hand would allow a musician to simply transpose, identify intervals, and aid in use of notation and the creation of new music. 
Musicians were able to sing and memorize longer sections of music and counterpoint during performances and the amount of time spent diminished dramatically.</p> <p><img src="./images/Guidonischehand.gif" alt="Guido's hand"></p> <youtube-embed video="veJNu1fi8p4" /><blockquote> <p><strong>Ut</strong> queant laxis<br> <strong>Re</strong>sonare fibris,<br> <strong>Mi</strong>ra gestorum<br> <strong>Fa</strong>muli tuorum,<br> <strong>So</strong>lve polluti<br> <strong>La</strong>bii reatum,<br> <strong>S</strong>ancte <strong>J</strong>ohannes.</p> </blockquote> <p><img src="./images/Ut_Queant_Laxis_MT.png" alt="Ut_Queant_Laxis"></p> <hr> <h2 id="square-notation" tabindex="-1">Square notation <a class="header-anchor" href="#square-notation" aria-label="Permalink to "Square notation""></a></h2> <p><img src="./images/gregorian.jpg" alt="Gregorian chant"></p> <youtube-embed video="QuRrd35kvUo" /><p>By the 13th century, the neumes of Gregorian chant were usually written in square notation on a staff with four lines and three spaces and a clef marker, as in the 14th–15th century Graduale Aboense shown here. In square notation, small groups of ascending notes on a syllable are shown as stacked squares, read from bottom to top, while descending notes are written with diamonds read from left to right. In melismatic chants, in which a syllable may be sung to a large number of notes, a series of smaller such groups of neumes are written in succession, read from left to right. A special symbol called the custos, placed at the end of a system, showed which pitch came next at the start of the following system. Special neumes such as the oriscus, quilisma, and liquescent neumes, indicate particular vocal treatments for these notes. This system of square notation is standard in modern chantbooks.</p> <p><img src="./images/Graduale_Aboense.jpg" alt=""></p> <hr> <h2 id="mensural-notation-of-renaissance-c-1400–1600" tabindex="-1">Mensural notation of Renaissance c. 1400–1600 <a class="header-anchor" href="#mensural-notation-of-renaissance-c-1400–1600" aria-label="Permalink to "Mensural notation of Renaissance c. 1400–1600""></a></h2> <p>Mensural notation is the musical notation system used for European vocal polyphonic music from the later part of the 13th century until about 1600. The term "mensural" refers to the ability of this system to describe precisely measured rhythmic durations in terms of numerical proportions between note values. Its modern name is inspired by the terminology of medieval theorists, who used terms like musica mensurata ("measured music") or cantus mensurabilis ("measurable song") to refer to the rhythmically defined polyphonic music of their age, as opposed to musica plana or musica choralis, i.e., Gregorian plainchant.</p> <p><img src="./images/Barbireau_illum.jpg" alt="Early 16th-century manuscript in mensural notation, containing a Kyrie by J. Barbireau."></p> <p>Mensural notation grew out of an earlier, more limited method of notating rhythms in terms of fixed repetitive patterns, the so-called rhythmic modes, which were developed in France around 1200. An early form of mensural notation was first described and codified in the treatise Ars cantus mensurabilis ("The art of measured chant") by Franco of Cologne (c. 1280). A much expanded system allowing for greater rhythmic complexity was introduced in France with the stylistic movement of the Ars nova in the 14th century, while Italian 14th-century music developed its own, somewhat different variant. 
Around 1400, the French system was adopted across Europe, and became the standard form of notation of the Renaissance music of the 15th and 16th centuries. After around 1600, mensural notation gradually evolved into modern measure (or bar) notation.</p> <p><img src="./images/CordierColor.jpg" alt=""></p> <p>The decisive innovation of mensural notation was the systematic use of different note shapes to denote rhythmic durations that stood in well-defined, hierarchical numerical relations to each other. While less context dependent than notation in rhythmic modes, mensural notation differed from the modern system in that the values of notes were still somewhat context-dependent.</p> <p><img src="./images/medieval.jpg" alt=""></p> <hr> <h2 id="baroque-music-c-1580–1750" tabindex="-1">Baroque music c. 1580–1750 <a class="header-anchor" href="#baroque-music-c-1580–1750" aria-label="Permalink to "Baroque music c. 1580–1750""></a></h2> <p>Baroque music forms a major portion of the "classical music" canon, and is now widely studied, performed, and listened to. The term "baroque" comes from the Portuguese word barroco, meaning "misshapen pearl". Key composers of the Baroque era include Johann Sebastian Bach, Antonio Vivaldi, George Frideric Handel, Claudio Monteverdi, Domenico Scarlatti, Alessandro Scarlatti, Henry Purcell, Georg Philipp Telemann, Jean-Baptiste Lully, Jean-Philippe Rameau, Marc-Antoine Charpentier, Arcangelo Corelli, François Couperin, Giuseppe Tartini, Heinrich Schütz, Jan Pieterszoon Sweelinck, Dieterich Buxtehude, and others.</p> <p><img src="./images/baroque.jpg" alt=""></p> <p><img src="./images/ballard.jpg" alt="A music piece for lute by Robert II Ballard, 1612"></p> <blockquote> <p><img src="./images/Bachlut1.png" alt=""></p> <p><img src="./images/bach2.png" alt=""> J.S. Bach</p> </blockquote> <h2 id="galant-music-c-1720–1770" tabindex="-1">Galant music c. 1720–1770 <a class="header-anchor" href="#galant-music-c-1720–1770" aria-label="Permalink to "Galant music c. 1720–1770""></a></h2> <p>In music, galant refers to the style which was fashionable from the 1720s to the 1770s. This movement featured a return to simplicity and immediacy of appeal after the complexity of the late Baroque era. This meant simpler, more song-like melodies, decreased use of polyphony, short, periodic phrases, a reduced harmonic vocabulary emphasizing tonic and dominant, and a clear distinction between soloist and accompaniment. C. P. E. Bach and Daniel Gottlob Türk, who were among the most significant theorists of the late 18th century, contrasted the galant with the "learned" or "strict" styles.</p> <p><img src="./images/1698_Campra_L-Europe_Galante.jpg" alt=""></p> <hr> <h2 id="classical-period-c-1750–1820" tabindex="-1">Classical period c. 1750–1820 <a class="header-anchor" href="#classical-period-c-1750–1820" aria-label="Permalink to "Classical period c. 1750–1820""></a></h2> <p>Classical music has a lighter, clearer texture than Baroque music, but a more sophisticated use of form. It is mainly homophonic, using a clear melody line over a subordinate chordal accompaniment, but counterpoint was by no means forgotten, especially in liturgical vocal music and, later in the period, secular instrumental music. Variety and contrast within a piece became more pronounced than before and the orchestra increased in size, range, and power.</p> <blockquote> <p><img src="./images/Phantasie_fur_eine_Orgelwalze.jpg" alt=""> W.A. 
Mozart</p> </blockquote> <p>The harpsichord was replaced as the main keyboard instrument by the piano (or fortepiano). Unlike the harpsichord, which plucks strings with quills, pianos strike the strings with leather-covered hammers when the keys are pressed, which enables the performer to play louder or softer (hence the original name "fortepiano," literally "loud soft") and play with more expression; in contrast, the force with which a performer plays the harpsichord keys does not change the sound. Instrumental music was considered important by Classical period composers. The main kinds of instrumental music were the sonata, trio, string quartet, quintet, symphony (performed by an orchestra) and the solo concerto, which featured a virtuoso solo performer playing a solo work for violin, piano, flute, or another instrument, accompanied by an orchestra. Vocal music, such as songs for a singer and piano (notably the work of Schubert), choral works, and opera (a staged dramatic work for singers and orchestra) were also important during this period.</p> <p>The best-known composers from this period are Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven, and Franz Schubert; other notable names include Carl Philipp Emanuel Bach, Johann Christian Bach, Luigi Boccherini, Domenico Cimarosa, Joseph Martin Kraus, Muzio Clementi, Christoph Willibald Gluck, Carl Ditters von Dittersdorf, André Grétry, Pierre-Alexandre Monsigny, Leopold Mozart, Michael Haydn, Giovanni Paisiello, Johann Baptist Wanhal, François-André Danican Philidor, Niccolò Piccinni, Antonio Salieri, Georg Christoph Wagenseil, Georg Matthias Monn, Johann Georg Albrechtsberger, Mauro Giuliani, Christian Cannabich and the Chevalier de Saint-Georges. Beethoven is regarded either as a Romantic composer or a Classical period composer who was part of the transition to the Romantic era. Schubert is also a transitional figure, as were Johann Nepomuk Hummel, Luigi Cherubini, Gaspare Spontini, Gioachino Rossini, Carl Maria von Weber, Jan Ladislav Dussek and Niccolò Paganini. The period is sometimes referred to as the era of Viennese Classicism (German: Wiener Klassik), since Gluck, Haydn, Salieri, Mozart, Beethoven, and Schubert all worked in Vienna.</p> <p><img src="./images/alt-cover-2.jpg" alt=""></p> <hr> <h2 id="romantic-period-c-1800–1910" tabindex="-1">Romantic period c. 1800–1910 <a class="header-anchor" href="#romantic-period-c-1800–1910" aria-label="Permalink to "Romantic period c. 1800–1910""></a></h2> <p>Romantic music is a stylistic movement in Western Classical music associated with the period of the 19th century commonly referred to as the Romantic era (or Romantic period). It is closely related to the broader concept of Romanticism—the intellectual, artistic and literary movement that became prominent in Europe from approximately 1800 until 1910.</p> <p>Romantic composers sought to create music that was individualistic, emotional, dramatic and often programmatic; reflecting broader trends within the movements of Romantic literature, poetry, art, and philosophy. Romantic music was often ostensibly inspired by (or else sought to evoke) non-musical stimuli, such as nature, literature, poetry, or the fine arts.</p> <blockquote> <p><img src="./images/beethoven.jpg" alt=""> L.V. 
Beethoven</p> </blockquote> <p>Influential composers of the early Romantic era include Adolphe Adam, Daniel Auber, Ludwig van Beethoven, Hector Berlioz, François-Adrien Boieldieu, Frédéric Chopin, Sophia Dussek, Ferdinand Hérold, Mikhail Glinka, Fanny Mendelssohn, Felix Mendelssohn, John Field, Ignaz Moscheles, Otto Nicolai, Gioachino Rossini, Ferdinand Ries, Vincenzo Bellini, Franz Berwald, Luigi Cherubini, Carl Czerny, Gaetano Donizetti, Johann Nepomuk Hummel, Carl Loewe, Niccolò Paganini, Giacomo Meyerbeer, Anton Reicha, Franz Schubert, Clara Schumann, Robert Schumann, Louis Spohr, Gaspare Spontini, Ambroise Thomas and Carl Maria von Weber. Later nineteenth-century composers would appear to build upon certain early Romantic ideas and musical techniques, such as the use of extended chromatic harmony and expanded orchestration. Such later Romantic composers include Albéniz, Bruckner, Granados, Smetana, Brahms, MacDowell, Tchaikovsky, Parker, Mussorgsky, Dvořák, Borodin, Delius, Liszt, Wagner, Mahler, Goldmark, Richard Strauss, Verdi, Puccini, Bizet, Lalo, Rimsky-Korsakov, Schoenberg, Sibelius, Stanford, Parry, Scriabin, Elgar, Grieg, Saint-Saëns, Fauré, Rachmaninoff, Glazunov, Chausson and Franck.</p> <blockquote> <p><img src="./images/chopin.jpg" alt=""> F. Chopin</p> </blockquote> <hr> <h2 id="modernism-c-1890–1975" tabindex="-1">Modernism c. 1890–1975 <a class="header-anchor" href="#modernism-c-1890–1975" aria-label="Permalink to "Modernism c. 1890–1975""></a></h2> <p>modernism is an aesthetic stance underlying the period of change and development in musical language that occurred around the turn of the 20th century, a period of diverse reactions in challenging and reinterpreting older categories of music, innovations that led to new ways of organizing and approaching harmonic, melodic, sonic, and rhythmic aspects of music, and changes in aesthetic worldviews in close relation to the larger identifiable period of modernism in the arts of the time. The operative word most associated with it is "innovation". Its leading feature is a "linguistic plurality", which is to say that no one music genre ever assumed a dominant position.</p> <p><img src="./images/satie.png" alt=""></p> <blockquote> <p>Inherent within musical modernism is the conviction that music is not a static phenomenon defined by timeless truths and classical principles, but rather something which is intrinsically historical and developmental. While belief in musical progress or in the principle of innovation is not new or unique to modernism, such values are particularly important within modernist aesthetic stances. — Edward Campbell (2010, p. 37)</p> </blockquote> <p>Examples include the celebration of Arnold Schoenberg's rejection of tonality in chromatic post-tonal and twelve-tone works and Igor Stravinsky's move away from metrical rhythm.</p> <p><img src="./images/Stravinsky-Piano-Rag-Music-1.jpg" alt=""></p> <h3 id="serialism-and-dodecaphony" tabindex="-1">Serialism and dodecaphony <a class="header-anchor" href="#serialism-and-dodecaphony" aria-label="Permalink to "Serialism and dodecaphony""></a></h3> <p>In music, serialism is a method of composition using series of pitches, rhythms, dynamics, timbres or other musical elements. 
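<p>To make "a series of ordered elements" concrete before the historical details below, here is a minimal sketch (not any particular composer's working method): a tone row is treated as a permutation of the 12 pitch classes, from which the standard retrograde, inversion and transposition forms are derived.</p>

```js
// Illustrative sketch: a twelve-tone row as an ordering of the 12 pitch classes
// (0 = C, 1 = C#, ... 11 = B) and its classic transformations.
const mod12 = n => ((n % 12) + 12) % 12

function randomRow () {
  const row = [...Array(12).keys()]
  for (let i = row.length - 1; i > 0; i--) {     // Fisher–Yates shuffle
    const j = Math.floor(Math.random() * (i + 1))
    const t = row[i]
    row[i] = row[j]
    row[j] = t
  }
  return row
}

const retrograde = row => [...row].reverse()
const invert = row => row.map(pc => mod12(2 * row[0] - pc))   // mirror around the first note
const transpose = (row, n) => row.map(pc => mod12(pc + n))

const p0 = randomRow()
console.log('P0', p0.join(' '))                // prime form
console.log('R0', retrograde(p0).join(' '))    // retrograde
console.log('I0', invert(p0).join(' '))        // inversion
console.log('P5', transpose(p0, 5).join(' '))  // transposed up a perfect fourth
```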
Serialism began primarily with Arnold Schoenberg's twelve-tone technique, though some of his contemporaries were also working to establish serialism as a form of post-tonal thinking.</p> <p><img src="./images/schoenberg.png" alt=""></p> <p>Twelve-tone technique orders the twelve notes of the chromatic scale, forming a row or series and providing a unifying basis for a composition's melody, harmony, structural progressions, and variations. Other types of serialism also work with sets, collections of objects, but not necessarily with fixed-order series, and extend the technique to other musical dimensions (often called "parameters"), such as duration, dynamics, and timbre.</p> <blockquote> <p><img src="./images/messian.jpg" alt=""> Olivier Messiaen: "La Ville d'En-Haut", for piano and small orchestra (1987), manuscript page.</p> </blockquote> <p>Serialism of the first type is most specifically defined as a structural principle according to which a recurring series of ordered elements (normally a set—or row—of pitches or pitch classes) is used in order or manipulated in particular ways to give a piece unity. "Serial" is often broadly used to describe all music written in what Schoenberg called "The Method of Composing with Twelve Notes related only to one another", or dodecaphony, and the methods that evolved from it. It is sometimes used more specifically to apply only to music in which at least one element other than pitch is treated as a row or series.</p> <blockquote> <p><img src="./images/xenakis.jpg" alt=""> I. Xenakis</p> </blockquote> <p>The twelve-tone technique, also known as dodecaphony or twelve-tone serialism, is a method of musical composition first devised by Austrian composer Josef Matthias Hauer, who published his "law of the twelve tones" in 1919. In 1923, Arnold Schoenberg (1874–1951) developed his own, better-known version of 12-tone technique, which became associated with the "Second Viennese School" composers, who were the primary users of the technique in the first decades of its existence. The technique is a means of ensuring that all 12 notes of the chromatic scale are sounded as often as one another in a piece of music while preventing the emphasis of any one note through the use of tone rows, orderings of the 12 pitch classes. All 12 notes are thus given more or less equal importance, and the music avoids being in a key. Over time, the technique increased greatly in popularity and eventually became widely influential on 20th-century composers. Many important composers who had originally not subscribed to or actively opposed the technique, such as Aaron Copland and Igor Stravinsky, eventually adopted it in their music.</p> <h3 id="aleatoric-music" tabindex="-1">Aleatoric music <a class="header-anchor" href="#aleatoric-music" aria-label="Permalink to "Aleatoric music""></a></h3> <p>Aleatoric music (also aleatory music or chance music; from the Latin word alea, meaning "dice") is music in which some element of the composition is left to chance, and/or some primary element of a composed work's realization is left to the determination of its performer(s). The term is most often associated with procedures in which the chance element involves a relatively limited number of possibilities.</p> <blockquote> <p><img src="./images/visconti_example.jpg" alt=""> L. Visconti</p> </blockquote> <p>Some writers do not make a distinction between aleatory, chance, and indeterminacy in music, and use the terms interchangeably. 
From this point of view, indeterminate or chance music can be divided into three groups: (1) the use of random procedures to produce a determinate, fixed score, (2) mobile form, and (3) indeterminate notation, including graphic notation and texts.</p> <blockquote> <p><img src="./images/Stockhausen.jpg" alt=""> Karlheinz Stockhausen’s Helicopter String Quartet. Watch an excerpt of the work performed live (in helicopters) from the 2003 Salzberg Festival.</p> </blockquote> <p>The first group includes scores in which the chance element is involved only in the process of composition, so that every parameter is fixed before their performance. In John Cage's Music of Changes (1951), for example, the composer selected duration, tempo, and dynamics by using the I Ching, an ancient Chinese book which prescribes methods for arriving at random numbers. Because this work is absolutely fixed from performance to performance, Cage regarded it as an entirely determinate work made using chance procedures. On the level of detail, Iannis Xenakis used probability theories to define some microscopic aspects of Pithoprakta (1955–56), which is Greek for "actions by means of probability". This work contains four sections, characterized by textural and timbral attributes, such as glissandi and pizzicati. At the macroscopic level, the sections are designed and controlled by the composer while the single components of sound are controlled by mathematical theories.</p> <blockquote> <p><img src="./images/xenakis-1.jpg" alt=""> Pithoprakta by I. Xenakis</p> </blockquote> <p>In the second type of indeterminate music, chance elements involve the performance. Notated events are provided by the composer, but their arrangement is left to the determination of the performer. Karlheinz Stockhausen's Klavierstück XI (1956) presents nineteen events which are composed and notated in a traditional way, but the arrangement of these events is determined by the performer spontaneously during the performance. In Earle Brown's Available forms II (1962), the conductor is asked to decide the order of the events at the very moment of the performance.</p> <p><img src="./images/available-forms.png" alt=""></p> <p><img src="./images/Karlheinz-Stockhausens-Klavierstueck-XI.png" alt=""></p> <youtube-embed video="NxLMtP8ejKA" /><blockquote> <p><img src="./images/lutoslawski.gif" alt=""> Witold Roman Lutosławski (25 January 1913 – 7 February 1994) was a Polish composer and conductor.</p> </blockquote> <p>The greatest degree of indeterminacy is reached by the third type of indeterminate music, where traditional musical notation is replaced by visual or verbal signs suggesting how a work can be performed, for example in graphic score pieces.</p> <blockquote> <p><img src="./images/brown-dec.jpg" alt=""> Earle Brown's December 1952</p> </blockquote> <p>Earle Brown's December 1952 (1952) shows lines and rectangles of various lengths and thicknesses that can read as loudness, duration, or pitch. The performer chooses how to read them. Another example is Morton Feldman's Intersection No. 2 (1951) for piano solo, written on coordinate paper.</p> <p><img src="./images/feldman-proj.jpg" alt=""></p> <p>Time units are represented by the squares viewed horizontally, while relative pitch levels of high, middle, and low are indicated by three vertical squares in each row. The performer determines what particular pitches and rhythms to play.</p> <blockquote> <p><img src="./images/drawing-xenakis.png" alt=""> I. 
Xenakis</p> </blockquote> <h3 id="john-cage" tabindex="-1">John Cage <a class="header-anchor" href="#john-cage" aria-label="Permalink to "John Cage""></a></h3> <p>John Milton Cage Jr. (September 5, 1912 – August 12, 1992) was an American composer, music theorist, artist, and philosopher. A pioneer of indeterminacy in music, electroacoustic music, and non-standard use of musical instruments, Cage was one of the leading figures of the post-war avant-garde.</p> <p><img src="./images/john-cage-1.jpg" alt=""></p> <p><img src="./images/john-cage-2.jpeg" alt=""></p> <p><img src="./images/john-cage-3.jpg" alt=""></p> <h3 id="earle-brown" tabindex="-1">Earle Brown <a class="header-anchor" href="#earle-brown" aria-label="Permalink to "Earle Brown""></a></h3> <p>Earle Brown (December 26, 1926 – July 2, 2002) was an American composer who established his own formal and notational systems. Brown was the creator of "open form," a style of musical construction.</p> <p><img src="./images/earle-brown-oh-k.jpg" alt=""></p> <p>Although Brown precisely notated compositions throughout his career using traditional notation, he also was an inventor and early practitioner of various innovative notations.</p> <p><img src="./images/cover-Folio_II_c_1980.png" alt=""></p> <p>In Twenty-Five Pages, and in other works, Brown used what he called "time notation" or "proportional notation" where rhythms were indicated by their horizontal length and placement in relation to each other and were to be interpreted flexibly. However, by Modules I and II (1966), Brown more often used stemless note heads which could be interpreted with even greater flexibility.</p> <p>In 1959, with Hodograph I, Brown sketched the contour and character abstractly in what he called "implicit areas" of the piece. This graphic style was more gestural and calligraphic than the geometric abstraction of December 1952. Beginning with Available Forms I, Brown used this graphic notation on the staff in some sections of the score.</p> <p><img src="./images/parallels.jpg" alt="parallels"></p> <p><img src="./images/notes-3d.jpg" alt="3d notes"></p> ]]></content:encoded> <enclosure url="https://chromatone.center/hooks_and_banners.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Additive color models]]></title> <link>https://chromatone.center/theory/color/models/additive/</link> <guid>https://chromatone.center/theory/color/models/additive/</guid> <pubDate>Wed, 18 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[The colors created by combining colored lights]]></description> <content:encoded><![CDATA[<p>A color model is additive in the sense that the three light beams are added together, and their light spectra add, wavelength for wavelength, to make the final color's spectrum. Because of properties, these three colors create white, this is in stark contrast to physical colors, such as dyes which create black when mixed.</p> <h2 id="interactive-rgb-color-mixer" tabindex="-1">Interactive RGB color mixer <a class="header-anchor" href="#interactive-rgb-color-mixer" aria-label="Permalink to "Interactive RGB color mixer""></a></h2> <p>Drag any of the color components up or right to increase its value.</p> <color-rgb /><p>Zero intensity for each component gives the darkest color (no light, considered the black), and full intensity of each gives a white; the quality of this white depends on the nature of the primary light sources, but if they are properly balanced, the result is a neutral white matching the system's white point. 
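<p>In an idealized three-component model, this "adding of spectra" reduces to adding the intensity of each primary channel by channel. A minimal sketch (the normalization to 0–1 and the hard clip at full intensity are simplifications, not a display model):</p>

```js
// Illustrative sketch: additive mixing as per-channel summation of light intensities,
// with each channel normalized to 0–1 and clipped at full intensity.
const addLights = (...colors) =>
  [0, 1, 2].map(ch => Math.min(1, colors.reduce((sum, c) => sum + c[ch], 0)))

const red = [1, 0, 0]
const green = [0, 1, 0]
const blue = [0, 0, 1]

console.log(addLights(red, green))        // [1, 1, 0] -> yellow
console.log(addLights(red, blue))         // [1, 0, 1] -> magenta
console.log(addLights(green, blue))       // [0, 1, 1] -> cyan
console.log(addLights(red, green, blue))  // [1, 1, 1] -> white
```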
When the intensities for all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference of the strongest and weakest of the intensities of the primary colors employed.</p> <h3 id="rgb" tabindex="-1">RGB <a class="header-anchor" href="#rgb" aria-label="Permalink to "RGB""></a></h3> <p>To form a color with RGB, three light beams (one red, one green, and one blue) must be superimposed (for example by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture.</p> <p>The RGB color model itself does not define what is meant by red, green, and blue colorimetrically, and so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green, and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB.</p> <p><img src="./rgb_color_solid_cube.png" alt=""></p> <p>Use of the three primary colors is not sufficient to reproduce all colors; only colors within the color triangle defined by the chromaticities of the primaries can be reproduced by additive mixing of non-negative amounts of those colors of light.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/rgb.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Harmony]]></title> <link>https://chromatone.center/theory/harmony/</link> <guid>https://chromatone.center/theory/harmony/</guid> <pubDate>Wed, 18 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Chord progressions and accompaniment]]></description> <content:encoded><![CDATA[<p>What is <a href="./study/">Harmony</a>?</p> <p>May two or more scales <a href="./polytonality/">Sound nice simultaneously</a>? Or how to <a href="./modulation/">Transition from one key to another</a>? Let's explore all the possible <a href="./movement/">Harmonic movements</a> in different frames like <a href="./chord-scale/">Chord-scale system</a> or the <a href="./non-chord/">Chord and nonchord tones approach</a></p> <h2 id="harmonic-rhythm" tabindex="-1">Harmonic rhythm <a class="header-anchor" href="#harmonic-rhythm" aria-label="Permalink to "Harmonic rhythm""></a></h2> <p>In music theory, harmonic rhythm, also known as harmonic tempo, is the rate at which the chords change (or progress) in a musical composition, in relation to the rate of notes. Thus a passage in common time with a stream of sixteenth notes and chord changes every measure has a slow harmonic rhythm and a fast surface or "musical" rhythm (16 notes per chord change), while a piece with a trickle of half notes and chord changes twice a measure has a fast harmonic rhythm and a slow surface rhythm (1 note per chord change). Harmonic rhythm may be described as strong or weak.</p> <p>According to William Russo harmonic rhythm is, "the duration of each different chord...in a succession of chords." According to Joseph Swain (2002 p. 4) harmonic rhythm, "is simply that perception of rhythm that depends on changes in aspects of harmony." According to Walter Piston (1944), "the rhythmic life contributed to music by means of the underlying changes of harmony. 
The pattern of the harmonic rhythm of a given piece of music, derived by noting the root changes as they occur, reveals important and distinctive features affecting the style and texture."</p> <p>Strong harmonic rhythm is characterized by strong root progressions and emphasis of root positions, weak contrapuntal bass motion, strong rhythmic placement in the measure (especially downbeat), and relatively longer duration.</p> <p>"The 'fastness' or 'slowness' of harmonic rhythm is not absolute, but relative," and thus analysts compare the overall pace of harmonic rhythm from one piece to another, or the amount of variation of harmonic rhythm within a piece. For example, a key stylistic difference between Baroque music and Classical-period music is that the latter exhibits much more variety of harmonic rhythm, even though the harmony itself is less complex.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/omar-flores.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[String overtones]]></title> <link>https://chromatone.center/practice/sound/overtones/</link> <guid>https://chromatone.center/practice/sound/overtones/</guid> <pubDate>Thu, 12 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[String standing waves interactive visualization]]></description> <content:encoded><![CDATA[<SoundOvertones /><p>A string fixed from both ends produces a harmonic set of incremental frequency partials, also called harmonics.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/overtones.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Chord-scale system]]></title> <link>https://chromatone.center/theory/harmony/chord-scale/</link> <guid>https://chromatone.center/theory/harmony/chord-scale/</guid> <pubDate>Thu, 12 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Harmony organization using different scales for the sounding chord]]></description> <content:encoded><![CDATA[<p>The chord-scale system is a method of matching, from a list of possible chords, a list of possible scales. The system has been widely used since the 1970s and is "generally accepted in the jazz world today".</p> <p>However, the majority of older players used the chord tone/chord arpeggio method. The system is an example of the difference between the treatment of dissonance in jazz and classical harmony: "Classical treats all notes that don't belong to the chord...as potential dissonances to be resolved...Non-classical harmony just tells you which note in the scale to [potentially] avoid..., meaning that all the others are okay".</p> <p>The chord-scale system may be compared with other common methods of improvisation: first, the older traditional chord tone/chord arpeggio method, and second, the approach where one scale on one root note is used throughout all chords in a progression (for example the blues scale on A for all chords of the blues progression: A7 E7 D7). In contrast, in the chord-scale system, a different scale is used for each chord in the progression (for example Mixolydian scales on A, E, and D for chords A7, E7, and D7, respectively). Improvisation approaches may be mixed, such as using "the blues approach" for a section of a progression and using the chord-scale system for the rest.
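<p>A tiny sketch of that per-chord mapping (the lookup below is purely illustrative: it covers only the dominant-seventh case from the example above and falls back to the plain major scale):</p> <pre><code>// Hypothetical chord-to-scale lookup in the spirit of the chord-scale system.
const scaleForQuality: Record<string, string> = {
  '7': 'Mixolydian', // a dominant seventh chord pairs with Mixolydian, the fifth mode of the major scale
};

function scaleFor(chord: string): string {
  const root = chord.match(/^[A-G][#b]?/)?.[0] ?? chord;
  const quality = chord.slice(root.length);
  return `${root} ${scaleForQuality[quality] ?? 'major'}`;
}

const scales = ['A7', 'E7', 'D7'].map(scaleFor);
// -> ['A Mixolydian', 'E Mixolydian', 'D Mixolydian']
</code></pre>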
A dominant seventh chord is normally paired with the Mixolydian scale, the fifth mode of the major scale.</p> <p>The scales commonly used today consist of the seven modes of the diatonic scale, the seven modes of the melodic minor scale, the diminished scales, the whole-tone scale, and pentatonic and bebop scales. Students now typically learn as many as twenty-one scales, which may be compared with the four scales commonly used in jazz in the 1940s (major, minor, mixolydian, and blues) and the two later added by bebop (diminished and whole-tone) to the tonal resources of jazz. The corresponding scale for the C7♯11 chord, with added ninth and thirteenth tensions, is C lydian dominant, the fourth mode of the ascending melodic minor.</p> <p>Originating with George Russell's Lydian Chromatic Concept of Tonal Organization (1959), the chord-scale system is now the "most widely used method for teaching jazz improvisation in college". This approach is found in instructional books including Jerry Bergonzi's Inside Improvisation series and characterized by the highly influential Play-A-Long series by Jamey Aebersold. Aebersold's materials, and their orientation to learning by applying theory over backing tracks, also provided the first known publication of the blues scale in the 1970 revision of Volume 1. There are differences of approach within the system. For example, Russell associated the C major chord with the lydian scale, while teachers including John Mehegan, David Baker, and Mark Levine teach the major scale as the best match for a C major chord.</p> <p>Miles Davis's Lydian Chromatic Concept-influenced first modal jazz album, Kind of Blue, is often given as an example of chord-scale relationships in practice.</p> <p>The chord-scale system provides familiarity with typical chord progressions, technical facility from practicing scales and chord arpeggios, and generally succeeds in reducing "clams", or notes heard as mistakes (through providing note-choice possibilities for the chords of progressions), and building "chops", or virtuosity.</p> <p>Disadvantages include the exclusion of non-chord tones characteristic of bop and free styles, the "in-between" sounds featured in the blues, and consideration of directionality created between the interaction of a solo and a chord progression: "The disadvantages of this system may become clear when students begin to question why their own playing does not sound like such outstanding linear-oriented players as Charlie Parker, Sonny Stitt or Johnny Griffin (or, for that matter, the freer jazz stylists)":</p> <blockquote> <p>The chord-scale method's 'vertical' approach...is 'static,' offering little assistance in generating musical direction through the movement of chords. Hence the importance of knowing the older chord tone approach. But...Swing- and bop-era songforms operate teleologically with regard to harmony. Highly regarded soloists in those styles typically imply the movements of chords...either by creating lines that voice-lead smoothly from one chord to another or by confounding the harmony pull through anticipating or delaying harmonic resolution.</p> </blockquote> <p>Essential considerations of a style such as Charlie Parker's, including "rhythm, phrase shape and length, dynamics, and tone color," as well as "passing tones, appoggiatura, and 'blue notes'" are unaddressed.
This appears to have led educators to emphasize a specific repertoire of pieces most appropriate to the chord-scale system, such as John Coltrane's "Giant Steps", while excluding others, such as Coltrane's later styles of composition, and producing generations of "pattern" players among college-educated musicians.</p> <p><a href="https://en.wikipedia.org/wiki/Avoid_note" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Avoid_note</a></p> ]]></content:encoded> </item> <item> <title><![CDATA[Classic European music staff notation]]></title> <link>https://chromatone.center/theory/notes/staff/</link> <guid>https://chromatone.center/theory/notes/staff/</guid> <pubDate>Thu, 12 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[How the sheet notes are written and read from classic era till today]]></description> <content:encoded><![CDATA[<p><button :style="{background: state.colorize ? 'linear-gradient(#e66465, #9198e5)': ''}" class="button fixed right-16 bottom-4 z-20000 p-2 bg-light-400 dark-bg-dark-400 rounded-xl shadow active_bg-red-100" @click="state.colorize = !state.colorize">Colorize notes</button></p> <p><img src="./kvintcirklen.png" alt=""></p> <p>Standard notation is used to demonstrate how a piece is played. Unlike tablature, it applies to any instrument. It indicates key signatures, time signatures, rhythms, tempo, dynamics (how loud each instrument should be), and so on. A highly trained musician can sometimes take a piece of sheet music written in standard notation, look it over once or twice, and then play the song as though he or she had been playing it his or her whole life.</p> <p>For instance, below is the C major scale, including a C at the end, in standard notation.</p> <p>The standard notation staff has five lines and four spaces. From bottom to top the five lines are E G B D F, which is commonly memorized as an acrostic such as:</p> <ul> <li><strong>E</strong>very</li> <li><strong>G</strong>ood</li> <li><strong>B</strong>oy</li> <li><strong>D</strong>oes</li> <li><strong>F</strong>ine</li> </ul> <p>The four spaces between the five lines are F, A, C, and E, which should surely be easy for an English speaker to remember, because together they spell "face".</p> <p>But what about the first two notes, which are below the staff? Well, the second note is just below the E, so it must be D. The first is below that, so it must be C. It also has a line through it to indicate it is placed on an "invisible" line. This line is called a ledger line. A note could be placed below this ledger line, which would be B. Or a note could be placed below that, on another ledger line, and it would be A. 
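<p>That counting is just the musical alphabet applied cyclically; a small sketch (assuming the treble clef, with position 0 as the bottom line E):</p> <pre><code>// Walk the musical alphabet from the bottom line of the treble staff (E).
// Positive positions step up through lines and spaces, negative ones step below the staff.
const letters = ['A', 'B', 'C', 'D', 'E', 'F', 'G'];

function staffNote(position: number): string {
  const e = letters.indexOf('E'); // bottom line of the treble staff
  return letters[(((e + position) % 7) + 7) % 7];
}

[0, 1, 2, 3, 4].map(staffNote); // ['E', 'F', 'G', 'A', 'B'] – lines and spaces going up
[-1, -2, -3].map(staffNote);    // ['D', 'C', 'B'] – the steps below the staff, with C sitting on a ledger line
</code></pre>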
Notes can continue to be placed on ledger lines above and below the staff infinitely, but extending too far from the staff is impractical, because the pitches will become very hard to read.</p> <h2 id="clefs" tabindex="-1">Clefs <a class="header-anchor" href="#clefs" aria-label="Permalink to "Clefs""></a></h2> <ol> <li>Treble (G) <abc-render :abc="'K:treble\nG8'" /></li> <li>Bass (F) <abc-render :abc="'K:bass\nF,8'" /></li> <li>Baritone (F) <abc-render :abc="'K:bass3\nF,8'" /></li> <li>Tenor (C) <abc-render :abc="'K:tenor\nc,8'" /></li> <li>Alto (C) <abc-render :abc="'K:alto\nc,8'" /></li> <li>Mezzosoprano (C) <abc-render :abc="'K:alto2\nc,8'" /></li> <li>Soprano (C) <abc-render :abc="'K:alto1\nc,8'" /></li> </ol> <h2 id="note-pitch" tabindex="-1">Note pitch <a class="header-anchor" href="#note-pitch" aria-label="Permalink to "Note pitch""></a></h2> <h3 id="natural-g" tabindex="-1">Natural (G) <a class="header-anchor" href="#natural-g" aria-label="Permalink to "Natural (G)""></a></h3> <abc-render :abc="'G8'" /><abc-render :abc="'K:Gb\n=G8'" /><h3 id="sharp-g" tabindex="-1">Sharp (G#) <a class="header-anchor" href="#sharp-g" aria-label="Permalink to "Sharp (G#)""></a></h3> <abc-render :abc="'^G8'" /><abc-render :abc="'K:Gb\n^^G8'" /><h3 id="flat-gb" tabindex="-1">Flat (Gb) <a class="header-anchor" href="#flat-gb" aria-label="Permalink to "Flat (Gb)""></a></h3> <abc-render :abc="'_G8'" /><abc-render :abc="'K:C#\n__G8'" /><h3 id="ascending" tabindex="-1">Ascending <a class="header-anchor" href="#ascending" aria-label="Permalink to "Ascending""></a></h3> <p>A A# B C C# D D# E F F# G G# A</p> <abc-render responsive :abc="'A,^A,B,C^CD^DEF^FG^GA'" /><h3 id="descending" tabindex="-1">Descending <a class="header-anchor" href="#descending" aria-label="Permalink to "Descending""></a></h3> <p>A Ab G Gb F E Eb D Db C B Bb A</p> <abc-render responsive :abc="`a,_a,G_GFE_ED_DCB,_B,A,`" /><p><img src="./chromatic-c.jpg" alt=""> <img src="./chromatic-Eb.jpg" alt=""></p> <h2 id="note-values-durations" tabindex="-1">Note values (durations) <a class="header-anchor" href="#note-values-durations" aria-label="Permalink to "Note values (durations)""></a></h2> <p>Whole note = 2 half notes = 4 quarter notes = 8 eighth notes = 16 sixteenth notes</p> <abc-render responsive :abc="`M:4/4\n|G8|G4A4|G2A2B2c2|GDGDGDGD|G/D/G/D/G/D/G/D/G/D/G/D/G/D/G/D/|`" /><h3 id="dotted-notes" tabindex="-1">Dotted notes <a class="header-anchor" href="#dotted-notes" aria-label="Permalink to "Dotted notes""></a></h3> <abc-render responsive :abc="`M:4/4\n|(G12|G4)|G5G2|G3GG3G|G3/2G/2G3/2G/2G3/2G/2G3/2G/2|`" /><h3 id="triplets" tabindex="-1">Triplets <a class="header-anchor" href="#triplets" aria-label="Permalink to "Triplets""></a></h3> <abc-render responsive :abc="`M:4/4\n|(3G4A4B4|(3G2A2B2 (3G2A2B2| (3GAB (3GAB (3GAB (3GAB|`" /><h3 id="other-tuplets" tabindex="-1">Other tuplets <a class="header-anchor" href="#other-tuplets" aria-label="Permalink to "Other tuplets""></a></h3> <abc-render responsive :abc="`M:4/4\n|(5G2A2B2c2d2|(7CDEFGAB|`" /><h3 id="rests" tabindex="-1">Rests <a class="header-anchor" href="#rests" aria-label="Permalink to "Rests""></a></h3> <abc-render responsive :abc="`M:4/4\n|z8|z4z4|z2z2z2z2|zzzzzzzz|z/z/z/z/z/z/z/z/z/z/z/z/z/z/z/z/|`" /><p><img src="./note-values-and-rests.png" alt=""></p> <blockquote> <p><img src="./Bachlut1.png" alt=""> J.S.Bach Prelude</p> </blockquote> <h3 id="alexander-scriabin-piano-concerto-in-f-sharp-minor-op-20" tabindex="-1">Alexander Scriabin - Piano Concerto in F sharp minor, Op. 
20 <a class="header-anchor" href="#alexander-scriabin-piano-concerto-in-f-sharp-minor-op-20" aria-label="Permalink to "Alexander Scriabin - Piano Concerto in F sharp minor, Op. 20""></a></h3> <youtube-embed video="F734PyD3NAw" /><h2 id="vector-render-of-staff-notation-by-abcjs" tabindex="-1">Vector render of staff notation by ABCjs <a class="header-anchor" href="#vector-render-of-staff-notation-by-abcjs" aria-label="Permalink to "Vector render of staff notation by ABCjs""></a></h2> <p>Note colorization is very useful to build connections between classic and Chromatone music theory visualizations.</p> <abc-render responsive :abc="minuet" /><p><a href="./../computer/abc/">Play with the ABC notations editor</a></p> <p><a href="./sight-reading/">Sight reading</a></p> <p><a href="./evolution/">European tradition</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/kvintcirklen.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Color models]]></title> <link>https://chromatone.center/theory/color/models/</link> <guid>https://chromatone.center/theory/color/models/</guid> <pubDate>Tue, 10 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Different ways to measure and quantify colors]]></description> <content:encoded><![CDATA[<p><img src="./colors-exp-1.svg" alt=""></p> <p><img src="./color-models.svg" alt=""></p> <p>Let's explore <a href="./additive/">Additive</a>, <a href="./subtractive/">Subtractive</a> and <a href="./perceptual/">Perceptual</a> color models deeper.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/color-models.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Nonchord tones]]></title> <link>https://chromatone.center/theory/harmony/non-chord/</link> <guid>https://chromatone.center/theory/harmony/non-chord/</guid> <pubDate>Tue, 10 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Notes in a piece of music or song that are not part of the chord set out by the harmonic framework]]></description> <content:encoded><![CDATA[<h2 id="non-chord-tones" tabindex="-1">Non-chord tones <a class="header-anchor" href="#non-chord-tones" aria-label="Permalink to "Non-chord tones""></a></h2> <p>A nonchord tone (NCT), nonharmonic tone, or embellishing tone is a note in a piece of music or song that is not part of the implied or expressed chord set out by the harmonic framework. In contrast, a chord tone is a note that is a part of the functional chord (see: factor (chord)). Non-chord tones are most often discussed in the context of the common practice period of classical music, but they can be used in the analysis of other types of tonal music as well, such as Western popular music.</p> <p>Nonchord tones are often categorized as accented non-chord tones and unaccented non-chord tones depending on whether the dissonance occurs on an accented or unaccented beat (or part of a beat).</p> <p>Over time, some musical styles assimilated chord types outside of the common-practice style. In these chords, tones that might normally be considered nonchord tones are viewed as chord tones, such as the seventh of a minor seventh chord. For example, in 1940s-era bebop jazz, an F♯ played with a C 7 chord would be considered a chord tone if the chord were analyzed as C7(♯11). 
In European classical music, "[t]he greater use of dissonance from period to period as a result of the dialectic of linear/vertical forces led to gradual normalization of ninth, eleventh, and thirteenth chords [in analysis and theory]; each additional non-chord tone above the foundational triad became frozen into the chordal mass."</p> <h2 id="theory" tabindex="-1">Theory <a class="header-anchor" href="#theory" aria-label="Permalink to "Theory""></a></h2> <p>Chord and nonchord tones are defined by their membership (or lack of membership) in a chord: "The pitches which make up a chord are called chord-tones: any other pitches are called non-chord-tones." They are also defined by the time at which they sound: "Nonharmonic tones are pitches that sound along with a chord but are not chord pitches." For example, if an excerpt from a piece of music implies or uses a C-major chord, then the notes C, E and G are members of that chord, while any other note played at that time (e.g., notes such as F♯) is a nonchord tone. Such tones are most obvious in homophonic music but occur at least as frequently in contrapuntal music.</p> <p>According to Music in Theory and Practice, "Most nonharmonic tones are dissonant and create intervals of a second, fourth or seventh", which are required to resolve to a chord tone in conventional ways. If the note fails to resolve until the next change of harmony, it may instead create a seventh chord or extended chord. While theoretically in a three-note chord there are nine possible nonchord tones in equal temperament, in practice nonchord tones are usually in the prevailing key. Augmented and diminished intervals are also considered dissonant, and all nonharmonic tones are measured from the bass note, or lowest note sounding in the chord, except in the case of nonharmonic bass tones.</p> <p>Nonharmonic tones generally occur in a pattern of three pitches, of which the nonharmonic tone is the center:</p> <blockquote> <p>Chord tone – Nonchord tone – Chord tone<br> Preparation – Dissonance – Resolution</p> </blockquote> <p>Nonchord tones are categorized by how they are used. The most important distinction is whether they occur on a strong or weak beat and are thus either accented or unaccented nonchord tones. They are also distinguished by their direction of approach and departure and the voice or voices in which they occur and the number of notes they contain.</p> <h3 id="unaccented" tabindex="-1">Unaccented <a class="header-anchor" href="#unaccented" aria-label="Permalink to "Unaccented""></a></h3> <h4 id="anticipation" tabindex="-1">Anticipation <a class="header-anchor" href="#anticipation" aria-label="Permalink to "Anticipation""></a></h4> <p>An anticipation (ANT) occurs when a nonchord tone is approached by step and then remains the same. It is basically a note of the second chord played early.</p> <p>A portamento is the late Renaissance precursor to the anticipation, though today it refers to a glissando.</p> <h4 id="neighbor-tone" tabindex="-1">Neighbor tone <a class="header-anchor" href="#neighbor-tone" aria-label="Permalink to "Neighbor tone""></a></h4> <p>A neighbor tone (NT) or auxiliary note (AUX) is a nonchord tone that passes stepwise from a chord tone directly above or below it (which frequently causes the NT to create dissonance with the chord) and resolves to the same chord tone.</p> <p>In practice and analysis, neighboring tones are sometimes differentiated depending upon whether they are lower or higher than the chord tones surrounding them.
A neighboring tone that is a step higher than the surrounding chord tones is called an upper neighboring tone or an upper auxiliary note, while a neighboring tone that is a step lower than the surrounding chord tones is a lower neighboring tone or lower auxiliary note. However, following Heinrich Schenker's usage in Free Composition, some authors reserve the term "neighbor note" for the lower neighbor a half step below the main note.</p> <p>The German term Nebennote is a somewhat broader category, including all nonchord tones approached from the main note by step.</p> <h4 id="escape-tone" tabindex="-1">Escape tone <a class="header-anchor" href="#escape-tone" aria-label="Permalink to "Escape tone""></a></h4> <p>An escape tone (ET) or echappée is a particular type of unaccented incomplete neighbor tone that is approached stepwise from a chord tone and resolved by a skip in the opposite direction back to the harmony.</p> <h4 id="passing-tone" tabindex="-1">Passing tone <a class="header-anchor" href="#passing-tone" aria-label="Permalink to "Passing tone""></a></h4> <p>A passing tone (PT) or passing note is a nonchord tone prepared by a chord tone a step above or below it and resolved by continuing in the same direction stepwise to the next chord tone (which is either part of the same chord or of the next chord in the harmonic progression).</p> <p>Where two nonchord tones occur before the resolution, they are double passing tones or double passing notes.</p> <h3 id="accented-non-chord-tones" tabindex="-1">Accented Non-Chord Tones <a class="header-anchor" href="#accented-non-chord-tones" aria-label="Permalink to "Accented Non-Chord Tones""></a></h3> <h4 id="passing-tone-1" tabindex="-1">Passing tone <a class="header-anchor" href="#passing-tone-1" aria-label="Permalink to "Passing tone""></a></h4> <p>An accented passing tone moves stepwise between two chord tones just like its unaccented counterpart, but falls on a strong beat.</p> <h4 id="neighbor-tone-1" tabindex="-1">Neighbor tone <a class="header-anchor" href="#neighbor-tone-1" aria-label="Permalink to "Neighbor tone""></a></h4> <p>An accented neighbor tone steps up or down from a chord tone on a strong beat and then moves back to the original note.</p> <h4 id="suspension-and-retardation" tabindex="-1">Suspension and retardation <a class="header-anchor" href="#suspension-and-retardation" aria-label="Permalink to "Suspension and retardation""></a></h4> <blockquote> <p>Endeavor, moreover, to introduce suspensions now in this voice, now in that, for it is incredible how much grace the melody acquires by this means. And every note which has a special function is rendered audible thereby. — Johann Joseph Fux (1725)</p> </blockquote> <p>A suspension (SUS) (sometimes referred to as a syncope) occurs when the harmony shifts from one chord to another, but one or more notes of the first chord (the preparation) are either temporarily held over into or are played again against the second chord (against which they are nonchord tones called the suspension) before resolving downwards to a chord tone by step (the resolution). The whole process is called a suspension as well as the specific nonchord tone(s).</p> <p>Suspensions may be further described with two numbers: (1) the interval between the suspended note and the bass note and (2) the interval between the resolution and the bass note. The most common suspensions are 4-3 suspension, 7-6 suspension, or 9-8 suspension.
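<p>A small sketch of that naming scheme (the two numbers are intervals above the bass; compound intervals get reduced to simple ones, except for the conventional 9-8 case described next):</p> <pre><code>// Label a suspension from the two intervals above the bass: the suspension, then its resolution.
function simpleInterval(n: number): number {
  return ((n - 1) % 7) + 1; // reduce a compound interval to its simple equivalent
}

function suspensionLabel(suspended: number, resolution: number): string {
  if (suspended === 9 && resolution === 8) return '9-8'; // kept as a compound by convention
  return `${simpleInterval(suspended)}-${simpleInterval(resolution)}`;
}

suspensionLabel(11, 10); // '4-3' – an 11th resolving to a 10th is still called a 4-3 suspension
suspensionLabel(7, 6);   // '7-6'
</code></pre>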
Note that except for the 9-8 suspensions, the numbers are typically referred to using the simple intervals, so for instance, if the intervals are actually an 11th and a 10th, you would typically call it a 4-3 suspension. If the bass note is suspended, then the interval is calculated between the bass and the part that is most dissonant with it, often resulting in a 2-3 suspension.</p> <p>Suspensions must resolve downwards. If a tied note is prepared like a suspension but resolves upwards, it is called a retardation. Common retardations include 2-3 and 7-8 retardations.</p> <p>Decorated suspensions are common and consist of portamentos or double eighth notes, the second being a lower neighbor tone.</p> <p>A chain of suspensions constitutes the fourth species of counterpoint; an example may be found in the second movement of Arcangelo Corelli's Christmas Concerto.</p> <h4 id="appoggiatura" tabindex="-1">Appoggiatura <a class="header-anchor" href="#appoggiatura" aria-label="Permalink to "Appoggiatura""></a></h4> <p>An appoggiatura (APP) is a type of accented incomplete neighbor tone approached skip-wise from one chord tone and resolved stepwise to another chord tone ("overshooting" the chord tone).</p> <h4 id="nonharmonic-bass" tabindex="-1">Nonharmonic bass <a class="header-anchor" href="#nonharmonic-bass" aria-label="Permalink to "Nonharmonic bass""></a></h4> <p>Nonharmonic bass notes are bass notes that are not members of the chord below which they are written. Examples include the Elektra chord; another occurs in the third movement of Stravinsky's Symphony of Psalms.</p> <h3 id="involving-more-than-three-notes" tabindex="-1">Involving more than three notes <a class="header-anchor" href="#involving-more-than-three-notes" aria-label="Permalink to "Involving more than three notes""></a></h3> <h4 id="changing-tones" tabindex="-1">Changing tones <a class="header-anchor" href="#changing-tones" aria-label="Permalink to "Changing tones""></a></h4> <p>Changing tones (CT) are two successive nonharmonic tones. A chord tone steps to a nonchord tone which skips to another nonchord tone which leads by step to a chord tone, often the same chord tone. They may imply neighboring tones with a missing or implied note in the middle. Also called double neighboring tones or neighbor group.</p> <h4 id="pedal-point" tabindex="-1">Pedal point <a class="header-anchor" href="#pedal-point" aria-label="Permalink to "Pedal point""></a></h4> <p>Another form of nonchord tone is a pedal point or pedal tone (PD) or note, almost always the tonic or dominant, which is held through a series of chord changes. The pedal point is almost always in the lowest voice (the term originates from organ playing), but it may be in an upper voice; then it may be called an inverted pedal. It may also be between the upper and lower voices, in which case it is called an internal pedal.</p> <h3 id="chromatic-nonharmonic-tone" tabindex="-1">Chromatic nonharmonic tone <a class="header-anchor" href="#chromatic-nonharmonic-tone" aria-label="Permalink to "Chromatic nonharmonic tone""></a></h3> <p>A chromatic nonharmonic tone is a nonharmonic tone that is chromatic, or outside of the key, and creates half-step motion.
The use of such tones, especially chromatic appoggiaturas and chromatic passing tones, increased in the Romantic period.</p> <h2 id="outside-jazz" tabindex="-1">Outside (jazz) <a class="header-anchor" href="#outside-jazz" aria-label="Permalink to "Outside (jazz)""></a></h2> <p>In jazz improvisation, outside playing describes an approach where one plays over a scale, mode or chord that is harmonically distant from the given chord. There are several common techniques for playing outside, including side-stepping or side-slipping, superimposition of Coltrane changes, and polytonality.</p> <h3 id="side-slipping" tabindex="-1">Side-slipping <a class="header-anchor" href="#side-slipping" aria-label="Permalink to "Side-slipping""></a></h3> <p>The term side-slipping or side-stepping has been used to describe several similar yet distinct methods of playing outside. In one version, one plays only the five "wrong" non-scale notes for the given chord and none of the seven scale or three to four chord tones, given that there are twelve notes in the equal tempered scale and heptatonic scales are generally used. Another technique described as sideslipping is the addition of distant ii–V relationships, such as a half-step above the original ii–V. This increases chromatic tension as it first moves away and then towards the tonic. Lastly, side-slipping can be described as playing in a scale a half-step above or below a given chord, before resolving, creating tension and release.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Equal temperament]]></title> <link>https://chromatone.center/theory/notes/temperaments/equal/</link> <guid>https://chromatone.center/theory/notes/temperaments/equal/</guid> <pubDate>Tue, 10 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[12-TET and other equal divisions of an octave]]></description> <content:encoded><![CDATA[<p>In classical music and Western music in general, the most common tuning system since the 18th century has been twelve-tone equal temperament (also known as 12 equal temperament, 12-TET or 12-ET; informally abbreviated to twelve equal), which divides the octave into 12 parts, all of which are equal on a logarithmic scale, with a ratio equal to the 12th root of 2 (12√2 ≈ 1.05946). That resulting smallest interval, 1⁄12 the width of an octave, is called a semitone or half step. In Western countries the term equal temperament, without qualification, generally means <a href="https://en.wikipedia.org/wiki/Equal_temperament" target="_blank" rel="noreferrer">12-TET</a>.</p> <h3 id="zhu-zaiyu" tabindex="-1">Zhu Zaiyu <a class="header-anchor" href="#zhu-zaiyu" aria-label="Permalink to "Zhu Zaiyu""></a></h3> <p>Zhu Zaiyu (朱載堉), a prince of the Ming court, spent thirty years on research based on the equal temperament idea originally postulated by his father. He described his new pitch theory in his Fusion of Music and Calendar 律暦融通 published in 1580.
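<p>In frequency terms, each equal-tempered semitone multiplies the pitch by that same twelfth root of 2; a minimal sketch (the A4 = 440 Hz reference is just an assumed starting point):</p> <pre><code>// One 12-TET semitone is a frequency ratio of 2^(1/12) ≈ 1.05946,
// so twelve semitones give exactly one octave (a factor of 2).
const SEMITONE = Math.pow(2, 1 / 12);

// Frequency n semitones above (or below, for negative n) a reference pitch.
function equalTempered(n: number, reference = 440): number {
  return reference * Math.pow(SEMITONE, n);
}

equalTempered(12); // 880 – one octave above A4
equalTempered(7);  // ≈ 659.26 – the equal-tempered fifth, slightly narrower than the pure 3:2 ratio (660)
</code></pre>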
This was followed by the publication of a detailed account of the new theory of the equal temperament with a precise numerical specification for 12-TET in his 5,000-page work Complete Compendium of Music and Pitch (Yuelü quan shu 樂律全書) in 1584.</p> <p><img src="./zhu-zaiyu-1154.jpg" alt=""></p> <p>Zhu obtained his result mathematically by dividing the length of string and pipe successively by 12√2 ≈ 1.059463, and for pipe length by 24√2, such that after twelve divisions (an octave) the length was divided by a factor of 2:</p> <p><img src="./12-equation.svg" alt="svg"></p> <p>Similarly, after 84 divisions (7 octaves) the length was divided by a factor of 128.</p> <p><img src="./128-equation.svg" alt="svg"></p> <p>Zhu Zaiyu has been credited as the first person to solve the equal temperament problem mathematically.</p> <p><img src="./zhu-zaiyu-strings.jpg" alt=""></p> <h3 id="mathematics-of-12-tet" tabindex="-1">Mathematics of 12-TET <a class="header-anchor" href="#mathematics-of-12-tet" aria-label="Permalink to "Mathematics of 12-TET""></a></h3> <p><img src="./tet-equation.svg" alt="svg"></p> <p><img src="./tet-fifth-equation.svg" alt="svg"></p> <p><img src="./oct-equation.svg" alt="svg"></p> <p><img src="./Monochord_ET.png" alt=""></p> <h3 id="tuning-to-the-beats" tabindex="-1">Tuning to the beats <a class="header-anchor" href="#tuning-to-the-beats" aria-label="Permalink to "Tuning to the beats""></a></h3> <p>A precise equal temperament is possible using the 17th-century Sabbatini method of splitting the octave first into three tempered major thirds. This was also proposed by several writers during the Classical era. Tuning without beat rates but employing several checks, achieving virtually modern accuracy, was already done in the first decades of the 19th century. Using beat rates, first proposed in 1749, became common after their diffusion by Helmholtz and Ellis in the second half of the 19th century. The ultimate precision was available with 2-decimal tables published by White in 1917</p> <p><img src="./piano-tuning.png" alt=""></p> ]]></content:encoded> <enclosure url="https://chromatone.center/zhu-zaiyu-1154.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Perceptual color models]]></title> <link>https://chromatone.center/theory/color/models/perceptual/</link> <guid>https://chromatone.center/theory/color/models/perceptual/</guid> <pubDate>Sun, 08 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Color spaces based on the "standard observer" perception of colors]]></description> <content:encoded><![CDATA[<h2 id="munsell-color-system" tabindex="-1">Munsell color system <a class="header-anchor" href="#munsell-color-system" aria-label="Permalink to "Munsell color system""></a></h2> <p>In colorimetry, the Munsell color system is a color space that specifies colors based on three properties of color: hue (basic color), chroma (color intensity), and value (lightness). It was created by Professor Albert H. Munsell in the first decade of the 20th century and adopted by the United States Department of Agriculture (USDA) as the official color system for soil research in the 1930s.</p> <p><img src="./images/Munsell-system.svg" alt=""></p> <p>Several earlier color order systems had placed colors into a three-dimensional color solid of one form or another, but Munsell was the first to separate hue, value, and chroma into perceptually uniform and independent dimensions, and he was the first to illustrate the colors systematically in three-dimensional space. 
Munsell's system, particularly the later renotations, is based on rigorous measurements of human subjects' visual responses to color, putting it on a firm experimental scientific basis. Because of this basis in human visual perception, Munsell's system has outlasted its contemporary color models, and though it has been superseded for some uses by models such as CIELAB (L<em>a</em>b*) and CIECAM02, it is still in wide use today.</p> <blockquote> <p><img src="./images/munsell_1943_color_solid_cylindrical_coordinates.png" alt=""> Three-dimensional representation of the 1943 Munsell renotations (with portion cut away).</p> </blockquote> <h2 id="hsl-hsv-and-hsb-color-models" tabindex="-1">HSL, HSV and HSB color models <a class="header-anchor" href="#hsl-hsv-and-hsb-color-models" aria-label="Permalink to "HSL, HSV and HSB color models""></a></h2> <p>HSL (for hue, saturation, lightness) and HSV (for hue, saturation, value; also known as HSB, for hue, saturation, brightness) are alternative representations of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes. In these models, colors of each hue are arranged in a radial slice, around a central axis of neutral colors which ranges from black at the bottom to white at the top.</p> <p><img src="./images/hsl.png" alt=""></p> <p>The HSL representation models the way different paints mix together to create colour in the real world, with the lightness dimension resembling the varying amounts of black or white paint in the mixture (e.g. to create "light red", a red pigment can be mixed with white paint; this white paint corresponds to a high "lightness" value in the HSL representation). Fully saturated colors are placed around a circle at a lightness value of ½, with a lightness value of 0 or 1 corresponding to fully black or white, respectively.</p> <p>Meanwhile, the HSV representation models how colors appear under light. The difference between HSL and HSV is that a color with maximum lightness in HSL is pure white, but a color with maximum value/brightness in HSV is analogous to shining a white light on a colored object (e.g. shining a bright white light on a red object causes the object to still appear red, just brighter and more intense, while shining a dim light on a red object causes the object to appear darker and less bright).</p> <p><img src="./images/HSV_color_solid_cylinder_saturation_gray.png" alt=""></p> <p>The issue with both HSV and HSL is that these approaches do not effectively separate colour into their three value components according to human perception of color. This can be seen when the saturation settings are altered — it is quite easy to notice the difference in perceptual lightness despite the "V" or "L" setting being fixed.</p> <p>HWB is a cylindrical-coordinate representation of points in an RGB color model, similar to HSL and HSV. It was developed by HSV’s creator Alvy Ray Smith in 1996 to address some of the issues with HSV. HWB was designed to be more intuitive for humans to use and slightly faster to compute. The first coordinate, H (Hue), is the same as the Hue coordinate in HSL and HSV. W and B stand for Whiteness and Blackness respectively and range from 0–100% (or 0–1). 
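<p>A quick sketch of how HWB relates to HSV arithmetically (all components taken in the 0–1 range here):</p> <pre><code>// Whiteness is the part of the value not taken up by saturation; blackness is what is missing from full value.
function hsvToHwb(h: number, s: number, v: number) {
  return { h, w: (1 - s) * v, b: 1 - v };
}

function hwbToHsv(h: number, w: number, b: number) {
  const v = 1 - b;
  return { h, s: v === 0 ? 0 : 1 - w / v, v };
}

hsvToHwb(0, 1, 1); // pure red -> { h: 0, w: 0, b: 0 }
hsvToHwb(0, 0, 1); // white    -> { h: 0, w: 1, b: 0 }
hsvToHwb(0, 0, 0); // black    -> { h: 0, w: 0, b: 1 }
</code></pre>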
The mental model is that the user can pick a main hue and then “mix” it with white and/or black to produce the desired color.</p> <h2 id="hcl-lch-color-space" tabindex="-1">HCL (Lch) color space <a class="header-anchor" href="#hcl-lch-color-space" aria-label="Permalink to "HCL (Lch) color space""></a></h2> <p>HCL (Hue-Chroma-Luminance) or Lch refers to any of the many cylindrical color space models that are designed to accord with human perception of color with the three parameters. Lch has been adopted by information visualization practitioners to present data without the bias implicit in using varying saturation. They are, in general, designed to have characteristics of both cylindrical translations of the RGB color space, such as HSL and HSV, and the L<em>a</em>b* color space.</p> <p>CIE-based Lch color spaces are transformations of the two chroma values (ab or uv) into the polar coordinates. The source color spaces are still very well-regarded for their uniformity, and the transformation does not cause degradation in this aspect. See the respective articles for how the underlying coordinates are derived.</p> <h2 id="interactive-hsl-lch-and-hwb-color-mixer" tabindex="-1">Interactive HSL, LCH and HWB color mixer <a class="header-anchor" href="#interactive-hsl-lch-and-hwb-color-mixer" aria-label="Permalink to "Interactive HSL, LCH and HWB color mixer""></a></h2> <p>Choose any of the models by clicking on its name. You can define a hue for the color by clicking on its sector and then change two other parameters either by dragging the bars on the side or just swiping across the circle. Top-down motion is for the L (W) component and right-left is for the other one.</p> <color-hsl /><h2 id="cie-1931-xyz" tabindex="-1">CIE 1931 XYZ <a class="header-anchor" href="#cie-1931-xyz" aria-label="Permalink to "CIE 1931 XYZ""></a></h2> <p>The CIE 1931 RGB color space and CIE 1931 XYZ color space were created by the International Commission on Illumination (CIE) in 1931. They resulted from a series of experiments done in the late 1920s by William David Wright using ten observers and John Guild using seven observers. The experimental results were combined into the specification of the CIE RGB color space, from which the CIE XYZ color space was derived.</p> <p>Due to the distribution of cones in the eye, the tristimulus values depend on the observer's field of view. To eliminate this variable, the CIE defined a color-mapping function called the standard (colorimetric) observer, to represent an average human's chromatic response within a 2° arc inside the fovea. This angle was chosen owing to the belief that the color-sensitive cones resided within a 2° arc of the fovea. Thus the CIE 1931 Standard Observer function is also known as the CIE 1931 2° Standard Observer.</p> <p>The CIE XYZ color space encompasses all color sensations that are visible to a person with average eyesight. That is why CIE XYZ (Tristimulus values) is a device-invariant representation of color. It serves as a standard reference against which many other color spaces are defined. A set of color-matching functions, like the spectral sensitivity curves of the LMS color space, but not restricted to non-negative sensitivities, associates physically produced light spectra with specific tristimulus values.</p> <p>A color space maps a range of physically produced colors from mixed light, pigments, etc. 
to an objective description of color sensations registered in the human eye, typically in terms of tristimulus values, but not usually in the LMS color space defined by the spectral sensitivities of the cone cells. The tristimulus values associated with a color space can be conceptualized as amounts of three primary colors in a tri-chromatic, additive color model. In some color spaces, including the LMS and XYZ spaces, the primary colors used are not real colors in the sense that they cannot be generated in any light spectrum.</p> <p>In the CIE 1931 model, Y is the luminance, Z is quasi-equal to blue (of CIE RGB), and X is a mix of the three CIE RGB curves chosen to be nonnegative. Setting Y as luminance has the useful result that for any given Y value, the XZ plane will contain all possible chromaticities at that luminance.</p> <p>The unit of the tristimulus values X, Y, and Z is often arbitrarily chosen so that Y = 1 or Y = 100 is the brightest white that a color display supports. In this case, the Y value is known as the relative luminance. The corresponding whitepoint values for X and Z can then be inferred using the standard illuminants.</p> <p>In other words, the Z value is solely made up of the S cone response, the Y value a mix of L and M responses, and the X value a mix of all three. This fact makes XYZ values analogous to, but different from, the LMS cone responses of the human eye.</p> <h2 id="cieluv-and-cielab" tabindex="-1">CIELUV and CIELAB <a class="header-anchor" href="#cieluv-and-cielab" aria-label="Permalink to "CIELUV and CIELAB""></a></h2> <p>CIELUV is a color space adopted by the International Commission on Illumination (CIE) in 1976, as a simple-to-compute transformation of the 1931 CIE XYZ color space which attempted perceptual uniformity.</p> <p>Due to the distribution of cones in the eye, the tristimulus values depend on the observer's field of view. To eliminate this variable, the CIE defined a color-mapping function called the standard (colorimetric) observer, to represent an average human's chromatic response within a 2° arc inside the fovea.</p> <p><img src="./CIE_1976_UCS.png" alt=""></p> <p>The CIELAB color space, also referred to as L<em>a</em>b*, is a color space defined by the International Commission on Illumination (abbreviated CIE) in 1976. It expresses color as three values: L* for perceptual lightness, and a* and b* for the four unique colors of human vision: red, green, blue, and yellow. CIELAB was intended as a perceptually uniform space, where a given numerical change corresponds to a similar perceived change in color. While the LAB space is not truly perceptually uniform, it nevertheless is useful in industry for detecting small differences in color.</p> <p>Like the CIEXYZ space it derives from, CIELAB colorspace is a device-independent, "standard observer" model. The colors it defines are not relative to any particular device such as a computer monitor or a printer, but instead relate to the CIE standard observer, which is an averaging of the results of color matching experiments under laboratory conditions.</p> <p><img src="./images/Lab_color_at_luminance_75.png" alt=""></p> <p>The CIELAB space is three-dimensional, and covers the entire range of human color perception, or gamut. It is based on the opponent color model of human vision, where red/green forms an opponent pair, and blue/yellow forms an opponent pair. The lightness value, L*, also referred to as "Lstar," defines black at 0 and white at 100.
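<p>The standard CIE definition behind that 0–100 lightness scale is a cube root of the relative luminance with a small linear segment near black; a sketch using the usual 6/29 threshold:</p> <pre><code>// CIE L* from relative luminance Y/Yn, where 0 is black and 1 is the reference white.
function lightness(y: number): number {
  const delta = 6 / 29;
  const f = y > delta ** 3 ? Math.cbrt(y) : y / (3 * delta ** 2) + 4 / 29;
  return 116 * f - 16;
}

lightness(1);    // 100 – the reference white
lightness(0.18); // ≈ 49.5 – an 18% grey lands near the middle of the scale
lightness(0);    // 0 – black
</code></pre>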
The a* axis is relative to the green–red opponent colors, with negative values toward green and positive values toward red. The b* axis represents the blue–yellow opponents, with negative numbers toward blue and positive toward yellow.</p> <h2 id="interactive-lab-color-mixer" tabindex="-1">Interactive LAB color mixer <a class="header-anchor" href="#interactive-lab-color-mixer" aria-label="Permalink to "Interactive LAB color mixer""></a></h2> <p>Mix any possible color out of three color components by dragging the bars on the sides or simply by swiping the central color panel. You can also change the grid resolution and the range of A and B components with the sliders below the grid.</p> <color-lab /><p>While the intention behind CIELAB was to create a space that was more perceptually uniform than CIEXYZ using only a simple formula, CIELAB is known to lack perceptual uniformity, particularly in the area of blue hues.</p> <p>The lightness value, L*, in CIELAB is calculated using the cube root of the relative luminance with an offset near black. This results in an effective power curve with an exponent of approximately 0.43, which represents the human eye's response to light under daylight (photopic) conditions.</p> <h2 id="hsl-and-lch-12-colors-cycle-comparison" tabindex="-1">HSL and LCH 12 colors cycle comparison <a class="header-anchor" href="#hsl-and-lch-12-colors-cycle-comparison" aria-label="Permalink to "HSL and LCH 12 colors cycle comparison""></a></h2> <color-table />]]></content:encoded> <enclosure url="https://chromatone.center/CIE_1976_UCS.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Tunings comparison]]></title> <link>https://chromatone.center/theory/notes/temperaments/tunings/</link> <guid>https://chromatone.center/theory/notes/temperaments/tunings/</guid> <pubDate>Thu, 05 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Ways to juxtapose and compare different tuning methods side by side]]></description> <content:encoded><![CDATA[<img src="./et-limits.svg" /> <h2 id="circle-of-tunings" tabindex="-1">Circle of tunings <a class="header-anchor" href="#circle-of-tunings" aria-label="Permalink to "Circle of tunings""></a></h2> <p>See and hear the slight differences between Pythagorean tunings, Just intonation and 12-TET. Click on the circle to start the note. Click again to stop it. You can hear the beating between the same notes in various tunings and also hear the quality of the intervals in each of them.</p> <TuningCircle/>]]></content:encoded> <enclosure url="https://chromatone.center/tuning-circle.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[MIDI Recorder]]></title> <link>https://chromatone.center/practice/midi/recorder/</link> <guid>https://chromatone.center/practice/midi/recorder/</guid> <pubDate>Wed, 04 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Record MIDI as you play – visualize and save your music]]></description> <content:encoded><![CDATA[<client-only> <midi-recorder /> </client-only> <h2 id="work-in-progress" tabindex="-1">Work in progress <a class="header-anchor" href="#work-in-progress" aria-label="Permalink to "Work in progress""></a></h2> <p>This app is a draft to be iterated on.
The idea is to make a tool to record visual MIDI sketches and store them both as .mid files and directly in the browser.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Melody]]></title> <link>https://chromatone.center/theory/melody/</link> <guid>https://chromatone.center/theory/melody/</guid> <pubDate>Wed, 04 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Combinations of pitch and rhythm]]></description> <content:encoded><![CDATA[<p><a href="./study/">Melody</a> as a linear succession of tones brings <a href="./motion/">Motion</a> to music and it originates from <a href="./singing/">singing</a>. The basis for the singer is created with a <a href="./drone/">Drone</a> and the brightest colors of the voice are discovered through <a href="./articulation/">Articulation and ornamentation</a>.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/ani-adigyozalyan.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Tabs]]></title> <link>https://chromatone.center/practice/chord/tabs/</link> <guid>https://chromatone.center/practice/chord/tabs/</guid> <pubDate>Tue, 03 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Guitar and ukulele tabs for any chord in existence]]></description> <content:encoded><![CDATA[<ChordTabs />]]></content:encoded> <enclosure url="https://chromatone.center/tabs.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Color names]]></title> <link>https://chromatone.center/theory/color/names/</link> <guid>https://chromatone.center/theory/color/names/</guid> <pubDate>Sun, 01 Aug 2021 00:00:00 GMT</pubDate> <description><![CDATA[Know how to name any color of 12 equally spaced hues]]></description> <content:encoded><![CDATA[<ColorCards :list="col.colors" :langs="col.langs" /><ColorNames :list="col.colors" :langs="col.langs" /><img src="./color-names.svg"> <p><a href="https://en.wikipedia.org/wiki/Tertiary_color" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Tertiary_color</a></p> <img src="../models/palette.svg" width="400" height="400" /> ]]></content:encoded> <enclosure url="https://chromatone.center/color-names.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Composition]]></title> <link>https://chromatone.center/theory/composition/</link> <guid>https://chromatone.center/theory/composition/</guid> <pubDate>Sat, 10 Jul 2021 00:00:00 GMT</pubDate> <description><![CDATA[The act of conceiving a piece of music, the art of creating music, or the finished product]]></description> <content:encoded><![CDATA[<p>Here we try to tie up all the learned levels of music to establish systems of creating whole music pieces. <a href="./generative/">Generative theory of tonal music</a> views music evolution as similar to human speech. The innovative rules of <a href="./serialism/">Atonality and serialism</a> bring music to the edge of complex mathematics.</p> <p><a href="./form/">Musical form</a> is the means of expression of the most complex ideas. <a href="./song/">Song structure</a> is the way to build the bigger frame for the ideas that are sung about.
We'll explore the resulting <a href="./texture/">Texture</a> of music created by many great <a href="./composers/">Composers</a> of all time.</p> <YoutubeEmbed video="GS24I5tZQNU" />]]></content:encoded> <enclosure url="https://chromatone.center/dayne-topkin.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Texture]]></title> <link>https://chromatone.center/theory/composition/texture/</link> <guid>https://chromatone.center/theory/composition/texture/</guid> <pubDate>Sat, 10 Jul 2021 00:00:00 GMT</pubDate> <description><![CDATA[Texture In music, texture is how the tempo, melodic, and harmonic materials are combined in a musica]]></description> <content:encoded><![CDATA[<h2 id="texture" tabindex="-1">Texture <a class="header-anchor" href="#texture" aria-label="Permalink to "Texture""></a></h2> <p>In music, texture is how the tempo, melodic, and harmonic materials are combined in a musical composition, determining the overall quality of the sound in a piece. The texture is often described in regard to the density, or thickness, and range, or width, between lowest and highest pitches, in relative terms as well as more specifically distinguished according to the number of voices, or parts, and the relationship between these voices. For example, a thick texture contains many 'layers' of instruments. One of these layers could be a string section, another brass. The thickness is also changed by the amount and the richness of the instruments playing the piece. The thickness varies from light to thick. A piece's texture may be changed by the number and character of parts playing at once, the timbre of the instruments or voices playing these parts and the harmony, tempo, and rhythms used. The types categorized by number and relationship of parts are analyzed and determined through the labeling of primary textural elements: primary melody (PM), secondary melody (SM), parallel supporting melody (PSM), static support (SS), harmonic support (HS), rhythmic support (RS), and harmonic and rhythmic support (HRS).</p> <youtube-embed video="teh22szdnRQ" /><h3 id="common-types" tabindex="-1">Common types <a class="header-anchor" href="#common-types" aria-label="Permalink to "Common types""></a></h3> <p>In musical terms, particularly in the fields of music history and music analysis, some common terms for different types of texture are:</p> <ul> <li><strong>Monophonic</strong> - Monophonic texture includes a single melodic line with no accompaniment. PSMs often double or parallel the PM they support.</li> <li><strong>Biphonic</strong> - Two distinct lines, the lower sustaining a drone (constant pitch) while the other line creates a more elaborate melody above it. Pedal tones or ostinati would be an example of an SS. It is generally considered to be a type of polyphony. An example is the pedal tone in Bach's Prelude No. 6 in D minor, BWV 851, from The Well-Tempered Clavier, Book I, mm. 1–2, where all pedal tone notes are consonant except for the last three of the first measure.</li> <li><strong>Polyphonic or Counterpoint or Contrapuntal</strong> - Multiple melodic voices which are to a considerable extent independent from or in imitation with one another. This is the characteristic texture of Renaissance music, also prevalent during the Baroque period. Polyphonic textures may contain several PMs.</li> <li><strong>Homophonic</strong> - The most common texture in Western music: melody and accompaniment. Multiple voices of which one, the melody, stands out prominently and the others form a background of harmonic accompaniment.
If all the parts have much the same rhythm, the homophonic texture can also be described as homorhythmic. This was the characteristic texture of the Classical period and continued to predominate in Romantic music, while in the 20th century, "popular music is nearly all homophonic," and "much of jazz is also", though "the simultaneous improvisations of some jazz musicians creates a true polyphony". Homophonic textures usually contain only one PM. HS and RS are often combined, thus labeled HRS.</li> <li><strong>Homorhythmic</strong> - Multiple voices with similar rhythmic material in all parts. Also known as "chordal". May be considered a condition of homophony or distinguished from it.</li> <li><strong>Heterophonic</strong> - Two or more voices simultaneously performing variations of the same melody.</li> <li><strong>Silence</strong> - No sound at all or the absence of intended sound</li> </ul> <p>Many classical pieces feature different kinds of texture within a short space of time. An example is the Scherzo from Schubert’s piano sonata in B major, D575. The first four bars are monophonic, with both hands performing the same melody an octave apart. Bars 5–10 are homophonic, with all voices coinciding rhythmically. Bars 11–20 are polyphonic. There are three parts, the top two moving in parallel (interval of a tenth). The lowest part imitates the rhythm of the upper two at the distance of three beats. The passage climaxes abruptly with a bar’s silence.</p> <p>After the silence, the polyphonic texture expands from three to four independent parts moving simultaneously in bars 21–24. The upper two parts are imitative, the lowest part consists of a repeated note (pedal point) and the remaining part weaves an independent melodic line. The final four bars revert to homophony, bringing the section to a close.</p> <youtube-embed video="xcQcAeiNK2Q" /><h3 id="additional-types" tabindex="-1">Additional types <a class="header-anchor" href="#additional-types" aria-label="Permalink to "Additional types""></a></h3> <p>Although in music instruction certain styles or repertoires of music are often identified with one of these descriptions (for example, Gregorian chant is described as monophonic, Bach Chorales are described as homophonic and fugues as polyphonic), many composers use more than one type of texture in the same piece of music.</p> <p>A simultaneity is more than one complete musical texture occurring at the same time, rather than in succession.</p> <p>A more recent type of texture first used by György Ligeti is <strong>micropolyphony</strong>. Other textures include <strong>polythematic</strong>, <strong>polyrhythmic</strong>, <strong>onomatopoeic</strong>, <strong>compound</strong>, and <strong>mixed</strong> or <strong>composite</strong> textures.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Practice]]></title> <link>https://chromatone.center/practice/</link> <guid>https://chromatone.center/practice/</guid> <pubDate>Wed, 07 Jul 2021 00:00:00 GMT</pubDate> <description><![CDATA[Web apps for music education and independent research]]></description> <content:encoded><![CDATA[<p>Here's a growing collection of free and open source interactive web experiences for everyone to explore. You can have a direct experience of natural patterns in <a href="./sound/">human sound perception</a> with just your laptop or a smartphone.</p> <p>We are building tools for everyone to use in many ways.
Some may consider them toys, but there's much depth to dive into yourself or with other musicians. There's huge value for visual artists and non-musicians too. Chromatone enables us to see and understand multiple layers of music theory without advanced ear-training. And vice versa! Here we can learn and explore more about our <a href="./color/">color perception</a> and the multitude of models engineered to navigate the color space.</p> <p>Music has many faces and it grows through a number of different modes of perception and comprehension. It's a multi-axis space that is built on sensory and cognitive phenomena. One of the main axes is time. It gets quite emphasized with note durations and evolving <a href="./rhythm/">rhythmic structures</a> of organized <a href="./synth/noise/">noise</a>. It demonstrates and utilizes our ability to recognize patterns in repetitions of sounds at rates between tens and thousands of events per minute. With higher oscillation speeds we get to a distinct mode of <a href="./pitch/">pitch perception</a>.</p> <p>All 12 pitch classes form the so-called <a href="./chroma/">Chroma</a> space, where numerous combinations of notes combine to become intervals, <a href="./chord/">chords</a> and scales. We can interact with the notes via <a href="./midi/">MIDI</a> protocol commands, or build our own <a href="./experiments/">experimental</a> visual music tools. There are also some apps for <a href="./jam/">Jamming</a> together in harmony and in sync.</p> <p>And there's an ever-growing collection of <a href="./external/">external music web-apps</a> found on the internet.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/soundtrap.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chord Sheets Lab]]></title> <link>https://chromatone.center/practice/chord/chord-sheets/</link> <guid>https://chromatone.center/practice/chord/chord-sheets/</guid> <pubDate>Sat, 03 Jul 2021 00:00:00 GMT</pubDate> <description><![CDATA[Chordsheetjs experiments]]></description> <content:encoded><![CDATA[<ChordSheets/><p><a href="https://github.com/martijnversluis/ChordSheetJS" target="_blank" rel="noreferrer">ChordSheetJS Playground</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Electronic minimalism]]></title> <link>https://chromatone.center/theory/composition/electronic/</link> <guid>https://chromatone.center/theory/composition/electronic/</guid> <pubDate>Sat, 03 Jul 2021 00:00:00 GMT</pubDate> <description><![CDATA[Ways to produce compelling electronic compositions]]></description> <content:encoded><""></a></h2> <youtube-embed video="mh8jV6IGI84" /><h2 id="_1-subtle-motifs" tabindex="-1">1. Subtle motifs <a class="header-anchor" href="#_1-subtle-motifs" aria-label="Permalink to "1. Subtle motifs""></a></h2> <h3 id="two-elements-bouncing-at-each-other" tabindex="-1">Two elements bouncing at each other <a class="header-anchor" href="#two-elements-bouncing-at-each-other" aria-label="Permalink to "Two elements bouncing at each other""></a></h3> <p>Ensure something is going on in the lows, in the mids & the highs. Choose a phrase length & design it to create a simple groove.</p> <h2 id="_2-movement" tabindex="-1">2. Movement <a class="header-anchor" href="#_2-movement" aria-label="Permalink to "2.
Movement""></a></h2> <h3 id="too-little-is-boring-too-much-is-distracting" tabindex="-1">Too little is boring, too much is distracting <a class="header-anchor" href="#too-little-is-boring-too-much-is-distracting" aria-label="Permalink to "Too little is boring, too much is distracting""></a></h3> <p>Create movement in the timbre of your elements. Use modulating effects, tweak filters, change envelope lengths, LFOs or automation to create movement.</p> <h2 id="_3-polymeters" tabindex="-1">3. Polymeters <a class="header-anchor" href="#_3-polymeters" aria-label="Permalink to "3. Polymeters""></a></h2> <h3 id="slightly-unstable-but-exciting-in-moderation" tabindex="-1">Slightly unstable, but exciting in moderation <a class="header-anchor" href="#slightly-unstable-but-exciting-in-moderation" aria-label="Permalink to "Slightly unstable, but exciting in moderation""></a></h3> <p>Simple grooves become endlesly listenable when you add 1 or 2 polymeters to them. During breaks polymeters make it hard to guess where the ‘1’ of the groove is, which is satisfying, when resolved.</p> <h2 id="_4-tension-and-release" tabindex="-1">4. Tension and release <a class="header-anchor" href="#_4-tension-and-release" aria-label="Permalink to "4. Tension and release""></a></h2> <h3 id="make-the-audience-wait" tabindex="-1">Make the audience wait <a class="header-anchor" href="#make-the-audience-wait" aria-label="Permalink to "Make the audience wait""></a></h3> <p>Evolve the global story of the track to build up tension and then release that tension. Make them want it, tease them, drive them crazy! Use contrast (busy vs sparse). Use reversed reverb tails as buildups.</p> <youtube-embed video="B_D3dCSylCg" /><h2 id="making-minimalist-music" tabindex="-1">Making minimalist music <a class="header-anchor" href="#making-minimalist-music" aria-label="Permalink to "Making minimalist music""></a></h2> <blockquote> <p>Only Having What You Need</p> </blockquote> <p>Write as little as possible.</p> <h3 id="rule-of-three" tabindex="-1">Rule of three <a class="header-anchor" href="#rule-of-three" aria-label="Permalink to "Rule of three""></a></h3> <p>At maximum three layers of music can have listeners attention.</p> <h3 id="avoid-habits-and-answer-questions" tabindex="-1">Avoid habits and answer questions <a class="header-anchor" href="#avoid-habits-and-answer-questions" aria-label="Permalink to "Avoid habits and answer questions""></a></h3> <ul> <li>Why am I doing this?</li> <li>What is doing this adding to the final result?</li> <li>Is this really making any difference?</li> <li>Is it really necessary?</li> </ul> <p>Minimalism and why it might be the most effective way to finish more music.</p> <youtube-embed video="8FNmtguAJGE" />]]></content:encoded> <enclosure url="https://chromatone.center/underdog.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Drum rudiments]]></title> <link>https://chromatone.center/theory/rhythm/rudiments/</link> <guid>https://chromatone.center/theory/rhythm/rudiments/</guid> <pubDate>Wed, 30 Jun 2021 00:00:00 GMT</pubDate> <description><![CDATA[Basic patterns played with drum sticks]]></description> <content:encoded><""></a></h3> <h2 id="historical-organization" tabindex="-1">Historical organization <a class="header-anchor" href="#historical-organization" aria-label="Permalink to "Historical organization""></a></h2> <p>(NARD Standard 26 American Drum Rudiments of 1933)</p> <h3 id="thirteen-essential-rudiments" tabindex="-1">Thirteen "essential" rudiments <a class="header-anchor" 
href="#thirteen-essential-rudiments" aria-label="Permalink to "Thirteen "essential" rudiments""></a></h3> <ul> <li>The double stroke open roll</li> <li>The five stroke roll</li> <li>The seven stroke roll</li> <li>The flam</li> <li>The flam accent</li> <li>The flam paradiddle</li> <li>The flamacue</li> <li>The drag (half drag or ruff)</li> <li>The single drag tap</li> <li>The double drag tap</li> <li>The double paradiddle</li> <li>The single ratamacue</li> <li>The triple ratamacue</li> </ul> <h3 id="second-thirteen-rudiments" tabindex="-1">Second thirteen rudiments <a class="header-anchor" href="#second-thirteen-rudiments" aria-label="Permalink to "Second thirteen rudiments""></a></h3> <ul> <li>The single stroke roll</li> <li>The nine stroke roll</li> <li>The ten stroke roll</li> <li>The eleven stroke roll</li> <li>The thirteen stroke roll</li> <li>The fifteen stroke roll</li> <li>The flam tap</li> <li>The single paradiddle</li> <li>The drag paradiddle No. 1</li> <li>The drag paradiddle No. 2</li> <li>The flam paradiddle-diddle</li> <li>The lesson 25</li> <li>The double ratamacue</li> </ul> <h3 id="last-fourteen-rudiments" tabindex="-1">Last fourteen rudiments <a class="header-anchor" href="#last-fourteen-rudiments" aria-label="Permalink to "Last fourteen rudiments""></a></h3> <p>In 1984, the Percussive Arts Society added 14 more rudiments to extend the list to the current 40 International Snare Drum Rudiments. The ordering was completely changed during this last re-organization.</p> <ul> <li>The single stroke four</li> <li>The single stroke seven</li> <li>The multiple bounce roll</li> <li>The triple stroke roll</li> <li>The six stroke roll</li> <li>The seventeen stroke roll</li> <li>The triple paradiddle</li> <li>The single paradiddle-diddle</li> <li>The single flammed mill</li> <li>The pataflafla</li> <li>The Swiss Army triplet</li> <li>The inverted flam tap</li> <li>The flam drag</li> <li>The single dragadiddle</li> </ul> <youtube-embed video="roT6Imp7lSg" /><h2 id="terminology" tabindex="-1">Terminology <a class="header-anchor" href="#terminology" aria-label="Permalink to "Terminology""></a></h2> <h3 id="single-stroke" tabindex="-1">Single stroke <a class="header-anchor" href="#single-stroke" aria-label="Permalink to "Single stroke""></a></h3> <p>A stroke performs a single percussive note. There are four basic single strokes.</p> <h3 id="double-stroke" tabindex="-1">Double stroke <a class="header-anchor" href="#double-stroke" aria-label="Permalink to "Double stroke""></a></h3> <p>A double stroke consists of two single strokes played by the same hand (either RR or LL).</p> <h3 id="diddle" tabindex="-1">Diddle <a class="header-anchor" href="#diddle" aria-label="Permalink to "Diddle""></a></h3> <p>A diddle is a double stroke played at the current prevailing speed of the piece. For example, if a sixteenth-note passage is being played then any diddles in that passage would consist of sixteenth notes.</p> <h3 id="paradiddle" tabindex="-1">Paradiddle <a class="header-anchor" href="#paradiddle" aria-label="Permalink to "Paradiddle""></a></h3> <p>A paradiddle consists of two single strokes followed by a double stroke, i.e., RLRR or LRLL. When multiple paradiddles are played in succession, the first note always alternates between right and left. Therefore, a single paradiddle is often used to switch the "lead hand" in drumming music. Mill Stroke</p> <p>A mill stroke is essentially a reversed paradiddle with the sticking RRLR or LLRL with an accent on the first note. 
The single flammed mill is the most common mill stroke variant in American playing.</p> <h3 id="drag" tabindex="-1">Drag <a class="header-anchor" href="#drag" aria-label="Permalink to "Drag""></a></h3> <p>A drag is a double stroke played at twice the speed of the context in which it is placed. For example, if a sixteenth-note passage is being played then any drags in that passage would consist of thirty-second notes. Drags can also be notated as grace notes, in which case the spacing between the notes can be interpreted by the player. On timpani, drags are often played with alternating sticking (lrL or rlR).</p> <p>In Scottish pipe band snare drumming, a drag consists of a flam where the gracenote is played as a "deadstick" (staccato note).[citation needed]</p> <h3 id="ruff" tabindex="-1">Ruff <a class="header-anchor" href="#ruff" aria-label="Permalink to "Ruff""></a></h3> <p>Historically, the modern Drag was known as a Ruff (or Rough) if played closed and a Half Drag when played open. Ruff can also refer to a single stroked set of grace notes preceding a regular note. In American playing the 3 Stroke Ruff has 2 single stroked grace notes before the primary or full note and a 4 Stroke Ruff has 3 singles before the primary note. Other rudimental systems have differing sticking methods and names for similar notation figures. Though still used and taught by drummers and drum teachers in practice, the 3 Stroke Ruff and 4 Stroke Ruff are not officially listed on the NARD or PAS rudiment sheets and the term Drag has eclipsed Ruff (or Rough) for the double stroked rudiments, in both open or closed execution, according to the current PAS standard terminology.</p> <h3 id="flam" tabindex="-1">Flam <a class="header-anchor" href="#flam" aria-label="Permalink to "Flam""></a></h3> <p>A flam consists of two single strokes played by alternating hands (rL or lR). The first stroke is a quieter grace note followed by a louder primary stroke on the opposite hand. The two notes are played almost simultaneously, and are intended to sound like a single, broader note. The temporal distance between the grace note and the primary note can vary depending on the style and context of the piece being played. In the past, or in some European systems, open flams and closed flams were listed as separate rudiments.</p> <h3 id="charge-stroke" tabindex="-1">Charge Stroke <a class="header-anchor" href="#charge-stroke" aria-label="Permalink to "Charge Stroke""></a></h3> <p>A charge stroke is a special variation on an open flam in which one or both of the notes are accented to provide a driving feel that can create the illusion that the downbeat has moved earlier in time. The two major types are French Lr or Rl and Swiss LR or RL with the first note preceding the downbeat, which falls on the second note, in both types. 
Charge strokes can be combined with flams or drags to create complex grace note figures preceding a downbeat.</p> <h3 id="roll" tabindex="-1">Roll <a class="header-anchor" href="#roll" aria-label="Permalink to "Roll""></a></h3> <p>Drum rolls are various techniques employed to produce a sustained, continuous sound.</p> <p><a href="https://en.wikipedia.org/wiki/Drum_rudiment" target="_blank" rel="noreferrer">https://en.wikipedia.org/wiki/Drum_rudiment</a></p> <p><a href="https://www.drumeo.com/beat/rudiments/" target="_blank" rel="noreferrer">https://www.drumeo.com/beat/rudiments/</a></p> <h3 id="practice-drum-rudiments-online-1" tabindex="-1"><a href="./../../../practice/rhythm/rudiments/">Practice drum rudiments online</a> <a class="header-anchor" href="#practice-drum-rudiments-online-1" aria-label="Permalink to "[Practice drum rudiments online](../../../practice/rhythm/rudiments/index.md)""></a></h3> ]]></content:encoded> <enclosure url="https://chromatone.center/josh-sorenson.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Noise lab]]></title> <link>https://chromatone.center/practice/synth/noise/</link> <guid>https://chromatone.center/practice/synth/noise/</guid> <pubDate>Tue, 22 Jun 2021 00:00:00 GMT</pubDate> <description><![CDATA[As white light is the combination of all colors, the white noise is the combination of all possible notes]]></description> <content:encoded><![CDATA[<SynthNoise /><h2 id="noise-generation-tool" tabindex="-1">Noise generation tool <a class="header-anchor" href="#noise-generation-tool" aria-label="Permalink to "Noise generation tool""></a></h2> <p>A simple tool to generate some noise. Let's look at the possibilities.</p> <ol> <li>The <strong>noise generator</strong> section <ol> <li>You can start the noise by tapping the <strong>NOISE</strong> button at the top left. There's a latch in the bottom of this button to fix the noise playing. Click it again to unlatch the sound playing. The other way is to press <strong>A</strong> – the sound will play as long as you hold it.</li> <li>The <strong>DRY</strong> slider determines the volume of the initial noise source.</li> <li>Choose the type of the noise (its 'color') between Brown, Pink and White. <ol> <li><a href="https://en.wikipedia.org/wiki/White_noise" target="_blank" rel="noreferrer">White noise</a> is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.</li> <li><a href="https://en.wikipedia.org/wiki/Pink_noise" target="_blank" rel="noreferrer">Pink noise</a> or 1⁄f noise is a signal or process with a frequency spectrum such that the power spectral density (power per frequency interval) is inversely proportional to the frequency of the signal. In pink noise, each octave interval (halving or doubling in frequency) carries an equal amount of noise energy. Pink noise is one of the most common signals in biological systems.</li> <li>The spectral density of the <a href="https://en.wikipedia.org/wiki/Brownian_noise" target="_blank" rel="noreferrer">Brown noise</a> is inversely proportional to f^2, meaning it has higher intensity at lower frequencies, even more so than pink noise. 
It decreases in intensity by 6 dB per octave (20 dB per decade) and, when heard, has a "damped" or "soft" quality compared to white and pink noise.</li> </ol> </li> <li>Next is the <strong>ADSR</strong> controls group: drag <strong>ATTACK</strong>, <strong>DECAY</strong>, <strong>SUSTAIN</strong> and <strong>RELEASE</strong> sliders to adjust the signal envelope.</li> </ol> </li> <li><strong>Auto-filter</strong> section. Press or latch the <strong>FILTER</strong> button to engage the filter. Change the FREQUENCY, OCTAVES and Q-FACTOR of the filter. Choose <a href="https://en.wikipedia.org/wiki/Low-pass_filter" target="_blank" rel="noreferrer">LP</a> (Low-Pass), <a href="https://en.wikipedia.org/wiki/High-pass_filter" target="_blank" rel="noreferrer">HP</a> (High-pass) or <a href="https://en.wikipedia.org/wiki/Band-pass_filter" target="_blank" rel="noreferrer">BP</a> (Band-pass) filter type. Next comes the <strong>PLAY</strong> button to turn on the filter's LFO. <strong>LFO</strong> and <strong>DEPTH</strong> sliders set the swing of the filter, and then you have the choice of the Low Frequency Oscillator.</li> <li>A <a href="https://en.wikipedia.org/wiki/Bitcrusher" target="_blank" rel="noreferrer"><strong>Bitcrusher</strong></a> is an audio effect that produces distortion by reducing the resolution or bandwidth of digital audio data. The resulting quantization noise may produce a "warmer" sound impression, or a harsh one, depending on the amount of reduction. Set the volume of the bus, the <strong>BITS</strong> resolution and the <strong>WET</strong> parameter that sets how much of the signal should come through.</li> <li><strong>Auto-panner</strong> section makes the sound move from left to right with another LFO. Press or latch the PAN button to engage the effect. Latch the PLAY button to make the panning move.
<strong>LFO</strong> sets the frequency of the movement, <strong>DEPTH</strong> sets the amplitude of it.</li> </ol> ]]></content:encoded> <enclosure url="https://chromatone.center/noise.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Mode degrees]]></title> <link>https://chromatone.center/practice/chord/modes/</link> <guid>https://chromatone.center/practice/chord/modes/</guid> <pubDate>Thu, 10 Jun 2021 00:00:00 GMT</pubDate> <description><![CDATA[Chords of diatonic mode degrees]]></description> <content:encoded><![CDATA[<chord-progressions :list="modes" />]]></content:encoded> <enclosure url="https://chromatone.center/modes.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Alternative notation systems]]></title> <link>https://chromatone.center/theory/notes/alternative/</link> <guid>https://chromatone.center/theory/notes/alternative/</guid> <pubDate>Fri, 04 Jun 2021 00:00:00 GMT</pubDate> <description><![CDATA[There's a plenty of proposed and actually used ways of writing down and communicating music information]]></description> <content:encoded><![CDATA[<p>Here's the growing list of all the known alternative approaches to notation systems:</p> <ul> <li><a href="./tabulature/">Tabulature</a></li> <li><a href="./numbered/">Numbered notation</a></li> <li><a href="./bilinear/">Bilinear notation</a></li> <li><a href="./chromatic-staff/">Chromatic staff</a></li> <li><a href="./klavar/">Klavarscribo</a></li> <li><a href="./dodeka/">Dodeka</a></li> <li><a href="./parsons/">Parsons code</a></li> <li><a href="./scientific/">Standard pitch notation</a></li> <li><a href="./integer/">Integer notation</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/kelly-sikkema.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[MIDI File visualizer]]></title> <link>https://chromatone.center/practice/midi/visualizer/</link> <guid>https://chromatone.center/practice/midi/visualizer/</guid> <pubDate>Thu, 20 May 2021 00:00:00 GMT</pubDate> <description><![CDATA[Render a MIDI-file to a colorful picture]]></description> <content:encoded><![CDATA[<client-only> <midi-visualizer /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/midi-visual.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Profile]]></title> <link>https://chromatone.center/practice/chroma/profile/</link> <guid>https://chromatone.center/practice/chroma/profile/</guid> <pubDate>Wed, 12 May 2021 00:00:00 GMT</pubDate> <description><![CDATA[Get info for any possible chroma combination]]></description> <content:encoded><![CDATA[<chroma-profile v-model:chroma="chroma" class="m-2" :editable="true" />]]></content:encoded> <enclosure url="https://chromatone.center/profile.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Palette with feedback]]></title> <link>https://chromatone.center/practice/chroma/palette/feedback/</link> <guid>https://chromatone.center/practice/chroma/palette/feedback/</guid> <pubDate>Thu, 22 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[More interesting and chaotic shader setup]]></description> <content:encoded><![CDATA[<ChromaPaletteFeedback class="h-screen" />]]></content:encoded> <enclosure url="https://chromatone.center/feedback.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Palette]]></title> <link>https://chromatone.center/practice/chroma/palette/</link> <guid>https://chromatone.center/practice/chroma/palette/</guid> <pubDate>Thu, 22 Apr 2021 00:00:00 GMT</pubDate> 
<description><![CDATA[A MIDI reactive GLSL shader as a direct visual mathematical interpretation of musical notes]]></description> <content:encoded><![CDATA[<ChromaPalette style="position: sticky; top: 0;"/><div class="info custom-block"><p class="custom-block-title">INFO</p> <p>Chroma Palette is an immersive digital artwork that leverages Denis Starov's Chromatone system to translate the aural into the visual. Incorporating a microphone to capture live ambient audio, the installation analyzes the sounds and assigns one of twelve unique colors to each of the twelve distinct pitch classes, effectively rendering a visual representation of the sound's frequency content on a screen. This innovative approach not only allows for the visualization of music but also creates a multisensory feedback loop, engaging both sight and hearing in a unified perceptual experience.</p> <p>The artwork operates within the broader context of the Chromatone music web-apps ecosystem, which is designed to facilitate the study and appreciation of music through visual means. By interacting with Pitch Palette, participants influence the soundscape, thereby altering the visual output in real-time. This participatory element is key, transforming viewers into an integral part of the installation and inviting them to explore the symbiotic relationship between different senses and the environment. Through Pitch Palette, Starov extends an invitation to journey into a synesthetic space where sound and color intersect and are experienced in unison.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/shader.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[GPU shader]]></title> <link>https://chromatone.center/practice/chroma/shader/</link> <guid>https://chromatone.center/practice/chroma/shader/</guid> <pubDate>Thu, 22 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[A MIDI reactive GLSL shader as a direct visual mathematical interpretation of musical notes]]></description> <content:encoded><![CDATA[<p>This page is moved to <a href="https://chromatone.center/practice/chroma/palette/" target="_blank" rel="noreferrer">https://chromatone.center/practice/chroma/palette/</a></p> ]]></content:encoded> </item> <item> <title><![CDATA[Compass]]></title> <link>https://chromatone.center/practice/chroma/compass/</link> <guid>https://chromatone.center/practice/chroma/compass/</guid> <pubDate>Tue, 20 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Explore any combination of 12 tone equal temperament pitches]]></description> <content:encoded><![CDATA[<client-only> <chroma-compass /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/compass.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Chroma Touch]]></title> <link>https://chromatone.center/practice/chroma/touch/</link> <guid>https://chromatone.center/practice/chroma/touch/</guid> <pubDate>Tue, 20 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Intuitive and performant WebMIDI intstrument]]></description> <content:encoded><![CDATA[<ChromaTouch class=" w-full max-w-full h-100vh" />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Waveform]]></title> <link>https://chromatone.center/practice/chroma/waveform/</link> <guid>https://chromatone.center/practice/chroma/waveform/</guid> <pubDate>Mon, 12 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Visualization of the sum waveform of any chroma note 
combination]]></description> <content:encoded><![CDATA[<p>Choose any of the notes to see the wavefrom of their combination.</p> <ChromaForm />]]></content:encoded> <enclosure url="https://chromatone.center/chromaform.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Synthesis]]></title> <link>https://chromatone.center/practice/synth/</link> <guid>https://chromatone.center/practice/synth/</guid> <pubDate>Sat, 10 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Ways to produce specific sounds]]></description> <content:encoded><![CDATA[<h2 id="resources" tabindex="-1">Resources <a class="header-anchor" href="#resources" aria-label="Permalink to "Resources""></a></h2> <ul> <li><a href="https://signalsmith-audio.co.uk/writing/2021/lets-write-a-reverb/" target="_blank" rel="noreferrer">https://signalsmith-audio.co.uk/writing/2021/lets-write-a-reverb/</a></li> <li><a href="https://ccrma.stanford.edu/~jos/" target="_blank" rel="noreferrer">https://ccrma.stanford.edu/~jos/</a></li> <li><a href="https://jackschaedler.github.io/circles-sines-signals/" target="_blank" rel="noreferrer">https://jackschaedler.github.io/circles-sines-signals/</a></li> <li><a href="https://www.soundonsound.com/synthesizers/synth-secrets" target="_blank" rel="noreferrer">https://www.soundonsound.com/synthesizers/synth-secrets</a></li> <li><a href="https://www.theaudioprogrammer.com/" target="_blank" rel="noreferrer">https://www.theaudioprogrammer.com/</a></li> <li><a href="https://www.dsprelated.com/" target="_blank" rel="noreferrer">https://www.dsprelated.com/</a></li> <li><a href="https://www.earlevel.com/main/" target="_blank" rel="noreferrer">https://www.earlevel.com/main/</a></li> <li><a href="http://www.willpirkle.com/" target="_blank" rel="noreferrer">http://www.willpirkle.com/</a></li> <li><a href="https://www.hackaudio.com/" target="_blank" rel="noreferrer">https://www.hackaudio.com/</a></li> <li><a href="https://maxgraf.space/code/2020/04/22/e2e-piano-epiano.html" target="_blank" rel="noreferrer">https://maxgraf.space/code/2020/04/22/e2e-piano-epiano.html</a></li> <li><a href="https://maxgraf.space/code/2020/06/05/pitch-aware-granular-synth.html" target="_blank" rel="noreferrer">https://maxgraf.space/code/2020/06/05/pitch-aware-granular-synth.html</a></li> <li><a href="https://maxgraf.space/code/2020/06/05/karplus-strong-synth.html" target="_blank" rel="noreferrer">https://maxgraf.space/code/2020/06/05/karplus-strong-synth.html</a></li> <li><a href="https://ccrma.stanford.edu/~jos/pasp/Feedback_Comb_Filters.html" target="_blank" rel="noreferrer">https://ccrma.stanford.edu/~jos/pasp/Feedback_Comb_Filters.html</a></li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/adi-goldstein.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Generative]]></title> <link>https://chromatone.center/practice/generative/</link> <guid>https://chromatone.center/practice/generative/</guid> <pubDate>Fri, 09 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Algorhythmic and randomized music choices and performance]]></description> <content:encoded><![CDATA[<h2 id="sequencing" tabindex="-1">Sequencing <a class="header-anchor" href="#sequencing" aria-label="Permalink to "Sequencing""></a></h2> <p><strong>Sequencing</strong> is the stringing together of a precisely-timed sequence of commands to the computer to make sound.</p> <p>The commands can be things like “play an audio sample” or “send a note-on command on this MIDI channel to this MIDI port” or “select the instrument for a 
MIDI channel”. In the old days, there was a clear distinction between sequencers, which knew how to record, edit and play back sequences of MIDI commands, but had no ability to record or play back sound on their own, and sound sources, which knew how to respond to MIDI commands by producing actual sound. These days, most applications can do at least something of both.</p> <h2 id="sequencers" tabindex="-1">Sequencers <a class="header-anchor" href="#sequencers" aria-label="Permalink to "Sequencers""></a></h2> <p>A music sequencer is a tool that allows you to program and playback sequences of notes, rhythms, and effects automatically instead of performing or recording each part in real-time. Sequencers don’t generate their own sounds. Rather, they send MIDI and CV information to trigger other instruments or effect parameters.</p> <p>When programming a sequencer, you typically have control over several things: note/pitch, note length, pattern length, velocity, effect parameters and more. In other words, sequencers allow you to program a performance over a specified period of time.</p> <p>Sequencers are at the core of modern electronic music production, with arrangements built from loops of sequenced drum beats and synthesizer patterns. They’re also useful as an accompaniment for live musicians.</p> <p>Sequencers take many different forms, both in music sequencer hardware and music sequencer software. External instruments like a piano can be synced with a sequencer to keep everything locked to the same tempo, while computer DAWs take non-linear composition to expansive new levels. The most common sequencers music producers use today, however, are the DAW’s piano roll and step sequencers.</p> <h2 id="web-apps" tabindex="-1">Web-apps <a class="header-anchor" href="#web-apps" aria-label="Permalink to "Web-apps""></a></h2> <p><a href="./ambience/">Ambient drone box</a> is an experiment where simplex noise meets generative music.</p> <p><a href="./pendulums/">Pendulums</a> are fun. Game like <a href="./matter/">Matter simulation</a> is even more fun. But orderly <a href="./bounce/">Bouncers</a> are a bit more pleasant to listen to. 
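</p> <p>To make the step-sequencing idea described above concrete, here is a minimal sketch using the plain Web Audio API (an illustration only, not how the apps above are built): a fixed pattern of MIDI notes is stepped at a steady rate, and each step triggers a short beep.</p> <pre><code>// Minimal 8-step sequencer sketch (0 means a rest)
const ctx = new AudioContext()
const pattern = [60, 0, 63, 0, 67, 0, 63, 0]  // one bar of eighth notes
const bpm = 120
let step = 0

setInterval(() => {
  const note = pattern[step % pattern.length]
  if (note) {
    const osc = ctx.createOscillator()
    const gain = ctx.createGain()
    osc.frequency.value = 440 * 2 ** ((note - 69) / 12) // MIDI note number to Hz
    gain.gain.setValueAtTime(0.2, ctx.currentTime)
    gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.2)
    osc.connect(gain).connect(ctx.destination)
    osc.start()
    osc.stop(ctx.currentTime + 0.25)
  }
  step++
}, (60 / bpm / 2) * 1000) // eighth-note duration in milliseconds
</code></pre> <p>A real sequencer would schedule events slightly ahead of time on the audio clock instead of relying on setInterval, which drifts, but the data flow is the same: a pattern, a clock and a sound source.</p> <p>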
For weird vibes go for <a href="./numbers/">Number sequences</a> and English words sonification.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/gen.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Fretboard calculator]]></title> <link>https://chromatone.center/practice/pitch/fretboard/</link> <guid>https://chromatone.center/practice/pitch/fretboard/</guid> <pubDate>Fri, 09 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[A tool to get distances between frets for any scale length of any string instrument]]></description> <content:encoded><![CDATA[<FretboardTool :instruments="$frontmatter.instruments" /><save-svg svg="fretboard" />]]></content:encoded> <enclosure url="https://chromatone.center/fretboard.svg" length="0" type="image/svg"/> </item> <item> <title><![CDATA[Sequencing]]></title> <link>https://chromatone.center/practice/sequencing/</link> <guid>https://chromatone.center/practice/sequencing/</guid> <pubDate>Fri, 09 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Digital music composition]]></description> <content:encoded><![CDATA[<h2 id="sequencing" tabindex="-1">Sequencing <a class="header-anchor" href="#sequencing" aria-label="Permalink to "Sequencing""></a></h2> <p><strong>Sequencing</strong> is the stringing together of a precisely-timed sequence of commands to the computer to make sound.</p> <p>The commands can be things like “play an audio sample” or “send a note-on command on this MIDI channel to this MIDI port” or “select the instrument for a MIDI channel”. In the old days, there was a clear distinction between sequencers, which knew how to record, edit and play back sequences of MIDI commands, but had no ability to record or play back sound on their own, and sound sources, which knew how to respond to MIDI commands by producing actual sound. These days, most applications can do at least something of both.</p> <h2 id="sequencers" tabindex="-1">Sequencers <a class="header-anchor" href="#sequencers" aria-label="Permalink to "Sequencers""></a></h2> <p>A music sequencer is a tool that allows you to program and playback sequences of notes, rhythms, and effects automatically instead of performing or recording each part in real-time. Sequencers don’t generate their own sounds. Rather, they send MIDI and CV information to trigger other instruments or effect parameters.</p> <p>When programming a sequencer, you typically have control over several things: note/pitch, note length, pattern length, velocity, effect parameters and more. In other words, sequencers allow you to program a performance over a specified period of time.</p> <p>Sequencers are at the core of modern electronic music production, with arrangements built from loops of sequenced drum beats and synthesizer patterns. They’re also useful as an accompaniment for live musicians.</p> <p>Sequencers take many different forms, both in music sequencer hardware and music sequencer software. External instruments like a piano can be synced with a sequencer to keep everything locked to the same tempo, while computer DAWs take non-linear composition to expansive new levels. The most common sequencers music producers use today, however, are the DAW’s piano roll and step sequencers.</p> <h2 id="web-apps" tabindex="-1">Web-apps <a class="header-anchor" href="#web-apps" aria-label="Permalink to "Web-apps""></a></h2> <p><a href="./ambience/">Ambient drone box</a> is an experiment where simplex noise meets generative music.</p> <p><a href="./pendulums/">Pendulums</a> are fun. 
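</p> <p>The "note-on command on this MIDI channel" mentioned in the sequencing description above is just a three-byte message. As a minimal sketch (assuming the browser exposes at least one output port via the Web MIDI API), sending one looks like this:</p> <pre><code>// Send a middle C note-on, then a note-off half a second later
async function playMiddleC() {
  const access = await navigator.requestMIDIAccess()
  const output = [...access.outputs.values()][0] // first available MIDI output
  if (!output) return
  const channel = 0, note = 60, velocity = 100
  output.send([0x90 | channel, note, velocity])                   // note-on now
  output.send([0x80 | channel, note, 0], performance.now() + 500) // note-off in 500 ms
}
playMiddleC()
</code></pre> <p>A sequencer is essentially a timed stream of such messages, one per step.</p> <p>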
Game like <a href="./matter/">Matter simulation</a> is even more fun. But orderly <a href="./bounce/">Bouncers</a> are a bit more pleasant to listen to. For weird vibes go for <a href="./numbers/">Number sequences</a> and English words sonification.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/anton-shuvalov.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Shop]]></title> <link>https://chromatone.center/shop/</link> <guid>https://chromatone.center/shop/</guid> <pubDate>Tue, 06 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[Durable vinyl stickers for musical instruments and other printed and printable music theory memos]]></description> <enclosure url="https://chromatone.center/shop.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Paper]]></title> <link>https://chromatone.center/practice/visual/paper/</link> <guid>https://chromatone.center/practice/visual/paper/</guid> <pubDate>Mon, 05 Apr 2021 00:00:00 GMT</pubDate> <description><![CDATA[A 8 channel MIDI visualization app, simulating drawing on a fading out virtual paper]]></description> <enclosure url="https://chromatone.center/paper.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Jam]]></title> <link>https://chromatone.center/practice/jam/</link> <guid>https://chromatone.center/practice/jam/</guid> <pubDate>Tue, 30 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[A visual guide for collaborative music events]]></description> <content:encoded><![CDATA[<p>Apps for improvisational performances and casual jams. You can use our <a href="./session/">Jam Session</a> to display all the vital parameters of a session. Or put your device with <a href="./table/">Jam Table</a> app on the screen in front of your synths and players. Or press the Random button in <a href="./random/">Random Jam</a> app and play to a cleverly generated set of BPM, tonic note and a scale. We can add more for your need - feel free to <a href="./../../contacts/">contact us</a>.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/hans-vivek.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Jam session]]></title> <link>https://chromatone.center/practice/jam/session/</link> <guid>https://chromatone.center/practice/jam/session/</guid> <pubDate>Tue, 30 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[A visual guide for collaborative music events]]></description> <content:encoded><![CDATA[<JamSession />]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Sound laboratory]]></title> <link>https://chromatone.center/practice/experiments/lab/</link> <guid>https://chromatone.center/practice/experiments/lab/</guid> <pubDate>Thu, 25 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[An modular web app to explore sound synthesis and processing right in the browser]]></description> <enclosure url="https://chromatone.center/lab.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Visuals]]></title> <link>https://chromatone.center/practice/visual/</link> <guid>https://chromatone.center/practice/visual/</guid> <pubDate>Thu, 25 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[Apps that don't produce sound, but show pictures and react to sound or midi]]></description> <content:encoded><![CDATA[<p>Chromatone can be very precise in assigning colors to frequencies and we can use it to analyze and communicate music. 
But we can use this new music communication layer as a base for new visual explorations. Visual apps reacting to audio and MIDI are created here.</p> <p>One of the most complex no-build client-side Vue JS apps in our library is such a visualization of the incoming MIDI signals. <a href="./paper/">Midi-paper</a> is a multichannel reactive 2D MIDI visualization canvas with more than 8 layers of meaningful minimalistic vector graphics appearing on the screen. Made with Paper.js and Vue it's lightweight and efficient. Layers are pluggable and the app is self-contained in a folder nicely.</p> <p><a href="./hydra/">Hydra synth</a> is just a minimalistic experiment to run Hydra - the JS tool to live code shader effects with quite simple syntax. Minimal working prototype is here, now anyone can use it as a starter template for their own creations. What direction will this take at Chromatone - we'll see sometime soon.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Computer notation]]></title> <link>https://chromatone.center/theory/notes/computer/</link> <guid>https://chromatone.center/theory/notes/computer/</guid> <pubDate>Fri, 12 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[Ways to describe, store and communicate musical notes in digital realm]]></description> <content:encoded><![CDATA[<p>Computers opened up the diverse methods for describing, reproducing, and communicating music in the digital era. This includes MIDI, Piano Roll, ABC notation, and other digital notation systems. As we continue to innovate and experiment with technology, these tools are not only used to transcribe traditional music but also to create new forms of sonic expression.</p> <p>One of the most commonly used computer notations is MIDI (Musical Instrument Digital Interface). <a href="./midi/">MIDI</a> is a protocol that allows computers, synthesizers, MIDI controllers, sound cards, samplers and drum machines to control one another and exchange system data. It does not contain any sounds, but instructions that tell a device what to do.</p> <p>Another important computer notation is the <a href="./piano-roll/">Piano roll</a>. In modern digital audio workstations, the piano roll is a virtual grid representing time on the horizontal axis and MIDI notes on the vertical axis. This interface allows for precise control over the pitch, duration, and timing of notes. It has become a fundamental tool in digital music production, enabling composers to write, edit, and arrange their work in a visual interface.</p> <p><a href="./abc/">ABC-notation</a> is a simple yet powerful ASCII musical notation for folk and traditional music. It was designed as a language for notating music in plain text files. The simplicity of ABC notation makes it ideal for sharing tunes through email and over the internet. <a href="./ring-tone/">Ring Tone Text Transfer Language</a> is another example of compact ASCII-based computer notation.</p> <p>As we delve into the world of digital music notation, we also encounter innovative ways to represent music, such as colorizing staff notation or piano rolls. These methods provide a visual way to understand the tonal relationships and structures within a piece of music. 
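</p> <p>A piano roll, as described above, is essentially a list of note events mapped onto a time/pitch grid. Here is a minimal sketch of that data structure (hypothetical field names, purely for illustration):</p> <pre><code>// Each note: MIDI pitch, start time and duration in beats, velocity 0-127
const notes = [
  { midi: 60, start: 0, duration: 1, velocity: 100 },
  { midi: 64, start: 1, duration: 1, velocity: 90 },
  { midi: 67, start: 2, duration: 2, velocity: 80 },
]

// Project the notes onto screen coordinates for drawing
const pxPerBeat = 40, rowHeight = 8
const rects = notes.map(n => ({
  x: n.start * pxPerBeat,        // time runs along the horizontal axis
  y: (127 - n.midi) * rowHeight, // higher pitches are drawn higher up
  width: n.duration * pxPerBeat,
  height: rowHeight,
}))
</code></pre> <p>Colorizing such a roll then takes only one more step: pick each rectangle's fill from the note's pitch class.</p> <p>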
For instance, the Chromatone system uses colors to denote different pitches, creating a vibrant and intuitive musical landscape.</p> <p>In conclusion, computer notation is a vast and evolving field, opening up new possibilities for how we create, interact with, and understand music. As technology continues to advance, we can only imagine the future innovations that will further revolutionize the way we notate and perceive music.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/FL.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Tutorship]]></title> <link>https://chromatone.center/tutor/</link> <guid>https://chromatone.center/tutor/</guid> <pubDate>Sat, 06 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[Personal guidance through complexities of music with easy to grasp visuals and web apps]]></description> <content:encoded><![CDATA[<!-- <script setup> import Tutor from './Tutor.vue' </script> <Tutor /> --> ]]></content:encoded> <enclosure url="https://chromatone.center/wes-hicks.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Emancipation of dissonance]]></title> <link>https://chromatone.center/theory/intervals/emancipation/</link> <guid>https://chromatone.center/theory/intervals/emancipation/</guid> <pubDate>Thu, 04 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[The process of gradual acceptance of the more dissonant intervals as consonant and musical]]></description> <content:encoded><![CDATA[<p>Schoenberg <a href="https://digital.library.unt.edu/ark:/67531/metadc9855/m2/1/high_res_d/dissertation.pdf" target="_blank" rel="noreferrer">believed</a> that the modern major and minor modes were artificial, produced by historical evolution rather than by or through nature. His first definition of the minor mode revolves around the notion of this mode as "synthetic" or "a product of art."</p> <p>Though Schoenberg believed that the major-minor system had evolved historically, it was not present in nature but rather had undergone a transformation to last with it. The major mode, especially, had evolved to include all of the nondiatonic notes of the seven church modes that were “constructed on the seven diatonic tones of our major scale”. Here, Schoenberg defends his notion that each bass note can impose its own overtones, thus becoming the root of each chord; though they may be construed as “artificial,” they are not because they imitate a “prototype” in nature, the overtone series.</p> <p>The major and minor modes are the simplification of an earlier modal system, with the addition of “nondiatonic phenomena.” In Harmonielehre, Schoenberg wrote of this central premise:</p> <blockquote> <p>If we sum up the characteristics of the church modes, we get major and minor plus a number of nondiatonic phenomena. And the way in which the nondiatonic events of one church mode were carried over to the other modes I conceive as the process by which our two present-day modes (major and minor) crystallized out of the church modes. Accordingly, major and minor contain all those nondiatonic possibilities inherently, by virtue of this historical synthesis</p> </blockquote> <p>Schoenberg characterized nondiatonic phenomena in the alteration of chords as a continuation of the major-minor system, and eventually taught that there is no difference between consonance and dissonance. 
This argument resulted in his famous theory of the ‘emancipation of the dissonance,’ a concept Schoenberg borrowed from Rudolph Louis’s Der Widerspruch in der Musik, which addresses historical connotations not intrinsic in the original meanings of consonance and dissonance. When working out the ‘emancipation of the dissonance,’ Schoenberg was “attacking a structural-ornamental distinction that claims to be valid for all music, not distinctions appropriate to individual pieces, styles or composers.”</p> <blockquote> <p>There are, then, no non-harmonic tones, no tones foreign to harmony, but merely tones foreign to the harmonic system. Passing tones, changing tones, suspensions, etc., are, like sevenths and ninths, nothing else but attempts to include in the possibilities of tones sounding together – these are of course, by definition, harmonies – something that sounds similar to the more remote overtones. Whoever gives rules for their use is describing, at best, the ways in which they are most generally used. He does not have the right, though, to claim that he has then precisely separated those possibilities in which they sound good from those in which they sound bad.</p> <p>"Harmonielehre"</p> </blockquote> <p>Turning points, pivot tones, neutralization, chromatic substitutes, and nondiatonic phenomena are concepts Schoenberg began teaching to address the changing context of dissonance in late nineteenth-century harmonic theory. In his chapter titled “At the Frontiers of Tonality,” Schoenberg began to illustrate what he called “vagrant harmonies,” defining the diminished triad, diminished seventh chords, and the augmented sixth chord and explaining how each of these chords functions in harmony.</p> <p>Just as the harmonic series was and is used as a justification for consonance, such as by Rameau, among others, the harmonic series is often used as physical or psychoacoustic justification for the gradual emancipation of intervals and chords found further and further up the harmonic series over time, such as is argued by Henry Cowell in defense of his tone clusters. Some argue further that they are not dissonances, but consonances higher up the harmonic series and thus more complex. Chailley (1951, 12); cited in Nattiez 1990 gives the following diagram, a specific timeline he proposes:</p> <p><img src="./Chailley_harmonic_series_emancipationt.png" alt=""></p> <h3 id="cooper-timeline" tabindex="-1">Cooper timeline <a class="header-anchor" href="#cooper-timeline" aria-label="Permalink to "Cooper timeline""></a></h3> <p><img src="./Overtone_series_and_Western_music_development.png" alt=""></p> <p>Cooper (1973, 6-7) proposes the following timeline:</p> <ul> <li>A. unison and octave singing (magadizing) in Greek music and Ambrosian and Greek chant,</li> <li>B. parallel fourths and fifths in organum, from c. 850</li> <li>C. triadic music; from c. 1400</li> <li>D. chordal seventh, from c. 1600</li> <li>E. chordal ninth, from c. 1750</li> <li>F. whole-tone scale, from c. 1880</li> <li>G. 
total chromaticism, twelve-tone technique, and microtones in the early 20th-century.</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/Chailley_harmonic_series_emancipationt.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Experiments]]></title> <link>https://chromatone.center/practice/experiments/</link> <guid>https://chromatone.center/practice/experiments/</guid> <pubDate>Tue, 02 Mar 2021 00:00:00 GMT</pubDate> <description><![CDATA[Standalone apps and external music resources]]></description> <content:encoded><![CDATA[<p>Here's our collection of various web music experiments built by us or other creators.</p> <p>The <a href="./lab/">Sound Laboratory</a> is another powerful standalone tool to explore basic sound synthesis and effect chains. Some even more old experiments are at <a href="./dev/">Dev</a>.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/experiments.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Sensory dissonance]]></title> <link>https://chromatone.center/theory/intervals/dissonance/</link> <guid>https://chromatone.center/theory/intervals/dissonance/</guid> <pubDate>Mon, 15 Feb 2021 00:00:00 GMT</pubDate> <description><![CDATA[The rather objective approach to pitch interval consonance and dissonance measure to model, calculate and extract]]></description> <content:encoded><""></a></h3> <p>Western musical styles use a large variety of chords and vertical sonorities. Based on objective acoustical properties, chords can be situated on a dissonant-consonant continuum. While this might to some extent converge with the unpleasant-pleasant continuum, subjective liking might diverge for various chord forms from music across different styles. Our study aimed to investigate how well appraisals of the roughness and pleasantness dimensions of isolated chords taken from real-world music are predicted by Parncutt’s established model of sensory dissonance. Furthermore, we related these subjective ratings to style of origin and acoustical features of the chords as well as musical sophistication of the raters. Ratings were obtained for chords deemed representative of the harmonic language of three different musical styles (classical, jazz and avant-garde music), plus randomly generated chords. Results indicate that pleasantness and roughness ratings were, on average, mirror opposites; however, their relative distribution differed greatly across styles, reflecting different underlying aesthetic ideals. Parncutt’s model only weakly predicted ratings for all but Classical chords, suggesting that listeners’ appraisal of the dissonance and pleasantness of chords bears not only on stimulus-side but also on listener-side factors. 
Indeed, we found that levels of musical sophistication negatively predicted listeners’ tendency to rate the consonance and pleasantness of any one chord as coupled measures, suggesting that musical education and expertise may serve to individuate how these musical dimensions are apprehended.</p> <p><a href="/public/media/pdf/sensory%20dissonance.pdf">Download the PDF article</a></p> <h3 id="an-article-by-naithan-bosse" tabindex="-1"><a href="https://www.naithan.com/sensory-dissonance/" target="_blank" rel="noreferrer">An article by Naithan Bosse</a> <a class="header-anchor" href="#an-article-by-naithan-bosse" aria-label="Permalink to "[An article by Naithan Bosse](https://www.naithan.com/sensory-dissonance/)""></a></h3> <p>While completing my doctoral studies at the University of Calgary, I wrote a Max external, nb.dissonance, to estimate the amount of “sensory dissonance” created for any input chord. The external is based on Sean Ferguson’s method for estimating sensory dissonance detailed in the document portion of his doctoral thesis, Concerto for Piano and Orchestra (2000), as well as Richard Parncutt’s descriptions in Harmony: A Psychoacoustical Approach (1989). This post summarizes Ferguson’s method for estimating sensory dissonance and includes examples of how I experimented with the concept of sensory dissonance during the early stages of composing my dissertation composition Through a Window. My intention in writing this post is mainly to solidify my understanding of the methods described by Ferguson and Parncutt and to recommend these sources to anyone interested in the topics described below.</p> <h3 id="sensory-dissonance-feature-extraction-a-case-study-by-anna-terzaroli" tabindex="-1"><a href="https://easychair.org/publications/open/qSM8" target="_blank" rel="noreferrer">Sensory Dissonance feature extraction: a case study</a> by Anna Terzaroli <a class="header-anchor" href="#sensory-dissonance-feature-extraction-a-case-study-by-anna-terzaroli" aria-label="Permalink to "[Sensory Dissonance feature extraction: a case study](https://easychair.org/publications/open/qSM8) by Anna Terzaroli""></a></h3> <p>An audio feature can become relevant as a musical feature. This paper focuses on the “Sensory Dissonance” audio feature and its use as a musical parameter useful to analyze and compose music of all genres. It is possible by developing a software tool able to detect the presence of dissonance understood as Sensory Dissonance, to quantify the dissonance and then to draw a graphic function of the traced dissonance. This function is placed under the sound which it relates, while the music signal may be written according to the western notation system. The obtained curve does not only provide information concerning the degree of dissonance: it also allows a deeper reading of the entire analyzed musical work.</p> <p><a href="/public/media/pdf/Sensory_Dissonance_feature_extraction_a_case_study.pdf">Download PDF article</a></p> <h3 id="relating-tuning-and-timbre-by-william-a-sethares" tabindex="-1"><a href="https://sethares.engr.wisc.edu/consemi.html" target="_blank" rel="noreferrer">Relating Tuning and Timbre</a> by William A. Sethares <a class="header-anchor" href="#relating-tuning-and-timbre-by-william-a-sethares" aria-label="Permalink to "[Relating Tuning and Timbre](https://sethares.engr.wisc.edu/consemi.html) by William A. 
Sethares""></a></h3> <p>If you've ever attempted to play music in weird tunings (where "weird" means anything other than 12 tone equal temperament), then you've probably noticed that certain timbres (or tones) sound good in some scales and not in others. 17 and 19 tone equal temperament are easy to play in, for instance, because many of the standard timbres in synthesizers sound fine in these tunings. I remember when I first played in 16 tone. I had to audition hundreds of sounds before I found a few good timbres. When I tried to play in 10 tone, though, none of the timbres in my synthesizers sounded good. This article explains why this happens, and shows how to design timbres and scales that complement each other. This suggests a way to design new musical instruments with unusual timbres that can play consonantly in unusual scales.</p> <p><a href="/public/media/pdf/tuning-timbre-spectrum-scale.pdf">Download the book "Tuning, Timbre, Spectrum, Scale"</a></p> <h3 id="sensory-perceptions-of-harmony-dissonance-color-and-timbre-creative-insights-in-the-fine-arts-and-daily-communication-by-ellen-gilmer" tabindex="-1"><a href="https://www.amazon.com/Sensory-Perceptions-Harmony-Dissonance-Timbre-ebook/dp/B07JYLY6W2" target="_blank" rel="noreferrer">Sensory Perceptions of Harmony, Dissonance, Color and Timbre: Creative Insights in the Fine Arts and Daily Communication</a> by Ellen Gilmer <a class="header-anchor" href="#sensory-perceptions-of-harmony-dissonance-color-and-timbre-creative-insights-in-the-fine-arts-and-daily-communication-by-ellen-gilmer" aria-label="Permalink to "[Sensory Perceptions of Harmony, Dissonance, Color and Timbre: Creative Insights in the Fine Arts and Daily Communication](https://www.amazon.com/Sensory-Perceptions-Harmony-Dissonance-Timbre-ebook/dp/B07JYLY6W2) by Ellen Gilmer""></a></h3> <p>Although no two people see colors, hear sounds or express thoughts, feelings, desires or realizations in exactly the same way, by sharing our unique perceptions of these sensory stimuli, we can communicate and appreciate the perceptions and viewpoints of others. Even the most dissonant musical chords, painterly strokes or written and spoken words gain values of harmony and acceptance as their rhythms, hues and tonalities grow familiar and vital to us all.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Interval cycles]]></title> <link>https://chromatone.center/theory/intervals/cycles/</link> <guid>https://chromatone.center/theory/intervals/cycles/</guid> <pubDate>Wed, 03 Feb 2021 00:00:00 GMT</pubDate> <description><![CDATA[Collection of pitch classes created from a sequence of the same interval class]]></description> <content:encoded><![CDATA[<p>In music, an <a href="https://en.wikipedia.org/wiki/Interval_cycle" target="_blank" rel="noreferrer">interval cycle</a> is a collection of pitch classes created from a sequence of the same interval class. In other words, a collection of pitches by starting with a certain note and going up by a certain interval until the original note is reached (e.g. starting from C, going up by 3 semitones repeatedly until eventually C is again reached - the cycle is the collection of all the notes met on the way). In other words, interval cycles "unfold a single recurrent interval in a series that closes with a return to the initial pitch class". See: wikt:cycle.</p> <p>Interval cycles are notated by George Perle using the letter "C" (for cycle), with an interval class integer to distinguish the interval. 
Thus the diminished seventh chord would be C3 and the augmented triad would be C4. A superscript may be added to distinguish between transpositions, using 0–11 to indicate the lowest pitch class in the cycle. "These interval cycles play a fundamental role in the harmonic organization of post-diatonic music and can easily be identified by naming the cycle."</p> <p>Interval cycles assume the use of equal temperament and may not work in other systems such as just intonation. For example, if the C4 interval cycle used justly-tuned major thirds it would fall flat of an octave return by an interval known as the diesis. Put another way, a major third above G♯ is B♯, which is only enharmonically the same as C in systems such as equal temperament, in which the diesis has been tempered out.</p> <p>Interval cycles are symmetrical and thus non-diatonic. However, a seven-pitch segment of C7 will produce the diatonic major scale.</p> <p>This is also known as a generated collection. A minimum of three pitches are needed to represent an interval cycle.</p> <p>Cyclic tonal progressions in the works of Romantic composers such as Gustav Mahler and Richard Wagner form a link with the cyclic pitch successions in the atonal music of Modernists such as Béla Bartók, Alexander Scriabin, Edgard Varèse, and the Second Viennese School (Arnold Schoenberg, Alban Berg, and Anton Webern). At the same time, these progressions signal the end of tonality.</p> <p>Interval cycles are also important in jazz, such as in Coltrane changes.</p> <p>"Similarly," to any pair of transpositionally related sets being reducible to two transpositionally related representations of the chromatic scale, "the pitch-class relations between any pair of inversionally related sets is reducible to the pitch-class relations between two inversionally related representations of the semitonal scale." Thus an interval cycle or pair of cycles may be reducible to a representation of the chromatic scale.</p> <p>As such, interval cycles may be differentiated as ascending or descending, with, "the ascending form of the semitonal scale [called] a 'P cycle' and the descending form [called] an 'I cycle'," while, "inversionally related dyads [are called] 'P/I' dyads." P/I dyads will always share a sum of complementation.
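</p> <p>That constant sum is easy to verify: pair an ascending semitone cycle with a descending one and every vertical dyad adds up to the same value modulo 12. A small sketch (using sum 9, the value of the cyclic set from Berg's Lyric Suite mentioned below):</p> <pre><code>// An ascending "P" cycle against a descending "I" cycle, both in semitones
const P = Array.from({ length: 12 }, (_, i) => i)                          // 0 1 2 ... 11
const I = Array.from({ length: 12 }, (_, i) => (((9 - i) % 12) + 12) % 12) // 9 8 7 ... 10

P.map((p, i) => (p + I[i]) % 12) // [9, 9, 9, ...] - every P/I dyad shares the sum 9
</code></pre> <p>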
Cyclic sets are those "sets whose alternate elements unfold complementary cycles of a single interval," that is, an ascending and a descending cycle (for example, the sum-9 cyclic set from Berg's Lyric Suite).</p> <p>In 1920 Berg discovered/created a "master array" of all twelve interval cycles:</p> <pre><code>Berg's Master Array of Interval Cycles

Cycles  P   0 11 10  9  8  7  6  5  4  3  2  1  0
        P I
        I   0  1  2  3  4  5  6  7  8  9 10 11  0
            _______________________________________
 0  0 |     0  0  0  0  0  0  0  0  0  0  0  0  0
11  1 |     0 11 10  9  8  7  6  5  4  3  2  1  0
10  2 |     0 10  8  6  4  2  0 10  8  6  4  2  0
 9  3 |     0  9  6  3  0  9  6  3  0  9  6  3  0
 8  4 |     0  8  4  0  8  4  0  8  4  0  8  4  0
 7  5 |     0  7  2  9  4 11  6  1  8  3 10  5  0
 6  6 |     0  6  0  6  0  6  0  6  0  6  0  6  0
 5  7 |     0  5 10  3  8  1  6 11  4  9  2  7  0
 4  8 |     0  4  8  0  4  8  0  4  8  0  4  8  0
 3  9 |     0  3  6  9  0  3  6  9  0  3  6  9  0
 2 10 |     0  2  4  6  8 10  0  2  4  6  8 10  0
 1 11 |     0  1  2  3  4  5  6  7  8  9 10 11  0
 0  0 |     0  0  0  0  0  0  0  0  0  0  0  0  0
</code></pre> <h2 id="generated-collection" tabindex="-1">Generated collection <a class="header-anchor" href="#generated-collection" aria-label="Permalink to "Generated collection""></a></h2> <p>In diatonic set theory, <a href="https://en.wikipedia.org/wiki/Generated_collection" target="_blank" rel="noreferrer">a generated collection</a> is a collection or scale formed by repeatedly adding a constant interval in integer notation, the generator, also known as an interval cycle, around the chromatic circle until a complete collection or scale is formed. All scales with the deep scale property can be generated by any interval coprime with (in twelve-tone equal temperament) twelve. (Johnson, 2003, p. 83)</p> <p>The C major diatonic collection can be generated by adding a cycle of perfect fifths (C7) starting at F: F-C-G-D-A-E-B = C-D-E-F-G-A-B. Using integer notation and modulo 12: 5 + 7 = 0, 0 + 7 = 7, 7 + 7 = 2, 2 + 7 = 9, 9 + 7 = 4, 4 + 7 = 11. This seven-note segment of the cycle is the C major scale as a generated collection.</p> <p>The C major scale could also be generated using a cycle of perfect fourths (C5), as 12 minus any coprime of twelve is also coprime with twelve: 12 − 7 = 5. B-E-A-D-G-C-F.</p> <p>A generated collection for which a single generic interval corresponds to the single generator or interval cycle used is a MOS (for "moment of symmetry") or well formed generated collection. For example, the diatonic collection is well formed, for the perfect fifth (the generic interval 4) corresponds to the generator 7. Though not all fifths in the diatonic collection are perfect (B-F is a diminished fifth, tritone, or 6), a well formed generated collection has only one specific interval between scale members (in this case 6)—which corresponds to the generic interval (4, a fifth) but not to the generator (7). The major and minor pentatonic scales are also well formed. (Johnson, 2003, p. 83)</p> <p>The generated and well-formed properties were described by Norman Carey and David Clampitt in "Aspects of Well-Formed Scales" (1989) (Johnson, 2003, p. 151). In earlier (1975) work, theoretician Erv Wilson defined the properties of the idea, and called such a scale a MOS, an acronym for "Moment of Symmetry". While unpublished, this terminology became widely known and used in the microtonal music community.
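</p> <p>The generator arithmetic above can be spelled out in a few lines of JavaScript (a sketch for illustration only): repeatedly add the generator modulo 12, then sort the result to read it as a scale.</p> <pre><code>// Generate a collection by stacking a constant interval around the chromatic circle
function generatedCollection(start, generator, size) {
  const pcs = [start]
  for (let i = 1; i < size; i++) pcs.push((pcs[i - 1] + generator) % 12)
  return pcs
}

const fifths = generatedCollection(5, 7, 7)      // [5, 0, 7, 2, 9, 4, 11] = F C G D A E B
const cMajor = [...fifths].sort((a, b) => a - b) // [0, 2, 4, 5, 7, 9, 11] = C D E F G A B
</code></pre> <p>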
<p>For instance, the three-gap theorem implies that every generated collection has at most three different steps, the intervals between adjacent tones in the collection (Carey 2007).</p> <p>A degenerate well-formed collection is a scale in which the generator and the interval required to complete the circle or return to the initial note are equivalent; these include all scales with equal steps, such as the whole-tone scale. (Johnson, 2003, p. 158, n. 14)</p> <p>A <a href="https://en.wikipedia.org/wiki/Bisector_(music)" target="_blank" rel="noreferrer">bisector</a> is a more general concept used to create collections that cannot be generated, while still including all collections which can be generated.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Academy]]></title> <link>https://chromatone.center/academy/</link> <guid>https://chromatone.center/academy/</guid> <pubDate>Tue, 02 Feb 2021 00:00:00 GMT</pubDate> <description><![CDATA[Research and study platform to provide collaborative online education experience for our global community]]></description> <enclosure url="https://chromatone.center/wei-hunag.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Projects]]></title> <link>https://chromatone.center/projects/</link> <guid>https://chromatone.center/projects/</guid> <pubDate>Tue, 02 Feb 2021 00:00:00 GMT</pubDate> <description><![CDATA[Our initiatives and collaborations]]></description> <enclosure url="https://chromatone.center/tamara-bitter.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Chromatone color notation]]></title> <link>https://chromatone.center/theory/notes/color/</link> <guid>https://chromatone.center/theory/notes/color/</guid> <pubDate>Mon, 01 Feb 2021 00:00:00 GMT</pubDate> <description><![CDATA[Different ways to implement the color-frequency equations for writing and reading music]]></description> <content:encoded><![CDATA[<p>There are plenty of possible ways to make the Chromatone system work for written music communication. This whole web site is one big experiment to find the most useful implications of the simple equations. But there's more to explore!</p> <h2 id="colorize-the-staff-notation" tabindex="-1">Colorize the staff notation <a class="header-anchor" href="#colorize-the-staff-notation" aria-label="Permalink to "Colorize the staff notation""></a></h2> <p>The first and most obvious use of color in music is simply coloring the regular staff notation. You can use 12 markers to denote any pitch on paper, and we can modify existing apps and scripts to produce colored sheet music.</p> <img src="./chromatic-scale.svg"> <h2 id="chromatic-hand" tabindex="-1">Chromatic hand <a class="header-anchor" href="#chromatic-hand" aria-label="Permalink to "Chromatic hand""></a></h2> <p>This is an extension of the ancient method of linking finger phalanges and musical notes. We can make it consistent enough to use intuitively after a little practice.</p> <p><img src="./note-hand.svg" alt="Chromatic hand"></p> <h2 id="interval-hand" tabindex="-1">Interval hand <a class="header-anchor" href="#interval-hand" aria-label="Permalink to "Interval hand""></a></h2> <p>The 12 phalanges of the fingers are ideal for reasoning not only about notes, but about intervals too. Assume the tip of your index finger as a tonic note and build any interval, chord, or even scale just with your thumb.
It enables you to practice melodies, progressions and more in any moment, at any place.</p> <p><img src="./hand.svg" alt="Interval hand"></p> <h2 id="colorful-piano-rolls" tabindex="-1">Colorful piano rolls <a class="header-anchor" href="#colorful-piano-rolls" aria-label="Permalink to "Colorful piano rolls""></a></h2> <p>Try the <a href="./../../../practice/midi/roll/">MIDI-roll</a> to look at incoming MIDI visualization.</p> <p>Try the <a href="./../../../practice/pitch/roll/">Pitch-roll</a> to see the main note graph of incoming audio on an endless roll.</p> <h2 id="colorful-spectrogram" tabindex="-1">Colorful spectrogram <a class="header-anchor" href="#colorful-spectrogram" aria-label="Permalink to "Colorful spectrogram""></a></h2> <p>Adding the colors to a regular spectrogram makes you see much more about the musical contents of any sound. You can easily see the fundamental pitch and the colors of all the main overtones for simple sounds.</p> <p>Try the <a href="./../../../practice/pitch/spectrogram/">Colorful spectrogram</a> online now.</p> <p><img src="./hands.svg" alt="Both hands"></p> ]]></content:encoded> <enclosure url="https://chromatone.center/midi-roll.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[School]]></title> <link>https://chromatone.center/school/</link> <guid>https://chromatone.center/school/</guid> <pubDate>Sun, 24 Jan 2021 00:00:00 GMT</pubDate> <description><![CDATA[In-person group classes in short courses as a great step up in content creation]]></description> <enclosure url="https://chromatone.center/photo.jpeg" length="0" type="image/jpeg"/> </item> <item> <title><![CDATA[Support]]></title> <link>https://chromatone.center/support/</link> <guid>https://chromatone.center/support/</guid> <pubDate>Sat, 02 Jan 2021 00:00:00 GMT</pubDate> <description><![CDATA[Share links, contribute code or donate funds to the ongoing research and development]]></description> <content:encoded><![CDATA[<map-globe class="mb-8" :dots="dots" /><p>Chromatone is an open source initiative made not to be a proprietary standard, but to become a wide spread language, used by musicians, visual artists along with music gear companies and app developers all over the world.</p> <h2 id="follow-and-share" tabindex="-1">Follow and share <a class="header-anchor" href="#follow-and-share" aria-label="Permalink to "Follow and share""></a></h2> <p>We have a subreddit to hang around and an Instagram account to tag in your posts. And it's important to let more people join the growing community of visual musicians. So follow us and spread the word about Chromatone through your social media and beyond.</p> <!-- <a href="https://www.producthunt.com/posts/chromatone?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-chromatone" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=381642&theme=neutral" alt="Chromatone - Visual music language to learn, explore and express with | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a> --> <ul> <li><a href="https://instagram.com/chromatone.center/" target="_blank" rel="noreferrer">instagram.com/chromatone.center</a></li> <li><a href="https://reddit.com/r/chromatone" target="_blank" rel="noreferrer">reddit.com/r/chromatone</a></li> </ul> <p>We are those, who learn, teach, explore and produce music with help of the new clean visual language. And it's getting better with every contribution. Say hi and someone will reply. Everything starts from the first step. 
More to build together!</p> <h2 id="contribute-code" tabindex="-1">Contribute code <a class="header-anchor" href="#contribute-code" aria-label="Permalink to "Contribute code""></a></h2> <p>If you're a designer, a JS developer, an audio analysis library author, or know how else the site may be better - contribute code to our open <a href="https://github.com/chromatone" target="_blank" rel="noreferrer">GitHub repository</a>. Or at least press that star button in the top right corner.</p> <h2 id="give-us-a-star" tabindex="-1">Give us a star <a class="header-anchor" href="#give-us-a-star" aria-label="Permalink to "Give us a star""></a></h2> <p>In the future the project will grow big enough to become an Open Collective to be transparent about all the funding and expenses. We need to have at least 100 stars at <a href="https://github.com/chromatone/chromatone.center" target="_blank" rel="noreferrer">the repository on GitHub</a>. Let's do it!</p> <p><a href="https://star-history.com/#chromatone/chromatone.center&Date" target="_blank" rel="noreferrer"><img src="https://api.star-history.com/svg?repos=chromatone/chromatone.center&type=Date" alt="Star History Chart"></a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/diego-catto.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Contacts]]></title> <link>https://chromatone.center/contacts/</link> <guid>https://chromatone.center/contacts/</guid> <pubDate>Fri, 01 Jan 2021 00:00:00 GMT</pubDate> <description><![CDATA[The project and its author]]></description> <content:encoded><![CDATA[<author-card :author="f?.org" /><p>Chromatone is a self-sustaining ecosystem of music learners, music teachers and new tools to learn, practice, compose and perform music visually. The source code for them is open and is developed as an internationally funded social initiative.</p> <p>Our mission is to build a complete visual music ecosystem built by an international community of learners, teachers and performers. With open music labs as the new form of real time creative collaboration.</p> <p>Sales of stickers and printable files bring fuel to feed the research and development process. Charity funding is very welcome! We are building a whole language here - and the wider we can reach, the more we can share the joy of playing Visual Music together.
✨</p> <author-card :author="f?.author" />]]></content:encoded> <enclosure url="https://chromatone.center/javier-balseiro.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Pixel sort]]></title> <link>https://chromatone.center/practice/experiments/pixel-sort/</link> <guid>https://chromatone.center/practice/experiments/pixel-sort/</guid> <pubDate>Sun, 18 Oct 2020 00:00:00 GMT</pubDate> <description><![CDATA[Count and sort pixels into 12 hue buckets of 5 shades.]]></description> <content:encoded><![CDATA[<PixelSort/>]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Chromatic hands]]></title> <link>https://chromatone.center/practice/chroma/hand/</link> <guid>https://chromatone.center/practice/chroma/hand/</guid> <pubDate>Tue, 13 Oct 2020 00:00:00 GMT</pubDate> <description><![CDATA[A way to connect musical notes, colors and your own body and consciousness.]]></description> <content:encoded><![CDATA[<div class="flex"> <ChromaHand v-for="right in [false,true]" :style="{transform: right?`translateX(0px) scaleX(-100%) ` : ''}" :right="right" /> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/hands.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Synthesis]]></title> <link>https://chromatone.center/theory/synthesis/</link> <guid>https://chromatone.center/theory/synthesis/</guid> <pubDate>Sat, 10 Oct 2020 00:00:00 GMT</pubDate> <description><![CDATA[Ways to generate musical sounds out of electric oscillations]]></description> <content:encoded><![CDATA[<youtube-embed video="Y7TesKMSE74" /><youtube-embed video="F1RsE4J9k9w" /><h2 id="types-of-synthesis" tabindex="-1">Types of synthesis <a class="header-anchor" href="#types-of-synthesis" aria-label="Permalink to "Types of synthesis""></a></h2> <ul> <li>Additive</li> <li>Subtractive</li> <li>Wavetable</li> <li>Amplitude modulation (Ring modulation)</li> <li>Frequency modulation</li> <li>Waveshaping <ul> <li>Saturation</li> <li>Wave folding</li> <li>Phase inversion</li> </ul> </li> <li>Hard sync</li> <li>Granular</li> </ul> <youtube-embed video="Gn5yixqjmN8" /><h2 id="dsp" tabindex="-1">DSP <a class="header-anchor" href="#dsp" aria-label="Permalink to "DSP""></a></h2> <p><a href="https://github.com/olilarkin/awesome-musicdsp" target="_blank" rel="noreferrer">https://github.com/olilarkin/awesome-musicdsp</a></p> <h2 id="delays" tabindex="-1">Delays <a class="header-anchor" href="#delays" aria-label="Permalink to "Delays""></a></h2> <p><a href="https://youtu.be/HGqFxjQI3is" target="_blank" rel="noreferrer">https://youtu.be/HGqFxjQI3is</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/35-54d35.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[The Real Book]]></title> <link>https://chromatone.center/theory/notes/staff/real-book/</link> <guid>https://chromatone.center/theory/notes/staff/real-book/</guid> <pubDate>Sun, 13 Sep 2020 00:00:00 GMT</pubDate> <description><![CDATA[History of 20th century modern jazz notation developments]]></description> <content:encoded><![CDATA[<blockquote> <p>Quotes from the deep into the topic podcast by <a href="https://99percentinvisible.org/episode/the-real-book/" target="_blank" rel="noreferrer">99% Invisible</a></p> </blockquote> <p>Since the mid-1970s, almost every jazz musician has owned a copy of the same book. It has a peach-colored cover, a chunky, 1970s-style logo, and a black plastic binding. 
It’s delightfully homemade-looking—like it was printed by a bunch of teenagers at a Kinkos. And inside is the sheet music for hundreds of common jazz tunes—also known as jazz “standards”—all meticulously notated by hand. It’s called the Real Book.</p> <p><img src="./images/history_page_1.jpeg" alt=""></p> <p>But if you were going to music school in the 1970s, you couldn’t just buy a copy of the Real Book at the campus bookstore. Because the Real Book… was <em>illegal</em>. The world’s most popular collection of jazz music was a totally unlicensed publication. It was a self-published book created without permission from music publishers or songwriters. It was duplicated at photocopy shops and sold on street corners, out of the trunks of cars, and under the table at music stores where people used secret code words to make the exchange. The full story of how the Real Book came to be this bootleg bible of jazz is a complicated one. It’s a story about what happens when an insurgent, improvisational art form like jazz gets codified and becomes something that you can learn from a book.</p> <h2 id="the-history-of-fake-books" tabindex="-1">The History of Fake Books <a class="header-anchor" href="#the-history-of-fake-books" aria-label="Permalink to "The History of Fake Books""></a></h2> <p><img src="./images/image011.png" alt=""> <a href="https://www.barnesandnoble.com/w/the-story-of-fake-books-barry-kernfeld/1111519327" target="_blank" rel="noreferrer">Barry Kernfeld</a> is a musicologist who has written a lot about the history of jazz and music piracy. Kernfeld says that long before the Real Book ever came out, jazz musicians were relying on collections of music they called fake books. Kernfeld says that the story of the first fake book began in the 1940s. “A man named George Goodwin in New York City, involved in radio in the early 1940s, was getting a little frustrated with all the intricacies of tracking licensing. And so he invented this thing that he called the Tune-Dex,” explains Kernfeld.</p> <p><img src="./images/TuneDex-600x733.jpeg" alt=""></p> <p>TuneDex card via <a href="https://blog.library.gsu.edu/2010/10/13/popular-music-tune-dex-cards/" target="_blank" rel="noreferrer">Georgia State University Library</a></p> <p>The Tune-Dex was an index card catalog designed for radio station employees to keep track of the songs they were playing on air. On one side the cards had information about a particular song, such as the composer, the publisher, and anything that one would need to know for payment rights. On the other side of the card were a few lines of bite-sized sheet music—just the song’s melody, lyrics, and chords so that radio station employees could glance at it and quickly recall the song. This abbreviated musical notation also made the cards useful to another group of people: working jazz musicians.</p> <p><img src="./images/s-l1600-600x450.jpeg" alt=""></p> <p>As a Black art form, jazz had developed out of a mix of other Black music traditions including spirituals and the blues. By the 1940s, a lot of “jazz” was popular dance music, and many jazz musicians were making their money playing live gigs in small clubs and bars. The standard jazz repertoire was mostly well-known pop songs from Broadway, or New York’s songwriting factory: “Tin Pan Alley.”</p> <YoutubeEmbed video="5WX_fKVWX2s" /><p>Jazz musicians would riff and freestyle over these songs. The art of improvisation has always been a key art form of jazz music. 
But what made the average gigging trumpeter or sax player truly valuable was their ability to play any one of hundreds of songs right there on the spot.</p> <p>To be prepared for any request, musicians would bring stacks and stacks of sheet music to every gig. But lugging around a giant pile of paper could be really cumbersome—this is where the Tune-Dex came in. Someone figured out that you could gather together a bunch of Tune-Dex cards, print copies of them on sheets of paper, add a table of contents and a simple binding, and then sell the finished product directly to musicians in the form of a book. They called them “fake books” because they helped musicians fake their way through unfamiliar songs. These first fake books were cheaper than regular sheet music, and a lot more organized. They became an essential tool for this entire class of working musicians.</p> <h2 id="bootleggers" tabindex="-1">Bootleggers <a class="header-anchor" href="#bootleggers" aria-label="Permalink to "Bootleggers""></a></h2> <p>Musicians loved these new fake books, but the music publishers hated them. They wanted musicians to buy legal sheet music, and so the publishing companies started cracking down on fake book bootleggers. That, of course, didn’t stop the bootleggers and by the 1950s, there were countless illegal fake books in existence, which were being used in nightclubs all across the country.</p> <YoutubeEmbed video="LLDELtfUsaQ" /><p>As helpful as fake books were, they had a lot of problems. They were notoriously illegible and confusingly laid out. The other big problem with these fake books at this point was that the music inside felt really out of date. The fake books hadn’t changed since the mid-40s, but jazz had. Disillusioned by commercial jazz that appealed to mainstream white audiences, a new generation of Black musicians took jazz improvisation to a new level. They experimented with more angular harmonies, technically demanding melodies and blindingly fast tempos. Their new style was called bebop.</p> <YoutubeEmbed video="09BB1pci8_o" /><p>Bebop was just the beginning. Over several decades, jazz exploded into this constellation of different styles. Meanwhile, the economics of jazz shifted too. There were fewer clubs, smaller paychecks, and more university jazz programs with steady teaching gigs. The ivory tower, not the nightclub, increasingly became a place for young musicians to learn, and for established musicians to earn a living. And if you’re going to jazz school, you need jazz books.</p> <p><img src="./images/1599px-WTB_Berklee_3-600x400.jpeg" alt=""></p> <p>Berklee College of Music. Photo by <a href="https://commons.wikimedia.org/wiki/User:Cryptic_C62" title="User:Cryptic C62" target="_blank" rel="noreferrer">Cryptic C62</a></p> <p>The fake books at the time hadn’t kept up with the music. They still contained the same old-fashioned collection of standards with the same old-fashioned collection of chord changes. If a young jazz musician wanted to try and play like Charles Mingus or Sonny Rollins, they weren’t going to learn from a book. That is… until two college kids invented the Real Book.</p> <h2 id="the-two-guys" tabindex="-1">The Two Guys <a class="header-anchor" href="#the-two-guys" aria-label="Permalink to "The Two Guys""></a></h2> <p>In the mid-70s, Steve Swallow began teaching at Boston’s Berklee College of Music, an elite private music school that boasted one of the first jazz performance programs in the country. 
Swallow had only been teaching at Berklee for a few months when two students approached him about a secret project. “I keep referring them to them as ‘the two guys who wrote the book,’ because…they swore me to secrecy. They made me agree that I would not divulge their names,” explains Swallow. The “two guys” wanted to make a new fake book, one that actually catered to the needs of contemporary jazz musicians and reflected the current state of jazz. And they needed Swallow’s help.</p> <p>From the very beginning, the students envisioned the Real Book as a cooler and more contemporary fake book than the stodgy, outdated ones they’d grown up with. They wanted it to include new songs from jazz fusion artists like Herbie Hancock, and free jazz pioneers like Ornette Coleman who were pushing the boundaries of the genre. They also wanted to include the old jazz standards from Broadway and Tin Pan Alley, but they wanted to update those classics with alternate chord changes that reflected the way modern musicians, like Miles Davis, were actually playing them.</p> <YoutubeEmbed video="jy1ICphDYTQ" /><p>Modern jazz musicians had altered a lot of classic standards over the years, with new harmonies and more complex chord changes. And to capture these new sounds, the students spent hours listening to recordings and transcribing what they heard, as best they could. It was a huge undertaking because most of these chord changes had never actually been written down. They weren’t necessarily thinking about it like this at the time, but the students were effectively establishing a new set of standardized harmonies for a handful of classic songs.</p> <p><img src="./images/0af38b0aa71f96f108ae83a243e5de8d.jpeg" alt=""></p> <p><img src="./images/history_page_1_002.jpeg" alt=""></p> <p>The music wasn’t the only part of their new fake book that the students wanted to improve. They also wanted to fix the aesthetic problems with the old fake books, and make something that was nice to look at and easy to read. One of “the two guys” notated all of the music by hand in this very distinctive and expressive script. He also designed and silk-screened the logo on the front cover: “The Real Book,” written in chunky, SchoolHouse Rock-style block letters.</p> <p><img src="./images/Screen-Shot-2021-04-06-at-11.46.19-AM-600x273.png" alt=""></p> <p>By the summer of 1975, the book was done, and the students took it to local photocopying shops where they cranked out hundreds of copies to sell directly to other students and a few local businesses near Berklee. Overnight, almost everyone had to have one. As the Real Book’s notoriety grew, so did the demand. The two students hadn’t printed enough copies to keep up, but it turns out, they didn’t need to. Not long after they created a few hundred copies of the book, bootleg versions began popping up all over the world. The Real Book had taken on a life of its own, and the students ironically found themselves in the same position as the music publishers and songwriters they’d originally cut out of the process, as they watched unlicensed copies of their work get duplicated and sold. After they released the first edition of the Real Book, the students put out two more editions to correct mistakes, and then their work was done. But the Real Book lived on, copied over and over again by new generations of bootleggers. 
And as the number of students in elite conservatory jazz programs continued to swell over the next few decades, the Real Book, with its modern repertoire, reharmonized standards, and beautiful handwriting, became the de-facto textbook for this new legion of jazz students. The unofficial official handbook of jazz.</p> <p><img src="./images/Screen-Shot-2021-04-06-at-11.45.30-AM-600x772.png" alt=""></p> <h2 id="the-real-real-book" tabindex="-1">The Real Real Book <a class="header-anchor" href="#the-real-real-book" aria-label="Permalink to "The Real Real Book""></a></h2> <p>Just like with old fake books, the success of the Real Book was a major problem for music publishers. Some companies released their own fake books, but they never managed to compete with the Real Book. The popularity of the Real Book meant that lots of people weren’t getting paid for their work. But in the mid-2000s, music executive Jeff Schroedl and the publisher Hal Leonard decided, if you can’t beat ’em, join ’em. They went through the Real Book page by page, secured the rights to almost every song, and published a completely legal version. You don’t need to buy the Real Book out of the back of someone’s car anymore. It’s available at your local music shop. They even wanted the same handwriting. Hal Leonard actually hired a copyist to mimic the old Real Book’s iconic script and turn it into a digital font, which means a digital copy of a physical copy of one anonymous Berklee student’s handwriting from the mid-70s will continue to live on for as long as new editions of the book are published.</p> <p><img src="./images/00240221FCz-600x776.jpeg" alt=""></p> <p>The Hal Leonard version of the Real Book</p> <p>When Hal Leonard finally published the legal version of the Real Book in 2004, it was great news if you were a composer with a song in there. You’d finally be getting royalties from the sale of the most popular jazz fake book of all time. But that didn’t totally solve the intellectual property problems with the Real Book. While the legalization of the Real Book did resolve most of its flagrant copyright violations, it didn’t clear up authorship disputes that go back to the early days of jazz. Many jazz songs arise out of collective tinkering and improvising in jam sessions. It’s sometimes quite hard to say who exactly wrote a given song, and power dynamics often impacted whose name actually got listed as an official songwriter. And so there are likely many musicians whose names will never appear on the songs they helped write, even if those songs appear in the legal Real Book.</p> <h2 id="useful-tool-or-reductive-cheat-sheet" tabindex="-1">Useful Tool, or Reductive Cheat Sheet? <a class="header-anchor" href="#useful-tool-or-reductive-cheat-sheet" aria-label="Permalink to "Useful Tool, or Reductive Cheat Sheet?""></a></h2> <p>Even if we put the intellectual property questions aside for a second, fake books like the Real Book still have plenty of critics. Nicholas Payton is a musician and record label owner, and he compares the Real Book to a study guide or a cheat sheet—a way to distill this complicated art form into a manageable packet of digestible information. To Payton, jazz isn’t just information to be learned. It’s a way of thinking and a form of expression. And it’s fundamentally a Black cultural phenomenon that can’t be taken out of its historical context. Payton says that reading books like the Real Book, even going to music school, can really only get you so far. 
If you want to learn to play, at some point you’re going to have to immerse yourself in the culture of the music. For Payton (and many musicians) learning directly from elders, in person, is a crucial part of what it means to really know the art form.</p> <p>There’s also the question of codification, and whether it’s useful to have one songbook filled with definitive versions of all these jazz tunes. Carolyn Wilkins has taught ensembles at Berklee College of Music, and she says that the chords that are written down in the Real Book sometimes get treated like the <em>right</em> way to play a particular song. But even though jazz has all of these “standards,” they’re not supposed to be played in one standard way. As you listen to different recordings of the same song by different jazz artists, it becomes obvious that there’s no one right way to play it. Wilkins says that the Real Book does have its place in jazz education. Over her years at Berklee, she’s seen how it can be a useful starting place as a tool to bring young jazz musicians together. The key, she says, is to treat the Real Book as a starting place. From there you need to go out and explore all the other ways people have played a particular song. “And then ultimately you must find your own way.”</p> <YoutubeEmbed video="1eRNRzyX3ac" /><hr> <ul> <li>Listen to the podcast at <a href="https://99percentinvisible.org/episode/the-real-book/" target="_blank" rel="noreferrer">99percentinvisible.org</a></li> <li><a href="http://biblio3.url.edu.gt/Libros/Real-Book/The_Real_Vol_1.pdf" target="_blank" rel="noreferrer">Download The Real Book (5th edition) [PDF]</a></li> </ul> <p>We see the Chromatone web-site as The Real Book for 21st century. As the old one enabled a whole class of professional jazz musicians, the new one enables the whole majority of casual visual musicians. It's a toolbox for everyone to study, explore and compose music themselves and with friends, at any place. 
And the more open and accessible it is - the more fun it will be for the world to start seeing music.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/images/Screen-Shot-2021-04-06-at-11.46.19-AM-600x273.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Color palette generator]]></title> <link>https://chromatone.center/practice/color/palette/</link> <guid>https://chromatone.center/practice/color/palette/</guid> <pubDate>Wed, 12 Aug 2020 00:00:00 GMT</pubDate> <description><![CDATA[Poline and other tools by meodai]]></description> <content:encoded><![CDATA[<client-only> <color-palette id="palette" class="max-w-60ch m-2" /> <div class="my-4 max-w-90 transform"> <save-svg class="" svg="palette" /> </div> </client-only> <ul> <li>Drag top and bottom rectangles to change Hue, Saturation and Lightness of both start and end colors of your palette.</li> <li>Drag the list of steps to change the number of steps in your palette</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Scale chords]]></title> <link>https://chromatone.center/practice/scale/chords/</link> <guid>https://chromatone.center/practice/scale/chords/</guid> <pubDate>Mon, 15 Jun 2020 00:00:00 GMT</pubDate> <description><![CDATA[Find all available chords for all degrees of any scale]]></description> <content:encoded><![CDATA[<control-scale /><scale-chords /><h2 id="how-to-use-it" tabindex="-1">How to use it <a class="header-anchor" href="#how-to-use-it" aria-label="Permalink to "How to use it""></a></h2> <ol> <li>Select a scale from the drop down menu.</li> <li>Select a tonic pitch with the top keyboard.</li> <li>Get lists of chords fit for each degree of the scale.</li> <li>Play a chord by pressing its block.</li> </ol> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Tabulature]]></title> <link>https://chromatone.center/theory/notes/alternative/tabulature/</link> <guid>https://chromatone.center/theory/notes/alternative/tabulature/</guid> <pubDate>Mon, 30 Mar 2020 00:00:00 GMT</pubDate> <description><![CDATA[Explicit note playing instructions for the instrument]]></description> <content:encoded><![CDATA[<p>Tablature (or tabulature, or tab for short) is a form of musical notation indicating instrument fingering rather than musical pitches. The word tablature originates from the Latin word tabulatura. Tabula is a table or slate, in Latin. To tabulate something means to put it into a table or chart.</p> <p>Tablature is common for fretted stringed instruments such as the lute, vihuela, or guitar, as well as many free reed aerophones such as the harmonica. Tablature was common during the Renaissance and Baroque eras, and is commonly used today in notating many forms of music. Three types of organ tablature were used in Europe: German, Spanish and Italian.</p> <blockquote> <p><img src="./Vihuela-Tab_Fuenllana_1554.jpg" alt=""> Example of numeric vihuela tablature from the book "Orphenica Lyra" by Miguel de Fuenllana (1554).
Red numerals (original) mark the vocal part.</p> </blockquote> <p>While standard notation represents the rhythm and duration of each note and its pitch relative to the scale based on a twelve tone division of the octave, tablature is instead operationally based, indicating where and when a finger should be placed to generate a note, so pitch is denoted implicitly rather than explicitly.</p> <p><img src="./Tuning-chr.png" alt=""></p> <p>Tablature for plucked strings is based upon a diagrammatic representation of the strings and frets of the instrument, keyboard tablature represents the keys of the instrument, and woodwind tablature shows whether each of the fingerholes is to be closed or left open.</p> <p><img src="./capirola2.png" alt=""></p> <p><img src="./guitar-tabs.svg" alt=""></p> <p><img src="./Star-Wars-Ukulele-The-Imperial-March.jpg" alt=""></p> <h3 id="drum-tabs" tabindex="-1">Drum tabs <a class="header-anchor" href="#drum-tabs" aria-label="Permalink to "Drum tabs""></a></h3> <p>Instead of the durational notes normally seen on a piece of sheet music, drum tab uses proportional horizontal placement to indicate rhythm and vertical placement on a series of lines to represent which drum from the drum kit to stroke. Drum tabs frequently depict drum patterns.</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>HH|x-x-x-x-x-x-x-x-||</span></span> <span class="line"><span> S|----o-------o---||</span></span> <span class="line"><span> B|o-------o-------||</span></span> <span class="line"><span> 1 + 2 + 3 + 4 +</span></span></code></pre> </div><h3 id="keyboard-tabulature" tabindex="-1">Keyboard tabulature <a class="header-anchor" href="#keyboard-tabulature" aria-label="Permalink to "Keyboard tabulature""></a></h3> <p>The modern keyboard tabs are built using any monospaced font. They show each octave of the piano on a separate lane (with "R" and "L" for the right and left hand). Letters show the note played. ">" symbols show note durations in fixed time periods, divided by "|" symbols.
The actual chords are placed above the tabs.</p> <div class="language- vp-adaptive-theme"><button title="Copy Code" class="copy"></button><span class="lang"></span><pre class="shiki shiki-themes github-light github-dark vp-code" tabindex="0" v-pre=""><code><span class="line"><span>Penny Lane - The Beatles</span></span> <span class="line"><span>Tabbed By: Ebon-Ivor</span></span> <span class="line"><span></span></span> <span class="line"><span> C Am7 Dm7 Gsus7</span></span> <span class="line"><span> + 1 + 1 + 2 + 3 + 4 + 1 + 2 + 3 + 4 +</span></span> <span class="line"><span>R5|---c-d-e-|e>>d>>c---c>>>>---------|------------------c-d-e>|</span></span> <span class="line"><span>R4|g>>------|--------b------b>>a>>---|a-----------------a>>>>>|</span></span> <span class="line"><span>R4|---------|g>>>>>g>>>>>g>>>>>---g>>|--g-f>>>>>>>>>>g>>f>>>>>|</span></span> <span class="line"><span>R4|---------|e>>>>>e>>>>>e>>>>>e>>>>>|------------------------|</span></span> <span class="line"><span>R4|---------|------------------c>>>>>|c>>>>>c>>>>>c>>>>>------|</span></span> <span class="line"><span>R3|---------|------------------------|a>>>>>a>>>>>a>>>>>------|</span></span> <span class="line"><span>L3|---------|c>>>>>------------------|------------------------|</span></span> <span class="line"><span>L2|---------|------b>>>>>a>>>>>g>>>>>|f>>>>>d>>>>>g>>>>>g>>>>>|</span></span> <span class="line"><span></span></span> <span class="line"><span> C Am7 Cm7 Am7b5</span></span></code></pre> </div><p><a href="https://tabnabber.com/view_Tab.asp?tabID=11885&sArtist=Beatles%2C%20The&sName=Penny%20Lane" target="_blank" rel="noreferrer">tabnabber.com</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/capirola2.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Dev experiments]]></title> <link>https://chromatone.center/practice/experiments/dev/</link> <guid>https://chromatone.center/practice/experiments/dev/</guid> <pubDate>Tue, 04 Feb 2020 00:00:00 GMT</pubDate> <description><![CDATA[The playground for the color music theory education and exploration apps]]></description> <content:encoded><![CDATA[<h2 id="all-related-development-experiments" tabindex="-1">All related development experiments <a class="header-anchor" href="#all-related-development-experiments" aria-label="Permalink to "All related development experiments""></a></h2> <ul> <li> <h4 id="tonal-array" tabindex="-1">Tonal array <a class="header-anchor" href="#tonal-array" aria-label="Permalink to "Tonal array""></a></h4> <ul> <li><a href="https://array.chromatone.center/" target="_blank" rel="noreferrer">https://array.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/tonal-array" target="_blank" rel="noreferrer">https://github.com/chromatone/tonal-array</a></li> </ul> </li> <li> <h4 id="dev" tabindex="-1">Dev <a class="header-anchor" href="#dev" aria-label="Permalink to "Dev""></a></h4> <ul> <li><a href="https://dev.chromatone.center/" target="_blank" rel="noreferrer">https://dev.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/apps" target="_blank" rel="noreferrer">https://github.com/chromatone/apps</a></li> </ul> </li> <li> <h4 id="jam" tabindex="-1">Jam <a class="header-anchor" href="#jam" aria-label="Permalink to "Jam""></a></h4> <ul> <li><a href="https://jam.chromatone.center/" target="_blank" rel="noreferrer">https://jam.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/jam-session" target="_blank" 
rel="noreferrer">https://github.com/chromatone/jam-session</a></li> </ul> </li> <li> <h4 id="lab" tabindex="-1">Lab <a class="header-anchor" href="#lab" aria-label="Permalink to "Lab""></a></h4> <ul> <li><a href="https://lab.chromatone.center/" target="_blank" rel="noreferrer">https://lab.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/chromatone-lab" target="_blank" rel="noreferrer">https://github.com/chromatone/chromatone-lab</a></li> </ul> </li> <li> <h4 id="midi-monitor" tabindex="-1">MIDI-monitor <a class="header-anchor" href="#midi-monitor" aria-label="Permalink to "MIDI-monitor""></a></h4> <ul> <li><a href="https://midi.chromatone.center/" target="_blank" rel="noreferrer">https://midi.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/midi-monitor" target="_blank" rel="noreferrer">https://github.com/chromatone/midi-monitor</a></li> </ul> </li> <li> <h4 id="noise-lab" tabindex="-1">Noise lab <a class="header-anchor" href="#noise-lab" aria-label="Permalink to "Noise lab""></a></h4> <ul> <li><a href="https://noise.chromatone.center/" target="_blank" rel="noreferrer">https://noise.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/noise-lab" target="_blank" rel="noreferrer">https://github.com/chromatone/noise-lab</a></li> </ul> </li> <li> <h4 id="paper" tabindex="-1">Paper <a class="header-anchor" href="#paper" aria-label="Permalink to "Paper""></a></h4> <ul> <li><a href="https://paper.chromatone.center/" target="_blank" rel="noreferrer">https://paper.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/midi-paper" target="_blank" rel="noreferrer">https://github.com/chromatone/midi-paper</a></li> </ul> </li> <li> <h4 id="see-chroma" tabindex="-1">See Chroma <a class="header-anchor" href="#see-chroma" aria-label="Permalink to "See Chroma""></a></h4> <ul> <li><a href="https://see.chromatone.center/" target="_blank" rel="noreferrer">https://see.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/see.chromatone.center" target="_blank" rel="noreferrer">https://github.com/chromatone/see.chromatone.center</a></li> </ul> </li> <li> <h4 id="pitch-table" tabindex="-1">Pitch table <a class="header-anchor" href="#pitch-table" aria-label="Permalink to "Pitch table""></a></h4> <ul> <li><a href="https://table.chromatone.center/" target="_blank" rel="noreferrer">https://table.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/pitch-table" target="_blank" rel="noreferrer">https://github.com/chromatone/pitch-table</a></li> </ul> </li> <li> <h4 id="touchme" tabindex="-1">TouchMe <a class="header-anchor" href="#touchme" aria-label="Permalink to "TouchMe""></a></h4> <ul> <li><a href="https://touchme.chromatone.center/" target="_blank" rel="noreferrer">https://touchme.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/touchme" target="_blank" rel="noreferrer">https://github.com/chromatone/touchme</a></li> </ul> </li> <li> <h4 id="trombone" tabindex="-1">Trombone <a class="header-anchor" href="#trombone" aria-label="Permalink to "Trombone""></a></h4> <ul> <li><a href="https://trombone.chromatone.center/#/" target="_blank" rel="noreferrer">https://trombone.chromatone.center/#/</a></li> <li><a href="https://github.com/chromatone/pink-trombone" target="_blank" rel="noreferrer">https://github.com/chromatone/pink-trombone</a></li> </ul> </li> <li> <h4 id="tuner" tabindex="-1">Tuner <a class="header-anchor" href="#tuner" aria-label="Permalink to "Tuner""></a></h4> <ul> <li><a 
href="https://tuner.chromatone.center/#/" target="_blank" rel="noreferrer">https://tuner.chromatone.center/#/</a></li> <li><a href="https://github.com/chromatone/tuner" target="_blank" rel="noreferrer">https://github.com/chromatone/tuner</a></li> </ul> </li> <li> <h4 id="tunings" tabindex="-1">Tunings <a class="header-anchor" href="#tunings" aria-label="Permalink to "Tunings""></a></h4> <ul> <li><a href="https://tunings.chromatone.center" target="_blank" rel="noreferrer">https://tunings.chromatone.center</a></li> <li><a href="https://github.com/chromatone/svg-tunings" target="_blank" rel="noreferrer">https://github.com/chromatone/svg-tunings</a></li> </ul> </li> <li> <h4 id="circle-of-tones" tabindex="-1">Circle of tones <a class="header-anchor" href="#circle-of-tones" aria-label="Permalink to "Circle of tones""></a></h4> <ul> <li><a href="https://circle.chromatone.center/" target="_blank" rel="noreferrer">https://circle.chromatone.center/</a></li> <li><a href="https://github.com/chromatone/circle-of-tones" target="_blank" rel="noreferrer">https://github.com/chromatone/circle-of-tones</a></li> </ul> </li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/devs.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI Rows]]></title> <link>https://chromatone.center/practice/experiments/rows/</link> <guid>https://chromatone.center/practice/experiments/rows/</guid> <pubDate>Sun, 02 Feb 2020 00:00:00 GMT</pubDate> <description><![CDATA[Measurewise MIDI recorder]]></description> <content:encoded><![CDATA[<client-only> <MidiRows/> </client-only> ]]></content:encoded> </item> <item> <title><![CDATA[Hydra synth]]></title> <link>https://chromatone.center/practice/visual/hydra/</link> <guid>https://chromatone.center/practice/visual/hydra/</guid> <pubDate>Sun, 02 Feb 2020 00:00:00 GMT</pubDate> <description><![CDATA[<script setup import { defineClientComponent } from 'vitepress' const HydraSynth = defineClientCo]]></description> <content:encoded><![CDATA[<HydraSynth/><h2 id="hydra" tabindex="-1">Hydra <a class="header-anchor" href="#hydra" aria-label="Permalink to "Hydra""></a></h2> <p>Set of tools for livecoding networked visuals. Inspired by analog modular synthesizers, these tools are an exploration into using streaming over the web for routing video sources and outputs in realtime.</p> <p>Hydra uses multiple framebuffers to allow dynamically mixing, compositing, and collaborating between connected browser-visual-streams. 
Coordinate and color transforms can be applied to each output via chained functions.</p> <p><a href="https://hydra.ojack.xyz" target="_blank" rel="noreferrer">https://hydra.ojack.xyz</a></p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Microphone audio analysis]]></title> <link>https://chromatone.center/practice/sound/mic/</link> <guid>https://chromatone.center/practice/sound/mic/</guid> <pubDate>Sun, 03 Nov 2019 00:00:00 GMT</pubDate> <description><![CDATA[Use any mic or audio input device and see some valuable data extracted from it]]></description> <content:encoded><![CDATA[<client-only> <AudioInputMic class="m-2" /> <AudioAnalysisFFT class="m-2" /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/mic-app.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Drum machine]]></title> <link>https://chromatone.center/practice/rhythm/drum-machine/</link> <guid>https://chromatone.center/practice/rhythm/drum-machine/</guid> <pubDate>Sat, 02 Nov 2019 00:00:00 GMT</pubDate> <description><![CDATA[Simple drum sequencer with synthesized sounds]]></description> <content:encoded><![CDATA[<client-only> <audio-drums-sequencer class="m-2" /> </client-only> <p>Very basic drum machine made with Elementary.js. All sounds are synthesized by low level sound engine commands and run in independent Web Audio and WASM threads, ensuring high quality and no glitches in the sound.</p> <h2 id="todo" tabindex="-1">TODO <a class="header-anchor" href="#todo" aria-label="Permalink to "TODO""></a></h2> <ul> <li>sync with musical transport</li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/yianni-mathioudakis.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Musical Glossary]]></title> <link>https://chromatone.center/theory/glossary/</link> <guid>https://chromatone.center/theory/glossary/</guid> <pubDate>Mon, 04 Feb 2019 00:00:00 GMT</pubDate> <description><![CDATA[List of terms used in modern music conversations]]></description> <content:encoded><![CDATA[<ul> <li v-for="(meaning, term) in data" :key="term"> <b>{{term}}</b>: {{meaning}} </li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/tom-hermans.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[External resources]]></title> <link>https://chromatone.center/theory/resources/</link> <guid>https://chromatone.center/theory/resources/</guid> <pubDate>Sat, 02 Feb 2019 00:00:00 GMT</pubDate> <description><![CDATA[Links to some valuable information on music and color theory and more]]></description> <content:encoded><![CDATA[<ToolsList v-if="data" :data="data" />]]></content:encoded> <enclosure url="https://chromatone.center/ugur-akdemir.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Musical transport]]></title> <link>https://chromatone.center/practice/rhythm/time/</link> <guid>https://chromatone.center/practice/rhythm/time/</guid> <pubDate>Tue, 30 Oct 2018 00:00:00 GMT</pubDate> <description><![CDATA[How we subdivide time to have a rhythm]]></description> <content:encoded><![CDATA[<client-only> <AudioTimeMath /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[External tools]]></title> <link>https://chromatone.center/practice/external/</link> <guid>https://chromatone.center/practice/external/</guid> <pubDate>Sat, 08 Sep 2018 00:00:00 GMT</pubDate> 
<description><![CDATA[There's a plenty of opportunities to dive into different aspects of music theory and practice online]]></description> <content:encoded><![CDATA[<ToolsList v-if="data" :data="data" />]]></content:encoded> <enclosure url="https://chromatone.center/other.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Quartal and Quintal Harmony]]></title> <link>https://chromatone.center/theory/harmony/quartal/</link> <guid>https://chromatone.center/theory/harmony/quartal/</guid> <pubDate>Thu, 29 Jan 2015 00:00:00 GMT</pubDate> <description><![CDATA[The building of harmonic structures built from the intervals of the perfect fourth, the augmented fourth and the diminished fourth.]]></description> <content:encoded><![CDATA[<p>In music, quartal harmony is the building of harmonic structures built from the intervals of the perfect fourth, the augmented fourth and the diminished fourth. For instance, a three-note quartal chord on C can be built by stacking perfect fourths, C–F–B♭.</p> <p>Quintal harmony is harmonic structure preferring the perfect fifth, the augmented fifth and the diminished fifth. For instance, a three-note quintal chord on C can be built by stacking perfect fifths, C–G–D.</p> <h2 id="properties" tabindex="-1">Properties <a class="header-anchor" href="#properties" aria-label="Permalink to "Properties""></a></h2> <p>The terms quartal and quintal imply a contrast, either compositional or perceptual, with traditional harmonic constructions based on thirds: listeners familiar with music of the common practice period are guided by tonalities constructed with familiar elements: the chords that make up major and minor scales, all in turn built from major and minor thirds.</p> <p>Regarding chords built from perfect fourths alone, composer Vincent Persichetti writes that:</p> <blockquote> <p>Chords by perfect fourth are ambiguous in that, like all chords built by equidistant intervals (diminished seventh chords or augmented triads), any member can function as the root. The indifference of this rootless harmony to tonality places the burden of key verification upon the voice with the most active melodic line.</p> </blockquote> <p>Quintal harmony (the harmonic layering of fifths specifically) is a lesser-used term, and since the fifth is the inversion or complement of the fourth, it is usually considered indistinct from quartal harmony. Because of this relationship, any quartal chord can be rewritten as a quintal chord by changing the order of its pitches.</p> <p>Like tertian chords, a given quartal or quintal chord can be written with different voicings, some of which obscure its quartal structure.</p> <h2 id="history" tabindex="-1">History <a class="header-anchor" href="#history" aria-label="Permalink to "History""></a></h2> <p>In the Middle Ages, simultaneous notes a fourth apart were heard as a consonance. During the common practice period (between about 1600 and 1900), this interval came to be heard either as a dissonance (when appearing as a suspension requiring resolution in the voice leading) or as a consonance (when the root of the chord appears in parts higher than the fifth of the chord). In the later 19th century, during the breakdown of tonality in classical music, all intervallic relationships were once again reassessed. 
Quartal harmony was developed in the early 20th century as a result of this breakdown and reevaluation of tonality.</p> <h2 id="precursors" tabindex="-1">Precursors <a class="header-anchor" href="#precursors" aria-label="Permalink to "Precursors""></a></h2> <p>The Tristan chord is made up of the notes F♮, B♮, D♯ and G♯ and is the first chord heard in Wagner's opera Tristan und Isolde.</p> <p>The bottom two notes make up an augmented fourth, while the upper two make up a perfect fourth. This layering of fourths in this context has been seen as highly significant. The chord had been found in earlier works, notably Beethoven's Piano Sonata No. 18, but Wagner's use was significant, first because it is seen as moving away from traditional tonal harmony and even towards atonality, and second because with this chord Wagner actually provoked the sound or structure of musical harmony to become more predominant than its function, a notion which was soon after to be explored by Debussy and others.</p> <p>Despite the layering of fourths, it is rare to find musicologists identifying this chord as "quartal harmony" or even as "proto-quartal harmony", since Wagner's musical language is still essentially built on thirds, and even an ordinary dominant seventh chord can be laid out as augmented fourth plus perfect fourth (F–B–D–G). Wagner's unusual chord is really a device to draw the listener into the musical-dramatic argument which the composer is presenting to us.</p> <p>At the beginning of the 20th century, quartal harmony finally became an important element of harmony. Scriabin used a self-developed system of transposition using fourth-chords, like his Mystic chord (shown below) in his Piano Sonata No. 6.</p> <p>Scriabin wrote this chord in his sketches alongside other quartal passages and more traditional tertian passages, often passing between systems, for example widening the six-note quartal sonority (C–F♯–B♭–E–A–D) into a seven-note chord (C–F♯–B♭–E–A–D–G). Scriabin's sketches for his unfinished work Mysterium show that he intended to develop the Mystic chord into a huge chord incorporating all twelve notes of the chromatic scale.</p> <p>In France, Erik Satie experimented with planing in the stacked fourths (not all perfect) of his 1891 score for Le Fils des étoiles. Paul Dukas's The Sorcerer's Apprentice (1897) has a rising repetition in fourths, as the tireless work of out-of-control walking brooms causes the water level in the house to "rise and rise".</p> <youtube-embed video="DlML3adH9yQ" /><h2 id="_20th-and-21st-century-classical-music" tabindex="-1">20th- and 21st-century classical music <a class="header-anchor" href="#_20th-and-21st-century-classical-music" aria-label="Permalink to "20th- and 21st-century classical music""></a></h2> <p>Composers who use the techniques of quartal harmony include Claude Debussy, Francis Poulenc, Alexander Scriabin, Alban Berg, Leonard Bernstein, Arnold Schoenberg, Igor Stravinsky, Maurice Ravel, Joe Hisaishi and Anton Webern.[7]</p> <h3 id="schoenberg" tabindex="-1">Schoenberg <a class="header-anchor" href="#schoenberg" aria-label="Permalink to "Schoenberg""></a></h3> <p>Arnold Schoenberg's Chamber Symphony Op. 
9 (1906) displays quartal harmony: the first measure and a half construct a five-part fourth chord with the notes (highlighted in red in the illustration) A–D♯–F–B♭–E♭–A♭ distributed over the five stringed instruments (the viola must tune down the lowest string by a minor third, and read in the unfamiliar tenor clef).</p> <p>The composer then picks out this vertical quartal harmony in a horizontal sequence of fourths from the horns, eventually leading to a passage of triadic quartal harmony (i.e., chords of three notes, each layer a fourth apart).</p> <p>Schoenberg was also one of the first to write on the theoretical consequences of this harmonic innovation. In his Theory of Harmony (Harmonielehre) of 1911, he wrote:</p> <blockquote> <p>The construction of chords by superimposing fourths can lead to a chord that contains all the twelve notes of the chromatic scale; hence, such construction does manifest a possibility for dealing systematically with those harmonic phenomena that already exist in the works of some of us: seven, eight, nine, ten, eleven, and twelve-part chords… But the quartal construction makes possible, as I said, accommodation of all phenomena of harmony.</p> </blockquote> <p>For Anton Webern, the importance of quartal harmony lay in the possibility of building new sounds. After hearing Schoenberg's Chamber Symphony, Webern wrote "You must write something like that, too!"</p> <h2 id="others" tabindex="-1">Others <a class="header-anchor" href="#others" aria-label="Permalink to "Others""></a></h2> <p>In his Theory of Harmony: "Besides myself my students Dr. Anton Webern and Alban Berg have written these harmonies (fourth chords), but also the Hungarian Béla Bartók or the Viennese Franz Schreker, who both go a similar way to Debussy, Dukas and perhaps also Puccini, are not far off."</p> <p>French composer Maurice Ravel used quartal chords in Sonatine (1906) and Ma mère l'Oye (1910), while American Charles Ives used quartal chords in his song "The Cage" (1906).</p> <p>Hindemith constructed large parts of his symphonic work Symphony: Mathis der Maler by means of fourth and fifth intervals. These steps are a restructuring of fourth chords (C–D–G becomes the fourth chord D–G–C), or other mixtures of fourths and fifths (D♯–A♯–D♯–G♯–C♯ in measure 3 of the example).</p> <p>Hindemith was, however, not a proponent of an explicit quartal harmony. In his 1937 writing Unterweisung im Tonsatz (The Craft of Musical Composition,Hindemith 1937) he wrote that "notes have a family of relationships, that are the bindings of tonality, in which the ranking of intervals is unambiguous," so much so, indeed, that in the art of triadic composition "...the musician is bound by this, as the painter to his primary colours, the architect to the three dimensions." He lined up the harmonic and melodic aspects of music in a row in which the octave ranks first, then the fifth and the third, and then the fourth. "The strongest and most unique harmonic interval after the octave is the fifth, the prettiest nevertheless is the third by right of the chordal effects of its Combination tones."</p> <p>The works of the Filipino composer Eliseo M. Pajaro [it; nl; tl] (1915–1984) are characterised by quartal and quintal harmonies, as well as by dissonant counterpoint and polychords.</p> <p>As a transition to the history of jazz, George Gershwin may be mentioned. 
In the first movement of his Concerto in F altered fourth chords descend chromatically in the right hand with a chromatic scale leading upward in the left hand.</p> <youtube-embed video="5nSW5jlbQcg" /><h2 id="jazz" tabindex="-1">Jazz <a class="header-anchor" href="#jazz" aria-label="Permalink to "Jazz""></a></h2> <p>Jazz is often understood as a synthesis of the European common practice harmonic vocabulary with textural paradigms from West African folk music—but it would be an oversimplification to describe jazz as sharing the same fundamental theory of harmony as European music. Important influences come from opera as well as from the instrumental work of Classical- and Romantic-era composers, and even that of the Impressionists. From the beginning, jazz musicians expressed a particular interest in rich harmonic colours, for which non-tertiary harmony was a means of exploration, as used by pianists and arrangers like Jelly Roll Morton, Duke Ellington, Art Tatum, Bill Evans, Milt Buckner, Chick Corea, Herbie Hancock, and especially McCoy Tyner.</p> <p>The hard bop of the 1950s made new applications of quartal harmony accessible to jazz.[citation needed] Quintet writing in which two melodic instruments (commonly trumpet and saxophone) may proceed in fourths, while the piano (as a uniquely harmonic instrument) lays down chords, but sparsely, only hinting at the intended harmony. This style of writing, in contrast with that of the previous decade, preferred a moderate tempo. Thin-sounding unison bebop horn sections occur frequently, but these are balanced by bouts of very refined polyphony such as is found in cool jazz.</p> <p>The "So What" chord uses three intervals of a fourth.</p> <p>On his watershed record Kind of Blue, Miles Davis with pianist Bill Evans used a chord consisting of three perfect fourth intervals and a major third on the composition "So What". This particular voicing is sometimes referred to as a So What chord, and can be analyzed (without regard for added sixths, ninths, etc.) as a minor seventh with the root on the bottom, or as a major seventh with the third on the bottom.</p> <p>From the outset of the 1960s, the employment of quartal possibilities had become so familiar that the musician now felt the fourth chord existed as a separate entity, self standing and free of any need to resolve. The pioneering of quartal writing in later jazz and rock, like the pianist McCoy Tyner's work with saxophonist John Coltrane's "classic quartet", was influential throughout this epoch. Oliver Nelson was also known for his use of fourth chord voicings. Tom Floyd claims that the "foundation of 'modern quartal harmony'" began in the era when the Charlie Parker–influenced John Coltrane added classically trained pianists Bill Evans and McCoy Tyner to his ensemble.</p> <p>Jazz guitarists cited as using chord voicings using quartal harmony include Johnny Smith, Tal Farlow, Chuck Wayne, Barney Kessel, Joe Pass, Jimmy Raney, Wes Montgomery—however, all in a traditional manner, as major 9th, 13th and minor 11th chords (an octave and fourth equals an 11th). 
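</p> <p>As a rough sketch of the "So What" voicing described above (plain JavaScript; the E2 root and MIDI numbers are illustrative choices, not taken from the recording), the three perfect fourths and the major third can simply be stacked as semitone steps:</p> <pre><code>// Sketch: build a "So What"-style voicing by stacking intervals on a root.
// 5 semitones = perfect fourth, 4 semitones = major third.
const NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];
const midiToName = (m) => NAMES[m % 12] + (Math.floor(m / 12) - 1);

function stackIntervals(rootMidi, intervals) {
  const notes = [rootMidi];
  for (const step of intervals) notes.push(notes[notes.length - 1] + step);
  return notes;
}

const soWhat = stackIntervals(40, [5, 5, 5, 4]); // root E2 (MIDI 40) is an arbitrary example
console.log(soWhat.map(midiToName).join(' '));   // "E2 A2 D3 G3 B3"
</code></pre> <p>Kept as plain interval arithmetic, the note list can be handed on to WebMIDI or any synth library, and transposing the whole stack up or down gives the parallel voicings this sound is associated with.</p> <p>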
Jazz guitarists cited as using modern quartal harmony include Jim Hall (especially Sonny Rollins's The Bridge), George Benson ("Sky Dive"), Pat Martino, Jack Wilkins ("Windows"), Joe Diorio, Howard Roberts, Kenny Burrell, Wes Montgomery, Henry Johnson, Russell Malone, Jimmy Bruno, Howard Alden, Bill Frisell, Paul Bollenback, Mark Whitfield, and Rodney Jones.</p> <p>Quartal harmony was also explored as a possibility under new experimental scale models as they were "discovered" by jazz.[citation needed] Musicians began to work extensively with the so-called church modes of old European music, and they became firmly situated in their compositional process. Jazz was well-suited to incorporate the medieval use of fourths to thicken lines into its improvisation. The pianists Herbie Hancock, and Chick Corea are two musicians well known for their modal experimentation. Around this time, a style known as free jazz also came into being, in which quartal harmony had extensive use, owing to the wandering nature of its harmony.</p> <youtube-embed video="MQtiT6syGmc" /><p>In jazz, the way chords were built from a scale came to be called voicing, and specifically quartal harmony was referred to as fourth voicing.</p> <p>Thus when the m11 and the dominant 7th sus (9sus above) chords in quartal voicings are used together they tend to "blend into one overall sound" sometimes referred to as modal voicings, and both may be applied where the m11 chord is called for during extended periods such as the entire chorus.</p> <h3 id="rock-music" tabindex="-1">Rock music <a class="header-anchor" href="#rock-music" aria-label="Permalink to "Rock music""></a></h3> <p>Disliking the sound of thirds (in equal-temperament tuning), Robert Fripp builds chords with perfect intervals in his new standard tuning.</p> <p>Quartal and quintal harmony have been used by Robert Fripp, guitarist of King Crimson. Fripp dislikes minor thirds and especially major thirds in equal temperament tuning, which is used by non-experimental guitars. The perfect fourths and fifths of just intonation are well approximated in equal temperament tuning, and perfect fifths and octaves are highly consonant intervals. Fripp builds chords using perfect fifths, fourths, and octaves in his new standard tuning (NST), a regular tuning having perfect fifths between its successive open strings.</p> <p>The 1971 album Tarkus by Emerson, Lake & Palmer depends on quartal harmony throughout, including a recurrent elaboration on the classical Alberti bass pattern, in this case consisting of three broken quartal three-note chords, the first two of which are also a perfect fourth apart, and the third a semitone higher than the first. Keith Emerson uses programmatic quintal harmony in several places for extended rapid obbligato passages where human fingering would be impracticable, the first on Hammond organ and the second on Modular Moog, in a similar manner to the mutation stops on pipe organs, such as the "Twelfth" at 2 2/3' pitch played against a 4' "Principal" (which plays the eighth note). In the second instance, the triad is both quartal and quintal, being 1+4+5.</p> <p>Ray Manzarek of The Doors was another keyboard player and composer who put classical and jazz elements, including quartal harmonies, into the service of rock music. 
The keyboard solo of "Riders on the Storm", for instance, has several passages where the melody line is doubled at an interval of a perfect fourth, and extensive use of (E dorian) minor chord voicings featuring the seven and three, spaced by that same interval, as the prominent notes.</p> <youtube-embed video="RcioU2aHha8" /><h2 id="examples-of-quartal-pieces" tabindex="-1">Examples of quartal pieces <a class="header-anchor" href="#examples-of-quartal-pieces" aria-label="Permalink to "Examples of quartal pieces""></a></h2> <h3 id="classical" tabindex="-1">Classical <a class="header-anchor" href="#classical" aria-label="Permalink to "Classical""></a></h3> <ul> <li> <p>William Albright</p> <ul> <li>Sonata for Alto Saxophone and Piano</li> </ul> </li> <li> <p>Alban Berg</p> <ul> <li>Sonata for Piano, Op. 1</li> <li>Wozzeck</li> </ul> </li> <li> <p>Carlos Chávez</p> <ul> <li>Sinfonía de Antígona (Symphony No. 1), uses quartal harmony throughout</li> <li>Sinfonía india (Symphony No. 2), the A-minor Sonora melody beginning in b. 183 is accompanied by quartal harmonies</li> </ul> </li> <li> <p>Aaron Copland</p> <ul> <li>Of Mice and Men</li> </ul> </li> <li> <p>Claude Debussy</p> <ul> <li>"La cathédrale engloutie", beginning and ending</li> </ul> </li> <li> <p>Norman Dello Joio</p> <ul> <li>Suite for Piano</li> </ul> </li> <li> <p>Caspar Diethelm</p> <ul> <li>Piano Sonata No. 7</li> </ul> </li> <li> <p>Alberto Ginastera</p> <ul> <li>12 American Preludes, Prelude #7</li> </ul> </li> <li> <p>Carlos Guastavino</p> <ul> <li>"Donde habite el olvido"</li> </ul> </li> <li> <p>Howard Hanson</p> <ul> <li>Symphony No. 2 ("Romantic")</li> </ul> </li> <li> <p>Walter Hartley</p> <ul> <li>Bacchanalia for Band</li> </ul> </li> <li> <p>Charles Ives</p> <ul> <li>"The Cage" (1906)</li> <li>Central Park in the Dark</li> <li>"Harpalus"</li> <li>Psalm 24, verse 5</li> <li>Psalm 90</li> <li>"Walking"</li> </ul> </li> <li> <p>Aram Khachaturian</p> <ul> <li>Toccata</li> </ul> </li> <li> <p>Benjamin Lees</p> <ul> <li>String Quartet No. 2, Adagio</li> </ul> </li> <li> <p>Darius Milhaud</p> <ul> <li>Sonatina for flute & piano, Op. 76</li> </ul> </li> <li> <p>Walter Piston</p> <ul> <li>Clarinet Concerto</li> <li>Ricercare for Orchestra</li> </ul> </li> <li> <p>Einojuhani Rautavaara</p> <ul> <li>"Kvartit" (Fourths), Op. 42, Études (Rautavaara)</li> </ul> </li> <li> <p>Maurice Ravel</p> <ul> <li>Ma mère l'oye : "Mouvt de Marche" of "Laideronnette"</li> </ul> </li> <li> <p>Ned Rorem</p> <ul> <li>King Midas, cantata</li> </ul> </li> <li> <p>Erik Satie</p> <ul> <li>Le Fils des étoiles</li> </ul> </li> <li> <p>Arnold Schoenberg</p> <ul> <li>The Book of the Hanging Gardens</li> <li>Chamber Symphony, Op. 9 slow section, b. 1–3</li> <li>Wind Quintet, Op. 26</li> </ul> </li> <li> <p>Cyril Scott</p> <ul> <li>Diatonic Study (1914)</li> </ul> </li> <li> <p>Nikos Skalkottas</p> <ul> <li>Suite No. 3 for Piano</li> </ul> </li> <li> <p>Stephen Sondheim</p> <ul> <li>Piano Sonata</li> </ul> </li> <li> <p>Karlheinz Stockhausen</p> <ul> <li>Klavierstück IX</li> </ul> </li> <li> <p>Howard Swanson</p> <ul> <li>"Saw a Grave"</li> </ul> </li> <li> <p>Heitor Villa-Lobos</p> <ul> <li>Nonet (1923)</li> </ul> </li> <li> <p>Anton Webern</p> <ul> <li>Variations for Piano, Op. 
27</li> </ul> </li> <li> <p>John Williams</p> <ul> <li>Star Wars - Main Title (1977)</li> <li>Superman - Main Title (1978)</li> </ul> </li> </ul> <h3 id="jazz-1" tabindex="-1">Jazz <a class="header-anchor" href="#jazz-1" aria-label="Permalink to "Jazz""></a></h3> <ul> <li> <p>Miles Davis</p> <ul> <li>Kind of Blue</li> </ul> </li> <li> <p>Herbie Hancock</p> <ul> <li>"Maiden Voyage"</li> </ul> </li> <li> <p>Eddie Harris</p> <ul> <li>"Freedom Jazz Dance"[citation needed]</li> </ul> </li> <li> <p>McCoy Tyner</p> <ul> <li>"Contemplation"[citation needed]</li> <li>"Passion Dance"[citation needed]</li> </ul> </li> </ul> <h3 id="folk" tabindex="-1">Folk <a class="header-anchor" href="#folk" aria-label="Permalink to "Folk""></a></h3> <p>On her 1968 debut album Song to a Seagull, Joni Mitchell used quartal and quintal harmony in "Dawntreader", and she used quintal harmony in the title track Song to a Seagull.</p> <h3 id="rock" tabindex="-1">Rock <a class="header-anchor" href="#rock" aria-label="Permalink to "Rock""></a></h3> <ul> <li> <p>Emerson, Lake & Palmer</p> <ul> <li>Tarkus</li> </ul> </li> <li> <p>Frank Zappa</p> <ul> <li>"Zoot Allures"</li> </ul> </li> <li> <p>XTC</p> <ul> <li>"Rook" (composed by Andy Partridge, from the album Nonsuch)</li> </ul> </li> </ul> ]]></content:encoded> <enclosure url="https://chromatone.center/miles-davis-1969-b-billboard-1548.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Polymeter concentric circles]]></title> <link>https://chromatone.center/practice/experiments/polymeter/</link> <guid>https://chromatone.center/practice/experiments/polymeter/</guid> <pubDate>Fri, 02 Jan 2015 00:00:00 GMT</pubDate> <description><![CDATA[Cycles of measures of various number of beats]]></description> <content:encoded><![CDATA[<PolyMeter />]]></content:encoded> <enclosure url="https://chromatone.center/poly.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Spectral synthesis]]></title> <link>https://chromatone.center/theory/synthesis/spectral/</link> <guid>https://chromatone.center/theory/synthesis/spectral/</guid> <pubDate>Sun, 26 Oct 2014 00:00:00 GMT</pubDate> <description><![CDATA[How to synthesise any sound with sine waves]]></description> <content:encoded><![CDATA[<p>Spectral modeling synthesis (SMS) is an acoustic modeling approach for speech and other signals. SMS considers sounds as a combination of harmonic content and noise content. Harmonic components are identified based on peaks in the frequency spectrum of the signal, normally as found by the short-time Fourier transform. The signal that remains following removal of the spectral components, sometimes referred to as the residual, is then modeled as white noise passed through a time-varying filter. The output of the model, then, are the frequencies and levels of the detected harmonic components and the coefficients of the time-varying filter.</p> <p>Intuitively, the model can be applied to many types of audio signals. Speech signals, for example, include slowly changing harmonic sounds caused by vibration of the vocal cords plus wideband, noise-like sounds caused by the lips and mouth. 
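</p> <p>As a minimal sketch of that two-layer model (standard Web Audio API in a browser; all frequencies, levels and the filter setting are made-up example values, not analysis data), a few sine partials can be summed with noise passed through a filter:</p> <pre><code>// Sketch: resynthesize a tone as (sine partials) + (noise through a filter),
// the two components SMS separates. All numbers are arbitrary examples.
const ctx = new AudioContext();
const out = ctx.createGain();
out.gain.value = 0.2;
out.connect(ctx.destination);

// Harmonic part: a few sines at "detected" peak frequencies and levels.
const partials = [
  { freq: 220, level: 1.0 },
  { freq: 440, level: 0.5 },
  { freq: 660, level: 0.25 },
];
for (const p of partials) {
  const osc = ctx.createOscillator();
  const g = ctx.createGain();
  osc.frequency.value = p.freq;
  g.gain.value = p.level;
  osc.connect(g).connect(out);
  osc.start();
}

// Residual part: white noise through a (here static) band-pass filter.
const buffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
const data = buffer.getChannelData(0);
for (let i = 0; i !== data.length; i++) data[i] = Math.random() * 2 - 1;
const noise = ctx.createBufferSource();
noise.buffer = buffer;
noise.loop = true;
const filter = ctx.createBiquadFilter();
filter.type = 'bandpass';
filter.frequency.value = 1500;
noise.connect(filter).connect(out);
noise.start();
</code></pre> <p>In a real SMS resynthesis the partial frequencies and levels, and the filter shaping the residual, would be updated frame by frame from the analysis data; this static version only shows how the two components are combined.</p> <p>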
Musical instruments also produce sounds containing both harmonic components and percussive, noise-like sounds when the notes are struck or changed.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Turkish rhythms]]></title> <link>https://chromatone.center/theory/rhythm/system/turkish/</link> <guid>https://chromatone.center/theory/rhythm/system/turkish/</guid> <pubDate>Fri, 03 Oct 2014 00:00:00 GMT</pubDate> <description><![CDATA[Usuls and aksaks]]></description> <content:encoded><![CDATA[<beat-bars v-bind="turkish" /><p>In Ottoman classical music, usul is an underlying rhythmic cycle that complements the melodic rhythm and sometimes helps shape the overall structure of a composition. An usul can be as short as two beats or as long as 128 beats. Usul is often translated as "meter", but usul and meter are not exactly the same. Both are repeating rhythmic patterns with more or less complex inner structures of beats of differing duration and weight. But a student learning Turkish music in the traditional meşk system first memorizes the usul kinetically by striking the knees with the hands. The student then sings the vocal or instrumental composition while performing the underlying usul. This pedagogical system helps the student memorize the composition while internalizing the underlying rhythmic structure.</p> <p>Usul patterns have standard pronounceable vocables built from combinations of the syllables düm, dü-üm, tek, tekkyaa, teke, te-ek, where düm, dü-üm indicate a strong low beat of single or double duration, and tek, tekkya, teke, te-ek indicate various combinations of light beats of half, single or double duration. Long usuls (e.g., 28/4, 32/4, 120/4) are compound metric structures that underlie longer sections of entire compositions.</p> <p>In Ottoman times, the usul was realized by drummers. Drums are generally omitted in modern performances except for Mevlevi. When performing music for the Mevlevi ceremony, drummers traditionally play embellished (velveleli) versions of the usuls.</p> <p>Instrumental improvisations (taksim) and vocal improvisations (gazel, mersiye, etc.) are generally performed in "free" rhythm, with no usul.</p> <p>The melodic counterpart to usul rhythmic mode is makam melodic mode. The parallel system to usul in Indian music is tala.</p> <h1 id="turkish-usuls-and-aksak-rhythms-a-comprehensive-guide" tabindex="-1">Turkish Usuls and Aksak Rhythms: A Comprehensive Guide <a class="header-anchor" href="#turkish-usuls-and-aksak-rhythms-a-comprehensive-guide" aria-label="Permalink to "Turkish Usuls and Aksak Rhythms: A Comprehensive Guide""></a></h1> <h2 id="introduction" tabindex="-1">Introduction <a class="header-anchor" href="#introduction" aria-label="Permalink to "Introduction""></a></h2> <p>Turkish music is renowned for its complex rhythmic structures, which form an integral part of both classical Ottoman music and Turkish folk traditions. Two key concepts in Turkish rhythm are Usul and Aksak.</p> <h3 id="usul" tabindex="-1">Usul <a class="header-anchor" href="#usul" aria-label="Permalink to "Usul""></a></h3> <p>Usul (plural: usuller) is the Turkish term for rhythmic cycle or meter. It's comparable to time signature in Western music, but often more complex. Usuller can range from simple patterns of a few beats to intricate cycles of over 100 beats. 
They provide the rhythmic foundation for Turkish classical and folk music.</p> <h3 id="aksak" tabindex="-1">Aksak <a class="header-anchor" href="#aksak" aria-label="Permalink to "Aksak""></a></h3> <p>Aksak, meaning "limping" in Turkish, refers to asymmetrical rhythms that combine units of 2 and 3 beats. These rhythms are characteristic of Turkish and Balkan music, creating a distinctive "limping" or "uneven" feel. While all aksak rhythms are usuller, not all usuller are aksak.</p> <h3 id="note-on-rhythm-notation" tabindex="-1">Note on Rhythm Notation <a class="header-anchor" href="#note-on-rhythm-notation" aria-label="Permalink to "Note on Rhythm Notation""></a></h3> <p>In this guide, we notate the rhythms using 16th notes (e.g., 5/16, 7/16) rather than 8th notes. This choice reflects the traditionally fast tempo at which many of these usuls are performed. In classical Turkish music, these rhythms are often played at a pace where each notated beat (16th note) corresponds roughly to one pulse, typically ranging from 80 to 120 beats per minute for the 16th note, depending on the specific usul and the nature of the piece.</p> <p>However, it's important to note that tempo can vary widely based on the style of music, the specific composition, and the performer's interpretation. Some slower pieces or more contemporary interpretations might take a more relaxed tempo.</p> <p>In the following descriptions, we use "X" to represent the main strong beat (düm in Turkish percussion terminology), "x" for secondary beats (tek), and "." for silent beats.</p> <h2 id="detailed-rhythm-descriptions" tabindex="-1">Detailed Rhythm Descriptions <a class="header-anchor" href="#detailed-rhythm-descriptions" aria-label="Permalink to "Detailed Rhythm Descriptions""></a></h2> <h3 id="_5-16-turk-aksagı" tabindex="-1">5/16 Türk Aksağı <a class="header-anchor" href="#_5-16-turk-aksagı" aria-label="Permalink to "5/16 Türk Aksağı""></a></h3> <ul> <li>Pattern: X.x.. (2+3)</li> <li>Description: One of the most fundamental aksak rhythms. Its name literally means "Turkish limping."</li> <li>Usage: Widely used in Turkish folk music, especially in western Turkey and the Balkans. It's also found in classical compositions and forms the basis for more complex rhythms.</li> <li>Significance: Türk Aksağı is often considered the quintessential aksak rhythm, embodying the characteristic "limping" feel of Turkish music.</li> </ul> <h3 id="_5-16-hucum-semai" tabindex="-1">5/16 Hücum Semai <a class="header-anchor" href="#_5-16-hucum-semai" aria-label="Permalink to "5/16 Hücum Semai""></a></h3> <ul> <li>Pattern: X..x. (3+2)</li> <li>Description: An alternative 5/16 pattern, often felt as a "reverse" of Türk Aksağı.</li> <li>Usage: Commonly used in the fast final section (hücum means "attack" in Turkish) of classical fasıl performances.</li> <li>Significance: Hücum Semai showcases how the same number of beats can create a distinctly different feel when arranged differently.</li> </ul> <h3 id="_7-16-devr-i-hindi" tabindex="-1">7/16 Devr-i Hindi <a class="header-anchor" href="#_7-16-devr-i-hindi" aria-label="Permalink to "7/16 Devr-i Hindi""></a></h3> <ul> <li>Pattern: X...x.. 
(4+3)</li> <li>Description: The name means "Indian cycle," though its origins are uncertain.</li> <li>Usage: Found in both classical Ottoman music and folk traditions.</li> <li>Significance: Devr-i Hindi demonstrates how Turkish music incorporates longer beat groupings into aksak rhythms.</li> </ul> <h3 id="_7-16-devr-i-turan" tabindex="-1">7/16 Devr-i Turan <a class="header-anchor" href="#_7-16-devr-i-turan" aria-label="Permalink to "7/16 Devr-i Turan""></a></h3> <ul> <li>Pattern: X.x.x.. (2+2+3)</li> <li>Description: An alternative 7/16 pattern, providing a different rhythmic feel from Devr-i Hindi.</li> <li>Usage: Less common than Devr-i Hindi, but still present in the classical repertoire.</li> <li>Significance: Devr-i Turan shows how the same total number of beats can be subdivided differently, creating distinct rhythmic characters.</li> </ul> <h3 id="_9-16-aksak" tabindex="-1">9/16 Aksak <a class="header-anchor" href="#_9-16-aksak" aria-label="Permalink to "9/16 Aksak""></a></h3> <ul> <li>Pattern: X.x.x.x.. (2+2+2+3)</li> <li>Description: The quintessential "limping" rhythm of Turkish music.</li> <li>Usage: Extremely common in both folk and classical music throughout Turkey and the Balkans.</li> <li>Significance: Aksak is perhaps the most well-known Turkish rhythm internationally, epitomizing the aksak concept.</li> </ul> <h3 id="_9-16-evfer" tabindex="-1">9/16 Evfer <a class="header-anchor" href="#_9-16-evfer" aria-label="Permalink to "9/16 Evfer""></a></h3> <ul> <li>Pattern: X..x..x.. (3+3+3)</li> <li>Description: While not strictly an aksak rhythm, its 9/16 meter is significant in Turkish music.</li> <li>Usage: Found in classical Turkish music, often for more lyrical pieces.</li> <li>Significance: Evfer demonstrates how Turkish music uses complex meters even in symmetrical patterns.</li> </ul> <h3 id="_9-16-agır-duyek" tabindex="-1">9/16 Ağır Düyek <a class="header-anchor" href="#_9-16-agır-duyek" aria-label="Permalink to "9/16 Ağır Düyek""></a></h3> <ul> <li>Pattern: X....x... (5+4)</li> <li>Description: A slower, more elaborate version of the common 8/8 Düyek rhythm.</li> <li>Usage: Used in slower, more serious classical pieces.</li> <li>Significance: Ağır Düyek shows how Turkish music adapts and extends simpler rhythms to create more complex ones.</li> </ul> <h3 id="_10-16-aksak-semai" tabindex="-1">10/16 Aksak Semai <a class="header-anchor" href="#_10-16-aksak-semai" aria-label="Permalink to "10/16 Aksak Semai""></a></h3> <ul> <li>Pattern: X.x.x.x.x. (2+2+2+2+2)</li> <li>Description: An extended aksak rhythm, building on the basic aksak pattern.</li> <li>Usage: Found in both classical and folk traditions, often for dance music.</li> <li>Significance: Aksak Semai demonstrates how aksak patterns can be extended to create longer, more complex rhythms.</li> </ul> <h3 id="_11-16-tek-vurus" tabindex="-1">11/16 Tek Vuruş <a class="header-anchor" href="#_11-16-tek-vurus" aria-label="Permalink to "11/16 Tek Vuruş""></a></h3> <ul> <li>Pattern: X.x.x.x.x.. (2+2+2+2+3)</li> <li>Description: A complex aksak rhythm. 
The name means "single strike," referring to the final long beat.</li> <li>Usage: Less common, but found in some classical compositions and regional folk music.</li> <li>Significance: Tek Vuruş showcases the flexibility of Turkish rhythm, extending the aksak principle to create highly complex patterns.</li> </ul> <h3 id="_13-16-nim-evsat" tabindex="-1">13/16 Nim Evsat <a class="header-anchor" href="#_13-16-nim-evsat" aria-label="Permalink to "13/16 Nim Evsat""></a></h3> <ul> <li>Pattern: X.x.x.x.x.x.. (2+2+2+2+2+3)</li> <li>Description: A highly complex aksak rhythm. "Nim" means "half," as this is half of the 26/16 Evsat rhythm.</li> <li>Usage: Found in advanced classical compositions.</li> <li>Significance: Nim Evsat represents the upper end of complexity in commonly used Turkish rhythms, demonstrating the sophistication of Turkish rhythm theory.</li> </ul> <h3 id="_15-16-raksan" tabindex="-1">15/16 Raksan <a class="header-anchor" href="#_15-16-raksan" aria-label="Permalink to "15/16 Raksan""></a></h3> <ul> <li>Pattern: X.x.x.x..x.x.. (2+2+2+3+2+2+2)</li> <li>Description: A long aksak rhythm. The name means "dancing" in Persian.</li> <li>Usage: Used in some classical pieces and regional folk music.</li> <li>Significance: Raksan shows how very long aksak patterns can still maintain a danceable quality, crucial in Turkish music.</li> </ul> <h3 id="_18-16-turk-aksagı-compound" tabindex="-1">18/16 Türk Aksağı (compound) <a class="header-anchor" href="#_18-16-turk-aksagı-compound" aria-label="Permalink to "18/16 Türk Aksağı (compound)""></a></h3> <ul> <li>Pattern: X.x.x..X.x..x.x.. (2+2+3 + 2+3 + 2+2+2)</li> <li>Description: A compound version of the 5/16 Türk Aksağı.</li> <li>Usage: Found in some advanced classical compositions.</li> <li>Significance: This rhythm demonstrates how basic aksak patterns can be combined to create more complex, compound rhythms.</li> </ul> <h3 id="_25-16-kapalı-curcuna" tabindex="-1">25/16 Kapalı Curcuna <a class="header-anchor" href="#_25-16-kapalı-curcuna" aria-label="Permalink to "25/16 Kapalı Curcuna""></a></h3> <ul> <li>Pattern: X.x.x..X.x.x..X.x.x..X.x.. (2+2+3 + 2+2+3 + 2+2+3 + 2+3)</li> <li>Description: One of the longest aksak usuls. "Kapalı" means "closed" and "curcuna" is a type of lively rhythm.</li> <li>Usage: Rare, used in very advanced classical compositions.</li> <li>Significance: Kapalı Curcuna represents the pinnacle of rhythmic complexity in Turkish music, showcasing how multiple aksak patterns can be combined into a single, extended rhythm.</li> </ul> <h2 id="conclusion" tabindex="-1">Conclusion <a class="header-anchor" href="#conclusion" aria-label="Permalink to "Conclusion""></a></h2> <p>These rhythms demonstrate the rich complexity of Turkish music's approach to meter and rhythm. The aksak patterns, in particular, showcase the sophisticated interplay between long and short beats that gives Turkish music its distinctive character. From the basic 5/16 Türk Aksağı to the complex 25/16 Kapalı Curcuna, these rhythms form a unique rhythmic language that is integral to the expression and emotion of Turkish music.</p> <p>The use of 16th note time signatures reflects the traditionally brisk pace of many usul performances in classical Turkish music. 
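</p> <p>At such tempi each notated 16th is roughly one click, which is easy to hear by playing the pattern strings directly. Below is a small sketch (standard Web Audio API; the 110 BPM tempo and the click sounds are arbitrary choices) that reads the X / x / . notation used above and plays a low stroke for X (düm), a lighter one for x (tek) and nothing for each dot:</p> <pre><code>// Sketch: play an usul pattern written in the X / x / . notation above.
// 'X' = strong stroke (düm), 'x' = light stroke (tek), '.' = silent 16th.
const ctx = new AudioContext();
const sixteenth = 60 / 110 / 4; // 110 BPM quarter note, so one 16th in seconds

function stroke(time, strong) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = strong ? 180 : 600; // low düm, higher tek
  gain.gain.setValueAtTime(strong ? 0.8 : 0.3, time);
  gain.gain.exponentialRampToValueAtTime(0.001, time + 0.08);
  osc.connect(gain).connect(ctx.destination);
  osc.start(time);
  osc.stop(time + 0.1);
}

function playUsul(pattern, cycles) {
  let t = ctx.currentTime + 0.1;
  for (let c = 0; c !== cycles; c++) {
    for (const step of pattern) {
      if (step === 'X') stroke(t, true);
      if (step === 'x') stroke(t, false);
      t += sixteenth;
    }
  }
}

playUsul('X.x.x.x..', 4); // 9/16 Aksak (2+2+2+3)
</code></pre> <p>Swapping the string for "X.x.." or "X.x.x.x.x.." plays the 5/16 Türk Aksağı or the 11/16 Tek Vuruş from the list above.</p> <p>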
This rapid pulse contributes to the music's energetic and intricate character, allowing for subtle subdivisions and complex interplays between rhythm and melody.</p> <p>Understanding these rhythms not only provides insight into Turkish music but also offers a gateway to appreciating the diverse and intricate rhythmic systems found in music traditions around the world. The speed and complexity of these rhythms challenge musicians to develop high levels of rhythmic precision and listeners to engage with music in new and exciting ways.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/Misirli_Ahmet.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Balkan dances]]></title> <link>https://chromatone.center/theory/rhythm/system/balkan/</link> <guid>https://chromatone.center/theory/rhythm/system/balkan/</guid> <pubDate>Thu, 02 Oct 2014 00:00:00 GMT</pubDate> <description><![CDATA[Bulgarian rhythms and more]]></description> <content:encoded><![CDATA[<beat-bars v-bind="balkan" /><p>Bulgaria is famous for dance rhythms featuring uneven beats. Other Balkan countries also have some of these rhythms, but it is in Bulgaria that they are most common and intricate. Uneven rhythms include 5/16, 7/16, 9/16, 11/16, 15/16, 18/16 (7/16+11/16), and 22/16 (9/16+13/16). To be sure, the most common rhythm in Bulgaria is an even 2/4 or 6/8, (see Bulgarian Even Dance Rhythms) but uneven rhythms are also very common.</p> <p>Western musicians might be tempted when seeing an 11/16 time signature to count to 11, then start again, but that’s not the way a Bulgarian conceptualizes his or her 11-beat rhythm. Often the individual beats are very fast (up to 520 beats per minute, or more than 8 per second!), so the mind just can’t count that fast. Also, Bulgarians were playing this music before Western concepts of what constitutes rhythm were developed. The idea of 11/16 is a Western invention.</p> <p>Bulgarian folk musicians think of beats as either “quick” or “slow”, with the “slow” beat being approximately 1 1/2 times as long as the “quick”. Instead of trying to count a half beat, it’s easier to count a “quick” beat as 2, and a “slow” beat as 3. Ethnomusicologists have a couple of terms for dances and rhythms based on beats of uneven length. The more general term is additive rhythms. When applied to music from the Balkans the term aksak is also used. Aksak is a Turkish word for “limping, stumbling, or slumping”, which is the effect felt by a dancer moving to these rhythms.</p> <h3 id="_5-16-–-pajdusko" tabindex="-1">5/16 – Pajduško <a class="header-anchor" href="#_5-16-–-pajdusko" aria-label="Permalink to "5/16 – Pajduško""></a></h3> <p>If you count a 5-beat uneven rhythm as a 2+3, you get 1,2,+1,2,3, or 1,2,1,2,3, or Quick, Slow, or Q,S. The Q,S, rhythm is repeated, like a rather rapid heartbeat. Here’s what 5/16, or Q,S, sounds like. Listen for the bass drum thump.</p> <p>Bulgarians might know this rhythm as Quick-Slow, but more likely they would call it pajduško (pie-DOOSH-koh), because that’s name of the dance most commonly associated with this rhythm. See what pajduško looks like while you pick out the rhythm.</p> <youtube-embed video="fESztL_G54A" /><h3 id="_7-16-–-racenica" tabindex="-1">7/16 – Račenica <a class="header-anchor" href="#_7-16-–-racenica" aria-label="Permalink to "7/16 – Račenica""></a></h3> <p>By adding one more “Quick” beat to the front, you get Q,Q,S, or 1,2,1,2,1,2,3, or 7/16. 
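</p> <p>That counting scheme is mechanical enough to write down as a short sketch (plain JavaScript; the dance names in the comments are the ones used on this page): expand each Q into "1,2" and each S into "1,2,3", and the x/16 signature falls out of the total.</p> <pre><code>// Sketch: expand a quick/slow pattern into its count and implied meter,
// counting a quick beat as two 16ths and a slow beat as three.
function expand(pattern) {
  const counts = [];
  let total = 0;
  for (const beat of pattern) {
    const n = beat === 'S' ? 3 : 2;
    total += n;
    counts.push(n === 3 ? '1,2,3' : '1,2');
  }
  return { count: counts.join(' '), meter: total + '/16' };
}

console.log(expand('QS'));    // pajduško:  { count: '1,2 1,2,3', meter: '5/16' }
console.log(expand('QQS'));   // račenica:  { count: '1,2 1,2 1,2,3', meter: '7/16' }
console.log(expand('QQSQQ')); // kopanica:  { count: '1,2 1,2 1,2,3 1,2 1,2', meter: '11/16' }
</code></pre> <p>The same expansion works for any of the longer patterns further down the page.</p> <p>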
This uneven rhythm is commonly referred to as račenica (ruh-cheh-NEE-tsah), after the popular dance. Račenica’s come in solo, couple, and choral dances, and vary greatly in speed from one region to the next. Here’s a rather fast example of what one couple račenica looks and sounds like. For the first 45 seconds you can watch the feet, as they keep the beat exactly.</p> <youtube-embed video="xenjhHHBZkA" /><p>See also račenica under Living Dances for more examples of the dance at various speeds and configurations.</p> <h3 id="_7-16-–-makedonsko-pravoto" tabindex="-1">7/16 – Makedonsko, Pravoto <a class="header-anchor" href="#_7-16-–-makedonsko-pravoto" aria-label="Permalink to "7/16 – Makedonsko, Pravoto""></a></h3> <p>Slow down the tempo, put the slow beat at the beginning, and you have a 7/16 rhythm that’s popular in Western Bulgaria and Macedonia. Bulgarians call it Makedonsko (Macedonian), Macedonians call it Pravoto or Lesno, International Folk Dancers call it Lesnoto. The pattern is S,Q,Q, OR 1,2,3,1,2,1,2. Here’s what it sounds like, sung by the “Queen of Gypsy Music” Esma Redžepova. The bass is (usually) hitting the first beat of 1,2,3,1,2,1,2.</p> <youtube-embed video="WYE3dO7LDRk" /><h3 id="_7-16-cetvorno" tabindex="-1">7/16 Četvorno <a class="header-anchor" href="#_7-16-cetvorno" aria-label="Permalink to "7/16 Četvorno""></a></h3> <p>Speed up that SQQ pattern and you get četvorno (chet-VOHR-noh)</p> <youtube-embed video="5u4jOtmmX_M" /><p> </p> <youtube-embed video="IolB6LVhJFE" /><h3 id="_9-16-–-dajcovo" tabindex="-1">9/16 – Dajčovo <a class="header-anchor" href="#_9-16-–-dajcovo" aria-label="Permalink to "9/16 – Dajčovo""></a></h3> <p>Add another Q to račenica‘s QQS and you get QQQS, or 1,2,1,2,1,2,1,2,3. Here’s the legendary Boris Karlov making it sound easy.</p> <p>Here’s what dajčovo the basic dance looks like:</p> <youtube-embed video="TtnYFUSpp5c" /><p>Beware there are MANY ways to do this dance.</p> <h3 id="_9-16-–-other-dances" tabindex="-1">9/16 – Other Dances <a class="header-anchor" href="#_9-16-–-other-dances" aria-label="Permalink to "9/16 – Other Dances""></a></h3> <p>There are MANY other Bulgarian and non-Bulgarian dances that use this rhythm. Here’s Djanguritsa from Pirin</p> <youtube-embed video="HwrmWzjV1vc" /><p>And Svornato from the Rhodope region</p> <youtube-embed video="icRt4ku6R_8" /><h3 id="_11-16-–-kopanica" tabindex="-1">11/16 – Kopanica <a class="header-anchor" href="#_11-16-–-kopanica" aria-label="Permalink to "11/16 – Kopanica""></a></h3> <p>Now it gets complicated. Instead of the slow beat being at either end of the chain of beats, it’s in the middle. 
QQSQQ</p> <p>The dance below is Kopanica</p> <youtube-embed video="i8Ao1o2uw3o" /><h3 id="_11-16-–-gankino" tabindex="-1">11/16 – Gankino <a class="header-anchor" href="#_11-16-–-gankino" aria-label="Permalink to "11/16 – Gankino""></a></h3> <p>And here’s Gankino</p> <youtube-embed video="WRdifDUq2EA" /><h3 id="_12-16-petrunino-horo-s-q-q-q-s" tabindex="-1">12/16- Petrunino horo S,Q,Q,Q,S, <a class="header-anchor" href="#_12-16-petrunino-horo-s-q-q-q-s" aria-label="Permalink to "12/16- Petrunino horo S,Q,Q,Q,S,""></a></h3> <p>Here, there’s a slow at both ends.</p> <youtube-embed video="dqUe-8WOc2E" /><p> </p> <youtube-embed video="SGWuxnDbXs" /><h3 id="_13-16-–-krivo-sadovsko-horo-q-q-q-s-q-q" tabindex="-1">13/16 – Krivo Sadovsko Horo Q,Q,Q,S,Q,Q, <a class="header-anchor" href="#_13-16-–-krivo-sadovsko-horo-q-q-q-s-q-q" aria-label="Permalink to "13/16 – Krivo Sadovsko Horo Q,Q,Q,S,Q,Q,""></a></h3> <youtube-embed video="rxiy5ZTzKMI" /><h3 id="_13-16-–-postupano-q-q-q-s-q-q" tabindex="-1">13/16 – Postupano Q,Q,Q,S,Q,Q, <a class="header-anchor" href="#_13-16-–-postupano-q-q-q-s-q-q" aria-label="Permalink to "13/16 – Postupano Q,Q,Q,S,Q,Q,""></a></h3> <p>Here’s, Postupano, from Macedonia. One look at the physical demands of the dance, as shown in this film from 1948, shows why it’s not too popular among recreational dancers today.</p> <youtube-embed video="YWY2zct7R-4" /><h3 id="_15-16-–-bucimis-q-q-q-q-s-q-q" tabindex="-1">15/16 – Bučimiš “Q,Q,Q,Q,S,Q,Q,” <a class="header-anchor" href="#_15-16-–-bucimis-q-q-q-q-s-q-q" aria-label="Permalink to "15/16 – Bučimiš “Q,Q,Q,Q,S,Q,Q,”""></a></h3> <p>See also article on <a href="https://folkdancefootnotes.org/dance/a-real-folk-dance-what-is-it/1st-generation-dances/bucimis-%d0%b1%d1%83%d1%87%d0%b8%d0%bc%d0%b8%d1%88-%d1%85%d0%be%d1%80%d0%be-bulgaria/" target="_blank" rel="noreferrer">Bucimiš</a></p> <youtube-embed video="xeGgdjY5oXI" /><h3 id="_18-16-–-jovino-horo-or-jove-malaj-mome-7-16-11-6-or-s-q-q-q-q-s-q-q" tabindex="-1">18/16 – Jovino Horo or Jove Malaj Mome: 7/16+11/6 or “S,Q,Q,Q,Q,S,Q,Q,” <a class="header-anchor" href="#_18-16-–-jovino-horo-or-jove-malaj-mome-7-16-11-6-or-s-q-q-q-q-s-q-q" aria-label="Permalink to "18/16 – Jovino Horo or Jove Malaj Mome: 7/16+11/6 or “S,Q,Q,Q,Q,S,Q,Q,”""></a></h3> <p>See also article on <a href="https://folkdancefootnotes.org/dance/a-real-folk-dance-what-is-it/1st-generation-dances/jove-malaj-mome-jovino-horo-oro-kolo-bulgaria-serbia-macedonia/" target="_blank" rel="noreferrer">Jove Malaj Mome</a></p> <youtube-embed video="S_-6SY7xGNw" /><h3 id="_22-16-–-sandansko-9-16-13-16-or-qqqs-qqqsqq" tabindex="-1">22/16 – Sandansko: 9/16 + 13/16 or “QQQS + QQQSQQ” <a class="header-anchor" href="#_22-16-–-sandansko-9-16-13-16-or-qqqs-qqqsqq" aria-label="Permalink to "22/16 – Sandansko: 9/16 + 13/16 or “QQQS + QQQSQQ”""></a></h3> <p>See also article on <a href="https://folkdancefootnotes.org/dance/a-real-folk-dance-what-is-it/2nd-generation-dances/sandansko-horo-%d1%81%d0%b0%d0%bd%d0%b4%d0%b0%d0%bd%d1%81%d0%ba%d0%be-pirin-or-strandzha-thrace-bulgaria/" target="_blank" rel="noreferrer">Sandansko</a></p> <youtube-embed video="G5tSEbTsrQM" /><p>25/16 – Sedi Donka: 7/16 + 11/16 + 7/16 or S,Q,Q,S,Q,Q,Q,Q,S,Q,Q,</p> <youtube-embed video="YG5MNYfJWmQ" /><hr> <h2 id="against-the-odds-an-exploration-of-bulgarian-rhythms" tabindex="-1">Against The Odds: an Exploration of Bulgarian Rhythms <a class="header-anchor" href="#against-the-odds-an-exploration-of-bulgarian-rhythms" aria-label="Permalink to "Against The Odds: an Exploration of 
Bulgarian Rhythms""></a></h2> <blockquote> <p>Vessela Stoyanova</p> </blockquote> <p>The overwhelming majority of Bulgarian folk music happens to be in odd meters–typically 5, 7, 9 and 11, with occasional combinations of those creating 13, 15, 17 and larger. Some musicologists have linked these odd meters to the history of the region’s languages–especially poetry–going back to Ancient Greece. Others connect them to dances, insofar as each odd time signature tends to be accompanied by a specific dance. Indeed, many odd metered song forms are named after such dances, for instance kopanitsa, which always implies 11/8. In fact, many accomplished folk musicians in Bulgaria could not tell you what the time signature of the music is; instead, they will refer to it in terms of its dance.</p> <p>The reason I feel compelled to share this information is twofold. On the one hand, Balkan music is becoming more and more prominent in the US. On the other hand, my command of odd meters has helped me greatly in assimilating difficult prog rock or contemporary classical pieces where odd meters are often used.</p> <p>Now to be fair–and, alas, to contradict the clever pun of my title–I prefer the term “irregular” instead of “odd,” because many Bulgarian rhythms are technically even, such as 8/8, 10/8, 12/8 or 22/8. However, within a given measure of these even time signatures, you would likely have beats of different lengths. The Bulgarian word for all of these rhythms would translate roughly as uneven-beat music. Think about the beats in a 6/8 measure (two dotted quarter notes) compared to that of a 3/4 measure (three quarter notes). If you are familiar with the melody from Westside Story, “I wanna live in America” (one measure of 6/8 followed by one measure of 3/4), imagine it as one long measure of 12/8. This is more akin to the beat ratios encountered in Balkan meters, where the dotted quarter beats co-exist with the quarter beats in the same measure in various combinations.</p> <p>To be clear, I am talking specifically about time signatures in which the denominator is 8 (or in some cases 16), but not 4. That is to say, the beat is not equal to the 8th note, but rather a group of 8th notes. In the west that phenomenon is typically expressed with time signatures of 6/8 or 12/8.</p> <p>Since Bulgarian time signatures are linked to dances, it is crucial that the music grooves. If this is the first time you’re attempting to feel or play Balkan odd meters, beware treating them as “missing a beat,” which is the most common Balkan groove killer I’ve encountered in the west. 7/8 is not 4/4 minus one eighth note! Imagine thinking of 3/4 as 4/4 minus one quarter note. I don’t think anyone will be waltzing to that. Moreover, if you are used to 4/4 (and the majority of westerners are), chances are your body will automatically revert back to it while playing, especially if you only allow yourself to count in terms of it.</p> <p>Balkan time signatures can also be understood as subdivisions of 2’s and 3’s. Native Bulgarian musicians don’t exactly think in these terms, but early Balkan musicologists found this to be an effective method of communicating the “uneven-beat” nature of Bulgarian folk music in western notation. And when Bela Bartok visited the region in the early twentieth-century, this way of notating the music became standard. In reality folk musicians in Bulgaria don’t think in terms of 2’s and 3’s, but in terms of short and long beats. 
The shortness and longness of beats may actually vary from village to village, so the subdivisions of 2’s and 3’s are approximations at best. But we encounter the same situation with swing music in which two 8th notes may be played closer to a dotted 8th and a 16th, or an 8th note triplet, but the actual interpretation is up to the musicians.</p> <p>So, the big questions: How short exactly is the short beat, and how long is the long beat? And how can one develop a sense of those lengths without resorting to counting?</p> <p>Here are some practical suggestions to help musicians who are inexperienced with Balkan rhythms:</p> <ul> <li>Start by counting. For a measure of 7/8, count: “One-two one-two one-two-three.” Or, if you are familiar with Indian solkattu: “ta-ka di-mi ta-ki-ta.” Turkish rhythmic syllables would be as follow: “Dum Dum Tek.” Or make up your own phrase in your own language, like “Ripe Red Strawberry.”</li> <li>If you are a visual learner, picture each beat as either a square or rectangle. The square would equal two 8th notes, while the triangle would equal three 8th notes.</li> <li>And last but not least, clap along with the music: Clap your hands together for the short beats; clap your hands on a table or your lap for the long ones.</li> <li>Learn the melody, regardless of your instrument. A couple of years ago I taught my three American stepdaughters a childrens’ song from Bulgaria. They learned it quickly and without hesitation, despite that it was in 7/8. Most musicians and educators, by contrast, tend to want to analyze the meter beforehand. But if you can sing the melody already, learning the rhythm happens much more easily.</li> </ul> <p>Once you’ve internalized the pulse enough to follow along with the music, start thinking of 7/8 as a measure of three beats in which you have one long followed by two short beats. In Bulgaria this is referred to as the “male” version of the dance ruchenitsa, and is usually performed at a relatively slow tempo (also known as Macedonian ruchenitsa after the region it is most often heard in). The “female” version is performed at faster tempos and has the reverse structure, with the two short beats preceding the long beat. In some cases it’s so fast that it sounds almost like a two beat cycle, where the first beat is four eighth notes long and the second is three eighth notes long. Obviously that changes the ratio between the beats, but that’s a tangent we won’t go into today. Curiously, you’ll never find a Bulgarian folksong version of 7/8 in which the long beat occurs between the two short beats (although some contemporary arrangements have started doing this). It may come as no surprise that there is no traditional dance associated with such a pattern. Similarly, a groove in 11/8 would be perceived as having 5 beats, where the middle beat is longer, thus creating a perfect symmetry.</p> <p>A true sign that you’ve internalized the grooves is when you are able to improvise over them without outlining the meter in every measure. Similarly to playing over the bar line in jazz, in Balkan music one plays over the pulse. At the same time the rhythm section players often create their own subdivisions that go against the grain of the main pulse. That may be arranged in advance and agreed upon, or it may happen spontaneously. One typical re-subdivision is playing straight dotted quarter notes against the short-short-short-long beat in a 9/8 measure. 
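</p> <p>Written out in 8th-note positions (a small JavaScript sketch; the variable names are just labels, not terms from the tradition), that re-subdivision looks like this:</p> <pre><code>// Sketch: compare the short-short-short-long 9/8 pulse with straight
// dotted quarters over the same bar. Positions are counted in 8th notes.
function onsets(groups) {
  const result = [];
  let position = 0;
  for (const g of groups) {
    result.push(position);
    position += g;
  }
  return result;
}

const mainPulse = onsets([2, 2, 2, 3]); // [0, 2, 4, 6] short-short-short-long
const dotted = onsets([3, 3, 3]);       // [0, 3, 6]    three dotted quarters
console.log(mainPulse.filter((p) => dotted.includes(p))); // [0, 6]
</code></pre> <p>The two layers coincide only on the downbeat and at the start of the long beat, which is what produces the against-the-grain effect described here.</p> <p>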
Over time a seasoned rhythm section will learn each others’ habits and tendencies and will predict each others’ moves, while the soloists will know what to expect, how far to stretch and when to “come home.”</p> <p>A good way to practice these grooves until they become second nature is to find some good recordings, make sure you know already what the time signature and subdivision is, and just clap along. Remember, the name of the dance will tip you on what the time signature is. Then move on to songs you don’t know and try to find the beats and clap along.</p> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Real book online]]></title> <link>https://chromatone.center/practice/chord/real-book/</link> <guid>https://chromatone.center/practice/chord/real-book/</guid> <pubDate>Sat, 27 Sep 2014 00:00:00 GMT</pubDate> <description><![CDATA[Collection of jazz standards, digital corpus of chord progressions]]></description> <enclosure url="https://chromatone.center/real-book.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Why WebMIDI is not supported in iOS?]]></title> <link>https://chromatone.center/practice/midi/ios/</link> <guid>https://chromatone.center/practice/midi/ios/</guid> <pubDate>Wed, 24 Sep 2014 00:00:00 GMT</pubDate> <description><![CDATA[The story of WebMIDI browser abandonment]]></description> <content:encoded><![CDATA[<p><a href="https://apps.apple.com/us/app/web-midi-browser/id953846217" target="_blank" rel="noreferrer">https://apps.apple.com/us/app/web-midi-browser/id953846217</a></p> <p><a href="https://forum.pianoworld.com/ubbthreads.php/topics/3295753/ios-midi-compatible-web-browser.html" target="_blank" rel="noreferrer">https://forum.pianoworld.com/ubbthreads.php/topics/3295753/ios-midi-compatible-web-browser.html</a></p> <blockquote> <p>I gave it a quick try and it didn't seem to work either on the referenced tonesavvy.com site or on my own velocitester.com. It only worked on the default page that simply logs the notes being played. I didn't spend time trying to debug it however. kanefsky 12/21/22</p> </blockquote> <p><a href="https://www.reddit.com/r/WebMIDI/submit/" target="_blank" rel="noreferrer">https://www.reddit.com/r/WebMIDI/submit/</a></p> <h2 id="looks-like-webmidi-browser-on-ipad-is-finally-dead-after-8-years-of-no-updates-at-least-for-my-webapp-that-had-7-years-of-constant-updates" tabindex="-1">Looks like WebMIDI browser on iPad is finally dead after 8 years of no updates, at least for my webapp, that had 7 years of constant updates <a class="header-anchor" href="#looks-like-webmidi-browser-on-ipad-is-finally-dead-after-8-years-of-no-updates-at-least-for-my-webapp-that-had-7-years-of-constant-updates" aria-label="Permalink to "Looks like WebMIDI browser on iPad is finally dead after 8 years of no updates, at least for my webapp, that had 7 years of constant updates""></a></h2> <p>Hi! My name is Denis and I'm the author and developer of Chromatone - the Visual Music Language online research hub. I'm building it as an open source web-app from 2017 here <a href="https://chromatone.center/" target="_blank" rel="noreferrer">https://chromatone.center/</a> . It was a move from limitations of Wordpress to freedom of creativity with Vue, Vite, Vitepress, Tone.js, Elementary.js and, of course, WebMIDI.js. Now we're at 2.9.x version and approaching 3.0. 
I started with WebMIDI even before</p> ]]></content:encoded> </item> <item> <title><![CDATA[Oscillator sync]]></title> <link>https://chromatone.center/theory/synthesis/osc-sync/</link> <guid>https://chromatone.center/theory/synthesis/osc-sync/</guid> <pubDate>Sat, 20 Sep 2014 00:00:00 GMT</pubDate> <description><![CDATA[Once oscillator resetting another as a technique to produce harmonically reach sounds]]></description> <content:encoded><![CDATA[<p>Oscillator sync is a feature in some synthesizers with two or more VCOs, DCOs, or "virtual" oscillators. As one oscillator finishes a cycle, it resets the period of another oscillator, forcing the latter to have the same base frequency. This can produce a harmonically rich sound, the timbre of which can be altered by varying the synced oscillator's frequency. A synced oscillator that resets other oscillator(s) is called the master; the oscillators which it resets are called slaves. There are two common forms of oscillator sync which appear on synthesizers: Hard Sync and Soft Sync. According to Sound on Sound journalist Gordon Reid, oscillator sync is "one of the least understood facilities on any synthesizer".</p> <h2 id="hard-sync" tabindex="-1">Hard Sync <a class="header-anchor" href="#hard-sync" aria-label="Permalink to "Hard Sync""></a></h2> <p>The leader oscillator's pitch is generated by user input (typically the synthesizer's keyboard), and is arbitrary. The follower oscillator's pitch may be tuned to (or detuned from) this frequency, or may remain constant. Every time the leader oscillator's cycle repeats, the follower is retriggered, regardless of its position. If the follower is tuned to a lower frequency than the leader it will be forced to repeat before it completes an entire cycle, and if it is tuned to a higher frequency it will be forced to repeat partway through a second or third cycle. This technique ensures that the oscillators are technically playing at the same frequency, but the irregular cycle of the follower oscillator often causes complex timbres and the impression of harmony. If the tuning of the follower oscillator is swept, one may discern a harmonic sequence.</p> <p>This effect may be achieved by measuring the zero axis crossings of the leader oscillator and retriggering the follower oscillator after every other crossing.</p> <p>This form of oscillator sync is more common than soft sync, but is prone to generating aliasing in naive digital implementations.</p> <h2 id="soft-sync" tabindex="-1">Soft Sync <a class="header-anchor" href="#soft-sync" aria-label="Permalink to "Soft Sync""></a></h2> <p>There are several other kinds of sync which may also be called Soft Sync. In a Hard Sync setup, the follower oscillator is forced to reset to some level and phase (for example, zero) with every cycle of the leader regardless of position or direction of the follower waveform, which often generates asymmetrical shapes.</p> <p>In some cases, Soft Sync refers to a process intended to nudge and lock the follower oscillator into the same or an integer or fractional multiple of the leader oscillator frequency when they both have similar phases, similar to a phase-locked loop.</p> <h3 id="reversing-sync" tabindex="-1">Reversing Sync <a class="header-anchor" href="#reversing-sync" aria-label="Permalink to "Reversing Sync""></a></h3> <p>This form of oscillator sync is less common. This form is very similar to Hard Sync, with one small difference. 
In Reversing Soft Sync, rather than resetting to zero, the wave is inverted; that is, its direction is reversed. Reversing Soft Sync is more associated with analog triangle core oscillators than analog sawtooth core oscillators.</p> <h3 id="threshold-or-weak-sync" tabindex="-1">Threshold or Weak Sync <a class="header-anchor" href="#threshold-or-weak-sync" aria-label="Permalink to "Threshold or Weak Sync""></a></h3> <p>Several kinds of Soft Sync use comparison thresholds:</p> <ul> <li>Hard Sync which is disabled when the frequency or amplitude of the follower crosses a user-defined threshold.</li> <li>Hard Sync which is disabled when the frequency of the follower extends too high above or too far below the frequency of the leader.</li> <li>Hard Sync which is disabled when the frequency of the follower is lower than the frequency of the leader.</li> </ul> <p>Soft Sync may accurately refer to any of these, depending on the synthesizer or manufacturer in question.</p> <h3 id="phase-advance-sync" tabindex="-1">Phase Advance 'Sync' <a class="header-anchor" href="#phase-advance-sync" aria-label="Permalink to "Phase Advance 'Sync'""></a></h3> <p>The phase of the follower is advanced by some amount when the leader oscillator level crosses some threshold. Used for audio synthesis, this may give an audible effect similar to Soft Sync.</p> <h3 id="reset-inhibit-sync" tabindex="-1">Reset Inhibit Sync <a class="header-anchor" href="#reset-inhibit-sync" aria-label="Permalink to "Reset Inhibit Sync""></a></h3> <p>When the leader oscillator crosses some threshold, the normal reset of the follower is disabled: it will stick at its final level, positive or negative. When the leader crosses back over some threshold, the follower is reset.</p> <h3 id="overlap-sync" tabindex="-1">Overlap Sync <a class="header-anchor" href="#overlap-sync" aria-label="Permalink to "Overlap Sync""></a></h3> <p>In this method, the current wave completes but a new waveform is generated at the sync pulse. The tail of the old wave and the new wave are output summed if they overlap.</p> <h2 id="digital-implementation-aspects" tabindex="-1">Digital Implementation Aspects <a class="header-anchor" href="#digital-implementation-aspects" aria-label="Permalink to "Digital Implementation Aspects""></a></h2> <p>Naive approaches to sync in digital oscillators will result in aliasing. To prevent this, band-limited methods such as additive synthesis, BLIT (Band-Limited Impulse Train) or BLEP (Band-Limited Step) must be adopted to avoid aliasing.</p> <p>In a digital oscillator, best practice is that the follower will not be reset to the identical phase each cycle, but to a phase advanced by an equivalent time to the phase of the leader at the reset. This prevents jitter in the follower frequency and provides truer synchronization.</p> <p>For digital oscillators, Reversing Sync may less frequently generate aliasing. 
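</p> <p>As a rough per-sample sketch of both behaviours (plain JavaScript with two phase accumulators; the frequencies and sample rate are arbitrary example values, and, as noted above, this naive approach aliases):</p> <pre><code>// Sketch: naive hard sync vs reversing sync with two phase accumulators.
// The follower is rendered as a plain ramp; no band-limiting is attempted.
function renderSync(mode, samples) {
  const sampleRate = 48000;
  const leaderFreq = 110;
  const followerFreq = 380;
  let leaderPhase = 0;
  let followerPhase = 0;
  let direction = 1;
  const out = new Float32Array(samples);
  for (let i = 0; i !== samples; i++) {
    leaderPhase += leaderFreq / sampleRate;
    if (leaderPhase >= 1) {                           // leader completed a cycle
      leaderPhase -= 1;
      if (mode === 'hard') followerPhase = 0;         // hard sync: reset the follower
      if (mode === 'reverse') direction = -direction; // reversing sync: flip its slope
    }
    followerPhase += direction * (followerFreq / sampleRate);
    if (followerPhase >= 1) followerPhase -= 1;
    if (0 > followerPhase) followerPhase += 1;
    out[i] = 2 * followerPhase - 1;                   // naive sawtooth-style ramp
  }
  return out;
}

const hard = renderSync('hard', 48000);      // one second of hard-synced output
const reversed = renderSync('reverse', 48000); // one second of reversing-sync output
</code></pre> <p>A better digital version would also carry the leader's fractional overshoot into the follower's new phase, as recommended above, rather than resetting exactly to zero.</p> <p>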
This effect may be naively implemented by measuring the zero axis crossings of the leader oscillator and reversing the slope of the follower oscillator after every other crossing.</p> <p>For digital implementation, note that none of the Threshold or Weak Sync methods actually synthesize the waveform in a way different from Hard Sync (rather, they selectively deactivate it).</p> <p>Overlap sync is primarily a digital technique with simple implementation, such as used in FOF; an analog implementation could be a highly damped sine oscillator excited by the reset pulse.</p> <h2 id="sync-based-architectures" tabindex="-1">Sync-based Architectures <a class="header-anchor" href="#sync-based-architectures" aria-label="Permalink to "Sync-based Architectures""></a></h2> <p>A variety of synthesis architectures are based on sync, often used in conjunction with amplitude, frequency, or phase modulation. Such architectures include VOSIM and physical modelling synthesis.</p> ]]></content:encoded> </item> <item> <title><![CDATA[Elementary playground]]></title> <link>https://chromatone.center/practice/experiments/elementary/</link> <guid>https://chromatone.center/practice/experiments/elementary/</guid> <pubDate>Thu, 21 Aug 2014 00:00:00 GMT</pubDate> <description><![CDATA[Place to polish Elementary.audio scripting]]></description> <content:encoded><![CDATA[<ElemPlayground/>]]></content:encoded> </item> <item> <title><![CDATA[P2P connections]]></title> <link>https://chromatone.center/practice/experiments/p2p/</link> <guid>https://chromatone.center/practice/experiments/p2p/</guid> <pubDate>Wed, 30 Jul 2014 00:00:00 GMT</pubDate> <description><![CDATA[GUN database experiments]]></description> <content:encoded><![CDATA[<GunP2p/><MidiKeys class="mt-8" :height="200" />]]></content:encoded> <enclosure url="https://chromatone.center/p2p.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Sensors]]></title> <link>https://chromatone.center/practice/experiments/sensors/</link> <guid>https://chromatone.center/practice/experiments/sensors/</guid> <pubDate>Wed, 30 Jul 2014 00:00:00 GMT</pubDate> <description><![CDATA[Device orientation and position]]></description> <content:encoded><![CDATA[<Sensors/>]]></content:encoded> </item> <item> <title><![CDATA[Audio Programming]]></title> <link>https://chromatone.center/theory/synthesis/audio-programming/</link> <guid>https://chromatone.center/theory/synthesis/audio-programming/</guid> <pubDate>Mon, 02 Jun 2014 00:00:00 GMT</pubDate> <description><![CDATA[List of most notable audio programming languages]]></description> <content:encoded><![CDATA[<ul> <li><a href="https://en.wikipedia.org/wiki/ABC_notation" title="ABC notation" target="_blank" rel="noreferrer">ABC notation</a>, a language for notating music using the ASCII character set</li> <li><a href="https://bolprocessor.org/" target="_blank" rel="noreferrer">Bol Processor</a>, a model of <a href="https://en.wikipedia.org/wiki/Formal_grammar" title="Formal grammar" target="_blank" rel="noreferrer">formal grammars</a> enriched with polymetric expressions for the representation of time structures</li> <li><a href="https://en.wikipedia.org/wiki/ChucK" title="ChucK" target="_blank" rel="noreferrer">ChucK</a>, strongly timed, concurrent, and on-the-fly audio programming language</li> <li><a href="https://en.wikipedia.org/wiki/Real-time_Cmix" title="Real-time Cmix" target="_blank" rel="noreferrer">Real-time Cmix</a>, a <a href="https://en.wikipedia.org/wiki/MUSIC-N" title="MUSIC-N" target="_blank" 
rel="noreferrer">MUSIC-N</a> synthesis language somewhat similar to Csound</li> <li><a href="https://cmajor.dev/" target="_blank" rel="noreferrer">Cmajor</a>, a high-performance JIT-compiled C-style language for DSP</li> <li><a href="https://en.wikipedia.org/wiki/Common_Lisp_Music" title="Common Lisp Music" target="_blank" rel="noreferrer">Common Lisp Music</a> (CLM), a music synthesis and signal processing package in the Music V family</li> <li><a href="https://en.wikipedia.org/wiki/Csound" title="Csound" target="_blank" rel="noreferrer">Csound</a>, a <a href="https://en.wikipedia.org/wiki/MUSIC-N" title="MUSIC-N" target="_blank" rel="noreferrer">MUSIC-N</a> synthesis language released under the <a href="https://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License" title="GNU Lesser General Public License" target="_blank" rel="noreferrer">LGPL</a> with many available <a href="https://en.wikipedia.org/wiki/Unit_generator" title="Unit generator" target="_blank" rel="noreferrer">unit generators</a></li> <li><a href="https://elementary.audio" target="_blank" rel="noreferrer">Elementary audio</a>, a JavaScript library for digital audio signal processing. Generate and transform sound, both natively and in the browser with a functional, declarative API.</li> <li><a href="https://en.wikipedia.org/wiki/Extempore_(software)" title="Extempore (software)" target="_blank" rel="noreferrer">Extempore</a>, a live-coding environment that borrows a core foundation from the <a href="https://en.wikipedia.org/wiki/Impromptu_(programming_environment)" title="Impromptu (programming environment)" target="_blank" rel="noreferrer">Impromptu</a> environment</li> <li><a href="https://en.wikipedia.org/wiki/FAUST_(programming_language)" title="FAUST (programming language)" target="_blank" rel="noreferrer">FAUST</a>, Functional Audio Stream, a functional compiled language for efficient real-time audio signal processing</li> <li><a href="https://glicol.org" target="_blank" rel="noreferrer">GLICOL</a>, a graph-oriented live coding language written in Rust</li> <li><a href="https://en.wikipedia.org/wiki/Hierarchical_Music_Specification_Language" title="Hierarchical Music Specification Language" target="_blank" rel="noreferrer">Hierarchical Music Specification Language</a> (HMSL), optimized more for music than synthesis, developed in the 1980s in <a href="https://en.wikipedia.org/wiki/Forth_(programming_language)" title="Forth (programming language)" target="_blank" rel="noreferrer">Forth</a></li> <li><a href="https://en.wikipedia.org/wiki/Impromptu_(programming_environment)" title="Impromptu (programming environment)" target="_blank" rel="noreferrer">Impromptu</a>, a <a href="https://en.wikipedia.org/wiki/Scheme_(programming_language)" title="Scheme (programming language)" target="_blank" rel="noreferrer">Scheme</a> language environment for <a href="https://en.wikipedia.org/wiki/Mac_OS_X" title="Mac OS X" target="_blank" rel="noreferrer">Mac OS X</a> capable of sound and video synthesis, algorithmic composition, and 2D and 3D graphics programming</li> <li><a href="https://en.wikipedia.org/wiki/Ixi_lang" title="Ixi lang" target="_blank" rel="noreferrer">Ixi lang</a>, a programming language for live coding musical expression.</li> <li><a href="https://en.wikipedia.org/wiki/JFugue" title="JFugue" target="_blank" rel="noreferrer">JFugue</a>, a Java and JVM library for programming music that outputs to MIDI and has the ability to convert to formats including ABC Notation, Lilypond, and MusicXML</li> <li><a 
href="https://en.wikipedia.org/wiki/JMusic" title="JMusic" target="_blank" rel="noreferrer">jMusic</a></li> <li><a href="https://en.wikipedia.org/wiki/JSyn" title="JSyn" target="_blank" rel="noreferrer">JSyn</a></li> <li><a href="https://en.wikipedia.org/wiki/Keykit" title="Keykit" target="_blank" rel="noreferrer">Keykit</a>, a programming language and portable graphical environment for MIDI music composition</li> <li><a href="https://en.wikipedia.org/wiki/Kyma_(sound_design_language)" title="Kyma (sound design language)" target="_blank" rel="noreferrer">Kyma (sound design language)</a></li> <li><a href="https://en.wikipedia.org/wiki/LilyPond" title="LilyPond" target="_blank" rel="noreferrer">LilyPond</a>, a computer program and file format for music engraving.</li> <li><a href="https://en.wikipedia.org/wiki/Max/MSP" title="Max/MSP" target="_blank" rel="noreferrer">Max/MSP</a>, a proprietary, modular visual programming language aimed at sound synthesis for music</li> <li><a href="https://en.wikipedia.org/wiki/Music_Macro_Language" title="Music Macro Language" target="_blank" rel="noreferrer">Music Macro Language</a> (MML), often used to produce <a href="https://en.wikipedia.org/wiki/Chiptune" title="Chiptune" target="_blank" rel="noreferrer">chiptune</a> music in Japan</li> <li><a href="https://en.wikipedia.org/wiki/MUSIC-N" title="MUSIC-N" target="_blank" rel="noreferrer">MUSIC-N</a>, includes versions I, II, III, IV, IV-B, IV-BF, V, 11, and 360</li> <li><a href="https://en.wikipedia.org/wiki/Nyquist_(programming_language)" title="Nyquist (programming language)" target="_blank" rel="noreferrer">Nyquist</a></li> <li><a href="https://en.wikipedia.org/wiki/OpenMusic" title="OpenMusic" target="_blank" rel="noreferrer">OpenMusic</a></li> <li><a href="https://100r.co/site/orca.html" title="Orca (music programming language)" target="_blank" rel="noreferrer">Orca (music programming language)</a>, a two-dimensional esoteric programming language in which every letter of the alphabet is an operator, where lowercase letters operate on bang, uppercase letters operate each frame</li> <li><a href="https://en.wikipedia.org/wiki/Pure_Data" title="Pure Data" target="_blank" rel="noreferrer">Pure Data</a>, a modular visual programming language for signal processing aimed at music creation</li> <li><a href="https://tidalcycles.org/" title="Tidal Cycles (page does not exist)" target="_blank" rel="noreferrer">Tidal Cycles</a>, a live coding environment for algorithmic patterns, written in Haskell and using Supercollider for synthesis</li> <li><a href="https://en.wikipedia.org/wiki/Reaktor" title="Reaktor" target="_blank" rel="noreferrer">Reaktor</a></li> <li><a href="https://en.wikipedia.org/wiki/Sonic_Pi" title="Sonic Pi" target="_blank" rel="noreferrer">Sonic Pi</a></li> <li><a href="https://en.wikipedia.org/wiki/Structured_Audio_Orchestra_Language" title="Structured Audio Orchestra Language" target="_blank" rel="noreferrer">Structured Audio Orchestra Language</a> (SAOL), part of the <a href="https://en.wikipedia.org/wiki/MPEG-4_Structured_Audio" title="MPEG-4 Structured Audio" target="_blank" rel="noreferrer">MPEG-4 Structured Audio</a> standard</li> <li><a href="https://en.wikipedia.org/wiki/SuperCollider" title="SuperCollider" target="_blank" rel="noreferrer">SuperCollider</a></li> <li><a href="https://en.wikipedia.org/wiki/SynthEdit" title="SynthEdit" target="_blank" rel="noreferrer">SynthEdit</a>, a modular visual programming language for signal processing aimed at creating <a 
href="https://en.wikipedia.org/wiki/Audio_plug-in" title="Audio plug-in" target="_blank" rel="noreferrer">audio plug-ins</a></li> <li><a href="https://github.com/dy/melo" target="_blank" rel="noreferrer">melo</a> - Micro language for floatbeats and audio, it has smooth operator and organic sugar. Compiles to compact 0-runtime WASM with linear memory.</li> <li><a href="https://github.com/Bubobubobubobubo/topos?" target="_blank" rel="noreferrer">topos</a> - A Web-Based Algorithmic Sequencer. Topos is a web based live coding environment designed to be installation-free, independant and fun.</li> </ul> ]]></content:encoded> </item> <item> <title><![CDATA[AMY synth]]></title> <link>https://chromatone.center/practice/synth/amy/</link> <guid>https://chromatone.center/practice/synth/amy/</guid> <pubDate>Tue, 05 Mar 2013 00:00:00 GMT</pubDate> <description><![CDATA[Wasm synth playground]]></description> <content:encoded><![CDATA[<SynthAmy/><MidiKeys></MidiKeys><h2 id="amy-synth" tabindex="-1">AMY Synth <a class="header-anchor" href="#amy-synth" aria-label="Permalink to "AMY Synth""></a></h2> <h3 id="the-additive-music-synthesizer-library" tabindex="-1">the Additive Music synthesizer librarY <a class="header-anchor" href="#the-additive-music-synthesizer-library" aria-label="Permalink to "the Additive Music synthesizer librarY""></a></h3> <p>Highly experimental. <a href="https://github.com/bwhitman/amy/issues/35" target="_blank" rel="noreferrer">Issue pending</a></p> <ul> <li>Press <code>A</code> on your keyboard to play a note. Or push the PLAY button.</li> <li>Use <i class="p-3 i-la-arrow-left"></i> and <i class="p-3 i-la-arrow-right"></i> keys to browse patches</li> </ul> <p><a href="https://github.com/bwhitman/amy" target="_blank" rel="noreferrer">AMY repository</a></p> <p>AMY accepts commands in ASCII, like so:</p> <h1 id="v0w4f440-0l0-9" tabindex="-1">v0w4f440.0l0.9 <a class="header-anchor" href="#v0w4f440-0l0-9" aria-label="Permalink to "v0w4f440.0l0.9""></a></h1> <p>Here's the full list:</p> <table tabindex="0"> <thead> <tr> <th>Code</th> <th>Python</th> <th>Type-range</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>amp</td> <td>float 0-1+</td> <td>use after a note on is triggered with velocity to adjust amplitude without re-triggering the note</td> </tr> <tr> <td>A</td> <td>bp0</td> <td>string</td> <td>in commas, like 100,0.5,150,0.25,200,0 -- envelope generator with alternating time(ms) and ratio. last pair triggers on note off</td> </tr> <tr> <td>B</td> <td>bp1</td> <td>string</td> <td>set the second breakpoint generator. see breakpoint 0</td> </tr> <tr> <td>b</td> <td>feedback</td> <td>float 0-1</td> <td>use for the ALGO synthesis type in FM, or partial synthesis (for bandwidth) or for karplus-strong, or to indicate PCM looping (0 off, >0, on)</td> </tr> <tr> <td>C</td> <td>bp2</td> <td>string</td> <td>3rd breakpoint generator</td> </tr> <tr> <td>d</td> <td>duty</td> <td>float 0.001-0.999</td> <td>duty cycle for pulse wave, default 0.5</td> </tr> <tr> <td>D</td> <td>debug</td> <td>uint, 2-4</td> <td>2 shows queue sample, 3 shows oscillator data, 4 shows modified oscillator. will interrupt audio!</td> </tr> <tr> <td>f</td> <td>freq</td> <td>float</td> <td>frequency of oscillator</td> </tr> <tr> <td>F</td> <td>filter_freq</td> <td>float</td> <td>center frequency for biquad filter</td> </tr> <tr> <td>g</td> <td>mod_target</td> <td>uint mask</td> <td>Which parameter modulation/LFO controls. 1=amp, 2=duty, 4=freq, 8=filter freq, 16=resonance, 32=feedback. 
Can handle any combo, add them together</td> </tr> <tr> <td>G</td> <td>filter_type</td> <td>0-3</td> <td>0 = none (default), 1 = low pass, 2 = band pass, 3 = hi pass.</td> </tr> <tr> <td>I</td> <td>ratio</td> <td>float</td> <td>for ALGO types, where the base note frequency controls the modulators, or for the ALGO base note and PARTIALS base note, where the ratio controls the speed of the playback</td> </tr> <tr> <td>L</td> <td>mod_source</td> <td>0 to OSCS-1</td> <td>Which oscillator is used as a modulation/LFO source for this oscillator. Source oscillator will be silent.</td> </tr> <tr> <td>l</td> <td>vel</td> <td>float 0-1+</td> <td>velocity - >0 to trigger note on, 0 to trigger note off. sets amplitude</td> </tr> <tr> <td>N</td> <td>latency_ms</td> <td>uint</td> <td>sets latency in ms. default 0</td> </tr> <tr> <td>n</td> <td>note</td> <td>uint 0-127</td> <td>midi note, sets frequency</td> </tr> <tr> <td>o</td> <td>algorithm</td> <td>uint 1-32</td> <td>DX7 algorithm to use for ALGO type</td> </tr> <tr> <td>O</td> <td>algo_source</td> <td>string</td> <td>which oscillators to use for the algorithm. list of six, use -1 for not used, e.g. 0,1,2,-1,-1,-1</td> </tr> <tr> <td>p</td> <td>patch</td> <td>uint</td> <td>choose a preloaded PCM sample, partial patch or FM patch number for ALGO waveforms.</td> </tr> <tr> <td>P</td> <td>phase</td> <td>float 0-1</td> <td>where in the oscillator's cycle to start sampling from (also works on the PCM buffer). default 0</td> </tr> <tr> <td>R</td> <td>resonance</td> <td>float</td> <td>q factor of biquad filter. in practice, 0-10.0. default 0.7</td> </tr> <tr> <td>S</td> <td>reset</td> <td>uint</td> <td>resets given oscillator. set to > OSCS to reset all oscillators, gain and EQ</td> </tr> <tr> <td>T</td> <td>bp0_target</td> <td>uint mask</td> <td>Which parameter bp0 controls. 1=amp, 2=duty, 4=freq, 8=filter freq, 16=resonance, 32=feedback (can be added together). Can add 64 for linear ramp, otherwise exponential</td> </tr> <tr> <td>t</td> <td>timestamp</td> <td>uint</td> <td>ms of expected playback since some fixed start point on your host. you should always give this if you can.</td> </tr> <tr> <td>v</td> <td>osc</td> <td>uint 0 to OSCS-1</td> <td>which oscillator to control</td> </tr> <tr> <td>V</td> <td>volume</td> <td>float 0-10</td> <td>volume knob for entire synth, default 1.0</td> </tr> <tr> <td>w</td> <td>wave</td> <td>uint 0-11</td> <td>waveform: [0=SINE, PULSE, SAW_DOWN, SAW_UP, TRIANGLE, NOISE, KS, PCM, ALGO, PARTIAL, PARTIALS, OFF]. default: 0/SINE</td> </tr> <tr> <td>W</td> <td>bp1_target</td> <td>uint mask</td> <td>see bp0_target</td> </tr> <tr> <td>x</td> <td>eq_l</td> <td>float</td> <td>in dB, fc=800Hz amount, -15 to 15. 0 is off. default 0.</td> </tr> <tr> <td>X</td> <td>bp2_target</td> <td>uint mask</td> <td>see bp0_target</td> </tr> <tr> <td>y</td> <td>eq_m</td> <td>float</td> <td>in dB, fc=2500Hz amount, -15 to 15. 0 is off. default 0.</td> </tr> <tr> <td>z</td> <td>eq_h</td> <td>float</td> <td>in dB, fc=7500Hz amount, -15 to 15. 0 is off.
default 0.</td> </tr> </tbody> </table> ]]></content:encoded> <enclosure url="https://chromatone.center/dx7_algorithms.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[ChucK]]></title> <link>https://chromatone.center/practice/synth/chuck/</link> <guid>https://chromatone.center/practice/synth/chuck/</guid> <pubDate>Wed, 23 May 2012 00:00:00 GMT</pubDate> <description><![CDATA[Music programming language]]></description> <content:encoded><![CDATA[<WebChuck/><h2 id="strongly-timed-concurrent-on-the-fly" tabindex="-1">Strongly-timed | Concurrent | On-the-fly <a class="header-anchor" href="#strongly-timed-concurrent-on-the-fly" aria-label="Permalink to "Strongly-timed | Concurrent | On-the-fly""></a></h2> <p><a href="https://chuck.stanford.edu/" target="_blank" rel="noreferrer">ChucK</a> is a programming language for real-time sound synthesis and music creation. ChucK offers a unique <strong>time-based, concurrent programming model</strong> that is precise and expressive (we call this <strong>strongly-timed</strong>), dynamic control rates, and the ability to add and modify code <strong>on-the-fly</strong>. In addition, ChucK supports MIDI, OpenSoundControl, HID device, and multi-channel audio. It is open-source and freely available on macOS, Windows, and Linux. It's fun and easy to learn, and offers composers, researchers, and performers a powerful programming tool for building and experimenting with complex audio synthesis/analysis programs, and real-time interactive music.</p> <youtube-embed video="2rpk461T6l4"/>]]></content:encoded> <enclosure url="https://chromatone.center/ch.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[MIDI color mixing]]></title> <link>https://chromatone.center/practice/color/mix/</link> <guid>https://chromatone.center/practice/color/mix/</guid> <pubDate>Tue, 05 Apr 2011 00:00:00 GMT</pubDate> <description><![CDATA[Full screen colors react to midi notes to explore the cross stimulation of the brain]]></description> <content:encoded><![CDATA[<ColorMix style="position: sticky; top: 0;" /><div class="info custom-block"><p class="custom-block-title">INFO</p> <p>Press any key on your keyboard to hear a note and to see the corresponding color. 
Go fullscreen with the button at the top right.</p> </div> ]]></content:encoded> <enclosure url="https://chromatone.center/milad-fakurian.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Drum rudiments]]></title> <link>https://chromatone.center/practice/rhythm/rudiments/</link> <guid>https://chromatone.center/practice/rhythm/rudiments/</guid> <pubDate>Sun, 07 Sep 2003 00:00:00 GMT</pubDate> <description><![CDATA[Interactive database of all 40 standard drum rudiments]]></description> <content:encoded><![CDATA[<RhythmDrumRudiments />]]></content:encoded> <enclosure url="https://chromatone.center/daniel-shapiro.jpg" length="0" type="image/jpg"/> </item> <item> <title><![CDATA[Wheels]]></title> <link>https://chromatone.center/practice/rhythm/wheel/</link> <guid>https://chromatone.center/practice/rhythm/wheel/</guid> <pubDate>Tue, 30 Jun 1987 00:00:00 GMT</pubDate> <description><![CDATA[Clockwise motion, but with fixed hands]]></description> <content:encoded><![CDATA[<client-only> <RhythmWheels /> </client-only> ]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> <item> <title><![CDATA[Moire]]></title> <link>https://chromatone.center/practice/experiments/moire/</link> <guid>https://chromatone.center/practice/experiments/moire/</guid> <pubDate>Sat, 28 Aug 1024 00:00:00 GMT</pubDate> <description><![CDATA[Lines and intersections experiment]]></description> <content:encoded><![CDATA[<Moire/>]]></content:encoded> <enclosure url="https://chromatone.center/cover.png" length="0" type="image/png"/> </item> </channel> </rss>