changeset 369:0106c4799906

Update documentation
author Amine Sehili <amine.sehili@gmail.com>
date Sun, 10 Jan 2021 22:36:22 +0100
parents 683c98b7f5a6
children 4d9edd170403
files doc/apireference.rst doc/apitutorial.rst doc/cmdline.rst doc/core.rst doc/dataset.rst doc/examples.rst doc/index.rst doc/installation.rst doc/io.rst doc/signal.rst doc/util.rst
diffstat 11 files changed, 204 insertions(+), 1051 deletions(-)
--- a/doc/apireference.rst	Sun Jan 10 17:11:07 2021 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,11 +0,0 @@
-`auditok` API Reference
-=======================
-
-.. toctree::
-    :titlesonly:
-    :maxdepth: 3
-
-       auditok.core <core.rst>
-       auditok.util <util.rst>
-       auditok.io <io.rst>
-       auditok.dataset <dataset.rst>
--- a/doc/apitutorial.rst	Sun Jan 10 17:11:07 2021 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,553 +0,0 @@
-`auditok` API Tutorial
-======================
-
-.. contents:: `Contents`
-   :depth: 3
-
-
-**auditok** is a module that can be used as a generic tool for data
-tokenization. Although its core motivation is **Acoustic Activity
-Detection** (AAD) and extraction from audio streams (i.e. detect
-where a noise/an acoustic activity occurs within an audio stream and
-extract the corresponding portion of signal), it can easily be
-adapted to other tasks.
-
-Generally speaking, it can be used to extract, from a sequence of
-observations, all sub-sequences that meet a certain number of
-criteria in terms of:
-
-1. Minimum length of a **valid** token (i.e. sub-sequence)
-2. Maximum length of a **valid** token
-3. Maximum tolerated consecutive **non-valid** observations within
-   a valid token
-
-Examples of a non-valid observation are: a non-numeric ASCII symbol
-if you are interested in sub-sequences of numeric symbols, or a silent
-audio window (of 10, 20 or 100 milliseconds for instance) if what
-interests you are audio regions made up of sequences of "noisy"
-windows (whatever the kind of noise: speech, baby cry, laughter, etc.).
-
-The most important component of `auditok` is the :class:`auditok.core.StreamTokenizer`
-class. An instance of this class encapsulates a :class:`auditok.util.DataValidator` and can be
-configured to detect the desired regions from a stream.
-The :func:`auditok.core.StreamTokenizer.tokenize` method accepts a :class:`auditok.util.DataSource`
-object that has a `read` method. Read data can be of any type accepted
-by the `validator`.
-
-
-As the main aim of this module is **Audio Activity Detection**,
-it provides the :class:`auditok.util.ADSFactory` factory class that makes
-it very easy to create an :class:`auditok.util.ADSFactory.AudioDataSource`
-(a class that implements :class:`auditok.util.DataSource`) object, be it from:
-
-- A file on the disk
-- A buffer of data
-- The built-in microphone (requires PyAudio)
-
-
-The :class:`auditok.util.ADSFactory.AudioDataSource` class inherits from
-:class:`auditok.util.DataSource` and supplies a higher abstraction level
-than :class:`auditok.io.AudioSource` thanks to a bunch of handy features:
-
-- Define a fixed-length `block_size` (alias `bs`, i.e. analysis window)
-- Alternatively, use `block_dur` (duration in seconds, alias `bd`)
-- Allow overlap between two consecutive analysis windows
-  (if one of the `hop_size` (`hs`) or `hop_dur` (`hd`) keywords is used and is > 0 and < `block_size`).
-  This can be very important if your validator uses the **spectral** information of audio data
-  instead of raw audio samples.
-- Limit the amount (i.e. duration) of read data (if keyword `max_time` or `mt` is used, very useful when reading data from the microphone)
-- Record all read data and rewind if necessary (if keyword `record` or `rec` is used; also useful if you read data from the microphone and
-  you want to process it many times off-line and/or save it)
-
-See :class:`auditok.util.ADSFactory` documentation for more information.
-
-Last but not least, the current version has only one audio window validator based on
-signal energy (:class:`auditok.util.AudioEnergyValidator`).
-
-**********************************
-Illustrative examples with strings
-**********************************
-
-Let us look at some examples using the :class:`auditok.util.StringDataSource` class
-created for test and illustration purposes. Imagine that each character of
-:class:`auditok.util.StringDataSource` data represents an audio slice of 100 ms for
-example. In the following examples we will use upper case letters to represent
-noisy audio slices (i.e. analysis windows or frames) and lower case letters for
-silent frames.
-
-
-Extract sub-sequences of consecutive upper case letters
-#######################################################
-
-
-We want to extract sub-sequences of characters that have:
-
-- A minimum length of 1 (`min_length` = 1)
-- A maximum length of 9999 (`max_length` = 9999)
-- Zero consecutive lower case characters within them (`max_continuous_silence` = 0)
-
-We also create an `UpperCaseChecker` whose `is_valid` method returns `True` if the
-checked character is in upper case and `False` otherwise.
-
-.. code:: python
-
-    from auditok import StreamTokenizer, StringDataSource, DataValidator
-
-    class UpperCaseChecker(DataValidator):
-       def is_valid(self, frame):
-          return frame.isupper()
-
-    dsource = StringDataSource("aaaABCDEFbbGHIJKccc")
-    tokenizer = StreamTokenizer(validator=UpperCaseChecker(),
-                 min_length=1, max_length=9999, max_continuous_silence=0)
-
-    tokenizer.tokenize(dsource)
-
-The output is a list of two tuples, each containing the extracted sub-sequence
-and its start and end positions in the original sequence:
-
-
-.. code:: python
-
-
-    [(['A', 'B', 'C', 'D', 'E', 'F'], 3, 8), (['G', 'H', 'I', 'J', 'K'], 11, 15)]
-
-
-Tolerate up to two non-valid (lower case) letters within an extracted sequence
-##############################################################################
-
-To do so, we set `max_continuous_silence` = 2:
-
-.. code:: python
-
-
-    from auditok import StreamTokenizer, StringDataSource, DataValidator
-
-    class UpperCaseChecker(DataValidator):
-       def is_valid(self, frame):
-          return frame.isupper()
-
-    dsource = StringDataSource("aaaABCDbbEFcGHIdddJKee")
-    tokenizer = StreamTokenizer(validator=UpperCaseChecker(),
-                 min_length=1, max_length=9999, max_continuous_silence=2)
-
-    tokenizer.tokenize(dsource)
-
-
-output:
-
-.. code:: python
-
-    [(['A', 'B', 'C', 'D', 'b', 'b', 'E', 'F', 'c', 'G', 'H', 'I', 'd', 'd'], 3, 16), (['J', 'K', 'e', 'e'], 18, 21)]
-
-Notice the trailing lower case letters "dd" and "ee" at the end of the two
-tokens. The default behavior of :class:`auditok.core.StreamTokenizer` is to keep the *trailing
-silence* if it does not exceed `max_continuous_silence`. This can be changed
-using the `StreamTokenizer.DROP_TRAILING_SILENCE` mode (see next example).
-
-Remove trailing silence
-#######################
-
-Trailing silence can be useful for many sound recognition applications, including
-speech recognition. Moreover, from the human auditory system point of view, trailing
-low energy signal helps avoid abrupt signal cuts.
-
-If you want to remove it anyway, you can do it by setting `mode` to `StreamTokenizer.DROP_TRAILING_SILENCE`:
-
-.. code:: python
-
-    from auditok import StreamTokenizer, StringDataSource, DataValidator
-
-    class UpperCaseChecker(DataValidator):
-       def is_valid(self, frame):
-          return frame.isupper()
-
-    dsource = StringDataSource("aaaABCDbbEFcGHIdddJKee")
-    tokenizer = StreamTokenizer(validator=UpperCaseChecker(),
-                 min_length=1, max_length=9999, max_continuous_silence=2,
-                 mode=StreamTokenizer.DROP_TRAILING_SILENCE)
-
-    tokenizer.tokenize(dsource)
-
-output:
-
-.. code:: python
-
-    [(['A', 'B', 'C', 'D', 'b', 'b', 'E', 'F', 'c', 'G', 'H', 'I'], 3, 14), (['J', 'K'], 18, 19)]
-
-
-
-Limit the length of detected tokens
-###################################
-
-
-Imagine that you just want to detect and recognize a small part of a long
-acoustic event (e.g. engine noise, water flow, etc.) and prevent that event
-from hogging the tokenizer and blocking the next processing step
-(i.e. a sound recognizer). You can do this by:
-
- - limiting the length of a detected token.
-
- and
-
- - using a callback function as an argument to :class:`auditok.core.StreamTokenizer.tokenize`
-   so that the tokenizer delivers a token as soon as it is detected.
-
-The following code limits the length of a token to 5:
-
-.. code:: python
-
-    from auditok import StreamTokenizer, StringDataSource, DataValidator
-
-    class UpperCaseChecker(DataValidator):
-       def is_valid(self, frame):
-          return frame.isupper()
-
-    dsource = StringDataSource("aaaABCDEFGHIJKbbb")
-    tokenizer = StreamTokenizer(validator=UpperCaseChecker(),
-                 min_length=1, max_length=5, max_continuous_silence=0)
-
-    def print_token(data, start, end):
-        print("token = '{0}', starts at {1}, ends at {2}".format(''.join(data), start, end))
-
-    tokenizer.tokenize(dsource, callback=print_token)
-
-
-output:
-
-.. code:: python
-
-    token = 'ABCDE', starts at 3, ends at 7
-    token = 'FGHIJ', starts at 8, ends at 12
-    token = 'K', starts at 13, ends at 13
-
-
-************************
-`auditok` and Audio Data
-************************
-
-In the rest of this document we will use :class:`auditok.util.ADSFactory`, :class:`auditok.util.AudioEnergyValidator`
-and :class:`auditok.core.StreamTokenizer` for Audio Activity Detection demos using audio data. Before we get any
-further, it is worth explaining a few points.
-
-:func:`auditok.util.ADSFactory.ads` method is used to create an :class:`auditok.util.ADSFactory.AudioDataSource`
-object either from a wave file, the built-in microphone or a user-supplied data buffer. Refer to the API reference
-for more information and examples on :func:`ADSFactory.ads` and :class:`AudioDataSource`.
-
-The created :class:`AudioDataSource` object is then passed to :func:`StreamTokenizer.tokenize` for tokenization.
-
-:func:`auditok.util.ADSFactory.ads` accepts a number of keyword arguments, of which none is mandatory.
-The returned :class:`AudioDataSource` object's features and behavior can however greatly differ
-depending on the passed arguments. Further details can be found in the respective method documentation.
-
-Note however the following two calls that will create an :class:`AudioDataSource`
-that reads data from an audio file and from the built-in microphone respectively.
-
-.. code:: python
-
-    from auditok import ADSFactory
-
-    # Get an AudioDataSource from a file
-    # use 'filename', alias 'fn' keyword argument
-    file_ads = ADSFactory.ads(filename = "path/to/file/")
-
-    # Get an AudioDataSource from the built-in microphone
-    # The returned object has the default values for sampling
-    # rate, sample width and number of channels. See the method's
-    # documentation for customized values
-    mic_ads = ADSFactory.ads()
-
-For :class:`StreamTokenizer`, parameters `min_length`, `max_length` and `max_continuous_silence`
-are expressed in terms of number of frames. Each call to :func:`AudioDataSource.read` returns
-one frame of data or None.
-
-If you want a `max_length` of 2 seconds for your detected sound events and your *analysis window*
-is *10 ms* long, you have to specify a `max_length` of 200 (`int(2. / (10. / 1000)) == 200`).
-For a `max_continuous_silence` of *300 ms* for instance, the value to pass to StreamTokenizer is 30
-(`int(0.3 / (10. / 1000)) == 30`).
-
-Each time :class:`StreamTokenizer` calls the (argument-free) :func:`read` method of an
-:class:`AudioDataSource` object, the same amount of data is returned, except at the
-end of the stream (where whatever is left, or None, is returned).
-
-This fixed-length amount of data is referred to here as the **analysis window** and is a parameter of
-:func:`ADSFactory.ads` method. By default :func:`ADSFactory.ads` uses an analysis window of 10 ms.
-
-The number of samples that 10 ms of audio data contain will vary depending on the
-sampling rate of your audio source/data (file, microphone, etc.).
-For a sampling rate of 16 kHz (16000 samples per second), 10 ms corresponds to 160 samples.
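
A quick sanity check of this arithmetic in plain Python (nothing here is auditok-specific; the variable names are just for illustration):

```python
# Samples per analysis window = sampling rate * window duration.
window_sec = 0.01  # 10 ms analysis window

for sampling_rate in (8000, 16000, 44100):
    samples_per_window = int(round(sampling_rate * window_sec))
    print(sampling_rate, "Hz ->", samples_per_window, "samples per window")
```

At 16000 Hz this gives the 160 samples mentioned above; at 44100 Hz it gives the 441-sample window used in a later example.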
-
-You can use the `block_size` keyword (alias `bs`) to define your analysis window:
-
-.. code:: python
-
-    from auditok import ADSFactory
-
-    '''
-    Assume you have an audio file with a sampling rate of 16000
-    '''
-
-    # file_ads.read() will return blocks of 160 samples
-    file_ads = ADSFactory.ads(filename = "path/to/file/", block_size = 160)
-
-    # file_ads.read() will return blocks of 320 samples
-    file_ads = ADSFactory.ads(filename = "path/to/file/", bs = 320)
-
-
-Fortunately, you can specify the size of your analysis window in seconds, thanks to the keyword `block_dur`
-(alias `bd`):
-
-.. code:: python
-
-    from auditok import ADSFactory
-    # use an analysis window of 20 ms
-    file_ads = ADSFactory.ads(filename = "path/to/file/", bd = 0.02)
-
-For :class:`StreamTokenizer`, each :func:`read` call that does not return `None` is treated as a processing
-frame. :class:`StreamTokenizer` has no way to figure out the temporal length of that frame (why should it?). So to
-correctly initialize your :class:`StreamTokenizer` based on your analysis window duration, use something like:
-
-
-.. code:: python
-
-    analysis_win_seconds = 0.01 # 10 ms
-    my_ads = ADSFactory.ads(block_dur = analysis_win_seconds)
-    analysis_window_ms = analysis_win_seconds * 1000
-
-    # If you want your maximum continuous silence to be 300 ms use:
-    max_continuous_silence = int(300. / analysis_window_ms)
-
-    # which is the same as:
-    max_continuous_silence = int(0.3 / (analysis_window_ms / 1000))
-
-    # or simply:
-    max_continuous_silence = 30
-
-
-******************************
-Examples using real audio data
-******************************
-
-
-Extract isolated phrases from an utterance
-##########################################
-
-We will build an :class:`auditok.util.ADSFactory.AudioDataSource` using a wave file from
-the package's dataset. The file contains isolated pronunciations of digits from 1 to 6
-in Arabic, as well as a breath-in/out between 2 and 3. The code will play the
-original file then the detected sounds separately. Note that we use an
-`energy_threshold` of 65, this parameter should be carefully chosen. It depends
-on microphone quality, background noise and the amplitude of events you want to
-detect.
-
-.. code:: python
-
-    from auditok import ADSFactory, AudioEnergyValidator, StreamTokenizer, player_for, dataset
-
-    # We set the `record` argument to True so that we can rewind the source
-    asource = ADSFactory.ads(filename=dataset.one_to_six_arabic_16000_mono_bc_noise, record=True)
-
-    validator = AudioEnergyValidator(sample_width=asource.get_sample_width(), energy_threshold=65)
-
-    # Default analysis window is 10 ms (float(asource.get_block_size()) / asource.get_sampling_rate())
-    # min_length=20 : minimum length of a valid audio activity is 20 * 10 == 200 ms
-    # max_length=400 : maximum length of a valid audio activity is 400 * 10 == 4000 ms == 4 seconds
-    # max_continuous_silence=30 : maximum length of a tolerated silence within a valid audio activity is 30 * 10 == 300 ms
-    tokenizer = StreamTokenizer(validator=validator, min_length=20, max_length=400, max_continuous_silence=30)
-
-    asource.open()
-    tokens = tokenizer.tokenize(asource)
-
-    # Play detected regions back
-
-    player = player_for(asource)
-
-    # Rewind and read the whole signal
-    asource.rewind()
-    original_signal = []
-
-    while True:
-       w = asource.read()
-       if w is None:
-          break
-       original_signal.append(w)
-
-    original_signal = ''.join(original_signal)
-
-    print("Playing the original file...")
-    player.play(original_signal)
-
-    print("playing detected regions...")
-    for t in tokens:
-        print("Token starts at {0} and ends at {1}".format(t[1], t[2]))
-        data = ''.join(t[0])
-        player.play(data)
-
-    assert len(tokens) == 8
-
-
-The tokenizer extracts 8 audio regions from the signal, including all isolated digits
-(from 1 to 6) as well as the 2-phase respiration of the subject. You might have noticed
-that, in the original file, the last three digits are closer to each other than the
-previous ones. If you want them to be extracted as one single phrase, you can do so
-by tolerating a larger continuous silence within a detection:
-
-.. code:: python
-
-    tokenizer.max_continuous_silence = 50
-    asource.rewind()
-    tokens = tokenizer.tokenize(asource)
-
-    for t in tokens:
-       print("Token starts at {0} and ends at {1}".format(t[1], t[2]))
-       data = ''.join(t[0])
-       player.play(data)
-
-    assert len(tokens) == 6
-
-
-Trim leading and trailing silence
-#################################
-
-The tokenizer in the following example is set up to remove the silence
-that precedes the first acoustic activity or follows the last activity
-in a record. It preserves whatever it finds between the two activities.
-In other words, it removes the leading and trailing silence.
-
-The sampling rate is 44100 samples per second; we'll use an analysis window of 100 ms
-(i.e. block_size == 4410).
-
-Energy threshold is 50.
-
-The tokenizer will start accumulating windows from the moment it encounters
-the first analysis window with an energy >= 50. ALL the following windows will be
-kept regardless of their energy. At the end of the analysis, it will drop trailing
-windows with an energy below 50.
-
-This is an interesting example because the audio file we're analyzing contains a very
-brief noise that occurs within the leading silence. We certainly do not want our tokenizer
-to stop at this point and consider whatever comes after it as useful signal.
-To force the tokenizer to ignore that brief event we use two other parameters `init_min`
-and `init_max_silence`. By `init_min` = 3 and `init_max_silence` = 1 we tell the tokenizer
-that a valid event must start with at least 3 noisy windows, between which there
-is at most 1 silent window.
-
-Even with this configuration, the tokenizer can still detect that noise as a valid event
-(if it actually contains 3 consecutive noisy frames). To circumvent this we use a large
-enough analysis window (here of 100 ms) to ensure that the brief noise is averaged with a much
-longer silence, so that the energy of the overall analysis window stays below 50.
-
-When using a shorter analysis window (of 10 ms for instance, block_size == 441), the brief
-noise contributes more to the energy calculation, which yields a window energy of over 50.
-Again, we can deal with this situation by using a higher energy threshold (55 for example).
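
The effect of window length on measured energy can be sketched with a toy computation (illustrative plain Python; this is not auditok's actual energy formula):

```python
import math

def energy_db(samples):
    # Toy log-energy of a window: RMS expressed in dB. auditok's exact
    # formula may differ; this only illustrates the dilution effect.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A 2 ms click (32 samples at 16 kHz, amplitude 3000) inside
# near-silence (amplitude 10).
click = [3000] * 32

short_window = click + [10] * 128    # 10 ms window: the click dominates
long_window = click + [10] * 1568    # 100 ms window: the click is diluted

print(energy_db(short_window) > energy_db(long_window))  # True
```

The same click yields a noticeably lower energy when averaged over a 100 ms window than over a 10 ms one, which is why a longer window can push a brief noise below the threshold.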
-
-.. code:: python
-
-    from auditok import ADSFactory, AudioEnergyValidator, StreamTokenizer, player_for, dataset
-
-    # record = True so that we'll be able to rewind the source.
-    asource = ADSFactory.ads(filename=dataset.was_der_mensch_saet_mono_44100_lead_trail_silence,
-             record=True, block_size=4410)
-    asource.open()
-
-    original_signal = []
-    # Read the whole signal
-    while True:
-       w = asource.read()
-       if w is None:
-          break
-       original_signal.append(w)
-
-    original_signal = ''.join(original_signal)
-
-    # rewind source
-    asource.rewind()
-
-    # Create a validator with an energy threshold of 50
-    validator = AudioEnergyValidator(sample_width=asource.get_sample_width(), energy_threshold=50)
-
-    # Create a tokenizer with an unlimited token length and continuous silence within a token
-    # Note the DROP_TRAILING_SILENCE mode that will ensure removing trailing silence
-    trimmer = StreamTokenizer(validator, min_length = 20, max_length=99999999, init_min=3, init_max_silence=1, max_continuous_silence=9999999, mode=StreamTokenizer.DROP_TRAILING_SILENCE)
-
-    tokens = trimmer.tokenize(asource)
-
-    # Make sure we only have one token
-    assert len(tokens) == 1, "Should have detected one single token"
-
-    trimmed_signal = ''.join(tokens[0][0])
-
-    player = player_for(asource)
-
-    print("Playing original signal (with leading and trailing silence)...")
-    player.play(original_signal)
-    print("Playing trimmed signal...")
-    player.play(trimmed_signal)
-
-
-Online audio signal processing
-##############################
-
-In the next example, audio data is directly acquired from the built-in microphone.
-The :func:`auditok.core.StreamTokenizer.tokenize` method is passed a callback function
-so that audio activities are delivered as soon as they are detected. Each detected
-activity is played back using the built-in audio output device.
-
-As mentioned before, signal energy is strongly related to many factors such as
-microphone sensitivity, background noise (including noise inherent to the hardware),
-distance and your operating system sound settings. Try a lower `energy_threshold`
-if your sound does not seem to be detected and a higher threshold if you notice
-over-detection (i.e. the `echo` method prints a detection where you have made no noise).
-
-.. code:: python
-
-    from auditok import ADSFactory, AudioEnergyValidator, StreamTokenizer, player_for
-
-    # record = True so that we'll be able to rewind the source.
-    # max_time = 10: read 10 seconds from the microphone
-    asource = ADSFactory.ads(record=True, max_time=10)
-
-    validator = AudioEnergyValidator(sample_width=asource.get_sample_width(), energy_threshold=50)
-    tokenizer = StreamTokenizer(validator=validator, min_length=20, max_length=250, max_continuous_silence=30)
-
-    player = player_for(asource)
-
-    def echo(data, start, end):
-       print("Acoustic activity at: {0}--{1}".format(start, end))
-       player.play(''.join(data))
-
-    asource.open()
-
-    tokenizer.tokenize(asource, callback=echo)
-
-If you want to re-run the tokenizer after changing one or more parameters, use the following code:
-
-.. code:: python
-
-    asource.rewind()
-    # change energy threshold for example
-    tokenizer.validator.set_energy_threshold(55)
-    tokenizer.tokenize(asource, callback=echo)
-
-In case you want to play the whole recorded signal back use:
-
-.. code:: python
-
-    player.play(asource.get_audio_source().get_data_buffer())
-
-
-************
-Contributing
-************
-
-**auditok** is on `GitHub <https://github.com/amsehili/auditok>`_. You're welcome to fork it and contribute.
-
-
-Amine SEHILI <amine.sehili@gmail.com>
-September 2015
-
-*******
-License
-*******
-
-This package is published under GNU GPL Version 3.
--- a/doc/cmdline.rst	Sun Jan 10 17:11:07 2021 +0100
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,424 +0,0 @@
-`auditok` Command-line Usage Guide
-==================================
-
-This user guide will go through a few of the most useful operations you can use **auditok** for and present two practical use cases.
-
-.. contents:: `Contents`
-   :depth: 3
-
-
-**********************
-Two-figure explanation
-**********************
-
-The following two figures illustrate an audio signal (blue) and regions detected as valid audio activities (green rectangles) according to
-a given threshold (red dashed line). They respectively depict the detection result when:
-
-1. the detector tolerates phases of silence of up to 0.3 second (300 ms) within an audio activity (also referred to as acoustic event):
-
-.. figure:: figures/figure_1.png
-    :align: center
-    :alt: Output from a detector that tolerates silence periods up to 300 ms
-    :figclass: align-center
-    :scale: 40 %
-
-2. the detector splits an audio activity event into many activities if the within-activity silence exceeds 0.2 second:
-
-.. figure:: figures/figure_2.png
-    :align: center
-    :alt: Output from a detector that tolerates silence periods up to 200 ms
-    :figclass: align-center
-    :scale: 40 %
-
-Beyond plotting the signal and detections, you can play audio activities back as they are detected,
-save them, or run a user command each time there is an activity, optionally passing the file name of
-the audio activity as an argument to the command.
-
-******************
-Command line usage
-******************
-
-Try the detector with your voice
-################################
-
-The first thing you want to check is perhaps how **auditok** detects your voice. If you have installed `PyAudio` just run (`Ctrl-C` to stop):
-
-.. code:: bash
-
-    auditok
-
-This will print the **id**, **start-time** and **end-time** of each detected activity. If you don't have `PyAudio`, you can use `sox` for data acquisition (`sudo apt-get install sox`) and tell **auditok** to read data from standard input:
-
-.. code:: bash
-
-    rec -q -t raw -r 16000 -c 1 -b 16 -e signed - | auditok -i - -r 16000 -w 2 -c 1
-
-Note that when data is read from standard input the same audio parameters must be used for both `sox` (or any other data generation/acquisition tool) and **auditok**. The following table summarizes audio parameters.
-
-
-+-----------------+------------+------------------+-----------------------+
-| Audio parameter | sox option | `auditok` option | `auditok` default     |
-+=================+============+==================+=======================+
-| Sampling rate   |     -r     |       -r         |      16000            |
-+-----------------+------------+------------------+-----------------------+
-| Sample width    |  -b (bits) |     -w (bytes)   |      2                |
-+-----------------+------------+------------------+-----------------------+
-| Channels        |  -c        |     -c           |      1                |
-+-----------------+------------+------------------+-----------------------+
-| Encoding        |  -e        |     None         | always signed integer |
-+-----------------+------------+------------------+-----------------------+
-
-According to this table, the previous command can be run as:
-
-.. code:: bash
-
-    rec -q -t raw -r 16000 -c 1 -b 16 -e signed - | auditok -i -
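
One unit mismatch in the table above is worth spelling out: `sox` takes the sample width in bits (`-b 16`) while **auditok** takes bytes (`-w 2`). The conversion is a plain division (illustrative snippet, not part of either tool):

```python
# sox sample width is given in bits (-b); auditok expects bytes (-w).
sox_sample_width_bits = 16
auditok_sample_width_bytes = sox_sample_width_bits // 8
print(auditok_sample_width_bytes)  # 2
```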
-
-Play back detections
-####################
-
-.. code:: bash
-
-    auditok -E
-
-:or:
-
-.. code:: bash
-
-    rec -q -t raw -r 16000 -c 1 -b 16 -e signed - | auditok -i - -E
-
-Option `-E` stands for echo, so **auditok** will play back whatever it detects. Using `-E` requires `PyAudio`, if you don't have `PyAudio` and want to play detections with sox, use the `-C` option:
-
-.. code:: bash
-
-    rec -q -t raw -r 16000 -c 1 -b 16 -e signed - | auditok -i - -C "play -q -t raw -r 16000 -c 1 -b 16 -e signed $"
-
-The `-C` option tells **auditok** to interpret its content as a command that should be run whenever **auditok** detects an audio activity, replacing the `$` by the name of a temporary file into which the activity is saved as raw audio. Here we use `play` to play the activity, giving the necessary `play` arguments for raw data.
-
-`rec` and `play` are just aliases for `sox`.
-
-The `-C` option can be useful in many cases. Imagine a command that sends audio data over a network only if there is an audio activity and saves bandwidth during silence.
-
-Set detection threshold
-#######################
-
-If you notice that there are too many detections, use a higher value for energy threshold (the current version only implements a `validator` based on energy threshold. The use of spectral information is also desirable and might be part of future releases). To change the energy threshold (default: 50), use option `-e`:
-
-.. code:: bash
-
-    auditok -E -e 55
-
-:or:
-
-.. code:: bash
-
-    rec -q -t raw -r 16000 -c 1 -b 16 -e signed - | auditok -i - -e 55 -C "play -q -t raw -r 16000 -c 1 -b 16 -e signed $"
-
-If, however, you figure out that the detector is missing some or all of your audio activities, use a lower value for `-e`.
-
-Set format for printed detections information
-#############################################
-
-By default, **auditok** prints the **id**, **start-time** and **end-time** of each detected activity:
-
-.. code:: bash
-
-    1 1.87 2.67
-    2 3.05 3.73
-    3 3.97 4.49
-    ...
-
-If you want to customize the output format, use `--printf` option:
-
-.. code:: bash
-
-    auditok -e 55 --printf "[{id}]: {start} to {end}"
-
-:output:
-
-.. code:: bash
-
-    [1]: 0.22 to 0.67
-    [2]: 2.81 to 4.18
-    [3]: 5.53 to 6.44
-    [4]: 7.32 to 7.82
-    ...
-
-Keywords `{id}`, `{start}` and `{end}` can be placed and repeated anywhere in the text. Time is shown in seconds; if you want more detailed time information, use `--time-format`:
-
-.. code:: bash
-
-    auditok -e 55 --printf "[{id}]: {start} to {end}" --time-format "%h:%m:%s.%i"
-
-:output:
-
-.. code:: bash
-
-    [1]: 00:00:01.080 to 00:00:01.760
-    [2]: 00:00:02.420 to 00:00:03.440
-    [3]: 00:00:04.930 to 00:00:05.570
-    [4]: 00:00:05.690 to 00:00:06.020
-    [5]: 00:00:07.470 to 00:00:07.980
-    ...
-
-Valid time directives are: `%h` (hours) `%m` (minutes) `%s` (seconds) `%i` (milliseconds). Two other directives, `%S` (default) and `%I` can be used for absolute time in seconds and milliseconds respectively.
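
As a rough illustration of how the `%h`, `%m`, `%s` and `%i` directives decompose an absolute time (plain Python, not auditok's actual implementation; `format_time` is a made-up helper):

```python
def format_time(seconds):
    # Decompose an absolute time in seconds into the fields used by
    # the "%h:%m:%s.%i" format: hours, minutes, seconds, milliseconds.
    hours, rest = divmod(seconds, 3600)
    minutes, secs = divmod(rest, 60)
    millis = round((secs - int(secs)) * 1000)
    return "%02d:%02d:%02d.%03d" % (hours, minutes, int(secs), millis)

print(format_time(1.08))   # 00:00:01.080
print(format_time(65.5))   # 00:01:05.500
```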
-
-1st Practical use case example: generate a subtitles template
-#############################################################
-
-Using `--printf` and `--time-format`, the following command, used with an input audio or video file, will generate an **srt** file template that can later be edited with a subtitles editor, reducing the time needed to define when each utterance starts and ends:
-
-.. code:: bash
-
-    auditok -e 55 -i input.wav -m 10 --printf "{id}\n{start} --> {end}\nPut some text here...\n" --time-format "%h:%m:%s.%i"
-
-:output:
-
-.. code:: bash
-
-    1
-    00:00:00.730 --> 00:00:01.460
-    Put some text here...
-
-    2
-    00:00:02.440 --> 00:00:03.900
-    Put some text here...
-
-    3
-    00:00:06.410 --> 00:00:06.970
-    Put some text here...
-
-    4
-    00:00:07.260 --> 00:00:08.340
-    Put some text here...
-
-    5
-    00:00:09.510 --> 00:00:09.820
-    Put some text here...
-
-
-2nd Practical use case example: build a (very) basic voice control application
-##############################################################################
-
-`This repository <https://github.com/amsehili/gspeech-rec>`_ supplies a bash script that can send audio data to Google's
-Speech Recognition service and get its transcription. In the following we will use **auditok** as a lower-layer component
-of a voice control application. The basic idea is to tell **auditok** to run, for each detected audio activity, a certain
-number of commands that make up the rest of our voice control application.
-
-Assume you have installed **sox** and downloaded the Speech Recognition script. The sequence of commands to run is:
-
-1- Convert raw audio data to flac using **sox**:
-
-.. code:: bash
-
-    sox -t raw -r 16000 -c 1 -b 16 -e signed raw_input output.flac
-
-2- Send flac audio data to Google and get its filtered transcription using `speech-rec.sh <https://github.com/amsehili/gspeech-rec/blob/master/speech-rec.sh>`_ :
-
-.. code:: bash
-
-    speech-rec.sh -i output.flac -r 16000
-
-3- Use **grep** to select lines that contain *transcript*:
-
-.. code:: bash
-
-    grep transcript
-
-
-4- Launch the following script, giving it the transcription as input:
-
-.. code:: bash
-
-    #!/bin/bash
-
-    read line
-
-    RES=`echo "$line" | grep -i "open firefox"`
-
-    if [[ $RES ]]
-       then
-         echo "Launch command: 'firefox &' ... "
-         firefox &
-         exit 0
-    fi
-
-    exit 0
-
-As you can see, the script can handle one single voice command. It runs firefox if the text it receives contains **open firefox**.
-Save the script into a file named voice-control.sh (don't forget to make it executable with **chmod u+x voice-control.sh**).
-
-Now, thanks to option `-C`, we will use the four instructions with a pipe and tell **auditok** to run them each time it detects
-an audio activity. Try the following command and say *open firefox*:
-
-
-.. code:: bash
-
-    rec -q -t raw -r 16000 -c 1 -b 16 -e signed - | auditok -M 5 -m 3 -n 1 --debug-file file.log -e 60 -C "sox -t raw -r 16000 -c 1 -b 16 -e signed $ audio.flac ; speech-rec.sh -i audio.flac -r 16000 | grep transcript | ./voice-control.sh"
-
-Here we used option `-M 5` to limit the amount of read audio data to 5 seconds (**auditok** stops if there are no more data) and
-option `-n 1` to tell **auditok** to only accept tokens of 1 second or longer and discard any token shorter than that.
-
-With `--debug-file file.log`, all processing steps are written into file.log with their timestamps, including any command that was run and the name of the file it was given.
-
-
-Plot signal and detections
-##########################
-
-Use option `-p`. This requires `matplotlib` and `numpy`.
-
-.. code:: bash
-
-    auditok ...  -p
-
-
-Save plot as image or PDF
-#########################
-
-.. code:: bash
-
-    auditok ...  --save-image output.png
-
-Requires `matplotlib` and `numpy`. Accepted formats: eps, jpeg, jpg, pdf, pgf, png, ps, raw, rgba, svg, svgz, tif, tiff.
-
-
-Read data from file
-###################
-
-.. code:: bash
-
-    auditok -i input.wav ...
-
-Install `pydub` for other audio formats.
-
-
-Limit the length of acquired data
-#################################
-
-.. code:: bash
-
-    auditok -M 12 ...
-
-Time is in seconds. This is valid for data read from an audio device, stdin or an audio file.
-
-
-Save the whole acquired audio signal
-####################################
-
-.. code:: bash
-
-    auditok -O output.wav ...
-
-Install `pydub` for other audio formats.
-
-
-Save each detection into a separate audio file
-##############################################
-
-.. code:: bash
-
-    auditok -o det_{N}_{start}_{end}.wav ...
-
-You can use free text and place `{N}`, `{start}` and `{end}` wherever you want; they will be replaced by the detection number, start time and end time respectively. Another example:
-
-.. code:: bash
-
-    auditok -o {start}-{end}.wav ...
-
-Install `pydub` for more audio formats.
-
-
-Setting detection parameters
-############################
-
-Alongside the threshold option `-e` seen so far, a couple of other options can have a great impact on the detector's behavior. These options are summarized in the following table:
-
-+--------+-------------------------------------------------------+---------+------------------+
-| Option | Description                                           | Unit    | Default          |
-+========+=======================================================+=========+==================+
-| `-n`   | Minimum length an accepted audio activity should have | second  |   0.2 (200 ms)   |
-+--------+-------------------------------------------------------+---------+------------------+
-| `-m`   | Maximum length an accepted audio activity should reach| second  |   5.             |
-+--------+-------------------------------------------------------+---------+------------------+
-| `-s`   | Maximum length of a continuous silence period within  | second  |   0.3 (300 ms)   |
-|        | an accepted audio activity                            |         |                  |
-+--------+-------------------------------------------------------+---------+------------------+
-| `-d`   | Drop trailing silence from an accepted audio activity | boolean |   False          |
-+--------+-------------------------------------------------------+---------+------------------+
-| `-a`   | Analysis window length (default value should be good) | second  |   0.01 (10 ms)   |
-+--------+-------------------------------------------------------+---------+------------------+
-
-
-Normally, `auditok` keeps the trailing silence of a detected activity. Trailing silence is at most as long as the maximum continuous silence (option `-s`) and can be important for some applications such as speech recognition. If you want to drop trailing silence anyway, use option `-d`. The following two figures show the output of the detector when it keeps the trailing silence and when it drops it, respectively:
-
-
-.. figure:: figures/figure_3_keep_trailing_silence.png
-    :align: center
-    :alt: Output from a detector that keeps trailing silence
-    :figclass: align-center
-    :scale: 40 %
-
-
-.. code:: bash
-
-    auditok ...  -d
-
-
-.. figure:: figures/figure_4_drop_trailing_silence.png
-    :align: center
-    :alt: Output from a detector that drops trailing silence
-    :figclass: align-center
-    :scale: 40 %
-
-You might want to only consider audio activities if they are above a certain duration. The next figure is the result of a detector that only accepts detections of 0.8 seconds or longer:
-
-.. code:: bash
-
-    auditok ...  -n 0.8
-
-
-.. figure:: figures/figure_5_min_800ms.png
-    :align: center
-    :alt: Output from a detector that detects activities of 800 ms or longer
-    :figclass: align-center
-    :scale: 40 %
-
-
-Finally, it is almost always useful to limit the length of detected audio activities. In any case, one does not want an overly long audio event, such as an alarm or a drill, to hog the detector. For illustration purposes, we set the maximum duration to 0.4 seconds for this detector, so an audio activity is delivered as soon as it reaches 0.4 seconds:
-
-.. code:: bash
-
-    auditok ...  -m 0.4
-
-
-.. figure:: figures/figure_6_max_400ms.png
-    :align: center
-    :alt: Output from a detector that delivers audio activities that reach 400 ms
-    :figclass: align-center
-    :scale: 40 %
-
-
-Debugging
-#########
-
-If you want to print what happens when something is detected, use option `-D`.
-
-.. code:: bash
-
-    auditok ...  -D
-
-
-If you want to save everything into a log file, use `--debug-file file.log`.
-
-.. code:: bash
-
-    auditok ...  --debug-file file.log
-
-
-
-
-*******
-License
-*******
-
-**auditok** is published under the GNU General Public License Version 3.
-
-******
-Author
-******
-Amine Sehili (<amine.sehili@gmail.com>)
--- a/doc/core.rst	Sun Jan 10 17:11:07 2021 +0100
+++ b/doc/core.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -1,5 +1,5 @@
-auditok.core
-------------
+Core
+----
 
 .. automodule:: auditok.core
    :members:
--- a/doc/dataset.rst	Sun Jan 10 17:11:07 2021 +0100
+++ b/doc/dataset.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -1,5 +1,6 @@
-auditok.dataset
----------------
+
+Dataset
+-------
 
 .. automodule:: auditok.dataset
    :members:
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc/examples.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -0,0 +1,147 @@
+Basic example
+-------------
+
+.. code:: python
+
+    from auditok import split
+
+    # split returns a generator of AudioRegion objects
+    audio_regions = split("audio.wav")
+    for region in audio_regions:
+        region.play(progress_bar=True)
+        filename = region.save("/tmp/region_{meta.start:.3f}.wav")
+        print("region saved as: {}".format(filename))
+
+Example using `AudioRegion`
+---------------------------
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    regions = region.split_and_plot() # or just region.splitp()
+
+output figure:
+
+.. image:: figures/example_1.png
+
+Working with AudioRegions
+-------------------------
+
+Beyond splitting, there are a couple of interesting operations you can do with
+`AudioRegion` objects.
+
+Concatenate regions
+===================
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region_1 = AudioRegion.load("audio_1.wav")
+    region_2 = AudioRegion.load("audio_2.wav")
+    region_3 = region_1 + region_2
+
+Particularly useful if you want to join regions returned by ``split``:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    regions = AudioRegion.load("audio.wav").split()
+    gapless_region = sum(regions)
+
+Repeat a region
+===============
+
+Multiply by a positive integer:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    region_x3 = region * 3
+
+Make slices of equal size out of a region
+=========================================
+
+Divide by a positive integer:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    regions = region / 5
+    assert sum(regions) == region
+
+Make audio slices of arbitrary size
+===================================
+
+Slicing an ``AudioRegion`` can be useful in many situations. You can, for
+example, remove a fixed-size portion of audio data from the beginning or the end
+of a region, or crop a region by an arbitrary amount as a data augmentation
+strategy.
+
+The most accurate way to slice an ``AudioRegion`` is to use indices that
+directly refer to raw audio samples. In the following example, assuming that the
+sampling rate of the audio data is 16000 Hz, you can extract a 5-second region
+from the main region, starting from the 20th second, as follows:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    start = 20 * 16000
+    stop = 25 * 16000
+    five_second_region = region[start:stop]
+
+This allows you to start and stop at practically any sample within the region.
+Just as with a `list`, you can omit one of `start` and `stop`, or both. You can
+also use negative indices:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    start = -3 * region.sr # `sr` is an alias of `sampling_rate`
+    three_last_seconds = region[start:]
+
+While slicing by raw samples is accurate, slicing with temporal indices is more
+intuitive. You can do so by accessing the ``millis`` or ``seconds`` views of an
+``AudioRegion`` (or their shortcut aliases ``ms`` and ``sec``/``s``).
+
+With the ``millis`` view:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    five_second_region = region.millis[5000:10000]
+
+or with the ``seconds`` view:
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    five_second_region = region.seconds[5:10]
+
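+The views above also make it easy to trim a region on both ends, e.g., for the
+data augmentation use case mentioned earlier. A minimal sketch, assuming the
+loaded region is longer than two seconds (``duration`` gives the region's
+length in seconds):
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    # drop the first and last second of the region
+    trimmed = region.seconds[1:region.duration - 1]
+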
+Get an array of audio samples
+=============================
+
+.. code:: python
+
+    from auditok import AudioRegion
+    region = AudioRegion.load("audio.wav")
+    samples = region.samples
+
+If ``numpy`` is installed, this will return a ``numpy.ndarray``. If audio data
+is mono the returned array is 1D, otherwise it's 2D. If ``numpy`` is not
+installed this will return a standard ``array.array`` for mono data, and a list
+of ``array.array`` for multichannel data.
+
+Alternatively you can use:
+
+.. code:: python
+
+    import numpy as np
+    region = AudioRegion.load("audio.wav")
+    samples = np.asarray(region)
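+
+As an illustrative sketch (this energy computation is not part of ``auditok``'s
+API and assumes ``numpy`` is installed), the samples array can be used for
+simple measurements such as the signal's RMS:
+
+.. code:: python
+
+    import numpy as np
+    from auditok import AudioRegion
+
+    region = AudioRegion.load("audio.wav")
+    samples = np.asarray(region, dtype=np.float64)
+    # root mean square of the signal's samples
+    rms = np.sqrt(np.mean(samples ** 2))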
--- a/doc/index.rst	Sun Jan 10 17:11:07 2021 +0100
+++ b/doc/index.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -1,5 +1,8 @@
-auditok, an AUDIo TOKenization tool
-===================================
+
+.. autosummary::
+    :toctree: generated/
+
+
 
 .. image:: https://travis-ci.org/amsehili/auditok.svg?branch=master
     :target: https://travis-ci.org/amsehili/auditok
@@ -8,67 +11,30 @@
     :target: http://auditok.readthedocs.org/en/latest/?badge=latest
     :alt: Documentation Status
 
-`auditok` is an **Audio Activity Detection** tool that can process online data (read from an audio device or from standard input) as well as audio files. It can be used as a command line program and offers an easy to use API.
 
-The latest version of this documentation can be found at `Readthedocs <http://auditok.readthedocs.org/en/latest/>`_.
 
-Requirements
-------------
+`auditok` is an **Audio Activity Detection** tool that can process online data
+(read from an audio device or from standard input) as well as audio files. It
+can be used as a command line program or by calling its API.
 
-`auditok` can be used with standard Python!
-
-However, if you want more features, the following packages are needed:
-
-- `Pydub <https://github.com/jiaaro/pydub>`_ : read audio files in popular audio formats (ogg, mp3, etc.) or extract audio from a video file.
-
-- `PyAudio <http://people.csail.mit.edu/hubert/pyaudio/>`_ : read audio data from the microphone and play back detections.
-
-- `matplotlib <http://matplotlib.org/>`_ : plot audio signal and detections (see figures above).
-
-- `numpy <http://www.numpy.org>`_ : required by matplotlib. Also used for math operations instead of standard python if available.
-
-- Optionally, you can use `sox` or `[p]arecord` for data acquisition and feed `auditok` using a pipe.
-
-Installation
-------------
-
-Install with pip:
-
-.. code:: bash
-
-    sudo pip install auditok
-
-or install the latest version on Github:
-
-.. code:: bash
-
-    git clone https://github.com/amsehili/auditok.git
-    cd auditok
-    sudo python setup.py install
-
-Getting started
----------------
 
 .. toctree::
-    :titlesonly:
-    :maxdepth: 2
+    :caption: Getting started
+    :maxdepth: 3
 
-       Command-line Usage Guide <cmdline.rst>
-       API Tutorial <apitutorial.rst>
-
-API Reference
--------------
+    installation
+    examples
 
 .. toctree::
+    :caption: API Reference
     :maxdepth: 3
 
-       auditok.core <core.rst>
-       auditok.util <util.rst>
-       auditok.io <io.rst>
-       auditok.dataset <dataset.rst>
+    core
+    util
+    io
+    signal
+    dataset
 
-Indices and tables
-==================
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
+License
+-------
+MIT.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc/installation.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -0,0 +1,22 @@
+Installation
+------------
+
+.. code:: bash
+
+    pip install auditok
+
+
+A basic version of ``auditok`` will run with standard Python (>=3.4). However,
+without installing additional dependencies, ``auditok`` can only deal with audio
+files in *wav* or *raw* formats. If you want more features, the following
+packages are needed:
+
+    - `pydub <https://github.com/jiaaro/pydub>`_ : read audio files in popular
+      audio formats (ogg, mp3, etc.) or extract audio from a video file.
+    - `pyaudio <http://people.csail.mit.edu/hubert/pyaudio/>`_ : read audio data
+      from the microphone and play back detections.
+    - `tqdm <https://github.com/tqdm/tqdm>`_ : show progress bar while playing
+      audio clips.
+    - `matplotlib <http://matplotlib.org/>`_ : plot audio signal and detections.
+    - `numpy <http://www.numpy.org>`_ : required by matplotlib. Also used for
+      some math operations instead of standard Python if available.
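+
+To enable all optional features at once, the packages above can be installed
+together (names as published on PyPI):
+
+.. code:: bash
+
+    pip install pydub pyaudio tqdm matplotlib numpy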
--- a/doc/io.rst	Sun Jan 10 17:11:07 2021 +0100
+++ b/doc/io.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -1,5 +1,5 @@
-auditok.io
-----------
+Low-level IO
+------------
 
 .. automodule:: auditok.io
    :members:
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/doc/signal.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -0,0 +1,5 @@
+Signal processing
+-----------------
+
+.. automodule:: auditok.signal
+   :members:
--- a/doc/util.rst	Sun Jan 10 17:11:07 2021 +0100
+++ b/doc/util.rst	Sun Jan 10 22:36:22 2021 +0100
@@ -1,5 +1,5 @@
-auditok.util
-------------
+Util
+----
 
 .. automodule:: auditok.util
    :members: