EditorBackend package

Submodules

EditorBackend.AnalyzeBuffer module

class EditorBackend.AnalyzeBuffer.AnalyzeBuffer(buffer, calibrations, sampleRate)

Bases: PyQt5.QtCore.QObject

When the user requests an analysis, all selected areas are combined into one NumPy array and stored here until an analysis widget picks them up. The calibration is also applied here.

addSelection(channel, selNo, points, type)

Adds a selection to the AnalyzeBuffer.

Parameters:
  • channel – A Channel object for reference.
  • selNo – Name of the selection on that channel.
  • points – List of start and end points, describing the selection areas.
  • type – Type of analysis, e.g. “FFT”
deleteChannel(channel)

Delete a channel from this buffer. Done with exception handling, because these data points only exist if the analysis function has been used on this channel.

Parameters:channel – The Channel to delete.
getBuffer(channel, selNo)

This is the interface for analysis widgets to access data. Before handover it is converted to a float type to avoid precision loss and calibrated.

Parameters:
  • channel – The requested Channel.
  • selNo – Name of the selection requested.
getOffset(channel, selNo)

This is the interface for analysis widgets to access offset data. Before handover it is converted to a float type to avoid precision loss, and calibrated. This is relevant to e.g. the SPL plot, which needs a settling time.

Parameters:
  • channel – The requested Channel.
  • selNo – Name of the selection requested.
  • offsetLength – Complete length of desired offset in samples.
newSelection
selectionChanged

EditorBackend.Audioplayer module

class EditorBackend.Audioplayer.Audioplayer(buffer, sampleRate, sampleWidth, blockSize)

Bases: PyQt5.QtCore.QObject

This class takes care of audio playback via PyAudio. It works block-wise, like Buffer. It can switch between channels, but only one channel can be played at a time.

callback(in_data, frame_count, time_info, status)

PyAudio callback, used to feed new samples for playback.

Parameters:
  • in_data – See PyAudio documentation.
  • frame_count – See PyAudio documentation.
  • time_info – See PyAudio documentation.
  • status – See PyAudio documentation.
Returns:

See PyAudio documentation.
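The playback-callback pattern can be sketched without the stream setup. This is a generic PyAudio-style callback sketch; `make_callback` and its position bookkeeping are illustrative, not SNARE's actual implementation (in pyaudio, a returned flag of 0 means paContinue and 1 means paComplete):

```python
def make_callback(sample_data, sample_width=2):
    """Build a PyAudio-style callback feeding `sample_data` (bytes) to a stream."""
    state = {"pos": 0}  # playback position in bytes

    def callback(in_data, frame_count, time_info, status):
        bytes_needed = frame_count * sample_width
        chunk = sample_data[state["pos"]:state["pos"] + bytes_needed]
        state["pos"] += bytes_needed
        # 0 = paContinue, 1 = paComplete; stop once the buffer runs short
        flag = 0 if len(chunk) == bytes_needed else 1
        return (chunk, flag)

    return callback
```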

isPlaying()

Is it playing?

Returns:True if playback ongoing.
pause()

Pause the current playback.

play()

Start playback from the last known position.

sendPos
setPos(smp, channel)

Sets the playback to the specified position and channel.

Parameters:
  • smp – An integer representing the sample from which to start playing.
  • channel – A Channel object referring to the channel to play.

EditorBackend.Buffer module

class EditorBackend.Buffer.AudioBlock(source, start, channel=0)

Bases: object

The Buffer’s audio data storage is organised in AudioBlocks. An AudioBlock does not necessarily contain sample data. If there is no sample data stored in the AudioBlock, it knows how and where to get it (it holds a reference to the WavFile object). Sample data should only be accessed through the getData interface. That way, if no sample data is present, the AudioBlock will load it by itself. There is also an interface for releasing memory.

free()

Delete the data to free memory.

getData()

Interface to retrieve raw sample data.

Returns:A raw bytearray
isEmpty()

Is it an EmptyBlock?

Returns:True if it is an EmptyBlock.
readdata()

Reads data from the source and writes it to memory.

setdata(data)

Manually write data to the AudioBlock, without a source. (Used for Recording)

Parameters:data – A raw bytearray
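The lazy-loading behaviour described above can be sketched as a minimal class. `LazyBlock` and its `source.read(start, size)` interface are hypothetical stand-ins; the real AudioBlock holds a reference to a WavFile object instead:

```python
class LazyBlock:
    """Block that fetches its sample data from `source` only on first access."""

    def __init__(self, source, start, size):
        self.source = source
        self.start = start
        self.size = size
        self._data = None  # no sample data held yet

    def getData(self):
        if self._data is None:  # load lazily on first access
            self._data = self.source.read(self.start, self.size)
        return self._data

    def free(self):
        self._data = None  # release memory; data can be reloaded later
```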
class EditorBackend.Buffer.Buffer(sampleRate, sampleWidth, blockSize)

Bases: PyQt5.QtCore.QObject

The Buffer class provides SNARE’s storage for audio data. Data can be accessed in a block-wise manner or via a list of selection points from TrackSelection. Data can be added by specifying a source WAVE-file or from the Recorder.

addRecording(deviceChannels, deviceName)

Prepares the buffer for receiving recording data.

Parameters:
  • deviceChannels – List of channels to record.
  • deviceName – Name of the device to record from.
Returns:

Dictionary linking the numerical device channels with Channel objects.

appendData(data, deviceChannel, length)

Slot for the recorder to add recorded data. Input data will be stored in the buffer and also written to disk with WavFileWrite.

Parameters:
  • data – The input array, a raw bytearray.
  • deviceChannel – The device channel it is from.
  • length – The length of the data.
closeRecording()

Closes the recording, which means that the corresponding WAVE-file will be completed. Then the buffer reopens the WAVE-file in read-mode. Therefore the type of channel is changed.

deleteChannel(channel)

Removes the specified channel from the buffer.

Parameters:channel – The channel to remove.
getArray(channel, start, end)

Since a user analysis selection might consist of several marked areas, this method helps by returning an array from the blocked buffer spanning from "start" to "end".

Parameters:
  • channel – The channel on which the selection was made.
  • start – The sample on which this portion of the selection starts.
  • end – The sample on which this portion of the selection ends.
Returns:

A numpy array of unpacked sample data.

getBlock(channel, block)
getSelection(channel, points)

When the user has selected the areas of the data they want to analyze, it would not make sense to transmit blocks. Instead the selected areas are extracted from their respective blocks and joined into one numpy array.

Parameters:
  • channel – The channel on which the selection was made.
  • points – The list of start and end samples marking the selected areas.
Returns:

A numpy array of unpacked sample data.
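The joining step can be sketched as follows, assuming `points` is a flat list of alternating start and end samples. Plain Python lists stand in for the AudioBlocks and the NumPy array used by the real method:

```python
def join_selection(samples, points):
    """Concatenate the selected areas [start, end) into one flat list."""
    out = []
    for start, end in zip(points[::2], points[1::2]):
        out.extend(samples[start:end])
    return out
```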

loadWave(filename)

This is the procedure for adding a WAVE-file to the buffer. No data from the file is read; only the right number of empty AudioBlocks is prepared. Data is only read from disk when needed.

Parameters:filename – Full path to file to be loaded.
Returns:List of newly created Channel objects.
updateFromRecorder
class EditorBackend.Buffer.EmpytBlock(source, start, channel=0, arraySize=None)

Bases: EditorBackend.Buffer.AudioBlock

Empty blocks are used when data is requested from an area beyond the end of the file.

free()

This is overwritten because deleting nothing would not make much sense.

EditorBackend.Calibrations module

class EditorBackend.Calibrations.Calibrations

Bases: PyQt5.QtCore.QObject

Analyses in SNARE either use dBFS, or the user selects an area in the recording/file that contains a 94dB/1kHz calibration tone. That way an analysis can be calibrated.

addCalibration(channel, factor)

Adds a calibration to the dictionary.

Parameters:
  • channel – The channel object the calibration refers to.
  • factor – The calibration factor.
calibrationChanged
getCalibration(channel)

Returns the calibration for the given channel, or a factor of one if there is no calibration.

Parameters:channel – The channel object the calibration refers to.
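The factor itself comes from the calibration selection, and the standard relation behind a 94 dB/1 kHz tone can be sketched: 94 dB SPL corresponds to an RMS pressure of 20 µPa · 10^(94/20) ≈ 1.0 Pa, so dividing that by the digital RMS of the selected tone gives a factor that maps digital values to pascals. This is the textbook relation, not necessarily SNARE's exact formula:

```python
import math

P_REF = 20e-6  # 20 µPa reference pressure

def calibration_factor(tone_samples, spl_db=94.0):
    """Factor mapping digital sample values to pascals, from a known-SPL tone."""
    rms = math.sqrt(sum(s * s for s in tone_samples) / len(tone_samples))
    target_pa = P_REF * 10 ** (spl_db / 20.0)  # ~1.0 Pa for 94 dB SPL
    return target_pa / rms
```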

EditorBackend.Channel module

class EditorBackend.Channel.Channel(type, name)

Bases: object

A helper class. Instead of working with integer channel numbers, work with a unique object reference, which can also carry additional information, such as a channel name.

getName()

A getter method.

Returns:The name attribute of this channel.

EditorBackend.MainBackend module

class EditorBackend.MainBackend.MainBackend(sampleRate, sampleWidth)

Bases: PyQt5.QtCore.QObject

The main backend presents the collection of backend-objects, e.g. the main audio-data buffer, the recorder, audioplayer, the waveformbuffer et al. Those objects communicate with each other through the MainBackend.

addAnalysis
addCalibration(channel, selNo)

Adds (or updates) a calibration for the given channel.

Parameters:
  • channel – Channel object the analysis refers to.
  • selNo – Name of the selection, that contains the calibration selection.
addTrack
configRecord(device, channels)

Called after InputSelectorDialog has configured a recording. Adds channel objects and tracks accordingly.

Parameters:
  • device – Audio device to record from.
  • channels – Channels to record from that device.
deleteChannel(channel, track)

Delete a channel. Since the signal comes from TrackManager, the UI part of the channel has already been deleted.

Parameters:
  • channel – Delete data associated with this channel object.
  • track (EditorUI.TrackUI) – Track QWidget to delete from MainWindow.
deselectAllReports()
exportReport()
newAnalysis(channel, selNo, type='FFT')

Adds a new analysis or calibration.

Parameters:
  • channel – Channel object the analysis refers to.
  • selNo – Name of the selection.
  • type – Analysis type of the selection.
newSelection()

Adds a new selection on the TrackManager.

openWave(fileName)

Called after the user has selected a valid WAVE-file in the file dialog. Adds a track for every channel of the WAVE-file.

Parameters:fileName – Full path to file.

pauseRecord()

Notifies all relevant objects about pausing the recording.

removeTrack
selectAllReports()
startRecord()

Notifies all relevant objects about starting a recording.

stopRecord()

Notifies all relevant objects about closing the recording.

updateAnalysesStatus
updateAnalysis(channel, selNo, type)

Updates an existing analysis.

Parameters:
  • channel – Channel object the analysis refers to.
  • selNo – Name of the selection.
  • type – Analysis type of the selection.
updateRecordingStatus
updateWaveformMessage

EditorBackend.Recorder module

class EditorBackend.Recorder.Recorder(buffer, sampleRate, sampleWidth, blockSize)

Bases: PyQt5.QtCore.QObject

This class provides an interface to a non-blocking pyaudio recording stream. The interleaved channels of the raw input stream are separated, collected to form blocks of a certain size and the resulting bytearray is sent to a buffer object. The status of the object is communicated through a signal and displayed at the status bar.

callback(in_data, frame_count, time_info, status)

A PyAudio method called every time a chunk-size amount of samples has been recorded. The new samples are added to the temporary buffer until a blockSize is reached. At this point the new block is sent to the buffer and the temporary buffer is reset.

Parameters:
  • in_data – see PyAudio Reference.
  • frame_count – see PyAudio Reference.
  • time_info – see PyAudio Reference.
  • status – see PyAudio Reference.
Returns:

see PyAudio Reference.
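The accumulate-and-flush logic of this callback can be sketched independently of PyAudio. `BlockAccumulator` and `emit` are illustrative names; the real callback works on the Recorder's temporary buffer and sends blocks to the Buffer:

```python
class BlockAccumulator:
    """Collect incoming chunks and emit fixed-size blocks."""

    def __init__(self, block_size, emit):
        self.block_size = block_size
        self.emit = emit        # called with every completed block
        self.tmp = bytearray()  # temporary buffer

    def callback(self, in_data):
        self.tmp.extend(in_data)
        while len(self.tmp) >= self.block_size:  # flush full blocks
            self.emit(bytes(self.tmp[:self.block_size]))
            del self.tmp[:self.block_size]
```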

isReady()

Is the Recorder object ready for a recording (device set)?

Returns:True if Recorder is ready to record.
isRunning()

Is there a recording ongoing?

Returns:True if recording is ongoing.
open(deviceIndex)

Open the given device for recording. Then set the object to be ready for recording.

Parameters:deviceIndex – PyAudio device index.
pause()

Pauses the recording and updates the status. Remaining data in the temporary buffer will not be sent to the buffer, but also will not be lost.

record()

Starts the recording and updates the status.

sendRecPos
sendToBuffer(data)

Before sending the unformatted bytearray to the buffer, it is filtered by channel: wanted channels are separated, according to the same principle used to resolve the WAVE-file channel interleaving, and unwanted channels are ignored.

Parameters:data – Channel-interleaved raw bytearray
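The separation principle can be sketched as follows: with `n_channels` interleaved channels of `width` bytes per sample, channel `c` occupies every n-th group of `width` bytes. Function name and parameters are illustrative:

```python
def extract_channel(data, channel, n_channels, width=2):
    """Pull one channel's samples out of an interleaved raw bytearray."""
    frame = n_channels * width  # one frame = one sample per channel
    out = bytearray()
    for pos in range(channel * width, len(data), frame):
        out.extend(data[pos:pos + width])
    return bytes(out)
```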
stop()

Stops the recording: the last remaining contents of the temporary buffer are zero-padded to match the block size and sent to the buffer. Then the object is set back to the ready-for-recording state.

updateRecording

EditorBackend.Unpacker module

class EditorBackend.Unpacker.Unpacker(blockSize, sampleWidth)

Bases: object

This class simplifies the conversion from the raw bytearray read from the WAVE-file, or received from the recording hardware to a NumPy array.

unpack(data)

Automatically select the correct unpack method.

Parameters:data – Raw bytearray input
Returns:Converted numpy array.
unpack16(data)

For 2 bytes the unpacking is simple with the Python builtin "struct.unpack".

Parameters:data – Raw bytearray input
Returns:Converted numpy array.
unpack24(data)

For 3 bytes it is slightly more complicated. The builtin "struct.unpack" only works on even numbers of bytes, therefore the raw bytearray is padded with 1 byte of zeros per sample before converting it to a numpy array.

Parameters:data – Raw bytearray input
Returns:Converted numpy array.
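The padding trick can be sketched with the stdlib alone. The exact padding position in SNARE's implementation may differ; here each little-endian 24-bit sample is padded at the low end so that a signed right shift restores the sign, and a plain list stands in for the numpy array:

```python
import struct

def unpack24(data):
    """Unpack little-endian signed 24-bit samples from a raw bytearray."""
    samples = []
    for i in range(0, len(data), 3):
        padded = b'\x00' + data[i:i + 3]  # pad to 4 bytes (value becomes sample << 8)
        value = struct.unpack('<i', padded)[0] >> 8  # arithmetic shift keeps the sign
        samples.append(value)
    return samples
```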

EditorBackend.WavFile module

class EditorBackend.WavFile.WavFile(fileName, sampleRate, sampleWidth, blockSize)

Bases: PyQt5.QtCore.QThread

This class reads standard RIFF-WAVE-files and BWF-WAVE-files. Once successfully opened, raw audio data can be accessed in blocked segments by providing the channel number and a sample to start from. 16bit and 24bit files are supported as well as audio-files with an arbitrary number of channels.

Example:

file = 'test.wav'
sampleRate = 44100
sampleWidth = 2
blockSize = 441000  # 10 seconds per block

wav = WavFile(file, sampleRate, sampleWidth, blockSize)

startSample = 30 * sampleRate  # read 10 seconds starting from 0:30
leftChannel = 0

rawData = wav.getBlock(startSample, leftChannel)

# rawData could now be fed to e.g. a pyaudio callback

blockCount()

Gets the length of the audio stream in blocks. The full length of the audio stream would be blockCount()*blockSize.

Returns:Length of the audio stream in blocks, always rounded up.
channelCount()

Gets the number of channels present in the WAVE-file

Returns:number of channels
getBlock(start, channel)

Reads block-wise raw data from the WAVE-file. This method will always return a full block: if necessary a zero-padded block or even an entirely empty block.

Parameters:
  • start – Number of the sample to start reading from, e.g. start = 88200 will read from 0:02s onwards.
  • channel – 0 -> left channel, 1 -> right channel, n -> further channels
Returns:

Returns raw unformatted audio data as a bytearray. This means that e.g. in a 24bit file, three consecutive bytearray elements form one sample.

getFileInfo()

Accesses basic file information. At the moment only sampleRate and sampleWidth.

Returns:Dictionary containing file information. Keys: ‘sampleRate’, ‘sampleWidth’.
printHeader()

Prints entire RIFF-Header section to shell.

run()

EditorBackend.WavFileWrite module

class EditorBackend.WavFileWrite.WavFileWrite(fileName, sampleRate, sampleWidth, channels, blocksize)

Bases: object

This class is used to write RIFF-WAVE-files when recording with SNARE. It supports 16bit or 24bit but only one channel. It opens (and if necessary overwrites) a wav file, writes a header (initially with a filesize of zero) and then is ready to receive blockwise updates of recorded samples to append to the file. On closing the file, the header will be updated to contain the right data block length.

appendBlock(block)

Appends a block of raw sample data to the file. Note that the header will not be updated until the file is closed. In case of a crash, all sample data is saved, but the file will look empty to most programs.

Parameters:block – A bytearray already in a format of interleaved raw integer samples
close()

Finish the writing process by updating the header to contain correct size information. Then the Qt stream is closed.

writeHeader()

Writes the header with member attributes that already have the right byteformat.
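The write-then-patch approach can be sketched for a minimal 44-byte mono PCM header: the header is written with zero sizes, and on close the RIFF chunk size (offset 4) and data chunk size (offset 40) are patched. The offsets follow the standard RIFF layout; the function names are illustrative, not WavFileWrite's actual methods:

```python
import io
import struct

def write_header(f, sample_rate, sample_width):
    """Write a minimal mono PCM WAVE header with zeroed size fields."""
    byte_rate = sample_rate * sample_width
    f.write(b'RIFF' + struct.pack('<I', 0) + b'WAVE')  # RIFF size patched on close
    f.write(b'fmt ' + struct.pack('<IHHIIHH', 16, 1, 1, sample_rate,
                                  byte_rate, sample_width, sample_width * 8))
    f.write(b'data' + struct.pack('<I', 0))  # data size patched on close

def close_file(f):
    """Patch the size fields once all blocks have been appended."""
    size = f.seek(0, io.SEEK_END)
    f.seek(4)
    f.write(struct.pack('<I', size - 8))   # RIFF chunk size = file size - 8
    f.seek(40)
    f.write(struct.pack('<I', size - 44))  # data chunk size = payload bytes
```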

EditorBackend.Waveform module

class EditorBackend.Waveform.Waveform(channel, startBlock, dataBlocks, numberOfPixmaps, dataSrc)

Bases: object

Merely a data structure to simplify the handling of waveforms. This class contains space for sample data, which might be rendered by the WaveformThread into a coordinate list, which can then be sent to a TrackWaveform object to be painted and displayed to the user. It also contains information about the channel it belongs to and its position and size on the timeline.

EditorBackend.WaveformBuffer module

class EditorBackend.WaveformBuffer.WaveformBuffer(buffer, sampleWidth, blockSize, waveformHeight)

Bases: PyQt5.QtCore.QObject

TrackWaveform stores only the waveforms that are currently displayed. To avoid the computationally intense rendering of new waveforms, every waveform that has been rendered is stored in an object of this class. E.g. if a waveform was rendered and the user then changed the zoom level, so that TrackWaveform dropped all waveform objects, and the user later wants to return to the previous zoom level, all waveforms are still available from the WaveformBuffer without the need to render again. WaveformBuffer takes a waveform request, looks up whether it has been rendered already and either immediately returns the rendered waveform or creates an unrendered waveform to put on the queue of a rendering WaveformThread.

addChannel(channel)

Adding a channel in the backend will add a WaveformBufferChannel in this object.

Parameters:channel – Reference to a channel object to associate with the right sample data when accessing the buffer.
addWaveform(waveform)

Return path for rendered waveforms. Will be transmitted through the MainBackend to the TrackManager.

Parameters:waveform – A rendered waveform-object.
deleteChannel(channel)

Simply removes the WaveformBufferChannel that is associated with the given Channel object.

Parameters:channel – The Channel object whose WaveformBufferChannel should be removed.
formatWaveformMessage(load)

Slot called from a WaveformBufferChannel’s render thread. Collects the current queue size from all threads (inside of WaveformBufferChannels), adds them up and sends a message, which will be used to update the status bar.

Parameters:load – Current workload of the channel from which the update was sent.
getWaveform(channel, startBlock, dataBlocks, numberOfPixmaps)

Request to return a rendered waveform. Simply relayed to the responsible WaveformbufferChannel.

Parameters:
  • channel – Channel object linking to the responsible WaveformBufferChannel.
  • startBlock – First block of the requested waveform.
  • dataBlocks – Number of data blocks the waveform spans.
  • numberOfPixmaps – Number of pixmaps the waveform is split across.
returnWaveform
updateWaveformMessage

EditorBackend.WaveformBufferChannel module

class EditorBackend.WaveformBufferChannel.WaveformBufferChannel(buffer, sampleWidth, blockSize, waveformHeight, channel, mutex, thread)

Bases: PyQt5.QtCore.QObject

This is added as a layer between WaveformBuffer and the rendering threads. It makes it easier to add and delete channels, also multiple threads can work on the rendering.

addWaveform(waveform)

Return path for rendered pixmaps from the thread. Write into the dictionaries to avoid double renderings. Then free the AudioBlock to save memory.

Parameters:waveform – A rendered pixmap object.
getWaveform(startBlock, dataBlocks, numberOfPixmaps)

Either check by the provided parameters if the waveform already exists or create an unrendered pixmap with the given parameters and data from the buffer.

Parameters:
  • startBlock – First block of the requested waveform.
  • dataBlocks – Number of data blocks the waveform spans.
  • numberOfPixmaps – Number of pixmaps the waveform is split across.
returnWaveform
updateWaveformMessage

EditorBackend.WaveformThread module

class EditorBackend.WaveformThread.WaveformThread(sampleWidth, blockSize, mutex)

Bases: PyQt5.QtCore.QThread

This class runs the computationally intensive rendering of sample data into a list of waveform points. It runs in its own thread. (It is not a real thread due to Python's GIL, but it still allows the user interface to be more responsive by switching between "threads".) The samples to be rendered into waveforms are contained in the waveform object. The thread gets unrendered waveform objects loaded onto a queue and emits rendered waveforms through a signal. There is also a signal for communicating the current workload. Apart from that, there is no communication with the main thread.

add(waveform)

Slot for backend to add waveform objects to queue.

Parameters:waveform – A waveform object.
draw()

Computes the list of points for a pixmap drawing. In this setup it will create a layering of peak and RMS display. For close zoom levels it switches to the linear display and also spreads the entire waveform drawing out over several pixmaps (subblocks) to account for the limited maximum size of QPixmaps. Finished pixmaps are sent to the backend through a signal. New: pixmap rendering has been moved to TrackWaveform.

finishedWaveform
points(width, height, blocks)

This method creates two lists containing coordinates for drawing the waveform. The x-coordinates range from 0 to width-1. Y-coordinates represent the maximum or an average of the sample area. All coordinates are mirrored to create the usual symmetrical waveform shape. The algorithm was optimised for low memory usage. Even though the algorithm would be faster when operating on larger chunks of samples, Python does not allow control over mallocs. A previous version of this algorithm had the problem that Python would request new memory faster than it would free unused memory (although this algorithm only needs a static amount of memory), eventually reaching the memory limit for a 32bit Python. This was especially true for the NumPy version of this algorithm: NumPy cannot alter arrays in place; every operation on an array creates a copy, demanding new memory in big blocks.

Parameters:
  • width – Range of x-coordinates
  • height – Maximum for y-coordinates.
  • blocks – List of AudioBlocks to be used as data source
Returns:

Tuple of two lists, containing coordinates for the maximums-plot and the averages-plot
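The per-pixel reduction can be sketched in plain Python. Normalising by the block's own peak and the integer scaling are assumptions made for the sketch; the real implementation streams AudioBlocks to keep memory usage static and mirrors the coordinates for the symmetrical shape:

```python
def waveform_points(samples, width, height):
    """Per-pixel (x, peak) and (x, average) coordinate lists for a waveform."""
    peak = max(abs(s) for s in samples) or 1  # avoid division by zero on silence
    per_pixel = len(samples) // width
    maxima, averages = [], []
    for x in range(width):
        chunk = [abs(s) for s in samples[x * per_pixel:(x + 1) * per_pixel]]
        maxima.append((x, max(chunk) * height // peak))
        averages.append((x, sum(chunk) * height // (len(chunk) * peak)))
    return maxima, averages
```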

run()

This method starts the thread's work loop. When started, the thread will work on its queue and, if it is empty, check every second for new jobs.

updateMsg

Module contents