I'm a project manager at a local community radio station. We are implementing a set of features in Airtime to enable better logging for CRTC regulations.
Hi Dominique, thanks for sharing that document. There is no really clean way to unmix compiled tracks automatically, so if you want split tracks you will either have to leave some silence between tracks or have the DJ build the show in Airtime from individual files. I would suggest the latter, because it's less complicated and makes your library more re-usable.
An alternative would be to keep the mixed show file whole (as this is a good format for listen-on-demand downloads via a podcast app), avoid any bad cuts, and simply annotate the track start positions within the file. For example, Mixcloud has an API for uploading a whole show file and specifying the tracklist at the same time: http://blog.mixcloud.com/2014/08/22/two-new-apis-launched-today/
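To make the idea concrete, here is a sketch of building the request payload for that kind of upload. The field names (`name`, `sections-N-artist`, `sections-N-song`, `sections-N-start_time`) follow Mixcloud's documented upload endpoint, but treat them as assumptions and check the current API documentation before relying on them; the show name and tracklist below are made up.

```python
# Hedged sketch: assemble the form fields for a Mixcloud-style upload
# that attaches a tracklist ("sections") to a whole show file.
def build_mixcloud_payload(show_name, tracklist):
    """tracklist: list of (artist, song, start_time_in_seconds) tuples."""
    payload = {"name": show_name}
    for i, (artist, song, start) in enumerate(tracklist):
        payload[f"sections-{i}-artist"] = artist
        payload[f"sections-{i}-song"] = song
        payload[f"sections-{i}-start_time"] = start
    return payload

payload = build_mixcloud_payload(
    "Saturday Night Mix",
    [("Artist A", "Opening Track", 0),
     ("Artist B", "Second Track", 215)],
)
# The payload would then be posted as multipart form data together
# with the audio file, e.g. (untested, endpoint from Mixcloud docs):
#   requests.post("https://api.mixcloud.com/upload/",
#                 data=payload, files={"mp3": open("show.mp3", "rb")})
```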
As for music fingerprinting for tracks played from CD (or vinyl), you could look at the open source project http://echoprint.me/ as a starting point. If you are playing really obscure or unreleased music, you might like to contribute some fingerprints back.
Thank you for your feedback. Our approach to splitting tracks is to create a playlist in the media library whose items all point to the same compilation audio file, each with different cue points for the start and end time of the track. We will test playback for fade-in and fade-out silence.
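The data model for that approach can be sketched very simply: every playlist item references the same compilation file and differs only in its cue in/out. The class and field names below are illustrative, not Airtime's actual schema.

```python
from dataclasses import dataclass

# Illustrative model: one compilation file, many playlist items,
# each defined only by its cue in/out positions (in seconds).
@dataclass
class PlaylistItem:
    file_path: str   # all items point at the same compilation file
    title: str
    cue_in: float
    cue_out: float

    @property
    def duration(self) -> float:
        return self.cue_out - self.cue_in

compilation = "shows/saturday_mix.mp3"
playlist = [
    PlaylistItem(compilation, "Track One", 0.0, 212.5),
    PlaylistItem(compilation, "Track Two", 212.5, 431.0),
    PlaylistItem(compilation, "Track Three", 431.0, 655.2),
]

# Sanity check: cue points should be contiguous, so there is no gap
# (audible silence) or overlap (double playback) between items.
for prev, curr in zip(playlist, playlist[1:]):
    assert prev.cue_out == curr.cue_in
```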
I am aware of the Echoprint solution; we are exploring that option.
The approach of using one file with multiple sets of cue points in Airtime is the alternative I had in mind; please let us know how that goes. The limitation of splitting files on cue points is that if your DJs do any mixing or talking, pieces of other tracks or speech will end up in the splits. I think you could simplify your system and do less work by changing the DJ workflow.
For example:
1. DJ Landry uploads the tracks she intends to use in her next mix to the Airtime library, ensuring they are fully tagged with metadata.
2. DJ Landry downloads the latest station idents, jingles or ads while in the Library.
3. DJ Landry mixes her show live, mixing files, CDs or records, talking, having guests in the studio etc. and records a two hour file.
4. DJ Landry uploads the two hour recorded file to Airtime and annotates the music cue points with artist/track.
5. Airtime generates the Playout History log based on cue point information from the show as supplied by the DJ, using internal or external lookups to fill in missing metadata (such as Label, Composer or ISRC).
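Step 5 above could be sketched as a merge of the DJ's cue annotations with whatever metadata the library already holds, falling back to blanks where a lookup fails. The dictionary keys and the library structure here are assumptions for illustration, not Airtime's real Playout History schema.

```python
# Hedged sketch of step 5: turn DJ-supplied cue annotations into
# playout-history rows, filling missing fields (here Label and ISRC)
# from the station library where a match exists.
library = {
    ("Artist A", "Opening Track"): {"label": "Label X", "isrc": "CA-AAA-14-00001"},
}

def build_history(show_start, annotations):
    """annotations: dicts with 'offset' (seconds into show), 'artist', 'track'."""
    rows = []
    for a in annotations:
        extra = library.get((a["artist"], a["track"]), {})
        rows.append({
            "played_at": show_start + a["offset"],
            "artist": a["artist"],
            "track": a["track"],
            "label": extra.get("label", ""),   # blank when no match found
            "isrc": extra.get("isrc", ""),
        })
    return rows

rows = build_history(0, [
    {"offset": 30, "artist": "Artist A", "track": "Opening Track"},
    {"offset": 240, "artist": "Vinyl Artist", "track": "Unknown B-Side"},
])
```

Because the annotations are stored against the show recording rather than typed into the log, re-running `build_history` on a repeat broadcast reuses the same work.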
Doing it this way round should ensure that the annotation is only done once, while entering it directly into Playout History means that the same work will need to be done each time the show is repeated.
So we then have the best of both worlds - clean cut, individual tracks for re-use in the Library, but also fully mixed show recordings which are downloadable on demand.
I think we are on the same page. The problem comes when not all the tracks in the compilation audio are in the library. In your example, if someone used a CD or record and didn't upload an unmixed version to Airtime, there would be no annotation to reference, so you would have to annotate it manually. But I like your idea of using existing library audio files for their metadata and cue points.
We will develop a tool to annotate compilation audio and set cue points for splitting compilation audio files.
Hi Dominique, for the CDJ and vinyl DJs, a certain amount of manual annotation is going to be inevitable, even if only to correct the data returned by Echoprint etc. We have talked about making that easier by having a cue point marker button that the DJ can click in the Airtime interface, then enter the artist/track information directly into the log. If the DJ is really well prepared, they could enter the metadata first, then click the cue point marker at the appropriate time (an assistant would be helpful here!). This would enable 'Now Playing' metadata to be sent out over the live-info API even for vinyl.
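The marker-button idea could work something like the sketch below: each click records an offset into the live show, and the DJ's typed metadata attaches to the most recent mark. The class, its methods, and how the result would reach the live-info feed are all hypothetical; this only demonstrates the interaction.

```python
import time

# Hypothetical sketch of the cue-point marker button: clicks record
# offsets into the live show; metadata is attached to the last mark.
class MarkerLog:
    def __init__(self, show_start=None):
        self.show_start = show_start if show_start is not None else time.time()
        self.marks = []

    def mark(self, now=None):
        """DJ clicks the marker button at the start of a track."""
        now = now if now is not None else time.time()
        self.marks.append({"offset": now - self.show_start,
                           "artist": None, "track": None})

    def annotate_last(self, artist, track):
        """DJ (or assistant) fills in artist/track for the latest mark."""
        self.marks[-1].update(artist=artist, track=track)

log = MarkerLog(show_start=1000.0)
log.mark(now=1125.0)   # track starts 125 s into the show
log.annotate_last("Vinyl Artist", "Rare B-Side")
```

With metadata entered first (the well-prepared DJ case), `annotate_last` would simply run before `mark`'s timestamp is confirmed; either ordering yields a complete entry that could feed a now-playing display.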
Yes, we are taking the manual approach first. The method we are looking at implementing is a simple progress bar that allows the user to listen to the audio and pause it at the split point; once the audio has been split, they can annotate it by editing the metadata in the media library. Having a button to select default metadata/cue points from existing media would help speed up the manual entry process.
We are looking at the possibility of detecting cue points using Echoprint fingerprinting technology. From my understanding, the minimum required sample size for a fingerprint is 20 seconds. Our algorithm for detecting cue points is to split the compilation audio into 20-second segments, analyse those segments to detect changes in the output, and split on the changes.
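The control flow of that algorithm can be sketched as below. The `fingerprint` function is a stand-in (here it just hashes the samples) where an Echoprint lookup would go in the real system; the toy audio signal and sample rate are invented purely to demonstrate the segmentation.

```python
# Sketch of the described algorithm: fingerprint consecutive
# 20-second windows and propose a cue point wherever the
# fingerprint changes between adjacent windows.
SEGMENT_SECONDS = 20

def fingerprint(samples):
    # Stand-in for an Echoprint fingerprint of this segment.
    return hash(tuple(samples))

def detect_cue_points(samples, sample_rate):
    seg_len = SEGMENT_SECONDS * sample_rate
    segments = [samples[i:i + seg_len]
                for i in range(0, len(samples), seg_len)]
    fps = [fingerprint(s) for s in segments]
    # Propose a split at each boundary where adjacent windows differ.
    return [i * SEGMENT_SECONDS
            for i in range(1, len(fps)) if fps[i] != fps[i - 1]]

# Toy signal: 40 s of "track A" (zeros) then 40 s of "track B" (ones).
rate = 10  # unrealistically low sample rate, for demonstration only
audio = [0] * (40 * rate) + [1] * (40 * rate)
# detect_cue_points(audio, rate) → [40]
```

Note the 20-second granularity means a real track boundary can land mid-window, so detected cue points would still need fine-tuning (e.g. with your progress-bar tool) before splitting.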
Sorry, I've not used Echoprint personally, only read about it. For an annotation interface, you might like to start with the existing waveform cuepoint editor.