Using Web Audio API For Analyzing Input From Microphone (convert MediaStreamSource To BufferSource)
I am trying to get the beats per minute (BPM) using the Web Audio API, like it is done in the following links (http://joesul.li/van/beat-detection-using-web-audio/ or https://github
Solution 1:
Those articles are great. There are a few things wrong with your current approach:
- You don't need to decode the stream. Connect it to a web audio context with a MediaStreamAudioSourceNode, then use a ScriptProcessorNode (deprecated) or an AudioWorklet (not supported everywhere yet) to grab the samples and run the detection. decodeAudioData takes an encoded buffer - i.e. the contents of an MP3 file - not a stream object. (See the capture sketch after this list.)
- Keep in mind this is a STREAM, not a single file - you can't just hand an entire song's audio to the beat detector. Well, you CAN - but if you're streaming, you would have to wait until the whole file arrives, which would be bad. You'll have to work in chunks, and the BPM may change during the song, so collect a chunk at a time - probably a second or more of audio - and pass each chunk to the beat-detection code.
- Although it may be a good idea to low-pass-filter the data, it's probably not worthwhile to high-pass-filter it. Remember that these filters aren't brick-wall filters - they don't slice out everything above or below their cutoff frequency, they just attenuate it. (See the filtering sketch after this list.)
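Here is a minimal sketch of the first two points, assuming the page already has microphone permission. It wraps the live stream in a MediaStreamAudioSourceNode (no decodeAudioData involved) and uses a ScriptProcessorNode to accumulate roughly one second of mono samples at a time; `analyzeChunk` is a hypothetical stand-in for whatever beat-detection routine you use:

```javascript
const audioCtx = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  // Wrap the live stream in a node the audio graph can use.
  // decodeAudioData is only for encoded files (MP3, etc.), not streams.
  const source = audioCtx.createMediaStreamSource(stream);

  // ScriptProcessorNode is deprecated but still widely supported;
  // an AudioWorklet is the modern replacement.
  const processor = audioCtx.createScriptProcessor(4096, 1, 1);

  let chunk = [];
  const samplesPerChunk = audioCtx.sampleRate; // ~1 second of audio

  processor.onaudioprocess = (event) => {
    chunk.push(...event.inputBuffer.getChannelData(0));
    if (chunk.length >= samplesPerChunk) {
      // Hypothetical beat-detection entry point; the BPM can change over
      // the song, so each chunk is analyzed independently.
      analyzeChunk(new Float32Array(chunk), audioCtx.sampleRate);
      chunk = [];
    }
  };

  source.connect(processor);
  processor.connect(audioCtx.destination); // keeps the processor running
});

function analyzeChunk(samples, sampleRate) {
  // Placeholder for the detection code from the linked articles.
}
```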
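And a sketch of the low-pass step, assuming `samples` is one of the Float32Array chunks captured above. It renders the chunk offline through a BiquadFilterNode so the low end (kick drum) dominates before you count peaks; the 150 Hz cutoff is an assumption you would tune:

```javascript
function lowPassChunk(samples, sampleRate) {
  // Offline context the same length as the chunk, so rendering is instant.
  const offlineCtx = new OfflineAudioContext(1, samples.length, sampleRate);

  const buffer = offlineCtx.createBuffer(1, samples.length, sampleRate);
  buffer.copyToChannel(samples, 0);

  const source = offlineCtx.createBufferSource();
  source.buffer = buffer;

  // Not a brick-wall filter: content above the cutoff is attenuated, not removed.
  const filter = offlineCtx.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 150; // assumed cutoff

  source.connect(filter);
  filter.connect(offlineCtx.destination);
  source.start(0);

  // Resolves with an AudioBuffer holding the filtered samples.
  return offlineCtx.startRendering();
}
```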