Welcome to the final programming assignment of this specialization!
In this week's videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wake word detection).
In this assignment you will learn to:

* Structure a speech recognition project
* Synthesize and process audio recordings to create train/dev datasets
* Train a trigger word detection model and make predictions

Let's get started! Run the following cell to load the packages you are going to use.
import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
import matplotlib.pyplot as plt  # used below for plotting labels and predictions
from td_utils import *
%matplotlib inline
Let's start by building a dataset for your trigger word detection algorithm.
Run the cells below to listen to some examples.
IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")
You will use these three types of recordings (positives/negatives/backgrounds) to create a labeled dataset.
What really is an audio recording? A microphone records small variations in air pressure over time. We work with audio sampled at 44100 Hz, so a 10 second clip is represented by 441,000 numbers. Let's look at an example.
IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")
The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis).
_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:,0].shape)
print("Time steps in input after spectrogram", x.shape)
Time steps in audio recording before spectrogram (441000,)
Time steps in input after spectrogram (101, 5511)
Now, you can define:
Tx = 5511 # The number of time steps input to the model from the spectrogram
n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram
Note that we may divide a 10 second interval of time into different numbers of units (steps):

* Raw audio divides 10 seconds into 441,000 units (sampled at 44100 Hz).
* A spectrogram divides 10 seconds into 5,511 units, so $T_x = 5511$.
* You will use pydub to synthesize audio, and it divides 10 seconds into 10,000 units (1 ms each).
* The output of our model divides 10 seconds into 1,375 units, so $T_y = 1375$: for each of these steps, the model predicts whether someone recently finished saying "activate".

Ty = 1375 # The number of time steps in the output of our model
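To make these discretizations concrete, here is a small sketch (plain arithmetic, not part of the assignment code) that converts one moment in time, 5 seconds into the clip, to an index in each unit system:

t_sec = 5                                     # a moment 5 seconds into the 10 sec clip
raw_index = t_sec * 44100                     # raw audio samples at 44100 Hz -> 220500
spectrogram_index = int(t_sec / 10 * 5511)    # spectrogram steps (Tx = 5511)  -> 2755
pydub_index = t_sec * 1000                    # pydub works in milliseconds    -> 5000
output_index = int(t_sec / 10 * 1375)         # model output steps (Ty = 1375) -> 687
print(raw_index, spectrogram_index, pydub_index, output_index)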
Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds.
# Load audio segments using pydub
activates, negatives, backgrounds = load_raw_audio()
print("background len should be 10,000, since it is a 10 sec clip\n" + str(len(backgrounds[0])),"\n")
print("activate[0] len may be around 1000, since an `activate` audio clip is usually around 1 second (but varies a lot) \n" + str(len(activates[0])),"\n")
print("activate[1] len: different `activate` clips can have different lengths\n" + str(len(activates[1])),"\n")
background len should be 10,000, since it is a 10 sec clip
10000
activate[0] len may be around 1000, since an `activate` audio clip is usually around 1 second (but varies a lot)
916
activate[1] len: different `activate` clips can have different lengths
1579
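The loaded clips are pydub AudioSegment objects, whose length is measured in milliseconds and whose volume is adjusted in dB. Here is a minimal sketch of the pydub operations used later in this assignment (the output filename is just an illustrative example):

quieter = backgrounds[0] - 20                            # reduce the background volume by 20 dB
print(len(quieter))                                      # length in ms (10000 for a 10 sec clip)
mixed = quieter.overlay(activates[0], position=1000)     # overlay an "activate" clip 1 sec into the background
mixed.export("pydub_sketch.wav", format="wav")           # hypothetical filename, for illustration only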
The labels $y^{\langle t \rangle}$ denote whether someone has just finished saying "activate": when an "activate" clip ends, the following output steps are labeled 1. For example, output step int(1375*0.5) = 687 corresponds to the moment 5 seconds into the audio clip.

To implement the training set synthesis process, you will use the following helper functions. All of these functions use a 1 ms discretization interval, so the 10 seconds of audio are discretized into 10,000 steps.

1. get_random_time_segment(segment_ms) gets a random time segment in the background audio.
2. is_overlapping(segment_time, existing_segments) checks whether a time segment overlaps with existing segments.
3. insert_audio_clip(background, audio_clip, existing_times) inserts an audio segment at a random time in the background audio, using get_random_time_segment and is_overlapping.
4. insert_ones(y, segment_end_ms) inserts 1's into the label vector y after the word "activate".

The function get_random_time_segment(segment_ms) returns a random time segment onto which we can insert an audio clip of duration segment_ms. Read through the code to make sure you understand what it is doing.

def get_random_time_segment(segment_ms):
"""
Gets a random time segment of duration segment_ms in a 10,000 ms audio clip.
Arguments:
segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds")
Returns:
segment_time -- a tuple of (segment_start, segment_end) in ms
"""
segment_start = np.random.randint(low=0, high=10000-segment_ms) # Make sure segment doesn't run past the 10sec background
segment_end = segment_start + segment_ms - 1
return (segment_start, segment_end)
Exercise: Implement is_overlapping(segment_time, previous_segments) to check whether a new time segment overlaps with any of the previous segments. You will need to carry out two steps:

1. Create a "False" flag, which you will set to "True" if you find an overlap.
2. Loop over the start and end times in previous_segments, compare them to the new segment's start and end times, and set the flag to True if there is an overlap. You can use:

for ....:
    if ... <= ... and ... >= ...:
        ...

Hint: there is overlap if the new segment starts before the previous segment ends, and the new segment ends after the previous segment starts.
# GRADED FUNCTION: is_overlapping
def is_overlapping(segment_time, previous_segments):
"""
Checks if the time of a segment overlaps with the times of existing segments.
Arguments:
segment_time -- a tuple of (segment_start, segment_end) for the new segment
previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments
Returns:
True if the time segment overlaps with any of the existing segments, False otherwise
"""
segment_start, segment_end = segment_time
### START CODE HERE ### (≈ 4 lines)
# Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
overlap = False
# Step 2: loop over the previous_segments start and end times.
# Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
for previous_start, previous_end in previous_segments:
if segment_start <= previous_end and segment_end >= previous_start:
overlap = True
### END CODE HERE ###
return overlap
overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)
print("Overlap 2 = ", overlap2)
Overlap 1 =  False
Overlap 2 =  True
Expected Output:
**Overlap 1** | False |
**Overlap 2** | True |
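As an extra (ungraded) check of the comparison logic, note that segments which are merely adjacent do not count as overlapping, while sharing even a single millisecond does:

print(is_overlapping((100, 199), [(200, 299)]))   # False -- adjacent, but no shared millisecond
print(is_overlapping((100, 200), [(200, 299)]))   # True  -- both segments include millisecond 200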
Exercise: Implement insert_audio_clip() to overlay an audio clip onto the background 10-second clip, using the previous helper functions. You will need to carry out 4 steps:

1. Get a random time segment of the right duration in ms.
2. Make sure that the time segment does not overlap with any of the previous time segments. If it overlaps, go back to step 1 and pick a new time segment.
3. Append the new time segment to the list of existing time segments, so as to keep track of all the segments you have inserted.
4. Overlay the audio clip over the background using pydub.

# GRADED FUNCTION: insert_audio_clip
def insert_audio_clip(background, audio_clip, previous_segments):
"""
Insert a new audio segment over the background noise at a random time step, ensuring that the
audio segment does not overlap with existing segments.
Arguments:
background -- a 10 second background audio recording.
audio_clip -- the audio clip to be inserted/overlaid.
previous_segments -- times where audio segments have already been placed
Returns:
new_background -- the updated background audio
segment_time -- the (segment_start, segment_end) tuple where the clip was inserted
"""
# Get the duration of the audio clip in ms
segment_ms = len(audio_clip)
### START CODE HERE ###
# Step 1: Use one of the helper functions to pick a random time segment onto which to insert
# the new audio clip. (≈ 1 line)
segment_time = get_random_time_segment(segment_ms)
# Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep
# picking new segment_time at random until it doesn't overlap. (≈ 2 lines)
while is_overlapping(segment_time, previous_segments):
segment_time = get_random_time_segment(segment_ms)
# Step 3: Append the new segment_time to the list of previous_segments (≈ 1 line)
previous_segments.append(segment_time)
### END CODE HERE ###
# Step 4: Superpose audio segment and background
new_background = background.overlay(audio_clip, position = segment_time[0])
return new_background, segment_time
np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time: ", segment_time)
IPython.display.Audio("insert_test.wav")
Segment Time: (2254, 3169)
Expected Output
**Segment Time** | (2254, 3169) |
# Expected audio
IPython.display.Audio("audio_examples/insert_reference.wav")
Next, implement the code to update the labels $y^{\langle t \rangle}$, assuming you have just inserted an "activate" clip. In the code below, y is a (1, 1375) dimensional vector, since $T_y = 1375$. If the "activate" ended at output step $t$, set $y^{\langle t+1 \rangle} = 1$ as well as up to 49 additional consecutive values. Make sure you don't run off the end of the array and try to update y[0][1375], since the valid indices are y[0][0] through y[0][1374] because $T_y = 1375$. So if "activate" ends at step 1370, you would get only y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1.
Exercise: Implement insert_ones(). You can use a for loop. If a segment ends at segment_end_ms (using a 10,000 step discretization), convert it to an index for the output y (using a 1,375 step discretization) with:

segment_end_y = int(segment_end_ms * Ty / 10000.0)
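For example (a quick arithmetic check, not part of the graded code), a segment ending 5,000 ms into the clip maps to output step 687, so the labels y[0][688] through y[0][737] will be set to 1:

segment_end_ms = 5000                                   # segment ends 5 seconds into the clip
segment_end_y = int(segment_end_ms * 1375 / 10000.0)    # = 687
print(segment_end_y)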
# GRADED FUNCTION: insert_ones
def insert_ones(y, segment_end_ms):
"""
Update the label vector y. The labels of the 50 output steps strictly after the end of the segment
should be set to 1. By strictly we mean that the label of segment_end_y should be 0 while the
50 following labels should be ones.
Arguments:
y -- numpy array of shape (1, Ty), the labels of the training example
segment_end_ms -- the end time of the segment in ms
Returns:
y -- updated labels
"""
# duration of the background (in terms of spectrogram time-steps)
segment_end_y = int(segment_end_ms * Ty / 10000.0)
# Add 1 to the correct index in the background label (y)
### START CODE HERE ### (≈ 3 lines)
for i in range(segment_end_y + 1, segment_end_y + 51):
if i < Ty:
y[0, i] = 1
### END CODE HERE ###
return y
arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0,:])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])
sanity checks: 0.0 1.0 0.0
Expected Output
**sanity checks**: | 0.0 1.0 0.0 |
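The sanity-check values above follow directly from the same conversion (again, just arithmetic, not graded):

print(int(9700 * 1375 / 10000.0))   # 1333 -> y[0][1333] stays 0; y[0][1334] onward are set to 1
print(int(4251 * 1375 / 10000.0))   # 584  -> y[0][585] ... y[0][634] become 1, so y[0][634] is 1.0 and y[0][635] is 0.0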
Finally, you can use insert_audio_clip and insert_ones to create a new training example.

Exercise: Implement create_training_example(). You will need to carry out the following steps:

1. Initialize the label vector y as a numpy array of zeros with shape (1, Ty).
2. Initialize the set of existing segments to an empty list.
3. Randomly select 0 to 4 "activate" audio clips and insert them onto the 10-second clip. Also insert labels at the correct positions in the label vector y.
4. Randomly select 0 to 2 negative audio clips and insert them into the 10-second clip.
# GRADED FUNCTION: create_training_example
def create_training_example(background, activates, negatives):
"""
Creates a training example with a given background, activates, and negatives.
Arguments:
background -- a 10 second background audio recording
activates -- a list of audio segments of the word "activate"
negatives -- a list of audio segments of random words that are not "activate"
Returns:
x -- the spectrogram of the training example
y -- the label at each time step of the spectrogram
"""
# Set the random seed
np.random.seed(18)
# Make background quieter
background = background - 20
### START CODE HERE ###
# Step 1: Initialize y (label vector) of zeros (≈ 1 line)
y = np.zeros(shape=(1, Ty))
# Step 2: Initialize segment times as an empty list (≈ 1 line)
previous_segments = []
### END CODE HERE ###
# Select 0-4 random "activate" audio clips from the entire list of "activates" recordings
number_of_activates = np.random.randint(0, 5)
random_indices = np.random.randint(len(activates), size=number_of_activates)
random_activates = [activates[i] for i in random_indices]
### START CODE HERE ### (≈ 3 lines)
# Step 3: Loop over randomly selected "activate" clips and insert in background
for random_activate in random_activates:
# Insert the audio clip on the background
background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
# Retrieve segment_start and segment_end from segment_time
segment_start, segment_end = segment_time
# Insert labels in "y"
y = insert_ones(y, segment_end)
### END CODE HERE ###
# Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings
number_of_negatives = np.random.randint(0, 3)
random_indices = np.random.randint(len(negatives), size=number_of_negatives)
random_negatives = [negatives[i] for i in random_indices]
### START CODE HERE ### (≈ 2 lines)
# Step 4: Loop over randomly selected negative clips and insert in background
for random_negative in random_negatives:
# Insert the audio clip on the background
background, _ = insert_audio_clip(background, random_negative, previous_segments)
### END CODE HERE ###
# Standardize the volume of the audio clip
background = match_target_amplitude(background, -20.0)
# Export new training example
file_handle = background.export("train" + ".wav", format="wav")
print("File (train.wav) was saved in your directory.")
# Get and plot spectrogram of the new recording (background with superposition of positive and negatives)
x = graph_spectrogram("train.wav")
return x, y
x, y = create_training_example(backgrounds[0], activates, negatives)
File (train.wav) was saved in your directory.
Expected Output
Now you can listen to the training example you created and compare it to the spectrogram generated above.
IPython.display.Audio("train.wav")
Expected Output
IPython.display.Audio("audio_examples/train_reference.wav")
Finally, you can plot the associated labels for the generated training example.
plt.plot(y[0])
[<matplotlib.lines.Line2D at 0x7f13f4891518>]
Expected Output
# Load preprocessed training examples
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")
# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")
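If you like, you can sanity-check the shapes of the loaded arrays (a minimal sketch; the expected trailing dimensions follow from Tx, n_freq, and Ty defined above):

print("X:", X.shape)          # expected (num_train_examples, 5511, 101)
print("Y:", Y.shape)          # expected (num_train_examples, 1375, 1)
print("X_dev:", X_dev.shape)  # expected (num_dev_examples, 5511, 101)
print("Y_dev:", Y_dev.shape)  # expected (num_dev_examples, 1375, 1)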
from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam
Our goal is to build a network that will ingest a spectrogram and output a signal when it detects the trigger word. This network will use 4 layers:
* A convolutional layer
* Two GRU layers
* A dense layer.
Here is the architecture we will use.
One key layer of this model is the 1D convolutional step (near the bottom of Figure 3).
Implementing the model can be done in four steps. In the snippets below, input_x stands for the output of the previous layer and output_x for the output of the current one.

Step 1: CONV layer. Use Conv1D() to implement this, with 196 filters, a filter size of 15 (kernel_size=15), and a stride of 4. Follow the convolution with batch normalization, a ReLU activation, and dropout with a rate of 0.8:

output_x = Conv1D(filters=..., kernel_size=..., strides=...)(input_x)
output_x = BatchNormalization()(output_x)
output_x = Activation("...")(output_x)
output_x = Dropout(rate=...)(output_x)

Step 2: First GRU layer. Use 128 units and set return_sequences=True so that all of the GRU's hidden states are passed to the next layer, then follow with dropout and batch normalization:

output_x = GRU(units=..., return_sequences=...)(input_x)
output_x = Dropout(rate=...)(output_x)
output_x = BatchNormalization()(output_x)

Step 3: Second GRU layer. This has the same specifications as the first GRU layer, followed by one extra dropout layer.

Step 4: Create a time-distributed dense layer as follows:

X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)

This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step.

Exercise: Implement model(); the architecture is presented in Figure 3.
# GRADED FUNCTION: model
def model(input_shape):
"""
Function creating the model's graph in Keras.
Argument:
input_shape -- shape of the model's input data (using Keras conventions)
Returns:
model -- Keras model instance
"""
X_input = Input(shape = input_shape)
### START CODE HERE ###
# Step 1: CONV layer (≈4 lines)
X = Conv1D(filters=196, kernel_size=15, strides=4)(X_input) # CONV1D
X = BatchNormalization()(X) # Batch normalization
X = Activation("relu")(X) # ReLu activation
X = Dropout(rate=0.8)(X) # dropout (use 0.8)
# Step 2: First GRU Layer (≈4 lines)
X = GRU(units=128, return_sequences = True)(X) # GRU (use 128 units and return the sequences)
X = Dropout(rate=0.8)(X) # dropout (use 0.8)
X = BatchNormalization()(X) # Batch normalization
# Step 3: Second GRU Layer (≈4 lines)
X = GRU(units=128, return_sequences = True)(X) # GRU (use 128 units and return the sequences)
X = Dropout(rate=0.8)(X) # dropout (use 0.8)
X = BatchNormalization()(X) # Batch normalization
X = Dropout(rate=0.8)(X) # dropout (use 0.8)
# Step 4: Time-distributed dense layer (see given code in instructions) (≈1 line)
X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed (sigmoid)
### END CODE HERE ###
model = Model(inputs = X_input, outputs = X)
return model
model = model(input_shape = (Tx, n_freq))
Let's print the model summary to keep track of the shapes.
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_4 (InputLayer)         (None, 5511, 101)         0
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 1375, 196)         297136
_________________________________________________________________
batch_normalization_6 (Batch (None, 1375, 196)         784
_________________________________________________________________
activation_4 (Activation)    (None, 1375, 196)         0
_________________________________________________________________
dropout_5 (Dropout)          (None, 1375, 196)         0
_________________________________________________________________
gru_3 (GRU)                  (None, 1375, 128)         124800
_________________________________________________________________
dropout_6 (Dropout)          (None, 1375, 128)         0
_________________________________________________________________
batch_normalization_7 (Batch (None, 1375, 128)         512
_________________________________________________________________
gru_4 (GRU)                  (None, 1375, 128)         98688
_________________________________________________________________
dropout_7 (Dropout)          (None, 1375, 128)         0
_________________________________________________________________
batch_normalization_8 (Batch (None, 1375, 128)         512
_________________________________________________________________
dropout_8 (Dropout)          (None, 1375, 128)         0
_________________________________________________________________
time_distributed_2 (TimeDist (None, 1375, 1)           129
=================================================================
Total params: 522,561
Trainable params: 521,657
Non-trainable params: 904
_________________________________________________________________
Expected Output:
**Total params** | 522,561 |
**Trainable params** | 521,657 |
**Non-trainable params** | 904 |
The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 to 1375.
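You can confirm this step reduction arithmetically: for a Conv1D layer with the default "valid" padding, the output length is floor((input_length - kernel_size) / stride) + 1. A quick check (not part of the assignment code):

conv_output_steps = (5511 - 15) // 4 + 1   # (Tx - kernel_size) // stride + 1
print(conv_output_steps)                   # 1375, which matches Ty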
Trigger word detection takes a long time to train. To save time, we've already trained a model for you; load it by running the next cell.

model = load_model('./models/tr_model.h5')
You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples.
opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=1)
Epoch 1/1
26/26 [==============================] - 37s - loss: 0.0726 - acc: 0.9805
<keras.callbacks.History at 0x7f13dfd25860>
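If you decide to train for more epochs on your own, you might also save checkpoints along the way with the ModelCheckpoint callback imported earlier. A minimal sketch (the checkpoint filename and epoch count are just examples):

checkpoint = ModelCheckpoint("tr_model_checkpoint.h5", monitor="loss", save_best_only=True)  # example filename
model.fit(X, Y, batch_size=5, epochs=10, callbacks=[checkpoint])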
Finally, let's see how your model performs on the dev set.
loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)
25/25 [==============================] - 5s
Dev set accuracy =  0.945163607597
This looks pretty good! Keep in mind, though, that accuracy is not a very informative metric for this task: the labels are heavily skewed toward 0, so a network that always predicted 0 would still score a high accuracy. Precision/recall or an F1 score would be more meaningful, but here we will simply check the model's behavior empirically.
Now that you have built a working model for trigger word detection, let's use it to make predictions. This code snippet runs audio (saved in a wav file) through the network.
def detect_triggerword(filename):
plt.subplot(2, 1, 1)
x = graph_spectrogram(filename)
# the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) to input into the model
x = x.swapaxes(0,1)
x = np.expand_dims(x, axis=0)
predictions = model.predict(x)
plt.subplot(2, 1, 2)
plt.plot(predictions[0,:,0])
plt.ylabel('probability')
plt.show()
return predictions
Once the model has estimated the probability of the trigger word at each output step, you can insert a chime sound whenever that probability crosses a threshold. Because the prediction can stay above the threshold for many consecutive steps after a single "activate", the function below inserts a chime at most once every 75 output steps, which prevents several chimes from being added for one utterance of the trigger word.

chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
audio_clip = AudioSegment.from_wav(filename)
chime = AudioSegment.from_wav(chime_file)
Ty = predictions.shape[1]
# Step 1: Initialize the number of consecutive output steps to 0
consecutive_timesteps = 0
# Step 2: Loop over the output steps in the y
for i in range(Ty):
# Step 3: Increment consecutive output steps
consecutive_timesteps += 1
# Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
if predictions[0,i,0] > threshold and consecutive_timesteps > 75:
# Step 5: Superpose audio and background using pydub
audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000)
# Step 6: Reset consecutive output steps to 0
consecutive_timesteps = 0
audio_clip.export("chime_output.wav", format='wav')
Let's explore how our model performs on two unseen audio clips from the development set. First, listen to the two dev set clips.
IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")
Now let's run the model on these audio clips and see if it adds a chime after "activate"!
filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
filename = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")
You've come to the end of this assignment!
Congratulations on finishing the final assignment!
Thank you for sticking with us through the end and for all the hard work you've put into learning deep learning. We hope you have enjoyed the course!
In this optional and ungraded portion of this notebook, you can try your model on your own audio clips! Record a 10-second clip of yourself saying the word "activate" and other random words, and upload it to Coursera as myaudio.wav. Be sure to upload the audio as a wav file; if your recording is not exactly 10 seconds, the code below will trim or pad it as needed.

# Preprocess the audio to the correct format
def preprocess_audio(filename):
# Trim or pad audio segment to 10000ms
padding = AudioSegment.silent(duration=10000)
segment = AudioSegment.from_wav(filename)[:10000]
segment = padding.overlay(segment)
# Set frame rate to 44100
segment = segment.set_frame_rate(44100)
# Export as wav
segment.export(filename, format='wav')
Once you've uploaded your audio file to Coursera, put the path to your file in the variable below.
your_filename = "audio_examples/my_audio.wav"
preprocess_audio(your_filename)
IPython.display.Audio(your_filename) # listen to the audio you uploaded
Finally, use the model to predict when you say "activate" in the 10 second audio clip, and trigger a chime. If chimes are not being added appropriately, try adjusting chime_threshold.
chime_threshold = 0.5
prediction = detect_triggerword(your_filename)
chime_on_activate(your_filename, prediction, chime_threshold)
IPython.display.Audio("./chime_output.wav")