
Reflections

I had a much bigger project in mind than what I ended up doing, mainly because I had neither the coding experience nor the time. I still managed to create a simple version of Shazam that identifies what type of music is playing, ranging from classical to electronic.

I measure the success of my neural network by testing how accurate it is at determining the song genre. The neural network classifies all the songs correctly; a next step would be to add more layers to the network so that it could perhaps name the song or identify the artist.
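For illustration, here is a minimal sketch of how that accuracy can be computed (the genre labels below are made up for the example, not my actual data):

predicted = ["classical", "electronic", "classical", "jazz"]
actual = ["classical", "electronic", "classical", "electronic"]

# Accuracy is the fraction of songs whose predicted genre matches the true genre.
accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(f"Accuracy: {accuracy:.0%}")  # prints "Accuracy: 75%"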

Here is the GitHub link for where the initial coding for my project came from:

https://github.com/despoisj/DeepAudioClassification/blob/master/config.py

And here is my modified code:

https://github.com/izasab/MAIN

 

May 11th, 2018

More Complex Situation

I modeled a single neuron with three input connections and one output connection. I assigned random weights to a 3×1 matrix, with values in the range of -1 to 1. I pass the weighted sum of the inputs through the Sigmoid function to normalise it to a value between 0 and 1. The derivative of the Sigmoid function indicates how confident I am about the existing weight.

After this, the neural network learns through a process of trial and error, adjusting the weights each time. To determine how successfully the neuron learns, I calculate the error, which is the difference between the desired output and the predicted output. I then multiply the error by the input and by the Sigmoid derivative, which causes the less confident weights to adjust more and means that inputs equal to zero do not change the weights at all.
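To make that update rule concrete, here is a small worked example of a single weight adjustment (the numbers are illustrative, not taken from the actual training run):

import numpy as np

x = np.array([[1.0, 0.0, 1.0]])       # one training input
target = np.array([[1.0]])            # the desired output
w = np.array([[0.5], [-0.3], [0.1]])  # current weights

output = 1 / (1 + np.exp(-x @ w))     # Sigmoid of the weighted sum
error = target - output               # desired output minus predicted output
# Multiply the error by the input and by the Sigmoid derivative:
# zero inputs contribute nothing, and confident outputs (near 0 or 1)
# produce small adjustments.
adjustment = x.T @ (error * output * (1 - output))
w += adjustment

Because the second input is 0, the second weight is left unchanged, exactly as described above.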

I give the neural network a training set and make it run through it many times, then test it with a new situation.

 

from numpy import exp, array, random, dot


class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator so the results are reproducible.
        random.seed(1)

        # Model a single neuron with three input connections and one
        # output connection: random weights in a 3x1 matrix, with values
        # in the range -1 to 1.
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    # The Sigmoid function normalises the weighted sum of the inputs
    # to a value between 0 and 1.
    def __sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function, which indicates how
    # confident we are about the existing weight.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    def train(self, training_set_inputs, training_set_outputs, num_iter):
        for iteration in range(num_iter):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # The error is the difference between the desired output
            # and the predicted output.
            error = training_set_outputs - output

            # Multiply the error by the input and by the gradient of the
            # Sigmoid curve, so less confident weights are adjusted more
            # and inputs that equal zero cause no change.
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        # Pass inputs through our neural network (our single neuron).
        return self.__sigmoid(dot(inputs, self.synaptic_weights))


if __name__ == "__main__":

    # Initialise a single-neuron neural network.
    neural_network = NeuralNetwork()

    print("Random starting synaptic weights: ")
    print(neural_network.synaptic_weights)

    # The training set: 4 examples, each consisting of 3 input values
    # and 1 output value.
    training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_set_outputs = array([[0, 1, 1, 0]]).T

    # Train the neural network using the training set.
    # Do it 10,000 times and make small adjustments each time.
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    print("New synaptic weights after training: ")
    print(neural_network.synaptic_weights)

    # Test the neural network with a new situation.
    print("Considering new situation [1, 0, 0] -> ?: ")
    print(neural_network.think(array([1, 0, 0])))

# Output:

# Random starting synaptic weights:
# [[-0.16595599]
#  [ 0.44064899]
#  [-0.99977125]]
# New synaptic weights after training:
# [[ 9.67299303]
#  [-0.2078435 ]
#  [-4.62963669]]
# Considering new situation [1, 0, 0] -> ?:
# [ 0.99993704]

April 25th, 2018

Introduction to the Code

This is the simple code I started with, where the neural network runs through many iterations to try to output the number one. Here the output is very close, at 0.99993704.

from numpy import exp, array, random, dot

training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1

for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))

print(1 / (1 + exp(-(dot(array([1, 0, 0]), synaptic_weights)))))

 

 

April 25th, 2018

Progress

Thus far I have made a simple neural net to understand how neural networks work and learn. A neural network learns by adjusting synaptic weights across a number of nodes, loosely modeled on neurons in a brain. The weights are adjusted through backpropagation, running the code through thousands of iterations.

Now I am starting to work with TensorFlow, a Python library for machine learning.
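As a first step, here is a minimal sketch of what the same single sigmoid neuron might look like in TensorFlow using its Keras API (my assumption of a starting point, not the final project code):

import numpy as np
import tensorflow as tf

training_set_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
training_set_outputs = np.array([[0], [1], [1], [0]], dtype=float)

# One dense layer with a single unit and a sigmoid activation is
# equivalent to the single neuron built by hand above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(training_set_inputs, training_set_outputs, epochs=10000, verbose=0)

# Test with the same new situation as before.
print(model.predict(np.array([[1.0, 0.0, 0.0]])))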

April 19th, 2018

Background Research

I found a link that helped introduce me to the concept of a neural network: https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1

April 19th, 2018

Description

I will code a Python program that uses AI (namely a neural network) to create music. I have not quite decided what I want my input to be: either compositions or just simple sounds from various instruments. I do not know much about neural networks, and I want to explore how they work.

April 19th, 2018

