Modern machine learning (ML) methods can be broadly divided into classifiers, algorithms that help us understand our own data, and generators, algorithms that produce new data using our classified data as a basis. I explored both families of ML algorithms using two data sets: 1. surveillance images from the Tokyo train system, and 2. EEG data recorded with a NeuroSky headset.
For image generation, I used TensorFlow's Inception network example to generate eerie "dreams" from surveillance images of subway trains in Tokyo, Japan. Partly following this script online, I first optimized pixels against successive feature extractors in the network, which form multi-scale representations of image features. Then, to push the generative component toward a more "dream-like" image not based solely on texture, a mask is applied to the gradients of each objective function, so that optimization proceeds only where the mask allows. I first used a circular mask, and then used one surveillance image as the mask while generating with respect to a different image, producing a "dream" guided by its previous incarnation in the data set (my own idea and code). Note that this version (the leg-photo example) has the most characteristic texture: because the mask is another image rather than an arbitrary pattern, the result is informed by that image.
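The mask-gating step can be sketched independently of the full Inception pipeline. Below is a minimal NumPy sketch in which a toy objective stands in for the network's feature activations (the function names and the objective are my own illustrations, not from the original script); the key move is that the gradient is multiplied by the mask before the update, so a circular mask, or a second image used as a mask, shapes where the "dream" grows.

```python
import numpy as np

def circular_mask(h, w, radius_frac=0.4):
    """Boolean mask that is True inside a centered circle."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = radius_frac * min(h, w)
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

def masked_dream_step(img, mask, lr=0.1):
    """One gradient-ascent step on a toy objective L = 0.5 * sum(img**2),
    whose gradient w.r.t. the image is the image itself. The mask gates
    the gradient so only the masked pixels are 'dreamed'."""
    grad = img / (np.abs(img).mean() + 1e-8)  # normalize, as in DeepDream
    return img + lr * grad * mask

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
mask = circular_mask(64, 64)
out = masked_dream_step(img, mask)
```

With an image-as-mask variant, `mask` would instead be a second image rescaled to [0, 1], so the gradient is weighted rather than hard-gated.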
Finally, following this tutorial, I created a video illustrating texture synthesis based on one of the images in my data set, watching it evolve from parameters in the Inception network by feeding zoomed-in versions of the output back into the network. These investigations demonstrate the ability to generate progressively episodic versions of surveillance images through masking and optimization against the data set, evoking an eerie sense of what it could mean for machines to "dream" on the subway, over and over.
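The feedback loop behind the zooming video can be sketched as an alternation of a dream step and a center zoom, with each output fed back in as the next input. The sketch below uses NumPy only, with a nearest-neighbor zoom and the same toy gradient step as above standing in for the real network pass (all names here are my own, not from the tutorial).

```python
import numpy as np

def center_zoom(img, scale=1.05):
    """Crop the center 1/scale of the image and resize it back to the
    original size with nearest-neighbor indexing (a stand-in for a
    proper image resampler)."""
    h, w = img.shape
    ch, cw = int(h / scale), int(w / scale)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[np.ix_(rows, cols)]

def dream_zoom_frames(img, n_frames=8, lr=0.05):
    """Alternate a toy 'dream' step with a zoom, feeding each output
    back in as the next input, as in the zooming-video recipe."""
    frames = []
    for _ in range(n_frames):
        img = img + lr * img / (np.abs(img).mean() + 1e-8)  # toy dream step
        img = center_zoom(img)
        frames.append(img)
    return frames

rng = np.random.default_rng(1)
frames = dream_zoom_frames(rng.standard_normal((64, 64)))
```

Writing the frames out in order then yields the endless-zoom effect seen in the video.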
For classification of EEG data, I used audio t-SNE to categorize slices of EEG data recorded from NeuroSky headsets, breaking a long recording into half-second slices during which the subject viewed different sections of videos intended to evoke different emotions such as fun, melancholy, love, and horror (the same clips used in the Feed section of this project). After converting the EEG data into an audio-readable form using MATLAB, I used librosa to load the file into Google Colaboratory and fit a t-SNE model to the single file, treating each half-second segment as a separate input, because different brain areas would be active during each temporal segment. The points are then mapped to 2D space, shown for the EEG recording files of each video. Note that characteristic activations can be seen at some temporal segments, reflecting heightened attention at some points and lowered attention at others. The code for running t-SNE is adapted from Gene Kogan's guide on audio t-SNE. Here's the Google Colaboratory notebook.
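The slicing-and-embedding pipeline can be sketched as follows. This is a self-contained stand-in, not the notebook's code: the sample rate, the FFT-magnitude features (the original guide uses librosa MFCC-style features), and the synthetic signal are all my own assumptions for illustration; only the structure is the same: cut the recording into half-second slices, featurize each slice, and embed the slices into 2D with t-SNE.

```python
import numpy as np
from sklearn.manifold import TSNE

SR = 1000          # assumed sample rate of the audio-converted EEG file
SLICE_SEC = 0.5    # half-second slices, as in the writeup

def slice_features(signal, sr=SR, slice_sec=SLICE_SEC, n_bins=16):
    """Cut the signal into half-second slices and summarize each slice
    with the magnitudes of its first FFT bins (a simple stand-in for
    the MFCC features used in the audio t-SNE guide)."""
    step = int(sr * slice_sec)
    n = len(signal) // step
    slices = signal[:n * step].reshape(n, step)
    return np.abs(np.fft.rfft(slices, axis=1))[:, :n_bins]

rng = np.random.default_rng(2)
eeg = rng.standard_normal(SR * 10)   # fake 10 s recording -> 20 slices
feats = slice_features(eeg)
# Each half-second slice becomes one point in the 2D map.
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(feats)
```

Plotting `emb` colored by time (or by which video clip was playing) then gives the 2D maps described above.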