
I’ve explored Nilearn, a Python module that provides simple interfaces for applying machine learning to neuroimaging data. It lets me produce clear visualizations of both raw data and processed results, and it is built on scikit-learn, the popular Python machine-learning library.
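As a quick illustration (a minimal sketch, not the full analysis), Nilearn bundles a fetcher for the very dataset discussed below; this assumes a recent Nilearn installation where `fetch_miyawaki2008` is available:

```python
# Minimal sketch: download the Miyawaki et al. (2008) dataset bundled with
# Nilearn and plot the mean image of the first functional run as a sanity check.
from nilearn import datasets, image, plotting

miyawaki = datasets.fetch_miyawaki2008()  # downloads and caches the data

mean_epi = image.mean_img(miyawaki.func[0])  # average the run over time
plotting.plot_epi(mean_epi, title="Mean EPI, first run")
plotting.show()
```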
In my fMRI project, I re-create the methods of Miyawaki et al. (2008) for inferring visual stimuli from brain activity. In their experiment, several series of 10×10 binary images were presented to two subjects while activity in the visual cortex was recorded. In the original paper, the training set is composed of random images (with balanced black and white pixels), while the testing set is composed of structured images containing geometric shapes (square, cross…) and letters. I will use the training set with cross-validation to score performance on unseen data. I can examine decoding (reconstructing visual stimuli from fMRI) and encoding (predicting fMRI data from descriptors of visual stimuli), which lets me look at the relation between stimulus pixels and brain voxels from both angles. The approach uses a support vector classifier and logistic ridge regression as prediction functions in both the decoding and encoding analyses.
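To make the two directions concrete, here is a hedged sketch over a single run of the Nilearn copy of the dataset (continuing from `miyawaki` above). The per-pixel and per-voxel loops, the haemodynamic-delay shift, and the exact estimators of the full pipeline are simplified, and the assumption that rest volumes are flagged with -1 in the label files follows Nilearn's example code:

```python
import numpy as np
from nilearn.maskers import NiftiMasker
from sklearn.svm import SVC
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# fMRI side: mask to the visual cortex, detrend and standardize each voxel.
masker = NiftiMasker(mask_img=miyawaki.mask, detrend=True, standardize=True)
X = masker.fit_transform(miyawaki.func[0])        # shape (n_volumes, n_voxels)

# Stimulus side: one row of 100 binary pixels (a flattened 10x10 image) per
# volume; rest volumes are assumed flagged with -1, so drop them.
stimuli = np.loadtxt(miyawaki.label[0], dtype=int, delimiter=",")
keep = stimuli[:, 0] >= 0
X, stimuli = X[keep], stimuli[keep]

# Decoding: predict one stimulus pixel from all voxels with a linear SVC,
# scored by cross-validation (the full analysis loops over all 100 pixels).
decoder_scores = cross_val_score(SVC(kernel="linear"), X, stimuli[:, 0], cv=5)
print("decoding accuracy, pixel 0:", decoder_scores.mean())

# Encoding: predict one voxel's activity from all stimulus pixels with a
# ridge regression (standing in here for the logistic ridge mentioned above).
encoder_scores = cross_val_score(Ridge(alpha=100.0), stimuli.astype(float),
                                 X[:, 0], cv=5)
print("encoding R^2, voxel 0:", encoder_scores.mean())
```

Scoring both estimators with the same cross-validation splits keeps the decoding and encoding results directly comparable on held-out data.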
This June, I’ll begin work in a neuroscience lab where I will use computational methods to study the zebrafish brain. I hope to cultivate more skills as part of my self-driven passion for studying neuroscience from a computational perspective. The dynamic interplay of experimental and theoretical models in evaluating and re-evaluating hypotheses is fascinating.
References:
Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M.-A., Morito, Y., Tanabe, H. C., et al. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60, 915–929. doi: 10.1016/j.neuron.2008.11.004