<h3>In this post, we discuss a modern challenge of neuroscience: data storage. We examine the shortcomings of current storage formats and propose a new de facto standard: the HDF5 file. We hope to initiate an interesting debate on this important issue and look forward to your feedback.</h3> <p>[[{"type":"media","view_mode":"media_large","fid":"344","attributes":{"alt":"blogpost-jeromesdatastoragepost-a4.0.jpg","class":"media-image Big Data Storage","height":"319","title":"Neuroscience Big Data Storage","width":"521"}}]]</p> <p>Any introductory class in neuroscience starts off with a rough description of the orders of magnitude involved. Among the important numbers to remember is this one: our brains are amazing structures with about <a href="http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons">100 billion neurons</a>. As scientists involved in the study of this organ, we must stay humble in the face of such a daunting task: cracking the neural code is not going to happen overnight.</p> <p>Still, we live in very exciting times: it is now possible to record from many more neurons than ever before. Whereas a decade ago we only had access to a few neurons at a time, it is now within our reach to use calcium imaging to record the activity of <a href="http://www.nature.com/neuro/journal/v16/n3/full/nn.3329.html">thousands of neurons directly from the brain of behaving mice</a>. Inscopix, with its miniature microscope, is at the forefront of this research effort.</p> <p>Since we now have the means to capture so much more information from active neuronal networks, a new challenge comes immediately into play: how do we analyze the data?</p> <p>But first, what exactly is the data we are getting?</p> <p>A small camera chip sitting on top of the microscope records, on each of its pixels, the fluorescence trace from a μm<sup>2</sup> region of the brain. About a million of these pixels collectively monitor, at 20 Hz, the activity of approximately a thousand neurons. Our task is to extract the activity from all these pixels; in effect, we are looking at reducing the data size by a factor of about 1,000.</p> <p>But before diving into dimensionality reduction, long before we extract that small amount of pure gold from large blocks of boring rock, we face an immediate problem: <strong>data storage</strong>.</p> <p>A typical experiment runs for at least 30 min. At 20 Hz and with a million pixels, data piles up very quickly. Assuming 16-bit pixels, each frame costs 2 MB, so every minute we generate roughly <strong>2.4 GB</strong> of uncompressed data. For a typical experiment, we are talking about at least <strong>70 GB</strong> of data. In just a few years, neuroscience has suddenly arrived at the forefront of the big data problem, and the truth is: it's not going to get better.</p>
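<p>For readers who want to check these numbers, here is a quick back-of-the-envelope calculation in Matlab; the sensor size, bit depth, frame rate, and session length are simply the round figures assumed above, not specifications of any particular system:</p> <pre>
% Back-of-the-envelope data-rate estimate (round numbers from the text above)
pixelsPerFrame = 1e6;    % ~1 megapixel sensor
bytesPerPixel  = 2;      % 16-bit storage
frameRate      = 20;     % Hz
minutes        = 30;     % typical session length

bytesPerMinute  = pixelsPerFrame * bytesPerPixel * frameRate * 60;  % ~2.4e9
bytesPerSession = bytesPerMinute * minutes;                         % ~7.2e10
fprintf('%.1f GB per minute, %.0f GB per %d-minute session\n', ...
        bytesPerMinute/1e9, bytesPerSession/1e9, minutes);
</pre>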
<h4>THE TIFF FORMAT</h4> <p>The de facto standard for storing movie data in biology has long been the <a href="http://en.wikipedia.org/wiki/Tagged_Image_File_Format">TIFF format</a>. Created in the mid-1980s, it was originally intended to store 1-bit datasets from scanners. Regularly upgraded, it now shines through its flexibility: grayscale as well as RGB images can be saved, as can floating-point data. It has long been the go-to choice for storing scientific instrument data.</p> <p>Although many camera software packages still save movies as multiple TIFF files, the TIFF format can handle multiple images in a single file using so-called <a href="http://www.remotesensing.org/libtiff/man/TIFFReadDirectory.3tiff.html">internal <em>directories</em></a>. This directory system has been used extensively for scientific movies.</p> <p>Still, one important limitation of the format is that each file is limited to <strong>4 GB</strong>, because of the 32-bit offsets in the header of each image. That is why, to record longer movies (anything beyond roughly a minute and a half, according to our previous calculation), one has to break the data into multiple TIFF files.</p> <p>Although 8-, 16-, and even 32-bit images can be stored in TIFF files, most data acquired by modern scientific cameras are not truly 16 bits but rather 12 bits. Even for cameras that can achieve such resolution, a high bit depth is not necessary in the particular case of one-photon calcium imaging, where the baseline signal is high and typical changes are only 1 to 10% of the initial value. In practice, saving directly in 12 bits would therefore provide an immediate 25% reduction in file size, which is highly desirable when dealing with such large datasets.</p> <p>Last but not least, to provide its flexibility, the TIFF format does not enforce a common image format across the <em>directories</em> of a single file: the first image may well be grayscale while the second is RGB color. To make this possible, developers gave each directory enough flexibility to store any image type. As a result, in the current implementation, it is not possible to jump directly to an arbitrary image in a file: one has to walk through all the previous directories to find the location of the i-th image. For movies, this carries both a storage cost (a full header for every frame) and a computational cost (random access to a given frame is slow).</p>
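<p>To make that access cost concrete, here is a minimal sketch of how a multi-page TIFF movie is typically loaded in Matlab (the file name is hypothetical). The reader still has to traverse the directory chain, which is why pre-fetching the directory information with imfinfo is worthwhile:</p> <pre>
% Hypothetical example: loading a multi-page TIFF movie into memory in Matlab
fname   = 'movie.tif';                 % assumed file name
info    = imfinfo(fname);              % one struct per internal directory
nFrames = numel(info);
stack   = zeros(info(1).Height, info(1).Width, nFrames, 'uint16');
for k = 1:nFrames
    % Passing 'Info' avoids re-scanning the directory chain for every frame;
    % without it, locating frame k means walking past the k-1 previous directories.
    stack(:, :, k) = imread(fname, 'Index', k, 'Info', info);
end
</pre>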
<h4>EXPLORING NEW AVENUES FOR STORAGE</h4> <p>Over the last few years, I slowly became convinced that we needed something new to store movie data in neuroscience, so I started exploring avenues that would scale well to much larger datasets.</p> <p>One immediate option would be to use standard video file formats such as AVI. However, most of these formats have been tailored to ordinary cinema footage and, most of the time, offer only 8 bits per color channel. They are simply not designed for scientific data, for which we need tighter control and a lot of flexibility in how we access the data.</p> <p>I became acquainted with HDF5 files while reading a Nature Methods paper that proposed exactly that: <a href="http://www.nature.com/nmeth/journal/v8/n6/full/nmeth.1600.html">a new standard for data storage</a>.</p> <h4>THE HDF5 FILE FORMAT</h4> <p>HDF5, or <a href="http://en.wikipedia.org/wiki/Hierarchical_Data_Format">Hierarchical Data Format</a> version 5, has surprisingly been around for a long time. Created nearly two decades ago at the <a href="http://www.ncsa.illinois.edu/">National Center for Supercomputing Applications</a>, it has been used extensively by NASA for some of its large datasets.</p> <p>[[{"type":"media","view_mode":"media_large","fid":"333","attributes":{"alt":"blogpost_-_hdf5structure-_-_z1.0.jpg","class":"media-image","height":"281","title":"hdf5structure","width":"500"}}]]</p> <p><em>A sample HDF5 file with groups to provide structure, datasets, raster images, and a palette – Source: <a href="https://www.llnl.gov/str/April03/Cook.html">https://www.llnl.gov/str/April03/Cook.html</a></em></p> <p>HDF5 takes the idea of internal directories and pushes it to a new level. Within a single file, you find an entire directory system, just as you would on a computer hard drive. Each branch of that directory tree can store multiple datasets, so you can keep not only multiple images but multiple movies as well. Just as with your hard drive, you have complete freedom over how you organize the inside of an HDF5 file (see figure).</p> <p>Remarkably, HDF5 files have no real size limitation and scale very well. They also provide very fast access to any location in each dataset. Moreover, the HDF5 engine is extremely flexible and lets you choose among many data types and compression schemes.</p> <p>Altogether, it seems that HDF5 files fulfill all of the required criteria. The only caveat, it would seem, is that few groups currently use this format in neuroscience, so it is not a standard yet. However, after doing a little more research, I started to wonder whether this was really true.</p> <p>Indeed, if there is one programming language dominating the neuroscience landscape, it is Matlab. The standard storage file in Matlab is the .mat file, and its latest version (v7.3) has been <a href="http://www.mathworks.com/help/matlab/import_export/mat-file-versions.html">HDF5-based since 2006</a>!</p> <p>So one could say that HDF5 is, in fact, already the de facto standard in neuroscience.</p> <p>Who would have guessed?</p> <h4>USING HDF5 FILES</h4> <p>The current implementation of the HDF5 format in .mat files adds a little overhead, so I have recently been accessing the HDF5 libraries more directly (for example, through the hdf5write and hdf5read functions in Matlab). When dealing with large datasets in Matlab, I recommend bypassing the save and load functions; they become terribly inefficient beyond the GB range. In the meantime, there are a number of excellent wrappers that ease access to HDF5 files, such as <a href="http://www.mathworks.com/matlabcentral/fileexchange/31703-hdf5-diskmap-class/content/hdf5prop.m">hdf5prop</a>.</p> <p>There are also a number of excellent libraries in <a href="http://www.h5py.org/">Python</a> and <a href="http://ftp.hdfgroup.org/HDF5/doc/cpplus_RM/index.html">C++</a>. In the particular case of movies, a <a href="http://lmb.informatik.uni-freiburg.de/resources/opensource/imagej_plugins/hdf5.html">plugin exists for ImageJ</a> that can directly access movie data stored in HDF5, so you don't need any programming knowledge to get going.</p>
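<p>As an illustration, here is a minimal sketch of writing and reading a movie through Matlab's built-in HDF5 functions (h5create, h5write, h5read). The file name, dataset path, chunk size, and compression level are assumptions made for the example, not a proposed standard:</p> <pre>
% Minimal sketch: storing an imaging movie with Matlab's HDF5 functions
fname = 'experiment.h5';              % hypothetical file name
dset  = '/session1/movie';            % HDF5 groups act like internal folders
nRows = 1024; nCols = 1024; nFrames = 1200;      % e.g. one minute at 20 Hz

% Create a chunked, compressed 16-bit dataset, one frame per chunk.
h5create(fname, dset, [nRows nCols nFrames], ...
         'Datatype', 'uint16', 'ChunkSize', [nRows nCols 1], 'Deflate', 4);

% Write frames as they become available (placeholder data here).
for k = 1:5
    frame = uint16(randi(4095, nRows, nCols));   % stand-in for an acquired 12-bit frame
    h5write(fname, dset, frame, [1 1 k], [nRows nCols 1]);
end

% Random access: pull out a single frame without reading the whole movie.
frame3 = h5read(fname, dset, [1 1 3], [nRows nCols 1]);
</pre> <p>The same file can then be opened from Python (h5py), C++, or the ImageJ plugin mentioned above. And if you prefer to stay with save and load, passing the '-v7.3' flag to save produces the HDF5-based .mat format discussed earlier.</p>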
<h4>STANDARDIZING HDF5</h4> <p>As usual, freedom comes at a cost. There are so many ways to store data in an HDF5 file that, to build a standard, we will first need to agree on how to set one. In that regard, the work by <a href="http://www.nature.com/nmeth/journal/v8/n6/full/nmeth.1600.html">Millard et al.</a> could be a good starting point, so I suggest you take a look at their proposed standard and come back here with new ideas.</p> <p>How do you see the future standard for storage in neuroscience?</p> <p><em>The author's views are entirely his or her own and may not reflect the views of Inscopix.</em></p>