Two papers, one brain (image)

Way back in 2011, I collected 13 adult male dragon lizards. At the time they were considered tawny dragons (Ctenophorus decresii). Since then, the tawny dragon has been the subject of intense study, including genetic study of its populations. As a result, we now know that the tawny dragon isn't a single species after all, but three. Today, what I collected is known as the swift dragon (Ctenophorus modestus), and unfortunately the two papers I published describing the brain anatomy of those thirteen lizards both refer to them by the wrong species name.

A swift dragon (Ctenophorus modestus), photographed by Tobias Hayashi during fieldwork to collect dragons for my PhD in 2011. At the time, this was considered a tawny dragon (Ctenophorus decresii).

Two papers. The same 13 individuals. How, and why, would someone go through all the trouble of publishing two papers on the same thing? Other than the obvious publish-or-perish, academia-as-an-infinite-hamster-wheel-of-papers reason? Please let me explain.

Early in my PhD I was trying to figure out how to study lizard brains. Luckily, I happened to meet Jeremy Ullmann, who had done his PhD on fish brains and was doing a postdoc segmenting a mouse brain MRI model (more on that in a second). Amongst many other crucial contributions he made to my PhD, Jeremy introduced me to the idea of using MRI to study lizard brains, as he had done with fish and mice. The primary advantage was that you can study the anatomy of the entire brain at once instead of being limited to specific structures of interest, as is the case for more traditional neuroanatomical methods.

A slice through an MRI, fresh out of the scanner, with no post-processing. This is a lateral view of a lizard head facing right. The brain is clearly visible; can you see it?

Here’s the head of a swift dragon - possibly even the same individual - in the same orientation as the MRI above.

A model, in the sense I’m using here, is outwardly similar to an MRI in that it’s a 3D, greyscale image. However, instead of being an image generated by measuring the effect of changing magnetic fields on the spin of hydrogen atoms, a model is a meta-that: an “average” of many MRI images that are all more-or-less of the same thing, in this case brains all from the same species, and sex, of lizard. Because the model is an amalgamation of multiple images, it drastically increases the signal, and reduces the noise, compared to any one of the original images. This results in a much clearer picture of the brain. The model Andrew built is now the basis for two papers describing lizard brain anatomy. This is Jeremy’s fault.
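If you like seeing an idea as code, here's a toy numpy sketch of why averaging helps (all the numbers are made up; a real model-building pipeline also has to carefully register the images to one another first, which this skips entirely):

    import numpy as np

    # Toy example: several noisy "scans" of the same, already-aligned brain.
    rng = np.random.default_rng(0)
    truth = rng.random((64, 64, 64))   # stand-in for the underlying anatomy
    scans = [truth + rng.normal(0.0, 0.2, truth.shape) for _ in range(30)]

    # The "model" is just a voxel-wise average of the aligned scans.
    model = np.mean(scans, axis=0)

    # Averaging N images shrinks the noise by roughly sqrt(N).
    print("single-scan noise:", np.std(scans[0] - truth))
    print("model noise:      ", np.std(model - truth))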

Jeremy, as I said, was working on segmenting a mouse brain model at the time. Let’s talk about segmentation. An MRI image - or model - is made up of voxels. Voxels are 3D pixels, and segmentation is the process of assigning each voxel an identity. In Jeremy’s case (and in my case), the identity is the anatomical region that voxel is part of (cortex, thalamus, basal ganglia, etc). Jeremy’s full-time job, what he had been working on for years, was segmenting certain parts of the mouse brain. Not even the whole thing, just bits of it. A brain image or model segmented into its anatomical regions is called a segmentation atlas. A set of pictures of a brain with the anatomical regions labelled in any fashion is called a brain atlas.
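In code terms, a segmentation is just a second 3D array with the same dimensions as the image, holding an integer region label for each voxel. A minimal sketch (the labels, region names, and voxel size here are all made up for illustration):

    import numpy as np

    # A label field: 0 = background, 1 = "cortex", 2 = "thalamus" (hypothetical).
    labels = np.zeros((64, 64, 64), dtype=np.uint8)
    labels[10:30, 10:30, 10:30] = 1
    labels[40:50, 40:50, 40:50] = 2

    voxel_volume_mm3 = 0.03 ** 3   # e.g. 30-micron isotropic voxels

    # Volume of each region = voxel count x voxel volume.
    counts = np.bincount(labels.ravel().astype(np.int64))
    for region, n in enumerate(counts):
        print(f"label {region}: {n * voxel_volume_mm3:.4f} mm^3")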

My aspiration, which I explained to Jeremy, was to segment the whole lizard brain model. When I started my PhD, there hadn’t been a reptile brain atlas published since the early 1990s. The ones that were available, though very useful, were woefully inadequate compared to the modern atlases published for other vertebrate groups. This only added to the difficulty of what I was proposing, and Jeremy, wisely, advised against this plan. A full segmentation atlas was an insurmountable amount of work for a PhD student. At best, I would graduate with a thesis that was entirely just the atlas, and that's if I were able to finish it on time.

So instead I did half the work: I figured out what brain regions were visible on my Ctenophorus decresii/modestus brain model and, instead of the long process of digitally “painting” the brain regions onto the model, I took select images of the model and labelled them. This is the format of a traditional brain atlas, including the ones most in use today. The brain is sliced (digitally in my case) and pictures are taken of each slice. The brain regions visible in each picture are labelled. A version of this atlas was included in my thesis (as one of five chapters, not the whole thing) and the final version was published here.

A digital “slice” through my lizard brain model, with the distinguishable anatomical regions labelled. This is Figure 6 in this paper.

This atlas is enough for most people, as Jeremy astutely advised me it would be. Most people who would need to know brain anatomy don’t work with brains in 3D, they work in 2D. The vast majority of neuroscience is done with brain slices, often looking down a microscope at them, or, an improbably high amount of the time, just holding the brain slice really close to one’s face and squinting at it. To identify brain regions on these slices, what you need is not a 3D image that has been segmented into brain regions, but a series of 2D images you can flip through until you find the image that most closely resembles the slice of the brain you are interested in. So most of the time, someone interested in lizard brain anatomy will find my 2018 brain atlas most useful.

A histological section through a lizard brain (left) and the corresponding figure from my brain atlas (right). This is figure 41 from this paper.

The segmentation atlas came into being out of necessity; my own personal necessity. The information in the segmentation atlas is essentially identical to that in my traditional brain atlas because the 2018 atlas was used as the primary source for anatomical information for the segmentation atlas. What’s different, then, is who - or what - the information is geared to. The traditional-style atlas is designed to be easily understood by humans. The segmentation atlas is designed to be easily understood by computers.

The purpose of the segmentation atlas is to measure 3D lizard brain images. I myself have over 400 lizard brain MRIs, some collected during my PhD and others during my time as a postdoc at SickKids Hospital in Toronto. I have exactly zero desire to measure these brain images by hand, and it would be physically impossible for me to measure every brain region in the segmentation atlas by hand. That is the reason the segmentation atlas exists: it can measure all the brain regions, over the entire lizard brain, in all of my MRI images, and it can do so with better precision than a human.*

It can also do this for your 3D lizard brain images, if you have any. Be they MRI, CT, confocal, lightsheet, endocast, etc., you’re far better off using the segmentation atlas to get whatever measurements you want out of your images than you are trying to take them by hand. Save yourself time, frustration, heartache, and repetitive strain injuries. Let the computer take the measurements for you; the segmentation atlas is published here.
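For a flavour of what "letting the computer take the measurements" looks like, here's a hedged Python sketch using nibabel. The filenames are placeholders, and it assumes the atlas labels have already been registered onto each scan, which is the actual hard part:

    import glob
    import numpy as np
    import nibabel as nib   # a common Python library for reading MRI formats

    # Hypothetical: one label field, already registered to every scan.
    atlas = nib.load("segmentation_atlas_labels.nii.gz").get_fdata().astype(int)

    for path in sorted(glob.glob("scans/*.nii.gz")):
        scan = nib.load(path).get_fdata()
        # Mean intensity per labelled region; swap in volumes, etc. as needed.
        sums = np.bincount(atlas.ravel(), weights=scan.ravel())
        counts = np.bincount(atlas.ravel())
        print(path, sums / np.maximum(counts, 1))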

So that’s why I published two different papers using the same lizard brain model. To make it easier for people who want to study lizard brains, including myself, whether they use traditional 2D histological techniques, or more newfangled 3D imaging. These two atlases will hopefully make lizard brain research much less daunting than it was when I started by collecting those tawny dragons - or swift dragons, or whatever - ten years ago.


* Not better accuracy, though. I’m still responsible for quality control.

Measuring angles in Avizo

I've spent a good portion of my PhD working with 3D images in the program Avizo, which is essentially the same as the program Amira. This involved a steep learning curve and a lot of problem solving in the beginning. I've decided to write about the problems I've encountered and how I dealt with them in the hope that this may help the next person learning to use Avizo or Amira do so a bit faster, and with less frustration along the way. These problems will look trivial to the regular or advanced Amira/Avizo user, but they were quite frustrating at the time! These posts assume the reader's familiar with sections 2.1-2.5 of the Amira user's guide, which cover how to load, view and segment an image. My introductory post on this topic is here and the rest of my posts are here.

Recently, a lizard-brain expert who's been helping me identify the brain regions visible in my lizard brain MRI images came to me with a strange problem: she was having a hard time identifying the different brain regions because the brain was not oriented correctly! I was embarrassed not to have noticed previously, but I had to agree: the brain was rotated in the sagittal plane. I quickly e-mailed some MRI experts to ask for help rotating the image into the correct orientation, and I received an odd but obvious response: to what angle do you want your brain rotated?

Well, I had no idea. It's easy to rotate something in Avizo, but not so easy to figure out the angle you've rotated it by. It's also not something I was able to figure out by looking in the Avizo (or Amira) manuals, nor by Googling. Eventually, I figured out a solution. It's not elegant, but it works. Here's what I did.

1. View your image

First, I loaded up my image using an OrthoSlice module, viewing the plane in which I wanted to rotate the image (in this case, the sagittal plane).

2. View your axes

Second, I wanted to display the axes of my original image. This is easy enough: just go to the "View" menu and select "Global Axes".

3. Rotate your image to the correct angle

I used the slower, but more idiot-proof, "rotate" button.

4. Measure the angle.

This is the annoying part. As far as I can tell, there's no way to just ask "how much did I just rotate my image?". However, you can use the measurement tool to measure a 2D angle: trace over the x-axis of your Global Axes, then draw a horizontal line from the apex, as I've illustrated below. That is the angle by which you've rotated your image.
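If you'd rather compute the angle than trace it, the measurement is one line of trigonometry: read off the coordinates of the apex and of a point further along the rotated x-axis, then take an arctangent (the coordinates below are made up):

    import math

    # Hypothetical points read off the viewer: the apex (axis origin) and a
    # point further along the rotated x-axis.
    x0, y0 = 0.0, 0.0
    x1, y1 = 9.2, 3.8

    # Angle between the rotated axis and the horizontal, in degrees.
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    print(f"rotation: {angle:.1f} degrees")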

Postscript

Even though there's no way that I can see to measure how much you've rotated an image directly, you can do the reverse, that is, rotate your image by a desired angle. The rotate module allows you to type in the angle of rotation you want, and it will rotate the image by that amount. Its real purpose is to make cool videos, but it does this too.

Now, if only I could figure out how to export my rotated image as a NIfTI file, I would be all set! If you know how to do that, please please please tell me!
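In the meantime, here's a possible workaround, assuming Avizo will export the rotated volume in some generic format such as a raw binary file: wrap it in a NIfTI header yourself using Python's nibabel. Everything below (filename, dimensions, data type, voxel size) is a placeholder you'd need to match to your own export:

    import numpy as np
    import nibabel as nib

    # Placeholder dimensions and data type; the axis order may need
    # transposing depending on how the volume was written out.
    shape = (256, 256, 180)
    data = np.fromfile("rotated_brain.raw", dtype=np.uint16).reshape(shape)

    # A diagonal affine encoding (made-up) 30-micron isotropic voxels.
    affine = np.diag([0.03, 0.03, 0.03, 1.0])
    nib.save(nib.Nifti1Image(data, affine), "rotated_brain.nii.gz")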

The basics of using the Pool to open and view images and label fields

I've spent a good portion of my PhD working with 3D images in the program Avizo, which is essentially the same as the program Amira. This involved a steep learning curve and a lot of problem solving in the beginning. I've decided to write about the problems I've encountered and how I dealt with them in the hope that this may help the next person learning to use Avizo or Amira do so a bit faster, and with less frustration along the way. These problems will look trivial to the regular or advanced Amira/Avizo user, but they were quite frustrating at the time! These posts assume the reader's familiar with sections 2.1-2.5 of the Amira user's guide, which cover how to load, view and segment an image. My introductory post on this topic is here.

Figuring out the basics of how the "Pool" (in Amira) or the "Project View" (in Avizo) works can be a bit tricky, and it's one of the first things you have to figure out to use these programs. The Pool is where all the data and modules you're working with are visualised. Here's how the Amira User Manual describes the Pool and its contents:

"Amira is a modular and object-oriented software system. Its basic system components are modules and data objects. Modules are used to visualize data objects or to perform some computational operations on them. The components are represented by little icons in the Pool. Icons are connected by lines indicating processing dependencies between the components, i.e., which modules are to be applied to which data objects." - Amira User Manual, page 4

This is basic stuff, but if you're new to Amira/Avizo, confusion arises because nowhere does the manual tell you what any of this stuff looks like. Here's what it looks like in Avizo, and it looks pretty much the same as this in Amira:

A screenshot of the Pool in Avizo, showing data objects and modules connected by blue lines.

So data objects and modules are colourful blobs, connections between data objects and modules are blue lines, and if you look closely, you can see that the data object-end of the blue line is a diamond while the module-end of the blue line is a square. 

The Amira User Manual explains very well how to make a new label field for an image, and how to save the whole thing (image + label field) together as a "project". But what if you have an image and a label field already, and would like to join them? The Amira User Manual, as far as I can tell, doesn't cover this. Here's how to connect two data objects, one of which is a 3D image and the other is a label field. First, open the 3D image:

Then, open the label field:

The data object for the label field appears below the data object for the 3D image, unconnected. Now, click on the white square of the label field's data object, which lets you control what it's connected to. You'll get a drop-down list of the different kinds of connections it can make; select the "ImageData" option. A blue connection line now follows your cursor around; click on the 3D image's data object. The two data objects jump together, indicating that they are now connected. Notice that connections between data objects are displayed differently than connections between a data object and a module.

All this is well and good, but if you've done this, you've noticed that none of it results in you actually visualising anything. Fortunately, the Amira User Manual is very good at explaining how to visualise your data, in general. However, one thing that I think is missing from the manual is how to visualise a label field and the image it's based on at the same time. For this, there's Colorwash, section 2.8.3 in the Amira User Manual. Colorwash is described in the manual as being for the visualisation of two images at once, but it's not clear from that description that it can be used to visualise a label field on top of an image. Technically a label field is just an image dataset like any other, so Colorwash can be used like this. Attach an OrthoSlice module to the 3D image data object, then attach a Colorwash module to the OrthoSlice module. Click on the white box in the Colorwash module, select "Data" from the drop-down list, and click on the label field data object. Notice that connections between modules are displayed the same way as connections between data objects.
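Since a label field really is just an integer image, the Colorwash idea is easy to mimic outside Avizo, too. Here's a small Python/matplotlib sketch of the same trick, overlaying made-up labels on a made-up slice, purely to illustrate the concept:

    import numpy as np
    import matplotlib.pyplot as plt

    # Toy stand-ins for one slice of a 3D image and its label field.
    rng = np.random.default_rng(1)
    image = rng.random((128, 128))
    labels = np.zeros((128, 128), dtype=int)
    labels[30:60, 30:60] = 1
    labels[70:100, 70:100] = 2

    plt.imshow(image, cmap="gray")
    # Mask out label 0 so only labelled regions get "washed" with colour.
    plt.imshow(np.ma.masked_equal(labels, 0), cmap="tab10", alpha=0.4)
    plt.axis("off")
    plt.show()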

There! Quick and easy ways to load and visualise images and their label fields.

Some additional comments on segmentation & hot keys

I've spent a good portion of my PhD working with 3D images in the program Avizo, which is essentially the same as the program Amira. This involved a steep learning curve and a lot of problem solving in the beginning. I've decided to write about the problems I've encountered and how I dealt with them in the hope that this may help the next person learning to use Avizo or Amira do so a bit faster, and with less frustration along the way. These problems will look trivial to the regular or advanced Amira/Avizo user, but they were quite frustrating at the time! These posts assume the reader's familiar with sections 2.1-2.5 of the Amira user's guide, which cover how to load, view and segment an image. My introductory post on this topic is here.

In this post I'm going to talk about some additional issues with the hot keys and with adding and subtracting to materials in the Image Segmentation Editor.

An additional hot key.

Here is a list I downloaded of the hot keys that you can use with the Image Segmentation Editor to really speed up segmentation. I've discussed them previously. There's one omission from the list (that I've found, anyway). Keys 1-9 cover the different tools, starting with the Brush tool, but what about the Pick & Move tool? It turns out that, even though it's not on the list, "0" switches ("toggles") you to the Pick & Move tool! I've updated my list accordingly.

The add and subtract hot keys are finicky

The most straightforward and least efficient way to add a region of an image to a material is to select that region using whichever tool is most appropriate (in these examples I use the brush tool, which is the most straightforward tool, but again the least efficient) and then click on the Add button in the Selection section of the Segmentation Editor. Here's what that looks like:

There are two hot keys that are supposed to do the same thing: "A" and "Ctrl +". The trick is, neither works exactly like the Add button, and they don't work like each other, either. Once I figured out the differences between them this became quite useful, because now I can use whichever Add function best suits my needs at the time. But it was confusing to figure out.

Pressing "Ctrl +" adds the selected part of the image to the material, but leaves it selected. You must then remember to unselect the region manually using the Clear button in the Selection section of the Segmentation Editor.

That means that, if you then select a region that you want to add to a different material, say Lobe1, and then go to add your new selection to Lobe1, you will add both selections to Lobe1 unless you remember to unselect your first selection.

Using the "A" key to add a selection to a material is even trickier. It only works if you've already added at least one selection to the material since editing any other material. So, practically, the "A" key will only add a selection to a material if the last thing you did was add (or subtract) a selection to that same material. Furthermore, where that previous selection was matters, and depends on your current settings. In the Selection section of the Segmentation Editor, if the "Volume" button is selected, then the previous selection you added could have been on any slice.

However, if the "Current Slice" button is selected in the Selection section of the Segmentation Editor, then the previous selection added to the current material (in this case, Lobe2) must have been on the current slice for the "A" hot key to work.

The "-" hot key for subtracting a selection from the current material has the same sort of funny restrictions as the "A" hot key.

If you have any suggestions, feedback, or questions about Amira/Avizo, please let me know.