Technology and the 3 R’s

A contemporary approach to the refinement of animal research highlights the importance of technology and the 3 R’s:

“Employing new in vivo technologies that can benefit animal welfare and science including methods to minimise pain and distress as well as to deliver enhancements in animal care, housing, handling, training and use”1

This definition fits well with my experience of animals in research – the less you stress your animals while running experiments, the better your data will be, and technology is an important way to reduce your impact on the animal whilst also improving your metrics.

My history with technology

As an example, for my PhD I wanted to investigate control of the cardiovascular system in a transgenic mouse model. I won’t go into details here, but suffice to say it had a weird hypermetabolic phenotype which we thought would also impact the sympathetic control of the cardiovascular system. But how best to go about measuring this?

Let’s say you went to your GP with a suspected blood pressure issue, what would the doctor do? Most likely sit you down and measure your blood pressure with a sphygmomanometer (the cuff that goes on your arm and inflates) and your heart rate by counting beats. The equivalent was actually the first method I tried in my mice – we had a mouse-sized plethysmograph that works on the tail of the animal; it also comes with tubes to hold the mice steady while performing the experiment.

You don’t need to be a scientific researcher to realise that confining a mouse in a restricted holder like this will stress them out, even after repeated sessions of acclimatisation. And what happens when you’re stressed? Increased heart rate and blood pressure, which by definition makes this technique less than optimal.

However, it is possible to get decent data from such a system, so long as you bear in mind the limitations when planning studies and drawing conclusions. As it happens, the transgenic mice I used in this study were much smaller than the wildtypes, and as such I was unable to get reliable readings.

But, we really wanted to record blood pressure in this mouse model, so the next step was to attempt a more invasive method to get a direct reading of blood pressure. This is possible, although technically very challenging in mice, by inserting a thin plastic tube into a major blood vessel and getting a direct readout of the pressure from inside the artery.

Obviously, you can’t go inserting a catheter into the artery of an awake mouse, so I anaesthetised my transgenics and learned how to insert the blood pressure catheter into the mouse’s carotid artery. We did get lovely blood pressure readings from this study, with the predictable caveat that it was performed in anaesthetised animals, and there aren’t many things in this world more likely to impact your cardiovascular system than being anaesthetised!

As it happens, we used a certain (old-school, not used in humans) anaesthetic known to have minimal impacts on the cardiovascular system. However, we knew we wouldn’t be able to publish the results without getting some kind of readings in an awake animal. This is where technology comes into the story, in the form of implantable telemetry.

Telemetry, while a great technological solution, is also technically quite challenging, as well as very expensive. As such, and especially given that I was naïve to these methods, I opted for the easier biopotential transmitter to record ECG. Once I’d figured the experiments out and was able to get good heart rate recordings in awake, freely moving mice, they formed the pinnacle of my PhD, and enabled us to publish the results.

In an ideal world, we would have used blood pressure telemetry (heart rate alone can give ambiguous results), but I think we made the correct decision at the time. One of my colleagues recently used the blood pressure telemeters, and had a terrible time of it – they’re just that much more difficult, especially the surgery.

Technology is important

Anyway, my takeaway message today relates to the importance of technology and the 3 R’s for minimising the stress and harm done to animals in your experiments while simultaneously maximising the quality and impact of the data you produce. I recently submitted a grant application to the NC3Rs with exactly this stated goal – to use my fibre-free optogenetics technology in vivo, to show that it has a marked benefit to animals in optogenetics studies, leading to a refinement, as outlined in the 3 R’s.

1. Clark Br J Nutr 120(S1), S1-S7 (2018) The 3Rs in research: a contemporary approach to replacement, reduction and refinement

Feeling Warm and Fuzzy

I have previously talked about developing a touch-free timer for use in surgery. The goal was to better enable a single researcher to maintain sterility during animal surgeries. I really think this is a genuine unmet need in the research world, and widespread adoption of touch-free surgery kit would be extremely beneficial, both to the researchers and to the animals.

Anyway, with the plan to expand my touch-free surgery range, I figured the next piece of kit should be a heat mat for keeping rodents warm in surgery. And again, I wanted something that can be controlled by touch-free sensors. Helpfully, Pi Hut sell a small, flexible heating pad:

Looking at the specs, it draws ~1 A of current, which is far more than we can safely supply from an Arduino digital pin. To switch that kind of load, we make use of a component called a MOSFET, a type of transistor that can switch or amplify electrical signals. A MOSFET lets you use a small digital signal (eg. an Arduino output pin) to switch a separate, higher-power circuit (eg. the fully powered heat mat).

Therefore, using a MOSFET, I can control the power going to the heat mat by the digital output of the Arduino. I’ve mentioned pulse width modulation (PWM) before, and it is perfectly suited to this application. PWM will let me digitally control the amount of power going through the heat mat. And, best of all, because it’s digitally controlled, I can shift the PWM up/down with IR proximity sensors.

But, how to display the power going through the heat mat? For this, I again turned to Pi Hut, who sell a 10-segment LED bar:

Each LED in the bar is individually controlled, which means that I can set the Arduino to display an indication of the power going through the heat mat, on a scale of 1-10. Bringing it all together in a 3D printed housing, I have power up and power down proximity sensors, a power indicator bar, and a flexible heat mat that warms quickly to the extent determined by the user:

Touch-free heat mat for keeping rodents warm in surgery.
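The control logic is simple enough to sketch in Python (the 10-step power scale matching the 10-segment bar is from the text; the clamping behaviour and 8-bit PWM mapping are my illustrative assumptions, not the actual firmware):

```python
# Illustrative sketch of the touch-free heat mat control logic.
# Power is held as a level from 0-10; each IR proximity trigger
# steps it up or down, the MOSFET gate gets a PWM duty cycle,
# and the LED bar displays the current level.

MAX_LEVEL = 10  # assumed 10-step scale to match the 10-segment bar

def step_level(level: int, direction: int) -> int:
    """Move the power level up (+1) or down (-1), clamped to 0-10."""
    return max(0, min(MAX_LEVEL, level + direction))

def pwm_duty(level: int) -> int:
    """Convert a 0-10 power level to an 8-bit PWM duty (0-255),
    as you would pass to Arduino's analogWrite()."""
    return round(level * 255 / MAX_LEVEL)

def led_bar(level: int) -> list[bool]:
    """Which of the 10 LED segments should be lit."""
    return [i < level for i in range(MAX_LEVEL)]

level = 0
level = step_level(level, +1)   # "power up" sensor triggered
level = step_level(level, +1)
print(level, pwm_duty(level), sum(led_bar(level)))  # 2 51 2
```

The clamping means waving at the "power down" sensor when the mat is already off does nothing, which is the behaviour you want mid-surgery.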

I have used this heat mat in surgery myself, and it worked really well. It heated up super quickly and I could adjust the power of the heat mat to suit the temperature needed by the mouse. This piece of kit is indispensable for keeping rodents warm in surgery.

The one thing that I think is missing is an actual reading of the mouse’s temperature – I kept having to feel the surgery bed to check the temperature, which kind of defeats the purpose of being touch-free.

So, my next plan for this piece of kit is to add in a temperature sensor (whether a standalone one or one that runs through the Arduino, I have yet to figure out). Stay tuned for updates.

How Long is Bright Enough?

Last blog post I had a revelation about the best numerical aperture to use for in vivo implanted optical fibres. Today, as part of my in-depth study planning, I’ll be investigating the best opto flash time. My default has always been 10 ms, because a) it seems to be what most others use and b) it’s always worked well for me.

However, I like to be sure, and it never hurts to optimise your methodology. But where to start? I’ve mentioned in the past that the EC50 of ChR2 is 1 mW/mm2. However, this figure on its own is misleading, as it doesn’t take into account the duration of illumination.

Power, energy and time

The important point here is that the watt (W) is a unit of power, which is energy (joules) divided by time (seconds). And the thing that actually determines activation of the opsin is the total energy it is exposed to. What this means is that, in principle, you could have wildly differing power outputs activating ChR2 to the same extent, so long as you adjusted the duration of illumination accordingly.

If we think of a typical in vivo light flash of 10 mW for 10 ms from a 200 µm fibre, we can calculate the energy emitted in this flash with the equation Power (W) = Energy (J)/Time (s):

Energy (J) = Power (W) x Time (s)
           = 0.01 W x 0.01 s
           = 0.0001 J

So we can say that 0.0001 J (or 100 µJ) of 470 nm light is enough energy from a 200 µm diameter fibre to robustly activate ChR2 in the brain in an experimental setting.

Low opto power

Now let’s say we could only produce 100 µW from our fibre (100-fold less than in the previous example). We could theoretically activate ChR2 by adjusting the illumination time accordingly:

Time (s) = Energy (J) / Power (W)
         = 0.0001 J / 0.0001 W
         = 1 s

What this means is that if we had a pitifully weak light source, we could still activate ChR2 – although I’m not sure how useful 1 Hz neuronal stimulation would be biologically. However, there is a way to make this dim level of illumination biologically relevant, as Anpilov et al. did in their recent wireless opto study1. They used a stabilised step-function opsin (SSFO), which acts more like a toggle switch – a single activation turns it on for 30 mins or so.

Fast opto flashing

We can also look in the other direction, power wise. Let’s say you were interested in making neurones fire at 100 Hz. To maintain a 10 % duty cycle (to allow the neurones to recover electrically and to limit tissue warming), we might want a 1 ms light flash, and we could calculate the required optical power like this:

Power (W) = Energy (J) / Time (s)
          = 0.0001 J / 0.001 s
          = 0.1 W

So, to drive a fast-frequency neurone like this with an equivalently robust activation of ChR2, we would need to be able to produce 100 mW out the end of a 200 µm fibre, which would be possible with a laser system. A quick note: 100 mW is actually a lot of light power to pump into a mouse’s brain. So, I would not advise aiming that high. I would worry about heating or damaging the tissue, so better to limit yourself to 15 mW or so, and validate your experiment accordingly.
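The three worked examples above are all rearrangements of the same relationship, which can be sketched as:

```python
# Power (W) = Energy (J) / Time (s), rearranged three ways.
# The values below reproduce the worked examples in the text.

def energy_j(power_w: float, time_s: float) -> float:
    """Energy delivered by a flash of given power and duration."""
    return power_w * time_s

def time_s(energy: float, power_w: float) -> float:
    """Illumination time needed to deliver a given energy at a given power."""
    return energy / power_w

def power_w(energy: float, duration_s: float) -> float:
    """Power needed to deliver a given energy in a given flash duration."""
    return energy / duration_s

# 10 mW for 10 ms -> 100 µJ, the reference dose for robust ChR2 activation
dose = energy_j(0.010, 0.010)       # 0.0001 J (100 µJ)
# How long must a 100 µW source stay on to deliver that dose?
t = time_s(dose, 0.0001)            # 1 s
# What power delivers the dose in a 1 ms flash (for 100 Hz stimulation)?
p = power_w(dose, 0.001)            # 0.1 W (100 mW)
print(dose, t, p)
```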

Measured opto flash times

Anyway, back to my planned experiment. The question was: do I need my full 10 ms flash time to produce the firing I want? A recent paper by Herman et al. investigated the silencing of ChR2-expressing neurones at higher light exposures2. It includes a nice overall picture of light pulse duration-dependent spike probabilities in a variety of neurones (Figure 1).

Firing dynamics in response to varying light pulse duration.

What they find, flashing various neurone types at 20 Hz, is that light pulses of 5 or 10 ms give increasing spike probabilities up to 95 – 100 % depending on the neurone type. Then at on-times of 25 ms or longer, the spiking fidelity drops in all neurone types except for fast-spiking neurones in the cortex. Based on this work, I would suggest that 5-10 ms appears to be optimal across various neurone types; at pulse lengths outside that window, the spiking fidelity falls away.

Right, while 5-10 ms looks like a good time duration, that study was performed at a single light intensity, so only provides a partial answer. However, I found an early paper that investigated the threshold light power needed to stimulate an action potential at various distances from the end of the fibre, across a range of pulse widths (Figure 2)3.

Strength-duration relationships for optogenetic stimulation.

A couple of things are clear from Figure 2:

  1. Longer pulse widths drop the power threshold needed to trigger an action potential.
  2. The threshold power needed to trigger an action potential increases with distance from the fibre tip.

It’s difficult to tell from the tiny scale on this graph, but it looks like 5 ms might just be enough to trigger an action potential at 1 mm from the fibre tip at the ~9 mW power we get from our system. However, this is dependent on other factors, such as the NA of the implanted fibre.

The best opto flash time

My verdict from this investigation is that 5 ms would likely be fine to trigger a response. However, increasing the flash duration to 10 ms increases your likelihood of triggering action potentials without any noticeable drawbacks. So after all that, we come back to 10 ms as the best opto flash time (in my opinion).

1. Anpilov et al. Neuron 107(4), 644-655 (2020) Wireless Optogenetic Stimulation of Oxytocin Neurons in a Semi-natural Setup Dynamically Elevates Both Pro-social and Agonistic Behaviors

2. Herman et al. eLife 3, e01481 (2014) Cell type-specific and time-dependent light exposure contribute to silencing in neurons expressing Channelrhodopsin-2

3. Foutz et al. J Neurophysiol 107, 3235-3245 (2012) Theoretical principles underlying optical stimulation of a channelrhodopsin-2-positive pyramidal neuron

How Numerical is your Aperture?

Planning another optogenetics study, and I needed to cut the optic fibre cannulae ready for implantation. One of the other postdocs in the lab had been super organised and bought in a bunch of implants from Thorlabs at a variety of numerical apertures (thanks Amy). But, which is the best numerical aperture (NA) for my experiment?

I won’t go into details (because I’m not a physicist), but Wikipedia defines the NA of an optical system as “a dimensionless number that characterises the range of angles over which the system can accept or emit light”.

Essentially, as far as we are concerned for fibre optics, the NA is relevant for two things:

  1. The bigger the NA, the more light from the source will travel down the optic fibre – for a laser system, this doesn’t matter much because the coherent light can easily be focused down it, but for an LED, this can make a big difference for how much light is captured by the fibre (rather than scattering away)
  2. It determines how much the light spreads after exiting the fibre (for in vivo opto’s, this will be in the mouse’s brain) – the higher the NA, the greater the cone of light dispersion

So, back to cutting fibres, and I had to decide which ones to use. I normally use the 0.22 NA fibres out of habit, but I have read multiple recommendations to use as high an NA as possible when using an LED system (which is what we have); the idea being to get as much light power as possible into the mouse’s brain, which is important considering LED systems can struggle to be bright enough for in vivo opto’s. Both Prizmatix and Doric suggest using 0.66 NA fibres for LED-connected systems, which is actually higher than the ones we have available from Thorlabs.

To test the light output, I hooked up fibres of different NA’s to our LED optogenetics system, and recorded the light power out the end of the fibre using a light meter, both under constant illumination and during 10 Hz flashing with 10 ms on times (Table 1).

True to form, the higher the NA of a fibre, the more light that is passed down it. Great, so at this point I’d pretty much settled on the 0.50 NA fibre, because it emitted approx. 50 % more power than the 0.22 NA fibre. However, for the sake of completeness, I decided to input the values into Karl Deisseroth’s irradiance predictor, to check how deep I would get good ChR2 activation. This is a useful step when planning placement of your optic fibres.

I plotted the values for all three NA fibres (Figure 1), and I’ve included the threshold level of 1 mW/mm2 that I’ve talked about previously (this is the measured EC50 of ChR2 H134R, which I use as a threshold irradiance to assume good activation).  

Now I’ll be honest, I was surprised by this outcome – despite the lower light output from the lower NA fibres, their irradiance was higher as soon as you go deeper than about 0.2 mm into the tissue. I can only assume this is because the lower NA results in less spread of the light coming out of the fibre – the 0.50 NA fibre remains above the critical 1 mW/mm2 down to about 1.0 mm, whereas the 0.22 NA fibre remains above it to about 1.4 mm.
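You can reproduce this kind of behaviour with a simplified irradiance model of the sort underlying the online predictor (geometric spreading of the emission cone plus tissue scattering, in the style of Aravanis et al. 2007). The fibre radius, tissue refractive index and scattering parameter below are assumed, round-number values for illustration, not our measured ones:

```python
import math

def irradiance_mw_mm2(power_mw: float, na: float, depth_mm: float,
                      fibre_radius_mm: float = 0.1,    # 200 µm diameter fibre
                      n_tissue: float = 1.36,          # assumed refractive index
                      scatter_per_mm: float = 11.2) -> float:  # assumed for mouse brain
    """Irradiance below a fibre tip: the source irradiance reduced by
    geometric spreading of the light cone and by tissue scattering."""
    # Distance to the "virtual source", set by the divergence half-angle
    rho = fibre_radius_mm * math.sqrt((n_tissue / na) ** 2 - 1)
    geometric = rho ** 2 / (depth_mm + rho) ** 2
    scattering = 1.0 / (scatter_per_mm * depth_mm + 1.0)
    source = power_mw / (math.pi * fibre_radius_mm ** 2)
    return source * geometric * scattering

# Give the 0.50 NA fibre ~50 % more output power, as measured;
# it still loses out at depth because its light spreads more:
shallow = (irradiance_mw_mm2(9.0, 0.22, 0.05), irradiance_mw_mm2(13.5, 0.50, 0.05))
deep = (irradiance_mw_mm2(9.0, 0.22, 1.0), irradiance_mw_mm2(13.5, 0.50, 1.0))
print(shallow, deep)
```

With these assumed parameters the 0.50 NA fibre wins very close to the tip, but the 0.22 NA fibre overtakes it within a few hundred microns, consistent with the crossover seen in the plotted curves.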

The answer is simple – I’m going to use the 0.22 NA fibres, because they have the dual benefit of activating ChR2 to a greater depth, and also having lower brightness at the end of the fibre, which means less heating of the tissue and phototoxicity.

How to Track a Mouse

Our old locomotor tracking

One of my projects is investigating a population of neurones that controls mouse locomotor activity and food intake. In the past I have used either implantable telemetry or IR beam break cages to quantify the mice’s movement. But the telemeters, even when they’re functioning well, don’t give particularly good quantification of mouse locomotor activity, which leaves the beam break cages.

For anyone that doesn’t know, these cages are set up to have a couple of IR beams that cross the cage. Whenever the beam is broken (ie. the mouse gets in the way), this is registered by the computer. It’s quite an effective (although crude) method to quantify mouse activity. And it does so completely non-invasively. However, our current IR beam break cages have a number of drawbacks that make them unattractive:

  • They only work with some of the older open cages, and don’t work at all if the mice have any bedding in the cage (it blocks the beams)
  • The beam break cages we have available in the facility (which actually belong to one of the other lecturers, although she is happy for us to use them) are a decade or two old and were built by a previous postdoc – as such they have suffered some degradation over the years and only have partial functionality left

Anyone who reads my blog will already know what I’m about to say – with these issues I’ve raised, I decided to try and build my own set of beam break cages.

Setting up beam breaks

Right, so the first step was to find some IR LED’s and sensors that I could pair across 20-30 cm of a mouse’s cage. I’ve used things like this in the past, so I know you can detect an IR signal using an LED in the ~900 nm range and a phototransistor (Figure 1A).

Luckily, I had some sat around, so I hooked them up to an Arduino, but could only detect the IR signal up to around 5 cm distance. This is obviously not enough, so after some detective work, I found some “IR Beam Break Sensors” from Pi Hut (Figure 1B). If those hadn’t worked, it would have required some more complex electrical engineering – apparently you need to use modulated signals to be sensitive enough to work over multiple metres.

Fortunately, the IR sensors from Pi Hut worked a treat, up to about 40 cm, which is more than enough for my purposes. The next issue was how to fix the sensors in a way that they would remain aligned in pairs across the cage.

Aligning the sensors

For this I turned to my trusted 3D printer. After borrowing an IVC from the animal facility, I figured I could make hanging holders that would hook onto the side ridges (Figure 2).

These worked great, with the only issue that the mice tended to move their bedding around and block the direct beams. A very simple solution to this problem was to use strong neodymium magnets to “pin” the tube/bedding at one end of the cage, out of the way of the sensor beams.

Right, so now I had 2 pairs of sensors successfully attached to each mouse cage, next I needed to actually track the data in some way.

Tracking data using Arduino

It turns out that tallying IR beam crosses is easy peasy using an Arduino. The only annoyance being having to duplicate the code 24 times (ie. 2 sensors each for 12 cages). But, I still need to get the data out of the Arduino. I figured I could either hook up an SD card reader and write the data to a removable card, or hook up to a PC and download the data directly.

As I was already connecting the Arduino to my laptop, I tried that first. A little Google sleuthing found me an open source (ie. free) “terminal” program that will happily log data coming in over a “COM” port, such as the one used by the Arduino. It was actually really easy to set up, and it logs the IR beam break data in a CSV (comma separated values) format that can be opened directly in Excel.

For ease of later data analysis, I made the program log the data in 10 second intervals. However, it will be easy to change that depending on the experimental paradigm, eg. 1 or even 10 minute intervals for longer-term studies over days or weeks.
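The analysis side amounts to tallying time-stamped beam-break events into fixed bins, which can be sketched like this (a hypothetical post-hoc routine over the logged events; the 10 s bin width matches the text, the rest is illustrative):

```python
# Bin time-stamped IR beam-break events into fixed intervals,
# mirroring what ends up in the logged CSV for analysis in Excel.

def bin_beam_breaks(event_times_s: list[float],
                    duration_s: float,
                    bin_s: float = 10.0) -> list[int]:
    """Return a count of beam breaks per interval of bin_s seconds."""
    n_bins = int(duration_s // bin_s)
    counts = [0] * n_bins
    for t in event_times_s:
        i = int(t // bin_s)          # which bin this event falls into
        if 0 <= i < n_bins:
            counts[i] += 1
    return counts

# One minute of recording: a burst of activity in the first 10 s,
# then a couple of crossings later on.
events = [0.5, 1.2, 3.3, 7.9, 25.0, 48.6]
print(bin_beam_breaks(events, duration_s=60.0))  # [4, 0, 1, 0, 1, 0]
```

Switching to 1 or 10 minute intervals for long-term studies is just a change of `bin_s`.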

Just to prove how well the system works, you can see a massive increase in activity following injection of caffeine (Figure 3A). You also get fantastic circadian activity if you record for longer time periods (Figure 3B).

Where to get it from

As always, I am making this system available on my shop, far cheaper than any commercially available system. Obviously I’ll include a copy of the data logging software with instructions of how to use it. Anyone who wants to measure mouse locomotor activity easily and cheaply, check it out.

Edit 5/5/22: I have now uploaded details of how to make this kit to Hackaday, so head over there if you want to try and build it yourself.

One Tiny Step for Man

I’ve been working on the next in my EasyTTL series. Whereas my previous iteration had additional functions and output, this time I had a single goal: make a portable optogenetics TTL driver. This means making it as small as possible and, most importantly, battery powered.

While it is possible to run an Arduino off a battery source, they are pretty big and relatively power hungry. So, I wanted to find a smaller microcontroller to use for this purpose. It is, of course, possible to design a circuit from scratch around a bare microcontroller, but that is a huge amount of effort. I would only want to go to those lengths if I had a good reason, like needing to fit it into a minuscule space, or intending to make hundreds.

Fortunately, others have thought the same, and helpfully produced microcontroller breakout boards. Essentially this puts the chip on a board with easily accessible pins and all the control circuitry you need for easy programming via USB, with power regulation etc. I won’t go into all the available microcontroller boards, there are loads out there.

I picked the Adafruit Trinket (Figure 1), because it is small and can be programmed using Arduino IDE, which means I don’t even need to learn any new programming languages. You can think of it as a tiny Arduino, perfect for making a simple and portable optogenetics TTL driver.

The biggest drawback of the Trinket, or any smaller and more basic microcontroller, is that I lose functions; in particular there are fewer I/O pins to connect my switches and dials to. Whereas the Arduino Uno has 14 digital I/O pins, the Trinket only has 4. Now, I obviously need the TTL output and a switch to turn the flashing on/off. I also like to have an LED indicator of the TTL being switched on, which leaves a single pin to control the flashing frequency, on times etc.

With the restriction of a single available pin to control the flashing, I can put in a toggle switch to allow the user to choose between two stimulation paradigms. I will therefore just program my two “favourites”, ie. those that I see most often in the literature or that I am most likely to use myself in the lab:

  • 10 ms flash on-time; 10 Hz frequency
  • 10 ms flash on-time; 20 Hz frequency for 1 second, then off for 3 seconds
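The two paradigms above boil down to simple on/off schedules, which can be sketched as follows (a simulation of the timing logic rather than the actual Trinket firmware):

```python
# Simulate the two hard-coded TTL paradigms as lists of
# (on, off) millisecond pulse windows.

def continuous_10hz(duration_ms: int, on_ms: int = 10, period_ms: int = 100):
    """10 ms flash every 100 ms (10 Hz), continuously."""
    return [(t, t + on_ms) for t in range(0, duration_ms, period_ms)]

def burst_20hz(duration_ms: int, on_ms: int = 10, period_ms: int = 50,
               burst_ms: int = 1000, cycle_ms: int = 4000):
    """10 ms flash at 20 Hz for 1 s, then off for 3 s, repeating."""
    pulses = []
    for t in range(0, duration_ms, period_ms):
        if t % cycle_ms < burst_ms:      # inside the 1 s burst window
            pulses.append((t, t + on_ms))
    return pulses

print(len(continuous_10hz(1000)))   # 10 flashes in the first second
print(len(burst_20hz(4000)))        # 20 flashes per 4 s cycle
```

On the Trinket itself this reduces to reading the toggle switch and running the matching delay loop, so a single spare pin is all the paradigm selection needs.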

My loyal readers will know about my dislike of 20 Hz and higher frequencies, but as you see them so often in the literature it felt remiss not to include one. So anyway, I programmed the Trinket, connected it to the switches etc., and hooked the output up to an oscilloscope (Figure 2).

The timing is very good, although it runs about 100 µs fast for a 10 ms pulse, giving it a timing accuracy of 99 %. While this isn’t as good as the Arduino, it is still great, and to be honest is far better than you would ever need for an optogenetics study, either in vivo or in vitro.

Next, I printed a housing unit for the Trinket and a 9 V battery, and I also included a slide switch to cut the power and prevent battery drain when not in use. I think it looks quite smart (Figure 3).

I can’t wait to turn up somewhere with this little box, and hook it up to drive a laser or LED in an optogenetics study.

Animal Consumption

A shower thought

I was having an imaginary argument this morning – you know, the kind you have in the shower where all your points are zingers and your opponent can only be floored by your insightful oratory, whereas anything they come out with is antiquated and flawed. On this occasion, my imaginary antagonist was my father-in-law, who is great for such things because he is a classic dogmatic conservative who apparently changes his mind only when instructed to do so by the Daily Mail. He is also loud and steamrolls all other voices in his vicinity, such that my wife is the only person who successfully argues against him.

Anyway, on this occasion, I was actually walking to the train station, which is another great time for introspection, when I started thinking about the recent news that South Korea’s president was considering banning the consumption of dog meat. Now, I could just imagine the FIL lauding this in his typical brash manner: finally some sense, how could this culture engage in such a disgusting practice for so long?

Animal lover

Now, for context, my FIL absolutely loves dogs, so this is a) a very reasonable position for me to give his fictional self, and b) not something I would ever argue with him in real life. But, as this is a fictional confrontation, there’s no problem. So my rebuttal would go along the lines that, yes I agree that eating dogs is distasteful, and not something that I would ever even consider doing, but how is it any different from our consumption of pigs, cows and sheep?

There is no argument you can make against the consumption of dog meat that doesn’t also preclude the eating of any animal, without appealing to our cultural history of keeping dogs as pets. And at that point, one can just point to the historical culture of eating dog meat in places such as Korea.

Pigs in particular are as sociable as dogs, and at least as clever. I’ve seen videos of cows bounding around like puppies and showing affection to their owners, and I know people who keep chickens (and other birds) as pets. The same could be true of rodents, not that we eat them, but we do exterminate them fairly indiscriminately, and I can testify that rats are both clever and sociable. Horses and dogs are often used as working animals (not that I enjoy eating horse meat, but there is a historical precedent of them ending up in food as a cheap substitute to beef).

Advocating veganism

This argument inexorably leads to advocating veganism, which my wife and I attempted a couple of years ago, but found it too challenging; if you ever check the ingredients of packaged food, 99% of the time it contains animal products (particularly dairy), even in food that you would never think it necessary. Instead, we went for drastically reducing our consumption of animal products and when we do, making ethical choices.

We have swapped out cow’s milk for oat milk (which is a bit more expensive but I actually prefer it), only buy free-range eggs (which we did anyway), and try to buy ethically produced meat on the occasions we do buy it (probably once a week). Unfortunately, I am a total cheese hound, which has been the hardest thing to cut.

Extending the argument at the other extreme, how can you argue against consuming any animal? I remember watching a program about people (poachers, I guess?) hunting and eating wild animals in the jungles of Borneo, which went by the deceptively innocuous term bush meat. As this is Borneo, every animal is likely to be in danger of going extinct, which makes it easy to vilify and argue for a total ban. But, as a middle class person who’s grown up in wealthy nations and never been hungry or homeless, how can I judge people for hunting for food?

As I said earlier, many of the ethical changes I’ve made to my diet also increased the cost, so how can I judge others who don’t have the means to do so? Well, when they show that sometimes “bush meat” includes orang-utans or chimpanzees, suddenly my sympathies evaporate. And finally, we come back to the arguments in the use of animals in research – balancing need against ethical usage and suffering.

Reducing suffering

A basic criterion is the ability of the animal to feel suffering, which increases with the innate intelligence of the animal, which is why we are so instantly disgusted by the suffering of primates. And also why a huge amount of animal research is performed on mice, who sit in a good balance between less capable of suffering but close enough to humans to enable important and relevant research.

At the end of the day, reducing the suffering of animals around the world comes down to two things: education (about the harm being done; eg. see the NC3Rs) and empowerment (particularly financial, to enable change). This is particularly true when it comes to eating animals, where we can obviate the need for animal consumption, but only with a huge and concerted effort.

The 3 R’s

Replacement. Reduction. Refinement. Also known as the 3 R’s.

On the face of it, the 3 R’s form a fairly straightforward guide to limit the amount of suffering endured by animals in your experiments. However, they are stepping stones to a quite in-depth process of advancing technologies and rigorous planning, as defined by the following on the NC3Rs website1:

  • Replacement – Accelerating the development and use of models and tools, based on the latest science and technologies, to address important scientific questions without the use of animals
  • Reduction – Appropriately designed and analysed animal experiments that are robust and reproducible, and truly add to the knowledge base
  • Refinement – Advancing animal welfare by exploiting the latest in vivo technologies and by improving understanding of the impact of welfare on scientific outcomes

These are major points that I think about frequently when planning and performing animal experiments.

Before you plan an experiment

Replacement would seem quite straightforward for someone who works on the mouse neural system, in that it’s not something in my control, so why worry about it? And while it is true that I rarely have intentions to work in non-animal systems, that doesn’t mean it’s irrelevant.

Really, this needs addressing at the most fundamental level, before I even plan an experiment: is the scientific question I want to answer relevant to a whole-animal neural system? Does it require the use of an animal to answer?

For example, I have often used relatively unknown neuropeptide agonists in my work; if I wanted to know more about the intracellular signalling mechanisms these agonists use, it would be both unethical and a waste of time, money and animals to test this on live brain slices (which I use for patch clamping). Instead, one would use a cultured cell line, such as HeLa cells.

Robust and reproducible

Reduction is an interesting one. It’s easy to think, well I’ll just use fewer animals in my experiment. However, this misses the key point of “robust and reproducible” experiments. What if you used fewer animals and didn’t see an effect? Is that because there is no biological effect of your treatment, or is it because you didn’t have enough animals in your study to show a statistical effect? This is where power analyses come in to play: they help you plan a robust study without using an unnecessary number of animals.

It is also important to optimise your study design to produce the most statistical power (e.g. using crossover studies and repeated-measures ANOVA) and to avoid the need to repeat studies in the future. Even during an experiment I am conscious of this, because I am always trying to reduce the variability in a study (for example, by reducing animal stress) in order to improve the power and reduce the numbers needed for future studies.
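A quick sketch of why a crossover (within-animal) design helps, under the same normal-approximation assumptions as a standard power calculation. The effect size, SD and within-animal correlation below are invented purely for illustration:

```python
from statistics import NormalDist
import math

def n_between(delta, sd, alpha=0.05, power=0.8):
    """Animals per group for an unpaired, two-group comparison
    of means (normal approximation)."""
    z = NormalDist().inv_cdf
    zsum = z(1 - alpha / 2) + z(power)
    return math.ceil(2 * (zsum * sd / delta) ** 2)

def n_crossover(delta, sd, r, alpha=0.05, power=0.8):
    """Total animals for a paired (crossover) design, where every
    animal receives both treatments; r is the within-animal
    correlation between the two measurements."""
    z = NormalDist().inv_cdf
    zsum = z(1 - alpha / 2) + z(power)
    # Correlated repeated measures shrink the SD of the differences
    sd_diff = math.sqrt(2 * sd ** 2 * (1 - r))
    return math.ceil((zsum * sd_diff / delta) ** 2)

# Detecting a 10-unit change with SD = 10: the unpaired design needs
# 16 per group (32 animals), the crossover (r = 0.5) only 8 in total.
print(n_between(10, 10), n_crossover(10, 10, 0.5))
```

The same mechanism explains why reducing variability (a smaller SD, or a higher within-animal correlation) directly reduces the number of animals a future study will need.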

Improve data quality and impact

Refinement is really where it's at for me. I spend a lot of time optimising techniques and, more recently, developing technology to improve experimental conditions. The thing is, when your mice are unhappy and stressed, they do not behave naturally and your data will be more variable. So it makes sense, both pragmatically and for animal welfare, to refine your experiments as best you can.

Refinement can include anything from your study design, acclimatisation of the animals and their housing conditions, to advances in technology that allow better data to be collected.

Technology and the 3 R’s

Ever since my PhD, I have been interested in the use of technology to produce better data, and to improve animal welfare along the way. I was particularly keen on the use of telemetry to obtain high-quality physiological data while minimising stress from the recording environment. Since then, I have been interested in using AAVs to target neurone populations of interest, and later more advanced technologies including optogenetics and fibre photometry.

In addition to improving the animal welfare in a single experiment, these more advanced technologies can provide more impactful data with deeper insights, which means fewer studies need to be performed to provide a clear picture of the biology in question.

I have also become very interested in developing in vivo technologies myself, to improve on aspects that I know negatively impact animal welfare; for example, trying to perform fibre-free optogenetics to avoid many of the downsides of those experiments (such as the need to keep head-tethered animals in open cages during experimentation).

A 3 R’s framework

The 3 R’s provide an excellent framework with which to approach animal research in a way that aims to be as ethical as possible. And in fact, I would argue that we are morally bound to consider such questions whenever we intend to perform experiments on animals.

1. https://www.nc3rs.org.uk/the-3rs

Animals in Research

“What do you think about the use of animals in research?”

This is the one question you can always guarantee will be asked in a job interview that has anything to do with the use of animals in research. And it's actually quite a difficult question to answer well. The answer clearly lies somewhere between "Yeah, I'm fine with it, I don't care" and "We have no right to use animals like that for our own benefit." But how do you justify experimentation on animals without coming across as glib or self-serving?

Quite apart from job interviews, I've always found this a difficult topic, I suppose because it is so emotive and because there is so much misinformation surrounding it. Generally, the only time you hear about animal research in the news is when there has been a particularly horrific protest. And those best placed to talk about the reality of animals in research, i.e. the researchers, are frightened and drilled into not speaking about it, to the point that only my family and a couple of close friends know what I actually do for a living.

Anti-vivisection protest in the US

Almost 75% of animal research is performed on mice

I will occasionally, once every year or two, come across an anti-vivisection "information" stand at a market or something, where they hand out leaflets to persuade the public of the evils of animal research, and of the pleasure the scientists supposedly take in doing horrific things to the animals. This is obviously not the case, and it is somewhat indicative of the weakness of their position that they have to cherry-pick and inflate the statistics, focusing on the most photogenic species, like dogs, cats and primates.

They completely misrepresent the fact that the vast majority of animal research is performed on mice (Figure 1), and more than 90% in mice, rats or fish.

In fact, there are only a couple of facilities in the country that perform research on monkeys, and all research with chimpanzees and other great apes was stopped in 1998. Speaking of which, the reason the pictures they use look so terrible is that they are ancient (mostly from the 80s or earlier, before the current regulations were brought in).

Experimenting on animals is difficult and expensive

I’m not sure how many of the public know that you need a license to perform scientific research on animals. I say a license, when in fact you need several: a personal license for the researcher actually performing the experiments, a project license for the person (usually a professor) in charge of the work, and a site license for the location where the experiments will take place. And everything needs to be justified and planned beforehand, with expected outcomes and experimental group sizes. Then you need named training and competency officers, a vet, an animal welfare officer and the technicians who will actually be caring for the animals.

And we mustn’t forget the Home Office inspectors, who can, and will, drop by to check on the welfare of the animals and make sure all paperwork and training is up to date. All of which means that experimenting on animals is difficult and expensive, and requires huge amounts of training and expertise. So, quite apart from the moral and legal implications, anyone who thinks that overworked and underfunded scientists will be frivolous with their use of animals is deluded.

How to combat misinformation

There are, of course, institutions trying to combat the spread of misinformation, such as the National Centre for the 3 R’s (www.nc3rs.org.uk). It is, however, very difficult to get the public interested in science and statistics when competing with emotive pictures and moral outrage. Which is why it is so important for those who know better to spread good information about this topic.

But where do you begin, when scientists have been conditioned to stay silent about anything to do with vivisection, and the public are so conditioned to fear the evil scientist? It really comes down to education, both about the realities of animal research (laboratory animals are treated far better than farm animals, yet the moral outrage lands far more heavily on experimentation) and about the benefits to medicine and society that come out of this research.

I’m not going to lecture my readers about all the great advances coming out of animal research; suffice it to say that any medical advance you have ever heard of was borne on the back of a huge amount of scientific research, much of it requiring the use of animals. And this includes many benefits to modern medicine that people may not think of. So even those who take a moral stand and refuse any kind of medication because it required vivisection still live in a world without smallpox and polio (and, might I add, with a Covid-19 vaccine) thanks to the use of animals in research.

One of my goals with this website and blog is to help spread accurate and interesting information about research involving the use of animals. It is up to us to be thoughtful and diligent with our use of animals, and to make sure their sacrifice is not wasted.

Stick or Float?

I recently finished my optogenetics experiment (the planning of which I showed in a previous post), and the next step was to cut the brains. As part of the study, I stimulated the mice 90 minutes before perfusion so that I could immunostain for c-fos as a marker of neuronal activation. This is important for showing that I can successfully activate my ChR2-expressing neurones, but it also allows me to investigate potential downstream sites of activation throughout the brain. All of which is to say that I had 10 brains to cut, and had to cut and save every section through each brain in its entirety. Which is a lot of brain sections.

A quick mention of my methods: I am cutting the brains into 30 µm sections on a freezing sledge microtome and placing the sections in phosphate buffer for free-floating immuno. This is not a technique I had used, or even seen, before coming to my current lab, so my goal today is to highlight some of the differences between this method and the one I used previously, namely using a cryostat and doing on-slide immuno.

The microtome is a lovely old-school piece of kit, made of massive cast-iron segments, with a cooling stage to freeze your tissue on and smooth steel rails to run the stage back and forth beneath the blade. Speaking of the blade, we use modern microtome blades that fit into a holder, but we still have the original chunk of steel that needs sharpening by hand. It is slightly odd the first time you use it, because the tissue is frozen on the stage, and each section melts as you cut it, ending up as a little pile of mush on the blade. You then use a paintbrush to sweep it off and into a well of buffer.

Doing the immuno on floating sections takes some getting used to as well: you need to suction out the buffer at each step of the immuno, and it takes practice to avoid sucking up the sections (surface tension is a bitch). But the most challenging aspect of this method is mounting the sections at the end of the protocol, because you have to half-float, half-push each section onto the slide. It takes a lot of practice to do this in an orderly row, without folding or damaging the sections, and without taking endless hours.

By contrast, mounting sections using a cryostat is easy. Because you do all the cutting and mounting in a frozen chamber, each section is frozen and, for want of a better word, stiff. There are two schools of thought on the mounting procedure. I was originally taught to do it onto frozen slides: you manoeuvre the section to where you want it on the slide using a paintbrush, then use the warmth of your finger on the underside of the slide to melt the section into place. This works well, but it does take a toll on the warmth of your fingers (I manage fine because my circulation is great and my hands are always warm; the PhD student who taught me had to stop every couple of hours when her fingers became too cold to melt the sections into place). The alternative is to leave the slides out at room temperature, then touch them to each section, which melts onto the slide in place.

When you mount sections onto slides as you cut them, you then have to do the immuno on the slide, which makes washes really easy: you can just dunk the slides into a well of buffer, with no suctioning of tissue to worry about. However, the antibody steps become slightly more challenging, because you need to do them on the slide, which means laying the slides flat on the bench and pipetting a few hundred microlitres of solution to sit on top; in this case, surface tension is your friend. I found that the bubble of solution on the slide is not level all the way across, which means you end up with a gradient of antibody, and therefore the staining is not uniform across your sections. This may not matter much, unless you need quantitative staining.

When it comes to planning which method to use for doing immunohistochemistry, I use the following criteria:

  • Use the microtome and free-floating immuno if you want a quantitative readout, such as c-fos, or if you have a lot of brains to cut (I find it’s quicker overall)
  • Use the cryostat and on-slide immuno if you are using a method with very low volumes, such as in situ hybridisation, or you want precise placement of sections