It’s Hard, We Know

Simplifying optogenetics equipment

I’ve mentioned once or twice the LED-based optogenetics system I’ve been working on recently, so I thought today I would put my faithful readers out of their misery and explain what I’ve been up to.

The driving force behind it was to simplify optogenetics experiments for the user, particularly with the hardware/user interface. I was actually reminded of this again yesterday, when it took me a while to sort out the stimulation protocol on the Radiant software I use for my optogenetics experiments.

So what I wanted was an easily programmable piece of hardware that I could connect various LEDs and switches to, and there was really only one answer for me: an Arduino.

The Arduino Uno

For those that don’t know, Arduino is an open-source hardware/software company that produces electronics boards for the easy programming and use of microcontrollers. Their bog-standard model is the Uno (Figure 1); it has a USB input for easy programming from a computer, and pin headers giving easy access to the microcontroller’s 14 digital input/output pins and 6 analog input pins.

I’ll save an in-depth investigation into microcontrollers for another day. For now, suffice it to say that you can connect a huge array of sensors (e.g. light detectors, or even switches) and outputs (e.g. LEDs), and the Arduino will control them however you program it to.

The Arduino Uno

Controlling optogenetics

Anyway, my goal was to generate a TTL output to drive flashing of the LED – effectively controlling optogenetics with Arduino. Essentially, I want a physical switch that turns the flashing on and off, while the Arduino outputs a signal with whatever stimulation parameters I have programmed.

So, my electronics layout will look something like this (Figure 2). I have a toggle switch connecting pin 0 to ground (the pin’s internal pull-up resistor holds it high, and latching the switch pulls it low), a pilot LED connected to pin 1, and an output TTL from pin 2.

Simple circuit for Arduino to flash an LED.

Coding the Arduino Uno

Next, we need to write the code, otherwise the Arduino will just sit there and do nothing. Fortunately, Arduino programming software is really easy to use, and they have endless tutorials and sample code online; if you want to do something but don’t know how, just type it into Google and someone will almost certainly have done it before.

To write our code, we have three sections:

  1. Naming any values
  2. Setup, which runs once and sets the Arduino’s starting attributes
  3. Loop, the main program, which the Arduino cycles through endlessly, doing exactly what you tell it to do

Some notes on syntax:

  • int          declares a value as an integer
  • ;              denotes the end of each statement
  • {  }          enclose each section or subsection
  • //            comments out the rest of the line, which is useful for adding notes that won’t affect the program

Without further ado, here’s the simple program I wrote to run the TTL:

Arduino code for controlling optogenetics
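The sketch itself is in the image above, but the timing arithmetic any such program relies on is worth spelling out: converting a target frequency and pulse on-time into the on/off delays for the loop. Here is that calculation as a quick Python sketch (the 20 Hz / 10 ms values are just an illustration, not a recommendation):

```python
# Convert stimulation parameters into the on/off delays an Arduino-style
# loop would use between digitalWrite calls.

def pulse_delays_ms(freq_hz: float, on_time_ms: float) -> tuple:
    """Return (on_ms, off_ms) for one stimulation cycle."""
    period_ms = 1000.0 / freq_hz          # full cycle length in ms
    off_ms = period_ms - on_time_ms       # remaining time with the TTL low
    if off_ms <= 0:
        raise ValueError("Pulse on-time does not fit in one cycle")
    return on_time_ms, off_ms

on_ms, off_ms = pulse_delays_ms(20, 10)   # 20 Hz with 10 ms pulses
print(on_ms, off_ms)                      # 10 ms on, 40 ms off per cycle
```

In the Arduino loop this simply becomes: set the pin high, delay for `on_ms`, set it low, delay for `off_ms`, repeat.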

I’ve put info in the comments about what the bits of code mean; hopefully it all makes sense. I found the Arduino really easy to code (which I’m pretty sure is the point of them), so I would absolutely recommend readers pick one up. Anyone planning in vivo or in vitro optogenetics studies should consider an Arduino-based controller.

Even if you have no specific projects in mind, I think it’s great for everyone to have at least a basic understanding of electronic circuits and coding. And you might just find that you can solve some problems much easier than you thought.

A device for controlling optogenetics

I will add here an update about a device I have made to easily control optogenetics. I added a couple of dials to allow the user to easily switch pulse on-time and frequency, and housed it with a BNC output for the TTL. I have called it the EasyTTL Uno, and it is available to purchase in my shop. Alternatively, I have made the design specs and code freely available on Hackaday.

The EasyTTL Uno provides a single channel TTL output for controlling an optogenetics laser or LED. Stimulation parameters (pulse on-time and pulse frequency) are controlled by dials, and the flashing is turned on/off with a toggle switch. It’s super-easy to use, and fully customisable if you want to set your own flashing parameters. Please do check it out.

How High Can You Go?

Validating optogenetic stimulation frequency

For my most recent optogenetics experiment, I did a full validation of the optimal stimulation frequency. FYI, I would recommend doing this for any new paradigm.

I took a safe “positive control” measure that I knew would be influenced by my neurones of interest: food intake. I then applied a ramped series of stimulation frequencies: 1, 2, 5, 10 and 20 Hz. This gave me what is essentially a dose response, with food intake increasing up to 5 Hz, then plateauing with no further increases at higher frequencies. I was therefore able to select 5 Hz as my optimal frequency, as it gave a maximal response while limiting the amount of light.
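Selecting the optimum from such a dose-response boils down to finding the lowest frequency that still gives a near-maximal response. A minimal Python sketch of that logic (the food-intake numbers here are invented for illustration, not my real data):

```python
# Pick the lowest stimulation frequency whose response has reached the
# plateau, defined here as >= 90% of the maximal response.

def optimal_frequency(freqs, responses, fraction=0.9):
    """Lowest frequency whose response reaches `fraction` of the maximum."""
    max_resp = max(responses)
    for f, r in sorted(zip(freqs, responses)):
        if r >= fraction * max_resp:
            return f
    return None

freqs = [1, 2, 5, 10, 20]                 # Hz
intake = [0.1, 0.3, 0.8, 0.82, 0.81]      # g food eaten (hypothetical values)
print(optimal_frequency(freqs, intake))   # 5 Hz: the response has plateaued
```

The 90% cut-off is an arbitrary choice; the point is simply to formalise “maximal response with minimal light”.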

This is important, because light is not only phototoxic at high levels, it can produce neuronal activation in the absence of ChR2. I say this because I always see some amount of c-fos at the fibre site, even in control mice – which is why it is so important to include non-ChR2 mice in your study. You also find that at higher light powers and frequencies, your action potential fidelity drops.

What’s the optimal stimulation frequency?

I digress. And ramble. My point today is the optimal frequency at which to stimulate your opto mice. In the past I have used 10 Hz, or sometimes even 20 Hz, as a kind of industry standard, without proper optimisation. This is easy to do but, as I’ve just shown, is often not the optimum for your experiment.

A common stimulation paradigm in the literature is to stimulate at 20 Hz in a pulsed manner – for example flashing for 1 second, then off for 3 seconds in a cycle. The popularity of this method likely stems from its use by Aponte, Atasoy and Betley in their early seminal works1-3.

And these come from the much earlier finding by van den Top that AgRP neurones fire in such bursting patterns following activation by ghrelin4. So, for experiments involving AgRP neurones, this stimulation paradigm does make sense, as it closely mimics normal physiological activity in the activated state.

A concerning pattern

However, a collaborator of mine uses a similar stimulation pattern, but at an even higher frequency (30 Hz, pulsed 1 second on, 3 seconds off). My problem with this begins with the fact that I have recorded from his neuronal population of interest, and they do not fire in such bursts (I have told him this).

Even more concerning is the question of whether those neurones are even capable of firing at 30 Hz. It might seem like I’m being overly dramatic, but this is a genuine concern; some neurones can fire much faster, up to 100 Hz or so, but many cannot. And there is an even deeper concern: if you overstimulate a neurone, you can drive it so far into depolarisation that it becomes incapable of generating an action potential – in essence, you silence the neurone.

Optogenetic frequency validation

The potential to optogenetically silence neurones was well shown by Lin et al.5, who compared various opsins including our perennial favourite, ChR2(H134R) (Figure 1). They found that at 25 Hz, ChR2(H134R) only has about 25-50% fidelity, depending on the light irradiance.

But why is this? You need to take into account the time it takes for the opsin to close after light off, which is about 18 ms for ChR2(H134R). This leaves very little time for the neurone to recover at high stimulation frequencies. It should be noted, as well, that Lin et al. used very short stimulation times of 0.5 ms, whereas most people use 10 ms in vivo. This means that if you stimulate at 30 Hz with a 10 ms on-time (as my colleague does), you have only 23 ms of light off between flashes.

Take off the 18 ms needed for the ChR2 to close, and that leaves about 5 ms of recovery before the next action potential. My point here is not to bash my colleague, but rather to stress the importance of optimising your stimulation protocol, and in particular not overdoing the irradiance and stimulation frequency.
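That back-of-envelope calculation generalises easily. A quick Python sketch (the 18 ms closing time comes from Lin et al.; the rest is arithmetic):

```python
# Recovery window per flash: cycle period, minus the pulse on-time, minus
# the opsin closing time (~18 ms for ChR2(H134R), per Lin et al. 2009).

def recovery_window_ms(freq_hz, on_time_ms, tau_off_ms=18.0):
    period_ms = 1000.0 / freq_hz
    return period_ms - on_time_ms - tau_off_ms

# The 30 Hz, 10 ms protocol leaves very little slack:
print(round(recovery_window_ms(30, 10), 1))   # ~5.3 ms for the neurone to recover
# A 20 Hz, 10 ms protocol is much more forgiving:
print(round(recovery_window_ms(20, 10), 1))   # 22.0 ms
```

A negative result would mean the opsin never fully closes between flashes, which is exactly the regime where fidelity collapses.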

How to determine optimal stimulation protocol

For me, there are three factors to consider when planning your optimal stimulation protocol:

  • How do these neurones normally fire when activated? Trying to mimic as closely as possible the natural firing dynamics of your neurones of interest is, in my opinion, the best way to go. This is probably best done by current-clamp patching of identified neurones and then applying something to activate them, e.g. applying ghrelin to AgRP neurones.
  • How fast can you drive electrical behaviour in these neurones? For this, I would patch clamp your opsin-expressing neurones and apply light pulses to the soma. This way you can determine the likely irradiance needed, as well as the electrical responsivity and action potential fidelity. This is particularly important if you intend to drive high-frequency firing, as you need to know that your neurones are capable of keeping up.
  • Finally, test a range of firing frequencies (including pulse paradigms if relevant) in vivo against a known behavioural response. For my studies and for AgRP studies, it is simple to measure food intake; this lets you test how your predicted stimulation paradigm works in vivo, as well as confirm your current experiment is working, e.g. that virus transduction and optic fibre placements are good.

Hopefully people will find this useful, if only as a reminder to test your optogenetics stimulation frequency, and not to just go for the brightest and fastest possible stim.

1. Aponte et al., Nat Neurosci 14(3), 351-355 (2011) AGRP neurons are sufficient to orchestrate feeding behavior rapidly and without training.

2. Atasoy et al., Nature 488, 172-177 (2012) Deconstruction of a neural circuit for hunger.

3. Betley et al., Cell 155, 1337-1350 (2013) Parallel, redundant circuit organisation for homeostatic control of feeding behaviour.

4. Van den Top et al., Nat Neurosci 7, 493-494 (2004) Orexigen-sensitive NPY/AgRP pacemaker neurons in the hypothalamic arcuate nucleus.

5. Lin et al., Biophysical J 96, 1803-1814 (2009) Characterization of engineered channelrhodopsin variants with improved properties and kinetics.

Miniscopes et al.

I have written about the use of fibre photometry to record Ca2+ activity in vivo, and today I’ll be exploring a more advanced (and far more complex) version of that. Namely, the use of a head-mounted miniscope to record videos of individual neurones.

I first learned about head-mounted miniscopes at the same time as photometry – in 2015, when Chen and Betley showed how AgRP neurones really work1,2. Nobody could read the Betley paper, with its beautiful head-mounted miniscope data, and not be excited and want to try the technique for themselves.

But one must also recognise that it is clearly an exceptionally complex technique, and that you should only use it when you absolutely need to, i.e. don’t do the super-difficult version when you can get just as good an answer with fibre photometry. Having said that, I don’t know if miniscopes were strictly necessary for Betley’s paper – Chen found many similar results without them.

Anyway, my point here is to reiterate what I always say, which is to make your experiments as simple as possible, to give you the strongest and cleanest answer. So in that vein, I will investigate a paper that used miniscopes to find a response that wouldn’t have been possible using photometry, a 2018 paper by Chen et al.3

This paper combines head-mounted miniscope recordings of galanin-expressing neurones (Gal-cre) of the dorsomedial hypothalamus (DMH) with telemetry-based EEG recordings of brain activity (Figure 1A). Combining the data lets them correlate the EEG activity defining the different sleep/wake phases with the GCaMP signal from individual neurones (Figure 1C). What’s really interesting is that they show two distinct subpopulations of galanin neurones, with opposite behaviour during REM and non-REM sleep.

They then performed a series of exhaustive tracing studies (which I won’t go into here), which showed strong and mutually exclusive projections from the DMH galanin neurones to the preoptic area (POA) and the raphe pallidus (RPa). To test whether these correlated with the REM and non-REM sleep patterns, they redid their miniscope experiments on the DMH, but this time used a retrogradely transported AAV-GCaMP to specifically label the differently projecting subpopulations (Figure 2A/E). This elegant experiment showed that the POA-projecting subpopulation was active during non-REM sleep (Figure 2C/D), whereas the RPa-projecting population was active during REM sleep (Figure 2G/H).

The authors then go on to perform another exhaustive series of experiments, this time using optogenetics to show that the different DMH projection sites don’t just correlate to REM or non-REM sleep, they can also drive changes between those sleep states.

Lastly, I’m just going to briefly go into my interest in doing these experiments myself. A year or so ago, I enquired with Inscopix (who make the benchmark miniscopes, and I think were spun out from the lab that originally developed them) about purchasing one from them4. The quote came to £60k, which was far too much for us, so I forgot about them for a while to focus on other things.

And then recently, while exploring options related to developing fibre photometry, I came across the open source head-mounted miniscope project from UCLA5. I had seen this before but the sheer complexity put me off. Essentially, they have developed their own miniscope, and have made the designs freely available online. The problem is that this is such a complex technique, I wouldn’t be happy having to build the microscope myself as well as learning and optimising the system; I could just see it being a massive waste of time to get it working well.

Anyway, when I revisited the UCLA miniscope site recently, I found that they have not only released a new lightweight and more advanced version of their miniscope, they have also started selling them fully assembled on the Open Ephys website6. And the price? £1,940 (including the acquisition box). So, needless to say, I will be requesting that my supervisor buy one. Or five. The price is reasonable enough that I think the only reason he’ll say no is if he considers it a waste of my time – or, more to the point, that playing around with one of these will distract me from my “real” work.

There is a major challenge with getting a miniscope from anyone that isn’t Inscopix, and it comes down to the GRIN lenses you need to do the imaging in the brain (for any that don’t know, a GRIN (gradient-index) lens is like an optical fibre with a precisely graded refractive index that keeps the image in focus). It turns out that the only company in the world making GRIN lenses longer than about 4 mm of a type you can use for in vivo imaging is GrinTech, and they have an exclusivity deal with Inscopix. Which means they won’t sell them to you directly; you need to go through Inscopix, which means spending £60k.

So, any “real” neuroscientists who work on structures near the brain surface, such as the hippocampus or cortex, should be fine to get the cheap miniscope and shorter GRIN lenses from places such as Edmund Optics. I, on the other hand, and anyone else who works on more interesting and deeper brain regions, will have to keep searching.

1. Chen et al., Cell 160, 829-841 (2015) Sensory detection of food rapidly modulates arcuate feeding circuits.

2. Betley et al., Nature 521, 180-185 (2015) Neurons for hunger and thirst transmit a negative-valence teaching signal.

3. Chen et al., Neuron 97, 1168-1176 (2018) A hypothalamic switch for REM and non-REM sleep.

4. https://www.inscopix.com

5. www.miniscope.org

6. https://open-ephys.org/miniscope-v4

A Study in Green

I have recently been working towards an in vitro GCaMP imaging system. This came about when I inherited a patch/imaging rig from a colleague who moved on to pastures new. The rig was set up for calcium imaging using calcium-sensitive dyes, such as Fura, and included a nice Zeiss inverted microscope, perfusion stage and Hamamatsu Orca 2 camera.

However, rather than chemical indicators of calcium activity, I wanted to use our genetically encoded calcium indicators. Luckily, the excitation/emission spectra of GCaMP align very closely with GFP, which meant I could use the GFP filter sets for imaging. I was starting tests with a technician in the lab when, one day, the camera just wouldn’t switch on (it seems the control box died – to be fair it’s quite old, but it still costs several thousand pounds to replace).

Well, that was about a year and a half ago, and for obvious reasons it put a complete halt to further imaging studies. Which is a shame, because GCaMP imaging provides a number of features that can make it desirable over electrophysiology (Table 1).

Luckily, my supervisor had a sizeable pot of grant cash that had been earmarked for electrophysiology equipment, but could instead be rerouted towards a new imaging camera. So after some research, I found a couple of very good (but also very expensive, >£15k) cameras to test out: the BSI Express from Teledyne Photometrics and the Fusion BT from Hamamatsu. These are comparable high-end sCMOS cameras, which means outstanding sensitivity, imaging speed and resolution.

My plan was to only buy what was absolutely needed, and use what was in place until we want to/can afford to upgrade the system (Figure 1A). I arranged loans of the two cameras I was interested in; the BSI express came first so that’s the one I’ll show today.

The light source we have is a Prior white light with excitation filter changer. I purchased 2 excitation filters for this experiment at 470 nm and 410 nm (Figure 1B), which represent Ca2+-dependent and Ca2+-independent excitation wavelengths of GCaMP, respectively.

Having set up the system, it was time to test it out. Reaching for the low-hanging fruit, I remembered that one of the mouse lines we had been gifted by a collaborator included a GCaMP3 reporter (we were actively trying to breed the reporter out, but in the meantime we had a bunch of cre/GCaMP3 double-positive mice that would otherwise go unused). The mouse line in question is a GLP1R-cre mouse, which means that all the GCaMP-expressing cells should express GLP1R. Therefore, the obvious experiment to validate the system was to apply the GLP1R agonist Exendin-4.

Anyway, I took a video, drew regions of interest round identifiable neurones and plotted the change in fluorescence in response to 1 µM Exendin-4 (Figure 2).
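For anyone curious, the change-in-fluorescence calculation behind traces like these is very simple. A minimal Python sketch of ΔF/F for a single ROI, using a pre-drug baseline window as F0 (the trace values here are invented for illustration):

```python
# Minimal dF/F calculation for one ROI trace. F0 is the mean of a
# baseline window recorded before the drug is applied.

def delta_f_over_f(trace, baseline_frames):
    """Return the dF/F trace relative to the mean of the first
    `baseline_frames` samples."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

roi = [100, 102, 98, 100, 150, 180, 175]   # raw fluorescence (a.u., hypothetical)
dff = delta_f_over_f(roi, 4)               # first 4 frames = baseline
print([round(x, 2) for x in dff])          # response appears from frame 5
```

In a real analysis you would do this per ROI across the whole video, and ideally correct with the 410 nm Ca2+-independent channel, but the core arithmetic is just this.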

All in all, the in vitro GCaMP imaging system worked well, despite a number of problems along the way that I haven’t gone into here. I am very much a convert to this kind of experiment; having spent many years patching individual neurones, it’s lovely how visually obvious these data are. I would highly recommend anyone reading this to look into GCaMP imaging as a quick and easy alternative to patch clamping.

Depths of Detection

A fortuitous chat

The other week I had a chance conversation with a colleague about an experiment she was struggling with, involving recording AgRP neurone activity with in vivo fibre photometry. In particular, she was having problems with her fibre placements. Her AAV injections were fine, as she was getting great GCaMP expression in the arcuate nucleus, but she was struggling to get a good fibre photometry signal. It seemed she was either overshooting with her fibre and damaging the base of the mouse’s brain, or not going deep enough to get close enough to the AgRP neurones to pick up the signal.

This led me to wonder about photometry fibre placement: how close do you actually need to get to the fluorescent cells to pick up a good signal? It’s surprisingly difficult to find information on this for in vivo fibre photometry. The couple of studies I found both used 2-photon excitation, which has a very different excitation profile from “normal” epifluorescent photometry1,2.

Photometry signal detection

After some sleuthing, I found a paper by Simone et al. developing an open-source photometry system3. As part of the validation process, they tested the detection power of their system using an artificial setup (small pieces of fluorescent tape submerged in 2% intralipid; Figure 1). They found that detection tailed off dramatically even before 100 µm of displacement.

However, the Simone data come from a system very different from our in vivo setup. In particular, they used small-diameter fibres with intralipid as the scattering medium.

After some further scouring of the internet, I found a thesis from the University of Florida, where the author set out specifically to investigate and optimise fibre photometry recording4. A quick caveat: as a thesis, this work has not been through peer review. But the work looks very thorough and will have passed a viva board, so I think it can be trusted.

Anyway, as part of the thesis, Mansy set up an in vitro system using fluorescent beads obscured by acute brain slices to investigate detection profiles with different optic fibres (Figure 2). Using 400 µm fibres, they found that fluorescence detection dropped off rapidly with distance from the fibre tip. Interestingly, this was far more pronounced in the .50 NA fibre than the .22 NA fibre (Figure 2A). This surprised me, as we are always told to use the highest NA fibre possible for photometry, the reasoning being that it maximises light collection.

However, upon reflection, it makes sense to use lower NA fibres if you think of detection not just in terms of fluorescence collection distance, but also the depth of excitation light penetration (for more info, check out my Depth calculator and blog posts). In that case, it makes complete sense for the high NA fibre to have a much shallower detection profile. The difference was even more pronounced when looking at the 3D detection volume (Figure 2B).

How to relate this to our work? I know that my colleague who was having photometry troubles was using a 400 µm, .48 NA fibre. This should give an almost identical detection profile to the .50 NA fibre investigated by Mansy (Figure 2A, left). I have since suggested that she use lower NA fibres; switching to a .22 NA fibre should extend her 50% detection depth from about 150 µm to about 300 µm, based on this work (Figure 2A, right).
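The NA effect makes geometric sense: the acceptance half-angle of a fibre in tissue is asin(NA/n), so a higher NA means a much wider excitation/collection cone, spreading the light over a rapidly growing area and making detection fall off faster with depth. A quick Python calculation (the tissue refractive index n = 1.36 is a common assumption for brain):

```python
import math

# Acceptance half-angle of an optic fibre embedded in brain tissue.
# Wider cone = light and detection spread over a larger area per unit depth.

def half_angle_deg(na, n_tissue=1.36):
    """Half-angle (degrees) of the fibre's light cone in a medium of
    refractive index n_tissue."""
    return math.degrees(math.asin(na / n_tissue))

for na in (0.22, 0.48, 0.50):
    print(f"NA {na}: {half_angle_deg(na):.1f} degrees")
```

The .48 and .50 NA fibres have cones more than twice as wide as the .22 NA fibre, consistent with their much shallower detection profiles in the Mansy data.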

A note on tapered fibres

Finally, I found a paper that improves the depth of fibre photometry signal detection even further, by moving away from flat-ended fibres2. The problem with imaging through a flat-ended fibre is that the light emission tails off exponentially, and detection along with it. Furthermore, detection will be heavily biased towards the neurones nearest the fibre. This is dramatically improved by using a taper-ended fibre, which provides more uniform light emission and signal detection (Figure 3).

I had a quick search online and found that Doric sell tapered photometry fibres (we have a Doric photometry system, and we purchase our photometry fibres from them). My recommendation to my colleague, and anyone else doing photometry, is to try the tapered fibres, provided they will work in your experimental system, and failing that to use lower NA flat-ended fibres.

1. Pisanello et al., Front Neurosci 13(82) 1-16 (2019) The three-dimensional signal collection field for fiber photometry in brain tissue

2. Pisano et al., Nature Methods 16, 1185-1192 (2019) Depth-resolved fiber photometry with a single tapered optical fiber implant

3. Simone et al., Neurophotonics 5(2), 1-10 (2018) Open-source, cost-effective system for low-light in vivo fibre photometry

4. Mansy, PhD Thesis for the University of Florida (2019) A systematic characterization of fiber photometry for optical interrogation of neural circuit dynamics

An Intense Calculation

Last week I was planning my next optogenetics experiment, and I thought I’d try to find the optimal fibre placement. Normally I just aim the fibre to point close to the site of interest, but I’ve had some less-than-optimal experiments in the past, so it’s definitely time for some irradiance optimisation.

First of all, we need to start with the intensity of light needed to activate the opsin, which in this experiment will be ChR2-H134R. Lin et al. investigated some of the early opsins back in 20091, and found that you get approximately half of the full activation of ChR2-H134R (the EC50) at about 1 mW/mm2. Now, I know that I get, on average, 7.5 mW of blue light out of the end of a 200 µm optic cannula using our opto setup. So, dividing through, that gives an irradiance (i.e. power per unit area) of 7.5/(π × 0.1²), which comes out at around 239 mW/mm2.
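That tip-irradiance arithmetic is simple enough to script. A quick Python version (power divided by the core's cross-sectional area):

```python
import math

# Irradiance at the fibre tip: output power divided by the cross-sectional
# area of the fibre core. A "200 um cannula" has a 0.2 mm core diameter.

def tip_irradiance(power_mw, core_diameter_mm):
    """Irradiance in mW/mm^2 at the fibre tip."""
    radius_mm = core_diameter_mm / 2
    return power_mw / (math.pi * radius_mm ** 2)

print(round(tip_irradiance(7.5, 0.2)))   # ~239 mW/mm^2
```

Note how strongly this depends on core diameter: the same 7.5 mW through a 400 µm fibre would give a quarter of the irradiance.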

Obviously, this is vastly more than enough to activate the ChR2, but how does it spread through the brain? To answer this question, I headed over to Karl Deisseroth’s optogenetics website, where he has an “irradiance calculator” that will estimate the dissipation of light through brain tissue (Figure 1)2.

The light tails off dramatically in the brain; so much so that it is hard to see how deep you can retain ChR2 activation. I plotted the data on a logarithmic scale (which you can also do on the irradiance calculator), and included the EC50 of 1 mW/mm2 as well as an upper “phototoxicity” limit (Figure 2). There isn’t really a clear-cut limit for causing neuronal damage, but an early paper by Cardin et al. found that 100 mW/mm2 was capable of causing phototoxicity, so I’m taking that as my upper limit. This gives a nice “Goldilocks zone” between 1 mW/mm2 and 100 mW/mm2, where we expect good neuronal activation with limited damage.

In depth terms, this zone spans from about 0.2 mm to 1.3 mm from the tip of the optic cannula. Given experimental variance, I would put the ideal range to aim for at about 0.4 mm to 1.1 mm (Figure 2).
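I can’t reproduce the calculator’s exact model here, but a simplified depth model in the same spirit (following the Aravanis et al. 2007 approach of geometric cone spreading combined with a scattering term) lets you estimate the zone boundaries yourself. The coefficients below (scatter coefficient S ≈ 11.2 /mm for mouse brain at 473 nm, n = 1.36) are assumptions, and the exact numbers will differ somewhat from the online calculator’s output, so treat this as ballpark only:

```python
import math

# Simplified Aravanis-style model: irradiance falls off with depth z due to
# geometric spreading of the light cone plus tissue scattering.

def irradiance_at_depth(i0, z_mm, radius_mm=0.1, na=0.22, n=1.36, s=11.2):
    """Estimated irradiance (mW/mm^2) at depth z below the fibre tip."""
    rho = radius_mm * math.sqrt((n / na) ** 2 - 1)
    geometric = rho ** 2 / (z_mm + rho) ** 2   # cone spreading
    scatter = 1.0 / (s * z_mm + 1.0)           # scattering losses
    return i0 * geometric * scatter

def zone_bounds(i0, low=1.0, high=100.0, dz=0.001):
    """Depths (mm) at which irradiance falls below `high` (zone start)
    and below `low` (zone end)."""
    upper = lower = None
    z = 0.0
    while z < 5.0:
        i = irradiance_at_depth(i0, z)
        if upper is None and i <= high:
            upper = z
        if i <= low:
            lower = z
            break
        z += dz
    return upper, lower

print(zone_bounds(239))   # rough (start, end) of the "Goldilocks zone", in mm
```

The usefulness here is less the absolute numbers and more being able to ask "what if" questions, e.g. how the zone shifts with a bigger core or a higher NA fibre.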

So, taking this all together, I can plot the fibre and light scatter onto the mouse brain atlas (Figure 3). My neurone population of interest lies in the mediobasal hypothalamic area surrounding the VMH, but particularly on the side near the fornix. Plotting the expected irradiance, we see that the entirety of the neuronal population lies within the “Goldilocks zone”. Great.

However, I have drawn an estimate of the spread of light from a .22 NA fibre, and you can see that it doesn’t cover the full lateral extent of the neurone population. But this estimate is based on the spread through air, and doesn’t take into account the scatter of light by brain tissue, which will necessarily cause some amount of lateral spread. So, how to quantify this?

This takes us to the final stage of irradiance optimisation, which uses a freely available light scatter tool called OptogenSIM3. I won’t go into details, but essentially you input similar parameters as for the irradiance calculator, plus the position of the fibre in the brain. The program then runs a Monte Carlo simulation to predict light scatter based on the absorption and scattering coefficients of different brain areas, and outputs something like this (Figure 4).

The images aren’t great for visualising details, but note the extent of the green 1 mW/mm2 threshold. The light scatters far wider than I had expected, especially given that this is a low-divergence .22 NA fibre. Either way, it shows that I will hit the vast majority of my targeted neurone population with my planned fibre placement.

One final note from Figure 4: there is backscatter, so light spreads dorsal to the end of the fibre. This means that even if your fibre ends up level with, or even slightly below, your region of interest, you may well still activate the neurones. The issue then becomes whether you are causing damage due to the high irradiance at that point; I have seen, in the brains of previous opto mice, plenty of c-fos at the end of the fibre, even in control mice that don’t express ChR2.

Overall, I’m happy that this irradiance optimisation exercise has helped with my planned fibre placement, and I’m hoping for a good experiment.

1. Lin et al., Biophysical J 96, 1803-1814 (2009) Characterization of engineered channelrhodopsin variants with improved properties and kinetics.

2. https://web.stanford.edu/group/dlab/cgi-bin/graph/chart.php

3. Liu et al., Biomed Opt Express 6(12), 4859-4870 (2015) OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics.