Effective Stimulation Depth

The effective stimulation depth is one of the critical factors determining the success of an optogenetics study, yet it is routinely ignored, particularly by vendors of optogenetic LED systems. Be wary of any vendor of optogenetic LEDs that loudly proclaims the mW power they can achieve, particularly through a fibre with a high NA or large diameter.

A dose of light

Quoting the power at the fibre tip is all well and good, but it doesn’t take into account the spread and scatter of light in the brain. So, unless you intend to have your optic fibre literally touching your neurone population of interest, you must consider how the irradiance (light power per unit area, mW/mm²) drops with distance from the fibre tip.
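
To make this concrete, here is a minimal sketch of how that irradiance falloff can be estimated, following the widely used model from Aravanis et al. (2007) that combines conical (geometric) spread with Kubelka-Munk scattering. The parameter defaults (200 µm fibre, NA 0.37, tissue index 1.36, scatter coefficient 11.2 mm⁻¹) are illustrative assumptions, not measurements from my setup:

```python
import math

def irradiance_at_depth(power_mw, depth_mm, fibre_radius_mm=0.1,
                        na=0.37, n_tissue=1.36, scatter_per_mm=11.2):
    """Estimate irradiance (mW/mm^2) at a depth below the fibre tip.

    Combines geometric (conical) spread with Kubelka-Munk scattering,
    after Aravanis et al. (2007). Parameter defaults are illustrative.
    """
    # Characteristic spread distance from the fibre radius and NA
    rho = fibre_radius_mm * math.sqrt((n_tissue / na) ** 2 - 1)
    # Geometric dilution as the light cone widens with depth
    geometric = (rho / (depth_mm + rho)) ** 2
    # Kubelka-Munk style scattering loss through brain tissue
    scattering = 1 / (scatter_per_mm * depth_mm + 1)
    tip_irradiance = power_mw / (math.pi * fibre_radius_mm ** 2)
    return tip_irradiance * geometric * scattering
```

With 10 mW from a 200 µm fibre, this predicts roughly 318 mW/mm² at the tip, dropping well over a hundred-fold within the first millimetre.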

Irradiance loss in the mouse brain

Now that we have plotted the irradiance loss as we move further from the fibre tip, what next? We need to know at what point the light ceases to be effective in activating our opsin. If you are used to pharmacology, you can think of it as a drug dilution, but one that occurs spatially through the tissue. In that case, we need to find the equivalent of the EC50 for the opsin, which gives us an irradiance “dose” to aim for, below which we lose efficacy.

Opsin characterisation

Fortunately, the early pioneers of optogenetics went to a lot of effort to validate and characterise everything. A 2012 paper from Karl Deisseroth’s lab characterised a range of opsins in exhaustive detail1.

The important info for this post is the determination of the irradiance needed for activation. Mattis et al. measured photocurrent across a range of irradiances, firstly for a selection of stimulatory opsins (Figure 1A). From this they were able to calculate the irradiance needed for half-maximal activation, or the effective power density for 50% activation (EPD50; Figure 1B). This is analogous to the EC50 in pharmacology.
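
As a toy illustration of how an EPD50 is read off a photocurrent dose-response curve (the Hill-curve numbers here are made up, not data from Mattis et al.):

```python
def hill(irradiance, i_max, epd50, n=1.0):
    """Hill-type photocurrent response to light power density."""
    return i_max * irradiance ** n / (epd50 ** n + irradiance ** n)

def estimate_epd50(response_fn, i_max, lo=1e-4, hi=100.0, tol=1e-6):
    """Bisect for the irradiance giving half-maximal photocurrent."""
    target = i_max / 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if response_fn(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A hypothetical opsin with 800 pA max photocurrent and a true
# EPD50 of 1.2 mW/mm^2: the bisection recovers ~1.2.
est = estimate_epd50(lambda x: hill(x, 800, 1.2), 800)
```

In practice you would fit the Hill curve to measured photocurrents first; the point is just that the EPD50 is the half-maximal crossing, exactly like an EC50.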

The EPD50 is helpful in that it provides a measure of the sensitivity of an opsin regardless of its expression level in the cell. Having said that, I don’t think we should disregard the magnitude of the photocurrent, particularly when measured in a directly comparable system, as we see here. My takeaway is that these ChR2-based opsins have an EPD50 of around 0.8–1.5 mW/mm²; the exceptions are the CatCh and C1V1 variants, which have very slow (>50 ms) off kinetics.

Inhibitory opsins

Mattis et al. also investigated a number of inhibitory opsins in the same way (Figure 2). These universally have a much higher EPD50 than the excitatory ChR2-based opsins. My takeaway from this figure is that eArchT3 seems to be the best of these: it has an EPD50 comparable to eNpHR3.0, but a much higher photocurrent. Also, Arch opsins have peak excitation at 520 nm, which is technically easier to generate than the ~590 nm peak of eNpHR3.0.

Right, so now we have a good idea of the threshold irradiance needed to activate our opsin of choice. Ideally, you would back this up by validating it in vitro, using patch-clamp electrophysiology in your neurone system of interest.

Predicting effective stimulation depth

So now I can replot the predicted irradiance loss from the tip of the fibre. This time, adding the threshold irradiance of 1 mW/mm² (as tested for ChR2(H134R) in vitro) highlights how deep I can expect to activate my opsin. This gives us a predicted effective stimulation depth.

Based on this graph, it appears that I will produce effective stimulation of my neurones to a depth of just over 1.2 mm. This is good, as it will allow me to aim the fibre 0.5 mm away from my population of interest, leaving plenty of leeway for experimental variability while still expecting to activate the entire population.
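
Numerically, the effective depth is just the point where the irradiance profile crosses the threshold, which a bisection finds easily. The profile below is an illustrative assumption (roughly a 200 µm, NA 0.37 fibre delivering 10 mW), not my measured data:

```python
import math

def effective_depth(irradiance_at, threshold=1.0, max_depth_mm=5.0, tol=1e-4):
    """Find the depth (mm) where irradiance falls to the threshold.

    `irradiance_at` is any monotonically decreasing function of depth;
    bisection then locates the crossing point.
    """
    lo, hi = 0.0, max_depth_mm
    if irradiance_at(hi) > threshold:
        return max_depth_mm  # still effective throughout the modelled range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if irradiance_at(mid) > threshold:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative profile: geometric cone spread plus scattering loss.
# Parameters (10 mW, 0.1 mm fibre radius, rho and scatter coefficient)
# are assumptions for demonstration only.
def example_profile(z, p_mw=10.0, r=0.1, rho=0.35, s=11.2):
    return (p_mw / (math.pi * r ** 2)) * (rho / (z + rho)) ** 2 / (s * z + 1)
```

With these assumed numbers and a 1 mW/mm² threshold, the crossing lands a little beyond 1.2 mm, in line with the graph.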

In order to simplify this process, I have put this effective stimulation depth calculation into a handy (and free) online tool. Please do try it out, as it should help inform your experimental design.

1. Mattis et al., Nat Methods 9(2), 159-172 (2012) Principles for applying optogenetic tools derived from direct comparative analysis of microbial opsins


Animals in Research

“What do you think about the use of animals in research?”

This is the one question you can always guarantee will be asked in a job interview that has anything to do with the use of animals in research. And it’s actually quite a difficult question to answer well. The answer clearly lies somewhere between “Yeah, I’m fine with it, I don’t care” and “We have no right to use animals like that for our own benefit.” But how do you justify experimentation on animals without coming across as glib or self-serving?

Quite apart from job interviews, I’ve always found this a difficult topic, I guess because it is so emotive and there is so much misinformation surrounding it. Generally, the only time you hear about animal research in the news is when there’s been a particularly horrific protest. And those most able to talk about the reality of animals in research, i.e. the researchers, are scared and drilled into not speaking about it, to the point that only my family and a couple of close friends know what I actually do for a living.

Protesters against animal research.
Anti-vivisection protest in the US

Almost 75% of animal research is performed on mice

Once every year or two, I will come across an anti-vivisection “information” stand at a market or similar, handing out leaflets to persuade the public of the evils of animal research and the pleasure the scientists supposedly take in doing horrific things to the animals. This is obviously not the case, and it’s somewhat indicative of the weakness of their position that they have to cherry-pick and inflate the stats, focussing on the most photogenic species, like dogs, cats and primates.

They completely misrepresent the fact that the vast majority of animal research is performed on mice (Figure 1), and more than 90% in mice, rats or fish.

In fact, there are only a couple of facilities in the country that perform research on monkeys, and all research with chimpanzees and other great apes was stopped in 1998. Speaking of which, the reason the pictures protesters use look so terrible is that they are ancient (mostly from the 80s or earlier, before the current regulations were brought in).

Experimenting on animals is difficult and expensive

I’m not sure how many of the public actually know that you need a license to perform scientific research on animals. I say a license, when in fact you need multiple: a personal license for the researcher actually performing the experiments, a project license for the person (usually professor) in charge of the work, and a site license for the location the experiments will actually take place. And everything needs to be justified and planned beforehand, with expected outcomes and experimental group sizes. Then you need named training and competency officers, a vet, animal welfare officer and the technicians who will actually be caring for the animals.

And we mustn’t forget the Home Office inspectors, who can, and will, drop by to check on the welfare of the animals and make sure all paperwork and training is up to date. All of which means that experimenting on animals is difficult and expensive, and requires huge amounts of training and expertise. So anyone who thinks that overworked and underfunded scientists, quite apart from the moral and legal implications, will be frivolous with their use of animals is deluded.

How to combat misinformation

There are, of course, institutions trying to combat the spread of misinformation, such as the National Centre for the 3Rs (NC3Rs). It is, however, very difficult to get the public interested in science and statistics when compared to emotive pictures and moral outrage, which is why it is so important for those who know better to spread good information about this topic.

But where do you begin, when the scientists have been conditioned to be silent about anything to do with vivisection, and the public are so conditioned to fear the evil scientist? It really comes down to education, both about the realities of animal research – the animals are treated far better than farm animals, but the moral outrage clearly lands heavier on experimentation – and about the benefits to medicine and society that come out of this research.

I’m not going to lecture my readers about all the great advances coming out of animal research; suffice it to say that any medical advance you have ever heard of was borne on the back of a huge amount of scientific research, much of it requiring the use of animals. And this includes many benefits to modern medicine that people may not think of. So even if there are people out there who take a moral stand and refuse any kind of medication because it required vivisection, they still live in a world without smallpox and polio (and, might I add, with a Covid-19 vaccine) thanks to the use of animals in research.

One of my goals with this website and blog is to help spread accurate and interesting information about research involving the use of animals. It is then up to us to be thoughtful and diligent with our use of animals, and make sure their sacrifice is not wasted.


An Influential Choice

We live in a world of convenience and temptation, and it’s difficult to know how to influence food choice in a healthy way. I have been particularly reminded of this since having a toddler, who knows what he wants and has very little impulse control.

We were recently on a walk round the park, him on his little balance bike and me walking the dog. He’s zooming around, so I’m happy to follow his lead, but it doesn’t take long to realise that his zoomies have a definite target, which is the café at the park. The café that just happens to sell ice cream. Soon enough we get there and, surprise surprise, he wants a tasty treat.

But then, we are biologically designed to seek out high-reward palatable foods, and not at all for a modern world with multi-billion pound corporations whose business models depend on us gorging ourselves into morbid obesity. As another example, it only took one or two visits to a certain fast-food chain (parenting is hard, don’t judge me) before the little man recognised the signs for said shop and would request the rewarding food they sell.

It might be obvious, from an evolutionary perspective, why we seek high reward foods, but it’s not so obvious how this is coordinated by the brain. In this post I’ll explore some of what we know about the selection of highly palatable food, why this is important for the control of body weight, and some thoughts on how to influence food choice pharmacologically to improve health.

Starting from the beginning, we’ve known for a long time that giving animals access to palatable foods (high fat and/or sugar) causes an increase in body weight1. The story becomes more interesting when we look at intake related to food choice.

In an early study2, human subjects were confined to a lab for a couple of weeks and either given unlimited access to a monotonous food, or given normal food restricted to the same number of calories as the “monotonous” group. The first group voluntarily decreased their caloric intake (so the second group had theirs decreased to match), and both cohorts lost weight.

The interesting point of this study is that the monotonous group, who voluntarily decreased their food intake, didn’t notice their hunger to the same extent as the calorie-restricted group. This clearly emphasises the importance of the food environment we live in when it comes to food choice.

Such food-choice studies got me thinking about the possibility of influencing food choice pharmacologically. As far as I can tell, all the pharmaceutical attempts at combating obesity aim to administer long-lasting modulators of hunger/satiety (increasing energy expenditure has proven problematic for reasons I may go into another time).

Unfortunately, the neuronal pathways that control food intake are so intertwined with other functions (such as mood and nausea) that you get off-target effects. Furthermore, the receptors and signalling pathways you target naturally compensate to counter the drug, so any effects on food intake and body weight are short-lived.

What if we could administer short-acting compounds that, rather than hammering down our desire to eat with diminishing returns, merely shift our preference away from the unhealthy foods that cause pathogenic weight gain? It doesn’t matter how hungry you are: if your appetite is limited to carrots and broccoli, it is impossible for you to become obese.

But, how would we go about doing this? I believe that some of our more recent knowledge about AgRP neurones hints at a solution. Back in 2015, Chen et al. showed that AgRP neurones become rapidly inhibited in response to sensory detection of food3, but more importantly that the degree of response was related to the palatability of the food (Figure 1).

We have since seen that this AgRP response is a teaching signal for caloric entrainment – the AgRP response to a particular food detection will change over time depending on the caloric value (Figure 2)4.
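
The entrainment idea can be caricatured as a simple delta-rule update, in which the anticipatory AgRP dip tracks a learned prediction of caloric value. This is purely a toy model of the concept, with made-up numbers, not anything taken from the papers:

```python
def update_prediction(predicted, actual_calories, rate=0.3):
    """One delta-rule step: the prediction moves toward the observed value."""
    return predicted + rate * (actual_calories - predicted)

# A palatable but calorie-free food: the learned caloric prediction
# (and hence the anticipatory AgRP dip) shrinks over repeated meals.
predicted = 5.0  # arbitrary starting "expected calories"
for _ in range(10):
    predicted = update_prediction(predicted, actual_calories=0.0)
```

After ten exposures the prediction has decayed to near zero, mirroring how the AgRP response to a food re-calibrates to its actual caloric value.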

We have also seen that driving AgRP neurone activity (with optogenetic stimulation) drives a marked decrease in food preference (Figure 3)5.

So, it appears that AgRP neurones are a fundamental link between sensory detection of food, hunger, and the learned seeking of high-calorie foods. More specifically, the drop in AgRP neurone activity upon sensory detection of a food seems to determine how much of that food the animal wants to eat.

Now, what if we could (briefly) activate AgRP neurones during consumption of an unhealthy meal? I say activate, but it could equally mean limit the inhibition upon detection and consumption of the meal. Classic wisdom would suggest that when you activate AgRP neurones you increase hunger and food intake. And that may happen initially.

However, given the results from Betley et al.5, I would argue that over time, with repeated exposures to the same high calorie meal and activation of the AgRP neurones, you would drive a preference away from that unhealthy meal in the future.

How I envisage this working in practice: we would have short-acting (half-life of 20 minutes or so) modulators of AgRP neurone activity, ideally in an easily administered form, such as in an asthma-type inhaler. An overweight individual who wants to eat better to lose weight and become healthier would then take a hit from an AgRP activator at the start of an unhealthy meal, which will decrease their preference for that food. Conversely, they could take a hit from an AgRP inhibitor at the start of a healthy meal, which will increase their preference for that food.

The goal of this pharmacology is not to alter a person’s hunger in any way, but rather to break the evolutionary drive to overconsume high-calorie foods, and in that way give their willpower a boost in selecting healthy foods. The idea is that you turn any unhealthy food into the “monotonous” type we saw earlier, so the person voluntarily decreases their intake of it. And the best thing is that it hijacks the obscenely effective marketing that companies use to push unhealthy food, and instead links that advertising with unrewarding food.

Great, so I like this idea, but how would we go about showing this experimentally? Well, I would start by continuing on from Betley’s work5, but see if I could use optogenetic stimulation of AgRP neurones to shift preference between foods of unequal palatability.

Ideally, we would provide opto-connected AgRP-ChR2 mice with long-term access to chow and a high energy diet (HED), and set up the optogenetic system to stimulate the AgRP neurones whenever the mice go to eat the HED, but not the chow. Hopefully, the mice would shift their natural preference away from HED to chow.
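
The control logic for such a closed-loop experiment is simple to sketch. Everything here is hypothetical scaffolding (the class and function names are made up, not any vendor’s API), just to show the pairing rule:

```python
class PulseTrain:
    """Toy stand-in for an optogenetics stimulator controller."""
    def __init__(self):
        self.active = False

    def start(self, frequency_hz=20, pulse_width_ms=10):
        self.active = True  # real hardware would begin pulsing here

    def stop(self):
        self.active = False

def closed_loop_step(detected_zone, stim):
    """Pair AgRP stimulation with HED consumption only."""
    if detected_zone == "HED":
        stim.start()
    else:
        stim.stop()

stim = PulseTrain()
closed_loop_step("HED", stim)   # mouse at the HED hopper: light on
closed_loop_step("chow", stim)  # mouse moves to chow: light off
```

In a real rig, `detected_zone` would come from a video tracker or hopper sensor, and the stimulator calls would go to the LED/laser driver.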

If this is successful, the next step would be to mimic the same response using pharmacology – my thought would be to test out a number of known compounds that affect AgRP neurone activity (eg. PYY and CCK, or their antagonists), possibly using combinations to yield a bigger effect.

Well, that’s about as far as I’ve come with this idea of how to influence food choice. I did pitch the concept at lab meeting a few months ago, and it went down about as well as season 8 of Game of Thrones. Oh well, hopefully my loyal readers will find it more interesting than my colleagues.

1. Sclafani and Springer, Physiol Behav 17(3), 461-471 (1976) Dietary obesity in adult rats: similarities to hypothalamic and human obesity syndromes.

2. Cabanac and Rabe, Physiol Behav 17(4), 675-8 (1976) Influence of a monotonous food on body weight regulation in humans.

3. Chen et al., Cell 160, 829-841 (2015) Sensory detection of food rapidly modulates arcuate feeding circuits.

4. Su et al., Cell Reports 21, 2724-2736 (2017) Nutritive, post-ingestive signals are the primary regulators of AgRP neuron activity.

5. Betley et al., Nature 521, 180-185 (2015) Neurons for hunger and thirst transmit a negative-valence teaching signal.


An Illuminating Journey

Back in 2016, I decided to make the leap towards my first optogenetics study. A couple of years previously, I had helped set up targeted intracranial nanoinjections for the lab, which meant we were routinely doing experiments with AAVs (mostly DREADDs) and retrotracers. And it was only a few years before that that our lab had acquired our first Cre line.

So, while the use of transgenic mice in this way was relatively new to us, we were learning quickly and were keen to advance our in vivo capabilities. It was becoming ever more difficult to publish in good journals without showing manipulation of complex behaviours by identified neuronal populations (either with DREADDs or optogenetics) and demonstrating the circuits involved.

However, optogenetics was still quite new, and totally novel to me, and as I’ve said before, one of my failings is my reticence to seek help, so I was figuring this out myself. Not that I was completely alone: I did have a great PhD student to help me, particularly with the in vivo aspects. Anyway, I started by looking at what others had done, focussing on some of the early, high-impact work; I was particularly drawn to work from Scott Sternson and Denis Burdakov, as well as the original pioneers of optogenetics, including Karl Deisseroth. Picking out the common factors in their methodologies, I wrote up the following list of requirements for my first optogenetics study:

  • Use lasers to produce blue light (~470 nm)
  • Light is pulsed at a maximum 20% duty cycle to limit heat damage and phototoxicity; typically 10 ms ON at 10-20 Hz
  • Light is delivered via fibre optics with a rotary joint to a 200 µm fibre into the mouse’s brain
  • Typical light power from the end of the fibre optic cannula (ie. what is actually entering the mouse’s brain) is around 10-15 mW
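
As a sanity check on those pulse parameters, the duty cycle is just pulse width multiplied by frequency:

```python
def duty_cycle(pulse_width_ms, frequency_hz):
    """Fraction of each cycle the light is ON during a pulse train."""
    return (pulse_width_ms / 1000.0) * frequency_hz

# 10 ms pulses: 20 Hz sits right at the 20% ceiling,
# while 10 Hz gives a comfortable 10%.
```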

If I’m honest, setting up one of the laser systems for my first optogenetics study scared me a little. They’re big, expensive, dangerous and difficult to use. Or at least, so it appeared to someone who had never used them, and I would be facing a mountain of paperwork if I wanted to get a laser system approved for use at the University.

It was around this time that we started seeing LED-based optogenetics systems coming on the market, which definitely appealed to me. The problem with LEDs is that the light scatters (Figure 1), making it challenging to get sufficient light through an optic fibre.

Laser vs LED light into optic fibre.

If you want to use LEDs to provide sufficient light output for in vivo optogenetics, you need an extremely high-power light source with very good lensing, and/or to reduce the number of optical connections that lose light along the delivery path (Figure 2).
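
The cumulative effect of those connections is easy to underestimate, because the losses multiply. A quick sketch with assumed per-connection efficiencies (the 70% figure is illustrative, not a measured value for any product):

```python
def delivered_power(source_mw, connection_efficiencies):
    """Multiply out the loss at each optical connection along the path."""
    power = source_mw
    for eff in connection_efficiencies:
        power *= eff
    return power

# e.g. LED-to-patch-cable, rotary joint, and cable-to-cannula joints
# each passing an assumed 70%: a 100 mW source delivers only ~34 mW
# at the fibre tip.
tip_mw = delivered_power(100, [0.7, 0.7, 0.7])
```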

Looking at the possible applications of LEDs, I could safely discount implanting micro-LEDs into the brain (Figure 2D), due to the highly advanced nature of that method and the fact that nobody sells them. Head-attached LEDs (Figure 2C) were also out, as there didn’t seem to be any trustworthy versions for sale, although they do lend themselves to wireless optogenetics, which appeals to me, just not for a first optogenetics study.

So, between the “normal” desktop-mounted LEDs (Figure 2A) and the intermediate rotating LEDs (Figure 2B), there were two options on the market that seemed likely to work. I say this because it was very rare for any of these manufacturers to actually state what the light output from the fibre cannula would be in an experiment; hats off to Plexon and Prizmatix as the only ones that seemed to do this.

Number of optical connections in different in vivo optogenetics setups.

So, I had narrowed down my options to the Prizmatix desktop LED1 and the Plexon rotary PlexBright system2. However, my distrust of optical connections, for fear of excessive light loss, led me to pick the latter. I had already tested an AAV ChR2 construct in vitro, so, together with my experience doing targeted AAV-DREADD injections and cementing ICV cannulae into mouse brains, I was ready for my first optogenetics study.

As Ed (my PhD student) was already working on NPY/AgRP neurones and feeding behaviour, we had the AgRP-cre mouse and we both thought that stimulating AgRP neurones would be the best initial experiment. I maintain you always want to go for the low-hanging fruit when starting anything new.

Ed and I assembled half a dozen AgRP-cre mice, injected AAV-DIO-ChR2-mCherry into the arcuate and cemented an optic fibre pointing at the same place. Then came the waiting game: two weeks while the transduced neurones ramped up expression of ChR2. We rigged up the Plexon rotary LEDs (we stuck them to a shelf above a bench using electrical tape), wrestled with the Plexon Radiant software to produce a nice stimulation pattern, and finally connected some mice to the ends of the optic fibres.

I can still remember the day we first switched on the LEDs – without a doubt the best moment I’ve ever had in science, and to be honest one of my best in general. How can a simple wedding, or the birth of a child, compare to watching a mouse gorge itself because you flicked a switch on an LED? Absolutely magnificent.




Of Mice and the Internet of Things

Ever since the long lost time of my PhD (about a decade ago), I have been excited by telemetry. More specifically, the use of telemetry and wireless technology to obtain high quality physiological data from mice.

During my PhD, I used telemetry to record ECG in transgenic mice, using DSI’s transmitters. ECG was actually my second choice for investigating cardiovascular control in our knockout strain, but the blood pressure transmitters were too challenging for me to be confident in spending that much of our limited grant funding on them.

The reason we wanted blood pressure recordings is that blood pressure is a much more reliable readout of stimulation of the cardiovascular system, as there are many reflex controls on heart rate that make it tricky to understand exactly what is going on (for example, if you stimulate cardiac output, you might well increase heart rate along with blood pressure, but then your baroreflex kicks in and the heart rate drops). As it turns out, I was able to delve into heart rate variability analyses using the ECG transmitters, which formed a large part of my thesis, so it all turned out fine.
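
For anyone curious, one of the standard time-domain HRV measures, RMSSD, is simple to compute from beat-to-beat (RR) intervals:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals,
    a standard time-domain measure of heart rate variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Larger RMSSD reflects greater beat-to-beat variability, commonly interpreted as an index of parasympathetic influence on the heart.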

Anyway, I have more recently been using DSI telemetry to investigate body temperature and locomotor activity in mice, but have found myself getting annoyed. Between the surgery and singly housing animals, crappy battery life and expensive refurbs, short range recording and signal dropouts, it’s been getting on my nerves. And it’s 2021, why are we still using transmitters and recording technology that was developed 20 years ago?

After some time with the niggling feeling at the back of my mind that there must be a better way, I had a conversation with my dad about something he’s been working on (he’s technically retired, but is working with an old friend from the oil drilling business) involving the Internet of Things. This is one of those terms I’d heard, but thought was a bit of a gimmick, like amphibious cars, or smartphones.

In fact, it turns out technology has reached the point where everything can be connected. For example, from the industrial sector that he was talking about, they can monitor the temperature of a certain piece of machinery, the pressure inside the system, performance indicators, pollution levels etc. Really, anything that might possibly want monitoring can have a sensor placed inside, which will be quiescent until certain parameters are met, and then it pings out a signal. This means that the battery drain is negligible, and the sensors can remain in place for years. Hearing this, I was excited to check out the state of the technology for my experiments; as researchers we are prone to just use the same as we always have. Here’s what I envisaged:

  • A transmitter that is small enough to implant through a (fat) needle, negating the need for pesky surgery
  • The signal is long-range enough, and includes identifying information, that you can have a single receiver for a number of group-housed mice
  • Implants are single-use – cheap(er) than DSI and disposable, so no faffing with refurbs and sterilisation

It’s possible that my desires were too restrictive, particularly with regards to the maximum size, because after much internet scouring, the best I was able to find was implantable ID chips from Unified Information Devices (UID).

These are injectable RFID chips that are primarily used for mouse identification. Apparently, such things are fairly common in industry, where you would subcutaneously implant every mouse with an RFID chip, allowing you to essentially scan a mouse like a barcode and it brings up all the relevant information about that animal.

The UID implants take the identification a step further, also providing a temperature readout along with the animal ID. Unfortunately, this normally requires you to “beep” the mouse with your reader at very close (likely skin-contact) range. However, UID also produce a “mouse matrix” that can read the info (including body temperature and movement tracking) from outside the cage. They are quite pricey, though.

The reason this is needed is that the RFID chips don’t have a battery, instead the tiny microchip is temporarily powered by the electromagnetic waves from the reader itself (same as the chips in most modern cash/credit cards).

So, I’ve been thinking: couldn’t you put a tiny battery inside and use low-powered, infrequent data transmission? You would only have to transmit every 5 or 10 minutes, so surely even a tiny battery could manage that? There is a product called Anipill, which takes a similar approach. Their 1.7 g implant sends out data at intervals from 1 minute to 1 hour, and you can record from a number of animals (up to 8) simultaneously with a single receiver, which improves animal welfare by allowing group housing. This seems to be exactly what I had been thinking of, but with a capsule size of around 18 x 9 mm, it is far bigger than I had wanted.
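
A back-of-envelope calculation suggests the battery budget is at least plausible. All the numbers here are my own assumptions, not vendor specifications:

```python
def battery_life_days(capacity_mah, sleep_ua, tx_ma, tx_ms, interval_s):
    """Rough battery life for a transmit-and-sleep duty cycle.

    All parameters are illustrative assumptions: capacity in mAh,
    sleep current in uA, transmit current in mA, transmit burst in ms,
    and the interval between transmissions in seconds.
    """
    # Average current: sleep current plus the transmit burst, duty-weighted
    avg_ma = sleep_ua / 1000.0 + tx_ma * (tx_ms / 1000.0) / interval_s
    hours = capacity_mah / avg_ma
    return hours / 24

# An assumed 8 mAh micro-cell, 1 uA sleep, 10 mA transmit for 20 ms
# every 5 minutes comes out at roughly 200 days.
days = battery_life_days(8, 1, 10, 20, 300)
```

Even granting generous margins for radio overhead, months of operation from a very small cell does not look physically absurd; the hard part is the physical size of the cell and antenna, not the energy budget.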

Sadly then, if this company can’t make transmitters anywhere near small enough for injection, it’s probably not possible, at least with current technology. But this is an area that is only improving as technology advances, so I will not lose interest so easily.

AI in neuroscience research

The field of neuroscience is rapidly evolving, and with the help of artificial intelligence (AI), it has the potential to grow even faster. The use of AI in neuroscience research, particularly in preclinical academic research, can help researchers gain new insights into the complex workings of the brain, ultimately leading to new treatments and cures for neurological disorders. In this blog post, we will explore some of the potential uses of AI in neuroscience research.

1. Data Analysis and Interpretation

One of the biggest challenges in neuroscience research is the analysis and interpretation of large and complex datasets. With the help of AI, researchers can automate data analysis, allowing them to quickly identify patterns, make predictions, and draw conclusions. This can save researchers significant amounts of time and help them to identify potential areas of interest.

2. Disease Modeling

AI can be used to create models of neurological disorders, allowing researchers to better understand the mechanisms behind the diseases. This can help in the development of new treatments and therapies for conditions such as Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis. By modeling these diseases, researchers can also identify potential targets for drug development and test the efficacy of new treatments.

3. Brain-Computer Interfaces

Brain-computer interfaces (BCIs) are devices that allow humans to control computers and other devices using only their thoughts. AI can be used to analyze brain signals and improve the accuracy and reliability of BCIs. This can have significant applications in the medical field, allowing individuals with neurological disorders or injuries to regain control of their limbs or communicate with others.

4. Drug Discovery

AI can help accelerate drug discovery by predicting the efficacy of potential drug candidates and identifying new targets for drug development. This can help reduce the time and cost of drug development, ultimately leading to faster and more effective treatments for neurological disorders.

5. Personalized Medicine

Finally, AI can help to personalize medicine by identifying the most effective treatments for individual patients based on their unique genetics and physiology. This can help reduce the risk of adverse reactions and improve patient outcomes.

In conclusion, AI has the potential to revolutionize neuroscience research by providing new tools and insights into the complex workings of the brain. While there are still challenges to overcome, such as the need for high-quality datasets and ethical considerations, the use of AI in preclinical academic research can lead to new treatments and cures for neurological disorders, ultimately improving the lives of millions of people worldwide.

Ok, now let me tell you a secret: that wasn’t written by me, but by AI. Specifically ChatGPT. If you haven’t heard of it, ChatGPT is an AI chatbot. It has advanced natural language processing (NLP) abilities, which means that you can write a question and it will answer.

For the above example, I prompted ChatGPT to “Write a blog post about the potential uses of AI in neuroscience research, particularly preclinical academic research.” You can really ask it to do anything, with whatever restrictions, and it will do it. It’s really quite impressive.

There are some caveats, though. It isn’t always factually correct. And in fact, if you ask it to write something scientific, it might invent (plausible-sounding) citations. So, beware of copying the output indiscriminately.

Other developers have used the GPT framework to build AI chatbot tools. There are loads coming out all the time, so I won’t list them. But I will mention one that I found interesting, called VenturusAI, which analyses a business idea.

I’ll just wrap up by saying that I am not an AI expert, hence I won’t be doing a deep analysis on it. But it is a fascinating field of technology that I’m sure will play an ever increasing role in research, as well as in other aspects of our lives.

I also promise not to use AI to write any more of my blog posts for me. It’s cheating.

Practical uses of 3D printing in an electrophysiology lab

I have mentioned my 3D printer before, and how useful I have found it. Today, I’ll explain some of the practical uses I’ve found for 3D printing in electrophysiology. The point is that electrophysiology equipment is both extortionately expensive and annoyingly non-compatible. So, it is often quicker, cheaper and easier to design and print a “thing” than try to buy something to fit your particular need.

Build-a-bath workshop

A year or two ago, I was setting up our third (at the time unused) rig for calcium imaging. We had various bits of baths, but no complete set, and it would cost a (relatively) large sum of money to buy a replacement. I found that what I was missing was the “bath” bit (I had the holder). So I measured up one of our existing ones and designed a reasonable copy in AutoCAD:

3D design for an electrophysiology bath insert.

My printer was able to make it with a very smooth base, which is the crucial aspect for obtaining a watertight seal. I installed it on the calcium imaging rig, it worked well, and is still in use there to this day. Oh, and for anyone who’s interested, I have made the 3D design available on Thingiverse.

Moving an LED

My loyal readers will know that it was around this time that the light source for the calcium rig died. A replacement LED source would cost anywhere from £3k to £15k, depending on how many colours I wanted access to. However, we had animals ready for experiments at the time, and even if we had the money, it could take weeks for new kit to arrive.

Luckily, we had a blue LED of the correct wavelength attached to one of our other rigs, where it was no longer needed. I had purchased it to do optogenetic stimulation, but we had switched opto’s to the third rig.

Anyway, this seemed like an easy fix: just swap it over. But of course, it’s never that easy, because I wanted to move the LED from an Olympus microscope to a Zeiss, and the manufacturers do not make it easy on the consumer by sharing common fittings.

So, I measured up the fitting on the LED and the back of the Zeiss fluorescence port. I then designed a 2-part “sleeve” that would modify the Zeiss port to resemble the back of an Olympus:

3D design for an Olympus-to-Zeiss microscope fluorescence adapter.

I used cable ties to hold it tight on the Zeiss fluorescence port. The benefit of cable ties over something more permanent like glue is that they can just be cut off if/when the LED needs changing. The LED now fitted snugly onto the back of the microscope and, after some fiddling with the data and control connections, was fully functional for calcium imaging.

A “lab things” service

The main point I want readers to take away from this post is the usefulness of 3D printing for electrophysiology labs. I would strongly recommend that anyone who performs hands-on lab work like electrophysiology consider investing in a 3D printer. They are actually quite cheap nowadays (mine was about £250 a few years ago), and I’m sure they’ll save you a lot of time and money in the long run.

In fact, the biggest investment in 3D printing things yourself is the time it takes to learn 3D CAD software and to optimise the print process itself. So, if you want something custom made, but would prefer not to have to figure it out yourself, just head over to the Services page and send a request. You never know, I might well be able to save you a lot of time, effort and money.

Validating in vivo optogenetics LED systems

One of the most challenging aspects of starting in vivo optogenetics is the equipment. In particular, how do you know which optogenetics stimulation systems will work for your purpose? I’m a big fan of LED’s, because of how cheap and easy they are to use compared with lasers. However, the high degree of scattering can make it challenging to obtain sufficient brightness for in vivo optogenetics.

Today, I will be investigating the most common commercially available in vivo optogenetics LED systems. Specifically, I’ll be predicting the effective stimulation depth of their LED’s against the most commonly used opsins.

See below the opsins I’m investigating, along with the peak wavelengths and typical activation thresholds. Included are the papers I referenced for the irradiance thresholds.

Stimulation wavelength and irradiance threshold for common opsins.

Now I have the reference values to aim for. The next step is to check the manufacturers’ websites for light power output from their in vivo optogenetics LED systems, find the most appropriate LED for each opsin, and run it through the Depth calculator.

A brief note on my analysis: I use the published fibre characteristics from each vendor and estimate effective stimulation depth in “mixed” brain matter. In each case, I have picked the nearest/brightest LED to the opsin. I have also colour coded the reported stimulation depths to give an easy indication of experimental effectiveness.
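For anyone wanting to reproduce this kind of analysis, here is a minimal sketch of the calculation in Python (my web calculator itself is written in Javascript). It applies the Aravanis et al. irradiance model and bisects for the depth at which irradiance falls to the opsin threshold. The power, fibre and threshold values in the example are illustrative placeholders, not any particular vendor’s numbers.

```python
import math

def stim_depth(power_mw, core_um, na, thresh_mw_mm2, n=1.36, S=11.2):
    """Depth (mm) at which irradiance falls to the opsin threshold.

    Aravanis et al. (2007) model: I(z) = I(0) * rho^2 / ((S*z + 1) * (z + rho)^2),
    where rho = r * sqrt((n/NA)^2 - 1); n and S here are for mouse grey matter.
    """
    r = core_um / 2000.0                          # core radius in mm
    rho = r * math.sqrt((n / na) ** 2 - 1)
    i0 = power_mw / (math.pi * r ** 2)            # irradiance at the fibre tip (mW/mm^2)

    def irradiance(z):
        return i0 * rho ** 2 / ((S * z + 1) * (z + rho) ** 2)

    if irradiance(0) < thresh_mw_mm2:             # too dim even at the tip
        return 0.0
    lo, hi = 0.0, 10.0                            # bisect for the threshold crossing
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if irradiance(mid) > thresh_mw_mm2:
            lo = mid
        else:
            hi = mid
    return lo

# e.g. a hypothetical 10 mW output from a 200 um, 0.66 NA fibre,
# against a ChR2-class threshold of ~1 mW/mm^2
print(round(stim_depth(10, 200, 0.66, 1.0), 2))  # → 0.82
```

Swap in a vendor’s quoted power and your opsin’s irradiance threshold to get the colour-coded depths shown in the tables below.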


I purchased the Plexbright system back in 2016, and it has worked well for activation of blue-responsive opsins. They also sell a wide range of colours to target different opsins. I have picked out their reported light power output from a 200 µm 0.66 NA fibre:

Effective stimulation depth for Plexon Plexbright LED's for a range of common opsins.

Really only the blue 465 nm LED is bright enough to have a stimulation depth approaching 1 mm for classic opsins. stGtACR2 and ChRmine are so sensitive that you can easily stimulate them even with relatively dim LED’s. That’s why they are the favourites for people wanting to do bidirectional optogenetics or wireless opto’s7.


The Prizmatix UHP LED is the other in vivo optogenetics LED system that I have used (purchased by a collaborator). Again, I’ve only used the blue LED, which worked well. I have picked out their reported light power output from a 200 µm 0.66 NA fibre:

Effective stimulation depth for Prizmatix UHP LED's for a range of common opsins.

As with Plexon, the blue LED is the best. Although, in this case the green 520 nm LED provides decent activation of inhibitory eOPN3.


Doric are well known for their photometry system, maybe not so much for in vivo optogenetics. They only sell a high-powered in vivo optogenetics LED in blue. This time, the reported power values are from a 0.63 NA fibre:

Effective stimulation depth for Doric optogenetics LED's for a range of common opsins.


Mightex make a wide array of optogenetics equipment. Their in vivo LED’s are reported from a 400 µm 0.22 NA fibre:

Effective stimulation depth for Mightex optogenetics LED's for a range of common opsins.

A caveat for these Mightex figures: their published power output figures aren’t explicit that the power is measured from the end of an optic fibre cannula. It’s possible they are reporting the output from the optic cable, which means the experimentally usable value will be lower.

So which system should you buy?

I think it’s clear from my analysis here that most of the optogenetics LED systems you can buy for in vivo optogenetics are, quite simply, not fit for purpose. And I have only selected the most relevant wavelengths for my analysis; most of the vendors sell a much wider range of colours.

Effective stimulation depth for optogenetics LED's for a range of common opsins.

Looking at the effective stimulation depth, I can understand if people would want to use stGtACR2 and ChRmine, and forget about any other opsins. However, those come with their own limitations: stGtACR2 is soma targeted, so you can’t investigate circuits, while ChRmine has super slow kinetics which makes it unusable for many optogenetic applications. My point here is to be careful with your selection of equipment and opsins to match your experimental requirements.

I will happily recommend Plexon’s Plexbright LED’s and Prizmatix UHP LED’s for in vivo optogenetic stimulation at blue wavelengths. If you get those and use ChR2(h134r) for activating and stGtACR2 for inhibiting neurones, you should be fine. For other colours or other opsins? It’s not so clear cut. Currently, the best option is probably to buy a laser. In fact, Doric sell an interesting thing called a Liser, which is kind of like a hybrid between an LED and a laser, and I would definitely investigate it for non-blue opto’s.

1. Mattis et al. Nat Methods 9(2), 159-172 (2012) Principles for applying optogenetic tools derived from direct comparative analysis of microbial opsins

2. Mahn et al. Nat Comms 9, 4125 (2018) High-efficiency optogenetic silencing with soma-targeted anion-conducting channelrhodopsins

3. Mahn et al. Neuron 109, 1621-1635 (2021) Efficient optogenetic silencing of neurotransmitter release with a mosquito rhodopsin

4. Marshel et al. Science 365, eaaw5202 (2019) Cortical layer–specific critical dynamics triggering perception

5. Klapoetke et al. Nat Methods 11(3), 338-346 (2014) Independent optical excitation of distinct neural populations

6. Chuong et al. Nat Neurosci 17(8), 1123-1129 (2014) Noninvasive optical inhibition with a red-shifted microbial rhodopsin

7. Li et al. Nat Comms 13, 839 (2022) Colocalized, bidirectional optogenetic modulations in freely behaving mice with a wireless dual-color optoelectronic probe

A Lamplight in dark places

Since its development over 15 years ago, optogenetics has exploded in popularity in research. Along with this increase in interest and use has come a profusion of optogenetic tools, including excitatory and inhibitory opsins across a wide range of timescales and light sensitivities.

Optogenetics publications have been increasing for the past 15 years.

However, one type of opsin has consistently failed to materialise: a long-term, super-sensitive optogenetic silencer. All the *good* inhibitory opsins have very fast kinetics and low sensitivity, with thresholds in the 3-5 mW/mm2 range.

A recent paper by Rodgers et al. changes all that1. They have discovered a novel opsin from the lamprey, which they have named “Lamplight”. It’s a Gi-coupled receptor (unlike most opsins, which are light-gated channels), which means that it is slower to signal but orders of magnitude more sensitive. In fact, its EC50 of 2.4 µW/mm2 is 1000-fold more sensitive than classic inhibitory opsins like Arch and eNpHR3.0.

However, as always, the sensitivity of an opsin is inversely correlated with its kinetics. As expected, then, Rodgers et al. show that Lamplight is slow to act and long-lasting, with little to no diminishing of its effect after 90 seconds. In addition to its extremely high sensitivity, Lamplight also has some other interesting qualities (Figure 1):

  • Scalable response – increasing light levels will produce a higher (stable) response from the opsin.
  • Switchable – the opsin is activated by 405 nm light and inhibited by 525 nm light. This has the added benefit that it won’t be accidentally activated by ambient light, which contains far more green than UV/violet.

It should also be noted that Lamplight will limit neuronal damage, both phototoxic and electrophysiological. Normal opsins can stress (and potentially damage) neurones following chronic activation. This is not an issue with Gi signalling; you really can’t overactivate it.

Based on these unique characteristics, I can imagine Lamplight being a useful opsin for specific uses:

  • Extremely sensitive and long-term inhibition would be useful with lower-power wireless optogenetics, or for a single-stim inhibition that could work similarly to injecting CNO with inhibitory DREADDs.
  • Scalable inhibition for probing relative importance of a neurone population to mediate different behaviours/physiology. For example, we had an experiment where increasing the ChR2 stimulation frequency would shift the response from increasing glucose levels to aggressive/escape behaviour.
  • Using 2-colour opto stimulation to turn neurone populations on/off over medium-long term time scales.

Overall, I think this is an interesting opsin with potentially important applications for in vivo research. It is not yet available on Addgene, so anyone who is interested in this opsin should contact the lead author Rob Lucas.

1. Rodgers et al. EMBO Rep 22, e51866 (2021) Using a bistable animal opsin for switchable and scalable optogenetic inhibition of neurons

Optogenetic Stimulation Frequencies

Today I’ll be talking about the importance of optimising in vivo optogenetics frequency, having previously looked at the pulse on-times. All too often, I will see papers or talk to colleagues who use an unfeasible stimulation frequency for their in vivo optogenetics. For example, where I work in the hypothalamus, you often see stimulation at 20 Hz. And from my experience of patch clamping multiple neurone types in the hypothalamus, they just don’t fire that fast.

If you’re not an electrophysiologist, it might not be obvious, but action potentials are energetically expensive. So, neurones will only fire quickly if they need to. In fact, they will only be able to fire quickly if it is required for their function. Which it is for cognitive processing, but not for the much simpler processing required in many other brain regions.

Back to the beginning

As usual, the first thing we do is go back to the early optogenetics publications from Karl Deisseroth. In their 2012 Nature Methods paper, Mattis et al. performed a thorough investigation of the opsins available at the time1. And, despite being a decade old, the data still stand, and are still very useful. I strongly recommend this paper to anyone who plans to perform optogenetic studies. It’s a huge paper with bags of useful info.

Mattis et al. measured spike fidelity, i.e. the success rate of the cell in producing an action potential in response to a flash of light. They used a high light intensity, so there is no issue of insufficient light to activate the opsin. Instead, the loss of fidelity comes from the neurone being unable to keep up. As I’ve mentioned before, the neurone needs to recover its membrane potential below a certain threshold or it won’t be able to trigger another action potential; chronically overstimulate a neurone and it becomes silenced.

I’ve shown here a comparison of ChR2h134r (also called ChR2R) and ChIEF (Figure 1A). The black lines show the spike fidelity to light pulses, and the grey lines show the fidelity to electrical pulses. Essentially, the grey lines show what the cell is intrinsically capable of, whereas the black lines show how it fares under optogenetic control. Notice how the ChR2h134r loses fidelity at 20 Hz, whereas ChIEF only loses it at 40 Hz. This is largely because of the “off kinetics”, which means that ChR2h134r takes a lot longer to close than ChIEF (Figure 1B). And it’s only after the opsin has closed that the cell can recover its membrane potential.

Optogenetic spike fidelity of ChR2 and ChIEF, from Mattis et al.

A self test

Luckily I have access to an electrophysiology rig, so I was able to test spiking fidelity in my target neurones: AgRP neurones of the arcuate nucleus of the hypothalamus (Arc). I transfected AgRP neurones with ChR2h134r, cut ex vivo slices and patched using current clamp. I then flashed the neurone with increasing frequencies of 470 nm light at a high intensity (Figure 2).

Optogenetic spike fidelity in an AgRP neurone

As you can see, the cell responds nicely with big action potentials at low frequency stimulation. But the action potentials disappear even at 10 Hz. Remember that you really need the action potential to get the response you want, whether you are stimulating the soma or the terminal. Otherwise, you really don’t know what you’re doing to the neurone, although I strongly suspect you’ll be silencing the cells. Either way, I don’t recommend flashing faster than you are able to produce action potentials.

In fact, to demonstrate why you need to limit your flashing frequency, I’ve zoomed in on the 5 Hz flashing and aligned the electrical recording with a visual representation of the likely open/closed state of the ChR2 in those cells (Figure 3). I’ve drawn the light pulses and used the published τOFF to estimate the ChR2 channel close time1.

Now imagine that you have additional pulses in between the ones shown (1 extra for 10 Hz or 3 extra for 20 Hz). Between the slow closing of the ChR2 and the slow recovery of the neurone’s membrane potential, it’s easy to see why the neurone loses firing fidelity above 5 Hz.
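To put rough numbers on this, here’s a quick sketch of the fraction of channels still open when the next pulse arrives. It assumes a single-exponential decay of the photocurrent with τOFF ≈ 18 ms for ChR2h134r (my reading of the Mattis et al. value; treat it as approximate) and a 5 ms pulse on-time.

```python
import math

TAU_OFF = 18.0    # ms; approximate tau-off for ChR2h134r (my reading of Mattis et al.)
PULSE_MS = 5.0    # light pulse on-time (ms)

for freq_hz in (5, 10, 20, 40):
    dark_ms = 1000.0 / freq_hz - PULSE_MS          # dark interval between pulses
    still_open = math.exp(-dark_ms / TAU_OFF)      # single-exponential channel closing
    print(f"{freq_hz:>2} Hz: {still_open:.0%} of channels still open at the next pulse")
```

With these assumptions, essentially all channels have closed before the next pulse at 5 Hz, but roughly 8% are still open at 20 Hz and about a third at 40 Hz, which is why the membrane potential never fully recovers at higher frequencies.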

An important message

One of the other tests done by Mattis et al. was to simply turn the blue light on continuously for 1 second in two different neurone types. A “regular-spiking” neurone fires one action potential before being silenced, whereas a “fast-spiking” neurone fires continuously through the illumination. The point here is that some neurones can fire at 200 Hz under optogenetic stimulation (mainly cortical neurones). And if your research involves them, you probably know that.

But no neurone I’ve ever investigated was capable of anything close to that rate. So please check the firing rate your neurone is actually capable of before deciding your in vivo optogenetics frequency. Or, if you are not able to do it yourself or get a friend to, be very careful with your stimulation paradigm. And feel free to ask someone; it never hurts to ask for help.

1. Mattis et al. Nat Methods 9(2), 159-172 (2012) Principles for applying optogenetic tools derived from direct comparative analysis of microbial opsins

Light Penetrance in Different Brain Regions

This post explains a further addition to my depth calculator. In this update, I’ve added options to predict optogenetics stimulation depth in different brain regions. I’ve mentioned before that, when calculating depth of light penetrance, the wavelength of visible light matters far less than the density of the brain matter.

Light scattering measurements

Anyway, I went back to an early paper calculating light scattering in different types of brain tissue (Figure 1)1. As you can see, absorption of light is irrelevant compared to scattering (note the logarithmic scale). Also, the scattering doesn’t change much across visible wavelengths for grey matter, and not at all for white matter.

Visible light scattering in different types of brain matter.

So what I’ve done is to take the following estimates of the scattering coefficient (per mm) for each type of brain tissue:

  • Grey matter (blue light): 11.2 (taken from Aravanis et al.2)
  • Grey matter (red light): 9
  • Thalamus (intermediate scattering): 20
  • White matter: 40
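To get a feel for what these coefficients mean, here’s a tiny sketch of the scatter term alone (the 1/(S·z + 1) factor from the Aravanis et al. model, ignoring geometric spread) at 0.5 mm depth for each tissue type:

```python
# Scattering coefficients (per mm) from the list above
S_VALUES = {
    "grey (blue)": 11.2,
    "grey (red)": 9.0,
    "intermediate": 20.0,
    "white": 40.0,
}

def scatter_transmission(z_mm, S):
    """Scatter-only transmission from the Aravanis et al. model: 1 / (S*z + 1)."""
    return 1.0 / (S * z_mm + 1.0)

for label, S in S_VALUES.items():
    print(f"{label:>12}: {scatter_transmission(0.5, S):.1%} of light left at 0.5 mm")
```

Just half a millimetre in, grey matter retains roughly three times as much blue light as white matter (about 15% vs 5% from scatter alone), which is why the tissue type dominates the predictions below.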

Predicting light penetrance

I’ve then plotted the light penetrance using the calculations from Aravanis et al. (also used by Karl Deisseroth) with these different scattering coefficients (Figure 2). Note the logarithmic scale. As I mentioned, shifting to red light makes very little difference to the light penetrance compared with changing the density of brain matter.

Light penetrance in different types of brain matter for optogenetics stimulation.

In order to make the light scattering relevant to optogenetics stimulation depth for in vivo experiments, I have updated my optogenetics depth calculator to include scattering in different types of brain tissue. Using the new calculator, I have predicted the following effective depths in different brain tissue using my standard parameters:

  • Grey matter: 1.57 mm
  • Intermediate: 1.24 mm
  • White matter: 0.92 mm

As you can see, the scattering level of the tissue has a dramatic effect on your effective in vivo optogenetic stimulation depth. My suggestion for experiment planning is to use the “intermediate” value as default, and pick one of the others if you have a good idea of your target brain regions.

For example, if you’re working in the cortex, which is heavily “grey”, pick the low scattering value. On the other hand, if you are targeting the brain stem, which is densely “white”, pick the high scattering value. If you want a more accurate predictor of light spread, you need to do more complex modelling.

1. Yaroslavsky et al. Phys Med Biol 47, 2059 (2002) Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range

2. Aravanis et al. J Neural Eng 4, S143-S156 (2007) An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology

Amy’s Microtome Counter

Yesterday something awesome happened: Amy (one of the other postdocs in the lab) came to me with an idea for a piece of kit that would help her in the lab. She found that when she was cutting brain sections on the microtome, she would sometimes zone out and forget which plate she was up to. Her request was for a microtome section counter.

The idea was to have something that would light up an indicator of which plate she needed to put the next section into. It would also require a sensor of some kind that would be activated each time she placed a section. We quickly came up with a workable idea that would use an IR sensor, and the user would sweep their hand close to the sensor after each brain section was added.

This project is just asking for an Arduino, so the first thing I did was sketch out some code:

// Specify pin connections
int IR = 2;
int LED1 = 3;
int LED2 = 4;
int LED3 = 5;
int LED4 = 6;
int toggle = 7;

// Specify other variables
int count = 0;                      // Counter
int maxcount = 4;                   // Number of plates to count
unsigned long delayTime = 500;      // Delay time to prevent multiple activations (ms)
unsigned long lastTime = 0;         // Time stamp from last activation

void setup() {

  // Specify pin setup
  pinMode(IR, INPUT);
  pinMode(LED1, OUTPUT);
  pinMode(LED2, OUTPUT);
  pinMode(LED3, OUTPUT);
  pinMode(LED4, OUTPUT);
  pinMode(toggle, INPUT_PULLUP);
}

void loop() {

  // Toggle to select number of plates being used
  if (digitalRead(toggle) == LOW) {
    maxcount = 3;
  } else {
    maxcount = 4;
  }

  // Detection by IR sensor
  if (digitalRead(IR) == LOW) {
    // Unsigned subtraction handles millis() rollover, so no overflow reset is needed
    if (millis() - lastTime > delayTime) {  // Have you exceeded time since last activation?
      count++;                              // Add to counter
      lastTime = millis();                  // Specify timestamp of IR activation
    }
  }

  // Overflow counter back to start
  if (count >= maxcount) {
    count = 0;
  }

  // Switch on each LED in turn depending on counter
  digitalWrite(LED1, count == 0 ? HIGH : LOW);
  digitalWrite(LED2, count == 1 ? HIGH : LOW);
  digitalWrite(LED3, count == 2 ? HIGH : LOW);
  digitalWrite(LED4, count == 3 ? HIGH : LOW);
}

The electronics is fairly straightforward:

Amy's microtome section counter schematic.

A small amount of assembly later, and we had a working prototype:

Microtome sectioning counter
Amy’s microtome counter

Amy has promised to test it. I’ll let you guys know how it goes.

A Powerful Issue

This will be a short post building off a previous blog post about my depth calculator tool. The principle is also based on Karl Deisseroth’s irradiance calculator, only this time the calculation is reversed. The goal is to predict the in vivo optogenetics power required for your experiment.

So, instead of predicting effective opsin stimulation depth, you input the effective depth you want for your study, and the calculator will produce an estimate of the light power you need out of the end of your optic cannula:

In vivo optogenetics power calculator input values.

As with my depth calculator, I have got some recommended starting values for fibre core, NA and irradiance threshold. Then press “Calculate” and it will estimate the power you need to achieve effective stimulation at your desired depth:

In vivo optogenetics power calculator output.
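Under the hood, the reverse calculation is just the Aravanis et al. model rearranged, with no root-finding needed. Here’s a minimal sketch in Python (my calculator itself is Javascript); the fibre, depth and threshold values in the example are illustrative rather than my recommended defaults.

```python
import math

def required_power(depth_mm, core_um, na, thresh_mw_mm2, n=1.36, S=11.2):
    """Power (mW) needed at the fibre tip for irradiance at depth_mm to meet threshold.

    Rearranges the Aravanis et al. model I(z) = I(0) * rho^2 / ((S*z + 1)*(z + rho)^2)
    to I(0) = I(z) * (S*z + 1) * (z + rho)^2 / rho^2, then power = I(0) * tip area.
    n and S are for mouse grey matter.
    """
    r = core_um / 2000.0                               # core radius in mm
    rho = r * math.sqrt((n / na) ** 2 - 1)
    i0 = thresh_mw_mm2 * (S * depth_mm + 1) * (depth_mm + rho) ** 2 / rho ** 2
    return i0 * math.pi * r ** 2

# e.g. a 1 mW/mm^2 threshold at 1 mm depth through a 200 um, 0.66 NA fibre
print(round(required_power(1.0, 200, 0.66, 1.0), 1))   # → 16.4
```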

That’s it for today, and I hope some people use my free tool to gain insight into their in vivo optogenetics power requirements.

Who’s Behind the Curtain?

Calculating irradiance depth

My previous blog post was about my newly developed optogenetics irradiance depth calculator. The goal was to produce a simplified version of Karl Deisseroth’s more famous irradiance calculator.

Irradiance decreases with depth from fibre tip.
Typical irradiance dropoff values

After some sleuthing, I found that Deisseroth based his calculator off a 2007 paper by Aravanis et al.1. It predicts the spread of light through tissue based on two major factors:

  1. Geometric spread – how much the light spreads out of the end of the fibre, which for the multimode fibres used for in vivo opto’s will be mainly determined by the NA
  2. Tissue scatter – light absorption and scatter by the (brain) tissue the light is penetrating

Here is the relevant section from the paper:

Optogenetics irradiance depth calculations from Aravanis et al.

The important equation is the bottom one, which calculates the irradiance (I) at distance (z), relative to the starting irradiance. The user can therefore input optical power (I at z=0), threshold irradiance (I at z), fibre radius (r) and numerical aperture (NA).

Apart from that, there are two more variables that are determined by experiment: the index of refraction of the tissue (n) and the scatter coefficient (S). I have used the same values as Aravanis, which are based on mouse brain grey matter.

And then we solve for distance (z). The tricky bit here is that solving for z produces a cubic equation, but fortunately cleverer people than me have written scripts for solving cubics. After that, it was fairly straightforward to write the script in Javascript and attach it to the webpage.
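For the curious, here is a sketch of that solve in Python. Rather than the closed-form cubic script my calculator uses, this uses Newton’s method on the same cubic, which finds the same root because the cubic is monotonic for positive depth. The example inputs are illustrative; it assumes the tip irradiance exceeds the threshold.

```python
import math

def depth_from_cubic(power_mw, thresh_mw_mm2, core_um, na, n=1.36, S=11.2):
    """Solve the Aravanis et al. irradiance model for depth z (mm).

    I(z) = I(0) * rho^2 / ((S*z + 1)*(z + rho)^2) rearranges to the cubic
    S*z^3 + (2*S*rho + 1)*z^2 + (S*rho^2 + 2*rho)*z + (rho^2 - K) = 0,
    with K = (I(0) / I(z)) * rho^2 and rho = r * sqrt((n/NA)^2 - 1).
    """
    r = core_um / 2000.0
    rho = r * math.sqrt((n / na) ** 2 - 1)
    i0 = power_mw / (math.pi * r ** 2)
    a, b = S, 2 * S * rho + 1
    c, d = S * rho ** 2 + 2 * rho, rho ** 2 - (i0 / thresh_mw_mm2) * rho ** 2
    z = 1.0                                        # Newton's method; the cubic is
    for _ in range(50):                            # monotonic for z > 0, so this converges
        f = ((a * z + b) * z + c) * z + d
        fp = (3 * a * z + 2 * b) * z + c
        z -= f / fp
    return z

# e.g. 10 mW against a 1 mW/mm^2 threshold, 200 um / 0.66 NA fibre
print(round(depth_from_cubic(10, 1.0, 200, 0.66), 2))  # → 0.82
```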

Scattering of different wavelengths

A quick note on wavelengths of light. The astute among you will likely have noticed that my depth calculator does not allow you to pick the wavelength of light, whereas Deisseroth’s does. The reason is that wavelength has little impact on the scattering of light in the visible range (Figure 1).

Scattering and absorption coefficients in brain grey matter.

In fact, by far the biggest impact on predicted scattering comes from the difference between white and grey matter, or even who did the measurements! Which brings me to the caveats for using my depth calculator:

  1. It is an approximate predictor of optogenetics irradiance depth, so take the output values as a guide rather than absolute truth.
  2. The calculator assumes you are targeting grey matter, so if you place your fibres in denser white matter regions (such as the brainstem), the predictions will no longer be accurate.

Despite the caveats, I do believe my depth calculator tool is useful, and hopefully people will find it easier to decipher than Karl Deisseroth’s.

1. Aravanis et al. J Neural Eng 4, S143-S156 (2007) An optical neural interface: in vivo control of rodent motor cortex with integrated fiberoptic and optogenetic technology

2. Yaroslavsky et al. Phys Med Biol 47, 2059 (2002) Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range