Extracurriculars

In this next section, I’ll be covering the value of extracurricular activities, work and volunteering, and explaining how you can use them in your personal statement.

Extracurricular activities, work or volunteering can be used to strengthen your application to study medicine and can help prepare you for the course and job ahead. Almost every extracurricular you do can be used to display some sort of skill or personal quality pertinent to studying and practising medicine. Paid work shows a level of independence, responsibility and teamwork, whilst volunteering shows compassion and can be linked to your medical aspirations. For example, if you are interested in working in elderly care, volunteering at a care home may be a sensible choice as it gives you experience of that kind of role. Universities also like to see commitment to your chosen volunteering, so I would recommend a minimum of 6 months volunteering in one place. It doesn’t have to be full-on during those 6 months; little and often is seen to show more commitment than a single, longer stint.

As for extracurriculars, the sky is the limit for what you can learn from them! There are all the usual suspects: commitment, teamwork, patience, communication, organisational skills. But I’d encourage you to think more outside the box (whilst still citing the above skills). For example, sport-related activities might improve your physical endurance. Anyone who has done work experience will know that doctors spend a hell of a lot of time on their feet, so mentioning that you could cope with the long hours without faltering or tiring is actually very relevant. As for learning an instrument, this might help with your coordination and dexterity, which would come in handy if you’re interested in becoming a surgeon. At my school, some future surgeons even learnt to knit to acquire that very same skill!

Let’s not forget how important leadership and independence are for making a good doctor and medical student. As a doctor you will spend much of your time as part of a team, but you will also spend a lot of it as a leader. It is therefore important that you are adept in both teamwork and leadership, as both are put to the test during a medical career. Independence is also a biggie, as university is very different from A levels. The university will not be metaphorically holding your hand throughout the course the way sixth form schools and colleges often do, and an awful lot of your time will be spent working through the masses of content in the medicine course on your own. So it’d be a good idea to indicate that you will be able to work independently; I found a good example of this was conducting my EPQ, which is largely left up to the student to plan and execute.

Now, I would personally recommend doing as many extracurriculars as you want or can manage. I know some universities, like Oxbridge, have said in the past that they do not require lots of extracurriculars as part of your medicine application and so will not really take mentions of them into account, but many other universities are the exact opposite. As you will be applying to 4 universities to study medicine, the chances are at least one will not be interested in extracurriculars and at least one will be. Therefore, it’s best to hedge your bets and include them in your personal statement anyway. What’s more, extracurricular activities aren’t only useful for your personal statement- it’s important to look at the bigger picture. An extracurricular could indirectly help you get into uni by giving you skills you can use in your interview, or the skills you gain could be how you actually stay in uni. Remember, universities are looking for these skills for a good reason: the medicine course and career are extremely tough. So whether or not they specifically ask for them, you should have those skills either way if you really want to succeed.

Choosing your university

Choosing your university is both a pivotal moment and not that important. Sound like an absolute contradiction? Well, I’ll explain… Many doctors would agree with me when I say that the hardest part about getting through medicine is getting a university offer. Whilst various challenges will face you after that, including doing the degree, completing your foundation years and then specialising, the biggest hurdle is often getting in in the first place because it is just so competitive. There are often around 10 applicants for every place at medical school, which means a massive number of applicants who meet the minimum entry requirements and could have made good doctors are rejected simply due to the sheer number of applicants. But once you’re in, you’re in. Medicine is unique in that, as a course directly linked to a vocation, graduates are almost always guaranteed employment. This is the exact opposite of the majority of other courses, where it is relatively pain-free getting a university place but finding a job at the end of the degree can be considerably harder. So you can see why choosing a university which suits you is so important: the right university for you is more likely to see you as the right candidate for them, making your journey into medicine that much easier.

There are several different factors to consider when choosing your university, but they can be broken down into two main themes: the course and the place. Whilst the basics of medical teaching are always the same, as all doctors have to know certain things, there are differences in the methods of teaching, the course structure and the teaching resources used. One of the most important of these differences is the teaching style.

 

There are 3 main teaching styles: traditional, integrated and PBL. Traditional, as the name suggests, is based on the tried-and-tested formula of 2 years of lectures, essays and theory-based learning followed by 3 years of clinical development. It is becoming less and less common and is nowadays used only at Oxbridge and a few other top universities. Generally, traditional teaching is heavier on the science and academia than the other teaching styles, and there is often an opportunity to intercalate for a year on top of the usual 5 (which I’ll cover later). This can be either a pro or a con depending on your personal preferences. Given the lack of clinical placements in the first few years, I wouldn’t recommend this style to anyone who is desperate to see patients ASAP, or anyone who isn’t especially interested in the science behind medicine.

PBL stands for ‘problem-based learning’, and let’s just say it does what it says on the tin. Students are typically given a case study and then asked to go away and work through every aspect of learning associated with the case as part of a small group. This is becoming increasingly popular with universities, as it simulates in part how a doctor might work whilst covering the theory at the same time, but few universities rely solely on PBL- most use a mixture of PBL and the more usual forms of teaching. PBL universities also often offer early clinical exposure. PBL relies quite a bit on independent learning (more so than other styles, although all courses will have an element of it) and can really help develop those all-important problem-solving skills that doctors require. However, again, it’s not for everyone and really depends on your strengths.

Finally we have integrated. Integrated is a nice middle ground between PBL and traditional, as it teaches using lectures but provides clinical placements throughout the course, not just at the end. This means you do get a bit of ‘doctor experience’ alongside your teaching, which allows students to start developing their communication skills with patients right from the outset. Often a little bit of PBL is sprinkled in, especially in the later years of the course when you already have some of the knowledge required to apply to the case you’re given. That’s the basic outline of what you can choose from, but it’s not quite that simple. Each university has a slightly different version of its chosen teaching style, with many universities dipping into all three styles to varying degrees. This means it’s important to look at each university’s specific course structure when deciding which teaching style, or balance of teaching styles, you prefer. Do you want a little, a lot or no PBL? Do you want many or few lectures? Do you like writing essays?

Another difference to look for between universities is the teaching resources used. By this, I mean how they teach anatomy and how they use technology. The big question surrounding anatomy is: prosection or dissection? All universities use imaging and models, but not all use dissection, and some use it alongside prosection. You’ll probably already have heard of dissection, where students are given a cadaver from which to study anatomy, but prosection is less well known. Prosection is where an experienced teacher dissects the cadaver for the students to learn from. It can be better than dissection in that a lot of time is saved, as students don’t need to spend hours working through fat to get to ‘the good stuff’, and prosected specimens can be much easier to analyse as the body has been dissected expertly. On the other hand, many people prefer dissection as it’s a more involved way of learning anatomy and can be a very good experience for people interested in surgery. Whichever you prefer, it is good to know what your chosen university uses before you apply.

The university’s use of technology is less important than factors like teaching style, but it can make a real difference during your course. Many universities are now adopting a more technology-based approach, with learning resources provided on student portals, lectures recorded and available to watch at any time, and the option to email your lecturers questions rather than having to chase them up in person at the end of a lecture. I personally found the whole ‘recorded lectures’ idea very appealing, but not for the obvious reason. Some students take advantage of recorded lectures by not showing up to their 9am lectures, safe in the knowledge that they can access the recording later in the day after their lie-in, but that is not really their purpose. The real reason they’re such a good idea is that lectures are entirely different from lessons at A level. In a lecture an entire year group might be taught at once, rather than the 30-odd people you are used to, meaning there’s less opportunity for teachers to slow down so you can finish the sentence you’re writing, not to mention the sheer volume of learning material covered in a relatively short space of time. With recorded lectures you can go back after attending the lecture in person and learn at your own pace, skip to the parts of the lecture you didn’t understand and listen to a particularly difficult concept being explained as many times as you’d like. It’s a great provision that universities have only recently introduced, and whilst I wouldn’t recommend it be the #1 factor determining your university choices, it’s a good thing to look at if you’re trying to tip the balance between two close calls.

Next up, intercalation. Intercalation usually means taking a year during your course to do something more research-based and independent. I would liken it to a scaled-up EPQ, and it definitely should be a big factor to consider when choosing your university. Now I realise that the idea of adding another year onto a course which is already 5 years long may not be hugely attractive, and if I’m being honest the majority of people who did an EPQ in my year group discovered research really wasn’t for them. But if, like me, you think you’d like to take a break from the science-packed medical course and really delve deep into an area of medicine you personally are interested in, then intercalation is right up your alley. What’s more, it’s another qualification under your belt which can earn you extra points when you are ranked for foundation jobs at the end of the course, meaning you have a competitive advantage in the national rankings over those who haven’t intercalated.

Not all universities offer intercalation, and some of the ones that do only allow a portion of the year group to intercalate. If you know for definite you want to intercalate, then obviously you’ll need to check that all the universities you apply to offer it to at least some students, but one thing you might want to consider is: would you prefer that the whole year group intercalated? For one thing, this would mean you wouldn’t have to compete for the few intercalation places available, as you would at universities that don’t offer intercalation to everybody. What’s more, universities that make the whole year intercalate will probably have a wider range of intercalation options and better resources and opportunities for intercalating students, given that so many of their students intercalate every year. The final thing to consider is that it might be nice for you to stay with the same year group throughout your course, which wouldn’t happen if you intercalated at a university where only a select group of students intercalate. In that scenario, you would stay back a year as the rest of your peers moved on to their clinical years and eventually graduated a year earlier than you. This wouldn’t necessarily be a big deal, as no doubt you would still be very much a part of your year group whilst also befriending people in the year below, but it’s something to think about…

Meanwhile, if you are not entirely sure how you feel about intercalation, it might be better to choose a university that offers it but doesn’t make it compulsory, so that you can decide how you feel a couple of years into the degree. And obviously, intercalation isn’t for everyone. It tends to suit the more ‘academic’ medical student, so if you are really excited to get onto clinical work and are less interested in the science side of medicine then intercalation might not be for you. In that case, steer clear of those medical schools that make intercalation mandatory!

 

The other side of choosing your university is about choosing the right place for you to live and study. As a medical student you will be spending at least 5 years at whatever university you choose, so it’s important that you like the place as much as the course, and every university has its own ‘personality’. Being happy and healthy as a student is an important aspect of how successful you are in your course, as medicine is a very intense and jam-packed course which requires you to be at your optimum not only academically but in terms of focus and the ability to cope with pressure. Being in a place that makes you happy and having an active student life are important ways of relieving stress and making sure you make the most of your course, as well as having a bit of fun.

So let’s talk London. Of the 34 universities teaching medicine in England, Scotland and Wales, 5 are in London, 3 of which are ranked in the top 10 of UK medical schools according to The Medic Portal. As well as being well populated with medical schools, London is, of course, our capital city and as such is huge, busy, vibrant and much like any other major city of the world. But another characteristic of major cities like London is the Marmite complex: you either love it or hate it. So that’s the first thing to decide: would you want to live in London or not? On a similar theme, you also need to decide whether quiet and secluded is your thing, or whether a big city would suit you better. There are of course medical schools in all the big cities in the UK, but there are some slightly greener and more rural ones too, so have a think about what you’d prefer.

The other decision which will help rule out some universities is how far from home you want to be. How important this is does vary from person to person; I personally didn’t take it much into account, except that I knew I didn’t want to carry on living at home. But it’s different for different people: some may prefer to stay in their home city, others specifically want to travel as far as humanly possible, and others might want to live away from home but still close enough to have their parents do their laundry! It’s all up to personal preference.

Another place-related factor, which does slip back into the course-related side a bit, is the hospitals in the area your university is in. I’m sure you’ve heard that many people choose to stay in the city they studied in after graduating, so it might be worth considering how good the hospital placements in that area are. And this isn’t just in terms of where you have your foundation jobs; it’s also about how good the clinical placements during the course are. Ideally, you want hospitals with a good variety of different cases and patient situations for you to learn from during your clinical years as well as during your foundation years.

Now, student life is often pretty much the same across all universities, as every university will have societies where you can geek out, sports societies and arts societies. There are of course some universities and cities known for having good or bad nightlife, but I wouldn’t recommend you make decisions about universities based on that. Furthermore, the number 1 rule for choosing your university is not to choose it on the basis of what your friends choose. It seems obvious, but the notion of starting anew can be truly scary for some people, and to those people this is what I would say: medicine is full of change. Your first 2 years as a foundation doctor will be made up of rotations which may or may not be in the same hospital, and no doubt you will move from place to place as you specialise too. So it’s best that you come to terms with starting fresh and try to use university as a practice run for the future changes you’ll be faced with, rather than holding on to the familiar and staying in your comfort zone. Not to mention, change isn’t actually a bad thing! I personally found the fact that medicine is such a varied and constantly changing job appealing, and it was part of why I chose to pursue medicine in the first place.

 

And there you have it, everything I have to say about choosing your university. But here’s where the ‘not that important’ bit comes into play: regardless of where you study you will come out of medical school with the same medical degree and should have pretty much the same opportunities in your career as any other student from any other university would have. So don’t worry too much about where you get into, just focus on getting in and doing well wherever you are!

Work Experience

First things first, you need to know if medicine is the job for you. Work experience can be essential in helping you make that decision, so I would suggest doing some as soon as you can- don’t leave it until you start applying! Work experience is such a useful way to find out what working as a doctor is truly like and how doctors work within the all-important multi-disciplinary team, and it can reinforce your desire to study medicine. It can also be a scary eye-opener, as you realise the responsibility that doctors have, what an absolutely exhausting job it can be and how even modern medicine can’t save everyone. I remember after my first day of work experience, I crawled through my front door at home because my feet hurt so much! It is important that you get to see both the good and the bad sides of working as a doctor, because medicine is a 5-6 year university course and you don’t want to find out at the end of it that actually, medicine isn’t the job for you.

As well as using it to check that you’d like working as a doctor, it is essential that you can draw from your work experience when talking about medicine in your personal statement and interview. Similar to what I said above, admissions teams want to see that you have realistic insight into what the course and job are like, to ensure that you won’t drop out during the course or decide not to practise medicine at the end of it. Using examples from work experience you can show that you know about the difficulties doctors face, but you can also suggest how you might overcome such issues should you become a doctor. This helps to convince admissions teams that you would be able to ‘hack it’ as a doctor. What’s more, giving examples of things you noticed during your time around doctors can demonstrate the qualities in doctors that you value and aspire to. That’s because, depending on the aspects of work experience you focus on, you’ll learn and therefore speak about different things in an interview. For example, I spent the majority of my work experience really focusing on communication and the way that doctors interact with their patients. So when it came to interviews, it was clear that I was someone who really cared about making the patient feel safe, respected and heard through good communication. Others I know focused on the procedures they encountered and the science they learnt during their work experience, which translated into their passion for science and learning when they spoke about it in their interviews.

What you get out of work experience very much depends on you and what you make of it. It’s important that you take any opportunity you can to learn not only from the doctors around you but also from other healthcare professionals. Quite often during my work experience the doctor I was shadowing would warn me that the next hour would be spent writing out discharge letters, which could be boring for me, and so I would ask nurses if I could shadow them, or indeed if there was anything I could do to help them out as I shadowed. A question that universities sometimes ask at interview is “Why do you want to be a doctor and not a nurse?”. To answer that question well, you need to actually know what the role of a nurse is so that you can accurately compare it to the role of a doctor. As well as trying to learn from lots of different healthcare professionals, you should also take every opportunity to ask questions of whoever you are shadowing. If you don’t understand something about a case, ask them! So often work experience students are totally bamboozled by things they hear the doctors talking about but are too shy to say something- don’t be! As long as you are sensible about when you ask the questions (i.e. not in front of patients or interrupting the doctors), doctors are often happy to explain, especially foundation doctors fresh out of med school and excited to share their knowledge.

Finally, how do you get work experience and how long for? Some people spend weeks and weeks doing work experience; others spend 3 days in one place and glean the same amount of insight. Most universities do not specify how long they want you to spend on work experience (although there are some exceptions, so make sure you check each university’s website), so it’s up to you to decide how much you want to do. I personally did about 3 weeks, broken up into 4 different placements spread across 2 years. The first week (2 placements) was all about checking out the job and deciding on medicine. The other 2 weeks were at two different hospitals in the summer before applying, but honestly, I think just 1 week would have been more than enough. I have to warn you, it can be difficult to get work experience if you don’t have contacts in healthcare, but just persevere and it’ll be worth it! Some ideas for securing work experience: visit local hospitals and ask receptionists if they can help or know who you should contact, or look online, as some hospital departments ask you to apply for work experience placements through their websites.

Applying to Medicine

I’m sure many of the people who read this blog will be considering applying to study medicine at university, and from experience I know it can be a long and arduous process, so I thought it was time I shared whatever wisdom I might have about the application process with you readers.

That said, disclaimer! Everyone’s experience of applying to medicine is different as we all apply to different universities, have different timelines and approaches to things and are just different and unique people! That means what worked for me might not work for you, so only take advice which is applicable to your situation and that you really would find useful.

What’s more, having started this as a single post and then realised that I had a hell of a lot to say about each stage of the application process, I have decided to split this topic into several different posts. I’m doing it roughly chronologically, starting with work experience, so here goes…

Antibiotics: a journey from good, to bad, to ugly

As I hope you are already aware, medicine is facing a crisis at the moment which could prove to be a bigger killer than cancer: antibiotic resistance. Bacterial infection seems an archaic issue, one that we solved years ago and have since stopped thinking of as a major threat to human health, but it’s making a comeback. Before we get into all that, however, let’s first look at what antibiotics are and how they came about in the first place.

Antibiotics are a group of drugs which can treat or prevent bacterial infections by inhibiting or stopping the growth of bacteria. There are hundreds of different antibiotics in circulation today, but they all stem from just a few drugs and so can be categorised into six groups. One of these groups is the penicillins, which includes the drugs penicillin and amoxicillin, and this is where the antibiotics’ journey begins. Penicillin was discovered in 1928 by Alexander Fleming and is generally thought of as the world’s first antibiotic. Its initial discovery is quite well known: Fleming accidentally uncovered the antibacterial properties of a mould called Penicillium notatum when he returned from a holiday to find that where the mould had grown, the normal growth of Staphylococci was prevented. Fleming then grew some more of the mould to confirm his findings; he had discovered something that would not only prevent growth but could be harnessed to fight bacterial infection, changing the world in its wake. However, Fleming didn’t develop penicillin into what it is today. To do that, the active ingredient would need to be isolated, purified, tested and then produced on a grand scale.

It was 10 years later that endeavours into this began, when Howard Florey came across Fleming’s work and began developing it further with his team at Oxford University. Florey and one of his team, Ernst Chain, managed to produce extracts of penicillin from culture fluid. They then experimented on 50 mice that they had infected with streptococcus, treating only half of them with penicillin injections. Whilst the untreated mice died from sepsis, the treated ones survived, proving the effectiveness of penicillin. In 1941 the first human test was conducted, with injections given for 5 days to an infected patient, Albert Alexander. Alexander began to recover, but unfortunately Florey and Chain didn’t have enough pure penicillin to treat the infection completely and so Alexander ultimately died. This was now their biggest problem: making enough penicillin. It took 2,000 litres of culture fluid to extract enough pure penicillin to treat just 1 human case, so you can see how treating an entire population would frankly be impossible if they continued in this way. It was in the US, whilst Florey and Chain were looking for a solution to this problem, that they happened upon a different, more prolific fungus called Penicillium chrysogenum. This yielded 200 times more penicillin than the previously used species, and with a few mutation-causing X-rays yielded 1,000 times more. This allowed 400 million units of penicillin to be produced for use during the war in 1942, reducing the death rate from bacterial infection to less than 1%. And so began the antibiotic revolution in medicine which changed the world into what it is today.
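
If you like numbers, the scale-up is easier to appreciate with a quick back-of-the-envelope calculation using the figures above (a rough illustrative sketch, not production data):

```python
# Rough arithmetic behind the penicillin scale-up (figures from the post, purely illustrative).
litres_per_case_notatum = 2000      # culture fluid needed per treated case with P. notatum
boost_chrysogenum = 200             # P. chrysogenum yielded ~200x more penicillin per litre
boost_after_xrays = 1000            # ~1000x after mutation-inducing X-rays

print(litres_per_case_notatum / boost_chrysogenum)   # ~10 litres of culture fluid per case
print(litres_per_case_notatum / boost_after_xrays)   # ~2 litres per case
```

Going from 2,000 litres per patient to a couple of litres is what turned penicillin from a laboratory curiosity into something that could be mass-produced for the war effort.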

Now, over 70 years later, we’re faced with a huge issue. Bacteria have become resistant to antibiotics due to their overuse. Antibiotic resistance occurs when a bacterium mutates so that the antibiotic can no longer kill it, and it is part of natural selection. From previous knowledge of natural selection you might know that it is normally a slow process across many generations, and relies on the mutation being an advantage in order for it to become widespread. Yet when you use antibiotics, rather than having 1 mutated bacterium amongst millions of non-mutated ones, all of the non-mutated bacteria are killed, leaving a population that is 100% resistant. Admittedly that population might start off as a single cell, but bacteria can multiply and start causing havoc very quickly. And then those same antibiotics no longer have any effect on the new culture, leading to longer hospital stays, more expensive medication and increased mortality rates.
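
To see why even one resistant mutant matters, here’s a deliberately crude simulation (made-up numbers; the only ‘biology’ in it is that the antibiotic kills susceptible cells but not resistant ones):

```python
def simulate(days=6, susceptible=1_000_000, resistant=1,
             growth=2.0, kill_rate=0.99):
    """Toy model: each day both strains double, then an antibiotic
    kills 99% of the susceptible cells and none of the resistant ones."""
    for day in range(1, days + 1):
        susceptible *= growth
        resistant *= growth
        susceptible *= (1 - kill_rate)   # the antibiotic only clears susceptible bacteria
        print(f"day {day}: susceptible={susceptible:,.0f} resistant={resistant:,.0f}")

simulate()
```

Within a handful of doubling cycles the resistant lineage, which started as one cell in a million, is the only population still growing; that’s the selection effect described above, sped up for illustration.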

Given how easily everyone is now able to travel around the world, this issue is not localised to 1 country or continent but is on a global scale. Given that tackling antibiotic resistance requires investment in finding new antibiotics and using the more expensive medications on hand, developing countries could be hit much harder than developed ones. Meanwhile, our struggling NHS will experience even more pressure as patients have longer stays in hospital. Worst-case scenario, if this issue isn’t addressed we could return to a time before antibiotics where common infections are once again fatal, killing 10 million people a year by 2050. Clearly not a desirable outcome, so how can we tackle antibiotic resistance?

Prevention and control of antibiotic resistance requires change at several different levels of society. The World Health Organisation has made recommendations for what individuals, healthcare professionals, policy makers and people in the agriculture sector can each do. Amongst the advice for individuals is not sharing leftover antibiotics, not demanding antibiotics when they aren’t considered necessary, and preparing food hygienically to prevent infections. Meanwhile, the healthcare industry needs to invest in research into new antibiotics, as so far most drugs being developed have only been modifications of existing antibiotics. These don’t work for long, and of the 51 new antibiotics currently in development only 8 have been classed by the WHO as innovative and capable of contributing meaningfully to the issue.

Research into new drugs is of the utmost importance, but the responsibility appears to fall to governments to fund such research, as pharmaceutical companies are reluctant to do so. Some suggest that this is because it is more lucrative for drug companies to treat chronic conditions, in which patients rely on their drugs for a lifetime, than to provide short-term (and, may I point out, life-saving) treatments. Thankfully, this year countries including Switzerland, South Africa and the U.K. pledged a combined $82 million to support the Global Antibiotic Research and Development Partnership, which was set up by the WHO and the Drugs for Neglected Diseases Initiative. Less encouraging is the estimate by the director of the WHO Global Tuberculosis Programme that more than $800 million is needed annually to fund research into new anti-tuberculosis medicines.

I know. It’s feeling dismal. But what can we do, on the metaphorical shop floor? Stop overusing antibiotics, follow through on any prescribed course of antibiotics even if you’re feeling better, and make other people aware of the issue. The bigger the issue becomes in the eyes of the public, the more seriously the government and other organisations will address it, so talk about it to anyone who will listen! I am optimistic that we as a species will survive this most recent challenge to our health, but only if we pull our socks up and address the situation.

 

Thank you for reading, remember to rate and share!

“Science has been a process of continuous advancement towards objective truth”

This is a statement from a past BMAT paper that I wrote a practice essay on. The BioMedical Admissions Test, alongside the UKCAT, is one of the admissions tests that medical schools might ask for, and it consists of 3 sections: Aptitude and Skills, Scientific Knowledge and Applications, and a writing task. Unfortunately, pretty much the only way to prepare for either admissions test is practice, practice, practice, so I decided to share one of my practice writing tasks with you…

 

This statement suggests that developments in science have always been working towards the discovery of a set of unchanging and unbiased facts. It suggests that the goal of all science is eventually to uncover facts that have been consistent throughout mankind’s scientific journey. Take, for example, the ‘discovery’ of gravity by Newton: gravity has always been around and affecting us, but only became apparent relatively recently, and cannot be denied or manipulated.

However, it can be argued that advancements in science have not been continuous, as we have had periods of rapid advancement followed by periods of stagnation, often due to the limits of technology preventing further progress. For example, it was improvements in the microscope that subsequently allowed germ theory to be developed. Sometimes stagnation was brought on not by technological limits but by fear, such as the Church historically discouraging Galileo’s new theories and discoveries out of fear that they would challenge its teachings.

What’s more, it can be argued that the truths we work towards discovering are in fact not objective, nor are they as unchanging as the statement might suggest. This ‘objectivity’ supposes that scientific fact is not affected by societal influences; I would argue the contrary, as the way those truths are perceived depends very much on when in history they are discovered and the state society is in at the time. Moreover, scientific truths have not and will not always be the same, as science is not as constant as it may seem. Facts are constantly changing: for example, evolution drives changes and adaptations in organisms, and the universe itself is constantly changing as it expands and stars emerge or disappear.

The truths that science offers are changing, slowly but surely, so that we are not working towards the same truths that may have been worked towards a millennium ago, and for this reason I cannot agree with this statement. Whilst science is about advancement, it is not always continuous and the truth is not objective.

 

Read it? Rate it!

Should school start times be changed to accommodate the change in circadian rhythm which occurs during adolescence?

Okay, I know it’s a longer-than-normal title, but that’s because it was also the title of the EPQ I carried out last year. For those of you unfamiliar with the EPQ, it stands for ‘Extended Project Qualification’ and is essentially a research project about anything you choose. At the end, you either make a product or write an essay; guess which I did? I wrote an essay debating whether or not high school start times should be changed, and found the topic so interesting that I thought I’d share a condensed version with you readers!

First things first: are teenagers actually sleep deprived? Constant tiredness is a stereotypical feature attributed to teenagers, but are they really losing out on sleep or is it just everyday complaining? The recommended amount of sleep for a teenager is about 9.25 hours, but the average that teenagers actually get is around 7 hours per night. This means they are indeed sleep deprived, so much so that British children are rated the 6th most sleep-deprived worldwide!

Sleep deprivation is very much a public health issue in my eyes, as its effects are detrimental to both physical and mental health. Coordination and endurance are hindered by tiredness, and skin problems and even metabolic problems like obesity have been linked to it. Mood can be affected too, with higher occurrences of feeling unhappy or depressed. Outward behaviour can also change, with aggressiveness and irritability seen in subjects lacking sleep. This can harm relationships between teens and their friends and family, and, coupled with the increased feelings of sadness, can lead to incidences of depression. Inordinate sleepiness also makes it difficult to concentrate and stay alert, as I’m sure any teenager you asked could testify, making it hard to maximise learning at school.

But why are young people so tired? Well, some of this sleepiness could be explained by the teenage circadian rhythm. The circadian rhythm regulates when you fall asleep and wake up, but this sleep timing is pushed back by 1-3 hours during puberty. More specifically, secretion of the ‘sleepy’ hormone melatonin starts and stops later into the night and morning. This means that teenagers naturally fall asleep later at night and wake up later in the day than an adult would.

As a result, waking up early enough to get to school by 8:30am clashes with the natural teenage sleep cycle. This impacts not only sleep length but also sleep quality. Clearly, if an adolescent struggles to fall asleep before 11pm but must wake at 6:30am, then they’re not going to get the recommended 9+ hours of sleep. On top of that, the ‘social jet lag’ created by ignoring your natural body clock results in poorer quality sleep. Not to mention, the famed weekend lie-in is actually caused by a build-up of ‘sleep debt’ over the week. The body makes an effort to catch up on missed sleep after a week of deficit, but this later waking time on weekends leads to irregular sleep patterns. As I mentioned in my previous post about sleep, lack of routine damages sleep quality, so lie-ins aren’t actually good for you.
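
Just to make the ‘sleep debt’ idea concrete, here’s the quick arithmetic using the figures quoted above (an illustrative sketch, not data from the EPQ itself):

```python
# Rough weekly sleep-debt arithmetic for a typical school week (illustrative figures).
recommended = 9.25                    # hours of sleep recommended for a teenager
bedtime, wake_time = 23.0, 6.5        # roughly 11pm to 6:30am on a school night

school_night_sleep = (24 - bedtime) + wake_time    # 7.5 hours per night
nightly_debt = recommended - school_night_sleep    # 1.75 hours short each night
weekly_debt = nightly_debt * 5                     # 8.75 hours owed by Friday

print(school_night_sleep, nightly_debt, weekly_debt)
```

By Friday that shortfall adds up to the best part of a full night’s sleep, which is roughly what the weekend lie-in is trying to claw back.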

So… could changing school start times help? Well, researchers at Oxford University predict that by changing high school start times from 8:30am to 10am, GCSE attainment could improve by 10%! This is because of the improved cognitive ability, concentration and attitude that would be seen as teenagers got more, better quality sleep. Thus by starting later, schools would be helping themselves and their students to maximise the time spent at school. Not only that, but the time spent out of school would also be put to better use as increased productivity would mean homework would be completed faster leaving more time for extra-curricular activities.

As I said before, sleep deprivation has very poor effects on young people. Naturally, by allowing teens to get the sleep they need, many of these effects would be reversed, leading to happier and healthier young people. The actual scale of these improvements can’t be predicted, but given the huge issues that the NHS currently faces with obesity and depression in young people, any improvement would be worth it- or would it?

Changing school start times, as simple as it sounds, would actually be a massive change. Transportation would need to be reorganised, new contracts for teachers would need drawing up, parents would need to change shifts or hire childcare, and there would be less time available at the end of the school day. As well as school buses, many high school students and staff also use public transport. The shift in timings could affect the cost to schools of hiring buses and increase congestion, as the later finish would add to rush-hour traffic and therefore travel times. This argument is full of faults though. I struggle to understand how the exact same route and the exact same number of buses and drivers would somehow cost more just because the journey happens an hour and a half later in the day. And whilst I can see how evening congestion could be made worse, at least there would be less congestion in the mornings, as the rush to school and the rush to work would occur in staggered waves.

The later finish time is a common concern amongst young people, as it would leave less time after school for extra-curricular activities, homework and free time. For sports teams relying on outdoor practice, the loss of daylight hours could mean less practice time and more competition between clubs for the use of facilities. Similar competition over facilities could arise for other extra-curricular clubs, and there would be less time available to use public services like the library. Less time to do homework after school is also a worry, though I have already mentioned the increased productivity found in well-rested subjects, meaning less time would actually be needed to carry out the same amount of work.

Opposition from teachers would inevitably arise if school start times changed, partly because the later finish time, added to the work many teachers do after school, would result in very late finishes and isolation from their families as schedules clash. However, because adults do not need to wake up as late as adolescents do, teachers could adapt to the new school timings by moving their extra work to the morning hours before school began, and would still have the evenings to enjoy with their families.

Additionally, with high schools no longer starting and finishing around the same time as primary schools, childcare issues could cause a lot of stress for families. With older siblings no longer available to supervise younger ones, parents would either need to change their shifts to make sure they’re home or pay for childcare. This would hit lower-income families the hardest, as lower-paid jobs often come with less flexible working hours, and paying for childcare would leave them with less disposable income.

Finally, the sleep deprivation found in teenagers cannot be linked solely to their circadian rhythm. As with most things, a multitude of factors can be blamed either partially or wholly for the trend, and in this case the other considerations are behaviour and the homeostatic system. The homeostatic system is yet another biological process that affects our sleep, but this time it affects how long we sleep and stay awake. Regardless of the time of day, the homeostatic system causes ‘sleep pressure’ to build up, which can only be relieved by sleep. In teenagers, this sleep pressure takes longer to build up over the course of the day than it does in younger children, which means the desire to go to sleep kicks in later and could help explain the late bedtimes.

Another bit of biology for you: the circadian rhythm is controlled by the hypothalamus, which detects light and dark signals in the environment and responds by releasing hormones, adjusting body temperature and so on to make you fall asleep. The key role that light plays in controlling our sleep timing, together with our ever-increasing use of technology, may be a big contributor to why teenagers struggle to fall asleep at night. The blue-wavelength light emitted by electronic devices mimics daylight, tricking and stimulating the brain into staying awake and not triggering all of those bodily functions which make us feel tired. No doubt if you asked any young person, they could confirm that they routinely use some kind of electronic device in the 30 minutes before going to bed, and it is this behaviour which could be making it difficult for teenagers to fall asleep at night, thus adding to sleep deprivation.

So you see, there is no simple answer to the seemingly simple question. There may be some biological component which fights against adolescents waking up early, but would fixing that fix the whole problem? And is it worth the effort it would take to implement such a huge change? In my opinion, even the smallest improvements are worth it. I know from experience what a huge difference just one extra hour of sleep makes to my mood and to how I approach the rest of my day. Imagine millions of people just that little bit happier, and what might’ve seemed like a small difference at first becomes something a lot bigger and better. As for the other question, realistically I don’t think changing school start times alone would completely fix the issue of sleep deprivation, but it is a start. For the best results, I would say that education on the importance of sleep and good sleep hygiene should be taught in schools alongside the start time changes.

 

Thank you for reading, remember to rate and share!

Genetically Engineering the Human Embryo- is it ethical?

Genetically engineered crops have been in use since the late 90s, and even they have stirred up opposition based on possible risks to human health, the environment and other unforeseen repercussions. So mention genetic modification of humans, and the immediately conjured image is that of a perfect, superior and absolutely terrifying race of people who seem far less human than us. Whilst the technology for such things isn’t quite within our grasp yet, I think it is worth keeping tabs on just how advanced genetic engineering is becoming and starting to consider the consequences… before it’s too late (dun dun duuuuuun!).

So where are we at right now? Well, in 2015 this debate was reignited by a group of Chinese researchers who attempted to remove a mutated gene causing a deadly blood disorder from non-viable embryos. They did this using the game-changing gene-editing tool CRISPR-Cas9, which came about in 2013. CRISPR is part of a natural defence mechanism that bacteria use against viruses, and Cas9 is an enzyme associated with it. When it comes to genetic modification, CRISPR-Cas9 can be directed to cut out a chosen section of DNA, and if a new piece of DNA is supplied near the cut site the cell can take it up as a replacement. This means that ‘bad’ genes can be removed and replaced by the healthy version, but it’s not as straightforward as it seems. The research in China had to be abandoned partway through because unintended mutations were found in the genome, which could cause cell death and transformation. Clearly not a great outcome, but it did, as I said, grab the public’s attention and get people talking about it, which in itself was a much-needed effect.
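
If it helps to picture the cut-and-replace idea, here’s a very loose analogy in code: a targeted find-and-replace on a string of bases. The sequences and the ‘faulty’ variant below are entirely made up, and real editing is vastly messier than string surgery, so treat this as a cartoon only.

```python
# Cartoon version of "cut at the target site, supply a repair template".
# Hypothetical sequences; not real genetics.
genome = "ATGGTCCACGTTGACTCCTGAGGAGAAG"   # a made-up stretch of DNA
faulty_site = "GTTGACTCC"                 # the 'bad' variant the guide sequence targets
repair_template = "GTTGAGTCC"             # the healthy version supplied alongside

if faulty_site in genome:
    edited = genome.replace(faulty_site, repair_template, 1)  # cut out and patch in
    print(edited)
```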

Since then, research on human embryos has been carried out in various places but never with the intent to implant them into a woman. Recently, details of the first successful genetic modification of a human embryo in the US were released. The scientists also used CRISPR, this time to remove the mutation causing a heritable heart condition which can cause sudden cardiac death. This was an exciting discovery but again would not result in implantation. So why not? Implantation of genetically modified embryos is illegal everywhere, with any research at all in the UK being very limited. But should it be?

Let’s first consider how genetic engineering of embryos could be an asset to the medical world. The most obvious use for genetic modification is to eradicate genetic diseases. Countless diseases, many of which are very dangerous, are caused not by viruses or bacteria but by mutations in our DNA. Whilst genetic engineering cannot help those already living with such a disease, it can prevent those diseases from being passed on to offspring and future generations. Long term, this could lead to targeted diseases eventually dying out. Clearly, that would be a desirable outcome which could save many lives. But as well as how dangerous a disease may be, it is important to consider how living with that disease affects quality of life. Many genetic diseases, like cystic fibrosis, can only be managed rather than cured. That management can require a lot of care from medical staff and/or family members and can limit the freedom and capability of sufferers. By correcting the mutation in an embryo which screens positive for a certain disease, the resultant baby would be born free of the disease rather than having to manage the condition for the rest of their life.

Some have suggested that eradicating genetic diseases in this way is unnecessary, given that embryo screening means parents who are concerned about passing on a disease can screen embryos and select the healthy ones. However, this method still results in diseased embryos being destroyed, which is itself considered unethical by some, depending on when you believe life begins. Furthermore, some parents can go through countless expensive IVF treatments, screening each time, and still be unable to produce an embryo which doesn’t carry the disease. In that case, the only other way to ensure those parents can have a healthy child of their own is through genetic engineering.

One fear voiced by some is that any change to our DNA could create a butterfly effect, with unpredictable consequences affecting future generations who carry the altered genes. Indeed, any change would always carry a risk of unexpected and unwanted repercussions, and so any and all genetic modifications would need to be considered carefully to try to minimise such ramifications. Some say that, better yet, no changes should be made to the human genome full stop, as the risk isn’t worth the benefit. This argument could be coupled with the one above: genetic engineering is not necessary because alternative options are available, so why take the risk for something that isn’t an absolute must?

A similar issue surrounding genetic modification is the idea that by selecting or ‘deselecting’ certain genes and versions of genes, we make the gene pool smaller and reduce genetic diversity. In the future, this could cause problems should the deselected genes become useful, desired or even necessary. For example, if a disease was deadly to everyone except those with a certain mutation, but that mutation was no longer around because genetic engineers had targeted its removal, then the human race could be seriously under threat. Of course, this is a worst-case-scenario example, but one which still needs to be considered, not to mention that smaller-scale versions of this could happen too. There is a flaw with this argument though, in that I struggle to see how the genetic change behind, say, Down’s syndrome could benefit future generations. Whilst I understand that we should preserve as much genetic variety as possible, I do not think that retaining genes which are not just ‘undesirable’ but actually harmful and cause suffering is necessary for the sake of genetic diversity.

Finally, and most importantly to most people, there’s the matter of designer babies. Even with the emergence of genetically modified crops, worries about the applications in humans to create designer babies were loudly voiced. The selection of certain characteristics which are considered more desirable, such as intelligence, would clearly be wrong… wouldn’t it? Of course, one could say that a great many characteristics, physical or otherwise, do have some sort of health benefit so it is ethical to seek them. After all, medicine looks not just to cure illness but prevent it and improve quality of life. So it stands to reason that if freckles are associated with a higher risk of developing skin cancer, then genetic modification to remove freckles makes sense, no? And if self-esteem issues could be prevented by genetically engineering babies to make attractive adults, would that not improve the mental health of the general population?

I’m hoping at this point that you can see the point I am leading into, and not just cheering me on. Almost all genetic modifications can be argued to be of value medically, with some arguments being admittedly less believable than others, and so we reach the heart of the problem with genetic engineering; at what point is the modification acceptable, and when have we gone too far? And so the solution for many is to just proclaim all modifications are unacceptable, and the result is that no progress can be made because everyone is too busy worrying about the worst case scenario.

But how likely is this scenario, really? Research happening now already has to have ethics approval. The currently proposed uses of genetic engineering would be carefully monitored and controlled. Any government that endorsed it would no doubt set guidelines and restrictions to make sure it didn’t get out of control, wasn’t misused and was safe and ethical. If those rules were breached even slightly, the watchful eye of the authorities would be alerted and the scientists could be stopped long before they got anywhere close to producing ‘designer babies’.

What’s more, many characteristics are not controlled solely by genes but also by environmental influences. And even where genes do play a part, a single feature is often affected by more than one gene. So qualities like intelligence or sportiness can’t just be ‘manufactured’ into a person, and the emergence of designer babies isn’t quite as realistic as people fear.

Whether you agree with genetic engineering embryos or not, the most important thing right now is that the matter is brought to the public eye in a big way. We need people talking and thinking about where they stand on the matter, so we can begin to put regulations like the ones mentioned above into place. Perhaps society will continue to ban genetic engineering altogether, as it generally has in the past. Or maybe, common sense and a little faith in the human race not to take it too far will push us into the future, one with fewer genetic diseases and less suffering.

 

Thank you for reading, remember to rate and share!

Vampire or Victim?

Brasov in Transylvania, Romania

Following my recent trip to Transylvania in Romania for work experience, I have decided to dedicate this post to the monsters and myths which stem from the region: specifically vampires. Like many historical ideas and beliefs, the creation of such a supernatural being likely served as an attempt to explain ailments that couldn’t otherwise be explained, before the emergence of scientific understanding. In this post, I will describe the medical condition most popularly believed to have led to belief in vampires.

The most commonly referenced illness used to explain away vampirism is porphyria, a group of inherited metabolic disorders in which the production of haem (used in haemoglobin) is disrupted. The production of haem from a chemical called ALA involves several steps, and each step requires an enzyme. In people suffering from porphyria, one of those enzymes is faulty due to the inheritance of a mutated gene coding for that enzyme. This means that the production of haem is slowed or even stopped, and as a result the ‘transitional chemicals’ made in the stages between ALA and haem, known as porphyrins, can build up to harmful levels. On top of this, the limited haem production can mean that not enough healthy red blood cells are made. There are different types of porphyria depending on which enzyme is dysfunctional, and they produce different symptoms with some overlap.
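
If it helps to picture why a single faulty enzyme makes intermediates pile up, here’s a deliberately simplified toy model of a multi-step pathway (invented rates and step counts, nothing to do with the real enzymology):

```python
# Toy bottleneck model: material flows ALA -> intermediate(s) -> haem,
# with each step handled by an enzyme working at some rate per cycle.
def run_pathway(enzyme_rates, inflow=100, cycles=20):
    pools = [0.0] * (len(enzyme_rates) + 1)   # pools[0] = ALA ... pools[-1] = haem
    for _ in range(cycles):
        pools[0] += inflow                     # fresh ALA keeps entering the pathway
        for i, rate in enumerate(enzyme_rates):
            converted = pools[i] * rate        # fraction enzyme i converts this cycle
            pools[i] -= converted
            pools[i + 1] += converted
    return [round(p) for p in pools]

print(run_pathway([0.9, 0.9, 0.9]))   # healthy enzymes: intermediate pools stay low
print(run_pathway([0.9, 0.1, 0.9]))   # a sluggish second enzyme: its substrate accumulates
```

With one step slowed right down, the ‘porphyrin’ pool feeding it keeps growing while far less haem comes out the other end, which is the gist of what happens in porphyria.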

The most common form, and the one best related to vampires, is porphyria cutanea tarda (PCT), which affects the skin. PCT causes photosensitivity, in which exposure to sunlight can cause painful, blistering and itchy skin… sound like something you’ve heard of before? A well-known characteristic of vampires is that they burn in sunlight, hence must stay out of the sun and so have a dramatically pale complexion. Similarly, many porphyria sufferers are indeed pale as, naturally, they avoid sunlight due to their photosensitivity. What’s more, healing after this reaction to sunlight is often slow and, with repeated damage, the skin can tighten and shrink back. If this shrinking causes the gums to recede, you can imagine that the canines may start to resemble fangs.

Another general symptom of porphyria is that when the accumulated porphyrins are excreted, the resultant faeces may turn a purple-red colour. Whilst the same conclusion may not be reached in modern times, historically this may have given the impression that the sufferer had been drinking blood, which is another vampire hallmark. Interestingly, drinking blood could- and I say this tentatively- actually relieve some symptoms of porphyria. Whilst the haemoglobin would be broken down, the haem pigment itself could survive digestion and be absorbed from the small intestine, meaning that in theory drinking blood would relieve symptoms in the same way a blood transfusion would. Finally, garlic. Seemingly the most random trait of vampires is their aversion to garlic, yet even this could be explained by porphyria. Garlic can stimulate red blood cell and haem production which, for a person with porphyria, could worsen their symptoms as more porphyrins build up. This could lead to an acute attack in which abdominal pain, vomiting and seizures may occur. It seems like an extreme reaction, but perhaps…

So does porphyria explain how the legends of vampires came about? I would say so, but some folklorists and experts would disagree. They suggest that porphyria doesn't really explain the original characteristics of vampires, only the more recent fictional adaptations. Folkloric vampires weren't believed to have any issues with sunlight and were described as looking healthy, 'as they were in life', which contradicts the pale complexion and blistering skin seen in PCT. Furthermore, it is still unclear whether drinking blood would truly relieve the symptoms of porphyria, and even if it did, it is hard to see how sufferers would know to try it with no understanding of their disease and no craving for blood, which makes the whole idea seem rather unlikely. Speaking of probability, reports of vampires were rampant in the 18th century, yet porphyria is relatively rare and its severity ranges from 'full-on vampire' to no symptoms developing at all, making it even less probable that such an apparently widespread phenomenon could be the result of PCT.

Whether you believe that porphyria caused the vampire myths or not, it certainly is an interesting disease: it sadly has no cure (so far), is difficult to diagnose and generally relies on management rather than treatment. Here's hoping that future developments in gene therapy, and even research spun off from 'vampire plant' models, could lead to improvements some day.

 

Read it? Rate it!

‘Carbon dating’ cancer? What’s that all about?

Last week, the Institute of Cancer Research announced that scientists have been able to precisely pinpoint when the different stages of a patient's cancer developed. This could lead to some interesting progress in the treatment and understanding of cancer, and I will explain how the researchers did it, but first: what is cancer?

Cancer is a group of diseases caused by the uncontrollable division of damaged cells. Cells become damaged in this way when a mutation in their DNA interferes with the regulation of mitosis (cell division). More specifically, two types of gene can become mutated: proto-oncogenes and tumour suppressor genes. Proto-oncogenes trigger division, but mutated versions (known as oncogenes) cause mitosis to happen at a much faster rate than normal. Meanwhile, mutated tumour suppressor genes fail to inhibit cell division as they should. Both mutations, as you can imagine, lead to the fast and furious growth of a cell into a tumour.
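
To get a feel for why losing that regulation matters, here's a rough back-of-the-envelope sketch in Python. The one-division-per-day figure is a made-up round number purely for the arithmetic, not a measured doubling time.

# Rough illustration of why uncontrolled division matters. Assume (purely
# for the arithmetic; this is a made-up round number) that a damaged cell
# and all of its descendants divide once a day with nothing stopping them.

cells = 1
for day in range(30):            # 30 days of unchecked doubling
    cells *= 2
print(f"After 30 days: {cells:,} cells")   # 1,073,741,824 (about a billion)

# With working tumour suppressor genes and apoptosis, damaged cells are
# repaired or destroyed long before numbers like this are ever reached.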

Mutations naturally occur quite frequently, but not all of them cause cancer; in fact, a cell often needs more than one mutation before it becomes cancerous. Most of the time either the mutation is relatively harmless, the DNA is repaired, or the cell 'kills itself' before it can do any harm (a process called apoptosis). In cancer cells, however, the signals telling them to undergo apoptosis can be overridden, so the damaged cell continues to divide, producing even more damaged cells.

There are multiple methods currently in use to treat cancer, but the three most common treatments are surgery, chemotherapy and radiotherapy. In surgery, the tumour is simply removed from the body; however, this only provides a complete cure if the cancer is contained in one area and hasn't spread. Surgery is often used in combination with other treatments, for example chemotherapy given beforehand to shrink the tumour (known as neoadjuvant treatment).

Chemotherapy is the use of drugs to treat cancer, usually by stopping cells from dividing. The drugs do this either by preventing DNA from replicating (which happens in the period preceding mitosis) or by interrupting mitosis during the metaphase stage. Chemotherapy is most effective against rapidly dividing cells like cancer cells, but it can also affect other cells which divide frequently, such as hair-producing cells. This explains some of the side effects, such as hair loss, that can occur during chemotherapy.

Radiotherapy works by using high-energy rays (normally X-rays) to damage the DNA of dividing cells. Seems confusing, doesn't it, given that cancer itself occurs due to damage to DNA in the first place? The difference is that the damage caused by radiation stops cells from growing or dividing any further. Normal cells can usually repair that damage, though not always, which is why radiotherapy has unwanted side effects; cancer cells, however, cannot fix themselves, so they die over time.

So now that you are clued up on how cancer occurs and can be stopped, back to carbon dating. Carbon dating is used to determine the age of organic matter by measuring the amount of carbon-14 it contains, but here's the burn: I'm not really talking about carbon dating. Sorry! Don't leave just yet, because what these scientists did to work out when the various stages of a patient's cancer progressed is still pretty interesting. The researchers took the genetic analysis and mathematical models normally used in evolutionary biology and applied them to cancer instead. In evolutionary biology, genetic data from current species can be combined with carbon-dated fossils of ancestral species to estimate when the current species (or species in between 'now' and 'then') arose throughout history. Now you can perhaps see where the carbon-dating link comes in.

These methods could only be applied, however, thanks to a needle tract tumour which occurred when a biopsy of the patient's tumour was taken. What this means is that a sample of the cancer was taken using a needle and, where that needle was removed, some of the cancer cells contaminated the needle tract. These cells grew into a metastatic tumour (a tumour which has spread from the primary site of cancer to a different area), but because the scientists knew exactly when this tumour arose, the genetic data from its cells could be analysed and compared against the other samples so that a timeline of how the cancer had started and spread could be built.
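
As a rough illustration of how a tumour with a known start date can act like a 'fossil with a date on it', here's a back-of-the-envelope sketch in Python. Every number in it is invented, and the real study used proper phylogenetic models rather than this simple division; it's only meant to show the logic of calibrating a molecular clock.

# Very simplified sketch of the 'molecular clock' logic described above.
# All numbers are invented; the actual study used far more sophisticated
# phylogenetic models, not this back-of-the-envelope arithmetic.

# Calibration point: the needle tract tumour, whose start date is known.
mutations_since_needle_tract = 60      # hypothetical count unique to that tumour
years_since_needle_tract = 2.0         # known time since the biopsy

rate = mutations_since_needle_tract / years_since_needle_tract   # mutations per year

# Date an earlier event (e.g. when a metastasis split from the primary tumour)
# by counting the mutations that have accumulated since that split.
mutations_since_split = 150            # hypothetical count
estimated_years_since_split = mutations_since_split / rate

print(f"Estimated mutation rate: {rate:.0f} mutations per year")
print(f"Metastasis estimated to have split ~{estimated_years_since_split:.1f} years ago")

The idea is the same as in evolutionary biology: once you know roughly how fast mutations accumulate, counting them tells you roughly how much time has passed.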

This timeline is useful because it could help with diagnosis and treatment, not only directly but also through what else the scientists found. The researchers discovered that the cancer spread faster during the first year, but after metastasis this progression slowed. This suggests that the degree of genetic instability may play a more important part in the deadliness of a cancer than how far it has spread. That could be used to determine a patient's prognosis more accurately and could help doctors when evaluating how well a treatment might (or might not) work. Furthermore, tracking a cancer's progression could enable doctors to better predict the cancer's future behaviour, thus influencing the strategy for treating it.

So that’s a bit of information about what cancer is, how it can be treated and a recent research development in the field. Some of what I have shared in this post I learnt at a ‘medicine insight day’ hosted by a group of medical students at Oxford University. They shared tips on how to get into university, explained some of the science behind cancer and educated us about ovarian and testicular cryopreservation: a method of preserving the fertility of teenagers who have survived cancer.

This preservation is necessary because the treatment a young person receives against cancer is often so aggressive that it can trigger an ‘early menopause’, meaning the patient becomes infertile (unable to have children). The Future Fertility Trust is a charitable trust which offers cryopreservation, in which ovarian or testicular tissue is collected, stored and re-implanted after cancer treatment. This enables young cancer survivors to have children in later life; however, the work is not funded by the NHS, so it relies on donations and fundraising. I would like to see ovarian and testicular cryopreservation help more young people, and I hope it might become available to all young people on the NHS in the future; this could be possible if enough cases are funded to show the usefulness and success of the technique. If you would like to find out more about cryopreservation or the Future Fertility Trust, or to donate, go to http://www.futurefertilitytrustuk.org.

Thank you for reading, remember to rate and share!