The green green grass

8 June 2014

Savannah, South Africa

Contemplate, if you will, the picture above. It’s most likely a familiar sight if you’ve ever caught a glimpse of a nature documentary on the telly: the African savanna. Zebra, wildebeest and buffalo graze away in the hot sun, and we sense the presence of lions, poised to attack at any moment.

Imagine then, if such is your pleasure, the same scene with just one component missing: grass. Without the tiny green leaves of the plants from the Poaceae family, there would be no savanna but rather a thick forest of thorny acacia trees or perhaps a hot and arid desert. No giant herds of zebra and wildebeest. No lions to stalk them.

A world before grass

It might seem like grasses have always been around, that they’re somehow a very ancient group of plants, but in fact they’re quite a recent* addition to the planet’s collection of biomes. Even though the earliest species of grass appeared some 60 million years ago – around the end of the era of the non-avian dinosaurs – they were just a scarce and limited selection of plants growing near rivers and lakes.

The ancient bamboo, grass of the dinosaurs.

In those pre-grass days, the world was a different place, covered by thick forests and deserts. No open plains full of grazing animals, no evolutionary race between swift herbivores and even swifter predators. It was at once a slower world, with most animals walking leisurely about, and a more violent one, with ambush predators lying in wait in the lush undergrowth.

And this was the state of the world for hundreds of millions of years. From the ancient fern forests of the late Devonian 360 million years ago, through the swampy, oxygen-rich forests of the Carboniferous, the vast deserts of the Permian and Triassic, and the lush conifer forests of the Jurassic, to the first flowering plants of the late Cretaceous, forests gave way to deserts and deserts in turn were overgrown by forests. Not until the end of the Oligocene, some 20-25 million years ago, did grasses start to spread onto more arid plains and form the steppes, savannas and prairies we see today. And it took another 15 million years before modern C4 grasses like maize, millet and sugarcane started to make an appearance.

The familiar – albeit relatively modern – look of a typical North American prairie.

A dangerous opportunity

This might all be fascinating on a theoretical level, but there’s more to the story of grasses than a study in biome evolution. The spread of grasses and the formation of wide, open grasslands changed the adaptive path for many animals, from antelopes and horses to carnivores and birds. But there was another group of animals that was also affected: a small, insignificant family of primates suddenly finding themselves exposed in the open – the hominids.

She might look happy but life wasn’t easy for the poor australopithecines.

Our ancestors had it rough. Not only were they small and defenceless, they had lost their natural habitat and had to make do in a very competitive landscape, filled with powerful and dangerous herbivores like buffalo, rhinoceros and elephants, and hunted by fast and furious predators like lions, leopards and hyenas. This, in combination with a constantly changing climate, forced us to develop tools and weapons and to rise up on our hind legs and become bipedal.

But as harsh as it was, the new grasslands also promised something new: a huge hunting ground full of game and lots of grass seeds and roots to eat. And as the savanna expanded north and met the Mediterranean Sea, so did our human ancestors, spreading on to the Middle East and then east into Asia and north into Europe.

Our green little friends

The story doesn’t end there, though. Not only have grasses changed the face of the Earth and facilitated the evolution of our own species, they have also been instrumental in taking us from a few thinly spread hunter-gatherer tribes to being the most widespread mammal in the world, sporting the most advanced societies the planet has ever seen.

Some 10,000-15,000 years ago, just at the end of the last glacial period, things were getting a bit crowded. There were tribes of humans all over Europe, the Middle East, Asia and Africa, and we were running out of places to gather food. Something had to be done.

The wild wheat grass our ancestors cultivated.

Cue the agricultural revolution. Instead of walking miles and miles to find the herbs, roots and seeds we needed to stay alive, we started growing them around where we lived. And the main things we grew were different types of grass: wheat, rye, barley, millet, rice and maize.

It wasn’t a revolution without casualties, however. Rather than providing us with a reliable source of food, the first cultivated grasses were prone to bad harvests, which made starvation and malnutrition a regular occurrence in human societies. In fact, it got so bad that average life expectancy dropped dramatically, and the people who survived to adulthood grew up significantly shorter and weaker than our hunter-gatherer ancestors. Essentially, agriculture was making us frail and sickly.

But we persevered – most likely because there was no real alternative; we’d effectively run out of space and had to make our own food from then on – and fast-forwarding a few thousand years to the present day, we’re much more accomplished farmers. We now rely on a range of grass seeds for our daily food, and as a result they make up the absolute majority of what we consume, be it in the form of noodles, rice, porridge, bread or pasta. As chance would have it, we turned out to be the only grass-eating ape** on the planet – and a very successful one at that. Grasses surely are our green little friends.


* As always, ‘recent’ is a relative term; in the context of speciation and evolution it usually refers to a few million years, but fewer than a hundred million.

** We might be the only grass-eating ape, but there is another primate that’s also a graminivore: the gelada, an East African highland baboon. Geladas eat their grass raw, however. As we cook our grass seeds, we get more nutrition out of them, and are hence better than them and can feel appropriately smug and superior.

The Frankenstein syndrome

6 April 2014

You’re at the grocery store, doing the weekly shopping when you come over a little peckish. Considering something for the road, you see a packet of Twinkies:

Mmm… Creamy goodness.

It seems an innocent enough treat, if a bit calorie-rich. But on a hunch you turn the packet over, start reading the label, and find the following:

Enriched Wheat Flour (ferrous sulfate, niacin, thiamine mononitrate, riboflavin and folic acid), sugar, corn syrup, water, high fructose corn syrup, vegetable and/or animal shortening (containing one or more of partially hydrogenated soybean, cottonseed or canola oil, and beef fat), dextrose, whole eggs, 2% or less of: modified corn starch, cellulose gum, whey, sodium acid pyrophosphate, baking soda, monocalcium phosphate, salt, cornstarch, corn flour, corn syrup solids, mono and diglycerides, soy lecithin, polysorbate 60, dextrin, calcium caseinate, sodium stearol lactylate, wheat gluten, calcium sulfate, natural and artificial flavors, caramel color, sorbic acid, E102 (Yellow 5), E129 (Red 40)

“Whoa. That’s a lot of ingredients,” you say to yourself. “And how do you even pronounce ‘sodium acid pyrophosphate’? Or ‘calcium caseinate’? And that ‘thiamine mononitrate’ sounds really nasty. Better to pick up some fruit and have a healthy, natural snack. Bananas are good and filling – I’ll get some bananas instead.”

All natural banana

Bananas are a good, tasty snack. And I won’t mention the highly unnatural selection that has been performed upon the wild banana berry* to end up with the cultivated version.

The all natural – albeit unnaturally selected upon – banana.

But consider the ingredients list for a natural banana, if such a list were legally required:

Water (75%), sugars (12%) (glucose (48%), fructose (40%), sucrose (2%), maltose (<1%)), starch (5%), E460 (3%), amino acids (<1%) (glutamic acid (19%), aspartic acid (16%), histidine (11%), leucine (7%), lysine (5%), phenylalanine (4%), arginine (4%), valine (4%), alanine (4%), serine (4%), glycine (3%), threonine (3%), isoleucine (3%), proline (3%), tryptophan (1%), cystine (1%), tyrosine (1%), methionine (1%)), fatty acids (1%) (palmitic acid (30%), omega-6 fatty acid: linoleic acid (14%), omega-3 fatty acid: linolenic acid (8%), oleic acid (7%), palmitoleic acid (3%), stearic acid (2%), lauric acid (1%), myristic acid (1%), capric acid (<1%)), ash (<1%), phytosterols, E515, oxalic acid, E300, E306 (tocopherol), phylloquinone, thiamin, E101, E160a, 3-methylbut-1-yl ethanoate, 2-methylbutyl ethanoate, 2-methylpropan-1-ol, 3-methylbutan-1-ol, 2-hydroxy-3-methylethyl butanoate, 3-methylbutanal, ethyl hexanoate, ethyl butanoate, pentyl acetate, E1510, natural ripening agent (ethylene gas)

That list is even longer than the Twinkie one. And just as full of scary-sounding things like ‘tocopherol’ and ‘2-hydroxy-3-methylethyl butanoate’. Also, ‘ash’? Really? And ‘ethylene gas’ – isn’t that what they run those welding machines on?

Now, I don’t want to scare you off ever eating bananas again – that’d be silly. Rather, the point I’m somewhat laboriously making is that chemistry is complicated. And organic chemistry even more so. Our naturally occurring foods often contain more weird chemicals than our man-made varieties.

The Frankenstein syndrome

There seems to be a prevalent mistrust of anything synthetic. A fear of plastics, chemical additives, man-made fibres, metallic alloys and other manufactured compounds. I’d like to label it a phobia (chemophobia would be the correct term), but perhaps it’s not a completely groundless fear? We have, after all, heard about countless incidents involving man-made chemicals and substances: factory emissions, illnesses from synthetic materials used in buildings and fabrics, allergic reactions to newly discovered chemical compounds and so on. It really does seem like whenever we invent something, something bad follows.

“It’s pronounced ‘Eye-gore’ actually.”

It’s called the Frankenstein syndrome – a fear of our own creations – and it seems to be widespread in modern society. But why are we so ready to mistrust new inventions? Are we really such bad inventors/scientists/chemists that we release monsters into the wild over and over again?

It could be argued that this phenomenon is just a variant of the old Luddite hatred of everything new, but I think there’s more to it than that. In addition to the mistrust of the unknown – which, to be fair, is pretty reasonable – there is a fear of loss of control. Once we’ve created something new, we effectively let go of it and let it run rampant. And even though the creators usually swear by their new product and assure us it’s benevolent, observers outside the lab are less convinced.

And so the mistrust and fear spark rumours and urban myths. Like the ones about the Brilliant Blue FCF colouring agent used in certain sweets. Rumour had it that the – rather unnatural-looking and therefore surely harmful – food colouring induced hyperactivity in kids who had enjoyed some blue M&M’s. And even though numerous studies showed that this was not the case, the rumours refused to die, and parents kept going through the bags of M&M’s to rid them of blue ones before their kids got their hands on them.


There is also an element of denialism at work here. Healthy living is now a fairly common lifestyle choice, with people striving to eat organic food without artificial chemicals. This is all a Good Thing, but the notion that we could somehow live chemical-free lives is a false one – even with the rather incorrect definition of ‘chemical’ as ‘synthetic’. There is no place on Earth today that is not already affected by man-made chemicals. Water, air and soil everywhere carry a cocktail of assorted compounds, ranging from the beneficial to the more harmful.

A truck-load of chemicals – one way or the other.

Of course, the level of artificial contamination varies somewhat, and the choice to add your own in the form of synthetic fertilisers and herbicides/insecticides/fungicides will affect this level further. But be in no doubt – there is no produce to be had on this planet that is free from artificial chemicals.

The point

Ok, so what is my point? Should we panic or should we just resign and give up? Is it the end of the world or is it nothing to worry about?

Well, neither really. Artificial chemicals might not be the end of the world as we know it, and they are certainly not automatically evil, but it makes sense to keep an eye out. Some of the chemicals we’ve released have been rather nasty indeed, not least the various metal compounds. We still remember the effects of DDT and mercury compounds, and even today PCB is present in the environment. Clearly, a more stringent approach to testing man-made chemicals before releasing them into nature or putting them in our food has been required.

But let’s not go overboard. A lot of what we see in the ingredients lists for our groceries are compounds readily found in nature. As always, more knowledge about what’s what and what could potentially be harmful is required. Combined, of course, with a will to actually learn and accept new findings. So we need a little less of “I don’t care what they say; I’ll never trust these additives!” or “All newly invented things are great and should be welcomed with open arms”, and more of “Ok, let’s see if there are any good studies on this.” Less dogma and more research.

After all, knowledge is a good thing. Make sure you get yours; that’s an additive we cannot do without.


I’d like to thank James Kennedy for the inspiration for this post – and for painstakingly listing the ingredients of a bunch of natural products.


* Just for the record – this is what a natural wild banana berry looks like: 


10+1 facts and myths about caffeine

28 February 2014

Every morning I get out of bed and stumble into the kitchen. There I make myself a mug of hot dark Kazaar lungo. The first sip tastes almost like hot smoke, the second is already softer, smoother. Almost instantly I can feel the buzz: my brain switches into gear and I start to function on a higher level. I’m awake.

1. Caffeine uptake and half-life

Kazaar! (No actual magic involved)

Unfortunately, the instant wake-up effect of coffee seems to be a myth. Depending on your metabolism, it takes up to 45 minutes for the caffeine in coffee to enter your bloodstream. So the effect I’m experiencing is either a placebo or down to the bitterness of the coffee itself. This also means there should be no issue with taking a cup of coffee right before bedtime, as long as you fall asleep before the caffeine kicks in. In fact, I often have a strong coffee before I go to bed to help me sleep. As for how long it takes for caffeine to leave the system: its half-life is between 5 and 8 hours, so if you’re not used to coffee, the effects of a single cup could stay with you for most of a working day.
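The half-life arithmetic above can be sketched in a few lines of Python. This is a toy calculation only: the 100 mg dose per cup and the fast/slow metaboliser labels are my own assumptions, not figures from any study.

```python
# Toy model of caffeine elimination using the 5-8 hour half-life
# range mentioned above. The 100 mg per cup figure is an assumption.

def caffeine_remaining(dose_mg: float, hours: float, half_life_hours: float) -> float:
    """Simple exponential decay: remaining = dose * 0.5 ** (hours / half-life)."""
    return dose_mg * 0.5 ** (hours / half_life_hours)

# One assumed 100 mg cup with the morning coffee, checked 8 hours later:
fast = caffeine_remaining(100, 8, 5)  # fast metaboliser (5 h half-life): ~33 mg left
slow = caffeine_remaining(100, 8, 8)  # slow metaboliser (8 h half-life): 50 mg left
```

Which is the point of the half-life figure: a slow metaboliser still carries half the morning dose at the end of the working day.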

2. Origin and evolution

“Have you got it? Have you got it? I need another fix.”

Caffeine can be found in a number of plants, either in the seeds (like coffee seeds or ‘beans’, the berries of guarana or the kola nut) or in the leaves (like in the tea plant). Evolutionarily, caffeine acts as an insecticide, paralysing any larvae that feed on the plant. It also seems to work as a growth inhibitor for seedlings of the same species, making sure no other plants grow too close, thus avoiding competition for resources. Additionally, caffeine seems to trigger a reward behaviour in honey bees pollinating the plant, encouraging them to revisit similar plants, which increases the probability of successful reproduction.

3. The colour of caffeine is – clear?

Ooh. Caffeine in a bottle…

Rather counter-intuitively, caffeine is actually clear when dissolved in water, so the colour intensity of a coffee or tea is a poor indicator of how much caffeine it contains. For instance, dark roasted coffee contains less caffeine than light roasted, because the roasting process removes some of the caffeine. And pale tea often contains as much caffeine as black tea – if not more.

4. Tea vs coffee

Speaking of tea: you’ve probably heard that tea contains more caffeine than coffee. This is strictly true if measured by dry weight, but as a prepared beverage coffee contains many times more caffeine than tea. Which – if you think about it – you already knew from the strong effect of coffee compared with the much weaker effect of tea.

5. Kola, cola or coke?

Curing both hysteria and melancholy? Miraculous.

Just like coffee, the kola tree is originally from Africa. The nuts of the tree contain caffeine and have traditionally been chewed for their stimulant effect.

More recently, extracts from the kola nut have been used in certain soft drinks to give them a similar effect (and possibly also to create an addiction to the product in the consumer). However, coca leaves are no longer used, so perhaps one of the better-known brands should consider changing its name?

6. Chocolate

I’ve written a post on chocolate before, where I explained all its amazing benefits. In addition to all its other qualities, dark chocolate also contains quite an amount of caffeine – as much as coffee, in fact.

The other wonder drug.

The effect is, however, reduced by the theobromine and theophylline that are also present at relatively high levels. This is why you don’t get the same buzz from chocolate. (What? No, sorry. That buzz is the effect of all the sugar in chocolate. I’ll save the pros and cons of sugar for another post.)

7. Side effects

There are several misconceptions about the effect of caffeine (and coffee in particular) on our health. One is that drinking too much coffee will cause gastric ulcers, and that we should limit our intake to only one or two cups a day.

The real cause of gastric ulcers.

This is false. Gastric ulcers are caused not by coffee but by the bacterium Helicobacter pylori, something the Australian doctors Barry Marshall and Robin Warren proved in the 1980s: Dr Marshall deliberately drank a concoction containing the microbe and within days had developed severe gastritis, the precursor to ulcers – without drinking copious amounts of coffee.

8. Health benefits

In fact, rather than being detrimental to our health, caffeine seems to offer some protection against a range of diseases, including Parkinson’s, type 2 diabetes, liver cirrhosis and certain types of cancer. You would need to consume a lot of coffee to get these effects, though: more than 4-5 cups per day.

9. Loo breaks

Best restroom signs ever.

Another common misconception is that caffeine is strongly diuretic, and that you will have to run to the loo all the time if you drink tea, coffee or cola drinks. The truth is that caffeine is only mildly diuretic, and only in people who are not used to it. One of the amazing things about caffeine is that all the side effects (sleep disturbances, nervousness, minor muscle tremors, etc.) wear off as you get used to the drug. The benefits, however (alertness, increased powers of concentration, reduced physical fatigue, etc.), stay with you regardless of how long you’ve been using caffeinated drinks.

10. Toxicity

Even so, as with most alkaloids, caffeine is toxic to human beings in high enough concentrations. It would, however, take 80-100 cups of coffee to reach a lethal dose. The risk of overdosing on coffee is therefore rather remote, but there has still been at least one reported death attributed to caffeine: a man suffering from liver cirrhosis overdosed on caffeinated mints.
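Out of curiosity, the 80-100 cup figure survives a quick back-of-the-envelope check. Note that both numbers plugged in below – a lethal dose of roughly 10 g and 100-120 mg of caffeine per cup – are my own assumed ballpark figures, not values from the post.

```python
# Rough sanity check of the "80-100 cups" claim. The ~10 g lethal dose
# and the 100-120 mg per cup range are assumed ballpark figures.

LETHAL_DOSE_MG = 10_000
MG_PER_CUP_RANGE = (120, 100)  # a strong cup, a weaker cup

cups_low, cups_high = (LETHAL_DOSE_MG / mg for mg in MG_PER_CUP_RANGE)
print(f"{cups_low:.0f}-{cups_high:.0f} cups")  # prints "83-100 cups"
```

So under those assumptions you would indeed need somewhere between roughly 83 and 100 cups – in one sitting, at that.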

11. Memory enhancement

Memorise this.

But I’ll end this list with a new discovery: it would seem that caffeine helps us with memory consolidation, i.e. the process of converting short-term memories into long-term ones. In a recent study, people who consumed two cups of espresso just after a memory test outperformed the placebo group. There seems to be a sweet spot at 300 mg, though, so don’t overdo it. Two cups of espresso is just enough – no more, no less.

The dark mistress

Overall, caffeine seems to be quite a drug. No real side effects (and the minor ones that do exist fade away over time), and several mental and physical benefits. But drinking tea or coffee isn’t down to logic; it’s a lifestyle – a passion, even. Once we get over the bitterness of the dark mistress, we just can’t get enough of her. Which, in the cold light of logical thinking, might be a drawback, but I really don’t care. Give me another mug of that strong dark hot stuff.


The future is autonomous

15 February 2014

A while ago – several years ago, actually – I wrote a post on electric cars, bemoaning the lack of market penetration, even in the 21st century. It was sort of a prequel to the post The future isn’t what it used to be, where I commented on the lack of technical advancement. There’s another side to these two stories and that’s the one about autonomous vehicles, a.k.a. self-steering cars.

Old school

Even though based on hidden guide rails, this vehicle detector was state-of-the-art in 1957

Just like electric cars, self-steering cars have been around for a long time, albeit in a limited sense. Already in the 1920s there were successful experiments with remote-controlled cars driving in heavy city traffic. But of course the computing power to create fully autonomous cars didn’t exist back then, so research focused instead on getting autonomous cars to follow magnetic or electric rails hidden in the streets. This railroad-car technology never took off, due to the potentially astronomical cost of fitting all the roads in the world with guide rails.

Even when guide rails were replaced in the 1960s with electronic devices that detected road and lane edges, the cost was still too high for anything but limited field trials, and it wasn’t until 20 years later that we got the first hints of cars being able to detect roads and lanes all by themselves.


“Now if only they could find a way to stop me from becoming car sick when reading, all would be well.”

And now, 30 years on, we have fully autonomous cars driving in real live city traffic on a daily basis. Companies like Mercedes-Benz, Volvo, Volkswagen, Ford, Toyota, Audi, BMW, Nissan and GM are all currently testing driverless cars. And of course we have the famous Google cars.

And gone is any need for roads fitted with sensors and guide rails. Modern cars know what a road is by looking at it, and know how to stay on it. They also know how to keep their distance from surrounding traffic and how to navigate intersections and multi-lane highways. They do this with higher precision than human drivers, and at higher speeds.

In fact, as the technology has matured so quickly (relatively speaking), governments around the world are finding themselves with outdated traffic legislation and are scrambling to catch up. Germany and the UK have already passed laws that allow driverless cars to operate in traffic, with the owner of the car responsible for any accidents – even if he or she wasn’t driving at the time.

Future prospects

So what will the future bring? When will we be able to buy our first driverless car? And will we want to?

Just a regular car – that happens to be autonomous.

Well, the benefits of autonomous cars are numerous. First and foremost, they will almost certainly cut road traffic accidents by at least 95%. With high-precision driving systems (that never get tired, lost, frustrated, drunk or sick), aided by radar and infrared sensors (allowing them to see in the dark or in fog), we would soon enter a time when people getting hurt or killed in traffic would be major news. It would be more common to be hit by lightning or to win the lottery than to be involved in a car crash.

Secondly, it would make traffic much smoother. Human beings aren’t exactly renowned for their logical thinking – especially when driving – so much of the traffic congestion we experience in cities today is down to irrational driver behaviour. Not so with autonomous cars. They will let other vehicles in, keep safe distances and reasonable speeds, know which routes to avoid at certain times of day, and communicate with each other in a polite and relaxed manner.

In the future, all German cars will only come in black. True fact.*

Thirdly, there’s convenience. Apart from letting go of the rather stressful activity of keeping a metric tonne of heavy machinery on the road at high speed, we would be able to have our cars pick us up at home, drop us off at work, and then go somewhere else to park for the day. No more looking for parking spaces or waiting for the car to heat up and defrost on cold winter mornings. And we wouldn’t have to worry about having had a drink with dinner, or being too young or too old to drive, or suffering from a disability of some sort. The car would take us where we need to go.

And lastly, there’s the financial aspect. Even though self-steering cars will no doubt be prohibitively expensive at first, prices will soon drop, and we can expect autonomous cars to become cheaper than manual cars at some point in the near future. Add to this the reduced need for insurance and the optimal fuel economy of autonomous cars and you’re sitting on a winner. In the bigger picture, society at large will also benefit, since the costs of road traffic accidents and their related human traumas add up to astronomical amounts every year.

Dark clouds

There are, however, a few obstacles on the road (no pun intended) to that bright new future. And they’re not related to technical limitations – the technology needed for autonomous cars will undoubtedly become better, cheaper and smaller over time, but even what we have today is perfectly adequate.

“Do you like it? No, reindeer actually. Made it all myself. It’s very snug.”

No, the problem is more one of human nature. It’s our own inbuilt fears and hangups that will prove to be the most difficult obstacle. And for once I’m not talking about the Luddite syndrome of hating and fearing everything new (that’s another post). No, this time it’s more about loss of control.

Humans are an industrious bunch of monkeys. We keep inventing more and more advanced ways of staying ahead of the game, of keeping ourselves safe and alive. Fire, stone tools, fur clothes, huts and canoes. And lately, multi-lane interstate highways, high-rise buildings, water closets and streamed high-definition IP-based television.

But the downside of all our inventions is that it makes us think we’re in control; that we somehow can control life. And that feeling of control is something we don’t want to give up. We assume we always know best. We really are the most arrogant primates on the planet.

And this is not even the joke about the pilot and the dog…

This could affect the uptake of self-steering vehicles. Even if autonomous cars are better drivers than the most seasoned and experienced rally driver, we will harbour an inbuilt mistrust towards them. A machine could never really drive a car, surely? How would it know what to do if something happens? It would never be as good a driver as I am. Or – what if something goes wrong? What if it malfunctions? Then we’ll be stuck in an out-of-control car, tearing down the streets at rush hour at 90 mph. It’ll be a nightmare!

Yes. It certainly would. But let’s look at it factually: how many airplane crashes have you heard or read about that were caused by autopilot failure? And how many that were attributed to human error? Granted, driving a car is harder than flying an airplane, but even so: people being distracted, reacting too slowly or being just plain drunk is the main cause of road traffic accidents. Not the cruise control running amok or the automatic brake system failing.

Like it or not…

In the end it won’t really matter. Technology has a tendency to march on regardless of any concerns about safety or loss of freedom. Already this year, Mercedes will be selling its S-Klasse with autonomous steering, braking and lane-control systems. Volvo and Ford will follow with their semi-autonomous cars, and next year both Audi and Nissan will join the ranks, closely followed by Toyota and Cadillac. And within six years we will see the first fully autonomous vehicles on the market, with Mercedes-Benz, Volvo, BMW and Nissan selling completely self-steering cars in stores around the world. Within five more years, they will be joined by Ford and Daimler.

What all the kids will be “driving” soon.

So it looks like there will be a dozen or so models of autonomous cars in our everyday traffic within a few years. We will no doubt hate them at first, as they will drive carefully and keep to the speed limits. We will also hate them because they will be very expensive cars that we would like to be able to afford ourselves. But after a few years these feelings will most likely fade, and it’s not unlikely that if you buy a brand new car in 10 years’ time you will opt for the more convenient self-driving kind – if nothing else because of the huge savings you’ll make on the insurance premium.

This will soon be a rare sight: a young person with a driver’s licence.

And fast-forward another 10 years and we can expect to find old-fashioned, manually steered vehicles only at the bottom of the range. All mid-range vehicles will be autonomous, and some will boast trendy new features like downloadable driver profiles, so that you can be driven around by famous rally or racing drivers. Or – if you prefer – perhaps a boy-racer profile? Or a senior-citizen one? Or a distracted-parent one? The sky’s the limit…

Either way, we will have become so used to the convenience and safety of autonomous traffic that we will start lobbying for a more extensive and thorough driving-test programme for those who still choose to drive manually. Within a few more years, a driver’s licence will be as costly and rare as a pilot’s licence.

The future is bright, the future is now

No doubt my predictions in this post will be wrong. Predictions about the future always are. Mostly because they’re too conservative or too linear. In the 1980s, no one could even imagine the socio-economic impact of the internet. Just like no one in the early 1900s would have been able to predict the meteoric rise in motorised traffic.

No doubt we will all turn into Victor Meldrew.

But one thing is pretty clear: some years from now, when we’ve gotten old(er), we will most certainly be able to rant on about the good old days to our grandchildren. The good old days when we were still allowed to drive, and cars would still run on highly carcinogenic fossil-based fuels, like petrol. Or diesel, even.

And our grandkids will no doubt roll their eyes, stop pretending to listen or care, get into their chic fuel-cell-powered autonomous personal vehicles and drive off, somewhere far far away from our grumpy old selves.


* Might not be factually true, although I sure hope it will be.

The blush response

15 December 2013

I have a problem with blushing. Not the physical act of having my face turn red when I make a fool out of myself, mind, but the actual concept of the blush response. Why do we blush? What’s the evolutionary value of showing the people around us that we’re embarrassed? What could possibly be the point?

The physiology

No blushing here. But then again, she’s not human, is she?

I find that the best way to get answers is to learn more about the problem. So what is blushing exactly?

From a physiological point of view, blushing can be described as an autonomic phenomenon in which our facial capillaries dilate to increase the blood flow in the surface of the skin, turning it red. It happens involuntarily when we’re embarrassed or under emotional stress. The increased blood flow also makes the face (and other blushing parts, like the neck or ears) feel hot and uncomfortable. The response usually fades quickly, and normal skin colour returns within a few minutes.

Caught with our pants down

“Ok, you saw through that one, did you? Don’t I just feel like a proper tit right now?”

Ok, so blushing is a physiological representation of the emotional state of embarrassment. Fair enough. We get caught telling a lie, we feel exposed and embarrassed and we blush. And since we have little or no control over this phenomenon, it stands to reason it would have some kind of evolutionary value. After all, it costs extra energy to expose our warm blood to the outer layers of our skin, and that cost must be counterbalanced by some benefit, or it would have been selected against and disappeared millions of years ago.

As we blush when we’re caught not telling the truth, perhaps blushing is a lie detector alarm? Perhaps we’re supposed to get caught when lying – especially since we as humans are so good at it. Could it be that the blush response is a control device to make sure we don’t lie our heads off? After all, if we were to lie too much, we would potentially sabotage our position within the group and get left on our own. And we wouldn’t have survived on our own…

Good in theory…

That might be an interesting idea, but there’s a major flaw in the reasoning. And to illustrate that we have to – yet again – go back to the birthplace of modern humans: Africa.

Yeah, hide your face. That’ll make your embarrassment much less conspicuous.

We diverged as a separate species from Homo heidelbergensis some 200,000 years ago in East Africa. The climate was tropical just like today and the days would have been scorching hot. To protect us from the damaging radiation of the sun, we no doubt had black curly hair and dark skin, as is the case with most African populations today. In the beginning, we were all African. That’s the proper, original variant of our species. Those of us not African (or of African descent) are mutants – plain and simple.

That little fact highlights the problem with blushing as a social signal: in dark-skinned humans, blushing is more or less undetectable. And since blushing is a common trait in all modern humans (and most likely also in older forms, like Homo ergaster and Homo heidelbergensis), we must conclude that the visible part of blushing has been of little consequence to us. It’s therefore not very likely that blushing has evolved as a social signal to alert others of our embarrassment.

Blushing is supposed to be – invisible?

So blushing evolved as an invisible phenomenon. Ok, so be it. But that makes it even more peculiar. What, then, is the point of flushing our faces if no one can even see it?

“I might be blushing. Or I might not. And you can’t tell.”

I’ve already mentioned another effect of blushing, something we often think of as secondary and of little importance: heat. When we blush, our faces go hot from all the blood flushing through our skin. And this effect might be what the blush response is all about. When we blush, we get a physical reminder that we’re in an embarrassing situation and that we should probably try to avoid those in the future.

So it could be that blushing is a lie detector after all, but a personal, secret one. It could be a way for our brains to tell us that we’ve made fools of ourselves again and that we need to do better in the future. After all, it’s not a problem being a liar as long as you don’t get caught. The blush response might aim to make us better liars rather than stopping us from lying. It might be our personal trainer helping us become highly functional sociopaths.

This whole chain of reasoning highlights a very common pitfall in our culture: extrapolating features and phenomena displayed in one sub-population (our own) and applying them to the whole of the species. With the blush response, it seems like we’ve succumbed to the temptation once more. Man, is my face red.

P.S. Turns out my face was to become even more red. As Jim kindly pointed out, I made a couple of embarrassing typos in this post. Don’t worry, they’ve been rectified. But the whole episode sure altered my facial colour. Not that that would have any evolutionary significance.

P.P.S. I was just informed that comments and pingbacks had been switched off. This was most unconsciously done, I assure you. I always welcome your comments. (This post is quickly becoming the most embarrassing I’ve ever posted.)

Luddites, biotechnology and the future

14 October 2013

Luddites – fighting progress since 1812.

It’s early morning. A thick fog is wrapping the cobbled narrow street in translucent cotton, softening the appearance of grimy buildings and filth-ridden gutters. A group of people are quickly but quietly moving up the street towards a big red-bricked building. They stop in front of a wide double door, built from pale solid wood. One of the men raises an axe and smashes the door open. The mob storms the building and starts destroying the delicate machinery inside. Soon, only splinters and bent and distorted lengths of steel remain where once stood one of the finest examples of technological progress made in over 200 years: the automated power loom. The Luddites have struck again.

In hindsight, it might seem a bit foolish to have tried to stop the progress of industrialisation by chopping up some old machinery with an axe, but humans aren’t exactly rational creatures (see The limbic society) and tend to react emotionally to most situations.

Yet another example of hysteria over reason.

Cue present day. A field of golden wheat, swaying slowly in the soft wind. The early morning sun has just touched the top of a hill on the other side of the valley. A group of people are quickly but quietly positioning themselves along the downwind edge of the field. On a given signal, they set fire to the tall dry grassy strands. The fire spreads rapidly, engulfing the wheat. Soon, only scorched earth and charred stumps remain where once stood one of the most technologically advanced feats in the history of mankind: the genetically modified organism, or GMO.

GMO – a quick background

A rock dove (a.k.a. common pigeon) genetically modified into a fantail pigeon by means of artificial selection.

Now, about GMO: even though biotechnology is a fairly modern concept, the process of genetically modifying organisms to suit our needs and fancies is nothing new. We’ve been doing it ever since we domesticated the grey wolf some 10,000 years ago. But we’ve been forced to do it old-school: look for traits that we think would be of benefit to us (if not to the organism itself) and select for them. Over years, decades, centuries and millennia we have managed to create a wide range of unnatural beasts and abnormal crops, all by using controlled breeding.

What’s different now is that we for the first time have access to the genome itself. Instead of looking for how genes express themselves in the form of their parent organism’s plumage, colour or size, we can now modify them directly, either by borrowing traits from other organisms or by tweaking the genes themselves. By injecting the new genes into our target organism (at the egg-cell level) we can then increase an animal’s resistance to certain pathogens or increase the yield of a crop. This would then reduce the need to use antibiotics or fertilisers and would therefore help produce higher quality food for less money.


As hinted in the introduction, people aren’t all that keen on genetically modified organisms. The reasons for this seem to fall into four groups: terrorism, religion/ethics, invasion and health.

We should probably worry more about naturally occurring pathogens than synthetic ones.

I’ll start with terrorism. With biotechnology, we could create new and previously unknown biological weapons and disperse them over an enemy. The population of that country would then quickly get infected and perish from this synthetic superbug.

While this is a theoretical possibility, it’s not really a practical one. It would be much easier to use an existing virus or bacterium than to create a new one. And even then, the lack of control over how the disease would spread makes bio-terrorism even more uncontrollable than nuclear terrorism. So in short: yes, it would be possible, but it could as easily turn on you and your family as on your enemy.

Yeah, that sure does look like natural variations of the grey wolf and not like some unnatural genetically modified frankendogs**.

Next up are the religious and ethical arguments. I won’t bother with the former – not a lot is allowed according to most monotheistic religions anyway, including banking, pork chops and alcohol consumption – but I will have to say something on the latter. From an ethical point of view, biotechnology is indistinguishable from our current breeding programmes. We alter nature to fit our needs, with little or no concern for the welfare of the organisms themselves. Lately, we do seem to have started to care more about animal welfare (although mainly for pets), which is of course a Good Thing, but regardless, bioengineered organisms should fall under the same regulations as any other creature. Which they do, so all should be catered for*.

The effect of the Japanese vine kudzu, let loose in Georgia, USA.

Invasive species. Now this is a big one. It’s well-known that the introduction of foreign species into existing ecosystems is usually less than favourable for the native species, like when European rabbits were introduced to Australia or the Japanese kudzu vine spread into the wild in the USA. The threat of a genetically enhanced organism getting loose and out-competing the wild fauna or flora is a real one. So far – even though several GMOs have managed to spread in the wild – nothing like this has happened. But we need to monitor the situation carefully and make sure we don’t inadvertently create a species that will become a pest. (There are safeguards in place, like suicide genes and such, but if life and evolution have taught us anything, it’s that things constantly change. The suicide genes might devolve, or lose their efficiency for a number of reasons. It would be foolhardy to rely solely on emergency shutdown mechanisms.)

Ok, good. I’m glad we’re keeping this discussion free from emotionally loaded propaganda and deliberate misinformation.

And finally, health. This is the argument most widely used against GMOs, and especially GMO crops. It has been stated by activist groups that GMO food could be bad for your health, either directly or through long-term exposure. Yet a plausible mechanism for how that could work has yet to be suggested – especially since GMO food is chemically no different from other food.

Also, we have been eating GMO food for 30-odd years now and no apparent epidemic of related health problems has been detected.

The only possible mechanism for harming human beings would be to deliberately introduce a gene that produces some kind of toxin. And even then, the toxin would have to be produced in high enough concentrations to harm us, which would mean that precious energy that could have gone into increasing the yield would have to be reallocated to expensive toxin production. The resulting crop would perform very poorly in the fields compared to other, non-toxic, varieties.


Although there are issues with genetically modified organisms, they’re mainly to do with the potential threat of creating new pest species that could harm local ecosystems. However, in that scenario, it’s worthwhile to remember that GMOs rarely turn out to be the super-organisms everyone fears. Even sporting a modified gene here and an enhanced gene there, they rarely perform better in the wild than species with a few million years of evolutionary adaptation under their belt.

You might question – and rightfully so – what the point of GMO crops is, if they don’t perform any better than the regular selected crops. Actually, they sometimes do, but we’re still trying to figure this biotechnology thing out. We have managed to produce crops that express some beneficial traits, but it has not been as easy as we’d hoped. The potential is promising, but getting there might take some time. Obviously, though, this should warrant more research, not less.

Good food is just good food, regardless of what technology was used to modify the genes.

And when it comes to health concerns, we need to remember there’s no magical ‘natural’ substance or medium. Just because a gene has been modified by us and not nature, it doesn’t make the organism somehow completely foreign and less natural. It’s still the same chemicals as in non-GMOs, and the GMO crops still contain all the regular stuff like dietary fibres, carbohydrates, fats, proteins, vitamins and minerals. Or as the European Commission Directorate-General for Research and Innovation would have it:

“The main conclusion to be drawn from the efforts of more than 130 research projects, covering a period of more than 25 years of research, and involving more than 500 independent research groups, is that biotechnology, and in particular GMOs, are not per se more risky than e.g. conventional plant breeding technologies.”

So to conclude this – all too long – post: GMO food is no more dangerous to eat than food from crops refined through selective breeding. And frankly, if you’re worried about what’s in your food, you could do worse than focusing on what could really harm you: refined sugar. But that’s another story.

* Unless of course you don’t have any faith in our authorities’ competence or good intentions. In which case it’s a different issue altogether.

** For the record, I’ve been lucky enough to have owned a couple of Great Danes and they’re lovely dogs. Not frankenesque at all.


29 September 2013

Look at all the little virtual people.

I read a book a while back. The book was Accelerando by Charles Stross and it was packed with new concepts in a way I hadn’t experienced since I read Neuromancer by William Gibson decades ago. (Yes, I’m a huge Science Fiction fan. Didn’t you know?) Just as Gibson in the very first cyberpunk novel* back in 1984 hinted at the future of a connected world (world-wide web, anyone?), in Accelerando, Stross shows us the future of a post-singularity humanity.


The idea of a post-singularity humanity might warrant some further explanation. The singularity, in a socio-technological sense, is the point where the processing power of our computers exceeds the processing power of humanity. Essentially, it’s when machines become smarter than us.

Well, this looks suitably artificial. And vaguely intelligent.

Now, we’re all aware of the blistering pace of advancement in computer technology over the last 80 or so years. From clunky steam-powered mechanical calculators, via electrical tube-endowed monsters to the first microchip computers in the early 60s, our computers keep on getting faster. And, according to Moore’s law, they will keep on doubling in speed every 18 months. That’s exponential growth, and it stands to reason that within a relatively short amount of time, we will have more processing power in our glasses than a commercial data processing warehouse has today.
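To get a feel for what an 18-month doubling time implies, here’s a minimal back-of-the-envelope sketch in Python (taking the doubling claim above at face value; the numbers are illustrative, not measurements):

```python
def growth_factor(years, doubling_period_years=1.5):
    """How many times faster computers get in `years`,
    assuming one doubling every `doubling_period_years` (18 months)."""
    return 2 ** (years / doubling_period_years)

# 15 years is 10 doublings: about a thousandfold.
print(growth_factor(15))  # 1024.0

# 30 years is 20 doublings: about a millionfold.
print(growth_factor(30))  # 1048576.0
```

Twenty doublings – just 30 years at this rate – already buy a factor of a million, which is exactly why linear intuition fails so badly when predicting this sort of thing.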

So. Soon we will have computers that can out-think us. And not just in a simplistic mechanical way, like calculating one trillion decimals of the number π, but actually be better at reasoning, analysis and pattern recognition. That’s what we call the technological singularity: humans would no longer be the highest intelligence on the planet.

A post-singularity humanity would therefore face a completely different set of challenges than what we currently do. Rather than having to worry about growing crops and building armies to defend our territories from other human beings, we would be competing with a new type of consciousness: the artificial intelligence or AI. And one way of managing that would be to go digital.


A solar-system-sized computer? Yes, why not? Why not indeed.

Just as enough processing power would allow artificial intelligence and consciousness to be created synthetically, that same power would allow us to scan the current state of our brains and upload that state to a processing cloud. This facsimile would then go on being a conscious intelligence, with similar emotions and experiences to what we experience here in the physical analogue world.

Having been uploaded, we could go on experiencing the physical world using sensors of different kinds, just as we currently use our eyes, ears, noses and what-not. That way we would very much still be physical beings, relying on the external world for our everyday experiences. Or we could recreate a virtual world to live in instead. In this virtual world we could be gods, changing the world around us as we please; or we could have restrictions in place, making the virtual world as limiting as physical reality.

And with enough power, the fidelity of this virtual world could be as high as the real world. Essentially, our virtual existence would be indistinguishable from reality.

What is reality?

This would be a definite low fidelity virtual reality, then.

But hang on. If we – at least in theory – could simulate a virtual world to the level of it being the same as the physical world, what would the difference be? How would we know we’re simulated consciousnesses living in a virtual world and not physical beings living in the real world?

Well, we probably wouldn’t be able to tell the difference. All our senses would tell us that the rain that was falling was wet and cold, the wind strong and smelling of the sea and the sky dark and full of clouds. We would experience emotions like fear, love and hate, just as we do in the real world.

This has some uncomfortable consequences. If we can foresee a possible future of a post-singularity humanity, where we live virtual lives in a simulated reality, what is to say that it hasn’t already happened? What if we already live in a simulation and don’t even know it? We might already be in that future, having simply chosen not to remember any of it, living our lives in a costume drama, as it were.

Out of all the infinite worlds…

A multiverse containing many different Earths. I particularly like the octopus one.

I sincerely doubt that we are living in a simulation, though. For one thing, why would we have chosen to not remember that we are, if we were? That doesn’t sound very likely, in my opinion. Also, if we were in a simulation, why would we choose one with so many practical problems in it? Why choose to live in a world with global warming, starvation, poverty, slavery, prostitution, countless wars and a mass extinction rate that’s off the chart? Surely there would be nicer worlds that we could create?

And I don’t put much faith in theories of some alien intelligences having secretly invaded us and staged a synthetic world for us to use. Why go through all that trouble? They would essentially have to dismantle the whole solar system and turn it into a Matrioshka brain in order to provide the required processing power. And for what? Placating a bunch of simian low-brows? Surely it would be cheaper just to exterminate us, old-school-style.

Science to the rescue

If you can’t stand the uncertainty of not knowing whether you (and the rest of the perceived universe) are really real, physicists might soon come to the rescue. The prospect of our universe not being a real physical universe is actually not laughed at within the scientific sphere, and several theoretical experiments for determining whether this is the case are being designed.

Science Woman and Science Boy to the rescue.

One of them relies on the assumption that – as we currently understand the laws of physics – it would take an infinite amount of processing power to simulate them ad infinitum. And since there can’t be an unlimited amount of anything, some short-cuts must be made in order to simulate our universe on a quantum level. This would result in rounding-off errors and weird values at the far ends of the scale for all physical forces within our world, and were we to find such anomalies, we could deduce that we are indeed living in a simulated reality.
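As a loose everyday analogy (not the physicists’ actual proposed test), any finite-precision computation produces exactly this kind of rounding-off artefact – ordinary floating-point arithmetic included:

```python
# 0.1 has no exact binary representation, so each addition carries a
# tiny rounding error -- a small-scale version of the "short-cuts" a
# resource-limited simulation would have to make.
total = sum([0.1] * 10)  # ten additions of 0.1 in 64-bit floats
print(total)             # 0.9999999999999999, not 1.0
print(total == 1.0)      # False: the corner-cutting leaves a detectable trace
```

Anyone inside such a computation who looked closely enough at the far decimal places could, in principle, notice that the arithmetic isn’t quite exact – which is the spirit of the proposed experiments.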

But, as I said, don’t lose any sleep over it. The probability of that being the case is pretty low. And even if it were true, and we really were living in a simulated reality, what difference would it make on a practical level? If it still feels real, why not treat it as if it were real? I mean, what options would we have? Sit around and complain about it? Much better to embrace whatever reality we find ourselves in and make the best of it.

Don’t worry, be happy. It will end soon enough, anyway.

* I find it slightly ironic and quite amusing that Neuromancer, the novel that defined the genre cyberpunk, was written on a mechanical typewriter, and that Gibson didn’t even own a computer at the time. But then again, who am I to talk? I only bought my first computer 11 years ago.

Crime and punishment

9 September 2013

This is not a post on Fyodor Dostoyevsky‘s famous novel Crime and Punishment. Rather, it is a post on capital punishment, its moral implications and social consequences. As such, it might be slightly controversial and easily offended readers might do well in skipping this post.

There. Warning over. (Side note: I haven’t done a post where I had to put a warning or disclaimer at the top for a while now. Am I losing my sting?)


The current legal systems by nation.

There are only four types of legal system in use in the world today. The most common is Civil law, which is based on abstract laws and rules legislated by a governmental body. This is the legal system used throughout Europe (except for the UK) and most of South and Central America, Asia and Africa.

The second most common system is Common law, based on the old legal system of the British Empire and famous from all the British and American films and television shows. It is essentially a precedential system, where judges develop the laws in court, creating precedents that will act as guidelines for subsequent cases of a similar nature.

Common law is not the most common after all.

Then we have religious law, now only represented by the Islamic Sharia, where laws are based on rules found in religious scriptures. It is mainly practiced in the Middle East and parts of Northern Africa.

Finally – and now all but extinct – we have the Customary law systems, where old customs are essentially viewed as laws in court. In practice, if things have always been done a certain way, it becomes the law and people are required to continue doing them that way. Today only Mongolia and Sri Lanka practice Customary law.


I’ve talked about fairness before, both in The fairness syndrome and The moral code. I concluded that there seems to be a hardwired sense of fairness in us humans, where we expect people to behave decently and if they don’t, we become outraged. This fairness sense is the basis of all legal systems, both current and ancient. We want our societies to be fair, and for everyone to be treated fairly. However, the way in which we realise this fairness has varied over the ages and across borders.

“Yeah, like I’d eat YOU.”

In the beginning we had the old ‘Eye for an eye, tooth for a tooth’ system. If someone knocked out one of your teeth, you had the right to knock that person’s tooth out in return. Or stab out their eye, had they somehow caused you to lose an eye. It’s a very direct and basic system and it’s still in use today. Among the Korowai people of Indonesian Papua, for instance, the village elders allow you to take revenge on a murderer by killing him or her yourself. And then you’re allowed to eat him or her, to regain some of the energy lost when the murderer killed the victim. But this is a dying practice (no pun intended) and most societies now delegate punishment to a legal institution of some sort.

Death penalty

This brings me to the core subject of this post: capital punishment. Just like the old ‘Eye for an eye’ system is declining, so is the practice of sentencing people to death. Currently, out of 206 countries, only 57 actively practice the death penalty. Of those, most are developing countries, and there’s an obvious trend towards abolishment in countries where the economy is advancing. The only post-industrial countries still practicing the death penalty are Japan, South Korea, Singapore, Taiwan and the United States.

Getting rid of enemies of the state is not really about justice.

So the number of countries using the death penalty is steadily decreasing. And this raises a question: why have so many countries abolished the death penalty? Is it for moral reasons? A sense of becoming a more enlightened society? Or is there something more practical behind the decision?

An argument in favour of the death penalty is that the presence of capital punishment will act as a deterrent and stop people from committing heinous crimes. In reality, that doesn’t seem to be the case. With the typical practicality of the human mind, criminals (like all humans) tend to suppress uncomfortable facts and rationalise that it ‘could never happen to them, anyway’.

No, I can’t see much of a correlation either…

The statistics seem to validate this view. As seen in the graph to the right, there is no obvious correlation between the use of capital punishment and the number of homicides committed per year. For instance, most European countries have the same low level of homicides as Saudi Arabia, even though none of the European countries practice capital punishment and Saudi Arabia does. And on the other side of the Atlantic, the United States does practice capital punishment but still has the same rate of homicides per year as Argentina, which has abolished it.

So, perhaps the reason almost every single developed country has abolished the death penalty is because it just doesn’t work as a deterrent?


But there’s also the moral side of capital punishment to consider. In recent years, more and more elaborate methods of absolving the executioners have been invented. The (in)famous lethal injection machine of the United States uses a control computer with a randomising function and a mix of lethal and non-lethal syringes in order to allow the two operators to stay ignorant of which of them actually ordered the machine to perform the execution.

Lethal injection machine – a philosophical folly.

From a philosophical point of view it’s a bit of a folly, since each operator is required to press the button and is therefore essential for the execution to take place. That way, however much they would like to avoid it, they are both equally responsible for taking the prisoner’s life.

But the phenomenon does highlight a conflict of interest. Even though a state wishes to be able to enforce the death penalty, it doesn’t want to force anyone to have to carry out the sentence. This is a symptom of trying to escape the responsibility of performing the executions. It’s similar to people not wanting to know how the meat they’re having for dinner has been produced. And just like suppressing thoughts of slaughterhouses filled with petrified cows and pigs is the first step towards vegetarianism, trying to avoid the guilt associated with executions is the first step towards abolishing the death penalty altogether.

The future

As mentioned, there is a global trend towards abolishment of the death penalty. More and more countries join the ranks of post-industrial nations and in the process, most of them leave capital punishment behind. And for good reasons. Capital punishment has no place in a modern society; it doesn’t work as a deterrent and is just a brutal and archaic form of punishment that’s left over from when we had a much more primitive view on justice.

But being an abolitionist doesn’t make me a pacifist. If someone were to hurt someone I love, I would turn to violence in an instant. But that’s just me allowing my limbic system to get the better of me. I would hope that an enlightened and advanced society would keep itself above such primitive emotions and be guided by a clear state of mind. Courts of law are supposed to be about justice, after all, not knee-jerk simian responses to emotional triggers.

So let’s call capital punishment what it is: revenge, not justice.

Fat and fit?

26 August 2013

Warning for obesity-related mortality?

I watched a Swedish television commercial the other day. It depicted a slightly portly middle-aged woman winning a range of different Olympic sports. The point of it – I think – was to show that middle-aged women are better than you think at things you didn’t think they could do; an idea that the company behind the commercial – an internet service provider focusing on online gaming – was eager to reinforce.

But regardless of the message or the motive behind the commercial, it got me thinking: does being overweight stop you from being a healthy human being? Is what we’ve been told actually true – that being overweight is a sure ticket to heart conditions, diabetes, circulatory problems and all the rest? Or is there something else hidden here? Could we have oversimplified the issue?

Conventional wisdom

The traditional view of overweight people: lazy, unmotivated and unfit.

There is such a mountain of statistical data linking numerous diseases to being overweight that it has become the conventional wisdom of the medical profession. And not just the medical profession either; the same links are used in the insurance business and the fitness industry as well as in general healthcare services. There is a lot of money to be made from this, and from reinforcing the idea that being overweight is the same as being unhealthy. No one wants to be unhealthy after all, and being told that you are will most likely trigger a behavioural change, in order to fight this evil overweight and become healthy again. It’s then a piece of cake (mmm, cake…) to sell services that cater to that need to lose weight.

Case in point: when I had my yearly medical a while back, it was pointed out that I could do with losing some excess weight and that additional exercise would be a good idea. My first reaction was to immediately start planning how to lose this excess weight to assure my good health. But then I had second thoughts. Ok, so I don’t participate in any team sports or spend my free time at the gym, but I eat (more or less) healthily and I walk 6-7 km at a brisk pace daily. That should at least make sure I’m not exceptionally unfit, shouldn’t it? I mean, one doesn’t want to be manic about fitness, does one?

But, on the other hand, those statistics are a frightening read…

Lies, damned lies and statistics

Don’t listen to your bathroom scale; it’s just being melodramatic.

It turns out that the truth is a little more complex than what the statistics are indicating. What we think we see in the statistics could just be common symptoms rather than cause and effect. Even though it is indeed true that people who are overweight are more likely to also suffer from diabetes, heart conditions and circulatory problems, it doesn’t automatically follow that the former is the cause of the latter. Rather, an equally valid answer would be that an unhealthy lifestyle is the cause of both. This would mean that you could get all the health problems listed above without being overweight and that you could be overweight without developing a single one of them.

Now, obviously I can’t deny that certain conditions are linked directly to being overweight. If you happen to be very overweight, you might start suffering from pains in your joints, and your heart would have to work harder to power a bigger and heavier body. But that’s not my point. My point is that in the current culture of manic weight loss, even healthy young people who are just above the ‘normal’ weight (or indeed at or under it) are desperately trying to lose weight by dieting and excessive training and exercise. That is not healthy behaviour, and is just a few tiny steps away from turning into full-blown eating disorders like anorexia or bulimia.

The tide is turning

You don’t have to be skinny to be fit.

Lately, reports have started to challenge the old wisdom. Studies have been carried out to more closely investigate the link between obesity and the lack of fitness. They all show that keeping fit is much more important than losing weight. Obese people who exercise regularly and lead a healthy life have a much lower risk of morbidity than people who are of ideal weight but unfit.

But don’t take my word for it. Here are a few quotes, starting with the Harvard Health Policy Review:

“A fit man carrying 50 pounds of body fat had a death rate less than one-half that of an unfit man with only 25 pounds of body fat.”

And the Annals of Epidemiology:

“Consistently, physical inactivity was a better predictor of all-cause mortality than being overweight or obese.”

And from The President’s Council on Physical Fitness and Sports:

“Active obese individuals actually have lower morbidity and mortality than normal weight individuals who are sedentary … the health risks of obesity are largely controlled if a person is physically active and physically fit.”

And finally the International Journal of Obesity Related Metabolic Disorders:

“An interesting finding of this study is that overweight, but fit men were at low risk of all-cause mortality.”

No, it’s not a deadly sin to enjoy a cheeseburger.

To summarise: being unhealthy is much more dangerous than being overweight. And being overweight is not the same as being unhealthy. Although the two often correlate, they are independent factors, and should be viewed as such.

So, if you’re feeling guilty lusting after that cheeseburger or pizza – don’t worry. As long as you have a reasonably healthy lifestyle, and it’s not in direct conflict with any pre-existing medical conditions, go ahead. Enjoy. Life is short enough; live it a little.


P.S. Just after finishing this post I read on New Scientist that not only does being overweight have no negative effect on our health and longevity but it could actually be beneficial. Indeed, carrying a few extra pounds seems to make you live longer than if you’re at your ‘ideal’ weight. There you go: yet another reason not to forgo that dessert.


“Fat and fit” pullover

P.P.S. I’ve added a motivational “Fat and fit” pullover inspired by this post to my Zazzle store:


12 August 2013

Whenever I finish a blog post I say to myself: “There. I’m done. This will be my last post. I’ll never blog again.” It feels like I’m empty. Done. Finished. And if I go against my better judgement and try to force myself to open WordPress and click Add New Post I end up staring at the dreaded blank page.

The dreaded blank page.

But after a while (days or sometimes weeks) I get this itch, this urge to write. An idea has formed, or a need to explore a topic in more detail. It connects with other ideas and factoids I’ve collected over the years and before I know it I once more find myself sitting in front of my computer and starting on another post.

This seems to be my process. I need these periods of downtime in order to be creative. And, being aware of this, I don’t really mind. It is as it is. It’s a small price to pay to be able to express myself in text.


But this got me thinking: what does it really mean to be able to write? Is it important? And I don’t mean being able to write your name to sign for that delivery, but actually putting your thoughts down in writing in a way that’s understandable to others. Is that in any way essential? Or is it like being able to sculpt or play the sitar – nice if you know how to do it, but not really important for your everyday life?

"What do they mean 'Referring to'? 'Referring'? Is that even a real word?"

“What do they mean ‘Referring to’? ‘Referring’? Is that even a real word?”

There’s a form of illiteracy spreading that takes the form of not being able to express one’s ideas and thoughts clearly enough in text for someone else to understand them. People suffering from this new illiteracy know how to write, but not how to write understandably. Their writing reveals a severe lack of understanding of basic grammar and spelling, and only rudimentary knowledge of sentence structure.

This form of illiteracy has in fact spread all the way up to the higher levels of the education system. Uppsala University is Sweden’s oldest and most prestigious university, and has traditionally ranked well both in Sweden and internationally. But lately the professors teaching courses there have noticed a significant drop in the students’ ability to write. The students don’t seem to understand that changes in word order change the meaning of a sentence, they have only a very limited vocabulary and they suffer from a severe lack of grammatical knowledge in general. They no longer use capital letters at the beginning of sentences or full stops at the end. It’s come to the point where they can’t write reports or read and understand academic texts.

Does it matter?

But does it really matter? If everyone is on the same – albeit less than ideal – level of understanding, wouldn’t the language simply adjust and become simplified in itself? Why do we need this advanced linguistic knowledge anyway? What does it matter if students are on the literacy level of a 13-year-old? Aren’t they still smart enough? Don’t they still think unique thoughts and come up with new ideas?

In the future, we will all be subvocalising like bosses.

Perhaps they do. Perhaps language doesn’t affect the way we think. And with the advent of new technologies we might never have to write things ever again. Voice-to-text solutions are limited today but they show encouraging signs of maturing into usable tools for everyday situations. And with text-reading algorithms reading out loud for us we could perhaps bypass the written language altogether, or at least banish it to our computers and make it into a machine language. In the near future, we could have devices interpreting the nerve signals we send to our larynx and tongue as we subvocalise our thoughts, and then easily store those thoughts digitally in the cloud, send them to our friends or publish them to a wider audience. All of it without ever touching a keyboard or picking up a pen.

But hang on. If we’re no longer able to write comprehensible sentences, what would those subvocalised thoughts really look like? If we lack the ability to put our thoughts together according to strict grammatical rules, how would we be able to communicate them to other people? If we don’t all follow the same rules, wouldn’t we simply drift apart and end up being utterly incapable of understanding each other? We would be split up and isolated, just like in a modern version of the tower of Babel*.

Language and thought

I’ve asked a lot of questions in this post, but the central one would have to be ‘Does language affect the way we think?’. And to answer that question I’d like to return to my favourite subject: human evolution.

That little ring of bone could tell future palaeontologists whether you were able to speak or not.

In the beginning there was no language. Humans – or pre-humans, I guess – made do without ever uttering a single word. Sure, we had different calls and gestures for different things, ‘words’ if you like for things like ‘leopard’, ‘water’ and ‘crocodile’ (just like a lot of other animals), but no language as such. That lack of linguistic capability could be seen not just in the physical structure of our bodies (lack of space for a lowered and elongated larynx, the diminutive size of the hypoglossal nerve canal), but in our culture and tool industry as well. As our linguistic prowess increased so did our sophistication in tool making and arts and crafts. There seems to be a direct correlation between inventions and the use of language.

This interesting connection could well be evidence of us humans having to be able to think things through in words and sentences in order to make sense of them. Until we can put an idea into words we only perceive it as a hunch, something just beyond the grasp of our minds. So in that sense, being able to form coherent sentences is an essential requirement for constructive thoughts and ideas. Without language our minds are blind, fumbling around without a chance of ever coming up with any original thoughts of their own.

So, yes: a proper understanding of language is essential for our capability of thinking original thoughts. We need a language with a fixed set of grammatical rules in order to make sense of the confusing and ever-changing collection of ideas we have inside our minds. And if we want to communicate those ideas to others – the basis for human culture – we need everyone else to use the same grammatical rules in order for them to understand what we’re saying.

Evolution or degeneration?

If we’re to survive as a species we really need to keep our minds sharp.

Language isn’t a fixed thing. It is constantly changing and evolving. New words and grammatical rules are adopted regularly and old ones disappear and are left by the roadside of the history of language like old fast food wrappers and discarded empty cans of soda pop.

But, whatever changes a language goes through, it has to be a global change, a change everyone (at least eventually) is on board with. Otherwise the language will start to degenerate and become a blunter and blunter tool. And our thoughts and minds will become blunter with it. So let us keep our language and our minds as sharp as possible. We are going to need them. Badly.


* For the record, the tale of the tower of Babel has always confounded me. What is the moral supposed to be? “Don’t try to do great things”? “Be wary of God, for he is a mean bastard and will mess you up good”? Honestly, does anyone have any ideas?
