Friday, 6 December 2013

To take the Gautrain or not to take the Gautrain?


Today I found myself using public transport to get to work for the first time. Although I have a 62km road trip to get from my home in Edenvale, near OR Tambo International Airport, to the CSIR in Pretoria, it usually takes me no more than 45 minutes. This is because I travel almost the entire distance on the R21 highway, which is hardly ever slow in the direction I’m going as I’m travelling against traffic.

Gauteng's Gautrain - www.gautrain.co.za

But this week things changed. The etolling system came into effect, and the additional cost it adds to my journey now makes the combined Gautrain trip to work cheaper than driving my personal vehicle. At the current petrol price, my monthly budget to get to work is about R1500. But with the addition of the toll roads, which add roughly R240, my budget now needs to be increased to R1740. With petrol prices set to increase and the South African rand deteriorating, this amount is guaranteed to increase over time.

If I were to take the Gautrain, and taking into account the average number of days I’m in the Pretoria office during a month, the cost of the train trip would be R1272, the cost of parking would be R180, and the cost of the bus would be R132, totalling R1584. If I take into account that I’ll only be travelling one tenth of my normal monthly distance by personal vehicle, my petrol to get to work will be approximately R150. That makes the total cost of my Gautrain commute R1734.
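The sums above can be laid out in a few lines of code (a quick sketch in Python; the rand amounts are the ones quoted above):

```python
# Monthly commuting cost by private car (quoted figures, in rand)
car_petrol = 1500
etolls = 240
car_total = car_petrol + etolls  # R1740

# Monthly cost of the combined Gautrain trip
train = 1272             # train fare
parking = 180            # station parking
bus = 132                # bus leg
petrol_to_station = 150  # ~1/10 of the normal monthly petrol bill
gautrain_total = train + parking + bus + petrol_to_station  # R1734

print(f"Car:      R{car_total}")
print(f"Gautrain: R{gautrain_total}")
print(f"Monthly saving: R{car_total - gautrain_total}")
```

So the margin is only R6 a month at today's prices, but it widens every time petrol or the tolls go up.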

So we’ve established that the combined Gautrain trip is marginally cheaper, and may be more cost effective in the future, but what about the paramount reason for using public transport – mitigation of climate change? Does this choice of transport reduce my own personal carbon footprint?

I was particularly concerned because when I started my last leg to work, the bus ride, there were only five people on the bus for the entire length of the journey. Is my personal footprint really going to be reduced if I’m using underutilized public transport? So I decided it was necessary to do the calculation and see for myself.

To start with, what is the amount of carbon dioxide emitted if I travel in my personal vehicle to work? For one journey to work, this is a 62km road trip. Using the guidelines for mobile emissions published by Defra, a medium passenger vehicle emits approximately 186.8 g of CO2 per km traveled. Therefore, for my 62km journey, I’m looking at emitting an astronomical 11.58 kg of carbon dioxide, or 0.01158 tons of carbon dioxide.

Now let’s compare that to the combined Gautrain journey to work. To start the journey from home, I have to travel 8.3km by personal vehicle to get to the Rhodesfield Gautrain station. That means I’m going to emit approximately 1.55 kg of carbon dioxide, or 0.00155 tons of carbon dioxide.  

The next leg of the journey is the train trip from Rhodesfield station to Hatfield station, which represents 27 minutes of travel time. A typical four-car Gautrain has twelve 200 kW motors, a combined 2400 kW, or 2.4 MW. Taking into account the 27 minutes, or 0.45 hours, of travel time, the energy used is 1.08 MWh.

The Gautrain runs off coal-generated electricity. The EPA approximates that a coal-fired power station emits 1.020129 tons of carbon dioxide per MWh. This means that the total amount of carbon dioxide emitted during my train leg would be 1.101739 tons. Now that's for the entire train. A four-car train can hold 321 seated passengers and 138 standing passengers. When I got on the train this morning, it was reasonably full, at about 75% seated occupation, which is approximately 241 passengers. The emissions need to be divided by the number of passengers to get the emissions per passenger. This works out to be 0.00457 tons of carbon dioxide per passenger.
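Putting the train-leg numbers together (a quick sanity check in Python, using the figures quoted above):

```python
motors = 12
kw_per_motor = 200
trip_hours = 27 / 60                                     # 0.45 h
energy_mwh = motors * kw_per_motor * trip_hours / 1000   # 2.4 MW * 0.45 h = 1.08 MWh

train_tons = energy_mwh * 1.020129   # EPA coal factor, tons CO2 per MWh
passengers = round(0.75 * 321)       # ~75% seated occupancy -> 241
per_passenger = train_tons / passengers

print(f"Whole train:   {train_tons:.6f} t CO2")     # ~1.101739
print(f"Per passenger: {per_passenger:.5f} t CO2")  # ~0.00457
```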

The last leg of my journey is the bus trip. Using the same Defra guidelines, the emissions from a bus are approximately 1034.61 g of carbon dioxide per km traveled. Therefore, for my 8.3 km bus trip, the total emissions would have been 8.587 kg of carbon dioxide, or 0.008587 tons. Again, this has to be divided by the number of passengers using the bus. There were only five people on the bus, so the emissions per passenger were 0.001717 tons of carbon dioxide.


The total carbon dioxide emission for the entire journey is then approximately 0.0078 tons. This is only about 68% of what I would have emitted had I traveled in my own personal vehicle.
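And the whole comparison in one place (again a Python sketch; the per-leg emission factors and passenger counts are the ones used above, so small rounding differences against the figures in the text are expected):

```python
# Defra factors (g CO2 per km) and EPA coal factor (tons CO2 per MWh)
CAR_G_PER_KM = 186.8
BUS_G_PER_KM = 1034.61
COAL_T_PER_MWH = 1.020129

car_commute = 62 * CAR_G_PER_KM / 1e6          # full trip by car, in tons
drive_to_station = 8.3 * CAR_G_PER_KM / 1e6    # car leg to Rhodesfield
train_leg = 2.4 * 0.45 * COAL_T_PER_MWH / 241  # whole train / ~241 passengers
bus_leg = 8.3 * BUS_G_PER_KM / 1e6 / 5         # bus leg shared by 5 passengers

combined = drive_to_station + train_leg + bus_leg
print(f"Car:      {car_commute:.5f} t CO2")    # ~0.01158
print(f"Gautrain: {combined:.5f} t CO2")       # ~0.00784
print(f"Ratio:    {100 * combined / car_commute:.0f}%")
```

Notice how sensitive the result is to the passenger counts: with only five people on the bus, the bus leg alone emits more per passenger than my entire 8.3 km drive to the station.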

Carbon dioxide emission (tons) depending on passenger composition 

So as far as carbon dioxide emissions go, public transport is a sure winner. The biggest drawback was the amount of time it took to get to my office: almost two hours, versus the normal 45 minute journey in my own personal vehicle. The major cause of the long journey time was waiting for the next train (I managed to just miss my connecting train) and waiting for the bus to leave, which probably added at least half an hour. The other additional leg of my journey is the 20 minute, 1.5 km walk from the bus stop to my office. Now I can probably do with the walk, but it certainly adds to the length of the journey, and doesn't do much to help my body odour during the course of the day.

So to make the Gautrain journey to work feasible, I need to learn to be productive on the train and on the bus. But it's not too difficult, because after all, I wrote this blog post while sitting on the train. There are plenty of activities that I would normally have done at the office which I could do during the journey, such as reading papers, writing out ideas, or checking emails. But it will require a mindset change to be really successful at this.

So for the time being, while my ire about the etolls is still a strong motivation, I will continue to use the Gautrain and make a concerted effort to adapt to a new way of traveling. 

Friday, 15 November 2013

Technical Post: Rasterizing line data from a shape file - but conserving the sum of the field of interest

Hi there,

It's been a while since I've posted. I'm deep in data analysis and modelling mode now. I've been trying to convert information I have on vehicles, which is represented as lines in a shape file, to raster data, and had a major success today. I thought I would share this since I could find no reference to the problem elsewhere. I'm using R to do my GIS manipulations, particularly the packages rgeos, rgdal, raster and ncdf.

So the problem is: I have line data in a shape file, but I want it as raster data (i.e. represented in pixels). I'm interested in a particular field (in my case the number of vehicles per hour on a road). This data is represented as a line, which has a field for number of vehicles on that road in an hour. There are functions in R which can rasterize data.

So for example I could use:

rasterize(VehAMPKhrpoll_trans, q, 'Vkmhr', fun = "sum")

where VehAMPKhrpoll_trans is my shape file (imported into R and reprojected), 'Vkmhr' is the field of interest, and q is the template raster.

But I found with this function that the sum over all the lines for the field of interest does not equal the sum for this field over all the pixels. This was a major problem for me. The model for the vehicle numbers already has errors, and I didn't want to compound those errors by incorrectly rasterizing my data.

So after a few days of hacking away at R code, I managed to come up with a function that does the following. It takes the extent from a NetCDF file (which has the extent I require) and then rasterizes the line data from my shape file. For each pixel, it crops the lines to that pixel, extracts the length of each cropped line, and divides that by the original length of the line, to get the proportion of the line falling inside the pixel. It then multiplies the field for that line (i.e. number of vehicles) by this proportion, so that the value of the field allocated to that pixel is proportional to how much of the line is in the pixel. In this way, if I sum over all my pixels, I obtain the sum over all my lines for that particular field of interest.

The R function I used is:
## filename1 = file name and/or location of NETCDF file
## filename2 = file name and/or location of shape file (just the first part of the file names)
## field = The field of interest in the shape file
## nrow = number of rows in the raster file
## ncol = number of columns in the raster file

rasterize_sum <-function(filename1, filename2, field, nrow, ncol) {
### Function to obtain the sum of lines in a pixel for a particular field, but where the proportional value 
### of the field that is allocated to a line inside the pixel is based on the length of the line which falls
### into the pixel.
  require(ncdf)
  require(rgeos)
  require(rgdal)  # readOGR lives in rgdal
  require(raster)
  #Read in the NETCDF file
  ncid_old <- open.ncdf( filename1, write=FALSE )
  
  #Read in the shape file
  VehAMPKhrpoll <- readOGR(dsn = getwd(), layer = filename2)
  VehAMPKhrpoll_trans <- spTransform(VehAMPKhrpoll, CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs"))
  
  #Extract the lon and lat from the NETCDF file
  lon <- ncid_old$dim$lon$vals
  lat <- ncid_old$dim$lat$vals
  ## This part is here because the coordinates in a NETCDF file gives the centre value of each pixel
  lon_min <- min(lon)-(lon[2]-lon[1])*2^(-1)
  lon_max <- max(lon)+(lon[2]-lon[1])*2^(-1)
  lat_min <- min(lat)-(lat[2]-lat[1])*2^(-1)
  lat_max <- max(lat)+(lat[2]-lat[1])*2^(-1)
  space_to_centre_lon <- ((lon_max - lon_min)*ncol^(-1))*2^(-1)
  space_to_centre_lat <- ((lat_max - lat_min)*nrow^(-1))*2^(-1)
  lon1 <- seq((lon_min + space_to_centre_lon),(lon_max - space_to_centre_lon), length = ncol)
  lat1 <- seq((lat_min + space_to_centre_lat),(lat_max - space_to_centre_lat), length = nrow)

  ## Getting the lengths for each line in the original shape file
  BB <- gLength(VehAMPKhrpoll_trans, byid = TRUE)

  ## A box to get the extent of the NETCDF file
  dd <- cbind(x=c(lon_min, lon_min, lon_max, lon_max, lon_min), y=c(lat_min, lat_max, lat_max, lat_min, lat_min)) 
  
  #Initialising the raster file 
  Vkmhr_RASTER <- raster(ncol = ncol, nrow = nrow)
  extent(Vkmhr_RASTER) <- extent(dd)

  # a loop to go through each pixel in the raster file
  for (i in 1:length(lon1)) {
    for (j in 1:length(lat1)) {
      # Obtaining the extent for a pixel
      xmin <- lon1[i] - (lon1[2]-lon1[1])/2 ;  xmax <- lon1[i] + (lon1[2]-lon1[1])/2; ymin <- lat1[j]- (lat1[2]-lat1[1])/2; ymax <- lat1[j] + (lat1[2]-lat1[1])/2 
      aa <- cbind(x=c(xmin, xmin, xmax, xmax, xmin), y=c(ymin, ymax, ymax, ymin, ymin)) 
      # cropping the SpatialLinesDataFrame by the current pixel
      VEH1 <- crop(VehAMPKhrpoll_trans, extent(aa))
      # Initialising a vector which will contain the proportions of each line in the pixel
      CC <- NULL
      # Initialising a vector which will contain the proportional values of the field of interest
      VEHprop <- NULL
      #Filling the raster file, from bottom row to top row (lowest lats first)
      #Row 1 of the raster file will be for the maximum latitude value
      if (is.null(VEH1)) {
        values(Vkmhr_RASTER)[(nrow-j)*ncol + i] <- NA 
      } else {
        AA <- gLength(VEH1, byid = TRUE)
        for (k in 1:length(names(AA))) {
          # Proportion of line k's length that falls inside the current pixel
          CC[k] <- AA[k]/BB[names(BB)==names(AA)[k]]
          # Allocate the field value (e.g. vehicle numbers) in proportion to that length
          VEHprop[k] <- VEH1@data[,field][k]*CC[k]
        } 
        values(Vkmhr_RASTER)[(nrow-j)*ncol + i] <- sum(VEHprop, na.rm = TRUE)
      }
    }
  }
  close.ncdf(ncid_old)
  Vkmhr_RASTER
}

filename1 <- "ccam_filestructure.nc"  ## The NETCDF file
filename2 <- "VehAMPKhrpoll"   ## The shape file


Vkmhr_RASTER1 <- rasterize_sum(filename1, filename2, "Vkmhr", 25, 25) ## Be sure to put the field name
                                                                      ## in inverted commas


Image of the rasterized shape file lines data for vehicle kilometre numbers on Western Cape roads

Hopefully this helps someone out there!

Thursday, 26 September 2013

IPCC to Issue Fifth Assessment Report Friday 27th of Sep 2013

Today the IPCC will issue the Fifth Assessment Report on climate change. It promises to contain some ominous warnings about the threat of our continued addiction to fossil fuels, but also suggestions on the best possible ways to mitigate climate change. There are fears, though, that climate deniers will put up every effort to dilute the seriousness of the findings and do their usual doubt casting. Guardian Report

The report will be publicly available at http://www.ipcc.ch/.

Neville Smith explains the process behind the report:


Monday, 23 September 2013

On the Side: Please help Save the Rhino

It goes without saying that the natural environment plays an important role in mitigating climate change and keeping Earth as the hospitable place that it is for us humans. It's hard to know for sure the effect that one species can have on the environment, but the rapid removal of one species from the ecosystem can have complex and unpredictable effects on its ecosystem counterparts.

The Rhino (Ceratotherium species) is a keystone species - this means that the activities of the Rhino, and its mere presence in the ecosystem, are important for the survival of other species and the balance of the ecosystem. If the Rhino is suddenly removed, it will have far-reaching consequences for other species in the environment, and could disrupt the delicate balance of the ecosystems where these animals exist. As an example, White Rhinos are responsible for maintaining "grazing lawns", which benefit grazers like wildebeest, zebra and impala, and they thereby also assist in maintaining the balance between trees and grasses which is what makes a savannah a savannah. 

This year alone, 688 Rhinos have already been poached in South Africa. Please lend your voice, or just a tweet, to WWF South Africa http://www.wwf.org.za/, to support their initiative to raise one million tweets in support of saving the Rhino in South Africa. Tweet #iam4rhinos!

http://www.wwf.org.za/act_now/rhino_conservation/


Rapidly Melting Sea Ice

The amount of Arctic sea ice has reached its lowest point for the year, after reaching an all-time low last year. The scary thing is that scientists had only predicted these levels of sea ice would be reached by 2050. This means that the effects of global warming are happening faster than anticipated, and it also means that models predicting the sequence of global change are still a long way from being exact. There are important climate feedbacks and components of the climate system that we still don't fully understand. WWF article on Black Carbon

One of the causes attributed to the rapid melt of Arctic sea ice is black carbon. This is basically particulate carbon, or soot, which is released when wood or other fuels are burned. But technologies do exist to reduce black carbon emissions, particularly from shipping, which has a direct effect on the Arctic sea ice: black carbon emitted from shipping falls onto the snow and ice, darkening the surface so that it absorbs more heat, thereby increasing the rate of sea ice melt. Reducing black carbon could have just as great an impact on slowing global warming as reducing carbon dioxide emissions, and it is an easier goal to achieve. Paper on the effects of Black Carbon reduction

Here's a video from NOAA explaining how black carbon accelerates Arctic sea ice melt, and could therefore accelerate the effects of climate change:


Friday, 16 August 2013

Key Indicators for Climate Change

Here is a link to the NASA Climate website which gives a set of really great plots and illustrations indicating some of the key indicators of Climate Change. Carbon Dioxide concentrations, Sea level and Global Surface Temperature are going up, and Arctic Sea Ice and Land Ice are going down. The data says it all. NASA Climate's Key Climate Change Indicators

Two sets of time series illustrating the rising Carbon Dioxide concentration. From  NASA Climate's Key Climate Change Indicators

Thursday, 8 August 2013

2012 - A Year of Broken Records ...

But unfortunately the records being broken are not the good ones. The Guardian has posted an article summarising a recent report by NOAA (National Oceanic and Atmospheric Administration) on a warming planet. 2012 saw the lowest sea-ice coverage on record and ranked in the top ten warmest years ever recorded - http://www.theguardian.com/world/2013/aug/06/noaa-report-arctic-ice-climate-change

A Report by NOAA called State of the Climate in 2012 - http://www.ncdc.noaa.gov/bams-state-of-the-climate/2012.php

Tuesday, 6 August 2013

Climate Change set to occur over ten times faster than any other natural climate change recorded over the last 65 million years

Here's an ominous article from the Stanford News website explaining some of the work recently published in Science by climate scientists Noah Diffenbaugh and Chris Field. It explains how current models predict the average global temperature to increase by 5 - 6 degrees Celsius by the end of the century. The last time the planet had to adjust to this magnitude of temperature increase, ecosystems had thousands of years to adapt. But due to the unprecedented trajectory of climate change happening today, ecosystems are forced to adapt in only decades. If this trajectory continues, by 2046-2065 the average temperature of the Northern land masses will be 2-4 degrees higher. This means that the hottest summer experienced in the last 20 years will occur at least every second year, if not more frequently.

Map from http://news.stanford.edu/news/2013/august/climate-change-speed-080113.html
Note the colour of South Africa in the Velocity of Climate Change map - much of South Africa, particularly the Northern Cape and central interior, is set to experience rapid climate change.

Friday, 19 July 2013

Technical Post - Installing HDF5, NetCDF4 and the NetCDF4 Fortran libraries

This is a post I've been meaning to do for a while now, ever since I got it to work, but it may not be of interest to everyone. Ever since I started working on inverse modelling, I've found that I've needed to use Linux more and more, particularly because many of the atmospheric transport models are written in the Fortran programming language. Also, if you ever plan to do anything on a super computer, you're more than likely going to be dealing with the Linux operating system. I wouldn't say that I'm the greatest fan of Linux, but I have learnt to get my way around the terminal window, to the point where I in fact feel I've learnt to view the computer in an entirely different way. Not to give away too much about my age, but we did have the DOS operating system on our first computer, so as a kid I could get my way around the computer and perform a few operations (like backing up and restoring games). So the terminal was not too big of a shock for me. My biggest problem with Linux is how difficult it is (for me) to install a new program if you don't already have all the correct libraries. And then you install the missing libraries it's complaining about, and then everything breaks. I just wish that there was a little bit more self checking going on, so that it would stop you before you did something stupid. Or even better, check first what libraries you have, and then actually install automatically the correct, compatible libraries for you. But anyway ... that's just me.



So what I would like to present today is my solution to installing important libraries used for accessing very large, generally spatial, datasets. And that's the HDF5 and NetCDF4 libraries. Particularly for the use in Fortran. And as it stands, I can also use NetCDF successfully in Python (but with a few extra NetCDF Python specific packages). The reason why I'm posting this is because I routinely break Linux. I've found that it's a skill I possess. One day everything will be working like clockwork, and the next moment, there are black screens and hung computers. And it's back to the drawing board again. So I've had to install NetCDF (in particular NetCDF, but you need HDF5 if you want to use NetCDF4) on several occasions, and every single time it's been a painful exploration through the internet forum space, greedily searching for any solution which will result in the easy installation of these libraries. After the last time, I decided that enough was enough, and I documented everything that I did, so that when it happens again, I can just follow my step by step guide, without having to do any searching, and it will just work the first time. I'm sure it's a pipe dream, but here it goes anyway. Maybe this will help one person out there, and save them a week's worth of frustration.

This set of instructions is aimed at those planning to use the Intel Fortran compilers on Ubuntu. The important point is that you have to compile NetCDF with the compiler you're going to be using (you could also be using gfortran). Now if you install NetCDF from the repository you are going to run into problems here. I had attempted to do this (I always try to install from the repository if I can), but I eventually had to uninstall the NetCDF version I had installed from the repository. This is the version that kept coming up when I used the command nc-config --all, and I think this is the reason why, no matter how I tried to link the libraries, ifort would tell me that NetCDF had not been compiled with ifort. So rather don't have any other NetCDF versions lurking.

The first thing you have to do is make sure that your .bashrc file is setup correctly so that your compiler and its libraries are working right from the start of the session. And you also need to make a few other changes to point your installation in the right direction. This is by far the scariest part of the installation, because quite frankly, I have no idea what most of these lines are actually doing. Some of them may even be redundant. But these instructions I found on various webpages, trying to explain exactly how to install NetCDF for
Intel Fortran compilers (so mainly sourced from the Unidata website and from the Intel website), and for me these changes did appear to work and did no harm. From what I understand, we're basically trying to point the computer to where it needs to look for various libraries and how to treat certain file types.

I mainly followed the advice from the Unidata website and installed everything (zlib, HDF5, NetCDF4, and NetCDF4-Fortran) into one folder (in my case /home/alecia/local), which was not the default folder. When I didn't use the prefix option and let it install in the default location, I couldn't get HDF5 to install.
 
These are the extra lines in my .bashrc file, which get added right at the end of the file  [To get into the .bashrc file, you type: cd, and then you type vim .bashrc if you have the vim editor]:

source /opt/intel/composer_xe_2013.3.163/bin/compilervars.sh intel64
PATH=$PATH:/opt/intel/composer_xe_2013.3.163/bin/
export PATH
source /opt/intel/composer_xe_2013.3.163/bin/intel64/idbvars.sh
PATH=$PATH:/opt/intel/composer_xe_2013.3.163/bin/intel64/ifort
export PATH
export F90=ifort
export F77=ifort
export F9X=ifort
export PATH=$PATH:/home/alecia/local/bin
export NETCDF=/home/alecia/local
export NETCDF_LIB=/home/alecia/local/lib
export NETCDF_INC=/home/alecia/local/include
NETCDFHOME=/home/alecia/local
export NETCDFHOME
NETCDF_PREFIX=$NETCDFHOME
 
Save the .bashrc file, and then type

source .bashrc
which will execute the new setup file and make the changes you've added.

The first few lines, up until the second "export PATH" are telling the computer where the Intel libraries are. From the "export F90=ifort" line, this is specifically for the HDF5 and NetCDF4 installation.

Then obtain the tar.gz installation files for curl, zlib, HDF5, NetCDF4, and the NetCDF4 Fortran libraries from the Unidata website.
 
I actually installed curl from the repository, because the one I got from the website didn't want to install. But this seemed to work just fine.
 
Then extract zlib into its own directory:
tar zxvf zlib-1.2.7.tar.gz
cd zlib-1.2.7/
./configure --prefix=/home/alecia/local 
make check install

Of course, everywhere that you see "alecia" you would just replace with your own home folder name. And the version number would be the latest version number. 
 
Then extract HDF5 in its own folder:
tar zxvf hdf5-1.8.9.tar.gz
cd hdf5-1.8.9/
CFLAGS=-O0 ./configure --prefix=/home/alecia/local --with-zlib=/home/alecia/local --enable-fortran
make check install

Then extract NetCDF4 into its own folder:
tar zxvf netcdf-4.3.0.tar.gz
cd netcdf-4.3.0/
CPPFLAGS=-I/home/alecia/local/include LDFLAGS=-L/home/alecia/local/lib ./configure --prefix=/home/alecia/local 
make check install
 
Then extract NetCDF4 Fortran libraries into its own folder:
tar zxvf netcdf-fortran-4.2.tar.gz
cd netcdf-fortran-4.2/ 
CPPFLAGS=-I/home/alecia/local/include LDFLAGS=-L/home/alecia/local/lib ./configure --prefix=/home/alecia/local
make check install
 

Then to compile with ifort I needed to use the following command (the order of the libraries is important: the linker resolves symbols from left to right, so -lnetcdff, the Fortran library, must come before -lnetcdf, the C library it depends on):
ifort -o convert.out -I/home/alecia/local/include convert_ccamtolpdm2_go.f90 -L/home/alecia/local/lib -lnetcdff -lnetcdf

Of course here you would replace the f90 file and the output file with your own.

And to get the executable to run, I had to use:
./convert.out -rpath=/home/alecia/local/lib

When I restarted the system the next day, and tried to redo this, I believe I ran into some problems, so that's when I discovered how to correctly modify the LD_LIBRARY_PATH list (yet again - thanks to the wonderful internet invention of the forum). This is once again a modification done in the .bashrc file.


LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/alecia/local/lib
export LD_LIBRARY_PATH


These two lines get added right at the end of the .bashrc file. As I understand it, LD_LIBRARY_PATH is a list of all the places that the computer needs to look when trying to find a library. When you install libraries into non-default locations then you need to add this location to the list. What you don't want to do is erase the list and then just add the new library location. That is why if you read up about LD_LIBRARY_PATH modifications, a lot of people will tell you just not to do it at all. So the format that I've used here is very important, because this way it definitely adds the library location, as opposed to wiping out the list. Just for you to check, before you make this modification, you can type  

echo $LD_LIBRARY_PATH
and run in the terminal, and it will give you the list. You can then make the modification to the .bashrc file and save, and then type

source .bashrc
and run and this will run the file and make the added changes. If you rerun the echo $LD_LIBRARY_PATH command, then it should give you the same list you previously had, but now with the new library path /home/alecia/local/lib.

Now you should be able to execute your new Fortran executables by just typing
./convert.out
(or whatever you've called your executable)


I'm sure there are several different ways of doing this, and probably many of them more elegant and efficient, but for me, on a personal computer with an Intel i7-3770K processor, running Ubuntu 12.04 on a VMWare virtual machine, this worked (and I've gotten it to work twice - yes I have already broken Linux since finding this solution). I'm still compiling my Fortran code using the same line of commands, and I'm still able to open, read and write to NetCDF files. For how long this will continue I don't know, but if anything goes wrong, this is where I will start with building it up from scratch again.

I hope this helps somebody out there.

Wednesday, 17 July 2013

The 9th International Carbon Dioxide Conference - Beijing

Firstly, apologies for my long absence.

Not so long ago I was fortunate enough to attend the 9th International Carbon Dioxide Conference, hosted in Beijing by the Chinese Academy of Sciences. It was truly my first big international conference, and an opportunity to present the first set of results I had obtained. Well of course that’s what I said I was going to do ... It seemed that almost as soon as I submitted my abstract and it was accepted for a poster presentation (I was at first a bit disappointed, until I saw the author list of those presenting orals and realised that perhaps just listening would be a good thing this time round), things started to delay my ambitions of having all my results wrapped up by the time of the conference. But through many late nights and many more antacids later, I was able to assemble something with which I was rather pleased. This in part played a role in my disappearance from the carbon science blogging scene.

When I finished off my poster, I thought I was being terribly clever when I put the link to the @CapeCarbon twitter account at the end of the poster. I then discovered that Twitter is not accessible in China – which was a bit of a buzz kill. So much for that grand idea.

The conference was an eye-opening experience and a carbon science nerd’s dream! I can honestly say that almost my entire PhD reference list was walking about the conference. And if someone wasn't there themselves, then their students were there presenting collaborative work. I very often had to restrain myself from asking for autographs and taking pictures with famous (in my mind) scientists whose work I had pored over for hours and hours, resulting in severely dog-eared papers with plenty of my own scribbles as I’d tried to mould it into something I could understand and grasp.

Two such scientists based at the LSCE in Paris (Laboratoire des Sciences du Climat et de l’Environnement -  http://www.lsce.ipsl.fr/en/) are Philippe Ciais and Philippe Peylin. Their work features heavily in my literature review, and at one stage I had one of them standing on my left and the other on my right while they were having a discussion with my supervisor. I found myself searching the crowd for my South African colleague so that I could telepathically communicate to him to please take a picture!

A picture of Ralph Keeling who features in an earlier post
That was the other great opportunity presented to me by the conference – I got to meet up with my Australian supervisor, Dr. Peter Rayner from the University of Melbourne, and have an entire week to discuss, in person, the technical issues I’d been wrestling with over the past weeks leading up to the conference, as well as to meet in person a fellow collaborator with whom I’d had many an email conversation. For those people starting a PhD, I can only hope for you that you have a supervisor like Dr. Rayner. He has a unique grasp of the science behind atmospheric transport as well as the statistics of inverse modelling, which he is able to elucidate to another person in such a way that you can actually see the light bulb blinking on as a once complicated and hostile battlefield of detail and facts is turned into a tangible, achievable process. Preceding the conference, I had gone through a bit of the “PhD blues”, and had avoided discussing my concerns with any of my supervisors. This was a terrible mistake, and once I’d had my first conversation with Dr. Rayner in the weeks before the conference (a marathon two hour Skype meeting), I felt the glumness lift off of me, and I was given direction once again and able to forge on ahead. A PhD is not for the weak, let me tell you. If you thought you were emotionally unhinged before, just try one of these on for size.

Anyway! What did I take away from the conference? Well, many of the presentations were on trends in carbon dioxide levels – and yes, carbon dioxide concentrations are going up. We are definitely not doing enough yet to prevent a rise of more than two degrees in average temperature in the future. It may level off now and then, but now that we are starting to have the first really long-term carbon dioxide time series datasets, it’s clear that levels are rising. And from these kinks in the carbon dioxide trend we also know that we are not doing such a great job yet of predicting the complicated interaction between the climate and the carbon cycle, particularly the component related to the land surface. That means that we need more measurements and we need better models. I also learnt that carbon scientists lean a little bit towards the cynical side – you need to in order to survive the carbon science/climate science game.

A rather thought provoking cartoon posted by one of the presenters on the last day of the conference

When I wasn't gawking at famous scientists or straining to take in every bit of information I could sponge from the presentations, the conference organisers were doing a great job of keeping us busy on tours around Beijing. I even managed, along with my South African colleague, to venture into the streets of Beijing and discover a bit of the history and culture of the city.
At The Summer Palace - Residence of The Dragon Lady
The Great Wall of China

The Forbidden City
More to follow soon!

Another article on the breaching of the 400ppm carbon dioxide level in the New York Times: http://www.nytimes.com/2013/05/11/science/earth/carbon-dioxide-level-passes-long-feared-milestone.html?pagewanted=all&_r=0




Friday, 17 May 2013

Global Carbon Dioxide Concentrations Surpass the 400ppm Mark

Here's an interview with Ralph Keeling, son of the climate scientist Charles David Keeling, who developed the well known and much publicized Keeling curve, on the approach of the 400ppm Global Carbon Dioxide Concentration mark: http://www.guardian.co.uk/environment/2013/may/14/record-400ppm-co2-carbon-emissions

Here's also an interview with Prof. Michael Mann from Penn State University:
http://www.youtube.com/watch?v=bvC-VI2EdBY

And if you haven't seen enough yet, here's an article which explains the significance of 400ppm, as well as a really honest talk by David Roberts on the reality we may be facing. This level of CO2 concentrations has not been seen for three million years - http://www.treehugger.com/climate-change/what-does-world-400-ppm-co2-look.html

Keeling Curve - http://www.treehugger.com/climate-change/what-does-world-400-ppm-co2-look.html

Recipe for Obtaining Carbon Dioxide Emission Estimates - Second Ingredient: Transport Model


WARNING: The Presence of Equations is Detected in this Post!
It’s been a while since my last post regarding carbon dioxide emission estimates, and since I'm currently busy working on the Transport Model for my analysis, I thought it would be a good time to introduce the second ingredient needed for carbon flux estimation through inverse modelling.

To adequately explain why the transport model is needed, I'm going to have to introduce an equation. This is the Bayesian cost function which needs to be optimized in order to solve for the carbon dioxide fluxes:

JBLS = (c − Gs)ᵀX(c − Gs) + (s − z)ᵀW(s − z)

where c are the observed concentrations at a particular station, s are the carbon dioxide fluxes (which is what we want to solve for), G is the matrix which describes how the fluxes influence the concentrations (and so is referred to as the influence function), X is the inverse covariance matrix of the concentrations, z are the best prior estimates of the carbon dioxide fluxes, and W is the inverse covariance matrix of the estimated fluxes (see Enting, 2002, Inverse Problems in Atmospheric Constituent Transport). The influence function G is the part of the equation which requires the transport model. This is the component which links the rates of CO2 emissions to the observed CO2 concentrations, so that c ≈ Gs.


The idea behind the optimization is to estimate the best values for s (the fluxes) so that c − Gs is as close to zero as possible. But because there are many sets of values that can be assigned to s so that the optimization equation is minimized, the Bayesian approach to optimization adds a second component, which is the minimization of the difference between the modelled values for s and the best estimates for s (z, which is referred to as the prior estimates). The Bayesian way of thinking is that we aren't starting from a clean slate – we already know something about the carbon dioxide fluxes on the ground. We know which areas are going to have lots of anthropogenic emissions, we know which areas are vegetated and are going to have photosynthesis and ecosystem respiration processes occurring, and we know which areas are barren, where very little emission is going to take place. So let’s add that information to the optimization routine so that we can further restrict the set of best possible solutions for the fluxes, s. And what we end up with is then a probability distribution for the solution of s. So not one answer, but a set of answers with associated probabilities.
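To make the optimization concrete, here is a minimal NumPy sketch of the standard analytic minimiser of a cost function of this form. The matrices below are made-up toy values purely for illustration, not anything from my actual inversion:

```python
import numpy as np

def bayesian_inversion(c, G, X, z, W):
    """Minimise J(s) = (c - Gs)^T X (c - Gs) + (s - z)^T W (s - z).

    Setting dJ/ds = 0 gives the analytic solution:
        s_hat = z + (G^T X G + W)^-1 G^T X (c - G z)
    """
    A = G.T @ X @ G + W                  # posterior inverse covariance
    s_hat = z + np.linalg.solve(A, G.T @ X @ (c - G @ z))
    return s_hat, np.linalg.inv(A)       # posterior mean and covariance

# Toy example: 3 observed concentrations, 2 unknown fluxes
G = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]])  # influence function
X = np.eye(3)                 # inverse covariance of the observations
W = np.eye(2)                 # inverse covariance of the prior fluxes
z = np.array([1.0, 2.0])      # prior flux estimates
c = G @ np.array([1.5, 1.8])  # synthetic, noise-free "observations"
s_hat, P = bayesian_inversion(c, G, X, z, W)
```

The solution is pulled between the prior z and the fluxes implied by the observations, with the covariances deciding how much weight each side gets.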

The approach I'm using to obtain the influence function is based on an atmospheric modelling tool often used in pollution tracking, called a Lagrangian Particle Dispersion Model run in backward mode. So what does that mean? When a Lagrangian Particle Dispersion Model (LPDM) is run in backward mode, particles are released from the measurement sites and travel to the surface and boundaries (Seibert and Frank, 2002, Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode). That’s why it is referred to as backward mode – we start at the measurement point and then go backward in time to see which surfaces would have influenced that particle. In order to correctly move the particles around, LPDM needs meteorological inputs at a resolution which matches up with the size of the area you’re interested in. Because in my study I'm going to be concentrating on Cape Town and the surrounding areas, I need meteorological inputs at a pretty high resolution. I'm going to be using values from the atmospheric model CCAM (CSIRO’s Conformal-Cubic Atmospheric Model - CCAM), which provides three-dimensional estimates of the wind components (u, v, and w), temperature and turbulent kinetic energy. And I'm going to be using an LPDM developed by Uliasz (Uliasz, M., 1994. Lagrangian particle modeling in mesoscale applications, Environmental Modeling II).
An example of the wind output from CCAM at a 1km resolution around Tasmania South Esk Region (http://www.csiro.au/sensorweb/ccam/index.html)

The LPDM model is written in Fortran, and the CCAM model outputs its meteorological variables as NetCDF files. I'll soon write a post on the joys of compiling code in Linux and converting my met NetCDF files into a binary format that LPDM can use.
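As a taste of what that conversion involves, here is a hypothetical Python sketch of writing an array as a Fortran "unformatted sequential" record, i.e. the raw data wrapped in 4-byte record-length markers. I'm assuming that this is the record layout the Fortran reader expects (the real LPDM input format may well differ), and I've left out the NetCDF-reading side entirely:

```python
import numpy as np

def write_fortran_record(f, arr):
    """Write one Fortran 'unformatted sequential' record:
    a 4-byte byte-count marker, the raw data, then the marker again."""
    data = np.asarray(arr, dtype=np.float32).tobytes()
    marker = np.int32(len(data)).tobytes()
    f.write(marker)
    f.write(data)
    f.write(marker)

# e.g. dump a small (made-up) 2D wind-field slice for a Fortran model
u = np.arange(12, dtype=np.float32).reshape(3, 4)
with open("u_wind.bin", "wb") as f:
    write_fortran_record(f, u.T)  # transpose: Fortran is column-major
```

The transpose matters: Fortran stores arrays column-major, so a C-order dump of the transposed array lands in the memory order a Fortran READ expects.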


Thursday, 2 May 2013

How Rising CO2 Levels are Affecting the Ocean

I recently had the opportunity to chat to a local radio host about the impacts of rising CO2 levels. I was surprised to find out that, in fact, very few people are aware of how important the world's oceans are in absorbing carbon dioxide, of the effect that rising CO2 levels in the atmosphere have on ocean acidity, and of the consequences of this. In the long term, the oceans may absorb up to 85% of anthropogenic CO2 emissions. But with all this absorbing of CO2, the oceans are becoming more acidic. It might seem like a small change in pH, but the drop of 0.1 in pH from pre-industrial times to the present, over only a few decades, is comparable to a natural shift in ocean pH that would take 5000 to 10000 years. So for ocean life which has adapted over thousands of years to a particular pH, it's a lot to deal with.

Ocean acidification not only affects marine life; it also means that the ocean can absorb less CO2. This is a positive (amplifying) feedback on climate change – CO2 concentrations in the atmosphere may increase faster than predicted because the ability of the ocean to act as a sink for CO2 is diminishing. In addition to the ocean becoming more acidic, the average ocean temperature is also rising, and this further diminishes the ocean's ability to absorb CO2.

The effect of rising CO2 levels on the ocean is rather complicated and multifaceted. There's a whole load of ocean chemistry going on there, not just one simple straightforward equation.

To read more see this Scientific American article or see this short video:



And here's a second video from New Zealand's NIWA which I think tells the story a little better:


Friday, 8 March 2013

New Scientist article on recent warming

Found this insightful article on the New Scientist website on the recent trend in warming. Global Warming Article in New Scientist

Follow the link to the Science article to get the full story!

Marcott, S.A., Shakun, J.D., Clark, P.U., Mix, A.C., 2013. A Reconstruction of Regional and Global Temperature for the Past 11300 years, Science, vol.339, DOI: 10.1126/science.1228026 http://www.sciencemag.org/content/339/6124/1198.full


Also an update! Hangklip monitoring station is back online!! Follow on twitter at @CapeCarbon.




Friday, 1 March 2013

My Cowboy Method of Estimating Carbon Dioxide Emissions


If you’ve been following the twitter feed (@CapeCarbon) – thanks to Laurie Butgereit of Meraka, CSIR, for getting it up and running – then you might have noticed that not only is the carbon dioxide concentration at the two sites being tweeted, but, under certain conditions, the carbon dioxide emissions from Cape Town and surrounds as well.

So how am I doing this exactly? As my last post revealed, normally a super computer, or at least a pretty amazing desktop, is needed to carry out an atmospheric inversion for obtaining estimates of carbon emissions. Well, that certainly is the ultimate goal, and it’s the only estimate that’s really publishable. But in the meantime, to give people a taste of what’s possible with atmospheric measurements of carbon dioxide, I came up with a method to get a “ballpark” figure of what the carbon dioxide flux is. This is not a statistical approach. This is not a physics approach. But rather an “applied maths” approach. The same type of approach you need to use when your lecturer asks you to estimate how many ping pong balls can fit inside the lecture theatre.

Now the name comes from one of my favourite movies, Armageddon (which probably popped into my mind because of all the meteors that have been crashing down to Earth lately - http://www.geekosystem.com/nasa-explains-russian-meteor/), specifically from the Russian cosmonaut, Lev Andropov, who calls the American astronauts a “bunch of cowboys” after they've just caused his space station to blow up. There’s a scene where he bashes a spanner against the space ship to get it to work, and it springs to life. Well, this is my way of smashing a spanner against the measurements to get them to give emission estimates.

The first thing that needs to happen is that the wind needs to be blowing in the right direction. It needs to go from one of the measurement stations, over Cape Town, and directly to the next measurement station. So this is either when the wind is blowing from the South East or from the North West. We also need to know how fast the wind is blowing. Fortunately we can get all of this from the South African Weather Service (SAWS Cape Town Weather).
Map showing the required wind directions in order to make flux estimates based on the difference in CO2 concentrations between the Robben Island and Hangklip sites. Map generated using Google Earth.

Then we need to get the difference in carbon dioxide concentration between the two sites. If the wind is coming from the North West, then we want Hangklip – Robben Island, and if it’s from the South East, then we want Robben Island – Hangklip. This difference then needs to be converted into mg of carbon dioxide per cubic metre. This can be done using the ideal gas law.
mg CO2 per m³ = CO2 ppm × 44.01 / (8.3145 × T / P), with T the temperature in kelvin and P the pressure in kPa
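In code, the conversion is a one-liner. The assumptions are the same as in the formula: temperature in kelvin and pressure in kPa, which is what makes the 8.3145 gas constant work out:

```python
R = 8.3145  # universal gas constant, J mol^-1 K^-1

def co2_ppm_to_mg_per_m3(ppm, temp_k, pressure_kpa):
    """Ideal gas law conversion of a CO2 mixing ratio (ppm) to mg/m^3.

    molar volume of air = R * T / P  (litres per mol when P is in kPa),
    so mg/m^3 = ppm * 44.01 / molar_volume, 44.01 being the molar
    mass of CO2 in g/mol.
    """
    molar_volume = R * temp_k / pressure_kpa  # L/mol
    return ppm * 44.01 / molar_volume

# 400 ppm at 25 degrees C and sea-level pressure is roughly 720 mg/m^3
co2_ppm_to_mg_per_m3(400.0, 298.15, 101.325)
```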

Now this is where the serious wangling comes in.

The first major assumption that we need to make is that the wind is travelling in a straight line from station A to station B. Then we need to assume that the wind speed is constant. This won’t be 100% correct, but we’re not going to try and model the wind fields for this exercise (that’s what the super computer is used for).

Then my thinking is: imagine this concentration of CO2 as a cylinder with a 1 m² base and 1 m in height. The cylinder starts off at station A and then travels towards station B at the speed of the wind in a straight line, a total distance of 77.4 km. Along the way, the cylinder is going to collect or lose CO2 due to processes taking place on the surface. There is also not just one cylinder at a particular point, but cylinders stacked up on top of each other until they reach the planetary boundary layer. To start off with, just to keep things simple and manageable, we’re going to assume that all the cylinders are moving at the same speed. This is not true, because the higher up in the atmosphere you go, the faster the wind blows. What I plan to use in the future is the wind profile power law (http://en.wikipedia.org/wiki/Wind_profile_power_law).
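For reference, the power law itself is tiny. The reference height and the roughly 1/7 exponent below are typical textbook values for neutral conditions, not values I've fitted for Cape Town:

```python
def wind_speed_at_height(u_ref, z, z_ref=10.0, alpha=0.143):
    """Wind profile power law: u(z) = u_ref * (z / z_ref) ** alpha.

    u_ref is the wind speed measured at reference height z_ref (m);
    alpha ~ 1/7 is a common neutral-stability exponent. Both defaults
    are assumptions for illustration only.
    """
    return u_ref * (z / z_ref) ** alpha

# A 5 m/s wind measured at 10 m is noticeably faster at 500 m up
wind_speed_at_height(5.0, 500.0)  # roughly 8.7 m/s
```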

The planetary boundary layer extends from the surface to a height of up to 3 km, depending on the temperature and other factors (http://www.sciencedaily.com/articles/p/planetary_boundary_layer.htm). It’s where the air is most influenced by what’s happening on the surface, and it’s pretty hard to model. This height is required in order to make the estimation of the CO2 fluxes possible, and so, for starters, we’re just going to assume that the height of the planetary boundary layer is 1000 m, a value which I grabbed from a study on North America, but for an area of similar latitude (http://www.meteor.iastate.edu/~jdduda/portfolio/605_paper.pdf). As the tweets get a bit more advanced, I will try to change the PBL depending on the time of the day and the current season.
An illustration of the approach used to estimate the CO2 fluxes and the assumptions made 
We can now come up with an empirical relationship

ΔCO2 (the difference between the two CO2 concentrations)
= the amount of CO2 emitted or absorbed per m² per second × the total distance travelled × the amount of time spent over each m² / the total number of cylinders in the stack.

We know the difference in concentration between the two sites, we know the total distance travelled, which is 77400 m, the amount of time spent over each m² is the inverse of the wind speed, and the total number of cylinders is 1000. So all that’s left is the emission (or absorption) of CO2:

CO2 flux = ΔCO2 × wind speed × 1000 / 77400
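Put together, the whole "cowboy" calculation is just a one-liner, under the assumptions above: straight-line travel between the stations, constant wind speed through the column, and a fixed 1000 m boundary layer:

```python
DISTANCE_M = 77400.0  # straight-line distance between the two stations
N_CYLINDERS = 1000    # assumed planetary boundary layer height in metres

def cowboy_flux(delta_co2_mg_m3, wind_speed_m_s):
    """Ballpark CO2 flux (mg per m^2 per second) from the concentration
    difference between the upwind and downwind stations. Negative values
    mean the surface is absorbing CO2 (e.g. photosynthesis)."""
    return delta_co2_mg_m3 * wind_speed_m_s * N_CYLINDERS / DISTANCE_M

# e.g. a 2 mg/m^3 enrichment arriving in a 5 m/s wind
cowboy_flux(2.0, 5.0)  # roughly 0.13 mg CO2 per m^2 per second
```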

So far the values that I've tested this on are in the right range, so I'm happy that the estimates are at least of the right order of magnitude.

During the mornings we have seen negative fluxes. This is because photosynthesis has kicked in. During the morning, the stomata open up and allow CO2 to pass into the plant cells. At the same time, water vapour can also leave the plant cells. If it starts to get too hot, the plant needs to be careful that it doesn't desiccate, and so it has to control how open its stomata are going to be, which then limits the amount of CO2 that can be absorbed through photosynthesis. If there's one thing I hope people take away, it's the important role that our natural systems play in regulating the atmosphere in which we exist.
Illustration of photosynthesis from www.butler.edu




Couldn't have done it without a little help from my friends...



Getting the Picarros up and running has been a journey in itself. As I've mentioned before, scientific equipment doesn't just up and install itself. And boy, you better get it right, or else you might as well just chuck the data, or spend the rest of your life trying to justify corrections and tweaks to it.

My journey started in Australia, where I was lucky enough to spend a few weeks with my now co-supervisor, Dr. Peter Rayner. Peter taught me all the ins and outs of atmospheric inversion and how to go about getting one to run. This was my first experience with working on a super computer, and I think I learnt to code in about three different languages during those six weeks - IDL, Python and Fortran. As a bonus, Peter also took me on a tour of Lygon Street, Melbourne. This is a delightful place, which has restaurants of all kinds stretching from end to end. Every day we would visit a different restaurant, so I got to enjoy a culinary tour of Melbourne, Australia, as well. I had possibly one of the best burgers ever at a place called Grill'd (http://www.grilld.com.au/). Unfortunately I'm not a ginger, or otherwise I would have been eligible for the special.
A view down Lygon Street, Melbourne - http://www.melbourneplaces.com
As an aside - there is a wonderful little cinema in Lygon Street, Cinema Nova (www.cinemanova.com.au/), which I discovered while staying there. It's an arthouse cinema, but they do show some of the movies on the regular circuit. My normal experience at a cinema, which I do love going to in general, is that you get your ticket and then you can maybe get a popcorn and some soda, and not much else. But at Cinema Nova, not only can you get your box of popcorn (so you can have the full cinema experience), but you can order a glass of wine as well!! In a GLASS! This was an entirely new experience for me. There's nothing like a glass of wine to help you get thoroughly enthralled in a soppy movie.
The perfect movie combo
Just around the corner from the University of Melbourne are the Aspendale CSIRO (Commonwealth Scientific and Industrial Research Organisation) offices. Peter has a long standing relationship with CSIRO and works often in collaboration with the CSIRO Marine and Atmospheric Research group. Lucky for me! This group manages several atmospheric monitoring sites, including Cape Grim on Tasmania. Who better to ask how to go about setting up my own system? I was hoping for a few pointers and maybe a setup diagram or two. Well, I got that and much more. On the day I visited, Zoë Loh (http://www.youtube.com/watch?v=n6313xRIdck) and Ann Stavert (http://www.youtube.com/watch?v=oYmJJerYi-w) graciously and enthusiastically showed me around their labs, and then took me through their entire setup of the Picarro systems, and the do’s and don’ts of plumbing. And on top of that, Zoë arranged for me to go into the field with David Etheridge to see two Picarro monitoring sites in action at a geosequestration site, as well as to get some hands-on training in collecting flask samples for isotope analysis.
Atmospheric monitoring station near a geosequestration site in Otway, Australia
Assistance from CSIRO didn't stop there. I also got to visit the Marine and Atmospheric Research group based in Canberra. This was to get some hands on experience with code related to the CABLE (CSIRO Atmosphere Biosphere Land Exchange) model, which our group in South Africa had planned to start using and adapting for our purpose. This model represents the interactions between soil, vegetation and the atmosphere, and includes linkages between hydrology, plant physiology and their micro climate (http://cawcr.gov.au/projects/access/cable/cable_technical_description.pdf). Vanessa Haverd very kindly took me through all the code and explained how to get it to run.

When I arrived back home, it was time to start ordering the bits and pieces. If there was one thing that I had taken away from the trip, it was that it was going to be a lot harder to set everything up than I had anticipated. There are a lot of little parts that are needed to make the whole measurement system run properly. We had decided to go with the same Picarro CRDS measurement systems that our colleagues at CSIRO were using, and Picarro, although thousands of kilometres away, were very helpful in getting us all the Picarro bits and pieces that we needed, and never hesitated to call if they needed to explain anything.

Once the Picarros arrived, it was time for me to sort out the plumbing. It was something that I had been dreading since the Australia trip, because I knew that it was more complicated than I had originally thought, and that it could make or break the whole measurement system.

Fortunately, just after getting back from Australia, I had the opportunity to attend the conference for the South African Society of Atmospheric Sciences, where I met in person Ernst Brunke, of the Cape Point GAW station, run by the South African Weather Service. Instead of treating me like someone who was trespassing onto his turf, Ernst was happy to explain in any required detail what I needed to get for the plumbing system to work, and also to share and collaborate with his lab at Cape Point.
The Robben Island and Hangklip Picarro's visiting Cape Point GAW station for a concurrent calibration
Lucky for me, the suppliers of Swagelok connectors (http://www.swagelok.com/johannesburg), the same connectors used on the Picarro CRDS instrument, were literally just around the corner from where I lived. Here I spent several hours while the sales personnel helped to identify all the correct components needed for my plumbing diagram to turn into a reality. They even helped me to assemble the bits and pieces. I now know what a ferrule is.

When it came to the actual installation, I needed permission from the Port Authority, Transnet, in Cape Town. Mr. Robin Poggenpoel, regional manager, was glad to get on board with the research project, and provided permission for me to access two of the lighthouses to install the CO2 measurement instruments. With help from my family, and from the lighthouse keepers at Robben Island and Hangklip - Peter and George - I was able to finally get the instruments installed. After about a year of planning, everything finally came together.
Peter, the Robben Island lighthouse keeper, accompanying me to Robben Island for installation
So as you can see, this research project would not be possible without the help of a lot people, who really didn't have to, but did anyway. Thank you!