## Monday, 2 December 2013

Very shortly, I'll upload the newest release of BCEA, my R package to post-process the output of a (Bayesian) health economic model and produce systematic summaries (such as graphs and tables) for a full economic evaluation and probabilistic sensitivity analysis (more posts on this are, in random order, here, here, here and here).

I've made changes of two kinds: the first can go under the header of "cosmetic changes" (as Pietro Rigo, who taught me a couple of courses back in my undergraduate studies, used to say). Basically, I decided to print out the whole script for BCEA and, while going through it, I realised that at several points I wasn't being very elegant or particularly effective with my code. This was not a huge problem, I think, because speed of execution is generally not an issue $-$ the inputs to the functions are usually moderately small and the operations required are not too complicated. But, once I had realised it, it bothered me that my code was full of loops and unnecessary lines, and so I changed it. As it happens, the gains in computational speed are not huge (for the reason I mentioned earlier). But the cockroach lemma still applies...

The second type of changes could be labelled as "substantial" and involve the following.
1. I've included a function multi.evppi, which implements the method for multivariate analysis of the expected value of partial perfect information, described here. (The next bit of information is totally irrelevant, but when I went to Edinburgh for the MRC clinical trial conference, I caught up with Mark Strong, who gave a talk on this, which motivated me to finish off the script).
2. A utility function called CreateInputs, which produces the object containing the matrix of parameters simulated via MCMC (in JAGS or BUGS), together with a vector of parameter names (as strings) that can be used to perform the EVPPI analysis.
3. I've also coded up (borrowing from Mark's original scripts) a function to perform diagnostic analysis on the assumptions underlying the Gaussian process model that is used to estimate the EVPPI. That's called diag.evppi.
4. A function that performs structural probabilistic sensitivity analysis, called struct.psa. I think this was long overdue, since most of the inputs were already there $-$ you only have to run the model using different specifications and save (some of) the results to a list; this function then computes the weights to be associated with each model specification (according to the methods described in this paper by Chris Jackson et al.).
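Putting the new functions together, a typical workflow might look something like this (a minimal sketch $-$ the object names and some of the argument names here are my illustrative assumptions, not the definitive interface; the calls are commented out because they need simulations from an actual model):

```r
# Sketch of the new EVPPI/PSA workflow in BCEA (assumes BCEA >= 2.0-2 is
# installed and that "model" is the object returned by a JAGS/BUGS run)
library(BCEA)

# m <- bcea(e, c, ref = 1)                    # standard Bayesian CEA object
# inp <- CreateInputs(model)                   # matrix of MCMC simulations
#                                              #   + vector of parameter names
# x <- multi.evppi(inp$mat, inp$parameters, m) # multi-parameter EVPPI
# diag.evppi(x, m)                             # checks the GP assumptions
# s <- struct.psa(list(m1, m2), e, c)          # structural PSA model weights
```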
This will be version 2.0-2 $-$ I've toyed with the idea of moving up a gear in the numbering, since it seems to me to be a substantial improvement (more for the inclusion of the new functions than anything else). But I also thought that it would be good to have it out there and see what people think of it and possibly test it (I know that some people are using BCEA so hopefully we'll get some feedback in time for our book).

## Saturday, 30 November 2013

### Lost

The results of the ISBA elections have come out and unfortunately, I've been beaten to the post of programme chair for the Section on Biostatistics and Pharmaceutical Statistics.

I am not sure by how much $-$ I meant to ask for more details, but I was a bit busy last week and didn't have the time. I suspect that the fact that only current members of the section were allowed to vote didn't help me, as it probably increased the incumbent's advantage.

Anyway, I should congratulate Telba Irony, who's been re-elected and hopefully get her to take some of my programme points (starting with sponsoring the next BayesPharma) on board anyway.

### My talk at the LSHTM

Yesterday I gave a talk on our RDD project at the Centre for Statistical Methodology of the London School of Hygiene and Tropical Medicine. While introducing me, Karla (the organiser of the seminar) joked that I should go for a hat-trick of presentations at the LSHTM, since only last month I gave another talk (on the structural zeros problem in health economics $-$ on a related note, the paper, which I also discussed here, has actually been accepted by Statistics in Medicine).

The main point of this talk was to highlight the advantages of including genuine prior knowledge in the RDD framework, in order to get suitable estimates and make the assumptions underlying it more robust. I think we need to clarify a couple of points, but I also got good comments, so it was very helpful!

The slides of the talk are here.

## Wednesday, 27 November 2013

### PSMR 2014

Registration for the short course on Practical Statistics for Medical Research (PSMR) 2014 is now open. Here is the advert we've published in the BMJ with all the relevant information and details, and even more info & details are here $-$ in case you're interested...

## Saturday, 16 November 2013

### BCEs0 version 1.1 on CRAN

While I was responding to the points raised by two referees and the editor on my paper on cost-effectiveness with structural zeros (the preliminary version is here; I have also presented it in a few talks and discussed it here, here, here and here), I took the chance to update the BCEs0 package as well.

I have to say that most of the comments I received made a lot of sense and were extremely helpful. In particular, when I was thinking about how to address them, I realised that I was much better off modelling all the prior distributions on the scale of the mean and standard deviation of the cost variable, rather than using the original scale (e.g. shape & rate for the Gamma distribution).

This is true in general, of course, but it is quite helpful in this case, because I want to impose a very informative prior for the subjects for whom a 0 has been observed (so that in the posterior the mean cost is identically 0, a fortiori). I have updated the software webpage, which now reports the full list of inputs required by the main function
bces0.
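For the Gamma case, the mapping between the two scales is straightforward, which is part of what makes this reparameterisation so convenient; a quick sketch in R (the numerical values of mu and sigma are just for illustration):

```r
# For a Gamma distribution with mean mu and standard deviation sigma,
# the natural-scale parameters are recovered as
#   shape = mu^2 / sigma^2,   rate = mu / sigma^2
# (so that shape/rate = mu and shape/rate^2 = sigma^2, as required)
mu    <- 100                 # prior mean cost (illustrative value)
sigma <- 20                  # prior standard deviation (illustrative value)
shape <- mu^2 / sigma^2      # = 25
rate  <- mu / sigma^2        # = 0.25
c(shape = shape, rate = rate)
```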

In particular, I have modified the original code to include:
1. a treatment-specific threshold for the default Uniform prior on the mean and standard deviation of the costs for the non-null component (previously, I assumed a single value applied to both treatments being compared);
2. a "robust" option, set to TRUE by default, which implies that "minimally informative" Cauchy priors are specified on the coefficients of the pattern model for the zero-cost indicator. If robust is set to FALSE, then BCEs0 will use a vague Normal prior instead;
3. a "model.file" option, which allows the user to specify the name of the .txt file to which the JAGS model code is saved. This is not quite fundamental, but as I was testing the package I kind of got annoyed that every time I ran it, it would overwrite previous versions of the model file, which I might need for future tests. And so I changed this.
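For reference, a call using the new options might look something like this (a sketch only $-$ apart from robust and model.file, the argument names are my assumptions based on the description above, not the definitive interface, so the call is commented out):

```r
# Illustrative call to the updated main function bces0
# (assumes BCEs0 version 1.1 and JAGS are installed)
library(BCEs0)

# m <- bces0(e, c,                          # effectiveness & cost simulations
#            robust = TRUE,                 # Cauchy priors on the pattern model
#            model.file = "bces0_run1.txt") # keep this run's JAGS model code
```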
I think the paper in its current (hopefully final!) version looks much better and the more I think about the overall problem and how the model deals with it, the more I kind of like it. But then again, as we say in Italian: "ogni scarrafone e' bello a mamma sua", which poorly translates into English as "every cockroach is beautiful to its own mother's eyes"...

## Thursday, 14 November 2013

### Loophole

I think I should thank Marta (again!) for this post, as she made me think about it while we were riding together to the Stan workshop, in one of our now ("A XY", that is, as opposed to "B XY" when we used to do so all the time) rare joint outings on the Vespa.

Lately, quite a few London buses have been advertising electronic cigarettes, which we found peculiar, given the ban on tobacco advertising. Now, of course, technically electronic cigarettes have nothing to do with tobacco, so, I'm guessing, the companies are perfectly within the law in advertising them.

However (and I must say I don't really know enough about this!), there appears to be some evidence hinting at potential health risks from e-cigarette consumption. So, one may wonder, why are these allowed to be advertised without formal investigations into their safety at least being planned? Again: I may be completely ignorant of government-commissioned studies into this matter (in which case, well done UK Große Koalition!). But I may also be guessing well, right?...

## Monday, 11 November 2013

### Imperialstan

Despite the map here, I'm not going to talk about yet another fragment of the former Soviet Empire which has taken the form of a people's republic, possibly with witty British Ambassadors.

In fact, I'm going to talk about the Stan workshop that I went to earlier today, which was held at Imperial College. My friend Lea organised it and Mike Betancourt (who's actually in my department at UCL) ran the show (brilliantly, it has to be said).

In the morning, Mike gave a brief overview of MCMC and introduced the basics of Hamiltonian Monte Carlo (I think this by Radford Neal is just a great introduction to the topic). Then in the afternoon he concentrated on Stan and rstan in particular (which, unsurprisingly, is the R interface to the actual HMC engine).
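For those who haven't seen it, the flavour of the rstan interface is something like this (a minimal sketch of a toy Normal model $-$ assumes rstan and a working C++ toolchain are installed; the actual sampling call is commented out because compilation takes a while):

```r
library(rstan)

# A tiny Stan program: Normal model with unknown mean and standard deviation
model_code <- "
data { int<lower=1> N; vector[N] y; }
parameters { real mu; real<lower=0> sigma; }
model { y ~ normal(mu, sigma); }
"
# fit <- stan(model_code = model_code,
#             data = list(N = 10, y = rnorm(10)),
#             iter = 1000, chains = 2)
# print(fit)   # posterior summaries for mu and sigma
```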

I think this was kind of the first of a potential series of similar talks/workshops and I found it very useful. Of course it's always difficult to strike a balance between how in depth you want to go with the theory and the examples, so for instance, I think a little more on the actual NUTS algorithm would have been helpful $-$ but as I said, I know full well how hard it is to do this, so well done, Mike!

## Saturday, 9 November 2013

### Keynote speaker

Earlier today, I was trying to finish preparing the poster for the Clinical Trials Methodology Conference $-$ I'll have both the poster presentation (on the Expected Value of Information under mixed strategies) and my talk on the Stepped Wedge design on the Monday, so by 3pm I'll be just wandering around the sessions having done my duty.

Luckily, XY slept a couple of hours, so I could actually do some work. But then he woke up and, while he was playing in the living room, he found my badge from the Chemometrics Workshop and the latest copy of Significance.

I did put the badge around his neck the first time, but he quickly learned to do it himself; at which point he kept putting it on, while carefully reading the magazine. I think he quite looks the part, doesn't he?