The best thing about megenreg
Its generality. You can fit pretty much anything you like, combining all sorts of simple/complex/bespoke outcome models with each other, linking in all sorts of ways, and all through (in my opinion) a pretty simple command syntax.
The worst thing about megenreg
Its generality. I think the biggest difficulty in getting megenreg to do what you want it to do, efficiently, is in obtaining good starting values. Now I’m sure if you ask most people whether they have ever provided their own starting values to an estimation routine, they will say no. Most of the time there is no need to, because a command generally handles one type of model, and the author spent a great deal of time finding the best way to obtain good starting values, in order to provide a quick and useful implementation of a/their method. This is more difficult in my case (poor me, I know). It’s also a matter of what to put my time and effort into (the list is long…again, poor me).
At the minute, if you use megenreg to fit a complex model, making full use of the advanced sharing between outcomes, then in general the starting values are pretty poor. Because it’s so general, it’s difficult to handle every situation in a sensible way. Currently, I can get some half-decent estimates of the parameters for any fixed effects in your model, by simply fitting a model containing only the fixed effects. Any random effect variances are set to one (I actually model on the log standard deviation scale, with starting values of 0), with covariances set to zero (again, I actually model the inverse hyperbolic tangent of the correlation, with starting values of 0). If you have any components with elements that contain anything other than just a fixed effect, then the associated parameter will be given a starting value of 0, on the scale of the linear predictor. Most of the time this is rubbish. But what else is there to do? Well, I’ll tell you.
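To make that strategy concrete, here is a rough sketch of the idea (variable names are hypothetical, and megenreg’s actual internals may differ):

```stata
* Hypothetical sketch of the default starting-value strategy described above.
* Step 1: fit the fixed effects only, to get rough starting values for them:
regress y x1 x2
matrix b0 = e(b)
* Step 2: variances are estimated as log standard deviations, so a starting
*         value of 0 corresponds to sd = exp(0) = 1, i.e. a variance of 1.
* Step 3: correlations are estimated as atanh(rho), so a starting value of 0
*         corresponds to rho = tanh(0) = 0.
```

The point of the transformed scales is that the optimiser can work on unconstrained parameters, while 0 on each transformed scale maps back to a sensible default (unit variance, zero correlation).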
Some (not so distant) future plans
In the paper and recent seminars I’ve given, I talk a lot about the different modelling frameworks that can all be fitted within megenreg. Well, being able to do lots and lots of different things with one command results in, one could argue (not me…), a complex and challenging syntax to get to grips with, where it’s easy to go wrong. It’s also a nightmare to error check, certify every permutation, and generally make sure everything works as it should! But that’s my problem…mostly (there’ll be a blog post on this soon).
In my post introducing megenreg, I talked about some of my previous commands that I wanted to bring all together under one codebase. Well, now I’ve done that, I can reverse the desire. I can write versions of stgenreg that all call megenreg underneath, without the user knowing what’s going on. See the benefits of doing this yet? The first is that all those commands have much simpler and easier-to-use syntax, but fit models that megenreg can fit. So I can write simple shell files which parse the simpler syntax and turn it into megenreg-compatible syntax. I think that’s pretty cool. It’s also what a lot of official Stata commands do…many if not all of the me commands simply call gsem! It also means it’s much easier to extend the original programs, and yet maintain their simpler syntax. Secondly, and more importantly, within the shell file I’m now restricting to a much smaller subset of models, where I can write only a small amount of code (hopefully) to implement ways of getting much better starting values for the full model. This will reduce computation time tremendously.
In the meantime, you can attempt to use the from() option to provide a vector of your own starting values, but that involves matching up coefficients with where they appear in my coefficient vector, which is currently very difficult to do! I’ll make this easier as soon as I can.
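In outline, a from() call might look something like this sketch (the model specification and variable names are illustrative only; the hard part, as noted above, is getting the column names of the matrix to line up with megenreg’s internal coefficient vector):

```stata
* Hypothetical use of from(): fit a simpler model first, take its
* coefficient vector, and pass it in as starting values.
streg trt age, distribution(weibull)
matrix b0 = e(b)
* the column names of b0 must match megenreg's coefficient vector
megenreg (_t trt age, family(weibull, failure(_d))), from(b0)
```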
I’ll be writing shell files for joint longitudinal-survival models, multilevel survival models, joint frailty survival models, and general hazard models, to name a few. Watch this space.