This example shows how to simulate data from a joint model of longitudinal and survival data using `survsim`. I also show how to fit the true data-generating model with `merlin`. For more background info on `survsim`, see:

Crowther MJ. Simulating time-to-event data from parametric distributions, custom distributions, competing risks models and general multi-state models. (Pre-print).


We’ll start by formally defining the data-generating model. Let’s assume a proportional hazards survival model, with the current value parameterisation.

$$h(t) = h_{0}(t) \exp(X_{1ij}\mathbf{\beta_{1}} + \alpha m_{i}(t))$$

and

$$y_{ij}(t) = m_{i}(t) + \epsilon_{ij}$$

where

$$m_{i}(t) = X_{2ij}(t)\mathbf{\beta}_{2} + Z_{ij}(t)b_{i}$$

and

$$b_{i} \sim N(0,\Sigma)$$

where $\alpha$ is the log hazard ratio for a one-unit increase in the biomarker at time $t$. Let’s assume a simple random intercept and random linear trend for the biomarker trajectory, i.e.

$$m_{i}(t) = (\beta_{20} + b_{0i}) + (\beta_{21} + b_{1i})t$$

The challenge with simulating survival times from such a model is that the cumulative hazard function is analytically intractable, and hence non-invertible. This is one of the situations `survsim` was designed for (more details on the algorithm can be found in Crowther and Lambert (2013)).
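`survsim` implements this algorithm in Mata, but the underlying idea is easy to sketch outside Stata. Below is a minimal, illustrative Python version (my own sketch, assuming `numpy` and `scipy` are available; the parameter values match those used in the worked example below): draw $U \sim U(0,1)$, then solve $H(t) = -\log(U)$ for $t$ by numerically integrating the hazard and root-finding, with administrative censoring when no root exists below the maximum follow-up time.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def hazard(t, b0, b1, lam=0.1, gam=1.2, alpha=0.2):
    """Weibull baseline hazard multiplied by the current-value term
    exp(alpha * m_i(t)); parameter values match the worked example."""
    return lam * gam * t**(gam - 1.0) * np.exp(alpha * (b0 + (0.1 + b1) * t))

def simulate_time(b0, b1, u, maxt=5.0):
    """Solve H(t) = -log(u) for t, where H is the cumulative hazard
    obtained by numerical integration (quad); times beyond maxt are
    administratively censored. Returns (time, event indicator)."""
    f = lambda t: quad(hazard, 0.0, t, args=(b0, b1))[0] + np.log(u)
    if f(maxt) < 0.0:           # H(maxt) < -log(u): survives past maxt
        return maxt, 0
    return brentq(f, 0.0, maxt), 1
```

With the random slope set to cancel the fixed slope (so the trajectory is flat at zero), the hazard reduces to a plain Weibull and the numerical solution matches the closed-form inverse $t = (-\log(U)/\lambda)^{1/\gamma}$, which is a useful check on the root-finding.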

We’ll simulate a dataset of 500 patients, generating an `id` variable.

```
. clear
. set seed 249587
. set obs 500
number of observations (_N) was 0, now 500
. gen id = _n
```

Next we simulate the things we need at the patient level. This includes a binary treatment group variable, `trt`, and the random effects for later use in the longitudinal data. I’ll simulate two independent random effects for an intercept and slope.

```
. gen trt = runiform()>0.5
. gen b0 = rnormal()
. gen b1 = rnormal(0,0.1)
```

Now we can simulate our survival times, still at the patient level. The key is to be explicit in your definition of the hazard function. From above, substituting in the longitudinal trajectory, our hazard function becomes,

$$h(t) = h_{0}(t) \exp(X_{1ij}\mathbf{\beta_{1}} + \alpha (X_{2ij}(t)\mathbf{\beta}_{2} + Z_{ij}(t)b_{i}))$$

`survsim` allows you to specify a user-defined hazard function, meaning you have complete generality. The hazard function needs to be written in Mata code (Stata’s matrix programming language), but is fairly self-explanatory. Key things to note:

- You refer to time using `{t}`
- You can directly include Stata variables in the definition of your hazard function; they will be read in automatically

```
. survsim stime died, /// new variables
> hazard( /// user-defined hazard function
> 0.1:*1.2:*{t}:^0.2 /// baseline Weibull hazard
> :* exp(0.2 :* (b0 :+ (0.1 :+ b1) :* {t})) /// current value association
> ) ///
> covariates(trt -0.5) /// additional treatment effect
> maxt(5) // admin. censoring
Warning: 294 survival times were above the upper limit of maxtime()
They have been set to maxtime()
You can identify them by _survsim_rc = 3
```

which assumes a scale and shape of $\lambda=0.1$ and $\gamma=1.2$ for the baseline Weibull hazard function, and $\beta_{20}=0$ and $\beta_{21}=0.1$ for the mean intercept and slope of the longitudinal trajectory. The association parameter is set to $\alpha=0.2$. We also have a constant treatment effect with a log hazard ratio of -0.5.
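As an optional sanity check (my own illustration, not part of the `survsim` workflow), we can re-run the same data-generating process in Python using numerical inversion of the cumulative hazard, and confirm that the administrative censoring fraction lands near the 294/500 ≈ 59% reported above. The seed and sample size here are arbitrary; `numpy` and `scipy` are assumed.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(249587)   # arbitrary seed
n = 1000                              # smaller than 500*4 reps; just a check

# patient-level quantities, as in the Stata code above
trt = (rng.uniform(size=n) > 0.5).astype(float)
b0 = rng.normal(0.0, 1.0, size=n)
b1 = rng.normal(0.0, 0.1, size=n)

def hazard(t, b0i, b1i, trti):
    # lambda=0.1, gamma=1.2, alpha=0.2, log hazard ratio for trt = -0.5
    return (0.1 * 1.2 * t**0.2
            * np.exp(0.2 * (b0i + (0.1 + b1i) * t) - 0.5 * trti))

def sim_one(b0i, b1i, trti, maxt=5.0):
    # invert H(t) = -log(U) numerically; censor administratively at maxt
    u = rng.uniform()
    f = lambda t: quad(hazard, 0.0, t, args=(b0i, b1i, trti))[0] + np.log(u)
    if f(maxt) < 0.0:
        return maxt, 0
    return brentq(f, 0.0, maxt), 1

events = [sim_one(b0[i], b1[i], trt[i])[1] for i in range(n)]
censor_frac = 1.0 - np.mean(events)
```

The censoring fraction should come out in the same ballpark as the `survsim` run above; exact agreement isn’t expected, since the seeds and random number generators differ.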

Now we can move on to generating the *observed* longitudinal data. We know the true trajectory function, so we can generate whatever observation scheme we like. I’ll simulate up to 5 observations per patient, measured at baseline, 1, 2, 3, 4 “years”. This is easiest to do using `expand`, and then generating my `time` variable to represent the time of observation, using the observation number minus 1.

```
. expand 5
(2,000 observations created)
. bys id : gen time = _n-1
. drop if time>stime
(410 observations deleted)
```

The last line above simply drops any time points that occur after a patient’s event time. Now we generate the observed longitudinal responses at the observation time points, incorporating some residual variability from the true normal distribution, with standard deviation 0.5.

```
. gen xb = b0 + (0.1 + b1) * time
. gen y = rnormal(xb,0.5)
```

Finally, because we `expand`-ed our survival times, they were replicated in all the extra rows. Since we’re going to use `merlin` to estimate our joint model, and `merlin` can handle repeated event times, it’s crucial we remove the repeats and pass only a single event time and event indicator per patient to the estimation command.

```
. bys id (time) : replace stime = . if _n>1
(1,590 real changes made, 1,590 to missing)
. bys id (time) : replace died = . if _n>1
(1,590 real changes made, 1,590 to missing)
```

So our final dataset looks like:

```
. list id trt time y stime died if id==1 | id==13, sepby(id)

     +--------------------------------------------------+
     |  id   trt   time            y       stime   died |
     |--------------------------------------------------|
  1. |   1     0      0     1.332419           5      0 |
  2. |   1     0      1      1.35186           .      . |
  3. |   1     0      2     1.907379           .      . |
  4. |   1     0      3     1.459253           .      . |
  5. |   1     0      4     .6210317           .      . |
     |--------------------------------------------------|
 53. |  13     1      0    -.9602892   2.8578734      1 |
 54. |  13     1      1     .1079234           .      . |
 55. |  13     1      2    -1.303469           .      . |
     +--------------------------------------------------+
```

Where we have our patient identifier, `id`, patient treatment group, `trt`, the observation times of the longitudinal outcome, `time`, the observed values of the longitudinal outcome, `y`, the survival time, `stime`, and associated event indicator, `died`.

We can fit the true data-generating joint model very simply using `merlin`, as follows:

```
. merlin (stime trt EV[y] , family(weibull, failure(died)) timevar(stime)) ///
> (y fp(time, power(1)) M1[id]@1 , family(gaussian) timevar(time))
variables created for model 2, component 1: _cmp_2_1_1 to _cmp_2_1_1
Fitting fixed effects model:
Fitting full model:
Iteration 0: log likelihood = -3505.9684
Iteration 1: log likelihood = -2921.7043
Iteration 2: log likelihood = -2910.6233
Iteration 3: log likelihood = -2910.5528
Iteration 4: log likelihood = -2910.5528
Mixed effects regression model                      Number of obs  =     2,090
Log likelihood = -2910.5528
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
stime:       |
         trt |  -.5076658   .1406721    -3.61   0.000    -.7833781   -.2319535
        EV[] |   .1892384   .0768753     2.46   0.014     .0385655    .3399113
       _cons |  -2.382246   .1512833   -15.75   0.000    -2.678755   -2.085736
  log(gamma) |   .1826352   .0667266     2.74   0.006     .0518535    .3134169
-------------+----------------------------------------------------------------
y:           |
        fp() |   .1041867   .0085211    12.23   0.000     .0874857    .1208877
      M1[id] |          1          .        .       .            .           .
       _cons |   .0559673   .0469214     1.19   0.233    -.0359969    .1479316
  sd(resid.) |   .5154946   .0091327                      .4979021    .5337088
-------------+----------------------------------------------------------------
id:          |
      sd(M1) |   .9641086   .0329715                      .9016038    1.030946
------------------------------------------------------------------------------
```

which links the expected value (current value) of the longitudinal outcome directly to survival. The model converges nicely, with parameter estimates around the true values. To convince ourselves that everything is working, we can simply up the sample size further to check that everything is unbiased. This example should hopefully provide a base case on which to expand for your own work.

This example used `survsim` version 4.0.4 and `merlin` version 1.15.1.