From ba3557f9f2b6d7f6949c9c04d26e448bb5fd9cd7 Mon Sep 17 00:00:00 2001
From: Charles Margossian
Date: Thu, 17 Apr 2025 17:02:48 -0400
Subject: [PATCH 01/26] initial draft in function references manual.

---
 src/functions-reference/embedded_laplace.qmd | 235 +++++++++++++++++++
 1 file changed, 235 insertions(+)
 create mode 100644 src/functions-reference/embedded_laplace.qmd

diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd
new file mode 100644
index 000000000..629e9f3a1
--- /dev/null
+++ b/src/functions-reference/embedded_laplace.qmd
@@ -0,0 +1,235 @@
+---
+pagetitle: Embedded Laplace Approximation
+---
+
+# Embedded Laplace Approximation
+
+The embedded Laplace approximation can be used to approximate certain
+marginal and conditional distributions that arise in latent Gaussian models.
+A latent Gaussian model observes the following hierarchical structure:
+$$
+  \phi \sim p(\phi), \ \ \theta \sim \text{MultiNormal}(0, K(\phi)), \ \
+  y \sim p(y \mid \theta, \phi),
+$$
+where $K(\phi)$ denotes the prior covariance matrix parameterized by $\phi$.
+To draw samples from the posterior $p(\phi, \theta \mid y)$, we can either
+use a standard method, such as Markov chain Monte Carlo, or we can follow
+a two-step procedure:
+
+1. draw samples from the *marginal likelihood* $p(\phi \mid y)$
+2. draw samples from the *conditional posterior* $p(\theta \mid y, \phi)$.
+
+In practice, neither the marginal likelihood nor the conditional posterior
+are available in close form and so they must be approximated.
+It turns out that if we have an approximation of $p(\theta \mid y, \phi)$,
+we immediately obtain an approximation of $p(\phi \mid y)$.
+The embedded Laplace approximation returns
+$\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$.
+Evaluating this log density in the `model` block, we can then sample from
+$p(\phi \mid y)$ using one of Stan's algorithms.
+
+To obtain posterior draws for $\theta$, we generate samples from the Laplace
+approximation to $p(\theta \mid y, \phi)$ in `generated quantities`.
+
+## Specifying the likelihood function
+
+The first step is to write down a function in the `functions` block which
+returns `\log p(y \mid \theta, \phi)`. There are a few constraints on this
+function:
+
+* The function return type must be `real`
+
+* The first argument must be the latent Gaussian variable $\theta$ and must
+have type `vector`.
+
+* The operations in the function must support higher-order automatic
+differentation (AD). Most functions in Stan support higher-order AD.
+The exceptions are functions with specialized calls for reverse-mode AD, and
+these are higher-order functions (algebraic solvers, differential equation
+solvers, and integrators) and the suite of hidden Markov model (HMM) functions.
+
+The signature of the function is
+```
+real ll_function(vector theta, ...)
+```
+There is no type restrictions for the variadic arguments `...` and each
+argument can be passed as data or parameter. As always, users should use
+parameter arguments only when nescessary in order to speed up differentiation.
+In general, we recommend marking data only arguments with the keyword `data`,
+for example,
+```
+real ll_function(vector theta, data vector x, ...)
+```
+
+## Specifying the covariance function
+
+We next need to specify a function that returns the prior covariance matrix
+$K$ as a function of the hyperparameters $\phi$.
+The only restriction is that this function returns a matrix with size
+$n \times n$ where $n$ is the size of $\theta$. The signature is:
+```
+matrix K_function(...)
+```
+There is no type restrictions for the variadic arguments. The variables $\phi$
+is implicitly defined as the collection of all non-data arguments passed to
+`ll_function` (excluding $\theta$) and `K_function`.
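+
+For instance, with Poisson count observations and a Gaussian-process prior,
+the two functions might look as follows. This is a minimal sketch: the
+argument names are illustrative, `poisson_log_lpmf` and `gp_exp_quad_cov`
+are built-in Stan functions, and the small `add_diag` jitter is a common
+numerical safeguard rather than a requirement.
+```
+functions {
+  // log joint likelihood of the counts y given the latent vector theta
+  real ll_function(vector theta, data array[] int y) {
+    return poisson_log_lpmf(y | theta);
+  }
+  // prior covariance: squared exponential kernel over inputs x,
+  // with hyperparameters phi = (alpha, rho)
+  matrix K_function(data array[] real x, real alpha, real rho) {
+    return add_diag(gp_exp_quad_cov(x, alpha, rho), 1e-8);
+  }
+}
+```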
+
+
+## Approximating the log marginal likelihood $\log p(y \mid \phi)$
+
+In the `model` block, we increment `target` with `laplace_marginal`, a function
+that approximates $\log p(y \mid \phi)$. This function takes in the
+user-specified likelihood and covariance functions, as well as their arguments.
+These arguments must be passed as tuples, which can be generated on the fly
+using parenthesis.
+We also need to pass an argument $\theta_0$ which serves as an initial guess for
+the optimization problem that underlies the Laplace approximation,
+$$
+  \underset{\theta}{\text{argmax}} \ \log p(\theta \mid y, \phi).
+$$
+The size of $\theta_0$ must be consistent with the size of the $\theta$ argument
+passed to `ll_function`.
+
+The signature of the function is:
+```
+target += laplace_marginal(function ll_function, tupple (...), vector theta_0,
+                           function K_function, tupple (...));
+```
+The tuple `(...)` after `ll_function` contains the arguments that get passed
+to `ll_function` *excluding $\theta$*. Likewise, the tuple `(...)` after
+`ll_function` contains the arguments that get passed to `K_function`.
+
+It also possible to specify control parameters, which can help improve the
+optimization that underlies the Laplace approximation. Specifically:
+
+* `tol`: the tolerance $\epsilon$ of the optimizer. Specifically, the optimizer
+stops when $||\nabla \log p(\theta \mid y, \phi)|| \le \epsilon$. By default,
+the value is $\epsilon = 10^{-6}$.
+
+* `max_num_steps`: the maximum number of steps taken by the optimizer before
+it gives up (in which case the Metropolis proposal gets rejected). The default
+is 100 steps.
+
+* `hessian_block_size`: the size of the blocks, assuming the Hessian
+$\partial^2 \log p(y \mid \theta, \phi) / \partial \theta^2$ is block-diagonal.
+The structure of the Hessian is determined by the dependence structure of $y$
+on $\theta$. By default, the Hessian is treated as diagonal
+(`hessian_block_size=1`).
If the Hessian is not block diagonal, then set
+`hessian_block_size=n`, where `n` is the size of $\theta$.
+
+* `solver`: choice of Newton solver. The optimizer used to compute the
+Laplace approximation does one of three matrix decompositions to compute a
+Newton step. The problem determines which decomposition is numerically stable.
+By default (`solver=1`), the solver makes a Cholesky decomposition of the
+negative Hessian, $- \partial^2 \log p(y \mid \theta, \phi) / \partial \theta^2$.
+If `solver=2`, the solver makes a Cholesky decomposition of the covariance
+matrix $K(\phi)$.
+If the Cholesky decomposition cannot be computed for either the negative
+Hessian or the covariance matrix, use `solver=3`, which uses a more expensive
+but less specialized approach.
+
+* `max_steps_linesearch`: maximum number of steps in linesearch. The linesearch
+method tries to ensure that the Newton step leads to a decrease in the
+objective function. If the Newton step does not improve the objective function,
+the step is repeatedly halved until the objective function decreases or the
+maximum number of steps in the linesearch is reached. By default,
+`max_steps_linesearch=0`, meaning no linesearch is performed.
+
+With these arguments at hand, we can call `laplace_marginal_tol` with the
+following signature:
+```
+target += laplace_margina_tol(function ll_function, tupple (...), vector theta_0,
+                              function K_function, tupple (...),
+                              real tol, int max_steps, int hessian_block_size,
+                              int solver, int max_steps_linesearch);
+```
+
+## Draw approximate samples from the conditional $p(\theta \mid y, \phi)$
+
+In `generated quantities`, it is possible to draw samples from the Laplace
+approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`.
+The process of iteratively drawing from $p(\phi \mid y)$ (say, with MCMC) and
+then $p(\theta \mid y, \phi)$ produces samples from the joint posterior
+$p(\theta, \phi \mid y)$.
The signature for `laplace_latent_rng` follows closely
+the signature for `laplace_marginal`:
+```
+vector theta =
+  laplace_latent_rng(function ll_function, tupple (...), vector theta_0,
+                     function K_function, tupple (...));
+```
+Once again, it is possible to specify control parameters:
+```
+vector theta =
+  laplace_latent_tol_rng(function ll_function, tupple (...), vector theta_0,
+                         function K_function, tupple (...),
+                         real tol, int max_steps, int hessian_block_size,
+                         int solver, int max_steps_linesearch);
+```
+
+## Built-in likelihood functions for the embedded Laplace
+
+Stan supports a narrow menu of built-in likelihood functions. These wrappers
+exist for the user's convenience but are not more computationally efficient
+than specifying log likelihoods in the `functions` block.
+
+[...]
+
+
+## Draw approximate samples for out-of-sample latent variables.
+
+In many applications, it is of interest to draw latent variables for
+in-sample and out-of-sample predictions. We respectively denote these latent
+variables $\theta$ and $\theta^*$. In a latent Gaussian model,
+$(\theta, \theta^*)$ jointly follow a prior multivariate normal distribution:
+$$
+  \theta, \theta^* \sim \text{MultiNormal}(0, {\bf K}(\phi)),
+$$
+where $\bf K$ designates the joint covariance matrix over $\theta, \theta^*$.
+
+We can break $\bf K$ into three components,
+$$
+{\bf K} = \begin{bmatrix}
+  K & \\
+  K^* & K^{**}
+\end{bmatrix},
+$$
+where $K$ is the prior covariance matrix for $\theta$, $K^{**}$ the prior
+covariance matrix for $\theta^*$, and $K^*$ the covariance matrix between
+$\theta$ and $\theta^*$.
+
+Stan supports the case where $\theta$ is associated with an in-sample
+covariate $X$ and $\theta^*$ with an out-of-sample covariate $X^*$.
+Furthermore, the covariance function is written in such a way that
+$$
+K = f(..., X, X), \ \ K^{**} = f(..., X^*, X^*), \ \ K^* = f(..., X, X^*),
+$$
+as is typically the case in Gaussian process models.
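+
+In that setting, a single covariance function playing the role of $f$ might
+be sketched as follows (illustrative argument names; `gp_exp_quad_cov` called
+with two input arrays returns the cross covariance between them):
+```
+// K   = K_function(X, X, ...),  K** = K_function(X*, X*, ...),
+// K*  = K_function(X, X*, ...)
+matrix K_function(data array[] real x1, data array[] real x2,
+                  real alpha, real rho) {
+  return gp_exp_quad_cov(x1, x2, alpha, rho);
+}
+```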
+ + + + + +The +function `laplace_latent_rng` produces samples from the Laplace approximation +and admits nearly the same arguments as `laplace_marginal`. A key difference +is that +``` +vector laplace_latent_rng(function ll_function, tupple (...), vector theta_0, + function K_function, tupple (...)); +``` + + + + + + + + + + + + + + + + From e8d1813693568cfa7b376be5b17bbaeb8d350120 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Fri, 18 Apr 2025 15:18:14 -0400 Subject: [PATCH 02/26] initial doc for embedded laplace. --- src/functions-reference/embedded_laplace.qmd | 221 +++++++++++++++---- 1 file changed, 173 insertions(+), 48 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 629e9f3a1..c5d2d5bcf 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -30,12 +30,16 @@ $p(\phi \mid y)$ using one of Stan's algorithms. To obtain posterior draws for $\theta$, we generate samples from the Laplace approximation to $p(\theta \mid y, \phi)$ in `generated quantities`. +The process of iteratively drawing from $p(\phi \mid y)$ (say, with MCMC) and +then $p(\theta \mid y, \phi)$ produces samples from the joint posterior +$p(\theta, \phi \mid y)$. + ## Specifying the likelihood function -The first step is to write down a function in the `functions` block which -returns `\log p(y \mid \theta, \phi)`. There are a few constraints on this -function: +The first step to use the embedded Laplace approximation is to write down a +function in the `functions` block which returns `\log p(y \mid \theta, \phi)`. +There are a few constraints on this function: * The function return type must be `real` @@ -148,9 +152,7 @@ target += laplace_margina_tol(function ll_function, tupple (...), vector theta_0 In `generated quantities`, it is possible to draw samples from the Laplace approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`. 
-The process of iteratively drawing from $p(\phi \mid y)$ (say, with MCMC) and -then $p(\theta \mid y, \phi)$ produces samples from the joint posterior -$p(\theta, \phi \mid y)$. The signature for `laplace_latent_rng` follows closely +The signature for `laplace_latent_rng` follows closely the signature for `laplace_marginal`: ``` vector theta = @@ -166,70 +168,193 @@ vector theta = int solver, int max_steps_linesearch); ``` -## Built-in likelihood functions for the embedded Laplace - -Stan supports a narrow menu of built-in likelihood functions. These wrappers -exist for the user's convenience but are not more computationally efficient -than specifying log likelihoods in the `functions` block. +## Built-in likelihood functions -[...] +Stan supports certain built-in likelihood functions. This selection is currently +narrow and expected to grow. The built-in functions exist for the user's +convenience but are not more computationally efficient than specifying log +likelihoods in the `functions` block. +### Poisson likelihood with log link -## Draw approximate samples for out-of-sample latent variables. - -In many applications, it is of interest to draw latent variables for -in-sample and out-of-sample predictions. We respectively denote these latent -variables $\theta$ and $\theta^*$. In a latent Gaussian model, -$(\theta, \theta^*)$ jointly follow a prior multivariate normal distribution: +Consider a count data, which each observed count $y_i$ associated with a group +$g(i)$ and a corresponding latent variable $\theta_{g(i)}$. The likelihood is $$ - \theta, \theta^* \sim \text{MultiNormal}(0, {\bf K}(\phi)), +p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)})). $$ -where $\bf K$ designates the joint covariance matrix over $\theta, \theta^*$. 
+The arguments required to compute this likelihood are: -We can break $\bf K$ into three components, -$$ -{\bf K} = \begin{bmatrix} - K & \\ - K^* & K^{**} -\end{bmatrix}, -$$ -where $K$ is the prior covariance matrix for $\theta$, $K^{**}$ the prior -covariance matrix for $\theta^*$, and $K^*$ the covariance matrix between -$\theta$ and $\theta^*$. +* `y`: an array of counts. +* `y_index`: an array whose $i^\text{th}$ element indicates to which +group the $i^\text{th}$ observation belongs to. -Stan supports the case where $\theta$ is associated with an in-sample -covariate $X$ and $\theta^*$ with an out-of-sample covariate $X^*$. -Furthermore, the covariance function is written in such a way that -$$ -K = f(..., X, X), \ \ K^{**} = f(..., X^*, X^*), \ \ K^* = f(..., X, X^*), -$$ -as is typically the case in Gaussian process models. +The signatures for the embedded Laplace approximation function with a Poisson +likelihood are +``` +real laplace_marginal_poisson_log_lpmf(array[] int y | array[] int y_index, + vector theta0, function K_function, (...)); +real laplace_marginal_tol_poisson_log_lpmf(array[] int y | array[] int y_index, + vector theta0, function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); +vector laplace_latent_poisson_log_rng(array[] int y, array[] int y_index, + vector theta0, function K_function, (...)); +vector laplace_latent_tol_poisson_log_rng(array[] int y, array[] int y_index, + vector theta0, function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); +``` -The -function `laplace_latent_rng` produces samples from the Laplace approximation -and admits nearly the same arguments as `laplace_marginal`. A key difference -is that +A similar built-in likelihood lets users specify an offset $x_i \in \mathbb R^+$ +to the rate parameter of the Poisson. 
The likelihood is then, +$$ +p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)}) x_i). +$$ +The signatures for this function are: ``` -vector laplace_latent_rng(function ll_function, tupple (...), vector theta_0, - function K_function, tupple (...)); +real laplace_marginal_poisson2_log_lpmf(array[] int y | array[] int y_index, + vector x, vector theta0, + function K_function, (...)); + +real laplace_marginal_tol_poisson2_log_lpmf(array[] int y | array[] int y_index, + vector x, vector theta0, + function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); + +vector laplace_latent_poisson2_log_rng(array[] int y, array[] int y_index, + vector x, vector theta0, + function K_function, (...)); + +vector laplace_latent_tol_poisson2_log_rng(array[] int y, array[] int y_index, + vector x, vector theta0, + function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); ``` +### Negative Binomial likelihood with log link +The negative Bionomial generalizes the Poisson likelihood function by +introducing the dispersion parameter $\eta$. The likelihood is then +$$ +p(y \mid \theta, \phi) = \prod_i\text{NegBinomial2} (y_i \mid \exp(\theta_{g(i)}), \eta). +$$ +Here we use the alternative paramererization implemented in Stan, meaning that +$$ +\mathbb E(y_i) = \exp (\theta_{g(i)}), \ \ \text{Var}(y_i) = \mathbb E(y_i) + \frac{(\mathbb E(y_i))^2}{\eta}. +$$ +The arguments for the likelihood function are: +* `y`: the observed counts +* `y_index`: an array whose $i^\text{th}$ element indicates to which +group the $i^\text{th}$ observation belongs to. +* `eta`: the overdispersion parameter. 
+The function signatures for the embedded Laplace approximation with a negative +Binomial likelihood are +``` +real laplace_marginal_neg_binomial_2_log_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, (...)); + +real laplace_marginal_tol_neg_binomial_2_log_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); + +vector laplace_latent_neg_binomial_2_log_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, (...)); + +vector laplace_latent_tol_neg_binomial_2_log_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); +``` +### Bernoulli likelihood with logit link +For a binary outcome $y_i \in \{0, 1\}$, the likelihood is +$$ +p(y \mid \theta, \phi) = \prod_i\text{Bernoulli} (y_i \mid \text{logit}^{-1}(\theta_{g(i)})). +$$ +The arguments of the likelihood function are: +* `y`: the observed counts +* `y_index`: an array whose $i^\text{th}$ element indicates to which +group the $i^\text{th}$ observation belongs to. 
+The function signatures for the embedded Laplace approximation with a Bernoulli likelihood are +``` +real laplace_marginal_bernoulli_logit_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, (...)); + +real laplace_marginal_tol_bernoulli_logit_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); + +vector laplace_latent_bernoulli_logit_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, (...)); + +vector laplace_latent_tol_bernoulli_logit_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, (...), + real tol, int max_steps, int hessian_block_size, + int solver, int max_steps_linesearch); +``` - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + From ff4414a21bd86dee68e6f07d5f34140ed464e84a Mon Sep 17 00:00:00 2001 From: Aki Vehtari Date: Mon, 28 Apr 2025 21:43:18 +0300 Subject: [PATCH 03/26] edits --- src/functions-reference/embedded_laplace.qmd | 86 ++++++++++++-------- 1 file changed, 53 insertions(+), 33 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index c5d2d5bcf..de1e07fc8 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -8,38 +8,54 @@ The embedded Laplace approximation can be used to approximate certain marginal and conditional distributions that arise in latent Gaussian models. A latent Gaussian model observes the following hierarchical structure: $$ - \phi \sim p(\phi), \ \ \theta \sim \text{MultiNormal}(0, K(\phi)), \ \ + \phi \sim p(\phi), \\ + \theta \sim \text{MultiNormal}(0, K(\phi)), \\ y \sim p(y \mid \theta, \phi), $$ where $K(\phi)$ denotes the prior covariance matrix parameterized by $\phi$. 
-To draw samples from the posterior $p(\phi, \theta \mid y)$, we can either +To sample from the joint posterior $p(\phi, \theta \mid y)$, we can either use a standard method, such as Markov chain Monte Carlo, or we can follow a two-step procedure: -1. draw samples from the *marginal likelihood* $p(\phi \mid y)$ -2. draw samples from the *conditional posterior* $p(\theta \mid y, \phi)$. - -In practice, neither the marginal likelihood nor the conditional posterior -are available in close form and so they must be approximated. -It turns out that if we have an approximation of $p(\theta \mid y, \phi)$, -we immediately obtain an approximation of $p(\phi \mid y)$. -The embedded Laplace approximation returns -$\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$. -Evaluating this log density in the `model` block, we can then sample from -$p(\phi \mid y)$ using one of Stan's algorithms. - -To obtain posterior draws for $\theta$, we generate samples from the Laplace +1. sample from the *marginal posterior* $p(\phi \mid y)$, +2. sample from the *conditional posterior* $p(\theta \mid y, \phi)$. + +In practice, neither the marginal posterior nor the conditional posterior +are available in closed form and so they must be approximated. +The marginal posterior can be written as $p(\phi \mid y) \propto p(y \mid \phi) p(\phi)$, +where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) d\theta$ $ +is called marginal likelihood. The Laplace method approximates +$p(y \mid \phi, \theta) p(\theta)$ with a normal distribution and the +resulting Gaussian integral can be evaluated analytically to obtain an +approximation to the log marginal likelihood +$\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$. + +Combining this marginal likelihood with the prior in the `model` +block, we can then sample from the marginal posterior $p(\phi \mid y)$ +using one of Stan's algorithms. 
The marginal posterior is lower +dimensional and likely to have easier shape to sample leading more +efficient inference. On the other hand each marginal likelihood +computation is more costly, and the combined change in efficiency +depends on the case. + +To obtain posterior draws for $\theta$, we sample from the normal approximation to $p(\theta \mid y, \phi)$ in `generated quantities`. -The process of iteratively drawing from $p(\phi \mid y)$ (say, with MCMC) and +The process of iteratively sampling from $p(\phi \mid y)$ (say, with MCMC) and then $p(\theta \mid y, \phi)$ produces samples from the joint posterior $p(\theta, \phi \mid y)$. +The Laplace approximation is especially useful if $p(\theta)$ is +multivariate normal and $p(y \mid \phi, \theta)$ is +log-concave. Stan's embedded Laplace approximation is restricted to +have multivariate normal prior $p(\theta)$ and ... likelihood +$p(y \mid \phi, \theta)$. + ## Specifying the likelihood function The first step to use the embedded Laplace approximation is to write down a -function in the `functions` block which returns `\log p(y \mid \theta, \phi)`. -There are a few constraints on this function: +function in the `functions` block which returns the log joint likelihood +`\log p(y \mid \theta, \phi)`. There are a few constraints on this function: * The function return type must be `real` @@ -82,8 +98,9 @@ is implicitly defined as the collection of all non-data arguments passed to ## Approximating the log marginal likelihood $\log p(y \mid \phi)$ In the `model` block, we increment `target` with `laplace_marginal`, a function -that approximates $\log p(y \mid \phi)$. This function takes in the -user-specified likelihood and covariance functions, as well as their arguments. +that approximates the log marginal likelihood $\log p(y \mid \phi)$. +This function takes in the +user-specified likelihood and prior covariance functions, as well as their arguments. 
These arguments must be passed as tuples, which can be generated on the fly using parenthesis. We also need to pass an argument $\theta_0$ which serves as an initial guess for @@ -148,9 +165,9 @@ target += laplace_margina_tol(function ll_function, tupple (...), vector theta_0 int solver, int max_steps_linesearch); ``` -## Draw approximate samples from the conditional $p(\theta \mid y, \phi)$ +## Sample from the approximate conditional $\hat{p}(\theta \mid y, \phi)$ -In `generated quantities`, it is possible to draw samples from the Laplace +In `generated quantities`, it is possible to sample from the Laplace approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`. The signature for `laplace_latent_rng` follows closely the signature for `laplace_marginal`: @@ -168,17 +185,19 @@ vector theta = int solver, int max_steps_linesearch); ``` -## Built-in likelihood functions +## Built-in Laplace marginal likelihood functions -Stan supports certain built-in likelihood functions. This selection is currently +Stan supports certain built-in Laplace marginal likelihood functions. +This selection is currently narrow and expected to grow. The built-in functions exist for the user's convenience but are not more computationally efficient than specifying log likelihoods in the `functions` block. -### Poisson likelihood with log link +### Poisson with log link -Consider a count data, which each observed count $y_i$ associated with a group -$g(i)$ and a corresponding latent variable $\theta_{g(i)}$. The likelihood is +Given count data, with each observed count $y_i$ associated with a group +$g(i)$ and a corresponding latent variable $\theta_{g(i)}$, and Poisson model, +the likelihood is $$ p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)})). 
$$ @@ -238,16 +257,17 @@ vector laplace_latent_tol_poisson2_log_rng(array[] int y, array[] int y_index, ``` -### Negative Binomial likelihood with log link +### Negative Binomial with log link -The negative Bionomial generalizes the Poisson likelihood function by -introducing the dispersion parameter $\eta$. The likelihood is then +The negative Bionomial distribution generalizes the Poisson distribution by +introducing the dispersion parameter $\eta$. The corresponding likelihood is then $$ p(y \mid \theta, \phi) = \prod_i\text{NegBinomial2} (y_i \mid \exp(\theta_{g(i)}), \eta). $$ Here we use the alternative paramererization implemented in Stan, meaning that $$ -\mathbb E(y_i) = \exp (\theta_{g(i)}), \ \ \text{Var}(y_i) = \mathbb E(y_i) + \frac{(\mathbb E(y_i))^2}{\eta}. +\mathbb E(y_i) = \exp (\theta_{g(i)}), \\ +\text{Var}(y_i) = \mathbb E(y_i) + \frac{(\mathbb E(y_i))^2}{\eta}. $$ The arguments for the likelihood function are: @@ -280,9 +300,9 @@ vector laplace_latent_tol_neg_binomial_2_log_rng(array[] int y, int solver, int max_steps_linesearch); ``` -### Bernoulli likelihood with logit link +### Bernoulli with logit link -For a binary outcome $y_i \in \{0, 1\}$, the likelihood is +Given binary outcome $y_i \in \{0, 1\}$ and Bernoulli model, the likelihood is $$ p(y \mid \theta, \phi) = \prod_i\text{Bernoulli} (y_i \mid \text{logit}^{-1}(\theta_{g(i)})). 
$$ From 1bd8d1754977c96efe06ed30a1c544dafd6b0357 Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Wed, 28 May 2025 15:07:04 -0400 Subject: [PATCH 04/26] Add to build --- src/_quarto.yml | 1 + src/functions-reference/_quarto.yml | 1 + 2 files changed, 2 insertions(+) diff --git a/src/_quarto.yml b/src/_quarto.yml index a202fded4..b1bf257d6 100644 --- a/src/_quarto.yml +++ b/src/_quarto.yml @@ -237,6 +237,7 @@ website: - section: "Additional Distributions" contents: - functions-reference/hidden_markov_models.qmd + - functions-reference/embedded_laplace.qmd - section: "Appendix" contents: - functions-reference/mathematical_functions.qmd diff --git a/src/functions-reference/_quarto.yml b/src/functions-reference/_quarto.yml index bc6e83acb..9fd6f9a9a 100644 --- a/src/functions-reference/_quarto.yml +++ b/src/functions-reference/_quarto.yml @@ -72,6 +72,7 @@ book: - part: "Additional Distributions" chapters: - hidden_markov_models.qmd + - embedded_laplace.qmd - part: "Appendix" chapters: - mathematical_functions.qmd From a6ee3a7374ee7046903dd782410fdd4db35e3ac7 Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Wed, 28 May 2025 15:09:28 -0400 Subject: [PATCH 05/26] Fix typos --- src/functions-reference/embedded_laplace.qmd | 130 +++++++++---------- 1 file changed, 65 insertions(+), 65 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index de1e07fc8..6be5f53ae 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -23,12 +23,12 @@ a two-step procedure: In practice, neither the marginal posterior nor the conditional posterior are available in closed form and so they must be approximated. The marginal posterior can be written as $p(\phi \mid y) \propto p(y \mid \phi) p(\phi)$, -where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) d\theta$ $ -is called marginal likelihood. 
The Laplace method approximates +where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) d\theta$ $ +is called marginal likelihood. The Laplace method approximates $p(y \mid \phi, \theta) p(\theta)$ with a normal distribution and the resulting Gaussian integral can be evaluated analytically to obtain an -approximation to the log marginal likelihood -$\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$. +approximation to the log marginal likelihood +$\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$. Combining this marginal likelihood with the prior in the `model` block, we can then sample from the marginal posterior $p(\phi \mid y)$ @@ -40,21 +40,21 @@ depends on the case. To obtain posterior draws for $\theta$, we sample from the normal approximation to $p(\theta \mid y, \phi)$ in `generated quantities`. -The process of iteratively sampling from $p(\phi \mid y)$ (say, with MCMC) and +The process of iteratively sampling from $p(\phi \mid y)$ (say, with MCMC) and then $p(\theta \mid y, \phi)$ produces samples from the joint posterior $p(\theta, \phi \mid y)$. The Laplace approximation is especially useful if $p(\theta)$ is multivariate normal and $p(y \mid \phi, \theta)$ is log-concave. Stan's embedded Laplace approximation is restricted to -have multivariate normal prior $p(\theta)$ and ... likelihood +have multivariate normal prior $p(\theta)$ and ... likelihood $p(y \mid \phi, \theta)$. ## Specifying the likelihood function -The first step to use the embedded Laplace approximation is to write down a -function in the `functions` block which returns the log joint likelihood +The first step to use the embedded Laplace approximation is to write down a +function in the `functions` block which returns the log joint likelihood `\log p(y \mid \theta, \phi)`. There are a few constraints on this function: * The function return type must be `real` @@ -63,7 +63,7 @@ function in the `functions` block which returns the log joint likelihood have type `vector`. 
* The operations in the function must support higher-order automatic -differentation (AD). Most functions in Stan support higher-order AD. +differentiation (AD). Most functions in Stan support higher-order AD. The exceptions are functions with specialized calls for reverse-mode AD, and these are higher-order functions (algebraic solvers, differential equation solvers, and integrators) and the suite of hidden Markov model (HMM) functions. @@ -74,7 +74,7 @@ real ll_function(vector theta, ...) ``` There is no type restrictions for the variadic arguments `...` and each argument can be passed as data or parameter. As always, users should use -parameter arguments only when nescessary in order to speed up differentiation. +parameter arguments only when necessary in order to speed up differentiation. In general, we recommend marking data only arguments with the keyword `data`, for example, ``` @@ -98,11 +98,11 @@ is implicitly defined as the collection of all non-data arguments passed to ## Approximating the log marginal likelihood $\log p(y \mid \phi)$ In the `model` block, we increment `target` with `laplace_marginal`, a function -that approximates the log marginal likelihood $\log p(y \mid \phi)$. +that approximates the log marginal likelihood $\log p(y \mid \phi)$. This function takes in the user-specified likelihood and prior covariance functions, as well as their arguments. These arguments must be passed as tuples, which can be generated on the fly -using parenthesis. +using parenthesis. We also need to pass an argument $\theta_0$ which serves as an initial guess for the optimization problem that underlies the Laplace approximation, $$ @@ -113,11 +113,11 @@ passed to `ll_function`. 
The signature of the function is:
 ```
-target += laplace_marginal(function ll_function, tupple (...), vector theta_0,
-                           function K_function, tupple (...));
+real laplace_marginal(function ll_function, tuple(...), vector theta_0,
+                      function K_function, tuple(...));
 ```
-The tuple `(...)` after `ll_function` contains the arguments that get passed
-to `ll_function` *excluding $\theta$*. Likewise, the tuple `(...)` after
+The `tuple(...)` after `ll_function` contains the arguments that get passed
+to `ll_function` *excluding $\theta$*. Likewise, the `tuple(...)` after
 `ll_function` contains the arguments that get passed to `K_function`.
 
 It also possible to specify control parameters, which can help improve the
@@ -159,8 +159,8 @@ maximum number of steps in the linesearch is reached. By default,
 With these arguments at hand, we can call `laplace_marginal_tol` with the
 following signature:
 ```
-target += laplace_margina_tol(function ll_function, tupple (...), vector theta_0,
-                              function K_function, tupple (...),
+target += laplace_marginal_tol(function ll_function, tuple(...), vector theta_0,
+                               function K_function, tuple(...),
                               real tol, int max_steps, int hessian_block_size,
                               int solver, int max_steps_linesearch);
 ```
@@ -172,22 +172,22 @@ approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`.
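Putting the pieces together, a sketch of how the two calls combine — assuming `ll_function` and `K_function` are defined in the `functions` block as described above, and that `theta_0`, `n`, and the hyperparameter priors are illustrative names declared elsewhere in the program:
```
model {
  alpha ~ lognormal(0, 1);
  rho ~ lognormal(0, 1);
  // approximate log marginal likelihood log p(y | phi)
  target += laplace_marginal(ll_function, (y, y_index), theta_0,
                             K_function, (x, alpha, rho));
}
generated quantities {
  // draw from the Laplace approximation to p(theta | y, phi)
  vector[n] theta = laplace_latent_rng(ll_function, (y, y_index), theta_0,
                                       K_function, (x, alpha, rho));
}
```
The tuples `(y, y_index)` and `(x, alpha, rho)` are built on the fly with parentheses, matching the argument lists of `ll_function` and `K_function` minus $\theta$.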
The signature for `laplace_latent_rng` follows closely the signature for `laplace_marginal`: ``` -vector theta = - laplace_latent_rng(function ll_function, tupple (...), vector theta_0, - function K_function, tupple (...)); +vector theta = + laplace_latent_rng(function ll_function, tuple(...), vector theta_0, + function K_function, tuple(...)); ``` Once again, it is possible to specify control parameters: ``` -vector theta = - laplace_latent_tol_rng(function ll_function, tupple (...), vector theta_0, - function K_function, tupple (...), +vector theta = + laplace_latent_tol_rng(function ll_function, tuple(...), vector theta_0, + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); ``` ## Built-in Laplace marginal likelihood functions -Stan supports certain built-in Laplace marginal likelihood functions. +Stan supports certain built-in Laplace marginal likelihood functions. This selection is currently narrow and expected to grow. The built-in functions exist for the user's convenience but are not more computationally efficient than specifying log @@ -196,7 +196,7 @@ likelihoods in the `functions` block. ### Poisson with log link Given count data, with each observed count $y_i$ associated with a group -$g(i)$ and a corresponding latent variable $\theta_{g(i)}$, and Poisson model, +$g(i)$ and a corresponding latent variable $\theta_{g(i)}$, and Poisson model, the likelihood is $$ p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)})). 
@@ -211,18 +211,18 @@ The signatures for the embedded Laplace approximation function with a Poisson likelihood are ``` real laplace_marginal_poisson_log_lpmf(array[] int y | array[] int y_index, - vector theta0, function K_function, (...)); + vector theta0, function K_function, tuple(...)); real laplace_marginal_tol_poisson_log_lpmf(array[] int y | array[] int y_index, - vector theta0, function K_function, (...), + vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); vector laplace_latent_poisson_log_rng(array[] int y, array[] int y_index, - vector theta0, function K_function, (...)); + vector theta0, function K_function, tuple(...)); vector laplace_latent_tol_poisson_log_rng(array[] int y, array[] int y_index, - vector theta0, function K_function, (...), + vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); ``` @@ -237,21 +237,21 @@ The signatures for this function are: ``` real laplace_marginal_poisson2_log_lpmf(array[] int y | array[] int y_index, vector x, vector theta0, - function K_function, (...)); + function K_function, tuple(...)); real laplace_marginal_tol_poisson2_log_lpmf(array[] int y | array[] int y_index, vector x, vector theta0, - function K_function, (...), + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); vector laplace_latent_poisson2_log_rng(array[] int y, array[] int y_index, - vector x, vector theta0, - function K_function, (...)); + vector x, vector theta0, + function K_function, tuple(...)); vector laplace_latent_tol_poisson2_log_rng(array[] int y, array[] int y_index, vector x, vector theta0, - function K_function, (...), + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); ``` @@ -259,15 +259,15 @@ vector 
laplace_latent_tol_poisson2_log_rng(array[] int y, array[] int y_index, ### Negative Binomial with log link -The negative Bionomial distribution generalizes the Poisson distribution by +The negative Binomial distribution generalizes the Poisson distribution by introducing the dispersion parameter $\eta$. The corresponding likelihood is then $$ p(y \mid \theta, \phi) = \prod_i\text{NegBinomial2} (y_i \mid \exp(\theta_{g(i)}), \eta). $$ -Here we use the alternative paramererization implemented in Stan, meaning that +Here we use the alternative parameterization implemented in Stan, meaning that $$ -\mathbb E(y_i) = \exp (\theta_{g(i)}), \\ -\text{Var}(y_i) = \mathbb E(y_i) + \frac{(\mathbb E(y_i))^2}{\eta}. +\mathbb E(y_i) = \exp (\theta_{g(i)}), \\ +\text{Var}(y_i) = \mathbb E(y_i) + \frac{(\mathbb E(y_i))^2}{\eta}. $$ The arguments for the likelihood function are: @@ -279,23 +279,23 @@ group the $i^\text{th}$ observation belongs to. The function signatures for the embedded Laplace approximation with a negative Binomial likelihood are ``` -real laplace_marginal_neg_binomial_2_log_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, (...)); +real laplace_marginal_neg_binomial_2_log_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...)); -real laplace_marginal_tol_neg_binomial_2_log_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, (...), +real laplace_marginal_tol_neg_binomial_2_log_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); -vector laplace_latent_neg_binomial_2_log_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, (...)); +vector laplace_latent_neg_binomial_2_log_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, 
tuple(...)); -vector laplace_latent_tol_neg_binomial_2_log_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, (...), +vector laplace_latent_tol_neg_binomial_2_log_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); ``` @@ -314,23 +314,23 @@ group the $i^\text{th}$ observation belongs to. The function signatures for the embedded Laplace approximation with a Bernoulli likelihood are ``` -real laplace_marginal_bernoulli_logit_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, (...)); +real laplace_marginal_bernoulli_logit_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...)); -real laplace_marginal_tol_bernoulli_logit_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, (...), +real laplace_marginal_tol_bernoulli_logit_lpmf(array[] int y | + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); -vector laplace_latent_bernoulli_logit_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, (...)); +vector laplace_latent_bernoulli_logit_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...)); -vector laplace_latent_tol_bernoulli_logit_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, (...), +vector laplace_latent_tol_bernoulli_logit_rng(array[] int y, + array[] int y_index, real eta, vector theta0, + function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch); ``` @@ -374,7 +374,7 @@ vector laplace_latent_tol_bernoulli_logit_rng(array[] int y, - - + + From 
bffbede7274c5871cf945d431994391e338f34e4 Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Wed, 28 May 2025 16:44:43 -0400 Subject: [PATCH 06/26] Start signature formatting --- src/functions-reference/embedded_laplace.qmd | 38 ++++++++++++-------- src/functions-reference/functions_index.qmd | 20 +++++++++++ 2 files changed, 44 insertions(+), 14 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 6be5f53ae..c7263dd69 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -209,24 +209,34 @@ group the $i^\text{th}$ observation belongs to. The signatures for the embedded Laplace approximation function with a Poisson likelihood are -``` -real laplace_marginal_poisson_log_lpmf(array[] int y | array[] int y_index, - vector theta0, function K_function, tuple(...)); -real laplace_marginal_tol_poisson_log_lpmf(array[] int y | array[] int y_index, - vector theta0, function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); + +\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} -vector laplace_latent_poisson_log_rng(array[] int y, array[] int y_index, - vector theta0, function K_function, tuple(...)); +`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y \textbar\ array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline -vector laplace_latent_tol_poisson_log_rng(array[] int y, array[] int y_index, - vector theta0, function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); -``` +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y \textbar\ array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} + +`vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline + +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} + +`vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +{{< since 2.37 >}} A similar built-in likelihood lets users specify an offset $x_i \in \mathbb R^+$ to the rate parameter of the Poisson. The likelihood is then, diff --git a/src/functions-reference/functions_index.qmd b/src/functions-reference/functions_index.qmd index ebfa0de5d..80febf32c 100644 --- a/src/functions-reference/functions_index.qmd +++ b/src/functions-reference/functions_index.qmd @@ -1621,6 +1621,26 @@ pagetitle: Alphabetical Index -
[`(T x) : R`](real-valued_basic_functions.qmd#index-entry-ab5ca07ba7ba53020939ca3ba7c6e64ccf05cf19) (real-valued_basic_functions.html)
+**laplace_latent_poisson_log_rng**: + + -
[`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-eff5f6d441cfa6795e0f8e6b15b42d024765e323) (embedded_laplace.html)
+ + +**laplace_latent_tol_poisson_log_rng**: + + -
[`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-ffa419074e92267fba174d07d89982aab3791f7f) (embedded_laplace.html)
+ + +**laplace_marginal_poisson_log_lpmf**: + + -
[`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-69cc1eb968d5e1cf857158a6dbd71e46a2b98638) (embedded_laplace.html)
+ + +**laplace_marginal_tol_poisson_log_lpmf**: + + -
[`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b89307e74f003363276021f42b183edb574bb134) (embedded_laplace.html)
+ + **lbeta**: -
[`(real alpha, real beta) : real`](real-valued_basic_functions.qmd#index-entry-def48992e0f8724381904fba78466b0f4a0d14bd) (real-valued_basic_functions.html)
From 53f47d29157ac0605f246ede41fb7013cf13254e Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Thu, 29 May 2025 10:07:16 -0400 Subject: [PATCH 07/26] More signature work --- src/functions-reference/embedded_laplace.qmd | 234 +++++++++++-------- src/functions-reference/functions_index.qmd | 80 +++++++ 2 files changed, 221 insertions(+), 93 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index c7263dd69..a87308c2b 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -112,16 +112,31 @@ The size of $\theta_0$ must be consistent with the size of the $\theta$ argument passed to `ll_function`. The signature of the function is: -``` -real laplace_marginal(function ll_function, tuple(...), vector theta_0, - function K_function, tuple(...)); -``` + + +\index{{\tt \bfseries laplace\_marginal }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...))`
\newline
+
+Returns an approximation to the log marginal likelihood $\log p(y \mid \phi)$,
+obtained by marginalizing the latent Gaussian variable $\theta$ out of the
+joint distribution defined by `ll_function` and `K_function` with a Laplace
+approximation.
+{{< since 2.37 >}}
+
+
 The `tuple(...)` after `ll_function` contains the arguments that get passed
-to `ll_function` *excluding $\theta$*. Likewise, the `tuple(...)` after
-`ll_function` contains the arguments that get passed to `K_function`.
+to `ll_function` *excluding $\theta$*. Likewise, the `tuple(...)` after
+`K_function` contains the arguments that get passed to `K_function`.
 
 It also possible to specify control parameters, which can help improve the
-optimization that underlies the Laplace approximation. Specifically:
+optimization that underlies the Laplace approximation, using `laplace_marginal_tol`
+with the following signature:
+
+
+\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+`real` **`laplace_marginal_tol`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. + * `tol`: the tolerance $\epsilon$ of the optimizer. Specifically, the optimizer stops when $||\nabla \log p(\theta \mid y, \phi)|| \le \epsilon$. By default, @@ -156,14 +171,7 @@ the step is repeatedly halved until the objective function decreases or the maximum number of steps in the linesearch is reached. By default, `max_steps_linesearch=0`, meaning no linesearch is performed. -With these arguments at hand, we can call `laplace_marginal_tol` with the -following signature: -``` -target += laplace_margina_tol(function ll_function, tuple(...), vector theta_0, - function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); -``` +{{< since 2.37 >}} ## Sample from the approximate conditional $\hat{p}(\theta \mid y, \phi)$ @@ -171,19 +179,23 @@ In `generated quantities`, it is possible to sample from the Laplace approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`. The signature for `laplace_latent_rng` follows closely the signature for `laplace_marginal`: -``` -vector theta = - laplace_latent_rng(function ll_function, tuple(...), vector theta_0, - function K_function, tuple(...)); -``` + + +\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...)): vector}|hyperpage} + +`vector` **`laplace_latent_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...))`
\newline
+
+Draws a sample from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi)$, with the likelihood specified by `ll_function` and
+the prior covariance by `K_function`.
+{{< since 2.37 >}}
+
 Once again, it is possible to specify control parameters:
-```
-vector theta =
-  laplace_latent_tol_rng(function ll_function, tuple(...), vector theta_0,
-                         function K_function, tuple(...),
-                         real tol, int max_steps, int hessian_block_size,
-                         int solver, int max_steps_linesearch);
-```
+
+\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+
+`vector` **`laplace_latent_tol_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} ## Built-in Laplace marginal likelihood functions @@ -210,18 +222,20 @@ group the $i^\text{th}$ observation belongs to. The signatures for the embedded Laplace approximation function with a Poisson likelihood are - + \index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y \textbar\ array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline +`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
+
+Returns an approximation to the log marginal likelihood of a Poisson model
+with log link, marginalizing out the latent Gaussian variable $\theta$ with
+a Laplace approximation.
 {{< since 2.37 >}}
 
 
 \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
-`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y \textbar\ array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
+
+Same as `laplace_marginal_poisson_log_lpmf` but with control parameters for
+the Newton optimizer that underlies the Laplace approximation.
 {{< since 2.37 >}}
 
 
 \index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage}
 
 `vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
+
+Draws a sample from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi)$ of a Poisson model with log link.
 {{< since 2.37 >}}
 
 
 \index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
 
 `vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
+
+Same as `laplace_latent_poisson_log_rng` but with control parameters for the
+Newton optimizer that underlies the Laplace approximation.
 {{< since 2.37 >}}
 
 A similar built-in likelihood lets users specify an offset $x_i \in \mathbb R^+$
 to the rate parameter of the Poisson. The likelihood is then,
 $$
 p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)}) x_i).
 $$
 The signatures for this function are:
-```
-real laplace_marginal_poisson2_log_lpmf(array[] int y | array[] int y_index,
-                                        vector x, vector theta0,
-                                        function K_function, tuple(...));
-
-real laplace_marginal_tol_poisson2_log_lpmf(array[] int y | array[] int y_index,
-                                        vector x, vector theta0,
-                                        function K_function, tuple(...),
-                                        real tol, int max_steps, int hessian_block_size,
-                                        int solver, int max_steps_linesearch);
-
-vector laplace_latent_poisson2_log_rng(array[] int y, array[] int y_index,
-                                       vector x, vector theta0,
-                                       function K_function, tuple(...));
-
-vector laplace_latent_tol_poisson2_log_rng(array[] int y, array[] int y_index,
-                                       vector x, vector theta0,
-                                       function K_function, tuple(...),
-                                       real tol, int max_steps, int hessian_block_size,
-                                       int solver, int max_steps_linesearch);
-```
+
+
+\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage}
+
+`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline
+
+Returns an approximation to the log marginal likelihood of a Poisson model
+with log link and offset `x`, marginalizing out the latent Gaussian variable
+$\theta$ with a Laplace approximation.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+`real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
+
+Same as `laplace_marginal_poisson_2_log_lpmf` but with control parameters for
+the Newton optimizer that underlies the Laplace approximation.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage}
+
+`vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline
+
+Draws a sample from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi)$ of a Poisson model with log link and offset `x`.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+
+`vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} ### Negative Binomial with log link @@ -288,27 +315,37 @@ group the $i^\text{th}$ observation belongs to. The function signatures for the embedded Laplace approximation with a negative Binomial likelihood are -``` -real laplace_marginal_neg_binomial_2_log_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...)); - -real laplace_marginal_tol_neg_binomial_2_log_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); - -vector laplace_latent_neg_binomial_2_log_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...)); - -vector laplace_latent_tol_neg_binomial_2_log_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); -``` + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline
+
+Returns an approximation to the log marginal likelihood of a negative binomial
+model with log link and dispersion `eta`, marginalizing out the latent
+Gaussian variable $\theta$ with a Laplace approximation.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+`real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
+
+Same as `laplace_marginal_neg_binomial_2_log_lpmf` but with control parameters
+for the Newton optimizer that underlies the Laplace approximation.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage}
+
+`vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline
+
+Draws a sample from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi)$ of a negative binomial model with log link and
+dispersion `eta`.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+
+`vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} ### Bernoulli with logit link @@ -323,27 +360,38 @@ The arguments of the likelihood function are: group the $i^\text{th}$ observation belongs to. The function signatures for the embedded Laplace approximation with a Bernoulli likelihood are -``` -real laplace_marginal_bernoulli_logit_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...)); - -real laplace_marginal_tol_bernoulli_logit_lpmf(array[] int y | - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); - -vector laplace_latent_bernoulli_logit_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...)); - -vector laplace_latent_tol_bernoulli_logit_rng(array[] int y, - array[] int y_index, real eta, vector theta0, - function K_function, tuple(...), - real tol, int max_steps, int hessian_block_size, - int solver, int max_steps_linesearch); -``` + + +\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
+
+Returns an approximation to the log marginal likelihood of a Bernoulli model
+with logit link, marginalizing out the latent Gaussian variable $\theta$ with
+a Laplace approximation.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+`real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
+
+Same as `laplace_marginal_bernoulli_logit_lpmf` but with control parameters
+for the Newton optimizer that underlies the Laplace approximation.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage}
+
+`vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
+
+Draws a sample from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi)$ of a Bernoulli model with logit link.
+{{< since 2.37 >}}
+
+
+\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+
+`vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} diff --git a/src/functions-reference/functions_index.qmd b/src/functions-reference/functions_index.qmd index 80febf32c..566692aef 100644 --- a/src/functions-reference/functions_index.qmd +++ b/src/functions-reference/functions_index.qmd @@ -1621,21 +1621,101 @@ pagetitle: Alphabetical Index -
[`(T x) : R`](real-valued_basic_functions.qmd#index-entry-ab5ca07ba7ba53020939ca3ba7c6e64ccf05cf19) (real-valued_basic_functions.html)
+**laplace_latent_bernoulli_logit_rng**: + + -
[`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-76c52dc387f97008815bd1574950b0591eed6d56) (embedded_laplace.html)
+
+
+**laplace_latent_neg_binomial_2_log_rng**:
+
+ - [`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-817310ab6f07ee2a7060461fdbdb2c67bb32bf1b) (embedded_laplace.html)
+
+
+**laplace_latent_poisson_2_log_rng**:
+
+ - [`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-7ff5a2bd449f1359ec978aeae187ef00ef43501a) (embedded_laplace.html)
+
+
 **laplace_latent_poisson_log_rng**:

 - [`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-eff5f6d441cfa6795e0f8e6b15b42d024765e323) (embedded_laplace.html)

+**laplace_latent_rng**:
+
+ - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-9fe30de84bc921ad3ca7f7d05ea5259375d71690) (embedded_laplace.html)
+
+
+**laplace_latent_tol_bernoulli_logit_rng**:
+
+ - [`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-a0c71c99bb325c6d3c26c2f90a9284e65f79131d) (embedded_laplace.html)
+
+
+**laplace_latent_tol_neg_binomial_2_log_rng**:
+
+ - [`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-162d0f8b3a810bac0218e059593f859801ebc3c4) (embedded_laplace.html)
+
+
+**laplace_latent_tol_poisson_2_log_rng**:
+
+ - [`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-4e5732bcd95215252096b5de3d01d82ae4a442ff) (embedded_laplace.html)
+
+
 **laplace_latent_tol_poisson_log_rng**:

 - [`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-ffa419074e92267fba174d07d89982aab3791f7f) (embedded_laplace.html)

+**laplace_latent_tol_rng**:
+
+ - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-49d4548b6a8842f8e60219a8fc7bbde410a6100a) (embedded_laplace.html)
+
+
+**laplace_marginal**:
+
+ - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-10ad18098ba50058d64707ab6a84fb8673d24835) (embedded_laplace.html)
+
+
+**laplace_marginal_bernoulli_logit_lpmf**:
+
+ - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-fcd6fa2ce9b1968e193edf876cbecdb7b7025ff8) (embedded_laplace.html)
+
+
+**laplace_marginal_neg_binomial_2_log_lpmf**:
+
+ - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-d9225f23cd6af7ffcc45a3ee0d70e3f77f303edd) (embedded_laplace.html)
+
+
+**laplace_marginal_poisson_2_log_lpmf**:
+
+ - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-aa567fcc59081a7e6dc4f102f3e417a47969f91b) (embedded_laplace.html)
+
+
 **laplace_marginal_poisson_log_lpmf**:

 - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-69cc1eb968d5e1cf857158a6dbd71e46a2b98638) (embedded_laplace.html)

+**laplace_marginal_tol**:
+
+ - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-00f3333e0fed7aa31c301aaa64db6e17d579e925) (embedded_laplace.html)
+
+
+**laplace_marginal_tol_bernoulli_logit_lpmf**:
+
+ - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-11f26356248255b07c55547657eb245074cf580d) (embedded_laplace.html)
+
+
+**laplace_marginal_tol_neg_binomial_2_log_lpmf**:
+
+ - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e302c57a432c1a726b77e82915ccb18f82748bb2) (embedded_laplace.html)
+
+
+**laplace_marginal_tol_poisson_2_log_lpmf**:
+
+ - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e076e264095a7b2e74149913407d3b2ab2d6e069) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_poisson_log_lpmf**:

 - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b89307e74f003363276021f42b183edb574bb134) (embedded_laplace.html)
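The `laplace_marginal*` functions indexed above all return the same core quantity: a Laplace approximation to the log marginal likelihood, obtained by finding the mode of the log joint density and matching a Gaussian there. As a minimal numeric sketch of that computation (NumPy, not Stan; the helper names are hypothetical, and a Poisson likelihood with log link is assumed):

```python
import numpy as np

def log_poisson_lik(theta, y):
    # log p(y | theta) for a Poisson likelihood with log link,
    # dropping the constant -sum(log y!)
    return float(np.sum(y * theta - np.exp(theta)))

def laplace_log_marginal(y, K, n_newton=50):
    # Laplace approximation to log p(y) = log \int p(y | theta) N(theta | 0, K) dtheta:
    # find the mode theta_hat of the log joint by Newton's method, then apply
    # log p(y) ~= log p(y, theta_hat) + (n/2) log(2 pi) - (1/2) log det(-Hessian)
    n = len(y)
    K_inv = np.linalg.inv(K)
    theta = np.zeros(n)
    for _ in range(n_newton):
        W = np.exp(theta)                  # negative Hessian of the Poisson log likelihood
        grad = (y - W) - K_inv @ theta     # gradient of the log joint
        theta = theta + np.linalg.solve(np.diag(W) + K_inv, grad)
    H = np.diag(np.exp(theta)) + K_inv     # negative Hessian of the log joint at the mode
    log_prior = -0.5 * theta @ K_inv @ theta - 0.5 * np.linalg.slogdet(2 * np.pi * K)[1]
    return (log_poisson_lik(theta, y) + log_prior
            + 0.5 * n * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(H)[1])
```

For a two-dimensional latent vector this agrees closely with brute-force quadrature of the marginal likelihood, which is what makes the approximation usable as a drop-in log density in the `model` block.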
From 6b5a35ffcde68b473c2d3da2e22a4de24d70af88 Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Thu, 29 May 2025 10:27:11 -0400 Subject: [PATCH 08/26] Add distribution statements --- src/functions-reference/embedded_laplace.qmd | 68 ++++++++++++++++++++ src/functions-reference/functions_index.qmd | 40 ++++++++++++ 2 files changed, 108 insertions(+) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index a87308c2b..77cfa5023 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -219,6 +219,22 @@ The arguments required to compute this likelihood are: * `y_index`: an array whose $i^\text{th}$ element indicates to which group the $i^\text{th}$ observation belongs to. + +\index{{\tt \bfseries laplace\_marginal\_poisson\_log }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_poisson_log`**`(y_index, theta0, K_function, (...))`
\newline + +Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, theta0, K_function, (...))`. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_tol_poisson_log`**`(y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline + +Increment target log probability density with `laplace_marginal_tol_poisson_log_lupmf(y | y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +{{< since 2.37 >}} + The signatures for the embedded Laplace approximation function with a Poisson likelihood are @@ -259,6 +275,24 @@ to the rate parameter of the Poisson. The likelihood is then, $$ p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)}) x_i). $$ + + +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_poisson_2_log`**`(y_index, x, theta0, K_function, (...))`
\newline + +Increment target log probability density with `laplace_marginal_poisson_2_log_lupmf(y | y_index, x, theta0, K_function, (...))`. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_tol_poisson_2_log`**`(y_index, x, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline + +Increment target log probability density with `laplace_marginal_tol_poisson_2_log_lupmf(y | y_index, x, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +{{< since 2.37 >}} + + The signatures for this function are: @@ -313,6 +347,23 @@ The arguments for the likelihood function are: group the $i^\text{th}$ observation belongs to. * `eta`: the overdispersion parameter. + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_neg_binomial_2_log`**`(y_index, eta, theta0, K_function, (...))`
\newline + +Increment target log probability density with `laplace_marginal_neg_binomial_2_log_lupmf(y | y_index, eta, theta0, K_function, (...))`. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_tol_neg_binomial_2_log`**`(y_index, eta, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline + +Increment target log probability density with `laplace_marginal_tol_neg_binomial_2_log_lupmf(y | y_index, eta, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +{{< since 2.37 >}} + + The function signatures for the embedded Laplace approximation with a negative Binomial likelihood are @@ -359,6 +410,23 @@ The arguments of the likelihood function are: * `y_index`: an array whose $i^\text{th}$ element indicates to which group the $i^\text{th}$ observation belongs to. + +\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_bernoulli_logit`**`(y_index, theta0, K_function, (...))`
\newline + +Increment target log probability density with `laplace_marginal_bernoulli_logit_lupmf(y | y_index, theta0, K_function, (...))`. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit }!sampling statement|hyperpage} + +`y ~ ` **`laplace_marginal_tol_bernoulli_logit`**`(y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline + +Increment target log probability density with `laplace_marginal_tol_bernoulli_logit_lupmf(y | y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +{{< since 2.37 >}} + + The function signatures for the embedded Laplace approximation with a Bernoulli likelihood are diff --git a/src/functions-reference/functions_index.qmd b/src/functions-reference/functions_index.qmd index 566692aef..8cc61bf59 100644 --- a/src/functions-reference/functions_index.qmd +++ b/src/functions-reference/functions_index.qmd @@ -1676,21 +1676,41 @@ pagetitle: Alphabetical Index -
[`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-10ad18098ba50058d64707ab6a84fb8673d24835) (embedded_laplace.html)
+**laplace_marginal_bernoulli_logit**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-1d93ab799518d0aac88e63c01f9655f36c7cbeb6) (embedded_laplace.html)
+
+
 **laplace_marginal_bernoulli_logit_lpmf**:

 - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-fcd6fa2ce9b1968e193edf876cbecdb7b7025ff8) (embedded_laplace.html)

+**laplace_marginal_neg_binomial_2_log**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-7f807083cfb7347ece100da5dfe719186a678d02) (embedded_laplace.html)
+
+
 **laplace_marginal_neg_binomial_2_log_lpmf**:

 - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-d9225f23cd6af7ffcc45a3ee0d70e3f77f303edd) (embedded_laplace.html)

+**laplace_marginal_poisson_2_log**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-47b6396bce572a75c537fe2902d7be0f80b7af4e) (embedded_laplace.html)
+
+
 **laplace_marginal_poisson_2_log_lpmf**:

 - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-aa567fcc59081a7e6dc4f102f3e417a47969f91b) (embedded_laplace.html)

+**laplace_marginal_poisson_log**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-b2c6946b65561039eb8bf3999fb59d38a6336e10) (embedded_laplace.html)
+
+
 **laplace_marginal_poisson_log_lpmf**:

 - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-69cc1eb968d5e1cf857158a6dbd71e46a2b98638) (embedded_laplace.html)
@@ -1701,21 +1721,41 @@ pagetitle: Alphabetical Index -
[`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-00f3333e0fed7aa31c301aaa64db6e17d579e925) (embedded_laplace.html)
+**laplace_marginal_tol_bernoulli_logit**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-92d43f7c3643c7d85966bbfc0b2a71546facb37c) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_bernoulli_logit_lpmf**:

 - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-11f26356248255b07c55547657eb245074cf580d) (embedded_laplace.html)

+**laplace_marginal_tol_neg_binomial_2_log**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-40016f7d5d96b9d999ca42f446720cf3f3cd5769) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_neg_binomial_2_log_lpmf**:

 - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e302c57a432c1a726b77e82915ccb18f82748bb2) (embedded_laplace.html)

+**laplace_marginal_tol_poisson_2_log**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-257d3af0df49c240293dc3a2d4f8cac109dd54d1) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_poisson_2_log_lpmf**:

 - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e076e264095a7b2e74149913407d3b2ab2d6e069) (embedded_laplace.html)

+**laplace_marginal_tol_poisson_log**:
+
+ - [distribution statement](embedded_laplace.qmd#index-entry-4a7ecca812a308ea648bb31996559cba79738d83) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_poisson_log_lpmf**:

 - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b89307e74f003363276021f42b183edb574bb134) (embedded_laplace.html)
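The distribution statements added in this patch rewrite `y ~ f(...)` into a call to the corresponding `_lupmf` function, which may drop normalizing constants that do not depend on parameters. The following toy sketch (plain Python, and an ordinary Poisson rather than the Laplace wrappers, purely to illustrate the convention) shows why dropping such terms is safe:

```python
import math

def poisson_log_lpmf(y, log_lambda):
    # fully normalized: log Poisson(y | exp(log_lambda))
    return y * log_lambda - math.exp(log_lambda) - math.lgamma(y + 1)

def poisson_log_lupmf(y, log_lambda):
    # unnormalized: the -lgamma(y + 1) term is constant in the
    # parameter log_lambda, so a distribution statement may drop it
    return y * log_lambda - math.exp(log_lambda)
```

The two versions differ only by `lgamma(y + 1)`, which does not depend on `log_lambda`, so posterior inference over the parameters is unchanged while each evaluation is cheaper.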
From 3996971291ae9d16cadfc509b635781651792441 Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Thu, 29 May 2025 10:34:47 -0400 Subject: [PATCH 09/26] Duplicate for lupmfs --- src/functions-reference/embedded_laplace.qmd | 67 ++++++++++++++++++++ src/functions-reference/functions_index.qmd | 40 ++++++++++++ 2 files changed, 107 insertions(+) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 77cfa5023..117588107 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -254,6 +254,23 @@ TODO description. TODO description. {{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline + +TODO description. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +`real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} + \index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} @@ -311,6 +328,22 @@ TODO description. TODO description. {{< since 2.37 >}} + +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline + +TODO description. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +`real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} + \index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} @@ -366,6 +399,7 @@ Increment target log probability density with `laplace_marginal_tol_neg_binomial The function signatures for the embedded Laplace approximation with a negative Binomial likelihood are + \index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} @@ -382,6 +416,22 @@ TODO description. TODO description. {{< since 2.37 >}} + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline + +TODO description. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +`real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} + \index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} @@ -445,6 +495,23 @@ TODO description. TODO description. {{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +`real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline + +TODO description. +{{< since 2.37 >}} + + +\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +`real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +TODO description. +{{< since 2.37 >}} + \index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} diff --git a/src/functions-reference/functions_index.qmd b/src/functions-reference/functions_index.qmd index 8cc61bf59..badd1472c 100644 --- a/src/functions-reference/functions_index.qmd +++ b/src/functions-reference/functions_index.qmd @@ -1686,6 +1686,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-fcd6fa2ce9b1968e193edf876cbecdb7b7025ff8) (embedded_laplace.html)
+**laplace_marginal_bernoulli_logit_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-be6571c1711c9f1192e43144fbdfcff1dea6e0fa) (embedded_laplace.html)
+
+
 **laplace_marginal_neg_binomial_2_log**:

 - [distribution statement](embedded_laplace.qmd#index-entry-7f807083cfb7347ece100da5dfe719186a678d02) (embedded_laplace.html)
@@ -1696,6 +1701,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-d9225f23cd6af7ffcc45a3ee0d70e3f77f303edd) (embedded_laplace.html)
+**laplace_marginal_neg_binomial_2_log_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-48ce0c23acb809fc07429a8dd51b230865f415d0) (embedded_laplace.html)
+
+
 **laplace_marginal_poisson_2_log**:

 - [distribution statement](embedded_laplace.qmd#index-entry-47b6396bce572a75c537fe2902d7be0f80b7af4e) (embedded_laplace.html)
@@ -1706,6 +1716,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-aa567fcc59081a7e6dc4f102f3e417a47969f91b) (embedded_laplace.html)
+**laplace_marginal_poisson_2_log_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-edb9486186d9d6a2c354cae57246f01bd004e172) (embedded_laplace.html)
+
+
 **laplace_marginal_poisson_log**:

 - [distribution statement](embedded_laplace.qmd#index-entry-b2c6946b65561039eb8bf3999fb59d38a6336e10) (embedded_laplace.html)
@@ -1716,6 +1731,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-69cc1eb968d5e1cf857158a6dbd71e46a2b98638) (embedded_laplace.html)
+**laplace_marginal_poisson_log_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-04fde08ca1cba55d670effdf01ffafdc6589e4c4) (embedded_laplace.html)
+
+
 **laplace_marginal_tol**:

 - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-00f3333e0fed7aa31c301aaa64db6e17d579e925) (embedded_laplace.html)
@@ -1731,6 +1751,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-11f26356248255b07c55547657eb245074cf580d) (embedded_laplace.html)
+**laplace_marginal_tol_bernoulli_logit_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-d7467a382e63b9aff52ca3ee78c531e7015a49fe) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_neg_binomial_2_log**:

 - [distribution statement](embedded_laplace.qmd#index-entry-40016f7d5d96b9d999ca42f446720cf3f3cd5769) (embedded_laplace.html)
@@ -1741,6 +1766,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e302c57a432c1a726b77e82915ccb18f82748bb2) (embedded_laplace.html)
+**laplace_marginal_tol_neg_binomial_2_log_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-cd55590ff7cb4ffd6e8b2379c95a2de89043e288) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_poisson_2_log**:

 - [distribution statement](embedded_laplace.qmd#index-entry-257d3af0df49c240293dc3a2d4f8cac109dd54d1) (embedded_laplace.html)
@@ -1751,6 +1781,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e076e264095a7b2e74149913407d3b2ab2d6e069) (embedded_laplace.html)
+**laplace_marginal_tol_poisson_2_log_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-644e80df6ed47fcffc1ecc8333dc049dae69890b) (embedded_laplace.html)
+
+
 **laplace_marginal_tol_poisson_log**:

 - [distribution statement](embedded_laplace.qmd#index-entry-4a7ecca812a308ea648bb31996559cba79738d83) (embedded_laplace.html)
@@ -1761,6 +1796,11 @@ pagetitle: Alphabetical Index -
[`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b89307e74f003363276021f42b183edb574bb134) (embedded_laplace.html)
+**laplace_marginal_tol_poisson_log_lupmf**:
+
+ - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-6301b27bd4c2e216d4fda329e937d6fb15987674) (embedded_laplace.html)
+
+
 **lbeta**:

 - [`(real alpha, real beta) : real`](real-valued_basic_functions.qmd#index-entry-def48992e0f8724381904fba78466b0f4a0d14bd) (real-valued_basic_functions.html)
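The `_lupmf` entries above cover the marginal-likelihood side; the companion `laplace_latent_*_rng` functions instead draw from the Gaussian (Laplace) approximation to the conditional posterior, a normal centered at the mode with covariance given by the inverse negative Hessian of the log joint. A NumPy sketch under the same Poisson-with-log-link assumption (helper names are hypothetical, not the Stan API):

```python
import numpy as np

def laplace_latent_gaussian(y, K, n_newton=50):
    # Mode and covariance of the Laplace approximation to p(theta | y)
    # for a Poisson likelihood with log link and a N(0, K) prior.
    K_inv = np.linalg.inv(K)
    theta = np.zeros(len(y))
    for _ in range(n_newton):
        W = np.exp(theta)  # negative Hessian of the Poisson log likelihood
        theta = theta + np.linalg.solve(np.diag(W) + K_inv, (y - W) - K_inv @ theta)
    cov = np.linalg.inv(np.diag(np.exp(theta)) + K_inv)
    return theta, cov

# one approximate draw from p(theta | y), analogous to what an
# rng-style latent function would return in generated quantities
rng = np.random.default_rng(0)
y = np.array([1.0, 3.0])
K = np.array([[1.0, 0.5], [0.5, 1.0]])
theta_hat, cov = laplace_latent_gaussian(y, K)
draw = rng.multivariate_normal(theta_hat, cov)
```

Because each draw conditions on the hyperparameters through `K`, generating one latent draw per posterior draw of the hyperparameters recovers the two-step sampling scheme described in the manual text.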
From 1db0c1d202568f4b87a3968487090fee82eaec29 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Tue, 3 Jun 2025 19:21:09 -0400 Subject: [PATCH 10/26] start addressing the decription todo --- src/functions-reference/embedded_laplace.qmd | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 0afb98981..c5e100e62 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -120,7 +120,7 @@ The signature of the function is: `real` **`laplace_marginal`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$. {{< since 2.37 >}} @@ -137,7 +137,8 @@ with the following signature: `real` **`laplace_marginal_tol`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +and allows user to specify tolerances. * `tol`: the tolerance $\epsilon$ of the optimizer. Specifically, the optimizer @@ -187,7 +188,7 @@ the signature for `laplace_marginal`: `vector` **`laplace_latent_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$. {{< since 2.37 >}} Once again, it is possible to specify control parameters: @@ -196,7 +197,7 @@ Once again, it is possible to specify control parameters: `vector` **`laplace_latent_tol_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$. {{< since 2.37 >}} ## Built-in Laplace marginal likelihood functions @@ -245,7 +246,9 @@ likelihood are `real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link. {{< since 2.37 >}} @@ -253,7 +256,10 @@ TODO description. `real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link, and allows the users to tune the control +parameters of the approximation. {{< since 2.37 >}} From 88fa61f736ce4d2b4b7785eaf27c6db7a171ea96 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Thu, 5 Jun 2025 14:25:37 -0400 Subject: [PATCH 11/26] description of function signatures and proofread. --- src/functions-reference/embedded_laplace.qmd | 118 ++++++++++++++----- 1 file changed, 87 insertions(+), 31 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index c5e100e62..60ab53677 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -21,8 +21,8 @@ a two-step procedure: 1. sample from the *marginal posterior* $p(\phi \mid y)$, 2. sample from the *conditional posterior* $p(\theta \mid y, \phi)$. -In practice, neither the marginal posterior nor the conditional posterior -are available in closed form and so they must be approximated. +In the above procedure, neither the marginal posterior nor the conditional posterior +are typically available in closed form and so they must be approximated. The marginal posterior can be written as $p(\phi \mid y) \propto p(y \mid \phi) p(\phi)$, where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) d\theta$ $ is called the marginal likelihood. The Laplace method approximates @@ -138,8 +138,7 @@ with the following signature: `real` **`laplace_marginal_tol`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ -and allows user to specify tolerances. - +and allows the user to tune the control parameters of the approximation. * `tol`: the tolerance $\epsilon$ of the optimizer. Specifically, the optimizer stops when $||\nabla \log p(\theta \mid y, \phi)|| \le \epsilon$. By default, @@ -197,14 +196,18 @@ Once again, it is possible to specify control parameters: `vector` **`laplace_latent_tol_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$. +Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$ +and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} ## Built-in Laplace marginal likelihood functions -Stan supports certain built-in Laplace marginal likelihood functions. -This selection is currently -narrow and expected to grow. The built-in functions exist for the user's +Stan provides convenient wrappers for the embedded Laplace approximation +when applied to latent Gaussian models with certain likelihoods. +With this wrapper, the likelihood is pre-specified and does not need to be +specified by the user. +The selection of supported likelihoods is currently +narrow and expected to grow. The wrappers exist for the user's convenience but are not more computationally efficient than specifying log likelihoods in the `functions` block. @@ -258,7 +261,7 @@ distribution with a log link. Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson -distribution with a log link, and allows the users to tune the control +distribution with a log link, and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} @@ -268,7 +271,9 @@ parameters of the approximation. `real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link. {{< since 2.37 >}} @@ -276,7 +281,10 @@ TODO description. `real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link, and allows the user to tune the control +parameters of the approximation. {{< since 2.37 >}} @@ -284,7 +292,9 @@ TODO description. `vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns a draw from the Laplace approximation to the conditional posterior +$p(\theta \mid y, \phi)$ in the special case where the likelihood +$p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} @@ -292,7 +302,10 @@ TODO description. `vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns a draw from the Laplace approximation to the conditional posterior +$p(\theta \mid y, \phi)$ in the special case where the likelihood +$p(y \mid \theta)$ is a Poisson distribution with a log link +and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} A similar built-in likelihood lets users specify an offset $x_i \in \mathbb R^+$ @@ -325,7 +338,9 @@ The signatures for this function are: `real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link and an offset. {{< since 2.37 >}} @@ -333,7 +348,10 @@ TODO description. `real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link and an offset +and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} @@ -341,7 +359,9 @@ TODO description. `real` **`laplace_marginal_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link and an offset. {{< since 2.37 >}} @@ -349,7 +369,10 @@ TODO description. `real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ +in the special case where the likelihood $p(y \mid \theta)$ is a Poisson +distribution with a log link and an offset +and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} @@ -357,7 +380,9 @@ TODO description. `vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns a draw from the Laplace approximation to the conditional posterior +$p(\theta \mid y, \phi)$ in the special case where the likelihood +$p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} @@ -365,7 +390,10 @@ TODO description. `vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns a draw from the Laplace approximation to the conditional posterior +$p(\theta \mid y, \phi)$ in the special case where the likelihood +$p(y \mid \theta)$ is a Poisson distribution with a log link and an offset, +and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} @@ -413,7 +441,9 @@ Binomial likelihood are `real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ +in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative +Binomial distribution with a log link. {{< since 2.37 >}} @@ -421,7 +451,10 @@ TODO description. `real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ +in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative +Binomial distribution with a log link, and allows the user to tune the control +parameters of the approximation. {{< since 2.37 >}} @@ -429,7 +462,9 @@ TODO description. `real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ +in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative +Binomial distribution with a log link. {{< since 2.37 >}} @@ -437,7 +472,10 @@ TODO description. `real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ +in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative +Binomial distribution with a log link, and allows the user to tune the control +parameters of the approximation. {{< since 2.37 >}} @@ -445,7 +483,9 @@ TODO description. `vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline
-TODO description.
+Returns a draw from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi, \eta)$ in the special case where the likelihood
+$p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link.
 {{< since 2.37 >}}
 
@@ -453,7 +493,10 @@ TODO description.
 
 `vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
-TODO description.
+Returns a draw from the Laplace approximation to the conditional posterior
+$p(\theta \mid y, \phi, \eta)$ in the special case where the likelihood
+$p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link,
+and allows the user to tune the control parameters of the approximation.
 {{< since 2.37 >}}
 
 ### Bernoulli with logit link
 
@@ -492,7 +535,9 @@ The function signatures for the embedded Laplace approximation with a Bernoulli
 
 `real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
-TODO description.
+Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
+in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
+distribution with a logit link.
 {{< since 2.37 >}}
 
@@ -500,7 +545,9 @@ TODO description.
 
 `real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
-TODO description.
+Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
+in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
+distribution with a logit link, and allows the user to tune the control
+parameters of the approximation.
 {{< since 2.37 >}}
 
@@ -509,7 +556,9 @@ TODO description.
 
 `real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
-TODO description.
+Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
+in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
+distribution with a logit link.
 {{< since 2.37 >}}
 
@@ -517,7 +566,9 @@ TODO description.
 
 `real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
-TODO description.
+Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
+in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
+distribution with a logit link, and allows the user to tune the control
+parameters of the approximation.
 {{< since 2.37 >}}
 
@@ -525,7 +576,9 @@ TODO description.
 
 `vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline -TODO description. +Returns a draw from the Laplace approximation to the conditional posterior +$p(\theta \mid y, \phi)$ in the special case where the likelihood +$p(y \mid \theta)$ is a Bernoulli distribution with a logit link. {{< since 2.37 >}} @@ -533,7 +586,10 @@ TODO description. `vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline -TODO description. +Returns a draw from the Laplace approximation to the conditional posterior +$p(\theta \mid y, \phi)$ in the special case where the likelihood +$p(y \mid \theta)$ is a Bernoulli distribution with a logit link, +and lets the user tune the control parameters of the approximation. {{< since 2.37 >}} From b86fb23085e1c7a783161e662f50400b2ecbf06b Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Thu, 5 Jun 2025 16:32:28 -0400 Subject: [PATCH 12/26] implement Steve's changes --- src/functions-reference/embedded_laplace.qmd | 361 +++++++++++-------- 1 file changed, 205 insertions(+), 156 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 60ab53677..3a4d0f5fc 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -10,9 +10,16 @@ A latent Gaussian model observes the following hierarchical structure: $$ \phi \sim p(\phi), \\ \theta \sim \text{MultiNormal}(0, K(\phi)), \\ - y \sim p(y \mid \theta, \phi), + y \sim p(y \mid \theta, \phi). $$ -where $K(\phi)$ denotes the prior covariance matrix parameterized by $\phi$. +In this formulation, $y$ represents the +observed data, and $p(y \mid \theta, \phi)$ is the likelihood function that +specifies how observations are generated conditional on the latent variables +$\theta$ and hyperparameters $\phi$. +$\phi$ denotes the set of hyperparameters governing the model and +$p(\phi)$ is the prior distribution placed over these hyperparameters. +$K(\phi)$ denotes the prior covariance matrix for the latent Gaussian variables +$\theta$ and is parameterized by $\phi$. The prior $p(\theta)$ is restricted to be a multivariate normal. 
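For concreteness, a Gaussian process regression with count data is one common instance of this template; the squared-exponential kernel and Poisson observation model below are illustrative choices, not requirements of the interface:
$$
\phi = (\alpha, \rho), \qquad
K(\phi)_{ij} = \alpha^2 \exp\left(-\frac{(x_i - x_j)^2}{2\rho^2}\right), \qquad
y_i \mid \theta, \phi \sim \text{Poisson}\left(e^{\theta_i}\right).
$$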
To sample from the joint posterior $p(\phi, \theta \mid y)$, we can either
use a standard method, such as Markov chain Monte Carlo, or we can follow
a two-step procedure:
@@ -24,10 +31,14 @@ a two-step procedure:
 In the above procedure, neither the marginal posterior nor the conditional
 posterior are typically available in closed form and so they must be approximated.
 The marginal posterior can be written as $p(\phi \mid y) \propto p(y \mid \phi) p(\phi)$,
-where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) d\theta$ $
+where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) \text{d}\theta$
 is called the marginal likelihood. The Laplace method approximates
-$p(y \mid \phi, \theta) p(\theta)$ with a normal distribution and the
-resulting Gaussian integral can be evaluated analytically to obtain an
+$p(y \mid \phi, \theta) p(\theta)$ with a normal distribution centered at
+$$
+  \theta^* = \underset{\theta}{\text{argmax}} \ \log p(\theta \mid y, \phi),
+$$
+where $\theta^*$ is obtained using a numerical optimizer.
+The resulting Gaussian integral can be evaluated analytically to obtain an
 approximation to the log marginal likelihood
 $\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$.
@@ -50,56 +61,99 @@ multivariate normal and $p(y \mid \phi, \theta)$ is log-concave.
 Stan's embedded Laplace approximation is restricted to the case where
 the prior $p(\theta)$ is multivariate normal.
 Furthermore, the likelihood $p(y \mid \phi, \theta)$ must be computed using
-only operations which support higher-order derivatives (see the next section).
+only operations which support higher-order derivatives
+(see section [specifying the likelihood function](#laplace-likelihood_spec)).
+
+## Approximating the log marginal likelihood $\log p(y \mid \phi)$
+
+In the `model` block, we increment `target` with `laplace_marginal`, a function
+that approximates the log marginal likelihood $\log p(y \mid \phi)$.
+The signature of the function is:
+
+`real` **`laplace_marginal`**`(function likelihood_function, tuple() likelihood_arguments, vector theta_init, function covariance_function, tuple() covariance_arguments)`
+
+This returns an approximation to the log marginal likelihood $p(y \mid \phi)$.
+{{< since 2.37 >}}
+
+This function takes in the following arguments:
+
+1. `likelihood_function` - a user-specified likelihood whose first argument is the vector of latent Gaussian variables `theta`
+2. `likelihood_arguments` - a tuple of the likelihood arguments whose internal members will be passed to the likelihood function
+3. `theta_init` - an initial guess for the optimization problem that underlies the Laplace approximation
+4. `covariance_function` - the prior covariance function
+5. `covariance_arguments` - a tuple of the arguments whose internal members will be passed to the covariance function
+
+The size of $\theta_\text{init}$ must be consistent with the size of the $\theta$ argument
+passed to `likelihood_function`.
 
-## Specifying the likelihood function
+Below we go over each argument in more detail.
+
+## Specifying the likelihood function {#laplace-likelihood_spec}
 
 The first step to use the embedded Laplace approximation is to write down a
 function in the `functions` block which returns the log joint likelihood
-`\log p(y \mid \theta, \phi)`. There are a few constraints on this function:
+$\log p(y \mid \theta, \phi)$.
+
+There are a few constraints on this function:
 
-* The function return type must be `real`
+1. The function return type must be `real`
 
-* The first argument must be the latent Gaussian variable $\theta$ and must
+2. The first argument must be the latent Gaussian variable $\theta$ and must
 have type `vector`.
 
-* The operations in the function must support higher-order automatic
+3. The operations in the function must support higher-order automatic
 differentiation (AD). Most functions in Stan support higher-order AD.
The exceptions are functions with specialized calls for reverse-mode AD, and
these are higher-order functions (algebraic solvers, differential equation
solvers, and integrators), the marginalization function for hidden Markov
models (HMM), and the embedded Laplace approximation itself.
 
-The signature of the function is
+The base signature of the function is
+
+```stan
+real likelihood_function(vector theta, ...)
 ```
-real ll_function(vector theta, ...)
+
+The `...` represents a set of optional variadic arguments. There are no type
+restrictions for the variadic arguments `...` and each argument can be passed
+as data or parameter.
+
+The tuple after `likelihood_function` contains the arguments that get passed
+to `likelihood_function` *excluding $\theta$*. For instance, if a user-defined
+likelihood uses a real and a matrix, the likelihood function's signature would
+first have a vector argument, followed by a real and a matrix argument.
+
+```stan
+real likelihood_fun(vector theta, real a, matrix X)
 ```
-There is no type restrictions for the variadic arguments `...` and each
-argument can be passed as data or parameter. As always, users should use
-parameter arguments only when necessary in order to speed up differentiation.
+
+The call to the Laplace marginal would start with this likelihood and
+a tuple holding the other likelihood arguments.
+
+```stan
+real val = laplace_marginal(likelihood_fun, (a, X), ...);
+```
+
+As always, users should use parameter arguments only when necessary in order to
+speed up differentiation.
 In general, we recommend marking data only arguments with the keyword `data`,
 for example,
-```
-real ll_function(vector theta, data vector x, ...)
+
+```stan
+real likelihood_function(vector theta, data vector x, ...)
 ```
 
 ## Specifying the covariance function
 
-We next need to specify a function that returns the prior covariance matrix
-$K$ as a function of the hyperparameters $\phi$.
-The only restriction is that this function returns a positive-definite matrix
-with size $n \times n$ where $n$ is the size of $\theta$. The signature is:
-```
-matrix K_function(...)
-```
-There is no type restrictions for the variadic arguments. The variables $\phi$
-is implicitly defined as the collection of all non-data arguments passed to
-`ll_function` (excluding $\theta$) and `K_function`.
-
+The argument `covariance_function` returns the prior covariance matrix
+$K$. The signature for this function is the same as a standard Stan function.
+Its return type must be a matrix of size $n \times n$ where $n$ is the size of $\theta$.
 
-## Approximating the log marginal likelihood $\log p(y \mid \phi)$
+```stan
+matrix covariance_function(...)
+```
 
-In the `model` block, we increment `target` with `laplace_marginal`, a function
+
 <!--
-\index{{\tt \bfseries laplace\_marginal }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...)): real}|hyperpage}
-
-`real` **`laplace_marginal`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...))`
\newline
-
-Returns an approximation to the log marginal likelihood $p(y \mid \phi)$.
-{{< since 2.37 >}}
+passed to `likelihood_function`. -->
+
+The `...` represents a set of optional
+variadic arguments. There are no type restrictions for the variadic arguments
+`...` and each argument can be passed as data or parameter. The variable
+$\phi$ is implicitly defined as the collection of all non-data arguments passed
+to `likelihood_function` (excluding $\theta$) and `covariance_function`.
+
+The tuple after `covariance_function` contains the arguments that get passed
+to `covariance_function`. For instance, if a user-defined covariance function
+uses a real and a matrix
+```stan
+matrix cov_fun(real b, matrix Z)
+```
+the call to the Laplace marginal would include the covariance function and
+a tuple holding the covariance function arguments.
+```stan
+real val = laplace_marginal(likelihood_fun, (a, X), theta_init, cov_fun, (b, Z));
+```
 
-The `tuple(...)` after `ll_function` contains the arguments that get passed
-to `ll_function` *excluding $\theta$*. Likewise, the `tuple(...)` after
-`K_function` contains the arguments that get passed to `K_function`.
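Putting the likelihood and covariance pieces together, a minimal end-to-end sketch of a model using the embedded Laplace approximation might look as follows. The Poisson likelihood, squared-exponential kernel, and all data and variable names here (`ll_fun`, `cov_fun`, `log_offset`, `alpha`, `rho`) are illustrative assumptions, not part of the `laplace_marginal` interface:

```stan
functions {
  // Log joint likelihood: Poisson counts with a log link (illustrative choice)
  real ll_fun(vector theta, data array[] int y, data vector log_offset) {
    return poisson_log_lpmf(y | theta + log_offset);
  }
  // Prior covariance: squared-exponential kernel with a small jitter term
  matrix cov_fun(real alpha, real rho, data array[] real x) {
    return add_diag(gp_exp_quad_cov(x, alpha, rho), 1e-8);
  }
}
data {
  int<lower=1> n;
  array[n] int y;
  array[n] real x;
  vector[n] log_offset;
}
transformed data {
  vector[n] theta_init = rep_vector(0, n);  // initial guess for the optimizer
}
parameters {
  real<lower=0> alpha;  // hyperparameters phi = (alpha, rho)
  real<lower=0> rho;
}
model {
  alpha ~ normal(0, 1);
  rho ~ inv_gamma(5, 5);
  // Approximate log marginal likelihood log p(y | phi)
  target += laplace_marginal(ll_fun, (y, log_offset), theta_init,
                             cov_fun, (alpha, rho, x));
}
generated quantities {
  // Approximate draw from the conditional posterior p(theta | y, phi)
  vector[n] theta = laplace_latent_rng(ll_fun, (y, log_offset), theta_init,
                                       cov_fun, (alpha, rho, x));
}
```

Note how the data-only arguments of both user functions are marked `data`, and how the same function-and-tuple pairs are reused in `generated quantities` to recover the latent draws.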
+## Control parameters It also possible to specify control parameters, which can help improve the optimization that underlies the Laplace approximation, using `laplace_marginal_tol` with the following signature: - -\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} +\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +`real` **`laplace_marginal_tol`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ and allows the user to tune the control parameters of the approximation. @@ -182,20 +244,18 @@ approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`. The signature for `laplace_latent_rng` follows closely the signature for `laplace_marginal`: - -\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...)): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...)): vector}|hyperpage} -`vector` **`laplace_latent_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...))`
\newline +`vector` **`laplace_latent_rng`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...))`
\newline Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$. {{< since 2.37 >}} Once again, it is possible to specify control parameters: - -\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function ll\_function, tuple(...), vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} - -`vector` **`laplace_latent_tol_rng`**`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} +`vector` **`laplace_latent_tol_rng`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$ and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} @@ -228,36 +288,33 @@ group the $i^\text{th}$ observation belongs to. \index{{\tt \bfseries laplace\_marginal\_poisson\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_poisson_log`**`(y_index, theta0, K_function, (...))`
\newline +`y ~ ` **`laplace_marginal_poisson_log`**`(y_index, theta_init, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, theta0, K_function, (...))`. +Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, theta_init, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_poisson_log`**`(y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline +`y ~ ` **`laplace_marginal_tol_poisson_log`**`(y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline
-Increment target log probability density with `laplace_marginal_tol_poisson_log_lupmf(y | y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`.
-{{< since 2.37 >}}
+Increment target log probability density with `laplace_marginal_tol_poisson_log_lupmf(y | y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`.
+{{< since 2.37 >}}
 
 The signatures for the embedded Laplace approximation function with a Poisson
 likelihood are
-
-\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage}
-
-`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline + +\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage} +`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} - -`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} +`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson @@ -266,20 +323,19 @@ parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} - -`real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline
+
+\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+
+`real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson @@ -287,20 +343,18 @@ distribution with a log link, and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} - -`vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline - + +\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage} +`vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} -`vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood @@ -317,36 +371,32 @@ $$ \index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_poisson_2_log`**`(y_index, x, theta0, K_function, (...))`
\newline +`y ~ ` **`laplace_marginal_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_poisson_2_log_lupmf(y | y_index, x, theta0, K_function, (...))`. +Increment target log probability density with `laplace_marginal_poisson_2_log_lupmf(y | y_index, x, theta_init, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_poisson_2_log`**`(y_index, x, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline - -Increment target log probability density with `laplace_marginal_tol_poisson_2_log_lupmf(y | y_index, x, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +`y ~ ` **`laplace_marginal_tol_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline +Increment target log probability density with `laplace_marginal_tol_poisson_2_log_lupmf(y | y_index, x, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. {{< since 2.37 >}} - The signatures for this function are: - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} - -`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline
-
+
+\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
+`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline

 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Poisson
 distribution with a log link and an offset.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
-`real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson @@ -354,20 +404,19 @@ distribution with a log link and an offset and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} - -`real` **`laplace_marginal_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline
-
+
+\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
+`real` **`laplace_marginal_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline

 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Poisson
 distribution with a log link and an offset.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
-`real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
+`real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson @@ -375,20 +424,20 @@ distribution with a log link and an offset and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage} -`vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...))`
\newline +`vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} -`vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood @@ -419,37 +468,37 @@ group the $i^\text{th}$ observation belongs to. \index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_neg_binomial_2_log`**`(y_index, eta, theta0, K_function, (...))`
\newline +`y ~ ` **`laplace_marginal_neg_binomial_2_log`**`(y_index, eta, theta_init, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_neg_binomial_2_log_lupmf(y | y_index, eta, theta0, K_function, (...))`. +Increment target log probability density with `laplace_marginal_neg_binomial_2_log_lupmf(y | y_index, eta, theta_init, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_neg_binomial_2_log`**`(y_index, eta, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline +`y ~ ` **`laplace_marginal_tol_neg_binomial_2_log`**`(y_index, eta, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline -Increment target log probability density with `laplace_marginal_tol_neg_binomial_2_log_lupmf(y | y_index, eta, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +Increment target log probability density with `laplace_marginal_tol_neg_binomial_2_log_lupmf(y | y_index, eta, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. {{< since 2.37 >}} The function signatures for the embedded Laplace approximation with a negative Binomial likelihood are - -\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline +`real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative @@ -457,20 +506,20 @@ Binomial distribution with a log link, and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline +`real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative @@ -478,20 +527,20 @@ Binomial distribution with a log link, and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage} -`vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...))`
\newline +`vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
\newline

 Returns a draw from the Laplace approximation to the conditional
 posterior $p(\theta \mid y, \phi, \eta)$ in the special case where the likelihood
 $p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, real eta, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
-`vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi, \eta)$ in the special case where the likelihood @@ -514,36 +563,36 @@ group the $i^\text{th}$ observation belongs to. \index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_bernoulli_logit`**`(y_index, theta0, K_function, (...))`
\newline +`y ~ ` **`laplace_marginal_bernoulli_logit`**`(y_index, theta_init, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_bernoulli_logit_lupmf(y | y_index, theta0, K_function, (...))`. +Increment target log probability density with `laplace_marginal_bernoulli_logit_lupmf(y | y_index, theta_init, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_bernoulli_logit`**`(y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline +`y ~ ` **`laplace_marginal_tol_bernoulli_logit`**`(y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline -Increment target log probability density with `laplace_marginal_tol_bernoulli_logit_lupmf(y | y_index, theta0, K_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +Increment target log probability density with `laplace_marginal_tol_bernoulli_logit_lupmf(y | y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. {{< since 2.37 >}} The function signatures for the embedded Laplace approximation with a Bernoulli likelihood are - -\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline +`real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline

 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
 distribution with a logit link.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
-`real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
 distribution with a logit link and allows the user to tune the control parameters.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...)): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
-`real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline +`real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline

 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
 distribution with a logit link.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
-`real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
 distribution with a logit link and allows the user to tune the control parameters.
 {{< since 2.37 >}}

-
-\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...)): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
-`vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...))`
\newline +`vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli distribution with a logit link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta0, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} -`vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

 Returns a draw from the Laplace approximation to the conditional
 posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood
@@ -631,6 +680,6 @@ and lets the user tune the control parameters of the approximation.

-
-
+
+

From 3841dbeec7cfaf557aba022e8adc5e5aa3ba00df Mon Sep 17 00:00:00 2001
From: Brian Ward
Date: Thu, 5 Jun 2025 16:38:18 -0400
Subject: [PATCH 13/26] Update metadata for index

---
 src/functions-reference/embedded_laplace.qmd |  6 +-
 src/functions-reference/functions_index.qmd  | 59 +++++++++-----------
 2 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd
index 3a4d0f5fc..2235d2906 100644
--- a/src/functions-reference/embedded_laplace.qmd
+++ b/src/functions-reference/embedded_laplace.qmd
@@ -70,7 +70,11 @@ In the `model` block, we increment `target` with `laplace_marginal`, a function
that approximates the log marginal likelihood $\log p(y \mid \phi)$.
The signature of the function is:

-`real` **`laplace_marginal`**`(function likelihood_function, tuple() likelihood_arguments, vector theta_init, function covariance_function, tuple() covariance_arguments)`
+\index{{\tt \bfseries laplace\_marginal }!{\tt (function likelihood\_function, tuple(...) likelihood\_arguments, vector theta\_init, function covariance\_function, tuple(...) covariance\_arguments): real}|hyperpage}
+
+
+`real` **`laplace_marginal`**`(function likelihood_function, tuple(...) likelihood_arguments, vector theta_init, function covariance_function, tuple(...) covariance_arguments)`

This function returns an approximation to the log marginal likelihood $p(y \mid \phi)$.
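
As an illustration of how the generic `laplace_marginal` and `laplace_latent_rng` calls fit together, the following sketch fits a latent Gaussian Poisson-count model. The data names, sizes, and the squared-exponential kernel are illustrative choices, not part of the function signatures; the jitter term on the diagonal is a common numerical-stability device, not a requirement.

```
functions {
  // log p(y | theta): observation i is attached to latent variable y_index[i]
  real poisson_ll(vector theta, data array[] int y, data array[] int y_index) {
    return poisson_log_lpmf(y | theta[y_index]);
  }
  // K(phi): squared-exponential covariance over inputs x, with a small jitter
  matrix se_cov(data array[] real x, real alpha, real rho) {
    return gp_exp_quad_cov(x, alpha, rho)
           + diag_matrix(rep_vector(1e-8, size(x)));
  }
}
data {
  int<lower=1> N;                          // number of observations
  int<lower=1> M;                          // number of latent Gaussian variables
  array[N] int<lower=0> y;                 // counts
  array[N] int<lower=1, upper=M> y_index;  // group membership
  array[M] real x;                         // input per latent variable
}
transformed data {
  vector[M] theta_init = rep_vector(0, M); // initial guess for the inner optimizer
}
parameters {
  real<lower=0> alpha;
  real<lower=0> rho;
}
model {
  alpha ~ normal(0, 2);
  rho ~ inv_gamma(5, 5);
  target += laplace_marginal(poisson_ll, (y, y_index), theta_init,
                             se_cov, (x, alpha, rho));
}
generated quantities {
  // draw from the approximate conditional posterior p(theta | y, phi)
  vector[M] theta = laplace_latent_rng(poisson_ll, (y, y_index), theta_init,
                                       se_cov, (x, alpha, rho));
}
```

For this particular likelihood the specialized wrappers described above (`laplace_marginal_poisson_log_lpmf` and `laplace_latent_poisson_log_rng`) could be used instead of passing `poisson_ll` explicitly.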
{{< since 2.37 >}} diff --git a/src/functions-reference/functions_index.qmd b/src/functions-reference/functions_index.qmd index badd1472c..e9d406644 100644 --- a/src/functions-reference/functions_index.qmd +++ b/src/functions-reference/functions_index.qmd @@ -1623,57 +1623,52 @@ pagetitle: Alphabetical Index **laplace_latent_bernoulli_logit_rng**: - -
[`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-76c52dc387f97008815bd1574950b0591eed6d56) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-1fc637cfa219f5661eaf691aed6979aa11715e42) (embedded_laplace.html)
**laplace_latent_neg_binomial_2_log_rng**: - -
[`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-817310ab6f07ee2a7060461fdbdb2c67bb32bf1b) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-cdf01a782863e45a083e48b4bd247071d9a4de50) (embedded_laplace.html)
**laplace_latent_poisson_2_log_rng**: - -
[`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-7ff5a2bd449f1359ec978aeae187ef00ef43501a) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-51913d568f11fc64a9f166ff680e92b7943b85bc) (embedded_laplace.html)
**laplace_latent_poisson_log_rng**: - -
[`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-eff5f6d441cfa6795e0f8e6b15b42d024765e323) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-de9bf8cc1a51693f3e9a1a49dca129ae45c868b9) (embedded_laplace.html)
**laplace_latent_rng**: - -
[`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-9fe30de84bc921ad3ca7f7d05ea5259375d71690) (embedded_laplace.html)
+ -
[`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-e736fd996b0dd5ec34cc3a31e68ae93361bcfe4c) (embedded_laplace.html)
**laplace_latent_tol_bernoulli_logit_rng**: - -
[`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-a0c71c99bb325c6d3c26c2f90a9284e65f79131d) (embedded_laplace.html)
+ -
+ - [`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-929b023be5b575bb12c7568d48c4292d68484e4b) (embedded_laplace.html)
 
 **laplace_latent_tol_neg_binomial_2_log_rng**:
 
- - [`(array[] int y, array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-162d0f8b3a810bac0218e059593f859801ebc3c4) (embedded_laplace.html)
+ - [`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-5549a3af52023a92a370b327107898233c546f09) (embedded_laplace.html)
 
 **laplace_latent_tol_poisson_2_log_rng**:
 
- - [`(array[] int y, array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-4e5732bcd95215252096b5de3d01d82ae4a442ff) (embedded_laplace.html)
+ - [`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-b65fcddeadb2a0edba8bda048961a188a3c02e67) (embedded_laplace.html)
 
 **laplace_latent_tol_poisson_log_rng**:
 
- - [`(array[] int y, array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-ffa419074e92267fba174d07d89982aab3791f7f) (embedded_laplace.html)
-
-**laplace_latent_tol_rng**:
-
- - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-49d4548b6a8842f8e60219a8fc7bbde410a6100a) (embedded_laplace.html)
+ - [`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-2660b79c1a17a7def5edb58129a847511818959d) (embedded_laplace.html)
 
 **laplace_marginal**:
 
- - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-10ad18098ba50058d64707ab6a84fb8673d24835) (embedded_laplace.html)
+ - [`(function likelihood_function, tuple(...) likelihood_arguments, vector theta_init, function covariance_function, tuple(...) covariance_arguments) : real`](embedded_laplace.qmd#index-entry-95e1c296ee63b312ffa7ef98140792e7f3fadeac) (embedded_laplace.html)
 **laplace_marginal_bernoulli_logit**:
@@ -1683,12 +1678,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_bernoulli_logit_lpmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-fcd6fa2ce9b1968e193edf876cbecdb7b7025ff8) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-f842c1af3fe63a0220e54721301ff2e9ffa6cd52) (embedded_laplace.html)
 
 **laplace_marginal_bernoulli_logit_lupmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-be6571c1711c9f1192e43144fbdfcff1dea6e0fa) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-31c6ccea0ce342412ce6c8b0d31e29eafaa91f5c) (embedded_laplace.html)
 **laplace_marginal_neg_binomial_2_log**:
@@ -1698,12 +1693,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_neg_binomial_2_log_lpmf**:
 
- - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-d9225f23cd6af7ffcc45a3ee0d70e3f77f303edd) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-221578be4523a20b22e6a523ba6457be9b21f792) (embedded_laplace.html)
 
 **laplace_marginal_neg_binomial_2_log_lupmf**:
 
- - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-48ce0c23acb809fc07429a8dd51b230865f415d0) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-720cc42c0aa5a63ef214bcb27374e2ceabd37455) (embedded_laplace.html)
 **laplace_marginal_poisson_2_log**:
@@ -1713,12 +1708,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_poisson_2_log_lpmf**:
 
- - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-aa567fcc59081a7e6dc4f102f3e417a47969f91b) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-6d26f0abe1ac0de87b82f12b3ed92195ef8cd7f8) (embedded_laplace.html)
 
 **laplace_marginal_poisson_2_log_lupmf**:
 
- - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-edb9486186d9d6a2c354cae57246f01bd004e172) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-01db5858ff57fce301d7f58e3dcb5cac55998091) (embedded_laplace.html)
 **laplace_marginal_poisson_log**:
@@ -1728,17 +1723,17 @@ pagetitle: Alphabetical Index
 **laplace_marginal_poisson_log_lpmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-69cc1eb968d5e1cf857158a6dbd71e46a2b98638) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-9b336691c420ff95c5a2c48f78e69c8e605224f6) (embedded_laplace.html)
 
 **laplace_marginal_poisson_log_lupmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-04fde08ca1cba55d670effdf01ffafdc6589e4c4) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-e7c7252606cc5d8b1f77617eef2af52d21642250) (embedded_laplace.html)
 **laplace_marginal_tol**:
 
- - [`(function ll_function, tuple(...), vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-00f3333e0fed7aa31c301aaa64db6e17d579e925) (embedded_laplace.html)
+ - [`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-88d77a8692d68016d68f400d2a2541259bcf24a2) (embedded_laplace.html)
 **laplace_marginal_tol_bernoulli_logit**:
@@ -1748,12 +1743,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_tol_bernoulli_logit_lpmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-11f26356248255b07c55547657eb245074cf580d) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-98481a032cb5ad2b100b7db23109d3f4f70e0af9) (embedded_laplace.html)
 
 **laplace_marginal_tol_bernoulli_logit_lupmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-d7467a382e63b9aff52ca3ee78c531e7015a49fe) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-4b2493e6cf1e09674de57a4f42b3334b42d02136) (embedded_laplace.html)
 **laplace_marginal_tol_neg_binomial_2_log**:
@@ -1763,12 +1758,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_tol_neg_binomial_2_log_lpmf**:
 
- - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e302c57a432c1a726b77e82915ccb18f82748bb2) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-c31e9b3864a282b96f30819d5a39b4995a9eb911) (embedded_laplace.html)
 
 **laplace_marginal_tol_neg_binomial_2_log_lupmf**:
 
- - [`(array[] int y | array[] int y_index, real eta, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-cd55590ff7cb4ffd6e8b2379c95a2de89043e288) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-17764e1afa12272c00795a785590169d0d70fda4) (embedded_laplace.html)
 **laplace_marginal_tol_poisson_2_log**:
@@ -1778,12 +1773,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_tol_poisson_2_log_lpmf**:
 
- - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e076e264095a7b2e74149913407d3b2ab2d6e069) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-603a63e43b3518251e5f402a738efe4906015c36) (embedded_laplace.html)
 
 **laplace_marginal_tol_poisson_2_log_lupmf**:
 
- - [`(array[] int y | array[] int y_index, vector x, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-644e80df6ed47fcffc1ecc8333dc049dae69890b) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e3eca950639aed9276dd8c0e880d3982d7a6c642) (embedded_laplace.html)
 **laplace_marginal_tol_poisson_log**:
@@ -1793,12 +1788,12 @@ pagetitle: Alphabetical Index
 **laplace_marginal_tol_poisson_log_lpmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b89307e74f003363276021f42b183edb574bb134) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b9f92ff7606b590d41fd9c33e480439337aae673) (embedded_laplace.html)
 
 **laplace_marginal_tol_poisson_log_lupmf**:
 
- - [`(array[] int y | array[] int y_index, vector theta0, function K_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-6301b27bd4c2e216d4fda329e937d6fb15987674) (embedded_laplace.html)
+ - [`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-42b965299d48dc219eaed0a64fec07d8d070d47b) (embedded_laplace.html)
 **lbeta**:

From 40e97524994f6bdc14cc8c7e8da7983bdcacb271 Mon Sep 17 00:00:00 2001
From: Brian Ward
Date: Fri, 6 Jun 2025 09:34:06 -0400
Subject: [PATCH 14/26] propagate renames into latex index

---
 src/functions-reference/embedded_laplace.qmd | 60 ++++++++++----------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd
index 2235d2906..9b2eabb07 100644
--- a/src/functions-reference/embedded_laplace.qmd
+++ b/src/functions-reference/embedded_laplace.qmd
@@ -31,7 +31,7 @@ a two-step procedure:
 In the above procedure, neither the marginal posterior nor the conditional
 posterior are typically available in closed form and so they must be approximated.
 The marginal posterior can be written as $p(\phi \mid y) \propto p(y \mid \phi) p(\phi)$,
-where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) \text{d}\theta$ $
+where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) \text{d}\theta$
 is called the marginal likelihood. The Laplace method approximates
 $p(y \mid \phi, \theta) p(\theta)$ with a normal distribution centered at
 $$
@@ -70,7 +70,7 @@ In the `model` block, we increment `target` with `laplace_marginal`, a function
 that approximates the log marginal likelihood $\log p(y \mid \phi)$.
 The signature of the function is:
 
-\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...) likelihood\_arguments, vector theta_init, function K\_function, tuple(...) covariance\_arguments): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...) likelihood\_arguments, vector theta\_init, function covariance\_function, tuple(...) covariance\_arguments): real}|hyperpage}
@@ -196,10 +196,10 @@ It also possible to specify control parameters, which can help improve the
 optimization that underlies the Laplace approximation, using
 `laplace_marginal_tol` with the following signature:
 
-\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
-\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
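For readers following the renames, a minimal sketch of how the renamed `laplace_marginal_tol` signature is invoked from a Stan program may help. The Poisson likelihood, the squared exponential kernel, the data names (`y`, `y_index`, `x`), and the control values below are all illustrative assumptions, not part of this patch:

```stan
functions {
  // Hypothetical log likelihood: Poisson counts with a log link.
  real likelihood_function(vector theta, data array[] int y,
                           data array[] int y_index) {
    return poisson_log_lpmf(y | theta[y_index]);
  }
  // Hypothetical prior covariance: squared exponential kernel plus jitter.
  matrix covariance_function(data array[] real x, real alpha, real rho) {
    return add_diag(gp_exp_quad_cov(x, alpha, rho), 1e-8);
  }
}
data {
  int<lower=1> n;                          // number of latent Gaussian variables
  int<lower=1> N;                          // number of observations
  array[N] int<lower=0> y;                 // counts
  array[N] int<lower=1, upper=n> y_index;  // latent variable for each observation
  array[n] real x;                         // inputs to the covariance function
}
transformed data {
  vector[n] theta_init = rep_vector(0, n);  // initial guess for the Newton solver
}
parameters {
  real<lower=0> alpha;  // marginal standard deviation
  real<lower=0> rho;    // length scale
}
model {
  alpha ~ lognormal(0, 1);
  rho ~ lognormal(0, 1);
  // Control arguments: tol, max_steps, hessian_block_size, solver,
  // max_steps_linesearch (these values are illustrative, not defaults).
  target += laplace_marginal_tol(likelihood_function, (y, y_index), theta_init,
                                 covariance_function, (x, alpha, rho),
                                 1e-6, 100, 1, 1, 0);
}
```

The hyperparameters $\phi$ here are `alpha` and `rho`: they are the non-data arguments threaded through the covariance tuple.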
 \newline
@@ -249,7 +249,7 @@ The signature for `laplace_latent_rng` follows closely the signature
 for `laplace_marginal`:
 
-\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...)): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
 
 `vector` **`laplace_latent_rng`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -257,7 +257,7 @@ Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)
 {{< since 2.37 >}}
 
 Once again, it is possible to specify control parameters:
 
-\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function ll\_function, tuple(...), vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
 
 `vector` **`laplace_latent_tol_rng`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
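As a companion to the `laplace_latent_rng` signature, here is a hedged sketch of drawing the latent vector in `generated quantities`. It assumes `likelihood_function`, `covariance_function`, the data (`y`, `y_index`, `x`), `theta_init`, and the hyperparameters (`alpha`, `rho`) are declared as in a typical latent Gaussian program:

```stan
generated quantities {
  // One draw per iteration from the Laplace approximation to
  // p(theta | y, phi), reusing the arguments passed to laplace_marginal
  // in the model block.
  vector[n] theta = laplace_latent_rng(likelihood_function, (y, y_index),
                                       theta_init, covariance_function,
                                       (x, alpha, rho));
}
```

Because the draw happens in `generated quantities`, it adds no cost to the gradient evaluations that drive sampling over $\phi$.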
 \newline
 Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$
@@ -308,7 +308,7 @@ The signatures for the embedded Laplace approximation function with a
 Poisson likelihood are
 
-\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
 \newline
 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
@@ -317,7 +317,7 @@ distribution with a log link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
@@ -328,7 +328,7 @@ parameters of the approximation.
 
-\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
@@ -337,7 +337,7 @@ distribution with a log link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -348,7 +348,7 @@ parameters of the approximation.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
 
 `vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
 \newline
 Returns a draw from the Laplace approximation to the conditional
 posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood
@@ -356,7 +356,7 @@ $p(y \mid \theta)$ is a Poisson distribution with a log link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
 
 `vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
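The specialized Poisson signatures drop the user-supplied likelihood in favor of a built-in one. A hedged sketch of the pair in use (the data names, `theta_init`, and covariance arguments are assumptions carried over from a typical latent Gaussian program):

```stan
model {
  // Marginal likelihood under the built-in Poisson-log likelihood;
  // only the covariance function and its arguments are user-supplied.
  target += laplace_marginal_poisson_log_lpmf(y | y_index, theta_init,
                                              covariance_function,
                                              (x, alpha, rho));
}
generated quantities {
  // Matching draw of the latent Gaussian vector.
  vector[n] theta = laplace_latent_poisson_log_rng(y, y_index, theta_init,
                                                   covariance_function,
                                                   (x, alpha, rho));
}
```

The `y_index` array plays the same role as in the general interface: it maps each observed count to its latent Gaussian variable.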
 \newline
@@ -390,7 +390,7 @@ Increment target log probability density with `laplace_marginal_tol_poisson_2_lo
 The signatures for this function are:
 
-\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
 \newline
 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
 in the special case where the likelihood $p(y \mid \theta)$ is a Poisson
@@ -398,7 +398,7 @@ distribution with a log link and an offset.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -409,7 +409,7 @@ and allows the user to tune the control parameters of the approximation.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
 Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
@@ -418,7 +418,7 @@ distribution with a log link and an offset.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -429,7 +429,7 @@ and allows the user to tune the control parameters of the approximation.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
 
 `vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
 \newline
 Returns a draw from the Laplace approximation to the conditional
 posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood
@@ -439,7 +439,7 @@ $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
 
 `vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -490,7 +490,7 @@ The function signatures for the embedded Laplace approximation with a
 negative Binomial likelihood are
 
-\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -500,7 +500,7 @@ Binomial distribution with a log link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -511,7 +511,7 @@ parameters of the approximation.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -521,7 +521,7 @@ Binomial distribution with a log link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -532,7 +532,7 @@ parameters of the approximation.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
 
 `vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -542,7 +542,7 @@ $p(y \mid \theta, \eta)$ is a Negative binomial distribution with a log link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
 
 `vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
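The negative binomial signatures add one likelihood parameter, the dispersion `eta`, ahead of `theta_init`. A hedged sketch of how that extra argument threads through, assuming the covariance function, data, and remaining declarations of a typical latent Gaussian program:

```stan
parameters {
  real<lower=0> eta;  // dispersion of the negative binomial likelihood
}
model {
  // eta is a likelihood hyperparameter, so it is passed positionally
  // before theta_init rather than through the covariance tuple.
  target += laplace_marginal_neg_binomial_2_log_lpmf(y | y_index, eta,
                                                     theta_init,
                                                     covariance_function,
                                                     (x, alpha, rho));
}
```

Because `eta` is a parameter, it is sampled jointly with the covariance hyperparameters as part of $\phi$.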
 \newline
@@ -584,7 +584,7 @@ Increment target log probability density with `laplace_marginal_tol_bernoulli_lo
 The function signatures for the embedded Laplace approximation with a
 Bernoulli likelihood are
 
-\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -594,7 +594,7 @@ distribution with a logit link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -605,7 +605,7 @@ distribution with a logit link and allows the user to tune the control parameter
 
-\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
 
 `real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
 \newline
@@ -615,7 +615,7 @@ distribution with a logit link.
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
 `real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
 \newline
@@ -625,7 +625,7 @@ distribution with a logit link and allows the user to tune the control parameter
 {{< since 2.37 >}}
 
-\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...)): vector}|hyperpage}
+\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
 
 `vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline @@ -635,7 +635,7 @@ $p(y \mid \theta)$ is a Bernoulli distribution with a logit link. {{< since 2.37 >}} -\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta_init, function K\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} +\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} `vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline From 585823e9fba4a881e49fafe81ad0c2fd7f5e0b8e Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Fri, 6 Jun 2025 13:44:54 -0400 Subject: [PATCH 15/26] add details on how laplace approximation provides approximation for marginal likelihood. --- src/functions-reference/embedded_laplace.qmd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 9b2eabb07..701f398bd 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -31,7 +31,7 @@ a two-step procedure: In the above procedure, neither the marginal posterior nor the conditional posterior are typically available in closed form and so they must be approximated. The marginal posterior can be written as $p(\phi \mid y) \propto p(y \mid \phi) p(\phi)$, -where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) \text{d}\theta$ +where $p(y \mid \phi) = \int p(y \mid \phi, \theta) p(\theta) \text{d}\theta$ is called the marginal likelihood. The Laplace method approximates $p(y \mid \phi, \theta) p(\theta)$ with a normal distribution centered at $$ @@ -41,6 +41,10 @@ and $\theta^*$ is obtained using a numerical optimizer. The resulting Gaussian integral can be evaluated analytically to obtain an approximation to the log marginal likelihood $\log \hat p(y \mid \phi) \approx \log p(y \mid \phi)$. +Specifically: +$$ + \hat p(y \mid \phi) = \frac{p(\theta^* \mid \phi) p(y \mid \theta^*, \phi)}{\hat p (\theta^* \mid \phi, y)}. +$$ Combining this marginal likelihood with the prior in the `model` block, we can then sample from the marginal posterior $p(\phi \mid y)$ From 354562566712289c73c134305fe61cdbd19b580a Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Tue, 10 Jun 2025 12:20:14 -0400 Subject: [PATCH 16/26] fix typos spotted by Aki. 
--- src/functions-reference/embedded_laplace.qmd | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 701f398bd..3b625ca89 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -85,8 +85,8 @@ Which returns an approximation to the log marginal likelihood $p(y \mid \phi)$. This function takes in the following arguments. -1. `likelihood_function` - user-specified likelihood whose first argument is the vector of latent Gaussian variables `theta` -2. `likelihood_arguments` - A tuple of the likelihood arguments whose internal members will be passed to the covariance function +1. `likelihood_function` - user-specified log likelihood whose first argument is the vector of latent Gaussian variables `theta` +2. `likelihood_arguments` - A tuple of the arguments whose internal members will be passed to the log likelihood function 3. `theta_init` - an initial guess for the optimization problem that underlies the Laplace approximation 4. `covariance_function` - Prior covariance function 5. `covariance_arguments` - A tuple of the arguments whose internal members will be passed to the covariance function @@ -96,7 +96,7 @@ passed to `likelihood_function`. Below we go over each argument in more detail. -## Specifying the likelihood function {#laplace-likelihood_spec} +## Specifying the log likelihood function {#laplace-likelihood_spec} The first step to use the embedded Laplace approximation is to write down a function in the `functions` block which returns the log joint likelihood @@ -185,13 +185,13 @@ The tuple after `covariance_function` contains the arguments that get passed to `covariance_function`.
For instance, if a user-defined covariance function takes a real and a matrix as arguments ```stan -real cov_fun(vector theta, real b, matrix Z) +matrix cov_fun(real b, matrix Z) ``` the call to the Laplace marginal would include the covariance function and a tuple holding the covariance function arguments. ```stan -real val = laplace_marginal(likelihood_fun, (a, X), cov_fun, (b, Z), ...); +real val = laplace_marginal(likelihood_fun, (a, X), theta_init, cov_fun, (b, Z), ...); ``` ## Control parameters From 99661efba0cf2318e1b9054b000fbf04bd452c30 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Tue, 10 Jun 2025 17:55:59 -0400 Subject: [PATCH 17/26] correct signatures in function references and add section on embedded Laplace in users guide on GPs. --- src/bibtex/all.bib | 48 +++++- src/functions-reference/embedded_laplace.qmd | 27 ++- src/stan-users-guide/gaussian-processes.qmd | 163 ++++++++++++++++++- 3 files changed, 218 insertions(+), 20 deletions(-) diff --git a/src/bibtex/all.bib b/src/bibtex/all.bib index a17de483a..ad8b6d3ea 100644 --- a/src/bibtex/all.bib +++ b/src/bibtex/all.bib @@ -1894,4 +1894,50 @@ @misc{seyboldt:2024 note="pyro-ppl GitHub repository issue \#1751", year = "2024", url ="https://github.com/pyro-ppl/numpyro/pull/1751#issuecomment-1980569811" -} \ No newline at end of file +} + +@article{Margossian:2020, + Author = {Margossian, C. C. and Vehtari, A. and Simpson, D.
+ and Agrawal, R.}, + Title = {Hamiltonian Monte Carlo using an adjoint-differentiated Laplace approximation: Bayesian inference for latent Gaussian models and beyond}, + journal = {Advances in Neural Information Processing Systems}, + volume = {34}, + Year = {2020}} + +@article{Kuss:2005, + author = {Kuss, Malte and Rasmussen, Carl E}, + title = {Assessing Approximate Inference for Binary {Gaussian} Process Classification}, + journal = {Journal of Machine Learning Research}, + volume = {6}, + pages = {1679 -- 1704}, + year = {2005}} + +@article{Vanhatalo:2010, + author = {Jarno Vanhatalo and Ville Pietil\"{a}inen and Aki Vehtari}, + title = {Approximate inference for disease mapping with sparse {Gaussian} processes}, + journal = {Statistics in Medicine}, + year = {2010}, + volume = {29}, + number = {15}, + pages = {1580--1607} +} + +@article{Cseke:2011, + author = {Botond Cseke and Heskes, Tom}, + title = {Approximate marginals in latent {Gaussian} models}, + journal = {Journal of Machine Learning Research}, + volume = {12}, + issue = {2}, + page = {417 -- 454}, + year = {2011}} + +@article{Vehtari:2016, + author = {Aki Vehtari and Tommi Mononen and Ville Tolvanen and Tuomas Sivula and Ole Winther}, + title = {Bayesian Leave-One-Out Cross-Validation Approximations for {Gaussian} Latent Variable Models}, + journal = {Journal of Machine Learning Research}, + year = {2016}, + volume = {17}, + number = {103}, + pages = {1--38}, + url = {http://jmlr.org/papers/v17/14-540.html} +} diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 3b625ca89..ea09b94ff 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -370,13 +370,13 @@ $p(y \mid \theta)$ is a Poisson distribution with a log link and allows the user to tune the control parameters of the approximation. 
{{< since 2.37 >}} -A similar built-in likelihood lets users specify an offset $x_i \in \mathbb R^+$ -to the rate parameter of the Poisson. The likelihood is then, +A similar built-in likelihood lets users specify a vector offset +$x \in \mathbb R^N$ with $x_i \ge 0$ to the rate parameter of the Poisson. +The likelihood is then, $$ p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)}) x_i). $$ - \index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log }!sampling statement|hyperpage} `y ~ ` **`laplace_marginal_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...))`
\newline @@ -384,7 +384,6 @@ $$ Increment target log probability density with `laplace_marginal_poisson_2_log_lupmf(y | y_index, x, theta_init, covariance_function, (...))`. {{< since 2.37 >}} - \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log }!sampling statement|hyperpage} `y ~ ` **`laplace_marginal_tol_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline @@ -393,16 +392,14 @@ Increment target log probability density with `laplace_marginal_tol_poisson_2_lo The signatures for this function are: - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} +`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} `real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline @@ -412,8 +409,7 @@ distribution with a log link and an offset and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} `real` **`laplace_marginal_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ @@ -421,8 +417,7 @@ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} `real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline @@ -432,8 +427,7 @@ distribution with a log link and an offset and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage} +\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage} `vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline @@ -442,8 +436,7 @@ $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} +\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} `vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline diff --git a/src/stan-users-guide/gaussian-processes.qmd b/src/stan-users-guide/gaussian-processes.qmd index e7acf9a84..aefabc41c 100644 --- a/src/stan-users-guide/gaussian-processes.qmd +++ b/src/stan-users-guide/gaussian-processes.qmd @@ -486,8 +486,130 @@ model { } ``` +#### Poisson GP using an embedded Laplace approximation {-} + +For computational reasons, we may want to integrate out the Gaussian process +$f$, as was done in the normal output model. Unfortunately, exact +marginalization over $f$ is not possible when the outcome model is not normal. +Instead, we may perform *approximate* marginalization with an *embedded +Laplace approximation* [@Rue:2009; @Margossian:2020]. +To do so, we first use the function `laplace_marginal` to approximate the marginal +likelihood $p(y \mid \rho, \alpha, a)$ and sample the +hyperparameters with Hamiltonian Monte Carlo sampling. Then, we recover the +integrated-out $f$ in the `generated quantities` block using +`laplace_latent_rng`. + +The embedded Laplace approximation computes a Gaussian approximation of the +conditional posterior, +$$ + \hat p_\mathcal{L}(f \mid \rho, \alpha, a, y) \approx p(f \mid \rho, \alpha, a, y), +$$ +where $\hat p_\mathcal{L}$ is a Gaussian that matches the mode and curvature +of $p(f \mid \rho, \alpha, a, y)$. We then obtain an approximation of +the marginal likelihood as follows: +$$ + \hat p_\mathcal{L}(y \mid \rho, \alpha, a) + = \frac{p(f^* \mid \alpha, \rho) p(y \mid f^*, a)}{ + \hat p_\mathcal{L}(f^* \mid \rho, \alpha, a, y)}, +$$ +where $f^*$ is the mode of $p(f \mid \rho, \alpha, a, y)$, obtained via +numerical optimization. + +To use Stan's embedded Laplace approximation, we must define the prior covariance +function and the log likelihood function in the `functions` block.
+```{stan} +functions { + // log likelihood function + real ll_function(vector f, real a, array[] int y) { + return poisson_log_lpmf(y | a + f); + } + + // covariance function + matrix cov_function(real rho, real alpha, array[] real x, int N, real delta) { + matrix[N, N] K = gp_exp_quad_cov(x, alpha, rho); + return add_diag(K, delta); + } + +} +``` + +Furthermore, we must specify an initial value $f_\text{init}$ for the +numerical optimizer that underlies the Laplace approximation. In our experience, +we have found setting all values to 0 to be a good default. + +```{stan} +transformed data { + vector[N] f_init = rep_vector(0, N); +} +``` + +We then increment `target` in the `model` block with the approximation to +$\log p(y \mid \rho, \alpha, a)$. +```{stan} +model { + rho ~ inv_gamma(5, 5); + alpha ~ std_normal(); + a ~ std_normal(); + + target += laplace_marginal(ll_function, (a, y), f_init, + cov_function, (rho, alpha, x, N, delta)); +} +``` +Notice that we do not need to construct $f$ explicitly, since it is +marginalized out. Instead, we recover the GP function in `generated quantities`: +```{stan} +generated quantities { + vector[N] f = laplace_latent_rng(ll_function, (a, y), f_init, + cov_function, (rho, alpha, x, N, delta)); +} +``` + +Stan also provides support for a limited menu of built-in functions, including +the Poisson distribution with a log link and an offset $a$. When using such +a built-in function, the user does not need to specify a likelihood in the +`functions` block. However, the user must strictly follow the signature of the +likelihood: in this case, $a$ must be a vector of length $N$ (to allow for +different offsets for each observation $y_i$) and we must indicate which +element of $f$ each component of $y$ matches using the variable $y_\text{index}$. +In our example, there is a simple pairing $(y_i, f_i)$; however, we could imagine +a scenario where multiple observations $(y_{j1}, y_{j2}, ...)$ are observed +for a single $f_j$.
+ +```{stan} +transformed data { + // ... + array[n_obs] int y_index; + for (i in 1:n_obs) y_index[i] = i; +} -#### Logistic Gaussian process regression {-} +// ... + +transformed parameters { + vector[N] a_vec = rep_vector(a, N); +} + +model { + // ... + target += laplace_marginal_poisson_2_log_lpmf(y | y_index, a_vec, f_init, + cov_function, (rho, alpha, x, N, delta)); +} + +generated quantities { + vector[N] f = laplace_latent_poisson_2_log_rng(y, y_index, a_vec, f_init, + cov_function, (rho, alpha, x, N, delta)); +} + +``` + +Marginalization with a Laplace approximation can lead to faster inference; +however, it also introduces an approximation error. In practice, this error +is negligible when using a Poisson likelihood and the approximation works well +for log concave likelihoods [@Kuss:2005; @Vanhatalo:2010; @Cseke:2011; +@Vehtari:2016]. +Still, users should exercise caution, especially +when trying unconventional likelihoods. + +#### Logistic GP regression {-} For binary classification problems, the observed outputs $z_n \in \{ 0,1 \}$ are binary. These outputs are modeled using a Gaussian @@ -514,10 +636,47 @@ data { // ... model { // ... - y ~ bernoulli_logit(a + f); + z ~ bernoulli_logit(a + f); } ``` +#### Logistic GP regression with an embedded Laplace approximation {-} + +As with the Poisson GP, we cannot marginalize the GP function exactly; +however, we can resort to an embedded Laplace approximation. + +```{stan} +functions { + // log likelihood function + real ll_function(vector f, real a, array[] int z) { + return bernoulli_logit_lpmf(z | a + f); + } + + // covariance function + matrix cov_function(real rho, real alpha, array[] real x, int N, real delta) { + matrix[N, N] K = gp_exp_quad_cov(x, alpha, rho); + return add_diag(K, delta); + } +} + +// ...
+ +model { + target += laplace_marginal(ll_function, (a, z), f_init, + cov_function, (rho, alpha, x, N, delta)); +} + +generated quantities { + vector[N] f = laplace_latent_rng(ll_function, (a, z), f_init, + cov_function, (rho, alpha, x, N, delta)); +} +``` + +While marginalization with a Laplace approximation can lead to faster inference, +it also introduces an approximation error. In practice, this error may not be +negligible with a Bernoulli likelihood; for more discussion see, e.g., +[@Vehtari:2016; @Margossian:2020]. + ### Automatic relevance determination {-} From 229e3e4add0be5571c3b8a66cdefa78d33168370 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Wed, 11 Jun 2025 11:41:36 -0400 Subject: [PATCH 18/26] add examples with control parameters in GP sections. --- src/stan-users-guide/gaussian-processes.qmd | 45 ++++++++++++++++++++- 1 file changed, 44 insertions(+), 1 deletion(-) diff --git a/src/stan-users-guide/gaussian-processes.qmd b/src/stan-users-guide/gaussian-processes.qmd index aefabc41c..a9b8933ac 100644 --- a/src/stan-users-guide/gaussian-processes.qmd +++ b/src/stan-users-guide/gaussian-processes.qmd @@ -564,8 +564,47 @@ generated quantities { } ``` +Users can set the control parameters of the embedded Laplace approximation +via `laplace_marginal_tol` and `laplace_latent_tol_rng`. When using these +functions, the user must set *all* the control parameters. +```{stan} +transformed data { +// ... + + real tol = 1e-6; // optimizer's tolerance for Laplace approx. + int max_num_steps = 1000; // maximum number of steps for optimizer. + int hessian_block_size = 1; // when hessian of log likelihood is block + // diagonal, size of block (here 1). + int solver = 1; // which Newton optimizer to use; default is 1, + // use 2 and 3 only for special cases. + int max_steps_linesearch = 0; // if >= 1, optimizer does a linesearch with + // specified number of steps. +} + +// ... + +model { +// ...
+ + target += laplace_marginal(ll_function, (a, y), f_init, + cov_function, (rho, alpha, x, N, delta), + tol, max_num_steps, hessian_block_size, + solver, max_steps_linesearch); +} + +generated quantities { + vector[N] f = laplace_latent_rng(ll_function, (a, y), f_init, + cov_function, (rho, alpha, x, N, delta), + tol, max_num_steps, hessian_block_size, + solver, max_steps_linesearch); +} + +``` +For details about the control parameters, see [@Margossian:2023]. + + Stan also provides support for a limited menu of built-in functions, including -the Poisson distribution with a log link and an offset $a$. When using such +the Poisson distribution with a log link and a prior mean $a$. When using such a built-in function, the user does not need to specify a likelihood in the `functions` block. However, the user must strictly follow the signature of the likelihood: in this case, $a$ must be a vector of length $N$ (to allow for @@ -601,6 +640,10 @@ generated quantities { ``` +As before, we could specify the control parameters for the embedded Laplace +approximation using `laplace_marginal_tol_poisson_2_log_lpmf` and +`laplace_latent_tol_poisson_2_log_rng`. + Marginalization with a Laplace approximation can lead to faster inference, however it also introduces an approximation error. In practice, this error is negligible when using a Poisson likelihood and the approximation works well for log concave likelihoods [@Kuss:2005, @Vanhatalo:2010, @Cseke:2011, @Vehtari:2016]. From fa1735bd0388313fc26de7381df4ea44c226cd1e Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Wed, 11 Jun 2025 14:38:18 -0400 Subject: [PATCH 19/26] draft section on embedded Laplace approximation in reference manual.
--- src/bibtex/all.bib | 8 + src/reference-manual/_quarto.yml | 1 + src/reference-manual/laplace_embedded.qmd | 176 ++++++++++++++++++++++ 3 files changed, 185 insertions(+) create mode 100644 src/reference-manual/laplace_embedded.qmd diff --git a/src/bibtex/all.bib b/src/bibtex/all.bib index ad8b6d3ea..5b4cdb71b 100644 --- a/src/bibtex/all.bib +++ b/src/bibtex/all.bib @@ -1941,3 +1941,11 @@ @article{Vehtari:2016 pages = {1--38}, url = {http://jmlr.org/papers/v17/14-540.html} } + + +@article{Margossian:2023, + author = {Margossian, Charles C.}, + title = {General adjoint-differentiated Laplace approximation}, + journal = {arXiv:2306.14976}, + year = {2023}} + diff --git a/src/reference-manual/_quarto.yml b/src/reference-manual/_quarto.yml index 889680338..1ef1b59fc 100644 --- a/src/reference-manual/_quarto.yml +++ b/src/reference-manual/_quarto.yml @@ -59,6 +59,7 @@ book: - pathfinder.qmd - variational.qmd - laplace.qmd + - laplace_embedded.qmd - diagnostics.qmd - part: "Usage" diff --git a/src/reference-manual/laplace_embedded.qmd b/src/reference-manual/laplace_embedded.qmd new file mode 100644 index 000000000..ae53548eb --- /dev/null +++ b/src/reference-manual/laplace_embedded.qmd @@ -0,0 +1,176 @@ +--- +pagetitle: Embedded Laplace Approximation +--- + +# Embedded Laplace Approximation + +Stan provides functions to perform an embedded Laplace +approximation for latent Gaussian models. With a slight abuse of language, +this is sometimes known as an integrated or nested Laplace approximation. +Details of Stan's implementation can be found in +[@Margossian:2020; @Margossian:2023]. + +A standard approach to fit a latent Gaussian model would be to perform inference +jointly over the latent Gaussian variables and the hyperparameters.
+Instead, the embedded Laplace approximation can be used to perform *approximate* +marginalization of the latent Gaussian variables; we can then +use any inference method for the remaining hyperparameters, for example Hamiltonian +Monte Carlo sampling. + +Formally, consider a latent Gaussian model, +$$ +\begin{eqnarray*} + \phi & \sim & p(\phi) \\ + \theta & \sim & \text{Multi-Normal}(0, K(\phi)) \\ + y & \sim & p(y \mid \theta, \phi). +\end{eqnarray*} +$$ +The motivation for marginalization is to bypass the challenging geometry of the joint +posterior $p(\phi, \theta \mid y)$. This geometry (e.g. funnels) often frustrates +inference algorithms, including Hamiltonian Monte Carlo sampling and approximate +methods such as variational inference. On the other hand, the marginal posterior +$p(\phi \mid y)$ is often well-behaved and in many cases low-dimensional. +Furthermore, the conditional posterior $p(\theta \mid \phi, y)$ can be well +approximated by a normal distribution, if the likelihood $p(y \mid \theta, \phi)$ +is log concave. + + +## Approximation of the conditional posterior and marginal likelihood + +The Laplace approximation is the normal distribution that matches the mode +and curvature of the conditional posterior $p(\theta \mid y, \phi)$. +The mode, +$$ + \theta^* = \underset{\theta}{\text{argmax}} \ p(\theta \mid y, \phi), +$$ +is estimated by a Newton solver. Since the approximation is normal, +the curvature is matched by setting the covariance to the inverse of the +negative Hessian of the log conditional posterior, evaluated at the mode, +$$ + \Sigma^* = \left[ - \left . \frac{\partial^2}{\partial \theta^2} + \log p (\theta \mid \phi, y) \right |_{\theta =\theta^*} \right]^{-1}. +$$ +The resulting Laplace approximation is then, +$$ +\hat p_\mathcal{L} (\theta \mid y, \phi) = \text{Multi-Normal}(\theta^*, \Sigma^*) +\approx p(\theta \mid y, \phi).
+$$ +This approximation implies another approximation for the marginal likelihood, +$$ + \hat p_\mathcal{L}(y \mid \phi) := \frac{p(\theta^* \mid \phi) \ + p(y \mid \theta^*, \phi) }{ \hat p_\mathcal{L} (\theta^* \mid \phi, y) } + \approx p(y \mid \phi). +$$ +Hence, a strategy to approximate the posterior of the latent Gaussian model +is to first estimate the marginal posterior +$\hat p_\mathcal{L}(\phi \mid y) \propto p(\phi) \hat p_\mathcal{L} (y \mid \phi)$ +using any algorithm supported by Stan. +Approximate posterior draws for the latent Gaussian variables are then +obtained by first drawing $\phi \sim \hat p_\mathcal{L}(\phi \mid y)$ and +then $\theta \sim \hat p_\mathcal{L}(\theta \mid \phi, y)$. + + +## Trade-offs of the approximation + +The embedded Laplace approximation presents several trade-offs with standard +inference over the joint posterior $p(\theta, \phi \mid y)$. The main +advantage of the embedded Laplace approximation is that it side-steps the +intricate geometry of hierarchical models. The marginal posterior +$p(\phi \mid y)$ can then be handled by Hamiltonian Monte Carlo sampling +without extensive tuning or reparameterization, and the mixing time is faster, +meaning we can run shorter chains to achieve a desired precision. In some cases, +approximate methods, e.g. variational inference, that +work poorly on the joint $p(\theta, \phi \mid y)$ work well on the marginal +posterior $p(\phi \mid y)$. + +On the other hand, the embedded Laplace approximation presents certain +disadvantages. First, we need to perform a Laplace approximation each time +the log marginal likelihood is evaluated, meaning each iteration +can be expensive. Secondly, the approximation can introduce non-negligible +error, especially with non-conventional likelihoods (note the prior +is always multivariate normal). How these trade-offs are resolved depends on the application; see [@Margossian:2020] for some examples.
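The two-step strategy described above can be sketched as a minimal Stan program. This is only an illustration: the log likelihood `ll_fun`, covariance function `K_fun`, data, and priors are hypothetical placeholders, not part of the surrounding text.

```stan
functions {
  // hypothetical log joint likelihood; theta must come first
  real ll_fun(vector theta, real a, array[] int y) {
    return poisson_log_lpmf(y | a + theta);
  }
  // hypothetical prior covariance K(phi); here a diagonal matrix
  matrix K_fun(real sigma, int N) {
    return diag_matrix(rep_vector(square(sigma), N));
  }
}
data {
  int<lower=1> N;
  array[N] int<lower=0> y;
}
transformed data {
  // initial guess for the Newton solver
  vector[N] theta_init = rep_vector(0, N);
}
parameters {
  real a;               // hyperparameters phi = (a, sigma)
  real<lower=0> sigma;
}
model {
  a ~ std_normal();     // prior p(phi)
  sigma ~ std_normal();
  // approximate log marginal likelihood, log p-hat(y | phi)
  target += laplace_marginal(ll_fun, (a, y), theta_init,
                             K_fun, (sigma, N));
}
generated quantities {
  // draw theta from the Laplace approximation to p(theta | phi, y)
  vector[N] theta = laplace_latent_rng(ll_fun, (a, y), theta_init,
                                       K_fun, (sigma, N));
}
```

Sampling this program explores only $\phi = (a, \sigma)$; the latent vector $\theta$ is recovered draw by draw in `generated quantities`.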
+ + +## Details of the approximation + +### Tuning the Newton solver + +A critical component of the embedded Laplace approximation is the Newton solver +used to estimate the mode $\theta^*$ of $p(\theta \mid \phi, y)$. The objective +function being maximized is +$$ +\Psi(\theta) = \log p(\theta \mid \phi) + \log p(y \mid \theta, \phi), +$$ +and convergence is declared if the change in the objective is sufficiently +small between two iterations +$$ +| \Psi (\theta^{(i + 1)}) - \Psi (\theta^{(i)}) | \le \Delta, +$$ +for some *tolerance* $\Delta$. The solver also stops after reaching a +pre-specified *maximum number of steps*: in that case, Stan throws an exception +and rejects the current proposal. This is not a problem, as +long as these exceptions are rare and confined to early phases of the warmup. + +The Newton iteration can be augmented with a linesearch step to ensure that +at each iteration the objective function $\Psi$ increases. Specifically, +suppose that +$$ +\Psi (\theta^{(i + 1)}) < \Psi (\theta^{(i)}). +$$ +This can indicate that the Newton step is too large and has overshot the mode. +In that case, we can reduce the step +length by a factor of 2, using +$$ + \theta^{(i + 1)} \leftarrow \frac{\theta^{(i + 1)} + \theta^{(i)}}{2}. +$$ +We repeat this halving of steps until +$\Psi (\theta^{(i + 1)}) \ge \Psi (\theta^{(i)})$, or until a maximum number +of linesearch steps is reached. By default, this maximum is set to 0, which +means the Newton solver performs no linesearch. For certain problems, adding +a linesearch can make the optimization more stable. + + +The embedded Laplace approximation uses a custom Newton solver, specialized +to find the mode of $p(\theta \mid \phi, y)$. +A key step for efficient optimization is to ensure that all matrix inversions are +numerically stable. This can be done using the Sherman-Morrison-Woodbury +formula and requires one of three matrix decompositions: + +1.
Cholesky decomposition of the Hessian of the negative log likelihood
+$W = - \partial^2_\theta \log p(y \mid \theta, \phi)$
+
+2. Cholesky decomposition of the prior covariance matrix $K(\phi)$.
+
+3. LU-decomposition of $I + KW$, where $I$ is the identity matrix.
+
+The first solver (1) should be used if the Hessian $W$ of the negative log
+likelihood is positive-definite. Otherwise the user should rely on (2). In
+rarer cases where it is not numerically safe to invert the covariance matrix
+$K$, users can use the third solver as a last-resort option.
+
+
+### Sparse Hessian of the log likelihood
+
+A key step to speed up computation is to take advantage of the sparsity of
+the Hessian of the log likelihood,
+$$
+  H = \frac{\partial^2}{\partial \theta^2} \log p(y \mid \theta, \phi).
+$$
+For example, if the observations $(y_1, \cdots, y_N)$ are conditionally
+independent and each depends on only one component of $\theta$,
+such that
+$$
+  \log p(y \mid \theta, \phi) = \sum_{i = 1}^N \log p(y_i \mid \theta_i, \phi),
+$$
+then the Hessian is diagonal. This leads to faster calculations of the Hessian
+and subsequently sparse matrix operations. This case is common in Gaussian
+process models and certain hierarchical models.
+
+Stan's suite of functions for the embedded Laplace approximation is not
+equipped to handle arbitrary sparsity structures; instead, it works on
+block-diagonal Hessians, and the user can specify the size $B$ of these blocks.
+The user is responsible for working out what $B$ is. If the Hessian is dense,
+then we simply set $B = N$.
+
+NOTE: currently, there is no support for sparse prior covariance matrices.
+We expect this to be supported in future versions of Stan.

From 70d717fecbcc73221fc360f7f0565f438e8287c6 Mon Sep 17 00:00:00 2001
From: Charles Margossian
Date: Wed, 11 Jun 2025 15:47:25 -0400
Subject: [PATCH 20/26] add back html comments.
--- src/functions-reference/embedded_laplace.qmd | 22 +++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd
index ea09b94ff..619e71425 100644
--- a/src/functions-reference/embedded_laplace.qmd
+++ b/src/functions-reference/embedded_laplace.qmd
@@ -370,13 +370,14 @@ $p(y \mid \theta)$ is a Poisson
distribution with a log link
and allows the user to tune the control
parameters of the approximation.
{{< since 2.37 >}}

-A similar built-in likelihood lets users specify a vector offset
-$x \in \mathbb R^N$ with $x_i \ge 0$ to the rate parameter of the Poisson.
-The likelihood is then,
+A similar built-in likelihood lets users specify an offset $x \in \mathbb R^N$
+with $x_i \ge 0$ to the rate parameter of the Poisson. This is equivalent to
+specifying a prior mean $\log(x_i)$ for $\theta_i$. The likelihood is then,
$$
  p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)}) x_i).
$$
+
\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log }!sampling statement|hyperpage}
`y ~ ` **`laplace_marginal_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...))`
\newline @@ -384,6 +385,7 @@ $$ Increment target log probability density with `laplace_marginal_poisson_2_log_lupmf(y | y_index, x, theta_init, covariance_function, (...))`. {{< since 2.37 >}} + \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log }!sampling statement|hyperpage} `y ~ ` **`laplace_marginal_tol_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline @@ -392,13 +394,15 @@ Increment target log probability density with `laplace_marginal_tol_poisson_2_lo The signatures for this function are: -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline + +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} +`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} + \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} `real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline @@ -409,7 +413,8 @@ distribution with a log link and an offset and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} `real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ @@ -417,6 +422,7 @@ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} + \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} `real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline
@@ -427,7 +433,8 @@ distribution with a log link and an offset
and allows the user to tune the control
parameters of the approximation.
{{< since 2.37 >}}

-\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
`vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline @@ -436,6 +443,7 @@ $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. {{< since 2.37 >}} + \index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} `vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline From 5bfa0a3600282cc6bc6600cfd92773e66bc69c94 Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Wed, 11 Jun 2025 15:59:50 -0400 Subject: [PATCH 21/26] Fix metadata ordering, build --- src/functions-reference/embedded_laplace.qmd | 10 ++-- src/reference-manual/laplace_embedded.qmd | 36 ++++++------- src/stan-users-guide/gaussian-processes.qmd | 56 ++++++++++---------- 3 files changed, 51 insertions(+), 51 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index 619e71425..bf9e7f301 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -83,7 +83,7 @@ The signature of the function is: Which returns an approximation to the log marginal likelihood $p(y \mid \phi)$. {{< since 2.37 >}} -This function takes in the following argumeents. +This function takes in the following arguments. 1. `likelihood_function` - user-specified log likelihood whose first argument is the vector of latent Gaussian variables `theta` 2. `likelihood_arguments` - A tuple of the log likelihood arguments whose internal members will be passed to the covariance function @@ -395,7 +395,7 @@ Increment target log probability density with `laplace_marginal_tol_poisson_2_lo The signatures for this function are: -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} +\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} `real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline
Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
in the special case where the likelihood $p(y \mid \theta)$ is a Poisson
distribution with a log link
@@ -414,7 +414,7 @@ and allows the user to tune the control
parameters of the approximation.
{{< since 2.37 >}}

-\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline
Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
@@ -423,9 +423,9 @@ distribution with a log link and an offset.
{{< since 2.37 >}}

-\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}

-`real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline +`real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson diff --git a/src/reference-manual/laplace_embedded.qmd b/src/reference-manual/laplace_embedded.qmd index ae53548eb..82eb3c483 100644 --- a/src/reference-manual/laplace_embedded.qmd +++ b/src/reference-manual/laplace_embedded.qmd @@ -2,9 +2,9 @@ pagetitle: Embedded Laplace Approximation --- -# Embdeed Laplace Approximation +# Embedded Laplace Approximation -Stan provides functions to perform an embedded Laplace +Stan provides functions to perform an embedded Laplace approximation for latent Gaussian models. Bearing a slight abuse of language, this is sometimes known as an integrated or nested Laplace approximation. Details of Stan's implementation can be found in reference @@ -30,24 +30,24 @@ posterior $p(\phi, \theta \mid y)$. This geometry (e.g. funnels) often frustrate inference algorithms, including Hamiltonian Monte Carlo sampling and approximate methods such as variational inference. On the other hand, the marginal posterior $p(\phi \mid y)$ is often well-behaved and in many cases low-dimensional. -Furthermore, the conditional posterior $p(\theta \mid \phi, y)$ can be well -approximated by a normal distribution, if the likelihood $p(y \mid \theta, \phi)$ +Furthermore, the conditional posterior $p(\theta \mid \phi, y)$ can be well +approximated by a normal distribution, if the likelihood $p(y \mid \theta, \phi)$ is log concave. ## Approximation of the conditional posterior and marginal likelihood The Laplace approximation is the normal distribution that matches the mode -and curvatureof the conditional posterior $p(\theta \mid y, \phi)$. +and curvature of the conditional posterior $p(\theta \mid y, \phi)$. The mode, $$ \theta^* = \underset{\theta}{\text{argmax}} \ p(\theta \mid y, \phi), $$ -is estimated by a Newton solver. Since the approximation is normal, +is estimated by a Newton solver. 
Since the approximation is normal, the curvature is matched by setting the covariance to the negative Hessian of the log conditional posterior, evaluated at the mode, $$ - \Sigma^* = - \left . \frac{\partial^2}{\partial \theta^2} + \Sigma^* = - \left . \frac{\partial^2}{\partial \theta^2} \log p (\theta \mid \phi, y) \right |_{\theta =\theta^*}. $$ The resulting Laplace approximation is then, @@ -57,13 +57,13 @@ $$ $$ This approximation implies another approximation for the marginal likelihood, $$ - \hat p_\mathcal{L}(y \mid \phi) := \frac{p(\theta^* \mid \phi) \ + \hat p_\mathcal{L}(y \mid \phi) := \frac{p(\theta^* \mid \phi) \ p(y \mid \theta^*, \phi) }{ \hat p_\mathcal{L} (\theta^* \mid \phi, y) } \approx p(y \mid \phi). $$ Hence, a strategy to approximate the posterior of the latent Gaussian model -is to first estimate the marginal posterior -$\hat p_\mathcal{L}(\phi \mid y) \propto p(\phi) p_\mathcal{L} (y \mid \phi)$ +is to first estimate the marginal posterior +$\hat p_\mathcal{L}(\phi \mid y) \propto p(\phi) p_\mathcal{L} (y \mid \phi)$ using any algorithm supported by Stan. Approximate posterior draws for the latent Gaussian variables are then obtained by first drawing $\phi \sim \hat p_\mathcal{L}(\phi \mid y)$ and @@ -75,7 +75,7 @@ then $\theta \sim hat p_\mathcal{L}(\theta \mid \phi, y)$. The embedded Laplace approximation presents several trade-offs with standard inference over the joint posterior $p(\theta, \phi \mid y)$. The main advantage of the embedded Laplace approximation is that it side-steps the -intricate geometry of hierarchical models. The marginal posterior +intricate geometry of hierarchical models. The marginal posterior $p(\phi \mid y)$ can then be handled by Hamiltonian Monte Carlo sampling without extensive tuning or reparameterization, and the mixing time is faster, meaning we can run shorter chains to achieve a desired precision. In some cases, @@ -86,7 +86,7 @@ posterior $p(\phi \mid y)$. 
On the other hand, the embedded Laplace approximation presents certain
disadvantages. First, we need to perform a Laplace approximation each time
the log marginal likelihood is evaluated, meaning each iteration
-can be expensive. Secondly, the approximation can introduce non-negligable
+can be expensive. Secondly, the approximation can introduce non-negligible
error, especially with non-conventional likelihoods (note the prior
is always multivariate normal). How these trade-offs are resolved depends on
the application; see [@Margossian:2020] for some examples.
@@ -104,7 +104,7 @@ $$
and convergence is declared if the change in the objective is sufficiently
small between two iterations
$$
-| \Psi (\theta^{(i + 1)}) - \Psi (\theta^{(i)}) | \le \Delta,
+| \Psi (\theta^{(i + 1)}) - \Psi (\theta^{(i)}) | \le \Delta,
$$
for some *tolerance* $\Delta$. The solver also stops after reaching a
pre-specified *maximum number of steps*: in that case, Stan throws an exception
@@ -121,16 +121,16 @@ This can indicate that the Newton step is too large and that we skipped a region
where the objective function decreases. In that case, we can reduce the step
length by a factor of 2, using
$$
-  \theta^{(i + 1)} \leftarrow \frac{\theta^{(i + 1)} + \theta^{(i)}}{2}.
+  \theta^{(i + 1)} \leftarrow \frac{\theta^{(i + 1)} + \theta^{(i)}}{2}.
$$
-We repeat this halving of steps until
+We repeat this halving of steps until
$\Psi (\theta^{(i + 1)}) \ge \Psi (\theta^{(i)})$, or until a maximum number
-of linesearch steps is reached. By defaut, this maximum is set to 0, which
+of linesearch steps is reached. By default, this maximum is set to 0, which
means the Newton solver performs no linesearch. For certain problems, adding
-a linsearch can make the optmization more stable.
+a linesearch can make the optimization more stable.
-The embedded Laplace approximation uses a custom Newton solver,specialized
+The embedded Laplace approximation uses a custom Newton solver, specialized
to find the mode of $p(\theta \mid \phi, y)$.
A keystep for efficient optimization is to insure all matrix inversions are
numerically stable. This can be done using the Woodburry-Sherman-Morrison
diff --git a/src/stan-users-guide/gaussian-processes.qmd b/src/stan-users-guide/gaussian-processes.qmd
index a9b8933ac..11481f0d2 100644
--- a/src/stan-users-guide/gaussian-processes.qmd
+++ b/src/stan-users-guide/gaussian-processes.qmd
@@ -28,7 +28,7 @@ functions drawn from the process.
Gaussian processes can be encoded in Stan by implementing their mean and
covariance functions or by using the specialized covariance functions
-outlined below, and plugging the result into the Gaussian model.
+outlined below, and plugging the result into the Gaussian model.
This form of model is straightforward and may be used for simulation,
model fitting, or posterior predictive inference. A more efficient Stan
implementation for the GP with a normally distributed outcome marginalizes
@@ -489,14 +489,14 @@ model {
#### Poisson GP using an embedded Laplace approximation {-}
For computational reasons, we may want to integrate out the Gaussian process
-$f$, as was done in the normal output model. Unfortunately, exact
+$f$, as was done in the normal output model. Unfortunately, exact
marginalization over $f$ is not possible when the outcome model is not normal.
Instead, we may perform *approximate* marginalization with an *embedded
Laplace approximation* [@Rue:2009, @Margossian:2020].
To do so, we first use the function `laplace_marginal` to approximate the
marginal likelihood $p(y \mid \rho, \alpha, a)$ and sample the
hyperparameters with Hamiltonian Monte Carlo sampling. Then, we recover the
-integrated out $f$ in the `generated quantities` block using
+integrated out $f$ in the `generated quantities` block using
`laplace_latent_rng`.
The embedded Laplace approximation computes a Gaussian approximation of the @@ -504,11 +504,11 @@ conditional posterior, $$ \hat p_\mathcal{L}(f \mid \rho, \alpha, a, y) \approx p(f \mid \rho, \alpha, a, y), $$ -where $\hat p_\mathcal{L}$ is a Gaussian that matches the mode and curvature +where $\hat p_\mathcal{L}$ is a Gaussian that matches the mode and curvature of $p(f \mid \rho, \alpha, a, y)$. We then obtain an approximation of the marginal likelihood as follows: $$ - \hat p_\mathcal{L}(y \mid \rho, \alpha, a) + \hat p_\mathcal{L}(y \mid \rho, \alpha, a) = \frac{p(f^* \mid \alpha, \rho) p(y \mid f^*, a)}{ \hat p_\mathcal{L}(f \mid \rho, \alpha, a, y)}, $$ @@ -517,13 +517,13 @@ numerical optimization. To use Stan's embedded Laplace approximation, we must define the prior covariance function and the log likelihood function in the functions block. -```{stan} +```stan functions { // log likelihood function real ll_function(vector f, real a, array[] int y) { - return poisson_log_lpmf(y | a + f); + return poisson_log_lpmf(y | a + f); } - + // covariance function matrix cov_function(real rho, real alpha, array[] real x, int N, real delta) { matrix[N, N] K = gp_exp_quad_cov(x, alpha, rho); @@ -537,7 +537,7 @@ Furthermore, we must specify an initial value $f_\text{init}$ for the numerical optimizer that underlies the Laplace approximation. In our experience, we have found setting all values to 0 to be a good default. -```{stan} +```stan transformed data { vector[N] f_init = rep_vector(0, N); } @@ -545,29 +545,29 @@ transformed data { We then increment `target` in the model block with the approximation to $\log p(y \mid \rho, \alpha, a)$. -```{stan} +```stan model { rho ~ inv_gamma(5, 5); alpha ~ std_normal(); sigma ~ std_normal(); - + target += laplace_marginal(ll_function, (a, y), f_init, - cov_function, (rho, alpha, x, N, delta)); + cov_function, (rho, alpha, x, N, delta)); } ``` Notice that we do not need to construct $f$ explicitly, since it is marginalized out. 
Instead, we recover the GP function in `generated quantities`: -```{stan} +```stan generated quantities { vector[N] f = laplace_latent_rng(ll_function, (a, y), f_init, cov_function, (rho, alpha, x, N, delta)); } ``` -Users can set the control parameters of the embedded Laplace approximation, -via `laplace_marginal_tol` and `laplace_latent_tol_rng`. When using these +Users can set the control parameters of the embedded Laplace approximation, +via `laplace_marginal_tol` and `laplace_latent_tol_rng`. When using these functions, the user must set *all* the control parameters. -```{stan} +```stan transformed data { // ... @@ -589,7 +589,7 @@ model { target += laplace_marginal(ll_function, (a, y), f_init, cov_function, (rho, alpha, x, N, delta), tol, max_num_steps, hessian_block_size, - solver, max_steps_linesearch); + solver, max_steps_linesearch); } generated quantities { @@ -614,7 +614,7 @@ In our example, there is a simple pairing $(y_i, f_i)$, however we could imagine a scenario where multiple observations $(y_{j1}, y_{j2}, ...)$ are observed for a single $f_j$. -```{stan} +```stan transformed data { // ... array[n_obs] int y_index; @@ -630,12 +630,12 @@ transformed parameter { model { // ... target += laplace_marginal_poisson_2_log_lpmf(y | y_index, a_vec, f_init, - cov_function, (rho, alpha, x, N, delta)); + cov_function, (rho, alpha, x, N, delta)); } generated quantities { vector[N] f = laplace_latent_poisson_2_log_rng(y, y_index, a_vec, f_init, - cov_function, (rho, alpha, x, N, delta)); + cov_function, (rho, alpha, x, N, delta)); } ``` @@ -648,7 +648,7 @@ Marginalization with a Laplace approximation can lead to faster inference, however it also introduces an approximation error. In practice, this error is negligible when using a Poisson likelihood and the approximation works well for log concave likelihoods [@Kuss:2005, @Vanhatalo:2010, @Cseke:2011, -@Vehtari:2016]. +@Vehtari:2016]. 
Still, users should exercise caution, especially when trying unconventional likelihoods. @@ -688,13 +688,13 @@ model { As with the Poisson GP, we cannot marginalize the GP function exactly, however we can resort to an embedded Laplace approximation. -```{stan} +```stan functions { // log likelihood function real ll_function(vector f, real a, array[] int z) { - return bernoulli_logit_lpmf(z | a + f); + return bernoulli_logit_lpmf(z | a + f); } - + // covariance function matrix cov_function(real rho, real alpha, array[] real x, int N, real delta) { matrix[N, N] K = gp_exp_quad_cov(x, alpha, rho); @@ -706,18 +706,18 @@ functions { model { target += laplace_marginal(ll_function, (a, z), f_init, - cov_function, (rho, alpha, x, N, delta)); + cov_function, (rho, alpha, x, N, delta)); } generated quantities { vector[N] f = laplace_latent_rng(ll_function, (a, z), f_init, - cov_function, (rho, alpha, x, N, delta)); + cov_function, (rho, alpha, x, N, delta)); } ``` While marginalization with a Laplace approximation can lead to faster inference, it also introduces an approximation error. In practice, this error may not be -negligable with a Bernoulli likelihood; for more discussion see, e.g. +negligible with a Bernoulli likelihood; for more discussion see, e.g. [@Vehtari:2016, @Margossian:2020]. @@ -1001,7 +1001,7 @@ input vector `x`. All that is left is to define the univariate normal distribution statement for `y`. The generated quantities block defines the quantity `y2`. We generate -`y2` by randomly generating `N2` values from univariate normals with +`y2` by randomly generating `N2` values from univariate normals with each mean corresponding to the appropriate element in `f`. From 67c49522379a45fe6ddaa2bb99d8ade7a1470413 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Wed, 11 Jun 2025 17:06:55 -0400 Subject: [PATCH 22/26] fix references on GP section. 
--- src/bibtex/all.bib | 19 ++++++++++++++++--- src/stan-users-guide/gaussian-processes.qmd | 8 ++++---- 2 files changed, 20 insertions(+), 7 deletions(-) diff --git a/src/bibtex/all.bib b/src/bibtex/all.bib index 5b4cdb71b..7593086f3 100644 --- a/src/bibtex/all.bib +++ b/src/bibtex/all.bib @@ -1896,9 +1896,10 @@ @misc{seyboldt:2024 url ="https://github.com/pyro-ppl/numpyro/pull/1751#issuecomment-1980569811" } + @article{Margossian:2020, - Author = {Margossian, C. C. and Vehtari, A. and Simpson, D. - and Agrawal, R.}, + Author = {Margossian, Charles C and Vehtari, Aki and Simpson, Daniel + and Agrawal, Raj}, Title = {Hamiltonian Monte Carlo using an adjoint-differentiated Laplace approximation: Bayesian inference for latent Gaussian models and beyond}, journal = {Advances in Neural Information Processing Systems}, volume = {34}, @@ -1944,8 +1945,20 @@ @article{Vehtari:2016 @article{Margossian:2023, - author = {Margossian, Charles C.}, + author = {Margossian, Charles C}, title = {General adjoint-differentiated Laplace approximation}, journal = {arXiv:2306.14976 }, year = {2023}} + + @article{Rue:2009, + title={Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations}, + author={Rue, H{\aa}vard and Martino, Sara and Chopin, Nicolas}, + journal={Journal of the Royal Statistical Society: Series B (Statistical Methodology)}, + volume={71}, + number={2}, + pages={319--392}, + year={2009}, + publisher={Wiley Online Library}, + doi={10.1111/j.1467-9868.2008.00700.x} +} diff --git a/src/stan-users-guide/gaussian-processes.qmd b/src/stan-users-guide/gaussian-processes.qmd index 11481f0d2..6ad62f387 100644 --- a/src/stan-users-guide/gaussian-processes.qmd +++ b/src/stan-users-guide/gaussian-processes.qmd @@ -492,7 +492,7 @@ For computational reasons, we may want to integrate out the Gaussian process $f$, as was done in the normal output model. 
Unfortunately, exact marginalization over $f$ is not possible when the outcome model is not normal. Instead, we may perform *approximate* marginalization with an *embedded -Laplace approximation* [@Rue:2009, @Margossian:2020]. +Laplace approximation* [@Rue:2009; @Margossian:2020]. To do so, we first use the function `laplace_marginal` to approximate the marginal likelihood $p(y \mid \rho, \alpha, a)$ and sample the hyperparameters with Hamiltonian Monte Carlo sampling. Then, we recover the @@ -600,7 +600,7 @@ generated quantities { } ``` -For details about the control parameters, see [@Margossian:2022] +For details about the control parameters, see @Margossian:2023. Stan also provides support for a limited menu of built-in functions, including @@ -647,7 +647,7 @@ approximation using `laplace_marginal_tol_poisson_2_log_lpmf` and Marginalization with a Laplace approximation can lead to faster inference, however it also introduces an approximation error. In practice, this error is negligible when using a Poisson likelihood and the approximation works well -for log concave likelihoods [@Kuss:2005, @Vanhatalo:2010, @Cseke:2011, +for log concave likelihoods [@Kuss:2005; @Vanhatalo:2010; @Cseke:2011; @Vehtari:2016]. Still, users should exercise caution, especially when trying unconventional likelihoods. @@ -718,7 +718,7 @@ generated quantities { While marginalization with a Laplace approximation can lead to faster inference, it also introduces an approximation error. In practice, this error may not be negligible with a Bernoulli likelihood; for more discussion see, e.g. -[@Vehtari:2016, @Margossian:2020]. +[@Vehtari:2016; @Margossian:2020]. 
### Automatic relevance determination {-} From 17687069711959498061077eea1b5e7280c3906c Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Fri, 20 Jun 2025 14:16:29 -0400 Subject: [PATCH 23/26] Fix pdf build --- src/reference-manual/laplace_embedded.qmd | 2 -- 1 file changed, 2 deletions(-) diff --git a/src/reference-manual/laplace_embedded.qmd b/src/reference-manual/laplace_embedded.qmd index 82eb3c483..00e1e3de2 100644 --- a/src/reference-manual/laplace_embedded.qmd +++ b/src/reference-manual/laplace_embedded.qmd @@ -18,13 +18,11 @@ use any inference over the remaining hyperparameters, for example Hamiltonian Monte Carlo sampling. Formally, consider a latent Gaussian model, -$$ \begin{eqnarray*} \phi & \sim & p(\phi) \\ \theta & \sim & \text{Multi-Normal}(0, K(\phi)) \\ y & \sim & p(y \mid \theta, \phi). \end{eqnarray*} -$$ The motivation for marginalization is to bypass the challenging geometry of the joint posterior $p(\phi, \theta \mid y)$. This geometry (e.g. funnels) often frustrates inference algorithms, including Hamiltonian Monte Carlo sampling and approximate From 5cec36efb921aa119803838ae3571107c6c645cb Mon Sep 17 00:00:00 2001 From: Brian Ward Date: Fri, 20 Jun 2025 14:19:59 -0400 Subject: [PATCH 24/26] Web build --- src/_quarto.yml | 1 + src/reference-manual/laplace_embedded.qmd | 4 ++-- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/src/_quarto.yml b/src/_quarto.yml index b1bf257d6..f6aa243df 100644 --- a/src/_quarto.yml +++ b/src/_quarto.yml @@ -188,6 +188,7 @@ website: - reference-manual/pathfinder.qmd - reference-manual/variational.qmd - reference-manual/laplace.qmd + - reference-manual/laplace_embedded.qmd - reference-manual/diagnostics.qmd - section: "Usage" contents: diff --git a/src/reference-manual/laplace_embedded.qmd b/src/reference-manual/laplace_embedded.qmd index 00e1e3de2..7a9dda324 100644 --- a/src/reference-manual/laplace_embedded.qmd +++ b/src/reference-manual/laplace_embedded.qmd @@ -7,8 +7,8 @@ pagetitle: Embedded 
Laplace Approximation Stan provides functions to perform an embedded Laplace approximation for latent Gaussian models. Bearing a slight abuse of language, this is sometimes known as an integrated or nested Laplace approximation. -Details of Stan's implementation can be found in reference -[@Margossian:2020, @Margossian:2023]. +Details of Stan's implementation can be found in references +[@Margossian:2020] and [@Margossian:2023]. A standard approach to fit a latent Gaussian model would be to perform inference jointly over the latent Gaussian variables and the hyperparameters. From c418d08c8ecefc658eafc6d2d279bec1ca8a14f4 Mon Sep 17 00:00:00 2001 From: Charles Margossian Date: Tue, 24 Jun 2025 09:28:26 +0400 Subject: [PATCH 25/26] edit doc to reflect changes in embedded laplace: (i) theta_init is optional and (ii) helper functions allow for a prior mean. --- src/functions-reference/embedded_laplace.qmd | 300 +++++++------------ src/functions-reference/functions_index.qmd | 82 ++--- src/reference-manual/laplace_embedded.qmd | 16 +- 3 files changed, 141 insertions(+), 257 deletions(-) diff --git a/src/functions-reference/embedded_laplace.qmd b/src/functions-reference/embedded_laplace.qmd index bf9e7f301..63346dbfb 100644 --- a/src/functions-reference/embedded_laplace.qmd +++ b/src/functions-reference/embedded_laplace.qmd @@ -7,20 +7,21 @@ pagetitle: Embedded Laplace Approximation The embedded Laplace approximation can be used to approximate certain marginal and conditional distributions that arise in latent Gaussian models. A latent Gaussian model observes the following hierarchical structure: -$$ - \phi \sim p(\phi), \\ - \theta \sim \text{MultiNormal}(0, K(\phi)), \\ - y \sim p(y \mid \theta, \phi). -$$ +\begin{eqnarray} + \phi &\sim& p(\phi), \\ + \theta &\sim& \text{MultiNormal}(0, K(\phi)), \\ + y &\sim& p(y \mid \theta, \phi). 
+\end{eqnarray}
In this formulation, $y$ represents the observed data,
and $p(y \mid \theta, \phi)$ is the likelihood function that
-specifies how observations are generated conditional on the latent variables
-$\theta$ and hyperparameters $\phi$.
-$\phi$ denotes the set of hyperparameters governing the model and
-$p(\phi)$ is the prior distribution placed over these hyperparameters.
+specifies how observations are generated conditional on the latent Gaussian
+variables $\theta$ and hyperparameters $\phi$.
$K(\phi)$ denotes the prior covariance matrix for the latent
Gaussian variables $\theta$ and is parameterized by $\phi$.
-The prior $p(\theta)$ is restricted to be a multivariate normal.
+The prior $p(\theta \mid \phi)$ is restricted to be a multivariate normal
+centered at 0. That said, we can always pick a likelihood that offsets $\theta$,
+which is equivalent to specifying a prior mean.
+
To sample from the joint posterior $p(\phi, \theta \mid y)$, we can either
use a standard method, such as Markov chain Monte Carlo, or we can follow
a two-step procedure:

@@ -60,10 +61,9 @@ The process of iteratively sampling from $p(\phi \mid y)$
(say, with MCMC) and then $p(\theta \mid y, \phi)$ produces samples from the
joint posterior $p(\theta, \phi \mid y)$.

-The Laplace approximation is especially useful if $p(\theta)$ is
-multivariate normal and $p(y \mid \phi, \theta)$ is
+The Laplace approximation is especially useful if $p(y \mid \phi, \theta)$ is
log-concave. Stan's embedded Laplace approximation is restricted to the case
-where the prior $p(\theta)$ is multivariate normal.
+where the prior $p(\theta \mid \phi)$ is multivariate normal.
Furthermore, the likelihood $p(y \mid \phi, \theta)$ must be computed using
only operations which support higher-order derivatives
(see section [specifying the likelihood function](#laplace_likelihood_spec)).
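To make the two-step scheme concrete, the following schematic Stan program marginalizes out a latent Gaussian vector under a Poisson likelihood and recovers it in `generated quantities`. The Poisson likelihood, the squared-exponential covariance `gp_exp_quad_cov`, the small jitter term on the diagonal, and all data names here are illustrative assumptions for the sketch, not part of the `laplace_marginal` interface itself:

```stan
functions {
  // log likelihood: Poisson counts, one latent value per group
  real ll_fun(vector theta, data array[] int y, data array[] int g) {
    return poisson_log_lpmf(y | theta[g]);
  }
  // prior covariance K(phi), here phi = (alpha, rho)
  matrix cov_fun(real alpha, real rho, data array[] real x) {
    return gp_exp_quad_cov(x, alpha, rho)
           + diag_matrix(rep_vector(1e-8, size(x)));  // jitter for stability
  }
}
data {
  int N;                     // number of observations
  int M;                     // number of groups / latent variables
  array[N] int y;            // counts
  array[N] int g;            // group membership of each observation
  array[M] real x;           // covariate defining the covariance
}
parameters {
  real<lower=0> alpha;
  real<lower=0> rho;
}
model {
  alpha ~ normal(0, 1);
  rho ~ inv_gamma(5, 5);
  // approximate marginal likelihood p(y | alpha, rho), theta integrated out
  target += laplace_marginal(ll_fun, (y, g), cov_fun, (alpha, rho, x));
}
generated quantities {
  // approximate draw from the conditional posterior p(theta | y, alpha, rho)
  vector[M] theta = laplace_latent_rng(ll_fun, (y, g), cov_fun, (alpha, rho, x));
}
```

Note that the latent vector `theta` never appears in the `parameters` block; only the hyperparameters are sampled, and `theta` is recovered afterwards with `laplace_latent_rng`.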
@@ -74,11 +74,11 @@ In the `model` block, we increment `target` with
`laplace_marginal`, a function that approximates the log marginal likelihood
$\log p(y \mid \phi)$. The signature of the function is:

-\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...) likelihood\_arguments, vector theta\_init, function covariance\_function, tuple(...) covariance\_arguments): real}|hyperpage}
+\index{{\tt \bfseries laplace\_marginal }!{\tt (function likelihood\_function, tuple(...) likelihood\_arguments, function covariance\_function, tuple(...) covariance\_arguments): real}|hyperpage}

-
+

-`real` **`laplace_marginal`**`(function likelihood_function, tuple(...) likelihood_arguments, vector theta_init, function covariance_function, tuple(...) covariance_arguments)`
+`real` **`laplace_marginal`**`(function likelihood_function, tuple(...) likelihood_arguments, function covariance_function, tuple(...) covariance_arguments)`

Which returns an approximation to the log marginal likelihood $p(y \mid \phi)$.
{{< since 2.37 >}}

@@ -87,12 +87,8 @@ This function takes in the following arguments.

1. `likelihood_function` - user-specified log likelihood whose first argument is the vector of latent Gaussian variables `theta`
2. `likelihood_arguments` - A tuple of the log likelihood arguments whose internal members will be passed to the likelihood function
-3. `theta_init` - an initial guess for the optimization problem that underlies the Laplace approximation,
-4. `covariance_function` - Prior covariance function
-5. `covariance_arguments` A tuple of the arguments whose internal members will be passed to the the covariance function
-
-The size of $\theta_\text{init}$ must be consistent with the size of the $\theta$ argument
-passed to `likelihood_function`.
+3. `covariance_function` - Prior covariance function
+4. 
`covariance_arguments` - A tuple of the arguments whose internal members will be passed to the covariance function

Below we go over each argument in more detail.

@@ -136,7 +132,9 @@ real likelihood_fun(vector theta, real a, matrix X)
```

The call to the laplace marginal would start with this likelihood and
-tuple holding the other likelihood arguments.
+tuple holding the other likelihood arguments. We do not need to pass
+`theta`, since it is marginalized out and therefore does not
+appear explicitly as a model parameter.

```stan
real val = laplace_marginal(likelihood_fun, (a, X), ...);

@@ -191,7 +189,7 @@ the call to the Laplace marginal would include the covariance function
and a tuple holding the covariance function arguments.

```stan
-real val = laplace_marginal(likelihood_fun, (a, X), theta_init, cov_fun, (b, Z), ...);
+real val = laplace_marginal(likelihood_fun, (a, X), cov_fun, (b, Z), ...);
```

## Control parameters

It is also possible to specify control parameters, which can help improve the
optimization that underlies the Laplace approximation, using
`laplace_marginal_tol` with the following signature:

-\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
-
-\index{{\tt \bfseries laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries 
laplace\_marginal\_tol }!{\tt (function likelihood\_function, tuple(...), function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol`**`(function likelihood_function, tuple(...), function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

Returns an approximation to the log marginal likelihood $p(y \mid \phi)$
and allows the user to tune the control parameters of the approximation.

+* `theta_init`: the initial guess for the Newton solver when finding the mode
+of $p(\theta \mid y, \phi)$. By default, it is a zero-vector.
+
* `tol`: the tolerance $\epsilon$ of the optimizer. Specifically, the
optimizer stops when $||\nabla \log p(\theta \mid y, \phi)|| \le \epsilon$.
By default, the value is $\epsilon = 10^{-6}$.

@@ -219,7 +220,7 @@ it gives up (in which case the Metropolis proposal gets rejected). The default
is 100 steps.

* `hessian_block_size`: the size of the blocks, assuming the Hessian
-$\partial \log p(y \mid \theta, phi) \ \partial \theta$ is block-diagonal.
+$\partial^2 \log p(y \mid \theta, \phi) / \partial \theta^2$ is block-diagonal.
The structure of the Hessian is determined by the dependence structure of
$y$ on $\theta$. By default, the Hessian is treated as diagonal
(`hessian_block_size=1`). If the Hessian is not block diagonal, then set

@@ -252,18 +253,18 @@ approximation of $p(\theta \mid \phi, y)$ using `laplace_latent_rng`.
The signature for `laplace_latent_rng` follows closely the signature for
`laplace_marginal`:

-
-\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_rng }!{\tt (function likelihood\_function, tuple(...), function covariance\_function, tuple(...)): vector}|hyperpage}

-`vector` **`laplace_latent_rng`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...))`
\newline +`vector` **`laplace_latent_rng`**`(function likelihood_function, tuple(...), function covariance_function, tuple(...))`
\newline Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$. {{< since 2.37 >}} Once again, it is possible to specify control parameters: -\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function likelihood\_function, tuple(...), vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} +\index{{\tt \bfseries laplace\_latent\_tol\_rng }!{\tt (function likelihood\_function, tuple(...), function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} -`vector` **`laplace_latent_tol_rng`**`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_rng`**`(function likelihood_function, tuple(...), function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

Draws approximate samples from the conditional posterior $p(\theta \mid y, \phi)$
and allows the user to tune the control parameters of the approximation.
{{< since 2.37 >}}

@@ -282,47 +283,50 @@ likelihoods in the `functions` block.

### Poisson with log link

Given count data, with each observed count $y_i$ associated with a group
-$g(i)$ and a corresponding latent variable $\theta_{g(i)}$, and Poisson model,
+$g(i)$ and a corresponding latent variable $\theta_{g(i)}$, and a Poisson model,
the likelihood is
$$
-p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)})).
+p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)} + m_{g(i)})),
$$
+where $m_{g(i)}$ acts as an offset for $\theta_{g(i)}$. This can also be
+interpreted as a prior mean on $\theta_{g(i)}$.

The arguments required to compute this likelihood are:

* `y`: an array of counts.
* `y_index`: an array whose $i^\text{th}$ element indicates to which group
the $i^\text{th}$ observation belongs to.
+* `m`: a vector of offsets or prior means for $\theta$.

\index{{\tt \bfseries laplace\_marginal\_poisson\_log }!sampling statement|hyperpage}

-`y ~ ` **`laplace_marginal_poisson_log`**`(y_index, theta_init, covariance_function, (...))`
\newline +`y ~ ` **`laplace_marginal_poisson_log`**`(y_index, m, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, theta_init, covariance_function, (...))`. +Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, m, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_poisson_log`**`(y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline +`y ~ ` **`laplace_marginal_tol_poisson_log`**`(y_index, m, covariance_function, (...), theta_init, tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline -Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, theta_init, covariance_function, (...))`. +Increment target log probability density with `laplace_marginal_poisson_log_lupmf(y | y_index, m, covariance_function, (...))`. The signatures for the embedded Laplace approximation function with a Poisson likelihood are - -\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline + +\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init): real}|hyperpage} +`real` **`laplace_marginal_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} +`real` **`laplace_marginal_tol_poisson_log_lpmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson @@ -331,19 +335,19 @@ parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline + +\index{{\tt \bfseries laplace\_marginal\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} +`real` **`laplace_marginal_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector m, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_poisson_log_lupmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson @@ -351,18 +355,18 @@ distribution with a log link, and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage} -`vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline + +\index{{\tt \bfseries laplace\_latent\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init): vector}|hyperpage} +`vector` **`laplace_latent_poisson_log_rng`**`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...))`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Poisson distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_log\_rng }!{\tt (array[] int y, array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} -`vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_poisson_log_rng`**`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood @@ -370,101 +374,17 @@ $p(y \mid \theta)$ is a Poisson distribution with a log link and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} -A similar built-in likelihood lets users specify an offset $x \in \mathbb R$ -with $x_i \ge 0$ to the rate parameter of the Poisson. This is equivalent to -specifying a prior mean $log(x_i)$ for $\theta_i$. The likelihood is then, -$$ -p(y \mid \theta, \phi) = \prod_i\text{Poisson} (y_i \mid \exp(\theta_{g(i)}) x_i). -$$ - - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log }!sampling statement|hyperpage} - -`y ~ ` **`laplace_marginal_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...))`
\newline - -Increment target log probability density with `laplace_marginal_poisson_2_log_lupmf(y | y_index, x, theta_init, covariance_function, (...))`. -{{< since 2.37 >}} - - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log }!sampling statement|hyperpage} - -`y ~ ` **`laplace_marginal_tol_poisson_2_log`**`(y_index, x, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline -Increment target log probability density with `laplace_marginal_tol_poisson_2_log_lupmf(y | y_index, x, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. -{{< since 2.37 >}} - -The signatures for this function are: - - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} -`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline -Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ -in the special case where the likelihood $p(y \mid \theta)$ is a Poisson -distribution with a log link and an offset. -{{< since 2.37 >}} - - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} - -`real` **`laplace_marginal_tol_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline - -Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ -in the special case where the likelihood $p(y \mid \theta)$ is a Poisson -distribution with a log link and an offset -and allows the user to tune the control parameters of the approximation. -{{< since 2.37 >}} - - -\index{{\tt \bfseries laplace\_marginal\_poisson\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...): real}|hyperpage} -`real` **`laplace_marginal_poisson_2_log_lpmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline - -Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ -in the special case where the likelihood $p(y \mid \theta)$ is a Poisson -distribution with a log link and an offset. -{{< since 2.37 >}} - - -\index{{\tt \bfseries laplace\_marginal\_tol\_poisson\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch)): real}|hyperpage} - -`real` **`laplace_marginal_tol_poisson_2_log_lupmf`**`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline - -Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ -in the special case where the likelihood $p(y \mid \theta)$ is a Poisson -distribution with a log link and an offset -and allows the user to tune the control parameters of the approximation. -{{< since 2.37 >}} - - -\index{{\tt \bfseries laplace\_latent\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage} - -`vector` **`laplace_latent_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...))`
\newline - -Returns a draw from the Laplace approximation to the conditional posterior -$p(\theta \mid y, \phi)$ in the special case where the likelihood -$p(y \mid \theta)$ is a Poisson distribution with a log link and an offset. -{{< since 2.37 >}} - - -\index{{\tt \bfseries laplace\_latent\_tol\_poisson\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector x, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} - -`vector` **`laplace_latent_tol_poisson_2_log_rng`**`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

-Returns a draw from the Laplace approximation to the conditional posterior
-$p(\theta \mid y, \phi)$ in the special case where the likelihood
-$p(y \mid \theta)$ is a Poisson distribution with a log link and an offset,
-and allows the user to tune the control parameters of the approximation.
-{{< since 2.37 >}}
-
### Negative Binomial with log link

The negative Binomial distribution generalizes the Poisson distribution by
introducing the dispersion parameter $\eta$. The corresponding likelihood is
then
$$
-p(y \mid \theta, \phi) = \prod_i\text{NegBinomial2} (y_i \mid \exp(\theta_{g(i)}), \eta).
+p(y \mid \theta, \phi) = \prod_i\text{NegBinomial2} (y_i \mid \exp(\theta_{g(i)} + m_{g(i)}), \eta).
$$
Here we use the alternative parameterization implemented in Stan, meaning that
$$
-\mathbb E(y_i) = \exp (\theta_{g(i)}), \\
+\mathbb E(y_i) = \exp (\theta_{g(i)} + m_{g(i)}), \\
\text{Var}(y_i) = \mathbb E(y_i) + \frac{(\mathbb E(y_i))^2}{\eta}.
$$
The arguments for the likelihood function are:

@@ -473,41 +393,42 @@ The arguments for the likelihood function are:

* `y`: an array of counts.
* `y_index`: an array whose $i^\text{th}$ element indicates to which group
the $i^\text{th}$ observation belongs to.
* `eta`: the overdispersion parameter.
+* `m`: a vector of offsets or prior means for $\theta$.

\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log }!sampling statement|hyperpage}

-`y ~ ` **`laplace_marginal_neg_binomial_2_log`**`(y_index, eta, theta_init, covariance_function, (...))`
\newline +`y ~ ` **`laplace_marginal_neg_binomial_2_log`**`(y_index, eta, m, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_neg_binomial_2_log_lupmf(y | y_index, eta, theta_init, covariance_function, (...))`. +Increment target log probability density with `laplace_marginal_neg_binomial_2_log_lupmf(y | y_index, eta, m, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_neg_binomial_2_log`**`(y_index, eta, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline +`y ~ ` **`laplace_marginal_tol_neg_binomial_2_log`**`(y_index, eta, m, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline -Increment target log probability density with `laplace_marginal_tol_neg_binomial_2_log_lupmf(y | y_index, eta, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. +Increment target log probability density with `laplace_marginal_tol_neg_binomial_2_log_lupmf(y | y_index, eta, m, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`. {{< since 2.37 >}} The function signatures for the embedded Laplace approximation with a negative Binomial likelihood are - -\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init): real}|hyperpage} -`real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
\newline +`real` **`laplace_marginal_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_neg_binomial_2_log_lpmf`**`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative @@ -515,20 +436,20 @@ Binomial distribution with a log link, and allows the user to tune the control parameters of the approximation. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init): real}|hyperpage} -`real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
\newline +`real` **`laplace_marginal_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...))`
\newline Returns an approximation to the log marginal likelihood $p(y \mid \phi, \eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} + +\index{{\tt \bfseries laplace\_marginal\_tol\_neg\_binomial\_2\_log\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage} -`real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_neg_binomial_2_log_lupmf`**`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

Returns an approximation to the log marginal likelihood $p(y \mid \phi,
\eta)$ in the special case where the likelihood $p(y \mid \theta, \eta)$ is a Negative
@@ -536,20 +457,20 @@ Binomial distribution with a log link, and allows the user to tune the control
 parameters of the approximation.
 {{< since 2.37 >}}
 
-
-\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, real eta, vector m, function covariance\_function, tuple(...)): vector}|hyperpage}
 
-`vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...))`
\newline +`vector` **`laplace_latent_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector m, function covariance_function, tuple(...))`
\newline

Returns a draw from the Laplace approximation to the conditional posterior
$p(\theta \mid y, \phi, \eta)$ in the special case where the likelihood
$p(y \mid \theta, \eta)$ is a Negative Binomial distribution with a log link.
{{< since 2.37 >}}
 
-
-\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_tol\_neg\_binomial\_2\_log\_rng }!{\tt (array[] int y, array[] int y\_index, real eta, vector m, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage}
 
-`vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_neg_binomial_2_log_rng`**`(array[] int y, array[] int y_index, real eta, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

Returns a draw from the Laplace approximation to the conditional posterior
$p(\theta \mid y, \phi, \eta)$ in the special case where the likelihood
@@ -561,47 +482,48 @@ and allows the user to tune the control parameters of the approximation.
 
 Given binary outcome $y_i \in \{0, 1\}$ and Bernoulli model, the likelihood
 is
 $$
-p(y \mid \theta, \phi) = \prod_i\text{Bernoulli} (y_i \mid \text{logit}^{-1}(\theta_{g(i)})).
+p(y \mid \theta, \phi) = \prod_i\text{Bernoulli} (y_i \mid \text{logit}^{-1}(\theta_{g(i)} + m_{g(i)})).
 $$
 The arguments of the likelihood function are:
 
 * `y`: the observed binary outcomes
 * `y_index`: an array whose $i^\text{th}$ element indicates to which group
 the $i^\text{th}$ observation belongs.
+* `m`: a vector of offsets or prior means for $\theta$.
 
 
 \index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit }!sampling statement|hyperpage}
 
-`y ~ ` **`laplace_marginal_bernoulli_logit`**`(y_index, theta_init, covariance_function, (...))`
\newline +`y ~ ` **`laplace_marginal_bernoulli_logit`**`(y_index, m, covariance_function, (...))`
\newline -Increment target log probability density with `laplace_marginal_bernoulli_logit_lupmf(y | y_index, theta_init, covariance_function, (...))`. +Increment target log probability density with `laplace_marginal_bernoulli_logit_lupmf(y | y_index, m, covariance_function, (...))`. {{< since 2.37 >}} \index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit }!sampling statement|hyperpage} -`y ~ ` **`laplace_marginal_tol_bernoulli_logit`**`(y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline
+`y ~ ` **`laplace_marginal_tol_bernoulli_logit`**`(y_index, m, covariance_function, (...), theta_init, tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`
\newline

-Increment target log probability density with `laplace_marginal_tol_bernoulli_logit_lupmf(y | y_index, theta_init, covariance_function, (...), tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`.
+Increment target log probability density with `laplace_marginal_tol_bernoulli_logit_lupmf(y | y_index, m, covariance_function, (...), theta_init, tol, max_steps, hessian_block_size, solver, max_steps_linesearch)`.
 {{< since 2.37 >}}
 
 The function signatures for the embedded Laplace approximation with a
 Bernoulli likelihood are
 
-
-\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector m, function covariance\_function, tuple(...)): real}|hyperpage}
 
-`real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline +`real` **`laplace_marginal_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...))`
\newline

Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in
the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
distribution with a logit link.
{{< since 2.37 >}}
 
-
-\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lpmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector m, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
-`real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_bernoulli_logit_lpmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in
the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
@@ -609,40 +531,40 @@ distribution with a logit link and allows the user to tune the control parameter
 {{< since 2.37 >}}
 
 
-
-\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector m, function covariance\_function, tuple(...)): real}|hyperpage}
 
-`real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline +`real` **`laplace_marginal_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...))`
\newline

Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in
the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
distribution with a logit link.
{{< since 2.37 >}}
 
-
-\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
+
+\index{{\tt \bfseries laplace\_marginal\_tol\_bernoulli\_logit\_lupmf }!{\tt (array[] int y \textbar\ array[] int y\_index, vector m, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): real}|hyperpage}
 
-`real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`real` **`laplace_marginal_tol_bernoulli_logit_lupmf`**`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline

Returns an approximation to the log marginal likelihood $p(y \mid \phi)$ in
the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli
distribution with a logit link and allows the user to tune the control parameters.
{{< since 2.37 >}}
 
-
-\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...)): vector}|hyperpage}
+
+\index{{\tt \bfseries laplace\_latent\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector m, function covariance\_function, tuple(...)): vector}|hyperpage}
 
-`vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...))`
\newline +`vector` **`laplace_latent_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...))`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood $p(y \mid \theta)$ is a Bernoulli distribution with a logit link. {{< since 2.37 >}} - -\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector theta\_init, function covariance\_function, tuple(...), real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} + +\index{{\tt \bfseries laplace\_latent\_tol\_bernoulli\_logit\_rng }!{\tt (array[] int y, array[] int y\_index, vector m, function covariance\_function, tuple(...), vector theta\_init, real tol, int max\_steps, int hessian\_block\_size, int solver, int max\_steps\_linesearch): vector}|hyperpage} -`vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline +`vector` **`laplace_latent_tol_bernoulli_logit_rng`**`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch)`
\newline Returns a draw from the Laplace approximation to the conditional posterior $p(\theta \mid y, \phi)$ in the special case where the likelihood diff --git a/src/functions-reference/functions_index.qmd b/src/functions-reference/functions_index.qmd index 77ac2441b..6aa9250a3 100644 --- a/src/functions-reference/functions_index.qmd +++ b/src/functions-reference/functions_index.qmd @@ -1623,52 +1623,42 @@ pagetitle: Alphabetical Index **laplace_latent_bernoulli_logit_rng**: - -
[`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-1fc637cfa219f5661eaf691aed6979aa11715e42) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-627a0deb197f3eba9ab71644f42e8afb3e0793e3) (embedded_laplace.html)
**laplace_latent_neg_binomial_2_log_rng**: - -
[`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-cdf01a782863e45a083e48b4bd247071d9a4de50) (embedded_laplace.html)
- - -**laplace_latent_poisson_2_log_rng**: - - -
[`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-51913d568f11fc64a9f166ff680e92b7943b85bc) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, real eta, vector m, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-05f4fe23889b0c68edde7fc84945af0bf8d1e726) (embedded_laplace.html)
**laplace_latent_poisson_log_rng**: - -
[`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-de9bf8cc1a51693f3e9a1a49dca129ae45c868b9) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-e6f1da0a228b9548915532b62d3d8d25ea1f893d) (embedded_laplace.html)
**laplace_latent_rng**: - -
[`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-e736fd996b0dd5ec34cc3a31e68ae93361bcfe4c) (embedded_laplace.html)
+ -
[`(function likelihood_function, tuple(...), function covariance_function, tuple(...)) : vector`](embedded_laplace.qmd#index-entry-6d0685309664591fc32d3e2a2304af7aa5459e1c) (embedded_laplace.html)
**laplace_latent_tol_bernoulli_logit_rng**: - -
[`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-929b023be5b575bb12c7568d48c4292d68484e4b) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-66d8665514d9fb3d6d17ee95d68e7d186e87e229) (embedded_laplace.html)
**laplace_latent_tol_neg_binomial_2_log_rng**: - -
[`(array[] int y, array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-5549a3af52023a92a370b327107898233c546f09) (embedded_laplace.html)
- - -**laplace_latent_tol_poisson_2_log_rng**: - - -
[`(array[] int y, array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-b65fcddeadb2a0edba8bda048961a188a3c02e67) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, real eta, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-95b14b2cc2f02a8abcbb4f4d09fffa1901608512) (embedded_laplace.html)
**laplace_latent_tol_poisson_log_rng**: - -
[`(array[] int y, array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-2660b79c1a17a7def5edb58129a847511818959d) (embedded_laplace.html)
+ -
[`(array[] int y, array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : vector`](embedded_laplace.qmd#index-entry-97f692748ebfabf0574b7a8e2edaceb940ee4b6b) (embedded_laplace.html)
**laplace_marginal**: - -
[`(function likelihood_function, tuple(...) likelihood_arguments, vector theta_init, function covariance_function, tuple(...) covariance_arguments) : real`](embedded_laplace.qmd#index-entry-95e1c296ee63b312ffa7ef98140792e7f3fadeac) (embedded_laplace.html)
+ -
[`(function likelihood_function, tuple(...) likelihood_arguments, function covariance_function, tuple(...) covariance_arguments) : real`](embedded_laplace.qmd#index-entry-6da15f0ed076016d814cdc278127896f99d29633) (embedded_laplace.html)
**laplace_marginal_bernoulli_logit**: @@ -1678,12 +1668,12 @@ pagetitle: Alphabetical Index **laplace_marginal_bernoulli_logit_lpmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-f842c1af3fe63a0220e54721301ff2e9ffa6cd52) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-6f7e1622f7b7534d2af01b71426f48c1ed0021b6) (embedded_laplace.html)
**laplace_marginal_bernoulli_logit_lupmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-31c6ccea0ce342412ce6c8b0d31e29eafaa91f5c) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-bb1efa7b7c8782cd786578df678fa7327e6b1e15) (embedded_laplace.html)
**laplace_marginal_neg_binomial_2_log**: @@ -1693,27 +1683,12 @@ pagetitle: Alphabetical Index **laplace_marginal_neg_binomial_2_log_lpmf**: - -
[`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-221578be4523a20b22e6a523ba6457be9b21f792) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-b6bc4ba819ac536112abc8a24eb407fdbbf77fa4) (embedded_laplace.html)
**laplace_marginal_neg_binomial_2_log_lupmf**: - -
[`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-720cc42c0aa5a63ef214bcb27374e2ceabd37455) (embedded_laplace.html)
- - -**laplace_marginal_poisson_2_log**: - - -
[distribution statement](embedded_laplace.qmd#index-entry-47b6396bce572a75c537fe2902d7be0f80b7af4e) (embedded_laplace.html)
- - -**laplace_marginal_poisson_2_log_lpmf**: - - -
[`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-6d26f0abe1ac0de87b82f12b3ed92195ef8cd7f8) (embedded_laplace.html)
- - -**laplace_marginal_poisson_2_log_lupmf**: - - -
[`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-01db5858ff57fce301d7f58e3dcb5cac55998091) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-f4be4d5300d7dbedff1ddbd5fdad1b78f0cfb621) (embedded_laplace.html)
**laplace_marginal_poisson_log**: @@ -1723,17 +1698,17 @@ pagetitle: Alphabetical Index **laplace_marginal_poisson_log_lpmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-9b336691c420ff95c5a2c48f78e69c8e605224f6) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-6ec228e252d52b0e8ee5f3dd836607f8d8cb8a29) (embedded_laplace.html)
**laplace_marginal_poisson_log_lupmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-e7c7252606cc5d8b1f77617eef2af52d21642250) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...)) : real`](embedded_laplace.qmd#index-entry-c092314a5f45deef27ce0e8930a7d28c87ca601d) (embedded_laplace.html)
**laplace_marginal_tol**: - -
[`(function likelihood_function, tuple(...), vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-88d77a8692d68016d68f400d2a2541259bcf24a2) (embedded_laplace.html)
+ -
[`(function likelihood_function, tuple(...), function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-0f4bd0330deef2db7884dc5a4c933f181e1f2a8c) (embedded_laplace.html)
**laplace_marginal_tol_bernoulli_logit**: @@ -1743,12 +1718,12 @@ pagetitle: Alphabetical Index **laplace_marginal_tol_bernoulli_logit_lpmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-98481a032cb5ad2b100b7db23109d3f4f70e0af9) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-25eee002446b22fc3bbeb35b557d0371f7c6fba5) (embedded_laplace.html)
**laplace_marginal_tol_bernoulli_logit_lupmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-4b2493e6cf1e09674de57a4f42b3334b42d02136) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-d3cec88dd3810edb8b58d7897a3c69e6d4172f8d) (embedded_laplace.html)
**laplace_marginal_tol_neg_binomial_2_log**: @@ -1758,27 +1733,12 @@ pagetitle: Alphabetical Index **laplace_marginal_tol_neg_binomial_2_log_lpmf**: - -
[`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-c31e9b3864a282b96f30819d5a39b4995a9eb911) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-4f9b672d0827401038b66b3b7ebf71225e12794e) (embedded_laplace.html)
**laplace_marginal_tol_neg_binomial_2_log_lupmf**: - -
[`(array[] int y | array[] int y_index, real eta, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-17764e1afa12272c00795a785590169d0d70fda4) (embedded_laplace.html)
- - -**laplace_marginal_tol_poisson_2_log**: - - -
[distribution statement](embedded_laplace.qmd#index-entry-257d3af0df49c240293dc3a2d4f8cac109dd54d1) (embedded_laplace.html)
- - -**laplace_marginal_tol_poisson_2_log_lpmf**: - - -
[`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-603a63e43b3518251e5f402a738efe4906015c36) (embedded_laplace.html)
- - -**laplace_marginal_tol_poisson_2_log_lupmf**: - - -
[`(array[] int y | array[] int y_index, vector x, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-e3eca950639aed9276dd8c0e880d3982d7a6c642) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, real eta, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-89629abb0f3629f91a5f95662063961ffaa43651) (embedded_laplace.html)
**laplace_marginal_tol_poisson_log**: @@ -1788,12 +1748,12 @@ pagetitle: Alphabetical Index **laplace_marginal_tol_poisson_log_lpmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-b9f92ff7606b590d41fd9c33e480439337aae673) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-0bdae1e29545db671474b6dd8ad71c13a9653450) (embedded_laplace.html)
**laplace_marginal_tol_poisson_log_lupmf**: - -
[`(array[] int y | array[] int y_index, vector theta_init, function covariance_function, tuple(...), real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-42b965299d48dc219eaed0a64fec07d8d070d47b) (embedded_laplace.html)
+ -
[`(array[] int y | array[] int y_index, vector m, function covariance_function, tuple(...), vector theta_init, real tol, int max_steps, int hessian_block_size, int solver, int max_steps_linesearch) : real`](embedded_laplace.qmd#index-entry-218384ef96a2418722ba7d631c085623afb9bc23) (embedded_laplace.html)
**lbeta**:

diff --git a/src/reference-manual/laplace_embedded.qmd b/src/reference-manual/laplace_embedded.qmd
index 7a9dda324..3912ccc8a 100644
--- a/src/reference-manual/laplace_embedded.qmd
+++ b/src/reference-manual/laplace_embedded.qmd
@@ -5,10 +5,11 @@ pagetitle: Embedded Laplace Approximation
 # Embedded Laplace Approximation
 
 Stan provides functions to perform an embedded Laplace
-approximation for latent Gaussian models. Bearing a slight abuse of language,
-this is sometimes known as an integrated or nested Laplace approximation.
-Details of Stan's implementation can be found in references
-[@Margossian:2020] and [@Margossian:2023].
+approximation for latent Gaussian models, following the procedure described
+by @RasmussenWilliams:2006 and @Rue:2009. This approach is often referred to
+as the integrated or nested Laplace approximation, although the exact details
+of the method can vary. The details of Stan's implementation can be found in
+references [@Margossian:2020; @Margossian:2023].
 
 A standard approach to fit a latent Gaussian model would be to perform
 inference jointly over the latent Gaussian variables and the hyperparameters.
@@ -65,7 +66,7 @@ $\hat p_\mathcal{L}(\phi \mid y) \propto p(\phi) p_\mathcal{L} (y \mid \phi)$
 using any algorithm supported by Stan.
 Approximate posterior draws for the latent Gaussian variables are
 then obtained by first drawing $\phi \sim \hat p_\mathcal{L}(\phi \mid y)$ and
-then $\theta \sim hat p_\mathcal{L}(\theta \mid \phi, y)$.
+then $\theta \sim \hat p_\mathcal{L}(\theta \mid \phi, y)$.
 
 ## Trade-offs of the approximation
 
@@ -86,7 +87,8 @@ disadvantages. First, we need to perform a Laplace approximation each time the
 log marginal likelihood is evaluated, meaning each iteration can
 be expensive. Secondly, the approximation can introduce
 non-negligible error, especially with non-conventional likelihoods (note the prior
-is always multivariate normal).
How these trade-offs are resolved depends on the application; see [@Margossian:2020] for some examples.
+is always multivariate normal). How these trade-offs are resolved depends on
+the application; see @Margossian:2020 for some examples.
 
 ## Details of the approximation
 
@@ -128,7 +130,7 @@ means the Newton solver performs no linesearch. For certain problems, adding a
 linesearch can make the optimization more stable.
 
-The embedded Laplace approximation uses a custom Newton solver,specialized
+The embedded Laplace approximation uses a custom Newton solver, specialized
 to find the mode of $p(\theta \mid \phi, y)$. A key step for efficient
 optimization is to ensure all matrix inversions are numerically stable.
 This can be done using the Woodbury-Sherman-Morrison

From 894f151b412c62c987acc5a8fa10a3aef49a4527 Mon Sep 17 00:00:00 2001
From: Charles Margossian
Date: Tue, 24 Jun 2025 09:58:09 +0400
Subject: [PATCH 26/26] update GP subsection on embedded Laplace.

---
 src/stan-users-guide/gaussian-processes.qmd | 46 +++++++++------------
 1 file changed, 19 insertions(+), 27 deletions(-)

diff --git a/src/stan-users-guide/gaussian-processes.qmd b/src/stan-users-guide/gaussian-processes.qmd
index 6ad62f387..c92c3ec2e 100644
--- a/src/stan-users-guide/gaussian-processes.qmd
+++ b/src/stan-users-guide/gaussian-processes.qmd
@@ -492,7 +492,7 @@ For computational reasons, we may want to integrate out the Gaussian process
 $f$, as was done in the normal output model. Unfortunately, exact
 marginalization over $f$ is not possible when the outcome model is not normal.
 Instead, we may perform *approximate* marginalization with an *embedded
-Laplace approximation* [@Rue:2009; @Margossian:2020].
+Laplace approximation* [@RasmussenWilliams:2006; @Rue:2009; @Margossian:2020].
 To do so, we first use the function `laplace_marginal` to approximate the
 marginal likelihood $p(y \mid \rho, \alpha, a)$ and sample the hyperparameters
 with Hamiltonian Monte Carlo sampling.
Then, we recover the
@@ -533,16 +533,6 @@ functions {
 }
 ```
 
-Furthermore, we must specify an initial value $f_\text{init}$ for the
-numerical optimizer that underlies the Laplace approximation. In our experience,
-we have found setting all values to 0 to be a good default.
-
-```stan
-transformed data {
-  vector[N] f_init = rep_vector(0, N);
-}
-```
-
 We then increment `target` in the model block with the approximation to
 $\log p(y \mid \rho, \alpha, a)$.
 ```stan
@@ -551,7 +541,7 @@ model {
   rho ~ inv_gamma(5, 5);
   alpha ~ std_normal();
   sigma ~ std_normal();
 
-  target += laplace_marginal(ll_function, (a, y), f_init,
+  target += laplace_marginal(ll_function, (a, y),
                              cov_function, (rho, alpha, x, N, delta));
 }
 ```
 Notice that we do not need to construct $f$ explicitly, since it is marginalized
 out. Instead, we recover the GP function in `generated quantities`:
 ```stan
 generated quantities {
-  vector[N] f = laplace_latent_rng(ll_function, (a, y), f_init,
+  vector[N] f = laplace_latent_rng(ll_function, (a, y),
                                    cov_function, (rho, alpha, x, N, delta));
 }
 ```
@@ -571,6 +561,7 @@ functions, the user must set *all* the control parameters.
 
 transformed data {
   // ...
 
+  vector[N] f_init = rep_vector(0, N); // starting point for optimizer.
   real tol = 1e-6; // optimizer's tolerance for Laplace approx.
   int max_num_steps = 1000; // maximum number of steps for optimizer.
   int hessian_block_size = 1; // when hessian of log likelihood is block
- target += laplace_marginal(ll_function, (a, y), f_init, + target += laplace_marginal(ll_function, (a, y), cov_function, (rho, alpha, x, N, delta), - tol, max_num_steps, hessian_block_size, + f_init, tol, max_num_steps, hessian_block_size, solver, max_steps_linesearch); } generated quantities { - vector[N] f = laplace_latent_rng(ll_function, (a, y), f_init, + vector[N] f = laplace_latent_rng(ll_function, (a, y), cov_function, (rho, alpha, x, N, delta), - tol, max_num_steps, hessian_block_size, - solver, max_steps_linesearch); + f_init, tol, max_num_steps, + hessian_block_size, solver, + max_steps_linesearch); } ``` @@ -604,10 +596,10 @@ For details about the control parameters, see @Margossian:2023. Stan also provides support for a limited menu of built-in functions, including -the Poisson distribution with a log link and and prior mean $a$. When using such +the Poisson distribution with a log link and and prior mean $m$. When using such a built-in function, the user does not need to specify a likelihood in the `functions` block. However, the user must strictly follow the signature of the -likelihood: in this case, $a$ must be a vector of length $N$ (to allow for +likelihood: in this case, $m$ must be a vector of length $N$ (to allow for different offsets for each observation $y_i$) and we must indicate which element of $f$ each component of $y$ matches using the variable $y_\text{index}$. In our example, there is a simple pairing $(y_i, f_i)$, however we could imagine @@ -624,25 +616,25 @@ transformed data { // ... transformed parameter { - vector[N] a_vec = rep_vector(a, N); + vector[N] m = rep_vector(a, N); } model { // ... 
- target += laplace_marginal_poisson_2_log_lpmf(y | y_index, a_vec, f_init, + target += laplace_marginal_poisson_log_lpmf(y | y_index, m, cov_function, (rho, alpha, x, N, delta)); } generated quantities { - vector[N] f = laplace_latent_poisson_2_log_rng(y, y_index, a_vec, f_init, + vector[N] f = laplace_latent_poisson_log_rng(y, y_index, m, cov_function, (rho, alpha, x, N, delta)); } ``` As before, we could specify the control parameters for the embedded Laplace -approximation using `laplace_marginal_tol_poisson_2_log_lpmf` and -`laplace_latent_tol_poisson_2_log_nrg`. +approximation using `laplace_marginal_tol_poisson_log_lpmf` and +`laplace_latent_tol_poisson_log_nrg`. Marginalization with a Laplace approximation can lead to faster inference, however it also introduces an approximation error. In practice, this error @@ -705,12 +697,12 @@ functions { // ... model { - target += laplace_marginal(ll_function, (a, z), f_init, + target += laplace_marginal(ll_function, (a, z), cov_function, (rho, alpha, x, N, delta)); } generated quantities { - vector[N] f = laplace_latent_rng(ll_function, (a, z), f_init, + vector[N] f = laplace_latent_rng(ll_function, (a, z), cov_function, (rho, alpha, x, N, delta)); } ```