botorch.fit

Model fitting routines.

botorch.fit.fit_gpytorch_mll(mll, closure=None, optimizer=None, closure_kwargs=None, optimizer_kwargs=None, **kwargs)[source]

Clearing house for fitting models passed as GPyTorch MarginalLogLikelihoods.

Parameters:
  • mll (MarginalLogLikelihood) – A GPyTorch MarginalLogLikelihood instance.

  • closure (Callable[[], tuple[Tensor, Sequence[Tensor | None]]] | None) – Forward-backward closure for obtaining objective values and gradients. Responsible for setting parameters’ grad attributes. If no closure is provided, one will be obtained by calling get_loss_closure_with_grads.

  • optimizer (Callable | None) – User-specified optimization algorithm. When optimizer is None, this keyword argument is omitted when calling the dispatcher.

  • closure_kwargs (dict[str, Any] | None) – Keyword arguments passed when calling closure.

  • optimizer_kwargs (dict[str, Any] | None) – A dictionary of keyword arguments passed when calling optimizer.

  • **kwargs (Any) – Keyword arguments passed down through the dispatcher to fit subroutines. Unexpected keywords are ignored.

Returns:

The mll instance. If fitting succeeded, then mll will be in evaluation mode, i.e. mll.training == False. Otherwise, mll will be in training mode.

Return type:

MarginalLogLikelihood
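
Example

A minimal usage sketch (not part of the original docstring), assuming train_X and train_Y tensors are already defined. The commented line illustrates forwarding options to the default scipy-based optimizer via optimizer_kwargs:

>>> from botorch.fit import fit_gpytorch_mll
>>> from botorch.models import SingleTaskGP
>>> from gpytorch.mlls import ExactMarginalLogLikelihood
>>> gp = SingleTaskGP(train_X, train_Y)
>>> mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
>>> mll = fit_gpytorch_mll(mll)
>>> # e.g., assuming the default scipy optimizer:
>>> # fit_gpytorch_mll(mll, optimizer_kwargs={"options": {"maxiter": 100}})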

botorch.fit.fit_fully_bayesian_model_nuts(model, max_tree_depth=6, warmup_steps=512, num_samples=256, thinning=16, disable_progbar=False, jit_compile=False)[source]

Fit a fully Bayesian model using the No-U-Turn Sampler (NUTS).

Parameters:
  • model (FullyBayesianSingleTaskGP | SaasFullyBayesianMultiTaskGP) – The fully Bayesian model to be fitted.

  • max_tree_depth (int) – Maximum tree depth for NUTS.

  • warmup_steps (int) – The number of burn-in steps for NUTS.

  • num_samples (int) – The number of MCMC samples. Note that with thinning, num_samples / thinning samples are retained.

  • thinning (int) – The amount of thinning. Every nth sample is retained.

  • disable_progbar (bool) – A boolean indicating whether to disable the progress bar and diagnostics during MCMC.

  • jit_compile (bool) – Whether to use jit compilation. Using jit may be ~2X faster (a rough estimate), but it also increases memory usage and sometimes results in runtime errors, e.g., https://github.com/pyro-ppl/pyro/issues/3136.

Return type:

None

Example

>>> gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
>>> fit_fully_bayesian_model_nuts(gp)

botorch.fit.get_fitted_map_saas_model(train_X, train_Y, train_Yvar=None, input_transform=None, outcome_transform=None, tau=None, optimizer_kwargs=None)[source]

Get a fitted MAP SAAS model with a Matern kernel.

Parameters:
  • train_X (Tensor) – Tensor of shape n x d with training inputs.

  • train_Y (Tensor) – Tensor of shape n x 1 with training targets.

  • train_Yvar (Tensor | None) – Optional tensor of shape n x 1 with observed noise, inferred if None.

  • input_transform (InputTransform | None) – An optional input transform.

  • outcome_transform (OutcomeTransform | None) – An optional outcome transform.

  • tau (float | None) – Fixed value of the global shrinkage parameter tau. If None, the model places an HC(0.1) prior on tau.

  • optimizer_kwargs (dict[str, Any] | None) – A dict of options for the optimizer passed to fit_gpytorch_mll.

Returns:

A fitted SingleTaskGP with a Matern kernel.

Return type:

SingleTaskGP
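
Example

A usage sketch (not part of the original docstring), assuming train_X (n x d) and train_Y (n x 1) tensors are available; test_X is a hypothetical tensor of candidate points:

>>> from botorch.fit import get_fitted_map_saas_model
>>> model = get_fitted_map_saas_model(train_X, train_Y)
>>> posterior = model.posterior(test_X)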

botorch.fit.get_fitted_map_saas_ensemble(train_X, train_Y, train_Yvar=None, input_transform=None, outcome_transform=None, taus=None, num_taus=4, optimizer_kwargs=None)[source]

Get a fitted SAAS ensemble using several different tau values.

Parameters:
  • train_X (Tensor) – Tensor of shape n x d with training inputs.

  • train_Y (Tensor) – Tensor of shape n x 1 with training targets.

  • train_Yvar (Tensor | None) – Optional tensor of shape n x 1 with observed noise, inferred if None.

  • input_transform (InputTransform | None) – An optional input transform.

  • outcome_transform (OutcomeTransform | None) – An optional outcome transform.

  • taus (Tensor | list[float] | None) – Global shrinkage values to use. If None, num_taus values are sampled from an HC(0.1) distribution.

  • num_taus (int) – The number of tau values to sample when taus is None.

  • optimizer_kwargs (dict[str, Any] | None) – A dict of options for the optimizer passed to fit_gpytorch_mll.

Returns:

A fitted SaasFullyBayesianSingleTaskGP with a Matern kernel.

Return type:

SaasFullyBayesianSingleTaskGP
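
Example

A usage sketch (not part of the original docstring), assuming train_X and train_Y tensors are available; taus is left as None, so num_taus values are sampled from the HC(0.1) prior, and test_X is a hypothetical tensor of candidate points:

>>> from botorch.fit import get_fitted_map_saas_ensemble
>>> ensemble = get_fitted_map_saas_ensemble(train_X, train_Y, num_taus=4)
>>> posterior = ensemble.posterior(test_X)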