Commit

Update GPU docs page

RAMitchell committed May 28, 2019
1 parent 6971f1b commit 1a00b0f
Showing 2 changed files with 23 additions and 8 deletions.
29 changes: 21 additions & 8 deletions doc/gpu/index.rst
@@ -67,11 +67,6 @@ The experimental parameter ``single_precision_histogram`` can be set to True to

The device ordinal can be selected using the ``gpu_id`` parameter, which defaults to 0.
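For illustration, a minimal sketch in the style of the snippet further down this page (the surrounding ``param`` dictionary and training call are assumed)::

  param['tree_method'] = 'gpu_hist'
  param['gpu_id'] = 1  # train on the second device rather than the default 0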

The GPU algorithms currently work with CLI, Python and R packages. See :doc:`/build` for details.

@@ -82,11 +77,29 @@ The GPU algorithms currently work with CLI, Python and R packages. See :doc:`/bu
param['max_bin'] = 16
param['tree_method'] = 'gpu_hist'

Single Node Multi-GPU
=====================
Multiple GPUs can be used with the ``gpu_hist`` tree method by setting the ``n_gpus`` parameter, which defaults to 1. If this is set to -1, all available GPUs will be used. If ``gpu_id`` is specified as non-zero, the selected GPU devices will be ``gpu_id`` up to (but not including) ``gpu_id+n_gpus``; note that ``gpu_id+n_gpus`` must be less than or equal to the number of GPUs available on your system. As with GPU vs. CPU, multi-GPU training will not always be faster than a single GPU, since PCI bus bandwidth can limit performance.

.. note:: Enabling multi-GPU training

  The default installation may not enable multi-GPU training. To use multiple GPUs, make sure to read :ref:`build_gpu_support`.

XGBoost supports multi-GPU training on a single machine by specifying the ``n_gpus`` parameter.
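A minimal sketch of single-node multi-GPU training (the LIBSVM file name is a placeholder)::

  import xgboost as xgb

  dtrain = xgb.DMatrix('train.libsvm')  # placeholder training data

  param = {'tree_method': 'gpu_hist',   # GPU histogram algorithm
           'n_gpus': -1}                # -1 selects every available GPU
  bst = xgb.train(param, dtrain, num_boost_round=10)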


Multi-node Multi-GPU Training
=============================
XGBoost supports fully distributed GPU training using `Dask
<https://dask.org/>`_. See the Python documentation :ref:`dask_api` and worked examples `here
<https://github.com/dmlc/xgboost/tree/master/demo/dask>`_.
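A rough sketch of the pattern used in those demos follows; ``xgboost.dask.run`` is listed in the Python API reference, while ``create_worker_dmatrix`` and the exact signatures are assumptions here, so consult the linked examples for the authoritative version::

  import xgboost as xgb
  import dask.array as da
  from dask.distributed import Client, LocalCluster

  def train(params, X, y):
      # Executed on each Dask worker with its local shard of the data
      # (create_worker_dmatrix is assumed from the demos).
      dtrain = xgb.dask.create_worker_dmatrix(X, y)
      return xgb.train(params, dtrain, num_boost_round=10)

  client = Client(LocalCluster())  # or connect to an existing cluster
  X = da.random.random((100000, 20), chunks=(10000, 20))
  y = da.random.random(100000, chunks=10000)
  params = {'tree_method': 'gpu_hist'}
  results = xgb.dask.run(client, train, params, X, y)  # assumed dispatch call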


Objective functions
===================
Most of the objective functions implemented in XGBoost can be run on GPU. The following table shows the current support status.
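Selecting one of the supported objectives is no different from CPU training; a minimal illustration, assuming ``binary:logistic`` is among the ticked entries below::

  param['objective'] = 'binary:logistic'
  param['tree_method'] = 'gpu_hist'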

.. |tick| unicode:: U+2714
.. |cross| unicode:: U+2718

+-----------------+-------------+
2 changes: 2 additions & 0 deletions doc/python/python_api.rst
@@ -76,6 +76,8 @@ Callback API

.. _dask_api:

Dask API
--------

.. automodule:: xgboost.dask

.. autofunction:: xgboost.dask.run
