
[RFC] Version 0.90 release candidate #4475

Merged: 3 commits into dmlc:master on May 20, 2019
Conversation

@hcho3 (Collaborator) commented on May 17, 2019

You can try the 0.90 release candidate by downloading the following:

v0.90 (2019.05.18)

XGBoost Python package drops Python 2.x (#4379, #4381)

Python 2.x is reaching its end-of-life at the end of this year. Many scientific Python packages are now moving to drop Python 2.x.

XGBoost4J-Spark now requires Spark 2.4.x (#4377)

Roadmap: better performance scaling for multi-core CPUs (#4310)

Roadmap: Harden distributed training (#4250)

  • Make distributed training in XGBoost more robust by hardening Rabit, which implements the AllReduce primitive. In particular, improve test coverage on mechanisms for fault tolerance and recovery. Special thanks to @chenqin.

New feature: Multi-class metric functions for GPUs (#4368)

  • Metrics for multi-class classification have been ported to GPU: merror, mlogloss. Special thanks to @trivialfis.
  • With a supported metric, XGBoost selects the appropriate device(s) automatically, based on your system and the n_gpus parameter (see the sketch below).
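  • A minimal sketch of evaluating a multi-class metric on GPU (assumes a multi-class DMatrix named dtrain already exists; the parameter values are illustrative, not defaults):

    import xgboost

    params = {
        'objective': 'multi:softprob',
        'num_class': 3,
        'eval_metric': 'mlogloss',   # multi-class metric now computed on GPU
        'tree_method': 'gpu_hist',
        'n_gpus': 1,
    }
    bst = xgboost.train(params, dtrain, num_boost_round=10,
                        evals=[(dtrain, 'train')])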

New feature: Scikit-learn-like random forest API (#4148, #4255, #4258)

  • The XGBoost Python package now offers the XGBRFClassifier and XGBRFRegressor APIs to train random forests (see the sketch below). See the tutorial. Special thanks to @canonizer.
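  • A minimal sketch of the new scikit-learn-style random forest interface (the toy dataset and hyperparameter values are illustrative, not taken from the tutorial):

    import xgboost
    from sklearn.datasets import make_classification

    # Train a random forest of 100 trees with the familiar fit/predict API
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    rf = xgboost.XGBRFClassifier(n_estimators=100, max_depth=6)
    rf.fit(X, y)
    print(rf.predict(X[:5]))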

New feature: use external memory in GPU predictor (#4284, #4396, #4438, #4457)

  • It is now possible to make predictions on GPU when the input is read from external memory. This is useful when you want to make predictions with a big dataset that does not fit into GPU memory. Special thanks to @rongou, @canonizer, @sriramch.

    import xgboost

    # 'bst' is a previously trained Booster; the '#dtest.cache' suffix enables external memory
    dtest = xgboost.DMatrix('test_data.libsvm#dtest.cache')
    bst.set_param('predictor', 'gpu_predictor')
    bst.predict(dtest)
  • Coming soon: GPU training (gpu_hist) with external memory

New feature: XGBoost can now handle comments in LIBSVM files (#4430)

New feature: Embed XGBoost in your C/C++ applications using CMake (#4323, #4333, #4453)

  • It is now easier than ever to embed XGBoost in your C/C++ applications. In your CMakeLists.txt, add xgboost::xgboost as a linked library:

    find_package(xgboost REQUIRED)
    add_executable(api-demo c-api-demo.c)
    target_link_libraries(api-demo xgboost::xgboost)

  • XGBoost C API documentation is available. Special thanks to @trivialfis.

Performance improvements

Bug-fixes

API changes

Maintenance: Refactor C++ code for legibility and maintainability

Maintenance: testing, continuous integration, build system

Usability Improvements, Documentation

Acknowledgement

Contributors: Nan Zhu (@CodingCat), Adam Pocock (@Craigacp), Daniel Hen (@Daniel8hen), Jiaxiang Li (@JiaxiangBU), Rory Mitchell (@RAMitchell), Egor Smirnov (@SmirnovEgorRu), Andy Adinets (@canonizer), Jonas (@elcombato), Harry Braviner (@harrybraviner), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), James Lamb (@jameslamb), Jean-Francois Zinque (@jeffzi), Yang Yang (@jokerkeny), Mayank Suman (@mayanksuman), jess (@monkeywithacupcake), Hajime Morrita (@omo), Ravi Kalia (@project-delphi), @ras44, Rong Ou (@rongou), Shaochen Shi (@shishaochen), Xu Xiao (@sperlingxx), @sriramch, Jiaming Yuan (@trivialfis), Christopher Suchanek (@wsuchy), Bozhao (@yubozhao)

Reviewers: Nan Zhu (@CodingCat), Adam Pocock (@Craigacp), Daniel Hen (@Daniel8hen), Jiaxiang Li (@JiaxiangBU), Laurae (@Laurae2), Rory Mitchell (@RAMitchell), Egor Smirnov (@SmirnovEgorRu), @alois-bissuel, Andy Adinets (@canonizer), Chen Qin (@chenqin), Harry Braviner (@harrybraviner), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), @jakirkham, James Lamb (@jameslamb), Julien Schueller (@jschueller), Mayank Suman (@mayanksuman), Hajime Morrita (@omo), Rong Ou (@rongou), Sara Robinson (@sararob), Shaochen Shi (@shishaochen), Xu Xiao (@sperlingxx), @sriramch, Sean Owen (@srowen), Sergei Lebedev (@superbobry), Yuan (Terry) Tang (@terrytangyuan), Theodore Vasiloudis (@thvasilo), Matthew Tovbin (@tovbinm), Jiaming Yuan (@trivialfis), Xin Yin (@xydrolase)

@dmlc/xgboost-committer

@terrytangyuan (Member) left a comment


Thanks! LGTM

@CodingCat (Member) commented on May 17, 2019

Found some formatting problems in the docs (#4476).

Ran some integration tests; results are consistent with previous versions.

Expecting some users will come and ask why their xgboost4j-spark throws exceptions in 0.90 (breaking changes in missing value handling).

@hcho3 (Collaborator, Author) commented on May 20, 2019

Will merge after tests pass.

@hcho3 hcho3 merged commit 515f5f5 into dmlc:master May 20, 2019
@hcho3 hcho3 mentioned this pull request May 20, 2019
@hcho3 hcho3 deleted the release_0.90 branch May 20, 2019 21:29
@lock lock bot locked as resolved and limited conversation to collaborators Aug 18, 2019