
fetch datasources from broker endpoint when refresh new datasources #5183

Conversation

liyuance (Contributor)

We should fetch all datasources from the broker endpoint when refreshing datasources.
Otherwise, Superset will not discover a new datasource until the new Druid supervisor task publishes its first segment file.
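For context, the Druid broker exposes the list of queryable datasources over HTTP. A minimal sketch of building that URL, with helper names modeled on the PR's code (`datasources_url` itself is hypothetical, and `druid/v2` is Druid's default broker endpoint):

```python
def get_base_url(host, port):
    # Prepend a scheme if the host lacks one
    if not host.startswith(("http://", "https://")):
        host = "http://" + host
    return f"{host}:{port}"

def datasources_url(broker_host, broker_port, broker_endpoint="druid/v2"):
    # The broker lists every queryable datasource at
    # GET <base>/<broker_endpoint>/datasources
    return f"{get_base_url(broker_host, broker_port)}/{broker_endpoint}/datasources"

print(datasources_url("localhost", 8082))
# http://localhost:8082/druid/v2/datasources
```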

codecov-io commented Jun 12, 2018

Codecov Report

Merging #5183 into master will not change coverage.
The diff coverage is 66.66%.


@@           Coverage Diff           @@
##           master    #5183   +/-   ##
=======================================
  Coverage   77.46%   77.46%           
=======================================
  Files          44       44           
  Lines        8729     8729           
=======================================
  Hits         6762     6762           
  Misses       1967     1967
| Impacted Files | Coverage Δ |
|---|---|
| superset/connectors/druid/models.py | 80.49% <66.66%> (ø) ⬆️ |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 280200f...0c7d617.

@mistercrunch (Member) left a comment:

Can we remove all references to the coordinator as part of this PR?

  def get_pydruid_client(self):
      cli = PyDruid(
          self.get_base_url(self.broker_host, self.broker_port),
          self.broker_endpoint)
      return cli

  def get_datasources(self):
-     endpoint = self.get_base_coordinator_url() + '/datasources'
+     endpoint = self.get_base_broker_url() + '/datasources'
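The broker answers that endpoint with a JSON array of datasource names. A minimal sketch of consuming such a response (the `parse_datasources` helper and the sample payload are illustrative, not the PR's code):

```python
import json

def parse_datasources(body):
    # The broker's /datasources endpoint returns a JSON array of
    # datasource name strings; sort them for a stable listing
    return sorted(json.loads(body))

# Payload shaped like a broker response (hypothetical datasource names)
print(parse_datasources('["wikipedia", "clickstream"]'))
# ['clickstream', 'wikipedia']
```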
I think get_base_coordinator_url can now be removed. Looks like it's also referenced in tests/druid_tests.py.

liyuance (Contributor, Author)

@mistercrunch Sure, I have updated this PR; please review again. Thanks!

liyuance (Contributor, Author)

@mistercrunch Hi, I am confused about the "check failed" on test_urls: why does cluster.get_base_broker_url() return "http://localhost:7980/None"?

mistercrunch (Member)

@liyuance add broker_endpoint in get_test_cluster_obj.
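That "/None" suffix is exactly what string interpolation produces when the test fixture never sets broker_endpoint. A minimal sketch reproducing it (FakeDruidCluster is hypothetical, modeled on the cluster object discussed here):

```python
class FakeDruidCluster:
    # Minimal stand-in for the test cluster object; field names mirror
    # the PR discussion, but this class itself is illustrative
    def __init__(self, broker_host, broker_port, broker_endpoint=None):
        self.broker_host = broker_host
        self.broker_port = broker_port
        self.broker_endpoint = broker_endpoint

    def get_base_broker_url(self):
        # Interpolating an unset (None) endpoint yields ".../None"
        return f"http://{self.broker_host}:{self.broker_port}/{self.broker_endpoint}"

print(FakeDruidCluster("localhost", 7980).get_base_broker_url())
# http://localhost:7980/None  (endpoint missing from the fixture)
print(FakeDruidCluster("localhost", 7980, "druid/v2").get_base_broker_url())
# http://localhost:7980/druid/v2
```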

liyuance (Contributor, Author)

@mistercrunch, thanks so much!

john-bodley (Member)

Do we still need the coordinator for anything? If not we should probably remove it from the model.

@mistercrunch mistercrunch merged commit 7f30b48 into apache:master Jun 13, 2018
@mistercrunch

Oops, I merged and then saw your comment, @john-bodley. I was planning on following up with a PR doing this.

timifasubaa pushed a commit to airbnb/superset-fork that referenced this pull request Jul 25, 2018
…pache#5183)

* fetch datasources from broker endpoint when refresh new datasources

* remove get_base_coordinator_url as out of use

* add broker_endpoint in get_test_cluster_obj
wenchma pushed a commit to wenchma/incubator-superset that referenced this pull request Nov 16, 2018
…pache#5183)

* fetch datasources from broker endpoint when refresh new datasources

* remove get_base_coordinator_url as out of use

* add broker_endpoint in get_test_cluster_obj
@mistercrunch added the 🏷️ bot (used by `supersetbot` to track PRs auto-tagged with release labels) and 🚢 0.26.0 labels on Feb 27, 2024