
Prepare continuous benchmarking #138

Open
kou opened this issue Jun 5, 2024 · 1 comment

Comments


kou commented Jun 5, 2024

Objective

  • Prevent/detect performance regression automatically

Requirements

  • Detect performance regressions in pull requests (CI fails)
  • Detect performance regressions in commits on main (CI fails)
  • Prefer ease of maintenance over strict correctness
    • For example: we don't want to maintain our own server, even though GitHub-hosted runners may not provide stable computation resources.

Nice to have

  • Visualize performance trends

kou commented Jun 5, 2024

We may be able to use https://github.com/benchmark-action/github-action-benchmark for this.
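A minimal workflow sketch for wiring this up, assuming a `benchmark:continuous` Rake task that writes `benchmark.result.json` (as in the diff below); the trigger and alert settings here are illustrative, not a final design:

```yaml
# Hypothetical .github/workflows/benchmark.yml sketch.
name: Benchmark
on:
  push:
    branches: [main]
  pull_request:
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      # Run the benchmarks and emit benchmark.result.json
      - run: bundle exec rake benchmark:continuous
      # Compare against previous results and fail CI on regression
      - uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: customBiggerIsBetter
          output-file-path: benchmark.result.json
          fail-on-alert: true
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

With `fail-on-alert: true`, the action fails the job when a benchmark regresses past its alert threshold, which covers both the PR and main-commit requirements above.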

We may be able to generate input for customBiggerIsBetter from the benchmark-driver result, with something like:
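For reference, the customBiggerIsBetter tool consumes a JSON array of name/value/unit entries; the names and numbers here are purely illustrative:

```json
[
  {
    "name": "parse - json - ruby 3.3",
    "value": 1234.5,
    "unit": "i/s"
  }
]
```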

diff --git a/Rakefile b/Rakefile
index 76a5629..f8cb30c 100644
--- a/Rakefile
+++ b/Rakefile
@@ -30,9 +30,12 @@ end
 load "#{__dir__}/tasks/tocs.rake"
 
 benchmark_tasks = []
+benchmark_record_yamls = []
+benchmark_record_tasks = []
 namespace :benchmark do
   Dir.glob("benchmark/*.yaml").sort.each do |yaml|
     name = File.basename(yaml, ".*")
+    next if name.end_with?(".record")
     env = {
       "RUBYLIB" => nil,
       "BUNDLER_ORIG_RUBYLIB" => nil,
@@ -62,6 +65,43 @@ namespace :benchmark do
         benchmark_tasks << "benchmark:#{name}:small"
       end
     end
+
+    namespace name do
+      benchmark_record_yaml = yaml.gsub(/\.yaml\z/, ".record.yaml")
+      benchmark_record_yamls << benchmark_record_yaml
+      desc "Record #{name} benchmark result"
+      task :record do
+        sh(env, *command_line, "--output", "record")
+        mv("benchmark_driver.record.yml", benchmark_record_yaml)
+      end
+      benchmark_record_tasks << "benchmark:#{name}:record"
+    end
+  end
+
+  desc "Output benchmark result for continuous benchmarking"
+  task :continuous => benchmark_record_tasks do
+    require "benchmark_driver"
+    require "json"
+    require "yaml"
+
+    results = []
+    benchmark_record_yamls.each do |yaml|
+      benchmark_name = File.basename(yaml, ".record.yaml")
+      record = YAML.unsafe_load_file(yaml)
+      record["job_warmup_context_result"].each do |job, warmup_results|
+        warmup_results[false].each do |context, result|
+          result.values.each do |metric, value|
+            results << {
+              "name" => "#{benchmark_name} - #{job.name} - #{context.name}",
+              "value" => value,
+              "unit" => metric.unit,
+            }
+            break
+          end
+        end
+      end
+    end
+    File.write("benchmark.result.json", JSON.pretty_generate(results))
   end
 end
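
The flattening step in the `benchmark:continuous` task above can be sketched as standalone Ruby. In the real record file, `YAML.unsafe_load_file` reconstructs BenchmarkDriver job/context/metric objects; plain strings and hashes stand in for them here, and the benchmark name and unit are assumptions for illustration:

```ruby
require "json"

# Simplified stand-in for the structure benchmark-driver writes to
# benchmark_driver.record.yml (real records contain BenchmarkDriver objects).
record = {
  "job_warmup_context_result" => {
    "to_json" => {
      false => {
        "ruby 3.3" => { "ips" => 1234.5 },
      },
    },
  },
}

benchmark_name = "json-generate" # hypothetical benchmark name
results = []
record["job_warmup_context_result"].each do |job, warmup_results|
  # warmup_results[false] holds the measured (non-warmup) runs.
  warmup_results[false].each do |context, result|
    # Take only the first metric of each result, as the diff's `break` does.
    _metric, value = result.first
    results << {
      "name" => "#{benchmark_name} - #{job} - #{context}",
      "value" => value,
      "unit" => "i/s", # assumed unit for an ips-style metric
    }
  end
end

puts JSON.pretty_generate(results)
```

Each record file thus contributes one entry per job/context pair to `benchmark.result.json`, in the shape customBiggerIsBetter expects.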

(I may not have time to complete this. If someone is interested, please work on it.)
