Treat obsoleted and skipped results as passed #363

Open: kalikiana wants to merge 1 commit into master

Conversation

kalikiana (Member) commented:
This can be configured via allowed_results.

See: https://progress.opensuse.org/issues/174583
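
A minimal sketch of what that configuration might look like, assuming allowed_results is a regex alternation as consumed by the jq test() call in the diff below; the default value shown is illustrative, not taken from the repository:

```sh
# Illustrative default; the scheduling script interpolates this into
# test("(${allowed_results})"), so it is treated as a regex alternation.
allowed_results="${allowed_results:-passed|obsoleted|skipped}"
```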

@Martchus (Contributor) left a comment:
This only adjusts which jobs we get the HDD image from. I think we actually need to adjust the scheduling/monitoring below as well.

```diff
@@ -53,7 +54,8 @@ job_templates:
     PARALLEL_WITH: ping_server
 EOF
 
-hdd=$(runcli openqa-cli api --host "$openqa_url" jobs version="$version" scope=relevant arch="$arch" flavor="$flavor" test="$test_name" latest=1 | runjq -r '.jobs | map(select(.result == "passed")) | max_by(.settings.BUILD) .settings.HDD_1')
```
A Member commented:
This is just the HDD selection; we cannot use HDD_1 from obsoleted/skipped jobs, right?

```diff
@@ -53,7 +54,8 @@ job_templates:
     PARALLEL_WITH: ping_server
 EOF
 
-hdd=$(runcli openqa-cli api --host "$openqa_url" jobs version="$version" scope=relevant arch="$arch" flavor="$flavor" test="$test_name" latest=1 | runjq -r '.jobs | map(select(.result == "passed")) | max_by(.settings.BUILD) .settings.HDD_1')
+jobs=$(runcli openqa-cli api --host "$openqa_url" jobs version="$version" scope=relevant arch="$arch" flavor="$flavor" test="$test_name" latest=1 | runjq -r .jobs)
+hdd=$(echo $jobs | runjq -r "map(select(.result | test(\"(${allowed_results})\"))) | max_by(.settings.BUILD) .settings.HDD_1")
 time openqa-cli schedule \
```
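
For illustration, the new selection logic run against made-up job data, using plain jq instead of the repository's runcli/runjq wrappers and an assumed allowed_results value:

```sh
allowed_results='passed|obsoleted|skipped'  # assumed value
echo '{"jobs":[
  {"result":"failed","settings":{"BUILD":"20250107-1","HDD_1":"old.qcow2"}},
  {"result":"obsoleted","settings":{"BUILD":"20250109-1","HDD_1":"new.qcow2"}}
]}' | jq -r ".jobs | map(select(.result | test(\"(${allowed_results})\"))) | max_by(.settings.BUILD) .settings.HDD_1"
# prints: new.qcow2 (failed is filtered out, but the obsoleted job now counts)
```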
A Member commented:
I guess this call is the problematic one.

kalikiana (Member, Author) replied:
Right. I'm looking into extending the monitor command now: os-autoinst/openQA#6101
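
For context, a hedged sketch of how the monitor subcommand is invoked today; the job IDs are hypothetical, and following restarted/cloned jobs is what the linked upstream change explores, not something the current command is shown to do:

```sh
# Block until the given openQA jobs are done; the exit code reflects their results.
openqa-cli monitor --host "$openqa_url" 4703 4704
```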

@okurz (Member) left a comment:
That's not right. We can't just assume from obsoleted jobs that the cloned jobs passed. That's why I suggested following the cloned jobs, not within our downstream scripting but via upstream API routes.
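
A minimal sketch of what "following" via the existing jobs API could look like, assuming the job object exposes clone_id for restarted jobs; the function name and structure are hypothetical:

```sh
# Walk a job's clone chain until no further clone exists, printing the newest id.
follow_clones() {
  local id=$1 clone
  while clone=$(openqa-cli api --host "$openqa_url" "jobs/$id" | jq -r '.job.clone_id // empty') && [ -n "$clone" ]; do
    id=$clone
  done
  echo "$id"
}
```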

@perlpunk (Contributor) commented on Jan 9, 2025:

> That's not right. We can't just assume from obsoleted jobs that the cloned jobs passed. That's why I suggested following the cloned jobs, not within our downstream scripting but via upstream API routes.

We discussed in the estimation that there were no automatic restarts, so there is nothing we can "follow".
What we could do: if we get obsoleted jobs, we could retry the whole schedule call. For that we would probably need to tee the JSON output and filter it ourselves instead of relying on the exit code.
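
A hedged sketch of that retry idea; the output shape of openqa-cli schedule is assumed here rather than taken from its documentation, and grep stands in for proper JSON filtering:

```sh
# Retry the whole schedule call a few times if any job ended up obsoleted.
for attempt in 1 2 3; do
  out=$(openqa-cli schedule ... | tee /dev/stderr)           # "..." elided as above
  echo "$out" | grep -q '"result" *: *"obsoleted"' || break  # assumed output shape
done
```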

@okurz (Member) commented on Jan 9, 2025:

@perlpunk isn't that a symptom of a more significant problem? Our scripts-ci shouldn't need to be that sophisticated; rather, openQA should behave in a more usable manner.

@Martchus (Contributor) commented on Jan 9, 2025:

I'm not sure whether obsolescence is tracked in the database, so that we could easily "follow" the new set of jobs in the monitoring command of openqa-cli. Even if it were tracked, it would probably not be super easy to implement (considering we deal with sets of jobs here, and the concrete set of jobs might have changed).
