Discussion:
Automating testing for the netinst and live images
Roland Clobus
2023-07-26 09:20:02 UTC
Hello Debian-cd Team and Phil,

Now that the busy period for releasing Debian 12.1 is over and some of
the live images have been verified by openQA (via me), would it make
sense to think about automating the tests for the officially released
Debian images, and the weekly builds as well?

My thoughts/ramblings:
* As soon as an image has been generated and made accessible via a URL,
openQA will be invoked and start to download and test the image (i.e.
the generator will trigger openQA instead of openQA polling); a sketch
of such a trigger follows after this list
** Phil will be able to generate API keys for openQA
** I've already implemented a similar setup on Jenkins [1] for the live
images [6]
** Phil has already implemented a similar setup for the netinst images,
using polling [2]
* By testing on virtualised hardware, many of the manual, tediously
repetitive tests can be verified to work correctly, which could make the
tests on real hardware faster, because less needs to be tested
* Automated tests would automatically see e.g. kernel mismatches in the
installer [3]
** However, for the live images (based on testing and unstable) I've
implemented an automatic kernel selection, which saves additional
maintenance [4]
* The automated tests will show issues earlier, but that would require
regular monitoring/dashboarding
** I've tried to tag the issues that I've reported [5]
* For the medium to long term, would it make sense to shift these tests
from debian.net machines to debian.org machines?
** The workload on osuosl3-amd64.d.n is already rather high
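
To illustrate the first point: a minimal sketch of what such a trigger
could look like, using the standard openqa-cli client. The host, key,
secret and the DISTRI/FLAVOR/BUILD values below are only placeholders
and would need to match whatever the job groups on openqa.debian.net
actually expect:

  # run by the image generator once the ISO is published at $ISO_URL
  openqa-cli api --host https://openqa.debian.net \
      --apikey "$OPENQA_KEY" --apisecret "$OPENQA_SECRET" \
      -X POST isos \
      ISO_URL="$ISO_URL" \
      DISTRI=debian VERSION=testing FLAVOR=live ARCH=amd64 \
      BUILD="$(date -u +%Y%m%d_%H%M)-testing-amd64"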

That's already a lot for a single mail,
with kind regards,
Roland Clobus

[1] https://jenkins.debian.net/view/live/
[2] https://openqa.debian.net/group_overview/10
[3]
https://openqa.debian.net/tests/overview?distri=debian&version=testing&build=20230724_1119-testing-amd64&groupid=10
[4]
https://salsa.debian.org/live-team/live-build/-/blob/master/scripts/build/installer_debian-installer#L309=
[5]
https://udd.debian.org/cgi-bin/bts-usertags.cgi?user=debian-qa%40lists.debian.org&tag=openqa&format=html#results
[6] https://openqa.debian.net/group_overview/14
Philip Hands
2023-07-27 06:00:02 UTC
Post by Roland Clobus
Hello Debian-cd Team and Phil,
Now that the busy period for releasing Debian 12.1 is over and some of
the live images have been verified by openQA (via me), would it make
sense to think about automating the tests for the officially released
Debian images, and the weekly builds as well?
* As soon as an image has been generated and made accessible via a URL,
openQA will be invoked and start to download and test the image (i.e.
the generator will trigger openQA instead of openQA polling)
** Phil will be able to generate API keys for openQA
In fact I think anyone can do that (having logged in, using salsa) but
of course I'm very happy to do it.
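
For reference, once a key/secret pair has been generated in the web UI,
the client side only needs something along these lines in
~/.config/openqa/client.conf (the values here are obviously made up):

  [openqa.debian.net]
  key = 1234567890ABCDEF
  secret = FEDCBA0987654321
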
Post by Roland Clobus
** I've already implemented a similar setup on Jenkins [1] for the live
images [6]
** Phil has already implemented a similar setup for the netinst images,
using polling [2]
I believe this implements openqa-cli triggering as part of image creation:

https://salsa.debian.org/images-team/setup/-/merge_requests/4

but it needs testing (hence the WIP).
Post by Roland Clobus
* By testing on virtualised hardware, many of the manual, tediously
repetitive tests can be verified to work correctly, which could make the
tests on real hardware faster, because less needs to be tested
* Automated tests would automatically see e.g. kernel mismatches in the
installer [3]
** However, for the live images (based on testing and unstable) I've
implemented an automatic kernel selection, which saves additional
maintenance [4]
* The automated tests will show issues earlier, but that would require
regular monitoring/dashboarding
** I've tried to tag the issues that I've reported [5]
* For the medium to long term, would it make sense to shift these tests
from debian.net machines to debian.org machines?
** The workload on osuosl3-amd64.d.n is already rather high
It would be good to have more workers.

I'm assuming that one way of doing that would be to spin up cloud
instances, but my tentative attempts to work out how that's done have
not yet borne fruit (one issue is that the worker needs to be able to
run KVM, and since a cloud instance is already a VM, that requires
nested VMs, which seems problematic).
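
For what it's worth, a quick sanity check for whether a given cloud
instance could run the workers at all (i.e. whether nested KVM is
actually available inside it) is something like:

  # on the cloud instance that would run the openQA worker:
  grep -c -E 'vmx|svm' /proc/cpuinfo   # non-zero: the vCPU exposes VT-x/AMD-V
  ls -l /dev/kvm                       # must exist once the kvm modules are loaded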

I recently had a machine at Equinix for testing arm64 workers, which
worked really well, but which I realised after starting it up was going
to be rather expensive (their arm64 offering being Ampere Altra Q80-30s,
which have 80 cores and cost $2.50 an hour).

We've since tried a VM on altra.d.n, which allowed me to learn that its
CPU (Neoverse-N1) is not quite new enough to do nested VMs (it looks
like a Neoverse-N2-based machine ought to work, judging from comments in
the related kernel patches).

Running the workers on altra itself seems like it ought to be an option,
but my kids are currently off school keeping me busy, so I've not been
pushing that yet.

So, if anyone has ideas for getting nested-VM-capable cloud instances
working (or just more amd64 hosting), or for getting arm64 resources
that would do the trick (or e.g. riscv machines I could play on), I'm
very interested, because the current workers are only just keeping up,
so adding more jobs at present is going to be frustrating.

Cheers, Phil.
--
Philip Hands -- https://hands.com/~phil