This disables the upgrade testing temporarily with the intention of
adding it back again when we build 3.7 images and can test the 3.6 to
3.7 upgrade.
Change-Id: I102795248ab9e692e4558257d94d5b3e0b73267e
I noticed that I717ab8f9b980b363fdddaa28e76cd269b1e4d876 didn't run
the nodepool job, so changes that modify credentials here weren't
being tested. Add it.
Change-Id: I34c3312eac08141f25f554d41ef399e97aab6b32
These dummy variables were for the nodepool.yaml template during
testing, but are no longer referenced. Clean them up.
Change-Id: I717ab8f9b980b363fdddaa28e76cd269b1e4d876
This reverts commit e76e0089d1.
Reason for revert: Facebook mirrors have been out of sync for more
than 24 hours, which breaks CI and blocks progress. Let's try another
rax mirror this time.
Change-Id: Ibb0a23ee790aeea5e4f5d05ce731791bd380976a
This is just enough to get the cloud-launcher working on the new
Linaro cloud. It's a bit of a manual setup on much newer hardware, so
we're trying to do things in small steps.
Change-Id: Ibd451e80bbc6ba6526ba9470ac48b99a981c1a8d
This updates whitelist_externals to allowlist_externals for tox v4
compatibility. We override ensure_tox_version from the ensure-tox
role to force the latest tox to be used, which should verify in this
change that we are compatible.
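As a sketch (the job name and version spec here are illustrative, not
taken from this change), the override looks something like:

  # Hypothetical Zuul job stanza: ensure_tox_version is consumed by
  # the ensure-tox role; pointing it at a v4 release exercises the
  # renamed allowlist_externals option in CI.
  - job:
      name: my-tox-job
      vars:
        ensure_tox_version: '>=4'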
Change-Id: I0ac3c1aa35c949d9b83a8607d43ed29f2fca480e
This should only be landed as part of our upgrade process. This change
will not upgrade Gerrit properly on its own.
Note, we keep Gerrit 3.5 image builds and 3.5 -> 3.6 upgrade jobs in
place until we are certain we won't roll back. Once we've crossed that
threshold we can drop 3.5 image builds, add 3.7 image builds, and update
the upgrade testing to perform a 3.6 -> 3.7 upgrade.
Change-Id: I40c4f96cc40edc5caeb32a1af80069ef784967fd
This provider is going away and the depends-on change should be the last
step to remove it from nodepool. Once that is complete we can stop
trying to manage the mirror there (it will need to be manually shut
down), stop managing our user accounts, and stop writing clouds.yaml
files that include these details for inap/iweb on nodepool nodes.
Note we leave the bridge clouds.yaml content in place so that we can
manually clean up the mirror node. We can safely remove that clouds.yaml
content in the future without much impact.
Depends-On: https://review.opendev.org/c/openstack/project-config/+/867264
Change-Id: I01338712aeae79aa78e7f61d332a2290093c8a1b
Unlike docker exec, docker-compose exec allocates a tty by default.
Newer Ansible doesn't provision a tty, which now causes
docker-compose exec to fail under Ansible. Address this by passing -T
to docker-compose exec commands, which stops the tty allocation. We
didn't need one to gracefully stop Zuul services anyway.
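A minimal sketch of the adjusted invocation (service and command
names are assumptions for illustration):

  # -T stops docker-compose exec allocating a tty, which newer
  # Ansible no longer provides.
  - name: Gracefully stop the Zuul scheduler
    shell: docker-compose exec -T scheduler zuul-scheduler stop
    args:
      chdir: /etc/zuul-scheduler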
Change-Id: Ib52f184f771ae4530f6b6531257dda5d8443043c
I'm not sure whether this is clearer or not (which is why I proposed
it separately here). From inspection of the code, adding
"state: latest" just means Ansible runs "pip install -U", which is
pretty much the same thing as adding --upgrade.
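For comparison, a sketch of the two spellings (the package name is
just an example):

  # The pip module's "state: latest" internally runs "pip install -U"
  - name: Upgrade via module state
    pip:
      name: openstacksdk
      state: latest

  # ...which is effectively the same as passing --upgrade explicitly
  - name: Upgrade via extra_args
    pip:
      name: openstacksdk
      extra_args: --upgrade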
Change-Id: I6e31523686555e33d062f3b05f2385d7e21e2620
This was inspired by a similar change,
I78a914f71cef687f09fcfee0f3f498b79d810f5d. In the bootstrap-bridge
production job, we call create-venv every time we fire off a set of
production jobs. While it's good to keep the venv updated, it doesn't
need to happen hourly.
This writes the requirements to a file and only installs them if the
template updates (i.e. if the venv is fresh, the dependencies have
changed, or the daily timestamp has rolled over).
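Roughly, the pattern is the following (paths and names are
illustrative, not the exact role contents):

  # template: is idempotent, so this only reports "changed" when the
  # rendered requirements differ from what is already on disk.
  - name: Write venv requirements
    template:
      src: requirements.txt.j2
      dest: /var/venv/requirements.txt
    register: venv_requirements

  # Only touch the venv when the requirements actually changed.
  - name: Install requirements into the venv
    pip:
      requirements: /var/venv/requirements.txt
      virtualenv: /var/venv
      extra_args: --upgrade
    when: venv_requirements is changed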
Change-Id: I7a70b73fb907b923f47a2a0de72e21649c15e05f
In reviews for I3696740112fa691d1700040b557f53f6721393e7 clarkb
correctly pointed out that a constraint like ansible<8 will never
result in the production venv being updated.
The point of having the requirements.txt was to avoid a full update
run on the venv on every one of its frequent runs.
A good in-between seems to be writing the current day's timestamp
into the requirements file. Since the template: return value is based
on comparing the hashes of the old and new content (we suspected
this, but I also double-confirmed with a local test), the template is
updated just once a day. Ergo we will run an --upgrade pass on the
ansible-venv just once a day.
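As an illustration (the pins are examples only), the templated
requirements file might look like:

  # requirements.txt.j2 -- the date comment below is the only part
  # that changes day to day, so the template: task hashes differently
  # at most once a day and the upgrade run fires at most once a day.
  # Rendered on: {{ ansible_date_time.date }}
  ansible<8
  openstacksdk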
Change-Id: I78a914f71cef687f09fcfee0f3f498b79d810f5d
Change I4789fe99651597b073e35066ec3be312e18659b8 made me realise that
with the extant code, nothing will update the /usr/ansible-env
environment when we bump the versions.
The installation of the Ansible, openstacksdk and ARA packages as
part of the "install-ansible" role was done this way to facilitate
installing all three of them from their main/master/devel branches
for the "-devel" job, which is our basic canary for upstream changes
that might affect us. Because of the way the pip: module works with
"state: latest" and mixing on-disk paths with PyPI package names,
this became a bit of a complex swizzling operation.
Some things have changed since then, particularly our move to a
separate venv and upstream Ansible's switch to "collections"; so
pulling in a bug-fix for Ansible is no longer as simple as cloning
github.com/ansible/ansible at a particular tag. This means we should
reconsider how we're specifying the packages here.
This simplifies things by listing the required packages in a
requirements.txt file, which we install into the venv root. The nice
thing about this is that creating requirements.txt with the template:
module is idempotent, so we can essentially monitor the file for
changes and only (re-)run the pip install into /usr/ansible-env when
we change versions (forcing upgrades so we get the versions we want,
and fixing the original issue mentioned above).
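A hedged sketch of how the same template can also serve the -devel
job (the variable name and branch URLs are assumptions for
illustration):

  # requirements.txt.j2
  {% if install_ansible_devel | default(false) %}
  # canary job: track upstream development branches
  git+https://github.com/ansible/ansible@devel
  git+https://opendev.org/openstack/openstacksdk@master
  {% else %}
  ansible<8
  openstacksdk
  {% endif %}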
Change-Id: I3696740112fa691d1700040b557f53f6721393e7
This was missed in the original Ansibling because, unlike all the
other settings, it is not set via an env var. Instead we have to
write a file that the container startup scripts will append to the
actual config.
The comment for this value says it should be a human, so we set it to
the root address of the running host, which should end up in
infra-root inboxes.
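As a sketch of the mechanism only (the destination path, section,
and key name are assumptions, not taken from this change):

  # Hypothetical task: the container's startup script appends this
  # file verbatim to the generated config.
  - name: Write extra config appended at container start
    copy:
      dest: /var/lib/service/extra-settings.cfg
      content: |
        [general]
        owner_address: root@{{ inventory_hostname }}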
Change-Id: I235dfcba53c029136c141382920279ab593d1f82
Take the site configuration for lists.opendev.org and
lists.zuul-ci.org off of the old lists.openstack.org server, and
also clean up tests of the same.
Change-Id: Ic6095889c29d8a34def113204052300558f0a77c
Uncomment the existing configuration for the Zuul community's
mailing lists so they'll be precreated in time for our migration
maintenance later today.
Change-Id: I5d0680eae356c2585f872411ab4529e895f543fd
This file streams the db backup to borg; it should be using the
"database" container from the docker-compose file.
Change-Id: Ide568215677855aa8eff1da617c4649223373052