This mirror contains less data (only epel and fedora) and is thus
updated more quickly; the mirror maintainer suggested using it
instead in the meantime.
Change-Id: I22d4a3a1aadc5784cd4cc38f7af26a4b6d95c93a
The two old entries in here are for puppetmaster.openstack.org (no
longer exists) and bridge.openstack.org (replaced by
bridge01.opendev.org).
Remove the old entries.
Change-Id: I2199166e7d302630792eea6255d274dc2fd1040d
An unrelated issue revealed that passing hostvars here is very
inefficient because it has to be serialised to json and back again for
the library module interface. On the production system where hostvars
is massive this takes ~20 seconds.
We can write this as a jinja loop that doesn't have this conversion,
which takes less than a second. The output is the same.
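For illustration, a minimal sketch of the pattern (task and variable
names hypothetical): rendering the values with a Jinja loop keeps the
lookups lazy and on the controller, with no module-boundary
serialisation:

  - name: Build host list
    copy:
      dest: /etc/hosts.generated
      content: |
        {% for host in groups['all'] %}
        {{ hostvars[host]['ansible_host'] }} {{ host }}
        {% endfor %}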
Change-Id: I40ff001c53b72c020463f63201719eef7d145b4f
We've incorrectly embedded the project ID in our block storage
endpoint override for Rackspace Public Cloud, which leads to a 404
Not Found response since the SDK appends the supplied project_id
already. Removing it allows us to use the latest versions of the
OpenStack CLI/SDK for volume management in Rackspace Public Cloud,
so long as we pin python-cinderclient<8 (for v2 API support).
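As a hedged sketch of the clouds.yaml shape (cloud name, region and
URL illustrative):

  clouds:
    rax:
      # before (broken): the SDK appends the project id again,
      # producing 404s:
      #   block_storage_endpoint_override: https://dfw.blockstorage.api.rackspacecloud.com/v2/PROJECT_ID
      # after: let the SDK supply the project id itself
      block_storage_endpoint_override: https://dfw.blockstorage.api.rackspacecloud.com/v2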
Change-Id: If37f1a848ec4d3128784ed28068bfae9f06e2f14
Add the ipv4/ipv6 of the new bridge host as an allowed login source.
We will clean up the old entries once the migration is finished.
Change-Id: I80e671922210bf251ee4fbc6558029d857e47198
As a follow-on to Iaf0dd577cf1bdc0c9464b7413d22eec9db37a640; also
install the python dev packages so python things can build.
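A minimal sketch of the kind of task this adds (package names are an
assumption for Ubuntu hosts):

  - name: Install python dev packages
    package:
      name:
        - python3-dev
        - build-essential
      state: present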
Change-Id: I99cde1a93671da500d3013b5eb6ba4f3509e646f
Deployment to the new Jammy bridge host is failing because it can't
build netifaces for Python 3.10. Upstream doesn't have a wheel;
this presumably doesn't fail in the gate because we set up the
testing bridge node to use our wheel cache.
We should unconditionally install this for maximum flexibility when
deploying fresh hosts.
Change-Id: Iaf0dd577cf1bdc0c9464b7413d22eec9db37a640
Python 3.11 has been released. Once the parent commit of this commit
lands we will have removed our python3.8 images, making room for
python3.11 in our image list. Add these new images, which will make
way for running and testing our software on this new version of
python.
Change-Id: Idcea3d6fa22839390f63cd1722bc4cb46a6ccd53
This switches the bridge name to bridge01.opendev.org.
The testing path is updated along with some final references still in
testinfra.
The production jobs are updated in add-bastion-host, and will have the
correct setup on the new host after the dependent change.
Everything else is abstracted behind the "bastion" group; the entry is
changed here which will make all the relevant playbooks run on the new
host.
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/862551
Change-Id: I21df81e45a57f1a4aa5bc290e9884e6dc9b4ca13
Similar to I84acaa917187db092a302519c14bc94a6a87c2c0, this is a
follow-on to I286796ebd71173019a627f8fe8d9a25d0bfc575a.
At this point, there is no "bastion" group for the executor Ansible to
use.
The idea here is to reduce the number of places we're directly
referencing bridge.openstack.org. We could move this into a job
variable, but that's the same as defining it here. KISS and reference
it directly here (since it's in a role and used multiple times, it's
still better than hardcoding in multiple places).
Change-Id: If6dbcb34e25e3eb721cd2892b8adb84344289882
In I286796ebd71173019a627f8fe8d9a25d0bfc575a we abstracted adding the
bastion host into this role. However, when running on the executor
this role doesn't see playbooks/roles; the roles should be in
playbooks/zuul/roles as they are then siblings to the playbooks
running the production jobs (zuul/run-production-playbook[-post].yaml).
Change-Id: I84acaa917187db092a302519c14bc94a6a87c2c0
gitea-lb01 has been replaced by gitea-lb02. Reviewers should double
check the new gitea-lb02 server appears happy to them before approving
this change. Approving this change will be the last step required before
we delete gitea-lb01 entirely.
jvb02 is one of two additional jitsi meet jvb servers (on top of the
one running in the all-in-one meetpad install) deployed to help scale
up our jitsi meet server. The current October 2022 PTG has shown that
while meetpad has been useful to a small number of teams, there isn't
the concurrent demand that having extra jvbs like this supports. This
means we can scale back, as the PTG is expected to be our largest
load on the service.
Do both of these in the same change, as they update the inventory
file, which causes all of our infra-prod jobs to run and takes a long
time. Squashing the changes together ensures we turn that around in
half the time.
Change-Id: I5e8b23ba9a8dc389cd7ef7f294ba4b0f8048102b
Run a base test against a Bionic bridge to ensure we don't break
things on the current production host as we transition to a new
Focal-based environment.
Change-Id: I1f745a06c4428cf31a166b3d53dd6321bfd41ebc
Following-on from Iffb462371939989b03e5d6ac6c5df63aa7708513, instead
of directly referring to a hostname when adding the bastion host to
the inventory for the production playbooks, this finds it from the
first element of the "bastion" group.
As we do this twice for the run and post playbooks, abstract it into a
role.
The host value is currently "bridge.openstack.org" -- as is the
existing hard-coding -- thus this is intended to be a no-op change.
It is setting the foundation to make replacing the bastion host a
simpler process in the future.
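A hedged sketch of the shared role's core task (the add_host module
is real; the surrounding details are illustrative):

  - name: Add bastion host to the running inventory
    add_host:
      name: "{{ groups['bastion'][0] }}"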
Change-Id: I286796ebd71173019a627f8fe8d9a25d0bfc575a
The prior change Iffb462371939989b03e5d6ac6c5df63aa7708513 added the
"bastion" group for system-config-run-* jobs, and the dependent change
here adds the bridge host to the "bastion" group when it is
dynamically added in opendev/base-jobs.
This playbook can thus refer to the bastion group, rather than having
to hardcode the hostname.
This should have no effect in production as it all still refers to the
existing bridge.openstack.org; but will make it easier to switch in
the (near) future.
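With that in place the play header is simply (task body
illustrative):

  - hosts: bastion        # previously: bridge.openstack.org
    tasks:
      - name: Example task on the bastion
        ping: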
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/861026
Change-Id: Icc52d2544afc1faf519a036cda94a3cae10448ee
This replaces hard-coding of the host "bridge.openstack.org" with
hard-coding of the first (and only) host in the group "bastion".
The idea here is that we can, as much as possible, simply switch one
place to an alternative hostname for the bastion such as
"bridge.opendev.org" when we upgrade. This is just the testing path,
for now; a follow-on will modify the production path (which doesn't
really get speculatively tested).
This needs to be defined in two places:
1) We need to define this in the run jobs for Zuul to use in the
   playbooks/zuul/run-*.yaml playbooks, as it sets up and collects
   logs from the testing bastion host.
2) The nested Ansible run will then use the inventory in
   inventory/service/groups.yaml.
Various other places are updated to use this abstracted group as the
bastion host.
Variables are moved into the bastion group (which only has one host,
the actual bastion host), which means we only have to update the
group mapping to the new host.
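As a rough sketch, the group mapping in the nested inventory ends up
along these lines (exact inventory-plugin syntax approximate), with
the former host_vars for bridge.openstack.org moving into group_vars
for "bastion":

  # inventory/service/groups.yaml
  groups:
    bastion:
      - bridge.openstack.org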
This is intended to be a no-op change; all the jobs should work the
same, but just using the new abstractions.
Change-Id: Iffb462371939989b03e5d6ac6c5df63aa7708513
Now that all the bridge nodes are Jammy (Python 3.10), we can uncap
this dependency, which brings in the latest selenium. Unfortunately,
after investigation, the easier way to do things I hoped this would
allow doesn't work; comments are added and small updates made for the
new API.
Update the users file-match so they run too.
Change-Id: I6a9d02bfc79b90417b1f5b3d9431f4305864869c
In preparation for upgrading this host, run jobs with a Jammy-based
bridge.openstack.org.
Since this has a much later Python, it brings in a later version of
selenium when testing (used for screenshots) which has dropped some of
the APIs we use. Pin it to the old version; we will fix this in a
follow-on just to address one thing at a time
(I6a9d02bfc79b90417b1f5b3d9431f4305864869c).
Change-Id: If53286c284f8d25248abf4a1b2edd6951437dec2
As a short history diversion: at one point we tried building
diskimage-builder based images for upload to our control-plane
(instead of using upstream generic cloud images). This didn't really
work, because the long-lived production servers led to leaking images
and nodepool wasn't really meant to deal with this lifecycle.
Before this the only thing that needed credentials for the
control-plane clouds was bridge.
Id1161bca8f23129202599dba299c288a6aa29212 reworked things to have a
control-plane-clouds group which would have access to the credential
variables.
So at this point we added
zuul/templates/group_vars/control-plane-clouds.yaml.j2 with stub
variables for testing.
However, we also have the same cloud: variable with stub variables in
zuul/templates/host_vars/bridge.openstack.org.yaml.j2. This is
overriding the version from control-plane-clouds because it is more
specific (host variable). Over time this has skewed from the
control-plane-clouds definition, but I think we have not noticed
because we are not updating the control-plane clouds on the non-bridge
(nodepool) nodes any more.
This is a long way of saying remove the bridge-specific definitions,
and just keep the stub variables in the control-plane-clouds group.
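In miniature, the precedence at play (stub values illustrative):

  # zuul/templates/group_vars/control-plane-clouds.yaml.j2 -- the
  # stub definition we keep:
  cloud:
    example: stub

  # zuul/templates/host_vars/bridge.openstack.org.yaml.j2 previously
  # defined the same variable; host_vars outrank group_vars, so that
  # copy silently won on bridge and the two skewed unnoticed.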
Change-Id: I6c1bfe7fdca27d6e34d9691099b0e1c6d30bb967
This is useful for debugging when pip does updates and we need to be
sure that we ran with a new (or old) version of pip.
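A minimal sketch of the kind of task this adds (names and placement
illustrative):

  - name: Record pip version for the logs
    command: pip3 --version
    register: pip_version

  - name: Show pip version
    debug:
      msg: "{{ pip_version.stdout }}"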
Change-Id: I556bb68e255ee0bdbcbd2c72dc537e2a6c7d64b6
The currently configured path for images.linuxcontainers points to a
non-existent URL, as no DNS records are present for
us.images.linuxcontainers.org. However, it is still used by some EM
branches, so it can't simply be removed as of today.
Needed-By: https://review.opendev.org/c/openstack/openstack-ansible/+/853029
This is another fixup for Iebaeed5028050d890ab541818f405978afd60124.
As described inline: in production we want the nested Ansible to use
the production inventory, but in the gate we haven't yet built the
inventory used for testing. Pass an inventory list there.
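Ansible treats a comma-terminated string as an inline inventory list,
so the gate-side invocation can be sketched as (playbook name and
host hypothetical):

  - name: Bootstrap the bastion without a built inventory
    command: >-
      ansible-playbook -i bridge01.opendev.org,
      playbooks/bootstrap-bridge.yaml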
Change-Id: If09ecb873368621687f6e396f766f4e22635c188
The idea with this role is to install the root key from the on-disk
RSA secret. However, when this play runs against localhost it doesn't
match the root_rsa_key defined as a host variable.
This is being run nested -- the executor Ansible task has forked the
Ansible we have installed on the bridge, which is now installing
this.
"connection: local" does what we want here -- it makes ansible assume
bridge.openstack.org is 127.0.0.1 -- which it is -- and avoids us
having to worry about the bootstrap ssh-ing back to itself.
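A hedged sketch of the play shape (task body illustrative):

  - hosts: bridge.openstack.org
    connection: local   # run in place: no ssh back to ourselves,
                        # but host vars like root_rsa_key still match
    tasks:
      - name: Install root ssh key from the on-disk secret
        copy:
          content: "{{ root_rsa_key }}"
          dest: /root/.ssh/id_rsa
          mode: "0600"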
This is a fixup for Iebaeed5028050d890ab541818f405978afd60124.
Change-Id: I4cdcc373d1b7b6fa542a78c9f84067c79352d2f6
This is failing in production; because no_log is set we don't get the
output. However, we are not seeing the log file written to disk,
which means the shell: command must not be running. Thus what must be
happening is that in the production case the variable isn't defined
here, and the task fails. Change this to "is defined". This doesn't
hit in the gate ... because the variable is defined there.
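The shape of the fix, with hypothetical names: evaluating an
undefined variable in when: fails the task, while an "is defined"
test just skips it:

  - name: Write the bootstrap log
    shell: "some-command > /var/log/bootstrap.log"
    no_log: true
    when: bootstrap_secret is defined   # was: when: bootstrap_secret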
Change-Id: Ib3fb24318ec0e56d4d9d9e5a879a432e6c8898e4
Sigh, the leading spaces on the > here break the redirection for the
shell invocation. Fix that up, and also remove a stray whitespace
that snuck in.
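For reference, the YAML behaviour at fault (command and path
hypothetical): in a folded scalar a more-indented continuation line
keeps its newline, so the shell saw the redirection on its own line,
which just truncated the file:

  # broken: "  > /tmp/example.log" stays on its own shell line
  - shell: >-
      echo hello
        > /tmp/example.log

  # fixed: folds to a single "echo hello > /tmp/example.log"
  - shell: >-
      echo hello
      > /tmp/example.log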
Change-Id: I582bad02185eda49e3abb1f33a4fd885091a97b6