This used to be called "bridge", but was then renamed with
Ia7c8dd0e32b2c4aaa674061037be5ab66d9a3581 to install-ansible to be
clearer.
It is true that this is installing Ansible, but as part of our
reworking for parallel jobs this is also the synchronisation point
where we should be deploying the system-config code to run for the
buildset.
Thus naming this "bootstrap-bridge" should hopefully be clearer again
about what's going on.
I've added a note to the job calling out its difference from the
infra-prod-service-bridge job to hopefully also avoid some of the
initial confusion.
Change-Id: I4db1c883f237de5986edb4dc4c64860390cc8e22
This playbook was renamed "install-ansible.yaml" with
Ia7c8dd0e32b2c4aaa674061037be5ab66d9a3581
We want all jobs to match on this; it will make them run if we update
the ansible version on the bastion host, bridge.
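As a sketch, the files matcher that achieves this looks something like
the following (the job name and parent are illustrative, not the exact
production definitions):

  - job:
      name: infra-prod-service-example   # hypothetical service job
      parent: infra-prod-playbook        # illustrative parent
      files:
        # re-run this job whenever the renamed playbook changes
        - playbooks/install-ansible.yaml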
Change-Id: Id38fc39f8f6b4d8f532eb9796259e8f4bf18d861
This adds a keycloak server so we can start experimenting with it.
It's based on the docker-compose file Matthieu made for Zuul
(see https://review.opendev.org/819745)
We should be able to configure a realm and federate with openstackid
and other providers as described in the opendev auth spec. However,
I am unable to test federation with openstackid due to its inability to
configure an oauth app at "localhost". Therefore, we will need an
actual deployed system to test it. This should allow us to do so.
It will also allow us to connect realms to the newly available
Zuul admin API on opendev.
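For flavor, a minimal docker-compose sketch of such a service (the
image, dev-mode command, credentials and port mapping are placeholders,
not our deployment's actual values):

  services:
    keycloak:
      image: quay.io/keycloak/keycloak
      command: start-dev                 # dev mode, for experimenting
      environment:
        KEYCLOAK_ADMIN: admin            # placeholder credential
        KEYCLOAK_ADMIN_PASSWORD: secret  # placeholder credential
      ports:
        - "8443:8443"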
It should be possible to configure the realm the way we want, then
export its configuration into a JSON file and then have our playbooks
or the docker-compose file import it. That would allow us to drive
change to the configuration of the system through code review. Because
of the above limitation with openstackid, I think we should regard the
current implementation as experimental. Once we have a realm
configuration that we like (which we will create using the GUI), we
can choose to either continue to maintain the config with the GUI and
appropriate file backups, or switch to a gitops model based on an
export.
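A hedged sketch of that export step, assuming the Quarkus-based
keycloak image whose entrypoint accepts kc.sh subcommands (the realm
name and paths are hypothetical):

  services:
    keycloak-export:
      image: quay.io/keycloak/keycloak
      # one-off invocation dumping the realm to a JSON file
      command: export --realm opendev --file /srv/realm-export.json
      volumes:
        - ./keycloak-data:/srv

The resulting JSON could then be imported at container start, keeping
the realm definition under code review as described above.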
My understanding is that all the data (realm configuration and sessions)
are kept in an H2 database. This is probably sufficient for now and even
production use with Zuul, but we should probably switch to mariadb before
any heavy (e.g. Gerrit) production use.
This is a partial implementation of https://docs.opendev.org/opendev/infra-specs/latest/specs/central-auth.html
We can re-deploy with a new domain when it exists.
Change-Id: I2e069b1b220dbd3e0a5754ac094c2b296c141753
Co-Authored-By: Matthieu Huin <mhuin@redhat.com>
Mixed up with gitea-lb naming.
Fixes I19db98fcec5715c33b62c9c9ba5234fd55700fd8
Signed-off-by: Dr. Jens Harbott <harbott@osism.tech>
Change-Id: I91d077102904a2144d12bc60eb7341f1065473b4
This was introduced with I19db98fcec5715c33b62c9c9ba5234fd55700fd8
opendev-infra-prod-setup-src is the abstract parent job; we should be
using infra-prod-setup-src.
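In Zuul terms the relationship is (definitions trimmed for
illustration):

  # The opendev- prefixed job is abstract and exists only to be
  # inherited from; pipelines must reference the concrete child.
  - job:
      name: opendev-infra-prod-setup-src
      abstract: true

  - job:
      name: infra-prod-setup-src
      parent: opendev-infra-prod-setup-src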
Change-Id: I7fdefe7ce60ab248f9a90b6be363eefc826f8e1f
There are new Gerrit releases. This change updates our production 3.3
image to 3.3.8. We also update our 3.4 image to 3.4.2 to keep it current.
Release notes for both:
https://www.gerritcodereview.com/3.3.html#338
https://www.gerritcodereview.com/3.4.html#342
Seems to largely be bugfixes and reindexing improvements.
Change-Id: Iae8aa403b4001937320767d4166a6af2bc89a2ea
The current opendev-infra-prod-base job sets up the executor to log
into bridge AND copies in Zuul's checkout of system-config to
/home/zuul/src.
This presents an issue for parallel operation, as the production jobs
all clone system-config on top of each other.
Since they all operate in the same buildset, we only need to clone
system-config from Zuul once, and then all jobs can share that repo.
This adds a new job "infra-prod-setup-src" which does this. It is a
dependency of the base job so should run first.
All other jobs now inherit from opendev-infra-prod-setup-keys, which
only sets up the executor for logging into bridge.
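A rough sketch of the resulting layout (attributes trimmed; the service
job is a stand-in):

  - job:
      name: infra-prod-setup-src
      # clones Zuul's checkout of system-config to bridge once
      # per buildset

  - job:
      name: infra-prod-service-example   # hypothetical service job
      parent: opendev-infra-prod-setup-keys
      # setup-keys only sets up executor access to bridge; the shared
      # system-config checkout comes from infra-prod-setup-src
      dependencies:
        - infra-prod-setup-src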
Change-Id: I19db98fcec5715c33b62c9c9ba5234fd55700fd8
Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/807807
Having two groups here was confusing. We seem to use the review group
for most ansible stuff so we prefer that one. We move contents of the
gerrit group_vars into the review group_vars and then clean up the use
of the old group vars file.
Change-Id: I7fa7467f703f5cec075e8e60472868c60ac031f7
Previously we had set up the test gerrit instance to use the same
hostname as production: review02.opendev.org. This causes some confusion
as we have to override settings specifically for testing like a reduced
heap size, but then also copy settings from the prod host vars as we
override the host vars entirely. Using a new hostname allows us to use a
different set of host vars with unique values, reducing confusion.
Change-Id: I4b95bbe1bde29228164a66f2d3b648062423e294
Previously we had a test specific group vars file for the review Ansible
group. This provided junk secrets to our test installations of Gerrit;
we then relied on the review02.opendev.org production host vars file to
set values that are public.
Unfortunately, this meant we were using the production heapLimit value
which is far too large for our test instances, leading to the occasional
failure:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 9596567552 bytes for committing reserved memory.
We cannot set the heapLimit in the group var file because the hostvar
file overrides those values. To fix this we need to replace the test
specific group var contents with a test specific host var file instead.
To avoid repeating ourselves we also create a new review.yaml group_vars
file to capture common settings between testing and prod. Note we should
look at combining this new file with the gerrit.yaml group_vars.
On the testing side of things we set the heapLimit to 6GB, we change the
serverid value to prevent any unexpected notedb confusion, and we remove
replication config.
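Concretely, the new test host vars end up looking something like this
(the hostname and variable names are paraphrased, not the exact keys):

  # host_vars/review99.opendev.org.yaml -- hypothetical test host
  gerrit_heap_limit: 6g
  # distinct serverid so test notedb data can never be mistaken for prod
  gerrit_serverid: 00000000-0000-0000-0000-000000000000
  # no replication config in testing
  gerrit_replication: []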
Change-Id: Id8ec5cae967cc38acf79ecf18d3a0faac3a9c4b3
This shifts our Gerrit upgrade testing ahead to testing 3.3 to 3.4
upgrades as we have upgraded to 3.3 at this point.
Change-Id: Ibb45113dd50f294a2692c65f19f63f83c96a3c11
This bumps the gerrit image up to our 3.3 image. Followup changes will
shift upgrade testing to test 3.3 to 3.4 upgrades, clean up no longer
needed 3.2 images, and start building 3.4 images.
Change-Id: Id0f544846946d4c50737a54ceb909a0a686a594e
Avoid running the letsencrypt job when other roles add handlers for
their certificates. We don't need to run this job explicitly in that
case.
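The shape of the hand-off is roughly this: the letsencrypt role
notifies a per-certificate handler which the consuming role provides
(names paraphrased from that convention):

  # roles/example-service/handlers/main.yaml -- hypothetical consumer
  - name: letsencrypt updated example01-opendev-org-main
    include_tasks: restart_service.yaml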
Change-Id: Ic2e9b7fc81b73ecf7af197b83496e3589bb28bb0
Co-Authored-By: Jeremy Stanley <fungi@yuggoth.org>
Currently we connect to the LE staging environment with acme.sh during
CI to get the DNS-01 tokens (but we never follow-through and actually
generate the certificate, as we have nowhere to publish the tokens).
We've known for a while that LE staging isn't really meant to be used
by CI like this, and recent instability has made the issue pronounced.
This modifies the driver script to generate fake tokens which work to
ensure all the DNS processing, etc. is happening correctly.
I have put this behind a flag so the letsencrypt job still does the
real acme.sh calls, however. I think it is worth that job actually
calling acme.sh to validate this path; this shouldn't be required too
often.
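As a sketch of how that opt-in might be expressed (the variable name is
hypothetical; the real flag lives in the driver script's invocation):

  - job:
      name: system-config-run-letsencrypt
      vars:
        # Hypothetical flag: only this job pays the cost of real LE
        # staging calls; everywhere else fake tokens are generated.
        letsencrypt_use_staging: true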
Change-Id: I7c0b471a0661aa311aaa861fd2a0d47b07e45a72
As of https://github.com/ansible/ansible/commit/724800c (and now
2.12.0b1), ansible started requiring Python 3.8 or later on
controllers. Switch our representative bridge.openstack.org test
nodes to the ubuntu-focal label which has 3.8.10 as its default
python3 so we can determine whether it's safe to upgrade production
similarly.
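In nodeset terms the change is roughly (job name illustrative):

  - job:
      name: system-config-run-base   # representative; name illustrative
      nodeset:
        nodes:
          - name: bridge.openstack.org
            label: ubuntu-focal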
Change-Id: Ie1dc4dfaaf08ab74bf59717610231855926e9d19
This is a bit of spring cleaning. Previously we based our images on
Buster, but Bullseye exists now so give it a go.
Change-Id: Icc3d79b361e41df2f2f063993fd206ab7d992f75
To do this we also update jinja-init to bullseye; gitea seems to be the
only user of this image. The impact of this should be fairly
self-contained to gitea.
Note this update isn't urgent, but good hygiene. We should coordinate
this update with the 1.15.x gitea upgrade and do them in such a sequence
that we can identify problems easily if they pop up.
Change-Id: Ia0075416a1d8a067cfecd26c03f8db9641cbcb89
This switches testing of lists.openstack.org to Focal, and we make a CGI
env var update to accommodate newer mailman.
Specifically, newer mailman's CGI scripts filter the env vars that they
will pass through. We were setting MAILMAN_SITE_DIR to vhost our mailman
installs with apache2, but that doesn't pass the filter and is removed.
HOST is passed through, so we update our scripts, apache vhost configs,
exim, and init scripts to use the HOST env var instead.
Change-Id: I5c8c70c219669e37b7b75a61001a2b7f7bb0bb6c
This uses the opendev assets bundle image created with
I3166679bde6d771276289b9d32e7e4407957b2f8.
The mount options require using BuildKit, hence the Dockerfile update.
Otherwise conceptually it's fairly simple; copy in the files from the
opendevorg/assets image rather than the file-system.
Change-Id: I36bdc76471eec5380a676ebcdd885a88d3985976
Move some common assets into a top-level assets/ directory. Services
can reference these assets via
https://opendev.org/opendev/system-config/raw/branch/master/assets/<file>
in <img> tags, etc.
Some services want to embed these into their images, but we wish to
only keep one canonical copy. For this, add a Dockerfile and jobs
that creates a simple bundle of assets in opendevorg/assets. This can
be referenced in other builds; the new BuildKit bind-mount is
particularly useful for this
(cf. I36bdc76471eec5380a676ebcdd885a88d3985976).
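A hedged sketch of the kind of build/upload job pairing this adds (job,
parent, and context names follow the usual opendev pattern but are
illustrative here):

  - job:
      name: system-config-build-image-assets
      parent: opendev-build-docker-image
      vars: &assets_image_vars
        docker_images:
          - context: docker/assets       # hypothetical context path
            repository: opendevorg/assets

  - job:
      name: system-config-upload-image-assets
      parent: opendev-upload-docker-image
      vars: *assets_image_vars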
Change-Id: I3931566eb86a0618705d276445fa0a5f659692ea
The Open Infrastructure Foundation's developers who maintain the
OpenStackID software are taking over management of the site itself,
and have deployed it on new servers. DNS records have already been
updated to the new IP address, so it's time to clean up our end in
preparation for deleting the old servers we've been running.
OpenStackID is still used by some services we run, like RefStack and
Zanata, and we're still hosting the OpenStackID Git repository and
documentation, so this does not get rid of all references to it.
Change-Id: I1d625d5204f1e9e3a85ba9605465f6ebb9433021