This adds a keycloak server so we can start experimenting with it.
It's based on the docker-compose file Matthieu made for Zuul
(see https://review.opendev.org/819745).
We should be able to configure a realm and federate with openstackid
and other providers as described in the opendev auth spec. However,
I am unable to test federation with openstackid due to its inability
to configure an OAuth app at "localhost". Therefore, we will need an
actual deployed system to test it. This should allow us to do so.
It will also allow us to connect realms to the newly available
Zuul admin api on opendev.
It should be possible to configure the realm the way we want, export
its configuration to a JSON file, and then have our playbooks or the
docker-compose file import it. That would allow us to drive changes to
the system's configuration through code review. Because of the above
limitation with openstackid, I think we should regard the current
implementation as experimental. Once we have a realm configuration
that we like (which we will create using the GUI), we can choose to
either continue maintaining the config with the GUI and appropriate
file backups, or switch to a gitops model based on an exported JSON
realm configuration.
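For example, importing such an export at container startup might look
something like this in the compose file (a minimal sketch, assuming
the legacy jboss/keycloak image; the realm file name is hypothetical):

  services:
    keycloak:
      image: jboss/keycloak
      environment:
        # Load the code-reviewed realm export on startup.
        KEYCLOAK_IMPORT: /etc/keycloak/opendev-realm.json
      volumes:
        - ./opendev-realm.json:/etc/keycloak/opendev-realm.json:ro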
My understanding is that all the data (realm configurations and
sessions) are kept in an H2 database. This is probably sufficient for
now, and perhaps even for production use with Zuul, but we should
probably switch to mariadb before any heavy (e.g. gerrit) production
use.
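Switching off H2 should mostly be a compose-file change; here is a
sketch of pointing keycloak at mariadb, assuming the same
jboss/keycloak image, with made-up service names and credentials:

  services:
    mariadb:
      image: mariadb
      environment:
        MYSQL_DATABASE: keycloak
        MYSQL_USER: keycloak
        MYSQL_PASSWORD: secret
        MYSQL_ROOT_PASSWORD: secret
    keycloak:
      image: jboss/keycloak
      environment:
        # The legacy image selects its database via DB_* variables.
        DB_VENDOR: mariadb
        DB_ADDR: mariadb
        DB_DATABASE: keycloak
        DB_USER: keycloak
        DB_PASSWORD: secret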
This is a partial implementation of https://docs.opendev.org/opendev/infra-specs/latest/specs/central-auth.html
We can re-deploy with a new domain when it exists.
Co-Authored-By: Matthieu Huin <firstname.lastname@example.org>
The Open Infrastructure Foundation's developers who maintain the
OpenStackID software are taking over management of the site itself,
and have deployed it on new servers. DNS records have already been
updated to the new IP address, so it's time to clean up our end in
preparation for deleting the old servers we've been running.
OpenStackID is still used by some services we run, like RefStack and
Zanata, and we're still hosting the OpenStackID Git repository and
documentation, so this does not get rid of all references to it.
This enables the new eavesdrop01.opendev.org server in all current
channels. Puppet has been disabled on the old server and we will
manually stop supybot/meetbot and migrate logs before this applies.
This adds a new server to take over from eavesdrop01.openstack.org.
We limit the puppet installs, etc. to the openstack.org server. The
new server is in the group eavesdrop_opendev as we cut over services.
A stub for basic installation is added to the service playbook.
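Roughly, the intent is something like this hypothetical inventory
sketch (the real system-config group plumbing differs, but only the
new host lands in eavesdrop_opendev while both remain in inventory):

  all:
    children:
      eavesdrop:
        hosts:
          eavesdrop01.openstack.org:
      eavesdrop_opendev:
        hosts:
          eavesdrop01.opendev.org: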
I've confirmed with EmilienM in #tripleo that the edeploy project is
defunct and the #edeploy IRC channel is no longer in use. It was
registered on OFTC by a lone project contributor many years ago and
is not under our control, so we should not be running bots in it.
In order to be able to correctly bootstrap statusbot notification
volunteers on OFTC, temporarily remove everyone whose OFTC nicks we
haven't already confirmed (our active infra-root sysadmins).
Volunteers can re-propose additions to this list via code review as
a means of directly confirming their registered nicks on OFTC.
Stop reporting service info and logging in channels which have had
no human comments (only bots) in all of 2021.
This zuul02 instance will replace zuul01. There are a few items to
coordinate when doing an actual switch, so we haven't removed zuul01
from the inventory here. In particular, we need to update the gearman
server config values across the zuul cluster, and we need to save
queues, shut down zuul01, then start zuul02's scheduler and restore
the queues there.
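The save/restore step could be sketched as an ad-hoc play like the
following; zuul-changes.py is the queue dump helper from Zuul's
tools/ directory, but the paths and URL here are assumptions:

  - hosts: zuul01.openstack.org
    tasks:
      - name: Capture re-enqueue commands from the running scheduler
        command: python3 /opt/zuul/tools/zuul-changes.py https://zuul.opendev.org
        register: saved_queues

      - name: Save the commands to replay once zuul02 is up
        copy:
          content: "{{ saved_queues.stdout }}"
          dest: /var/lib/zuul/queues.sh
          mode: "0600"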
I believe landing this change is safe as we don't appear to start zuul
on new instances by default. Reviewers should double-check this.
We are doing this so that we can clean up the private network +
floating IP setup that the existing mirror uses. Once this new mirror
is up and happy, we can CNAME to it and then clean up the old mirror
and its networking config. This saves an IP address that the current
private network router is consuming.
We will be rotating zk01-03.openstack.org out and replacing them with
zk04-06.opendev.org. This is the first change in that process which puts
zk04 into the rotation. This should only be landed when operators are
ready to manually stop zookeeper on zk03 (which is being replaced by
zk04 in this change).
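In inventory terms this step amounts to something like the following
hypothetical group sketch (the real layout differs); zk03 drops out
as zk04 rotates in:

  zookeeper:
    hosts:
      zk01.openstack.org:
      zk02.openstack.org:
      zk04.opendev.org: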
Once we are satisfied that we have disabled the inputs to firehose we
can land this change to stop managing it in config management. Once that
is complete the server can be removed.
The OpenEdge cloud has been offline for five months, initially
disabled in I4e46c782a63279d9c18ff4ba2944c15b3027114b, so go ahead
and clean up lingering references. If it is restored later, this can
be reverted fairly easily.
These have been replaced with new focal .opendev.org hosts. Note we
don't want to land this until we have successfully transitioned from
one set of hosts to another.
This adds the new focal nodepool launcher replacements for nl02-04 to
our inventory. This will configure them with an idle configuration.
We then confirm they are happy running in an idle state, then switch
the config over from the old servers to the new ones.
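An "idle configuration" here is presumably a nodepool.yaml that
registers the launcher but gives it nothing to launch; a sketch with
made-up provider and cloud names:

  providers:
    - name: example-provider
      driver: openstack
      cloud: example-cloud
      pools:
        - name: main
          # max-servers: 0 keeps the launcher idle; during cutover
          # this is raised here and zeroed in the old launcher's
          # config to hand off responsibility.
          max-servers: 0
          labels: []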
This is a new focal replacement for nl01.openstack.org. We keep
nl01.openstack.org in our inventory for now because we want ansible
to update the nodepool.yaml configs for these two hosts to coordinate
a handoff of responsibilities once we are happy with the new
deployment. We also switch the testing hostname to nl04.openstack.org
as this will be the last nodepool launcher to be removed. When we
swap it out, the testing will be updated to use focal hosts.
These are new focal replacement servers. Because this is the last set
of replacements for the executors, we also clean up the testing of
the old servers in the system-config-run-zuul job and the inventory
group.
These are new replacement servers. Once the new servers have been
ansibled and zuul-executor is started on them, the old servers will
be asked to gracefully stop. Once gracefully stopped, the old servers
will be removed.
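The graceful stop itself could look something like this ad-hoc task;
"zuul-executor graceful" is the real command, but the host pattern,
compose project location, and service name are assumptions:

  - hosts: ze01.openstack.org
    tasks:
      - name: Ask the old executor to finish its jobs and exit
        command: docker-compose exec -T executor zuul-executor graceful
        args:
          chdir: /etc/zuul-executor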