For our rollout, we need to be able to run this without actually
running the up.
Also, split out startup tasks so that we can run them from a
dedicated start playbook by themselves.
Change-Id: I08d994e496fbd8d5adbfa1ce344b0ae52f46535c
Sister change to Ia5caff34d3fafaffc459e7572a4eef6bd94422ea, removing
earlier references to the mirror server in preparation for building
and adding the new one.
Change-Id: I7d506be85326835d5e77a0c9c461f2d457b1dfd3
This adds a simple role to install Zookeeper.
Add an option to nodepool-base to use this role to install Zookeeper.
Use this in the nodepool-builder gate testing where we are just
validating that the nodepool-builder container starts and is ready to
accept connections. It needs a zookeeper to talk to, even though it
is not going to do anything.
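A minimal sketch of how the option might be wired into nodepool-base
(role and variable names are assumed for illustration):
  - name: Install a local Zookeeper for testing
    include_role:
      name: zookeeper                  # role name assumed
    when: nodepool_base_install_zookeeper | default(false)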
Change-Id: I4ae89a51e454be4ee53ad4e04407162aaa8d9f9a
When testing our system-config configuration we don't actually add zuul to
the docker group. This means the zuul user cannot access the docker
socket. This then breaks docker container log collection. Address this
by becoming root when collecting logs.
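A sketch of the collection task, assuming we loop over container names
gathered earlier (the variable name is hypothetical):
  - name: Collect docker container logs as root
    become: yes
    shell: "docker logs {{ item }} > /var/log/containers/{{ item }}.txt 2>&1"
    loop: "{{ docker_container_names | default([]) }}"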
Change-Id: Ic0232f7ef458cdd07fb0853f97f2dc22ce137c71
Currently we don't set a contact email with our accounts. This is an
optional feature, but would be helpful for things like [1] where we
would be notified of certificates affected by bugs, etc.
Set up the email address in the acme.sh config, which will apply to
any new accounts created. To update all the existing hosts, we see if
the account email is added/modified in the config *and* if we have
existing account details; if so we need a manual update call.
For anyone who might be poking here, we also add a note on sharing an
account based on some broadly agreed upon discussion in IRC.
[1] https://community.letsencrypt.org/t/revoking-certain-certificates-on-march-4/114864
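A sketch of setting the email in the acme.sh account config (the path
is acme.sh's default location; the address and exact key are
assumptions; the manual update for existing accounts would be
acme.sh's --update-account call):
  - name: Set contact email in acme.sh config
    lineinfile:
      path: /root/.acme.sh/account.conf       # default acme.sh location
      regexp: '^ACCOUNT_EMAIL='
      line: "ACCOUNT_EMAIL='letsencrypt@opendev.org'"   # address assumed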
Change-Id: Ib4dc3e179010419a1b18f355d13b62c6cc4bc7e8
We need to use bazelisk to build gerrit so that we can properly
track bazel versions in the job. Use the roles developed for
gerrit-review to do that, then simplify the dockerfile to have it
just copy the war into the target image.
Also add polymer-bridges.
Depends-On: https://review.opendev.org/709256
Change-Id: I7c13df51d3b8c117bcc9aab9caad59687471d622
This is a new cloud provided via citycloud that will add resources
capable of running Airship jobs. The goal is to use this as a stepping
stone to having Airship jobs run on our generic CI resources. This cloud
will provide both generic and larger resources to support this.
Change-Id: I63fd9023bc11f1382424c8906dc306cee5b3f58d
As a follow-on to Ie37abb4fd3eb3342b66ade52ab65024c420d7264 remove the
linaro credentials that were related to the (now removed) linaro-cn1
cloud.
Change-Id: Ia1e8dd3732164708c2e9fd82509e350829c438ba
This was missed when converting the registry server over to LE in
production. We need to test it this way too.
Change-Id: Ic2a05ebeae6991b69c000d5269165a45a0c72d38
This change switches the post bits to use a new centralized
role to collect all container logs.
Depends-On: https://review.opendev.org/701867
Change-Id: I9e982b37518c22e6d5358f7604ebc7f56b0626e3
While we're in there, fix a misspelling.
Remove auth.restTokenPrivateKey from config file. It hasn't been
used since 2.6: https://gerrit-review.googlesource.com/c/gerrit/+/70770
Change-Id: I94405cf870d57780b86f30c2bddb573ff15c05bc
NOTE: We should update storyboard-dev to be driven by
letsencrypt first; otherwise we need to plumb in the
self-signed cert, which gets awkward because it has to be
imported into the Java trust store, which in this case is in
the container image, meaning we either need to bind-mount the
Java certs in or build them into the image.
Change-Id: Ida9dd15ca8262925c54579660fe9c16e2b573907
For gate testing we need the smaller AFS cache size applied to
everything that might install openafs, not just the mirror nodes.
Move the definition to the afs-client group.
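For example (group file and variable name assumed):
  # playbooks/group_vars/afs-client.yaml (sketch)
  openafs_client_cache_size: 50000    # in kb; gate-sized value illustrative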
Change-Id: Id27efd2f12f5ac3f351f65fa1ae513624a53df90
This is the first step in managing the opendev.org cert with LE. We
modify gitea01.opendev.org only to request the cert so that if this
breaks the other 7 giteas can continue to serve opendev.org. When we are
happy with the results we can merge the followup change to update the
other 7 giteas.
Depends-On: https://review.opendev.org/694182
Change-Id: I9587b8c2896975aa0148cc3d9b37f325a0be8970
This runs gerrit in a container on review-dev01 using podman.
Remove an unused web_server.py file that we carried over when copying
from puppet to ansible.
Change-Id: I399d3cf8471bc8063022b0db0ff81718b2ee2941
We'll use this to test the checks plugin.
We have to add jgit as a repo because it's a submodule now.
Change-Id: Ic7e9ad0265e136a9ac6b1147998f6eb5ee398180
A few things have changed and we need to fix them in one go.
Use mirror for installing docker for buildset-registry
While we need to make this more systemic, that's hanging off the
mirror rework. For now, since we know all of these jobs are debian
based, just set the mirror location.
Replace use of zuul cloner with git clones
You can never be a prophet in your own hometown. This is now broken
because of the git cache rework, so just replace it.
Update libjemalloc library
python:slim is based on buster now, which has libjemalloc2 not
libjemalloc1.
Remove gerrit repo remote for submodules
A recent change to the base jobs to use prepare-workspace-git
broke the gerrit image builds by actually having the origin
remote be /dev/null as intended. This breaks submodules because,
for a few of them where we don't have matching stable branches,
the submodule relative-path behavior is exactly what we want.
Since we don't care about the remote otherwise, remove the
origin remote before doing the submodule update --init so that
the submodule will clone the refs from the zuul prepared repo.
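A sketch of those two steps as tasks (the source path variable is
assumed):
  - name: Remove origin remote so submodules use the Zuul-prepared refs
    command: git remote rm origin
    args:
      chdir: "{{ gerrit_src_dir }}"
  - name: Initialise submodules from the prepared repo
    command: git submodule update --init
    args:
      chdir: "{{ gerrit_src_dir }}"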
Change-Id: Ieb5b6bc8711fe971ed3445c7c267306ac4616464
An upcoming change will add JWT authentication to the registry;
prepare for that by establishing a server-side secret for use
in signing the tokens.
Change-Id: Ibaa15dd0c4b0d797f01a1886186fdc021dc990fa
Use latest bazel
It seems 0.27 is now too old. This is what happens when I go on vacation
apparently.
Add in a hack to override the bazelversion. We'll remove this once
https://gerrit-review.googlesource.com/c/gerrit/+/237495 lands and
has been merged up.
Change-Id: Ib7a6d33ce8bf8498fd5cd09b25087dc09acb8df4
Setting this to system-config allows us to run the base tests as
third-party CI for projects like testinfra.
Change-Id: I2d15df154dcdc7c5da6c3326fbecec2146201164
We had some extra bazel options that don't seem to be necessary
anymore now that we are using upstream bazel options appropriately.
Retry the build a couple of times inside the build image if it goes
south. This should allow re-use of the cache the second time, and if
there is a temporary error, it should pick up and move forward.
Change-Id: I5f304acb21fd3a4d40701fc0414ae0c424c838e5
This introduces two new roles for managing the backup-server and hosts
that we wish to back up.
Firstly the "backup" role runs on hosts we wish to backup. This
generates and configures a separate ssh key for running bup and
installs the appropriate cron job to run the backup daily.
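A sketch of those two pieces (key type, paths and script name are
assumptions):
  - name: Generate a dedicated ssh key for bup
    command: ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_bup
    args:
      creates: /root/.ssh/id_bup
  - name: Install the daily backup cron job
    cron:
      name: bup-backup
      hour: '5'
      minute: '0'
      job: /usr/local/bin/bup-backup.sh    # wrapper script assumed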
The "backup-server" job runs on the backup server (or, indeed
servers). It creates users for each backup host, accepts the remote
keys mentioned above and initalises bup. It is then ready to receive
backups from the remote hosts.
This eliminates a fairly long-standing requirement for manual setup of
the backup server users and keys; this section is removed from the
documentation.
testinfra coverage is added.
Change-Id: I9bf74df351e056791ed817180436617048224d2c
Our goal is upgrading to 3.0. To do that we need to upgrade to 2.15, then
to 2.16, then to 3.0. Build all of the images so that we can do that.
2.16 and 3.0 also use bazel, so just use one copy of the Dockerfile for
all three and let zuul check out the repos to the right versions.
Depends-On: https://review.opendev.org/673147
Depends-On: https://review.opendev.org/672320
Change-Id: I35bd278e0c70c871fa44d005c60a987d1d8e3cdc
Add new IP addresses to inventory for the rebuild, but don't
reactivate it in the haproxy pools yet.
Note this switches the gitea testing to use a host called gitea99 so
that it doesn't conflict with our changes of the production hosts.
Change-Id: I9779e16cca423bcf514dd3a8d9f14e91d43f1ca3
This takes a similar approach to the extant ansible_cron_install_cron
variable to disable the cron job for the cloud launcher when running
under CI.
If your CI job happens to run when the cron job decides to fire, you
end up with a harmless but confusing failed run of the cloud launcher
(which has tried to contact real clouds) in the ARA results.
Use the "disabled" flag to ensure the cron job doesn't run. Using
"disabled" means we can still check via testinfra that the job was
installed.
Convert ansible_cron_install_cron to a similar method using disabled,
document the variable in the README and add a test for the run_all.sh
script in crontab too.
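A sketch of the pattern (the variable and script names are assumed,
following the extant ansible_cron_install_cron convention):
  - name: Install the cloud launcher cron job
    cron:
      name: cloud-launcher
      hour: '*'
      minute: '0'
      job: /opt/system-config/run_cloud_launcher.sh
      disabled: "{{ not (cloud_launcher_install_cron | default(true)) }}"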
Change-Id: If4911a5fa4116130c39b5a9717d610867ada7eb1
Zuul now includes an ansible_python_interpreter hostvar in every
host in its inventory. It defaults to python2. The write-inventory
role, which takes the Zuul inventory and makes an inventory for
the fake bridge server in the gate passes that through. Because it's
in /etc/ansible/inventory.yaml, it overrides any settings which may
arrive via group vars, but this is the way we set the interpreter
for all the hosts on bridge (we do not do so in the actual inventory
file).
To correct this, tell write-inventory to strip the
ansible_python_interpreter variable when it writes out the new
inventory. This restores the behavior to match what happens on
the real bridge host. One instance of setting the interpreter
for the fake "trusty" host used in base platform tests is moved to
a hostvars file to match the rest of the real hosts.
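A sketch, assuming the role grows a parameter naming hostvars to
strip (the parameter name is an assumption):
  - name: Write inventory for the fake bridge
    include_role:
      name: write-inventory
    vars:
      write_inventory_exclude_hostvars:    # parameter name assumed
        - ansible_python_interpreter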
Change-Id: I60f0acb64e7b90ed8af266f21f2114fd598f4a3c
This adds a periodic job to copy logs to a mirror volume, and export
it via the usual mirror http.
I have precreated the log volume; just as a R/W volume because this is
expected to be very low volume access.
Change-Id: I67870f6d439af2d2a63a5048ef52cecff3e75275
Keytabs are slightly longer than what is being tested; up to 100
bytes or so. This means the encoded data breaks over lines, so you
need to be more careful about quoting.
Update the testing to a longer keytab (100 bytes of random data) and
fix up the quoting. Also enable no_logging to avoid putting key
material into the logs.
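A sketch of installing a keytab from an encoded host variable with the
quoting and logging handled (variable name and destination are
assumptions; note the quotes around the multi-line encoded value):
  - name: Install service keytab
    shell: "echo '{{ service_keytab_b64 }}' | base64 -d > /etc/service.keytab"
    no_log: true    # keep key material out of the logs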
Change-Id: I73c391a2ebd2c962dc9a422f9d44265160210852
This move was prompted by wishing to expose the mirror update logs for
the rsync updates so that debugging problems does not require a root
user (note: not actually done in this change; will be a follow-on).
Rather than start hacking at puppet, the rsync mirror scripts make a
nice delineation point for starting an Ansible-first/Bionic update.
Most magic is included in the scripts, so there is not much more to do
than copy them. The host uses the existing kerberos and openafs roles
and copies the key material into place (to be added before merge).
Note the scripts are removed from the extant puppet so we don't have
two updates happening simultaneously. This will also require a manual
clean to remove the cron jobs as a once-off when merging.
The other part of mirror-update is the reprepro based scripts for the
various debuntu repositories. They are left as future work for now.
Testing is added to ensure dependencies and scripts are all in place.
Change-Id: I525ac18b55f0e11b0a541b51fa97ee5d6512bf70
Donnyd has kindly offered us access to fortnebula's test cloud. This
adds clouds.yaml entries to bridge and nodepool so that we can take
advantage of these resources.
Change-Id: I4ebc261c6f548aca0b3f37dc9b60ffac08029e67
This is an intermediate step to having both kafs and openafs testing
in the gate; this just makes it clear which host is which.
Change-Id: I8cd006227ed47ad5f2c5eec664083477dd7ba397
In a follow-on change (I9bf74df351e056791ed817180436617048224d2c) I
want to use #noqa to ignore an ansible-lint rule on a task; however
empirical testing shows that this doesn't work with 3.5.1, while with
4.1.0 whatever was wrong has been fixed. This change therefore
upgrades to 4.1.0.
I've been through the errors ... the comments inline I think justify
what has been turned off. The two legitimate variable space issues I
have rolled into this change; all other hits were false positives as
described.
Change-Id: I7752648aa2d1728749390cf4f38459c1032c0877
In order to have nodepool build images and upload them to control
plane clouds, add them to the clouds.yaml on the nodepool-builder
hosts. Keep them out of the launcher configs by splitting the config
templates. So that we can keep our copies of things to a minimum,
create a group called "control-plane-clouds" and put bridge and nb0*
in it.
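A sketch of the group definition (the inventory file syntax and host
patterns are assumptions):
  # groups.yaml (sketch)
  groups:
    control-plane-clouds:
      - bridge.openstack.org
      - nb0*.openstack.org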
There are mentions of clouds in here that we no longer use; a
followup patch will clean those up.
NOTE: Requires shifting the clouds config dict from
host_vars/bridge.openstack.org.yaml to group_vars/control-plane-clouds.yaml
in the secrets on bridge.
Needed-By: https://review.opendev.org/640044
Change-Id: Id1161bca8f23129202599dba299c288a6aa29212
This implements mirrors to live in the opendev.org namespace. The
implementation is Ansible native for deployment on a Bionic node.
The hostname prefix remains the same (mirrorXX.region.provider.) but
the groups.yaml splits the opendev.org mirrors into a separate group.
The matches in the puppet group are also updated so as not to run
puppet on the hosts.
The kerberos and openafs client parts do not need any updating and
work on the Bionic host.
The hosts are set up to provision certificates for themselves from
letsencrypt. Note we've added a new handler for mirror nodes to use
that restarts apache on certificate issue/renewal.
The new "mirror" role is a port of the existing puppet mirror.pp. It
installs apache, sets up some modules, makes some symlinks, sets up a
cleanup cron job and installs the apache vhost configuration.
The vhost configuration is also ported from the extant puppet. It is
simplified somewhat; but the biggest change is that we have extracted
the main port 80 configuration into a macro which is applied to both
port 80 and 443; i.e. the host will have SSL support. The other ports
are left alone for now, but can be updated in due course.
Thus we should be able to CNAME the existing mirrors to new nodes, and
any existing http access can continue. We can update our mirror setup
scripts to point to https resources as appropriate.
Change-Id: Iec576d631dd5b02f6b9fb445ee600be060f9cf1e
This is a first step toward making smaller playbooks which can be
run by Zuul in CD.
Zuul should be able to handle missing projects now, so move it
from the puppet_git playbook into puppet.
Make the base playbook be merely the base roles.
Make service playbooks for each service.
Remove the run-docker job because it's covered by service jobs.
Stop testing that puppet is installed in testinfra. It's accidentally
working only because the non-puppeted hosts are all bionic nodes and
we don't install puppet on bionic. Instead, we can now rely on
actually *running* puppet when it's important, such as in the
eavesdrop job. Also remove the installation of puppet on the nodes in
the base job, since it only tests that a synthetic installation of
puppet on nodes we don't use works.
Don't run remote_puppet_git on gitea for now - it's too slow. A
followup patch will rework gitea project creation to not take hours.
Change-Id: Ibb78341c2c6be28005cea73542e829d8f7cfab08
This change proposes calling a handler each time a certificate is
created/updated. The handler name is based on the name of the
certificate given in the letsencrypt_certs variable, as described in
the role documentation.
Because Ansible considers calling a handler with no listeners an
error, each letsencrypt user will need to provide a handler.
One simple option illustrated here is just to produce a stamp file.
This can facilitate cross-playbook and even cross-orchestration-tool
communication. For example, puppet or other ansible playbooks can
detect this stamp file and schedule their reloads, etc. then remove
the stamp file. It is conceivable more complex listeners could be
set up via other roles, etc. should the need arise.
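One such stamp-file handler might look like this (the certificate
name and stamp path are illustrative; the handler naming scheme
follows the role documentation):
  handlers:
    - name: letsencrypt updated mysite-main   # per the cert name in letsencrypt_certs
      file:
        path: /var/run/letsencrypt/mysite-main.updated   # stamp path assumed
        state: touch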
A test is added to make sure the stamp file is created for the
letsencrypt test hosts, which are always generating a new certificate
in the gate test.
Change-Id: I4e0609c4751643d6e0c8d9eaa38f184e0ce5452e
Note, this does not have complete tests yet (we will need to update
the job to start a swift for that).
Change-Id: I2ee7a9e4fb503a3431366c16c380cf09327f6050
We currently only have letsencrypt_test_only as a single flag that
sets tests to use the letsencrypt staging environment and also
generates a self-signed certificate.
However, for initial testing we actually want to fully generate
certificates on hosts, but using the staging environment (i.e. *not*
generate self-signed certs). Thus we need to split this option into
two, so the gate tests still use staging+self-signed, but in-progress
production hosts can just use the staging flag.
These variables are split, and graphite01.opendev.org is made to
create staging certificates.
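A sketch of the split (the variable names are assumptions):
  # group vars for gate test hosts: staging API plus self-signed certs
  letsencrypt_use_staging: true
  letsencrypt_self_sign_only: true

  # host_vars/graphite01.opendev.org.yaml: real requests against staging
  letsencrypt_use_staging: true
  letsencrypt_self_sign_only: false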
Also remove some debugging that is no longer necessary.
Change-Id: I08959ba904f821c9408d8f363542502cd76a30a4
We don't have python2 on bridge.o.o, so force python3.
Change-Id: Ie8eb68007c0854329cf3757e577ebcbfd40ed8aa
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This change contains the roles and testing for deploying certificates
on hosts using letsencrypt with domain authentication.
From a top level, the process is implemented in the roles as follows:
1) letsencrypt-acme-sh-install
This role installs the acme.sh tool on hosts in the letsencrypt
group, along with a small custom driver script to help parse output
that is used by later roles.
2) letsencrypt-request-certs
This role runs on each host, and reads a host variable describing
the certificates required (sketched below). It uses the acme.sh tool (via the
driver) to request the certificates from letsencrypt. It populates
a global Ansible variable with the authentication TXT records
required.
If the certificate exists on the host and is not within the renewal
period, it should do nothing.
3) letsencrypt-install-txt-record
This role runs on the adns server. It installs the TXT records
generated in step 2 to the acme.opendev.org domain and then
refreshes the server. Hosts wanting certificates will have
pre-provisioned CNAME records for _acme-challenge.host.opendev.org
pointing to acme.opendev.org.
4) letsencrypt-create-certs
This role runs on each host, reading the same variable as in step
2. However this time the acme.sh tool is run to authenticate and
create the certificates, which should now work correctly via the
TXT records from step 3. After this, the host will have the
full certificate material.
Testing is added via testinfra. For testing purposes requests are
made to the staging letsencrypt servers and a self-signed certificate
is provisioned in step 4 (as the authentication is not available
during CI). We test that the DNS TXT records are created locally on
the CI adns server, however.
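A sketch of the per-host certificate variable read in steps 2 and 4
(the exact format is an assumption):
  letsencrypt_certs:
    graphite01-opendev-org-main:
      - graphite01.opendev.org
      - graphite.opendev.org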
Related-Spec: https://review.openstack.org/587283
Change-Id: I1f66da614751a29cc565b37cdc9ff34d70fdfd3f
Change I754637115f8c7469efbc1856e88bbcb6fb83b4ce moved a bunch of log
collection to use "stage-output". This uses "fetch-output" which
automatically puts these logs in hostname subdirectories; but it does
not have an option to put it in hosts/hostname as we were doing with
the other logs.
Although we could add such support, it probably doesn't make sense as
most other multinode jobs will have the same layout with the host logs
at the top level. Remove the intermediate "/hosts/" directory on
system-config jobs so all logs remain at the top level, and we don't
have this confusing split as to where logs are for each host.
Change-Id: I56bd67c659ffb26a460d9406f6f090d431c8aa79
This adds the concept of an unmanaged domain; for unmanaged domains we
will write out the zone file only if it doesn't already exist.
acme.opendev.org is added as an unmanaged domain. It will be managed
by other ansible roles which add TXT records for ACME authentication.
The initial template comes from the dependent change, and this ensures
the bind configuration is always valid.
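A sketch of the write-only-if-absent behaviour via the template
module's force flag (template and variable names are assumptions):
  - name: Write initial zone file for an unmanaged domain
    template:
      src: unmanaged-zone.db.j2
      dest: "/var/lib/bind/zones/{{ item }}/zone.db"
      force: no     # leave any existing zone file alone
    loop: "{{ unmanaged_domains | default([]) }}"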
For flexibility and testing purposes, we allow passing an extra
refspec and version to the git checkout. This is one way to pull in
changes for speculative CI runs (I looked into having the hosts under
test check out from Zuul, but by the time we're three ansible calls
deep on the DNS hosts-under-test it's a real pain. For the number of
times we update this, it's easier to just allow a speculative change
that can take a gerrit URL; for an example see [1])
[1] https://review.openstack.org/#/c/641155/10/playbooks/group_vars/dns.yaml
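The git module supports this directly; a sketch (the variable names
are assumptions):
  - name: Check out the zone-serving repository
    git:
      repo: "{{ dns_repo_url }}"
      dest: /opt/zone-repo
      version: "{{ dns_repo_version | default('master') }}"
      refspec: "{{ dns_repo_refspec | default(omit) }}"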
Testing is enhanced to check for zone files and correct configuration
stanzas.
Depends-On: https://review.openstack.org/641154
Depends-On: https://review.openstack.org/641168
Change-Id: I9ef5cfc850c3458c63aff46cfaa0d49a5d194e87
This allows the zones to load, which is useful in follow-on changes
where we can query them on the host from testinfra to make sure it's
all working.
Change-Id: I9d22c07ce2d1ebad67b0f1ca222c1b457779ce47
We call the bridge playbook from run-base.yaml to bootstrap bridge,
so that's really where we need to disable the cron installation.
Change-Id: I5f3d604feaca5c1d577636c2d1130eec82a35961
The run_all cron running in test jobs is unawesome because it can
cause the inventory overrides we put in for the testing to get
overwritten with the real inventory. We don't want test jobs
attempting to run against real hosts.
Change-Id: I733f66ff24b329d193799e6063953e88dd6a35b1
Add an option to run a playbook (in the fake bridge context) after
running the base playbook. Use this to run a new playbook which
exercises gitea project creation after bootstrapping the gitea
service.
Disable ansible-lint 304 because it erroneously thinks shell and
command are the same thing.
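One way to do that, assuming a .ansible-lint config file is used:
  # .ansible-lint (sketch)
  skip_list:
    - '304'   # erroneously conflates shell and command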
Change-Id: I0394b614771bc62b9fe23d811defd7767b3d10db
We want to trigger ansible runs on bridge.o.o from zuul jobs. First
iteration of this tried to log in as root, but this is not allowed by our
ssh config. That config seems reasonable so we add a zuul user instead
which we can ssh in as then run things as root from zuul jobs. This
makes use of our existing user management system.
Change-Id: I257ebb6ffbade4eb645a08d3602a7024069e60b3
This runs an haproxy which is strikingly similar to the one we
currently run for git.openstack.org, but it is run in a docker
container.
Change-Id: I647ae8c02eb2cd4f3db2b203d61a181f7eb632d2
When setting up hosts for testing in CI, configure the docker
mirrors before running the base playbook.
Change-Id: I172ae87156238fa6a07414c74e1ca17df1a30257
Add the gitea k8s cluster to root's .kube/config file on bridge.
The default context does not exist in order to force us to explicitly
specify a context for all commands (so that we do not inadvertently
deploy something on the wrong k8s cluster).
Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
There are upstream jobs in zuul-jobs with the docker build playbooks,
so use them. The system-config jobs are kept so that we don't have
to duplicate the secret stanza.
Change-Id: Iceee55a3d0e8b243549fa988f134b1ea9bb6dac5
This adds the infrastructure for building docker images: the
credential used to upload to Docker Hub as well as the parent jobs
and playbooks to perform the builds.
Change-Id: I7cbbcdd184c4934f1b0ce5905d9760c732b06aa9
Depends-On: https://review.openstack.org/631078
The gerrit source dir needs three plugins cloned into
the plugins dir and also a few files updated.
Depends-On: https://review.openstack.org/631007
Change-Id: I56037137d43ee1cea0a4c17e48d09102e1599ddc
Whenever we promote an image, delete the change tag for that image
in Docker Hub, and also delete any change tags older than 24 hours
in order to keep the Docker Hub image registry tidy.
Change-Id: Id4654c893963bdb0a364b1132793fe4fb152bf27
If we clone gerrit to ~/src/gerrit.googlesource.com/gerrit but
want to keep the Dockerfile in system-config, then we need to be
able to run:
docker build ~/src/gerrit.googlesource.com/gerrit -f Dockerfile
Most of the time the dir will just be '.', so put in a sensible
default.
Change-Id: I235080c05e679d2ac270cd5401b85c655fab3112
This job has no nodes; the playbook needs to run on localhost.
The only tasks use the uri module without local files, so should
be safe.
Change-Id: Ic012426a66be3b85efe9af35089addf1316dfa63
Upload an image to dockerhub with a change-specific tag in every
gate job, and then, if the change lands, re-tag the image in
dockerhub.
Change-Id: Ie57fc342cbe29d261d33845829b77a0c1bae5ff4
This is a role for installing docker on our control-plane servers.
It is based on install-docker from zuul-jobs.
Basic testinfra tests are added; because docker fiddles the iptables
rules in magic ways, the firewall testing is moved out of the base
tests and modified to partially match our base firewall configuration.
Change-Id: Ia4de5032789ff0f2b07d4f93c0c52cf94aa9c25c
This collects syslogs from nodes running in our ansible gate tests.
The node's logs are grouped under a "hosts" directory (the bridge.o.o
logs are moved there for consistency too).
Change-Id: I3869946888f09e189c61be4afb280673aa3a3f2e
This change takes the ARA report from the "inner" run of the base
playbooks on our bridge.o.o node and publishes it into the final log
output. This is then displayed by the middleware.
Create a new log hierarchy with a "bridge.o.o" directory to make it
clear the logs there relate to the test running on that node. Move the
ansible config under there too.
Change-Id: I74122db09f0f712836a0ee820c6fac87c3c9c734
This adds connection information for an experimental kubernetes
cluster hosted in vexxhost-sjc1 to the nodepool servers.
Change-Id: Ie7aad841df1779ddba69315ddd9e0ae96a1c8c53
The constructed inventory plugin allows expressing additional groups,
but it's too heavyweight for our needs. Additionally, it is a full
inventory plugin that will add hosts to the inventory if they don't
exist.
What we want instead is something that will associate existing hosts
(that would have come from another source) with groups.
This also switches to using emergency.yaml instead of emergency, which
uses the same format.
We add an extra groups file for gate testing to ensure the CI nodes
get puppet installed.
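A sketch of such an extra groups file (the plugin syntax and host
patterns are assumptions):
  # gate extra groups (sketch)
  groups:
    puppet:
      - ubuntu-xenial*
      - centos*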
Change-Id: Iea8b2eb2e9c723aca06f75d3d3307893e320cced
This new job is a parent job allowing us to CD from Zuul via
bridge.openstack.org. Using Zuul project ssh keys, we add_host
bridge.o.o to our running inventory on the executor, then run ansible
on bridge.o.o to execute an ansible playbook in
bridge.openstack.org:/opt/system-config/playbooks.
Change-Id: I5cd2dcc53ac480459a22d9e19ef38af78a9e90f7
Deployment of the nodepool cloud.yaml file is currently failing with
FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'rackspace_username' is undefined"}
This is because the variables in the group_vars on bridge.o.o are all
prefixed with "nodepool_". Switch to this prefix.
Change-Id: I524cc628138d85e3a31c216d04e4f49bcfaaa4a8
This manages the clouds.yaml files in ansible so that we can get them
updated automatically on bridge.openstack.org (which does not run
puppet).
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Depends-On: https://review.openstack.org/598378
Change-Id: I2071f2593f57024bc985e18eaf1ffbf6f3d38140
Add a job which runs testinfra for the eavesdrop server. When we
have a per-hostgroup playbook, we will add it to this job too.
The puppet group is removed from the run-base job because the
groups.yaml file is now used to construct groups (as it does
in production) and will construct the group correctly.
The testinfra iptables module may throw an error if it's run
multiple times simultaneously on the same host. To avoid this,
stop using parallel execution.
Change-Id: I1a7bab5c14b0da22393ab568000d0921c28675aa
This adds a group var which should normally be the empty list but
can be overridden by the test framework to inject additional iptables
rules. It's used to add the zuul console streaming port. To
accomplish this, the base+extras pattern is adopted for
iptables public tcp/udp ports. This means all host/group vars should
use the "extra" form of the variable rather than the actual variable
defined by the role.
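For example, the test group vars can then add (the variable name is
an assumption; 19885 is the usual zuul console port):
  iptables_extra_public_tcp_ports:
    - 19885    # zuul console streaming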
Change-Id: I33fe2b7de4a4ba79c25c0fb41a00e3437cee5463
And collect it in post; it is helpful to see the results.
Change-Id: I0dbecf57bf9182168eb6f99cdf88329fcdeb1bdc
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This adds a job which creates a bridge-like node and bootstraps it,
and then runs the base playbook against all of the node types we
use in our control plane. It uses testinfra to validate the results.
Change-Id: Ibdbaf511bbdaee46e1335f2c83b95ba1553a1d94
Depends-On: https://review.openstack.org/595905