This guide aims to help PR authors contribute to the community.hashi_vault collection.
This guide is a work-in-progress and should not be considered complete. Check back often as we fill out more details based on experience and feedback, and please let us know how this guide can be improved by opening a GitHub issue in the repository.
Log into your GitHub account.
Fork the ansible-collections/community.hashi_vault repository by clicking the Fork button in the upper right corner. This will create a fork in your own account.
Clone the repository locally, following the example instructions here (substituting hashi_vault as the collection name). Pay special attention to the local path structure of the cloned repository as described in those instructions.
As mentioned on that page, commit your changes to a branch, push them to your fork, and create a pull request (GitHub will automatically prompt you to do so when you look at your repository).
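The branch-and-commit portion of that workflow can be sketched offline in a throwaway repository (the branch name and file below are hypothetical; with a real fork you would clone it and push the branch to GitHub):

```shell
# Throwaway local repository standing in for a clone of your fork
# (hypothetical names throughout; no network access needed).
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q demo
cd demo
git config user.email "you@example.com"
git config user.name "Example Contributor"
git commit -q --allow-empty -m "initial commit"
# Commit your changes to a topic branch, not the default branch:
git checkout -q -b fix/my-change
echo "example change" > CHANGES.txt
git add CHANGES.txt
git commit -q -m "Describe the change for reviewers"
# With a real fork you would now push and open a pull request:
#   git push origin fix/my-change
git branch --show-current
```

After pushing a new branch to your fork, GitHub shows a banner offering to open the pull request, as described above.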
Additions to the collection documentation are very welcome! We have three primary types of documentation, each with their own syntax and rules.
This type of documentation is generated from structured YAML inside a Python string. It is included in the same code that it documents, or in a separate Python file, such as a doc fragment. Please see the module format and documentation guidance for more information.
This type of documentation is tested with
ansible-test sanity; full instructions are available on the testing module documentation page.
Although we can’t preview how the documentation will look for these, we can be reasonably sure the output is correct because the documentation is highly structured and validated using sanity tests.
Markdown files (those with the extension
.md) can be found in several directories within the repository. These files are primarily aimed at developers and those browsing the repository, to explain or give context to the other files nearby.
The main exception to the above is the
README.md in the repository root. This file is more important because it provides introductory information and links for anyone browsing the repository, both on GitHub and on the collection’s Ansible Galaxy page.
Markdown files can be previewed natively on GitHub, so they are easy for reviewers to validate, and there are no specific tests that need to run against them.
The collection docsite is what you are reading now. It is written in reStructuredText (RST) format and published on the ansible_documentation site. This is where we have long-form documentation that doesn’t fit into the other two categories.
If you are considering adding an entirely new document here, it may be best to open a GitHub issue first to discuss the idea and how best to organize it.
Refer to the Ansible style guide for all submissions to the collection docsite.
RST files for the docsite are in the
docs/docsite/rst/ directory. Some submissions may also require edits to extra-docs.yml. That file is validated by the collection's CI. There is not yet any automated preview for the docsite rendering, but this is an area we are hoping to improve.
Docsite pages can be generated locally through a workaround technique. This is not a supported method but it may be helpful to get more rapid feedback on docsite changes, if you’re comfortable at a command line.
The process is:
Clone ansible/ansible or a fork of it.
Copy the .rst files you want to preview into that repository's docs/docsite/rst/ directory.
Install the requirements needed to build the docsite (from the repository root):
$ pip install -r requirements.txt
$ pip install -r docs/docsite/requirements.txt
You may also need to remove write permission from group and other:
$ chmod -R go-w docs/docsite/rst
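If you are curious what that chmod invocation does, here is a small demonstration on a throwaway directory (the real command targets docs/docsite/rst in the ansible/ansible clone):

```shell
# Throwaway directory standing in for docs/docsite/rst.
workdir=$(mktemp -d)
mkdir -p "$workdir/rst"
touch "$workdir/rst/page.rst"
chmod -R go+w "$workdir/rst"   # start with group/other write bits set
chmod -R go-w "$workdir/rst"   # the same operation as the docs build step
ls -ld "$workdir/rst"          # group/other write bits are now cleared
```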
Build the docs:
$ make coredocs
The rendered HTML docs should be available in
docs/docsite/_build/html/ and can be opened in a browser.
If you’re making anything more than very small or one-time changes, run the tests locally to avoid having to push a commit for every change and then wait for CI to run the tests.
First, review the guidance on testing collections, as it applies to this collection as well.
Unlike other collections, we now require an integration_config.yml file to run integration tests properly, because the tests rely on external dependencies (like a Vault server) and need to know where to find those dependencies.
If you have contributed to this collection or to the
hashi_vault lookup plugin in the past, you might remember that the integration tests used to download, extract, and run a Vault server during the course of the tests, by default. This legacy mode is not recommended but is still available (for now) via opt-in.
Legacy mode is not recommended because a new Vault server and proxy server are downloaded, set up, configured, and/or uninstalled for every target. When the collection had only one target this was a reasonable approach, but that is no longer the case: the overhead is incurred on every target in every run, so test runs get slower as targets are added.
Skip to Docker Compose localenv for a method that’s nearly as easy as legacy mode, and far more efficient (docker-compose).
To get started quickly without having to set up anything else, you can use legacy mode by copying the included integration config sample:
$ cp tests/integration/integration_config.yml.sample tests/integration/integration_config.yml
That file has everything configured to be able to run the integration tests and have them set up the dependencies for you.
You will also need the following additional Ansible collections:
Running legacy mode tests in docker (recommended):
$ ansible-test integration --docker default -v
Running legacy mode tests in a controlled python virtual environment (not recommended):
$ ansible-test integration --venv --requirements --allow-destructive -v
In legacy mode, running the tests locally or in a venv may modify your system packages.
If you must use legacy mode testing, you can make it more efficient by limiting your test run to the specific target needed, to avoid the overhead of creating and destroying the dependencies for each target. For example:
$ ansible-test integration --docker default -v lookup_hashi_vault
The recommended way to run the tests has Vault and tinyproxy running in their own containers, set up via docker-compose, and the integration tests run in their own container separately.
We have a pre-defined “localenv” setup role for this purpose.
For ease of typing / length of commands, we’ll enter the role directory first:
$ pushd tests/integration/targets/setup_localenv_docker
This localenv has both Ansible collection and Python requirements, so let’s get those out of the way:
$ pip install -r files/requirements/requirements.txt -c files/requirements/constraints.txt
$ ansible-galaxy collection install -r files/requirements/requirements.yml
To set up your docker-compose environment with all the defaults, run the setup script:

$ ./setup.sh
The setup script does the following:
Template a docker-compose.yml for the project.
Generate a private key and self-signed certificate for Vault.
Template a Vault config file.
Bring down the existing compose project.
Bring up the compose project as defined by the vars (specified or defaults).
Template an integration_config.yml file that has all the right settings for integration tests to connect.
Copy the integration config to the correct location if there isn’t already one there (it won’t overwrite, in case you had customized changes).
With your containers running, you can now run the tests in docker (after returning back to the collection root):
$ popd
$ ansible-test integration --docker default --docker-network hashi_vault_default -v
The --docker-network option is important because it ensures the Ansible test container is on the same network as the dependency containers, so that the test container can reach them by their container names. The network name,
hashi_vault_default, comes from the default docker-compose project name used by this role (
hashi_vault). See the customization section for more information.
Running setup.sh again will re-deploy the containers, or if you prefer, you can use the generated
files/.output/<project_name>/docker-compose.yml directly with local tools.
If running again, remember to manually copy the contents of the newly generated
files/.output/integration_config.yml to the integration root, or delete the file in the root before re-running setup so that it copies the file automatically.
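The "copy only if absent" behavior described above can be illustrated with a throwaway layout (the paths mimic the role's files/.output directory; cp -n is one way to get no-clobber semantics, not necessarily what setup.sh itself uses):

```shell
# Throwaway directories standing in for the role directory and its output.
workdir=$(mktemp -d)
mkdir -p "$workdir/files/.output"
echo "generated: true" > "$workdir/files/.output/integration_config.yml"
echo "customized: true" > "$workdir/integration_config.yml"
# -n (no clobber) refuses to overwrite the existing, customized file:
cp -n "$workdir/files/.output/integration_config.yml" \
      "$workdir/integration_config.yml" 2>/dev/null || true
cat "$workdir/integration_config.yml"   # still the customized copy
```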
setup.sh passes any additional parameters you give it to the
ansible-playbook command it calls, so you can customize variables with the standard extra-vars (-e) option. There are many advanced scenarios possible, but a few things you might want to override:
vault_version – can target any version of Vault for which a docker container exists
docker_compose – defaults to clean but could be set to any of the following:
* up – similar to running docker-compose up (a no-op if the project is running as it should)
* down – similar to docker-compose down (destroys the project)
* clean – (default) similar to docker-compose down followed by docker-compose up
* none – does the other tasks, including templating, but does not bring the project up or down. With this option, the community.docker collection is not required.
vault_crypto_force – by default this is false, so if the cert and key exist they won't be regenerated. Setting it to true will overwrite them.
proxy_port – all of the ports are exposed to the host, so if you already have any of the default ports in use on your host, you may need to override these.
proxy_container_name – these are the names for their respective containers, which will also be the DNS names used within the container network. If you have the default names in use, you may need to override these.
docker_compose_project_name – unlikely to need to be changed, but it affects the name of the docker network, which will be needed for your
ansible-test invocation, so it's worth mentioning. For example, if you set this to
ansible_hashi_vault, then the docker network name will be ansible_hashi_vault_default.
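As an illustration of the pass-through behavior, here is a toy stand-in for setup.sh (the playbook name and the override values below are hypothetical; the real script lives in the setup_localenv_docker role and runs ansible-playbook for you):

```shell
# Toy stand-in: echoes the command it would run instead of executing it.
fake=$(mktemp)
cat > "$fake" <<'EOF'
#!/bin/sh
# Forward every argument to the underlying command, as setup.sh does:
echo ansible-playbook localenv.yml "$@"
EOF
chmod +x "$fake"
"$fake" -e vault_version=1.13.3 -e proxy_port=8001
# prints: ansible-playbook localenv.yml -e vault_version=1.13.3 -e proxy_port=8001
```

Any -e name=value pair you pass this way ends up on the ansible-playbook command line, overriding the role's defaults.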