This documentation site is for the versions of Synapse maintained by the Matrix.org Foundation (github.com/matrix-org/synapse), available under the Apache 2.0 licence.
Everyone is welcome to contribute code to matrix.org
projects, provided that they are willing to
license their contributions under the same license as the project itself. We
follow a simple 'inbound=outbound' model for contributions: the act of
submitting an 'inbound' contribution means that the contributor agrees to
license the code under the same terms as the project's overall 'outbound'
license; in our case, this is almost always the Apache Software License v2 (see
LICENSE).
If you are running Windows, the Windows Subsystem for Linux (WSL) is strongly
recommended for development. More information about WSL can be found at
https://docs.microsoft.com/en-us/windows/wsl/install. Running Synapse natively
on Windows is not officially supported.
Synapse is written in Python 3, so you'll need a recent version of Python 3 to do pretty much anything. Your Python also needs support for virtual environments. This is usually built in, but some Linux distributions such as Debian and Ubuntu split it out into its own package; running sudo apt install python3-venv should be enough.
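To quickly check that your interpreter is recent enough and has venv support, you can try creating a throwaway environment (the .venv path here is arbitrary):

```shell
# Print the interpreter version; Synapse needs a recent Python 3.
python3 --version

# Create a virtual environment; this fails if venv support is missing.
python3 -m venv .venv
ls .venv/bin/activate
```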
A recent version of the Rust compiler is needed to build the native modules. The
easiest way of installing the latest version is to use rustup.
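For example, on most Unix-like systems the official rustup installer can be run as a one-liner:

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```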
Synapse can connect to PostgreSQL via the psycopg2 Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with sudo apt install libpq-dev.
Synapse has an optional, improved user search with better Unicode support. For that you need the development package of libicu. On Debian or Ubuntu Linux, this can be installed with sudo apt install libicu-dev.
The preferred and easiest way to contribute changes is to fork the relevant
project on GitHub, and then create a pull request to ask us to pull your
changes into our repo.
Before installing the Python dependencies, make sure you have installed a recent version
of Rust (see the "What do I need?" section above). The easiest way of installing the
latest version is to use rustup.
Synapse uses the poetry project to manage its dependencies
and development environment. Once you have installed Python 3 and added the
source, you should install poetry.
Of their installation methods, we recommend
installing poetry using pipx:

```shell
pip install --user pipx
pipx install poetry==1.5.1  # Problems with Poetry 1.6, see https://github.com/matrix-org/synapse/issues/16147
```
There is a growing amount of documentation located in the
docs
directory, with a rendered version available online.
This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse.
docs/development
exists primarily to house documentation for
Synapse developers.
docs/admin_api houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.
We welcome improvements and additions to our documentation itself! When
writing new pages, please
build docs to a book
to check that your contributions render correctly. The docs are written in
GitHub-Flavoured Markdown.
Some documentation also exists in Synapse's GitHub
Wiki, although this is primarily
contributed to by community authors.
When changes are made to any Rust code, you must run either poetry install
or maturin develop (if installed) to rebuild the Rust code. Using maturin
is quicker than poetry install, so it is recommended when making frequent
changes to the Rust code.
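For example, assuming maturin is installed into your Poetry environment, a quick rebuild after editing Rust code looks like:

```shell
# Recompile the native module in-place (debug build) without
# reinstalling all Python dependencies:
poetry run maturin develop
```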
The unit tests run parts of Synapse, including your changes, to see if anything
was broken. They are slower than the linters but will typically catch more errors.
```shell
poetry run trial tests
```
You can run unit tests in parallel by specifying the -jX argument to trial, where X is the number of parallel runners you want. To use four CPU cores, you would run them like:

```shell
poetry run trial -j4 tests
```
If you wish to only run some unit tests, you may specify
another module instead of tests - or a test class or a method:

```shell
poetry run trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
```
If your tests fail, you may wish to look at the logs (the default log level is ERROR):

```shell
less _trial_temp/test.log
```
To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL:

```shell
SYNAPSE_TEST_LOG_LEVEL=DEBUG poetry run trial tests
```
By default, tests will use an in-memory SQLite database for test data. For additional
help with debugging, one can use an on-disk SQLite database file instead, in order to
review database state during and after running tests. This can be done by setting
the SYNAPSE_TEST_PERSIST_SQLITE_DB environment variable. Doing so will cause the
database state to be stored in a file named test.db under the trial process'
working directory. Typically, this ends up being _trial_temp/test.db. For example:
```shell
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 poetry run trial tests
```
The database file can then be inspected with:
```shell
sqlite3 _trial_temp/test.db
```
Note that the database file is cleared at the beginning of each test run, so it
will only ever contain the data generated by the most recently run test. Generally,
though, when debugging you are only running a single test anyway.
Invoking trial as above will use an in-memory SQLite database. This is great for
quick development and testing. However, we recommend using a PostgreSQL database
in production (and indeed, we have some code paths specific to each database).
This means that we need to run our unit tests against PostgreSQL too. Our CI does
this automatically for pull requests and release candidates, but it's sometimes
useful to reproduce this locally.
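One way to do this, assuming the test harness's SYNAPSE_POSTGRES family of environment variables and a local server using password authentication (a sketch; adjust the credentials for your setup):

```shell
SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_USER=postgres \
  SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests
```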
You don't need to specify the host, user, port or password if your Postgres
server is set to authenticate you over the UNIX socket (i.e. if the psql command
works without further arguments).
Your Postgres account needs to be able to create databases; see the postgres
docs for ALTER ROLE.
The integration tests are a more comprehensive suite of tests. They
run a full version of Synapse, including your changes, to check if
anything was broken. They are slower than the unit tests but will
typically catch more errors.
The following command will let you run the integration test with the most common
configuration:
```shell
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:focal
```
(Note that the paths must be full paths! You could also write $(realpath relative/path) if needed.)
This configuration should generally cover your needs.
- To run with Postgres, supply the -e POSTGRES=1 -e MULTI_POSTGRES=1 environment flags.
- To run with Synapse in worker mode, supply the -e WORKERS=1 -e REDIS=1 environment flags (in addition to the Postgres flags).
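Putting the flags together, a hypothetical Postgres-with-workers run would look something like:

```shell
docker run --rm -it -e POSTGRES=1 -e MULTI_POSTGRES=1 -e WORKERS=1 -e REDIS=1 \
  -v /path/where/you/have/cloned/the/repository\:/src:ro \
  -v /path/to/where/you/want/logs\:/logs \
  matrixdotorg/sytest-synapse:focal
```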
Complement is a suite of black box tests that can be run on any homeserver implementation. It can also be thought of as end-to-end (e2e) tests.
It's often nice to develop on Synapse and write Complement tests at the same time.
Here is how to run your local Synapse checkout against your local Complement checkout.
(Check out Complement alongside your Synapse checkout.)
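Assuming the two checkouts sit side by side, the helper script in the Synapse repository can be pointed at the Complement checkout:

```shell
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh
```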
To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:
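For example, to run a single test (the test name below is illustrative):

```shell
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```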
The above will run a monolithic (single-process) Synapse with SQLite as the database. For other configurations, try:
- Passing POSTGRES=1 as an environment variable to use the Postgres database instead.
- Passing WORKERS=1 as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
  - If setting WORKERS=1, optionally set WORKER_TYPES= to declare which worker types you wish to test: a simple comma-delimited string containing the worker types defined from the WORKERS_CONFIG template in here. A safe example would be WORKER_TYPES="federation_inbound, federation_sender, synchrotron". See the worker documentation for additional information on workers.
- Passing ASYNCIO_REACTOR=1 as an environment variable to use the Twisted asyncio reactor instead of the default one.
- Passing PODMAN=1 will use the podman container runtime instead of docker.
- Passing UNIX_SOCKETS=1 will utilise Unix socket functionality for Synapse, Redis, and Postgres (when applicable).
To increase the log level for the tests, set SYNAPSE_TEST_LOG_LEVEL, e.g.:
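For instance, combined with a single-test Complement run (the test name is illustrative):

```shell
SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=../complement \
  ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```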
If you're curious what the database looks like after you run some tests, here are some steps to get you going in Synapse:
In your Complement test comment out defer deployment.Destroy(t) and replace with defer time.Sleep(2 * time.Hour) to keep the homeserver running after the tests complete
Start the Complement tests
Find the name of the container, docker ps -f name=complement_ (this will filter for just the Complement-related Docker containers)
Access the container replacing the name with what you found in the previous step: docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash
All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier.
To create a changelog entry, make a new file in the changelog.d directory named
in the format of PRnumber.type. The type can be one of the following:
- feature
- bugfix
- docker (for updates to the Docker image)
- doc (for updates to the documentation)
- removal (also used for deprecations)
- misc (for internal-only changes)
This file will become part of our changelog at the next
release, so the content of the file should be a short description of your
change in the same style as the rest of the changelog. The file can contain Markdown
formatting, and must end with a full stop (.) or an exclamation mark (!) for
consistency.
Adding credits to the changelog is encouraged; we value your
contributions and would like to have you shouted out in the release notes!
For example, a fix in PR #1234 would have its changelog entry in
changelog.d/1234.bugfix, and contain content like:
```
The security levels of Florbs are now validated when received
via the /federation/florb endpoint. Contributed by Jane Matrix.
```
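That entry could be created with, for example (reusing the illustrative PR number and wording above):

```shell
# changelog.d already exists in the Synapse repository; created here
# only so the example is self-contained.
mkdir -p changelog.d
echo "The security levels of Florbs are now validated when received via the /federation/florb endpoint. Contributed by Jane Matrix." > changelog.d/1234.bugfix
```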
If there are multiple pull requests involved in a single bugfix/feature/etc,
then the content for each changelog.d file should be the same. Towncrier will
merge the matching files together into a single changelog entry when we come to
release.
Obviously, you don't know if you should call your newsfile
1234.bugfix or 5678.bugfix until you create the PR, which leads to a
chicken-and-egg problem.
There are two options for solving this:

1. Open the PR without a changelog file, see what number you got, and then
   add the changelog file to your branch; or
2. Look at the list of all
   issues/PRs, add one to the
   highest number you see, and quickly open the PR before somebody else claims
   your number.
This
script
might be helpful if you find yourself doing this a lot.
Sorry, we know it's a bit fiddly, but it's really helpful for us when we come
to put together a release!
Changes which affect the debian packaging files (in debian) are an
exception to the rule that all changes require a changelog.d file.
In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command:
```shell
dch
```
This will make up a new version number (if there isn't already an unreleased
version in flight), and open an editor where you can add a new changelog entry.
(Our release process will ensure that the version number and maintainer name are
corrected for the release.)
If your change affects both the debian packaging and files outside the debian
directory, you will need both a regular newsfragment and an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
In order to have a concrete record that your contribution is intentional
and you agree to license it under the same terms as the project's license, we've adopted the
same lightweight approach that the Linux Kernel
submitting patches process,
Docker, and many other
projects use: the DCO (Developer Certificate of Origin).
This is a simple declaration that you wrote
the contribution or otherwise have the right to contribute it to Matrix:
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment:
```
Signed-off-by: Your Name <your@email.example.org>
```
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the -s
flag to git commit, which uses the name and email set in your
user.name and user.email git configs.
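A minimal sketch in a throwaway repository shows the trailer being added:

```shell
# Hypothetical scratch repo, just to demonstrate the sign-off.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Your Name"
git config user.email "your@email.example.org"
echo change > file.txt
git add file.txt

# -s appends a Signed-off-by trailer from the configured name/email.
git commit -q -s -m "Validate florb security levels"
git log -1 --format=%B
```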
If you would like to provide your legal name privately to the Matrix.org
Foundation (instead of in a public commit or comment), you can do so
by emailing your legal name and a link to the pull request to
dco@matrix.org.
It helps to include "sign off" or similar in the subject line. You will then
be instructed further.
Once private sign off is complete, doing so for future contributions will not
be required.
Once the Pull Request is opened, you will see a few things:

- our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;
- one or more of the developers will take a look at your Pull Request and offer feedback.
From this point, you should:

1. Look at the results of the CI pipeline.
   - If there is any error, fix the error.
2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
   - A pull request is a conversation; if you disagree with the suggestions, please respond and discuss it.
3. Create a new commit with the changes.
   - Please do NOT overwrite the history. New commits make the reviewer's life easier.
   - Push these commits to your Pull Request.
4. Back to 1.
Once the pull request is ready for review again please re-request review from whichever developer did your initial
review (or leave a comment in the pull request that you believe all required changes have been done).
Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!