Backend Server

The server is built with FastAPI and runs on Uvicorn.


Development Setup

  • Backend on port 5600
  • Database on port 5601
  • Adminer on port 5602
  • MailDev on port 5603

The setup scripts could be refactored into single files.


Pre-requisites

  1. If you are on Windows, install Windows Subsystem for Linux (WSL) if you don't have it installed. This is required for Docker Desktop. Restart your machine after installing WSL.

  2. Install Docker if you don't have it installed. Restart your machine after installing Docker.

  3. Install [Python](https://www.python.org/downloads/) if you don't have it installed. The Package Installer for Python (pip) is also required, but it ships with recent versions of Python.

It is highly recommended to use an Integrated Development Environment (IDE) during development. VSCode is a good choice, with plenty of extensions available to help with the process.


Cloning the repository

Clone the backend repository

git clone [email protected]:MuistotKartalla/muistot-backend.git

Navigate to the backend root folder (or open the folder directly from an IDE)

cd muistot-backend

Installing libraries

Install dependencies

pip install -r requirements.txt

Install dev dependencies

pip install -r requirements-dev.txt

Recreating database

Deletes all data and volumes

sh scripts/recreate_db.sh

Fill the local database with filler data. Without this, the local site won't work.

python database/test/filler.py

Running test server

sh scripts/run_server.sh

Stopping

docker-compose down -v

Testing

The tests can be run using the following command

sh scripts/run_tests.sh

This generates coverage reports in the terminal as well as HTML reports.


Coverage

Coverage is measured with branch coverage included. Branch coverage is disabled on a few lines in a handful of files.

Application Config

Check the defaults in the configuration models.

Two configs are currently in use for development.

Login

Logins are handled through email. Multiple mailer implementations are available in the mailer module. The ZonerMailer is used for mailing to the local MailDev.
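As a rough illustration of the mailer abstraction (the class and method names here are invented for the sketch, not the project's actual ones):

```python
from abc import ABC, abstractmethod

class Mailer(ABC):
    """Common interface a mailer implementation would satisfy (illustrative)."""

    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> None:
        ...

class CollectingMailer(Mailer):
    """Stand-in for a dev mailer such as the local MailDev target:
    it records outgoing messages instead of delivering them."""

    def __init__(self) -> None:
        self.outbox: list[dict] = []

    def send(self, to: str, subject: str, body: str) -> None:
        self.outbox.append({"to": to, "subject": subject, "body": body})

mailer = CollectingMailer()
mailer.send("user@example.com", "Login", "Your login link ...")
```

Swapping implementations behind one interface is what lets the same login flow target MailDev locally and a real provider in production.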

Session Storage

Sessions are stored in Redis and managed by the sessions module. The session token byte length is defined in the config. Tokens are base64 encoded in the Authorization header and stored in hashed form in Redis. Stale sessions are removed from the user pool on login, and the session manager maintains a link to all of a user's sessions so that it can clear them.

Sessions and user data can be accessed using the session middleware.

Databases

These are found under database

These are wrapped connections built on SQLAlchemy, with async support through asyncmy. Some custom wrappers are used to retain backwards compatibility with the old custom implementation.

The database connections are provided to the request scope from the database middleware.

OpenAPI

A small hack is applied to the OpenAPI schema in helpers.py to replace the original errors. This is needed because the application handles errors itself and uses a different schema from the default.

Repos

This whole area lives under repos. Repos take care of fetching and converting the data coming from and going into the database. The base module contains the base definitions and checks for repos, and the status module takes care of fetching resource status information. This status information is used by the repo decorators to manage access control.

A bit more clarification on the inner workings of this:

1. REQUEST                       -> REPO (init)
2.         -> STATUS (decorator) -> REPO (method)
3.                               <- REPO (result)

1. A call arrives at an endpoint
  1.1. The repo is constructed
  1.2. The configure method is called to add information available from the request
  1.3. A fully functional repo is returned
2. A repo method is called and the exists decorator on it intercepts the call
  2.1. The exists decorator queries the relevant exists helper for the repo class
  2.2. The decorator method sets up any attributes fetched from the database on the repo
  2.3. A Status is returned and injected into the repo method arguments if desired
  2.4. The call proceeds to the fully initialized repo
3. An entity is returned from the repo and it is mapped to a response
   Usually the database fetch_one is returned and it gets serialized into the response body
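The interception in step 2 can be sketched as a stripped-down decorator. This is illustrative only; the real `require_status` and exists helpers live in the repos base and exists modules:

```python
import asyncio
import functools
from enum import IntFlag, auto

class Status(IntFlag):
    """Illustrative subset of the real status flags."""
    EXISTS = auto()
    ADMIN = auto()

def require_status(required: Status):
    """Intercept a repo method: run the status check first, then inject the result."""
    def wrap(method):
        @functools.wraps(method)
        async def inner(self, *args, **kwargs):
            current = await self.check_status()  # stands in for the exists helper query
            if (current & required) != required:
                raise PermissionError("status check failed")
            return await method(self, *args, status=current, **kwargs)
        return inner
    return wrap

class DemoRepo:
    async def check_status(self) -> Status:
        return Status.EXISTS | Status.ADMIN  # pretend database lookup

    @require_status(Status.EXISTS | Status.ADMIN)
    async def modify(self, *, status: Status) -> bool:
        return Status.ADMIN in status

result = asyncio.run(DemoRepo().modify())
```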

Here is an example of a repo method:

@append_identifier('project', key='id')
@require_status(Status.DOES_NOT_EXIST | Status.SUPERUSER, errors={
    Status.DOES_NOT_EXIST | Status.AUTHENTICATED: HTTPException(
        status_code=HTTP_403_FORBIDDEN,
        detail="Not enough privileges",
    )
})
async def create(self, model: NewProject) -> PID:
    ...

First, the append_identifier decorator is used to add the project being created to the available identifiers for status checks. This is important to do for each method where this information is not directly available from the initial batch of identifiers given to the repo. In the usual case the repo creator gives the repo all the path parameters as identifiers.

Second, the require_status decorator is used to require a status check to pass before the method is called. The require_status decorator defines some default errors, but allows the user to provide custom error conditions through the decorator as is seen above. Due to the SUPERUSER check, we need to provide a custom error when the DOES_NOT_EXIST condition would be true without SUPERUSER present.

Usually the status checks do not need custom errors even with multiple status checks:

@append_identifier('site', value=True)
@require_status(Status.EXISTS | Status.ADMIN, Status.EXISTS | Status.OWN)
async def modify(self, site: SID, model: ModifiedSite, status: Status) -> bool:
    ...

In this case the multiple status conditions passed to the decorator cause it to allow any request matching any of the given statuses.
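The any-of semantics can be demonstrated with bit flags (a sketch with an illustrative `Status`; the real one lives in the status module):

```python
from enum import IntFlag, auto

class Status(IntFlag):
    """Illustrative subset of the real status flags."""
    EXISTS = auto()
    ADMIN = auto()
    OWN = auto()

def matches(current: Status, *allowed: Status) -> bool:
    """A request passes if its status contains every bit of ANY allowed combination."""
    return any((current & combo) == combo for combo in allowed)

# An admin of an existing site passes; a plain visitor to the same site does not.
admin_ok = matches(Status.EXISTS | Status.ADMIN,
                   Status.EXISTS | Status.ADMIN, Status.EXISTS | Status.OWN)
visitor_ok = matches(Status.EXISTS,
                     Status.EXISTS | Status.ADMIN, Status.EXISTS | Status.OWN)
```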

Security

The security module provides classes for users and crude session-scope-based resource access control. Access control is double-checked: once at the resource level to prevent grossly invalid calls, and a second time at the repo level to fetch the up-to-date information.

This could be improved further by revoking sessions upon receiving a permission-related issue from a repo, meaning for example that someone was removed as an admin.

Logging

The logging package hooks into the uvicorn error logger to propagate log messages.

Config

Config is loaded with the config package. The config is a single Pydantic model that is read from ./config.json or ~/config.json; otherwise the base (testing) config is used.

Testing

The tests are set up so that the setup builds the needed Docker image and installs the server as a package there. This has the added benefit of providing type hints for the project when used in conjunction with PyCharm remote interpreters (highly recommended, and free for students).

The main conftest.py takes care of loading the default database connection per session. The integration folder's conftest overrides the client's default database dependency with the initialized database dependency.

CI/CD

The CI/CD file takes care of contacting the deployment server through SSH to install the new version. This could be changed in the future to build a Docker image that is pushed to a remote registry, making deployment easier.

Further Development

This project could be refactored into smaller services e.g:

  • login
  • users
  • admin
  • data

It could then be run in a lower-cost environment, e.g. AWS Lambda. This would also allow breaking the project down into smaller parts that could be containerized individually and run like microservices.

Information

Here is the general structure of the api and a description of actions available for each resource.


The comments were scrapped.

NOTE: Latest description is in the swagger docs of the app, or partly at Muistotkartalla - Api

Developing the Project

Getting up to speed

The following steps should get you up to speed on the project structure:

  1. Read the previous section on developer notes
  2. Take a look at the database/schemas folder
    • See what is stored where
    • How are entities related
  3. Take a look at the muistot.backend module
    • See the api module for endpoints
      • See the imports and what they provide
      • Analyze the general endpoint file structure
      • See the Repo Creator
    • See the actual models for the api
    • Take a dive into the repos
      • Look at the base module
        • See the files attribute
      • Look at the exists module
      • Take a look at the repo and exists for memories
    • See the services module for user edit

Modifying the database schema

Remember to make changes that are somewhat backwards compatible and apply them to the actual database. The schema is in an okay state, but additions are much easier than deletions.

Creating new features

If you need to develop new endpoints or features the following is suggested:

  1. Create a new endpoint file under muistot.backend.api
    • Check imports from other api modules to see what is where
    • router = make_router(tags=["Relevant feature(s) from main.py"])
    • If a Cache is needed, get it from the middleware and use it with caution
  2. Decide if the feature requires existence checks and/or provides CRUD to a resource
    • NO: new file under services
    • YES: consider setting up a repo, evaluate which is easier
  3. Write service methods with Database as the first argument
  4. Remember async def and await
  5. Always write tests for the feature
    • At least do happy path tests for the endpoints
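Steps 3 and 4 can be sketched as a minimal service function (the `Database` class here is a stand-in for the project's wrapped async connection, and the query is invented):

```python
import asyncio
from typing import Any

class Database:
    """Stand-in for the project's async database wrapper."""
    async def fetch_one(self, query: str, values: dict[str, Any]) -> dict[str, Any]:
        return {"id": values["id"], "name": "demo"}  # pretend row

async def get_item(db: Database, item_id: int) -> dict[str, Any]:
    # Database is always the first argument; remember async def and await.
    return await db.fetch_one(
        "SELECT id, name FROM items WHERE id = :id",
        {"id": item_id},
    )

row = asyncio.run(get_item(Database(), 1))
```

Keeping the database as an explicit first argument makes services trivial to test: pass in a fake like the one above instead of a real connection.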

Further Development

Improving Caching

Add caching to queries.

Improving testing

Testing is quite slow at the moment; the tests could be split into smaller parts and run in parallel.

Improving configuration and repo model

The queries currently fetch data for every request, which is expensive. This could be refactored to use a caching service that fetches on an interval instead.
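One possible shape for such an interval-based refresh (a hypothetical sketch, not existing project code):

```python
import time
from typing import Any, Callable

class IntervalCache:
    """Serve a cached value, refreshing it at most once per interval."""

    def __init__(self, fetch: Callable[[], Any], interval: float) -> None:
        self._fetch = fetch
        self._interval = interval
        self._value: Any = None
        self._expires = 0.0

    def get(self) -> Any:
        now = time.monotonic()
        if now >= self._expires:
            self._value = self._fetch()  # the expensive query runs only here
            self._expires = now + self._interval
        return self._value

calls = 0
def expensive_query():
    global calls
    calls += 1
    return {"rows": calls}

cache = IntervalCache(expensive_query, interval=60.0)
first = cache.get()
second = cache.get()  # served from cache; expensive_query is not called again
```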

File Storage

The files are currently stored on disk in the Docker image, which is not ideal. This should be abstracted behind an interface and made to work with a storage bucket service.
