diff --git a/Dipal/Backend/Fox/README.md b/Dipal/Backend/Fox/README.md
new file mode 100644
index 0000000..405b105
--- /dev/null
+++ b/Dipal/Backend/Fox/README.md
@@ -0,0 +1,7 @@
+# Fox
+
+_**Fox**_ is a local server located inside the building that is responsible for device management, meter reading, and building data storage. Each building section has one fox. The operator responsible for this fox has access to control devices (such as utilities, cameras, and intercoms). Technically, a fox is a Linux server running our Fox application.
+
+
+
+Fox connects Kaiser and frontend applications to devices. The _**intercom**_ is connected via both HTTP and Asterisk. _**Cameras**_ are connected using the Owl service, which converts the RTSP stream from the cameras into the WebRTC format that smartphones understand. Fox also has its own _**database**_ that stores information about the devices.
diff --git a/Dipal/Backend/Kaiser/Database explanation.md b/Dipal/Backend/Kaiser/Database explanation.md
new file mode 100644
index 0000000..5e9e712
--- /dev/null
+++ b/Dipal/Backend/Kaiser/Database explanation.md
@@ -0,0 +1,109 @@
+# Database explanation
+You can see the full database diagram [here](https://app.diagrams.net/#G1HSoNpmIZhO9FB5pVuVnkpj8IlVA4h3VU).
+
+
+
+### Profiles
+
+The main subject in our system is the user. The user account is stored in Keycloak. There are 3 types of user profiles: user profile, company profile and operator profile. The difference between an account and a profile is that an account contains only technical information about the user, in particular secrets, while a profile contains personal information about the user and information about their contracts.
+
+A _**user**_ profile is the profile of an ordinary user of the Dipal smartphone application. They only have access to Iguana and Pyrador. A user can have multiple places (their flats), and the owner of a flat can give access to other users with a QR code or by invitation.
+
+A _**company**_ profile is the profile of a third-party provider company. It is assumed that third-party companies may contract with Dipal to use our app. They may have their own services and their own customer base (this base may be hidden from us). They may change the logo, colors and some elements of the user interface of our application for their services.
+
+An _**operator**_ is someone who is responsible for the object. This can be a security guard or a concierge, so the technical education requirements are minimal. The operator is responsible for the fault tolerance of their fox (or foxes) and has access to all operations on the fox.
+
+Every profile has its own contacts (phone number or email). We store all contact verification information in the database: the last verification code and its sent, verified, expired and next-try fields.
+
+### Place
+
+The _**place**_ collection stores all places available in our system. The place storage is represented as a tree: all places, except the root one, have a parent. For example, an `apartment` has a `building` as its parent, which in turn has the `root` as its parent.
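+
+A hedged sketch of how such a tree might be shaped (the field names here are illustrative, not the exact schema):
+
+```ts
+// Hypothetical shape of the documents in the place collection (illustrative only).
+interface Place {
+  _id: string;
+  parent_id: string | null; // null only for the root place
+  type: "root" | "building" | "apartment";
+  title: string;
+}
+
+// An apartment points to its building, the building points to the root.
+const places: Place[] = [
+  { _id: "root", parent_id: null, type: "root", title: "Dipal" },
+  { _id: "building-1", parent_id: "root", type: "building", title: "Building 1" },
+  { _id: "apartment-7", parent_id: "building-1", type: "apartment", title: "Flat 7" },
+];
+```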
+
+
+
+### User place
+
+A _**user place**_ is represented as a separate collection. One user can be in multiple user places. Every user place document has a members subcollection with the following fields (a hedged example document is sketched below):
+
+* title (user's custom place title)
+* status (user place confirmation status for this user)
+* privileges (different privileges to add new members, services, etc.)
+* services (subcollection with permitted services and devices)
+* qr code id (if access was given by qr code)
+
+There is only one user place owner. He can send a QR code or an invitation by phone number to the user he wants to add to his place. When the user scans the QR code or accepts the invitation, the owner can select the privileges and services he wants to share.
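+
+A hedged sketch of what a member document might look like (the field names are illustrative; check the database diagram for the exact schema):
+
+```ts
+// Illustrative member document from the members subcollection of a user place.
+interface UserPlaceMember {
+  profile_id: string;      // reference to the user profile
+  title: string;           // user's custom place title, e.g. "Home"
+  status: "pending" | "confirmed";                        // user place confirmation status
+  privileges: string[];    // e.g. ["add_members", "add_services"]
+  services: { service_id: string; devices: string[] }[];  // permitted services and devices
+  qr_code_id?: string;     // present only if access was given by a QR code
+}
+```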
+
+
+
+### Service
+
+A _**service**_ is a basic unit of functionality in Dipal. It can be any service provided by the company: intercom, camera, internet, utilities, etc. First you need to create a _**base service**_ and describe in it all the necessary information and settings that apply to this group of services. We can create several services of the same group (based on one base service), which makes the implementation of services fully dynamic. The service document also stores common information about some services: banners, videos, etc. Each company can have its own services.
+
+Service info and configuration are stored as key-value pairs. For example:
+
+```json
+{
+ "settings": [
+ {
+ "key": "protocol",
+ "value": "rtsp"
+ },
+ {
+ "key": "username",
+ "value": "comfortech"
+ },
+ {
+ "key": "password",
+ "value": "mypass"
+ },
+ {
+ "key": "ip",
+ "value": "10.1.0.6"
+ },
+ {
+ "key": "port",
+ "value": "8554"
+ },
+ {
+ "key": "rouths",
+ "value": "camera4"
+ }
+ ],
+ "necessary_info": [
+ {
+ "key": "caption",
+ "value": "entrance 1"
+ },
+ {
+ "key": "model",
+ "value": "model1"
+ },
+ {
+ "key": "resolution",
+ "value": "Full HD"
+ }
+ ]
+}
+```
+
+### Contract
+
+There are two types of contracts in the Dipal system: _**between a user and a company**_ and _**between companies**_. Both are stored inside one collection. The contract document contains the following fields (a hedged example is sketched after the list):
+
+* provided services
+* provided info and docs
+* source company id
+* counterpart (user or company profile id)
+* contract content
+* contract status
+* contract start and end
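+
+A hedged sketch of such a contract document (the field names are illustrative, not the exact schema):
+
+```ts
+// Illustrative contract document (field names are assumptions, not the exact schema).
+interface Contract {
+  source_company_id: string;            // company that provides the services
+  counterpart_id: string;               // user or company profile id
+  counterpart_type: "user" | "company";
+  services: string[];                   // provided service ids
+  info_and_docs: string[];              // provided info and document links
+  content: string;                      // contract content
+  status: "pending" | "active" | "terminated"; // contract status
+  start_date: Date;                     // contract start
+  end_date: Date;                       // contract end
+}
+```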
+
+### Plan
+
+A _**plan**_ gives you access to a particular service for a fee. One service can have multiple plans with different fees. For each plan, you can set up different time intervals, pricing, discounts, and support for automatic payments.
+
+### Fox
+
+The _**fox**_ collection contains all existing foxes. All foxes are bound to a specific place (usually a house or building). Information about IP and MAC addresses and connected akitas and devices is also stored there.
+
+> We used to use the MQTT broker to connect to foxes, but now we connect foxes to Kaiser using an HTTP server.
diff --git a/Dipal/Backend/Kaiser/Installation.md b/Dipal/Backend/Kaiser/Installation.md
new file mode 100644
index 0000000..fcf618d
--- /dev/null
+++ b/Dipal/Backend/Kaiser/Installation.md
@@ -0,0 +1,38 @@
+# Installation
+
+If you install the Kaiser project completely on your system, you also need to run third-party services. You can use the docker-compose files from our DevOps repos.
+
+* [Keycloak](http://194.226.0.195:32127/). Account management system.
+* [Kafka](http://194.226.0.195:32127/). Message broker with many features.
+* [Redis](http://194.226.0.195:32127/). Data cache system.
+
+Configure environment variables in the `.env` files of your microservices.
+
+### Microservice common runbook
+
+Running a microservice is simple:
+
+```bash
+npm install
+npm run start
+```
+
+If you already have `node_modules` or `dist` directories left over from a previous build, clean everything and reinstall:
+
+```bash
+rm ./node_modules -R
+rm ./dist -R
+npm cache clean --force
+npm install
+npm run start
+```
+
+### Get AUTH\_KEY for gateways
+
+To get the key, take it from the Keycloak panel or ask your supervisor:
+
+Choose a realm (for example dipal\_develop or master). Go to Realm Settings -> Keys and copy the `Public key` provided by `rsa-generated`. Set it in your `.env` file as `AUTH_KEY`.
+
+### Get SECURITY\_KEY
+
+In some services, we need a security key to encrypt and decrypt secrets (for example `e87fb5065e795ea8eb6c71a5756e31c6`). If you connect to a shared MongoDB and Keycloak, ask your supervisor to give you a SECURITY\_KEY. If you work with the system on your machine, generate it [by yourself](https://onlinehashtools.com/generate-random-md5-hash). After that, set it in your `.env` file.
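+
+If you prefer to generate the key locally instead of using the website, a 32-character hex key of the same shape can be produced with Node's built-in `crypto` module (a sketch, not an official procedure):
+
+```ts
+import { randomBytes } from "crypto";
+
+// 16 random bytes -> 32 hex characters, the same shape as the example key above.
+const securityKey = randomBytes(16).toString("hex");
+console.log(securityKey); // e.g. e87fb5065e795ea8eb6c71a5756e31c6
+```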
diff --git a/Dipal/Backend/Kaiser/Microservices.md b/Dipal/Backend/Kaiser/Microservices.md
new file mode 100644
index 0000000..90e66ec
--- /dev/null
+++ b/Dipal/Backend/Kaiser/Microservices.md
@@ -0,0 +1,65 @@
+# Microservices
+Each microservice has a core/common module with modules that can be useful in any microservice:
+
+* Kafka module. Sends messages via Kafka.
+* Logger module. Logs all the messages from the microservice.
+* HTTP response. Generates a generic HTTP response with your data.
+* Health service. Makes a health check.
+* Prometheus service. Sends info to Prometheus.
+
+> Do not use `console.log`, use the _**logger service**_ instead.
+
+For detailed documentation about any microservice, you can view its Compodoc:
+
+```sh
+npm run compodoc
+```
+
+All gateways use a _**JWT token**_ to guard access and to get the user ID from every request. This means that when anyone logs into their account, they receive an access token with user account information (see below). This access token also contains encoded Keycloak security information, so our _**gateways reject access tokens**_ not created by our system. We also get the _**Account ID**_ and _**Username**_ (phone number) from the JWT token.
+
+JWT token data example:
+
+```json
+{
+ "exp": 1673560863,
+ "iat": 1673524863,
+ "jti": "6d29c45d-1f63-4b0f-a3fc-22cc9d8db25c",
+ "iss": "http://localhost:8080/auth/realms/master",
+ "sub": "99b2ca6d-b179-4161-9c25-ad84ab7ca438",
+ "typ": "Bearer",
+ "azp": "comfortech",
+ "session_state": "9b1a9973-b270-4e27-b83e-3284b867e4f1",
+ "acr": "1",
+ "resource_access": {
+ "comfortech": {
+ "roles": [
+ "user"
+ ]
+ }
+ },
+ "scope": "email profile",
+ "sid": "9b1a9973-b270-4e27-b83e-3284b867e4f1",
+ "email_verified": true,
+ "preferred_username": "+79516519741"
+}
+```
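+
+As a rough illustration, a gateway guard could extract the Account ID and Username from such a token like this (a sketch using the `jsonwebtoken` package; it assumes `AUTH_KEY` holds the Keycloak realm public key in PEM format):
+
+```ts
+import * as jwt from "jsonwebtoken";
+
+// AUTH_KEY is the realm public key taken from Keycloak (see "Get AUTH_KEY for gateways").
+// Tokens not issued by our realm fail verification and are rejected.
+const publicKey = process.env.AUTH_KEY as string;
+
+function getAccountFromToken(accessToken: string) {
+  const payload = jwt.verify(accessToken, publicKey, { algorithms: ["RS256"] }) as jwt.JwtPayload;
+  return {
+    accountId: payload.sub,               // Account ID
+    username: payload.preferred_username, // phone number
+  };
+}
+```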
+
+### Iguana
+
+**Iguana** is the main gateway responsible for communication between Peacock and Kaiser (via HTTP).
+
+### Pyrador
+
+**Pyrador** is a gateway responsible for authentication and user account management (for both users and admins).
+
+### Zoo
+
+**Zoo** is a gateway responsible for communication with the admin panel only and for environment configuration scripts.
+
+### Crow
+
+**Crow** is a microservice responsible for all data operations, account management and some business logic.
+
+### Goose
+
+**Goose** is a microservice responsible for connection with foxes (via MQTT).
+
+### Pigeons
+
+**Pigeons** is a microservice responsible for notifications (push, SMS, email).
diff --git a/Dipal/Backend/Kaiser/README.md b/Dipal/Backend/Kaiser/README.md
new file mode 100644
index 0000000..7173a9c
--- /dev/null
+++ b/Dipal/Backend/Kaiser/README.md
@@ -0,0 +1,12 @@
+# Kaiser
+
+Kaiser is a cloud-based system responsible for managing accounts, payment transactions, location information, etc. It is a core part of the Dipal project and provides an interface for both the frontend application and the admin panel. It is divided into microservices, which are connected to each other using the Kafka message broker.
+
+
+
+The system diagram is divided into 4 layers:
+
+* Frontend. This can be either a smartphone app (called _**Peacock**_) or an admin panel (web app).
+* Gateways. They have open ports for APIs and redirect requests to specific microservices depending on the functionality.
+* Microservices. Process requests, send requests to third-party services (databases, billing systems, foxes).
+* Third-party services (e.g. databases) and foxes.
diff --git a/Dipal/Backend/Kaiser/Third-party services.md b/Dipal/Backend/Kaiser/Third-party services.md
new file mode 100644
index 0000000..7f13cb7
--- /dev/null
+++ b/Dipal/Backend/Kaiser/Third-party services.md
@@ -0,0 +1,137 @@
+# Third-party services
+### Keycloak
+
+We use Keycloak for user management because it is a convenient and multifunctional system. All passwords are stored in a secure database, making it impossible to retrieve them.
+
+> For `username` we use user's phone number.
+
+#### How do we store passwords?
+
+Our system has no permanent passwords. Instead, each time a user logs in, we send an SMS with a 6-digit one-time password (OTP). This OTP expires after `OTP_EXPIRATION_TIME` milliseconds. The user cannot request a new OTP until `NEXT_TRY_TIME` milliseconds have passed. Both `OTP_EXPIRATION_TIME` and `NEXT_TRY_TIME` default to `60000`.
+
+Inside Keycloak, in the account attributes, we have timers:
+
+* `next_try_timestamp` - the timestamp of the next possible login attempt, according to the server time
+* `otp_expiration_timestamp` - the timestamp when the OTP expires, according to the server time
+
+> In the development version, we use the `111111` mock for OTPs of any account.
+
+When issuing an OTP, we encrypt it with an `init vector (IV)` and a `security key`:
+
+* We generate a random **6-digit token (OTP)** and send it to the user via SMS.
+* We get the _**security key**_ from the environment variables.
+* We generate a random _**init vector**_, add the OTP and a salt from the code, and store it as the _**password of the Keycloak account**_.
+* We store the _**result of encryption**_ in the `session_data` attribute.
+
+When the user logs in (enters their OTP), we use `session_data` as the raw data and `OTP + salt` as the init vector.
+
+This means that the data to be accessed is stored in several parts:
+
+* Kubernetes secrets or environment variables (the security key).
+* Keycloak (as a password and as an attribute).
+* Code constants (this is where we store the salt).
+
+Without all this data, a hacker will not be able to gain access to someone else's account. Moreover, no one but the user himself can see the OTP sent to him.
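+
+A simplified sketch of the login-time decryption described above (it assumes AES-256-CBC, a hypothetical salt value, and that `OTP + salt` together form a 16-byte init vector; the real algorithm and key handling may differ):
+
+```ts
+import { createDecipheriv } from "crypto";
+
+// Everything below is an assumption for illustration: the real salt, algorithm
+// and session_data layout are defined in the service code.
+const SALT = "0123456789"; // hypothetical 10-character salt stored in code constants
+
+function decryptSessionData(sessionData: string, otp: string): string {
+  const securityKey = process.env.SECURITY_KEY as string; // 32-character key
+  const initVector = Buffer.from(otp + SALT);             // 6 + 10 = 16 bytes
+  const decipher = createDecipheriv("aes-256-cbc", securityKey, initVector);
+  let decrypted = decipher.update(sessionData, "hex", "utf8");
+  decrypted += decipher.final("utf8");
+  return decrypted; // compared with the data stored for this account
+}
+```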
+
+#### Roles
+
+We store information about roles (superadmin, admin, user) inside Keycloak. _**Superadmin**_ has full control over the system. _**Admin**_ can handle only those objects he is responsible for. There are different privileges for administrators and users. _**User**_ is just a user of the Dipal app. Only superadmin and admins have access to the admin panel (Zoo).
+
+
+
+#### HTTPS required
+
+If you try to connect to Keycloak on a server by its external IP, it will throw an `HTTPS required` error. To fix it, first go into the Docker container. If you use Docker Compose, cd to the directory with the yml file and run:
+
+```text
+sudo docker-compose exec keycloak sh
+```
+
+If you are using Docker:
+
+```text
+sudo docker exec -it <container_name> sh
+```
+
+After you enter the container, execute some instructions:
+
+```sh
+cd /opt/jboss/keycloak/bin
+./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin
+./kcadm.sh update realms/master -s sslRequired=NONE
+```
+
+#### "I accidentally deleted an admin account"
+
+If you deleted an admin account, go to the container and execute this from the `bin` directory:
+
+```text
+./add-user-keycloak.sh --server http://ip_address_of_the_server:8080/admin --realm master --user admin --password adminPassword
+```
+
+### Firebase
+
+_**Firebase**_ is a set of hosting services for any type of application. We use its messaging system to send data from Dipal to the user, such as _**push notifications**_. We set up a websocket connection between the Dipal servers and Firebase and between the Dipal clients and Firebase.
+
+#### Integration with Dipal
+
+To connect Firebase to Pigeons, we've created an app inside the Firebase settings and saved `google-services.json`. This information must be set in the Pigeons `.env` file. The frontend part should request a registration token from Firebase. It then sends the received registration token to Kaiser, and the backend part stores it in the user's Keycloak attribute `registration_token`. When the system needs to send a notification to this user, it takes the registration token from Keycloak and sends the data to the Firebase server.
+
+`vapidKey` is _**not**_ a token. The token looks like this:
+
+```text
+eUyCZGz_QMKZIzI7QOAq-U:APA91bGTymTn4doKJNl_aBVVqHKXr9y2JyvLjlhzJosPJUYsXBHCEbZL3A1rEpVUZOuvN8UQvlO1_A2bx-ha8lVdHUJ0RDVpyQ3BTxoY6qO2-QcK1ySxAmn5_NXgsOEEjyywh_eyuiZG
+```
+
+Every notification has `data` and `push` fields. If the notification does not need a visible push notification (e.g. an intercom call), the `push` field is not sent.
+
+Notification example:
+
+```json
+{
+ "data": {
+ "type": "user_place",
+ "action": "request_confirmed",
+ "member_name": "Кирилл Некрасов",
+ "place_name": "flat 7"
+ },
+ "push": {
+ "title": "Place request was accepted!",
+ "body": "Кирилл Некрасов confirmed your request to flat 7!"
+ }
+}
+```
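+
+A hedged sketch of how Pigeons might hand such a notification to Firebase with the `firebase-admin` SDK (the actual service may structure this differently):
+
+```ts
+import * as admin from "firebase-admin";
+
+// Initialized once with the Firebase credentials configured for Pigeons.
+admin.initializeApp({ credential: admin.credential.applicationDefault() });
+
+async function sendNotification(registrationToken: string): Promise<void> {
+  await admin.messaging().send({
+    token: registrationToken, // taken from the user's Keycloak attribute
+    data: {
+      type: "user_place",
+      action: "request_confirmed",
+      member_name: "Кирилл Некрасов",
+      place_name: "flat 7",
+    },
+    // Omitted for silent notifications such as intercom calls.
+    notification: {
+      title: "Place request was accepted!",
+      body: "Кирилл Некрасов confirmed your request to flat 7!",
+    },
+  });
+}
+```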
+
+### Apache Kafka
+
+Apache Kafka guarantees message delivery and is well suited to a microservice architecture.
+
+
+
+#### Rules for sending messages
+
+Earlier we used the following format for Kafka topics: `from_<source>_to_<destination>_to_<action>_v1_0_0`, e.g. `from_zoo_to_crow_to_delete_settings_v1_0_0`. As you can see, the source and destination services are defined statically.
+
+Now we use another rule for Kafka topics: `<service>.<entity>.<action>`, e.g. `crow.settings.delete`. Now it doesn't matter who the message came from.
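+
+For example, with the NestJS Kafka client a message to such a topic could be emitted like this (a sketch; the real producer code lives in the common Kafka module and its injection token may differ):
+
+```ts
+import { Inject, Injectable } from "@nestjs/common";
+import { ClientKafka } from "@nestjs/microservices";
+
+@Injectable()
+export class SettingsProducer {
+  // "KAFKA_CLIENT" is a hypothetical injection token registered in the common Kafka module.
+  constructor(@Inject("KAFKA_CLIENT") private readonly client: ClientKafka) {}
+
+  deleteSettings(settingsId: string): void {
+    // The topic name follows the <service>.<entity>.<action> rule.
+    this.client.emit("crow.settings.delete", { settings_id: settingsId });
+  }
+}
+```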
+
+### Redis
+
+Redis (REmote DIctionary Server) is an open-source, in-memory NoSQL database management system that works with key-value data structures. We use it as a data cache.
+
+### Billing systems
+
+In the future, our system is supposed to support various billing systems, but for now it only supports the Comfortel (UUT-Telecom) billing system.
+
+Every account in our billing system has an _**account number**_. This account number is specified in the contract with UUT-Telecom. To establish a connection to the billing system server, your server's external IP address must be on the list of allowed billing system IP addresses. First you need to obtain an access token for the billing system. The test account is `9999999` with the password `password`. After that you can get a list of information about the user.
+
+We save the `account_number` in the user collection of our database.
+
+### Filebeat
+
+Filebeat is a lightweight log shipper: it collects logs from our services and forwards them for storage and analysis.
+
+### MongoDB
+
+MongoDB is a non-relational document database that provides support for JSON-like storage. The data is stored inside _**documents**_ with a similar data structure, which themselves are stored in different **collections**. One collection is for one type of records.
+
+To make a unique (primary) key for a document field, you can create a unique index on it.
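+
+For example, with the Node.js MongoDB driver a unique index could be created like this (the database, collection and field names here are only illustrative):
+
+```ts
+import { MongoClient } from "mongodb";
+
+async function createUniqueIndex(): Promise<void> {
+  const client = new MongoClient(process.env.DB_URI as string); // hypothetical connection string variable
+  await client.connect();
+  // Rejects documents that duplicate an existing account_number.
+  await client
+    .db("dipal") // hypothetical database name
+    .collection("user")
+    .createIndex({ account_number: 1 }, { unique: true });
+  await client.close();
+}
+```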
+
+> You can use [MongoDB Compass](https://www.mongodb.com/products/compass) to work with our databases.
diff --git a/Dipal/README.md b/Dipal/README.md
new file mode 100644
index 0000000..8cb3782
--- /dev/null
+++ b/Dipal/README.md
@@ -0,0 +1,13 @@
+# Overview
+
+**Dipal** ("digital pal") is a system for building automation. Its purpose is **to simply communication** between residents, the management company, and the security service, with automatic device control. The use of the Dipal system involves **common** and **private areas**. **Common area** includes services outside of the residents' apartments such as video surveillance, intercom, utilities. **Private area** includes services of the residents' apartments such as smart home, private cameras. The system helps to both residents and building employees to save their time, energy and minimize the risks.
+
+Thanks to building automation, those responsible for the building can prevent accidents, such as a standpipe break, in time. The plumber will automatically receive the request and fix the leak. If the leak is in an apartment, the tenants will be notified. Residents can also communicate with the management company and the security service by creating a ticket in the app.
+
+The system can also automate payments for the Internet and utilities for residents. Users can see the electricity or water meter readings in real time. They can also connect the system to their private smart home devices. Another function, for example, is to alert residents about power surges or an unclosed faucet.
+
+### **Technical part**
+
+The Dipal app is an online service for the Dipal system. The **frontend part** is a smartphone app written in Flutter and React. The **backend part** is written in Node.js with TypeScript and the NestJS framework. The backend is divided into 2 parts: **Kaiser** (cloud server) and **Fox** (local server) with a one-to-many connection. Kaiser is responsible for account management, payment transactions, place information, etc. Fox is responsible for the in-building logic, such as device control. Both parts have a microservice architecture.
+
+
diff --git a/README.md b/README.md
index e69de29..19f0b4a 100644
--- a/README.md
+++ b/README.md
@@ -0,0 +1,3 @@
+# Athena
+
+The documentation storage for Dipal
diff --git a/Workflow/Architecture patterns.md b/Workflow/Architecture patterns.md
new file mode 100644
index 0000000..e167e8a
--- /dev/null
+++ b/Workflow/Architecture patterns.md
@@ -0,0 +1,24 @@
+# Architectural patterns
+To write quality code, you must read a lot about the architectural patterns we use.
+
+## Microservice architecture
+
+A _**microservice architecture**_ involves dividing an application into independent microservices, each responsible for a certain part of the business logic. During deployment, depending on the load on a particular microservice, a certain number of microservice instances can be created, which allows you to adjust to the application load. For example, if the application frequently uses smart home device management functionality but rarely uses messaging or SMS functionality, you can increase the number of microservice instances responsible for device management and reduce the number of notifier microservice instances. Microservices are independent of the language they are written in and communicate with each other over HTTP or through message brokers such as _**Kafka**_, _**RabbitMQ**_ or _**MQTT**_.
+
+You can read about microservice architecture [here](https://microservices.io/patterns/microservices.html).
+
+## Domain-driven design architecture
+
+You can read about domain-driven architecture [here](https://www.domainlanguage.com/wp-content/uploads/2016/05/DDD_Reference_2015-03.pdf).
+
+## Event-driven design architecture
+
+You can read about event-driven architecture [here](https://elementallinks.com/el-reports/EventDrivenArchitectureOverview_ElementalLinks_Feb2011.pdf).
+
+## CQRS
+
+NestJS has its own instruments for [CQRS implementation](https://docs.nestjs.com/recipes/cqrs). You can study an example NestJS implementation [here](https://github.com/kamilmysliwiec/nest-cqrs-example).
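+
+A minimal sketch of the pattern with the NestJS CQRS package (all names here are illustrative):
+
+```ts
+import { CommandHandler, ICommandHandler } from "@nestjs/cqrs";
+
+// A command carries the intent and its data.
+export class DeleteSettingsCommand {
+  constructor(public readonly settingsId: string) {}
+}
+
+// The handler contains the write-side business logic for this command.
+@CommandHandler(DeleteSettingsCommand)
+export class DeleteSettingsHandler implements ICommandHandler<DeleteSettingsCommand> {
+  async execute(command: DeleteSettingsCommand): Promise<void> {
+    // ...delete the settings identified by command.settingsId
+  }
+}
+
+// A controller or service dispatches the command through the command bus:
+// await this.commandBus.execute(new DeleteSettingsCommand(id));
+```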
+
+## Design patterns
+
+Design patterns are a powerful tool for writing reusable and comprehensible code. [Here](https://refactoring.guru/design-patterns/) you can find a good guide for all of them.
diff --git a/Workflow/Development rules.md b/Workflow/Development rules.md
new file mode 100644
index 0000000..b844b34
--- /dev/null
+++ b/Workflow/Development rules.md
@@ -0,0 +1,311 @@
+# Development rules
+## Project configuration rules
+
+> If you create your own project, do not forget to write a runbook. It should contain detailed instructions on how to run your project.
+
+Runbook example:
+
+````markdown
+## Service build information
+
+There are different stages of building the application for this service. Based on the environment you want to deploy to, we have different ways to build the application. The following information may help with building the service.
+
+### Regular user
+
+```bash
+npm install
+npm run build
+npm run test:ci
+npm run start:{dev || debug || prod}
+```
+
+### Advanced user
+```bash
+cd scripts
+bash run.sh -h
+
+2022.05.30.14.43
+Usage: $(basename "${BASH_SOURCE[0]}") [-h] [-buildDocker] [-runDocker] [-runApp] [-runDoc] [-packageHelm]
+This script helps you to run the application in different forms. below you can get the full list of available options.
+Available options:
+-h, --help Print this help and exit
+-buildDocker Build the docker image called "imageName:latest"
+-runDocker Build the docker image and run on local machine
+-runApp Run application with npm in usual way for development
+-runDoc Generate the code documentation
+-packageHelm makes a helm package from the helm chart.
+```
+````
+
+### package.json configuration
+
+`package.json` example:
+
+```json
+{
+"name": "My favourite microservice",
+"version": "0.0.1",
+"description": "Microservice not responsible for anything",
+"author": "Dipal Team",
+"private": true,
+"license": "Apache 2.0",
+"scripts": {
+ "build": "nest build",
+ "format": "prettier --write \"src/**/*.ts\" \"test/**/*.ts\"",
+ "start": "nest start",
+ "start:dev": "nest start --watch",
+ "start:debug": "nest start --debug --watch",
+ "start:prod": "node dist/main",
+ "lint": "eslint \"{src,apps,libs,test}/**/*.ts\" --fix",
+ "test": "jest",
+ "test:pipeline": "jest",
+ "test:watch": "jest --watch",
+ "test:cov": "jest --coverage",
+ "test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
+ "compodoc": "./node_modules/.bin/compodoc -p tsconfig.json -w -s -r 8005 --theme 'readthedocs'",
+ "rei": "rm -r node_modules/ dist/ package-lock.json && npm i "
+},
+"dependencies": {},
+"devDependencies": {},
+"jest": {
+ "moduleFileExtensions": [
+ "js",
+ "json",
+ "ts"
+ ],
+ "rootDir": "src",
+ "testRegex": ".*\\.spec\\.ts$",
+ "transform": {
+ "^.+\\.(t|j)s$": "ts-jest"
+ },
+ "collectCoverageFrom": [
+ "**/*.(t|j)s"
+ ],
+ "coverageDirectory": "../coverage",
+ "testEnvironment": "node"
+}
+}
+```
+
+> If you install a package that is not supposed to be used in production (like compodoc, faker or TypeScript type definitions), add it to `devDependencies`, not `dependencies`. Otherwise it can be the reason why your deployment fails in the pipeline.
+
+### Environment variables file
+
+Environment variables provide information about the process's operating environment (production, development, build pipeline, and so on). Environment variables in Node are used to store sensitive data such as passwords, API credentials, and other information that should not be written directly in code. Any variables or configuration details that may differ between environments must be configured through environment variables.
+
+You should always provide a `.env.example` file with all the environment variables with empty values. This is an example:
+
+```text
+DB_USERNAME=
+DB_PASSWORD=
+DB_HOST=
+DB_PORT=
+DB_NAME=
+```
+
+> Do not forget **to add** `.env` file inside the `.gitignore` file.
+
+Inside your code you should import environment variables using the `dotenv` lib. You must do this _**before**_ all your other imports, especially the config imports, in the `main.js` file:
+
+```js
+import * as dotenv from "dotenv";
+
+dotenv.config();
+
+import { config } from "./infrastructure/config/config";
+```
+
+### Compodoc
+
+NestJS, like Angular, supports Compodoc to automatically create documentation based on developers' comments. This is internal documentation for backend developers. In it you can see the project structure, modules, endpoints, variables, functions, etc.
+
+You need to set the port inside package.json and be sure that it is free:
+
+```text
+"compodoc": "./node_modules/.bin/compodoc -p tsconfig.json -w -s -r 8005 --theme 'readthedocs'",
+```
+
+So always write comments on your code.
+
+### Lint
+
+ESLint statically analyzes your code to quickly find problems in JS/TS code. You can integrate it inside your VSCode.
+
+We have a `.eslintrc.json` file with standard rules:
+
+```json
+{
+ "env": {
+ "browser": false,
+ "es2021": true
+ },
+ "extends": [
+ "prettier",
+ "eslint:recommended",
+ "plugin:@typescript-eslint/recommended"
+ ],
+ "overrides": [],
+ "parser": "@typescript-eslint/parser",
+ "parserOptions": {
+ "ecmaVersion": "latest",
+ "sourceType": "module"
+ },
+ "plugins": ["@typescript-eslint"],
+ "rules": {
+ "indent": ["warn", 2],
+ "linebreak-style": ["warn", "unix"],
+ "quotes": ["warn", "double"],
+ "semi": ["warn", "always"]
+ }
+}
+```
+
+And the `.prettierrc` file":
+
+```json
+{
+"arrowParens": "avoid",
+"bracketSpacing": false,
+"endOfLine": "lf",
+"insertPragma": false,
+"singleAttributePerLine": false,
+"bracketSameLine": true,
+"printWidth": 120,
+"proseWrap": "always",
+"quoteProps": "as-needed",
+"requirePragma": false,
+"semi": true,
+"singleQuote": false,
+"tabWidth": 2,
+"trailingComma": "es5",
+"useTabs": false,
+"parser": "typescript"
+}
+```
+
+## Coding style
+
+In our company we keep a close eye on the quality of the code. Since we do not have much time for refactoring, your code must follow all the principles and be _**easy to read**_ for all members of our team. _**Configure your VSCode**_ according to the style that we use.
+
+You can find coding style rules [here](https://google.github.io/styleguide/tsguide.html).
+
+## HTTP REST
+
+You need to write standardized requests and return correct responses from your APIs.
+
+You can find HTTP requests and responses rules [here](https://aws.amazon.com/what-is/restful-api/).
+
+## Test-driven development
+
+We stick to test-driven development (TDD) because we need to be 100% sure that our code works perfectly and there are no silly bugs in it. You alone are responsible for unit tests of your code. When you write tests first, you can better understand the functionality you are implementing. While writing tests, you must think as a user of your module. You should predict all the situations that can happen and check if the code works correctly.
+
+Here are the common rules you must follow when writing unit tests:
+
+* One test checks one thing.
+* Create mocks and stubs for dependencies.
+* Try as many different types of input as possible.
+* Create a factory for input and output information.
+
+Test example:
+
+```typescript
+ it("Should encrypt according to args", async () => {
+ let algorithm: any, securityKey: any, initVector: any, data: any;
+ const args = test_cases.crypto.args;
+ const expectation = test_cases.crypto.expectation;
+ jest.spyOn(crypto, "createCipheriv").mockImplementation((x: any, y: any, z: any) => {
+ algorithm = x;
+ securityKey = y;
+ initVector = z;
+ return {
+ update: (_x: any, _y: any, _z: any) => {
+ data = _x;
+ return expectation.update;
+ },
+ final: (_x: any) => expectation.final,
+ } as Cipher;
+ });
+
+let result = service.encrypt(args.data, args.initVector, args.algorithm);
+
+expect(result).toBe(expectation.update + expectation.final);
+ expect(algorithm).toBe(args.algorithm);
+ expect(initVector).toBe(args.initVector);
+ expect(data).toBe(args.data);
+ expect(securityKey).toBeDefined();
+ expect(crypto.createCipheriv).toBeCalledTimes(1);
+ });
+ });
+```
+
+When you start writing code, all the tests will fail. Your task is to make them successful.
+
+## NestJS application architecture and rules for module creation
+
+For clean code, we use Domain-Driven Design. The simple structure of a project is shown below (a minimal sketch of this layering follows the list):
+
+* Application.
+ * Controllers. Handle incoming requests and return responses to the client.
+ * DTOs. Define how the data will be sent over the network.
+* Domain.
+ * Decorators.
+ * Constants.
+ * Enums.
+ * Filters. Process all unhandled exceptions across an application.
+ * Guards. Determine whether a given request will be handled by the route handler or not (for example, authentication).
+ * Interceptors. Process the request before handling and the response after handling.
+ * Interfaces.
+ * Modules.
+ * Repositories. Mediate between the domain and data mapping layers.
+ * Services. Implement business logic.
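+
+A tiny sketch of how these layers fit together in a NestJS module (all names are illustrative):
+
+```ts
+import { Controller, Get, Injectable, Module, Param } from "@nestjs/common";
+
+// Domain layer: the service implements the business logic,
+// usually through a repository that talks to MongoDB.
+@Injectable()
+export class PlaceService {
+  async findById(id: string) {
+    return { id, title: "flat 7" };
+  }
+}
+
+// Application layer: the controller only handles the HTTP request and response.
+@Controller("places")
+export class PlaceController {
+  constructor(private readonly placeService: PlaceService) {}
+
+  @Get(":id")
+  getPlace(@Param("id") id: string) {
+    return this.placeService.findById(id);
+  }
+}
+
+@Module({ controllers: [PlaceController], providers: [PlaceService] })
+export class PlaceModule {}
+```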
+
+> Always write a _**Swagger**_ documentation for your endpoints.
+
+> When you write DTO, validate all the fields with `class-validator`.
+
+DTO example:
+
+```ts
+export class SetRegistrationTokenDTO {
+ /**
+ * Registration token for the user's app
+ */
+ @IsNotEmpty()
+ @IsString()
+ @Matches(/[A-Za-z0-9\-_]{22}:[A-Za-z0-9\-_]{140}/)
+ @ApiProperty({
+ description: "Registration firebase token",
+ example:
+ "fwcHeh8bSh6KRGQPd-6B2H:APA91bEV1KGorVt7bPHcgbuRCJrxWVmgXdUZvVmnOfB9rYhDGAudTtSvZu8qwT_hErYp0ONWR8MQIzpN6B7FlwdpMMYG2vjU1T1KXE0NEKhZc1d8hc9YwKWqXqNgyMnRuhN074Wziw9Q",
+ })
+ registration_token: string;
+}
+```
+
+> Don't forget to use NestJS features (e.g. filters) and design patterns.
+
+> When creating any new environment variable _**always**_ notify your supervisor.
+
+You can read full NestJS documentation [here](https://docs.nestjs.com/).
+
+## MongoDB and query rules
+
+You can read the full MongoDB documentation [here](https://www.mongodb.com/docs/).
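+
+A couple of hedged query examples with the Node.js MongoDB driver (the database, collection and field names are illustrative):
+
+```ts
+import { MongoClient } from "mongodb";
+
+async function exampleQueries(client: MongoClient): Promise<void> {
+  const places = client.db("dipal").collection("place"); // hypothetical names
+
+  // Find all children of a given place, returning only the title field.
+  const children = await places
+    .find({ parent_id: "building-1" }, { projection: { title: 1, _id: 0 } })
+    .toArray();
+
+  // Count apartments across the whole tree.
+  const apartments = await places.countDocuments({ type: "apartment" });
+
+  console.log(children, apartments);
+}
+```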
+
+## Documentation and comments writing rules
+
+* Follow the rules of the English language.
+* Don't write a lot of text, write as concisely and clearly as possible.
+* Put the Swagger documentation in a separate file in the `docs` folder.
+* You can see code documentation using `compodoc`.
+
+Good comments example:
+
+```js
+/**
+** matches US social security number
+**/
+let pattern = new RegExp("^\\d{3}-\\d{2}-\\d{4}$");
+
+//=============================================================================================================
+// use this thing to divide different functions and methods
+
+// TODO: implement functionality
+```
diff --git a/Workflow/Docker containerization and Dockerfile rules.md b/Workflow/Docker containerization and Dockerfile rules.md
new file mode 100644
index 0000000..4a7eee5
--- /dev/null
+++ b/Workflow/Docker containerization and Dockerfile rules.md
@@ -0,0 +1,144 @@
+# Docker containerization and Dockerfile rules
+Docker is a technology for running applications inside an isolated container, as if they were running on a virtual machine. You need to specify everything that will be installed inside the container in your `Dockerfile`. You can see a Dockerfile example in the boilerplate.
+
+> When you are going to make a pull request **always** test your app inside the docker container.
+
+To create a container with a given name based on an image, use the following command:
+
+```text
+sudo docker run --name <container_name> -d <image_name>
+```
+
+Creating a container named example based on an nginx image:
+
+```text
+sudo docker run --name example -d nginx
+```
+
+Use this command to view the currently running containers:
+
+```text
+sudo docker ps
+```
+
+To run the created container in the background, use the following command:
+
+```text
+sudo docker container start <container_name>
+```
+
+To go inside a container that is running in the background, run the following command:
+
+```text
+sudo docker exec -i -t <container_name> /bin/bash
+```
+
+To exit the container, use the standard `exit` command.
+
+To remove a container, use the rm option:
+
+```text
+sudo docker rm -f <container_name>
+```
+
+> To connect your container with other containers, you need to create networks; see more about this in the [official documentation](https://docs.docker.com/network/).
+
+Dockerfile example for multi-stage build:
+
+```dockerfile
+FROM node:fermium-alpine AS environment
+
+ARG MS_HOME=/app
+ENV MS_HOME="${MS_HOME}"
+
+ENV MS_SCRIPTS="${MS_HOME}/scripts"
+
+ENV USER_NAME=node USER_UID=1000 GROUP_NAME=node GROUP_UID=1000
+
+WORKDIR "${MS_HOME}"
+
+# Build
+FROM environment AS develop
+
+COPY ["./package.json", "./package-lock.json", "${MS_HOME}/"]
+
+FROM develop AS builder
+COPY . "${MS_HOME}"
+
+RUN PATH="$(npm bin)":${PATH} \
+ && npm ci \
+ && npm run test:ci \
+ && npm run test:e2e \
+ && npm run-script build \
+ # Clean up dependencies for production image
+ && npm install --frozen-lockfile --production && npm cache clean --force
+
+# Serve
+FROM environment AS prod
+
+COPY ["./scripts/docker-entrypoint.sh", "/usr/local/bin/entrypoint"]
+COPY ["./scripts/bootstrap.sh", "/usr/local/bin/bootstrap"]
+COPY --from=builder "${MS_HOME}/node_modules" "${MS_HOME}/node_modules"
+COPY --from=builder "${MS_HOME}/dist" "${MS_HOME}/dist"
+
+RUN \
+ apk --update add --no-cache tini bash \
+ && deluser --remove-home node \
+ && addgroup -g ${GROUP_UID} -S ${GROUP_NAME} \
+ && adduser -D -S -s /sbin/nologin -u ${USER_UID} -G ${GROUP_NAME} "${USER_NAME}" \
+ && chown -R "${USER_NAME}:${GROUP_NAME}" "${MS_HOME}/" \
+ && chmod a+x \
+ "/usr/local/bin/entrypoint" \
+ "/usr/local/bin/bootstrap" \
+ && rm -rf \
+ "/usr/local/lib/node_modules" \
+ "/usr/local/bin/npm" \
+ "/usr/local/bin/docker-entrypoint.sh"
+USER "${USER_NAME}"
+
+EXPOSE 8085
+
+ENTRYPOINT [ "/sbin/tini", "--", "/usr/local/bin/entrypoint" ]
+```
+
+Use a multi-stage build in a Dockerfile to _**copy only the necessary artifacts into the final environment**_. For example, many dependencies and files that are needed at build time are redundant and not needed to run your application in production. With multi-stage builds, these resources are used only during the build, while the runtime image contains only what is actually required. In other words, multi-stage builds are a good way to get rid of excess weight and security threats.
+
+There are common Dockerfile rules:
+
+* Do not run the container as root.
+* Assign a root user to an executable file, but without write permission.
+* Keep the image as minimal as possible.
+* Avoid confidential data leaks.
+* Set up `.dockerignore` and the build context.
+* Do not add an env file into the docker container.
+
+[An article about Dockerfile best practices by Moeid Heidari](https://moeidheidari.pro/blog/how-to-write-a-dockerfile-considering-best-practicecs)
+
+### Docker Compose
+
+Docker Compose is an add-on to Docker, an application that allows you to run multiple containers simultaneously and route data streams between them. The Docker Compose file describes the process of loading and configuring containers. An example `docker-compose.yml` file looks like this:
+
+```yaml
+version: "2.3" # Version of the docker-compose.yml format
+services: # Specify containers
+  nginx: # Sets the name of the first container, nginx, and configures it
+    build: ./nginx # Specify where to build from
+    ports: # Specify ports to be forwarded outside
+      - "80:80"
+    volumes: # Connect the working directory with the project code
+      - ./www:/var/www
+    depends_on: # Setting the order in which the containers will be loaded
+      php: # The php container starts before nginx
+        condition: service_healthy # Condition required to start the nginx container
+  php: # Sets the name of the second container, php, and configures it
+    build: ./php # Specify where to build from
+    volumes: # Include the same working directory with the project code
+      - ./www:/var/www
+    healthcheck: # Check that the application works inside the container
+      test: ["CMD", "php-fpm", "-t"] # Test command we want to execute
+      interval: 3s # Interval between test runs
+      timeout: 5s # How long to wait for the test to complete
+      retries: 5 # Number of retries before the container is marked unhealthy
+      start_period: 1s # Grace period after the container starts before failures count
+```
+
+To run the containers in the background:
+
+```bash
+sudo docker compose up -d
+```
diff --git a/Workflow/Git workflow.md b/Workflow/Git workflow.md
new file mode 100644
index 0000000..6c533d7
--- /dev/null
+++ b/Workflow/Git workflow.md
@@ -0,0 +1,81 @@
+# Git workflow
+If you are a programmer you should know about **Git**. Roughly speaking, this is where all your code is stored with all the changes you've made. Since all developers are working on their own features, you have different **branches** in the repo. You must make your branch based on the **develop** branch before you start working on your task.
+
+If you clone repo for the first time:
+
+```text
+git clone <repo_url>
+git switch develop
+git branch <your_branch_name>
+git switch <your_branch_name>
+```
+
+If you create a branch, **do not forget** to pull the latest version and check that you are on the **develop** branch:
+
+```text
+git switch develop
+git pull
+git branch <your_branch_name>
+git switch <your_branch_name>
+```
+
+There are rules on how to call these branches:
+
+* `feature/` is for features. A feature is something new for the project.
+* `bugfix/` is for bugfixes. A bugfix is a fix for some functionality that should already be working.
+* `enhancement/` is for code refactoring. Use it if your task does not add new functionality and only improves the code.
+
+> Create your own branch for **any task**. The only exception may be if your tasks are very close and **you have discussed it with your supervisor**.
+
+After you've made the changes and are going home, don't forget to do the following things:
+
+```text
+git add .
+git commit -m "your changes description here"
+git push origin <your_branch_name>
+```
+
+> **Never ever** push to **develop** or **main/master** branches. You will immediately be thrown out the window.
+
+Also, do not push code to other colleagues' branches unless they ask you to do it.
+
+After you've finished a task you should make a **pull request**. You can do it with the Git UI, but **be sure that you don't make the pull request to `main`** and do not merge it by yourself. Let your supervisor know that you have made the pull request. Do not forget to move your task to `Waiting for approval`.
+
+> If you don't want some file to be stored in git (credentials, generated files, etc.) add it in `.gitignore`
+
+### "I accidentally forgot to change the branch"
+
+If you mistakenly started working on the **master** or **develop** branch, you can stash your changes and then apply them to your branch.
+
+```text
+git stash
+git branch <your_branch_name>
+git switch <your_branch_name>
+git stash apply
+```
+
+### "I accidentally commited my changes the wrong branch"
+
+If you've already committed to the **wrong** branch:
+
+```text
+git reset --soft HEAD@{1}
+git stash
+git branch <your_branch_name>
+git switch <your_branch_name>
+git stash apply
+```
+
+### "I accidentally deleted a file in the project"
+
+If you've damaged or deleted a file or multiple files, you can always restore them by using:
+
+```text
+git restore <path_to_file>
+```
+
+Or, if you have really screwed up and want to restore the whole project:
+
+```text
+git restore .
+```
diff --git a/Workflow/Pipeline and Jenkins description.md b/Workflow/Pipeline and Jenkins description.md
new file mode 100644
index 0000000..4acac07
--- /dev/null
+++ b/Workflow/Pipeline and Jenkins description.md
@@ -0,0 +1,17 @@
+# Pipeline and Jenkins description
+A DevOps pipeline is a set of automated processes and tools that allows developers and operations professionals to collaborate on building and deploying code to a production environment.
+
+What does this really mean to you? It means that you should follow _**all the rules**_ when writing code and not forget to _**write tests**_, even if for some reason you didn't do it at the beginning. So, the pipeline is the kind of thing that makes our lives easier. When you push your code to the repo, it first automatically _**runs tests**_ on the server. They may succeed or fail - it doesn't matter: it's _**your branch**_, and your _**code won't be deployed**_ now anyway, so you will not damage anything. But when your branch is merged into `develop` and the tests pass, it will _**automatically start deploying**_ it to the server (development or staging). There are different stages, and if one of them _**fails**_, the whole pipeline fails and _**nothing will be deployed**_. Don't worry - in this case you won't screw everything up, the _**last successful version**_ will still be deployed and running.
+
+Here are the stages:
+
+1. build an app (`npm run build`)
+2. run tests (`npm run test`)
+3. build docker (`docker build .`)
+4. push image (`docker push ${env.HOST}:${env.PORT}/comfortech/${env.IMAGE_NAME}_develop:v${BUILD_NUMBER}`)
+5. clean up docker (`docker system prune --force`)
+6. clean up (`cleanWs()`)
+
+The language may vary, with different commands for building apps and running tests, but the theory is general. Do not forget to add a `Jenkinsfile` to your project to make the pipeline work. You can take the Jenkinsfile example from our _**boilerplate**_.
+
+When you send your code to the pipeline, you can watch it inside Jenkins. You can also restart the pipeline yourself.
diff --git a/Workflow/README.md b/Workflow/README.md
new file mode 100644
index 0000000..5fed426
--- /dev/null
+++ b/Workflow/README.md
@@ -0,0 +1,17 @@
+# Workflow
+In our team, we use the **Agile Scrum workflow**. This means that we have a strict structure which controls development, task completion and release frequency. One of the fundamental elements of Agile is the **sprint**. Sprints are small individual cycles of the production process. A sprint is usually 2 weeks long. Two sprints make up an **epic** (one month of work). **Stories**, also called "user stories", are short requirements or requests written from the perspective of an end user. Supervisors divide one user story into tasks and assign them to team members. A **task** is the smallest unit of work; usually it should not take more than 2 days or less than 2 hours.
+
+At the end of each sprint, we have **a general meeting** with all teams to discuss the tasks and the problems that have arisen. We also hold **small meetings** within every team to explain user stories or programming concepts. At the end of each day, every member of each team must **report** the progress of their tasks and the problems of the day to their supervisor. It can be in writing or orally, depending on the supervisor.
+
+* If anything comes up, **notify your supervisor**, because he has a plan which should be executed. Any problem can be solved, and your colleagues can help you, so **don't be shy**.
+* If you have an idea how to improve the code, **don't do it yourself**, tell your supervisor about it first.
+* We have a very strict rule: **DON'T CHANGE ANYTHING WITHOUT NOTIFYING SOMEONE**.
+
+## **Kanban board**
+
+To implement the workflow we use the [**kanban board**](https://board.techpal.ru/). Here we track our tasks and stories for each sprint. Each task has an estimated time to complete. There are 4 stages for each task or story:
+
+* **To Do.** Here are tasks which should be finished by the end of the sprint. Here, if you are a supervisor, you can also find user stories to divide into tasks.
+* **In Progress.** Here are tasks that you are currently working on. If you stop working on a task, don't forget to move it back to To Do.
+* **Waiting for approval.** Here are tasks that you have finished. Your supervisor will check whether your feature works properly, and if not, they will move it back to To Do.
+* **Done**. Here are tasks which are done. **Never ever** move your tasks here yourself; only the team lead can do that after they have checked that the feature works properly.