attendance-tracker

Timetable Tracker Documentation

Overview

A modern and scalable Timetable Tracker designed for students and educators. This application allows users to manage class schedules, track attendance, and analyze performance with an intuitive, classroom-inspired interface. Built with a focus on clean code, domain-driven design, and modern DevOps practices, it features independent frontend and backend services containerized with Docker.

Core Features
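
  • Class schedule (timetable) management
  • Attendance tracking
  • Performance analytics
  • An intuitive, classroom-inspired interface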

Tech Stack & Architecture

This project is architected as a monorepo containing two independent services: a frontend single-page application (SPA) and a backend API. This separation allows for independent development, deployment, and scaling of each part.

Frontend

Backend


System Design & Architectural Pattern

The backend is built following a Domain-Driven Design (DDD) approach combined with a Layered (or Clean) Architecture. This creates a system that is robust, maintainable, and scalable.

What are Domains?

A domain represents a major area of functionality or a core business concept. We group code by domain to keep related logic together (high cohesion) and separate it from unrelated logic (low coupling). If you need to change how streams work, you know to look primarily in the domains/stream folder. This is far more scalable than having one giant folder for all controllers, one for all services, etc.

The primary domains in this application are User, Stream, Timetable, Attendance, and Analytics.
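
As an illustration, a single domain might be laid out like this on disk (the file names follow the conventions described below; the exact paths are an assumption):

  backend/src/domains/stream/
    stream.controller.ts     # HTTP entry points
    stream.dto.ts            # Zod schemas (the data contract)
    stream.service.ts        # business rules and permission checks
    stream.repository.ts     # all Prisma queries for this domain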

The Layers of a Domain

Each domain is organized into distinct layers, each with a single responsibility (the separation of concerns principle). An HTTP request flows inward through these layers, and the response flows outward; a code sketch of one slice through the layers follows the list.

  1. Controller (*.controller.ts) - The Entry Gate
    • Job: Handles raw HTTP requests and responses.
    • Responsibilities: Extracts data from the request (req.params, req.body, req.user), calls the appropriate Service function, and formats the result from the service into a JSON response with the correct status code. It contains no business logic.
  2. DTO (*.dto.ts) - The Data Contract
    • Job: Defines the shape and validation rules for data moving in and out of the application.
    • Responsibilities: Uses Zod to create schemas for request bodies, params, and queries. Our validateRequest middleware uses these schemas to ensure all incoming data is valid before it reaches the controller, preventing invalid data from entering our business logic.
  3. Service (*.service.ts) - The Brain / Business Logic
    • Job: Orchestrates the application’s core business rules.
    • Responsibilities: Receives validated data from the controller, performs permission checks (e.g., “Is this user an admin?”), executes business logic (e.g., “An owner cannot leave a stream”), calls Repository methods to fetch/save data, and throws specific application errors (NotFoundError, ForbiddenError). This is where the main work happens.
  4. Repository (*.repository.ts) - The Database Worker
    • Job: Handles all direct communication with the database.
    • Responsibilities: This is the only layer that should directly import and use the prisma client. It contains all database queries (prisma.stream.findUnique, etc.), abstracting the data access logic away from the service layer.
  5. Prisma Model (schema.prisma) - The Blueprint
    • Job: Defines the fundamental data structures and relationships of our domains.
    • Responsibilities: Acts as the single source of truth for our database schema. Prisma generates TypeScript types from this file, which are used throughout the application for type safety.
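
To make the request flow concrete, here is a minimal sketch of one slice through these layers for a hypothetical “rename stream” operation. The file names follow the conventions above; the Express types, the shared prisma import path, the error-class location, and the ownerId field are assumptions.

  // stream.dto.ts: the data contract, enforced by the validateRequest middleware
  import { z } from "zod";

  export const renameStreamSchema = z.object({
    params: z.object({ streamId: z.string() }),
    body: z.object({ name: z.string().min(1) }),
  });

  // stream.repository.ts: the only layer that talks to the database
  import { prisma } from "../../lib/prisma"; // assumed location of the shared client

  export function findById(streamId: string) {
    return prisma.stream.findUnique({ where: { id: streamId } });
  }

  export function updateName(streamId: string, name: string) {
    return prisma.stream.update({ where: { id: streamId }, data: { name } });
  }

  // stream.service.ts: business rules and permission checks
  import * as streamRepository from "./stream.repository";
  import { NotFoundError, ForbiddenError } from "../../errors"; // assumed location

  export async function renameStream(userId: string, streamId: string, name: string) {
    const stream = await streamRepository.findById(streamId);
    if (!stream) throw new NotFoundError("Stream not found");
    if (stream.ownerId !== userId) throw new ForbiddenError("Only the owner can rename a stream");
    return streamRepository.updateName(streamId, name);
  }

  // stream.controller.ts: HTTP in, JSON out, no business logic
  import { Request, Response, NextFunction } from "express";
  import * as streamService from "./stream.service";

  export async function handleRenameStream(req: Request, res: Response, next: NextFunction) {
    try {
      // req.user is assumed to be attached by an auth middleware
      const userId = (req as Request & { user: { id: string } }).user.id;
      const stream = await streamService.renameStream(userId, req.params.streamId, req.body.name);
      res.status(200).json(stream);
    } catch (err) {
      next(err); // a central error handler maps NotFoundError/ForbiddenError to 404/403
    }
  }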

The “Vertical Slice” Workflow

When you add a new feature, you touch multiple files because you are implementing a complete “vertical slice” of functionality that cuts through all layers of the application. For example, adding an “Archive Stream” feature (sketched after this list) requires:

  1. A schema change (isArchived field).
  2. A new repository method (setArchiveStatus).
  3. New service logic (archiveStream with permission checks).
  4. A new controller handler (handleArchiveStream).
  5. A new route (POST /streams/:streamId/archive).

This process ensures that every new feature is robust, secure, and well-integrated.
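
As a sketch of that slice, here are steps 1 and 5, the two pieces the layer example above does not show (the model fields, middleware names, and router wiring are assumptions):

  // schema.prisma: step 1, the new field on the existing model
  model Stream {
    // ...existing fields
    isArchived Boolean @default(false)
  }

  // stream.routes.ts: step 5 (hypothetical file; Express-style router assumed)
  router.post(
    "/streams/:streamId/archive",
    requireAuth,                          // hypothetical auth middleware
    validateRequest(archiveStreamSchema), // hypothetical params-only DTO schema
    handleArchiveStream
  );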


Development & Deployment

Local Development

The project is configured for a seamless local development experience: Docker Compose runs the database, while the frontend and backend run directly on the host via npm.

  1. Prerequisites: Docker, Docker Compose, and Node.js (the frontend and backend run on the host via npm; only the database runs in a container).
  2. Setup:
    • Create .env.development files in frontend/ and backend/ based on the .env.example templates (a hypothetical sample appears after this list).
  3. Run:

    • Database
      docker compose -f docker-compose-dev.yml up -d
      
    • Frontend
      cd frontend && npm run dev
      
    • Backend
      cd backend && npm run dev
      
  4. Access:
    • Frontend: http://localhost:5173
    • Backend API: http://localhost:3001
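
As a starting point, a hypothetical backend/.env.development might look like the following. The variable names and the Postgres connection string are assumptions (DATABASE_URL is Prisma's standard variable); defer to the .env.example templates for the real contract.

  # backend/.env.development (hypothetical values)
  DATABASE_URL="postgresql://postgres:postgres@localhost:5432/attendance_tracker?schema=public"
  PORT=3001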

Deployment (Staging & Production on Google Cloud VMs)

Deployments target dedicated Google Cloud virtual machines running Docker Compose, with separate configurations for the staging and production environments.

This automated workflow keeps deployments consistent and reliable while requiring minimal manual intervention.
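
For example, a deploy on either VM might boil down to commands like these; the compose file names mirror the development setup but are assumptions:

  # staging VM
  docker compose -f docker-compose-staging.yml up -d --build

  # production VM
  docker compose -f docker-compose-prod.yml up -d --build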

Infrastructure