Jaspreet Singh
Backend/Platform Engineer
Overview
Backend/Full Stack Engineer @Protokol.io (Native Teams)
About
- Backend/Platform Engineer with expertise in payment gateway integrations (Stripe, Braintree), developer tooling, and scalable backend systems.
- Passionate about platform engineering, embedded systems, and CI/CD automation.
- Built a self-hosted LLM inference server with GPU acceleration and developed full-stack applications on modern stacks.
Experience
Protokol.io (Native Teams)
- Integrated the Stripe payment gateway in Go, implementing transaction lifecycle operations including refunds, plus webhook-based payment status verification for reliable synchronization (verification flow sketched after the stack list below).
- Integrated the Braintree (PayPal) payment gateway in Go, implementing the full transaction lifecycle (refund, void, resync) and building a backend abstraction layer for enhanced logging and reliability (interface shape sketched below).
- Designed and built a Node.js CLI-based hot-reload system using Commander.js, replacing UI-driven workflows and improving developer efficiency for a team of 10–15 engineers (CLI skeleton sketched below).
- Implemented incremental compilation and file-level syncing with WebSocket-based real-time updates, reducing rebuild times and enabling low-latency development workflows (watcher sketched below).
- Developed and maintained SDKs and build systems using Webpack, Babel, and TypeScript, supporting 8–9 production client applications.
- Built a database-agnostic admin platform supporting MongoDB and MySQL with dynamic query execution and 50+ predefined operations (operation registry sketched below).
- Containerized services using Docker and Docker Compose, including embedding runtime binaries for isolated execution environments.
Stack
- Go
- Stripe
- Braintree
- PayPal
- Node.js
- Commander.js
- WebSocket
- Chokidar
- Webpack
- Babel
- TypeScript
- MongoDB
- MySQL
- TanStack Router
- TanStack Table
- TanStack Query
- Shadcn/UI
- React
- Docker
- Docker Compose
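The webhook verification mentioned in the first bullet was built in Go; as a minimal sketch of the same flow in this page's single example language, here it is in TypeScript using the official stripe-node SDK with Express. The route path, environment-variable names, and the markPaymentStatus helper are assumptions, not the actual implementation.

```typescript
import Stripe from "stripe";
import express from "express";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const app = express();

// Stripe signs every webhook; verification needs the raw request body,
// so this route opts out of JSON parsing.
app.post(
  "/webhooks/stripe",
  express.raw({ type: "application/json" }),
  (req, res) => {
    let event: Stripe.Event;
    try {
      event = stripe.webhooks.constructEvent(
        req.body,
        req.headers["stripe-signature"] as string,
        process.env.STRIPE_WEBHOOK_SECRET! // assumed variable name
      );
    } catch {
      res.status(400).send("invalid signature");
      return;
    }

    // Update local payment state only from verified events -- the
    // "reliable synchronization" part of the bullet above.
    switch (event.type) {
      case "payment_intent.succeeded":
      case "charge.refunded":
        // markPaymentStatus(event) -- hypothetical persistence helper.
        break;
    }
    res.sendStatus(200);
  }
);

app.listen(3000);
```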
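The Braintree bullet mentions a backend abstraction layer for logging and reliability. The original is Go; a minimal TypeScript sketch of the idea, with all names hypothetical, is one gateway interface plus a logging decorator wrapped around any concrete adapter, so callers never touch an SDK directly.

```typescript
// Hypothetical gateway abstraction: Stripe and Braintree adapters both
// implement this interface.
interface GatewayResult {
  txId: string;
  status: "succeeded" | "voided" | "refunded" | "failed";
}

interface PaymentGateway {
  refund(txId: string, amountCents?: number): Promise<GatewayResult>;
  voidTransaction(txId: string): Promise<GatewayResult>;
  resync(txId: string): Promise<GatewayResult>; // re-fetch remote status
}

// Decorator adding uniform logging and timing around any gateway -- a
// minimal version of the "enhanced logging and reliability" layer.
class LoggingGateway implements PaymentGateway {
  constructor(private inner: PaymentGateway, private name: string) {}

  private async run(
    op: string,
    fn: () => Promise<GatewayResult>
  ): Promise<GatewayResult> {
    const started = Date.now();
    try {
      const result = await fn();
      console.info(`[${this.name}] ${op} ok in ${Date.now() - started}ms`);
      return result;
    } catch (err) {
      console.error(`[${this.name}] ${op} failed`, err);
      throw err;
    }
  }

  refund(txId: string, amountCents?: number) {
    return this.run("refund", () => this.inner.refund(txId, amountCents));
  }
  voidTransaction(txId: string) {
    return this.run("void", () => this.inner.voidTransaction(txId));
  }
  resync(txId: string) {
    return this.run("resync", () => this.inner.resync(txId));
  }
}
```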
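For the CLI-based hot-reload system, a minimal Commander.js skeleton might look like the following; the tool name, flags, and the startWatcher module are assumptions, not the actual interface.

```typescript
import { Command } from "commander";
import { startWatcher } from "./watcher"; // hypothetical module

const program = new Command();

program
  .name("devsync") // hypothetical tool name
  .description("CLI hot-reload for client apps");

program
  .command("watch")
  .argument("<app>", "client application to watch")
  .option("-p, --port <port>", "WebSocket port for reload events", "8081")
  .option("--no-incremental", "force full rebuilds")
  .action((app, opts) => {
    startWatcher(app, {
      port: Number(opts.port),
      incremental: opts.incremental, // Commander sets this false for --no-incremental
    });
  });

program.parse();
```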
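And a minimal sketch of the file-watching and WebSocket push behind the incremental-compilation bullet, using Chokidar and the ws package from the stack list; the port and message shape are assumptions, and compileOne stands in for the real incremental compiler.

```typescript
import chokidar from "chokidar";
import { WebSocketServer, WebSocket } from "ws";
import path from "node:path";

// Connected app shells receive change events over WebSocket.
const wss = new WebSocketServer({ port: 8081 }); // port is an assumption

function broadcast(msg: object): void {
  const data = JSON.stringify(msg);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(data);
  }
}

// Stand-in for the real incremental compiler (e.g. a per-file Babel pass).
async function compileOne(file: string): Promise<string> {
  return `/* compiled ${file} */`;
}

// Watch sources; on change, recompile only the touched file and push it,
// instead of triggering a full rebuild -- file-level syncing.
chokidar.watch("src", { ignoreInitial: true }).on("change", async (file) => {
  const code = await compileOne(file);
  broadcast({ type: "update", file: path.relative("src", file), code });
});
```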
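The admin platform's "predefined operations" pattern, reduced to a sketch: operations are written once against a driver-neutral interface, and the MongoDB and MySQL backends each implement it. The interface, operation names, and parameters here are all hypothetical.

```typescript
// Hypothetical driver-neutral data access interface.
interface DataSource {
  find(collection: string, filter: Record<string, unknown>): Promise<unknown[]>;
  count(collection: string, filter: Record<string, unknown>): Promise<number>;
}

type AdminOp = (
  db: DataSource,
  params: Record<string, unknown>
) => Promise<unknown>;

function daysAgo(days: number): Date {
  return new Date(Date.now() - days * 24 * 60 * 60 * 1000);
}

// Registry of predefined operations (50+ in the real system).
const operations: Record<string, AdminOp> = {
  "users.recentSignups": (db, { days }) =>
    db.find("users", { createdAfter: daysAgo(Number(days)) }),
  "orders.countByStatus": (db, { status }) => db.count("orders", { status }),
};

// Dynamic execution: the admin UI sends an operation name plus parameters.
export async function runOperation(
  db: DataSource,
  name: string,
  params: Record<string, unknown>
) {
  const op = operations[name];
  if (!op) throw new Error(`unknown operation: ${name}`);
  return op(db, params);
}
```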
Aerchain
Stack
- Node.js
- PostgreSQL
- Sequelize
- Express.js
- Postman
- SAP Integration
Education
- Computer Science
- Software Engineering
- Algorithms
- Data Structures
- Distributed Systems
Projects (2)
Self-hosted LLM inference system with GPU acceleration and containerized deployment.
- Built a self-hosted LLM inference server using vLLM with GPU acceleration on an AMD RX 6750 XT (a minimal API client is sketched below)
- Designed multi-service architecture using Docker and Docker Compose
- Integrated Open WebUI for model interaction
- Implemented dynamic model loading and unloading
Stack
- vLLM
- Docker
- Docker Compose
- Open WebUI
- llama.cpp
- GPU Acceleration
- AMD RX 6750 XT
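vLLM serves an OpenAI-compatible HTTP API, so any client can talk to the inference server. A minimal TypeScript caller, assuming the default localhost:8000 endpoint and a placeholder model name:

```typescript
// Minimal client for the server's OpenAI-compatible chat endpoint.
// Host, port, and model name are assumptions about this deployment.
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama-3-8b-instruct", // whichever model is currently loaded
      messages: [{ role: "user", content: prompt }],
      max_tokens: 256,
    }),
  });
  if (!res.ok) throw new Error(`inference server error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Say hello").then(console.log).catch(console.error);
```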