About Me

I'm a Staff SDET (Software Development Engineer in Test) with extensive experience in quality engineering, test architecture, and AI-augmented development. I know where tests belong in the test pyramid and how to release code at velocity with confidence. I specialize in building scalable test infrastructure, driving quality culture across engineering organizations, and leveraging emerging AI tools to accelerate development workflows.

Core Competencies

Test Strategy & Architecture

  • Test Pyramid Design
  • Unit & Component Testing
  • Integration Testing
  • E2E Test Automation
  • Visual Regression Testing
  • LLM Evaluation Testing
  • Performance & Load Testing
  • Contract Testing

Testing Techniques

  • Mocking & Stubbing
  • Test Doubles & Fakes
  • Snapshot Testing
  • Data-Driven Testing
  • Shift-Left Testing
  • Mutation Testing
  • Accessibility Testing

Frameworks & Tools

  • Playwright
  • Cypress
  • Selenium
  • Storybook
  • Jest / Vitest
  • Testing Library
  • Appium
  • k6 / Artillery

Languages & Runtimes

  • JavaScript / TypeScript
  • Python
  • Java
  • Node.js
  • PHP
  • .NET / C#
  • SQL

AI & Developer Tools

  • Claude Code
  • Cursor IDE
  • GitHub Copilot
  • LLM Eval Frameworks
  • MCP Servers
  • Development Context Repos
  • Prompt Engineering

CI/CD & Infrastructure

  • GitHub Actions
  • Jenkins
  • Docker
  • AWS
  • Terraform
  • Kubernetes

APIs & Services

  • REST API Testing
  • GraphQL
  • gRPC
  • OpenAPI / Swagger
  • Postman / Newman
  • API Mocking

Databases

  • PostgreSQL
  • MySQL
  • MS SQL Server
  • MongoDB
  • Redis

My Approach to Testing

I believe in a strategic approach to testing that delivers both quality and efficiency. Guided by the test pyramid, I build the right balance of unit, integration, and end-to-end tests to provide maximum coverage with minimum overhead.

My goal is always to enable teams to ship with confidence while maintaining velocity. This means automating the right things, creating maintainable test suites, and providing clear, actionable feedback when issues are found.
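As a concrete illustration of keeping tests fast and at the base of the pyramid, here is a minimal sketch of a unit test that swaps a slow external dependency for a test double. All names (`FakePaymentGateway`, `CheckoutService`) are hypothetical, invented for this example rather than drawn from any real project:

```python
class FakePaymentGateway:
    """Test double standing in for a real payment API client."""
    def __init__(self):
        self.charges = []

    def charge(self, amount_cents: int) -> bool:
        # Record the call instead of hitting the network, so the
        # test stays fast, deterministic, and dependency-free.
        self.charges.append(amount_cents)
        return True

class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, items: list[int]) -> bool:
        total = sum(items)
        return self.gateway.charge(total)

def test_checkout_charges_total():
    gateway = FakePaymentGateway()
    service = CheckoutService(gateway)
    assert service.checkout([500, 250]) is True
    assert gateway.charges == [750]

test_checkout_charges_total()
```

Because the fake records its calls, the test can assert on behavior (what was charged) rather than implementation details, which keeps the suite maintainable as the real gateway client evolves.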

AI-Augmented Quality Engineering

I'm deeply invested in the intersection of AI and quality engineering. I maintain a development context repository that enables AI coding assistants to understand project architecture, testing patterns, and team conventions—dramatically accelerating development cycles while maintaining quality standards.
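A development context repository might be organized something like the hypothetical layout below (file names are illustrative assumptions, not a prescribed standard), so that an AI assistant can discover architecture notes, testing patterns, and conventions in predictable locations:

```
context/
├── architecture.md      # system boundaries, service responsibilities
├── testing-patterns.md  # where each test type belongs in the pyramid
├── conventions.md       # naming, review, and branching norms
└── examples/            # small reference implementations to imitate
```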

My work with LLM evaluation testing involves building frameworks to systematically assess model outputs for accuracy, consistency, and safety. This emerging discipline combines traditional testing principles with new evaluation methodologies tailored to non-deterministic AI systems.
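To make the idea concrete, here is a minimal sketch of an LLM evaluation harness: deterministic scoring of model outputs against reference answers, aggregated across a case set. The metrics and field names (`exact_match`, `keyword_coverage`) are illustrative assumptions, not the API of any specific eval framework:

```python
def exact_match(output: str, reference: str) -> float:
    """1.0 if the output matches the reference (case/whitespace-insensitive)."""
    return 1.0 if output.strip().lower() == reference.strip().lower() else 0.0

def keyword_coverage(output: str, keywords: list[str]) -> float:
    """Fraction of required keywords that appear in the output."""
    if not keywords:
        return 1.0
    hits = sum(1 for k in keywords if k.lower() in output.lower())
    return hits / len(keywords)

def evaluate(cases: list[dict]) -> dict:
    """Score every case and report the mean per metric."""
    results = {"exact_match": [], "keyword_coverage": []}
    for case in cases:
        results["exact_match"].append(exact_match(case["output"], case["reference"]))
        results["keyword_coverage"].append(keyword_coverage(case["output"], case["keywords"]))
    return {metric: sum(vals) / len(vals) for metric, vals in results.items()}

cases = [
    {"output": "Paris", "reference": "paris", "keywords": ["paris"]},
    {"output": "The capital is Paris.", "reference": "Paris", "keywords": ["capital", "paris"]},
]
scores = evaluate(cases)  # mean score per metric across the case set
```

In practice the deterministic metrics above would be combined with consistency checks (running the same prompt N times and comparing outputs) and safety classifiers, since single-shot string matching cannot capture the non-deterministic behavior the paragraph describes.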

What Sets Me Apart

I operate at the intersection of software engineering and quality—not just finding bugs, but architecting systems that prevent them. I bring a developer-first mindset to testing, building tools and frameworks that teams actually want to use. My experience spans the full stack from component-level Storybook testing and visual regression to production observability and incident response.

I've led quality initiatives across multiple product teams, mentored engineers on testing best practices, and built test infrastructure that scales from startup to enterprise. I thrive in environments where quality is a shared responsibility and engineering velocity is paramount.
