Work
  • May 2023 - Dec 2024
    InsightM
    Senior Software Engineer
    • Helped establish a dedicated Infrastructure team that provided reusable Terraform modules to development teams, standardizing application architectures across the organization and simplifying deployment to ECS, Lambda, and supporting infrastructure such as Route53, ACM, and S3.

    • Refactored existing Terraform code from a single monolithic module into multiple small, reusable modules, simplifying maintenance across multiple environments.

    • Imported AWS infrastructure that had been created manually via the Console (GUI) into Terraform, and added build pipelines using GitHub Actions and CircleCI for projects that required them.

    • Used Ansible on Ubuntu and Nix on NixOS to deploy and configure services on EC2 instances.

    • Added code quality checks (linting, formatting, and security scanning) via GitHub Actions and pre-commit hooks to multiple repositories containing Terraform, Python, and JavaScript code.

    • Implemented a build pipeline from scratch in GitHub Actions for the Report Generator team, which develops mission-critical services for the company, allowing developers to quickly test and deploy only the applications that changed during their development cycle.

    • Worked on security initiatives such as defining database access policies, users, and roles, and implemented Ansible playbooks to help manage them.

    • Participated in on-call rotation.

  • May 2022 - Apr 2023
    Optimizely
    Senior Software Engineer
    • Served as the subject-matter expert on DevOps tasks for a team of 5 developers.

    • Developed AWS Lambda functions in Python and Flask APIs deployed to ECS, handling peak traffic of 100 req/s.

    • Standardized on GitHub Actions as the build tool, migrating existing pipelines from AWS CodeBuild, Travis CI, and CircleCI.

    • Created and managed multiple Terraform modules, and upgraded Terraform and the AWS provider to their latest versions.

    • Identified infrastructure cost savings by right-sizing over-provisioned resources.

    • Created and maintained Dockerfiles for multiple services to comply with security scanner requirements.

  • Feb 2021 - May 2022
    Trackstreet
    Software Development Manager
    • Established a new DevOps team focused on CI/CD implementation, cloud orchestration using infrastructure as code, and monitoring and telemetry of existing deployments, among other initiatives.

    • Helped architect and implement a cost reduction plan that migrated internal services from AWS to PhoenixNAP by creating Nomad and Consul clusters on bare-metal servers, reducing infrastructure cost by close to 50% while increasing data acquisition.

    • Migrated frequently accessed data from S3 to Wasabi to reduce costs even further.

    • Managed a Data Acquisition team of 7 developers maintaining and growing web crawlers covering more than 50k websites.

    • Implemented an SDLC for the team in Jira, using Kanban for maintenance tasks and Scrum for product and technical initiatives.

    • Began migrating existing PHP and Python codebases to Go using a new architecture focused on maintainability, performance, and metrics, reducing costs by up to 90% for some services.

    • Analyzed, benchmarked, and helped redesign the database stack into a better-suited OLAP/OLTP architecture using PostgreSQL and Yellowbrick.

  • Apr 2020 - Feb 2021
    Ally Financial
    Senior Python Engineer
    • Maintained Terraform, Jenkins, Helm charts, and Bash scripts for multiple projects.

    • Spearheaded the adoption of a monorepo and its implementation using tools such as Pants, Bash, and Jenkins.

    • Advocated for Python best practices through team discussions, codebase refactoring, and PR reviews.

    • Developed chatbots using AWS Lex and Rasa, used by 70% of clients at the time.

  • Feb 2019 - Jan 2020
    Radar Governamental
    Lead Python Developer
    • Designed and implemented a scalable and resilient web crawling system using Python, MongoDB and GitLab.

    • Automated crawlers for 50 websites to insert and update data in the application on an hourly, daily, or weekly basis.

    • With the system in place, time spent on manual data insertion and updates dropped by 50% (from 4 to 2 hours) for each of 20 employees, shifting their focus from data entry to validation and analysis.

    • Set up monitoring and alerting for web crawlers using Grafana and Prometheus.

    • Hosted the monitoring system on AWS ECS/EC2, provisioned with Terraform.

    • Created multiple AWS Lambda services to validate and process data extracted by crawlers and insert it into the database.

    • Led a development team of 3, responsible for sprint planning and hiring.

  • Jan 2018 - Nov 2018
    Sigalei
    Chief Technology Officer
    • Led the technical effort for a team of 5 developers: 3 focused on the application and 2 on web crawling.

    • Directly mentored two developers in web crawling and one in system administration.

    • Refactored and improved web crawling projects, cutting the time needed to add a new website from 4 weeks to 2.

    • Created cloud infrastructure for the prototype on Linode, then migrated to production on Google Cloud Platform as the company grew.

    • Introduced Docker containers to additional areas: the application and data science.

    • Improved all Elasticsearch index mappings, eliminating frequent search timeouts and bringing searches down to 10 seconds or less.

  • Jul 2016 - Jan 2018
    Sigalei
    Lead Python Developer
    • Championed Docker containers by dockerizing all web crawler projects.

    • Created and configured new servers for CouchDB and ELK.

    • Created a CI/CD workflow on GitLab.

    • Led web crawler projects.

    • Refactored CouchDB views with better semantics, reducing disk usage by 40%.

    • Worked with another developer to add data validation to web crawlers, easing development of new projects and increasing the reliability of the data.