Hello! My name is Noah Burrell and I am an experienced IT professional with a passion for technology and innovation. I have 5+ years of experience working in various roles, including systems administration, network administration, and DevOps engineering. My expertise includes cloud-native computing, Kubernetes, automation, Linux, and networking. I am a strong collaborator who thrives in a team environment and enjoys working on complex problems that require innovative solutions. I am committed to staying up-to-date with the latest technologies and trends in the field and am constantly learning and expanding my knowledge base. Outside of work, I enjoy tinkering with new technologies (see my lab), reading about science and technology, and spending time with my family and friends. I am excited about the opportunity to bring my skills, experience, and passion for technology to new challenges and to make a positive impact in the IT industry.
As an experienced IT professional specializing in DevOps, Kubernetes, and cloud-native computing, I have a proven track record of designing, implementing, and managing cloud-native solutions that are highly scalable, fault-tolerant, and cost-effective. With over 5 years of experience in the field, I have developed extensive knowledge and expertise in deploying and managing applications on Kubernetes, AWS, and GCP.
My experience includes designing and deploying highly available Kubernetes clusters, managing resources, monitoring, and troubleshooting. I have worked with a variety of Kubernetes tools and platforms such as Helm, Istio, and Prometheus to deploy, configure, and manage Kubernetes workloads. My proficiency with Kubernetes has enabled me to develop and implement reliable and scalable architectures for organizations, ensuring that their applications are available and performing optimally at all times.
In addition to my expertise in Kubernetes, I have worked with a wide range of cloud-native technologies such as Docker, Terraform, Kyverno, and Prometheus. I have a deep understanding of how these technologies work together to create scalable, resilient, and secure cloud-native architectures. I have also used CI/CD tools such as GitLab, GitHub Actions, and Argo CD to automate the deployment pipeline, reducing the time it takes to go from code to production.
With my skills, experience, and passion for the field, I am committed to helping organizations build and deploy high-quality, scalable, and reliable applications on the cloud. I am a collaborative team player who enjoys working with cross-functional teams to solve complex problems and achieve business goals. I am comfortable working in fast-paced environments and can adapt quickly to new technologies and tools. I am excited about the opportunity to bring my skills and experience to your organization and contribute to its success.
As an experienced professional in the IT industry, I am always looking for ways to expand my horizons and explore new opportunities. I am open to consulting engagements, where I can share my expertise with organizations and help them solve complex problems in areas such as DevOps, Kubernetes, and cloud-native computing. I believe that my knowledge, skills, and experience can help businesses optimize their operations and achieve their goals. I am also open to new roles that offer challenge and room for growth, where I can continue to learn and make a positive impact in the field of IT. Whether through consulting services or a permanent position, I am committed to delivering high-quality solutions and value to organizations.
If I sound like a good fit for your organization on a contract or permanent basis, please contact me.
If you are looking for a consultant or for professional services, my rates can be found here.
This is my homelab: it is where I test things, deploy personal projects and applications, and run all of the components for my home network. In my homelab I run my router (TNSR), hypervisor (Proxmox), Kubernetes cluster, storage services (NFS and Samba), Active Directory server, and much more. My homelab is even where I am running this website.
While this is a fairly small deployment compared to the infrastructure most businesses run, the concepts used can easily be abstracted and deployed at a much larger scale (up to and including data center scale).
There is a lot to dig into with my homelab configuration so I have annotated the diagrams below with additional context and details to help clarify what you are looking at.
I love to talk about IT topics and homelabbing/infrastructure, so if you have questions about what I am running in my lab, or want to compare notes on how your own infrastructure is configured, feel free to contact me or consider hiring me.
Network backbone. All wired network devices (excluding the Bell Home Hub) are connected to this switch.
The primary driver for my homelab. Runs Proxmox VE as the hypervisor. Directly attached to two EMC DAS chassis.
Chassis and components sourced individually; not a standard Supermicro SKU. Used to run a Chia harvester and store Chia plots. Running OpenMediaVault.
General-purpose PC running Windows 10. Primarily used for game streaming (Moonlight) and GPU media transcoding.
Contains (12) 10 TB Western Digital WD101EMAZ hard drives. Directly connected to the HPE ProLiant DL380 Gen9. Used for media and general network storage.
Contains (12) 10 TB Western Digital WD101EMAZ hard drives. Directly connected to the HPE ProLiant DL380 Gen9. Used for media and general network storage.
3000VA (30 Amp) APC SMT3000RM2U UPS. Provides backup power for the entire lab. Approximate runtime of 30 minutes from full charge.
Router and firewall for home network.
Active Directory controller and DNS server for network.
Runs Xubuntu; acts as a separate environment in which I perform my day job.
A 3-node Kubernetes cluster for running the vast majority of applications and services in my homelab. See the Kubernetes section of this page for more details.
The Bell Home Hub and HP server are directly connected to each other with CAT6a to enable a 10 Gbps connection. The port on the HP server is assigned exclusively to the TNSR VM to enable routing out to the internet at full speed (3 Gbps) without affecting the throughput of the internal network.
The HP server is connected directly to each of the two EMC DAS chassis using SFF-8088 to SFF-8088 cables between the LSI HBA in the HP server and the SAS controllers in the EMC DAS chassis. Note: this is not a network connection; it is a direct SAS connection.
The HP server is connected via Direct Attach Copper to the Ubiquiti switch at 10 Gbps. This is done using the second 10 Gbps port on the server, as the first is reserved for the connection between TNSR and the Bell Home Hub. The iLO port of the server is also connected to the switch using standard CAT5e cabling for connectivity at 1 Gbps.
The whitebox Windows 10 PC is connected to the Ubiquiti switch using standard CAT5e cabling as only 1 Gbps connectivity is required.
The Supermicro server is connected to the Ubiquiti switch using standard CAT5e cabling, as only 1 Gbps connectivity is required; communication with a Chia harvester has very low bandwidth requirements. The IPMI port of the server is also connected to the switch using standard CAT5e cabling for connectivity at 1 Gbps.
WiFi 6 long-range Ubiquiti access points located on the upper level of the house and in the basement for full WiFi coverage. Connected to the UniFi Controller running under Kubernetes. Powered with PoE injectors in the server rack. Cabled with CAT5e and directly connected to the Ubiquiti switch, as these devices are only 1 Gbps capable.
My personal desktop/workstation. Connected directly to the Ubiquiti switch using CAT6a to enable 10 Gbps throughput.
As shown above, my Kubernetes cluster consists of 3 nodes running in virtual machines in Proxmox, each assigned 8 GB of memory and 8 vCPUs. Each node is both a control plane and worker node. My eventual goal is to purchase a blade-style server chassis to run Kubernetes on bare metal and split the control plane and workers onto their own dedicated blades. However, that is a future project and not currently on my radar.
My entire Kubernetes environment is described as code in the repository here. This repository is what drives Argo CD, the tool responsible for keeping my Kubernetes environment aligned with what I have described in my GitHub repository.
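To give a sense of how this works in practice, below is a minimal sketch of an app-of-apps style root Application like the one that bootstraps the rest of the cluster; the repository URL and paths are placeholders, not my actual values.

```yaml
# Hypothetical root "app of apps" Application: Argo CD applies every
# manifest under apps/, each of which is itself an Application that
# points at one component (MetalLB, Cert Manager, and so on).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-k8s.git  # placeholder
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true     # remove resources that disappear from Git
      selfHeal: true  # revert changes made outside of Git
```

With prune and selfHeal enabled, anything removed from or changed outside of Git is automatically reconciled back to the declared state.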
The critical components in my Kubernetes cluster include:
All configurations can be found in my Git repository
Operators and Critical Components
App-Of-Apps
Argo CD
Cert Manager
External DNS
MetalLB
Metrics Server
NFS Subdir External Provisioner
NGINX Ingress
Sealed Secrets
TNSR Controller
Externally Exposed Applications
Chia Node
Contact API
Ghost
This Website
MinIO
Ombi
Paperless
Plex Media Server
Seafile
Internal-Only Applications
NZBGet
Radarr
Sonarr
SMTP Relay
Tdarr
Ubiquiti UniFi Controller
Bell fibre internet connection. 3 Gbps symmetrical.
Bell Home Hub 4000. Straight passthrough to TNSR.
TNSR router. Three separate virtual interfaces assigned to the VM, one per VLAN. Acts as the default gateway for all VLANs. Directly connected to the Bell Home Hub on a dedicated interface.
MetalLB running within Kubernetes. Peers with TNSR using BGP and automatically injects routes to make all LoadBalancer Service resources in Kubernetes available to the broader network and the internet as required. Routes to this network are assigned using the next-hop addresses of each Kubernetes node, which are equally weighted, allowing BGP load balancing to occur.
An IP address pool that MetalLB has complete control over. Addresses for NGINX and CoreDNS are statically assigned, as these need to be well known; otherwise MetalLB dynamically assigns IP addresses to all other LoadBalancer Services in Kubernetes. Some of the other services MetalLB allocates IP addresses for include Plex, the Ubiquiti UniFi Controller, and the Chia node.
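As a concrete illustration, here is a minimal sketch of this kind of setup using MetalLB's CRD-based configuration; every address and AS number shown is an example, not my real value.

```yaml
# Illustrative MetalLB BGP configuration (CRD-based). All addresses
# and ASNs below are examples, not my actual values.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: tnsr
  namespace: metallb-system
spec:
  myASN: 64512            # example private ASN for the cluster
  peerASN: 64513          # example private ASN for the TNSR router
  peerAddress: 10.0.30.1  # example: TNSR's address on the Kubernetes VLAN
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: loadbalancer-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.40.0/24        # example pool that MetalLB controls outright
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: loadbalancer-pool
  namespace: metallb-system
spec:
  ipAddressPools:
    - loadbalancer-pool
```

Well-known addresses such as those for NGINX and CoreDNS can then be pinned on the individual Service, while every other LoadBalancer Service draws from the pool dynamically.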
The Native VLAN is primarily used for servers and network devices. As an example, Proxmox and each access point have their own dedicated IP addresses on this network.
While the access points on the network have their own IP addresses on the Native VLAN, they connect wireless clients to the General VLAN.
The General VLAN is where most devices are connected. This includes all wireless devices and most wired devices. All VMs excluding the Kubernetes nodes also have a presence on this network.
The Kubernetes VLAN is dedicated exclusively to the Kubernetes nodes. Nothing else is assigned to this network except for TNSR.
Routes for the NGINX ingress and CoreDNS.
Each route has three potential next-hop addresses, one for each Kubernetes node. The next-hop IP address is the actual address of the node.
This is what the BGP routes that MetalLB propagates to TNSR look like.
Kubernetes workloads resolve DNS against CoreDNS, as is the standard.
Any DNS requests that both CoreDNS and the Windows DNS server are unable to resolve are forwarded externally to either Cloudflare (1.1.1.1) or Google (8.8.8.8 and 8.8.4.4).
CoreDNS and the K8S Gateway plugin handle name resolution for the burrell.tech, home.burrell.tech, and k8s.burrell.tech zones. Any queries it receives that it doesn't know how to answer get forwarded to the Windows DNS server. An IP address is statically assigned by MetalLB so that devices external to Kubernetes can query CoreDNS.
The Windows DNS server is mainly responsible for providing name resolution of DHCP registered devices and a handful of manual DNS entries for the burrell.tech zone for services that are not in Kubernetes.
The only devices that don't resolve against CoreDNS are the Kubernetes cluster nodes. This prevents a circular dependency in which the nodes would require CoreDNS to be available before the cluster is ready. Instead, the nodes have been manually set to resolve directly against the Windows DNS server.
Internal network devices all resolve against CoreDNS/K8S Gateway in Kubernetes. The DHCP server is configured to hand out the IP address of CoreDNS as the nameserver.
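To make the resolution flow concrete, here is a rough sketch of what such a CoreDNS configuration can look like; it assumes a CoreDNS image built with the external k8s_gateway plugin, and the Windows DNS server address is a placeholder, not my actual configuration.

```yaml
# Illustrative ConfigMap carrying a simplified Corefile.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    # k8s_gateway answers for the delegated zones using the Ingress
    # and Service resources present in the cluster.
    burrell.tech home.burrell.tech k8s.burrell.tech {
        k8s_gateway burrell.tech home.burrell.tech k8s.burrell.tech
    }
    # Everything else falls through to the Windows DNS server, which
    # in turn forwards anything it cannot resolve to 1.1.1.1 / 8.8.8.8.
    . {
        forward . 10.0.10.5   # placeholder: Windows DNS server address
    }
```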
I'm still working on more content for this page...
Remote
Kingston, Ontario, Canada
Ottawa, Ontario, Canada
Network Technology
Carleton University
Ottawa, Canada
09/2015 - 04/2019
Achievements
Dean's Honour List (2015 - 2016)
Ottawa, Ontario, Canada
Argo CD and GitOps Implementation
The Empire Life Insurance Company
2022
Developed a strategy for managing applications with Git and Argo CD while complying with change management processes.
Consulted with various stakeholders including development teams, IT security, and the auditing department.
Created a series of generic Helm charts to facilitate and ease transition.
Architected a monolithic Git repository for managing key cluster resources while delegating application control to developers.
Implemented a deployment strategy with strict controls over production and relaxed controls over non-production.
Configured LDAP-driven role-based access controls for privileges in Argo CD.
Wrote a Bash script to check the Argo CD API so that application health could be tracked from the existing Nagios monitoring system.
Implemented Argo CD notifications to Slack.
Worked with development teams to create a GitHub Action capable of automatically updating application configurations on new releases (see the sketch below).
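As a rough sketch of what such a workflow can look like (repository names, paths, and secrets here are hypothetical, not the actual setup):

```yaml
# Hypothetical GitHub Actions workflow: on a new release, bump the
# image tag in the application's Helm values and push the change so
# Argo CD deploys it. All names and paths below are examples.
name: update-deployment-config
on:
  release:
    types: [published]
jobs:
  bump-image-tag:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: example-org/k8s-config      # placeholder config repo
          token: ${{ secrets.CONFIG_REPO_TOKEN }} # placeholder secret
      - name: Update image tag in Helm values
        run: |
          # Set the image tag to the release tag (example path)
          yq -i '.image.tag = "${{ github.event.release.tag_name }}"' \
            apps/myapp/values.yaml
      - name: Commit and push
        run: |
          git config user.name "release-bot"
          git config user.email "release-bot@example.com"
          git commit -am "Deploy ${{ github.event.release.tag_name }}"
          git push
```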
Automated PBX Deployments
Telecom Metric Inc.
2021
Created a fully automatic and hands-off deployment strategy for 3CX PBXs.
Set up base images of PBX with minimal configurations for Ansible connectivity.
Utilized Terraform to orchestrate cloud provisioning and script triggers.
Created a series of Ansible tasks and Bash scripts to automatically configure VMs.
Reduced deployment process from 4 hours to 20 minutes.
Homelab Colocation
Personal
2018-2021
Installed, cabled, and populated a 42U server rack within a local data centre.
Configured a highly available and fault tolerant Proxmox cluster.
Provisioned a highly available multi-tier storage cluster (150+ TB).
Designed a dynamically routed (OSPF), highly available, and redundant internal network.
Installed an out-of-band management system including a serial console server and IP KVM.
Implemented a site-to-site VPN between the data centre and home network.
Set up a fully automatic and multi-tier backup solution following the 3-2-1 rule.
Deployed numerous internet-facing services behind fault-tolerant NGINX reverse proxies.
WPA2-PSK Multi-Auth Proof of Concept
Algonquin College of Applied Arts and Technology
2018
Researched implementation strategy for using OpenWRT.
Configured FreeRADIUS on embedded hardware to act as authentication backend.
Deployed SQLite database on embedded hardware as backend store for FreeRADIUS.
Wrote web frontend and deployed on external server to enable user login and generate unique PSKs.
Deployed Python REST API to embedded hardware for communication with web frontend.
Set up hook to automatically bind individual MAC addresses to PSKs on first successful authentication.
GitOps Fundamentals (10/2022)
Issuer: Codefresh
ID: 634c515ced65512f3d42eafb
GitOps at Scale (10/2022)
Issuer: Codefresh
ID: 634c7563efabc25be82dc112
3CX Advanced Certified Engineer (05/2019)
Issuer: 3CX
ID: siq1JrVcEC
JNCIA-Junos (10/2018)
Issuer: Juniper Networks
ID: 14PN2N1581E4QGSQ
Don't see what you're looking for? Contact me, I may still be able to help.