A Project Manager’s Tale of Databricks, DevOps and a Dash of Spreadsheets…

In the world of large-scale data and analytics projects, achieving seamless integration between infrastructure, development, and data engineering requires more than technical prowess; it demands strategic oversight, adaptability, and sometimes a little creativity. At Spyglass MTG, we recently wrapped up a successful Databricks implementation for a client that showed just how powerful careful planning and cross-functional collaboration can be when paired with a healthy dose of flexibility.

Planning and Design: The Blueprint for Success (with a Few Twists) 

Every successful project starts with a solid plan, and sometimes (okay, always) a bit of cat herding. Our goal was to unify the client’s scattered data sources and optimize workflows within the secure cloud environment their organization required. This meant moving from individually coded, laptop-bound workflows to a robust architecture built on Azure services like private DNS zones, network security groups (NSGs), and resource peering across a "hub and spoke" topology. The Statement of Work (SOW) outlined our roles, timelines, and objectives…but, as any PM knows, no plan survives intact without a little improvisation.
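
To make the "hub and spoke" piece concrete, here is a minimal Terraform sketch of the kind of spoke-side networking involved. The resource names, address ranges, and hub network ID are hypothetical placeholders, not the client’s actual configuration.

variable "hub_vnet_id" {
  description = "Resource ID of the existing hub virtual network (placeholder)"
  type        = string
}

# Spoke network that hosts the Databricks workspace (hypothetical CIDR).
resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-databricks-spoke-dev"
  resource_group_name = "rg-databricks-dev"
  location            = "eastus"
  address_space       = ["10.20.0.0/16"]
}

# Peer the spoke back to the shared hub network.
resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                      = "peer-spoke-to-hub"
  resource_group_name       = "rg-databricks-dev"
  virtual_network_name      = azurerm_virtual_network.spoke.name
  remote_virtual_network_id = var.hub_vnet_id
  allow_forwarded_traffic   = true
}

# NSG that explicitly denies inbound traffic from the public internet.
resource "azurerm_network_security_group" "spoke" {
  name                = "nsg-databricks-spoke-dev"
  resource_group_name = "rg-databricks-dev"
  location            = "eastus"

  security_rule {
    name                       = "deny-all-inbound-internet"
    priority                   = 4096
    direction                  = "Inbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Private DNS zone so workspace traffic resolves over private endpoints.
resource "azurerm_private_dns_zone" "databricks" {
  name                = "privatelink.azuredatabricks.net"
  resource_group_name = "rg-databricks-dev"
}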

One of the first challenges was managing the complex dependencies that come with hybrid cloud solutions. The preliminary deployment phase focused on creating secure virtual networks, configuring access controls, and establishing approval gates for Infrastructure as Code (IaC) modules developed in Terraform. By using a centralized DevOps repository, we maintained version control and consistency across environments like development, QA, and production. Ultimately, our team didn’t just build infrastructure; we coached the client’s data scientists on best practices, helping them embrace the new workflows.
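
To illustrate, the pattern below shows how one versioned Terraform module can be stamped out per environment from a central repository; the repository URL, module path, version tag, and variables are all hypothetical.

# Hypothetical module calls; the repo URL, path, and tag are placeholders.
module "databricks_spoke_dev" {
  source = "git::https://dev.azure.com/contoso/platform/_git/iac-modules//databricks-spoke?ref=v1.4.0"

  environment   = "dev"
  address_space = "10.20.0.0/16"
}

module "databricks_spoke_qa" {
  source = "git::https://dev.azure.com/contoso/platform/_git/iac-modules//databricks-spoke?ref=v1.4.0"

  environment   = "qa"
  address_space = "10.21.0.0/16"
}

Pinning every environment to a tagged module version is what makes approval gates meaningful in a setup like this: production only moves to a new tag after the change has cleared review in dev and QA.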

Wagile: Because Sometimes You Need a Bit of Both 

When you’re working with multiple teams such as networking, data engineering, and data science, it’s clear that no single project management methodology will cut it. Enter "wagile": a hybrid of waterfall and agile, with a few other things sprinkled in. For example, when setting up the foundational security layers, a waterfall approach made sense. But data pipeline development? That needed sprints, iterations, feedback, rinse, and repeat.

The result? The infrastructure and DevOps teams tackled their tasks in parallel while staying aligned through regular touchpoints, where we tracked handoffs and dependencies between the two workstreams. We didn’t stick to all the formal agile ceremonies. Our client stakeholders had been through full agile transformations before and weren’t eager for a repeat performance. Instead, we focused on keeping things lean and meaningful, meeting the team where they were rather than forcing them into a methodology they didn’t really believe in.

Communication and Task Tracking: From DevOps Boards to Good Old Excel 

Let’s talk about the art of communication and the reality of task tracking. Azure DevOps is a fantastic tool, IF you can get your team to actually use it. But with this client, some of the DevOps boards felt like an abandoned amusement park: a few rides were still running, while others had tumbleweeds blowing through. Some team members were committed to adding tasks and updating existing ones; others never looked at the board unless I shared it during standup.

Rather than spin our wheels, I adapted. I mirrored the Azure DevOps boards in weekly Excel spreadsheets (complete with some formatting pizzazz, you know, red, green, yellow highlights, bold borders…a thing of beauty) and sent them out via email. Sure, it felt a bit nostalgic (it was giving “Project Manager, 2005 Edition”), but it worked. Everyone knew their tasks and deadlines, and for the most part, we stayed aligned. What started as a workaround became a reliable rhythm for communication.

This wasn’t just about organizing tasks; it was about building trust. The regular email updates, combined with informal planning sessions, reassured stakeholders that we weren’t there to enforce rigid rules but to drive progress. Over time, this approach helped build confidence in the Spyglass team and kept communication flowing in a way that worked for the client.  

Ultimately Delivering Value Through Flexibility and Best Practices 

By aligning with the client’s technical and cultural needs, we delivered a solution that elevated their data capabilities and set them up for long-term success. Here’s what we accomplished: 

  • Scalable data processing: Databricks workspaces streamlined workflows for development and reporting. 
  • Enhanced data governance: Implemented Unity Catalog for data lineage and audit controls, providing greater transparency and security (see the sketch after this list). 
  • Improved network security: Built an enterprise-grade architecture that protected sensitive data from unauthorized access while maintaining secure private connectivity.   
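
For a sense of what the Unity Catalog governance work looks like in code, here is a minimal sketch using the Databricks Terraform provider; the catalog, schema, and group names are hypothetical.

terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

# The group needs USE_CATALOG on the parent catalog before it can reach any schema.
resource "databricks_grants" "main_catalog" {
  catalog = "main"

  grant {
    principal  = "data-scientists"
    privileges = ["USE_CATALOG"]
  }
}

# Read-only access to one schema; SELECT is inherited by the tables inside it.
resource "databricks_grants" "analytics_schema" {
  schema = "main.analytics"

  grant {
    principal  = "data-scientists"
    privileges = ["USE_SCHEMA", "SELECT"]
  }
}

When grants like these live in the same version-controlled repository as the networking modules, every change to who can see what goes through the same pull-request review as the infrastructure itself.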

The success of this project reinforced an important PM lesson: the best implementations are as much about people and communication as they are about technology. At Spyglass MTG, we approach every implementation as more than a technical achievement; it’s a chance to provide clients with the practices and tools they need to take full advantage of their data in the modern data world. To learn more, contact us today.
