3 Tips for Applying Agile to Data Science and Data Ops

Reading Time: 4 minutes

It’s plain for all to see that nearly everything is becoming increasingly data-driven these days, and the explosive emergence of the IoT has fuelled a lot of that. Every effort to harness data, whether by putting it to work directly or by making decisions based on it, is made in the interest of competitive advantage, and for as long as we live in a capitalist society where only the early bird gets the worm, that’s going to be the driving force behind much of what goes on in the digital world.

Visualizations, analytics, and the ‘biggie’ – machine learning – are among the aspects of big data demanding more attention and bigger budgetary allowances than ever before. Machine learning in particular is kind of like an unexplored continent, as if it were 1620 rather than 2020. Most of you reading this blog won’t need us to go into the hows and whys of that, so we’ll move right along to where we’re going with all of this in today’s blog.

Here at 4GoodHosting, it probably goes without saying that we’re front and center in the audience for all these developments. While big data isn’t immediately relevant to what we do, it certainly is in a roundabout way, and that’s very likely true for any good Canadian web hosting provider. The changes have been revolutionary and continue to be so, so let’s get to today’s topic.

While we are not shot callers or developers, we know that some of you are, and as such here are 3 solid tips for applying agile to data science and data ops.

All About Agile Methodologies

Nowadays you’ll be hard pressed to find even one organization that isn’t trying to become more data-driven. The aim, of course, is to leverage data visualizations, analytics, and machine learning for advantages over competitors. Providing actionable insights through analytics requires a strong data ops program, and the same goes for a proactive data governance program to address data quality, privacy, policies, and security.

The 3 components that should be shaping aligned stakeholder priorities are the delivery of data ops, analytics, and governance. Being able to implement multiple technologies and bring together the right people with the right skills at the right time will become an expected capability of any group working towards this.

Further, agile methodologies can form the working process that helps multidisciplinary teams prioritize, plan, and successfully deliver incremental business value. The benefits of having these methodologies in place can also extend to capturing and processing feedback from customers, stakeholders, and end-users. This volunteered feedback usually has great value for driving data visualization improvements, machine learning model recalibrations, data quality increases, and data governance compliance.

We’ll conclude this preface to the 3 tips by saying agile data science teams should be multidisciplinary, meaning a collection of data ops engineers, data modelers, database developers, data governance specialists, data scientists, citizen data scientists, data stewards, statisticians, and machine learning experts should be the norm – whatever that takes on your end. Of course you’ll be determining the actual makeup based on the scope of work and the complexity of the data and analytics required.

Right then, on to our 3 tips for applying agile to data science and data ops:

  1. Developing and Upgrading Analytics, Dashboards, and Data Visualizations

Data science teams are best utilized nowadays when they’re conceiving dashboards that help end-users answer questions.

But the key here is taking a deep and unequivocal look at agile user stories, and each should be examined through 3 different lenses:

  • Who are the end-users?
  • What problem do they want addressed?
  • What makes the problem important?

Answers to these questions can then be the basis for writing agile user stories that deliver analytics, dashboards, or data visualizations. You may also want to determine who will be using the dashboard and what answers they will be looking for. This process is made easier when stakeholders and end-users provide hypotheses indicating how they intend to take results and make them actionable.
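To make that concrete, here’s a minimal sketch of one way a team might capture those 3 lenses in a reusable story template. The class and field names are our own illustrative assumptions for the sketch, not any standard:

  from dataclasses import dataclass

  # A minimal sketch of an agile user story built from the three lenses.
  # Class and field names are illustrative assumptions, not a standard.
  @dataclass
  class DashboardUserStory:
      end_user: str     # Who are the end-users?
      problem: str      # What problem do they want addressed?
      importance: str   # What makes the problem important?
      hypothesis: str   # How do they intend to make results actionable?

      def as_story(self):
          return (f"As {self.end_user}, I want {self.problem} "
                  f"so that {self.importance}.")

  story = DashboardUserStory(
      end_user="a regional sales manager",
      problem="a dashboard comparing weekly sales against forecast",
      importance="I can reallocate inventory before stock-outs occur",
      hypothesis="flagged regions trigger a same-day replenishment review",
  )
  print(story.as_story())

Keeping the hypothesis alongside the story makes it easy to check later whether the dashboard actually made the results actionable.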

  2. Developing and Upgrading Machine Learning Models

Segmenting and tagging data, extracting features, and making sure data sets are run through selectively and strategically chosen algorithms and configurations need to be integral parts of developing analytical and machine learning models. It’s also increasingly common for agile data science teams to record agile user stories for prepping data for use in model development.

From there, separate stories are logged for each experiment and then cross-referenced, either for patterns across them or for additional insights that emerge from seeing them side by side.

This transparency helps teams review the results from experiments, decide on successive priorities, and discuss whether current approaches are still conducive to beneficial results. Take a very hard look at that last part, and be willing to move in entirely different directions if need be. Being fixed in your ways here, or partial to any one approach, can sabotage your interests in a big way.
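For illustration only, here’s a minimal sketch of what logging one story per experiment and reviewing them side by side might look like. The story IDs, algorithms, and AUC values are all hypothetical:

  # An illustrative sketch: one logged story per experiment, compared
  # side by side. Story IDs, algorithms, and AUC values are made up.
  experiments = [
      {"story": "EXP-101", "algorithm": "random forest", "auc": 0.81},
      {"story": "EXP-102", "algorithm": "gradient boosting", "auc": 0.84},
      {"story": "EXP-103", "algorithm": "logistic regression", "auc": 0.73},
  ]

  # Rank by the shared metric so the team can review results together
  # and decide on successive priorities (or change direction entirely).
  for exp in sorted(experiments, key=lambda e: e["auc"], reverse=True):
      print(f"{exp['story']}: {exp['algorithm']:<20} AUC={exp['auc']:.2f}")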

  3. Discovering, Integrating, and Cleansing Data Sources

Well-geared agile data science teams will be seeking out new data sources to integrate into and enrich their strategic data warehouses and data lakes. Data siloed in the SaaS tools marketing departments use to reach prospects or communicate with customers is an excellent example. Other data sources might provide additional perspectives around supply chains, customer demographics, or environmental contexts that impact purchasing decisions.


Other smart choices are agile backlogs with story cards for researching new data sources, validating sample data sets, and integrating prioritized ones into primary data repositories. Further considerations may include automating the data integration, implementing data validation and quality rules, and linking data with master data sources.
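As a hedged sketch of what simple data validation and quality rules might look like before a sample data set gets integrated, assuming pandas and some hypothetical column names:

  import pandas as pd

  # A minimal sketch of validation rules run against a sample data set
  # before integration. Column names and rules are assumptions.
  def validate_sample(df):
      violations = []
      if df["customer_id"].isna().any():
          violations.append("customer_id contains nulls")
      if df["customer_id"].duplicated().any():
          violations.append("customer_id contains duplicates")
      if (df["order_total"] < 0).any():
          violations.append("order_total contains negative values")
      return violations

  sample = pd.DataFrame({
      "customer_id": [101, 102, 102, None],
      "order_total": [250.0, -10.0, 75.5, 40.0],
  })
  for rule in validate_sample(sample):
      print("RULE FAILED:", rule)

A sample that comes back with an empty list passes and can be prioritized for integration; anything else becomes a story card.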

Lastly, data science teams should also capture and prioritize data debt. Historically, many data entry forms and tools lacked sufficient data validation, and integrated data sources lacked cleansing rules or exception handling. Call it keeping a clean house if you will, but it’s a good idea even if it’s never going to take top priority.
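One hypothetical way of keeping that house clean is logging data debt as scored backlog items so it stays visible even when it never tops the queue. The items and the impact / effort scoring below are purely illustrative:

  # A hypothetical sketch of capturing data debt as prioritized backlog
  # items so cleanup stays visible even when it never tops the queue.
  data_debt = [
      {"item": "Signup form accepts free-text country names",
       "impact": 3, "effort": 2},
      {"item": "Legacy CRM import has no exception handling",
       "impact": 5, "effort": 5},
      {"item": "Email field lacks format validation",
       "impact": 4, "effort": 1},
  ]

  # Simple prioritization: highest impact-to-effort ratio first.
  for debt in sorted(data_debt, key=lambda d: d["impact"] / d["effort"],
                     reverse=True):
      print(f"{debt['item']} (impact {debt['impact']}, effort {debt['effort']})")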

With all of this in place, you should be able to improve data quality and deliver tools for leveraging analytics in decision making, products, and services.

