Series: Beginner's Guide to Azure Data Factory

Welcome to this Beginner’s Guide to Azure Data Factory! In this series, I’m going to cover the fundamentals of Azure Data Factory in fun, casual, bite-sized blog posts that you can read through at your own pace and reference later. You may not be new to data integration or SQL, but we’re going to start completely from scratch in this series.

How do you get started building data pipelines? What if you need to transform or re-shape data? How do you schedule and monitor your data pipelines? Can you make your solution dynamic and reusable? Join me in this Beginner’s Guide to Azure Data Factory to learn all of these things – and maybe more :) Let’s go!

  1. Introduction to Azure Data Factory
  2. Creating an Azure Data Factory
  3. Overview of Azure Data Factory Components
  4. Copy Data Wizard
  5. Pipelines
  6. Copy Data Activity
  7. Datasets
  8. Linked Services
  9. Data Flows
  10. Orchestrating Pipelines
  11. Debugging Pipelines
  12. Triggers
  13. Monitoring
  14. Annotations and User Properties
  15. Integration Runtimes
  16. Copy SQL Server Data
  17. Executing SSIS Packages
  18. Source Control
  19. Templates
  20. Parameters
  21. Variables
  22. ForEach Loops
  23. Lookups
  24. Understanding Pricing
  25. Resources

P.S. This series will always be a work-in-progress. Yes, always. Azure changes often, so I keep coming back to tweak, update, and improve content. I just might not be able to do it right away! :)

Copy Data Activity in Azure Data Factory

This post is part 6 of 25 in the series Beginner's Guide to Azure Data Factory

In the previous post, we went through Azure Data Factory pipelines in more detail. In this post, we will dig into the copy data activity. How does it work? How do you configure the settings? And how can you optimize performance while keeping costs down?

Copy Data Activity

The copy data activity is the core (*) activity in Azure Data Factory.

(* Cathrine’s opinion 🤓)

You can copy data to and from more than 80 Software-as-a-Service (SaaS) applications (such as Dynamics 365 and Salesforce), on-premises data stores (such as SQL Server and Oracle), and cloud data stores (such as Azure SQL Database and Amazon S3). During copying, you can define and map columns implicitly or explicitly, convert file formats, and even zip and unzip files – all in one task.

Yeah. It’s powerful :) But how does it really work?
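
A quick taste before we dig in: under the hood, a copy data activity is just a definition that points at a source dataset and a sink dataset. Here’s a rough sketch of that idea using the Azure Data Factory Python SDK (azure-mgmt-datafactory) instead of the user interface the posts use – the subscription, resource group, factory, and dataset names are placeholders I made up for illustration:

```python
# A rough sketch, not the exact pipeline from this post: the subscription,
# resource group, factory, and dataset names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink,
    BlobSource,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

adf_client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# A copy data activity reads from a source dataset and writes to a sink dataset.
copy_activity = CopyActivity(
    name="CopyBlobToBlob",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceBlobDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkBlobDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

# The activity lives inside a pipeline, which is deployed to the data factory.
adf_client.pipelines.create_or_update(
    "<resource-group>",
    "<data-factory-name>",
    "CopyPipeline",
    PipelineResource(activities=[copy_activity]),
)
```

The settings you click through in the user interface map to the same pieces: a source, a sink, and the datasets they use.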

Continue reading →

Datasets in Azure Data Factory

This post is part 7 of 25 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at the copy data activity and saw how the source and sink properties changed with the datasets used. In this post, we will take a closer look at some common datasets and their properties.

Let’s start with the source and sink datasets we created in the copy data wizard!

Dataset Names

First, a quick note. If you use the copy data wizard, you can change the dataset names by clicking the edit button on the summary page…
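
For context, a dataset is mostly a named pointer: it references a linked service and describes where the data lives and what shape it has. Here’s a rough sketch of a blob dataset defined with the Azure Data Factory Python SDK (azure-mgmt-datafactory) rather than the user interface – the linked service name, folder, and file name are placeholders:

```python
# A rough sketch of a dataset definition; the linked service, folder, and
# file names are placeholders, and the linked service is assumed to exist.
from azure.mgmt.datafactory.models import (
    AzureBlobDataset,
    DatasetResource,
    LinkedServiceReference,
)

source_dataset = DatasetResource(
    properties=AzureBlobDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference",
            reference_name="AzureBlobStorageLinkedService",
        ),
        folder_path="input",
        file_name="data.csv",
    )
)

# The name you deploy it under is the dataset name you see in the user interface:
# adf_client.datasets.create_or_update(
#     "<resource-group>", "<data-factory-name>", "SourceBlobDataset", source_dataset)
```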

Continue reading →

Linked Services in Azure Data Factory

This post is part 8 of 25 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at datasets and their properties. In this post, we will look at linked services in more detail. How do you configure them? What are the authentication options for Azure services? And how do you securely store your credentials?

Let’s start by creating a linked service to an Azure SQL Database. Yep, that linked service you saw screenshots of in the previous post. Mhm, the one I sneakily created already so I could explain using datasets as a bridge to linked services. That one :D

Creating Linked Services

First, click Connections. Then, on the linked services tab, click New:

[Screenshot of the Azure Data Factory user interface showing the connections tab with linked services highlighted]

The New Linked Service pane will open. The Data Store tab shows all the linked services you can get data from or write data to:
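
Since we’re connecting to an Azure SQL Database, here’s a rough sketch of what the same linked service could look like defined with the Azure Data Factory Python SDK (azure-mgmt-datafactory) instead of the user interface – the server, database, and credential values are placeholders, and in a real solution you would keep secrets out of the definition, for example by referencing Azure Key Vault:

```python
# A rough sketch; the connection string values are placeholders. In a real
# solution, reference secrets from Azure Key Vault instead of hardcoding them.
from azure.mgmt.datafactory.models import (
    AzureSqlDatabaseLinkedService,
    LinkedServiceResource,
)

sql_linked_service = LinkedServiceResource(
    properties=AzureSqlDatabaseLinkedService(
        connection_string=(
            "Server=tcp:<server>.database.windows.net,1433;"
            "Database=<database>;User ID=<user>;Password=<password>;"
        )
    )
)

# adf_client.linked_services.create_or_update(
#     "<resource-group>", "<data-factory-name>", "AzureSqlDatabaseLinkedService",
#     sql_linked_service)
```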

Continue reading →

Data Flows in Azure Data Factory

This post is part 9 of 25 in the series Beginner's Guide to Azure Data Factory

So far in this Azure Data Factory series, we have looked at copying data. We have created pipelines, copy data activities, datasets, and linked services. In this post, we will peek at the second part of the data integration story: using data flows for transforming data.

But first, I need to make a confession. And it’s slightly embarrassing…

Continue reading →

Orchestrating Pipelines in Azure Data Factory

This post is part 10 of 25 in the series Beginner's Guide to Azure Data Factory

In the previous post, we peeked at the two different data flows in Azure Data Factory, then created a basic mapping data flow. In this post, we will look at orchestrating pipelines using branching, chaining, and the execute pipeline activity.

Let’s continue where we left off in the previous post. How do we wire up our solution and make it look something like this?

[Diagram showing data being copied from an on-premises data center to Azure Data Lake Storage, then transformed from Azure Data Lake Storage into Azure Synapse Analytics (previously Azure SQL Data Warehouse)]

We need to make sure we’ve copied the data before we can transform it.

One way to build this solution is to create a single pipeline with a copy data activity followed by a data flow activity. But! Since we have already created two separate pipelines, and this post is about orchestrating pipelines, let’s go with the second option: executing both of the existing pipelines from a new pipeline :D
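
As a rough sketch of where we’re headed, a parent pipeline that chains two execute pipeline activities could look something like this with the Azure Data Factory Python SDK (azure-mgmt-datafactory) – the pipeline names are placeholders for the copy and data flow pipelines we created earlier, and the second activity only starts once the first one succeeds:

```python
# A rough sketch of a parent pipeline that chains two execute pipeline
# activities; the pipeline names are placeholders for the copy and data flow
# pipelines created earlier in the series.
from azure.mgmt.datafactory.models import (
    ActivityDependency,
    ExecutePipelineActivity,
    PipelineReference,
    PipelineResource,
)

run_copy = ExecutePipelineActivity(
    name="RunCopyPipeline",
    pipeline=PipelineReference(type="PipelineReference", reference_name="CopyPipeline"),
    wait_on_completion=True,
)

# Chaining: the transform pipeline only starts if the copy pipeline succeeds.
run_transform = ExecutePipelineActivity(
    name="RunTransformPipeline",
    pipeline=PipelineReference(type="PipelineReference", reference_name="TransformPipeline"),
    wait_on_completion=True,
    depends_on=[
        ActivityDependency(activity="RunCopyPipeline", dependency_conditions=["Succeeded"])
    ],
)

orchestration_pipeline = PipelineResource(activities=[run_copy, run_transform])

# adf_client.pipelines.create_or_update(
#     "<resource-group>", "<data-factory-name>", "OrchestrationPipeline",
#     orchestration_pipeline)
```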

Continue reading →