
Triggers in Azure Data Factory

This post is part 13 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at testing and debugging pipelines. But how do you schedule your pipelines to run automatically? In this post, we will look at the different types of triggers in Azure Data Factory.

Let’s start by looking at the user interface, and dig into the details of the different trigger types.

(Pssst! Triggers have been moved into the management page. I’ll be updating the descriptions and screenshots shortly!)

Creating Triggers

First, click Triggers. Then, on the triggers tab, click New:

Screenshot of Azure Data Factory user interface with Triggers open, highlighting the button for creating a new trigger

The New Trigger pane will open. The default trigger type is Schedule, but you can also choose Tumbling Window and Event:

Screenshot of Azure Data Factory user interface with the New Trigger pane open and the different trigger types highlighted

Let’s look at each of these trigger types and their properties :)
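If you want a sneak peek under the hood first: every trigger is stored as a JSON definition. Here is a minimal sketch of a schedule trigger that runs a pipeline once per day (the trigger and pipeline names are made up for illustration):

```json
{
    "name": "DailyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2020-01-01T06:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "MyPipeline",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```

Tumbling window and event triggers follow the same overall pattern, just with different types and type properties.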

Continue reading →

Debugging Pipelines in Azure Data Factory

This post is part 12 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at orchestrating pipelines using branching, chaining, and the execute pipeline activity. In this post, we will look at debugging pipelines. How do we test our solutions?

You debug a pipeline by clicking the debug button:

Screenshot of the Azure Data Factory interface, with a pipeline open, and the debug button highlighted

Tadaaa! Blog post done? :D

I joke, I joke, I joke. Debugging pipelines is a one-click operation, but there are a few more things to be aware of. In the rest of this post, we will look at what happens when you debug a pipeline, how to see the debugging output, and how to set breakpoints.

(Pssst! The debugging experience has had a huge makeover since I first wrote this post. I’ll be updating everything shortly!)

Debugging Pipelines

Let’s start with the most important thing:

Continue reading →

Orchestrating Pipelines in Azure Data Factory

This post is part 11 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we peeked at the two different types of data flows in Azure Data Factory, then created a basic mapping data flow. In this post, we will look at orchestrating pipelines using branching, chaining, and the execute pipeline activity.

Let’s continue where we left off in the previous post. How do we wire up our solution and make it look something like this?

Diagram showing data being copied from an on-premises data center to Azure Data Lake Storage, and then transformed from Azure Data Lake Storage to Azure Synapse Analytics (previously Azure SQL Data Warehouse)

We need to make sure that we get the data before we can transform it.

One way to build this solution is to create a single pipeline with a copy data activity followed by a data flow activity. But! Since we have already created two separate pipelines, and this post is about orchestrating pipelines, let’s go with the second option :D
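If you are curious what that orchestration looks like in the underlying JSON, here is a minimal sketch of a parent pipeline with two execute pipeline activities, where the second one only runs if the first one succeeds (all names are made up for illustration):

```json
{
    "name": "OrchestrationPipeline",
    "properties": {
        "activities": [
            {
                "name": "Execute Copy Pipeline",
                "type": "ExecutePipeline",
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "CopyPipeline",
                        "type": "PipelineReference"
                    },
                    "waitOnCompletion": true
                }
            },
            {
                "name": "Execute Transform Pipeline",
                "type": "ExecutePipeline",
                "dependsOn": [
                    {
                        "activity": "Execute Copy Pipeline",
                        "dependencyConditions": [ "Succeeded" ]
                    }
                ],
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "TransformPipeline",
                        "type": "PipelineReference"
                    },
                    "waitOnCompletion": true
                }
            }
        ]
    }
}
```

The dependsOn property is what creates the green "on success" arrow between activities on the design surface.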

Continue reading →

Data Flows in Azure Data Factory

This post is part 10 of 26 in the series Beginner's Guide to Azure Data Factory

So far in this Azure Data Factory series, we have looked at copying data. We have created pipelines, copy data activities, datasets, and linked services. In this post, we will peek at the second part of the data integration story: using data flows for transforming data.

But first, I need to make a confession. And it’s slightly embarrassing…

Continue reading →

Linked Services in Azure Data Factory

This post is part 9 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at datasets and their properties. In this post, we will look at linked services in more detail. How do you configure them? What are the authentication options for Azure services? And how do you securely store your credentials?

Let’s start by creating a linked service to an Azure SQL Database. Yep, that linked service you saw screenshots of in the previous post. Mhm, the one I sneakily created already so I could explain using datasets as a bridge to linked services. That one :D
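To set expectations for where we are heading: a linked service is also just JSON behind the scenes. Here is a minimal sketch of an Azure SQL Database linked service that gets its connection string from Azure Key Vault, one way to store credentials securely (all names are made up for illustration):

```json
{
    "name": "AzureSqlDatabaseLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "MyKeyVaultLinkedService",
                    "type": "LinkedServiceReference"
                },
                "secretName": "SqlConnectionString"
            }
        }
    }
}
```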

(Pssst! Linked services have been moved into the management page. I’ll be updating the descriptions and screenshots shortly!)

Creating Linked Services

First, click Connections. Then, on the linked services tab, click New:

Screenshot of the Azure Data Factory user interface showing the connections tab with linked services highlighted

The New Linked Service pane will open. The Data Store tab shows all the linked services you can copy data from or write data to:

Continue reading →

Datasets in Azure Data Factory

This post is part 8 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at the copy data activity and saw how the source and sink properties changed with the datasets used. In this post, we will take a closer look at some common datasets and their properties.

Let’s start with the source and sink datasets we created in the copy data wizard!
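As a reference for the rest of this post, here is a minimal sketch of what a delimited text dataset looks like in JSON, pointing at a CSV file in blob storage through a linked service (all names are made up for illustration):

```json
{
    "name": "CsvSourceDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "MyStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "mycontainer",
                "fileName": "myfile.csv"
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}
```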

Dataset Names

First, a quick note. If you use the copy data wizard, you can change the dataset names by clicking the edit button on the summary page…

Continue reading →

Copy Data Activity in Azure Data Factory

This post is part 7 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we went through Azure Data Factory pipelines in more detail. In this post, we will dig into the copy data activity. How does it work? How do you configure the settings? And how can you optimize performance while keeping costs down?

Copy Data Activity

The copy data activity is the core (*) activity in Azure Data Factory.

(* Cathrine’s opinion 🤓)

You can copy data to and from more than 80 Software-as-a-Service (SaaS) applications (such as Dynamics 365 and Salesforce), on-premises data stores (such as SQL Server and Oracle), and cloud data stores (such as Azure SQL Database and Amazon S3). During copying, you can define and map columns implicitly or explicitly, convert file formats, and even zip and unzip files – all in one task.
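To give you an idea of what this looks like behind the scenes, here is a minimal sketch of a copy data activity in JSON, reading from a delimited text source dataset and writing to an Azure SQL Database sink dataset (the dataset names are made up for illustration):

```json
{
    "name": "Copy Demo Data",
    "type": "Copy",
    "inputs": [
        { "referenceName": "SourceDataset", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "SinkDataset", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": { "type": "DelimitedTextSource" },
        "sink": { "type": "AzureSqlSink" }
    }
}
```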

Yeah. It’s powerful :) But how does it really work?

Continue reading →

Pipelines in Azure Data Factory

This post is part 6 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we used the Copy Data Wizard to copy a file from our demo dataset to our storage account. The Copy Data Wizard created all the factory resources for us: pipelines, activities, datasets, and linked services.

In this post, we will go through pipelines in more detail. How do we create and organize them? What are their main properties? Can we edit them without using the graphical user interface?
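To answer that last question right away: yes, we can! Every pipeline is stored as a JSON definition that you can open from the code button in the user interface. Here is a minimal sketch, with a single activity and most properties trimmed away for illustration:

```json
{
    "name": "MyPipeline",
    "properties": {
        "description": "Copies a file from blob storage to a database",
        "activities": [
            {
                "name": "Copy Demo Data",
                "type": "Copy",
                "typeProperties": {}
            }
        ],
        "folder": {
            "name": "Demo"
        }
    }
}
```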

Pipelines: The Basics

When I was new to Azure Data Factory, I had many questions, but I didn’t always have someone to ask and learn from. When I did work in a team, I didn’t always dare ask my team members for help, because I felt silly for asking about things that I felt I should probably know.

Yeah, I know… It’s easy to tell others that there are no silly questions, but I don’t always listen to myself :)

I don’t want you to feel the same way! So. Let’s start from the beginning. These are the questions that I had when I was new to Azure Data Factory. Or, these are the questions that I realized I should have asked when I discovered something by accident and went “Oh! So that’s what that is! I wish I knew that last week!”

Continue reading →

Copy Data Wizard in Azure Data Factory

This post is part 5 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at the different Azure Data Factory components. In this post, we’re going to tie everything together and start making things happen. Woohoo! First, we will get familiar with our demo datasets. Then, we will create our Azure Storage Accounts that we will copy data into. Finally, we will start copying data using the Copy Data Wizard.

Demo Datasets

First, let’s get familiar with the demo datasets we will be using. I don’t know about you, but I’m a teeny tiny bit tired of the AdventureWorks demos. (I don’t even own a bike…) WideWorldImporters is at least a little more interesting. (Yay, IT joke mugs and chocolate frogs!) But! Let’s use something that’s not already in relational database format.

Let me present… *drumroll* 🥁

Continue reading →

Overview of Azure Data Factory Components

This post is part 4 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at the Azure Data Factory user interface and the four main Azure Data Factory pages. In this post, we will go through the Author page in more detail and look at a few things on the Monitoring page. Let’s look at the different Azure Data Factory components!

Azure Data Factory Components on the Author Page

On the left side of the Author page, you will see your factory resources. In this example, we have already created one pipeline, two datasets, and one data flow:

Screenshot of the Author page in Azure Data Factory, with one Pipeline, two Datasets, and one Data Flow already created

Let’s go through each of these Azure Data Factory components and explain what they are and what they do.

Continue reading →