In the previous post, we looked at the Azure Data Factory user interface and the four main Azure Data Factory pages. In this post, we will go through the Author page in more detail and look at a few things on the Management page. Let’s look at the different Azure Data Factory components!
Azure Data Factory Components on the Author Page
On the left side of the Author page, you will see your factory resources. In this example, we have already created one pipeline, two datasets, and two data flows:

Let’s go through each of these Azure Data Factory components and explain what they are and what they do.
Pipelines
Pipelines are the things you execute or run in Azure Data Factory, similar to packages in SQL Server Integration Services (SSIS). This is where you define your workflow: what you want to do and in which order. For example, a pipeline can first copy data from an on-premises data center to Azure Data Lake Storage, and then transform the data from Azure Data Lake Storage into Azure Synapse Analytics.

When you open a pipeline, you will see the pipeline authoring interface. On the left side, you will see a list of all the activities you can add to the pipeline. On the right side, you will see the design canvas. You can click on Parameters, Variables, and Output near the bottom to expand those panes, or on the Properties button in the top right corner to view the pipeline properties and related pipelines. We’ll cover these things later!
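By the way, everything you click together on the design canvas is stored as a code definition behind the scenes, and you can create the same resources through code if you prefer. Here’s a rough sketch of the copy-then-transform example above, defined using the azure-mgmt-datafactory Python package. The subscription, resource group, factory, dataset, and pipeline names are all placeholders I made up for illustration, not something you will find in your factory:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    ActivityDependency, BlobSink, BlobSource, CopyActivity,
    DatasetReference, PipelineResource, SqlDWSink,
)

# Placeholder names -- replace with your own subscription, resource group,
# factory, and dataset names.
subscription_id = "<subscription-id>"
resource_group = "my-resource-group"
factory_name = "my-data-factory"

# Step 1: copy data from the source into Azure Data Lake Storage.
copy_to_lake = CopyActivity(
    name="CopyToDataLake",
    inputs=[DatasetReference(type="DatasetReference", reference_name="OnPremSourceDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="DataLakeDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

# Step 2: load the data from the data lake into Azure Synapse Analytics,
# but only after the first copy has succeeded.
load_to_synapse = CopyActivity(
    name="LoadToSynapse",
    inputs=[DatasetReference(type="DatasetReference", reference_name="DataLakeDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SynapseDataset")],
    source=BlobSource(),
    sink=SqlDWSink(),
    depends_on=[ActivityDependency(activity="CopyToDataLake",
                                   dependency_conditions=["Succeeded"])],
)

# The pipeline is the collection of activities; the dependencies define the order.
pipeline = PipelineResource(activities=[copy_to_lake, load_to_synapse])

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)
adf_client.pipelines.create_or_update(resource_group, factory_name, "CopyAndLoadPipeline", pipeline)
```

Don’t worry if this looks intimidating, we will be clicking, not coding, for most of this series!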
Activities
Activities are the individual steps inside a pipeline, where each activity performs a single task. You can chain activities or run them in parallel. Activities can either control the flow inside a pipeline, move or transform data, or perform external tasks using services outside of Azure Data Factory.

You add an activity to a pipeline by dragging it onto the design canvas. When you click on an activity, it will be highlighted, and you will see the activity properties in the properties panel. These properties will be different for each type of activity.
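The chaining itself is stored as dependency conditions on each activity. Here’s a tiny sketch using the same Python package (all names and the URL are made up): a Wait activity for control flow, followed by a Web activity that only runs if the wait succeeds:

```python
from azure.mgmt.datafactory.models import ActivityDependency, WaitActivity, WebActivity

# A control flow activity: pause the pipeline for 30 seconds.
wait_step = WaitActivity(name="WaitThirtySeconds", wait_time_in_seconds=30)

# An external activity: call an HTTP endpoint (the URL is just a placeholder).
# The depends_on property is what the green "on success" arrow in the design
# canvas translates to -- this activity only runs if WaitThirtySeconds succeeds.
notify_step = WebActivity(
    name="CallNotificationEndpoint",
    method="POST",
    url="https://example.com/notify",
    body={"status": "done"},
    depends_on=[ActivityDependency(activity="WaitThirtySeconds",
                                   dependency_conditions=["Succeeded"])],
)

# Activities without dependencies between them run in parallel,
# so the order of this list is not what chains them together.
activities = [wait_step, notify_step]
```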
Datasets
If you are moving or transforming data, you need to specify the format and location of the input and output data. Datasets are like named views that represent a database, a database table, a folder, or a single file.
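As a small sketch (again with made-up names, and assuming a linked service called MyBlobStorage already exists), a dataset pointing to a single file in Blob Storage could be defined like this with the Python package:

```python
from azure.mgmt.datafactory.models import (
    AzureBlobDataset, DatasetResource, LinkedServiceReference,
)

# A dataset that points to one specific file in a folder in Blob Storage.
# It describes *where* and *what* the data is -- the connection itself
# lives in the "MyBlobStorage" linked service (a placeholder name).
orders_dataset = DatasetResource(
    properties=AzureBlobDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference",
            reference_name="MyBlobStorage",
        ),
        folder_path="rawdata/sales",
        file_name="orders.csv",
    )
)

# Leaving out file_name would make the dataset represent the whole folder instead.
```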

Data Flows: Mapping
Mapping Data Flows are a special type of activity for creating visual data transformations without having to write any code.

Data Flows: Wrangling (Power Queries)
Wrangling Data Flows are a different type of activity for creating visual data transformations without having to write any code. If you have used Power BI, you will recognize this interface 🤓 This will most likely be rebranded from “wrangling data flows” to “power queries” in 2021, so you may see me use both terms!

Templates
If you don’t want to create all your pipelines from scratch, you can use the pre-defined templates provided by Microsoft, or create your own custom templates.

Azure Data Factory Components on the Management Page
On the left side of the Management page, you will see the other components and services you can create and configure.

Linked Services
Linked Services are like connection strings. They define the connection information for data sources and services, as well as how to authenticate to them.
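To make the connection string analogy a little more concrete, here’s a sketch of creating a blob storage linked service with the Python package. The account name, account key, and resource names are all placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureStorageLinkedService, LinkedServiceResource, SecureString,
)

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
factory_name = "my-data-factory"        # placeholder

# The linked service holds the connection and authentication information --
# here a storage account connection string, wrapped as a secure string.
storage_connection = SecureString(
    value="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
)
blob_linked_service = LinkedServiceResource(
    properties=AzureStorageLinkedService(connection_string=storage_connection)
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)
adf_client.linked_services.create_or_update(
    resource_group, factory_name, "MyBlobStorage", blob_linked_service
)
```

In real projects, you would keep secrets like the account key in Azure Key Vault instead of pasting them into code.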

Integration Runtimes
Integration runtimes specify the infrastructure to run activities on. You can create three types of integration runtimes: Azure, Self-Hosted, and Azure-SSIS. Azure integration runtimes use infrastructure and hardware managed by Microsoft. Self-Hosted integration runtimes use hardware and infrastructure managed by you, so you can execute activities on your local servers and data centers. Azure-SSIS integration runtimes are clusters of Azure virtual machines running the SQL Server Integration Services (SSIS) engine, used for executing SSIS packages in Azure Data Factory.
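As a sketch (placeholder names again), this is roughly what registering a self-hosted integration runtime looks like with the Python package. Creating it only registers it in the factory; you still install the integration runtime software on your own server and connect it using the authentication keys you get from Azure Data Factory:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource, SelfHostedIntegrationRuntime,
)

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
factory_name = "my-data-factory"        # placeholder

# Register a self-hosted integration runtime in the factory. The actual
# compute is your own server, where you install and register the runtime.
self_hosted_ir = IntegrationRuntimeResource(
    properties=SelfHostedIntegrationRuntime(
        description="Runs activities on our on-premises servers",
    )
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)
adf_client.integration_runtimes.create_or_update(
    resource_group, factory_name, "OnPremIntegrationRuntime", self_hosted_ir
)
```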

Triggers
Triggers determine when to execute a pipeline. You can execute a pipeline on a wall-clock schedule, at a periodic interval (a tumbling window), or when an event happens.
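For example, a schedule trigger that runs a pipeline once a day could be sketched like this (placeholder names, reusing the made-up CopyAndLoadPipeline from earlier). Note that a trigger doesn’t run anything until it has been started:

```python
from datetime import datetime

from azure.mgmt.datafactory.models import (
    PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, TriggerResource,
)

# A schedule trigger that runs the (placeholder) pipeline every day,
# starting from the given date and time.
daily_trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=ScheduleTriggerRecurrence(
            frequency="Day",
            interval=1,
            start_time=datetime(2021, 1, 1, 6, 0),
            time_zone="UTC",
        ),
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(
                    type="PipelineReference",
                    reference_name="CopyAndLoadPipeline",
                ),
                parameters={},
            )
        ],
    )
)

# After creating the trigger (triggers.create_or_update), it still has to be
# started before it runs anything -- in the UI you do this by publishing and
# then starting the trigger.
```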

Summary
In this post, we went through the Author page in more detail and looked at the different Azure Data Factory components. I like to illustrate and summarize these in a slightly different way:

You create pipelines to execute one or more activities. If an activity moves or transforms data, you define the input and output format in datasets. Then, you connect to the data sources or services through linked services. You can specify the infrastructure and location where you want to execute the activities by creating integration runtimes. After you have created a pipeline, you can add triggers to automatically execute it at specific times or based on events. Finally, if you don’t want to create your pipelines from scratch, you can start from pre-defined or custom templates.
Alrighty! Enough theory. Are you ready to make things happen? I am! Let’s copy some data using the Copy Data Wizard :)
🤓