Tag: Azure Data Factory

Ask the Experts: Data Integration in Azure Synapse Analytics (at Data Toboggan 2021)

On Saturday, June 12th, 2021, I will be moderating an Ask the Experts session at Data Toboggan! This is a free event focusing on Azure Synapse Analytics. There are over twenty sessions and lightning talks scheduled, covering topics such as architecture, performance, tools, data integration, machine learning and much more.

If you have any questions about Data Integration in Azure Synapse Analytics (or Azure Data Factory), join us! You don’t want to miss this session 🤓

Data Toboggan logo showing a toboggan (sled) going down a hill, with the text "Ask the Experts: Data Integration in Azure Synapse Analytics"

Ask the Experts from Microsoft and the Community

I’m incredibly thankful and excited that experts from both Microsoft and the community will be participating. It’s a unique opportunity for you to learn more about the technologies and products, as well as from experiences based on real-world projects.

These are the experts who will be participating:

What do you want to know?

When should you use Azure Synapse Analytics vs. Azure Data Factory? How can you optimize your Pipelines and Data Flows? Can Azure Purview bring additional benefits for Data Engineers? How do you set up automated deployment? What are some best practices to follow and pitfalls to avoid?

I’m sure you can think of many other questions! Please submit them through this form before the event. You can, of course, also ask questions during the session; it’s a live session, after all! I just want to make sure that we have some questions queued up from the start so we get the most out of our time 😊

(If we don’t get through all the questions during the session, I will try to get them answered in a follow-up blog post.)

Join us at Data Toboggan!

Take a look at the schedule, make sure you’re registered, and submit your questions. If this session is not for you, can you please spread the word to anyone in your network who might be interested? I hope to see you there! 🤓

Overview of Azure Data Factory User Interface

This post is part 3 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we started by creating an Azure Data Factory, then we navigated to it. In this post, we will navigate around inside Azure Data Factory. Let’s look at the Azure Data Factory user interface and the four Azure Data Factory pages.

Azure Data Factory Pages

On the left side of the screen, you will see the main navigation menu. Click on the arrows to expand and collapse the menu:

Animation of expanding and collapsing the pages menu in the Azure Data Factory user interface

Once we expand the navigation menu, we see that Azure Data Factory consists of four main pages: Home, Author, Monitor, and Manage:

Screenshot of the Azure Data Factory user interface showing the four main pages: Home, Author, Monitor, and Manage

Renaming the default branch in Azure Data Factory Git repositories from “master” to “main”

In Azure Data Factory, you can connect to a Git repository using either GitHub or Azure DevOps. When connecting, you have to specify which collaboration branch to use. In most cases, the default branch is used. Historically, the default branch name in git repositories has been “master“. This is problematic because it is not inclusive and is very offensive to many people.

The Git project, GitHub, and Azure DevOps are making changes to allow users to specify a different default branch name. GitHub and Azure DevOps will be changing their default branch names to “main” in 2020. I fully support this change and will be doing the same in my projects.

In this post, we will go through how to rename the default branch from “master” to “main” in Azure Data Factory Git repositories hosted in GitHub and Azure DevOps. Then we will reconnect Azure Data Factory and configure it to use the new “main” branch as the collaboration branch.

For these examples, I’m using my personal demo projects. I’m not taking into consideration any branch policies, other users, third-party tools, or external dependencies. As always, keep in mind that this is most likely a larger change, both technically and organizationally, in production and enterprise projects. 😊

The Short Version

  1. Create a new “main” branch in your Git repository
  2. Set the new “main” branch as the default branch in your Git repository
  3. Delete the old “master” branch in your Git repository
  4. Disconnect from your Git repository in Azure Data Factory
  5. Reconnect to your Git repository in Azure Data Factory using the new “main” branch as the collaboration branch

Azure Data Factory Resources

This post is part 26 of 26 in the series Beginner's Guide to Azure Data Factory

For the past 25 days, I have written one blog post per day about Azure Data Factory. My goal was to start completely from scratch and cover the fundamentals in casual, bite-sized blog posts. This became the Beginner’s Guide to Azure Data Factory. Today, I will share a bunch of resources to help you continue your own learning journey.

I’ve already seen from your questions and comments that you are ready to jump way ahead and dive into way more advanced topics than I ever intended this series to cover 😉 And as much as I love Azure Data Factory, I can’t cover everything. So a little further down, I will share where, how, and from whom you can continue learning about Azure Data Factory.

But first…

That’s a wrap! Woohoo 🥳


Understanding Pricing in Azure Data Factory

This post is part 25 of 26 in the series Beginner's Guide to Azure Data Factory

Congratulations! You’ve made it through my entire Beginner’s Guide to Azure Data Factory 🤓 We’ve gone through the fundamentals in the first 23 posts, and now we just have one more thing to talk about: Pricing.

And today, I’m actually going to talk! You see, in November 2019, I presented a 20-minute session at Microsoft Ignite about understanding Azure Data Factory pricing. And since it was recorded and the recording is available for free for everyone… Well, let’s just say that after 23 posts, I think we could both appreciate a short break from reading and writing 😅

(And as a side note, I’m originally publishing this post on December 24th. Here in Norway, we celebrate Christmas all day today! This is the biggest family day of the year for me, full of food and traditions. So instead of spending a lot of time writing today, I’m going to link to my video and spend the rest of the day with my family. Yay! 🎅🏻🎄🎁)


Lookups in Azure Data Factory

This post is part 24 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at foreach loops and how to control them using arrays. But you can also control them using more complex objects! In this post, we will look at lookups. How do they work? What can you use them for? And how do you use the output in later activities, like controlling foreach loops?

Lookups

Lookups are similar to copy data activities, except that you only get data from lookups. They have a source dataset, but they do not have a sink dataset. (So, like… half a copy data activity? :D) Instead of copying data into a destination, you use lookups to get configuration values that you use in later activities.

And how you use the configuration values in later activities depends on whether you choose to get the first row only or all rows.
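As a quick sketch of that difference (the activity and column names here are made up for illustration), the expressions you use in later activities look like this:

```
@activity('LookupConfig').output.firstRow.TableName   // "First row only" enabled: a single object
@activity('LookupConfig').output.value                // "First row only" disabled: an array of rows
@activity('LookupConfig').output.count                // number of rows returned (all-rows mode)
```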

But before we dig into that, let’s create the configuration datasets!


ForEach Loops in Azure Data Factory

This post is part 23 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at how to use variables in pipelines. We took a sneak peek at working with an array, but we didn’t actually do anything with it. But now, we will! In this post, we will look at how to use arrays to control foreach loops.

ForEach Loops

You can use foreach loops to execute the same set of activities or pipelines multiple times, with different values each time. A foreach loop iterates over a collection. That collection can be either an array or a more complex object. Inside the loop, you can reference the current value using @item().
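As a sketch of how this looks in the underlying pipeline JSON (the pipeline, parameter, and variable names here are made up for illustration), the items property takes the collection expression, and @item() references the current value inside the loop:

```json
{
  "name": "ForEachFile",
  "type": "ForEach",
  "typeProperties": {
    "items": { "value": "@pipeline().parameters.FileNames", "type": "Expression" },
    "activities": [
      {
        "name": "SetCurrentFile",
        "type": "SetVariable",
        "typeProperties": { "variableName": "CurrentFile", "value": "@item()" }
      }
    ]
  }
}
```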

Let’s take a look at how this works in Azure Data Factory!


Variables in Azure Data Factory

This post is part 22 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we talked about why you would want to build a dynamic solution, then looked at how to use parameters. In this post, we will look at variables, how they are different from parameters, and how to use the set variable and append variable activities.

Variables

Parameters are external values passed into pipelines. They can’t be changed inside a pipeline. Variables, on the other hand, are internal values that live inside a pipeline. They can be changed inside that pipeline.

Parameters and variables can be completely separate, or they can work together. For example, you can pass a parameter into a pipeline, and then use that parameter value in a set variable or append variable activity.
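As a sketch of that last scenario (the activity, parameter, and variable names here are made up for illustration), a set variable activity copying a pipeline parameter into a variable looks like this in the pipeline JSON:

```json
{
  "name": "InitFileName",
  "type": "SetVariable",
  "typeProperties": {
    "variableName": "FileName",
    "value": { "value": "@pipeline().parameters.InputFileName", "type": "Expression" }
  }
}
```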


Parameters in Azure Data Factory

This post is part 21 of 26 in the series Beginner's Guide to Azure Data Factory

In the last mini-series inside the series (🙃), we will go through how to build dynamic pipelines in Azure Data Factory. In this post, we will look at parameters, expressions, and functions. Later, we will look at variables, loops, and lookups. Fun!

But first, let’s take a step back and discuss why we want to build dynamic pipelines at all.

(Pssst! There are now also global parameters, woohoo! They didn’t exist when I first wrote this blog post. I’ll be adding this shortly!)

Hardcoded Solutions

Back in the post about the copy data activity, we looked at our demo datasets. The LEGO data from Rebrickable consists of nine CSV files. So far, we have hardcoded the values for each of these files in our example datasets and pipelines.

Now imagine that you want to copy all the files from Rebrickable to your Azure Data Lake Storage account. Then copy all the data from your Azure Data Lake Storage into your Azure SQL Database. What will it look like if you have to create all the individual datasets and pipelines for these files?

Like this. It will look like this:

Screenshot of nine different datasets connecting to the Rebrickable website
Screenshot of nine different datasets connecting to Azure Data Lake Storage
Screenshot of nine different datasets connecting to Azure SQL Database
Screenshot of nine different pipelines copying data from the Rebrickable website to Azure Data Lake Storage
Screenshot of nine different pipelines copying data from Azure Data Lake Storage to Azure SQL Database

Hooboy! I don’t know about you, but I do not want to create all of those resources! 🤯

(And I mean, I have created all of those resources, and then some. I currently have 56 hardcoded datasets and 72 hardcoded pipelines in my demo environment, because I have demos of everything. And I don’t know about you, but I never want to create all of those resources again! 😂)

So! What can we do instead?

Dynamic Solutions

We can build dynamic solutions!
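As a small taste of what that means (the dataset, container, and parameter names here are made up for illustration), a single dataset can take a file name as a parameter instead of hardcoding it, so one dataset definition can stand in for all nine:

```json
{
  "name": "LegoCsvFile",
  "properties": {
    "type": "DelimitedText",
    "parameters": { "FileName": { "type": "string" } },
    "typeProperties": {
      "location": {
        "type": "AzureBlobFSLocation",
        "fileSystem": "lego",
        "fileName": { "value": "@dataset().FileName", "type": "Expression" }
      }
    }
  }
}
```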


Templates in Azure Data Factory

This post is part 20 of 26 in the series Beginner's Guide to Azure Data Factory

In the previous post, we looked at setting up source control. Once we did that, a new menu popped up under factory resources: templates! In this post, we will take a closer look at this feature. What is the template gallery? How can you create pipelines from templates? And how can you create your own templates?

Let’s hop straight into Azure Data Factory!

Using Templates from the Template Gallery

From the Home page, you can create pipelines from templates:

Screenshot of the Azure Data Factory Home page, highlighting the create pipeline from template option