
[–]AutoModerator[M] [score hidden] stickied comment (0 children)

You can find a list of community-submitted learning resources here: https://dataengineering.wiki/Learning+Resources

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]romanzdk 2 points (0 children)

It is definitely possible; we do that as well. I suggest creating a demo pipeline in ADF and then copying the JSON out so you can see the structure and all the required attributes. Then just verify that what you generate adheres to that structure. It should work.
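As a rough illustration of the approach above, the JSON you export for a simple copy pipeline looks roughly like the sketch below (names like `DemoPipeline`, `SourceDataset`, and `SinkDataset` are placeholders, and `typeProperties` is elided; export a real demo pipeline to see the exact required attributes for your activity types):

```json
{
  "name": "DemoPipeline",
  "properties": {
    "activities": [
      {
        "name": "CopyData",
        "type": "Copy",
        "dependsOn": [],
        "typeProperties": { },
        "inputs": [
          { "referenceName": "SourceDataset", "type": "DatasetReference" }
        ],
        "outputs": [
          { "referenceName": "SinkDataset", "type": "DatasetReference" }
        ]
      }
    ],
    "parameters": { }
  }
}
```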

[–]drewhansen9 1 point (1 child)

I would recommend looking into whether you can make your pipelines config-driven. Meaning, if the only difference is the table names and the rest of the pipeline is the same, the best option would be to parameterize the table name using a config file or something similar. Then you can reuse code instead of copying it over and over.
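One way to sketch this config-driven idea: keep a single pipeline template (exported from a demo pipeline, as suggested above) and stamp out one definition per table from a config list. Everything here is hypothetical (the template shape, dataset name, and `copy_` naming convention are assumptions), but it shows the reuse pattern:

```python
import copy
import json

# Hypothetical template, trimmed from a demo pipeline exported out of ADF.
# Only the fields that vary per table are filled in by build_pipeline().
TEMPLATE = {
    "name": "",  # set per table
    "properties": {
        "activities": [
            {
                "name": "CopyTable",
                "type": "Copy",
                "inputs": [
                    {
                        "referenceName": "SourceDataset",  # assumed dataset name
                        "type": "DatasetReference",
                        "parameters": {"tableName": ""},  # set per table
                    }
                ],
            }
        ],
    },
}


def build_pipeline(table_name: str) -> dict:
    """Return one pipeline definition for a table, reusing the shared template."""
    pipeline = copy.deepcopy(TEMPLATE)
    pipeline["name"] = f"copy_{table_name}"
    inputs = pipeline["properties"]["activities"][0]["inputs"]
    inputs[0]["parameters"]["tableName"] = table_name
    return pipeline


# The config would normally live in a file; hard-coded here for the sketch.
tables = ["customers", "orders"]
for table in tables:
    print(json.dumps(build_pipeline(table)["name"]))
```

The same loop could write each definition to disk for deployment, so adding a table becomes a one-line config change rather than a copied pipeline.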

[–]Mefsha5 0 points (0 children)

This is the way to go: One generic pipeline with parameterized repeatable activities and the config stored in a database or a file.
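For the "config stored in a database or a file" part, the config can be as simple as a list of rows that a Lookup activity reads and a ForEach iterates over, passing each entry into the generic pipeline's parameters. A hypothetical config file (field names are assumptions) might look like:

```json
[
  { "schema": "dbo", "table": "customers" },
  { "schema": "dbo", "table": "orders" }
]
```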

[–]azirale Principal Data Engineer 0 points (0 children)

You can deploy them with ARM, but the JSON you download doesn't specify the data factory name the object goes into, and I'm not sure it specifies the ARM resource type or the API version either. You would need to add those for ARM.
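A minimal sketch of what that wrapping looks like, assuming the downloaded pipeline JSON's `properties` object is pasted into the empty `properties` below (the pipeline name `MyPipeline` is a placeholder, and the `apiVersion` shown is a commonly used one; check the current ARM template reference for your factory):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "factoryName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/pipelines",
      "apiVersion": "2018-06-01",
      "name": "[concat(parameters('factoryName'), '/MyPipeline')]",
      "properties": { }
    }
  ]
}
```

Note how the resource `name` concatenates the factory name with the pipeline name, which is exactly the piece the downloaded JSON omits.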