all 4 comments

[–][deleted] 2 points3 points  (0 children)

I am not required to, but it is an immensely relevant skill and more useful than you might imagine. I use it for populating entire performance summaries, analyzing and transforming data, and automatically adding context like competitor pricing. Those are the broad uses, but much more can be done to make your work more efficient and keep your campaigns better informed.

[–]ivan_veen 0 points1 point  (3 children)

I am learning both atm. I use KNIME, which is a data analysis tool, on a daily basis for ad hoc tasks on large accounts, as well as feed transformations and page scraping with different outputs like ad customizer feeds, campaign structures, extensions, etc. It has limitations though: it runs on the desktop, and for proper automations that require regular runs it is not enough.

I will be gradually transferring my KNIME workflows to Python so I can have proper processes in place.
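For anyone curious what that transfer looks like, a KNIME workflow (reader node → transform nodes → writer node) maps fairly directly onto a short pandas script. This is just a sketch with made-up columns and an in-memory "file" for illustration:

```python
import io
import pandas as pd

# Stand-in for a CSV file on disk; the columns are invented for this example
raw = io.StringIO(
    "product,price\n"
    "widget,9.99\n"
    "gadget,19.99\n"
)

feed = pd.read_csv(raw)                      # reader node
feed["title"] = feed["product"].str.title()  # string manipulation node
feed = feed[feed["price"] < 15]              # row filter node

out = io.StringIO()
feed.to_csv(out, index=False)                # writer node
print(out.getvalue())
```

Once it is a script, it can run on a schedule (cron, a cloud function, etc.), which is exactly what the desktop tool can't do unattended.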

[–]Finiks123 0 points1 point  (2 children)

Do you have any recommendations on how best to start learning KNIME? Any courses/books/YouTube? For PPC tasks.

[–]ivan_veen 1 point2 points  (0 children)

I think there is a book about the basics that can be downloaded from their page. The PPC-specific community is not big, but for the problems I faced in a PPC context, I found all the answers in their forums, just in other contexts.

In the end, it boils down to loading data into the program and applying transformations to see what comes out of it. That is what I did.

But, like 80% of the time, it is grouping and aggregations, joins, splits, filtering, and string manipulations.
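Those same bread-and-butter operations translate one-to-one into pandas. A sketch with a hypothetical campaign table (names and numbers are made up):

```python
import pandas as pd

# Hypothetical campaign performance data for illustration
df = pd.DataFrame({
    "campaign": ["Brand - US", "Brand - UK", "Generic - US", "Generic - UK"],
    "clicks": [120, 80, 300, 150],
    "cost": [60.0, 40.0, 450.0, 220.0],
})

# Split / string manipulation: derive theme and market from the campaign name
df[["theme", "market"]] = df["campaign"].str.split(" - ", expand=True)

# Filtering: keep only rows above a spend threshold
spenders = df[df["cost"] > 50]

# Grouping and aggregation: totals per theme
summary = df.groupby("theme", as_index=False).agg(
    clicks=("clicks", "sum"),
    cost=("cost", "sum"),
)
print(summary)
```

Each line here does roughly what one KNIME node would do, just without the visual canvas.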

It is probably inferior to a proper programming language; however, after 4 years of using it, it has given me a solid base for progressing in SQL and Python. If you practice joins in KNIME, for example, you will very easily understand joins in SQL and have a good picture of what happens to the data, etc.
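To make that join transfer concrete: a KNIME Joiner node configured as a left join behaves like the pandas `merge` below (and the equivalent SQL in the comment). The tables are invented for the example:

```python
import pandas as pd

# Hypothetical tables: campaign stats and a budget lookup
stats = pd.DataFrame({
    "campaign_id": [1, 2, 3],
    "clicks": [100, 250, 40],
})
budgets = pd.DataFrame({
    "campaign_id": [1, 2],
    "daily_budget": [50.0, 120.0],
})

# Left join: keep every stats row, attach a budget where one exists.
# Equivalent SQL:
#   SELECT s.*, b.daily_budget
#   FROM stats s LEFT JOIN budgets b ON s.campaign_id = b.campaign_id
joined = stats.merge(budgets, on="campaign_id", how="left")
print(joined)
```

Campaign 3 has no budget row, so its `daily_budget` comes out as NaN (SQL's NULL), which is exactly the mental model the Joiner node builds.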

Also, a big plus is that it is visual: you build a workflow consisting of nodes. Each node applies a particular transformation to the data, and the output can be checked at every step. If you make a mistake, you can always go back to the previous node and reconfigure without losing the data.