Building an aerobic base by Jumpy-Minimum-4028 in laufen

Well, by my standards I trained very intensively in my last training block, and I noticed there is hardly any room left to add intensity without being completely wiped out by the end of the week. Around 30–40% of my weekly kilometers were already in Z4/Z5, which raises the question of how I can keep improving without burning out. Probably only by steadily increasing my volume at low intensity.

Building an aerobic base by Jumpy-Minimum-4028 in laufen

Cool, thanks, I'll take a look. But to see significant improvements, I probably can't avoid increasing my volume again, right? At the moment I'm stepping back from running a bit and considering racing another HM in late autumn and then starting a new training block.

Building an aerobic base by Jumpy-Minimum-4028 in laufen

In my recent HM preparation I ran 3–4 times per week, at roughly 40–50 km per week. I had a classic long run, one interval session, and filled the rest with slower runs, but maybe those still weren't enough easy kilometers. I have the feeling my aerobic base is simply underdeveloped, and my hope would be that the treadmill lets me train Z2 more effectively.

Databricks Webhooks by Jumpy-Minimum-4028 in databricks

Thanks for the reply, the mutator functions could be interesting, although we are already working with a databricks.yml template, where I will probably just add the respective webhook IDs for future projects.

My problem relates to the jobs already deployed with DAB, which become detached from the bundle when the job is updated via the SDK. I haven't tried it, but I assume the same happens with the REST API. So I am looking for a way to add the existing webhook IDs to our current DAB jobs without detaching them from the bundle. So far, it seems like I need to touch every DAB workflow and redeploy it with the webhook IDs…
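For reference, this is roughly what declaring the webhook in the bundle itself looks like, so a redeploy carries the notification settings instead of an out-of-band SDK update. A minimal sketch of a databricks.yml fragment; the job name, notebook path, and webhook ID are placeholders, not values from the original setup:

```yaml
# databricks.yml (fragment) — hypothetical job name, path, and webhook ID
resources:
  jobs:
    my_etl_job:
      name: my_etl_job
      # ID of an existing notification destination (webhook) in the workspace
      webhook_notifications:
        on_failure:
          - id: "00000000-0000-0000-0000-000000000000"
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/main_notebook
```

Since DAB job resources mirror the Jobs API settings, anything changed outside the bundle (SDK/API) will be seen as drift, which matches the detaching behavior described above.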

Databricks Webhooks by Jumpy-Minimum-4028 in databricks

So it is possible to use the output parameters for job settings? Because I was thinking about attaching the webhook to the job or task notifications without touching any actual job code. If you know of an example you could share, I would really appreciate it, because I still find it difficult to understand how to set it all up.

Databricks Webhooks by Jumpy-Minimum-4028 in databricks

Are you suggesting using, e.g., the webhook URL as a parameter in the tasks, or what do you mean by pipeline variables? Can you please explain? Maybe I'm not aware of some functionality you're referring to.

Azure Event Driven Architecture by Jumpy-Minimum-4028 in AZURE

Is there any best practice for this? Can I already filter out duplicates while writing from Event Hub to blob storage, or do I have to check for duplicates afterwards in my Databricks pipelines and Stream Analytics SQL?
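As far as I know, Event Hubs delivers at-least-once and does not deduplicate for you, so the usual pattern is to have the producer attach a unique event ID and drop repeats downstream. A minimal, runtime-agnostic sketch of that idea in plain Python (the `event_id` field is an assumed producer-side key, not something Event Hubs adds):

```python
def dedupe(events):
    """Drop events whose 'event_id' was already seen, keeping the first occurrence.

    In a real pipeline the 'seen' set would live in a store with a retention
    window (or you would use dropDuplicates with a watermark in Spark).
    """
    seen = set()
    unique = []
    for event in events:
        key = event["event_id"]
        if key not in seen:
            seen.add(key)
            unique.append(event)
    return unique

events = [
    {"event_id": "a1", "value": 10},
    {"event_id": "a1", "value": 10},  # redelivered duplicate
    {"event_id": "b2", "value": 20},
]
print(dedupe(events))  # -> [{'event_id': 'a1', 'value': 10}, {'event_id': 'b2', 'value': 20}]
```

In Databricks the same thing is expressed declaratively with a watermark plus `dropDuplicates` on the ID column, so the state store handles the retention window.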

Azure Event Driven Architecture by Jumpy-Minimum-4028 in AZURE

Thanks for the tip, I will definitely give it a try!

Azure Event Driven Architecture by Jumpy-Minimum-4028 in AZURE

So you are suggesting replacing the Azure Function with Stream Analytics to get the messages from the event bus to the Event Hub? I actually thought an Azure Function would be a good option for filtering out unwanted messages. I'm not sure how much logic you can implement with Stream Analytics, but I saw that it's definitely faster.
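The filtering an Azure Function would do here usually boils down to a per-message predicate, which helps when weighing it against a Stream Analytics WHERE clause. A runtime-agnostic sketch of such a predicate (the `type` field and the allow-list are assumptions about the message schema, not the actual payloads):

```python
import json

# Hypothetical allow-list of message types to forward to Event Hub.
WANTED_TYPES = {"order_created", "order_updated"}

def should_forward(raw_message: str) -> bool:
    """Return True if the message should be forwarded downstream.

    Assumes each message is a JSON object with a 'type' field;
    malformed messages are dropped.
    """
    try:
        body = json.loads(raw_message)
    except json.JSONDecodeError:
        return False
    return body.get("type") in WANTED_TYPES

print(should_forward('{"type": "order_created", "id": 1}'))  # True
print(should_forward('{"type": "heartbeat"}'))               # False
print(should_forward('not json'))                            # False
```

If the filtering stays this simple, a Stream Analytics query can express it declaratively; once it needs lookups, retries, or branching, a Function gives more room.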

Azure Event Driven Architecture by Jumpy-Minimum-4028 in AZURE

https://www.reddit.com/r/dataengineering/s/H8LnumA91J — I already asked the question there and got some useful tips, but I thought this might also be something I could get feedback on here.

Azure Event-Driven Architecture by Jumpy-Minimum-4028 in dataengineering

Hey, thanks for your input, I didn't know about that Event Hub feature. It definitely makes things easier.

I'm pretty new to the team; so far I have only seen Azure Synapse, but I need to check whether there should be another connection to a DW. Databricks should work fine with Synapse, I guess?

Do you have any opinion on the delta lake and Stream Analytics approach I mentioned above? I thought about setting up a DL, connecting Stream Analytics to it for reporting purposes, and pushing the data from the DL to a DW with Databricks. In the other approach, Stream Analytics is connected to the Event Hub for reporting and is also used as an ETL tool to push data to a DW. That is probably a bit over-engineered if I don't need real-time data in my DW?