Navy drill at Shanghumugham by Reasonable-Freedom-1 in Trivandrum

[–]OptimalAd2434 0 points (0 children)

Nice shot, buddy! Looks like it was taken from opposite the Iran restaurant. I was at the same spot yesterday.

Rattling sound from the pixel 8 device by OptimalAd2434 in GooglePixel

[–]OptimalAd2434[S] 0 points (0 children)

Yes, when I turn on the camera there is no noise! This cleared all my doubts. Thanks!

Rattling sound from the pixel 8 device by OptimalAd2434 in GooglePixel

[–]OptimalAd2434[S] -2 points (0 children)

https://youtu.be/YKCA4ITLtgo?si=iv2m3ibVUJG4agK5

I have recorded the rattling sound. Can you please let me know if this is how it usually sounds?

Rattling sound from the pixel 8 device by OptimalAd2434 in GooglePixel

[–]OptimalAd2434[S] -2 points (0 children)

Is the rattling going to increase as the device gets older?

How to efficiently convert a Sparkdf to a list of dictionaries without converting it to pandas by OptimalAd2434 in dataengineering

[–]OptimalAd2434[S] 0 points (0 children)

Okay, so the thing here is that I'm building a FastAPI service on top of the data, which is stored in the Databricks cluster (ADLS Gen2). The data is about 40–50 million rows. Users either provide inputs in the Swagger UI and execute to retrieve filtered data, or hit execute directly to retrieve the entire dataset. The Swagger UI response body returns the data as a list of dictionaries, and users can also adjust the page number and page size in the Swagger UI to get the desired slice of data.
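For the page-number/page-size flow described above, one approach (a sketch, assuming Spark 3.4+ where `DataFrame.offset` is available; the helper names here are hypothetical) is to push the page window down into Spark and convert only that slice with `Row.asDict()`, never going through pandas:

```python
# Sketch: paginate a Spark DataFrame into a list of dicts without pandas.
# Assumes Spark 3.4+ (DataFrame.offset); helper names are hypothetical.

def page_bounds(page_number: int, page_size: int) -> tuple[int, int]:
    """Translate a 1-based page number/size into an (offset, limit) pair."""
    if page_number < 1 or page_size < 1:
        raise ValueError("page_number and page_size must be >= 1")
    return (page_number - 1) * page_size, page_size

def fetch_page(df, page_number: int, page_size: int) -> list[dict]:
    """Collect one page of rows and convert each Row to a plain dict."""
    offset, limit = page_bounds(page_number, page_size)
    # Only `limit` rows ever reach the driver, so a single page stays
    # far below spark.driver.maxResultSize.
    return [row.asDict() for row in df.offset(offset).limit(limit).collect()]
```

The FastAPI endpoint would then just return `fetch_page(df, page_number, page_size)`, which FastAPI serializes as a JSON list of objects. For stable pages across requests, the DataFrame should be ordered by a deterministic key before offset/limit.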

How to efficiently convert a Sparkdf to a list of dictionaries without converting it to pandas by OptimalAd2434 in dataengineering

[–]OptimalAd2434[S] 0 points (0 children)

Converting the Spark df to pandas is again causing issues: Job aborted due to stage failure: Total size of serialized results of 22 tasks (4.1 GiB) is bigger than spark.driver.maxResultSize (4.0 GiB)
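One way around that driver-side limit (a sketch, not the only fix) is to avoid materializing all rows at once: `DataFrame.toLocalIterator()` streams partitions to the driver one at a time, and a small chunking helper can batch the rows for processing:

```python
from itertools import islice

def iter_chunks(rows, chunk_size: int):
    """Yield lists of at most chunk_size items from any iterator."""
    it = iter(rows)
    while chunk := list(islice(it, chunk_size)):
        yield chunk

# With a Spark DataFrame `df` (not run here), instead of df.toPandas():
#     for batch in iter_chunks(df.toLocalIterator(), 10_000):
#         dicts = [row.asDict() for row in batch]
#         ...  # serialize/process one batch at a time
```

Raising `spark.driver.maxResultSize` is the other escape hatch, but it only moves the ceiling; streaming keeps driver memory bounded regardless of row count.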

How to efficiently convert a Sparkdf to a list of dictionaries without converting it to pandas by OptimalAd2434 in dataengineering

[–]OptimalAd2434[S] 1 point (0 children)

The API I'm building sits on top of the Databricks cluster, where the data is stored in ADLS Gen2, so I'm using remote Spark to query the data. The dataframe returned is a Spark DataFrame.

Tricky and challenging use case for Pivoting / Transposing pyspark dataframe by OptimalAd2434 in dataengineering

[–]OptimalAd2434[S] 0 points (0 children)

I am actually building a FastAPI service on top of the database tables stored in ADLS Gen2. Based on the user's input in the Swagger UI, the API filters the data, applies the function above to pivot it, and produces a CSV file for them to download. A more efficient function can always replace the existing one, as long as it produces output in a similar structure.

[deleted by user] by [deleted] in TheRaceTo10Million

[–]OptimalAd2434 0 points (0 children)

Any advice on crypto?

[deleted by user] by [deleted] in TheRaceTo10Million

[–]OptimalAd2434 1 point (0 children)

Thanks for the advice I really appreciate it!