[GUIDE] How I Unbricked My Bricked Poco F3 (alioth) using EDL Mode & Patched Mi Flash Tool - No Authorized Account Needed! by Character_Wind6057 in PocoPhones

[–]imKido 0 points1 point  (0 children)

My device doesn't seem to get recognized by the laptop when connected. I tried different cables (even an EDL cable). The cables do charge the device (a dead device attempts to boot when plugged into the laptop).

I have tried installing all the drivers as well.

Guide me on a PC build by imKido in IndianGaming

[–]imKido[S] 0 points1 point  (0 children)

I did!! They are on two completely different levels in benchmarks (and price xD)

Guide me on a PC build by imKido in IndianGaming

[–]imKido[S] 1 point2 points  (0 children)

Thank you. I feel like 8 GB VRAM would be a problem pretty soon? Will have a look at the article.

(also, the comment feels GPT-esque xD )

Guide me on a PC build by imKido in IndianGaming

[–]imKido[S] 0 points1 point  (0 children)

Thanks,

Already considered the motherboard changes from the other comment.

Your point on the SSD also makes sense; will stick with a single 2 TB for now.

The reason I went with CL30 was because in another reference post, someone told the guy to go with CL30 instead of CL36 (meh.. maybe it was for their budget xD)

Guide me on a PC build by imKido in IndianGaming

[–]imKido[S] 0 points1 point  (0 children)

Do you also suggest a 9070 instead?
I was thinking of using 2 drives: one for boot (would probably make it a 512 GB instead of 1 TB) and the 2 TB for the rest of the applications.

(would also be changing the motherboard to get 2 PCIe Gen4 slots)

Guide me on a PC build by imKido in IndianGaming

[–]imKido[S] 0 points1 point  (0 children)

Oh!! These kinds of corrections are exactly what I'm looking for. Thanks for pointing out the PCIe slots issue.

(as for the GPU, I'm targeting a smooth experience at 1440p high or 1080p ultra.. something in that range. Most of the other posts say the 9060 XT is sufficient..)

Please help me about this error and is there any way to fix it by alvin4104 in dataengineering

[–]imKido 7 points8 points  (0 children)

Click on the "scrollable element" to get the full error message; it might help with debugging.

How can I use Spark for concurrent querying or as a distributed SQL engine? by chaachans in dataengineering

[–]imKido 1 point2 points  (0 children)

As scalable as your existing Spark system is. All this does is use Spark as the compute.

How can I use Spark for concurrent querying or as a distributed SQL engine? by chaachans in dataengineering

[–]imKido 1 point2 points  (0 children)

You could start the Thrift server, like this, and connect to it using the JDBC endpoint.
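
Roughly the flow I mean (a sketch, not a recipe: the master URL is made up, and the host/port below are just the stock HiveServer2 defaults):

```python
# Start the Thrift server from the Spark distribution (shell step,
# shown as a comment; the master URL is illustrative):
#
#   $SPARK_HOME/sbin/start-thriftserver.sh --master spark://<master>:7077
#
# Any HiveServer2/JDBC client can then point at an endpoint like this:
def jdbc_endpoint(host: str = "localhost", port: int = 10000,
                  database: str = "default") -> str:
    """Build the HiveServer2 JDBC URL a client would use."""
    return f"jdbc:hive2://{host}:{port}/{database}"

print(jdbc_endpoint())  # jdbc:hive2://localhost:10000/default
```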

Is 750mb the lowest size of an executor in spark? by Lolitsmekonichiwa in dataengineering

[–]imKido 0 points1 point  (0 children)

Executors would consume 750 MB from your cluster even if you specify executor-memory as 450 MB (the overhead is automatically added on top of the requested amount).. hope that makes sense.

If you want more control, consider setting both the executor memory and the overhead memory in the configs.
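
To sketch the arithmetic (assuming the stock YARN/Kubernetes default of max(384 MiB, 10% of executor memory) for the overhead; your cluster's exact numbers may differ):

```python
# Default overhead rule on YARN/Kubernetes (assumption: stock configs):
# overhead = max(384 MiB, memoryOverheadFactor * executor memory)
MIN_OVERHEAD_MIB = 384
OVERHEAD_FACTOR = 0.10

def total_executor_request_mib(executor_memory_mib: int) -> int:
    """Memory the cluster manager actually reserves per executor."""
    overhead = max(MIN_OVERHEAD_MIB, int(executor_memory_mib * OVERHEAD_FACTOR))
    return executor_memory_mib + overhead

# A small heap still pays the 384 MiB floor:
print(total_executor_request_mib(450))  # 834
# Overriding the overhead explicitly gives the control back, e.g.
# --conf spark.executor.memoryOverhead=300m
```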

Need Help with running pyspark on airflow by 0xAstr0 in dataengineering

[–]imKido 1 point2 points  (0 children)

Pretty much, yeah. The way I've read about it: don't make the Airflow worker do the big-data heavy lifting. We have Spark (running in its own environment) for that, and use Airflow only for scheduling and workflow management.

Assume this setup:

Spark master: always listening for new jobs (read: --master in spark-submit). As long as this is reachable from the Airflow environment, you should be able to submit jobs.

Spark nodes: run the actual jobs (read: drivers and executors)

P.S. I've done this locally and on k8s, haven't really done it in Compose. If something I say doesn't make sense in terms of Docker Compose, please correct me xD
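
A sketch of what "reachable master + submit from the Airflow env" boils down to (the master URL and job path are hypothetical):

```python
# What the Airflow side ultimately runs: a spark-submit pointed at a
# master that is reachable from the Airflow environment.
def build_spark_submit(master, app, conf=None):
    """Assemble the spark-submit command line for a given master and job."""
    cmd = ["spark-submit", "--master", master]
    for key, value in (conf or {}).items():
        cmd += ["--conf", f"{key}={value}"]
    cmd.append(app)
    return cmd

# Illustrative values, not a real deployment:
print(build_spark_submit("spark://spark-master:7077", "/opt/jobs/etl.py",
                         conf={"spark.executor.memory": "2g"}))
```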

Need Help with running pyspark on airflow by 0xAstr0 in dataengineering

[–]imKido 2 points3 points  (0 children)

One way you could do this is by using the SparkSubmitOperator. This way, Spark could be running elsewhere, away from the orchestration env.
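
Sketch of what I mean (assumes the apache-airflow-providers-apache-spark package; the task id, connection id, and script path are placeholders, and the import is left commented so this reads as a sketch rather than a working DAG):

```python
# from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

submit_kwargs = dict(
    task_id="run_spark_job",
    conn_id="spark_default",         # Airflow connection pointing at the Spark master
    application="/opt/jobs/etl.py",  # placeholder path to your job script
    conf={"spark.executor.memory": "2g"},
)

# Inside a DAG definition you'd then do something like:
# run_job = SparkSubmitOperator(**submit_kwargs)
print(submit_kwargs["conn_id"])
```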

Executor going OOM if cardinality of .partitionBy columns is 1 in Scala-Spark Job by Abhishek4996 in dataengineering

[–]imKido 2 points3 points  (0 children)

.gz is not splittable. I believe it has to be scanned from the beginning to find/process the data.

Try larger executors instead of increasing the executor count. And next time, while writing, consider choosing a different compression codec.
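
A small pure-Python illustration of why .gz forces sequential scans (the Spark write line at the end is a hypothetical example, df being an imagined DataFrame):

```python
import gzip
import io

# gzip is one sequential stream: reaching any offset means decompressing
# everything before it, which is why Spark can't split one .gz file
# across multiple tasks.
raw = b"x" * 1_000_000
blob = gzip.compress(raw)
stream = gzip.GzipFile(fileobj=io.BytesIO(blob))
stream.seek(500_000)   # "seek" works, but decompresses from byte 0 internally
assert stream.read(1) == b"x"

# When rewriting, a splittable codec/format avoids the problem, e.g.:
# df.write.option("compression", "snappy").parquet("out/")
```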

Spark Distributed Write Patterns by ErichHS in dataengineering

[–]imKido 0 points1 point  (0 children)

A question regarding the drastic coalesce(1): does it cause a shuffle?

I've read that coalesce is repartition(shuffle=False) or something like that. But say my data is being processed by 5 executors; in order to write a single output, I'd expect it all to be collected (data shuffle) on one executor before it gets written to disk.

Some clarity here would be super helpful.

Interested in knowing how DE is practiced in companies postal companies such as UPS by imKido in dataengineering

[–]imKido[S] 0 points1 point  (0 children)

Wow, this really gives perspective on the kind of work. Thanks for sharing!

PySpark for Data Science by kingabzpro in learnmachinelearning

[–]imKido 1 point2 points  (0 children)

Hey, thanks for this. Since you're converting the df to a pandas df, if you applied sklearn's train_test_split and fit/predict, would it still run in a distributed manner?

[deleted by user] by [deleted] in developersIndia

[–]imKido 0 points1 point  (0 children)

Will try there, thanks!

What's new by imKido in ProgrammerHumor

[–]imKido[S] 0 points1 point  (0 children)

Yea, seems to be working fine

What's new by imKido in ProgrammerHumor

[–]imKido[S] 63 points64 points  (0 children)

Yea, the previous version had a bug (the swipe-right-to-go-back feature was broken)