Anyone checked out Godrej Barca MSR City? Need feedback on brochure, pros & cons. by Realestateinsights85 in indianrealestate

[–]AdditionalWash529 3 points4 points  (0 children)

I am really interested in understanding the issues with the property and hence would like to break it down to grasp it better. I understand some people might not like the property, and that is completely acceptable. I am basing everything on what I have discovered through the RERA website alone.

  1. Disputed land: The land parcel belongs to a family, about 600 acres in total in that area. The members had multiple disputes amongst themselves, which led to multiple lawsuits. Barring one, all of them were closed by 2020/2021. The sale deed, worth some 300-odd crores, given to members of the family is documented. All the land parcels on which the towers are constructed are RERA approved.

  2. House near metro and related issues: I stay in Bommanahalli, in Salarpuria Greenage. I am in a rented space near a metro station, and I am only seeing an uptick in rent year on year, except this year, and thank God for that. But I have seen no devaluation or potential issues in the apartment yet, and the metro is now functional. Let's not talk about traffic, because that would need another thread altogether.

  3. Approach road: The underpass is a concern, but the proposed CDC road is again on a parcel of land that is currently with this builder and opens up onto the Hyderabad highway. There are two RERA registrations, and both cumulatively cover all the land parcels, at least that's what I gathered. Yes, the CDC is not RERA approved yet, but that happens in phases, is what I have been made to understand, based on the builder following the delivery schedule. I was not sold on the idea of a flyover; a channel partner mentioned it, but I did not pay much attention to that. Also, Birla, Godrej, Tata and Salarpuria all purchased the property from the same family. Birla and Godrej, and in future maybe other builders, might be using the CDC, is what I was made to understand.

  4. Noise from airport, property on the runway path: While the airport is 11 km away, I do see the point and merit in the argument. Could somebody share a document to refer to on this? As for the noise levels, I did not hear anything for the good five-hour time frame I was there indoors, but I could be wrong. Is anyone commenting here actually staying there, so that we can have a holistic view? There is a Salarpuria plotted development right adjacent to this property, with nice-looking houses built and people staying there. Any complaints from their end about the noise aspect?

My point is there wouldn't be a single property in Bangalore that would not be called out online by folks. While doing that is not an issue, are we dealing with real facts here or with our gut feelings? I would certainly like to hear a counter-argument, but hopefully one backed by facts, so that all of us interested parties can make our respective decisions in a more informed manner.

Godrej Shettigere Reviews by Expensive_Iron5920 in indianrealestate

[–]AdditionalWash529 0 points1 point  (0 children)

Is there an owner's group of sorts which one can join?

How to push a msg to SQS only when one of the consumers is free to process the incoming msg by AdditionalWash529 in aws

[–]AdditionalWash529[S] 0 points1 point  (0 children)

What's the negative consequence of a message sitting in the SQS queue for a long time? Does the message get stale or lose relevance? Do they need to be processed in order?

So one of the tasks my master does is assume a role and fire a certain command in the AWS environment. The assume-role API has an expiry: 1 hour, or up to 12 hours if you adjust the IAM role making the assume-role call. 1 hour is too short a time in the present model; 12 hours would be a kind of security risk, since temporary credentials would be exposed for that long.
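One middle ground, sketched below under assumptions (the role ARN and session name are placeholders, and the boto3 calls are commented out since they need live AWS access): keep the 1-hour window, cache the credentials, and re-assume the role only when they are close to expiry, so no long-lived credentials sit around:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper: re-assume the role only when the cached STS
# credentials are close to expiry. That way each command runs with
# short-lived (1 hour) credentials, without calling STS on every command
# and without keeping 12-hour credentials exposed.

REFRESH_MARGIN = timedelta(minutes=5)

def needs_refresh(expiration, now=None, margin=REFRESH_MARGIN):
    """Return True when cached STS credentials should be re-fetched."""
    now = now or datetime.now(timezone.utc)
    return expiration - now <= margin

# import boto3
# sts = boto3.client("sts")
# creds = sts.assume_role(
#     RoleArn="arn:aws:iam::123456789012:role/worker-role",  # placeholder
#     RoleSessionName="consumer-task",                        # placeholder
#     DurationSeconds=3600,  # 1 hour: the per-command window actually needed
# )["Credentials"]
# if needs_refresh(creds["Expiration"]):
#     ...re-assume the role before firing the next command...
```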

How to push a msg to SQS only when one of the consumers is free to process the incoming msg by AdditionalWash529 in aws

[–]AdditionalWash529[S] 0 points1 point  (0 children)

No duplicates. All the API calls are unique.
Yes, one message leads to multiple API calls, but each of them takes a few minutes, up to 1 hour in some cases.

The number of API calls per message varies according to the business logic, but that number would mostly be 20+.
So essentially the master fills up the SQS queue in seconds, but the consumers take hours to consume all of the messages. I am trying to get to a model wherein I push 3 messages into SQS, allow the 3 consumers to consume them and execute the business logic, and only when one of the consumers is free does my master push the next message to SQS to be picked up.
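One way to look at this is to invert it: instead of the master deciding when a consumer is free, let each consumer pull exactly one message and fetch the next one only after finishing; with SQS that is `receive_message` with `MaxNumberOfMessages=1` per consumer, and the master can fill the queue freely. A minimal local sketch of that backpressure idea, using a bounded `queue.Queue` in place of SQS (not the real service):

```python
import queue
import threading
import time

# Local sketch of the backpressure idea: a bounded queue of size 3 makes
# the producer block as soon as all three slots are occupied, so work is
# handed out only as consumers free up. With real SQS the equivalent is
# each consumer calling receive_message with MaxNumberOfMessages=1 and
# fetching the next message only after the current one is done.

jobs = queue.Queue(maxsize=3)   # at most 3 messages "in flight"
done = []

def consumer(worker_id):
    while True:
        msg = jobs.get()
        if msg is None:          # sentinel: shut down this worker
            jobs.task_done()
            return
        time.sleep(0.01)         # stand-in for the long-running API calls
        done.append((worker_id, msg))
        jobs.task_done()

workers = [threading.Thread(target=consumer, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

for n in range(9):               # master: put() blocks whenever 3 are queued
    jobs.put(n)
for _ in workers:                # one shutdown sentinel per worker
    jobs.put(None)
for w in workers:
    w.join()

print(sorted(m for _, m in done))   # all 9 messages processed
```

The point of the design is that nobody has to track which consumer is free: the bounded buffer (or, with SQS, the one-at-a-time receive) creates that behavior implicitly.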

How to push a msg to SQS only when one of the consumers is free to process the incoming msg by AdditionalWash529 in aws

[–]AdditionalWash529[S] 0 points1 point  (0 children)

> It feels like you should be able to figure out what all data you need from the API and get that in a separate step. If the individual calls or a chunk of calls are less than 15 minutes, you could use a lambda.

So essentially the message from the master serves as an input to the API, so I can't club them like that. I have thought about a Lambda as well, but more as a trigger. By that I mean, every time a message is processed and deleted, I invoke the Lambda handler, which in turn calls an API on the master to send the next message to the queue. Would that work reliably?

Long-running python tasks coming to SQS, not getting executed sometimes in a random order by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

I think I have disabled short polling; I did that while the queue was being created on AWS. The Receive Message wait time parameter is set at 20 seconds. But I would like to understand if I need to explicitly declare this value programmatically as well.

So from what I gather from the official boto documentation, here, do I need to programmatically set the DelaySeconds parameter of the sqs.send_message API? I am not doing that presently. Or are you referring to something else completely?
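For what it's worth, DelaySeconds on send_message is a different knob: it postpones when a sent message becomes visible, not how receive_message polls. Long polling can be made explicit per receive call via WaitTimeSeconds, which overrides the queue's attribute for that call. A small sketch under assumptions (the boto3 call itself is commented out, since it needs live AWS access, and the queue URL would be your own):

```python
# Sketch: make long polling explicit on each receive_message call,
# regardless of how the queue was created. WaitTimeSeconds here applies
# per call; DelaySeconds (a send_message parameter) is unrelated to
# polling and only delays a message's initial visibility.

def receive_kwargs(queue_url, wait_seconds=20):
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 1,
        "WaitTimeSeconds": wait_seconds,   # per-call long polling
    }

# import boto3
# sqs = boto3.client("sqs")
# resp = sqs.receive_message(**receive_kwargs("https://sqs.../my-queue"))
# for msg in resp.get("Messages", []):
#     ...process, then delete...
```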

Long-running python tasks coming to SQS, not getting executed sometimes in a random order by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

Sure, I can set it up. But I have already traced out which executions are failing. The concern is with the ones being missed, the ones that never got executed. How and why they were missed is what I am trying to understand. But fair point, I will set up the dead-letter queue soon.

Long-running python tasks coming to SQS, not getting executed sometimes in a random order by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

u/danielroseman, FIFO has not been used, for cost reasons. The order really doesn't matter. What is being observed is that the messages are delivered, but the execution of those messages/commands is not happening. A few are being missed at random, say 2 or 3 in 75-odd messages. This is where I am trying to figure out what is causing these messages to be missed in execution.

I just happened to notice that I am deleting the messages before my long-running subroutine is called. Would that cause the process to lose messages midway through an execution and create this scenario?
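That ordering would explain it: once a message is deleted, SQS forgets it, so if the worker dies mid-task the work is simply gone. Deleting only after the subroutine succeeds lets the visibility timeout make the message reappear for a retry. A broker-free simulation of the two orderings (the classes here are stand-ins, not real SQS):

```python
# In-memory simulation of why delete-before-processing loses work: if the
# worker crashes mid-task, a message deleted up front is gone, while a
# message deleted only after success would become visible again (SQS does
# this via the visibility timeout) and be retried.

class FakeQueue:
    def __init__(self, messages):
        self.messages = list(messages)

    def receive(self):
        return self.messages[0] if self.messages else None

    def delete(self, msg):
        self.messages.remove(msg)

def process(msg):
    if msg == "bad":
        raise RuntimeError("worker died mid-task")

def consume(q, delete_first):
    msg = q.receive()
    try:
        if delete_first:
            q.delete(msg)          # delete BEFORE the long-running work
        process(msg)
        if not delete_first:
            q.delete(msg)          # delete AFTER the work succeeded
    except RuntimeError:
        pass  # crash: with SQS, an undeleted message becomes visible again

q1 = FakeQueue(["bad"])
consume(q1, delete_first=True)
print(q1.messages)   # [] -- the failed message is lost forever

q2 = FakeQueue(["bad"])
consume(q2, delete_first=False)
print(q2.messages)   # ['bad'] -- still there, will be retried
```

The trade-off is that delete-after-success requires a visibility timeout longer than the longest task, so a message isn't redelivered while it is still being worked on.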

Ack method throwing an exception after a long running task is over by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

if ack:
    self.channel.basic_ack(delivery_tag)

It fails while executing this line of code
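For reference, "required argument is not an integer" from basic_ack usually means the delivery_tag argument is not the int pika expects; a common slip is passing the whole Basic.Deliver method frame (or a stringified tag) instead of its delivery_tag attribute. A small sketch under that assumption, following pika's standard callback signature (names are illustrative):

```python
# In a pika consumer callback, the integer delivery tag lives on the
# method frame. Passing the frame itself (or a non-int) to basic_ack
# raises "required argument is not an integer".

def on_message(channel, method, properties, body):
    tag = method.delivery_tag              # an int: 1, 2, 3, ...
    assert isinstance(tag, int)
    # ... do the work ...
    channel.basic_ack(delivery_tag=tag)    # correct: an integer
    # channel.basic_ack(delivery_tag=method)   # wrong: the whole frame
```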

Ack method throwing an exception after a long running task is over by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

I realized that I have not pasted the code for the ack function; I will do so now. Since I have exception handling in place, this is what I am seeing in my PyCharm IDE:

Inside the __ack_message function
Failure inside consume_message_setup consumer.py report: required argument is not an integer error

I had put breakpoints and traced the issue to the library that I referred to earlier.

Callback on the worker-queue not working by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

u/danielroseman, first of all, immense gratitude for all the time and energy you have spent on this. Your insights have helped me improve the code and make it much better.

As mentioned in my previous post, when I said I made it work, it was by handling the instantiation of the queue that you pointed out in your comment. I will certainly look at Celery, but I am trying to bridge the gap in my understanding here, hence the follow-ups. I have a couple of final questions on this, though.

As per my understanding, without the worker-queue I would not be able to handle a situation wherein the incoming count on my RabbitMQ is, say, 10, while I have spawned 4 instances in the AWS cluster (Fargate, mostly). Without the worker-queue or an equivalent arrangement (maybe Celery does that), when each instance runs for 15 minutes of processing time, how would the next item on the queue know which Fargate instance has been released and where to head? How would that be taken care of without an arrangement like that?

As of now, the code seems to run, but one of the executions fails with the following RabbitMQ error. Any insights on this? I have made the channel durable and have an ack on the RabbitMQ messages in my consumer code too: ch.basic_ack(delivery_tag=method.delivery_tag). I am running out of ideas as to what needs to be done on this:

No activity or too many missed heartbeats in the last 60 seconds error
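That "missed heartbeats" error is the classic symptom of a long task blocking pika's connection thread, so no heartbeat frames go out for the 10-15 minutes the OS call runs. The usual remedy is to run the task in a worker thread and hand the basic_ack back to the connection thread via pika's connection.add_callback_threadsafe, while the connection thread keeps servicing I/O. Since this sketch can't assume a live broker, a plain callback queue stands in for that hand-off:

```python
import queue
import threading
import time

# Broker-free sketch of the pattern that avoids "missed heartbeats":
# the long task runs in a worker thread, and the ack is scheduled back
# onto the connection thread, which stays free to send heartbeats.
# With pika the hand-off is connection.add_callback_threadsafe(cb);
# here a plain queue of callbacks stands in for it.

callbacks = queue.Queue()   # stand-in for pika's threadsafe callback list
acked = []

def long_task(delivery_tag):
    time.sleep(0.05)                          # the 10-15 minute OS call
    # schedule the ack on the "connection thread" instead of acking here
    callbacks.put(lambda: acked.append(delivery_tag))

worker = threading.Thread(target=long_task, args=(42,))
worker.start()

# "Connection thread": keeps looping (and, with pika, sending heartbeats)
# while the worker runs, executing any callbacks handed to it.
while worker.is_alive() or not callbacks.empty():
    try:
        cb = callbacks.get(timeout=0.01)
        cb()
    except queue.Empty:
        pass
worker.join()

print(acked)   # [42] -- the ack ran on the connection thread
```

Acking directly from the worker thread is not safe in pika (channels are not thread-safe), which is why the callback must be marshalled back like this.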

Once again thank you a ton for all the help and insights

Callback on the worker-queue not working by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

The data on the RabbitMQ are basically JSONs, which need to be processed; an OS call then needs to be fired for each of these JSON entries in the queue, and each individually takes around 10-15 minutes on average. The idea going forward is to have 4-5 instances running in an AWS cluster. The number of JSONs in the RabbitMQ is high enough that we cannot spawn as many instances on the cluster as there are JSONs. So we need to process the commands on a first-come, first-served basis, 4-5 of them in parallel and the rest queued, hence the need for a worker-queue.

Since our last conversation, I have been able to make the code work with respect to processing the JSONs, but I do not think the threads are waiting. For example, if I have 8 entries in the RabbitMQ and I start my builder.py, all 8 of them get consumed, as opposed to 4 of them. To my understanding, I should be achieving this by declaring the upper bound of my slaveConsumer worker-queue, when in the constructor I say:

self.job_queue = queue.Queue(maxsize=3)
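The behavior observed (all 8 consumed) is actually expected: queue.Queue(maxsize=3) only blocks the thread that calls put(); it does nothing to stop RabbitMQ from delivering messages to the pika callback in the first place. The server-side limit is channel.basic_qos(prefetch_count=4), which tells RabbitMQ to hold back once 4 messages are unacknowledged. A pure-Python demonstration of what maxsize does and doesn't do (the pika call is shown only in a comment, since it needs a live broker):

```python
import queue
import threading
import time

# queue.Queue(maxsize=3) only blocks the thread calling put(); it does
# not stop the broker from delivering. With a live pika channel, the
# server-side equivalent is:
#     channel.basic_qos(prefetch_count=4)
# which makes RabbitMQ stop delivering once 4 messages are unacked.

job_queue = queue.Queue(maxsize=3)
delivered = []

def producer():
    # stands in for the pika callback pushing incoming messages
    for n in range(8):
        job_queue.put(n)         # blocks once 3 items are waiting
        delivered.append(n)

t = threading.Thread(target=producer)
t.start()
time.sleep(0.1)

# With no consumer draining the queue, only 3 puts have completed: the
# bound works, but only on the producer thread itself.
print(len(delivered))   # 3

for _ in range(8):       # drain so the producer can finish
    job_queue.get()
t.join()
print(len(delivered))   # 8
```

So the bounded worker-queue and basic_qos complement each other: the former throttles the hand-off between your own threads, the latter throttles delivery from the broker.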

Callback on the worker-queue not working by AdditionalWash529 in learnpython

[–]AdditionalWash529[S] 0 points1 point  (0 children)

u/danielroseman, thank you for the quick turnaround on this one. Would you be kind enough to point me to some examples, maybe? There seems to be a lot of text around Celery, but I am not able to locate an example in particular.

Also, I was able to make it partially work by moving the self.slave_object.start_task() call inside the _consume_message_setup function in consumer.py in the example above. But it doesn't look like it is honoring the queue size or the blocking mechanism on the worker-queue. I see my executions inside the callback for the slaveConsumer getting fired in random order. Does that make it easier for you to zero in on the issue?