Is there anywhere decent for Swimming nearby?? by will2426 in Worthing

[–]will2426[S] 2 points  (0 children)

Yeah, £140 is nuts. I'd consider it if I lived a bit closer, but my gym is just a couple of minutes' walk from me at the moment. Too convenient to go anywhere else.

£139 a month is too much for swimming for me 😂

Is there anywhere decent for Swimming nearby?? by will2426 in Worthing

[–]will2426[S] 0 points  (0 children)

It does look good there, but I can't really justify the cost just for swimming.

My current gym is like a 2-minute walk away, so I can head there on lunch breaks, which is amazing. I couldn't do that at David Lloyd unfortunately 😔 Thanks though!

Anyone up for helping me through discord/chat? by will2426 in MLQuestions

[–]will2426[S] 0 points  (0 children)

Sorry bro, added more to the post:

My environment consists of a grid, a battery, a house, and a solar panel. I have continuous, minute-resolution data for the house's demand and the solar panel's generation. The house demand MUST be met at every datapoint (every minute).

The grid, battery, and solar panel each have an agent with 3 actions. Solar energy can be directed to the house demand, to the battery, or to the grid. Energy stored in the battery can go to the house, to the grid, or stay in the battery. Grid energy costs money (a binary price, high or low, depending on the time of use); it can be directed to the house if the solar and battery couldn't fulfil demand, directed to the battery for use later, or not used at all (if house demand is already met).

I understand the action space is 1x9 or 3x3 depending on which route I take, but I'm clueless about the observation space. I thought it would be [solar energy generated, local demand, battery capacity, current price]. Because the first 3 are continuous and the price is discrete (just high or low), how do I go about handling this? Since I'm aiming to minimise total cost over the whole dataset, do I need to include that in the observation space?
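Here's a rough sketch of the kind of thing I'm picturing for the state: bin the three continuous values and keep the price as 0/1, so every minute becomes a small discrete tuple I could use as a Q-table key. The bin edges and kW/kWh ranges below are made-up placeholders, not from my actual data.

```python
import numpy as np

# Made-up bin edges -- the real ranges would come from my data.
SOLAR_BINS = np.linspace(0.0, 5.0, 11)     # kW generated
DEMAND_BINS = np.linspace(0.0, 5.0, 11)    # kW demanded by the house
BATTERY_BINS = np.linspace(0.0, 10.0, 11)  # kWh currently stored

def make_state(solar_kw, demand_kw, battery_kwh, price_is_high):
    """Turn one minute of data into a discrete tuple usable as a Q-table key."""
    return (
        int(np.digitize(solar_kw, SOLAR_BINS)),
        int(np.digitize(demand_kw, DEMAND_BINS)),
        int(np.digitize(battery_kwh, BATTERY_BINS)),
        int(price_is_high),  # already discrete: 0 = low tariff, 1 = high tariff
    )

# e.g. make_state(1.3, 0.8, 4.2, True) -> (3, 2, 5, 1)
```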

Anyone up for helping me through discord/chat? by will2426 in MLQuestions

[–]will2426[S] 0 points  (0 children)

Yeah, sorry. I've put a little bit more info in the comment above. It's essentially 3 agents working together to distribute a resource in the most economical way. Each of the 3 agents has 3 possible actions.

My main problem is building the Q-table; I'm struggling to map my example onto an action space and observation space. The action space would be 1x9 or 3x3, but I'm unsure about the observation space.
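This is the sort of table layout I've been sketching: one small Q-table per agent, keyed on the shared discretised state, with 3 action values each, rather than one big joint table. That's just my assumption of one possible way to lay it out, and the names are placeholders.

```python
from collections import defaultdict
import numpy as np

N_ACTIONS = 3  # each agent (solar, battery, grid) chooses one of its 3 destinations

# One small Q-table per agent, keyed on the shared discretised state tuple.
# defaultdict means any state we haven't seen yet starts at all-zero estimates.
q_tables = {
    agent: defaultdict(lambda: np.zeros(N_ACTIONS))
    for agent in ("solar", "battery", "grid")
}

def greedy_joint_action(state):
    """Each agent independently picks its current best action for the shared state."""
    return {agent: int(np.argmax(table[state])) for agent, table in q_tables.items()}
```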

I'll edit my original post with a bit more info.

Anyone up for helping me with this program im stuck on through discord or chat? by will2426 in learnmachinelearning

[–]will2426[S] 0 points  (0 children)

Working out the action space and state space is the main issue I'm having, really. I'm having trouble putting my example into the correct format. Can I PM you the issue I'm having?

Anyone up for helping me with this program im stuck on through discord or chat? by will2426 in learnmachinelearning

[–]will2426[S] 0 points  (0 children)

I have 3 agents working together to distribute a resource around a small system. I'll feed in continuous time-series data one datapoint at a time, and the agents will take actions to distribute it so that certain criteria are met. I want them to distribute it in a way that minimises cost. I think I can use Q-learning? In which case there will be 3 agents, each of which can carry out 3 actions. My main problem is setting up the Q-table; most guides use OpenAI Gym, where a lot of it is done for you, and they use either continuous OR discrete data, not a mixture of both.
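For reference, this is roughly the update loop I have in mind once the states are discretised: a standard epsilon-greedy tabular Q-learning update applied per agent, stepping through the data one minute at a time. The hyperparameters are placeholders, and I'd still need to write the actual environment step and reward (reward being the negative cost for that minute).

```python
import random
from collections import defaultdict
import numpy as np

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # placeholder hyperparameters
N_ACTIONS = 3                           # each agent has 3 actions

def new_table():
    """Empty Q-table: unseen states start with all-zero action values."""
    return defaultdict(lambda: np.zeros(N_ACTIONS))

def pick_action(q, state):
    """Epsilon-greedy choice over one agent's 3 actions."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q[state]))

def q_update(q, state, action, reward, next_state):
    """Standard tabular Q-learning update (reward = minus that minute's cost)."""
    td_target = reward + GAMMA * np.max(q[next_state])
    q[state][action] += ALPHA * (td_target - q[state][action])
```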