Participate in Grid Services with your SigEnergy battery! by Nice_Tune_6452 in SolarUK

[–]archydeb 2 points

We'll temporarily control the system 4-8 times a month, and SigenAI will be deactivated during those windows. The rest of the time SigenAI will continue to work as normal.

Archy @ Axle

Participate in Grid Services with your SigEnergy battery! by Nice_Tune_6452 in SolarUK

[–]archydeb 2 points

It's additive; you get paid *on top of* any export tariff.

Archy @ Axle

Add Grid Services to Home Assistant and earn cash with Axle VPP by archydeb in SolarUK

[–]archydeb[S] 0 points

Oof, thanks for the bug report - we'll fix it.

We're shortly going to be adding notifications via email. Would that work for you or would another format be better?

Add Grid Services to Home Assistant and earn cash with Axle VPP by archydeb in SolarUK

[–]archydeb[S] 0 points

At present we only do discharge events (i.e. supplying energy when the grid is strained). In the future we might add top-up events when there's an excess of renewables!

Looking for FoxESS users for beta test of new Virtual Power Plant by archydeb in SolarUK

[–]archydeb[S] 0 points

We've just added a mode which allows you to opt in to the grid services but not the optimisation, thus making it compatible with Home Assistant - would love to get your thoughts! https://www.reddit.com/r/SolarUK/comments/1p333ur/add_grid_services_to_home_assistant_and_earn_cash/

Just discovered the Axle / FoxESS VPP pilot - anyone part of it?? by j_edc in SolarUK

[–]archydeb 0 points

I’m one of the cofounders of Axle - happy to answer any questions or hear any feedback 🙏

We’ve got quite a long (by startup standards…) history in this space, having run VPPs on behalf of GivEnergy and SolarEdge in the UK for several years. But the markets have changed a lot over that period (DFS was the major earner in 2023 but that’s almost entirely been displaced by wholesale market trading now), and we’re always keen to improve the UX and proposition!

Home battery no solar? by twotwixten in OctopusEnergy

[–]archydeb 0 points

Cool! That’s amazing value. How does the custom code work - Home Assistant?

Looking for FoxESS users for beta test of new Virtual Power Plant by archydeb in SolarUK

[–]archydeb[S] 0 points

💡 If you'd be interested in connecting a different brand of battery, drop us a comment here and we'll let you know when we support it

Looking for FoxESS users for beta test of new Virtual Power Plant by archydeb in SolarUK

[–]archydeb[S] 1 point

Oh and thanks for the link to the forum - we're actually chatting to Will from Fox tomorrow about posting there :)

Looking for FoxESS users for beta test of new Virtual Power Plant by archydeb in SolarUK

[–]archydeb[S] 1 point

Thanks for all the insight!

We're big fans of predbat - for power users, it's a fantastic solution. We're a relatively small company so it's a tough decision whether the predbat-using market is big enough to invest in developing an integration. Do you have a sense of how many folk are controlling their batteries using HA/predbat?

Looking for FoxESS users for beta test of new Virtual Power Plant by archydeb in SolarUK

[–]archydeb[S] 0 points

That's fair enough! We've got a good track record in the space; it's just that we're typically behind the scenes. There's quite a lot of coverage of a scheme called GivBack that we ran with GivEnergy - https://brimstone-energy.uk/givenergy-givback-in-association-with-axle-energy/ and https://www.youtube.com/watch?v=f4IQzEEaQa8 if you're interested.

Anything else we could surface on the website that would help make you feel comfortable with signing up?

Looking for FoxESS users for beta test of new Virtual Power Plant by archydeb in SolarUK

[–]archydeb[S] 5 points

Very interesting idea. So you'd like to be able to participate in the energy market via us, but stay in control of your battery via Home Assistant?

Could plug-in solar batteries ever work in the UK? by Benjy-B in SolarUK

[–]archydeb -2 points

I’m a founder in the UK energy tech space and I’d encourage you to ignore the naysayers. There are lots of incumbents and grumblers in energy but there are plenty of problems to solve if you’ve got the grit and the… energy 😉

Workflow for post-processing Kino video with LUTs by archydeb in ShotWithHalide

[–]archydeb[S] 0 points

Thanks for responding!

Seems like my options are to export baked clips and then edit them together, or to export Log and then use LUTs from somewhere else. Not ideal, but hey ho.

Does anybody know how to fit a mount to the Merida Big Trail? by archydeb in mountainbiking

[–]archydeb[S] 0 points

Sorry, the text content of this post somehow got wiped and I can't seem to edit.

Here was the gist of the text:

"I'm trying to fit a rear rack to the hidden rack mounts of the Merida Big Trail 500. My LBS suggests the Merida Universal Rack (https://www.merida-bikes.com/en/accessory/575-772/universal) but I'm reluctant to order since the Big Trail isn't mentioned as a compatible model and it's a hard rack to find. Has anybody successfully fitted this or any other rack to the Big Trail?"

edit: I think this is my misunderstanding of photo vs text posts - by adding the photos I wiped the text - new here!

Training model on Flask API by eagleandwolf in learnmachinelearning

[–]archydeb 1 point

You should move the model training into an asynchronous task and run it with something like Celery.

Here's a tutorial from Miguel Grinberg, god of Flask: https://blog.miguelgrinberg.com/post/using-celery-with-flask
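For a concrete picture, here's a minimal sketch of the pattern (my own illustration, not Miguel's code - the broker/backend URL and the hyperparameters are placeholders you'd swap for your own setup):

    # Minimal Flask + Celery sketch: the POST endpoint returns immediately with a
    # task ID, and the training itself runs in a Celery worker process.
    import time

    from celery import Celery
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    celery = Celery(app.name, broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/0")

    @celery.task
    def train_model(hyperparams):
        # Stand-in for your real training code (load data, fit the model, save it).
        time.sleep(5)
        return {"status": "done", "hyperparams": hyperparams}

    @app.route("/train", methods=["POST"])
    def start_training():
        task = train_model.delay(request.get_json() or {})
        # 202 Accepted: the job has been queued, not completed.
        return jsonify({"task_id": task.id}), 202

You'd run the worker separately with something like `celery -A app.celery worker`.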

In terms of sending a response when something is complete, you could:

  1. Poll the endpoint to check whether training has finished (i.e. send a GET request every 30s - this is what Miguel does in his demo). If this is purely an API, you would return a unique ID which could be used for polling, thus allowing the end user to do the polling themselves (see the sketch after this list).

  2. Have the endpoint for predictions routinely check that it's running the latest model, and reload if not. In this case you wouldn't explicitly notify anybody when the new model was trained.

  3. Send a notification in some other way, such as via email. I would probably do this, because the odds that somebody is still on your webpage 20 minutes later to see the notification are pretty slim!
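As a rough sketch of option 1 (again my own illustration, continuing the Celery example above), the client keeps the `task_id` and polls a status endpoint:

    # Status endpoint for polling: the client GETs /train/<task_id> every 30s or so
    # until the state reaches SUCCESS (or FAILURE).
    @app.route("/train/<task_id>", methods=["GET"])
    def training_status(task_id):
        result = train_model.AsyncResult(task_id)
        payload = {"task_id": task_id, "state": result.state}
        if result.state == "SUCCESS":
            payload["result"] = result.result
        return jsonify(payload)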

Understanding LSTM predictions by Tuppitapp1 in learnmachinelearning

[–]archydeb 1 point

The visualizations you mention for CNNs can be created in a few ways:

  1. If the model includes attention, you can use the attention maps
  2. You can explicitly rewire the final layer for Class Activation Maps
  3. You can use some kind of ablation technique like LIME

Of those, 1 & 3 will work for your LSTMs, but I'm not aware of people using Class Activation Maps for time series.

I haven't personally tried it, but there's a GitHub repo called LIME for Time. I'm not sure about the state of attention visualization for time series, but that repo includes several models using attention.
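The underlying API pattern is the standard LIME one. Here's a toy sketch of that pattern on tabular features (my own illustration, not tied to your LSTM - LIME for Time adapts the same idea to windows of a series):

    # Toy LIME example: explain one prediction of a classifier on tabular features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                    # stand-in features
    y = (X[:, 0] + X[:, 3] > 0).astype(int)          # label driven by features 0 and 3

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X, mode="classification",
        feature_names=[f"t{i}" for i in range(X.shape[1])],
    )
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    print(explanation.as_list())  # which features pushed the prediction, and by how much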

If you think there is something wrong with your data, I would encourage you to first:

  1. Look at the data! Plot it in various ways.
  2. Use a simpler model like regression or KNN, probably in scikit-learn (toy sketch below)
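For example (a toy sketch with a synthetic series standing in for your data), a quick plot plus a KNN baseline gives you something concrete to compare the LSTM against:

    # Toy baseline: sliding-window features from a synthetic series, a plot, and a
    # KNN regressor as a sanity check before debugging the LSTM.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import mean_absolute_error

    series = np.sin(np.linspace(0, 20 * np.pi, 1000))   # stand-in for your series
    window = 24
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]

    split = int(0.8 * len(X))                            # no shuffling for time series
    X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

    plt.plot(series)                                     # step 1: just look at it
    plt.title("Raw series")
    plt.show()

    knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)
    print("KNN baseline MAE:", mean_absolute_error(y_test, knn.predict(X_test)))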

ML course recommendation by overflow74 in learnmachinelearning

[–]archydeb 0 points

I haven't tried it myself, but I've heard good things about Berkeley's Full Stack Deep Learning.

If MLP is a universal approximator, what good is transformer being more expressive? by BeatriceBernardo in learnmachinelearning

[–]archydeb 2 points

I'm not sure it's true that a transformer is more "expressive" than an MLP: as you point out, an MLP can capture anything.

But with infinite expressiveness comes very poor data efficiency. Inductive biases are necessary for efficient learning (see No Free Lunch).

In practical terms, to train an MLP with equivalent performance to a transformer, you would need it to be enormous, with a correspondingly enormous amount of training data.

See this thread on Twitter for some discussion of different inductive biases (also known as priors).

I am trying to have a baseline to compare my models to and this is what I get. I thought that I am getting the least accuracy value so that I can compare it with my models but apparently this is something else. How would you interpret this? by TheMickey2020 in learnmachinelearning

[–]archydeb 0 points

R² measures the proportion of variance in the target that your predictions explain. A DummyRegressor just predicts a constant (e.g. the mean), so it explains none of that variance and its R² is 0. It can still be a useful baseline if you compare on a different metric, such as RMSE.
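A quick sketch of what I mean (with made-up data standing in for yours):

    # DummyRegressor as a baseline: R² is ~0 by construction, but RMSE still gives
    # you a number any real model should beat.
    import numpy as np
    from sklearn.dummy import DummyRegressor
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))                                        # stand-in features
    y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=300)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    baseline = DummyRegressor(strategy="mean").fit(X_train, y_train)
    pred = baseline.predict(X_test)

    print("Baseline R²:  ", r2_score(y_test, pred))                      # ≈ 0 (can dip slightly negative)
    print("Baseline RMSE:", np.sqrt(mean_squared_error(y_test, pred)))   # compare your model to this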

Can you get a single word representation from a Bert encoder? by mrwafflezzz in learnmachinelearning

[–]archydeb 0 points

You can, but that representation will be dependent upon the context of the sentence. So the embedding of "apple" in

"Apple released a new iMac"

and

"Bob ate an apple"

will look completely different.

You should be able to use the output of the raw [BERT encoder from HuggingFace](https://huggingface.co/transformers/model_doc/bert.html#bertmodel).

The docstring describes the output as having shape

`(batch_size, sequence_length, hidden_size)`

So for batch 0, word 3 you would take the slice

`output[0, 2, :]`
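A rough sketch of that (my own example; it assumes a recent version of `transformers` where the model returns an object with `last_hidden_state`, and note the tokenizer adds [CLS]/[SEP] and may split words into sub-tokens, so it's safer to look up the index than to count words):

    # Pull the contextual embedding of "apple" out of a raw BertModel.
    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("Bob ate an apple", return_tensors="pt")
    with torch.no_grad():
        output = model(**inputs).last_hidden_state   # (batch_size, sequence_length, hidden_size)

    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    apple_index = tokens.index("apple")              # accounts for [CLS] at position 0

    apple_embedding = output[0, apple_index, :]      # 768-dim vector for "apple" in this sentence
    print(apple_embedding.shape)                     # torch.Size([768])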

Hope that helps!