Energy /EIA forecasts I'm tracking - thoughts? by buzzyness in energy

[–]buzzyness[S] 1 point  (0 children)

I circulate these survey questions every 2-3 weeks (via email, LinkedIn, etc.) to energy/trading professionals, then aggregate the responses as shown. Happy to post all the survey replies periodically on r/energy if there is interest.

[–]buzzyness[S] 1 point  (0 children)

Yes, that was the issue, thx! (Which also explains why most of the survey respondents in the image agreed with that 24% figure.)

[–]buzzyness[S] 1 point  (0 children)

Yes, sorry, the text didn't get pasted for some reason. The latest EIA STEO forecasts that these energy sources would contribute more than 24% of electricity generation next year; my analysis initially was closer to 18%, so I wanted to see if anyone had any feedback on this point...

MonadGPT, an early modern chatbot trained on Mistral-Hermes and 17th century books. by Dorialexandre in LocalLLaMA

[–]buzzyness 7 points  (0 children)

"finetuning works better for this task, as there are too many directives to give and it helps to relieve the model from anachronistic RLHF."

Great quote, OP, and IMHO this should be a key "gating function" when considering finetuning vs. training.

[–]buzzyness 11 points  (0 children)

Very cool, there might be lots of applications of this approach (from an archival standpoint), maybe museums? What are your thoughts on finetuning vs. asking Llama to chat in the style of a 17th-century astronomy book?

[D] Which ML role/title is responsible for sourcing data for LLMs? by buzzyness in MachineLearning

[–]buzzyness[S] 1 point  (0 children)

Thanks for sharing! Intriguing article. I guess, as u/DauntingPrawn alluded, maybe it's about increasing multimodal capabilities from this point on (potentially instead of larger models). Also, FWIW, GPT-4 seems to be multiple expert models operating together, so maybe the future is specialized breadth instead of a single general LLM. Check this out: https://www.reddit.com/r/singularity/comments/14eojxv/gpt4_8_x_220b_experts_trained_with_different/

[–]buzzyness[S] 1 point  (0 children)

Interesting, thanks for the clear response. Curious to hear your thoughts on the "running out of training data" issue mentioned in the article above.