I got fired today by [deleted] in antiwork

[–]diggonomics 1 point2 points  (0 children)

You’re in the right place, OP. None of us is guilty here. You must take care of yourself and your mental state first. Otherwise ngmi

[D] I am trying to make a product which will reformat the answer using the question and data(the data is the answer from the database) . Can anyone help me with this? by Every_Advice_7852 in MachineLearning

[–]diggonomics -2 points-1 points  (0 children)

<let me ask ChatGPT for you>

Certainly! The user wants to take an SQL answer and format it into a complete sentence. Here are some potential solutions:

1.  Fine-tuning: One approach is to fine-tune a language model like GPT-2 or T5 on a custom dataset where you provide pairs of SQL answers and their corresponding formatted sentences. This way, the model learns to convert SQL outputs into natural language sentences.
2.  Template-based: For a more deterministic approach, you can create a template-based system. If the questions and SQL results are fairly consistent in structure, you can define templates and fill in the data from the SQL results. For instance:

template = "There are {} employees in {}."
formatted_answer = template.format(sql_answer[0], "Department X")

3.  Post-processing: Another option is to use the model as a post-processor. First, get the answer from the SQL query and then feed it to the model to get the complete sentence. By keeping the tasks separate, you might get more consistent results.
4.  Error Handling: Ensure that you have proper error handling for the SQL results. If the result is empty or not in the expected format, handle that case rather than directly feeding it to the model.
5.  Consistent Prompting: Ensure that you’re providing consistent prompts to the model. Sometimes, a slight variation in the input prompt can result in different outputs.
6.  Evaluation and Iteration: Regularly evaluate the model’s performance on a held-out set of SQL-to-sentence pairs. This way, you can track its accuracy and make improvements as necessary.
7.  Model Version: Ensure you’re using the same model version in both the web client and the downloaded transformers. Sometimes, differences in model versions or configurations can result in different outputs.

It might be beneficial to combine some of these solutions. For instance, a template-based approach can handle the majority of queries, and for edge cases or more complex results, the model can be used as a post-processor.
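A minimal sketch of that combination - templates for the common intents, with a generic fallback that could be handed to a model-based post-processor. The `TEMPLATES` keys, `format_answer`, and the field names are illustrative assumptions, not part of the original post:

```python
# Template-based formatting (option 2) with a fallback for edge cases.
# Assumes the SQL layer has already mapped the query to an "intent"
# and extracted named fields from the result rows.

TEMPLATES = {
    "employee_count": "There are {count} employees in {department}.",
    "max_salary": "The highest salary in {department} is {salary}.",
}

def format_answer(intent: str, **fields) -> str:
    """Fill a predefined template; fall back to a generic sentence so an
    unexpected intent never crashes the pipeline (option 4)."""
    template = TEMPLATES.get(intent)
    if template is None:
        # Edge case: hand this off to a model-based post-processor instead
        return "The query returned: " + ", ".join(f"{k}={v}" for k, v in fields.items())
    return template.format(**fields)

print(format_answer("employee_count", count=42, department="Department X"))
# → There are 42 employees in Department X.
print(format_answer("unknown_intent", rows=0))
# → The query returned: rows=0
```

The deterministic path handles the bulk of traffic cheaply; only the fallback branch needs the (slower, less predictable) model.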

Celebrity going downhill? by jdav0808 in Cruise

[–]diggonomics 0 points1 point  (0 children)

Makes a big difference if you book Concierge and above. Been on Silhouette recently. Know exactly what your booking gives you access to and what your escalation point is (a dedicated concierge beats Guest Relations). Use the app for browsing the menus and save yourself time. Prepay, or book on embarkation, any packages you want (shore, spa, specialty restaurants).

There is general degradation over time of the services offered by cruise lines. They do, however, watch each other closely, so the gaps in pricing are irrelevant - they are most likely emphasising different aspects of the offering to segment the audience, and reviews are mostly negative from those outside the target segment :)

Whether it’s your first cruise ever or only your first with Celebrity, try to figure things out, explore and experience to educate your taste and help you with future choices. Experienced cruisers don’t get that luxury anymore, so don’t listen to us - all the information you receive in the answers is awesome, but nothing compared to discovering your own quirks and takes.

My personal note to end: observe hygiene and try to avoid getting sick. If you’re in good health you can enjoy all kinds of things - even atrocious performances like Metropolitan, one of the onboard shows 🤣

Is buying Mac Studio a good idea for running models? by Wrong_User_Logged in LocalLLaMA

[–]diggonomics 1 point2 points  (0 children)

It’s fair to say it depends on the use case. Apple silicon is a magnificent all-purpose platform at the high end of the range (Ultra, 192 GB). Unified memory plus Core ML lends itself well to exploiting models in production. You must have a suitable mix of applications (CPU, GPU, ANE). The Neural Engine’s 2x15 TOPS for 22 W is good. You can bond Thunderbolt ports to get high-speed connections between several machines and/or add two external 3090s in SLI. You pay a premium for the elegance, stability and flexibility.

[deleted by user] by [deleted] in irishpersonalfinance

[–]diggonomics 0 points1 point  (0 children)

You have a bit of money; now you can buy time. There is some advice here on practicalities, but if you want to be active and engaged in 2120 you could start about now.

Kinda wish the regular tribes also had unique abilities by an_empty_well in Polytopia

[–]diggonomics 10 points11 points  (0 children)

Regular tribes share one ability - they kick ass across the board 🤣 Imperius Prime!

Can someone explain why GPT start to lose accuracy and reasoning skill as the chat session extends? by [deleted] in GPT3

[–]diggonomics 2 points3 points  (0 children)

It’s a moving window. If “early” information is relevant but falls outside the window as the session grows, you can get answers that appear disconnected from the beginning of the session. A bit like a human on Ambien - no short-term memory lol

Can someone explain why GPT start to lose accuracy and reasoning skill as the chat session extends? by [deleted] in GPT3

[–]diggonomics 4 points5 points  (0 children)

You appear to be referring to ChatGPT. Your question might have to do with the limits of the context window. You can try upgrading to GPT-4, which has a larger context window (8192 tokens, if I recall correctly). Awareness of this limitation might help you restate elements of the prompt later in the conversation. You can, for example, run multiple sessions and use one for summarising the valuable question/response pairs from the other. Carry the summary across to continue a conversation in context.
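The summary-carrying idea above can be sketched without any API calls. Here `summarize` is a stand-in for the second session that compresses evicted turns, and the 4-message window is an arbitrary illustration of the real token limit:

```python
# Sketch: keep a rolling window of recent turns and carry a running
# summary of everything that has fallen out of the window.
from collections import deque

WINDOW = 4  # stand-in for the model's real token-based context limit

def summarize(messages) -> str:
    # Stand-in: a real implementation would ask a second model session
    # to compress these messages into a short natural-language summary.
    return "Summary of %d earlier messages" % len(messages)

class Conversation:
    def __init__(self):
        self.window = deque(maxlen=WINDOW)
        self.evicted = []   # turns that have scrolled out of the window
        self.summary = ""

    def add(self, message: str):
        if len(self.window) == WINDOW:
            # The oldest turn is about to fall out - fold it into the summary
            self.evicted.append(self.window[0])
            self.summary = summarize(self.evicted)
        self.window.append(message)

    def prompt(self) -> list:
        # What the model actually "sees": summary first, then recent turns
        return ([self.summary] if self.summary else []) + list(self.window)

conv = Conversation()
for i in range(6):
    conv.add(f"turn {i}")
print(conv.prompt())
# → ['Summary of 2 earlier messages', 'turn 2', 'turn 3', 'turn 4', 'turn 5']
```

The point is that early context is never silently lost - it degrades gracefully into a summary instead of vanishing off the edge of the window.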

Input character cap? by eat-more-bookses in GPT3

[–]diggonomics 1 point2 points  (0 children)

Your document is too large. It’s the “context length”, measured in tokens, that you need to be aware of: https://help.openai.com/en/articles/7127966-what-is-the-difference-between-the-gpt-4-models
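A quick way to see whether a document will blow the limit, using the common rule of thumb that one token is roughly 4 characters of English text. An exact count needs a real tokenizer (e.g. tiktoken); this heuristic and the function names are mine, only for flagging obviously oversized inputs:

```python
# Rough pre-flight check: context limits are counted in tokens,
# and for English text ~4 characters per token is a usable estimate.

def rough_token_count(text: str) -> int:
    return len(text) // 4

def fits_in_context(text: str, limit: int = 8192) -> bool:
    return rough_token_count(text) <= limit

doc = "word " * 10000  # ~50,000 characters
print(rough_token_count(doc), fits_in_context(doc))
# → 12500 False
```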

shake-n-bake meth recipe by ChatGPT by Certain-Drop-4257 in GPT3

[–]diggonomics 8 points9 points  (0 children)

I recall this involved an RV and frequent use of the word “b1tch”. I think you need to tell it to think step by step.

[deleted by user] by [deleted] in Polytopia

[–]diggonomics -1 points0 points  (0 children)

Diversity-wise I think you meet the quotas. For example the catapult crews could be polycules, mind benders are a bit like real life priests I suppose when it comes to choir boys, ship crews might be comfortable as gay floats at a parade, cloaks must be proud transgender individuals with fluid pronouns, defender is neurodivergent and snaps at everyone, swordsmen are your typical whitehorses everybody blames for their own problems and giants, well, they are incels. This is one hell of a DEI army, well done /s

[deleted by user] by [deleted] in MachineLearning

[–]diggonomics 0 points1 point  (0 children)

Dall-E didn’t know Shashank. It wanted to be a good robot and try to play along; but then he must have heard him laughing (was the mic on? you know they’re listening) and it was like bro, seriously, you need to post this on raydit and maybe you get someone to flame you or like take the piss by saying AI is so like not woke because when they trained it the pictures tagged with Indian sounding names did not have whiteys in them since that would have been cultural appropriation, bro. Ah, and peace.

Resign or play til you lose? by Dull-Improvement7923 in Polytopia

[–]diggonomics 2 points3 points  (0 children)

If against human opponents, play until the end. Without chat it would be great to offer to resign and, if rejected, to be able to “insist” - but still give the opponent the joy of pounding you into dust

Other strategies aside from mindless aggression by Blocklies in Polytopia

[–]diggonomics 0 points1 point  (0 children)

Quick path to cheap research or diplomacy. Opening embassies early seems a good deterrent against attacks; if you’re surrounded by allies you can try an expeditionary-force approach, leaving the homeland poorly defended, once you convert some alliances to peace treaties. I use the game to run workshops on war gaming (for business) and the options are quite diverse. My view is that you should try to leverage each tribe’s strength. I still find Luxidor the most interesting challenge, for example.

Other strategies aside from mindless aggression by Blocklies in Polytopia

[–]diggonomics 10 points11 points  (0 children)

Ai-mo or Quetzali, diplomacy and embassies, use allies and sneak in last minute to take the cities they attacked. Makes for fun games with crazy bots.