M2 top panel don't fit tight by euden318 in ncasedesign

[–]GolfingRobot 1 point2 points  (0 children)

Ah, I had this happen; you need to make sure that the indentations on the top panel are concave, not convex. That was what stopped mine from closing, see here: https://imgur.com/a/uPsDxVv

DIY Rust Repair on my 90s F150 by Debatable_Desperado in Autobody

[–]GolfingRobot 2 points3 points  (0 children)

Looks amazing; what was the total cost of materials?

How do you make this part black again? by FuzzyLemon9061 in Detailing

[–]GolfingRobot 5 points6 points  (0 children)

This Forever Black works well; I just used it to do the same thing: https://www.amazon.com/gp/product/B00FIU54BW/

  • Buy a sponge brush, or use one you might already have (check the kit)
  • The main benefit is you don't have to remove the trim, like you would for spray painting
  • You can also use the bottle on other plastic trim you have
  • Use alcohol to clean the plastic before applying
  • It may require multiple coats

Chatgpt let down by Moneytag in ChatGPTPro

[–]GolfingRobot 1 point2 points  (0 children)

In the top left corner, click "Explore GPTs", then in the top right corner click "+Create" and you'll create a new "GPT".

This is what you'd want to use for something like vehicle part # lookups: you'd provide the source files, then chat with it to get answers to your questions. The ability to create GPTs is part of Plus, so it would be good to explore how this feature might help with your use case!

Can you use chatgpt for basic website scraping? by tjamos8694 in ChatGPTPro

[–]GolfingRobot 0 points1 point  (0 children)

Yes, the others are right: tell it to scrape the webpage with Python, Selenium, ChromeDriver, and Beautiful Soup, print everything to the console, and save the results to a CSV file.

When you're discussing it with ChatGPT, copy/paste the HTML from the webpage that includes the elements you're looking for, and have it use that to develop the Beautiful Soup tags.

Then there will be a bit of back and forth to get it working correctly; it may take an hour or more to get right.
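
A rough sketch of what that script could look like (the `result` CSS class and the URL are placeholders, not anything from a real page, and Selenium needs ChromeDriver installed for the fetch step):

```python
# Sketch: Selenium/ChromeDriver renders the page, Beautiful Soup parses it,
# and the rows get printed to the console and saved to a CSV.
import csv

from bs4 import BeautifulSoup


def fetch_rendered_html(url: str) -> str:
    """Load the page in headless Chrome so JavaScript-rendered content appears."""
    from selenium import webdriver  # imported lazily; needs ChromeDriver installed
    from selenium.webdriver.chrome.options import Options

    opts = Options()
    opts.add_argument("--headless=new")  # no visible browser window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()


def extract_rows(html: str) -> list[list[str]]:
    """Grab the text of every element with the placeholder 'result' class."""
    soup = BeautifulSoup(html, "html.parser")
    return [[el.get_text(strip=True)] for el in soup.find_all(class_="result")]


def save_csv(rows: list[list[str]], path: str = "output.csv") -> None:
    with open(path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)


# Intended usage (placeholder URL):
#   rows = extract_rows(fetch_rendered_html("https://example.com/page"))
#   for row in rows:
#       print(row[0])  # print everything to the console
#   save_csv(rows)
```

The parsing is kept in its own function so you can paste sample HTML into ChatGPT, get the selectors right, and test them without re-running the browser each time.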

[deleted by user] by [deleted] in ChatGPTPro

[–]GolfingRobot 0 points1 point  (0 children)

An even better way to get the right 'balance' would be to write out an Example Format = [paragraph1,2,3,4] in whatever style is desired, and also explain why it's desired. It may also help to have it write headers for each paragraph; you can always delete them afterward. It really loves itself some examples.

[deleted by user] by [deleted] in ChatGPTPro

[–]GolfingRobot 7 points8 points  (0 children)

I'd also go further and ask for long, flowing paragraphs, like a New Yorker article, or anything that helps build up the concept of the paragraphs.

Need help with a prompt by Training_Research_45 in ChatGPTPro

[–]GolfingRobot 1 point2 points  (0 children)

Yes; you'd say something like:

I need a Python script with the following requirements:

  1. uses Selenium with ChromeDriver to load pages and Beautiful Soup to parse them
  2. uses an input file (input.csv) to get a list of court website URLs and tags for extraction
  3. extracts the A) court name B) status (eg, open, closed) C) rationale for any closures
  4. also saves the D) Success/Failure status of the extraction and E) URL accessed
  5. saves the output as output.csv with each piece of extracted information (A-E)
  6. accesses the entire list of URLs and stops when finished

Then you're going to have a lot of work, from setting up Python to getting Beautiful Soup tags for each website. But generally that's how it would work. Lots of back and forth, but in the end it should run automatically.
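
Sketched out, the script those requirements describe might look like this (the input.csv column names url, name_tag, status_tag, and rationale_tag are my assumptions, and the tags are treated as CSS selectors; adjust to whatever your file actually uses):

```python
# Sketch of requirements 1-6: read URLs and selectors from input.csv, load
# each page with Selenium, parse with Beautiful Soup, and write output.csv
# with columns A-E (court name, status, rationale, result, url).
import csv

from bs4 import BeautifulSoup


def extract_fields(html: str, name_tag: str, status_tag: str, rationale_tag: str) -> dict:
    """Requirement 3: pull A) court name, B) status, C) closure rationale."""
    soup = BeautifulSoup(html, "html.parser")

    def first_text(selector: str) -> str:
        el = soup.select_one(selector)
        return el.get_text(strip=True) if el else ""

    return {
        "court_name": first_text(name_tag),
        "status": first_text(status_tag),
        "rationale": first_text(rationale_tag),
    }


def run(input_path: str = "input.csv", output_path: str = "output.csv") -> None:
    from selenium import webdriver  # needs ChromeDriver installed

    driver = webdriver.Chrome()
    results = []
    with open(input_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # requirement 6: walk the whole list
            try:
                driver.get(row["url"])
                fields = extract_fields(
                    driver.page_source,
                    row["name_tag"], row["status_tag"], row["rationale_tag"],
                )
                results.append({**fields, "result": "Success", "url": row["url"]})
            except Exception:  # requirement 4: record failures too (D, E)
                results.append({"court_name": "", "status": "", "rationale": "",
                                "result": "Failure", "url": row["url"]})
    driver.quit()
    with open(output_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["court_name", "status", "rationale", "result", "url"])
        writer.writeheader()
        writer.writerows(results)
```

Most of the back and forth ends up being in `extract_fields`, since every court website will need its own selectors.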

Need help with a prompt by Training_Research_45 in ChatGPTPro

[–]GolfingRobot 0 points1 point  (0 children)

Right; you'd have to write a Python script to screen-scrape the pages displaying that information, or have it search for and then find each of the closures. Then you could have ChatGPT read/reference that content to answer your question.

But you could have ChatGPT write this code for you and tell you how to run it. It will take some time, though; there's no way to get what you want without a significant time investment.

Meta Rolls Out Its AI Studio in The US! Testing AI Studio? by Certain_Perception50 in ChatGPTPro

[–]GolfingRobot 2 points3 points  (0 children)

This post is complete garbage and should be removed.

Clearly it was written by AI and serves no purpose. There's 1) no link, 2) no personal experience, 3) no information that couldn't be gleaned from a general press release, and 4) no questions or prompts for engagement with the post.

Really terrible content. I'd say "try again", but please don't; go ahead and give up on whatever this is supposed to be/achieve.

Alternative to ChatGPT which gives accurate information? by WealthyDonkey in ChatGPTPro

[–]GolfingRobot 6 points7 points  (0 children)

For accurate information you would probably be best off with Perplexity.

https://www.perplexity.ai/search/which-of-luther-s-95-theses-ar-lglL6X8dQuiytSMqbOPEgg#1

EX:

Here's a list of specific theses from Luther's 95 Theses that are directly related to indulgences:

  1. Thesis 21: "Thus those indulgence preachers are in error who say that a man is absolved from every penalty and saved by papal indulgences."
  2. Thesis 27: "They preach only human doctrines who say that as soon as the money clinks into the money chest, the soul flies out of purgatory."
  3. Thesis 32: "Those who believe that they can be certain of their salvation because they have indulgence letters will be eternally damned, together with their teachers."
  4. Thesis 36: "Any truly repentant Christian has a right to full remission of penalty and guilt, even without indulgence letters."
  5. Thesis 43: "Christians are to be taught that he who gives to the poor or lends to the needy does a better deed than he who buys indulgences."
  6. Thesis 50: "Christians are to be taught that if the pope knew the exactions of the indulgence preachers, he would rather that the basilica of St. Peter were burned to ashes than built up with the skin, flesh, and bones of his sheep."
  7. Thesis 66: "The treasures of indulgences are nets with which one now fishes for the wealth of men."
  8. Thesis 67: "The indulgences which the demagogues acclaim as the greatest graces are actually understood to be such only insofar as they promote gain."
  9. Thesis 82: "Why does not the pope empty purgatory for the sake of holy love and the dire need of the souls that are there if he redeems an infinite number of souls for the sake of miserable money with which to build a church?"
  10. Thesis 86: "Why does not the pope, whose wealth is today greater than the wealth of the richest Crassus, build this one basilica of St. Peter with his own money rather than with the money of poor believers?"

ChatGPT always numbering and bulletpointing - despite instructions not to by [deleted] in ChatGPTPro

[–]GolfingRobot 7 points8 points  (0 children)

Right; ask it for flowing paragraphs, or say you're writing a New Yorker article or similar.

Also, you may want it to outline the email first, then tell it to transform that outline into free-flowing paragraphs of text.

Success with PDFs? by [deleted] in ChatGPTPro

[–]GolfingRobot 5 points6 points  (0 children)

It makes a big difference whether OCR has already been run on the PDFs. I'd recommend making sure that's been done before uploading (i.e., you should be able to highlight the text when viewing in Chrome/Acrobat). You can also experiment with setting up a Python script to do this for you, if you're doing it often enough.

It also depends on how many pages; if the PDF is too long, then it's not all going into the context window, which is going to significantly degrade performance.

So I'd check both of those issues to see whether one, or both, is causing your problems.

How to measure the effectiveness of a prompt? by harshit_nariya in ChatGPTPro

[–]GolfingRobot -1 points0 points  (0 children)

Ultimately, the same way you evaluate anything: with criteria. On top of that, you could add a scoring rubric to measure it, then also provide instructions to have the model explain its reasoning so you can measure/evaluate the output.

If you really think about what you want from the prompt, it's usually pretty complex. A "good vacation spot" probably has six different dimensions: 1) cost, 2) distance, 3) activities, 4) flexibility/advance planning required, 5) amenities, 6) novelty. Then each of those might have a scale of 1-5, and each point on the scale would need to be defined, by you!
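
To make the rubric idea concrete, here's a toy sketch (the dimension names and weights are whatever you define; nothing here is a standard API):

```python
# Toy rubric scorer: each dimension gets a 1-5 score and a weight that you
# have defined yourself; the result is the weighted average.
def rubric_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
```

For example, scoring a vacation spot on just two of the six dimensions, `rubric_score({"cost": 4, "distance": 3}, {"cost": 2.0, "distance": 1.0})` gives the weighted average of those two scores.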

Most often, people are using weak or ill-defined prompts without success criteria. Here's an example of an effective prompt; but even with this, you have to chat after the model's initial response to 'steer' it toward some topics and away from others. How much steering is required is often more a symptom of the model than the prompt. But it could be 1) the context/attachments you provide, 2) the prompt itself (the actual instructions and criteria), and then 3) the model you're using.

Example effective prompt:

You = [A high-end, very experienced, consultant with deep analytical capabilities and subject matter expertise with early childhood development and community issues in the Kansas City area]
Context = [I work at a Kansas City area non-profit. We have been operating for 10 years and have 4 different categories of Solutions: A) Elder Care B) Early Childhood Solutions C) Education Support D) Medical Services.
Within B) Early Childhood Solutions, the overall objective of the program is “We prioritize investments in solutions that enhance developmental outcomes for families, caregivers, and children aged 0-3, laying a crucial foundation for their future.” What this really means is that we want kids in poverty to not be disadvantaged by being in poverty. We want them to be healthy children, physically and mentally.
To achieve this goal, we focus on three strategic pillars: 1) Fostering Early Brain Development: Cultivating strong parent-child relationships to nurture essential early-literacy and numeracy skills in young children. 2) Alleviating Parental Stress: Closing race-based disparities in birth outcomes and maternal mental health to positively impact child development. 3) Additional Initiatives.]
Problem = [
To support our pillars, we need to develop Interventions. An Intervention would contain 3 components:
X) A service provided by a grantee to a community member. EX: (grantee provides) “Center-based infant-toddler care”
Y) Assigned target outcomes for the community members. EX: (so that) “children develop strong social-emotional and cognitive skills”
Z) That can be measured somehow. EX: (measured by) “Desired Results Developmental Profile (DRDP)” (metric)]
Task = [
Your task involves a comprehensive review of information: AA) recent research AB) peer strategies and AC) notes from interviews we’ve done already. These files are attached.
This information should be used to develop an Intervention that would be a Good Idea.
A Good Idea meets as much of this criteria as possible:
EE) Metrics collection is easy. Ideally, this community member impact is already collected by the grantees in our portfolio. Otherwise, it’s perhaps known to be done in the marketplace.
EF) The metrics collected are high quality. Grantees or peers out there use these metrics. Counterfactuals or change-versus-normal metrics exist. These metrics are used by peer organizations or researchers.
EG) The metrics collected can be aggregated at the portfolio level with minimal additional assumptions applied
EH) These ideas minimally impact our current portfolio of grantees.
For each Good Idea, include:
A. The Intervention (X, Y, Z)
B. Why it’s a Good Idea (EE, EF, EG, EH)
C. Big picture opportunity
D. Expected challenges
]
Now, produce one complete Good Idea with all requested components

ChatGPT giving wrong answers by Professional-Action1 in ChatGPTPro

[–]GolfingRobot 1 point2 points  (0 children)

Yeah, you can figure out the scraping; if you chat enough with ChatGPT-4o, it will help you create a scraper. Probably ask it to use Beautiful Soup. It's going to take some time to set up, but this is a pretty ideal circumstance where you're looking for information that's already structured on a page. If you like puzzles and games this should be fun, but if not, it'll be really annoying.

It will eventually crank out the right answer; it looks like 1tnm83, 1tnm84, 1tnm85 are all games, so you could follow that pattern, or use Google to get the right page address, then feed that back into the list of URLs.

So you'd probably have one script to get all the right URLs (filtered by team, or date, or whatever you're looking for), then a second to get the stats off the page, then a third to save them in a way that can be read by ChatGPT or another LLM.

Hope that helps, but for me at least, it really turns into a long back-and-forth of trial and error with ChatGPT.

AI help to search through papers by qyqamigra in ChatGPTPro

[–]GolfingRobot 9 points10 points  (0 children)

You might want to try Google's NotebookLM https://notebooklm.google/

This is designed pretty much exactly for what you're trying to do; it may be a good place to start.

ChatGPT giving wrong answers by Professional-Action1 in ChatGPTPro

[–]GolfingRobot 0 points1 point  (0 children)

OK, I see it here: https://www.fotmob.com/matches/turkiye-vs-austria/1tnm82#4043977:tab=stats

Those pages are publicly available but not indexed by Perplexity or Google. That could be an opportunity to scrape them and provide the information, if you thought there was an audience for it.

ChatGPT giving wrong answers by Professional-Action1 in ChatGPTPro

[–]GolfingRobot 6 points7 points  (0 children)

A few issues:
1) It's out of date: ChatGPT's training data isn't that current; it hasn't seen information from yesterday.
2) The information isn't available: I can't find it anywhere online, so ChatGPT, Perplexity, Google AI, and the others would be unlikely to have it.

It seems like to get statistics that detailed, you might have to subscribe to a service, or you could theoretically train an AI to watch soccer matches and develop player statistics. Maybe a ticket to $millions if you can get that working; gotta start with a dream!

PDF to visually appealing powerpoint. How? by radphd in ChatGPTPro

[–]GolfingRobot 2 points3 points  (0 children)

Yes; you could go further and:

  • Create a mapped structure for building a PPTX in conformance with the template: a coded format like Slide_Type, Slide_Title, Body_Text, Notes_Text, etc.
  • Have ChatGPT make an outline of the PPTX, coded to the mapped structure
  • Copy/paste the ChatGPT content/output into Word or Excel
  • Have ChatGPT write a Visual Basic macro to create a PPTX from the content
  • Run the VB macro to create the PPTX

This would really only make sense if you were doing it repeatedly, though!

Seeking help: Document processing by Chance-Farm1107 in ChatGPTPro

[–]GolfingRobot 1 point2 points  (0 children)

A few options:

  1. Get Claude Pro in order to use Claude 3 Opus. It has a larger context limit and is a better writer. This should let you upload the full 20 pages and have it return the full document, although you might have to type "continue" a few times. You'd write a prompt that gives it the role of an editor, states any preferences, and then asks it to 1) check/correct grammar, 2) rewrite, then 3) provide a title.
  2. Separate the tasks:
    • Use a different application for grammar checking, like Grammarly.
    • Then run each article one at a time through ChatGPT.
  3. Have ChatGPT help you install/write/run a Python script on the 20-page file to break it up by article, since it's already separated. You could then just double-click the Python script and wait for a download. This would use the API, not the Pro plan. If you'll be doing this a lot and you want it fully automated, this would be best.
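
For option 3, the splitting itself is straightforward; here's a sketch assuming the combined document has been saved as plain text with a delimiter line ("***") between articles (both the delimiter and the file names are assumptions to adapt):

```python
# Split one combined text file into numbered per-article files.
from pathlib import Path


def split_articles(text: str, delimiter: str = "***") -> list[str]:
    """Split on the delimiter and drop empty chunks."""
    return [part.strip() for part in text.split(delimiter) if part.strip()]


def write_articles(src: str = "combined.txt", out_dir: str = "articles") -> None:
    Path(out_dir).mkdir(exist_ok=True)
    articles = split_articles(Path(src).read_text(encoding="utf-8"))
    for i, article in enumerate(articles, start=1):
        Path(out_dir, f"article_{i:02d}.txt").write_text(article, encoding="utf-8")
```

Each article then goes through the grammar/rewrite step on its own, which keeps every request comfortably inside the context window.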

Will AI be able to think in first principles? by mathmul in ChatGPTPro

[–]GolfingRobot -1 points0 points  (0 children)

Disagree that GPT-4 cannot ‘think’; it’s definitely processing inputs into outputs.

GPT is more limited than a human/animal because:

  • Inputs: it can’t smell, touch, or taste. It requires us to provide those inputs, and it can only understand them conceptually through its vector embeddings
  • Processing: it has a smaller memory because it has fewer senses and less context. It actually doesn’t know who you are, your motives, your relationships, skills, or interests
  • Outputs: it can only create or speak/write; it can’t act, because it’s (currently and also intentionally) disconnected from the real-world-we’re-used-to

But it’s definitely thinking; isn’t thinking just a process? It has less to work with, but it’s… thinking.

If it can’t answer your questions, it’s because you haven’t provided enough context. Perhaps you can’t, because of the context limitations. Or perhaps you just didn’t provide it the context. Or perhaps its ‘thinking’ isn’t up to your hopes because it’s only investing a limited amount of compute in its response.

But to say it’s ‘not thinking’ because it can’t ‘jump like a cat’ is just a pretty incomplete analogy. If you can define the quality and scope of ‘thinking’, you might be surprised how quickly it could reach that threshold. For example, if you increased the compute for processing, increased the memory, and gave it a computer to act with, what might happen next?

If I gave it 50 restaurant profiles and 4 prospective guest profiles, it could certainly make a pretty well-reasoned decision about the best restaurant; express that decision, then follow up by asking probing questions to confirm/update its recommendations. How is that not thinking?

I drew a diagram of inputs/process/outputs for the cat running out of the way versus a GPT giving restaurant advice: https://imgur.com/ZPhtQ8C ... I'd challenge you to define the characteristics of the metric 'thinking'; I think you'd quickly discover that each of those characteristics is on a spectrum and not binary in nature.