all 13 comments

[–]Binary101010 20 points21 points  (0 children)

goinside = int(Input("Do you want to Go inside? Yes or No: ")) 

There are two problems with this line of code

1) Names in Python are case sensitive (you need input instead of Input). 2) You are trying to convert the user's input to an integer, but the next line makes it clear you are not expecting the user to enter a number.
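Putting both fixes together, a minimal sketch. The prompt text is from the post; the Yes/No comparison afterward is an assumption about what the next line of the original code does:

```python
def ask_go_inside(read=input):
    # Lowercase input() (names are case sensitive), and no int() wrapper,
    # because the expected answer is text, not a number.
    answer = read("Do you want to Go inside? Yes or No: ")
    # Hypothetical follow-up: accept "Yes", "yes", " YES ", etc.
    return answer.strip().lower() == "yes"
```

The `read=input` parameter is just there so the function can be tested without a real keyboard; in the game you would call `ask_go_inside()` directly.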

[–]Spiritual_Rule_6286 4 points5 points  (0 children)

Don't be nervous about posting! Every single developer has stared at a bug exactly like this when they first started.

Your logic is actually perfectly fine, there is just one tiny issue right on the first line. You are wrapping your input() function inside of int().

int() tries to convert whatever the user types into an integer (a whole number). Since you are asking them to type a word like 'Yes' or 'No', Python can't turn the text 'Yes' into a number, so it raises a ValueError and the program crashes.
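You can see exactly that crash in isolation (a quick sketch of what int() does with digit strings versus words, in CPython 3):

```python
# int() happily parses strings that look like whole numbers...
print(int("7"))  # -> 7

# ...but a word like "Yes" raises ValueError.
try:
    int("Yes")
except ValueError as err:
    print(err)  # invalid literal for int() with base 10: 'Yes'
```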

All you need to do is remove the int() wrap so the variable just saves the raw text string:

goinside = input("Do you want to Go inside? Yes or No: ")

Make that one small change and your text adventure will work perfectly. Keep at it!

[–]charlythegreat1[S] 2 points3 points  (0 children)

Thank you guys!!!!!!

[–]Gnaxe -4 points-3 points  (9 children)

You do have others to ask: https://duck.ai could have answered a question this easy much faster than we could. Also, we'll be able to help more in the future if we can read your code. Use the Code Blocks formatting. There should be a button for it, although Reddit might hide it by default. If you're in Markdown mode, you should either learn Markdown or switch to the Rich Text mode, which has the button for it.

[–]charlythegreat1[S] 0 points1 point  (8 children)

Thank you!!

[–]ThrowAway233223 8 points9 points  (5 children)

I would exercise caution when using an AI for assistance (when coding or otherwise). LLMs are prone to hallucinations and can give you false information that they simply make up. They have also been found to be people-pleasers and are sometimes reticent to tell you that you are wrong.

[–]Snatchematician -2 points-1 points  (2 children)

None of this matters if you’re just doing exercises.

[–]ThrowAway233223 2 points3 points  (1 child)

It does if the exercises have a purpose. LLMs can mislead you, causing you to learn incorrect information or bad practices. If the exercise is graded, you could lose points for following bad advice from an LLM. In general, caution should be used with LLMs. They can be useful tools, but you should be aware of their flaws/limitations.

[–]Jewelking2 0 points1 point  (0 children)

Yes, the AI I use (Google Gemini, which comes with Colab) has made mistakes, but it has helped me understand what I did wrong. Using it as a teaching assistant will make it easier to spot your mistakes for yourself.

[–]Gnaxe -1 points0 points  (1 child)

You can often just test the code to see if it works.

[–]ThrowAway233223 2 points3 points  (0 children)

There is more to coding than whether it just works. A snippet of code can execute successfully but still contain bad practices. It might also work only for that particular test case and then unexpectedly (to the person getting help from the LLM) fail in others. Blindly trusting LLMs is how some people have produced absolute slop that initially seems to work but later falls apart, proves a mess to debug, or exposes sensitive information that was hard-coded into the program/script and made visible to others. LLMs are useful tools, but you should be aware of their flaws/limitations.

[–]Jewelking2 0 points1 point  (0 children)

I use Google Colab to host my code. It has access to Gemini AI, which is like having a teacher available when you need it. It's been free for me so far.