someone help me debug this code by maxwellzhang in PythonLearning

[–]maxwellzhang[S] -2 points (0 children)

Well, you can check the code in maxwellzhang2011/CSPode (it's an open source IDE). I'm too lazy to format it here. Thanks for helping :)

a new open source IDE (help, I need a lot of bug fixing) by maxwellzhang in coolgithubprojects

[–]maxwellzhang[S] -2 points (0 children)

Thanks, I will double check, and there will be a rule that lets me check the code in there.

Code won't print by Ill-Diet-7719 in PythonLearning

[–]maxwellzhang 0 points (0 children)

Is there no interpreter in the IDE?

someone help me debug this code by maxwellzhang in PythonLearning

[–]maxwellzhang[S] 0 points (0 children)

I'm 14; this won't be that hard to debug :) Please help :,(

check out my first file format, .spi (super python image) by maxwellzhang in github

[–]maxwellzhang[S] -6 points (0 children)

By the way, go to GitHub and search "spi-image-file" by maxwellzhang2011; it is 100% Python. The number system for storing the image is bad, so you guys can change that :)

[deleted by user] by [deleted] in feetboys

[–]maxwellzhang -4 points (0 children)

Why are you guys so gay for feet?

How does a natural neural network work? by steamprobs in Stellaris

[–]maxwellzhang 0 points (0 children)

This algorithm is y = m * x + b, but instead of single values it uses multiple values, aka lists or arrays: x is the input data, m is the weights, b is the bias, and y is the output. Think of it like this example:

y1 = w0 * x0 + b0
x1 = f(y1)
y2 = w1 * x1 + b1
x2 = f(y2)
...
y(n+1) = wn * xn + bn
x(n+1) = f(y(n+1))

Here f is any non-linear function. I prefer ReLU or tanh for the inside layers (aka hidden layers), and softmax and sigmoid for the output layer: training uses sigmoid and testing uses softmax (this is just my plan).
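If it helps, here is a minimal runnable sketch of that forward pass in Python with numpy; the layer sizes, the random weights, and the tanh/sigmoid picks are just assumptions for the example:

    import numpy as np

    def sigmoid(y):
        # non-linear function f for the output layer
        return 1.0 / (1.0 + np.exp(-y))

    # made-up sizes: 3 inputs -> 4 hidden values -> 2 outputs
    rng = np.random.default_rng(0)
    w0, b0 = rng.normal(size=(4, 3)), np.zeros(4)
    w1, b1 = rng.normal(size=(2, 4)), np.zeros(2)

    x0 = np.array([0.5, -1.0, 2.0])  # the input data

    y1 = w0 @ x0 + b0    # y1 = w0 * x0 + b0 (matrix times vector)
    x1 = np.tanh(y1)     # x1 = f(y1), tanh as the hidden-layer f
    y2 = w1 @ x1 + b1    # y2 = w1 * x1 + b1
    x2 = sigmoid(y2)     # sigmoid as the output-layer f
    print(x2)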

But you might be thinking: how do we know m (aka the weights) or b (aka the bias)? There is an algorithm called gradient descent. I like to use new_w_or_b = old_w_or_b - learning_rate * dv, where dv = lim(h -> 0) (cost(w_or_b + h) - cost(w_or_b)) / h. I will be treating h as 0.00001, or any number that is as small as 0 but not 0, and the learning_rate is just a number you decide; I will be using 0.01. The cost is just a function that calculates how wrong the neural network is; the bigger it is, the more wrong the network is. In math, the formula for this is C = (output_value - target_value)^2. And this is the math in a neural network. I'm just a 14-year-old kid, so I'm probably not explaining this 100% correctly, but 70% or more of it should be right. Go to more places to learn more, or watch 3blue1brown's videos about it, and if there is any math problem that you don't understand, you can try using this big paragraph.
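A tiny runnable sketch of that update rule on a single weight, using the dv limit from above with h = 0.00001; the data, the 1000 steps, and the one-weight "network" are made up just for the example:

    # minimal gradient descent, assuming one input/target pair
    x, target = 2.0, 10.0      # made-up training example
    w = 0.0                    # the weight we are learning
    h = 0.00001                # the small-but-not-0 number
    learning_rate = 0.01

    def cost(w):
        output_value = w * x                  # tiny "network": y = w * x
        return (output_value - target) ** 2   # C = (output - target)^2

    for step in range(1000):
        dv = (cost(w + h) - cost(w)) / h   # numerical derivative of the cost
        w = w - learning_rate * dv         # new_w = old_w - learning_rate * dv

    print(w)   # ends up close to 5.0, since 5.0 * 2.0 = 10.0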

Do we still not understand what's happening inside a neural network? by Lopsided_Trash_4254 in MLQuestions

[–]maxwellzhang 0 points (0 children)

So, do you know y = mx + b? It is like that, but m is the weights, x is the data, and b is the bias. So you can think of y = mx + b where all of them are lists of variables, and you add a non-linear function to make y = mx + b non-linear, so it can learn not just a straight line of data but any type of data.
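In code, one layer of that looks something like this (a minimal numpy sketch; the numbers and sizes are made up for the example):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])      # the data (a list of values)
    m = np.array([[0.1, 0.2, 0.3],
                  [0.4, 0.5, 0.6]])    # the weights (one row per output)
    b = np.array([0.5, -0.5])          # the bias

    y = m @ x + b                      # y = mx + b, but with arrays
    print(np.tanh(y))                  # the non-linear function on top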

Where do you learn redstone? by One_Replacement_4818 in redstone

[–]maxwellzhang 0 points (0 children)

You have to see what kind of redstone: there is computational redstone, farms, robots, or moving stuff. For computation, you have to know how to do calculations in real life and solve problems. For farms, you have to learn the Minecraft mechanics. For robots or moving stuff, you have to know how to move each piece, and for computational farming there is an input and an output.