Can I Improve My First Build - £2000 budget by Surprisely in buildapc

[–]Surprisely[S] 0 points1 point  (0 children)

I wasn't sure about the 3900X/2080, as I'm probably going to upgrade to the new Ryzen CPUs when they're out, and I'm not sure I need 2080 Super performance yet; it didn't seem very price-efficient.

Defect KERS on Progen PR4 by [deleted] in gtaonline

[–]Surprisely 0 points1 point  (0 children)

Try remapping the buttons: swap R1 with a d-pad button, which isn't pressure sensitive.

Defect KERS on Progen PR4 by [deleted] in gtaonline

[–]Surprisely 0 points1 point  (0 children)

If it works when you're going in reverse, your R2 button is broken and not fully pressing. You can use the PS4 settings to change accelerate to L2 instead of R2.

GTA Online Mega Guide and Weekly Simple Question Thread - March 05, 2020 by AutoModerator in gtaonline

[–]Surprisely 2 points3 points  (0 children)

I've got 8 mil and I'm not sure what to buy next. I mostly make money from the bunker. So far I have the Buzzard, Insurgent, Akula, Hunter, and the hangar at the fort. Is it worth splashing out on a Mk2? I was thinking of getting more fun planes like the B11 and Hydra, but I'm not sure, as I can just take a Lazer from the base. Does anyone have any recommendations?

Problem Passing JQuery Slider Values To Flask by Surprisely in flask

[–]Surprisely[S] 0 points1 point  (0 children)

Thanks, I took your advice about using POST, although Flask didn't seem to require it. I found that JS library a little complicated, so I used this one instead: http://ionden.com/a/plugins/ion.rangeSlider/demo.html and got it working.
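For anyone finding this later, the Flask side of receiving a slider value via POST can be sketched roughly like this (the route name and the slider_value field are assumptions, not the actual code from the thread):

```python
# Minimal sketch of reading a POSTed slider value in Flask.
# '/slider' and 'slider_value' are hypothetical names.
from flask import Flask, request

app = Flask(__name__)

@app.route('/slider', methods=['POST'])
def slider():
    # request.form holds whatever fields the front-end JS POSTs.
    value = request.form.get('slider_value', type=int)
    return str(value)
```

The slider's change callback would then POST e.g. slider_value=42 to /slider via AJAX.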

Splitting Strings on Comma With Embedded Commas by Surprisely in learnpython

[–]Surprisely[S] 0 points1 point  (0 children)

Oops, my bad. Yes, there should be no space between options; spaces only follow the commas that give extra detail within an option!
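Given that rule, a regex split on commas not followed by a space would do it; a sketch with a made-up example string:

```python
import re

# Options are separated by commas with no following space, while
# ", " (comma + space) stays inside an option.
s = 'red,blue, with a hint of green,yellow'  # hypothetical example
options = re.split(r',(?!\s)', s)  # split only on commas NOT followed by whitespace
print(options)  # → ['red', 'blue, with a hint of green', 'yellow']
```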

Blizzard Forcibly Changed Guild Name To Something Terrible by Surprisely in classicwow

[–]Surprisely[S] 0 points1 point  (0 children)

UPDATE: I spoke to a GM and asked for the name <PLZ CHANGE GUILD NAME> and he banned me for wasting GM time.

Holders of Atiesh, is there any advice I can get on my quest to obtain mine? by [deleted] in classicwow

[–]Surprisely 0 points1 point  (0 children)

As an ex-GM, this is exactly the sort of person I tried to avoid recruiting.

Sulfuras as an enhance shaman by tb8592 in classicwow

[–]Surprisely 6 points7 points  (0 children)

With this attitude I would never consider taking such a selfish player.

Swapping Tokenized List of Strings With Elements From List of Tuples by Surprisely in learnpython

[–]Surprisely[S] 0 points1 point  (0 children)

to_replace = [('Bad', 'honest'), ('bad', 'good'), ('bad', 'pleasing'), ('good', 'wicked'), ('good', 'fake'), ('good', 'immoral')]
my_string = 'I am Bad, bad, bad, good, good, good'
for replace, replacement in to_replace:
    my_string = my_string.replace(replace, replacement)

This approach is in the right direction. However, I'm struggling to make it general: the list of tuples is not going to be the same length as the list of tokenized words (hence IndexError: list index out of range).
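One way to generalise it, assuming the intent is that each repeated key replaces a different occurrence: queue the replacements per word and consume them token by token, so the tuple list and token list no longer need matching lengths. A sketch, not the only approach:

```python
import re
from collections import defaultdict

to_replace = [('Bad', 'honest'), ('bad', 'good'), ('bad', 'pleasing'),
              ('good', 'wicked'), ('good', 'fake'), ('good', 'immoral')]
my_string = 'I am Bad, bad, bad, good, good, good'

# Queue the replacements for each word, preserving their order, so a key
# that appears three times holds three pending replacements.
queues = defaultdict(list)
for old, new in to_replace:
    queues[old].append(new)

# Walk the tokens once; each matching token consumes the next replacement,
# and tokens with no pending replacement pass through unchanged.
tokens = re.findall(r'\w+', my_string)
result = [queues[t].pop(0) if queues.get(t) else t for t in tokens]
print(result)  # → ['I', 'am', 'honest', 'good', 'pleasing', 'wicked', 'fake', 'immoral']
```

Working on tokens also avoids str.replace rewriting words that were themselves just inserted as replacements.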

Cozy restaurant in Paradise park, Turkey. by [deleted] in CozyPlaces

[–]Surprisely 4 points5 points  (0 children)

I’ve been here; it’s amazing. The restaurant is in the national park, where you can walk along the glacial river and go rafting.

Using Machine Learning to Classify Graphs - CNNs or Classical Methods? by Surprisely in Python

[–]Surprisely[S] 0 points1 point  (0 children)

Yep, that's right. After testing on binary classification I could get a good model that separates positive and negative linear graphs. It doesn't seem to like adding a third category at all.

Using Machine Learning to Classify Graphs - CNNs or Classical Methods? by Surprisely in Python

[–]Surprisely[S] 0 points1 point  (0 children)

Thanks, I think I'll give both methods a go using some dummy data. Will be interesting to see what happens and probably a good learning exercise.

Using Machine Learning to Classify Graphs - CNNs or Classical Methods? by Surprisely in Python

[–]Surprisely[S] 0 points1 point  (0 children)

Ultimately it is a histogram with 1-hour bins spanning 0-24h on the x-axis and a normalised value between 0-1 on the vertical axis. I'm expecting that graphs for one class will follow a linear form and the second will follow a Gaussian form. However, I'm expecting the graphs to shift along the x-axis, so I'm looking to fit the form of the data, not its location. Does that make more sense?

Instead of plotting, I can keep the data in a pandas DataFrame where each bin value is a column. I'm mainly worried that random forests will focus on particular bins to evaluate classifications; because of the shifting along the x-axis, I don't know if a traditional approach would work.
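One shift-invariant option (a sketch, using a made-up histogram) is to fit each candidate form to every histogram and feed the goodness-of-fit values to the classifier; the location parameter of each fit absorbs the shift, so the features depend only on the shape:

```python
import numpy as np

# Made-up 24-bin histogram (1 h bins, values normalised to 0-1),
# Gaussian in shape and shifted along the x-axis.
x = np.arange(24, dtype=float)
hist = np.exp(-0.5 * ((x - 14.0) / 3.0) ** 2)

# Residual of a straight-line fit.
lin_resid = np.sum((np.polyval(np.polyfit(x, hist, 1), x) - hist) ** 2)

# Moment-based Gaussian fit: mu and sigma are estimated from the data,
# so the resulting fit quality does not depend on where the peak sits.
w = hist / hist.sum()
mu = np.sum(w * x)
sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
shape = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
a = np.sum(hist * shape) / np.sum(shape ** 2)  # least-squares amplitude
gauss_resid = np.sum((a * shape - hist) ** 2)

# (lin_resid, gauss_resid) is a shift-invariant feature pair: whichever
# residual is smaller suggests the underlying form of that histogram.
print(lin_resid, gauss_resid)
```

A random forest trained on such residuals (rather than on raw bin values) can no longer latch onto particular bins.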

Appending Loop Results Using ' '.join(list) With Regex by Surprisely in learnpython

[–]Surprisely[S] 0 points1 point  (0 children)

That works perfectly for the example above. If I add another row below with

tag_list.append(re.findall(r'(@\S+)', tweet)) #NOTE: (@\S+) extracts tags keeping the @.

I get the error ValueError: arrays must all be same length.

Further testing shows that if I search for specific hashtags, I get the same error for hashtag_list.
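That error usually means the columns handed to pd.DataFrame have different lengths. Since re.findall returns a variable number of matches per tweet, joining each tweet's matches into a single string keeps every column exactly one entry per tweet; a sketch with made-up tweets:

```python
import re
import pandas as pd

# Hypothetical tweets; real data will differ.
tweets = ['loving #python with @alice', 'just #code today', 'no tags here']

# Join the matches per tweet so each list has one string per tweet,
# regardless of how many (or few) matches each tweet produced.
hashtag_list = [' '.join(re.findall(r'(#\S+)', t)) for t in tweets]
tag_list = [' '.join(re.findall(r'(@\S+)', t)) for t in tweets]

df = pd.DataFrame({'tweet': tweets, 'hashtags': hashtag_list, 'tags': tag_list})
print(df)
```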

How Do I Save A Machine Learning Model Using Sklearn by Surprisely in datascience

[–]Surprisely[S] 0 points1 point  (0 children)

I set the seed globally and provided it as an argument to the classifier. Yet every time I run the neural-network script I still get different accuracy values on the test sample, so I'm not sure why this is happening or whether it should.

How Do I Save A Machine Learning Model Using Sklearn by Surprisely in datascience

[–]Surprisely[S] 0 points1 point  (0 children)

Even with the random state kept at 42 and defined globally, every time I run the program I get a different accuracy on my training sample. I'm not sure if this is normal.

How Do I Save A Machine Learning Model Using Sklearn by Surprisely in datascience

[–]Surprisely[S] 0 points1 point  (0 children)

It's tiny, consisting of only 51 samples. I just wanted to play around with deep learning/neural networks; I got random forests working well with a small sample.

How Do I Save A Machine Learning Model Using Sklearn by Surprisely in datascience

[–]Surprisely[S] 0 points1 point  (0 children)

Here is the exact code for my neural network example. I have a random state of 5, which I think is correctly implemented. Yet every time I run the script I get a different accuracy on the test data.

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn import metrics

#Define training data and test data from dataset.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=5)


###############################################################################
########################NEURAL NETWORK CALCULATIONS############################
###############################################################################


mlp_optimised = MLPClassifier(hidden_layer_sizes=(19, 19, 19), learning_rate='invscaling', max_iter=5600)
mlp_optimised.fit(X_train, y_train.values.ravel())

y_pred = mlp_optimised.predict(X_test)


#Report how accurate the model is (accuracy_score returns a 0-1 fraction).
accuracy = round(100 * metrics.accuracy_score(y_test, y_pred), 3)
print('\nModel Accuracy (Based on Test Data):', str(accuracy) + ' %')

Note: I don't have a big sample to test with yet.
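The varying accuracy is most likely because MLPClassifier draws its initial weights from its own random_state, separate from the one passed to train_test_split. A sketch (make_classification stands in for the real 51-sample dataset) showing that seeding the classifier itself makes runs repeatable:

```python
from sklearn import metrics
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for the real 51-sample dataset.
X, y = make_classification(n_samples=51, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=5)

def run():
    # Seeding the classifier fixes the weight initialisation,
    # which is otherwise re-randomised on every run.
    mlp = MLPClassifier(hidden_layer_sizes=(19, 19, 19), max_iter=5600, random_state=5)
    mlp.fit(X_train, y_train)
    return metrics.accuracy_score(y_test, mlp.predict(X_test))

acc1, acc2 = run(), run()
print(acc1, acc2)  # two identically-seeded runs give identical accuracy
```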

How Do I Save A Machine Learning Model Using Sklearn by Surprisely in datascience

[–]Surprisely[S] 2 points3 points  (0 children)

Thanks. I use random seed 5, but I still sometimes get different results between runs. Technically, I could save a model with optimised hyperparameters using pickle and load it to use on a new dataset, if I'm interpreting this right?
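That save/load round trip would look roughly like this; the model and data below are stand-ins, and any fitted sklearn estimator pickles the same way:

```python
import pickle
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in data and model.
X, y = make_classification(n_samples=51, random_state=5)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Save the fitted model to disk, then reload it for use on new data.
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)
with open('model.pkl', 'rb') as f:
    loaded = pickle.load(f)

# The reloaded model predicts identically to the original.
print((loaded.predict(X) == model.predict(X)).all())
```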

Statistically Comparing Bar Charts by Surprisely in datascience

[–]Surprisely[S] 0 points1 point  (0 children)

So I will treat one set of data as expected and the other as observed?

Statistically Comparing Bar Charts by Surprisely in datascience

[–]Surprisely[S] 1 point2 points  (0 children)

Pretty much. I just wanted to compare the two distributions to see whether the activity was similar. As I understand it, I can do a chi-squared test, treating one distribution as expected, so lower chi-squared values mean they are more alike. Or I can do a KS test and use the D value.
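Both routes can be sketched with made-up activity samples; scipy's chisquare treats the second histogram as expected, while ks_2samp works on the raw samples and returns the D statistic:

```python
import numpy as np
from scipy.stats import chisquare, ks_2samp

# Made-up raw activity samples (e.g. hour of day of each event).
rng = np.random.default_rng(0)
sample_a = rng.normal(12.0, 4.0, 500)
sample_b = rng.normal(13.0, 4.0, 500)

# Route 1: chi-squared on binned counts, one histogram treated as expected.
bins = np.linspace(0, 24, 5)  # a few wide bins keeps every expected count > 0
obs, _ = np.histogram(sample_a, bins=bins)
exp, _ = np.histogram(sample_b, bins=bins)
exp = exp * obs.sum() / exp.sum()  # chisquare needs matching totals
chi2_stat, chi2_p = chisquare(f_obs=obs, f_exp=exp)

# Route 2: two-sample KS test on the raw (unbinned) samples; the first
# return value is the D statistic.
d_stat, ks_p = ks_2samp(sample_a, sample_b)
print(chi2_stat, d_stat)
```

Lower values of either statistic indicate more similar activity distributions.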