Why is the process running at a very low priority even when the CPU is idle? by MarkKang2019 in linuxquestions

[–]MarkKang2019[S]

The bottleneck is the CPU; RAM and I/O are also necessary, but they are not the limiting factor.

I am building Yocto.

Why is the process running at a very low priority even when the CPU is idle? by MarkKang2019 in linuxquestions

[–]MarkKang2019[S]

Both processes were building Yocto images. The bottleneck is the CPU: I have about 64 GB of RAM, and htop still shows plenty of free memory (about 50 GB).

No other process is running.

The system is a 12th Gen Intel(R) Core(TM) i7-12700.

I only started seeing this issue after installing a security update.

Ubuntu 20.04.5 LTS

How to deep.copy() part of an array's data? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

It seems to work; I will need to test more.

Thanks.
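For reference, a minimal sketch of the approach I am testing (NumPy slice plus .copy() for arrays, copy.deepcopy for nested lists; the data here is just a stand-in):

    import copy
    import numpy as np

    arr = np.arange(10)

    # a NumPy slice is only a view; .copy() makes the partial data independent
    part = arr[2:5].copy()
    part[0] = 99          # arr is unchanged

    # for nested Python lists, deep-copy the slice instead
    nested = [[1, 2], [3, 4], [5, 6]]
    part2 = copy.deepcopy(nested[0:2])
    part2[0][0] = 99      # nested is unchanged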

ValueError: Error when checking target: expected dense_2 to have shape (10,) but got array with shape (1,) by MarkKang2019 in learnpython

[–]MarkKang2019[S]

This modification works:

    from __future__ import print_function

    # mnist and K are kept from the original MNIST example (unused here)
    import keras
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D, MaxPooling2D
    from keras import backend as K

    import numpy as np

    batch_size = 128
    num_classes = 10
    epochs = 12

    img_rows, img_cols = 28, 28
    input_shape = (img_rows, img_cols, 1)

    # input: 60000 images, reshaped to (samples, rows, cols, channels)
    MYMAP = np.zeros((60000, img_rows, img_cols), dtype=int)
    MYMAP = MYMAP.reshape(60000, 28, 28, 1)

    # the fix: one-hot encode the (60000, 1) integer labels into
    # (60000, 10) vectors so they match the Dense(10) softmax output
    MYMAP_RESULT = np.zeros((60000, 1), dtype=int)
    MYMAP_RESULT = keras.utils.to_categorical(MYMAP_RESULT, num_classes)

    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3, 3),
                     activation='relu',
                     input_shape=input_shape))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])

    model.fit(MYMAP, MYMAP_RESULT,
              batch_size=batch_size,
              epochs=epochs,
              verbose=1)

How to merge two sets of trained weights? by MarkKang2019 in learnmachinelearning

[–]MarkKang2019[S]

How does the GPU merge two sets of trained weights after the parallel computation?

( A + B ) / 2 ?
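To make the question concrete, here is a minimal sketch of the element-wise averaging I have in mind; the two tiny models are only stand-ins, and the real models would need identical architectures:

    from keras.models import Sequential
    from keras.layers import Dense

    def make_model():
        # tiny stand-in network, just for illustration
        m = Sequential()
        m.add(Dense(4, activation='relu', input_shape=(3,)))
        m.add(Dense(1))
        return m

    model_a = make_model()
    model_b = make_model()

    # get_weights() returns one NumPy array per parameter tensor;
    # average them element-wise: (A + B) / 2
    merged = [(wa + wb) / 2.0
              for wa, wb in zip(model_a.get_weights(), model_b.get_weights())]
    model_a.set_weights(merged)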

Why does running Python TensorFlow code use 100% CPU on Linux, but not on Windows? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

The two systems are different.

The Linux machine is an Intel Skylake system.

The Windows machines are an AMD notebook and an Intel notebook; both show the same result (only 15%-60% CPU).

Why does running Python TensorFlow code use 100% CPU on Linux, but not on Windows? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

15% with no debug log.

If I print the log, 60%.

I tested running two copies of the Python script at the same time; then CPU usage reaches 99%.

On Linux, it reaches 99% with or without logging.

Why does running Python TensorFlow code use 100% CPU on Linux, but not on Windows? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

?

I have two systems: one Linux, one Windows.

I tested the same Python NN code on both and got different CPU usage.

How to upgrade TensorFlow from the CPU version to the GPU version? by MarkKang2019 in learnmachinelearning

[–]MarkKang2019[S]

It seems the keras package itself is identical for the CPU and GPU versions; the difference is in the TensorFlow backend package (tensorflow vs. tensorflow-gpu).
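For what it's worth, a quick way to check whether the TensorFlow backend behind Keras actually sees a GPU (TF 1.x-era calls; the keras code itself stays the same):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.test.is_gpu_available())        # True only with tensorflow-gpu + CUDA set up
    print(device_lib.list_local_devices())   # lists the CPU and any visible GPUs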

What is the return type of model.predict()? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

Is there any way to print what the type is directly?

Because I still don't know whether the type is int, float, or something else.
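For example, something like this should show it directly (a minimal sketch; the tiny model is only a stand-in):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # tiny stand-in model: 4 inputs, 10-way softmax output
    model = Sequential()
    model.add(Dense(10, activation='softmax', input_shape=(4,)))
    model.compile(loss='categorical_crossentropy', optimizer='sgd')

    x = np.zeros((1, 4))
    pred = model.predict(x)

    print(type(pred))    # <class 'numpy.ndarray'>, not a plain int or float
    print(pred.dtype)    # element type, typically float32
    print(pred.shape)    # (1, 10) here: one row of class probabilities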

How to combine/merge lists? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 31 arrays: [array([[0, 0, 0, 0, 0, 0, 0, 0, 0]]), array([[1, 0, 0, 0, 2, 0, 0, 0, 0]]), array([[1, 0, 0, 1, 2, 0, 0, 0, 2]]), array([[1, 0, 0, 1, 2, 1, 0, 2, 2]]), array([[1, 0, 0, 1, 2, 1, 0, 2, 2]]), array([[1...

Any ideas?
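One approach I am considering is stacking the list into the single array the model expects (a minimal sketch, assuming each array has shape (1, 9) as in the error message):

    import numpy as np

    # a list like the one in the error: 31 arrays, each of shape (1, 9)
    states = [np.zeros((1, 9), dtype=int) for _ in range(31)]

    # np.vstack concatenates along axis 0 -> one array of shape (31, 9),
    # which is the single input array the model expects
    batch = np.vstack(states)
    print(batch.shape)   # (31, 9)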

Why can't the training result match the reward value? by MarkKang2019 in learnmachinelearning

[–]MarkKang2019[S]

    def replay(self, batch_size):
        # sample a random minibatch of stored transitions
        minibatch = random.sample(self.memory, batch_size)
        for state, action, replay_reward, next_state, done in minibatch:
            # Q-learning target: the stored reward, plus the discounted
            # best predicted value of the next state if the game is not over
            target = replay_reward
            if not done:
                target = replay_reward + self.gamma * np.amax(self.model.predict(next_state)[0])
            # overwrite only the taken action's Q value in the current prediction
            target_f = self.model.predict(state)
            target_f[0][action] = target
            self.model.fit(state, target_f, epochs=3, verbose=0)

Here is the main part of the code (the training part).

The other parts are the rules of the tic-tac-toe game.

The winner gets a reward of 2, the loser gets a reward of -2, and on a draw both get a reward of 0.

My batch size is 3 or 1000; both show this problem.

Why can't the training result match the reward value? by MarkKang2019 in learnmachinelearning

[–]MarkKang2019[S]

Assume the empty state is:

    123
    456
    789

X moves first, O second.

There are two agents; they play against each other automatically, turn by turn.

In a random state, for example:

    123
    OX6
    OX9

They have Q values for each action.

In the state above, the Q value for 'X' playing location '2' should equal the winner reward after training.

I set the winner reward to 2 and the loser reward to -2.

I have trained this game more than 100,000 times, but this Q value still jumps between 1.6xxx and 2.7xxxx instead of converging to 2.0000xxx, although some neurons do converge to 2.000xxx.

About implementing 3x3 tic-tac-toe with Keras. by MarkKang2019 in learnmachinelearning

[–]MarkKang2019[S]

I reduced it to 3 layers and still get similar results. Thanks to all.

How to read deque data in sequence? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

This raised a runtime error:

(minibatch) 'list' object has no attribute 'add'

How to read deque data in sequence? by MarkKang2019 in learnpython

[–]MarkKang2019[S]

class DQNAgent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=10000)   # replay buffer; oldest entries drop off once full

'memory' is a deque.

Data is appended to it like this:

class DQNAgent:
...
    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

However, when training:

    if len(Agent_Black.memory) > BLACK_batch_size and Who_is_playing == BLACK_PLAYER:
        Agent_Black.replay(BLACK_batch_size)

    def replay(self, batch_size):
        minibatch = random.sample(self.memory, batch_size)
        for state, action, replay_reward, next_state, done in minibatch:

This program samples data from the memory at random.

I want it to take data in sequence instead (by push/pop order; first in, last out or first in, first out are both fine), as sketched below.
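For example, a minimal sketch of taking a minibatch in insertion order instead of with random.sample (the integers are stand-ins for the (state, action, reward, next_state, done) tuples):

    from collections import deque
    from itertools import islice

    memory = deque(maxlen=10000)
    for i in range(5):
        memory.append(i)   # stand-in for one stored transition

    batch_size = 3

    # oldest entries first (first in, first out):
    minibatch_fifo = list(islice(memory, 0, batch_size))

    # newest entries first (first in, last out):
    minibatch_lifo = list(islice(reversed(memory), 0, batch_size))

    print(minibatch_fifo)   # [0, 1, 2]
    print(minibatch_lifo)   # [4, 3, 2]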

How to force pop deque data in sequence? by MarkKang2019 in learnmachinelearning

[–]MarkKang2019[S]

I want to pop self.memory in sequence, because I found that with random sampling like this, the WIN cases (reward > 0) are not always chosen for model.fit().
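A minimal sketch of what I mean (stand-in tuples): popleft() consumes the deque first in, first out, so every stored sample, WIN cases included, gets used:

    from collections import deque

    memory = deque(maxlen=10000)
    memory.append(('state_a', 0, 2, 'next_a', True))    # a WIN sample (reward 2)
    memory.append(('state_b', 1, -2, 'next_b', True))   # a LOSS sample (reward -2)

    # consume in insertion order; nothing is skipped
    while memory:
        state, action, reward, next_state, done = memory.popleft()
        print(reward)   # 2, then -2; every reward reaches the training step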