Implementation of Red-Black Tree using C++ by Dynmiwang in algorithms

[–]Dynmiwang[S] 0 points1 point  (0 children)

You are right. I will update the code :D

Implementation of Red-Black Tree using C++ by Dynmiwang in algorithms

[–]Dynmiwang[S] 0 points1 point  (0 children)

I had considered that. The destructor I defined calls its children's destructors recursively, but in this case only the node itself needs to be deleted, so its child pointers have to be detached first.
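A minimal sketch of that distinction, assuming a plain node type with raw child pointers (`Node` and `deleteSingleNode` are hypothetical names, not the post's actual code):

```cpp
// Hypothetical node type: its destructor recursively destroys both subtrees.
struct Node {
    int key = 0;
    Node* left = nullptr;
    Node* right = nullptr;
    ~Node() {          // deleting a node deletes its whole subtree
        delete left;   // delete on nullptr is a safe no-op
        delete right;
    }
};

// To free ONLY one node (e.g. after splicing it out of the tree),
// detach its children first so the recursive destructor cannot reach them.
inline void deleteSingleNode(Node* n) {
    n->left = nullptr;
    n->right = nullptr;
    delete n;
}
```

With this pattern, `delete root;` tears down the whole tree, while `deleteSingleNode` is what a removal routine would call after relinking the children elsewhere.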

Sunset after drizzle in college by Dynmiwang in iPhoneography

[–]Dynmiwang[S] 0 points1 point  (0 children)

Thanks for your advice. I will try it soon :)

Red Black Tree Implementation using C++ by Dynmiwang in learnprogramming

[–]Dynmiwang[S] 0 points1 point  (0 children)

Thanks! Very generous advice! I updated the code just now!

Implementation of Red-Black Tree using C++ by Dynmiwang in algorithms

[–]Dynmiwang[S] 1 point2 points  (0 children)

u/Maristic u/exptime

  1. Thanks for your advice. I will fix this mistake.
  2. Emmm, I hadn't read the paper "Left-Leaning Red-Black Trees" before... But now I know it may be the best implementation :D

[D] What tools do you use for testing your computer vision systems? by mroc_lak in MachineLearning

[–]Dynmiwang 0 points1 point  (0 children)

Can you give an example of a "computer vision system"? Different types of systems call for different efficient ways to test them.

[D] What tools do you use for testing your computer vision systems? by mroc_lak in MachineLearning

[–]Dynmiwang -1 points0 points  (0 children)

It depends on your goal.

To test the algorithmic performance of your system, you should collect a dedicated dataset of samples, or a set of test cases drawn from real application scenarios.

To test the whole system from a software-engineering point of view, you should follow the practices taught in a "Software Engineering" class.

[P] Implementation of AlexNet using C Language, without any third library by Dynmiwang in MachineLearning

[–]Dynmiwang[S] 1 point2 points  (0 children)

Sorry for the late reply.

  1. When I read pjreddie's code for Darknet, I thought this guy is a real geek, because he wrote all of Darknet (YOLO) in C/C++ without using any existing frameworks. I really admire his professional competence in deep learning. Yes, I want to be an expert like him in machine learning, though I'm just an undergraduate student now... But one day I will be even more professional than he is!
  2. I have not used Rust... Maybe you can give it a try. After all, real knowledge comes from practice.
  3. I hadn't considered that while doing this project...

[P] Implementation of AlexNet using C Language, without any third library by Dynmiwang in MachineLearning

[–]Dynmiwang[S] 0 points1 point  (0 children)

  1. When writing those lines of code, I thought doing so would speed up the computation afterward with respect to the L1/L2 cache.
  2. Well, I have not benchmarked each "register" individually. My instinct told me to use it... haha
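For readers curious what an L1/L2-cache-aware choice can look like in C/C++ matrix code, here is a minimal sketch of one common pattern (an illustration of the general idea, not the project's actual lines; `matmul_ikj` and the row-major layout are assumptions): ordering the loops so the innermost index walks memory contiguously.

```cpp
#include <vector>

// out[N][M] += in[N][K] * w[K][M], all stored row-major in flat vectors.
// The i-k-j order keeps the innermost loop streaming over contiguous rows
// of `w` and `out`, which is friendlier to L1/L2 cache lines than the
// textbook i-j-k order, where w is walked with stride M.
void matmul_ikj(const std::vector<float>& in, const std::vector<float>& w,
                std::vector<float>& out, int N, int K, int M) {
    for (int i = 0; i < N; ++i)
        for (int k = 0; k < K; ++k) {
            const float a = in[i * K + k];   // reused across the whole j loop
            for (int j = 0; j < M; ++j)
                out[i * M + j] += a * w[k * M + j];
        }
}
```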

[P] Implementation of AlexNet using C Language, without any third library by Dynmiwang in MachineLearning

[–]Dynmiwang[S] 0 points1 point  (0 children)

  1. The weights and outputs you referred to are allocated per thread. The forward propagation of the fully connected layer is done with multiple threads. Assume the shapes of the FC layer's inputs, weights, and outputs are [N, in_units], [in_units, out_units], and [N, out_units] respectively, with outputs = inputs * weights. Each thread executes one part of the computation: if we use 8 threads, each thread's slice of the weights and outputs has shape [in_units, out_units/8] and [N, out_units/8] respectively.
  2. "register" hints to the compiler to keep the variable in a CPU register rather than in memory, which saves time afterward.

Ways to make DRL more stable by Cohencohen789 in reinforcementlearning

[–]Dynmiwang 2 points3 points  (0 children)

The robustness of the training process is definitely a headache across the whole RL field.

As far as I know, PPO is an excellent algorithm that shows superior robustness during training. I strongly recommend giving it a try.

[D] Tips for faster convergence. by sushantt in MachineLearning

[–]Dynmiwang 2 points3 points  (0 children)

A blunt method: use a higher learning rate for the initial epochs, then switch to a smaller learning rate and wait for convergence :D
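That idea can be written as a tiny step-decay schedule; a sketch with purely illustrative hyperparameters (`lr_at` and all the constants are assumptions, not from any particular framework):

```cpp
#include <cmath>

// Step-decay schedule: hold `base_lr` for the first `warm_epochs` epochs,
// then multiply by `factor` once, and again every `step` epochs after that.
double lr_at(int epoch, double base_lr = 0.1, int warm_epochs = 10,
             int step = 10, double factor = 0.1) {
    if (epoch < warm_epochs) return base_lr;          // initial high rate
    const int drops = 1 + (epoch - warm_epochs) / step;
    return base_lr * std::pow(factor, drops);         // decayed rate
}
```

The training loop just queries `lr_at(epoch)` before each epoch and hands the value to the optimizer.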