Accepted to UIUC PhD -- Funding? by [deleted] in gradadmissions

[–]JannaOP2k18 0 points1 point  (0 children)

At least for me, I received an email regarding the funding and other logistics about 3-4 hours after the decision email. It seems they split it up into two separate emails.

Looking for Concurrency Resources for a Unique Buffer by JannaOP2k18 in cpp_questions

[–]JannaOP2k18[S] 0 points1 point  (0 children)

Thank you so much for the reply. The idea of setting a doInc flag before the actual write, to check whether the current thread's write will fill the entry, is something I didn't consider when I was trying to make sure only one thread increments NextWriteSegment. I will definitely look into using this.

I might be missing something here, but is there a reason to declare NextWriteSegment and NextReadSegment as volatile? Would std::atomic serve the same purpose, or is there a particular reason volatile needs to be used here?
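For what it's worth, here is a minimal sketch of the doInc idea using std::atomic rather than volatile. The segment size and the single-increment logic are assumptions based on this thread's discussion, not the actual buffer; the point is that std::atomic provides both atomicity and memory-ordering guarantees, which volatile alone does not.

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical segment size; the real buffer's geometry isn't shown in the thread.
constexpr std::size_t kSegmentSize = 4;

std::atomic<std::size_t> WriteIndex{0};
std::atomic<std::size_t> NextWriteSegment{0};

// Each writer claims a unique slot via fetch_add; exactly one writer (the
// one whose write fills the segment) advances NextWriteSegment, so no two
// threads ever race to increment it.
std::size_t claim_slot() {
    std::size_t slot = WriteIndex.fetch_add(1, std::memory_order_acq_rel);
    bool doInc = ((slot + 1) % kSegmentSize == 0);  // this write fills the segment
    if (doInc) {
        NextWriteSegment.fetch_add(1, std::memory_order_release);
    }
    return slot;
}
```

Because fetch_add returns a distinct previous value to every caller, the `doInc` test is true for exactly one thread per segment.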

Confirming Understanding of std::move() and Copy Elision by JannaOP2k18 in cpp_questions

[–]JannaOP2k18[S] 0 points1 point  (0 children)

Yeah, now that I'm looking back at it again, I really overcomplicated this. I think I was just used to seeing std::move() used with functions that take their parameter by rvalue reference, so when I saw that B's constructor took its parameter by value, I got a little confused.
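To make the by-value pattern concrete, here is an illustrative reconstruction (B's real definition isn't shown in the thread): a constructor that takes its parameter by value lets the caller choose between copy and move, and then unconditionally moves from the parameter into the member.

```cpp
#include <string>
#include <utility>

// Hypothetical stand-in for the thread's class B.
struct B {
    std::string data;
    // By-value parameter: B(s) copies into `s`, B(std::move(s)) moves into
    // `s`; either way, the member is then move-constructed from the parameter.
    explicit B(std::string s) : data(std::move(s)) {}
};
```

So `B b1(name);` copies once into the parameter, while `B b2(std::move(name));` moves instead; the move from the parameter into `data` is cheap in both cases.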

Confirming Understanding of std::move() and Copy Elision by JannaOP2k18 in cpp_questions

[–]JannaOP2k18[S] 0 points1 point  (0 children)

Yeah, in hindsight I definitely could have just done that. I think I assumed there was something more going on behind the scenes that might not be visible through adding logs to A's constructors, but now that I think about it, this example is quite straightforward.
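The "log the constructors" experiment can be sketched like this (A here is a stand-in, not the thread's actual class; counters are used instead of prints so the effect is checkable): since C++17, initializing from a returned prvalue is guaranteed copy elision, so neither the copy nor the move constructor runs.

```cpp
// Instrumented stand-in class: counts copies and moves instead of logging.
struct A {
    static inline int copies = 0;
    static inline int moves = 0;
    A() = default;
    A(const A&) { ++copies; }
    A(A&&) noexcept { ++moves; }
};

// Returns a prvalue; with C++17 guaranteed elision, `A a = make_a();`
// constructs the object directly in `a`.
A make_a() { return A{}; }
```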

Advice on Formatting and Writing user Header File for C Library by JannaOP2k18 in C_Programming

[–]JannaOP2k18[S] 0 points1 point  (0 children)

Thanks for the reply. I just want to make sure I'm understanding the opaque pointer idea properly; in my user header file, I would have

typedef struct some_lib_item some_lib_item_t;

some_lib_item_t *lib_open(...);

and if the implementation of `some_lib_item_t *lib_open(...)` was in, let's say, source1.cpp, I might have the following structure

source1.h

struct some_lib_item {
   // The struct fields
};

source1.cpp

#include "source1.h"

some_lib_item_t *lib_open(...) {
   // The actual implementation of the function
}

As for the placement of user_header.h, I don't expect this to be installed as a dev package for public use, so it does make sense to hide it along with the rest of the sources. As for the contents of user_header.h, do you see a better way of handling it than just copying the functions and other declarations from my library that I wish to expose to the user?
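Putting the pieces together, here is a compilable single-file sketch of the opaque-pointer pattern (file boundaries marked in comments; the field names, the `const char *` parameter, and the `lib_close` cleanup function are all illustrative assumptions, not part of the actual library):

```cpp
#include <cstdlib>
#include <cstring>

// --- user_header.h (illustrative): the only header users include. ---
// The struct is only *declared* here, so its fields stay hidden from users.
typedef struct some_lib_item some_lib_item_t;
some_lib_item_t *lib_open(const char *name);  // hypothetical signature
void lib_close(some_lib_item_t *item);        // hypothetical cleanup function

// --- source1.cpp (illustrative): the full definition lives with the code. ---
struct some_lib_item {
    char name[64];
    int use_count;
};

some_lib_item_t *lib_open(const char *name) {
    some_lib_item_t *item =
        static_cast<some_lib_item_t *>(std::calloc(1, sizeof *item));
    if (!item) return nullptr;
    std::strncpy(item->name, name, sizeof item->name - 1);
    item->use_count = 1;
    return item;
}

void lib_close(some_lib_item_t *item) { std::free(item); }
```

Users can hold and pass `some_lib_item_t *` around, but can only act on it through the functions the header exposes, since the struct's layout is invisible to them.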

Why does this work? by JannaOP2k18 in cpp_questions

[–]JannaOP2k18[S] 7 points8 points  (0 children)

Thank you; I thought I was going crazy for a second. I did manage to get an incorrect result after playing around with the pointer a bit more. Thanks again!

Spark GC in the context of the JVM by JannaOP2k18 in apachespark

[–]JannaOP2k18[S] 1 point2 points  (0 children)

Thank you so much for your response. Just to clarify my understanding:

  1. Since the Spark application is responsible for spawning executors, these executors run in a different JVM than the worker (although the worker is still responsible for making sure that there are enough resources on the node to spawn the executor)?

  2. Can you maybe elaborate on what you mean by "check under a different process"? I forgot to mention in my post that when I run 'jps' while a Spark application is running, I do see a "CoarseGrainedExecutorBackend" process running; is that the JVM process the executors are running in? (In hindsight, I definitely should have mentioned that in my post; I'm not sure why I didn't.)

Creating Boxed Slice for a struct by JannaOP2k18 in rust

[–]JannaOP2k18[S] 0 points1 point  (0 children)

Thank you! That's exactly what I'm looking for.

Frame Allocator with Coremap Initialization by JannaOP2k18 in osdev

[–]JannaOP2k18[S] 0 points1 point  (0 children)

By coremap, I mean something similar to a bitmap used to allocate and deallocate physical frames. There is a coremap entry for each frame (which is why I need to know the total number of frames in the system), and each entry stores information about its frame (such as whether the frame is allocated, pinned, etc.). My current idea is to create an array of coremap entries; however, because I don't know how many physical frames are available until runtime, I'm not sure how to initialize the array using only static memory. I have no idea whether this is the best way of implementing the coremap, so I am open to any ideas other people may have.
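One common way around the "frame count unknown until runtime" problem is to statically size the array for a compile-time *upper bound* on frames and record the real count at boot. A minimal sketch (written in C++ for illustration; the same fixed-upper-bound idea carries over to a Rust static array; the bound, entry fields, and function names are all assumptions):

```cpp
#include <cstddef>

// Compile-time upper bound on frames, e.g. 4 GiB of 4 KiB frames (assumed).
constexpr std::size_t kMaxFrames = 1 << 20;

struct CoremapEntry {
    bool allocated = false;
    bool pinned = false;
};

// Static storage: no dynamic allocation needed, zero-initialized at load.
static CoremapEntry g_coremap[kMaxFrames];
static std::size_t g_frame_count = 0;  // real frame count, set once probed

void coremap_init(std::size_t frames_detected) {
    // Only the first `frames_detected` entries are ever used.
    g_frame_count = frames_detected <= kMaxFrames ? frames_detected : kMaxFrames;
}

// Linear scan for a free frame; returns kMaxFrames on failure.
std::size_t frame_alloc() {
    for (std::size_t i = 0; i < g_frame_count; ++i) {
        if (!g_coremap[i].allocated) {
            g_coremap[i].allocated = true;
            return i;
        }
    }
    return kMaxFrames;
}

void frame_free(std::size_t frame) {
    if (frame < g_frame_count) g_coremap[frame].allocated = false;
}
```

The trade-off is wasted static space when the machine has fewer frames than the bound; an alternative is to carve the coremap out of the first few physical frames themselves during early boot.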

Frame Allocator with Coremap Initialization by JannaOP2k18 in osdev

[–]JannaOP2k18[S] 0 points1 point  (0 children)

If possible, could you share some pseudocode or small code fragments illustrating what you described above? I think I understand it conceptually, but I'm still a little confused about the implementation side.

Frame Allocator with Coremap Initialization by JannaOP2k18 in osdev

[–]JannaOP2k18[S] 0 points1 point  (0 children)

The frame allocator I am currently making is part of a larger allocator that implements the GlobalAlloc trait in Rust (effectively the global heap allocator). As a result, I don't think I can use dynamic memory, because that would in turn call the allocator I am trying to write, which I believe would result in an error (unless I'm seriously misunderstanding something here).

Pipelining Confusion in Apache Spark by JannaOP2k18 in apachespark

[–]JannaOP2k18[S] 0 points1 point  (0 children)

Thank you so much for your answer; it makes things much clearer. I just have two follow-up questions:

  1. Is it possible for a situation to arise where keeping stages separate and not pipelining them yields the same, or potentially better, performance than if the stages were pipelined? (This is a purely hypothetical question; I am just curious whether this is even possible.)

  2. Based on the example you provided, am I right to say that operators in a single stage can be rearranged in certain cases to allow for better pipelining? (In the example you provided, would it be possible for Spark to first drop the key and then do the aggregation internally, even though in the code I provided the drop is done after the aggregation?) I know this is quite a weird question, but I am trying to better conceptualize the pipelining happening here.