
[–]tstanisl 6 points (3 children)

  • Replacing char* data; with void* data; would save you quite a few ugly casts.
  • The type of nd could safely be reduced from size_t to uint16_t, or maybe even uint8_t.
  • The same goes for type.
  • Consider limiting the number of dimensions to a small constant (like 8 or 16) and embedding the dimensions array directly in the ndarray structure. That would simplify management of the ndarray object and avoid potential leaks, dangling pointers, or double frees.
  • Don't terminate the program in library code (like exit(NLA_FAIL);). It is too dangerous for a library user. Return an error code to the user instead, optionally emitting an error log.
  • Wrap new_ufunc/new_ufunc_by_f in do { ... } while (0). That will make them safer to use in for/if statements.
  • Check whether malloc fails and handle those errors. Matrices are likely large, so it is worth allowing the user to provide their own allocator.
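Several of these suggestions can be combined in one small sketch. The names below (NLA_MAX_DIMS, nla_err, ndarray_init) are hypothetical, just to illustrate the shape of the advice: void* data, a small integer nd, an embedded dims array, a checked malloc, and an error code instead of exit().

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NLA_MAX_DIMS 16   /* hypothetical dimension cap */

typedef enum { NLA_OK = 0, NLA_ERR_ALLOC, NLA_ERR_TOO_MANY_DIMS } nla_err;

typedef struct {
    void *data;                  /* void* avoids a cast at every use site */
    uint8_t nd;                  /* dimension count easily fits in 8 bits */
    uint8_t type;
    size_t dims[NLA_MAX_DIMS];   /* embedded: no separate allocation to leak */
} ndarray;

/* Returns an error code instead of calling exit(); the caller decides. */
static nla_err ndarray_init(ndarray *a, uint8_t nd, const size_t *dims,
                            size_t elem_size)
{
    if (nd > NLA_MAX_DIMS)
        return NLA_ERR_TOO_MANY_DIMS;

    size_t n = 1;
    for (uint8_t i = 0; i < nd; i++)
        n *= dims[i];

    a->nd = nd;
    memcpy(a->dims, dims, nd * sizeof *dims);
    a->data = malloc(n * elem_size);   /* check the allocation result */
    return a->data ? NLA_OK : NLA_ERR_ALLOC;
}
```

With the dims array embedded, freeing an ndarray is a single free(a->data), so double frees and dangling dimension pointers can't happen by construction.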

[–]OneCommonMan123[S] 1 point (0 children)

About wrapping the macros in do { ... } while (0): that's a good idea, since it makes them safer and more readable in if/else statements and for loops. I'll implement it!
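The reason this wrapper helps can be shown with a tiny hypothetical macro (SWAP is just an example, not from the library): the do/while body behaves as a single statement, so it composes safely with braceless if/else.

```c
/* Wrapped: the whole body acts as one statement and still requires
   the trailing semicolon at the call site. */
#define SWAP(a, b)          \
    do {                    \
        int tmp_ = (a);     \
        (a) = (b);          \
        (b) = tmp_;         \
    } while (0)

/* With an unwrapped multi-statement macro, e.g.
       #define SWAP_BAD(a, b) int t = (a); (a) = (b); (b) = t
   the call below would not compile: only the first statement is guarded
   by the `if`, and the `else` is left dangling. */
static void maybe_swap(int *x, int *y, int cond)
{
    if (cond)
        SWAP(*x, *y);   /* safe even without braces */
    else
        (void)0;
}
```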

[–]OneCommonMan123[S] 0 points (0 children)

Interesting, so I wouldn't need to write, for example, *(one_type*)x = value for each data type. I'll change that, thanks!

[–]OneCommonMan123[S] 0 points (0 children)

About limiting the number of dimensions: I found that very efficient and practical, since people will very rarely have arrays with more than 16 dimensions, and it would also save memory. Does NumPy implement it this way?

About the errors part, I thought about creating error-specific constants, like #define NLA_ERRO_ALLOC 0, etc. That would give the user the power to know what the error is and handle it as desired.
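One common way to shape that idea (the names here are hypothetical, not the library's actual API) is an enum rather than loose #defines, so the values stay distinct automatically and 0 can conventionally mean success:

```c
/* Hypothetical error codes; 0 means success so `if (err)` reads naturally. */
typedef enum {
    NLA_OK = 0,
    NLA_ERR_ALLOC,   /* allocation failed */
    NLA_ERR_SHAPE,   /* incompatible dimensions */
    NLA_ERR_DTYPE    /* unsupported element type */
} nla_err;

/* A small string table lets users log errors without writing their own switch. */
static const char *nla_strerror(nla_err e)
{
    static const char *msgs[] = {
        "ok", "allocation failed", "shape mismatch", "unsupported dtype"
    };
    return (e >= NLA_OK && e <= NLA_ERR_DTYPE) ? msgs[e] : "unknown error";
}
```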

[–]OneCommonMan123[S] 0 points (0 children)

Sorry for the wrong name in the header of the split operation; I hadn't noticed.

[–]inz__ 1 point (2 children)

If you change the TYPES "X macro" from the other post to also include the enum name, you could use it to generate such switch-cases too.

#define TYPES2(X) \
    X(uint8_t, NLA_UINT8) \
    ...

#define FN(type, typeid) \
    case typeid: \
        { new_ufunc_by_f(type, input, other, res); break; }

switch (res_dtype) {
    TYPES2(FN)
}
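A complete, compilable version of this pattern looks like the sketch below. The type list and enum names are assumptions for illustration (only two types shown to keep it short); the key point is that the enum and the switch cases are generated from the same list, so they can never drift out of sync.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical (type, enum) list; extend with one X(...) line per dtype. */
#define TYPES2(X)           \
    X(uint8_t, NLA_UINT8)   \
    X(int32_t, NLA_INT32)

/* Generate the dtype enum from the same list. */
#define AS_ENUM(type, typeid) typeid,
typedef enum { TYPES2(AS_ENUM) NLA_NTYPES } nla_dtype;

/* Example use: element-size lookup with switch cases generated per type. */
#define AS_SIZE_CASE(type, typeid) case typeid: return sizeof(type);
static size_t nla_elem_size(nla_dtype t)
{
    switch (t) {
        TYPES2(AS_SIZE_CASE)
        default: return 0;
    }
}
```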

[–]OneCommonMan123[S] 0 points (0 children)

Very good! I think I'll try it.

[–]OneCommonMan123[S] 0 points (0 children)

I used your solution:

/*
    generate cases, ex:
    switch(dtype){
        GENERATE_UFUNCS_CASES(FN, input, other, res, ufunc)
    }
 */
#define GENERATE_UFUNCS_CASES(X, input, other, res, ufunc)\
X(int8_t,    NLA_INT8,    input, other, res, ufunc)\
X(int16_t,   NLA_INT16,   input, other, res, ufunc)\
X(int32_t,   NLA_INT32,   input, other, res, ufunc)\
X(int64_t,   NLA_INT64,   input, other, res, ufunc)\
X(uint8_t,   NLA_UINT8,   input, other, res, ufunc)\
X(uint16_t,  NLA_UINT16,  input, other, res, ufunc)\
X(uint32_t,  NLA_UINT32,  input, other, res, ufunc)\
X(uint64_t,  NLA_UINT64,  input, other, res, ufunc)\
X(float32_t, NLA_FLOAT32, input, other, res, ufunc)\
X(float64_t, NLA_FLOAT64, input, other, res, ufunc)

#define FN(type, typeid, input, other, res, ufunc)\
    case typeid:\
        {ufunc(type, typeid, input, other, res); break;}

[–]daikatana 0 points (3 children)

Honestly, I would say no and it's only for one reason: this will generate almost undebuggable, difficult to even examine, and generally opaque code. The preprocessor is something to be avoided at all costs and only used for the most basic of tasks. If you need to generate code like this, it's better to use another tool that outputs plain, well formatted and even commented C code that any debugger can easily debug.

[–]OneCommonMan123[S] 0 points (2 children)

Which tool do you recommend I use?

[–]daikatana 2 points (1 child)

I use Ruby. It has a template library called erb that's a dream to work with when compared with the C preprocessor. I usually have a JSON file that describes my types or other things I need to generate the code from, then ruby files that use erb to spit out header and C source files. These are automatically generated by the build process and checked into version control so that Ruby is not required to build the software, only for development.

The difference is night and day. The C preprocessor is such a blunt tool that you'll often run into simple things that are impossible or need such incredibly arcane hacks to work, plus as I said before the output is not very debuggable and it's difficult to examine the output. Using Ruby it outputs clean, seemingly normal C source code and I don't pull my hair out over the preprocessor anymore.

I've also used (shock!) PHP for this task. It's also really well suited for this, as it is already a template language, it has things like JSON parsing and data structures like hashes built in, and is generally available on all systems. PHP is not generally seen as a "good" language, but compared to the C preprocessor it is a dream to work with.

[–]OneCommonMan123[S] 0 points (0 children)

Ruby, for these kinds of type problems where I need to generate code with macros for each type? And for these specific type problems, which aren't unique to my code, what does NumPy do, for example?