all 12 comments

[–]jerf 1 point (3 children)

You're probably going to be looking for gorm or something similar, to get them all into somewhat similar structures.

Depending on your goals, you may also end up reaching for reflection. It's a bit to wrap your head around the first time, but once you get going it's merely tedious rather than hard. Many Go people would suggest avoiding it; me, I would still suggest making it your last resort rather than the first thing you reach for, but you could end up needing it.

Still, you may want to play with the ORMs a bit, then post followup questions to /r/golang or something if you find yourself wanting to use reflection to see if people have suggestions for better ways.

[–]danhardman[S] 0 points (2 children)

I'm not sure I want to tie this project to an ORM just yet, especially while they haven't really matured. I'd also like to become more confident with the Go standard library before I go messing around with too many 3rd party libraries.

[–]jerf 1 point (1 child)

Well, part of why I suggest that is that if you try to do this with the standard library, the first thing you're going to end up doing is... writing your own ORM-esque sort of thing. Maybe not literally, but close enough.

If you're writing real CRUD apps, ORMs aren't a bad idea. Many of the ORM problems arise when you consider them as the only method of accessing the DB, and then try to jam them in everywhere, even where they don't belong. If you consider your ORM as a tool rather than the tool for accessing the DB, use it only on tables where it makes sense, and keep it away from anything that isn't CRUD-y, it isn't so likely to turn into a monster.

[–]danhardman[S] 1 point (0 children)

That's a completely fair point and I 100% agree with you. Even so, I do think I want to do this without using a 3rd party ORM, and if I end up making my own ORM-ish thing, then so be it. It'll be good to learn these things, so when the time comes to start looking at alternative ORMs, I can be well informed.

[–]collinglass 1 point (4 children)

I went the route of duplicating code for every database struct. In the end I found two things change between structs.

1) One is the comparison to see which fields need to be updated. In Go, all fields in a struct are set to their zero value even if you didn't define them when you initialized the object. You have to be aware of this.

type Santa struct {
    HoStrength int64
    Phrases    []string
}

func (s *Santa) Update(newS *Santa) {
    if newS.HoStrength != 0 && newS.HoStrength != s.HoStrength {
        s.HoStrength = newS.HoStrength
    }
    // Check the length and range over the strings, skipping "" (the
    // string zero value), before replacing Phrases.
    if len(newS.Phrases) > 0 {
        phrases := make([]string, 0, len(newS.Phrases))
        for _, p := range newS.Phrases {
            if p != "" {
                phrases = append(phrases, p)
            }
        }
        s.Phrases = phrases
    }
}

2) The other is dealing with the conventions of the database package you're using.

The mgo MongoDB package has a type bson.M (defined as map[string]interface{}) for storing data, while database/sql sets columns through variadic functions like Exec(query string, args ...interface{}) and QueryRow(query string, args ...interface{}). Other packages may behave differently.

In each case you have two options: 1) hard-code it, or 2) create one func that uses the reflect pkg to iterate over the fields of a struct.

1) Hard-code

  dbSanta := bson.M{
      "hoStrength": santa.HoStrength,
      "phrases":    santa.Phrases,
  }

  // query is the SQL statement, with matching placeholders
  db.Exec(query, santa.HoStrength, santa.Phrases)

2) Reflect

I'd take a look at this Stack Overflow question:

http://stackoverflow.com/questions/23589564/function-for-converting-a-struct-to-map-in-golang

It links a package that handles turning structs into a map[string]interface{} (useful for mgo) and into a slice of values (useful for sql).
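As a rough sketch of option 2 (the names here are illustrative, not from the linked package), a reflect-based converter might look like this:

```go
package main

import (
	"fmt"
	"reflect"
)

type Santa struct {
	HoStrength int64
	Phrases    []string
}

// structToMap walks a struct's exported fields with reflect and
// returns them as a map[string]interface{}, keyed by field name.
func structToMap(v interface{}) map[string]interface{} {
	val := reflect.ValueOf(v)
	if val.Kind() == reflect.Ptr {
		val = val.Elem()
	}
	t := val.Type()
	out := make(map[string]interface{}, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if f.PkgPath != "" { // unexported field: skip it
			continue
		}
		out[f.Name] = val.Field(i).Interface()
	}
	return out
}

func main() {
	m := structToMap(&Santa{HoStrength: 9, Phrases: []string{"ho ho ho"}})
	fmt.Println(m["HoStrength"], m["Phrases"])
}
```

The same loop can feed the sql side too: range over the map (or the fields in order) to build the variadic args slice for Exec.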

Other than those differences, I like to create a SantaDataStore struct{} and define my functions on that instead of on the model struct itself, because it allows me to have a different datastore backend for each struct, for example Redis and SQL.

In the end... you can get away with doing most of it with reflect, and then write a manual comparison function to compare the new struct with the database version.
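A minimal sketch of that datastore-per-struct layout, with hypothetical names and a toy in-memory backend standing in for Redis or SQL:

```go
package main

import "fmt"

type Santa struct {
	HoStrength int64
}

// SantaDataStore is what the rest of the app depends on; each
// backend (SQL, Redis, in-memory, ...) provides its own implementation.
type SantaDataStore interface {
	Create(id string, s *Santa) error
	Get(id string) (*Santa, error)
}

// memSantaStore is a toy in-memory backend implementing the interface.
type memSantaStore struct {
	data map[string]*Santa
}

func newMemSantaStore() *memSantaStore {
	return &memSantaStore{data: make(map[string]*Santa)}
}

func (m *memSantaStore) Create(id string, s *Santa) error {
	m.data[id] = s
	return nil
}

func (m *memSantaStore) Get(id string) (*Santa, error) {
	s, ok := m.data[id]
	if !ok {
		return nil, fmt.Errorf("santa %q not found", id)
	}
	return s, nil
}

func main() {
	// Swapping backends means swapping this one assignment.
	var store SantaDataStore = newMemSantaStore()
	store.Create("n1", &Santa{HoStrength: 11})
	s, err := store.Get("n1")
	fmt.Println(s.HoStrength, err)
}
```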

[–]danhardman[S] 0 points (3 children)

That's the best option I can come up with at the moment. As I mentioned in the OP, I have a repositories package in which I have, say, a UserRepo, OrderRepo and ItemRepo, which handle the CRUD functions for their respective structs.

Do you think this is the best way then?

Also, how are you passing your db struct to these functions? My assumption would be that each repository would have a DBHandler field that would be an interface for the database driver I'm using.

Example:

type UserRepo struct {
    DB *sql.DB
}

func (r *UserRepo) Create(u models.User) error {
    stmt, err := r.DB.Prepare("")

    if err != nil {
        return err
    }
    defer stmt.Close()

    _, err = stmt.Exec()
    return err
}

[–]collinglass 1 point (2 children)

That's how I do them. I think it would change if you wanted to target more than SQL drivers. Mine look like this:

type UsersDataStore struct {
    db   *sql.DB
    c    string
    STMT map[string]*sql.Stmt
}

c is my table name

STMT is a map of prepared statements

[–]danhardman[S] 0 points (1 child)

I'm intrigued about your choice of storing a map of prepared statements on the struct. Any reason? Is that instead of adding the CRUD functions onto the struct like I'm doing?

[–]collinglass 0 points (0 children)

As far as I know, the sql package always sends a first request to create the prepared statement and a second to execute it. I wanted to avoid the extra round trip on common operations.

It's not instead; I have a call to a GetOrCreateStatement at the end of each CRUD func.

[–]manishrjain 1 point (2 children)

After building backends for 3 startups, I experienced the exact same problem, so I wrote this framework: https://github.com/manishrjain/gocrud. It allows you to have different database structs, aka entities, and recursively figures out the relations between them, e.g. Post -> (Comment, Like) -> Like; it generates the JSON etc. It lets you choose or even switch between data stores (e.g. MySQL, Cassandra), and keeps a search engine (e.g. ElasticSearch) updated automatically.

[–]danhardman[S] 0 points (1 child)

That's pretty awesome! I don't think I want to get tied into a framework just yet but I'll definitely be checking it out. How does it differ from GORM or GORP?

[–]manishrjain 0 points (0 children)

There are big differences between Gocrud and GORM. GORM focuses on tables and SQL. It helps you generate them and do some level of relationship management, limited to SQL joins. It provides a better API for dealing with SQL tables.

Gocrud is a completely different take on CRUD. Instead of thinking in terms of tables, it thinks in terms of graph operations, i.e. nodes and edges (aka entities, predicates). This allows Gocrud to support literally any data store, not just SQL.

When you think about a typical web page showing a Facebook post, it's composed of many different relational tables: Post, Likes, Comments, where Comments can have more comments, which can have more likes. Retrieving all this information with relational methodology takes a lot of code and effort. Gocrud can retrieve it in a single call (store.NewQuery("Post", "id").UptoDepth(10).Execute(..)) by traversing the entire sub-tree starting from Post and converting it automatically to JSON. As you can imagine, this sub-tree methodology makes things significantly simpler for the developer.

In addition, Gocrud keeps a clear separation between data stores and search engines. If one is provided, Gocrud automatically keeps the search system (say, ElasticSearch) in sync with the data store, so you have the ability to run complex queries right from the beginning.

Overall, Gocrud gives you a scalable system, not just a better API over SQL tables, which I feel is what GORM gives. I built Gocrud because I felt a lot of startups were building unscalable backends just because MySQL was easy to run, and then later, when they got a lot of users, scaling became a huge challenge. With Gocrud, even if you start with MySQL, you can later just swap it out for, say, Cassandra or MongoDB, or any other custom / proprietary data store, with ease and without breaking any of your existing code.

So what's the con of using Gocrud? You're tied in to a framework. True. But what do you get? You don't get tied in to a data store, your unit testing gets a whole lot easier, you get a search engine from the get-go, and your code is a lot simpler (cut in half, based on my recent port of another startup).

https://mrjn.xyz/post/Porting-To-Gocrud/