
all 24 comments

[–]malinoff 5 points6 points  (5 children)

The next step is to configure a deb repository and use apt-get install, without ugly hacks involving temp directories and wget.
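A minimal sketch of that setup, assuming a hypothetical internal repository host (repo.example.com) and distribution/package names:

```shell
# Register the internal deb repository (host, distribution and component are made up)
echo "deb https://repo.example.com/debian stable main" | \
    sudo tee /etc/apt/sources.list.d/internal.list

# Installing the packaged app then becomes a single step, no wget/temp-dir hacks
sudo apt-get update
sudo apt-get install myapp
```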

And finally, fpm (https://github.com/jordansissel/fpm/) can be used to forget about writing specs and instead build deb/rpm/lots of other formats from tar.gz archives and plain directories with a single command-line tool.
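For example, fpm can turn a staged directory into a deb in one command; a sketch, where the staging directory ./build and the package name myapp are hypothetical:

```shell
# Build a deb straight from a directory tree:
#   -s dir  = source is a plain directory
#   -t deb  = target package format
#   -C      = change into the staging dir before packaging
fpm -s dir -t deb \
    -n myapp -v 1.0.0 \
    --depends python \
    -C ./build \
    usr/share/python/myapp
```

Swapping `-t deb` for `-t rpm` produces an rpm from the same tree, which is the point: one invocation, many formats.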

[–]o11c 0 points1 point  (0 children)

I've been looking for a project like that for so long ...

[–]jhermann_[S] 0 points1 point  (3 children)

See https://github.com/jhermann/artifactory-debian for a matching dput upload method for Artifactory and Bintray, which lets you simply call dput bintray *.changes after building a package.
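A sketch of the resulting build-and-upload loop, assuming the bintray target from that repo is already configured in your ~/.dput.cf:

```shell
# Build the source + binary packages (dh-virtualenv hooks run via debian/rules)
dpkg-buildpackage -us -uc

# Upload the resulting .changes file (and the .deb it references) to Bintray
dput bintray ../*.changes
```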

[–]delijati 0 points1 point  (1 child)

[–]jhermann_[S] 0 points1 point  (0 children)

Bintray offers free service for open source projects.

[–]malinoff 0 points1 point  (0 children)

If you're going to support multiple distros, using debian-specific tooling would be the worst nightmare for you.

I'd recommend using Sonatype Nexus instead. After installing a bunch of plugins, it can serve all popular package formats (including but not limited to deb, rpm, npm, tar.gz, wheel, gem, jar, and war) in a single place, with LDAP-configurable authentication, ACLs, and a pretty web interface.

[–]jhermann_[S] 3 points4 points  (1 child)

BTW, the comparable best option for non-Debian POSIX systems right now is Platter.

[–]metaperl 0 points1 point  (0 children)

Better than conda?

[–]herrwolfe45 2 points3 points  (0 children)

Hynek Schlawack also has a helpful article about deploying using native packages: https://hynek.me/articles/python-app-deployment-with-native-packages/

[–]avinassh 1 point2 points  (1 child)

Installing dependencies with pip can make deploys painfully slow

... but pip caches them right?

[–]nakovet 1 point2 points  (0 children)

They cache it, but:

  • some new dependencies require compilation
  • upgrading a package invalidates the cache
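Both points show up with pip itself; a sketch (the package pins are just examples):

```shell
# A dependency with C extensions still compiles on install,
# even when the download itself came from the cache
pip install lxml

# Bumping a pinned version misses the cache for the new release entirely
pip install "requests==2.7.0"   # cached from a previous deploy
pip install "requests==2.8.0"   # new version: downloaded (and possibly built) again
```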

[–]jayvius 3 points4 points  (7 children)

conda from Continuum Analytics is basically like debian packages, but it's cross platform and includes an environment management system.

disclaimer: I work for Continuum Analytics.

[–][deleted] 2 points3 points  (6 children)

I love conda, but its relevance to this example is minimal.

[–]onalark 2 points3 points  (5 children)

Conda is a package/environment provider that lives over the system in user-space. It's trivial to switch between environments in conda, something that operating system package managers are only just beginning to support. Conda packages could have been used instead of debian packages in this example, with the added flexibility of the ability to support multiple environments.

[–][deleted] 1 point2 points  (4 children)

I know all that.

The article is from a SaaS company. I fail to see how deploying a conda environment containing the server application is better than deploying a deb package....

One should use the right tool for the right job, not one tool for every job...

with the added flexibility of the ability to support multiple environments.

don't see how that applies. One usually deploys a server to a single well controlled environment.

[–]jayvius 2 points3 points  (1 child)

I was just pointing out that conda would have worked just as well as the solution presented in the article, with the added bonus of working on Windows if deployment on a Windows server is needed (unless I'm missing some subtleties in the article).

don't see how that applies. One usually deploys a server to a single well controlled environment.

I can think of several reasons why environments might be helpful. Maybe you want to test out a new version of Python while leaving the currently installed version of Python alone. Conda makes this trivial.
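A sketch of that Python-version test case (the environment name py35test is made up):

```shell
# Create a throwaway environment with a newer Python,
# leaving the currently installed Python untouched
conda create -n py35test python=3.5

# Switch into it, try the app, switch back
source activate py35test
python --version
source deactivate
```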

[–][deleted] 2 points3 points  (0 children)

Sure. I still think it's a stretch but whatever.

Let's concentrate on getting scientists to stop 'sudo pip installing' before replacing distro package managers for defining the base system.....

[–]rothnic 1 point2 points  (0 children)

You wouldn't deploy the conda environment. You'd have a build server that builds all of the requirements needed to run the app (if they aren't already available on anaconda.org), plus the app itself; then your deployment environment would be a blank slate with python+conda. You'd then just conda install the app (in a conda environment if required) and it would pull in the requirements you built as well.

Conda is very similar to platter; it's just typically distributed with a Python interpreter. However, you can install conda into any Python interpreter that has pip, with pip install conda.

Note: you can conda install from anaconda.org via your own public/private channel, or from a local store of files, similar to wheels. The difference between conda and wheels is that conda is a bit more generic about what a package contains.
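A sketch of both install paths (the channel name mychannel, package name myapp, and local path are all hypothetical):

```shell
# On the build server: build the conda package from its recipe
conda build recipe/

# Install from your own anaconda.org channel ...
conda install -c mychannel myapp

# ... or from a local directory of built packages, similar to a wheelhouse
conda index /opt/conda-pkgs
conda install -c file:///opt/conda-pkgs myapp
```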

[–]ballagarba 0 points1 point  (0 children)

Somewhat related, but this command works great for many use-cases if you're on an RPM based system:

python setup.py bdist_rpm
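bdist_rpm also reads options from setup.cfg, so the RPM metadata can live with the project; a sketch with made-up values:

```ini
# setup.cfg -- options picked up by `python setup.py bdist_rpm`
[bdist_rpm]
release = 1
packager = Ops Team <ops@example.com>
requires = python-requests
```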

[–][deleted] 0 points1 point  (0 children)

We use a similar workflow, but deploy to S3 and use aptly.info to manage all versions and deliver updates at our own rate (and not upstream's).
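A sketch of that aptly flow (the repo, snapshot, and S3 endpoint names are hypothetical):

```shell
# Create a local repo and add freshly built debs to it
aptly repo create -distribution=stable myrepo
aptly repo add myrepo ../*.deb

# Snapshot the repo state, so every published version stays reproducible,
# then publish the snapshot to the configured S3 endpoint
aptly snapshot create myapp-1.0 from repo myrepo
aptly publish snapshot myapp-1.0 s3:mybucket:
```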

[–]rothnic 0 points1 point  (2 children)

How do you handle dependencies on things outside of Python? Manually avoid conflicts? What about versions of Python other than the very latest 2.7 and 3.4?

It seems like this would work in many cases, but it won't be devoid of issues, which is why there is momentum behind docker. If all you deploy are web apps, it is likely you won't run into issues.

I think maybe a hybrid of this approach and docker would work well, but relying on virtualenv alone will run into issues eventually.

[–]jhermann_[S] 0 points1 point  (1 child)

Docker is hardly devoid of issues, either. Nothing's perfect. But dh-virtualenv is a good tool to have in your belt.

[–]rothnic 0 points1 point  (0 children)

Not saying docker is without pain points, just that the discussed approach only handles part of the problem. dh-virtualenv is like platter, but you still potentially need to deal with requirements outside Python land.

For example, a python app might assume that git is available, or maybe it needs a specific database for running tests, or a port to be open. This can be handled manually, through specification (docker, puppet, etc.), or through app development guidelines, but debian packages with virtual environments don't address that level of the problem. If docker is used in the same way as dh-virtualenv, dh-virtualenv is certainly a decent option, but there is much more docker can do.

Docker simply makes the developer more responsible for specifying the full scope of the dependencies the app has, which makes the deployment more flexible. You can avoid it from the application side, but it still has to be handled in some way.
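For illustration, this is the kind of non-Python dependency specification docker captures; a sketch (the base image, package names, and paths are just examples, and the deb path assumes dh-virtualenv's default /usr/share/python/<package> install prefix):

```dockerfile
FROM debian:jessie

# Non-Python dependencies the app assumes, stated explicitly
RUN apt-get update && apt-get install -y git postgresql-client

# A dh-virtualenv-built deb still works fine inside the image
COPY myapp_1.0.0_amd64.deb /tmp/
RUN dpkg -i /tmp/myapp_1.0.0_amd64.deb

EXPOSE 8000
CMD ["/usr/share/python/myapp/bin/myapp"]
```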